diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzojhe" "b/data_all_eng_slimpj/shuffled/split2/finalzzojhe" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzojhe" @@ -0,0 +1,5 @@ +{"text":"\\section*{ Nomenclature}\n\\input{Introduction}\n\\input{SecondOrderDSMCTheory}\n\\input{CaseStudies2}\n\\input{Conclusion}\n\\small\n\\section*{ACKNOWLEDGMENT}\n This material is based upon the work supported by the National Science Foundation under Grant No. 1434273. Dr. Ken Butts from Toyota Motor Engineering $\\&$ Manufacturing North America is gratefully acknowledged for his technical comments during the course of this study. \\vspace{-0.3cm}\n\\bibliographystyle{unsrt}\n\n\\section{Case Study: Automotive Engine Control} \n\\label{sec:CS}\\vspace{-0.15cm}\nHere, application of the proposed method in Section~\\ref{sec:UncertaintyPrediction} is demonstrated for a physics-based spark ignition (SI) combustion engine model~\\cite{Shaw} during cold start. Our proposed algorithm fits the requirements of this automotive control problem well, as it contains complicated plant model dynamics prone to uncertainty in slowly-fluctuating environments, yet require uncertainty mitigation to achieve tracking of desired trajectory behavior\n\nThe engine model~\\cite{Shaw} is parameterized for a 2.4-liter, 4-cylinder, DOHC 16-valve Toyota 2AZ-FE engine. The engine rated power is 117kW $@$ 5600~RPM, and it has a rated torque of 220 Nm $@$ 4000~RPM. The experimental validation of different components of the engine model is available in~\\cite{Sanketi}. The nonlinear model has four states including the exhaust gas temperature~($T_{exh}$), fuel mass flow rate into the cylinders (${\\dot{m}_f}$), the engine speed~(${\\omega_e}$), and the mass of air inside the intake manifold ($m_{a}$). The control problem is defined to steer $T_{exh}$, ${\\omega_e}$, and air-fuel ratio ($AFR$) to their pre-defined desired values. A set of four SISO DSMCs is designed to achieve this objective. Four states of the model and corresponding dynamics and controllers will be discussed in the following sections. Details of the functions and constants in the engine model are found in the Appendix and~\\cite{Sanketi}. \n\n$\\bullet {\\textbf{~~Exhaust~Gas~Temperature~Controller:}}$~Discretized model for exhaust gas temperature ($T_{exh}$) is:\n\\vspace{-0.2cm}\n\\begin{gather}\n\\label{eq:Engine_discretized_1}\nT_{exh}(k+1)=(1-\\frac{T}{\\tau_e})T_{exh}(k)\\\\\n+\\frac{T}{\\tau_e}(7.5\\Delta(k)+600)AFI(k) \\nonumber\n\\end{gather}\nwhere $\\Delta(k)$ is the control input. The sliding surface for $T_{exh}$ controller is defined to be the error in tracking the desired exhaust gas temperature ($s_{1}=T_{exh}-{T_{exh,d}}$). The dynamics of the exhaust gas temperature ($f_{{T_{exh}}}$) with multiplicative unknown term ($\\alpha_{T_{exh}}$) is: \\vspace{-0.20cm}\n\\begin{gather}\n\\label{eq:Engine_discretized_Texh}\nf_{{T_{exh}}}=\\alpha_{T_{exh}}\\Big(\\frac{1}{\\tau_e}[600AFI-T_{exh}]\\Big)\n\\end{gather}\nThe exhaust gas time constant ($\\tau_{e}$) has a significant role in the exhaust gas temperature dynamics (Eq.~(\\ref{eq:Engine_discretized_Texh})). This means that any error in estimating the time constant ($\\tau_{e}$) directly affects the dynamics and causes deviation from the nominal model. Multiplicative uncertainty term ($\\alpha_{T_{exh}}$) is assumed to represent any error in estimating $\\tau_{e}$. 
\n\\section{Case Studies} \\label{sec:CS}\\vspace{-0.15cm}\nHere, application of the proposed method in Section~\\ref{sec:UncertaintyPrediction} is demonstrated for two case studies: one linear (a DC motor) and one nonlinear (an SI engine). Our proposed algorithm fits the requirements of these automotive control problems well, as they contain complicated plant model dynamics prone to uncertainty in slowly-fluctuating environments, yet require uncertainty mitigation to achieve tracking of desired trajectory behavior.\\vspace{-0.15cm}\n\\subsection{Linear Case Study: DC Motor Speed Control} \\label{CS_DCMotor} \\vspace{-0.10cm}\nDC motors are common automotive actuators for control applications that require rotary and translational motions. For speed regulation of a DC motor, the control input is the voltage ($V$) applied to the motor's armature and the output is the rotation speed (${\\theta}$) of the shaft. 
Assuming a constant magnetic field and a linear relationship between motor torque and armature current ($\\mathcal{I}$), by choosing the rotor speed and current as the state variables, the following discretized linear time-invariant state-space representation can be used to describe the dynamics of the DC motor~\\cite{DCMotor_ref}, in which $\\alpha_{pq}$ represents the unknown multiplicative term on the plant's parameters: \\vspace{-0.25cm}\n\\begin{subequations}\n\\begin{gather}\\label{eq:DC_motor_linear_DSMC_Uncertain}\n\\theta(k+1)=T\\left([-{\\alpha}_{11}\\frac{k_f}{J}]\\theta(k)\\right) \\\\\n+T\\left(\\frac{k_m}{J}{\\mathcal{I}}(k)+\\frac{1}{J} \\Gamma\\right) +\\theta(k) \\nonumber \\\\\n{\\mathcal{I}}(k+1)=T\\left([-\\alpha_{21}\\frac{k_b}{L}]\\theta(k)+[-\\alpha_{22}\\frac{R}{L}]{\\mathcal{I}}(k)\\right)\\\\\n+\\frac{T}{L}V(k)+{\\mathcal{I}}(k) \\nonumber\n\\end{gather}\n\\end{subequations}\nwhere $J$ is the rotor's moment of inertia, $R$ is the electrical resistance, $L$ is the electric inductance, $\\Gamma$ is the generated torque on the rotor, $k_f$ is the mechanical damping, $k_m$ is the motor torque constant, and $k_b$ is the electromotive force constant. The DC motor model constants are listed in\nthe Appendix. Three multiplicative unknown parameters ($\\alpha_{pq}$) are driven to their nominal values by solving the adaptation law in Eq.~(\\ref{eq:D2SMC_14}). A second order DSMC is designed to regulate the DC motor rotational speed with respect to the desired speed profile ($\\theta_d$). The first sliding surface is defined as the error in tracking the desired speed profile ($s_{1}=\\theta-\\theta_d$). Since there is no direct control input on the DC motor rotational speed, $\\mathcal{I}_d$ is defined as the synthetic control input for controlling the shaft speed. $\\mathcal{I}_d$ is used to define the second sliding surface ($s_{2}=\\mathcal{I}-\\mathcal{I}_d$), for which the physical control input is the voltage. The final synthetic ($\\mathcal{I}_d$) and physical ($V$) control inputs are calculated as follows: \\vspace{-0.25cm}\n\\begin{subequations} \\label{eq:DC_motor_adaptiveDSMC_modified}\n\\begin{gather}\n{\\mathcal{I}}_d(k)=\\frac{J}{k_m}\\left(\\frac{1}{T}\\left[-\\beta_1(\\theta(k)-\\theta_d(k))+\\theta_d(k+1)\\right.\\right.\\\\\n\\left.\\left.-\\theta(k)\\right]+[\\hat{\\alpha}_{11}\\frac{k_f}{J}\\theta(k)]-\\frac{1}{J}\\Gamma\\right) \\nonumber \\\\\nV(k)=L\\left(\\frac{1}{T}\\left[-\\beta_2({\\mathcal{I}}(k)-{\\mathcal{I}}_d(k))+{\\mathcal{I}}_d(k+1)\n\\right.\\right.\\\\\n\\left.\\left.-{\\mathcal{I}}(k)\\right] \n+[\\hat{\\alpha}_{21}\\frac{k_b}{L}]\\theta(k)+\\hat{\\alpha}_{22}\\frac{R}{L}{\\mathcal{I}}(k)\\right) \\nonumber\n\\end{gather}\n\\end{subequations}\nSimulations are done in MATLAB Simulink$^{\\textregistered}$, which allows for testing the controller in a model-in-the-loop (MIL) platform for different sampling rates. In the absence of model uncertainties ($\\alpha_{pq}=1$), Fig.~\\ref{fig:DC_2DSMC_SamplingComparison} shows the comparison between the first~\\cite{Pan_Discrete} and second order DSMCs for different sampling rates. Although at shorter sampling times (e.g., $200~ms$) both controllers show similar performance, upon increasing the sampling time to $800~ms$, the higher robustness of the second order DSMC in comparison with the first order controller is revealed; a minimal simulation sketch of this adaptive loop is given below. 
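\nThe following Python sketch (a reading aid under stated assumptions, not the authors' MATLAB implementation) iterates the uncertain plant model of Eq.~(\\ref{eq:DC_motor_linear_DSMC_Uncertain}), the control laws of Eq.~(\\ref{eq:DC_motor_adaptiveDSMC_modified}), and the adaptation law of Eq.~(\\ref{eq:D2SMC_14}), the latter rewritten for $\\hat{\\alpha}_{pq}$ and applied term-by-term; all numerical values are placeholders that must be tuned for a stable closed loop, and ${\\mathcal{I}}_d(k+1)$ is approximated by ${\\mathcal{I}}_d(k)$ for simplicity.\n\\begin{verbatim}\nimport numpy as np\n\n# Placeholder constants (the actual values are listed in the Appendix).\nJ, R, L_ind = 0.01, 1.0, 0.5       # inertia, resistance, inductance\nkf, km, kb = 0.1, 0.01, 0.01       # damping, torque, back-EMF constants\nGamma, T = 0.0, 0.2                # rotor torque, sampling time [s]\nbeta1, beta2, rho = 0.5, 0.5, 1.0  # sliding gains (0<beta<1), adapt. gain\nalpha_true = np.array([1.25, 1.25, 1.25])  # unknown multiplicative terms\nalpha_hat = np.ones(3)                     # estimates, initialized at 1\n\ntheta, I = 0.0, 0.0\nfor k in range(500):\n    theta_d = 1.0                  # desired speed profile (constant here)\n    s1 = theta - theta_d\n    # synthetic control input I_d (first line of the control law)\n    I_d = (J \/ km) * ((-beta1 * s1 + theta_d - theta) \/ T\n                      + alpha_hat[0] * kf * theta \/ J - Gamma \/ J)\n    s2 = I - I_d\n    # physical control input V (second line of the control law)\n    V = L_ind * ((-beta2 * s2 + I_d - I) \/ T\n                 + alpha_hat[1] * kb * theta \/ L_ind\n                 + alpha_hat[2] * R * I \/ L_ind)\n    # adaptation: alpha_hat(k+1) = alpha_hat(k) + T*s*f\/rho per term\n    alpha_hat[0] += T * s1 * (-kf * theta \/ J) \/ rho\n    alpha_hat[1] += T * s2 * (-kb * theta \/ L_ind) \/ rho\n    alpha_hat[2] += T * s2 * (-R * I \/ L_ind) \/ rho\n    # plant update with the true (unknown) multiplicative terms\n    theta_next = theta + T * (-alpha_true[0] * kf * theta \/ J\n                              + km * I \/ J + Gamma \/ J)\n    I_next = I + T * (-alpha_true[1] * kb * theta \/ L_ind\n                      - alpha_true[2] * R * I \/ L_ind + V \/ L_ind)\n    theta, I = theta_next, I_next\n\\end{verbatim}\n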
The comparison results show that the second order DSMC improves the tracking errors on average by 69\\% over the tested sampling times, compared to the first order DSMC.\\vspace{-0.35cm}\n\\begin{figure}[h!]\n\\begin{center}\n\\includegraphics[angle=0,width= \\columnwidth]{DC_2DSMC_SamplingComparison.eps} \\vspace{-0.85cm}\n\\caption{\\label{fig:DC_2DSMC_SamplingComparison}Comparison between the first and second order DSMCs in tracking the desired speed trajectory of the DC motor for different sampling rates. (a) First order DSMC, (b) Second order DSMC. Values inside the parentheses show the resulting improvement ($\\downarrow$) from the second order DSMC compared to the first order DSMC.} \\vspace{-0.55cm}\n\\end{center}\n\\end{figure}\n\nThe convergence results of the unknown parameters are shown in Fig.~\\ref{fig:DC_2DSMC_ParamConv}. This is done by simultaneously solving Eq.~(\\ref{eq:D2SMC_14}) and Eq.~(\\ref{eq:DC_motor_adaptiveDSMC_modified}). The performances of the first~\\cite{Amini_DSCC2016} and second order adaptive DSMCs in tracking the desired speed profile are shown in Fig.~\\ref{fig:DC_2DSMC_SpeedTracking} for a $200~ms$ sampling time and up to 50\\% uncertainty on the plant's parameters. As can be observed, the second order DSMC significantly improves the tracking performance, by 60\\%, compared to the first order adaptive DSMC.\\vspace{-0.45cm}\n\\begin{figure}[h!]\n\\begin{center}\n\\includegraphics[angle=0,width= \\columnwidth]{DC_2DSMC_ParamConv.eps} \\vspace{-0.85cm}\n\\caption{\\label{fig:DC_2DSMC_ParamConv}Convergence results of the unknown multiplicative ($\\alpha$) terms in the DC motor model (sampling time=200 ms).} \\vspace{-0.75cm}\n\\end{center}\n\\end{figure}\n\\begin{figure}[h!]\n\\begin{center}\n\\includegraphics[angle=0,width= \\columnwidth]{DC_2DSMC_SpeedTracking.eps} \\vspace{-0.85cm}\n\\caption{\\label{fig:DC_2DSMC_SpeedTracking}DC motor speed control using the first and second order \\textit{adaptive} DSMCs (sampling time=200 ms).} \\vspace{-1cm}\n\\end{center}\n\\end{figure}\n\\subsection{Nonlinear Case Study: Automotive Engine Control}\\label{subsec:Engine_Control}\\vspace{-0.10cm}\nIn this section, the performance of the proposed adaptive second order DSMC is studied for a physics-based spark ignition (SI) combustion engine model~\\cite{Shaw} during cold start. The engine model~\\cite{Shaw} is parameterized for a 2.4-liter, 4-cylinder, DOHC 16-valve Toyota 2AZ-FE engine. The engine rated power is 117~kW $@$ 5600~RPM, and it has a rated torque of 220~Nm $@$ 4000~RPM. The experimental validation of different components of the engine model is available in~\\cite{Sanketi}. The nonlinear model has four states: the exhaust gas temperature~($T_{exh}$), the fuel mass flow rate into the cylinders (${\\dot{m}_f}$), the engine speed~(${\\omega_e}$), and the mass of air inside the intake manifold ($m_{a}$). The control problem is defined to steer $T_{exh}$, ${\\omega_e}$, and the air-fuel ratio ($AFR$) to their pre-defined desired values. A set of four SISO DSMCs is designed to achieve this objective. The four states of the model and the corresponding dynamics and controllers are discussed in the following subsections; the resulting discrete-time plant model is also collected in a compact sketch below. Details of the functions and constants in the engine model are found in the Appendix and~\\cite{Sanketi}. 
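\nFor orientation, the four discretized state equations derived in the following subsections can be collected in a single one-step plant update, sketched here in Python (a reading aid under stated assumptions, not the authors' code); \\texttt{tau\\_e}, \\texttt{tau\\_f}, \\texttt{J}, \\texttt{k1}, the $AFI$ signal, and the volumetric efficiency map $\\eta_{vol}$ are placeholders whose actual forms are given in the Appendix and~\\cite{Sanketi}.\n\\begin{verbatim}\ndef engine_step(x, u, T, p):\n    # One Euler step of the four-state engine model.\n    # x = (T_exh, mdot_f, omega_e, m_a); u = (Delta, mdot_fc, mdot_ai);\n    # p is a dict of model parameters, with p['eta_vol'] a placeholder\n    # callable standing in for the volumetric efficiency map.\n    T_exh, mdot_f, omega_e, m_a = x\n    Delta, mdot_fc, mdot_ai = u\n    AFI = p['AFI']                                 # assumed given signal\n    T_E = 30000.0 * m_a - (0.4 * omega_e + 100.0)  # engine torque\n    mdot_ao = p['k1'] * p['eta_vol'](m_a, omega_e) * m_a * omega_e\n    return (\n        (1.0 - T \/ p['tau_e']) * T_exh\n        + (T \/ p['tau_e']) * (7.5 * Delta + 600.0) * AFI,  # T_exh(k+1)\n        mdot_f + (T \/ p['tau_f']) * (mdot_fc - mdot_f),    # mdot_f(k+1)\n        omega_e + (T \/ p['J']) * T_E,                      # omega_e(k+1)\n        m_a + T * (mdot_ai - mdot_ao),                     # m_a(k+1)\n    )\n\\end{verbatim}\n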
\n\n$\\bullet {\\textbf{~~Exhaust~Gas~Temperature~Controller:}}$~The discretized model for the exhaust gas temperature ($T_{exh}$) is:\n\\vspace{-0.2cm}\n\\begin{gather}\n\\label{eq:Engine_discretized_1}\nT_{exh}(k+1)=(1-\\frac{T}{\\tau_e})T_{exh}(k)\\\\\n+\\frac{T}{\\tau_e}(7.5\\Delta(k)+600)AFI(k) \\nonumber\n\\end{gather}\nwhere $\\Delta(k)$ is the control input. The sliding surface for the $T_{exh}$ controller is defined to be the error in tracking the desired exhaust gas temperature ($s_{1}=T_{exh}-{T_{exh,d}}$). The exhaust gas temperature dynamics ($f_{{T_{exh}}}$) with the multiplicative unknown term ($\\alpha_{T_{exh}}$) is: \\vspace{-0.20cm}\n\\begin{gather}\n\\label{eq:Engine_discretized_Texh}\nf_{{T_{exh}}}=\\alpha_{T_{exh}}\\Big(\\frac{1}{\\tau_e}[600AFI-T_{exh}]\\Big)\n\\end{gather}\nThe exhaust gas time constant ($\\tau_{e}$) has a significant role in the exhaust gas temperature dynamics (Eq.~(\\ref{eq:Engine_discretized_Texh})). This means that any error in estimating the time constant ($\\tau_{e}$) directly affects the dynamics and causes deviation from the nominal model. The multiplicative uncertainty term ($\\alpha_{T_{exh}}$) is assumed to represent any error in estimating $\\tau_{e}$. The error in the modeled $T_{exh}$ dynamics is removed by using the following adaptation law with respect to Eq.~(\\ref{eq:D2SMC_14}): \\vspace{-0.15cm}\n\\begin{gather}\n\\label{eq:adaptive_Texh}\n\\hat{\\alpha}_{T_{exh}}(k+1)=\\hat{\\alpha}_{T_{exh}}(k)+\\frac{T(s_1(k))}{\\tau_e \\rho_{\\alpha_1}}(600AFI-T_{exh}(k))\n\\end{gather}\nBy incorporating Eq.~(\\ref{eq:Engine_discretized_1}) and $\\hat{\\alpha}_{T_{exh}}$ from Eq.~(\\ref{eq:adaptive_Texh}) into Eq.~(\\ref{eq:D2SMC_6}), the second order adaptive DSMC for the exhaust gas temperature becomes: \\vspace{-0.25cm}\n\\begin{gather}\\label{eq:Engine_DSMC_Final_1}\n\\Delta(k)=\\frac{\\tau_e}{7.5\\,AFI\\,T}[-\\hat{\\alpha}_{T_{exh}}(k)\\frac{T}{\\tau_e}(600\\,AFI\\\\\n-T_{exh}(k))-(\\beta_1+1)s_1(k)+T_{exh,d}(k+1)-T_{exh,d}(k)] \\nonumber\n\\end{gather} \\vspace{-0.75cm}\n\n$\\bullet { \\textbf{~~Fuel~Flow~Rate~Controller:}}$~The discretized difference equation for the fuel flow rate is: \\vspace{-0.2cm}\n\\begin{gather}\n\\label{eq:Engine_discretized_2}\n\\dot{m}_f(k+1)=\\dot{m}_f(k)+\\frac{T}{\\tau_f}[\\dot{m}_{fc}(k)-\\dot{m}_f(k)]\n\\end{gather}\nThe fuel flow dynamics ($f_{\\dot{m}_f}$) with the multiplicative uncertainty term ($\\alpha_{\\dot{m}_f}$) is as follows: \\vspace{-0.25cm}\n\\begin{gather}\n\\label{eq:Engine_discretized_mdotf}\nf_{\\dot{m}_f}=-\\alpha_{\\dot{m}_f}\\Big(\\frac{1}{\\tau_f}\\dot{m}_f(k)\\Big) \n\\end{gather}\nThe sliding variable for the fuel flow controller is defined to be the error in tracking the desired fuel mass flow ($s_{2}={\\dot{m}_{f}}-{\\dot{m}_{f,d}}$). In a similar manner to the $T_{exh}$ dynamics, the fuel evaporation time constant $\\tau_{f}$ dictates the dynamics of the fuel flow into the cylinder. Consequently, any error in estimating $\\tau_{f}$ leads to a considerable deviation from the nominal model. $\\alpha_{\\dot{m}_f}$ is introduced to the fuel flow dynamics to represent the uncertainty in estimating $\\tau_{f}$. The adaptation law for $\\alpha_{\\dot{m}_f}$ becomes: \n\\vspace{-0.4cm}\n\\begin{gather}\n\\label{eq:adaptive_mdotf}\n\\hat{\\alpha}_{\\dot{m}_f}(k+1)=\\hat{\\alpha}_{\\dot{m}_f}(k)-\\frac{T(s_2(k))}{\\tau_f \\rho_{\\alpha_2}}\\dot{m}_f(k)\n\\end{gather}\nwhere $\\dot{m}_{f,d}$ is calculated according to the desired AFR. 
The adaptive control law for $\\dot{m}_{fc}$ is: \\vspace{-0.15cm}\n\\begin{gather}\\label{eq:Engine_DSMC_Final_2}\n\\dot{m}_{fc}(k)=\\frac{\\tau_f}{T}[\\hat{\\alpha}_{\\dot{m}_f}(k)\\frac{T}{\\tau_f}\\dot{m}_f(k) \\\\-(\\beta_2+1)s_2(k)+\\dot{m}_{f,d}(k+1)-\\dot{m}_{f,d}(k)] \\nonumber\n\\end{gather} \\vspace{-0.75cm}\n\n$\\bullet { \\textbf{~~Engine~Speed~Controller:}}$~The rotational dynamics of the engine are described by the following difference equation:\\vspace{-0.4cm}\n\\begin{gather}\n\\label{eq:Engine_discretized_3}\n\\omega_e(k+1)=\\omega_e(k)+\\frac{T}{J}T_E(k) \n\\end{gather}\nwhere $T_E(k)$ is the engine torque, given by $T_E(k)=30000\\,m_a(k)-(0.4\\,\\omega_e(k)+100)$. There is no direct control input for modulating the engine speed; therefore, $m_a$ is considered as the synthetic control input. The $m_a$ calculated from the engine speed controller is used as the desired trajectory in the intake air mass flow rate controller. The engine speed dynamics $f_{\\omega_e}$ with the multiplicative uncertainty ($\\alpha_{\\omega_e}$) is as follows: \\vspace{-0.25cm}\n\\begin{gather}\n\\label{eq:Engine_discretized_we}\nf_{\\omega_e}=-\\alpha_{\\omega_e}\\Big(\\frac{1}{J}T_{loss}\\Big)\n\\end{gather} \nwhere $T_{loss}=0.4\\omega_e+100$. $T_{loss}$ represents the torque losses on the crankshaft. Thus, the multiplicative uncertainty $\\alpha_{\\omega_e}$ compensates for any error in the estimated torque loss. The sliding variable for the engine speed controller is defined to be $s_3=\\omega_e-{\\omega_{e,d}}$. $\\alpha_{\\omega_e}$ is driven to ``1'' using the following adaptation law: \\vspace{-0.3cm}\n\\begin{gather}\n\\label{eq:adaptive_we}\n\\hat{\\alpha}_{\\omega_e}(k+1)=\\hat{\\alpha}_{\\omega_e}(k)-\\frac{T(s_3(k))}{J\\rho_{\\alpha_3}}(0.4\\omega_e(k)+100)\n\\end{gather}\nFinally, the desired synthetic control input ($m_{a,d}$) is:\\vspace{-0.3cm}\n\\begin{gather}\\label{eq:Engine_DSMC_Final_3}\nm_{a,d}(k)=\\frac{J}{30,000\\,T}[\\hat{\\alpha}_{\\omega_e}(k)\\frac{T}{J}(100+0.4\\omega_e(k))\\\\\\nonumber-(\\beta_3+1)s_3(k) \n+\\omega_{e,d}(k+1)-\\omega_{e,d}(k)]\n\\end{gather}\n\n$\\bullet { \\textbf{~~Air~Mass~Flow~Controller:}}$\nThe following state difference equation describes the air mass flow behavior in discrete time: \\vspace{-0.2cm}\n\\begin{gather}\n\\label{eq:Engine_discretized_4}\nm_a(k+1)=m_a(k)+T[\\dot{m}_{ai}(k)-\\dot{m}_{ao}(k)]\n\\end{gather}\nThe $m_{a,d}$ calculated from Eq.~(\\ref{eq:Engine_DSMC_Final_3}) is used as the desired trajectory to obtain $\\dot{m}_{ai}$ as the control input of the $m_a$ controller. The last sliding surface for the air mass flow controller is defined to be $s_4=m_a-{m_{a,d}}$. The intake air manifold mass dynamics with the unknown term ($\\alpha_{m_a}$) is: \\vspace{-0.25cm}\n\\begin{gather}\n\\label{eq:Engine_discretized_ma}\nf_{m_a}=-\\alpha_{m_a}\\dot{m}_{ao}(k)\n\\end{gather} \\vspace{-0.35cm}\nwhere the air mass flow into the cylinder is determined by~\\cite{Shaw}: \\vspace{-0.2cm}\n\\begin{gather}\n\\label{eq:Engine_discretized_ma2}\n\\dot{m}_{ao}=k_1\\eta_{vol}m_a\\omega_e \n\\end{gather}\nHere, $\\eta_{vol}$ is the volumetric efficiency. As can be seen from Eqs.~(\\ref{eq:Engine_discretized_ma}) and~(\\ref{eq:Engine_discretized_ma2}), the multiplicative uncertainty term in the intake air manifold dynamics ($\\alpha_{m_a}$) represents the uncertainty in $\\dot{m}_{ao}$ that is extracted from the $\\eta_{vol}$ map. 
$\\alpha_{m_a}$ is updated using the following adaptation law: \\vspace{-0.2cm}\n\\begin{gather}\n\\label{eq:adaptive_ma}\n\\hat{\\alpha}_{m_a}(k+1)=\\hat{\\alpha}_{m_a}(k)-\\frac{T(s_4(k))}{\\rho_{\\alpha_4}}\\dot{m}_{ao}\n\\end{gather}\nFinally, the controller input is: \\vspace{-0.2cm}\n\\begin{gather}\\label{eq:Engine_DSMC_Final_4}\n\\dot{m}_{ai}(k)=\\frac{1}{T}[\\hat{\\alpha}_{m_a}(k)\\dot{m}_{ao}(k)T-(\\beta_4+1)s_4(k)\\\\\n+m_{a,d}(k+1)-m_{a,d}(k)] \\nonumber\n\\end{gather}\nIn the absence of model uncertainties ($\\alpha_{T_{exh}}=\\alpha_{\\dot{m}_f}=\\alpha_{\\omega_e}=\\alpha_{{m}_a}=1$), Figs.~\\ref{fig:Engine_2DSMC_SamplingComparison_10ms} and~\\ref{fig:Engine_2DSMC_SamplingComparison_40ms} show the results of tracking the desired $AFR$, $T_{exh}$, and engine speed trajectories, using the first and second order DSMCs for sampling times of $10~ms$ and $40~ms$, respectively. The mean tracking errors for both controllers are listed in Table~\\ref{table:tracking_Results}. It can be observed from Fig.~\\ref{fig:Engine_2DSMC_SamplingComparison_10ms} and Table~\\ref{table:tracking_Results} that when the signals at the controller I\\/O are sampled every $10~ms$, both the first and second order DSMCs show acceptable tracking performance, while the tracking errors of the second order controller are on average 50\\% smaller. As long as the criterion of Shannon's sampling theorem, which states that the sampling frequency must be at least twice the maximum frequency of the measured analog signal, is satisfied, increasing the sampling time helps to reduce the computation cost. Upon increasing the sampling time from $10~ms$ to $40~ms$, however, the first order DSMC performance degrades significantly. On the other hand, despite the increase in the sampling time, the second order DSMC still presents smooth and accurate tracking results. By comparing the first and second order DSMC results at $T=40~ms$, it can be concluded that the proposed second order DSMC outperforms the first order controller by up to 85\\% in terms of the mean tracking errors. \n\n\\begin{figure}[h!]\n\\begin{center}\n\\includegraphics[angle=0,width= \\columnwidth]{Engine_2DSMC_SamplingComparison_10ms.eps} \\vspace{-0.75cm}\n\\caption{\\label{fig:Engine_2DSMC_SamplingComparison_10ms}Engine tracking results by the first and second order DSMCs with $T=10~ms$.} \\vspace{-0.55cm}\n\\end{center}\n\\end{figure}\n\\begin{figure}[h!]\n\\begin{center}\n\\includegraphics[angle=0,width= \\columnwidth]{Engine_2DSMC_SamplingComparison_40ms.eps} \\vspace{-0.75cm}\n\\caption{\\label{fig:Engine_2DSMC_SamplingComparison_40ms}Engine tracking results by the first and second order DSMCs with $T=40~ms$.} \\vspace{-0.40cm}\n\\end{center}\n\\end{figure} \n\\begin{table} [htbp!]\n\\small\n\\begin{center}\n\\caption{Mean ($\\bar{e}$) of Tracking Errors. 
Values Inside the Parentheses Show the Resulting Improvement from the Second Order DSMC Compared to the First Order DSMC.} \\linespread{1.15}\n\\label{table:tracking_Results}\\vspace{-0.15cm}\n\\begin{tabular}{lccccc}\n \\hline\\hline\n\\multicolumn{1}{c}{} & \\multicolumn{2}{c}{$\\bar{e}~(T=10~ms)$} & & \\multicolumn{2}{c}{$\\bar{e}~(T=40~ms)$} \\\\\n\\cline{2-3} \\cline{5-6}\n\\textbf{}& {$1^{st}$-Order} & {$2^{nd}$-Order} & & {$1^{st}$-Order} & {$2^{nd}$-Order} \\\\\n\\textbf{}& {DSMC} & {DSMC} & & {DSMC} & {DSMC} \\\\\n\\textbf{} & {\\textcolor{blue}{Reference}}& \\textbf{} & & {\\textcolor{blue}{Reference}} & \\textbf{} \\\\ \\hline\nAFR& 0.028 & 0.010 & & 0.126 & 0.019 \\\\ \\vspace{0.05cm}\n[-] & & \\textcolor{blue}{(-64\\%)} & & & \\textcolor{blue}{(-84.9\\%)} \\\\ \\hline \\vspace{0.05cm} \n$T_{exh}$ & 0.2 & 0.1 & & 1.8 & 0.2 \\\\ \\vspace{0.05cm}\n[$^o$C] & & \\textcolor{blue}{(-50\\%)} & & & \\textcolor{blue}{(-88.9\\%)}\\\\ \\hline \\vspace{0.05cm} \n$N$ & 0.1 & 0.06 & & 1.9 & 0.3\\\\ \\vspace{0.05cm}\n [RPM] & & \\textcolor{blue}{(-40\\%)} & & & \\textcolor{blue}{(-84.2\\%)} \\\\ \n\\hline\\hline \n\\end{tabular} \\vspace{-0.5cm}\n\\end{center}\n\\end{table}\n\\linespread{1} \n\nThe effect of the unknown multiplicative terms (up to 25\\%) on the engine plant's dynamics ($f$) is shown in Fig.~\\ref{fig:Engine_2DSMC_DynamicImpact_40ms}. The uncertainty terms in the model introduce a permanent error in the estimated dynamics compared to the nominal model. If these errors are not removed in the early seconds of the controller operation, the tracking performance will be affected adversely. Upon activation of the adaptation mechanism, as can be observed from Fig.~\\ref{fig:Engine_2DSMC_DynamicImpact_40ms}, the model with error is steered towards the nominal model in less than 2~$sec$. Consequently, the errors in the model are removed. Fig.~\\ref{fig:Engine_2DSMC_ParamConv_40ms} shows the estimation results of the unknown multiplicative uncertainty terms ($\\hat{\\alpha}$) against the actual (nominal) values ($\\alpha$).\n\n\\begin{figure}[h!]\n\\begin{center}\n\\includegraphics[angle=0,width= \\columnwidth]{Engine_2DSMC_DynamicImpact_40ms.eps} \\vspace{-0.75cm}\n\\caption{\\label{fig:Engine_2DSMC_DynamicImpact_40ms}The effect of the model uncertainty terms on the engine dynamics when using the second order DSMC and how the adaptation mechanism drives the model with error to its nominal value: (a) $T_{exh}$, (b) $\\dot{m}_f$, (c) $\\omega_e$, and (d) $m_a$ ($T=40~ms$).} \\vspace{-0.55cm}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}[h!]\n\\begin{center}\n\\includegraphics[angle=0,width= \\columnwidth]{Engine_2DSMC_ParamConv_40ms.eps} \\vspace{-0.75cm}\n\\caption{\\label{fig:Engine_2DSMC_ParamConv_40ms}Estimation of unknown multiplicative parameters in adaptive DSMC ($T=40~ms$).} \\vspace{-0.55cm}\n\\end{center}\n\\end{figure}\n\nFig.~\\ref{fig:Engine_2DSMC_AdaptiveTracking_40ms} shows the comparison between the tracking performances of the non-adaptive and adaptive second order DSMCs. As expected, the non-adaptive DSMC fails to track the desired trajectories, which illustrates the importance of handling the model uncertainties in the body of the DSMC. 
On the other hand, once the adaptation algorithm is enabled and the convergence period of the unknown parameters is over, the adaptive DSMC tracks all the desired trajectories smoothly with minimum error under a $40~ms$ sampling time.\n\\vspace{-0.4cm}\n\n\\begin{figure}[h!]\n\\begin{center}\n\\includegraphics[angle=0,width= \\columnwidth]{Engine_2DSMC_AdaptiveTracking_40ms.eps} \\vspace{-0.75cm}\n\\caption{\\label{fig:Engine_2DSMC_AdaptiveTracking_40ms}Comparison between adaptive and non-adaptive second order DSMCs for the engine model with uncertainties: (a) $AFR$, (b) $T_{exh}$, and (c) $N$~($T=40~ms$).} \\vspace{-0.75cm}\n\\end{center}\n\\end{figure}\n\n\n\\section{Summary and Conclusions} \\label{sec:Conclusion} \\vspace{-0.1cm}\nA new adaptive second order discrete sliding mode controller (DSMC) formulation for nonlinear uncertain systems was introduced in this paper. Based on the discrete Lyapunov stability theorem, an adaptation law was determined for removing generic unknown multiplicative uncertainty terms within the nonlinear difference equation of the plant's model. The proposed controller was examined for a DC motor speed regulation problem and for a spark ignition combustion engine control problem, in the latter case to track desired air-fuel ratio, engine speed, and exhaust gas temperature trajectories. Compared to the first order DSMC, the second order DSMC shows significantly better robustness against data sampling imprecisions, and can provide up to 80\\% improvement in terms of the tracking errors. The better performance of the second order DSMC can be traced to driving the higher order derivatives (difference functions) of the sliding variable to zero. In the presence of the model uncertainties, it was shown that the adaptation mechanism is able to remove the errors in the modeled dynamics quickly and steer the dynamics towards their nominal values. Increasing the sampling time raises the required time for the adaptation law to compensate for the uncertainties in the models. This required time doubled when the sampling time was increased from $10~ms$ to $40~ms$ in the engine tracking control problem, though the adaptation mechanism could still remove the model uncertainties in less than two seconds.\\vspace{-0.25cm}\n\n\\section{INTRODUCTION} \\label{Sec:Intro} \\vspace{-0.2cm}\nThe key feature of sliding mode control (SMC) is converting a high dimensional tracking control problem into a lower dimensional stabilization control problem~\\cite{Slotine}. SMC is well known for its robust characteristics against model uncertainty\\/mismatch and external disturbances, while requiring low computational effort. However, there are challenging issues that arise during digital implementation of SMC, among which the chattering phenomenon has been widely reported in the literature~\\cite{utkin2013sliding}. One effective approach for reducing the oscillation due to chattering is the use of higher order SMC for continuous-time systems. This approach was first introduced in the 1980s~\\cite{nollet2008observer}. The basic idea of the higher order SMC is to drive all the higher order derivatives of the sliding variable to the sliding manifold, in addition to the zero convergence condition of the sliding variable itself. In this approach, the chattering caused by the discontinuity is transferred to the higher derivatives of the sliding variable. 
The final control input is calculated by integrating the $(r-1)^{th}$ derivative of the input $r-1$ times, and the result is a continuous chattering-free signal for an $r^{th}$-order SMC~\\cite{nollet2008observer}. Higher order SMC leads to fewer oscillations; however, it adds complexity to the calculations.\n\nIn addition to the high frequency oscillation issue, it is shown in the literature~\\cite{Amini_DSC,AminiSAE2016,Hansen_DSCC} that upon digital implementation of the baseline SMC software, the controller performance degrades significantly from its expected behavior. The gap between the designed and implemented SMCs is mostly due to data sampling errors introduced by the analog-to-digital converter (ADC) unit at the controller input\\/output (I\\/O).\nDiscrete sliding mode control (DSMC) was shown to be an effective approach to mitigate the ADC implementation imprecisions and to enhance the controller robustness against ADC uncertainties~\\cite{Pan_Discrete,Amini_ACC2016}. However, the chattering phenomenon due to the discontinuous nature of the discrete controller is more problematic for the DSMC and can even lead to instability, since the\nsampling rate is far from infinite~\\cite{sira1991non}. \n\nSimilar to continuous-time SMC, it was shown in~\\cite{mihoub2009real} that a second order DSMC exhibits fewer oscillations than a first order DSMC. The second order DSMC in~\\cite{mihoub2009real} is formulated for linear systems without consideration of the uncertainties in the model. Moreover, the study in~\\cite{mihoub2009real} lacks the stability analysis of the closed-loop system. In this paper, a new second order DSMC formulation is developed for a general class of single-input single-output (SISO) \\textit{uncertain nonlinear systems}. Moreover, the asymptotic stability of the new controller is guaranteed via a Lyapunov stability argument. \n\nSimilar to implementation imprecisions, any uncertainty in the plant model, which is used for designing the model-based controller, results in a significant gap between the designed and implemented controllers.\nThe previous works in the literature that aimed to handle uncertainties in the model via an adaptive SMC structure are limited to the continuous-time domain~\\cite{Slotine} and to linear systems~\\cite{Chan_Automatica}. The adaptive DSMC formulation from our previous works in~\\cite{Pan_DSC,Amini_DSCC2016,Amini_CEP} presents a generic solution for removing the model uncertainties for nonlinear systems based on a first order DSMC formulation. The proposed second order DSMC formulation from this paper allows us to derive the adaptation laws via a Lyapunov stability argument to remove the uncertainty in the plant's model quickly. \n\nThe contribution of this paper is threefold. First, a new second order DSMC is formulated for a general class of nonlinear affine systems. Second, the proposed controller is extended to handle the multiplicative type of model uncertainty using a discrete Lyapunov stability argument that also guarantees the asymptotic stability of the closed-loop system. Third, this paper presents the first application of the second order DSMC for an automotive combustion engine control problem. 
The proposed second order DSMC not only demonstrates robust behavior against data sampling imprecisions compared to a first order DSMC, but it also removes the uncertainties in the model quickly and steers the dynamics to their nominal values.\n\\vspace{-0.15cm}\n\n\\section{Second Order Sliding Mode Control} \\vspace{-0.15cm} \\label{sec:UncertaintyPrediction}\n\n\\subsection{Continuous Second Order Sliding Mode Control}\\vspace{-0.10cm} \\label{sec:ContinousTimeSecondSMC}\nA general class of continuous-time SISO nonlinear systems can be expressed as follows:\n\\vspace{-0.25cm}\n\\begin{gather}\\label{eq:C2SMC_1}\n\\dot{x}=f(t,x,u)\n\\end{gather}\nwhere $x{\\in{\\mathbb{R}^{n}}}$ and $u{\\in{\\mathbb{R}}}$ are the state and the input variables, respectively. The sliding mode order is the number of continuous successive derivatives of the differentiable sliding variable $s$, and it is a measure of the degree of smoothness of the sliding variable in the vicinity of the sliding manifold. For continuous-time systems, the $r^{th}$ order sliding mode is determined by the following equalities~\\cite{salgado2004robust}:\n\\vspace{-0.22cm}\n\\begin{gather}\\label{eq:C2SMC_2}\ns(t,x)=\\dot{s}(t,x)=\\ddot{s}(t,x)=\\cdots=s^{(r-1)}(t,x)=0\n\\end{gather}\nThe sliding variable ($s$) is defined as the difference between the measured signal ($x$) and the desired one ($x_d$):\n\\vspace{-0.22cm}\n\\begin{gather}\\label{eq:C2SMC_3}\n s(t,x)=x(t)-x_d(t) \n\\end{gather}\nFor the second order SMC design,\na new sliding variable ($\\xi$) is defined in terms of $s$ and $\\dot{s}$:\n\\vspace{-0.22cm}\n\\begin{gather}\\label{eq:C2SMC_4}\n\\xi(t,x)=\\dot{s}(t,x)+\\lambda s(t,x),~\\lambda>0\n\\end{gather}\nEq.~(\\ref{eq:C2SMC_4}) describes the sliding surface of a system with a relative degree equal to one, in which the input is $\\dot{u}$ and the output is $\\xi(t,x)$~\\cite{sira1990structure}. The control input is obtained according to the following law:\n\\vspace{-0.22cm}\n\\begin{gather}\\label{eq:C2SMC_5}\n\\dot{\\xi}(t,x)=0 \\Rightarrow \\ddot{s}(t,x)+\\lambda \\dot{s}(t,x)=0\n\\end{gather}\nwhich, according to the sliding variable definition, requires the second derivative of the state variable ($\\ddot{x}(t)$).\nBy substituting Eq.~(\\ref{eq:C2SMC_3}) and $\\ddot{x}(t)$ into Eq.~(\\ref{eq:C2SMC_5}), $\\dot{u}$ is calculated as follows:\n\\vspace{-0.25cm}\n\\begin{gather}\\label{eq:C2SMC_9}\n\\dot{u}(t)=\\frac{1}{\\frac{\\partial}{\\partial u}f(t,x,u)}\\Big(-\\frac{\\partial}{\\partial t}f(t,x,u) \\\\-\\big(\\frac{\\partial}{\\partial x}f(t,x,u)\\big)f(t,x,u)+\\ddot{x}_d(t)-\\lambda\\dot{s}(t,x)\\Big) \\nonumber\n\\end{gather}\nand finally the control input is:\n\\vspace{-0.25cm}\n\\begin{gather}\\label{eq:C2SMC_10}\nu(t)=\\int\\dot{u}(t)dt\n\\end{gather}\n\nThis approach guarantees asymptotic convergence of the sliding variable and its derivative to zero~\\cite{mihoub2009real}. \\vspace{-0.15cm}\n\\subsection{Discrete Adaptive Second Order Sliding Mode Control} \\label{sec:DiscreteTimeSecondDSMC}\nThe affine form of the nonlinear system in Eq.~(\\ref{eq:C2SMC_1}) with an unknown multiplicative term ($\\alpha$) can be presented using the following state space equation:\n\\vspace{-0.15cm}\n\\begin{gather}\\label{eq:D2SMC_1}\n\\dot{x}(t)=\\alpha f(x(t))+g(x(t))u(t)\n\\end{gather}\nwhere $g(x(t))$ is a non-zero input coefficient and $f(x(t))$ represents the dynamics of the plant and does not depend on the inputs. $\\alpha$ is an unknown constant and represents the errors in the modeled plant's dynamics. 
By applying the first order Euler approximation, the continuous model in Eq.~(\\ref{eq:D2SMC_1}) is discretized as follows:\n\\vspace{-0.15cm}\n\\begin{gather}\\label{eq:D2SMC_2}\nx(k+1)=T\\alpha f(x(k))+Tg(x(k))u(k)+x(k)\n\\end{gather}\nwhere $T$ is the sampling time. Similar to Eq.~(\\ref{eq:C2SMC_4}), a new discrete sliding variable is defined:\n\\vspace{-0.25cm}\n\\begin{gather}\\label{eq:D2SMC_3}\n\\xi(k)={s}(k+1)+\\beta s(k),~\\beta>0\n\\end{gather}\nwhere $s(k)=x(k)-x_d(k)$, and $\\beta$ is the new sliding variable gain. Substituting Eq.~(\\ref{eq:D2SMC_2}) into Eq.~(\\ref{eq:D2SMC_3}) yields:\n\\vspace{-0.2cm}\n\\begin{gather}\\label{eq:D2SMC_4}\n\\xi(k)=T\\alpha f(x(k))+Tgu(k)+x(k)-x_d(k+1)+\\beta s(k)\n\\end{gather}\nThe second order discrete sliding law is defined as~\\cite{mihoub2009real}:\n\\vspace{-0.25cm}\n\\begin{gather}\\label{eq:D2SMC_5}\n\\xi(k+2)=\\xi(k+1)=\\xi(k)=0\n\\end{gather}\nApplying Eq.~(\\ref{eq:D2SMC_5}) to the nonlinear system in Eq.~(\\ref{eq:D2SMC_2}) results in the following control input:\n\\vspace{-0.15cm}\n\\begin{gather}\\label{eq:D2SMC_6}\nu(k)=\\frac{1}{gT}\\Big(-T\\hat{\\alpha}(k)f(x(k))-x(k)+x_d(k+1)-\\beta s(k)\\Big) \n\\end{gather}\nwhere $\\hat{\\alpha}$ is the estimate of the unknown multiplicative uncertainty term in the plant's model. By incorporating the control law ($u$) into the second order sliding variable ($\\xi$), we have:\n\\vspace{-0.3cm}\n\\begin{gather}\\label{eq:D2SMC_7}\n\\xi(k)=Tf(\\alpha-\\hat{\\alpha}(k))=Tf\\tilde{\\alpha}(k)\n\\end{gather}\nwhere $\\tilde{\\alpha}$ is the difference between the unknown and estimated multiplicative uncertainty terms ($\\tilde{\\alpha}(k)=\\alpha-\\hat{\\alpha}(k)$). In order to determine the stability of the closed-loop system and derive the adaptation law that removes the uncertainty in the model, a Lyapunov stability analysis is employed here. The following Lyapunov candidate function is proposed: \n\\begin{gather}\\label{eq:D2SMC_8}\nV(k)=\\frac{1}{2}\\Big({s}^2(k+1)+\\beta{s}^2(k)\\Big)\\\\\n+\\frac{1}{2}\\rho_{\\alpha}\\Big(\\tilde{\\alpha}^2(k+1)+\\beta \\tilde{\\alpha}^2(k)\\Big) \\nonumber\n\\end{gather}\nwhere $\\rho_{\\alpha}>0$ is a tunable parameter (adaptation gain) chosen to set the numerical sensitivity of the unknown parameter estimation. The proposed Lyapunov function in Eq.~(\\ref{eq:D2SMC_8}) is positive definite and quadratic with respect to the sliding variable ($s(k)$) and the unknown parameter estimation error ($\\tilde{\\alpha}(k)$). In the discrete time domain, the negative semi-definite condition is required for the difference function of $V$ to guarantee the asymptotic stability of the closed-loop system~\\cite{Pan_DSC,Amini_CEP}. The Lyapunov difference function is calculated using a Taylor series expansion: \\vspace{-0.20cm}\n\\begin{gather}\\label{eq:D2SMC_9}\nV(k+1)=V(k)+\\frac{\\partial V(k)}{\\partial s(k)}\\Delta s(k)\\\\\n+\\frac{\\partial V(k)}{\\partial s(k+1)}\\Delta s(k+1)+\\frac{\\partial V(k)}{\\partial \\tilde{\\alpha}(k)}\\Delta \\tilde{\\alpha}(k) \\nonumber \\\\\n+\\frac{\\partial V(k)}{\\partial \\tilde{\\alpha}(k+1)}\\Delta \\tilde{\\alpha}(k+1) \n+\\frac{1}{2} \\frac{\\partial^2 V(k)}{\\partial {s}^2(k)}\\Delta {s}^2(k) \\nonumber \\\\\n+\\frac{1}{2} \\frac{\\partial^2 V(k)}{\\partial {s}^2(k+1)}\\Delta {s^2(k+1)}+\n\\frac{1}{2} \\frac{\\partial^2 V(k)}{\\partial {\\tilde{\\alpha}}^2(k)}\\Delta {\\tilde{\\alpha}}^2(k) \\nonumber \\\\\n+\\frac{1}{2} \\frac{\\partial^2 V(k)}{\\partial {\\tilde{\\alpha}}^2(k+1)}\\Delta {\\tilde{\\alpha}}^2(k+1)+... 
\\nonumber\n\\end{gather}\nwhere $\\Delta s(k)\\equiv s(k+1)-s(k)$ and $\\Delta \\tilde{\\alpha}(k) \\equiv \\tilde{\\alpha}(k+1)-\\tilde{\\alpha}(k)$. \nNext, the Lyapunov difference function ($\\Delta V(k)=V(k+1)-V(k)$) is calculated by substituting the values of the partial derivatives into Eq.~(\\ref{eq:D2SMC_9}): \\vspace{-0.15cm}\n\\begin{gather}\\label{eq:D2SMC_11}\n\\Delta V(k)=\\beta s(k)\\Delta s(k)+s(k+1)\\Delta s(k+1) \\\\\n+\\rho_{\\alpha}\\beta\\tilde{\\alpha}(k)\\Delta\\tilde{\\alpha}(k)+\\rho_{\\alpha}\\tilde{\\alpha}(k+1)\\Delta\\tilde{\\alpha}(k+1) \\nonumber \\\\\n+\\frac{1}{2} \\beta\\Delta {s}^2(k)+\\frac{1}{2}\\Delta{s}^2(k+1)\\nonumber \\\\\n+\\frac{1}{2}\\rho_{\\alpha}\\beta\\Delta{\\tilde{\\alpha}}^2(k)+\\frac{1}{2}\\rho_{\\alpha}\\Delta{\\tilde{\\alpha}}^2(k+1)+... \\nonumber\n\\end{gather}\nwhere the second order cross-derivative terms vanish. Eq.~(\\ref{eq:D2SMC_11}) can be simplified after substituting Eq.~(\\ref{eq:D2SMC_7}) at the $k$ and $k+1$ time steps: \\vspace{-0.2cm}\n\\begin{gather}\\label{eq:D2SMC_12}\n\\Delta V(k)=-\\beta(\\beta+1)s^2(k)-(\\beta+1)s^2(k+1)\\\\\n+\\beta s(k)Tf\\tilde{\\alpha}(k)+\\rho_{\\alpha}\\beta \\tilde{\\alpha}(k)\\Delta \\tilde{\\alpha}(k)\\nonumber \\\\\n+s(k+1)Tf\\tilde{\\alpha}(k+1)+\\rho_{\\alpha}\\tilde{\\alpha}(k+1)\\Delta\\tilde{\\alpha}(k+1) \\nonumber \\\\\n+O\\left(\\Delta {s}^2(k),\\Delta{s}^2(k+1),\\Delta\\tilde{\\alpha}^2(k),\\Delta\\tilde{\\alpha}^2(k+1)\\right)+... \\nonumber \n\\end{gather}\nwhich yields: \\vspace{-0.25cm}\n\\begin{gather}\\label{eq:D2SMC_13}\n\\Delta V(k)=-(\\beta+1)\\big(s^2(k+1)+\\beta s^2(k)\\big)\\\\\n+\\rho_{\\alpha}\\beta\\tilde{\\alpha}(k)\\Big(\\frac{s(k)Tf}{\\rho_{\\alpha}}+\\Delta\\tilde{\\alpha}(k)\\Big) \\nonumber \\\\\n+\\rho_{\\alpha}\\tilde{\\alpha}(k+1)\\Big(\\frac{s(k+1)Tf}{\\rho_{\\alpha}}+\\Delta\\tilde{\\alpha}(k+1)\\Big) \\nonumber \\\\\n+O\\left(\\Delta {s}^2(k),\\Delta{s}^2(k+1),\\Delta\\tilde{\\alpha}^2(k),\\Delta\\tilde{\\alpha}^2(k+1)\\right)+... \\nonumber \n\\end{gather}\nin which the higher order ($>2$) terms are zero. As can be seen from Eq.~(\\ref{eq:D2SMC_13}), the first term is negative definite when $\\beta>0$. To guarantee the asymptotic stability of the closed-loop system and to minimize the tracking errors, the Lyapunov difference function should be at least negative semi-definite~\\cite{Amini_DSC}. To this end, the second and third terms in Eq.~(\\ref{eq:D2SMC_13}) should become zero, which leads to the following adaptation law:\\vspace{-0.35cm}\n\\begin{gather}\\label{eq:D2SMC_14}\n\\tilde{\\alpha}(k+1)=\\tilde{\\alpha}(k)-\\frac{s(k)Tf}{\\rho_{\\alpha}} \n\\end{gather} \nBy using Eq.~(\\ref{eq:D2SMC_14}) to estimate the unknown uncertainty term, the Lyapunov difference function becomes: \\vspace{-0.20cm}\n\\begin{gather}\\label{eq:D2SMC_15}\n\\Delta V(k)=-(\\beta+1)\\big(s^2(k+1)+\\beta s^2(k)\\big)\\\\\n+O\\left(\\Delta {s}^2(k),\\Delta{s}^2(k+1),\\Delta\\tilde{\\alpha}^2(k),\\Delta\\tilde{\\alpha}^2(k+1)\\right) \\nonumber\n\\end{gather}\n\nLet us assume that by using Eq.~(\\ref{eq:D2SMC_14}), the uncertainty in the model will be removed. This means that the error in estimating the unknown parameter converges to zero ($\\tilde{\\alpha}(k+1)=\\tilde{\\alpha}(k)=0$). 
Thus, by expanding the second order terms ($O(\\cdot)$) and assuming a small enough sampling time ($T$), such that all terms containing $T^2$ can be neglected, Eq.~(\\ref{eq:D2SMC_11}) can be re-arranged as follows: \\vspace{-0.20cm}\n\\begin{gather}\\label{eq:D2SMC_16}\n\\Delta V(k)=\\beta s(k)(s(k+1)-s(k))\\\\\n+s(k+1)(s(k+2)-s(k+1))+\\frac{1}{2} \\beta(s(k+1)-s(k))^2 \\nonumber \\\\\n+\\frac{1}{2}(s(k+2)-s(k+1))^2+... \\nonumber\n\\end{gather}\nSince it was assumed that the uncertainty in the model is compensated by Eq.~(\\ref{eq:D2SMC_14}), $s(k+1)$ and $s(k+2)$ can be replaced by $-\\beta s(k)$ and $\\beta^2 s(k)$, respectively, according to Eq.~(\\ref{eq:D2SMC_5}). Thus, Eq.~(\\ref{eq:D2SMC_16}) can be simplified as: \\vspace{-0.25cm}\n\\begin{gather}\\label{eq:D2SMC_18}\n\\Delta V(k)=-\\frac{1}{2}\\beta\\big(-{\\beta}^3-{\\beta}^2+\\beta+1\\big)s^2(k)\n\\end{gather}\nSince $-{\\beta}^3-{\\beta}^2+\\beta+1=(1-\\beta)(1+\\beta)^2$, this term is positive if $1>\\beta>0$. In other words, if $1>\\beta>0$, then $\\Delta V(k)\\leq 0$, which guarantees the asymptotic stability of the system: \\vspace{-0.15cm}\n\\begin{gather}\\label{eq:D2SMC_19}\nV\\rightarrow 0 \\Rightarrow s(k+1)~\\&~s(k) \\rightarrow 0 \\xRightarrow[]{1>\\beta>0} \\xi(k) \\rightarrow 0\n\\end{gather}\n\nIt was shown that the second order sliding mode (Eq.~(\\ref{eq:D2SMC_5})) and the adaptation law from Eq.~(\\ref{eq:D2SMC_14}) guarantee the negative semi-definite condition of the Lyapunov difference function. This means that the sliding variable ($s$) and the error in estimating the unknown parameter ($\\tilde{\\alpha}$) converge to zero in finite time. Moreover, since the second order DSMC steers both first and second derivatives (difference functions) of the sliding variable to the origin, it provides better tracking performance, lower chattering, and higher robustness against data sampling imprecisions, compared to the first order DSMC. Fig.~\\ref{fig:AdaptiveDSMC_Schematic} shows the overall schematic of the proposed second order adaptive DSMC along with the adaptation mechanism; one step of the resulting controller is sketched below. 
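\nThe following Python sketch (a reading aid, not the authors' implementation) combines the control law of Eq.~(\\ref{eq:D2SMC_6}) with the adaptation law of Eq.~(\\ref{eq:D2SMC_14}), the latter rewritten for $\\hat{\\alpha}$ using $\\tilde{\\alpha}=\\alpha-\\hat{\\alpha}$ with $\\alpha$ constant:\n\\begin{verbatim}\ndef adaptive_dsmc_step(x, x_d, x_d_next, alpha_hat, f, g, T, beta, rho):\n    # One step of the adaptive second order DSMC for the scalar plant\n    #   x(k+1) = T*alpha*f(x(k)) + T*g(x(k))*u(k) + x(k).\n    # Requires g(x) != 0 and 0 < beta < 1 for the stability result.\n    s = x - x_d                            # sliding variable s(k)\n    u = (-T * alpha_hat * f(x) - x + x_d_next - beta * s) \/ (g(x) * T)\n    alpha_hat_next = alpha_hat + T * s * f(x) \/ rho\n    return u, alpha_hat_next\n\\end{verbatim}\nThe same update, applied per state with the corresponding sliding variable and adaptation gain, generates the DC motor and engine control laws of the case studies.\n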
\\vspace{-0.4cm}\n\\begin{figure}[h!]\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{AdaptiveDSMC_Schematic.eps} \\vspace{-0.75cm}\n\\caption{\\label{fig:AdaptiveDSMC_Schematic} Schematic of the proposed second order adaptive DSMC.} \\vspace{-0.7cm}\n\\end{center}\n\\end{figure}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\nSpintronics aims at using the information carried by the electrons' spin in solids.\nFor this purpose, establishing reliable methods to create, transfer and detect the spin current is an urgent task.\nCompared to charge transport, spin transport has one serious fundamental difficulty.\nThat is the non-conservation of the spin in solids.\nThis limits the range of the spin transmission to be less than the spin diffusion length, which is typically on the $\\mu$m scale in metals.\n\n\nThe non-conservation of the spin is expressed by a source term in the continuity equation for the spin \n\\begin{align}\n\\dot{s}^\\alpha+\\nabla \\cdot\\bm{j}_{\\rm s}^\\alpha={\\cal T}^\\alpha.\n \\label{sconteq}\n\\end{align}\nHere $s$ and $\\bm{j}_{\\rm s}$ are the spin density and spin current, respectively, $\\alpha=x,y,z$ is the spin direction and ${\\cal T}$ is the spin relaxation torque resulting in the non-conservation of the spin.\nIn most cases in metals, the dominant origin of ${\\cal T}$ is the spin-orbit interaction.\n\nAlthough the relaxation torque term is essential in spin transport, it has so far been treated only on a phenomenological ground.\nThe continuity equation is equivalent to the Boltzmann equation, which is useful in discussing spin transport.\nThe Boltzmann equation for the distribution function of each spin channel was discussed by Son et al. \\cite{Son87} and later by Valet and Fert \\cite{Valet93} in the context of the giant magnetoresistance in multilayer systems.\nIn their analysis, they approximated the spin relaxation torque as proportional to the inverse of a spin relaxation time $\\tau_{\\rm sf}$ \nand to some unknown function representing a driving force for the spin accumulation.\nThe driving force was written in terms of what they called the spin chemical potential $\\mu_{\\rm s}$.\nThe relaxation torque was approximated as ${\\cal T}^z =\\mu_{\\rm s}\/\\tau_{\\rm sf}$.\nThey argued that $\\mu_{\\rm s}$ satisfies the diffusion equation,\n$\\nabla^2 \\mu_{\\rm s} =-\\ell_{\\rm sf}^{-2} \\mu_{\\rm s}$, with the diffusion length $\\ell_{\\rm sf}\\propto \\sqrt{\\tau_{\\rm sf}}$.\nA microscopic calculation of $\\mu_{\\rm s}$ has not been carried out so far.\n\n\n\nThe diffusion equation for the spin has been widely used to discuss recent spin transport in metallic junctions \\cite{Takahashi08}.\nThe decay of spin transport has been confirmed in non-local spin injection experiments \\cite{Jedema01,Kimura07,Seki08}, which\n indicate that the spin diffusion decays with a decay length of 350-500~nm in Cu and 100~nm in Au at room temperature.\nAlthough the spin diffusion equation has so far appeared successful, the phenomenological treatment of the spin relaxation term and the spin chemical potential must be improved \nfor a serious treatment of spin transport.\n\n\nBesides the diffusive spin current, there is another spin current that is driven by an effective field.\nIn contrast to the diffusive one, this field-driven contribution \nshould not decay in uniform (single domain) ferromagnets, since the ratio of the spin current and the charge current is determined by the spin polarization ratio of the 
material, which is a statistical mechanical quantity. \nThe field-driven (local) spin current and the diffusive spin current behave differently, as was recently demonstrated theoretically in the case of the inverse spin Hall effect \\cite{Takeuchi10}. \n\n\nIn the field of current-driven magnetization dynamics, the spin relaxation torque has been studied from the microscopic viewpoint \\cite{KTS06,TKS_PR08,TE08}.\nIn this context, Eq. (\\ref{sconteq}) gives the expression for the torque acting on the spin density ${{\\bm s}}$ as \n\\begin{align}\n\\tau^\\alpha = -\\nabla \\cdot\\bm{j}_{\\rm s}^\\alpha+{\\cal T}^\\alpha.\n\\end{align}\nIn the adiabatic limit, i.e., slowly varying magnetization, and under uniform current, the first term reduces to \n$\\nabla \\cdot\\bm{j}_{\\rm s}^\\alpha=(P\/2e)(\\bm{j}\\cdot\\nabla){{\\bm s}}^\\alpha$, where $P$ is the spin polarization of the current \\cite{TKS_PR08,TE08}, namely to the adiabatic spin-transfer torque.\nWhen spin relaxation sets in, the conduction electron no longer follows the magnetization profile, and a new contribution to the torque arises from the ${\\cal T}$ term.\nThis torque was shown to be\n\\begin{align}\n{\\cal T} =\n-\\beta \\frac{P}{es^2} \n({{\\bm s}}\\times (\\bm{j}\\cdot\\nabla){{\\bm s}}), \\label{betatorque}\n\\end{align}\nwhere $\\beta$ is a coefficient inversely proportional to the spin relaxation time $\\tau_{\\rm s}$ \\cite{KTS06,TE08}.\nThis torque, called the $\\beta$ term, turned out to be essential in determining the efficiency of current-driven domain wall motion \\cite{TK04,Zhang04,Thiaville05}.\nThe magnitude of the parameter $\\beta$ has recently been intensively studied experimentally by measuring the domain wall speed under current \\cite{Thomas06,Moore09}.\nA theoretical formulation for estimating $\\beta$ in first-principles calculations was carried out recently \\cite{Garate09}.\n\n\n\nThe spin relaxation torque has also been studied from the viewpoint of how to define the spin current.\nIt was discussed that the spin relaxation torque contains a term written as a divergence of the torque dipole density, ${\\bm P}$ \\cite{Culcer04}.\nA generalized argument was given by Shi et al. \\cite{Shi06}, where they discussed that the $z$ component of the relaxation torque is written as a divergence, \n\\begin{align}\n{\\cal T}^z= -\\nabla\\cdot{\\bm P}, \n\\end{align}\nif the system has inversion symmetry. \nThis means that the total torque integrated over the system should vanish.\nShi et al. also argued that if the relaxation torque is a divergence of ${\\bm P}$, one can define a spin current that is conserved. \nIn fact, defining \n$\\tilde{\\bm{j}_{\\rm s}}\\equiv \\bm{j}_{\\rm s}+{\\bm P}$, the continuity equation (\\ref{sconteq}) reduces to $\\dot{s}^z+\\nabla\\cdot \\tilde{\\bm{j}_{\\rm s}}=0$.\nThe explicit form of the torque dipole density was not calculated in Ref. \\cite{Shi06}.\nObviously, in the presence of an inhomogeneity of the magnetization, the $\\beta$ torque (\\Eqref{betatorque}) cannot be written as a divergence, and thus it indeed represents the spin angular momentum lost by the spin relaxation.\n\n\nThe result that the spin relaxation torque is given by a derivative of the applied electric field ${\\bm E}$ is understood as follows.\nThe spin relaxation torque should of course vanish when ${\\bm E}=0$.\nIt cannot be directly proportional to ${\\bm E}$, since the field-driven spin current in uniform ferromagnets should not decay.
\nTherefore, the simplest expression for the relaxation torque is a derivative of the field.\nIt is not, however, obvious whether it should always be written in a rotationally invariant way, or if it can be\nanisotropic, since the rotational invariance is broken in uniform ferromagnets because of the magnetization.\n\n\n\nThe first aim of the present paper is to calculate the spin relaxation torque microscopically in the presence of the applied electric field. \nThe spin relaxation mechanism we take into account is the spin-orbit interaction due to random impurities. \nOur explicit calculation reveals that the spin relaxation torque is not always rotationally symmetric in uniform ferromagnets, but is generally given by\n\\begin{align}\n{\\cal T}^z = \\gamma (\\nabla\\cdot {\\bm E})\n+\\delta\\gamma(\\partial_z E_z),\n\\label{TauE}\n\\end{align}\nwhere the $z$ axis is along the magnetization, and \n $\\gamma$ and $\\delta\\gamma$ are coefficients proportional to the inverse spin relaxation time.\nThe spin relaxation torque is, therefore, anisotropic.\nThe relaxation torque of \\Eqref{TauE} indicates that the torque dipole density is given by\n\\begin{align}\n{\\bm P} = -(\\gamma {\\bm E} +\\delta\\gamma ({\\bm n}\\cdot {\\bm E}){\\bm n}),\n\\end{align}\nwhere ${\\bm n}$ represents the direction of the magnetization.\nSuch anisotropic behavior of the transport quantities is common when the spin-orbit interaction exists, as is well-known in charge transport as the anisotropic magnetoresistance (AMR) \\cite{Mcguire75}. \n\nThe second aim of the paper is to study the spin current on the same microscopic footing as the relaxation torque.\nWe show that the spin current is made up of a local, field-driven contribution and a nonlocal, diffusive one.\nThe field-driven contribution is anisotropic like the AMR effect for the charge current \n(we call this the spin AMR effect).\nThe diffusive contribution is given as a gradient of a spin chemical potential, $\\mu_{\\rm s}$.\nWe will derive the linear-response expression for the spin chemical potential. \nOur microscopic study of the spin current demonstrates the validity of the half-phenomenological treatments \\cite{Valet93}.\n\n\n\n\n\n\n\\section{Model}\n\nWe consider the conduction electron system taking account of the spin-orbit interaction, the impurity scattering without spin flip, and the applied electric field. \nThe Hamiltonian of the system is given as\n$H=H_0+ {H_{\\rm so}}+H_{\\rm em}+H_{\\rm imp}$, where \n$H_0$\n is the free electron Hamiltonian including the uniform magnetization, ${H_{\\rm so}}$ is the spin-orbit interaction, $H_{\\rm em}$ is the interaction with the gauge field representing the applied electric field, and $H_{\\rm imp}$ is the spin-independent impurity scattering.\nThe free part reads \n\\begin{align}\nH_0\\equiv \\sum_{{\\bm k}\\sigma}\\epsilon_{{\\bm k}\\sigma} c^\\dagger_{{\\bm k}\\sigma} c_{{\\bm k}\\sigma} ,\n\\end{align}\n where \nthe electron creation and annihilation operators are denoted by ${c^{\\dagger}}$ and $c$, respectively, $\\epsilon_{{\\bm k}\\sigma}\\equiv \\frac{k^2}{2m}-{\\epsilon_F}-\\sigma {M}$, ${\\epsilon_F}$ is the Fermi energy, ${M}$ is the spin splitting due to the magnetization, and $\\sigma\\equiv \\pm$ represents the spin.\nThe spin-orbit interaction is represented by\n${H_{\\rm so}}={H_{\\rm so}}^0+{H_{\\rm so}}^{A}$,\n where \n($\\sigma_k$ ($k=x,y,z$) is the Pauli matrix)\n\\begin{align}\n{H_{\\rm so}}^0 &= -\\frac{i}{2}\n\\sum_{ijk} \\epsilon_{ijk}\\int\\!
{d^3x} (\\nabla_i v_{\\rm so}^{(k)}) \n({c^{\\dagger}} \\vvec{\\nabla}_j \\sigma_k c)\n\\\\\n{H_{\\rm so}}^{A} &= - e\\sum_{ijk} \\epsilon_{ijk}\\int\\! {d^3x} \n (\\nabla_i v_{\\rm so}^{(k)}) A_j({\\bm x},t) ({c^{\\dagger}} \\sigma_k c).\n\\end{align}\n(We suppress the spin index when obvious, namely, $c=(c_+,c_-)$. )\nThe spin-orbit potential $v_{\\rm so}^{(k)}$ is assumed to arise from random impurities and to depend on the spin direction ($k$).\nThe averaging over the spin-orbit potential is carried out as \n\\begin{align}\n\\average{ v_{\\rm so}^{(k)}({\\bm p}) v_{\\rm so}^{(\\gamma)}(-{\\bm p}')}_{\\rm i} =n_{{\\rm so}}{\\lambda_{\\rm so}}^2 \\delta_{{\\bm p},{\\bm p}'}\\delta_{k\\gamma}, \n\\label{vsav}\n\\end{align} \nwhere $n_{{\\rm so}}$ and ${\\lambda_{\\rm so}}$ are the concentration of the spin-orbit impurities and the strength of the interaction, respectively.\nThe average of the spin-orbit potential at the linear order is zero in our model, and thus we do not take account of the anomalous Hall and spin Hall effects. \nWe consider a case where the electric field \n${\\bm E}({\\bm x},t)(\\equiv -\\dot{{\\bm A}}({\\bm x},t))$ is position and time dependent. \nThe electromagnetic interaction is written as\n\\begin{align}\nH_{\\rm em} = -\\frac{e}{m} \\sum_{{\\bm k},{\\bm q}}\n\\sum_i k_i A_i({\\bm q},\\Omega) ({c^{\\dagger}}_{{\\kv}-\\frac{{\\bm q}}{2}}c_{{\\kv}+\\frac{{\\bm q}}{2}}),\n\\end{align}\nwhere $\\Omega$ is the frequency of the electric field.\nWe will consider the limit of small $\\Omega$ and small $q$.\nThe scattering by the normal impurities is represented by\n\\begin{align}\nH_{\\rm imp} &= \n\\sum_{i=1}^{N_{\\rm imp}}\\sum_{{\\bm k}\\kv'} \n\\frac{v_{\\rm imp}}{N} e^{i({\\bm k}-{\\bm k}')\\cdot{\\bm R}_i} {c^{\\dagger}}_{{\\bm k}'}c_{{\\bm k}},\n\\end{align}\nwhere $v_{\\rm imp}$ represents the strength of the impurity potential, ${\\bm R}_i$ represents the position of the random impurities, $N_{\\rm imp}$ is the number of impurities, and \n$N\\equiv V\/a^3$ is the number of sites.\nTo estimate physical quantities, we take the random average over impurity positions in a standard manner \\cite{TKS_PR08}.\n\n\nTo derive the spin continuity equation, \\Eqref{sconteq}, we consider the equation of motion for the spin density, \n${\\bm \\se}({\\bm x},t)\\equiv \\average{{c^{\\dagger}}({\\bm x},t){\\bm \\sigma} c({\\bm x},t)}$\n($\\average{\\ }$ represents the quantum average).\nThe time development of the spin density reads \n\\begin{align}\n\\dot{s}^\\alpha &= \ni\\average { [H,c^\\dagger]\\sigma^\\alpha c+ c^\\dagger\\sigma^\\alpha [H,c] },\\label{eqcommutators}\n\\end{align}\nwhere $H$ is the total Hamiltonian of the system.\nThe commutators are calculated as in Appendix \\ref{APP:eqofmo}, and \\Eqref{eqcommutators}\nturns out to be \\Eqref{sconteq}, namely \n\\begin{align}\n\\dot{s}^\\alpha = -\\nabla \\cdot\\bm{j}_{\\rm s}^\\alpha+{\\cal T}^\\alpha, \\nonumber\n\\end{align}\n with the spin current given as\n\\begin{align}\nj_{\\rm s}^{\\alpha} \\equiv j_{\\rm s}^{{\\rm (n)},\\alpha}+j_{\\rm s}^{{\\rm so},\\alpha}\n, \n\\end{align}\nwhere \n\\begin{align}\n \\jspin{,i}{{\\rm (n)},\\alpha} \n &\\equiv -\\frac{i}{2m} \n\\average{ {c^{\\dagger}} \\sigma^\\alpha \\vvec{\\nabla}_i c }\n -\\frac{e}{m}A_i \n\\average{ {c^{\\dagger}} \\sigma_\\alpha c }\n\\nonumber\\\\\n&\\equiv \n \\jspin{,i}{(0),\\alpha} + \\jspin{,i}{A,\\alpha} ,\n\\label{spincurrentsdefs}\n\\end{align}\nand \n\\begin{align}\n \\jspin{,i}{{\\rm so}, \\alpha} \n \\equiv \n-\\sum_{j} \\epsilon_{ij\\alpha} 
\n(\\nabla_jv_{\\rm so}^{(\\alpha)}) \\average{ {c^{\\dagger}} c}. \\label{jssodef}\n\\end{align}\nThe relaxation torque reads \n\\begin{align}\n{\\cal T}^{\\alpha}\n\\equiv {\\cal T}_{{\\rm so}}^{\\alpha} +{\\cal T}_{{\\rm so}}^{A,\\alpha}, \\label{totaltaudef}\n\\end{align}\n where \n\\begin{align}\n{\\cal T}_{{\\rm so}}^{\\alpha} \n&\\equiv \ni \\sum_{ijkl} \\epsilon_{ijk} \\epsilon_{\\alpha lk} (\\nabla_iv_{\\rm so}^{(k)})\n\\average{ {c^{\\dagger}} \\sigma_l \\vvec{\\nabla}_j c},\n\\label{Tsodef} \n\\\\\n{\\cal T}_{{\\rm so}}^{A,\\alpha} \n& \\equiv \n2e \\sum_{ijkl} \\epsilon_{ijk} \\epsilon_{\\alpha lk} (\\nabla_iv_{\\rm so} ^{(k)} ) A_j\n\\average{ {c^{\\dagger}} \\sigma_l c}.\\label{TsoAdef}\n\\end{align}\n\n\nThe spin relaxation torque depends on the definition of the spin current. \nFor instance, if we redefine the spin current as\n ${\\bm{j}_{\\rm s}' } ^\\alpha \\equiv {\\bm{j}_{\\rm s}^\\alpha }-{\\bm C}^\\alpha$, where ${\\bm C}$ is a vector, the continuity equation (\\ref{sconteq}) becomes \n$\\dot{s}^\\alpha+\\nabla \\cdot{ \\bm{j}_{\\rm s}' } ^\\alpha={{\\cal T}'} ^\\alpha$, where \nthe relaxation torque reads ${{\\cal T}'} ^\\alpha\\equiv {{\\cal T}} ^\\alpha+\\nabla\\cdot {\\bm C}^\\alpha$.\nThis ambiguity in the definition of the spin current of course does not affect physical quantities such as the total torque acting on the spin density, which is given by $\\dot{{{\\bm s}}}$.\n\n\n\n\n\\section{Spin relaxation torque}\n\n\nWe calculate the spin relaxation torque as a linear response to the applied electric field. \nThe uniform magnetization is chosen along the $z$ axis.\nThe spin-orbit interaction is included to the second order.\n\\begin{figure}[tbh]\n\\begin{center}\n\\includegraphics[width=0.45\\hsize]{spinrelaxationdiag6.eps}\n\\caption{ \nFeynman diagrams representing the relaxation torque.\nSolid lines represent the electron Green's function with the lifetime ($\\tau_\\sigma$) included, \n$v_{\\rm so}$ represents the spin-orbit interaction (double dashed line), and the dotted line represents the interaction with the gauge field ($A$).\nThe first two diagrams are the contributions to ${\\cal T}_{{\\rm so}}^{\\alpha}$\nand the last diagram is the contribution to ${\\cal T}_{{\\rm so}}^{A,\\alpha}$.\nThe vertex marked by a cross represents the relaxation torque.\n\\label{FIGspinrelxationdiag1}\n}\n\\end{center}\n\\end{figure}\nThe contributions to the relaxation torque, \\Eqref{totaltaudef}, are shown in Fig. \\ref{FIGspinrelxationdiag1}.\nThe leading contribution for small $1\/({\\epsilon_F}\\tau)$ and $q\\ell$ \n($\\ell$ is the electron mean free path)\nturns out to be the first diagram in Fig. \\ref{FIGspinrelxationdiag1}, which reads \n(see Appendix \\ref{SEC:appA} for details)\n\\begin{align}\n{\\cal T}^{\\alpha} \n&= \\delta_{\\alpha,z}\n\\frac{2}{45\\pi}\nn_{{\\rm so}} {\\lambda_{\\rm so}}^2 \\frac{e}{m^2}\n(3\\nabla\\cdot\\dot{{\\bm A}}+\\nabla_z\\dot{A}_z)\n\\nonumber\\\\\n&\\times \n\\sum_{{\\bm k}\\kv'}\nk^2 (k')^4 \n\\sum_{\\sigma=\\pm} \\sigma \\green{{\\bm k},\\sigma}{{\\rm r}} \n\\green{{\\bm k}',-\\sigma}{{\\rm r}} (\\green{{\\bm k}',-\\sigma}{{\\rm a}})^2 \n+{\\rm c.c.},\n\\label{Tauzdominant}\n\\end{align}\nwhere $\\green{{\\bm k}\\sigma}{{\\rm r}}$ and $\\green{{\\bm k}\\sigma}{{\\rm a}}$ are the retarded and advanced electron Green's functions, respectively, carrying the wave vector ${\\bm k}$ and spin $\\sigma$ with zero frequency.
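\nHere, with the normal impurity scattering and the spin-orbit scattering treated in the Born approximation (the same approximation behind the lifetime given below), the averaged Green's functions at zero frequency take the standard form\n\\begin{align}\n\\green{{\\bm k}\\sigma}{{\\rm r}}\n=\\left(\\green{{\\bm k}\\sigma}{{\\rm a}}\\right)^{*}\n=\\frac{1}{-\\epsilon_{{\\bm k}\\sigma}+\\frac{i}{2\\tau_\\sigma}}.\n\\end{align}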
\nAs we see, only the $z$ component of the torque is finite.\nThe lifetime $\\tau_\\sigma$ in these Green's functions arises from the self-energy process due to the normal impurities and the spin-orbit interaction.\nThe inverse lifetime for the electron with spin $\\sigma(=\\pm)$ is given as\n\\begin{align}\n{\\tau_\\sigma}^{-1}\n=\n2\\pi n_{\\rm imp}v_{\\rm imp}^2 \\nu_\\sigma\n(1+\\kappa_{z,\\sigma}+\\kappa_{\\perp} \\gamma_\\sigma),\n\\end{align}\nwhere $\\nu_\\sigma$ is the spin-resolved electron density of states, $n_{\\rm imp}$ and $v_{\\rm imp}$ are the concentration and the potential strength of the impurities,\n$\\kappa_{z,\\sigma}\\equiv \\frac{1}{3}\\frac{n_{{\\rm so}}{\\lambda_{\\rm so}}^2}{n_{\\rm imp}v_{\\rm imp}^2}k_{F\\sigma}^4 $\nand \n$\\kappa_{\\perp} \\equiv \\frac{2}{3}\\frac{n_{{\\rm so}}{\\lambda_{\\rm so}}^2}{n_{\\rm imp}v_{\\rm imp}^2}k_{F+}^2 k_{F-}^2 $\nare dimensionless ratios of the spin-orbit interaction to the normal impurity scattering\n($k_{F\\sigma}$ is the spin-dependent Fermi wave number), and \n$\\gamma_\\sigma\\equiv\\frac{\\nu_{-\\sigma}}{\\nu_\\sigma}$.\nThe total relaxation torque is therefore given by \\Eqref{TauE},\n\\begin{align}\n{\\cal T}^z = \\gamma (\\nabla\\cdot {\\bm E})\n+\\delta\\gamma(\\partial_z E_z),\n\\end{align}\nwhere \n\\begin{align}\n\\gamma\n&\\equiv\n \\frac{8\\pi e}{15m^2}\nn_{{\\rm so}} {\\lambda_{\\rm so}}^2 \n\\dos_{+} \\dos_{-} k_{F+}^2 k_{F-}^2 \n\\sum_{\\sigma=\\pm} \\sigma k_{F\\sigma}^2\\tau_{\\sigma}^2\n\\end{align}\nand $\\delta\\gamma\\equiv \\gamma\/3$.\nThe parameter $\\gamma$ is proportional to the spin flip rate due to the spin-orbit interaction.\nOur result indicates that the \nrelaxation torque is zero in a uniform ferromagnet when a uniform electric field is applied.\nThus the spin current does not decay in this case.\nIn fact, the static solution of \\Eqref{sconteq} with ${\\cal T}^z=0$ is \n$j_{\\rm s}^z={\\rm constant}$. \n\nThe degree of the asymmetry, $\\delta\\gamma\/\\gamma$, is not universal but is model dependent.\nFor instance, in the case of a junction with weak electron hopping at point-like leads, $\\delta \\gamma$ vanishes \\cite{Takezoe10}. \n\n\n\n\n\\section{Spin current}\n\nWe here calculate the spin current within the same formalism.\nWithin the linear response theory, the spin current is calculated by estimating the Feynman diagrams shown in Figs. 
\\ref{FIGspincurrent} and \\ref{FIGdiffusion}, which correspond to the field-induced contribution and the effect of the diffusive electron motion, respectively.\n\n\n\\begin{figure}[tbh]\n\\begin{center}\n\\includegraphics[width=0.6\\hsize]{spincurrentdiag6.eps}\n\\caption{ \nFeynman diagrams representing the local contribution to the spin current\n(the first two terms in \\Eqref{jsexpression2}).\nThe first three diagrams correspond to $ \\jspin{}{{\\rm (n)}}$, and the last two diagrams represent the contribution from the anomalous velocity,\n $\\jspin{}{{\\rm so}}$.\nDotted and double dashed lines denote the interaction with the applied electric field and the spin-orbit interaction, respectively.\nThe vertex marked by a cross represents the spin current.\n\\label{FIGspincurrent}\n}\n\\end{center}\n\\end{figure}\n\\begin{figure}[tbh]\n\\begin{center}\n\\includegraphics[width=0.65\\hsize]{spincurrentdiffusive3.eps}\n\\caption{ \nLeft: The vertex correction contribution to $j_{\\rm s}^{{\\rm (n)},z}$,\nresulting in the diffusive spin current\n(the last term of \\Eqref{jsexpression2}).\nRight:\n$\\Gamma_{\\sigma\\sigma'}$, which is a ladder process of the successive electron scattering by the normal impurity and the spin-orbit interaction (represented by thick dotted lines)\nconnecting the spin indices $\\sigma$ and $\\sigma'$.\n\\label{FIGdiffusion}\n}\n\\end{center}\n\\end{figure}\n\n\nWe first estimate the normal part of the spin current, \n$\\jspin{,i}{{\\rm (n)},z}$ (\\Eqref{spincurrentsdefs}), shown in the first two diagrams in Fig. \\ref{FIGspincurrent}.\nThe contribution $\\jspin{,i}{{\\rm (n)},z}$ is defined including the anomalous contribution from the electromagnetic gauge field, $\\jspin{,i}{A,z}$. \nIts dominant contribution in the limit of small $\\Omega$ and small $q$ is calculated as \n(see Appendix \\ref{SEC:appB} for details)\n\\begin{align}\n\\jspin{,i}{{\\rm (n)},z} \n&=\n -i\\frac{1}{2\\pi} \\frac{e^2}{m^2}\n\\sum_{{\\bm k}{\\bm q}}e^{-i{\\bm q}\\cdot{\\bm x}}\\int\\!\\frac{d\\Omega}{2\\pi} e^{i\\Omega t}\n\\sum_{j}\n \\Omega A_j({\\bm q},\\Omega) \n\\sum_{\\sigma} \\sigma \\left[\nk_i k_j\\green{{\\kv}-\\frac{{\\bm q}}{2},\\sigma}{{\\rm r}} \\green{{\\kv}+\\frac{{\\bm q}}{2},\\sigma}{{\\rm a}} \n\\right.\n\\nonumber\\\\\n&\n\\left. +\n\\sum_{{\\bm k}'\\sigma'}\nk_i k'_j\n\\green{{\\kv}-\\frac{{\\bm q}}{2},\\sigma}{{\\rm r}} \\green{{\\kv}+\\frac{{\\bm q}}{2},\\sigma}{{\\rm a}} \n\\green{{\\kvp}-\\frac{{\\bm q}}{2},\\sigma'}{{\\rm r}} \\green{{\\kvp}+\\frac{{\\bm q}}{2},\\sigma'}{{\\rm a}} \nn_{\\rm imp}v_{\\rm imp}^2 \\Gamma_{\\sigma\\sigma'}({\\bm q},\\Omega)\n\\right].\n\\label{spincurrent1}\n\\end{align}\nIn \\Eqref{spincurrent1}, the first term is the contribution shown on the left of Fig. \\ref{FIGspincurrent},\nand the second term is the contribution from the vertex correction (Fig. \\ref{FIGdiffusion}).\n\nThe factor $\\Gamma_{\\sigma\\sigma'}({\\bm q},\\Omega)$ contains all the \nvertex corrections due to the normal impurities and the spin-orbit interaction shown in Fig. \\ref{FIGdiffusion}.\nThe equation of motion for $\\Gamma_{\\sigma\\sigma'}$ is derived in the same manner as in Ref. \\cite{Hikami80}, where it was carried out in the context of quantum corrections (the diffusion without the spin-orbit interaction was considered in Ref. \\cite{Ban09}).
\nThe equation is obtained as\n\\begin{align}\n\\Gamma_{\\sigma\\sigma} \n&= (1+\\kappa_z)(1+\\Pi_\\sigma\\Gamma_{\\sigma\\sigma})\n+\\kappa_\\perp \\Pi_{-\\sigma}\\Gamma_{-\\sigma,\\sigma}\n\\nonumber\\\\\n\\Gamma_{\\sigma,-\\sigma} \n&= \\kappa_\\perp(1+\\Pi_{-\\sigma}\\Gamma_{-\\sigma,-\\sigma})\n+(1+\\kappa_z) \\Pi_{\\sigma}\\Gamma_{\\sigma,-\\sigma},\n\\label{Gammaeqs}\n\\end{align}\nwhere \n\\begin{align}\n\\Pi_\\sigma({\\bm q},\\Omega)\n&\\equiv n_{\\rm imp}v_{\\rm imp}^2 {\\sum_{\\kv}} \n \\green{{\\kv}-\\frac{{\\bm q}}{2},\\sigma}{{\\rm r}} \\green{{\\kv}+\\frac{{\\bm q}}{2},\\sigma}{{\\rm a}} \\nonumber\\\\\n&\\simeq \n [1-(D_\\sigma q^2 \\tau_\\sigma+\\kappa_z+\\kappa_\\perp\\gamma_\\sigma)],\n\\label{Pires}\n\\end{align}\nwhere $D_\\sigma \\equiv \\frac{(k_{F\\sigma})^2}{3m^2}\\tau_\\sigma$ is the diffusion constant.\nNeglecting quantities of order of $(\\kappa_z,\\kappa_\\perp)^2$, \\Eqref{Gammaeqs} is solved as\n\\begin{align}\n\\Gamma_{\\sigma\\sigma} \n&= \\frac{1+\\kappa_z-(1+2\\kappa_z)\\Pi_{-\\sigma}}\n{[1-(1+\\kappa_z)\\Pi_+][1-(1+\\kappa_z)\\Pi_-]}\n\\nonumber\\\\\n\\Gamma_{\\sigma,-\\sigma} \n&= \\frac{\\kappa_\\perp}\n{[1-(1+\\kappa_z)\\Pi_+][1-(1+\\kappa_z)\\Pi_-]}.\n\\label{Gammaeqs2}\n\\end{align}\nUsing \\Eqref{Pires}, we obtain $\\Gamma_{\\sigma\\sigma'}$ as (assuming the rotational symmetry for the wave vectors when averaging over the spin-orbit potential)\n\\begin{align}\n\\Gamma_{\\sigma\\sigma} \n&= \\frac{1}\n{D_\\sigma q^2 \\tau_\\sigma+\\kappa_\\perp\\gamma_\\sigma}\n\\nonumber\\\\\n\\Gamma_{\\sigma,-\\sigma} \n&= \\frac{\\kappa_\\perp}\n{[D_+ q^2 \\tau_++\\kappa_\\perp\\gamma_+]\n [D_- q^2 \\tau_-+\\kappa_\\perp\\gamma_-]}.\n\\label{Gammaeqs3}\n\\end{align}\nBy use of \\Eqref{Gammaeqs3} and summing over the wave vectors,\nthe normal spin current, \\Eqref{spincurrent1}, reads\n\\begin{align}\n\\jspin{,i}{{\\rm (n)},z} \n&=\n\\sigma_{{\\rm s}}^0 E_i\n- \\nabla_i \\mu_{\\rm s},\n\\label{jsexpression0}\n\\end{align}\nwhere \n$\\sigma_{{\\rm s}}^0 \\equiv e \\sum_{\\pm} (\\pm) D_\\pm \\nu_\\pm$ is the bare spin conductivity divided by $e$.\nThe first term of \\Eqref{jsexpression0} is the field-driven contribution.\nThe second gradient term is a diffusive contribution (vertex corrections), arising from the spin accumulation.\nThe effective potential describing the spin accumulation, $\\mu_{\\rm s}$, \nreads\n\\begin{align}\n\\mu_{\\rm s} \\equiv\n\\int\\! {d^3x}' \\chi({\\bm x}-{\\bm x}') (\\nabla\\cdot {\\bm E})({\\bm x}'), \\label{muspin}\n\\end{align}\nwhere $\\chi$ is a correlation function arising from the electron diffusion, given as ($V$ is the system volume)\n\\begin{align}\n\\chi({\\bm x})\n&\\equiv \n- \\sum_{\\pm} (\\pm) \\sigma_{\\pm}\n\\frac{1}{V}{\\sum_{\\qv}} \\frac{e^{-i{\\bm q}\\cdot{\\bm x}}}\n{q^2+(\\ls{,\\pm})^{-2}}. 
\\label{chidef}\n\\end{align}\nHere\n$\\sigma_\\pm \\equiv e D_\\pm \\nu_\\pm$ is the spin-resolved Boltzmann conductivity divided by $e$, and the correlation length is given as\n\\begin{align}\n\\ls{,\\sigma} \n= \\sqrt{D_\\sigma\\tau_{{\\rm s},\\sigma}}.\n\\end{align}\nThe lifetime of the spin $\\sigma$ electron reads \n$\\tau_{{\\rm s},\\sigma}\n\\equiv {\\tau_\\sigma}\/{(\\kappa_{\\perp} \\gamma_\\sigma)}$.\nDefining $\\mu_{\\rm s}=\\mu_+-\\mu_-$, \nwe see that the spin-resolved effective potential satisfies\n\\begin{align}\n(-\\nabla^2 + (\\ls{,\\sigma})^{-2})\n\\mu_\\sigma \n= -\\sigma_\\sigma (\\nabla\\cdot{\\bm E})\n.\\label{mudiffusioneq}\n\\end{align}\n\n\nIn three dimensions, the correlation function reads \n\\begin{align}\n\\chi({\\bm x})=\\frac{1}{4\\pi |{\\bm x}|}\\sum_{\\pm} (\\pm){\\sigma_{\\pm}}e^{-|{\\bm x}|\/\\ls{,\\pm}}.\n\\end{align}\n\n\nThe local part of the spin current arises also from the anomalous spin current due to the spin-orbit interaction, defined in \\Eqref{jssodef}.\nThis contribution is calculated by evaluating the last two diagrams in Fig. \\ref{FIGspincurrent}\nas (see Appendix \\ref{SEC:appB})\n\\begin{align}\n\\jspin{,i}{{\\rm so},z} \n&=\n\\delta \\sigma_{{\\rm s}} (1-\\delta_{i,z}) E_i,\n\\label{jssores2}\n\\end{align}\nwhere\n\\begin{align}\n\\delta \\sigma_{{\\rm s}} \n&\\equiv \n\\frac{\\pi}{9}\nn_{{\\rm so}}{\\lambda_{\\rm so}}^2 \\frac{e}{m} \n\\sum_{\\pm}(\\pm) (k_{F\\pm})^4 (\\nu_{\\pm})^2 \\tau_\\pm .\n\\label{jssores3}\n\\end{align}\nThis spin-orbit correction to the spin conductivity is anisotropic, resulting in a spin version of the anisotropic magnetoresistance (AMR) effect, namely, the spin AMR effect. \n\n\nFrom Eqs.~(\\ref{spincurrent1}), (\\ref{Gammaeqs3}), and (\\ref{jssores2}), the leading contribution to the spin current for small ${\\bm q}$ and $\\Omega$ is obtained as the sum of the local part driven by the electric field and the diffusive part as\n\\begin{align}\n\\bm{j}_{{\\rm s}}^{z}=\\sigma_{{\\rm s}} {\\bm E} -\\delta\\sigma_{\\rm s} ({\\bm n}\\cdot{\\bm E}) {\\bm n} \n- \\nabla \\mu_{\\rm s} , \n\\label{jsexpression2}\n\\end{align}\nwhere \n$\\sigma_{{\\rm s}} \\equiv \\sigma_{{\\rm s}}^{0} +\\delta \\sigma_{{\\rm s}} $ and ${\\bm n}$ is the unit vector along the magnetization.\nIn terms of the angle $\\theta$ defined by\n$\\cos\\theta\\equiv ({\\bm n}\\cdot{\\bm E})\/E$, the magnitude of the field-driven (local) current reads\n\\begin{align}\nj_{{\\rm s}}^{{\\rm loc},z}=E\\sqrt{ (\\sigma_{{\\rm s}\\parallel})^2 \n + ((\\sigma_{{\\rm s}\\perp })^2-(\\sigma_{{\\rm s}\\parallel})^2) \\sin^2\\theta},\n\\label{jsmag}\n\\end{align}\nwhere $\\sigma_{{\\rm s}\\parallel}\\equiv \\sigma_{{\\rm s}}^{0}$ and $\\sigma_{{\\rm s}\\perp}\\equiv \\sigma_{{\\rm s}}$.\nWhen the degree of the anisotropy is small, the spin current becomes \n\\begin{align}\n\\frac{j_{{\\rm s}}^{{\\rm loc},z}}{E}\n=\\sigma_{{\\rm s}\\parallel}\\left(1+ \n \\frac{1}{2}\\left( \\left( \\frac{\\sigma_{{\\rm s}\\perp }}{\\sigma_{{\\rm s}\\parallel}} \\right)^2 -1 \\right) \\sin^2\\theta \\right).\n\\label{jsmag2}\n\\end{align}\nWe define the magnitude of the spin AMR as \n\\begin{align}\n\\frac{\\Delta\\rho_{{\\rm s}}}{\\rho_{{\\rm s}\\perp}}\n& \\equiv \\frac{\\rho_{{\\rm s}\\parallel}-\\rho_{{\\rm s}\\perp}} {\\rho_{{\\rm s}\\perp}} \n= \\frac{ \\delta \\sigma_{{\\rm s}} \/ \\sigma_{{\\rm s}} }{1-\\delta \\sigma_{{\\rm s}} \/ \\sigma_{{\\rm s}}},\n\\end{align}\nwhere $\\rho_{{\\rm s}\\alpha} \\equiv (\\sigma_{{\\rm s}\\alpha})^{-1}$ 
($\\alpha=\\parallel,\\perp$).\n\n\n\n\n\n\n\n\\section{Spin injection}\n\nWe have thus derived the explicit expression for the spin chemical potential within the linear response theory.\nLet us apply \\Eqref{muspin} to a ferromagnetic-normal metal junction with an insulating barrier, used in the nonlocal spin injection experiments \\cite{Kimura07}, depicted in Fig. \\ref{FIGspininjection}(a).\nWhen the voltage is applied perpendicular to the interface (we choose the $x$ axis in this direction), the electric field is uniform inside the ferromagnet and the normal metal except at the interface. \nDenoting the voltage drop at the interface (chosen at $x=0$) by $V_{\\rm FN}$, we obtain \n\\begin{align}\n\\nabla\\cdot{\\bm E} \\simeq \\delta(x) V_{\\rm FN}\/d,\n\\end{align}\n where $d$ is the width of the interface, which is treated as small compared with the electron mean free path, resulting in the delta function in $\\nabla\\cdot{\\bm E}$. \nIn totally unpolarized non-magnetic metals, namely, if $\\sigma_+=\\sigma_-$ and $D_+=D_-$, the correlation function in \\Eqref{chidef} always vanishes.\nAs naively expected, spin injection therefore requires an effective spin polarization close to the interface, induced by the exchange interaction with the ferromagnet.\nThis spin polarization is expected to be localized within a short distance of a few lattice constants from the interface. \nLet us approximate the interface polarization by introducing a spin-dependent diffusion constant and density of states, $ \\overline{D}_\\sigma$ and $\\overline\\nu_\\sigma$, respectively, at the interface.\nThe long-range behavior of the spin correlation function on the non-magnetic side is then obtained as\n\\begin{align}\n\\chi^{\\rm (N)}({\\bm x}) &=\n-\\frac{e}{4\\pi} (\\sum_\\sigma \\sigma \\overline{D}_\\sigma\\overline\\nu_\\sigma)\\frac{e^{-|{\\bm x}|\/\\ls{}}}{|{\\bm x}|}, \n\\end{align}\nwhere $\\ls{}$ is the spin diffusion length in the normal metal \n( $\\ls{}$ is a long ($\\sim \\mu$m) length scale and thus does not depend on the spin).\nWe therefore obtain from \\Eqref{muspin} the chemical potential as\n\\begin{align}\n\\mu_{\\rm s}^{\\rm (N)} ({\\bm x}) &=\n\\frac{q_{\\rm s} }{4\\pi |{\\bm x}|}e^{-|{\\bm x}|\/\\ls{}},\n\\end{align}\n where \n$q_{\\rm s}\\equiv \n(\\sum_\\sigma \\sigma \\overline{D}_\\sigma \\overline\\nu_\\sigma) eV_{\\rm FN}A_{\\rm FN}\/d\n$\n is the spin accumulation rate at the interface (per unit time), and $A_{\\rm FN}$ is the area of the junction.\nThis result for $\\mu_{\\rm s}$ is consistent with intuitive and phenomenological results for spin injection in the perpendicular structure shown in Fig. \\ref{FIGspininjection}(a). \nIn contrast, when the voltage is applied parallel to the ideal interface as shown in Fig. \\ref{FIGspininjection}(b), spin injection does not occur since $\\nabla\\cdot{\\bm E}=0$ at the interface and thus $ \\mu_{\\rm s}^{\\rm (N)}=0$.\n\\begin{figure}[tbh]\n\\begin{center}\n\\includegraphics[width=0.5\\hsize]{spininjection.eps}\n\\caption{ \n(a) Creation of diffusive spin current (spin injection) by applying \nthe electric voltage perpendicular to the F-N interface.
\n(b) When the voltage is applied parallel to an ideal interface, no spin current is induced since $\\nabla\\cdot{\\bm E}=0$.\n\\label{FIGspininjection}\n}\n\\end{center}\n\\end{figure}\n\n\n\n\n\n\n\n\n\\section{Total torque and asymmetric $\\beta$ term}\n\nThe continuity equation (\\ref{sconteq}) indicates that the spin polarization (magnetization) changes due to the spin relaxation.\n(A change of the magnetization magnitude is a feature of itinerant magnetism.)\nBy use of Eqs.~(\\ref{TauE}), (\\ref{mudiffusioneq}), and (\\ref{jsexpression2}), we see that \\Eqref{sconteq}\nresults in\n\\begin{align}\n\\dot{s}^z \n&= \\gamma_{\\tau} \\nabla\\cdot{\\bm E}+\\delta \\gamma_{\\tau} \\nabla_z E_z\n+\\sum_{\\pm}(\\pm)\\frac{\\mu_{\\pm}}{(\\ls{,\\pm})^2}, \\label{dotsz}\n\\end{align}\nwhere \n$\\gamma_{\\tau} \\equiv \\gamma-\\delta \\sigma_{{\\rm s}}$\nand \n$\\delta \\gamma_{\\tau} \\equiv \\delta\\gamma+\\delta \\sigma_{{\\rm s}}$.\nThe general case with uniform magnetization along any unit vector ${\\bm n}$ is given by (${{\\bm s}}\\equiv s{\\bm n}$) \n\\begin{align}\n\\dot{{{\\bm s}}} \n=\n{\\bm n} \\left( \\gamma_{\\tau} \\nabla\\cdot{\\bm E} + \\delta \\gamma_{\\tau} \\nabla_\\parallel E_\\parallel\n+\\sum_{\\pm}(\\pm)\\frac{\\mu_{\\pm}}{(\\ls{,\\pm})^2} \\right), \\label{dotsnv}\n\\end{align}\nwhere $E_\\parallel\\equiv {\\bm n}\\cdot{\\bm E}$ and $\\nabla_\\parallel\\equiv {\\bm n}\\cdot\\nabla$.\n\n\nIn addition to the change of the magnitude, \\Eqref{dotsnv}, there is a torque, which is perpendicular to ${\\bm n}$.\nSuch a torque arises when the magnetization is not homogeneous, and plays important roles in current-induced magnetization dynamics. \nWe have carried out the calculation of the current-induced torque done in Ref. \\cite{TE08} on the same footing as the derivation of \\Eqref{TauE}.\nAs a result, we found that the $\\beta$ term becomes asymmetric as (see Appendix \\ref{APP:torque} for details of the calculation)\n\\begin{align}\n{\\cal T}^{(\\beta),\\alpha}_{{\\rm so}} \n& =\n-\\frac{P}{es^2} \n[\\beta {{\\bm s}}\\times (\\bm{j}\\cdot\\nabla){{\\bm s}} \n+\\delta \\beta {{\\bm s}}\\times (j_\\parallel \\nabla_\\parallel){{\\bm s}}]^\\alpha,\n\\end{align} \nwhere $j_\\parallel\\equiv {\\bm n}\\cdot\\bm{j}$ is the current along the local magnetization \nand $\\delta \\beta\/\\beta=-1\/5$ in the present model.\nThe spin transfer torque due to the spin-orbit interaction is thus different from that due to the spin-flip scattering.\nThe expression of the total torque allowing for the spatially varying current density and the magnetization is therefore obtained as \n\\begin{align}\n\\dot{{{\\bm s}}} \n&= \n-\\frac{P}{2e}\n(\\nabla\\cdot\\bm{j}){\\bm n} \n-\\frac{P}{e}\\left[ \\beta {\\bm n}\\times (\\bm{j}\\cdot\\nabla){\\bm n}\n+\\delta \\beta {\\bm n}\\times (j_\\parallel \\nabla_\\parallel){\\bm n}\n\\right]\n\\nonumber\\\\\n& \n+ {\\bm n}\n\\left(\n\\gamma_{\\tau} (\\nabla\\cdot{\\bm E})\n+\\delta\\gamma_{\\tau} (\\partial_\\parallel E_\\parallel)\n+\\sum_{\\pm} (\\pm) (\\ls{,\\pm})^{-2}\\mu_\\pm\n\\right).\n\\label{torquesum}\n\\end{align}\nThis expression clearly demonstrates that the spin relaxation torque requires some inhomogeneity either of the applied current or of the spin structure, in addition to the spin-orbit (or spin flip) interaction.\nA totally homogeneous system does not relax.\nThe last term in Eq. 
(\\ref{torquesum}) gives useful information for measuring the spin accumulation induced by the spin current.\n\n\n\n\n\n\n\\section{Conclusion}\n\nWe have carried out a microscopic calculation of the spin relaxation torque and the spin current induced in disordered ferromagnetic metals by the applied electric field. \nThe spin-orbit interaction arising from the random impurities is included as a source of spin relaxation, and inhomogeneity of the applied electric field is taken into account.\nWe found that the spin relaxation torque in the uniform magnetization case is written as a divergence of the electric field plus an anisotropic term. \nThe spin current was shown to be made up of field-driven (local) and diffusive (nonlocal) contributions, the latter written as a gradient of a spin chemical potential.\nWe have derived a general linear response expression for the spin chemical potential.\nThe spin injection effect was briefly discussed based on our results.\nApplying the analysis to the inhomogeneous magnetization case, we argued that the $\\beta$ torque in current-induced magnetization dynamics can be anisotropic.\n\n\nBefore finishing, we emphasize that the expressions for the spin current and $\\mu_{\\rm s}$ are meaningless without specifying the physical observable to be measured.\nIn the inverse spin Hall effect, which was originally proposed as \\cite{Hirsch99,Takahashi08,Saitoh06}\n$j_\\mu \\propto \\epsilon_{\\mu\\nu\\rho}\\jspin{,\\nu}{\\rho}$, \nit has recently been demonstrated that the charge current is not directly proportional to the spin current \\cite{Hosono10,Takeuchi10}.\nSolving for the spin current alone therefore does not provide physical information. \n\n\n\n\n\n\n\n\\section*{Acknowledgment}\nThe authors thank E. Saitoh, J. Shibata, H. Kohno, and S. Murakami for valuable discussions. \nThis work was supported by a Grant-in-Aid for Scientific Research in Priority Areas, "Creation and control of spin current" (1948027), the Kurata Memorial Hitachi Science and Technology Foundation and the Sumitomo Foundation. \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\\label{sec:intro}\nThe optimal allocation of communication resources is part of the constant race for better performance in mobile communications.\nAs new technologies such as high-resolution video streaming and vehicular communications emerge, the demands to be fulfilled by a scheduler are becoming increasingly heterogeneous, resulting in a complex optimization task that must be solved in real time.\n\nIn recent years, deep learning~(\\deeplearning) methods have distinguished themselves with successes in complex tasks, including image classification~\\cite{krizhevsky2012imagenet} and control~\\cite{mnih2015human}, among others.\n\\deeplearning, as a subset of machine learning~(\\machinelearning), follows a different paradigm compared to classic model-based approaches. Instead of designing an optimal algorithm for a given goal, an algorithm is iteratively approximated based on training data.
Without the need to explicitly model the underlying processes, the issue of modeling complexity is sidestepped.\nThis property has sparked interest in research applying \\deeplearning to the highly performance-driven field of communication systems.\n\nIn particular, increasing the performance and capabilities of mobile communication systems opens up new prospects in emergency patient care.\nFor example, video, vitals, or specialist input can be exchanged wirelessly between on-site medical professionals and hospital staff, yielding a significant head start in time-sensitive treatments~\\cite{zanatta2016pre}.\nNevertheless, a deliberate approach is required due to the low tolerance for delays and outliers in medical tasks.\n\nVarious forays have been made into integrating \\deeplearning methods with communications resource allocation tasks, mostly based on deep Q-Networks~(\\dqn)~\\cite{joseph2019towards, ye2019deep, al2020learn} and actor-critic methods~\\cite{huang2020deep}.\nHowever, model-based approaches have long thrived thanks to the valuable expert knowledge that shapes their design. Ideally, incorporating this expert knowledge into data-based methods could govern and stabilize the learning process and produce more tractable algorithms~\\cite{shlezinger2020model}.\n\nTherefore, we investigate a combined approach that offers some of the advantages of both \\deeplearning and model-based approaches.\nFor the task of scheduling discrete resources in vehicle-to-base-station communication, we implement a group of simple, model-based scheduling algorithms.\nWe then train a \\dqn to select the best model-based algorithm in a given situation, optimizing the long term effect on typical performance indicators: time-outs, sum capacity, and packet rate.\nParticular attention is paid to the performance of Emergency Vehicles~(\\textsc{\\small \\emergencyvehicle}\\xspace) as a priority class of users.\n\n\n\n\\section{Setup \\& Notation}\nIn the following, we introduce our system model. We describe the communication channel between user and base station, the resources available for allocation, and the generation of jobs. We then formulate the resource allocation problem.\n\n\\subsection{Resource Allocation System Model}\nIn our system model, a number of \\( \\numusers \\)~user vehicles are connected to a base station, moving on a 2D plane in a grid akin to the Manhattan-model of movement~\\cite{aschenbruck2008survey}.\nAt each discrete time step~\\( \\timeindex \\), each vehicle~\\( \\userindex \\) takes a fixed-size step in a randomly drawn direction of movement, with a \\SI{98}{\\percent} chance to re-select its prior direction of movement, a \\SI{0}{\\percent} chance to select the direction opposite to its prior movement, and a uniform chance to turn \\ang{90} left, turn \\ang{90} right, or stop.\n\nThe communication channel between base station and user vehicle~\\( \\userindex \\) is characterized by its power gain\n\\begin{align}\n\t\\channelquality_\\userindex[\\timeindex]\n\t=\n\t| \\tilde{\\channelquality}_{\\userindex}[\\timeindex] |^{\\num{2}} \\cdot \\pathloss_{\\userindex}[\\timeindex],\n\\end{align}\nknown to the base station.
For each simulation time step~\( \timeindex \), fading channel amplitudes~\( | \tilde{\channelquality}_{\userindex}[\timeindex] | \) are randomly drawn from a {Rayleigh distribution} and multiplied with a distance-proportional path loss factor\n\begin{align}\n\t\pathloss_\userindex[\timeindex]\n\t=\n\t\min\left(\n\t1,\,\left(\distanceuser_\userindex[\timeindex]\right)^{\num{-1}}\n\t\right),\n\end{align}\nwhere \( \distanceuser_{\userindex}[\timeindex] \)~is the vehicle's distance from the base station.\nThe path loss factor ensures a degree of correlation of the power gain~\( \channelquality_{\userindex}[\timeindex] \) over time.\nWhile exponents of~\( \num{-2} \) or lower are more typically assumed in modeling real-life path loss, we selected an exponent of~\( \num{-1} \) to reduce the spread of path loss values encountered. For the purpose of this paper, this lessens the computation required in the subsequent learning task.\n\nThe base station has access to a limited number~\( \numresourcesavailable \) of discrete resource blocks available for allocation, as shown in \reffig{fig:resourceblocks}.\nThese resources can be filled with jobs~\( \jobindex \) from the job queue.\nAt each time step~\( \timeindex \), new jobs are generated with a probability~\( \probability_{\jobindex} \) per user.\nA job~\( \jobindex \) is assigned to a specific user~\( \userindex \) and is defined by two attributes, a request size~\( \resourceblock_{\jobindex,\,\text{req}}[\timeindex] \) in discrete resource blocks, and a time-to-timeout~\( \timetotimeout_{\jobindex}[\timeindex] \) in discrete simulation steps.\nWe define the set~\( \jobsset[\timeindex] \) of all jobs in the job queue in time step~\( \timeindex \), and the set~\( \jobsset_{\userindex}[\timeindex] \) of jobs in queue assigned to user~\( \userindex \) in~\(\timeindex \).\n\nThe job attributes' initial values are governed by a user profile that declares a maximum job size~\( \resourceblock_{\userindex,\,\text{max}} \) and initial time-to-timeout~\( \timetotimeout_{\userindex,\,\text{init}} \) for users~\( \userindex \) of that profile.\nAt generation, the job is assigned a size~\( \resourceblock_{\jobindex,\,\text{req}}[\timeindex] \) drawn from a discrete uniform distribution, \( \n\t{\resourceblock_{\jobindex,\,\text{req}}[\timeindex] \sim \mathbb{U}\{1,\, \resourceblock_{\userindex,\,\text{max}}\}} \n\). When a number of the base station's resources~\( \numresourcesavailable \) is allocated to a job~\( \jobindex \), that job's remaining size is decreased accordingly, while a lifetime count~\( \resourceblock_{\userindex,\,\text{sx}}[\timeindex] \) of resources scheduled to user~\( \userindex \) is increased.\nThe job's initial time-to-timeout~\( {\n\t\timetotimeout_{\jobindex}[\timeindex] \gets \timetotimeout_{\userindex,\,\text{init}}\n} \) is decremented for each time step~\( \timeindex \) that passes without the job being fully scheduled. Once the time-to-timeout~\( \timetotimeout_{\jobindex}[\timeindex] \) reaches zero, the job is discarded from the job queue and added to a set~\( \jobsset_{\text{fail}}[\timeindex] \) for that time step~\( \timeindex \), for use in performance metric calculation.
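\n\nAs an illustration, the following minimal Python sketch reproduces the channel and job generation steps described above for a single simulation step; all names are illustrative placeholders, and details are simplified compared to the full implementation~\cite{gracla2020code}.\n\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng()\n\ndef power_gain(distance):\n    # squared Rayleigh fading amplitude times the\n    # distance-proportional path loss min(1, 1\/d)\n    amplitude = rng.rayleigh(scale=1.0)\n    path_loss = min(1.0, 1.0 \/ distance)\n    return amplitude**2 * path_loss\n\ndef generate_jobs(user_profiles, p_job=0.2):\n    # one new job per user with probability p_job; size\n    # uniform in {1, ..., r_max}, timeout from the profile\n    jobs = []\n    for user, (r_max, t_init) in enumerate(user_profiles):\n        if rng.random() < p_job:\n            size = int(rng.integers(1, r_max + 1))\n            jobs.append({'user': user, 'size': size,\n                         'ttl': t_init})\n    return jobs\n\n# example: five 'Normal' users (r_max=30, t_init=20)\nprofiles = [(30, 20)] * 5\nprint(generate_jobs(profiles))\n\end{verbatim}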
\n\n\begin{figure}[tb]\n\t\centering\n\t\input{figures\/adaptiveresourceblocks.pgf}\n\t\caption\n\t{%\n\t\tJobs consisting of a number of discrete resource blocks arrive in the job queue.\n\t\tA scheduler is tasked with assigning them to a limited number~\( \numresourcesavailable \) of resources.\n\t}%\n\t\label{fig:resourceblocks}\n\end{figure}\n\n\subsection{Problem Statement}\nA scheduler is tasked with assigning the limited number~\( \numresourcesavailable \) of resource blocks to the jobs in queue.\nThree metrics are selected to gauge the performance of a scheduling algorithm:\n\begin{enumerate}\n\t\item \( \reward_L[\timeindex] \): Resource blocks discarded due to timeout\n\t\item \( \reward_P[\timeindex] \): Global packet rate achieved\n\t\item \( \reward_C[\timeindex] \): Global channel capacity achieved\n\end{enumerate}\n\nFirstly, the \emph{sum of resource blocks discarded} due to timing out\n\begin{align}\n\t\reward_L[\timeindex]\n\t=\n\t\sum_{\jobindex \in \jobsset_{\text{fail}}[\timeindex]} \resourceblock_{\jobindex,\,\text{req}}[\timeindex] \n\end{align}\nfor all users should be minimized.\n\nSecondly, a packet rate is introduced as the lifetime ratio of resources requested and scheduled for a user~\( \userindex \).\nWe define the set~\( \jobsset_{\userindex,\,\text{new}}[\timeindex] \) as the set of new jobs assigned to user~\( \userindex \), generated in time step~\( \timeindex \). Using the lifetime sum of discrete resources~\( \resourceblock_{\userindex,\,\text{sx}}[\timeindex] \) scheduled to jobs of user~\( \userindex \), we calculate the \emph{sum packet rate} over all users:\n\begin{align}\n\t\reward_P[\timeindex]\n\t= \t\sum_{\userindex=\num{1}}^{\numusers}\n\t\t\frac{\n\t\t\t\resourceblock_{\userindex,\,\text{sx}}[\timeindex]\n\t\t}{\n\t\t\t\sum_{\tilde{\timeindex}=\num{1}}^{\timeindex}\n\t\t\t\sum_{\jobindex \in \jobsset_{\userindex,\,\text{new}}\left[\tilde{\timeindex}\right]}\n\t\t\t\resourceblock_{\jobindex,\,\text{req}}\left[\tilde{\timeindex}\right]\n\t\t}\n\t= \t\sum_{\userindex=\num{1}}^{\numusers}\n\t\t\packetrate_{\userindex}[\timeindex]\n\t.\n\end{align}\nA strong sum packet rate performance is achieved by a scheduler that does not neglect any single user.\n\nLast, the \emph{sum rate capacity} achieved by each transmission is calculated using the Signal-to-Noise ratio~(\SNR) resulting from the selected channel's instantaneous fading characteristics as\n\begin{align}\n\t\reward_C[\timeindex]\n\t=\n\t\sum_{\userindex=1}^{\numusers}\n\t\log\n\t\left(\n\t\t1 +\n\t\t\channelquality_{\userindex}[\timeindex]\n\t\t\frac{\signalpower}{\noisepower}\n\t\right) \n\t=\n\t\sum_{\userindex=1}^{\numusers}\n\t\log\n\t\left(\n\t\t1 + \SNR_{\userindex}[\timeindex]\n\t\right) \n\end{align}\nfor a Gaussian input alphabet, where signal power~\( \signalpower \) and expected noise power~\( \noisepower \) are fixed for all vehicles~\( \userindex \).\n\nThe scheduler is tasked with balancing all performance metrics; thus, we collect all target metrics in a weighted sum utility\n\begin{align}\n\t\tilde{\reward}[\timeindex] = \n\t
\\weight_C\\reward_C[\\timeindex]\n\t+ \\weight_P \\reward_P[\\timeindex]\n\t- \\weight_L \\reward_L[\\timeindex],\n\\end{align}\nwith respective tunable weights~\\( \\weight_C, \\weight_P, \\weight_L \\). Additionally, we designate a priority class of \\textsc{\\small \\emergencyvehicle}\\xspace-type users. Their significance is communicated to the optimization process by adding \\textsc{\\small \\emergencyvehicle}\\xspace timeouts~\\( \\reward_{L,\\, \\emergencyvehicleformula}[\\timeindex] \\) to the weighted sum utility with their own tunable weight~\\( \\weight_{L,\\, \\emergencyvehicleformula} \\):\n\\begin{align}\\label{eq:reward}\n\t\\reward[\\timeindex]\n\t&=\t \\tilde{\\reward}[\\timeindex]\n\t\t- \\weight_{L,\\, \\emergencyvehicleformula} \\reward_{L,\\, \\emergencyvehicleformula}[\\timeindex]\\nonumber\\\\\n\t&=\t \\weight_C \\reward_C[\\timeindex]\n\t\t+ \\weight_P \\reward_P[\\timeindex]\n\t\t- \\weight_L \\reward_L[\\timeindex]\n\t\t- \\weight_{L,\\, \\emergencyvehicleformula} \\reward_{L,\\, \\emergencyvehicleformula}[\\timeindex].\n\\end{align}\n\n\n\\section{Algorithm Selection Approach}\\label{sec:deep}\nWhere model-based approaches struggle to find an optimal solution, deep Reinforcement Learning~(\\reinforcementlearning) may be applied to learn a function that approximates the optimal algorithm along the domain of reasonable input data.\n\\reffig{fig:RL_Training} schematically depicts the desired learning process.\nAt the same time, some applications require boundary conditions to be met, which tend to be easy to formulate in model-based algorithms but cannot be enforced directly in standard \\reinforcementlearning approaches.\n\n\\begin{figure}[tb]\n\t\\centering\n\t\\input{figures\/RL_training.pgf}\n\t\\caption\n\t{%\n\t\tIn the desired RL loop, the parametrized scheduler interacts with the system by selecting an action \\( \\actionsca \\) and receiving a resulting reward~\\( \\reward[\\timeindex] \\) and new system state~\\( \\statevec[\\timeindex+1] \\).\n\t\tBased on these experiences, the scheduler updates its parametrization~\\( \\parameters \\) to promote high-reward actions and demote low reward actions.\n\t}%\n\t\\label{fig:RL_Training}\n\\end{figure}\n\nOur scheduler learns to adaptively switch between a selection of model-based algorithms to tackle this problem.\nIn order to select the best algorithm, the scheduler makes use of a deep Q-Network~(\\dqn)~\\cite{sutton2018reinforcement} to learn the long-term benefit of selecting each operation mode in a given situation.\nAs a result, model-based design benefits, such as hard performance guarantees and human interpretability, are provided by the pool of model-based algorithms.\nMeanwhile, the superposition of algorithms allows for greater performance than each individual algorithm on flexible goal metrics.\nAs an additional benefit, the individual model-based algorithms do not have to be sophisticated enough to perform well in every circumstance, so long as the overall selection is rich enough to serve any problem.\n\n\\subsection{Model-Based Algorithms}\nFour model-based scheduling algorithms are implemented for the \\dqn to select from:~\\cite{schmidt2018analyse}\n\n(1) A \\textbf{Maximum Throughput\\,(\\textsc{\\small MT}\\xspace)} scheduler that allocates as many resources as requested by order of descending channel power gains.\n\n(2) A \\textbf{Max-Min-Fair\\,(\\textsc{\\small MMF}\\xspace)} scheduler looks to distribute the available resources~\\( \\numresourcesavailable \\) performantly but fairly among 
the number~\( \numusers_{\text{req}}[\timeindex] \) of users that have jobs assigned to them in the job queue in time step~\( \timeindex \). It allocates by order of priority\n\( \n\t{\channelquality_{\userindex}[\timeindex]\,\n\t\/\,\n\t\sum_{\jobindex \in \jobsset_{\userindex}[\timeindex]}\n\t\resourceblock_{\jobindex,\,\text{req}}[\timeindex]}\n \), favoring good channels and small requests, but allocates at most an equal share\n\( \n\t\left\lfloor\n\t\t\numresourcesavailable\n\t\t\/\n\t\t\numusers_{\text{req}}[\timeindex]\n\t\right\rfloor\n\).\n\n(3) A \textbf{Delay Sensitive\,(\textsc{\small DS}\xspace)} scheduler assigns a channel priority \n\begin{align*}\n\t{p}_{c, \userindex}[\timeindex]\n\t=\n\t\frac{\n\t\t\packetrate_{\userindex}[\timeindex]\n\t}\n\t{\n\t\t\sum_{q=1}^{\numusers} \packetrate_{q}[\timeindex]\n\t}\n\t\frac{\n\t\t\channelquality_{\userindex}[\timeindex]\n\t}\n\t{\n\t\t\sum_{q=1}^{\numusers} \channelquality_{q}[\timeindex]\n\t}\n\end{align*}\ngiven each user's relative channel power gain~\( \channelquality_{\userindex}[\timeindex] \) and packet rate~\( \packetrate_{\userindex}[\timeindex] \).\nThe \textsc{\small DS}\xspace scheduler further draws on the user's sum timeouts~\(\n\t{\timeouts_{\userindex}[\timeindex] = \sum_{\tilde{\timeindex}=\num{1}}^{\timeindex} \reward_{L, \userindex}\left[\tilde{\timeindex}\right]} \n\) and lowest remaining time~\(\n\t{\lowesttimetotimeout_{\userindex}[\timeindex] = \min_{\jobindex \in \jobsset_{\userindex}[\timeindex]} \timetotimeout_{\jobindex}[\timeindex]}\n\) for a timeout urgency\n\begin{align*}\n\t{p}_{l, \userindex}[\timeindex]\n\t=\n\t\frac{\n\t\t\timeouts_{\userindex}[\timeindex]\n\t\t\/\n\t\t\lowesttimetotimeout_{\userindex}[\timeindex]\n\t}\n\t{\n\t\t\sum_{q=1}^{\numusers}\n\t\t\timeouts_{q}[\timeindex]\n\t\t\/\n\t\t\lowesttimetotimeout_{q}[\timeindex]\n\t}\n\t.\n\end{align*} \nEach user is allotted a share of the available resources according to the normalized, weighted priority vector\n\begin{align*}\n\t\mathbf{p}[\timeindex]\n\t=\n\t(\n\tw_1\mathbf{p}_{c}[\timeindex]\n\t+\n\tw_2\mathbf{p}_{l}[\timeindex]\n\t)\n\t\/\n\t(\n\tw_1\n\t+\n\tw_2\n\t)\n\t.\n\end{align*}\nUniquely, this scheduler skips jobs that are about to time out if the allotted discrete resources are not sufficient to complete the job. In this case, the resources are freed for the next highest priority.\n\n(4) An \textbf{\textsc{\small \emergencyvehicle}\xspace Priority} scheduler assigns as many resources as requested to any \textsc{\small \emergencyvehicle}\xspace{}s and distributes remaining resources one-by-one, randomly assigning them to requesting users.
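\n\nAs an illustration of how such rules translate to code, the following minimal Python sketch implements the \textsc{\small MMF}\xspace allocation rule of (2); the function signature and data layout are illustrative assumptions, and leftover blocks are not redistributed in this sketch.\n\begin{verbatim}\nimport numpy as np\n\ndef mmf_allocate(gains, requests, num_resources):\n    # gains[u]: channel power gain of user u\n    # requests[u]: resource blocks requested by user u\n    blocks = np.zeros(len(gains), dtype=int)\n    requesting = [u for u in range(len(gains))\n                  if requests[u] > 0]\n    if not requesting:\n        return blocks\n    equal_share = num_resources \/\/ len(requesting)\n    # priority: good channels and small requests first\n    priority = sorted(requesting,\n                      key=lambda u: gains[u] \/ requests[u],\n                      reverse=True)\n    remaining = num_resources\n    for u in priority:\n        grant = min(requests[u], equal_share, remaining)\n        blocks[u] = grant\n        remaining -= grant\n    return blocks\n\nprint(mmf_allocate(gains=[0.9, 0.2, 0.5],\n                   requests=[4, 10, 0],\n                   num_resources=16))\n\end{verbatim}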
\n\n\subsection{Deep Algorithm Selection}\nOur deep learning scheduler consists of multiple elements: a pre- and post-processor, a \dqn, an~\( \argmax \) module, a memory module, and a learning algorithm that tunes the \dqn. In this section, we will introduce these modules and illustrate their interconnection.\n\nFirst, to be able to make informed decisions, a small preprocessor prepares information about the current state of the job queue and communication link as a system state vector~\( \statevec[\timeindex] \). Per user~\( \userindex \), the preprocessor summarizes\n\begin{itemize}\n\t\item current queue length \(\n\t\t\statesca_{\userindex, 1}[\timeindex]\n\t\t\t= \sum_{\jobindex \in \jobsset_{\userindex}[\timeindex]} \resourceblock_{\jobindex,\,\text{req}}[\timeindex]\n\t\t\),\n\t\item channel power gain \(\n\t\t\statesca_{\userindex, 2}[\timeindex]\n\t\t\t= \channelquality_\userindex[\timeindex]\n\t\t\),\n\t\item average remaining time \(\n\t\t\statesca_{\userindex, 3}[\timeindex]\n\t\t\t= \frac{1}{\left| \jobsset_{\userindex}[\timeindex] \right|} \sum_{\jobindex \in \jobsset_{\userindex}[\timeindex]} \timetotimeout_{\jobindex}[\timeindex]\n\t\t\),\n\t\item minimum remaining time \(\n\t\t\statesca_{\userindex, 4}[\timeindex]\n\t\t\t= \min_{\jobindex \in \jobsset_{\userindex}[\timeindex]} \timetotimeout_{\jobindex}[\timeindex]\n\t\t\),\n\t\item and past packet rate \(\n\t\t\statesca_{\userindex, 5}[\timeindex]\n\t\t\t= \packetrate_\userindex[\timeindex]\n\t\t\),\n\end{itemize}\nfor a length of \( 5\numusers \) features for \( \numusers \) users.\n\nFor a given system state~\( \statevec[\timeindex] \) and choice of model-based algorithm~\( \actionsca_{\actionindex} \), we define the long term expected rewards\n\begin{align}\n\t\label{eq:def_q}\n\t\longrewardsca(\statevec[\timeindex], \actionsca_{\actionindex})\n\t=\n\t\expectation\n\t\left[\n\t\sum_{z=\timeindex}^{\infty}\n\t\rewarddiscount^{z-\timeindex}\n\t\reward[z]\n\t\;\middle|\;\n\t\statevec = \statevec[\timeindex],\,\n\t\actionsca = \actionsca_{\actionindex}\n\t\right],\n\end{align}\nwith reward~\( \reward \) as defined in~\refeq{eq:reward}. The long term rewards are discounted by an exponential factor~\(\n\t{\num{0} \leq \rewarddiscount \leq \num{1}}\n\).\nAs depicted in \reffig{fig:dqn}, a \dqn is set up to output an estimate~\( \longrewardestimsca \) of the long term expected rewards~\( \longrewardsca \) for each choice of model-based algorithm~\( \actionsca_{\actionindex} \).\nGiven a perfect approximation~\(\n\t{\longrewardestimsca = \longrewardsca}\n\), we maximize the long term expected rewards by simply selecting the action~\(\n\t\argmax_{\actionindex} \longrewardestimsca(\statevec[\timeindex],\,\actionsca_{\actionindex})\n\) with the highest \emph{estimated} long term rewards.\nTherefore, the learning goal is to update the \dqn parameters~\( \parameters \) such that the estimate is close to the true long term reward~\( \longrewardsca \) for all possible~\(\n\t{(\statevec, \actionsca_{\actionindex})}\n\), relying on the universal approximator property of neural networks~\cite{sutton2018reinforcement}.
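\n\nFor concreteness, a minimal TensorFlow sketch of such an estimator and the resulting selection rule is given below; the layer sizes and names are placeholders rather than the tuned architecture reported in Section~\ref{sec:performance}, and the \( \explorationchance \)-greedy branch anticipates the exploration scheme described next.\n\begin{verbatim}\nimport numpy as np\nimport tensorflow as tf\n\nNUM_USERS, NUM_ACTIONS = 10, 4  # 4 model-based schedulers\n\n# feed-forward DQN: 5 features per user in,\n# one long-term reward estimate per action out\nq_network = tf.keras.Sequential([\n    tf.keras.layers.Dense(300, activation='relu',\n                          input_shape=(5 * NUM_USERS,)),\n    tf.keras.layers.Dense(300, activation='relu'),\n    tf.keras.layers.Dense(NUM_ACTIONS),\n])\n\ndef select_algorithm(state, epsilon=0.0):\n    # epsilon-greedy selection over the scheduler pool\n    if np.random.random() < epsilon:\n        return np.random.randint(NUM_ACTIONS)\n    q_values = q_network(state[np.newaxis, :])[0]\n    return int(tf.argmax(q_values))\n\end{verbatim}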
\n\n\begin{figure}[tb]\n\t\centering\n\t\input{figures\/DQN.pgf}\n\t\caption\n\t{%\n\t\tThe DQN module estimates expected long term rewards~\(\n\t\t{\longrewardestimsca(\statevec[\timeindex], \actionsca_\actionindex)}\n\t\t\) for a given state~\( \statevec[\timeindex] \) and all available actions~\( \actionsca_{\actionindex} \), according to its current parametrization~\( \parameters \).\n\t\tIn this case, actions~\( \actionsca_{\actionindex} \) are the available model-based algorithms. The scheduler then selects the algorithm~\( \actionsca_{\actionindex} \) with the highest expected long term reward.\n\t}%\n\t\label{fig:dqn}\n\end{figure}\n\nFirst, the scheduler must gather experiences to learn from.\nUsing an \( \explorationchance \)-greedy exploration scheme, the scheduler initially explores the simulation by taking random actions, \ie selecting a random model-based algorithm for a given state.\nStates, decisions, and their direct outcome are recorded and stored in a replay buffer in the form of tuples\n\begin{align}\n\t\experience\n\t=\n\t\left(\n\t\t\statevec[\timeindex],\,\n\t\t\actionsca[\timeindex],\,\n\t\t\reward[\timeindex],\,\n\t\t\statevec[\timeindex+1]\n\t\right)\n\t.\n\end{align}\n\nWe highlight that an experience only contains the immediate reward~\( \reward[\timeindex] \), not the desired long-term rewards~\(\n\t{\longrewardsca(\statevec[\timeindex], \actionsca[\timeindex])}\n\), \ie~\refeq{eq:def_q}.\nHowever, we can extract additional information from~\( \statevec[\timeindex+1] \), the state following the action. By again using our \dqn, we estimate the rewards following~\( \statevec[\timeindex+1] \) to construct a learning target\n\begin{align}\n\t\label{eq:bootstrap}\n\t\longrewardestimsca_{\text{target}}(\experience)\n\t=\n\t\reward[\timeindex]\n\t+\n\t\rewarddiscount\n\t\max_\actionindex\n\t\t\longrewardestimsca(\statevec[\timeindex+1], \actionsca_\actionindex)\n\t,\n\end{align}\nthat incorporates the information of~\( \reward[\timeindex] \) and~\( \statevec[\timeindex + \num{1}] \) from the experience.\nUsing this target~\( {\longrewardestimsca_{\text{target}}(\experience)} \), an estimation error~\cite{sutton2018reinforcement}\n\begin{align}\n\t\tderror\n\t=\n\t\longrewardestimsca_{\text{target}}\left( \experience \right)\n\t-\n\t\longrewardestimsca \left( \experience \right)\n\end{align}\ncan be calculated for any experience tuple from the buffer, given the network's current parametrization.\n\nFollowing the principle of stochastic gradient descent~(\stochasticgradientdescent), the network's parameters~\( \parameters \) are then adjusted by sampling a minibatch of~\( \batchsize \) experiences from the buffer to minimize a mean square error cost\n\begin{align}\n\t\cost\n\t=\n\t\frac{\n\t\t1\n\t}\n\t{\n\t\t\batchsize\n\t}\n\t\sum_{\batchindex=1}^{\batchsize}\n\t(\n\t\t\tderror_{\batchindex}\n\t)^2\n\end{align}\nfor the batch.\nMinibatch parameter updates are carried out every time a new experience is made.\n\nAs training progresses and the scheduler's understanding of the simulation environment improves, the probability~\( \explorationchance \) of selecting random exploration actions is gradually decreased, relying more and more on the network to make decisions. When prompted, at the beginning of a time step~\( \timeindex \), the network estimates the expected long term rewards~\( \longrewardestimsca \) for selecting any of the available model-based algorithms~\( \actionsca_{\actionindex} \), given the current state~\( \statevec[\timeindex] \) and the network's current parameters~\( \parameters \). The scheduler then selects the model-based algorithm~\( \actionsca_{\actionindex} \) with the highest expected long term rewards.
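\n\nPutting these pieces together, a minimal sketch of one such minibatch update, using the network defined above and omitting the efficiency optimizations listed below, could read as follows; the replay buffer handling and array layout are illustrative assumptions, with the reward provided by the simulation according to~\refeq{eq:reward}.\n\begin{verbatim}\nimport random\n\noptimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)\nreplay_buffer = []  # tuples (state, action, reward, next_state)\nGAMMA, BATCH_SIZE = 0.9, 64\n\ndef train_step():\n    batch = random.sample(replay_buffer, BATCH_SIZE)\n    states = np.array([e[0] for e in batch], dtype=np.float32)\n    actions = np.array([e[1] for e in batch], dtype=np.int32)\n    rewards = np.array([e[2] for e in batch], dtype=np.float32)\n    next_states = np.array([e[3] for e in batch],\n                           dtype=np.float32)\n    # bootstrapped learning target: immediate reward plus\n    # discounted best estimated follow-up reward\n    targets = rewards + GAMMA * tf.reduce_max(\n        q_network(next_states), axis=1)\n    with tf.GradientTape() as tape:\n        q_all = q_network(states)\n        idx = tf.stack([tf.range(BATCH_SIZE), actions], axis=1)\n        q_taken = tf.gather_nd(q_all, idx)  # q of taken actions\n        loss = tf.reduce_mean(\n            (tf.stop_gradient(targets) - q_taken) ** 2)\n    grads = tape.gradient(loss, q_network.trainable_variables)\n    optimizer.apply_gradients(\n        zip(grads, q_network.trainable_variables))\n\end{verbatim}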
To increase training efficiency, we implement optimizations to the base \\dqn learning method: \n\\begin{itemize}\n\t\\item An infrequently updated network copy is used for the bootstrapping in \\refeq{eq:bootstrap} to increase training stability (\\emph{Target Network},~\\cite{mnih2015human})\n\t\\item Sampling of experiences from the buffer is weighted proportional to the experiences' estimation error magnitude (\\emph{Prioritized Replay},~\\cite{hessel2018rainbow})\n\t\\item The neural network structure is altered to separately learn the contribution of state and action to the reward estimate (\\emph{DuelingDQN},~\\cite{hessel2018rainbow})\n\\end{itemize}\n\n\n\\section{Performance Evaluation}\\label{sec:performance}\n\\subsection{Implementation Details}\nWe configure the simulation with \\(\n\t{\\numresourcesavailable = \\num{16}}\n\\)~available resources and \\(\n\t{\\numusers = \\num{10}}\n\\) users.\nUser profiles are set up according to \\reftab{tab:user_profiles}, with five~'Normal', two~'High Packet Rate', two~'Low Latency' users, and one~'Emergency Vehicle'.\nFor the given configuration, a job creation probability~\\(\n\t{\\probability_{\\jobindex} = \\SI{20}{\\percent}}\n\\) per simulation step and user places a high expected load of \\( \\num{1.6} \\)~requests per available resource on average on the schedulers.\nWe run a total of \\( \\num{10000} \\)~episodes at \\( \\num{50} \\)~time steps~\\( \\timeindex \\) per episode for training and evaluation each, with the minibatch size set to~\\( {\\batchsize = 64} \\) sampled experiences per training step.\nParameter optimization via \\stochasticgradientdescent is carried out by the Adam optimizer~\\cite{kingma_adam_2015} with default settings and a learning rate of~\\( \\num{1e-4} \\).\nReward weightings are set to \\( {\\weight_C = \\weight_P = \\num{0.25}} \\), \\( {\\weight_L = \\weight_{L,\\, \\emergencyvehicleformula} = \\num{1.0}} \\), giving each goal metric roughly equal significance given the average expected magnitude of the respective rewards.\n\n\\begin{table}[tb]\n\t\\renewcommand{\\arraystretch}{1.3}\n\t\\caption{User Profiles}\n\t\\label{tab:user_profiles}\n\t\\begin{center}\n\t\t\\begin{tabular}{c|c|c}\n\t\t\t\\hline\n\t\t\t& \\textbf{Delay} & \\textbf{Max Job Size} \\( \\resourceblock_{\\text{max}} \\) \\\\ \n\t\t\t& in sim. steps\t\t& in res.
blocks\t\t\\\\\\hline\\hline\n\t\t\tNormal\t\t\t\t& \\( \\num{20} \\)\t\t\t\t& \\( \\num{30} \\) \\\\\n\t\t\tHigh Packet Rate\t& \\( \\num{20} \\)\t\t\t\t& \\( \\num{40} \\) \\\\\n\t\t\tLow Latency\t\t\t& \\( \\num{2} \\)\t\t\t\t\t& \\( \\num{8} \\) \\\\\n\t\t\tEmergency Vehicle\t& \\( \\num{1} \\)\t\t\t\t\t& \\( \\num{16} \\) \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\\end{table}\n\n\nFor the algorithm selection \\dqn, a five-layer feed-forward network is selected.\nLayers have \\( \\num{300} \\)~nodes each, except for the last layer, which is split into two branches with \\( \\num{200} \\)~nodes each according to the DuelingDQN structure~\\cite{hessel2018rainbow}.\nExploration is done by selecting random actions at an initial chance~\\(\n\t{\\explorationchance = \\SI{99}{\\percent}}\n\\), decaying linearly to~\\( \\SI{0}{\\percent} \\) after~\\( \\SI{80}{\\percent} \\) of training episodes.\nThe exponential future reward decay factor is set to~\\( {\\rewarddiscount = \\num{0.9}} \\) to put significance on only a small number of future steps. User TX-SNR~\\( {\\signalpower \/ \\noisepower} \\) is fixed to \\( \\SI{13}{\\dB} \\).\n\nWe implement the simulation in Python, primarily using the TensorFlow library. The full implementation is made available in~\\cite{gracla2020code}.\n\n\\subsection{Results}\n\nAs the simulation model contains stochastic components, the results achievable on each metric have an inherent variance depending on the specific realizations.\nFor example, a spike in generated jobs will result in timeouts irrespective of scheduling method.\nFor this reason, results achieved during testing are displayed in cumulative histograms.\n\n\\begin{figure*}[tb]\n\t\\centering\n\t\\subfloat[]{\n\t\t\\input{figures\/scheduling_testing_capacities.pgf}\n\t\t\\label{fig:results_capacity}\n\t}\n\t\\hfil\n\t\\subfloat[]{\n\t\t\\input{figures\/scheduling_testing_datarate_satisfaction.pgf}\n\t\t\\label{fig:results_packet_rate}\n\t}\n\t\\\\\n\t\\subfloat[]{\n\t\t\\input{figures\/scheduling_testing_latency_violations.pgf}\n\t\t\\label{fig:results_latency_violations}\n\t}\n\t\\hfil\n\t\\subfloat[]{\n\t\t\\input{figures\/scheduling_testing_timeouts_ev_only.pgf}\n\t\t\\label{fig:results_ev_latency_violations}\n\t}\n\t\\caption{Cumulative histograms of performance of Maximum Throughput~(MT), Max-Min-Fair~(MMF) and Delay Sensitive~(DS) schedulers as well as DQN adaptive scheduler on submetrics of capacity~(a), packet rate~(b), timeouts~(c) and EV timeouts~(d).}\n\t\\label{fig:results_submetrics}\n\\end{figure*}\n\n\\reffig{fig:results_reward} shows each scheduler's performance on the combined reward metric~\\( \\reward \\).\nOn this metric, the \\dqn adaptive scheduler is able to find a strategy that outperforms any single model-based algorithm.\nBreaking the sum metric~\\( \\reward \\) down into its constituent parts sheds light on how this is achieved. As shown in \\reffig{fig:results_submetrics}, the \\dqn scheduler balances the submetrics against each other, achieving solid performance on each of them compared to the benchmarks.\n\n\\begin{figure}[tb]\n\t\\centering\n\t\\input{figures\/scheduling_testing_rewards.pgf}\n\t\\caption{Cumulative histogram of performance of Maximum Throughput~(MT), Max-Min-Fair~(MMF) and Delay Sensitive~(DS) schedulers as well as DQN adaptive scheduler on the weighted sum reward metric. For each episode, all achieved rewards \\( \\reward \\) are summed.
Achieved reward sums are normalized by the highest achieved reward sum.}\n\t\\label{fig:results_reward}\n\\end{figure}\n\nOf particular interest are the \\textsc{\\small \\emergencyvehicle}\\xspace-specific timeouts in \\reffig{fig:results_ev_latency_violations}.\nThis metric is not specifically targeted by any of the model-based algorithms depicted.\nA modest double-weighting of \\textsc{\\small \\emergencyvehicle}\\xspace specific timeouts within the goal metric~\\( \\reward \\), combined with the introduction of the otherwise suboptimal \\textsc{\\small \\emergencyvehicle}\\xspace Priority scheduling algorithm to the selection pool, has enabled the \\dqn-based scheduler to significantly suppress timeouts in Emergency Vehicles even compared to the otherwise timeout-focused Delay Sensitive algorithm, without hurting overall performance much.\nAs \\reffig{fig:results_algorithm_selections} shows, the \\textsc{\\small \\emergencyvehicle}\\xspace Priority algorithm needed to be selected only a comparatively small number of times to achieve this goal.\n\n\\begin{figure}[tb]\n\t\\centering\n\t\\input{figures\/scheduling_testing_algorithm_selection.pgf}\n\t\\caption{Relative selection rate of EV Priority~(EV P), Delay Sensitive~(DS), Max-Min-Fair~(MMF) and Maximum Throughput~(MT) schedulers during testing.}\n\t\\label{fig:results_algorithm_selections}\n\\end{figure}\n\nThe \\dqn{}'s learned behaviour can also be monitored to reveal underlying features of the simulation.\nAs \\reffig{fig:results_algorithm_selections} highlights, for the given reward weighting, the \\textsc{\\small MMF}\\xspace algorithm was selected only a small number of times.\nFurther investigation could reveal whether these are, for example, high-impact outlier cases that could be better served with an additional model-based algorithm that is specifically targeted to them, or whether the \\textsc{\\small MMF}\\xspace algorithm is not fit for the given scenario.\n\nThe \\dqn decision making does, however, add another layer of computation to the scheduling process.\nFurther, while the ensemble method can relax the sophistication required from each model-based part of the ensemble, the composite scheduling function can only assume the function space spanned by the group of model-based algorithms.\nIn other words, the \\dqn adaptive method is unable to discover strategies that go beyond combining the available models.\nDetermining whether the selection of model-based algorithms provided is rich enough to serve the problem therefore remains a burden on the designer.\n\n\\section{Conclusion}\\label{sec:conclusion}\nIn this paper, we presented a \\reinforcementlearning-based communications resource scheduler that constructs a scheduling paradigm based on an ensemble of model-based algorithms.\nWe achieve this by adaptively switching to whichever algorithm promises the highest expected long term benefit based on the current queue state.\nThis approach combines the flexible goal optimization of \\reinforcementlearning methods with the rigid predictability of model-based algorithms, which makes it noteworthy for applications such as scheduling \\textsc{\\small \\emergencyvehicle}\\xspace data that demand high performance, but have low tolerance for the negative outliers that may occur in classic \\reinforcementlearning approaches.\n\nFor the simulation presented, the adaptive model-switching scheduler is able to outperform single, model-based algorithms on a weighted sum utility metric.\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\n\\section{Introduction}\n\n\\subsection{Background and Significance of Hyperspectral Remote Sensing}\n\\IEEEPARstart{I}{maging} spectroscopy, which was first-ever to be conceptualized by Goetz1 \\textit{et al.} in 1980's \\cite{goetz1985imaging}, is a seminal hyperspectral (HS) imaging technique of truly achieving the integration of the 1-D spectrum and the 2-D image for earth remote sensing (RS). Imaging spectroscopy is a typical ``passive'' RS technique, which assembles spectroscopy and digital photography into a unified system. Fig. \\ref{fig:GRSM_Imaging} shows the data acquisition process of two different imaging patterns: ``active'' RS and ``passive'' RS \\cite{tsang1985theory}. The resulting HS image collects hundreds of 2-D images finely sampled from the approximately contiguous wavelength across the whole electromagnetic spectrum \\cite{turner2003remote} (see Fig. \\ref{fig:electromagnetic}). This enables the recognition and identification of the materials, particularly for those that have extremely similar spectral signatures in visual cues (e.g., RGB) \\cite{hong2015novel}, at a more accurate and finer level. As a result, HS RS has been significantly advanced and widely applied in many challenging tasks of earth observation \\cite{bioucas2013hyperspectral}, such as fine-grained land cover classification, mineral mapping, water quality assessment, precious farming, urban planning and monitoring, disaster management and prediction, and concealed target detection.\n\n\\begin{figure}[!t]\n\t \\centering\n\t\t\\includegraphics[width=0.48\\textwidth]{GRSM_Imaging}\n \\caption{An illustration to clarify the data acquisition process of two different imaging patterns, i.e., ``active'' RS and ``passive'' RS. }\n\\label{fig:GRSM_Imaging}\n\\end{figure}\n\n\\begin{figure*}[!t]\n\t \\centering \n\t\t\\includegraphics[width=0.8\\textwidth]{electromagnetic}\n \\caption{A showcase to clarify the electromagnetic spectrum: the order from low to high frequency is Long-waves, Radio-waves, Micro-waves, Infrared, Visible, Ultraviolet, X-rays, and Gamma-rays, where four widely-concerned intervals, e.g., Radio-waves, Infrared, Visible, and X-rays, are finely partitioned.}\n\\label{fig:electromagnetic}\n\\end{figure*}\n\nMore specifically, characterized by the distinctive 3-D signaling structure, the advantages of the HS image over conventional 1-D or 2-D signal products can be summarized as\n\\begin{itemize}\n \\item compared to the common 1-D signal system, the HS 2-D spatial pattern provides us more structured information, enabling the discrimination of underlying objects of interest at a more semantically meaningful level;\n \\item Beyond the 2-D natural images, the rich spectral information in HS images is capable of detecting materials through tiny discrepancies in the spectral domain, since the HS imaging instruments exploit the sensors to collect hundreds or thousands of wavelength channels with an approximately continuous spectral sampling at a subtle interval (e.g., 10nm). \n\\end{itemize}\n\nFurthermore, the significance of HS images compared to other RS imaging techniques with a lower spectral resolution, e.g., multispectral (MS) or RGB imaging, mainly embodies in the following three aspects:\n\\begin{itemize}\n \\item[1)] HS images are capable of finely discriminating the different classes that belong to the same category, such as \\textit{Bitumen} and \\textit{Asphalt}, \\textit{Stressed Grass} and \\textit{Synthetic Grass}, \\textit{Alunite} and \\textit{Kaolin}. 
Optical broadband imaging products (e.g., MS imagery), in contrast, can only identify certain materials with observable differences in their spectral signatures, e.g., \\textit{Water}, \\textit{Trees}, \\textit{Soil}.\n \\item[2)] The higher spectral resolution creates possibilities for some challenging applications that can hardly be achieved by relying on former imaging techniques alone, e.g., parameter extraction of biophysics and biochemistry, biodiversity conservation, monitoring and management of the ecosystem, automatic detection of food safety, which provides new insight into the RS and geoscience fields.\n \\item[3)] Due to the limitations of image resolution in either the spectral or spatial domain, physical and chemical atmospheric effects, and environmental conditions (e.g., the interference of soil background, illumination, the uncontrolled shadow caused by clouds or building occlusion, topography change, complex noises), traditional RS imaging techniques were to a great extent dominated by qualitative analysis. With the rise of HS RS, quantitative or semi-quantitative analysis becomes increasingly plausible in many practical cases.\n\\end{itemize}\n\n\\subsection{An Ever-Growing Relation between Non-convex Modeling and Interpretable AI in Hyperspectral Remote Sensing}\n\nIn recent years, a vast number of HS RS missions (e.g., MODIS, HypSEO, DESIS, Gaofen-5, EnMap, HyspIRI, etc.) have been launched to enhance our understanding of, and capabilities regarding, the Earth and environment, contributing to rapid development in a wide range of relevant applications, such as land cover land use classification, spectral unmixing, data fusion, image restoration, and multimodal data analysis. With the ever-growing availability of RS data sources from both satellite and airborne sensors on a large and even global scale, the expert system-centric data processing and analysis mode has run into bottlenecks and cannot meet the demands of the big data era. For this reason, data-driven signal and image processing, machine learning (ML), and AI models have been garnering growing interest and attention from researchers in the RS community.\n\nSupported by well-established theory and numerical optimization, convex models have been proven effective in modeling a variety of HS tasks under highly idealized assumptions. However, there exist unknown, uncertain, and unpredictable factors in complex real scenes. These factors undermine a sound understanding and modeling of the scene, causing convex models to fail to work properly. The specific reasons could be two-fold. On the one hand, integrating the benefits of 1-D and 2-D signals, the 3-D structured HS images offer greater potential and better solutions (compared to natural images) to deal with varying situations, but simultaneously increase the model's complexity and uncertainty to some extent. On the other hand, due to the unprecedented spatial, spectral, and temporal resolutions of HS images, the difficulties and challenges in sophisticated HS vision approaches are mainly associated with the volume of the HS data, complex material (spectral) mixing behavior, and uncontrolled degradation mechanisms in data acquisition caused by illumination, noise, and atmospheric effects. \n\nThe aforementioned factors largely prevent convex models from serving as intelligent approaches that fully understand and interpret real-life scenarios.
Therefore, this naturally motivates us to investigate the possibility of processing and analyzing the HS data in a non-convex modeling fashion. In the following, we briefly make a qualitative comparison between convex and non-convex models to clarify that non-convex modeling might be a feasible solution towards interpretable AI models in HS RS.\n\n\\begin{itemize}\n \\item Convex models are theoretically guaranteed to converge to the globally optimal solution, yet most tasks related to HS RS are complex in reality and can hardly be simplified to an equivalent, perfectly convex formulation. This makes convex models partly inapplicable to practical tasks, owing to their lack of interpretability and completeness in problem modeling.\n \\item Rather, non-convex models are capable of characterizing the complex studied scene in HS RS more finely and completely, and thus tend more towards automation and intelligence in the real world. Moreover, by effectively mining intrinsic properties of the HS data to yield physically meaningful priors, the solution space of non-convex models can be gradually shrunk to a ``good'' region.\n \\item Although non-convex models are complex because they consider more complicated prior knowledge, possibly leading to a lack of stable generalization ability, they hold a potential that convex models lack, particularly in explaining models, understanding scenes, and achieving intelligent HS image processing and analysis. Furthermore, this might provide researchers with a broader range of HS vision-related topics, making non-convex models applicable to more real cases in a variety of HS RS-related tasks.\n\\end{itemize}\n\n\\subsection{Contributions}\nWith the advent of the big data era, the ever-increasing data volume and diversity bring rare opportunities and challenges for the development of HS RS in earth observation. Data-driven AI approaches, e.g., ML-based and deep learning (DL)-based ones, have occupied a prominent place in manifold HS RS applications. Nevertheless, how to open the ``model'' and give it interpretability remains an open question. In this article, we raise a bold and understandable standpoint, that is, non-convex modeling might be an effective means to bridge the gap between interpretable AI models and HS RS. To support this opinion, this article provides a detailed and systematic overview by reviewing the advanced and latest literature with an emphasis on non-convex modeling in terms of five classic and burgeoning topics related to HS RS. More specifically, \n\\begin{itemize}\n \\item We present a comprehensive discussion and analysis related to non-convex modeling in five well-noticed and promising HS RS-related applications, namely HS image restoration, dimensionality reduction and classification, spectral unmixing, data fusion and enhancement, and cross-modality learning. \n \\item For each topic, a few representative works are introduced in detail, attempting to make a connection between non-convex modeling and intelligent\/interpretable models. Moreover, example experiments (qualitative or quantitative) are subsequently performed after the detailed method description. The selected methods that engage in the comparative experiments are accompanied by available code and data links for the sake of reproducibility. Finally, the remaining challenges are highlighted to further clarify the gap between interpretable ML\/AI models and practical HS RS applications.
\n \\item Regarding the three aspects of non-convex modeling, interpretable AI, and HS RS, the end of this article concludes with some remarks, provides a summary analysis, and hints at plausible future research directions.\n\\end{itemize}\n\nWe need to point out and emphasize, however, that this paper offers food for thought for advanced readers rather than an introduction for beginners entering the field. The goal of this paper is to provide a cutting-edge survey rather than a real tutorial. As a result, readers are expected to have some prior knowledge across multiple disciplines, such as HS RS, convex optimization, non-convex modeling, ML, and AI, where basic principles, definitions, and derivations need to be mastered. For beginners who are willing to start new research on non-convex modeling for HS RS applications, we recommend reading the following materials and references to learn, for example, \n\\begin{itemize}\n \\item what HS imaging or HS RS is (e.g., principles, superiority, practicability) and its relevant applications (topics) \\cite{bioucas2013hyperspectral};\n \\item convex optimization and its solutions (including a detailed description of physically meaningful priors, e.g., low-rank, sparsity, graph regularization, non-negativity, sum-to-one, etc.) as well as its relationship with non-convex modeling \\cite{boyd2004convex};\n \\item a general guideline on why and how to build non-convex models and how to solve non-convex minimization problems \\cite{ekeland1979nonconvex};\n \\item classic and advanced ML algorithms, including essential ideas, design thinking, and implementation processes \\cite{mohri2018foundations};\n \\item a big picture of what AI is and how to build basic AI models \\cite{nilsson2014principles}.\n\\end{itemize}\n\nMoreover, we hope that this paper can also be regarded as a good starting point that evolves many novel, interesting, and noteworthy research issues around the fields of non-convex modeling, interpretable ML\/AI, and HS RS, serving more application cases in reality. \n\n\\begin{figure*}[!t]\n\t \\centering\n\t\t\t\\includegraphics[width=1\\textwidth]{SPM_Motivation}\n \\caption{Illustration of five promising topics in HS RS, including image restoration, dimensionality reduction and classification, data fusion and enhancement, spectral unmixing, and cross-modality learning.}\n\\label{fig:motivation}\n\\end{figure*}\n\n\\section{Outline and Basic Preparation}\n\nThis paper starts with a brief introduction to a general non-convex minimization problem in signal or image processing, and then specifies the non-convex modeling for each topic in HS RS from an ML perspective. According to the characteristics of HS imaging, these different issues may bring fresh challenges to research on non-convex modeling and optimization, which contributes to boosting the development of both HS RS and ML for intelligent data processing and analysis. Very recently, some successful showcases have garnered growing attention from researchers who engage in HS data processing and analysis, ML, or statistical optimization, leading to many newly-developed non-convex models and their solutions. They can be roughly categorized into several groups across a wide range of real applications related to HS RS. Fig.
\\ref{fig:motivation} gives the illustration for each topic.\n\n\\subsection{A Brief Introduction from Convex to Non-convex Models}\nAs the name suggests, in a convex model, the region above the graph of the function (or model), i.e., its epigraph, is convex. Accordingly, a function $f: \\mathbb{R}^{n}\\rightarrow\\mathbb{R}$ is convex if the domain of $f$ is a convex set and, for any $\\theta\\in [0,1]$ and any two points $x$ and $y$ in the domain, the following condition holds:\n\\begin{equation}\n\\label{g_eq1}\n\\begin{aligned}\n f(\\theta x + (1-\\theta) y)\\leq \\theta f(x) + (1-\\theta) f(y).\n\\end{aligned}\n\\end{equation}\nFor differentiable $f$, an equivalent necessary and sufficient condition is $f(y)\\geq f(x)+\\bigtriangledown f(x)^{\\top}(y-x)$.\n\nWithin the domain, the globally optimal solution of the convex model can be obtained by common convex optimization techniques, such as linear programming \\cite{chvatal1983linear}, quadratic programming \\cite{frank1956algorithm}, and second order cone programming \\cite{lobo1998applications}.\n\nInformally, although convex methods have been widely used to model various tasks, owing to the existence and uniqueness of their solutions and their relatively low model complexity, real scenes are complex and changeable, inevitably leading to many uncertainties and difficulties in the modeling process. For this reason, non-convex modeling is capable of providing stronger modeling power to the algorithm designer and can fully meet the demand of characterizing real complex scenes well. This naturally motivates us to shift our emphasis to some key issues related to non-convex modeling.\n\nA general non-convex model usually consists of a smooth objective function (\\textit{e.g.,} Euclidean loss, negative log-likelihood) with Lipschitz gradient $f(\\mathbf{X})$ and the non-convex constraints $\\mathcal{C}$, which can be generalized by optimizing the following minimization problem:\n\n\\begin{equation}\n\\label{g_eq2}\n\\begin{aligned}\n \\mathop{\\min}_{\\mathbf{X}}\\frac{1}{2}f(\\mathbf{X}) \\;\\; {\\rm s.t.}\\; \\mathbf{X}\\in \\mathcal{C},\n\\end{aligned}\n\\end{equation}\nwhere $\\mathbf{X}$ is the to-be-estimated variable, which can be defined as a vector (1-D signal), a matrix (2-D image), or an unfolded matrix (3-D HS image). The constraints $\\mathcal{C}$ could be sparsity-promoting variants, low-rank, TV, and others, which need to be determined by the specific tasks.\n\nUnlike convex models, non-convex models exhibit many local minima. This poses a big challenge for finding globally optimal solutions. For possible solutions of non-convex methods, one strategy is to relax the non-convex problem to an approximately convex model \\cite{jain2017non}. Another is to break the non-convex problem down into several convex subproblems and solve them in parallel by convex means \\cite{boyd2011distributed}.\n\nIn light of the different research goals, the general model in Eq. (\\ref{g_eq2}) can be extended to task-driven variants covering a broad scope within HS RS, including image restoration, dimensionality reduction, data fusion and enhancement, spectral unmixing, and cross-modality learning. It should be noted that in the following sections, some representative methods will be introduced with a focus on non-convex modeling, while the alternating direction method of multipliers (ADMM) optimization framework is recommended as a general solver for these non-convex models.
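\n\nAs a concrete illustration of this decomposition strategy, the following minimal sketch (in Python\/NumPy, the language of many reference implementations cited later) solves a toy instance of Eq. (\\ref{g_eq2}): approximating an observation $\\mathbf{Y}$ by a low-rank component plus a sparse residual through alternating proximal steps, i.e., singular value thresholding and soft thresholding. The function names and parameter values are illustrative assumptions, not taken from any cited work.\n\\begin{verbatim}\nimport numpy as np\n\ndef svt(M, tau):\n    # Singular value thresholding: proximal operator of the nuclear norm.\n    U, s, Vt = np.linalg.svd(M, full_matrices=False)\n    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt\n\ndef soft(M, tau):\n    # Entry-wise soft thresholding: proximal operator of the l1 norm.\n    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)\n\ndef lowrank_plus_sparse(Y, lam_lr=1.0, lam_sp=0.1, n_iter=100):\n    # Alternately update X (low-rank) and N (sparse) so that Y ~ X + N.\n    X, N = np.zeros_like(Y), np.zeros_like(Y)\n    for _ in range(n_iter):\n        X = svt(Y - N, lam_lr)   # low-rank step on the current residual\n        N = soft(Y - X, lam_sp)  # sparse step on the remaining residual\n    return X, N\n\n# Toy usage: a rank-2 matrix corrupted by a few large sparse spikes.\nrng = np.random.default_rng(0)\nY = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))\nY[rng.random(Y.shape) < 0.05] += 10.0\nX_hat, N_hat = lowrank_plus_sparse(Y)\n\\end{verbatim}\n\n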
For specific solutions of these models in each topic, please refer to the cited references.\n\n\\subsection{Main Abbreviation}\n\\vspace{5pt}\n\\begin{supertabular}{ll}\nAI: & artificial intelligence.\\\\\nANN: & artificial neural network.\\\\\nCML: & cross-modality learning.\\\\\nDL: & deep learning.\\\\\nDR: & dimensionality reduction.\\\\\nGSD: & ground sampling distance.\\\\\nHS: & hyperspectral.\\\\\nKNN: & $k$ nearest neighbors.\\\\\nLDA: & linear discriminant analysis.\\\\\nLMM: & linear mixing model.\\\\\nMA: & manifold alignment.\\\\\nML: & machine learning.\\\\\nMML: & multimodality learning.\\\\\nMS: & multispectral.\\\\\nNMF: & non-negative matrix factorization.\\\\\nRS: & remote sensing.\\\\\nSAR: & synthetic aperture radar.\\\\\nSOTA: & state-of-the-art.\\\\\nSSL: & shared subspace learning.\\\\\nSU: & spectral unmixing.\\\\\n1-D: & one-dimensional.\\\\\n2-D: & two-dimensional.\\\\\n3-D: & three-dimensional.\\\\\n\\end{supertabular}\n\n\\subsection{Nomenclature}\n\\vspace{5pt}\n\\begin{supertabular}{ll}\n$\\tensor{X}$: & to-be-estimated 3-D HS image.\\\\\n$\\tensor{Y}$: & observed 3-D HS image.\\\\\n$\\tensor{N}_{G}$: & 3-D Gaussian noise. \\\\\n$\\tensor{N}_{S}$: & 3-D sparse noise.\\\\\n$\\tensor{O}$: & core tensor.\\\\\n$r$: & rank of matrix.\\\\\n$\\mathbf{X}$: & unfolded 2-D matrix of $\\tensor{X}$.\\\\\n$\\mathbf{x}_{i}$: & the $i$-th pixel (1-D vector) of $\\mathbf{X}$.\\\\\n$\\mathbf{Y}$: & unfolded 2-D matrix of $\\tensor{Y}$.\\\\\n$\\mathbf{N}_{S}$: & unfolded 2-D matrix of $\\tensor{N}_{S}$.\\\\\n$\\mathbf{H}$: & first-order difference matrix.\\\\\n$\\mathcal{C}$: & model constraint set.\\\\\n$\\Phi$: & to-be-estimated variable set.\\\\\n$f$: & transformation functions.\\\\\n$\\mathbf{Q}$: & combination coefficients of NMF.\\\\\n$\\mathbf{W}$: & graph or manifold structure.\\\\\n$\\mathbf{L}$: & Laplacian matrix.\\\\\n$\\mathbf{D}$: & degree matrix.\\\\\n$\\mathbf{U}$: & subspace projection matrix.\\\\\n$\\mathbf{I}$: & identity matrix.\\\\\n$\\mathbf{d}$: & distance or similarity matrix.\\\\\n$\\sigma$: & standard deviation.\\\\\n$C_{k}$: & sample set of the $k$-th class.\\\\\n$\\mathbf{M}$: & one-hot encoded matrix.\\\\\n$\\mathbf{P}$: & regression matrix.\\\\\n$\\mathbf{E}$: & endmember matrix.\\\\\n$\\mathbf{E}_{0}$: & reference endmember matrix.\\\\\n$\\mathbf{A}$: & abundance matrix.\\\\\n$\\mathbf{S}$: & scaling factors (matrix).\\\\\n$\\mathbf{V}$: & spectral variability dictionary (matrix).\\\\\n$\\mathbf{J}$: & coefficients corresponding to $\\mathbf{V}$.\\\\ \n$\\mathbf{R}$: & spatial degradation function.\\\\\n$\\mathbf{G}$: & spectral response function.\\\\\n$\\mathbf{N}_{H}$: & HS noise.\\\\\n$\\mathbf{N}_{M}$: & MS noise.\\\\\n$\\mathbf{Z}$: & high spatial resolution MS image.\\\\\n$m$: & the number of the considered modality.\\\\\n$c$: & scaling constant.\\\\\n\\end{supertabular}\n\n\\subsection{Notation}\n\\begin{supertabular}{ll}\n$\\norm{\\mathbf{X}}_{\\F}$ & Frobenius norm of $\\mathbf{X}$, obtained by $\\sqrt{\\sum_{i,j}\\mathbf{X}_{i,j}^{2}}$\\\\\n$\\norm{\\mathbf{X}}_{1,1}$ & $\\ell_{1}$ norm of $\\mathbf{X}$, obtained by $\\sum_{i,j}|\\mathbf{X}_{i,j}|$\\\\\n$\\norm{\\mathbf{X}}_{2,1}$ & $\\ell_{2,1}$ norm of $\\mathbf{X}$, obtained by $\\sum_{i}|\\sqrt{\\sum_{j}\\mathbf{X}_{i,j}^{2}}|$\\\\\n$\\norm{\\mathbf{X}}_{1\/2}$ & $\\ell_{1\/2}$ norm of $\\mathbf{X}$, obtained by $\\sum_{i,j}\\mathbf{x}_{j}(i)^{1\/2}$\\\\\n$\\norm{\\mathbf{X}}_{q}$ & $\\ell_{q}$ norm of $\\mathbf{X}$, obtained by
$\\sum_{i,j}\\mathbf{x}_{j}(i)^{q}$\\\\\n$\\norm{\\mathbf{X}}_{{\\rm TV}}$ & ${\\rm TV}$ norm of $\\mathbf{X}$, obtained by $\\norm{\\mathbf{H}_{h}\\mathbf{X}+\\mathbf{H}_{v}\\mathbf{X}}_{2,1}$\\\\\n$\\norm{\\mathbf{X}}_{0}$ & $\\ell_{0}$ norm of $\\mathbf{X}$, obtained by $\\lim_{p\\rightarrow0}\\sum_{i,j}|\\mathbf{X}_{i,j}|^{p}$\\\\\n$\\tr(\\mathbf{X})$ & trace of $\\mathbf{X}$, obtained by $\\sum_{i}\\mathbf{X}_{i,i}$\\\\\n$\\norm{\\mathbf{X}}_{*}$ & nuclear norm of $\\mathbf{X}$, obtained by $\\tr(\\sqrt{\\mathbf{X}^{\\top}\\mathbf{X}})$\\\\\n$\\norm{\\mathbf{x}}_{2}$ & $\\ell_{2}$ norm of $\\mathbf{x}$, obtained by $\\sqrt{\\sum_{j}\\mathbf{x}_{j}^{2}}$\\\\\n$\\odot$ & the element-wise multiplication operator\\\\\n$\\phi_{i}$ & the neighbouring pixels of the target pixel $i$\\\\\n\\end{supertabular}\n\n\\section{Hyperspectral Image Restoration}\nOwing to the wealth of spectral information, the HS image has been widely used in different kinds of applications, including urban planning, agriculture, forestry, target detection, and so on. However, due to the limitations of hyperspectral imaging systems and weather conditions, HS images often suffer from various types of noise. Fig.~\\ref{fig:noiseType} illustrates different noise types of HS images observed from airborne and spaceborne sensors. Therefore, HS image denoising and restoration is a necessary pre-processing step for subsequent applications.\n\n\\begin{figure}[!t]\n\t \\centering\n\t\t\\includegraphics[width=0.48\\textwidth]{Noise_type}\n \\caption{Examples for different noise types of HS Images observed from airborne and spaceborne sensors.}\n\\label{fig:noiseType}\n\\end{figure}\n\nThe statistical distribution of hyperspectral noise is complicated. For instance, the readout noise, which is assumed to obey the Gaussian distribution, is produced by the imaging device (charge-coupled device) during the conversion from electrons to the final image~\\cite{martin2007anisotropic}. Stripes are generated in the hyperspectral data collected by push-broom hyperspectral sensors~\\cite{datt2003preprocessing}. Due to weather conditions, the obtained HS data often suffer from cloud and cloud shadow~\\cite{gomez2005cloud}. Besides, HS images also suffer from signal-dependent noise~\\cite{acito2011signal}, multiplicative noise, impulsive noise, Laplace noise and so on. In this paper, we follow the mainstream and focus on the Gaussian noise removal~\\cite{Chang1999,yuan2012hyperspectral} and mixed noise removal problems~\\cite{zhang2013hyperspectral,he2015total}.\n\nSince the 1980s, researchers have paid attention to HS noise analysis. For example, maximum noise fraction (MNF) transformation~\\cite{green1988transformation} and noise-adjusted principal component analysis~\\cite{Chang1999} have been utilized to extract high-quality components for the subsequent classification and reject the low-quality components.
Following the mainstream of gray\/color image denoising in the computer vision community, various state-of-the-art technologies have been adopted, such as wavelets~\\cite{rasti2014wavelet}, sparse representation~\\cite{qian2012hyperspectral,li2016noise}, TV~\\cite{yuan2012hyperspectral,wu2017structure}, non-local means processing~\\cite{CVPR2014Meng,he2018non}, low-rank matrix representation~\\cite{zhang2013hyperspectral,he2015hyperspectral,xie2016hyperspectral,he2015total}, tensor representation~\\cite{xie2017kronecker,chen2020TCYB,chen2020TGRS}, DL~\\cite{chang2018hsi,yuanHSID-CNN2018} and so on.\n\nNon-convex regularized methods have also been developed for HS image restoration. Following~\\cite{zhang2013hyperspectral}, the HS images are assumed to be corrupted by additive noise, including Gaussian noise, stripes, deadlines, pixel missing, impulse noise and so on. The observation model is formulated as:\n\\begin{equation}\n\\label{eq:ob}\n\\tensor{Y} = \\tensor{X} + \\tensor{N}_{G} + \\tensor{N}_{S},\n\\end{equation}\nwhere $\\tensor{Y}$ represents the observed noisy image, $\\tensor{X}$ stands for the latent clean image, $\\tensor{N}_{G}$ is the Gaussian noise, and $\\tensor{N}_{S}$ is the sparse noise, including stripes, deadlines, pixel missing and impulse noise. Typically, when the sparse noise $\\tensor{N}_{S}$ is omitted, the model \\eqref{eq:ob} degenerates to the Gaussian noise removal problem. Since~\\cite{zhang2013hyperspectral}, the exploration of the low-rank property of the clean image $\\tensor{X}$ has attracted much attention and achieved state-of-the-art HS image denoising performance~\\cite{he2018non,chen2020TGRS}. Generally speaking, the mainstream research revolves around two questions. The first is how to explore the low-rank property of the clean image $\\tensor{X}$. Until now, the spectral low-rank property~\\cite{zhang2013hyperspectral,he2015hyperspectral}, spatial low-rank property~\\cite{liu2012denoising,chen2020TCYB} and non-local low-rank property~\\cite{xie2017kronecker,he2018non} have been well studied. Furthermore, how to balance the contributions of different low-rank properties is also a key problem~\\cite{chang2017hyper,he2018non}. The second is that the low-rank constraint on $\\tensor{X}$ yields a non-convex optimization problem; how to solve this rank-constrained non-convex problem efficiently is another key problem. In the next subsection, we review several outstanding works and illustrate how they formulate the low-rank modeling and solve the non-convex problem.\n\n\\begin{table}[!t]\n \\centering\n \\caption{Prior properties of the selected methods.
One, two, and three $\\bullet$ denote the low, medium, and high intensity of prior information, respectively.}\n \\footnotesize\n \\begin{tabular}{c||c|c|c||c|c}\n \\toprule[1.5pt]\n \\multirow{2}{*}{Methods} & \\multicolumn{3}{c||}{Low-rankness} & \\multicolumn{2}{c}{Local smoothness}\\\\\n \\cline{2-6}& spatial & spectral & non-local & spatial & spectral \\\\\n \\hline\n LRTA & $\\bullet \\bullet$ & $\\bullet \\bullet$ & & & \\\\\n NAILRMA & & $\\bullet \\bullet$ & & & \\\\\n TDL & & $\\bullet$ & $\\bullet \\bullet \\bullet$ & & \\\\\n FastHyDe & & $\\bullet \\bullet$ & $\\bullet$ & & \\\\\n NGmeet & & $\\bullet \\bullet$ & $\\bullet \\bullet$ & & \\\\\n \\hline\n LRMR & & $\\bullet \\bullet$ & & & \\\\\n LRTV & & $\\bullet \\bullet$ & & $\\bullet \\bullet$ & \\\\\n LRTDTV & $\\bullet$ & $\\bullet \\bullet$ & & $\\bullet \\bullet$ & $\\bullet \\bullet$ \\\\\n LRTDGS & $\\bullet$ & $\\bullet \\bullet \\bullet$ & & $\\bullet \\bullet \\bullet$ & \\\\\n LRTF-FS & & $\\bullet \\bullet \\bullet$ & & $\\bullet \\bullet \\bullet$ & $\\bullet \\bullet$ \\\\\n \\bottomrule[1.5pt]\n \\end{tabular}%\n \\label{tab:Denoising_property}%\n\\end{table}%\n\n\n\\begin{table*}[!t]\n \\centering\n \\caption{The restoration results of the selected methods on Gaussian noise and mixed noise removal, respectively.}\n \\footnotesize\n \\begin{tabular}{c||ccccc|ccccc}\n \\toprule[1.5pt]\n \\multirow{2}{*}{Index} & \\multicolumn{5}{c|}{Gaussian noise removal} & \\multicolumn{5}{c}{Mixed noise removal} \\\\\n\\cline{2-11} & LRTA & NAILRMA & TDL & FastHyDe & NGmeet & LRMR & LRTV & LRTDTV & LRTDGS & LRTF-FS \\\\\n \\hline\n PSNR & 25.99 & 32.81 & 32.11 & 33.51 & 33.82 & 32.22 & 33.05 & 32.34 &33.33 & 33.26 \\\\\n SSIM & 0.7095 & 0.9519 & 0.9443 & 0.9601 & 0.9607 & 0.9401 & 0.9459 &0.9335 &0.9596 & 0.9549 \\\\\n MSA & 11.35 & 4.75 & 4.72 & 4.42 & 4.38 & 4.95 & 5.06 & 4.09 & 4.24 & 4.44 \\\\\n \\bottomrule[1.5pt]\n \\end{tabular}%\n \\label{tab:DenoisResults}%\n\\end{table*}%\n\n\\subsection{Gaussian Noise Removal}\nIn this section, five methods are selected to represent the state-of-the-art HS image Gaussian noise removal approaches. These methods utilize different low-rank matrix\/tensor decomposition models to exploit the spatial, spectral, or non-local low-rank properties of the original clean HS image. The properties of these five methods are summarized in Table \\ref{tab:Denoising_property}. The five methods are briefly described in the following.\n\n\\subsubsection{LRTA}\\footnote{\\url{https:\/\/www.sandia.gov\/tgkolda\/TensorToolbox\/}}\nOn the basis of the observation model~\\eqref{eq:ob}, ignoring the sparse noise $\\tensor{N}_{S}$, low-rank tensor approximation (LRTA)~\\cite{renard2008denoising} tries to restore the HS image from the following objective function\n\\begin{equation}\n\\label{LRTA}\n\\min_{\\tensor{X}} \\|\\tensor{Y}- \\tensor{X}\\|_{\\F}^2\\;\\;\n{\\rm s.t.}\\; \\tensor{X} = \\tensor{O} \\times_1 \\mat{A} \\times_2 \\mat{B} \\times_3 \\mat{C},\n\\end{equation}\nwhere $\\tensor{X} = \\tensor{O} \\times_1 \\mat{A} \\times_2 \\mat{B} \\times_3 \\mat{C}$ is the Tucker decomposition,\n$\\tensor{O} \\in \\mathbb{R}^{r_1 \\times r_2 \\times r_3}$ stands for the core tensor, and $\\mat{A} \\in \\mathbb{R}^{M \\times r_1}, \\mat{B} \\in \\mathbb{R}^{N \\times r_2}, \\mat{C} \\in \\mathbb{R}^{B \\times r_3}$ are the factors related to different dimensions. With the rank $(r_1, r_2, r_3)$ of the Tucker decomposition set in advance, the LRTA model \\eqref{LRTA} can simultaneously capture the global spatial and spectral low-rank properties.
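\n\nAs a brief, hands-on illustration of the Tucker model above, the following sketch computes a truncated higher-order SVD, one standard way to approximate Eq. (\\ref{LRTA}); the exact algorithm of~\\cite{renard2008denoising} may differ, and all names and rank choices here are illustrative assumptions.\n\\begin{verbatim}\nimport numpy as np\n\ndef unfold(T, mode):\n    # Mode-n unfolding of a 3-D tensor into a matrix.\n    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)\n\ndef mode_product(T, M, mode):\n    # Multiply tensor T by matrix M along the given mode.\n    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1),\n                       0, mode)\n\ndef truncated_hosvd(Y, ranks):\n    # Approximate Y by a Tucker model with the given multilinear ranks.\n    factors = []\n    for mode, r in enumerate(ranks):\n        # Leading left singular vectors of each unfolding form the factor.\n        U, _, _ = np.linalg.svd(unfold(Y, mode), full_matrices=False)\n        factors.append(U[:, :r])\n    O = Y                       # core tensor: project onto the subspaces\n    for mode, U in enumerate(factors):\n        O = mode_product(O, U.T, mode)\n    X = O                       # reconstruction: expand the core back\n    for mode, U in enumerate(factors):\n        X = mode_product(X, U, mode)\n    return X\n\n# Toy usage: a low-rank cube corrupted by Gaussian noise.\nrng = np.random.default_rng(0)\nclean = np.einsum('ir,jr,kr->ijk',\n                  *(rng.standard_normal((20, 3)) for _ in range(3)))\nnoisy = clean + 0.1 * rng.standard_normal(clean.shape)\nX_hat = truncated_hosvd(noisy, (3, 3, 3))\n\\end{verbatim}\n\n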
Eq. \\eqref{LRTA} provides a simple and general model for different kinds of low-rank matrix\/tensor decomposition-based HS image denoising methods; that is to say, we can replace the Tucker decomposition constraint on $\\tensor{X}$ with other kinds of matrix\/tensor decompositions, such as the canonical polyadic (CP) decomposition, tensor train decomposition, tensor ring decomposition and so on.\n\n\\subsubsection{NAILRMA}\\footnote{\\url{https:\/\/sites.google.com\/site\/rshewei\/home}}\nThe noise-adjusted iterative LRMA (NAILRMA)~\\cite{he2015hyperspectral} method assumes that the spectral low-rank property is more important than the spatial one, and therefore only a spectral low-rank regularizer is utilized to constrain the original HS image $\\tensor{X}$. Other works~\\cite{zhang2020} also indicate that the spatial TV regularizer is more important than the spatial low-rank regularizer. In HS images, similar signatures representing the same class also appear at nearby spatial locations. To enhance the spectral low-rank property, NAILRMA segments the HS image into spatially overlapping patches and processes each patch individually. The noise intensity differs across bands of the HS image, which is a big challenge in HS image Gaussian noise removal; this is mitigated by the noise-adjusted iterative strategy~\\cite{he2015hyperspectral}. Finally, randomized singular value decomposition (RSVD) is utilized to solve the non-convex low-rank approximation problem.\n\n\\subsubsection{TDL}\\footnote{\\url{http:\/\/gr.xjtu.edu.cn\/web\/dymeng\/}}\nTensor dictionary learning (TDL) combines non-local regularization and low-rank tensor approximation. The noisy HS image is first segmented into spatially overlapping patches, and similar patches are clustered together to form a higher-order tensor. In this way, the non-local spatial information is collected. The higher-order tensors are then denoised in the same way as \\eqref{LRTA}, and finally the denoised higher-order tensors are used to form the final denoised HS image. TDL represents the first method to exploit the non-local low-rank property, and the subsequent methods LLRT~\\cite{chang2017hyper}, KBR~\\cite{xie2017kronecker}, and NLTR~\\cite{chen2020TGRS} also achieve remarkable HS image Gaussian noise removal results.\n\n\\subsubsection{FastHyDe}\\footnote{\\url{https:\/\/github.com\/LinaZhuang\/FastHyDe_FastHyIn}}\nThe main difference between HS images and color\/multispectral images is the number of spectral bands. To eliminate this difference and utilize well-developed color\/multispectral denoising methods for HS image denoising, Zhuang \\textit{et al.} proposed the fast HS denoising (FastHyDe)~\\cite{zhuang2018fast} method, which translates the HS image to a low-dimensional reduced image via SVD. With this translation, various state-of-the-art color\/multispectral image denoising methods, such as wavelets~\\cite{Chen2011TGRS} and BM3D~\\cite{zhuang2018fast}, are used to denoise the reduced image. Finally, the denoised reduced image is translated back to the denoised HS image via inverse SVD.
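\n\nA minimal sketch of this subspace pipeline is given below, written under simplifying assumptions: a plain Gaussian filter stands in for the sophisticated 2-D denoisers (e.g., BM3D) used by the actual method, and the function names, subspace dimension, and filter strength are illustrative only.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.ndimage import gaussian_filter\n\ndef subspace_denoise(Y, k=8, sigma=1.0):\n    # FastHyDe-style sketch: denoise an (H, W, B) cube in a k-dim subspace.\n    H, W, B = Y.shape\n    M = Y.reshape(-1, B)                   # pixels as rows\n    _, _, Vt = np.linalg.svd(M, full_matrices=False)\n    E = Vt[:k].T                           # spectral subspace basis (B, k)\n    reduced = (M @ E).reshape(H, W, k)     # k 'eigen-images'\n    # Stand-in 2-D denoiser applied to each reduced band separately.\n    for i in range(k):\n        reduced[..., i] = gaussian_filter(reduced[..., i], sigma)\n    return (reduced.reshape(-1, k) @ E.T).reshape(H, W, B)\n\n# Toy usage on a random smooth cube with additive Gaussian noise.\nrng = np.random.default_rng(0)\nclean = gaussian_filter(rng.standard_normal((64, 64, 40)), (4, 4, 8))\nX_hat = subspace_denoise(clean + 0.05 * rng.standard_normal(clean.shape))\n\\end{verbatim}\n\n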
Generally speaking, under the framework of FastHyDe, the HS image noise removal task is linked to the development of color\/multispectral image noise removal tasks.\n\n\\subsubsection{NGmeet}\\footnote{\\url{https:\/\/github.com\/quanmingyao\/NGMeet}}\nThe spatial non-local low-rank regularizer can produce state-of-the-art HS image noise removal performance. However, as the number of spectral bands increases, the time cost of non-local methods increases dramatically~\\cite{chang2017hyper,xie2017kronecker}. The non-local meets global (NGmeet) method also translates the HS image to a reduced image and utilizes a non-local low-rank method to denoise the reduced image. Different from FastHyDe, NGmeet tries to perfect the framework by iteratively eliminating the error caused by the SVD on the noisy HS image and automatically estimating the spectral rank of the reduced image.\n\n\n\\subsection{Mixed Noise Removal}\nIn this section, we select five representative methods for HS image mixed noise removal. These methods are based on the observation model~\\eqref{eq:ob}. We focus on the non-convex low-rank regularizer of the original image $\\tensor{X}$. The properties of these five methods are summarized in Table \\ref{tab:Denoising_property}.\n\n\\subsubsection{LRMR}\nZhang \\textit{et al.} first introduced the observation model \\eqref{eq:ob} to analyze complex HS noise~\\cite{zhang2013hyperspectral}. LRMR tries to restore the original clean image $\\tensor{X}$ from the noisy image via the following low-rank and sparse decomposition model:\n\\begin{equation}\n\\label{LRMR}\n\\min_{\\mat{X}} \\norm{\\mat{Y}- \\mat{X} - \\mat{N}_{S}}_{\\F}^2+ \\lambda_{1} {\\rm rank}(\\mat{X}) + \\lambda_{2} {\\rm card}(\\mat{N}_{S}),\n\\end{equation}\nwhere $\\mat{Y}, \\mat{X}, \\mat{N}_{S}$ are the reshaped matrices of $\\tensor{Y}, \\tensor{X}, \\tensor{N}_{S}$ along the spectral dimension, respectively, and $\\lambda_1$ and $\\lambda_2$ are the parameters that trade off the contributions of ${\\rm rank}(\\mat{X})$ and the number of non-zero elements ${\\rm card}(\\mat{N}_{S})$. LRMR utilizes the ``GoDec'' algorithm~\\cite{zhou2011godec} to alternately update the non-convex-constrained $\\mat{X}$ and $\\mat{N}_{S}$, and finally obtains the restored image. To improve the efficiency of the optimization of~\\eqref{LRMR}, several non-convex substitutions, such as the reweighted nuclear norm~\\cite{xie2016hyperspectral}, $\\gamma$-norm~\\cite{chenyongyong2017TGRS}, smooth rank approximation~\\cite{Ye2019TGRS} and normalized $\\epsilon$-penalty~\\cite{Xie2020TIP}, have been further developed to exploit the spectral low-rank property of $\\mat{X}$.\n\n\\subsubsection{LRTV}\nLow-rank total variation (LRTV) claimed that the spectral low-rank property alone is not enough to characterize HS images, and therefore introduced TV to explore the spatial smoothness. Generally, low-rank regularization and TV are the two most studied regularizers, and combining them to produce state-of-the-art HS image mixed noise removal is becoming popular. Most of the following works try to improve either the low-rank modeling~\\cite{Fan2018TGRS,Wang2018} or the smoothness modeling~\\cite{he2017Jstars,chen2020TCYB,ZengTGRS2020} of the HS image to further improve the restoration accuracy.
To further combine low-rank and TV priors, the low-rank exploration of the HS difference image has also been developed~\\cite{Sun2017letter,Peng2020TIP}.\n\n\\subsubsection{LRTDTV}\nLow-rank tensor decomposition with TV (LRTDTV)~\\cite{Wang2018} tries to improve LRTV by utilizing low-rank tensor decomposition to exploit the low-rank property of HS images, while using spatial-spectral TV (SSTV) to explore the spatial and spectral smoothness simultaneously. Although LRTDTV achieved better mixed noise removal results as reported in~\\cite{Wang2018}, the spatial rank utilized in LRTDTV is much larger than the spectral rank. This is mainly because the spatial low-rank property of HS images is not so important compared to the spectral low-rank property. On the other hand, spatial non-local low-rank regularization has been proven to be more effective~\\cite{zhang2020} than the spatial low-rank prior for the HS restoration problem.\n\n\\subsubsection{LRTDGS}\\footnote{\\url{ https:\/\/chenyong1993.github.io\/yongchen.github.io\/}}\nLow-rank tensor decomposition with group sparse regularization (LRTDGS)~\\cite{chen2020TCYB} also utilizes low-rank tensor decomposition to exploit the low-rank property of HS images. Differently, LRTDGS explores the group sparsity of the difference image instead of the SSTV used in LRTDTV. In terms of mathematical modeling, LRTDGS utilizes weighted $\\ell_{2,1}$ norm regularization to enforce the row-group sparsity of the difference image.\n\n\\subsubsection{LRTF-FR}\\footnote{\\url{ https:\/\/yubangzheng.github.io\/homepage\/}}\nFollowing the idea of NGmeet~\\cite{he2018non}, factor-regularized low-rank tensor factorization (LRTF-FR)~\\cite{ZhengTGRS2020} also utilizes matrix decomposition to decouple the spatial and spectral priors. On the one hand, the spectral signatures of the HS image are assumed to have a smooth structure. On the other hand, the reduced image is assumed to have a group sparse structure in the difference domain. The optimization model of LRTF-FR is\n\\begin{equation}\n\\label{LTRF-FR}\n\\begin{aligned}\n \\min_{\\tensor{X},\\tensor{N}_{S}}&\\norm{\\tensor{Y}- \\tensor{X} - \\tensor{N}_{S}}_{\\F}^2+ \\lambda_1\\norm{\\tensor{X} \\times_3 \\mat{H}_3}_{2,1}\\\\\n &+ \\lambda_2 \\sum_{k=1}^{2} \\norm{\\tensor{X} \\times_k \\mat{H}_k}_{\\F}^{2} + \\lambda_3 \\norm{\\tensor{N}_{S}}_{1,1},\n\\end{aligned}\n\\end{equation}\nwhere $\\mat{H}_k, k=1,2,3$ are the first-order difference matrices. Furthermore, in the optimization of \\eqref{LTRF-FR}, a reweighted strategy is utilized to update the $\\ell_{2,1}$ and $\\ell_{1}$ norms to further improve the restored results.\n\n\\subsection{Experimental Study}\nWe choose an HS image from the DLR Earth Sensing Imaging Spectrometer (DESIS) installed on the International Space Station (ISS)~\\cite{alonso2019data} for the experimental study to compare different methods on the Gaussian and mixed noise removal tasks. We remove the noisy bands and select a sub-image of size $400 \\times 400 \\times 200$ as the clean reference image, which is normalized to $[0,1]$. First, we add Gaussian noise with variance $0.1569$ to simulate the Gaussian noisy image, and apply different Gaussian noise removal methods to remove the Gaussian noise. Furthermore, we add salt \\& pepper noise and stripes to simulate the mixed noisy image, and apply mixed noise removal methods to remove the mixed noise.
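\n\nFor clarity, a minimal sketch of such a noise simulation is given below; the exact stripe and impulse patterns used in our experiments may differ, so the helper name and parameter values are illustrative assumptions only.\n\\begin{verbatim}\nimport numpy as np\n\ndef simulate_noise(X, gauss_var=0.1569, sp_ratio=0.1,\n                   stripe_bands=20, seed=0):\n    # Corrupt a clean (H, W, B) cube in [0, 1] with Gaussian + mixed noise.\n    rng = np.random.default_rng(seed)\n    Y = X + np.sqrt(gauss_var) * rng.standard_normal(X.shape)\n    # Salt & pepper: set a fraction of entries to 0 or 1 at random.\n    mask = rng.random(X.shape) < sp_ratio\n    Y[mask] = rng.integers(0, 2, size=mask.sum()).astype(float)\n    # Stripes: add constant column offsets to a few randomly chosen bands.\n    for b in rng.choice(X.shape[2], size=stripe_bands, replace=False):\n        cols = rng.choice(X.shape[1], size=X.shape[1] \/\/ 10, replace=False)\n        Y[:, cols, b] += rng.uniform(-0.5, 0.5, size=cols.size)\n    return Y\n\n# Usage: clean = ...  # (400, 400, 200) reference cube scaled to [0, 1]\n# noisy = simulate_noise(clean)\n\\end{verbatim}\n\n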
Similar to~\\cite{chen2020TCYB}, we choose the mean peak signal-to-noise ratio (PSNR) over all bands, the mean structural similarity (SSIM) over all bands, and the mean spectral angle mapping (MSA) over all spectral vectors to evaluate the restored results.\n\n\\begin{figure*}[!t]\n\t \\centering\n\t\t\t\\includegraphics[width=1\\textwidth]{HyDe}\n \\caption{The illustration of different methods on the noise removal results. The first row shows the results of different methods on the Gaussian noise removal experiments (R:70, G:100, B:36). The second row displays the results of different methods on the mixed noise removal experiments (R:31, G:80, B:7).}\n\\label{fig:restoration}\n\\end{figure*}\n\nTable \\ref{tab:DenoisResults} presents the evaluation values of different methods on the Gaussian noise and mixed noise removal tasks, respectively. For the Gaussian noise removal task, NGmeet achieves the best values on all three evaluation indices. However, the gap between NGmeet and FastHyDe is limited. For the mixed noise removal task, LRTDGS achieves the best accuracy in PSNR and SSIM values, while LRTDTV achieves the best MSA value. Combining Tables \\ref{tab:Denoising_property} and \\ref{tab:DenoisResults}, we can conclude that, firstly, the spectral low-rank prior information is important for HS restoration. Secondly, the contribution of spatial low-rank prior information to HS restoration is limited. Thirdly, on the basis of spectral low-rank regularization, spatial and spectral smoothness priors can further improve the final HS restoration results.\n\n\\subsection{Remaining Challenges}\nTo date, many non-convex regularized methods have been proposed to develop low-rank priors and local smoothness priors, and they achieve remarkable HS restoration results for Gaussian and mixed noise removal. However, these methods still face several challenges for further work. We summarize these challenges as follows.\n\n\\begin{itemize}\n \\item \\textbf{Efficiency.} Although low-rank related methods have achieved state-of-the-art restoration results, they are time-consuming. For instance, NGmeet and LRTDGS spend more than 10 minutes processing an HS image of size $400 \\times 400 \\times 200$. Furthermore, the related state-of-the-art restoration methods always exploit multiple priors of the HS image, resulting in confusion in parameter selection. Hence, how to reduce the model complexity and improve the optimization efficiency of HS image restoration is a key challenge.\n \\item \\textbf{Scalability.} Previous non-convex methods always focus on processing small HS images. However, HS images are used to observe the Earth, and the spatial size of one scene is usually very large. How to improve the scalability of the restoration approaches is a key challenge. DL provides the possibility of fast and large-scale processing of HS images. However, DL approaches rely on the quality of training samples, and the applicable scope of the test data is always limited. To improve the scalability, how to embed well-studied non-convex regularizers into DL architectures should also be further analyzed.\n\n \\item \\textbf{Real Application.} Until now, most HS image restoration methods have been evaluated in simulated experiments. However, in most cases, the evaluation indices fail to predict the accuracy of real HS image restoration results. Moreover, the noise distribution in real noisy HS images is complex.
How to validate the related methods on real HS images should also be further analyzed. Moreover, training samples in real applications are always limited. Blind and unsupervised approaches will become the mainstream of future real HS image restoration.\n\\end{itemize}\n\n\n\\section{Dimensionality Reduction}\nHS dimensionality reduction (DR) and feature extraction have long been a fundamental but challenging research topic in HS RS \\cite{li2018discriminant,rasti2020feature}. The main reasons lie in the following aspects. Due to the high correlation between spectral bands, HS images are subject to information redundancy, which could hurt the ability to discriminate materials in certain extreme cases (the \\textit{curse of dimensionality}). In addition, as the HS dimension gradually increases along the spectral domain, large storage capability and high-performance computing are needed. Furthermore, these dimension-reduced features are usually applied to high-level classification or detection tasks \\cite{wu2019orsim,wu2020fourier}. Recently, many works based on non-convex modeling have been shown to be effective for automatically extracting dimension-reduced features of HS images. Linking with Eq. (\\ref{g_eq2}), the DR task can be generalized to the following optimization problem:\n\\begin{equation}\n\\label{DR_eq1}\n\\begin{aligned}\n \\mathop{\\min}_{f_{\\Phi},\\mathbf{X}}\\frac{1}{2}\\norm{f_{\\Phi}(\\mathbf{Y})-\\mathbf{X}}_{\\F}^{2}\\;\\; {\\rm s.t.}\\; \\mathbf{X},f_{\\Phi}\\in \\mathcal{C},\n\\end{aligned}\n\\end{equation}\nwhere $f_{\\Phi}(\\bullet)$ denotes the transformation from the original HS space to dimension-reduced subspaces with respect to the variable set $\\Phi$, and $\\mathbf{X}$ is the low-dimensional representation of $\\mathbf{Y}$. Revolving around the general form in Eq. (\\ref{DR_eq1}), we review currently advanced DR methods from three different aspects: unsupervised, supervised, and semi-supervised models.\n\n\\begin{figure*}[!t]\n\t \\centering\n\t\t\t\\includegraphics[width=0.9\\textwidth]{GRSM_SupervisedDR}\n \\caption{An illustration for supervised DR models in HS images with two different groups: discriminant analysis based DR and regression-induced DR.}\n\\label{fig:GRSM_SupervisedDR}\n\\end{figure*}\n\n\\subsection{Unsupervised Model}\nNon-negative matrix factorization (NMF) \\cite{lee2001algorithms} is a common unsupervised learning tool, which has been widely applied in HS DR. These works can be well explained by Eq. (\\ref{DR_eq1}); the NMF-based DR problem can then be formulated as\n\\begin{equation}\n\\label{DR_eq2}\n\\begin{aligned}\n \\mathop{\\min}_{\\mathbf{Q}\\geq \\mathbf{0},\\mathbf{X}\\geq \\mathbf{0}}\\frac{1}{2}\\norm{\\mathbf{Y}-\\mathbf{X}\\mathbf{Q}}_{\\F}^{2} + \\Psi(\\mathbf{X}) + \\Omega(\\mathbf{Q}),\n\\end{aligned}\n\\end{equation}\nwhere $\\mathbf{Q}$ denotes the combination coefficients, and $\\Psi(\\mathbf{X})$ and $\\Omega(\\mathbf{Q})$ are the potential regularization terms for the variables $\\mathbf{X}$ and $\\mathbf{Q}$, respectively. To date, there have been several advanced NMF-based works in HS DR. Gillis \\textit{et al.} \\cite{gillis2013sparse} used sparse NMF underapproximations for HS data analysis. Yan \\textit{et al.} \\cite{yan2018novel} proposed a graph-regularized orthogonal NMF (GONMF) model with application to the spatial-spectral DR of HS images.
Wen \\textit{et al.} \\cite{wen2016orthogonal} further extended the GONMF by combining multiple features for HS DR. Rasti \\textit{et al.} \\cite{rasti2016hyperspectral} designed an orthogonal total variation component analysis (OTVCA) approach for HS feature extraction. Moreover, the HS data are directly regarded as a high-dimensional tensor structure in \\cite{an2018tensor}, where the low-rank attribute is fully considered in the process of low-dimensional embedding. In detail, we summarize the regularization and constraints of the above methods as follows:\n\\begin{itemize}\n \\item Sparsity \\cite{gillis2013sparse}: $\\Omega(\\mathbf{Q})=\\norm{\\mathbf{Q}}_{0}$;\n \\item Graph Regularization \\cite{yan2018novel}:\\\\ $\\Psi(\\mathbf{X})=\\tr(\\mathbf{X}\\mathbf{L}\\mathbf{X}^{\\top}),\\; \\rm{s.t.}\\; \\mathbf{X}\\mathbf{X}^{\\top}=\\mathbf{I}$;\n \\item Multi-graph Regularization \\cite{wen2016orthogonal}:\\\\ $\\Psi(\\mathbf{X})=\\sum_{i=1}^{s}\\tr(\\mathbf{X}\\mathbf{L}^{i}\\mathbf{X}^{\\top}),\\; \\rm{s.t.}\\; \\mathbf{X}\\mathbf{X}^{\\top}=\\mathbf{I}$;\n \\item Total Variation \\cite{rasti2016hyperspectral}:\\\\ $\\Psi(\\mathbf{X})=\\sum_{i=1}^{r}\\norm{\\sqrt{(\\mathbf{H}_{h}\\mathbf{X}_{i})^{2}+(\\mathbf{H}_{v}\\mathbf{X}_{i})^{2}}}_{1},\\\\\n \\rm{s.t.}\\; \\mathbf{Q}\\mathbf{Q}^{\\top}=\\mathbf{I}$;\n \\item Low-rank Graph \\cite{an2018tensor}: $\\Psi(\\mathbf{X})=\\norm{\\mathbf{X}}_{*}+\\tr(\\mathbf{X}\\mathbf{L}\\mathbf{X}^{\\top})$.\n\\end{itemize}\n$\\mathbf{L}=\\mathbf{D}-\\mathbf{W}$ is the Laplacian matrix, where $\\mathbf{D}_{i,i}=\\sum_{j}\\mathbf{W}_{i,j}$ is the degree matrix and $\\mathbf{W}$ is the graph (or manifold) structure of $\\mathbf{X}$ \\cite{belkin2003laplacian}. $\\norm{\\bullet}_{0}$, $\\norm{\\bullet}_{2,1}$, and $\\norm{\\bullet}_{*}$ denote the $\\ell_{0}$-norm \\cite{aharon2006k}, $\\ell_{2,1}$-norm \\cite{nie2010efficient}, and nuclear norm \\cite{recht2010guaranteed}, respectively.\n\nAnother type of unsupervised DR approach is \\textit{graph embedding}, also known as \\textit{manifold learning}, which can also be well grouped into Eq. (\\ref{DR_eq1}) (according to \\cite{wu2017joint}):\n\\begin{equation}\n\\label{DR_eq3}\n\\begin{aligned}\n \\mathop{\\min}_{\\mathbf{U},\\mathbf{X}}\\frac{1}{2}\\norm{\\mathbf{X}-\\mathbf{U}\\mathbf{Y}}_{\\F}^{2} + \\Psi(\\mathbf{X}) + \\Omega(\\mathbf{U})\\; {\\rm s.t.}\\; \\mathbf{X}\\mathbf{X}^{\\top}=\\mathbf{I},\n\\end{aligned}\n\\end{equation}\nwhere $\\mathbf{U}$ denotes the to-be-estimated projection matrix that bridges the high-dimensional data $\\mathbf{Y}$ with the low-dimensional embedding $\\mathbf{X}$. The regularization term for the variable $\\mathbf{U}$ can usually be expressed as\n\\begin{equation}\n\\label{DR_eq4}\n\\begin{aligned}\n \\Omega(\\mathbf{U})=\\tr(\\mathbf{U}\\mathbf{Y}\\mathbf{L}\\mathbf{Y}^{\\top}\\mathbf{U}^{\\top})+\\norm{\\mathbf{U}}_{\\F}^{2},\n\\end{aligned}\n\\end{equation}\nwhile the regularizer with respect to $\\mathbf{X}$ can be given by\n\\begin{equation}\n\\label{DR_eq5}\n\\begin{aligned}\n \\Psi(\\mathbf{X})=\\tr(\\mathbf{X}\\mathbf{L}\\mathbf{X}^{\\top}).\n\\end{aligned}\n\\end{equation}\nThe main difference between these \\textit{manifold learning}-based DR approaches lies in the graph construction, i.e., $\\mathbf{W}$.
Ma \\textit{et al.} \\cite{ma2010local} integrated the KNN classifier with several representative \\textit{manifold learning} algorithms, i.e., locally linear embedding \\cite{roweis2000nonlinear}, Laplacian eigenmaps \\cite{belkin2003laplacian}, and local tangent space alignment \\cite{zhang2004principal}, for HS image classification. Huang \\textit{et al.} \\cite{huang2015dimensionality} embedded the sparse graph structure, which is obtained by solving an $\\ell_{1}$-norm optimization problem, for the DR of HS images. He \\textit{et al.} \\cite{he2016weighted} extended the work of \\cite{huang2015dimensionality} by generating a weighted sparse graph. Hong \\textit{et al.} \\cite{hong2017learning} developed a new spatial-spectral graph for the DR of HS images, called RLMR, by jointly taking the neighbouring pixels of a target pixel in the spatial and spectral domains into account. An \\textit{et al.} \\cite{an2018patch} attempted to learn low-dimensional tensorized HS representations on a sparse and low-rank graph. To sum up, the core graphs of the aforementioned methods can be obtained as follows (a sketch of the sparse graph construction is given after the list):\n\\begin{itemize}\n \\item Sparse Graph \\cite{huang2015dimensionality}: $\\min_{\\mathbf{W}}\\norm{\\mathbf{W}}_{1,1}, \\;\\rm{s.t.}\\; \\mathbf{Y}=\\mathbf{Y}\\mathbf{W}$;\n \\item Weighted Sparse Graph \\cite{he2016weighted}:\\\\ $\\min_{\\mathbf{W}}\\norm{\\mathbf{d}\\odot \\mathbf{W}}_{1,1}, \\;\\rm{s.t.}\\; \\mathbf{Y}=\\mathbf{Y}\\mathbf{W},$\\\\\n where $\\mathbf{d}$ denotes a weighted matrix on $\\mathbf{W}$ and $\\odot$ is the element-wise multiplication operator;\n \\item Spatial-spectral Graph \\cite{hong2017learning}: \\\\\n $\\mathop{\\min}_{\\mathbf{w}_{i,0}}\\sum_{j\\in \\phi_{i}^{spa}}\\norm{\\mathbf{y}_{i,j}-\\sum_{k\\in \\phi_{i}^{spe}}\\mathbf{y}_{i,k}w_{i,k,j}}_{2}^{2} \\\\\n {\\rm s.t.} \\; \\norm{\\sum_{k\\in \\phi_{i}^{spe}}\\mathbf{y}_{i,k}(4w_{i,k,0}-\\sum_{k=1}^{4}w_{i,k,j})}_{2}^{2}\\leq \\eta, \\\\\n \\qquad \\mathbf{w}_{i,j}^{\\T}\\mathbf{w}_{i,j}=1,$\\\\\n where $\\phi_{i}^{spa}$ and $\\phi_{i}^{spe}$ denote the neighbouring pixels in the spatial and spectral spaces, respectively;\n \\item Sparse and Low-rank Graph \\cite{an2018patch}:\\\\ $\\min_{\\mathbf{W}}\\norm{\\mathbf{W}}_{1,1}+\\norm{\\mathbf{W}}_{*}, \\;\\rm{s.t.}\\; \\mathbf{Y}=\\mathbf{Y}\\mathbf{W}$.\n\\end{itemize}
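\nAs referenced above, the sparse graph of \\cite{huang2015dimensionality} can be approximated in a few lines of Python\/NumPy by relaxing the hard constraint $\\mathbf{Y}=\\mathbf{Y}\\mathbf{W}$ to a least-squares fit and running iterative soft-thresholding (ISTA); the penalty weight and iteration count below are illustrative assumptions rather than the settings of the original work.\n\\begin{verbatim}\nimport numpy as np\n\ndef soft(x, t):\n    # Soft-thresholding: the proximal operator of t*||.||_1.\n    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)\n\ndef sparse_graph(Y, lam=0.1, n_iter=300):\n    # Relaxed sparse graph: min_W 0.5*||Y - Y W||_F^2 + lam*||W||_1,\n    # with a zero diagonal to forbid trivial self-representation.\n    N = Y.shape[1]\n    W = np.zeros((N, N))\n    step = 1.0 / np.linalg.norm(Y.T @ Y, 2)   # 1/Lipschitz constant\n    for _ in range(n_iter):\n        G = Y.T @ (Y @ W - Y)                 # gradient of the fit term\n        W = soft(W - step * G, step * lam)    # ISTA update\n        np.fill_diagonal(W, 0.0)\n    return W\n\\end{verbatim}\n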
\n\n\\subsection{Supervised Model}\n\nUnlike unsupervised DR, which relies on embedding various priors to reduce the dimension of HS data, supervised models are capable of better learning class-separable low-dimensional representations via the use of label information. The supervised DR models are described in two different categories in this subsection, as shown in Fig. \\ref{fig:GRSM_SupervisedDR}. A typical group is the \\textit{discriminant analysis} \\cite{li2018discriminant}, closely related to \\textit{graph embedding} and \\textit{manifold learning}. Intuitively speaking, these methods belong to a special case of unsupervised \\textit{graph embedding}, which means they can be well explained by Eq. (\\ref{DR_eq3}). The main difference lies in that the labels are used for constructing the graph structure, i.e., $\\mathbf{W}$, thereby yielding a more discriminative subspace.\n\nIn supervised DR, a direct graph structure is written as\n\\begin{equation}\n\\label{DR_eq6}\n {\\bf W}_{ij}=\n \\begin{cases}\n \\begin{aligned}\n 1, \\; \\; & \\text{if ${\\bf y}_{i}$ and ${\\bf y}_{j} \\in C_{k}$;}\\\\\n 0, \\; \\; & \\text{otherwise,}\n \\end{aligned}\n \\end{cases}\n\\end{equation}\nwhere $C_{k}$ denotes the sample set of the $k$-th class. Furthermore, more advanced supervised graphs have been developed to better represent the HS data in a low-dimensional subspace, such as sparse graph discriminant analysis \\cite{ly2013sparse}, collaborative graph discriminant analysis \\cite{ly2014collaborative}, feature space discriminant analysis (FSDA) \\cite{imani2015feature}, and spatial-spectral local discriminant embedding \\cite{huang2019spatial}. These approaches seek to construct a \\textit{soft} graph instead of the \\textit{hard} graph in Eq. (\\ref{DR_eq6}). That is, the graph is built by using a radial basis function (RBF) to measure the similarity between samples belonging to the same class \\cite{hong2020graph1}:\n\\begin{equation}\n\\label{DR_eq7}\n {\\bf W}_{ij}=\n \\begin{cases}\n \\begin{aligned}\n \\exp\\left(-\\frac{\\norm{\\mathbf{y}_{i}-\\mathbf{y}_{j}}_{2}^{2}}{2\\sigma^{2}}\\right), \\; \\; & \\text{if ${\\bf y}_{i}$ and ${\\bf y}_{j} \\in C_{k}$;}\\\\\n 0, \\; \\; & \\text{otherwise,}\n \\end{aligned}\n \\end{cases}\n\\end{equation}\nor by solving $\\ell_{1}$-norm or $\\ell_{2}$-norm optimization functions within the same class set, e.g., \\cite{ly2013sparse}, \\cite{ly2014collaborative}.\n\nThe DR behavior can also be modeled from a regression perspective by directly connecting samples and labels \\cite{hong2019regression}, which provides new insight into the research of supervised HS DR. A general form of the regression-based supervised DR model can be formulated as\n\\begin{equation}\n\\label{DR_eq8}\n\\begin{aligned}\n \\mathop{\\min}_{\\mathbf{P}, \\mathbf{U}}&\\frac{1}{2}\\norm{\\mathbf{M}-\\mathbf{P}\\mathbf{X}}_{\\F}^{2}+\\Psi(\\mathbf{P})+\\Omega(\\mathbf{U})\\\\\n &{\\rm s.t.}\\; \\mathbf{X}=\\mathbf{U}\\mathbf{Y},\\; \\mathbf{U}\\mathbf{U}^{\\top}=\\mathbf{I},\n\\end{aligned}\n\\end{equation}\nwhere the variable $\\mathbf{P}$ denotes the regression coefficients or basis signals, and $\\mathbf{M}$ is the one-hot encoded matrix obtained from the labels. Eq. (\\ref{DR_eq8}) can, to some extent, be regarded as an interpretable linearized artificial neural network (ANN) model (shallow network). Ji \\textit{et al.} \\cite{ji2009linear} jointly performed DR and classification, which is a good fit for Eq. (\\ref{DR_eq8}) with $\\Psi(\\mathbf{P})=\\norm{\\mathbf{P}}_{\\F}^{2}$. To enhance the spectrally discriminative ability, Hong \\textit{et al.} \\cite{hong2018joint} employed an LDA-like graph on the basis of \\cite{ji2009linear} to regularize the low-dimensional representations in a Laplacian matrix form, i.e., $\\Omega(\\mathbf{U})=\\tr(\\mathbf{U}\\mathbf{Y}\\mathbf{L}\\mathbf{Y}^{\\top}\\mathbf{U}^{\\top})$.
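\nA minimal alternating sketch of this regression-induced family (Eq. (\\ref{DR_eq8})) is shown below, assuming a ridge term for $\\mathbf{P}$, a plain gradient step with an illustrative step size for $\\mathbf{U}$, and an SVD-based retraction onto $\\mathbf{U}\\mathbf{U}^{\\top}=\\mathbf{I}$; the graph regularizer $\\Omega(\\mathbf{U})$ is omitted for brevity.\n\\begin{verbatim}\nimport numpy as np\n\ndef regression_dr(Y, M, r, lam=1e-2, step=1e-3, n_iter=50):\n    # Y: (B x N) data, M: (C x N) one-hot labels, r: subspace dimension.\n    U = np.linalg.svd(Y, full_matrices=False)[0][:, :r].T  # (r x B) init\n    for _ in range(n_iter):\n        X = U @ Y\n        # Closed-form ridge regression for P given U.\n        P = M @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(r))\n        # Gradient step on U for the fit term, then retract to U U^T = I.\n        G = -P.T @ (M - P @ X) @ Y.T\n        Ws, _, Vs = np.linalg.svd(U - step * G, full_matrices=False)\n        U = Ws @ Vs                    # nearest matrix with orthonormal rows\n    return U, P\n\\end{verbatim}\n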
In the same work \\cite{hong2018joint}, Hong \\textit{et al.} further extended their model to a deep version, called J-Play, with a $k$-layered linear regression: \n\\begin{equation}\n\\label{DR_eq9}\n\\begin{aligned}\n &\\mathop{\\min}_{\\mathbf{P}, \\{\\mathbf{U}_{i}\\}_{i=1}^{k}}\\frac{1}{2}\\norm{\\mathbf{M}-\\mathbf{P}\\mathbf{X}_{k}}_{\\F}^{2}+\\Psi(\\mathbf{P})+\\Omega(\\{\\mathbf{U}_{i}\\}_{i=1}^{k})\\\\\n &{\\rm s.t.}\\; \\mathbf{X}_{i}=\\mathbf{U}_{i}\\mathbf{X}_{i-1},\\; \\mathbf{X}_{1}=\\mathbf{U}_{1}\\mathbf{Y},\\; \\mathbf{X}_{i}\\geq \\mathbf{0},\\; \\norm{\\mathbf{x}_{i}}_{2}\\leq 1,\n\\end{aligned}\n\\end{equation}\nwith $\\Psi(\\mathbf{P})=\\norm{\\mathbf{P}}_{\\F}^{2}$ and\n\\begin{equation*}\n\\begin{aligned}\n\\Omega(\\{\\mathbf{U}_{i}\\}_{i=1}^{k})&=\\sum_{i=1}^{k}\\tr(\\mathbf{U}_{i}\\mathbf{X}_{i-1}\\mathbf{L}\\mathbf{X}_{i-1}^{\\top}\\mathbf{U}_{i}^{\\top}) \\\\\n &+\\sum_{i=1}^{k}\\norm{\\mathbf{X}_{i-1}-\\mathbf{U}_{i}^{\\top}\\mathbf{U}_{i}\\mathbf{X}_{i-1}}_{\\F}^{2}.\n\\end{aligned}\n\\end{equation*}\nJ-Play attempts to open the ``black box'' of deep networks in an explainable way by multi-layered linearized modeling. With explicit mappings and physically meaningful priors, the non-convex J-Play takes a big step towards interpretable AI models.\n\n\\subsection{Semi-supervised Model}\nDue to the fact that labeling samples is extremely expensive, particularly for RS images covering a large geographic region, the joint use of labeled and unlabeled information becomes crucial in DR and classification. \n\nA simple and feasible strategy for semi-supervised learning is to integrate supervised and unsupervised techniques, e.g., LDA and locality preserving projections \\cite{he2004locality}. By simultaneously constructing graphs of labeled and unlabeled samples (e.g., using Eqs. (\\ref{DR_eq6}) and (\\ref{DR_eq7}), respectively), Eq. (\\ref{DR_eq3}) can easily be extended to a semi-supervised version, leading to semi-supervised discriminant analysis (SSDA) \\cite{liao2012semisupervised}. Zhao \\textit{et al.} \\cite{zhao2014general} further improved the SSDA performance by using ``soft'' (or ``pseudo'') labels predicted by label propagation instead of directly using unsupervised similarities between unlabeled samples. Similarly, Wu \\textit{et al.} \\cite{wu2018semi} generated pseudo-labels using the Dirichlet process mixture model and developed a novel SSDA approach to learn the low-dimensional HS embedding. The above-mentioned methods revolve around various hand-crafted graph structures ($\\mathbf{W}$). \n\n\\begin{figure}[!t]\n\t \\centering\n\t\t\t\\includegraphics[width=0.35\\textwidth]{GRSM_graph}\n \\caption{An example to clarify the graph structure of the JPSA method, where $\\mathbf{W}^{p}$ and $\\mathbf{W}^{sp}$ denote the pixel-wise and superpixel-wise subgraphs, and $\\mathbf{W}^{a}$ denotes the aligned graph between pixels and superpixels.}\n\\label{fig:graph}\n\\end{figure}\n\n\nA different idea is to simulate brain-like or human-like behaviors in the semi-supervised DR task. It is well known that the feedback reward is a key component of an intelligent information processing system. Inspired by this, \\cite{hong2019learning} developed an iterative multitask regression (IMR) framework that adaptively learns label propagation (LP) on graphs to simulate the feedback mechanism, thereby performing HS DR more effectively and efficiently. The IMR is a semi-supervised extension of Eq.
(\\ref{DR_eq8}) with graph learning, which can be generally modeled as \n\\begin{equation}\n\\label{DR_eq10}\n\\begin{aligned}\n \\mathop{\\min}_{\\mathbf{P},\\mathbf{U},\\mathbf{L}}&\\sum_{j=1}^{2}\\norm{\\mathbf{M}_{j}-\\mathbf{P}\\mathbf{U}\\mathbf{Y}_{j}}_{\\F}^{2}+\\Psi(\\mathbf{P})+\\Omega(\\mathbf{U})\\\\\n &\\mathrm{s.t.} \\; \\mathbf{U}\\mathbf{U}^{\\top}=\\mathbf{I}, \\; \\mathbf{L}\\in \\mathcal{C},\n\\end{aligned}\n\\end{equation}\nwhere $\\mathbf{Y}_{1}$ and $\\mathbf{Y}_{2}$ denote the labeled and unlabeled samples from $\\mathbf{Y}$, respectively, $\\Psi(\\mathbf{P})=\\norm{\\mathbf{P}}_{\\F}^{2}$, and $\\Omega(\\mathbf{U})=\\tr(\\mathbf{U}\\mathbf{Y}\\mathbf{L}\\mathbf{Y}^{\\top}\\mathbf{U}^{\\top})$. The non-convex constraint set $\\mathcal{C}$ with respect to the variable $\\mathbf{L}$ can be summarized as\n\\begin{equation*}\n\\begin{aligned}\n \\mathcal{C}:=\\{\\mathbf{L}=\\mathbf{L}^{\\top}, \\; \\mathbf{L}_{p,q,p\\neq q}\\preceq 0, \\;\\mathbf{L}_{p,q,p=q}\\succeq 0, \\;\\tr(\\mathbf{L})=c\\},\n\\end{aligned}\n\\end{equation*}\nwhere $c>0$ is a scaling constant. Eq. (\\ref{DR_eq10}) is a typical data-driven graph learning model, which is capable of automatically learning the graph structure from the data without any hand-crafted priors. By using an iterative strategy to simulate the feedback system, $\\mathbf{M}_{2}^{(t+1)}$ at step $t+1$ can be updated by the graph-based LP on the graph $\\mathbf{W}^{(t)}$ learned at step $t$:\n\\begin{equation}\n\\label{DR_eq11}\n\\begin{aligned}\n\\cdots\\leftarrow\\mathbf{M}_{2}^{(t+1)}\\leftarrow\\mathbf{W}^{(t)}\\leftarrow\\mathbf{M}_{2}^{(t)}\\leftarrow\\cdots.\n\\end{aligned}\n\\end{equation}\n\nBesides, another intelligent feature extraction algorithm, named JPSA, extended from \\cite{hong2018joint}, was presented in \\cite{hong2020joint}, attempting to align pixels and superpixels for spatial-spectral semi-supervised HS DR. JPSA basically follows the J-Play framework, and the major difference is the graph structure $\\mathbf{W}$. The graph in JPSA consists of not only pixel-wise and superpixel-wise similarities but also aligned components between pixels and superpixels. Fig. \\ref{fig:graph} gives an example to clarify the graph structure of JPSA. Note that the JPSA graph can be seen as a fully data-driven structure, which can, to a great extent, self-express the intrinsic properties of HS data and further achieve intelligent information extraction and DR.\n\n\\begin{table}[!t]\n\\centering\n\\caption{Quantitative comparison of different DR algorithms in terms of OA, AA, and $\\kappa$ using the NN classifier on the Indian Pines dataset.
The best one is shown in bold.}\n\\resizebox{0.45\\textwidth}{!}{\n\\begin{tabular}{c||c|ccc}\n\\toprule[1.5pt] Methods & Dimension & OA (\\%) & AA (\\%) & $\\kappa$ \\\\\n\\hline \\hline\nOSF & 220 & 65.89 & 75.71 & 0.6148\\\\\nOTVCA \\cite{rasti2016hyperspectral} & 16 & 74.18 & 77.61 & 0.7228\\\\\nRLMR \\cite{hong2017learning} & 20 & 83.75 & 86.90 & 0.8147\\\\\nFSDA \\cite{imani2015feature} & 15 & 64.14 & 74.52 & 0.5964\\\\\nJ-Play \\cite{hong2018joint} & 20 & 83.92 & 89.35 & 0.8169\\\\\nIMR \\cite{hong2019learning} & 20 & 82.80 & 86.27 & 0.8033\\\\\nJPSA \\cite{hong2020joint} & 20 & \\bf 92.98 & \\bf 95.40 & \\bf 0.9197\\\\\n\\bottomrule[1.5pt]\n\\end{tabular}}\n\\label{tab:DR}\n\\end{table}\n\n\\subsection{Experimental Study}\nClassification is explored as a potential application to evaluate the performance of state-of-the-art (SOTA) DR algorithms, including original spectral features (OSF), OTVCA\\footnote{\\url{https:\/\/github.com\/danfenghong\/HyFTech}} \\cite{rasti2016hyperspectral}, RLMR\\footnote{\\url{https:\/\/github.com\/danfenghong\/IEEE_JSTARS_RLML}} \\cite{hong2017learning}, FSDA \\cite{imani2015feature}, J-Play\\footnote{\\url{https:\/\/github.com\/danfenghong\/ECCV2018_J-Play}} \\cite{hong2018joint}, IMR \\cite{hong2019learning}, and JPSA \\cite{hong2020joint}. Experiments are performed on the Indian Pines data using the nearest neighbor (NN) classifier in terms of three indices: \\textit{Overall Accuracy (OA)}, \\textit{Average Accuracy (AA)}, and the \\textit{Kappa Coefficient} ($\\kappa$). The scene consists of $145 \\times 145$ pixels and $220$ spectral bands ranging from $0.4 \\mu m$ to $2.5 \\mu m$. More details regarding the training and test samples can be found in \\cite{hang2019cascaded}.\n\nTable \\ref{tab:DR} lists the quantitative results of the different DR methods. Overall, OSF without feature extraction or DR yields the worst classification performance compared to the SOTA DR methods. This, to a great extent, demonstrates the effectiveness and necessity of DR in the HS image classification task. It is worth noting that the approaches with spatial-spectral modeling, e.g., OTVCA, RLMR, and JPSA, tend to obtain better classification results. The performance of RLMR is superior to that of OTVCA, owing to the full consideration of neighboring information in a graph form rather than the smoothing operation modeled only by the TV regularization. As a linearized deep model, the supervised J-Play obviously performs better than the others, especially FSDA, which is also a supervised DR model. More importantly, JPSA with a semi-supervised learning strategy dramatically outperforms the other competitors, since it can jointly learn richer representations from both pixels and superpixels by means of spatial-spectral manifold alignment and deep (multi-layered) regression.\n\n\\subsection{Remaining Challenges}\nAlthough extensive SOTA methods have recently shown their effectiveness and superiority in HS DR and classification, there is still a long way to go towards AI-guided intelligent information processing. We herein briefly summarize the potential remaining challenges.\n\\begin{itemize}\n \\item \\textbf{Optimal Subspace Dimension.} The subspace dimension is a crucial parameter in DR, which is determined experimentally and empirically in most of the existing methods.
Although some parameter estimation algorithms exist, e.g., intrinsic dimension \\cite{levina2005maximum} and subspace identification \\cite{bioucas2008hyperspectral}, they fail to avoid the pre-survey of various prior knowledge and human intervention in the dimension estimation process.\n \\item \\textbf{Effects of Noises.} HS images usually suffer from noise degradation in remotely sensed imaging. These noises are complex and closely associated with the spectral signatures. Therefore, effectively separating noise from HS data and reducing the noise sensitivity (or preserving spectral discrimination) in the DR process remains challenging. \n \\item \\textbf{Robustness and Generalization.} Robust estimation and advanced non-convex regularizers have been widely applied to model the DR behavior, yet complex noise types, limited training samples, and noisy labels hinder further improvement of the robustness and generalization ability. For this reason, more robust and intelligent models should be developed, in both theory and practice, for the next-generation DR techniques.\n\\end{itemize}\n\n\\begin{figure*}[!t]\n\t \\centering\n\t\t\t\\includegraphics[width=0.9\\textwidth]{MaterialMixture}\n \\caption{A showcase in a real HS scene (Pavia City Centre) giving a quick look at the 3-D HS cube, spectral signals, and material mixture, as well as pure pixels (i.e., \\textit{endmembers}) and mixed pixels. In the studied scene, the pure pixels correspond to two spectral reflectance curves of \\textit{vegetation} and \\textit{water}, respectively, while the examples of mixed pixels explain the case of spectral mixing, e.g., the two mixed pixels comprise three pure components (\\textit{endmembers}) in varying proportions. In addition, the upper-right figure gives two toy examples to illustrate the material miscibility.}\n\\label{fig:MaterialMixture}\n\\end{figure*}\n\n\\section{Spectral Unmixing}\n\nSpectral unmixing (SU) can usually be seen as a special case of the blind source separation (BSS) problem in ML, referring to a procedure that decomposes the observed pixel spectrum of the HS image into a series of constituent spectral signals (or \\textit{endmembers}) of pure materials and a set of corresponding abundance fractions (or \\textit{abundance maps}) \\cite{bioucas2012hyperspectral}. Due to the meter-level ground sampling distance (GSD) of HS imaging, the spectral signatures of most pixels in HS images are acquired in the form of a complex mixture that consists of at least two types of materials. Fig. \\ref{fig:MaterialMixture} gives a showcase to visualize the HS cube, spectral signatures, and the material mixing process, as well as pure and mixed pixels. Different from general signals, e.g., digital signals and speech signals, the spectral signals of different materials exhibit specific absorption properties. Plus, HS images suffer from miscellaneous unknown degradations, either physical or chemical, in remotely sensed imaging, inevitably bringing many uncertainties into SU. Therefore, SU plays a unique role in HS RS, yielding many challenging research tasks compared to BSS in ML.
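\nBefore formalizing the model, the following toy Python\/NumPy sketch simulates the mixing process illustrated in Fig. \\ref{fig:MaterialMixture}: random endmember spectra are combined under the non-negativity and sum-to-one conditions to produce mixed pixels (all dimensions and the noise level are illustrative).\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nB, R, N = 162, 4, 1000             # bands, endmembers, pixels (toy sizes)\nE = rng.random((B, R))             # columns: pure endmember spectra\nA = rng.random((R, N))\nA /= A.sum(axis=0, keepdims=True)  # abundances: non-negative, sum-to-one\nY = E @ A + 0.01 * rng.standard_normal((B, N))   # noisy mixed pixels\n\\end{verbatim}\n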
\n\nIdeally, a linear mixing model (LMM) can be used to accurately describe the SU process \\cite{keshava2002spectral}, which is modeled as the following constrained optimization problem:\n\\begin{equation}\n\\label{SU_eq1}\n\\begin{aligned}\n \\mathop{\\min}_{\\mathbf{E},\\mathbf{A}}\\frac{1}{2}\\norm{\\mathbf{Y}-\\mathbf{E}\\mathbf{A}}_{\\F}^{2}\\;\\;\\rm{s.t.}\\; \\mathbf{E},\\mathbf{A}\\in \\mathcal{C}.\n\\end{aligned}\n\\end{equation}\nThe variables $\\mathbf{E}$ and $\\mathbf{A}$ in Eq. (\\ref{SU_eq1}) stand for the endmembers and abundance maps in the SU issue, respectively. Depending on whether the endmembers ($\\mathbf{E}$) are available (given) in the process of SU, existing SU models can be loosely divided into \\textit{blind SU} and \\textit{endmember-guided SU}.\n\n\\subsection{Blind Spectral Unmixing}\nNMF is a baseline model in a wide range of applications, and the same is true in SU. To date, NMF-based interpretable models have been developed extensively for pursuing intelligent SU with the consideration of physically meaningful priors with respect to $\\mathbf{E}$ and $\\mathbf{A}$, e.g., the abundance non-negativity constraint (ANC) and the abundance sum-to-one constraint (ASC). The resulting basic blind SU model can be written as \n\\begin{equation}\n\\label{SU_eq2}\n\\begin{aligned}\n \\mathop{\\min}_{\\mathbf{E},\\mathbf{A}}\\frac{1}{2}\\norm{\\mathbf{Y}-\\mathbf{E}\\mathbf{A}}_{\\F}^{2} + \\Phi(\\mathbf{E}) + \\Omega(\\mathbf{A})\\;\\;\\rm{s.t.}\\; \\mathbf{E},\\mathbf{A}\\in \\mathcal{C},\n\\end{aligned}\n\\end{equation}\nwhere the constraint set $\\mathcal{C}$ is \n\\begin{equation*}\n\\begin{aligned}\n \\mathcal{C}:=\\{\\mathbf{E}\\geq \\mathbf{0},\\; \\mathbf{A}\\geq \\mathbf{0},\\; \\mathbf{1}^{\\top}\\mathbf{A}=\\mathbf{1}\\}.\n\\end{aligned}\n\\end{equation*}\n\nOn the basis of the model (\\ref{SU_eq2}), Yang \\textit{et al.} \\cite{yang2010blind} proposed sparse NMF for SU with a well-designed S-measure sparseness. Qian \\textit{et al.} \\cite{qian2011hyperspectral} imposed a sparsity constraint on the abundances and used $\\ell_{1\/2}$-regularized NMF for blind SU, which has been shown to be more effective than the $\\ell_{0}$- and $\\ell_{1}$-norm terms. In \\cite{sigurdsson2014hyperspectral}, Sigurdsson \\textit{et al.} relaxed the $\\ell_{1\/2}$-norm to the $\\ell_{q}$-norm ($0\\leq q\\leq1$) for a better estimation of abundances. Thouvenin \\textit{et al.} \\cite{thouvenin2015hyperspectral} developed an improved LMM, called perturbed LMM (PLMM), attempting to model spectral variabilities as a perturbation term that simply follows a Gaussian distribution. A similar work is presented in \\cite{drumetz2016blind}, where the scaling factor, as a major spectral variability (SV), is modeled into the LMM to yield an extended LMM (ELMM) for the blind SU task. He \\textit{et al.} \\cite{he2017total} employed total variation (TV) and weighted $\\ell_{1}$-norm terms to further enhance the smoothness and sparseness of the abundances. Yao \\textit{et al.} \\cite{yao2019nonconvex} sought to explain the NMF-based SU model by simulating human observations of HS images, e.g., sparsity, non-local, and smoothness properties, in a non-convex modeling fashion.
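\nFor illustration, sparsity-promoting blind SU of the $\\ell_{1\/2}$-NMF type \\cite{qian2011hyperspectral} can be sketched with multiplicative updates as follows; this is a minimal sketch in which the sparsity weight, iteration count, and the crude sum-to-one renormalization are illustrative simplifications of the published algorithm.\n\\begin{verbatim}\nimport numpy as np\n\ndef l12_nmf(Y, R, lam=0.1, n_iter=300, eps=1e-9):\n    # Blind SU sketch: min ||Y - EA||_F^2 + lam*||A||_{1/2}, E, A >= 0.\n    rng = np.random.default_rng(0)\n    B, N = Y.shape\n    E = rng.random((B, R))\n    A = rng.random((R, N))\n    for _ in range(n_iter):\n        E *= (Y @ A.T) / (E @ A @ A.T + eps)\n        grad_pen = 0.5 * lam * np.maximum(A, eps) ** (-0.5)\n        A *= (E.T @ Y) / (E.T @ E @ A + grad_pen + eps)\n        A /= A.sum(axis=0, keepdims=True) + eps   # crude ASC projection\n    return E, A\n\\end{verbatim}\n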
Another interesting SU strategy is to embed the graph or topological structure of the HS data. The local neighboring relation was introduced into the NMF model, showing robust SU results \\cite{liu2012enhancing}. Similarly, Lu \\textit{et al.} \\cite{lu2012manifold} enforced the abundances to follow the manifold structure of the spectral signatures in the form of a Laplacian regularization for HS unmixing. Wang \\textit{et al.} \\cite{wang2016hypergraph} used a structuralized hypergraph regularization in sparse NMF to better depict the underlying manifolds of the HS data. Very recently, Qin \\textit{et al.} \\cite{qin2020blind} proposed a novel graph TV regularization to estimate endmembers and abundances more effectively. There are still other variants that directly unmix the 3-D HS tensor by preserving spatial structure information as much as possible. For that, Qian \\textit{et al.} \\cite{qian2016matrix} proposed a matrix-vector non-negative tensor factorization framework for blind SU. Imbiriba \\textit{et al.} \\cite{imbiriba2019low} modeled the low-rank properties in the HS tensor to address the SV for robust SU. A further modified work based on \\cite{imbiriba2019low} was proposed via weighted non-local low-rank tensor decomposition for sparse HS unmixing.\n\nBroadly, the key non-convex priors of the above models can be briefly summarized as follows (two of the recurring building blocks are sketched in code after the list):\n\\begin{itemize}\n \\item $\\ell_{1\/2}$-NMF \\cite{qian2011hyperspectral}: $\\Omega(\\mathbf{A})=\\norm{\\mathbf{A}}_{1\/2}=\\sum_{k,n=1}^{K,N}\\mathbf{a}_{n}(k)^{1\/2}$;\n \\item $\\ell_{q}$-NMF \\cite{sigurdsson2014hyperspectral}:\n $\\Omega(\\mathbf{A})=\\norm{\\mathbf{A}}_{q}=\\sum_{k,n=1}^{K,N}\\mathbf{a}_{n}(k)^{q}$;\n \\item PLMM \\cite{thouvenin2015hyperspectral}:\n $\\Phi(\\mathbf{E})=\\frac{1}{2}\\norm{\\mathbf{E}-\\mathbf{E}_{0}}_{\\F}^{2}$,\\;\n $\\Omega(\\mathbf{A})=\\frac{1}{2}\\norm{\\mathbf{A}\\mathbf{H}}_{\\F}^{2}$,\\; $\\Psi(\\Delta)=\\frac{1}{2}\\sum_{n=1}^{N}\\norm{\\Delta_{n}}_{\\F}^{2}$,\n where $\\mathbf{E}_{0}$, $\\mathbf{H}$, and $\\Delta$ denote the reference endmembers, the matrix differences in the spatial four nearest neighbors, and the pixel-wise perturbed information, respectively;\n \\item ELMM \\cite{drumetz2016blind}: $\\Phi(\\mathbf{E})=\\sum_{n=1}^{N}\\norm{\\mathbf{E}_{n}-\\mathbf{E}_{0}\\mathbf{S}_{n}}_{\\F}^{2}$,\\; $\\Omega(\\mathbf{A})=\\norm{\\mathbf{H}_{h}(\\mathbf{A})}_{2,1}+\\norm{\\mathbf{H}_{v}(\\mathbf{A})}_{2,1}$,\\; $\\Psi(\\mathbf{S})=\\norm{\\mathbf{H}_{h}(\\mathbf{S})}_{\\F}^{2}+\\norm{\\mathbf{H}_{v}(\\mathbf{S})}_{\\F}^{2}$, where $\\mathbf{H}_{h}$ and $\\mathbf{H}_{v}$ are the horizontal and vertical gradients;\n \\item TV-RSNMF \\cite{he2017total}: $\\Omega(\\mathbf{A})=\\norm{\\mathbf{d}\\odot\\mathbf{A}}_{1,1}+\\norm{\\mathbf{A}}_{\\rm TV}$;\n \\item NLHTV \\cite{yao2019nonconvex}:\\\\ $\\Omega(\\mathbf{A})=\\sum_{n=1}^{N}\\norm{J_{w}\\mathbf{a}_{n}}_{\\mathcal{S}_{1}}+\\sum_{i,j}\\log(|x_{i,j}|+\\epsilon)$, where $J_{w}$ and $\\norm{\\bullet}_{\\mathcal{S}_{1}}$ are defined as the non-local Jacobian operator and the Schatten-1 norm, respectively;\n \\item Graph $\\ell_{1\/2}$-NMF \\cite{lu2012manifold}: $\\Omega(\\mathbf{A})=\\norm{\\mathbf{A}}_{1\/2}+\\tr(\\mathbf{A}\\mathbf{L}\\mathbf{A}^{\\top})$;\n \\item Graph TV \\cite{qin2020blind}: $\\Omega(\\mathbf{A})=\\norm{\\mathbf{A}}_{\\rm TV}+\\tr(\\mathbf{A}\\mathbf{L}\\mathbf{A}^{\\top})$.\n\\end{itemize}
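\nTwo recurring building blocks in the above priors can be evaluated in a few lines of Python\/NumPy, namely the anisotropic TV of a 2-D abundance map and the graph regularizer $\\tr(\\mathbf{A}\\mathbf{L}\\mathbf{A}^{\\top})$; both snippets are illustrative and solver-agnostic.\n\\begin{verbatim}\nimport numpy as np\n\ndef tv_aniso(a_map):\n    # Anisotropic TV: sum of absolute horizontal and vertical differences.\n    return np.abs(np.diff(a_map, axis=1)).sum() + np.abs(np.diff(a_map, axis=0)).sum()\n\ndef graph_term(A, L):\n    # Graph/manifold regularizer tr(A L A^T) used in the graph-NMF variants.\n    return np.trace(A @ L @ A.T)\n\\end{verbatim}\n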
Owing to their powerful data fitting ability, DL-based SU approaches have recently received increasing attention and achieved better unmixing results \\cite{palsson2018hyperspectral,su2018stacked,palsson2019spectral,han2020deep}. Although these methods still suffer from the ``black box'' effect, i.e., the lack of model interpretability, their performance has preliminarily shown the effectiveness and feasibility of unmixing HS data more accurately.\n\n\\subsection{Endmember-Guided Spectral Unmixing}\nA large number of blind SU methods have been developed and shown to be effective in simultaneously obtaining endmembers and abundance maps. However, these blind methods tend to extract physically meaningless endmembers, e.g., noisy signals or spectral signatures corresponding to non-existent materials, due to the lack of certain interpretable model guidance or prior knowledge. A straightforward solution is to provide nearly real endmembers extracted from the HS images. This naturally leads to research on endmember-guided SU. As the name suggests, the SU process is performed with given reference endmembers or under the guidance of endmembers extracted from the HS image. That is, the endmembers $\\mathbf{E}$ in Eq. (\\ref{SU_eq2}) are known. Accordingly, endmember-guided SU can be implemented in a three-stage way (a sketch of the last stage follows the list).\n\\begin{itemize}\n \\item Firstly, the number of endmembers can be estimated by subspace estimation algorithms, e.g., HySime \\cite{bioucas2008hyperspectral};\n \\item Secondly, the endmembers can be extracted based on geometric observations of the HS data structure. Several well-known methods are vertex component analysis (VCA) \\cite{nascimento2005vertex}, the pixel purity index (PPI) \\cite{chang2006fast}, and the fast autonomous endmember determination algorithm N-FINDR \\cite{winter1999n};\n \\item Lastly, the abundances of the materials are estimated using regression-based methods, which can generally be written as \n \\begin{equation}\n \\label{SU_eq3}\n \\begin{aligned}\n \\mathop{\\min}_{\\mathbf{A}}\\frac{1}{2}\\norm{\\mathbf{Y}-\\mathbf{E}\\mathbf{A}}_{\\F}^{2} + \\Omega(\\mathbf{A})\\;\\; {\\rm s.t.}\\; \\mathbf{A}\\in \\mathcal{C}.\n \\end{aligned}\n \\end{equation}\n\\end{itemize}
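\nThe last stage can be illustrated with a fully constrained solver in the spirit of FCLSU: the Python\/NumPy sketch below runs projected gradient descent on $\\norm{\\mathbf{Y}-\\mathbf{E}\\mathbf{A}}_{\\F}^{2}$ and enforces the ANC and ASC with a standard Euclidean projection onto the probability simplex; the iteration count is illustrative, and ADMM-based solvers such as SUnSAL are used in practice.\n\\begin{verbatim}\nimport numpy as np\n\ndef simplex_proj(V):\n    # Project each column of V onto the probability simplex.\n    U = np.sort(V, axis=0)[::-1]\n    css = np.cumsum(U, axis=0) - 1.0\n    k = np.arange(1, V.shape[0] + 1)[:, None]\n    rho = (U - css / k > 0).cumsum(axis=0).argmax(axis=0)\n    theta = css[rho, np.arange(V.shape[1])] / (rho + 1)\n    return np.maximum(V - theta, 0.0)\n\ndef fclsu(Y, E, n_iter=500):\n    # min ||Y - EA||_F^2  s.t.  A >= 0, 1^T A = 1 (ANC + ASC).\n    A = np.full((E.shape[1], Y.shape[1]), 1.0 / E.shape[1])\n    step = 1.0 / np.linalg.norm(E.T @ E, 2)   # 1/Lipschitz constant\n    for _ in range(n_iter):\n        A = simplex_proj(A - step * (E.T @ (E @ A - Y)))\n    return A\n\\end{verbatim}\n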
Following the three steps, many well-performing non-convex models have been successfully developed to estimate the abundance maps of different materials at a more accurate level. Heinz \\textit{et al.} \\cite{heinz2001fully} thoroughly analyzed the spectral mixture in the SU issue, yielding the fully constrained least-squares unmixing (FCLSU) algorithm. Due to the hard ASC, the abundances cannot always be fully represented in a simplex. For this reason, a partially constrained least-squares unmixing (PCLSU) model \\cite{heylen2011fully}, which drops the ASC, emerged as required. Bioucas-Dias \\textit{et al.} \\cite{bioucas2010alternating} relaxed the strong $\\ell_{0}$-norm to the solvable $\\ell_{1}$-norm in the sparse HS unmixing model and designed a fast and generic optimization algorithm based on the ADMM framework \\cite{boyd2011distributed}, called sparse unmixing by variable splitting and augmented Lagrangian (SUnSAL). In \\cite{iordache2012total}, a TV spatial regularization is considered to further enhance the unmixing performance. Iordache \\textit{et al.} \\cite{iordache2013collaborative} extended the sparse regression model to a collaborative version regularized by the $\\ell_{2,1}$-norm for SU. Fu \\textit{et al.} \\cite{fu2016semiblind} proposed a semi-blind HS unmixing model by correcting the mismatches between estimated endmembers and pure spectral signatures from the library. Huang \\textit{et al.} \\cite{huang2018joint} jointly imposed sparsity and low-rank properties on the abundances for better estimating abundance maps. Hong \\textit{et al.} \\cite{hong2018sulora} devised an interesting and effective subspace-based abundance estimation model, called SULoRA. Rather than directly decomposing the HS data in the complex high-dimensional space, the model neatly projects the HS data into a more robust subspace, where the SV tends to be removed in a more generalized way with low-rank attribute embedding. Beyond the current framework, Hong \\textit{et al.} \\cite{hong2019augmented} further augmented the basic LMM by fully modeling the SVs, e.g., principal scaling factors and other SVs that should be incoherent or low-coherent with the endmembers, in order to yield an interpretable and more intelligent SU model, called the augmented LMM (ALMM). \n\n\\begin{figure}[!t]\n\t \\centering\n\t\t\t\\includegraphics[width=0.47\\textwidth]{GRSM_SV}\n \\caption{A visual example to clarify SVs in a real HS scene (Pavia City Centre). An image patch cropped from the scene is selected to show the spectral bundles involving spectral variations of trees in (a). (b) shows a pure spectral signature (i.e., \\textit{endmember}) of trees acquired in the laboratory. (c) represents the differences between (a) and (b), which are seen as the SVs.}\n\\label{fig:SV}\n\\end{figure}\n\n\\begin{figure*}[!t]\n\t \\centering\n\t\t\t\\includegraphics[width=1\\textwidth]{SU_AB}\n \\caption{Visualization of abundance maps estimated by different SOTA SU algorithms on the Urban data, where SAM is computed to generate the classification-like maps regarded as the GT to measure the shape similarity of abundance maps obtained by different SU methods.}\n\\label{fig:SU_AB}\n\\end{figure*}\n\nThe non-convexity of these methods in terms of priors, constraints, or modeling can be summarized as follows:\n\\begin{itemize}\n \\item FCLSU \\cite{heinz2001fully}: $\\mathbf{A}\\geq \\mathbf{0}$, $\\mathbf{1}^{\\top}\\mathbf{A}=\\mathbf{1}$;\n \\item PCLSU \\cite{heylen2011fully}: $\\mathbf{A}\\geq \\mathbf{0}$;\n \\item SUnSAL \\cite{bioucas2010alternating}: $\\Omega(\\mathbf{A})=\\norm{\\mathbf{A}}_{1,1}$, $\\mathbf{A}\\geq \\mathbf{0}$, $\\mathbf{1}^{\\top}\\mathbf{A}=\\mathbf{1}$;\n \\item SUnSAL-TV \\cite{iordache2012total}: $\\Omega(\\mathbf{A})=\\norm{\\mathbf{A}}_{1,1}+\\norm{\\mathbf{A}}_{\\rm TV}$, $\\mathbf{A}\\geq \\mathbf{0}$;\n \\item CSR \\cite{iordache2013collaborative}: $\\Omega(\\mathbf{A})=\\norm{\\mathbf{A}}_{2,1}=\\sum_{n=1}^{N}\\norm{\\mathbf{a}_{n}}_{2}$, $\\mathbf{A}\\geq \\mathbf{0}$;\n \\item DANSER \\cite{fu2016semiblind}: $\\Omega(\\mathbf{A})=\\sum_{n=1}^{N}(\\norm{\\mathbf{a}_{n}}_{2}^{2}+\\tau)^{p\/2}$, $\\mathbf{A}\\geq \\mathbf{0}$, $\\Phi(\\mathbf{E})=\\norm{\\mathbf{E}-\\mathbf{E}_{0}}_{\\F}^{2}$;\n \\item SULoRA \\cite{hong2018sulora}: $\\Psi(\\mathbf{U})=\\norm{\\mathbf{Y}-\\mathbf{U}\\mathbf{Y}}_{\\F}^{2}+\\norm{\\mathbf{U}}_{*}$, $\\Omega(\\mathbf{A})=\\norm{\\mathbf{A}}_{1,1}$, $\\mathbf{A}\\geq \\mathbf{0}$, where $\\mathbf{U}$ denotes the subspace projection and $\\norm{\\bullet}_{*}$ is the nuclear norm approximating the rank of a matrix;\n \\item ALMM \\cite{hong2019augmented}: $\\Phi(\\mathbf{A})=\\norm{\\mathbf{A}}_{1,1}$, $\\mathbf{A}\\geq \\mathbf{0}$, $\\Gamma(\\mathbf{J})=\\norm{\\mathbf{J}}_{\\F}^{2}$, $\\Psi(\\mathbf{V})=\\norm{\\mathbf{A}^{\\top}\\mathbf{V}}_{\\F}^{2}+\\norm{\\mathbf{V}^{\\top}\\mathbf{V}-\\mathbf{I}}_{\\F}^{2}$, where $\\mathbf{V}$ and $\\mathbf{J}$ denote the SV dictionary and the corresponding coefficients, respectively.\n\\end{itemize}\n\n\\subsection{Experimental Study}\nA real urban HS dataset acquired
by the HYDICE sensor over an urban area in Texas, USA, in 2015 (the latest version\\footnote{\\url{http:\/\/www.tec.army.mil\/Hypercube}}) is used to qualitatively evaluate the performance of several selected SOTA unmixing methods, including $\\ell_{1\/2}$-NMF \\cite{qian2011hyperspectral}, PLMM\\footnote{\\url{https:\/\/pthouvenin.github.io\/unmixing-plmm\/}} \\cite{thouvenin2015hyperspectral}, ELMM\\footnote{\\url{https:\/\/openremotesensing.net\/knowledgebase\/spectral-variability-and-extended-linear-mixing-model\/}} \\cite{drumetz2016blind}, NLHTV \\cite{yao2019nonconvex}, FCLSU \\cite{heinz2001fully}, SUnSAL\\footnote{\\url{http:\/\/www.lx.it.pt\/~bioucas\/}} \\cite{bioucas2010alternating}, SULoRA\\footnote{\\url{https:\/\/github.com\/danfenghong\/IEEE_JSTSP_SULoRA}} \\cite{hong2018sulora}, and ALMM\\footnote{\\url{https:\/\/github.com\/danfenghong\/ALMM_TIP}} \\cite{hong2019augmented}. The HS image consists of $307\\times 307$ pixels and 162 spectral bands (after removing the noisy bands) in the wavelength range of $0.4 \\mu m$ to $2.5 \\mu m$ at a $2$ m GSD. Moreover, four main materials (or endmembers) are investigated in the studied scene, i.e., \\textit{asphalt}, \\textit{grass}, \\textit{trees}, and \\textit{roof}. Furthermore, the HySime \\cite{bioucas2008hyperspectral} and VCA \\cite{nascimento2005vertex} algorithms are adopted for all compared algorithms to determine the number of endmembers and to extract the endmembers from the HS image (as the initialization for the blind SU methods), respectively.\n\nFig.~\\ref{fig:SU_AB} shows the visual comparison between the different SOTA unmixing algorithms in terms of abundance maps. Owing to the consideration of real endmembers extracted from the HS scene, the last four endmember-guided SU methods perform evidently better than the blind SU ones. ELMM models the scaling factors, tending to better capture the distributions of the different materials. The embedding of non-local spatial information enables the NLHTV method to obtain abundance maps whose shapes are more similar to the GT, yielding unmixing performance comparable to that of ELMM. Remarkably, the unmixing results of the SULoRA and ALMM algorithms with regard to the abundance maps are superior to those of the other methods, since the SVs can be fully considered: SULoRA robustly embeds low-rank attributes in a latent subspace, and ALMM characterizes complex real scenes more finely.\n\n\\subsection{Remaining Challenges}\nSU has long been a challenging and widely concerned topic in HS RS. Over the past decades, numerous SU works have been proposed in an attempt to unmix mixed spectral pixels more effectively. Yet, some key and essential issues and challenges remain to be solved.\n\n\\begin{itemize}\n \\item \\textbf{Benchmark Data.} Unlike classification, recognition, and detection tasks, the ground truth of material abundances can hardly be collected, due to the immeasurability of abundance values in reality. On the other hand, the spectral signatures (i.e., endmembers) of pure materials are often acquired in the lab. This usually leads to uncertain mismatches between the real endmembers and the lab ones. It is becoming urgent to establish benchmark datasets for SU by drawing support from more advanced imaging techniques or by developing interpretable ground truth generation models or processing chains.\n \\item \\textbf{Evaluation Criteria.} The reconstruction error (RE) and the spectral angle mapper (SAM) are the two commonly used evaluation indices in SU.
It should be noted, however, that a low RE or SAM value is not equivalent to a good unmixing result. Linking to the issue of benchmark data, the measurement between real results and estimated ones is the optimal choice if we have the ground truth for abundances and endmembers. If not, developing meaningful and reasonable evaluation indices (e.g., classification accuracy) should be given top priority in future work.\n \\item \\textbf{Spectral Variability.} Spectral signatures inevitably suffer from various SVs caused by illumination and topography changes, noise effects from external conditions and internal equipment, atmospheric interference, and the complex mixing of materials in the process of imaging. Fig. \\ref{fig:SV} shows a visual example to specify the SVs (e.g., of trees) in a real HS scene. The considerable uncertainties brought by these factors have a strongly negative impact on the accurate estimation of abundances and endmembers in SU.\n \\item \\textbf{Nonlinearity.} The complex interactions (e.g., intimate mixing, multilayered mixing \\cite{bioucas2012hyperspectral}) between multiple materials, also known as nonlinearity, inevitably occur in the process of HS imaging. The nonlinearity in SU is a longstanding and pending challenge. Most existing nonlinear unmixing models only attempt to consider certain special cases \\cite{heylen2014review}, e.g., bilinear mixing, intimate mixtures, etc. Consequently, there is still a lack of a general and powerful model that can robustly address the various nonlinearities in SU.\n \\item \\textbf{Model Explainability.} The non-negativity and sum-to-one constraints considered in the LMM are the basic priors for spectral signals in HS images. However, these two constraints alone fail to model the complex unmixing process in an explainable fashion. To further enhance the explainability, new spectral mixture models should be developed beyond the classic LMM by fully excavating the intrinsic attribute knowledge that lies in the HS image. \n\\end{itemize}\n\n\\section{Data Fusion and Enhancement}\nThe high spectral resolution of HS images enables the identification and discrimination of materials, while a high spatial resolution provides the possibility of deriving surface parameters~\\cite{yokoya2017hyperspectral}. However, due to equipment limitations, there is usually a trade-off between the spatial and spectral resolutions, and the HS images obtained by spaceborne imaging spectrometers usually come with a moderate ground sampling distance~\\cite{yokoya2017hyperspectral}. To enhance the spatial resolution, one popular way is to fuse the HS images with high spatial resolution MS images to generate new HS images with both high spatial and spectral resolution (HrHS). In particular, enormous efforts have recently been made to enhance the spatial or spectral resolution of HS images by means of ML techniques. Fig.~\\ref{fig:datafusion} illustrates the fusion process of HS-MS images to generate the HrHS image. \nSuppose we have a low spatial resolution HS image $\\tensor{Y} \\in \\mathbb{R}^{m \\times n \\times B}$ and a high spatial resolution MS image $\\tensor{Z}\\in \\mathbb{R}^{M \\times N \\times b}$ with $M \\gg m$, $N \\gg n$, and $B \\gg b$; the purpose of fusion is then to generate the high spatial resolution HS image $\\tensor{X}\\in \\mathbb{R}^{M \\times N \\times B}$.
The degradation models from $\\tensor{X}$ to $\\tensor{Y}$ and $\\tensor{Z}$ are formulated as\n\\begin{align}\n\\mat{Y} = \\mat{X}\\mat{R}+\\mat{N}_{H},\n\\\\\n\\mat{Z} = \\mat{G}\\mat{X}+\\mat{N}_{M},\n\\label{eq:Fusi}\n\\end{align}\nwhere $\\mat{X}, \\mat{Y}, \\mat{Z}$ are the matrices of $\\tensor{X}, \\tensor{Y}, \\tensor{Z}$ reshaped along the spectral dimension, respectively, $\\mat{R}$ is the combined cyclic convolution and downsampling operator, $\\mat{G}$ is the spectral response function (SRF) of the MS image sensor, and $\\mat{N}_{H}$ and $\\mat{N}_{M}$ are the corresponding HS and MS noise terms. To unify the different observation models~\\cite{Eismann2004,yokoya2012coupled,QiWei2015TGRS,Simoes2015,yokoya2017hyperspectral,he2020hyperspectral}, $\\mat{N}_{H}$ and $\\mat{N}_{M}$ are assumed to be independent identically distributed Gaussian noise. Via the maximum a posteriori (MAP) estimation method and the Bayes rule~\\cite{Eismann2004,QiWei2015TGRS,Simoes2015}, the following non-convex optimization model is obtained:\n\\begin{align}\n&\\min_{\\mat{X}} \\|\\mat{Y} - \\mat{X}\\mat{R}\\|_{\\F}^2+\\|\\mat{Z} - \\mat{G}\\mat{X}\\|_{\\F}^2,\n\\label{eq:FusiOpt}\n\\end{align}\nwhere $\\mat{R}$ and $\\mat{G}$ are assumed to be known (in~\\cite{QiWei2015TGRS,yokoya2017hyperspectral}, $\\mat{R}$ and $\\mat{G}$ are estimated in advance of the optimization). As mentioned in~\\cite{QiWei2015TGRS,Simoes2015}, the optimization of \\eqref{eq:FusiOpt} is an NP-hard problem, and over-estimation of $\\mat{Z}$ will result in unstable fusion results. Therefore, additional properties of $\\mat{X}$ and prior regularizers should be exploited in the optimization model \\eqref{eq:FusiOpt}. It should be noted, however, that the two operators $\\mat{R}$ and $\\mat{G}$ can be given according to the known sensors, and can also be learned or automatically estimated from the data itself.\n\n\\begin{figure}[!t]\n\t \\centering\n\t\t\\includegraphics[width=0.4\\textwidth]{DataFusion}\n \\caption{Illustration of MS-HS image fusion to generate the HrHS image.}\n\\label{fig:datafusion}\n\\end{figure}\n\nHS pansharpening is a heuristic way to perform HS-MS fusion \\cite{loncan2015hyperspectral}, and it has been widely applied in the HS image enhancement task. Component substitution (CS) and multiresolution analysis (MRA) are the two main types of pansharpening techniques. The former aims to inject the detailed information of the MS image into the low-resolution HS image, thereby generating the high-resolution HS product. The latter pansharpens the HS image by linearly combining MS bands to synthesize a high-resolution HS band using regression techniques. Another group for the HS-MS fusion task is the subspace-based models, which roughly consist of Bayesian and unmixing based methods (see \\cite{yokoya2017hyperspectral}). Different from pansharpening, the subspace-based approaches project the to-be-fused MS and HS images into a new space whose dimension is generally smaller than that of the unknown high-resolution HS image, by means of probability-driven Bayesian estimation (Bayesian-based methods) or SU-guided joint matrix factorization (unmixing-based methods).
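\nTo make the observation models above concrete, the small Python\/NumPy sketch below simulates how $\\tensor{Y}$ and $\\tensor{Z}$ are generated from a given HrHS cube $\\tensor{X}$; the box blur, decimation factor, and block-averaging SRF matrix are illustrative stand-ins for the true $\\mat{R}$ and $\\mat{G}$.\n\\begin{verbatim}\nimport numpy as np\n\ndef degrade_spatial(X, factor=4):\n    # X (M x N x B) -> Y: average pooling = box blur + downsampling (R).\n    M, N, B = X.shape\n    return X.reshape(M // factor, factor,\n                     N // factor, factor, B).mean(axis=(1, 3))\n\ndef degrade_spectral(X, G):\n    # X (M x N x B) -> Z (M x N x b): apply the SRF matrix G (b x B).\n    return X @ G.T\n\n# Toy usage with a hypothetical SRF averaging disjoint band groups.\nX = np.random.rand(64, 64, 30)\nG = np.kron(np.eye(3), np.ones((1, 10)) / 10.0)   # (3 x 30)\nY, Z = degrade_spatial(X), degrade_spectral(X, G)\n\\end{verbatim}\n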
In the following, we focus on the subspace methods and review the related HS-MS image fusion methods from the non-convex modeling perspective. A more detailed review can be found in~\\cite{yokoya2017hyperspectral,loncan2015hyperspectral}.\n\n\\begin{figure*}[!t]\n\t \\centering\n\t\t\t\\includegraphics[width=1\\textwidth]{DFE}\n \\caption{The fusion results of different methods on the Chikusei image. The color composite uses bands 70, 100, and 36. An enlarged region is framed in {\\bf green}, and the corresponding residual image between the fused image and the MS-GT is framed in {\\bf red}.}\n\\label{fig:Chikusei_reconstruct}\n\\end{figure*}\n\n\\subsection{Unmixing based methods}\nHyperspectral unmixing (HU)~\\cite{bioucas2013hyperspectral,he2017total} assumes that the mixed pixels of an HS image can be decomposed into a collection of constituent spectra (endmembers) and their corresponding proportions (abundances). Under the LMM assumption, the different endmembers do not interfere with each other~\\cite{bioucas2013hyperspectral}. By embedding the LMM into \\eqref{eq:FusiOpt}, we can obtain the following general unmixing based approach:\n\\begin{align}\n\\label{eq:FusiOptMix}\n&\\min_{\\mat{X},\\mat{E},\\mat{A}} \\|\\mat{Y} - \\mat{X}\\mat{R}\\|_{\\F}^2+\\|\\mat{Z} - \\mat{G}\\mat{X}\\|_{\\F}^2,\\\\\n& {\\rm s.t.} \\;\\; \\mat{X} = \\mat{E}\\mat{A}, \\; \\mat{E},\\mat{A} \\geq 0,\\; \\mat{1}_{R}^{\\top}\\mat{A}=\\mat{1}_{MN},\n\\notag\n\\end{align}\nwhere $\\mat{E}$ and $\\mat{A}$ are the endmember and abundance matrices, which are assumed to obey the non-negativity and abundance sum-to-one constraints. Generally, nonlinear unmixing models~\\cite{bioucas2013hyperspectral} can also be utilized for the fusion task of HS-MS images. However, due to the generality of the LMM, we focus on the review of LMM based fusion approaches.\n\nEismann \\textit{et al.} proposed a maximum a posteriori estimation method to derive the cost function, and introduced a stochastic mixing model (MAP-SMM) to embed the LMM into the cost function~\\cite{Eismann2004}. The MAP-SMM method estimates the prior probabilities of all the mixture classes, including the mean vectors and covariance matrices of the endmember classes. The learned prior probabilities are passed to the cost function to help the reconstruction of the final HrHS image $\\mat{X}$.\n\nYokoya \\textit{et al.} regarded \\eqref{eq:FusiOptMix} as a coupled NMF (CNMF) problem~\\cite{yokoya2012coupled} and introduced multiplicative update rules to optimize \\eqref{eq:FusiOptMix}. Firstly, CNMF utilizes $\\|\\mat{Y} - \\mat{E}\\mat{A}\\mat{R}\\|_{\\F}^2$ as the cost function to update $\\mat{E}$ and $\\mat{A}_h$, with the endmember matrix $\\mat{E}$ initialized by vertex component analysis (VCA). Here, $\\mat{A}_h = \\mat{A} \\mat{R}$ is the abundance matrix of the HS image. Secondly, by initializing $\\mat{E}_m = \\mat{G}\\mat{E}$, which is the endmember matrix of the MS image, CNMF again utilizes the multiplicative update rules, this time to update $\\mat{A}$ from the cost function $\\|\\mat{Z} - \\mat{G}\\mat{E}\\mat{A}\\|_{\\F}^2$. Finally, the HrHS image $\\mat{X}$ is reconstructed as $\\mat{E}\\mat{A}$.
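\nA compressed Python\/NumPy sketch of this two-step coupling is given below; for simplicity it assumes plain multiplicative updates, random initialization instead of VCA, known operators $\\mat{R}$ and $\\mat{G}$, and omits the constraint handling of the full CNMF algorithm.\n\\begin{verbatim}\nimport numpy as np\n\ndef cnmf_sketch(Yh, Zm, R, G, r=8, n_out=10, n_in=100, eps=1e-9):\n    # Yh (B x n): HS image, Zm (b x N): MS image, X = E A is the target.\n    rng = np.random.default_rng(0)\n    E = rng.random((Yh.shape[0], r))\n    A = rng.random((r, Zm.shape[1]))\n    for _ in range(n_out):\n        Ah = A @ R                       # low-resolution abundances\n        for _ in range(n_in):            # HS step: refine E (and Ah)\n            E *= (Yh @ Ah.T) / (E @ Ah @ Ah.T + eps)\n            Ah *= (E.T @ Yh) / (E.T @ E @ Ah + eps)\n        Em = G @ E                       # MS endmembers, then MS step on A\n        for _ in range(n_in):\n            A *= (Em.T @ Zm) / (Em.T @ Em @ A + eps)\n    return E @ A                         # fused HrHS estimate\n\\end{verbatim}\n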
The following works~\\cite{akhtar2014sparse,akhtar2015bayesian,dong2016hyperspectral} also utilize the CNMF framework to fuse HS-MS images. Differently,~\\cite{akhtar2014sparse,dong2016hyperspectral} introduced a non-negative dictionary learning strategy, while \\cite{akhtar2015bayesian} proposed a proximal alternating linearized minimization algorithm to update $\\mat{E}$ and $\\mat{A}$.\n\nOn the basis of \\eqref{eq:FusiOptMix}, Wang \\textit{et al.} further regularized $\\mat{X}$ with a non-local low-rank Tucker decomposition~\\cite{Wang2020TGRS}. The improved non-local Tucker decomposition regularized CNMF model~\\cite{Wang2020TGRS} was solved by a multi-block ADMM and achieved remarkable fusion results. This indicates that additional regularizers on $\\mat{X}$ can further improve the fusion accuracy. On the other hand, it is necessary to make a trade-off between complex models with higher accuracy and computational efficiency for real large-scale HS-MS image fusion tasks.\n\n\\subsection{Orthogonal subspace based methods}\nAnother common assumption in HS-MS fusion is that the spectral information of $\\mat{X}$ lies in an orthogonal subspace whose dimension is much smaller than the number of bands $B$~\\cite{QiWei2015TGRS,he2017total}, $\\textit{i.e.,}$ $\\mat{X} = \\mat{E}\\mat{A}$ with $\\mat{E}\\in \\mathbb{R}^{B \\times k}$, $\\mat{A}\\in \\mathbb{R}^{k \\times MN}$, and $k\\ll B$, where $\\mat{E}$ has orthonormal columns, i.e., $\\mat{E}^{\\top}\\mat{E} = \\mat{I}_{k}$. Therefore, the subspace based model is formulated as\n\\begin{align}\n\\label{eq:FusiOpt2}\n&\\min_{\\mat{X},\\mat{E},\\mat{A}} \\|\\mat{Y} - \\mat{X}\\mat{R}\\|_{\\F}^2+\\|\\mat{Z} - \\mat{G}\\mat{X}\\|_{\\F}^2, \\\\\n& {\\rm s.t.}\\;\\; \\mat{X} = \\mat{E}\\mat{A}, \\; \\mat{E}^{\\top}\\mat{E} = \\mat{I}_{k}.\n\\notag\n\\end{align}\nAlthough the additional spectral subspace prior is exploited, the optimization of \\eqref{eq:FusiOpt2} still faces several challenges. Firstly, if $k\\gg b$, that is, if the dimension of the subspace is larger than the number of bands of the MS image, the optimization of \\eqref{eq:FusiOpt2} is an under-determined problem. Therefore, to ensure a reasonable solution, prior information on the coefficients $\\mat{A}$ needs to be exploited. \\cite{QiWei2015TGRS} pre-trains a dictionary to represent $\\mat{A}$ and updates $\\mat{A}$ via sparse representation. Hyperspectral super-resolution (HySure)~\\cite{Simoes2015} assumes that $\\mat{A}$ exhibits a spatially smooth structure and regularizes $\\mat{A}$ with band-by-band TV. \\cite{wei2015fast} translates the optimization of $\\mat{A}$ into a Sylvester equation and proposes a fast fusion method for \\eqref{eq:FusiOpt2} (FUSE).\n\nSecondly, the optimization of the orthogonal matrix $\\mat{E}$ is another challenge due to the non-convexity of \\eqref{eq:FusiOpt2}. One appealing approach~\\cite{QiWei2015TGRS,Simoes2015,wei2015fast} is to pre-estimate $\\mat{E}$ from $\\mat{Y}$ in advance and fix the variable $\\mat{E}$ during the optimization of \\eqref{eq:FusiOpt2}. Specifically, FUSE~\\cite{wei2015fast} adopted principal component analysis (PCA), while HySure utilized VCA to extract $\\mat{E}$ from $\\mat{Y}$. Another strategy is to formulate the update of $\\mat{E}$ and $\\mat{A}$ as a coupled matrix factorization problem, where a blind dictionary learning strategy is utilized to update $\\mat{E}$~\\cite{kawakami2011high}, and a hybrid inexact block coordinate descent~\\cite{wu2020hybrid} was introduced to estimate $\\mat{E}$ more exactly.
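\nThe pre-estimation of $\\mat{E}$ by PCA, as adopted in FUSE, takes only a few lines (a sketch assuming the reshaped HS matrix $\\mat{Y}$ of size $B \\times mn$ and an illustrative subspace dimension $k$):\n\\begin{verbatim}\nimport numpy as np\n\ndef pca_subspace(Y, k=10):\n    # Estimate an orthonormal spectral basis E (B x k) from Y (B x mn).\n    Yc = Y - Y.mean(axis=1, keepdims=True)   # center along pixels\n    U = np.linalg.svd(Yc, full_matrices=False)[0]\n    E = U[:, :k]                             # top-k principal directions\n    return E, E.T @ Y                        # basis and subspace coefficients\n\\end{verbatim}\n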
\n\n\\subsection{Tensor based methods}\nThe above subspace based methods utilize low-rank matrix decomposition to exploit the low-rank property of the reshaped high spatial resolution HS image $\\mat{X}$. However, the original HS image is a 3-D tensor, and therefore researchers have introduced tensor decompositions to simultaneously capture the spatial and spectral low-rank properties. The coupled sparse tensor factorization (CSTF) approach~\\cite{li2018fusing} utilized the Tucker decomposition, presented as follows:\n\\begin{align}\n\\label{eq:TucD}\n&\\tensor{X} = \\tensor{O}\\times_1 \\mat{E}_1\\times_2 \\mat{E}_2\\times_3 \\mat{E}_3, \\\\\n&{\\rm s.t.} \\;\\; \\mat{E}_i^{\\top}\\mat{E}_i = \\mat{I},\\; \\|\\tensor{O}\\|_0 \\leq \\mathcal{C},\n\\notag\n\\end{align}\nto regularize the high spatial resolution HS image $\\tensor{X}$. In \\eqref{eq:TucD}, the core tensor $\\tensor{O}$ is assumed to obey the sparsity property, and $\\mat{E}_i$ is the orthogonal matrix of the $i$-th dimension. Subsequently, the CP decomposition~\\cite{kanatsoulis2018hyperspectral}, the tensor train decomposition~\\cite{dian2019learning}, the tensor ring decomposition~\\cite{xu2020hyperspectral,he2020hyperspectral}, and so on, have been utilized to regularize $\\tensor{X}$. Furthermore, non-local LRTD has also been investigated for the fusion task~\\cite{dian2017hyperspectral,wang2017hyperspectral,xu2019nonlocal}.\n\nIt is worth noting that the unmixing, orthogonal subspace, and tensor based methods share the common idea that the spectral space of $\\mat{X}$ should lie in a low-dimensional space. Unmixing based approaches interpret the low-rank property via endmembers and abundances, which are assumed to be non-negative, while orthogonal subspace and tensor based methods ignore the non-negativity restriction. Unmixing based approaches are interpretable in terms of the physical meaning but suffer from unstable convergence in the optimization. Orthogonal subspace and tensor based methods lose the physical meaning but can be optimized more elegantly. \n\nVery recently, there have been some preliminary works performing the fusion task by means of DL-based methods \\cite{mei2017hyperspectral,haut2018new,liu2019stfnet,liu2019efficient,zheng2020coupled,uezato2020guided,yao2020cross}, showing effective and competitive fusion performance. A problem shared by these methods is, again, the model interpretability and rationality. Clearly explaining the intrinsic meaning of each layer of deep networks would contribute to better modeling the fusion task and further obtaining higher-quality products.\n\n\\begin{table}[!t]\n\\centering\n\\caption{Quantitative comparison of different algorithms on the HS-MS image fusion experiments.
The best one is shown in bold.}\n\\resizebox{0.4\\textwidth}{!}{\n\\begin{tabular}{c||cccc}\n\\toprule[1.5pt] Methods & RMSE & ERGAS & SA & SSIM \\\\\n\\hline \\hline\nCNMF \\cite{yokoya2012coupled} & 6.404 & 0.715 & 4.89 & 0.8857\\\\\nICCV'15 \\cite{lanaras2015hyperspectral} & \\bf 5.203 & \\bf 0.589 & \\bf 4.64 & \\bf 0.9139\\\\\nHySure \\cite{Simoes2015} & 8.537 & 0.812 & 9.45 & 0.8527\\\\\nFUSE \\cite{wei2015fast} & 8.652 & 0.869 & 9.51 & 0.8401\\\\\nCSTF \\cite{li2018fusing} & 8.32 & 0.841 & 8.34 & 0.8419\\\\\nSTEREO \\cite{kanatsoulis2018hyperspectral} & 9.4425 & 0.891 & 9.78 & 0.8231\\\\\nNLSTF \\cite{dian2017hyperspectral} & 8.254 & 0.819 & 8.36 & 0.8424\\\\\n\\bottomrule[1.5pt]\n\\end{tabular}}\n\\label{tab:fusion}\n\\end{table}\n\n\n\\begin{figure*}[!t]\n\t \\centering\n\t\t\\subfigure[Training for MML and CML]{\n\t\t\t\\includegraphics[width=0.315\\textwidth]{MML_TR}\n\t\t}\n\t\t\\subfigure[Testing for MML]{\n\t\t\t\\includegraphics[width=0.315\\textwidth]{MML_TE}\n\t\t}\n\t\t\\subfigure[Testing for CML]{\n\t\t\t\\includegraphics[width=0.315\\textwidth]{CML_TE}\n\t\t}\n\t\t\\caption{An illustration of model training and testing in MML- and CML-based classification tasks (taking bi-modality as an example). (a) They share the same training process, i.e., two modalities are used for model training. The main difference lies in the testing phase: (b) MML still needs the input of two modalities, (c) while one modality is absent in CML.}\n\\label{fig:CML}\n\\end{figure*}\n\n\\subsection{Experimental Study}\nIn this section, we select the unmixing based methods CNMF\\footnote{\\url{http:\/\/naotoyokoya.com\/Download.html}}~\\cite{yokoya2012coupled} and ICCV'15\\footnote{\\url{https:\/\/github.com\/lanha\/SupResPALM}} \\cite{lanaras2015hyperspectral}; the subspace based methods HySure\\footnote{\\url{https:\/\/github.com\/alfaiate}}~\\cite{Simoes2015} and FUSE\\footnote{\\url{https:\/\/github.com\/qw245\/BlindFuse}}~\\cite{wei2015fast}; the tensor decomposition regularized methods STEREO\\footnote{\\url{https:\/\/sites.google.com\/site\/harikanats\/}}~\\cite{kanatsoulis2018hyperspectral} and CSTF~\\cite{li2018fusing}; and, finally, the non-local tensor decomposition regularized method NLSTF\\footnote{\\url{https:\/\/sites.google.com\/view\/renweidian\/}}~\\cite{dian2017hyperspectral} for comparison and analysis. We use the root mean square error (RMSE), the relative dimensional global error in synthesis (ERGAS) \\cite{wald2000quality}, the spectral angle (SA), and SSIM \\cite{wang2004image} as the evaluation criteria for the fusion results of the different methods (a sketch of the first three indices is given below).\n\nThe selected dataset for the experiment is the Chikusei dataset acquired at Chikusei, Ibaraki, Japan, on 29 July 2014~\\cite{yokoya2017hyperspectral}. The selected high spatial resolution HS image is of size $448 \\times 448 \\times 128$, and the simulated MS and HS images are of size $448 \\times 448 \\times 3$ and $14 \\times 14 \\times 128$, respectively. Table~\\ref{tab:fusion} presents the quantitative comparison of the different algorithms on the HS-MS image fusion, and Fig.~\\ref{fig:Chikusei_reconstruct} presents the visual illustration. From the results, it can be observed that even though the HS image is spatially degraded by a factor of 32, the fusion methods can efficiently reconstruct the spatial details with the help of a 3-band MS image. On this toy test dataset, ICCV'15 performed the best. However, different datasets need different kinds of regularizers. The fusion of HS-MS images for efficient and large-scale applications is still a challenge for further research.
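\nFor reference, three of the indices used above can be computed as in the Python\/NumPy sketch below; X and Xref are HrHS cubes of shape M x N x B, and the ERGAS form assumes the common definition parameterized by the spatial resolution ratio.\n\\begin{verbatim}\nimport numpy as np\n\ndef rmse(X, Xref):\n    return np.sqrt(np.mean((X - Xref) ** 2))\n\ndef sa_deg(X, Xref, eps=1e-12):\n    # Mean spectral angle (degrees) between corresponding pixel spectra.\n    a = X.reshape(-1, X.shape[-1])\n    b = Xref.reshape(-1, Xref.shape[-1])\n    cos = (a * b).sum(1) / (np.linalg.norm(a, axis=1)\n                            * np.linalg.norm(b, axis=1) + eps)\n    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean()\n\ndef ergas(X, Xref, ratio=32.0):\n    # 100/ratio * sqrt(mean over bands of (band RMSE / band mean)^2).\n    err = np.sqrt(((X - Xref) ** 2).mean(axis=(0, 1)))\n    mu = Xref.mean(axis=(0, 1))\n    return 100.0 / ratio * np.sqrt(np.mean((err / mu) ** 2))\n\\end{verbatim}\n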
\n\n\\subsection{Remaining Challenges}\nSubspace based non-convex methods for the fusion of HS-MS images have been well developed. However, most of the remarkable results are achieved in simulated experiments. For real applications with HS-MS images from two different satellite sensors, several challenges still remain.\n\n\\begin{itemize}\n \\item \\textbf{Blind.} Most fusion methods assume linear spatial and spectral downsampling from the HrHS image to the HS-MS images. However, in real applications, the degradation is complex and unknown in advance. How to blindly reconstruct the HrHS image is a challenge for future research. \n \\item \\textbf{Regularizer.} We reviewed the subspace based fusion methods from the unmixing, orthogonal subspace, and tensor decomposition perspectives. Different assumptions are suitable for exploiting different structures of the HS image. How to mine the essence of HS images and develop efficient regularizers for large-scale processing still remains a challenge.\n \\item \\textbf{Evaluation.} In real cases, the enhanced HrHS images fused from the HS and MS images do not exist as reference images. How to evaluate the final enhanced HrHS images is also a key problem for the future development of HS-MS fusion approaches. \n\\end{itemize}\n\n\\section{Cross-modality Learning for Large-scale Land Cover Mapping}\nWith the ever-growing availability of diverse RS data sources from both satellite and airborne sensors, multimodal data processing and analysis in RS \\cite{dalla2015challenges,hong2020more} can provide potential possibilities to break the performance bottleneck in many high-level applications, e.g., land cover classification.\n\nHS data are characterized by rich spectral information, which enables a high discrimination ability for material recognition at a more accurate and finer level. It should be noted, however, that the HS image coverage from space is much narrower compared to MS imaging due to the limitations of imaging principles and devices. That means that HS-dominated multimodal learning (MML) fails to identify the materials over a large geographic coverage and even at a global scale \\cite{hong2020x}. Fortunately, large-scale MS or synthetic aperture radar (SAR) images are openly available from, e.g., Sentinel-1, Sentinel-2, and Landsat-8. This, therefore, drives us to ponder over a problem: \\textit{can HS images acquired only in a limited area improve the land cover mapping performance over a larger area covered by the MS or SAR images?} This is a typical issue of cross-modality learning (CML) from an ML point of view.\n\nTaking bi-modality as an example, CML for simplicity refers to training a model using two modalities while one modality is absent in the testing phase, or \\emph{vice versa} (only one modality is available for training and bi-modality for testing) \\cite{ngiam2011multimodal}. Such a CML problem, which exists widely in a variety of RS tasks, is more applicable to real-world cases. Fig. \\ref{fig:CML} illustrates the differences between MML and CML in terms of the training and testing processes. The core idea of CML is to find a new data space, where the information can be exchanged effectively across different modalities.
Thereupon, we formulate this process in a general way as follows: \n\\begin{equation}\n\\label{CML_eq1}\n\\begin{aligned}\n \\mathop{\\min}_{\\mathbf{X},\\{\\mathbf{U}_{s}\\}_{s=1}^{m}}\\sum_{s=1}^{m}\\frac{1}{2}\\norm{\\mathbf{X}-\\mathbf{U}_{s}\\mathbf{Y}_{s}}_{\\F}^{2} \\;\\; {\\rm s.t.}\\; \\mathbf{X},\\{\\mathbf{U}_{s}\\}_{s=1}^{m}\\in \\mathcal{C},\n\\end{aligned}\n\\end{equation}\nwhere $m$ is the number of input modalities. For simplicity, we only consider the bi-modality case in this topic, i.e., $m=2$. According to the different learning strategies applied to the modalities, CML can be roughly categorized into two groups: manifold alignment (MA) and shared subspace learning (SSL). The differences between the two types of approaches mainly lie in the following:\n\\begin{itemize}\n \\item MA learns the low-dimensional embedding by preserving the aligned manifold (or graph) structure between different modalities. In the process of graph construction, the similarities between samples (unsupervised MA) and indirect label information (supervised or semi-supervised MA) are used. Despite the competitive performance obtained by MA-based approaches on the CML task, the discrimination ability of the learned features remains limited, owing to the lack of a direct bridge between the low-dimensional features and the label information.\n \\item SSL, as the name suggests, aims to find a latent shared subspace, where the features of different modalities are linked via a manifold alignment regularizer. In addition, the learned features are further connected with the label information. The two steps are jointly optimized in an SSL model, tending to yield more discriminative feature representations.\n\\end{itemize}\nIn the following, we briefly review and detail some representative approaches belonging to these two groups.\n\n\\subsection{Manifold Alignment based Approach}\nAs the name suggests, MA is capable of aligning multiple modalities on manifolds into a latent subspace, achieving highly effective knowledge transfer \\cite{wang2009general}. Due to its interactive learning ability, MA is a good fit for large-scale RS image classification. In \\cite{matasci2011transfer}, domain adaptation was investigated to reduce the gap between the source and target domains of HS data for land cover classification. By simultaneously considering labeled and unlabeled samples, Tuia \\textit{et al.} \\cite{tuia2014semisupervised} used semi-supervised MA (SSMA) techniques \\cite{wang2011heterogeneous} to align multi-view RS images in the manifold space, in an attempt to eliminate the effects of image variations caused by the different views. Matasci \\textit{et al.} \\cite{matasci2015semisupervised} modified the classic transfer component analysis \\cite{pan2011domain}, making it applicable to the land cover classification of RS images. Moreover, a kernelized MA approach presented in \\cite{tuia2016kernel} projected the multimodal RS data into a higher-dimensional space and aligned them in a nonlinear way. Hu \\textit{et al.} \\cite{hu2019comparative} thoroughly reviewed semi-supervised MA methods with respect to the fusion-based classification of HS and polarimetric SAR images. Based on the work in \\cite{hu2019comparative}, the same investigators made full use of topological data analysis and designed a new graph structure for optical (e.g., HS) and SAR data fusion \\cite{hu2019mima}. 
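As a concrete illustration of the general model in Eq. (\\ref{CML_eq1}) and of the training\/testing split sketched in Fig. \\ref{fig:CML}, the following minimal example solves the unconstrained bi-modality case with plain alternating least squares on randomly generated stand-in data. The solver, the dimensions, and all variable names are our own illustrative choices rather than the algorithms of the works reviewed in this section, and the constraint set $\\mathcal{C}$ is omitted, so the sketch is schematic only.
\\begin{verbatim}
import numpy as np

# Schematic CML training: learn a shared representation X and
# per-modality projections U_s from paired data (Eq. CML_eq1).
# The constraint set C is ignored here; without it X can shrink
# towards the trivial solution X = 0, which constraints or a
# normalisation of X would prevent in a real implementation.
rng = np.random.default_rng(0)
n, d_hs, d_ms, k = 200, 50, 4, 10     # samples, HS and MS dims, subspace dim
Y1 = rng.standard_normal((d_hs, n))   # modality 1, e.g. HS (training only)
Y2 = rng.standard_normal((d_ms, n))   # modality 2, e.g. MS

X = rng.standard_normal((k, n))       # shared subspace features
for _ in range(50):
    # U_s update: least squares, U_s = X Y_s^T (Y_s Y_s^T)^(-1)
    U1 = X @ Y1.T @ np.linalg.pinv(Y1 @ Y1.T)
    U2 = X @ Y2.T @ np.linalg.pinv(Y2 @ Y2.T)
    # X update: the minimiser of the two quadratic terms is their mean
    X = 0.5 * (U1 @ Y1 + U2 @ Y2)

# CML testing phase: modality 1 is absent, so classifier features on
# the large-coverage area come from modality 2 alone.
X_test = U2 @ Y2
\\end{verbatim}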
\n\nMathematically, the MA idea can be implemented by solving the following non-convex model:\n\\begin{equation}\n\\label{CML_eq2}\n\\begin{aligned}\n \\mathop{\\min}_{\\{\\mathbf{U}\\}_{s=1}^{m}}\\frac{A+C}{B},\n\\end{aligned}\n\\end{equation}\nwhere $A$, $B$, and $C$ are \n\\begin{equation*}\n\\begin{aligned}\n &A = \\frac{1}{2}\\sum_{p=1}^{m}\\sum_{q=1}^{m}\\sum_{i=1}^{n}\\sum_{j=1}^{n}\\norm{\\mathbf{U}_{p}\\mathbf{y}_{p}^{i}-\\mathbf{U}_{q}\\mathbf{y}_{q}^{j}}_{2}^{2}\\mathbf{W}_{sim}^{i,j},\\\\\n &B = \\frac{1}{2}\\sum\\limits_{p=1}^{m}\\sum\\limits_{q=1}^{m}\\sum\\limits_{i=1}^{n}\\sum\\limits_{j=1}^{n}\\norm{\\mathbf{U}_{p}\\mathbf{y}_{p}^{i}-\\mathbf{U}_{q}\\mathbf{y}_{q}^{j}}_{2}^{2}\\mathbf{W}_{dis}^{i,j},\\\\\n &C = \\frac{1}{2}\\sum_{t=1}^{m}\\sum_{i=1}^{n}\\sum_{j=1}^{n}\\norm{\\mathbf{U}_{t}\\mathbf{y}_{t}^{i}-\\mathbf{U}_{t}\\mathbf{y}_{t}^{j}}_{2}^{2}\\mathbf{W}_{t}^{i,j}.\n\\end{aligned}\n\\end{equation*}\nBy minimizing problem (\\ref{CML_eq2}), the $\\{\\mathbf{U}\\}_{s=1}^{m}$ can be estimated via a generalized eigenvalue decomposition. We then have $\\mathbf{X}=\\mathbf{U}_{s}\\mathbf{Y}_{s}$. Three different graphs need to be pre-computed in Eq. (\\ref{CML_eq2}), including the similarity graph, i.e., $\\mathbf{W}_{sim}$:\n\\begin{equation}\n\\label{CML_eq3}\n\\begin{aligned}\n\t \\mathbf{W}_{sim}=\n\t \\left[\n \t \\begin{matrix}\n \t \\mathbf{W}_{sim}^{1,1}&\\mathbf{W}_{sim}^{1,2}&\\cdots& \\mathbf{W}_{sim}^{1,m}\\\\\n \t \\mathbf{W}_{sim}^{2,1}&\\mathbf{W}_{sim}^{2,2}&\\cdots& \\mathbf{W}_{sim}^{2,m}\\\\\n \t \\vdots&\\vdots&\\ddots&\\vdots\\\\\n \t \\mathbf{W}_{sim}^{m,1}&\\mathbf{W}_{sim}^{m,2}&\\cdots& \\mathbf{W}_{sim}^{m,m}\\\\\n \\end{matrix}\n \\right],\n\\end{aligned}\n\\end{equation}\nthe dissimilarity matrix, i.e., $\\mathbf{W}_{dis}$:\n\\begin{equation}\n\\label{CML_eq4}\n\\begin{aligned}\n\t \\mathbf{W}_{dis}=\n\t \\left[\n \t \\begin{matrix}\n \t \\mathbf{W}_{dis}^{1,1}&\\mathbf{W}_{dis}^{1,2}&\\cdots& \\mathbf{W}_{dis}^{1,m}\\\\\n \t \\mathbf{W}_{dis}^{2,1}&\\mathbf{W}_{dis}^{2,2}&\\cdots& \\mathbf{W}_{dis}^{2,m}\\\\\n \t \\vdots&\\vdots&\\ddots&\\vdots\\\\\n \t \\mathbf{W}_{dis}^{m,1}&\\mathbf{W}_{dis}^{m,2}&\\cdots& \\mathbf{W}_{dis}^{m,m}\\\\\n \\end{matrix}\n \\right],\n\\end{aligned}\n\\end{equation}\nand the topology structure of each single modality, obtained by a $k$-nearest-neighbor ($k$nn) graph, i.e., $\\mathbf{W}_{t}$:\n\\begin{equation}\n\\label{CML_eq5}\n\\begin{aligned}\n\t \\mathbf{W}_{t}=\n\t \\left[\n \t \\begin{matrix}\n \t \\mathbf{W}_{t}^{1,1}&\\mathbf{0}&\\cdots& \\mathbf{0}\\\\\n \t \\mathbf{0}&\\mathbf{W}_{t}^{2,2}&\\cdots& \\mathbf{0}\\\\\n \t \\vdots&\\vdots&\\ddots&\\vdots\\\\\n \t \\mathbf{0}&\\mathbf{0}&\\cdots& \\mathbf{W}_{t}^{m,m}\\\\\n \\end{matrix}\n \\right].\n\\end{aligned}\n\\end{equation}\nIn Eqs. 
(\\ref{CML_eq3})$-$(\\ref{CML_eq5}), $\\mathbf{W}_{sim}^{i,j}$, $\\mathbf{W}_{dis}^{i,j}$, and $\\mathbf{W}_{t}^{i,j}$ are given, respectively, by\n\\begin{equation*}\n \\mathbf{W}_{sim}^{i,j}=\n \\begin{cases}\n \\begin{aligned}\n 1, \\; \\; & \\text{if $\\mathbf{y}_{p}^{i}$ and $\\mathbf{y}_{q}^{j} \\in C_{k}$}\\\\\n 0, \\; \\; & \\text{otherwise,}\n \\end{aligned}\n \\end{cases}\n\\end{equation*} \n\\begin{equation*}\n \\mathbf{W}_{dis}^{i,j}=\n \\begin{cases}\n \\begin{aligned}\n 1, \\; \\; & \\text{if $\\mathbf{y}_{p}^{i}$ and $\\mathbf{y}_{q}^{j}\\notin C_{k}$ }\\\\\n 0, \\; \\; & \\text{otherwise,}\n \\end{aligned}\n \\end{cases}\n\\end{equation*} \n\\begin{equation*}\n \\mathbf{W}_{t}^{i,j}=\n \\begin{cases}\n \\begin{aligned}\n \\exp\\left(-\\frac{\\norm{\\mathbf{y}_{t}^{i}-\\mathbf{y}_{t}^{j}}_{2}^{2}}{2\\sigma^{2}}\\right), \\; \\; & \\text{if $\\mathbf{y}_{t}^{i}\\in\\phi_{k}(\\mathbf{y}_{t}^{j})$;}\\\\\n 0, \\; \\; & \\text{otherwise,}\n \\end{aligned}\n \\end{cases}\n\\end{equation*} \nwhere $\\phi_{k}(\\bullet)$ denotes the $k$ nearest neighbors of $\\bullet$.\n\n\\subsection{Shared Subspace Learning based Approach}\nDue to the lack of direct relational modeling between the learned features and the label information, MA-based approaches fail to effectively activate the connections across modalities \\cite{ngiam2011multimodal}, thereby yielding relatively weak transferability between different modalities, particularly for heterogeneous data. There have been some tentative works in recent years, providing potential solutions to overcome the aforementioned challenges. For example, Hong \\textit{et al.} \\cite{hong2019cospace} for the first time proposed a supervised CoSpace model to learn a latent discriminative subspace from HS-MS correspondences for the CML-related classification problem. Building on it, the same authors \\cite{hong2019learnable} fully tapped the potential of CoSpace by learning a data-driven graph structure from both labeled and unlabeled samples, yielding a learnable manifold alignment (LeMA) approach. Moreover, \\cite{hong2020learning} investigated and analyzed different regression techniques in CoSpace, i.e., $\\ell_2$-norm ridge regression and $\\ell_1$-norm sparse regression. In \\cite{hong2020graph}, a semi-supervised graph-induced aligned learning (GiAL) model was developed by jointly regressing labels and pseudo-labels.\n\nAccordingly, these methods can be generalized into a unified model \\cite{hong2019cospace} to address the CML problem in a regression-based fashion:\n\\begin{equation}\n\\label{CML_eq6}\n\\begin{aligned}\n \\mathop{\\min}_{\\mathbf{P}, \\{\\mathbf{U}_{s}\\}_{s=1}^{m}}&\\frac{1}{2}\\norm{\\mathbf{M}-\\mathbf{P}\\mathbf{U}_{s}\\mathbf{Y}_{s}}_{\\F}^{2}+\\Psi(\\mathbf{P})+\\Omega(\\{\\mathbf{U}_{s}\\}_{s=1}^{m})\\\\\n & {\\rm s.t.}\\;\\; \\mathbf{U}_{s}\\mathbf{U}_{s}^{\\top}=\\mathbf{I}, \\;s=1,\\cdots,m,\n\\end{aligned}\n\\end{equation}\nwhere $\\{\\mathbf{U}_{s}\\}_{s=1}^{m}$ denote the projections linking to the shared features for different modalities. 
To avoid over-fitting of the model and to stabilize the learning process, $\\mathbf{P}$ can be regularized by the Frobenius norm \\cite{hong2019cospace} or the $\\ell_{1,1}$-norm \\cite{hong2020learning}:\n\\begin{equation}\n\\label{CML_eq7}\n\\begin{aligned}\n \\Psi(\\mathbf{P})=\\norm{\\mathbf{P}}_{\\F}^{2}, \\; \\text{or}\\; \\norm{\\mathbf{P}}_{1,1},\n\\end{aligned}\n\\end{equation}\nand $\\Omega(\\{\\mathbf{U}_{s}\\}_{s=1}^{m})$ is specified as a manifold alignment term on the multimodal data, which is written as\n\\begin{equation}\n\\label{CML_eq8}\n\\begin{aligned}\n \\Omega(\\{\\mathbf{U}_{s}\\}_{s=1}^{m})=\\tr(\\mathbf{U}\\mathbf{Y}\\mathbf{L}\\mathbf{Y}^{\\top}\\mathbf{U}^{\\top}),\n\\end{aligned}\n\\end{equation}\nwhere $\\mathbf{U}=[\\mathbf{U}_{1},\\mathbf{U}_{2},\\cdots,\\mathbf{U}_{m}]$ and\n\\begin{equation*}\n\\begin{aligned}\n \\mathbf{Y}=\n\t \\left[\n \t \\begin{matrix}\n \t \\mathbf{Y}_{1} & \\mathbf{0} & \\cdots & \\mathbf{0}\\\\\n \t \\mathbf{0} & \\mathbf{Y}_{2} & \\cdots & \\mathbf{0}\\\\\n \t \\vdots&\\vdots&\\ddots&\\vdots\\\\\n \t \\mathbf{0} & \\mathbf{0} & \\cdots & \\mathbf{Y}_{m}\\\\\n \\end{matrix}\n \\right]. \n\\end{aligned}\n\\end{equation*}\nSimilar to Fig. \\ref{fig:graph}, $\\mathbf{L}$ is a joint Laplacian matrix. \n\nUsing the general model in Eq. (\\ref{CML_eq6}), \n\\begin{itemize}\n \\item \\cite{hong2019cospace} considers the HS-MS correspondences that exist in an overlapped region as the model input. The learned shared representations (e.g., $\\mathbf{X}=\\mathbf{U}_{s}\\mathbf{Y}_{s}$) can then be used for classification on a larger area, even though only MS data are available in the inference phase;\n \\item Differently, \\cite{hong2019learnable} inputs not only the labeled HS-MS pairs but also unlabeled MS data in large quantities. With graph learning, i.e., with the variable $\\mathbf{L}$ learned from the data rather than fixed by a given RBF kernel, the unlabeled information can be exploited to find a better decision boundary. According to the equivalent form of Eq. (\\ref{CML_eq8}), we then have\n \\begin{equation}\n \\label{CML_eq9}\n \\begin{aligned}\n \\tr(\\mathbf{U}\\mathbf{Y}\\mathbf{L}\\mathbf{Y}^{\\top}\\mathbf{U}^{\\top})=\\frac{1}{2}\\tr(\\mathbf{W}\\mathbf{d})=\\frac{1}{2}\\norm{\\mathbf{W}\\odot\\mathbf{d}}_{1,1},\n \\end{aligned}\n \\end{equation}\n where $\\mathbf{d}_{i,j}=\\norm{\\mathbf{x}_{i}-\\mathbf{x}_{j}}_{2}^{2}$ denotes the pair-wise distance in Euclidean space. Using Eq. (\\ref{CML_eq9}), the resulting optimization problem with respect to the variable $\\mathbf{W}$ is \n \\begin{equation}\n \\label{CML_eq10}\n \\begin{aligned}\n \\mathop{\\min}_{\\mathbf{W}}\\;\\frac{1}{2}&\\norm{\\mathbf{W}\\odot\\mathbf{d}}_{1,1}\\\\\n &{\\rm s.t.} \\; \\mathbf{W}=\\mathbf{W}^{\\top},\\; \\mathbf{W}_{i,j}\\geq 0,\\; \\norm{\\mathbf{W}}_{1,1}=c.\n \\end{aligned}\n \\end{equation}\n \\item Inspired by the brain-like feedback mechanism presented in \\cite{hong2019learning}, a more intelligent CML model was proposed in \\cite{hong2020graph}. With the joint use of labels and pseudo-labels updated by the graph feedback in each iteration, more representative features can also be learned (even if a certain modality is absent, i.e., the CML case). \n\\end{itemize}\n\n\\begin{table}[!t]\n\\centering\n\\caption{Quantitative comparison of SOTA algorithms related to the CML issue in terms of OA, AA, and $\\kappa$ using the NN classifier on the Houston2013 datasets. 
The best one is shown in bold.}\n\\resizebox{0.35\\textwidth}{!}{\n\\begin{tabular}{c||ccc}\n\\toprule[1.5pt] Methods & OA (\\%) & AA (\\%) & $\\kappa$ \\\\\n\\hline \\hline\nO-Baseline & 62.12 & 65.97 & 0.5889\\\\\nUSMA \\cite{he2004locality} & 65.54 & 68.81 & 0.6251\\\\\nSMA \\cite{wang2009general} & 68.01 & 70.50 & 0.6520\\\\\nSSMA \\cite{wang2011heterogeneous} & 69.29 & 72.00 & 0.6659\\\\\nCoSpace \\cite{hong2019cospace} & 69.38 & 71.69 & 0.6672\\\\\nLeMA \\cite{hong2019learnable} & 73.42 & 74.76 & 0.7110\\\\\nGiAL \\cite{hong2020graph} & \\bf 80.66 & \\bf 81.31 & \\bf 0.7896\\\\\n\\bottomrule[1.5pt]\n\\end{tabular}}\n\\label{tab:CML}\n\\end{table}\n\n\\subsection{Experimental Study}\nWe evaluate the performance of several SOTA algorithms related to the CML issue both quantitatively and qualitatively. They are O-Baseline (i.e., using the original image features), unsupervised MA (USMA) \\cite{he2004locality}, supervised MA (SMA)\\footnote{\\url{https:\/\/sites.google.com\/site\/changwangnk\/home\/ma-html}} \\cite{wang2009general}, SSMA \\cite{wang2011heterogeneous}, CoSpace\\footnote{\\url{https:\/\/github.com\/danfenghong\/IEEE_TGRS_CoSpace}} \\cite{hong2019cospace}, LeMA\\footnote{\\url{https:\/\/github.com\/danfenghong\/ISPRS_LeMA}} \\cite{hong2019learnable}, and GiAL \\cite{hong2020graph}. Three common indices, i.e., \\textit{OA}, \\textit{AA}, and $\\kappa$, are adopted to quantify the classification performance using the NN classifier (as in Table \\ref{tab:CML}) on the Houston2013 HS-MS datasets, which have been widely used in much research \\cite{hong2019cospace,hong2019learnable,hong2020learning,hong2020graph}. \n\nTable \\ref{tab:CML} gives the quantitative comparison between the above-mentioned methods for the CML-related classification, while Fig. \\ref{fig:CML_CM_H} visualizes a region of interest (ROI) from the classification maps. By and large, the classification accuracy of O-Baseline, i.e., only using MS data, is much lower than that of the other methods. By aligning multimodal data on manifolds, MA-based approaches perform better than O-Baseline, with approximate OA increases of $3\\%$ for USMA, $6\\%$ for SMA, and $7\\%$ for SSMA. As expected, the classification performance of SSL-based models, e.g., CoSpace, LeMA, and GiAL, is clearly superior to that of the MA-based ones. In particular, GiAL dramatically outperforms the other competitors, owing to the use of the brain-like feedback mechanism and graph-driven pseudo-label learning. Visually, shared learning methods tend to capture more robust spectral properties and achieve more realistic classification results. As can be seen from Fig. \\ref{fig:CML_CM_H}, the shadow region covered by clouds is classified well by CoSpace, LeMA, and GiAL, while the MA-based models fail to identify the materials in this region.\n\n\\begin{figure}[!t]\n\t \\centering\n\t\t\\subfigure{\n\t\t\t\\includegraphics[width=0.42\\textwidth]{CML_CM_H}\n\t\t}\n \\caption{ROI visualization of classification maps using different SOTA methods related to the CML issue.}\n\\label{fig:CML_CM_H}\n\\end{figure}\n\n\\subsection{Remaining Challenges}\nCML has drawn growing interest from researchers in computer vision and ML, yet it has rarely been investigated in the RS community. In other words, CML is an emerging topic in RS, which means that many difficulties (or challenges) remain to be overcome. 
In detail:\n\n\\begin{itemize}\n \\item \\textbf{Data Preparation.} Since multimodal data are acquired under different contexts, with different sensors, resolutions, etc., data collection and processing inevitably pose a great challenge. For example, errors caused by the interpolation between different resolutions, the registration of geographical coordinates, pixel-wise biases between different sensors, and uncertainties of image degradation in the imaging process easily yield poorly registered multimodal data.\n \\item \\textbf{Model Transferability.} Due to the different imaging mechanisms and principles, the viscosity between pixels from the same modality is much stronger than that between pixels from different modalities. This might lead to difficulties in fusing multimodal information at a deep level, particularly for heterogeneous data (e.g., HS and SAR data), further limiting the model's transferability.\n \\item \\textbf{Labeling.} Unlike natural or street-view images, which are relatively easy to label manually and accurately, labeling RS scenes (which requires field trips) is extremely expensive and time-consuming. Consequently, only a limited number of labeled samples are available for training and, even worse, these samples contain many noisy labels. These problems are key points to be addressed by the next generation of interpretable AI models for the RS-related CML task.\n\\end{itemize}\n\n\\section{Conclusion and Future Prospect}\nCharacterized by a nearly continuous spectral profile that is capable of sampling and representing the whole electromagnetic spectrum, HS images play an important role in both promoting the development of new techniques and accelerating practical applications, not limited to the fields of RS, geoscience, signal and image processing, numerical optimization and modeling, ML, and AI. However, severe difficulties and challenges remain that need to be carefully considered in the development and application of HS RS techniques. One sign is that HS data analysis methods dominated by expert systems have been unable to meet the demands of an ever-growing volume of HS data, whether in performance gain or in processing efficiency. Another sign is that, despite the currently unprecedented progress made in computer vision, ML, and AI techniques, model compatibility and interpretability for HS RS applications remain limited.\n\nDue to the SVs of HS data caused by various degradation mechanisms (e.g., environmental conditions, atmospheric effects, spectral nonlinear mixing), the redundancy of high-dimensional HS signals, and the complex practical cases underlying the HS products (e.g., low spatial resolution, narrow imaging range, instrumental noise), convex models under ideal circumstances usually fail to extract useful and diagnostic information from HS images (especially seriously corrupted products) and thereby to understand our environment. Considering that non-convex modeling is capable of characterizing more complex real scenes and better providing model interpretability, in this article we present a comprehensive and technical survey of five promising and representative research topics related to HS RS with a focus on non-convex modeling: HS image restoration, dimensionality reduction and classification, data fusion and enhancement, spectral unmixing, and cross-modality learning. 
For each of these topics, we review the current state-of-the-art methods with illustrations, show the significance and superiority of non-convex modeling in bridging the gap between HS RS and interpretable AI, and point out remaining challenges and future research directions.\n\nIt is well known that the HS image processing and analysis chain is wide-ranging. Beyond the five topics covered in this paper, we are not able to report in detail on all the important and promising applications related to HS RS missions. Several very noteworthy and active fields should receive more attention in future work, including target\/change detection, time-series analysis, multitemporal fusion\/classification, physical parameter inversion, image quality assessment, and various practical applications (e.g., precision farming, disaster management and response). Moreover, some crucial steps of the HS image pre-processing chain are also not covered here, such as atmospheric and geometric corrections, geographic coordinate registration, etc. Furthermore, the methodologies summarized and reported in this article mainly focus on shallow non-convex models. Undeniably, deep models, e.g., DL-based methods, are capable of excavating deeper and more intrinsic properties of HS data. There is, therefore, room for improvement in the development of more intelligent DL-related non-convex modeling with application to HS RS. For example, embedding more physically meaningful priors and devising advanced and novel deep unfolding \\cite{hershey2014deep} or unrolling \\cite{diamond2017unrolled} strategies to closely integrate data-driven DL with theoretically guaranteed optimization techniques is a promising way to open and interpret the so-called ``black box'' of DL models.\n\nFinally, we have to admit that non-convex modeling and optimization is a powerful tool across many disciplines, and the relevant studies along this direction have made tremendous progress both theoretically and technically. This provides the possibility of creating new methodologies and implementing interpretable AI for various HS RS applications. In this paper, we attempt to ``intellectualize'' these models by introducing more interpretable and physically meaningful knowledge to meet actual needs in a non-convex modeling fashion. In other words, we hope that non-convex modeling can play the role of a bridge connecting interpretable AI models and various research topics in HS RS. Our efforts in this paper are made to foster curiosity and create a good starting point for postgraduate and Ph.D. students as well as senior researchers working in HS-related fields, thereby helping them to find new and advanced research directions in the interdisciplinary area involving signal and image processing, ML, AI, and RS.\n\n\\section*{Acknowledgement}\nThe authors would like to thank Prof. D. Landgrebe from\nPurdue University for providing the AVIRIS Indian Pines data, Prof. P. Gamba from the University of Pavia for providing the ROSIS-3 Pavia University and Centre data, the Hyperspectral Image Analysis group at the University of Houston for providing the CASI University of Houston dataset used in the IEEE GRSS DFC2013 and DFC2018, and the Hyperspectral Digital Imagery Collection Experiment (HYDICE) for sharing the urban dataset free of charge. \n\nThis work from the D. Hong and X. 
Zhu sides is jointly supported by the German Research Foundation (DFG) under grant ZH 498\/7-2, by the Helmholtz Association through the framework of Helmholtz Artificial Intelligence (HAICU) - Local Unit ``Munich Unit @Aeronautics, Space and Transport (MASTr)'' and Helmholtz Excellent Professorship ``Data Science in Earth Observation - Big Data Fusion for Urban Research'', by the German Federal Ministry of Education and Research (BMBF) in the framework of the international AI future lab ``AI4EO -- Artificial Intelligence for Earth Observation: Reasoning, Uncertainties, Ethics and Beyond''. This work from the L. Gao side is supported by the National Natural Science Foundation of China under Grant 42030111 and Grant 41722108. This work from the W. He and N. Yokoya sides is supported by the Japan Society for the Promotion of Science under KAKENHI 19K20308 and KAKENHI 18K18067. This work from the J. Chanussot side has been partially supported by MIAI@Grenoble Alpes, (ANR-19-P3IA-0003) and by the AXA Research Fund. \n\nThe corresponding authors of this paper are Dr. Wei He and Prof. Lianru Gao.\n\n\\bibliographystyle{ieeetr}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{Intro.sec} In our $\\Lambda$CDM Universe,\nmost of the mass is made up of dark matter. On large scales baryons\ntrace the dark matter and its gravitational potential. Baryonic gas\nfalls into galaxies at the centres of dark matter haloes, where it\ncools radiatively and collapses to form stars. By naive gravitational\narguments, star formation should be a fast affair, consuming the gas\nover local free-fall times. However, from observations we know that it\nis a slow and inefficient process, taking $\\sim20-100$ free-fall\ntimes, depending on the scale under consideration\n\\citep[e.g.][]{Zuckerman1974, Krumholz2007,\n EvansNealJ2009}. Also, while observers have a notoriously hard time\nconfirming the existence of gas flowing into galaxies\n\\citep{Crighton2013}, they instead routinely detect oppositely\ndirected \\emph{outflows} at velocities of hundreds of km\/s \\citep[see\nreview by][]{Veilleux2005}.\n\nTo understand the non-linear problem of galaxy formation and\nevolution, theorists use cosmological simulations of dark matter,\ndescribing the flow and collapse of baryonic star-forming gas either\nwith directly coupled hydrodynamics or semi-analytic models. Strong\nfeedback in galaxies is a vital ingredient in any model of galaxy\nevolution, be it hydrodynamical or semi-analytic, that comes even\nclose to reproducing basic observables, such as the star formation\nhistory of the Universe, the stellar mass function of galaxies, the\nKennicutt-Schmidt relation, rotational velocities, and outflows\n\\citep[e.g.][]{Vogelsberger2013, Dubois2014, Hopkins2014,\n Schaye2015, Wang2015, Somerville2015}.\n\nIn order to capture the inefficient formation of stars, the first\ngeneration of galaxy evolution models included core collapse (or type\nII) supernova (SN) feedback, where massive stars ($\\ga 8 \\ \\rm{M}_{\\odot}$) end\ntheir short lives with explosions which inject mass, metals, and\nenergy into the inter-stellar medium (ISM). In early hydrodynamical\nsimulations, the time-integrated type II SN energy of a stellar\npopulation, $10^{51}$ ergs per SN event, was dumped thermally into the\ngas neighbouring the stellar population \\citep{Katz1992}. However,\nsuch thermal dump feedback had little impact on star formation,\nresulting in an over-abundance of massive and compact galaxies. 
This\nso-called over-cooling problem is partly numerical in nature, and a\nresult of low resolution both in time and space. As discussed by\n\\cite{DallaVecchia2012}, the energy is injected into too large a\ngas mass, typically resulting in much lower temperatures than those at\nwork in sub-pc scale SN remnants. The relatively high cooling rates at\nthe typical initial temperatures attained in the remnant, of\n$10^5-10^6$ K, allow a large fraction of the injected energy to be\nradiated away before the gas reacts hydrodynamically, resulting in\nsuppressed SN blasts and hence weak feedback. Gas cooling is,\nhowever, also a real and physical phenomenon, and while it is\nover-estimated in under-resolved simulations, a large fraction of the\nenergy in SN remnants may in fact be radiated away instead of being\nconverted into large-scale bulk motion \\citep{Thornton1998}.\n\nA number of sub-resolution SN feedback models have been developed\nover the last two decades for cosmological simulations, with the\nprimary motivation of reproducing large-scale observables, such as the\ngalaxy mass function, by means of efficient feedback. The four main\nclasses of these empirically motivated SN feedback models are i)\nkinetic feedback \\citep{Navarro1993}, where a fraction of the SN\nenergy is injected directly as momentum, often in combination with\ntemporarily disabling hydrodynamical forces \\citep{Springel2003},\nii) delayed cooling \\citep[e.g.][]{Gerritsen1997, Stinson2006},\nwhere radiative cooling is turned off for some time in the SN remnant,\niii) stochastic feedback \\citep{DallaVecchia2012}, where the SN\nenergy is re-distributed in time and space into fewer but more\nenergetic explosions, and iv) multiphase resolution elements that\nside-step unnatural `average' gas states at the resolution limit\n\\citep{Springel2003, Keller2014}.\n\nIn principle, a physically oriented approach to implementing SN\nfeedback with sub-grid models is desirable. The goal is then to\ninject the SN blast as it would emerge on the smallest resolved scale,\nby making use of analytic models and\/or high-resolution simulations\nthat capture the adiabatic phase, radiative cooling, the momentum\ndriven phase, and the interactions between different SN\nremnants. However, these base descriptions usually include simplified\nassumptions about the medium surrounding the SN remnant, and fail to\ncapture the complex inhomogeneities that exist on unresolved scales\nand can have a large impact on cooling rates. In addition, even if the\nSN energy is injected more or less correctly at resolved scales, it\nwill generally fail to evolve realistically thereafter because the\nmulti-phase ISM of simulated galaxies is still at best marginally\nresolved. Hence there remains a large uncertainty in how efficiently\nthe SN blast couples to the ISM. This translates into considerable\nfreedom, which requires SN feedback models to be calibrated to\nreproduce a set of observations \\citep[see discussion\nin][]{Schaye2015}.\n\nThe most recent generation of cosmological simulations has been\nrelatively successful in reproducing a variety of observations, in\nlarge part thanks to the development of subgrid models for efficient\nfeedback and the ability to calibrate their parameters, as well as the\ninclusion of efficient active-galactic nucleus (AGN) feedback in\nhigh-mass galaxies. 
However, higher-resolution simulation works\n\\citep[e.g.][]{Hopkins2012b, Agertz2013} suggest that SNe alone\nmay not provide the strong feedback needed to produce the inefficient\nstar formation we observe in the Universe.\n\nAttention has thus been turning towards complementary forms of stellar\nfeedback, which provide additional support to the action of\nSNe. Possible additional feedback mechanisms include stellar winds\n\\citep[e.g.][]{Dwarkadas2007, Rogers2013, Fierlinger2016},\nradiation pressure (e.g. \\citealt{Haehnelt1995, Thompson2005,\n Murray2010}, but see \\citealt{Rosdahl2015}), and cosmic rays\n\\citep[e.g.][]{Booth2013, Hanasz2013, Salem2014,\n Girichidis2016}.\n\nNone the less, SN explosions remain a powerful source of energy and\nmomentum in the ISM and a vital ingredient in galaxy evolution. For\nthe foreseeable future a sub-resolution description of them will\nremain necessary in cosmological simulations and even in most feasible\nstudies of isolated galaxies. The true efficiency of SN feedback is\nstill not well known, and hence we do not know to what degree we need\nto improve our SN feedback sub-resolution models versus appealing to\nthe aforementioned complementary physics.\n\nRather than introducing a new or improved sub-resolution SN feedback\nmodel, the goal of this paper is to study existing models, using\ncontrolled and relatively inexpensive numerical experiments of\nisolated galaxy discs modelled with gravity and hydrodynamics in the\nEulerian (i.e. grid-based) code {\\sc Ramses}{} \\citep{Teyssier2002}. We\nuse those simulations to assess each model's effectiveness in\nsuppressing star formation and generating galactic winds, the main\nobservational constraints we have on feedback in galaxies.\n\nWe study five subgrid prescriptions for core-collapse SN feedback in\nisolated galaxy discs. We explore the `maximum' and `minimum' effects\nwe can get from SN feedback using these models, and consider how they\nvary with galaxy mass, resolution, and feedback parameters where\napplicable. The simplest of those models is the `classic'\n\\emph{thermal dump}, where the SN energy is simply injected into the\nlocal volume containing the stellar population. Three additional\nmodels we consider have been implemented and used previously in\n{\\sc Ramses}{}. These are, in chronological order, \\emph{kinetic feedback},\ndescribed in \\cite{Dubois2008} and used in the Horizon-AGN\ncosmological simulations \\citep{Dubois2014}, \\emph{delayed\n cooling}, described in \\cite{Teyssier2013}, and \\emph{mechanical\n feedback}, described in \\cite{Kimm2014} and\n\\cite{Kimm2015}. In addition, for this work we have implemented\n\\emph{stochastic feedback} in {\\sc Ramses}{}, adapted from a previous\nimplementation in the smoothed particle hydrodynamics (SPH) code\n{\\sc Gadget}{}, described in \\citet[][henceforth\n{DS12}{}]{DallaVecchia2012}.\n\n\\begin{table*}\n \\centering\n \\caption\n {Simulation initial conditions and parameters for the two disc\n galaxies modelled in this paper. 
The listed parameters are, from \n left to right:\n Galaxy acronym used throughout the paper, $\\vcirc$: circular\n velocity at the virial radius, $R_{\\rm vir}$: halo virial radius\n (defined as the radius within which the DM density is $200$ times the\n critical density at redshift zero), $L_{\\rm box}$:\n simulation box length, $M_{\\rm halo}$: DM halo mass, $M_{\\rm disc}$: disc\n galaxy mass in baryons (stars+gas), $f_{\\rm gas}$: disc gas\n fraction, $M_{\\rm bulge}$: stellar bulge mass, $N_{\\rm part}$:\n Number of DM\/stellar particles, $m_{*}$: mass of\n stellar particles formed during the simulations, $\\Delta x_{\\rm max}$:\n coarsest cell resolution, $\\Delta x_{\\rm min}$: finest cell resolution,\n $Z_{\\rm disc}$: disc metallicity.}\n \\label{sims.tbl}\n \\begin{tabular}{l|rrrrrrrrrrrr}\n \\toprule\n Galaxy & $\\vcirc$ & $R_{\\rm vir}$ & $L_{\\rm box}$ & $M_{\\rm halo}$ \n & $M_{\\rm disc}$ & $f_{\\rm gas}$ & $M_{\\rm bulge}$ & $N_{\\rm part}$ \n & $m_{*}$ & $\\Delta x_{\\rm max}$& $\\Delta x_{\\rm min}$ & $Z_{\\rm disc}$ \\\\ \n acronym & [$\\kms$] & [kpc] & [kpc] & [$\\rm{M}_{\\odot}$]\n & [$\\rm{M}_{\\odot}$]& & [$\\rm{M}_{\\odot}$] & \n & [$\\rm{M}_{\\odot}$]& [kpc] & [pc] & [$\\Zsun$] \\\\\n \\midrule\n {\\sc g9} & $65$ & $89$ & $300$ &$10^{11}$ \n &$3.5 \\times 10^9$& $0.5$ &$3.5 \\times 10^8$& $10^6$ \n & $2.0 \\times 10^3$ & $2.3$ & $18$ & 0.1\\\\\n {\\sc g10} & $140$ & $192$ & $600$ & $10^{12}$\n &$3.5 \\times 10^{10}$ & $0.3$ & $3.5 \\times 10^9$ & $10^6$ \n & $1.6 \\times 10^4$ & $4.7$ & $36$ & $1.0$ \\\\\n \\bottomrule\n \\end{tabular}\n\\end{table*}\n\nThe layout of this paper is as follows. First, we describe the setup\nof our isolated galaxy disc simulations in \\Sec{simulations.sec}. We\nthen describe the SN feedback models in \\Sec{models.sec}. In\n\\Sec{comparison.sec} we compare results for each of these models\nusing their fiducial parameters in galaxy discs of two different\nmasses, focusing on the suppression of star formation and the\ngeneration of outflows. In \\Sec{res_conv.sec} we compare how these\nresults converge with numerical resolution, both in terms of physical\nscale, i.e. minimum gas cell size, and also in terms of stellar\nparticle mass. In Sections \\ref{fb_stoch.sec} - \\ref{fb_kin.sec} we\ntake a closer look at the stochastic, delayed cooling, and kinetic\nfeedback models respectively, and study how varying the free\nparameters in each model affects star formation, outflows and gas\nmorphology. The reader can skip those sections or pick out those of\ninterest, without straying from the thread of the paper. We discuss\nour results and implications in \\Sec{Discussion.sec}, and, finally, we\nconclude in \\Sec{Conclusions.sec}.\n\n\\section{Simulations} \\label{simulations.sec}\nBefore we introduce the SN feedback models compared in this paper, we\nbegin by describing the default setup of the simulations common to all\nruns.\n\nWe run controlled experiments of two rotating isolated disc galaxies,\nconsisting of gas and stars, embedded in dark matter (DM) haloes. The\nmain difference between the two galaxies is an order of magnitude\ndifference in mass, both baryonic and DM. We use the AMR code\n{\\sc Ramses}{} \\citep{Teyssier2002}, which simulates the interaction of\ndark matter, stellar populations and baryonic gas, via gravity,\nhydrodynamics and radiative cooling. 
The equations of hydrodynamics\nare computed using the HLLC Riemann solver \\citep{Toro1994} and the\nMinMod slope limiter to construct variables at cell interfaces from\nthe cell-centred values. We assume an adiabatic index of\n$\\gamma = 5\/3$ to relate the pressure and internal energy, appropriate\nfor an ideal monatomic gas. The trajectories of collisionless DM and\nstellar particles are computed using a particle-mesh solver with\ncloud-in-cell interpolation (\\citealt{Guillet2011}; the resolution\nof the gravitational force is the same as that of the hydrodynamical\nsolver).\n\n\\subsection{Initial conditions}\nThe main parameters for the simulated galaxies and their host DM haloes\nare presented in \\Tab{sims.tbl}. We focus most of our analysis on the\nlower-mass galaxy, which we name {\\sc g9}{}. It has a baryonic mass of\n$M_{\\rm bar} = M_{\\rm disc} + M_{\\rm bulge} = 3.8 \\times 10^9 \\ \\rm{M}_{\\odot}$, with an initial\ngas fraction of $f_{\\rm gas}=0.5$, and it is hosted by a DM halo of mass\n$M_{\\rm halo}=10^{11} \\ \\rm{M}_{\\odot}$. We also compare a less detailed set of\nresults for feedback models in a more massive galaxy, {\\sc g10}{}, similar\n(though somewhat lower) in mass to our Milky-Way (MW), with\n$M_{\\rm bar}= 3.8 \\times 10^{10} \\ \\rm{M}_{\\odot}$, $f_{\\rm gas}=0.3$, and\n$M_{\\rm halo}=10^{12} \\ \\rm{M}_{\\odot}$. Each simulation is run for $250$ Myr, which\nis $2.5$ orbital times (at the scale radii) for both galaxy masses,\nand enough for star formation and outflows to settle to quasi-static\nstates.\n\nThe initial conditions are generated with the {\\sc MakeDisk} code by\nVolker Springel \\citep[see][]{Springel2005a,Kim2014}, which has been\nadapted to generate {\\sc Ramses}{}-readable format by Romain Teyssier and\nDamien Chapon. The DM halo follows an NFW density profile\n\\citep{Navarro1997} with concentration parameter $c=10$ and spin\nparameter $\\lambda=0.04$ \\citep{Maccio2008}. The dark matter in each\nhalo is modelled by one million collisionless particles, hence the\n{\\sc g9}{} and {\\sc g10}{} galaxies have DM mass resolution of $10^5$ and\n$10^6 \\ \\rm{M}_{\\odot}$, respectively. The initial disc consists of stars and\ngas, both set up with density profiles which are exponential in radius\nand Gaussian in height from the mid-plane (scale radii of $1.5$ kpc\nfor {\\sc g9}{} and $3.2$ kpc for {\\sc g10}{}, and scale heights one tenth of the\nscale radius in both cases). The galaxies contain stellar bulges with\nmasses and scale radii both one tenth that of the disc. The initial\nstellar particle number is $1.1\\times 10^6$ , a million of which are\nin the disc and the remainder in the bulge. The mass of the initial\nstellar particles is $1.7 \\times 10^3$ and $10^4 \\ \\rm{M}_{\\odot}$ for the\n{\\sc g9}{} and {\\sc g10}{} galaxies, respectively, close to the masses of\nstellar particles formed during the simulation runs, which are shown\nin \\Tab{sims.tbl}. While contributing to the dynamical evolution and\ngravitational potential of the rotating galaxy disc, the initial\nstellar particles do not explode as SNe. This initial lack of feedback\nresults in over-efficient early star formation and a subsequent strong\nfeedback episode which typically then suppresses the star formation to\na semi-equilibrium state within a few tens of Myr (see e.g. star\nformation rate plots in \\Sec{sf.sec}). 
To overcome this shortcoming,\nfuture improvements should include sensible age assignments to the\ninitial stellar particles, which could be used to perform SN feedback\nright from the start of the simulations.\n\nThe temperature of the gas discs is initialised to a uniform $T=10^4$\nK and the ISM metallicity $Z_{\\rm disc}$ is set to $0.1$ and $1$\n$\\Zsun$ for the {\\sc g9}{} and {\\sc g10}{} galaxies, respectively. The\ncircum-galactic medium (CGM) initially consists of a homogeneous hot\nand diffuse gas, with $n_{\\rm{H}}=10^{-6} \\ \\cci$, $T=10^6 $ K, and zero\nmetallicity. The cutoffs for the gas discs are chosen to minimize the\ndensity contrast between the disc edges and the CGM. The square box\nwidths for the {\\sc g9}{} and {\\sc g10}{} galaxies are $300$ and $600$ kpc,\nrespectively, and we use outflow (i.e. zero gradient) boundary\nconditions on all sides.\n\nThe same initial conditions and similar simulation settings were used\nin \\cite{Rosdahl2015}, where we studied stellar radiation feedback\nin combination with thermal dump SNe. The main differences from the\nsetup of the previous work, apart from not including stellar\nradiation, are that here we include a homogeneous UV background, we\nform stellar particles that are about a factor of three more massive,\nand the previous work included a bug, now fixed\\footnote{Thanks to Sylvia\n Ploeckinger for finding and fixing the issue.}, in metal cooling,\nwhere the contribution of hydrogen and helium was double-counted at\nSolar metallicity\\footnote{We have checked and verified that the metal\n cooling bug has a negligible effect on the results of both this and\n our previous work.}. The most significant of these changes is the\nlarger stellar particle mass, which boosts the efficiency of thermal\ndump SN feedback in suppressing star formation and, to a lesser\nextent, in generating outflows.\n \n\\subsection{Adaptive refinement}\nEach refinement level uses half the cell width of the next coarser\nlevel, starting at the box width at the first level. Our simulations\nstart at level $7$, corresponding to a coarse resolution of $128^3$\ncells (i.e. $2^7$ cells per dimension), and adaptively refine up to a\nmaximum level of $14$, corresponding to an effective resolution of\n$16384^3$ cells. This corresponds to an optimal physical resolution of\n$18$ pc and $36$ pc in the less and more massive galaxies,\nrespectively. Refinement is done on mass: a cell is refined if it is\nbelow the maximum refinement level and either its total mass\n(DM+stars+gas) exceeds $8 \\, m_{*}$ (see mass values in\n\\Tab{sims.tbl}) or its width exceeds a quarter of the local Jeans\nlength.\n \n\\subsection{Gas thermochemistry}\nThe gas temperature and the non-equilibrium ionization states of\nhydrogen and helium are evolved with the method presented in\n\\cite{Rosdahl2013}, which includes collisional\nionization\/excitation, recombination, bremsstrahlung, di-electronic\nrecombination, and Compton electron scattering off cosmic microwave\nbackground photons. We include hydrogen and helium photo-ionization\nand heating of diffuse gas from a redshift zero\n\\cite{Faucher-Giguere2009} UV background, and enforce an exponential\ndamping of the UV radiation above the self-shielding density of\n$n_{\\rm{H}}=10^{-2} \\ \\cci$. \n\nAbove $10^4$ K, the contribution to cooling from metals is added using\n{\\sc Cloudy}{} \\citep[][version 6.02]{Ferland1998} generated tables,\nassuming photoionization equilibrium with a redshift zero\n\\cite{Haardt1996} UV background. 
Below $10^4$ K, we use fine\nstructure cooling rates from \\cite{Rosen1995}, allowing the gas to\ncool radiatively to $10$ K. \n\n\\subsection{Star formation}\\label{sf_model.sec}\nWe use a standard star formation (SF) model which follows a Schmidt\nlaw. In each cell where the hydrogen number density is above the star\nformation threshold, $n_{*} = 10 \\ \\cci$, gas is converted into stars at\na rate $\\dot \\rho_{*} = \\epsilon_{*} \\rho \/ t_{\\rm ff}$, where $\\rho$ is the\ngas (mass) density and $\\epsilon_{*}$ is the star formation efficiency per\nfree-fall time, $t_{\\rm ff} = \\left[ 3 \\pi\/(32 G \\rho) \\right]^{1\/2}$,\nwhere $G$ is the gravitational constant. Stellar populations are\nrepresented by collisionless stellar particles that are created\nstochastically using a Poissonian distribution \\citep[for details\nsee][]{Rasera2006}, which returns the stellar particle mass as an\ninteger multiple of $m_*$ (see \\Tab{sims.tbl}). We use $\\epsilon_{*}=2\\%$ in\nthis work \\citep[e.g.][]{Krumholz2007}. In future work we will\nconsider how varying the details of star formation affects the\nefficiency of SN feedback, but that is beyond the scope of the present\npaper. The stellar particle masses are given in \\Tab{sims.tbl}, and\nare equal to the SF density threshold, $n_{*}$, times the volume of a\nmaximally refined gas cell\\footnote{We do not allow more than $90 \\%$\n of the cell gas to be removed when forming stars. Thus, stellar\n particles actually do not form below a density of $1.11 \\ n_{*}$.}.\n\n\\subsection{Artificial Jeans pressure} \nTo prevent numerical fragmentation of gas below the Jeans scale\n\\citep{Truelove1997}, an artificial `Jeans pressure' is maintained\nin each gas cell in addition to the thermal pressure. In terms of an\neffective temperature, the floor can be written as\n$\\TJeans = T_0 \\ n_{\\rm{H}} \/ n_{*}$, where we have set $T_0=500$ K (and $n_{*}$\nis the aforementioned star formation threshold), to ensure that the\nJeans length is resolved by a constant minimum number of cell widths\nat any density -- 7 and 3.5 cell widths in the smaller and larger\ngalaxy simulations, respectively \\citep[see Eq. 3\nin][]{Rosdahl2015}. The pressure floor is non-thermal, in the\nsense that the gas temperature which is evolved in the thermochemistry\nis the difference between the total temperature and the floor --\ntherefore we can have $T \\ll \\TJeans$.\n\n\\section{SN feedback} \\label{models.sec} Supernova feedback is\nperformed with single and instantaneous injections of the cumulative\nSN energy per stellar population particle. Each stellar particle has\nan energy and mass injection budget of\n\\begin{align}\nE_{\\rm SN} &= 10^{51} \\ {\\rm{erg}} \\ \\ \\eta_{\\rm SN} \\ \\frac{m_{*}}{m_{\\rm SN}}, \n \\label{Esn.eq} \\\\\nm_{\\rm ej} &=\\eta_{\\rm SN} \\ m_*, \\label{msn.eq}\n\\end{align}\nrespectively, where $\\eta_{\\rm SN}$ is the fraction of stellar mass that is\nrecycled into SN ejecta\\footnote{Note that we will neglect the mass\n that ends up in stellar remnants of SNe.}, $m_{\\rm SN}$ is the average\nstellar mass of a type II SN progenitor, and, as a reminder, $m_{*}$\nis the mass of the stellar particle. We assume a\n\\cite{Chabrier2003} initial mass function (IMF) and set\n$\\eta_{\\rm SN}=0.2$ and $m_{\\rm SN}=10 \\ \\rm{M}_{\\odot}$, giving at least\n$40 \\times 10^{51}$ ergs per particle in the {\\sc g9}{} galaxy and\n$320 \\times 10^{51}$ ergs in the {\\sc g10}{} galaxy. We neglect the metal\nyield associated with stellar populations, i.e. 
the stellar particles\ninject no metals into the gas, and the metallicity of the gas disc\nstays at roughly the initial value of $0.1$ solar (which is negligibly\ndiluted due to mixing with the pristine CGM). The time delay for the\nSN event is 5 Myr after the birth of the stellar particle.\n\nThe model for SN energy and mass injection, and how it affects the\ngalaxy properties and its environment, is the topic of this paper. We\nexplore five different SN models, which we now describe.\n\n\\subsection{Thermal dump feedback}\nThis is the simplest feedback model, and one which is well known\nto suffer from catastrophic radiative losses at low resolution\n\\citep[e.g.][]{Katz1992}. The (thermal) energy and mass of the\nexploding stellar particle are dumped into the cell hosting it, and\nthe corresponding mass is removed from the particle. Unless the\nSedov-Taylor phase is well resolved in both space and time, the\nthermal energy radiates away before it can adiabatically develop into\na shock wave. The primary aim of each of the SN models that follow is\nto overcome this `overcooling' problem. \n\nNote that in SPH simulations, the energy in thermal dump feedback is\ntypically distributed over $\\sim 10^2$ neighbouring gas particles,\nwhereas in our implementation all the energy is injected into a single\ncell. Consequently, in SPH simulations with similar resolution, the\namount of gas that is heated is typically larger. This can lead to\nlower temperatures and larger radiative losses in SPH, but in the case\nof strong density gradients around the feedback event, it can also\nenhance feedback efficiency if SPH particles with a low density\nreceive part of the SN energy.\n\n\\subsection{Stochastic thermal feedback}\nWhile the other SN models described in this paper existed previously\nin {\\sc Ramses}{} and have been described and studied individually in\nprevious publications, we have for this work adapted to {\\sc Ramses}{} the\nstochastic SN feedback model presented in\n\\citet[][a.k.a. {DS12}{}]{DallaVecchia2012}, which has so far only\nbeen used in SPH. The idea is to heat the gas in the cell hosting the\nstellar particle to a temperature high enough that the cooling time is\nlong compared with the sound crossing time across the cell. The energy\nthen has the chance to do significant work on the gas before being\nradiated away, and overcooling is reduced.\n\nAs argued in {DS12}{}, a single SN energy injection should heat the gas\nenough that the ratio between the cooling time and sound crossing time\nacross a resolution element is $t_{\\rm{c}}\/t_{\\rm{s}} \\ga 10$. Given a local\ngas density $n_{\\rm{H}}$, a physical resolution $\\Delta x$, and assuming cooling\nis dominated by bremsstrahlung (true for $T\\ga 10^7$ K), an expression\nfrom {DS12}{} (their eq. 15) can be used to derive an approximate\nrequired temperature increase, $\\Delta T_{\\rm stoch}$, to enforce this minimum\nratio and thus avoid catastrophic cooling, resulting in the condition\nthat\n\\begin{align}\\label{DT_min.eq}\n \\Delta T_{\\rm stoch} \\ga 1.1 \\times 10^{7} \\ {\\rm K} \n \\ \\left( \\frac{n_{\\rm{H}}}{10 \\ \\cci} \\right)\n \\ \\left( \\frac{\\Delta x}{100 \\ {\\rm pc}} \\right).\n\\end{align}\n\nA specified time delay after the birth of the stellar population\nparticle ($5$ Myr in this work), the particle injects its total available\nenergy, $E_{\\rm SN}$, into the hosting gas cell. 
Since $E_{\\rm SN}$ may be smaller\nthan what is needed for the required temperature increase $\\Delta T_{\\rm stoch}$,\nthe feedback event is done stochastically, with a probability\n\\begin{align}\\label{pSN.eq}\n\\begin{split}\n p_{\\rm SN} & = \\frac{E_{\\rm SN}}{\\Delta \\epsilon \\ m_{\\rm cell}} \\\\\n &= 1.6 \\\n \\left( \\frac{\\eta_{\\rm SN}}{0.2} \\right)\n \\left( \\frac{m_{*}}{2 \\times 10^3 \\ {\\rm \\rm{M}_{\\odot}}} \\right) \\\\\n & \\qquad {} \\left( \\frac{\\Delta x}{18 \\ {\\rm pc}} \\right)^{-3}\n \\left( \\frac{n_{\\rm{H}}}{10 \\ \\cci} \\right)^{-1}\n \\left( \\frac{\\Delta T_{\\rm stoch}}{10^{7.5} \\ {\\rm K}} \\right)^{-1}\n ,\n\\end{split}\n\\end{align}\nwhere $m_{\\rm cell}$ is the gas mass of the host cell (including the SN\nejecta) and \n\\begin{align}\n\\Delta \\epsilon=\\frac{k_{\\rm B} \\Delta T_{\\rm stoch}}{(\\gamma-1)m_{\\rm p} \\mu}\n\\end{align}\nis the required specific energy, with $k_{\\rm B}$ the Boltzmann constant,\n$m_{\\rm p}$ the proton mass, and $\\mu$ the mean particle mass in units of\n$m_{\\rm p}$\\footnote{We use $\\mu=0.6$, assuming the gas to be\n ionized.}. When a stellar particle is due to inject SN energy,\n$p_{\\rm SN}$ is calculated via \\Eq{pSN.eq}. If $p_{\\rm SN}\\ge 1$, the available\nenergy is sufficient to meet the cooling time constraint and it is\nsimply injected into the host cell. On the other hand, if $p_{\\rm SN} <1$, a\nrandom number $r$ between 0 and 1 is drawn: only if $r < p_{\\rm SN}$ is the\nSN energy injected, in which case the host cell is heated by the full\nspecific energy $\\Delta \\epsilon$, i.e. by more than the specific energy\navailable to the particle, such that the correct total energy is\ninjected on average over many SN events.\n\n\\subsection{Delayed cooling feedback}\nWe use the delayed cooling model described in \\cite{Teyssier2013}.\nHere, the SN energy is injected thermally into the host cell, but it is\nadditionally tracked in a passive scalar variable, which is advected\nwith the gas and decays on the timescale $t_{\\rm{delay}}$, the underlying\nidea being that this energy represents unresolved turbulence. Wherever\nthe local turbulent velocity corresponding to this tracked energy\nexceeds the threshold $\\sigma_{\\rm min}$, radiative\ncooling is disabled in that location, mimicking the non-thermal nature\nof turbulent energy. When the local turbulent velocity has fallen\nbelow $\\sigma_{\\rm min}$, via decay, diffusion, and mixing, radiative\ncooling is enabled again.\n\nThe main free parameter in the model is $t_{\\rm{delay}}$, which determines\nhow quickly the turbulent energy disappears. $\\sigma_{\\rm min}$ is\nalso an adjustable parameter, but it has more or less the same effect\nas $t_{\\rm{delay}}$, so we keep it fixed at $\\sigma_{\\rm min}=100 \\, \\kms$\n(corresponding to about $0.1\\%$ of the injected specific energy of an\nSN, or about $1\/30$th of the velocity in its unloaded remnant). The\nvalue of $t_{\\rm{delay}}$ can be motivated by an underlying physical\nmechanism, e.g. the crossing time over a few cell widths, after which\nthe resolved hydrodynamics should take over the unresolved advection\nof energy. The appendix of \\cite{Dubois2015} derives an expression\nfor the choice of an appropriate $t_{\\rm{delay}}$, given the local SN rate,\ndensity, and resolution (their eq. A8), for which our {\\sc g9}{}\nsimulation settings ($\\epsilon_{*}=0.02$, $\\eta_{\\rm SN}=0.2$, $n_{\\rm{H}}=10 \\ \\cci$,\n$\\Delta x=18$ pc) give $t_{\\rm{delay}} \\approx 1.3$ Myr. However, in this paper we\nfollow the literature \\citep{Teyssier2013, Roskar2014,\n Mollitor2015, Rieder2016}, and use a much larger fiducial\nvalue of $t_{\\rm{delay}}=10$ Myr for the {\\sc g9}{} galaxy, and $t_{\\rm{delay}}=20$ Myr in\nlow-resolution versions of {\\sc g9}{} and in the {\\sc g10}{} galaxy. Assuming\ndecay dominates over diffusion and mixing, and assuming the SN\nremnants travel at $\\sim 100$ ($1,000$) km\/s, our fiducial $t_{\\rm{delay}}=10$\nMyr corresponds to a delay length scale of $\\sim 1$ ($10$) kpc. 
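As a simple numerical illustration of how $t_{\\rm{delay}}$ and $\\sigma_{\\rm min}$ interact, the following sketch advances the tracked energy of a single cell, assuming pure exponential decay (neglecting diffusion and mixing) and relating the tracked specific energy to a turbulent velocity via $\\sigma_{\\rm turb}=\\sqrt{2\\epsilon_{\\rm turb}}$; the initial velocity and this relation are illustrative assumptions of the sketch.
\\begin{verbatim}
import numpy as np

# Delayed-cooling switch for one cell: the tracked 'turbulent' energy
# decays on t_delay, and cooling stays off while sigma_turb > sigma_min.
# Pure exponential decay and sigma = sqrt(2*eps) are assumptions of
# this sketch; diffusion and mixing are neglected.
t_delay = 10.0                # Myr, fiducial value for the g9 galaxy
sigma_min = 100.0             # km/s, threshold for re-enabling cooling
sigma0 = 1000.0               # km/s, just after injection (assumed)
eps_turb = 0.5 * sigma0**2    # specific turbulent energy, (km/s)^2

t, dt = 0.0, 0.01             # Myr
while np.sqrt(2.0 * eps_turb) > sigma_min:
    eps_turb *= np.exp(-dt / t_delay)
    t += dt
print(f'cooling re-enabled after {t:.1f} Myr')
# Analytically t = 2 t_delay ln(sigma0/sigma_min) ~ 46 Myr here,
# showing how the threshold and the decay time jointly set the
# period during which cooling is suppressed.
\\end{verbatim}
This near-degeneracy between lengthening $t_{\\rm{delay}}$ and lowering $\\sigma_{\\rm min}$ is why we vary only the former. 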
We\nexplore variations of $t_{\\rm{delay}}$ in \\Sec{fb_dc.sec} (including values\nclose to that derived by \\citealt{Dubois2015}).\n\nThe disadvantage of delayed cooling is that while overcooling is in\npart a numerical problem, radiative cooling is a real and physical\nprocess, without which stars would not form at all. By neglecting\nradiative cooling altogether, even if for a relatively short time,\ndelayed cooling is likely to result in over-efficient type II SN\nfeedback (but, at the same time, it perhaps compensates for the\nneglect of other feedback processes which may be important in galaxy\nevolution). In addition, delayed cooling can result in the gas\npopulating parts of the temperature-density diagram where the cooling\ntime is short, which may yield unrealistic predictions for absorption\nand emission diagnostics.\n\n\\subsection{Kinetic feedback}\nWe use the kinetic feedback model presented in\n\\citet{Dubois2008}. Here, the trick to overcoming numerical\novercooling is to skip the unresolved Sedov-Taylor phase and directly\ninject the expected collective result of that phase for a stellar\npopulation, which is an expanding momentum-conserving shock wave (or\nsnowplow). Note, however, that the injected kinetic energy may\nsubsequently be converted into thermal energy if shocks develop.\n\nSN mass and momentum are injected into the gas within a bubble radius of\nthe exploding stellar particle. The free parameters for the method are\n$f_{\\rm k}$, the fraction of $E_{\\rm SN}$ which is released in kinetic form,\n$r_{\\rm bubble}$, the radius of the bubble, and $\\eta_{\\rm W}$, the sub-resolution mass\nloading factor of the Sedov-Taylor phase, describing how much mass,\nrelative to the stellar mass, is redistributed from the cell at the\nbubble centre to the bubble outskirts.\n\nThe redistributed mass consists of two components: one is the SN\nejecta, $m_{\\rm ej}=\\eta_{\\rm SN} m_{*}$, removed from the stellar particle, the\nother is the swept up mass, $m_{\\rm sw}=\\eta_{\\rm W}m_{*}$, removed from the\ncentral host cell (no more than $25\\%$ of the central cell mass is\nremoved, hence for individual feedback events at relatively low\ndensities it may happen that $m_{\\rm sw}$ is smaller than\n$\\eta_{\\rm W}m_{*}$). The total wind mass is thus $m_{\\rm W}=m_{\\rm ej}+m_{\\rm sw}$, which is\nredistributed uniformly (i.e. uniform density) to all cells inside the\nbubble.\n\nThe kinetic energy, $f_{\\rm k} E_{\\rm SN}$, is likewise distributed to the bubble\ncells, but with an injected velocity (directed radially away from the\nstellar particle) that increases linearly with distance from the\ncentre, so as to approximate the ideal Sedov-Taylor solution:\n\\begin{align}\n {\\bf v}(\\Delta m_{\\rm cell}) = f_{\\rm N} v_{\\rm W} \n \\frac{\\bf{r}_{\\rm cell}}{r_{\\rm bubble}},\n\\end{align}\nwhere $\\Delta m_{\\rm cell}$ is the mass added to the cell,\n$r_{\\rm cell}$ is the position of the centre of the cell relative to\nthe stellar particle, $f_{\\rm N}\\sim 1$ is a bubble normalisation\nconstant\\footnote{The normalisation constant is the volume-weighted\n average distance from the centre, for each volume element in the\n bubble. 
In the ideal case of infinitely small cells, the factor is\n $1.29$.} required to ensure that the total redistributed energy is\nequal to $f_{\\rm k} E_{\\rm SN}$, and\n\\begin{align} \\label{kin_v.eq}\n v_{\\rm W} = \\sqrt{\\frac{2 f_{\\rm k} E_{\\rm SN}}{m_{\\rm W}}} \n = 3,162 \\ {\\rm km \/ s} \\ \\sqrt{\\frac{f_{\\rm k}}{1+\\eta_{\\rm W}\/\\eta_{\\rm SN}}}\n\\end{align}\nis the unnormalised wind velocity, where we used \\Eq{Esn.eq} for the\nlatter equality. Note that this is the velocity of the added mass,\ni.e. each cell gains momentum\n\\begin{align}\n \\Delta {\\bf p} = {\\bf v}(\\Delta m_{\\rm cell}) \\Delta m_{\\rm cell}\n \\propto \\sqrt{f_{\\rm k} \\eta_{\\rm SN} (\\eta_{\\rm SN} + \\eta_{\\rm W})},\n\\end{align}\nso if the mass already in the cell is substantial compared to the\nadded mass, the resulting velocity change can be small. The injection\nis performed in the mass-weighted frame of the SN particle (with\n$m_{\\rm ej}$) and host cell (with $m_{\\rm sw}$). The remaining thermal energy,\n$(1-f_{\\rm k}) E_{\\rm SN}$, is then distributed uniformly between the bubble\ncells.\n\nIn this work, we use fiducial parameters $f_{\\rm k}=1$, $\\eta_{\\rm W}=1$, and\n$r_{\\rm bubble}=150$ pc, a size comparable to galactic super-bubbles (note that\nit is also comparable to the initial scale height of the stellar and\ngas disc in our simulations, which is $150$ pc and $320$ pc for the\n{\\sc g9}{} and {\\sc g10}{} galaxies, respectively). These values give a velocity\nfor the gas ejected from the central cell (from Eq. \\ref{kin_v.eq}) of\n$v_{\\rm W} \\approx 1,300$ km\/s. Our choice of $f_{\\rm k}=1$ implies that\nthere have been neither radiative losses nor momentum cancellation\nfrom the set of unresolved individual SNe inside the bubble. We\nexplore the effects of a smaller bubble and higher mass loading in\n\\Sec{fb_kin.sec}.\n\n\\subsection{Mechanical feedback}\nThis model was introduced to the {\\sc Ramses}{} code by \\citet[][see also\n\\citealt{Kimm2015}]{Kimm2014}, and an analogue SPH scheme was earlier\ndescribed independently in \\cite{Hopkins2014}. Here, momentum is\ndeposited into the neighbour cells of a SN hosting cell, with the\nmagnitude adaptively depending on whether the adiabatic phase of the\nSN remnant is captured by this small bubble of cells and the mass\nwithin it, or whether the momentum-conserving (snowplow) phase is\nexpected to have already developed on this scale. In the first case,\nthe momentum is given by energy conservation, while in the latter\ncase, the final momentum, which depends via the cooling efficiency on\nthe density and metallicity, is given by theoretical works\n\\citep{Blondin1998, Thornton1998}.\n\nIn a single SN injection event, SN momentum (and any excess energy) is\ndistributed over the nearest neighbours (sharing at least two\nvertices) of the SN host cell. The number of such cells can vary,\ndepending on the cell refinement, but given the extreme limit where\nall the neighbours are at a finer level (i.e. half the cell width) of\nthe SN host cell, the maximum number of neighbours is $N_{\\rm nbor}=48$. When\na neighbouring cell is at the same level as the host, or one level\ncoarser, it is given an integer weight $\\wc$ corresponding to how many\nof the $N_{\\rm nbor}$ finer level cell units it contains ($4$ if sharing a\nplane with the host, $2$ if sharing a line). 
\subsection{Mechanical feedback}
This model was introduced to the {\sc Ramses}{} code by \citet[][see also
\citealt{Kimm2015}]{Kimm2014}, and an analogous SPH scheme was earlier
described independently in \cite{Hopkins2014}. Here, momentum is
deposited into the neighbour cells of a SN hosting cell, with the
magnitude adaptively depending on whether the adiabatic phase of the
SN remnant is captured by this small bubble of cells and the mass
within it, or whether the momentum-conserving (snowplow) phase is
expected to have already developed on this scale. In the first case,
the momentum is given by energy conservation, while in the latter
case, the final momentum, which depends via the cooling efficiency on
the density and metallicity, is given by theoretical works
\citep{Blondin1998, Thornton1998}.

In a single SN injection event, SN momentum (and any excess energy) is
distributed over the nearest neighbours (sharing at least two
vertices) of the SN host cell. The number of such cells can vary,
depending on the cell refinement, but in the extreme limit where
all the neighbours are at a finer level (i.e. half the cell width) than
the SN host cell, the maximum number of neighbours is $N_{\rm nbor}=48$. When
a neighbouring cell is at the same level as the host, or one level
coarser, it is given an integer weight $\wc$ corresponding to how many
of the $N_{\rm nbor}$ finer level cell units it contains ($4$ if sharing a
plane with the host, $2$ if sharing a line). The SN host cell has a
weight of $\wc=4$, so the total number of cell units receiving direct
SN energy injection is $N_{\rm inj}=N_{\rm nbor}+4$.

The goal is to inject into each neighbour cell a momentum $\Delta p$
corresponding to that generated during the energy conserving phase if
that is resolved, but to let $\Delta p$ converge towards that of the
momentum-conserving snowplow phase in the limit that the energy
conserving phase is unresolved. In each SN neighbour cell, this limit
(energy vs momentum conserving) depends on the local mass loading,
i.e. the ratio of the local wind mass to the SN ejecta given to that
cell,
\begin{align}
 \chi \equiv \frac{\Delta m_{\rm W}}{\Delta m_{\rm ej}},
\end{align}
with $\Delta m_{\rm ej}=\frac{\wc m_{\rm ej}}{N_{\rm inj}}$,
$\Delta m_{\rm W}=m_{\rm nbor}+\frac{\wc m_{\rm cen}}{N_{\rm inj}}+\Delta m_{\rm ej}$, $m_{\rm cen}$ the
mass initially contained in the SN host cell, and
$m_{\rm nbor}=\wc\rho_{\rm nbor}\left( \Delta x_{\rm cen}/2 \right)^3$ the initial neighbouring
gas mass, with $\rho_{\rm nbor}$ and $\Delta x_{\rm cen}$ the neighbour cell gas
density and the host cell width, respectively.

The momentum injected into each neighbour cell, radially from the
source, is
\begin{align}
 \Delta p = \frac{\wc}{N_{\rm inj}} \! \left\{ \! \! \!
 \begin{array}{lr}
 \sqrt{2 \, \chi \, m_{\rm ej} \, f_{\rm e} \, E_{\rm SN}} & {\rm if}\, 
 \chi < \chi_{\rm tr} \label{dp_mech.eq},\\
 3 \! \times \! 10^{5} \, \rm{M}_{\odot} \, \frac{\rm km}{\rm s} \, 
 E_{51}^{\frac{16}{17}} \, n_0^{-\frac{2}{17}} \ Z'^{-0.14} \! \! \! 
 & {\rm otherwise.}
 \end{array}
 \right.
\end{align} 
Here, the upper expression represents the resolved energy conserving
phase, and comes from assuming that the (final) cell gas mass of
$\Delta m_{\rm W}$ receives a kinetic energy $\frac{\wc E_{\rm SN}}{N_{\rm inj}}$ (we
ignore $f_{\rm e}$ for the moment). The lower expression represents the
asymptotic momentum reached in the snowplow phase, where $E_{51}$ is the
total SN energy (i.e. $E_{\rm SN}$) in units of $10^{51}$ erg, $n_0$ is the
local hydrogen number density in units of $1 \ \cci$, and
$Z'=\max \left(Z/\Zsun,0.01\right)$. The Solar metallicity form of the
expression was derived from analytic arguments, and confirmed with
numerical experiments, in \cite{Blondin1998}, and the $Z$
dependency was added in the numerical work of \cite{Thornton1998}.

The phase transition ratio, $\chi_{\rm tr}$, is found by equating the
snowplow expression in \Eq{dp_mech.eq} with
$\sqrt{2 \, \chi_{\rm tr} \, m_{\rm ej} \, f_{\rm tr} \, E_{\rm SN}}$, where $f_{\rm tr}=2/3$ is the
fraction of the SN energy assumed to be in kinetic form at the
transition. This gives
\begin{align}
\begin{split}
 \chi_{\rm tr} =& \frac{900}{m_{\rm SN}/\rm{M}_{\odot} \ f_{\rm tr}} \ E_{51}^{-2/17} \ n_0^{-4/17} 
 \ Z'^{-0.28} \\
 =& \, 97 \, \left( \frac{m_{\rm SN}}{10 \, \rm{M}_{\odot}}\right)^{-\frac{15}{17}}
 \left( \frac{\eta_{\rm SN}}{0.2}\right)^{-\frac{2}{17}}
 \left( \frac{m_{*}}{2 \! \times \! 10^3 \, \rm{M}_{\odot}}\right)
 ^{-\frac{2}{17}} \label{chi_tr.eq}
 \\
 & \qquad {} \left( \frac{n_{\rm{H}}}{10\, \cci}\right)^{-\frac{4}{17}}
 \left( \frac{Z}{0.1 \, \Zsun}\right)^{-0.28},
\end{split}
\end{align} 
where we used \Eq{Esn.eq}, $E_{51}=m_{\rm ej}/m_{\rm SN}$, and normalised to
typical values for the {\sc g9}{} galaxy in the latter equality. The
function
\begin{align}
 f_{\rm e}=1-(1-f_{\rm tr}) \frac{\chi-1}{\chi_{\rm tr}-1}
\end{align}
ensures a smooth transition between the two expressions in
\Eq{dp_mech.eq}.

If the momentum injection results in a net removal of energy from a
cell, due to cancellation of velocities, the surplus energy (initial
minus final) is added to the cell in thermal form. As it has no
preferred direction, the SN host cell receives only thermal energy. In
\cite{Kimm2014} and \cite{Kimm2015}, due to incorrect book-keeping
of the surplus energy, the thermal energy injection during the
adiabatic phase was overestimated (by a factor $\sim 2 - 4$) in
regions where the swept-up mass is large compared to the SN ejecta (by
a similar factor), but the correct momentum and energy were used during
the snowplow phase and the adiabatic phase with little mass loading
($\chi\sim 1$). This bug has since been corrected.

If we assume that all the cells receiving the SN energy have the same
refinement level (i.e. the same cell width) and density, and that the
density is at least as high as the threshold for star formation, then
the initial mass of the neighbour dominates $\Delta m_{\rm W}$ and we can
get a rough estimate for the local mass loading,
\begin{align}
\begin{split} \label{chi_mech_approx.eq}
 \chi \approx & \, \frac{m_{\rm nbor}}{\Delta m_{\rm ej}} \\
 =& \, \frac{N_{\rm inj}}{\wc} \frac{\rho \Delta x^3}{\eta_{\rm SN} m_{*}} \\
 =& \, 60.8 \ \ \left( \frac{\wc}{4} \right)^{-1}
 \left( \frac{n_{\rm{H}}}{10 \, \cci} \right)
 \left( \frac{\Delta x}{18 {\, \rm pc}} \right)^3 \\
 & \qquad {} \qquad {} \left( \frac{\eta_{\rm SN}}{0.2} \right)^{-1} 
 \left( \frac{m_{*}}{2 \! \times \! 10^3 \, \rm{M}_{\odot}} \right)^{-1} \\
 =& \, 0.63 \ \chi_{\rm tr} \ \ \left( \frac{\wc}{4} \right)^{-1}
 \left( \frac{n_{\rm{H}}}{10 \, \cci} \right)^{\frac{21}{17}}
 \left( \frac{\Delta x}{18 {\, \rm pc}} \right)^3 \\
 & \qquad {} \qquad {}\left( \frac{\eta_{\rm SN}}{0.2} \right)^{-\frac{15}{17}} 
 \left( \frac{m_{*}}{2 \! \times \! 10^3 \, \rm{M}_{\odot}} 
 \right)^{-\frac{15}{17}} \\
 & \qquad {} \qquad {} 
 \left( \frac{m_{\rm SN}}{10 \, \rm{M}_{\odot}}\right)^{\frac{15}{17}}
 \left( \frac{Z}{0.1 \, \Zsun}\right)^{0.28}.
\end{split}
\end{align}
Here, the last equality, which comes from comparing with
\Eq{chi_tr.eq} and normalising to the {\sc g9}{} simulation parameters,
shows that mechanical feedback events are marginally resolved, with
the momentum injection being done using the upper expression in
\Eq{dp_mech.eq} for $n_{\rm{H}} \la 1.6 \, n_{*}$, but switching to the final
snowplow momentum, i.e. the lower expression in \Eq{dp_mech.eq}, at
higher gas densities.
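To make the branching concrete, the sketch below (a schematic of our
own, not the actual {\sc Ramses}{} routine) evaluates \Eq{dp_mech.eq}
and \Eq{chi_tr.eq} for a single neighbour cell, with defaults set to
the {\sc g9}{} fiducial numbers quoted above ($\wc=4$, $N_{\rm inj}=52$,
$m_{\rm ej}=\eta_{\rm SN} m_{*}=400 \, \rm{M}_{\odot}$,
$m_{\rm SN}=10 \, \rm{M}_{\odot}$, $f_{\rm tr}=2/3$):
\begin{verbatim}
import numpy as np

M_SUN, KMS, E51_ERG = 1.989e33, 1e5, 1e51  # [g], [cm/s], [erg]

def delta_p(chi, m_ej=400.0, n0=10.0, Zp=0.1,
            w_c=4.0, N_inj=52.0, m_SN=10.0, f_tr=2.0/3.0):
    """Momentum injected into one neighbour cell (Eq. dp_mech.eq),
    in Msun km/s; chi is the local mass loading of that cell."""
    E51 = m_ej / m_SN                  # total SN energy in units of 1e51 erg
    Zp = max(Zp, 0.01)                 # Z' = max(Z/Zsun, 0.01)
    chi_tr = 900.0 / (m_SN * f_tr) * E51**(-2/17) * n0**(-4/17) * Zp**(-0.28)
    if chi < chi_tr:                   # resolved, energy-conserving phase
        f_e = 1.0 - (1.0 - f_tr) * (chi - 1.0) / (chi_tr - 1.0)
        p = np.sqrt(2.0 * chi * m_ej * M_SUN * f_e * E51 * E51_ERG) \
            / (M_SUN * KMS)
    else:                              # unresolved: terminal snowplow momentum
        p = 3e5 * E51**(16/17) * n0**(-2/17) * Zp**(-0.14)
    return w_c / N_inj * p
\end{verbatim}
For the default arguments this gives $\chi_{\rm tr} \approx 97$,
matching the normalisation of \Eq{chi_tr.eq}.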
For the {\\sc g10}{} galaxy, where the resolution is\nlower ($\\Delta x=36$ pc), the stellar mass higher\n($1.6 \\times 10^4 \\, \\rm{M}_{\\odot}$), and the metallicity higher ($\\Zsun$),\nthe SN blasts are slightly worse resolved, with\n$\\chi \\approx 1.53 \\, \\chi_{\\rm tr}$ (at $n_{*}$) for the same\nassumptions. Here the effects of lower resolution and higher\nmetallicity, towards worse-resolved SN blasts, are counter-weighted by\nthe higher stellar particle mass.\n\n\\section{SN feedback model comparison}\\label{comparison.sec}\n\nWe begin by comparing all SN feedback models using the fiducial\nsettings. Later in this paper we will study each feedback model in\nmore detail and show how the results vary with the values of the free\nparameters. We focus on star formation, outflows, and galaxy\nmorphologies. Unless stated otherwise, our analysis will be restricted\nto the lower-mass {\\sc g9}{} galaxy.\n\n\\subsection{Galaxy morphologies}\n\\begin{figure}\n \\centering\n \\subfloat\n {\\includegraphics[width=0.23\\textwidth]\n {figures\/maps_1_250_n01.pdf}}\n \\hspace{-0.6mm}\n \\subfloat\n {\\includegraphics[width=0.23\\textwidth]\n {figures\/maps_1_250_t01.pdf}} \\\\\n \\vspace{-3.7mm}\n \\subfloat\n {\\includegraphics[width=0.23\\textwidth]\n {figures\/maps_1_250_s08.pdf}}\n \\hspace{-0.6mm}\n \\subfloat\n {\\includegraphics[width=0.23\\textwidth]\n {figures\/maps_1_250_d02.pdf}} \\\\\n \\vspace{-3.7mm}\n \\subfloat\n {\\includegraphics[width=0.23\\textwidth]\n {figures\/maps_1_250_y04.pdf}}\n \\hspace{-0.6mm}\n \\subfloat\n {\\includegraphics[width=0.23\\textwidth]\n {figures\/maps_1_250_k01.pdf}}\n \\caption\n {\\label{maps_Fid.fig}Maps of gas column densities in the {\\sc g9}{}\n galaxy at $250$ Myr, for the different SN feedback models. Each\n panel shows face-on and edge-on views, with the model indicated\n in the bottom left corner. The top left panel includes the\n physical length scale and the colour scale for the gas column\n density.}\n\\end{figure}\n\nIn \\Fig{maps_Fid.fig}, we show the total hydrogen column density\nface-on and edge-on at the end of the $250$ Myr run for each feedback\nmodel. The maps illustrate how the gas morphology is affected by the\nSN feedback models. Without feedback (top left panel), the galaxy\nbecomes clumpy, containing dense star-forming clouds which accrete\ngas, thus creating large `holes'. The gas outside the thin edge-on\ndisc is diffuse and featureless.\n\nCompared to the no feedback case, thermal dump feedback (top right\npanel) significantly changes the gas morphology, reducing the gas\nclumpiness and thickening the disc. In fact, comparing to other panels\nin \\Fig{maps_Fid.fig}, it has here a very similar morphological effect\nas the stochastic and mechanical feedback models (middle left and\nbottom right panels, respectively). We will come back to this\nsimilarity in later sections.\n\nDelayed cooling (middle right panel) and kinetic feedback (bottom\nleft), on the other hand, produce quite different morphologies from\nother models in \\Fig{maps_Fid.fig}. Delayed cooling diffuses the gas\nmore, with less obvious spiral structure, and the disc becomes\nthicker, indicating increasing feedback efficiency. In stark contrast,\nkinetic feedback results in a very thin disc plane, and thin, well\ndefined spiral filaments. 
Judging qualitatively from these images of
column density, delayed cooling appears most efficient in terms of
smoothing out the gas, thickening the disc, and creating outflows,
while kinetic feedback visually appears weakest, with relatively dense
and filamentary gas in the disc and low column densities out of the
disc plane. However, as we will see in what follows, kinetic feedback
actually has the strongest and fastest (but relatively diffuse)
outflows. We note that these distinct features of kinetic feedback are
sensitive to the radius of momentum and mass injection, i.e. the $r_{\rm bubble}$
parameter. As we will argue in \Sec{SD.sec}, with our fiducial bubble
size of $150$ pc, the momentum injection is essentially
hydrodynamically decoupled from the galactic disc, and as we show in
\Sec{fb_kin.sec}, a considerably smaller bubble leads to kinetic
feedback behaving similarly to thermal dump, stochastic, and
mechanical feedback.

\begin{figure}
 \centering
 \includegraphics
 {figures/sfr.pdf}
 \caption
 {\label{SFR.fig}Star formation rates in the {\sc g9}{} galaxy for the SN
 feedback models indicated in the legend using their fiducial
 parameters. Thermal dump, stochastic, and mechanical feedback
 produce nearly identical SFRs, while kinetic feedback produces a
 steadily declining SFR, and delayed cooling is by far the most
 efficient at suppressing star formation.}
\end{figure}
\subsection{Star formation} \label{sf.sec} The feedback efficiencies
can be quantified and compared via the star formation rates (SFRs),
which we show for the {\sc g9}{} galaxy in \Fig{SFR.fig}. The SFRs are
calculated by binning the stellar mass formed over time intervals of
$1.2$ Myr. They vary by almost two orders of magnitude between the
feedback models, and by one order of magnitude at the end of the
simulation runtime, by which time the evolution has settled down
after the initial collapse of the disc (due to radiative cooling and
the lack of initial feedback) and the burst of star formation around
the $20$ Myr mark.

Focusing on the star formation around $250$ Myr, we find that the
feedback models separate roughly into the same three groups as in our
assessment of the morphologies. Thermal dump, stochastic, and
mechanical feedback all perform almost identically in terms of star
formation, indicating that thermal dump is not strongly affected by
overcooling (see \Sec{resolved.sec}). The SFR is suppressed by about a
factor of $3-4$ compared to the no feedback case (labelled NoFB in the
plot). This may seem an inefficient suppression, compared to the
inferred $1-2\%$ average efficiency of star formation observed in the
Universe, but it should be kept in mind that the star formation model
already has a built-in sub-resolution efficiency of only $\epsilon_{*}=2\%$
(see \Sec{sf_model.sec}). We comment further on the choice and effect
of $\epsilon_{*}$ in the discussion (\Sec{sfe.sec}).

\begin{figure}
 \centering
 \includegraphics
 {figures/sfr_mw.pdf}
 \caption
 {\label{SFR_MW.fig}Star formation rates, as in \Fig{SFR.fig}, but
 for the more massive and lower-resolution {\sc g10}{} galaxy (note that
 the y-axis is scaled up by a factor of ten).
 Here, we find
 larger differences between thermal dump, stochastic, and
 mechanical feedback, while kinetic feedback and delayed cooling
 remain qualitatively the same as in the less massive galaxy.}
\end{figure}

At $250$ Myr, kinetic feedback has an SFR fairly close to those of the
three aforementioned models. The difference is that the star formation
rate has not stabilised, but is declining steadily. As we will see,
this is due to the strong outflow removing gas from the star-forming
ISM.

Delayed cooling is by far the most effective at suppressing star
formation. The feedback from the initial peak in the SFR almost blows
apart the gas disc, but once it has settled again the SFR stabilises
around $0.1-0.2 \, \msunyr$, though it remains somewhat bursty. The
final SFR at $250$ Myr is almost an order of magnitude lower than for
the other feedback models.

\subsubsection{Star formation in the more massive galaxy}

In \Fig{SFR_MW.fig} we show the SFRs in the ten times more massive
(and lower resolution) {\sc g10}{} galaxy simulations.

Due to the combination of the deeper gravitational potential, stronger
(metal) cooling, lower resolution, and the SN events happening at
higher gas densities (typically by $0.5-1$ dex), we find more
differences between the feedback models in their ability to suppress
star formation than for the {\sc g9}{} galaxy. Thermal dump feedback is
weak, with the star formation stabilising at the same rate as for the
case of no feedback. With stochastic and mechanical feedback the star
formation is suppressed by about a factor of two compared to thermal
dump, with mechanical feedback being somewhat stronger. Kinetic
feedback shows the same qualitative behaviour as in the lower-mass
galaxy, with an initially high SFR that declines steadily due to gas
outflows. Again, delayed cooling gives SFRs that are much lower than
for the other models.

\subsubsection{The Kennicutt-Schmidt relation}

In the local Universe, SFR surface densities, $\Sigma_{\rm SFR}$, are observed
on large scales to follow the universal Kennicutt-Schmidt (KS)
relation, $\Sigma_{\rm SFR} \propto \Sigma_{\rm gas}^{1.4}$, where $\Sigma_{\rm gas}$ is the gas
surface density \citep{Kennicutt1998}. We plot in \Fig{KS_Fid.fig}
the relation between SFR and gas surface densities in our simulations
at $250$ Myr for the different feedback models, and compare it with
the empirical relation shown as a solid line (normalised for a
Chabrier IMF, see \citealt{DallaVecchia2012}). In this plot we
include results from both the low-mass {\sc g9}{} and high-mass {\sc g10}{}
galaxies, in order to show a wide range of surface densities, and to
demonstrate how the feedback efficiency changes for each model with
galaxy mass, metallicity, and physical resolution. Results for the
{\sc g9}{} galaxy are shown with smaller opaque symbols, while the {\sc g10}{}
galaxy is represented by larger and more transparent symbols. The gas
and SFR surface densities are averaged along annuli around the galaxy
centre, in equally spaced radial bins of width $\Delta r=500$ pc, and
we only include gas within a height of $2$ kpc from the disc plane
($4$ kpc in the case of the {\sc g10}{} disc).

\begin{figure}
 \centering
 \includegraphics
 {figures/ks_500pc_fid_py.pdf}
 \caption
 {\label{KS_Fid.fig}The Kennicutt-Schmidt relation for different
 feedback models at $250$ Myr.
 Small opaque symbols indicate the
 {\sc g9}{} galaxy, while larger and more transparent symbols are for
 the {\sc g10}{} galaxy. The values are averages within equally spaced
 radial bins of width $\Delta r=500$ pc. The grey solid line shows
 the empirical \protect\cite{Kennicutt1998} law (see text).}
\end{figure}

All feedback models, and even the case of no feedback, produce slopes
in the KS relation in rough accordance with observations at gas
surface densities substantially above the `knee' at
$\Sigma_{\rm gas} \approx 10 \, \msunpc$, though the slopes tend to be slightly
steeper than observed. The similarity to the observed slope is in
large part a result of the built-in star formation model,
$\dot \rho_{*} \propto \rho^{1.5}$. However, even though all
simulations have the same sub-resolution local star formation
efficiency of $\epsilon_{*} = 2\%$, the $\Sigma_{\rm SFR}$ normalisation varies by
about an order of magnitude, with delayed cooling being most efficient
at suppressing the star formation for any given gas surface density,
owing to the large scale height of the disc. At high gas surface
densities ($\Sigma_{\rm gas} \gg 10 \, \msunpc$), all methods predict too high
$\Sigma_{\rm SFR}$, except delayed cooling, which predicts values that are too
low.

For the lower-mass {\sc g9}{} galaxy (smaller solid symbols), thermal
dump, stochastic, mechanical feedback, and delayed cooling are all
similar in the KS plot, though delayed cooling does not produce as
high gas surface densities as the other models. Kinetic feedback has
significantly higher SFR surface densities for given gas surface
densities (but relatively low maximum gas surface densities), owing to
the very thin disc produced by the almost decoupled injection of
momentum.

For the more massive {\sc g10}{} galaxy, which was simulated with lower
resolution, the picture is quite different (large transparent
symbols). With thermal dump feedback, the SFR surface densities shift
significantly upwards and the relation is quite similar to the no
feedback case. Stochastic feedback, and to a lesser extent, mechanical
feedback, also shift upwards, away from the observed relation. For
kinetic feedback, however, the relation is almost unchanged in the
more massive galaxy (except at low gas surface densities, where it is
higher), but it consistently remains about a factor of two above the
observed relation. With delayed cooling, the gas surface densities
become much higher than in the lower mass galaxy, but the SFR surface
densities are significantly lower than observed.

For delayed cooling, we can calibrate the available free parameter to
improve the comparison to observations. Halving the delayed cooling
time-scale in the {\sc g10}{} galaxy, to the same value as used for the
{\sc g9}{} galaxy, results in a KS relation which is very close to the
observed one. For the other models, we cannot calibrate the feedback
parameters to close in on the observed relation, and other measures
are required, such as increasing the feedback energy per unit stellar
mass.
Another option is to reduce the star formation efficiency
parameter, $\epsilon_{*}$, in which case a fair match to observations can be
produced, but at the cost of making the feedback insignificant
compared to the no feedback case in terms of morphology, total SFR,
and outflows (the feedback essentially all becomes captured inside
$\epsilon_{*}$).

\subsection{Outflows} \label{outflows.sec} Galactic outflows are a
vital factor in delaying the conversion of gas into stars. Feedback
processes in the ISM are thought to eject large quantities of gas from
the galaxy, with some of the gas escaping the gravitational pull of the
galactic halo altogether. Most of the gas, however, is expected to be
ejected at velocities below the halo escape velocity and to be
recycled into the disc. Galactic outflows are routinely detected in
observations \citep[e.g.][]{Steidel2010, Heckman2015}, and while
the outflow speed of cold material can be fairly accurately
determined, other properties of the outflows are not well constrained,
including the mass outflow rate, the fraction of gas escaping the
halo, and the density and thermal state of the gas.

Outflows are often characterised in terms of the mass loading factor,
which is the ratio of the outflow rate to the star formation rate in
the galaxy. Its definition is somewhat ambiguous, as it depends on
the geometry and distance from the galaxy at which the outflows are
measured, which is hard to determine in observations. Observational
works have inferred outflow mass loading factors well exceeding unity
\citep[see e.g.][]{Bland-Hawthorn2007a, Schroetter2015}, and many
theoretical models require mass loading factors of $1-10$ in sub-L$_*$
galaxies to reproduce observable quantities in the Universe
\citep[e.g.][]{Puchwein2013,Vogelsberger2013,Barai2015,Mitra2015}.

It is therefore important to consider outflow properties when
evaluating SN feedback models. Models that produce weak or no
outflows, with mass loading factors well below unity, could be at odds
with current mainstream theories of galaxy evolution (although it is
not known whether SN feedback is directly responsible for outflows --
e.g. cosmic rays could play a major role; \citealt{Booth2013,
 Hanasz2013, Salem2014, Girichidis2016}).

In \Fig{OFtime.fig} we compare the time-evolution of outflows from the
{\sc g9}{} galaxy with the different SN feedback schemes\footnote{We show
 outflow plots for the {\sc g9}{} galaxy only, but we comment on outflows
 in the more massive {\sc g10}{} galaxy (which have similar properties) at
 the end of this subsection.}. We measure the gross gas outflow
(i.e. ignoring inflow) across planes parallel to the galaxy disc, at a
distance of $2$ kpc in the left panels and further out at $20$ kpc in
the right panels. The top row of panels shows the mass outflow rate
across those planes ($\dot M_{\rm out}$), the middle row shows the mass loading
factor ($\betaOut$), and the bottom row shows the mass-weighted
average of the outflow velocity perpendicular to the outflow plane
($\vzout$).

In terms of outflows $2$ kpc above the disc plane (left panels of
\Fig{OFtime.fig}), kinetic feedback is strongest, with
$\dot M_{\rm out}\approx 1 \, \msunyr$ and $\betaOut$ slightly above
unity. Delayed cooling produces a substantially lower outflow rate,
but since the SFR is also much lower, the mass loading factor is
higher.
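Schematically, the quantities plotted in \Fig{OFtime.fig} can be
extracted from the cell data as in the following sketch (a simplified
illustration with hypothetical array names, assuming the arrays hold
all cells intersecting the measurement plane, and CGS units):
\begin{verbatim}
import numpy as np

def outflow_diagnostics(rho, v_z, dx, sfr, above_disc=True):
    """Gross outflow rate [g/s], mass loading factor, and mass-weighted
    mean outflow velocity [cm/s] across one plane parallel to the disc.
    rho, v_z, dx: density, vertical velocity, and width of each cell
    crossing the plane; sfr: star formation rate of the galaxy [g/s]."""
    out = v_z > 0.0 if above_disc else v_z < 0.0  # gross flow: skip inflow
    flux = rho[out] * np.abs(v_z[out]) * dx[out]**2   # mass flux per cell
    mdot_out = flux.sum()
    beta_out = mdot_out / sfr                     # mass loading factor
    v_mean = np.average(np.abs(v_z[out]),
                        weights=rho[out] * dx[out]**3)
    return mdot_out, beta_out, v_mean
\end{verbatim}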
Returning to \Fig{OFtime.fig}, the other feedback models give much
lower outflow rates, and have mass loading factors
$\sim 10^{-2}-10^{-1}$. At a larger distance of $20$ kpc from the disc
(right panels of \Fig{OFtime.fig}), the situation is quite similar.
All models except for kinetic feedback have declining outflow rates
and mass loading factors, owing to the strong initial starburst, which
can be seen to result in an outflow rate peaking around $50$ Myr.

\begin{figure}
 \centering
 \includegraphics
 {figures/oftime.pdf}
 \caption
 {\label{OFtime.fig}Gross mass outflow rates ($\dot M_{\rm out}$), mass loading
 factors ($\betaOut$), and mass-weighted average outflow velocities
 ($\vzout$), across planes $2$ and $20$ kpc above the disc plane of
 the {\sc g9}{} galaxy (left and right columns, respectively), for the
 SN feedback models and their fiducial parameters. The colour
 coding and linestyles are the same as in \Fig{SFR.fig}. The thin
 horizontal lines in the bottom panels indicate the escape
 velocity.}
\end{figure}

In the bottom panels of \Fig{OFtime.fig} we compare the average
outflow velocities to the DM halo escape velocity\footnote{The escape
 velocity estimate ignores the contribution of baryons. Hence, it is
 an underestimate, which is likely non-negligible close to the disc,
 but insignificant at $20$ kpc.},
\begin{align}
 \vesc(h) \approx 1.16 \ \vcirc \
 \sqrt{\frac{\ln \left(1+c x \right)}{x}},
\end{align}
where $x=h/R_{\rm vir}$ \citep[e.g.][]{Mo2004}, which has been marked
with horizontal grey solid lines. Close to the disc, the average
velocity for kinetic feedback is marginally higher than escape, but
slowly declining due to the declining SFR. For the other feedback
models, the outflow velocity is well below escape. Ten times further
out, the mean outflow velocities are considerably higher for all
feedback models. This is to some degree due to the initial starburst,
which ejects high-velocity outflows early in the simulation runs, and
to some degree due to gas at lower velocities not having reached $20$
kpc and thus not contributing to the velocity average. In any case,
these (average) velocities are still below the escape velocity, again
with the exception of kinetic feedback. This implies that for all
models except kinetic feedback, most of the outflowing gas will not
escape to infinity, but will instead fall back on the galaxy, where it
will eventually produce stars. Kinetic feedback does give the gas a
high enough velocity that, in principle, it can escape the halo
entirely, though in practice this may be complicated by CGM and IGM
gas that stands in the way and needs to be swept out.

The main message of \Fig{OFtime.fig}, however, is not the escape
velocity, but the low mass loading factors for thermal dump,
stochastic, and mechanical feedback, far below the order-unity values
inferred from observations of local galaxies. As before, we see a
strong similarity between the results produced by thermal dump,
stochastic, and mechanical feedback.

\begin{figure}
 \includegraphics[width=0.49\textwidth]
 {figures/sqo_fid_one.pdf}
 \caption
 {\label{SQO_Fid.fig}Local outflow properties at $250$ Myr across
 planes $2$ kpc from the {\sc g9}{} disc as a function of gas surface
 density, sampled from a $10$ kpc wide square grid of $100$ by
 $100$ squares in the xy-plane of the galaxy disc. All curves are
 binned by gas surface density, with shaded regions showing
 standard deviations within each bin.
 {\bf Top panel:} gross
 outflow rates per unit area. {\bf Middle panel:} mass loading
 factors, i.e. average mass outflow fluxes divided by star
 formation surface densities. {\bf Bottom panel:} mass-weighted
 average gross outflow velocities (with the escape velocity shown
 by a horizontal solid line).}
\end{figure}

In \Fig{SQO_Fid.fig}, we study how the outflow properties $2$ kpc from
the disc scale with the local gas surface density. The panels show,
from top to bottom, the gross outflow rate per unit area ($\Sigma_{\rm out}$),
the mass loading factor $\betaOut \equiv \Sigma_{\rm out}/\Sigma_{\rm SFR}$, and the
mass-weighted gross outflow velocity. We split the face of the disc
into a $10$ kpc wide grid of $100$ pc squares (that is, $100^2$
squares), and extract the outflow properties in each square. In the
plots, the outflow properties are binned by the gas surface density,
and the shaded regions show the logarithmic standard deviation in
each bin.

For each model, the general trend is that higher gas surface
densities correspond to higher outflow rates and velocities, but lower
mass loading factors. The outflow velocities are noticeably higher
than those (at $250$ Myr) in \Fig{OFtime.fig}. The reason is that
\Fig{OFtime.fig} shows the mass-weighted average of all outflowing
gas, while \Fig{SQO_Fid.fig} is restricted to a $10$ kpc wide square
plane directly above and below the disc, hence capturing the more
collimated part of the outflows. Kinetic feedback clearly stands out
as having the highest outflow velocities, peaking close to $10^3$ km/s
(at the peak surface densities), with little scatter. With kinetic
feedback, almost all of the outflowing gas directly above or below the
disc is moving faster than the escape velocity (indicated with a
horizontal solid line). The other feedback models produce outflow
rates and velocities that are alike (within roughly a factor of two),
and much lower than for kinetic feedback, with the exception of
delayed cooling, which has a massive, slow outflow and the highest
mass loading factor.

\begin{figure*}
 \centering
 \subfloat
 {\includegraphics[width=0.3\textwidth]
 {figures/mapsof_2_250_n01.pdf}}
 \subfloat
 {\includegraphics[width=0.3\textwidth]
 {figures/mapsof_2_250_t01.pdf}}
 \subfloat
 {\includegraphics[width=0.3\textwidth]
 {figures/mapsof_2_250_s08.pdf}} \\
 \vspace{-4mm}
 \subfloat
 {\includegraphics[width=0.3\textwidth]
 {figures/mapsof_2_250_d02.pdf}}
 \subfloat
 {\includegraphics[width=0.3\textwidth]
 {figures/mapsof_2_250_y04.pdf}}
 \subfloat
 {\includegraphics[width=0.3\textwidth]
 {figures/mapsof_2_250_k01.pdf}}
 \caption
 {\label{mapsOF_fid.fig}Slices in the $xz$-plane (at $y=0$; the
 disc is seen edge-on), for the {\sc g9}{} galaxy at $250$ Myr. Each
 panel of two images shows the hydrogen number density (left) and
 the gas temperature (right) for a given model. The models are,
 clockwise from the top left (as indicated in the lower left corner
 of each density map): no feedback, thermal dump, stochastic,
 mechanical, kinetic, delayed cooling. In each map, dotted
 horizontal lines mark planes at which we measure the outflow
 properties shown in Figures \ref{OFtime.fig} and
 \ref{SQO_Fid.fig}, i.e. at $2$ and $20$ kpc from the plane of the
 disc. Thermal dump, stochastic, and mechanical feedback produce
 qualitatively similar multi-phase outflows.
 Delayed cooling
 produces outflows that are dense, cold, and slow, whereas those
 produced by kinetic feedback are diffuse, hot, and fast.}
\end{figure*}

In \Fig{mapsOF_fid.fig} we show images of slices in the $xz$-plane
(at $y=0$) at $250$ Myr, with each set of two panels showing the
hydrogen density (left) and temperature (right) for a given feedback
model. Delayed cooling and kinetic feedback clearly stand out here.
The former yields dense and cold ($T\la10^4$ K) outflows. The outflows
for the latter are diffuse, hot
($10^6 \ {\rm K} \, \la T \la 10^8 \ {\rm K}$), and the most extended
(which is expected since \Fig{SQO_Fid.fig} shows they are by far the
fastest). The remaining three feedback models produce qualitatively
similar multiphase ($10^4 \ {\rm K} \, \la T \la 10^6$ K)
outflows. The clear distinction in outflow properties between the most
effective feedback model, i.e. delayed cooling, and the other, less
effective models could be used in future work as an observational
probe of how accurately those models represent actual feedback in
galaxies.

\subsubsection{Outflows from the more massive galaxy}
For the more massive {\sc g10}{} galaxy, the outflow differences, which we
do not plot, are qualitatively similar to those in the above
analysis. Kinetic feedback gives the highest mass loading factor,
which is again of order unity both at $2$ and $20$ kpc. All the other
models give similar mass loading factors $2$ kpc above the disc as
for {\sc g9}{}, but in contrast to {\sc g9}{} the mass loading drops by $1-2$
orders of magnitude at $20$ kpc (the biggest drop occurring for
delayed cooling), due to the stronger gravitational pull. The outflow
velocities are slightly higher for all models, but they are still
below the ($\approx 500$ km/s) escape velocity; far below it for all
models except kinetic feedback, which falls only marginally short.

\subsection{Gas properties}
\begin{figure*}
 \centering
 \includegraphics[width=0.9\textwidth]
 {figures/ph_fid.pdf}
 \caption
 {\label{PH_Fid.fig}Phase diagrams at $250$ Myr for {\sc g9}{} runs with
 various feedback models and fiducial settings. The shaded grey
 region marks where the temperature is below the Jeans temperature
 which is added artificially for pressure support. Dashed
 horizontal and vertical red lines enclose gas that is
 star-forming. The solid vertical red line in each plot marks the
 (mass-weighted) mean density, and the red solid curve shows the
 mean temperature as a function of density. The diagrams are almost
 identical for thermal dump, stochastic, and mechanical
 feedback. For delayed cooling, we find a lot of dense gas at
 `forbidden' temperatures ($\sim 10^5$ K), where the cooling rate
 peaks.}
\end{figure*}
In \Fig{PH_Fid.fig} we compare gas properties for runs with the
different SN feedback models, using phase diagrams of gas temperature
versus density at $250$ Myr. The colour scheme represents the mass of
gas in each temperature-density bin. The mass-weighted mean density in
each diagram is represented by a solid vertical line, while the
mass-weighted mean temperature in each density bin is shown by solid
red curves. Star-forming gas is enclosed by the dashed box, while gas
with temperatures below the artificial Jeans temperature (which has
been subtracted from the `thermal' temperature plotted here) is
indicated by the shaded diagonal region in the bottom right corner of
each diagram.
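In outline, such diagrams are straightforward to construct from the
cell data, as in this schematic sketch (hypothetical array names; one
possible definition of the mean density is used):
\begin{verbatim}
import numpy as np

def phase_diagram(nH, T, m_cell, bins=200):
    """Mass-weighted temperature-density histogram, the mean T(n) curve,
    and the mass-weighted mean density; nH in cm^-3, T in K."""
    logn, logT = np.log10(nH), np.log10(T)
    H, xe, ye = np.histogram2d(logn, logT, bins=bins, weights=m_cell)
    # mass-weighted mean temperature in each density bin (solid red curve)
    m_sum, _ = np.histogram(logn, bins=xe, weights=m_cell)
    T_sum, _ = np.histogram(logn, bins=xe, weights=m_cell * T)
    T_mean = T_sum / np.maximum(m_sum, 1e-30)
    n_mean = np.average(nH, weights=m_cell)  # solid vertical line
    return H, xe, ye, T_mean, n_mean
\end{verbatim}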
Star-forming gas is enclosed by a dotted box, while gas\nwith temperatures below the artificial Jeans temperature (which has\nbeen subtracted from the `thermal' temperature plotted here) is\nindicated by the shaded diagonal region in the bottom right corner of\neach diagram.\n\nWe continue to see the same qualitative picture as before: delayed\ncooling and kinetic feedback stand out, while the remaining three\nmodels look similar. Delayed cooling yields by far the lowest mean\ndensity, $\\left< n_{\\rm{H}} \\right> \\approx 3 \\, \\cci$, which is well below\nthe star formation density threshold of $n_{*}=10 \\ \\cci$, and almost\ntwo orders of magnitude below the mean density without feedback. The\nother models all yield mean densities near $n_{*}$.\n\nDelayed cooling produces the highest mean temperatures at intermediate\ndensities of $n_{\\rm{H}} = 10^{-2}-10 \\ \\cci$ with a lot of gas at\ntemperatures $10^{4.5}-10^6$ K, which is probably unphysical given the\nshort radiative cooling times in this regime. Curiously, in more\ndiffuse gas, $n_{\\rm{H}} = 10^{-5}-10^{-3} \\, \\cci$ delayed cooling produces\nby far the lowest temperatures, $T\\sim 10^3-10^4$ K, while in the same\ndensity regime the gas is typically at $\\sim 10^5$ K with other\nfeedback models (even in the absence of feedback). From identical\nphase diagrams excluding the galaxy disc, we have confirmed that this\ndiffuse gas is primarily `CGM' gas outside the disc. In the case of\ndelayed cooling, these outflows, i.e. gas outside the disc, span a\nwide range of densities,\n$n_{\\rm{H}} \\sim 10^{-6} - 3 \\times 10^{-1} \\, \\cci$, in stark contrast to\nthe other feedback models, where the CGM gas reaches maximum\ndensities of a few times $10^{-3} \\, \\cci$. With kinetic feedback,\nthe CGM has a clear bi-modality not seen for other models, with some\nof the gas following an adiabat starting around\n$n_{\\rm{H}}\\sim 10^{-2.5} \\, \\cci$, $T\\sim 10^{7.5}$ K, and extending towards\nmuch lower densities, and the remainder at $T \\ga 10^4$ K and spanning\ndensities of $n_{\\rm{H}} \\sim 10^{-5} - 3 \\times 10^{-3} \\, \\cci$ (the latter\nis flowing 'diagonally' from the disc, i.e. at a steep angle from the\naxis of disc rotation).\n\nAll feedback models produce hot and relatively diffuse gas,\npopulating the region $T\\sim10^4-10^8$ K,\n$n_{\\rm{H}}\\sim 3 \\times 10^{-4} - 10 \\, \\cci$ in \\Fig{PH_Fid.fig}. One might\nexpect this to belong to the outflowing CGM. However, if we exclude\nthe disc (out to $2$ kpc in height and $10$ kpc in radius), all of\nthis hot diffuse gas disappears from the phase diagrams, indicating\nthat it in fact belongs to the ISM. In the case of thermal dump,\nstochastic, and mechanical feedback, the outflowing CGM is indeed warm\nto hot, but dilute, with $n_{\\rm{H}}\\sim 10^{-6}-10^{-4} \\, \\cci$, while\nkinetic feedback produces the aforementioned bi-modality in the\noutflowing CGM, and delayed cooling produces circum-galactic outflows\nthat are predominantly at temperatures between $10^4$ and $10^5$ K.\n\n\\subsection{Impact of feedback on the local\n environment} \\label{SD.sec} A major factor in any feedback model is\nhow efficiently it clears away those dense clouds where stars can\nform. When dense regions are quickly cleared by early SN explosions in\na stellar cluster, this can also boost the efficiency of subsequent SN\nexplosions which then take place at lower densities where cooling is\nless efficient and the momentum obtained in the SN remnant increases\n\\citep[Eq. 
SNe exploding in the diffuse ISM have been suggested to prevent the
formation of star-forming clouds \citep[e.g.][]{Iffrig2015}, to
maintain the hot volume-filling ISM, and to generate fast outflows
\citep[e.g.][]{Ceverino2009}.

\begin{figure}
 \centering
 \includegraphics
 {figures/sd_cum.pdf}
 \caption
 {\label{SD_cum.fig}Local densities at which stellar particles are
 formed over the run-time of the {\sc g9}{} galaxy (dashed curves) and
 at which subsequent SN events take place 5 Myr later (solid
 curves), colour coded by feedback model as indicated in the upper
 legend. For all of the feedback models studied, star formation
 takes place at densities close to the density threshold for star
 formation, $n_{*}=10 \ \cci$, which is significantly lower than the
 densities at which stars form without feedback. Except for
 kinetic feedback, most SN events happen at much lower densities,
 indicating that the local environment is significantly altered by
 feedback.}
\end{figure}

In \Fig{SD_cum.fig} we show the gas densities at which stellar
particles are born (dashed curves) and the densities at which the SN
events take place (solid curves) for each of the feedback models in
the {\sc g9}{} galaxy. Focusing first on the star formation densities, we
find that they are almost indistinguishable between the models, and
differ only for the case of no feedback, for which the stars typically
form at significantly higher densities. This shows, as we have already
seen from the morphological comparison in \Fig{maps_Fid.fig}, that all
the feedback models are efficient at preventing and/or destroying
star-forming clouds in this {\sc g9}{} galaxy, and almost all the stellar
particles are formed within one dex of the star formation density
threshold of $n_{*}=10 \ \cci$. For the no feedback case, the clouds can
collapse to higher densities, impeded only by the density-dependent
pressure floor.

For the densities at which the SN events take place, there are larger
differences between the feedback models. Thermal dump, stochastic,
and mechanical feedback are similar, with a non-negligible
$\approx 10$ per cent of the SN energy injected below $0.1 \ \cci$
($20-40$ per cent below $1 \ \cci$), and SN events consistently taking
place at lower densities than star formation.

Delayed cooling stands out in having more SNe than those
aforementioned models at densities $\ga 10^{-1} \ \cci$, but fewer
SNe at lower densities. This comes from the efficiency of delayed
cooling in diffusing and thickening the ISM disc, resulting in a
mass-weighted density distribution of the ISM (not shown) which peaks
at $n_{\rm{H}} \approx 1 \ \cci$, about a dex lower than for thermal dump,
stochastic, and mechanical feedback, but a volume-weighted
distribution (also not shown) which peaks at
$n_{\rm{H}} \approx 10^{-2} \ \cci$, about a dex \emph{higher} than for those
other models. Delayed cooling hence smooths out not just the density
peaks in the ISM, but also the density troughs, such that stars form
at lower average densities, but SNe explode at higher minimum
densities than for the other models.

Standing out much more distinctly, kinetic feedback SNe explode in gas
at almost exactly the same densities as those at which the stars
formed (and even higher), with almost no SNe exploding at lower
densities.
This
signature of kinetic feedback suggests a decoupling between the
injected momentum and the immediate environment surrounding the
exploding stellar particle. Instead of quickly dispersing the sites of
star formation, the gas is gradually transported away from those
sites, out of the ISM, without coupling to the immediately surrounding
gas. This explains the thin bubble-free gas disc (\Fig{maps_Fid.fig})
and the gradual decline of the SFR (\Fig{SFR.fig}), which is due to
slow gas depletion. These distinct features are, however, strongly
dependent on the size of the bubble, $r_{\rm bubble}$, into which the SN momentum
and mass is injected. The fiducial size $r_{\rm bubble}=150$ pc results in this
decoupling between the SNe and the surrounding gas. In
\Sec{fb_kin.sec} we experiment with a smaller bubble size ($r_{\rm bubble}=40$
pc) and find a very different behaviour for kinetic feedback, with
results resembling those for thermal dump, stochastic, and mechanical
SNe, i.e. much lower outflow rates, a flatter star formation rate with
time, and a thicker disc.

\subsection{Similarity of three models in the low-mass
 galaxy} \label{resolved.sec}

For the low-mass {\sc g9}{} galaxy (but not for the more massive {\sc g10}{}
galaxy), we have seen that the results for thermal dump, stochastic,
and mechanical feedback are near-identical in terms of morphologies,
star formation, and gas properties.

In \Eq{pSN.eq} we showed that the probability for stochastic feedback
events is above unity at the density threshold for star formation
($n_{*}=10 \, \cci$) in the {\sc g9}{} galaxy, and in
\Eq{chi_mech_approx.eq} we saw that, also at $n_{*}$, mechanical
feedback remains in the resolved adiabatic phase. In addition, we
found in \Sec{SD.sec} that SN events do typically take place at
densities close to $n_{*}$. Hence there appears to be no significant
numerical overcooling issue in the {\sc g9}{} runs, and it is then no
surprise to find similar results for thermal dump, stochastic, and
mechanical feedback: it can be argued that the adiabatic phase of
thermal dump feedback is resolved. Note that this may imply that
delayed cooling and kinetic feedback, for the fiducial parameters we
have chosen, converge to incorrect results for the effects of SN
feedback.

For the more massive {\sc g10}{} galaxy, this is not the case: the
aforementioned feedback models give quite different results
(\Fig{SFR_MW.fig}), and thermal dump does not do much to suppress star
formation compared to the no feedback case. Purely from Equations
\ref{pSN.eq} (stochastic probability) and \ref{chi_mech_approx.eq}
(mechanical feedback phase), one might expect the adiabatic phase to
be resolved here as well, since the changes in stellar mass and
resolution, compared to the {\sc g9}{} galaxy, cancel out. However, in
part due to stronger metal cooling, but more importantly due to the
larger gravitational potential of the {\sc g10}{} galaxy, stars form and
explode at significantly higher densities than in the {\sc g9}{}
galaxy. Hence the probability for stochastic feedback events becomes
lower than unity (on average $0.35$ in the stochastic feedback run),
most mechanical feedback events become pure snowplow momentum
injections, and thermal dump becomes a victim of numerical
overcooling.

\cite{Kim2014} derived a density limit at which the momentum
created by single thermal dump type II SN explosions is
resolution-converged with grid-hydrodynamics.
They found that
convergence is maintained with a cell width
$\Delta x \la 10 \, {\rm pc} \ n_0^{-0.46}$, where
$n_0=\frac{n_{\rm{H}}}{1 \, \cci}$. Taking $n_{*}=10 \, \cci$, we would need a
resolution of $\approx 3.5$ pc for converged thermal dump feedback.
That is a considerably higher resolution than ours ($18$ pc), so
naively we would expect overcooling to be significant for the {\sc g9}{}
thermal dump simulation. In light of the above finding that thermal
dump feedback appears more or less resolved in the {\sc g9}{} galaxy, the
lack of resolution implied by the \cite{Kim2014} criterion appears to
be mitigated by the fact that each stellar particle in our {\sc g9}{}
simulations releases the equivalent of 40 type II SN explosions
instantaneously, instead of one, as assumed in \cite{Kim2014}.

\subsection{SN model comparison summary}

\begin{itemize}
\item For the low-mass {\sc g9}{} galaxy, the results for thermal dump,
 stochastic, and mechanical feedback are near-identical in terms of
 morphologies, star formation, and gas properties. This is an
 indication that the adiabatic phase of SN explosions is
 resolved. For the more massive {\sc g10}{} galaxy, thermal dump is
 significantly weaker than stochastic and mechanical feedback in
 suppressing star formation (though not so much in generating
 outflows).
\item Delayed cooling is by far the most efficient at suppressing star
 formation and yields results closest to the observed
 Kennicutt-Schmidt relation (at least for our assumed star formation
 efficiency).
\item Thanks to a large fiducial `bubble radius' of $150$ pc for
 momentum and mass injection, kinetic feedback has the highest
 outflow rates and a mass loading factor of order unity. Delayed
 cooling follows, with weaker outflow rates but a slightly higher
 (but declining) mass loading factor. The other feedback models
 produce much lower outflow rates and mass loading factors than those
 two more efficient models. In the more massive ($\sim$ MW) {\sc g10}{}
 galaxy, the mass loading factor is similar to that of {\sc g9}{} close
 to the galaxy plane, but drops by $1-2$ orders of magnitude ten
 times further out at $20$ kpc, for all models except kinetic
 feedback, which maintains a mass loading factor of order unity.
\item The feedback models with the lowest outflow rates and mass
 loading factors produce hot and dilute outflows, while delayed
 cooling yields distinctively cold and dense outflows. For kinetic
 feedback the outflows have a clear bi-modal phase structure, with
 relatively cold and dense outflows close to the disc, and hot and
 diffuse outflows further out following a temperature-density
 relation suggesting adiabatic cooling.
\item Given the large fiducial bubble radius, which effectively
 decouples feedback from the ISM, kinetic feedback produces by far
 the fastest outflow, some of it above the escape velocity. All other
 models produce outflow velocities about an order of magnitude
 lower, well below the escape velocity.
\end{itemize}

\section{Resolution convergence} \label{res_conv.sec} Resolution
convergence is an important factor in assessing SN feedback
models. Ideally, the effects of feedback should remain constant if the
resolution is increased, or at least if it is varied within reasonable
limits, i.e. within a factor of a few\footnote{Convergence with a
 dramatic change in resolution, on the other hand, is usually not a
 desired goal.
 With much lower resolution, a sub-grid model becomes
 meaningless, as the structures with which the model is supposed to
 interact become completely unresolved and a `lower level' of
 sub-grid physics must take over from the current one. With much
 higher resolution, some of the real physics becomes resolved, and
 the sub-grid model becomes irrelevant (though ideally it should then
 converge towards a `first-principles' methodology).}. In practice,
such constancy, while desirable, is hard to obtain without making
significant sacrifices, such as disabling physical processes like
hydrodynamical interactions or radiative cooling in the ISM. A
second-best choice is a small and predictable change with varying
resolution, so the feedback parameters can be easily calibrated for
different setups in order to achieve ``weak convergence''
\citep{Schaye2015}.

In this section we aim to understand how and to what extent measurable
galaxy properties change with resolution for the different feedback
models. For this purpose, we use a lower-resolution version of the
{\sc g9}{} galaxy, which we call {\sc g9lr}{}. The setup is identical to
{\sc g9}{}, except that the minimum cell width is two times larger,
i.e. $36$ pc, and the particle mass (in the initial conditions as well
as for new stellar particles) is eight times higher (i.e.
$1.6 \times 10^4 \ \rm{M}_{\odot}$ for stellar particles,
$\approx 8 \times 10^5 \ \rm{M}_{\odot}$ for DM particles). For simplicity, and
because the Jeans length is already resolved by $7$ cell widths in the
higher-resolution runs (and hence by $3.5$ cell widths in the
lower-resolution ones), we leave the pressure floor and star formation
threshold unchanged.

In the left column of panels in \Fig{SFR_res.fig}, we plot, for each
feedback model, the ratios of SFRs (upper panels) and mass loading
factors $2$ kpc from the disc (lower panels) for {\sc g9lr}{} over {\sc g9}{}
runs.

For delayed cooling, the resolution has a significant effect on the
relative SFR, although it should be kept in mind that the SFR for
delayed cooling is quite small in the first place (and hence the
absolute change is low compared to the SFRs of other models). For the
other models, the SFRs change insignificantly with resolution, though
there is a systematic tendency towards slightly lower SFRs with lower
resolution.

The lower resolution has a larger effect on the outflow rates, shown
in the bottom left panel of \Fig{SFR_res.fig}. Decreasing the
resolution systematically increases the outflow rates for mechanical
feedback (by up to a factor of four) and for thermal dump, stochastic
feedback, and delayed cooling (by roughly a factor of two), while for
kinetic feedback the outflow rate is almost unchanged. The outflow
rate increase is likely connected to the winds being launched on
larger scales, due to the larger cell width and mass (note that the
specific energy, i.e. the ratio between the SN energy and the
receiving gas mass, is similar to that in the higher resolution run,
since the particle mass is $8$ times larger). The obvious exception is
kinetic feedback, where the energy is distributed within a bubble
radius which we have kept constant, and indeed the outflow rate
remains unchanged.

\begin{figure}
 \centering
 \includegraphics
 {figures/sfr_of_ratios_res_x2.pdf}
 \caption
 {\label{SFR_res.fig}Resolution convergence tests.
 The upper (lower)
 panels show, for the different feedback models, ratios of SFRs
 (mass loading factors at 2 kpc from the disc) of {\sc g9}{} runs at
 low resolution and at the fiducial resolution. The left (right)
 panels show ratios where the stellar particle mass is increased
 by a factor of eight (kept fixed) in the low-resolution runs. For
 all plots, we have averaged the SFRs over intervals of $20$
 Myr. For low resolution with more massive stellar particles, the
 SFRs are well converged except for delayed cooling, though with a
 trend of marginally lower SFRs. The mass loading factors do change
 (except for kinetic feedback), generally showing an increase with
 lower resolution. With a fixed stellar particle mass (i.e. lowered
 SN specific energy), thermal dump feedback becomes weaker with
 lower resolution, while other feedback models maintain similar
 SFRs but higher mass loading factors.}
\end{figure}

We also consider the effect of `SN' resolution, where we lower the
grid resolution (and that of the initial conditions particles), just
as in the {\sc g9lr}{} runs, but keep the mass of newly formed stellar
particles fixed compared to the {\sc g9}{} runs. These runs, which we call
{\sc g9lr\_m$_*$}{}, give another measure of resolution convergence for the
feedback models, as the specific energy per feedback event (i.e. SN
energy over the local gas mass) is reduced by a factor of eight
compared to {\sc g9}{} runs, while it was kept constant in
\Fig{SFR_res.fig}. We compare those to the {\sc g9}{} runs in the right
column of panels in \Fig{SFR_res.fig}, showing the ratios of SFRs and
mass loading factors for the feedback models. For the SFR, the
largest difference occurs for thermal dump feedback, which becomes
much less efficient at suppressing star formation rates due to the
lower particle masses. The adiabatic phase becomes severely
unresolved, and there is no mechanism built into the model to
compensate. Other feedback models maintain similar average star
formation with the lower specific energies. The mass loading factors
increase somewhat if we decrease the spatial resolution and the
specific SN energies (bottom right panel of \Fig{SFR_res.fig}), but at
$250$ Myr the increase is smaller than for fixed specific energies
(bottom left panel). Delayed cooling is an exception, showing a
decrease in mass loading at some time intervals, but an increase in
others, which is likely just caused by the relatively large variations
in the SFRs.

With thermal dump feedback, resolution non-convergence is a
well-known problem. With lower resolution, a larger gas mass is heated
in a single feedback event, resulting in lower initial temperatures
given a fixed SN energy. This would normally lead to higher SFRs, but
in the left panels of \Fig{SFR_res.fig} the effect is counter-balanced
by the use of more massive stellar particles, slightly increasing the
feedback efficiency due to the higher SN energy per feedback event.

For the case of stochastic feedback, the fairly good convergence of
the SFR with both spatial resolution and stellar particle mass is not
a big surprise, since the stochasticity is built in to ensure that the
heated gas receives a fixed specific energy, regardless of cell size,
density, and particle mass.
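For concreteness, a minimal sketch of such a stochastic injection step
is given below; the notation is ours, $p_{\rm SN}$ is the probability
described in \Sec{fb_stoch.sec}, and a mean molecular weight of
$\mu \approx 0.6$ for the heated phase is our assumption:
\begin{verbatim}
import numpy as np

K_B, M_H = 1.381e-16, 1.673e-24   # Boltzmann constant [erg/K], H mass [g]

def stochastic_injection(E_avail, m_cell, dT=10**7.5, mu=0.6,
                         rng=np.random):
    """Decide whether to heat the host cell by dT, with probability
    p_SN = (available SN energy) / (energy needed to heat the cell)."""
    dE_heat = 1.5 * (m_cell / (mu * M_H)) * K_B * dT
    p_SN = min(E_avail / dE_heat, 1.0)
    return rng.random() < p_SN, p_SN

# g9 fiducial numbers: 4e52 erg (40 SNe) into a ~1.9e3 Msun cell at n_*;
# p_SN exceeds unity before capping, consistent with Eq. (pSN.eq):
heat, p = stochastic_injection(4e52, 1.9e3 * 1.989e33)
\end{verbatim}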
The outflow rates are relatively poorly
converged for stochastic feedback and low stellar particle mass, which
likely comes from the aforementioned tendency for the outflow rate to
increase with lower resolution, and hence larger launching scales,
even if the specific SN energy is constant. Mechanical feedback was
shown by \cite{Kimm2014} to converge well with resolution in terms
of the final momentum reached, and indeed the whole point of the model
is to maintain the same momentum injection regardless of whether the
momentum buildup is captured numerically or not. While we confirm that
the convergence is good for the SFR, the outflow rates are not very
well converged, in terms of either spatial resolution or stellar
particle mass. For delayed cooling, resolution convergence is not an
obvious property, but if cooling is turned off long enough, the energy
(i.e. cooling) losses should become insignificant and hence
independent of the resolution. The fact that the SFR (and to a lesser
extent the mass loading factor) is poorly converged for delayed
cooling hints that cooling losses are still significant, but we note
that the absolute change is actually lower than for the other feedback
models. As long as kinetic feedback is sufficiently decoupled from
the ISM surrounding the SN explosion due to the use of a large SN
bubble radius, it is not surprising to see good resolution
convergence, since the main effect of the feedback is then simply to
slowly deplete the disc of gas mass.

In summary, except for thermal dump feedback, the feedback models
converge fairly well in terms of SFRs. However, none except for
kinetic feedback (with a bubble radius larger than the disc height)
converge well in terms of the outflow mass loading factor, with the
mass loading factor decreasing with higher resolution.

\section{Varying the temperature jump for stochastic feedback} \label{fb_stoch.sec}

We now examine the effect of varying the stochastic heating
parameter, $\Delta T_{\rm stoch}$. We do not go lower than the fiducial value of
$\Delta T_{\rm stoch}=10^{7.5}$ K, because already here the feedback is not very
stochastic: in the {\sc g9}{} galaxy the average probability for SN
candidates (which, as a reminder, is the ratio of the available SN
energy of the stellar particle to the energy required to heat the host
gas cell by $\Delta T_{\rm stoch}$) is $p_{\rm SN}\approx 50\%$ ($\approx 35\%$ in
{\sc g10}{}). Lower $\Delta T_{\rm stoch}$ would lead to order unity probabilities for
SN explosions, converging towards thermal dump feedback.

We thus consider three values for $\Delta T_{\rm stoch}$ in addition to the
fiducial one, each time increasing the injected specific energy by
half a dex, i.e. $\Delta T_{\rm stoch}=10^{7.5}, 10^{8}, 10^{8.5}$, and $10^{9}$
K.

\begin{figure}
 \centering
 \subfloat
 {\includegraphics[width=0.23\textwidth]
 {figures/fb_stoch/maps_1_250_sdt80.pdf}}
 \subfloat
 {\includegraphics[width=0.23\textwidth]
 {figures/fb_stoch/maps_1_250_sdt90.pdf}}
 \caption
 {\label{maps_stoch.fig}Maps of total hydrogen column density for the
 {\sc g9}{} galaxy at $250$ Myr, for variations in the stochastic
 heating, with $\Delta T_{\rm stoch}=10^8$ K on the left and $\Delta T_{\rm stoch}=10^9$ K
 on the right. Each panel shows face-on and edge-on views.}
\end{figure}
\Fig{maps_stoch.fig} shows maps of the hydrogen column density for
$\Delta T_{\rm stoch}=10^8$ and $10^9$ K.
For the case of $\\Delta T_{\\rm stoch}=10^8$ K, there\nis not a significant difference from the fiducial run (the middle left\npanel in \\Fig{maps_Fid.fig}), though the gas disc becomes slightly\nthicker and clumpier. For $\\Delta T_{\\rm stoch}=10^9$ K, the difference is much\nclearer. Here the face-on disc is more diffuse overall, but at the\nsame time it contains more dense clumps and filaments, especially\nclose to the centre and at the disc outskirts. This is due to the\nincreased stochasticity of SN explosions: the low probability for SN\nevents, on average $p_{\\rm SN}\\approx 3 \\%$, allows dense star-forming\nclumps to live longer before they are hit by the first SN\nexplosion\\footnote{We note that \\cite{DallaVecchia2012} advise\n against such high values of $\\Delta T_{\\rm stoch}$ that probabilities for\n feedback events become $\\ll 1$, which is clearly the case\n here.}. This effect is amplified at the outskirts, where there is\nrelatively little star formation and thus a low rate of SN explosions\nper unit volume. Looking at the disc edge-on, there is much more\nstructure in the CGM for our maximum value of $\\Delta T_{\\rm stoch}$.\n\nIn \\Fig{OF_stoch.fig} we zoom out and look at the large-scale outflows\nin the `maximum' case of $\\Delta T_{\\rm stoch}=10^9$ K. The outflows are\ndramatically different from the fiducial setting for stochastic\nfeedback shown in the top right panel of \\Fig{mapsOF_fid.fig}: they\nare much denser, clumpier, (mostly) hotter, and more extended. Indeed,\nlooking at the dashed lines in \\Fig{SFR_stoch.fig}, we see that the\noutflow rate at $20$ kpc increases almost linearly with $\\Delta T_{\\rm stoch}$,\nand is about $3$ times higher than the SFR (solid) at the end of the\nrun for $\\Delta T_{\\rm stoch}=10^9$ K. The average outflow velocity (not shown)\nincreases by a factor $2-3$ at $2$ kpc for the highest $\\Delta T_{\\rm stoch}$\nconsidered (compared to the lowest), but is unaffected at $20$ kpc.\n\n\\begin{figure}\n \\centering\n {\\includegraphics[width=0.45\\textwidth]\n {figures\/fb_stoch\/mapsof_2_250_sdt90.pdf}}\n \\caption\n {\\label{OF_stoch.fig}Edge-on views of the {\\sc g9}{} galaxy at $250$\n Myr, for the run with the strongest stochastic heating\n ($\\Delta T_{\\rm stoch}=10^9$ K). The two maps show slices of the hydrogen\n number density (left) and temperature (right). Dotted horizontal\n lines mark planes $2$ and $20$ kpc from the disc.}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics\n {figures\/fb_stoch\/sfr_stoch.pdf}\n \\caption\n {\\label{SFR_stoch.fig}SFRs (solid curves) and gross outflow rates\n $20$ kpc from the disc (dashed curves) for the {\\sc g9}{} galaxy, with\n variations in $\\Delta T_{\\rm stoch}$ for the stochastic SN feedback model.\n Increasing $\\Delta T_{\\rm stoch}$ within reasonable limits has little effect\n on the SFR, but does increase the outflow rate (and hence the mass\n loading factor) significantly.}\n\\end{figure}\n\nVarying $\\Delta T_{\\rm stoch}$ has a much weaker effect on the SFR than on the\noutflow, as shown in \\Fig{SFR_stoch.fig}. The `first' two increases in\n$\\Delta T_{\\rm stoch}$ have almost no effect on the SFR, while the highest value\nproduces an initially higher SFR which then declines gradually, much\nlike in the case of kinetic feedback, and ends up significantly lower\nthan for lower $\\Delta T_{\\rm stoch}$ values. As with kinetic feedback, the\ndecline in SFR is likely due to the strong outflow depleting the\ngalaxy of fuel for star formation. 
We recall that the SN energy is not directly redistributed to lower\ngas densities with stochastic feedback (see Eq. \ref{pSN2.eq}). On the\ncontrary, we find that increasing $\Delta T_{\rm stoch}$ indirectly results in the\nSN energy being deposited at higher gas densities due to the increased\nclumpiness of the gas (not shown; $\Delta T_{\rm stoch} = 10^9$ K results in\nenergy being deposited at $\approx 0.5-1$ dex higher densities,\ncompared to the fiducial case). Hence the stochasticity increases\nfeedback efficiency purely by increasing the injected energy of a\ngiven SN event \emph{at a given density} (while the total SN energy\nover the galaxy is unchanged), and thereby also the local cooling time.\n\nIn summary, increasing the stochasticity of feedback by increasing\n$\Delta T_{\rm stoch}$ strongly increases the outflows. The SFRs are insensitive\nto these stochasticity variations except for the highest value of\n$\Delta T_{\rm stoch}$ considered. With very large stochasticity, the disc also\nbecomes increasingly clumpy.\n\n\section{Varying the time-scale in delayed cooling\n feedback} \label{fb_dc.sec}\n\nOf the feedback models we have compared\nin \Sec{comparison.sec}, with their fiducial setup parameters, delayed\ncooling feedback suppresses star formation most strongly and yields\nthe highest mass loading factors for the outflows (see Figures\n\ref{SFR.fig}, \ref{SFR_MW.fig}, \ref{KS_Fid.fig}, and\n\ref{OFtime.fig}).\n\nSince delayed cooling feedback in its fiducial setup is strong, we\nexamine here what happens if we reduce the value of its free\nparameter, which is the cooling delay time-scale, $t_{\rm{delay}}$. The\nfiducial value is $t_{\rm{delay}}=10$ Myr, so here we compare with two runs\nwith significantly lower delay time-scales of $2$ and $0.5$ Myr. We\nfind that the results are highly sensitive to variations in\n$t_{\rm{delay}}$. In \Fig{SFR_dcool.fig} we show the {\sc g9}{} SFRs and gross\noutflow rates across planes $20$ kpc from the disc, for those\nvariations. As expected, a shorter delay time-scale strongly decreases\nthe feedback efficiency, producing higher SFRs and lower outflow\nrates. However, even for the shortest delay time-scale considered here,\nthe SN feedback is still more efficient in terms of suppressing star\nformation than any of the other feedback models (and second only to\nkinetic feedback in terms of outflow rates). Note that the mass\nloading factor is particularly sensitive to the value of the free\nparameter. At $250$ Myr it decreases from $\betaOut \approx 2$ for\n$t_{\rm{delay}} =10$ Myr to $\betaOut \sim 10^{-1}$ for $t_{\rm{delay}} =0.5$ Myr\n(both at $2$ and $20$ kpc from the disc). The outflow velocities,\nwhich we do not show here, are unaffected at $20$ kpc, and are fairly\ninsensitive to $t_{\rm{delay}}$ at $2$ kpc (a few tens of percent increase in\noutflow velocity from the smallest to the largest $t_{\rm{delay}}$).\n\n\begin{figure}\n \centering\n \includegraphics\n {figures\/fb_dcool\/sfr_dcool.pdf}\n \caption\n {\label{SFR_dcool.fig}Star formation and gross outflow rates for the\n {\sc g9}{} galaxy for variations in the delay time-scale, $t_{\rm{delay}}$, for\n delayed cooling feedback. The solid lines show SFRs, while dashed\n lines show outflow rates $20$ kpc from the disc.
Decreasing\n $t_{\rm{delay}}$ from the fiducial value of $10$ Myr makes the SN feedback\n weaker. However, even at the lowest dissipation time-scale shown,\n the SFR at $250$ Myr is still low and the outflow rates are still\n high compared to the other feedback models (for their fiducial\n parameters).}\n\end{figure}\n\nWe analysed the KS relation for the same variations in $t_{\rm{delay}}$ (not\nshown). Decreasing $t_{\rm{delay}}$ produces a KS relation which looks\nincreasingly like that of mechanical feedback (for the {\sc g9}{} galaxy)\nin \Fig{KS_Fid.fig}, with a similar slope and maximum gas surface\ndensity of $7 \times 10^{2} \ \msunpc$ for $t_{\rm{delay}}=0.5$\nMyr. Morphologically, the galaxy with the shortest dissipation\ntime-scale also looks quite similar (though a bit more diffuse) to the\nruns with mechanical, stochastic, and thermal dump feedback in\n\Fig{maps_Fid.fig}.\n\nWhile the outflow rates decrease for shorter delay time-scales, the\nmorphological and phase structure of the outflows remain qualitatively\nsimilar.\n\n\section{Variations in kinetic SN feedback} \label{fb_kin.sec}\n\nIn \Fig{SFR_kin.fig}, we show variations in the SFRs and gross mass\noutflow rates at $20$ kpc for the {\sc g9}{} galaxy, for variations in the\nfree kinetic feedback parameters, which are the bubble radius\n(fiducially $150$ pc) and the sub-resolution mass loading parameter\n$\eta_{\rm W}$ (fiducially set to $1$).\n\nDecreasing the bubble radius, $r_{\rm bubble}$, to about two times the minimum\ncell width (of $18$ pc) has a significant effect on the SFR and\ndramatically suppresses the gas outflow rate, both of which (as well as the\noutflow speed) become similar to those produced by thermal dump,\nstochastic, and mechanical feedback. Morphologically (not shown), the\nruns with the small bubble radius also resemble the runs with these\nother feedback models.\n\nThe smaller bubbles are less efficient at driving large-scale\noutflows, because a larger fraction of the energy is now deposited\ninto dense ISM gas. For the same reason, the SFR at early times\ndecreases for smaller bubbles. At late times the SFR is higher\nbecause there remains substantially more gas in the galaxy as a\nconsequence of the weaker large-scale winds.\n\nIncreasing the sub-resolution mass loading, $\eta_{\rm W}$, by an order of\nmagnitude has little effect on star formation (\Fig{SFR_kin.fig}),\noutflows (rates and speeds), and morphology (not shown). This is\nbecause most SN events occur quite close to the density threshold for\nstar formation (see \Fig{SD_cum.fig}), meaning that usually there is\nnot much more mass available in the host cell than roughly matches the\nstellar particle mass. Hence, an initial mass loading of more than\n$\approx 1$ is not possible, since it would mean removing more mass\nfrom the host cell than is available for re-distribution to the SN\nbubble. To investigate the effect of lower stellar particle masses, or\nstar formation happening well above the density threshold, we\nperformed runs where we used a three times lower stellar particle\nmass, $m_{*}=600 \ \rm{M}_{\odot}$. Here increasing $\eta_{\rm W}$ \emph{reduces} the\noutflow rates significantly (as it does to a smaller extent in\n\Fig{SFR_kin.fig} for the smaller bubble radius). Moreover, the\nfeedback becomes more coupled to the disc, which becomes much\nthicker. This can be understood from the fact that with a higher mass\nloading, i.e. 
a larger ejected mass, the velocity of the ejected gas\nmust decrease to conserve momentum, and hence it is less likely to\nescape from the galaxy.\n\n\\begin{figure}\n \\centering\n \\includegraphics\n {figures\/fb_kin\/sfr_kin.pdf}\n \\caption\n {\\label{SFR_kin.fig}Star formation and outflow rates for variations\n in the bubble radius ($r_{\\rm bubble}$) and sub-resolution mass loading\n ($\\eta_{\\rm W}$) for kinetic feedback in the {\\sc g9}{} galaxy. The solid\n lines show SFRs, while dashed lines show outflow rates $20$ kpc\n from the disc. Reducing the bubble radius (thicker curves) from\n the fiducial value of $150$ pc has a significant effect on the SFR\n and dramatically reduces the outflow rates. Changes in $\\eta_{\\rm W}$\n have little effect, but this is because the order unity ratio of\n the stellar particle mass to the local cell mass allows little\n room for increased sub-resolution mass loading.}\n\\end{figure}\n\nWe also looked at the KS relation for these variations in kinetic\nfeedback (not shown). Here, again, varying $\\eta_{\\rm W}$ has a negligible\neffect, while the smaller bubble radius gives a relation very similar\nto e.g. mechanical feedback.\n\n\\section{Discussion} \\label{Discussion.sec} \n\nThe effects of the SN feedback models we have studied are sensitive\nto the resolution and\/or mass of the simulated galaxies. For our\nlower-mass {\\sc g9}{} galaxy, thermal dump, stochastic, and mechanical\nfeedback give very similar results in terms of SFRs, the\nKennicutt-Schmidt relation, and outflow mass loading factors, and we\nhave argued that this is an indication that the adiabatic phase of\nthermal SN feedback is `resolved' in this galaxy, with the caveat that\nwe inject the equivalent of $40$ individual SN explosions\ninstantaneously. For our (ten times) more massive {\\sc g10}{} galaxy,\nhowever, this no longer applies, and thermal dump feedback is\nsignificantly weaker than any of the other models.\n\nIf we compare our results to observations of SFRs in the local\nuniverse, delayed cooling comes closest to reality. Observed SFRs for\nlocal star-forming galaxies of similar stellar masses to {\\sc g9}{} are\ntypically in the range $\\approx 0.05-0.4 \\ \\msunyr$ \\citep[Figure 11\nin][]{Chang2015}. In \\Fig{SFR.fig}, we see that delayed cooling is\nthe only feedback model giving SFRs matching those observations,\nwhile the other models give values well above the upper limits. For\nstellar masses similar to our {\\sc g10}{} galaxy ($10^{10} \\, \\rm{M}_{\\odot}$),\n\\cite{Chang2015} give SFRs in the one-sigma range of\n$\\approx 0.3-1.5 \\, \\msunyr$, which actually falls slightly below the\ndelayed cooling SFR in \\Fig{SFR_MW.fig}. While such a comparison to\nobservations gives some indication of what is required to produce a\nrealistic suppression of star formation, we stress that comparing the\nresults from our isolated and somewhat idealised setup to observed\nstar formation rates has many caveats. Perhaps most significantly, the\ngas fractions in our galaxies are high compared to those of local\ngalaxies \\citep[see e.g. the compilations in][]{Bahe2016,\n Sales2016}.\n\nSuch caveats matter less if we consider the Kennicutt-Schmidt\nrelation. Here, all models except delayed cooling produce a slope\nthat is too steep compared to observations and\/or SFR surface\ndensities that are significantly too high for a given gas surface\ndensity. 
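For orientation, the slopes referred to here come from power-law fits of the form $\Sigma_{\rm SFR} \propto \Sigma_{\rm gas}^{N}$. A minimal sketch of such a fit over pre-computed surface density maps is given below; the patch-based arrays and function names are our illustrative assumptions, not the analysis code itself.
\begin{verbatim}
import numpy as np

def ks_fit(sigma_gas, sigma_sfr):
    """Power-law fit Sigma_SFR = A * Sigma_gas^N in log space.

    sigma_gas : per-patch gas surface densities [Msun/pc^2]
    sigma_sfr : per-patch SFR surface densities [Msun/yr/kpc^2]
    Returns (N, log10 A); N ~ 1.4 is the canonical observed slope.
    """
    ok = (sigma_gas > 0) & (sigma_sfr > 0)   # star-forming patches only
    logx = np.log10(sigma_gas[ok])
    logy = np.log10(sigma_sfr[ok])
    slope, intercept = np.polyfit(logx, logy, 1)
    return slope, intercept
\end{verbatim}
In this language, `too steep' means a best-fitting $N$ well above the observed value over the surface densities our discs probe.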
Delayed cooling does relatively well in terms of slope but\nfor our fiducial parameter choice it undershoots the SFR surface\ndensity. However, unlike for the other models, this can be calibrated\ntowards a reasonable fit to the observed KS relation by tuning the\ndelayed cooling time-scale.\n\nIn addition, delayed cooling and `decoupled' kinetic feedback are the\nonly models able to produce mass loading factors exceeding unity,\nwhile for other models with their fiducial parameters the mass\nloading factors are at best an order of magnitude below unity.\n\nWhile one can argue that these factors make delayed cooling\nempirically the most successful model for these observables, the model produces\nunrealistic thermal signatures in the ISM and CGM, where large amounts\nof gas occupy a region in temperature-density space where the cooling\ntime is very short. Moreover, the convergence between thermal dump,\nstochastic, and mechanical feedback suggests the adiabatic phase is\nresolved and hence that the results from delayed cooling and kinetic\nfeedback may be unphysical. \n\nMechanical feedback could be argued to be most physically motivated,\nbeing based on analytic calculations and high (sub-pc) resolution\nsimulations of the final momentum attained in various environments in\nterms of gas density and metallicity. Yet it falls well short of the\nsuppression of star formation implied by observations and produces weak\noutflows in our simulations. Perhaps this discrepancy can be traced to the\nidealised assumptions made when deriving the mechanical momentum\n(Eq. \ref{dp_mech.eq}). While the momentum is realistic and converged\nfor a homogeneous medium, such a medium is not a good description of a\nmulti-phase and porous feedback-regulated star-forming ISM, and this may\nlead to an under-prediction of the generated momentum\n\citep[e.g.][]{Kimm2015}. We also neglect the preprocessing of the\nlocal environment by stellar radiation, which lowers the surrounding\ngas densities and has been shown to increase the momentum injection\nfrom stellar populations, typically by a factor $\sim 2$\n\citep[e.g.][]{Geen2015}.\n\nIt was recently reported by \cite{Gentry2016} that idealised\nexperiments of SN feedback in the literature have under-predicted the\nfinal momentum by up to an order of magnitude, due to i) the neglect\nof multiple successive SN explosions, ii) a lack of resolution, and\niii) a preference for Eulerian hydrodynamical solvers, which are\nargued to suffer from over-mixing and hence over-cooling. If these\nresults are confirmed, we may have much more momentum to play with.\n\nAnother culprit may be the simplified setup of our simulations. For\ncontrol and to reduce the computational cost, we used isolated galaxy\nsimulations, assuming an initial state of the galaxy and its dark\nmatter halo rather than letting it evolve naturally in a cosmological\ncontext. Ignoring environmental factors such as mergers and gas\naccretion may change the behaviour of SN feedback.\n\nFinally, there are other feedback processes at play in galaxies that\nwe have neglected, such as AGN (thought to be important at masses\nlarger than $\approx$ the MW mass; e.g. \citealt{Crain2015}), radiation\npressure \citep[e.g.][]{Rosdahl2015}, and cosmic rays\n\citep[e.g.][]{Booth2013, Hanasz2013, Salem2014, Girichidis2016,\n Simpson2016}. The efficiency and interplay of each of those\nprocesses are not well constrained, but they likely provide an\nadditional suppression of the SFR on top of SN feedback.
If they turn\nout to play an important role in galaxy evolution, some empirically\ncalibrated sub-resolution models for SN feedback may be re-interpreted\nto also represent other feedback processes at play in galaxies.\n\n\\subsection{Dependence on factors other than SN feedback} \\label{sfe.sec} \n\nGalaxy evolution involves a complex interplay of many physical\nprocesses, and the SN feedback efficiencies that we have reported may\nbe sensitive to factors other than the SN feedback models.\n\nOne of the most important choices in our simulations aside from the\nimplementation of feedback is the local star formation efficiency,\nwhich we have set to $\\epsilon_{*}=2\\%$. While this is a normal choice in\nsimulations of galaxy evolution, \\cite{Agertz2015} argued that such\na low value artificially suppresses the effects of feedback, and that\nhigher local SF efficiencies of $\\epsilon_{*} \\approx 0.1$ are needed to\nallow feedback to self-regulate star formation. We have experimented\nwith $\\epsilon_{*}=0.1$ in the {\\sc g9lr}{} galaxy (while keeping the other\nparameters fixed). While a full analysis is beyond the scope of the\npresent work, it is worth mentioning the qualitative effects. For all\nSN feedback models considered, it results in a high early SFR peak,\nfollowed by a dip and then convergence towards similar but somewhat\nlower SFRs compared to those obtained for the fiducial\n$\\epsilon_{*}=0.02$. However, the SFR surface density moves in the wrong\ndirection, i.e. away from the observed KS relation, both in terms of\nslope and normalisation.\n\nWe also investigated the effect of reducing $\\epsilon_{*}$, to find what\ncalibration is required to reproduce the observed KS relation. A value\nof $\\epsilon_{*}=0.002$, i.e. ten times lower than our fiducial value,\nproduces a good match to the observed relation, but at the cost of\nmaking the self-regulating effect of SN feedback negligible. In other\nwords, the $\\epsilon_{*}$ parameter takes over as the dominant feedback\nmechanism when set to a very low value.\n\nWhile our non-comprehensive probes of the effect of a varying\nnumerical star formation efficiency $\\epsilon_{*}$ are discouraging, we have\nthus far not studied variations in $\\epsilon_{*}$ in combination with other\nfactors. For example, in combination with a high local star formation\nefficiency, early feedback may play a significant role by suppressing\nrunaway star formation for the $5$ Myr from the birth of the first\nstars until the onset of SN explosions in a given star-forming region\n\\citep[e.g.][]{Hopkins2012b, Stinson2013, Agertz2015}. Also\nthe locality of star formation may be important for the efficiency of\nfeedback, i.e. it may matter whether the distribution of star\nformation, and hence SN explosions, is scattered smoothly in space and\ntime, or happens in short localised bursts\n\\citep[e.g.][]{Federrath2013, Hopkins2013b, Walch2015a}.\n\nWe will investigate the role of $\\epsilon_{*}$ in more detail in future\nwork, where we combine higher local SF efficiencies with more\nstringent SF criteria and early feedback in the form of stellar\nradiation.\n\nThere are many more variations which may or may not affect the\nfeedback efficiency, and again we have made limited explorations,\nwhich we summarise below.\n\\begin{itemize}\n\\item We varied the density threshold for star formation, $n_{*}$, by a\n factor of ten in either direction, for stochastic feedback. 
Higher\n $n_{*}$ does give lower initial SFRs, but after $250$ Myr the values\n are the same to within a few percent for two orders of magnitude\n variations in $n_{*}$. Outflow rates remain nearly unchanged.\n\item In the more massive galaxy we ran identical simulations with\n $0.1$ Solar metallicity, instead of the default Solar\n metallicity. The reduced metallicity has a marginal effect on the\n outflows and on the SFRs in the case of delayed cooling and kinetic\n feedback. However, for thermal dump, stochastic, and mechanical\n feedback, it reduces the SFRs by a few tens of percent and increases\n the outflow mass loading factor by about a factor of two.\n\item Switching from non-equilibrium hydrogen and helium\n thermochemistry to an equilibrium assumption in the {\sc g9}{} galaxy\n significantly increases outflow rates for thermal dump (factor\n $\sim 5$), stochastic feedback (factor $\sim 4$), and mechanical\n feedback (factor $\sim 5$), while the SFRs are only marginally\n reduced (and there is almost no effect on either SFRs or outflows\n with kinetic feedback or delayed cooling). In a forthcoming paper,\n we will explore the physical effects of equilibrium versus\n non-equilibrium thermochemistry in the context of SN feedback.\n\item With thermal dump and stochastic feedback we scaled the Jeans\n pressure floor by a factor of three in each direction (i.e. three\n times higher and lower pressure at a given gas density). The disc\n morphology becomes slightly but noticeably smoother with a higher\n Jeans pressure, but the effect on star formation and outflows is\n negligible. A very large increase of the pressure floor suppresses\n the effect of SN feedback with any model, since it takes over the\n role of SN feedback in suppressing the collapse of gas to\n star-forming densities.\n\item With mechanical feedback, we explored the effect of runaway OB\n stars in the {\sc g9}{} galaxy, giving a kick to each newborn stellar\n particle with a random direction and random speed of $0-50 \, \kms$.\n This has a negligible effect on the SFRs, but the outflow rates are\n enhanced by almost a factor of ten, due to SN explosions taking\n place at significantly lower densities on average.\n\item Also with mechanical feedback, we simulated individual $10^{51}$\n erg SN explosions, stochastically sampled for each stellar particle\n over the $5-50$ Myr lifetimes assumed for massive stars in the\n (Chabrier) IMF \citep[see details in][]{Kimm2015}. The effect is\n similar to the above test with runaway OB stars: the SFRs are\n marginally higher, while the outflow rates are increased\n significantly, though somewhat less than for the runaway OB\n stars. The reason for the increased outflow rates is that for a\n given particle, early SN explosions can clear away the dense\n environment, leading to late SN explosions taking place at reduced\n densities. We also combined runaway OB stars and individual SN\n explosions in the same run (again for the {\sc g9}{} galaxy). This\n produces SFRs marginally higher than individual SNe only and\n outflows marginally higher than runaway OB stars only, i.e. 
it does\n not give an extra boost to the outflows.\n\item Changing the time delay for stochastic SN feedback (from zero to $20$\n Myr) has similar effects to runaway OB stars: the SFRs are only\n marginally affected, while an increased delay increases the outflow\n mass loading factor (it is halved for zero delay and doubled for a\n $20$ Myr delay).\n\end{itemize}\n\n\subsection{Comparison with stochastic heating in\n GADGET}\label{DS-comp.sec}\n\nThe stochastic feedback model included in this work is based on the\nscheme introduced in {DS12}{} and used in the {\sc Gadget}{} code. Similarly\nto this work, {DS12}{} explore the effects of their stochastic feedback\nmodel using rotating isolated galaxy discs of two masses, the main\ndifference being that their lower mass disc is roughly ten times less\nmassive than our {\sc g9}{} disc (while their higher mass disc is\ncomparable in mass to our {\sc g10}{}). They compare different values for\nstochastic heating and find $\Delta T_{\rm stoch}=10^{7.5}$ K to be efficient in\nsuppressing star formation (in the massive disc) and generating strong\noutflows (in both discs). In {DS12}{} this value, which we also choose as\nour fiducial value for stochastic heating, gives outflow mass loading\nfactors (at $20 \%$ of the virial radii) of $\approx 40$ and\n$\approx 2$ for the low- and high-mass galaxies, respectively. This is\n$\approx 40-200$ times larger than our mass loading factors, pointing\nto a significant difference between our simulations and {DS12}{} in the\nefficiency of stochastic feedback in suppressing star formation and\ngenerating outflows.\n\nIn \App{ds.app} we discuss the differences between our simulations and\nthose of {DS12}{} and attempt to more closely reproduce their setup. We\nconclude that there is a major difference between the two versions\n(AMR and SPH) of stochastic feedback, with the SPH version being much\nmore efficient at generating outflows, for a given SFR. Pinpointing\nthe reason(s) for this difference, however, and whether it is due to\nsubtle differences in setup or resolution, or to more fundamental\ndifferences between AMR and SPH, remains a challenge for follow-up\nwork.\n\n\section{Conclusions} \label{Conclusions.sec} \n\nWe used simulations of isolated galaxy discs with the {\sc Ramses}{} code\nto assess sub-resolution models for SN feedback in AMR simulations,\nin particular their efficiency in suppressing star formation and\ngenerating outflows. We focused our analysis on a dwarf galaxy, ten\ntimes less massive than the MW, using a spatial resolution of $18$ pc\nand a stellar (DM) mass resolution of $2 \times 10^3 \ \rm{M}_{\odot}$\n($10^5 \ \rm{M}_{\odot}$), but also included a more limited analysis of a MW\nmass galaxy (using a resolution of $36$ pc, $1.6 \times 10^4 \ \rm{M}_{\odot}$\nstellar particles and $10^6 \ \rm{M}_{\odot}$ DM particles).\n\nWe studied five SN feedback models: i) thermal dump of SN energy into\nthe host cell of the star particle, ii) stochastic thermal feedback,\nwhere the SN energy is re-distributed into fewer but more energetic\nexplosions, iii) kinetic feedback, where momentum is deposited\ndirectly into a bubble around the star particle, iv) delayed cooling,\nwhere cooling is suppressed temporarily in the expanding SN remnant,\nand v) mechanical feedback, which injects energy or momentum depending\non the resolution, as sketched below.
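To fix ideas, the decision at the heart of model (v) can be summarised as follows. This is a schematic reconstruction with a commonly quoted terminal-momentum fit and indicative exponents, not the exact implementation of \cite{Kimm2014}.
\begin{verbatim}
def mechanical_injection(e_sn, m_swept, n_h, z_rel):
    """Energy- or momentum-conserving injection (schematic).

    e_sn    : SN energy [erg]
    m_swept : gas mass sharing the injection [g]
    n_h     : local hydrogen number density [cm^-3]
    z_rel   : metallicity relative to solar
    """
    M_SUN, KMS = 1.989e33, 1.0e5
    # Approximate terminal snowplow momentum, of order 3e5 Msun km/s
    # per 1e51 erg, with weak density and metallicity dependence
    # (indicative placeholder fit).
    p_term = (3.0e5 * M_SUN * KMS * (e_sn / 1e51)**(16.0 / 17.0)
              * n_h**(-2.0 / 17.0) * max(z_rel, 0.01)**(-0.14))
    # Momentum if all the SN energy were kinetic in the swept-up gas:
    p_free = (2.0 * e_sn * m_swept)**0.5
    if p_free < p_term:
        # Momentum buildup still resolved: inject the energy directly.
        return ("energy", e_sn)
    # Adiabatic phase unresolved: inject the terminal momentum.
    return ("momentum", p_term)
\end{verbatim}
Whichever branch the local cell mass selects, the momentum finally delivered to the ISM is designed to approach the same value, which is the resolution-independence property discussed earlier.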
Three of those models can be calibrated with\nadjustable parameters, which are the minimum local heating temperature\nfor stochastic feedback, the bubble size and local mass loading for\nkinetic feedback, and the cooling suppression time for delayed\ncooling. The mechanical feedback model has no free parameters (once\nthe SN energy has been decided) and the injected momentum is based on\nanalytic derivations and high-resolution simulations of cooling losses\nin expanding SN blasts. We compared the results produced using these\nmodels with their fiducial settings, and for those models with\nadjustable parameters we studied the effects of parameter\nvariations. Our main results are as follows.\n\nFor our low-mass, high-resolution galaxy, thermal dump, stochastic,\nand mechanical feedback produce nearly identical results (Figs.\n\ref{SFR.fig} and \ref{OFtime.fig}). We showed that at our current\nresolution and star formation densities, stochastic feedback is\nactually not that stochastic, and mechanical feedback is still mostly\nin the adiabatic phase. Hence those feedback models are converged in\nthat setup, and thermal dump feedback adequately resolves the energy\ninjection (by multiple SNe in a single event). For our more massive\ngalaxy, stochastic and mechanical feedback become significantly\nstronger than thermal dump feedback, but are still weak compared to\ndelayed cooling and kinetic feedback (\Fig{SFR_MW.fig}).\n\nStrong outflows are not easily generated in our AMR simulations. Mass\nloading factors of unity or above require extreme measures, such as\nturning off cooling for a prolonged time, or kinetic feedback that is\nin effect hydrodynamically decoupled due to the bubble radius\nexceeding the disc height (\Fig{OFtime.fig}). The outflows produced by\ndelayed cooling and kinetic feedback are very distinct, the former\nbeing cold, dense, and slow, while the latter are hot, diffuse, fast,\nand featureless (Figs. \ref{SQO_Fid.fig}, \ref{mapsOF_fid.fig}, and\n\ref{PH_Fid.fig}). The other models produce slow and remarkably\nsimilar outflows at intermediate densities and temperatures.\n\nSave for thermal dump feedback, all models do well in terms of\nresolution convergence when considering SFRs, while, with the\nexception of kinetic feedback, they produce significantly higher\noutflow rates at lower resolution (\Fig{SFR_res.fig}).\n\nAlthough a direct comparison is difficult, stochastic feedback appears\nto produce much weaker outflows than in the similar disc runs with the\noriginal SPH version of the model of {DS12}{}. This discrepancy is\nperhaps a result of subtle setup differences between our discs and\nthose of {DS12}{}, but we cannot rule out a more fundamental AMR versus\nSPH difference. Stochastic feedback does become efficient at\ngenerating massive outflows in our AMR discs if we use very high\nvalues for the stochastic heating temperature (up to $10^9$ K), but\nthis comes at the cost of strong stochasticity due to low SN\nprobabilities (Figs. \ref{SFR_stoch.fig} and \ref{maps_stoch.fig}).\n\nThe major handle on the generation of outflows appears to be how well\nthe SN feedback model circumvents gas cooling, directly or\nindirectly. Delayed cooling is the only model which succeeds at\ngenerating outflows with mass loading factors exceeding unity\n(\Fig{OFtime.fig}), at reproducing the observed main sequence SFRs\n(Figs. \ref{SFR.fig} and \ref{SFR_MW.fig}), and at reproducing the Kennicutt-Schmidt\nrelation (with appropriate calibration; \Fig{KS_Fid.fig}).
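Concretely, the mechanism behind this success is simple to state: radiative cooling is withheld from recently SN-heated gas for a chosen time. A simplified stand-in for the implementation is sketched below; the scheme actually tracks a dissipating non-thermal energy, and the threshold and names here are our placeholders.
\begin{verbatim}
import numpy as np

def update_delay_energy(e_delay, e_injected, dt, t_delay):
    """Advance a cell's non-thermal 'delay' energy by one step [erg].

    e_delay decays exponentially on the time-scale t_delay (the free
    parameter of the model) and is topped up by SN injections.
    """
    return e_delay * np.exp(-dt / t_delay) + e_injected

def cooling_allowed(e_delay, e_thermal, threshold=1e-3):
    """Suppress radiative cooling while the delay energy is still a
    non-negligible fraction of the thermal energy (placeholder
    threshold)."""
    return e_delay < threshold * e_thermal
\end{verbatim}
A larger $t_{\rm{delay}}$ keeps SN-heated gas adiabatic for longer, which is why the mass loading factors in \Sec{fb_dc.sec} respond so strongly to this single parameter.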
The other\nmodels fail to produce SN feedback strong enough to reproduce these\nobservations. This is discouraging, as delayed cooling retains too\nmuch energy for too long, energy which in reality is partly lost to radiative\ncooling, while the other feedback models are arguably more physically\nmotivated. Moreover, for the low-mass galaxy we argued that thermal\ndump, stochastic, and mechanical feedback converge because we resolve\nthe adiabatic phase of the feedback events. This implies that in this\ncase delayed cooling and kinetic feedback yield incorrect answers. In\nparticular, delayed cooling results in gas occupying regions of\ntemperature-density space where the cooling time is very short, which\ncompromises predictions for observational diagnostics.\n\nPossible reasons for the disconnect between observations and our\nresults are: i) a lack of additional feedback physics, such as\nradiation feedback or cosmic rays, ii) an incomplete setup, i.e. an\ninsufficiently realistic description of galaxies, iii) other aspects\nof the subgrid physics, such as star formation, being unrealistic\n\citep[e.g.][]{Agertz2015, Semenov2016}, or iv) overcooling on\ngalactic scales still being an issue at our resolution, even if different\nfeedback models converge to the same results, such that a significantly\nhigher resolution is required.\n\nThe current analysis will serve as a foundation for future studies of\nfeedback in galaxies, where we will use a sub-set of these models to\nstudy the interplay of SN feedback with different sub-grid methods for\nstar formation and with feedback in the form of stellar radiation.\n\n\section*{Acknowledgements}\nWe thank J\'er\'emy Blaizot, L\'eo Michel Dansac, Julien Devriendt,\nSylvia Ploeckinger, and Maxime Trebitsch for useful discussions, and\nthe anonymous referee for constructive comments. This work was funded\nby the European Research Council under the European Union's Seventh\nFramework Programme (FP7\/2007-2013) \/ ERC Grant agreement\n278594-GasAroundGalaxies. TK was supported by the ERC Advanced Grant\n320596 ``The Emergence of Structure during the Epoch of\nReionization''. The simulations were in part performed using the DiRAC\nData Centric system at Durham University, operated by the Institute\nfor Computational Cosmology on behalf of the STFC DiRAC HPC Facility\n(www.dirac.ac.uk). This equipment was funded by BIS National\nE-infrastructure capital grant ST\/K00042X\/1, STFC capital grant\nST\/H008519\/1, and STFC DiRAC Operations grant ST\/K003267\/1 and Durham\nUniversity. DiRAC is part of the National E-Infrastructure. We also\nacknowledge PRACE for awarding us access to the ARCHER resource\n(http:\/\/www.archer.ac.uk) based in the UK at the University of\nEdinburgh (PRACE-3IP project FP7 RI-312763).