diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzikix" "b/data_all_eng_slimpj/shuffled/split2/finalzzikix" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzikix" @@ -0,0 +1,5 @@ +{"text":"\\section{Supplementary Lemmas for the proof of Theorem \\ref{thm:regression}}\n\\label{sec:supp_B}\n\\subsection{Proof of Lemma \\ref{bandwidth}}\n\\begin{proof}\nFirst we establish the fact that $\\theta_0^s \\to \\theta_0$. Note that for all $n$, we have: \n$$\n\\mathbb{M}^s(\\theta_0^s) \\le \\mathbb{M}^s(\\theta_0) \n$$\nTaking $\\limsup$ on the both side we have: \n$$\n\\limsup_{n \\to \\infty} \\mathbb{M}^s(\\theta_0^s) \\le \\mathbb{M}(\\theta_0) \\,.\n$$\nNow using Lemme \\ref{lem:uniform_smooth} we have: \n$$\n\\limsup_{n \\to \\infty} \\mathbb{M}^s(\\theta_0^s) = \\limsup_{n \\to \\infty} \\left[\\mathbb{M}^s(\\theta_0^s) - \\mathbb{M}(\\theta_0^s) + \\mathbb{M}(\\theta_0^s)\\right] = \\limsup_{n \\to \\infty} \\mathbb{M}(\\theta_0^s) \\,.\n$$\nwhich implies $\\limsup_{n \\to \\infty} \\mathbb{M}(\\theta_0^s) \\le \\mathbb{M}(\\theta_0)$ and from the continuity of $\\mathbb{M}(\\theta)$ and $\\theta_0$ being its unique minimizer, we conclude the proof. Now, using Lemma \\ref{lem:pop_curv_nonsmooth} and Lemma \\ref{lem:uniform_smooth} we further obtain: \n\\begin{align}\n u_- d^2(\\theta_0^s, \\theta_0) & \\le \\mathbb{M}(\\theta_0^s) - \\mathbb{M}(\\theta_0) \\notag \\\\\n & = \\mathbb{M}(\\theta_0^s) - \\mathbb{M}^s(\\theta^s_0) + \\underset{\\le 0}{\\underline{\\mathbb{M}^s(\\theta_0^s) - \\mathbb{M}^s(\\theta_0)}} + \\mathbb{M}^s(\\theta_0) - \\mathbb{M}(\\theta_0) \\notag \\\\\n \\label{eq:est_dist_bound} & \\le \\sup_{\\theta \\in \\Theta}\\left|\\mathbb{M}^s(\\theta) - \\mathbb{M}(\\theta)\\right| \\le K_1 \\sigma_n \\,. \n\\end{align}\nNote that we neeed consistency of $\\theta_0^s$ here as the lower bound in Lemma \\ref{lem:pop_curv_nonsmooth} is only valid in a neighborhood around $\\theta_0$. As $\\theta_0^s$ is the minimizer of $\\mathbb{M}^s(\\theta)$, from the first order condition we have: \n\\begin{align}\n \\label{eq:beta_grad}\\nabla_{\\beta}\\mathbb{M}^s_n(\\theta_0^s) & = -2\\mathbb{E}\\left[X(Y - X^{\\top}\\beta_0^s)\\right] + 2\\mathbb{E} \\left\\{\\left[X_iX_i^{\\top}\\delta_0^s\\right] K\\left(\\frac{Q_i^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\right\\} = 0 \\\\\n \\label{eq:delta_grad}\\nabla_{\\delta}\\mathbb{M}^s_n(\\theta_0^s) & = \\mathbb{E} \\left\\{\\left[-2X_i\\left(Y_i - X_i^{\\top}\\beta_0^s\\right) + 2X_iX_i^{\\top}\\delta_0^s\\right] K\\left(\\frac{Q_i^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\right\\} = 0\\\\\n \\label{eq:psi_grad}\\nabla_{\\psi}\\mathbb{M}^s_n(\\theta_0^s) & = \\frac{1}{\\sigma_n}\\mathbb{E} \\left\\{\\left[-2\\left(Y_i - X_i^{\\top}\\beta_0^s\\right)X_i^{\\top}\\delta_0^s + (X_i^{\\top}\\delta_0^s)^2\\right]\\tilde Q_i K'\\left(\\frac{Q_i^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\right\\} = 0\n\\end{align}\nWe first show that $(\\tilde \\psi^s_0 - \\tilde \\psi_0)\/\\sigma_n \\to 0$ by \\emph{reductio ab absurdum}. From equation \\eqref{eq:est_dist_bound}, we know $\\|\\psi_0^s - \\psi_0\\|\/\\sigma_n = O(1)$. Hence it has a convergent subsequent $\\psi^s_{0, n_k}$, where $(\\tilde \\psi^s_{0, n_k} - \\tilde \\psi_0)\/\\sigma_n \\to h$. If we can prove that $h = 0$, then we establish every subsequence of $\\|\\psi_0^s - \\psi_0\\|\/\\sigma_n$ has a further subsequence which converges to $0$ which further implies $\\|\\psi_0^s - \\psi_0\\|\/\\sigma_n$ converges to $0$. 
To save some notations, we prove that if $(\\psi_0^s - \\psi_0)\/\\sigma_n \\to h$ then $h = 0$. We start with equation \\eqref{eq:psi_grad}. Define $\\tilde \\eta = (\\tilde \\psi^s_0 - \\tilde \\psi_0)\/\\sigma_n = (\\psi_0^s - \\psi_0)\/\\sigma_n$ where $\\tilde \\psi$ is all the co-ordinates of $\\psi$ except the first one, as the first co-ordinate of $\\psi$ is always assumed to be $1$ for identifiability purpose. \n\\allowdisplaybreaks\n\\begin{align}\n 0 & = \\frac{1}{\\sigma_n}\\mathbb{E} \\left\\{\\left[-2\\left(Y_i - X_i^{\\top}\\beta_0^s\\right)X_i^{\\top}\\delta_0^s + (X_i^{\\top}\\delta_0^s)^2\\right]\\tilde Q_i K'\\left(\\frac{Q_i^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\right\\} \\notag \\\\\n & = \\frac{1}{\\sigma_n}\\mathbb{E}\\left[\\left( -2\\delta_0^s XX^{\\top}(\\beta_0 - \\beta^s_0) -2\\delta_0^s XX^{\\top}\\delta_0\\mathds{1}_{Q^{\\top}\\delta_0 > 0} + (X_i^{\\top}\\delta_0^s)^2\\right)\\tilde QK'\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)\\right] \\notag \\\\\n & = \\frac{1}{\\sigma_n}\\mathbb{E}\\left[\\left( -2\\delta_0^s XX^{\\top}(\\beta_0 - \\beta^s_0) -2\\delta_0^s XX^{\\top}(\\delta_0 - \\delta_0^s)\n \\mathds{1}_{Q^{\\top}\\delta_0 > 0} \\right. \\right. \\notag \\\\\n & \\hspace{10em} \\left. \\left. + (X_i^{\\top}\\delta_0^s)^2\\left(1 - 2\\mathds{1}_{Q^{\\top}\\delta_0 > 0}\\right)\\right)\\tilde QK'\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)\\right] \\notag \\\\\n & = \\frac{-2}{\\sigma_n}\\mathbb{E}\\left[\\left(\\delta_0^{s^{\\top}} g(Q)(\\beta_0 - \\beta^s_0)\\right)\\tilde QK'\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)\\right] \\notag \\\\\n & \\qquad \\qquad \\qquad - \\frac{2}{\\sigma_n} \\mathbb{E}\\left[\\left(\\delta_0^{s^{\\top}}g(Q)(\\delta_0 - \\delta^s_0)\\right)\\tilde QK'\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)\\mathds{1}_{Q^{\\top}\\delta_0 > 0}\\right] \\notag \\\\\n & \\hspace{15em} + \\frac{1}{\\sigma_n}\\mathbb{E}\\left[\\left(\\delta_0^{s^{\\top}}g(Q)\\delta^s_0\\right)\\tilde QK'\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)\\left(1 - 2\\mathds{1}_{Q^{\\top}\\delta_0 > 0}\\right)\\right] \\notag \\\\\n & = -\\underbrace{\\frac{2}{\\sigma_n}\\mathbb{E}\\left[\\left(\\delta_0^{s^{\\top}} g(Q)(\\beta_0 - \\beta^s_0)\\right)\\tilde QK'\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)\\right]}_{T_1} \\notag \\\\\n & \\qquad \\qquad -\\underbrace{\\frac{2}{\\sigma_n} \\mathbb{E}\\left[\\left(\\delta_0^{s^{\\top}}g(Q)(\\delta_0 - \\delta^s_0)\\right)\\tilde QK'\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)\\mathds{1}_{Q^{\\top}\\delta_0 > 0}\\right]}_{T_2} \\notag \\\\\n \\label{eq:pop_est_conv_1} & \\qquad \\qquad \\qquad + \\underbrace{\\frac{1}{\\sigma_n}\\mathbb{E}\\left[\\left(\\delta_0{\\top}g(Q)\\delta_0\\right)\\tilde QK'\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)\\left(1 - 2\\mathds{1}_{Q^{\\top}\\delta_0 > 0}\\right)\\right]}_{T_3} \\notag \\\\\n & \\qquad \\qquad \\qquad \\qquad + \\underbrace{\\frac{2}{\\sigma_n}\\mathbb{E}\\left[\\left((\\delta_0 - \\delta_0^s)^{\\top}g(Q)\\delta_0\\right)\\tilde QK'\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)\\left(1 - 2\\mathds{1}_{Q^{\\top}\\delta_0 > 0}\\right)\\right]}_{T_4} \\notag \\\\\n & = T_1 + T_2 + T_3 + T_4\n \\end{align}\nAs mentioned earlier, there is a bijection between $(Q_1, \\tilde Q)$ and $(Q^{\\top}\\psi_0, \\tilde Q)$. The map of one side is obvious. 
The other side is also trivial as the first coordinate of $\\psi_0$ is 1, which makes $Q^{\\top}\\psi_0 = Q_1 + \\tilde Q^{\\top}\\tilde \\psi_0$: \n$$\n(Q^{\\top}\\psi_0, \\tilde Q) \\mapsto (Q^{\\top}\\psi_0 - \\tilde Q^{\\top}\\tilde \\psi_0, \\tilde Q) \\,.\n$$\nWe first show that $T_1, T_2$ and $T_4$ are $o(1)$. Towards that end first note that: \n\\begin{align*}\n|T_1| & \\le \\frac{2}{\\sigma_n}\\mathbb{E}\\left[\\|g(Q)\\|_{op} \\ \\|\\tilde Q\\| \\ \\left|K'\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)\\right|\\right]\\|\\delta_0^s\\|\\|\\beta_0 - \\beta_0^s\\| \\\\\n|T_2| & \\le \\frac{2}{\\sigma_n} \\mathbb{E}\\left[\\|g(Q)\\|_{op} \\ \\|\\tilde Q\\| \\ \\left|K'\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)\\right|\\right]\\|\\delta_0^s\\|\\|\\delta_0 - \\delta_0^s\\| \\\\\n|T_4| & \\le \\frac{2}{\\sigma_n} \\mathbb{E}\\left[\\|g(Q)\\|_{op} \\ \\|\\tilde Q\\| \\ \\left|K'\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)\\right|\\right]\\|\\delta_0^s\\|\\|\\delta_0 - \\delta_0^s\\|\n\\end{align*}\nFrom the above bounds, it is immediate that to show that above terms are $o(1)$ all we need to show is: \n$$\n \\frac{1}{\\sigma_n}\\mathbb{E}\\left[\\|g(Q)\\|_{op} \\ \\|\\tilde Q\\| \\ \\left|K'\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)\\right|\\right] = O(1) \\,.\n$$\nTowards that direction, define $\\eta = (\\tilde \\psi_0^s - \\tilde \\psi_0)\/\\sigma_n$: \n\\begin{align*}\n& \\frac{1}{\\sigma_n}\\mathbb{E}\\left[\\|g(Q)\\|_{op} \\ \\|\\tilde Q\\| \\ \\left|K'\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)\\right|\\right] \\\\\n& \\le c_+ \\frac{1}{\\sigma_n}\\mathbb{E}\\left[\\|\\tilde Q\\| \\ \\left|K'\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)\\right|\\right] \\\\\n& = c_+ \\frac{1}{\\sigma_n}\\int \\int \\|\\tilde q\\| \\left|K'\\left(\\frac{t}{\\sigma_n} + \\tilde q^{\\top}\\eta \\right)\\right| f_0\\left(t \\mid \\tilde q\\right) f(\\tilde q) \\ dt \\ d\\tilde q \\\\\n& = c_+ \\int \\int \\|\\tilde q\\| \\left|K'\\left(t + \\tilde q^{\\top}\\eta \\right)\\right| f_0\\left(\\sigma_n t \\mid \\tilde q\\right) f(\\tilde q) \\ dt \\ d\\tilde q \\\\\n& = c_+ \\int \\|\\tilde q\\| f_0\\left(0 \\mid \\tilde q\\right) \\int \\left|K'\\left(t + \\tilde q^{\\top}\\eta \\right)\\right| \\ dt \\ f(\\tilde q) \\ d\\tilde q + R_1 \\\\\n& = c_+ \\int \\left|K'\\left(t\\right)\\right| dt \\ \\mathbb{E}\\left[\\|\\tilde Q\\| f_0(0 \\mid \\tilde Q)\\right] + R_1 = O(1) + R_1 \\,.\n\\end{align*}\nTherefore, all it remains to show is $R_1$ is also $O(1)$ (or of smaller order): \n\\begin{align*}\n|R_1| & = \\left|c_+ \\int \\int \\|\\tilde q\\| \\left|K'\\left(t + \\tilde q^{\\top}\\eta \\right)\\right| \\left(f_0\\left(\\sigma_n t \\mid \\tilde q\\right) - f_0(0 \\mid \\tilde q) \\right)f(\\tilde q) \\ dt \\ d\\tilde q\\right| \\\\\n& \\le c_+ F_+ \\sigma_n \\int \\|\\tilde q\\| \\int_{-\\infty}^{\\infty} |t|\\left|K'\\left(t + \\tilde q^{\\top}\\eta \\right)\\right| \\ dt \\ f(\\tilde q) \\ d\\tilde q \\\\\n& = c_+ F_+ \\sigma_n \\int \\|\\tilde q\\| \\int_{-\\infty}^{\\infty} |t - q^{\\top}\\eta|\\left|K'\\left(t\\right)\\right| \\ dt \\ f(\\tilde q) \\ d\\tilde q \\\\\n& \\le c_+ F_+ \\sigma_n \\left[\\int \\|\\tilde q\\| \\int_{-\\infty}^{\\infty} |t|\\left|K'\\left(t\\right)\\right| \\ dt \\ f(\\tilde q) \\ d\\tilde q + \\int \\|\\tilde q\\|^2\\|\\eta\\| \\int_{-\\infty}^{\\infty}\\left|K'\\left(t\\right)\\right| \\ dt \\ f(\\tilde q) \\ d\\tilde q\\right] \\\\\n& = c_+ F_+ \\sigma_n \\left[\\left(\\int_{-\\infty}^{\\infty} 
|t|\\left|K'\\left(t\\right)\\right| \\ dt\\right) \\times \\mathbb{E}[\\|\\tilde Q\\|] + \\left(\\int_{-\\infty}^{\\infty}\\left|K'\\left(t\\right)\\right| \\ dt\\right) \\times \\|\\eta\\| \\ \\mathbb{E}[\\|\\tilde Q\\|^2]\\right] \\\\\n& = O(\\sigma_n) = o(1) \\,.\n\\end{align*}\nThis completes the proof. For $T_3$, the limit is non-degenerate which can be calculated as follows: \n\\begin{align*}\nT_3 &= \\frac{1}{\\sigma_n}\\mathbb{E}\\left[\\left(\\delta_0{\\top}g(Q)\\delta_0\\right)\\tilde QK'\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)\\left(1 - 2\\mathds{1}_{Q^{\\top}\\delta_0 > 0}\\right)\\right] \\\\\n& = \\frac{1}{\\sigma_n} \\int \\int \\left(\\delta_0{\\top}g(t - \\tilde q^{\\top}\\tilde \\psi_0, \\tilde q)\\delta_0\\right)\\tilde q K'\\left(\\frac{t}{\\sigma_n} + \\tilde q^{\\top} \\eta\\right)\\left(1 - 2\\mathds{1}_{t > 0}\\right) \\ f_0(t \\mid \\tilde q) \\ f(\\tilde q) \\ dt \\ d\\tilde q \\\\\n& = \\int \\int \\left(\\delta_0{\\top}g(\\sigma_n t - \\tilde q^{\\top}\\tilde \\psi_0, \\tilde q)\\delta_0\\right)\\tilde q K'\\left(t + \\tilde q^{\\top} \\eta\\right)\\left(1 - 2\\mathds{1}_{t > 0}\\right) \\ f_0(\\sigma_n t \\mid \\tilde q) \\ f(\\tilde q) \\ dt \\ d\\tilde q \\\\\n& = \\int \\int \\left(\\delta_0{\\top}g(- \\tilde q^{\\top}\\tilde \\psi_0, \\tilde q)\\delta_0\\right)\\tilde q K'\\left(t + \\tilde q^{\\top} \\eta\\right)\\left(1 - 2\\mathds{1}_{t > 0}\\right) \\ f_0(0 \\mid \\tilde q) \\ f(\\tilde q) \\ dt \\ d\\tilde q + R \\\\\n& = \\int \\left(\\delta_0{\\top}g(- \\tilde q^{\\top}\\tilde \\psi_0, \\tilde q)\\delta_0\\right)\\tilde q f_0(0 \\mid \\tilde q) \\left[\\int_{-\\infty}^0 K'\\left(t + \\tilde q^{\\top} \\eta\\right) \\ dt - \\int_0^\\infty K'\\left(t + \\tilde q^{\\top}\\tilde \\eta\\right) \\ dt \\right] \\ f(\\tilde q) \\ d\\tilde q + R \\\\\n&= \\int \\left(\\delta_0{\\top}g(- \\tilde q^{\\top}\\tilde \\psi_0, \\tilde q)\\delta_0\\right)\\tilde q f_0(0 \\mid \\tilde q)\\left(2K\\left(\\tilde q^{\\top}\\eta\\right) - 1\\right) \\ f(\\tilde q) \\ d\\tilde q + R \\\\\n& = \\mathbb{E}\\left[\\tilde Q f(0 \\mid \\tilde Q) \\left(\\delta_0^{\\top}g(- \\tilde Q^{\\top}\\tilde \\psi_0, \\tilde Q)\\delta_0\\right)\\left(2K(\\tilde Q^{\\top} \\eta)- 1\\right)\\right] + R \n\\end{align*} \nThat the remainder $R$ is $o(1)$ again follows by similar calculation as before and hence skipped. Therefore we have when $\\eta = (\\tilde \\psi_0^s - \\psi_0)\/\\sigma_n \\to h$: \n$$\nT_3 \\overset{n \\to \\infty}{\\longrightarrow} \\mathbb{E}\\left[\\tilde Q f(0 \\mid \\tilde Q) \\left(\\delta_0^{\\top}g(- \\tilde Q^{\\top}\\tilde \\psi_0, \\tilde Q)\\delta_0\\right)\\left(2K(\\tilde Q^{\\top}h)- 1\\right)\\right] \\,,\n$$\nwhich along with equation \\eqref{eq:pop_est_conv_1} implies: \n$$\n\\mathbb{E}\\left[\\tilde Q f(0 \\mid \\tilde Q) \\left(\\delta_0^{\\top}g(- \\tilde Q^{\\top}\\tilde \\psi_0, \\tilde Q)\\delta_0\\right)\\left(2K(\\tilde Q^{\\top}h)- 1\\right)\\right] = 0 \\,.\n$$\nTaking inner product with respect to $h$ on both side of the above equation we obtain: \n$$\n\\mathbb{E}\\left[\\tilde Q^{\\top}h f(0 \\mid \\tilde Q) \\left(\\delta_0^{\\top}g(- \\tilde Q^{\\top}\\tilde \\psi_0, \\tilde Q)\\delta_0\\right)\\left(2K(\\tilde Q^{\\top}h)- 1\\right)\\right] = 0\n$$\nNow from the symmetry of our Kernel $K$ we have $\\left(\\delta_0^{\\top}g(- \\tilde Q^{\\top}\\tilde \\psi_0, \\tilde Q)\n\\delta_0\\right)\\tilde Q^{\\top}h f(0 \\mid \\tilde Q) (2K(\\tilde Q^{\\top}\\tilde h) - 1) \\ge 0$ almost surely. 
As the expectation is $0$, we further deduce that $\\tilde Q^{\\top}h f(0 \\mid \\tilde Q) (2K(\\tilde Q^{\\top}\\tilde h)-1) = 0$ almost surely, which further implies $h = 0$. \n\\\\\\\\\n\\noindent\nWe next prove that $(\\beta_0 - \\beta^s_0)\/\\sqrt{\\sigma_n} \\to 0$ and $(\\delta_0 - \\delta^s_0)\/\\sqrt{\\sigma_n} \\to 0$ using equations\\eqref{eq:beta_grad} and \\eqref{eq:delta_grad}. We start with equation \\eqref{eq:beta_grad}: \n\\begin{align}\n 0 & = -\\mathbb{E}\\left[X(Y - X^{\\top}\\beta_0^s)\\right] + \\mathbb{E} \\left\\{\\left[X_iX_i^{\\top}\\delta_0^s\\right] K\\left(\\frac{Q_i^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\right\\} \\notag \\\\\n & = -\\mathbb{E}\\left[XX^{\\top}(\\beta_0 - \\beta_0^s)\\right] - \\mathbb{E}[XX^{\\top}\\delta_0\\mathds{1}_{Q^{\\top}\\psi_0 > 0}] + \\mathbb{E} \\left[ g(Q)K\\left(\\frac{Q_i^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\right]\\delta_0^s \\notag \\\\\n & = -\\Sigma_X(\\beta_0 - \\beta_0^s) -\\mathbb{E}\\left[g(Q)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right](\\delta_0 - \\delta_0^s) + \\mathbb{E} \\left[g(Q)\\left\\{K\\left(\\frac{Q_i^{\\top}\\psi_0^s}{\\sigma_n}\\right) - \\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right\\}\\right]\\delta_0^s \\notag \\\\\n \\label{eq:deriv1} & = \\Sigma_X\\frac{(\\beta_0^2 - \\beta_0)}{\\sigma_n} + \\mathbb{E}\\left[g(Q)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right]\\frac{(\\delta_0^2 - \\delta_0)}{\\sigma_n} + \\frac{1}{\\sigma_n}\\mathbb{E} \\left[g(Q)\\left\\{K\\left(\\frac{Q_i^{\\top}\\psi_0^s}{\\sigma_n}\\right) - \\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right\\}\\right]\\delta_0^s \\notag \\\\ \n & = \\left( \\mathbb{E}\\left[g(Q)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right]\\right)^{-1}\\Sigma_X \\frac{(\\beta_0^2 - \\beta_0)}{\\sigma_n} + \\frac{\\delta_0^s - \\delta_0}{\\sigma_n} \\notag \\\\\n & \\qquad \\qquad \\qquad \\qquad + \\left( \\mathbb{E}\\left[g(Q)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right]\\right)^{-1} \\frac{1}{\\sigma_n}\\mathbb{E} \\left[g(Q)\\left\\{K\\left(\\frac{Q_i^{\\top}\\psi_0^s}{\\sigma_n}\\right) - \\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right\\}\\right]\\delta_0^s \n\\end{align}\nFrom equation \\eqref{eq:delta_grad} we have:\n\\begin{align}\n 0 & = \\mathbb{E} \\left\\{\\left[-X\\left(Y - X^{\\top}\\beta_0^s\\right) + XX^{\\top}\\delta_0^s\\right] K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\right\\} \\notag \\\\\n & = -\\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\right](\\beta_0 - \\beta_0^s) - \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right]\\delta_0 \\notag \\\\\n & \\hspace{20em}+ \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\right]\\delta_0^s \\notag \\\\\n & = -\\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\right](\\beta_0 - \\beta_0^s) - \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right](\\delta_0 - \\delta_0^s) \\notag \\\\\n & \\hspace{20em} + \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\left(1 - \\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right)\\right]\\delta_0^s \\notag \\\\\n & = \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\right]\\frac{(\\beta_0^s - \\beta_0)}{\\sigma_n} + \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right]\\frac{(\\delta^s_0 - \\delta_0)}{\\sigma_n} \\notag \\\\\n \\label{eq:deriv2} & \\hspace{20em} + 
\\frac{1}{\\sigma_n}\\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\left(1 - \\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right)\\right]\\delta_0^s \\notag \\\\ \n & = \\left( \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right]\\right)^{-1}\\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\right]\\frac{(\\beta_0^s - \\beta_0)}{\\sigma_n} + \\frac{(\\delta^s_0 - \\delta_0)}{\\sigma_n} \\notag \\\\\n & \\qquad \\qquad \\qquad + \\left( \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right]\\right)^{-1} \\frac{1}{\\sigma_n}\\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\left(1 - \\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right)\\right]\\delta_0^s \n \\end{align}\nSubtracting equation \\eqref{eq:deriv2} from \\eqref{eq:deriv1} we obtain: \n$$\n0 = A_n \\frac{(\\beta_0^s - \\beta_0)}{\\sigma_n} + b_n \\,,\n$$\ni.e. \n$$\n\\lim_{n \\to \\infty} \\frac{(\\beta_0^s - \\beta_0)}{\\sigma_n} = \\lim_{n \\to \\infty} -A_n^{-1}b_n \\,.\n$$\nwhere: \n\\begin{align*}\nA_n & = \\left( \\mathbb{E}\\left[g(Q)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right]\\right)^{-1}\\Sigma_X \\\\\n& \\qquad \\qquad - \\left( \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right]\\right)^{-1}\\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\right] \\\\\nb_n & = \\left( \\mathbb{E}\\left[g(Q)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right]\\right)^{-1} \\frac{1}{\\sigma_n}\\mathbb{E} \\left[g(Q)\\left\\{K\\left(\\frac{Q_i^{\\top}\\psi_0^s}{\\sigma_n}\\right) - \\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right\\}\\right]\\delta_0^s \\\\\n& \\qquad - \\left( \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right]\\right)^{-1} \\frac{1}{\\sigma_n}\\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\left(1 - \\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right)\\right]\\delta_0^s \n\\end{align*}\nIt is immediate via DCT that as $n \\to \\infty$: \n\\begin{align}\n \\label{eq:limit_3} \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\right] & \\longrightarrow \\mathbb{E}\\left[g(Q)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right] \\,. 
\\\\\n \\label{eq:limit_4} \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right] & \\longrightarrow \\mathbb{E}\\left[g(Q)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right] \\,.\n\\end{align}\nFrom equation \\eqref{eq:limit_3} and \\eqref{eq:limit_4} it is immediate that: \n\\begin{align*}\n\\lim_{n \\to \\infty} A_n & = \\left( \\mathbb{E}\\left[g(Q)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right]\\right)^{-1}\\Sigma_X - I \\\\\n& = \\left( \\mathbb{E}\\left[g(Q)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right]\\right)^{-1}\\left( \\mathbb{E}\\left[g(Q)\\mathds{1}_{Q^{\\top}\\psi_0 \\le 0}\\right]\\right) := A\\,.\n\\end{align*}\nNext observe that: \n\\begin{align}\n & \\frac{1}{\\sigma_n} \\mathbb{E}\\left[g(Q)\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - \\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right\\}\\right] \\notag \\\\\n & = \\frac{1}{\\sigma_n} \\mathbb{E}\\left[g(Q)\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right) - \\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right\\}\\right] \\notag \\\\\n & = \\int_{\\mathbb{R}^{p-1}} \\int_{-\\infty}^{\\infty} g(\\sigma_nt - \\tilde q^{\\top}\\tilde \\psi_0, \\tilde q)\\left[K\\left(t + \\tilde q^{\\top}\\tilde \\eta\\right) - \\mathds{1}_{t > 0}\\right] f(\\sigma_n t \\mid \\tilde q) \\ dt \\ f(\\tilde q) \\ d\\tilde q \\notag \\\\\n \\label{eq:limit_1} & \\longrightarrow \\mathbb{E}\\left[g(-\\tilde Q^{\\top}\\tilde \\psi_0, \\tilde Q)f(0 \\mid \\tilde Q)\\right] \\cancelto{0}{\\int_{-\\infty}^{\\infty} \\left[K\\left(t\\right) - \\mathds{1}_{t > 0}\\right] \\ dt} \\,. \n\\end{align}\nSimilar calculation yields: \n\\begin{align}\n \\label{eq:limit_2} & \\frac{1}{\\sigma_n} \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\left(1 - \\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right)\\right] \\notag \\\\\n & \\longrightarrow \\mathbb{E}[g(-\\tilde Q^{\\top}\\tilde \\psi_0, \\tilde Q)f_0(0 \\mid \\tilde Q)]\\int_{-\\infty}^{\\infty} \\left[K\\left(t\\right)\\mathds{1}_{t \\le 0}\\right] \\ dt \\,.\n\\end{align}\nCombining equation \\eqref{eq:limit_1} and \\eqref{eq:limit_2} we conclude: \n\\begin{align*}\n\\lim_{n \\to \\infty} b_n &= \\left( \\mathbb{E}\\left[g(Q)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right]\\right)^{-1} \\mathbb{E}[g(-\\tilde Q^{\\top}\\tilde \\psi_0, \\tilde Q)\\delta_0f_0(0 \\mid \\tilde Q)]\\int_{-\\infty}^{\\infty} \\left[K\\left(t\\right)\\mathds{1}_{t \\le 0}\\right] \\ dt \\\\\n& := b \\,.\n\\end{align*}\nwhich further implies, \n$$\n\\lim_{n \\to \\infty} \\frac{(\\beta_0^s - \\beta_0)}{\\sigma_n} = -A^{-1}b \\implies (\\beta_0^s - \\beta_0) = o(\\sqrt{\\sigma_n})\\,,\n$$\nand by similar calculations: \n$$\n(\\delta_0^s - \\delta_0) = o(\\sqrt{\\sigma_n}) \\,.\n$$\nThis completes the proof. \n\\end{proof}\n\n\n\n\n\n\n\\subsection{Proof of Lemma \\ref{lem:pop_curv_nonsmooth}}\n\\begin{proof}\nFrom the definition of $M(\\theta)$ it is immediate that $\\mathbb{M}(\\theta_0) = \\mathbb{E}[{\\epsilon}^2] = \\sigma^2$. 
For any general $\\theta$: \n\\begin{align*}\n \\mathbb{M}(\\theta) & = \\mathbb{E}\\left[\\left(Y - X^{\\top}\\left(\\beta + \\delta\\mathds{1}_{Q^{\\top}\\psi > 0}\\right)\\right)^2\\right] \\\\\n & = \\sigma^2 + \\mathbb{E}\\left[\\left( X^{\\top}\\left(\\beta + \\delta\\mathds{1}_{Q^{\\top}\\psi > 0} - \\beta_0 - \\delta_0\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right)\\right)^2\\right] \\\\\n & \\ge \\sigma^2 + c_- \\mathbb{E}_Q\\left[\\left\\|\\beta - \\beta_0 + \\delta\\mathds{1}_{Q^{\\top}\\psi > 0}- \\delta_0\\mathds{1}_{Q^{\\top}\\psi_0 > 0} \\right\\|^2\\right]\n\\end{align*}\nThis immediately implies: \n$$\n\\mathbb{M}(\\theta) - \\mathbb{M}(\\theta_0) \\ge c_- \\mathbb{E}\\left[\\left\\|\\beta - \\beta_0 + \\delta\\mathds{1}_{Q^{\\top}\\psi > 0}- \\delta_0\\mathds{1}_{Q^{\\top}\\psi_0 > 0} \\right\\|^2\\right] \\,.\n$$\n\n\\noindent\nFor notational simplicity, define $p_{\\psi} = \\mathbb{P}(Q^{\\top}\\psi > 0)$. Expanding the RHS we have: \n\\begin{align}\n & \\mathbb{E}\\left[\\left\\|\\beta - \\beta_0 + \\delta\\mathds{1}_{Q^{\\top}\\psi > 0}- \\delta_0\\mathds{1}_{Q^{\\top}\\psi_0 > 0} \\right\\|^2\\right] \\notag \\\\\n & = \\|\\beta - \\beta_0\\|^2 + 2(\\beta - \\beta_0)^{\\top}\\mathbb{E}\\left[\\delta\\mathds{1}_{Q^{\\top}\\psi > 0}- \\delta_0\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right] + \\mathbb{E}\\left[\\left\\|\\delta\\mathds{1}_{Q^{\\top}\\psi > 0}- \\delta_0\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right\\|^2\\right] \\notag \\\\\n & = \\|\\beta - \\beta_0\\|^2 + 2(\\beta - \\beta_0)^{\\top}\\mathbb{E}\\left[\\delta\\mathds{1}_{Q^{\\top}\\psi > 0}-\\delta\\mathds{1}_{Q^{\\top}\\psi_0 > 0} + \\delta\\mathds{1}_{Q^{\\top}\\psi_0 > 0} - \\delta_0\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right] \\notag \\\\\n & \\qquad \\qquad \\qquad \\qquad \\qquad+ \\mathbb{E}\\left[\\left\\|\\delta\\mathds{1}_{Q^{\\top}\\psi > 0}-\\delta\\mathds{1}_{Q^{\\top}\\psi_0 > 0} + \\delta\\mathds{1}_{Q^{\\top}\\psi_0 > 0} - \\delta_0\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right\\|^2\\right] \\notag \\\\\n & = \\|\\beta - \\beta_0\\|^2 + 2(\\beta - \\beta_0)^{\\top}(\\delta - \\delta_0)p_{\\psi_0} + \\|\\delta - \\delta_0\\|^2 p_{\\psi_0} \\notag \\\\\n & \\qquad \\qquad \\qquad + 2(\\beta - \\beta_0)^{\\top}\\delta\\left(p_{\\psi} - p_{\\psi_0}\\right) + \\|\\delta\\|^2 \\mathbb{P}\\left(\\text{sign}(Q^{\\top}\\psi) \\neq \\text{sign}(Q^{\\top}\\psi_0)\\right) \\notag \\\\\n \\label{eq:nsb1} & \\qquad \\qquad \\qquad \\qquad \\qquad - 2\\delta^{\\top}(\\delta - \\delta_0)\\mathbb{P}\\left(Q^{\\top}\\psi_0 > 0, Q^{\\top}\\psi < 0\\right) \n \\end{align}\nUsing the fact that $2ab \\ge (a^2\/c) + cb^2$ for any constant $c$ we have: \n\\begin{align*}\n& \\|\\beta - \\beta_0\\|^2 + 2(\\beta - \\beta_0)^{\\top}(\\delta - \\delta_0)p_{\\psi_0} + \\|\\delta - \\delta_0\\|^2 p_{\\psi_0} \\\\\n& \\ge \\|\\beta - \\beta_0\\|^2 + \\|\\delta - \\delta_0\\|^2 p_{\\psi_0} - \\frac{\\|\\beta - \\beta_0\\|^2 p_{\\psi_0}}{c} - c \\|\\delta - \\delta_0\\|^2 p_{\\psi_0} \\\\\n& = \\|\\beta - \\beta_0\\|^2\\left(1 - \\frac{p_{\\psi_0}}{c}\\right) + \\|\\delta - \\delta_0\\|^2 p_{\\psi_0} (1 - c) \\,.\n\\end{align*}\nfor any $c$. 
To make the RHS non-negative we pick $p_{\\psi_0} < c < 1$ and concludes that: \n\\begin{equation}\n\\label{eq:nsb2}\n \\|\\beta - \\beta_0\\|^2 + 2(\\beta - \\beta_0)^{\\top}(\\delta - \\delta_0)p_{\\psi_0} + \\|\\delta - \\delta_0\\|^2 p_{\\psi_0} \\gtrsim \\left( \\|\\beta - \\beta_0\\|^2 + \\|\\delta - \\delta_0\\|^2\\right) \\,.\n\\end{equation}\nFor the last 3 summands of RHS of equation \\eqref{eq:nsb1}: \n\\begin{align}\n& 2(\\beta - \\beta_0)^{\\top}\\delta\\left(p_{\\psi} - p_{\\psi_0}\\right) + \\|\\delta\\|^2 \\mathbb{P}\\left(\\text{sign}(Q^{\\top}\\psi) \\neq \\text{sign}(Q^{\\top}\\psi_0)\\right) \\notag \\\\\n & \\qquad \\qquad - 2\\delta^{\\top}(\\delta - \\delta_0)\\mathbb{P}\\left(Q^{\\top}\\psi_0 > 0, Q^{\\top}\\psi < 0\\right) \\notag \\\\\n & = 2(\\beta - \\beta_0)^{\\top}\\delta \\mathbb{P}\\left(Q^{\\top}\\psi > 0, Q^{\\top}\\psi_0 < 0\\right) - 2(\\beta - \\beta_0)^{\\top}\\delta \\mathbb{P}\\left(Q^{\\top}\\psi < 0, Q^{\\top}\\psi_0 > 0\\right) \\notag \\\\\n & \\qquad \\qquad + |\\delta\\|^2 \\mathbb{P}\\left(\\text{sign}(Q^{\\top}\\psi) \\neq \\text{sign}(Q^{\\top}\\psi_0)\\right) - 2\\delta^{\\top}(\\delta - \\delta_0)\\mathbb{P}\\left(Q^{\\top}\\psi_0 > 0, Q^{\\top}\\psi < 0\\right) \\notag \\\\\n & = \\left[\\|\\delta\\|^2 - 2(\\beta - \\beta_0)^{\\top}\\delta - 2\\delta^{\\top}(\\delta - \\delta_0)\\right]\\mathbb{P}\\left(Q^{\\top}\\psi_0 > 0, Q^{\\top}\\psi < 0\\right) \\notag \\\\\n & \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad + \\left[\\|\\delta\\|^2 + 2(\\beta - \\beta_0)^{\\top}\\delta\\right]\\mathbb{P}\\left(Q^{\\top}\\psi > 0, Q^{\\top}\\psi_0 < 0\\right) \\notag \\\\\n & = \\left[\\|\\delta_0\\|^2 - 2(\\beta - \\beta_0)^{\\top}(\\delta - \\delta_0) - 2(\\beta - \\beta_0)^{\\top}\\delta_0 - \\|\\delta - \\delta_0\\|^2\\right]\\mathbb{P}\\left(Q^{\\top}\\psi_0 > 0, Q^{\\top}\\psi < 0\\right) \\notag \\\\\n & \\qquad + \\left[\\|\\delta_0\\|^2 + \\|\\delta - \\delta_0\\|^2 + 2(\\delta - \\delta_0)^{\\top}\\delta_0 + 2(\\beta - \\beta_0)^{\\top}(\\delta - \\delta_0) + 2(\\beta - \\beta_0)^{\\top}\\delta_0\\right]\\mathbb{P}\\left(Q^{\\top}\\psi > 0, Q^{\\top}\\psi_0 < 0\\right) \\notag \\\\\n & \\ge \\left[\\|\\delta_0\\|^2 - 2\\|\\beta - \\beta_0\\|\\|\\delta - \\delta_0\\| - 2\\|\\beta - \\beta_0\\|\\|\\delta_0\\| - \\|\\delta - \\delta_0\\|^2\\right]\\mathbb{P}\\left(Q^{\\top}\\psi_0 > 0, Q^{\\top}\\psi < 0\\right) \\notag \\\\\n & \\qquad + \\left[\\|\\delta_0\\|^2 + \\|\\delta - \\delta_0\\|^2 + 2\\|\\delta - \\delta_0\\|\\|\\delta_0\\| + 2\\|\\beta - \\beta_0\\|\\|\\delta - \\delta_0\\| + 2\\|\\beta - \\beta_0\\|\\|\\delta_0\\|\\right]\\mathbb{P}\\left(Q^{\\top}\\psi > 0, Q^{\\top}\\psi_0 < 0\\right) \\notag \\\\\n\\label{eq:nsb3} & \\gtrsim \\|\\delta_0\\|^2 \\mathbb{P}\\left(\\text{sign}(Q^{\\top}\\psi) \\neq \\text{sign}(Q^{\\top}\\psi_0)\\right) \\gtrsim \\|\\psi - \\psi_0\\| \\hspace{0.2in} [\\text{By Assumption }\\ref{eq:assm}]\\,.\n\\end{align}\nCombining equation \\eqref{eq:nsb2} and \\eqref{eq:nsb3} we complete the proof of lower bound. 
The upper bound is relatively easier: note that by our previous calculation: \n\\begin{align*}\n \\mathbb{M}(\\theta) - \\mathbb{M}(\\theta_0) & = \\mathbb{E}\\left[\\left( X^{\\top}\\left(\\beta + \\delta\\mathds{1}_{Q^{\\top}\\psi > 0} - \\beta_0 - \\delta_0\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right)\\right)^2\\right] \\\\\n & \\le c_+\\mathbb{E}\\left[\\left\\|\\beta - \\beta_0 + \\delta\\mathds{1}_{Q^{\\top}\\psi > 0}- \\delta_0\\mathds{1}_{Q^{\\top}\\psi_0 > 0} \\right\\|^2\\right] \\\\\n & = c_+\\mathbb{E}\\left[\\left\\|\\beta - \\beta_0 + \\delta\\mathds{1}_{Q^{\\top}\\psi > 0}- \\delta\\mathds{1}_{Q^{\\top}\\psi_0 > 0} + \\delta\\mathds{1}_{Q^{\\top}\\psi_0 > 0} - \\delta_0\\mathds{1}_{Q^{\\top}\\psi_0 > 0} \\right\\|^2\\right] \\\\\n & \\lesssim \\left[\\|\\beta - \\beta_0\\|^2 + \\|\\delta - \\delta_0\\|^2 + \\mathbb{P}\\left(\\text{sign}(Q^{\\top}\\psi) \\neq \\text{sign}(Q^{\\top}\\psi_0)\\right)\\right] \\\\\n & \\lesssim \\left[\\|\\beta - \\beta_0\\|^2 + \\|\\delta - \\delta_0\\|^2 + \\|\\psi - \\psi_0\\|\\right] \\,.\n\\end{align*}\nThis completes the entire proof.\n\\end{proof}\n\n\n\n\n\\subsection{Proof of Lemma \\ref{lem:uniform_smooth}}\n\\begin{proof}\nThe difference of the two losses: \n\\begin{align*}\n\\left|\\mathbb{M}^s(\\theta) - \\mathbb{M}(\\theta)\\right| & = \\left|\\mathbb{E}\\left[\\left\\{-2\\left(Y_i - X^{\\top}\\beta\\right)X^{\\top}\\delta + (X^{\\top}\\delta)^2\\right\\}\\left(K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right) - \\mathds{1}_{Q^{\\top}\\psi > 0}\\right)\\right]\\right| \\\\\n& \\le \\mathbb{E}\\left[\\left|-2\\left(Y_i - X^{\\top}\\beta\\right)X^{\\top}\\delta + (X^{\\top}\\delta)^2\\right|\\left|K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right) - \\mathds{1}_{Q^{\\top}\\psi > 0}\\right|\\right] \\\\\n& := \\mathbb{E}\\left[m(Q)\\left|K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right) - \\mathds{1}_{Q^{\\top}\\psi > 0}\\right|\\right] \n\\end{align*}\nwhere $m(Q) = \\mathbb{E}\\left[\\left|-2\\left(Y_i - X^{\\top}\\beta\\right)X^{\\top}\\delta + (X^{\\top}\\delta)^2\\right| \\mid Q\\right]$. This function can be bounded as follows: \n\\begin{align*}\nm(Q) & = \\mathbb{E}\\left[\\left|-2\\left(Y_i - X^{\\top}\\beta\\right)X^{\\top}\\delta + (X^{\\top}\\delta)^2\\right| \\mid Q\\right] \\\\\n& \\le \\mathbb{E}[ (X^{\\top}\\delta)^2 \\mid Q] + 2\\mathbb{E}\\left[\\left|(\\beta - \\beta_0)^{\\top}XX^{\\top}\\delta\\right|\\right] + 2\\mathbb{E}\\left[\\left|\\delta_0^{\\top}XX^{\\top}\\delta\\right|\\right] \\\\\n& \\le c_+\\left(\\|\\delta\\|^2 + 2\\|\\beta - \\beta_0\\|\\|\\delta\\| + 2\\|\\delta\\|\\|\\delta_0\\|\\right) \\lesssim 1 \\,,\n\\end{align*}\nas our parameter space is compact. For the rest of the calculation define $\\eta = (\\tilde \\psi - \\tilde \\psi_0)\/\\sigma_n$. The definition of $\\eta$ may be changed from proof to proof, but it will be clear from the context. 
Therefore we have: \n\\begin{align*}\n\\left|\\mathbb{M}^s(\\theta) - \\mathbb{M}(\\theta)\\right| & \\lesssim \\mathbb{E}\\left[\\left|K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right) - \\mathds{1}_{Q^{\\top}\\psi > 0}\\right|\\right] \\\\\n& = \\mathbb{E}\\left[\\left| \\mathds{1}\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n} + \\eta^{\\top}\\tilde{Q} \\ge 0\\right) - K\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n} + \\eta^{\\top}\\tilde{Q}\\right)\\right|\\right] \\\\\n& = \\sigma_n \\int_{\\mathbb{R}^{p-1}} \\int_{-\\infty}^{\\infty} \\left | \\mathds{1}\\left(t \\ge 0\\right) - K\\left(t \\right)\\right | f_0(\\sigma_n (t-\\eta^{\\top}\\tilde{q}) | \\tilde{q}) \\ dt \\ dP(\\tilde{q}) \\\\\n& \\le f_+ \\sigma_n \\int_{-\\infty}^{\\infty} \\left | \\mathds{1}\\left(t \\ge 0\\right) - K\\left(t \\right)\\right | \\ dt \\lesssim \\sigma_n \\,.\n\\end{align*}\nwhere the integral over $t$ is finite follows from the definition of the kernel. This completes the proof. \n\\end{proof}\n\n\n\n\n\n\n\n\n\\subsection{Proof of Lemma \\ref{lem:pop_smooth_curvarture}}\n\\begin{proof}\nFirst note that we can write: \n\\begin{align}\n & \\mathbb{M}^s(\\theta) - \\mathbb{M}^s(\\theta_0^s) \\notag \\\\\n & = \\underbrace{\\mathbb{M}^s(\\theta) - \\mathbb{M}(\\theta)}_{\\ge -K_1\\sigma_n} + \\underbrace{\\mathbb{M}(\\theta) - \\mathbb{M}(\\theta_0)}_{\\underbrace{\\ge u_- d^2(\\theta, \\theta_0)}_{\\ge \\frac{u_-}{2} d^2(\\theta, \\theta_0^s) - u_-\\sigma_n }} + \\underbrace{\\mathbb{M}(\\theta_0) - \\mathbb{M}(\\theta_0^s)}_{\\ge - u_+ d^2(\\theta_0, \\theta_0^s) \\ge - u_+\\sigma_n} + \\underbrace{\\mathbb{M}(\\theta_0^s) - \\mathbb{M}^s(\\theta_0^s)}_{\\ge - K_1 \\sigma_n} \\notag \\\\\n & \\ge \\frac{u_-}{2}d^2(\\theta, \\theta_0^s) - (2K_1 + \\xi)\\sigma_n \\notag \\\\\n & \\ge \\frac{u_-}{2}\\left[\\|\\beta - \\beta^s_0\\|^2 + \\|\\delta - \\delta^s_0\\|^2 + \\|\\psi - \\psi^s_0\\|\\right] - (2K_1 + \\xi)\\sigma_n \\notag \\\\ \n & \\ge \\left[\\frac{u_-}{2}\\left(\\|\\beta - \\beta^s_0\\|^2 + \\|\\delta - \\delta^s_0\\|^2\\right) + \\frac{u_-}{4}\\|\\psi - \\psi^s_0\\|\\right]\\mathds{1}_{\\|\\psi - \\psi^s_0\\| > \\frac{4(2K_1 + \\xi)}{u_-}\\sigma_n} \\notag \\\\\n \\label{eq:lower_curv_smooth} & \\gtrsim \\left[\\|\\beta - \\beta^s_0\\|^2 + \\|\\delta - \\delta^s_0\\|^2 + \\|\\psi - \\psi^s_0\\|\\right]\\mathds{1}_{\\|\\psi - \\psi^s_0\\| > \\frac{4(2K_1 + \\xi)}{u_-}\\sigma_n}\n\\end{align}\nwhere $\\xi$ can be taken as close to $0$ as possible. Henceforth we set $\\mathcal{K} = 4(2K_1 + \\xi)\/u_-$. For the other part of the curvature (i.e. when $\\|\\psi - \\psi_0^s\\| \\le \\mathcal{K} \\sigma_n$) we start with a two step Taylor expansion of the smoothed loss function: \n\\begin{align*}\n \\mathbb{M}^s(\\theta) - \\mathbb{M}^s(\\theta_0^s) = \\frac12 (\\theta_0 - \\theta^0_s)^{\\top}\\nabla^2 \\mathbb{M}^s(\\theta^*)(\\theta_0 - \\theta^0_s) \n\\end{align*}\nRecall the definition of $\\mathbb{M}^s(\\theta)$: \n$$\n\\mathbb{M}^s_n(\\theta) = \\mathbb{E}\\left(Y - X^{\\top}\\beta\\right)^2 + \\mathbb{E} \\left\\{\\left[-2\\left(Y_i - X_i^{\\top}\\beta\\right)X_i^{\\top}\\delta + (X_i^{\\top}\\delta)^2\\right] K\\left(\\frac{Q_i^{\\top}\\psi}{\\sigma_n}\\right)\\right\\}\n$$\nThe partial derivates of $\\mathbb{M}^s(\\theta)$ with respect to $(\\beta, \\delta, \\psi)$ was derived in equation \\eqref{eq:beta_grad} - \\eqref{eq:psi_grad}. 
From there, we calculate the hessian of $\\mathbb{M}^s(\\theta)$: \n\\begin{align*}\n \\nabla_{\\beta\\beta}\\mathbb{M}^s(\\theta) & = 2\\Sigma_X \\\\\n \\nabla_{\\delta\\delta}\\mathbb{M}^s(\\theta) & = 2 \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q_i^{\\top}\\psi}{\\sigma_n}\\right)\\right] = 2 \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\right] \\\\\n \\nabla_{\\psi\\psi} \\mathbb{M}^s(\\theta) & = \\frac{1}{\\sigma_n^2}\\mathbb{E} \\left\\{\\left[-2\\left(Y_i - X_i^{\\top}\\beta\\right)X_i^{\\top}\\delta + (X_i^{\\top}\\delta)^2\\right]\\tilde Q_i\\tilde Q_i^{\\top} K''\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\right\\} \\\\\n \\nabla_{\\beta \\delta}\\mathbb{M}^s(\\theta) & = 2 \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q_i^{\\top}\\psi}{\\sigma_n}\\right)\\right] = 2 \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\right] \\\\\n \\nabla_{\\beta \\psi}\\mathbb{M}^s(\\theta) & = \\frac{2}{\\sigma_n}\\mathbb{E}\\left(g(Q)\\delta\\tilde Q^{\\top}K'\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\right) \\\\\n \\nabla_{\\delta \\psi} \\mathbb{M}^s(\\theta) & = \\frac{2}{\\sigma_n}\\mathbb{E} \\left\\{\\left[-X_i\\left(Y_i - X_i^{\\top}\\beta\\right) + X_iX_i^{\\top}\\delta\\right]\\tilde Q_i^{\\top} K'\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\right\\} \\,.\n\\end{align*}\nwhere we use $\\tilde \\eta$ for a generic notation for $(\\tilde \\psi - \\tilde \\psi_0)\/\\sigma_n$. For notational simplicity, we define $\\gamma = (\\beta, \\delta)$ and $\\nabla^2\\mathbb{M}^{s, \\gamma}(\\theta)$, $\\nabla^2\\mathbb{M}^{s, \\gamma \\psi}(\\theta), \\nabla^2\\mathbb{M}^{s, \\psi \\psi}(\\theta)$ to be corresponding blocks of the hessian matrix. We have: \n\\begin{align}\n \\mathbb{M}^s(\\theta) - \\mathbb{M}^s(\\theta_0^s) & = \\frac12 (\\theta - \\theta^0_s)^{\\top}\\nabla^2 \\mathbb{M}^s(\\theta^*)(\\theta - \\theta^0_s) \\notag \\\\\n & = \\frac12 (\\gamma - \\gamma_0^s)^{\\top}\\nabla^2 \\mathbb{M}^{s, \\gamma}(\\theta^*)(\\gamma - \\gamma^0_s) + (\\gamma - \\gamma_0^s)^{\\top}\\nabla^2 \\mathbb{M}^{s, \\gamma \\psi}(\\theta^*)(\\psi - \\psi^0_s) \\notag \\\\\n & \\qquad \\qquad \\qquad \\qquad + \\frac12(\\psi - \\psi_0^s)^{\\top}\\nabla^2 \\mathbb{M}^{s, \\psi \\psi}(\\theta^*)(\\psi - \\psi^0_s) \\notag \\\\\n \\label{eq:hessian_1} & := \\frac12 \\left(T_1 + 2T_2 + T_3\\right)\n\\end{align}\nNote that we can write: \n\\begin{align*}\n T_1 & = (\\gamma - \\gamma_0^s)^{\\top}\\nabla^2 \\mathbb{M}^{s, \\gamma}(\\tilde \\theta)(\\gamma - \\gamma^0_s) \\\\\n & = (\\gamma - \\gamma_0^s)^{\\top}\\nabla^2 \\mathbb{M}^{s, \\gamma}(\\theta_0)(\\gamma - \\gamma^0_s) + (\\gamma - \\gamma_0^s)^{\\top}\\left[\\nabla^2 \\mathbb{M}^{s, \\gamma}(\\tilde \\theta) - \\nabla^2 \\mathbb{M}^{s, \\gamma}(\\theta_0)\\right](\\gamma - \\gamma^0_s) \n\\end{align*}\nThe operator norm of the difference of two hessians can be bounded as: \n$$\n\\left\\|\\nabla^2 \\mathbb{M}^{s, \\gamma}(\\theta^*) - \\nabla^2 \\mathbb{M}^{s, \\gamma}(\\theta_0)\\right\\|_{op} = O(\\sigma_n) \\,.\n$$\nfor any $\\theta^*$ in a neighborhood of $\\theta_0^s$ with $\\|\\psi - \\psi_0^s\\| \\le \\mathcal{K} \\sigma_n$. 
To prove this note that for any $\\theta$: \n$$\n\\nabla^2 \\mathbb{M}^{s, \\gamma}(\\theta^*) - \\nabla^2 \\mathbb{M}^{s, \\gamma}(\\theta_0) = 2\\begin{pmatrix}0 & A \\\\\nA & A\\end{pmatrix} = \\begin{pmatrix}0 & 1 \\\\ 1 & 1\\end{pmatrix} \\otimes A \n$$\nwhere: \n$$\nA = \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q_i^{\\top}\\psi}{\\sigma_n}\\right)\\right] - \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n}\\right)\\right]\n$$\nTherefore it is enough to show $\\|A\\|_{op} = O(\\sigma_n)$. Towards that direction: \n\\begin{align*}\nA & = \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q_i^{\\top}\\psi}{\\sigma_n}\\right)\\right] - \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n}\\right)\\right] \\\\\n& = \\sigma_n \\int \\int g(\\sigma_n t - \\tilde q^{\\top}\\tilde \\psi_0)\\left(K(t + \\tilde q^{\\top}\\eta) - K(t) \\right) f_0(\\sigma_n t \\mid \\tilde q) \\ f(\\tilde q) \\ dt \\ d\\tilde q \\\\\n& = \\sigma_n \\left[\\int \\int g(- \\tilde q^{\\top}\\tilde \\psi_0)\\left(K(t + \\tilde q^{\\top}\\eta) - K(t) \\right) f_0(0 \\mid \\tilde q) \\ f(\\tilde q) \\ dt \\ d\\tilde q + R \\right] \\\\\n& = \\sigma_n \\left[\\int \\int g(- \\tilde q^{\\top}\\tilde \\psi_0)f_0(0 \\mid \\tilde q) \\int_t^{t + \\tilde q^{\\top}\\eta}K'(s) \\ ds \\ f(\\tilde q) \\ dt \\ d\\tilde q + R \\right] \\\\\n& = \\sigma_n \\left[\\int g(- \\tilde q^{\\top}\\tilde \\psi_0)f_0(0 \\mid \\tilde q) \\int_{-\\infty}^{\\infty}K'(s) \\int_{s-\\tilde q^{\\top}\\eta}^s \\ dt \\ ds \\ f(\\tilde q)\\ d\\tilde q + R \\right] \\\\\n& = \\sigma_n \\left[\\int g(- \\tilde q^{\\top}\\tilde \\psi_0)f_0(0 \\mid \\tilde q)\\tilde q^{\\top}\\eta \\ f(\\tilde q)\\ d\\tilde q + R \\right] \\\\\n& = \\sigma_n \\left[\\mathbb{E}\\left[g(- \\tilde Q^{\\top}\\tilde \\psi_0, \\tilde Q)f_0(0 \\mid \\tilde Q)\\tilde Q^{\\top}\\eta\\right] + R \\right]\n\\end{align*}\nusing the fact that $\\left\\|\\mathbb{E}\\left[g(- \\tilde Q^{\\top}\\tilde \\psi_0, \\tilde Q)f_0(0 \\mid \\tilde Q)\\tilde Q^{\\top}\\eta\\right]\\right\\|_{op} = O(1)$ and $\\|R\\|_{op} = O(\\sigma_n)$ we conclude the claim. From the above claim we conclude: \n\\begin{equation}\n \\label{eq:hessian_gamma}\n T_1 = (\\gamma - \\gamma_0^s)^{\\top}\\nabla^2 \\mathbb{M}^{s, \\gamma}(\\theta^*)(\\gamma - \\gamma^s_0) \\ge \\|\\gamma - \\gamma^s_0\\|^2(1 - O(\\sigma_n)) \\ge \\frac12 \\|\\gamma - \\gamma_0^s\\|^2\n\\end{equation}\nfor all large $n$. \n\\\\\\\\\n\\noindent \nWe next deal with the cross term $T_2$ in equation \\eqref{eq:hessian_1}. 
Towards that end first note that: \n\\begin{align*}\n & \\frac{1}{\\sigma_n}\\mathbb{E}\\left((g(Q)\\delta)\\tilde Q^{\\top}K'\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\eta^*\\right)\\right) \\\\\n & = \\int_{\\mathbb{R}^{(p-1)}}\\left[ \\int_{-\\infty}^{\\infty} \\left(g\\left(\\sigma_nt - \\tilde q^{\\top}\\tilde \\psi_0, \\tilde q\\right)\\delta\\right) K'\\left(t + \\tilde q^{\\top}\\eta^*\\right) f_0(\\sigma_n t \\mid \\tilde q) \\ dt\\right] \\tilde q^{\\top} \\ f(\\tilde q) \\ d\\tilde q \\\\\n & = \\int_{\\mathbb{R}^{(p-1)}}\\left[ \\int_{-\\infty}^{\\infty} \\left(g\\left(- \\tilde q^{\\top}\\tilde \\psi_0, \\tilde q\\right)\\delta\\right) K'\\left(t + \\tilde q^{\\top}\\eta^*\\right) f_0(0 \\mid \\tilde q) \\ dt\\right] \\tilde q^{\\top} \\ f(\\tilde q) \\ d\\tilde q + R_1\\\\\n & = \\mathbb{E}\\left[\\left(g\\left( - \\tilde Q^{\\top}\\tilde \\psi_0, \\tilde Q\\right)\\delta\\right)\\tilde Q^{\\top}f_0(0 \\mid \\tilde Q)\\right] + R_1\n\\end{align*}\nwhere the remainder term $R_1$ can be further decomposed $R_1 = R_{11} + R_{12} + R_{13}$ with: \n\\begin{align*}\n \\left\\|R_{11}\\right\\| & = \\left\\|\\int_{\\mathbb{R}^{(p-1)}}\\left[ \\int_{-\\infty}^{\\infty} \\left(g\\left(- \\tilde q^{\\top}\\tilde \\psi_0, \\tilde q\\right)\\delta\\right) K'\\left(t + \\tilde q^{\\top}\\eta^*\\right) (f_0(\\sigma_nt\\mid \\tilde q) - f_0(0 \\mid \\tilde q)) \\ dt\\right] \\tilde q^{\\top} \\ f(\\tilde q) \\ d\\tilde q\\right\\| \\\\\n & \\le \\left\\|\\int_{\\mathbb{R}^{(p-1)}}\\left[ \\int_{-\\infty}^{\\infty} \\left\\|g\\left(- \\tilde q^{\\top}\\tilde \\psi_0, \\tilde q\\right)\\right\\|_{op}\\|\\delta\\| \\left|K'\\left(t + \\tilde q^{\\top}\\eta^*\\right)\\right| \\left|f_0(\\sigma_nt\\mid \\tilde q) - f_0(0 \\mid \\tilde q)\\right| \\ dt\\right] \\left|\\tilde q\\right| \\ f(\\tilde q) \\ d\\tilde q\\right\\| \\\\\n & \\le \\sigma_n \\dot{f}^+ c_+ \\|\\delta\\| \\int_{\\mathbb{R}^{(p-1)}} \\|\\tilde q\\| \\int_{-\\infty}^{\\infty} |t| \\left|K'\\left(t + \\tilde q^{\\top}\\eta^*\\right)\\right| \\ dt \\ f(\\tilde q) \\ d\\tilde q \\\\\n & \\le \\sigma_n \\dot{f}^+ c_+ \\|\\delta\\| \\int_{\\mathbb{R}^{(p-1)}} \\|\\tilde q\\| \\int_{-\\infty}^{\\infty} |t - \\tilde q^{\\top}\\eta^*| \\left|K'\\left(t\\right)\\right| \\ dt \\ f(\\tilde q) \\ d\\tilde q \\\\\n & \\le \\sigma_n \\dot{f}^+ c_+ \\|\\delta\\| \\left[\\int_{\\mathbb{R}^{(p-1)}} \\|\\tilde q\\| \\int_{-\\infty}^{\\infty} |t| \\left|K'\\left(t\\right)\\right| \\ dt \\ f(\\tilde q) \\ d\\tilde q \\right. \\\\\n & \\qquad \\qquad \\qquad \\left. 
+ \\int_{\\mathbb{R}^{(p-1)}} \\|\\tilde q\\|^2 \\|\\eta^*\\| \\int_{-\\infty}^{\\infty} |K'(t)| \\ dt \\ f(\\tilde q) \\ d\\tilde q\\right] \\\\\n & \\le \\sigma_n \\dot{f}^+ c_+ \\|\\delta\\| \\left[\\int_{\\mathbb{R}^{(p-1)}} \\|\\tilde q\\| \\int_{-\\infty}^{\\infty} |t| \\left|K'\\left(t\\right)\\right| \\ dt \\ f(\\tilde q) \\ d\\tilde q + \\mathcal{K}\\int_{\\mathbb{R}^{(p-1)}} \\|\\tilde q\\|^2 \\int_{-\\infty}^{\\infty} |K'(t)| \\ dt \\ f(\\tilde q) \\ d\\tilde q\\right] \\\\\n & \\lesssim \\sigma_n \\,.\n\\end{align*}\nwhere the last bound follows from our assumptions using the fact that: \n\\begin{align*}\n & \\|R_{12}\\| \\\\\n &= \\left\\|\\int_{\\mathbb{R}^{(p-1)}}\\left[ \\int_{-\\infty}^{\\infty} \\left(\\left(g\\left(\\sigma_n t- \\tilde q^{\\top}\\tilde \\psi_0, \\tilde q\\right) - g\\left(- \\tilde q^{\\top}\\tilde \\psi_0, \\tilde q\\right)\\right)\\delta\\right) K'\\left(t + \\tilde q^{\\top} \\eta^*\\right) f_0(0 \\mid \\tilde q) \\ dt\\right] \\tilde q^{\\top} \\ f(\\tilde q) \\ d\\tilde q\\right\\| \\\\\n & \\le \\int \\|\\tilde q\\|\\|\\delta\\|f_0(0 \\mid \\tilde q) \\int_{-\\infty}^{\\infty} \\left\\|g\\left(\\sigma_n t- \\tilde q^{\\top}\\tilde \\psi_0, \\tilde q\\right) - g\\left(- \\tilde q^{\\top}\\tilde \\psi_0, \\tilde q\\right) \\right\\|_{op}\\left|K'\\left(t + \\tilde q^{\\top} \\eta^*\\right)\\right| \\ dt \\ f(\\tilde q) \\ d\\tilde q \\\\\n & \\le \\dot{c}_+ \\sigma_n \\int \\|\\tilde q\\|\\|\\delta\\|f_0(0 \\mid \\tilde q)\\dot \\int_{-\\infty}^{\\infty} |t| \\left|K'\\left(t + \\tilde q^{\\top}\\tilde \\eta\\right)\\right| \\ dt \\ f(\\tilde q) \\ d\\tilde q \\hspace{0.2in} [\\text{Assumption }\\ref{eq:assm}]\\\\\n & \\lesssim \\sigma_n \\,.\n\\end{align*}\nThe other remainder term $R_{13}$ is the higher order term and can be shown to be $O(\\sigma_n^2)$ using same techniques. This implies for all large $n$: \n\\begin{align*}\n \\left\\|\\nabla_{\\beta \\psi}\\mathbb{M}^s(\\theta)\\right\\|_{op} & = O(1) \\,.\n\\end{align*}\nand similar calculation yields $ \\left\\|\\nabla_{\\delta \\psi}\\mathbb{M}^s(\\theta)\\right\\|_{op} = O(1)$. 
Using this we have: \n\\begin{align}\n T_2 & = (\\gamma - \\gamma_0^s)^{\\top}\\nabla^2 \\mathbb{M}^{s, \\gamma \\psi}(\\tilde \\theta)(\\psi - \\psi^0_s) \\notag \\\\\n & = (\\beta - \\beta_0^s)^{\\top}\\nabla_{\\beta \\psi}^2 \\mathbb{M}^{s}(\\tilde \\theta)(\\psi - \\psi^0_s) + (\\delta - \\delta_0^s)^{\\top}\\nabla_{\\delta \\psi}^2 \\mathbb{M}^{s}(\\tilde \\theta)(\\psi - \\psi^0_s) \\notag \\\\\n & \\ge - C\\left[\\|\\beta - \\beta_0^s\\| + \\|\\delta - \\delta_0^s\\| \\right]\\|\\psi - \\psi^0_s\\| \\notag \\\\\n & \\ge -C \\sqrt{\\sigma_n}\\left[\\|\\beta - \\beta_0^s\\| + \\|\\delta - \\delta_0^s\\| \\right]\\frac{\\|\\psi - \\psi^0_s\\| }{\\sqrt{\\sigma_n}} \\notag \\\\\n \\label{eq:hessian_cross} & \\gtrsim - \\sqrt{\\sigma_n}\\left(\\|\\beta - \\beta_0^s\\|^2 + \\|\\delta - \\delta_0^s\\|^2 +\\frac{\\|\\psi - \\psi^0_s\\|^2 }{\\sigma_n} \\right)\n\\end{align}\nNow for $T_3$ note that: \n\\allowdisplaybreaks\n\\begin{align*}\n& \\sigma_n \\nabla_{\\psi\\psi} \\mathbb{M}^s_n(\\theta) \\\\\n& = \\frac{1}{\\sigma_n}\\mathbb{E} \\left\\{\\left[-2\\left(Y_i - X_i^{\\top}\\beta\\right)X_i^{\\top}\\delta + (X_i^{\\top}\\delta)^2\\right]\\tilde Q_i\\tilde Q_i^{\\top} K''\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\right\\} \\\\\n& = \\frac{1}{\\sigma_n}\\mathbb{E} \\left\\{\\left[-2\\left(Y_i - X_i^{\\top}\\beta\\right)X_i^{\\top}\\delta \\right]\\tilde Q_i\\tilde Q_i^{\\top} K''\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\right\\} \\\\\n& \\qquad \\qquad \\qquad + \\frac{1}{\\sigma_n}\\mathbb{E} \\left\\{(\\delta^{\\top}g(Q) \\delta)\\tilde Q_i\\tilde Q_i^{\\top} K''\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\right\\} \\\\\n& = \\frac{1}{\\sigma_n}\\mathbb{E} \\left\\{\\left[-2 X_i^{\\top}\\left(\\beta_0 -\\beta\\right)X_i^{\\top}\\delta - 2(X_i^{\\top}\\delta_0)(X_i^{\\top}\\delta)\\mathds{1}_{Q_i^{\\top}\\psi_0 > 0}\\right]\\tilde Q_i\\tilde Q_i^{\\top} K''\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\right\\} \\\\\n& \\qquad \\qquad \\qquad + \\frac{1}{\\sigma_n}\\mathbb{E} \\left\\{(\\delta^{\\top}g(Q) \\delta)\\tilde Q_i\\tilde Q_i^{\\top} K''\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\right\\} \\\\\n& = \\frac{-2}{\\sigma_n}\\mathbb{E} \\left\\{((\\beta_0 - \\beta)^{\\top}g(Q) \\delta)\\tilde Q_i\\tilde Q_i^{\\top} K''\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\right\\} \\\\\n& \\qquad \\qquad \\qquad + \\frac{-2}{\\sigma_n}\\mathbb{E} \\left\\{(\\delta_0^{\\top}g(Q) \\delta)\\tilde Q_i\\tilde Q_i^{\\top} K''\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\mathds{1}_{Q_i^{\\top}\\psi_0 > 0}\\right\\} \\\\\n& \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad + \\frac{1}{\\sigma_n}\\mathbb{E} \\left\\{(\\delta^{\\top}g(Q) \\delta)\\tilde Q_i\\tilde Q_i^{\\top} K''\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\right\\} \\\\\n& = \\underbrace{\\frac{-2}{\\sigma_n}\\mathbb{E} \\left\\{((\\beta_0 - \\beta)^{\\top}g(Q)\\delta)\\tilde Q_i\\tilde Q_i^{\\top} K''\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\right\\}}_{M_1} \\\\\n& \\qquad \\qquad \\qquad + \\underbrace{\\frac{-2}{\\sigma_n}\\mathbb{E} \\left\\{(\\delta_0^{\\top}g(Q) \\delta_0)\\tilde Q_i\\tilde Q_i^{\\top} K''\\left(\\frac{Q_i^{\\top}\\psi_0 
}{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\mathds{1}_{Q_i^{\\top}\\psi_0 > 0}\\right\\}}_{M_2} \\\\\n& \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad +\n\\underbrace{\\frac{-2}{\\sigma_n}\\mathbb{E} \\left\\{(\\delta_0^{\\top} g(Q) (\\delta - \\delta_0))\\tilde Q_i\\tilde Q_i^{\\top} K''\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\mathds{1}_{Q_i^{\\top}\\psi_0 > 0}\\right\\}}_{M_3} \\\\\n& \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad + \\underbrace{\\frac{1}{\\sigma_n}\\mathbb{E} \\left\\{(\\delta^{\\top}g(Q) \\delta)\\tilde Q_i\\tilde Q_i^{\\top} K''\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\right\\}}_{M_4} \\\\\n& := M_1 + M_2 + M_3 + M_4\n\\end{align*}\nWe next show that $M_1$ and $M_4$ are $O(\\sigma_n)$. Towards that end note that for any two vectors $v_1, v_2$: \n\\begin{align*}\n & \\frac{1}{\\sigma_n}\\mathbb{E} \\left\\{(v_1^{\\top}g(Q)v_2)\\tilde Q_i\\tilde Q_i^{\\top} K''\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\right\\} \\\\\n & = \\int \\tilde q \\tilde q^{\\top} \\int_{-\\infty}^{\\infty}(v_1^{\\top}g(\\sigma_nt - \\tilde q^{\\top}\\tilde \\eta, \\tilde q)v_2) K''(t + \\tilde q^{\\top}\\tilde \\eta) f(\\sigma_nt \\mid \\tilde q) \\ dt \\ f(\\tilde q) \\ d\\tilde q \\\\\n & = \\int \\tilde q \\tilde q^{\\top} (v_1^{\\top}g( - \\tilde q^{\\top}\\tilde \\eta, \\tilde q)v_2)f(0 \\mid \\tilde q) f(\\tilde q) \\ d\\tilde q \\cancelto{0}{\\int_{-\\infty}^{\\infty} K''(t) \\ dt} + R = R\n\\end{align*}\nas $\\int K''(t) \\ dt = 0$ follows from our choice of kernel $K(x) = \\Phi(x)$. Similar calculation as in the case of analyzing the remainder of $T_2$ yields $\\|R\\|_{op} = O(\\sigma_n)$.\n\\noindent\nThis immediately implies $\\|M_1\\|_{op} = O(\\sigma_n)$ and $\\|M_4\\|_{op} = O(\\sigma_n)$. Now for $M_2$: \n\\begin{align}\nM_2 & = \\frac{-2}{\\sigma_n}\\mathbb{E} \\left\\{(\\delta_0^{\\top}g(Q) \\delta_0)\\tilde Q_i\\tilde Q_i^{\\top} K''\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\mathds{1}_{Q_i^{\\top}\\psi_0 > 0}\\right\\} \\notag \\\\\n& = -2\\int \\int (\\delta_0^{\\top}g(\\sigma_n t - \\tilde q^{\\top}\\tilde \\psi_0) \\delta_0)\\tilde q\\tilde q^{\\top} K''\\left(t + \\tilde q^{\\top}\\eta^*\\right)\\mathds{1}_{t > 0} f_0(\\sigma_n t \\mid \\tilde q) \\ dt f(\\tilde q) \\ d\\tilde q \\notag \\\\\n& = -2\\int (\\delta_0^{\\top}g(- \\tilde q^{\\top}\\tilde \\psi_0) \\delta_0)\\tilde q\\tilde q^{\\top} f_0(0 \\mid \\tilde q) \\int_{-\\infty}^{\\infty} K''\\left(t + \\tilde q^{\\top}\\eta^*\\right)\\mathds{1}_{t > 0} \\ dt f(\\tilde q) \\ d\\tilde q + R \\notag \\\\\n\\label{eq:M_2_double_deriv} & = 2\\mathbb{E}\\left[(\\delta_0^{\\top}g(- \\tilde Q^{\\top}\\tilde \\psi_0) \\delta_0)\\tilde \nQ\\tilde Q^{\\top} f_0(0 \\mid \\tilde Q) K'(\\tilde Q^{\\top}\\eta^*)\\right] + R \n\\end{align}\nwhere the remainder term R is $O_p(\\sigma_n)$ can be established as follows: \n\\begin{align*}\nR & = -2\\left[\\int \\int (\\delta_0^{\\top}g(\\sigma_n t - \\tilde q^{\\top}\\tilde \\psi_0) \\delta_0)\\tilde q\\tilde q^{\\top} K''\\left(t + \\tilde q^{\\top}\\eta^*\\right)\\mathds{1}_{t > 0} f_0(\\sigma_n t \\mid \\tilde q) \\ dt f(\\tilde q) \\ d\\tilde q \\right. \\\\\n& \\qquad \\qquad - \\left. 
\\int (\\delta_0^{\\top}g(- \\tilde q^{\\top}\\tilde \\psi_0) \\delta_0)\\tilde q\\tilde q^{\\top} f_0(0 \\mid \\tilde q) \\int_{-\\infty}^{\\infty} K''\\left(t + \\tilde q^{\\top}\\eta^*\\right)\\mathds{1}_{t > 0} \\ dt f(\\tilde q) \\ d\\tilde q \\right] \\\\\n& = -2\\left\\{\\left[\\int \\int (\\delta_0^{\\top}g(\\sigma_n t - \\tilde q^{\\top}\\tilde \\psi_0) \\delta_0)\\tilde q\\tilde q^{\\top} K''\\left(t + \\tilde q^{\\top}\\eta^*\\right)\\mathds{1}_{t > 0} f_0(\\sigma_n t \\mid \\tilde q) \\ dt f(\\tilde q) \\ d\\tilde q \\right. \\right. \\\\\n& \\qquad \\qquad - \\left. \\left. \\int (\\delta_0^{\\top}g(\\sigma_n t - \\tilde q^{\\top}\\tilde \\psi_0) \\delta_0) \\tilde q\\tilde q^{\\top} f_0(0 \\mid \\tilde q) \\int_{-\\infty}^{\\infty} K''\\left(t + \\tilde q^{\\top}\\eta^*\\right)\\mathds{1}_{t > 0} \\ dt f(\\tilde q) \\ d\\tilde q \\right] \\right. \\\\\n& \\left. + \\left[\\int (\\delta_0^{\\top}g(\\sigma_n t - \\tilde q^{\\top}\\tilde \\psi_0) \\delta_0)\\tilde q\\tilde q^{\\top} f_0(0 \\mid \\tilde q) \\int_{-\\infty}^{\\infty} K''\\left(t + \\tilde q^{\\top}\\eta^*\\right)\\mathds{1}_{t > 0} \\ dt f(\\tilde q) \\ d\\tilde q \\right. \\right. \\\\\n& \\qquad \\qquad \\left. \\left. -\\int (\\delta_0^{\\top}g(- \\tilde q^{\\top}\\tilde \\psi_0) \\delta_0)\\tilde q\\tilde q^{\\top} f_0(0 \\mid \\tilde q) \\int_{-\\infty}^{\\infty} K''\\left(t + \\tilde q^{\\top}\\eta^*\\right)\\mathds{1}_{t > 0} \\ dt f(\\tilde q) \\ d\\tilde q \\right]\\right\\} \\\\\n& = -2(R_1 + R_2) \\,.\n\\end{align*}\nFor $R_1$: \n\\begin{align*}\n\\left\\|R_1\\right\\|_{op} & = \\left\\|\\left[\\int \\int (\\delta_0^{\\top}g(\\sigma_n t - \\tilde q^{\\top}\\tilde \\psi_0) \\delta_0)\\tilde q\\tilde q^{\\top} K''\\left(t + \\tilde q^{\\top}\\eta^*\\right)\\mathds{1}_{t > 0} f_0(\\sigma_n t \\mid \\tilde q) \\ dt f(\\tilde q) \\ d\\tilde q \\right. \\right. \\,.\\\\\n& \\qquad \\qquad - \\left. \\left. \\int (\\delta_0^{\\top}g(\\sigma_n t - \\tilde q^{\\top}\\tilde \\psi_0) \\delta_0)\\tilde q\\tilde q^{\\top} f_0(0 \\mid \\tilde q) \\int_{-\\infty}^{\\infty} K''\\left(t + \\tilde q^{\\top}\\eta^*\\right)\\mathds{1}_{t > 0} \\ dt f(\\tilde q) \\ d\\tilde q \\right] \\right\\|_{op} \\\\\n& \\le c_+ \\int \\int \\|\\tilde q\\|^2 |K''\\left(t + \\tilde q^{\\top}\\eta^*\\right)| |f_0(\\sigma_n t \\mid \\tilde q) -f_0(0\\mid \\tilde q)| \\ dt \\ f(\\tilde q) \\ d\\tilde q \\\\\n& \\le c_+ F_+\\sigma_n \\int \\|\\tilde q\\|^2 \\int |t| |K''\\left(t + \\tilde q^{\\top}\\eta^*\\right)| \\ dt \\ f(\\tilde q) \\ d\\tilde q \\\\\n& = c_+ F_+\\sigma_n \\int \\|\\tilde q \\|^2 \\int |t - \\tilde q^{\\top}\\eta^*| |K''\\left(t\\right)| \\ dt \\ f(\\tilde q) \\ d\\tilde q \\\\\n& \\le c_+ F_+ \\sigma_n \\left[\\mathbb{E}[\\|\\tilde Q\\|^2]\\int |t||K''(t)| \\ dt + \\|\\eta^*\\|\\mathbb{E}[\\|\\tilde Q\\|^3]\\int |K''(t)| \\ dt\\right] = O(\\sigma_n) \\,.\n\\end{align*}\nand similarly for $R_2$: \n\\begin{align*}\n\\|R_2\\|_{op} & = \\left\\|\\left[\\int (\\delta_0^{\\top}g(\\sigma_n t - \\tilde q^{\\top}\\tilde \\psi_0) \\delta_0)\\tilde q\\tilde q^{\\top} f_0(0 \\mid \\tilde q) \\int_{-\\infty}^{\\infty} K''\\left(t + \\tilde q^{\\top}\\eta^*\\right)\\mathds{1}_{t > 0} \\ dt f(\\tilde q) \\ d\\tilde q \\right. \\right. \\\\\n& \\qquad \\qquad \\left. \\left. 
-\\int (\\delta_0^{\\top}g(- \\tilde q^{\\top}\\tilde \\psi_0) \\delta_0)\\tilde q\\tilde q^{\\top} f_0(0 \\mid \\tilde q) \\int_{-\\infty}^{\\infty} K''\\left(t + \\tilde q^{\\top}\\eta^*\\right)\\mathds{1}_{t > 0} \\ dt f(\\tilde q) \\ d\\tilde q \\right]\\right\\|_{op} \\\\\n& \\le F_+ \\|\\delta_0\\|^2 \\int \\left\\|g(\\sigma_n t - \\tilde q^{\\top}\\tilde \\psi_0) - g( - \\tilde q^{\\top}\\tilde \\psi_0) \\right\\|_{op} \\|\\tilde q\\|^2 \\int_{-\\infty}^{\\infty} |K''\\left(t + \\tilde q^{\\top}\\eta^*\\right)| \\ dt \\\\\n& \\le G_+ F_+ \\sigma_n \\int \\|\\tilde q\\|^2 \\int_{-\\infty}^{\\infty} |t||K''\\left(t + \\tilde q^{\\top}\\eta^*\\right)| \\ dt = O(\\sigma_n) \\,.\n\\end{align*}\nTherefore from \\eqref{eq:M_2_double_deriv} we conclude: \n\\begin{equation}\nM_2 = 2\\mathbb{E}\\left[(\\delta_0^{\\top}g(- \\tilde Q^{\\top}\\tilde \\psi_0) \\delta_0)\\tilde \nQ\\tilde Q^{\\top} f_0(0 \\mid \\tilde Q) K'(\\tilde Q^{\\top}\\eta^*)\\right] + O(\\sigma_n) \\,.\n\\end{equation}\nSimilar calculation for $M_3$ yields: \n\\begin{equation*}\nM_3 = 2\\mathbb{E}\\left[(\\delta_0^{\\top}g(- \\tilde Q^{\\top}\\tilde \\psi_0)(\\delta - \\delta_0))\\tilde \nQ\\tilde Q^{\\top} f_0(0 \\mid \\tilde Q) K'(\\tilde Q^{\\top}\\eta^*)\\right] + O(\\sigma_n) \\,.\n\\end{equation*}\ni.e. \n\\begin{equation}\n\\|M_3\\|_{op} \\le c_+ \\mathbb{E}\\left[\\|\\tilde Q\\|^2f_0(0 \\mid \\tilde Q) K'(\\tilde Q^{\\top}\\eta^*)\\right]\\|\\delta_0\\| \\|\\delta - \\delta_0\\| \\,.\n\\end{equation}\nNow we claim that for any $\\mathcal{K} < \\infty$, $\\lambda_{\\min} (M_2) > 0$ for all $\\|\\eta^*\\| \\le \\mathcal{K}$. Towards that end, define a function $\\lambda:B_{\\mathbb{R}^{2d}}(1) \\times B_{\\mathbb{R}^{2d}}(\\mathcal{K}) \\to \\mathbb{R}_+$ as: \n$$\n\\lambda: (v, \\eta) \\mapsto 2\\mathbb{E}\\left[(\\delta_0^{\\top}g(- \\tilde Q^{\\top}\\tilde \\psi_0) \\delta_0) \n\\left(v^{\\top}\\tilde Q\\right) ^2 f_0(0 \\mid \\tilde Q) K'(\\tilde Q^{\\top}\\eta)\\right]\n$$\nClearly $\\lambda \\ge 0$ and is continuous on a compact set. Hence its infimum must be attained. Suppose the infimum is $0$, i.e. there exists $(v^*, \\eta^*)$ such that: \n$$\n\\mathbb{E}\\left[(\\delta_0^{\\top}g(- \\tilde Q^{\\top}\\tilde \\psi_0) \\delta_0) \n\\left(v^{*^{\\top}}\\tilde Q\\right) ^2 f_0(0 \\mid \\tilde Q) K'(\\tilde Q^{\\top}\\eta^*)\\right] = 0 \\,.\n$$\nas $\\lambda_{\\min}(g(\\dot)) \\ge c_+$, we must have $\\left(v^{*^{\\top}}\\tilde Q\\right) ^2 f_0(0 \\mid \\tilde Q) K'(\\tilde Q^{\\top}\\eta^*) = 0$ almost surely. But from our assumption, $\\left(v^{*^{\\top}}\\tilde Q\\right) ^2 > 0$ and $K'(\\tilde Q^{\\top}\\eta^*) > 0$ almost surely, which implies $f_0(0 \\mid \\tilde q) = 0$ almost surely, which is a contradiction. 
Hence there exists $\lambda_- > 0$ such that:
$$
\lambda_{\min} (M_2) \ge \lambda_- > 0 \ \ \forall \ \ \|\psi - \psi_0^s\| \le \mathcal{K}\sigma_n \,.
$$
Hence we have:
$$
\lambda_{\min}\left(\sigma_n \nabla_{\psi \psi}\mathbb{M}^s(\theta)\right) \ge \frac{\lambda_-}{2}(1 - O(\sigma_n))
$$
for all $\theta$ such that $d_*(\theta, \theta_0^s) \le {\epsilon}$, and consequently:
\begin{align}
\label{eq:hessian_psi}
(\psi - \psi_0^s)^{\top}\nabla_{\psi \psi}\mathbb{M}^s(\tilde \theta) (\psi - \psi_0^s) = \frac{1}{\sigma_n}(\psi - \psi_0^s)^{\top}\left(\sigma_n \nabla_{\psi \psi}\mathbb{M}^s(\tilde \theta)\right) (\psi - \psi_0^s) \gtrsim \frac{\|\psi - \psi^s_0\|^2}{\sigma_n} \left(1- O(\sigma_n)\right) \,.
\end{align}
From equations \eqref{eq:hessian_gamma}, \eqref{eq:hessian_cross} and \eqref{eq:hessian_psi} we have:
\begin{align*}
& \frac12 (\theta - \theta^s_0)^{\top}\nabla^2 \mathbb{M}^s(\theta^*)(\theta - \theta^s_0) \\
& \qquad \qquad \gtrsim \left[\|\beta - \beta^s_0\|^2 + \|\gamma - \gamma^s_0\|^2 + \frac{\|\psi - \psi^s_0\|^2}{\sigma_n}\right]\mathds{1}_{\|\psi - \psi_0^s\| \le \mathcal{K} \sigma_n} \,.
\end{align*}
This, along with equation \eqref{eq:lower_curv_smooth}, concludes the proof.
\end{proof}











\subsection{Proof of Lemma \ref{asymp-normality}}
We start by proving an analogue of Lemma 2 of \cite{seo2007smoothed}: we show that
\begin{align*}
\lim_{n \to \infty} \mathbb{E}\left[ \sqrt{n\sigma_n}\nabla \mathbb{M}_n^{s, \psi}(\theta_0)\right] & = 0 \,,\\
\lim_{n \to \infty} {\sf var}\left[ \sqrt{n\sigma_n}\nabla \mathbb{M}_n^{s, \psi}(\theta_0)\right] & = 4V^{\psi} \,,
\end{align*}
for some matrix $V^{\psi}$ which will be specified later in the proof. To prove the limit of the expectation:
\begin{align*}
& \mathbb{E}\left[ \sqrt{n\sigma_n}\nabla \mathbb{M}_n^{s, \psi}(\theta_0)\right] \\
& = \sqrt{\frac{n}{\sigma_n}}\mathbb{E}\left[\left\{(Y - X^{\top}(\beta_0 + \delta_0))^2 - (Y - X^{\top}\beta_0)^2\right\}\tilde Q K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right] \\
& = \sqrt{\frac{n}{\sigma_n}}\mathbb{E}\left[\left(\delta_0^{\top}g(Q)\delta_0\right)\left(1 - 2\mathds{1}_{Q^{\top}\psi_0 > 0}\right)\tilde Q K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right] \\
& = \sqrt{\frac{n}{\sigma_n}} \times \sigma_n \int \int \left(\delta_0^{\top}g(\sigma_nt - \tilde q^{\top}\tilde \psi_0, \tilde q)\delta_0\right)\left(1 - 2\mathds{1}_{t > 0}\right)\tilde q K'\left(t\right) \ f_0(\sigma_n t \mid \tilde q) f (\tilde q) \ dt \ d\tilde q \\
& = \sqrt{n\sigma_n} \left[\int \tilde q \left(\delta_0^{\top}g(- \tilde q^{\top}\tilde \psi_0, \tilde q)\delta_0\right)f_0(0 \mid \tilde q) \cancelto{0}{\left(\int_{-\infty}^{\infty} \left(1 - 2\mathds{1}_{t > 0}\right)K'\left(t\right) \ dt\right)} f (\tilde q) d\tilde q + O(\sigma_n)\right] \\
& = O(\sqrt{n\sigma_n^3}) = o(1) \,.
\end{align*}
Here the inner integral vanishes since $\int_{-\infty}^{0}K'(t) \ dt = K(0) = 1/2 = \int_{0}^{\infty}K'(t) \ dt$ for our symmetric kernel. For the variance part:
\begin{align*}
& {\sf var}\left[ \sqrt{n\sigma_n}\nabla \mathbb{M}_n^{s, \psi}(\theta_0)\right] \\
& = \frac{1}{\sigma_n}{\sf var}\left(\left\{(Y - X^{\top}(\beta_0 + \delta_0))^2 - (Y - X^{\top}\beta_0)^2\right\}\tilde Q K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right) \\
& = \frac{1}{\sigma_n}\mathbb{E}\left(\left\{(Y - X^{\top}(\beta_0 + \delta_0))^2 - (Y - X^{\top}\beta_0)^2\right\}^2 \tilde Q\tilde Q^{\top} \left(K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right)^2\right) \\
& \qquad \qquad - \frac{1}{\sigma_n}\mathbb{E}^{\otimes 2}\left[\left\{(Y - X^{\top}(\beta_0 + \delta_0))^2 - (Y - X^{\top}\beta_0)^2\right\}\tilde Q K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right] \,.
\end{align*}
The second term (the outer product of the expectation) is $o(1)$, which follows from our previous analysis of the expectation term. For the second moment:
\begin{align*}
& \frac{1}{\sigma_n}\mathbb{E}\left(\left\{(Y - X^{\top}(\beta_0 + \delta_0))^2 - (Y - X^{\top}\beta_0)^2\right\}^2 \tilde Q\tilde Q^{\top} \left(K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right)^2\right) \\
& = \frac{1}{\sigma_n}\mathbb{E}\left(\left\{(X^{\top}\delta_0)^2(1 - 2\mathds{1}_{Q^{\top}\psi_0 > 0}) -2{\epsilon} (X^{\top}\delta_0)\right\}^2 \tilde Q\tilde Q^{\top} \left(K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right)^2\right) \\
& = \frac{1}{\sigma_n}\left[\mathbb{E}\left((X^{\top}\delta_0)^4 \tilde Q\tilde Q^{\top} \left(K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right)^2\right) + 4\sigma_{\epsilon}^2\mathbb{E}\left((X^{\top}\delta_0)^2 \tilde Q\tilde Q^{\top} \left(K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right)^2\right) \right] \\
& \hspace{12em} [\text{using} \ (1 - 2\mathds{1}_{Q^{\top}\psi_0 > 0})^2 = 1; \ \text{the cross term vanishes as} \ \mathbb{E}({\epsilon} \mid X, Q) = 0] \\
& \longrightarrow \left(\int_{-\infty}^{\infty}(K'(t))^2 \ dt\right)\left[\mathbb{E}\left(g_{4, \delta_0}(-\tilde Q^{\top}\tilde \psi_0, \tilde Q)\tilde Q\tilde Q^{\top}f_0(0 \mid \tilde Q)\right) \right. \\
& \hspace{10em}+ \left. 4\sigma_{\epsilon}^2\mathbb{E}\left(\delta_0^{\top}g(-\tilde Q^{\top}\tilde \psi_0, \tilde Q)\delta_0 \tilde Q\tilde Q^{\top}f_0(0 \mid \tilde Q)\right)\right] \\
& =: 4V^{\psi} \,.
\end{align*}
Finally, using Lemma 6 of \cite{horowitz1992smoothed} we conclude that $ \sqrt{n\sigma_n}\nabla \mathbb{M}_n^{s, \psi}(\theta_0) \overset{\mathscr{L}}{\implies} \mathcal{N}(0, 4V^{\psi})$.
\\\\
\noindent
We next prove that $ \sqrt{n}\nabla \mathbb{M}_n^{s, \gamma}(\theta_0)$ converges to a normal distribution. This is a simple application of the CLT, along with showing that certain remainder terms are asymptotically negligible.
The gradients are:
\begin{align*}
\sqrt{n}\begin{pmatrix} \nabla_{\beta}\mathbb{M}^s_n(\theta_0) \\ \nabla_{\delta}\mathbb{M}^s_n(\theta_0) \end{pmatrix} & = 2\sqrt{n}\begin{pmatrix}\frac1n \sum_i X_i(X_i^{\top}\beta_0 - Y_i)+ \frac1n \sum_i X_iX_i^{\top}\delta_0 K\left(\frac{Q_i^{\top}\psi_0}{\sigma_n}\right) \\
\frac1n \sum_i \left[X_i(X_i^{\top}\beta_0 + X_i^{\top}\delta_0 - Y_i)\right] K\left(\frac{Q_i^{\top}\psi_0}{\sigma_n}\right) \end{pmatrix} \\
& = 2\begin{pmatrix} -\frac{1}{\sqrt{n}} \sum_i X_i {\epsilon}_i + \frac{1}{\sqrt{n}} \sum_i X_iX_i^{\top}\delta_0 \left(K\left(\frac{Q_i^{\top}\psi_0}{\sigma_n}\right) - \mathds{1}_{Q_i^{\top}\psi_0 > 0}\right) \\ -\frac{1}{\sqrt{n}} \sum_i X_i {\epsilon}_iK\left(\frac{Q_i^{\top}\psi_0}{\sigma_n}\right) + \frac{1}{\sqrt{n}} \sum_i X_iX_i^{\top}\delta_0K\left(\frac{Q_i^{\top}\psi_0}{\sigma_n}\right)\mathds{1}_{Q_i^{\top}\psi_0 \le 0}
\end{pmatrix}\\
& = 2\begin{pmatrix} -\frac{1}{\sqrt{n}} \sum_i X_i {\epsilon}_i + R_1 \\ -\frac{1 }{\sqrt{n}} \sum_i X_i {\epsilon}_i\mathds{1}_{Q_i^{\top}\psi_0 > 0} +R_2
\end{pmatrix} \,.
\end{align*}
That $\frac{1}{\sqrt{n}}\sum_i X_i {\epsilon}_i$ and $\frac{1}{\sqrt{n}}\sum_i X_i {\epsilon}_i\mathds{1}_{Q_i^{\top}\psi_0 > 0}$ converge jointly to a normal distribution follows from a simple application of the CLT. Therefore, once we prove that $R_1$ and $R_2$ are $o_p(1)$, we have:
$$
\sqrt{n} \nabla_{\gamma}\mathbb{M}^s_n(\theta_0) \overset{\mathscr{L}}{\implies} \mathcal{N}\left(0, 4V^{\gamma}\right)
$$
where:
\begin{equation}
\label{eq:def_v_gamma}
V^{\gamma} = \sigma_{\epsilon}^2 \begin{pmatrix}\mathbb{E}\left[XX^{\top}\right] & \mathbb{E}\left[XX^{\top}\mathds{1}_{Q^{\top}\psi_0 > 0}\right] \\
\mathbb{E}\left[XX^{\top}\mathds{1}_{Q^{\top}\psi_0 > 0}\right] & \mathbb{E}\left[XX^{\top}\mathds{1}_{Q^{\top}\psi_0 > 0}\right] \end{pmatrix} \,.
\end{equation}
To complete the proof we now show that $R_1$ and $R_2$ are $o_p(1)$. For $R_1$, we show that $\mathbb{E}[R_1] \to 0$ and ${\sf var}(R_1) \to 0$.
For the expectation part:
\begin{align*}
 & \mathbb{E}[R_1] \\
 & = \sqrt{n}\mathbb{E}\left[XX^{\top}\delta_0 \left(K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right] \\
 & = \sqrt{n}\,\mathbb{E}\left[g(Q)\delta_0 \left(K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right] \\
 & = \sqrt{n}\int_{\mathbb{R}^{p-1}} \int_{-\infty}^{\infty} g\left(t-\tilde q^{\top}\tilde \psi_0, \tilde q\right)\delta_0\left(K\left(\frac{t}{\sigma_n}\right) - \mathds{1}_{t > 0}\right)f_0(t \mid \tilde q) f(\tilde q) \ dt \ d\tilde q \\
 & = \sqrt{n}\sigma_n \int_{\mathbb{R}^{p-1}} \int_{-\infty}^{\infty} g\left(\sigma_n z-\tilde q^{\top}\tilde \psi_0, \tilde q\right)\delta_0\left(K\left(z\right) - \mathds{1}_{z > 0}\right)f_0(\sigma_n z \mid \tilde q) f(\tilde q) \ dz \ d\tilde q \\
 & = \sqrt{n}\sigma_n \left[\int_{\mathbb{R}^{p-1}} g\left(-\tilde q^{\top}\tilde \psi_0, \tilde q\right)\delta_0\, f_0(0 \mid \tilde q) f(\tilde q) \ d\tilde q \cancelto{0}{\left[\int_{-\infty}^{\infty} \left(K\left(z\right) - \mathds{1}_{z > 0}\right)\ dz\right]} + O(\sigma_n) \right] \\
 & = O(\sqrt{n}\sigma_n^2) = o(1) \,,
\end{align*}
where the inner integral vanishes since $K(-z) = 1 - K(z)$ for our symmetric kernel. For the variance part:
\begin{align*}
& {\sf var}(R_1) \\
& = {\sf var}\left(XX^{\top}\delta_0 \left(K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right) \hspace{0.2in} [\text{as the summands of} \ R_1 \ \text{are i.i.d.}]\\
& \le \mathbb{E}\left[\|X\|^2 \delta_0^{\top}XX^{\top}\delta_0 \left(K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right)^2\right] \\
& = O(\sigma_n ) = o(1) \,.
\end{align*}
Combined with $\mathbb{E}[R_1] \to 0$, this establishes $R_1 = o_p(1)$. The proof for $R_2$ is similar and hence skipped for brevity.
\\\\
Our next step is to prove that $\sqrt{n\sigma_n}\nabla_{\psi}\mathbb{M}^s_n(\theta_0)$ and $\sqrt{n}\nabla \mathbb{M}^{s, \gamma}_n(\theta_0)$ are asymptotically uncorrelated.
Towards that end, first note that:
\begin{align*}
& \mathbb{E}\left[X(X^{\top}\beta_0 - Y) + XX^{\top}\delta_0 K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right) \right] \\
& = \mathbb{E}\left[XX^{\top}\delta_0\left(K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right] \\
& = \mathbb{E}\left[g(Q)\delta_0\left(K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right] \\
& = \sigma_n \int \int g(\sigma_n t - \tilde q^{\top}\tilde \psi_0, \tilde q)\delta_0(K(t) - \mathds{1}_{t>0})f_0(\sigma_n t \mid \tilde q) f(\tilde q) \ dt \ d\tilde q \\
& = \sigma_n \int g(- \tilde q^{\top}\tilde \psi_0, \tilde q)\delta_0\cancelto{0}{\int_{-\infty}^{\infty} (K(t) - \mathds{1}_{t>0}) \ dt} \ f_0(0 \mid \tilde q) f(\tilde q) \ d\tilde q + O(\sigma_n^2) \\
& = O(\sigma_n^2) \,.
\end{align*}
Also, from the proof of $\mathbb{E}\left[\sqrt{n\sigma_n}\nabla_\psi \mathbb{M}_n^s(\theta_0)\right] \to 0$ we have:
$$
\mathbb{E}\left[\left\{(Y - X^{\top}(\beta_0 + \delta_0))^2 - (Y - X^{\top}\beta_0)^2\right\}\tilde Q K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right] = O(\sigma_n^2) \,.
$$
Finally note that:
\begin{align*}
& \mathbb{E}\left[\left(\left\{(Y - X^{\top}(\beta_0 + \delta_0))^2 - (Y - X^{\top}\beta_0)^2\right\}\tilde Q K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right) \times \right. \\
& \qquad \qquad \qquad \qquad \qquad \left. \left(X(X^{\top}\beta_0 - Y) + XX^{\top}\delta_0 K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right)^{\top}\right] \\
& = \mathbb{E}\left[\left(\left\{(X^{\top}\delta_0)^2(1 - 2\mathds{1}_{Q^{\top}\psi_0 > 0}) - 2{\epsilon} X^{\top}\delta_0\right\}\tilde QK'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right) \right. \\
& \qquad \qquad \qquad \qquad \qquad \left. \times \left\{XX^{\top}\delta_0\left(K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right) - X{\epsilon} \right\}^{\top}\right] \\
& = \mathbb{E}\left[\left((X^{\top}\delta_0)^2(1 - 2\mathds{1}_{Q^{\top}\psi_0 > 0})\tilde QK'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right) \right. \\
& \qquad \qquad \qquad \left. \times \left(XX^{\top}\delta_0\left(K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right)^{\top}\right] \\
& \qquad \qquad + 2\sigma^2_{\epsilon} \mathbb{E}\left[(X^{\top}\delta_0)\,\tilde Q X^{\top}K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right] \\
&= O(\sigma_n ) \,.
\end{align*}
Now, getting back to the covariance:
\begin{align*}
& \mathbb{E}\left[\left(\sqrt{n\sigma_n}\nabla_{\psi}\mathbb{M}^s_n(\theta_0)\right)\left(\sqrt{n}\nabla_\beta \mathbb{M}^s_n(\theta_0)\right)^{\top}\right] \\
& = \frac{1}{\sqrt{\sigma_n}}\mathbb{E}\left[\left(\left\{(Y - X^{\top}(\beta_0 + \delta_0))^2 - (Y - X^{\top}\beta_0)^2\right\}\tilde Q K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right) \times \right. \\
& \qquad \qquad \qquad \qquad \qquad \left. \left(X(X^{\top}\beta_0 - Y) + XX^{\top}\delta_0 K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right)^{\top}\right] \\
& \qquad \qquad + \frac{n-1}{\sqrt{\sigma_n}}\left[\mathbb{E}\left[\left\{(Y - X^{\top}(\beta_0 + \delta_0))^2 - (Y - X^{\top}\beta_0)^2\right\}\tilde Q K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right] \right. \\
& \qquad \qquad \qquad \qquad \times \left. \left(\mathbb{E}\left[X(X^{\top}\beta_0 - Y) + XX^{\top}\delta_0 K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right) \right]\right)^{\top}\right] \\
& = \frac{1}{\sqrt{\sigma_n}} \times O(\sigma_n) + \frac{n-1}{\sqrt{\sigma_n}} \times O(\sigma_n^4) = o(1) \,.
\end{align*}
The proof for $\mathbb{E}\left[\left(\sqrt{n\sigma_n}\nabla_{\psi}\mathbb{M}^s_n(\theta_0)\right)\left(\sqrt{n}\nabla_\delta \mathbb{M}^s_n(\theta_0)\right)^{\top}\right]$ is similar and hence skipped. This completes the proof.



\subsection{Proof of Lemma \ref{conv-prob}}
To prove this, first note that by a simple application of the law of large numbers (using the fact that $\|\psi^* - \psi_0\|/\sigma_n = o_p(1)$) we have:
\begin{align*}
\nabla^2 \mathbb{M}_n^{s, \gamma}(\theta^*) & = 2\begin{pmatrix}\frac{1}{n}\sum_i X_i X_i^{\top} & \frac{1}{n}\sum_i X_i X_i^{\top}K\left(\frac{Q_i^{\top}\psi^*}{\sigma_n}\right) \\ \frac{1}{n}\sum_i X_i X_i^{\top}K\left(\frac{Q_i^{\top}\psi^*}{\sigma_n}\right) & \frac{1}{n}\sum_i X_i X_i^{\top}K\left(\frac{Q_i^{\top}\psi^*}{\sigma_n}\right)
\end{pmatrix} \\
& \overset{p}{\longrightarrow} 2 \begin{pmatrix}\mathbb{E}\left[XX^{\top}\right] & \mathbb{E}\left[XX^{\top}\mathds{1}_{Q^{\top}\psi_0 > 0}\right] \\ \mathbb{E}\left[XX^{\top}\mathds{1}_{Q^{\top}\psi_0 > 0}\right] & \mathbb{E}\left[XX^{\top}\mathds{1}_{Q^{\top}\psi_0 > 0}\right] \end{pmatrix} := 2Q^{\gamma} \,.
\end{align*}
The proof of the fact that $\sqrt{\sigma_n}\nabla^2_{\psi \gamma}\mathbb{M}_n^s(\theta^*) = o_p(1)$ is the same as the proof of Lemma 5 of \cite{seo2007smoothed} and hence skipped. Finally, we need the fact that
$$
\sigma_n \nabla^2_{\psi \psi}\mathbb{M}_n^s(\theta^*) \overset{p}{\longrightarrow} 2Q^{\psi}
$$
for some non-negative definite matrix $Q^{\psi}$; the proof is similar to that of Lemma 6 of \cite{seo2007smoothed} and yields:
$$
Q^{\psi} = \left(\int_{-\infty}^{\infty} -\text{sign}(t) K''(t) \ dt\right) \times \mathbb{E}\left[\delta_0^{\top} g\left(-\tilde Q^{\top}\tilde \psi_0, \tilde Q\right)\delta_0 \tilde Q \tilde Q^{\top} f_0(0 \mid \tilde Q)\right] \,.
$$
This completes the proof. Summarizing, we have established:
\begin{align*}
\sqrt{n}\left(\hat \gamma^s - \gamma_0\right) & \overset{\mathscr{L}}{\implies} \mathcal{N}\left(0, \left(Q^\gamma\right)^{-1}V^\gamma \left(Q^\gamma\right)^{-1}\right) \,, \\
\sqrt{\frac{n}{\sigma_n}}\left(\hat \psi^s - \psi_0\right) & \overset{\mathscr{L}}{\implies} \mathcal{N}\left(0, \left(Q^\psi\right)^{-1}V^\psi \left(Q^\psi\right)^{-1}\right) \,,
\end{align*}
and the two limits are asymptotically uncorrelated.
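\vspace{0.2in}
\noindent
{\bf A numerical illustration: } To complement the asymptotic theory above, the following minimal Python sketch simulates the change-plane regression model and minimizes the sample analogue of $\mathbb{M}^s(\theta)$. The per-observation criterion is reconstructed from the gradient formulas \eqref{eq:beta_grad}, \eqref{eq:delta_grad} and \eqref{eq:psi_grad}; the logistic kernel, the parameter values, the optimizer and its starting point are illustrative assumptions only, not prescriptions of this paper.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# Simulation sketch (illustrative only). Each observation contributes
#   (Y - X'b)^2 (1 - K(Q'psi/s)) + (Y - X'(b+d))^2 K(Q'psi/s)
# to the smoothed criterion; the logistic K and all numbers are assumptions.
rng = np.random.default_rng(0)
n, sigma_n = 4000, 0.1
beta0, delta0 = np.array([1.0, -0.5]), np.array([0.8, 0.6])
psi0 = np.array([1.0, -0.7])   # first coordinate pinned to 1 for identifiability

X = rng.normal(size=(n, 2))
Q = rng.normal(size=(n, 2))
Y = X @ beta0 + (X @ delta0) * (Q @ psi0 > 0) + 0.3 * rng.normal(size=n)

def K(t):                      # smooth distribution-type kernel, K(0) = 1/2
    return 1.0 / (1.0 + np.exp(-t))

def M_ns(par):                 # par = (beta, delta, tilde{psi})
    beta, delta, psi = par[:2], par[2:4], np.append(1.0, par[4:])
    w = K(Q @ psi / sigma_n)   # smoothed version of the indicator 1{Q'psi > 0}
    return np.mean((Y - X @ beta) ** 2 * (1 - w)
                   + (Y - X @ (beta + delta)) ** 2 * w)

# The criterion is non-convex in psi, so we start near the truth,
# standing in for a pilot estimator.
start = np.r_[beta0, delta0, psi0[1:]] + 0.05 * rng.normal(size=5)
fit = minimize(M_ns, start, method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-12, "maxiter": 20000})
print("estimate:", np.round(fit.x, 3))
print("truth   :", np.r_[beta0, delta0, psi0[1:]])
\end{verbatim}
\noindent
In runs of this sketch, the change-plane coordinate is typically recovered more accurately than the regression coefficients for small $\sigma_n$, in line with the faster $\sqrt{n/\sigma_n}$ rate established above.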
\section{Proof of Theorem \ref{thm:binary}}
\label{sec:supp_classification}
In this section, we present the details of the binary response model, the assumptions, a roadmap of the proof, and then finally prove Theorem \ref{thm:binary}.
\noindent
\begin{assumption}
\label{as:distribution}
The assumptions below pertain to the parameter space and the distribution of $Q$:
\begin{enumerate}
\item The parameter space $\Theta$ is a compact subset of $\mathbb{R}^p$.
\item The support of the distribution of $Q$ contains an open subset around the origin of $\mathbb{R}^p$, and the distribution of $Q_1$ conditional on $\tilde{Q} = (Q_2, \dots, Q_p)$ has, almost surely, everywhere positive density with respect to Lebesgue measure.
\end{enumerate}
\end{assumption}

\noindent
For notational convenience, define the following:
\begin{enumerate}
\item Define $f_{\psi} (\cdot | \tilde{Q})$ to be the conditional density of $Q^{\top}\psi$ given $\tilde{Q}$, for $\psi \in \Theta$. Note that the following relation holds:
$$f_{\psi}(\cdot |\tilde{Q}) = f_{Q_1}(\cdot - \tilde{\psi}^{\top}\tilde{Q} | \tilde{Q}) \,,$$
where $f_{Q_1}(\cdot | \tilde Q)$ is the conditional density of $Q_1$ given $\tilde Q$.
\item Define $f_0(\cdot | \tilde{Q}) = f_{\psi_0}(\cdot | \tilde{Q})$, where $\psi_0$ is the unique minimizer of the population score function $\mathbb{M}(\psi)$.
\item Define $f_{\tilde Q}(\cdot)$ to be the marginal density of $\tilde Q$.
\end{enumerate}

\noindent
The rest of the assumptions are as follows:
\begin{assumption}
\label{as:differentiability}
$f_0(y|\tilde{Q})$ is at least once continuously differentiable almost surely for all $\tilde{Q}$. Also assume that there exist $\delta > 0$ and $t > 0$ such that
$$\inf_{|y| \le \delta} f_0(y|\tilde{Q}) \ge t$$
for all $\tilde{Q}$ almost surely.
\end{assumption}
This assumption can be relaxed in the sense that one can allow the lower bound $t$ to depend on $\tilde{Q}$, provided that some further assumptions are imposed on $\mathbb{E}(t(\tilde{Q}))$. As this does not add anything of significance to the import of this paper, we use Assumption \ref{as:differentiability} to simplify certain calculations.

\begin{assumption}
\label{as:density_bound}
Define $m\left(\tilde{Q}\right) = \sup_{t}f_{Q_1}(t | \tilde{Q}) = \sup_{\psi} \sup_{t}f_{\psi}(t | \tilde{Q})$. Assume that $\mathbb{E}\left(m\left(\tilde{Q}\right)^2\right) < \infty$.
\end{assumption}

\begin{assumption}
\label{as:derivative_bound}
Define $h(\tilde{Q}) = \sup_{t} \left|f_0'(t | \tilde{Q})\right|$. Assume that $\mathbb{E}\left(h^2\left(\tilde{Q}\right)\right) < \infty$.
\end{assumption}
\begin{assumption}
\label{as:eigenval_bound}
Assume that $f_{\tilde{Q}}(0) > 0$ and also that the minimum eigenvalue of $\mathbb{E}\left(\tilde{Q}\tilde{Q}^{\top}f_0(0|\tilde{Q})\right)$ is strictly positive.
\end{assumption}

\subsection{Sufficient conditions for the above assumptions}
We now demonstrate some sufficient conditions for the above assumptions to hold. If the support of $Q$ is compact and both $f_{Q_1}(\cdot | \tilde Q)$ and $f'_{Q_1}(\cdot | \tilde Q)$ are uniformly bounded in $\tilde Q$, then Assumptions $(\ref{as:distribution}, \ \ref{as:differentiability}, \ \ref{as:density_bound},\ \ref{as:derivative_bound})$ follow immediately.
The first part of Assumption \ref{as:eigenval_bound}, i.e.\ the assumption that $f_{\tilde{Q}}(0) > 0$, is also fairly general and is satisfied by many standard probability distributions. The second part of Assumption \ref{as:eigenval_bound} is satisfied when $f_0(0|\tilde{Q})$ has a lower bound independent of $\tilde{Q}$ and $\tilde{Q}$ has a non-singular dispersion matrix.

Recall that Theorem \ref{thm:binary} is our main result here. In what follows, we first provide a roadmap of its proof and then fill in the corresponding details. For the rest of the paper, \emph{we choose our bandwidth $\sigma_n$ to satisfy $\frac{\log{n}}{n \sigma_n} \rightarrow 0$}.

\noindent
\begin{remark}
As our procedure requires only the weaker condition $(\log{n})/(n \sigma_n) \rightarrow 0$, it is easy to see from Theorem \ref{thm:binary} that the rate of convergence can be almost as fast as $n/\sqrt{\log{n}}$.
\end{remark}
\begin{remark}
Our analysis remains valid in the presence of an intercept term. Assume, without loss of generality, that the second co-ordinate of $Q$ is $1$ and let $\tilde{Q} = (Q_3, \dots, Q_p)$. It is not difficult to check that all our calculations go through under this new definition of $\tilde Q$. We, however, avoid this scenario for simplicity of exposition.
\end{remark}
\vspace{0.2in}
\noindent
{\bf Proof sketch: } We now provide a roadmap of the proof of Theorem \ref{thm:binary}; the elaborate technical derivations are deferred to the later parts of this section.
Define the following:
$$T_n(\psi) = \nabla \mathbb{M}_n^s(\psi)= -\frac{1}{n\sigma_n}\sum_{i=1}^n (Y_i - \gamma)K'\left(\frac{Q_i^{\top}\psi}{\sigma_n}\right)\tilde{Q}_i \,,$$
$$Q_n(\psi) = \nabla^2 \mathbb{M}_n^s(\psi) = -\frac{1}{n\sigma_n^2}\sum_{i=1}^n (Y_i - \gamma)K''\left(\frac{Q_i^{\top}\psi}{\sigma_n}\right)\tilde{Q}_i\tilde{Q}_i^{\top} \,.$$
As $\hat{\psi}^s$ minimizes $\mathbb{M}^s_n(\psi)$, we have $T_n(\hat{\psi}^s) = 0$. Using a one-step Taylor expansion we have:
\allowdisplaybreaks
\begin{align*}
T_n(\hat{\psi}^s) = T_n(\psi_0) + Q_n(\psi^*_n)\left(\hat{\psi}^s - \psi_0\right) = 0 \,,
\end{align*}
or:
\begin{equation}
\label{eq:main_eq} \sqrt{n/\sigma_n}\left(\hat{\psi}^s - \psi_0\right) = -\left(\sigma_nQ_n(\psi^*_n)\right)^{-1}\sqrt{n\sigma_n}T_n(\psi_0)
\end{equation}
for some intermediate point $\psi^*_n$ between $\hat \psi^s$ and $\psi_0$. The following lemma establishes the asymptotic properties of $T_n(\psi_0)$:
\begin{lemma}[Asymptotic Normality of $T_n$]
\label{asymp-normality}
If $n\sigma_n^{3} \rightarrow \lambda$, then
$$
\sqrt{n \sigma_n} T_n(\psi_0) \Rightarrow \mathcal{N}(\mu, \Sigma)
$$
where
$$\mu = -\sqrt{\lambda}\frac{\beta_0 - \alpha_0}{2}\left[\int_{-1}^{1} K'\left(t\right)|t| \ dt \right] \int_{\mathbb{R}^{p-1}}\tilde{Q} f_0'(0 | \tilde{Q}) \ dP(\tilde{Q})
$$
and
$$\Sigma = \left[a_1 \int_{-1}^{0} \left(K'\left(t\right)\right)^2 \ dt + a_2 \int_{0}^{1} \left(K'\left(t\right)\right)^2 \ dt \right]\int_{\mathbb{R}^{p-1}}\tilde{Q}\tilde{Q}^{\top} f_0(0|\tilde{Q}) \ dP(\tilde{Q}) \,.
$$
Here $a_1 = (1 - \gamma)^2 \alpha_0 + \gamma^2 (1-\alpha_0)$, $a_2 = (1 - \gamma)^2 \beta_0 + \gamma^2 (1-\beta_0)$, and $\alpha_0, \beta_0, \gamma$ are the model parameters defined around equation \eqref{eq:new_loss}.
\end{lemma}
\noindent
When $n \sigma_n^{3} \rightarrow 0$ (which holds, for instance, for the near-optimal choice $\sigma_n \sim \log{n}/n$ permitted by our bandwidth condition), we have $\lambda = 0$ and consequently:
$$\sqrt{n \sigma_n} T_n(\psi_0) \Rightarrow \mathcal{N}(0, \Sigma) \,.$$
Next, we analyze the convergence of $Q_n(\psi^*_n)^{-1}$, which is stated in the following lemma:
\begin{lemma}[Convergence in Probability of $Q_n$]
\label{conv-prob}
Under Assumptions (\ref{as:distribution} - \ref{as:eigenval_bound}), for any random sequence $\breve{\psi}_n$ such that $\|\breve{\psi}_n - \psi_0\|/\sigma_n \overset{P} \rightarrow 0$,
$$
\sigma_n Q_n(\breve{\psi}_n) \overset{P} \rightarrow Q = \frac{\beta_0 - \alpha_0}{2}\left(\int_{-1}^{1} -K''\left(t \right)\text{sign}(t) \ dt\right) \ \mathbb{E}\left(\tilde{Q}\tilde{Q}^{\top} f_0(0 |\tilde{Q})\right) \,.
$$
\end{lemma}
It will be shown later that the condition $\|\breve{\psi}_n - \psi_0\|/\sigma_n \overset{P} \rightarrow 0$ needed in Lemma \ref{conv-prob} holds for the (random) sequence $\psi^*_n$. Then, combining Lemma \ref{asymp-normality} and Lemma \ref{conv-prob}, we conclude from equation \eqref{eq:main_eq} that:
$$
\sqrt{n/\sigma_n} \left(\hat{\psi}^s - \psi_0\right) \Rightarrow \mathcal{N}(0, Q^{-1}\Sigma Q^{-1}) \,.
$$
This concludes the proof of Theorem \ref{thm:binary} with $\Gamma = Q^{-1}\Sigma Q^{-1}$.
\newline
\newline
Observe that, to show $\left\|\psi^*_n - \psi_0 \right\| = o_P(\sigma_n)$, it suffices to prove that $\left\|\hat \psi^s - \psi_0 \right\| = o_P(\sigma_n)$. Towards that direction, we have the following lemma:

\begin{lemma}[Rate of convergence]
\label{lem:rate}
Under Assumptions (\ref{as:distribution} - \ref{as:eigenval_bound}),
$$
n^{2/3}\sigma_n^{-1/3} d^2_n\left(\hat \psi^s, \psi_0^s\right) = O_P(1) \,,
$$
where
$$
d_n\left(\psi, \psi_0^s\right) = \sqrt{\frac{\|\psi - \psi_0^s\|^2}{\sigma_n} \mathds{1}(\|\psi - \psi_0^s\| \le \mathcal{K}\sigma_n) + \|\psi - \psi_0^s\| \mathds{1}(\|\psi - \psi_0^s\| \ge \mathcal{K}\sigma_n)}
$$
for some specific constant $\mathcal{K}$ (this constant will be specified precisely in the proof).
\end{lemma}

\noindent
The lemma immediately leads to the following corollary:

\begin{corollary}
\label{rate-cor}
If $n\sigma_n \rightarrow \infty$ then $\|\hat \psi^s - \psi_0^s\|/\sigma_n \overset{P} \longrightarrow 0$.
\end{corollary}

\noindent
Finally, to establish $\|\hat \psi^s - \psi_0\|/\sigma_n \overset{P} \rightarrow 0$, all we need is that $\|\psi_0^s - \psi_0\|/\sigma_n \rightarrow 0$, as demonstrated in the following lemma:

\begin{lemma}[Convergence of population minimizer]
\label{bandwidth}
For any sequence $\sigma_n \rightarrow 0$, we have $\|\psi_0^s - \psi_0\|/\sigma_n \rightarrow 0$.
\end{lemma}

\noindent
Hence the final roadmap is the following: using Lemma \ref{bandwidth} and Corollary \ref{rate-cor} we establish that $\|\hat \psi^s - \psi_0\|/\sigma_n \overset{P} \rightarrow 0$ whenever $n\sigma_n \rightarrow \infty$. This, in turn, enables us to prove that $\sigma_n Q_n(\psi^*_n) \overset{P} \rightarrow Q$, which, along with Lemma \ref{asymp-normality}, establishes the main theorem.

\begin{remark}
\label{rem:gamma}
In the above analysis, we have assumed knowledge of a $\gamma$ lying strictly between $\alpha_0$ and $\beta_0$.
However, all our calculations go through if we replace $\gamma$ by an estimate (say $\bar Y$), at the cost of more tedious book-keeping. One way to simplify the calculations is to split the data into two halves, estimate $\gamma$ (via $\bar Y$) from the first half, and then use it as a proxy for $\gamma$ in the second half of the data to estimate $\psi_0$. As this procedure does not add anything of interest to the core idea of our proof, we refrain from doing so here.
\end{remark}

\subsection{Variant of the quadratic loss function}
\label{loss_func_eq}
In this sub-section we argue why the loss function in \eqref{eq:new_loss} is a variant of the quadratic loss function for any $\gamma \in (\alpha_0, \beta_0)$. Assume that we know $\alpha_0, \beta_0$ and seek to estimate $\psi_0$. We start with an expansion of the quadratic loss function:
\begin{align*}
& \mathbb{E}\left(Y - \alpha_0\mathds{1}_{Q^{\top}\psi \le 0} - \beta_0 \mathds{1}_{Q^{\top}\psi > 0}\right)^2 \\
& = \mathbb{E}\left(\mathbb{E}\left(\left(Y - \alpha_0\mathds{1}_{Q^{\top}\psi \le 0} - \beta_0 \mathds{1}_{Q^{\top}\psi > 0}\right)^2 \mid Q\right)\right) \\
& = \mathbb{E}_{Q}\left(\mathbb{E}\left( Y^2 \mid Q \right) \right) + \mathbb{E}_{Q}\left(\alpha_0\mathds{1}_{Q^{\top}\psi \le 0} + \beta_0 \mathds{1}_{Q^{\top}\psi > 0}\right)^2 \\
& \qquad \qquad \qquad -2 \mathbb{E}_{Q}\left(\left(\alpha_0\mathds{1}_{Q^{\top}\psi \le 0} + \beta_0 \mathds{1}_{Q^{\top}\psi > 0}\right) \mathbb{E}(Y \mid Q)\right) \\
& = \mathbb{E}_Q\left(\mathbb{E}\left( Y \mid Q \right) \right) + \mathbb{E}_Q\left(\alpha_0\mathds{1}_{Q^{\top}\psi \le 0} + \beta_0 \mathds{1}_{Q^{\top}\psi > 0}\right)^2 \\
& \qquad \qquad \qquad -2 \mathbb{E}_Q\left(\left(\alpha_0\mathds{1}_{Q^{\top}\psi \le 0} + \beta_0 \mathds{1}_{Q^{\top}\psi > 0}\right) \mathbb{E}(Y \mid Q)\right) \,,
\end{align*}
where we have used $Y^2 = Y$ for the binary response $Y$. Since the first summand is just $\mathbb{E} Y$, it is irrelevant to the minimization. A cursory inspection shows that it suffices to minimize
\begin{align}
& \mathbb{E}\left(\left(\alpha_0\mathds{1}_{Q^{\top}\psi \le 0} + \beta_0 \mathds{1}_{Q^{\top}\psi > 0}\right) - \mathbb{E}(Y \mid Q)\right)^2 \notag\\
\label{eq:lse_1} & = (\beta_0 - \alpha_0)^2 \P\left(\text{sign}(Q^{\top}\psi) \neq \text{sign}(Q^{\top}\psi_0)\right) \,.
\end{align}
On the other hand, the loss we are considering is $\mathbb{E}\left((Y - \gamma)\mathds{1}_{Q^{\top}\psi \le 0}\right)$:
\begin{align}
\label{eq:lse_2} \mathbb{E}\left((Y - \gamma)\mathds{1}_{Q^{\top}\psi \le 0}\right) & = (\beta_0 - \gamma)\P(Q^{\top}\psi_0 > 0 , Q^{\top}\psi \le 0) \notag \\
& \hspace{10em}+ (\alpha_0 - \gamma)\P(Q^{\top}\psi_0 \le 0, Q^{\top}\psi \le 0)\,,
\end{align}
which can be rewritten as:
\begin{align*}
& (\alpha_0 - \gamma)\,\P(Q^{\top} \psi_0 \leq 0) + (\beta_0 - \gamma)\,\P(Q^{\top} \psi_0 > 0, Q^{\top} \psi \leq 0) \\
& \qquad \qquad \qquad + (\gamma - \alpha_0)\,\P (Q^{\top} \psi_0 \leq 0, Q^{\top} \psi > 0) \,.
\end{align*}
By Assumption \ref{as:distribution}, for $\psi \neq \psi_0$, $\P\left(\text{sign}(Q^{\top}\psi) \neq \text{sign}(Q^{\top}\psi_0)\right) > 0$. As an easy consequence, equation \eqref{eq:lse_1} is uniquely minimized at $\psi = \psi_0$.
To see that the same is true for \eqref{eq:lse_2} when $\gamma \in (\alpha_0, \beta_0)$, note that the first summand in the equation does not depend on $\psi$, that the second and third summands are both non-negative, and that for $\psi \neq \psi_0$ at least one of them must be positive under Assumption \ref{as:distribution}.
\subsection{Linear curvature of the population score function}
Before going into the proofs of the lemmas and the theorem, we argue that the population score function $\mathbb{M}(\psi)$ has linear curvature near $\psi_0$, which is useful in proving Lemma \ref{lem:rate}. We begin with the following observation:
\begin{lemma}[Curvature of population risk]
\label{lem:linear_curvature}
Under Assumption \ref{as:differentiability} we have:
$$u_- \|\psi - \psi_0\|_2 \le \mathbb{M}(\psi) - \mathbb{M}(\psi_0) \le u_+ \|\psi - \psi_0\|_2$$
for some constants $0 < u_- < u_+ < \infty$ and for all $\psi \in \Theta$.
\end{lemma}
\begin{proof}
First, we show that
$$
\mathbb{M}(\psi) - \mathbb{M}(\psi_0) = \frac{(\beta_0 - \alpha_0)}{2} \P(\text{sign}(Q^{\top}\psi) \neq \text{sign}(Q^{\top}\psi_0)) \,,
$$
which follows from the calculation below:
\begin{align*}
& \mathbb{M}(\psi) - \mathbb{M}(\psi_0) \\
& = \mathbb{E}\left((Y - \gamma)\mathds{1}(Q^{\top}\psi \le 0)\right) - \mathbb{E}\left((Y - \gamma)\mathds{1}(Q^{\top}\psi_0 \le 0)\right) \\
& = \frac{\beta_0 - \alpha_0}{2} \mathbb{E}\left(\left\{\mathds{1}(Q^{\top}\psi \le 0) - \mathds{1}(Q^{\top}\psi_0 \le 0)\right\}\left\{\mathds{1}(Q^{\top}\psi_0 \ge 0) - \mathds{1}(Q^{\top}\psi_0 \le 0)\right\}\right) \\
& = \frac{\beta_0 - \alpha_0}{2} \mathbb{E}\left(\mathds{1}(Q^{\top}\psi \le 0, Q^{\top}\psi_0 \ge 0) - \mathds{1}(Q^{\top}\psi \le 0, Q^{\top}\psi_0 \le 0) + \mathds{1}(Q^{\top}\psi_0 \le 0)\right) \\
& = \frac{\beta_0 - \alpha_0}{2} \mathbb{E}\left(\mathds{1}(Q^{\top}\psi \le 0, Q^{\top}\psi_0 \ge 0) + \mathds{1}(Q^{\top}\psi \ge 0, Q^{\top}\psi_0 \le 0)\right) \\
& = \frac{\beta_0 - \alpha_0}{2} \P(\text{sign}(Q^{\top}\psi) \neq \text{sign}(Q^{\top}\psi_0)) \,.
\end{align*}
We now analyze the probability of the wedge-shaped region, i.e.\ the region between the two hyperplanes $Q^{\top}\psi = 0$ and $Q^{\top}\psi_0 = 0$.
Note that,
\allowdisplaybreaks
\begin{align}
& \P(Q^{\top}\psi > 0 > Q^{\top}\psi_0) \notag\\
& = \P(-\tilde{Q}^{\top}\tilde{\psi} < Q_1 < -\tilde{Q}^{\top}\tilde{\psi}_0) \notag\\
\label{lin1} & = \mathbb{E}\left[\left(F_{Q_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}_0\right) - F_{Q_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}\right)\right)\mathds{1}\left(\tilde{Q}^{\top}\tilde{\psi}_0 \le \tilde{Q}^{\top}\tilde{\psi}\right)\right] \,.
\end{align}
A similar calculation yields
\allowdisplaybreaks
\begin{align}
\label{lin2} \P(Q^{\top}\psi < 0 < Q^{\top}\psi_0) & = \mathbb{E}\left[\left(F_{Q_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}\right) - F_{Q_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}_0\right)\right)\mathds{1}\left(\tilde{Q}^{\top}\tilde{\psi}_0 \ge \tilde{Q}^{\top}\tilde{\psi}\right)\right] \,.
\end{align}
Adding both sides of equations \eqref{lin1} and \eqref{lin2} we get:
\begin{equation}
\label{wedge_expression}
\P(\text{sign}(Q^{\top}\psi) \neq \text{sign}(Q^{\top}\psi_0)) = \mathbb{E}\left[\left|F_{Q_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}\right) - F_{Q_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}_0\right)\right|\right] \,.
\end{equation}
Define $\psi_{\max} = \sup_{\psi \in \Theta}\|\psi\|$, which is finite by Assumption \ref{as:distribution}. Below, we establish the lower bound:
\allowdisplaybreaks
\begin{align*}
& \P(\text{sign}(Q^{\top}\psi) \neq \text{sign}(Q^{\top}\psi_0)) \notag\\
& = \mathbb{E}\left[\left|F_{Q_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}\right) - F_{Q_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}_0\right)\right|\right] \\
& \ge \mathbb{E}\left[\left|F_{Q_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}\right) - F_{Q_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}_0\right)\right|\mathds{1}\left(\left|\tilde{Q}^{\top}\tilde{\psi}\right| \vee \left| \tilde{Q}^{\top}\tilde{\psi}_0\right| \le \delta\right)\right] \hspace{0.2in} [\delta \ \text{as in Assumption \ref{as:differentiability}}]\\
& \ge \mathbb{E}\left[\left|F_{Q_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}\right) - F_{Q_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}_0\right)\right|\mathds{1}\left(\|\tilde{Q}\| \le \delta/\psi_{\max}\right)\right] \\
& \ge t \,\mathbb{E}\left[\left| \tilde{Q}^{\top}(\tilde{\psi} - \tilde{\psi}_0)\right| \mathds{1}\left(\|\tilde{Q}\| \le \delta/\psi_{\max}\right)\right] \\
& = t \|\psi - \psi_0\| \,\mathbb{E}\left[\left| \tilde{Q}^{\top}\frac{(\tilde{\psi} - \tilde{\psi}_0)}{\|\psi - \psi_0\|}\right| \mathds{1}\left(\|\tilde{Q}\| \le \delta/\psi_{\max}\right)\right] \\
& \ge t\|\psi - \psi_0\| \inf_{\gamma \in S^{p-2}}\mathbb{E}\left[\left| \tilde{Q}^{\top}\gamma\right| \mathds{1}\left(\|\tilde{Q}\| \le \delta/\psi_{\max}\right)\right] \\
& = u_-\|\psi - \psi_0\| \,.
\end{align*}
At the very end, we have used the fact that
$$\inf_{\gamma \in S^{p-2}}\mathbb{E}\left[\left| \tilde{Q}^{\top}\gamma\right| \mathds{1}\left(\|\tilde{Q}\| \le \delta/\psi_{\max}\right)\right] > 0 \,.$$
To prove this, assume that the infimum is 0.
Then, there exists $\gamma_0 \in S^{p-2}$ such that
$$\mathbb{E}\left[\left| \tilde{Q}^{\top}\gamma_0\right| \mathds{1}\left(\|\tilde{Q}\| \le \delta/\psi_{\max}\right)\right] = 0 \,,$$
as the above function is continuous in $\gamma$ and any continuous function on a compact set attains its infimum. Hence, $\left|\tilde{Q}^{\top}\gamma_0 \right| = 0$ almost surely on the event $\{\|\tilde{Q}\| \le \delta/\psi_{\max}\}$, which implies that $\tilde{Q}$ does not have full support, violating Assumption \ref{as:distribution} (2). This gives a contradiction.
\\\\
\noindent
Establishing the upper bound is relatively easier. Going back to equation \eqref{wedge_expression}, we have:
\begin{align*}
& \P(\text{sign}(Q^{\top}\psi) \neq \text{sign}(Q^{\top}\psi_0)) \notag\\
& = \mathbb{E}\left[\left|F_{Q_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}\right) - F_{Q_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}_0\right)\right|\right] \\
& \le \mathbb{E}\left[m(\tilde Q) \, \|\tilde Q\| \,\|\psi- \psi_0\|\right] \hspace{0.2in} [m(\cdot) \ \text{is defined in Assumption \ref{as:density_bound}}]\\
& \le u_+ \|\psi - \psi_0\| \,,
\end{align*}
as $ \mathbb{E}\left[m(\tilde Q) \|\tilde Q\|\right] < \infty$ by Assumption \ref{as:density_bound} and the sub-Gaussianity of $\tilde Q$.
\end{proof}

\subsection{Proof of Lemma \ref{asymp-normality}}
\begin{proof}
We first prove that under our assumptions $\sigma_n^{-1} \mathbb{E}(T_n(\psi_0)) \overset{n \to \infty}\longrightarrow A$, where
$$A = -\frac{\beta_0 - \alpha_0}{2}\left[\int_{-\infty}^{\infty} K'\left(t\right)|t| \ dt \right] \int_{\mathbb{R}^{p-1}}\tilde{Q}f_0'(0 | \tilde{Q}) \ dP(\tilde{Q}) \,.$$
The proof is based on a Taylor expansion of the conditional density:
\allowdisplaybreaks
\begin{align*}
& \sigma_n^{-1} \mathbb{E}(T_n(\psi_0)) \\
& = -\sigma_n^{-2}\mathbb{E}\left((Y - \gamma)K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\tilde{Q}\right) \\
& = -\frac{\beta_0 - \alpha_0}{2}\sigma_n^{-2}\mathbb{E}\left(K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\tilde{Q}(\mathds{1}(Q^{\top}\psi_0 \ge 0) - \mathds{1}(Q^{\top}\psi_0 \le 0))\right) \\
& = -\frac{\beta_0 - \alpha_0}{2}\sigma_n^{-2}\int_{\mathbb{R}^{p-1}}\tilde{Q}\left[\int_{0}^{\infty} K'\left(\frac{z}{\sigma_n}\right)f_0(z|\tilde{Q}) \ dz - \int_{-\infty}^{0} K'\left(\frac{z}{\sigma_n}\right)f_0(z|\tilde{Q}) \ dz \right] \ dP(\tilde{Q}) \\
& = -\frac{\beta_0 - \alpha_0}{2}\sigma_n^{-1}\int_{\mathbb{R}^{p-1}}\tilde{Q}\left[\int_{0}^{\infty} K'\left(t\right)f_0(\sigma_n t|\tilde{Q}) \ dt - \int_{-\infty}^{0} K'\left(t\right)f_0(\sigma_n t |\tilde{Q}) \ dt \right] \ dP(\tilde{Q}) \\
& = -\frac{\beta_0 - \alpha_0}{2}\sigma_n^{-1}\left[\int_{\mathbb{R}^{p-1}}\tilde{Q}\,f_0(0|\tilde{Q})\cancelto{0}{\left[\int_{0}^{\infty} K'\left(t\right) \ dt - \int_{-\infty}^{0} K'\left(t\right) \ dt \right]} \ dP(\tilde{Q}) \right. \\
& \qquad \qquad \qquad + \left.
\int_{\mathbb{R}^{p-1}}\tilde{Q}\,\sigma_n \left[\int_{0}^{\infty} K'\left(t\right)tf_0'(\lambda \sigma_n t|\tilde{Q}) \ dt - \int_{-\infty}^{0} K'\left(t\right) t f_0'(\lambda \sigma_n t |\tilde{Q}) \ dt \right] \ dP(\tilde{Q}) \right] \hspace{0.2in} [0 < \lambda < 1]\\
& = -\frac{\beta_0 - \alpha_0}{2}\int_{\mathbb{R}^{p-1}}\tilde{Q}\left[\int_{0}^{\infty} K'\left(t\right)tf_0'(\lambda \sigma_n t|\tilde{Q}) \ dt - \int_{-\infty}^{0} K'\left(t\right)tf_0'(\lambda \sigma_nt |\tilde{Q}) \ dt \right] \ dP(\tilde{Q})\\
& \underset{n \rightarrow \infty} \longrightarrow -\frac{\beta_0 - \alpha_0}{2}\left[\int_{-\infty}^{\infty} K'\left(t\right)|t| \ dt \right] \int_{\mathbb{R}^{p-1}}\tilde{Q}f_0'(0 | \tilde{Q}) \ dP(\tilde{Q}) = A \,.
\end{align*}
Here the first bracket in the penultimate step vanishes since $\int_{0}^{\infty} K'(t) \ dt = 1 - K(0) = 1/2 = K(0) = \int_{-\infty}^{0} K'(t) \ dt$ for our symmetric kernel.
Next, we prove that $\mbox{Var}\left(\sqrt{n\sigma_n}T_n(\psi_0)\right)\longrightarrow \Sigma$ as $n \rightarrow \infty$, where $\Sigma$ is as defined in Lemma \ref{asymp-normality}. Note that:
\allowdisplaybreaks
\begin{align*}
\mbox{Var}\left(\sqrt{n\sigma_n}T_n(\psi_0)\right) & = \sigma_n \mathbb{E}\left((Y - \gamma)^2\left(K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)^2\frac{\tilde{Q}\tilde{Q}^{\top}}{\sigma_n^2}\right)\right) - \sigma_n \mathbb{E}(T_n(\psi_0))\mathbb{E}(T_n(\psi_0))^{\top} \,.
\end{align*}
As $\sigma_n^{-1}\mathbb{E}(T_n(\psi_0)) \rightarrow A$, we can conclude that $\sigma_n \mathbb{E}(T_n(\psi_0))\mathbb{E}(T_n(\psi_0))^{\top} \rightarrow 0$.
Recall that $a_1 = (1 - \gamma)^2 \alpha_0 + \gamma^2 (1-\alpha_0)$ and $a_2 = (1 - \gamma)^2 \beta_0 + \gamma^2 (1-\beta_0)$; these are the conditional second moments of $(Y - \gamma)$ on the events $\{Q^{\top}\psi_0 \le 0\}$ and $\{Q^{\top}\psi_0 > 0\}$ respectively. For the first summand:
\allowdisplaybreaks
\begin{align*}
& \sigma_n \mathbb{E}\left((Y - \gamma)^2\left(K^{'^2}\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\frac{\tilde{Q}\tilde{Q}^{\top}}{\sigma_n^2}\right)\right) \\
& = \frac{1}{\sigma_n} \int_{\mathbb{R}^{p-1}}\tilde{Q}\tilde{Q}^{\top} \left[a_1 \int_{-\infty}^{0} K^{'^2}\left(\frac{z}{\sigma_n}\right) f_0(z|\tilde{Q}) \ dz \right. \notag \\ & \left. \qquad \qquad \qquad + a_2 \int_{0}^{\infty}K^{'^2}\left(\frac{z}{\sigma_n}\right) f_0(z|\tilde{Q}) \ dz \right] \ dP(\tilde{Q})\\
& = \int_{\mathbb{R}^{p-1}}\tilde{Q}\tilde{Q}^{\top} \left[a_1 \int_{-\infty}^{0} K^{'^2}\left(t\right)f_0(\sigma_n t|\tilde{Q}) \ dt + a_2 \int_{0}^{\infty} K^{'^2}\left(t\right) f_0(\sigma_n t |\tilde{Q}) \ dt \right] \ dP(\tilde{Q}) \\
& \underset{n \rightarrow \infty} \longrightarrow \left[a_1 \int_{-\infty}^{0} K^{'^2}\left(t\right) \ dt + a_2 \int_{0}^{\infty} K^{'^2}\left(t\right) \ dt \right]\int_{\mathbb{R}^{p-1}}\tilde{Q}\tilde{Q}^{\top} f_0(0|\tilde{Q}) \ dP(\tilde{Q}) \ \ \overset{\Delta} = \Sigma \, .
\end{align*}
Finally, suppose $n \sigma_n^{3} \rightarrow \lambda$. Define $W_n = \sqrt{n\sigma_n}\left[T_n(\psi_0) - \mathbb{E}(T_n(\psi_0))\right]$. Using Lemma 6 of Horowitz \cite{horowitz1992smoothed}, it is easily established that $W_n \Rightarrow \mathcal{N}(0, \Sigma)$.
Also, we have:
\allowdisplaybreaks
\begin{align*}
\sqrt{n\sigma_n}\mathbb{E}(T_n(\psi_0)) = \sqrt{n\sigma_n^{3}}\,\sigma_n^{-1}\mathbb{E}(T_n(\psi_0)) & \rightarrow \sqrt{\lambda}A = \mu \,.
\end{align*}
As $\sqrt{n\sigma_n}T_n(\psi_0) = W_n + \sqrt{n\sigma_n}\mathbb{E}(T_n(\psi_0))$, we conclude that $\sqrt{n\sigma_n} T_n(\psi_0) \Rightarrow \mathcal{N}(\mu, \Sigma)$.
\end{proof}
\subsection{Proof of Lemma \ref{conv-prob}}
\begin{proof}
Let $\epsilon_n \downarrow 0$ be a sequence such that $\P(\|\breve{\psi}_n - \psi_0\| \le \epsilon_n \sigma_n) \rightarrow 1$. Define $\Psi_n = \{\psi: \|\psi - \psi_0\| \le \epsilon_n \sigma_n\}$. We show that
$$\sup_{\psi \in \Psi_n} \|\sigma_n Q_n(\psi) - Q\|_F \overset{P} \to 0 \,,$$
where $\|\cdot\|_F$ denotes the Frobenius norm of a matrix. Sometimes, we omit the subscript $F$ when there is no ambiguity. Define $\mathcal{G}_n$ to be the following collection of functions:
$$
\mathcal{G}_n= \left\{g_{\psi}(y, q) = -\frac{1}{\sigma_n}(y - \gamma)\tilde q\tilde q^{\top} \left(K''\left(\frac{q^{\top}\psi}{\sigma_n}\right) - K''\left(\frac{q^{\top}\psi_0}{\sigma_n}\right)\right), \psi \in \Psi_n \right\} \,.
$$
That the function class $\mathcal{G}_n$ has bounded uniform entropy integral (BUEI) is immediate from the fact that the map $q \mapsto q^{\top}\psi$ has finite VC dimension (as hyperplanes have finite VC dimension), which does not change upon scaling by a constant. Therefore $q \mapsto q^{\top}\psi/\sigma_n$ also has finite VC dimension, not depending on $n$, and is hence BUEI. As composition with a monotone function, multiplication by constant (parameter-free) functions, and multiplication of two BUEI classes of functions all preserve the BUEI property, we conclude that $\mathcal{G}_n$ is BUEI.
We first split the expression into two terms:
\allowdisplaybreaks
\begin{align*}
\sup_{\psi \in \Psi_n} \|\sigma_n Q_n(\psi) - Q\| & \le \sup_{\psi \in \Psi_n} \|\sigma_n Q_n(\psi) - \mathbb{E}(\sigma_n Q_n(\psi))\| + \sup_{\psi \in \Psi_n} \| \mathbb{E}(\sigma_n Q_n(\psi)) - Q\| \\
& = \|(\mathbb{P}_n - P)\|_{\mathcal{G}_n} + \sup_{\psi \in \Psi_n}\| \mathbb{E}(\sigma_n Q_n(\psi)) - Q\| \\
& =: T_{1,n} + T_{2,n} \,.
\end{align*}

\vspace{0.2in}
\noindent
That $T_{1,n} \overset{P} \to 0$ follows from the uniform law of large numbers for a BUEI class (e.g. combining Theorem 2.4.1 and Theorem 2.6.7 of \cite{vdvw96}).
For the uniform convergence of the second summand $T_{2,n}$, define $\chi_n = \{\tilde{Q}: \|\tilde{Q}\| \le 1/\sqrt{\epsilon_n}\}$. Then $\chi_n \uparrow \mathbb{R}^{p-1}$. Also, for any $\psi \in \Psi_n$, if we define $\gamma_n \equiv \gamma_n(\psi) = (\psi - \psi_0)/\sigma_n$, then $|\gamma_n^{\top}\tilde{Q}| \le \sqrt{\epsilon_n}$ for all $n$, for all $\psi \in \Psi_n$ and $\tilde{Q} \in \chi_n$.
Now,
\allowdisplaybreaks
\begin{align*}
& \sup_{\psi \in \Psi_n}\| \mathbb{E}(\sigma_n Q_n(\psi)) - Q\| \notag \\
&\qquad \qquad = \sup_{\psi \in \Psi_n}\| (\mathbb{E}(\sigma_n Q_n(\psi)\mathds{1}(\chi_n))-Q_1) + (\mathbb{E}(\sigma_n Q_n(\psi)\mathds{1}(\chi_n^c))-Q_2)\| \,,
\end{align*}
where
$$Q_1 = \frac{\beta_0 - \alpha_0}{2}\left(\int_{-\infty}^{\infty} -K''\left(t \right)\text{sign}(t) \ dt\right) \ \mathbb{E}\left(\tilde{Q}\tilde{Q}^{\top} f_0(0 |\tilde{Q})\mathds{1}(\chi_n) \right) \,,$$
$$Q_2 = \frac{\beta_0 - \alpha_0}{2}\left(\int_{-\infty}^{\infty} -K''\left(t \right)\text{sign}(t) \ dt\right) \ \mathbb{E}\left(\tilde{Q}\tilde{Q}^{\top} f_0(0 |\tilde{Q})\mathds{1}(\chi_n^c) \right) \,.$$
Note that
\allowdisplaybreaks
\begin{flalign}
& \|\mathbb{E}(\sigma_n Q_n(\psi)\mathds{1}(\chi_n)) - Q_1\| \notag\\
& =\left\| \frac{\beta_0 - \alpha_0}{2}\left[\int_{\chi_n} \tilde{Q}\tilde{Q}^{\top} \left[\int_{-\infty}^{\tilde{Q}^{\top}\gamma_n} K''\left(t \right) f_0(\sigma_n (t-\tilde{Q}^{\top}\gamma_n) |\tilde{Q}) \ dt \right. \right. \right. \notag \\
& \left. \left. \left. \qquad \qquad - \int_{\tilde{Q}^{\top}\gamma_n}^{\infty} K''\left(t\right) f_0(\sigma_n (t - \tilde{Q}^{\top}\gamma_n) | \tilde{Q}) \ dt \right]dP(\tilde{Q})\right]\right. \notag\\
& \left. \qquad \qquad \qquad - \frac{\beta_0 - \alpha_0}{2}\left[\int_{\chi_n} \tilde{Q}\tilde{Q}^{\top} f_0(0 |\tilde{Q})\left[\int_{-\infty}^{0} K''\left(t \right) \ dt - \int_{0}^{\infty} K''\left(t\right) \ dt \right]dP(\tilde{Q})\right] \right \|\notag\\
& =\left \| \frac{\beta_0 - \alpha_0}{2}\left[\int_{\chi_n} \tilde{Q}\tilde{Q}^{\top} \left[\int_{-\infty}^{\tilde{Q}^{\top}\gamma_n} K''\left(t \right) (f_0(\sigma_n (t-\tilde{Q}^{\top}\gamma_n) |\tilde{Q})-f_0(0 | \tilde{Q})) \ dt \right. \right. \right.\notag\\
& \qquad \qquad- \left. \left. \left. \int_{\tilde{Q}^{\top}\gamma_n}^{\infty} K''\left(t\right) (f_0(\sigma_n (t - \tilde{Q}^{\top}\gamma_n) | \tilde{Q}) - f_0(0 | \tilde{Q})) \ dt \right]dP(\tilde{Q})\right]\right. \notag\\
& \qquad \qquad \qquad + \left. \frac{\beta_0 - \alpha_0}{2}\left[\int_{\chi_n} \tilde{Q}\tilde{Q}^{\top} f_0(0 |\tilde{Q}) \left[\int_{-\infty}^{\tilde{Q}^{\top}\gamma_n} K''\left(t \right) \ dt - \int_{-\infty}^{0} K''\left(t \right) \ dt \right. \right. \right. \notag \\
& \qquad \qquad \qquad \qquad \left. \left. \left. + \int_{\tilde{Q}^{\top}\gamma_n}^{\infty} K''\left(t \right) \ dt - \int_{0}^{\infty} K''\left(t\right) \ dt \right]dP(\tilde{Q})\right] \right \|\notag\\
& \le \frac{\beta_0 - \alpha_0}{2}\sigma_n \int_{\chi_n}\|\tilde{Q}\tilde{Q}^{\top}\|h(\tilde{Q})\int_{-\infty}^{\infty}|K''(t)||t - \gamma_n^{\top}\tilde{Q}| \ dt \ dP(\tilde{Q}) \notag\\
& \qquad \qquad + \frac{\beta_0 - \alpha_0}{2} \int_{\chi_n}\|\tilde{Q}\tilde{Q}^{\top}\| f_0(0 | \tilde{Q}) \left[\left| \int_{-\infty}^{\tilde{Q}^{\top}\gamma_n} K''\left(t \right) \ dt - \int_{-\infty}^{0} K''\left(t \right) \ dt \right| \right. \notag \\
& \qquad \qquad \qquad \left. + \left| \int_{\tilde{Q}^{\top}\gamma_n}^{\infty} K''\left(t \right) \ dt - \int_{0}^{\infty} K''\left(t\right) \ dt \right|\right] \ dP(\tilde{Q})\notag\\
& \le \frac{\beta_0 - \alpha_0}{2}\left[\sigma_n \int_{\chi_n}\|\tilde{Q}\tilde{Q}^{\top}\|h(\tilde{Q})\int_{-\infty}^{\infty}|K''(t)||t - \gamma_n^{\top}\tilde{Q}| \ dt \ dP(\tilde{Q}) \right. \notag \\
& \left. \qquad \qquad \qquad + 2\int_{\chi_n}\|\tilde{Q}\tilde{Q}^{\top}\| f_0(0 | \tilde{Q}) \left|K'(\gamma_n^{\top}\tilde{Q}) - K'(0)\right| \ dP(\tilde{Q})\right]\notag \\
\label{cp1}&\rightarrow 0 \hspace{0.3in} [\text{as} \ n \rightarrow \infty] \,,
\end{flalign}
by DCT and Assumptions \ref{as:distribution} and \ref{as:derivative_bound}. For the second part:
\allowdisplaybreaks
\begin{align}
& \|\mathbb{E}(\sigma_n Q_n(\psi)\mathds{1}(\chi_n^c)) - Q_2\|\notag\\
& =\left\| \frac{\beta_0 - \alpha_0}{2}\left[\int_{\chi_n^c} \tilde{Q}\tilde{Q}^{\top} \left[\int_{-\infty}^{\tilde{Q}^{\top}\gamma_n} K''\left(t \right) f_0(\sigma_n (t-\tilde{Q}^{\top}\gamma_n) |\tilde{Q}) \ dt \right. \right. \right. \notag \\
& \left. \left. \left. \qquad \qquad - \int_{\tilde{Q}^{\top}\gamma_n}^{\infty} K''\left(t\right) f_0(\sigma_n (t - \tilde{Q}^{\top}\gamma_n) | \tilde{Q}) \ dt \right]dP(\tilde{Q})\right]\right. \notag\\
& \left. \qquad \qquad \qquad -\frac{\beta_0 - \alpha_0}{2}\left[\int_{\chi_n^c} \tilde{Q}\tilde{Q}^{\top} f_0(0 |\tilde{Q})\left[\int_{-\infty}^{0} K''\left(t \right) \ dt - \int_{0}^{\infty} K''\left(t\right) \ dt \right]dP(\tilde{Q})\right] \right \|\notag\\
& \le \frac{\beta_0 - \alpha_0}{2} \int_{-\infty}^{\infty} |K''(t)| \ dt \int_{\chi_n^c} \|\tilde{Q}\tilde{Q}^{\top}\|(m(\tilde{Q}) + f_0(0|\tilde{Q})) \ dP(\tilde{Q}) \notag\\
\label{cp2} & \rightarrow 0 \hspace{0.3in} [\text{as} \ n \rightarrow \infty] \,,
\end{align}
again by DCT and Assumptions \ref{as:distribution} and \ref{as:density_bound}. Combining equations \eqref{cp1} and \eqref{cp2}, we conclude the proof.
\end{proof}

\subsection{Proof of Lemma \ref{bandwidth}}
Here we prove that $\|\psi^s_0 - \psi_0\|/\sigma_n \rightarrow 0$, where $\psi^s_0$ is the minimizer of $\mathbb{M}^s(\psi)$ and $\psi_0$ is the minimizer of $\mathbb{M}(\psi)$.
\begin{proof}
Define $\tilde\eta = (\tilde\psi^s_0 - \tilde\psi_0)/\sigma_n$. At first we show that $\|\tilde \eta\|_2 = O(1)$, i.e.
there exists some constant $\Omega_1$ such that $\|\tilde \eta\|_2 \le \Omega_1$ for all $n$:
\begin{align*}
\|\psi^s_0 - \psi_0\|_2 & \le \frac{1}{u_-} \left(\mathbb{M}(\psi^s_0) - \mathbb{M}(\psi_0)\right) \hspace{0.2in} [\text{follows from Lemma} \ \ref{lem:linear_curvature}]\\
& = \frac{1}{u_-} \left(\mathbb{M}(\psi^s_0) - \mathbb{M}^s(\psi^s_0) + \mathbb{M}^s(\psi^s_0) - \mathbb{M}^s(\psi_0) + \mathbb{M}^s(\psi_0) - \mathbb{M}(\psi_0)\right) \\
& \le \frac{1}{u_-} \left(\mathbb{M}(\psi^s_0) - \mathbb{M}^s(\psi^s_0) + \mathbb{M}^s(\psi_0) - \mathbb{M}(\psi_0)\right) \hspace{0.2in} [\because \mathbb{M}^s(\psi^s_0) - \mathbb{M}^s(\psi_0) \le 0]\\
& \le \frac{2K_1}{u_-}\sigma_n \hspace{0.2in} [\text{from equation} \ \eqref{eq:lin_bound_1}] \,.
\end{align*}

\noindent
As $\psi^s_0$ minimizes $\mathbb{M}^s(\psi)$:
$$\nabla \mathbb{M}^s(\psi^s_0) = -\frac{1}{\sigma_n}\mathbb{E}\left((Y-\gamma)\tilde{Q}K'\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\right) = 0 \,.$$
Hence:
\begin{align*}
0 &= \mathbb{E}\left((Y-\gamma)\tilde{Q}K'\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\right) \\
& = \frac{(\beta_0 - \alpha_0)}{2} \mathbb{E}\left(\tilde{Q}K'\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\left\{\mathds{1}(Q^{\top}\psi_0 \ge 0) -\mathds{1}(Q^{\top}\psi_0 < 0)\right\}\right) \\
& = \frac{(\beta_0 - \alpha_0)}{2} \mathbb{E}\left(\tilde{Q}K'\left(\frac{Q^{\top}\psi_0}{\sigma_n} + \tilde{\eta}^{\top} \tilde{Q}\right)\left\{\mathds{1}(Q^{\top}\psi_0 \ge 0) -\mathds{1}(Q^{\top}\psi_0 < 0)\right\}\right) \\
& = \frac{(\beta_0 - \alpha_0)}{2} \left[\int_{\mathbb{R}^{p-1}}\tilde{Q} \int_0^{\infty} K'\left(\frac{z}{\sigma_n} + \tilde{\eta}^{\top} \tilde{Q}\right) \ f_0(z|\tilde{Q}) \ dz \ dP(\tilde{Q})\right. \\
& \qquad \qquad \qquad \qquad \qquad \left. - \int_{\mathbb{R}^{p-1}}\tilde{Q} \int_{-\infty}^0 K'\left(\frac{z}{\sigma_n} + \tilde{\eta}^{\top} \tilde{Q}\right) \ f_0(z|\tilde{Q}) \ dz \ dP(\tilde{Q})\right] \\
& =\sigma_n \frac{(\beta_0 - \alpha_0)}{2} \left[\int_{\mathbb{R}^{p-1}}\tilde{Q} \int_0^{\infty} K'\left(t + \tilde{\eta}^{\top} \tilde{Q}\right) \ f_0(\sigma_n t|\tilde{Q}) \ dt \ dP(\tilde{Q})\right. \\
& \qquad \qquad \qquad \qquad \qquad \left. - \int_{\mathbb{R}^{p-1}}\tilde{Q} \int_{-\infty}^0 K'\left(t + \tilde{\eta}^{\top} \tilde{Q}\right) \ f_0(\sigma_n t|\tilde{Q}) \ dt \ dP(\tilde{Q})\right] \,.
\end{align*}
As $\sigma_n\frac{(\beta_0 - \alpha_0)}{2} > 0$, we may drop this factor and continue. Also, as we have proved $\|\tilde \eta\| = O(1)$, there exist a subsequence $\tilde\eta_{n_k}$ and a point $c \in \mathbb{R}^{p-1}$ such that $\tilde\eta_{n_k} \rightarrow c$. Along that sub-sequence we have:
\begin{align*}
0 & = \left[\int_{\mathbb{R}^{p-1}}\tilde{Q} \int_0^{\infty} K'\left(t + \tilde{\eta}_{n_k}^{\top} \tilde{Q}\right) \ f_0(\sigma_{n_k} t|\tilde{Q}) \ dt \ dP(\tilde{Q})\right. \\
& \qquad \qquad \qquad \qquad \qquad \left.
- \int_{\mathbb{R}^{p-1}}\tilde{Q} \int_{-\infty}^0 K'\left(t + \tilde{\eta}_{n_k}^{\top} \tilde{Q}\right) \ f_0(\sigma_{n_k} t|\tilde{Q}) \ dt \ dP(\tilde{Q})\right] \,.
\end{align*}
Taking limits on both sides and applying the DCT (which is permissible as the integrands are dominated by integrable envelopes), we conclude:
\begin{align*}
0 & = \left[\int_{\mathbb{R}^{p-1}}\tilde{Q} \int_0^{\infty} K'\left(t +c^{\top} \tilde{Q}\right) \ f_0(0|\tilde{Q}) \ dt \ dP(\tilde{Q})\right. \\
& \qquad \qquad \qquad \qquad \qquad \left. - \int_{\mathbb{R}^{p-1}}\tilde{Q} \int_{-\infty}^0 K'\left(t + c^{\top} \tilde{Q}\right) \ f_0(0|\tilde{Q}) \ dt \ dP(\tilde{Q})\right] \\
& = \left[\int_{\mathbb{R}^{p-1}}\tilde{Q} \ f_0(0|\tilde{Q}) \int_{c^{\top} \tilde{Q}}^{\infty} K'\left(t\right) \ dt \ dP(\tilde{Q})\right. \\
& \qquad \qquad \qquad \qquad \qquad \left. - \int_{\mathbb{R}^{p-1}}\tilde{Q}\ f_0(0|\tilde{Q}) \int_{-\infty}^{c^{\top} \tilde{Q}} K'\left(t \right) \ dt \ dP(\tilde{Q})\right] \\
& = \left[\int_{\mathbb{R}^{p-1}}\tilde{Q} \ f_0(0|\tilde{Q}) \left[1 - K(c^{\top} \tilde{Q})\right] \ dP(\tilde{Q})\right. \\
& \qquad \qquad \qquad \qquad \qquad\left. - \int_{\mathbb{R}^{p-1}}\tilde{Q}\ f_0(0|\tilde{Q}) K(c^{\top} \tilde{Q}) \ dP(\tilde{Q})\right] \\
& = -\mathbb{E}\left(\tilde{Q} \left(2K(c^{\top} \tilde{Q}) - 1\right)f_0(0|\tilde{Q})\right) \,.
\end{align*}
Now, taking the inner product of both sides with $c$, we get:
\begin{equation}
\label{eq:zero_eq}
\mathbb{E}\left(c^{\top}\tilde{Q} \left(2K(c^{\top} \tilde{Q}) - 1\right)f_0(0|\tilde{Q})\right) = 0 \,.
\end{equation}
By our assumptions that $K$ is a symmetric kernel and that $K'(t) > 0$ for all $t \in (-1, 1)$, we easily conclude that $c^{\top}\tilde{Q} \left(2K(c^{\top} \tilde{Q}) - 1\right) \ge 0$ almost surely in $\tilde{Q}$, with equality iff $c^{\top}\tilde{Q} = 0$, which is not possible unless $c = 0$. Hence we conclude that $c = 0$. This shows that any convergent subsequence of $\tilde\eta_n$ converges to $0$, which completes the proof.
\end{proof}

\subsection{Proof of Lemma \ref{lem:rate}}
\begin{proof}
To obtain the rate of convergence of our kernel-smoothed estimator we use Theorem 3.4.1 of \cite{vdvw96}. There are three key ingredients that one needs to verify in order to apply this theorem:
\begin{enumerate}
\item Consistency of the estimator (otherwise the conditions of the theorem need to be valid for all $\eta$).
\item The curvature of the population score function near its minimizer.
\item A bound on the modulus of continuity in a vicinity of the minimizer of the population score function.
\end{enumerate}
Below, we establish the curvature of the population score function (item 2 above) globally, thereby obviating the need to establish consistency separately. Recall that the population score function was defined as:
$$
\mathbb{M}^s(\psi) = \mathbb{E}\left((Y - \gamma)\left(1 - K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right)\right)
$$
and our estimator $\hat{\psi}^s$ is the argmin of the corresponding sample version. Consider the set of functions $\mathcal{H}_n = \left\{h_{\psi}: h_{\psi}(q,y) = (y - \gamma)\left(1 - K\left(\frac{q^{\top}\psi}{\sigma_n}\right)\right)\right\}$. Next, we argue that $\mathcal{H}_n$ is a VC class of functions with fixed VC dimension.
We know that the class of functions $\\{(q,y) \\mapsto q^{\\top}\\psi\/\\sigma_n: \\psi \\in \\Theta\\}$ has fixed VC dimension (i.e. not depending on $n$). Now, as a finite dimensional VC class of functions composed with a fixed monotone function or multiplied by a fixed function still remains a finite dimensional VC class, we conclude that $\\mathcal{H}_n$ is a fixed dimensional VC class of functions with bounded envelope (as the functions considered here are bounded by 1). \n\nNow, we establish a lower bound on the curvature of the population score function $\\mathbb{M}^s(\\psi)$ near its minimizer $\\psi_0^s$: \n$$\n\\mathbb{M}^s(\\psi) - \\mathbb{M}^s(\\psi_0^s) \\gtrsim d^2_n(\\psi, \\psi_0^s)$$ where $$d_n(\\psi, \\psi_0^s) = \\sqrt{\\frac{\\|\\psi - \\psi_0^s\\|^2}{\\sigma_n} \\mathds{1}\\left(\\|\\psi - \\psi_0^s\\| \\le \\mathcal{K}\\sigma_n\\right) + \\|\\psi - \\psi_0^s\\|\\mathds{1}\\left(\\|\\psi - \\psi_0^s\\| > \\mathcal{K}\\sigma_n\\right)}\n$$ for some constant $\\mathcal{K} > 0$. The intuition behind this compound structure is the following: when $\\psi$ is in a $\\sigma_n$ neighborhood of $\\psi_0^s$, $\\mathbb{M}^s(\\psi)$ behaves like a smooth quadratic function, but when it is away from the truth, $\\mathbb{M}^s(\\psi)$ starts resembling $\\mathbb{M}(\\psi)$, which induces the linear curvature. \n\\\\\\\\\n\\noindent\nFor the linear part, we first establish that $|\\mathbb{M}(\\psi) - \\mathbb{M}^s(\\psi)| = O(\\sigma_n)$ uniformly for all $\\psi$. Define $\\eta = (\\psi - \\psi_0)\/\\sigma_n$:\n\\allowdisplaybreaks\n\\begin{align}\n& |\\mathbb{M}(\\psi) - \\mathbb{M}^s(\\psi)| \\notag \\\\\n& \\le \\mathbb{E}\\left(\\left | \\mathds{1}(Q^{\\top}\\psi \\ge 0) - K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)\\right | \\right) \\notag\\\\\n& = \\mathbb{E}\\left(\\left | \\mathds{1}\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n} + \\eta^{\\top}\\tilde{Q} \\ge 0\\right) - K\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n} + \\eta^{\\top}\\tilde{Q}\\right)\\right | \\right) \\notag \\\\\n& = \\sigma_n \\int_{\\mathbb{R}^{p-1}} \\int_{-\\infty}^{\\infty} \\left | \\mathds{1}\\left(t + \\eta^{\\top}\\tilde{Q} \\ge 0\\right) - K\\left(t + \\eta^{\\top}\\tilde{Q}\\right)\\right | f_0(\\sigma_n t | \\tilde{Q}) \\ dt \\ dP(\\tilde{Q}) \\notag\\\\\n& = \\sigma_n \\int_{\\mathbb{R}^{p-1}} \\int_{-\\infty}^{\\infty} \\left | \\mathds{1}\\left(t \\ge 0\\right) - K\\left(t \\right)\\right | f_0(\\sigma_n (t-\\eta^{\\top}\\tilde{Q}) | \\tilde{Q}) \\ dt \\ dP(\\tilde{Q}) \\notag \\\\\n& \\le \\sigma_n \\int_{\\mathbb{R}^{p-1}} m(\\tilde{Q})\\int_{-\\infty}^{\\infty} \\left | \\mathds{1}\\left(t \\ge 0\\right) - K\\left(t \\right)\\right | \\ dt \\ dP(\\tilde{Q}) \\hspace{0.3in} [\\text{by Assumption \\ref{as:density_bound}}] \\notag\\\\\n& = \\sigma_n \\mathbb{E}(m(\\tilde{Q})) \\int_{-\\infty}^{\\infty} \\left | \\mathds{1}\\left(t \\ge 0\\right) - K\\left(t \\right)\\right | \\ dt \\notag \\\\\n\\label{eq:lin_bound_1} & =: K_1 \\sigma_n < \\infty \\,.\n\\end{align}\nHere, the constant $K_1$ is $\\mathbb{E}(m(\\tilde{Q})) \\left[\\int_{-\\infty}^{\\infty}\\left | \\mathds{1}\\left(t \\ge 0\\right) - K\\left(t \\right)\\right | \\ dt \\right]$, which does not depend on $\\psi$, hence the bound is uniform over $\\psi$. 
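\\noindent\nAs a concrete illustration of this constant, for the Gaussian choice $K = \\Phi$ (the kernel we use elsewhere in the paper), the integral can be evaluated in closed form: \n$$\n\\int_{-\\infty}^{\\infty}\\left | \\mathds{1}(t \\ge 0) - \\Phi(t)\\right | \\ dt = 2\\int_0^{\\infty}\\left(1 - \\Phi(t)\\right) \\ dt = 2\\,\\mathbb{E}[\\max(Z, 0)] = \\sqrt{2\/\\pi} \\,, \\qquad Z \\sim N(0,1) \\,,\n$$\nso that in this case $K_1 = \\sqrt{2\/\\pi}\\,\\mathbb{E}(m(\\tilde{Q}))$. 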
Next: \n\\begin{align*}\n\\mathbb{M}^s(\\psi) - \\mathbb{M}^s(\\psi_0^s) & = \\mathbb{M}^s(\\psi) - \\mathbb{M}(\\psi) + \\mathbb{M}(\\psi) - \\mathbb{M}(\\psi_0) \\\\\n& \\qquad \\qquad + \\mathbb{M}(\\psi_0) - \\mathbb{M}(\\psi_0^s) + \\mathbb{M}(\\psi_0^s) -\\mathbb{M}^s(\\psi_0^s) \\\\ \n& = T_1 + T_2 + T_3 + T_4\n\\end{align*}\n\\noindent\nWe bound each summand separately: \n\\begin{enumerate}\n\\item $T_1 = \\mathbb{M}^s(\\psi) - \\mathbb{M}(\\psi) \\ge -K_1 \\sigma_n$ by equation \\eqref{eq:lin_bound_1}\\, \n\\item $T_2 = \\mathbb{M}(\\psi) - \\mathbb{M}(\\psi_0) \\ge u_-\\|\\psi - \\psi_0\\|$ by Lemma \\ref{lem:linear_curvature}\\,\n\\item $T_3 = \\mathbb{M}(\\psi_0) - \\mathbb{M}(\\psi_0^s) \\ge -u_+\\|\\psi_0^s - \\psi_0\\| \\ge -\\epsilon_1 \\sigma_n$, where one can take $\\epsilon_1$ as small as desired, as we have established $\\|\\psi_0^s - \\psi_0\\|\/\\sigma_n \\rightarrow 0$. This follows by Lemma \\ref{lem:linear_curvature} along with Lemma \\ref{bandwidth}\\, \n\\item $T_4 = \\mathbb{M}(\\psi_0^s) -\\mathbb{M}^s(\\psi_0^s) \\ge -K_1 \\sigma_n$ by equation \\eqref{eq:lin_bound_1}. \n\\end{enumerate}\nCombining, we have \n\\allowdisplaybreaks\n\\begin{align*}\n\\mathbb{M}^s(\\psi) - \\mathbb{M}^s(\\psi_0^s) & \\ge u_-\\|\\psi - \\psi_0\\| -(2K_1 + \\epsilon_1) \\sigma_n \\\\\n& \\ge ( u_-\/2)\\|\\psi - \\psi_0\\| \\hspace{0.2in} \\left[\\text{If} \\ \\|\\psi - \\psi_0\\| \\ge \\frac{2(2K_1 + \\epsilon_1)}{u_-}\\sigma_n\\right] \\\\\n& \\ge ( u_-\/4)\\|\\psi - \\psi_0^s\\| \n\\end{align*}\nwhere the last inequality holds for all large $n$ as proved in Lemma \\ref{bandwidth}. Using Lemma \\ref{bandwidth} again, we conclude that for any pair of positive constants $(\\epsilon_1, \\epsilon_2)$: \n$$\\|\\psi - \\psi_0^s\\| \\ge \\left(\\frac{2(2K_1 + \\epsilon_1)}{u_-}+\\epsilon_2\\right)\\sigma_n \\Rightarrow \\|\\psi - \\psi_0\\| \\ge \\frac{2(2K_1 + \\epsilon_1)}{u_-}\\sigma_n$$ for all large $n$, which implies: \n\\begin{align}\n& \\mathbb{M}^s(\\psi) - \\mathbb{M}^s(\\psi_0^s) \\notag \\\\\n& \\ge (u_-\/4) \\|\\psi - \\psi_0^s\\| \\mathds{1}\\left(\\|\\psi - \\psi_0^s\\| \\ge \\left(\\frac{2(2K_1 + \\epsilon_1)}{u_-}+\\epsilon_2\\right)\\sigma_n \\right) \\notag \\\\\n\\label{lb2} & \\ge (u_-\/4) \\|\\psi - \\psi_0^s\\| \\mathds{1}\\left(\\frac{\\|\\psi - \\psi_0^s\\|}{\\sigma_n} \\ge \\left(\\frac{7K_1}{u_-}\\right) \\right) \\hspace{0.2in} [\\text{for appropriate specifications of} \\ \\epsilon_1, \\epsilon_2] \\notag \\\\\n& := (u_-\/4) \\|\\psi - \\psi_0^s\\| \\mathds{1}\\left(\\frac{\\|\\psi - \\psi_0^s\\|}{\\sigma_n} \\ge \\mathcal{K} \\right)\n\\end{align}\n\n\\noindent\nIn the next part, we find the lower bound when $\\|\\psi - \\psi_0^s\\| \\le \\mathcal{K} \\sigma_n$. For the quadratic curvature, we perform a two-step Taylor expansion. Define $\\tilde \\eta = (\\tilde \\psi - \\tilde \\psi_0)\/\\sigma_n$. 
We have: \n\\allowdisplaybreaks \n\\begin{align}\n& \\nabla^2\\mathbb{M}^s(\\psi) \\notag\\\\\n& = \\frac{\\beta_0 - \\alpha_0}{2}\\frac{1}{\\sigma_n^2} \\mathbb{E}\\left(\\tilde{Q}\\tilde{Q}^{\\top} K''\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)\\left\\{\\mathds{1}(Q^{\\top}\\psi_0 \\le 0) - \\mathds{1}(Q^{\\top}\\psi_0 \\ge 0)\\right\\}\\right) \\notag\\\\\n& = \\frac{\\beta_0 - \\alpha_0}{2}\\frac{1}{\\sigma_n^2} \\mathbb{E}\\left(\\tilde{Q}\\tilde{Q}^{\\top} K''\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n} + \\tilde{Q}^{\\top}\\tilde \\eta \\right)\\left\\{\\mathds{1}(Q^{\\top}\\psi_0 \\le 0) - \\mathds{1}(Q^{\\top}\\psi_0 \\ge 0)\\right\\}\\right) \\notag\\\\\n& = \\frac{\\beta_0 - \\alpha_0}{2}\\frac{1}{\\sigma_n^2} \\mathbb{E}\\left[\\tilde{Q}\\tilde{Q}^{\\top} \\left[\\int_{-\\infty}^{0} K''\\left(\\frac{z}{\\sigma_n} + \\tilde{Q}^{\\top}\\tilde \\eta \\right) f_0(z |\\tilde{Q}) \\ dz \\right. \\right. \\notag \\\\ \n& \\left. \\left. \\qquad \\qquad \\qquad \\qquad -\\int_{0}^{\\infty} K''\\left(\\frac{z}{\\sigma_n} + \\tilde{Q}^{\\top}\\tilde \\eta \\right) f_0(z | \\tilde{Q}) \\ dz \\right]\\right] \\notag\\\\\n& = \\frac{\\beta_0 - \\alpha_0}{2}\\frac{1}{\\sigma_n} \\mathbb{E}\\left[\\tilde{Q}\\tilde{Q}^{\\top} \\left[\\int_{-\\infty}^{0} K''\\left(t+ \\tilde{Q}^{\\top}\\tilde \\eta \\right) f_0(\\sigma_n t |\\tilde{Q}) \\ dt \\right. \\right. \\notag \\\\\n& \\left. \\left. \\qquad \\qquad \\qquad \\qquad - \\int_{0}^{\\infty} K''\\left(t + \\tilde{Q}^{\\top}\\tilde \\eta \\right) f_0(\\sigma_n t | \\tilde{Q}) \\ dt \\right]\\right] \\notag\\\\\n& = \\frac{\\beta_0 - \\alpha_0}{2}\\frac{1}{\\sigma_n} \\mathbb{E}\\left[\\tilde{Q}\\tilde{Q}^{\\top} f_0(0| \\tilde{Q})\\left[\\int_{-\\infty}^{0} K''\\left(t+ \\tilde{Q}^{\\top}\\tilde \\eta \\right) \\ dt \\right. \\right. \\notag \\\\\n& \\left. \\left. \\qquad \\qquad \\qquad \\qquad - \\int_{0}^{\\infty} K''\\left(t + \\tilde{Q}^{\\top}\\tilde \\eta \\right) \\ dt \\right]\\right] + R \\notag\\\\\n\\label{eq:quad_eq_1} & =(\\beta_0 - \\alpha_0)\\frac{1}{\\sigma_n}\\mathbb{E}\\left[\\tilde{Q}\\tilde{Q}^{\\top} f_0(0| \\tilde{Q})K'(\\tilde{Q}^{\\top}\\tilde \\eta)\\right] + R \\,.\n\\end{align}\nAs we want a lower bound on the set $\\|\\psi - \\psi_0^s\\| \\le \\mathcal{K} \\sigma_n$, we have $\\|\\tilde \\eta\\| \\le \\mathcal{K}$. For the rest of the analysis, define \n\\begin{align*}\n\\Lambda(v_1, v_2) = \\mathbb{E}\\left[|v_1^{\\top}\\tilde{Q}|^2 f_0(0|\\tilde{Q})K'(\\tilde{Q}^{\\top}v_2) \\right] \\,, \\qquad \\|v_1\\| = 1 \\,, \\ \\|v_2\\| \\le \\mathcal{K} \\,.\n\\end{align*}\nClearly $\\Lambda \\ge 0$ and is continuous on this compact set, hence its infimum over the set is attained. Suppose $\\Lambda(v_1, v_2) = 0$ for some $(v_1, v_2)$. Then we have: \n\\begin{align*}\n\\mathbb{E}\\left[|v_1^{\\top}\\tilde{Q}|^2 f_0(0|\\tilde{Q})K'(\\tilde{Q}^{\\top}v_2) \\right] = 0 \\,,\n\\end{align*}\nwhich further implies $|v_1^{\\top}\\tilde{Q}| = 0$ almost surely and violates Assumption \\ref{as:eigenval_bound}. Hence the infimum of $\\Lambda$ over this set is strictly positive. On the other hand, for the remainder term of equation \\eqref{eq:quad_eq_1}: \nfix $\\nu \\in S^{p-1}$. Then: \n\\allowdisplaybreaks\n\\begin{align}\n& \\left| \\nu^{\\top} R \\nu \\right| \\notag \\\\\n& = \\left|\\frac{1}{\\sigma_n} \\mathbb{E}\\left[\\left(\\nu^{\\top}\\tilde{Q}\\right)^2 \\left[\\int_{-\\infty}^{0} K''\\left(t+ \\tilde{Q}^{\\top}\\tilde \\eta \\right) (f_0(\\sigma_n t |\\tilde{Q}) - f_0(0|\\tilde{Q})) \\ dt \\right. \\right. \\right. \\notag \\\\\n& \\qquad \\qquad \\qquad \\qquad \\left. 
\\left. \\left. - \\int_{0}^{\\infty} K''\\left(t + \\tilde{Q}^{\\top}\\tilde \\eta \\right) (f_0(\\sigma_n t |\\tilde{Q}) - f_0(0|\\tilde{Q})) \\ dt \\right]\\right]\\right| \\notag\\\\\n& \\le \\mathbb{E} \\left[\\left(\\nu^{\\top}\\tilde{Q}\\right)^2h(\\tilde{Q}) \\int_{-\\infty}^{\\infty} \\left|K''\\left(t+ \\tilde{Q}^{\\top}\\tilde \\eta \\right)\\right| |t| \\ dt\\right] \\notag\\\\\n& \\le \\mathbb{E} \\left[\\left(\\nu^{\\top}\\tilde{Q}\\right)^2h(\\tilde{Q}) \\int_{-1}^{1} \\left|K''\\left(t\\right)\\right| |t - \\tilde{Q}^{\\top}\\tilde \\eta | \\ dt\\right] \\notag\\\\\n\\label{eq:quad_eq_3} & \\le \\mathbb{E} \\left[\\left(\\nu^{\\top}\\tilde{Q}\\right)^2h(\\tilde{Q})(1+ \\mathcal{K}\\|\\tilde{Q}\\|) \\int_{-1}^{1} \\left|K''\\left(t\\right)\\right| \\ dt\\right] = C_1 \\hspace{0.2in} [\\text{say}]\n\\end{align}\nby Assumption \\ref{as:distribution} and Assumption \\ref{as:derivative_bound}, where we have used $|t - \\tilde{Q}^{\\top}\\tilde \\eta| \\le 1 + \\mathcal{K}\\|\\tilde{Q}\\|$ for $|t| \\le 1$ and $\\|\\tilde \\eta\\| \\le \\mathcal{K}$. By a two-step Taylor expansion (the first order term vanishes as $\\nabla \\mathbb{M}^s(\\psi_0^s) = 0$), we have, for some intermediate point $\\psi_n^*$ between $\\psi$ and $\\psi_0^s$: \n\\begin{align*}\n\\mathbb{M}^s(\\psi) - \\mathbb{M}^s(\\psi_0^s) & = \\frac12 (\\psi - \\psi_0^s)^{\\top} \\nabla^2\\mathbb{M}^s(\\psi^*_n) (\\psi - \\psi_0^s) \\\\\n& \\ge \\frac{(\\beta_0 - \\alpha_0)}{2}\\left(\\min_{\\|v_1\\| = 1, \\|v_2 \\| \\le \\mathcal{K}} \\Lambda(v_1, v_2)\\right) \\frac{\\|\\psi - \\psi_0^s\\|^2}{\\sigma_n} - \\frac{C_1}{2} \\, \\|\\psi - \\psi_0^s\\|^2_2 \\\\\n& \\gtrsim \\frac{\\|\\psi - \\psi_0^s\\|^2_2}{\\sigma_n} \\,\n\\end{align*}\nfor all large $n$ (since $\\sigma_n \\to 0$). This concludes the proof of the curvature. \n\\\\\\\\\n\\noindent \nFinally, for fixed $\\zeta > 0$, we bound the modulus of continuity:\n$$\\mathbb{E}\\left(\\sup_{d_n(\\psi, \\psi_0^s) \\le \\zeta} \\left|(\\mathbb{M}^s_n-\\mathbb{M}^s)(\\psi) - (\\mathbb{M}^s_n-\\mathbb{M}^s)(\\psi_0^s)\\right|\\right) \\,.$$ \nThe proof is similar to that of Lemma \\ref{lem:rate_smooth} and therefore we sketch the main steps briefly. Define the estimating function $f_\\psi$ as: \n$$\nf_\\psi(Y, Q) = (Y - \\gamma)\\left(1 - K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)\\right) \n$$\nand the collection of functions $\\mathcal{F}_\\zeta = \\{f_\\psi - f_{\\psi_0^s}: d_n(\\psi, \\psi_0^s) \\le \\zeta\\}$. That $\\mathcal{F}_\\zeta$ has finite VC dimension follows from the same argument used to show $\\mathcal{G}_n$ has finite VC dimension in the proof of Lemma \\ref{conv-prob}. Now, to bound the modulus of continuity, we use Lemma 2.14.1 of \\cite{vdvw96}, which implies: \n$$\n\\sqrt{n}\\mathbb{E}\\left(\\sup_{d_n(\\psi, \\psi_0^s) \\le \\zeta} \\left|(\\mathbb{M}^s_n-\\mathbb{M}^s)(\\psi) - (\\mathbb{M}^s_n-\\mathbb{M}^s)(\\psi_0^s)\\right|\\right) \\lesssim \\mathcal{J}(1, \\mathcal{F}_\\zeta) \\sqrt{PF_\\zeta^2}\n$$\nwhere $F_\\zeta(Y, Q)$ is the envelope of $\\mathcal{F}_\\zeta$ defined as: \n\\begin{align*}\nF_\\zeta(Y, Q) & = \\sup_{d_n(\\psi, \\psi_0^s) \\le \\zeta}\\left|(Y - \\gamma)\\left(K\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)-K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)\\right)\\right| \\\\\n& = \\left|(Y - \\gamma)\\right| \\sup_{d_n(\\psi, \\psi_0^s) \\le \\zeta} \\left|\\left(K\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)-K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)\\right)\\right|\n\\end{align*}\nand $\\mathcal{J}(1, \\mathcal{F}_\\zeta)$ is the entropy integral, which can be bounded above by a constant independent of $n$ as the class $\\mathcal{F}_\\zeta$ has finite VC dimension. As in the proof of Lemma \\ref{lem:rate_smooth}, we here consider two separate cases: (1) $\\zeta \\le \\sqrt{\\mathcal{K} \\sigma_n}$ and (2) $\\zeta > \\sqrt{\\mathcal{K} \\sigma_n}$. 
In the first case, we have $\\sup_{d_n(\\psi, \\psi_0^s) \\le \\zeta} \\|\\psi - \\psi_0^s\\| = \\zeta \\sqrt{\\sigma_n}$. This further implies: \n\\begin{align*}\n & \\sup_{d_n(\\psi, \\psi_0^s) \\le \\zeta} \\left|\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)\\right\\}\\right|^2 \\\\\n & \\le \\max\\left\\{\\left|\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n} + \\|\\tilde Q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\right)\\right\\}\\right|^2, \\right. \\\\\n & \\qquad \\qquad \\qquad \\qquad \\left. \\left|\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n} - \\|\\tilde Q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\right)\\right\\}\\right|^2\\right\\} \\\\\n & := \\max\\{T_1, T_2\\} \\,.\n\\end{align*}\nTherefore, bounding $\\mathbb{E}[F_\\zeta^2(Y, Q)]$ is equivalent to bounding both $\\mathbb{E}[(Y- \\gamma)^2 T_1]$ and $\\mathbb{E}[(Y - \\gamma)^2 T_2]$ separately, which is, in turn, equivalent to bounding $\\mathbb{E}[T_1]$ and $\\mathbb{E}[T_2]$, as $|Y - \\gamma| \\le 1$. These bounds follow from calculations similar to those in the proof of Lemma \\ref{lem:rate_smooth}, and are hence skipped. Finally, in this case we have: $$\n\\mathbb{E}[F_\\zeta^2(Y, Q)] \\lesssim \\zeta \\sqrt{\\sigma_n} \\,.\n$$ \nThe other case, $\\zeta > \\sqrt{\\mathcal{K} \\sigma_n}$, also follows by calculations similar to those in the proof of Lemma \\ref{lem:rate_smooth}, which yield: \n$$\n\\mathbb{E}[F_\\zeta^2(Y, Q)] \\lesssim \\zeta^2 \\,.\n$$\n\n\\noindent\nUsing this in the maximal inequality yields: \n\\begin{align*}\n\\sqrt{n}\\mathbb{E}\\left(\\sup_{d_n(\\psi, \\psi_0^s) \\le \\zeta} \\left|(\\mathbb{M}^s_n-\\mathbb{M}^s)(\\psi) - (\\mathbb{M}^s_n-\\mathbb{M}^s)(\\psi_0^s)\\right|\\right) & \\lesssim \\sqrt{\\zeta}\\sigma^{1\/4}_n\\mathds{1}_{\\zeta \\le \\sqrt{\\mathcal{K} \\sigma_n}} + \\zeta \\mathds{1}_{\\zeta > \\sqrt{\\mathcal{K} \\sigma_n}} \\\\\n& := \\phi_n(\\zeta) \\,\n\\end{align*}\nThis implies (following the same argument as in the proof of Lemma \\ref{lem:rate_smooth}): \n$$\nn^{2\/3}\\sigma_n^{-1\/3}d_n^2(\\hat \\psi^s, \\psi_0^s) = O_p(1) \\,.\n$$\nNow as $n^{2\/3}\\sigma_n^{-1\/3} \\gg \\sigma_n^{-1}$ (since $n\\sigma_n \\to \\infty$), we have: \n$$\n\\frac{1}{\\sigma_n}d_n^2(\\hat \\psi^s, \\psi_0^s) = o_p(1) \\,,\n$$\nwhich further indicates\n\\begin{align}\n\\label{rate1} & n^{2\/3}\\sigma_n^{-1\/3}\\left[\\frac{\\|\\hat \\psi^s - \\psi_0^s\\|^2}{\\sigma_n} \\mathds{1}(\\|\\hat \\psi^s - \\psi_0^s\\| \\le \\mathcal{K}\\sigma_n) \\right. \\notag \\\\\n& \\qquad \\qquad \\qquad \\left. + \\|\\hat \\psi^s - \\psi_0^s\\| \\mathds{1}(\\|\\hat \\psi^s - \\psi_0^s\\|\\ge \\mathcal{K}\\sigma_n)\\right] = O_P(1)\n\\end{align}\nThis implies: \n\\begin{enumerate}\n\\item $\\frac{n^{1\/3}}{\\sigma_n^{2\/3}}\\|\\hat \\psi^s - \\psi_0^s\\| \\mathds{1}(\\|\\hat \\psi^s - \\psi_0^s\\|\\le \\mathcal{K}\\sigma_n) = O_P(1)$ (taking the square root of the first term)\n\\item $\\frac{n^{2\/3}}{\\sigma_n^{1\/3}}\\|\\hat \\psi^s - \\psi_0^s\\| \\mathds{1}(\\|\\hat \\psi^s - \\psi_0^s\\| \\ge \\mathcal{K}\\sigma_n) = O_P(1)$\n\\end{enumerate}\nTherefore: \n\\begin{align*}\n& \\frac{n^{1\/3}}{\\sigma_n^{2\/3}}\\|\\hat \\psi^s - \\psi_0^s\\| \\mathds{1}(\\|\\hat \\psi^s - \\psi_0^s\\| \\le \\mathcal{K}\\sigma_n) \\\\\n& \\qquad \\qquad \\qquad + \\frac{n^{2\/3}}{\\sigma_n^{1\/3}}\\|\\hat \\psi^s - \\psi_0^s\\| \\mathds{1}(\\|\\hat \\psi^s - \\psi_0^s\\| \\ge \\mathcal{K}\\sigma_n) = O_p(1) \\,.\n\\end{align*}\ni.e. 
\n$$\n\\left(\\frac{n^{1\/3}}{\\sigma_n^{2\/3}} \\wedge \\frac{n^{2\/3}}{\\sigma_n^{1\/3}}\\right)\\|\\hat \\psi^s - \\psi_0^s\\| = O_p(1) \\,.\n$$\nNow $n^{1\/3}\/\\sigma_n^{2\/3} \\gg 1\/\\sigma_n$ if and only if $n^{1\/3} \\gg \\sigma_n^{-1\/3}$, i.e. $n\\sigma_n \\gg 1$, which is true as per our assumption. On the other hand, $n^{2\/3}\/\\sigma_n^{1\/3} \\gg 1\/\\sigma_n$ iff $n\\sigma_n \\gg 1$, which is also true. Therefore we have: \n$$\n\\frac{\\|\\hat \\psi^s - \\psi_0^s\\|}{\\sigma_n} = O_p(1) \\,.\n$$\nThis completes the proof. \n\n\\end{proof}\n\n\\section{Proof of Theorem \\ref{thm:regression}}\n\n\\section{Appendix}\nIn this section, we present the proof of Lemma \\ref{lem:rate_smooth}, which lies at the heart of our refined analysis of the smoothed change plane estimator. Proofs of the other lemmas and our results for the binary response model are available in Appendix \\ref{sec:supp_B}. \n\\subsection{Proof of Lemma \\ref{lem:rate_smooth}}\n\n\n\\begin{proof}\nThe proof of Lemma \\ref{lem:rate_smooth} is quite long, hence we further break it into a few more lemmas. \n\\begin{lemma}\n\\label{lem:pop_curv_nonsmooth}\nUnder Assumption \\ref{eq:assm}, there exists $u_- , u_+ > 0$ such that: \n$$\nu_- d^2(\\theta, \\theta_0) \\le \\mathbb{M}(\\theta) - \\mathbb{M}(\\theta_0) \\le u_+ d^2(\\theta, \\theta_0) \\,,\n$$\nfor $\\theta$ in a (non-shrinking) neighborhood of $\\theta_0$, where: \n$$\nd(\\theta, \\theta_0) := \\sqrt{\\|\\beta - \\beta_0\\|^2 + \\|\\delta - \\delta_0\\|^2 + \\|\\psi - \\psi_0\\|} \\,.\n$$\n\\end{lemma}\n\n\n\n\n\n\\begin{lemma}\n\\label{lem:uniform_smooth}\nUnder Assumption \\ref{eq:assm} the smoothed loss function $\\mathbb{M}^s(\\theta)$ is uniformly close to the non-smoothed loss function $\\mathbb{M}(\\theta)$: \n$$\n\\sup_{\\theta \\in \\Theta}\\left|\\mathbb{M}^s(\\theta) - \\mathbb{M}(\\theta)\\right| \\le K_1 \\sigma_n \\,,\n$$ \nfor some constant $K_1$. \n\\end{lemma}\n\n\n\n\n\n\n\n\n\n\\begin{lemma}\n\\label{lem:pop_smooth_curvarture}\nUnder certain assumptions: \n\\begin{align*}\n\\mathbb{M}^s(\\theta) - \\mathbb{M}^s(\\theta_0^s) & \\gtrsim \\|\\beta - \\beta_0^s\\|^2 + \\|\\delta - \\delta_0^s\\|^2 \\\\\n& \\qquad \\qquad + \\frac{\\|\\psi - \\psi_0^s\\|^2}{\\sigma_n} \\mathds{1}_{\\|\\psi - \\psi_0^s\\| \\le \\mathcal{K}\\sigma_n} + \\|\\psi - \\psi_0^s\\| \\mathds{1}_{\\|\\psi - \\psi_0^s\\| > \\mathcal{K}\\sigma_n} \\\\\n& := d_*^2(\\theta, \\theta_0^s) \\,,\n\\end{align*}\nfor some constant $\\mathcal{K}$ and for all $\\theta$ in a neighborhood of $\\theta_0$ which does not change with $n$. \n\\end{lemma}\n\nThe proofs of the three lemmas above can be found in Appendix \\ref{sec:supp_B}. We next move to the proof of Lemma \\ref{lem:rate_smooth}. In Lemma \\ref{lem:pop_smooth_curvarture} we have established the curvature of the smooth loss function $\\mathbb{M}^s(\\theta)$ around $\\theta_0^s$. To determine the rate of convergence of $\\hat \\theta^s$ to $\\theta_0^s$, we further need an upper bound on the modulus of continuity of our loss function. 
Towards that end, first recall that our loss function is:\n$$\nf_{\\theta}(Y, X, Q) = \\left(Y - X^{\\top}\\beta\\right)^2 + \\left[-2\\left(Y - X^{\\top}\\beta\\right)X^{\\top}\\delta + (X^{\\top}\\delta)^2\\right] K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)\n$$\nThe centered loss function can be written as: \n\\begin{align}\n & f_{\\theta}(Y, X, Q) - f_{\\theta_0^s}(Y, X, Q) \\notag \\\\\n & = \\left(Y - X^{\\top}\\beta\\right)^2 + \\left[-2\\left(Y - X^{\\top}\\beta\\right)X^{\\top}\\delta + (X^{\\top}\\delta)^2\\right] K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right) \\notag \\\\\n & \\qquad \\qquad \\qquad \\qquad - \\left(Y - X^{\\top}\\beta_0^s\\right)^2 - \\left[-2\\left(Y - X^{\\top}\\beta_0^s\\right)X^{\\top}\\delta_0^s + (X^{\\top}\\delta_0^s)^2\\right] K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) \\notag \\\\\n & = \\left(Y - X^{\\top}\\beta\\right)^2 + \\left[-2\\left(Y - X^{\\top}\\beta\\right)X^{\\top}\\delta + (X^{\\top}\\delta)^2\\right] K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right) \\notag \\\\\n & \\qquad \\qquad \\qquad \\qquad - \\left(Y - X^{\\top}\\beta_0^s\\right)^2 - \\left[-2\\left(Y - X^{\\top}\\beta_0^s\\right)X^{\\top}\\delta_0^s + (X^{\\top}\\delta_0^s)^2\\right] K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right) \\notag \\\\\n & \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad - \\left[-2\\left(Y - X^{\\top}\\beta_0^s\\right)X^{\\top}\\delta_0^s + (X^{\\top}\\delta_0^s)^2\\right] \\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)\\right\\} \\notag \\\\\n & = \\underbrace{\\left(Y - X^{\\top}\\beta\\right)^2 - \\left(Y - X^{\\top}\\beta_0^s\\right)^2}_{M_1} \\notag \\\\\n & \\qquad + \\underbrace{\\left\\{ \\left[-2\\left(Y - X^{\\top}\\beta\\right)X^{\\top}\\delta + (X^{\\top}\\delta)^2\\right] - \\left[-2\\left(Y - X^{\\top}\\beta_0^s\\right)X^{\\top}\\delta_0^s + (X^{\\top}\\delta_0^s)^2\\right]\\right\\} K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)}_{M_2} \\notag \\\\\n & \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad + \\underbrace{\\left[2\\left(Y - X^{\\top}\\beta_0^s\\right)X^{\\top}\\delta_0^s - (X^{\\top}\\delta_0^s)^2\\right] \\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)\\right\\}}_{M_3} \\notag \\\\\n \\label{eq:expand_f} & := M_1 + M_2 + M_3\n\\end{align}\nFor the rest of the analysis, fix $\\zeta > 0$ and consider the collection of functions $\\mathcal{F}_{\\zeta}$ which is defined as: \n$$\n\\mathcal{F}_{\\zeta} = \\left\\{f_\\theta - f_{\\theta_0^s}: d_*(\\theta, \\theta_0^s) \\le \\zeta\\right\\} \\,.\n$$\nFirst note that $\\mathcal{F}_\\zeta$ has bounded uniform entropy integral (henceforth BUEI) uniformly in $\\zeta$. To establish this, it is enough to argue that the collection $\\mathcal{F} = \\{ f_\\theta : \\theta \\in \\Theta\\}$ is BUEI. Note that the class of functions $X \\mapsto X^{\\top}\\beta$ has VC dimension $p$, and so does the class $X \\mapsto X^{\\top}(\\beta + \\delta)$. Therefore the class of functions $(X, Y) \\mapsto (Y - X^{\\top}(\\beta + \\delta))^2 - (Y - X^{\\top}\\beta)^2$ is also BUEI, as composition with a monotone function (here $x^2$) and taking differences preserves this property. Further, the class of linear functions $Q \\mapsto Q^{\\top}\\psi$ also has finite VC dimension (depending only on the dimension of $Q$), and the VC dimension does not change upon scaling by $\\sigma_n$. 
Therefore the class of functions $Q \\mapsto Q^{\\top}\\psi\/\\sigma_n$ has the same VC dimension as $Q \\mapsto Q^{\\top}\\psi$, which is independent of $n$. Again, as composition with a fixed monotone function preserves the BUEI property, the class of functions $Q \\mapsto K(Q^{\\top}\\psi\/\\sigma_n)$ is also BUEI. As the product of two BUEI classes is BUEI, we conclude that $\\mathcal{F}$ (and hence $\\mathcal{F}_\\zeta$) is BUEI. \n\\\\\\\\\n\\noindent\nNow to bound the modulus of continuity we use Lemma 2.14.1 of \\cite{vdvw96}: \n\\begin{equation*}\n\\label{eq:moc_bound}\n\\sqrt{n}\\mathbb{E}\\left[\\sup_{\\theta: d_*(\\theta, \\theta_0^s) \\le \\zeta} \\left|\\left(\\mathbb{P}_n - P\\right)\\left(f_\\theta - f_{\\theta_0^s}\\right)\\right|\\right] \\lesssim \\mathcal{J}(1, \\mathcal{F}_\\zeta) \\sqrt{\\mathbb{E}\\left[F_{\\zeta}^2(X, Y, Q)\\right]}\n\\end{equation*}\nwhere $F_\\zeta$ is some envelope function of $\\mathcal{F}_\\zeta$. As the function class $\\mathcal{F}_\\zeta$ has bounded entropy integral, $\\mathcal{J}(1, \\mathcal{F}_\\zeta)$ can be bounded above by some constant independent of $n$. We next calculate the order of the envelope function $F_\\zeta$. Recall that, by definition, an envelope function satisfies: \n$$\nF_{\\zeta}(X, Y, Q) \\ge \\sup_{\\theta: d_*(\\theta, \\theta_0^s) \\le \\zeta} \\left| f_{\\theta} - f_{\\theta_0^s}\\right| \\,,\n$$\nand we can write $f_\\theta - f_{\\theta_0^s} = M_1 + M_2 + M_3$, which follows from equation \\eqref{eq:expand_f}. Therefore, to find the order of the envelope function, it is enough to find the order of bounds of $M_1, M_2, M_3$ over the set $d_*(\\theta, \\theta_0^s) \\le \\zeta$. We start with $M_1$: \n\\begin{align}\n \\sup_{d_*(\\theta, \\theta_0^s) \\le \\zeta}|M_1| & = \\sup_{d_*(\\theta, \\theta_0^s) \\le \\zeta}\\left|\\left(Y - X^{\\top}\\beta\\right)^2 - \\left(Y - X^{\\top}\\beta_0^s\\right)^2\\right| \\notag \\\\\n & = \\sup_{d_*(\\theta, \\theta_0^s) \\le \\zeta} \\left|2YX^{\\top}(\\beta_0^s - \\beta) + (X^{\\top}\\beta)^2 - (X^{\\top}\\beta_0^s)^2\\right| \\notag \\\\\n & \\le \\sup_{d_*(\\theta, \\theta_0^s) \\le \\zeta} \\|\\beta - \\beta_0^s\\| \\left[2|Y|\\|X\\| + (2\\|\\beta_0^s\\| + \\zeta)\\|X\\|^2\\right] \\notag \\\\\n \\label{eq:env_1} & \\le \\zeta\\left[2|Y|\\|X\\| + (2\\|\\beta_0^s\\| + \\zeta)\\|X\\|^2\\right] := F_{1, \\zeta}(X, Y, Q) \\hspace{0.1in} [\\text{Envelope function of }M_1]\n\\end{align}\nand the second term: \n\\allowdisplaybreaks\n\\begin{align}\n & \\sup_{d_*(\\theta, \\theta_0^s) \\le \\zeta} |M_2| \\notag \\\\\n & = \\sup_{d_*(\\theta, \\theta_0^s) \\le \\zeta} \\left|\\left\\{\\left[-2\\left(Y - X^{\\top}\\beta\\right)X^{\\top}\\delta + (X^{\\top}\\delta)^2\\right] \\right. \\right. \\notag \\\\\n & \\qquad \\qquad \\qquad \\qquad \\left. \\left. - \\left[-2\\left(Y - X^{\\top}\\beta_0^s\\right)X^{\\top}\\delta_0^s + (X^{\\top}\\delta_0^s)^2\\right]\\right\\}\\right|K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right) \\notag \\\\\n & \\le \\sup_{d_*(\\theta, \\theta_0^s) \\le \\zeta} \\left|\\left\\{\\left[-2\\left(Y - X^{\\top}\\beta\\right)X^{\\top}\\delta + (X^{\\top}\\delta)^2\\right] \\right. \\right. \\notag \\\\\n & \\qquad \\qquad \\qquad \\qquad \\left. \\left. - \\left[-2\\left(Y - X^{\\top}\\beta_0^s\\right)X^{\\top}\\delta_0^s + (X^{\\top}\\delta_0^s)^2\\right]\\right\\}\\right| \\notag \\\\\n & = \\sup_{d_*(\\theta, \\theta_0^s) \\le \\zeta} \\left|2Y(X^{\\top}\\delta_0^s - X^{\\top}\\delta) + 2\\left[(X^{\\top}\\beta)(X^{\\top}\\delta) - (X^{\\top}\\beta_0^s)(X^{\\top}\\delta_0^s)\\right] \\right. 
\\notag \\\\\n & \\qquad \\qquad \\qquad \\qquad \\left. + (X^{\\top}\\delta)^2 - (X^{\\top}\\delta_0^s)^2\\right| \\notag \\\\\n & \\le \\sup_{d_*(\\theta, \\theta_0^s) \\le \\zeta} \\left\\{2\\|\\delta - \\delta_0^s\\||Y|\\|X\\| + 2\\|\\beta - \\beta_0^s\\|\\|X\\|^2\\|\\delta\\| \\right. \\notag \\\\\n & \\qquad \\qquad \\qquad \\qquad \\left. + 2\\|\\delta - \\delta_0^s\\|\\|X\\|^2\\|\\beta_0^s\\| + \\|X\\|^2\\|\\delta + \\delta_0^s\\|\\|\\delta - \\delta_0^s\\|\\right\\} \\notag \\\\\n & \\le \\zeta \\left[2|Y|\\|X\\| + 2\\|X\\|^2(\\|\\delta_0^s\\| + \\zeta) + 2\\|X\\|^2\\|\\beta_0^s\\| + \\|X\\|^2(2\\|\\delta_0^s\\| + \\zeta)\\right] \\notag \\\\ \n \\label{eq:env_2}& := F_{2, \\zeta}(X, Y, Q) \\hspace{0.1in} [\\text{Envelope function of }M_2]\n \\end{align}\nFor the third term, note that: \n\\begin{align*}\n& \\sup_{d_*(\\theta, \\theta_0^s) \\le \\zeta} |M_3| \\\\\n& \\le \\left|\\left[-2\\left(Y - X^{\\top}\\beta_0^s\\right)X^{\\top}\\delta_0^s + (X^{\\top}\\delta_0^s)^2\\right]\\right| \\times \\sup_{d_*(\\theta, \\theta_0^s) \\le \\zeta} \\left|\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)\\right\\}\\right| \\\\\n& := F_{3, \\zeta} (X, Y, Q)\n\\end{align*}\nHenceforth, we define the envelope function to be $F_\\zeta = F_{1, \\zeta} + F_{2, \\zeta} + F_{3, \\zeta}$. Hence we have, by the triangle inequality: \n$$\n\\sqrt{\\mathbb{E}\\left[F_{\\zeta}^2(X, Y, Q)\\right]} \\le \\sum_{i=1}^3 \\sqrt{\\mathbb{E}\\left[F_{i, \\zeta}^2(X, Y, Q)\\right]}\n$$\nFrom equations \\eqref{eq:env_1} and \\eqref{eq:env_2} we have: \n\\begin{equation}\n\\label{eq:moc_bound_2}\n\\sqrt{\\mathbb{E}\\left[F_{1, \\zeta}^2(X, Y, Q)\\right]} + \\sqrt{\\mathbb{E}\\left[F_{2, \\zeta}^2(X, Y, Q)\\right]} \\lesssim \\zeta \\,.\n\\end{equation}\nFor $F_{3, \\zeta}$, first note that: \n\\begin{align*}\n & \\mathbb{E}\\left[\\left|\\left[-2\\left(Y - X^{\\top}\\beta_0^s\\right)X^{\\top}\\delta_0^s + (X^{\\top}\\delta_0^s)^2\\right]\\right|^2 \\mid Q\\right] \\\\\n & \\le 8\\mathbb{E}\\left[\\left(Y - X^{\\top}\\beta_0^s\\right)^2(X^{\\top}\\delta_0^s)^2 \\mid Q\\right] + 2\\mathbb{E}[(X^{\\top}\\delta_0^s)^4 \\mid Q] \\\\\n & \\lesssim m_4(Q) \\,,\n\\end{align*}\nwhere $m_4(Q)$ is defined in Assumption \\ref{eq:assm} and the suppressed constant depends only on $\\|\\beta_0 - \\beta_0^s\\|$, $\\|\\delta_0\\|$, $\\|\\delta_0^s\\|$ and $\\sigma_{\\epsilon}^2$. In this part, we have to tackle the dichotomous behavior of $\\psi$ around $\\psi_0^s$ carefully. Henceforth define $d_*^2(\\psi, \\psi_0^s)$ as: \n\\begin{align*}\n d_*^2(\\psi, \\psi_0^s) = & \\frac{\\|\\psi - \\psi_0^s\\|^2}{\\sigma_n}\\mathds{1}_{\\|\\psi - \\psi_0^s\\| \\le \\mathcal{K}\\sigma_n} + \\|\\psi - \\psi_0^s\\|\\mathds{1}_{\\|\\psi - \\psi_0^s\\| > \\mathcal{K}\\sigma_n} \n\\end{align*}\nThis is a slight abuse of notation, but the reader should think of it as the $\\psi$-component of $d_*^2(\\theta, \\theta_0^s)$. Define $B_{\\zeta}(\\psi_0^s)$ to be the set of all $\\psi$ such that $d^2_*(\\psi, \\psi_0^s) \\le \\zeta^2$. 
We can decompose $B_{\\zeta}(\\psi_0^s)$ as a disjoint union of two sets: \n\\begin{align*}\n B_{\\zeta, 1}(\\psi_0^s) & = \\left\\{\\psi: d^2_*(\\psi, \\psi_0^s) \\le \\zeta^2, \\|\\psi - \\psi_0^s\\| \\le \\mathcal{K}\\sigma_n\\right\\} \\\\\n & = \\left\\{\\psi:\\frac{\\|\\psi - \\psi_0^s\\|^2}{\\sigma_n} \\le \\zeta^2, \\|\\psi - \\psi_0^s\\| \\le \\mathcal{K}\\sigma_n\\right\\} \\\\\n & = \\left\\{\\psi:\\|\\psi - \\psi_0^s\\| \\le \\zeta \\sqrt{\\sigma_n}, \\|\\psi - \\psi_0^s\\| \\le \\mathcal{K}\\sigma_n\\right\\} \\\\\\\\\n B_{\\zeta, 2}(\\psi_0^s) & = \\left\\{\\psi: d^2_*(\\psi, \\psi_0^s) \\le \\zeta^2, \\|\\psi - \\psi_0^s\\| > \\mathcal{K}\\sigma_n\\right\\} \\\\\n & = \\left\\{\\psi: \\|\\psi - \\psi_0^s\\| \\le \\zeta^2, \\|\\psi - \\psi_0^s\\| > \\mathcal{K}\\sigma_n\\right\\} \n\\end{align*}\nAssume $\\mathcal{K} > 1$. The case where $\\mathcal{K} < 1$ follows from similar calculations and is hence skipped for brevity. Consider the following two cases: \n\\\\\\\\\n\\noindent\n{\\bf Case 1: }Suppose $\\zeta \\le \\sqrt{\\mathcal{K}\\sigma_n}$. Then $B_{\\zeta, 2} = \\emptyset$. Also, as $\\mathcal{K} > 1$, we have $\\zeta\\sqrt{\\sigma_n} \\le \\sqrt{\\mathcal{K}}\\sigma_n \\le \\mathcal{K}\\sigma_n$. Hence we have: \n$$\n\\sup_{d_*^2(\\psi, \\psi_0^s) \\le \\zeta^2}\\|\\psi - \\psi_0^s\\| = \\sup_{B_{\\zeta, 1}}\\|\\psi - \\psi_0^s\\| = \\zeta\\sqrt{\\sigma_n} \\,.\n$$\nThis implies: \n\\begin{align*}\n & \\sup_{d_*(\\theta, \\theta_0^s) \\le \\zeta} \\left|\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)\\right\\}\\right|^2 \\\\\n & \\le \\max\\left\\{\\left|\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n} + \\|\\tilde Q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\right)\\right\\}\\right|^2, \\right. \\\\\n & \\qquad \\qquad \\qquad \\left. 
\\left|\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n} - \\|\\tilde Q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\right)\\right\\}\\right|^2\\right\\} \\\\\n & := \\max\\{T_1, T_2\\} \\,.\n\\end{align*}\nTherefore we have: \n$$\n\\mathbb{E}\\left[F^2_{3, \\zeta}(X, Y, Q)\\right] \\lesssim \\mathbb{E}[m_4(Q) T_1] + \\mathbb{E}[m_4(Q) T_2] \\,.\n$$\nNow: \n\\begin{align}\n & \\mathbb{E}[m_4(Q) T_1] \\notag \\\\\n & = \\mathbb{E}\\left[m_4(Q) \\left|\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n} + \\|\\tilde Q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\right)\\right\\}\\right|^2\\right] \\notag \\\\\n & = \\sigma_n \\int_{\\mathbb{R}^{p-1}}\\int_{-\\infty}^{\\infty} m_4(\\sigma_nt - \\tilde q^{\\top}\\tilde \\psi_0^s, \\tilde q) \\left|K\\left(t\\right) - K\\left(t + \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\right)\\right|^2 \\ f_s(\\sigma_nt\\mid \\tilde q) \\ dt \\ f(\\tilde q) \\ d\\tilde q \\notag\\\\\n & \\le \\sigma_n \\int_{\\mathbb{R}^{p-1}}\\int_{-\\infty}^{\\infty} m_4(\\sigma_nt - \\tilde q^{\\top}\\tilde \\psi_0^s, \\tilde q) \\left|K\\left(t\\right) - K\\left(t + \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\right)\\right| \\ f_s(\\sigma_nt\\mid \\tilde q) \\ dt \\ f(\\tilde q) \\ d\\tilde q \\notag \\\\\n & = \\sigma_n \\int_{\\mathbb{R}^{p-1}}\\int_{-\\infty}^{\\infty}m_4(\\sigma_nt - \\tilde q^{\\top}\\tilde \\psi_0^s, \\tilde q) \\int_{t}^{t + \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}} K'(s) \\ ds \\ f_s(\\sigma_nt\\mid \\tilde q) \\ dt \\ f(\\tilde q) \\ d\\tilde q \\notag \\\\\n & = \\sigma_n \\int_{\\mathbb{R}^{p-1}}\\int_{-\\infty}^{\\infty}K'(s) \\int_{s- \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}}^s m_4(\\sigma_nt - \\tilde q^{\\top}\\tilde \\psi_0^s, \\tilde q) f_s(\\sigma_nt\\mid \\tilde q) \\ dt \\ ds \n \\ f(\\tilde q) \\ d\\tilde q \\notag \\\\\n & = \\zeta \\sqrt{\\sigma_n} \\mathbb{E}[\\|\\tilde Q\\|m_4(-\\tilde Q^{\\top}\\tilde \\psi_0^s, \\tilde Q)f_s(0 \\mid \\tilde Q)] + R \\notag \n\\end{align}\nwhere as before we split $R$ into three parts $R = R_1 + R_2 + R_3$. 
\n\\begin{align}\n \\left|R_1\\right| & = \\left|\\sigma_n \\int_{\\mathbb{R}^{p-1}}\\int_{-\\infty}^{\\infty}K'(s) \\int_{s- \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}}^s m_4(- \\tilde q^{\\top}\\tilde \\psi_0^s, \\tilde q) (f_s(\\sigma_nt\\mid \\tilde q) - f_s(0 \\mid \\tilde q)) \\ dt \\ ds \\ f(\\tilde q) \\ d\\tilde q\\right| \\notag \\\\\n \\label{eq:r1_env_1} & \\le \\sigma_n^2 \\int_{\\mathbb{R}^{p-1}}m_4(- \\tilde q^{\\top}\\tilde \\psi_0^s, \\tilde q)\\dot f_s(\\tilde q) \\int_{-\\infty}^{\\infty}K'(s) \\int_{s- \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}}^s |t| dt\\ ds \\ f(\\tilde q) \\ d\\tilde q \n \\end{align}\nWe next calculate the inner integral (involving $(s,t)$) of equation \\eqref{eq:r1_env_1}: \n\\begin{align*}\n& \\int_{-\\infty}^{\\infty}K'(s) \\int_{s- \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}}^s |t| dt\\ ds \\\\\n& =\\left(\\int_{-\\infty}^0 + \\int_0^{\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}} + \\int_{\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}}^{\\infty}\\right)K'(s) \\int_{s- \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}}^s |t| dt\\ ds \\\\\n& = \\frac12\\int_{-\\infty}^0 K'(s)\\left[\\left(s- \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\right)^2 - s^2\\right] \\ ds + \\frac12\\int_0^{\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}} K'(s)\\left[\\left(s- \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\right)^2 + s^2\\right] \\ ds \\\\\n& \\qquad \\qquad \\qquad \\qquad + \\frac12 \\int_{\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}}^{\\infty}K'(s) \\left[s^2 - \\left(s- \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\right)^2\\right] \\ ds\\\\\n& = -\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}} \\int_{-\\infty}^0 K'(s) s \\ ds + \\|\\tilde q\\|^2\\frac{\\zeta^2}{2\\sigma_n} \\int_{-\\infty}^0 K'(s) \\ ds + \\int_0^{\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}} s^2K'(s) \\ ds \\\\ \n& \\qquad \\qquad -\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}} \\int_0^{\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}} sK'(s) \\ ds + \\|\\tilde q\\|^2\\frac{\\zeta^2}{2\\sigma_n} \\int_0^{\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}} K'(s) \\ ds \\\\\n& \\qquad \\qquad \\qquad + \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}} \\int_{\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}}^{\\infty} sK'(s) \\ ds - \\|\\tilde q\\|^2\\frac{\\zeta^2}{2\\sigma_n} \\int_{\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}}^{\\infty} K'(s) \\ ds \\\\\n& = \\|\\tilde q\\|^2\\frac{\\zeta^2}{2\\sigma_n}\\left[2K\\left(\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\right) - 1\\right] + \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}} \\left[ -\\int_{-\\infty}^0 K'(s) s \\ ds - \\right. \\\\\n& \\qquad \\qquad \\left. 
\\int_0^{\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}} K'(s)s \\ ds + \\int_{\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}}^{\\infty} sK'(s) \\ ds\\right] + \\int_0^{\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}} s^2K'(s) \\ ds \\\\\n& = \\|\\tilde q\\|^2\\frac{\\zeta^2}{\\sigma_n}\\left[K\\left(\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\right) - K(0)\\right] + \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}} \\left[ -\\int_{-\\infty}^{-\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}} K'(s) s \\ ds + \\int_{\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}}^{\\infty} sK'(s) \\ ds\\right] \\\\\n& \\qquad \\qquad + \\int_0^{\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}} s^2K'(s) \\ ds \\\\\n& = \\|\\tilde q\\|^2\\frac{\\zeta^2}{\\sigma_n}\\left[K\\left(\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\right) - K(0)\\right] + \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\int_{-\\infty}^{\\infty} K'(s)|s|\\mathds{1}_{|s| \\ge \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}} \\ ds + \\int_0^{\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}} s^2K'(s) \\ ds \\\\\n& \\le \\dot{K}_+ \\|\\tilde q\\|^3\\frac{\\zeta^3}{\\sigma^{3\/2}_n} + \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}} \\int_{-\\infty}^{\\infty} K'(s)|s| \\ ds + \\|\\tilde q\\|^2\\frac{\\zeta^2}{\\sigma_n}\\left(K\\left(\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\right) - K(0)\\right) \\\\\n& \\lesssim \\|\\tilde q\\|^3\\frac{\\zeta^3}{\\sigma^{3\/2}_n} + \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}} \n\\end{align*}\nPutting this bound in equation \\eqref{eq:r1_env_1} we obtain: \n \\begin{align*}\n |R_1| & \\le \\frac{\\sigma_n^2}{2} \\int_{\\mathbb{R}^{p-1}}m_4(- \\tilde q^{\\top}\\tilde \\psi_0^s, \\tilde q)\\dot f_s(\\tilde q) \\left(\\|\\tilde q\\|^3\\frac{\\zeta^3}{\\sigma^{3\/2}_n} + \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\right) \\ f(\\tilde q) \\ d\\tilde q \\\\\n & \\le \\frac{\\zeta^3 \\sqrt{\\sigma_n}}{2} \\mathbb{E}\\left[m_4(- \\tilde Q^{\\top}\\tilde \\psi_0^s, \\tilde Q)\\dot f_s(\\tilde Q)\\|\\tilde Q\\|^3\\right] + \\frac{\\zeta \\sigma_n^{3\/2}}{2} \\mathbb{E}\\left[m_4(- \\tilde Q^{\\top}\\tilde \\psi_0^s, \\tilde Q)\\dot f_s(\\tilde Q)\\|\\tilde Q\\|\\right] \n\\end{align*}\nand \n\\begin{align*}\n & \\left|R_2\\right| \\\\\n & = \\left|\\sigma_n \\int_{\\mathbb{R}^{p-1}}\\int_{-\\infty}^{\\infty}K'(s) \\int_{s- \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}}^s \\left(m_4(\\sigma_n t - \\tilde q^{\\top}\\tilde \\psi_0^s, \\tilde q) - m_4( - \\tilde q^{\\top}\\tilde \\psi_0^s, \\tilde q)\\right)f_s(0 \\mid \\tilde q) \\ dt \\ ds \\ f(\\tilde q) \\ d\\tilde q\\right| \\\\\n & \\le \\sigma_n^2 \\int_{\\mathbb{R}^{p-1}}\\dot m_4( \\tilde q)f_s(0 \\mid \\tilde q) \\int_{-\\infty}^{\\infty}K'(s) \\int_{s- \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}}^s |t| dt\\ ds \\ f(\\tilde q) \\ d\\tilde q \\\\\n & \\le \\sigma_n^2 \\int_{\\mathbb{R}^{p-1}}\\dot m_4( \\tilde q)f_s(0 \\mid \\tilde q) \\left(\\|\\tilde q\\|^3\\frac{\\zeta^3}{\\sigma^{3\/2}_n} + \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\right) \\ f(\\tilde q) \\ d\\tilde q \\\\\n & = \\zeta \\sigma_n^{3\/2} \\mathbb{E}\\left[\\dot m_4( \\tilde Q)f_s(0 \\mid \\tilde Q)\\|\\tilde Q\\|\\right] + \\zeta^3 \\sqrt{\\sigma_n} \\mathbb{E}\\left[\\dot m_4( \\tilde Q)f_s(0 \\mid \\tilde Q)\\|\\tilde Q\\|^3\\right]\n\\end{align*}\nThe third residual $R_3$ is of even higher order and is hence skipped. 
It is immediate that the orders of the remainder terms are equal to or smaller than $\\zeta \\sqrt{\\sigma_n}$, which implies: \n$$\n\\mathbb{E}[m_4(Q)T_1] \\lesssim \\zeta\\sqrt{\\sigma_n} \\,.\n$$\nThe calculation for $T_2$ is similar and hence skipped for brevity. Combining the conclusions for $T_1$ and $T_2$, we conclude that when $\\zeta \\le \\sqrt{\\mathcal{K} \\sigma_n}$: \n\\begin{align}\n& \\mathbb{E}\\left[F^2_{3, \\zeta}(X, Y, Q)\\right] \\notag \\\\\n & = \\mathbb{E}\\left[\\left|\\left[-2\\left(Y - X^{\\top}\\beta_0^s\\right)X^{\\top}\\delta_0^s + (X^{\\top}\\delta_0^s)^2\\right]\\right|^2 \\times \\sup_{d_*(\\theta, \\theta_0^s) \\le \\zeta} \\left|\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)\\right\\}\\right|^2\\right] \\notag \\\\\n & \\lesssim \\mathbb{E}\\left[m_4(Q)\\sup_{d_*(\\theta, \\theta_0^s) \\le \\zeta} \\left|\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)\\right\\}\\right|^2\\right] \\notag \\\\\n \\label{eq:env_3} & \\lesssim \\zeta \\sqrt{\\sigma_n} \\,.\n\\end{align}\n\\\\\n\\noindent\n{\\bf Case 2: } Now consider $\\zeta > \\sqrt{\\mathcal{K} \\sigma_n}$. Then it is immediate that: \n$$\n\\sup_{d_*^2(\\psi, \\psi^s_0) \\le \\zeta^2} \\|\\psi - \\psi^s_0\\| = \\zeta^2 \\,.\n$$\nUsing this we have: \n\\begin{align}\n & \\mathbb{E}[m_4(Q) T_1] \\notag \\\\\n & = \\mathbb{E}\\left[m_4(Q)\\left|\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n} + \\|\\tilde Q\\|\\frac{\\zeta^2}{\\sigma_n}\\right)\\right\\}\\right|^2\\right] \\notag \\\\\n & = \\sigma_n \\int_{\\mathbb{R}^{p-1}}\\int_{-\\infty}^{\\infty} m_4(\\sigma_nt - \\tilde q^{\\top}\\tilde \\psi_0^s, \\tilde q) \\left|K\\left(t\\right) - K\\left(t + \\|\\tilde q\\|\\frac{\\zeta^2}{\\sigma_n}\\right)\\right|^2 \\ f_s(\\sigma_nt\\mid \\tilde q) \\ dt \\ f(\\tilde q) \\ d\\tilde q \\notag \\\\\n & \\le \\sigma_n \\int_{\\mathbb{R}^{p-1}}\\int_{-\\infty}^{\\infty} m_4(\\sigma_nt - \\tilde q^{\\top}\\tilde \\psi_0^s, \\tilde q) \\left|K\\left(t\\right) - K\\left(t + \\|\\tilde q\\|\\frac{\\zeta^2}{\\sigma_n}\\right)\\right| \\ f_s(\\sigma_nt\\mid \\tilde q) \\ dt \\ f(\\tilde q) \\ d\\tilde q \\notag \\\\\n & = \\sigma_n \\int_{\\mathbb{R}^{p-1}} m_4(- \\tilde q^{\\top}\\tilde \\psi_0^s, \\tilde q) f_s(0 \\mid \\tilde q) \\left[\\int_{-\\infty}^{\\infty} \\left(K\\left(t + \\|\\tilde q\\|\\frac{\\zeta^2}{\\sigma_n}\\right) - K(t)\\right) dt\\right] \\ f(\\tilde q) \\ d\\tilde q + R \\notag \\\\\n & = \\sigma_n \\int_{\\mathbb{R}^{p-1}} m_4(- \\tilde q^{\\top}\\tilde \\psi_0^s, \\tilde q) f_s(0 \\mid \\tilde q) \\|\\tilde q\\|\\frac{\\zeta^2}{\\sigma_n} \\ f(\\tilde q) \\ d\\tilde q + R \\hspace{0.2in} \\left[\\because \\int_{\\mathbb{R}} \\left(K(t + a) - K(t)\\right) dt = a \\ \\text{for a distribution function} \\ K\\right] \\notag \\\\\n & = \\zeta^2 \\mathbb{E}\\left[\\|\\tilde Q\\|m_4\\left(- \\tilde Q^{\\top}\\tilde \\psi_0^s, \\tilde Q\\right) f_s(0 \\mid \\tilde Q)\\right] + R \\notag \n\\end{align}\nThe analysis of the remainder term is similar, and it is of higher order. 
This yields, for $\\zeta > \\sqrt{\\mathcal{K}\\sigma_n}$: \n\\begin{align}\n& \\mathbb{E}\\left[F^2_{3, \\zeta}(X, Y, Q)\\right] \\notag \\\\\n & = \\mathbb{E}\\left[\\left|\\left[-2\\left(Y - X^{\\top}\\beta_0^s\\right)X^{\\top}\\delta_0^s + (X^{\\top}\\delta_0^s)^2\\right]\\right|^2 \\times \\sup_{d_*(\\theta, \\theta_0^s) \\le \\zeta} \\left|\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)\\right\\}\\right|^2\\right] \\notag \\\\\n & \\lesssim \\mathbb{E}\\left[m_4(Q)\\sup_{d_*(\\theta, \\theta_0^s) \\le \\zeta} \\left|\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)\\right\\}\\right|^2\\right] \\notag \\\\\n \\label{eq:env_4} & \\lesssim \\zeta^2\n\\end{align}\nCombining \\eqref{eq:env_3} and \\eqref{eq:env_4} with equation \\eqref{eq:moc_bound_2}, we have:\n\\begin{align*}\n\\sqrt{n}\\mathbb{E}\\left[\\sup_{\\theta: d_*(\\theta, \\theta_0^s) \\le \\zeta} \\left|\\left(\\mathbb{P}_n - P\\right)\\left(f_\\theta - f_{\\theta_0^s}\\right)\\right|\\right] & \\lesssim \\sqrt{\\zeta}\\sigma_n^{1\/4}\\mathds{1}_{\\zeta \\le \\sqrt{\\mathcal{K}\\sigma_n}} + \\zeta \\mathds{1}_{\\zeta > \\sqrt{\\mathcal{K} \\sigma_n}} \\\\\n& := \\phi_n(\\zeta) \\,.\n\\end{align*}\nHence, to obtain the rate, we have to solve $r_n^2 \\phi_n(1\/r_n) \\le \\sqrt{n}$, i.e. (ignoring $\\mathcal{K}$ as this does not affect the rate)\n$$\nr_n^{3\/2}\\sigma_n^{1\/4}\\mathds{1}_{r_n \\ge \\sigma_n^{-1\/2}} + r_n \\mathds{1}_{r_n \\le \\sigma_n^{-1\/2}} \\le \\sqrt{n} \\,.\n$$\nNow if $r_n \\le \\sigma_n^{-1\/2}$, then $r_n = \\sqrt{n}$, which implies $\\sqrt{n} \\le \\sigma_n^{-1\/2}$, i.e. $n\\sigma_n \\le 1$, contradicting our assumption that $n\\sigma_n \\to \\infty$. On the other hand, if $r_n \\ge \\sigma_n^{-1\/2}$ then $r_n = n^{1\/3}\\sigma_n^{-1\/6}$. This implies $n^{1\/3}\\sigma_n^{-1\/6} \\ge \\sigma_n^{-1\/2}$, i.e. $n^{1\/3} \\ge \\sigma_n^{-1\/3}$, i.e. $n\\sigma_n \\ge 1$, which is consistent with our assumption. This implies: \n$$\nn^{2\/3}\\sigma_n^{-1\/3}d_*^2(\\hat \\theta^s, \\theta_0^s) = O_p(1) \\,.\n$$\nNow as $n^{2\/3}\\sigma_n^{-1\/3} \\gg \\sigma_n^{-1}$ (since $n\\sigma_n \\to \\infty$), we have: \n$$\n\\frac{1}{\\sigma_n}d_*^2(\\hat \\theta^s, \\theta_0^s) = o_p(1) \\,,\n$$\nwhich further indicates $\\|\\hat \\psi^s - \\psi_0^s\\|\/\\sigma_n = o_p(1)$. This, along with the fact that $\\|\\psi_0^s - \\psi_0\\|\/\\sigma_n = o(1)$ (from Lemma \\ref{bandwidth}), establishes that $\\|\\hat \\psi^s - \\psi_0\\|\/\\sigma_n = o_p(1)$. This completes the proof. \n\\end{proof}\n\n\n\n\n\\section{Real data analysis}\n\\label{sec:real_data}\nWe illustrate our method using cross-country data on pollution (carbon dioxide), income and urbanization obtained from the World Development Indicators (WDI), World Bank. The Environmental Kuznets Curve hypothesis (EKC henceforth), a popular and ongoing area of research in environmental economics, posits that at an initial stage of economic development pollution increases with economic growth, and then diminishes when society's priorities change, leading to an inverted U-shaped relation between income (measured via real GDP per capita) and pollution. 
The hypothesis has led to numerous empirical papers (i) testing the hypothesis (whether the relation is inverted U-shaped for countries\/regions of interest in the sample), (ii) exploring the threshold level of income at which pollution starts falling, as well as (iii) examining the countries\/regions which belong to the upward rising part versus the downward sloping part of the inverted U-shape, if at all. The studies have been performed using US state level data or cross-country data (e.g. \\cite{shafik1992economic}, \\cite{millimet2003environmental}, \\cite{aldy2005environmental}, \\cite{lee2019nonparametric}, \\cite{boubellouta2021cross}, \\cite{list1999environmental}, \\cite{grossman1995economic}, \\cite{bertinelli2005environmental}, \\cite{azomahou2006economic}, \\cite{taskin2000searching}, to name a few). While some of these papers have found evidence in favor of the EKC hypothesis (an inverted U-shaped income-pollution relation), others have found evidence against it (monotonically increasing or other shapes for the relation). The results often depend on the countries\/regions in the sample, the period of analysis, as well as the pollutant studied.\n\\\\\\\\\n\\noindent\nWhile income-pollution remains the focal point of most EKC studies, several of them have also included urban agglomeration (UA) or some other measure of urbanization as an important control variable, especially when investigating carbon emissions.\\footnote{Although income growth is connected to urbanization, countries are heterogeneous and follow different growth paths due to their varying geographical structures, population densities, infrastructures and ownerships of resources, making a case for using urbanization as another control covariate in the income-pollution study. The income growth paths of oil-rich UAE, manufacturing-based China, service-based Singapore and low-population-density Canada (with vast land) are all different.} (see for example, \\cite{shafik1992economic}, \\cite{boubellouta2021cross} and \\cite{liang2019urbanization}). The theory of ecological economics posits potentially varying effects of increased urbanization on pollution: (i) urbanization leading to more pollution (due to its close links with sanitation, dense transportation, and proximity to polluting manufacturing industries), (ii) urbanization potentially leading to less pollution based on 'compact city theory' (see \\cite{burton2000compact}, \\cite{capello2000beyond}, \\cite{sadorsky2014effect}) that explains the potential benefits of increased urbanization in terms of economies of scale (for example, replacing dependence on automobiles with large scale subway systems, using multi-storied buildings instead of single unit houses, keeping more open green space). \\cite{liddle2010age}, using 17 developed countries, find a positive and significant effect of urbanization on pollution. On the contrary, using a set of 69 countries, \\cite{sharma2011determinants} find a negative and significant effect of urbanization on pollution, while \\cite{du2012economic} find an insignificant effect of urbanization on carbon emissions. Using various empirical strategies, \\cite{sadorsky2014effect} conclude that the positive and negative effects of urbanization on carbon pollution may cancel out depending on the countries involved, often leaving insignificant effects on pollution. 
They also note that many countries are yet to achieve a sizeable level of urbanization, which presumably explains why many empirical studies using less developed countries find an insignificant effect of urbanization. In summary, based on the existing literature, both the relationship between urbanization and pollution as well as the relationship between income and pollution appear to depend largely on the set of countries considered in the sample. This motivates us to use UA along with income in our change plane model for analyzing carbon dioxide emissions, to plausibly separate the countries into two regimes. \n\\\\\\\\\n\\noindent\nFollowing the broad literature, we use per capita pollution emissions (carbon dioxide, measured in metric tons per capita) as the dependent variable and real GDP per capita (measured in 2010 US dollars), its square (as is done commonly in the EKC literature) and a popular measure of urbanization, namely urban agglomeration (UA)\\footnote{The exact definition can be found in the World Development Indicators database from the World Bank website.}, as covariates (in our notation $X$) in our regression. In light of the preceding discussions, we fit a change plane model comprising real GDP per capita and UA (in our notation $Q$). To summarize the setup, we use the continuous response model as described in equation \\eqref{eq:regression_main_eqn}, i.e. \n\\begin{align*}\nY_i & = X_i^{\\top}\\beta_0 + X_i^{\\top}\\delta_0\\mathds{1}_{Q_i^{\\top}\\psi_0 > 0} + {\\epsilon}_i \\\\\n& = X_i^{\\top}\\beta_0\\mathds{1}_{Q_i^{\\top}\\psi_0 \\le 0} + X_i^{\\top}(\\beta_0 + \\delta_0)\\mathds{1}_{Q_i^{\\top}\\psi_0 > 0} + {\\epsilon}_i\n\\end{align*}\nwith per capita $CO_2$ emissions in metric tons as $Y$, per capita GDP, the square of per capita GDP and UA as $X$ (hence $X \\in \\mathbb{R}^3$) and, finally, per capita GDP and UA as $Q$ (hence $Q \\in \\mathbb{R}^2$). Observe that $\\beta_0$ represents the regression coefficients corresponding to the countries with $Q_i^{\\top}\\psi_0 \\le 0$ (henceforth denoted by Group 1) and $(\\beta_0+ \\delta_0)$ represents the regression coefficients corresponding to the countries with $Q_i^{\\top}\\psi_0 > 0$ (henceforth denoted by Group 2). As per our convention, in the interests of identifiability we assume $\\psi_{0, 1} = 1$, where $\\psi_{0,1}$ is the change plane parameter corresponding to per capita GDP. Therefore the only change plane coefficient to be estimated is $\\psi_{0, 2}$, the change plane coefficient for UA. For numerical stability, we scale per capita GDP by $10^{-4}$ (consequently, the square of per capita GDP is scaled by $10^{-8}$)\\footnote{This scaling helps in the numerical stability of the gradient descent algorithm used to optimize the least squares criterion.}. After some pre-processing (i.e. removing rows containing NA values and countries with $100\\%$ UA) we estimate the coefficients $(\\beta_0, \\delta_0, \\psi_0)$ of our model based on data from 115 countries with $\\sigma_n = 0.05$ and test the significance of the various coefficients using the methodologies described in Section \\ref{sec:inference}. We present our findings in Table \\ref{tab:ekc_coeff}. 
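\\\\\\\\\n\\noindent\nFor concreteness, the following is a minimal sketch (in Python, on simulated data with hypothetical variable names) of the smoothed least squares criterion we minimize; our actual analysis applies the same criterion to the WDI data via gradient descent, as noted above, and the generic quasi-Newton routine below is merely one convenient implementation choice. \n\\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import norm\nfrom scipy.optimize import minimize\n\ndef smoothed_criterion(params, X, Q, y, sigma_n=0.05):\n    # Smoothed least squares loss with K = Phi and psi_1 fixed at 1.\n    p = X.shape[1]\n    beta, delta = params[:p], params[p:2 * p]\n    psi = np.concatenate(([1.0], params[2 * p:]))\n    K = norm.cdf(Q @ psi \/ sigma_n)  # smooth surrogate for 1{Q'psi > 0}\n    r = y - X @ beta\n    Xd = X @ delta\n    return np.mean(r ** 2 + (-2.0 * r * Xd + Xd ** 2) * K)\n\n# Hypothetical usage on simulated data (p = 3 regressors, Q in R^2):\nrng = np.random.default_rng(0)\nn, p = 500, 3\nX = rng.normal(size=(n, p))\nQ = X[:, :2]\ny = (X @ np.array([1.0, -0.5, 0.2])\n     + (X @ np.array([0.8, 0.3, -0.4])) * (Q @ np.array([1.0, -0.7]) > 0)\n     + 0.1 * rng.normal(size=n))\nfit = minimize(smoothed_criterion, np.zeros(2 * p + 1), args=(X, Q, y))\nbeta_hat, delta_hat, psi2_hat = fit.x[:p], fit.x[p:2 * p], fit.x[-1]\n\\end{verbatim}\nThe estimated $\\psi_{0,2}$ (here \\texttt{psi2\\_hat}) determines the fitted change plane; standard errors then follow from the plug-in estimators of Section \\ref{sec:inference}. 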
\n\\begin{table}[!h]\n \\centering\n \\begin{tabular}{|c||c||c|}\n \\hline\n Coefficients & Estimated values & p-values \\\\\n \\hline \\hline \n $\\beta_{0, 1}$ (\\text{RGDPPC for Group 1}) & 6.98555060 & 4.961452e-10 \\\\\n $\\beta_{0, 2}$ (\\text{squared RGDPPC for Group 1}) & -0.43425991 & 7.136484e-02 \\\\\n $\\beta_{0, 3}$ (\\text{UA for Group 1}) & -0.02613813 & 1.066065e-01\n\\\\\n $\\beta_{0, 1} + \\delta_{0, 1}$ (\\text{RGDPPC for Group 2}) & 2.0563337 & 0.000000e+00\\\\\n $\\beta_{0, 2} + \\delta_{0, 2}$ (\\text{squared RGDPPC for Group 2}) & -0.1866490 & 4.912843e-04 \\\\\n $\\beta_{0, 3} + \\delta_{0, 3}$ (\\text{UA for Group 2}) & 0.1403171& 1.329788e-05 \\\\\n $\\psi_{0,2 }$ (\\text{Change plane coeff for UA}) & -0.07061785 & 0.000000e+00\\\\\n \\hline\n \\end{tabular}\n \\caption{Table of the estimated regression and change plane coefficients along with their p-values.}\n \\label{tab:ekc_coeff}\n\\end{table}\n\\\\\\\\\n\\noindent\nFrom the above analysis, we find that GDP has a significantly positive effect on pollution for both groups of countries. The effect of its squared term is negative for both groups, but the effect is significant only for Group-2, consisting mostly of high income countries, whereas it is insignificant (at the 5\\% level) for the Group-1 countries (consisting mostly of low and middle income countries and a few high income countries). Thus, not surprisingly, we find evidence in favor of the EKC for the developed countries, but not for the mixed group. Notably, Group-1 consists of a mixed set of countries like Angola, Sudan, Senegal, India, China, Israel, UAE etc., whereas Group-2 consists of rich and developed countries like Canada, USA, UK, France, Germany etc. The urban variable, on the other hand, is seen to have an insignificant effect for Group-1, which is in keeping with \\cite{du2012economic} and \\cite{sadorsky2014effect}. Many of these countries are yet to achieve substantial urbanization, and this is especially true for our sample period.\\footnote{We use 6-year averages over 2010--2015 for the GDP and pollution measures. Such averaging is in accordance with the cross-sectional empirical literature using cross-country\/regional data and helps avoid business cycle fluctuations in GDP. It also minimizes the impacts of outlier events such as the financial crisis or the great recession period. The years that we have chosen are ones for which we could find data for the largest number of countries.} In contrast, UA has a positive and significant effect for Group-2 (developed) countries, which is consistent with the findings of \\cite{liddle2010age}, for example. Note that UA plays a crucial role in dividing the countries into different regimes, as the estimated value of $\\psi_{0,2}$ is significant. Thus, we are able to partition countries into two regimes: a mostly rich group and a mixed group. \n\\\\\\\\\n\\noindent\nNote that many underdeveloped countries and poorer regions of emerging countries are still swamped with greenhouse gas emissions from burning coal, cow dung, etc., and from the use of poor exhaust systems in housing and transport. This is more true for rural and semi-urban areas of developing countries. So even while being less urbanized compared to developed nations, their overall pollution load is high (due to inefficient energy usage and higher dependence on fossil fuels, as pointed out above) and rising with income, and they are yet to reach the descending part of the inverted U-shape for the income-pollution relation. 
On the contrary, for countries in Group-2, the adoption of more efficient energy and exhaust systems is common in households and transportation in general, eventually leading to decreasing pollution with increasing income (supporting the EKC). Both results are in line with the existing EKC literature. Additionally, we find that the countries in Group-2 are yet to achieve 'compact city' style green urbanization, which is consistent with the positive and significant effect of UA on pollution in our analysis. \n\\\\\\\\\n\\noindent\nThere are many potential future applications of our method in economics. Similar analyses can be performed for other pollutants (such as sulfur emissions, electrical waste\/e-waste, nitrogen pollution, etc.). While income\/GDP remains a common, indeed the most crucial, variable in pollution studies, other covariates (including change plane defining variables) may vary, depending on the pollutant of interest. Another potential application is that of identifying the determinants of family health expenses in household survey data. Families are often asked about the health expenses they incurred in the past year. An interesting case in point may be household surveys collected in India, where one finds numerous (large) joint families with several children and elderly people residing in the same household, and where most families are uninsured. It is often seen that health expenditure increases with income, a major factor being the costs associated with regularly performed preventative medical examinations, which are affordable only once a certain income level is reached. The important covariates here are per capita family income, family wealth, the 'dependency ratio' (the ratio of the number of children and elderly to the total number of people in the family) and a binary indicator of any history of major illness\/hospitalization in the family in the past year. Family income per capita and history of major illness are natural candidate covariates for defining the change plane. \n\n\n\\section{Binary response model}\n\\label{sec:classification_analysis}\nRecall our binary response model in equation \\eqref{eq:classification_eqn}. To estimate $\\psi_0$, we resort to the following loss (without smoothing): \n\\begin{equation}\n\\label{eq:new_loss}\n\\mathbb{M}(\\psi) = \\mathbb{E}\\left((Y - \\gamma)\\mathds{1}(Q^{\\top}\\psi \\le 0)\\right)\n\\end{equation}\nwith $\\gamma \\in (\\alpha_0, \\beta_0)$, which can be viewed as a variant of the squared error loss function: \n$$\n\\mathbb{M}(\\alpha, \\beta, \\psi) = \\mathbb{E}\\left(\\left(Y - \\alpha\\mathds{1}(Q^{\\top}\\psi < 0) - \\beta\\mathds{1}(Q^{\\top}\\psi > 0)\\right)^2\\right)\\,.\n$$\nWe establish the connection between these losses in sub-section \\ref{loss_func_eq}. It is easy to prove that under fairly mild conditions (discussed later) \n$\\psi_0 = {\\arg\\min}_{\\psi \\in \\Theta}\\mathbb{M}(\\psi)$, uniquely. Under the standard classification paradigm, when we know a priori that \n$\\alpha_0 < 1\/2 < \\beta_0$, we can take $\\gamma = 1\/2$, and in the absence of this constraint, $\\bar{Y}$, which converges to some $\\gamma$ between $\\alpha_0$ and $\\beta_0$, may be substituted in the loss function. In the rest of the paper, we confine ourselves to a known $\\gamma$, and for technical simplicity, we take $\\gamma = \\frac{(\\beta_0 + \\alpha_0)}{2}$, but this assumption can be removed with more mathematical book-keeping. 
Thus, $\\psi_0$ is estimated by: \n\\begin{equation}\n\\label{non-smooth-score} \n\\hat \\psi = {\\arg\\min}_{\\psi \\in \\Theta} \\mathbb{M}_n(\\psi) = {\\arg\\min}_{\\psi \\in \\Theta} \\frac{1}{n}\\sum_{i=1}^n (Y_i - \\gamma)\\mathds{1}(Q_i^{\\top}\\psi \\le 0)\\,.\n\\end{equation} We resort to a smooth approximation of the indicator function in \n\\eqref{non-smooth-score} using a distribution kernel with a suitable bandwidth. The smoothed version of the population score function then becomes: \n\\begin{equation}\n\\label{eq:kernel_smoothed_pop_score}\n\\mathbb{M}^s(\\psi) = \\mathbb{E}\\left((Y - \\gamma)\\left(1-K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)\\right)\\right)\n\\end{equation}\nwhere, as in the continuous response model, we use $K(x) = \\Phi(x)$, and the corresponding empirical version is: \n\\begin{equation}\n\\label{eq:kernel_smoothed_emp_score}\n\\mathbb{M}^s_n(\\psi) = \\frac{1}{n}\\sum_{i=1}^n \\left((Y_i - \\gamma)\\left(1-K\\left(\\frac{Q_i^{\\top}\\psi}{\\sigma_n}\\right)\\right)\\right)\n\\end{equation}\nDefine $\\hat{\\psi}^s$ and $\\psi_0^s$ to be the minimizers of the smoothed empirical (equation \\eqref{eq:kernel_smoothed_emp_score}) and population (equation \\eqref{eq:kernel_smoothed_pop_score}) score functions respectively. Here we only consider bandwidths satisfying $n\\sigma_n \\to \\infty$ and $n\\sigma_n^2 \\to 0$. Analogous to Theorem \\ref{thm:regression}, we prove the following result for the binary response model: \n\\begin{theorem}\n\\label{thm:binary}\nUnder Assumptions (\\ref{as:distribution} - \\ref{as:eigenval_bound}): \n$$\n\\sqrt{\\frac{n}{\\sigma_n}}\\left(\\hat{\\psi}^s - \\psi_0\\right) \\Rightarrow N(0, \\Gamma) \\,,\n$$ \nfor some non-stochastic matrix $\\Gamma$, which will be defined explicitly in the proof. \n\\end{theorem}\nWe have therefore established that in the regime $n\\sigma_n \\to \\infty$ and $n\\sigma_n^2 \\to 0$, it is possible to attain asymptotic normality using a smoothed estimator for the binary response model. \n\n\n\n\n\n\\section{Inferential methods}\n\\label{sec:inference}\nWe draw inferences on $(\\beta_0, \\delta_0, \\psi_0)$ using techniques similar to those in \\cite{seo2007smoothed}. For the continuous response model, we need consistent estimators of $V^{\\gamma}, Q^{\\gamma}, V^{\\psi}, Q^{\\psi}$ (see Lemma \\ref{conv-prob} for the definitions) for hypothesis testing. By virtue of the aforementioned lemma, we can estimate $Q^{\\gamma}$ and $Q^{\\psi}$ as follows: \n\\begin{align*}\n\\hat Q^{\\gamma} & = \\nabla^2_{\\gamma} \\mathbb{M}_n^s(\\hat \\theta) \\,, \\\\ \n\\hat Q^{\\psi} & = \\sigma_n \\nabla^2_{\\psi} \\mathbb{M}_n^s(\\hat \\theta) \\,.\n\\end{align*}\nThe consistency of the above estimators is established in the proof of Lemma \\ref{conv-prob}. 
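\n\\\\\n\\noindent\nThese plug-in estimators are straightforward to compute numerically. The following is a minimal sketch in Python, assuming a user-supplied implementation \\texttt{M\\_n\\_s} of the smoothed empirical criterion $\\mathbb{M}_n^s(\\gamma, \\psi)$ (with $\\gamma = (\\beta, \\delta)$ denoting the regression block) and the estimates $\\hat\\gamma, \\hat\\psi$; the function names and the finite-difference step size are illustrative and not part of our formal development. \n\\begin{verbatim}\nimport numpy as np\n\ndef numerical_hessian(f, x, eps=1e-5):\n    # Symmetric finite-difference Hessian of a scalar function f at x.\n    d = x.size\n    H = np.zeros((d, d))\n    E = np.eye(d)\n    for i in range(d):\n        for j in range(d):\n            H[i, j] = (f(x + eps*E[i] + eps*E[j]) - f(x + eps*E[i] - eps*E[j])\n                       - f(x - eps*E[i] + eps*E[j]) + f(x - eps*E[i] - eps*E[j]))\n            H[i, j] /= 4 * eps**2\n    return H\n\ndef Q_hat_blocks(M_n_s, gamma_hat, psi_hat, sigma_n):\n    # hat Q^gamma: Hessian of the smoothed criterion in the regression block;\n    # hat Q^psi: sigma_n times the Hessian in the change-plane block.\n    Q_gamma = numerical_hessian(lambda g: M_n_s(g, psi_hat), gamma_hat)\n    Q_psi = sigma_n * numerical_hessian(lambda p: M_n_s(gamma_hat, p), psi_hat)\n    return Q_gamma, Q_psi\n\\end{verbatim}\n\\noindent\n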
For the other two parameters $V^{\\gamma}, V^{\\psi}$ we use the following estimators: \n\\begin{align*}\n\\hat V^{\\psi} & = \\frac{1}{n\\sigma_n^2}\\sum_{i=1}^n\\left(\\left(Y_i - X_i^{\\top}(\\hat \\beta + \\hat \\delta)\\right)^2 - \\left(Y_i- X_i^{\\top}\\hat \\beta\\right)^2\\right)^2\\tilde Q_i \\tilde Q_i^{\\top}\\left(K'\\left(\\frac{Q_i^{\\top}\\hat \\psi}{\\sigma_n}\\right)\\right)^2 \\\\\n\\hat V^{\\gamma} & = \\hat \\sigma^2_{\\epsilon} \\begin{pmatrix} \\frac{1}{n}\\sum_{i=1}^n X_iX_i^{\\top} & \\frac{1}{n}\\sum_{i=1}^n X_iX_i^{\\top}\\mathds{1}_{Q_i^{\\top}\\hat \\psi > 0} \\\\ \\frac{1}{n}\\sum_{i=1}^n X_iX_i^{\\top}\\mathds{1}_{Q_i^{\\top}\\hat \\psi > 0} & \\frac{1}{n}\\sum_{i=1}^n X_iX_i^{\\top}\\mathds{1}_{Q_i^{\\top}\\hat \\psi > 0} \\end{pmatrix}\n\\end{align*}\nwhere $\\hat \\sigma^2_{\\epsilon}$ can be obtained as $(1\/n)\\sum_{i=1}^n(Y_i - X_i^{\\top}\\hat \\beta - X_i^{\\top}\\hat \\delta \\mathds{1}(Q_i^{\\top}\\hat \\psi > 0))^2$, i.e. the average residual sum of squares. The explicit value of $V^{\\gamma}$ (as derived in equation \\eqref{eq:def_v_gamma} in the proof of Lemma \\ref{asymp-normality}) is: \n$$\nV^{\\gamma} = \\sigma_{\\epsilon}^2 \\begin{pmatrix}\\mathbb{E}\\left[XX^{\\top}\\right] & \\mathbb{E}\\left[XX^{\\top}\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right] \\\\\n\\mathbb{E}\\left[XX^{\\top}\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right] & \\mathbb{E}\\left[XX^{\\top}\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right] \\end{pmatrix} \n$$ \nTherefore, the consistency of $\\hat V^{\\gamma}$ is immediate from the law of large numbers. The consistency of $\\hat V^{\\psi}$ follows via arguments similar to those employed in proving Lemma \\ref{conv-prob}, but under somewhat more stringent moment conditions: in particular, we need $\\mathbb{E}[\\|X\\|^8] < \\infty$ and $\\mathbb{E}[(X^{\\top}\\delta_0)^k \\mid Q]$ to be Lipschitz functions of $Q$ for $1 \\le k \\le 8$. The inferential techniques for the classification model are similar and are hence omitted to avoid repetition. \n\n\n\n\n\n\n\n\\section{Proof of Theorem \\ref{thm:regression}}\nIn this section, we present the proof of Lemma \\ref{lem:rate_smooth}, which lies at the heart of our refined analysis of the smoothed change plane estimator. Proofs of the other lemmas and our results for the binary response model are available in Appendix \\ref{sec:supp_B}. \n\\subsection{Proof of Lemma \\ref{lem:rate_smooth}}\n\n\n\\begin{proof}\nThe proof of Lemma \\ref{lem:rate_smooth} is quite long, hence we break it into a few further lemmas. \n\\begin{lemma}\n\\label{lem:pop_curv_nonsmooth}\nUnder Assumption \\ref{eq:assm}, there exist $u_- , u_+ > 0$ such that: \n$$\nu_- d^2(\\theta, \\theta_0) \\le \\mathbb{M}(\\theta) - \\mathbb{M}(\\theta_0) \\le u_+ d^2(\\theta, \\theta_0) \\,,\n$$\nfor $\\theta$ in a (non-shrinking) neighborhood of $\\theta_0$, where: \n$$\nd(\\theta, \\theta_0) := \\sqrt{\\|\\beta - \\beta_0\\|^2 + \\|\\delta - \\delta_0\\|^2 + \\|\\psi - \\psi_0\\|} \\,.\n$$\n\\end{lemma}\n\n\n\n\n\n\\begin{lemma}\n\\label{lem:uniform_smooth}\nUnder Assumption \\ref{eq:assm}, the smoothed loss function $\\mathbb{M}^s(\\theta)$ is uniformly close to the non-smoothed loss function $\\mathbb{M}(\\theta)$: \n$$\n\\sup_{\\theta \\in \\Theta}\\left|\\mathbb{M}^s(\\theta) - \\mathbb{M}(\\theta)\\right| \\le K_1 \\sigma_n \\,,\n$$ \nfor some constant $K_1$. 
\n\\end{lemma}\n\n\n\n\n\n\n\n\n\n\\begin{lemma}\n\\label{lem:pop_smooth_curvarture}\nUnder Assumption \\ref{eq:assm}: \n\\begin{align*}\n\\mathbb{M}^s(\\theta) - \\mathbb{M}^s(\\theta_0^s) & \\gtrsim \\|\\beta - \\beta_0^s\\|^2 + \\|\\delta - \\delta_0^s\\|^2 \\\\\n& \\qquad \\qquad + \\frac{\\|\\psi - \\psi_0^s\\|^2}{\\sigma_n} \\mathds{1}_{\\|\\psi - \\psi_0^s\\| \\le \\mathcal{K}\\sigma_n} + \\|\\psi - \\psi_0^s\\| \\mathds{1}_{\\|\\psi - \\psi_0^s\\| > \\mathcal{K}\\sigma_n} \\\\\n& := d_*^2(\\theta, \\theta_0^s) \\,,\n\\end{align*}\nfor some constant $\\mathcal{K}$ and for all $\\theta$ in a neighborhood of $\\theta_0$ which does not change with $n$. \n\\end{lemma}\n\nThe proofs of the three lemmas above can be found in Appendix \\ref{sec:supp_B}. We next move to the proof of Lemma \\ref{lem:rate_smooth}. In Lemma \\ref{lem:pop_smooth_curvarture} we have established the curvature of the smoothed loss function $\\mathbb{M}^s(\\theta)$ around $\\theta_0^s$. To determine the rate of convergence of $\\hat \\theta^s$ to $\\theta_0^s$, we further need an upper bound on the modulus of continuity of our loss function. Towards that end, first recall that our loss function is:\n$$\nf_{\\theta}(Y, X, Q) = \\left(Y - X^{\\top}\\beta\\right)^2 + \\left[-2\\left(Y - X^{\\top}\\beta\\right)X^{\\top}\\delta + (X^{\\top}\\delta)^2\\right] K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)\n$$\nThe centered loss function can be written as: \n\\begin{align}\n & f_{\\theta}(Y, X, Q) - f_{\\theta_0^s}(Y, X, Q) \\notag \\\\\n & = \\left(Y - X^{\\top}\\beta\\right)^2 + \\left[-2\\left(Y - X^{\\top}\\beta\\right)X^{\\top}\\delta + (X^{\\top}\\delta)^2\\right] K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right) \\notag \\\\\n & \\qquad \\qquad \\qquad \\qquad - \\left(Y - X^{\\top}\\beta_0^s\\right)^2 - \\left[-2\\left(Y - X^{\\top}\\beta_0^s\\right)X^{\\top}\\delta_0^s + (X^{\\top}\\delta_0^s)^2\\right] K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) \\notag \\\\\n & = \\left(Y - X^{\\top}\\beta\\right)^2 + \\left[-2\\left(Y - X^{\\top}\\beta\\right)X^{\\top}\\delta + (X^{\\top}\\delta)^2\\right] K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right) \\notag \\\\\n & \\qquad \\qquad \\qquad \\qquad - \\left(Y - X^{\\top}\\beta_0^s\\right)^2 - \\left[-2\\left(Y - X^{\\top}\\beta_0^s\\right)X^{\\top}\\delta_0^s + (X^{\\top}\\delta_0^s)^2\\right] K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right) \\notag \\\\\n & \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad + \\left[-2\\left(Y - X^{\\top}\\beta_0^s\\right)X^{\\top}\\delta_0^s + (X^{\\top}\\delta_0^s)^2\\right] \\left\\{K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right) - K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\right\\} \\notag \\\\\n & = \\underbrace{\\left(Y - X^{\\top}\\beta\\right)^2 - \\left(Y - X^{\\top}\\beta_0^s\\right)^2}_{M_1} \\notag \\\\\n & \\qquad + \\underbrace{\\left\\{ \\left[-2\\left(Y - X^{\\top}\\beta\\right)X^{\\top}\\delta + (X^{\\top}\\delta)^2\\right] - \\left[-2\\left(Y - X^{\\top}\\beta_0^s\\right)X^{\\top}\\delta_0^s + (X^{\\top}\\delta_0^s)^2\\right]\\right\\} K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)}_{M_2} \\notag \\\\\n & \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad + \\underbrace{\\left[-2\\left(Y - X^{\\top}\\beta_0^s\\right)X^{\\top}\\delta_0^s + (X^{\\top}\\delta_0^s)^2\\right] \\left\\{K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right) - K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\right\\}}_{M_3} \\notag \\\\\n \\label{eq:expand_f} & := M_1 + M_2 + M_3\n\\end{align}\nFor the rest of the analysis, 
fix $\\zeta > 0$ and consider the collection of functions $\\mathcal{F}_{\\zeta}$ defined as: \n$$\n\\mathcal{F}_{\\zeta} = \\left\\{f_\\theta - f_{\\theta_0^s}: d_*(\\theta, \\theta_0^s) \\le \\zeta\\right\\} \\,.\n$$\nFirst note that $\\mathcal{F}_\\zeta$ has a bounded uniform entropy integral (henceforth BUEI) for each $\\zeta$. To establish this, it is enough to argue that the collection $\\mathcal{F} = \\{ f_\\theta : \\theta \\in \\Theta\\}$ is BUEI. Note that the class of functions $X \\mapsto X^{\\top}\\beta$ has VC dimension $p$, and so does the class $X \\mapsto X^{\\top}(\\beta + \\delta)$. Therefore the functions $(X, Y) \\mapsto (Y - X^{\\top}(\\beta + \\delta))^2 - (Y - X^{\\top}\\beta)^2$ are also BUEI, as composition with a monotone function (here $x^2$) and taking differences preserve this property. Further, the class of hyperplane maps $Q \\mapsto Q^{\\top}\\psi$ also has finite VC dimension (depending only on the dimension of $Q$), and the VC dimension does not change upon scaling by $\\sigma_n$. Therefore the functions $Q \\mapsto Q^{\\top}\\psi\/\\sigma_n$ have the same VC dimension as $Q \\mapsto Q^{\\top}\\psi$, which is independent of $n$. Again, as composition with a monotone function preserves the BUEI property, the functions $Q \\mapsto K(Q^{\\top}\\psi\/\\sigma_n)$ are also BUEI. As the product of two BUEI classes is BUEI, we conclude that $\\mathcal{F}$ (and hence $\\mathcal{F}_\\zeta$) is BUEI. \n\\\\\\\\\n\\noindent\nNow to bound the modulus of continuity we use Lemma 2.14.1 of \\cite{vdvw96}: \n\\begin{equation*}\n\\label{eq:moc_bound}\n\\sqrt{n}\\mathbb{E}\\left[\\sup_{\\theta: d_*(\\theta, \\theta_0^s) \\le \\zeta} \\left|\\left(\\mathbb{P}_n - P\\right)\\left(f_\\theta - f_{\\theta_0^s}\\right)\\right|\\right] \\lesssim \\mathcal{J}(1, \\mathcal{F}_\\zeta) \\sqrt{\\mathbb{E}\\left[F_{\\zeta}^2(X, Y, Q)\\right]}\n\\end{equation*}\nwhere $F_\\zeta$ is an envelope function of $\\mathcal{F}_\\zeta$. As the function class $\\mathcal{F}_\\zeta$ has a bounded entropy integral, $\\mathcal{J}(1, \\mathcal{F}_\\zeta)$ can be bounded above by some constant independent of $n$. We next calculate the order of the envelope function $F_\\zeta$. Recall that, by definition, an envelope function satisfies: \n$$\nF_{\\zeta}(X, Y, Q) \\ge \\sup_{\\theta: d_*(\\theta, \\theta_0^s) \\le \\zeta} \\left| f_{\\theta} - f_{\\theta_0^s}\\right| \\,,\n$$\nand we can write $f_\\theta - f_{\\theta_0^s} = M_1 + M_2 + M_3$, which follows from equation \\eqref{eq:expand_f}. Therefore, to find the order of the envelope function, it is enough to find the order of the bounds on $M_1, M_2, M_3$ over the set $d_*(\\theta, \\theta_0^s) \\le \\zeta$. 
We start with $M_1$: \n\\begin{align}\n \\sup_{d_*(\\theta, \\theta_0^s) \\le \\zeta}|M_1| & = \\sup_{d_*(\\theta, \\theta_0^s) \\le \\zeta}\\left|\\left(Y - X^{\\top}\\beta\\right)^2 - \\left(Y - X^{\\top}\\beta_0^s\\right)^2\\right| \\notag \\\\\n & = \\sup_{d_*(\\theta, \\theta_0^s) \\le \\zeta} \\left|2YX^{\\top}(\\beta_0^s - \\beta) + (X^{\\top}\\beta)^2 - (X^{\\top}\\beta_0^s)^2\\right| \\notag \\\\\n & \\le \\sup_{d_*(\\theta, \\theta_0^s) \\le \\zeta} \\|\\beta - \\beta_0^s\\| \\left[2|Y|\\|X\\| + (2\\|\\beta_0^s\\| + \\zeta)\\|X\\|^2\\right] \\notag \\\\\n \\label{eq:env_1} & \\le \\zeta\\left[2|Y|\\|X\\| + (2\\|\\beta_0^s\\| + \\zeta)\\|X\\|^2\\right] := F_{1, \\zeta}(X, Y, Q) \\hspace{0.1in} [\\text{Envelope function of }M_1]\n\\end{align}\nand the second term: \n\\allowdisplaybreaks\n\\begin{align}\n & \\sup_{d_*(\\theta, \\theta_0^s) \\le \\zeta} |M_2| \\notag \\\\\n & = \\sup_{d_*(\\theta, \\theta_0^s) \\le \\zeta} \\left|\\left\\{\\left[-2\\left(Y - X^{\\top}\\beta\\right)X^{\\top}\\delta + (X^{\\top}\\delta)^2\\right] \\right. \\right. \\notag \\\\\n & \\qquad \\qquad \\qquad \\qquad \\left. \\left. - \\left[-2\\left(Y - X^{\\top}\\beta_0^s\\right)X^{\\top}\\delta_0^s + (X^{\\top}\\delta_0^s)^2\\right]\\right\\}\\right|K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right) \\notag \\\\\n & \\le \\sup_{d_*(\\theta, \\theta_0^s) \\le \\zeta} \\left|\\left\\{\\left[-2\\left(Y - X^{\\top}\\beta\\right)X^{\\top}\\delta + (X^{\\top}\\delta)^2\\right] \\right. \\right. \\notag \\\\\n & \\qquad \\qquad \\qquad \\qquad \\left. \\left. - \\left[-2\\left(Y - X^{\\top}\\beta_0^s\\right)X^{\\top}\\delta_0^s + (X^{\\top}\\delta_0^s)^2\\right]\\right\\}\\right| \\notag \\\\\n & = \\sup_{d_*(\\theta, \\theta_0^s) \\le \\zeta} \\left|\\left\\{\\left[2Y(X^{\\top}\\delta_0^s - X^{\\top}\\delta) + 2[(X^{\\top}\\beta)(X^{\\top}\\delta) \\right. \\right. \\right. \\notag \\\\\n & \\qquad \\qquad \\qquad \\qquad \\left. \\left. \\left. - (X^{\\top}\\beta_0^s)(X^{\\top}\\delta_0^s)] + (X^{\\top}\\delta)^2 - (X^{\\top}\\delta_0^s)^2\\right]\\right\\}\\right| \\notag \\\\\n & \\le \\sup_{d_*(\\theta, \\theta_0^s) \\le \\zeta} \\left\\{2\\|\\delta - \\delta_0^s\\||Y|\\|X\\| + 2\\|\\beta - \\beta_0^s\\|\\|X\\|^2\\|\\delta\\| \\right. \\notag \\\\\n & \\qquad \\qquad \\qquad \\qquad \\left. + 2\\|\\delta - \\delta_0^s\\|\\|X\\|^2\\|\\beta_0^s\\| + \\|X\\|^2\\|\\delta + \\delta_0^s\\|\\|\\delta - \\delta_0^s\\|\\right\\} \\notag \\\\ \\notag \\\\\n & \\le \\zeta \\left[2|Y|\\|X\\| + 2\\|X\\|^2(\\|\\delta_0^s\\| + \\zeta) + 2\\|X\\|^2\\|\\beta_0^s\\| + \\|X\\|^2(2\\|\\delta_0^s\\| + \\zeta)\\right] \\notag \\\\ \n \\label{eq:env_2}& := F_{2, \\zeta}(X, Y, Q) \\hspace{0.1in} [\\text{Envelope function of }M_2]\n \\end{align}\nFor the third term, note that: \n\\begin{align*}\n& \\sup_{d_*(\\theta, \\theta_0^s) \\le \\zeta} |M_3| \\\\\n& \\le \\left|\\left[-2\\left(Y - X^{\\top}\\beta_0^s\\right)X^{\\top}\\delta_0^s + (X^{\\top}\\delta_0^s)^2\\right]\\right| \\times \\sup_{d_*(\\theta, \\theta_0^s) \\le \\zeta} \\left|\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)\\right\\}\\right| \\\\\n& := F_{3, \\zeta} (X, Y, Q)\n\\end{align*}\nHenceforth, we define the envelope function to be $F_\\zeta = F_{1, \\zeta} + F_{2, \\zeta} + F_{3, \\zeta}$. 
Hence we have by the triangle inequality: \n$$\n\\sqrt{\\mathbb{E}\\left[F_{\\zeta}^2(X, Y, Q)\\right]} \\le \\sum_{i=1}^3 \\sqrt{\\mathbb{E}\\left[F_{i, \\zeta}^2(X, Y, Q)\\right]}\n$$\nFrom equations \\eqref{eq:env_1} and \\eqref{eq:env_2} we have: \n\\begin{equation}\n\\label{eq:moc_bound_2}\n\\sqrt{\\mathbb{E}\\left[F_{1, \\zeta}^2(X, Y, Q)\\right]} + \\sqrt{\\mathbb{E}\\left[F_{2, \\zeta}^2(X, Y, Q)\\right]} \\lesssim \\zeta \\,.\n\\end{equation}\nFor $F_{3, \\zeta}$, first note that: \n\\begin{align*}\n & \\mathbb{E}\\left[\\left|\\left[-2\\left(Y - X^{\\top}\\beta_0^s\\right)X^{\\top}\\delta_0^s + (X^{\\top}\\delta_0^s)^2\\right]\\right|^2 \\mid Q\\right] \\\\\n & \\le 8\\mathbb{E}\\left[\\left(Y - X^{\\top}\\beta_0^s\\right)^2(X^{\\top}\\delta_0^s)^2 \\mid Q\\right] + 2\\mathbb{E}[(X^{\\top}\\delta_0^s)^4 \\mid Q] \\\\\n & \\lesssim \\left\\{\\sigma^2 + \\|\\beta_0 - \\beta_0^s\\|^2 + \\|\\delta_0\\|^2 + \\|\\delta_0^s\\|^2\\right\\}\\|\\delta_0^s\\|^2 \\, m_4(Q) \\lesssim m_4(Q) \\,,\n\\end{align*}\nwhere $m_4(Q)$ is defined in Assumption \\ref{eq:assm}; here we expanded $Y - X^{\\top}\\beta_0^s = {\\epsilon} + X^{\\top}(\\beta_0 - \\beta_0^s) + X^{\\top}\\delta_0\\mathds{1}_{Q^{\\top}\\psi_0 > 0}$ and used the compactness of the parameter space. In this part, we have to carefully tackle the dichotomous behavior of $\\psi$ around $\\psi_0^s$. Henceforth define $d_*^2(\\psi, \\psi_0^s)$ as: \n\\begin{align*}\n d_*^2(\\psi, \\psi_0^s) = & \\frac{\\|\\psi - \\psi_0^s\\|^2}{\\sigma_n}\\mathds{1}_{\\|\\psi - \\psi_0^s\\| \\le \\mathcal{K}\\sigma_n} + \\|\\psi - \\psi_0^s\\|\\mathds{1}_{\\|\\psi - \\psi_0^s\\| > \\mathcal{K}\\sigma_n} \n\\end{align*}\nThis is a slight abuse of notation, but the reader should think of it as the $\\psi$-part of $d_*^2(\\theta, \\theta_0^s)$. Define $B_{\\zeta}(\\psi_0^s)$ to be the set of all $\\psi$ such that $d^2_*(\\psi, \\psi_0^s) \\le \\zeta^2$. We can decompose $B_{\\zeta}(\\psi_0^s)$ as a disjoint union of two sets: \n\\begin{align*}\n B_{\\zeta, 1}(\\psi_0^s) & = \\left\\{\\psi: d^2_*(\\psi, \\psi_0^s) \\le \\zeta^2, \\|\\psi - \\psi_0^s\\| \\le \\mathcal{K}\\sigma_n\\right\\} \\\\\n & = \\left\\{\\psi:\\frac{\\|\\psi - \\psi_0^s\\|^2}{\\sigma_n} \\le \\zeta^2, \\|\\psi - \\psi_0^s\\| \\le \\mathcal{K}\\sigma_n\\right\\} \\\\\n & = \\left\\{\\psi:\\|\\psi - \\psi_0^s\\| \\le \\zeta \\sqrt{\\sigma_n}, \\|\\psi - \\psi_0^s\\| \\le \\mathcal{K}\\sigma_n\\right\\} \\\\\\\\\n B_{\\zeta, 2}(\\psi_0^s) & = \\left\\{\\psi: d^2_*(\\psi, \\psi_0^s) \\le \\zeta^2, \\|\\psi - \\psi_0^s\\| > \\mathcal{K}\\sigma_n\\right\\} \\\\\n & = \\left\\{\\psi: \\|\\psi - \\psi_0^s\\| \\le \\zeta^2, \\|\\psi - \\psi_0^s\\| > \\mathcal{K}\\sigma_n\\right\\} \n\\end{align*}\nAssume $\\mathcal{K} > 1$; the case $\\mathcal{K} \\le 1$ follows from similar calculations and is hence skipped for brevity. Consider the following two cases: \n\\\\\\\\\n\\noindent\n{\\bf Case 1: }Suppose $\\zeta \\le \\sqrt{\\mathcal{K}\\sigma_n}$. Then $B_{\\zeta, 2} = \\emptyset$. Also, as $\\mathcal{K} > 1$, we have $\\zeta\\sqrt{\\sigma_n} \\le \\mathcal{K}\\sigma_n$. Hence we have: \n$$\n\\sup_{d_*^2(\\psi, \\psi_0^s) \\le \\zeta^2}\\|\\psi - \\psi_0^s\\| = \\sup_{B_{\\zeta, 1}}\\|\\psi - \\psi_0^s\\| = \\zeta\\sqrt{\\sigma_n} \\,.\n$$\nThis implies: \n\\begin{align*}\n & \\sup_{d_*(\\theta, \\theta_0^s) \\le \\zeta} \\left|\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)\\right\\}\\right|^2 \\\\\n & \\le \\max\\left\\{\\left|\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n} + \\|\\tilde Q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\right)\\right\\}\\right|^2, \\right. \\\\\n & \\qquad \\qquad \\qquad \\left. 
\\left|\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n} - \\|\\tilde Q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\right)\\right\\}\\right|^2\\right\\} \\\\\n & := \\max\\{T_1, T_2\\} \\,.\n\\end{align*}\nTherefore we have: \n$$\n\\mathbb{E}\\left[F^2_{3, \\zeta}(X, Y, Q)\\right] \\lesssim \\mathbb{E}[m_4(Q) T_1] + \\mathbb{E}[m_4(Q) T_2] \\,.\n$$\nNow: \n\\begin{align}\n & \\mathbb{E}[m_4(Q) T_1] \\notag \\\\\n & = \\mathbb{E}\\left[m_4(Q) \\left|\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n} + \\|\\tilde Q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\right)\\right\\}\\right|^2\\right] \\notag \\\\\n & = \\sigma_n \\int_{\\mathbb{R}^{p-1}}\\int_{-\\infty}^{\\infty} m_4(\\sigma_nt - \\tilde q^{\\top}\\tilde \\psi_0^s, \\tilde q) \\left|K\\left(t\\right) - K\\left(t + \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\right)\\right|^2 \\ f_s(\\sigma_nt\\mid \\tilde q) \\ dt \\ f(\\tilde q) \\ d\\tilde q \\notag\\\\\n & \\le \\sigma_n \\int_{\\mathbb{R}^{p-1}}\\int_{-\\infty}^{\\infty} m_4(\\sigma_nt - \\tilde q^{\\top}\\tilde \\psi_0^s, \\tilde q) \\left|K\\left(t\\right) - K\\left(t + \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\right)\\right| \\ f_s(\\sigma_nt\\mid \\tilde q) \\ dt \\ f(\\tilde q) \\ d\\tilde q \\notag \\\\\n & = \\sigma_n \\int_{\\mathbb{R}^{p-1}}\\int_{-\\infty}^{\\infty}m_4(\\sigma_nt - \\tilde q^{\\top}\\tilde \\psi_0^s, \\tilde q) \\int_{t}^{t + \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}} K'(s) \\ ds \\ f_s(\\sigma_nt\\mid \\tilde q) \\ dt \\ f(\\tilde q) \\ d\\tilde q \\notag \\\\\n & = \\sigma_n \\int_{\\mathbb{R}^{p-1}}\\int_{-\\infty}^{\\infty}K'(s) \\int_{s- \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}}^s m_4(\\sigma_nt - \\tilde q^{\\top}\\tilde \\psi_0^s, \\tilde q) f_s(\\sigma_nt\\mid \\tilde q) \\ dt \\ ds \n \\ f(\\tilde q) \\ d\\tilde q \\notag \\\\\n & = \\zeta \\sqrt{\\sigma_n} \\mathbb{E}[\\|\\tilde Q\\|m_4(-\\tilde Q^{\\top}\\tilde\\psi_0^s, \\tilde Q)f_s(0 \\mid \\tilde Q)] + R \\notag \n\\end{align}\nwhere, as before, we split $R$ into three parts, $R = R_1 + R_2 + R_3$. 
\n\\begin{align}\n \\left|R_1\\right| & = \\left|\\sigma_n \\int_{\\mathbb{R}^{p-1}}\\int_{-\\infty}^{\\infty}K'(s) \\int_{s- \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}}^s m_4(- \\tilde q^{\\top}\\tilde \\psi_0^s, \\tilde q) (f_s(\\sigma_nt\\mid \\tilde q) - f_s(0 \\mid \\tilde q)) \\ dt \\ ds \\ f(\\tilde q) \\ d\\tilde q\\right| \\notag \\\\\n \\label{eq:r1_env_1} & \\le \\sigma_n^2 \\int_{\\mathbb{R}^{p-1}}m_4(- \\tilde q^{\\top}\\tilde \\psi_0^s, \\tilde q)\\dot f_s(\\tilde q) \\int_{-\\infty}^{\\infty}K'(s) \\int_{s- \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}}^s |t| dt\\ ds \\ f(\\tilde q) \\ d\\tilde q \n \\end{align}\nWe next calculate the inner integral (involving $(s,t)$) of equation \\eqref{eq:r1_env_1}: \n\\begin{align*}\n& \\int_{-\\infty}^{\\infty}K'(s) \\int_{s- \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}}^s |t| dt\\ ds \\\\\n& =\\left(\\int_{-\\infty}^0 + \\int_0^{\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}} + \\int_{\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}}^{\\infty}\\right)K'(s) \\int_{s- \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}}^s |t| dt\\ ds \\\\\n& = \\frac12\\int_{-\\infty}^0 K'(s)\\left[\\left(s- \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\right)^2 - s^2\\right] \\ ds + \\frac12\\int_0^{\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}} K'(s)\\left[\\left(s- \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\right)^2 + s^2\\right] \\ ds \\\\\n& \\qquad \\qquad \\qquad \\qquad + \\frac12 \\int_{\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}}^{\\infty}K'(s) \\left[s^2 - \\left(s- \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\right)^2\\right] \\ ds\\\\\n& = -\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}} \\int_{-\\infty}^0 K'(s) s \\ ds + \\|\\tilde q\\|^2\\frac{\\zeta^2}{2\\sigma_n} \\int_{-\\infty}^0 K'(s) \\ ds + \\int_0^{\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}} s^2K'(s) \\ ds \\\\ \n& \\qquad \\qquad -\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}} \\int_0^{\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}} sK'(s) \\ ds + \\|\\tilde q\\|^2\\frac{\\zeta^2}{2\\sigma_n} \\int_0^{\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}} K'(s) \\ ds \\\\\n& \\qquad \\qquad \\qquad + \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}} \\int_{\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}}^{\\infty} sK'(s) \\ ds - \\|\\tilde q\\|^2\\frac{\\zeta^2}{2\\sigma_n} \\int_{\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}}^{\\infty} K'(s) \\ ds \\\\\n& = \\|\\tilde q\\|^2\\frac{\\zeta^2}{2\\sigma_n}\\left[2K\\left(\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\right) - 1\\right] + \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}} \\left[ -\\int_{-\\infty}^0 K'(s) s \\ ds - \\right. \\\\\n& \\qquad \\qquad \\left. 
\\int_0^{\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}} K'(s)s \\ ds + \\int_{\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}}^{\\infty} sK'(s) \\ ds\\right] + \\int_0^{\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}} s^2K'(s) \\ ds \\\\\n& = \\|\\tilde q\\|^2\\frac{\\zeta^2}{\\sigma_n}\\left[K\\left(\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\right) - K(0)\\right] + \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}} \\left[ -\\int_{-\\infty}^{-\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}} K'(s) s \\ ds + \\int_{\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}}^{\\infty} sK'(s) \\ ds\\right] \\\\\n& \\qquad \\qquad + \\int_0^{\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}} s^2K'(s) \\ ds \\\\\n& = \\|\\tilde q\\|^2\\frac{\\zeta^2}{\\sigma_n}\\left[K\\left(\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\right) - K(0)\\right] + \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\int_{-\\infty}^{\\infty} K'(s)|s|\\mathds{1}_{|s| \\ge \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}} \\ ds + \\int_0^{\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}} s^2K'(s) \\ ds \\\\\n& \\le \\dot{K}_+ \\|\\tilde q\\|^3\\frac{\\zeta^3}{\\sigma^{3\/2}_n} + \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}} \\int_{-\\infty}^{\\infty} K'(s)|s| \\ ds + \\|\\tilde q\\|^2\\frac{\\zeta^2}{\\sigma_n}\\left(K\\left(\\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\right) - K(0)\\right) \\\\\n& \\lesssim \\|\\tilde q\\|^3\\frac{\\zeta^3}{\\sigma^{3\/2}_n} + \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}} \n\\end{align*}\nPutting this bound in equation \\eqref{eq:r1_env_1} we obtain: \n \\begin{align*}\n |R_1| & \\le \\frac{\\sigma_n^2}{2} \\int_{\\mathbb{R}^{p-1}}m_4(- \\tilde q^{\\top}\\tilde \\psi_0^s, \\tilde q)\\dot f_s(\\tilde q) \\left(\\|\\tilde q\\|^3\\frac{\\zeta^3}{\\sigma^{3\/2}_n} + \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\right) \\ f(\\tilde q) \\ d\\tilde q \\\\\n & \\le \\frac{\\zeta^3\\sqrt{\\sigma_n}}{2} \\mathbb{E}\\left[m_4(- \\tilde Q^{\\top}\\tilde \\psi_0^s, \\tilde Q)\\dot f_s(\\tilde Q)\\|\\tilde Q\\|^3\\right] + \\frac{\\zeta \\sigma_n^{3\/2}}{2} \\mathbb{E}\\left[m_4(- \\tilde Q^{\\top}\\tilde \\psi_0^s, \\tilde Q)\\dot f_s(\\tilde Q)\\|\\tilde Q\\|\\right] \n\\end{align*}\nand \n\\begin{align*}\n & \\left|R_2\\right| \\\\\n & = \\left|\\sigma_n \\int_{\\mathbb{R}^{p-1}}\\int_{-\\infty}^{\\infty}K'(s) \\int_{s- \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}}^s \\left(m_4(\\sigma_n t - \\tilde q^{\\top}\\tilde \\psi_0^s, \\tilde q) - m_4( - \\tilde q^{\\top}\\tilde \\psi_0^s, \\tilde q)\\right)f_s(0 \\mid \\tilde q) \\ dt \\ ds \\ f(\\tilde q) \\ d\\tilde q\\right| \\\\\n & \\le \\sigma_n^2 \\int_{\\mathbb{R}^{p-1}}\\dot m_4( \\tilde q)f_s(0 \\mid \\tilde q) \\int_{-\\infty}^{\\infty}K'(s) \\int_{s- \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}}^s |t| dt\\ ds \\ f(\\tilde q) \\ d\\tilde q \\\\\n & \\lesssim \\sigma_n^2 \\int_{\\mathbb{R}^{p-1}}\\dot m_4( \\tilde q)f_s(0 \\mid \\tilde q) \\left(\\|\\tilde q\\|^3\\frac{\\zeta^3}{\\sigma^{3\/2}_n} + \\|\\tilde q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\right) \\ f(\\tilde q) \\ d\\tilde q \\\\\n & = \\zeta \\sigma_n^{3\/2} \\mathbb{E}\\left[\\dot m_4( \\tilde Q)f_s(0 \\mid \\tilde Q)\\|\\tilde Q\\|\\right] + \\zeta^3 \\sqrt{\\sigma_n} \\mathbb{E}\\left[\\dot m_4( \\tilde Q)f_s(0 \\mid \\tilde Q)\\|\\tilde Q\\|^3\\right]\n\\end{align*}\nThe third residual $R_3$ is of even higher order and is hence skipped. 
It is immediate that the orders of the remainder terms are at most $\\zeta \\sqrt{\\sigma_n}$, which implies: \n$$\n\\mathbb{E}[m_4(Q)T_1] \\lesssim \\zeta\\sqrt{\\sigma_n} \\,.\n$$\nThe calculation for $T_2$ is similar and hence skipped for brevity. Combining the conclusions for $T_1$ and $T_2$, we conclude that when $\\zeta \\le \\sqrt{\\mathcal{K} \\sigma_n}$: \n\\begin{align}\n& \\mathbb{E}\\left[F^2_{3, \\zeta}(X, Y, Q)\\right] \\notag \\\\\n & = \\mathbb{E}\\left[\\left|\\left[-2\\left(Y - X^{\\top}\\beta_0^s\\right)X^{\\top}\\delta_0^s + (X^{\\top}\\delta_0^s)^2\\right]\\right|^2 \\times \\sup_{d_*(\\theta, \\theta_0^s) \\le \\zeta} \\left|\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)\\right\\}\\right|^2\\right] \\notag \\\\\n & \\lesssim \\mathbb{E}\\left[m_4(Q)\\sup_{d_*(\\theta, \\theta_0^s) \\le \\zeta} \\left|\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)\\right\\}\\right|^2\\right] \\notag \\\\\n \\label{eq:env_3} & \\lesssim \\zeta \\sqrt{\\sigma_n} \\,.\n\\end{align}\n\\\\\n\\noindent\n{\\bf Case 2: } Now consider $\\zeta > \\sqrt{\\mathcal{K} \\sigma_n}$. Then it is immediate that: \n$$\n\\sup_{d_*^2(\\psi, \\psi^s_0) \\le \\zeta^2} \\|\\psi - \\psi^s_0\\| = \\zeta^2 \\,.\n$$\nUsing this we have: \n\\begin{align}\n & \\mathbb{E}[m_4(Q) T_1] \\notag \\\\\n & = \\mathbb{E}\\left[m_4(Q)\\left|\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n} + \\|\\tilde Q\\|\\frac{\\zeta^2}{\\sigma_n}\\right)\\right\\}\\right|^2\\right] \\notag \\\\\n & = \\sigma_n \\int_{\\mathbb{R}^{p-1}}\\int_{-\\infty}^{\\infty} m_4(\\sigma_nt - \\tilde q^{\\top}\\tilde \\psi_0^s, \\tilde q) \\left|K\\left(t\\right) - K\\left(t + \\|\\tilde q\\|\\frac{\\zeta^2}{\\sigma_n}\\right)\\right|^2 \\ f_s(\\sigma_nt\\mid \\tilde q) \\ dt \\ f(\\tilde q) \\ d\\tilde q \\notag \\\\\n & \\le \\sigma_n \\int_{\\mathbb{R}^{p-1}}\\int_{-\\infty}^{\\infty} m_4(\\sigma_nt - \\tilde q^{\\top}\\tilde \\psi_0^s, \\tilde q) \\left|K\\left(t\\right) - K\\left(t + \\|\\tilde q\\|\\frac{\\zeta^2}{\\sigma_n}\\right)\\right| \\ f_s(\\sigma_nt\\mid \\tilde q) \\ dt \\ f(\\tilde q) \\ d\\tilde q \\notag \\\\\n & = \\sigma_n \\int_{\\mathbb{R}^{p-1}}\\int_{-\\infty}^{\\infty} K'(s) \\int_{s - \\|\\tilde q\\|\\frac{\\zeta^2}{\\sigma_n}}^{s} m_4(\\sigma_nt - \\tilde q^{\\top}\\tilde \\psi_0^s, \\tilde q) \\ f_s(\\sigma_nt\\mid \\tilde q) \\ dt \\ ds \\ f(\\tilde q) \\ d\\tilde q \\notag \\\\\n & = \\zeta^2 \\mathbb{E}\\left[\\|\\tilde Q\\|m_4\\left(- \\tilde Q^{\\top}\\tilde \\psi_0^s, \\tilde Q\\right) f_s(0 \\mid \\tilde Q)\\right] + R \\notag \n\\end{align}\nexactly as in Case 1. The analysis of the remainder term is similar, and it is of higher order. 
We conclude that, for $\\zeta > \\sqrt{\\mathcal{K}\\sigma_n}$: \n\\begin{align}\n& \\mathbb{E}\\left[F^2_{3, \\zeta}(X, Y, Q)\\right] \\notag \\\\\n & = \\mathbb{E}\\left[\\left|\\left[-2\\left(Y - X^{\\top}\\beta_0^s\\right)X^{\\top}\\delta_0^s + (X^{\\top}\\delta_0^s)^2\\right]\\right|^2 \\times \\sup_{d_*(\\theta, \\theta_0^s) \\le \\zeta} \\left|\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)\\right\\}\\right|^2\\right] \\notag \\\\\n & \\lesssim \\mathbb{E}\\left[m_4(Q)\\sup_{d_*(\\theta, \\theta_0^s) \\le \\zeta} \\left|\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)\\right\\}\\right|^2\\right] \\notag \\\\\n \\label{eq:env_4} & \\lesssim \\zeta^2\n\\end{align}\nCombining \\eqref{eq:env_3} and \\eqref{eq:env_4} with equation \\eqref{eq:moc_bound_2} we have:\n\\begin{align*}\n\\sqrt{n}\\mathbb{E}\\left[\\sup_{\\theta: d_*(\\theta, \\theta_0^s) \\le \\zeta} \\left|\\left(\\mathbb{P}_n - P\\right)\\left(f_\\theta - f_{\\theta_0^s}\\right)\\right|\\right] & \\lesssim \\sqrt{\\zeta}\\sigma_n^{1\/4}\\mathds{1}_{\\zeta \\le \\sqrt{\\mathcal{K}\\sigma_n}} + \\zeta \\mathds{1}_{\\zeta > \\sqrt{\\mathcal{K} \\sigma_n}} \\\\\n& := \\phi_n(\\zeta) \\,.\n\\end{align*}\nHence to obtain the rate we have to solve $r_n^2 \\phi_n(1\/r_n) \\le \\sqrt{n}$, i.e. (ignoring $\\mathcal{K}$ as it does not affect the rate)\n$$\nr_n^{3\/2}\\sigma_n^{1\/4}\\mathds{1}_{r_n \\ge \\sigma_n^{-1\/2}} + r_n \\mathds{1}_{r_n \\le \\sigma_n^{-1\/2}} \\le \\sqrt{n} \\,.\n$$\nNow if $r_n \\le \\sigma_n^{-1\/2}$ then $r_n = \\sqrt{n}$, which implies $\\sqrt{n} \\le \\sigma_n^{-1\/2}$, i.e. $n\\sigma_n \\le 1$, contradicting $n\\sigma_n \\to \\infty$. On the other hand, if $r_n \\ge \\sigma_n^{-1\/2}$ then $r_n = n^{1\/3}\\sigma_n^{-1\/6}$. This requires $n^{1\/3}\\sigma_n^{-1\/6} \\ge \\sigma_n^{-1\/2}$, i.e. $n^{1\/3} \\ge \\sigma_n^{-1\/3}$, i.e. $n\\sigma_n \\ge 1$, which is consistent with $n\\sigma_n \\to \\infty$. This implies: \n$$\nn^{2\/3}\\sigma_n^{-1\/3}d_*^2(\\hat \\theta^s, \\theta_0^s) = O_p(1) \\,.\n$$\nNow as $n^{2\/3}\\sigma_n^{-1\/3} \\gg \\sigma_n^{-1}$, we have: \n$$\n\\frac{1}{\\sigma_n}d_*^2(\\hat \\theta^s, \\theta_0^s) = o_p(1) \\,,\n$$\nwhich (considering the two regimes of $d_*$ separately) further indicates $\\|\\hat \\psi^s - \\psi_0^s\\|\/\\sigma_n = o_p(1)$. This, along with the fact that $\\|\\psi_0^s - \\psi_0\\|\/\\sigma_n = o(1)$ (from Lemma \\ref{bandwidth}), establishes that $\\|\\hat \\psi^s - \\psi_0\\|\/\\sigma_n = o_p(1)$. This completes the proof. \n\\end{proof}\n\n\n\n\n\\section{Supplementary Lemmas for the proof of Theorem \\ref{thm:regression}}\n\\label{sec:supp_B}\n\\subsection{Proof of Lemma \\ref{bandwidth}}\n\\begin{proof}\nFirst we establish that $\\theta_0^s \\to \\theta_0$. Note that for all $n$ we have: \n$$\n\\mathbb{M}^s(\\theta_0^s) \\le \\mathbb{M}^s(\\theta_0) \\,.\n$$\nTaking $\\limsup$ on both sides, and using Lemma \\ref{lem:uniform_smooth} on the right-hand side, we have: \n$$\n\\limsup_{n \\to \\infty} \\mathbb{M}^s(\\theta_0^s) \\le \\mathbb{M}(\\theta_0) \\,.\n$$\nNow using Lemma \\ref{lem:uniform_smooth} again we have: \n$$\n\\limsup_{n \\to \\infty} \\mathbb{M}^s(\\theta_0^s) = \\limsup_{n \\to \\infty} \\left[\\mathbb{M}^s(\\theta_0^s) - \\mathbb{M}(\\theta_0^s) + \\mathbb{M}(\\theta_0^s)\\right] = \\limsup_{n \\to \\infty} \\mathbb{M}(\\theta_0^s) \\,,\n$$\nwhich implies $\\limsup_{n \\to \\infty} \\mathbb{M}(\\theta_0^s) \\le \\mathbb{M}(\\theta_0)$, and from the continuity of $\\mathbb{M}(\\theta)$ and the fact that $\\theta_0$ is its unique minimizer, we conclude that $\\theta_0^s \\to \\theta_0$. 
Now, using Lemma \\ref{lem:pop_curv_nonsmooth} and Lemma \\ref{lem:uniform_smooth}, we further obtain: \n\\begin{align}\n u_- d^2(\\theta_0^s, \\theta_0) & \\le \\mathbb{M}(\\theta_0^s) - \\mathbb{M}(\\theta_0) \\notag \\\\\n & = \\mathbb{M}(\\theta_0^s) - \\mathbb{M}^s(\\theta^s_0) + \\underset{\\le 0}{\\underline{\\mathbb{M}^s(\\theta_0^s) - \\mathbb{M}^s(\\theta_0)}} + \\mathbb{M}^s(\\theta_0) - \\mathbb{M}(\\theta_0) \\notag \\\\\n \\label{eq:est_dist_bound} & \\le 2\\sup_{\\theta \\in \\Theta}\\left|\\mathbb{M}^s(\\theta) - \\mathbb{M}(\\theta)\\right| \\le 2K_1 \\sigma_n \\,. \n\\end{align}\nNote that we need consistency of $\\theta_0^s$ here, as the lower bound in Lemma \\ref{lem:pop_curv_nonsmooth} is only valid in a neighborhood of $\\theta_0$. As $\\theta_0^s$ is the minimizer of $\\mathbb{M}^s(\\theta)$, from the first order conditions we have: \n\\begin{align}\n \\label{eq:beta_grad}\\nabla_{\\beta}\\mathbb{M}^s(\\theta_0^s) & = -2\\mathbb{E}\\left[X(Y - X^{\\top}\\beta_0^s)\\right] + 2\\mathbb{E} \\left\\{\\left[XX^{\\top}\\delta_0^s\\right] K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\right\\} = 0 \\\\\n \\label{eq:delta_grad}\\nabla_{\\delta}\\mathbb{M}^s(\\theta_0^s) & = \\mathbb{E} \\left\\{\\left[-2X\\left(Y - X^{\\top}\\beta_0^s\\right) + 2XX^{\\top}\\delta_0^s\\right] K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\right\\} = 0\\\\\n \\label{eq:psi_grad}\\nabla_{\\psi}\\mathbb{M}^s(\\theta_0^s) & = \\frac{1}{\\sigma_n}\\mathbb{E} \\left\\{\\left[-2\\left(Y - X^{\\top}\\beta_0^s\\right)X^{\\top}\\delta_0^s + (X^{\\top}\\delta_0^s)^2\\right]\\tilde Q K'\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\right\\} = 0\n\\end{align}\nWe first show that $(\\tilde \\psi^s_0 - \\tilde \\psi_0)\/\\sigma_n \\to 0$ via a subsequence argument. From equation \\eqref{eq:est_dist_bound}, we know $\\|\\psi_0^s - \\psi_0\\|\/\\sigma_n = O(1)$. Hence any subsequence has a further convergent subsequence $\\psi^s_{0, n_k}$ with $(\\tilde \\psi^s_{0, n_k} - \\tilde \\psi_0)\/\\sigma_{n_k} \\to h$. If we can prove that $h = 0$, then every subsequence of $\\|\\psi_0^s - \\psi_0\\|\/\\sigma_n$ has a further subsequence converging to $0$, which implies that $\\|\\psi_0^s - \\psi_0\\|\/\\sigma_n$ itself converges to $0$. To save notation, we simply prove that if $(\\tilde\\psi_0^s - \\tilde\\psi_0)\/\\sigma_n \\to h$ then $h = 0$. We start with equation \\eqref{eq:psi_grad}. Define $\\tilde \\eta = (\\tilde \\psi^s_0 - \\tilde \\psi_0)\/\\sigma_n$, where $\\tilde \\psi$ denotes all the coordinates of $\\psi$ except the first one, as the first coordinate of $\\psi$ is assumed to be $1$ for identifiability purposes. \n\\allowdisplaybreaks\n\\begin{align}\n 0 & = \\frac{1}{\\sigma_n}\\mathbb{E} \\left\\{\\left[-2\\left(Y - X^{\\top}\\beta_0^s\\right)X^{\\top}\\delta_0^s + (X^{\\top}\\delta_0^s)^2\\right]\\tilde Q K'\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\right\\} \\notag \\\\\n & = \\frac{1}{\\sigma_n}\\mathbb{E}\\left[\\left( -2\\delta_0^{s^{\\top}} XX^{\\top}(\\beta_0 - \\beta^s_0) -2\\delta_0^{s^{\\top}} XX^{\\top}\\delta_0\\mathds{1}_{Q^{\\top}\\psi_0 > 0} + (X^{\\top}\\delta_0^s)^2\\right)\\tilde QK'\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)\\right] \\notag \\\\\n & = \\frac{1}{\\sigma_n}\\mathbb{E}\\left[\\left( -2\\delta_0^{s^{\\top}} XX^{\\top}(\\beta_0 - \\beta^s_0) -2\\delta_0^{s^{\\top}} XX^{\\top}(\\delta_0 - \\delta_0^s)\n \\mathds{1}_{Q^{\\top}\\psi_0 > 0} \\right. \\right. \\notag \\\\\n & \\hspace{10em} \\left. 
\\left. + (X^{\\top}\\delta_0^s)^2\\left(1 - 2\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right)\\right)\\tilde QK'\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)\\right] \\notag \\\\\n & = \\frac{-2}{\\sigma_n}\\mathbb{E}\\left[\\left(\\delta_0^{s^{\\top}} g(Q)(\\beta_0 - \\beta^s_0)\\right)\\tilde QK'\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)\\right] \\notag \\\\\n & \\qquad \\qquad \\qquad - \\frac{2}{\\sigma_n} \\mathbb{E}\\left[\\left(\\delta_0^{s^{\\top}}g(Q)(\\delta_0 - \\delta^s_0)\\right)\\tilde QK'\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right] \\notag \\\\\n & \\hspace{15em} + \\frac{1}{\\sigma_n}\\mathbb{E}\\left[\\left(\\delta_0^{s^{\\top}}g(Q)\\delta^s_0\\right)\\tilde QK'\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)\\left(1 - 2\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right)\\right] \\notag \\\\\n & = -\\underbrace{\\frac{2}{\\sigma_n}\\mathbb{E}\\left[\\left(\\delta_0^{s^{\\top}} g(Q)(\\beta_0 - \\beta^s_0)\\right)\\tilde QK'\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)\\right]}_{T_1} \\notag \\\\\n & \\qquad \\qquad -\\underbrace{\\frac{2}{\\sigma_n} \\mathbb{E}\\left[\\left(\\delta_0^{s^{\\top}}g(Q)(\\delta_0 - \\delta^s_0)\\right)\\tilde QK'\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right]}_{T_2} \\notag \\\\\n & \\qquad \\qquad \\qquad + \\underbrace{\\frac{1}{\\sigma_n}\\mathbb{E}\\left[\\left(\\delta_0^{\\top}g(Q)\\delta_0\\right)\\tilde QK'\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)\\left(1 - 2\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right)\\right]}_{T_3} \\notag \\\\\n & \\qquad \\qquad \\qquad \\qquad + \\underbrace{\\frac{1}{\\sigma_n}\\mathbb{E}\\left[\\left((\\delta^s_0 - \\delta_0)^{\\top}g(Q)(\\delta^s_0 + \\delta_0)\\right)\\tilde QK'\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)\\left(1 - 2\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right)\\right]}_{T_4} \\notag \\\\\n \\label{eq:pop_est_conv_1} & = -T_1 - T_2 + T_3 + T_4 \\,,\n \\end{align}\nwhere we have used the decomposition $\\delta_0^{s^{\\top}}g(Q)\\delta_0^s = \\delta_0^{\\top}g(Q)\\delta_0 + (\\delta_0^s - \\delta_0)^{\\top}g(Q)(\\delta_0^s + \\delta_0)$, valid since $g(Q)$ is symmetric. As mentioned earlier, there is a bijection between $(Q_1, \\tilde Q)$ and $(Q^{\\top}\\psi_0, \\tilde Q)$. The map in one direction is obvious. The other direction is also simple: as the first coordinate of $\\psi_0$ is $1$, we have $Q^{\\top}\\psi_0 = Q_1 + \\tilde Q^{\\top}\\tilde \\psi_0$, so that: \n$$\n(Q^{\\top}\\psi_0, \\tilde Q) \\mapsto (Q^{\\top}\\psi_0 - \\tilde Q^{\\top}\\tilde \\psi_0, \\tilde Q) \\,.\n$$\nWe first show that $T_1, T_2$ and $T_4$ are $o(1)$. 
Towards that end, first note that: \n\\begin{align*}\n|T_1| & \\le \\frac{2}{\\sigma_n}\\mathbb{E}\\left[\\|g(Q)\\|_{op} \\ \\|\\tilde Q\\| \\ \\left|K'\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)\\right|\\right]\\|\\delta_0^s\\|\\|\\beta_0 - \\beta_0^s\\| \\\\\n|T_2| & \\le \\frac{2}{\\sigma_n} \\mathbb{E}\\left[\\|g(Q)\\|_{op} \\ \\|\\tilde Q\\| \\ \\left|K'\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)\\right|\\right]\\|\\delta_0^s\\|\\|\\delta_0 - \\delta_0^s\\| \\\\\n|T_4| & \\le \\frac{1}{\\sigma_n} \\mathbb{E}\\left[\\|g(Q)\\|_{op} \\ \\|\\tilde Q\\| \\ \\left|K'\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)\\right|\\right]\\|\\delta_0^s + \\delta_0\\|\\|\\delta_0 - \\delta_0^s\\|\n\\end{align*}\nSince $\\|\\beta_0 - \\beta_0^s\\|$ and $\\|\\delta_0 - \\delta_0^s\\|$ are $o(1)$ by equation \\eqref{eq:est_dist_bound}, to show that the above terms are $o(1)$ it is enough to show: \n$$\n \\frac{1}{\\sigma_n}\\mathbb{E}\\left[\\|g(Q)\\|_{op} \\ \\|\\tilde Q\\| \\ \\left|K'\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)\\right|\\right] = O(1) \\,.\n$$\nTowards that direction, recall that $\\tilde\\eta = (\\tilde \\psi_0^s - \\tilde \\psi_0)\/\\sigma_n$: \n\\begin{align*}\n& \\frac{1}{\\sigma_n}\\mathbb{E}\\left[\\|g(Q)\\|_{op} \\ \\|\\tilde Q\\| \\ \\left|K'\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)\\right|\\right] \\\\\n& \\le c_+ \\frac{1}{\\sigma_n}\\mathbb{E}\\left[\\|\\tilde Q\\| \\ \\left|K'\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)\\right|\\right] \\\\\n& = c_+ \\frac{1}{\\sigma_n}\\int \\int \\|\\tilde q\\| \\left|K'\\left(\\frac{t}{\\sigma_n} + \\tilde q^{\\top}\\tilde\\eta \\right)\\right| f_0\\left(t \\mid \\tilde q\\right) f(\\tilde q) \\ dt \\ d\\tilde q \\\\\n& = c_+ \\int \\int \\|\\tilde q\\| \\left|K'\\left(t + \\tilde q^{\\top}\\tilde\\eta \\right)\\right| f_0\\left(\\sigma_n t \\mid \\tilde q\\right) f(\\tilde q) \\ dt \\ d\\tilde q \\\\\n& = c_+ \\int \\|\\tilde q\\| f_0\\left(0 \\mid \\tilde q\\right) \\int \\left|K'\\left(t + \\tilde q^{\\top}\\tilde\\eta \\right)\\right| \\ dt \\ f(\\tilde q) \\ d\\tilde q + R_1 \\\\\n& = c_+ \\int \\left|K'\\left(t\\right)\\right| dt \\ \\mathbb{E}\\left[\\|\\tilde Q\\| f_0(0 \\mid \\tilde Q)\\right] + R_1 = O(1) + R_1 \\,.\n\\end{align*}\nTherefore, it only remains to show that $R_1$ is also $O(1)$ (in fact, of smaller order): \n\\begin{align*}\n|R_1| & = \\left|c_+ \\int \\int \\|\\tilde q\\| \\left|K'\\left(t + \\tilde q^{\\top}\\tilde\\eta \\right)\\right| \\left(f_0\\left(\\sigma_n t \\mid \\tilde q\\right) - f_0(0 \\mid \\tilde q) \\right)f(\\tilde q) \\ dt \\ d\\tilde q\\right| \\\\\n& \\le c_+ F_+ \\sigma_n \\int \\|\\tilde q\\| \\int_{-\\infty}^{\\infty} |t|\\left|K'\\left(t + \\tilde q^{\\top}\\tilde\\eta \\right)\\right| \\ dt \\ f(\\tilde q) \\ d\\tilde q \\\\\n& = c_+ F_+ \\sigma_n \\int \\|\\tilde q\\| \\int_{-\\infty}^{\\infty} |t - \\tilde q^{\\top}\\tilde\\eta|\\left|K'\\left(t\\right)\\right| \\ dt \\ f(\\tilde q) \\ d\\tilde q \\\\\n& \\le c_+ F_+ \\sigma_n \\left[\\int \\|\\tilde q\\| \\int_{-\\infty}^{\\infty} |t|\\left|K'\\left(t\\right)\\right| \\ dt \\ f(\\tilde q) \\ d\\tilde q + \\int \\|\\tilde q\\|^2\\|\\tilde\\eta\\| \\int_{-\\infty}^{\\infty}\\left|K'\\left(t\\right)\\right| \\ dt \\ f(\\tilde q) \\ d\\tilde q\\right] \\\\\n& = c_+ F_+ \\sigma_n \\left[\\left(\\int_{-\\infty}^{\\infty} |t|\\left|K'\\left(t\\right)\\right| \\ dt\\right) \\times \\mathbb{E}[\\|\\tilde Q\\|] + \\left(\\int_{-\\infty}^{\\infty}\\left|K'\\left(t\\right)\\right| \\ dt\\right) \\times \\|\\tilde\\eta\\| \\ \\mathbb{E}[\\|\\tilde Q\\|^2]\\right] \\\\\n& = O(\\sigma_n) = o(1) \\,,\n\\end{align*}\nsince $\\|\\tilde\\eta\\| = O(1)$ by equation \\eqref{eq:est_dist_bound}. This establishes that $T_1$, $T_2$ and $T_4$ are all $o(1)$. 
For $T_3$, the limit is non-degenerate and can be calculated as follows: \n\\begin{align*}\nT_3 &= \\frac{1}{\\sigma_n}\\mathbb{E}\\left[\\left(\\delta_0^{\\top}g(Q)\\delta_0\\right)\\tilde QK'\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)\\left(1 - 2\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right)\\right] \\\\\n& = \\frac{1}{\\sigma_n} \\int \\int \\left(\\delta_0^{\\top}g(t - \\tilde q^{\\top}\\tilde \\psi_0, \\tilde q)\\delta_0\\right)\\tilde q K'\\left(\\frac{t}{\\sigma_n} + \\tilde q^{\\top} \\tilde\\eta\\right)\\left(1 - 2\\mathds{1}_{t > 0}\\right) \\ f_0(t \\mid \\tilde q) \\ f(\\tilde q) \\ dt \\ d\\tilde q \\\\\n& = \\int \\int \\left(\\delta_0^{\\top}g(\\sigma_n t - \\tilde q^{\\top}\\tilde \\psi_0, \\tilde q)\\delta_0\\right)\\tilde q K'\\left(t + \\tilde q^{\\top} \\tilde\\eta\\right)\\left(1 - 2\\mathds{1}_{t > 0}\\right) \\ f_0(\\sigma_n t \\mid \\tilde q) \\ f(\\tilde q) \\ dt \\ d\\tilde q \\\\\n& = \\int \\int \\left(\\delta_0^{\\top}g(- \\tilde q^{\\top}\\tilde \\psi_0, \\tilde q)\\delta_0\\right)\\tilde q K'\\left(t + \\tilde q^{\\top} \\tilde\\eta\\right)\\left(1 - 2\\mathds{1}_{t > 0}\\right) \\ f_0(0 \\mid \\tilde q) \\ f(\\tilde q) \\ dt \\ d\\tilde q + R \\\\\n& = \\int \\left(\\delta_0^{\\top}g(- \\tilde q^{\\top}\\tilde \\psi_0, \\tilde q)\\delta_0\\right)\\tilde q f_0(0 \\mid \\tilde q) \\left[\\int_{-\\infty}^0 K'\\left(t + \\tilde q^{\\top} \\tilde\\eta\\right) \\ dt - \\int_0^\\infty K'\\left(t + \\tilde q^{\\top}\\tilde \\eta\\right) \\ dt \\right] \\ f(\\tilde q) \\ d\\tilde q + R \\\\\n&= \\int \\left(\\delta_0^{\\top}g(- \\tilde q^{\\top}\\tilde \\psi_0, \\tilde q)\\delta_0\\right)\\tilde q f_0(0 \\mid \\tilde q)\\left(2K\\left(\\tilde q^{\\top}\\tilde\\eta\\right) - 1\\right) \\ f(\\tilde q) \\ d\\tilde q + R \\\\\n& = \\mathbb{E}\\left[\\tilde Q f_0(0 \\mid \\tilde Q) \\left(\\delta_0^{\\top}g(- \\tilde Q^{\\top}\\tilde \\psi_0, \\tilde Q)\\delta_0\\right)\\left(2K(\\tilde Q^{\\top} \\tilde\\eta)- 1\\right)\\right] + R \n\\end{align*} \nThat the remainder $R$ is $o(1)$ again follows by a calculation similar to the ones before and is hence skipped. Therefore, when $\\tilde\\eta = (\\tilde \\psi_0^s - \\tilde \\psi_0)\/\\sigma_n \\to h$, we have: \n$$\nT_3 \\overset{n \\to \\infty}{\\longrightarrow} \\mathbb{E}\\left[\\tilde Q f_0(0 \\mid \\tilde Q) \\left(\\delta_0^{\\top}g(- \\tilde Q^{\\top}\\tilde \\psi_0, \\tilde Q)\\delta_0\\right)\\left(2K(\\tilde Q^{\\top}h)- 1\\right)\\right] \\,,\n$$\nwhich along with equation \\eqref{eq:pop_est_conv_1} implies: \n$$\n\\mathbb{E}\\left[\\tilde Q f_0(0 \\mid \\tilde Q) \\left(\\delta_0^{\\top}g(- \\tilde Q^{\\top}\\tilde \\psi_0, \\tilde Q)\\delta_0\\right)\\left(2K(\\tilde Q^{\\top}h)- 1\\right)\\right] = 0 \\,.\n$$\nTaking the inner product with $h$ on both sides of the above equation we obtain: \n$$\n\\mathbb{E}\\left[\\tilde Q^{\\top}h \\ f_0(0 \\mid \\tilde Q) \\left(\\delta_0^{\\top}g(- \\tilde Q^{\\top}\\tilde \\psi_0, \\tilde Q)\\delta_0\\right)\\left(2K(\\tilde Q^{\\top}h)- 1\\right)\\right] = 0 \\,.\n$$\nNow, as $K$ is a symmetric distribution function, $2K(t) - 1$ has the same sign as $t$, so $\\tilde Q^{\\top}h\\left(2K(\\tilde Q^{\\top}h) - 1\\right) \\ge 0$, and consequently $\\left(\\delta_0^{\\top}g(- \\tilde Q^{\\top}\\tilde \\psi_0, \\tilde Q)\\delta_0\\right)\\tilde Q^{\\top}h \\ f_0(0 \\mid \\tilde Q)\\left(2K(\\tilde Q^{\\top}h) - 1\\right) \\ge 0$ almost surely. As its expectation is $0$, we deduce that $\\tilde Q^{\\top}h \\ f_0(0 \\mid \\tilde Q)\\left(2K(\\tilde Q^{\\top}h)-1\\right) = 0$ almost surely, which (as $K$ is strictly increasing and $\\tilde Q^{\\top}h \\neq 0$ with positive probability whenever $h \\neq 0$) further implies $h = 0$. 
\n\\\\\\\\\n\\noindent\nWe next prove that $(\\beta_0 - \\beta^s_0)\/\\sqrt{\\sigma_n} \\to 0$ and $(\\delta_0 - \\delta^s_0)\/\\sqrt{\\sigma_n} \\to 0$ using equations \\eqref{eq:beta_grad} and \\eqref{eq:delta_grad}. We start with equation \\eqref{eq:beta_grad}: \n\\begin{align}\n 0 & = -\\mathbb{E}\\left[X(Y - X^{\\top}\\beta_0^s)\\right] + \\mathbb{E} \\left\\{\\left[XX^{\\top}\\delta_0^s\\right] K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\right\\} \\notag \\\\\n & = -\\mathbb{E}\\left[XX^{\\top}(\\beta_0 - \\beta_0^s)\\right] - \\mathbb{E}[XX^{\\top}\\delta_0\\mathds{1}_{Q^{\\top}\\psi_0 > 0}] + \\mathbb{E} \\left[ g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\right]\\delta_0^s \\notag \\\\\n & = -\\Sigma_X(\\beta_0 - \\beta_0^s) -\\mathbb{E}\\left[g(Q)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right](\\delta_0 - \\delta_0^s) + \\mathbb{E} \\left[g(Q)\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - \\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right\\}\\right]\\delta_0^s \\notag \\\\\n & = \\Sigma_X\\frac{(\\beta_0^s - \\beta_0)}{\\sigma_n} + \\mathbb{E}\\left[g(Q)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right]\\frac{(\\delta_0^s - \\delta_0)}{\\sigma_n} + \\frac{1}{\\sigma_n}\\mathbb{E} \\left[g(Q)\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - \\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right\\}\\right]\\delta_0^s \\notag \\\\ \n & = \\left( \\mathbb{E}\\left[g(Q)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right]\\right)^{-1}\\Sigma_X \\frac{(\\beta_0^s - \\beta_0)}{\\sigma_n} + \\frac{\\delta_0^s - \\delta_0}{\\sigma_n} \\notag \\\\\n \\label{eq:deriv1} & \\qquad \\qquad \\qquad \\qquad + \\left( \\mathbb{E}\\left[g(Q)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right]\\right)^{-1} \\frac{1}{\\sigma_n}\\mathbb{E} \\left[g(Q)\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - \\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right\\}\\right]\\delta_0^s \n\\end{align}\nFrom equation \\eqref{eq:delta_grad} we have:\n\\begin{align}\n 0 & = \\mathbb{E} \\left\\{\\left[-X\\left(Y - X^{\\top}\\beta_0^s\\right) + XX^{\\top}\\delta_0^s\\right] K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\right\\} \\notag \\\\\n & = -\\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\right](\\beta_0 - \\beta_0^s) - \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right]\\delta_0 \\notag \\\\\n & \\hspace{20em}+ \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\right]\\delta_0^s \\notag \\\\\n & = -\\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\right](\\beta_0 - \\beta_0^s) - \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right](\\delta_0 - \\delta_0^s) \\notag \\\\\n & \\hspace{20em} + \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\left(1 - \\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right)\\right]\\delta_0^s \\notag \\\\\n & = \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\right]\\frac{(\\beta_0^s - \\beta_0)}{\\sigma_n} + \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right]\\frac{(\\delta^s_0 - \\delta_0)}{\\sigma_n} \\notag \\\\\n & \\hspace{20em} + \\frac{1}{\\sigma_n}\\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\left(1 - \\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right)\\right]\\delta_0^s \\notag \\\\ \n \\label{eq:deriv2} & = 
\\left( \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right]\\right)^{-1}\\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\right]\\frac{(\\beta_0^s - \\beta_0)}{\\sigma_n} + \\frac{(\\delta^s_0 - \\delta_0)}{\\sigma_n} \\notag \\\\\n & \\qquad \\qquad \\qquad + \\left( \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right]\\right)^{-1} \\frac{1}{\\sigma_n}\\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\left(1 - \\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right)\\right]\\delta_0^s \n \\end{align}\nSubtracting equation \\eqref{eq:deriv2} from \\eqref{eq:deriv1} we obtain: \n$$\n0 = A_n \\frac{(\\beta_0^s - \\beta_0)}{\\sigma_n} + b_n \\,,\n$$\ni.e. \n$$\n\\lim_{n \\to \\infty} \\frac{(\\beta_0^s - \\beta_0)}{\\sigma_n} = \\lim_{n \\to \\infty} -A_n^{-1}b_n \\,,\n$$\nwhere: \n\\begin{align*}\nA_n & = \\left( \\mathbb{E}\\left[g(Q)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right]\\right)^{-1}\\Sigma_X \\\\\n& \\qquad \\qquad - \\left( \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right]\\right)^{-1}\\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\right] \\\\\nb_n & = \\left( \\mathbb{E}\\left[g(Q)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right]\\right)^{-1} \\frac{1}{\\sigma_n}\\mathbb{E} \\left[g(Q)\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - \\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right\\}\\right]\\delta_0^s \\\\\n& \\qquad - \\left( \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right]\\right)^{-1} \\frac{1}{\\sigma_n}\\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\left(1 - \\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right)\\right]\\delta_0^s \n\\end{align*}\nIt is immediate via the DCT that as $n \\to \\infty$: \n\\begin{align}\n \\label{eq:limit_3} \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\right] & \\longrightarrow \\mathbb{E}\\left[g(Q)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right] \\,, 
\\\\\n \\label{eq:limit_4} \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right] & \\longrightarrow \\mathbb{E}\\left[g(Q)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right] \\,.\n\\end{align}\nFrom equations \\eqref{eq:limit_3} and \\eqref{eq:limit_4} it is immediate that: \n\\begin{align*}\n\\lim_{n \\to \\infty} A_n & = \\left( \\mathbb{E}\\left[g(Q)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right]\\right)^{-1}\\Sigma_X - I \\\\\n& = \\left( \\mathbb{E}\\left[g(Q)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right]\\right)^{-1}\\left( \\mathbb{E}\\left[g(Q)\\mathds{1}_{Q^{\\top}\\psi_0 \\le 0}\\right]\\right) := A\\,.\n\\end{align*}\nNext observe that: \n\\begin{align}\n & \\frac{1}{\\sigma_n} \\mathbb{E}\\left[g(Q)\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - \\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right\\}\\right] \\notag \\\\\n & = \\frac{1}{\\sigma_n} \\mathbb{E}\\left[g(Q)\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right) - \\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right\\}\\right] \\notag \\\\\n & = \\int_{\\mathbb{R}^{p-1}} \\int_{-\\infty}^{\\infty} g(\\sigma_nt - \\tilde q^{\\top}\\tilde \\psi_0, \\tilde q)\\left[K\\left(t + \\tilde q^{\\top}\\tilde \\eta\\right) - \\mathds{1}_{t > 0}\\right] f_0(\\sigma_n t \\mid \\tilde q) \\ dt \\ f(\\tilde q) \\ d\\tilde q \\notag \\\\\n \\label{eq:limit_1} & \\longrightarrow \\mathbb{E}\\left[g(-\\tilde Q^{\\top}\\tilde \\psi_0, \\tilde Q)f_0(0 \\mid \\tilde Q)\\right] \\cancelto{0}{\\int_{-\\infty}^{\\infty} \\left[K\\left(t\\right) - \\mathds{1}_{t > 0}\\right] \\ dt} \\,, \n\\end{align}\nusing $\\tilde\\eta \\to 0$ and the symmetry of $K$, which makes the last integral vanish. A similar calculation yields: \n\\begin{align}\n & \\frac{1}{\\sigma_n} \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\left(1 - \\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right)\\right] \\notag \\\\\n \\label{eq:limit_2} & \\longrightarrow \\mathbb{E}[g(-\\tilde Q^{\\top}\\tilde \\psi_0, \\tilde Q)f_0(0 \\mid \\tilde Q)]\\int_{-\\infty}^{\\infty} K\\left(t\\right)\\mathds{1}_{t \\le 0} \\ dt \\,.\n\\end{align}\nCombining equations \\eqref{eq:limit_1} and \\eqref{eq:limit_2} we conclude: \n\\begin{align*}\n\\lim_{n \\to \\infty} b_n &= -\\left( \\mathbb{E}\\left[g(Q)\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right]\\right)^{-1} \\mathbb{E}[g(-\\tilde Q^{\\top}\\tilde \\psi_0, \\tilde Q)\\delta_0f_0(0 \\mid \\tilde Q)]\\int_{-\\infty}^{\\infty} K\\left(t\\right)\\mathds{1}_{t \\le 0} \\ dt \\\\\n& := b \\,,\n\\end{align*}\nwhich further implies \n$$\n\\lim_{n \\to \\infty} \\frac{(\\beta_0^s - \\beta_0)}{\\sigma_n} = -A^{-1}b \\implies (\\beta_0^s - \\beta_0) = O(\\sigma_n) = o(\\sqrt{\\sigma_n})\\,,\n$$\nand by similar calculations: \n$$\n(\\delta_0^s - \\delta_0) = O(\\sigma_n) = o(\\sqrt{\\sigma_n}) \\,.\n$$\nThis completes the proof. \n\\end{proof}\n\n\n\n\n\n\n\\subsection{Proof of Lemma \\ref{lem:pop_curv_nonsmooth}}\n\\begin{proof}\nFrom the definition of $\\mathbb{M}(\\theta)$ it is immediate that $\\mathbb{M}(\\theta_0) = \\mathbb{E}[{\\epsilon}^2] = \\sigma^2$. 
For any general $\\theta$: \n\\begin{align*}\n \\mathbb{M}(\\theta) & = \\mathbb{E}\\left[\\left(Y - X^{\\top}\\left(\\beta + \\delta\\mathds{1}_{Q^{\\top}\\psi > 0}\\right)\\right)^2\\right] \\\\\n & = \\sigma^2 + \\mathbb{E}\\left[\\left( X^{\\top}\\left(\\beta + \\delta\\mathds{1}_{Q^{\\top}\\psi > 0} - \\beta_0 - \\delta_0\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right)\\right)^2\\right] \\\\\n & \\ge \\sigma^2 + c_- \\mathbb{E}_Q\\left[\\left\\|\\beta - \\beta_0 + \\delta\\mathds{1}_{Q^{\\top}\\psi > 0}- \\delta_0\\mathds{1}_{Q^{\\top}\\psi_0 > 0} \\right\\|^2\\right]\n\\end{align*}\nThis immediately implies: \n$$\n\\mathbb{M}(\\theta) - \\mathbb{M}(\\theta_0) \\ge c_- \\mathbb{E}\\left[\\left\\|\\beta - \\beta_0 + \\delta\\mathds{1}_{Q^{\\top}\\psi > 0}- \\delta_0\\mathds{1}_{Q^{\\top}\\psi_0 > 0} \\right\\|^2\\right] \\,.\n$$\n\n\\noindent\nFor notational simplicity, define $p_{\\psi} = \\mathbb{P}(Q^{\\top}\\psi > 0)$. Expanding the RHS we have: \n\\begin{align}\n & \\mathbb{E}\\left[\\left\\|\\beta - \\beta_0 + \\delta\\mathds{1}_{Q^{\\top}\\psi > 0}- \\delta_0\\mathds{1}_{Q^{\\top}\\psi_0 > 0} \\right\\|^2\\right] \\notag \\\\\n & = \\|\\beta - \\beta_0\\|^2 + 2(\\beta - \\beta_0)^{\\top}\\mathbb{E}\\left[\\delta\\mathds{1}_{Q^{\\top}\\psi > 0}- \\delta_0\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right] + \\mathbb{E}\\left[\\left\\|\\delta\\mathds{1}_{Q^{\\top}\\psi > 0}- \\delta_0\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right\\|^2\\right] \\notag \\\\\n & = \\|\\beta - \\beta_0\\|^2 + 2(\\beta - \\beta_0)^{\\top}\\mathbb{E}\\left[\\delta\\mathds{1}_{Q^{\\top}\\psi > 0}-\\delta\\mathds{1}_{Q^{\\top}\\psi_0 > 0} + \\delta\\mathds{1}_{Q^{\\top}\\psi_0 > 0} - \\delta_0\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right] \\notag \\\\\n & \\qquad \\qquad \\qquad \\qquad \\qquad+ \\mathbb{E}\\left[\\left\\|\\delta\\mathds{1}_{Q^{\\top}\\psi > 0}-\\delta\\mathds{1}_{Q^{\\top}\\psi_0 > 0} + \\delta\\mathds{1}_{Q^{\\top}\\psi_0 > 0} - \\delta_0\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right\\|^2\\right] \\notag \\\\\n & = \\|\\beta - \\beta_0\\|^2 + 2(\\beta - \\beta_0)^{\\top}(\\delta - \\delta_0)p_{\\psi_0} + \\|\\delta - \\delta_0\\|^2 p_{\\psi_0} \\notag \\\\\n & \\qquad \\qquad \\qquad + 2(\\beta - \\beta_0)^{\\top}\\delta\\left(p_{\\psi} - p_{\\psi_0}\\right) + \\|\\delta\\|^2 \\mathbb{P}\\left(\\text{sign}(Q^{\\top}\\psi) \\neq \\text{sign}(Q^{\\top}\\psi_0)\\right) \\notag \\\\\n \\label{eq:nsb1} & \\qquad \\qquad \\qquad \\qquad \\qquad - 2\\delta^{\\top}(\\delta - \\delta_0)\\mathbb{P}\\left(Q^{\\top}\\psi_0 > 0, Q^{\\top}\\psi < 0\\right) \n \\end{align}\nUsing the fact that $2ab \\ge -(a^2\/c) - cb^2$ for any constant $c > 0$, we have: \n\\begin{align*}\n& \\|\\beta - \\beta_0\\|^2 + 2(\\beta - \\beta_0)^{\\top}(\\delta - \\delta_0)p_{\\psi_0} + \\|\\delta - \\delta_0\\|^2 p_{\\psi_0} \\\\\n& \\ge \\|\\beta - \\beta_0\\|^2 + \\|\\delta - \\delta_0\\|^2 p_{\\psi_0} - \\frac{\\|\\beta - \\beta_0\\|^2 p_{\\psi_0}}{c} - c \\|\\delta - \\delta_0\\|^2 p_{\\psi_0} \\\\\n& = \\|\\beta - \\beta_0\\|^2\\left(1 - \\frac{p_{\\psi_0}}{c}\\right) + \\|\\delta - \\delta_0\\|^2 p_{\\psi_0} (1 - c) \\,,\n\\end{align*}\nfor any $c > 0$. 
To make the RHS non-negative we pick $p_{\\psi_0} < c < 1$ and concludes that: \n\\begin{equation}\n\\label{eq:nsb2}\n \\|\\beta - \\beta_0\\|^2 + 2(\\beta - \\beta_0)^{\\top}(\\delta - \\delta_0)p_{\\psi_0} + \\|\\delta - \\delta_0\\|^2 p_{\\psi_0} \\gtrsim \\left( \\|\\beta - \\beta_0\\|^2 + \\|\\delta - \\delta_0\\|^2\\right) \\,.\n\\end{equation}\nFor the last 3 summands of RHS of equation \\eqref{eq:nsb1}: \n\\begin{align}\n& 2(\\beta - \\beta_0)^{\\top}\\delta\\left(p_{\\psi} - p_{\\psi_0}\\right) + \\|\\delta\\|^2 \\mathbb{P}\\left(\\text{sign}(Q^{\\top}\\psi) \\neq \\text{sign}(Q^{\\top}\\psi_0)\\right) \\notag \\\\\n & \\qquad \\qquad - 2\\delta^{\\top}(\\delta - \\delta_0)\\mathbb{P}\\left(Q^{\\top}\\psi_0 > 0, Q^{\\top}\\psi < 0\\right) \\notag \\\\\n & = 2(\\beta - \\beta_0)^{\\top}\\delta \\mathbb{P}\\left(Q^{\\top}\\psi > 0, Q^{\\top}\\psi_0 < 0\\right) - 2(\\beta - \\beta_0)^{\\top}\\delta \\mathbb{P}\\left(Q^{\\top}\\psi < 0, Q^{\\top}\\psi_0 > 0\\right) \\notag \\\\\n & \\qquad \\qquad + |\\delta\\|^2 \\mathbb{P}\\left(\\text{sign}(Q^{\\top}\\psi) \\neq \\text{sign}(Q^{\\top}\\psi_0)\\right) - 2\\delta^{\\top}(\\delta - \\delta_0)\\mathbb{P}\\left(Q^{\\top}\\psi_0 > 0, Q^{\\top}\\psi < 0\\right) \\notag \\\\\n & = \\left[\\|\\delta\\|^2 - 2(\\beta - \\beta_0)^{\\top}\\delta - 2\\delta^{\\top}(\\delta - \\delta_0)\\right]\\mathbb{P}\\left(Q^{\\top}\\psi_0 > 0, Q^{\\top}\\psi < 0\\right) \\notag \\\\\n & \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad + \\left[\\|\\delta\\|^2 + 2(\\beta - \\beta_0)^{\\top}\\delta\\right]\\mathbb{P}\\left(Q^{\\top}\\psi > 0, Q^{\\top}\\psi_0 < 0\\right) \\notag \\\\\n & = \\left[\\|\\delta_0\\|^2 - 2(\\beta - \\beta_0)^{\\top}(\\delta - \\delta_0) - 2(\\beta - \\beta_0)^{\\top}\\delta_0 - \\|\\delta - \\delta_0\\|^2\\right]\\mathbb{P}\\left(Q^{\\top}\\psi_0 > 0, Q^{\\top}\\psi < 0\\right) \\notag \\\\\n & \\qquad + \\left[\\|\\delta_0\\|^2 + \\|\\delta - \\delta_0\\|^2 + 2(\\delta - \\delta_0)^{\\top}\\delta_0 + 2(\\beta - \\beta_0)^{\\top}(\\delta - \\delta_0) + 2(\\beta - \\beta_0)^{\\top}\\delta_0\\right]\\mathbb{P}\\left(Q^{\\top}\\psi > 0, Q^{\\top}\\psi_0 < 0\\right) \\notag \\\\\n & \\ge \\left[\\|\\delta_0\\|^2 - 2\\|\\beta - \\beta_0\\|\\|\\delta - \\delta_0\\| - 2\\|\\beta - \\beta_0\\|\\|\\delta_0\\| - \\|\\delta - \\delta_0\\|^2\\right]\\mathbb{P}\\left(Q^{\\top}\\psi_0 > 0, Q^{\\top}\\psi < 0\\right) \\notag \\\\\n & \\qquad + \\left[\\|\\delta_0\\|^2 + \\|\\delta - \\delta_0\\|^2 + 2\\|\\delta - \\delta_0\\|\\|\\delta_0\\| + 2\\|\\beta - \\beta_0\\|\\|\\delta - \\delta_0\\| + 2\\|\\beta - \\beta_0\\|\\|\\delta_0\\|\\right]\\mathbb{P}\\left(Q^{\\top}\\psi > 0, Q^{\\top}\\psi_0 < 0\\right) \\notag \\\\\n\\label{eq:nsb3} & \\gtrsim \\|\\delta_0\\|^2 \\mathbb{P}\\left(\\text{sign}(Q^{\\top}\\psi) \\neq \\text{sign}(Q^{\\top}\\psi_0)\\right) \\gtrsim \\|\\psi - \\psi_0\\| \\hspace{0.2in} [\\text{By Assumption }\\ref{eq:assm}]\\,.\n\\end{align}\nCombining equation \\eqref{eq:nsb2} and \\eqref{eq:nsb3} we complete the proof of lower bound. 
The upper bound is relatively easier: note that by our previous calculation: \n\\begin{align*}\n \\mathbb{M}(\\theta) - \\mathbb{M}(\\theta_0) & = \\mathbb{E}\\left[\\left( X^{\\top}\\left(\\beta + \\delta\\mathds{1}_{Q^{\\top}\\psi > 0} - \\beta_0 - \\delta_0\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right)\\right)^2\\right] \\\\\n & \\le c_+\\mathbb{E}\\left[\\left\\|\\beta - \\beta_0 + \\delta\\mathds{1}_{Q^{\\top}\\psi > 0}- \\delta_0\\mathds{1}_{Q^{\\top}\\psi_0 > 0} \\right\\|^2\\right] \\\\\n & = c_+\\mathbb{E}\\left[\\left\\|\\beta - \\beta_0 + \\delta\\mathds{1}_{Q^{\\top}\\psi > 0}- \\delta\\mathds{1}_{Q^{\\top}\\psi_0 > 0} + \\delta\\mathds{1}_{Q^{\\top}\\psi_0 > 0} - \\delta_0\\mathds{1}_{Q^{\\top}\\psi_0 > 0} \\right\\|^2\\right] \\\\\n & \\lesssim \\left[\\|\\beta - \\beta_0\\|^2 + \\|\\delta - \\delta_0\\|^2 + \\mathbb{P}\\left(\\text{sign}(Q^{\\top}\\psi) \\neq \\text{sign}(Q^{\\top}\\psi_0)\\right)\\right] \\\\\n & \\lesssim \\left[\\|\\beta - \\beta_0\\|^2 + \\|\\delta - \\delta_0\\|^2 + \\|\\psi - \\psi_0\\|\\right] \\,.\n\\end{align*}\nThis completes the entire proof.\n\\end{proof}\n\n\n\n\n\\subsection{Proof of Lemma \\ref{lem:uniform_smooth}}\n\\begin{proof}\nThe difference of the two losses: \n\\begin{align*}\n\\left|\\mathbb{M}^s(\\theta) - \\mathbb{M}(\\theta)\\right| & = \\left|\\mathbb{E}\\left[\\left\\{-2\\left(Y_i - X^{\\top}\\beta\\right)X^{\\top}\\delta + (X^{\\top}\\delta)^2\\right\\}\\left(K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right) - \\mathds{1}_{Q^{\\top}\\psi > 0}\\right)\\right]\\right| \\\\\n& \\le \\mathbb{E}\\left[\\left|-2\\left(Y_i - X^{\\top}\\beta\\right)X^{\\top}\\delta + (X^{\\top}\\delta)^2\\right|\\left|K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right) - \\mathds{1}_{Q^{\\top}\\psi > 0}\\right|\\right] \\\\\n& := \\mathbb{E}\\left[m(Q)\\left|K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right) - \\mathds{1}_{Q^{\\top}\\psi > 0}\\right|\\right] \n\\end{align*}\nwhere $m(Q) = \\mathbb{E}\\left[\\left|-2\\left(Y_i - X^{\\top}\\beta\\right)X^{\\top}\\delta + (X^{\\top}\\delta)^2\\right| \\mid Q\\right]$. This function can be bounded as follows: \n\\begin{align*}\nm(Q) & = \\mathbb{E}\\left[\\left|-2\\left(Y_i - X^{\\top}\\beta\\right)X^{\\top}\\delta + (X^{\\top}\\delta)^2\\right| \\mid Q\\right] \\\\\n& \\le \\mathbb{E}[ (X^{\\top}\\delta)^2 \\mid Q] + 2\\mathbb{E}\\left[\\left|(\\beta - \\beta_0)^{\\top}XX^{\\top}\\delta\\right|\\right] + 2\\mathbb{E}\\left[\\left|\\delta_0^{\\top}XX^{\\top}\\delta\\right|\\right] \\\\\n& \\le c_+\\left(\\|\\delta\\|^2 + 2\\|\\beta - \\beta_0\\|\\|\\delta\\| + 2\\|\\delta\\|\\|\\delta_0\\|\\right) \\lesssim 1 \\,,\n\\end{align*}\nas our parameter space is compact. For the rest of the calculation define $\\eta = (\\tilde \\psi - \\tilde \\psi_0)\/\\sigma_n$. The definition of $\\eta$ may be changed from proof to proof, but it will be clear from the context. 
Therefore we have: \n\\begin{align*}\n\\left|\\mathbb{M}^s(\\theta) - \\mathbb{M}(\\theta)\\right| & \\lesssim \\mathbb{E}\\left[\\left|K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right) - \\mathds{1}_{Q^{\\top}\\psi > 0}\\right|\\right] \\\\\n& = \\mathbb{E}\\left[\\left| \\mathds{1}\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n} + \\eta^{\\top}\\tilde{Q} \\ge 0\\right) - K\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n} + \\eta^{\\top}\\tilde{Q}\\right)\\right|\\right] \\\\\n& = \\sigma_n \\int_{\\mathbb{R}^{p-1}} \\int_{-\\infty}^{\\infty} \\left | \\mathds{1}\\left(t \\ge 0\\right) - K\\left(t \\right)\\right | f_0(\\sigma_n (t-\\eta^{\\top}\\tilde{q}) | \\tilde{q}) \\ dt \\ dP(\\tilde{q}) \\\\\n& \\le f_+ \\sigma_n \\int_{-\\infty}^{\\infty} \\left | \\mathds{1}\\left(t \\ge 0\\right) - K\\left(t \\right)\\right | \\ dt \\lesssim \\sigma_n \\,.\n\\end{align*}\nwhere the integral over $t$ is finite follows from the definition of the kernel. This completes the proof. \n\\end{proof}\n\n\n\n\n\n\n\n\n\\subsection{Proof of Lemma \\ref{lem:pop_smooth_curvarture}}\n\\begin{proof}\nFirst note that we can write: \n\\begin{align}\n & \\mathbb{M}^s(\\theta) - \\mathbb{M}^s(\\theta_0^s) \\notag \\\\\n & = \\underbrace{\\mathbb{M}^s(\\theta) - \\mathbb{M}(\\theta)}_{\\ge -K_1\\sigma_n} + \\underbrace{\\mathbb{M}(\\theta) - \\mathbb{M}(\\theta_0)}_{\\underbrace{\\ge u_- d^2(\\theta, \\theta_0)}_{\\ge \\frac{u_-}{2} d^2(\\theta, \\theta_0^s) - u_-\\sigma_n }} + \\underbrace{\\mathbb{M}(\\theta_0) - \\mathbb{M}(\\theta_0^s)}_{\\ge - u_+ d^2(\\theta_0, \\theta_0^s) \\ge - u_+\\sigma_n} + \\underbrace{\\mathbb{M}(\\theta_0^s) - \\mathbb{M}^s(\\theta_0^s)}_{\\ge - K_1 \\sigma_n} \\notag \\\\\n & \\ge \\frac{u_-}{2}d^2(\\theta, \\theta_0^s) - (2K_1 + \\xi)\\sigma_n \\notag \\\\\n & \\ge \\frac{u_-}{2}\\left[\\|\\beta - \\beta^s_0\\|^2 + \\|\\delta - \\delta^s_0\\|^2 + \\|\\psi - \\psi^s_0\\|\\right] - (2K_1 + \\xi)\\sigma_n \\notag \\\\ \n & \\ge \\left[\\frac{u_-}{2}\\left(\\|\\beta - \\beta^s_0\\|^2 + \\|\\delta - \\delta^s_0\\|^2\\right) + \\frac{u_-}{4}\\|\\psi - \\psi^s_0\\|\\right]\\mathds{1}_{\\|\\psi - \\psi^s_0\\| > \\frac{4(2K_1 + \\xi)}{u_-}\\sigma_n} \\notag \\\\\n \\label{eq:lower_curv_smooth} & \\gtrsim \\left[\\|\\beta - \\beta^s_0\\|^2 + \\|\\delta - \\delta^s_0\\|^2 + \\|\\psi - \\psi^s_0\\|\\right]\\mathds{1}_{\\|\\psi - \\psi^s_0\\| > \\frac{4(2K_1 + \\xi)}{u_-}\\sigma_n}\n\\end{align}\nwhere $\\xi$ can be taken as close to $0$ as possible. Henceforth we set $\\mathcal{K} = 4(2K_1 + \\xi)\/u_-$. For the other part of the curvature (i.e. when $\\|\\psi - \\psi_0^s\\| \\le \\mathcal{K} \\sigma_n$) we start with a two step Taylor expansion of the smoothed loss function: \n\\begin{align*}\n \\mathbb{M}^s(\\theta) - \\mathbb{M}^s(\\theta_0^s) = \\frac12 (\\theta_0 - \\theta^0_s)^{\\top}\\nabla^2 \\mathbb{M}^s(\\theta^*)(\\theta_0 - \\theta^0_s) \n\\end{align*}\nRecall the definition of $\\mathbb{M}^s(\\theta)$: \n$$\n\\mathbb{M}^s_n(\\theta) = \\mathbb{E}\\left(Y - X^{\\top}\\beta\\right)^2 + \\mathbb{E} \\left\\{\\left[-2\\left(Y_i - X_i^{\\top}\\beta\\right)X_i^{\\top}\\delta + (X_i^{\\top}\\delta)^2\\right] K\\left(\\frac{Q_i^{\\top}\\psi}{\\sigma_n}\\right)\\right\\}\n$$\nThe partial derivates of $\\mathbb{M}^s(\\theta)$ with respect to $(\\beta, \\delta, \\psi)$ was derived in equation \\eqref{eq:beta_grad} - \\eqref{eq:psi_grad}. 
From there, we calculate the hessian of $\\mathbb{M}^s(\\theta)$: \n\\begin{align*}\n \\nabla_{\\beta\\beta}\\mathbb{M}^s(\\theta) & = 2\\Sigma_X \\\\\n \\nabla_{\\delta\\delta}\\mathbb{M}^s(\\theta) & = 2 \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q_i^{\\top}\\psi}{\\sigma_n}\\right)\\right] = 2 \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\right] \\\\\n \\nabla_{\\psi\\psi} \\mathbb{M}^s(\\theta) & = \\frac{1}{\\sigma_n^2}\\mathbb{E} \\left\\{\\left[-2\\left(Y_i - X_i^{\\top}\\beta\\right)X_i^{\\top}\\delta + (X_i^{\\top}\\delta)^2\\right]\\tilde Q_i\\tilde Q_i^{\\top} K''\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\right\\} \\\\\n \\nabla_{\\beta \\delta}\\mathbb{M}^s(\\theta) & = 2 \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q_i^{\\top}\\psi}{\\sigma_n}\\right)\\right] = 2 \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\right] \\\\\n \\nabla_{\\beta \\psi}\\mathbb{M}^s(\\theta) & = \\frac{2}{\\sigma_n}\\mathbb{E}\\left(g(Q)\\delta\\tilde Q^{\\top}K'\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\right) \\\\\n \\nabla_{\\delta \\psi} \\mathbb{M}^s(\\theta) & = \\frac{2}{\\sigma_n}\\mathbb{E} \\left\\{\\left[-X_i\\left(Y_i - X_i^{\\top}\\beta\\right) + X_iX_i^{\\top}\\delta\\right]\\tilde Q_i^{\\top} K'\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\right\\} \\,.\n\\end{align*}\nwhere we use $\\tilde \\eta$ for a generic notation for $(\\tilde \\psi - \\tilde \\psi_0)\/\\sigma_n$. For notational simplicity, we define $\\gamma = (\\beta, \\delta)$ and $\\nabla^2\\mathbb{M}^{s, \\gamma}(\\theta)$, $\\nabla^2\\mathbb{M}^{s, \\gamma \\psi}(\\theta), \\nabla^2\\mathbb{M}^{s, \\psi \\psi}(\\theta)$ to be corresponding blocks of the hessian matrix. We have: \n\\begin{align}\n \\mathbb{M}^s(\\theta) - \\mathbb{M}^s(\\theta_0^s) & = \\frac12 (\\theta - \\theta^0_s)^{\\top}\\nabla^2 \\mathbb{M}^s(\\theta^*)(\\theta - \\theta^0_s) \\notag \\\\\n & = \\frac12 (\\gamma - \\gamma_0^s)^{\\top}\\nabla^2 \\mathbb{M}^{s, \\gamma}(\\theta^*)(\\gamma - \\gamma^0_s) + (\\gamma - \\gamma_0^s)^{\\top}\\nabla^2 \\mathbb{M}^{s, \\gamma \\psi}(\\theta^*)(\\psi - \\psi^0_s) \\notag \\\\\n & \\qquad \\qquad \\qquad \\qquad + \\frac12(\\psi - \\psi_0^s)^{\\top}\\nabla^2 \\mathbb{M}^{s, \\psi \\psi}(\\theta^*)(\\psi - \\psi^0_s) \\notag \\\\\n \\label{eq:hessian_1} & := \\frac12 \\left(T_1 + 2T_2 + T_3\\right)\n\\end{align}\nNote that we can write: \n\\begin{align*}\n T_1 & = (\\gamma - \\gamma_0^s)^{\\top}\\nabla^2 \\mathbb{M}^{s, \\gamma}(\\tilde \\theta)(\\gamma - \\gamma^0_s) \\\\\n & = (\\gamma - \\gamma_0^s)^{\\top}\\nabla^2 \\mathbb{M}^{s, \\gamma}(\\theta_0)(\\gamma - \\gamma^0_s) + (\\gamma - \\gamma_0^s)^{\\top}\\left[\\nabla^2 \\mathbb{M}^{s, \\gamma}(\\tilde \\theta) - \\nabla^2 \\mathbb{M}^{s, \\gamma}(\\theta_0)\\right](\\gamma - \\gamma^0_s) \n\\end{align*}\nThe operator norm of the difference of two hessians can be bounded as: \n$$\n\\left\\|\\nabla^2 \\mathbb{M}^{s, \\gamma}(\\theta^*) - \\nabla^2 \\mathbb{M}^{s, \\gamma}(\\theta_0)\\right\\|_{op} = O(\\sigma_n) \\,.\n$$\nfor any $\\theta^*$ in a neighborhood of $\\theta_0^s$ with $\\|\\psi - \\psi_0^s\\| \\le \\mathcal{K} \\sigma_n$. 
To prove this note that for any $\\theta$: \n$$\n\\nabla^2 \\mathbb{M}^{s, \\gamma}(\\theta^*) - \\nabla^2 \\mathbb{M}^{s, \\gamma}(\\theta_0) = 2\\begin{pmatrix}0 & A \\\\\nA & A\\end{pmatrix} = \\begin{pmatrix}0 & 1 \\\\ 1 & 1\\end{pmatrix} \\otimes A \n$$\nwhere: \n$$\nA = \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q_i^{\\top}\\psi}{\\sigma_n}\\right)\\right] - \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n}\\right)\\right]\n$$\nTherefore it is enough to show $\\|A\\|_{op} = O(\\sigma_n)$. Towards that direction: \n\\begin{align*}\nA & = \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q_i^{\\top}\\psi}{\\sigma_n}\\right)\\right] - \\mathbb{E}\\left[g(Q)K\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n}\\right)\\right] \\\\\n& = \\sigma_n \\int \\int g(\\sigma_n t - \\tilde q^{\\top}\\tilde \\psi_0)\\left(K(t + \\tilde q^{\\top}\\eta) - K(t) \\right) f_0(\\sigma_n t \\mid \\tilde q) \\ f(\\tilde q) \\ dt \\ d\\tilde q \\\\\n& = \\sigma_n \\left[\\int \\int g(- \\tilde q^{\\top}\\tilde \\psi_0)\\left(K(t + \\tilde q^{\\top}\\eta) - K(t) \\right) f_0(0 \\mid \\tilde q) \\ f(\\tilde q) \\ dt \\ d\\tilde q + R \\right] \\\\\n& = \\sigma_n \\left[\\int \\int g(- \\tilde q^{\\top}\\tilde \\psi_0)f_0(0 \\mid \\tilde q) \\int_t^{t + \\tilde q^{\\top}\\eta}K'(s) \\ ds \\ f(\\tilde q) \\ dt \\ d\\tilde q + R \\right] \\\\\n& = \\sigma_n \\left[\\int g(- \\tilde q^{\\top}\\tilde \\psi_0)f_0(0 \\mid \\tilde q) \\int_{-\\infty}^{\\infty}K'(s) \\int_{s-\\tilde q^{\\top}\\eta}^s \\ dt \\ ds \\ f(\\tilde q)\\ d\\tilde q + R \\right] \\\\\n& = \\sigma_n \\left[\\int g(- \\tilde q^{\\top}\\tilde \\psi_0)f_0(0 \\mid \\tilde q)\\tilde q^{\\top}\\eta \\ f(\\tilde q)\\ d\\tilde q + R \\right] \\\\\n& = \\sigma_n \\left[\\mathbb{E}\\left[g(- \\tilde Q^{\\top}\\tilde \\psi_0, \\tilde Q)f_0(0 \\mid \\tilde Q)\\tilde Q^{\\top}\\eta\\right] + R \\right]\n\\end{align*}\nusing the fact that $\\left\\|\\mathbb{E}\\left[g(- \\tilde Q^{\\top}\\tilde \\psi_0, \\tilde Q)f_0(0 \\mid \\tilde Q)\\tilde Q^{\\top}\\eta\\right]\\right\\|_{op} = O(1)$ and $\\|R\\|_{op} = O(\\sigma_n)$ we conclude the claim. From the above claim we conclude: \n\\begin{equation}\n \\label{eq:hessian_gamma}\n T_1 = (\\gamma - \\gamma_0^s)^{\\top}\\nabla^2 \\mathbb{M}^{s, \\gamma}(\\theta^*)(\\gamma - \\gamma^s_0) \\ge \\|\\gamma - \\gamma^s_0\\|^2(1 - O(\\sigma_n)) \\ge \\frac12 \\|\\gamma - \\gamma_0^s\\|^2\n\\end{equation}\nfor all large $n$. \n\\\\\\\\\n\\noindent \nWe next deal with the cross term $T_2$ in equation \\eqref{eq:hessian_1}. 
Towards that end first note that: \n\\begin{align*}\n & \\frac{1}{\\sigma_n}\\mathbb{E}\\left((g(Q)\\delta)\\tilde Q^{\\top}K'\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\eta^*\\right)\\right) \\\\\n & = \\int_{\\mathbb{R}^{(p-1)}}\\left[ \\int_{-\\infty}^{\\infty} \\left(g\\left(\\sigma_nt - \\tilde q^{\\top}\\tilde \\psi_0, \\tilde q\\right)\\delta\\right) K'\\left(t + \\tilde q^{\\top}\\eta^*\\right) f_0(\\sigma_n t \\mid \\tilde q) \\ dt\\right] \\tilde q^{\\top} \\ f(\\tilde q) \\ d\\tilde q \\\\\n & = \\int_{\\mathbb{R}^{(p-1)}}\\left[ \\int_{-\\infty}^{\\infty} \\left(g\\left(- \\tilde q^{\\top}\\tilde \\psi_0, \\tilde q\\right)\\delta\\right) K'\\left(t + \\tilde q^{\\top}\\eta^*\\right) f_0(0 \\mid \\tilde q) \\ dt\\right] \\tilde q^{\\top} \\ f(\\tilde q) \\ d\\tilde q + R_1\\\\\n & = \\mathbb{E}\\left[\\left(g\\left( - \\tilde Q^{\\top}\\tilde \\psi_0, \\tilde Q\\right)\\delta\\right)\\tilde Q^{\\top}f_0(0 \\mid \\tilde Q)\\right] + R_1\n\\end{align*}\nwhere the remainder term $R_1$ can be further decomposed $R_1 = R_{11} + R_{12} + R_{13}$ with: \n\\begin{align*}\n \\left\\|R_{11}\\right\\| & = \\left\\|\\int_{\\mathbb{R}^{(p-1)}}\\left[ \\int_{-\\infty}^{\\infty} \\left(g\\left(- \\tilde q^{\\top}\\tilde \\psi_0, \\tilde q\\right)\\delta\\right) K'\\left(t + \\tilde q^{\\top}\\eta^*\\right) (f_0(\\sigma_nt\\mid \\tilde q) - f_0(0 \\mid \\tilde q)) \\ dt\\right] \\tilde q^{\\top} \\ f(\\tilde q) \\ d\\tilde q\\right\\| \\\\\n & \\le \\left\\|\\int_{\\mathbb{R}^{(p-1)}}\\left[ \\int_{-\\infty}^{\\infty} \\left\\|g\\left(- \\tilde q^{\\top}\\tilde \\psi_0, \\tilde q\\right)\\right\\|_{op}\\|\\delta\\| \\left|K'\\left(t + \\tilde q^{\\top}\\eta^*\\right)\\right| \\left|f_0(\\sigma_nt\\mid \\tilde q) - f_0(0 \\mid \\tilde q)\\right| \\ dt\\right] \\left|\\tilde q\\right| \\ f(\\tilde q) \\ d\\tilde q\\right\\| \\\\\n & \\le \\sigma_n \\dot{f}^+ c_+ \\|\\delta\\| \\int_{\\mathbb{R}^{(p-1)}} \\|\\tilde q\\| \\int_{-\\infty}^{\\infty} |t| \\left|K'\\left(t + \\tilde q^{\\top}\\eta^*\\right)\\right| \\ dt \\ f(\\tilde q) \\ d\\tilde q \\\\\n & \\le \\sigma_n \\dot{f}^+ c_+ \\|\\delta\\| \\int_{\\mathbb{R}^{(p-1)}} \\|\\tilde q\\| \\int_{-\\infty}^{\\infty} |t - \\tilde q^{\\top}\\eta^*| \\left|K'\\left(t\\right)\\right| \\ dt \\ f(\\tilde q) \\ d\\tilde q \\\\\n & \\le \\sigma_n \\dot{f}^+ c_+ \\|\\delta\\| \\left[\\int_{\\mathbb{R}^{(p-1)}} \\|\\tilde q\\| \\int_{-\\infty}^{\\infty} |t| \\left|K'\\left(t\\right)\\right| \\ dt \\ f(\\tilde q) \\ d\\tilde q \\right. \\\\\n & \\qquad \\qquad \\qquad \\left. 
+ \\int_{\\mathbb{R}^{(p-1)}} \\|\\tilde q\\|^2 \\|\\eta^*\\| \\int_{-\\infty}^{\\infty} |K'(t)| \\ dt \\ f(\\tilde q) \\ d\\tilde q\\right] \\\\\n & \\le \\sigma_n \\dot{f}^+ c_+ \\|\\delta\\| \\left[\\int_{\\mathbb{R}^{(p-1)}} \\|\\tilde q\\| \\int_{-\\infty}^{\\infty} |t| \\left|K'\\left(t\\right)\\right| \\ dt \\ f(\\tilde q) \\ d\\tilde q + \\mathcal{K}\\int_{\\mathbb{R}^{(p-1)}} \\|\\tilde q\\|^2 \\int_{-\\infty}^{\\infty} |K'(t)| \\ dt \\ f(\\tilde q) \\ d\\tilde q\\right] \\\\\n & \\lesssim \\sigma_n \\,.\n\\end{align*}\nwhere the last bound follows from our assumptions using the fact that: \n\\begin{align*}\n & \\|R_{12}\\| \\\\\n &= \\left\\|\\int_{\\mathbb{R}^{(p-1)}}\\left[ \\int_{-\\infty}^{\\infty} \\left(\\left(g\\left(\\sigma_n t- \\tilde q^{\\top}\\tilde \\psi_0, \\tilde q\\right) - g\\left(- \\tilde q^{\\top}\\tilde \\psi_0, \\tilde q\\right)\\right)\\delta\\right) K'\\left(t + \\tilde q^{\\top} \\eta^*\\right) f_0(0 \\mid \\tilde q) \\ dt\\right] \\tilde q^{\\top} \\ f(\\tilde q) \\ d\\tilde q\\right\\| \\\\\n & \\le \\int \\|\\tilde q\\|\\|\\delta\\|f_0(0 \\mid \\tilde q) \\int_{-\\infty}^{\\infty} \\left\\|g\\left(\\sigma_n t- \\tilde q^{\\top}\\tilde \\psi_0, \\tilde q\\right) - g\\left(- \\tilde q^{\\top}\\tilde \\psi_0, \\tilde q\\right) \\right\\|_{op}\\left|K'\\left(t + \\tilde q^{\\top} \\eta^*\\right)\\right| \\ dt \\ f(\\tilde q) \\ d\\tilde q \\\\\n & \\le \\dot{c}_+ \\sigma_n \\int \\|\\tilde q\\|\\|\\delta\\|f_0(0 \\mid \\tilde q)\\dot \\int_{-\\infty}^{\\infty} |t| \\left|K'\\left(t + \\tilde q^{\\top}\\tilde \\eta\\right)\\right| \\ dt \\ f(\\tilde q) \\ d\\tilde q \\hspace{0.2in} [\\text{Assumption }\\ref{eq:assm}]\\\\\n & \\lesssim \\sigma_n \\,.\n\\end{align*}\nThe other remainder term $R_{13}$ is the higher order term and can be shown to be $O(\\sigma_n^2)$ using same techniques. This implies for all large $n$: \n\\begin{align*}\n \\left\\|\\nabla_{\\beta \\psi}\\mathbb{M}^s(\\theta)\\right\\|_{op} & = O(1) \\,.\n\\end{align*}\nand similar calculation yields $ \\left\\|\\nabla_{\\delta \\psi}\\mathbb{M}^s(\\theta)\\right\\|_{op} = O(1)$. 
Using this we have: \n\\begin{align}\n T_2 & = (\\gamma - \\gamma_0^s)^{\\top}\\nabla^2 \\mathbb{M}^{s, \\gamma \\psi}(\\tilde \\theta)(\\psi - \\psi^0_s) \\notag \\\\\n & = (\\beta - \\beta_0^s)^{\\top}\\nabla_{\\beta \\psi}^2 \\mathbb{M}^{s}(\\tilde \\theta)(\\psi - \\psi^0_s) + (\\delta - \\delta_0^s)^{\\top}\\nabla_{\\delta \\psi}^2 \\mathbb{M}^{s}(\\tilde \\theta)(\\psi - \\psi^0_s) \\notag \\\\\n & \\ge - C\\left[\\|\\beta - \\beta_0^s\\| + \\|\\delta - \\delta_0^s\\| \\right]\\|\\psi - \\psi^0_s\\| \\notag \\\\\n & \\ge -C \\sqrt{\\sigma_n}\\left[\\|\\beta - \\beta_0^s\\| + \\|\\delta - \\delta_0^s\\| \\right]\\frac{\\|\\psi - \\psi^0_s\\| }{\\sqrt{\\sigma_n}} \\notag \\\\\n \\label{eq:hessian_cross} & \\gtrsim - \\sqrt{\\sigma_n}\\left(\\|\\beta - \\beta_0^s\\|^2 + \\|\\delta - \\delta_0^s\\|^2 +\\frac{\\|\\psi - \\psi^0_s\\|^2 }{\\sigma_n} \\right)\n\\end{align}\nNow for $T_3$ note that: \n\\allowdisplaybreaks\n\\begin{align*}\n& \\sigma_n \\nabla_{\\psi\\psi} \\mathbb{M}^s_n(\\theta) \\\\\n& = \\frac{1}{\\sigma_n}\\mathbb{E} \\left\\{\\left[-2\\left(Y_i - X_i^{\\top}\\beta\\right)X_i^{\\top}\\delta + (X_i^{\\top}\\delta)^2\\right]\\tilde Q_i\\tilde Q_i^{\\top} K''\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\right\\} \\\\\n& = \\frac{1}{\\sigma_n}\\mathbb{E} \\left\\{\\left[-2\\left(Y_i - X_i^{\\top}\\beta\\right)X_i^{\\top}\\delta \\right]\\tilde Q_i\\tilde Q_i^{\\top} K''\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\right\\} \\\\\n& \\qquad \\qquad \\qquad + \\frac{1}{\\sigma_n}\\mathbb{E} \\left\\{(\\delta^{\\top}g(Q) \\delta)\\tilde Q_i\\tilde Q_i^{\\top} K''\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\right\\} \\\\\n& = \\frac{1}{\\sigma_n}\\mathbb{E} \\left\\{\\left[-2 X_i^{\\top}\\left(\\beta_0 -\\beta\\right)X_i^{\\top}\\delta - 2(X_i^{\\top}\\delta_0)(X_i^{\\top}\\delta)\\mathds{1}_{Q_i^{\\top}\\psi_0 > 0}\\right]\\tilde Q_i\\tilde Q_i^{\\top} K''\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\right\\} \\\\\n& \\qquad \\qquad \\qquad + \\frac{1}{\\sigma_n}\\mathbb{E} \\left\\{(\\delta^{\\top}g(Q) \\delta)\\tilde Q_i\\tilde Q_i^{\\top} K''\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\right\\} \\\\\n& = \\frac{-2}{\\sigma_n}\\mathbb{E} \\left\\{((\\beta_0 - \\beta)^{\\top}g(Q) \\delta)\\tilde Q_i\\tilde Q_i^{\\top} K''\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\right\\} \\\\\n& \\qquad \\qquad \\qquad + \\frac{-2}{\\sigma_n}\\mathbb{E} \\left\\{(\\delta_0^{\\top}g(Q) \\delta)\\tilde Q_i\\tilde Q_i^{\\top} K''\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\mathds{1}_{Q_i^{\\top}\\psi_0 > 0}\\right\\} \\\\\n& \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad + \\frac{1}{\\sigma_n}\\mathbb{E} \\left\\{(\\delta^{\\top}g(Q) \\delta)\\tilde Q_i\\tilde Q_i^{\\top} K''\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\right\\} \\\\\n& = \\underbrace{\\frac{-2}{\\sigma_n}\\mathbb{E} \\left\\{((\\beta_0 - \\beta)^{\\top}g(Q)\\delta)\\tilde Q_i\\tilde Q_i^{\\top} K''\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\right\\}}_{M_1} \\\\\n& \\qquad \\qquad \\qquad + \\underbrace{\\frac{-2}{\\sigma_n}\\mathbb{E} \\left\\{(\\delta_0^{\\top}g(Q) \\delta_0)\\tilde Q_i\\tilde Q_i^{\\top} K''\\left(\\frac{Q_i^{\\top}\\psi_0 
}{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\mathds{1}_{Q_i^{\\top}\\psi_0 > 0}\\right\\}}_{M_2} \\\\\n& \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad +\n\\underbrace{\\frac{-2}{\\sigma_n}\\mathbb{E} \\left\\{(\\delta_0^{\\top} g(Q) (\\delta - \\delta_0))\\tilde Q_i\\tilde Q_i^{\\top} K''\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\mathds{1}_{Q_i^{\\top}\\psi_0 > 0}\\right\\}}_{M_3} \\\\\n& \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad + \\underbrace{\\frac{1}{\\sigma_n}\\mathbb{E} \\left\\{(\\delta^{\\top}g(Q) \\delta)\\tilde Q_i\\tilde Q_i^{\\top} K''\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\right\\}}_{M_4} \\\\\n& := M_1 + M_2 + M_3 + M_4\n\\end{align*}\nWe next show that $M_1$ and $M_4$ are $O(\\sigma_n)$. Towards that end note that for any two vectors $v_1, v_2$: \n\\begin{align*}\n & \\frac{1}{\\sigma_n}\\mathbb{E} \\left\\{(v_1^{\\top}g(Q)v_2)\\tilde Q_i\\tilde Q_i^{\\top} K''\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\right\\} \\\\\n & = \\int \\tilde q \\tilde q^{\\top} \\int_{-\\infty}^{\\infty}(v_1^{\\top}g(\\sigma_nt - \\tilde q^{\\top}\\tilde \\eta, \\tilde q)v_2) K''(t + \\tilde q^{\\top}\\tilde \\eta) f(\\sigma_nt \\mid \\tilde q) \\ dt \\ f(\\tilde q) \\ d\\tilde q \\\\\n & = \\int \\tilde q \\tilde q^{\\top} (v_1^{\\top}g( - \\tilde q^{\\top}\\tilde \\eta, \\tilde q)v_2)f(0 \\mid \\tilde q) f(\\tilde q) \\ d\\tilde q \\cancelto{0}{\\int_{-\\infty}^{\\infty} K''(t) \\ dt} + R = R\n\\end{align*}\nas $\\int K''(t) \\ dt = 0$ follows from our choice of kernel $K(x) = \\Phi(x)$. Similar calculation as in the case of analyzing the remainder of $T_2$ yields $\\|R\\|_{op} = O(\\sigma_n)$.\n\\noindent\nThis immediately implies $\\|M_1\\|_{op} = O(\\sigma_n)$ and $\\|M_4\\|_{op} = O(\\sigma_n)$. Now for $M_2$: \n\\begin{align}\nM_2 & = \\frac{-2}{\\sigma_n}\\mathbb{E} \\left\\{(\\delta_0^{\\top}g(Q) \\delta_0)\\tilde Q_i\\tilde Q_i^{\\top} K''\\left(\\frac{Q_i^{\\top}\\psi_0 }{\\sigma_n} + \\tilde Q^{\\top}\\tilde \\eta\\right)\\mathds{1}_{Q_i^{\\top}\\psi_0 > 0}\\right\\} \\notag \\\\\n& = -2\\int \\int (\\delta_0^{\\top}g(\\sigma_n t - \\tilde q^{\\top}\\tilde \\psi_0) \\delta_0)\\tilde q\\tilde q^{\\top} K''\\left(t + \\tilde q^{\\top}\\eta^*\\right)\\mathds{1}_{t > 0} f_0(\\sigma_n t \\mid \\tilde q) \\ dt f(\\tilde q) \\ d\\tilde q \\notag \\\\\n& = -2\\int (\\delta_0^{\\top}g(- \\tilde q^{\\top}\\tilde \\psi_0) \\delta_0)\\tilde q\\tilde q^{\\top} f_0(0 \\mid \\tilde q) \\int_{-\\infty}^{\\infty} K''\\left(t + \\tilde q^{\\top}\\eta^*\\right)\\mathds{1}_{t > 0} \\ dt f(\\tilde q) \\ d\\tilde q + R \\notag \\\\\n\\label{eq:M_2_double_deriv} & = 2\\mathbb{E}\\left[(\\delta_0^{\\top}g(- \\tilde Q^{\\top}\\tilde \\psi_0) \\delta_0)\\tilde \nQ\\tilde Q^{\\top} f_0(0 \\mid \\tilde Q) K'(\\tilde Q^{\\top}\\eta^*)\\right] + R \n\\end{align}\nwhere the remainder term R is $O_p(\\sigma_n)$ can be established as follows: \n\\begin{align*}\nR & = -2\\left[\\int \\int (\\delta_0^{\\top}g(\\sigma_n t - \\tilde q^{\\top}\\tilde \\psi_0) \\delta_0)\\tilde q\\tilde q^{\\top} K''\\left(t + \\tilde q^{\\top}\\eta^*\\right)\\mathds{1}_{t > 0} f_0(\\sigma_n t \\mid \\tilde q) \\ dt f(\\tilde q) \\ d\\tilde q \\right. \\\\\n& \\qquad \\qquad - \\left. 
\\int (\\delta_0^{\\top}g(- \\tilde q^{\\top}\\tilde \\psi_0) \\delta_0)\\tilde q\\tilde q^{\\top} f_0(0 \\mid \\tilde q) \\int_{-\\infty}^{\\infty} K''\\left(t + \\tilde q^{\\top}\\eta^*\\right)\\mathds{1}_{t > 0} \\ dt f(\\tilde q) \\ d\\tilde q \\right] \\\\\n& = -2\\left\\{\\left[\\int \\int (\\delta_0^{\\top}g(\\sigma_n t - \\tilde q^{\\top}\\tilde \\psi_0) \\delta_0)\\tilde q\\tilde q^{\\top} K''\\left(t + \\tilde q^{\\top}\\eta^*\\right)\\mathds{1}_{t > 0} f_0(\\sigma_n t \\mid \\tilde q) \\ dt f(\\tilde q) \\ d\\tilde q \\right. \\right. \\\\\n& \\qquad \\qquad - \\left. \\left. \\int (\\delta_0^{\\top}g(\\sigma_n t - \\tilde q^{\\top}\\tilde \\psi_0) \\delta_0) \\tilde q\\tilde q^{\\top} f_0(0 \\mid \\tilde q) \\int_{-\\infty}^{\\infty} K''\\left(t + \\tilde q^{\\top}\\eta^*\\right)\\mathds{1}_{t > 0} \\ dt f(\\tilde q) \\ d\\tilde q \\right] \\right. \\\\\n& \\left. + \\left[\\int (\\delta_0^{\\top}g(\\sigma_n t - \\tilde q^{\\top}\\tilde \\psi_0) \\delta_0)\\tilde q\\tilde q^{\\top} f_0(0 \\mid \\tilde q) \\int_{-\\infty}^{\\infty} K''\\left(t + \\tilde q^{\\top}\\eta^*\\right)\\mathds{1}_{t > 0} \\ dt f(\\tilde q) \\ d\\tilde q \\right. \\right. \\\\\n& \\qquad \\qquad \\left. \\left. -\\int (\\delta_0^{\\top}g(- \\tilde q^{\\top}\\tilde \\psi_0) \\delta_0)\\tilde q\\tilde q^{\\top} f_0(0 \\mid \\tilde q) \\int_{-\\infty}^{\\infty} K''\\left(t + \\tilde q^{\\top}\\eta^*\\right)\\mathds{1}_{t > 0} \\ dt f(\\tilde q) \\ d\\tilde q \\right]\\right\\} \\\\\n& = -2(R_1 + R_2) \\,.\n\\end{align*}\nFor $R_1$: \n\\begin{align*}\n\\left\\|R_1\\right\\|_{op} & = \\left\\|\\left[\\int \\int (\\delta_0^{\\top}g(\\sigma_n t - \\tilde q^{\\top}\\tilde \\psi_0) \\delta_0)\\tilde q\\tilde q^{\\top} K''\\left(t + \\tilde q^{\\top}\\eta^*\\right)\\mathds{1}_{t > 0} f_0(\\sigma_n t \\mid \\tilde q) \\ dt f(\\tilde q) \\ d\\tilde q \\right. \\right. \\,.\\\\\n& \\qquad \\qquad - \\left. \\left. \\int (\\delta_0^{\\top}g(\\sigma_n t - \\tilde q^{\\top}\\tilde \\psi_0) \\delta_0)\\tilde q\\tilde q^{\\top} f_0(0 \\mid \\tilde q) \\int_{-\\infty}^{\\infty} K''\\left(t + \\tilde q^{\\top}\\eta^*\\right)\\mathds{1}_{t > 0} \\ dt f(\\tilde q) \\ d\\tilde q \\right] \\right\\|_{op} \\\\\n& \\le c_+ \\int \\int \\|\\tilde q\\|^2 |K''\\left(t + \\tilde q^{\\top}\\eta^*\\right)| |f_0(\\sigma_n t \\mid \\tilde q) -f_0(0\\mid \\tilde q)| \\ dt \\ f(\\tilde q) \\ d\\tilde q \\\\\n& \\le c_+ F_+\\sigma_n \\int \\|\\tilde q\\|^2 \\int |t| |K''\\left(t + \\tilde q^{\\top}\\eta^*\\right)| \\ dt \\ f(\\tilde q) \\ d\\tilde q \\\\\n& = c_+ F_+\\sigma_n \\int \\|\\tilde q \\|^2 \\int |t - \\tilde q^{\\top}\\eta^*| |K''\\left(t\\right)| \\ dt \\ f(\\tilde q) \\ d\\tilde q \\\\\n& \\le c_+ F_+ \\sigma_n \\left[\\mathbb{E}[\\|\\tilde Q\\|^2]\\int |t||K''(t)| \\ dt + \\|\\eta^*\\|\\mathbb{E}[\\|\\tilde Q\\|^3]\\int |K''(t)| \\ dt\\right] = O(\\sigma_n) \\,.\n\\end{align*}\nand similarly for $R_2$: \n\\begin{align*}\n\\|R_2\\|_{op} & = \\left\\|\\left[\\int (\\delta_0^{\\top}g(\\sigma_n t - \\tilde q^{\\top}\\tilde \\psi_0) \\delta_0)\\tilde q\\tilde q^{\\top} f_0(0 \\mid \\tilde q) \\int_{-\\infty}^{\\infty} K''\\left(t + \\tilde q^{\\top}\\eta^*\\right)\\mathds{1}_{t > 0} \\ dt f(\\tilde q) \\ d\\tilde q \\right. \\right. \\\\\n& \\qquad \\qquad \\left. \\left. 
-\\int (\\delta_0^{\\top}g(- \\tilde q^{\\top}\\tilde \\psi_0) \\delta_0)\\tilde q\\tilde q^{\\top} f_0(0 \\mid \\tilde q) \\int_{-\\infty}^{\\infty} K''\\left(t + \\tilde q^{\\top}\\eta^*\\right)\\mathds{1}_{t > 0} \\ dt f(\\tilde q) \\ d\\tilde q \\right]\\right\\|_{op} \\\\\n& \\le F_+ \\|\\delta_0\\|^2 \\int \\left\\|g(\\sigma_n t - \\tilde q^{\\top}\\tilde \\psi_0) - g( - \\tilde q^{\\top}\\tilde \\psi_0) \\right\\|_{op} \\|\\tilde q\\|^2 \\int_{-\\infty}^{\\infty} |K''\\left(t + \\tilde q^{\\top}\\eta^*\\right)| \\ dt \\\\\n& \\le G_+ F_+ \\sigma_n \\int \\|\\tilde q\\|^2 \\int_{-\\infty}^{\\infty} |t||K''\\left(t + \\tilde q^{\\top}\\eta^*\\right)| \\ dt = O(\\sigma_n) \\,.\n\\end{align*}\nTherefore from \\eqref{eq:M_2_double_deriv} we conclude: \n\\begin{equation}\nM_2 = 2\\mathbb{E}\\left[(\\delta_0^{\\top}g(- \\tilde Q^{\\top}\\tilde \\psi_0) \\delta_0)\\tilde \nQ\\tilde Q^{\\top} f_0(0 \\mid \\tilde Q) K'(\\tilde Q^{\\top}\\eta^*)\\right] + O(\\sigma_n) \\,.\n\\end{equation}\nSimilar calculation for $M_3$ yields: \n\\begin{equation*}\nM_3 = 2\\mathbb{E}\\left[(\\delta_0^{\\top}g(- \\tilde Q^{\\top}\\tilde \\psi_0)(\\delta - \\delta_0))\\tilde \nQ\\tilde Q^{\\top} f_0(0 \\mid \\tilde Q) K'(\\tilde Q^{\\top}\\eta^*)\\right] + O(\\sigma_n) \\,.\n\\end{equation*}\ni.e. \n\\begin{equation}\n\\|M_3\\|_{op} \\le c_+ \\mathbb{E}\\left[\\|\\tilde Q\\|^2f_0(0 \\mid \\tilde Q) K'(\\tilde Q^{\\top}\\eta^*)\\right]\\|\\delta_0\\| \\|\\delta - \\delta_0\\| \\,.\n\\end{equation}\nNow we claim that for any $\\mathcal{K} < \\infty$, $\\lambda_{\\min} (M_2) > 0$ for all $\\|\\eta^*\\| \\le \\mathcal{K}$. Towards that end, define a function $\\lambda:B_{\\mathbb{R}^{2d}}(1) \\times B_{\\mathbb{R}^{2d}}(\\mathcal{K}) \\to \\mathbb{R}_+$ as: \n$$\n\\lambda: (v, \\eta) \\mapsto 2\\mathbb{E}\\left[(\\delta_0^{\\top}g(- \\tilde Q^{\\top}\\tilde \\psi_0) \\delta_0) \n\\left(v^{\\top}\\tilde Q\\right) ^2 f_0(0 \\mid \\tilde Q) K'(\\tilde Q^{\\top}\\eta)\\right]\n$$\nClearly $\\lambda \\ge 0$ and is continuous on a compact set. Hence its infimum must be attained. Suppose the infimum is $0$, i.e. there exists $(v^*, \\eta^*)$ such that: \n$$\n\\mathbb{E}\\left[(\\delta_0^{\\top}g(- \\tilde Q^{\\top}\\tilde \\psi_0) \\delta_0) \n\\left(v^{*^{\\top}}\\tilde Q\\right) ^2 f_0(0 \\mid \\tilde Q) K'(\\tilde Q^{\\top}\\eta^*)\\right] = 0 \\,.\n$$\nas $\\lambda_{\\min}(g(\\dot)) \\ge c_+$, we must have $\\left(v^{*^{\\top}}\\tilde Q\\right) ^2 f_0(0 \\mid \\tilde Q) K'(\\tilde Q^{\\top}\\eta^*) = 0$ almost surely. But from our assumption, $\\left(v^{*^{\\top}}\\tilde Q\\right) ^2 > 0$ and $K'(\\tilde Q^{\\top}\\eta^*) > 0$ almost surely, which implies $f_0(0 \\mid \\tilde q) = 0$ almost surely, which is a contradiction. 
Hence there exists $\\lambda_-$ such that: \n$$\n\\lambda_{\\min} (M_2) \\ge \\lambda_- > 0 \\ \\ \\forall \\ \\ \\|\\psi - \\psi_0^s\\| \\le \\mathcal{K} \\,.\n$$\nHence we have: \n$$\n\\lambda_{\\min}\\left(\\sigma_n \\nabla_{\\psi \\psi}\\mathbb{M}^2(\\theta)\\right) \\ge \\frac{\\lambda_-}{2}(1 - O(\\sigma_n)) \n$$\nfor all theta such that $d_*(\\theta, \\theta_0^s) \\le {\\epsilon} \\,.$ \n\\begin{align}\n\\label{eq:hessian_psi}\n & \\frac{1}{\\sigma_n}(\\psi - \\psi_0^s)^{\\top}\\sigma_n \\nabla^{\\psi \\psi}\\mathbb{M}^s(\\tilde \\theta) (\\psi - \\psi^0) \\gtrsim \\frac{\\|\\psi - \\psi^s_0\\|^2}{\\sigma_n} \\left(1- O(\\sigma_n)\\right) \n\\end{align}\nFrom equation \\eqref{eq:hessian_gamma}, \\eqref{eq:hessian_cross} and \\eqref{eq:hessian_psi} we have: \n\\begin{align*}\n& \\frac12 (\\theta_0 - \\theta^0_s)^{\\top}\\nabla^2 \\mathbb{M}^s(\\theta^*)(\\theta_0 - \\theta^0_s) \\\\\n& \\qquad \\qquad \\gtrsim \\left[\\|\\beta - \\beta^s_0\\|^2 + \\|\\gamma - \\gamma^s_0\\|^2 + \\frac{\\|\\psi - \\psi^s_0\\|^2}{\\sigma_n}\\right]\\mathds{1}_{\\|\\psi - \\psi_0^s\\| \\le \\mathcal{K} \\sigma_n} \\,.\n\\end{align*}\nThis, along with equation \\eqref{eq:lower_curv_smooth} concludes the proof. \n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\\subsection{Proof of Lemma \\ref{asymp-normality}}\nWe start by proving that analogues of Lemma 2 of \\cite{seo2007smoothed}: we show that: \n\\begin{align*}\n\\lim_{n \\to \\infty} \\mathbb{E}\\left[ \\sqrt{n\\sigma_n}\\nabla \\mathbb{M}_n^{s, \\psi}(\\theta_0)\\right] & = 0 \\\\\n\\lim_{n \\to \\infty} {\\sf var}\\left[ \\sqrt{n\\sigma_n}\\nabla \\mathbb{M}_n^{s, \\psi}(\\theta_0)\\right] & = V^{\\psi}\n\\end{align*}\nfor some matrix $V^{\\psi}$ which will be specified later in the proof. To prove the limit of the expectation: \n\\begin{align*}\n& \\mathbb{E}\\left[ \\sqrt{n\\sigma_n}\\nabla \\mathbb{M}_n^{s, \\psi}(\\theta_0)\\right] \\\\\n& = \\sqrt{\\frac{n}{\\sigma_n}}\\mathbb{E}\\left[\\left\\{(Y - X^{\\top}(\\beta_0 + \\delta_0))^2 - (Y - X^{\\top}\\beta_0)^2\\right\\}\\tilde Q K'\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n}\\right)\\right] \\\\\n& = \\sqrt{\\frac{n}{\\sigma_n}}\\mathbb{E}\\left[\\left(\\delta_0^{\\top}g(Q)\\delta_0\\right)\\left(1 - 2\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right)\\tilde Q K'\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n}\\right)\\right] \\\\\n& = \\sqrt{\\frac{n}{\\sigma_n}} \\times \\sigma_n \\int \\int \\left(\\delta_0^{\\top}g(\\sigma_nt - \\tilde q^{\\top}\\tilde \\psi_0, \\tilde q)\\delta_0\\right)\\left(1 - 2\\mathds{1}_{t > 0}\\right)\\tilde q K'\\left(t\\right) \\ f_0(\\sigma_n t \\mid \\tilde q) f (\\tilde q) \\ dt \\ d\\tilde q \\\\\n& = \\sqrt{n\\sigma_n} \\left[\\int \\tilde q \\left(\\delta_0^{\\top}g(- \\tilde q^{\\top}\\tilde \\psi_0, \\tilde q)\\delta_0\\right)f_0(0 \\mid \\tilde q) \\cancelto{0}{\\left(\\int_{-\\infty}^{\\infty} \\left(1 - 2\\mathds{1}_{t > 0}\\right)K'\\left(t\\right) \\ dt\\right)} f (\\tilde q) d\\tilde q + O(\\sigma_n)\\right] \\\\\n& = O(\\sqrt{n\\sigma_n^3}) = o(1) \\,.\n\\end{align*}\nFor the variance part: \n\\begin{align*}\n& {\\sf var}\\left[ \\sqrt{n\\sigma_n}\\nabla \\mathbb{M}_n^{s, \\psi}(\\theta_0)\\right] \\\\\n& = \\frac{1}{\\sigma_n}{\\sf var}\\left(\\left\\{(Y - X^{\\top}(\\beta_0 + \\delta_0))^2 - (Y - X^{\\top}\\beta_0)^2\\right\\}\\tilde Q K'\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n}\\right)\\right) \\\\\n& = \\frac{1}{\\sigma_n}\\mathbb{E}\\left(\\left\\{(Y - X^{\\top}(\\beta_0 + \\delta_0))^2 - (Y - X^{\\top}\\beta_0)^2\\right\\}^2 \\tilde Q\\tilde Q^{\\top} 
\\left(K'\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n}\\right)\\right)^2\\right) \\\\\n& \\qquad \\qquad + \\frac{1}{\\sigma_n}\\mathbb{E}^{\\otimes 2}\\left[\\left\\{(Y - X^{\\top}(\\beta_0 + \\delta_0))^2 - (Y - X^{\\top}\\beta_0)^2\\right\\}\\tilde Q K'\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n}\\right)\\right]\n\\end{align*}\nThe outer product of the expectation (the second term of the above summand) is $o(1)$ which follows from our previous analysis of the expectation term. For the second moment: \n\\begin{align*}\n& \\frac{1}{\\sigma_n}\\mathbb{E}\\left(\\left\\{(Y - X^{\\top}(\\beta_0 + \\delta_0))^2 - (Y - X^{\\top}\\beta_0)^2\\right\\}^2 \\tilde Q\\tilde Q^{\\top} \\left(K'\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n}\\right)\\right)^2\\right) \\\\\n& = \\frac{1}{\\sigma_n}\\mathbb{E}\\left(\\left\\{(X^{\\top}\\delta_0)^2(1 - 2\\mathds{1}_{Q^{\\top}\\psi_0 > 0}) -2{\\epsilon} (X^{\\top}\\delta_0)\\right\\}^2 \\tilde Q\\tilde Q^{\\top} \\left(K'\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n}\\right)\\right)^2\\right) \\\\\n& = \\frac{1}{\\sigma_n}\\left[\\mathbb{E}\\left((X^{\\top}\\delta_0)^4 \\tilde Q\\tilde Q^{\\top} \\left(K'\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n}\\right)\\right)^2\\right) + 4\\sigma_{\\epsilon}^2\\mathbb{E}\\left((X^{\\top}\\delta_0)^2 \\tilde Q\\tilde Q^{\\top} \\left(K'\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n}\\right)\\right)^2\\right) \\right] \\\\\n& \\longrightarrow \\left(\\int_{-\\infty}^{\\infty}(K'(t))^2 \\ dt\\right)\\left[\\mathbb{E}\\left(g_{4, \\delta_0}(-\\tilde Q^{\\top}\\tilde \\psi_0, \\tilde Q)\\tilde Q\\tilde Q^{\\top}f_0(0 \\mid \\tilde Q)\\right) \\right. \\\\\n& \\hspace{10em}+ \\left. 4\\sigma_{\\epsilon}^2\\mathbb{E}\\left(\\delta_0^{\\top}g(-\\tilde Q^{\\top}\\tilde \\psi_0, \\tilde Q)\\delta_0 \\tilde Q\\tilde Q^{\\top}f_0(0 \\mid \\tilde Q)\\right)\\right] \\\\\n& := 2V^{\\psi} \\,.\n\\end{align*}\nFinally using Lemma 6 of \\cite{horowitz1992smoothed} we conclude that $ \\sqrt{n\\sigma_n}\\nabla \\mathbb{M}_n^{s, \\psi}(\\theta_0) \\implies \\mathcal{N}(0, V^{\\psi})$. \n\\\\\\\\\n\\noindent\nWe next prove that $ \\sqrt{n}\\nabla \\mathbb{M}_n^{s, \\gamma}(\\theta_0)$ to normal distribution. This is a simple application of CLT along with bounding some remainder terms which are asymptotically negligible. 
The gradients are: \n\\begin{align*}\n\\sqrt{n}\\begin{pmatrix} \\nabla_{\\beta}\\mathbb{M}^s_n(\\theta_0^s) \\\\ \\nabla_{\\delta}\\mathbb{M}^s_n(\\theta_0^s) \\end{pmatrix} & = 2\\sqrt{n}\\begin{pmatrix}\\frac1n \\sum_i X_i(X_i^{\\top}\\beta_0 - Y_i)+ \\frac1n \\sum_i X_iX_i^{\\top}\\delta_0 K\\left(\\frac{Q_i^{\\top}\\psi_0}{\\sigma_n}\\right) \\\\ \n\\frac1n \\sum_i \\left[X_i(X_i^{\\top}\\beta_0 + X_i^{\\top}\\delta_0 - Y_i)\\right] K\\left(\\frac{Q_i^{\\top}\\psi_0^s}{\\sigma_n}\\right) \\end{pmatrix} \\\\\n& = 2\\begin{pmatrix} -\\frac{1}{\\sqrt{n}} \\sum_i X_i {\\epsilon}_i + \\frac{1}{\\sqrt{n}} \\sum_i X_iX_i^{\\top}\\delta_0 \\left(K\\left(\\frac{Q_i^{\\top}\\psi_0}{\\sigma_n}\\right) - \\mathds{1}_{Q_i^{\\top}\\psi_0 > 0}\\right) \\\\ -\\frac{1}{\\sqrt{n}} \\sum_i X_i {\\epsilon}_iK\\left(\\frac{Q_i^{\\top}\\psi_0}{\\sigma_n}\\right) + \\frac{1}{\\sqrt{n}} \\sum_i X_iX_i^{\\top}\\delta_0K\\left(\\frac{Q_i^{\\top}\\psi_0}{\\sigma_n}\\right)\\mathds{1}_{Q_i^{\\top}\\psi_0 \\le 0} \n\\end{pmatrix}\\\\\n& = 2\\begin{pmatrix} -\\frac{1}{\\sqrt{n}} \\sum_i X_i {\\epsilon}_i + R_1 \\\\ -\\frac{1 }{\\sqrt{n}} \\sum_i X_i {\\epsilon}_i\\mathbf{1}_{Q_i^{\\top}\\psi_0 > 0} +R_2 \n\\end{pmatrix}\n\\end{align*}\nThat $(1\/\\sqrt{n})\\sum_i X_i {\\epsilon}_i$ converges to normal distribution follows from a simple application of CLT. Therefore, once we prove that $R_1$ and $R_2$ are $o_p(1)$ we have: \n$$\n\\sqrt{n} \\nabla_{\\gamma}\\mathbb{M}^s_n(\\theta_0^s) \\overset{\\mathscr{L}}{\\implies} \\mathcal{N}\\left(0, 4V^{\\gamma}\\right)\n$$\nwhere: \n\\begin{equation}\n\\label{eq:def_v_gamma}\nV^{\\gamma} = \\sigma_{\\epsilon}^2 \\begin{pmatrix}\\mathbb{E}\\left[XX^{\\top}\\right] & \\mathbb{E}\\left[XX^{\\top}\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right] \\\\\n\\mathbb{E}\\left[XX^{\\top}\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right] & \\mathbb{E}\\left[XX^{\\top}\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right] \\end{pmatrix} \\,.\n\\end{equation}\nTo complete the proof we now show that $R_1$ and $R_2$ are $o_p(1)$. For $R_1$, we show that $\\mathbb{E}[R_1] \\to 0$ and ${\\sf var}(R_1) \\to 0$. 
For the expectation part: \n\\begin{align*}\n & \\mathbb{E}[R_1] \\\\\n & = \\sqrt{n}\\mathbb{E}\\left[XX^{\\top}\\delta_0 \\left(K\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n}\\right) - \\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right)\\right] \\\\\n & = \\sqrt{n}\\delta_0^{\\top}\\mathbb{E}\\left[g(Q) \\left(K\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n}\\right) - \\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right)\\right] \\\\\n & = \\sqrt{n}\\int_{\\mathbb{R}^{p-1}} \\int_{-\\infty}^{\\infty} \\delta_0^{\\top}g\\left(t-\\tilde q^{\\top}\\tilde \\psi_0, \\tilde q\\right)\\left(\\mathds{1}_{t > 0} - K\\left(\\frac{t}{\\sigma_n}\\right)\\right)f_0(t \\mid \\tilde q) f(\\tilde q) \\ dt \\ d\\tilde q \\\\\n & = \\sqrt{n}\\sigma_n \\int_{\\mathbb{R}^{p-1}} \\int_{-\\infty}^{\\infty} \\delta_0^{\\top}g\\left(\\sigma_n z-\\tilde q^{\\top}\\tilde \\psi_0, \\tilde q\\right)\\left(\\mathds{1}_{z > 0} - K\\left(z\\right)\\right)f_0(\\sigma_n z \\mid \\tilde q) f(\\tilde q) \\ dz \\ d\\tilde q \\\\\n & = \\sqrt{n}\\sigma_n \\left[\\int_{\\mathbb{R}^{p-1}}\\delta_0^{\\top}g\\left(-\\tilde q^{\\top}\\tilde \\psi_0, \\tilde q\\right) f_0(0 \\mid \\tilde q) f(\\tilde q) \\ d\\tilde q \\cancelto{0}{\\left[\\int_{-\\infty}^{\\infty} \\left(\\mathds{1}_{z > 0} - K\\left(z\\right)\\right)\\ dz\\right]} + O(\\sigma_n) \\right] \\\\\n & = O(\\sqrt{n}\\sigma_n^2) = o(1) \\,.\n\\end{align*}\nFor the variance part: \n\\begin{align*}\n& {\\sf var}(R_1) \\\\\n& = {\\sf var}\\left(XX^{\\top}\\delta_0 \\left(K\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n}\\right) - \\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right)\\right) \\\\\n& \\le \\mathbb{E}\\left[\\|X\\|^2 \\delta_0^{\\top}XX^{\\top}\\delta_0 \\left(K\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n}\\right) - \\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right)^2\\right] \\\\\n& = O(\\sigma_n ) = o(1) \\,.\n\\end{align*}\nThis shows that ${\\sf var}(R_1) = o(1)$ and this establishes $R_1 = o_p(1)$. The proof for $R_2$ is similar and hence skipped for brevity. \n\\\\\\\\\nOur next step is to prove that $\\sqrt{n\\sigma_n}\\nabla_{\\psi}\\mathbb{M}^s_n(\\theta_0^s)$ and $\\sqrt{n}\\nabla \\mathbb{M}^{s, \\gamma}_n(\\theta_0^s)$ are asymptotically uncorrelated. 
Towards that end, first note that: \n\\begin{align*}\n& \\mathbb{E}\\left[X(X^{\\top}\\beta_0 - Y) + XX^{\\top}\\delta_0 K\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n}\\right) \\right] \\\\\n& = \\mathbb{E}\\left[XX^{\\top}\\delta_0\\left(K\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n}\\right) - \\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right)\\right] \\\\\n& = \\mathbb{E}\\left[g(Q)\\delta_0\\left(K\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n}\\right) - \\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right)\\right] \\\\\n& = \\sigma_n \\int \\int g(\\sigma_n t - \\tilde q^{\\top}\\tilde \\psi_0, \\tilde q)(K(t) - \\mathds{1}_{t>0})f_0(\\sigma_n t \\mid \\tilde q) f(\\tilde q) \\ dt \\ d\\tilde q \\\\\n& = \\sigma_n \\int g(- \\tilde q^{\\top}\\tilde \\psi_0, \\tilde q)\\cancelto{0}{\\int_{-\\infty}^{\\infty} (K(t) - \\mathds{1}_{t>0}) \\ dt} \\ f_0(0 \\mid \\tilde q) f(\\tilde q) \\ dt \\ d\\tilde q + O(\\sigma_n^2) \\\\\n& = O(\\sigma_n^2) \\,.\n\\end{align*}\nAlso, it follows from the proof of $\\mathbb{E}\\left[\\sqrt{n\\sigma_n}\\nabla_\\psi \\mathbb{M}_n^s(\\theta_0)\\right] \\to 0$ we have: \n$$\n\\mathbb{E}\\left[\\left\\{(Y - X^{\\top}(\\beta_0 + \\delta_0))^2 - (Y - X^{\\top}\\beta_0)^2\\right\\}\\tilde Q K'\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n}\\right)\\right] = O(\\sigma_n^2) \\,.\n$$\nFinally note that: \n\\begin{align*}\n& \\mathbb{E}\\left[\\left(\\left\\{(Y - X^{\\top}(\\beta_0 + \\delta_0))^2 - (Y - X^{\\top}\\beta_0)^2\\right\\}\\tilde Q K'\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n}\\right)\\right) \\times \\right. \\\\\n& \\qquad \\qquad \\qquad \\qquad \\qquad \\left. \\left(X(X^{\\top}\\beta_0 - Y) + XX^{\\top}\\delta_0 K\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n}\\right)\\right)^{\\top}\\right] \\\\\n& = \\mathbb{E}\\left[\\left(\\left\\{(X^{\\top}\\delta_0)^2(1 - 2\\mathds{1}_{Q^{\\top}\\psi_0 > 0}) - 2{\\epsilon} X^{\\top}\\delta_0\\right\\}\\tilde QK'\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n}\\right)\\right) \\right. \\\\\n& \\qquad \\qquad \\qquad \\qquad \\qquad \\left. \\times \\left\\{XX^{\\top}\\delta_0\\left(K\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n}\\right) - \\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right) - X{\\epsilon} \\right\\}\\right] \\\\\n& = \\mathbb{E}\\left[\\left((X^{\\top}\\delta_0)^2(1 - 2\\mathds{1}_{Q^{\\top}\\psi_0 > 0})\\tilde QK'\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n}\\right)\\right) \\right. \\\\\n& \\qquad \\qquad \\qquad \\left. \\times \\left(XX^{\\top}\\delta_0\\left(K\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n}\\right) - \\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right)\\right)^{\\top}\\right] \\\\\n& \\qquad \\qquad + 2\\sigma^2_{\\epsilon} \\mathbb{E}\\left[XX^{\\top}\\delta_0\\tilde Q^{\\top}K'\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n}\\right)\\right] \\\\\n&= O(\\sigma_n ) \\,.\n\\end{align*}\nNow getting back to the covariance: \n\\begin{align*}\n& \\mathbb{E}\\left[\\left(\\sqrt{n\\sigma_n}\\nabla_{\\psi}\\mathbb{M}^s_n(\\theta_0)\\right)\\left(\\sqrt{n}\\nabla_\\beta \\mathbb{M}^s_n(\\theta_0)\\right)^{\\top}\\right] \\\\\n& = \\frac{1}{\\sqrt{\\sigma_n}}\\mathbb{E}\\left[\\left(\\left\\{(Y - X^{\\top}(\\beta_0 + \\delta_0))^2 - (Y - X^{\\top}\\beta_0)^2\\right\\}\\tilde Q K'\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n}\\right)\\right) \\times \\right. \\\\\n& \\qquad \\qquad \\qquad \\qquad \\qquad \\left. 
\\left(X(X^{\\top}\\beta_0 - Y) + XX^{\\top}\\delta_0 K\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n}\\right)\\right)^{\\top}\\right] \\\\\n& \\qquad \\qquad + \\frac{n-1}{\\sqrt{\\sigma_n}}\\left[\\mathbb{E}\\left[\\left\\{(Y - X^{\\top}(\\beta_0 + \\delta_0))^2 - (Y - X^{\\top}\\beta_0)^2\\right\\}\\tilde Q K'\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n}\\right)\\right] \\right. \\\\\n& \\qquad \\qquad \\qquad \\qquad \\times \\left. \\left(\\mathbb{E}\\left[X(X^{\\top}\\beta_0 - Y) + XX^{\\top}\\delta_0 K\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n}\\right) \\right]\\right)^{\\top}\\right] \\\\\n& = \\frac{1}{\\sqrt{\\sigma_n}} \\times O(\\sigma_n) + \\frac{n-1}{\\sqrt{\\sigma_n}} \\times O(\\sigma_n^4) = o(1) \\,.\n\\end{align*}\nThe proof for $\\mathbb{E}\\left[\\left(\\sqrt{n\\sigma_n}\\nabla_{\\psi}\\mathbb{M}^s_n(\\theta_0)\\right)\\left(\\sqrt{n}\\nabla_\\delta \\mathbb{M}^s_n(\\theta_0)\\right)^{\\top}\\right]$ is similar and hence skipped. This completes the proof. \n\n\n\n\\subsection{Proof of Lemma \\ref{conv-prob}}\nTo prove first note that by simple application of law of large number (and using the fact that $\\|\\psi^* - \\psi_0\\|\/\\sigma_n = o_p(1)$ we have: \n\\begin{align*}\n\\nabla^2 \\mathbb{M}_n^{s, \\gamma}(\\theta^*) & = 2\\begin{pmatrix}\\frac{1}{n}\\sum_i X_i X_i^{\\top} & \\frac{1}{n}\\sum_i X_i X_i^{\\top}K\\left(\\frac{Q_i^{\\top}\\psi^*}{\\sigma_n}\\right) \\\\ \\frac{1}{n}\\sum_i X_i X_i^{\\top}K\\left(\\frac{Q_i^{\\top}\\psi^*}{\\sigma_n}\\right) & \\frac{1}{n}\\sum_i X_i X_i^{\\top}K\\left(\\frac{Q_i^{\\top}\\psi^*}{\\sigma_n}\\right)\n\\end{pmatrix} \\\\\n& \\overset{p}{\\longrightarrow} 2 \\begin{pmatrix}\\mathbb{E}\\left[XX^{\\top}\\right] & \\mathbb{E}\\left[XX^{\\top}\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right] \\\\ \\mathbb{E}\\left[XX^{\\top}\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right] & \\mathbb{E}\\left[XX^{\\top}\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right] \\end{pmatrix} := 2Q^{\\gamma}\n\\end{align*}\nThe proof of the fact that $\\sqrt{\\sigma_n}\\nabla^2_{\\psi \\gamma}\\mathbb{M}_n^s(\\theta^*) = o_p(1)$ is same as the proof of Lemma 5 of \\cite{seo2007smoothed} and hence skipped. Finally the proof of the fact that \n$$\n\\sigma_n \\nabla^2_{\\psi \\psi}\\mathbb{M}_n^s(\\theta^*) \\overset{p}{\\longrightarrow} 2Q^{\\psi}\\,.\n$$\nfor some non-negative definite matrix $Q$. The proof is similar to that of Lemma 6 of \\cite{seo2007smoothed}, using which we conclude the proof with: \n$$\nQ^{\\psi} = \\left(\\int_{-\\infty}^{\\infty} -\\text{sign}(t) K''(t) \\ dt\\right) \\times \\mathbb{E}\\left[\\delta_0^{\\top} g\\left(-\\tilde Q^{\\top}\\tilde \\psi_0, \\tilde Q\\right)\\delta_0 \\tilde Q \\tilde Q^{\\top} f_0(0 \\mid \\tilde Q)\\right] \\,.\n$$\nThis completes the proof. So we have established: \n\\begin{align*}\n\\sqrt{n}\\left(\\hat \\gamma^s - \\gamma_0\\right) & \\overset{\\mathscr{L}}{\\implies} \\mathcal{N}\\left(0, \\left(Q^\\gamma\\right)^{-1}V^\\gamma \\left(Q^\\gamma\\right)^{-1}\\right) \\,, \\\\\n\\sqrt{\\frac{n}{\\sigma_n}}\\left(\\hat \\psi^s - \\psi_0\\right) & \\overset{\\mathscr{L}}{\\implies} \\mathcal{N}\\left(0, \\left(Q^\\psi\\right)^{-1}V^\\psi \\left(Q^\\psi\\right)^{-1}\\right) \\,.\n\\end{align*}\nand they are asymptotically uncorrelated. 
\n\n\n\n\n\n\n\n\n\n\n\n\\section{Proof of Theorem \\ref{thm:binary}}\n\\label{sec:supp_classification}\nIn this section, we present the details of the binary response model, the assumptions, a roadmap of the proof and then finally prove Theorem \\ref{thm:binary}.\n\\noindent \n\\begin{assumption}\n\\label{as:distribution}\nThe below assumptions pertain to the parameter space and the distribution of $Q$:\n\\begin{enumerate}\n\\item The parameter space $\\Theta$ is a compact subset of $\\mathbb{R}^p$. \n\\item The support of the distribution of $Q$ contains an open subset around origin of $\\mathbb{R}^p$ and the distribution of $Q_1$ conditional on $\\tilde{Q} = (Q_2, \\dots, Q_p)$ has, almost surely, everywhere positive density with respect to Lebesgue measure. \n\\end{enumerate}\n\\end{assumption}\n\n\n\n\\noindent \nFor notational convenience, define the following: \n\\begin{enumerate}\n\\item Define $f_{\\psi} (\\cdot | \\tilde{Q})$ to the conditional density of $Q^{\\top}\\psi$ given $\\tilde{Q}$ for $\\theta \\in \\Theta$. Note that the following relation holds: $$f_{\\theta}(\\cdot |\\tilde{Q}) = f_{Q_1}(\\cdot - \\tilde{\\psi}^{\\top}\\tilde{Q} | \\tilde{Q}) \\,.$$ where we define $f_{Q_1}(\\cdot | \\tilde X)$ is the conditional density of $Q_1$ given $\\tilde Q$. \n\\item Define $f_0(\\cdot | \\tilde{Q}) = f_{\\psi_0}(\\cdot | \\tilde{Q})$ where $\\psi_0$ is the unique minimizer of the population score function $M(\\psi)$. \n\\item Define $f_{\\tilde Q}(\\cdot)$ to be the marginal density of $\\tilde Q$. \n\\end{enumerate}\n\n\n\\noindent\nThe rest of the assumptions are as follows: \n\\begin{assumption}\n\\label{as:differentiability}\n$f_0(y|\\tilde{Q})$ is at-least once continuously differentiable almost surely for all $\\tilde{Q}$. Also assume that there exists $\\delta$ and $t$ such that $$\\inf_{|y| \\le \\delta} f_0(y|\\tilde{Q}) \\ge t$$ for all $\\tilde{Q}$ almost surely. \n\\end{assumption}\nThis assumption can be relaxed in the sense that one can allow the lower bound $t$ to depend on $\\tilde{Q}$, provided that some further assumptions are imposed on $\\mathbb{E}(t(\\tilde{Q}))$. As this does not add anything of significance to the import of this paper, we use Assumption \\ref{as:differentiability} to simplify certain calculations. \n\n\n\n\\begin{assumption}\n\\label{as:density_bound}\nDefine $m\\left(\\tilde{Q}\\right) = \\sup_{t}f_{X_1}(t | \\tilde{Q}) = \\sup_{\\theta} \\sup_{t}f_{\\theta}(t | \\tilde{Q})$. Assume that $\\mathbb{E}\\left(m\\left(\\tilde{Q}\\right)^2\\right) < \\infty$. \n\\end{assumption}\n\n\n\n\n\n\n\n\\begin{assumption}\n\\label{as:derivative_bound}\nDefine $h(\\tilde{Q}) = \\sup_{t} f_0'(t | \\tilde{Q})$. Assume that $\\mathbb{E}\\left(h^2\\left(\\tilde{Q}\\right)\\right) < \\infty$. \n\\end{assumption}\n\\begin{assumption}\n\\label{as:eigenval_bound}\nAssume that $f_{\\tilde{Q}}(0) > 0$ and also that the minimum eigenvalue of $\\mathbb{E}\\left(\\tilde{Q}\\tilde{Q}^{\\top}f_0(0|\\tilde{Q})\\right) > 0$. \n\\end{assumption}\n\n\n\n\n\n\\subsection{Sufficient conditions for above assumptions }\nWe now demonstrate some sufficient conditions for the above assumptions to hold. If the support of $Q$ is compact and both $f_1(\\cdot | \\tilde Q)$ and $f'_1(\\cdot | \\tilde Q)$ are uniformly bounded in $\\tilde Q$, then Assumptions $(\\ref{as:distribution}, \\ \\ref{as:differentiability}, \\ \\ref{as:density_bound},\\ \\ref{as:derivative_bound})$ follow immediately. The first part of Assumption \\ref{as:eigenval_bound}, i.e. 
the assumption $f_{\\tilde{Q}}(0) > 0$ is also fairly general and satisfied by many standard probability distributions. The second part of Assumption \\ref{as:eigenval_bound} is satisfied when $f_0(0|\\tilde{Q})$ has some lower bound independent of $\\tilde{Q}$ and $\\tilde{Q}$ has non-singular dispersion matrix. \n\n\n\n\n\nBelow we state our main theorem. In the next section, we first provide a roadmap of our proof and then fill in the corresponding details. For the rest of the paper, \\emph{we choose our bandwidth $\\sigma_n$ to satisfy $\\frac{\\log{n}}{n \\sigma_n} \\rightarrow 0$}. \n\n\n\\noindent\n\\begin{remark}\nAs our procedure requires the weaker condition $(\\log{n})\/(n \\sigma_n) \\rightarrow 0$, it is easy to see from the above Theorem that the rate of convergence can be almost as fast as $n\/\\sqrt{\\log{n}}$. \n\\end{remark}\n\\begin{remark}\nOur analysis remains valid in presence of an intercept term. Assume, without loss of generality, that the second co-ordinate of $Q$ is $1$ and let $\\tilde{Q} = (Q_3, \\dots, Q_p)$. It is not difficult to check that all our calculations go through under this new definition of $\\tilde Q$. We, however, avoid this scenario for simplicity of exposition. \n\\end{remark}\n\\vspace{0.2in}\n\\noindent\n{\\bf Proof sketch: }We now provide a roadmap of the proof of Theorem \\ref{thm:binary} in this paragraph while the elaborate technical derivations in the later part. \nDefine the following: $$T_n(\\psi) = \\nabla \\mathbb{M}_n^s(\\psi)= -\\frac{1}{n\\sigma_n}\\sum_{i=1}^n (Y_i - \\gamma)K'\\left(\\frac{Q_i^{\\top}\\psi}{\\sigma_n}\\right)\\tilde{Q}_i$$ $$Q_n(\\psi) = \\nabla^2 \\mathbb{M}_n^s(\\psi) = -\\frac{1}{n\\sigma_n^2}\\sum_{i=1}^n (Y_i - \\gamma)K''\\left(\\frac{Q_i^{\\top}\\psi}{\\sigma_n}\\right)\\tilde{Q}_i\\tilde{Q}_i^{\\top}$$ As $\\hat{\\psi}^s$ minimizes $\\mathbb{M}^s_n(\\psi)$ we have $T_n(\\hat{\\psi}^s) = 0$. Using one step Taylor expansion we have:\n\\allowdisplaybreaks \n\\begin{align*}\nT_n(\\hat{\\psi}^s) = T_n(\\psi_0) + Q_n(\\psi^*_n)\\left(\\hat{\\psi}^s - \\psi_0\\right) = 0\n\\end{align*}\nor: \n\\begin{equation}\n\\label{eq:main_eq} \\sqrt{n\/\\sigma_n}\\left(\\hat{\\psi}^s - \\psi_0\\right) = -\\left(\\sigma_nQ_n(\\psi^*_n)\\right)^{-1}\\sqrt{n\\sigma_n}T_n(\\psi_0) \n\\end{equation}\nfor some intermediate point $\\psi^*_n$ between $\\hat \\psi^s$ and $\\psi_0$. The following lemma establishes the asymptotic properties of $T_n(\\psi_0)$: \n\\begin{lemma}[Asymptotic Normality of $T_n$]\n\\label{asymp-normality}\n\\label{asymp-normality}\nIf $n\\sigma_n^{3} \\rightarrow \\lambda$, then \n$$\n\\sqrt{n \\sigma_n} T_n(\\psi_0) \\Rightarrow \\mathcal{N}(\\mu, \\Sigma)\n$$\nwhere \n$$\\mu = -\\sqrt{\\lambda}\\frac{\\beta_0 - \\alpha_0}{2}\\left[\\int_{-1}^{1} K'\\left(t\\right)|t| \\ dt \\right] \\int_{\\mathbb{R}^{p-1}}\\tilde{Q} f'(0 | \\tilde{Q}) \\ dP(\\tilde{Q})\n$$ \nand \n$$\\Sigma = \\left[a_1 \\int_{-1}^{0} \\left(K'\\left(t\\right)\\right)^2 \\ dt + a_2 \\int_{0}^{1} \\left(K'\\left(t\\right)\\right)^2 \\ dt \\right]\\int_{\\mathbb{R}^{p-1}}\\tilde{Q}\\tilde{Q}^{\\top} f(0|\\tilde{Q}) \\ dP(\\tilde{Q}) \\,.\n$$ \nHere $a_1 = (1 - \\gamma)^2 \\alpha_0 + \\gamma^2 (1-\\alpha_0), a_2 = (1 - \\gamma)^2 \\beta_0 + \\gamma^2 (1-\\beta_0)$ and $\\alpha_0, \\beta_0, \\gamma$ are model parameters defined around equation \\eqref{eq:new_loss}. 
\n\\end{lemma}\n\\noindent\nIn the case that $n \\sigma_n^3 \\rightarrow 0$, which, holds when $n\\sigma_n \\rightarrow 0$ as assumed prior to the statement of the theorem, $\\lambda = 0$ and we have: \n$$\\sqrt{n \\sigma_n} T_n(\\psi_0) \\rightarrow \\mathcal{N}(0, \\Sigma) \\,.$$ \nNext, we analyze the convergence of $Q_n(\\psi^*_n)^{-1}$ which is stated in the following lemma: \n\\begin{lemma}[Convergence in Probability of $Q_n$]\n\\label{conv-prob}\nUnder Assumptions (\\ref{as:distribution} - \\ref{as:eigenval_bound}), for any random sequence $\\breve{\\psi}_n$ such that $\\|\\breve{\\psi}_n - \\psi_0\\|\/\\sigma_n \\overset{P} \\rightarrow 0$, \n$$\n\\sigma_n Q_n(\\breve{\\psi}_n) \\overset{P} \\rightarrow Q = \\frac{\\beta_0 - \\alpha_0}{2}\\left(\\int_{-1}^{1} -K''\\left(t \\right)\\text{sign}(t) \\ dt\\right) \\ \\mathbb{E}\\left(\\tilde{Q}\\tilde{Q}^{\\top} f(0 |\\tilde{Q})\\right) \\,.\n$$\n\\end{lemma}\nIt will be shown later that the condition $\\|\\breve{\\psi}_n - \\psi_0\\|\/\\sigma_n \\overset{P} \\rightarrow 0$ needed in Lemma \\ref{conv-prob} holds for the (random) sequence $\\psi^*_n$. Then, combining Lemma \\ref{asymp-normality} and Lemma \\ref{conv-prob} we conclude from equation \\ref{eq:main_eq} that: \n$$\n\\sqrt{n\/\\sigma_n} \\left(\\hat{\\psi}^s - \\psi_0\\right) \\Rightarrow N(0, Q^{-1}\\Sigma Q^{-1}) \\,.\n$$ \nThis concludes the proof of the our Theorem \\ref{thm:binary} with $\\Gamma = Q^{-1}\\Sigma Q^{-1}$. \n\\newline\n\\newline\nObserve that, to show $\\left\\|\\psi^*_n - \\psi_0 \\right\\| = o_P(\\sigma_n)$, it suffices to to prove that $\\left\\|\\hat \\psi^s - \\psi_0 \\right\\| = o_P(\\sigma_n)$. Towards that direction, we have following lemma: \n\n\\begin{lemma}[Rate of convergence]\n\\label{lem:rate}\nUnder Assumptions (\\ref{as:distribution} - \\ref{as:eigenval_bound}), \n$$\nn^{2\/3}\\sigma_n^{-1\/3} d^2_n\\left(\\hat \\psi^s, \\psi_0^s\\right) = O_P(1) \\,,\n$$ \nwhere \n$$\nd_n\\left(\\psi, \\psi_0^s\\right) = \\sqrt{\\left[\\frac{\\|\\psi - \\psi_0^s\\|^2}{\\sigma_n} \\mathds{1}(\\|\\psi - \\psi_0^s\\| \\le \\mathcal{K}\\sigma_n) + \\|\\psi - \\psi_0^s\\| \\mathds{1}(\\|\\psi - \\psi_0^s\\| \\ge \\mathcal{K}\\sigma_n)\\right]}\n$$\nfor some specific constant $\\mathcal{K}$. (This constant will be mentioned precisely in the proof). \n\\end{lemma}\n\n\\noindent\nThe lemma immediately leads to the following corollary: \n\n\\begin{corollary}\n\\label{rate-cor}\nIf $n\\sigma_n \\rightarrow \\infty$ then $\\|\\hat \\psi^s - \\psi_0^s\\|\/\\sigma_n \\overset{P} \\longrightarrow 0$.\n\\end{corollary}\n\n\\noindent\nFinally, to establish $\\|\\hat \\psi^s - \\psi_0\\|\/\\sigma_n \\overset{P} \\rightarrow 0$, all we need is that $\\|\\psi_0^s - \\psi_0\\|\/\\sigma_n \\rightarrow 0$ as demonstrated in the following lemma:\n\n\\begin{lemma}[Convergence of population minimizer]\n\\label{bandwidth}\nFor any sequence of $\\sigma_n \\rightarrow 0$, we have: $\\|\\psi_0^s - \\psi_0\\|\/\\sigma_n \\rightarrow 0$. \n\\end{lemma}\n\n\\noindent\nHence the final roadmap is the following: Using Lemma \\ref{bandwidth} and Corollary \\ref{rate-cor} we establish that $\\|\\hat \\psi^s - \\psi_0\\|\/\\sigma_n \\rightarrow 0$ if $n\\sigma_n \\rightarrow \\infty$. This, in turn, enables us to prove that $\\sigma_n Q_n(\\psi^*_n) \\overset{P} \\rightarrow Q$,which, along with Lemma \\ref{asymp-normality}, establishes the main theorem. \n\n\\begin{remark}\n\\label{rem:gamma}\nIn the above analysis, we have assumed knowledge of $\\gamma$ in between $(\\alpha_0, \\beta_0)$. 
However, all our calculations go through if we replace $\\gamma$ by its estimate (say $\\bar Y$), at the cost of more tedious book-keeping. One way to simplify the calculations is to split the data into two halves, estimate $\\gamma$ (via $\\bar Y$) from the first half, and then use it as a proxy for $\\gamma$ in the second half of the data to estimate $\\psi_0$. As this procedure does not add anything of interest to the core idea of our proof, we refrain from doing so here. \n\\end{remark}\n\n\\subsection{Variant of quadratic loss function}\n\\label{loss_func_eq}\nIn this sub-section we argue why the loss function in \\eqref{eq:new_loss} is a variant of the quadratic loss function for any $\\gamma \\in (\\alpha_0, \\beta_0)$. Assume that we know $\\alpha_0, \\beta_0$ and seek to estimate $\\psi_0$. We start with an expansion of the quadratic loss function: \n\\begin{align*}\n& \\mathbb{E}\\left(Y - \\alpha_0\\mathds{1}_{Q^{\\top}\\psi \\le 0} - \\beta_0 \\mathds{1}_{Q^{\\top}\\psi > 0}\\right)^2 \\\\\n& = \\mathbb{E}\\left(\\mathbb{E}\\left(\\left(Y - \\alpha_0\\mathds{1}_{Q^{\\top}\\psi \\le 0} - \\beta_0 \\mathds{1}_{Q^{\\top}\\psi > 0}\\right)^2 \\mid Q\\right)\\right) \\\\\n& = \\mathbb{E}_{Q}\\left(\\mathbb{E}\\left( Y^2 \\mid Q \\right) \\right) + \\mathbb{E}_{Q}\\left(\\alpha_0\\mathds{1}_{Q^{\\top}\\psi \\le 0} + \\beta_0 \\mathds{1}_{Q^{\\top}\\psi > 0}\\right)^2 \\\\\n& \\qquad \\qquad \\qquad -2 \\mathbb{E}_{Q}\\left(\\left(\\alpha_0\\mathds{1}_{Q^{\\top}\\psi \\le 0} + \\beta_0 \\mathds{1}_{Q^{\\top}\\psi > 0}\\right) \\mathbb{E}(Y \\mid Q)\\right) \\\\\n& = \\mathbb{E}_Q\\left(\\mathbb{E}\\left( Y \\mid Q \\right) \\right) + \\mathbb{E}_Q\\left(\\alpha_0\\mathds{1}_{Q^{\\top}\\psi \\le 0} + \\beta_0 \\mathds{1}_{Q^{\\top}\\psi > 0}\\right)^2 \\hspace{0.2in} [\\because Y \\in \\{0, 1\\} \\implies Y^2 = Y]\\\\\n& \\qquad \\qquad \\qquad -2 \\mathbb{E}_Q\\left(\\left(\\alpha_0\\mathds{1}_{Q^{\\top}\\psi \\le 0} + \\beta_0 \\mathds{1}_{Q^{\\top}\\psi > 0}\\right) \\mathbb{E}(Y \\mid Q)\\right) \\\\\n\\end{align*}\nSince the first summand is just $\\mathbb{E} Y$, it is irrelevant to the minimization. A cursory inspection shows that it suffices to minimize\n\\begin{align}\n& \\mathbb{E}\\left(\\left(\\alpha_0\\mathds{1}_{Q^{\\top}\\psi \\le 0} + \\beta_0 \\mathds{1}_{Q^{\\top}\\psi > 0}\\right) - \\mathbb{E}(Y \\mid Q)\\right)^2 \\notag\\\\\n\\label{eq:lse_1} & = (\\beta_0 - \\alpha_0)^2 \\P\\left(\\text{sign}(Q^{\\top}\\psi) \\neq \\text{sign}(Q^{\\top}\\psi_0)\\right)\n\\end{align}\nOn the other hand, the loss we are considering is $\\mathbb{E}\\left((Y - \\gamma)\\mathds{1}_{Q^{\\top}\\psi \\le 0}\\right)$: \n\\begin{align}\n\\label{eq:lse_2} \\mathbb{E}\\left((Y - \\gamma)\\mathds{1}_{Q^{\\top}\\psi \\le 0}\\right) & = (\\beta_0 - \\gamma)\\P(Q^{\\top}\\psi_0 > 0 , Q^{\\top}\\psi \\le 0) \\notag \\\\\n& \\hspace{10em}+ (\\alpha_0 - \\gamma)\\P(Q^{\\top}\\psi_0 \\le 0, Q^{\\top}\\psi \\le 0)\\,,\n\\end{align}\nwhich can be rewritten as: \n\\begin{align*}\n& (\\alpha_0 - \\gamma)\\P(Q^{\\top} \\psi_0 \\leq 0) + (\\beta_0 - \\gamma)\\,\\P(Q^{\\top} \\psi_0 > 0, Q^{\\top} \\psi \\leq 0) \\\\\n& \\qquad \\qquad \\qquad + (\\gamma - \\alpha_0)\\,\\P(Q^{\\top} \\psi_0 \\leq 0, Q^{\\top} \\psi > 0) \\,.\n\\end{align*}\nBy Assumption \\ref{as:distribution}, for $\\psi \\neq \\psi_0$, $\\P\\left(\\text{sign}(Q^{\\top}\\psi) \\neq \\text{sign}(Q^{\\top}\\psi_0)\\right) > 0$. As an easy consequence, equation \\eqref{eq:lse_1} is uniquely minimized at $\\psi = \\psi_0$. 
To see that the same is true for \\eqref{eq:lse_2} when $\\gamma \\in (\\alpha_0, \\beta_0)$, note that the first summand in the display does not depend on $\\psi$, that the second and third summands are both non-negative (as $\\gamma \\in (\\alpha_0, \\beta_0)$), and that at least one of them must be positive under Assumption \\ref{as:distribution}. \n\\subsection{Linear curvature of the population score function}\nBefore going into the proofs of the Lemmas and the Theorem, we argue that the population score function $\\mathbb{M}(\\psi)$ has linear curvature near $\\psi_0$, which is useful in proving Lemma \\ref{lem:rate}. We begin with the following observation: \n\\begin{lemma}[Curvature of population risk]\n\\label{lem:linear_curvature}\nUnder Assumption \\ref{as:differentiability} we have: $$u_- \\|\\psi - \\psi_0\\|_2 \\le \\mathbb{M}(\\psi) - \\mathbb{M}(\\psi_0) \\le u_+ \\|\\psi - \\psi_0\\|_2$$ for some constants $0 < u_- < u_+ < \\infty$, for all $\\psi \\in \\Psi$. \n\\end{lemma}\n\\begin{proof}\nFirst, we show that \n$$\n\\mathbb{M}(\\psi) - \\mathbb{M}(\\psi_0) = \\frac{(\\beta_0 - \\alpha_0)}{2} \\P(\\text{sign}(Q^{\\top}\\psi) \\neq \\text{sign}(Q^{\\top}\\psi_0))\n$$ which follows from the calculation below:\n\\begin{align*}\n& \\mathbb{M}(\\psi) - \\mathbb{M}(\\psi_0) \\\\\n& = \\mathbb{E}\\left((Y - \\gamma)\\mathds{1}(Q^{\\top}\\psi \\le 0)\\right) - \\mathbb{E}\\left((Y - \\gamma)\\mathds{1}(Q^{\\top}\\psi_0 \\le 0)\\right) \\\\\n& = \\frac{\\beta_0 - \\alpha_0}{2} \\mathbb{E}\\left(\\left\\{\\mathds{1}(Q^{\\top}\\psi \\le 0) - \\mathds{1}(Q^{\\top}\\psi_0 \\le 0)\\right\\}\\left\\{\\mathds{1}(Q^{\\top}\\psi_0 \\ge 0) - \\mathds{1}(Q^{\\top}\\psi_0 \\le 0)\\right\\}\\right) \\\\\n& = \\frac{\\beta_0 - \\alpha_0}{2} \\mathbb{E}\\left(\\left\\{\\mathds{1}(Q^{\\top}\\psi \\le 0, Q^{\\top}\\psi_0 \\ge 0) - \\mathds{1}(Q^{\\top}\\psi \\le 0, Q^{\\top}\\psi_0 \\le 0) + \\mathds{1}(Q^{\\top}\\psi_0 \\le 0)\\right\\}\\right) \\\\\n& = \\frac{\\beta_0 - \\alpha_0}{2} \\mathbb{E}\\left(\\left\\{\\mathds{1}(Q^{\\top}\\psi \\le 0, Q^{\\top}\\psi_0 \\ge 0) + \\mathds{1}(Q^{\\top}\\psi \\ge 0, Q^{\\top}\\psi_0 \\le 0)\\right\\}\\right) \\\\\n& = \\frac{\\beta_0 - \\alpha_0}{2} \\P(\\text{sign}(Q^{\\top}\\psi) \\neq \\text{sign}(Q^{\\top}\\psi_0)) \\,.\n\\end{align*}\nWe now analyze the probability of the wedge-shaped region, i.e. the region between the two hyperplanes $Q^{\\top}\\psi = 0$ and $Q^{\\top}\\psi_0 = 0$. 
Note that, \n\\allowdisplaybreaks\n\\begin{align}\n& \\P(Q^{\\top}\\psi > 0 > Q^{\\top}\\psi_0) \\notag\\\\\n& = \\P(-\\tilde{Q}^{\\top}\\tilde{\\psi} < Q_1 < -\\tilde{Q}^{\\top}\\tilde{\\psi}_0) \\notag\\\\\n\\label{lin1} & = \\mathbb{E}\\left[\\left(F_{Q_1 | \\tilde{Q}}\\left(-\\tilde{Q}^{\\top}\\tilde{\\psi}_0\\right) - F_{Q_1 | \\tilde{Q}}\\left(-\\tilde{Q}^{\\top}\\tilde{\\psi}\\right)\\right)\\mathds{1}\\left(\\tilde{Q}^{\\top}\\tilde{\\psi}_0 \\le \\tilde{Q}^{\\top}\\tilde{\\psi}\\right)\\right]\n\\end{align}\nA similar calculation yields\n\\allowdisplaybreaks\n\\begin{align}\n\\label{lin2} \\P(Q^{\\top}\\psi < 0 < Q^{\\top}\\psi_0) & = \\mathbb{E}\\left[\\left(F_{Q_1 | \\tilde{Q}}\\left(-\\tilde{Q}^{\\top}\\tilde{\\psi}\\right) - F_{Q_1 | \\tilde{Q}}\\left(-\\tilde{Q}^{\\top}\\tilde{\\psi}_0\\right)\\right)\\mathds{1}\\left(\\tilde{Q}^{\\top}\\tilde{\\psi}_0 \\ge \\tilde{Q}^{\\top}\\tilde{\\psi}\\right)\\right]\n\\end{align}\nAdding both sides of equations \\eqref{lin1} and \\eqref{lin2} we get: \n\\begin{equation}\n\\label{wedge_expression}\n\\P(\\text{sign}(Q^{\\top}\\psi) \\neq \\text{sign}(Q^{\\top}\\psi_0)) = \\mathbb{E}\\left[\\left|F_{Q_1 | \\tilde{Q}}\\left(-\\tilde{Q}^{\\top}\\tilde{\\psi}\\right) - F_{Q_1 | \\tilde{Q}}\\left(-\\tilde{Q}^{\\top}\\tilde{\\psi}_0\\right)\\right|\\right]\n\\end{equation}\nDefine $\\psi_{\\max} = \\sup_{\\psi \\in \\Psi}\\|\\psi\\|$, which is finite by Assumption \\ref{as:distribution}. Below, we establish the lower bound:\n\\allowdisplaybreaks\n\\begin{align*}\n& \\P(\\text{sign}(Q^{\\top}\\psi) \\neq \\text{sign}(Q^{\\top}\\psi_0)) \\notag\\\\\n& = \\mathbb{E}\\left[\\left|F_{Q_1 | \\tilde{Q}}\\left(-\\tilde{Q}^{\\top}\\tilde{\\psi}\\right) - F_{Q_1 | \\tilde{Q}}\\left(-\\tilde{Q}^{\\top}\\tilde{\\psi}_0\\right)\\right|\\right] \\\\\n& \\ge \\mathbb{E}\\left[\\left|F_{Q_1 | \\tilde{Q}}\\left(-\\tilde{Q}^{\\top}\\tilde{\\psi}\\right) - F_{Q_1 | \\tilde{Q}}\\left(-\\tilde{Q}^{\\top}\\tilde{\\psi}_0\\right)\\right|\\mathds{1}\\left(\\left|\\tilde{Q}^{\\top}\\tilde{\\psi}\\right| \\vee \\left| \\tilde{Q}^{\\top}\\tilde{\\psi}_0\\right| \\le \\delta\\right)\\right] \\hspace{0.2in} [\\delta \\ \\text{as in Assumption \\ref{as:differentiability}}]\\\\\n& \\ge \\mathbb{E}\\left[\\left|F_{Q_1 | \\tilde{Q}}\\left(-\\tilde{Q}^{\\top}\\tilde{\\psi}\\right) - F_{Q_1 | \\tilde{Q}}\\left(-\\tilde{Q}^{\\top}\\tilde{\\psi}_0\\right)\\right|\\mathds{1}\\left(\\|\\tilde{Q}\\| \\le \\delta\/\\psi_{\\max}\\right)\\right] \\\\\n& \\ge t \\mathbb{E}\\left[\\left| \\tilde{Q}^{\\top}(\\psi - \\psi_0)\\right| \\mathds{1}\\left(\\|\\tilde{Q}\\| \\le \\delta\/\\psi_{\\max}\\right)\\right] \\hspace{0.2in} [t \\ \\text{as in Assumption \\ref{as:differentiability}}]\\\\\n& = t \\|\\psi - \\psi_0\\| \\,\\mathbb{E}\\left[\\left| \\tilde{Q}^{\\top}\\frac{(\\psi - \\psi_0)}{\\|\\psi - \\psi_0\\|}\\right| \\mathds{1}\\left(\\|\\tilde{Q}\\| \\le \\delta\/\\psi_{\\max}\\right)\\right] \\\\\n& \\ge t\\|\\psi - \\psi_0\\| \\inf_{\\gamma \\in S^{p-1}}\\mathbb{E}\\left[\\left| \\tilde{Q}^{\\top}\\gamma\\right| \\mathds{1}\\left(\\|\\tilde{Q}\\| \\le \\delta\/\\psi_{\\max}\\right)\\right] \\\\\n& = u_-\\|\\psi - \\psi_0\\| \\,.\n\\end{align*} \nAt the very end, we have used the fact that $$\\inf_{\\gamma \\in S^{p-1}}\\mathbb{E}\\left[\\left| \\tilde{Q}^{\\top}\\gamma\\right| \\mathds{1}\\left(\\|\\tilde{Q}\\| \\le \\delta\/\\psi_{\\max}\\right)\\right] > 0 \\,.$$ To prove this, assume that the infimum is 0. 
Then, there exists $\\gamma_0 \\in S^{p-1}$ such that \n$$\\mathbb{E}\\left[\\left| \\tilde{Q}^{\\top}\\gamma_0\\right| \\mathds{1}\\left(\\|\\tilde{Q}\\| \\le \\delta\/\\psi_{\\max}\\right)\\right] = 0 \\,,$$ \nas the function above is continuous in $\\gamma$ and any continuous function on a compact set attains its infimum. Hence, $\\left|\\tilde{Q}^{\\top}\\gamma_0 \\right| = 0$ almost surely on the set $\\{\\|\\tilde{Q}\\| \\le \\delta\/\\psi_{\\max}\\}$, which implies that $\\tilde{Q}$ does not have full support, violating Assumption \\ref{as:distribution} (2). This gives a contradiction.\n\\\\\\\\\n\\noindent\nEstablishing the upper bound is relatively easy. Going back to equation \\eqref{wedge_expression}, we have: \n\\begin{align*}\n& \\P(\\text{sign}(Q^{\\top}\\psi) \\neq \\text{sign}(Q^{\\top}\\psi_0)) \\notag\\\\\n& = \\mathbb{E}\\left[\\left|F_{Q_1 | \\tilde{Q}}\\left(-\\tilde{Q}^{\\top}\\tilde{\\psi}\\right) - F_{Q_1 | \\tilde{Q}}\\left(-\\tilde{Q}^{\\top}\\tilde{\\psi}_0\\right)\\right|\\right] \\\\\n& \\le \\mathbb{E}\\left[m(\\tilde Q) \\, \\|Q\\| \\,\\|\\psi- \\psi_0\\|\\right] \\hspace{0.2in} [m(\\cdot) \\ \\text{is defined in Assumption \\ref{as:density_bound}}]\\\\\n& \\le u_+ \\|\\psi - \\psi_0\\| \\,,\n\\end{align*}\nas $ \\mathbb{E}\\left[m(\\tilde Q) \\|Q\\|\\right] < \\infty$ by Assumption \\ref{as:density_bound} and the sub-Gaussianity of $\\tilde Q$. \n\\end{proof}\n\n\n\n\\subsection{Proof of Lemma \\ref{asymp-normality}}\n\\begin{proof}\nWe first prove that under our assumptions $\\sigma_n^{-1} \\mathbb{E}(T_n(\\psi_0)) \\overset{n \\to \\infty}\\longrightarrow A$ where $$A = -\\frac{\\beta_0 - \\alpha_0}{2}\\left[\\int_{-\\infty}^{\\infty} K'\\left(t\\right)|t| \\ dt \\right] \\int_{\\mathbb{R}^{p-1}}\\tilde{Q}f_0'(0 | \\tilde{Q}) \\ dP(\\tilde{Q})$$ The proof is based on a Taylor expansion of the conditional density: \n\\allowdisplaybreaks\n\\begin{align*}\n& \\sigma_n^{-1} \\mathbb{E}(T_n(\\psi_0)) \\\\\n& = -\\sigma_n^{-2}\\mathbb{E}\\left((Y - \\gamma)K'\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n}\\right)\\tilde{Q}\\right) \\\\\n& = -\\frac{\\beta_0 - \\alpha_0}{2}\\sigma_n^{-2}\\mathbb{E}\\left(K'\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n}\\right)\\tilde{Q}(\\mathds{1}(Q^{\\top}\\psi_0 \\ge 0) - \\mathds{1}(Q^{\\top}\\psi_0 \\le 0))\\right) \\\\\n& = -\\frac{\\beta_0 - \\alpha_0}{2}\\sigma_n^{-2}\\int_{\\mathbb{R}^{p-1}}\\tilde{Q}\\left[\\int_{0}^{\\infty} K'\\left(\\frac{z}{\\sigma_n}\\right)f_0(z|\\tilde{Q}) \\ dz - \\int_{-\\infty}^{0} K'\\left(\\frac{z}{\\sigma_n}\\right)f_0(z|\\tilde{Q}) \\ dz \\right] \\ dP(\\tilde{Q}) \\\\\n& = -\\frac{\\beta_0 - \\alpha_0}{2}\\sigma_n^{-1}\\int_{\\mathbb{R}^{p-1}}\\tilde{Q}\\left[\\int_{0}^{\\infty} K'\\left(t\\right)f_0(\\sigma_n t|\\tilde{Q}) \\ dt - \\int_{-\\infty}^{0} K'\\left(t\\right)f_0(\\sigma_n t |\\tilde{Q}) \\ dt \\right] \\ dP(\\tilde{Q}) \\\\\n& = -\\frac{\\beta_0 - \\alpha_0}{2}\\sigma_n^{-1}\\left[\\int_{\\mathbb{R}^{p-1}}\\tilde{Q}\\left[\\int_{0}^{\\infty} K'\\left(t\\right)f_0(0|\\tilde{Q}) \\ dt - \\int_{-\\infty}^{0} K'\\left(t\\right)f_0(0 |\\tilde{Q}) \\ dt \\right] \\ dP(\\tilde{Q}) \\right. \\\\ \n& \\qquad \\qquad \\qquad + \\left. 
\\int_{\\mathbb{R}^{p-1}}\\sigma_n \\tilde{Q}\\left[\\int_{0}^{\\infty} K'\\left(t\\right)tf_0'(\\lambda \\sigma_n t|\\tilde{Q}) \\ dt - \\int_{-\\infty}^{0} K'\\left(t\\right) t f_0'(\\lambda \\sigma_n t |\\tilde{Q}) \\ dt \\right] \\ dP(\\tilde{Q}) \\right] \\hspace{0.2in} [0 < \\lambda < 1]\\\\ \n& = -\\frac{\\beta_0 - \\alpha_0}{2}\\int_{\\mathbb{R}^{p-1}}\\tilde{Q}\\left[\\int_{0}^{\\infty} K'\\left(t\\right)tf_0'(\\lambda \\sigma_n t|\\tilde{Q}) \\ dt - \\int_{-\\infty}^{0} K'\\left(t\\right)tf_0'(\\lambda \\sigma_nt |\\tilde{Q}) \\ dt \\right] \\ dP(\\tilde{Q}) \\hspace{0.2in} \\left[\\because \\int_{0}^{\\infty} K'(t) \\ dt = \\int_{-\\infty}^{0} K'(t) \\ dt = \\tfrac12 \\ \\text{by symmetry of} \\ K\\right]\\\\\n& \\underset{n \\rightarrow \\infty} \\longrightarrow -\\frac{\\beta_0 - \\alpha_0}{2}\\left[\\int_{-\\infty}^{\\infty} K'\\left(t\\right)|t| \\ dt \\right] \\int_{\\mathbb{R}^{p-1}}\\tilde{Q}f_0'(0 | \\tilde{Q}) \\ dP(\\tilde{Q})\n\\end{align*}\nNext, we prove that $\\mbox{Var}\\left(\\sqrt{n\\sigma_n}T_n(\\psi_0)\\right)\\longrightarrow \\Sigma$ as $n \\rightarrow \\infty$, where $\\Sigma$ is as defined in Lemma \\ref{asymp-normality}. Note that: \n\\allowdisplaybreaks\n\\begin{align*}\n\\mbox{Var}\\left(\\sqrt{n\\sigma_n}T_n(\\psi_0)\\right) & = \\sigma_n \\mathbb{E}\\left((Y - \\gamma)^2\\left(K'\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n}\\right)\\right)^2\\frac{\\tilde{Q}\\tilde{Q}^{\\top}}{\\sigma_n^2}\\right) - \\sigma_n \\mathbb{E}(T_n(\\psi_0))\\mathbb{E}(T_n(\\psi_0))^{\\top}\n\\end{align*}\nAs $\\sigma_n^{-1}\\mathbb{E}(T_n(\\psi_0)) \\rightarrow A$, we can conclude that $\\sigma_n \\mathbb{E}(T_n(\\psi_0))\\mathbb{E}(T_n(\\psi_0))^{\\top} \\rightarrow 0$. \nDefine $a_1 = (1 - \\gamma)^2 \\alpha_0 + \\gamma^2 (1-\\alpha_0), a_2 = (1 - \\gamma)^2 \\beta_0 + \\gamma^2 (1-\\beta_0)$. For the first summand: \n\\allowdisplaybreaks\n\\begin{align*}\n& \\sigma_n \\mathbb{E}\\left((Y - \\gamma)^2\\left(K'\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n}\\right)\\right)^2\\frac{\\tilde{Q}\\tilde{Q}^{\\top}}{\\sigma_n^2}\\right) \\\\\n& = \\frac{1}{\\sigma_n} \\int_{\\mathbb{R}^{p-1}}\\tilde{Q}\\tilde{Q}^{\\top} \\left[a_1 \\int_{-\\infty}^{0} \\left(K'\\left(\\frac{z}{\\sigma_n}\\right)\\right)^2 f_0(z|\\tilde{Q}) \\ dz \\right. \\notag \\\\ & \\left. \\qquad \\qquad \\qquad + a_2 \\int_{0}^{\\infty}\\left(K'\\left(\\frac{z}{\\sigma_n}\\right)\\right)^2 f_0(z|\\tilde{Q}) \\ dz \\right] \\ dP(\\tilde{Q})\\\\\n& = \\int_{\\mathbb{R}^{p-1}}\\tilde{Q}\\tilde{Q}^{\\top} \\left[a_1 \\int_{-\\infty}^{0} \\left(K'\\left(t\\right)\\right)^2f_0(\\sigma_n t|\\tilde{Q}) \\ dt + a_2 \\int_{0}^{\\infty} \\left(K'\\left(t\\right)\\right)^2 f_0(\\sigma_n t |\\tilde{Q}) \\ dt \\right] \\ dP(\\tilde{Q}) \\\\\n& \\underset{n \\rightarrow \\infty} \\longrightarrow \\left[a_1 \\int_{-\\infty}^{0} \\left(K'\\left(t\\right)\\right)^2 \\ dt + a_2 \\int_{0}^{\\infty} \\left(K'\\left(t\\right)\\right)^2 \\ dt \\right]\\int_{\\mathbb{R}^{p-1}}\\tilde{Q}\\tilde{Q}^{\\top} f_0(0|\\tilde{Q}) \\ dP(\\tilde{Q}) \\ \\ \\overset{\\Delta} = \\Sigma \\, . \n\\end{align*}\nFinally, suppose $n \\sigma_n^{3} \\rightarrow \\lambda$. Define $W_n = \\sqrt{n\\sigma_n}\\left[T_n(\\psi_0) - \\mathbb{E}(T_n(\\psi_0))\\right]$. Using Lemma 6 of Horowitz \\cite{horowitz1992smoothed}, it is easily established that $W_n \\Rightarrow \\mathcal{N}(0, \\Sigma)$. 
Also, we have: \n\\allowdisplaybreaks \n\\begin{align*}\n\\sqrt{n\\sigma_n}\\mathbb{E}(T_n(\\psi_0)) = \\sqrt{n\\sigma_n^{3}}\\,\\sigma_n^{-1}\\mathbb{E}(T_n(\\psi_0)) & \\rightarrow \\sqrt{\\lambda}A = \\mu\n\\end{align*}\nAs $\\sqrt{n\\sigma_n}T_n(\\psi_0) = W_n + \\sqrt{n\\sigma_n}\\mathbb{E}(T_n(\\psi_0))$, we conclude that $\\sqrt{n\\sigma_n} T_n(\\psi_0) \\Rightarrow \\mathcal{N}(\\mu, \\Sigma)$.\n\\end{proof}\n\\subsection{Proof of Lemma \\ref{conv-prob}}\n\\begin{proof}\nLet $\\epsilon_n \\downarrow 0$ be a sequence such that $\\P(\\|\\breve{\\psi}_n - \\psi_0\\| \\le \\epsilon_n \\sigma_n) \\rightarrow 1$. Define $\\Psi_n = \\{\\psi: \\|\\psi - \\psi_0\\| \\le \\epsilon_n \\sigma_n\\}$. We show that $$\\sup_{\\psi \\in \\Psi_n} \\|\\sigma_n Q_n(\\psi) - Q\\|_F \\overset{P} \\to 0$$ where $\\|\\cdot\\|_F$ denotes the Frobenius norm of a matrix. Sometimes, we omit the subscript $F$ when there is no ambiguity. Define $\\mathcal{G}_n$ to be the collection of functions: \n$$\n\\mathcal{G}_n= \\left\\{g_{\\psi}(y, q) = -\\frac{1}{\\sigma_n}(y - \\gamma)\\tilde q\\tilde q^{\\top} \\left(K''\\left(\\frac{q^{\\top}\\psi}{\\sigma_n}\\right) - K''\\left(\\frac{q^{\\top}\\psi_0}{\\sigma_n}\\right)\\right), \\psi \\in \\Psi_n \\right\\}\n$$\nThat the function class $\\mathcal{G}_n$ has bounded uniform entropy integral (BUEI) is immediate from the fact that the class of functions $\\{Q \\mapsto Q^{\\top}\\psi\\}$ has finite VC dimension (as hyperplanes have finite VC dimension), and the VC dimension does not change upon constant scaling. Therefore $\\{Q \\mapsto Q^{\\top}\\psi\/\\sigma_n\\}$ also has finite VC dimension, which does not depend on $n$, and is hence BUEI. As composition with a monotone function, multiplication with constant (parameter-free) functions, and multiplication of two BUEI classes of functions all preserve the BUEI property, we conclude that $\\mathcal{G}_n$ is BUEI. \nWe first expand the expression in two terms:\n\\allowdisplaybreaks\n\\begin{align*}\n\\sup_{\\psi \\in \\Psi_n} \\|\\sigma_n Q_n(\\psi) - Q\\| & \\le \\sup_{\\psi \\in \\Psi_n} \\|\\sigma_n Q_n(\\psi) - \\mathbb{E}(\\sigma_n Q_n(\\psi))\\| + \\sup_{\\psi \\in \\Psi_n} \\| \\mathbb{E}(\\sigma_n Q_n(\\psi)) - Q\\| \\\\ \n& = \\|(\\mathbb{P}_n - P)\\|_{\\mathcal{G}_n} + \\sup_{\\psi \\in \\Psi_n}\\| \\mathbb{E}(\\sigma_n Q_n(\\psi)) - Q\\| \\\\\n& = T_{1,n} + T_{2,n} \\hspace{0.3in} [\\text{say}] \\,.\n\\end{align*}\n\n\n\\vspace{0.2in}\n\\noindent\nThat $T_{1,n} \\overset{P} \\to 0$ follows from the uniform law of large numbers for a BUEI class (e.g. combining Theorem 2.4.1 and Theorem 2.6.7 of \\cite{vdvw96}). \nFor uniform convergence of the second summand $T_{2,n}$, define $\\chi_n = \\{\\tilde{Q}: \\|\\tilde{Q}\\| \\le 1\/\\sqrt{\\epsilon_n}\\}$. Then $\\chi_n \\uparrow \\mathbb{R}^{p-1}$. Also, for any $\\psi \\in \\Psi_n$, if we define $\\gamma_n \\equiv \\gamma_n(\\psi) = (\\psi - \\psi_0)\/\\sigma_n$, then $|\\gamma_n^{\\top}\\tilde{Q}| \\le \\sqrt{\\epsilon_n}$ for all $n$ and for all $\\psi \\in \\Psi_n, \\tilde{Q} \\in \\chi_n$. 
Now, \n\\allowdisplaybreaks\n\\begin{align*}\n& \\sup_{\\psi \\in \\Psi_n}\\| \\mathbb{E}(\\sigma_n Q_n(\\psi)) - Q\\| \\notag \\\\\n&\\qquad \\qquad = \\sup_{\\psi \\in \\Psi_n}\\| (\\mathbb{E}(\\sigma_n Q_n(\\psi)\\mathds{1}(\\chi_n))-Q_1) + (\\mathbb{E}(\\sigma_n Q_n(\\psi)\\mathds{1}(\\chi_n^c))-Q_2)\\|\n\\end{align*}\nwhere $$Q_1 = \\frac{\\beta_0 - \\alpha_0}{2}\\left(\\int_{-\\infty}^{\\infty} -K''\\left(t \\right)\\text{sign}(t) \\ dt\\right) \\ \\mathbb{E}\\left(\\tilde{Q}\\tilde{Q}^{\\top} f_0(0 |\\tilde{Q})\\mathds{1}(\\chi_n) \\right)$$ $$Q_2 = \\frac{\\beta_0 - \\alpha_0}{2}\\left(\\int_{-\\infty}^{\\infty} -K''\\left(t \\right)\\text{sign}(t) \\ dt\\right) \\ \\mathbb{E}\\left(\\tilde{Q}\\tilde{Q}^{\\top} f_0(0 |\\tilde{Q})\\mathds{1}(\\chi_n^c) \\right) \\,.$$\nNote that \n\\allowdisplaybreaks\n\\begin{flalign}\n& \\|\\mathbb{E}(\\sigma_n Q_n(\\psi)\\mathds{1}(\\chi_n)) - Q_1\\| \\notag\\\\\n& =\\left\\| \\frac{\\beta_0 - \\alpha_0}{2}\\left[\\int_{\\chi_n} \\tilde{Q}\\tilde{Q}^{\\top} \\left[\\int_{-\\infty}^{\\tilde{Q}^{\\top}\\gamma_n} K''\\left(t \\right) f_0(\\sigma_n (t-\\tilde{Q}^{\\top}\\gamma_n) |\\tilde{Q}) \\ dt \\right. \\right. \\right. \\notag \\\\\n& \\left. \\left. \\left. \\qquad \\qquad - \\int_{\\tilde{Q}^{\\top}\\gamma_n}^{\\infty} K''\\left(t\\right) f_0(\\sigma_n (t - \\tilde{Q}^{\\top}\\gamma_n) | \\tilde{Q}) \\ dt \\right]dP(\\tilde{Q})\\right]\\right. \\notag\\\\ & \\left. \\qquad \\qquad \\qquad - \\frac{\\beta_0 - \\alpha_0}{2}\\left[\\int_{\\chi_n} \\tilde{Q}\\tilde{Q}^{\\top} f_0(0 |\\tilde{Q})\\left[\\int_{-\\infty}^{0} K''\\left(t \\right) \\ dt - \\int_{0}^{\\infty} K''\\left(t\\right) \\ dt \\right]dP(\\tilde{Q})\\right] \\right \\|\\notag\\\\\n& =\\left \\| \\frac{\\beta_0 - \\alpha_0}{2}\\left[\\int_{\\chi_n} \\tilde{Q}\\tilde{Q}^{\\top} \\left[\\int_{-\\infty}^{\\tilde{Q}^{\\top}\\gamma_n} K''\\left(t \\right) (f_0(\\sigma_n (t-\\tilde{Q}^{\\top}\\gamma_n) |\\tilde{Q})-f_0(0 | \\tilde{Q})) \\ dt \\right. \\right. \\right.\\notag\\\\& \\qquad \\qquad- \\left. \\left. \\left. \\int_{\\tilde{Q}^{\\top}\\gamma_n}^{\\infty} K''\\left(t\\right) (f_0(\\sigma_n (t - \\tilde{Q}^{\\top}\\gamma_n) | \\tilde{Q}) - f_0(0 | \\tilde{Q})) \\ dt \\right]dP(\\tilde{Q})\\right]\\right. \\notag\\\\ & \\qquad \\qquad \\qquad + \\left. \\frac{\\beta_0 - \\alpha_0}{2}\\left[\\int_{\\chi_n} \\tilde{Q}\\tilde{Q}^{\\top} f_0(0 |\\tilde{Q}) \\left[\\int_{-\\infty}^{\\tilde{Q}^{\\top}\\gamma_n} K''\\left(t \\right) \\ dt - \\int_{-\\infty}^{0} K''\\left(t \\right) \\ dt \\right. \\right. \\right. \\notag \\\\ \n& \\qquad \\qquad \\qquad \\qquad \\left. \\left. \\left. + \\int_{\\tilde{Q}^{\\top}\\gamma_n}^{\\infty} K''\\left(t \\right) \\ dt - \\int_{0}^{\\infty} K''\\left(t\\right) \\ dt \\right]dP(\\tilde{Q})\\right] \\right \\|\\notag\\\\\n& \\le \\frac{\\beta_0 - \\alpha_0}{2}\\sigma_n \\int_{\\chi_n}\\|\\tilde{Q}\\tilde{Q}^{\\top}\\|h(\\tilde{Q})\\int_{-\\infty}^{\\infty}|K''(t)||t - \\gamma_n^{\\top}\\tilde{Q}| \\ dt \\ dP(\\tilde{Q}) \\notag\\\\ & \\qquad \\qquad + \\frac{\\beta_0 - \\alpha_0}{2} \\int_{\\chi_n}\\|\\tilde{Q}\\tilde{Q}^{\\top}\\| f_0(0 | \\tilde{Q}) \\left[\\left| \\int_{-\\infty}^{\\tilde{Q}^{\\top}\\gamma_n} K''\\left(t \\right) \\ dt - \\int_{-\\infty}^{0} K''\\left(t \\right) \\ dt \\right| \\right. \\notag \\\\ & \\left. 
\\qquad \\qquad \\qquad + \\left| \\int_{\\tilde{Q}^{\\top}\\gamma_n}^{\\infty} K''\\left(t \\right) \\ dt - \\int_{0}^{\\infty} K''\\left(t\\right) \\ dt \\right|\\right] \\ dP(\\tilde{Q})\\notag\\\\\n& \\le \\frac{\\beta_0 - \\alpha_0}{2}\\left[\\sigma_n \\int_{\\chi_n}\\|\\tilde{Q}\\tilde{Q}^{\\top}\\|h(\\tilde{Q})\\int_{-\\infty}^{\\infty}|K''(t)||t - \\gamma_n^{\\top}\\tilde{Q}| \\ dt \\ dP(\\tilde{Q}) \\right. \\notag \\\\ \n& \\left. \\qquad \\qquad \\qquad + 2\\int_{\\chi_n}\\|\\tilde{Q}\\tilde{Q}^{\\top}\\| f_0(0 | \\tilde{Q}) (K'(0) - K'(\\gamma_n^{\\top}\\tilde{Q})) \\ dP(\\tilde{Q})\\right]\\notag \\\\\n\\label{cp1}&\\rightarrow 0 \\hspace{0.3in} [\\text{as} \\ n \\rightarrow \\infty] \\,,\n\\end{flalign}\nby DCT and Assumptions \\ref{as:distribution} and \\ref{as:derivative_bound}. For the second part: \n\\allowdisplaybreaks\n\\begin{align}\n& \\|\\mathbb{E}(\\sigma_n Q_n(\\psi)\\mathds{1}(\\chi_n^c)) - Q_2\\|\\notag\\\\\n& =\\left\\| \\frac{\\beta_0 - \\alpha_0}{2}\\left[\\int_{\\chi_n^c} \\tilde{Q}\\tilde{Q}^{\\top} \\left[\\int_{-\\infty}^{\\tilde{Q}^{\\top}\\gamma_n} K''\\left(t \\right) f_0(\\sigma_n (t-\\tilde{Q}^{\\top}\\gamma_n) |\\tilde{Q}) \\ dt \\right. \\right. \\right. \\notag \\\\ \n& \\left. \\left. \\left. \\qquad \\qquad - \\int_{\\tilde{Q}^{\\top}\\gamma_n}^{\\infty} K''\\left(t\\right) f_0(\\sigma_n (t - \\tilde{Q}^{\\top}\\gamma_n) | \\tilde{Q}) \\ dt \\right]dP(\\tilde{Q})\\right]\\right. \\notag\\\\ & \\left. \\qquad \\qquad \\qquad -\\frac{\\beta_0 - \\alpha_0}{2}\\left[\\int_{\\chi_n^c} \\tilde{Q}\\tilde{Q}^{\\top} f_0(0 |\\tilde{Q})\\left[\\int_{-\\infty}^{0} K''\\left(t \\right) \\ dt - \\int_{0}^{\\infty} K''\\left(t\\right) \\ dt \\right]dP(\\tilde{Q})\\right] \\right \\|\\notag\\\\\n& \\le \\frac{\\beta_0 - \\alpha_0}{2} \\int_{-\\infty}^{\\infty} |K''(t)| \\ dt \\int_{\\chi_n^c} \\|\\tilde{Q}\\tilde{Q}^{\\top}\\|(m(\\tilde{Q}) + f_0(0|\\tilde{Q})) \\ dP(\\tilde{Q}) \\notag\\\\\n\\label{cp2} & \\rightarrow 0 \\hspace{0.3in} [\\text{as} \\ n \\rightarrow \\infty] \\,,\n\\end{align}\nagain by DCT and Assumptions \\ref{as:distribution} and \\ref{as:density_bound}. Combining equations \\eqref{cp1} and \\eqref{cp2}, we conclude the proof. \n\\end{proof}\n\n\n\\subsection{Proof of Lemma \\ref{bandwidth}}\nHere we prove that $\\|\\psi^s_0 - \\psi_0\\|\/\\sigma_n \\rightarrow 0$, where $\\psi^s_0$ is the minimizer of $\\mathbb{M}^s(\\psi)$ and $\\psi_0$ is the minimizer of $\\mathbb{M}(\\psi)$. \n\\begin{proof}\nDefine $\\tilde \\eta = (\\tilde \\psi^s_0 - \\tilde \\psi_0)\/\\sigma_n$. We first show that $\\|\\tilde \\eta\\|_2$ is $O(1)$, i.e. 
there exists some constant $\\Omega_1$ such that $\\|\\tilde \\eta\\|_2 \\le \\Omega_1$ for all $n$: \n\\begin{align*}\n\\|\\psi^s_0 - \\psi_0\\|_2 & \\le \\frac{1}{u_-} \\left(\\mathbb{M}(\\psi^s_0) - \\mathbb{M}(\\psi_0)\\right) \\hspace{0.2in} [\\text{follows from Lemma} \\ \\ref{lem:linear_curvature}]\\\\\n& \\le \\frac{1}{u_-} \\left(\\mathbb{M}(\\psi^s_0) - \\mathbb{M}^s(\\psi^s_0) + \\mathbb{M}^s(\\psi^s_0) - \\mathbb{M}^s(\\psi_0) + \\mathbb{M}^s(\\psi_0) - \\mathbb{M}(\\psi_0)\\right) \\\\\n& \\le \\frac{1}{u_-} \\left(\\mathbb{M}(\\psi^s_0) - \\mathbb{M}^s(\\psi^s_0) + \\mathbb{M}^s(\\psi_0) - \\mathbb{M}(\\psi_0)\\right) \\hspace{0.2in} [\\because \\mathbb{M}^s(\\psi^s_0) - \\mathbb{M}^s(\\psi_0) \\le 0]\\\\\n& \\le \\frac{2K_1}{u_-}\\sigma_n \\hspace{0.2in} [\\text{from equation} \\ \\eqref{eq:lin_bound_1}]\n\\end{align*}\n\n\\noindent\nAs $\\psi^s_0$ minimizes $\\mathbb{M}^s(\\psi)$: \n$$\\nabla \\mathbb{M}^s(\\psi^s_0) = -\\frac{1}{\\sigma_n}\\mathbb{E}\\left((Y-\\gamma)\\tilde{Q}K'\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\right) = 0$$\nHence:\n\\begin{align*}\n0 &= \\mathbb{E}\\left((Y-\\gamma)\\tilde{Q}K'\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\right) \\\\\n& = \\frac{(\\beta_0 - \\alpha_0)}{2} \\mathbb{E}\\left(\\tilde{Q}K'\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right)\\left\\{\\mathds{1}(Q^{\\top}\\psi_0 \\ge 0) -\\mathds{1}(Q^{\\top}\\psi_0 < 0)\\right\\}\\right) \\\\\n& = \\frac{(\\beta_0 - \\alpha_0)}{2} \\mathbb{E}\\left(\\tilde{Q}K'\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n} + \\tilde{\\eta}^{\\top} \\tilde{Q}\\right)\\left\\{\\mathds{1}(Q^{\\top}\\psi_0 \\ge 0) -\\mathds{1}(Q^{\\top}\\psi_0 < 0)\\right\\}\\right) \\\\\n& = \\frac{(\\beta_0 - \\alpha_0)}{2} \\left[\\int_{\\mathbb{R}^{p-1}}\\tilde{Q} \\int_0^{\\infty} K'\\left(\\frac{z}{\\sigma_n} + \\tilde{\\eta}^{\\top} \\tilde{Q}\\right) \\ f_0(z|\\tilde{Q}) \\ dz \\ dP(\\tilde{Q})\\right. \\\\\n& \\qquad \\qquad \\qquad \\qquad \\qquad \\left. - \\int_{\\mathbb{R}^{p-1}}\\tilde{Q} \\int_{-\\infty}^0 K'\\left(\\frac{z}{\\sigma_n} + \\tilde{\\eta}^{\\top} \\tilde{Q}\\right) \\ f_0(z|\\tilde{Q}) \\ dz \\ dP(\\tilde{Q})\\right] \\\\\n& =\\sigma_n \\frac{(\\beta_0 - \\alpha_0)}{2} \\left[\\int_{\\mathbb{R}^{p-1}}\\tilde{Q} \\int_0^{\\infty} K'\\left(t + \\tilde{\\eta}^{\\top} \\tilde{Q}\\right) \\ f_0(\\sigma_n t|\\tilde{Q}) \\ dt \\ dP(\\tilde{Q})\\right. \\\\\n& \\qquad \\qquad \\qquad \\qquad \\qquad \\left. - \\int_{\\mathbb{R}^{p-1}}\\tilde{Q} \\int_{-\\infty}^0 K'\\left(t + \\tilde{\\eta}^{\\top} \\tilde{Q}\\right) \\ f_0(\\sigma_n t|\\tilde{Q}) \\ dt \\ dP(\\tilde{Q})\\right] \n\\end{align*}\nAs $\\sigma_n\\frac{(\\beta_0 - \\alpha_0)}{2} > 0$, we may divide both sides by it and continue. Also, as we have proved that $\\|\\tilde \\eta\\| = O(1)$, there exists a subsequence $\\tilde \\eta_{n_k}$ and a point $c \\in \\mathbb{R}^{p-1}$ such that $\\tilde \\eta_{n_k} \\rightarrow c$. Along that sub-sequence we have: \n\\begin{align*}\n0 & = \\left[\\int_{\\mathbb{R}^{p-1}}\\tilde{Q} \\int_0^{\\infty} K'\\left(t + \\tilde{\\eta}_{n_k}^{\\top} \\tilde{Q}\\right) \\ f_0(\\sigma_{n_k} t|\\tilde{Q}) \\ dt \\ dP(\\tilde{Q})\\right. \\\\\n& \\qquad \\qquad \\qquad \\qquad \\qquad \\left. 
- \\int_{\\mathbb{R}^{p-1}}\\tilde{Q} \\int_{-\\infty}^0 K'\\left(t + \\tilde{\\eta}_{n_k}^{\\top} \\tilde{Q}\\right) \\ f_0(\\sigma_{n_k} t|\\tilde{Q}) \\ dt \\ dP(\\tilde{Q})\\right] \n\\end{align*}\nTaking limits on both sides and applying DCT (which is permissible as $K'$ is bounded and the conditional density $f_0(\\cdot \\mid \\tilde Q)$ is uniformly bounded), we conclude: \n\\begin{align*}\n0 & = \\left[\\int_{\\mathbb{R}^{p-1}}\\tilde{Q} \\int_0^{\\infty} K'\\left(t +c^{\\top} \\tilde{Q}\\right) \\ f_0(0|\\tilde{Q}) \\ dt \\ dP(\\tilde{Q})\\right. \\\\\n& \\qquad \\qquad \\qquad \\qquad \\qquad \\left. - \\int_{\\mathbb{R}^{p-1}}\\tilde{Q} \\int_{-\\infty}^0 K'\\left(t + c^{\\top} \\tilde{Q}\\right) \\ f_0(0|\\tilde{Q}) \\ dt \\ dP(\\tilde{Q})\\right] \\\\\n& = \\left[\\int_{\\mathbb{R}^{p-1}}\\tilde{Q} \\ f_0(0|\\tilde{Q}) \\int_{c^{\\top} \\tilde{Q}}^{\\infty} K'\\left(t\\right) \\ dt \\ dP(\\tilde{Q})\\right. \\\\\n& \\qquad \\qquad \\qquad \\qquad \\qquad \\left. - \\int_{\\mathbb{R}^{p-1}}\\tilde{Q}\\ f_0(0|\\tilde{Q}) \\int_{-\\infty}^{c^{\\top} \\tilde{Q}} K'\\left(t \\right) \\ dt \\ dP(\\tilde{Q})\\right] \\\\\n& = \\left[\\int_{\\mathbb{R}^{p-1}}\\tilde{Q} \\ f_0(0|\\tilde{Q}) \\left[1 - K(c^{\\top} \\tilde{Q})\\right] \\ dP(\\tilde{Q})\\right. \\\\\n& \\qquad \\qquad \\qquad \\qquad \\qquad\\left. - \\int_{\\mathbb{R}^{p-1}}\\tilde{Q}\\ f_0(0|\\tilde{Q}) K(c^{\\top} \\tilde{Q}) \\ dP(\\tilde{Q})\\right] \\\\\n& = -\\,\\mathbb{E}\\left(\\tilde{Q} \\left(2K(c^{\\top} \\tilde{Q}) - 1\\right)f_0(0|\\tilde{Q})\\right) \\,.\n\\end{align*}\nNow, taking the inner product of both sides with $c$, we get: \n\\begin{equation}\n\\label{eq:zero_eq}\n\\mathbb{E}\\left(c^{\\top}\\tilde{Q} \\left(2K(c^{\\top} \\tilde{Q}) - 1\\right)f_0(0|\\tilde{Q})\\right) = 0 \\,.\n\\end{equation}\nBy our assumption that $K$ is a symmetric distribution function with $K'(t) > 0$ for all $t \\in (-1, 1)$, we easily conclude that $c^{\\top}\\tilde{Q} \\left(2K(c^{\\top} \\tilde{Q}) - 1\\right) \\ge 0$ almost surely in $\\tilde{Q}$, with equality iff $c^{\\top}\\tilde{Q} = 0$, which is not possible (almost surely) unless $c = 0$. Hence we conclude that $c = 0$. This shows that any convergent subsequence of $\\tilde \\eta_n$ converges to $0$, which completes the proof. \n\\end{proof}\n\n\n\n\\subsection{Proof of Lemma \\ref{lem:rate}}\n\\begin{proof}\nTo obtain the rate of convergence of our kernel-smoothed estimator we use Theorem 3.4.1 of \\cite{vdvw96}. There are three key ingredients that one needs to take care of in order to apply this theorem: \n\\begin{enumerate}\n\\item Consistency of the estimator (otherwise the conditions of the theorem need to be valid over the entire parameter space). \n\\item The curvature of the population score function near its minimizer.\n\\item A bound on the modulus of continuity in a vicinity of the minimizer of the population score function. \n\\end{enumerate}\nBelow, we establish the curvature of the population score function (item 2 above) globally, thereby obviating the need to establish consistency separately. Recall that the population score function was defined as: \n$$\n\\mathbb{M}^s(\\psi) = \\mathbb{E}\\left((Y - \\gamma)\\left(1 - K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)\\right)\\right)\n$$ \nand our estimator $\\hat{\\psi}^s$ is the argmin of the corresponding sample version. Consider the set of functions $\\mathcal{H}_n = \\left\\{h_{\\psi}: h_{\\psi}(q,y) = (y - \\gamma)\\left(1 - K\\left(\\frac{q^{\\top}\\psi}{\\sigma_n}\\right)\\right)\\right\\}$. Next, we argue that $\\mathcal{H}_n$ is a VC class of functions with fixed VC dimension. 
We know that the class of functions $\\{(q,y) \\mapsto q^{\\top}\\psi\/\\sigma_n: \\psi \\in \\Psi\\}$ has fixed VC dimension (i.e. not depending on $n$). Now, as a finite dimensional VC class of functions composed with a fixed monotone function or multiplied by a fixed function still remains a finite dimensional VC class, we conclude that $\\mathcal{H}_n$ is a fixed dimensional VC class of functions with bounded envelope (as the functions considered here are bounded by 1). \n\nNow, we establish a lower bound on the curvature of the population score function $\\mathbb{M}^s(\\psi)$ near its minimizer $\\psi_0^s$: \n$$\n\\mathbb{M}^s(\\psi) - \\mathbb{M}^s(\\psi_0^s) \\gtrsim d^2_n(\\psi, \\psi_0^s)$$ where $$d_n(\\psi, \\psi_0^s) = \\sqrt{\\frac{\\|\\psi - \\psi_0^s\\|^2}{\\sigma_n} \\mathds{1}\\left(\\|\\psi - \\psi_0^s\\| \\le \\mathcal{K}\\sigma_n\\right) + \\|\\psi - \\psi_0^s\\|\\mathds{1}\\left(\\|\\psi - \\psi_0^s\\| > \\mathcal{K}\\sigma_n\\right)}\n$$ for some constant $\\mathcal{K} > 0$. The intuition behind this compound structure is the following: when $\\psi$ is in a $\\sigma_n$-neighborhood of $\\psi_0^s$, $\\mathbb{M}^s(\\psi)$ behaves like a smooth quadratic function, but when it is away from the truth, $\\mathbb{M}^s(\\psi)$ starts resembling $\\mathbb{M}(\\psi)$, which induces the linear curvature. \n\\\\\\\\\n\\noindent\nFor the linear part, we first establish that $|\\mathbb{M}(\\psi) - \\mathbb{M}^s(\\psi)| = O(\\sigma_n)$ uniformly for all $\\psi$. Define $\\tilde \\eta = (\\tilde \\psi - \\tilde \\psi_0)\/\\sigma_n$:\n\\allowdisplaybreaks\n\\begin{align}\n& |\\mathbb{M}(\\psi) - \\mathbb{M}^s(\\psi)| \\notag \\\\\n& \\le \\mathbb{E}\\left(\\left | \\mathds{1}(Q^{\\top}\\psi \\ge 0) - K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)\\right | \\right) \\notag\\\\\n& = \\mathbb{E}\\left(\\left | \\mathds{1}\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n} + \\tilde \\eta^{\\top}\\tilde{Q} \\ge 0\\right) - K\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n} + \\tilde \\eta^{\\top}\\tilde{Q}\\right)\\right | \\right) \\notag \\\\\n& = \\sigma_n \\int_{\\mathbb{R}^{p-1}} \\int_{-\\infty}^{\\infty} \\left | \\mathds{1}\\left(t + \\tilde \\eta^{\\top}\\tilde{Q} \\ge 0\\right) - K\\left(t + \\tilde \\eta^{\\top}\\tilde{Q}\\right)\\right | f_0(\\sigma_n t | \\tilde{Q}) \\ dt \\ dP(\\tilde{Q}) \\notag\\\\\n& = \\sigma_n \\int_{\\mathbb{R}^{p-1}} \\int_{-\\infty}^{\\infty} \\left | \\mathds{1}\\left(t \\ge 0\\right) - K\\left(t \\right)\\right | f_0(\\sigma_n (t-\\tilde \\eta^{\\top}\\tilde{Q}) | \\tilde{Q}) \\ dt \\ dP(\\tilde{Q}) \\notag \\\\\n& \\le \\sigma_n \\int_{\\mathbb{R}^{p-1}} m(\\tilde{Q})\\int_{-\\infty}^{\\infty} \\left | \\mathds{1}\\left(t \\ge 0\\right) - K\\left(t \\right)\\right | \\ dt \\ dP(\\tilde{Q}) \\notag\\\\\n& = \\sigma_n \\mathbb{E}(m(\\tilde{Q})) \\int_{-1}^{1} \\left | \\mathds{1}\\left(t \\ge 0\\right) - K\\left(t \\right)\\right | \\ dt \\notag \\\\\n\\label{eq:lin_bound_1} & = K_1 \\sigma_n \\,.\n\\end{align}\nHere the constant $K_1$ is $\\mathbb{E}(m(\\tilde{Q})) \\left[\\int_{-1}^{1}\\left | \\mathds{1}\\left(t \\ge 0\\right) - K\\left(t \\right)\\right | \\ dt \\right]$, which is finite by Assumption \\ref{as:density_bound} and does not depend on $\\psi$; hence the bound is uniform over $\\psi$. (The integrand vanishes outside $[-1, 1]$, as $K(t) = \\mathds{1}(t \\ge 0)$ for $|t| \\ge 1$.) 
Next: \n\\begin{align*}\n\\mathbb{M}^s(\\psi) - \\mathbb{M}^s(\\psi_0^s) & = \\mathbb{M}^s(\\psi) - \\mathbb{M}(\\psi) + \\mathbb{M}(\\psi) - \\mathbb{M}(\\psi_0) \\\\\n& \\qquad \\qquad + \\mathbb{M}(\\psi_0) - \\mathbb{M}(\\psi_0^s) + \\mathbb{M}(\\psi_0^s) -\\mathbb{M}^s(\\psi_0^s) \\\\ \n& = T_1 + T_2 + T_3 + T_4\n\\end{align*}\n\\noindent\nWe bound each summand separately: \n\\begin{enumerate}\n\\item $T_1 = \\mathbb{M}^s(\\psi) - \\mathbb{M}(\\psi) \\ge -K_1 \\sigma_n$ by equation \\eqref{eq:lin_bound_1};\n\\item $T_2 = \\mathbb{M}(\\psi) - \\mathbb{M}(\\psi_0) \\ge u_-\\|\\psi - \\psi_0\\|$ by Lemma \\ref{lem:linear_curvature};\n\\item $T_3 = \\mathbb{M}(\\psi_0) - \\mathbb{M}(\\psi_0^s) \\ge -u_+\\|\\psi_0^s - \\psi_0\\| \\ge -\\epsilon_1 \\sigma_n$, where $\\epsilon_1$ can be taken arbitrarily small for all large $n$, as we have established $\\|\\psi_0^s - \\psi_0\\|\/\\sigma_n \\rightarrow 0$. This follows from Lemma \\ref{lem:linear_curvature} along with Lemma \\ref{bandwidth};\n\\item $T_4 = \\mathbb{M}(\\psi_0^s) -\\mathbb{M}^s(\\psi_0^s) \\ge -K_1 \\sigma_n$ by equation \\eqref{eq:lin_bound_1}. \n\\end{enumerate}\nCombining, we have \n\\allowdisplaybreaks\n\\begin{align*}\n\\mathbb{M}^s(\\psi) - \\mathbb{M}^s(\\psi_0^s) & \\ge u_-\\|\\psi - \\psi_0\\| -(2K_1 + \\epsilon_1) \\sigma_n \\\\\n& \\ge ( u_-\/2)\\|\\psi - \\psi_0\\| \\hspace{0.2in} \\left[\\text{if} \\ \\|\\psi - \\psi_0\\| \\ge \\frac{2(2K_1 + \\epsilon_1)}{u_-}\\sigma_n\\right] \\\\\n& \\ge ( u_-\/4)\\|\\psi - \\psi_0^s\\| \n\\end{align*}\nwhere the last inequality holds for all large $n$, as proved in Lemma \\ref{bandwidth}. Using Lemma \\ref{bandwidth} again, we conclude that for any pair of positive constants $(\\epsilon_1, \\epsilon_2)$: \n$$\\|\\psi - \\psi_0^s\\| \\ge \\left(\\frac{2(2K_1 + \\epsilon_1)}{u_-}+\\epsilon_2\\right)\\sigma_n \\Rightarrow \\|\\psi - \\psi_0\\| \\ge \\frac{2(2K_1 + \\epsilon_1)}{u_-}\\sigma_n$$ for all large $n$, which implies: \n\\begin{align}\n& \\mathbb{M}^s(\\psi) - \\mathbb{M}^s(\\psi_0^s) \\notag \\\\\n& \\ge (u_-\/4) \\|\\psi - \\psi_0^s\\| \\mathds{1}\\left(\\|\\psi - \\psi_0^s\\| \\ge \\left(\\frac{2(2K_1 + \\epsilon_1)}{u_-}+\\epsilon_2\\right)\\sigma_n \\right) \\notag \\\\\n& \\ge (u_-\/4) \\|\\psi - \\psi_0^s\\| \\mathds{1}\\left(\\frac{\\|\\psi - \\psi_0^s\\|}{\\sigma_n} \\ge \\frac{7K_1}{u_-} \\right) \\hspace{0.2in} [\\text{for appropriate specifications of} \\ \\epsilon_1, \\epsilon_2] \\notag \\\\\n\\label{lb2} & := (u_-\/4) \\|\\psi - \\psi_0^s\\| \\mathds{1}\\left(\\frac{\\|\\psi - \\psi_0^s\\|}{\\sigma_n} \\ge \\mathcal{K} \\right) \\,, \\hspace{0.2in} \\text{with} \\ \\mathcal{K} := \\frac{7K_1}{u_-} \\,.\n\\end{align}\n\n\\noindent\nIn the next part, we find the lower bound when $\\|\\psi - \\psi_0^s\\| \\le \\mathcal{K} \\sigma_n$. For the quadratic curvature, we perform a two-step Taylor expansion. Define $\\tilde \\eta = (\\tilde \\psi - \\tilde \\psi_0)\/\\sigma_n$. 
We have: \n\\allowdisplaybreaks \n\\begin{align}\n& \\nabla^2\\mathbb{M}^s(\\psi) \\notag\\\\\n& = \\frac{\\beta_0 - \\alpha_0}{2}\\frac{1}{\\sigma_n^2} \\mathbb{E}\\left(\\tilde{Q}\\tilde{Q}^{\\top} K''\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)\\left\\{\\mathds{1}(Q^{\\top}\\psi_0 \\le 0) - \\mathds{1}(Q^{\\top}\\psi_0 \\ge 0)\\right\\}\\right) \\notag\\\\\n& = \\frac{\\beta_0 - \\alpha_0}{2}\\frac{1}{\\sigma_n^2} \\mathbb{E}\\left(\\tilde{Q}\\tilde{Q}^{\\top} K''\\left(\\frac{Q^{\\top}\\psi_0}{\\sigma_n} + \\tilde{Q}^{\\top}\\tilde \\eta \\right)\\left\\{\\mathds{1}(Q^{\\top}\\psi_0 \\le 0) - \\mathds{1}(Q^{\\top}\\psi_0 \\ge 0)\\right\\}\\right) \\notag\\\\\n& = \\frac{\\beta_0 - \\alpha_0}{2}\\frac{1}{\\sigma_n^2} \\mathbb{E}\\left[\\tilde{Q}\\tilde{Q}^{\\top} \\left[\\int_{-\\infty}^{0} K''\\left(\\frac{z}{\\sigma_n} + \\tilde{Q}^{\\top}\\tilde \\eta \\right) f_0(z |\\tilde{Q}) \\ dz \\right. \\right. \\notag \\\\ \n& \\left. \\left. \\qquad \\qquad \\qquad \\qquad -\\int_{0}^{\\infty} K''\\left(\\frac{z}{\\sigma_n} + \\tilde{Q}^{\\top}\\tilde \\eta \\right) f_0(z | \\tilde{Q}) \\ dz \\right]\\right] \\notag\\\\\n& = \\frac{\\beta_0 - \\alpha_0}{2}\\frac{1}{\\sigma_n} \\mathbb{E}\\left[\\tilde{Q}\\tilde{Q}^{\\top} \\left[\\int_{-\\infty}^{0} K''\\left(t+ \\tilde{Q}^{\\top}\\tilde \\eta \\right) f_0(\\sigma_n t |\\tilde{Q}) \\ dt \\right. \\right. \\notag \\\\\n& \\left. \\left. \\qquad \\qquad \\qquad \\qquad - \\int_{0}^{\\infty} K''\\left(t + \\tilde{Q}^{\\top}\\tilde \\eta \\right) f_0(\\sigma_n t | \\tilde{Q}) \\ dt \\right]\\right] \\notag\\\\\n& = \\frac{\\beta_0 - \\alpha_0}{2}\\frac{1}{\\sigma_n} \\mathbb{E}\\left[\\tilde{Q}\\tilde{Q}^{\\top} f_0(0| \\tilde{Q})\\left[\\int_{-\\infty}^{0} K''\\left(t+ \\tilde{Q}^{\\top}\\tilde \\eta \\right) \\ dt \\right. \\right. \\notag \\\\\n& \\left. \\left. \\qquad \\qquad \\qquad \\qquad - \\int_{0}^{\\infty} K''\\left(t + \\tilde{Q}^{\\top}\\tilde \\eta \\right) \\ dt \\right]\\right] + R \\notag\\\\\n\\label{eq:quad_eq_1} & =(\\beta_0 - \\alpha_0)\\frac{1}{\\sigma_n}\\mathbb{E}\\left[\\tilde{Q}\\tilde{Q}^{\\top} f_0(0| \\tilde{Q})K'(\\tilde{Q}^{\\top}\\tilde \\eta)\\right] + R \\,.\n\\end{align}\nAs we want a lower bound on the set $\\|\\psi - \\psi_0^s\\| \\le \\mathcal{K} \\sigma_n$, we have $\\|\\tilde \\eta\\| \\le \\mathcal{K}$ for all large $n$. For the rest of the analysis, define \n\\begin{align*}\n\\Lambda(v_1, v_2) = \\mathbb{E}_{\\tilde Q}\\left[|v_1^{\\top}\\tilde{Q}|^2 f_0(0|\\tilde{Q})K'(\\tilde{Q}^{\\top}v_2) \\right] \\,, \\qquad \\|v_1\\| = 1, \\ \\|v_2\\| \\le \\mathcal{K} \\,.\n\\end{align*}\nClearly $\\Lambda \\ge 0$ and is continuous on this compact set, hence its infimum over the set is attained. Suppose $\\Lambda(v_1, v_2) = 0$ for some $v_1, v_2$. Then we have: \n\\begin{align*}\n\\mathbb{E}\\left[|v_1^{\\top}\\tilde{Q}|^2 f_0(0|\\tilde{Q})K'(\\tilde{Q}^{\\top}v_2) \\right] = 0 \\,,\n\\end{align*}\nwhich further implies $|v_1^{\\top}\\tilde{Q}| = 0$ almost surely, violating Assumption \\ref{as:eigenval_bound}. Hence the infimum is strictly positive. On the other hand, for the remainder term of equation \\eqref{eq:quad_eq_1}: \nfix $\\nu \\in S^{p-1}$. Then: \n\\allowdisplaybreaks\n\\begin{align}\n& \\left| \\nu^{\\top} R \\nu \\right| \\notag \\\\\n& = \\left|\\frac{1}{\\sigma_n} \\mathbb{E}\\left[\\left(\\nu^{\\top}\\tilde{Q}\\right)^2 \\left[\\int_{-\\infty}^{0} K''\\left(t+ \\tilde{Q}^{\\top}\\tilde \\eta \\right) (f_0(\\sigma_n t |\\tilde{Q}) - f_0(0|\\tilde{Q})) \\ dt \\right. \\right. \\right. \\notag \\\\\n& \\qquad \\qquad \\qquad \\qquad \\left. 
\\left. \\left. - \\int_{0}^{\\infty} K''\\left(t + \\tilde{Q}^{\\top}\\tilde \\eta \\right) (f_0(\\sigma_n t |\\tilde{Q}) - f_0(0|\\tilde{Q})) \\ dt \\right]\\right]\\right| \\notag\\\\\n& \\le \\mathbb{E} \\left[\\left(\\nu^{\\top}\\tilde{Q}\\right)^2h(\\tilde{Q}) \\int_{-\\infty}^{\\infty} \\left|K''\\left(t+ \\tilde{Q}^{\\top}\\tilde \\eta \\right)\\right| |t| \\ dt\\right] \\notag\\\\\n& \\le \\mathbb{E} \\left[\\left(\\nu^{\\top}\\tilde{Q}\\right)^2h(\\tilde{Q}) \\int_{-1}^{1} \\left|K''\\left(t\\right)\\right| |t - \\tilde{Q}^{\\top}\\tilde \\eta | \\ dt\\right] \\notag\\\\\n\\label{eq:quad_eq_3} & \\le \\mathbb{E} \\left[\\left(\\nu^{\\top}\\tilde{Q}\\right)^2h(\\tilde{Q})(1+ \\mathcal{K}\\|\\tilde{Q}\\|) \\int_{-1}^{1} \\left|K''\\left(t\\right)\\right| \\ dt\\right] = C_1 \\hspace{0.2in} [\\text{say}]\n\\end{align}\nby Assumption \\ref{as:distribution} and Assumption \\ref{as:derivative_bound}. By a two-step Taylor expansion, we have: \n\\begin{align*}\n\\mathbb{M}^s(\\psi) - \\mathbb{M}^s(\\psi_0^s) & = \\frac12 (\\psi - \\psi_0^s)^{\\top} \\nabla^2\\mathbb{M}^s(\\psi^*_n) (\\psi - \\psi_0^s) \\\\\n& \\ge \\left(\\min_{\\|v_1\\| = 1, \\|v_2 \\| \\le \\mathcal{K}} \\Lambda(v_1, v_2)\\right) \\frac{\\|\\psi - \\psi_0^s\\|^2}{2\\sigma_n} - \\frac{C_1\\sigma_n}{2} \\, \\frac{\\|\\psi - \\psi_0^s\\|^2_2}{\\sigma_n} \\\\\n& \\gtrsim \\frac{\\|\\psi - \\psi_0^s\\|^2_2}{\\sigma_n} \\hspace{0.2in} [\\text{for all large} \\ n]\n\\end{align*}\nThis concludes the proof of the curvature. \n\\\\\\\\\n\\noindent \nFinally, we bound the modulus of continuity:\n$$\\mathbb{E}\\left(\\sup_{d_n(\\psi, \\psi_0^s) \\le \\zeta} \\left|(\\mathbb{M}^s_n-\\mathbb{M}^s)(\\psi) - (\\mathbb{M}^s_n-\\mathbb{M}^s)(\\psi_0^s)\\right|\\right) \\,.$$ \nThe proof is similar to that of Lemma \\ref{lem:rate_smooth} and therefore we sketch the main steps briefly. Define the estimating function $f_\\psi$ as: \n$$\nf_\\psi(Y, Q) = (Y - \\gamma)\\left(1 - K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)\\right) \n$$\nand the collection of functions $\\mathcal{F}_\\zeta = \\{f_\\psi - f_{\\psi_0^s}: d_n(\\psi, \\psi_0^s) \\le \\zeta\\}$. That $\\mathcal{F}_\\zeta$ has finite VC dimension follows from the same argument used to show that $\\mathcal{G}_n$ has finite VC dimension in the proof of Lemma \\ref{conv-prob}. Now, to bound the modulus of continuity, we use Lemma 2.14.1 of \\cite{vdvw96}, which implies: \n$$\n\\sqrt{n}\\mathbb{E}\\left(\\sup_{d_n(\\psi, \\psi_0^s) \\le \\zeta} \\left|(\\mathbb{M}^s_n-\\mathbb{M}^s)(\\psi) - (\\mathbb{M}^s_n-\\mathbb{M}^s)(\\psi_0^s)\\right|\\right) \\lesssim \\mathcal{J}(1, \\mathcal{F}_\\zeta) \\sqrt{PF_\\zeta^2}\n$$\nwhere $F_\\zeta(Y, Q)$ is the envelope of $\\mathcal{F}_\\zeta$ defined as: \n\\begin{align*}\nF_\\zeta(Y, Q) & = \\sup_{d_n(\\psi, \\psi_0^s) \\le \\zeta}\\left|(Y - \\gamma)\\left(K\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)-K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)\\right)\\right| \\\\\n& = \\left|(Y - \\gamma)\\right| \\sup_{d_n(\\psi, \\psi_0^s) \\le \\zeta} \\left|\\left(K\\left(\\frac{Q^{\\top}\\psi^s_0}{\\sigma_n}\\right)-K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)\\right)\\right|\n\\end{align*}\nand $\\mathcal{J}(1, \\mathcal{F}_\\zeta)$ is the entropy integral, which can be bounded above by a constant independent of $n$ as the class $\\mathcal{F}_\\zeta$ has finite VC dimension. As in the proof of Lemma \\ref{lem:rate_smooth}, we here consider two separate cases: (1) $\\zeta \\le \\sqrt{\\mathcal{K} \\sigma_n}$ and (2) $\\zeta > \\sqrt{\\mathcal{K} \\sigma_n}$. 
In the first case, we have $\\sup_{d_n(\\psi, \\psi_0^s) \\le \\zeta} \\|\\psi - \\psi_0^s\\| = \\zeta \\sqrt{\\sigma_n}$. This further implies: \n\\begin{align*}\n & \\sup_{d_n(\\psi, \\psi_0^s) \\le \\zeta} \\left|\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)\\right\\}\\right|^2 \\\\\n & \\le \\max\\left\\{\\left|\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n} + \\|\\tilde Q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\right)\\right\\}\\right|^2, \\right. \\\\\n & \\qquad \\qquad \\qquad \\qquad \\left. \\left|\\left\\{K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n}\\right) - K\\left(\\frac{Q^{\\top}\\psi_0^s}{\\sigma_n} - \\|\\tilde Q\\|\\frac{\\zeta}{\\sqrt{\\sigma_n}}\\right)\\right\\}\\right|^2\\right\\} \\\\\n & := \\max\\{T_1, T_2\\} \\,.\n\\end{align*}\nTherefore, bounding $\\mathbb{E}[F_\\zeta^2(Y, Q)]$ is equivalent to bounding both $\\mathbb{E}[(Y- \\gamma)^2 T_1]$ and $\\mathbb{E}[(Y - \\gamma)^2 T_2]$ separately, which, in turn, is equivalent to bounding $\\mathbb{E}[T_1]$ and $\\mathbb{E}[T_2]$, as $|Y - \\gamma| \\le 1$. These bounds follow from calculations similar to those in the proof of Lemma \\ref{lem:rate_smooth} and are hence skipped. Finally, in this case we have $$\n\\mathbb{E}[F_\\zeta^2(Y, Q)] \\lesssim \\zeta \\sqrt{\\sigma_n} \\,.\n$$ \nThe other case, $\\zeta > \\sqrt{\\mathcal{K} \\sigma_n}$, also follows by a calculation similar to that of Lemma \\ref{lem:rate_smooth}, which yields: \n$$\n\\mathbb{E}[F_\\zeta^2(Y, Q)] \\lesssim \\zeta^2 \\,.\n$$\n\n\\noindent\nUsing this in the maximal inequality yields: \n\\begin{align*}\n\\sqrt{n}\\mathbb{E}\\left(\\sup_{d_n(\\psi, \\psi_0^s) \\le \\zeta} \\left|(\\mathbb{M}^s_n - \\mathbb{M}^s)(\\psi) - (\\mathbb{M}^s_n - \\mathbb{M}^s)(\\psi_0^s)\\right|\\right) & \\lesssim \\sqrt{\\zeta}\\sigma^{1\/4}_n\\mathds{1}_{\\zeta \\le \\sqrt{\\mathcal{K} \\sigma_n}} + \\zeta \\mathds{1}_{\\zeta > \\sqrt{\\mathcal{K} \\sigma_n}} \\\\\n& := \\phi_n(\\zeta) \\,\n\\end{align*}\nThis implies (following the same argument as in the proof of Lemma \\ref{lem:rate_smooth}): \n$$\nn^{2\/3}\\sigma_n^{-1\/3}d_n^2(\\hat \\psi^s, \\psi_0^s) = O_p(1) \\,.\n$$\nNow, as $n^{2\/3}\\sigma_n^{-1\/3} \\gg \\sigma_n^{-1}$, we have: \n$$\n\\frac{1}{\\sigma_n}d_n^2(\\hat \\psi^s, \\psi_0^s) = o_p(1) \\,,\n$$\nwhich further indicates\n\\begin{align}\n\\label{rate1} & n^{2\/3}\\sigma_n^{-1\/3}\\left[\\frac{\\|\\hat \\psi^s - \\psi_0^s\\|^2}{\\sigma_n} \\mathds{1}(\\|\\hat \\psi^s - \\psi_0^s\\| \\le \\mathcal{K}\\sigma_n) \\right. \\notag \\\\\n& \\qquad \\qquad \\qquad \\left. + \\|\\hat \\psi^s - \\psi_0^s\\| \\mathds{1}(\\|\\hat \\psi^s - \\psi_0^s\\|\\ge \\mathcal{K}\\sigma_n)\\right] = O_P(1)\n\\end{align}\nThis implies: \n\\begin{enumerate}\n\\item $\\frac{n^{2\/3}}{\\sigma_n^{4\/3}}\\|\\hat \\psi^s - \\psi_0^s\\|^2 \\mathds{1}(\\|\\hat \\psi^s - \\psi_0^s\\|\\le \\mathcal{K}\\sigma_n) = O_P(1)$, i.e. $\\frac{n^{1\/3}}{\\sigma_n^{2\/3}}\\|\\hat \\psi^s - \\psi_0^s\\| \\mathds{1}(\\|\\hat \\psi^s - \\psi_0^s\\|\\le \\mathcal{K}\\sigma_n) = O_P(1)$\n\\item $\\frac{n^{2\/3}}{\\sigma_n^{1\/3}}\\|\\hat \\psi^s - \\psi_0^s\\| \\mathds{1}(\\|\\hat \\psi^s - \\psi_0^s\\| \\ge \\mathcal{K}\\sigma_n) = O_P(1)$\n\\end{enumerate}\nTherefore: \n\\begin{align*}\n& \\frac{n^{1\/3}}{\\sigma_n^{2\/3}}\\|\\hat \\psi^s - \\psi_0^s\\| \\mathds{1}(\\|\\hat \\psi^s - \\psi_0^s\\| \\le \\mathcal{K}\\sigma_n) \\\\\n& \\qquad \\qquad \\qquad + \\frac{n^{2\/3}}{\\sigma_n^{1\/3}}\\|\\hat \\psi^s - \\psi_0^s\\| \\mathds{1}(\\|\\hat \\psi^s - \\psi_0^s\\| \\ge \\mathcal{K}\\sigma_n) = O_p(1) \\,.\n\\end{align*}\ni.e. 
\n$$\n\\left(\\frac{n^{1\/3}}{\\sigma_n^{2\/3}} \\wedge \\frac{n^{2\/3}}{\\sigma_n^{1\/3}}\\right)\\|\\hat \\psi^s - \\psi_0^s\\| = O_p(1) \\,.\n$$\nNow $n^{1\/3}\/\\sigma_n^{2\/3} \\gg 1\/\\sigma_n$ iff $n\\sigma_n \\gg 1$, which is true by our assumption; similarly, $n^{2\/3}\/\\sigma_n^{1\/3} \\gg 1\/\\sigma_n$ iff $n\\sigma_n \\gg 1$, which again holds. Therefore we have: \n$$\n\\frac{\\|\\hat \\psi^s - \\psi_0^s\\|}{\\sigma_n} = o_p(1) \\,.\n$$\nThis completes the proof. \n\n\\end{proof}\n\n\n\\section{Methodology and Theory for Continuous Response Model}\n\\label{sec:theory_regression}\nIn this section we present our analysis for the continuous response model. Without smoothing, the original estimating function is: \n$$\nf_{\\beta, \\delta, \\psi}(Y, X, Q) = \\left(Y - X^{\\top}\\beta - X^{\\top}\\delta\\mathds{1}_{Q^{\\top}\\psi > 0}\\right)^2 \n$$\nand we estimate the parameters as: \n\\begin{align}\n\\label{eq:ls_estimator}\n\\left(\\hat \\beta^{LS}, \\hat \\delta^{LS}, \\hat \\psi^{LS}\\right) & = {\\arg\\min}_{(\\beta, \\delta, \\psi) \\in \\Theta} \\mathbb{P}_n f_{\\beta, \\delta, \\psi} \\notag \\\\\n& := {\\arg\\min}_{(\\beta, \\delta, \\psi) \\in \\Theta}\\mathbb{M}_n(\\beta, \\delta, \\psi)\\,,\n\\end{align}\nwhere $\\mathbb{P}_n$ is the empirical measure based on i.i.d. observations $\\{(X_i, Y_i, Q_i)\\}_{i=1}^n$ and $\\Theta$ is the parameter space. Henceforth, we assume $\\Theta$ is a compact subset of ${\\bbR}^{2p+d}$. We also define $\\theta = (\\beta, \\delta, \\psi)$, i.e. all the parameters together as a vector, and $\\theta_0$ denotes the true parameter vector $(\\beta_0, \\delta_0, \\psi_0)$. Some algebraic manipulation of equation \\eqref{eq:ls_estimator} leads to the following: \n\\begin{align*}\n(\\hat \\beta^{LS}, \\hat \\delta^{LS}, \\hat \\psi^{LS}) & = {\\arg\\min}_{\\beta, \\delta, \\psi} \\sum_{i=1}^n \\left(Y_i - X_i^{\\top}\\beta - X_i^{\\top}\\delta\\mathds{1}_{Q_i^{\\top}\\psi > 0}\\right)^2 \\\\ \n& = {\\arg\\min}_{\\beta, \\delta, \\psi} \\sum_{i=1}^n \\left[\\left(Y_i - X_i^{\\top}\\beta\\right)^2\\mathds{1}_{Q_i^{\\top}\\psi \\le 0} \\right. \\\\\n& \\hspace{14em} \\left. + \\left(Y_i - X_i^{\\top}\\beta - X_i^{\\top}\\delta\\right)^2\\mathds{1}_{Q_i^{\\top}\\psi > 0} \\right] \\\\\n& = {\\arg\\min}_{\\beta, \\delta, \\psi} \\sum_{i=1}^n \\left[\\left(Y_i - X_i^{\\top}\\beta\\right)^2 + \\left\\{\\left(Y_i - X_i^{\\top}\\beta - X_i^{\\top}\\delta\\right)^2 \\right. \\right. \\\\\n& \\hspace{17em} \\left. \\left. - \\left(Y_i - X_i^{\\top}\\beta\\right)^2\\right\\}\\mathds{1}_{Q_i^{\\top}\\psi > 0} \\right] \n\\end{align*}\nTypical empirical process calculations yield, under mild conditions: \n$$\n\\|\\hat \\beta^{LS} - \\beta_0\\|^2 + \\|\\hat \\delta^{LS} - \\delta_0\\|^2 + \\|\\hat \\psi^{LS} - \\psi_0 \\|_2 = O_p(n^{-1}) \\,,\n$$\nbut inference is difficult as the limit distribution is unknown and, in any case, would be a highly non-standard distribution. Recall that even in the one-dimensional change point model with fixed jump size, the least squares change point estimator converges at rate $n$ to the truth with a non-standard limit distribution, namely a minimizer of a two-sided compound Poisson process (see \\cite{lan2009change} for more details). 
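\\noindent\nTo illustrate the computational difficulty, the following is a minimal numerical sketch of a profiled version of the criterion in \\eqref{eq:ls_estimator}; it is not part of our formal development, and the data-generating choices (dimensions, parameter values, error scale and sample size) are illustrative assumptions only. For a fixed $\\psi$, the parameters $(\\beta, \\delta)$ can be profiled out by ordinary least squares, leaving a criterion that is piecewise constant in $\\psi$: \n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nn, p = 500, 2\nbeta0  = np.array([1.0, -1.0])        # illustrative parameter values\ndelta0 = np.array([0.5, 0.5])\npsi0   = np.array([1.0, -0.3])        # first coordinate fixed at 1\n\nX = rng.normal(size=(n, p))\nQ = rng.normal(size=(n, 2))\nY = X @ beta0 + (X @ delta0) * (Q @ psi0 > 0) + 0.1 * rng.normal(size=n)\n\ndef profiled_criterion(psi):\n    # profile out (beta, delta): OLS on the design [X, X * 1(Q'psi > 0)]\n    ind = (Q @ psi > 0).astype(float)\n    design = np.hstack([X, X * ind[:, None]])\n    coef, *_ = np.linalg.lstsq(design, Y, rcond=None)\n    return np.mean((Y - design @ coef) ** 2)\n\n# The criterion changes only when the hyperplane Q'psi = 0 crosses a data\n# point, so it is piecewise constant in psi: gradient methods are useless\n# and one falls back on a grid search over the free coordinate of psi.\ngrid = np.linspace(-1.0, 1.0, 401)\nvals = [profiled_criterion(np.array([1.0, t])) for t in grid]\nprint('grid minimizer of the free coordinate:', grid[int(np.argmin(vals))])\n\\end{verbatim}\n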
To obtain a computable estimator with a tractable limiting distribution, we resort to a smooth approximation of the indicator function in \\eqref{eq:ls_estimator} using a distribution-function kernel with a suitable bandwidth, i.e. we replace $\\mathds{1}_{Q_i^{\\top}\\psi > 0}$ by $K(Q_i^{\\top}\\psi\/\\sigma_n)$ for some appropriate distribution function $K$ and bandwidth $\\sigma_n$, i.e. \n\\begin{align*}\n(\\hat \\beta^S, \\hat \\delta^S, \\hat \\psi^S) & = {\\arg\\min}_{\\beta, \\delta, \\psi} \\left\\{ \\frac1n \\sum_{i=1}^n \\left[\\left(Y_i - X_i^{\\top}\\beta\\right)^2 + \\left\\{\\left(Y_i - X_i^{\\top}\\beta - X_i^{\\top}\\delta\\right)^2 \\right. \\right. \\right. \\\\\n& \\hspace{15em} \\left. \\left. \\left. - \\left(Y_i - X_i^{\\top}\\beta\\right)^2\\right\\}K\\left(\\frac{Q_i^{\\top}\\psi}{\\sigma_n}\\right) \\right] \\right\\} \\\\\n& = {\\arg\\min}_{(\\beta, \\delta, \\psi) \\in \\Theta} \\mathbb{P}_n f^s_{(\\beta, \\delta, \\psi)}(X, Y, Q) \\\\\n& := {\\arg\\min}_{\\theta \\in \\Theta} \\mathbb{M}^s_n(\\theta) \\,.\n\\end{align*}\nDefine $\\mathbb{M}$ (resp. $\\mathbb{M}^s$) to be the population counterpart of $\\mathbb{M}_n$ (resp. $\\mathbb{M}_n^s$), defined as: \n\\begin{align*}\n\\mathbb{M}(\\theta) & = \\mathbb{E}\\left(Y - X^{\\top}\\beta\\right)^2 + \\mathbb{E}\\left(\\left[-2\\left(Y - X^{\\top}\\beta\\right)X^{\\top}\\delta + (X^{\\top}\\delta)^2\\right] \\mathds{1}_{Q^{\\top}\\psi > 0}\\right) \\,, \\\\\n\\mathbb{M}^s(\\theta) & = \\mathbb{E}\\left[(Y - X^{\\top}\\beta)^2 + \\left\\{-2(Y-X^{\\top}\\beta)(X^{\\top}\\delta) + (X^{\\top}\\delta)^2\\right\\}K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)\\right] \\,.\n\\end{align*}\nAs noted in the proof in \\cite{seo2007smoothed}, the assumption $\\log{n}\/(n\\sigma_n^2) \\to 0$ was only used to show: \n$$\n\\frac{\\left\\|\\hat \\psi^s - \\psi_0\\right\\|}{\\sigma_n} = o_p(1) \\,.\n$$\nIn this paper, we show that one can achieve the same conclusion as long as $n\\sigma_n \\to \\infty$. The rest of the proof of asymptotic normality is similar to that of \\cite{seo2007smoothed}; we present it briefly for the ease of the reader. The proof is quite long and technical, therefore we break it into several lemmas. We first list our assumptions: \n\\begin{assumption}\n\\label{eq:assm}\n\\begin{enumerate}\n\\item Define $f_\\psi(\\cdot \\mid \\tilde Q)$ to be the conditional density of $Q^{\\top}\\psi$ given $\\tilde Q$. (In particular, $f_0(\\cdot \\mid \\tilde q)$ denotes the conditional density of $Q^{\\top}\\psi_0$ given $\\tilde Q$, and $f_s(\\cdot \\mid \\tilde q)$ that of $Q^{\\top}\\psi_0^s$ given $\\tilde Q$.) Assume that there exists $F_+ < \\infty$ such that $\\sup_t f_\\psi(t | \\tilde Q) \\le F_+$ almost surely in $\\tilde Q$, for all $\\psi$ in a neighborhood of $\\psi_0$ (in particular for $\\psi_0^s$). Further assume that $f_\\psi$ is differentiable and that the derivative is also bounded by $F_+$ for all $\\psi$ in a neighborhood of $\\psi_0$ (again, in particular for $\\psi_0^s$).\n\\vspace{0.1in}\n\\item Define $g(Q) = {\\sf var}(X \\mid Q)$. There exist $c_- > 0$ and $c_+ < \\infty$ such that $c_- \\le \\lambda_{\\min}(g(Q)) \\le \\lambda_{\\max}(g(Q)) \\le c_+$ almost surely. Also assume that $g$ is Lipschitz with constant $G_+$ with respect to $Q$. 
\n\\vspace{0.1in}\n\\item There exist $p_+ < \\infty$ and $p_- > 0, r > 0$ such that: \n$$\np_- \\|\\psi - \\psi_0\\| \\le \\mathbb{P}\\left(\\text{sign}\\left(Q^{\\top}\\psi\\right) \\neq \\text{sign}\\left(Q^{\\top}\\psi_0\\right)\\right) \\le p_+ \\|\\psi - \\psi_0\\| \\,,\n$$\nfor all $\\psi$ such that $\\|\\psi - \\psi_0\\| \\le r$. \n\\vspace{0.1in}\n\\item For all $\\psi$ in the parameter space, $0 < \\mathbb{P}\\left(Q^{\\top}\\psi > 0\\right) < 1$. \n\\vspace{0.1in} \n\\item Define $m_2(Q) = \\mathbb{E}\\left[\\|X\\|^2 \\mid Q\\right]$ and $m_4(Q) = \\mathbb{E}\\left[\\|X\\|^4 \\mid Q\\right]$. Assume $m_2, m_4$ are bounded Lipschitz functions of $Q$. \n\\end{enumerate}\n\\end{assumption}\n\n\n\\subsection{Sufficient conditions for the above assumptions}\nWe now present some sufficient conditions for the above assumptions to hold. The first condition is essentially a condition on the conditional density of the first co-ordinate of $Q$ given all the other co-ordinates: if this conditional density is bounded and has a bounded derivative, then the first assumption is satisfied. This condition holds in fair generality. The second assumption requires that the conditional distribution of $X$ given $Q$ has non-degenerate variance in every direction, uniformly over $Q$. This is also a very weak condition: it is satisfied, for example, if $X$ and $Q$ are independent (with $X$ having a non-degenerate covariance matrix), or if $(X, Q)$ is jointly normally distributed, to name a few cases. This condition can be weakened further by assuming only that the maximum and minimum eigenvalues of $\\mathbb{E}[g(Q)]$ are bounded away from $\\infty$ and $0$ respectively, but that requires more tedious book-keeping. The third assumption is satisfied as long as $Q^{\\top}\\psi$ has a non-zero density near the origin, while the fourth assumption merely states that the support of $Q$ is not confined to one side of any hyperplane; a simple sufficient condition for this is that $Q$ has a continuous density which is positive in a neighborhood of the origin. The last assumption is analogous to the second, now for the conditional second and fourth moments, and is also satisfied in fair generality. \n\\\\\\\\\n\\noindent\n{\\bf Kernel function and bandwidth: } We take $K(x) = \\Phi(x)$ (the standard normal distribution function) for our analysis. For the bandwidth we assume $n\\sigma_n^2 \\to 0$ and $n \\sigma_n \\to \\infty$, as the other case (i.e. $n\\sigma_n^2 \\to \\infty$) is already established in \\cite{seo2007smoothed}. \n\\\\\\\\\n\\noindent\nBased on Assumption \\ref{eq:assm} and our choice of kernel and bandwidth, we establish the following theorem: \n\\begin{theorem}\n\\label{thm:regression}\nUnder Assumption \\ref{eq:assm} and the above choice of kernel and bandwidth, we have: \n$$\n\\sqrt{n}\\left(\\begin{pmatrix} \\hat \\beta^s \\\\ \\hat \\delta^s \\end{pmatrix} - \\begin{pmatrix} \\beta_0 \\\\ \\delta_0 \\end{pmatrix} \\right) \\overset{\\mathscr{L}}{\\implies} \\mathcal{N}(0, \\Sigma_{\\beta, \\delta})\n$$\nand \n$$\n\\sqrt{n\/\\sigma_n} \\left(\\hat \\psi^s - \\psi_0\\right) \\overset{\\mathscr{L}}{\\implies} \\mathcal{N}(0, \\Sigma_\\psi) \\,,\n$$\nfor matrices $\\Sigma_{\\beta, \\delta}$ and $\\Sigma_\\psi$ specified explicitly in the proof. Moreover, the two limits are asymptotically independent. \n\\end{theorem}\nThe proof of the theorem is relatively long, so we break it into several lemmas. We provide a roadmap of the proof in this section, while the elaborate technical derivations of the supporting lemmas can be found in the Appendix. 
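\\noindent\nBefore turning to the proof, the following is a minimal numerical sketch of the smoothed estimator with the above choice of kernel; it is not part of our formal development, and the data-generating choices as well as the particular bandwidth exponent are illustrative assumptions only (the exponent $-0.7$ merely satisfies $n\\sigma_n \\to \\infty$ and $n\\sigma_n^2 \\to 0$). \n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import minimize\nfrom scipy.stats import norm\n\nrng = np.random.default_rng(1)\nn, p = 500, 2\nbeta0  = np.array([1.0, -1.0])        # illustrative parameter values\ndelta0 = np.array([0.5, 0.5])\npsi0   = np.array([1.0, -0.3])        # first coordinate fixed at 1\nX = rng.normal(size=(n, p))\nQ = rng.normal(size=(n, 2))\nY = X @ beta0 + (X @ delta0) * (Q @ psi0 > 0) + 0.1 * rng.normal(size=n)\n\nsigma_n = n ** (-0.7)   # n * sigma_n -> infinity while n * sigma_n^2 -> 0\n\ndef smoothed_criterion(theta):\n    beta, delta, psi2 = theta[:p], theta[p:2 * p], theta[-1]\n    psi = np.array([1.0, psi2])       # first coordinate pinned to 1\n    w = norm.cdf((Q @ psi) \/ sigma_n) # K(Q'psi \/ sigma_n) with K = Phi\n    r0 = Y - X @ beta\n    r1 = r0 - X @ delta\n    return np.mean(r0 ** 2 + (r1 ** 2 - r0 ** 2) * w)\n\n# For a very small bandwidth the criterion is nearly flat in psi away from\n# the data, so in practice a pilot estimate (e.g. from a coarse grid or a\n# larger pilot bandwidth) is a sensible starting value.\nfit = minimize(smoothed_criterion, x0=np.zeros(2 * p + 1), method='BFGS')\nprint('estimate of (beta, delta, psi_2):', np.round(fit.x, 3))\n\\end{verbatim}\n\\noindent\nThe only change relative to the least squares criterion is that the indicator $\\mathds{1}_{Q^{\\top}\\psi > 0}$ is replaced by the smooth weight $\\Phi(Q^{\\top}\\psi\/\\sigma_n)$, which makes the criterion differentiable in $\\psi$ and amenable to gradient-based optimization. \n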
Let $\\nabla \\mathbb{M}_n^s(\\theta)$ and $\\nabla^2 \\mathbb{M}_n^s(\\theta)$ be the gradient and Hessian of $\\mathbb{M}_n^s(\\theta)$ with respect to $\\theta$. As $\\hat \\theta^s$ minimizes $\\mathbb{M}_n^s(\\theta)$, the first order condition yields $\\nabla \\mathbb{M}_n^s(\\hat \\theta^s) = 0$. Using a one-step Taylor expansion we have:\n\\allowdisplaybreaks \n\\begin{align*}\n\\nabla \\mathbb{M}_n^s(\\hat \\theta^s) = \\nabla \\mathbb{M}_n^s(\\theta_0) + \\nabla^2 \\mathbb{M}_n^s(\\theta^*)\\left(\\hat \\theta^s - \\theta_0\\right) = 0 \\,,\n\\end{align*}\ni.e.\n\\begin{equation}\n\\label{eq:main_eq} \n\\left(\\hat{\\theta}^s - \\theta_0\\right) = -\\left(\\nabla^2 \\mathbb{M}_n^s(\\theta^*)\\right)^{-1} \\nabla \\mathbb{M}_n^s(\\theta_0)\n\\end{equation}\nfor some intermediate point $\\theta^*$ between $\\hat \\theta^s$ and $\\theta_0$. \nFollowing the notation of \\cite{seo2007smoothed}, define $D_n$ to be the diagonal matrix of dimension $2p + d$ whose first $2p$ diagonal entries equal $1$ and whose last $d$ entries equal $\\sqrt{\\sigma_n}$. We can then write: \n\\begin{align}\n\\sqrt{n}D_n^{-1}(\\hat \\theta^s - \\theta_0) & = - \\sqrt{n}D_n^{-1}\\nabla^2\\mathbb{M}_n^s(\\theta^*)^{-1}\\nabla \\mathbb{M}_n^s(\\theta_0) \\notag \\\\\n\\label{eq:taylor_main} & = -\\begin{pmatrix} \\nabla^2\\mathbb{M}_n^{s, \\gamma}(\\theta^*) & \\sqrt{\\sigma_n}\\nabla^2\\mathbb{M}_n^{s, \\gamma \\psi}(\\theta^*) \\\\\n\\sqrt{\\sigma_n}\\nabla^2\\mathbb{M}_n^{s, \\psi \\gamma}(\\theta^*) & \\sigma_n\\nabla^2\\mathbb{M}_n^{s, \\psi}(\\theta^*)\\end{pmatrix}^{-1}\\begin{pmatrix} \\sqrt{n}\\nabla \\mathbb{M}_n^{s, \\gamma}(\\theta_0) \\\\ \\sqrt{n\\sigma_n}\\nabla \\mathbb{M}_n^{s, \\psi}(\\theta_0)\\end{pmatrix}\n\\end{align}\nwhere $\\gamma = (\\beta, \\delta) \\in {\\bbR}^{2p}$. The following lemma establishes the asymptotic properties of $\\nabla \\mathbb{M}_n^s(\\theta_0)$: \n\\begin{lemma}[Asymptotic Normality of $\\nabla \\mathbb{M}_n^s(\\theta_0)$]\n\\label{asymp-normality}\nUnder Assumption \\ref{eq:assm} we have: \n\\begin{align*}\n\\sqrt{n}\\nabla \\mathbb{M}_n^{s, \\gamma}(\\theta_0) \\implies \\mathcal{N}\\left(0, 4V^{\\gamma}\\right) \\,,\\\\\n\\sqrt{n\\sigma_n}\\nabla \\mathbb{M}_n^{s, \\psi}(\\theta_0) \\implies \\mathcal{N}\\left(0, V^{\\psi}\\right) \\,,\n\\end{align*} \nfor some n.n.d. matrices $V^{\\gamma}$ and $V^{\\psi}$ which are specified explicitly in the proof. Furthermore, $\\sqrt{n}\\nabla \\mathbb{M}_n^{s, \\gamma}(\\theta_0)$ and $\\sqrt{n\\sigma_n}\\nabla \\mathbb{M}_n^{s, \\psi}(\\theta_0)$ are asymptotically independent. 
\n\\end{lemma}\n\\noindent\nNext, we analyze the convergence of $\\nabla^2 \\mathbb{M}_n^s(\\theta^*)$, which is described in the following lemma: \n\\begin{lemma}[Convergence in Probability of $\\nabla^2 \\mathbb{M}_n^s(\\theta^*)$]\n\\label{conv-prob}\nUnder Assumption \\ref{eq:assm}, for any random sequence $\\breve{\\theta} = \\left(\\breve{\\beta}, \\breve{\\delta}, \\breve{\\psi}\\right)$ such that $\\breve{\\beta} \\overset{p}{\\to} \\beta_0, \\breve{\\delta} \\overset{p}{\\to} \\delta_0, \\|\\breve{\\psi} - \\psi_0\\|\/\\sigma_n \\overset{p}{\\to} 0$, we have: \n\\begin{align*}\n\\nabla^2_{\\gamma} \\mathbb{M}_n^s(\\breve{\\theta}) & \\overset{p}{\\longrightarrow} 2Q^{\\gamma} \\,, \\\\\n\\sqrt{\\sigma_n}\\nabla^2_{\\psi \\gamma} \\mathbb{M}_n^s(\\breve{\\theta}) & \\overset{p}{\\longrightarrow} 0 \\,, \\\\\n\\sigma_n \\nabla^2_{\\psi} \\mathbb{M}_n^s(\\breve{\\theta}) & \\overset{p}{\\longrightarrow} Q^{\\psi} \\,,\n\\end{align*}\nfor some matrices $Q^{\\gamma}, Q^{\\psi}$ specified explicitly in the proof. This, along with equation \\eqref{eq:taylor_main}, establishes: \n\\begin{align*}\n\\sqrt{n}\\left(\\hat \\gamma^s - \\gamma_0\\right) & \\overset{\\mathscr{L}}{\\implies} \\mathcal{N}\\left(0, Q^{\\gamma^{-1}}V^{\\gamma}Q^{\\gamma^{-1}}\\right) \\,, \\\\\n\\sqrt{n\/\\sigma_n}\\left(\\hat \\psi^s - \\psi_0\\right) & \\overset{\\mathscr{L}}{\\implies} \\mathcal{N}\\left(0, Q^{\\psi^{-1}}V^{\\psi}Q^{\\psi^{-1}}\\right) \\,,\n\\end{align*}\nwhere, as before, $\\hat \\gamma^s = (\\hat \\beta^s, \\hat \\delta^s)$. \n\\end{lemma}\nIt will be shown later that the condition $\\|\\breve{\\psi} - \\psi_0\\|\/\\sigma_n \\overset{p}{\\to} 0$ needed in Lemma \\ref{conv-prob} holds for the (random) sequence $\\psi^*$, the $\\psi$-component of the intermediate point in the Taylor expansion. Combining Lemma \\ref{asymp-normality} and Lemma \\ref{conv-prob} then concludes the proof of Theorem \\ref{thm:regression}.\nObserve that, to show $\\left\\|\\psi^* - \\psi_0 \\right\\| = o_P(\\sigma_n)$, it suffices to prove that $\\left\\|\\hat \\psi^s - \\psi_0 \\right\\| = o_P(\\sigma_n)$. Towards that end, we have the following lemma: \n\n\\begin{lemma}[Rate of convergence]\n\\label{lem:rate_smooth}\nUnder Assumption \\ref{eq:assm} and our choice of kernel and bandwidth, \n$$\nn^{2\/3}\\sigma_n^{-1\/3} d^2_*\\left(\\hat \\theta^s, \\theta_0^s\\right) = O_P(1) \\,,\n$$\nwhere \n\\begin{align*}\nd_*^2(\\theta, \\theta_0^s) & = \\|\\beta - \\beta_0^s\\|^2 + \\|\\delta - \\delta_0^s\\|^2 \\\\\n& \\qquad \\qquad + \\frac{\\|\\psi - \\psi_0^s\\|^2}{\\sigma_n} \\mathds{1}_{\\|\\psi - \\psi_0^s\\| \\le \\mathcal{K}\\sigma_n} + \\|\\psi - \\psi_0^s\\| \\mathds{1}_{\\|\\psi - \\psi_0^s\\| > \\mathcal{K}\\sigma_n} \\,,\n\\end{align*}\nfor a specific constant $\\mathcal{K}$ (made precise in the proof). Hence, as $n\\sigma_n \\to \\infty$, we have $n^{2\/3}\\sigma_n^{-1\/3} \\gg \\sigma_n^{-1}$, which implies $\\|\\hat \\psi^s - \\psi_0^s\\|\/\\sigma_n \\overset{p}{\\longrightarrow} 0 \\,.$\n\\end{lemma}\n\\noindent\nThe above lemma establishes $\\|\\hat \\psi^s - \\psi_0^s\\|\/\\sigma_n = o_p(1)$, but our goal is to show that $\\|\\hat \\psi^s - \\psi_0\\|\/\\sigma_n = o_p(1)$. 
Therefore, we further need $\\|\\psi^s_0 - \\psi_0\\|\/\\sigma_n \\rightarrow 0$, which is established in the following lemma:\n\n\\begin{lemma}[Convergence of the population minimizer]\n\\label{bandwidth}\nUnder Assumption \\ref{eq:assm} and our choice of kernel and bandwidth, we have $\\|\\psi^s_0 - \\psi_0\\|\/\\sigma_n \\rightarrow 0$. \n\\end{lemma}\n\n\\noindent\nHence the final roadmap is as follows: using Lemma \\ref{bandwidth} and Lemma \\ref{lem:rate_smooth}, we establish that $\\|\\hat \\psi^s - \\psi_0\\|\/\\sigma_n = o_p(1)$ when $n\\sigma_n \\to \\infty$. This, in turn, enables us to prove Lemma \\ref{conv-prob}, i.e. the convergence in probability of the suitably normalized Hessian blocks at $\\theta^*$, which, along with Lemma \\ref{asymp-normality}, establishes the main theorem. \n\n\n\n\n\n\\section{Introduction}\nThe simple linear regression model assumes a uniform linear relationship between the covariate and the response, in the sense that the regression parameter $\\beta$ is the same over the entire covariate domain. In practice, the situation can be more complicated: for instance, the regression parameter may differ from sub-population to sub-population within a large (super-)population. Some common techniques to account for such heterogeneity include mixed linear models, introducing an interaction effect, or fitting separate models within each sub-population, which corresponds to a supervised classification setting where the true groups (sub-populations) are \\emph{a priori known}. \n\t\\newline\n\t\\newline\n\t\\indent A more difficult scenario arises when the sub-populations are unknown, in which case regression and classification must happen simultaneously. Consider the scenario where the conditional mean of $Y_i$ given $X_i$ differs across unknown sub-groups. A well-studied treatment of this problem -- the so-called change point problem -- considers a simple thresholding model where membership in a sub-group is determined by whether a real-valued observable $X$ falls to the left or right of an unknown parameter $\\gamma$. More recently, there has been work on multi-dimensional covariates, namely when membership is determined by the side of a hyperplane, with unknown normal vector $\\theta_0$, on which a random vector $X$ falls. A concrete example appears in \\cite{wei2014latent}, who extend the linear thresholding model due to \\cite{kang2011new} to general dimensions: \n\t\\begin{eqnarray}\\label{eq:weimodel}\n\tY=\\mu_1\\cdot 1_{X^{\\top}\\theta_0\\geq 0}+\\mu_2\\cdot 1_{X^{\\top}\\theta_0<0}+\\varepsilon\\,,\n\t\\end{eqnarray}\n\tand study computational algorithms for, and the consistency of, the resulting estimator. This model and others with similar structure, called \\emph{change plane models}, are useful in various fields of research, e.g. modeling treatment effect heterogeneity in drug treatment (\\cite{imai2013estimating}), modeling sociological data on voting and employment (\\cite{imai2013estimating}), or cross country growth regressions in econometrics \n(\\cite{seo2007smoothed}).\n\t\\newline\n\t\\newline\n\\indent Other aspects of this model have also been investigated. \\cite{fan2017change} examined the change plane model from the statistical testing point of view, with the null hypothesis being the absence of a separating hyperplane. They proposed a test statistic, studied its asymptotic distribution and provided sample size recommendations for achieving target values of power. 
\\cite{li2018multi} extended the change point detection problem to the multi-dimensional setup by considering the case where $X^{\\top}\\theta_0$ forms a multiple change point data sequence. \n\nThe key difficulty with change plane type models is the inherent discontinuity of the optimization criterion, in which the parameter of interest appears as an argument of an indicator function, rendering the optimization extremely hard. To alleviate this, one option is to kernel smooth the indicator function, an approach that was adopted by Seo and Linton \\cite{seo2007smoothed} in a version of the change-plane problem, motivated by earlier results of Horowitz \\cite{horowitz1992smoothed} on a smoothed version of the maximum score estimator. Their model has an additive structure of the form:\n\\[Y_t = \\beta^{\\top}X_t + \\delta^{\\top} \\tilde{X}_t \\mathds{1}_{Q_t^{\\top} \\psi > 0} + \\epsilon_t \\,,\\]\nwhere $\\psi$ is the (fixed) change-plane parameter, and $t$ can be viewed as a time index. Under a set of assumptions on the model (Assumptions 1 and 2 of their paper), they showed asymptotic normality of their estimator of $\\psi$, obtained by minimizing a smoothed least squares criterion\nthat uses a differentiable distribution function $\\mathcal{K}$. The rate of convergence of $\\hat{\\psi}$ to the truth was shown to be $\\sqrt{n\/\\sigma_n}$, where $\\sigma_n$ is the bandwidth parameter used to smooth the least squares function. As noted in their Remark 3, in the special case of i.i.d. observations, their requirement that $\\log n\/(n \\sigma_n^2) \\rightarrow 0$ translates to a maximal convergence rate of $n^{3\/4}$ up to a logarithmic factor. The work of \\cite{li2018multi}, who considered multiple parallel change planes (determined by a fixed dimensional normal vector) and high dimensional linear models in the regions between consecutive hyperplanes, also builds partly upon the methods of \\cite{seo2007smoothed} and obtains the same (almost) $n^{3\/4}$ rate for the normal vector (as can be seen by combining Condition 6 in their paper with the conclusion of their Theorem 3). \n\\\\\\\\\n\nWhile it is established that the condition $n\\sigma_n^2 \\to \\infty$ is sufficient (up to a log factor) for achieving asymptotic normality of the smoothed estimator, there is no result in the existing literature ascertaining whether it is necessary. Intuitively speaking, the necessary condition for asymptotic normality ought to be $n \\sigma_n \\to \\infty$, as this ensures a growing number of observations in a $\\sigma_n$ neighborhood around the true hyperplane, allowing the central limit theorem to kick in. In this paper we \\emph{bridge this gap} by proving that asymptotic normality of the smoothed change plane estimator is, in fact, achievable with $n \\sigma_n \\to \\infty$. \nThis implies that the best possible rate of convergence of the smoothed estimator can be arbitrarily close to $n^{-1}$, the minimax optimal rate of estimation for this problem. To demonstrate this, we focus on two change plane estimation problems, one with a continuous and another with a binary response. The continuous response model we analyze here is the following: \n\\begin{equation}\n\\label{eq:regression_main_eqn}\nY_i = \\beta_0^{\\top}X_i + \\delta_0^{\\top}X_i\\mathds{1}_{Q_i^{\\top}\\psi_0 > 0} + {\\epsilon}_i \\,,\n\\end{equation}\nfor i.i.d. 
observations $\\{(X_i, Y_i, Q_i)\\}_{i=1}^n$, where the zero-mean transitory shocks satisfy ${\\epsilon}_i \\rotatebox[origin=c]{90}{$\\models$} (X_i, Q_i)$. Our calculations can easily be extended, with more tedious bookkeeping, to the case where the covariates on the two sides of the change hyperplane differ and only $\\mathbb{E}[{\\epsilon} \\mid X, Q] = 0$ is assumed. As this generalization adds little of conceptual interest to our proof, we posit the simpler model for ease of understanding.\nAs the parameter $\\psi_0$ is only identifiable up to its norm, we assume that its first co-ordinate is $1$ (along the lines of \\cite{seo2007smoothed}), which removes one degree of freedom and makes the parameter identifiable. \n\\\\\\\\\nTo illustrate that a similar phenomenon transpires with a binary response, we also study a canonical version of such a model, which can be briefly described as follows. The covariate $Q \\sim P$, where $P$ is a distribution on $\\mathbb{R}^d$, and the conditional distribution of $Y$ given $Q$ is modeled as: \n\\begin{equation}\n\\label{eq:classification_eqn}\nP(Y=1|Q) = \\alpha_0 \\mathds{1}(Q^{\\top}\\psi_0 \\le 0) + \\beta_0\\mathds{1}(Q^{\\top}\\psi_0 > 0)\n\\end{equation}\nfor some parameters $\\alpha_0, \\beta_0\\in (0,1)$ and $\\psi_0\\in\\mathbb{R}^d$ (with first co-ordinate equal to one for identifiability, as in the continuous response model), the latter being of primary interest for estimation. \nThis model is identifiable only up to a permutation of $(\\alpha_0, \\beta_0)$, so we further assume $\\alpha_0 < \\beta_0$. For both models, we show that $\\sqrt{n\/\\sigma_n}(\\hat \\psi - \\psi_0)$ converges to a zero-mean normal distribution as long as $n \\sigma_n \\to \\infty$; the calculations for the binary model are relegated entirely to Appendix \\ref{sec:supp_classification}. \n\\\\\\\\\n{\\bf Organization of the paper:} The rest of the paper is organized as follows. In Section \\ref{sec:theory_regression} we present the methodology, the statements of the asymptotic distributions and a sketch of the proof for the continuous response model \\eqref{eq:regression_main_eqn}. In Section \\ref{sec:classification_analysis} we briefly describe the binary response model \\eqref{eq:classification_eqn} and the related assumptions, whilst the details can be found in the supplementary document. In Section \\ref{sec:simulation} we present simulation results, for both the binary and the continuous response models, to study the effect of the bandwidth on the quality of the normal approximation in finite samples. In Section \\ref{sec:real_data} we present a real data analysis, where we study the effect of income and urbanization on $CO_2$ emissions across countries. \n\\\\\\\\\n{\\bf Notations: } Before delving into the technical details, we first set up some notation. We assume henceforth that $X \\in {\\bbR}^p$ and $Q \\in {\\bbR}^d$. For any vector $v$, we denote by $\\tilde v$ the vector consisting of all its co-ordinates except the first. We denote by $K$ the kernel function used to smooth the indicator function. For any matrix $A$, we denote by $\\|A\\|_2$ (or $\\|A\\|_F$) its Frobenius norm and by $\\|A\\|_{op}$ its operator norm. For any vector, $\\| \\cdot \\|_2$ denotes its $\\ell_2$ norm. 
\n\n\n\n\n\n\\input{regression.tex}\n\\input{classification.tex}\n\n\\section{Simulation studies}\n\\label{sec:simulation}\nIn this section, we present simulation results analysing the effect of the choice of $\\sigma_n$ on the finite sample quality of the normal approximation, i.e. Berry-Esseen type behavior. If we choose a smaller $\\sigma_n$, the rate of convergence is faster, but the normal approximation error at smaller sample sizes is larger, as there are not enough observations in the vicinity of the change hyperplane for the CLT to kick in. This problem is alleviated by choosing a larger $\\sigma_n$, which, on the other hand, compromises the convergence rate. Ideally, a Berry-Esseen type bound would quantify this trade-off, but deriving one requires a different set of techniques and is left as an open problem. In our simulations, we generate data from the following setup (a code sketch of this recipe is given at the end of this section): \n\\begin{enumerate}\n\\item Set $n = 50000, p = 3, \\alpha_0 = 0.25, \\beta_0 = 0.75$ and some $\\theta_0 \\in \\mathbb{R}^p$ with first co-ordinate $ = 1$. \n\\item Generate $X_1, \\dots, X_n \\sim \\mathcal{N}(0, I_p)$. \n\\item Generate $Y_i \\sim \\textbf{Bernoulli}\\left(\\alpha_0\\mathds{1}_{X_i^{\\top}\\theta_0 \\le 0} + \\beta_0 \\mathds{1}_{X_i^{\\top}\\theta_0 > 0}\\right)$. \n\\item Estimate $\\hat \\theta$ by minimizing the smoothed criterion $\\mathbb{M}^s_n(\\theta)$ (replacing $\\gamma$ by $\\bar Y$) based on $\\{(X_i, Y_i)\\}_{i=1}^n$ for different choices of $\\sigma_n$. \n\\end{enumerate}\nWe repeat Steps 2 - 4 one hundred times to obtain $\\hat \\theta_1, \\dots, \\hat \\theta_{100}$. Define $s_n$ to be the co-ordinatewise standard deviation of $\\{\\hat \\theta_i\\}_{i=1}^{100}$. Figures \\ref{fig:co2} and \\ref{fig:co3} show the QQ plots of $\\tilde \\theta_i = (\\hat \\theta_i - \\theta_0)\/s_n$ against the standard normal for four different choices of $\\sigma_n$: $n^{-0.6}, n^{-0.7}, n^{-0.8}, n^{-0.9}$. \n\\begin{figure}\n\\centering \n\\includegraphics[scale=0.4]{Coordinate_2}\n\\caption{QQ plots for the estimate of the second co-ordinate of $\\theta_0$, for the different choices of $\\sigma_n$ indicated at the top of each panel.}\n\\label{fig:co2}\n\\end{figure}\n\\begin{figure}\n\\centering \n\\includegraphics[scale=0.4]{Coordinate_3}\n\\caption{QQ plots for the estimate of the third co-ordinate of $\\theta_0$, for the different choices of $\\sigma_n$ indicated at the top of each panel.}\n\\label{fig:co3}\n\\end{figure}\nIt is evident that smaller values of $\\sigma_n$ yield a poorer normal approximation. Although our theory shows that asymptotic normality holds as long as $n\\sigma_n \\to \\infty$, in practice we recommend choosing $\\sigma_n$ such that $n\\sigma_n \\ge 30$ for the central limit theorem to take effect. 
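\n\\noindent\nFor completeness, the following Python fragment sketches the recipe above (an illustrative sketch only: the helper names are our own, and we plug in the smoothed score of the binary response model with $K = \\Phi$, cf. Section \\ref{sec:classification_analysis}): \n\\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import norm\nfrom scipy.optimize import minimize\n\ndef smoothed_score(psi_tail, X, Y, gamma, sigma_n):\n    """Smoothed empirical score: mean of (Y - gamma)(1 - Phi(X'psi \/ sigma_n))."""\n    psi = np.concatenate(([1.0], psi_tail))   # first coordinate pinned to 1\n    return np.mean((Y - gamma) * (1.0 - norm.cdf(X @ psi \/ sigma_n)))\n\ndef one_replication(n, theta0, alpha0, beta0, sigma_n, rng):\n    X = rng.normal(size=(n, theta0.size))\n    Y = rng.binomial(1, np.where(X @ theta0 > 0, beta0, alpha0))\n    gamma = Y.mean()                          # plug-in for gamma, as in Step 4\n    fit = minimize(smoothed_score, np.zeros(theta0.size - 1),\n                   args=(X, Y, gamma, sigma_n), method="BFGS")\n    return np.concatenate(([1.0], fit.x))\n\nrng = np.random.default_rng(1)\ntheta0, n = np.array([1.0, 0.5, -0.5]), 50000\n# 100 replications at sigma_n = n^{-0.7}; compute-intensive at this n\nest = np.array([one_replication(n, theta0, 0.25, 0.75, n ** -0.7, rng)\n                for _ in range(100)])\n# standardized estimates for the QQ plots (free coordinates only)\nz = (est[:, 1:] - theta0[1:]) \/ est[:, 1:].std(axis=0)\n\\end{verbatim}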
\n\n\\input{Real_data_analysis.tex}\n\\section{Conclusion}\n\\label{sec:conclusion}\nIn this paper we have established that, under some mild assumptions, the kernel-smoothed change plane estimator is asymptotically normal with a rate that can be made arbitrarily close to the optimal $n^{-1}$. To the best of our knowledge, the state of the art result in this genre of problems is due to \\cite{seo2007smoothed}, who demonstrate a best possible rate of about $n^{-3\/4}$ for i.i.d. data. The main difference between their approach and ours lies in the proof of Lemma \\ref{bandwidth}. Our techniques are based upon modern empirical process theory, which allows us to consider much smaller bandwidths $\\sigma_n$ than those in \\cite{seo2007smoothed}, who appear to require larger values to achieve their result, possibly owing to their reliance on the techniques developed in \\cite{horowitz1992smoothed}. Although we have established that asymptotic normality is attainable with very small bandwidths, we believe that the finite sample approximation to normality (as would be captured by a Berry-Esseen bound) can be poor in that regime, which is also evident from our simulations. \n\n\n\n\n\n\n\\section{Real data analysis}\n\\label{sec:real_data}\nWe illustrate our method using cross-country data on pollution (carbon-dioxide), income and urbanization obtained from the World Development Indicators (WDI), World Bank. The Environmental Kuznets Curve hypothesis (EKC henceforth), a popular and ongoing area of research in environmental economics, posits that at an initial stage of economic development pollution increases with economic growth, and then diminishes when society's priorities change, leading to an inverted U-shaped relation between income (measured via real GDP per capita) and pollution. The hypothesis has led to numerous empirical papers (i) testing the hypothesis (whether the relation is inverted U-shaped for the countries\/regions of interest in the sample), (ii) exploring the threshold level of income at which pollution starts falling, as well as (iii) examining the countries\/regions which belong to the upward rising part versus the downward sloping part of the inverted U-shape, if at all. The studies have been performed using US state level data or cross-country data (e.g. \\cite{shafik1992economic}, \\cite{millimet2003environmental}, \\cite{aldy2005environmental}, \\cite{lee2019nonparametric}, \\cite{boubellouta2021cross}, \\cite{list1999environmental}, \\cite{grossman1995economic}, \\cite{bertinelli2005environmental}, \\cite{azomahou2006economic}, \\cite{taskin2000searching}, to name a few). While some of these papers have found evidence in favor of the EKC hypothesis (an inverted U-shaped income-pollution relation), others have found evidence against it (a monotonically increasing or other shape for the relation). 
The results often depend on the countries\/regions in the sample, the period of analysis, as well as the pollutant studied.\n\\\\\\\\\n\\noindent\nWhile income-pollution remains the focal point of most EKC studies, several of them have also included urban agglomeration (UA) or some other measure of urbanization as an important control variable, especially while investigating carbon emissions.\\footnote{Although income growth is connected to urbanization, countries are heterogeneous and follow different growth paths due to their varying geographical structures, population densities, infrastructures and ownerships of resources, making a case for using urbanization as another control covariate in the income-pollution study. The income growth paths of the oil-rich UAE, manufacturing-based China, service-based Singapore and low-population-density Canada (with its vast land) are all different.} (see for example, \\cite{shafik1992economic}, \\cite{boubellouta2021cross} and \\cite{liang2019urbanization}). The theory of ecological economics posits potentially varying effects of increased urbanization on pollution: (i) urbanization leading to more pollution (due to its close links with sanitation, dense transportation, and proximity to polluting manufacturing industries), and (ii) urbanization potentially leading to less pollution, based on the 'compact city theory' (see \\cite{burton2000compact}, \\cite{capello2000beyond}, \\cite{sadorsky2014effect}), which explains the potential benefits of increased urbanization in terms of economies of scale (for example, replacing dependence on automobiles with large scale subway systems, using multi-storied buildings instead of single unit houses, and keeping more open green space). \\cite{liddle2010age}, using 17 developed countries, find a positive and significant effect of urbanization on pollution. On the contrary, using a set of 69 countries, \\cite{sharma2011determinants} find a negative and significant effect of urbanization on pollution, while \\cite{du2012economic} find an insignificant effect of urbanization on carbon emissions. Using various empirical strategies, \\cite{sadorsky2014effect} conclude that the positive and negative effects of urbanization on carbon pollution may cancel out depending on the countries involved, often leaving insignificant effects on pollution. They also note that many countries are yet to achieve a sizeable level of urbanization, which presumably explains why many empirical works using less developed countries find an insignificant effect of urbanization. In summary, based on the existing literature, both the relationship between urbanization and pollution and the relationship between income and pollution appear to depend largely on the set of countries considered in the sample. This motivates us to use UA along with income in our change plane model for analyzing carbon-dioxide emissions, to plausibly separate the countries into two regimes. \n\\\\\\\\\n\\noindent\nFollowing the broad literature, we use pollution emission per capita (carbon-dioxide measured in metric tons per capita) as the dependent variable, and real GDP per capita (measured in 2010 US dollars), its square (as is done commonly in the EKC literature) and a popular measure of urbanization, namely urban agglomeration (UA)\\footnote{The exact definition can be found in the World Development Indicators database on the World Bank website.}, as covariates (in our notation $X$) in our regression. 
In light of the preceding discussion, we fit a change plane model comprising real GDP per capita and UA (in our notation $Q$). To summarize the setup, we use the continuous response model described in equation \\eqref{eq:regression_main_eqn}, i.e.\n\\begin{align*}\nY_i & = X_i^{\\top}\\beta_0 + X_i^{\\top}\\delta_0\\mathds{1}_{Q_i^{\\top}\\psi_0 > 0} + {\\epsilon}_i \\\\\n& = X_i^{\\top}\\beta_0\\mathds{1}_{Q_i^{\\top}\\psi_0 \\le 0} + X_i^{\\top}(\\beta_0 + \\delta_0)\\mathds{1}_{Q_i^{\\top}\\psi_0 > 0} + {\\epsilon}_i\n\\end{align*}\nwith per capita $CO_2$ emission in metric tons as $Y$; per capita GDP, the square of per capita GDP and UA as $X$ (hence $X \\in \\mathbb{R}^3$); and, finally, per capita GDP and UA as $Q$ (hence $Q \\in \\mathbb{R}^2$). Observe that $\\beta_0$ represents the regression coefficients corresponding to the countries with $Q_i^{\\top}\\psi_0 \\le 0$ (henceforth denoted by Group 1) and $(\\beta_0+ \\delta_0)$ represents the regression coefficients corresponding to the countries with $Q_i^{\\top}\\psi_0 > 0$ (henceforth denoted by Group 2). As per our convention, in the interests of identifiability we assume $\\psi_{0, 1} = 1$, where $\\psi_{0,1}$ is the change plane parameter corresponding to per capita GDP. Therefore the only change plane coefficient to be estimated is $\\psi_{0, 2}$, the change plane coefficient for UA. For numerical stability, we scale per capita GDP by $10^{-4}$ (consequently, the square of per capita GDP is scaled by $10^{-8}$)\\footnote{This scaling helps the numerical stability of the gradient descent algorithm used to optimize the least squares criterion.}. After some pre-processing (i.e. removing rows containing NAs and countries with $100\\%$ UA), we estimate the coefficients $(\\beta_0, \\delta_0, \\psi_0)$ of our model based on data from 115 countries, with $\\sigma_n = 0.05$, and test the significance of the various coefficients using the methodologies described in Section \\ref{sec:inference}. We present our findings in Table \\ref{tab:ekc_coeff}. \n\\begin{table}[!h]\n \\centering\n \\begin{tabular}{|c||c||c|}\n \\hline\n Coefficients & Estimated values & p-values \\\\\n \\hline \\hline \n $\\beta_{0, 1}$ (\\text{RGDPPC for Group 1}) & 6.98555060 & 4.961452e-10 \\\\\n $\\beta_{0, 2}$ (\\text{squared RGDPPC for Group 1}) & -0.43425991 & 7.136484e-02 \\\\\n $\\beta_{0, 3}$ (\\text{UA for Group 1}) & -0.02613813 & 1.066065e-01 \\\\\n $\\beta_{0, 1} + \\delta_{0, 1}$ (\\text{RGDPPC for Group 2}) & 2.0563337 & 0.000000e+00 \\\\\n $\\beta_{0, 2} + \\delta_{0, 2}$ (\\text{squared RGDPPC for Group 2}) & -0.1866490 & 4.912843e-04 \\\\\n $\\beta_{0, 3} + \\delta_{0, 3}$ (\\text{UA for Group 2}) & 0.1403171 & 1.329788e-05 \\\\\n $\\psi_{0,2}$ (\\text{Change plane coeff for UA}) & -0.07061785 & 0.000000e+00 \\\\\n \\hline\n \\end{tabular}\n \\caption{Estimated regression and change plane coefficients along with their p-values.}\n \\label{tab:ekc_coeff}\n\\end{table}\n\\\\\\\\\n\\noindent\nFrom the above analysis, we find that GDP has a significantly positive effect on pollution for both groups of countries. The effect of its squared term is negative for both groups, but it is significant only for Group 2, consisting mostly of high income countries, and insignificant (at the 5\\% level) for the Group 1 countries (a mix of mostly low and middle income countries along with a few high income ones). Thus, not surprisingly, we find evidence in favor of the EKC for the developed countries, but not for the mixed group. 
Notably, Group 1 consists of a mixed set of countries like Angola, Sudan, Senegal, India, China, Israel and the UAE, whereas Group 2 consists of rich and developed countries like Canada, the USA, the UK, France and Germany. The urban variable, on the other hand, is seen to have an insignificant effect on Group 1, which is in keeping with \\cite{du2012economic} and \\cite{sadorsky2014effect}. Many of these countries are yet to achieve substantial urbanization, and this is even more true for our sample period\\footnote{We use six-year averages over 2010-2015 for the GDP and pollution measures. Such averaging is in accordance with the cross-sectional empirical literature using cross-country\/regional data and helps avoid business cycle fluctuations in GDP. It also minimizes the impact of outlier events such as the financial crisis and the great recession period. The years that we have chosen are ones for which we could find data for the largest number of countries.}. In contrast, UA has a positive and significant effect on the Group 2 (developed) countries, which is consistent with the findings of \\cite{liddle2010age}, for example. Note that UA plays a crucial role in dividing the countries into the two regimes, as the estimate of $\\psi_{0,2}$ is significant. Thus, we are able to partition the countries into two regimes: a mostly rich group and a mixed group. \n\\\\\\\\\n\\noindent\nNote that many underdeveloped countries, and the poorer regions of emerging countries, are still swamped with greenhouse gas emissions from burning coal, cow dung etc., and with the use of poor exhaust systems in houses and for transport. This is especially true of rural and semi-urban areas of developing countries. So, even while being less urbanized than developed nations, their overall pollution load is high (due to inefficient energy usage and a higher dependence on fossil fuels, as pointed out above) and rising with income, and they are yet to reach the descending part of the inverted U-shape of the income-pollution relation. On the contrary, for the countries in Group 2, the adoption of more efficient energy and exhaust systems is common in households and transportation in general, leading to eventually decreasing pollution with increasing income (supporting the EKC). Both results are in line with the existing EKC literature. Additionally, we find that the countries in Group 2 are yet to achieve 'compact city' green urbanization, which is reflected in the positive and significant effect of UA on pollution in our analysis. \n\\\\\\\\\n\\noindent\nThere are many potential future applications of our method in economics. Similar analyses can be performed for other pollutants (such as sulfur emissions, electrical waste\/e-waste, nitrogen pollution etc.). While income\/GDP remains a common, indeed the most crucial, variable in pollution studies, other covariates (including the variables defining the change plane) may vary depending on the pollutant of interest. Another potential application is identifying the determinants of family health expenses in household survey data. Families are often asked about the health expenses they incurred in the past year. An interesting case in point may be household surveys collected in India, where one finds numerous large joint families, with several children and old people residing in the same household, and where most families are uninsured. 
It is often seen that health expenditure increases with income, a major component being the cost of regularly performed preventative medical examinations, which are affordable only once a certain income level is reached. The important covariates here are per capita family income, family wealth, `dependency ratio' (the ratio of the number of children and old people to the total number of people in the family) and a binary indicator of any history of major illness\/hospitalization in the family in the past year. Family income per capita and history of major illness are natural candidates for defining the change plane.\n\n\\section{Binary response model}\n\\label{sec:classification_analysis}\nRecall our binary response model in equation \\eqref{eq:classification_eqn}. To estimate $\\psi_0$, we resort to the following (unsmoothed) loss: \n\\begin{equation}\n\\label{eq:new_loss}\n\\mathbb{M}(\\psi) = \\mathbb{E}\\left((Y - \\gamma)\\mathds{1}(Q^{\\top}\\psi \\le 0)\\right)\n\\end{equation}\nwith $\\gamma \\in (\\alpha_0, \\beta_0)$, which can be viewed as a variant of the squared error loss function: \n$$\n\\mathbb{M}(\\alpha, \\beta, \\psi) = \\mathbb{E}\\left(\\left(Y - \\alpha\\mathds{1}(Q^{\\top}\\psi < 0) - \\beta\\mathds{1}(Q^{\\top}\\psi > 0)\\right)^2\\right)\\,.\n$$\nWe establish the connection between these two losses in sub-section \\ref{loss_func_eq}. It is easy to prove that under fairly mild conditions (discussed later) $\\psi_0 = {\\arg\\min}_{\\psi \\in \\Theta}\\mathbb{M}(\\psi)$ uniquely; a one-line computation supporting this claim is sketched below. Under the standard classification paradigm, when we know a priori that $\\alpha_0 < 1\/2 < \\beta_0$, we can take $\\gamma = 1\/2$; in the absence of this constraint, $\\bar{Y}$, which converges to a value between $\\alpha_0$ and $\\beta_0$, may be substituted into the loss function. In the rest of the paper we confine ourselves to a known $\\gamma$, and for technical simplicity we take $\\gamma = (\\alpha_0 + \\beta_0)\/2$; this assumption can be removed with more mathematical book-keeping. Thus, $\\psi_0$ is estimated by: \n\\begin{equation}\n\\label{non-smooth-score} \n\\hat \\psi = {\\arg\\min}_{\\psi \\in \\Theta} \\mathbb{M}_n(\\psi) = {\\arg\\min}_{\\psi \\in \\Theta} \\frac{1}{n}\\sum_{i=1}^n (Y_i - \\gamma)\\mathds{1}(Q_i^{\\top}\\psi \\le 0)\\,.\n\\end{equation} \nWe resort to a smooth approximation of the indicator function in \\eqref{non-smooth-score} using a distribution kernel with a suitable bandwidth. The smoothed version of the population score function then becomes: \n\\begin{equation}\n\\label{eq:kernel_smoothed_pop_score}\n\\mathbb{M}^s(\\psi) = \\mathbb{E}\\left((Y - \\gamma)\\left(1-K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)\\right)\\right)\n\\end{equation}\nwhere, as in the continuous response model, we use $K(x) = \\Phi(x)$; the corresponding empirical version is: \n\\begin{equation}\n\\label{eq:kernel_smoothed_emp_score}\n\\mathbb{M}^s_n(\\psi) = \\frac{1}{n}\\sum_{i=1}^n \\left((Y_i - \\gamma)\\left(1-K\\left(\\frac{Q_i^{\\top}\\psi}{\\sigma_n}\\right)\\right)\\right) \\,.\n\\end{equation}\nDefine $\\hat{\\psi}^s$ and $\\psi_0^s$ to be the minimizers of the smoothed empirical (equation \\eqref{eq:kernel_smoothed_emp_score}) and population (equation \\eqref{eq:kernel_smoothed_pop_score}) score functions respectively. Here we only consider bandwidths satisfying $n\\sigma_n \\to \\infty$ and $n\\sigma_n^2 \\to 0$. 
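\nFor intuition, here is a sketch of the computation behind the claim that $\\psi_0$ minimizes \\eqref{eq:new_loss} (the formal argument, including uniqueness, is part of the analysis in Appendix \\ref{sec:supp_classification}). Conditioning on $Q$ and using \\eqref{eq:classification_eqn}, for any $\\gamma \\in (\\alpha_0, \\beta_0)$: \n$$\n\\mathbb{M}(\\psi) = \\mathbb{E}\\left[\\left(\\mathbb{E}[Y \\mid Q] - \\gamma\\right)\\mathds{1}(Q^{\\top}\\psi \\le 0)\\right] = (\\alpha_0 - \\gamma)\\,\\mathbb{P}\\left(Q^{\\top}\\psi_0 \\le 0, Q^{\\top}\\psi \\le 0\\right) + (\\beta_0 - \\gamma)\\,\\mathbb{P}\\left(Q^{\\top}\\psi_0 > 0, Q^{\\top}\\psi \\le 0\\right) \\,.\n$$\nSince $\\alpha_0 - \\gamma < 0 < \\beta_0 - \\gamma$, the criterion is minimized exactly when the event $\\{Q^{\\top}\\psi \\le 0\\}$ coincides almost surely with $\\{Q^{\\top}\\psi_0 \\le 0\\}$, which, under conditions guaranteeing that distinct hyperplanes split the support of $Q$ differently, forces $\\psi = \\psi_0$. 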
Analogous to Theorem \\ref{thm:regression}, we prove the following result for the binary response model: \n\\begin{theorem}\n\\label{thm:binary}\nUnder Assumptions (\\ref{as:distribution} - \\ref{as:eigenval_bound}): \n$$\n\\sqrt{\\frac{n}{\\sigma_n}}\\left(\\hat{\\psi}^s - \\psi_0\\right) \\Rightarrow N(0, \\Gamma) \\,,\n$$ \nfor some non-stochastic matrix $\\Gamma$, which is defined explicitly in the proof. \n\\end{theorem}\nWe have therefore established that, in the regime $n\\sigma_n \\to \\infty$ and $n\\sigma_n^2 \\to 0$, it is possible to attain asymptotic normality using a smoothed estimator for the binary response model as well. \n\n\n\n\n\n\\section{Inferential methods}\n\\label{sec:inference}\nWe draw inferences on $(\\beta_0, \\delta_0, \\psi_0)$ by resorting to techniques similar to those in \\cite{seo2007smoothed}. For the continuous response model, we need consistent estimators of $V^{\\gamma}, Q^{\\gamma}, V^{\\psi}, Q^{\\psi}$ (see Lemma \\ref{conv-prob} for the definitions) for hypothesis testing. By virtue of the aforementioned lemma, we can estimate $Q^{\\gamma}$ and $Q^{\\psi}$ as follows: \n\\begin{align*}\n\\hat Q^{\\gamma} & = \\nabla^2_{\\gamma} \\mathbb{M}_n^s(\\hat \\theta) \\,, \\\\ \n\\hat Q^{\\psi} & = \\sigma_n \\nabla^2_{\\psi} \\mathbb{M}_n^s(\\hat \\theta) \\,.\n\\end{align*}\nThe consistency of the above estimators is established in the proof of Lemma \\ref{conv-prob}. For the other two parameters $V^{\\gamma}, V^{\\psi}$ we use the following estimators: \n\\begin{align*}\n\\hat V^{\\psi} & = \\frac{1}{n\\sigma_n^2}\\sum_{i=1}^n\\left(\\left(Y_i - X_i^{\\top}(\\hat \\beta + \\hat \\delta)\\right)^2 - \\left(Y_i- X_i^{\\top}\\hat \\beta\\right)^2\\right)^2\\tilde Q_i \\tilde Q_i^{\\top}\\left(K'\\left(\\frac{Q_i^{\\top}\\hat \\psi}{\\sigma_n}\\right)\\right)^2 \\,, \\\\\n\\hat V^{\\gamma} & = \\hat \\sigma^2_{\\epsilon} \\begin{pmatrix} \\frac{1}{n}\\sum_{i=1}^n X_iX_i^{\\top} & \\frac{1}{n}\\sum_{i=1}^n X_iX_i^{\\top}\\mathds{1}_{Q_i^{\\top}\\hat \\psi > 0} \\\\ \\frac{1}{n}\\sum_{i=1}^n X_iX_i^{\\top}\\mathds{1}_{Q_i^{\\top}\\hat \\psi > 0} & \\frac{1}{n}\\sum_{i=1}^n X_iX_i^{\\top}\\mathds{1}_{Q_i^{\\top}\\hat \\psi > 0} \\end{pmatrix} \\,,\n\\end{align*}\nwhere $\\hat \\sigma^2_{\\epsilon} = \\frac{1}{n}\\sum_{i=1}^n\\left(Y_i - X_i^{\\top}\\hat \\beta - X_i^{\\top}\\hat \\delta \\mathds{1}(Q_i^{\\top}\\hat \\psi > 0)\\right)^2$ is the average residual sum of squares. The explicit form of $V^{\\gamma}$ (as derived in equation \\eqref{eq:def_v_gamma} in the proof of Lemma \\ref{asymp-normality}) is: \n$$\nV^{\\gamma} = \\sigma_{\\epsilon}^2 \\begin{pmatrix}\\mathbb{E}\\left[XX^{\\top}\\right] & \\mathbb{E}\\left[XX^{\\top}\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right] \\\\\n\\mathbb{E}\\left[XX^{\\top}\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right] & \\mathbb{E}\\left[XX^{\\top}\\mathds{1}_{Q^{\\top}\\psi_0 > 0}\\right] \\end{pmatrix} \\,,\n$$ \nso the consistency of $\\hat V^{\\gamma}$ is immediate from the law of large numbers. The consistency of $\\hat V^{\\psi}$ follows via arguments similar to those employed in proving Lemma \\ref{conv-prob}, but under somewhat more stringent moment conditions: in particular, we need $\\mathbb{E}[\\|X\\|^8] < \\infty$ and $\\mathbb{E}[(X^{\\top}\\delta_0)^k \\mid Q]$ to be Lipschitz in $Q$ for $1 \\le k \\le 8$. The inferential techniques for the classification model are similar and hence skipped to avoid repetition. 
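\n\\noindent\nFor concreteness, the plug-in quantities above are simple sample averages once the estimates are available. The following Python fragment is an illustrative sketch (the function name and array layout are our own choices; we assume $K = \\Phi$, so that $K'$ is the standard normal density) of computing $\\hat \\sigma^2_{\\epsilon}$, $\\hat V^{\\gamma}$ and $\\hat V^{\\psi}$ from the data and the fitted $(\\hat \\beta, \\hat \\delta, \\hat \\psi)$: \n\\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import norm\n\ndef plug_in_covariances(X, Y, Q, beta_hat, delta_hat, psi_hat, sigma_n):\n    """Plug-in estimates of sigma_eps^2, V^gamma and V^psi (formulas above)."""\n    n = len(Y)\n    ind = (Q @ psi_hat > 0).astype(float)\n    resid = Y - X @ beta_hat - (X @ delta_hat) * ind\n    sigma2_eps = np.mean(resid ** 2)\n\n    S_all = X.T @ X \/ n                    # (1\/n) sum_i X_i X_i'\n    S_pos = (X * ind[:, None]).T @ X \/ n   # (1\/n) sum_i X_i X_i' 1{Q_i'psi > 0}\n    V_gamma = sigma2_eps * np.block([[S_all, S_pos], [S_pos, S_pos]])\n\n    Q_tilde = Q[:, 1:]                     # all co-ordinates of Q but the first\n    r1 = Y - X @ (beta_hat + delta_hat)\n    r0 = Y - X @ beta_hat\n    w = (r1 ** 2 - r0 ** 2) * norm.pdf(Q @ psi_hat \/ sigma_n)   # K' = phi\n    V_psi = (Q_tilde * (w ** 2)[:, None]).T @ Q_tilde \/ (n * sigma_n ** 2)\n    return sigma2_eps, V_gamma, V_psi\n\\end{verbatim}\nThe Hessian-based quantities $\\hat Q^{\\gamma}$ and $\\hat Q^{\\psi}$ can be obtained analogously, e.g. by numerically differentiating $\\mathbb{M}_n^s$ at the fitted parameters. 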
\n\n\n\n\n\n\n\n\\section{Introduction}\nThe simple linear regression model assumes a uniform linear relationship between the covariate and the response, in the sense that the regression parameter $\\beta$ is the same over the entire covariate domain. In practice, the situation can be more complicated: for instance, the regression parameter may differ from sub-population to sub-population within a large (super-) population. Some common techniques to account for such heterogeneity include mixed linear models, introducing an interaction effect, or fitting different models among each sub-population which corresponds to a supervised classification setting where the true groups (sub-populations) are \\emph{a priori known}. \n\t\\newline\n\t\\newline\n\t\\indent A more difficult scenario arises when the sub-populations are unknown, in which case regression and classification must happen simultaneously. Consider the scenario where the conditional mean of $Y_i$ given $X_i$ is different for different unknown sub-groups. A well-studied treatment of this problem -- the so-called change point problem -- considers a simple thresholding model where membership in a sub-group is determined by whether a real-valued observable $X$ falls to the left or right of an unknown parameter $\\gamma$. More recently, there has been work for multi-dimensional covariates, namely when the membership is determined by which side a random vector $X$ falls with respect to an hyperplane with unknown normal vector $\\theta_0$. A concrete example appears in \\cite{wei2014latent} who extend the linear thresholding model due to \\cite{kang2011new} to general dimensions: \n\t\\begin{eqnarray}\\label{eq:weimodel}\n\tY=\\mu_1\\cdot 1_{X^{\\top}\\theta_0\\geq 0}+\\mu_2\\cdot 1_{X^{\\top}\\theta_0<0}+\\varepsilon\\,,\n\t\\end{eqnarray}\n\tand studied computational algorithms and consistency of the same. This model and others with similar structure, called \\emph{change plane models}, are useful in various fields of research, e.g. modeling treatment effect heterogeneity in drug treatment (\\cite{imai2013estimating}), modeling sociological data on voting and employment (\\cite{imai2013estimating}), or cross country growth regressions in econometrics \n(\\cite{seo2007smoothed}).\n\t\\newline\n\t\\newline\n\\indent Other aspects of this model have also been investigated. \\cite{fan2017change} examined the change plane model from the statistical testing point of view, with the null hypothesis being the absence of a separating hyperplane. They proposed a test statistic, studied its asymptotic distribution and provided sample size recommendations for achieving target values of power. \\cite{li2018multi} extended the change point detection problem in the multi-dimensional setup by considering the case where $X^{\\top}\\theta_0$ forms a multiple change point data sequence. \n\nThe key difficultly with change plane type models is the inherent discontinuity in the optimization criteria involved where the parameter of interest appears as an argument to some indicator function, rendering the optimization extremely hard. To alleviate this, one option is to kernel smooth the indicator function, an approach that was adopted by Seo and Linton \\cite{seo2007smoothed} in a version of the change-plane problem, motivated by earlier results of Horowitz \\cite{horowitz1992smoothed} that dealt with a smoothed version of the maximum score estimator. 
Their model has an additive structure of the form:\n\\[Y_t = \\beta^{\\top}X_t + \\delta^{\\top} \\tilde{X}_t \\mathds{1}_{Q_t^{\\top} \\boldmath \\psi > 0} + \\epsilon_t \\,,\\]\nwhere $\\psi$ is the (fixed) change-plane parameter, and $t$ can be viewed as a time index. Under a set of assumptions on the model (Assumptions 1 and 2 of their paper), they showed asymptotic normality of their estimator of $\\psi$ obtained by minimizing a smoothed least squares criterion\nthat uses a differentiable distribution function $\\mathcal{K}$. The rate of convergence of $\\hat{\\psi}$ to the truth was shown to be $\\sqrt{n\/\\sigma_n}$ where $\\sigma_n$ was the bandwidth parameter used to smooth the least squares function. As noted in their Remark 3, under the special case of i.i.d. observations, their requirement that $\\log n\/(n \\sigma_n^2) \\rightarrow 0$ translates to a maximal convergence rate of $n^{3\/4}$ up to a logarithmic factor. The work of \\cite{li2018multi} who considered multiple parallel change planes (determined by a fixed dimensional normal vector) and high dimensional linear models in the regions between consecutive hyperplanes also builds partly upon the methods of \\cite{seo2007smoothed} and obtains the same (almost) $n^{3\/4}$ rate for the normal vector (as can be seen by putting Condition 6 in their paper in conjunction with the conclusion of Theorem 3). \n\\\\\\\\\n\nWhile it is established that the condition $n\\sigma_n^2 \\to \\infty$ is sufficient (upto a log factor) for achieving asymptotic normality of the smoothed estimator, there is no result in the existing literature to ascertain whether its necessity. Intuitively speaking, the necessary condition for asymptotic normality ought to be $n \\sigma_n \\to 0$, as this will ensure a growing number of observations in a $\\sigma_n$ neighborhood around the true hyperplane, allowing the central limit theorem to kick in. In this paper we \\emph{bridge this gap} by proving that asymptotic normality of the smoothed change point estimator is, in fact, achievable with $n \\sigma_n \\to \\infty$. \nThis implies that the best possible rate of convergence of the smoothed estimator can be arbitrarily close to $n^{-1}$, the minimax optimal rate of estimation for this problem. To demonstrate this, we focus on two change plane estimation problems, one with a continuous and another with a binary response. The continuous response model we analyze here is the following: \n\\begin{equation}\n\\label{eq:regression_main_eqn}\nY_i = \\beta_0^{\\top}X_i + \\delta_0^{\\top}X_i\\mathds{1}_{Q_i^{\\top}\\psi_0 > 0} + {\\epsilon}_i \\,.\n\\end{equation}\nfor i.i.d. observations $\\{(X_i, Y_i, Q_i\\}_{i=1}^n$, where the zero-mean transitory shocks ${\\epsilon}_i \\rotatebox[origin=c]{90}{$\\models$} (X_i, Q_i)$. Our calculation can be easily extended to the case when the covariates on the either side of the change hyperplane are different and $\\mathbb{E}[{\\epsilon} \\mid X, Q] = 0$ with more tedious bookkeeping. As this generalization adds little of interest, conceptually, to our proof, we posit the simpler model for ease of understanding.\nAs the parameter $\\psi_0$ is only identifiable upto its norm, we assume that the first co-ordinate is $1$ (along the lines of \\cite{seo2007smoothed}) which removes one degree of freedom and makes the parameter identifiable. 
\n\\\\\\\\\nTo illustrate that a similar phenomenon transpires with binary response, we also study a canonical version of such a model which can be briefly described as follows: The covariate $Q \\sim P$ where $P$ is distribution on $\\mathbb{R}^d$ and the conditional distribution of $Y$ given $Q$ is modeled as follows: \n\\begin{equation}\n\\label{eq:classification_eqn}\nP(Y=1|Q) = \\alpha_0 \\mathds{1}(Q^{\\top}\\psi_0 \\le 0) + \\beta_0\\mathds{1}(Q^{\\top}\\psi_0 > 0)\n\\end{equation}\nfor some parameters $\\alpha_0, \\beta_0\\in (0,1)$ and $\\psi_0\\in\\mathbb{R}^d$ (with first co-ordinate being one for identifiability issue as for the continuous response model), the latter being of primary interest for estimation. \nThis model is identifiable up to a permutation of $(\\alpha_0, \\beta_0)$, so we further assume $\\alpha_0 < \\beta_0$. For both models, we show that $\\sqrt{n\/\\sigma_n}(\\hat \\psi - \\psi_0)$ converges to zero-mean normal distribution as long as $n \\sigma_n \\to \\infty$ but the calculations for the binary model are completely relegated to Appendix \\ref{sec:supp_classification}. \n\\\\\\\\\n{\\bf Organization of the paper:} The rest of the paper is organized as follows: In Section \\ref{sec:theory_regression} we present the methodology, the statement of the asymptotic distributions and a sketch of the proof for the continuous response model \\eqref{eq:regression_main_eqn}. In Section \\ref{sec:classification_analysis} we briefly describe the binary response model \\eqref{eq:classification_eqn} and related assumptions, whilst the details can be found in the supplementary document. In Section \\ref{sec:simulation} we present some simulation results, both for the binary and the continuous response models to study the effect of the bandwidth on the quality of the normal approximation in finite samples. In Section \\ref{sec:real_data}, we present a real data analysis where we analyze the effect of income and urbanization on the $CO_2$ emission in different countries. \n\\\\\\\\\n{\\bf Notations: } Before delving into the technical details, we first setup some notations here. We assume from now on, $X \\in {\\bbR}^p$ and $Q \\in {\\bbR}^d$. For any vector $v$ we define by $\\tilde v$ as the vector with all the co-ordinates expect the first one. We denote by $K$ the kernel function used to smooth the indicator function. For any matrix $A$, we denote by $\\|A\\|_2$ (or $\\|A\\|_F$) as its Frobenious norm and $\\|A\\|_{op}$ as its operator norm. For any vector, $\\| \\cdot \\|_2$ denotes its $\\ell_2$ norm. \n\n\n\n\n\n\n\\input{regression.tex}\n\\input{classification.tex}\n\n\\section{Simulation studies}\n\\label{sec:simulation}\nIn this section, we present some simulation results to analyse the effect of the choice of $\\sigma_n$ on the finite sample approximation of asymptotic normality, i.e. Berry-Essen type bounds. If we choose a smaller sigma, the rate of convergence is accelerated but the normal approximation error at smaller sample sizes will be higher, as we don't have enough observations in the vicinity of the change hyperplane for the CLT to kick in. This problem is alleviated by choosing $\\sigma_n$ larger, but this, on the other hand, compromises the convergence rate. Ideally, a Berry-Essen type of bound will quantify this, but this will require a different set of techniques and is left as an open problem. 
In our simulations, we generate data from the following setup: \n\\begin{enumerate}\n\\item Set $n = 50000$, $p = 3$, $\\alpha_0 = 0.25$, $\\beta_0 = 0.75$ and some $\\theta_0 \\in \\mathbb{R}^p$ with first co-ordinate equal to $1$. \n\\item Generate $X_1, \\dots, X_n \\sim \\mathcal{N}(0, I_p)$. \n\\item Generate $Y_i \\sim \\textbf{Bernoulli}\\left(\\alpha_0\\mathds{1}_{X_i^{\\top}\\theta_0 \\le 0} + \\beta_0 \\mathds{1}_{X_i^{\\top}\\theta_0 > 0}\\right)$. \n\\item Estimate $\\hat \\theta$ by minimizing $\\mathbb{M}_n(\\theta)$ (replacing $\\gamma$ by $\\bar Y$) based on $\\{(X_i, Y_i)\\}_{i=1}^n$ for different choices of $\\sigma_n$. \n\\end{enumerate}\nWe repeat Steps 2--4 a hundred times to obtain $\\hat \\theta_1, \\dots, \\hat \\theta_{100}$. Define $s_n$ to be the standard deviation of $\\{\\hat \\theta_i\\}_{i=1}^{100}$. Figures \\ref{fig:co2} and \\ref{fig:co3} show the qqplots of $\\tilde \\theta_i = (\\hat \\theta_i - \\theta_0)\/s_n$ against the standard normal for four different choices of $\\sigma_n = n^{-0.6}, n^{-0.7}, n^{-0.8}, n^{-0.9}$. \n\\begin{figure}\n\\centering \n\\includegraphics[scale=0.4]{Coordinate_2}\n\\caption{In this figure, we present the qqplots for the estimate of the second co-ordinate of $\\theta_0$ for the different choices of $\\sigma_n$ indicated at the top of each plot.}\n\\label{fig:co2}\n\\end{figure}\n\\begin{figure}\n\\centering \n\\includegraphics[scale=0.4]{Coordinate_3}\n\\caption{In this figure, we present the qqplots for the estimate of the third co-ordinate of $\\theta_0$ for the different choices of $\\sigma_n$ indicated at the top of each plot.}\n\\label{fig:co3}\n\\end{figure}\nIt is evident that smaller values of $\\sigma_n$ yield a poorer normal approximation. Although our theory shows that asymptotic normality holds as long as $n\\sigma_n \\to \\infty$, in practice we recommend choosing $\\sigma_n$ such that $n\\sigma_n \\ge 30$ for the central limit theorem to take effect. \n\n\n\n\\begin{comment}\n\\section{Real data analysis}\n\\label{sec:real_data}\nWe illustrate our method using cross-country data on pollution (carbon dioxide), income and urbanization obtained from the World Development Indicators (WDI), World Bank (website?). The Environmental Kuznets Curve hypothesis (EKC henceforth), a popular and ongoing area of research in environmental economics, posits that at an initial stage of economic development pollution increases with economic growth, and then diminishes when society's priorities change, leading to an inverted U-shaped relation between income (measured via real GDP per capita) and pollution. The hypothesis has led to numerous empirical papers (i) testing the hypothesis (whether the relation is inverted U-shaped for countries\/regions of interest in the sample), (ii) exploring the threshold level of income (change point) at which pollution starts falling, as well as (iii) examining the countries\/regions which belong to the upward rising part versus the downward sloping part of the inverted U-shape, if at all. The studies have been performed using US state level data or cross-country data (e.g. \\cite{shafik1992economic}, \\cite{millimet2003environmental}, \\cite{aldy2005environmental}, \\cite{lee2019nonparametric},\\cite{boubellouta2021cross}, \\cite{list1999environmental}, \\cite{grossman1995economic}, \\cite{bertinelli2005environmental}, \\cite{azomahou2006economic}, \\cite{taskin2000searching} to name a few) and most have found strong evidence in favor of the EKC hypothesis. 
While regressing pollution emission per capita on income and its squared term, most studies find a significantly positive effect of income and a significantly negative effect of its quadratic term on pollution, thus concluding in favor of the inverted U-shapedness of the relation.\n\\\\\\\\\n\\noindent\nWhile income-pollution remains the focal point of most EKC studies, several of them have also included urban agglomeration (UA) or some other measures of urbanization as an important control variable especially while investigating carbon emissions.\\footnote{Although income growth is connected to urbanization, they are different due to varying geographical area, population density, and infrastructure of the countries. Also, different countries follow different income growth paths \u2013 labor intensive manufacturing, technology based manufacturing, human capital based servicing, technology based service sector, or natural resource (oil) based growth owing to their differences in terms of location, ownership of natural resources and capital.} (see for example, \\cite{shafik1992economic}, \\cite{boubellouta2021cross}, \\cite{liang2019urbanization}). The ecological economics literature finds mixed evidence in this regard \u2013 (i) urbanization leading to more pollution (due to its close links with sanitation or dense transportation issues and proximity to polluting manufacturing industries), (ii) urbanization leading to less pollution (explained by 'compact city theory'). The 'compact city theory' (see \\cite{burton2000compact}, \\cite{capello2000beyond}, \\cite{sadorsky2014effect}) explains the benefits of increased urbanization in terms of economies of scale (for example, replacing dependence on automobiles with subway systems, the use of costlier but improved and green technology for basic infrastructure etc). \\cite{cole2004examining}, using a set of 86 countries, and \\cite{liddle2010age}, using 17 developed countries, find a positive and significant effect of urbanization on pollution. On the contrary, using a set of 69 countries, \\cite{sharma2011determinants} find a negative and significant effect of urbanization on pollution, while \\cite{du2012economic} find an insignificant effect of urbanization on carbon emission in China. Using various empirical strategies, \\cite{sadorsky2014effect} conclude that the positive and negative effects of urbanization on carbon pollution may cancel out depending on the countries involved, often leaving insignificant effects on pollution. In summary, based on the existing literature, the relationship between urbanization and pollution appears to depend largely on the set of countries considered in the sample. This motivates us to use UA along with income in our change plane model for analyzing carbon-dioxide emission to plausibly separate the countries into two regimes. \n\\\\\\\\\n\\noindent\nFollowing the broad literature, we use pollution emission per capita (carbon-dioxide measured in metric tons per capita) as the dependent variable and real GDP per capita (measured in 2010 US dollars), its square (as is done commonly in the EKC literature) and a popular measure of urbanization, namely urban agglomeration (UA)\\footnote{The exact definition can be found on the WDI website} as covariates (in our notation $X$) in our regression. In light of the preceding discussions we fit a change plane model comprising real GDP per capita and UA (in our notation $Q$). 
To summarize the setup, we use the continuous response model as described in equation \\eqref{eq:regression_main_eqn}, i.e. \n\\begin{align*}\nY_i & = X_i^{\\top}\\beta_0 + X_i^{\\top}\\delta_0\\mathds{1}_{Q_i^{\\top}\\psi_0 > 0} + {\\epsilon}_i \\\\\n& = X_i^{\\top}\\beta_0\\mathds{1}_{Q_i^{\\top}\\psi_0 \\le 0} + X_i^{\\top}(\\beta_0 + \\delta_0)\\mathds{1}_{Q_i^{\\top}\\psi_0 > 0} + {\\epsilon}_i\n\\end{align*}\nwith the per capita $CO_2$ emission in metric tons as $Y$, per capita GDP, square of per capita GDP and UA as $X$ (hence $X \\in \\mathbb{R}^3$) and finally, per capita GDP and UA as $Q$ (hence $Q \\in \\mathbb{R}^2$). Observe that $\\beta_0$ represents the regression coefficients corresponding to the countries with $Q_i^{\\top}\\psi_0 \\le 0$ (henceforth denoted by Group 1) and $(\\beta_0+ \\delta_0)$ represents the regression coefficients corresponding to the countries with $Q_i^{\\top}\\psi_0 > 0$ (henceforth denoted by Group 2). As per our convention, in the interests of identifiability we assume $\\psi_{0, 1} = 1$, where $\\psi_{0,1}$ is the change plane parameter corresponding to per capita GDP. Therefore the only change plane coefficient to be estimated is $\\psi_{0, 2}$, the change plane coefficient for UA. For numerical stability, we scale per capita GDP by $10^{-4}$ (consequently, the square of per capita GDP is scaled by $10^{-8}$)\\footnote{This scaling helps in the numerical stability of the gradient descent algorithm used to optimize the least squares criterion.}. After some pre-processing (i.e. removing rows containing missing values and countries with $100\\%$ UA) we estimate the coefficients $(\\beta_0, \\delta_0, \\psi_0)$ of our model based on data from 115 countries with $\\sigma_n = 0.05$ and test the significance of the various coefficients using the methodologies described in Section \\ref{sec:inference}. We present our findings in Table \\ref{tab:ekc_coeff}. \n\\begin{table}[!h]\n \\centering\n \\begin{tabular}{|c||c||c|}\n \\hline\n Coefficients & Estimated values & p-values \\\\\n \\hline \\hline \n $\\beta_{0, 1}$ (\\text{RGDPPC for Group 1}) & 6.98555060 & 4.961452e-10 \\\\\n $\\beta_{0, 2}$ (\\text{squared RGDPPC for Group 1}) & -0.43425991 & 7.136484e-02 \\\\\n $\\beta_{0, 3}$ (\\text{UA for Group 1}) & -0.02613813 & 1.066065e-01\n\\\\\n $\\beta_{0, 1} + \\delta_{0, 1}$ (\\text{RGDPPC for Group 2}) & 2.0563337 & 0.000000e+00\\\\\n $\\beta_{0, 2} + \\delta_{0, 2}$ (\\text{squared RGDPPC for Group 2}) & -0.1866490 & 4.912843e-04 \\\\\n $\\beta_{0, 3} + \\delta_{0, 3}$ (\\text{UA for Group 2}) & 0.1403171& 1.329788e-05 \\\\\n $\\psi_{0,2 }$ (\\text{Change plane coeff for UA}) & -0.07061785 & 0.000000e+00\\\\\n \\hline\n \\end{tabular}\n \\caption{Table of the estimated regression and change plane coefficients along with their p-values.}\n \\label{tab:ekc_coeff}\n\\end{table}\n\\\\\\\\\n\\noindent\nFrom the above analysis, we find that both GDP and its square have statistically significant effects on carbon pollution for both groups of countries with their expected signs (positive for GDP and negative for its square term), supporting the inverted U-shaped income-pollution nexus and being in line with most papers in the EKC literature. The urban variable, on the other hand, is seen to have an insignificant effect on Group 1 countries (less developed and emerging), which is in keeping with \\cite{du2012economic}, \\cite{sadorsky2014effect}. 
However, the urban variable seems to have a positive and significant impact on Group 2 countries, which is in line with \\cite{liddle2010age} for example. Note that many of the Group 1 countries are yet to experience sizeable levels of urbanization compared to the other group and this is even truer for our sample period.\\footnote{We use a 6-year average over 2010--2015 for GDP and pollution measures. Such averaging is in accordance with the cross-sectional empirical literature using cross-country\/regional data and helps avoid business cycle fluctuations in GDP. It also minimizes the impacts of outlier events such as the financial crisis or great recession period. The years we have chosen are those for which we could find data for the largest number of countries.} Further, note that UA plays a crucial role in dividing the countries into different regimes, as the estimated value of $\\psi_{0,2}$ is significant. \n\\\\\\\\\n\\noindent\nThere are many future potential applications of our method in economics. Similar exercises can be followed for other pollutants (such as sulfur emission, electrical waste\/e-waste, nitrogen pollution etc.). While income\/GDP remains a common, indeed the most crucial variable in the pollution study, other covariates (including change plane defining variables) may vary, depending on the pollutant of interest. One may also be interested in the determinants of health expenses in household survey data. Often the families are asked about their health expenses incurred in the past year. An interesting case in point may be household surveys collected in India, where many large joint families with several children and elderly members live in the same household and most families are uninsured. It is often seen that health expenses increase with income, as they include expenses on regularly performed preventative medical examinations, which become affordable only after a certain income level is reached. The important covariates are per capita family income, family wealth, 'dependency ratio' (number of children and elderly to the total number of people in the family) and a binary indicator of any history of major illness\/hospitalizations in the family in the past year. Family income per capita and history of major illness can potentially define the change plane.\n\\end{comment}\n\n\n\\input{Real_data_analysis.tex}\n\\section{Conclusion}\n\\label{sec:conclusion}\nIn this paper we have established that under some mild assumptions the kernel-smoothed change plane estimator is asymptotically normal with a near-optimal rate, arbitrarily close to $n^{-1}$. To the best of our knowledge, the state of the art result in this genre of problems is due to \\cite{seo2007smoothed}, where they demonstrate a best possible rate of about $n^{-3\/4}$ for i.i.d. data. The main difference between their approach and ours lies in the proof of Lemma \\ref{bandwidth}. Our techniques are based upon modern empirical process theory, which allows us to consider much smaller bandwidths $\\sigma_n$ compared to those in \\cite{seo2007smoothed}, who appear to require larger values to achieve the result, possibly owing to their reliance on the techniques developed in \\cite{horowitz1992smoothed}. Although we have established that asymptotic normality is attainable with very small bandwidths, we believe that the finite sample approximation to normality (e.g. in the sense of a Berry--Esseen bound) could be poor, which is also evident from our simulations. 
\n\n\n\n\n\n\n\\section{Methodology and Theory for Continuous Response Model}\n\\label{sec:theory_regression}\nIn this section we present our analysis for the continuous response model. Without smoothing, the least squares criterion is: \n$$\nf_{\\beta, \\delta, \\psi}(Y, X, Q) = \\left(Y - X^{\\top}\\beta - X^{\\top}\\delta\\mathds{1}_{Q^{\\top}\\psi > 0}\\right)^2 \n$$\nand we estimate the parameters as: \n\\begin{align}\n\\label{eq:ls_estimator}\n\\left(\\hat \\beta^{LS}, \\hat \\delta^{LS}, \\hat \\psi^{LS}\\right) & = {\\arg\\min}_{(\\beta, \\delta, \\psi) \\in \\Theta} \\mathbb{P}_n f_{\\beta, \\delta, \\psi} \\notag \\\\\n& := {\\arg\\min}_{(\\beta, \\delta, \\psi) \\in \\Theta}\\mathbb{M}_n(\\beta, \\delta, \\psi)\\,.\n\\end{align}\nwhere $\\mathbb{P}_n$ is the empirical measure based on i.i.d. observations $\\{(X_i, Y_i, Q_i)\\}_{i=1}^n$ and $\\Theta$ is the parameter space. Henceforth, we assume $\\Theta$ is a compact subset of ${\\bbR}^{2p+d}$. We also define $\\theta = (\\beta, \\delta, \\psi)$, i.e. all the parameters collected in a single vector, and use $\\theta_0$ to denote the true parameter vector $(\\beta_0, \\delta_0, \\psi_0)$. Some rewriting of equation \\eqref{eq:ls_estimator} leads to the following: \n\\begin{align*}\n(\\hat \\beta^{LS}, \\hat \\delta^{LS}, \\hat \\psi^{LS}) & = {\\arg\\min}_{\\beta, \\delta, \\psi} \\sum_{i=1}^n \\left(Y_i - X_i^{\\top}\\beta - X_i^{\\top}\\delta\\mathds{1}_{Q_i^{\\top}\\psi > 0}\\right)^2 \\\\ \n& = {\\arg\\min}_{\\beta, \\delta, \\psi} \\sum_{i=1}^n \\left[\\left(Y_i - X_i^{\\top}\\beta\\right)^2\\mathds{1}_{Q_i^{\\top}\\psi \\le 0} \\right. \\\\\n& \\hspace{14em} \\left. + \\left(Y_i - X_i^{\\top}\\beta - X_i^{\\top}\\delta\\right)^2\\mathds{1}_{Q_i^{\\top}\\psi > 0} \\right] \\\\\n& = {\\arg\\min}_{\\beta, \\delta, \\psi} \\sum_{i=1}^n \\left[\\left(Y_i - X_i^{\\top}\\beta\\right)^2 + \\left\\{\\left(Y_i - X_i^{\\top}\\beta - X_i^{\\top}\\delta\\right)^2 \\right. \\right. \\\\\n& \\hspace{17em} \\left. \\left. - \\left(Y_i - X_i^{\\top}\\beta\\right)^2\\right\\}\\mathds{1}_{Q_i^{\\top}\\psi > 0} \\right] \n\\end{align*}\nTypical empirical process calculations yield under mild conditions: \n$$\n\\|\\hat \\beta^{LS} - \\beta_0\\|^2 + \\|\\hat \\delta^{LS} - \\delta_0\\|^2 + \\|\\hat \\psi^{LS} - \\psi_0 \\|_2 = O_p(n^{-1})\n$$\nbut inference is difficult as the limit distribution is unknown, and in any case, would be a highly non-standard distribution. Recall that even in the one-dimensional change point model with fixed jump size, the least squares change point estimator converges at rate $n$ to the truth with a non-standard limit distribution, namely a minimizer of a two-sided compound Poisson process (see \\cite{lan2009change} for more details). To obtain a computable estimator with a tractable limiting distribution, we resort to a smooth approximation of the indicator function in \\eqref{eq:ls_estimator} using a distribution kernel with a suitable bandwidth: we replace $\\mathds{1}_{Q_i^{\\top}\\psi > 0}$ by $K(Q_i^{\\top}\\psi\/\\sigma_n)$ for an appropriate distribution function $K$ and bandwidth $\\sigma_n$, i.e. \n\\begin{align*}\n(\\hat \\beta^S, \\hat \\delta^S, \\hat \\psi^S) & = {\\arg\\min}_{\\beta, \\delta, \\psi} \\left\\{ \\frac1n \\sum_{i=1}^n \\left[\\left(Y_i - X_i^{\\top}\\beta\\right)^2 + \\left\\{\\left(Y_i - X_i^{\\top}\\beta - X_i^{\\top}\\delta\\right)^2 \\right. \\right. \\right. \\\\\n& \\hspace{15em} \\left. \\left. \\left. 
- \\left(Y_i - X_i^{\\top}\\beta\\right)^2\\right\\}K\\left(\\frac{Q_i^{\\top}\\psi}{\\sigma_n}\\right) \\right] \\right\\} \\\\\n& = {\\arg\\min}_{(\\beta, \\delta, \\psi) \\in \\Theta} \\mathbb{P}_n f^s_{(\\beta, \\delta, \\psi)}(X, Y, Q) \\\\\n& := {\\arg\\min}_{\\theta \\in \\Theta} \\mathbb{M}^s_n(\\theta) \\,.\n\\end{align*}\nDefine $\\mathbb{M}$ (resp. $\\mathbb{M}^s$) to be the population counterpart of $\\mathbb{M}_n$ and $\\mathbb{M}_n^s$ respectively, which are defined as: \n\\begin{align*}\n\\mathbb{M}(\\theta) & = \\mathbb{E}\\left(Y - X^{\\top}\\beta\\right)^2 + \\mathbb{E}\\left(\\left[-2\\left(Y - X^{\\top}\\beta\\right)X^{\\top}\\delta + (X^{\\top}\\delta)^2\\right] \\mathds{1}_{Q^{\\top}\\psi > 0}\\right) \\,, \\\\\n\\mathbb{M}^s(\\theta) & = \\mathbb{E}\\left[(Y - X^{\\top}\\beta)^2 + \\left\\{-2(Y-X^{\\top}\\beta)(X^{\\top}\\delta) + (X^{\\top}\\delta)^2\\right\\}K\\left(\\frac{Q^{\\top}\\psi}{\\sigma_n}\\right)\\right] \\,.\n\\end{align*}\nAs noted in the proof of \\cite{seo2007smoothed}, the assumption $\\log{n}\/(n\\sigma_n^2) \\to 0$ was only used to show: \n$$\n\\frac{\\left\\|\\hat \\psi^s - \\psi_0\\right\\|}{\\sigma_n} = o_p(1) \\,.\n$$\nIn this paper, we show that one can achieve the same conclusion as long as $n\\sigma_n \\to \\infty$. The rest of the proof of normality is similar to that of \\cite{seo2007smoothed}; we present it briefly for the ease of the readers. The proof is quite long and technical; therefore we break it into several lemmas. We first list our assumptions: \n\\begin{assumption}\n\\label{eq:assm}\n\\begin{enumerate}\n\\item Define $f_\\psi(\\cdot \\mid \\tilde Q)$ to be the conditional density of $Q^{\\top}\\psi$ given $\\tilde Q$. (In particular, we will denote by $f_0(\\cdot \\mid \\tilde q)$ the conditional density of $Q^{\\top}\\psi_0$ given $\\tilde Q$ and by $f_s(\\cdot \\mid \\tilde q)$ the conditional density of $Q^{\\top}\\psi_0^s$ given $\\tilde Q$.) Assume that there exists $F_+$ such that $\\sup_t f_\\psi(t \\mid \\tilde Q) \\le F_+$ almost surely in $\\tilde Q$, for all $\\psi$ in a neighborhood of $\\psi_0$ (in particular for $\\psi_0^s$). Further assume that $f_\\psi$ is differentiable and the derivative is bounded by $F_+$ for all $\\psi$ in a neighborhood of $\\psi_0$ (again in particular for $\\psi_0^s$).\n\\vspace{0.1in}\n\\item Define $g(Q) = {\\sf var}(X \\mid Q)$. There exist $c_-$ and $c_+$ such that $c_- \\le \\lambda_{\\min}(g(Q)) \\le \\lambda_{\\max}(g(Q)) \\le c_+$ almost surely. Also assume that $g$ is Lipschitz in $Q$ with constant $G_+$. \n\\vspace{0.1in}\n\\item There exist $p_+ < \\infty$, $p_- > 0$ and $r > 0$ such that: \n$$\np_- \\|\\psi - \\psi_0\\| \\le \\mathbb{P}\\left(\\text{sign}\\left(Q^{\\top}\\psi\\right) \\neq \\text{sign}\\left(Q^{\\top}\\psi_0\\right)\\right) \\le p_+ \\|\\psi - \\psi_0\\| \\,,\n$$\nfor all $\\psi$ such that $\\|\\psi - \\psi_0\\| \\le r$. \n\\vspace{0.1in}\n\\item For all $\\psi$ in the parameter space, $0 < \\mathbb{P}\\left(Q^{\\top}\\psi > 0\\right) < 1$. \n\\vspace{0.1in} \n\\item Define $m_2(Q) = \\mathbb{E}\\left[\\|X\\|^2 \\mid Q\\right]$ and $m_4(Q) = \\mathbb{E}\\left[\\|X\\|^4 \\mid Q\\right]$. Assume $m_2, m_4$ are bounded Lipschitz functions of $Q$. \n\\end{enumerate}\n\\end{assumption}\n\n\n\\subsection{Sufficient conditions for the above assumptions}\nWe now demonstrate some sufficient conditions for the above assumptions to hold. 
The first condition is essentially a condition on the conditional density of the first co-ordinate of $Q$ given all other co-ordinates. If this conditional density is bounded and has a bounded derivative, then the first assumption is satisfied; this holds in fair generality. The second assumption implies that the conditional distribution of $X$ given $Q$ has non-degenerate variance in all directions, for all $Q$. This is also a very weak condition: it is satisfied, for example, if $X$ and $Q$ are independent (with $X$ having a non-degenerate covariance matrix), or if $(X, Q)$ is jointly normally distributed, to name a few cases. This condition can further be weakened by assuming that the maximum and minimum eigenvalues of $\\mathbb{E}[g(Q)]$ are bounded away from $\\infty$ and $0$ respectively, but this requires more tedious book-keeping. The third assumption is satisfied as long as $Q^{\\top}\\psi$ has non-zero density near the origin, while the fourth assumption merely states that the support of $Q$ is not confined to one side of any hyperplane; a simple sufficient condition for this is that $Q$ has a continuous density that is non-zero at the origin. The last assumption is the analogue of the second assumption for the conditional fourth moment and is also satisfied in fair generality. \n\\\\\\\\\n\\noindent\n{\\bf Kernel function and bandwidth: } We take $K(x) = \\Phi(x)$ (the distribution function of a standard normal random variable) for our analysis. For the bandwidth we assume $n\\sigma_n^2 \\to 0$ and $n \\sigma_n \\to \\infty$, as the other case (i.e. $n\\sigma_n^2 \\to \\infty$) is already established in \\cite{seo2007smoothed}. \n\\\\\\\\\n\\noindent\nBased on Assumption \\ref{eq:assm} and our choice of kernel and bandwidth we establish the following theorem: \n\\begin{theorem}\n\\label{thm:regression}\nUnder Assumption \\ref{eq:assm} and the above choice of kernel and bandwidth we have: \n$$\n\\sqrt{n}\\begin{pmatrix}\\begin{pmatrix} \\hat \\beta^s \\\\ \\hat \\delta^s \\end{pmatrix} - \\begin{pmatrix} \\beta_0 \\\\ \\delta_0 \\end{pmatrix} \\end{pmatrix} \\overset{\\mathscr{L}}{\\implies} \\mathcal{N}(0, \\Sigma_{\\beta, \\delta})\n$$\nand \n$$\n\\sqrt{n\/\\sigma_n} \\left(\\hat \\psi^s - \\psi_0\\right) \\overset{\\mathscr{L}}{\\implies} \\mathcal{N}(0, \\Sigma_\\psi) \\,,\n$$\nfor matrices $\\Sigma_{\\beta, \\delta}$ and $\\Sigma_\\psi$ mentioned explicitly in the proof. Moreover, they are asymptotically independent. \n\\end{theorem}\nThe proof of the theorem is relatively long, so we break it into several lemmas. We provide a roadmap of the proof in this section, while the elaborate technical derivations of the supporting lemmas can be found in the Appendix. Let $\\nabla \\mathbb{M}_n^s(\\theta)$ and $\\nabla^2 \\mathbb{M}_n^s(\\theta)$ be the gradient and Hessian of $\\mathbb{M}_n^s(\\theta)$ with respect to $\\theta$. As $\\hat \\theta^s$ minimizes $\\mathbb{M}_n^s(\\theta)$, the first order condition gives $\\nabla \\mathbb{M}_n^s(\\hat \\theta^s) = 0$. 
Using a one-step Taylor expansion, we have:\n\\allowdisplaybreaks \n\\begin{align*}\n\\nabla \\mathbb{M}_n^s(\\hat \\theta^s) = \\nabla \\mathbb{M}_n^s(\\theta_0) + \\nabla^2 \\mathbb{M}_n^s(\\theta^*)\\left(\\hat \\theta^s - \\theta_0\\right) = 0\n\\end{align*}\ni.e.\n\\begin{equation}\n\\label{eq:main_eq} \n\\left(\\hat{\\theta}^s - \\theta_0\\right) = -\\left(\\nabla^2 \\mathbb{M}_n^s(\\theta^*)\\right)^{-1} \\nabla \\mathbb{M}_n^s(\\theta_0)\n\\end{equation}\nfor some intermediate point $\\theta^*$ between $\\hat \\theta^s$ and $\\theta_0$. \nFollowing the notation of \\cite{seo2007smoothed}, define a diagonal matrix $D_n$ of dimension $2p + d$ with the first $2p$ elements being 1 and the last $d$ elements being $\\sqrt{\\sigma_n}$. We can then write: \n\\begin{align}\n\\sqrt{n}D_n^{-1}(\\hat \\theta^s - \\theta_0) & = - \\sqrt{n}D_n^{-1}\\nabla^2\\mathbb{M}_n^s(\\theta^*)^{-1}\\nabla \\mathbb{M}_n^s(\\theta_0) \\notag \\\\\n\\label{eq:taylor_main} & = -\\begin{pmatrix} \\nabla^2\\mathbb{M}_n^{s, \\gamma}(\\theta^*) & \\sqrt{\\sigma_n}\\nabla^2\\mathbb{M}_n^{s, \\gamma \\psi}(\\theta^*) \\\\\n\\sqrt{\\sigma_n}\\nabla^2\\mathbb{M}_n^{s, \\gamma \\psi}(\\theta^*) & \\sigma_n\\nabla^2\\mathbb{M}_n^{s, \\psi}(\\theta^*)\\end{pmatrix}^{-1}\\begin{pmatrix} \\sqrt{n}\\nabla \\mathbb{M}_n^{s, \\gamma}(\\theta_0) \\\\ \\sqrt{n\\sigma_n}\\nabla \\mathbb{M}_n^{s, \\psi}(\\theta_0)\\end{pmatrix}\n\\end{align}\nwhere $\\gamma = (\\beta, \\delta) \\in {\\bbR}^{2p}$. The following lemma establishes the asymptotic properties of $\\nabla \\mathbb{M}_n^s(\\theta_0)$: \n\\begin{lemma}[Asymptotic Normality of $\\nabla \\mathbb{M}_n^s(\\theta_0)$]\n\\label{asymp-normality}\nUnder Assumption \\ref{eq:assm} we have: \n\\begin{align*}\n\\sqrt{n}\\nabla \\mathbb{M}_n^{s, \\gamma}(\\theta_0) \\implies \\mathcal{N}\\left(0, 4V^{\\gamma}\\right) \\,,\\\\\n\\sqrt{n\\sigma_n}\\nabla \\mathbb{M}_n^{s, \\psi}(\\theta_0) \\implies \\mathcal{N}\\left(0, V^{\\psi}\\right) \\,.\n\\end{align*} \nfor some n.n.d. matrices $V^{\\gamma}$ and $V^{\\psi}$ which are mentioned explicitly in the proof. Furthermore, $\\sqrt{n}\\nabla \\mathbb{M}_n^{s, \\gamma}(\\theta_0)$ and $\\sqrt{n\\sigma_n}\\nabla \\mathbb{M}_n^{s, \\psi}(\\theta_0)$ are asymptotically independent. \n\\end{lemma}\n\\noindent\nNext, we analyze the convergence of $\\nabla^2 \\mathbb{M}_n^s(\\theta^*)$, which is stated in the following lemma: \n\\begin{lemma}[Convergence in Probability of $\\nabla^2 \\mathbb{M}_n^s(\\theta^*)$]\n\\label{conv-prob}\nUnder Assumption \\ref{eq:assm}, for any random sequence $\\breve{\\theta} = \\left(\\breve{\\beta}, \\breve{\\delta}, \\breve{\\psi}\\right)$ such that $\\breve{\\beta} \\overset{p}{\\to} \\beta_0, \\breve{\\delta} \\overset{p}{\\to} \\delta_0, \\|\\breve{\\psi} - \\psi_0\\|\/\\sigma_n \\overset{P} \\rightarrow 0$, we have: \n\\begin{align*}\n\\nabla^2_{\\gamma} \\mathbb{M}_n^s(\\breve{\\theta}) & \\overset{p}{\\longrightarrow} 2Q^{\\gamma} \\,, \\\\\n\\sqrt{\\sigma_n}\\nabla^2_{\\psi \\gamma} \\mathbb{M}_n^s(\\breve{\\theta}) & \\overset{p}{\\longrightarrow} 0 \\,, \\\\\n\\sigma_n \\nabla^2_{\\psi} \\mathbb{M}_n^s(\\breve{\\theta}) & \\overset{p}{\\longrightarrow} Q^{\\psi} \\,.\n\\end{align*}\nfor some matrices $Q^{\\gamma}, Q^{\\psi}$ mentioned explicitly in the proof. 
This, along with equation \\eqref{eq:taylor_main}, establishes: \n\\begin{align*}\n\\sqrt{n}\\left(\\hat \\gamma^s - \\gamma_0\\right) & \\overset{\\mathscr{L}}{\\implies} \\mathcal{N}\\left(0, Q^{\\gamma^{-1}}V^{\\gamma}Q^{\\gamma^{-1}}\\right) \\,, \\\\\n\\sqrt{n\/\\sigma_n}\\left(\\hat \\psi^s - \\psi_0\\right) & \\overset{\\mathscr{L}}{\\implies} \\mathcal{N}\\left(0, Q^{\\psi^{-1}}V^{\\psi}Q^{\\psi^{-1}}\\right) \\,.\n\\end{align*}\nwhere as before $\\hat \\gamma^s = (\\hat \\beta^s, \\hat \\delta^s)$. \n\\end{lemma}\nIt will be shown later that the condition $\\|\\breve{\\psi}_n - \\psi_0\\|\/\\sigma_n \\overset{P} \\rightarrow 0$ needed in Lemma \\ref{conv-prob} holds for the (random) sequence $\\psi^*$, the intermediate point in the Taylor expansion. Then, combining Lemma \\ref{asymp-normality} and Lemma \\ref{conv-prob} we conclude the proof of Theorem \\ref{thm:regression}.\nObserve that, to show $\\left\\|\\psi^* - \\psi_0 \\right\\| = o_P(\\sigma_n)$, it suffices to prove that $\\left\\|\\hat \\psi^s - \\psi_0 \\right\\| = o_P(\\sigma_n)$. Towards that direction, we have the following lemma: \n\n\\begin{lemma}[Rate of convergence]\n\\label{lem:rate_smooth}\nUnder Assumption \\ref{eq:assm} and our choice of kernel and bandwidth, \n$$\nn^{2\/3}\\sigma_n^{-1\/3} d^2_*\\left(\\hat \\theta^s, \\theta_0^s\\right) = O_P(1) \\,,\n$$\nwhere \n\\begin{align*}\nd_*^2(\\theta, \\theta_0^s) & = \\|\\beta - \\beta_0^s\\|^2 + \\|\\delta - \\delta_0^s\\|^2 \\\\\n& \\qquad \\qquad + \\frac{\\|\\psi - \\psi_0^s\\|^2}{\\sigma_n} \\mathds{1}_{\\|\\psi - \\psi_0^s\\| \\le \\mathcal{K}\\sigma_n} + \\|\\psi - \\psi_0^s\\| \\mathds{1}_{\\|\\psi - \\psi_0^s\\| > \\mathcal{K}\\sigma_n} \\,.\n\\end{align*}\nfor some specific constant $\\mathcal{K}$. (This constant will be mentioned precisely in the proof). Hence as $n\\sigma_n \\to \\infty$, we have $n^{2\/3}\\sigma_n^{-1\/3} \\gg \\sigma_n^{-1}$ which implies $\\|\\hat \\psi^s - \\psi_0^s\\|\/\\sigma_n \\overset{P} \\longrightarrow 0 \\,.$\n\\end{lemma}\n\\noindent\nThe above lemma establishes $\\|\\hat \\psi^s - \\psi_0^s\\|\/\\sigma_n = o_p(1)$ but our goal is to show that $\\|\\hat \\psi^s - \\psi_0\\|\/\\sigma_n = o_p(1)$. Therefore, we further need $\\|\\psi^s_0 - \\psi_0\\|\/\\sigma_n \\rightarrow 0$, which is demonstrated in the following lemma:\n\n\\begin{lemma}[Convergence of population minimizer]\n\\label{bandwidth}\nUnder Assumption \\ref{eq:assm} and our choice of kernel and bandwidth, we have: $\\|\\psi^s_0 - \\psi_0\\|\/\\sigma_n \\rightarrow 0$. \n\\end{lemma}\n\n\\noindent\nHence the final roadmap is the following: Using Lemma \\ref{bandwidth} and Lemma \\ref{lem:rate_smooth} we establish that $\\|\\hat \\psi^s - \\psi_0\\|\/\\sigma_n = o_p(1)$ if $n\\sigma_n \\to \\infty$. This, in turn, enables us to prove Lemma \\ref{conv-prob}, i.e. $\\sigma_n \\nabla^2 \\mathbb{M}_n^s(\\theta^*) \\overset{P} \\rightarrow Q$, which, along with Lemma \\ref{asymp-normality}, establishes the main theorem. \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\nModeling the long-timescale behavior of complex dynamical systems is a fundamental task in the physical sciences. In principle, molecular dynamics (MD) simulations allow us to probe the spatiotemporal details of molecular processes, but the so-called sampling problem severely limits their usefulness in practice. 
This sampling problem comes from the fact that a typical free energy landscape consists of many metastable states separated by free energy barriers much higher than the thermal energy $\\kT$. Therefore, on the timescale one can simulate, barrier crossings are rare events, and the system remains kinetically trapped in a single metastable state.\n\nOne way to alleviate the sampling problem is to employ enhanced sampling methods~\\cite{abrams2014enhanced,valsson2016enhancing}. In particular, one class of such methods works by identifying a few critical slow degrees of freedom, commonly referred to as collective variables (CVs), and then enhancing their fluctuations by introducing an external bias potential~\\cite{valsson2016enhancing,yang2019enhanced,bussi2020using}. The performance of CV-based enhanced sampling methods depends heavily on the quality of the CVs. Effective CVs should discriminate between the relevant metastable states and include most of the slow degrees of freedom~\\cite{noe2017collective}. Typically, the CVs are selected manually by using physical and chemical intuition. Within the enhanced sampling community, numerous generally applicable CVs~\\cite{abrams2014enhanced,pietrucci_strategies_2017,rydzewski2017ligand} have been developed and implemented in open-source codes~\\cite{Fiorin_2013,tribello2014plumed,Sidky_2018}. However, despite immense progress in devising CVs, it may be far from trivial to find a set of CVs that quantify all the essential characteristics of a molecular system.\n\nMachine learning (ML) techniques, in particular dimensionality reduction or representation learning methods~\\cite{murdoch2019definitions,xie2020representation}, provide a possible solution to this problem by automatically finding or constructing the CVs directly from the simulation data~\\cite{wang2020machine,noe2020machine,Gkeka2020mlffcgv,sidky2020machine}. Such dimensionality reduction methods typically work in a high-dimensional feature space (e.g., distances, dihedral angles, or more intricate functions~\\cite{geiger2013neural,rogal2019neural, musil2021physicsinspired}) instead of directly using the microscopic coordinates, as this is much more efficient. Dimensionality reduction may employ linear or nonlinear transformations, e.g., diffusion map~\\cite{coifman2005geometric,coifman2006diffusion,nadler2006diffusion,coifman2008diffusion}, stochastic neighbor embedding (SNE)~\\cite{hinton2002stochastic,maaten2008visualizing,maaten2009learning}, sketch-map~\\cite{ceriotti2011simplifying,tribello2012using}, and UMAP~\\cite{mcinnes2018umap}. In recent years, there has been a growing interest in performing nonlinear dimensionality reduction with deep neural networks (NNs) to provide parametric embeddings. Inspired by the seminal work of Ma and Dinner~\\cite{Ma2005autorc}, several such techniques recently applied to finding CVs include variational autoencoders~\\cite{chen2018molecular,hernandez2018variational,ribeiro2018reweighted,chen2018collective}, time-lagged autoencoders~\\cite{wehmeyer2018time}, symplectic flows~\\cite{li2020neural}, stochastic kinetic embedding~\\cite{zhang2018unfolding}, and encoder-map~\\cite{Lemke2019EncMap}.\n\nThis work proposes a novel technique called multiscale reweighted stochastic embedding (MRSE) that unifies dimensionality reduction via deep NNs and enhanced sampling methods. The method constructs a low-dimensional representation of CVs by learning a parametric embedding from a high-dimensional feature space to a low-dimensional latent space. 
Our work builds upon various SNE methods~\\cite{hinton2002stochastic,maaten2008visualizing,maaten2009learning,van2014accelerating}. We introduce several new aspects to SNE that make MRSE particularly suitable for enhanced sampling simulations:\n\\begin{enumerate}\n \\item Weight-tempered random sampling as a landmark selection scheme to obtain training data sets that strike a balance between an equilibrium representation and capturing important metastable states lying higher in free energy.\n \\item A multiscale representation of the high-dimensional feature space via a Gaussian mixture probability model.\n \\item A reweighting procedure to account for the sampling of the training data from a biased probability distribution.\n\\end{enumerate}\n\nWe note that the overall objective of our research is to employ MRSE within an enhanced sampling scheme and improve the learned CVs iteratively. However, we focus mainly on the learning procedure for training data from enhanced sampling simulations in this work. Therefore, to eliminate the influence of possible incomplete sampling, we employ idealistic sampling conditions that are generally not achievable in practice~\\cite{pant2020statistical}. To gauge the performance of the learning procedure and the quality of the resulting embeddings, we apply MRSE to three model systems (the M\\\"uller-Brown potential, alanine dipeptide, and alanine tetrapeptide) and provide a thorough analysis of the results.\n\n\n\n\n\n\\section{Methods}\n\\label{sec:methods}\n\\subsection{Collective Variable Based Enhanced Sampling}\n\\label{sec:cv_based_methods}\nWe start by giving a theoretical background on CV-based enhanced sampling methods. We consider a molecular system, described by microscopic coordinates $\\mathbf{R}$ and a potential energy function $U(\\mathbf{R})$, which we want to study using MD or Monte Carlo simulations. Without loss of generality, we limit our discussion to the canonical ensemble (NVT). At equilibrium, the microscopic coordinates follow the Boltzmann distribution, $P(\\mathbf{R}) = \\e^{-\\beta U(\\mathbf{R})}\/\\int \\d\\mathbf{R} \\,\\e^{-\\beta U(\\mathbf{R})}$, where $\\beta = (k_{\\mathrm{B}}T)^{-1}$ is the inverse of the thermal energy.\n\nIn CV-based enhanced sampling methods, we identify a small set of coarse-grained order parameters that correspond to the essential slow degrees of freedom, referred to as CVs. The CVs are defined as $\\mathbf{s}(\\mathbf{R}) = [s_1(\\mathbf{R}), s_2(\\mathbf{R}), \\ldots, s_d(\\mathbf{R})]$, where $d$ is the number of CVs (i.e., the dimension of the CV space), and the dependence on $\\mathbf{R}$ can be either explicit or implicit. Having defined the CVs, we obtain their equilibrium marginal distribution by integrating out all other degrees of freedom:\n\\begin{align}\n\\label{eq:ps}\n P(\\mathbf{s}) =\n \\int \\d\\mathbf{R} \\, \\delta\n \\left[\n \\mathbf{s} - \\mathbf{s}(\\mathbf{R})\n \\right]\n P(\\mathbf{R}),\n\\end{align}\nwhere $\\delta[\\cdot]$ is the Dirac delta function. The integral in eq~\\ref{eq:ps} is equivalent to $\\big< \\delta[\\mathbf{s}-\\mathbf{s(R)}] \\big>$, where $\\left<\\cdot\\right>$ denotes an ensemble average. Up to an unimportant constant, the free energy surface (FES) is given by $F(\\mathbf{s})= -\\beta^{-1} \\log P(\\mathbf{s})$. In systems plagued by sampling problems, the FES consists of many metastable states separated by free energy barriers much larger than the thermal energy $k_{\\mathrm{B}}T$. 
Therefore, on the timescales we can simulate, the system stays kinetically trapped and is unable to explore the full CV space. In other words, barrier crossings between metastable states are rare events.\n\nCV-based enhanced sampling methods overcome the sampling problem by introducing an external bias potential $V(\\mathbf{s}(\\mathbf{R}))$ acting in CV space. This leads to sampling according to a biased distribution $P_{V}(\\mathbf{R}) = \\e^{-\\beta \\left[U(\\mathbf{R})+V(\\mathbf{s}(\\mathbf{R})) \\right]}\/\\int \\d\\mathbf{R} \\,\\e^{-\\beta \\left[U(\\mathbf{R})+V(\\mathbf{s}(\\mathbf{R})) \\right]}$. We can trace this idea of non-Boltzmann sampling back to the seminal work by Torrie and Valleau published in 1977~\\cite{torrie1977nonphysical}. Most CV-based methods adaptively construct the bias potential on-the-fly during the simulation to reduce free energy barriers or even completely flatten them. At convergence, the CVs follow a biased distribution:\n\\begin{equation}\n P_{V}(\\mathbf{s}) =\n \\int \\d\\mathbf{R} \\, \\delta\n \\left[\n \\mathbf{s} - \\mathbf{s}(\\mathbf{R})\n \\right]\n P_{V}(\\mathbf{R}) =\n \\frac{ \\e^{ -\\beta\\left[ F(\\mathbf{s}) + V(\\mathbf{s}) \\right] } }\n {\\int\\d\\mathbf{s} \\, \\e^{ -\\beta\\left[ F(\\mathbf{s}) + V(\\mathbf{s}) \\right] } },\n\\end{equation}\nthat is easier to sample. CV-based methods differ in how they construct the bias potential and which kind of biased CV sampling they obtain at convergence. A non-exhaustive list of modern CV-based enhanced sampling techniques includes multiple windows umbrella sampling~\\cite{Kastner2011umbreallsampling}, adaptive biasing force~\\cite{Darve-JCP-2001,Comer2015_TheAdaptiveBiasing,Lesage2016_Smoothed}, Gaussian-mixture umbrella sampling~\\cite{Maragakis-JPCB-2009}, metadynamics~\\cite{laio2002escaping,barducci2008well,valsson2016enhancing}, variationally enhanced sampling~\\cite{valsson2014variational,Valsson2020_VES}, on-the-fly probability-enhanced sampling (OPES)~\\cite{Invernizzi2020opus,invernizzi2020unified}, and ATLAS~\\cite{gilberti2020atlas}. In the following, we focus on well-tempered metadynamics (WT-MetaD)~\\cite{barducci2008well,valsson2016enhancing}. However, we can use MRSE with almost any CV-based enhanced sampling approach.\n\nIn WT-MetaD, the time-dependent bias potential is constructed by periodically depositing repulsive Gaussian kernels at the current location in CV space. Based on the previously deposited bias, the Gaussian height is scaled such that it gradually decreases over time~\\cite{barducci2008well}. In the long-time limit, the Gaussian height goes to zero. As has been proven~\\cite{PhysRevLett.112.240602}, the bias potential at convergence is related to the free energy by:\n\\begin{equation}\n \\label{eq:wt-bias_infty}\n V(\\bs,t\\to\\infty) = -\\left( 1-\\frac{1}{\\gamma} \\right) F(\\bs),\n\\end{equation}\nand we obtain a so-called well-tempered distribution for the CVs:\n\\begin{equation}\n \\label{eq:wt-pv}\n P_{V}(\\mathbf{s}) = \\frac{ \\left[ P(\\mathbf{s}) \\right]^{1\/\\gamma} }\n {\\int\\d\\mathbf{s}\\, \\left[ P(\\mathbf{s}) \\right]^{1\/\\gamma}},\n\\end{equation}\nwhere $\\gamma>1$ is a parameter called bias factor that determines how much we enhance CV fluctuations. The limit $\\gamma\\to 1$ corresponds to the unbiased ensemble, while the limit $\\gamma\\to\\infty$ corresponds to conventional (non-well-tempered) metadynamics~\\cite{laio2002escaping}. 
If we take the logarithm of both sides of eq~\\ref{eq:wt-pv}, we can see that sampling the well-tempered distribution is equivalent to sampling an effective FES, $F_{\\gamma}(\\bs) = F(\\bs)\/\\gamma$, where the barriers of the original FES are reduced by a factor of $\\gamma$. In general, one should select a bias factor $\\gamma$ such that the effective free energy barriers are on the order of the thermal energy $k_{\\mathrm{B}}T$.\n\nDue to the external bias potential, each microscopic configuration $\\mathbf{R}$ carries an additional statistical weight $w(\\mathbf{R})$ that needs to be taken into account when calculating equilibrium properties. For a static bias potential, the weight is time-independent and given by $w(\\mathbf{R})=\\e^{\\beta V(\\mathbf{s}(\\mathbf{R}))}$. In WT-MetaD, however, we need to take into account the time-dependence of the bias potential, and thus, the weight is modified in the following way:\n\\begin{equation}\n \\label{eq:weight-wtm}\n w(\\mathbf{R},t)=\\exp[\\beta \\tilde{V}( \\mathbf{s}(\\mathbf{R}), t )],\n\\end{equation}\nwhere $\\tilde{V}(\\mathbf{s}(\\mathbf{R}),t)=V(\\mathbf{s}(\\mathbf{R}),t)-c(t)$ is the relative bias potential modified by introducing $c(t)$, a time-dependent constant that can be calculated from the bias potential at time $t$ as~\\cite{tiwary_rewt,valsson2016enhancing}:\n\\begin{equation}\n \\label{eq:coft}\n c(t)=\\frac{1}{\\beta}\\log{\n \\frac{\\int \\d\\mathbf{s}\\,\n \\exp\\left[\n \\frac{\\gamma}{\\gamma-1} \\beta V(\\mathbf{s},t)\n \\right]}\n {\\int \\d\\mathbf{s}\\,\n \\exp\\left[\n \\frac{1}{\\gamma-1} \\beta V(\\mathbf{s},t)\n \\right]}}.\n\\end{equation}\nThere are also other ways to reweight WT-MetaD simulations~\\cite{bonomi_rewt,Branduardi-JCTC-2012,Giberti_2019,Sch_fer_2020}.\n\nIn MD simulations, we need to know not only the values of the CVs but also their derivatives with respect to the microscopic coordinates, $\\nabla_{\\mathbf{R}} \\, \\mathbf{s}(\\mathbf{R})$. The derivatives are needed to calculate the biasing force $-\\nabla_{\\mathbf{R}} \\, V(\\mathbf{s}(\\mathbf{R})) = -\\partial_{\\mathbf{s}} V(\\mathbf{s}) \\cdot\\nabla_{\\mathbf{R}} \\, \\mathbf{s}(\\mathbf{R})$. In practice, however, the CVs might not depend directly on $\\mathbf{R}$, but rather indirectly through a set of some other input variables (e.g., features). We can even define a CV that is a chain of multiple variables that depend sequentially on each other. In such cases, it is sufficient to know the derivatives of the CVs with respect to the input variables, as we can obtain the total derivatives via the chain rule. In codes implementing CVs and enhanced sampling methods~\\cite{Fiorin_2013,tribello2014plumed,Sidky_2018}, like \\textsc{plumed}~\\cite{tribello2014plumed,plumed-nest}, the handling of the chain rule is done automatically. Thus, when implementing a new CV, we only need to calculate its values and derivatives with respect to the input variables.\n\nHaving provided the basics of CV-based enhanced sampling simulations, we now introduce our method for learning CVs.\n\n\n\n\n\n\\subsection{Multiscale Reweighted Stochastic Embedding (MRSE)}\n\\label{sec:mrse}\nThe basis of our method is the $t$-distributed variant of stochastic neighbor embedding ($t$-SNE)~\\cite{maaten2008visualizing}, a dimensionality reduction algorithm for visualizing high-dimensional data, for instance, generated by unbiased MD simulations~\\cite{rydzewski2016machine,zhou2018t,spiwok2020time,fleetwood2021identification}. 
We introduce here a parametric and multiscale variant of SNE aimed at learning CVs from atomistic simulations. In particular, we focus on using the method within enhanced sampling simulations, where we need to consider biased simulation data. We refer to this method as multiscale reweighted stochastic embedding or MRSE.\n\nWe consider a high-dimensional feature space, $\\bx=[x_1, \\dots, x_k]$, of dimension $k$. The features could be distances, dihedral angles, or some more complex functions~\\cite{geiger2013neural,rogal2019neural,musil2021physicsinspired}, which depend on the microscopic coordinates. We introduce a parametric embedding function $f_\\bt(\\bx)=\\bs(\\bx)$, which depends on parameters $\\bt$, to map from the high-dimensional feature space to the low-dimensional latent space (i.e., the CV space), $\\bs=[s_1, \\dots, s_d]$, of dimension $d$. From a molecular simulation, we collect $N$ observations (or simply samples) of the features, $[\\bx_1, \\dots, \\bx_N]^T$, that we use as training data. Using these definitions, the problem of finding a low-dimensional set of CVs amounts to using the training data to find an optimal parametrization for the embedding function given a nonlinear ML model. We can then use the embedding as CVs and project any point in feature space to CV space.\n\nIn SNE methods, this problem is approached by taking the training data and modeling the pairwise probability distributions for distances in the feature and latent space. To establish the notation, we write the pairwise probability distributions as ${\\bf M}=(p_{ij})$ and $\\sfQ=(q_{ij})$, where $1\\leq i,j \\leq N$, for the feature and the latent space, respectively. For the pairwise probability distribution ${\\bf M}$ ($\\sfQ$), the interpretation of a single element $p_{ij}$ ($q_{ij}$) is that the higher the value, the higher the probability of picking $\\bx_j$ ($\\bs_j$) as a neighbor of $\\bx_i$ ($\\bs_i$). The mapping from the feature space to the latent space is then varied by adjusting the parameters $\\bt$ to minimize a loss function that measures the statistical difference between the two pairwise probability distributions. In the following, we explicitly introduce the pairwise probability distributions and the loss function used in MRSE.\n\n\n\n\n\n\n\\subsubsection{Feature Pairwise Probability Distribution}\n\\label{sec:feature_distribution}\nWe model the feature pairwise probability distribution for a pair of samples $\\bx_i$ and $\\bx_j$ from the training data as a discrete Gaussian mixture. Each term in the mixture is a Gaussian kernel:\n\\begin{equation}\n K_{\\varepsilon_i}(\\bx_i,\\bx_j)=\\exp\\left(-\\varepsilon_i\\|\\bx_i-\\bx_j\\|^2_2\\right)\n\\end{equation}\nthat is characterized by a scale parameter $\\varepsilon_i$ associated with feature sample $\\bx_i$. A scale parameter is defined as $\\varepsilon_i=1\/(2\\sigma^2_i)$, where $\\sigma_i$ is the standard deviation (i.e., bandwidth) of the Gaussian kernel. Because $\\varepsilon_i \\neq \\varepsilon_j$, the kernels are not symmetric. To measure the distance between data points, we employ the Euclidean distance $\\|\\cdot\\|_2$ as an appropriate metric for representing high-dimensional data on a low-dimensional manifold~\\cite{globerson2007euclidean}. 
Then, a pair $\\bx_i$ and $\\bx_j$ of points close to each other, as measured by the Euclidean distance, has a high probability of being neighbors.\n\n\\begin{figure}[htp]\n \\includegraphics[width=0.5\\columnwidth]{fig-probabilities.pdf}\n \\caption{Schematic representation depicting how MRSE (and $t$-SNE) preserves the local structure of high-dimensional data. The pairwise probability distributions are represented by Gaussian kernels in the high-dimensional feature space and by the $t$-distribution kernels in the low-dimensional latent space. The minimization of the Kullback-Leibler divergence between the pairwise probability distributions forces similar feature samples to lie close to each other and separates dissimilar feature samples in the latent space. As the difference between the distributions fulfills $\\Delta'>\\Delta$, MRSE is likely to group close-by points into metastable states that are well separated.}\n \\label{fig:probabilities}\n\\end{figure}\n\nFor training data obtained from an enhanced sampling simulation, we need to correct the feature pairwise probability distribution because each feature sample $\\bx$ has an associated statistical weight $w(\\bx)$. To this aim, we introduce a reweighted Gaussian kernel as:\n\\begin{equation}\n \\label{eq:reweighted_kernel}\n \\tilde{K}_{\\varepsilon_{i}}(\\bx_i, \\bx_j) =\n r(\\bx_i, \\bx_j) K_{\\varepsilon_{i}}(\\bx_i,\\bx_j),\n\\end{equation}\nwhere $r(\\bx_i, \\bx_j)=\\sqrt{w(\\bx_i)w(\\bx_j)}$ is a pairwise reweighting factor. As noted previously, the exact expression for the weights depends on the enhanced sampling method used. For training data from an unbiased simulation, or if we do not incorporate the weights into the training, all the weights are equal to one and $r(\\bx_i, \\bx_j) \\equiv 1$ for $1 \\leq i,j \\leq N$.\n\nA reweighted pairwise probability distribution for the feature space is then written as:\n\\begin{equation}\n\\label{eq:ppp}\n \\sfP=\n \\Big(\n p^{\\boldsymbol{\\varepsilon}}_{ij}\n \\Big)_{1\\leq i,j \\leq N}\n ~~\\text{and}~~\n p^{\\boldsymbol{\\varepsilon}}_{ij}=\n \\frac{\\tilde{K}_{\\varepsilon_{i}}(\\bx_i, \\bx_j)}\n {\\sum_{k} \\tilde{K}_{\\varepsilon_{i}}(\\bx_i, \\bx_k)},\n\\end{equation}\nwith $p^{\\boldsymbol{\\varepsilon}}_{ii}=0$. This equation represents the reweighted pairwise probability of features $\\bx_i$ and $\\bx_j$ for a given set of scale parameters $\\boldsymbol{\\varepsilon}=[\\varepsilon_1,\\varepsilon_2, \\dots, \\varepsilon_N]$, where each scale parameter is assigned to a row of the matrix $\\sfP$. The pairwise probabilities $p^{\\boldsymbol{\\varepsilon}}_{ij}$ are not symmetric due to the different values of the scale parameters ($\\varepsilon_i \\neq \\varepsilon_j$), which is in contrast to $t$-SNE, where the symmetry of the feature pairwise probability distribution is enforced~\\cite{maaten2008visualizing}.\n\nAs explained in Section~\\ref{sec:multiscale_rep} below, the multiscale feature pairwise probability distribution ${\\bf M}$ is written as a mixture of such pairwise probability distributions, each with a different set of scale parameters. 
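\nTo make eq~\\ref{eq:ppp} concrete, the following minimal sketch computes the reweighted pairwise probabilities for a fixed vector of scale parameters (fitted in the next section); all names are illustrative, and the weights are assumed to be precomputed from the biased simulation. \n\\begin{verbatim}\n# Sketch: reweighted feature pairwise probabilities (eq ppp).\nimport numpy as np\n\ndef pairwise_probabilities(X, w, eps):\n    # X: (N, k) features; w: (N,) weights; eps: (N,) scale parameters\n    d2 = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)\n    K = np.exp(-eps[:, None] * d2)        # row-wise Gaussian kernels\n    K *= np.sqrt(np.outer(w, w))          # reweighting factor r(x_i, x_j)\n    np.fill_diagonal(K, 0.0)              # enforce p_ii = 0\n    return K \/ K.sum(axis=1, keepdims=True)\n\\end{verbatim}\n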
In the next section, we describe how to calculate the scale parameters for the probability distribution given by eq~\\ref{eq:ppp}.\n\n\n\n\n\\subsubsection{Entropy of the Reweighted Feature Probability Distribution}\n\\label{sec:entropies}\nThe scale parameters $\\boldsymbol{\\varepsilon}$ used for the reweighted Gaussian kernels in eq~\\ref{eq:ppp} are positive scaling factors that need to be optimized to obtain a proper density estimation of the underlying data. We have that $\\varepsilon_i=1\/(2\\sigma^2_i)$, where $\\sigma_i$ is the standard deviation (i.e., bandwidth) of the Gaussian kernel. Therefore, we want a smaller $\\sigma_i$ in dense regions and a larger $\\sigma_i$ in sparse regions. To achieve this task, we define the Shannon entropy of the $i$th Gaussian probability as:\n\\begin{equation}\n \\label{eq:information_entropy}\n H(\\bx_i)=-\\sum_j p_{ij}^{\\varepsilon_i} \\log p_{ij}^{\\varepsilon_i},\n\\end{equation}\nwhere the term $p_{ij}^{\\varepsilon_i}$ refers to matrix elements from the $i$th row of $\\sfP$ as eq~\\ref{eq:information_entropy} is solved for each row independently. We can write $p_{ij}^{\\varepsilon_i} = \\frac{1}{\\bar{p}_i} \\tilde{K}_{\\varepsilon_{i}}(\\bx_i, \\bx_j)$ where $\\bar{p}_i=\\sum_k \\tilde{K}_{\\varepsilon_{i}}(\\bx_i, \\bx_k)$ is a row-wise normalization constant.\n\nInserting $p_{ij}^{\\varepsilon_i}$ from eq~\\ref{eq:ppp} leads to the following expression:\n\\begin{align}\n \\label{eq:ent}\n H(\\bx_i) = \\log\\bar{p}_i\n &+ \\frac{\\varepsilon_{i}}{ \\bar{p}_i }\n \\sum_j \\tilde{K}_{\\varepsilon_{i}}(\\bx_i, \\bx_j) \\|\\bx_i-\\bx_j\\|_2^2 \\nonumber\\\\\n & \\underbrace{ -\\frac{1}{ \\bar{p}_i }\\sum_j \\tilde{K}_{\\varepsilon_{i}}(\\bx_i, \\bx_j) \\log r(\\bx_i,\\bx_j) }_{ H_V(\\bx_i) },\n\\end{align}\nwhere $H_V(\\bx_i)$ is a correction term due to the reweighting factor $r(\\bx_i,\\bx_j)$ introduced in eq~\\ref{eq:reweighted_kernel}. The reweighting factor is included also in the other two terms through $\\tilde{K}_{\\varepsilon_{i}}(\\bx_i, \\bx_j)$. For weights of exponential form, like in WT-MetaD (eq~\\ref{eq:weight-wtm}), we have $w(\\bx_i)=\\e^{\\beta V(\\bx_i)}$, and the correction term $H_V(\\bx_i)$ further reduces to:\n\\begin{equation}\n \\label{eq:entropy_expl}\n H_V(\\bx_i)=-\\frac{\\beta}{2}\\,\n \\frac{1}{\\bar{p}_i}\n \\sum_j \\tilde{K}_{\\varepsilon_{i}}(\\bx_i, \\bx_j)\n \\left[\n V(\\bx_i) + V(\\bx_j)\n \\right].\n\\end{equation}\nFor the derivation of eq~\\ref{eq:ent} and eq~\\ref{eq:entropy_expl}, see Section~S1 in the Supporting Information (SI).\n\nFor an unbiased simulation, or if we do not incorporate the weights into the training, we have $r(\\bx_i,\\bx_j)\\equiv 1$ for $1\\leq i,j \\leq N$ and the correction term $H_V(\\bx_i)$ vanishes. Equation~\\ref{eq:ent} then becomes $H(\\bx_i) = \\log\\bar{p}_i + \\frac{\\varepsilon_{i}}{ \\bar{p}_i }\\sum_j {K}_{\\varepsilon_{i}}(\\bx_i, \\bx_j) \\|\\bx_i-\\bx_j\\|_2^2$.\n\nWe use eq~\\ref{eq:ent} to define an objective function for an optimization procedure that fits the Gaussian kernel to the data by adjusting the scale parameter so that $H(\\bx_i)$ is approximately $\\log_2 PP$ (i.e., $\\min_{\\varepsilon_{i}} \\left|H(\\bx_i)-\\log_2 PP\\right|$). Here $PP$ is a model parameter that represents the perplexity of a discrete probability distribution. Perplexity is defined as an exponential of the Shannon entropy, $PP = 2^H$, and measures the quality of predictions for a probability distribution~\\cite{cover2006elements}. 
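\nAs a concrete illustration of the row-wise binary search described next, a minimal sketch could look as follows; for simplicity it assumes unit weights (so that the correction term $H_V$ vanishes) and base-2 logarithms, so that the target entropy is $\\log_2 PP$. All names are illustrative. \n\\begin{verbatim}\n# Sketch: fit eps_i so that the row entropy matches log2(PP).\nimport numpy as np\n\ndef fit_scale(d2_row, PP, iters=64, lo=1e-12, hi=1e12):\n    # d2_row: squared distances from x_i to the other samples\n    target = np.log2(PP)\n    for _ in range(iters):\n        eps = 0.5 * (lo + hi)\n        K = np.exp(-eps * d2_row)\n        p = K \/ K.sum()\n        H = -np.sum(p * np.log2(p + 1e-300))\n        # larger eps -> narrower kernel -> lower entropy\n        if H > target:\n            lo = eps\n        else:\n            hi = eps\n    return eps\n\\end{verbatim}\n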
We can view the perplexity as the effective number of neighbors in a manifold~\\cite{maaten2008visualizing,maaten2009learning}. To find the optimal values of the scale parameters, we perform the optimization using a binary search separately for each row of $\\sfP$ (eq~\\ref{eq:ppp}).\n\n\n\n\n\n\\subsubsection{Multiscale Representation}\n\\label{sec:multiscale_rep}\nAs suggested in the work of Hinton and Roweis~\\cite{hinton2002stochastic}, the feature probability distribution can be extended to a mixture, as done in refs~\\citenum{lee2014multiscale,de2018perplexity,crecchi2020perplexity}. To this aim, for a given value of the perplexity $PP$, we find the optimal set of scale parameters $\\boldsymbol{\\varepsilon}^{PP}$ using eq~\\ref{eq:ent}. We do this for multiple values of the perplexity, $PP_l=2^{L_{PP}-l+1}$, where $l$ goes from $0$ to $L_{PP}=\\lfloor \\log N \\rfloor - 2$, and $N$ is the size of the training data set. We then write the probabilities $p_{ij}$ as an average over the different reweighted feature pairwise probability distributions:\n\\begin{equation}\n {\\bf M}=\\Big(p_{ij}\\Big)_{1\\leq i,j \\leq N}\n ~~\\text{and}~~\n p_{ij} = \\frac{1}{N_{PP}} \\sum^{L_{PP}}_{l=0} p^{\\boldsymbol{\\varepsilon}^{PP_l}}_{ij},\n\\end{equation}\nwhere $N_{PP}$ is the number of perplexities. Therefore, by taking $p_{ij}$ as a Gaussian mixture over different perplexities, we obtain a multiscale representation of the feature probability distribution ${\\bf M}$, without the user having to set the perplexity.\n\n\n\n\n\n\\subsubsection{Latent Pairwise Probability Distribution}\n\\label{sec:latent_distribution}\nA known issue in many dimensionality reduction methods, including SNE, is the so-called ``crowding problem''~\\cite{sammon1969nonlinear,hinton2002stochastic}, which is caused partly by the curse of dimensionality~\\cite{marimont1979nearest}. In the context of enhanced sampling, the crowding problem would lead to the definition of CVs that inadequately discriminate between metastable states due to highly localized kernel functions in the latent space. As shown in Figure~\\ref{fig:probabilities}, if we change from a Gaussian kernel to a more heavy-tailed kernel for the latent space probability distribution, like a $t$-distribution kernel, we enforce that close-by data points are grouped while far-away data points are separated.\n\nTherefore, for the pairwise probability distribution in the latent space, we use a heavy-tailed $t$-distribution with one degree of freedom, which is the same as in $t$-SNE. We set:\n\\begin{equation}\n\\label{eq:Q}\n\\sfQ=\\Big(q_{ij}\\Big)_{1\\leq i,j \\leq N}\n~~\\text{and}~~\n q_{ij}=\\frac{\\left(1+\\|\\bs_i-\\bs_j\\|^2_2\\right)^{-1}}{\\sum_{k}%\n \\left(1+\\|\\bs_i-\\bs_k\\|^2_2\\right)^{-1}},\n\\end{equation}\nwhere $q_{ii}=0$ and the latent variables (i.e., the CVs) are obtained via the embedding function, e.g., $\\bs_i = f_\\bt(\\bx_i)$.\n\n\n\n\n\n\\subsubsection{Minimization of Loss Function}\n\\label{sec:kl_div}\nFor the loss function to be minimized during the training procedure, we use the Kullback-Leibler (KL) divergence $\\kldiv$ to measure the statistical distance between the pairwise probability distributions ${\\bf M}$ and $\\sfQ$~\\cite{kullback1951information}. 
The loss function for a data batch is defined as:\n\\begin{align}\n\\label{eq:kl}\n  \\kldiv = \\frac{1}{N_b}\\sum_{i=1}^{N_b}\\sum_{\\substack{j=1 \\\\ i \\neq j}}^{N_b} p_{ij}\\log\\left(\\frac{p_{ij}}{q_{ij}}\\right),\n\\end{align}\nwhere $\\kldiv \\geq 0$ with equality only when ${\\bf M}=\\sfQ$, and we split the training data into $B$ batches of size $N_b$. We show the derivation of the loss function for the full set of $N$ training data points in Section~S2 in the SI.\n\n\\begin{figure}[htp]\n  \\includegraphics[width=0.4\\columnwidth]{fig-nn.pdf}\n  \\caption{Neural network used to model the parametric embedding function $f_\\bt(\\bx)$. The input features $\\bx$, with $\\dim(\\bx)=k$, are fed into the NN to generate the output CVs $\\bs$, with $\\dim(\\bs)=d$. The parameters $\\bt$ represent the weights and biases of the NN. The input layer is shown in blue, and the output layer is depicted in red. The hidden layers (gray) use dropout and leaky ReLU activations.}\n  \\label{fig:nn}\n\\end{figure}\n\nFor the parametric embedding function $f_\\bt(\\bx)$, we employ a deep NN (see Figure~\\ref{fig:nn}). After minimizing the loss function, we can use the parametric NN embedding function to project any given point in feature space to the latent space without rerunning the training procedure. Therefore, we can use the embedding as CVs, $\\bs(\\bx)=f_\\bt(\\bx)$. The derivatives of $f_\\bt(\\bx)$ with respect to $\\bx$ are obtained using backpropagation. Using the chain rule, we can then calculate the derivatives of $\\bs(\\bx)$ with respect to the microscopic coordinates $\\mathbf{R}$, which are needed to calculate the biasing force in an enhanced sampling simulation.\n\n\n\n\n\n\\subsection{Weight-Tempered Random Sampling of Landmarks}\n\\label{sec:wtrs}\nA common way to reduce the size of a training set is to employ a landmark selection scheme before performing a dimensionality reduction~\\cite{ceriotti2013demonstrating,long2019landmark,tribello2019using_a,tribello2019using_b}. The idea is to select a subset of the feature samples (i.e., landmarks) representing the underlying characteristics of the simulation data.\n\nWe can achieve this by selecting the landmarks randomly or with some given frequency in an unbiased simulation. If the unbiased simulation has sufficiently sampled phase space or if we use an enhanced sampling method that preserves the equilibrium distribution, like parallel tempering (PT)~\\cite{parallel_tempering}, the landmarks represent the equilibrium Boltzmann distribution. However, such a selection of landmarks might give an inadequate representation of transient metastable states lying higher in free energy, as they are rarely observed in unbiased simulations sampling the equilibrium distribution.\n\nFor simulation data resulting from an enhanced sampling simulation, we need to account for sampling from a biased distribution when selecting the landmarks. Thus, we take the statistical weights $w(\\bR)$ into account within the landmark selection scheme. Ideally, we want the landmarks obtained from the biased simulation to strike a balance between an equilibrium representation and capturing higher-lying metastable states. 
Inspired by well-tempered farthest-point sampling (WT-FPS)~\\cite{ceriotti2013demonstrating} (see Section~S3 in the SI), we achieve this by proposing a simple landmark selection scheme appropriate for enhanced sampling simulations that we call weight-tempered random sampling.\n\nIn weight-tempered random sampling, we start by modifying the underlying data density by rescaling the statistical weights of the feature samples as $w(\\bR) \\rightarrow [w(\\bR)]^{1\/\\alpha}$. Here, $\\alpha \\geq 1$ is a tempering parameter similar in spirit to the bias factor $\\gamma$ in the well-tempered distribution (eq~\\ref{eq:wt-pv}). Next, we randomly sample landmarks according to the rescaled weights. This procedure results in landmarks distributed according to the following probability distribution:\n\\begin{equation}\n  P_\\alpha(\\bx) =\n  \\frac\n  {\\int\\d\\bR \\, \\left[w(\\bR)\\right]^{1\/\\alpha} \\delta\\left[\\bx-\\bx(\\bR)\\right] P_V(\\bR)}\n  {\\int\\d\\bR \\, \\left[w(\\bR)\\right]^{1\/\\alpha} P_V(\\bR)},\n\\label{eq:wt-random-sampling}\n\\end{equation}\nwhich we can rewrite as a biased ensemble average:\n\\begin{equation}\n  P_\\alpha(\\bx) =\n  \\frac\n  {\\Big< [w(\\bR)]^{1\/\\alpha} \\delta[\\bx - \\bx(\\bR)] \\Big>_V}\n  {\\Big< [w(\\bR)]^{1\/\\alpha} \\Big>_V}.\n\\label{eq:wt-palpha}\n\\end{equation}\nSimilar weight transformations have been used for treating weight degeneracy in importance sampling~\\cite{koblents2015population}.\n\nFor $\\alpha=1$, we recover weighted random sampling~\\cite{bortz1975new}, where we sample landmarks according to their unscaled weights $w(\\bR)$. As we can see from eq~\\ref{eq:wt-random-sampling}, this should, in principle, give an equilibrium representation of landmarks, $P_{\\alpha=1}(\\bx)=P(\\bx)$. By employing $\\alpha>1$, we gradually start to ignore the underlying weights when sampling the landmarks and enhance the representation of metastable states lying higher in free energy. In the limit of $\\alpha\\to\\infty$, we ignore the weights (i.e., all weights are effectively equal to unity) and sample the landmarks randomly so that their distribution should be equal to the biased feature distribution sampled under the influence of the bias potential, $P_{\\alpha\\to\\infty}(\\bx)=P_{V}(\\bx)$. Therefore, the tempering parameter $\\alpha$ allows us to tune the landmark selection between these two limits of equilibrium and biased representation. Using a value of $\\alpha>1$ that is not too large, we can obtain a landmark selection that makes a trade-off between an equilibrium representation and capturing higher-lying metastable states.\n\nTo better understand the effect of the tempering parameter $\\alpha$, we can look at how the landmarks are distributed in the space of the biased CVs for the well-tempered case (eq~\\ref{eq:wt-pv}). As shown in Section~S4 in the SI, we obtain:\n\\begin{equation}\n  P_\\alpha(\\bs) = \\frac{ \\left[P(\\bs)\\right]^{1\/\\tilde{\\alpha} }}\n  { \\int\\d\\bs\\; \\left[P(\\bs)\\right]^{1\/\\tilde{\\alpha} }},\n  \\label{eq:effective_alpha_pdf}\n\\end{equation}\nwhere we introduce an effective tempering parameter $\\tilde{\\alpha}$ as:\n\\begin{equation}\n  \\tilde{\\alpha}=\\left( \\frac{1}{\\alpha} - \\frac{1}{\\alpha\\gamma} + \\frac{1}{\\gamma} \\right)^{-1} = \\frac{\\gamma \\alpha} {\\gamma + \\alpha - 1}\n  \\label{eq:effective_alpha}\n\\end{equation}\nthat is unity for $\\alpha=1$ and goes to $\\gamma$ in the limit $\\alpha\\to\\infty$. Thus, the effect of $\\alpha$ is to broaden the CV distribution of the selected landmarks. 
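In code, the selection step itself amounts to only a few lines, as in the following sketch (an illustration, not the \\textsc{plumed} implementation; sampling without replacement is an assumption on our part):\n\\begin{verbatim}\nimport numpy as np\n\ndef weight_tempered_random_sampling(weights, n_landmarks, alpha,\n                                    seed=None):\n    # Rescale the statistical weights, w -> w^(1\/alpha), and draw\n    # landmark indices according to the rescaled weights.\n    # alpha = 1 recovers weighted random sampling; alpha -> inf\n    # approaches uniform sampling of the biased data.\n    rng = np.random.default_rng(seed)\n    p = np.asarray(weights, dtype=float) ** (1.0 \/ alpha)\n    p \/= p.sum()\n    return rng.choice(len(p), size=n_landmarks, replace=False, p=p)\n\n# Example with the settings used for alanine dipeptide below:\n# idx = weight_tempered_random_sampling(w, 4000, alpha=2.0)\n\\end{verbatim}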
In Figure~\\ref{fig:ala-wt-eff-alpha}, we show how the effective tempering parameter $\\tilde{\\alpha}$ depends on $\\alpha$ for typical bias factor values $\\gamma$.\n\n\\begin{figure}[htp]\n  \\includegraphics[width=0.5\\columnwidth]{fig-ala1-effective-alpha.pdf}\n  \\caption{The effective tempering parameter $\\tilde{\\alpha}$ in the weight-tempered random sampling landmark selection scheme.}\n  \\label{fig:ala-wt-eff-alpha}\n\\end{figure}\n\nThe effect of $\\alpha$ on the landmark feature distribution $P_\\alpha(\\bx)$ is harder to gauge as we cannot write the biased feature distribution $P_V(\\bx)$ as a closed-form expression. In particular, for the well-tempered case, $P_V(\\bx)$ is not given by $\\propto [P(\\bx)]^{1\/\\gamma}$, as the features are generally not fully correlated to the biased CVs~\\cite{gil2015enhanced}. The correlation of the features with the biased CVs will vary greatly, even within the selected feature set. For example, for features uncorrelated to the biased CVs, the biased distribution is nearly the same as the unbiased distribution. Consequently, the effect of the tempering parameter $\\alpha$ for a given feature will depend on the correlation with the biased CVs. In Section~\\ref{sec:ala1_results}, we will show examples of this issue.\n\n\n\n\n\n\\subsection{Implementation}\n\\label{sec:implementation}\nWe implement the MRSE method and the weight-tempered random sampling landmark selection method in an additional module called \\texttt{LowLearner} in a development version (2.7.0-dev) of the open-source \\textsc{plumed}~\\cite{tribello2014plumed,plumed-nest} enhanced sampling plugin. The implementation is available openly at Zenodo~\\cite{mrse-dataset} (DOI: \\href{https:\/\/zenodo.org\/record\/4756093}{\\texttt{10.5281\/zenodo.4756093}}) and from the \\textsc{plumed} NEST~\\cite{plumed-nest} under \\texttt{plumID:21.023} at \\url{https:\/\/www.plumed-nest.org\/eggs\/21\/023\/}. We use the LibTorch~\\cite{pytorch} library (PyTorch C++ API, git commit \\texttt{89d6e88} used to obtain the results in this paper) that allows us to perform immediate execution of dynamic tensor computations with automatic differentiation~\\cite{paszke2017automatic}.\n\n\n\n\n\n\\section{Computational Details}\n\\label{sec:comp_details}\n\\subsection{Model Systems}\n\\label{sec:model_systems}\nWe consider three different model systems to evaluate the performance of the MRSE approach: the M\\\"uller-Brown potential, alanine dipeptide, and alanine tetrapeptide. We use WT-MetaD simulations to generate biased simulation data sets used to train the MRSE embeddings for all systems. We also generate unbiased simulation data sets for alanine di- and tetrapeptide by performing PT simulations that ensure proper sampling of the equilibrium distribution.\n\n\n\n\n\n\\subsubsection{M\\\"uller-Brown Potential}\n\\label{sec:mb_details}\nWe consider the dynamics of a single particle moving on the two-dimensional M\\\"uller-Brown potential~\\cite{MuellerBrown_1979}, $U(x, y)=\\sum_{j} A_{j}\\e^{p_{j}(x,y)}$, where $p_{j}(x,y)=a_{j}(x-x_{0,j})^2 + b_{j}(x-x_{0,j})(y-y_{0,j}) + c_{j} (y-y_{0,j})^2$, $x, y$ are the particle coordinates, and $\\mathbf{A}, \\mathbf{a}, \\mathbf{b}, \\mathbf{c}, \\mathbf{x}_{0}$ and $\\mathbf{y}_{0}$ are the parameters of the potential given by $\\mathbf{A}=(-40, -20, -34, 3)$, $\\mathbf{a}=(-1, -1, -6.5, 0.7)$, $\\mathbf{b}=(0,0,11,0.6)$, $\\mathbf{c}=(-10,-10,-6.5,-0.7)$, $\\mathbf{x}_{0}=(1,0,-0.5,-1)$, and $\\mathbf{y}_{0}=(0,0.5,1.5,1)$. 
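For reference, the potential can be transcribed directly from the parameters listed above (a plain Python sketch, not the actual \\texttt{pesmd} input):\n\\begin{verbatim}\nimport numpy as np\n\n# Rescaled Mueller-Brown parameters as listed in the text.\nA  = np.array([-40.0, -20.0, -34.0, 3.0])\na  = np.array([-1.0, -1.0, -6.5, 0.7])\nb  = np.array([0.0, 0.0, 11.0, 0.6])\nc  = np.array([-10.0, -10.0, -6.5, -0.7])\nx0 = np.array([1.0, 0.0, -0.5, -1.0])\ny0 = np.array([0.0, 0.5, 1.5, 1.0])\n\ndef mueller_brown(x, y):\n    # U(x, y) = sum_j A_j exp(p_j(x, y)) with the quadratic forms\n    # p_j(x, y) defined in the text.\n    p = a * (x - x0) ** 2 + b * (x - x0) * (y - y0) + c * (y - y0) ** 2\n    return np.sum(A * np.exp(p))\n\\end{verbatim}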
Note that the $\\mathbf{A}$ parameters are not the same as in ref~\\citenum{MuellerBrown_1979} as we scale the potential to reduce the height of the barrier by a factor of 5. The FES as a function of the coordinates $x$ and $y$ is given directly by the potential, $F(x,y)=U(x,y)$. We employ rescaled units such that $k_{\\mathrm B}=1$. We use the \\texttt{pesmd} code from \\textsc{plumed}~\\cite{tribello2014plumed,plumed-nest} to simulate the system at a temperature of $T=1$ using a Langevin thermostat~\\cite{bussi_accurate_2007} with a friction coefficient of 10 and employ a time step of 0.005. At this temperature, the potential has a barrier of around 20 $k_{\\mathrm B}T$ between its two states and is thus a rare-event system.\n\nFor the WT-MetaD simulations, we take $x$ and $y$ as CVs. We use different bias factor values (3, 4, 5, and 7), an initial Gaussian height of 1.2, a Gaussian width of 0.1 for both CVs, and deposit Gaussians every 200 steps. We calculate $c(t)$ (eq~\\ref{eq:coft}), needed for the weights, every time a Gaussian is added using a grid of $500^2$ over the domain $[-5,5]^2$. We run the WT-MetaD simulations for a total time of $2 \\times 10^7$ steps. We skip the first 20\\% of the runs (up to step $4 \\times 10^6$) to ensure that we avoid the period at the beginning of the simulations where the weights might be unreliable due to rapid changes of the bias potential. For the remaining part, we normalize the weights such that they lie in the range 0 to 1 to avoid numerical issues.\n\nWe employ features saved every 1600 steps for the landmark selection data sets, yielding a total of $10^4$ samples. From these data sets, we then use weight-tempered random sampling with $\\alpha=2$ to select 2000 landmarks that we use as training data to generate the MRSE embeddings.\n\nFor the embeddings, we use the coordinates $x$ and $y$ as input features ($k=2$), while the number of output CVs is also 2 ($d=2$). We do not standardize or preprocess the input features.\n\n\n\n\n\n\\subsubsection{Alanine Dipeptide}\n\\label{sec:ala1_details}\nWe perform the alanine dipeptide (Ace-Ala-Nme) simulations using the \\textsc{gromacs} 2019.2 code~\\cite{gromacs} patched with a development version of the \\textsc{plumed} plugin~\\cite{tribello2014plumed,plumed-nest}. We use the Amber99-SB force field~\\cite{Hornak2006a}, and a time step of 2 fs. We perform the simulations in the canonical ensemble using the stochastic velocity rescaling thermostat~\\cite{bussi2007canonical} with a relaxation time of 0.1 ps. We constrain bonds involving hydrogen atoms using LINCS~\\cite{hess2008p}. The simulations are performed in vacuum without periodic boundary conditions. We employ no cut-offs for electrostatic and non-bonded van der Waals interactions.\n\nWe employ 4 replicas with temperatures distributed geometrically in the range 300 K to 800 K (300.0 K, 416.0 K, 576.9 K, 800.0 K) for the PT simulation. We attempt exchanges between neighboring replicas every 10 ps. We run the PT simulation for 100 ns per replica. We only use the 300 K replica for analysis.\n\nWe perform the WT-MetaD simulations at 300 K using the backbone dihedral angles $\\Phi$ and $\\Psi$ as CVs and employ different values for the bias factor (2, 3, 5, and 10). We use an initial Gaussian height of 1.2 kJ\/mol, a Gaussian width of 0.2 rad for both CVs, and deposit Gaussians every 1 ps. We calculate $c(t)$ (eq~\\ref{eq:coft}) every time a Gaussian is added (i.e., every 1 ps) employing a grid of $500^2$ over the domain $[-\\pi,\\pi]^2$. 
We run the WT-MetaD simulations for 100 ns. We skip the first 20 ns of the runs (i.e., the first 20\\%) to ensure that we avoid the period at the beginning of the simulations where the weights might be unreliable due to rapid changes in the bias potential. For the remaining part, we normalize the weights such that they lie in the range 0 to 1 to avoid numerical issues.\n\nFor the landmark selection data sets, we employ features saved every 1 ps, which results in data sets of $8 \\times 10^4$ and $1 \\times 10^5$ samples for the WT-MetaD and PT simulations, respectively. We select 4000 landmarks for the training from these data sets, using weighted random sampling for the PT simulation and weight-tempered random sampling for the WT-MetaD simulations ($\\alpha=2$ unless otherwise specified).\n\nFor the embeddings, we use 21 heavy-atom pairwise distances as input features ($k=21$), while the number of output CVs is 2 ($d=2$). To obtain an impartial selection of features, we start with all 45 heavy-atom pairwise distances. Then, to avoid unimportant features, we automatically check for low-variance features and remove all distances with a variance below $2 \\times 10^{-4}$ nm$^{2}$ from the training set (see Section~S9 in the SI). This procedure removes 24 distances and leaves 21 distances for the embeddings (both training and projections). We standardize the remaining distances individually such that their mean is zero and their standard deviation is one.\n\n\n\n\n\n\\subsubsection{Alanine Tetrapeptide}\n\\label{sec:ala3_details}\nWe perform simulations of alanine tetrapeptide (Ace-Ala$_3$-Nme) in vacuum using the \\textsc{gromacs} 2019.2 code~\\cite{gromacs} and a development version of the \\textsc{plumed} plugin~\\cite{tribello2014plumed,plumed-nest}. We use the same MD setup and parameters as for the alanine dipeptide system, e.g., the Amber99-SB force field~\\cite{Hornak2006a}; see Section~\\ref{sec:ala1_details} for further details.\n\nFor the PT simulation, we employ 8 replicas with temperatures ranging from 300 K to 1000 K according to a geometric distribution (300.0 K, 356.4 K, 424.3 K, 502.6 K, 596.9 K, 708.9 K, 842.0 K, 1000.0 K). We attempt exchanges between neighboring replicas every 10 ps. We simulate each replica for 100 ns. We only use the 300 K replica for analysis.\n\nWe perform the WT-MetaD simulation at 300 K using the backbone dihedral angles $\\Phi_1$, $\\Phi_2$, and $\\Phi_3$ as CVs and a bias factor of 5. We use an initial Gaussian height of 1.2 kJ\/mol, a Gaussian width of 0.2 rad, and deposit Gaussians every 1 ps. We run the WT-MetaD simulation for 200 ns. We calculate $c(t)$ every 50 ps using a grid of $200^3$ over the domain $[-\\pi,\\pi]^3$. We skip the first 40 ns of the run (i.e., the first 20\\%) to ensure that we avoid the period at the beginning of the simulation where the weights are not equilibrated. We normalize the weights such that they lie in the range 0 to 1.\n\nFor the landmark selection data sets, we employ features saved every 2 ps for the WT-MetaD simulation and every 1 ps for the PT simulation. This results in data sets of $8 \\times 10^4$ and $1 \\times 10^5$ samples for the WT-MetaD and PT simulations, respectively. 
We select 4000 landmarks for the training from these data sets, using weighted random sampling for the PT simulation and weight-tempered random sampling with $\\alpha=2$ for the WT-MetaD simulations.\n\nFor the embeddings, we use sines and cosines of the dihedral angles $(\\Phi_1,\\Psi_1,\\Phi_2,\\Psi_2,\\Phi_3,\\Psi_3)$ as input features ($k=12$), and the number of output CVs is 2 ($d=2$). We do not standardize or preprocess the input features further.\n\n\n\n\n\n\\subsection{Neural Network Architecture}\n\\label{sec:nn_architecture}\nFor the NN, we use the same size and number of layers as in the work of van der Maaten and Hinton~\\cite{maaten2009learning,hinton2006reducing}. The NN consists of an input layer with a size equal to the dimension of the feature space $k$, followed by three hidden layers of sizes $h_1=500$, $h_2=500$, and $h_3=2000$, and an output layer with a size equal to the dimension of the latent space $d$.\n\nTo allow for any output value, we do not apply an activation function to the output layer. Moreover, for all hidden layers, we employ leaky rectified linear units (leaky ReLU)~\\cite{maas2013rectifier} with a leaky parameter set to $0.2$. Each hidden layer is followed by a dropout layer~\\cite{dropout} (dropout probability $p=0.1$). For the details regarding the architecture of the NNs, see Table~\\ref{tab:hyperparams}.\n\n\n\n\n\n\\subsection{Training Procedure}\n\\label{sec:training_details}\nWe shuffle the training data sets and divide them into batches of size 500. We initialize all trainable weights of the NNs with the Glorot normal scheme~\\cite{glorot2010understanding} using the gain value calculated for leaky ReLU. The bias parameters of the NNs are initialized to 0.005.\n\nWe minimize the loss function given by eq~\\ref{eq:kl} using the Adam optimizer~\\cite{kingma2014adam} with AMSGrad~\\cite{Reddi2019}, where we use a learning rate of $\\eta=10^{-3}$ and momenta $\\beta_1=0.9$ and $\\beta_2=0.999$. We also employ a standard L2 regularization term on the trainable network parameters in the form of weight decay set to $10^{-4}$. We perform the training for 100 epochs in all cases. The loss function learning curves for the systems considered here are shown in Section~S7 in the SI.\n\nWe report all hyperparameters used to obtain the results in this work in Table~\\ref{tab:hyperparams}. 
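Putting the architecture and optimizer settings above together, the training loop can be sketched in PyTorch as follows; here \\texttt{loader} is a hypothetical data loader yielding feature batches together with the corresponding precomputed batch probabilities $p_{ij}$, and the sketch is an illustration rather than our LibTorch implementation:\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\ndef make_embedding_net(k, d):\n    # [k, 500, 500, 2000, d]: leaky ReLU (0.2) and dropout (p = 0.1)\n    # on the hidden layers; no activation on the output layer.\n    sizes, layers = [k, 500, 500, 2000], []\n    for n_in, n_out in zip(sizes[:-1], sizes[1:]):\n        layers += [nn.Linear(n_in, n_out),\n                   nn.LeakyReLU(0.2), nn.Dropout(0.1)]\n    net = nn.Sequential(*layers, nn.Linear(sizes[-1], d))\n    for m in net.modules():  # Glorot normal init, biases set to 0.005\n        if isinstance(m, nn.Linear):\n            nn.init.xavier_normal_(\n                m.weight,\n                gain=nn.init.calculate_gain('leaky_relu', 0.2))\n            nn.init.constant_(m.bias, 0.005)\n    return net\n\ndef kl_loss(P, s):\n    # Row-normalized t-distribution probabilities q_ij in the latent\n    # space, followed by the batched KL divergence.\n    K = 1.0 \/ (1.0 + torch.cdist(s, s) ** 2)\n    mask = ~torch.eye(len(s), dtype=torch.bool, device=s.device)\n    K = K * mask  # q_ii = 0\n    Q = K \/ K.sum(dim=1, keepdim=True)\n    p = P[mask].clamp_min(1e-30)\n    q = Q[mask].clamp_min(1e-30)\n    return (p * torch.log(p \/ q)).sum() \/ len(s)\n\nnet = make_embedding_net(k=21, d=2)  # alanine dipeptide settings\nopt = torch.optim.Adam(net.parameters(), lr=1e-3,\n                       betas=(0.9, 0.999), weight_decay=1e-4,\n                       amsgrad=True)\nfor epoch in range(100):\n    for xb, Pb in loader:  # hypothetical loader, batches of 500\n        opt.zero_grad()\n        loss = kl_loss(Pb, net(xb))\n        loss.backward()\n        opt.step()\n\\end{verbatim}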
For reproducibility purposes, we also list the random seeds used while launching the training runs (the seed affects both the landmark selection and the shuffling of the landmarks during the training).\n\n\\begin{table*}[htp!]\n  \\centering\\footnotesize\n  \\caption{Hyperparameters used to obtain the results reported in this paper.}\n  \\begin{tabular}{|l|l|l|l|}\n  \\hline\n  Hyperparameter & M\\\"uller-Brown & Alanine dipeptide & Alanine tetrapeptide \\\\\n  \\hline\n  Features & $x$ and $y$ & Heavy atom distances & Dihedral angles (cos\/sin) \\\\\n  NN architecture & [2, 500, 500, 2000, 2] & [21, 500, 500, 2000, 2] & [12, 500, 500, 2000, 2] \\\\\n  Optimizer & Adam (AMSGrad) & Adam (AMSGrad) & Adam (AMSGrad) \\\\\n  Number of landmarks & $N=2000$ & $N=4000$ & $N=4000$ \\\\\n  Batch size & $N_b=500$ & $N_b=500$ & $N_b=500$ \\\\\n  Training epochs & 100 & 100 & 100 \\\\\n  Learning rate & $\\eta=10^{-3}$ & $\\eta=10^{-3}$ & $\\eta=10^{-3}$ \\\\\n  Seed & 111 & 111 (SI: 222, 333) & 111 \\\\\n  Leaky parameter & 0.2 & 0.2 & 0.2 \\\\\n  Dropout & $p=0.1$ & $p=0.1$ & $p=0.1$ \\\\\n  Weight decay & $10^{-4}$ & $10^{-4}$ & $10^{-4}$ \\\\\n  $\\beta_1,\\beta_2$ & 0.9 and 0.999 & 0.9 and 0.999 & 0.9 and 0.999 \\\\\n  \\hline\n  \\end{tabular}\n  \\label{tab:hyperparams}\n\\end{table*}\n\n\n\n\n\n\\subsection{Kernel Density Estimation}\n\\label{sec:kde}\nWe calculate FESs for the trained MRSE embeddings using kernel density estimation (KDE) with Gaussian kernels. We employ a grid of $200^2$ for the FES figures. We choose the bandwidths for each simulation data set by first estimating them using Silverman's rule and then adjusting the bandwidths by comparing the KDE FES to an FES obtained with a discrete histogram. We show a representative comparison between KDE and discrete FESs in Section~S6 in the SI. We employ reweighting for FESs from WT-MetaD simulation data, where we weight each Gaussian KDE kernel by the statistical weight $w(\\mathbf{R})$ of the given data point.\n\n\n\n\n\n\\subsection{Data Availability}\n\\label{sec:data}\nThe data supporting the results of this study are openly available at Zenodo~\\cite{mrse-dataset} (DOI: \\href{https:\/\/zenodo.org\/record\/4756093}{\\texttt{10.5281\/zenodo.4756093}}). \\textsc{plumed} input files and scripts required to replicate the results presented in the main text are available from the \\textsc{plumed} NEST~\\cite{plumed-nest} under \\texttt{plumID:21.023} at \\url{https:\/\/www.plumed-nest.org\/eggs\/21\/023\/}.\n\n\n\n\n\n\\section{Results}\n\\label{sec:results}\n\\subsection{M\\\"uller-Brown Potential}\n\\label{sec:mb_results}\nWe start by considering a single particle moving on the two-dimensional M\\\"uller-Brown potential shown in Figure~\\ref{fig:mb-emb}(a). We use this system as a simple test to check whether the MRSE method can preserve the topography of the FES in the absence of any dimensionality reduction when performing a mapping with a relatively large NN.\n\n\\begin{figure}[htp]\n  \\includegraphics[width=0.8\\columnwidth]{fig-mb-emb.png}\n  \\caption{Results for the M\\\"uller-Brown potential. FESs for MRSE embeddings obtained from the WT-MetaD simulation ($\\gamma=5$). We show MRSE embeddings obtained with (b) and without (c) incorporating weights into the training via a reweighted feature pairwise probability distribution (see eq~\\ref{eq:reweighted_kernel}). The units for the MRSE embeddings are arbitrary and only shown as a visual guide. 
To facilitate comparison, we post-process the MRSE embeddings using the Procrustes algorithm to find an optimal rotation that best aligns them with the original coordinates $x$ and $y$, see text.}\n  \\label{fig:mb-emb}\n\\end{figure}\n\nWe train the MRSE embeddings on simulation data sets obtained from WT-MetaD simulations using the coordinates $x$ and $y$ as CVs. Here, we show only the results obtained with a bias factor of $\\gamma=5$, while the results for other values are shown in Section~S8 in the SI. The MRSE embeddings can be freely rotated, and the overall rotation is largely determined by the random seed used to generate the embeddings. Therefore, to facilitate comparison, we show here results obtained using the Procrustes algorithm to find an optimal rotation of the MRSE embeddings that best aligns them with the original coordinates $x$ and $y$. The original non-rotated embeddings are shown in Section~S8 in the SI. We present the FESs obtained with the MRSE embeddings in Figure~\\ref{fig:mb-emb}(b-c). We can see that the embeddings preserve the topography of the FESs very well and demonstrate a clear separation of metastable states, both when we incorporate the weights into the training through eq~\\ref{eq:reweighted_kernel} (panel b), and when we do not (panel c).\n\\begin{figure}[htp]\n  \\includegraphics[width=0.4\\columnwidth]{fig-mb-vs.pdf}\n  \\caption{Results for the M\\\"uller-Brown potential. We show how the MRSE embeddings map the coordinates $x$ and $y$ by plotting the normalized coordinates $x$ and $y$ versus the normalized MRSE CVs. The MRSE embeddings are trained using data from a WT-MetaD simulation with $\\gamma=5$, and obtained with (red) and without (blue) incorporating weights into the training via a reweighted feature pairwise probability distribution (see eq~\\ref{eq:reweighted_kernel}). To facilitate comparison, we post-process the MRSE embeddings using the Procrustes algorithm to find an optimal rotation that best aligns them with the original coordinates $x$ and $y$, see text.}\n  \\label{fig:mb-vs}\n\\end{figure}\n\nTo quantify the difference between the $x$ and $y$ coordinates and the CVs found by MRSE, we normalize all coordinates and plot CV$_1$ as a function of $x$ and CV$_2$ as a function of $y$. In Figure~\\ref{fig:mb-vs}, we can see that the points lie along the identity line, which shows that both MRSE embeddings preserve the original coordinates of the MB system well. In other words, the embeddings maintain the normalized distances between points. We analyze this aspect in detail for a high-dimensional set of features in Section~\\ref{sec:ala1_results}.\n\n\n\n\n\n\\subsection{Alanine Dipeptide}\n\\label{sec:ala1_results}\nNext, we consider alanine dipeptide in vacuum, a small system often used to benchmark free energy and enhanced sampling methods. The free energy landscape of the system is described by the backbone $(\\Phi,\\Psi)$ dihedral angles. Generally, the $(\\Phi,\\Psi)$ angles are taken as CVs for biasing, as we do here to generate the training data set. However, for this particular setup in vacuum, it is sufficient to bias $\\Phi$ to drive the sampling between states as $\\Psi$ is a fast CV compared to $\\Phi$. We can see in Figure~\\ref{fig:ala-mol} that three metastable states characterize the FES. The C$7_{\\mathrm{eq}}$ and C$5$ states are separated only by a small barrier of around 1--2 $k_{\\mathrm{B}}T$, so transitions between these two states are frequent. 
The C$7_{\\mathrm{ax}}$ state lies higher in free energy (i.e., it is less likely to be sampled), and is separated by a high barrier of around 14 $k_{\\mathrm{B}}T$ from the other two states, so transitions from C$7_{\\mathrm{eq}}$\/C$5$ to C$7_{\\mathrm{ax}}$ are rare.\n\\begin{figure}[htp]\n  \\includegraphics[width=0.5\\linewidth]{fig-ala1-mol.pdf}\n  \\caption{Results for alanine dipeptide in vacuum at 300 K. (a) The free energy landscape $F(\\Phi,\\Psi)$ from the PT simulation. The metastable states C$7_{\\mathrm{eq}}$, C5, and C$7_{\\mathrm{ax}}$ are shown. (b) The molecular structure of alanine dipeptide with the dihedral angles $\\Phi$ and $\\Psi$ indicated.}\n  \\label{fig:ala-mol}\n\\end{figure}\n\nFor the MRSE embeddings, we do not use the $(\\Phi,\\Psi)$ angles as input features, but rather a set of 21 heavy atom pairwise distances that we impartially select as described in Section~\\ref{sec:ala1_details}. Using only the pairwise distances as input features makes the exercise of learning CVs more challenging as the $\\Phi$ and $\\Psi$ angles cannot be represented as linear combinations of the interatomic distances. We can assess the quality of our results by examining how well the MRSE embeddings preserve the topography of the FES on local and global scales. However, before presenting the MRSE embeddings, let us consider the landmark selection, which we find crucial for constructing accurate embeddings in our protocol.\n\nAs discussed in Section~\\ref{sec:wtrs}, we need to have a landmark selection scheme that takes into account the weights of the configurations and gives a balanced selection that ideally is close to the equilibrium distribution but represents all metastable states of the system, including the higher-lying ones. For this task, we devise a method called weight-tempered random sampling. This method has a tempering parameter $\\alpha$ that allows us to interpolate between an equilibrium and a biased representation of landmarks (see eq~\\ref{eq:wt-random-sampling}).\n\n\\begin{figure}[htp]\n  \\includegraphics[width=0.4\\columnwidth]{fig-ala1-wt-sampling.pdf}\n  \\caption{Results for alanine dipeptide in vacuum at 300 K. The effect of the tempering parameter $\\alpha$ in the weight-tempered random sampling landmark selection scheme for a WT-MetaD simulation ($\\gamma=5$) biasing $(\\Phi, \\Psi)$. Marginal landmark distributions for two examples of features (i.e., heavy atom distances) from the feature set that are (a) correlated and (b) uncorrelated with the biased CVs. The units are nm.}\n  \\label{fig:ala-wt-sampling}\n\\end{figure}\n\nThe effect of the tempering parameter $\\alpha$ on the landmark feature distribution $P_\\alpha(\\bx)$ will depend on the correlation of the features with the biased CVs. The correlation will vary greatly, even within the selected feature set. In Figure~\\ref{fig:ala-wt-sampling}, we show the marginal distributions for two examples from the feature set. For a feature correlated with the biased CVs, the biasing enhances the fluctuations, and we observe a significant difference between the equilibrium distribution and the biased one, as expected. In this case, the effect of introducing $\\alpha$ is to interpolate between these two limits. 
On the other hand, for a feature not correlated to the biased CVs, the equilibrium and biased distribution are almost the same, and $\\alpha$ does not affect the distribution of this feature.\n\nIn Figure~\\ref{fig:ala-weights}, we show the results from the landmark selection for one of the WT-MetaD simulations ($\\gamma=5$). In the top row, we show how the selected landmarks are distributed in the CV space. In the bottom row, we show the effective FES of the selected landmarks projected on the $\\Phi$ dihedral angle.\n\nFor $\\alpha=1$, equivalent to weighted random sampling~\\cite{tribello2019using_b}, we can see that we get a worse representation of the C$7_{\\mathrm{ax}}$ state as compared to the other states. We can understand this issue by considering the weights of configurations in the C$7_{\\mathrm{ax}}$ state, which are considerably smaller than the weights from the other states. As shown in Section~S10 in the SI, using the $\\alpha=1$ landmarks results in an MRSE embedding close to the equilibrium PT embedding (shown in Figure~\\ref{fig:ala-embeddings}(b) below), but has a worse separation of the metastable states as compared to other embeddings.\n\nOn the other hand, if we use $\\alpha=2$, we obtain a much more balanced landmark selection that is relatively close to the equilibrium distribution but has a sufficient representation of the C$7_{\\mathrm{ax}}$ state. Using larger values of $\\alpha$ renders a selection closer to the sampling from the underlying biased simulation, with more feature samples from regions higher in free energy. We observe that using $\\alpha=2$ gives the best MRSE embedding. In contrast, higher values of $\\alpha$ result in worse embeddings characterized by an inadequate mapping of the C$7_{\\mathrm{ax}}$ state, as can be seen in Section~S10 in the SI. Therefore, in the following, we use a value of $\\alpha=2$ for the tempering parameter in the landmark selection. This value corresponds to an effective landmark CV distribution broadening of $\\tilde{\\alpha} \\approx 1.67$ (see eqs~\\ref{eq:effective_alpha_pdf} and~\\ref{eq:effective_alpha}).\n\n\\begin{figure}[htp]\n  \\includegraphics[width=0.9\\columnwidth]{fig-ala-weights.pdf}\n  \\caption{Results for alanine dipeptide in vacuum at 300 K. Weight-tempered random sampling as a landmark selection scheme for a WT-MetaD simulation ($\\gamma=5$) biasing $(\\Phi, \\Psi)$. (a) In the first two panels, we show the reference FES in the $(\\Phi, \\Psi)$ space and the points sampled during the simulations. In the subsequent panels, we present the 4000 landmarks selected for different values of the $\\alpha$ parameter. (b) In the bottom row, we show the results projected on $\\Phi$, where the reference FES is shown in light blue. The projections (black) are calculated as a negative logarithm of the histogram of the selected landmarks.}\n  \\label{fig:ala-weights}\n\\end{figure}\n\nThese landmark selection results underline the importance of having a balanced selection of landmarks that is close to the equilibrium distribution and gives a proper representation of all metastable states, but excludes points from unimportant higher-lying free energy regions. The exact value of $\\alpha$ that achieves such an optimal selection will depend on the underlying free energy landscape.\n\nIn Section~S11 in the SI, we show results obtained using WT-FPS for the landmark selection (see Section~S3 in the SI for a description of WT-FPS). 
We can observe that the WT-MetaD embeddings obtained using WT-FPS with $\\alpha=2$ are similar to the WT-MetaD embeddings shown in Figure~\\ref{fig:ala-embeddings} below. Thus, for small values of the tempering parameter, both methods give similar results.\n\nHaving established how to perform the landmark selection, we now consider the results for MRSE embeddings obtained on unbiased and biased simulation data at 300 K. The unbiased simulation data comes from a PT simulation that accurately captures the equilibrium distribution within each replica~\\cite{parallel_tempering}. Therefore, for the 300 K replica used for the analysis and training, we obtain the equilibrium populations of the different metastable states while not capturing the higher-lying and transition state regions. In principle, we could also include simulation data from the higher-temperature replicas in the training by considering statistical weights to account for the temperature difference, but this would defeat the purpose of using PT to generate unbiased simulation data that does not require reweighting. We refer to the embedding trained on the PT simulation data as the PT embedding. The biased simulation data comes from WT-MetaD simulations where we bias the ($\\Phi$, $\\Psi$) angles. We refer to these embeddings as the WT-MetaD embeddings.\n\nIn the WT-MetaD simulations, we use bias factors from 2 to 10 to generate training data sets representing a biased distribution that progressively goes from a distribution closer to the equilibrium one to a flatter distribution as we increase $\\gamma$ (see eq~\\ref{eq:wt-pv}). In this way, we can test how the MRSE training and reweighting procedure works when handling simulation data obtained under different biasing strengths.\n\nFor the WT-MetaD training data sets, we also investigate the effect of not incorporating the weights into the training via a reweighted feature pairwise probability distribution (i.e., all weights equal to unity in eq~\\ref{eq:reweighted_kernel}). In this case, only the weight-tempered random sampling landmark selection takes the weights into account. In the following, we refer to these WT-MetaD embeddings as not-reweighted, or without reweighting.\n\nTo be consistent and allow for a fair comparison between embeddings, we evaluate all the trained WT-MetaD embeddings on the unbiased PT simulation data and use the resulting projections to perform analysis and generate FESs. This procedure is possible as both the unbiased PT and the biased WT-MetaD simulations sample all metastable states of alanine dipeptide (i.e., the WT-MetaD simulations do not sample metastable states that the PT simulation does not).\n\n\\begin{figure}[htp]\n  \\includegraphics[width=0.9\\columnwidth]{fig-ala-clusters.pdf}\n  \\caption{Results for alanine dipeptide in vacuum at 300 K. Clustering of the PT simulation data for the different embeddings. The results show how the embeddings map the metastable states. The data points are colored according to their cluster. The first panel shows the metastable state clusters in the $(\\Phi,\\Psi)$ space. The second panel shows the results for the PT embedding. The third and fourth panels show the results for a representative case of a WT-MetaD embedding ($\\gamma=5$), obtained with and without incorporating weights into the training via a reweighted feature probability distribution (see eq~\\ref{eq:reweighted_kernel}), respectively. For the details about clustering~\\cite{scikit-learn}, see Section~S5 in the SI. 
The units for the MRSE embeddings are arbitrary and only shown as a visual guide.}\n \\label{fig:ala-cluster}\n\\end{figure}\n\nTo establish that the MRSE embeddings correctly map the metastable states, we start by considering the clustering results in Figure~\\ref{fig:ala-cluster}. We can see that the PT embedding (second panel) preserves the topography of the FES and correctly maps all the important metastable states. We can say the same for the reweighted (third panel) and not-reweighted (fourth panel) embeddings. Thus, the embeddings map both the local and global characteristics of the FES accurately. Next, we consider the MRSE embeddings for the different bias factors.\n\n\\begin{figure}[htp]\n \\includegraphics[width=0.8\\linewidth]{fig-ala-embeddings.pdf}\n \\caption{Results for alanine dipeptide in vacuum at 300 K. MRSE embeddings trained on unbiased and biased simulation data. (a) The free energy landscape $F(\\Phi,\\Psi)$ from the PT simulation. The metastable states C$7_{\\mathrm{eq}}$, C5, and C$7_{\\mathrm{ax}}$ are shown. (b) The FES for the MRSE embedding trained using the PT simulation data. (c) The FESs for the MRSE embeddings trained using the WT-MetaD simulation data. We show results obtained from the simulations using different bias factors $\\gamma$. We show WT-MetaD embeddings obtained with (top row) and without (bottom row) incorporating weights into the training via a reweighted feature pairwise probability distribution (see eq~\\ref{eq:reweighted_kernel}). We obtain all the FESs by calculating the embeddings on the PT simulation data and using kernel density estimation as described in Section~\\ref{sec:kde}. The units for the MRSE embeddings are arbitrary and only shown as a visual guide.}\n \\label{fig:ala-embeddings}\n\\end{figure}\n\nIn Figure~\\ref{fig:ala-embeddings}, we show the FESs for the different embeddings along with the FES for the $\\Phi$ and $\\Psi$ dihedral angles. For the reweighted WT-MetaD embeddings (top row of panel c), we can observe that all the embeddings are of consistent quality and exhibit a clear separation of the metastable states. In contrast, we can see that the not-reweighted WT-MetaD embeddings (bottom row of panel c) have a slightly worse separation of the metastable states. Thus, we can conclude that incorporating the weights into the training via a reweighted feature pairwise probability distribution (see eq~\\ref{eq:reweighted_kernel}) improves the visual quality of the embeddings for this system.\n\n\\begin{figure}[htp]\n \\includegraphics[width=0.5\\columnwidth]{fig-ala-fed.pdf}\n \\caption{Results for alanine dipeptide in vacuum at 300 K. Free energy differences between metastable states for the FESs of the embeddings shown in Figure~\\ref{fig:ala-embeddings}. We show the reference values from the $F(\\Phi, \\Psi)$ FES obtained from the PT simulation at 300 K as horizontal gray lines. The results for the reweighted embeddings are shown as red crosses, while the results for the not-reweighted embeddings are shown as blue dots. The results for the PT embedding are shown as green plus symbols.}\n \\label{fig:ala-fed}\n\\end{figure}\n\nTo further check the quality of the embeddings, we calculate the free energy difference between metastable states as $\\Delta F_{\\text{A,B}}=-\\frac{1}{\\beta}\\log(\\int_{\\text{A}}\\d\\bs\\,\\e^{-\\beta F(\\bs)} \/ \\int_{\\text{B}}\\d\\bs\\,\\e^{-\\beta F(\\bs)})$, where the integration domains are the regions in CV space corresponding to the states A and B, respectively. 
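Numerically, this free energy difference is evaluated directly on the FES grid obtained from the kernel density estimate, for example as in the following sketch (the boolean masks defining the state regions are assumptions that have to be chosen by inspecting the FES):\n\\begin{verbatim}\nimport numpy as np\n\ndef free_energy_difference(F, beta, mask_A, mask_B):\n    # Delta F_{A,B} = -(1\/beta) log( sum_A e^{-beta F}\n    #                              \/ sum_B e^{-beta F} ),\n    # evaluated as sums over the FES grid; the constant grid-cell\n    # area cancels in the ratio.\n    zA = np.sum(np.exp(-beta * F[mask_A]))\n    zB = np.sum(np.exp(-beta * F[mask_B]))\n    return -np.log(zA \/ zB) \/ beta\n\\end{verbatim}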
The free energy difference defined above is only meaningful if the CVs correctly discriminate between the different metastable states. For the MRSE embeddings, we can thus identify the integration regions for the different metastable states in the FES and calculate the free energy differences. Reference values can be obtained by integrating the $F(\\Phi,\\Psi)$ FES from the PT simulation. A deviation from a reference value would indicate that an embedding does not correctly map the density of the metastable states. In Figure~\\ref{fig:ala-fed}, we show the free energy differences for all the MRSE embeddings. All free energy differences obtained with the MRSE embeddings agree with the reference values within a 0.1 $\\kT$ difference for both reweighted and not-reweighted WT-MetaD embeddings. For bias factors larger than 3, we can observe that the reweighted embeddings perform distinctly better than the not-reweighted ones.\n\n\\begin{figure}[htp]\n  \\includegraphics[width=\\linewidth]{fig-ala-distances.pdf}\n  \\caption{Results for alanine dipeptide in vacuum at 300 K. The joint probability density functions for the pairwise distances in the high-dimensional feature space and the low-dimensional latent space for the embeddings shown in Figure~\\ref{fig:ala-embeddings}. We show the results for the (a) PT and (b) WT-MetaD embeddings (evaluated on the PT simulation data). These histograms show the similarities between distances in the feature and latent spaces. For an embedding that preserves distances accurately, the density would lie on the identity line $y=x$ (shown as a black line). We normalize the distances to lie in the range 0 to 1.}\n  \\label{fig:ala-distances}\n\\end{figure}\n\nAs a final test of the MRSE embeddings for this system, we follow the approach used by Tribello and Gasparotto~\\cite{tribello2019using_a,tribello2019using_b}. We calculate the pairwise distances between points in the high-dimensional feature space and the corresponding pairwise distances between points in the low-dimensional latent (i.e., CV) space given by the embeddings. We then calculate the joint probability density function of the distances using histogramming. The joint probability density should be concentrated on the identity line if an embedding preserves distances accurately. However, this is only valid for the MRSE embeddings constructed without incorporating the weights into the training, since in this case, there are no additional constraints besides geometry.\n\nAs we can see in Figure~\\ref{fig:ala-distances}, the joint density is concentrated close to the identity line for most cases. For the reweighted WT-MetaD embeddings (panel b), the density for the distances in the middle range slightly deviates from the identity line in contrast to the not-reweighted embeddings. This deviation is due to additional constraints on the latent space. In the reweighted cases, apart from the Euclidean distances, we also include the statistical weights in the construction of the feature pairwise probability distribution. Consequently, landmarks with low weights in the feature space have a decreased probability of being neighbors in the latent space. Therefore, the deviation from the identity line is expected to be larger for the reweighted embeddings.\n\nSummarizing the results in this section, we can observe that MRSE can construct embeddings, both from unbiased and biased simulation data, that correctly describe the local and global characteristics of the free energy landscape of alanine dipeptide. 
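As an aside, the joint distance histograms in Figure~\\ref{fig:ala-distances} can be reproduced along the following lines (a sketch using SciPy):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.spatial.distance import pdist\n\ndef distance_histogram(X, S, bins=100):\n    # Pairwise distances in the feature space (X) and in the latent\n    # space (S), normalized to [0, 1] and histogrammed jointly.\n    dx = pdist(X)\n    ds = pdist(S)\n    return np.histogram2d(dx \/ dx.max(), ds \/ ds.max(),\n                          bins=bins, density=True)\n\\end{verbatim}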
For the biased WT-MetaD simulation data, we have investigated the effect of not including the weights in the training of the MRSE embeddings. In that case, only the landmark selection takes the weights into account. The not-reweighted embeddings are similar to, or slightly worse than, the reweighted ones. We can explain the slight difference between the reweighted and not-reweighted embeddings by the fact that the weight-tempered random sampling already performs the primary reweighting. Nevertheless, we can conclude that incorporating the weights into the training is beneficial for the alanine dipeptide test case.\n\n\n\n\n\n\\subsection{Alanine Tetrapeptide}\n\\label{sec:ala3_results}\nAs the last example, we consider alanine tetrapeptide, a commonly used test system for enhanced sampling methods~\\cite{valsson2015well,tiwary2016spectral,mccarty2017variational,yang2018refining,bonati2019neural,Invernizzi2020opus,gilberti2020atlas}. Alanine tetrapeptide is a considerably more challenging test case than alanine dipeptide. Its free energy landscape consists of many metastable states, most of which are high in free energy and thus difficult to capture in an unbiased simulation. We anticipate that we can only obtain an embedding that accurately separates all of the metastable states by using training data from an enhanced sampling simulation, which better captures higher-lying metastable states. Thus, the system is a good test case to evaluate the performance of the MRSE method and the reweighting procedure.\n\n\\begin{figure}[htp]\n  \\includegraphics[width=\\columnwidth]{fig-ala3-system.png}\n  \\caption{Results for alanine tetrapeptide in vacuum at 300 K. (a) The conditional FESs (eq~\\ref{eq:conditional_fe}), obtained from the WT-MetaD simulation, shown as a function of $\\Phi_1$ and $\\Phi_2$ for two minima of $\\Phi_3$ labeled as A and B. We denote the ten metastable states as $s_1$ to $s_{10}$. (b) The alanine tetrapeptide system with the backbone dihedral angles $\\mathbf{\\Phi} \\equiv (\\Phi_1,\\Phi_2,\\Phi_3)$ and $\\mathbf{\\Psi} \\equiv (\\Psi_1,\\Psi_2,\\Psi_3)$ that we use as the input features for the MRSE embeddings. (c) The free energy profile $F(\\Phi_3)$, obtained from the WT-MetaD simulation, with the two minima A and B. The gray shaded areas indicate the regions integrated over in eq~\\ref{eq:conditional_fe}. The FESs are obtained using kernel density estimation as described in Section~\\ref{sec:kde}.}\n  \\label{fig:ala3-system}\n\\end{figure}\n\nAs is often customary~\\cite{valsson2015well,tiwary2016spectral,Invernizzi2020opus,gilberti2020atlas}, we consider the backbone dihedral angles $\\mathbf{\\Phi} \\equiv (\\Phi_1,\\Phi_2,\\Phi_3)$ and $\\mathbf{\\Psi} \\equiv (\\Psi_1,\\Psi_2,\\Psi_3)$ that characterize the configurational landscape of alanine tetrapeptide. We show the dihedral angles in Figure~\\ref{fig:ala3-system}(b). For this particular setup in vacuum, it is sufficient to use $\\mathbf{\\Phi}$ to describe the free energy landscape and separate the metastable states, as $\\mathbf{\\Psi}$ are fast CVs in comparison to $\\mathbf{\\Phi}$~\\cite{valsson2015well,Invernizzi2020opus}. To generate biased simulation data, we perform a WT-MetaD simulation using the $\\mathbf{\\Phi}$ angles as CVs and a bias factor of $\\gamma=5$. Moreover, we perform a PT simulation and employ the 300 K replica to obtain unbiased simulation data. As before, the embeddings obtained by training on these simulation data sets are denoted as WT-MetaD and PT embeddings, respectively. 
As before, we also consider a WT-MetaD embedding, denoted as not-reweighted, where we do not include the weights in the construction of the feature pairwise probability distribution.\n\nTo verify the quality of the sampling and the accuracy of the FESs, we compare the results obtained from the WT-MetaD and PT simulations to results from bias-exchange metadynamics simulations~\\cite{piana2007bias} using $\\mathbf{\\Phi}$ and $\\mathbf{\\Psi}$ as CVs (see Section~S13 in the SI). Comparing the free energy profiles for $\\mathbf{\\Phi}$ obtained with different methods (Figure~S12 in the SI), and keeping in mind that the 300 K replica from the PT simulation only describes well the lower-lying metastable states, we find that all simulations are in good agreement. Therefore, we conclude that the WT-MetaD and PT simulations are converged.\n\nTo show the results from the three-dimensional CV space on a two-dimensional surface, we consider a conditional FES where the landscape is given as a function of $\\Phi_1$ and $\\Phi_2$ conditioned on values of $\\Phi_3$ being in one of the two distinct minima shown in Figure~\\ref{fig:ala3-system}(c). We label these minima as A and B. We define the conditional FES as:\n\\begin{equation}\n  \\label{eq:conditional_fe}\n  F(\\Phi_1,\\Phi_2 | \\Phi_3 \\in S)=-\\frac{1}{\\beta}\\log\\int_{S} \\mathrm{d}\\Phi_3 \\, \\e^{-\\beta F(\\mathbf{\\Phi})},\n\\end{equation}\nwhere $F(\\mathbf{\\Phi})$ is the FES obtained from the WT-MetaD simulation (aligned such that its minimum is at zero), $S$ is either the A or B minimum, and we integrate over the regions indicated by the gray areas in Figure~\\ref{fig:ala3-system}(c). We show the two conditional FESs in Figure~\\ref{fig:ala3-system}(a). Through a visual inspection of Figure~\\ref{fig:ala3-system}, we can identify ten different metastable states, denoted as $s_1$ to $s_{10}$. Three of the states, $s_{5}$, $s_{7}$, and $s_{8}$, are sampled properly in the 300 K replica of the PT simulation, and thus we consider them as the equilibrium metastable states. The rest of the metastable states are located higher in free energy and only sampled accurately in the WT-MetaD simulation. The number of metastable states observed in Figure~\\ref{fig:ala3-system}(a) is in agreement with a recent study by Giberti et al.~\\cite{gilberti2020atlas}.\n\nWe can judge the quality of the MRSE embeddings based on whether they can correctly capture the metastable states in only two dimensions. As input features for the MRSE embeddings, we use sines and cosines of the backbone dihedral angles $\\mathbf{\\Phi}$ and $\\mathbf{\\Psi}$ (12 features in total), instead of heavy atom distances as we do in the previous section for alanine dipeptide. We use weight-tempered random sampling with $\\alpha=2$ to select landmarks for the training of the WT-MetaD embeddings.\n\nWe show the PT and WT-MetaD embeddings in Figure~\\ref{fig:ala3-emb}. We can see that the PT embedding in Figure~\\ref{fig:ala3-emb}(a) is able to accurately describe the equilibrium metastable states (i.e., $s_5$, $s_7$, and $s_8$). However, as expected, the PT embedding cannot describe all ten metastable states, as the 300 K replica in the PT simulation rarely samples the higher-lying states.\n\\begin{figure}[htp]\n  \\includegraphics[width=0.9\\columnwidth]{fig-ala3-emb.png}\n  \\caption{Results for alanine tetrapeptide in vacuum at 300 K. FESs for the MRSE embeddings trained on the unbiased and biased simulation data. (a) The PT embedding trained and evaluated on the PT simulation data. 
(b-c) The WT-MetaD embeddings trained and evaluated on the WT-MetaD simulation data. The WT-MetaD embeddings are obtained without (b) and with (c) incorporating weights into the training via a reweighted feature pairwise probability distribution (see eq~\\ref{eq:reweighted_kernel}). The FESs are obtained using kernel density estimation as described in Section~\\ref{sec:kde}. The state labels in the FESs correspond to the labeling used in Figure~\\ref{fig:ala3-system}(a). The embeddings are rescaled so that the equilibrium states are of similar size. The units for the MRSE embeddings are arbitrary and thus not shown.}\n  \\label{fig:ala3-emb}\n\\end{figure}\n\nIn contrast, we can see that the WT-MetaD embeddings in Figure~\\ref{fig:ala3-emb}(b-c) accurately capture all ten metastable states. By visual inspection of the simulation data, we can assign state labels for the embeddings in Figure~\\ref{fig:ala3-emb}, corresponding to the states labeled in Figure~\\ref{fig:ala3-system}(a). One interesting aspect of the MRSE embeddings in Figure~\\ref{fig:ala3-emb} is that they map the equilibrium states similarly, even if we obtain the embeddings from different simulation data sets (PT and WT-MetaD). This similarity underlines the consistency of our approach. The fact that both the reweighted and not-reweighted WT-MetaD embeddings capture all ten states suggests that we could use both embeddings as CVs for biasing.\n\nHowever, we can observe that the reweighted embedding has a better visual separation of the states. For example, we can see this for the separation between $s_9$ and $s_{10}$. Furthermore, we can see that the reweighted embedding separates the states from the A and B regions better than the not-reweighted embedding. In the reweighted embedding, states $s_1$ to $s_4$ are close to each other and separated from states $s_5$--$s_{10}$ as indicated by the line drawn in Figure~\\ref{fig:ala3-emb}(c). Therefore, we can conclude that the reweighted WT-MetaD embedding is of better quality and better represents distances between metastable states for this system. These results show that we need to employ a reweighted feature pairwise probability distribution for more complex systems.\n\n\n\n\n\n\\section{Discussion and Conclusions}\n\\label{sec:discussion_conclusions}\nWe present multiscale reweighted stochastic embedding, a general framework that unifies enhanced sampling and machine learning for constructing collective variables. MRSE builds on top of ideas from stochastic neighbor embedding methods~\\cite{hinton2002stochastic,maaten2008visualizing,maaten2009learning,van2014accelerating}. We introduce several advancements to SNE methods that make MRSE suitable for constructing CVs from biased data obtained from enhanced sampling simulations.\n\nWe show that this method can construct CVs automatically by learning a mapping from a high-dimensional feature space to a low-dimensional latent space via a deep neural network. We can use the trained NN to project any given point in feature space to CV space without rerunning the training procedure. Furthermore, we can obtain the derivatives of the learned CVs with respect to the input features and bias the CVs within an enhanced sampling simulation. 
In future work, we will use this property by employing MRSE within an enhanced sampling scheme where the CVs are iteratively improved~\\cite{zhang2018unfolding,chen2018collective,ribeiro2018reweighted}.\n\nIn this work, we focus entirely on the training of the embeddings, using training data sets obtained from both unbiased simulations and biased simulations employing different biasing strengths (i.e., bias factors in WT-MetaD). As the ``garbage in, garbage out'' adage applies to ML (a model is only as good as its training data), to eliminate the influence of incomplete sampling, we employ idealized sampling conditions that are not always achievable in practice~\\cite{pant2020statistical}. In future work, we will need to consider how MRSE performs under less ideal sampling conditions. One possible option to address this issue is to generate multiple embeddings by running independent training attempts and to score them using the maximum caliber principle, as suggested in ref~\\citenum{pant2020statistical}.\n\nThe choice of the input features depends on the physical system under study. In this work, we use conventional features, i.e., microscopic coordinates, distances, and dihedral angles, as they are a natural choice for the model systems considered here. In general, the features can be complicated functions of the microscopic coordinates~\\cite{musil2021physicsinspired}. For example, symmetry functions have been used as input features in studies of phase transformations in crystalline systems~\\cite{geiger2013neural,rogal2019neural}. Additionally, features may be correlated or simply redundant. See ref~\\citenum{dy2004feature} for a general outline of feature selection in unsupervised learning. We will explore the usage of more intricate input features and modern feature selection methods~\\cite{ravindra2020automatic,cersonsky2020improving} for MRSE embeddings in future work.\n\nOne of the issues with using kernel-based dimensionality reduction methods, such as diffusion maps~\\cite{coifman2008diffusion} or SNE methods~\\cite{hinton2002stochastic}, is that the user needs to select the bandwidths (i.e., the scale parameters $\\boldsymbol{\\varepsilon}$) when using the Gaussian kernels. In $t$-SNE~\\cite{maaten2008visualizing,maaten2009learning}, the Gaussian bandwidths are optimized by fitting to a parameter called perplexity. We can view the perplexity as the effective number of neighbors in a manifold~\\cite{maaten2008visualizing,maaten2009learning}. However, this only redirects the issue as the user still needs to select the perplexity parameter~\\cite{wattenberg2016how}. Larger perplexity values lead to a larger number of nearest neighbors and an embedding less sensitive to small topographic structures in the data. Conversely, lower perplexity values lead to fewer neighbors and ignore global information in favor of the local environment. However, what if several length scales characterize the data? In this case, it is impossible to represent the density of the data with a single set of bandwidths, so viewing multiple embeddings obtained with different perplexity values is quite common~\\cite{wattenberg2016how}.\n\nIn MRSE, we circumvent the issue of selecting the Gaussian bandwidths or the perplexity value by employing a multiscale representation of the feature space. Instead of a single Gaussian kernel, we use a Gaussian mixture where each term has its bandwidths optimized for a different perplexity value. 
We perform this procedure in an automated way by employing a range of perplexity values representing several length scales. This mixture representation allows describing both the local and global characteristics of the underlying data topography. The multiscale nature of MRSE makes the method particularly suitable for tackling complex systems, where the free energy landscape consists of several metastable states of different sizes and shapes. However, as we have seen in Section~\\ref{sec:ala3_results}, even model systems may exhibit such complex behavior.\n\nEmploying nonlinear dimensionality reduction methods is particularly problematic when considering training data obtained from enhanced sampling simulations. In this case, the feature samples are drawn from a biased probability distribution, and each feature sample carries a statistical weight that we need to take into account. In MRSE, we take the weights into account when selecting the representative feature samples (i.e., landmarks) for the training. For this, we introduce a weight-tempered selection scheme that allows us to obtain landmarks that strike a balance between representing the equilibrium distribution and capturing important metastable states lying higher in free energy. This weight-tempered random sampling method depends on a tempering parameter $\\alpha$ that allows us to tune between obtaining an equilibrium and a biased distribution of landmarks. This parameter is case-dependent and similar in spirit to the bias factor $\\gamma$ in WT-MetaD. Generally, $\\alpha$ should be selected so that every crucial metastable state is densely populated. However, $\\alpha$ should not be too large, as this may result in including feature samples from unimportant higher-lying free energy regions.\n\nThe weight-tempered random sampling algorithm is inspired by and bears a close resemblance to the well-tempered farthest-point sampling (WT-FPS) landmark selection algorithm introduced by Ceriotti et al.~\\cite{ceriotti2013demonstrating}. For small values of the tempering parameter $\\alpha$, both methods give similar results, as discussed in Section~\\ref{sec:ala1_results}. The main difference between the algorithms lies in the limit $\\alpha \\to \\infty$. In weight-tempered random sampling, we obtain a landmark distribution that is the same as the biased distribution from the enhanced sampling simulation. On the other hand, WT-FPS results in landmarks that are sampled uniformly from the simulation data set. Due to its use of FPS~\\cite{Hochbaum1985_Abestpo} in the initial stage, WT-FPS is also computationally more expensive. Thus, as we are interested in a landmark selection obtained using smaller values of $\\alpha$ and do not want uniformly distributed landmarks, we prefer weight-tempered random sampling.\n\nThe landmarks obtained with weight-tempered random sampling still carry statistical weights that can vary considerably. Thus, we also incorporate the weights into the training by employing a reweighted feature pairwise probability distribution. To test the effect of this reweighting, we constructed MRSE embeddings without including the weights in the training; in that case, the weights are taken into account only during the landmark selection. For alanine dipeptide, the reweighted MRSE embeddings are more consistent and slightly better than the not-reweighted ones. For the more challenging alanine tetrapeptide case, both the reweighted and not-reweighted embeddings capture all the metastable states.
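\n\nAs a brief aside, the weight-tempered selection idea discussed above can be summarized in a few lines. The sketch below is our own illustration under the assumption that landmarks are drawn with probability proportional to $w^{1/\\alpha}$, which reproduces the two limits mentioned above; it is not the exact implementation used in this work.\n\n\\begin{verbatim}\nimport numpy as np\n\n# Draw landmark indices from biased samples with statistical weights\n# w_i, assuming a selection probability proportional to w_i**(1/alpha).\n# alpha = 1 recovers equilibrium reweighting; alpha -> infinity gives\n# uniform sampling of the biased data, i.e., the biased distribution.\ndef weight_tempered_landmarks(weights, n_landmarks, alpha, seed=0):\n    rng = np.random.default_rng(seed)\n    p = np.asarray(weights, dtype=float) ** (1.0 / alpha)\n    p /= p.sum()\n    return rng.choice(len(p), size=n_landmarks, replace=False, p=p)\n\\end{verbatim}\n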
Returning to the comparison, we can observe that the reweighted embedding has a better visual separation of states. Thus, we can conclude from these two systems that employing a reweighted feature pairwise probability distribution is beneficial or even essential, especially when considering more complex systems. Nevertheless, this is an issue that we need to consider further in future work.\n\nFinally, we have implemented the MRSE method and weight-tempered random sampling in the open-source \\textsc{plumed} library for enhanced sampling and free energy computation~\\cite{tribello2014plumed,plumed-nest}. Having MRSE integrated into \\textsc{plumed} is a significant advantage. We can use MRSE with the most popular MD codes and learn CVs both in postprocessing and on the fly during a molecular simulation. Furthermore, we can employ the learned CVs with the various CV-based enhanced sampling methods implemented in \\textsc{plumed}. We will make our code publicly available under an open-source license by contributing it as a module called \\texttt{LowLearner} to the official \\textsc{plumed} repository in the future. In the meantime, we release an initial implementation of \\texttt{LowLearner} with our data. The archive of our data is openly available at Zenodo~\\cite{mrse-dataset} (DOI: \\href{https:\/\/zenodo.org\/record\/4756093}{\\texttt{10.5281\/zenodo.4756093}}). \\textsc{plumed} input files and scripts required to replicate the results are available from the \\textsc{plumed} NEST~\\cite{plumed-nest} under \\texttt{plumID:21.023} at \\url{https:\/\/www.plumed-nest.org\/eggs\/21\/023\/}.\n\n\n\n\n\n\\section*{Acknowledgments}\nWe want to thank Ming Chen (UC Berkeley) and Gareth Tribello (Queen's University Belfast) for valuable discussions, and Robinson Cortes-Huerto, Oleksandra Kukharenko, and Joseph F. Rudzinski (Max Planck Institute for Polymer Research) for carefully reading over an initial draft of the manuscript. JR gratefully acknowledges financial support from the Foundation for Polish Science (FNP). We acknowledge using the MPCDF (Max Planck Computing \\& Data Facility) DataShare.\n\n\n\n\n\n\\section*{Associated Content}\nThe Supporting Information is available free of charge at \\url{https:\/\/pubs.acs.org\/doi\/xxx\/yyy}.\n\n\\begin{adjustwidth}{2cm}{}\n(S1) Entropy of the reweighted feature pairwise probability distribution;\n(S2) Kullback-Leibler divergence loss for a full set of training data;\n(S3) Description of well-tempered farthest-point sampling (WT-FPS);\n(S4) Effective landmark CV distribution for weight-tempered random sampling;\n(S5) Details about the clustering used in Figure~7;\n(S6) Bandwidth values for kernel density estimation;\n(S7) Loss function learning curves;\n(S8) Additional embeddings for the M\\\"uller-Brown potential;\n(S9) Feature preprocessing in the alanine dipeptide system;\n(S10) Alanine dipeptide embeddings for different values of $\\alpha$ in weight-tempered random sampling;\n(S11) Alanine dipeptide embeddings for $\\alpha=2$ in WT-FPS;\n(S12) Alanine dipeptide embeddings for different random seed values;\n(S13) Convergence of alanine tetrapeptide simulations.\n\\end{adjustwidth}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\\label{sec1}\nSocial media is used by billions of people from all around the world. Initially, social media was developed with the aim of helping people connect with family and friends online.
It was used to share everyday things, events, interests, and news within closed circles of family and friends. These were personal events, e.g., birthdays, weddings, vacations, graduation ceremonies, and going out. Once the usability of social media was discovered, it soon caught the attention of individuals and companies, which started using social media to reach more customers. Soon after, the trend became a global phenomenon where people started connecting worldwide based on their common interests. The influence of social media on people's lives and attitudes has been widely studied and established from many different perspectives \\cite{intro1}, \\cite{intro2}.\n\nAlthough social media is a broad term, it mainly refers to Facebook, Twitter, Reddit, Instagram, and YouTube. Some social media platforms allow users to post text, photos, and videos, while many other social media applications have limited options and restrictions on the type of content that can be shared. YouTube allows users to post videos, while Instagram only allows users to share videos and photos. There are 4.66 billion active internet users worldwide, of whom 4.2 billion are active on social media. As of the first quarter of 2020, Facebook has 2.6 billion monthly active users, making it the most extensive social media network globally. Twitter is one of the leading social media platforms, with 397 million users worldwide, and it is becoming increasingly prominent during events and an essential tool in politics \\cite{Statista2021.}. Another study \\cite{Tnewsmed} shows that Twitter is an effective and fast way of sharing news and developing stories. This trend has continued to grow over the last decade as the internet has become widespread.\n\nHowever, the use of social media has become more complex in the last decade. It became a broader phenomenon because of the involvement of multiple stakeholders such as companies, groups, and other organizations. Many lobbying and public relations firms got on board and started targeting social audiences to change people's perspectives and influence their decisions. Mostly, these campaigns are related to a particular individual or a company. A similar process happens in the public sphere, where people rally against or in support of their target. It has played a significant role in different outcomes, affecting countries, people, and eventually the world. One such example is the ``Arab Spring\" \\cite{arabspring}, an event that started in Tunisia and spread among other regional countries. Another example of good and bad events in the political spheres of the UK and US is given in the study that uses Twitter to evaluate the perceived impact on users \\cite{goodbadevents}.\n\nThe term ``event\" typically implies a change, which is an occurrence bounded by time and space. In the context of social media, an event can be happening on the ground or online. Different mediums can broadcast events happening on the ground while people participate in the event through social media discussion. These kinds of events can be referred to as hybrid events. An example of such a hybrid event is a volcano eruption, where people participate in the event using online discussions on social media while it is happening on the ground. Some events happen solely online, such as gaming, marketing, and learning events. Events can be communicated in text, photos, and videos across social media platforms. Many events can happen on social media platforms simultaneously, providing plentiful and valuable information.
It provides information about the event itself, but it also reveals the sentiments and opinions of the general public and the direction in which the events are likely to evolve. This quick interaction of users and transmission of information makes it a dynamic process; it sometimes proves hard to follow the latest developments, making this a challenging task.\n\nEvents also have time dimensions, so it is equally important to determine the time of an event after its detection. For example, an event may have occurred in the past, may be happening in the present, or may be planned to occur in the future. Based on that, further steps can be taken according to the requirements of the situation. Events occurring on social media may directly impact a person's personal or social life. Past events can tell us about people's opinions and other factors; current events can be a great source for developing a story, while future events can help us prepare in advance. The study \\cite{eventtime} reviews the existing research on the detection of disaster events and classifies it along three dimensions: early warning and event detection, post-disaster response, and damage assessment. \n\nThe recent example of violence in Bangladesh can explain the link between social media and real life. In mid-October 2021, clashes were sparked by videos and allegations that spread across social media claiming that a Qur'an, the Muslim holy book, had been placed on the knee of a statue of the Hindu god Hanuman. The violence continued in the following days, resulting in the deaths of 7 people, with about 150 people injured; more than 80 special shrines set up for the Hindu festival were attacked. This case shows the severe and robust effect of social media on our daily lives and on the situation on the ground \\cite{bngldshviolnc}. This violence was termed the ``worst communal violence in years\" by the New York Times. Similar episodes of violence are becoming a norm in India since the rise of right-wing politics. If events occurring on social media are detected in advance and possible coming hazards are flagged, they can be countered in anticipation, significantly reducing the reaction and mobilization time of state forces while maximizing the protection of people at risk.\n\nEvent detection has long been addressed in Topic Detection and Tracking (TDT) in academia \\cite{onewdet}. It mainly focuses on finding and following events in streams of broadcast news stories and social media posts. Event Detection (ED) is further divided into two categories depending on the type of its task: New Event Detection (NED) and Retrospective Event Detection (RED) \\cite{redev}. NED focuses on detecting newly occurring events from online text streams, while RED aims to discover previously unknown events from offline historical data. Often, event detection is associated with identifying the first story on topics of interest through constant monitoring of social media and news streams. Other related fields of research are associated with event detection, such as event tracking, event summarization, and event prediction. Event tracking follows the development of an event over time. Event summarization outlines an event from the given data, while event prediction forecasts the next event within a current event sequence. These topics are part of the Topic Detection and Tracking (TDT) field.\n\nEvent detection is a vast research field, and there are various requirements and challenges for each task.
Various terms have been used to address different events, making it complex to navigate the literature and sometimes adding confusion. We propose grouping relevant events, based on their characteristics, under the umbrella term ``Dangerous Events\" (DE). Section 2 defines the term ``Dangerous Events\" and the main grouping criteria. In order to clarify the idea of the different dangerous event types, we employ SocialNetCrawler \\cite{sncrwlr} to extract tweets that are given as examples for each type.\n\nThe rest of the paper is organized as follows: Section 2 presents the definition of dangerous events in social media; Section 3 reviews event detection methods, techniques, and datasets; Section 4 discusses works on detecting different dangerous events and on event prediction; and Section 5 concludes the paper with some interesting future research directions.\n\n\\section{Dangerous Events}\\label{sec2}\nAccording to Merriam-Webster \\cite{merrwebdang}, the word ``dangerous\" means involving possible injury, pain, harm, or loss; characterized by danger. In that context, we define a dangerous event as an event that poses any danger to an individual, group, or society. This danger can come in many shapes and intensities. The objective is to draw a fine line between normal, harmless, or merely unpleasant events and extreme, abnormal, and harmful ones. Less sensitive, unpleasant, and disliked events do not compel a person to feel threatened, while, in the case of dangerous events, the person will feel fearful, unsafe, and threatened. This motivates approaching the term ``event\" in a broader sense to address the common element of all such events. The details of the danger can always be discussed in more depth as the situation requires; for example, a natural disaster is more urgent than hate speech. In other words, the former requires an immediate response with no time to lose, while the latter can allow some time to take action. \n\nDangerous events can be anomalies, novelties, outliers, and extremes. These terms can be used with positive or negative meanings. However, not all anomalies, novelties, and extremes are dangerous, but all dangerous events fulfil one or all of those conditions (extreme, anomaly, novelty). The authors of \\cite{extremsentilax} proposed an unsupervised approach to detect extreme sentiments on social media, showing that extreme positive sentiments can be detected and differentiated from everyday positive sentiments. It may therefore be concluded that extreme negative sentiments are likely to turn into dangerous events.\n\nGrouping and defining dangerous events based on their characteristics is another challenging task; it can help address different types of dangerous events by narrowing them down to specific details. We define three broad categories of dangerous events with commonality among them.\n\\begin{enumerate}\n\\item Scenario-based Dangerous Events\n\\item Sentiment-based Dangerous Events\n\\item Action-based Dangerous Events\n\\end{enumerate}\n\nFigure \\ref{figde} gives a depiction of dangerous events and their categories.
In the following subsections, we outline the definition of each type of dangerous event.\n\n\\begin{figure}[h]%\n\\centering\n\\includegraphics[width=0.9\\textwidth]{deflw.png}\n\\caption{Dangerous Events and its categories.}\\label{figde}\n\\end{figure} \n \n\\subsection{Scenario-based Dangerous Events}\nWe use the word ``scenario\" to refer to the development of events. These events are unplanned and unscripted, and most of the time they occur naturally. Some planned events can also turn into surprising scenarios. For example, a peaceful protest can turn into a riot, as in 2020, when a peaceful protest against corona restrictions in Germany turned into an ugly situation after the rally was hijacked by right-wing extremists, who ended up storming the Parliament building and exhibiting right-wing symbols and slogans \\cite{germanparl}. \n\nDetecting and tracking natural disasters on social media has been investigated intensively, and studies \\cite{sc-based1} have proposed different methods to identify those disasters by various means. The aim of these studies has mainly been to tap into the potential of social media to get the latest updated information provided by social media users in real time and to identify the areas where assistance is required. These categories are considered scenario-based dangerous events in this paper, including earthquakes, force majeure, hurricanes, floods, tornadoes, volcano eruptions, and tsunamis. Although each calamity's nature is different, the role of social media in such events provides a common basis to approach them as scenario-based dangerous events. A hypothetical example of a scenario-based dangerous event, obtained using the crawler tool SocialNetCrawler, which can be accessed using the link\\footnote{\\url{http:\/\/sncrawler.di.ubi.pt\/}}, is:\n\n\\textit{``@politicususa BREAKING: \nScientists predict a tsunami will hit \nWashington, DC on 1\/18\/2020\nWe Are Marching in DC\u2026 https:\/\/t.co\/3af4ZhyV3J\"}\n\n\\subsection{Sentiment-based Dangerous Events}\nSentiment Analysis (SA), also known as Opinion Mining (OM), is the process of extracting people's opinions, feelings, attitudes, and perceptions on different topics, products, and services. The sentiment analysis task can be viewed as a text classification problem, as the process involves several operations that ultimately classify whether a particular text expresses positive or negative sentiment \\cite{sncamb}. For example, a micro-blogging website like Twitter is beneficial for predicting the index of emerging epidemics. These are platforms where users can share their feelings, which can be processed to generate vital information related to many areas such as healthcare, elections, reviews, and illnesses. Previous research suggests that understanding user behaviour, especially regarding the feelings expressed during elections, can indicate the outcome of elections \\cite{saelctn}. \n\nSentiments can be positive or negative, but for defining sentiment-based dangerous events, the applicable sentiments are negative and, in some instances, extremely negative ones. Online radicalization is one such threat related to extreme negative sentiments towards certain people, countries, and governments. Such extreme negative sentiments can result in protests, online abuse, and social unrest. Detecting these events can help reduce their impact by allowing the concerned parties to react beforehand.
A hypothetical example of a sentiment-based dangerous event, in the form of a tweet obtained using SocialNetCrawler, is given below:\n\n\\textit{``RT @Lrihendry: When Trump is elected in 2020, I'm outta here. \nIt's a hate-filled sewer. \nIt is nearly impossible to watch the hateful at\u2026\"}\n\n\n\\subsection{Action-based Dangerous Events}\nAction-based events involve human participation in an event. Various types of actions happen on the ground that can be detected using social media. Actions can be of many types, but we single out actions that cause harm, loss, or threat to any entity, which again share the common attribute of negativity and are highly similar to the previously defined types of dangerous events. Some examples of action-based dangerous events are prison breaks, terrorist attacks, military conflicts, and shootings. Several studies have been published focusing on one or more types of such action-based events. The study \\cite{antifas} focuses on anti-fascist accounts on Twitter to detect acts of violence, vandalism, de-platforming, and harassment of political speakers by Antifa. A hypothetical example of an action-based dangerous event, acquired using SocialNetCrawler, is given below:\n\n\\textit{``RT @KaitMarieox: This deranged leftist and LGBT activist named Keaton Hill assaulted and threatened to kill @FJtheDeuce, a black conservati\u2026''}\n\n\\section{Event Detection methods and techniques}\n\nEvent detection has been a popular topic in the research community. Several methods and techniques have been proposed to detect events depending on different requirements. These methods directly depend on the type of task and the data available; as such, detecting events from image data is undoubtedly different from detecting them in text data. However, in this work, we only refer to event detection techniques related to text data, particularly data obtained from social media platforms. \n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.80\\textwidth]{edtyp.png}\n\\caption{Classification of ED methods.} \\label{evmethds}\n\\end{figure}\n\nEvent detection methods and techniques revolve around a few basic approaches. Two approaches used in event detection are document-pivot and feature-pivot. What differs between these approaches is mainly the clustering strategy, the way documents are mapped to feature vectors, and the similarity metric used to identify whether two documents represent the same event. Another approach is the topic modelling approach, primarily based on probabilistic models.\n\nThe \\textbf{document-pivot approach} originates from the Topic Detection and Tracking (TDT) field and can be seen as a clustering problem: it detects events by clustering documents based on document similarity, as shown in Figure \\ref{doc:fig}. Documents are compared using the cosine similarity of their Tf-IDF (term frequency-inverse document frequency) representations, while a Locality Sensitive Hashing (LSH) \\cite{lcltyhashing} scheme is utilized to retrieve the best match rapidly. A minimal sketch of this incremental clustering idea is given after the figure.\n \\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.8\\textwidth]{dpi.png}\n\\caption{Event Detection using Document-pivot approach \\cite{31}. \\label{doc:fig} }\n\\end{figure}
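\n\nThe sketch below is our own illustration of the document-pivot idea on toy data: each incoming post joins its most similar existing cluster if the TF-IDF cosine similarity exceeds a threshold and otherwise starts a new candidate event. Production systems replace the exhaustive comparison with LSH for speed; the threshold value is an arbitrary choice.\n\n\\begin{verbatim}\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.metrics.pairwise import cosine_similarity\n\nposts = ['earthquake shakes the city centre',\n         'strong earthquake shakes city buildings',\n         'new phone released today']\n\nX = TfidfVectorizer().fit_transform(posts)\n\nthreshold, clusters = 0.3, []       # clusters: lists of post indices\nfor i in range(X.shape[0]):\n    best, best_sim = None, 0.0\n    for c, members in enumerate(clusters):\n        sim = cosine_similarity(X[i], X[members]).max()\n        if sim > best_sim:\n            best, best_sim = c, sim\n    if best is not None and best_sim >= threshold:\n        clusters[best].append(i)    # same event as an earlier post\n    else:\n        clusters.append([i])        # first story of a new event\nprint(clusters)                     # e.g. [[0, 1], [2]]\n\\end{verbatim}\n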
The \\textbf{feature-pivot approach} was initially proposed for the analysis of timestamped document streams. Bursty activity is considered to signal an event, making some of the text features, such as keywords, entities, and phrases, more prominent. The approach therefore clusters terms together based on the patterns in which they occur, as shown in Figure \\ref{featurebsd}. A study \\cite{featurenaive} uses a Naive Bayes classifier to learn selected features, such as keywords, to identify civil unrest and protests and accordingly predict the event days.\n\n \\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.8\\textwidth]{fdp.png}\n\\caption{Event Detection using Feature-pivot approach \\cite{31}. \\label{featurebsd}}\n\\end{figure}\n\n\\textbf{Topic modelling approaches} are based on probabilistic models, which detect events in social media documents in a similar way to how topic models identify latent topics in text documents. Early topic models depended on word occurrence, modelling each document as a mixture of latent topics and each topic as a distribution over words. Latent Dirichlet Allocation (LDA) \\cite{jelodar2019latent} is the best-known probabilistic topic modelling technique. It is a hierarchical Bayesian model in which the topic distribution is assumed to have a sparse Dirichlet prior. The model is shown in Figure \\ref{ldafig}, where $\\alpha$ is the parameter of the Dirichlet prior on the per-document topic distribution $\\theta$, and $\\varphi$ is the word distribution for a topic. $K$ represents the number of topics, $M$ the number of documents, and $N$ the number of words in a document. If the word $W$ is the only observable variable, then learning the topics, the per-topic word probabilities, and the topic mixture of each document is tackled as a problem of Bayesian inference solved by Gibbs sampling.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.4\\textwidth]{LDAtm.png}\n\\caption{LDA - A common topic modeling technique \\cite{31}. } \\label{ldafig}\n\\end{figure}
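\n\nAs a generic illustration of LDA-based topic extraction (our own sketch, not tied to any specific work cited here), the following code fits a two-topic model on toy documents and prints the top words per topic:\n\n\\begin{verbatim}\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.decomposition import LatentDirichletAllocation\n\ndocs = ['flood warning issued for the coast',\n        'river flood damages homes',\n        'election results announced tonight',\n        'candidates debate before the election']\n\nvec = CountVectorizer(stop_words='english')\ncounts = vec.fit_transform(docs)\nvocab = vec.get_feature_names_out()\n\nlda = LatentDirichletAllocation(n_components=2, random_state=0)\nlda.fit(counts)\n\nfor k, topic in enumerate(lda.components_):  # per-topic word weights\n    top = topic.argsort()[-3:][::-1]\n    print(f'topic {k}:', [vocab[i] for i in top])\n    # e.g. one flood-related and one election-related topic\n\\end{verbatim}\n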
\n\nMany methods have been proposed for the detection of events. These event detection (ED) methods are mainly categorized as supervised and unsupervised, as shown in Figure \\ref{evmethds}. Supervised methods include support vector machines (SVM), conditional random fields (CRF), decision trees (DT), naive Bayes (NB), and others, while the unsupervised approaches include query-based, statistical-based, probabilistic-based, clustering-based, and graph-based methods.\n\n\\subsection{Event Detection Datasets}\nDue to the growth of the internet and related technologies, research in event detection has experienced significant interest and effort. However, benchmark datasets for event detection have witnessed slow progress. This can be attributed to the complexity and costliness of annotating events, which requires human input. There are only a handful of datasets available that cover event detection. These datasets are mostly limited in size and cover very restricted types of events, addressing specific domains based on certain features. This also raises issues when using data-hungry deep learning models, which typically require balanced data for each class. Some of these datasets are described briefly in the following paragraphs. Table \\ref{tab:edds} gives a comparison of the discussed datasets and knowledge bases.\n\nMAVEN \\cite{wang2020MAVEN}, which stands for MAssive eVENt detection dataset, offers a general-domain event detection dataset manually annotated by humans. It uses English Wikipedia and FrameNet (Baker et al., 1998) documents for building the dataset. It contains 111,611 different events and 118,732 event mentions. The authors claim this to be the largest available human-annotated event detection dataset. There are 164 different types of events, representing a much wider range of public-domain events. The event types are grouped under five top-level types: action, change, scenario, sentiment, and possession. \n\nEventWiki \\cite{ge2018eventwiki} is a knowledge base of events, which consists of 21,275 events covering 95 types of significant events collected from Wikipedia. EventWiki gives four kinds of information: event type, event info-box, event summary, and full-text description. The authors claim it to be the first knowledge base of significant events, whereas most knowledge bases focus on static entities such as people, locations, and organizations.\n\nEventKG \\cite{Abdollahi2020EventKGClickAD} is a multilingual resource incorporating event-centric information extracted from several large-scale knowledge graphs such as Wikidata, DBpedia and YAGO, as well as less structured sources such as the Wikipedia Current Events Portal and Wikipedia event lists in 15 languages. It contains details of more than 1,200,000 events in nine languages. Supported languages include English, French, German, Italian, Russian, Portuguese, Spanish, Dutch, Polish, Norwegian, Romanian, Croatian, Slovene, Bulgarian, and Danish.\n\nEVIN \\cite{EVIN}, which stands for EVents In News, describes a method that can extract events from a news corpus and organize them into relevant classes. It contains 453 classes of event types and 24,348 events extracted from 300,000 heterogeneous news articles. The news articles used in this work are from a highly diverse set of newspapers and other online news providers (e.g., http:\/\/aljazeera.net\/, http:\/\/www.independent.co.uk, http:\/\/www.irishtimes.com, etc.). These news articles were crawled from the external links mentioned on Wikipedia pages, while ignoring the content of the Wikipedia pages themselves, to get the articles from the original website source.\n\n\n\\begin{table}[h!]\n \\begin{center}\n \\resizebox{\\textwidth}{!}{%\n \\begin{tabular}{lllllll}\n \\toprule\n \\textbf{Dataset} & \\textbf{Events} & \\textbf{Event types} & \\textbf{Document Source} & \\textbf{Language} & \\textbf{Year} & \\textbf{Reference} \\\\\n \\midrule\n MAVEN & 111,611 & 164 & English Wikipedia \\& FrameNet & English & 2020 & \\cite{wang2020MAVEN} \\\\\n EventWiki & 21,275 & 95 & English Wikipedia & English & 2018 & \\cite{ge2018eventwiki} \\\\\n EventKG & 1,200,000 & undefined & Wikidata, DBpedia \\& YAGO & Multilingual (9) & 2020 & \\cite{Abdollahi2020EventKGClickAD} \\\\\n EVIN & 24,348 & 453 & news corpus & English & 2014 & \\cite{EVIN}\\\\\n \\bottomrule\n \\end{tabular}} \\caption{Comparison of Related Event Detection Datasets} \\label{tab:edds}\n \\end{center}\n\\end{table}\n\n\\subsection{Supervised Methods}\nSupervised methods are expensive and lengthy as they require labels and training; this becomes difficult for larger datasets, where the cost of training the model is higher and more time-consuming. Some of the supervised methods for event detection are discussed below. \n\n\\textbf{Support Vector Machines (SVM):}\nSupport vector machines are based on the principle of structural risk minimization \\cite{statlrnth} from computational learning theory. Minimizing structural risk means finding a hypothesis $h$ for which we can guarantee the lowest true error; the true error of $h$ is the probability that $h$ will make an error on an unseen test sample selected at random.
An upper bound can be used to connect the true error of a hypothesis $h$ with the error of $h$ on the training set and the complexity of $H$ (measured by the VC dimension), the space of hypotheses that contains $h$ \\cite{statlrnth}. Support vector machines find the hypothesis $h$ which (approximately) minimizes this bound on the true error by effectively and efficiently controlling the VC dimension of $H$ \\cite{txtcatsvm}.\n\nIt has been confirmed in many works that the SVM is one of the most efficient algorithms for text classification. An accuracy of 87\\% was achieved in classifying tweets into traffic and non-traffic events, identifying valuable information regarding traffic events on Twitter \\cite{incdetsm}. An SVM combined with an incremental clustering technique was applied to detect social and real-world events from photos posted on the Flickr site \\cite{smedgrphmdl}.\\\\\n\n\\textbf{Conditional Random Fields (CRF):} CRFs are an important type of machine learning model developed on the basis of the Maximum Entropy Markov Model (MEMM). They were first proposed by Lafferty et al. (2001) as probabilistic models to segment and label sequence data; they inherit the advantages of the previous models, increase their efficiency, overcome their defects, and solve more practical problems \\cite{crfprbmdl}. A Conditional Random Field (CRF) classifier was trained to extract the artist name and location of music events from a corpus of tweets \\cite{edinsmfeeds}. \n\n\\textbf{Decision Tree (DT):}\nDecision tree learning is a supervised machine learning technique for producing a decision tree from training data. A decision tree, also referred to as a classification tree or a regression tree, is a predictive model that maps observations about an item to conclusions about its target value. In the tree structure, leaves represent classifications (also referred to as labels), non-leaf nodes are features, and branches represent conjunctions of features that lead to the classifications \\cite{artanlyzsft}. A gradient-boosted decision tree classifier was used to predict whether or not tweets contain an event concerning the target entity. \n\n\\textbf{Na\u00efve Bayes (NB):}\nNa\u00efve Bayes is a simple learning algorithm that uses Bayes' rule together with a strong assumption that the attributes are conditionally independent given the class. Although this independence assumption is often violated in practice, na\u00efve Bayes often provides competitive accuracy. Its computational efficiency and many other distinctive features have resulted in na\u00efve Bayes being extensively applied in practice. Na\u00efve Bayes gives a procedure for using the information in sample data to estimate the posterior probability P(y\\textbar x) of each class $y$, given an object $x$. Once we have such estimates, they can be used for classification or other decision support applications \\cite{enclypmlnaivebyes}.
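\n\nTo ground these supervised methods, the following sketch shows a minimal text classification pipeline on toy data (our own illustration; any of the classifiers above could be swapped in for the linear SVM):\n\n\\begin{verbatim}\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.svm import LinearSVC\n\ntexts = ['heavy traffic jam on highway 5',\n         'accident blocks two lanes downtown',\n         'great concert in the park last night',\n         'loved the new album, fantastic show']\nlabels = ['traffic', 'traffic', 'other', 'other']\n\n# TF-IDF features followed by a linear SVM classifier.\nclf = make_pipeline(TfidfVectorizer(), LinearSVC())\nclf.fit(texts, labels)\n\nprint(clf.predict(['huge traffic accident on the bridge']))\n# expected output: ['traffic']\n\\end{verbatim}\n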
\n\n\\subsection{Unsupervised Methods}\nUnsupervised methods do not usually require training or target labels. However, they can depend on specific rules based on the model and requirements. Many unsupervised methods have been developed; they are grouped into the categories described in the following paragraphs. \n\n\\textbf{Query-Based Methods:}\nQuery-based methods are based on queries and simple rules to identify planned events from multiple websites, e.g., YouTube, Flickr, and Twitter. An event's temporal and spatial information was extracted and then used to query other social media websites to obtain relevant information \\cite{qureybsd}. Query-based methods require predefined keywords, and the keywords must be chosen carefully to avoid retrieving unimportant events.\n\n\\textbf{Statistical Based Methods:}\nMany methods in this category have been introduced by different researchers. For example, the average frequency of unigrams was calculated to find the significant unigrams (keywords) and combine those unigrams to illustrate the trending events \\cite{unsuptopkextrct}. Another attempt was made to detect hot events by identifying bursty features (i.e., unigrams) during different time windows. Each bursty unigram feature signal was then converted into the frequency domain using the Discrete Fourier Transform (DFT). However, the DFT is not able to detect the period in which a burst occurs, which is very important in the ED process \\cite{twitternewsrepo}. \n\n\\textbf{Wavelet Transformation (WT):} Another technique, called Wavelet Transformation (WT), was introduced to assign signals to each unigram feature. The WT technique differs from the DFT in that it localizes signals in both time and frequency, and it provides better results \\cite{edintwitter}. \nA new framework was also proposed that integrated different unsupervised techniques, for example, LDA, NER, and a bipartite graph clustering algorithm based on relation and centrality scores, to discover hidden events and extract their essential information, such as the time, location, and people involved \\cite{edsocialweb}.\n\n\\textbf{Named Entity Recognition (NER):} Named Entity Recognition (NER) identifies proper nouns, whose features can then be given increased weights. One proposed technique applied tweet segmentation to obtain segments containing phrases rather than unigrams; the TF-IDF and user frequency of these segments were then computed, and the weights of the proper-noun features identified by NER were increased. Li et al. (2012a) first applied tweet segmentation and then classified the segments using K-Nearest Neighbors (KNN) to identify events from tweets published by Singapore users \\cite{twitterbsded}.\n\nWeiler et al. (2014) \\cite{eventidentity} used shifts of terms, computed by Inverse Document Frequency (IDF) over a simple sliding-window model, to detect events and trace their evolution. Petrovi\u0107 et al. (2010) \\cite{fsdtwitter} modified and used Locality Sensitive Hashing (LSH) to perform the First Story Detection (FSD) task on Twitter.
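\n\nBefore turning to the probabilistic methods, the sliding-window burst idea underlying the statistical methods above can be sketched in a few lines. This is a simple z-score rule of our own for illustration; the cited works instead analyse the same kind of per-window frequency signal with the DFT or wavelets.\n\n\\begin{verbatim}\nimport numpy as np\n\n# Hypothetical frequencies of one unigram over 10 time windows.\nfreq = np.array([3, 4, 2, 3, 5, 4, 3, 40, 35, 4], dtype=float)\n\nwindow = 5    # history used to estimate the baseline behaviour\nfor t in range(window, len(freq)):\n    hist = freq[t - window:t]\n    z = (freq[t] - hist.mean()) / (hist.std() + 1e-9)\n    if z > 3.0:    # bursty feature -> candidate event window\n        print(f'burst in window {t}: freq {freq[t]:.0f}, z = {z:.1f}')\n\\end{verbatim}\n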
\n\n\\textbf{Probabilistic Based Methods:}\nLatent Dirichlet Allocation (LDA) and Probabilistic Latent Semantic Indexing (PLSI) are topic modelling methods that are used for event detection. In LDA, each document contains many topics, and a mixture of topics is assumed for each document. The model is shown in Figure \\ref{lda}.\n\nLDA worked well with news articles and academic abstracts, but it fell short for small texts. However, the LDA model has been improved by adding tweet pooling schemes and automatic labelling. Pooling schemes include the basic scheme, author scheme, burst-term scheme, temporal scheme, and hashtag scheme (pooling tweets published under the same hashtag). The experimental results proved that the hashtag scheme produced the best clustering results \\cite{ldatpc}. However, LDA requires the number of topics and terms per topic to be defined in advance, which makes it inefficient to apply over social media. \n \n \\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.8\\textwidth]{LDA.png}\n\\caption{Topic Modeling in LDA \\cite{srvyedtxtstrm}. } \\label{lda}\n\\end{figure}\n \n\\textbf{Clustering-Based Method:}\nClustering-based methods mainly rely on selecting the most informative features that contribute to event detection, unlike supervised methods, which need labelled data for prediction. This contributes to detecting events more accurately.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.8\\textwidth]{cluster.png}\n\\caption{Clustering-Based method \\cite{srvyedtxtstrm}. } \n\\end{figure}\n\nMany clustering-based methods exist for text data, and K-means is a famous clustering algorithm. A novel dual-level clustering was proposed to detect events based on news representations with time2vec \\cite{dualclstr}. Clustering-based methods have been employed in various ways, together with other techniques such as NER and TF-IDF, in different tasks, but the ideal clustering technique is yet to come.\n\n\\textbf{Graph-Based Methods:}\nGraph-based methods consist of nodes\/vertices representing entities and edges representing the relationships between the nodes. Valuable information can be extracted from these graphs by grouping sets of nodes based on the set of edges. Each generated group is called a cluster\/graph structure, also known as a community, cluster, or module. Links between nodes within the same community are called intra-edges, while links that connect different communities are called inter-edges.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.8\\textwidth]{graph.png}\n\\caption{Graph-based clustering method \\cite{srvyedtxtstrm}. } \\end{figure}\n\n\\subsection{Semi-Supervised Methods}\nSemi-supervised learning combines both supervised and unsupervised learning methods. Typically, a small amount of labelled data and a large amount of unlabelled data are used for training; such methods are sometimes also referred to as hybrid methods. A vast amount of unlabelled data combined with insufficient labelled data can affect the classification accuracy; this is also referred to as imbalanced training data. Similarly, if there is no labelled data for a particular class, the classification can become inefficient and inaccurate. Some of the semi-supervised methods include self-training, generative models, and graph-based methods. A semi-supervised algorithm based on tolerance rough sets and ensemble learning has been recommended for such problems \\cite{ensamblern}. The missing class is extracted by approximation from the dataset and used as the labelled sample. The ensemble classifier iteratively builds the margin between the positive and negative classes to further estimate negative data, since negative data is mixed with the positive data. Therefore, classification is done without training samples by applying a hybrid approach, which saves the cost of obtaining labelled data manually, especially for larger datasets.
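\n\nThe community extraction step of the graph-based methods described above can be illustrated in a few lines with networkx (a toy word co-occurrence graph of our own; real systems build the graph from term or user co-occurrence statistics):\n\n\\begin{verbatim}\nimport networkx as nx\nfrom networkx.algorithms.community import greedy_modularity_communities\n\nG = nx.Graph()\n# Intra-edges within two hypothetical events, one inter-edge between.\nG.add_edges_from([('quake', 'tremor'), ('tremor', 'aftershock'),\n                  ('quake', 'aftershock'),\n                  ('vote', 'ballot'), ('ballot', 'election'),\n                  ('vote', 'election'),\n                  ('aftershock', 'vote')])   # inter-edge\n\nfor community in greedy_modularity_communities(G):\n    print(sorted(community))   # each community ~ one candidate event\n\\end{verbatim}\n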
\n\n\\section{Discussion}\nThis section discusses different works related to event detection, categorized under the types proposed earlier in this work: scenario-based, sentiment-based, and action-based dangerous events. Each work is described along with its event type and technique. Furthermore, this section also discusses research related to event prediction. Table \\ref{detable} lists different types of events detected from social media.\n\\subsection{Detection of Different\/Dangerous Events on Social Media}\nNourbakhsh et al. \\cite{nourbakhsh2017breaking} address natural and man-made disasters on social media. They identified events from local news sources that may become global breaking news within the next 24 hours. They used Reuters News Tracer, a real-time news detection and verification engine, which uses a first story detection (FSD) algorithm to detect breaking stories in real time from Twitter. Each event is represented as a cluster of tweets engaging with that story. Considering different data features, they applied SGD and SVM classifiers to detect breaking disasters from the postings of local authorities and local news outlets.\n\nSakaki et al. \\cite{equake2010} leverage Twitter for promptly detecting earthquake occurrences. They propose a method to monitor the real-time interaction of users during events such as earthquakes and to detect a target event. Semantic analyses were deployed on tweets to classify them into positive and negative classes. The classification targets two keywords, earthquake and shaking, which are also referred to as query words. A total of 597 positive samples of tweets reporting earthquake occurrences was used as training data. They also implemented filtering methods to identify the location and built an application called the earthquake reporting system in Japan.\n\nLiu et al. \\cite{crisisbert} target crisis events. They propose a state-of-the-art attention-based deep neural network model called CrisisBERT to embed and classify crisis events. It consists of two phases: crisis detection and crisis recognition. In addition, another model for embedding tweets is also introduced. The experiments are conducted on the C6 and C36 datasets. According to the authors, these models surpass state-of-the-art performance for the detection and recognition problems by up to 8.2\\% and 25.0\\%, respectively.\n\nArchie et al. \\cite{hurrearth} proposed an unsupervised approach for the detection of sub-events in major natural disasters. Firstly, noun-verb pairs and phrases are extracted from tweets as important sub-event candidates. In the next stage, the semantic embeddings of the extracted noun-verb pairs and phrases are calculated and then ranked against a crisis-specific ontology called the Management of Crisis (MOAC) ontology. After filtering the obtained candidate sub-events, clusters are formed, and the top-ranked clusters describe the most important sub-events. The experiments are conducted on the Hurricane Harvey and 2015 Nepal Earthquake datasets. According to the authors, the approach outperforms the current state of the art in sub-event identification from social media data.\n\nForest fires have become a global phenomenon due to increasing droughts and rising temperatures, which are often attributed to global warming and climate change. The work \\cite{firehaze} tests the usefulness of social media to support disaster management; however, the primary data for dealing with such incidents come from NASA satellite imagery. The authors use GPS-stamped tweets posted during 2014 from Sumatra Island, Indonesia, which experiences many haze events. Twitter proved to be a valuable resource during such events, as confirmed by the analysis performed on the dataset. Furthermore, the authors also announced the development of a tool for disaster management.
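\n\nThe candidate extraction step of the sub-event approach above can be illustrated with a dependency parser. The sketch below is our own simplification using spaCy (it assumes the en\\_core\\_web\\_sm model is installed), collecting noun-verb pairs from a toy disaster sentence:\n\n\\begin{verbatim}\nimport spacy\n\nnlp = spacy.load('en_core_web_sm')\ndoc = nlp('Floodwater rises in Houston while rescuers '\n          'evacuate residents.')\n\n# (noun, governing verb) pairs as rough sub-event candidates.\npairs = [(tok.text, tok.head.text)\n         for tok in doc\n         if tok.pos_ == 'NOUN' and tok.head.pos_ == 'VERB']\nprint(pairs)   # e.g. [('Floodwater', 'rises'), ('rescuers',\n               #       'evacuate'), ('residents', 'evacuate')]\n\\end{verbatim}\n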
\n\nHuang et al. \\cite{emergemcyweibo} focus on emergency events. They consider various types of events under the term ``emergency events\", including infectious diseases, explosions, typhoons, hurricanes, earthquakes, floods, tsunamis, wildfires, and nuclear disasters. The model must automatically identify the 3W attribute information (What, When, and Where) of emergency events in order to respond in time. Their proposed solution contains three phases, the classification phase, the extraction phase, and the clustering phase, and is based on the Similarity-Based Emergency Event Detection (SBEED) framework. The experiment is done using the Weibo dataset. Different classification models, such as KNN, Decision Trees, Na\u00efve Bayes, Linear SVC (RBF), and Text-CNN, are used in the classification phase. Secondly, the time and location are extracted from the obtained classification. Lastly, an unsupervised dynamical text clustering algorithm is deployed to cluster events depending on the text similarity of the type, time, and location information. The authors claim the superiority of the proposed framework, which has good performance and high timeliness and can describe what emergency happened, and when and where it happened.\n \nPais et al. present an unsupervised approach to detect extreme sentiments on social networks. The online wings of radical groups use social media to study the sentiments of people engaging with uncensored content in order to recruit them. They use people who show sympathy for their cause to further promote their radical and extreme ideology. The authors developed a prototype system composed of two components, i.e., an Extreme Sentiment Generator (ESG) and an Extreme Sentiment Classifier (ESC). ESG is a statistical method used to generate a standard lexical resource called ExtremesentiLex that contains only extreme positive and negative terms. This lexicon is then embedded into ESC and tested on five different datasets. ESC finds posts with extremely negative and positive sentiments in these datasets. The results verify that the posts previously classified as negative or positive are, in fact, extremely negative or positive in most cases.\n \nThe COVID-19 pandemic has forced people to change their lifestyles, and lockdowns further pushed people to use social media to express their opinions and feelings. This provides a good source for studying the topics, emotions, and attitudes discussed by users during the pandemic. The authors of \\cite{covsent} collected two massive COVID-19 datasets from Twitter and Instagram. They explore the data from different aspects, including sentiment analysis, topic detection, emotions, and geo-temporal characteristics. Topic modelling on these datasets with distinct sentiment types (negative, neutral, positive) shows spikes in specific periods. Sentiment analysis detects spikes in specific periods and identifies which topics led to those spikes, attributed to the economy, politics, health, social issues, and tourism. The results showed that major countries were affected by COVID-19 and experienced a shift in public opinion, with much of the attention directed towards China. This study can be very beneficial for reading people's behaviour in the aftermath; Chinese people living in those countries also faced discrimination and even violence because COVID-19 was linked with China.
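\n \nAs an illustration of the kind of off-the-shelf sentiment scoring used in such studies (a generic Hugging Face example, not the pipeline of the works cited above; the first call downloads a default model):\n\n\\begin{verbatim}\nfrom transformers import pipeline\n\nclassifier = pipeline('sentiment-analysis')\ntweets = ['Lockdown again, I cannot take this anymore',\n          'Grateful for the health workers fighting the pandemic']\n\nfor tweet, out in zip(tweets, classifier(tweets)):\n    # out is a dict such as {'label': 'NEGATIVE', 'score': 0.99}\n    print(out['label'], round(out['score'], 2), '-', tweet)\n\\end{verbatim}\n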
\n \nPlaza-del-Arco et al. \\cite{hateoffense} investigate the link between hate speech and offensive language (HOF) and related concepts. Hate speech targets a person or group with a negative opinion, and it is related to sentiment analysis and emotion analysis, as it causes anger and fear in the person experiencing it. The approach consists of three phases and is based on multi-task learning (MTL). The setup is based on BERT, a transformer-based encoder pre-trained on a large English corpus. Four sequence classification heads are added to the encoder, and the model is fine-tuned for multi-class classification tasks. The sentiment classification task categorizes tweets into positive and negative categories, while emotion classification assigns tweets to different emotion categories (anger, disgust, fear, joy, sadness, surprise, enthusiasm, fun, hate, neutral, love, boredom, relief, none). The offence target is categorized as an individual, a group, or unmentioned others. The final classification detects HOF and classifies tweets into HOF and non-HOF. \n \nKong et al. \\cite{farrightextreme} explore a method that explains how extreme views creep into online posts. Qualitative analysis is applied to build an ontology using Wikibase, proceeding from a vocabulary of annotations, such as the opinions expressed in topics, and labelled data collected from three online social networking platforms (Facebook, Twitter, and YouTube). In the next stage, a dataset was created using keyword search, and the labelled dataset was then expanded using a looped machine learning algorithm. Two detailed case studies are outlined, with observations of problematic online speech evolving in an Australian far-right Facebook group. Using this quantitative approach, the authors analyze how problematic opinions emerge, exhibiting how they appear over time and how they coincide.\n \nDemszky et al. \\cite{plrztnpolitk} highlight four linguistic dimensions of political polarization in social media: topic choice, framing, affect, and illocutionary force. These features are quantified with existing lexical methods. Clustering of tweet embeddings is proposed to identify important topics for analysis in such events. The method is deployed on 4.4M tweets related to 21 mass shootings. The evidence shows that the discussions of these events are highly politically polarized, driven by partisan differences in framing rather than by topic choice. The measures in this study provide connecting evidence that creates a big picture of the complex ideological division penetrating public life. The method also surpasses LDA-based approaches for finding common topics. \n \nWhile most typical uses of social media analysis focus on disease outbreaks, protests, and elections, Khandpur et al. \\cite{cyberattack} explored social media to uncover ongoing cyber-attacks. Their unsupervised approach detects cyber-attacks such as breaches of private data, distributed denial-of-service (DDoS) attacks, and hijacking of accounts, while using only a limited, fixed set of event triggers as input.\n \nCoordinated campaigns aim to manipulate and influence users on social media platforms. The work of Pacheco et al. \\cite{coordcomp} aims to unravel such campaigns using an unsupervised approach. The method builds a coordination network relying on arbitrary behavioural traces shared between accounts. A total of five case studies are presented in the work, including the U.S. elections, the Hong Kong protests, the Syrian civil war, and cryptocurrency manipulation. Networks of coordinated Twitter accounts are discovered in all these cases by inspecting their identities, images, hashtag similarities, retweets, or temporal patterns. The authors propose using the presented approach for uncovering various types of coordinated information warfare scenarios.\n \nCoordinated campaigns can also push people towards offline violence. Xian Ng et al. \\cite{coordinariots} investigate the case of the Capitol riots.
They introduce a general methodology to discover coordinated groups by analyzing the messages of user parleys on Parler. The method creates a user-to-user coordination network graph derived from a user-to-text graph and a text-to-text similarity graph, where the text-to-text graph is built on the textual similarity of posts shared on Parler. The study of three prominent user groups in the 6 January 2021 Capitol riots detected networks of coordinated user clusters that posted similar textual content in support of different disinformation narratives connected to the U.S. 2020 elections. \n \nZhu and Bhat \\cite{drgEuphemisticPD} study the specific case of the use of euphemisms, i.e., expressions substituted for ones considered to be too harsh, by fringe groups and organizations. The work claims to address the issue of euphemistic phrase detection without human effort for the first time. Firstly, phrase mining is performed on a raw text corpus to extract quality phrases; then, word embedding similarities are used to select candidate euphemistic phrases. In the final phase, those candidates are ranked using a masked language model called SpanBERT.\n \nYang et al. \\cite{hmntrfk} explore the use of Network Structure Information (NSI) for detecting human trafficking on social media. They present a novel mathematical optimization framework that incorporates the network structure into content modelling to tackle the issue. The experimental results prove it effective for detecting information related to human trafficking.\n\n\\begin{table}[h!]\n \\begin{center}\n \\begin{tabular}{p{6cm}c}\n \\toprule\n \\textbf{Tweets} & \\textbf{Proposed dangerous event type} \\\\\n \\midrule\n ``RT @KaitMarieox: This deranged leftist and LGBT activist named Keaton Hill assaulted and threatened to kill @FJtheDeuce, a black conservati\u2026'' & Action-based dangerous event \\\\\n ``RT @Lrihendry: When Trump is elected in 2020, I'm outta here. \nIt's a hate-filled sewer. \nIt is nearly impossible to watch the hateful at\u2026\" & Sentiment-based dangerous event \\\\\n ``Scientists predict a tsunami will hit Washington, DC on 1\/18\/2020\nWe Are Marching in DC\u2026 https:\/\/t.co\/3af4ZhyV3J\" & Scenario-based dangerous event \\\\\n \\bottomrule\n \\end{tabular} \\caption{Presumed types of dangerous events for tweets.} \\label{tab:exde}\n \\end{center}\n\\end{table}\n\nThe authors present Table \\ref{tab:exde} to clarify the intent of this work by providing examples of the collected tweets and their presumed types. Based on the existing methods for event detection, it gives a clear objective for using these methods to detect dangerous events.\n\n\\subsection{Event Prediction}\n Event prediction is a complex issue that revolves around many dimensions. Various events are challenging to predict before they become apparent. For example, natural disasters are nearly impossible to predict and can only be detected after their occurrence. Some events can be predicted while they are still in the evolving phase. The authors of \\cite{nourbakhsh2017breaking} identify events from local news sources before they become breaking news globally. The case of COVID-19 can be regarded as an example, where it started locally and later became a global issue. \n \n A dataset obtained from a recent Kaggle competition is used to explore the usability of a method for predicting disasters in tweets.
The work in \\cite{twitterbertpred} tests the efficiency of BERT embeddings, an advanced contextual embedding method that constructs different vectors for the same word in different contexts. The results show that the deep learning model surpasses typical existing machine learning methods for disaster prediction from tweets. \n \n Zhou et al. \\cite{covidfatlerate} proposed a novel framework called the Social Media enhAnced pandemic suRveillance Technique (SMART) to predict COVID-19 confirmed cases and fatalities. The approach consists of two parts: firstly, heterogeneous knowledge graphs are constructed based on the extracted events; secondly, a time series prediction module is constructed for short- and long-term forecasts of the confirmed cases and the fatality rate at the state level in the United States, finally discovering risk factors for intervening in COVID-19. The approach exhibits improvements of 7.3\\% and 7.4\\% compared to other state-of-the-art methods.\n \n Most of the other existing research targets particular scenarios of event prediction with limited scope. Keeping in mind the complexity of this problem, we only present a few related works, and generalization remains an open issue. \n \n\\begin{landscape}\\centering\n\\vspace*{\\fill}\n\\begin{table}[htpb]\n\\begin{tabular}{llllll}\n\n & Event Type & Technique & Reference & Dataset & Year \\\\ \\cline{2-6} \nScenario-based & & & & & \\\\ \\hline\n & Natural Disasters & SVM\/SGD & \\cite{nourbakhsh2017breaking} & Twitter & 2017 \\\\\n & Earthquake & Classification (SVM) & \\cite{equake2010} & Twitter & 2010 \\\\\n & Crisis & CrisisBERT & \\cite{crisisbert} & Twitter (C6, C36) & 2021 \\\\\n & Earthquake \\& Hurricane & Unsupervised & \\cite{hurrearth} & Twitter & 2019 \\\\\n & Fire and Haze Disaster & Classification (hotspots) & \\cite{firehaze} & NASA \\& Twitter & 2017 \\\\ \n & Emergency & Text-CNN, Linear SVC \\& Clustering & \\cite{emergemcyweibo} & Weibo & 2021 \\\\\n & ... & ... & ... & ... & ... \\\\ \n \nSentiment-based & & & & & \\\\ \\hline\n & Extreme Sentiments & Unsupervised learning & \\cite{extremsentilax} & misc. & 2020 \\\\\n & COVID-19 Sentiments & word2vec & \\cite{covsent} & Twitter \\& Instagram & 2021 \\\\\n & Hate speech \\& offensive language & BERT & \\cite{hateoffense} & HASOC (Twitter) & 2021 \\\\ \n & Far-right Extremism & Classification & \\cite{farrightextreme} & Facebook, Twitter \\& YouTube & 2021 \\\\ \n & Political Polarization & Clustering & \\cite{plrztnpolitk} & Twitter & 2019 \\\\ \n & ... & ... & ... & ... & ... \\\\ \n \nAction-based & & & & & \\\\ \\hline\n & Cyber attack & Unsupervised & \\cite{cyberattack} & Twitter & 2017 \\\\\n & Coordinated campaigns & Unsupervised & \\cite{coordcomp} & misc. & 2021 \\\\ \n & Riots & Clustering & \\cite{coordinariots} & Parler & 2021 \\\\\n & Drug Trafficking & SpanBERT & \\cite{drgEuphemisticPD} & Text corpus (subreddit) & 2021 \\\\\n & Human Trafficking & Classification (NSI) & \\cite{hmntrfk} & Weibo & 2018 \\\\\n & ... & ... & ... & ... & ... \\\\ \n\\end{tabular}\\caption{Dangerous events categorized under the relevant types.}\\label{detable}\n\\end{table} \n\\vfill\n\\end{landscape}\n \n\\section{Conclusion}\\label{sec13}\nIn this work, we laid the basis of the term ``Dangerous Events\" and explored different existing techniques and methods for detecting events on social media. Dangerous events carry a broad meaning, but we keep the definition simple so that the term remains clear. We believe much more can be included under dangerous events, as we explored in the discussion section.
\n \n Zhou et al. \\cite{covidfatlerate} propose a novel framework called Social Media enhAnced pandemic suRveillance Technique (SMART) to predict Covid-19 confirmed cases and fatalities. The approach consists of two parts: first, heterogeneous knowledge graphs are constructed from the extracted events; second, a time-series prediction module produces short- and long-term forecasts of the confirmed cases and the fatality rate at the state level in the United States, and is finally used to discover risk factors for intervening in COVID-19. The approach exhibits improvements of 7.3\\% and 7.4\\% over other state-of-the-art methods.\n \n Most of the other existing research targets particular scenarios of event prediction with limited scope. Keeping in mind the complexity of this problem, we present only a few related works; how well they generalize remains unclear.\n \n\\begin{landscape}\\centering\n\\vspace*{\\fill}\n\\begin{table}[htpb]\n\\begin{tabular}{llllll}\n\n & Event Type & Technique & Reference & Dataset & Year \\\\ \\cline{2-6} \nScenario-based & & & & & \\\\ \\hline\n & Natural Disasters & SVM\/SGD & \\cite{nourbakhsh2017breaking} & Twitter & 2017 \\\\\n & Earthquake & Classification (SVM) & \\cite{equake2010} & Twitter & 2010 \\\\\n & Crisis & CrisisBERT & \\cite{crisisbert} & Twitter (C6, C36) & 2021 \\\\\n & Earthquake \\& Hurricane & Unsupervised & \\cite{hurrearth} & Twitter & 2019 \\\\\n & Fire and Haze Disaster & Classification (hotspots) & \\cite{firehaze} & NASA \\& Twitter & 2017 \\\\\n & Emergency & Text-CNN, Linear SVC \\& Clustering & \\cite{emergemcyweibo} & Weibo & 2021 \\\\\n & ... & ... & ... & ... & ... \\\\\nSentiment-based & & & & & \\\\ \\hline\n & Extreme Sentiments & Unsupervised learning & \\cite{extremsentilax} & misc. & 2020 \\\\\n & Covid-19 Sentiments & word2vec & \\cite{covsent} & Twitter \\& Instagram & 2021 \\\\\n & Hate speech \\& offensive language & BERT & \\cite{hateoffense} & HASOC (Twitter) & 2021 \\\\\n & Far-right Extremism & Classification & \\cite{farrightextreme} & Facebook, Twitter \\& YouTube & 2021 \\\\\n & Political Polarization & Clustering & \\cite{plrztnpolitk} & Twitter & 2019 \\\\\n & ... & ... & ... & ... & ... \\\\\nAction-based & & & & & \\\\ \\hline\n & Cyber attack & Unsupervised & \\cite{cyberattack} & Twitter & 2017 \\\\\n & Coordinated campaigns & Unsupervised & \\cite{coordcomp} & misc. & 2021 \\\\\n & Riots & Clustering & \\cite{coordinariots} & Parler & 2021 \\\\\n & Drugs Trafficking & SpanBERT & \\cite{drgEuphemisticPD} & Text corpus (subreddits) & 2021 \\\\\n & Human Trafficking & Classification (NSI) & \\cite{hmntrfk} & Weibo & 2018 \\\\\n & ... & ... & ... & ... & ... \\\\\n\\end{tabular}\\caption{Dangerous events categorized under relevant types.}\\label{detable}\n\\end{table}\n\\vfill\n\\end{landscape}\n \n\\section{Conclusion}\\label{sec13}\nIn this work, we laid the basis of the term ``Dangerous Events'' and explored different existing techniques and methods for detecting events on social media. ``Dangerous events'' carries a broad meaning, but we keep the definition essential in order to pin the term down; much more could be included under dangerous events, as we explored in the discussion section. Categorizing dangerous events into sub-categories helps specify the event and its features; the sub-categories are scenario-based, sentiment-based and action-based dangerous events. The ubiquity of social media provides a significant advantage in detecting such events early, and in some cases significant events even originate on social media before manifesting in real life, as with mass protests, communal violence and radicalization.\n\nEvents on social media are mainly polarized: people use it to express their likes or dislikes, which can also be classified as positive or negative. Not all extreme events are dangerous, but all dangerous events are extreme; people can, for instance, show happiness using extreme emotions, which is an anomaly under normal circumstances yet not a danger. There is a common element in all dangerous events: we want to avoid them because of the harm they bring. We believe there is excellent scope for related work in the future. As a proposal, we first suggest a dataset containing all types of dangerous events; secondly, different techniques can be applied to this dataset to compare their usefulness and to evolve a technique that can be generalized to all kinds of such events. Considering that existing event detection techniques cover only specific events, such a joint base can help discover a universally applicable method.\n\n\\backmatter\n\n\n\\bmhead{Acknowledgments}\n\nThis work was supported by national funding from the FCT (Funda\u00e7\u00e3o para a Ci\u00eancia e a Tecnologia) through the MOVES Project PTDC\/EEI-AUT\/28918\/2017, and by the operation Centro-01-0145-FEDER-000019-C4 (Centro de Compet\u00eancias em Cloud Computing), co-financed by the European Regional Development Fund (ERDF) through the Programa Operacional Regional do Centro (Centro 2020), in the scope of the Sistema de Apoio \u00e0 Investiga\u00e7\u00e3o Cient\u00edfica e Tecnol\u00f3gica.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\nThe zodiacal cloud is the name given to the population of small dust grains that permeate the Solar System's terrestrial planet region. The source of these grains has long been debated, with the asteroid belt, the Kuiper belt, comets and interstellar grains all having been suggested as possible sources. Recent work by \\citet{2010ApJ...713..816N} and \\citet{2013MNRAS.429.2894R} shows that the majority of the dust (70--95\\%) comes from Jupiter Family Comets.\n\nAlthough often associated with dust in the habitable zone, our zodiacal dust is distributed over a wide range of radii from the Sun. It ranges from the asteroid belt inwards, through the habitable zone, to the dust sublimation radius at $\\sim4$ Solar radii. The radial dust distribution globally follows a slowly inward-increasing power law that is locally modified by the interaction with the inner, rocky planets and by the local production of dust through comet evaporation \\citep{1998ApJ...508...44K}. In the innermost regions it forms the Fraunhofer corona \\citep[F-corona,][]{1998EP&S...50..493K}.
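Quantitatively, such a profile is often parametrized as a power law in heliocentric distance $r$,\n\\[\nn(r) \\propto r^{-\\nu},\n\\]\nwhere $n$ is the grain number density and $\\nu$ is of order unity (values around $\\nu\\approx1.3$ are commonly quoted for the zodiacal cloud; we give this only as an illustrative parametrization, not a fit performed here).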
\n\nIn analogy to the zodiacal light, warm and hot dust around other stars is called exozodiacal dust (an exozodi; its emission is the exozodiacal light). To observe this faint emission close to nearby main-sequence stars, high angular resolution and contrast are needed; thus, interferometric observations are the method of choice. In the mid-infrared, the emission of habitable zone dust can be detected using nulling interferometry \\citep{2014ApJ...797..119M}. In the near-infrared, hotter dust closer to the star can be detected using optical long-baseline interferometry \\citep[e.g.,][]{2006A&A...452..237A}. The largest statistical survey for exozodis so far has been carried out in the near-infrared, where a detection rate of $\\sim$15--20\\% has been found \\citep{2013A&A...555A.104A,2014A&A...570A.128E}.\n\nFor A stars, there is no apparent correlation between the presence of an exozodi and the age of the system, while for FGK stars there is tentative evidence of a trend for older systems to be more likely to show an exozodi \\citep{2014A&A...570A.128E}. This is particularly surprising, as mid-infrared observations show a clear decline in the presence of excess with age \\citep{2006ApJ...653..675S,2009ApJS..181..197C,2009ApJ...697.1578G}, as is expected from a population evolving through collisions \\citep{2007ApJ...658..569W,2008ApJ...673.1123L}. If, as in the Solar System, comets are also responsible for the dust in exozodis, then this contradicts what would be expected, since the reservoir of planetesimals is not replenished and the dust feeding rates should eventually diminish with time.\n\nThis has motivated several recent studies investigating the origin of these exozodis, with particular focus on how they can be present in systems with ages of the order of 100 Myr--1 Gyr. A possible explanation is that the production of comets is delayed by long timescales (several 100 Myr).\nThis can be achieved if the system is in a phase similar to that of the Solar System Late Heavy Bombardment (LHB). In the Nice model of \\citet{2005Natur.435..466G}, the Solar System was originally more compact, with Neptune orbiting inside Uranus, and suffered major changes at $\\sim 800\\,$Myr. These led Neptune to jump beyond Uranus, and also led the four giant planets to adopt more eccentric orbits, thus enhancing close encounters with planetesimals and generating a short phase of intense production of active comets. However, whilst this mechanism appears to be a good candidate to explain the presence of exozodis in some old systems where there are other reasons to believe that an LHB-like event took place, such as in the $\\eta\\,$Crv system \\citep{2009MNRAS.399..385B,2012ApJ...747...93L}, the detection rate of exozodis would be only $\\sim 0.1\\%$ if all were produced by LHB-like events \\citep{2013MNRAS.433.2938B}.\n\nAnother possibility is that the levels of dust are maintained throughout the lifetime of these systems, as suggested by \\citet{2012A&A...548A.104B}, who investigated the scattering of planetesimals from an outer cold belt to the inner parts of a system by a chain of planets. It was found that the necessary scattering rates could be obtained, although they required contrived multi-planet system architectures involving chains of tightly packed low-mass planets. The rates were found to be higher, or sustained on longer timescales, if the outermost planet of the chain happens to migrate outwards into the belt while scattering planetesimals \\citep{2014MNRAS.441.2380B,2014MNRAS.442L..18R}.
However, this requires additional conditions for migration to take place, which depend strongly on the mass of the planet and the disc surface density.\n\nIn \\citet{2015A&A...573A..87F}, a dynamical process is presented showing that inner mean-motion resonances (MMRs) with a planet on an at least moderately eccentric orbit ($\\mathrm{e_p}\\gtrsim 0.1$) are a valuable route to set planetesimals on highly eccentric, potentially cometary-like orbits, with periastrons small enough for the bodies to sublimate. This mechanism combines several strengths: in contrast with scattering of the reservoir by a chain of planets, it involves only a single planet, and the generation of cometary-like orbits through this process can be delayed by timescales as large as several 100 Myr, as for an LHB-like event. In addition, it is expected to place bodies on cometary-like orbits continuously, potentially over long timescales. Therefore, this mechanism could provide a robust explanation for the presence of exozodis, especially in systems older than 100 Myr.\n\nThe general question of the ability of this mechanism to generate a flux of active comets compatible with the presence of an exozodi is addressed in this paper, with particular focus on old systems. We present this mechanism and its analytical background in more detail in Sect.~\\ref{sec:analytical}, where we make predictions on the ability of a given MMR with a perturber of eccentricity 0.1 to scatter planetesimals on cometary-like orbits. These predictions are complemented by a numerical analysis, and we present a case study of a given MMR, namely the 5:2, in Sect.~\\ref{sec:results}. This allows us to determine the efficiency of this mechanism in a quantitative manner (rates, timescales) and as a function of the characteristics of the planet (semi-major axis, mass). We compare achievable scattering rates with observations and show applications of this framework to the cases of a low-mass disc around a Sun-like star and the Vega system in Sect.~\\ref{sec:applis}. Finally, in Sect.~\\ref{sec:conclusion}, we discuss the efficiency of other MMRs and the impact of the eccentricity of the planet, before drawing our conclusions.\n\n\n\\section{Analytical study}\\label{sec:analytical}\n\nIn this section, we present in more detail the mechanism of \\citet{2015A&A...573A..87F}, and in particular the analytical predictions that can be made on the ability of a given perturber and a given MMR to set planetesimals on cometary orbits. The question of whether this mechanism can induce sufficient feeding rates, and the determination of its characteristic timescales (delays and duration), will be addressed using a numerical analysis in the next sections.\n\n\\subsection{Orbital evolution of planetesimals in MMRs with an outer eccentric planet}\n\nMMRs between a planetesimal and a planet, usually noted $\\mathrm{n:p}$, where $\\mathrm{n}$ and $\\mathrm{p}$ are integers, concern bodies whose orbital periods achieve the $\\mathrm{p\/n}$ commensurability with that of the planet. Therefore, MMRs occur at specific locations relative to the orbit of the planetary perturber. The integer $\\mathrm{q=|n-p|}$ is called the order of the resonance.
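The nominal location of an $\\mathrm{n:p}$ resonance follows directly from Kepler's third law (we spell this out for concreteness; the notation $a_{\\mathrm{n:p}}$ is ours): a body whose orbital period is $\\mathrm{p\/n}$ times that of a planet of semi-major axis $a_{\\mathrm{p}}$ orbits at\n\\[\na_{\\mathrm{n:p}} = \\left(\\frac{\\mathrm{p}}{\\mathrm{n}}\\right)^{2\/3} a_{\\mathrm{p}},\n\\]\nso that, for instance, the 5:2 resonance studied in Sect.~\\ref{sec:results} lies at $a_{5:2}\\simeq0.54\\,a_{\\mathrm{p}}$.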
Resonances with $\\mathrm{n>p}$ correspond to \\emph{inner} resonances, that is, planetesimals orbiting inside the orbit of the planet, while those with $\\mathrm{n<p}$ correspond to \\emph{outer} resonances.\n\n\\begin{remark} For any cardinal $\\alpha>\\aleph_0$ we can obtain a $G$-module $X$ with convex base, such that the cardinality of $X$ is $\\alpha$.\\end{remark}\n\nWe now give two examples of $G$-modules with a convex base.\\\\\n \nLet $(K,\\, |\\ |)$ be a valued field, with value group $G$, a cyclic subgroup of $(\\R^+, \\cdot)$.\n\n\\begin{example}\\label{ex1} Define $X_1$ as $B_1\\times G$, with $B_1:=(0,\\, 1]\\subset\\R^+$. (Notice that $B_1$ is not a well-ordered set, since it is not isomorphic to an ordinal.)\\end{example}\n\\begin{example}\\label{ex2} Now let $X_2 := B_2\\times G$, where $B_2$ is the ordinal $\\omega_1$. (Notice that $X_2$ cannot be embedded in $\\R^+$, since $\\omega_1$ has no countable cofinal sequence.)\\end{example}\n\n\n\n\n\\section{Banach spaces over a discretely valued field with norms on a $G$-module with convex base}\n\nWe shall study NHS in the case where the field $K$ has a valuation of rank one.\\\\\n\nTo prove the following lemma we will use the ``main tool'' of \\cite{two}.\\\\\nLet $E=(E,\\,\\|\\ \\|)$ be an $X$-normed space and $x_0\\in X$. The $G$-module map $\\varphi:X\\to G^\\#$ defined by $\\varphi(x)=\\sup_{G^\\#}\\{g\\in G: gx_0\\leq x\\}$ induces a new norm on $E$ with values in $G^\\#$, and it turns out that these two norms are equivalent. The space $E$ provided with the new norm is called $E_\\varphi$.\n\n\\begin{lemma}\\label{discrete} If $E$ is a NHS over a field $K$ whose valuation has rank one, then the value group of $K$ is cyclic.\\end{lemma}\n\n\\begin{proof}\nAs the valuation has rank one, the value group $G$ may be viewed as a subgroup of $(0, \\infty)$, and then so may its Dedekind completion.\\\\\nBy \\cite{two} Theorem 2.7, if $E$ is a NHS then so is $E_\\varphi$, and by \\cite{Rooij} Theorem 5.16, if each closed subspace of $E_\\varphi$ has an orthocomplement then the valuation is discrete.\\end{proof}\n\nThis has a strong consequence.\n\n\\begin{theorem}\\label{Ban1.1} Let $G=\\langle g_0\\rangle$ be a cyclic group. Each $G$-module $X$ has a convex base. In fact, $A:=[a,\\, g_0a)$ $($as well as $(a,\\, g_0a]$$)$ is a convex base of $X$ for any $a\\in X$.\\end{theorem}\n\\begin{proof} It is proved in \\cite{base}, Lemma 4.10, that for any $a\\in X$ the set $A$ is a convex base of the submodule $GA$. Thus, we only need to show that $G[a,\\, g_0a)=X$. So, let $x\\in X$. As $\\{g_0^nx\\}_{n\\in\\Z}$ is cofinal and coinitial, there exists $m\\in\\Z$ such that $g_0^mx<a\\leq g_0^{m+1}x$. Then $g_0^{m+1}x\\in[a,\\, g_0a)$, so $x\\in G[a,\\, g_0a)$.\\end{proof}\n\n\\begin{theorem} Suppose $X$ contains a sequence decreasing to $0$, and let $e_1,\\, e_2,\\,\\ldots$ be an orthogonal sequence in $E$. Then there exist nonzero multiples $f_1,\\, f_2,\\,\\ldots$ of $e_1,\\, e_2,\\,\\ldots$ such that $\\|f_1\\|>\\|f_2\\|>\\cdots$ and $f_n\\to 0$.\\end{theorem}\n\n\\begin{proof} Let $s_1,\\, s_2,\\, \\ldots \\in X$ such that $s_1>s_2>\\cdots$ and $s_n\\to 0$. Put $f_1:=e_1$. There is an $n_1\\in\\Z$ such that $\\|g_0^{n_1}e_2\\|<\\min\\{\\|e_1\\|,\\, s_1\\}$. Put $f_2:=g_0^{n_1}e_2$, and so on. Inductively we arrive at $f_1,\\, f_2,\\, \\ldots$, multiples of $e_1,\\, e_2,\\,\\ldots$ respectively (hence orthogonal), such that $\\|f_n\\|<\\min\\{\\|f_{n-1}\\|,\\, s_{n-1}\\}$ for each $n\\geq 2$; hence $\\|f_1\\|>\\|f_2\\|>\\cdots$ and $f_n\\to 0$.\\end{proof}\n\n\\begin{theorem}\\label{Ban2.3} The following statements about $E$ are equivalent:\n\\begin{itemize}\n\\item[($\\alpha$)] $E$ is a NHS.\n\\item[($\\beta$)] If $e_1,\\, e_2,\\, \\ldots\\in E$ are orthogonal and $\\|e_1\\|>\\|e_2\\|>\\cdots$ then $e_n\\rightarrow 0$.\n\\item[($\\gamma$)] If $v_1,\\, v_2,\\, \\ldots\\in E$ and $\\|v_1\\|> \\|v_2\\|>\\cdots$ then $v_n\\rightarrow 0$.\n\\item[($\\delta$)] Each closed subspace of countable type is a NHS.\n\\item[($\\epsilon$)] Each closed hyperplane is orthocomplemented.\n\\item[($\\iota$)] $E$ has an orthogonal base and is spherically complete.\n\\end{itemize} \\end{theorem}\n\n\\begin{proof}\n\n($\\gamma$)$\\Rightarrow$($\\alpha$): It suffices to prove (\\ref{NHS3.1}) that each maximal orthogonal system $\\{e_i:i\\in I\\}$ is an orthogonal base.
So let $D:=\\overline{[e_i: i\\in I]}$; we must prove $D=E$. Suppose $v\\in E\\setminus D$; we derive a contradiction. Consider $V:=\\{\\|v-d\\|:d\\in D\\}$. If $\\|v-d_0\\|=\\min V$ then $v-d_0\\perp D$, a contradiction. So $\\min V$ does not exist, and we can therefore find $d_1,\\, d_2,\\,\\ldots\\in D$ such that $\\|v-d_1\\|>\\|v-d_2\\|>\\cdots$. By assumption $\\|v-d_n\\|\\rightarrow 0$. But then $v\\in\\overline{D}=D$, and we have our contradiction.\n\n\n($\\alpha$)$\\Rightarrow$($\\beta$): Suppose $e_1,\\, e_2,\\, \\ldots\\in E$ are orthogonal and $\\|e_1\\|>\\|e_2\\|>\\cdots >s$ for some $s\\in X$. We derive a contradiction. Consider $D:=\\overline{[e_n: n\\in\\N]}$ and $\\phi\\in D'$ given by $\\phi\\left(\\sum \\xi_ne_n\\right)=\\sum \\xi_n$ (where $\\xi_n\\rightarrow 0$).\\\\\nNow $D$ is a NHS, so $\\ker\\phi$ has an orthocomplement $Ka$ in $D$. Without loss of generality $\\phi(a)=1$. Let $a:=\\sum\\limits_{n=1}^\\infty\\lambda_ne_n$. We have $$1=|\\phi(a)|=\\left|\\sum\\limits_{n=1}^\\infty\\lambda_n\\right|\\leq\\max\\limits_n|\\lambda_n|,$$ so there exists $i\\in\\N$ with $|\\lambda_i|\\geq 1$, and hence $\\|a\\|=\\max\\limits_n\\|\\lambda_ne_n\\|\\geq \\|\\lambda_ie_i\\|\\geq \\|e_i\\|>\\|e_{i+1}\\|$.\\\\\nBut $\\phi(a-e_{i+1})=0$, so $a-e_{i+1}\\in \\ker\\phi$, so $a\\perp a-e_{i+1}$, and so $\\|e_{i+1}\\|=\\max\\{ \\|a\\|,\\,\\|a-e_{i+1}\\|\\}\\geq \\|a\\|$. Contradiction.\n\nTo complete the link we prove, by contradiction, ($\\beta$)$\\Rightarrow$($\\gamma$): Suppose we have a sequence $v_1,\\, v_2,\\,\\ldots\\in E$ with $\\|v_1\\|> \\|v_2\\|>\\cdots>s$ for some $s\\in X$.\\\\\nLet $B$ be a convex base of $X$, so that $X=\\bigcup_{n\\in\\Z} g_0^nB$. For any fixed $m\\in\\Z$ we observe that $I_m:=\\{ n\\in\\N: \\|v_n\\|\\in g_0^mB\\}$ is finite. (If $I_m$ were infinite, then $\\{\\|v_n\\|:n\\in I_m\\}$ would consist of elements that are pairwise non-equivalent mod $G$; thus the corresponding $v_n$ would be orthogonal, which is forbidden by ($\\beta$).) But now $s\\in g_0^mB$ for some $m$, and we see that $\\|v_i\\|>g_0^{m-1}B$ for all $i$. Also $\\|v_1\\|\\in g_0^rB$ for some $r$. Hence the whole sequence $\\|v_1\\|,\\, \\|v_2\\|,\\, \\ldots$ is contained in $g_0^{m}B\\cup g_0^{m+1}B\\cup\\cdots\\cup g_0^{r}B$. This implies finiteness of the sequence $v_1,\\, v_2,\\, \\ldots$, a contradiction.\n\n\nThis completes the proof of the equivalence of ($\\alpha$), ($\\beta$) and ($\\gamma$).\n\nClearly we have ($\\alpha$)$\\Rightarrow$($\\delta$). Now ($\\delta$)$\\Rightarrow$($\\gamma$) is easy, by observing that $\\overline{[v_1,\\, v_2,\\,\\ldots]}$ is of countable type. ($\\alpha$)$\\Rightarrow$($\\epsilon$) is trivial.\n\nTo prove ($\\epsilon$)$\\Rightarrow$($\\alpha$), let $(e_i)_{i\\in I}$ be a maximal orthogonal system. It suffices to prove (\\ref{NHS3.1}) that $D:= \\overline{[e_i: i\\in I]}=E$. Suppose not. Then take an $a\\in E\\setminus D$ and consider the map $f:\\lambda a+d\\mapsto \\lambda$ ($\\lambda\\in K, \\, d\\in D$), which is in $(Ka+D)'$. By the Hahn-Banach Theorem (\\cite{morado}), $f$ extends to a $g\\in E'$. Then $H:= {\\rm Ker}\\, g$ is a closed hyperplane with $a\\notin H$. By assumption there is a $z\\in E\\setminus H$ with $z\\perp H$. But then, since $D\\subseteq H$, also $z\\perp D$, which conflicts with the maximality of $\\{e_i: i\\in I\\}$.\n\nWe now prove ($\\alpha$)$\\Rightarrow$($\\iota$). Clearly $E$ has an orthogonal base. To prove spherical completeness, let $\\{B_i\\}_{i\\in I}$ be a nest of balls in $E$, where $I$ is linearly ordered and $i<j$ implies $B_i\\subseteq B_j$; let $r_i$ denote the radius of $B_i$, so that $r_i\\leq r_j$ whenever $i<j$.\n\nTo prove $\\bigcap B_i\\neq \\varnothing$ we may suppose that $I$ has no smallest element.
Then there are $i_1,\\, i_2,\\,\\ldots\\in I$, coinitial in $I$, with $r_{i_1}> r_{i_2}>\\cdots$.\\\\\nBy ($\\gamma$) we must have $r_{i_n}\\rightarrow 0$, showing that $\\bigcap B_i=\\bigcap\\limits_{n\\in\\N}B_{i_n}$ is a singleton set by ordinary completeness of $E$.\n\n\nFinally, we prove ($\\iota$)$\\Rightarrow$($\\alpha$): Let $\\{f_i\\}_{i\\in I}$ be a maximal orthogonal set in $E$. We prove that $D:=\\overline{[f_i: i\\in I]}=E$.\\\\\nNow $E$ has an orthogonal base (which has the cardinality of $I$), say $\\{e_i\\}_{i\\in I}$.\\\\\nAs $\\{e_i\\}_{i\\in I}$ is also a maximal orthogonal system, $D$ is isometrically isomorphic to $\\overline{[e_i: i\\in I]}=E$. Thus $D$ is spherically complete, so for each $v\\in E\\setminus D$ the minimum $\\min\\{\\|v-d\\|: d\\in D\\}$ is attained at some $d_0$, and then $v-d_0\\perp D$. This conflicts with the maximality, so we have $D=E$.\n\nThis completes the proof of Theorem \\ref{Ban2.3}.\n\\end{proof}\n\n\n\\begin{corollary}\\label{Ban2.4} For each index set $I$ the space $c_0(I)$ is a NHS.\\end{corollary}\n\n\\begin{proof} We have $\\|c_0(I)\\|=G\\,\\cup\\, \\{0\\}$. Clearly every strictly decreasing sequence in $G$ tends to $0$, so $c_0(I)$ satisfies ($\\gamma$) of \\ref{Ban2.3}, and we are done.\n\\end{proof}\n\nWe come back now to $c_0(I)$ and its link with isometries.\n\n\n\n\\begin{remark} As has been said before, we find in \\cite{morado} Lemma 4.3.4 that a NHS over a field with an infinite rank valuation does not contain $c_0$.\\end{remark}\n\n\n\\begin{definition} $E$ is {\\bf rigid} if every linear isometry $T: E\\to E$ is surjective.\\end{definition}\nThe subject was studied in \\cite{linear} for the case of NHS over fields with a valuation of (countable) infinite rank. For such a space $E$, rigidity holds; in fact no proper subspace can be isometrically isomorphic to the whole space, and any isometry of a closed subspace into itself can be extended to an isometry of $E$ onto itself. Clearly this sharply contrasts with the case of the classical space $c_0$.\n\n\n\nThe following theorems show that the condition ``$E$ does not contain $c_0$'' needs additional hypotheses in order to ensure that $E$ is rigid.\n\n\\begin{theorem}\\label{Ban2.7} If $E$ is a spherically complete space that does not contain $c_0$ then $E$ is rigid.\\end{theorem}\n\\begin{proof} Let $T:E\\to E$ be a linear isometry with $TE\\neq E$; we derive a contradiction. Now $TE$ is spherically complete, so it has an orthocomplement in $E$; in particular, there exists a nonzero $v\\in E$ with $v\\perp TE$. Inductively we find that $\\{v,\\, Tv,\\, T^2v,\\ldots \\}$ is orthogonal and that $\\|T^nv\\|=\\|v\\|$ for each $n$, so $E$ contains $c_0$, a contradiction.\n\n\\end{proof}\nWe obtain\n\\begin{corollary}\\label{Ban2.8} $E$ is a NHS and does not contain $c_0$ if and only if $E$ is rigid and has an orthogonal base.\\end{corollary}\n\nNot surprisingly, rigidity is a condition that forbids $E$ to contain $c_0$.\n\n\\begin{theorem}\\label{Ban2.9} A rigid space does not contain $c_0$.\\end{theorem}\n\\begin{proof} Suppose $E$ is rigid and contains $c_0$; that is, there are $a_1,\\, a_2,\\, \\ldots$, orthogonal, with $\\|a_n\\|=s$ for all $n$. Then $\\overline{[a_1,\\, a_2,\\, \\ldots]}$ is a NHS; it is spherically complete, so it has an orthocomplement $D$.\\\\\nThen define $T$ on $E=\\overline{[a_1,\\, a_2,\\, \\ldots]}\\oplus D$ by $Ta_n=a_{n+1}$ and $Td=d$ for all $d\\in D$. This gives us a nonsurjective linear isometry, a contradiction.\n\\end{proof}
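The construction in this last proof is simply a shift operator. To make the contrast with the classical situation explicit (the example is ours, but standard): in $c_0$ itself the right shift\n\\[\nT(\\xi_1,\\,\\xi_2,\\,\\xi_3,\\ldots)=(0,\\,\\xi_1,\\,\\xi_2,\\ldots)\n\\]\nis a linear isometry of $c_0$ onto the proper closed subspace $\\{x\\in c_0 : x_1=0\\}$; hence $c_0$ is not rigid.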
\n\n\\begin{theorem}\\label{Ban2.10} Let $E$ have an orthogonal base $\\{e_1,\\, e_2,\\, \\ldots\\}$ for which $n\\neq m$ implies $\\|e_n\\|\\notin G\\|e_m\\|$. Then $E$ does not contain $c_0$.\\end{theorem}\n\\begin{proof} Suppose we have an orthogonal set $\\{a_1,\\, a_2,\\,\\ldots\\}$ and $s\\in X$ such that $\\|a_n\\|=s$ for all $n$; we derive a contradiction. For each $n\\in\\N$ we have an expansion $$a_n=\\sum_{j=1}^\\infty \\lambda_j^ne_j\\hspace{1cm} (\\lambda_j^n\\in K).$$\nThere is a unique $j_0$ with $s\\in G\\|e_{j_0}\\|$. Thus $$s=\\|a_n\\|=\\max\\limits_j\\|\\lambda_j^ne_j\\|=\\|\\lambda_{j_0}^ne_{j_0}\\|.$$ Clearly, for each $n$, $\\|a_n-\\lambda_{j_0}^ne_{j_0}\\|<\\|a_n\\|$. Therefore, by the Perturbation Lemma, the sequence $n\\mapsto \\lambda_{j_0}^ne_{j_0}$ must be orthogonal, an absurdity, as all its terms are multiples of the single vector $e_{j_0}$.\\end{proof}\n\n\n\n \n\\begin{theorem}\\label{Ban2.16} Let $E$ have an orthogonal base and suppose there exists a sequence $\\{v_n\\}$ in $E$ with $\\|v_1\\|>\\|v_2\\|>\\cdots$ and $\\|v_n\\|\\nrightarrow 0$. Then $E$ is not rigid.\n\\end{theorem}\n\\begin{proof} By Theorem \\ref{Ban2.3}, $E$ is not a NHS, so by Corollary \\ref{Ban2.8} $E$ cannot be rigid.\n\n\\end{proof}\n\nNow it is easy to present an example of a space that does not contain $c_0$ but is not rigid. Let $B=\\{ b_1,\\, b_2,\\,\\ldots\\}\\subseteq\\R^+$ be a denumerable chain with $b_1>b_2>\\cdots$. Let $X:= B\\times G$ ($G$ a cyclic group); then $X$ has a convex base $B\\times\\{1_G\\}$. Let $E$ have an orthogonal base $\\{ e_1,\\, e_2,\\, \\ldots\\}$ with $\\|e_n\\|=(b_n, \\, 1_G)$ for each $n$. Clearly, by \\ref{Ban2.10}, $E$ does not contain $c_0$, and by \\ref{Ban2.16}, $E$ is not rigid.\\\\\n\nThe next section contains the surprising main result of this paper.\n\n\n\n\\section{A new characterization of NHS}\nWe recall the standard definition: a linear ordering $\\leq$ of a set $S$ is a well-ordering if every non-empty subset of $S$ has a least element.\n\\begin{theorem}\\label{Ban2.12} Let $K$ be a valued field and $G=\\langle g_0\\rangle$ its cyclic value group. Let $E$ be a $K$-Banach space and let $X:=\\|E\\setminus\\{0\\}\\|$, the set of norms, be a $G$-module with convex base $B$.\\\\\nThen $E$ is a NHS if and only if $B$ is well-ordered.\\end{theorem}\n\n\n\\begin{proof} Let $E$ be a NHS, and suppose $b_1>b_2>\\cdots$ where $b_n\\in B$; we derive a contradiction. There are $v_1,\\, v_2,\\,\\ldots\\in E$ with $\\|v_n\\|=b_n$ for each $n$, so $\\|v_1\\|>\\|v_2\\|>\\cdots$. But then, by \\ref{Ban2.3}, $\\|v_n\\|\\to 0$, an impossibility, since $b_n>g_0^{-1}B$ for all $n$.\n\nConversely, let $B$ be well-ordered, and let $v_1,\\, v_2,\\, \\ldots \\in E$ with $\\|v_1\\|>\\|v_2\\|>\\cdots$. We have $X=\\bigcup\\limits_{n\\in\\Z}g_0^nB$, where $$\\cdots< g_0^{-1}B<B<g_0B<\\cdots$$