\\section{Introduction}\nA three-layer actively constrained layer (ACL) beam is a composite consisting of a stiff elastic beam, a piezoelectric beam, and a viscoelastic beam which creates passive damping in the structure. The piezoelectric beam is itself an elastic beam with electrodes at its top and bottom surfaces, insulated at the edges (to prevent fringing effects), and connected to an external electric circuit (see Fig. \\ref{ACL}). Piezoelectric structures are widely used in civil, aeronautic, and space structures due to their small size and high power density. They convert mechanical energy to electric energy, and vice versa. ACL composites involve a piezoelectric layer and therefore inherit its benefits. Modeling these composites requires a good understanding of the piezoelectric layer since the composite is generally actuated through that layer. \n\nThere are mainly three ways to electrically actuate piezoelectric materials: by voltage, current, or charge. Piezoelectric materials have traditionally been actuated by a voltage source; see \\cite{Baz,Cao-Chen,Rogacheva,Smith,Stanway}, and the references therein. However, it has been observed that the type of actuation changes the controllability characteristics of the host structure \\cite{O-M3}.\nCharge or current controlled piezoelectric beams show 85\\% less hysteresis (electrical nonlinearity) than the voltage-actuated ones; see \\cite{Cao-Chen,Review,Main1,Main2,M-F}, and the references therein. 
In the case of voltage and charge actuation, the underlying control operator is unbounded in the energy space, whereas in the case of current actuation (including magnetic effects) the control operator is bounded \\cite{O-M3}.\n\n\nAccurately modeling an ACL beam also requires an understanding of how sandwich structures are modeled and how the interaction of the elastic layers is established. A three-layer sandwich beam consists of stiff outer layers and a viscoelastic core layer. The core layer is assumed to deform only in transverse shear, and the bending is uniform for the whole composite. Many sandwich beam models have been proposed in the literature, i.e., see \\cite{Ditaranto,Mead,Rao,Sun,Trindade,Yan}, and the references therein. These models differ mainly in the assumptions on the viscoelastic layer. For example, the Mead-Marcus type models disregard the effects of longitudinal and rotational inertias \\cite{Mead}, whereas the Rao-Nakra type models retain these effects since they are expected to be of considerable importance, especially at the high frequency modes of sandwich beams \\cite{Rao}.\n\n\n The models for the ACL beams proposed in the literature mostly use the sandwich beam assumptions for the interactions of the layers, i.e. \\cite{Stanway,Trindade}, and references therein. The vast majority of these models are actuated by a voltage source \\cite{Baz,F}. Moreover, longitudinal vibrations are not taken into account; only the bending of the composite is studied.\n\n In this paper, we use the Rao-Nakra thin-compliant layer approximation \\cite{Hansen3} for which the longitudinal and rotational inertia terms are kept. We obtain two different models, one for the charge actuated ACL beam and another one for the current actuated ACL beam. 
We assume the electrostatic approach together with a quadratic through-thickness electric potential so that the induced potential effect is taken into account.\n We show that the proposed model with charge actuation can be written in the semigroup (state-space) formulation, and it is well-posed in the natural energy space. The main advantage of our models is that controllability and stabilizability problems for ACL beams can be studied in the infinite dimensional setting. We show that the charge actuated model can be uniformly exponentially stabilized by choosing appropriate feedback controllers, which in this paper are all mechanical and collocated due to the electrostatic assumption. This type of feedback controller avoids the well-known \\emph{spillover} effect. Our stability result relies on a compact perturbation argument and a unique continuation result for the spectral problem obtained by the multiplier technique. A similar argument was used earlier in \\cite{O-Hansen3,O-Hansen4} for different boundary conditions.\n \n\n \\begin{figure}\n\\begin{center}\n\\includegraphics[height=5.5cm]{ACL-curr-char.png} \n\\caption{For a current or charge-actuated ACL beam, when $i_s(t)$ or $\\sigma_s(t)$ is supplied to the electrodes of the piezoelectric layer, an electric field is created between the electrodes, and therefore the piezoelectric beam either shrinks or extends, causing the whole composite to stretch and bend.} \n\\label{ACL} \n\\end{center} \n\\end{figure}\n\n\n\\section{Modeling Active-Constrained Layer\n (ACL) beams} \\label{modeling}\n\\noindent \\textbf{I- Charge or current-actuation:}\n\n\nThe ACL beam is a composite consisting of three layers that occupy the\nregion $\\Omega=\\Omega_{xy}\\times (0, h)=[0,L]\\times [-b,b] \\times (0,h)$ at equilibrium where $\\Omega_{xy}$ is a smooth bounded domain in the plane.\nThe total thickness $h$ is assumed to be small in comparison to the dimensions of $\\Omega_{xy}$.\nThe beam consists of a 
stiff layer, a compliant layer, and a piezoelectric layer, see Figure \\ref{ACL}.\n\nLet $0=z_0<z_1<z_2<z_3=h$ denote the transverse coordinates of the layer interfaces, and let the feedback gains satisfy $s_1, s_3, k_1, k_2>0.$\n\nThe well-posedness of the model (\\ref{d4-non})-(\\ref{d-son-non}) with (\\ref{feedback}) is shown in \\cite{Ozkan3}. For that reason, we only analyze the well-posedness of (\\ref{reduced-nog})-(\\ref{reduced-BC-nog}) with (\\ref{feedback}).\n\n\nDefine the complex linear spaces\n\\begin{eqnarray}\n\\nonumber \\mathrm V&=&\\left(H^1_L(0,L)\\right)^2 \\times H^2_L(0,L),\\quad \\mathrm H= {\\mathbb X}^2 \\times H^1_L(0,L),\\quad \\mathcal{H} = \\mathrm V \\times \\mathrm H.\n\\end{eqnarray}\n\nThe natural energy associated with (\\ref{d4-non})-(\\ref{d-son-non}) is\n\\begin{eqnarray}\n\\nonumber &&\\mathrm{E}(t) =\\frac{1}{2}\\int_0^L \\left\\{\\rho_1 h_1 |\\dot v^1|^2 + \\rho_3 h_3 |\\dot v^3|^2 + m |\\dot w|^2 + \\alpha^1 h_1 |v^1_x|^2 + \\alpha^3 h_3 |v^3_x|^2 \\right. \\\\\n\\label{Energy-nat-non} &&\\left. + \\frac{{\\gamma^2 h_3}}{{{\\varepsilon_{3}}}}~(P_{\\xi} v^3_x) (\\bar v^3)_x + K_1 |\\dot w_x|^2 + K_2 |w_{xx}|^2 +G_2 h_2 |\\phi^2|^2 \\right\\}~ dx.\n\\end{eqnarray}\n This motivates the definition of the inner product on $\\mathcal H$\n{ \\small{\\begin{eqnarray}\n\\nonumber && \\left<\\left[ \\begin{array}{l}\n u_1 \\\\\n u_2 \\\\\n u_3\\\\\n u_4 \\\\\n u_5 \\\\\n u_6\n \\end{array} \\right], \\left[ \\begin{array}{l}\n v_1 \\\\\n v_2 \\\\\n v_3\\\\\n v_4\\\\\n v_5\\\\\n v_6\n \\end{array} \\right]\\right>_{\\mathcal H}= \\left<\\left[ \\begin{array}{l}\n u_4\\\\\n u_5\\\\\n u_6\n \\end{array} \\right], \\left[ \\begin{array}{l}\n v_4\\\\\n v_5\\\\\n v_6\n \\end{array} \\right]\\right>_{\\mathrm H}+ \\left<\\left[ \\begin{array}{l}\n u_1 \\\\\n u_2 \\\\\n u_3\n \\end{array} \\right], \\left[ \\begin{array}{l}\n v_1 \\\\\n v_2\\\\\n v_3\n \\end{array} \\right]\\right>_{\\mathrm V}\\\\\n\\nonumber && =\\int_0^L \\left\\{\\rho_1 h_1 u_4 {\\bar v}_4 + \\rho_3 h_3 u_5 {\\bar v}_5 + m u_6 {\\bar v}_6 + K_1 (u_6)_x (\\bar v_6)_x+ \\alpha^1 h_1 (u_1)_{x} (\\bar v_1)_x \\right. \\\\\n\\nonumber &&\\left. + \\alpha^3 h_3 (u_2)_{x} (\\bar v_2)_x+ \\frac{{\\gamma^2 h_3}}{{{\\varepsilon_{3}}}}~(P_{\\xi} (u_2)_x) (\\bar v_2)_x + K_2 (u_3)_{xx} (\\bar v_3)_{xx} +\\frac{G_2}{h_2} (-u_1+u_2 + H(u_3)_x)(-\\bar v_1+\\bar v_2 + H(\\bar v_3)_x)\\right\\}~dx.\n \\end{eqnarray}}}\n Since $P_\\xi$ is a positive operator and the term $\\|-u_1+u_2 + H(u_3)_x\\|_{L^2(0,L)}$ is coercive (see \\cite{Hansen3} for the details), $\\langle \\, , \\, \\rangle_{\\mathcal H} $ does indeed define an inner product.\n\n Let $\\vec y=(v^1, v^3, w)$ be the smooth solution of the system (\\ref{d4-non})-(\\ref{d-son-non}) with (\\ref{feedback}). Multiplying the equations in (\\ref{d4-non}) by $\\tilde y_1, \\tilde y_2 \\in H^1_L(0,L)$ and $ \\tilde y_3 \\in H^2_L(0,L),$ respectively, and integrating by parts yields\n\\begin{subequations}\n\\small\n \\begin{empheq}[left={\\phantomword[r]{0}{} }]{align}\n\\nonumber &\\int_0^L \\left(\\rho_1h_1\\ddot v^1 \\tilde y_1 + \\alpha^1 h_1 v^1_{x} (\\tilde y_1)_x - G_2 \\phi^2 \\tilde y_1\\right)dx + s_1\\dot v^1(L) \\tilde y_1(L)= 0, & \\\\\n \\nonumber &\\int_0^L \\left(\\rho_3h_3\\ddot v^3 \\tilde y_2 +\\alpha^3 h_3 v^3_{x} (\\tilde y_2)_x + \\frac{\\gamma^2 h_3}{{\\varepsilon_{3}}} (P_\\xi v^3_x) (\\tilde y_2)_x+G_2 \\phi^2 \\tilde y_2 \\right)dx + \\frac{\\gamma s_3}{{\\varepsilon_{3}}} \\dot v^3(L) \\tilde y_2(L) = 0, & \\\\\n \\nonumber & \\int_0^L \\left( m \\ddot w \\tilde y_3 + K_1 \\ddot{w}_{x} (\\tilde y_3)_x + K_2 w_{xx} (\\tilde y_3)_{xx}- G_2 H \\phi^2_x \\tilde y_3 \\right)dx + k_1 \\dot w_x(L) (\\tilde y_3)_x(L)+ k_2\\dot w(L) \\tilde y_3(L) =0.&\n\\end{empheq}\n\\end{subequations}\nNow define the linear operators\n\\begin{eqnarray}\n\\nonumber \\left<A y, \\psi\\right>_{\\mathrm V'\\times \\mathrm V}=(y,\\psi)_{\\mathrm V \\times \\mathrm V}, \\quad \\forall y,\\psi\\in \\mathrm V\\\\\n\\nonumber \\left<B_0 \\vec y, \\vec \\psi\\right>_{\\mathrm H'\\times \\mathrm H}= \\left[ 
\\begin{array}{c}\n 0_{2\\times 1}\\\\\n k_2y_3(L) \\bar\\psi_3(L)\n \\end{array} \\right], \\quad \\forall \\vec y,\\vec \\psi\\in \\mathrm H\\\\\n\\label{ops}\\left<D_0 \\vec y, \\vec \\psi\\right>_{\\mathrm H'\\times \\mathrm H}= \\left[ \\begin{array}{c}\n s_1y_1(L) \\bar\\psi_1(L) \\\\\n \\frac{\\gamma s_3}{{\\varepsilon_{3}}} y_2(L) \\bar\\psi_2(L) \\\\\n k_1(y_3)_x(L) (\\bar\\psi_3)_x(L)\n \\end{array} \\right], \\quad \\forall \\vec y,\\vec \\psi\\in \\mathrm V.\n\\end{eqnarray}\nLet $\\mathcal M : H^1_L (0,L) \\to (H^1_L(0,L))'$ be the linear operator defined by\n\\begin{eqnarray}\\label{def-M}\\left< \\mathcal M \\psi, \\tilde \\psi \\right>_{(H^1_L(0,L))', H^1_L(0,L)}= \\int_0^L (m \\psi \\tilde {\\bar \\psi} + K_1 \\psi_x \\tilde {\\bar\\psi}_x) dx.\n\\end{eqnarray}\nBy the Lax-Milgram theorem, $\\mathcal M$ and $A$ are canonical isomorphisms from $H^1_L(0,L)$ onto $(H^1_L(0,L))'$ and from $V$ onto $V',$ respectively.\nAssuming that $A y \\in V',$ we can write the variational equations above in the form\n\\begin{eqnarray} M\\ddot y + A y + D_0\\dot y + B_0\\dot y=0\n\\end{eqnarray}\nwhere $M={\\rm diag}\\left(\\rho_1 h_1 I, ~\\rho_3 h_3 I, ~\\mathcal M\\right)$ is an isomorphism from $\\mathrm H$ onto $\\mathrm H'.$\nNext we introduce the linear unbounded operator\n\\begin{equation}\n\\label{defA}\\mathcal A: {\\text{Dom}}(\\mathcal A)\\subset \\mathcal H \\to \\mathcal H\n\\end{equation}\nwhere $\\mathcal A= \\left[ {\\begin{array}{*{20}c}\n O_{3\\times 3} & -I_{3\\times 3} \\\\\n M^{-1}A & M^{-1} D_0 \\\\\n\\end{array}} \\right]$\nwith $\\nonumber{\\rm {Dom}}(\\mathcal A) = \\{(\\vec z, \\vec {\\tilde z}) \\in V\\times V, ~A\\vec z + D_0\\vec{\\tilde z} \\in \\mathrm H' \\}$\nand if ${\\rm Dom}(\\mathcal A)'$ is the dual of ${\\rm Dom}(\\mathcal A)$ pivoted with respect to $\\mathcal H,$ we define the operator $B$\n \\begin{eqnarray}\n\\label{defb_0} \\quad B \\in \\mathcal{L}(\\mathcal H), ~ \\text{with} ~ B= \\left[ \\begin{array}{c} 0_{3\\times 1} \\\\ M^{-1}B_0 
\\end{array} \\right]\n \\end{eqnarray}\n\nWriting $\\varphi=[v^1, v^3, w, \\dot v^1, \\dot v^3, \\dot w]^{\\rm T},$ the control system (\\ref{d4-non})-(\\ref{d-son-non}) with the feedback controllers (\\ref{feedback}) can be put into the state-space form\n\\begin{eqnarray}\n\\label{Semigroup}\n\\dot \\varphi + \\mathcal A \\varphi + B\\varphi =0, \\quad\\varphi(x,0) = \\varphi ^0.\n\\end{eqnarray}\n\n \\begin{lem} \\label{skew}The operator $\\mathcal{A}$ defined by (\\ref{defA}) is maximal monotone in the energy space $\\mathcal H,$\n and ${\\rm Range}(I+\\mathcal A)=\\mathcal H.$\n\\end{lem}\n\n\\textbf{Proof:} Let $\\vec z \\in {\\rm Dom}(\\mathcal A).$ A simple calculation using integration by parts and the boundary conditions yields\n{\\small{\\begin{eqnarray}\n\\nonumber &&\\left<\\mathcal A \\left[ \\begin{array}{c}\n\\vec z_1 \\\\\n\\vec z_2 \\end{array} \\right], \\left[ \\begin{array}{c}\n\\vec z_1 \\\\\n\\vec z_2 \\end{array} \\right]\\right>_{\\mathcal H\\times \\mathcal H} = \\left< \\left[ \\begin{array}{c}\n-\\vec z_2 \\\\\nM^{-1} (A \\vec z_1 + D_0 \\vec z_2) \\end{array} \\right], \\left[ \\begin{array}{c}\n\\vec z_1 \\\\\n\\vec z_2 \\end{array} \\right]\\right>_{\\mathcal H}= \\left<-\\vec z_2, \\vec z_1\\right>_{V\\times V} + \\left<M^{-1} (A \\vec z_1 + D_0 \\vec z_2), \\vec z_2\\right>_{H\\times H}\\\\\n\\nonumber &&\\quad\\quad = -\\overline{\\left<A \\vec z_1, \\vec z_2\\right>_{V'\\times V}} + \\left<A \\vec z_1 + D_0 \\vec z_2, \\vec z_2\\right>_{H'\\times H}.\n\\end{eqnarray}\n}}\nSince $\\vec z=\\left[ \\begin{array}{c}\n\\vec z_1 \\\\\n\\vec z_2 \\end{array} \\right] \\in {\\rm Dom}(\\mathcal A) $, then $A\\vec z _1+ D_0\\vec z_2\\in \\mathrm H'$ and $\\vec z_2 \\in V$ so that\n\\begin{eqnarray*}&&\\left<A\\vec z_1 + D_0\\vec z_2, \\vec z_2\\right>_{H'\\times H}=\\left<A\\vec z_1 + D_0\\vec z_2, \\vec z_2\\right>_{V'\\times V} =\\left<A\\vec z_1, \\vec z_2\\right>_{V'\\times V}+\\left< D_0\\vec z_2, \\vec z_2\\right>_{V'\\times V}.\n\\end{eqnarray*}\nHence\n${\\rm Re}\\left<\\mathcal A \\vec z, \\vec z\\right>_{\\mathcal H\\times \\mathcal H}=\\left< D_0\\vec z_2, \\vec z_2\\right>_{V'\\times V}\\ge 0.$\nWe next verify the range condition. 
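Before completing the proof, the dissipativity just computed can be illustrated numerically. The sketch below (illustrative only, not part of the proof) semi-discretizes one decoupled wave component $\rho u_{tt}=\alpha u_{xx}$ with a clamped left end and the damped end condition $\alpha u_x(L)=-s\,\dot u(L)$, mirroring the boundary feedback that produces $\left<D_0 \vec z_2,\vec z_2\right>\ge 0$; all parameter values and the grid are assumptions made for the demonstration.

```python
import numpy as np

# Illustrative finite-difference check: boundary damping makes the discrete
# energy nonincreasing, the numerical counterpart of Re<A z, z> >= 0.
rho, alpha, s, L = 1.0, 1.0, 1.0, 1.0   # s = sqrt(rho*alpha): matched damper
N = 200
dx = L / N
x = np.linspace(0.0, L, N + 1)

def accel(u, v):
    """u_tt from the FD Laplacian; ghost point enforces alpha*u_x(L) = -s*u_t(L)."""
    a = np.zeros_like(u)
    a[1:-1] = (alpha / rho) * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    a[-1] = (alpha / rho) * 2.0 * (u[-2] - u[-1]) / dx**2 - (2.0 * s / (rho * dx)) * v[-1]
    return a  # a[0] stays 0: clamped end u(0) = 0

def energy(u, v):
    """Discrete analogue of E = (1/2) * int (rho*|u_t|^2 + alpha*|u_x|^2) dx."""
    kinetic = 0.5 * rho * dx * (np.sum(v[1:-1]**2) + 0.5 * v[-1]**2)
    potential = 0.5 * alpha * dx * np.sum(((u[1:] - u[:-1]) / dx)**2)
    return kinetic + potential

u = np.exp(-100.0 * (x - 0.5)**2)   # smooth bump, zero initial velocity
u[0] = 0.0
v = np.zeros_like(u)
dt = 0.5 * dx * np.sqrt(rho / alpha)
energies = [energy(u, v)]
for _ in range(int(4.0 / dt)):
    # one classical RK4 step for the first-order system (u, v)' = (v, accel(u, v))
    k1u, k1v = v, accel(u, v)
    k2u, k2v = v + 0.5 * dt * k1v, accel(u + 0.5 * dt * k1u, v + 0.5 * dt * k1v)
    k3u, k3v = v + 0.5 * dt * k2v, accel(u + 0.5 * dt * k2u, v + 0.5 * dt * k2v)
    k4u, k4v = v + dt * k3v, accel(u + dt * k3u, v + dt * k3v)
    u = u + (dt / 6.0) * (k1u + 2.0 * k2u + 2.0 * k3u + k4u)
    v = v + (dt / 6.0) * (k1v + 2.0 * k2v + 2.0 * k3v + k4v)
    energies.append(energy(u, v))
energies = np.array(energies)
```

For this semi-discretization one can check directly that $\frac{d}{dt}E = -s|\dot u(L)|^2\le 0$, so the computed energy sequence decays, just as the feedback terms in $D_0$ dictate for the continuous model.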
Let $\\vec z=\\left[ \\begin{array}{c}\n\\vec z_1 \\\\\n\\vec z_2 \\end{array} \\right]\\in \\mathcal H.$ We prove that there exists a $\\vec y=\\left[ \\begin{array}{c}\n\\vec y_1 \\\\\n\\vec y_2 \\end{array} \\right]\\in {\\rm Dom} (\\mathcal A )$ such that $(I+\\mathcal A) \\vec y=\\vec z.$\nA simple computation shows that proving this is equivalent to proving ${\\rm Range} (M+A + D_0)=H',$ i.e., for every $\\vec f \\in H'$ there exists a unique solution $\\vec z \\in H$ such that $(M+A+D_0)\\vec z=\\vec f.$\nThis follows from the Lax-Milgram theorem. $\\square$\n\n\\begin{prop} The operator $B$ is a monotone compact operator on $\\mathrm H.$\n\\end{prop}\n\n\\textbf{Proof:} Let $\\left[ \\begin{array}{c}\n\\vec y\\\\\n\\vec z \\end{array} \\right]\\in \\mathcal H.$ Then $\\left<B\\left[ \\begin{array}{c}\n\\vec y\\\\\n\\vec z \\end{array} \\right], \\left[ \\begin{array}{c}\n\\vec y\\\\\n\\vec z \\end{array} \\right]\\right>_{\\mathcal H}= k_2 |z_3(L)|^2\\ge 0.$\nThe compactness follows from the fact that $M^{-1}$ is a canonical isomorphism from $\\mathrm H'$ to $\\mathrm H,$ and the fact that $B_0$ is a rank-one operator, hence compact from $\\mathrm H$ to $\\mathrm H'.$ $\\square$\n\n\\subsection{Description of ${\\rm Dom}(\\mathcal A)$}\n\n\n\\begin{prop}\\label{prop-dom}Let $\\vec u=(\\vec y, \\vec z)^{\\rm T}\\in \\mathcal H.$ Then $\\vec u \\in {\\rm Dom}(\\mathcal A)$ if and only if the following conditions hold:\n\\begin{eqnarray}\n\\nonumber && \\vec y\\in (H^2(0,L)\\cap H^1_L(0,L))^2 \\times (H^3(0,L) \\cap H^2_L(0,L)),\\quad \\vec z\\in V ~{\\rm such ~ that}\\\\\n\\nonumber && \\left\\{\\alpha^1 h_1 (y_1)_{x}+s_1 z_1\\right\\}_{x=L}=\\left\\{\\alpha^3 h_3 (y_2)_{x}+\\frac{\\gamma^2 h_3}{{\\varepsilon_{3}}} P_\\xi (y_2)_x+ \\frac{\\gamma s_3}{{\\varepsilon_{3}}} z_2\\right\\}_{x=L}=\\left\\{K_2 (y_3)_{xx}+k_1 (z_3)_x\\right\\}_{x=L}=0.\n\\end{eqnarray}\nMoreover, the resolvent of $\\mathcal A$ is compact in the energy space $\\mathcal H.$\n\\end{prop}\n\\vspace{0.1in}\n\n\\textbf{Proof:} Let $\\vec {\\tilde u}= \\left( \\begin{array}{c}\n\\vec {\\tilde y} \\\\\n\\vec {\\tilde z}\\\\\n \\end{array} \\right)\\in \\mathcal H$ and $\\vec {u}= \\left( \\begin{array}{c}\n\\vec {y} \\\\\n\\vec {z}\\\\\n \\end{array} \\right)\\in {\\rm Dom}(\\mathcal A)$ such that $\\mathcal A \\vec u=\\vec {\\tilde u}.$ Then we have\n $$ -\\vec z= \\vec {\\tilde y} \\in V, \\quad A\\vec y + D_0 \\vec z=M\\vec{\\tilde z},$$ and therefore,\n\\begin{eqnarray}\\label{sal}\\left< \\vec y, \\vec \\varphi\\right>_{V}+\\left< D_0 \\vec z, \\vec \\varphi\\right>_{\\mathrm H'\\times \\mathrm H}= \\left< \\vec {\\tilde z}, \\vec \\varphi \\right>_{H} ~~{\\rm for ~~all ~~} \\vec \\varphi \\in V.\n\\end{eqnarray}\nLet $\\vec \\psi=[\\psi_1,\\psi_2,\\psi_3]^{\\rm T} \\in (C_0^\\infty(0,L))^3.$ We define $\\varphi_i=\\psi_i$ for $i=1,2,$ and $\\varphi_3=\\int_0^x \\psi_3 (s)ds.$ Since $\\vec \\varphi\\in V,$ inserting $\\vec \\varphi$ into the above equation yields\n\\begin{eqnarray} \\nonumber &&\\int_0^L \\left\\{ -\\alpha^1 h_1 (y_1)_{xx} \\bar \\psi_1 - \\left(\\alpha^3 h_3 (y_2)_{xx}+\\frac{\\gamma^2 h_3}{{\\varepsilon_{3}}} (P_\\xi (y_2)_x)_x\\right) \\bar \\psi_2- K_2 (y_3)_{xxx} \\bar \\psi_3 \\right.\\\\\n\\nonumber && \\left.+ \\frac{G_2}{h_2} (-y_1+y_2 + H(y_3)_x)(-\\bar \\psi_1+\\bar \\psi_2 + H\\bar \\psi_3)\\right\\}~dx\\\\\n \\nonumber &&= \\int_0^L \\left\\{ \\left(\\int_x^L m \\tilde z_3\\, ds + K_1 (\\tilde z_3)_x\\right) \\bar \\psi_3 +\\rho_1 h_1 \\tilde z_1 \\bar \\psi_1+ \\rho_3 h_3 \\tilde z_2 \\bar \\psi_2 \\right\\}~dx\\\\\n \n \\end{eqnarray}\nfor all $\\vec \\psi\\in (C_0^\\infty(0,L))^3.$ Therefore it follows that $\\vec y\\in (H^2(0,L)\\cap H^1_L(0,L))^2 \\times (H^3(0,L) \\cap H^2_L(0,L)).$\n\nNext let $\\vec \\psi \\in \\mathrm 
H.$ We define\n\\begin{eqnarray} \\label{dumber}\\varphi_i=\\int_0^x \\psi_i(s)ds,\\quad i=1,\\ldots,3.\n\\end{eqnarray}\nObviously $\\vec \\varphi \\in \\rm V. $ Then plugging (\\ref{dumber}) into (\\ref{sal}) yields\n\\begin{eqnarray}&& \\nonumber 0=(\\alpha^1 h_1 (y_1)_{x}(L) + s_1 z_1(L)) \\bar \\psi_1(L)+\\left(\\alpha^3 h_3 (y_2)_{x}(L)+ \\frac{\\gamma^2 h_3}{{\\varepsilon_{3}}} (P_\\xi (y_2)_x)(L) \\right. \\\\\n\\nonumber &&\\left.+ \\frac{\\gamma s_3}{{\\varepsilon_{3}}} z_2(L)\\right) \\bar \\psi_2(L)+ (K_2 (y_3)_{xx}(L) + k_1 (z_3)_x(L)) \\bar \\psi_3(L)\n\\end{eqnarray}\nfor all $\\vec\\psi\\in \\mathrm H.$ Hence,\n\\begin{eqnarray}\\nonumber &&\\alpha^1 h_1 (y_1)_{x}(L) + s_1 z_1(L)=\\alpha^3 h_3 (y_2)_{x}(L) + \\frac{\\gamma^2 h_3}{{\\varepsilon_{3}}} (P_\\xi (y_2)_x)(L)+ \\frac{\\gamma s_3}{{\\varepsilon_{3}}} z_2(L)= K_2 (y_3)_{xx}(L) + k_1 (z_3)_x(L)=0.\n\\end{eqnarray}\n\nNow let $\\vec y=\\left[ \\begin{array}{c}\n\\vec y_1 \\\\\n\\vec y_2 \\end{array} \\right]\\in {\\rm Dom} (\\mathcal A )$ and $\\vec z=\\left[ \\begin{array}{c}\n\\vec z_1 \\\\\n\\vec z_2 \\end{array} \\right] $ be such that $(I+\\mathcal A) \\vec y=\\vec z.$ By Proposition \\ref{prop-dom} and Lemma \\ref{skew}, the compactness of the resolvent follows. 
$\\square$\n\n\n\n\\begin{lem} \\label{xyz} The eigenvalue problem\n\\begin{eqnarray}\n \\label{dbas-eig} &&\\left\\{\n \\begin{array}{ll}\n \\alpha^1 h_1 z^1_{xx} - G_2 \\phi^2 = \\lambda^2 \\rho_1 h_1 z^1,& \\\\\n \\alpha^3 h_3 z^3_{xx} + \\frac{\\gamma^2 h_3}{{\\varepsilon_{3}}} (P_\\xi(z^3)_x)_x + G_2 \\phi^2 = \\lambda^2 \\rho_3 h_3 z^3, & \\\\\n - K_2 u_{xxxx} + G_2 H \\phi^2_x=\\lambda^2 (m u - K_1 u_{xx}),&\n\n \\end{array} \\right.\n\\end{eqnarray}\nwith the overdetermined boundary conditions\n\\begin{eqnarray}\n \\nonumber && u(0)=u_x(0)=z^1(0)=z^3(0)=z^1(L)=z^1_x (L)=z^3(L)=z^3_x(L)=0,\\\\\n \\label{d-son-eig} && u(L)=u_x(L)=u_{xx}(L)=u_{xxx}(L)=0\n\\end{eqnarray}\nhas only the trivial solution.\n\\end{lem}\n\\vspace{0.1in}\n\n\\textbf{Proof:} Multiply the equations in (\\ref{dbas-eig}) by $-x \\bar z^1_x+2\\bar z^1,$ $-x\\bar z^3_x +2\\bar z^3,$ and $-x\\bar u_x+3\\bar u,$ respectively, integrate by parts on $(0,L),$ and add them up:\n\n\\begin{eqnarray}\n\\nonumber && 0=\\int_0^L\\left\\{ -\\alpha^1 h_1|z^1_x|^2 +(x\\bar z^3_{xx} -\\bar z^3_x) \\left(\\alpha^3 h_3 z^3_x +\\frac{\\gamma^2 h_3}{{\\varepsilon_{3}}} P_\\xi z^3_x\\right) -3\\rho_1 h_1 \\lambda^2|z^1|^2 -3\\rho_3 h_3 \\lambda^2 |z^3|^2 -4m \\lambda^2 |u|^2 \\right. \\\\\n \\nonumber && - 2 K_1 \\lambda^2|u_x|^2 - G_2 h_2 \\phi^2 (x \\bar \\phi^2_x)-3G_2 h_2|\\phi^2|^2-K_2 \\bar u_{xxxx} (x u_x) + \\alpha^1 h_1 \\bar z^1_{xx} (x z^1_x) -\\rho_1h_1 \\lambda^2\\bar z^1 (xz^1_x)\\\\\n\\label{CH3-mult1-20} && \\left. -\\rho_3h_3 \\lambda^2 \\bar z^3 (x z^3_x) -\\lambda^2 (m\\bar u-K_1 \\bar u_{xx})(x u_x)\\right\\}~dx.\\quad\\quad\n \\end{eqnarray}\nNow consider the conjugate eigenvalue problem corresponding to (\\ref{dbas-eig})-(\\ref{d-son-eig}). 
Multiply the equations in the conjugate problem by $-x z^1_x-3 z^1,$ $ -x z^3_x-3 z^3,$ and $-x u_x-2 u,$ respectively, integrate by parts on $(0,L),$ and add them up:\n\\begin{eqnarray}\n\\nonumber && 0=\\int_0^L\\left\\{ 3\\alpha^1 h_1 |z^1_x|^2 -x z^3_{x} \\left(\\alpha^3 h_3 \\bar z^3_{xx} +\\frac{\\gamma^2 h_3}{{\\varepsilon_{3}}} (P_\\xi \\bar z^3_x)_x\\right) +3 z^3_{x} \\left(\\alpha^3 h_3 \\bar z^3_{x} +\\frac{\\gamma^2 h_3}{{\\varepsilon_{3}}} P_\\xi \\bar z^3_x\\right) \\right. \\\\\n \\nonumber && +3\\bar\\lambda^2\\rho_1 h_1 |z^1|^2 +3\\rho_3 h_3 \\bar \\lambda^2|z^3|^2 +2 m \\bar \\lambda^2 |u|^2 + 2 K_1 \\bar \\lambda^2|u_x|^2 +2K_2 |u_{xx}|^2 + G_2 h_2 \\bar \\phi^2 (x \\phi^2_x)+3G_2 h_2|\\phi^2|^2\\\\\n\\label{CH3-mult1-21} && \\left.+K_2 \\bar u_{xxxx} (x u_x) - \\alpha^1 h_1 \\bar z^1_{xx} (x z^1_x) +\\rho_1h_1 \\bar \\lambda^2\\bar z^1 (xz^1_x)+\\rho_3h_3 \\bar \\lambda^2 \\bar z^3 (x z^3_x) +\\bar \\lambda^2 (m\\bar u-K_1 \\bar u_{xx})(x u_x)\\right\\}~dx.\\quad\\quad\n \\end{eqnarray}\n\nAdding (\\ref{CH3-mult1-20}) and (\\ref{CH3-mult1-21}) yields\n\\begin{eqnarray}\n\\nonumber && 0=\\int_0^L\\left\\{ 2\\alpha^1 h_1 |z^1_x|^2 +2\\alpha^3 h_3 |z^3_x|^2 + \\frac{2\\gamma^2 h_3}{{\\varepsilon_{3}}} (P_\\xi z^3_x\\cdot \\bar z^3_x) +\\frac{\\gamma^2 h_3}{{\\varepsilon_{3}}} x\\left( -z^3_{x} (P_\\xi \\bar z^3_x)_x +\\bar z^3_{xx}\\, P_\\xi z^3_x\\right) \\right. 
\\\\\n \\nonumber && +3\\rho_1 h_1 (-\\lambda^2 + \\bar \\lambda^2) |z^1|^2 +3\\rho_3 h_3 (-\\lambda^2 + \\bar \\lambda^2)|z^3|^2 + (-4m \\lambda^2 + 2 m \\bar \\lambda^2) |u|^2 + 2K_1(- \\lambda^2 + \\bar \\lambda^2)|u_x|^2 \\\\\n\\label{CH3-mult1-22} && \\left.+2K_2 |u_{xx}|^2 + G_2 h_2 \\left(- \\phi^2 (x \\bar \\phi^2_x) + \\bar \\phi^2 (x \\phi^2_x)\\right)+ (-\\lambda^2+\\bar \\lambda^2) (m\\bar u-K_1 \\bar u_{xx})(x u_x)\\right\\}~dx.\\quad\\quad\n \\end{eqnarray}\n\nTaking the real part of (\\ref{CH3-mult1-22}) and using the fact that all eigenvalues are\nlocated on the imaginary axis, i.e. $\\lambda=\\mp \\imath \\nu,$ yields\n\\begin{eqnarray}\n\\nonumber && \\int_0^L\\left( K_2 |u_{xx}|^2+ m \\nu^2|u|^2+ \\alpha^1 h_1 |z^1_x|^2 +\\left(\\alpha^3 h_3 z^3_x + \\frac{\\gamma^2 h_3}{{\\varepsilon_{3}}} (P_\\xi z^3_x)\\right)\\cdot \\bar z^3_x \\right.\\\\\n\\label{damn1}&& \\quad \\left.+\\frac{\\gamma^2 h_3}{2{\\varepsilon_{3}}} x\\left( -z^3_{x} (P_\\xi \\bar z^3_x)_x +\\bar z^3_{xx}\\, P_\\xi z^3_x\\right) \\right)~dx=0.\n \\end{eqnarray}\n\nIt remains to eliminate the last term, which survived the calculations above. 
Let $P_\\xi z^3_x=\\eta.$ Then $\\eta-\\xi \\eta_{xx}=z^3_x,$ and\n\\begin{eqnarray}\n\\nonumber &&\\frac{\\gamma^2 h_3}{2{\\varepsilon_{3}}} \\int_0^L x\\left[ -z^3_{x} (P_\\xi \\bar z^3_x)_x +\\bar z^3_{xx}\\, P_\\xi z^3_x\\right]~dx = \\frac{\\gamma^2 h_3}{2{\\varepsilon_{3}}} \\int_0^L \\left[ (xz^3_{xx}+z^3_x)(P_\\xi \\bar z^3_x) +x\\bar z^3_{xx}\\, P_\\xi z^3_x\\right]~dx\\\\\n\\nonumber && \\quad = \\frac{\\gamma^2 h_3}{2{\\varepsilon_{3}}} \\int_0^L \\left[ (x(\\eta_x-\\xi \\eta_{xxx})+(\\eta-\\xi\\eta_{xx}))\\bar \\eta +x(\\bar\\eta_x-\\xi\\bar \\eta_{xxx}) \\eta\\right]~dx\\\\\n\\nonumber && \\quad = \\frac{\\gamma^2 h_3}{2{\\varepsilon_{3}}} \\int_0^L \\left[ x (\\eta\\cdot \\bar\\eta)_x + |\\eta|^2 + \\xi |\\eta_x|^2 + \\xi \\left(\\eta_{xx} (\\bar\\eta+x\\bar\\eta_x) + \\bar\\eta_{xx}(\\eta+x\\eta_{x})\\right)\\right]~dx\\\\\n\\label{damn2} && \\quad = -\\frac{\\gamma^2 h_3\\xi}{2{\\varepsilon_{3}}}\\int_0^L |\\eta_x|^2 dx,\n\\end{eqnarray}\nand\n\\begin{eqnarray}\n\\label{damn3} \\frac{\\gamma^2 h_3}{{\\varepsilon_{3}}} \\int_0^L (P_\\xi z^3_x) \\cdot \\bar z^3_x ~dx&=& \\frac{\\gamma^2 h_3}{{\\varepsilon_{3}}} \\int_0^L \\eta (\\bar \\eta -\\xi \\bar \\eta_{xx}) ~dx=\\frac{\\gamma^2 h_3}{{\\varepsilon_{3}}} \\int_0^L (|\\eta|^2 + \\xi |\\eta_x|^2) ~dx.\n \\end{eqnarray}\nPlugging (\\ref{damn2}) and (\\ref{damn3}) in (\\ref{damn1}) gives\n\\begin{eqnarray}\n\\nonumber && \\int_0^L\\left( K_2 |u_{xx}|^2+ m \\nu^2|u|^2+ \\alpha^1 h_1 |z^1_x|^2 +\\left(\\alpha^3 h_3 z^3_x + \\frac{\\gamma^2 h_3}{{\\varepsilon_{3}}} (P_\\xi z^3_x)\\right)\\cdot \\bar z^3_x \\right.\\\\\n\\label{damn4} && \\quad \\left.+ \\frac{\\gamma^2 h_3}{{\\varepsilon_{3}}} (|P_\\xi z^3_x|^2 + \\frac{\\xi}{2} |(P_\\xi z^3_x)_x|^2) \\right)~dx=0.\n \\end{eqnarray}\nThis, together with (\\ref{d-son-eig}), implies that $u=z^1=z^3\\equiv 0.$ This also covers the case $\\lambda\\equiv 0. 
\\square$\n\n\n\n\nNow we consider the decomposition $\\mathcal{A}+B=(\\mathcal{A}_d+B)+ \\mathcal {A}_{\\phi}$ of the semigroup generator of the original problem (\\ref{defA}), where $\\mathcal{A}_d+B$ is the semigroup generator of the decoupled system, i.e. $\\phi^2\\equiv 0$ in (\\ref{d4-non})-(\\ref{d-son-non}),\n\\begin{eqnarray}\n \\label{dbas-dcpld} &&\\left\\{\n \\begin{array}{ll}\n m \\ddot w - K_1 \\ddot{w}_{xx}+K_2w_{xxxx} =0,&\\\\\n \\rho_1h_1 \\ddot v^1 -\\alpha^1 h_1 v^1_{xx} =0, & \\\\\n \\rho_3h_3 \\ddot v^3 -\\alpha^3 h_3 v^3_{xx} = 0, &\n \\end{array} \\right.\n\\end{eqnarray}\nwith the boundary and initial conditions\n \\begin{eqnarray}\n&& \\begin{array}{ll}\n\\nonumber \\left\\{v^1, v^3, w, w_x \\right\\}_{x=0}=0, ~ \\left\\{\\alpha^1 h_1v^1_x\\right\\}_{x=L}=-s_1 \\dot v^1(L,t),~~ \\left\\{ \\alpha^3 h_3 v^3_x \\right\\}_{x=L}=-s_3 \\dot v^3(L,t),&\\\\\n \\end{array} \\\\\n && \\begin{array}{ll}\n\\nonumber \\left\\{K_2 w_{xx} \\right\\}_{x=L} =-k_1\\dot w_x(L,t) , ~ \\left\\{K_1 \\ddot w_x -K_2 w_{xxx} \\right\\}_{x=L} =k_2 \\dot w(L,t),& t\\in \\mathbb{R}^+ \\\\\n \\end{array} \\\\\n&& \\begin{array}{ll}\n\\label{d-son-dcpld} (v^1,v^3, w, \\dot v^1, \\dot v^3, \\dot w)(x,0) =(v^1_0, v^3_0, w_0, v^1_1, v^3_1, w_1),& x \\in [0,L].\n \\end{array}\n\\end{eqnarray}\nThe operator $\\mathcal{A}_{\\phi}:\\mathcal{H}\\to\\mathcal{H}$ representing the coupling between the layers is defined as follows:\n\\begin{eqnarray}\\label{opB}\\mathcal {A}_{\\phi}{\\bf y}=\n \\left( \\begin{array}{c}\n 0_{3\\times 1} \\\\\n \\frac{G_2}{h_1 \\rho_1}\\phi^2\\\\\n\\frac{{\\gamma^2 h_3}}{{{\\varepsilon_{3}}}}~(P_{\\xi} v^3_x)_x-\\frac{G_2}{h_3 \\rho_3}\\phi^2\\\\\n \\mathcal M^{-1} \\left(H G_2 ~\\phi^2_x\\right) \\\\\n \\end{array} \\right),\n \\end{eqnarray}\n where ${\\bf y}=(v^1, v^3, w, \\tilde v^1, \\tilde v^3, \\tilde w)$ and $\\phi^2=\\frac{1}{h_2}\\left(-v^1+v^3 + H w_x\\right).$\n Let $E_d(t)$ be the natural energy corresponding to the system 
(\\ref{dbas-dcpld})-(\\ref{d-son-dcpld}), i.e. (\\ref{Energy-nat-non}) without the $P_\\xi$ and $\\phi^2$ terms.\n\n\n \\begin{thm} Let ${\\mathcal A}_d+B$ be the infinitesimal generator of the semigroup corresponding to the solutions of (\\ref{dbas-dcpld})-(\\ref{d-son-dcpld}). Then the semigroup $\\{e^{({\\mathcal A}_d+B) t}\\}_{t\\ge 0}$ is exponentially\nstable in $\\mathcal H$.\n \\end{thm}\n\n \\textbf{Proof:} Note that the equations in (\\ref{dbas-dcpld}) are completely decoupled. The exponential stability of the semigroup $e^{({\\mathcal A}_d+B)t}$ follows from the exponential stability of the wave equations \\cite{O-Hansen4} and the Rayleigh beam equation \\cite{B-Rao}. $\\square$\n\n\n\\begin{lem} \\label{compact} The operator $\\mathcal A_{\\phi}:\\mathcal{H}\\to \\mathcal{H}$ defined in (\\ref{opB}) is compact.\n\\end{lem}\n\n\\vspace{0.1in}\n{\\bf Proof:} When $(v^1, v^3, w, \\tilde v^1, \\tilde v^3, \\tilde w)\\in \\mathcal H,$ we have $w\\in H^2_L(0,L)$ and $(v^1, v^3) \\in (H^1_L(0,L))^{2},$ and therefore $\\phi^2 \\in H^1_L(0,L).$ Since $\\mathcal M: H^2_L(0,L)\\to \\mathrm L^2(0,L)$ remains an isomorphism, the terms in (\\ref{opB}) satisfy\n\\begin{eqnarray}\n\\label{dumb5} && \\mathcal M^{-1} \\left(\\phi^2_x\\right) \\in H^2_L(0,L), ~~\\phi^2 \\in H^1_L(0,L),~~ (P_\\xi v^3_x)_x \\in H^1_0(0,L),\n \\end{eqnarray}\nand $H^2_L(0,L)\\times H^1_L(0,L)$ is compactly embedded in $H^1_L(0,L)\\times (\\mathrm L^2(0,L))^{2},$ and $H^1_0(0,L)$ is compactly embedded in $L^2(0,L).$ Hence the operator $\\mathcal A_{\\phi}$ is compact in $\\mathcal H$. $\\square$\n\n \\begin{thm} \\label{main-thm} The semigroup $\\{e^{({\\mathcal A}+B) t}\\}_{t\\ge 0}$ is exponentially\nstable in $\\mathcal H.$\n \\end{thm}\n\n \\textbf{Proof:} The semigroup generated by $\\mathcal A+B={\\mathcal A}_d +B + {\\mathcal A}_{\\phi}$ is strongly stable on $\\mathcal H$ by Lemma \\ref{xyz}, and the operator ${\\mathcal A}_{\\phi}$ is compact in $\\mathcal H$ by\nLemma \\ref{compact}. 
Therefore, since the semigroup generated by $({\\mathcal A}_d +B + {\\mathcal A}_{\\phi})-{\\mathcal A}_{\\phi}={\\mathcal A}_d+B$ is uniformly\nexponentially stable in $\\mathcal H,$ the semigroup generated by $\\mathcal A+B={\\mathcal A}_d + B+ {\\mathcal A}_{\\phi}$ is uniformly\nexponentially stable in ${\\mathcal H}$ by, e.g., the perturbation theorem of \\cite{Trigg}. $\\square$\n\nThe main difference between the voltage and charge controlled models is the existence of the $P_\\xi$ term in the charge controlled model. By using the same argument as above (taking $P_\\xi\\equiv 0$), the voltage controlled model can easily be shown to be exponentially stable with the same feedback controllers (\\ref{feedback}).\n\n\\begin{thm} The semigroup $\\{e^{({\\mathcal A}+B) t}\\}_{t\\ge 0}$ corresponding to (\\ref{d4-non}) ($P_\\xi\\equiv 0$) with (\\ref{feedback}) is exponentially\nstable in $\\mathcal H.$\n \\end{thm}\n\n Several remarks are in order:\n\n\\begin{rmk} (i) The number of controllers for the bending motion may be reduced to one by taking $k_2=0.$ However, it is worthwhile to note that the multiplier used in \\cite{Rao} does not work in the proof of Lemma \\ref{xyz}. For that reason, the classical four boundary conditions for $u$ at $x=L$ in (\\ref{d-son-eig}) are still needed due to the existence of the coupling $\\phi^2$ in (\\ref{dbas-eig}). Removing $u(L)=0$ and proving the same result in Lemma \\ref{xyz} is an open question.\n\n\\noindent (ii) If the moment of inertia term in (\\ref{d4-non})-(\\ref{d-son-non}) is set to zero, $K_1\\equiv 0,$ the bending motion is described by an Euler-Bernoulli-type equation. The exponential stabilizability of (\\ref{d4-non})-(\\ref{d-son-non}) with $s_2\\equiv 0,$ i.e. 
one controller for each equation, is still an open problem.\n\\end{rmk}\n\\section{Generalization to the Multilayer ACL beam case}\nBy using the same methodology as in Section \\ref{modeling}, we can obtain the model for the multilayer generalization of the ACL sandwich beams as in Figure \\ref{ACL-mult}.\nThe equations of motion are found to be\n\\begin{equation}\\left\\{ \\begin{array}{l}\n {\\bf{h}}_{\\mathcal O} {\\bf{p}}_{\\mathcal O} {\\ddot y}_{\\mathcal O} -{\\bf{h}}_{\\mathcal O} {\\bf{\\alpha}}_{\\mathcal O} ({y}_{\\mathcal O})_{xx} -{\\bf{\\gamma}}_{\\mathcal O}^2{\\bf{\\varepsilon}}_{\\mathcal O}^{-1} {\\bf{h}}_{\\mathcal O} ~(P_{\\xi_{\\mathcal O}} ({y}_{\\mathcal O})_{x})_x+ {\\bf{B}}^{\\rm T} {\\bf{ G}}_E \\psi_E = \\delta(x-L){\\bf \\gamma}_{\\mathcal O}{\\bf \\varepsilon}_{\\mathcal O}^{-1}\\sigma_{\\mathcal O}(t), \\\\\nm\\ddot w -K_1 \\ddot w_{xx} + K_2 w_{xxxx} - N^{\\rm T} {\\bf{h}}_E {\\bf{ G}}_E \\psi_E = 0 ~~~ {\\rm{in}}~~ (0, L)\\times \\mathbb{R}^+, \\\\\n {\\text{where}}~~ {\\bf{B}} { y}_{\\mathcal O}={\\bf{h}}_E \\phi_E-{\\bf{h}}_E N w',\n \\end{array} \\right.\n\\label{maincont}\n\\end{equation}\nwith clamped-free boundary conditions and initial conditions\n\\begin{eqnarray}\\label{bdryback10}\n & \\left\\{ \\begin{array}{l}\n \\left\\{{y}_{\\mathcal O}, w, w_x \\right\\}_{x=0}=0 ~~ {\\rm{on}}~ \\mathbb{R}^+,\\label{bdrycont3}\\\\\n \\left\\{\\alpha_{\\mathcal O} h_{\\mathcal O}({y}_{\\mathcal O})_x + {\\bf{\\gamma}}_{\\mathcal O}^2{\\bf{\\varepsilon}}_{\\mathcal O}^{-1} {\\bf{h}}_{\\mathcal O} ~P_{\\xi_{\\mathcal O}} ({y}_{\\mathcal O})_{x}\\right\\}_{x=L}=0~~~ {\\rm{on}}~~ \\mathbb{R}^+,\\label{bdrycont4}\\\\\n \\left\\{K_2 w_{xx} \\right\\}_{x=L} =-M(t) , ~ \\left\\{K_1 \\ddot w_x -K_2 w_{xxx} \\right\\}_{x=L} = g(t) ~~ {\\rm{on}}~ \\mathbb{R}^+ ,\\label{bdrycont5}\\\\\nw(x,0)=w^0(x), ~\\dot w(x,0)=w^1(x), ~{y}_{\\mathcal O}(x,0)= {y}^0_{\\mathcal O}, ~ {\\dot y}_{\\mathcal O}(x,0)= {y}^1_{\\mathcal O} ~ {\\rm{on}}~~ (0,L). 
\\label{initialcont}\n \\end{array} \\right. &\n\\end{eqnarray}\n\n \\begin{figure}\n\\begin{center}\n\\includegraphics[height=5.5cm]{multilayer.png} \n\\caption{The composite consists of joint ACL layers. Odd layers are all piezoelectric and are actuated by a charge source.} \n\\label{ACL-mult} \n\\end{center} \n\\end{figure}\n\nThe model (\\ref{maincont}) consists of $2m+1$ alternating stiff and compliant (core) layers, with piezoelectric layers on the outside. The piezoelectric layers have odd indices $1,3,\\ldots, 2m+1$ and the compliant layers have even indices $2,4,\\ldots, 2m$.\n\n In the above, $m,K_1, K_2$ are \\emph{positive} physical constants, $w$ represents the transverse displacement, $\\phi^i$ denotes the shear angle in the $i^{\\rm{th}}$ layer, $\\phi_E=[\\phi^2,\\phi^4,\\ldots,\\phi^{2m} ]^{\\rm T},$ $y^i$ denotes the longitudinal displacement\nalong the center of the $i^{\\rm{th}}$ layer, and $y_{\\mathcal O}=[y^1,y^3, \\ldots, y^{2m+1}]^{\\rm T}.$ Define the following diagonal matrices\n\\begin{eqnarray}\n\\nonumber &{\\bf{p}}_{\\mathcal O}={\\rm{diag}}~ (\\rho_1, \\ldots,\\rho_{2m+1}), ~~{\\bf{h}}_{\\mathcal O}={\\rm{diag}}~(h_1, \\ldots, h_{2m+1}), ~~{\\bf{h}}_E={\\rm{diag}}~(h_2,\\ldots, h_{2m}),&\\\\\n\\nonumber & {\\bf{\\varepsilon}}_{\\mathcal O}={\\rm{diag}}~(\\varepsilon_3^1, \\ldots, \\varepsilon_3^{2m+1}), ~~{\\bf{\\xi}}_{\\mathcal O}={\\rm{diag}}~(\\xi_1,\\ldots, \\xi_{2m+1}),~~{\\bf{\\alpha}}_{\\mathcal O}={\\rm{diag}}~(\\alpha_1, \\ldots, \\alpha_{2m+1}),&\\\\\n\\nonumber & {\\bf{ G}}_E={\\rm{diag}}~( G_2, \\ldots, G_{2m}), ~~{\\bf{\\sigma}}_{\\mathcal O}={\\rm{diag}}~(\\sigma_1, \\ldots, \\sigma_{2m+1})&\n\\end{eqnarray}\nwhere $h_i, \\rho_i, E_i$ are positive and denote the thickness, density, and Young's modulus, respectively.\nAlso $G_i\\ge 0$ denotes the shear modulus of the $i^{\\rm{th}}$ layer, and $ {\\tilde{ G}}_i\\ge 0$ denotes the coefficient of damping in the corresponding compliant layer.\n\nThe vector $ N$ is defined
as\n$N={\\bf{h}}_E^{-1}{\\bf{A }}{\\bf{h}}_{\\mathcal O} \\vec 1_{\\mathcal O} + \\vec 1_E$\nwhere ${\\bf{A}}=(a_{ij})$ and ${\\bf{B}}=(b_{ij})$ are the $m\\times(m+1)$ matrices\n$$a_{ij} = \\left\\{ \\begin{array}{l}\n1\/2,~~{\\rm{ if }}~~j = i~~{\\rm{ or }}~~j = i + 1 \\\\\n~~0,\\quad{\\rm{ otherwise}} \\\\\n\\end{array} \\right., ~~b_{ij}=\\left\\{ \\begin{array}{l}\n(-1)^{i+j+1},~~{\\rm{ if }}~~j = i~~{\\rm{ or }}~~j = i + 1 \\\\\n~~0, \\quad\\quad\\quad\\quad {\\rm{ otherwise}} \\\\\n\\end{array} \\right. $$\nand $ \\vec 1_{\\mathcal O}$ and $\\vec 1_E$ denote the vectors with all entries $1$ in $\\mathbb{R}^{m+1}$ and $\\mathbb{R}^{m},$ respectively.\n\nIn the above, the operator $P_{\\xi_{\\mathcal O}}$ is defined by\n\\begin{eqnarray}\\label{Lgamma-mat}&&P_{\\xi_{\\mathcal O}}:={\\rm{diag}}~(P_{\\xi_1}, \\ldots, P_{\\xi_{2m+1}})\\end{eqnarray}\nwhere $$P_{\\xi_i}=\\left(-\\xi_i D_x^2+I\\right)^{-1}, \\quad \\xi_i:=\\frac{{\\varepsilon_{1}}^i h_i^2}{12{\\varepsilon_{3}}^i}, ~~~i=1,3,\\ldots, 2m+1.$$\n\nLet ${\\bf{s}}_{\\mathcal O}={\\rm{diag}}~(s_1,\\ldots, s_{2m+1}),~$ $s_i,k_1, k_2>0,$ and\n \\begin{eqnarray}\\label{feedback-mult} {\\bf \\sigma}_{\\mathcal O}=-{\\bf s}_{\\mathcal O}\\dot {y}_{\\mathcal O}(L,t),~~M(t)= k_1\\dot w_x(L,t),~~ g(t)=k_2 \\dot w(L,t).\n\\end{eqnarray}\nThe well-posedness of the model (\\ref{maincont})-(\\ref{initialcont}) with (\\ref{feedback-mult}) can be established in a similar fashion as in Section \\ref{Section-stab}. We then have the following result:\n\\begin{thm} The system (\\ref{maincont})-(\\ref{initialcont}) with (\\ref{feedback-mult}) is exponentially stable.\n\\end{thm}\n\\noindent\\textbf{Proof:} The proof is analogous to the proof of Theorem \\ref{main-thm}.\n\n\\begin{rmk} The equations of motion for a voltage controlled multilayer ACL beam can be obtained similarly.
One can also consider a \\emph{hybrid} multilayer ACL beam model for which some layers are actuated by voltage sources and the rest by charge sources. The equations of motion can also be obtained in a similar fashion.\n\\end{rmk}\n \\section{Conclusion and Future Research}\nIn this paper, it is shown that, under the electrostatic assumption, charge, current, or voltage actuated ACL beams can be uniformly exponentially stabilized by using mechanical feedback controllers. This is not the case once we consider the quasistatic or fully dynamic approaches, where the magnetic effects are accounted for. For example, the voltage-actuated ACL beam model obtained by the fully dynamic or quasistatic approaches is shown to be not uniformly exponentially stabilizable \\cite{Ozkan3}. The polynomial stability result obtained in \\cite{Ozkan1} for certain combinations of material parameters, on a more regular space than the natural energy space, may be applicable to this problem, but this question is still open. Moreover, the fully dynamic model for current or charge actuation is obtained in \\cite{Ozkan4}, yet the stabilization problem is currently open and under investigation.\n\n\nEven though the quadratic-through-thickness assumption for the electric potential in the case of charge or current actuation makes the beam stiffer than the voltage actuated ACL beam, this effect is not observed for the higher order eigenvalues due to the compactness of the operator $P_\\xi$. \n\n\n\n\n\n\n\\bibliographystyle{spiebib}\n\n\n\\bibliographystyle{plain} \n \n \n \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe observation of cosmic ray particles with energies higher than $10^{11}~GeV$\n\\cite{EHE} poses a serious challenge to the known mechanisms of acceleration. \nThe shock acceleration in different \nastrophysical objects typically gives a maximal energy of accelerated protons\nless than $(1-3)\\cdot 10^{10}~GeV$ \\cite{NMA}.
The unipolar induction can \nprovide the maximal energy $1\\cdot 10^{11}~GeV$ only for the extreme values \nof the parameters \\cite{BBDGP}. Much attention has recently been given to \nacceleration by ultrarelativistic shocks \\cite{Vie},\\cite{Wax}. The\nparticles here can gain a tremendous increase in energy, by a factor of\n$\\Gamma^2$, at a single reflection, \nwhere $\\Gamma$ is the Lorentz factor of the shock.\nHowever, it is \n known (see e.g. the simulation \nfor the pulsar relativistic wind in \\cite{Hosh}) that particles entering \nthe shock region are captured there or at least have a small probability \nto escape. \n\n{\\em Topological defects, TD,} (for a review see \\cite{Book}) can naturally \nproduce particles of ultrahigh energies (UHE). The pioneering observation \nof this possibility was made by Hill, Schramm and Walker \\cite{HS} (for \na general analysis of TD as UHE CR sources see \\cite{BHSS} and for a \nreview \\cite{Sigl}).\n\nIn many cases TD become unstable and decompose into their constituent fields, \nsuperheavy gauge and Higgs bosons (X-particles), which then decay \nproducing UHE CR. This can happen, for example, when two segments of \nan ordinary string, or a monopole and an antimonopole, touch each other, when \nthe electric current in a superconducting string reaches the critical value,\nand in some other cases.\n\nIn most cases the problem with UHE CR from TD \nis not the maximal energy, but the fluxes. One very general reason \nfor the low fluxes is the large distance between TD. A natural\nscale for this distance is the Hubble distance $H_0^{-1}$. However, in some \nrather exceptional cases this dimensional scale is multiplied by a small \ndimensionless factor $r$. If the distance between TD is larger than the \nUHE proton attenuation length (due to the GZK effect \\cite{GZK}), then \nthe flux at UHE is typically exponentially suppressed. \n\n\n{\\em Ordinary cosmic strings} can produce particles when a loop annihilates \ninto a double line \\cite{BR}.
The produced UHE CR flux is strongly reduced due \nto the fact that a loop oscillates, and in the process of collapse the \ntwo incoming parts of the loop touch each other at one point, thus producing \nsmaller loops instead of a two-line annihilation. However, this idea was \nrecently revived in \\cite{Vincent}. It is argued there that \nthe energy loss of the long strings is dominated by the production of very \nsmall loops, with sizes down to the width of the string, which immediately \nannihilate into superheavy particles. A problem with this scenario is \nthe too large distance between strings (of the order of the Hubble distance). \nWith the distance between an observer and the nearest string of this order, the \nobserved spectrum of UHE CR has an exponential cutoff at energy \n$E \\sim 3\\cdot 10^{19}~eV$.\n\nSuperheavy particles can also be produced when two segments of\nstring come \ninto close contact, as in {\\it cusp} events \\cite{Bran}. \nThis process\nwas studied later by Gill and Kibble \\cite{GK}, and they concluded that\nthe resulting cosmic ray flux is far too small. \nAn interesting possibility suggested by Brandenberger \\cite{Bran} is the\n{\\em cusp} ``evaporation'' on cosmic strings. \nWhen the distance between two segments of the cusp\nbecomes of the order of the string width, the cusp may ``annihilate'' \nturning into high energy particles, \nwhich are boosted by the very large Lorentz\nfactor of the cusp \\cite{Bran}. \nHowever, the resulting UHE CR flux is considerably smaller than the one \nobserved \\cite{BBM}. \n\n{\\em Superconducting strings} \\cite{Witten} appear to\nbe much better suited for particle production.\nMoving through cosmic magnetic fields, such strings develop\nelectric currents and copiously produce charged heavy particles when the\ncurrent reaches a certain critical value.
\nThe CR flux produced by\nsuperconducting strings is affected by some model-dependent string\nparameters and by the history and spatial distribution of cosmic\nmagnetic fields. \nModels considered so far failed to account\nfor the observed flux \\cite{SJSB}.\n\n \n{\\em Monopole-antimonopole pairs } ($M{\\bar M}$) \ncan form bound states and eventually\nannihilate into UHE particles \\cite{Hill}, \\cite{BS}. \nFor an appropriate choice of the\nmonopole density $n_M$, this model is consistent with observations;\nhowever, the required (low) value of $n_M$ implies fine-tuning. \nIn the first phase transition $G \\to H \\times U(1)$ in the early \nUniverse the monopoles are produced with too high a density. They must then be \ndiluted by inflation to a very low density, precisely tuned to \nthe observed UHE CR flux. \n\nA {\\em monopole-string network} can be formed in the early Universe in the \nsequence of symmetry breakings \n\\begin{equation}\nG \\to H \\times U(1) \\to H \\times Z_N.\n\\label{eq:Z-N}\n\\end{equation}\nFor $N \\geq 3$ an infinite network of monopoles connected by strings is \nformed. The magnetic fluxes of the monopoles in the network are channeled \ninto the strings that connect them. The monopoles typically have additional \nunconfined magnetic and chromo-magnetic charges. When the strings shrink, the \nmonopoles are pulled by them and are accelerated. The \naccelerated monopoles produce extremely high energy gluons, which then \nfragment into UHE hadrons \\cite{BMV}. The produced flux is too small \nto explain the UHE CR observations \\cite{BBV}.
Necklaces can evolve in such a way\nthat the distance between monopoles diminishes and in the end all monopoles \nannihilate with the neighboring antimonopoles \\cite{BV}. The produced \nUHE CR flux \\cite{BV} is close to the observed one and the shape of the \nspectrum \nresembles the observed one. The distance between necklaces can be much smaller \nthan the attenuation length of UHE protons.\n\n{\\em Superheavy relic particles} can be sources of UHE CR \\cite{KR,BKV}.\nIn this scenario Cold Dark Matter (CDM) has a small admixture \nof long-lived superheavy particles. These particles must be heavy,\n$m_X > 10^{12}~GeV$, long-lived, $\\tau_X > t_0$, where $t_0$ is the age \nof the Universe, and weakly interacting. The required life-time \ncan be \nprovided if this particle has an (almost) conserved quantum number broken \nvery weakly due to wormhole \\cite{BKV} or instanton \\cite{KR} effects.\nSeveral mechanisms for the production of such particles in the early Universe \nwere identified. Like other forms of non-dissipative CDM, X-particles \nmust accumulate in the halo of our Galaxy \\cite{BKV} and thus they produce \nUHE CR without the GZK cutoff and without appreciable anisotropy. \n\n{\\em The UHE carriers} produced at the decay of superheavy \nrelic particles or from TD can be nucleons,\nphotons and neutrinos or neutralinos \\cite{BK}. The production of neutralinos\noccurs in a particle cascade, which originates at the decay of the superheavy \nX-particle in close analogy to the QCD cascade \\cite{BK}. Though the flux of \nUHE neutralinos is of the same order as the neutrino flux, their detection is more \nproblematic because of the smaller cross-section. The other particles \ndiscussed as carriers of the UHE signal are the gluino \\cite{Farr,MN,BK}, in case it is \nthe Lightest Supersymmetric Particle (LSP), and the heavy monopole \\cite{Weil,MN}.
\n\nIn this paper I will present the results obtained in our joint works with\nM.Kachelriess and A.Vilenkin about necklaces and superheavy relic \nparticles as possible sources of UHE CR. The neutralino and the gluino as \ncarriers of the UHE signal will also be briefly discussed.\n\n\\section{Necklaces} \nNecklaces produced in the sequence of symmetry breakings \n$G \\to H \\times U(1) \\to H \\times Z_2$ form infinite necklaces having \nthe shape of random walks and a distribution of closed loops. Each monopole \nin a necklace is attached to two strings. \n\nThe monopole mass $m$ and the string tension $\\mu$ are determined by\nthe corresponding symmetry breaking scales, $\\eta_s$ and $\\eta_m$ \n($\\eta_m>\\eta_s$): \n$m\\sim 4\\pi\\eta_m\/e,\\;\\;\\; \\mu\\sim 2\\pi\\eta_s^2$.\nHere, $e$ is the gauge coupling. The mass per unit length of string\nis equal to its tension, $\\mu$. Each string attached to a monopole\npulls it with a force $F \\sim \\mu$ in the direction of the string. \nThe monopole radius $\\delta_m$ and the string thickness $\\delta_s$ are\ntypically of the order $\\delta_m\\sim(e \\eta_m)^{-1}$, $~~\\delta_s\\sim\n(e \\eta_s)^{-1}$. \n\nAn important quantity for the necklace evolution is the dimensionless ratio\n\\begin{equation}\nr=m\/\\mu d,\n\\label{r}\n\\end{equation}\nwhere $d$ is the average separation between monopoles along the strings.\n\nWe expect the necklaces to evolve in a \nscaling regime. If $\\xi$ is the characteristic length scale of the network, \nequal to the typical separation of long strings and to their characteristic\ncurvature radius, then the force per unit length of string is \n$f \\sim \\mu\/\\xi$, and the acceleration is $a \\sim \n(r+1)^{-1}\\xi^{-1}$.\nWe assume that $\\xi$ changes on a Hubble time scale $\\sim t$. Then the \ntypical distance travelled by long strings in time $t$ should be \n$\\sim \\xi$, so that the strings have enough time to intercommute in a \nHubble time.
This gives $at^2 \\sim\\xi$, or \n\\begin{equation}\n\\xi \\sim (r+1)^{-1\/2}t.\n\\label{xi}\n\\end{equation}\nThe typical string velocity is $v \\sim (r+1)^{-1\/2}$.\n\nIt is argued in Ref.(\\cite{BV}) that $r(t)$ is driven towards large value\n$r \\gg1$. However, for $r \\geq 10^6$ the characteristic velocity of the \nnetwork falls down below the virial velocity, and the necklaces will be \ntrapped by gravitational clustering of the matter. This may change \ndramatically the evolution of network. One possible interesting effect\nfor UHE CR can be enhancement of necklace space density within \nLocal Supercluster -- a desirable effect as far \nabsence of the GZK cutoff is concerned. However, we restrict our \nconsideration by the case $r < 10^6$. The distance between \nnecklaces is still small enough, $\\xi \\gtrsim 3~Mpc$, to assume their uniform \ndistribution, when calculating the UHE CR flux.\n\nSelf-intersections of long necklaces result in copious production of\nclosed loops. For $r\\gtrsim 1$ the motion of loops is not periodic,\nso loop self-intersections should be frequent and their fragmentation\ninto smaller loops very efficient.\nA loop of size $\\ell$ typically disintegrates on a timescale \n$\\tau\\sim r^{-1\/2}\\ell$. All monopoles trapped in the loop must, of course, \nannihilate in the end. \n\n\nAnnihilating $M{\\bar M}$ pairs decay into\nHiggs and gauge bosons, which we shall refer to collectively as\n$X$-particles. The rate of $X$-particle production is easy to\nestimate if we note that infinite necklaces lose a substantial\nfraction of their length to closed loops in a Hubble time. \nThe string length per unit volume is $\\sim \\xi^{-2}$, and the monopole\nrest energy released per unit volume per unit time is\n$r\\mu\/\\xi^2 t$. 
Hence, we can write\n\\begin{equation}\n\\dot{n}_X \\sim r^2 \\mu\/(t^3 m_X),\n\\label{xrate}\n\\end{equation}\nwhere $m_X\\sim e\\eta_m$ is the $X$-particle mass.\n\nX-particles emitted by annihilating monopoles decay into hadrons, photons \nand neutrinos; the latter two components are produced through the decays of \npions.\n\nThe diffuse flux of ultra-high energy protons\ncan be evaluated as\n\\begin{equation}\nI_p(E)=\\frac{c\\dot{n}_X}{4 \\pi m_X} \\int_0^{t_0} dt\\,\\, W_N(m_X,x_g)\n\\frac{dE_g(E,t)}{dE}\n\\label{pflux}\n\\end{equation}\nwhere $dn_X\/dt$ is given by Eq.(\\ref{xrate}),\n$E$ is the energy of a proton at observation and $E_g(E,t)$ is its energy at \ngeneration at the cosmological epoch $t$, $x_g=E_g\/E$ and\n$W_N(m_X,x)$ is the fragmentation function of the X-particle into nucleons\nof energy $E=xm_X$. The value of $dE_g\/dE$ can be calculated from the \nenergy losses of a proton on the microwave background radiation (e.g. see \n\\cite{BBDGP}). In Eq.(\\ref{pflux}) the recoil protons are taken into \naccount, while in Ref.~\\cite{BV} their contribution was neglected.\n\n\n\nThe fragmentation function $W_N(m_X,x)$ is calculated using the decay of \nthe X-particle into QCD \npartons (quarks, gluons and their supersymmetric partners) with the \nconsequent development of the parton cascade. The cascade in this case is\nidentical to the one initiated by $e^+e^-$-annihilation.\nWe have used the fragmentation function in the Gaussian form as \nobtained in the MLLA approximation in \\cite{DHMT} and\n\\cite{ESW}.\n\n\nIn our calculations \nthe UHE proton flux is fully determined by only two parameters,\n$r^2\\mu$ and $m_X$.
\nThe former is restricted by low energy diffuse gamma-radiation.\nIt results from e-m cascades initiated \nby high energy photons and electrons produced in \nthe decays of X-particles.\n\nThe cascade energy density predicted in our model is \n\\begin{equation}\n\\omega_{cas}=\\frac{1}{2}f_{\\pi}r^2\\mu \\int_0 ^{t_0}\\frac{dt}{t^3}\n\\frac{1}{(1+z)^4}=\\frac{3}{4}f_{\\pi}r^2\\frac{\\mu}{t_0^2},\n\\label{cas}\n\\end{equation}\nwhere $t_0$ is the age of the Universe (here and below we use\n$h=0.75$), $z$ is the redshift and $f_{\\pi} \\sim 1$ is the fraction of\nenergy \ntransferred to pions. In Eq.(\\ref{cas}) we took into account that half \nof the energy of pions is transferred to photons and electrons. \nThe observational bound on the cascade density, for the kind of\nsources we are considering here, is \\cite{B92} $\\omega_{cas} \\lesssim\n10^{-5}~eV\/cm^3$. This gives a bound on the parameter $r^2\\mu$.\n\n\nIn numerical calculations we used \n$r^2\\mu= 1\\times 10^{28}~GeV^2$, which results in \n$\\omega_{cas}=5.6 \\cdot 10^{-6}~eV\/cm^3$, somewhat below the observational \nlimit. Now we are left with one free parameter, $m_X$, which we fix at\n$1\\cdot 10^{14}~GeV$. The maximum energy\nof protons is then $E_{max} \\sim 10^{13}~GeV$. \nThe calculated proton \nflux is presented in Fig.1, together with a summary \nof observational data taken \nfrom ref.\\cite{akeno}. \n\nLet us now turn to the calculations of UHE gamma-ray flux from the decays \nof X-particles. The dominant channel is given by the decays of neutral \npions. 
The flux can be readily calculated as\n\\begin{equation}\nI_{\\gamma}(E)=\\frac{1}{4\\pi}\\dot{n}_X\\lambda_{\\gamma}(E)N_{\\gamma}(E)\n,\n\\label{gflux}\n\\end{equation}\nwhere $\\dot{n}_X$ is given by Eq.(\\ref{xrate}), \n$\\lambda_{\\gamma}(E)$ is the absorption length of a photon with energy \n$E$ due to $e^+e^-$ pair production on the background radiation and \n$N_{\\gamma}(E)$ is the number of photons with energy $E$ produced per decay \nof an X-particle. The latter is given by \n\\begin{equation}\nN_{\\gamma}(E)=\\frac{2}{m_X}\\int_{E\/m_X}^1 \\frac{dx}{x} W_{\\pi^0}(m_X,x)\n\\label{gnumber}\n\\end{equation}\nwhere $W_{\\pi^0}(m_X,x)$ is the fragmentation function of X-particles into\n$\\pi^0$ pions.\n\nAt energies $E>1\\cdot 10^{10}~GeV$ the \ndominant\ncontribution to the gamma-ray absorption comes from the radio background. The \nsignificance of this process was first noticed in \\cite{B70} (see also \nthe book \\cite{BBDGP}). New calculations for this absorption \nwere recently\ndone in \\cite{PB}. We have used the \nabsorption lengths from this work. \n\nWhen evaluating the flux (\\ref{gflux}) at $E > 1 \\cdot 10^{10}~GeV$ we neglected \nthe cascading of a primary photon, because pair production and inverse \nCompton scattering occur at these energies on the radio background, and thus \nat each collision the energy of a cascade particle is halved.\nMoreover, assuming an intergalactic magnetic field \n$H \\geq 1\\cdot 10^{-9}$~G, the secondary \nelectrons and positrons lose their energy mainly due to synchrotron \nradiation and the emitted photons escape from the considered energy \ninterval \\cite{BGP}. \n\nThe calculated flux of gamma radiation is presented in Fig.1 by the curve\nlabelled $\\gamma$. One can see that at $E \\sim 1\\cdot 10^{11}~GeV$ the \ngamma-ray flux is considerably lower than that of protons. This is \nmainly due \nto the difference in the attenuation lengths for protons ($110~Mpc$) and \nphotons ($2.6~Mpc$ \\cite{PB} and $2.2~Mpc$ \\cite{B70}).
At higher energies \nthe attenuation length for protons dramatically decreases ($13.4~Mpc$ at \n$E=1 \\cdot 10^{12}~GeV$) and the fluxes of protons and photons become \ncomparable. \n\n\nA requirement for the models explaining the observed UHE events is that \nthe distance between sources must be smaller than the attenuation\nlength. Otherwise the flux at the \ncorresponding energy would be \nexponentially suppressed. This imposes a severe constraint on the\npossible sources. For example, in the case of protons with energy \n$E \\sim (2- 3)\\cdot 10^{11}~GeV$ \nthe proton \nattenuation length is $19~Mpc$. If protons \npropagate rectilinearly, there should be several sources inside this\nradius;\notherwise all particles would arrive from the same direction.\nIf particles are strongly deflected in extragalactic magnetic fields, \nthe distance to the source should be even smaller. Therefore, the \nsources of \nthe observed events at the highest energy must be at a distance \n$R\\lesssim 15~Mpc$ in the case of protons. \n\nIn our model the distance between \nsources, given by Eq.(\\ref{xi}), satisfies this condition for \n$r>3\\cdot 10^{4}$.\nThis is in contrast to other potential sources,\nincluding superconducting cosmic strings and powerful \nastronomical sources such as AGN, for which this condition imposes\nsevere restrictions. \n\nThe difficulty is even more pronounced in the case of UHE photons. \nThese particles \npropagate rectilinearly and their absorption length is shorter: \n$2 - 4~Mpc$ at $E \\sim 3\\cdot 10^{11}~GeV$. It is rather unrealistic to expect\nseveral powerful astronomical sources at such short distances. This \ncondition is very restrictive for topological defects as well.
The\nnecklace model is rather exceptional in this respect.\n\n\\section{UHE CR FROM RELIC QUASISTABLE PARTICLES}\n\nThis possibility was recognized recently in Refs.~\\cite{KR,BKV}.\n\nOur main assumption is that Cold Dark Matter \n(CDM) has a small admixture of long-lived supermassive $X$-particles.\nSince, apart from very small scales, fluctuations grow identically\nin all components of CDM, the fraction of $X$-particles, $\\xi_X$, is \nexpected to be the same in all structures. In particular, $\\xi_X$ is\nthe same in the halo of our Galaxy and in the\nextragalactic space. Thus the halo density of $X$-particles is enhanced in \ncomparison with the extragalactic density.\nThe decays \nof these particles produce UHE CR, whose flux is dominated by the \nhalo component, and therefore has no GZK cutoff. Moreover,\nthe potentially dangerous e-m cascade radiation is suppressed.\n\nFirst, we address the elementary-particle and cosmological aspects\nof a superheavy long-lived particle. Can the relic density \nof such particles be as high as required by observations of UHE CR?\nAnd can they have a lifetime comparable to or larger than the age\nof the Universe?\n \nLet us assume that the $X$-particle is a neutral fermion which belongs \nto a representation of the $SU(2)\\times U(1)$ group. We assume also\nthat the stability of $X$-particles is protected by a discrete\nsymmetry \nwhich is the remnant of a gauge symmetry and is\nrespected by all interactions except quantum\ngravity through wormhole effects. In other words, our particle is very\nsimilar to a very heavy neutralino with a conserved quantum number, $R'$, \nbeing the direct analogue of $R$-parity (see \\cite{BJV} and the\nreferences therein).\nThus, one can assume that the decay of the $X$-particle occurs due to dimension-5\noperators, inversely proportional to the Planck mass $m_{\\rm Pl}$ and \nadditionally suppressed by a factor $\\exp(-S)$, where $S$ is the \naction of a wormhole which absorbs the $R'$-charge.
\nAs an example one can consider a term\n\\begin{equation}\n{\\cal L} \\sim \\frac{1}{m_{Pl}} \\bar{\\Psi}\\nu \\phi\\phi \\exp(-S),\n\\label{eq:d5}\n\\end{equation}\nwhere $\\Psi$ describes the X-particle, and $\\phi$ is a $SU(2)$ scalar with vacuum \nexpectation value $v_{EW}=250$~GeV.\nAfter spontaneous symmetry breaking the term (\\ref{eq:d5}) results in \nthe mixing of the $X$-particle and the neutrino, and \nthe lifetime due to $X \\to \\nu +q + \\bar{q}$, {\\it e.g.}, is given by\n\\begin{equation}\n\\tau_X \\sim \\frac{192(2\\pi)^3}{(G_Fv_{EW}^2)^2}\\frac{m_{\\rm Pl}^2}{m_X^3}\ne^{2S},\n\\label{eq:ltime}\n\\end{equation}\nwhere $G_F$ is the Fermi constant. A lifetime $\\tau_X >t_0$ for an \n$X$-particle with $m_X \\geq 10^{13}$~GeV requires $S>44$. \nThis value is within the \nrange of the allowed values as discussed in Ref.~\\cite{KLLS}.\n\nLet us now turn to the cosmological production of $X$-particles with \n$m_X \\geq 10^{13}$~GeV. Several mechanisms were identified in \\cite{BKV}, \nincluding \nthermal production at the reheating stage, production through the decay of \nthe inflaton field at the end of the \"pre-heating\"\nperiod following inflation, and through the decay of hybrid topological \ndefects, such as monopoles connected by strings or walls bounded by\nstrings. \n\nFor the thermal production, temperatures comparable to $m_X$ are needed. \nIn the case of a heavy decaying gravitino,\nthe reheating temperature $T_R$ (which is the highest temperature \nrelevant for our problem) \nis severely limited to values below $10^8- 10^{10}$~GeV, depending \non the gravitino mass (see Ref.~\\cite{ellis} and references therein). \nOn the other hand, \nin models with dynamically broken supersymmetry, the lightest \nsupersymmetric particle is the gravitino. Gravitinos with mass \n$m_{3\/2} \\leq 1$~keV interact relatively strongly with the thermal bath,\nthus decoupling relatively late, and can be the CDM particle \\cite{grav}.
\nIn this scenario all phenomenological\nconstraints on $T_R$ (including the decay of the second \nlightest supersymmetric particle) disappear and one can assume\n$T_R \\sim 10^{11} - 10^{12}$~GeV. In this \nrange of temperatures, $X$-particles are not in thermal equilibrium.\nIf $T_R < m_X$, the density $n_X$ of $X$-particles produced during the \nreheating phase at time $t_R$ due to $a+\\bar{a} \\to X+\\bar{X}$ is easily \nestimated as\n\\begin{equation}\nn_X(t_R) \\sim N_a n_a^2 \\sigma_X t_R \\exp(-2m_X\/T_R),\n\\label{eq:dens}\n\\end{equation} \nwhere $N_a$ is the number of flavors which participate in the production of \nX-particles, $n_a$ is the density of $a$-particles and $\\sigma_X$ is \nthe production cross-section. The density of $X$-particles at the\npresent epoch can be found by the standard procedure of calculating\nthe ratio $n_X\/s$, where \n$s$ is the entropy density. Then for $m_X = 1\\cdot 10^{13}$~GeV\nand $\\xi_X$ in the wide range of values $10^{-8} - 10^{-4}$, the required\nreheating temperature is $T_R \\sim 3\\cdot 10^{11}$~GeV.\n\nIn the second scenario mentioned above, non-equilibrium inflaton decay,\n$X$-particles are usually overproduced and a second period of \ninflation is needed \nto suppress their density.\n\nFinally, $X$-particles could be produced by TD such as strings or textures. \nParticle production occurs at string intersections or in collapsing texture \nknots. The evolution of defects is scale invariant, and roughly a constant \nnumber of particles $\\nu$ is produced per horizon volume $t^3$ per Hubble \ntime $t$. ($\\nu \\sim 1$ for textures and $\\nu \\gg 1$ for strings.) The main \ncontribution to the X-particle density is given by the earliest epoch,\nsoon after defect formation, and we find \n$\\xi_X \\sim 10^{-6} \\nu (m_X\/10^{13}~GeV)(T_f\/10^{10}~GeV)^3$, where \n$T_f$ is the defect formation temperature.
Defects of energy scale\n$\\eta \\gtrsim m_X$ could be formed at a phase transition at or slightly \nbefore the end of inflation. In the former case, $T_f \\sim T_R$, while in \nthe latter case defects should be considered as \"formed\" when their typical \nseparation becomes smaller than $t$ (hence $T_f < T_R$). It should be noted \nthat the early evolution of defects may be affected by friction; our estimate \nof $\\xi_X$ will then have to be modified. $X$-particles can also be produced \nby hybrid topological defects: monopoles connected by strings or walls \nbounded by strings. The required values of $n_X\/s$ can be obtained for a wide \nrange of defect parameters.\n\nThe decays of $X$-particles result in the production of nucleons with a \nspectrum $W_N(m_X,x)$, where $m_X$ is the mass of the X-particle \nand $x=E\/m_X$. The flux of nucleons $(p,\\bar{p},n,\\bar{n})$ from the halo and \nthe extragalactic space can be calculated as\n\\begin{equation}\nI_N^{i}(E)={1\\over{4\\pi}}{n_X^i\\over{\\tau_X}}R_i{1\\over{m_X}}W_N(m_X,x),\n\\label{eq:nhalo}\n\\end{equation}\nwhere the index $i$ runs through $h$ (halo) and $ex$ (extragalactic), and \n$R_i$ is the size of the halo, $R_h$, or the attenuation length of \nUHE protons due to their collisions with microwave photons, \n$\\lambda_p(E)$, for the halo case and the extragalactic case, respectively. We \nshall assume $m_Xn_X^h=\\xi_X\\rho_{\\rm CDM}^h$ and \n$m_Xn_X^{\\rm ex}=\\xi_X\\Omega_{\\rm CDM}\\rho_{\\rm cr}$, \nwhere $\\xi_X$ describes the fraction \nof $X$-particles in CDM, $\\Omega_{\\rm CDM}$ is the CDM \ndensity in units of the critical density $\\rho_{\\rm cr}$, and \n$\\rho_{CDM}^h \\approx 0.3~GeV\/cm^3$ is the CDM density in the halo.\nWe shall use the following values for these parameters: a large \nDM halo with $R_h=100$~kpc (a smaller halo with $R_h=50$~kpc is possible, \ntoo), $\\Omega_{CDM}h^2=0.2$, and \nthe mass of the $X$-particle $m_X \\geq 10^{13}$~GeV. The flux of UHE photons \nfrom the halo is calculated in a similar way, taking into account their \nabsorption, important at $E>1\\cdot 10^{10}$~GeV,\non the radio background.
The neutrino flux calculation is similar.\n\nBefore discussing the obtained results, we consider \nthe astrophysical constraints.\n\nThe most stringent constraint comes from the electromagnetic \ncascade radiation, discussed in the previous section.\nIn the present case \nthis constraint is weaker, because the low-energy extragalactic\nnucleon flux \nis $\\sim 4$ times \nsmaller than that from the Galactic halo (see Fig.~2). Thus \nthe cascade radiation is suppressed by the same factor. \n\nThe cascade energy density calculated by integration over the cosmological epochs\n(with the dominant contribution given by the present epoch $z=0$) yields\nin our case\n\\begin{equation}\n\\omega_{\\rm cas}=\\frac{1}{5}r_X\\frac{\\Omega_{CDM}\\rho_{cr}}{H_0t_0}=\n6.3\\cdot10^2 r_X f_{\\pi}~{\\rm eV\/cm}^3.\n\\end{equation}\n\nTo fit the UHE CR observational data by nucleons from the halo, \nwe need $r_X=5\\cdot 10^{-11}$. Thus the cascade energy density is \n$\\omega_{\\rm cas}=3.2\\cdot 10^{-8} f_{\\pi}$~eV\/cm$^3$, well below the\nobservational bound. \n\n\nLet us now discuss the obtained results.\nThe fluxes shown in Fig.~2 are obtained for $R_h=100$~kpc, \n$m_X=1\\cdot 10^{13}$~GeV and\n$r_X=\\xi_X t_0\/\\tau_X=5\\cdot10^{-11}$. This value of the ratio $r_X$ allows very small \n$\\xi_X$ and $\\tau_X > t_0$. The fluxes \nnear the maximum energy $E_{\\rm max}=5\\cdot 10^{12}$~GeV were only roughly\nestimated (dotted lines on the graph). \n\nIt is easy to verify that the extragalactic nucleon flux at $E \\leq \n3\\cdot 10^{9}$~GeV is suppressed by a factor $\\sim 4$ and by a much larger \nfactor at higher energies due to the energy losses. The flux of \nextragalactic photons is suppressed even more strongly, because the attenuation \nlength for photons (due to absorption on the radio background) is much smaller\nthan for nucleons (see Ref.~\\cite{PB}). This flux is not shown in the graph.
\nThe flux of high-energy gamma radiation from the halo is a factor of $7$ \nhigher than that of nucleons, and the neutrino flux, given in Fig.~2 as \nthe sum of the dominant halo component and the subdominant extragalactic one,\nis twice as high as the gamma-ray flux.\n\nThe spectrum of the observed EAS is formed by the fluxes of gamma rays and \nnucleons. The gamma-ray contribution to this spectrum is rather complicated.\nIn contrast to low energies, the photon-induced showers at \n$E>10^9$~GeV have a low-energy muon component as abundant as that \nfor nucleon-induced showers \\cite{AK}. However, the \nshower production by the photons is suppressed by the \nLPM effect \\cite{LPM}\nand by absorption in the geomagnetic field (for recent calculations and \ndiscussion see \\cite{ps,Kasa} and references therein). \n\nWe wish to note that the excess of the gamma-ray flux over the nucleon\nflux from the halo is an unavoidable feature of this model. It follows\nfrom the more effective production of pions \nthan nucleons in the QCD cascades from the decay of the $X$-particle. \n\nThe signature of our model might be the signal from the Virgo \ncluster. The virial mass of the Virgo cluster is\n$M_{\\rm Virgo} \\sim 1\\cdot 10^{15} M_{\\odot}$ and the distance to it is \n$R= 20$~Mpc. If UHE protons (and antiprotons) propagate rectilinearly from \nthis source\n(which could be the case for $E_p \\sim 10^{11} - 10^{12}$~GeV), their \nflux is given by\n\\begin{equation}\nF_{p,\\bar{p}}^{\\rm Virgo}= r_X \\frac{M_{\\rm Virgo}}{t_0 R^2 m_X^2}W_N(m_X,x).\n\\end{equation} \nThe ratio of this flux to the diffuse flux from the half hemisphere is\n$6.4\\cdot 10^{-3}$. This signature becomes less pronounced at smaller \nenergies, when protons can be strongly deflected by intergalactic magnetic \nfields.\n\n\\section{LSP IS UHE CARRIER}\n\nThe LSP is the Lightest Supersymmetric Particle. It can be stable if R-parity \nis strictly conserved or unstable if R-parity is violated. 
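The quoted ratio of $6.4\cdot 10^{-3}$ follows from pure geometry: integrating Eq.~(\ref{eq:nhalo}) over the half hemisphere, the factors $W_N$, $m_X$, and $r_X/t_0$ cancel between the Virgo and diffuse fluxes. The conversion $M_\odot\simeq 1.12\cdot 10^{57}$~GeV used below is an assumption of this sketch.

```python
# Virgo-to-diffuse flux ratio.  Integrating Eq. (1) over 2*pi sr gives
# J_diff = (1/2)(r_X/t_0) * rho_halo * R_h * W_N / m_X^2, hence
#   F_Virgo / J_diff = 2 M_Virgo / (rho_halo * R_h * R^2).

MSUN_GEV = 1.12e57          # solar rest energy in GeV (assumed conversion)
KPC_CM = 3.086e21           # cm per kpc

M_virgo = 1e15 * MSUN_GEV   # virial mass of the Virgo cluster, GeV
rho_halo = 0.3              # halo CDM density, GeV/cm^3
R_h = 100 * KPC_CM          # halo size, cm
R = 20e3 * KPC_CM           # distance to Virgo, 20 Mpc in cm

ratio = 2 * M_virgo / (rho_halo * R_h * R**2)   # ~6.4e-3, as quoted
```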
To be able to \nreach the Earth from the most remote regions of the Universe, the LSP must have \na lifetime longer than $\\tau_{LSP} \\gtrsim t_0\/\\Gamma$, where $t_0$ is the \nage of the Universe and $\\Gamma=E\/m_{LSP}$ is the Lorentz factor of the LSP.\nFor $m_{LSP} \\sim 100~GeV$ this gives $\\tau_{LSP} > 1~yr$.\n\nTheoretically, the best-motivated candidates for the LSP are the neutralino and \nthe gravitino. We shall not consider the latter, because it is practically \nundetectable as a UHE particle. \n\n\nIn all elaborated SUSY models the gluino is not the LSP. Only if\nthe dimension-three SUSY-breaking terms are set\nto zero by hand can a gluino with mass $m_{\\tilde g}={\\mathcal O}(1~GeV)$ be the \nLSP \\cite{fa96}. There is some controversy as to whether\nthe low-mass window $1~GeV \\lesssim m_{\\tilde g} \\lesssim 4~GeV$ for\nthe gluino is still allowed \\cite{pdg,aleph}. Nevertheless, we shall study\nthe production of high-energy gluinos and their interaction with matter,\ninspired by the recent suggestion \\cite{Farr} (see also \\cite{MN}) \nthat the atmospheric showers observed at the highest energies can be\nproduced by colorless hadrons containing gluinos. \nWe shall refer to any such hadron as a\n$\\tilde{g}$-hadron. Light gluinos as \nUHE particles with energy $E \\gtrsim 10^{16}$~eV were considered in \nsome detail in the literature in connection with Cyg X-3 \\cite{aur,BI}.\nAdditionally, we consider heavy gluinos with $m_{\\tilde g} \\gtrsim\n150$~GeV \\cite{MN}.\n\n\nUHE LSPs are most naturally produced in the decays of unstable superheavy \nparticles, either from TD or as the relic ones \\cite{BK}. \n\nThe QCD parton cascade is not the only cascade process. A cascade \nmultiplication of partons in the decay of a superheavy particle appears \nwhenever the probability of producing an extra parton contains the terms \n$\\alpha \\ln Q^2$ or $\\alpha \\ln^2 Q^2$, where $Q$ is the maximum parton \ntransverse momentum, i.e. $m_X$ in our case. 
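The quoted lifetime bound is easy to make quantitative; the age of the Universe $t_0\approx 4.4\cdot 10^{17}$~s and the representative UHE energy $E\sim 10^{12}$~GeV used below are illustrative assumptions.

```python
# Minimal lifetime for an LSP to reach us from cosmological distances:
# tau >= t_0 / Gamma, with Gamma = E / m_LSP.

T0_S = 4.4e17                # age of the Universe, s (assumed)
YEAR_S = 3.15e7              # seconds per year

m_lsp = 100.0                # GeV
E = 1e12                     # GeV, illustrative UHE energy
gamma = E / m_lsp            # Lorentz factor, 1e10
tau_min_yr = T0_S / gamma / YEAR_S   # ~1.4 yr, consistent with tau > 1 yr
```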
Despite the smallness of \n$\\alpha$, the cascade develops as long as\n $\\alpha \\ln Q^2 \\gtrsim 1$. Therefore, for \nthe extremely large $Q^2$ we are interested in, a cascade develops due to\nparton multiplication through $SU(2)\\times U(1)$ interactions as well. As in \nQCD, taking into account the diagrams with $\\alpha \\ln Q^2$ gives the \nLeading Logarithm Approximation to the cascade fragmentation function.\n\nWith each successive generation of cascade particles the virtuality of the partons, \n$q^2$, diminishes. When $q^2 \\gg m_{SUSY}^2$, where $m_{SUSY}$ is a typical \nmass of supersymmetric particles, the number of supersymmetric partons \nin the cascade is the same as that of their ordinary partners. At $q^2 < \nm_{SUSY}^2$ the supersymmetric particles are not produced any more and \nthe remaining particles decay, producing the LSP. In Ref.~\\cite{BK} \na simple Monte Carlo simulation of SUSY cascading was performed \nand the spectrum of emitted LSPs was calculated. The LSPs take away a \nconsiderable fraction of the total energy ($\\sim 40\\%$). \n\nThe fluxes of UHE LSPs are shown in Fig.~3 for the case of their production \nin cosmic necklaces (see section II). When the LSP is the neutralino, the \nflux is somewhat lower than the neutrino flux. The neutralino-nucleon \ncross-section, $\\sigma_{\\chi N}$, is also smaller than that for the neutrino. For the \ntheoretically favorable masses of supersymmetric particles, \n$\\sigma_{\\chi N} \\sim 10^{-34}~cm^2$ at extremely high energies. If the \nmasses of squarks are near their experimental bound, $M_{L,R} \\sim 180~GeV$, \nthe cross-section is 60 times higher.\n\n{\\em Gluino as the LSP} is another phenomenological option. Let us briefly discuss \nthe status of the gluino as the LSP.\n\nIn all elaborated SUSY models the gluino is not the LSP, and this possibility \nis considered on a purely phenomenological basis.\n Accelerator experiments give the lower limit on the gluino mass as \n$m_{\\tilde{g}} \\gtrsim 150$~GeV \\cite{pdg}. 
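The condition $\alpha\ln Q^2\gtrsim 1$ quoted above can be checked for the electroweak sector at $Q\sim m_X$; the coupling $\alpha_2\approx 1/30$ and the infrared scale $q_0\sim M_W$ in this sketch are illustrative assumptions, not values from the text.

```python
# Order-of-magnitude check that electroweak cascading is active at
# Q ~ m_X: the LLA expansion parameter alpha * ln(Q^2/q0^2) exceeds unity.
import math

alpha_2 = 1.0 / 30.0     # assumed SU(2) coupling strength
Q = 1e13                 # GeV, maximum parton transverse momentum (= m_X)
q0 = 100.0               # GeV, assumed infrared scale ~ M_W

lla_parameter = alpha_2 * math.log(Q**2 / q0**2)   # ~1.7 > 1
```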
The upper limit on the \ngluino mass is given by cosmological and astrophysical constraints, as \nwas recently discussed in \\cite{MN}. In this work it \nwas shown that if the gluino provides the dark matter observed in our galaxy,\nthe signal from gluino annihilation and the abundance of anomalous heavy \nnuclei are too high. Since we are not interested in the case when the gluino \nis the DM particle, we can use these arguments to obtain an upper limit on \nthe gluino mass. Calculating the relic density of \ngluinos (similarly to \\cite{MN}) and using\nthe condition $\\Omega_{\\tilde{g}} \\ll \\Omega_{\\rm CDM}$, we obtained \n$m_{\\tilde{g}} \\ll 9$~TeV.\n\nWe now come to a very interesting argument against the existence \nof a light stable or quasistable gluino \\cite{VO}.\nIt is plausible that the {\\em glueballino} ($\\tilde{g}g$) is the lightest \nhadronic \nstate of the gluino \\cite{aur,BI}. However, the {\\em gluebarino\\\/}, i.e. the bound \nstate of a gluino and three quarks, is almost stable because baryon number \nis extremely weakly violated. In Ref.~\\cite{VO} it is argued that the \nlightest gluebarino is the charged state ($\\tilde{g}uud$).\n These charged gluebarinos are produced by cosmic\nrays in the Earth's atmosphere \\cite{VO}, and a light gluino as the LSP is \nexcluded by the search for heavy hydrogen or by proton-decay \nexperiments (in the case of a quasistable gluino). In the case that the lightest \ngluebarino is neutral, see \\cite{fa96}, the arguments of \\cite{VO}\nstill work if a neutral gluebarino forms a bound state with nuclei. \nThus, a light gluino is disfavored.\n\n\nThe situation is different if the gluino is heavy, \n$m_{\\tilde{g}}\\gtrsim 150~GeV$. This gluino can be unstable\ndue to weak R-parity violation \\cite{BJV} and have a lifetime \n$\\tau_{\\tilde{g}} \\gtrsim 1$~yr, {\\it i.e.\\\/}\nlong enough to be a UHE carrier (see the beginning of this section). 
\nThen the calculated relic density at the time \nof decay is not in conflict with cascade nucleosynthesis, and all \ncosmologically produced $\\tilde{g}$-hadrons have decayed by the present time.\nMoreover, the production of these gluinos by cosmic rays in the\natmosphere is ineffective because of their large mass.\n\nThe glueballino, or more generally the $\\tilde{g}$-hadron, loses its energy while \npropagating from a source to the Earth. The dominant energy loss of \nthe $\\tilde{g}$-hadron is due to pion production in collisions with microwave \nphotons. Pion production effectively starts at the same Lorentz factor as \nin the case of the proton. This implies that the energy of the GZK \ncutoff is a factor $m_{\\tilde{g}}\/m_p$ higher than in the case of the proton. \nThe attenuation length also increases because the fraction of energy lost \nnear the threshold is small, $\\mu\/m_{\\tilde{g}}$, where $\\mu$ is the pion \nmass. Therefore, even for light $\\tilde{g}$-hadrons, $m_{\\tilde{g}} \\gtrsim \n2~GeV$, the steepening of the spectrum is less pronounced than for \nprotons. \n\nThe spectrum of $\\tilde{g}$-hadrons from cosmic necklaces, with \nabsorption in intergalactic space taken into account, is shown in Fig.~3.\n\nA very light UHE $\\tilde{g}$-hadron interacts with the nucleons in the \natmosphere similarly to a UHE proton. The cross-section is reduced \nonly due to the radius of the $\\tilde{g}$-hadron and is of the order of \n$\\sim 1~mb$ \\cite{BI}. In the case of a very heavy $\\tilde{g}$-hadron the total \ncross-section can be of the same order of magnitude, but the \ncross-section with large energy transfer, relevant for \ndetection in the atmosphere, is very small \\cite{BK}. This is due to the \nfact that the interaction of the gluino at large energy transfer is \ncharacterized by large $Q^2$ and is thus a deep-inelastic \nQCD scattering. 
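The mass scaling of the cutoff and of the near-threshold energy loss described above can be put in numbers; the proton GZK energy of $\sim 5\cdot 10^{19}$~eV used below is an assumed round value.

```python
# GZK-like cutoff for a gluino-containing hadron: photopion production
# starts at the same Lorentz factor as for the proton, so the cutoff
# energy scales as m/m_p, while the fractional energy loss per collision
# near threshold is only ~ m_pi/m.

E_GZK_PROTON = 5e10          # GeV (~5e19 eV, assumed round value)
M_PROTON = 0.94              # GeV
M_PION = 0.14                # GeV

def cutoff_energy(m_hadron):
    """GZK-like cutoff energy (GeV) of a hadron of mass m_hadron (GeV)."""
    return E_GZK_PROTON * m_hadron / M_PROTON

def threshold_loss_fraction(m_hadron):
    """Fractional energy loss per photopion collision near threshold."""
    return M_PION / m_hadron

# For a light 2 GeV g-hadron: cutoff ~1.1e11 GeV, ~7% loss per collision,
# hence a milder steepening of the spectrum than for protons.
```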
\n\nThus, only a UHE gluino from the low-mass window \n$1~GeV \\leq m_{\\tilde{g}}\\leq 4~GeV$ could be a candidate for the observed \nUHE particles, but it is disfavored by the arguments given above. \n\n\\section{CONCLUSIONS}\n\nTopological Defects naturally produce particles with extremely high \nenergies, much in excess of what is presently observed. However, the fluxes \nfrom most known TD are too small. So far only necklaces \\cite{BV} and \nmonopole-antimonopole pairs \\cite{BS} can provide the observed flux of UHE CR. \n\nOther promising sources of UHE CR are relic superheavy particles \n\\cite{KR,BKV}. These particles should cluster in the halo of \nour Galaxy \\cite{BKV}, and thus UHE CR produced in their decays do not \nexhibit the GZK cutoff. The signatures of this model are the dominance of \nphotons in the primary flux and the Virgo cluster as a possible discrete source.\n\nApart from protons, photons and neutrinos, the UHE carriers can be \nneutralinos \\cite{BK}, gluinos \\cite{Farr,MN,BK} and monopoles \\cite{Weil,MN}.\nWhile the neutralino is a natural candidate for the Lightest Supersymmetric \nParticle (LSP) in SUSY models, the gluino can be considered as the LSP only \nphenomenologically. LSPs are naturally produced in the parton cascade in \nthe decay of superheavy X-particles. In the case of the neutralino, both the fluxes and \nthe interaction cross-sections are somewhat lower than for the neutrino. In the case \nof gluinos the fluxes are comparable with those of neutralinos, but the \ncross-sections for the production of the observed extensive air showers are \nlarge enough only for light gluinos. These are disfavored, especially if \nthe charged gluebarino is lighter than the neutral one \\cite{VO}. \\\\*[1mm]\n\\begin{center}\nACKNOWLEDGEMENTS\n\\end{center}\n\\vspace{1mm}\nThis report is based on my recent works with Michael Kachelriess and \nAlex Vilenkin \\cite{BV,BKV,BK}. I am grateful to my co-authors for \na pleasant and useful cooperation and for many discussions. 
\n\nMany thanks to the organizers of the workshop for their most efficient \nwork. I am especially grateful to Jonathan Ormes for all his efforts as the \nChairman of the Organizing Committee and for inviting me to this most \ninteresting meeting.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\nThe epitaxy of magnetically doped semiconductors constitutes a versatile means of fabricating in a self-organized way semiconductor\/ferromagnet nanocomposites\\cite{Bonanni:2007_SST,Katayama:2007_pssa,Dietl:2008_JAP} with still widely unexplored but remarkable functionalities relevant to spintronics, nanoelectronics, photonics, and plasmonics. In these nanocomposite materials the presence of robust ferromagnetism correlates with the existence of nanoscale volumes containing a large density of magnetic cations, that is, with the formation of condensed magnetic semiconductors (CMSs) buried in the host matrix and characterized by a high spin-ordering temperature.\\cite{Bonanni:2010_CSR} The aggregation of CMSs and, therefore, the ferromagnetism of the resulting composite system show a dramatic dependence on the growth conditions and on co-doping with shallow impurities.\n\nIn particular, the understanding and control of CMSs in tetrahedrally coordinated semiconductor films containing transition metals (TMs) -- typical examples being (Ga,Mn)As,\\cite{Tanaka:2008_B} (Ga,Mn)N,\\cite{Martinez-Criado:2005_APL} (Ge,Mn),\\cite{Jamet:2006_NM} (Zn,Cr)Te,\\cite{Kuroda:2007_NM} and (Ga,Fe)N \\cite{Bonanni:2007_PRB,Bonanni:2008_PRL,Rovezzi:2009_PRB} -- have lately been carefully considered with the necessary aid of nanoscale characterization techniques. 
Indeed, the control over CMS formation as a function of the fabrication parameters, and the possibility to reliably produce on demand CMSs with a predefined size, structure, and distribution in the semiconductor host, are fundamental requisites for the exploitation of these nanostructures in functional devices. At the same time these studies draw us nearer to understanding the origin of the ferromagnetic-like features -- persisting up to above room temperature (RT) -- found in a number of semiconductors and oxides. \\cite{Liu:2005_JMSME,Coey:2008_JPD}\n\nDilute zincblende (Ga,Mn)As grown by molecular beam epitaxy (MBE) is known to decompose upon annealing with the formation of embedded MnAs nanocrystals coherent with the GaAs matrix,\\cite{De_Boeck:1996_APL} and a striking spin-battery effect produced by these CMSs has already been demonstrated.\\cite{Nam-Hai:2009_N}\nAs an example of the critical role played by the growth parameters, MBE Ge$_{1-x}$Mn$_x$ grown below 130$^{\\circ}$C is seen to promote the self-assembly\nof coherent magnetic Mn-rich nanocolumns, whilst a higher growth temperature leads to the formation of hexagonal Ge$_5$Mn$_3$ nanocrystals buried in the Ge host.\\cite{Devillers:2007_PRB}\n\nFollowing theoretical suggestions,\\cite{Dietl:2006_NM} it has recently been demonstrated experimentally that it is possible to change the charge state of TM ions in a semiconducting matrix and, therefore, the aggregation energy, by co-doping with shallow donors or acceptors.\\cite{Kuroda:2007_NM,Bonanni:2008_PRL} In particular, it has been proven that in the model case of wurtzite (wz) (Ga,Fe)N fabricated by metalorganic vapor phase epitaxy (MOVPE) the Fermi-level tuning by co-doping with Mg (acceptor in GaN) or Si (donor in GaN) is instrumental in controlling the aggregation of the magnetic ions.\\cite{Bonanni:2008_PRL} \n\nThe same system has been thoroughly analyzed at the nanoscale by means of advanced electron microscopy as well as by synchrotron-based diffraction and 
absorption techniques. The structural characteristics have then been related -- together with the growth parameters -- to the magnetic properties of the material as evidenced by superconducting quantum interference device (SQUID) magnetometry.\\cite{Bonanni:2007_PRB,Bonanni:2008_PRL,Rovezzi:2009_PRB} It has been concluded that for a concentration of Fe below its optimized solubility limit ($\\sim0.4$\\% of the magnetic ions) the dilute system is predominantly paramagnetic (PM). For higher concentrations of the magnetic ions (Ga,Fe)N shows either chemical (intermediate state) or crystallographic phase separation.~\\cite{Bonanni:2007_PRB,Bonanni:2008_PRL,Rovezzi:2009_PRB} In the phase-separated layers a ferromagnetic (FM) behavior persisting far above RT is observed, and it has been related to the presence of either Fe-rich regions coherent with the host GaN (in the intermediate state) or of Fe$_{x}$N nanocrystals in the GaN matrix. These investigations appear to elucidate the microscopic origin of the magnetic behavior of (Ga,Fe)N reported by other groups.~\\cite{Kuwabara:2001_JJAP,Kane:2006_pssb}\n\nAlong the above-mentioned lines, in this work we consider further the MOVPE (Ga,Fe)N material system and reconstruct the phase diagram of the Fe$_{x}$N nanocrystals buried in GaN as a function of the growth temperature. Synchrotron radiation x-ray diffraction (SXRD), extended x-ray absorption fine structure (EXAFS) and x-ray absorption near-edge structure (XANES), combined with high-resolution transmission electron microscopy (HRTEM) and SQUID magnetometry, allow us to detect and to identify particular Fe$_{x}$N phases in samples fabricated at different growth temperatures $T_{\\mathrm{g}}$, as well as to establish a correlation between the existence of the specific phases and the magnetic response of the system. 
Our results imply, in particular, that self-assembled nanocrystals with a high concentration of the magnetic constituent account for the ferromagnetic-like features persisting up to above RT. These findings for (Ga,Fe)N do not support, therefore, the recent suggestions that the high-temperature ferromagnetism of the closely related oxides is brought about by the spin polarization of defects, whereas the role of the magnetic impurities is to bring the Fermi energy to an appropriate position in the band gap.~\\cite{Coey:2008_JPD} \n\nWe find that already a 5\\% variation in the growth temperature is critical for the onset of new Fe$_{x}$N species, and we can confirm that an increase in the growth temperature promotes the aggregation of the magnetic ions, resulting in an enhanced density of Fe-rich nanocrystals in the matrix and in a consequent increase of the ferromagnetic response of the system. Moreover, we observe that while in the low range of growth temperatures the Fe-rich nanoobjects tend to segregate close to the sample surface, at higher $T_{\\mathrm{g}}$ two-dimensional assemblies of nanocrystals form in a reproducible way at different depths in the layer, an arrangement expected to have potential as a template for the self-aggregation of metallic nanocolumns.\\cite{Fukushima:2006_JJAP} The non-uniform distribution of magnetic aggregates over the film volume revealed here also implies that CMS detection may be challenging and, in general, requires a careful examination of the whole layer, including the surface and interfacial regions.\n\nThe paper is organized as follows: in the next Section we give a concise summary of the MOVPE process employed to fabricate the (Ga,Fe)N phase-separated samples, together with a brief description of the characterization techniques. 
A table with the relevant samples and their parameters completes this part.\nThe central results of this work are reported in Section III and are presented in two sub-sections discussing, respectively: i) the detection, identification and structural properties $vs.$ $T_{\\mathrm{g}}$ of the different Fe$_{x}$N nanocrystals in phase-separated (Ga,Fe)N, together with the distribution of the nanocrystals in the sample volume, and ii) the magnetic properties of the specific families of Fe$_{x}$N phases. In Section IV we sum up the main conclusions and the prospects of this work.\n\n\\section{Experimental Procedure}\n\\subsection{Growth of (Ga,Fe)N}\nWe summarize here our study by considering a series of wurtzite (Ga,Fe)N samples fabricated by MOVPE in an AIXTRON 200 RF horizontal reactor. All structures have been deposited on $c$-plane sapphire substrates with TMGa (trimethylgallium), NH$_3$, and FeCp$_2$ (ferrocene) as precursors for, respectively, Ga, N and Fe, and with H$_2$ as the carrier gas. \n\nThe growth process has been carried out according to a well-established procedure,\\cite{Simbrunner:2007_APL} namely: substrate nitridation, low-temperature (540$^{\\circ}$C) deposition of a GaN nucleation layer (NL), annealing of the NL under NH$_3$ until recrystallization, and the growth of a $\\approx$~1~$\\mu$m thick device-quality GaN buffer at 1030$^{\\circ}$C. On the GaN buffer, Fe-doped GaN overlayers ($\\approx$~700~nm thick) have been deposited at different $T_{\\mathrm{g}}$ ranging from 800$^{\\circ}$C to 950$^{\\circ}$C, with a V\/III ratio of 300 [NH$_3$ and TMGa source flows of 1500 standard cubic centimeters per minute (sccm) and 5~sccm, respectively], with an average growth rate of 0.21~nm\/s, and with the flow rate of the Fe precursor set at 300~sccm. 
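As a quick consistency check of the growth parameters just listed (treating the sccm flows as proportional to the molar flows, which is an approximation), one can verify the quoted V/III ratio and estimate the deposition time of the Fe-doped overlayer:

```python
# Consistency of the quoted MOVPE parameters.
nh3_flow = 1500.0        # sccm
tmga_flow = 5.0          # sccm
v_iii_ratio = nh3_flow / tmga_flow        # -> 300, as quoted

thickness = 700.0        # nm, Fe-doped overlayer
rate = 0.21              # nm/s, average growth rate
growth_time_min = thickness / rate / 60   # ~56 min for the overlayer
```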
During the whole growth process the samples have been continuously rotated in order to promote deposition homogeneity, while \\textit{in situ} and on-line ellipsometry is employed for real-time control over the entire fabrication process. \n\nThe main parameters of the considered samples, including the Fe concentration, are displayed in Table~\\ref{Tab:table1}.\n\n\\begin{table}[h]\n\\begin{ruledtabular}\n\\caption{\\label{Tab:table1} Considered (Ga,Fe)N samples with the corresponding growth temperature, the Fe concentration as evaluated by secondary-ion mass spectrometry (SIMS), as well as the concentration of dilute paramagnetic Fe$^{3+}$ ions $x_{\\mathrm{Fe}^{3+}}$ and a lower limit of the concentration of Fe ions $x_{\\mathrm{Fe}_N}$ contributing to the Fe-rich nanocrystals, as obtained from magnetization data.}\n\\begin{tabular}{|c|cccc|}\nSample & $T_{\\mathrm{g}}$ & Fe concentration & $x_{\\mathrm{Fe}^{3+}}$ & $x_{\\mathrm{Fe}_N}$ \\\\\n & $^{\\circ}$C & [$10^{20}$ cm$^{-3}$] & [$10^{19}$ cm$^{-3}$] & [$10^{19}$ cm$^{-3}$] \\\\\n\\hline\nS690 & 800 & 1 & $3.2$ & $0.1$\\\\\nS687 & 850 & 2 & $2.9$ & $1.7$ \\\\\nS680 & 850 & 2 & $2.7$ & $1.5$\\\\\nS987 & 900 & 4 & $2.4$ & $1.6$ \\\\\nS691 & 950 & 4 & $2.9$ & $3.2$ \\\\\n\\end{tabular}\n\\end{ruledtabular}\n\\end{table}\n\n\\subsection{Synchrotron x-ray diffraction -- experimental}\n Coplanar SXRD measurements have been carried out at the Rossendorf Beamline BM20 of the European Synchrotron Radiation Facility (ESRF) in Grenoble, France, using a photon energy of 10.005~keV. The x-ray data correspond to the diffracted intensities in reciprocal space along the sample surface normals.\nThe beamline is equipped with a double-crystal Si(111) monochromator with two collimating\/focusing mirrors (Si and Pt coated) for the rejection of higher harmonics, allowing measurements in an energy range of 6 to 33~keV. 
The symmetric $\\omega-2\\theta$ scans are acquired using a heavy-duty 6-circle Huber diffractometer and the most intense peaks are found for 2$\\theta$ up to 40$^{\\circ}$.\n\n\\subsection{XAFS -- experimental and method}\\label{sec:xafs_exp}\n X-ray absorption fine structure (XAFS) measurements at the Fe K edge (7112~eV) are carried out at the ``GILDA'' Italian collaborating research group beam-line~\\cite{d'Acapito:1998_EN} (BM08) of ESRF under the same experimental conditions reported in Ref.~\\onlinecite{Rovezzi:2009_PRB}, collecting both the XANES and EXAFS spectra, and employing the following method for the analysis. \n \nA set of model compounds is established: Fe substitutional for Ga in GaN (Fe$_{\\rm Ga}$),~\\cite{Rovezzi:2009_PRB} $\\zeta$-Fe$_2$N,~\\cite{Rechenbach:1996_JAC} $\\varepsilon$-Fe$_3$N,~\\cite{Jacobs:1995_JAC} $\\gamma$'-Fe$_4$N,~\\cite{Jacobs:1995_JAC} $\\alpha$-Fe~\\cite{Swanson1955} and $\\gamma$-Fe.~\\cite{Gorton1965} For these input structures the XANES absorption spectra are calculated using the {\\sc fdmnes} code~\\cite{Joly:2009_JPC} while the EXAFS scattering expansion signals are computed with the {\\sc feff8.4} code~\\cite{Ankudinov:1998_PRB}, in order to de-correlate the structural results from any specific software choice. In both cases muffin-tin potentials and the Hedin-Lundqvist approximation for their energy-dependent part are used, with a self-consistent potential calculation for enhancing the accuracy in the determination of the Fermi energy ($E_{\\rm F}$). \n\nX-ray polarization is taken into account for Fe$_{\\rm Ga}$, while unpolarized simulations are conducted for the other phases, assuming a random orientation of the nanocrystals in the sample. In addition, for XANES the convergence of the results is tested against increasing input cluster size ($>$150 atoms), and the method is validated by experimental values from Fe$_{\\rm Ga}$ and $\\alpha$-Fe. 
The resulting simulated spectra are then convoluted $\\textit{via}$ an energy-dependent function as implemented in {\\sc fdmnes}~\\cite{Joly:2009_JPC} plus a Gaussian experimental broadening of 1~eV, and fitted to the normalized XANES experimental data in the energy range from $-20$ to 80~eV relative to $E_{\\rm F}$ with a linear combination analysis using the {\\sc Athena} graphical interface~\\cite{Ravel:2005_JSR} to {\\sc i\\-feffit}.~\\cite{Newville:2001_JSR} \n\nAll the possible combinations with a maximum of three spectra per fit (a maximum of six fit parameters: an amplitude and an energy shift per spectrum) are tested, and the best fit is chosen on the basis of the $\\chi^2$ statistics, discarding unphysical results. Finally, the XANES results are independently checked through the quantitative analysis of the EXAFS data, where the background-subtracted (via the {\\sc viper} program\\cite{Klementev:2001_JPDAP}) $k^2$-weighted fine-structure oscillations, $\\chi(k)$, are fitted in the Fourier-transformed space.\n\n\\subsection{High resolution transmission electron microscopy -- experimental}\n The HRTEM studies are performed on cross-sectional samples prepared by standard mechanical polishing followed by Ar$^{+}$ ion milling at 4~kV for about 1~h. Conventional diffraction-contrast images in bright-field imaging mode and high-resolution phase-contrast pictures are obtained with a JEOL 2011 Fast TEM microscope operating at 200~kV, capable of an ultimate point-to-point resolution of 0.19~nm and allowing lattice fringes to be imaged with a 0.14~nm resolution. \n \nAdditionally, energy-dispersive x-ray spectroscopy (EDS) analysis has been performed \\textit{via} an Oxford Inca EDS system equipped with a silicon detector to obtain information on the local composition. Selected-area electron diffraction (SAED) and fast Fourier transform (FFT) procedures are employed to study the scattering orders and $d$-spacings of the larger and the smaller nanocrystals, respectively. 
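The combinatorial linear-combination analysis of the XANES data described above can be sketched in a few lines; the toy reference spectra and the omission of the energy-shift parameters are simplifications of this illustration, not features of the actual {\sc Athena}-based analysis.

```python
# Sketch of a linear-combination fit: try every combination of at most
# three reference spectra, fit the amplitudes by linear least squares,
# discard unphysical (negative-weight) solutions, keep the lowest chi^2.
import itertools
import numpy as np

def lcf(data, references, max_components=3):
    """Return (combo, weights, chi2) of the best linear-combination fit."""
    best = None
    names = list(references)
    for n in range(1, max_components + 1):
        for combo in itertools.combinations(names, n):
            A = np.column_stack([references[c] for c in combo])
            w, *_ = np.linalg.lstsq(A, data, rcond=None)
            if np.any(w < 0):          # discard unphysical results
                continue
            chi2 = float(np.sum((data - A @ w) ** 2))
            if best is None or chi2 < best[2]:
                best = (combo, w, chi2)
    return best

# Toy demonstration with synthetic "spectra": data = 0.7*FeGa + 0.3*Fe4N.
e = np.linspace(0, 80, 200)
refs = {"FeGa": np.tanh(e / 10), "Fe3N": np.exp(-e / 40), "Fe4N": np.sin(e / 25)}
data = 0.7 * refs["FeGa"] + 0.3 * refs["Fe4N"]
combo, weights, chi2 = lcf(data, refs)
```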
\n\n\\subsection{SQUID magnetometry -- experimental}\n The magnetic properties have been investigated in a\nQuantum Design MPMS XL 5 SQUID magnetometer between 1.85 and 400~K\nand up to 50~kOe, following the methodology described\npreviously.\\cite{Stefanowicz:2010_PRB} \n\nThe difference between the\nmagnetization values measured up to 50~kOe at 1.8~K and 5~K is\nemployed to determine the concentration $x_{\\text{Fe$^{3+}$}}$ of\nparamagnetic Fe$^{3+}$ ions in the layers.\\cite{Pacuski:2008_PRL}\nThe lower limit of the concentration of Fe ions contributing to\nthe Fe-rich nanocrystals, as well as an assessment of their Curie\ntemperature, is inferred from magnetization curves at higher\ntemperatures. \n\nFinally, measurements of field-cooled (FC) and \nzero-field-cooled (ZFC) magnetization hint at the influence of the growth\ntemperature on the size distribution of the nanocrystals.\n\n\n\\section{Results}\n\\subsection{Fe$_{x}$N phases \\textit{vs.} $T_{\\mathrm{g}}$ in crystallographically separated (Ga,Fe)N}\n\n Before entering into the detailed discussion of our studies, we would like to point out that the reproducibility of the \ndata has been carefully tested: i) different samples grown under the same conditions have been characterized, and ii) all measurements (SXRD, HRTEM, etc.) have been repeated in different runs on the same samples. We can conclude that the (Ga,Fe)N structures are stable over time and that the formation of the different phases is reproduced when the growth conditions are faithfully replicated. \n\n\\begin{figure}[h]\n \\includegraphics[width=\\columnwidth]{FigSXRD.eps}\n \\caption{(Color online) SXRD spectra for (Ga,Fe)N layers deposited at different growth temperatures. 
Inset: the peak at 35.3$^{\\circ}$ deconvoluted into two components, assigned to the (111) diffraction maximum of $\\varepsilon$-Fe$_3$N and the (110) one of $\\alpha$-Fe [experiment (dotted line) and fit (smooth line)].}\n \\label{fig:SXRD}\n\\end{figure}\n\nIn Fig.~\\ref{fig:SXRD} we report the SXRD spectra for the (Ga,Fe)N samples grown at different temperatures, as listed in Table I. For the layer S690 fabricated at 800$^{\\circ}$C we have no evidence of secondary phases, and only diffraction peaks originating from the sapphire substrate and from the GaN matrix are revealed, in agreement with HRTEM measurements showing no phase separation. Moreover, in order to test the stability of the dilute phase, we have annealed the samples up to $T_{\\mathrm{a}}$ = 900$^{\\circ}$C; $\\textit{in situ}$ SXRD measurements upon annealing do not reveal the onset of any secondary phase, as reported in Fig.~\\ref{fig:SXRD_ann}, in accord with the behavior of dilute Mn in GaN \\cite{Stefanowicz:2010_PRB} and in contrast with (Ga,Mn)As, where post-growth annealing is found to promote the precipitation of MnAs nanocrystals.\\cite{Tanaka:2008_B}\n\n\\begin{figure}[h] \n \\includegraphics[width=0.98\\columnwidth]{Fig_anneal2.eps}\n \\caption{SXRD spectra for a dilute (Ga,Fe)N sample (S690) as grown and upon $\\textit{in situ}$ annealing at $T_{\\mathrm{a}}$ = 900$^{\\circ}$C for 1~h, indicating that post-growth annealing does not induce -- in the SXRD sensitivity range -- crystallographic decomposition.}\n \\label{fig:SXRD_ann}\n\\end{figure}\n\nMoving to a $T_{\\mathrm{g}}$ of 850$^{\\circ}$C (S687), different diffraction peaks belonging to secondary phases become evident. We have previously reported\\cite{Bonanni:2007_PRB} that when growing (Ga,Fe)N at this temperature one dominant Fe-rich phase is formed, namely wurtzite $\\varepsilon$-Fe$_3$N, for which we identify two main peaks corresponding to the (002) and the (111) reflections, respectively. 
A closer inspection of the (111)-related feature and a fit with two Gaussian curves centered at 35.2$^{\\circ}$ and 35.4$^{\\circ}$ give evidence of the presence of the (110) reflection from cubic metallic $\\alpha$-Fe. Moreover, the broad feature appearing around 38$^{\\circ}$ is associated with the (200) reflection of face-centered cubic ($fcc$) $\\gamma$'-Fe$_4$N, which crystallizes in an inverse perovskite structure.\\cite{Jack:1952_AC} From the position of the peak, we can estimate that these nanocrystals are strained.\n\n\\begin{figure}[htb]\n \\includegraphics[width=0.93\\columnwidth]{FigSXRD_950.eps}\n \\caption{SXRD for a sample (S874) grown at 950$^{\\circ}$C evidencing the aggregation of (200) $\\gamma$-Fe in the (Ga,Fe)N layer.}\n \\label{fig:SXRD_950}\n\\end{figure}\n\nAs the growth temperature is increased to 900$^{\\circ}$C (S987), there is no contribution left from the (110) $\\alpha$-Fe phase, and the signal from the (111) of $\\varepsilon$-Fe$_3$N is significantly quenched, indicating a reduction in either the size or the density of this specific phase. Furthermore, an intense peak is seen at 34$^{\\circ}$, corresponding to the (121) contribution from orthorhombic $\\zeta$-Fe$_2$N. This phase crystallizes in the $\\alpha$-PbO$_2$-like structure, where the Fe atoms show a slightly distorted hexagonal close packing (\\textit{hcp}), also found for $\\varepsilon$-Fe$_3$N.\\cite{Jacobs:1995_JAC} \n\nThe structural resemblance of $\\varepsilon$-Fe$_3$N and $\\zeta$-Fe$_2$N is remarkable, as the \\textit{hcp} arrangement of $\\varepsilon$-Fe$_3$N is nearly retained in $\\zeta$-Fe$_2$N.\\cite{Rechenbach:1996_JAC} This hints at a likely direct phase conversion from $\\varepsilon$-Fe$_3$N into $\\zeta$-Fe$_2$N. The diffraction peak from (200) $\\gamma$'-Fe$_4$N is still present at this temperature, but its position is slightly shifted toward its bulk value. 
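The comparison between measured peak positions and bulk values that underlies the strain estimates above amounts to Bragg's law at the photon energy used (10.005~keV); the bulk $\gamma$'-Fe$_4$N lattice constant $a=3.795$~\AA\ in this sketch is an assumed literature value.

```python
# Convert a diffraction angle to a d-spacing and compare with bulk.
import math

E_KEV = 10.005
WAVELENGTH = 12.398 / E_KEV          # A (hc ~= 12.398 keV*A)

def d_spacing(two_theta_deg):
    """Bragg's law: d = lambda / (2 sin theta)."""
    return WAVELENGTH / (2 * math.sin(math.radians(two_theta_deg / 2)))

d_meas = d_spacing(38.0)             # broad feature near 38 deg
d_bulk = 3.795 / 2                   # (200) spacing of cubic gamma'-Fe4N
strain = (d_meas - d_bulk) / d_bulk  # fractional deviation, ~0.3%
```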
A similar behavior is observed for the (002) diffraction of $\\varepsilon$-Fe$_3$N, shifted from 32.78$^{\\circ}$ to 32.9$^{\\circ}$.\n\nAt a growth temperature of 950$^{\\circ}$C (S691), the diffraction peak of (200) $\\gamma'$-Fe$_4$N recedes, indicating the decomposition of this \\textit{fcc} phase at temperatures above 900$^{\\circ}$C, in agreement with the phase diagram for free-standing Fe$_x$N,\\cite{Jacobs:1995_JAC} reporting cubic $\\gamma'$-Fe$_4$N as stable at low temperatures. Only the (002) $\\varepsilon$-Fe$_3$N- and the (121) $\\zeta$-Fe$_2$N-related diffraction peaks are preserved with a constant intensity and position with increasing temperature, suggesting that at high $T_{\\mathrm{g}}$ these two phases, and their corresponding orientations, are noticeably stable. Furthermore, in samples grown at this $T_{\\mathrm{g}}$ the peak from (200) $\\gamma$-Fe is detected around 41.12$^{\\circ}$, as reported in Fig.~\\ref{fig:SXRD_950}, in agreement with the XAFS data discussed later in this Section.\n\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=1.1\\columnwidth]{nanosize2.eps}\n \\caption{(Color online) Average size \\textit{vs.} $T_{\\mathrm{g}}$ of nanocrystals in the different Fe$_x$N phases, as determined from SXRD.}\n \\label{fig:nanosize}\n\\end{figure}\n\nFollowing the procedure employed previously,\\cite{Lechner:2009_APL} and based on the Williamson-Hall method,\\cite{Williamson:1953_AcMetal} we obtain the approximate average nanocrystal size from the full width at half maximum (FWHM) of the diffraction peaks in the radial ($\\omega\/2\\theta$) scans. The FWHMs of the (002) $\\varepsilon$-Fe$_3$N, of the (200) $\\gamma'$-Fe$_4$N, and of the (121) $\\zeta$-Fe$_2$N diffraction peaks are comparable for samples grown at different temperatures, indicating that the average size of the corresponding nanocrystals is also constant, as summarized in Fig.~\\ref{fig:nanosize}. 
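As a rough illustration of how a peak position and width translate into a $d$-spacing and an average crystallite size, consider a Scherrer-type estimate (a simplified stand-in for the Williamson-Hall analysis, which additionally separates strain broadening; the beam energy, peak position, and FWHM below are assumed values, not numbers from the text):

```python
import math

def d_spacing(two_theta_deg, wavelength_nm):
    """Bragg's law: d = lambda / (2 sin(theta))."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength_nm / (2.0 * math.sin(theta))

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm, K=0.9):
    """Crystallite size from peak broadening, strain neglected."""
    beta = math.radians(fwhm_deg)              # FWHM in radians
    theta = math.radians(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * math.cos(theta))

# Assumed values: ~10 keV beam (lambda ~ 0.124 nm), peak at
# 2theta = 32.9 deg with a 0.6 deg FWHM in the radial scan.
print(d_spacing(32.9, 0.124))       # d-spacing in nm
print(scherrer_size(0.6, 32.9, 0.124))  # size in nm
```

A Williamson-Hall analysis would instead fit FWHM$\cos\theta$ against $\sin\theta$ over several reflections, so that the size and strain contributions to the broadening can be separated.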
\n\nThe (111) $\\varepsilon$-Fe$_3$N signal intensity is seen to change abruptly when comparing the results for the sample grown at 850$^{\\circ}$C to those from the layers fabricated at higher temperatures. From the FWHM for this particular orientation we can estimate that the average nanocrystal size decreases from 16.5 to 12.0~nm in the considered temperature range. The size then remains constant up to 950$^{\\circ}$C. The size of the $\\alpha$-Fe nanocrystals can only be estimated for the sample grown at 850$^{\\circ}$C, where the corresponding diffraction peak can easily be resolved and suggests an average size of these objects larger than that of the other identified phases, as confirmed by the HRTEM images reported in Fig.~\\ref{fig:TEM}.\n\n\\begin{figure}[h]\n \\includegraphics[width=0.5\\textwidth]{lcf_fig-v2a}\n \\caption{(Color online) Normalized XANES spectra (main plot) and the amplitude of the Fourier transforms (inset) of the $k^2$-weighted EXAFS in the range from 2.5 to 10.0~\\AA$^{-1}$ for three samples (points) grown at different temperatures, together with the corresponding fits (lines), summarized in Table~\\ref{tab:xafs}.}\n \\label{fig:xafs}\n\\end{figure}\n\nThe XAFS study on the (Ga,Fe)N samples fabricated at different $T_{\\mathrm{g}}$ provides a local structural description of the atomic environment around the absorbing species, complementary to SXRD. The experimental data are reported in Fig.~\\ref{fig:xafs} together with the corresponding fits obtained by following the method described in Sec.~\\ref{sec:xafs_exp}. Qualitatively, an evolution with $T_{\\mathrm{g}}$ is visible; it is quantitatively confirmed by the results summarized in Table~\\ref{tab:xafs}. 
\n\nIn particular, from the XANES analysis -- sensitive to the occupation site symmetry and multiple scattering effects -- it is possible to infer how the composition of the different phases evolves with increasing $T_{\\mathrm{g}}$: Fe$_{\\rm Ga}$ decreases, while $\\varepsilon$-Fe$_3$N increases up to 950$^\\circ$C, when precipitation favors $\\zeta$-Fe$_2$N and $\\gamma$-Fe. This behavior is also confirmed by the EXAFS spectra given in the inset to Fig.~\\ref{fig:xafs}, where the first three main peaks in the fit range from $R_{\\rm min}$ to $R_{\\rm max}$ represent, respectively, the average Fe-N, Fe-Fe and Fe-Ga coordination. \n\nIn addition, the signal present at longer distances confirms the high crystallinity and makes it possible to include important multiple scattering paths in the fits for a better identification of the correct phase. In fact, from the Fe-Fe distances it is possible to distinguish Fe$_x$N ($\\approx$~2.75~\\AA) from pure Fe phases ($\\approx$~2.57~\\AA), while the distinction between $\\alpha$-Fe and $\\gamma$-Fe is possible through the different multiple scattering paths generated from the body centered cubic ($bcc$) and from the $fcc$ structure, respectively.\n\n\\begin{table*}[htbp]\n\\begin{ruledtabular}\n \\caption{Quantitative results of the XAFS analysis (best fits). XANES: composition ($x$) and energy shift relative to $E_{\\rm F}$ ($\\Delta E$) for each structure; EXAFS: average distance ($R$) and Debye-Waller factor ($\\sigma^2$) for the first three coordination shells around the absorber. For each phase the coordination numbers are fixed to the crystallographic ones and rescaled by the relative fractions found by XANES and a global amplitude reduction factor, $S_0^2$, of 0.93(5) as found for Fe$_{\\rm Ga}$. 
Error bars on the last digit are reported in parentheses.}\n \\label{tab:xafs}\n \n \\begin{tabular}{|c|cccccccc|cccccc|}\n \n \n Fit & \\multicolumn{8}{c|}{XANES} & \\multicolumn{6}{c|}{EXAFS} \\\\\n & \\multicolumn{2}{c}{Fe$_{\\rm Ga}$} & \\multicolumn{2}{c}{$\\zeta$-Fe$_2$N} & \\multicolumn{2}{c}{$\\varepsilon$-Fe$_3$N} & \\multicolumn{2}{c|}{$\\gamma$-Fe} & \\multicolumn{2}{c}{Fe-N} & \\multicolumn{2}{c}{Fe-Fe} & \\multicolumn{2}{c|}{Fe-Ga}\\\\\n & $x$ & $\\Delta E$ & $x$ & $\\Delta E$ & $x$ & $\\Delta E$ & $x$ & $\\Delta E$ & $R$ & $\\sigma^2$ & $R$ & $\\sigma^2$ & $R$ & $\\sigma^2$\\\\\n & & (eV) & & (eV) & & (eV) & & (eV) & (\\AA) & (10$^{-3}$\\AA$^2$) & (\\AA) & (10$^{-3}$\\AA$^2$) & (\\AA) & (10$^{-3}$\\AA$^2$)\\\\\n \\hline\n 1 & 0.9(1) & 1.3(5) & - & - & 0.1(1) & 2.9(9) & - & - & 1.99(1) & 5(2) & 2.75(5) & 13(5) & 3.20(1) & 7(1)\\\\ \n 2 & 0.6(1) & 1.1(5) & - & - & 0.4(1) & 1.8(5) & - & - & 2.00(2) & 4(1) & 2.76(2) & 9(4) & 3.20(1) & 8(1)\\\\ \n 3 & 0.2(1) & 1.0(5) & 0.4(1) & 4.5(5) & - & - & 0.4(1) & -0.3(5) & 1.95(4) & 10(9) & 2.60(5) & 15(9) & 3.18(2) & 4(2)\\\\ \n \n \n \\end{tabular}\n \\end{ruledtabular}\n \n\\end{table*}\n\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=0.8\\columnwidth]{Fig_TEM.eps}\n \\caption{Transmission electron micrographs of different Fe$_x$N phases: (a) HRTEM image of an $\\varepsilon$-Fe$_{3}$N nanocrystal; (b) the corresponding FFT image, revealing that the \\textit{d}-spacing along the growth direction is about 0.216~nm. (c) HRTEM image of an $\\alpha$-Fe nanocrystal in sample S687; (d) SAED pattern of the enclosing area. (e) HRTEM image of a $\\zeta$-Fe$_{2}$N nanocrystal; (f) the corresponding FFT image, revealing that the \\textit{d}-spacing along the growth direction is about 0.211~nm.}\n \\label{fig:TEM}\n\\end{figure}\n\nThe presence of the different Fe$_x$N phases detected with SXRD has also been confirmed by HRTEM measurements on the considered samples, as reported in Fig.~\\ref{fig:TEM}. 
All the HRTEM images presented here have been taken along the $[10\\overline{1}0]$ zone axis. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.8\\columnwidth]{distribution.eps} \n \\caption{(Color online) TEM images: distribution of the Fe-rich nanocrystals with increasing growth temperature. (a) $T_{\\mathrm{g}}$ = 800$^{\\circ}$C (S690) -- dilute (Ga,Fe)N; (b) $T_{\\mathrm{g}}$ = 850$^{\\circ}$C (S687) -- Fe-rich nanocrystals concentrated solely in proximity of the sample interface; $T_{\\mathrm{g}}$ = 950$^{\\circ}$C (S691) -- Fe-rich nanocrystals segregating in proximity of the sample surface (c) and of the interface between the GaN buffer and the Fe-doped layer (d); (e),(f) $T_{\\mathrm{g}}$ = 950$^{\\circ}$C (S876) -- to be compared with the TEM images of S691 in (c) and (d): reproducibility in the distribution of the Fe-rich nanocrystals for different samples grown at the same $T_{\\mathrm{g}}$.}\n \\label{fig:distribution}\n\\end{figure}\n\nBy using the SAED technique for the larger nanocrystals and an FFT combined with a subsequent reconstruction for the smaller objects, we have studied the foreign scattering orders and the \\textit{d}-spacings along the growth direction. As shown in Fig.~\\ref{fig:TEM}(a), the translational Moir\\'e fringes indicate that there is a set of planes parallel to the GaN (002) ones with a similar \\textit{d}-spacing inside the nanocrystal. The corresponding FFT image shown in Fig.~\\ref{fig:TEM}(b) gives an additional diffraction spot close to GaN (002), corresponding to a \\textit{d}-spacing of 0.217~nm, matching the $\\textit{d}_{002}$ of $\\varepsilon$-Fe$_{3}$N. The $\\varepsilon$-Fe$_{3}$N phase is found in all the considered samples, with the exception of the one grown at 800$^{\\circ}$C (S690, dilute). 
The phase $\\varepsilon$-Fe$_{3}$N has the structure closest to that of wurtzite GaN, and we can assume that the formation of $\\varepsilon$-Fe$_{3}$N is, thus, energetically favored.\\cite{Li:2008_JCG}\n\nThe micrograph displayed in Fig.~\\ref{fig:TEM}(c) has been obtained from the layer grown at 850$^{\\circ}$C and refers to a nanocrystal located in the proximity of the sample surface. The corresponding SAED pattern in Fig.~\\ref{fig:TEM}(d) reveals that the \\textit{d}-spacing of the lattice planes overlapping the GaN matrix has a value of 0.203~nm, matching the $\\textit{d}_{110}$ of $\\alpha$-Fe. For values of $T_{\\mathrm{g}}$ between 900 and 950$^{\\circ}$C, nanocrystals like the one represented in Fig.~\\ref{fig:TEM}(e) are found. The FFT image shown in Fig.~\\ref{fig:TEM}(f) reveals that the additional \\textit{d}-spacing is 0.211~nm, corresponding to the $\\textit{d}_{121}$ of $\\zeta$-Fe$_{2}$N.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=1.0\\columnwidth]{phasediagram.eps}\n \\caption{A phase diagram of (Ga,Fe)N as a function of the growth temperature.}\n \\label{fig:phasediagram}\n\\end{figure}\n\nIt should be underlined here that the size of the nanocrystals in the HRTEM images is smaller than the average value obtained from SXRD. This discrepancy originates from the fact that a cross-sectional TEM specimen must be made sufficiently thin to be transparent to the electron beam. Therefore, a nanocrystal is usually only partly enclosed in the investigated area. 
At the same time, in low-magnification micrographs on thicker volumes the size of the objects becomes comparable to the average value determined by the SXRD studies.\n\nCross-sectional low-magnification TEM measurements show that while at the lower growth temperatures the Fe-rich nano-objects tend to segregate close to the sample surface, as seen in Fig.~\\ref{fig:distribution}(b), at higher $T_{\\mathrm{g}}$ two-dimensional assemblies of nanocrystals form in a reproducible way -- as proven by comparing Fig.~\\ref{fig:distribution}(c),(d) and Fig.~\\ref{fig:distribution}(e),(f) -- and this arrangement is expected to be instrumental as a template for the self-aggregation of metallic nanocolumns.\\cite{Fukushima:2006_JJAP} \n\nSummarizing the SXRD, XAFS and HRTEM findings, a phase diagram of the Fe-rich phases formed in (Ga,Fe)N as a function of the growth temperature is constructed and reported in Fig.~\\ref{fig:phasediagram}, showing the dominant phases for each temperature interval.\n\nAccording to Ref.~\\onlinecite{Jack:1952_AC}, when the concentration of the interstitial atoms in the $\\varepsilon$ phase is increased by only 0.05 atoms\/100 Fe, a phase transition from $\\varepsilon$ to $\\zeta$ occurs. In this process, the Fe atoms retain their relative positions, but there is a slight anisotropic distortion of the $\\varepsilon$ lattice that reduces the symmetry of the (nano)crystal to $\\zeta$-orthorhombic. The hexagonal unit cell parameter $a_{\\mathrm{hex}}$ of $\\varepsilon$-Fe$_3$N splits into the parameters $b_{\\mathrm{orth}}$ and $c_{\\mathrm{orth}}$ in $\\zeta$-Fe$_2$N. 
Moreover, according to the Fe \\textit{vs.} N phase diagram, the orthorhombic phase contains a higher percentage of nitrogen\\cite{Jack:1952_AC} compared to the hexagonal one, and this leads us to conjecture that the higher the growth temperature, the more nitrogen is introduced into the system.\n\nRemarkably, upon increasing the growth temperature the (002) $\\varepsilon$-Fe$_3$N is preserved, while the (111)-oriented nanocrystals are not detected. A focused study would be necessary to clarify the kinetic processes taking place between 850$^{\\circ}$C and 900$^{\\circ}$C. Moreover, it is still to be clarified whether the fact that the $\\varepsilon$-Fe$_3$N nanocrystals oriented along the growth direction are stable, while the ones lying out of the growth plane are not, may be related to differences in surface energy.\n\nThe Fe$_x$N phases found in our (Ga,Fe)N samples are listed in Table~\\ref{tab:table2}, together with their crystallographic structure, lattice parameters, $d$-spacings of the diffraction peaks, and magnetic properties.\n\n\\begin{table*} \n\\begin{ruledtabular}\n\\caption{\\label{tab:table2}Structural and magnetic parameters of\nthe Fe-rich phases found in the considered (Ga,Fe)N samples.}\n\\begin{tabular}{|c|cccc|cccc|}\n\n&&\\multicolumn{3}{c|}{Lattice parameter\\cite{Eck:1999_JCM}}&\\multicolumn{3}{c}{$d$-spacing}&\\\\\n\\cline{3-5}\\cline{6-8}\n& Structure & $a$(nm)& $b$(nm)& $c$(nm)& literature value\\cite{icdd:2009}& SXRD& HRTEM& $\\mu_{B}$\\\\\n\\hline\n$\\gamma'$-Fe$_4$N & $\\textit{fcc}$ & 0.382 & -- & -- & 0.189 & 0.188-0.189 & 0.188 & 2.21~\\cite{Eck:1999_JCM}\\\\\n$\\varepsilon$-Fe$_3$N & wz & 0.469 & -- & 0.438 & 0.2189$_{(002)}$ & 0.2188$_{(002)}$ & 0.2178$_{(002)}$ & 2.0~\\cite{Leineweber:1999_JAC} \\\\\n& & & & & 0.208$_{(111)}$ & 0.206$_{(111)}$ & -- &\\\\\n$\\zeta$-Fe$_2$N & ortho & 0.443 & 0.554 & 0.484 & 
0.2113 & 0.2114 & 0.211 & 1.5~\\cite{Eck:1999_JCM}\\\\\n$\\alpha$-Fe& $\\textit{bcc}$ & 0.286 & -- & -- & 0.202 & 0.204 & 0.203 & 2.2~\\cite{Keavney:1995_PRL} \\\\\n$\\gamma$-Fe& $\\textit{fcc}$ & 0.361 & -- & -- & 0.180$_{(200)}$ & 0.176$_{(200)}$ & -- & 0.3--1.6~\\cite{Shi:1996_PRB} \\\\ \n& & & & & 0.210$_{(111)}$ & -- & -- &\\\\\n\n\\end{tabular}\n\\end{ruledtabular}\n\\end{table*}\n\nFurther focused studies are required in order to clarify the kinetic mechanisms of segregation and possibly the range of parameters that could allow the selectivity of the species in different two-dimensional regions of the doped layers. \n\n\\subsection{Magnetic properties of Fe$_x$N phases}\n\nAs reported in Table~\\ref{tab:table2}, the different Fe$_x$N phases we identify in the considered samples are expected to show specific magnetic responses. The $\\varepsilon$-Fe$_3$N phase, predominant in the samples grown at 850$^{\\circ}$C, is ferromagnetic with a Curie temperature $T_{\\mathrm{C}}$ of 575~K.\\cite{Leineweber:1999_JAC} \n\nThe $\\gamma'$-Fe$_4$N phase, also present though in a lesser amount in these layers, is ferromagnetic (FM) as well, with a $T_{\\mathrm{C}}$ of 750~K.\\cite{Jack:1952_AC} For the samples deposited at temperatures above 850$^{\\circ}$C, the dominant and stable phase becomes $\\zeta$-Fe$_2$N.\n\nThe magnetic response of these (Ga,Fe)N layers is quite typical for semiconductors containing TM ions at concentrations above or close to the solubility limits. Apart from the prevailing diamagnetic component from the sapphire substrate -- which we compensate for with the procedure detailed elsewhere \\cite{Stefanowicz:2010_PRB} -- the field dependence of the magnetization $M(H)$ is characterized primarily by a dominant paramagnetic contribution at low temperatures from diluted substitutional Fe$^{3+}$ ions and by a superparamagnetic-like component saturating (relatively) fast and originating from various magnetically ordered nanocrystals with high Fe content. 
Among them, the FM hexagonal $\\varepsilon$-Fe$_{3-x}$N nanocrystals have the highest density, according to the SXRD and HRTEM studies discussed above. Despite the richness of different phases, it is relatively straightforward to separate these major components and to treat them -- to a large extent -- qualitatively.\n\n\\begin{figure*}[t]\n \\begin{center}\n \\includegraphics[width=1.98\\columnwidth]{figANDhistANDfmt.eps}\n \\end{center}\n\\caption{(Color online) (a) Magnetic field dependence of the\nnanocrystal magnetization $M_{\\mathrm{N}}$ at selected temperatures for\nsample S687 ($T_{\\mathrm{g}}$ = 850$^{\\circ}$C). Each $M_{\\mathrm{N}}(H)$\ncurve has been measured from the maximum positive to the maximum negative\nfield only, and the dotted lines obtained by numerical inversion are guides\nfor the eye. The dashed lines represent the saturation level of\nmagnetization at each temperature.\n(b) Bullets -- temperature dependence of the saturation\nmagnetization $M_{\\mathrm{N}}^{\\mathrm{Sat}}$ obtained from panel (a).\nDashed line -- the Brillouin function for the magnetic moment of\n2$\\mu_{\\mathrm{B}}$ per Fe atom.\n(c) Three major contributions to the total magnetic signal (thick\nbrown solid) for sample S691 ($T_{\\mathrm{g}}$ = 950$^{\\circ}$C): i)\nparamagnetic from Fe$^{3+}$ (thick red dashed), ii) high-$T_{\\mathrm{C}}$\nsuperparamagnetic-like (thin green solid) from the nanocrystals, and iii) slowly\nsaturating component (blue short dashed). Also here, only half of the\nfull hysteresis loop was measured, and the dotted lines obtained by\nnumerical reflection are guides for the eye. (d) Bullets -- temperature\ndependence of $M_{\\mathrm{N}}^{\\mathrm{Sat}}$ for sample S691. Dashed line\n-- the Brillouin function for the magnetic moment of 2$\\mu_{\\mathrm{B}}$ per\nFe atom. 
The blue dotted line follows the excess of\n$M_{\\mathrm{N}}^{\\mathrm{Sat}}$ over the contribution from high\n$T_{\\mathrm{C}}$ ferromagnetic nanocrystals.}\n\\label{Fig:figANDhistANDfmt}\n\\end{figure*}\n\nWe begin by noting that the superparamagnetic-like component originates primarily from nanocrystals characterized by a relatively high spin-ordering temperature, so that their magnetization $M_{\\mathrm{N}}(T,H)$ can be regarded as temperature independent at very low temperatures. This means that the temperature dependence of the magnetization in this range comes from dilute Fe$^{3+}$ ions, whose properties in GaN have been extensively investigated previously.\\cite{Pacuski:2008_PRL,Malguth:2008_pssb} \n\n\\begin{figure}\n \\begin{center}\n \\includegraphics[width=0.97\\columnwidth]{figFMconc2.eps}\n \\end{center}\n \\caption{(Color online) Estimated lower limit for the Fe concentration that precipitates in the form of various Fe-rich nanocrystals, $x_{\\mathrm{Fe}_N}$, as a function of the growth temperature.}\n \\label{fig:figFMconc2}\n\\end{figure}\n\nAccordingly, the concentration of these ions $x_{\\mathrm{Fe}^{3+}}$ can be obtained by fitting $g\\mu_{\\mathrm{B}}S\\,x_{\\mathrm{Fe}^{3+}}N_0\\,\\Delta B_{S}(\\Delta T,H)$ to the difference between the experimental values of the magnetization measured at 1.85 and 5.0~K, where $\\Delta B_{S}(\\Delta T,H)$ is the difference of the corresponding paramagnetic Brillouin functions,\n$\\Delta B_{S}(\\Delta T,H)=B_{S}(1.85\\, \\mathrm{K},H)-B_{S}(5\\, \\mathrm{K},H)$. 
We consider the spin $S = 5\/2$, the corresponding Land\\'e factor $g=2.0$, and treat $x_{\\mathrm{Fe}^{3+}}$ as the only fitting parameter.\n\nThe values established in this way are listed in Table~\\ref{Tab:table1} for the studied samples; they are then employed to calculate the paramagnetic contribution at any temperature according to $M=g\\mu_{\\mathrm{B}}Sx_{\\mathrm{Fe}^{3+}}B_{S}(T,H)$, which is then subtracted from the experimental data to obtain the magnitude of the magnetization $M_{\\mathrm{N}}(T,H)$ coming from nanocrystals.\n\nFor the layers grown at $T_{\\mathrm{g}} <$ 900$^{\\circ}$C, $M_{\\mathrm{N}}(T,H)$ saturates at all investigated temperatures for a magnetic field above $\\sim10$~kOe, as evidenced in Fig.~\\ref{Fig:figANDhistANDfmt}(a), pointing to a predominantly ferromagnetic order within the nanocrystals. The values of the saturation magnetization $M_{\\mathrm{N}}^{\\mathrm{Sat}}$ obtained in this way, when plotted \\textit{vs.} temperature as in Fig.~\\ref{Fig:figANDhistANDfmt}(b), allow us to assess the corresponding $T_{\\mathrm{C}}$ by fitting the classical Brillouin function to the experimental points. Furthermore, assuming a value of the magnetic moment of\n2$\\mu_{\\mathrm{B}}$ per Fe, as in $\\varepsilon$-Fe$_3$N,\\cite{Bouchard:1974_JAP} we determine the concentration\nof Fe ions $x_{\\mathrm{Fe}_N}$ contributing to the Fe-rich nanocrystals, as shown in Table~\\ref{Tab:table1}.\n\nHowever, for the samples deposited at $T_{\\mathrm{g}} \\geq$ 900$^{\\circ}$C the magnitude of $M_{\\mathrm{N}}(H)$ saturates only at relatively high temperatures, namely around $T\\gtrsim 150$~K, whereas at low temperatures it shows the sizable contribution of a slowly saturating component, as shown in Fig.~\\ref{Fig:figANDhistANDfmt}(c), where magnetization data acquired at $1.85$~K for the layer S691 are reported.\n\nThis new contribution must arise from magnetically coupled objects with a spin arrangement other than ferromagnetic. 
According to the SXRD measurements previously discussed and summarized in Fig.~\\ref{fig:SXRD}, the most likely candidate is orthorhombic $\\zeta$-Fe$_2$N, antiferromagnetic below 9~K,\\cite{Hinomura:1996_INC} or slowly saturating, weakly ferromagnetic below 30~K.\\cite{Nagamura:2004_STAM} In this case, in order to establish $M_{\\mathrm{N}}^{\\mathrm{Sat}}$, we employ the Arrott plot method. The value of $M_{\\mathrm{N}}^{\\mathrm{Sat}}(T)$ determined in this way is reported in Fig.~\\ref{Fig:figANDhistANDfmt}(d), and is seen to differ considerably from that of layers grown at lower $T_{\\mathrm{g}}$. \n\nWe are able to approximate the experimental values of $M_{\\mathrm{N}}^{\\mathrm{Sat}}(T)$ with a single Brillouin function only for $T\\gtrsim150$~K (dashed line in Fig.~\\ref{Fig:figANDhistANDfmt}(d)). This points to a lower value of $T_{\\mathrm{C}}\\cong 430$~K, indicating a shift of the chemical composition of $\\varepsilon$-Fe$_{3-x}$N from Fe$_3$N ($x\\cong 0$) for $T_{\\mathrm{g}} <$ 900$^{\\circ}$C to at most Fe$_{2.6}$N ($x\\cong 0.4$) for $T_{\\mathrm{g}}\\geq$ 900$^{\\circ}$C, as $T_{\\mathrm{C}}$ of $\\varepsilon$-Fe$_{3-x}$N decreases with increasing nitrogen content.\\cite{Bouchard:1974_JAP} \n\nMoreover, the gradually increasing values of $M_{\\mathrm{N}}^{\\mathrm{Sat}}(T)$ for $T\\lesssim 150$~K, marked as the hatched area in Fig.~\\ref{Fig:figANDhistANDfmt}(d), indicate the presence of even more diluted $\\varepsilon$-Fe$_{3-x}$N nanocrystals with $x$ ranging from 0.5 to 1 and with a wide spectrum of $T_{\\mathrm{C}}$. 
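The Brillouin-function fit of $M_{\mathrm{N}}^{\mathrm{Sat}}(T)$ used above to estimate $T_{\mathrm{C}}$ amounts to solving, at each temperature, the mean-field self-consistency equation for the reduced magnetization; the related lower-limit estimate of the precipitated Fe fraction divides the saturation magnetization by the assumed moment per Fe atom and by the cation density. A minimal sketch (not the authors' code; the spin value $S=1$, the GaN cation density, and the sampling values are assumptions for illustration):

```python
import math

def brillouin(S, x):
    """Brillouin function B_S(x)."""
    if abs(x) < 1e-12:
        return 0.0
    a = (2.0 * S + 1.0) / (2.0 * S)
    b = 1.0 / (2.0 * S)
    return a / math.tanh(a * x) - b / math.tanh(b * x)

def msat_reduced(S, T, Tc, n_iter=2000, tol=1e-12):
    """Zero-field mean-field saturation magnetization m(T)/m(0):
    solves m = B_S(3S/(S+1) * (Tc/T) * m) by fixed-point iteration."""
    if T >= Tc:
        return 0.0
    m = 1.0
    for _ in range(n_iter):
        m_new = brillouin(S, 3.0 * S / (S + 1.0) * (Tc / T) * m)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

MU_B = 9.274e-21   # Bohr magneton in emu (CGS)
N0 = 4.4e22        # GaN cation density in cm^-3 (assumed value)

def fe_fraction(m_sat, moment_per_fe=2.0):
    """Lower limit on the precipitated Fe cation fraction from the
    low-temperature saturation magnetization m_sat (emu per cm^3)."""
    return m_sat / (moment_per_fe * MU_B * N0)

# Illustrative numbers: T_C ~ 430 K as estimated above; M_sat assumed
for T in (5.0, 150.0, 300.0, 400.0):
    print(T, round(msat_reduced(1.0, T, 430.0), 3))
print(fe_fraction(2.0))
```

In practice, $T_{\mathrm{C}}$ would be obtained by least-squares fitting $M_{\mathrm{N}}^{\mathrm{Sat}}(0)\,m(T)$ to the measured points, with $T_{\mathrm{C}}$ as the free parameter.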
Importantly, since $\\varepsilon$-Fe$_{3-x}$N preserves its crystallographic structure and the changes of the lattice parameters are minor in the whole range $0\\leq x \\leq 1$, all the various $\\varepsilon$-Fe$_{3-x}$N nanocrystals contribute to the same diffraction peak in the SXRD spectrum, and are detected there as a single compound.\n\nWe note that the presence of either $\\varepsilon$-Fe$_3$N or $\\zeta$-Fe$_2$N, characterized by a low spin ordering temperature, does not hinder the determination of the $x_{\\mathrm{Fe}^{3+}}$ values, as both compounds have a rather low magnetic moment of $0.1\\mu_B$ per Fe atom. Accordingly, the resulting variation of their magnetization is small compared to the changes of the Fe$^{3+}$ paramagnetic signal at low temperatures.\n\nThe procedure exemplified above allows us to establish the lower limit for the Fe concentration that precipitates in the form of various Fe-rich nanocrystals ($x_{\\mathrm{Fe}_N}$), which is determined by the magnitude of $M_{\\mathrm{N}}^{\\mathrm{Sat}}$ at low temperatures. We again assume $2\\mu_{\\mathrm{B}}$ per Fe atom, as in the dominant $\\varepsilon$-Fe$_3$N. These values are collected in Table~\\ref{Tab:table1} and plotted as a function of $T_{\\mathrm{g}}$ in Fig.~\\ref{fig:figFMconc2}. We see that $x_{\\mathrm{Fe}_N}$ consistently increases with $T_{\\mathrm{g}}$, and that the growth temperature plays a more crucial role than the Fe-precursor flow rate~\\cite{Bonanni:2007_PRB} in establishing the total value of $x_{\\mathrm{Fe}_N}$.\n\nFinally, measurements of FC and ZFC magnetization confirm the superparamagnetic-like behavior of $M_{\\mathrm{N}}(T,H)$, as reported in Fig.~\\ref{fig:figFCZFC}. As seen in Fig.~\\ref{fig:figFCZFC}(a), the layer grown at the lowest temperature (S690) shows a minimal spread in the blocking temperature $T_{\\mathrm{B}}$, having its maximum at $T_{\\mathrm{B}} < 100$~K, and accordingly a non-zero coercivity is evident only at low temperatures. 
\n\nIn contrast, for most of the studied layers a broad maximum on the ZFC curve, exemplified in Fig.~\\ref{fig:figFCZFC}(b), indicates a wide spread of blocking temperatures ($T_{\\mathrm{B}}$) -- reaching room temperature (RT) -- and consequently a broad distribution in the volume of the nanocrystals. These high values of $T_{\\mathrm{B}}$ are responsible for the open hysteresis in the $M(H)$ curves seen in Figs.~\\ref{Fig:figANDhistANDfmt}(a),(c) and thus for a non-zero coercivity. This observation again points to the growth temperature as the key factor in the determination of the crystallographic structure, size and chemical composition of the Fe-rich nanocrystals.\n\n\\begin{figure} [h]\n \\begin{center}\n \\includegraphics[width=0.97\\columnwidth]{figFCZFC.eps}\n \\end{center}\n \\caption{(Color online) ZFC and FC curves measured in an applied magnetic field of $200$~Oe for samples grown at (a) 800$^{\\circ}$C and (b) 950$^{\\circ}$C.}\n \\label{fig:figFCZFC}\n\\end{figure}\n\n\\section{Conclusions}\nThe previous\\cite{Bonanni:2007_PRB,Bonanni:2008_PRL,Rovezzi:2009_PRB} and\npresent studies allow us to draw a number of conclusions concerning the\nincorporation of Fe into GaN and about the resulting magnetic properties,\nexpected to be generic for a broad class of magnetically doped\nsemiconductors and oxides. 
These materials show magnetization consisting\ntypically of two components: i) a paramagnetic contribution appearing at\nlow temperatures, with characteristics typical for dilute magnetic\nsemiconductors containing weakly interacting, randomly distributed magnetic\nmoments; ii) a puzzling ferromagnetic-like component persisting up to\nabove RT but with a value of remanence much smaller\nthan the magnitude of the saturation magnetization.\n\nAccording to SQUID and electron paramagnetic resonance~\\cite{Bonanni:2007_PRB} measurements on\n(Ga,Fe)N, the concentration of randomly distributed Ga-substitutional\nFe$^{3+}$ ions increases with the iron precursor flow rate,\ntypically reaching the value of 0.1\\%. Our results imply that the\nmagnitude of the paramagnetic response and, hence, the density of dilute\nFe cations, is virtually independent of the growth temperature. However,\nthe incorporation of Fe can be enhanced by co-doping with Si donors,\nshifting the solubility limit to higher Fe concentrations.\\cite{Bonanni:2008_PRL}\n\nThe presence of ferromagnetic-like features can be consistently\ninterpreted in terms of crystallographic and\/or chemical phase separations\ninto nanoscale regions containing a large density of the magnetic\nconstituent. Our extensive SQUID, SXRD, TEM, EXAFS, and XANES measurements\nof MOVPE-grown (Ga,Fe)N indicate that at the lowest growth temperature,\n$T_{\\mathrm{g}} = 800^\\circ$C, a large majority of the Fe ions occupy random\nGa-substitutional positions. However, in films grown at higher\ntemperatures, $850 \\leq T_{\\mathrm{g}} \\leq 950^\\circ$C, a considerable\nvariety of Fe-rich nanocrystals is formed, differing in the Fe-to-N ratio.\nIn samples deposited at the low end of the $T_{\\mathrm{g}}$ range,\nwe observe mostly $\\varepsilon$-Fe$_3$N precipitates but also inclusions of\nelemental $\\alpha$- and $\\gamma$-Fe as well as of $\\gamma'$-Fe$_4$N. 
In\nall these materials $T_{\\mathrm{C}}$ is well above RT,\nso that the presence of the corresponding nanocrystals explains the\nrobust superparamagnetic behavior of (Ga,Fe)N grown at $T_{\\mathrm{g}}\n\\geq 850^\\circ$C.\n\nAs the growth temperature increases, nanocrystals of\n$\\zeta$-Fe$_2$N form and, owing to antiferromagnetic interactions specific\nto this compound, the magnetization acquires a component linear in the\nmagnetic field. This magnetic response \nhas been previously observed and assigned to the Van Vleck paramagnetism\nof isolated Fe$^{2+}$ ions. In view of the present findings, however, its\ninterpretation in terms of antiferromagnetically coupled spins in nitrogen-rich\nFe$_x$N ($x \\le 2$) nanocrystals appears better grounded.\n\nThe total amount of Fe ions contributing to the formation of the Fe-rich\nnanocrystals is found to increase with decreasing growth rate\nand\/or with increasing growth temperature. At the same time,\nhowever, the size of individual nanocrystals appears not to vary with\nthe growth parameters. Furthermore, annealing of (Ga,Fe)N containing only\ndiluted Fe cations does not result in a crystallographic phase separation.\nAltogether, our findings indicate that the aggregation of Fe ions occurs\nby nucleation at the growth front and is kinetically limited.\nMoreover, according to the TEM results presented here, the spatial distribution of\nthe nanocrystals is highly non-random. They tend to reside in two-dimensional\nplanes, particularly at the film surface and at the interface between the GaN buffer and the nominally Fe-doped layer. 
\n\nAs a whole, these findings constitute a significant step\ntoward controlling the chemistry and local structure of\nsemiconductor\/ferromagnetic metal nanocomposites.\\\\\n\n\\begin{acknowledgments}\nThe work was supported by the European Research Council through the FunDMS Advanced Grant within the ``Ideas'' 7th Framework Programme of the EC, and by the Austrian Fonds zur {F\\\"{o}rderung} der wissenschaftlichen Forschung -- FWF (P18942, P20065 and N107-NAN). We acknowledge the technical staff at the Rossendorfer Beamline (BM20) of the ESRF, and in particular C. B\\\"{a}htz and N. Jeutter for their valuable assistance. We also thank R. Jakie{\\l}a for performing SIMS measurements.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe Sun's outer atmosphere, the corona, is a hot ($\\sim 10^6$ Kelvin), highly magnetized, dynamic plasma, with spatial scales ranging from the electron gyroradius of less than 1 cm to the solar radius of about $7\\times10^{10}$ cm. As such, it is one of the most fruitful objects for the study of astrophysical plasmas. One of the most fascinating puzzles of the corona is how it is heated to temperatures $\\sim 1000$ times those of the underlying photosphere. It is believed that this process is magnetic in nature, but the details (e.g., the spatial distribution of heating and whether it is continuous or impulsive) are not known. One important tool for understanding these details is the reconstruction of solar temperature distributions called `Differential Emission Measures' (DEMs) from spectral images of the Sun.\n\nThe DEM, ${\\cal E}(T)$, characterizes coronal emission at a given temperature\\footnote{For most spectral lines; lines corresponding to forbidden transitions have more complicated density sensitivity, but that is beyond the scope of this paper. 
Interested readers may consult \\citet{delzannamasonasr2005}.}, and is defined in terms of the densities, $n(l)$, along the line of sight so that $\\int {\\cal E}(T) dT \\equiv \\int n^2(l) dl$ (column emission convention). Observed intensities, ${I}_n$,\nare given by integrating the instrument response, $R_n(T)$, against the DEM being observed:\n\\begin{equation}\\label{eq:demconvolution}\n\t{I}_n = \\int R_n(T){\\cal E} (T)dT\n\\end{equation}\n\nAlgorithms which reconstruct these temperature distributions must be very fast if they are to process the volume of data produced by modern solar observatories, or resolve the dynamics and fine spatial scales believed to be involved in coronal heating. For example, matching the real-time data rate of AIA \\citep{lemenetal_aia_soph_2012} requires computation of over $10^5$ DEMs per second. \nThere are a number of existing algorithms for reconstructing DEMs from solar image data, but they are far too slow to meet this requirement. \nOne widely used method is the PINTofALE code \\citep{kashyap_drake_apj_1998}, which employs a Markov Chain Monte Carlo (MCMC) search taking seconds or minutes to compute a DEM. Another method, which is considered relatively fast and was recently applied to AIA data by \\citet{hannahkontaraap2012}, computes $\\sim 4$ DEMs per second (an updated version with substantially improved performance has recently been made available at \\url{http:\/\/www.astro.gla.ac.uk\/~iain\/demreg\/map\/}; we use this version in our comparisons).\n\nWe present a fast, iterative, regularized method for inferring DEMs using data from solar imagers such as AIA and EIS. With one thread on a 3.2~GHz processor, it is able to compute well over 1000 DEMs per second for an example solar active region observed by AIA, and over $100$ DEMs per second in our most difficult test cases. 
Moreover, we anticipate that with straightforward optimizations (e.g., conversion of computationally intensive portions of the code to C and parallelization), the performance of the code will be increased to $\\sim 10^5$ DEMs per second on a single workstation, sufficient to match AIA's real-time observing rate. \n\nThis paper describes the method, analyzes its fidelity, compares its performance and results with other DEM methods, and applies it to example solar data. We also touch on the limitations of the solar data in constraining the DEM being observed. It must be noted that the ability to recover the details of the original solar input DEM is limited, both because the number of channels is limited and because the temperature response functions are broad, due to the width of the temperature-dependent emissivities of the spectral lines from which they are constructed. This is discussed in detail, for instance, in \\citet{craigbrownaap1976,judgeetalapj1997}.\n\n\n\\section{DEM Algorithm}\n\\subsection{First Pass}\\label{sub:firstpass}\nWe wish to infer the DEM, a continuous function, from a small number of observed intensities which are the result of the convolution of the DEM with the instrument response functions. This problem is inherently ill-posed due to the limited number of response functions and the loss of information resulting from the convolution (i.e., equation \\ref{eq:demconvolution}). To resolve the ambiguity, we impose additional constraints on the DEM solution. The first is that the DEM can be expressed as a linear combination of some set of basis elements (e.g., a set of narrow temperature bins), $B_j$, with coefficients $e_j$ (i.e., ${\\cal E} (T) = \\sum_j e_j B_j$). 
The integral equation (\\ref{eq:demconvolution}) then becomes a matrix equation:\n\\begin{equation}\\label{eq:demmatrix}\n\t{I}_n = \\int R_n(T) {\\cal E} (T)dT = \\sum_k\\Big[ {e}_k \\int R_n(T) B_k(T)dT\\Big] \\equiv \\sum_k {e}_k \\gamma_n A_{nk},\n\\end{equation}\nwhere $\\gamma_n$ are a set of normalization constants for the response functions, which we choose to be the square roots of their squared integrals, $\\gamma_n^2 = \\int R_n^2(T)dT$. \n\nIf the number of basis elements is greater than the number of instrument channels, the underconstrained inversion can be resolved by a Singular Value Decomposition \\citep[SVD;][]{nr}, which picks the coefficient vector with the smallest magnitude (i.e., least squared emission measure). This is a sort of smoothness constraint, since a solution which has emission concentrated into narrow peaks will have greater total squared EM than a solution which spreads the emission as broadly as the data will allow. Such a solution will also tend to reduce non-physical negative emission, since negative EM is likely to require excess positive emission measure elsewhere in order to produce the observed intensities, resulting in relatively high total squared EM.\n\nIf the number of basis elements is equal to the number of instrument channels, it is straightforward to invert equation \\ref{eq:demmatrix} to find the DEM coefficients, ${e}_k$. In that case, however, the quality of the inversion is highly sensitive to the choice of basis. In particular, many choices of basis lead to an inversion matrix (i.e., $A_{jk}^{-1}$) that has large negative entries, causing large negative values in the resulting DEMs. \n\nThe tendency of the inverse matrix to have large negative values is reduced, however, if $A_{jk}$ is diagonally dominant (i.e., each diagonal entry of the matrix is the largest one in its respective row\/column). 
In particular, if we choose the basis functions to be the instrument response functions themselves, square-normalized, we obtain a symmetric $A_{jk}$ matrix whose diagonal entries are unity, with all off-diagonal entries less than one.\nRemarkably, we find that this basis gives results identical to using a large number of narrow basis elements and an SVD\\footnote{Appendix \\ref{app:instresp_demderivation} demonstrates why this is the case.}. It therefore satisfies the SVD's minimum squared emission measure constraint, and we use it to compute a first-pass DEM as a starting point for our DEM inversion. It also reduces the cost of the inversion to only $\\sim N^2$ operations per pixel with $N$ instrument channels, increasing the speed of the inversion. \n\nEven when using the response functions as a basis, however, we continue to compute $A_{ij}^{-1}$ using an SVD. This allows us to enforce a minimum condition ($\\sim 10^{-12}$, typically\\footnote{This means the smallest singular value used to form the inverse matrix must be at least $10^{-12}$ times the largest singular value.}) on the inversion, ensuring that round-off error (from $A_{ij}$ being nearly singular) does not cause ringing. In effect, this combines highly similar channels (or linear combinations of channels, to be more precise) into a single channel rather than trying to fit them individually. \n\n\\subsection{Regularization}\\label{sub:reg}\nAnother constraint which DEM solutions must satisfy is a physically required positivity constraint. However, the first-pass DEM inversion described in Section \\ref{sub:firstpass} can produce negative coefficients and therefore negative EM. This is particularly true when the errors in the observed intensities are significant, as the first-pass solutions exactly reproduce the input data and there is no guarantee that a set of noisy data intensities can be exactly reproduced by a purely positive DEM. 
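A minimal numerical sketch may make the first-pass construction concrete. The Gaussian response functions, temperature grid, and test DEM below are hypothetical stand-ins for real instrument responses (this is our illustration, not the paper's released code):

```python
import numpy as np

# Hypothetical Gaussian temperature response functions on a log10(T) grid
# (illustrative stand-ins for real AIA responses).
logT = np.linspace(5.5, 7.0, 151)
dlogT = logT[1] - logT[0]
centers = np.array([5.9, 6.2, 6.3, 6.5, 6.7, 6.9])
R = np.exp(-0.5 * ((logT[None, :] - centers[:, None]) / 0.15) ** 2)

# Square-normalize: the basis functions are B_j = R_j / gamma_j, with
# gamma_j^2 the squared integral of R_j.
gamma = np.sqrt(np.sum(R**2, axis=1) * dlogT)
B = R / gamma[:, None]

# A_jk = integral of B_j B_k: symmetric, unit diagonal, off-diagonals < 1.
A = B @ B.T * dlogT

# Invert via SVD pseudo-inverse with a minimum condition number, so nearly
# degenerate channel combinations are merged rather than amplified.
A_inv = np.linalg.pinv(A, rcond=1e-12)

# Narrow synthetic DEM and its noiseless intensities I_n = integral R_n * DEM.
dem_true = 1e3 * np.exp(-0.5 * ((logT - 6.15) / 0.05) ** 2)
I = R @ dem_true * dlogT

# First-pass coefficients and DEM; for inputs much narrower than the
# responses, some of the recovered EM is typically negative.
e = A_inv @ (I / gamma)
dem_first = e @ B
print("min EM of first-pass DEM:", dem_first.min())
```

The reconstruction reproduces the input intensities exactly (up to round-off), which is precisely why noisy data can force negative EM into the first-pass solution.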
To mitigate this issue, we once again seek to minimize the total squared emission measure, but we now allow the DEM to deviate from the input data at a specified $\\chi^2$ level. This is implemented by seeking a new DEM which minimizes the sum\n\\begin{equation}\n\t\\chi^2+\\lambda\\int [{\\cal E} (T)]^2dT = \\sum_j\\frac{\\Delta {I}_j^2}{\\sigma_j^2} + \\lambda\\sum_{jk}{e}_j {e}_k A_{jk},\n\\end{equation}\nwhere ${\\cal E} (T)$ is the DEM described above, and $\\lambda$ is a regularization parameter chosen to enforce the desired $\\chi^2$ threshold, $\\chi^2_0$. Using our basic DEM solution, we replace ${e}_i$ with a set of corresponding regularization corrections to the data values, $\\Delta {I}_i$, so that ${e}_j = \\sum_k A^{-1}_{jk} ({I}_k+\\Delta {I}_k)\/\\gamma_k$:\n\\begin{equation}\n\t\\chi^2+\\lambda\\int [{\\cal E} (T)]^2dT = \\sum_j\\frac{\\Delta {I}_j^2}{\\sigma_j^2} + \\lambda\\sum_{jk}({I}_j+\\Delta {I}_j)\\frac{A_{jk}^{-1}}{\\gamma_j\\gamma_k}({I}_k+\\Delta {I}_k).\n\\end{equation}\nThis nonlinear system of equations may be solved for the data corrections, $\\Delta{I}_j$, and regularization parameter $\\lambda$, satisfying the desired $\\chi^2$ threshold $\\chi_0^2$, in a variety of ways; we have found that a standard bisection search \\citep[see][for example]{nr} gives acceptable performance. The new regularized DEM is simply the first-pass inversion applied to the new corrected data values.\n\nRegularized solution of the DEM problem has recently been discussed by \\citet{hannahkontaraap2012}, but our regularization is simpler and faster owing to the basis set used for the DEM. Since DEMs constructed from the instrument response functions have only a small number of basis elements, however, they remain liable to producing negative emission in cases with sharp features. 
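For fixed $\lambda$, the functional above is quadratic in the corrections and has a closed-form minimizer, and the resulting $\chi^2$ grows monotonically with $\lambda$, so a bisection search brackets the threshold. The sketch below is our own illustrative implementation (the variable names, bracket, and explicit linear solve are assumptions, not the released code):

```python
import numpy as np

def regularized_correction(I, sigma, A_inv, gamma, chi2_0,
                           lam_lo=0.0, lam_hi=1e8, n_iter=60):
    """Find data corrections dI minimizing chi^2 + lambda * (squared EM),
    with lambda tuned by bisection so chi^2 hits the threshold chi2_0.
    Illustrative sketch; the bracket [lam_lo, lam_hi] is an assumption."""
    # M_jk = A^{-1}_{jk} / (gamma_j gamma_k): the squared-EM quadratic form.
    M = A_inv / np.outer(gamma, gamma)
    Sinv = np.diag(1.0 / sigma**2)

    def chi2_of(lam):
        # Closed-form minimizer of the quadratic functional for fixed lambda:
        # (Sigma^-1 + lam * M) dI = -lam * M I.
        dI = -lam * np.linalg.solve(Sinv + lam * M, M @ I)
        return np.sum((dI / sigma) ** 2), dI

    for _ in range(n_iter):
        lam = 0.5 * (lam_lo + lam_hi)
        chi2, dI = chi2_of(lam)
        if chi2 < chi2_0:
            lam_lo = lam          # too weak: chi^2 below threshold
        else:
            lam_hi = lam          # too strong: back off
    return I + dI                 # corrected data for the first-pass inversion
```

Because $\chi^2(\lambda)$ is monotone, a few dozen bisection steps pin down $\lambda$ to far better precision than the $\chi^2$ threshold requires.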
The solution to this problem is discussed next.\n\n\\subsection{Enforcing Non-Negativity}\nWe remove the remaining negative emission via the following iterative process: \n\\begin{enumerate}\n\t\\item Zero the negative EM in the current DEM, ${\\cal E}^{(n)}$, to create a new DEM, ${\\cal E}_+^{(n)}$. At the zeroth iteration, this is the regularized DEM from Section \\ref{sub:reg}: ${\\cal E}^{(0)}=\\sum_j {e}_j B_j(T)=\\sum_{jk} B_j(T) A^{-1}_{jk} ({I}_k+\\Delta {I}_k)\/\\gamma_k$.\\label{enum:initialstep}\n\t\\item Compute the data intensities, ${I}_j^+=\\int {\\cal E}_+^{(n)}(T)R_j(T)dT$, corresponding to ${\\cal E}_+^{(n)}$.\\label{enum:iterationposintensities}\n\t\\item Take the difference between ${I}_j^+$ and the original ${I}_j$, $\\Delta {I}_j^+ = {I}_j^+-{I}_j$.\n\t\\item Compute correction DEM coefficients, $\\Delta {e}_j = \\sum_k A_{jk}^{-1}\\Delta {I}_k^+\/\\gamma_k$. $A_{jk}^{-1}$ is computed using an SVD at this step, and we enforce a relatively strong minimum condition number ($\\sim 0.1$) to reduce ringing in the correction.\\label{enum:iterationcorrectionstep}\n\t\\item Subtract the corresponding DEM corrections, $\\Delta {\\cal E}^{(n)}$, from ${\\cal E}_+^{(n)}$. By construction, this restores ${\\cal E}^{(n+1)} \\equiv {\\cal E}_+^{(n)}-\\Delta {\\cal E}^{(n)}$ to agreement with \n\tthe data, but reintroduces some negative emission.\n\t\\item Repeat from step \\ref{enum:initialstep} until ${I}_i^+$ matches ${I}_i$ to within the desired $\\chi^2$.\n\\end{enumerate}\n\nAfter we zero the negative EM in the initial DEM (i.e., at step \\ref{enum:initialstep} of the zeroth iteration), the DEM is no longer represented by a basis of instrument response functions, but rather by a continuous function (see step \\ref{enum:iterationposintensities} of the iteration). In practice, we choose to express it using an intermediate basis of closely spaced, narrow functions---usually triangle functions. 
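The enumerated steps translate almost line-for-line into code. The following is an illustrative sketch of the iteration (ours, not the released IDL implementation), with the DEM held on a discrete temperature grid:

```python
import numpy as np

def enforce_nonnegative(dem0, I, sigma, R, B, gamma, dlogT,
                        chi2_0, rcond=0.1, max_iter=200):
    """Iteratively remove negative EM from dem0 (the regularized first-pass
    DEM sampled on the temperature grid). R are the response functions,
    B = R / gamma the square-normalized basis. Illustrative sketch only."""
    # Step-4 inverse, with a strong minimum condition number (~0.1)
    # to suppress ringing in the corrections.
    A_inv = np.linalg.pinv(B @ B.T * dlogT, rcond=rcond)
    dem = dem0.copy()
    for _ in range(max_iter):
        dem_pos = np.clip(dem, 0.0, None)            # 1: zero negative EM
        I_pos = R @ dem_pos * dlogT                  # 2: intensities of clipped DEM
        chi2 = np.sum(((I_pos - I) / sigma) ** 2)
        if chi2 <= chi2_0:                           # 6: converged
            return dem_pos, chi2
        dI = I_pos - I                               # 3: intensity residual
        de = A_inv @ (dI / gamma)                    # 4: correction coefficients
        dem = dem_pos - de @ B                       # 5: restore data agreement
    dem_pos = np.clip(dem, 0.0, None)
    chi2 = np.sum(((R @ dem_pos * dlogT - I) / sigma) ** 2)
    return dem_pos, chi2
```

For broad input DEMs the loop typically exits on the first pass; narrow DEMs exercise the correction steps and may also need the stronger regularization and condition-number settings described in the text.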
\n\nTo further speed convergence of the iteration, we attempt a linear extrapolation at each step of the iteration. The extrapolation steps move the DEM vector further along the direction of the current iteration step, and are only accepted when they improve $\\chi^2$. \n\nWe find that the optimal regularization strength, and minimum condition number strength in step \\ref{enum:iterationcorrectionstep} of the iteration, vary with input DEM and data quality, so we try multiple values of these parameters, beginning with a light regularization and small minimum condition number threshold and moving to stronger regularization and larger minimum condition number threshold. Typical values of these parameters are $[0.9,0.5,0.01]$\\footnote{We define the regularization parameters to be p-values of the $\\chi^2$ distribution \\citep[see, for instance, Chapter 15 of][where they are referred to as $Q$, and $P$ refers to their complement]{nr}. With a regularization parameter of 0.9, for instance, the $\\chi^2$ of the regularized data with respect to the original data will correspond to a p-value of $0.9$.} and $[0.01,0.05,0.1]$, for the regularization strengths and minimum condition numbers, respectively. In broad terms, the operation of the algorithm can then be described as follows:\n\\begin{itemize}\n\\item Apply the light (e.g., 0.9) regularization to the data and compute the corresponding first-pass DEM.\n\\item Zero the negative emission in the first-pass DEM and compute the $\\chi^2$ of the resulting data with respect to the initial data. If $\\chi^2$ exceeds the desired threshold, attempt to iterate away the negative emission, using the smallest minimum condition threshold (e.g., 0.01), until the $\\chi^2$ threshold is reached.\n\\item If the iteration takes too long to reach the desired $\\chi^2$ threshold, retry the first two steps with the next strongest regularization strengths and minimum conditions (e.g., 0.5 and 0.05, respectively). 
Repeat until the $\\chi^2$ threshold is reached, or each pair of regularization strengths and minimum conditions have been tried.\n\\end{itemize}\n\nReaders who would like more implementation details are encouraged to examine our code, which we have made publicly available at \\url{http:\/\/solar.physics.montana.edu\/plowman\/firdems.tgz}. We also intend to submit the code to SolarSoft in the near future.\n\n\\section{Fidelity \\& Performance}\nWe have tested our DEM using a variety of example cases. The results are given in figures on the following pages, which show recovered DEMs, $\\chi^2$ agreement with the data, and computational time. Please note the following:\n\\begin{itemize}\n\\item All reported $\\chi^2$ have been reduced by dividing by the number of instrument channels being fit. \n\\item Unless otherwise noted, we use a one second exposure time for AIA, and 30 seconds for EIS. EIS pixels were assumed to be sampled to 2 square arcseconds.\n\\item AIA errors are assumed to be from read and shot noise alone, while EIS errors assume an\nadditional 10\\% error from other sources.\n\\item We use the 94, 131, 171, 193, 211, and 335 \\AA\\ AIA channels throughout. \n\\item EIS emissivities were computed assuming a density of $10^9 \\ \\mathrm{cm}^{-3}$, unless otherwise noted.\n\\item AIA response functions were computed by calling \\texttt{aia\\_get\\_response(\/temp, \/dn, \/evenorm)}. \n\\item Computation times are for a single thread running on a 3.2GHz Intel Xeon processor, in IDL. \n\\end{itemize}\n\nWherever possible, we use $\\log_{10}(T)$ as a temperature variable rather than $T$ (in Kelvin) itself. We believe this is the more natural parameter for the temperature, since we are interested in a temperature range spanning multiple orders of magnitude ($\\sim 10^5 - 10^7$ Kelvin). Similarly, our DEMs are scaled per unit $\\log_{10}(T)$, rather than per unit $T$. 
We believe this is the natural scaling when the DEMs are represented as functions of $\\log_{10}(T)$, because the area under the DEM curves is then the emission measure. Rescaling to unit $T$ may be accomplished by dividing by $T\\ln{(10)}$.\n\nWith few exceptions, we find that we are able to recover the test cases with good $\\chi^2$ (reduced $\\chi^2$ of one or two), and that the recovered DEMs are a reasonable qualitative match to the input DEMs. Typical times for AIA DEMs are approximately one millisecond, with some cases taking under 0.1 millisecond. The test cases are as follows:\n\\begin{itemize}\n\t\\item AIA inversion of Log-normal DEMs with widths of 0.2 at selected temperatures (Fig. \\ref{fig:aia_lognormaltest01_5e28}).\n\t\\item EIS inversion of Log-normal DEMs with widths of 0.2 at selected temperatures (Fig. \\ref{fig:eis_lognormaltest01_5e28}).\n\t\\item AIA inversion of Log-normal DEMs at temperatures ranging from $10^{5.5}$ to $10^{7.0}$ Kelvin, for widths of 0.2 (Fig. \\ref{fig:aia_gresp_02_5e28}) and 0.3 (Fig. \\ref{fig:aia_gresp_03_5e28}).\n\t\\item EIS inversion (using the 24 lines chosen by \\citet{warrenetal_apj_2011}) of Log-normal DEMs at temperatures ranging from $10^{5.5}$ to $10^{7.0}$ Kelvin, for widths of 0.2 (Fig. \\ref{fig:eis_gresp_02_5e28}) and 0.3 (Fig. \\ref{fig:eis_gresp_03_5e28}).\n\t\\item AIA inversion of DEMs produced by summing five Log-normal DEMs with randomly chosen centers, widths, and amplitudes (Fig. \\ref{fig:aia_multimod}).\n\t\\item EIS Active region DEM from \\cite{warrenetal_apj_2011} (Fig. \\ref{fig:warrendem_comparison}). A density of $10^{9.5} \\ \\mathrm{cm}^{-3}$ is assumed, to match their emissivities.\n\\end{itemize}\n\n\nWe find that narrow (compared with the temperature response functions in question) DEMs are the most difficult for our method to reconstruct, in terms of obtaining good $\\chi^2$ (i.e., $\\chi^2_R \\sim 1$) without negative emission. 
This can be seen by comparing Figures \\ref{fig:aia_gresp_02_5e28} and \\ref{fig:aia_gresp_03_5e28} (for AIA), or Figures \\ref{fig:eis_gresp_02_5e28} and \\ref{fig:eis_gresp_03_5e28} (for EIS). For AIA, there is somewhat more difficulty in recovering narrow DEMs at temperatures above $10^{6.5}$ Kelvin, but even in that case, acceptable $\\chi^2$ were achieved for the majority of noise realizations. Despite their relatively poor $\\chi^2$, these reconstructed DEMs are well-behaved and localized at the center of the injected DEM; the median $\\log_{10}(T)$ is recovered to within $0.1$ between $\\log_{10}(T) \\approx 5.8$ and $7.0$. The difficulty in achieving good $\\chi^2$ is absent in cases where the emission is not concentrated near a single temperature, as can be seen in Figure \\ref{fig:aia_multimod}. In all cases considered, the average time to compute a DEM was under 10 milliseconds, and less than 0.1 millisecond for some cases.\n\nIn the case of EIS, temperatures greater than $10^{6.8}$ Kelvin, above the highest temperature peak of the spectral lines used, are the most challenging. The median temperatures are accurately (i.e., $\\delta\\log_{10}(T)\\lesssim 0.1$) recovered over a range of $\\log_{10}(T)$, $5.6\\dots 6.8$. Due to the larger number of channels, the computational time is somewhat longer, at $\\sim 10$ milliseconds for a narrow log-normal distribution. \n\n\\begin{figure}[!ht]\n\t\\caption{DEM inversions of Log-normal simulated DEMs of width 0.2 using the six AIA EUV channels. 
The input DEM is shown by the solid line, while each dotted line is a DEM inversion with randomly chosen read and shot noise.}\\label{fig:aia_lognormaltest01_5e28}\n\t\\begin{center}\\includegraphics[width=\\figwidth]{aia_dem_iterative_test_plots.eps}\\end{center}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\t\\caption{Same as Fig. \\ref{fig:aia_lognormaltest01_5e28}, but for the 24 EIS lines used in \\citet{warrenetal_apj_2011}.}\\label{fig:eis_lognormaltest01_5e28}\n\t\\begin{center}\\includegraphics[width=\\figwidth]{eis_dem_iterative_test_plots.eps}\\end{center}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\t\\caption{AIA Response to Log-normal DEMs of width 0.2 and total EM $5.0 \\times 10^{28} \\textrm{cm}^{-5}$ at temperatures from 5.5 to 7.0 dex. Left: the recovered DEMs---each vertical slice of this plot is a DEM like one of the output curves of Fig. \\ref{fig:aia_lognormaltest01_5e28}, at the temperature of the corresponding x-axis value. The solid lines on the left show emission measure weighted median temperature (EMWMT). Center: $\\chi^2$ percentiles resulting from repeated MC trials of the read and shot noise, at each temperature. Right: The average time per DEM at each temperature.}\\label{fig:aia_gresp_02_5e28}\n\t\\begin{center}\\includegraphics[width=\\figwidth]{aia_gresp_width02_5e+28EM.eps}\\end{center}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\t\\caption{Same as Fig. \\ref{fig:aia_gresp_02_5e28}, but for Log-normal DEMs of width 0.3.}\\label{fig:aia_gresp_03_5e28}\n\t\\begin{center}\\includegraphics[width=\\figwidth]{aia_gresp_width03_5e+28EM.eps}\\end{center}\n\\end{figure}\n\n\n\n\n\n\n\\begin{figure}[!ht]\n\t\\caption{EIS Response to Log-normal DEMs of width 0.2 and total EM $5.0 \\times 10^{28}\\textrm{cm}^{-5}$ at temperatures from 5.5 to 7.0 dex. Solid lines on the left show emission measure weighted median temperature (EMWMT). 
The 24 spectral lines from \\citet{warrenetal_apj_2011} were used.}\\label{fig:eis_gresp_02_5e28}\n\t\\begin{center}\\includegraphics[width=\\figwidth]{eis_gresp_width02_5e+28EM.eps}\\end{center}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\t\\caption{EIS Response to Log-normal DEMs of width 0.3 and total EM $5.0 \\times 10^{28} \\textrm{cm}^{-5}$ at temperatures from 5.5 to 7.0 dex. Solid lines on the left show emission measure weighted median temperature (EMWMT). The 24 spectral lines from \\citet{warrenetal_apj_2011} were used.}\\label{fig:eis_gresp_03_5e28}\n\t\\begin{center}\\includegraphics[width=\\figwidth]{eis_gresp_width03_5e+28EM.eps}\\end{center}\n\\end{figure}\n\n\nFigure \\ref{fig:aia_multimod} compares our DEM inversions (top set) with those of the optimized \\citet{hannahkontaraap2012} (bottom set) `demmap' code. We ran their code without its positivity constraint option, finding that better $\\chi^2$ were achieved by simply zeroing the negative emission. The test DEMs are randomly generated multimodal distributions, as observed by AIA. Both codes were run in a single thread on the 3.2 GHz processor mentioned above. The results are qualitatively quite similar, although the Hannah \\& Kontar method appears to produce some spurious high-temperature emission. For these test cases, our DEM method is faster at $\\sim 10^{-4}$ seconds for most DEMs compared with $3.7\\times 10^{-3}$ seconds (without positivity constraint) for the Hannah \\& Kontar DEM. Our method also appears to produce reasonable $\\chi^2$ with zero negative emission more consistently than the Hannah \\& Kontar DEM inversions.\n\n\\begin{figure}[!ht]\n\t\\caption{AIA inversion of DEMs produced by summing five Log-normal DEMs with randomly chosen centers, widths, and amplitudes. Center locations are uniformly distributed between $10^{5.75}$ and $10^7$, widths between $0.1$ and $0.3$, and total EM between $5\\times 10^{27} \\textrm{cm}^{-5}$ and $5\\times 10^{28} \\textrm{cm}^{-5}$. 
Top: Results from our fast DEM Method. Bottom: Same for \\citet{hannahkontaraap2012} regularized DEM method, without positivity constraint.}\\label{fig:aia_multimod}\n\t\\begin{center}\\includegraphics[width=\\figwidth]{aia_dem_multimod_test_plots.eps}\\end{center}\n\n\t\\hrule\n\t\\begin{center}\\includegraphics[width=\\figwidth]{aia_dem_multimod_test_plots_hannah.eps}\\end{center}\n\\end{figure}\n\n\n\nFigure \\ref{fig:warrendem_comparison} compares our DEM results with the PINTofALE MCMC results reported in Figure 4 of \\citet{warrenetal_apj_2011}. These results include all 24 EIS lines used by Warren et al, with their factor of 1.7 tweak to the Mg intensities, along with the XRT Open\/Al-thick filter. Our DEM curve is considerably different than that of \\citet{warrenetal_apj_2011}. In particular, we do not find a peak in the DEM at $\\log_{10}(T)\\approx 6.6$. Despite this difference, we obtain a very reasonable $\\chi^2$ of 1.16, which is fully consistent with the data. This suggests that the \\citet{warrenetal_apj_2011} reconstruction may not represent the full range of DEMs consistent with their data.\n\n\n\\begin{figure}[!ht]\n\t\\caption{Comparison of our DEM results (top) with \\citet{warrenetal_apj_2011} (Figure 4) MCMC DEM results (bottom). We include their factor of 1.7 tweak to the coronal Mg abundance. The DEMs matched the data with $\\chi^2$ of order unity.}\\label{fig:warrendem_comparison}\t\\begin{center}\\includegraphics[width=\\figwidth]{eis_dem_warren_comparison.eps}\\end{center}\n\t\\begin{center}\\includegraphics[width=\\figwidth]{warren_dem.eps}\\end{center}\n\\end{figure}\n\n\n\\subsection{Example Solar DEM Analysis}\n\nFinally, we show example DEM analysis of solar data. Figure \\ref{fig:example_ardem} shows emission weighted median temperature (EMWMT) and total emission measure for an active region observed by EIS and AIA. The data cover the area of the large raster taken by EIS at around 01:30 on April 19, 2011. 
They are centered $\\sim 350$ arcseconds above disk center, with an approximately $230\\times510$ arcsecond field of view. The EIS data used fits to a set of five iron lines: Fe IX 188.497, Fe X 184.537, Fe XII 195.119, Fe XV 284.163, and Fe XVI 262.976. \n\nThe average time per AIA DEM was approximately $0.13$ milliseconds, and we achieve reasonable $\\chi^2$ for over 95\\% of the AIA DEMs (the 95th percentile $\\chi^2$ for EIS is 4.7). The active region plots in Figure \\ref{fig:example_ardem} are qualitatively quite similar, suggesting that the results give useful insight into the underlying solar temperature distribution. \n\nWe also show DEMs along the loop segment indicated by the dashed line in Figure \\ref{fig:example_ardem}. Figure \\ref{fig:example_loopdem} shows DEMs along the length of the loop segment (solid lines in Figure \\ref{fig:aia_extracted_loop}), as well as equivalent sets of DEMs offset to either side of the loop (dotted lines in Figure \\ref{fig:aia_extracted_loop}). Once again, the DEMs found by AIA and by EIS are reasonably consistent, and both EIS and AIA inversions achieved good $\\chi^2$ for over 95\\% of the points along the loop. \n\nThe DEMs shown along the loop axis are not made from background-subtracted data, and they show little difference from the offset DEMs, which indicates that the loop emission has been swamped out by the bright background. In an upcoming work, we compute background-subtracted DEMs for this loop and compare its temperature and density profile to a set of analytic strand heating models.\n\n\\begin{figure}[!ht]\n\t\\begin{center}\\includegraphics[width=\\figwidth]{eis_aia_ardem.eps}\\end{center}\n\t\\caption{Left: EMWMT (hue) and Total EM (intensity) from AIA DEM inversion of Active region (covered by EIS fov on April 19, 2011). Middle: Same for EIS. 
Right: color scale for middle and left plots.}\\label{fig:example_ardem}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\t\\begin{center}\\includegraphics[width=\\figwidth]{eis_aia_loopdem.eps}\\end{center}\n\t\\caption{Top: Loop DEM from AIA inversion of Active region (Center), offset 5 arcseconds to the left (left), 5 arcseconds to the right (right). Loop area indicated by dashed line in Figure \\ref{fig:example_ardem}, or in more detail in Figure \\ref{fig:aia_extracted_loop}. Bottom: Same for EIS.}\\label{fig:example_loopdem}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\t\\begin{center}\\includegraphics[width=\\figwidth]{aia_extracted_loop.eps}\\end{center}\n\t\\caption{Detailed, straightened AIA images of loop outlined in Figure \\ref{fig:example_ardem}, for comparison with Figure \\ref{fig:example_loopdem}. Loop trace indicated by solid line, while $\\pm 5$ arcsecond offset background traces are indicated by dotted lines.}\\label{fig:aia_extracted_loop}\n\\end{figure}\n\n\\section{Conclusions}\n\nWe have demonstrated a method for fast reconstruction of DEM distributions using coronal data from instruments such as EIS and AIA. This DEM method achieves reduced chi-squared of order unity with no negative emission in all but a few test cases. The most difficult test cases are narrow DEMs at high ($>10^{6.5}$ Kelvin) temperatures for AIA, and temperature regions with little spectral coverage for EIS. Even for the high temperature AIA cases, we achieve reasonable $\\chi^2$ for the majority ($\\sim 75\\%$) of noise realizations. Qualitatively, the reconstructed DEMs match the input DEMs well, although the ability to recover finer details of the input DEMs is inherently limited. The data, particularly for AIA, do not constrain the fine details of the DEMs. 
When interpreting DEMs, great care must be exercised to determine whether or not the features of interest are genuine aspects of the coronal temperature distribution.\n\nOur DEM method is fast enough to study the dynamics of the coronal temperature distributions in real time. For AIA data, the worst cases considered take under $10$ milliseconds, and some test cases execute in less than $0.1$ milliseconds. For the whole disk, at full AIA resolution using all six coronal EUV channels, we expect the computation to take less than $\\sim 1$ hour with the code as currently implemented. We plan to convert the computationally intensive parts of the code to C and rewrite them to take advantage of multithreading, which should offer a factor of $\\sim 100$ performance gain on an eight-core workstation. This would reduce the time for a set of full-resolution, full-disk AIA DEMs to under $\\sim 1$ minute, and achieve the objective of matching the AIA observing rate in real time. The software is available online at \\url{http:\/\/solar.physics.montana.edu\/plowman\/firdems.tgz}, and will be submitted to SolarSoft in the near future.\n\nWe also applied our DEM reconstruction to solar data, an active region observed by EIS and AIA on April 19, 2011. We found no difficulty achieving reduced $\\chi^2$ of order unity with no negative emission, and the average time per DEM was approximately $0.13$ milliseconds, or $44$ seconds for the entire active region at full AIA resolution. We find a relatively hot active region core with median temperature of around three million Kelvin, surrounded by cooler emission with median temperature of around two million Kelvin. We also plotted DEMs as a function of length along a coronal loop segment visible in the active region. 
These DEMs are dominated by a relatively bright background, making it unclear what emission is associated with the loop; background subtraction of the data will be necessary to isolate the loop emission from its surroundings.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\\lettrine{L}{arge-scale} image retrieval techniques have been developing and improving greatly for more than a decade. Many of the current state-of-the-art approaches~\\cite{Nister-CVPR06,Chum-ICCV07,Jegou-IJCV10} are based on the bag-of-words (BOW) approach originally proposed by Sivic and Zisserman~\\cite{Sivic-ICCV03}. Another popular image representation arises from aggregating local descriptors like Fisher kernel~\\cite{Perronnin-CVPR10} and Vector of Locally Aggregated Descriptors (VLAD)~\\cite{Jegou-CVPR10}.\n\nThe BOW vectors are high dimensional (up to 64 million dimensions in \\cite{Mikulik-IJCV12}), so, due to the high memory and computational requirements, search is limited to a several million images on a single machine. There are more scalable approaches that tackle this problem by generating compact image representations~\\cite{Torralba-CVPR08,Perronnin-CVPR10,Jegou-CVPR10}, where the image is described by a short vector that can be additionally compressed into compact codes using binarization~\\cite{Torralba-CVPR08,Weiss-NIPS09}, product quantization~\\cite{Jegou-PAMI11}, or recently proposed additive quantization techniques~\\cite{Babenko-CVPR14}.\nIn this paper we propose and experimentally evaluate simple techniques that additionally boost retrieval performance, but at the same time preserve low memory and computational costs.\n\nShort vector image representations are often generated using the principal component analysis (PCA)~\\cite{Bishop06} technique to perform the dimensionality reduction over high-dimensional vectors. Jegou and Chum~\\cite{Jegou-ECCV12} study the effects of PCA on BOW representations. 
They show that both steps of the PCA procedure, i.e., centering and selection of a de-correlated (orthogonal) basis minimizing the dimensionality reduction error, improve retrieval performance. Centering (mean subtraction) of BOW vectors provides a boost in performance by adding a higher value to the negative evidence: given two BOW vectors, a visual word jointly missing in both vectors provides useful information for the similarity measure~\\cite{Jegou-ECCV12}.\nAdditionally, they advocate joint dimensionality reduction with multiple vocabularies to reduce the quantization artifacts underlying BOW and VLAD.\nThese vocabularies are created by using different initializations for the k-means algorithm, which may produce relatively highly correlated vocabularies.\n\nIn this paper, we propose to reduce the redundancy of the joint vocabulary representation (before the joint dimensionality reduction) by varying parameters of the local feature descriptors prior to the k-means quantization.\nIn particular, we propose: (i) different sizes of measurement regions for local description, (ii) different power-law normalizations of local feature descriptors, and (iii) different linear projections (PCA learned) to reduce the dimensionality of local descriptors. In this way, the created vocabularies will be more complementary, and joint dimensionality reduction of concatenated BOW vectors originating from several vocabularies will carry more information. Even though the proposed approaches are simple, we show that they provide significant boosts to retrieval performance with no memory or computational overhead at query time.\n\n\n\\paragraph{Related work.}%\nThis paper can be seen as an extension of~\\cite{Jegou-ECCV12}, details of which are given later in Section~\\ref{sec:baseline}. A number of papers report results with short descriptors obtained by PCA dimensionality reduction. 
In~\\cite{Jegou-PAMI12} and~\\cite{Perronnin-CVPR10}, aggregated descriptors (VLAD and Fisher vector, respectively) are used, followed by PCA to produce low dimensional image descriptors. In a paper about VLAD~\\cite{Arandjelovic-CVPR13}, the authors propose a method for adaptation of the vocabulary built on an independent dataset (adapt) and an intra-normalization (innorm) method that $L_2$~normalizes all VLAD components independently, which suppresses the burstiness effect~\\cite{Jegou-CVPR09}. In~\\cite{Jegou-CVPR14}, a `democratic' weighted aggregation method for burstiness suppression is introduced.\nIn this paper, we compare results of all the aforementioned methods using low dimensional descriptors $D'=128$.\\\\\n\nThe rest of the paper is organized as follows: Section~\\ref{sec:background_and_baseline} gives a brief overview of several methods: bag-of-words\\linebreak (BOW), efficient PCA dimensionality reduction of high dimensional vectors, and baseline retrieval with multiple vocabularies. The datasets and evaluation protocols used are established in Section~\\ref{sec:datasets}. Section~\\ref{sec:our_approach} introduces novel methods for joint dimensionality reduction of multiple vocabularies and presents extensive experimental evaluations. Main conclusions are given in Section~\\ref{sec:conclusion}.\n\n\\section{Background and baseline}\n\\label{sec:background_and_baseline}\n\nThis section gives a short overview of the background of bag-of-words based image retrieval and the method used in~\\cite{Jegou-ECCV12}. Key steps and ideas are discussed in greater detail to aid understanding of the paper.\n\n\\subsection{Bag-of-words (BOW) image representation}\n\\label{sec:bow}\n\nThe first efficient image retrieval system based on the BOW image representation was proposed by Sivic and Zisserman~\\cite{Sivic-ICCV03}. They use local descriptors extracted in an image in order to construct a high-dimensional global descriptor. 
This procedure follows four basic steps:\n\n\\begin{enumerate}\n\n\\item For each image in the dataset, regions of interest are detected~\\cite{Mikolajczyk-IJCV04,Matas-BMVC02} and described by a $d$-dimensional invariant descriptor. In this work we use the multi-scale Hessian-Affine~\\cite{Perdoch-CVPR09} and MSER~\\cite{Matas-BMVC02} detectors, followed by SIFT~\\cite{Lowe-IJCV04} or RootSIFT~\\cite{Arandjelovic-CVPR12} descriptors. The rotation of the descriptor is either determined by the detected dominant orientation~\\cite{Lowe-IJCV04}, or by the gravity vector assumption~\\cite{Perdoch-CVPR09}.\nThe descriptors are extracted from different sizes of measurement regions~\\cite{Matas-BMVC02}, as described in detail in Section \\ref{sec:our_approach}. \n\n\\item Descriptors extracted from the training (independent) dataset (see Section \\ref{sec:datasets}) are clustered into $k$ clusters using the k-means algorithm, which creates a visual vocabulary. \n\n\\item For each image in the dataset, a histogram of occurrences of visual words is computed. Different weighting schemes can be used, the most popular being inverse document frequency (\\textit{idf}), which generates a $D$-dimensional BOW vector ($D = k$).\n\n\\item All resulting vectors are $L_2$~normalized, as suggested in~\\cite{Sivic-ICCV03}, producing the final global image representations used for searching.\n\n\\end{enumerate}\n\n\n\\subsection{Efficient PCA of high dimensional vectors}\n\\label{sec:pca}\n\nIn most cases, BOW image representations have a very high number of dimensions ($D$ can take values up to 64 million~\\cite{Mikulik-IJCV12}). In these cases the standard PCA method (reducing $D$ to $D'$), which computes the full covariance matrix, is not efficient. The dual gram method (see Paragraph 12.1.4 in~\\cite{Bishop06}) can be used to learn the first $D'$ eigenvectors and eigenvalues. 
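A minimal numerical sketch of this dual gram trick is given below; it uses a dense eigendecomposition in place of the iterative solver, and the function name is illustrative:

```python
import numpy as np

def dual_gram_pca(Y, d_out):
    """Sketch of the dual gram trick: Y is D x n (columns are centered
    training vectors, with n << D). Eigenvectors of the covariance
    C = Y Y^T / n are recovered from the small n x n gram matrix Y^T Y."""
    n = Y.shape[1]
    G = Y.T @ Y                           # n x n instead of D x D
    evals, V = np.linalg.eigh(G)          # eigenvalues in ascending order
    idx = np.argsort(evals)[::-1][:d_out] # keep the d_out largest
    evals, V = evals[idx], V[:, idx]
    U = Y @ V                             # map back to the D-dim data space
    U /= np.linalg.norm(U, axis=0)        # unit-norm eigenvectors of C
    return evals / n, U
```

The recovered eigenvectors are exact: if $Gv = \lambda v$, then $C(Yv) = Y(Y^TY)v = \lambda (Yv)$, so $Yv$ (normalized) is an eigenvector of the covariance matrix.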
Instead of computing the $D\\times D$ covariance matrix $\\boldsymbol{C}$, the dual gram method computes the $n\\times n$ matrix $\\boldsymbol{Y}^{T}\\boldsymbol{Y}$, where $\\boldsymbol{Y}$ is the matrix of vectors used for learning, and $n$ is the number of vectors in $\\boldsymbol{Y}$. Eigenvalue decomposition is performed using the Arnoldi algorithm, which iteratively computes the $D'$ desired eigenvectors corresponding to the largest eigenvalues. This method is more efficient than the standard covariance matrix method if the number of vectors $n$ in the training set is smaller than the number of vector dimensions $D$, which is usually the case in the BOW approach.\n\nJegou and Chum~\\cite{Jegou-ECCV12} analyze the effects of PCA dimensionality reduction on BOW and VLAD vectors. They show that even though PCA successfully deals with the problem of negative evidence (higher importance of jointly missing visual words in compared BOW vectors), it ignores the problem of co-occurrences (co-occurrences lead to over-counting of some visual patterns when comparing two image vector representations, see~\\cite{Chum-CVPR10}). In order to tackle this problem, they propose performing a whitening operation, similar to the one done in independent component analysis~\\cite{Comon94} (implicitly performed by the Mahalanobis distance), jointly with the PCA. In our experiments we will use dimensionality reduction from $D$ to $D'$ components, as done in~\\cite{Jegou-ECCV12}:\n\n\\begin{enumerate}\n\n\\item Every image vector $v = (v_1,\\ldots, v_D)$ is post-processed using power-law normalization~\\cite{Perronnin-CVPR10}: $v_i := |v_i|^{\\beta}\\times \\text{sign}(v_i)$, with $0\\leq \\beta < 1$ a fixed constant. Vector $v$ is $L_2$~normalized after processing. It has been shown~\\cite{Jegou-PAMI12} that this simple procedure reduces the impact of multiple matches and visual bursts~\\cite{Jegou-CVPR09}. 
In all our experiments $\\beta = 0.5$, denoted as signed square rooting (SSR).\n\n\\item The first $D'$ eigenvectors of the matrix $\\boldsymbol{C}$, corresponding to the largest $D'$ eigenvalues $\\lambda_{1},\\ldots,\\lambda_{D'}$, are learned using the power-law normalized training vectors $\\boldsymbol{Y} = [Y_1|\\ldots |Y_n]$.\n\n\\item Every power-law normalized image descriptor $X$ used for searching is PCA-projected and truncated, and at the same time whitened and re-normalized to a new vector $\\hat{X}$ that is the final short vector representation with dimensionality $D'$:\n\\begin{equation}\n\\label{eq:pca_white}\n\\hat{X} = \\frac\n{\\text{ diag}(\\lambda_{1}^{-\\frac{1}{2}},\\ldots,\\lambda_{D'}^{-\\frac{1}{2}})\\boldsymbol{P}^TX}\n{\\left \\| \\text{ diag}(\\lambda_{1}^{-\\frac{1}{2}},\\ldots,\\lambda_{D'}^{-\\frac{1}{2}})\\boldsymbol{P}^TX \\right \\|},\n\\end{equation}\nwhere the $D\\times D'$ matrix $\\boldsymbol{P}$ is formed by the $D'$ largest eigenvectors calculated in the previous step. Comparing two vectors after this dimensionality reduction with the Euclidean distance is now similar to using a Mahalanobis distance. It has been argued that the re-normalization step is critical for a better comparison metric, see~\\cite{Jegou-ECCV12}.\n\n\\end{enumerate}\n\nIn order to compare results in a fair manner, we use $D'=128$ dimensions in all our experiments, following the trend of previous research in short image representations.\n\n\\subsection{The baseline method}\n\\label{sec:baseline}\n\n\\begin{figure*}\\centering\n\\includegraphics[trim = 20mm 0mm 20mm 0mm, width=\\linewidth]{fig_related.pdf}\n\n\\caption{\\textbf{Baseline methods:} Left plots show mAP performance on Oxford5k (upper plot) and Holidays (lower plot) after straightforward concatenation of BOW vectors (no PCA dimensionality reduction performed) generated using multiple vocabularies. Note that the dimensionality of BOW grows linearly with every new concatenation. 
Right plots present mAP performance on Oxford5k and Holidays after joint PCA dimensionality reduction of concatenated BOW representations to a $D'=128$ dimensional vector.}\n\n\\label{fig:related}\n\\end{figure*}\n\nThis paper builds upon the work of~\\cite{Jegou-ECCV12}, which is briefly reviewed in this section.\nIn~\\cite{Jegou-ECCV12}, a joint dimensionality reduction of multiple vocabularies is proposed. Image representation vectors are separately SSR normalized for each vocabulary, concatenated, and then jointly PCA-reduced and whitened as explained in Section~\\ref{sec:pca}. The \\textit{idf} term is ignored, as its influence is noted to be limited when multiple vocabularies are used. Results of this method are shown in Figure~\\ref{fig:related} (right plots). Compared to the straightforward concatenation (Figure~\\ref{fig:related}, left plots), where the results do not noticeably improve after adding multiple vocabularies, an improvement in performance is achieved even while keeping memory requirements low via PCA dimensionality reduction. However, for some vocabularies (e.g. $k=2$k), performance drops after only a few vocabularies are used.\n\n\n\n\n\n\\section{Datasets and evaluation}\n\\label{sec:datasets}\n\nOur methods are evaluated on datasets~\\cite{Philbin-CVPR07,Philbin-CVPR08,Jegou-ECCV08} that are widely used in the image retrieval area. We also compare our results with other approaches evaluated on the same datasets.\n\n\\paragraph{Oxford5k~\\cite{Philbin-CVPR07} and Paris6k~\\cite{Philbin-CVPR08}:}%\nBoth datasets contain a set of images (5062 for Oxford and 6300 for Paris) depicting 11 different landmarks together with distractors, downloaded from Flickr by searching for tags of popular landmarks. For each of the 11 landmarks there are 5 different query regions defined by a bounding box, meaning that there are 55 different query regions per dataset. 
The performance is reported as mean average precision (mAP), see~\\cite{Philbin-CVPR07} for more details. In our experiments we use Paris6k as a training dataset in order to learn the visual vocabulary and the projections for PCA dimensionality reduction. When evaluating our methods on Oxford5k, we always use the data learned on Paris6k.\n\n\\paragraph{Oxford105k~\\cite{Philbin-CVPR07}:}%\nThis dataset is the combination of the Oxford5k dataset and 99782 negative images crawled from Flickr using the 145 most popular tags. This dataset is used to evaluate the search performance (reported as mAP) on a large scale. Paris6k is used as a training dataset for Oxford105k.\n\n\\paragraph{Holidays~\\cite{Jegou-ECCV08}:}%\nThis dataset is a selection of personal holiday photos (1491 images) from INRIA, including a large variety of scene types (natural, man-made, water and fire effects, etc.). A sample of 500 images from the whole dataset is selected for query purposes~\\cite{Jegou-ECCV08}. The performance is reported as mAP, as for Oxford5k and Oxford105k, after excluding the query image from the results. As a training dataset for vocabulary construction and image-representation-level PCA learning we use the Paris6k dataset in all experiments. \n\n\n\n\n\\section{Sources of multiple codebooks}\n\\label{sec:our_approach}\n\nWe propose combining multiple vocabularies that differ not only in the random initialization of the clustering procedure, but also in the data used for clustering. The feature data are varied in the process of local feature description.\nThis process does not attempt to synthesize appearance deformations, but rather varies certain design choices in the pipeline of feature description, such as the relative size of the measurement region. \nVocabularies created in this manner will contain less redundancy. 
This is combined with joint PCA dimensionality reduction (as described in Sections~\\ref{sec:pca} and \\ref{sec:baseline}) in order to produce short-vector image representations that are used for searching for the most similar images in the dataset.\n\nQuantization complexity for all vocabularies used in the experiments is given in Table~\\ref{tab:complexity}. As stated in~\\cite{Jegou-ECCV12}, the time necessary to quantize 2000 local descriptors of a query image, for four $k=8$k vocabularies, on 12 cores is 0.45s, using a multi-threaded exhaustive search implementation. Timings are proportional to the vocabulary size, i.e., to the number in the right column of Table~\\ref{tab:complexity}. \n\n\\setlength{\\tabcolsep}{10pt}\n\\begin{table}\n\\centering\n\n\\caption{\\textbf{Complexity of vocabularies used throughout the experiments:} Complexity is given as the number of vector comparisons per local descriptor during the construction of the final BOW image representation.}\n\n\\begin{tabular}{lr}\n\\\\\n\\hline\n\\textbf{Vocabulary} & \\textbf{Complexity}\\\\\n\\hline\n\\hline\n8k & 8192\\\\\n4k & 4096\\\\\n2k & 2048\\\\\n1k & 1024\\\\\n\\hline\n4k+2k+\\ldots+128 & 8064\\\\\n2k+1k+\\ldots+128 & 3968\\\\\n1k+512+256+128 & 1920\\\\\n512+256+128 & 896\\\\\n\\hline\n\\end{tabular}\n\n\\label{tab:complexity}\n\n\\end{table}\n\n\\paragraph{Multiple measurement regions.}%\n\nAn affine invariant descriptor of an affine covariant region can be extracted from any affine covariantly constructed measurement region~\\cite{Matas-BMVC02}. An example of a measurement region that is, in general, of a different shape than the detected region is an ellipse fitted to the region, as proposed in~\\cite{Tuytelaars-BMVC00} and also used for MSERs~\\cite{Matas-BMVC02}. An important parameter is the relative scale of the measurement region with respect to the scale of the detected region. Since the output of the detector is designed to be repeatable, it is usually not discriminative. 
To increase the discriminability of the descriptor, it is commonly extracted from an area larger than the detected region. In the case of~\\cite{Perdoch-CVPR09}, the relative change in the radius is $r=3\\sqrt{3}$. The larger the region, the higher the discriminability of the descriptor, as long as the measurement region covers a close-to-planar surface. On the other hand, larger image patches have a higher chance of hitting depth discontinuities and thus being corrupted. An example of multiple measurement regions is shown in Figure~\\ref{fig:multiMR}. To make the best of this trade-off, we propose to construct multiple vocabularies over descriptors extracted at multiple relative scales of the measurement regions. Including lower scales mitigates the disadvantages of large measurement regions, while joint dimensionality reduction eliminates the dependencies between the representations. \n\n\\begin{figure*} \\centering\n\n\\begin{minipage}[c]{0.25\\linewidth} \\centering\n\\includegraphics[height=3cm]{fig_mes_reg_graff\/graffAmr.png}\n\\end{minipage}\n\\begin{minipage}[c]{0.74\\linewidth} \\centering\n\\setlength{\\tabcolsep}{2pt}\n\\begin{tabular}{ccccc}\n\\includegraphics[height=2.2cm]{fig_mes_reg_graff\/graffA_0_50.png} &\n\\includegraphics[height=2.2cm]{fig_mes_reg_graff\/graffA_0_75.png} &\n\\includegraphics[height=2.2cm]{fig_mes_reg_graff\/graffA_1_00.png} &\n\\includegraphics[height=2.2cm]{fig_mes_reg_graff\/graffA_1_25.png} &\n\\includegraphics[height=2.2cm]{fig_mes_reg_graff\/graffA_1_50.png}\n\\\\\n$0.5{\\times}r$ &\n$0.75{\\times}r$ &\n$1{\\times}r$ &\n$1.25{\\times}r$ &\n$1.5{\\times}r$\n\\end{tabular}\n\\end{minipage}\n\n\\vspace{5pt}\n\n\\begin{minipage}[c]{0.25\\linewidth} \\centering\n\\includegraphics[height=3cm]{fig_mes_reg_graff\/graffBmr.png}\n\\end{minipage}\n\\begin{minipage}[c]{0.74\\linewidth} \\centering\n\\setlength{\\tabcolsep}{2pt}\n\\begin{tabular}{ccccc}\n\\includegraphics[height=2.2cm]{fig_mes_reg_graff\/graffB_0_50.png} 
&\n\\includegraphics[height=2.2cm]{fig_mes_reg_graff\/graffB_0_75.png} &\n\\includegraphics[height=2.2cm]{fig_mes_reg_graff\/graffB_1_00.png} &\n\\includegraphics[height=2.2cm]{fig_mes_reg_graff\/graffB_1_25.png} &\n\\includegraphics[height=2.2cm]{fig_mes_reg_graff\/graffB_1_50.png}\n\\\\\n$0.5{\\times}r$ &\n$0.75{\\times}r$ &\n$1{\\times}r$ &\n$1.25{\\times}r$ &\n$1.5{\\times}r$\n\\end{tabular}\n\\end{minipage}\n\n\\vspace{2mm}\n\\caption{\\textbf{Multiple measurement regions (mMeasReg):} A corresponding feature is detected in two images (left). Multiple measurement regions for a single detected feature are shown in each row. The normalized patches (right) show the different image content described by the respective descriptor.} \\label{fig:multiMR}\n\n\\end{figure*}\n\nWe consider using different sizes of measurement regions: $0.5\\times r,\\: 0.75\\times r,\\: 1\\times r,\\: 1.25\\times r,\\: 1.5\\times r$; creating slightly different SIFT descriptors used to learn each vocabulary. The implementation is very simple, and during the online stage the computation has to be done only for the features from the query image region. Though simple, this method provides a significant improvement even when concatenating vocabularies of small sizes (i.e. $k=2\\text{k}$ and $k=1\\text{k}$), see Figure~\\ref{fig:mes_reg} (left plot). We also explore the use of vocabularies with different sizes. All BOW vectors in this case are weighted proportionally to the logarithm of their vocabulary size~\\cite{Jegou-ECCV12}. In each step we concatenate a new bundle of vocabularies with multiple sizes, calculated with a different measurement region. We notice an improvement when using multiple vocabulary sizes as well, see Figure~\\ref{fig:mes_reg} (right plot). 
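The log-weighted concatenation of BOW vectors from vocabularies of different sizes can be sketched as follows (a simplified illustration; the function name is ours):

```python
import numpy as np

def concat_bow(bows):
    """bows: list of (k, v) pairs, where v is the L2-normalized BOW vector
    of a vocabulary with k visual words. Each block is weighted by log(k)
    before the joint dimensionality reduction, and the concatenation is
    re-normalized."""
    z = np.concatenate([np.log(k) * v for k, v in bows])
    return z / np.linalg.norm(z)
```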
For the presentation of results in both plots of Figure~\\ref{fig:mes_reg}, in every step we add a different vocabulary created on SIFT vectors with measurement regions in a predefined order: $0.5\\times r,\\: 0.75\\times r,\\: 1\\times r,\\: 1.25\\times r,\\: 1.5\\times r$. This approach is denoted as mMeasReg.\n\n\\begin{figure*}\\centering\n\\includegraphics[trim = 20mm 0mm 20mm 0mm, width=\\linewidth]{fig_mes_reg.pdf}\n\n\\caption{\\textbf{Multiple measurement regions (mMeasReg):} mAP performance improvement on Oxford5k after PCA reduction to $D'=128$ of concatenated BOW vectors produced on vocabularies created using SIFT descriptors with different measurement regions: $0.5{\\times}r,\\: 0.75{\\times}r,\\: 1{\\times}r,\\: 1.25{\\times}r,\\: 1.5{\\times}r$.}\n\n\\label{fig:mes_reg}\n\\end{figure*}\n\n\\paragraph{Multiple power-law normalized SIFT descriptors.}%\nSIFT descriptors~\\cite{Lowe-IJCV04} have long been the popular choice in most image retrieval systems. Arandjelovic \\etal~\\cite{Arandjelovic-CVPR12} show that using a Hellinger kernel instead of the standard Euclidean distance to measure the similarity between SIFT descriptors leads to a noticeable performance boost in retrieval systems. The kernel is implemented by simply square rooting every component of the SIFT descriptor. Using the Euclidean distance on these new RootSIFT descriptors gives the same result as using the Hellinger kernel on the original SIFT descriptors. \nIn general, a power-law normalization~\\cite{Perronnin-CVPR10} with any power $0 \\leq \\beta \\leq 1$ can be applied to the descriptors ($\\beta = 0.5$ resulting in RootSIFT~\\cite{Arandjelovic-CVPR12}). Voronoi cells constructed in power-law normalized descriptor spaces can be seen as non-linear hyper-surfaces separating the features in the original (SIFT) descriptor space. Concatenation of such feature space partitionings reduces the redundant information. 
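A minimal sketch of this normalization for $\beta = 0.5$ (RootSIFT): the non-negative descriptor is $L_1$-normalized and square-rooted component-wise, after which the Euclidean distance between two descriptors reproduces the Hellinger kernel, $\|x-y\|^2 = 2 - 2\sum_i \sqrt{p_i q_i}$:

```python
import numpy as np

def root_sift(desc, beta=0.5):
    """Power-law normalization of a non-negative SIFT vector; beta = 0.5
    (element-wise square root after L1 normalization) yields RootSIFT."""
    d = np.asarray(desc, dtype=float)
    d = d / d.sum()          # L1 normalization (SIFT entries are non-negative)
    return d ** beta         # beta = 0.5 -> element-wise square root
```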
\n\nThere is no additional memory required and the change can be done on-the-fly with virtually no additional computational cost, using a simple power operation. We consider building four different vocabularies using SIFT, and SIFT with every component raised to the power of 0.4, 0.5, and 0.6 (denoted as $\\text{SIFT}^{0.4}$, $\\text{SIFT}^{0.5}$, $\\text{SIFT}^{0.6}$, respectively). Concatenation is done on single vocabularies (Figure~\\ref{fig:SIFTs}, left plot) and on a bundle of vocabularies with different sizes (Figure~\\ref{fig:SIFTs}, right plot). Adding all SIFT modifications to the process of vocabulary creation achieves a noticeable improvement in retrieval performance for all vocabulary sizes. We denote this method as mRootSIFT.\n\nCombining vocabularies of different SIFT exponents improves over combining different vocabularies of a single SIFT exponent. For example, for 4~$\\times$ 2k vocabularies, the mAP on Oxford5k is $46.5$ for 4~$\\times$ $\\text{SIFT}^{0.5}$, and $47.7$ (Figure~\\ref{fig:SIFTs} left) for the exponent combination.\n\n\\begin{figure*}[t!]\\centering\n\\includegraphics[trim = 20mm 0mm 20mm 0mm, width=\\linewidth]{fig_SIFTs.pdf}\n\n\\caption{\\textbf{Multiple power-law normalized SIFT descriptors (mRootSIFT):} mAP performance improvement on Oxford5k after PCA reduction to $D'=128$ of concatenated BOW vectors produced on vocabularies created using multiple local feature descriptors: SIFT, $\\text{SIFT}^{0.4}$, $\\text{SIFT}^{0.5}$, $\\text{SIFT}^{0.6}$.}\n\n\\label{fig:SIFTs}\n\\end{figure*}\n\n\n\\paragraph{Multiple linear projections of SIFT descriptors.}%\nIn locality-sensitive hashing, (random) linear projections are commonly used to reduce the dimensionality of the space while preserving locality. The idea pursued in this part of the paper is to use linear projections on the feature descriptors (SIFTs) before the vocabulary construction via k-means. 
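A sketch of such a projection step (illustrative names; here the projection is learned by PCA on an independent set of descriptors and applied before k-means quantization):

```python
import numpy as np

def learn_projection(train_desc, d_out):
    """Learn a PCA projection (mean + top right singular vectors) on an
    n x d matrix of training descriptors (d = 128 for SIFT)."""
    mu = train_desc.mean(axis=0)
    _, _, Vt = np.linalg.svd(train_desc - mu, full_matrices=False)
    return mu, Vt[:d_out].T              # d x d_out projection matrix

def project(desc, mu, P):
    """Project descriptors before building the vocabulary with k-means."""
    return (desc - mu) @ P
```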
However, random projections do not reflect the structure of the descriptors, resulting in noisy descriptor space partitionings. We propose to use PCA-learned linear projections of SIFT descriptors, learned on different training sets or subsets. The projections learned this way account for the statistics given by the training sets and hence produce meaningful distances, while inserting different biases into the vocabulary construction.\n\n\\begin{figure*}[t!]\\centering\n\\includegraphics[trim = 20mm 0mm 20mm 0mm, width=\\linewidth]{fig_PCA.pdf}\n\n\\caption{\\textbf{Multiple linear projections of SIFT descriptors (mPCA-SIFT):} mAP performance improvement on Oxford5k after PCA reduction to $D'=128$ of concatenated BOW vectors produced on vocabularies created using different PCA-reduced SIFT descriptors. For more details about all three presented methods see Section~\\ref{sec:our_approach}.}\n\n\\label{fig:PCA}\n\\end{figure*}\n\n\nThe improvement is twofold: (i) increased performance measured by mAP, and (ii) shorter quantization time at query due to shorter local descriptors after dimensionality reduction. On the other hand, a small amount of storage is required to save the learned projection matrices for every vocabulary, which are reused at query time. We consider and evaluate three different approaches for learning the eigenvectors used to project SIFT vectors from $D$ to $D'$ dimensions:\n\\begin{enumerate}[nolistsep]\n\n\\item We learn the eigenvectors on the Paris6k dataset and reduce the dimension of the SIFT descriptors to $D'=80,64,48,32$ in the respective order for every newly created vocabulary (m$\\text{PCA}_1$-SIFT). Results of this experiment are shown in Figure~\\ref{fig:PCA}, \\nth{1} row.\n\n\\item We learn the eigenvectors on different datasets: Paris6k, Holidays, University of Kentucky benchmark (UKB), and PASCAL VOC'07 training, in the respective order for every newly created vocabulary (m$\\text{PCA}_2$-SIFT). The dimension of the SIFT descriptors is reduced to $D'=80$ in all cases. 
For the mAP performance on Oxford5k, see Figure~\\ref{fig:PCA}, \\nth{2} row.\n\n\\item We learn the eigenvectors on different datasets: Paris6k, Holidays, UKB, and PASCAL VOC'07 training, and reduce the dimension of the SIFT descriptors differently for each dataset ($D'=80,64,48,32$ respectively), creating different vocabularies (m$\\text{PCA}_3$-SIFT). Performance is presented in Figure~\\ref{fig:PCA}, \\nth{3} row.\n\n\\end{enumerate}\nNote that the first vocabulary in all three approaches is produced using standard SIFT descriptors without PCA reduction. A new vocabulary is added in every step of the experiment, resulting in a joint dimensionality reduction of 5 concatenated BOW vectors in the end.\n\n\\paragraph{Multiple feature detectors.}%\nIn the Video Google approach~\\cite{Sivic-ICCV03} the authors combine vocabularies created from two different feature types.\nIn this paper we attempt to combine the Hessian-Affine~\\cite{Perdoch-CVPR09} and MSER~\\cite{Matas-BMVC02} detectors. Even though the straightforward concatenation of BOW vectors created on $k=8$k vocabularies ($48.7$ mAP on Oxford5k) gives an improvement over using single BOW representations with Hessian-Affine ($44.7$) and MSER ($40.1$) features, after joint PCA reduction there is a decrease in performance when combining features ($37.0$ mAP on Oxford5k) compared to only performing PCA reduction on a single Hessian-Affine vocabulary ($38.6$), and an increase in performance compared to PCA-reduced BOW vectors built on a single MSER vocabulary ($24.4$). Similar conclusions hold when combining smaller vocabulary sizes, i.e., there is always a drop in performance when comparing PCA reduction on a single vocabulary with Hessian-Affine features to PCA on combined vocabularies with Hessian-Affine and MSER features; the mAP drops from $39.8$ to $39.1$, from $40.7$ to $38.7$, and from $36.8$ to $35.1$ for $k=4$k, $2$k, and $1$k, respectively. 
We also experimented with combining Harris-Affine~\\cite{Mikolajczyk-IJCV05} with Hessian-Affine features in the same manner as with MSER, but the improvement is not significant. PCA reduction of a single $k=8$k vocabulary on Hessian-Affine yields $38.6$ mAP on Oxford5k, while joint PCA after adding a vocabulary of the same size built on Harris-Affine improves mAP to $39.0$, which is a smaller improvement than using two vocabularies built on Hessian-Affine features with different randomization ($40.0$ mAP).\n\n\\paragraph{Discussion.}%\nIn order to better understand the impact of using multiple vocabularies, we count the number of unique assignments in the product vocabulary. It corresponds to the number of non-empty cells of the descriptor space partition generated by all vocabularies simultaneously. The maximum possible number of unique assignments is equal to the product of the numbers of clusters (cells) of all joint vocabularies. The number is related to the precision of reconstruction of each feature descriptor from its visual word assignments. For the combination of vocabularies with different SIFT exponents (mRootSIFT), the number of unique assignments for the Oxford5k dataset is shown in Figure~\\ref{fig:unq}. The plots are similar for all vocabulary combinations.\n\n\\setlength{\\tabcolsep}{8pt}\n\\begin{table*}\n\\centering\n\n\\caption{\\textbf{Comparison with the state-of-the-art on short vector image representations ($D'=128$):} Results in the first section of the table are mostly obtained from the paper~\\cite{Jegou-PAMI12}, except for the recent method on triangulation embedding and democratic aggregation with rotation and normalization ($\\phi_\\Delta$+$\\psi_\\text{d}$+$\\text{RN}$) proposed in~\\cite{Jegou-CVPR14}. In the second section we present results from methods that use joint PCA and whitening of high-dimensional vectors, as we do. 
Results marked with * are obtained with our reimplementation of the methods, using the feature detector and descriptor described in Section~\\ref{sec:bow} and Paris6k as a learning dataset. In the last section of the table we present the results of our methods. All methods are described in detail in Section~\\ref{sec:our_approach}.}\n\n\\begin{tabular}{llccc}\n\\\\\n\\hline\n\\textbf{Method} & \\textbf{Vocabulary} & \\textbf{Oxford5k} & \\textbf{Oxford105k} & \\textbf{Holidays}\\\\\n\\hline\n\\hline\nGIST~\\cite{Oliva-IJCV01} & N\/A & $-$ & $-$ & $36.5$\\\\\nBOW~\\cite{Sivic-ICCV03} & $k{=}20\\text{k}$ & $19.4$ & $-$ & $45.2$\\\\\nImproved Fisher~\\cite{Perronnin-CVPR10} & $k{=}64$ & $30.1$ & $-$ & $56.5$\\\\\nVLAD~\\cite{Jegou-CVPR10} & $k{=}64$ & $-$ & $-$ & $51.0$\\\\\nVLAD+SSR~\\cite{Jegou-PAMI12} & $k{=}64$ & $28.7$ & $-$ & $55.7$\\\\\n$\\phi_\\Delta$+$\\psi_\\text{d}$+RN~\\cite{Jegou-CVPR14} & $k{=}16$ & $43.3$ & $35.3$ & $61.7$\\\\\n\\hline\nmVocab\/BOW~\\cite{Jegou-ECCV12} & $k{=}4{\\times}8\\text{k}$ & $41.3{\/}41.4^{*}$ & $-{\/}33.2^{*}$ & $56.7{\/}63.0^{*}$\\\\\nmVocab\/BOW~\\cite{Jegou-ECCV12} & $k{=}2{\\times}(32\\text{k}{+}\\ldots{+}128)$ & $-{\/}42.9^{*}$ & $-{\/}35.1^{*}$ & $60.0\/64.5^{*}$\\\\\nmVocab\/VLAD~\\cite{Jegou-ECCV12} & $k{=}4{\\times}256$ & $-$ & $-$ & $61.4$\\\\\nmVocab\/VLAD+adapt+innorm~\\cite{Arandjelovic-CVPR13} & $k{=}4{\\times}256$ & $44.8$ & $37.4$ & $62.5$\\\\\n\\hline\nmMeasReg\/mVocab\/BOW & $k{=}5{\\times}2\\text{k}$ & $46.9$ & $38.9$ & $66.9$\\\\\nmMeasReg\/mVocab\/BOW & $k{=}4{\\times}(4\\text{k}{+}\\ldots{+}128)$ & $47.7$ & $39.2$ & $67.3$\\\\\nmRootSIFT\/mVocab\/BOW & $k{=}4{\\times}2\\text{k}$ & $47.7$ & $39.8$ & $64.3$\\\\\nmRootSIFT\/mVocab\/BOW & $k{=}4{\\times}(2\\text{k}{+}\\ldots{+}128)$ & $48.8$ & $41.4$ & $65.6$\\\\\nm$\\text{PCA}_3$-SIFT\/mVocab\/BOW & $k{=}5{\\times}2\\text{k}$ & $45.8$ & $38.1$ & $63.2$\\\\\nm$\\text{PCA}_1$-SIFT\/mVocab\/BOW & $k{=}5{\\times}(4\\text{k}{+}\\ldots{+}128)$ & $45.5$ & $37.8$ & 
$64.6$\\\\\n\\hline\n\\end{tabular}\n\\label{tab:mAP}\n\\end{table*}\n\n\\subsection{Comparison with the state-of-the-art}\n\nA comparison with current methods dealing with short vector image representations is given in Table~\\ref{tab:mAP}. The authors of the baseline approach on multiple vocabularies (mVocab) did not provide results for the Oxford5k and Oxford105k datasets using all of their proposed methods, so we reimplemented them and present the corresponding results. Compared to their best method on Oxford5k, which achieves $42.9$ mAP, our best method ($48.8$ mAP) obtains a significant relative improvement of $13.8\\%$. In fact, all our methods outperform the mVocab baseline methods on Oxford5k by a noticeable margin, with an improvement of $6.1\\%$ in the case of our worst performing method. When evaluating large-scale retrieval on the Oxford105k dataset our methods again outperform the baseline method; the relative improvement is $17.9\\%$ for our best performing method, and $7.7\\%$ for the worst performing one. In order to make a fair comparison when evaluating on the Holidays dataset we again reimplemented the baseline approach, using Paris6k for learning the vocabularies and PCA projections (as we did in all our methods). In this case, the relative improvement is $4.3\\%$ with our best method (from $64.5$ mAP to $67.3$ mAP). We also compare our methods to two recent state-of-the-art approaches on short representations~\\cite{Arandjelovic-CVPR13,Jegou-CVPR14}. On Oxford5k and Oxford105k we improve by as much as $8.9\\%$ and $10.7\\%$, respectively, compared to the VLAD based approach~\\cite{Arandjelovic-CVPR13}, and by $12.7\\%$ and $17.3\\%$, respectively, compared to the T-embedding based approach~\\cite{Jegou-CVPR14}. On the Holidays dataset the relative improvement is $7.7\\%$ compared to the former and $9.1\\%$ compared to the latter. 
Note that the dataset used for learning the meta-data for Holidays is different: we use Paris6k, while both \\cite{Arandjelovic-CVPR13} and \\cite{Jegou-CVPR14} use an independent dataset comprising 60k images downloaded from Flickr.\n\n\\begin{figure}[b!]\\centering\n\\includegraphics[width=.9\\columnwidth]{fig_unq.pdf}\n\\caption{\\textbf{Number of unique assignments (vocabulary cells) for Oxford5k dataset when combining vocabularies built on multiple power-law normalized SIFT descriptors (mRootSIFT):} SIFT, $\\text{SIFT}^{0.4}$, $\\text{SIFT}^{0.5}$, $\\text{SIFT}^{0.6}$.}\n\\label{fig:unq}\n\\end{figure}\n\n\\section{Conclusions}\n\\label{sec:conclusion}\n\nMethods for multiple vocabulary construction were studied and evaluated in this paper. Following~\\cite{Jegou-ECCV12}, the concatenated BOW image representations from multiple vocabularies were subject to joint dimensionality reduction to 128D descriptors. We have experimentally shown that generating diverse multiple vocabularies has a crucial impact on search performance. Each of the multiple vocabularies was learned on local feature descriptors obtained with varying parameter settings: feature descriptors extracted from measurement regions of different scales, different power-law normalizations of the SIFT descriptors, and different linear projections applied to feature descriptors prior to k-means quantization. \nThe proposed vocabulary constructions improve performance over the baseline method~\\cite{Jegou-ECCV12}, where only different initializations were used to produce multiple vocabularies. More importantly, {\\em all} proposed methods exceed the state-of-the-art results~\\cite{Arandjelovic-CVPR13,Jegou-CVPR14} by a large margin. The choice of the optimal combination of vocabularies remains an open problem.\n\n\\clearpage\n\n\\vspace{2mm} \\noindent {\\bf Acknowledgements.} The authors were supported by MSMT LL1303 ERC-CZ and ERC VIAMASS no. 
336054 grants.\n\n\n{\\small\n\\bibliographystyle{ieee}\n