diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzphpw" "b/data_all_eng_slimpj/shuffled/split2/finalzzphpw" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzphpw" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{S:intro}\nThe hierarchy between electroweak and Planck scales can be\naddressed in extra dimensional models. Among these, the model proposed\nby Randall and Sundrum (RS) assumes warp geometry of the\nspace-time in 5 dimensions \\cite{RS}. The fifth dimension has Planck\nscale length $r_c$ and is compactified on the space $S^1\/Z_2$.\nTwo 3-branes are supported on either side of this fifth\ndimension. The exponential suppression\nalong the fifth dimension naturally suppresses Planck scale\nquantities of one 3-brane into electroweak scale on the second\n3-brane, which is identified as TeV-brane and can be interpreted\nas our universe. In the original RS model the standard model\nfields are assumed to lie on the TeV-brane while only gravity propagates in\nthe bulk. Later works have explored the phenomenology of bulk standard model fields\nin warped geometry model \\cite{GW,DHR,Pom,GN,CHNOY}. It was however shown that\nthe models with bulk gauge and Higgs fields where the spontaneous symmetry breaking\ntakes place in the bulk, encounters serious problems. \nThe two main problems are:\\\\\n\n(i) The non-Abelian gauge fields acquire masses through the Higgs vacuum\nexpectation value (vev) generated through spontaneous symmetry breaking \nin the bulk. This vev being a bulk parameter has a magnitude of the order\nof the Planck scale and therefore lends a very large bulk mass \n$\\sim$ Planck mass to the gauge boson in the bulk. As a result the lowest lying masses\nin the KK tower of the gauge boson \non the visible brane becomes $\\sim$ TeV which fails to comply with the $W$ and\n$Z$ boson masses $\\leq 100$ GeV. If one \ntries to reduce it by adjusting the bulk parameters then that would jeopardize\nthe unique feature of Planck scale to TeV scale warping,\n{\\it i.e.} the resolution of gauge hierarchy problem which was the original\nmotivation of such a warped geometry model.\n\n(ii) For an Abelian gauge boson with zero bulk mass, the massless Kaluza-Klein (KK)\nmode on the TeV-brane corresponds to\nphoton. However, the first excited state in the KK tower has an unacceptably\nlarge coupling with fermions. This puts a stringent bound on the mass of this\nstate such that the model may survive the direct search bound at the Fermilab \nTevatron as well as precision electroweak constraints. However, the mass of this\nfirst KK mode turns out to be much lower than the above bound. Once again it is\nimpossible either to reduce the coupling or to increase the mass \nby adjusting the bulk parameters without disturbing the resolution of the gauge hierarchy\/fine tuning problem.\\\\\n\nWe refer our readers to \\cite{DHR, CHNOY,DMS,higgs-others2,ssg1,cust} where both\nthese problems have been discussed in details. \nRecently, in a generalized 5-dimensional RS model with non-flat visible brane, by adjusting\nthe brane cosmological constant the problem coming from the\nprecision electroweak tests have been averted and also the bulk\nHiggs problem has been resolved \\cite{DMS}. 
\nThe brane cosmological constant, however, was found to be negative, implying that the visible brane in such a case is an \nanti-de Sitter 3-brane.\n \nIn the present work\nwe address these problems from a different viewpoint, {\\it i.e.} in the backdrop of a 6-dimensional doubly\nwarped model with flat 3-branes \nwhich is an extension of the original RS model to more than one extra dimension \\cite{CS}. In this 6-dimensional\nmodel two extra spatial coordinates are compactified such that\nthe space-time manifold is $[M^{(1,3)}\\times S^1\/Z_2]\n\\times S^1\/Z_2$. In contrast to the RS model, here the two\nextra dimensions, denoted by the angular coordinates $y,z$,\nare doubly warped. Four 4-branes are located at the orbifold\nfixed points $y=0,\\pi$, $z=0,\\pi$. The intersection of any two\n4-branes gives a 3-brane. The 3-brane located at $(y,z)=(\\pi,0)$\nis identified with our universe. Analogous to the RS setup,\nthe mass scale suppression can be felt along both coordinates\n$y,z$. We can choose the moduli of these coordinates, say\n$R_y$ and $r_z$, such that TeV scale masses are\ngenerated on the visible brane located at $(y,z)=(\\pi,0)$. Since\nthere is an extra freedom through an additional modulus in this model compared to the 5-dimensional RS model,\nwe explore whether the above problems relating to the bulk Higgs can be solved\nin the 6-dimensional doubly warped model by adjusting the\nmoduli $R_y,r_z$ suitably.\n\nWe organize our paper as follows.\nIn the following section we explain some essential features of\nthe 6-dimensional doubly warped model. In Sec. 3 we\ndescribe the KK mode analysis of \ngauge bosons and fermions in the 6-dimensional bulk and the corresponding modes on the visible 3-brane. In\nSec. 4 we present our results and argue that the precision\nelectroweak tests put no additional constraints on this\nmodel. We conclude in Sec. 5.\n\n\\section{The 6-dimensional doubly warped model}\n\\label{S:mod}\n\nAs explained previously, the 6-dimensional doubly warped\nmodel has two extra\nspatial dimensions which are orbifolded by $Z_2$ symmetries \\cite{CS}. The\nmanifold under consideration is $[M^{(1,3)}\\times S^1\/Z_2]\n\\times S^1\/Z_2$ with four non-compact dimensions denoted\nby $x^\\mu$, $\\mu=0,\\cdots,3$. Since we are interested in\na doubly warped model, the metric can be chosen\nas\n\\begin{equation}\nds^2 = b^2(z)[a^2(y)\\eta_{\\mu\\nu}dx^\\mu dx^\\nu +R_y^2dy^2]+r_z^2dz^2\n\\label{E:metric}\n\\end{equation}\nAs explained before, the angular coordinates $y,z$ represent the\nextra spatial dimensions with moduli $R_y,r_z$, respectively. The\nMinkowski metric in the usual 4 dimensions has the form $\\eta_{\\mu\\nu} = {\\rm diag}\n(-1,1,1,1)$. The functions $a(y),b(z)$ give the warp factors in\nthe $y$ and $z$ directions, respectively. The total bulk-brane action\nof this model has the form \\cite{CS}\n\\begin{eqnarray}\nS &=& S_6+S_5\n\\nonumber \\\\\nS_6 &=& \\int d^4xdydz\\sqrt{-g_6}(R_6-\\Lambda), \\quad\n\\nonumber \\\\\nS_5 &=& \\int d^4xdydz[V_1\\delta(y)+V_2\\delta(y-\\pi)]\n +\\int d^4xdydz[V_3\\delta(z)+V_4\\delta(z-\\pi)]\n\\end{eqnarray}\nHere, $V_{1,2}$ and $V_{3,4}$ are the brane tensions of the branes located at\n$y=0,\\pi$ and $z=0,\\pi$, respectively. $\\Lambda$ is the cosmological\nconstant in 6 dimensions. The 3-branes are located at the intersection points of the four 4-branes.\n\nAfter solving Einstein's\nequations, the warp functions\nof the metric given in eq. 
(\\ref{E:metric}) take the form \\cite{CS}\n\\begin{eqnarray}\na(y) &=& \\exp(-c|y|), \\quad b(z) = \\frac{\\cosh(kz)}{\\cosh(k\\pi)}\n\\nonumber \\\\\nc&\\equiv & \\frac{R_yk}{r_z\\cosh(k\\pi)},\\quad k\\equiv r_z\\sqrt{\n\\frac{-\\Lambda}{10M_P^4}}\n\\label{E:sol}\n\\end{eqnarray}\nHere, $M_P$ is the Planck scale.\nThe warp factors $a(y)$ and $b(z)$ give the\nlargest suppression from the $(y,z)=(0,\\pi)$ brane to the $(y,z)=(\\pi,0)$ brane. For this reason\nwe can interpret the 3-brane formed out of the intersection of the\n4-branes at $y=\\pi$ and $z=0$ as our standard model brane. The suppression\nfactor $f$ on the standard model brane can be written as\n\\begin{equation}\nf =\\frac{\\exp(-c\\pi)}{\\cosh(k\\pi)}\n\\label{E:supp}\n\\end{equation}\nThe desired suppression of $10^{-16}$ on the standard model brane\ncan be obtained for different combinations of the parameters $c$ and $k$. However, from\nthe relation for $c$ in eq. (\\ref{E:sol}) it can be noticed that\nin order not to have a large hierarchy between the moduli $R_y$ and $r_z$, either $c$\nor $k$ must be large whereas the other is small, e.g. $c\\sim$ 11.4 and $k\\sim$ 0.1.\nThis implies that the warping along $y$ is large whereas that along $z$ is small.\nIt has been argued that this feature may offer an explanation of the small mass hierarchy\namong the standard model fermions \\cite{CS}. \n\nThe 6-dimensional model described here is thus viable\nfor explaining the hierarchy between the Planck scale and the electroweak\nscale without introducing a large hierarchy between the moduli\n$R_y$ and $r_z$. In this model the KK modes of bulk scalar fields have\nbeen studied \\cite{KMS1}. Bulk fermion fields have also been\nstudied in this model with a possibility of localizing\nthem on a 4-brane \\cite{KMS2}. However, the possibility of bulk gauge and Higgs fields in this\nmodel has not been explored yet. In the following section we derive the KK modes of the gauge field, the fermion fields and the corresponding\ncouplings to assess the viability of this model with respect to the problems discussed earlier. \nWe reiterate that our aim is to explore whether we can put the Higgs in the bulk of such a 6-dimensional\nmultiply warped model without invoking any contradiction with the precision electroweak tests \n\\cite{DHR,CHNOY} as was encountered in the 5-dimensional RS model.\n\\section{Gauge bosons and fermions in the bulk}\n\nIn this section we explain the KK decomposition and eigenvalue\nequations of the KK gauge bosons\nand KK fermions which arise from the respective bulk fields after\nintegrating over the two extra dimensions of the model.\n\n\\subsection{KK modes of the gauge bosons}\nFor simplicity, we consider a U(1) gauge theory, but our derivation\ngiven below is applicable to a non-Abelian theory as well. In a realistic\nmodel the gauge fields can acquire non-zero masses due to spontaneous\nsymmetry breaking. In our model, the Higgs mechanism can take place in the\n6-dimensional bulk and the vev of the Higgs\nfield will be of the order of the Planck scale. This vev\ngenerates the bulk mass for the gauge field. Hence, after spontaneous\nsymmetry breaking the invariant action can be written as\n\\begin{equation}\nS_G = \\int d^4xdydz\\sqrt{-G}\\left(-\\frac{1}{4}G^{MK}G^{NL}F_{KL}F_{MN}\n-\\frac{1}{2}M^2G^{MK}A_MA_K\\right),\n\\label{E:SG}\n\\end{equation}\nwhere $M$ is the bulk mass $\\sim M_P$ and $G={\\rm det}(G_{AB})$ is the determinant\nof the metric $G_{AB}$ which is given in eq. 
(\\ref{E:metric}).\n$F_{KL}=\\partial_K A_L-\\partial_LA_K$ is the gauge field strength. Exploiting the\ngauge symmetry we can choose the gauge where\n$A_4=A_5=0$. The KK decomposition of the gauge field can be taken as\n\\begin{equation}\nA_\\mu = \\sum_{n,p}A^{(n,p)}_\\mu(x)\\xi_n(y)\\chi_p(z)\/\\sqrt{R_yr_z}.\n\\label{E:KKgau}\n\\end{equation}\nThe 4-dimensional KK fields $A_\\mu^{(n,p)}$ carry two indices\n$n,p$ due to the two additional dimensions of the model. The functions\n$\\xi_n(y)$ and $\\chi_p(z)$ give the KK wave functions in the $y$ and $z$ directions,\nrespectively. Substituting the above KK decomposition in eq. (\\ref{E:SG})\nand integrating over the $y$ and $z$ coordinates, we demand\nthat the resulting action in 4 dimensions must have the form\n\\begin{equation}\n\\sum_{n,p}-\\frac{1}{4}F_{\\mu\\nu}^{(n,p)}F^{(n,p)\\mu\\nu}-\\frac{1}{2}\nm_{n,p}^2A_{\\mu}^{(n,p)}A^{(n,p)\\mu},\n\\end{equation}\nwhere $m_{n,p}$ is the mass of the KK field $A_\\mu^{(n,p)}$. This can be achieved provided the KK wave functions \nsatisfy the following orthonormality conditions:\n\\begin{equation}\n\\int dy~\\xi_n(y)\\xi_{n^\\prime}(y) = \\delta_{nn^\\prime},\n\\quad\n\\int dz~b(z)\\chi_p(z)\\chi_{p^\\prime}(z) = \\delta_{pp^\\prime}.\n\\label{E:gnorm}\n\\end{equation}\nIn addition to the above normalization conditions, the following\neigenvalue equations for $\\xi_n$ and $\\chi_p$ must also be satisfied:\n\\begin{eqnarray}\n\\frac{1}{R_y^2}\\partial_y(a^2\\partial_y\\xi_n)-m_p^2a^2\\xi_n &=& -m_{n,p}^2\\xi_n,\n\\nonumber \\\\\n\\frac{1}{r_z^2}\\partial_z(b^3\\partial_z\\chi_p)-M^2b^3\\chi_p &=& -m_{p}^2\\chi_p.\n\\label{E:KKyz}\n\\end{eqnarray}\nHere, $m_p$ is a mass parameter which is determined by solving the equation\nfor $\\chi_p(z)$, and the value of $m_p$ determines the KK mass $m_{n,p}$ through\nthe eigenvalue equation for $\\xi_n$, as given above.\n\nThe second of eqs. (\\ref{E:KKyz}) can be solved by approximating\n$b(z)\\sim\\exp[-k(\\pi-z)]=\\exp[-k\\tilde{z}]$. By writing\n$\\tilde{\\chi}_p(z)=\\exp(-3k\\tilde{z}\/2)\\chi_p(z)$ the eigenvalue\nequation for $\\tilde{\\chi}_p$ takes the form\n\\begin{equation}\nz_p^2\\frac{d^2\\tilde{\\chi}_p}{dz_p^2}+z_p\\frac{d\\tilde{\\chi}_p}{dz_p}\n+(z_p^2-\\nu_p^2)\\tilde{\\chi}_p=0,\n\\end{equation}\nwhere $z_p=\\frac{m_p}{k^\\prime}\\exp(k\\tilde{z})$ and $\\nu_p^2=\\frac{9}{4}\n+\\left(\\frac{M}{k^\\prime}\\right)^2$. Here, $k^\\prime=k\/r_z$. The\nsolutions to the above equation are Bessel functions of order $\\nu_p$,\nand we can write\n\\begin{equation}\n\\chi_p(z)=\\frac{1}{N_p}\\exp(\\frac{3}{2}k\\tilde{z})\n\\left[J_{\\nu_p}(z_p)+b_pY_{\\nu_p}(z_p)\\right],\n\\label{E:Zeigen}\n\\end{equation}\nwhere $N_p$ and $b_p$ are constants.\nBy demanding that the derivative of the function $\\chi_p(z)$ be continuous at the\norbifold fixed points $z=0,\\pi$ we get the following approximate\ncondition which determines the spectrum of $m_p$:\n\\begin{equation}\n3J_{\\nu_p}(x_{\\nu_p})+x_{\\nu_p}(J_{\\nu_p - 1}(x_{\\nu_p}) - J_{\\nu_p + 1}(x_{\\nu_p})) = 0,\n\\label{E:Zeq}\n\\end{equation}\nwhere $x_{\\nu_p}=\\frac{m_p}{k^\\prime}\\exp(k\\pi)$. After solving\nfor $m_p$ using the above equation, we can compute the KK mass $m_{n,p}$\nby solving the first of eqs. (\\ref{E:KKyz}). 
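The roots of eq. (\\ref{E:Zeq}) are straightforward to obtain numerically. For illustration, a minimal sketch which brackets the roots $x_{\\nu_p}$ on a grid and then polishes them with a standard root finder is given below; the choice $M\/k^\\prime=0.5$ anticipates one of the cases considered in Sec. 4.\n\\begin{verbatim}\n# Minimal sketch (illustration only): numerical roots x of eq. (E:Zeq),\n# 3 J_nu(x) + x (J_{nu-1}(x) - J_{nu+1}(x)) = 0, for M/k' = 0.5.\nimport numpy as np\nfrom scipy.special import jv\nfrom scipy.optimize import brentq\n\nM_over_kp = 0.5\nnu = np.sqrt(9.0/4.0 + M_over_kp**2)\n\ndef lhs(x):\n    return 3.0*jv(nu, x) + x*(jv(nu - 1.0, x) - jv(nu + 1.0, x))\n\nxs = np.linspace(0.1, 30.0, 3000)      # bracket sign changes on a grid\nroots = [brentq(lhs, a, b) for a, b in zip(xs[:-1], xs[1:])\n         if lhs(a)*lhs(b) < 0]\nprint(roots[:4])\n\\end{verbatim}\nEach root $x_{\\nu_p}$ then translates into a mass parameter $m_p=x_{\\nu_p}k^\\prime\\exp(-k\\pi)$.\n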
By writing $\\tilde{\\xi}_n\n=\\exp(-c|y|)\\xi_n$, the eigenvalue equation for $\\tilde{\\xi}_n(y)$ becomes\n\\begin{equation}\ny_n^2\\frac{d^2\\tilde{\\xi}_n}{dy_n^2}+y_n\\frac{d\\tilde{\\xi}_n}{dy_n}\n+(y_n^2-\\nu_n^2)\\tilde{\\xi}_n=0,\n\\end{equation}\nwhere $y_n=\\frac{m_{n,p}}{k^\\prime}\\exp(c|y|)\\cosh(k\\pi)$ and\n$\\nu_n^2=1+\\left(\\frac{m_p}{k^\\prime}\\right)^2\\cosh^2(k\\pi)$. The solution\nfor $\\xi_n(y)$ can be written in terms of Bessel functions of order\n$\\nu_n$ multiplied by a growing exponential factor as\n\\begin{equation}\n\\xi_n(y)=\\frac{1}{N_n}\\exp(c|y|)\n\\left[J_{\\nu_n}(y_n)+b_nY_{\\nu_n}(y_n)\\right],\n\\label{E:Yeigen}\n\\end{equation}\nwhere $N_n$ and $b_n$ are constants. Again, by demanding that\nthe derivative of the function $\\xi_n(y)$ be continuous at the orbifold fixed points\n$y=0,\\pi$, the following equation determines the KK\nmass $m_{n,p}$:\n\\begin{equation}\nJ_{\\nu_n}(x_{\\nu_n})+x_{\\nu_n}(J_{\\nu_n - 1}(x_{\\nu_n}) - J_{\\nu_n + 1}(x_{\\nu_n}))\/2 = 0,\n\\label{E:Yeq}\n\\end{equation}\nwhere\n\\begin{equation}\nx_{\\nu_n}=\\frac{m_{n,p}}{k^\\prime}\\exp(c\\pi)\\cosh(k\\pi).\n\\label{E:xn}\n\\end{equation}\n\nThe actual KK mass of a gauge field is thus found by first solving\neq. (\\ref{E:Zeq}) for $m_p$ and then solving eq. (\\ref{E:Yeq}), as\ndescribed in the previous paragraph. The wave function of these\nKK gauge fields is the product of the wave functions given in eqs. (\\ref{E:Zeigen})\nand (\\ref{E:Yeigen}). \\\\\nIf we set the bulk gauge boson mass $M = 0$ in the above analysis, we easily\nobtain the various KK mode solutions and the corresponding masses. In this case\nthe lowest lying KK mode is massless and corresponds to the standard model photon.\n\nA nice feature of the KK gauge fields in the\n6-dimensional doubly warped model is that their wave functions\ncan be decomposed into a product of functions of the two extra dimensions,\na feature which does not in general hold for the bulk fermion fields, which\nare the subject of the next subsection.\n\n\\subsection{KK modes of the fermions}\n\nThe invariant action for a bulk fermion field $\\Psi$ in 6 dimensions is\n\\cite{DHR,GN,CHNOY}\n\\begin{equation}\nS_f = \\int d^4xdydz\\sqrt{-G}\\left\\{E^A_a\\left[\\frac{i}{2}\\left(\\bar{\\Psi}\\Gamma^a\n\\partial_A\\Psi - \\partial_A\\bar{\\Psi}\\Gamma^a\\Psi\\right)+\\frac{\\omega_{bcA}}{8}\n\\bar{\\Psi}\\{\\Gamma^a,\\sigma^{bc}\\}\\Psi\\right] - M_f\\bar{\\Psi}\\Psi\\right\\}\n\\label{E:Sf}\n\\end{equation}\nHere, the capital letter $A$ denotes an index in the curved space\nand the lower case letters $a,b,c$ denote indices in the\ntangent space.\n$\\omega_{bcA}$ is the spin connection and $E^A_a$ is the inverse vielbein.\n$M_f$ is the bulk mass and $\\sigma^{bc}=\\frac{i}{2}[\\Gamma^b,\\Gamma^c]$.\nThe Dirac matrices $\\Gamma^a$ in 6 dimensions are 8$\\times$8, and\nthey can be taken as \\cite{BDP}:\n\\begin{equation}\n\\Gamma^\\mu = \\gamma^\\mu\\otimes\\sigma^0,\\quad \\Gamma^4=i\\gamma_5\\otimes\\sigma^1,\n\\quad \\Gamma^5=i\\gamma_5\\otimes\\sigma^2\n\\end{equation}\nHere, $\\gamma^\\mu$ are the Dirac matrices in 4 dimensions and\n$\\gamma_5=i\\gamma^0\\gamma^1\\gamma^2\\gamma^3$. 
$\\sigma^i$, $i=1,2,3$, are the\nPauli matrices and $\\sigma^0$ is the 2$\\times$2 unit matrix.\nThe chirality in 6 dimensions is defined by the matrix $\\bar{\\Gamma}=\\Gamma^0\n\\Gamma^1\\Gamma^2\\Gamma^3\\Gamma^4\\Gamma^5$ as $\\bar{\\Gamma}\\Psi_\\pm = \\pm\\Psi_\\pm$.\nThe chiral fermions in 6 dimensions contain both left- and right-handed chiralities\nof 4 dimensions, which can be projected out by the operators $P_{L,R} =\n(1\\mp i\\Gamma^0\\Gamma^1\\Gamma^2\\Gamma^3)\/2$.\n\nAs explained in Sec. \\ref{S:intro}, we are interested in estimating the gauge coupling\nof standard model fermions to the KK gauge bosons. We take the bulk mass\nof the fermions $M_f$ to be zero, since the masses of the standard model fermions\nare much below the Planck scale. The term that is associated with the spin\nconnection in eq. (\\ref{E:Sf}) gives no contribution, since the metric\nin eq. (\\ref{E:metric}) is diagonal. Hence, in our particular case\nof interest we expand the first term of eq. (\\ref{E:Sf}),\nwhich has the following form:\n\\begin{eqnarray}\nS_f= &\\int d^4xdydz& \\left\\{ b^4a^3R_yr_z i\\left(\n\\bar{\\Psi}_{+L}\\Gamma^\\mu\\partial_\\mu\\Psi_{+L}+\n\\bar{\\Psi}_{+R}\\Gamma^\\mu\\partial_\\mu\\Psi_{+R}+\n\\bar{\\Psi}_{-L}\\Gamma^\\mu\\partial_\\mu\\Psi_{-L}+\n\\bar{\\Psi}_{-R}\\Gamma^\\mu\\partial_\\mu\\Psi_{-R}\\right) \\right.\n\\nonumber \\\\\n&& \\left.\n+\\left[\\bar{\\Psi}_{+L}\\left(\\Gamma^4 D_y+\\Gamma^5 D_z\\right)\\Psi_{+R}\n+\\bar{\\Psi}_{+R}\\left(\\Gamma^4 D_y+\\Gamma^5 D_z\\right)\\Psi_{+L}\n\\right. \\right.\n\\nonumber \\\\\n&& \\left. \\left.\n+\\bar{\\Psi}_{-L}\\left(\\Gamma^4 D_y+\\Gamma^5 D_z\\right)\\Psi_{-R}\n+\\bar{\\Psi}_{-R}\\left(\\Gamma^4 D_y+\\Gamma^5 D_z\\right)\\Psi_{-L}\n\\right]\\right\\},\n\\label{E:Sfin}\n\\end{eqnarray}\nwhere the differential operators are defined as:\n$ D_y = \\frac{i}{2}b^4r_z(a^4\\partial_y\n+\\partial_ya^4)$ and $ D_z = \\frac{i}{2}a^4R_y(b^5\\partial_z\n+\\partial_zb^5)$.\nIn the subscript of the fields $\\Psi$ the $\\pm$ indicates the chirality\nin 6 dimensions and $L,R$ stand for the left- and right-handed chirality\nof 4 dimensions. The terms in the second and third lines of the above equation give\neffective masses for the KK modes in 4 dimensions. These terms indicate\nthat in general we cannot decompose the wave functions into $y$ and $z$ parts separately,\nas we did for the KK wave functions of the gauge bosons in eq. (\\ref{E:KKgau}). Hence, for the bulk fermions the KK decomposition\ncan be taken as\n\\begin{eqnarray}\n\\Psi_{+L,-R}(x^\\mu,y,z)&=&\\frac{1}{\\sqrt{R_yr_z}}\\sum_{j,k}\\psi^{(j,k)}_{+L,-R}(x^\\mu)\nf^{(j,k)}_{+L,-R}(y,z)\\otimes\\left(\\begin{array}{c}1\\\\0\\end{array}\\right),\n\\nonumber \\\\\n\\Psi_{-L,+R}(x^\\mu,y,z)&=&\\frac{1}{\\sqrt{R_yr_z}}\\sum_{j,k}\\psi^{(j,k)}_{-L,+R}(x^\\mu)\nf^{(j,k)}_{-L,+R}(y,z)\\otimes\\left(\\begin{array}{c}0\\\\1\\end{array}\\right).\n\\label{E:KKfer}\n\\end{eqnarray}\nIn the above equation the fields $\\psi^{(j,k)}(x^\\mu)$ are the KK fields\nliving in 4 dimensions and the $f$'s are the KK wave functions depending on both\nthe $y$ and $z$ coordinates. Substituting the above KK decomposition into\neq. 
(\\ref{E:Sfin}) and integrating over $y$ and $z$, we get an action\nof the form\n\\begin{eqnarray}\nS_f = &\\int d^4x& \\sum_{j,k}\\bar{\\psi}^{(j,k)}_{+L}i\\gamma^\\mu\\partial_\\mu\\psi^{(j,k)}_{+L}\n+\\bar{\\psi}^{(j,k)}_{+R}i\\gamma^\\mu\\partial_\\mu\\psi^{(j,k)}_{+R}\n+\\bar{\\psi}^{(j,k)}_{-L}i\\gamma^\\mu\\partial_\\mu\\psi^{(j,k)}_{-L}\n+\\bar{\\psi}^{(j,k)}_{-R}i\\gamma^\\mu\\partial_\\mu\\psi^{(j,k)}_{-R}\n\\nonumber \\\\\n&&\n-M_{j,k}(\\bar{\\psi}^{(j,k)}_{+L}\\psi^{(j,k)}_{+R}+\\bar{\\psi}^{(j,k)}_{+R}\\psi^{(j,k)}_{+L}\n+\\bar{\\psi}^{(j,k)}_{-L}\\psi^{(j,k)}_{-R}+\\bar{\\psi}^{(j,k)}_{-R}\\psi^{(j,k)}_{-L}),\n\\end{eqnarray}\nprovided the following normalization and eigenvalue equations\nfor the KK wave functions are satisfied:\n\\begin{equation}\n\\int dydz b^4(z)a^3(y)\\left(f^{(j,k)}_{+R,+L,-R,-L}(y,z)\\right)^*\nf^{(j^\\prime,k^\\prime)}_{+R,+L,-R,-L}(y,z) =\n\\delta^{j,j^\\prime}\\delta^{k,k^\\prime},\n\\label{E:fnorm}\n\\end{equation}\n\\begin{eqnarray}\n(i{\\cal D}_y+{\\cal D}_z)f^{(j,k)}_{+R}(y,z)&=&-M_{j,k}f^{(j,k)}_{+L}(y,z),\n\\nonumber \\\\\n(-i{\\cal D}_y+{\\cal D}_z)f^{(j,k)}_{+L}(y,z)&=&-M_{j,k}f^{(j,k)}_{+R}(y,z),\n\\nonumber \\\\\n(i{\\cal D}_y+{\\cal D}_z)f^{(j,k)}_{-L}(y,z)&=&M_{j,k}f^{(j,k)}_{-R}(y,z),\n\\nonumber \\\\\n(-i{\\cal D}_y+{\\cal D}_z)f^{(j,k)}_{-R}(y,z)&=&M_{j,k}f^{(j,k)}_{-L}(y,z),\n\\label{E:feigen}\n\\end{eqnarray}\nwhere the differential operators are: ${\\cal D}_y=\\frac{i}{2R_y}(4\\partial_ya\n+2a\\partial_y)$ and ${\\cal D}_z=\\frac{i}{2r_z}a(5\\partial_zb\n+2b\\partial_z)$.\nHere, $M_{j,k}$ is the mass of the KK fermion $\\psi^{(j,k)}$.\n\nAs explained previously, we are interested in the coupling of the standard model fermions\nwith the KK modes of the gauge field. The zero modes of the KK\nfermions are identified with the standard model fermions. The wave\nfunctions for these fields can be obtained from eq. (\\ref{E:feigen})\nby putting $M_{j,k}=0$. Here we show the solution for the wave\nfunction $f_{+R}^{(0,0)}(y,z)$; the solutions for the other chiral\nfermions can be worked out analogously. The eigenvalue equation\nwe are interested in is\n\\begin{equation}\n(i{\\cal D}_y+{\\cal D}_z)f_{+R}^{(0,0)}(y,z)=0,\n\\end{equation}\nwhere the differential operators ${\\cal D}_y$ and ${\\cal D}_z$ are\ndefined below eq. (\\ref{E:feigen}). For the zero-mode case\nwe can write the function $f_{+R}^{(0,0)}$ as a product of\n$y$ and $z$ parts, say $f_{+R}^{(0,0)}(y,z)=f_y(y)f_z(z)$.\nThis simplification happens only for the zero-mode case, since\nthe factor $a(y)$ in the operators ${\\cal D}_{y,z}$ can be taken\nout and the right-hand side of the above equation is zero.\nSubstituting this form of $f_{+R}^{(0,0)}(y,z)$ in the above equation we get\n\\begin{equation}\n-\\frac{1}{R_y}\\frac{(-4c+2\\partial_y)f_y}{f_y}\n+i\\frac{1}{r_z}\\frac{(5\\partial_zb+2b\\partial_z)f_z}{f_z} = 0\n\\end{equation}\nSince the functional dependences on $y$ and $z$ are completely\nseparated, we can solve for $f_y$ and $f_z$ by taking\n$\\frac{(-4c+2\\partial_y)f_y}{f_y}=c_1$, where $c_1$ is a separation\nconstant. In terms of $c_1$ the functional dependences of\n$f_y$ and $f_z$ are given below:\n\\begin{equation}\nf_y(y)=\\exp\\left(\\frac{1}{2}(c_1+4c)y\\right),\\quad\nf_z(z)=\\frac{\\exp\\left(\\frac{-ic_1r_z}{kR_y}\\tan^{-1}(\\tanh(kz\/2))\\cosh(k\\pi)\\right)}\n{\\cosh^{5\/2}(kz)}\n\\label{E:SMwf}\n\\end{equation}\nThe value of $c_1$ can be worked out in terms of $c$ and $k$\nby normalizing the wave\nfunction $f_{+R}^{(0,0)}(y,z)$ using eq. 
(\\ref{E:fnorm}).\n\n\\section{Bulk phenomenology}\n\nIn the previous section we have given a description of the\nKK modes of the gauge bosons and fermions in the bulk of a\n6-dimensional doubly warped model. Now, using the mode\nexpansion for the bulk fields, we can calculate the gauge\ncouplings of standard model fermions with the KK modes of the\ngauge bosons.\n\nWe now address the two problems mentioned in the beginning.\nRecall that in the 5-dimensional RS model it was found\nthat for a non-zero bulk mass of a non-Abelian gauge field the lowest lying\nmode has a mass much higher than $100$ GeV, {\\it i.e.} much higher than the\nmasses of the $W$ and $Z$ bosons. Also,\nfor the massless gauge boson the gauge coupling with the first excited KK gauge boson\nis larger than one and hence the standard model fermions are\nstrongly coupled \\cite{DHR,CHNOY}. Due to this a stringent\nlower bound of $\\sim$ 10 TeV\non the mass of the first excited KK boson arose because of\nthe precision electroweak tests.\n\nIn this section we repeat\nthis exercise in the 6-dimensional doubly warped model, and\nwill show that due to the presence of an additional modulus we can tune\nthe lowest KK mode mass for a non-Abelian gauge field\nto near $100$ GeV although the spontaneous symmetry breaking takes place in the bulk\nwith a bulk Higgs field with vev $\\sim$ Planck scale. \nFurthermore, for a gauge boson with a zero bulk mass the coupling to mass ratio\nof the first excited KK mode can survive the precision \nelectroweak test without putting any additional restriction on the model.\n\nThe interaction action between the bulk fermions and gauge bosons\ncan be written as \\cite{DHR,CHNOY}\n\\begin{equation}\nS_{\\rm int} = \\int d^4xdydz\\sqrt{-G} g_{6d}\\bar{\\Psi}(x^\\mu,y,z)i\\Gamma^aE^A_aA_A(x^\\mu,y,z)\n\\Psi(x^\\mu,y,z),\n\\end{equation}\nwhere $g_{6d}$ is the gauge coupling in 6 dimensions, as discussed in Sec. 3. \nSubstituting\nthe KK decomposition for the gauge and fermion fields in the above equation and\nrecalling that we are working in the gauge choice where $A_4=A_5=0$,\nwe get the gauge coupling in 4 dimensions as\n\\begin{equation}\ng^{(j,k)(n,p)}_{+R} = \\int dydz~g_0\\pi b^4a^3\\left(f_{+R}^{(j,k)}(y,z)\\right)^*\nf_{+R}^{(j,k)}(y,z)\\xi_n(y)\\chi_p(z),\n\\label{E:coup}\n\\end{equation}\nwhere $g_0=g_{6d}\/\\sqrt{\\pi R_y\\pi r_z}$ is the effective 4-dimensional\ngauge coupling. In the above equation we have given the gauge coupling of\nthe KK fermion $\\psi_{+R}^{(j,k)}$ with the KK gauge field $A_\\mu^{(n,p)}$.\nSimilarly, the gauge couplings of the other KK fermions can be easily\nobtained by replacing the $+$ with $-$ and $R$ with $L$ accordingly\nin the above equation. The KK wave functions $f$, $\\xi$ and $\\chi$\nin the above equation should be the normalized wave functions as given\nby eqs. (\\ref{E:gnorm}) and (\\ref{E:fnorm}). However, since we are here\ninterested in precision electroweak tests,\nwe compute the gauge couplings of the standard model fermions with the\nKK gauge fields. Hence, the wave functions for the fermions are of\nthe form in eq. (\\ref{E:SMwf}) and the corresponding functions for\nthe KK gauge fields are given in eqs. 
(\\ref{E:Zeigen}) and (\\ref{E:Yeigen}).\n\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{||c|c|c||c|c|c||} \\hline\n\\multicolumn{3}{||c||}{$\\frac{M}{k^\\prime}=0.5$}\n& \\multicolumn{3}{|c||}{$\\frac{M}{k^\\prime}=1.0$}\\\\\\hline\n$m_{1,2}$ = 143.78 & $m_{1,3}$ = 194.15 & $m_{1,4}$ = 244.52 &\n$m_{1,2}$ = 148.33 & $m_{1,3}$ = 199.21 & $m_{1,4}$ = 249.58 \\\\\n$\\tilde{g}^{1,2}$ = 0.0015 & $\\tilde{g}^{1,3}$ = 0.0028 &\n$\\tilde{g}^{1,4}$ = 0.0009 &\n$\\tilde{g}^{1,2}$ = 0.0006 & $\\tilde{g}^{1,3}$ = 0.0027 &\n$\\tilde{g}^{1,4}$ = 0.0011\\\\\\hline\n$m_{2,1}$ = 180.23 & $m_{2,2}$ = 237.94 & $m_{2,3}$ = 295.40 &\n$m_{2,1}$ = 184.53 & $m_{2,2}$ = 243.25 & $m_{2,3}$ = 300.97 \\\\\n$\\tilde{g}^{2,1}$ = 0.0127 & $\\tilde{g}^{2,2}$ = 0.0012 &\n$\\tilde{g}^{2,3}$ = 0.0023 &\n$\\tilde{g}^{2,1}$ = 0.0124 & $\\tilde{g}^{2,2}$ = 0.0005 &\n$\\tilde{g}^{2,3}$ = 0.0022 \\\\\\hline\n$m_{3,1}$ = 261.73 & $m_{3,2}$ = 322.73 & $m_{3,3}$ = 383.48 &\n$m_{3,1}$ = 266.29 & $m_{3,2}$ = 328.30 & $m_{3,3}$ = 389.56 \\\\\n$\\tilde{g}^{3,1}$ = 0.0106 & $\\tilde{g}^{3,2}$ = 0.0010 &\n$\\tilde{g}^{3,3}$ = 0.0019 &\n$\\tilde{g}^{3,1}$ = 0.0104 & $\\tilde{g}^{3,2}$ = 0.0004 &\n$\\tilde{g}^{3,3}$ = 0.0019 \\\\\\hline\n\\end{tabular}\n\\end{center}\n\\caption{The gauge couplings $g^{(0,0)(i,j)}$ of the standard model fermions\nare given in the form $\\tilde{g}^{i,j}=\\frac{g^{(0,0)(i,j)}}{g_0}$, where\n$g_0$ is the effective 4-dimensional gauge coupling. The masses of the KK\ngauge bosons $m_{i,j}$ are given in GeV units.\n$M$ is the bulk gauge boson mass and $k^\\prime = k\/r_z$.\nThe non-zero values for $\\frac{M}{k^\\prime}$ are indicated in\nthe table. $\\frac{1}{r_z}= 7\\times 10^{17}$ GeV,\n$k$ = 0.25 and $c$ = 11.52. The lowest lying mode $m_{1,1}$, which corresponds to the $W$ or $Z$ boson, is not included in the table.}\n\\label{T:t1}\n\\end{table}\nNow, in order to compute the gauge couplings the unknown parameters\nthat need to be fixed are $k$, $c$, $r_z$ and the bulk mass of the gauge fields\n$M$. The non-zero value for the bulk gauge mass $M$ is around the Planck scale.\nWe can determine the remaining parameters by making the following demands:\n(a) the lowest non-zero mass of the KK tower of the bulk gauge boson should be\nidentified with either the $W$ or $Z$ boson mass, (b) the suppression $f$ of\neq. (\\ref{E:supp}) should be $\\sim 10^{-16}$ and (c) the hierarchy\nbetween the moduli $R_y$ and $r_z$ should not be too large. The expression for the\nKK gauge boson mass is given in eq. (\\ref{E:xn}). For the lowest non-zero\nKK gauge boson mass, which is identified as $m_{1,1}$, the root $x_{\\nu_n}$ would be ${\\cal O}(1)$. The factor $\\exp(c\\pi)\n\\cosh(k\\pi)$ in this equation, which is the inverse of $f$,\nshould be $\\sim 10^{16}$. By demanding that the lowest\nKK mode $m_{1,1}$ has a mass of $\\sim 100$ GeV, from eq. (\\ref{E:xn}) we can naively\nestimate that $k^\\prime\\sim 10^{17}$ GeV. Since, as argued in Sec. 2,\n$k\\sim 0.1$, a consistent value for the scale $r_z$ is\n$\\frac{1}{r_z}=7\\times 10^{17}$ GeV, which is about 14 times smaller than\nthe Planck scale. The parameters $k$ and $c$ can\nbe determined from the fact that we should not get a large hierarchy between\nthe moduli $R_y$ and $r_z$ and also we should get the desired suppression\nof $f\\sim 10^{-16}$ on the standard model brane. We have estimated that for\n$k$ = 0.25 and $c$ = 11.52, the ratio between the moduli is $\\frac{R_y}{r_z}$ = 61,\nwhich is not unacceptably large, and also the suppression $f$\ncomes out to be $1.45\\times 10^{-16}$. 
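As a quick numerical cross-check, both of these numbers follow directly from eqs. (\\ref{E:sol}) and (\\ref{E:supp}); a minimal sketch:\n\\begin{verbatim}\n# Cross-check of the quoted parameter point; uses only eq. (E:sol)\n# and eq. (E:supp) with k = 0.25 and c = 11.52.\nimport numpy as np\nk, c = 0.25, 11.52\nf = np.exp(-c*np.pi)/np.cosh(k*np.pi)  # suppression factor, eq. (E:supp)\nratio = c*np.cosh(k*np.pi)/k           # R_y/r_z, inverted from eq. (E:sol)\nprint(f, ratio)                        # f ~ 1.45e-16, R_y/r_z ~ 61\n\\end{verbatim}\n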
For these particular values of\n$k$, $c$ and $1\/r_z$ we have given the gauge\ncouplings and the corresponding masses of the excited KK gauge fields\nin Table \\ref{T:t1}. In this table the gauge couplings $g^{(i,j)}$,\nwhere $i,j$ are integers, of the standard\nmodel fermion are given as a fraction of the 4-dimensional coupling\n$g_0$. In the case of $\\frac{M}{k^\\prime}$ = 0.5 or 1.0,\nwe have found that the lowest non-zero mode has a mass of about 95 GeV.\nHence this mode can be identified with the $W$ or $Z$ gauge boson. Since\nwe obtain the correct $W,Z$ boson mass scale for the above\nvalues of $k,c,\\frac{1}{r_z}$, we use the same set of\nvalues to get the gauge couplings and KK gauge boson masses in the\ncase where the bulk mass $M$ is zero. In this case we have found\nthat the lowest mode $m_{0,0}$ is massless and can be identified with\nthe photon. The non-zero KK masses of the photon field and\ntheir corresponding gauge coupling values are given in Table \\ref{T:t2}.\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{||c|c|c||} \\hline\n\\multicolumn{3}{||c||}{$\\frac{M}{k^\\prime}=0$}\\\\ \\hline\n$m_{1,1}$ = 93.15 & $m_{1,2}$ = 142.0 & $m_{1,3}$ = 192.38 \\\\\n$\\tilde{g}^{1,1}$ = 0.0168 & $\\tilde{g}^{1,2}$ = 0.0019 &\n$\\tilde{g}^{1,3}$ = 0.0028 \\\\\\hline\n$m_{2,1}$ = 178.45 & $m_{2,2}$ = 235.91 & $m_{2,3}$ = 293.12 \\\\\n$\\tilde{g}^{2,1}$ = 0.0128 & $\\tilde{g}^{2,2}$ = 0.0015 &\n$\\tilde{g}^{2,3}$ = 0.0023 \\\\\\hline\n$m_{3,1}$ = 259.96 & $m_{3,2}$ = 320.71 & $m_{3,3}$ = 381.21 \\\\\n$\\tilde{g}^{3,1}$ = 0.0107 & $\\tilde{g}^{3,2}$ = 0.0012 &\n$\\tilde{g}^{3,3}$ = 0.0020 \\\\\\hline\n\\end{tabular}\n\\end{center}\n\\caption{The gauge couplings $g^{(0,0)(i,j)}$ of the standard model fermions\nare given in the form $\\tilde{g}^{i,j}=\\frac{g^{(0,0)(i,j)}}{g_0}$, where\n$g_0$ is the effective 4-dimensional gauge coupling. The masses of the KK\ngauge bosons $m_{i,j}$ are given in GeV units.\n$M$ is the bulk gauge boson mass, which is taken to be zero, and $k^\\prime = k\/r_z$.\nThe values of $k$, $c$ and $r_z$ in this case are the same as in Table \\ref{T:t1}. The lowest\nlying mode $m_{0,0}$, which corresponds to the photon, is not included in the table.}\n\\label{T:t2}\n\\end{table}\n\nAs stated in Sec. \\ref{S:intro}, the 5-dimensional RS model\nsuffers from the precision electroweak tests because\nthe first excited KK gauge boson has a coupling larger than one with\nthe standard model fields. To parameterize the precision electroweak\nconstraints in extra dimensional models the following quantity\nhas been defined \\cite{DHR,RW}:\n\\begin{equation}\nV=\\sum_n\\left(\\frac{g_n}{g_0}\\frac{m_W}{M_n}\\right)^2.\n\\label{E:peV}\n\\end{equation}\nHere, $m_W$ is the mass of the $W$ gauge boson and $M_n$ is\nthe higher KK gauge boson mass.\nThe summation on $n$ in the above equation is over all the higher\nKK gauge masses $M_n$ with corresponding gauge couplings $g_n$.\nIn our 6-dimensional model the index $n$ would be replaced by\na pair of integers and we should sum over all non-zero higher order modes.\nIt has been shown that by fitting to the precision electroweak\nobservables the quantity $V$ should satisfy the condition:\n$V <$ 0.0013 at 95$\\%$ confidence level \\cite{DHR}. 
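To illustrate how such an estimate is assembled, the following minimal sketch evaluates eq. (\\ref{E:peV}) with the zero-bulk-mass entries of Table \\ref{T:t2}, assuming $m_W\\simeq 80.4$ GeV and truncating the sum at the modes listed there.\n\\begin{verbatim}\n# Sketch: estimate of V, eq. (E:peV), from the (mass, coupling) entries\n# of Table 2 (M = 0 case). Assumes m_W = 80.4 GeV; the sum is truncated\n# at the listed modes, whose couplings already fall off rapidly.\nmW = 80.4\nmodes = [(93.15, 0.0168), (142.0, 0.0019), (192.38, 0.0028),\n         (178.45, 0.0128), (235.91, 0.0015), (293.12, 0.0023),\n         (259.96, 0.0107), (320.71, 0.0012), (381.21, 0.0020)]\nV = sum((g*mW/M)**2 for M, g in modes)\nprint(V)   # ~ 2.6e-4, well below the bound of 0.0013\n\\end{verbatim}\n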
It can be easily checked\nthat this bound is respected by the gauge couplings and\nthe KK masses of Tables \\ref{T:t1} and \\ref{T:t2}.\nFrom both these tables, we notice that the\ngauge couplings decrease with increasing KK gauge mass\nfor a particular value of $\\frac{M}{k^\\prime}$. Hence, in the summation\nof eq. (\\ref{E:peV}) only the first few higher KK modes are relevant.\nWe have checked that for $\\frac{M}{k^\\prime}$ = 0.5, $V$ comes out\nto be about $5\\times 10^{-5}$. In the case of the photon, where the bulk mass\nis zero, the value of $V$ is found to be about 0.000257. From these\nresults we can conclude that the precision electroweak tests can be\nsatisfied in the 6-dimensional doubly warped model without\nintroducing too much hierarchy in the moduli $R_y$ and $r_z$.\n\nAt the Tevatron the higher KK gauge bosons have been searched for\nin the channel $P\\bar{P}\\to X\\to e^+e^-$ and a\nlimit of $M_T>$ 700 GeV on the heavy vector gauge boson ($X$) mass has\nbeen set \\cite{Teva}.\nIn our case the gauge couplings of the higher KK modes are\nreduced from the 4-dimensional coupling $g_0$ by the\nfactors given in Tables \\ref{T:t1} and \\ref{T:t2}.\nHence, in our case, the Tevatron bounds for the non-zero KK mode\nmasses should be greater than 700$\\times\\tilde{g}^{i,j}$ GeV.\nAs an example, the 93.15 GeV KK mode of Table \\ref{T:t2}\nhas a gauge coupling ratio of 0.0168. Hence, the lower bound\nfrom the Tevatron on this KK mode mass would be about 12 GeV,\nwhich is much lower than our calculated value of 93.15 GeV.\nLikewise, from each column of Tables \\ref{T:t1} and \\ref{T:t2} it\ncan be easily seen that the above mentioned Tevatron bounds\nare satisfied. So the 6-dimensional warped model is not only\nfree from the precision electroweak constraints but also from\nthe Tevatron limits.\n\n\\section{Conclusions}\nExtra dimensional phenomenological models in a warped geometry encounter\nproblems in putting the Higgs and the gauge\nfields in the bulk. It was shown that it is impossible to\nconstruct proper $W$ and $Z$ boson masses on the brane from the KK modes of a\nnon-Abelian bulk gauge field through spontaneous symmetry breaking\nin the bulk. Also, couplings and masses for the first KK excitation of\na massless bulk gauge field consistent with\nthe electroweak precision tests as well as the Fermilab Tevatron mass bound are hard to obtain\nwithout changing the bulk parameters of the theory\nfrom their desired values. In this work we have shown that it is possible to resolve both\nthese problems in a multiply warped geometry model\nwhere there is more than one modulus. Considering a 6-dimensional model we have shown that\nby setting one of the moduli approximately \ntwo orders of magnitude smaller than the Planck scale, we can arrange the mass for the lowest lying mode of the\nbulk gauge field (with bulk mass $\\sim M_P$,\nacquired through spontaneous symmetry breaking in the bulk) on the TeV-brane \nto be of the order of $100$ GeV, which therefore may be identified with the $W$, $Z$ boson mass.\nMoreover, such a choice for the moduli, which does\nnot contradict the main spirit of the RS model, lowers the coupling of the first\nKK excitation of a massless bulk gauge field so that\nit can escape the electroweak precision tests. We have determined the KK mode masses\nas well as their couplings for different choices of\nthe parameters of the theory, namely the ratio of the bulk mass to the bulk\ncosmological constant. 
In the entire analysis the value of the\nwarp factor is maintained at $10^{-16}$ so that the resolution of the gauge hierarchy problem,\nthe main objective of these models, can be achieved. These findings can be easily extended to models with an even larger\nnumber of warped extra dimensions \\cite{cs}. One would then arrive at similar conclusions with a smaller hierarchy among the different\nmoduli. We can therefore conclude that\na consistent description of bulk Higgs and gauge fields with spontaneous symmetry breaking\nin the bulk can be obtained in a warped geometry model\nif the RS model in 5 dimensions is generalized to six or higher dimensions with more than one modulus. The phenomenology of these models \ntherefore becomes an interesting area of study for the forthcoming collider experiments. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nMultifunctional material systems unite different, e.g., electric and magnetic, functionalities in either a single phase or a heterostructure. They thus are of great fundamental and technological interest. If the electric and magnetic properties are furthermore coupled, it becomes possible to control either the magnetization by the application of an electric field or the electric polarization via a magnetic field alone. An electric field control of magnetization is particularly appealing, as it removes the need to generate magnetic fields of sufficient strength for magnetization switching on small length scales -- thus enabling novel concepts for high density magnetic data storage applications.\n\\begin{figure}\n\\centering\n \n \\includegraphics[]{img\/figure1a_to_1e}\\\\\n \\caption{(a) Schematic illustration of the ferromagnetic thin film\/piezoelectric actuator hybrid\\xspace. (b), (d) The application of a voltage $\\ensuremath{V_\\mathrm{p}}\\xspace\\neq\\unit{0}{\\volt}$ to the actuator results in a deformation of the actuator and the affixed ferromagnetic film. The relaxed actuator at $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{0}{\\volt}$ is shown by dotted contours. (c), (e) Schematic free energy contours in the film plane. The magnetic easy axis (e.a.) shown by the thick dashed line is oriented parallel to the compressive strain and can thus be rotated by 90\\degree by changing the polarity of $\\ensuremath{V_\\mathrm{p}}\\xspace$.}\\label{fig:panel1}\n\\end{figure}\nThus, techniques to control the magnetization by means of electric fields or currents have recently been vigorously investigated and several schemes for an electric control of magnetization have been reported. These include the spin-torque effect~\\cite{Slonczewski:1996, berger:1996, tsoi:1998, zhang:2002} in spin-valves and magnetic tunnel junctions and the direct electric field control of magnetization in intrinsically multiferroic materials~\\cite{spaldin:multiferroics, Eerenstein:2006, Fiebig2005,ramesh:2007,Lottermoser2004, gajek:2007, Chu2008, Zhao:2006} or in ferromagnetic\/ferroelectric heterostructures~\\cite{Stolichnov:2008, binek:2005}. 
A third, very attractive approach for the electric field control of magnetization takes advantage of the elastic channel, i.e., magnetostrictive coupling~\\cite{bootsmann:2005, wan:2006, Doerr:2006, Goennenwein:2008, brandlmaier:2008, Bihler:2008, overby:2008, jungwirth:2008, wang:2005}.\nWe here show that magnetoelasticity -- that is the effect of lattice strain on magnetic anisotropy -- in a polycrystalline nickel film\/piezoelectric actuator hybrid structure allows the magnetic easy axis in the Ni film to be switched by 90\\degree at room temperature by simply changing the polarity of the voltage applied to the actuator.\nThis can be used to achieve either an irreversible magnetization orientation control of 180\\degree or a reversible control close to 90\\degree, depending on whether the magnetization orientation is prepared in a local or global minimum of the free energy via a magnetic field sweep prior to the electric control experiment.\nWe use ferromagnetic resonance (FMR) to quantify the effect of an electric field on magnetic anisotropy and superconducting quantum interference device (SQUID) magnetometry to directly record the evolution of magnetization as a function of the applied electric field. These experiments demonstrate that the strain-mediated electric field control of magnetization indeed is a viable technique in technologically relevant ferromagnetic thin films at room temperature.\n\\section{Sample preparation and experimental techniques}\\label{sec:samples}\nWe fabricated ferromagnetic thin film\/piezoelectric actuator structures by depositing nickel (Ni) thin films onto piezoelectric Pb(Zr$_x$Ti$_{1-x}$)O$_3$-based actuators~\\cite{manual:piezo}. Nickel was chosen as the ferromagnetic constituent as it is a prototype 3d itinerant ferromagnet with a Curie temperature $T_\\mathrm{c}=\\unit{627}{\\kelvin}$ well above room temperature~\\cite{kittel:ssp}, a high bulk saturation magnetization $M_\\mathrm{s}=\\unit{411}{\\kilo\\ampere\\per\\meter}$~\\cite{danan:1968} and sizeable volume magnetostriction~\\cite{chikazumi:ferromagnetism, lee:1955} $\\overline{\\lambda}=\\frac{2}{5}\\lambda_{100}+\\frac{3}{5}\\lambda_{111}=-32.9\\times10^{-6}$ with $\\lambda_{100}$ and $\\lambda_{111}$ being the single crystal saturation magnetostriction for a magnetic field applied along a crystalline $<100>$ or $<111>$ axis, respectively.\nThe actuators exhibit a hysteretic mechanical stroke of up to $1.3\\times10^{-3}$ along their dominant elongation axis [cf. Fig.~\\ref{fig:panel1}(a)] if voltages $\\unit{-30}{\\volt}\\leq \\ensuremath{V_\\mathrm{p}}\\xspace \\leq\\unit{+150}{\\volt}$ are applied.\nPrior to the deposition of the Ni film, the actuators were mechanically polished to a size of $x\\times y\\times z=\\unit{3\\times2.6\\times2}{\\milli\\meter\\cubed}$ [cf. Fig.~\\ref{fig:panel1}(a)] to match the sample size to the restrictions imposed by the ferromagnetic resonance setup. We then used electron beam evaporation at a base pressure of \\unit{2.0\\times10^{-8} }{\\milli\\bbar} to deposit a \\unit{70}{\\nano\\meter} thick Ni film onto an area of \\unit{5}{\\milli\\meter\\squared} on the \\ensuremath{\\mathbf{\\hat{x}}}\\xspace-\\ensuremath{\\mathbf{\\hat{y}}}\\xspace face of the actuators. To prevent oxidation of the Ni film, a \\unit{10}{\\nano\\meter} thick Au film was deposited in-situ on top of the Ni layer. 
The multifunctional hybrid obtained after Ni deposition is sketched schematically in Fig.~\\ref{fig:panel1}(a) together with the definition of the angles that describe the orientation of the magnetization $\\mathbf{M}=(M,\\Theta,\\Phi)$ and the external magnetic field $\\mathbf{H}=(H,\\theta,\\phi)$ in the sample-affixed coordinate system.\n\nTo determine the static magnetic response of the ferromagnetic thin film\/piezoelectric actuator hybrid\\xspace we employ superconducting quantum interference device (SQUID) magnetometry. The Quantum Design MPMS-XL-7 SQUID magnetometer is sensitive to the projection $m=\\mathbf{m}\\mathbf{\\hat{H}}$ of the total magnetic moment $\\mathbf{m}$ onto the unit vector $\\mathbf{\\hat{H}}=\\frac{\\mathbf{H}}{H}$. We corrected $m$ for the paramagnetic contribution of the actuator and used the Ni film volume $V=\\unit{3.5\\times10^{-13}}{\\meter\\cubed}$ to calculate the respective projection $M=m\/V$ of the magnetization onto $\\mathbf{\\hat{H}}$. All magnetometry data shown in the following were recorded at a temperature $T=\\unit{300}{\\kelvin}$.\n\nThe magnetic anisotropy of the ferromagnetic thin film\/piezoelectric actuator hybrid\\xspace was measured by ferromagnetic resonance (FMR) at room temperature. We use a Bruker ESP 300 spin resonance spectrometer with a TE$_{102}$ cavity operating at a constant microwave frequency $\\nu_\\mathrm{MW}=\\unit{9.3}{\\giga\\hertz}$. The sample can be rotated in the FMR setup with respect to the external dc magnetic field, so that either $\\theta$ or $\\phi$ [cf. Fig.~\\ref{fig:panel1}(a)] can be adjusted at will. To allow for lock-in detection we use magnetic field modulation at a modulation frequency of \\unit{100}{\\kilo\\hertz} with a modulation amplitude of $\\mu_0 H_\\mathrm{mod}=\\unit{3.2}{\\milli\\tesla}$.\n\n\\section{Phenomenology of strain-induced magnetic anisotropy}\\label{sec:theory}\n\nThe piezoelectric actuator deforms upon the application of a voltage $\\ensuremath{V_\\mathrm{p}}\\xspace \\neq \\unit{0}{\\volt}$. Due to its elasticity, an elongation (contraction) along one cartesian direction is always accompanied by a contraction (elongation) in the two orthogonal directions. Therefore, for $\\ensuremath{V_\\mathrm{p}}\\xspace>\\unit{0}{\\volt}$, the actuator expands along its dominant elongation axis \\ensuremath{\\mathbf{\\hat{y}}}\\xspace and contracts along the two orthogonal directions \\ensuremath{\\mathbf{\\hat{x}}}\\xspace and \\ensuremath{\\mathbf{\\hat{z}}}\\xspace. The Ni film affixed to the \\ensuremath{\\mathbf{\\hat{x}}}\\xspace-\\ensuremath{\\mathbf{\\hat{y}}}\\xspace face of the actuator is hence strained tensilely along \\ensuremath{\\mathbf{\\hat{y}}}\\xspace and compressively along \\ensuremath{\\mathbf{\\hat{x}}}\\xspace for $\\ensuremath{V_\\mathrm{p}}\\xspace>\\unit{0}{\\volt}$ [cf. Fig.~\\ref{fig:panel1}(b)]. For $\\ensuremath{V_\\mathrm{p}}\\xspace<\\unit{0}{\\volt}$ the actuator contracts along \\ensuremath{\\mathbf{\\hat{y}}}\\xspace and thus expands along \\ensuremath{\\mathbf{\\hat{x}}}\\xspace and \\ensuremath{\\mathbf{\\hat{z}}}\\xspace and the Ni film thus exhibits a compressive strain along \\ensuremath{\\mathbf{\\hat{y}}}\\xspace and a tensile strain along \\ensuremath{\\mathbf{\\hat{x}}}\\xspace [cf. Fig.~\\ref{fig:panel1}(d)].\n\nTo describe the impact of this lattice strain on the Ni magnetization orientation we use a magnetic free energy density approach. 
The free energy $F_\\mathrm{tot}$ is a measure of the angular dependence\nof the magnetic hardness, with maxima in $F_\\mathrm{tot}$ corresponding to magnetically hard directions and minima to magnetically easy directions. In equilibrium, the magnetization always resides in a local minimum of $F_\\mathrm{tot}$. In contrast to single-crystalline films, the evaporated Ni films are polycrystalline and thus show no net crystalline magnetic anisotropy which could compete with strain-induced anisotropies and thereby reduce the achievable magnetization orientation effect~\\cite{brandlmaier:2008}. $F_\\mathrm{tot}$ is thus given by\n\\begin{equation}\\label{eq:Ftot}\n F_\\mathrm{tot}=F_\\mathrm{stat}+F_\\mathrm{demag}+F_\\mathrm{magel}\\;.\n\\end{equation}\nThe first term $F_\\mathrm{stat} = -\\mu_0 M H(\\sin\\Theta \\sin\\Phi \\sin\\theta \\sin\\phi +\\cos\\Theta \\cos\\theta+\\sin\\Theta \\cos\\Phi \\sin\\theta \\cos\\phi)$ in Eq.~\\eref{eq:Ftot} is the Zeeman term and describes the influence of an external magnetic field $\\mathbf{H}$ on the orientation of $\\mathbf{M}$. The uniaxial demagnetization term $F_\\mathrm{demag}=\\frac{\\mu_0}{2} M^2 \\sin^2\\Theta\\cos^2\\Phi$ is the anisotropy caused by the thin-film shape of the sample~\\cite{Morrish2001}. The last contribution to Eq.~\\eref{eq:Ftot}\n\\begin{eqnarray}\\label{eq:Fmagel}\n F_\\mathrm{magel}&=&\\frac{3}{2} \\overline{\\lambda} \\left(c^\\mathrm{Ni}_{12}-c^\\mathrm{Ni}_{11}\\right)\n \\bigl[\\varepsilon_1(\\sin^2\\Theta\\sin^2\\Phi-1\/3) \\\\\n &&+\\varepsilon_2(\\cos^2\\Theta-1\/3)+\\varepsilon_3(\\sin^2\\Theta\\cos^2\\Phi-1\/3) \\bigr]\\nonumber\n\\end{eqnarray}\ndescribes the influence of the lattice strains on the magnetic anisotropy~\\cite{chikazumi:ferromagnetism}. The strains along the \\ensuremath{\\mathbf{\\hat{x}}}\\xspace-, \\ensuremath{\\mathbf{\\hat{y}}}\\xspace- and \\ensuremath{\\mathbf{\\hat{z}}}\\xspace-axes are denoted in Voigt (matrix) notation~\\cite{nye:1985} as $\\varepsilon_1, \\varepsilon_2$ and $\\varepsilon_3$, respectively. Furthermore, $c^\\mathrm{Ni}_{11}=\\unit{2.5\\times10^{11}}{\\newton\\per\\meter^2}$ and $c^\\mathrm{Ni}_{12}=\\unit{1.6\\times10^{11}}{\\newton\\per\\meter^2}$ are the elastic moduli of Ni~\\cite{lee:1955}. The effects of shear strains ($\\varepsilon_i, i \\in \\{4,5,6\\}$) average out in our polycrystalline film and thus are neglected.\n\nUsing $\\sin^2\\Theta\\sin^2\\Phi+\\cos^2\\Theta+\\sin^2\\Theta\\cos^2\\Phi=1$ and omitting isotropic terms, Eq.~\\eref{eq:Fmagel} can be rewritten as\n\\begin{equation}\\label{eq:Fmagel_final}\n F_\\mathrm{magel}=K_\\mathrm{u,magel,y} \\cos^2\\Theta + K_\\mathrm{u,magel,z} \\sin^2\\Theta \\cos^2\\Phi\n\\end{equation}\nwith\n\\begin{eqnarray}\\label{eq:Kmagel}\n K_\\mathrm{u,magel,y} &=& \\frac{3}{2} \\overline{\\lambda}(c^\\mathrm{Ni}_{12}-c^\\mathrm{Ni}_{11})(\\varepsilon_2-\\varepsilon_1) \\\\\n \\nonumber K_\\mathrm{u,magel,z} &=& \\frac{3}{2} \\overline{\\lambda} (c^\\mathrm{Ni}_{12}-c^\\mathrm{Ni}_{11})(\\varepsilon_3-\\varepsilon_1)\\;.\n\\end{eqnarray}\n\nDue to the elasticity of the actuator and the Ni film, the strains $\\varepsilon_i$ are not independent of each other. 
The strains $\\varepsilon_1$ and $\\varepsilon_2$ in the \\ensuremath{\\mathbf{\\hat{x}}}\\xspace-\\ensuremath{\\mathbf{\\hat{y}}}\\xspace plane are linked by the Poisson ratio $\\nu=0.45$ of the actuator~\\cite{manual:piezo} according to\n\\begin{equation}\\label{eq:poisson_ip}\n \\varepsilon_1=-\\nu \\varepsilon_2 \\;.\n\\end{equation}\nFurthermore, due to the elasticity of the Ni film, the out-of-plane strain $\\varepsilon_3$ can be expressed as~\\cite{brandlmaier:2008}\n\\begin{equation}\\label{eq:strain_oop}\n \\varepsilon_3=-\\frac{c^\\mathrm{Ni}_{12}}{c^\\mathrm{Ni}_{11}}(\\varepsilon_1+\\varepsilon_2) \\;.\n\\end{equation}\nAssuming a linear expansion of the actuator with lateral dimension $L$ parallel to its dominant elongation axis, we can calculate the strain in the Ni film parallel to the actuator's dominant elongation axis as\n\\begin{equation}\\label{eq:strain_voltage}\n \\varepsilon_2=\\frac{\\delta L}{L}\\frac{\\ensuremath{V_\\mathrm{p}}\\xspace}{\\unit{180}{\\volt}}\\;,\n\\end{equation}\nwhere $\\delta L \/L=1.3\\times10^{-3}$ is the nominal full actuator stroke for the full voltage swing $\\unit{-30}{\\volt}\\leq \\ensuremath{V_\\mathrm{p}}\\xspace \\leq\\unit{+150}{\\volt}$.\nWith Eqs.~\\eref{eq:poisson_ip}, \\eref{eq:strain_oop} and~\\eref{eq:strain_voltage} all strains in the Ni film can thus directly be calculated for any given voltage \\ensuremath{V_\\mathrm{p}}\\xspace. This allows to determine the equilibrium orientation of the magnetization as a function of \\ensuremath{V_\\mathrm{p}}\\xspace by minimizing the total magnetoelastic free energy density $F_\\mathrm{tot}$ [Eq.~\\eref{eq:Ftot}].\n\nDue to the negative magnetostriction ($\\overline{\\lambda}<0$) of Ni and $c^\\mathrm{Ni}_{11}>c^\\mathrm{Ni}_{12}$, the in-plane easy axis of $F_\\mathrm{tot}$ is oriented orthogonal to tensile and parallel to compressive strains in the absence of external magnetic fields. Due to the strong uniaxial out-of-plane anisotropy caused by $F_\\mathrm{demag}$, the in-plane easy axis is the global easy axis. For $\\ensuremath{V_\\mathrm{p}}\\xspace>\\unit{0}{\\volt}$, the Ni film exhibits a tensile strain along \\ensuremath{\\mathbf{\\hat{y}}}\\xspace ($\\varepsilon_2>0$) and a compressive strain along \\ensuremath{\\mathbf{\\hat{x}}}\\xspace ($\\varepsilon_1<0$). The easy axis of $F_\\mathrm{tot}$ is thus oriented along the \\ensuremath{\\mathbf{\\hat{x}}}\\xspace-direction [cf. Fig.~\\ref{fig:panel1}(c)]. Accordingly, for $\\ensuremath{V_\\mathrm{p}}\\xspace<\\unit{0}{\\volt}$, the easy axis of $F_\\mathrm{tot}$ is oriented parallel to the compressive strain along \\ensuremath{\\mathbf{\\hat{y}}}\\xspace and orthogonal to the tensile strain along \\ensuremath{\\mathbf{\\hat{x}}}\\xspace [cf. Fig.~\\ref{fig:panel1}(e)]. We thus expect a 90\\degree rotation of the in-plane easy axis of $F_\\mathrm{tot}$ upon changing the polarity of \\ensuremath{V_\\mathrm{p}}\\xspace.\nNote that in the absence of external magnetic fields there are always two energetically equivalent antiparallel orientations of $\\mathbf{M}$ along the magnetic easy axis. 
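To make the connection between \\ensuremath{V_\\mathrm{p}}\\xspace and the magnetoelastic anisotropy explicit, the following minimal sketch (an idealization, since the real actuator stroke is hysteretic) evaluates Eqs.~\\eref{eq:Kmagel}, \\eref{eq:poisson_ip}, \\eref{eq:strain_oop} and~\\eref{eq:strain_voltage} for the Ni and actuator parameters quoted above.\n\\begin{verbatim}\n# Minimal sketch: strains and magnetoelastic anisotropy constants as a\n# function of V_p; the linear stroke relation idealizes the hysteretic\n# actuator. All material constants are the values quoted in the text.\nlam = -32.9e-6               # saturation magnetostriction of Ni\nc11, c12 = 2.5e11, 1.6e11    # elastic moduli of Ni (N/m^2)\nnu_act = 0.45                # Poisson ratio of the actuator\n\ndef anisotropy_constants(Vp):\n    eps2 = 1.3e-3*Vp/180.0             # strain along y, eq. (7)\n    eps1 = -nu_act*eps2                # in-plane strain along x, eq. (5)\n    eps3 = -(c12/c11)*(eps1 + eps2)    # out-of-plane strain, eq. (6)\n    Ky = 1.5*lam*(c12 - c11)*(eps2 - eps1)   # K_u,magel,y in J/m^3\n    Kz = 1.5*lam*(c12 - c11)*(eps3 - eps1)   # K_u,magel,z in J/m^3\n    return Ky, Kz\n\nfor Vp in (-30.0, 0.0, 20.0):\n    print(Vp, anisotropy_constants(Vp))\n\\end{verbatim}\nThe sign of $K_\\mathrm{u,magel,y}$ flips with the polarity of \\ensuremath{V_\\mathrm{p}}\\xspace, encoding the 90\\degree easy-axis rotation just discussed.\n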
However, for simplicity, we only show one of the two resulting possible in-plane orientations of $\\mathbf{M}$ in Figs.~\\ref{fig:panel1}(b) and~\\ref{fig:panel1}(d).\nNevertheless these panels show that it should be possible to change the orientation of $\\mathbf{M}$ from $\\mathbf{M} || \\ensuremath{\\mathbf{\\hat{x}}}\\xspace$ to $\\mathbf{M} || \\ensuremath{\\mathbf{\\hat{y}}}\\xspace$ via the application of appropriate voltages $\\ensuremath{V_\\mathrm{p}}\\xspace$ to the actuator.\n\nThe total magnetic free energy density $F_\\mathrm{tot}$ in Eq.~\\eref{eq:Ftot} is experimentally accessible by ferromagnetic resonance measurements. The FMR equations of motion~\\cite{smit:1954, smit:1955, suhl:1955}\n\\begin{eqnarray}\\label{eq:FMR_motion}\n\t \\left(\\frac{\\omega}{\\gamma}\\right)^2&=&\\frac{1}{M_\\mathrm{s}^2\\sin^2\\Theta} \\bigl.\\Bigl(\\left(\\partial_{\\Phi}^2F_\\mathrm{tot}\\right)\\left(\\partial_\\Theta^2 F_\\mathrm{tot}\\right)-\\left(\\partial_{\\Phi}\\partial_\\Theta F_\\mathrm{tot}\\right)^2\\Bigr)\\bigr|_{\\Theta_0,\\Phi_0}\\\\\n\\nonumber &&\\mathrm{and}~\\bigl.\\partial_\\Theta F_\\mathrm{tot}\\bigr|_{\\Theta=\\Theta_0} = \\bigl.\\partial_{\\Phi} F_\\mathrm{tot}\\bigr|_{\\Phi={\\Phi}_0}=0\n \\end{eqnarray}\nlink the experimentally determined ferromagnetic resonance field $\\mu_0H_\\mathrm{res}$ to the magnetic free energy density $F_\\mathrm{tot}$. In Eq.~\\eref{eq:FMR_motion}, $\\gamma$ is the gyromagnetic ratio and $\\omega=2\\pi\\nu_\\mathrm{MW}$ with the microwave frequency $\\nu_\\mathrm{MW}$.\nThe effect of damping is neglected in Eq.~\\eref{eq:FMR_motion}, as damping only affects the lineshape which will not be discussed in this article. We note that the FMR resonance field $\\mu_0H_\\mathrm{res}$ measured in experiment can be considered as a direct indicator of the relative magnetic hardness along the external dc magnetic field direction. From Eqs.~\\eref{eq:Ftot} and~\\eref{eq:FMR_motion} one finds that smaller resonance fields correspond to magnetically easier directions and larger resonance fields to magnetically harder directions.\n\\section{Ferromagnetic resonance}\\label{sec:fmr}\n\\begin{figure}\n \n \\includegraphics[width=\\textwidth]{img\/figure2a_2b}\\\\\n \\caption{(a) FMR spectra recorded with \\ensuremath{\\mathbf{H} || \\mathbf{\\hat{y}}}\\xspace at different voltages \\ensuremath{V_\\mathrm{p}}\\xspace. An increasing resonance field $\\mu_0 H_\\mathrm{res}$ (solid squares) is observed for increasing $\\ensuremath{V_\\mathrm{p}}\\xspace$ while the lineshape is not significantly altered. (b) The dependence of $\\mu_0 H_\\mathrm{res}$ on \\ensuremath{V_\\mathrm{p}}\\xspace is qualitatively different for \\ensuremath{\\mathbf{H} || \\mathbf{\\hat{y}}}\\xspace and \\ensuremath{\\mathbf{H} || \\mathbf{\\hat{x}}}\\xspace. Full symbols correspond to increasing $\\ensuremath{V_\\mathrm{p}}\\xspace$ and open symbols to decreasing $\\ensuremath{V_\\mathrm{p}}\\xspace$. The solid lines represent the resonance fields calculated from magnetoelastic theory (cf. Sec.~\\ref{sec:theory}) yielding very good agreement with the measurement. For \\ensuremath{\\mathbf{H} || \\mathbf{\\hat{y}}}\\xspace, increasing \\ensuremath{V_\\mathrm{p}}\\xspace increases $\\mu_0 H_\\mathrm{res}$ and thus the magnetic hardness of this direction, while for \\ensuremath{\\mathbf{H} || \\mathbf{\\hat{x}}}\\xspace, increasing \\ensuremath{V_\\mathrm{p}}\\xspace decreases $\\mu_0 H_\\mathrm{res}$ and thus the magnetic hardness of this direction. 
The hysteresis of $\\mu_0 H_\\mathrm{res}$ is due to the hysteretic stress-strain curve of the actuator.}\\label{fig:panel2}\n\\end{figure}\n\\begin{figure}\n \n \\includegraphics[width=\\textwidth]{img\/figure3a_3b}\\\\\n \\caption{(a) The symbols show the FMR resonance field $\\mu_0 H_\\mathrm{res}(\\theta)$ obtained for a constant actuator voltage $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{-30}{\\volt}$ (open triangles) and $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{+20}{\\volt}$ (solid circles) as a function of the orientation $\\theta$ of $\\mathbf{H}$ in the sample plane. A uniaxial (180\\degree periodic) anisotropy of $\\mu_0 H_\\mathrm{res}(\\theta)$ is observed for both voltages. However, the easy axis is rotated by 90\\degree as $\\ensuremath{V_\\mathrm{p}}\\xspace$ is changed from \\unit{+20}{\\volt} to \\unit{-30}{\\volt}. The full lines show the resonance fields simulated using the anisotropy constants from Eq.~\\eref{eq:constants}. (b) Corresponding experiments with $\\mathbf{H}$ rotated in the \\ensuremath{\\mathbf{\\hat{y}}}\\xspace plane (from within the Ni film plane to out-of-plane). Regardless of \\ensuremath{V_\\mathrm{p}}\\xspace, a strong uniaxial anisotropy with the hard axis perpendicular to the film plane is observed. The inset shows that the resonance fields for $\\mathbf{H}$ in the film plane ($\\phi=90\\degree$ and $\\phi=270\\degree$) are still shifted as a function of \\ensuremath{V_\\mathrm{p}}\\xspace in accordance to the data shown in (a) for $\\theta=90\\degree$ and $\\theta=270\\degree$.}\\label{fig:panel3}\n\\end{figure}\nIn this Section, we quantitatively determine the magnetic anisotropy of our ferromagnetic thin film\/piezoelectric actuator hybrid\\xspace structure using FMR measurements. Figure~\\ref{fig:panel2}(a) shows four FMR spectra recorded at room temperature with \\ensuremath{\\mathbf{H} || \\mathbf{\\hat{y}}}\\xspace and a voltage $\\ensuremath{V_\\mathrm{p}}\\xspace \\in \\{\\unit{-30}{\\volt},\\unit{0}{\\volt},\\unit{30}{\\volt},\\unit{90}{\\volt}\\}$ applied to the actuator, respectively. Each spectrum shows one strong ferromagnetic resonance which -- due to the magnetic field modulation and lock-in detection -- has a lineshape corresponding to the first derivative of a Lorentzian line~\\cite{Poole1996}. The resonance field $\\mu_0H_\\mathrm{res}$ of a given FMR line is determined as the arithmetic mean of its maximum and minimum and depicted by the full squares in Fig.~\\ref{fig:panel2}(a). The figure shows that, with the external magnetic field \\ensuremath{\\mathbf{H} || \\mathbf{\\hat{y}}}\\xspace, $\\mu_0H_\\mathrm{res}$ is shifted to higher magnetic fields for increasing $\\ensuremath{V_\\mathrm{p}}\\xspace$, while the lineshape is not significantly changed. According to Eqs.~\\eref{eq:Ftot} and~\\eref{eq:FMR_motion}, this implies that the \\ensuremath{\\mathbf{\\hat{y}}}\\xspace direction becomes increasingly harder as $\\ensuremath{V_\\mathrm{p}}\\xspace$ is increased.\n\nTo determine the evolution of $\\mu_0 H_\\mathrm{res}$ with \\ensuremath{V_\\mathrm{p}}\\xspace in more detail, we recorded FMR spectra similar to those shown in Fig.~\\ref{fig:panel2}(a) for \\ensuremath{V_\\mathrm{p}}\\xspace increasing from $\\unit{-30}{\\volt}$ to $\\unit{+90}{\\volt}$ (upsweep) and decreasing back to $\\unit{-30}{\\volt}$ (downsweep) in steps of $\\Delta\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{10}{\\volt}$. 
For \\ensuremath{\\mathbf{H} || \\mathbf{\\hat{y}}}\\xspace, we obtain the resonance fields shown by the squares in Fig.~\\ref{fig:panel2}(b). Here, full squares depict the upsweep and open squares the downsweep of \\ensuremath{V_\\mathrm{p}}\\xspace. As discussed in the context of Fig.~\\ref{fig:panel2}(a), Fig.~\\ref{fig:panel2}(b) shows that the FMR resonance field for \\ensuremath{\\mathbf{H} || \\mathbf{\\hat{y}}}\\xspace increases with increasing \\ensuremath{V_\\mathrm{p}}\\xspace and decreases with decreasing \\ensuremath{V_\\mathrm{p}}\\xspace. The small hysteresis between up- and downsweep is due to the hysteretic expansion of the actuator~\\cite{manual:piezo}. Carrying out the same series of FMR measurements with \\ensuremath{\\mathbf{H} || \\mathbf{\\hat{x}}}\\xspace yields the resonance fields shown by the triangles in Fig.~\\ref{fig:panel2}(b). For this magnetic field orientation, $\\mu_0 H_\\mathrm{res}$ decreases for increasing \\ensuremath{V_\\mathrm{p}}\\xspace and vice versa. In terms of magnetic anisotropy we can thus conclude that the \\ensuremath{\\mathbf{\\hat{x}}}\\xspace direction becomes easier with increasing \\ensuremath{V_\\mathrm{p}}\\xspace while the \\ensuremath{\\mathbf{\\hat{y}}}\\xspace direction simultaneously becomes harder.\n\nFor a more quantitative discussion we have also plotted the behavior expected from Eqs.~\\eref{eq:Ftot} and~\\eref{eq:FMR_motion} as solid lines in Fig.~\\ref{fig:panel2}(b). These lines show the resonance fields obtained by assuming a linear, non-hysteretic voltage-strain relation [cf. Eq.~\\eref{eq:strain_voltage}] and solving Eq.~\\eref{eq:FMR_motion} with $F_\\mathrm{tot}$ from Eq.~\\eref{eq:Ftot}. We use a saturation magnetization $M_\\mathrm{s}=\\unit{370}{\\kilo\\ampere\\per\\meter}$ as determined by SQUID measurements and a $g$-factor of 2.165~\\cite{meyer:1961}.\nConsidering the fact that the actuator expansion saturates at high voltages and the data in Fig.~\\ref{fig:panel2}(b) only show a minor loop of the full actuator swing, the simulation is in full agreement with the experimental results. This demonstrates that for $\\ensuremath{V_\\mathrm{p}}\\xspace<\\unit{0}{\\volt}$ the \\ensuremath{\\mathbf{\\hat{y}}}\\xspace-direction is magnetically easier than the \\ensuremath{\\mathbf{\\hat{x}}}\\xspace-direction, while for $\\ensuremath{V_\\mathrm{p}}\\xspace>\\unit{0}{\\volt}$ the \\ensuremath{\\mathbf{\\hat{x}}}\\xspace-direction is easier than the \\ensuremath{\\mathbf{\\hat{y}}}\\xspace-direction. Moreover, at $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{0}{\\volt}$, both measurement and simulation yield resonance fields of $\\mu_0H_\\mathrm{res}\\approx\\unit{150}{\\milli\\tesla}$ for \\ensuremath{\\mathbf{H} || \\mathbf{\\hat{y}}}\\xspace as well as \\ensuremath{\\mathbf{H} || \\mathbf{\\hat{x}}}\\xspace. Thus, at this voltage, both orientations \\ensuremath{\\mathbf{\\hat{x}}}\\xspace and \\ensuremath{\\mathbf{\\hat{y}}}\\xspace are equally easy in our sample.
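\n\nFor illustration, the numerical solution of Eq.~\\eref{eq:FMR_motion} underlying these simulated curves can be sketched as follows (a minimal Python sketch assuming a user-supplied callable implementing $F_\\mathrm{tot}$ of Eq.~\\eref{eq:Ftot}; the root-search window is an assumption):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import minimize, brentq\n\ndef sb_frequency(F, th, ph, Ms, d=1e-5):\n    # omega/gamma of Eq. (FMR_motion) from numerical second\n    # derivatives of the free energy at the equilibrium angles\n    FTT = (F(th + d, ph) - 2 * F(th, ph) + F(th - d, ph)) / d**2\n    FPP = (F(th, ph + d) - 2 * F(th, ph) + F(th, ph - d)) / d**2\n    FTP = (F(th + d, ph + d) - F(th + d, ph - d)\n           - F(th - d, ph + d) + F(th - d, ph - d)) / (4 * d**2)\n    return np.sqrt(max(FTT * FPP - FTP**2, 0.0)) / (Ms * np.sin(th))\n\ndef resonance_field(Ftot, nu_mw, Ms, gamma):\n    # Ftot(theta, phi, mu0H) must include the Zeeman term for the\n    # chosen field orientation; the resonance is assumed to lie\n    # between 1 mT and 1 T\n    def mismatch(mu0H):\n        F = lambda t, p: Ftot(t, p, mu0H)\n        th, ph = minimize(lambda x: F(x[0], x[1]),\n                          x0=[np.pi / 2, np.pi / 2],\n                          method='Nelder-Mead').x\n        return gamma * sb_frequency(F, th, ph, Ms) - 2 * np.pi * nu_mw\n    return brentq(mismatch, 1e-3, 1.0)   # mu0*Hres in tesla\n\\end{verbatim}\n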
To quantitatively determine the full magnetic anisotropy as a function of $\\ensuremath{V_\\mathrm{p}}\\xspace$, we recorded FMR traces at constant \\ensuremath{V_\\mathrm{p}}\\xspace for several different $\\mathbf{H}$ orientations. The FMR resonance fields thus determined in experiment are shown in Fig.~\\ref{fig:panel3} together with simulations according to Eqs.~\\eref{eq:Ftot} and~\\eref{eq:FMR_motion}. The open triangles in Fig.~\\ref{fig:panel3} represent the resonance fields obtained for $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{-30}{\\volt}$ and the full circles those obtained for $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{+20}{\\volt}$. Note that in the polar plots in Fig.~\\ref{fig:panel3}, the distance of $\\mu_0 H_\\mathrm{res}$ from the coordinate origin is an indicator of the magnetic hardness, with easy directions corresponding to small distances and hard directions to large distances.\n\nIf $\\mathbf{H}$ is rotated in the film plane [cf. Fig.~\\ref{fig:panel3}(a), $\\phi=90\\degree$], the obtained $\\mu_0H_\\mathrm{res}(\\theta)$ shows minima at $\\theta=0\\degree$ and $\\theta=180\\degree$ for $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{-30}{\\volt}$ and at $\\theta=90\\degree$ and $\\theta=270\\degree$ for $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{+20}{\\volt}$, respectively. Thus, a clear 180\\degree periodicity of the resonance fields and hence a uniaxial magnetic anisotropy is observed for both voltages. As the orientations $\\theta$ corresponding to minima of $\\mu_0H_\\mathrm{res}$ for one voltage coincide with maxima for the other voltage, we conclude that the direction of the easy axis is rotated by 90\\degree if \\ensuremath{V_\\mathrm{p}}\\xspace is changed from $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{-30}{\\volt}$ to $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{+20}{\\volt}$. This is exactly the behaviour expected according to Figs.~\\ref{fig:panel1}(c) and~\\ref{fig:panel1}(e).\n\nIf $\\mathbf{H}$ is rotated from within the film plane ($\\phi=90\\degree$, $\\theta=90\\degree$) to perpendicular to the film plane ($\\phi=0\\degree$, $\\theta=90\\degree$), we obtain the resonance fields shown in Fig.~\\ref{fig:panel3}(b). In this case, we observe a strong uniaxial anisotropy with a hard axis perpendicular to the film plane regardless of \\ensuremath{V_\\mathrm{p}}\\xspace, stemming from the \\ensuremath{V_\\mathrm{p}}\\xspace-independent contribution $F_\\mathrm{demag}$ to $F_\\mathrm{tot}$. The inset in Fig.~\\ref{fig:panel3}(b) shows that for $\\mathbf{H}$ in the film plane, the resonance fields for $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{-30}{\\volt}$ and $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{+20}{\\volt}$ are shifted by approximately \\unit{10}{\\milli\\tesla}, in full accordance with the data in Fig.~\\ref{fig:panel3}(a).\n\nAs discussed in Section~\\ref{sec:theory}, the \\ensuremath{V_\\mathrm{p}}\\xspace dependence of the resonance field is given by a \\ensuremath{V_\\mathrm{p}}\\xspace dependence of $F_\\mathrm{magel}$. We thus evaluate the measurement data shown in Figs.~\\ref{fig:panel3}(a,b) using an iterative fitting procedure of $K_\\mathrm{u,magel,y}$ in Eq.~\\eref{eq:Ftot} to fulfill Eq.~\\eref{eq:FMR_motion}. We obtain\n\\begin{eqnarray}\\label{eq:constants}\n K_\\mathrm{u,magel,y}(\\unit{-30}{\\volt})\/M_\\mathrm{s} &=& \\unit{-3.4}{\\milli\\tesla} \\\\\n \\nonumber K_\\mathrm{u,magel,y}(\\unit{+20}{\\volt})\/M_\\mathrm{s} &=& \\unit{4.4}{\\milli\\tesla}\\;.\n\\end{eqnarray}\nThe resonance fields calculated using these anisotropy fields as well as Eqs.~\\eref{eq:FMR_motion} and~\\eref{eq:Ftot} are depicted by the lines in Fig.~\\ref{fig:panel3}.
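\n\nThe iterative fitting procedure can be sketched along the same lines (a minimal Python sketch reusing the resonance_field helper from above; make_Ftot and all other names are hypothetical):\n\\begin{verbatim}\nfrom scipy.optimize import minimize_scalar\n\ndef fit_K(theta_H, hres_meas, make_Ftot, nu_mw, Ms, gamma):\n    # make_Ftot(K, tH) is assumed to return the free energy\n    # Ftot(theta, phi, mu0H) of Eq. (Ftot) for a given anisotropy\n    # constant K = K_u,magel,y and in-plane field orientation tH\n    def cost(K):\n        return sum((resonance_field(make_Ftot(K, tH), nu_mw, Ms, gamma)\n                    - hm)**2 for tH, hm in zip(theta_H, hres_meas))\n    return minimize_scalar(cost).x\n\\end{verbatim}\n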
The good agreement between simulation and experiment shows that $F_\\mathrm{tot}$ given by Eq.~\\eref{eq:Ftot} is sufficient to describe the magnetic anisotropy of the Ni film.\n\nIn summary, the FMR experiments conclusively demonstrate that it is possible to invert the in-plane magnetic anisotropy of our hybrid structure, i.e. to invert the sign of $K_\\mathrm{u,magel,y}$, \\textit{solely by changing \\ensuremath{V_\\mathrm{p}}\\xspace.} As the FMR experiment furthermore makes it possible to quantitatively determine all contributions to the free energy, Eq.~\\eref{eq:Ftot}, the orientation of the magnetization vector $\\mathbf{M}$ in our sample can be calculated a priori for arbitrary $\\mathbf{H}$ and \\ensuremath{V_\\mathrm{p}}\\xspace. However, it is not possible to directly measure the magnetization orientation as a function of \\ensuremath{V_\\mathrm{p}}\\xspace with FMR. To demonstrate that the piezo-voltage control of magnetic anisotropy indeed allows for a voltage control of $\\mathbf{M}$, we now turn to magnetometry.\n\n\\section{Magnetometry}\\label{sec:squid}\n\\begin{figure}\n \n \\centering\n \\includegraphics[]{img\/figure4a_4b}\\\\\n \\caption{(a) $M(H)$-loops recorded with \\ensuremath{\\mathbf{H} || \\mathbf{\\hat{y}}}\\xspace show a higher remanent magnetization for $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{-30}{\\volt}$ than for $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{+30}{\\volt}$, thus the \\ensuremath{\\mathbf{\\hat{y}}}\\xspace-axis is magnetically easier for $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{-30}{\\volt}$ than for $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{+30}{\\volt}$. (b) The \\ensuremath{\\mathbf{\\hat{x}}}\\xspace-axis is magnetically easier for $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{+30}{\\volt}$ than for $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{-30}{\\volt}$.}\\label{fig:panel4}\n\\end{figure}\n\\begin{figure}\n\\centering\n \n \\includegraphics[]{img\/figure5a_to_5c}\\\\\n \\caption{SQUID $M(\\ensuremath{V_\\mathrm{p}}\\xspace)$-loops show the projection $M$ of the magnetization $\\mathbf{M}$ onto the direction of the applied magnetic field $\\mathbf{H}$ as a function of $\\ensuremath{V_\\mathrm{p}}\\xspace$. The symbols represent the experimental data and the lines show the simulation of $M$ resulting from a minimization of $F_\\mathrm{tot}$ [Eq.~\\eref{eq:Ftot}], with $\\varepsilon_2$ determined using a strain gauge to explicitly take into account the actuator hysteresis. (a) For \\ensuremath{\\mathbf{H} || \\mathbf{\\hat{y}}}\\xspace with $\\mu_0 H=\\unit{-3}{\\milli\\tesla}$, $\\mathbf{M}$ exhibits an \\textit{irreversible} rotation from A to B followed by a \\textit{reversible} rotation from B to C. (b), (c) For \\ensuremath{\\mathbf{H} || \\mathbf{\\hat{y}}}\\xspace and \\ensuremath{\\mathbf{H} || \\mathbf{\\hat{x}}}\\xspace with $\\mu_0 H=\\unit{-5}{\\milli\\tesla}$, $\\mathbf{M}$ exhibits a \\textit{reversible} rotation from A to B and back to A.}\\label{fig:panel5}\n\\end{figure}\n\\begin{figure}\n\\centering\n \n \\includegraphics[]{img\/figure6a_to_6c}\\\\\n \\caption{Calculated free energy contours in the film plane at the points of the $M(\\ensuremath{V_\\mathrm{p}}\\xspace)$-loop depicted by capital letters in Fig.~\\ref{fig:panel5} (solid lines: $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{+120}{\\volt}$, dotted lines: $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{-30}{\\volt}$).
To clarify the lifting of the free energy degeneracy by the magnetic field, an angle of 10\\degree between the magnetic field and the $\\ensuremath{\\mathbf{\\hat{x}}}\\xspace$ or $\\ensuremath{\\mathbf{\\hat{y}}}\\xspace$ axis was assumed. The open downward-oriented arrows depict the orientation of $\\mathbf{H}$ during the field preparation at \\unit{7}{\\tesla} and the closed downward-oriented arrows depict the orientation of $\\mathbf{H}$ during the actual measurement. Capital letters indicate the equilibrium $\\mathbf{M}$ orientation at the corresponding positions in Fig.~\\ref{fig:panel5}. (a) Subsequent to the field preparation at \\unit{-30}{\\volt}, $\\mathbf{M}$ resides in a \\textit{local} minimum of $F_\\mathrm{tot}$ (point A) at $\\mu_0H=\\unit{-3}{\\milli\\tesla}$ and rotates to the \\textit{global} minimum of $F_\\mathrm{tot}$ (point B) as \\ensuremath{V_\\mathrm{p}}\\xspace is increased to $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{+120}{\\volt}$. Sweeping the voltage from \\unit{+120}{\\volt} to \\unit{-30}{\\volt} now results in $\\mathbf{M}$ following the \\textit{global} minimum of $F_\\mathrm{tot}$. (b), (c) For $\\mu_0H=\\unit{-5}{\\milli\\tesla}$, $\\mathbf{M}$ follows the \\textit{global} minimum of $F_\\mathrm{tot}$ as \\ensuremath{V_\\mathrm{p}}\\xspace is changed.}\\label{fig:panel6}\n\\end{figure}\nIn this Section, we show that it is possible to not only change the magnetic anisotropy but to deliberately \\textit{irreversibly} and\/or \\textit{reversibly} rotate $\\mathbf{M}$ simply by changing \\ensuremath{V_\\mathrm{p}}\\xspace. To this end, we need to employ an experimental technique that is directly sensitive to the orientation of $\\mathbf{M}$ rather than to magnetic anisotropies. Here we used SQUID magnetometry to record the projection $m$ of the total magnetic moment $\\mathbf{m}$ onto the direction of the external magnetic field $\\mathbf{H}$.\n\nIn a first series of experiments we recorded $m$ as a function of the external magnetic field magnitude $\\mu_0H$ at fixed orientations of $\\mathbf{H}$ and fixed voltages \\ensuremath{V_\\mathrm{p}}\\xspace at $T=\\unit{300}{\\kelvin}$. Figure~\\ref{fig:panel4}(a) shows $M=m\/V$ measured with \\ensuremath{\\mathbf{H} || \\mathbf{\\hat{y}}}\\xspace as a function of the external magnetic field strength at constant voltage $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{+30}{\\volt}$ (full circles) and $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{-30}{\\volt}$ (open triangles). The Ni film shows a rectangular $M(H)$ loop for $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{-30}{\\volt}$, while for $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{+30}{\\volt}$ the remanent magnetization is lowered by a factor of approximately three. According to, e.g., Morrish~\\cite{Morrish2001}, the rectangular loop for $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{-30}{\\volt}$ indicates a magnetically easy axis, while the smooth, s-shaped loop for $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{+30}{\\volt}$ indicates a magnetically harder axis. Thus Fig.~\\ref{fig:panel4}(a) shows that the \\ensuremath{\\mathbf{\\hat{y}}}\\xspace-direction is magnetically \\textit{easier} for $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{-30}{\\volt}$ and \\textit{harder} for $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{+30}{\\volt}$ -- as expected from the FMR experiments. Changing the orientation of $\\mathbf{H}$ to \\ensuremath{\\mathbf{H} || \\mathbf{\\hat{x}}}\\xspace yields the $M(H)$-loops shown in Fig.~\\ref{fig:panel4}(b).
Following the same line of argument we can conclude that the \\ensuremath{\\mathbf{\\hat{x}}}\\xspace-direction is magnetically \\textit{easier} for $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{+30}{\\volt}$ and \\textit{harder} for $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{-30}{\\volt}$. Altogether these results show that we observe an in-plane anisotropy, the easy axis of which is parallel to \\ensuremath{\\mathbf{\\hat{y}}}\\xspace for $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{-30}{\\volt}$ and parallel to \\ensuremath{\\mathbf{\\hat{x}}}\\xspace for $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{+30}{\\volt}$. These observations are fully consistent with the FMR results, and corroborate the simple model shown in Fig.~\\ref{fig:panel1}.\n\nBefore discussing further experimental magnetometry results, we note that the free energy minima shown in Fig.~\\ref{fig:panel1}(c,e) as well as the in-plane resonance fields in Fig.~\\ref{fig:panel3}(a) are degenerate by 180\\degree. This degeneracy may lead to demagnetization due to domain formation if the polarity of $\\ensuremath{V_\\mathrm{p}}\\xspace$ is repeatedly inverted as there are always two energetically equally favorable but opposite directions of $\\mathbf{M}$. Thus, to achieve a reversible control of $\\mathbf{M}$ orientation, the degeneracy of $F_\\mathrm{tot}$ needs to be lifted. This can be achieved if a unidirectional anisotropy is superimposed on the uniaxial anisotropy. Regarding $F_\\mathrm{tot}$ in Eq.~\\eref{eq:Ftot}, only $F_\\mathrm{stat}$ possesses the desirable unidirectional anisotropy. Hence, a small but finite external magnetic field $\\mathbf{H}\\neq0$ can be used to lift the 180\\degree degeneracy. This approach works for all $\\mathbf{H}$ orientations, except for $\\mathbf{H}$ exactly parallel \\ensuremath{\\mathbf{\\hat{x}}}\\xspace or \\ensuremath{\\mathbf{\\hat{y}}}\\xspace. For \\ensuremath{\\mathbf{H} || \\mathbf{\\hat{x}}}\\xspace (with $H>0$), the easy axis parallel to \\ensuremath{\\mathbf{\\hat{x}}}\\xspace in Fig.~\\ref{fig:panel1}(c) is replaced by an easier positive \\ensuremath{\\mathbf{\\hat{x}}}\\xspace direction and a harder negative \\ensuremath{\\mathbf{\\hat{x}}}\\xspace direction, thus the degeneracy is lifted for $\\ensuremath{V_\\mathrm{p}}\\xspace>0$. However, for $\\ensuremath{V_\\mathrm{p}}\\xspace<0$ [cf. Fig.~\\ref{fig:panel1}(e)] a magnetic field \\ensuremath{\\mathbf{H} || \\mathbf{\\hat{x}}}\\xspace is orthogonal to the easy axis which thus remains degenerate. The same consideration holds for \\ensuremath{\\mathbf{H} || \\mathbf{\\hat{y}}}\\xspace where we expect degenerate free energy minima for $\\ensuremath{V_\\mathrm{p}}\\xspace>0$ and a preferred $\\mathbf{M}$ orientation for $\\ensuremath{V_\\mathrm{p}}\\xspace<0$. However, in experiment it is essentially impossible to orient $\\mathbf{H}$ \\textit{exactly} along \\ensuremath{\\mathbf{\\hat{x}}}\\xspace or \\ensuremath{\\mathbf{\\hat{y}}}\\xspace; any small misorientation between $\\mathbf{H}$ and \\ensuremath{\\mathbf{\\hat{x}}}\\xspace or \\ensuremath{\\mathbf{\\hat{y}}}\\xspace is sufficient to lift the degeneracy.\n\nWe now turn to the experimental measurement of the voltage control of $\\mathbf{M}$ orientation. To show that $\\mathbf{M}$ can be rotated by varying $\\ensuremath{V_\\mathrm{p}}\\xspace$ alone, we change $\\ensuremath{V_\\mathrm{p}}\\xspace$ at constant external magnetic bias field $\\mathbf{H}$. 
As SQUID magnetometry is limited to recording the projection of $\\mathbf{M}$ on the direction of the external magnetic field $\\mathbf{H}$, it is important to choose appropriate $\\mathbf{H}$ orientations in the experiments. As evident from Figs.~\\ref{fig:panel1} and \\ref{fig:panel4}, \\ensuremath{\\mathbf{H} || \\mathbf{\\hat{x}}}\\xspace and \\ensuremath{\\mathbf{H} || \\mathbf{\\hat{y}}}\\xspace are the most interesting orientations of the external magnetic field. In view of the circumstance discussed in the previous paragraph, we applied $\\mathbf{H}$ to within 1\\degree of \\ensuremath{\\mathbf{\\hat{x}}}\\xspace or \\ensuremath{\\mathbf{\\hat{y}}}\\xspace, respectively. We still refer to these orientations of $\\mathbf{H}$ as \\ensuremath{\\mathbf{H} || \\mathbf{\\hat{x}}}\\xspace and \\ensuremath{\\mathbf{H} || \\mathbf{\\hat{y}}}\\xspace for simplicity, but take into account a misalignment of 1\\degree in the calculations. As the experimental results and the corresponding simulations will show, this slight misalignment is sufficient to lift the degeneracy in the free energy regardless of $\\ensuremath{V_\\mathrm{p}}\\xspace$.\n\nRecording $M$ as a function of $\\ensuremath{V_\\mathrm{p}}\\xspace$ for two complete voltage cycles $\\unit{-30}{\\volt}\\leq \\ensuremath{V_\\mathrm{p}}\\xspace \\leq \\unit{+120}{\\volt}$ with \\ensuremath{\\mathbf{H} || \\mathbf{\\hat{y}}}\\xspace and $\\mu_0H=\\unit{-3}{\\milli\\tesla}$ yields the data points shown in Fig.~\\ref{fig:panel5}(a). Since \\ensuremath{\\mathbf{H} || \\mathbf{\\hat{y}}}\\xspace, $M$ is the projection of $\\mathbf{M}$ to the \\ensuremath{\\mathbf{\\hat{y}}}\\xspace direction in the experiment. Prior to the first voltage sweep starting at point A, the voltage was set to $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{-30}{\\volt}$ and the sample was magnetized to a single domain state by applying $\\mu_0 H=\\unit{+7}{\\tesla}$ along \\ensuremath{\\mathbf{\\hat{y}}}\\xspace which is the easy axis for $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{-30}{\\volt}$. The magnetic field was then swept to $\\mu_0 H=\\unit{-3}{\\milli\\tesla}$ which is close to but still below the coercive field of the rectangular loops [cf. Fig.~\\ref{fig:panel4}] and the acquisition of the $M(\\ensuremath{V_\\mathrm{p}}\\xspace)$ data was started. The fact that $M$ is positive at first (starting from point A), while the field is applied along the negative \\ensuremath{\\mathbf{\\hat{y}}}\\xspace direction, shows that $\\mathbf{M}$ and $\\mathbf{H}$ are essentially antiparallel at first. Upon increasing \\ensuremath{V_\\mathrm{p}}\\xspace in steps of $\\unit{+5}{\\volt}$ to $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{+120}{\\volt}$ (point B), $M$ vanishes, which indicates an orthogonal orientation of $\\mathbf{M}$ and $\\mathbf{H}$. Upon reducing $\\ensuremath{V_\\mathrm{p}}\\xspace$ to its initial value of \\unit{-30}{\\volt}, $M$ becomes negative (point C), evidencing a parallel orientation of $\\mathbf{M}$ and $\\mathbf{H}$. The \\ensuremath{V_\\mathrm{p}}\\xspace cycle was then repeated once more, with $M$ now reversibly varying between its negative value at point C and zero at B. Hence the evolution of $M$ is qualitatively different in the first and in the second \\ensuremath{V_\\mathrm{p}}\\xspace cycle, with an \\textit{irreversible} $\\mathbf{M}$ rotation in the first cycle and a \\textit{reversible} $\\mathbf{M}$ rotation in the second cycle. This behaviour is expected from the magnetic free energy [cf. 
Eq.~\\eref{eq:Ftot}] as the preparation (point A) yields $\\mathbf{M}$ in a metastable state ($\\mathbf{M}$ antiparallel to $\\mathbf{H}$).\n\nAfter a renewed preparation with $\\mu_0 H=\\unit{+7}{\\tesla}$ at $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{-30}{\\volt}$, we repeated the experiment with the external magnetic field \\ensuremath{\\mathbf{H} || \\mathbf{\\hat{y}}}\\xspace but with a slightly larger magnitude $\\mu_0 H=\\unit{-5}{\\milli\\tesla}$ and again recorded $M$ for a complete voltage cycle. The magnetic field magnitude of $\\mu_0 H=\\unit{-5}{\\milli\\tesla}$ was chosen as it exceeds the coercive field (cf. Fig.~\\ref{fig:panel4}) while keeping the influence of the Zeeman term in Eq.~\\eref{eq:Ftot} on the total magnetic anisotropy comparable to the influence of the strain-induced anisotropies [cf. Eq.~\\eref{eq:constants}]. The experimental data are shown by the symbols in Fig.~\\ref{fig:panel5}(b). Here a parallel orientation of $\\mathbf{M}$ and $\\mathbf{H}$ is already observed at point A ($\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{-30}{\\volt}$) and $\\mathbf{M}$ rotates reversibly by approximately 70\\degree towards point B ($\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{+120}{\\volt}$) and back when \\ensuremath{V_\\mathrm{p}}\\xspace is reduced to $\\unit{-30}{\\volt}$ again (point A).\n\nTo complete the picture, we repeated the $M(\\ensuremath{V_\\mathrm{p}}\\xspace)$ experiment, but now applied \\ensuremath{\\mathbf{H} || \\mathbf{\\hat{x}}}\\xspace. The magnetic preparation with $\\mu_0H=\\unit{7}{\\tesla}$ was now performed at $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{+120}{\\volt}$ to ensure that the preparation field was again applied along an easy axis. The data recorded subsequently at $\\mu_0H=\\unit{-5}{\\milli\\tesla}$ [cf. Fig.~\\ref{fig:panel5}(c)] show a reversible rotation of $\\mathbf{M}$ by approximately 60\\degree between points A, B ($\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{-30}{\\volt}$) and back to A.\n\nTo quantitatively simulate the evolution of $M$ with \\ensuremath{V_\\mathrm{p}}\\xspace depicted in Fig.~\\ref{fig:panel5}, we again took advantage of the fact that the free energy of the Ni film is known from the FMR experiments. To account for the hysteresis of the actuator expansion, which is responsible for the hysteresis in the $M(\\ensuremath{V_\\mathrm{p}}\\xspace)$ loops in Fig.~\\ref{fig:panel5}, we used $\\varepsilon_2$ as measured using a Vishay general purpose strain gauge in the voltage range of $\\unit{-30}{\\volt}\\leq\\ensuremath{V_\\mathrm{p}}\\xspace\\leq\\unit{+120}{\\volt}$. The $\\varepsilon_2$ data thus obtained are in accordance with the actuator data sheet.\nUsing $\\varepsilon_2(\\ensuremath{V_\\mathrm{p}}\\xspace)$ as well as the material constants given above we obtained in-plane free energy contours for each voltage \\ensuremath{V_\\mathrm{p}}\\xspace. Figure~\\ref{fig:panel6} shows examples of such free energy contours at the voltages depicted by the capital letters in Fig.~\\ref{fig:panel5}. To clearly visualize the effect of $\\mathbf{H}$ misalignment with respect to the sample coordinate system on $F_\\mathrm{tot}$, the plots in Fig.~\\ref{fig:panel6} were calculated assuming a misalignment of 10\\degree in the in-plane orientation $\\theta$ of the external magnetic field, so that $\\theta=10\\degree$ for \\ensuremath{\\mathbf{H} || \\mathbf{\\hat{y}}}\\xspace and $\\theta=100\\degree$ for \\ensuremath{\\mathbf{H} || \\mathbf{\\hat{x}}}\\xspace.
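\n\nSuch contours, and the orientation of $\\mathbf{M}$ within them, can be evaluated with a short numerical sketch (minimal Python, assuming a callable implementation of $F_\\mathrm{tot}$ at fixed \\ensuremath{V_\\mathrm{p}}\\xspace; all names are hypothetical):\n\\begin{verbatim}\nimport numpy as np\n\ndef inplane_contour(Ftot, mu0H, n=720):\n    # free energy versus in-plane angle Theta at Phi = 90 deg\n    th = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)\n    F = np.array([Ftot(t, np.pi / 2.0, mu0H) for t in th])\n    return th, F - F.min()\n\ndef follow_minimum(theta_prev, th, F):\n    # M settles into the local minimum closest to its previous\n    # orientation, which reproduces the (ir)reversible switching\n    minima = [t for i, t in enumerate(th)\n              if F[i] <= F[i - 1] and F[i] <= F[(i + 1) % len(F)]]\n    return min(minima, key=lambda t:\n               abs(np.angle(np.exp(1j * (t - theta_prev)))))\n\\end{verbatim}\n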
Figure~\\ref{fig:panel6} clearly shows that under these conditions the local minima of $F_\\mathrm{tot}$ are non-degenerate for $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{-30}{\\volt}$ and $\\ensuremath{V_\\mathrm{p}}\\xspace=\\unit{+120}{\\volt}$.\n\nTo determine the orientation of $\\mathbf{M}$, we traced the minimum of the total free energy $F_\\mathrm{tot}$ as a function of \\ensuremath{V_\\mathrm{p}}\\xspace. This was done by minimizing Eq.~\\eref{eq:Ftot} with respect to the in-plane orientation $\\Theta$ of $\\mathbf{M}$, whilst assuming that $\\Phi=90\\degree$ due to the strong shape anisotropy.\n\nTo simulate the experimental $M(\\ensuremath{V_\\mathrm{p}}\\xspace)$-loops [cf. Fig.~\\ref{fig:panel5}], we assume a more realistic misalignment of 1\\degree in $\\theta$ and numerically minimize $F_\\mathrm{tot}(\\Theta)$ as a function of $\\ensuremath{V_\\mathrm{p}}\\xspace$. For this, we set the initial value of $\\Theta$ antiparallel to the external magnetic field for $\\mu_0H=\\unit{-3}{\\milli\\tesla}$ and parallel to the external magnetic field for $\\mu_0H=\\unit{-5}{\\milli\\tesla}$. Minimizing $F_\\mathrm{tot}(\\Theta)$ determines the $\\mathbf{M}$ orientation which we project onto the \\ensuremath{\\mathbf{\\hat{y}}}\\xspace or \\ensuremath{\\mathbf{\\hat{x}}}\\xspace axis to yield $M$. For this, the magnitude of $\\mathbf{M}$ is chosen to give a good fit to the experimental data at points A and is assumed to remain constant independent of \\ensuremath{V_\\mathrm{p}}\\xspace.\n\nThe resulting simulations of $M$ are shown by the solid lines in Fig.~\\ref{fig:panel5}. The simulation yields the experimentally observed $\\textit{irreversible}$ $\\mathbf{M}$ rotation during the first voltage sweep in Fig.~\\ref{fig:panel5}(a) as well as the $\\textit{reversible}$ $\\mathbf{M}$ rotations in the second voltage sweep in Fig.~\\ref{fig:panel5}(a) and in Figs.~\\ref{fig:panel5}(b,c). The simulated total swing of $M$ is in excellent agreement with the experimental results for \\ensuremath{\\mathbf{H} || \\mathbf{\\hat{y}}}\\xspace and in good agreement for \\ensuremath{\\mathbf{H} || \\mathbf{\\hat{x}}}\\xspace. The fact that the experimental results exhibit a more rounded shape than the simulation is attributed to domain formation during the magnetization reorientation, which is neglected in the simulation.\n\nTaken together, the free energy density of Eq.~\\eref{eq:Ftot} with the contributions quantitatively determined from FMR and our simple simulation of $M(\\ensuremath{V_\\mathrm{p}}\\xspace)$ yield excellent agreement with experiment. In particular, we would like to emphasize that for the experiment and the simulation the application of \\ensuremath{V_\\mathrm{p}}\\xspace leads to a \\textit{rotation} of $\\mathbf{M}$ and not to a decay into domains. This is evident from the fact that in Figs.~\\ref{fig:panel5}(a) and \\ref{fig:panel5}(b), a large projection of $\\mathbf{M}$ onto \\ensuremath{\\mathbf{\\hat{y}}}\\xspace is accompanied by a small projection onto \\ensuremath{\\mathbf{\\hat{x}}}\\xspace and vice versa. Combining FMR and SQUID magnetometry, we thus have unambiguously and quantitatively demonstrated that $\\mathbf{M}$ can be rotated reversibly by about 70\\degree at room temperature solely via the application of appropriate voltages \\ensuremath{V_\\mathrm{p}}\\xspace.\n\n\\section{Summary}\nThe FMR measurements show that the in-plane anisotropy of Ni\/piezoactor hybrids can be inverted if the polarity of the voltage applied to the actuator is changed. 
The magnetometry results corroborate this finding and furthermore show that it is possible to irreversibly or reversibly rotate $\\mathbf{M}$ solely by changing \\ensuremath{V_\\mathrm{p}}\\xspace. For an \\textit{irreversible} $M(\\ensuremath{V_\\mathrm{p}}\\xspace)$ rotation, an appropriate preparation of $\\mathbf{M}$ using a magnetic field sweep is necessary -- i.e. $\\mathbf{M}$ must be aligned in a local free energy minimum. It can then be rotated out of this minimum by changing \\ensuremath{V_\\mathrm{p}}\\xspace, and the corresponding $\\mathbf{M}$ orientation change can amount to up to 180\\degree.\nHowever, this voltage control of $\\mathbf{M}$ is irreversible in the sense that $\\mathbf{M}$ cannot be brought back into the original orientation by changing \\ensuremath{V_\\mathrm{p}}\\xspace alone. Rather, a second appropriate magnetic field sweep is required to align $\\mathbf{M}$ into its original orientation.\nIn contrast, the \\textit{reversible} $M(\\ensuremath{V_\\mathrm{p}}\\xspace)$ reorientation requires a preparation of $\\mathbf{M}$ only once. In this case, $\\mathbf{M}$ is oriented along a global free energy minimum, and can be rotated by up to 70\\degree (90\\degree in the ideal case) at will by applying an appropriate \\ensuremath{V_\\mathrm{p}}\\xspace. The $M(\\ensuremath{V_\\mathrm{p}}\\xspace)$-loops simulated by minimizing the single-domain free energy [Eq.~\\eref{eq:Ftot}] are in excellent agreement with experiment, showing that the $\\mathbf{M}$ orientation at vanishing magnetic field strengths can still be accurately calculated from the free energy determined from FMR. Finally, we would like to point out that the hysteretic expansion\/contraction of the actuator, visible as a hysteresis in $M(\\ensuremath{V_\\mathrm{p}}\\xspace)$ [cf. Fig.~\\ref{fig:panel5}], in particular also leads to distinctly different $M(\\ensuremath{V_\\mathrm{p}}\\xspace=0)$, depending on the \\ensuremath{V_\\mathrm{p}}\\xspace history. Thus, our data also demonstrate that a remanent $\\mathbf{M}$ control is possible in our ferromagnetic thin film\/piezoelectric actuator hybrids.\n\n\\ack\nThe work at the Walter Schottky Institut was supported by the Deutsche Forschungsgemeinschaft (DFG) via SFB 631. The work at the Walther-Meissner-Institut was supported by the DFG via SPP 1157 (Project No.~GR 1132\/13), DFG Project No.~GO 944\/3-1 and the German Excellence Initiative via the \"Nanosystems Initiative Munich (NIM)\".\n\n\\section*{References}\n\\bibliographystyle{prsty2}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nCameras are becoming increasingly ubiquitous and pervasive. Millions of surveillance cameras are recording people's everyday behavior at public places, and people are using wearable cameras designed for lifelogging (e.g., GoPro and Narrative Clips) to obtain large collections of egocentric videos. Furthermore, robots at public places are equipped with multiple cameras for their operation and interaction.\n\nSimultaneously, this abundance of cameras is also causing a big societal challenge: privacy protection from unwanted video recordings. We want a camera system (e.g., a robot) to recognize important events and assist human daily life by understanding its videos, but we also want to ensure that it is not intruding on the user's or others' privacy. This leads to two conflicting objectives.
More specifically, we want to (1) prevent the camera system from obtaining detailed visual data that may contain private information (e.g., faces), desirably at the hardware level. Simultaneously, we want to (2) make the system capture as much detailed information as possible from its video, so that it understands surrounding objects and ongoing events for surveillance, lifelogging, and intelligent services.\n\nThere have been previous studies corresponding to such societal needs. Templeman et al. \\shortcite{templeman14} studied scene recognition from images captured with wearable cameras, detecting locations where privacy needs to be protected (e.g., restrooms). This will allow the device to be automatically turned off at sensitive locations. One may also argue that limiting the device to only process\/transfer feature information (e.g., HOG and CNN) instead of visual data will make it protect privacy. However, recent studies on feature ``visualizations'' \\cite{vondrick15} showed that it is actually possible to recover a good amount of visual information (i.e., images and videos) from the feature data. Furthermore, all these methods described above rely on software-level processing of original high-resolution videos (which may already contain privacy-sensitive data), and there is a possibility of these original videos being snatched by cyber attacks.\n\nA more fundamental solution toward the construction of a privacy-preserving vision system is the use of \\emph{anonymized videos}. Typical examples of anonymized videos are videos made to have extreme low resolution (e.g., 16x12) by using low resolution (LR) camera hardware, or based on image operations like Gaussian blurring and superpixel clustering \\cite{butler15}. Instead of obtaining high-resolution videos and trying to process them, this direction simply limits itself to obtaining only anonymized videos. The idea is that, if we are able to \\textbf{develop reliable computer vision approaches that only utilize such anonymized videos}, we will be able to do the recognition while preserving privacy. Such a concept may even allow cameras that can intelligently select their resolution; it will use high-resolution cameras only when it is necessary (e.g., emergency), determined based on extreme low-resolution video analysis.\n\nThere have been previous attempts under such a paradigm \\cite{dai15}. This conventional approach was to resize the original training videos to fit the target resolution, making the training videos look visually similar to the testing videos. However, there is an intrinsic problem: because of natural limitations of what a pixel can capture in an LR video, features extracted from LR videos change a lot depending on sub-pixel camera viewpoint changes even when they contain the exact same object\/human (Figure~\\ref{fig:isr-motivation}). This makes the decision boundary learning unstable.
The motivation behind inverse super resolution is that, if it really is true that a set of low-resolution images contains an amount of information comparable to a high-resolution image, then we can also generate a set of LR training images from a HR image so that the amount of training information is maintained. Our approach learns the optimal set of LR transformations to make such generation possible, and uses the generated LR videos to obtain LR decision boundaries (Figure \\ref{fig:isr}).\n\n\\section{Related works}\n\nHuman activity recognition is a computer vision area that has received a great amount of attention \\cite{aggarwal11}. There have been studies to recognize activities not only from YouTube-style videos \\cite{google15} and surveillance videos, but also from first-person videos taken by wearable cameras \\cite{kitani11,ramanan12,lee12,li13,poleg14} and robots \\cite{ryoo13}. However, they only focused on developing features\/methods for more reliable recognition, without any consideration of privacy aspects.\n\nOn the other hand, as mentioned in the introduction, there are research works whose goal is to specifically address privacy concerns regarding unwanted video recording. Templeman et al. \\shortcite{templeman14} designed a method to automatically detect locations where the cameras should be turned off. The approach of \\cite{tran16} was similar. Dai et al. \\shortcite{dai15} studied human activity recognition from extreme low resolution videos, different from the conventional activity recognition literature that focused mostly on methods for images and videos with sufficient resolutions. Although their work only focused on recognition from 3rd-person videos captured with static cameras, they showed the potential that computer vision features and classifiers can also work with very low resolution videos. However, they followed the `conventional paradigm' described in Figure \\ref{fig:isr} (a), simply resizing original training videos to make them low resolution. \\cite{shokri15} studied privacy protection for convolutional neural networks (CNNs), but they consider privacy protection only at the training phase and not at the testing phase, unlike ours.\n\n\\begin{figure}\n\t\\centering\n\t\\resizebox{1.0\\linewidth}{!}{\n\t \\includegraphics{images\/ISR_concept2.pdf}\n\t}\n\t\\caption{A comparison of (a) the conventional learning framework for low-resolution videos and (b) our learning framework using the proposed inverse super resolution.}\n\t\\label{fig:isr}\n\\end{figure}\n\n\\section{Inverse super resolution}\n\\label{sec:isr}\n\n\\emph{Inverse super resolution} (ISR) is the concept of generating a set of low-resolution training images from a single high-resolution image, by `learning' different image transforms optimized for the recognition task. Such transforms may include sub-pixel translation, scaling, rotation, and other affine transforms emulating possible camera motion.\n\nOur ISR targets the realistic scenario where the system is prohibited from obtaining high-resolution videos in the \\emph{testing} phase due to privacy protection but has access to a rich set of high-resolution training videos publicly available (e.g., YouTube). Instead of trying to enhance the resolution of the testing video (which is not possible with our scale factor x20), our approach is to make the system learn to benefit from high-resolution training videos by imposing multiple different sub-pixel transformations.
This enables us to better estimate the decision boundary in the low-resolution feature space, as illustrated in Figure \\ref{fig:isr}.\n\nFrom the super resolution perspective, this is a different way of using the super resolution formulation, whose assumption is that multiple LR images may contain a comparable amount of information to a single HR image. It is called `inverse' super resolution since it follows the super resolution formulation, while its input and output are the reverse of those in the original super resolution.\n\n\\subsection{Inverse super resolution formulation}\nThe goal of the super resolution process is to take a series of low resolution images $Y_k$, and generate a high resolution output image $X$ \\cite{huang10}. This is done by considering the sequence of images $Y_k$ to be different views of the high resolution image $X$, subject to camera motion, lens blurring, down-sampling, and noise. Each of these effects is modeled as a linear operator, and the sequence of low resolution images can be written as a linear function of the original high resolution image: \n\\begin{equation}\nY_k = D_k H_k F_k X + V_k, ~~k = 1 \\ldots n\n\\end{equation}\nwhere $F_k$ is the motion transformation, $H_k$ models the blurring effects, $D_k$ is the down-sampling operator, and $V_k$ is the noise term. $X$ and $Y_k$ are both images flattened into 1D vectors. In the original super resolution problem, none of these operators are known exactly, resulting in an ill-posed, ill-conditioned, and most likely rank deficient problem. Super resolution research focuses on estimating these operators and the value of $X$ by adding additional constraints to the problem, such as smoothness requirements.\n\nOur inverse super resolution formulation can be described as its inverse problem. We want to generate multiple (i.e., $n$) low resolution images (i.e., $Y_k$) for each high resolution training image $X$, by applying the optimal set of transforms $F_k$, $H_k$, and $D_k$. We can simplify our formulation by removing the noise term $V_k$ and the lens blur term $H_k$, since there is little reason to further contaminate the resulting low-resolution images and confuse the classifiers in our case:\n\\begin{equation}\n\\label{eq:isr}\nY_k = D_k F_k X, ~~k = 1 \\ldots n.\n\\end{equation}\nIn principle, $F_k$, the camera motion transformation, can be any affine transformation. We particularly consider combinations of shifting, scaling, and rotation transforms as our $F_k$. We use the standard average downsampling as our $D_k$.\n\nThe main technical challenge here is that we need to learn the set of different motion transforms $S = \\{F_k\\}^n_{k=1}$ from a very large pool, which is expected to maximize the recognition performance when applied to generate the training set for classifiers. Such learning should be done based on training data and should be dependent on the features and the classifiers being used, solving the optimization problem in the feature space.\n\nOnce $S$ is learned, the inverse super resolution allows the generation of multiple low resolution images $Y_k$ from a single high resolution image $X$ by following Equation \\ref{eq:isr}. Low resolution `videos' can be generated in a similar fashion to the case of images.
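\n\nAs an illustration of Eq.~\\eref{eq:isr}, a set of LR samples can be generated from a single HR frame as follows (a minimal Python sketch for a grayscale frame; the parametrization of $F_k$ by rotation, scaling, and sub-pixel shift follows the transforms listed above, and all names are hypothetical):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy import ndimage\n\ndef isr_samples(X, transforms, out_h=12, out_w=16):\n    # one LR image Y_k = D_k F_k X per motion transform F_k, given\n    # as (rotation in degrees, scale, sub-pixel shifts dy, dx);\n    # D_k is plain average downsampling to out_h x out_w\n    samples = []\n    for deg, scale, dy, dx in transforms:\n        W = ndimage.rotate(X.astype(float), deg, reshape=False)\n        W = ndimage.shift(ndimage.zoom(W, scale), (dy, dx))\n        bh, bw = W.shape[0] // out_h, W.shape[1] // out_w\n        W = W[:bh * out_h, :bw * out_w]\n        samples.append(W.reshape(out_h, bh, out_w, bw).mean(axis=(1, 3)))\n    return samples\n\\end{verbatim}\n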
We simply apply the same $D_k$ and $F_k$ for each frame of the video $X$, and concatenate the results to get the new video $Y_k$.\n\n\\subsection{Recognition with inverse super resolution}\n\nGiven a set of transforms $S = \\{F_k\\}^n_{k=1}$, the recognition framework with our inverse super resolution is as follows: For each original high resolution training video $X_i$, we apply Equation \\ref{eq:isr} to generate $n$ LR videos $Y_{ik} = D_k F_k X_i$. Let us denote the ground truth label (i.e., activity class) of the video $X_i$ as $y_i$. Also, we describe the feature vector of the video $Y_{ik}$ more simply as $x_{ik}$. The features extracted from all LR videos generated using inverse super resolution become training data. That is, the training set $T(S)$ can be described as $T(S) = \\cup_i\\{ \\langle x_{ik}, y_i \\rangle \\}_{k=0}^n$ where $n$ is the number of LR training samples to be generated per original video. $x_{i0}$ is the original sample resized to LR as is.\n\nBased on $T(S)$, a classification function $f(x)$ with the parameters $\\theta$ is learned. The proposed approach can cope with any type of classifier in principle. In the case of SVMs with the non-linear kernels we use in our experiments,\n\\begin{equation}\n\t\\begin{aligned}\n\t\tf_{\\theta}(x) = \\sum_j \\alpha_j y_j K(x, x_j) + b\n\t\\end{aligned}\n\\end{equation}\nwhere $\\alpha_j$ and $b$ are SVM parameters, $x_j$ are support vectors, and $K$ is the kernel function being used.\n\n\\section{Transformation learning}\n\\label{sec:learning}\n\nIn the previous section, we presented a new framework that takes advantage of LR training videos generated from HR videos assuming a `given' set of transforms. In this section, we present methods to `learn' the optimal set of such motion transforms $S = \\{F_k\\}^n_{k=1}$ based on video data. Such an $S$ learned from video data is expected to perform better than randomly or uniformly selected transforms, which we further confirm in our experiments.\n\n\\subsection{Method 1 - decision boundary matching}\n\nHere, we present a Markov chain Monte Carlo (MCMC)-based search approach to find the optimal set of transforms providing the ideal activity classification decision boundaries. The main idea is that, if we had an infinite number of transforms $F_k$ generating LR training samples, we would be able to learn the best low-resolution classifiers for the problem. Let us denote this ideal decision boundary as $f_{\\theta^*}$. By trying to minimize the distance between $f_{\\theta^*}$ and the decision boundary that can be learned with our transforms, we want to find a set of transformations $S^*$:\n\\begin{equation}\n \\label{eq:boundary-sim}\n\t\\begin{aligned}\n\t\tS^* &= \\argmin_{S} \\left| f_{\\theta^*} - f_{\\theta(S)} \\right|\\\\\n\t\t & \\approx \\argmin_{S} \\sum_{x \\in A} \\left| f_{\\theta^*}(x) - f_{\\theta(S)}(x) \\right|\\\\\n\t\t\\textrm{s.t.}~~ & |S^*| = n\n\t\\end{aligned}\n\\end{equation}\nwhere $f_{\\theta(S)}(x)$ is a classification function (i.e., a decision boundary) learned from the training set $T(S)$ (i.e., LR videos generated using transforms in $S$). $A$ is a validation set of activity videos, used to measure the empirical similarity between two classification functions.
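\n\nThe empirical objective of Eq.~\\eref{eq:boundary-sim} is straightforward to evaluate; a minimal Python sketch (f_star, train_fn, and val_clips are hypothetical stand-ins for $f_{\\theta^*}$, classifier training on $T(S)$, and the feature vectors of the validation set $A$) reads:\n\\begin{verbatim}\nimport math\n\ndef boundary_distance(f_star, f_S, val_clips):\n    # empirical distance between the reference decision function and\n    # the one learned from T(S), summed over the validation set A\n    return sum(abs(f_star(x) - f_S(x)) for x in val_clips)\n\ndef target_density(S, f_star, train_fn, val_clips):\n    # unnormalized pi(S), used as the target distribution of the\n    # MCMC search described next\n    return math.exp(-boundary_distance(f_star, train_fn(S), val_clips))\n\\end{verbatim}\n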
In our implementation, we further approximate Equation \\ref{eq:boundary-sim}, since learning $f_{\\theta^*}(x)$ conceptually requires an infinite (or very large) number of transform filters $F_k$. That is, we assume $f_{\\theta^*}(x) \\approx f_{\\theta(S_L)}(x)$ where $S_{L}$ is a set with a large number of transforms. We also use $S_{L}$ as the `pool' of transforms we consider: $S \\subset S_L$.\n\nWe take advantage of an MCMC sampling method with the Metropolis-Hastings algorithm, where each MCMC action is adding or removing a particular motion transform filter $F_k$ to\/from the current set $S^{t}$. The transition probability $a$ is defined as \n\\begin{equation}\n\t\\begin{aligned}\n\t\ta = \\frac{\\pi(S') \\cdot q(S', S^{t})}{\\pi(S^{t}) \\cdot q(S^{t}, S')}\n\t\\end{aligned}\n\\end{equation}\nwhere the target distribution $\\pi(S)$ is computed by\n\\begin{equation}\n\t\\begin{aligned}\n\t\t\\pi(S) \\propto e^{-\\sum_{x \\in A} \\left| f_{\\theta^*}(x) - f_{\\theta(S)}(x) \\right|},\n\t\\end{aligned}\n\\end{equation}\nwhich is based on the $\\argmin$ term of Equation \\ref{eq:boundary-sim}. We model the proposal density $q(S', S^{t})$ with a Gaussian distribution $|S'| \\sim N(n, \\sigma^2)$ where $n$ is the number of inverse super resolution samples we are targeting (i.e., the number of filters). The proposal $S'$ is accepted with the transition probability $a$, and it becomes $S^{t+1}$ once accepted.\n\nUsing the above MCMC formulation, our approach goes through multiple iterations from $S^0 = \\{ \\}$ to $S^m$ where $m$ is the maximum number of iterations we consider. Based on the sampled $S^0, \\ldots, S^m$, the one with the maximum $\\pi(S)$ value is finally chosen as our transforms: $S^* = \\argmax_{S^t} \\pi(S^t)$ with the condition $|S| \\leq n$.\n\n\\subsection{Method 2 - maximum entropy}\n\nIn this subsection, we propose an alternative methodology to learn the optimal set of transform filters $S^*$. Although the above methodology of directly comparing the classification functions provides a good solution to the problem, a fair number of MCMC sampling iterations is needed for a reliable solution. It also requires a separate validation set $A$, which often means that the system is required to split the provided training set into the real training set and the validation set. This means that the transformation set learning itself uses less training data in practice.\n\nHere, we present another approach of using the \\emph{entropy} measure. Entropy is an information-theoretic measure that represents the amount of information needed, and it is often used to measure uncertainty (or information gain) in machine learning \\cite{settles2010active}. Our idea is to learn the set $S^*$ by iteratively finding transform filters $F_{1 \\cdots n}$ that provide the maximum information gain when applied to the (original HR) training videos we have.\n\nWe formulate the problem similarly to active learning. At each iteration, we select $F_k$ that will generate new LR samples with the most uncertainty (i.e., maximum entropy) measured based on the current classifier trained with the current set of transforms: $f_{\\theta(S^t)}$. Adding such samples to the training set gives the new classifier the most information gain. That is, we iteratively update our set as $S^{t+1} = S^t \\cup \\{F_*^{t}\\}$ where\n\\begin{equation}\n\t\\begin{aligned}\n\t\tF^{t}_* &= \\argmax_k \\sum_i H(D_kF_kX_i)\\\\\n\t\t &= \\argmax_k - \\sum_i \\sum_j P_{\\theta(S^t)}(y_j | D_kF_kX_i)\\\\\n\t\t &\\relphantom{= \\argmax - \\sum \\sum}\\log P_{\\theta(S^t)}(y_j | D_kF_kX_i).\n\t\\end{aligned}\n\\end{equation}\nHere, $X_i$ is each video in the training set, and $P_{\\theta(S^t)}$ is the probability computed from the classifier $f_{\\theta(S^t)}$. We are essentially searching for the filter that will provide the largest amount of information gain when added to the current transformation set $S^t$. More specifically, we sum the entropy $H$ (i.e., uncertainty) of all low resolution training videos that can be generated with the filter $F_k$: $H(D_kF_kX_i)$.\n\nThe approach iteratively adds one transform $F^t_*$ at every iteration $t$, which is the greedy strategy based on the entropy measure, until it reaches the $n$th round: $S^* = S^n$. Notice that such entropy can be measured with any videos, with or without ground truth labels. This makes the proposed approach suitable for unsupervised (transform) learning scenarios as well.
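\n\nA minimal Python sketch of this greedy selection (assuming a classifier with a predict_proba-style interface; train_fn, apply_fn, and the other names are hypothetical) reads:\n\\begin{verbatim}\nimport numpy as np\n\ndef entropy(p):\n    p = np.clip(p, 1e-12, 1.0)\n    return float(-np.sum(p * np.log(p)))\n\ndef greedy_entropy_selection(pool, n, videos, train_fn, apply_fn):\n    # pool: list of candidate filters F_k; train_fn(S) returns the\n    # current classifier f_theta(S^t); apply_fn(X, F) returns the\n    # feature vector of the generated LR video D_k F_k X\n    S = []\n    for _ in range(n):\n        clf = train_fn(S)\n        gains = [sum(entropy(clf.predict_proba([apply_fn(X, F)])[0])\n                     for X in videos) for F in pool]\n        S.append(pool.pop(int(np.argmax(gains))))\n    return S\n\\end{verbatim}\n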
\n\\section{Experiments}\n\nWe confirm the effectiveness of inverse super resolution using low resolution versions (16x12 and 32x24) of three different datasets (HMDB, DogCentric, and JPL-Interaction).\n\n\\subsection{Resized datasets}\n\nWe selected three public datasets and resized them to obtain low resolution (e.g., 16x12) videos.\n\nHMDB dataset \\cite{kuehne11} is a dataset popularly used for video classification. It is composed of $\\sim$7000 videos from 51 different action classes. The videos include short clips, mostly from movies, obtained from YouTube. DogCentric dataset \\cite{ryoo14dog} and JPL-Interaction dataset \\cite{ryoo13} are first-person video datasets taken with wearable\/robot cameras. They are smaller scale datasets, having $\\sim$200 videos and $\\sim$10 activity classes. The DogCentric activity dataset is composed of first-person videos taken from a wearable camera mounted on top of a dog interacting with humans and surroundings. The dataset contains a significant amount of ego-motion, and it serves as a benchmark to test whether an approach is able to capture information in activities while enduring\/capturing strong camera ego-motion. The JPL-Interaction dataset contains human-robot activity videos taken from a robot's point-of-view.\n\nWe emphasize once more that we made all the videos in these datasets have significantly lower resolution (Figure \\ref{fig:videos}), which is a much more challenging setting compared to the original datasets with their full resolution. Our main video resolution setting was 16x12, and we also tested the resolution of 32x24. For the resizing, we used the approach of averaging pixels in the original high-resolution videos that fall within LR pixel boundaries. Video cropping was used for the videos with non-4:3 aspect ratio.\n\n\\begin{figure}\n\t\\centering\n\t\\resizebox{0.95\\linewidth}{!}{\n\t \\includegraphics{images\/low_resolution_videos.pdf}\n\t}\n\t\\caption{The original resolution videos (top) and their 16x12 resized videos (bottom) from the three datasets used.
We can confirm that the videos are properly anonymized (i.e., we cannot distinguish human faces) by resizing them to extreme low resolution, but activity recognition from them is becoming more challenging due to the loss of details.}\n\t\\label{fig:videos}\n\\end{figure}\n\n\n\\subsection{Implementation}\n\n{\\flushleft\\textbf{Feature descriptors\/representation:} We extracted three different types of popular video features and tested our inverse super resolution with each of them and their combinations. The three features are (i) histogram of oriented gradients (HOG), (ii) histogram of optical flows (HOF), and (iii) convolutional neural network (CNN) features. These feature descriptors were extracted from each frame of the video. In order to make the CNN handle our low-resolution frames, we newly designed and utilized a 3-layer network with dense convolution and minimal pooling, illustrated in Figure \\ref{fig:LR_cnn}. Next, we use Pooled Time Series (PoT) feature representation \\cite{ryoo15} with temporal pyramids of level 1 or 4 on top of these four descriptors.}\n\n\n\n\n\n\n\n\n\n\n{\\flushleft\\textbf{Classifier:} Standard SVM classifiers with three different kernels were used: a linear kernel and two non-linear multi-channel kernels ($\\chi^2$ and the histogram-intersection kernels).}\n\n\n\n{\\flushleft\\textbf{Baselines:} The conventional approach for low resolution activity recognition is to simply resize original training videos to fit the target resolution (Figure \\ref{fig:isr} (a)). We use this as our baseline, while making it utilize the identical features and representation. The parameters were tuned for each system individually. In addition, we implemented the \\emph{data augmentation} (DA) approach similar to \\cite{karpathy14} that randomly selects LR transformations to increase the number of training samples. We added random rotation transformations to the data augmentation as well (DA+rotation), and also tested the uniform transformation selection strategy.}\n\n\n\\begin{figure}\n\t\\centering\n\t\\resizebox{0.8\\linewidth}{!}{\n\t \\includegraphics{images\/LR-cnn.pdf}\n\t}\n\t\\caption{CNN architecture for extracting 256-D features from very low resolution images.}\n\t\\label{fig:LR_cnn}\n\\end{figure}\n\n\n\n\\subsection{Evaluation}\n\nWe conducted experiments with the videos downsampled to 16x12 (or 32x24) as described above. We followed the standard evaluation setting for each of the datasets. In the HMDB experiment, we used the provided 3 training\/testing splits and performed the 51-class classification. In the experiments with the DogCentric dataset, multiple rounds of random half-half training\/test splits were used to measure the accuracy. In the case of JPL-Interaction dataset with robot videos, 12-fold leave-one-set-out cross validation was used.\n\n\n\\begin{figure*}\n\t\\centering\n\t\\resizebox{1.0\\linewidth}{!}{\n\t \\includegraphics{images\/dogcentric_results.pdf}\n\t}\n\t\\caption{Experimental results with different features on 16x12 DogCentric dataset. X-axis shows the number of LR samples obtained using ISR or data augmentation (i.e., $n$), and Y-axis is the classification accuracy (\\%). The blue horizontal line in each plot shows the activity classification performance without ISR. 
ISR shows a superior performance in all cases.}\n\t\\label{fig:exp-dog}\n\\end{figure*}\n\n\\subsubsection{16x12 DogCentric dataset}\n\\label{subsubsec:dog}\n\n\n\\begin{table}\n \\small\n\t\\caption{Performances (\\%) of different methods tested with 16x12 DogCentric dataset, using three different kernels. $n=16$ and PoT level 1 was used with all features. Standard deviations were $\\sim0.3$, and the behaviors were very consistent.}\n\t\\label{table:kernels}\n\n\t\\small\n\t\\center\n\t\\setlength\\extrarowheight{0pt}\n\n\t\t\\begin{tabular}\t{c|c|c|c}\n\t\t\t\\hline \t & ~~Linear~~ & ~~~~$\\chi^2$~~~~ & Histogram \\tabularnewline\n\t\t\t\\hline \tBaseline (PoT) & 58.47\t& 63.33\t& 58.98 \\tabularnewline\n\t\t\t DA & 60.86\t& 63.36\t& 62.08 \\tabularnewline\n\t\t\t DA + rotation & 60.94\t& 64.17\t& 62.85 \\tabularnewline\n\t\t\t Uniform & 60.56\t& 63.95\t& 62.29 \\tabularnewline\n\t\t\t\\hline\n\t\t\t\t\tISR-method1 & 61.73\t& 64.85\t& 63.35 \\tabularnewline\n\t\t\t\t\tISR-method2 & \\textbf{61.96}\t& \\textbf{64.91}\t& \\textbf{63.61} \\tabularnewline\n\t\t\t\\hline\n\t\t\\end{tabular}\n\n\\end{table}\n\nIn this experiment, we used 4 types of features, 3 different types of kernels (i.e., linear, $\\chi^2$, and histogram-intersection), and 6 different settings for the number of LR samples generated per original video (i.e., $n$). For each of the settings, five different types of sample generation methods were tested: data augmentation, data augmentation with rotations, uniform sampling, and our ISR transform learning methods (method1 and method2).\n\n\n\n\n\n\n\n\n\\begin{table}\n\t\\caption{16x12 DogCentric dataset result comparison: Notice that \\cite{wang13} was not able to extract any trajectories from 16x12 videos. For the PoT and our ISR, we are reporting the result with $\\chi^2$ kernel with PoT level 4.}\n\t\\label{table:dogcentric}\n\n\t\\small\n\t\\center\n\t\\setlength\\extrarowheight{0pt}\n\n\t\t\\begin{tabular}\t{c|c|c}\n\t\t\t\\hline \tApproach & Resolution &Accuracy \\tabularnewline\n\t\t\t\\hline \tIwashita et al. \\shortcite{ryoo14dog} & 320x240\t& \t60.5 \\% \\tabularnewline\n\t\t\t Wang and Schmid \\shortcite{wang13} & 320x240\t& \t67.6 \\% \\tabularnewline\n\t\t\t PoT (Ryoo et al. 2015) & 320x240\t& \t73.0 \\% \\tabularnewline\n\t\t\t\\hline\n\t\t\t Iwashita et al. \\shortcite{ryoo14dog} & 16x12\t& \t46.2 \\% \\tabularnewline\n\t\t\t Wang and Schmid \\shortcite{wang13} & 16x12\t& \t10.0 \\% \\tabularnewline\n\t\t\t\t\tPoT (HOG + HOF + CNN) & 16x12 & \t64.6 \\% \\tabularnewline\n\t\t\t\t\t\\textbf{ISR} ($n = 16$) & 16x12 & 67.4 \\% \\tabularnewline\n\t\t\t\\hline\n\t\t\\end{tabular}\n\n\\end{table}\n\n\n\\begin{table*}\n\t\\caption{HMDB performances with and without our ISR. Classification accuracies (\\%) on 16x12 and 32x24 are reported. We show means and standard deviations of repeated experiments. The best performance per feature is indicated with bold.}\n\t\\label{table:exp-hmdb}\n\n\t\\centering\n\t\\resizebox{1.0\\linewidth}{!}{%\n\t \\includegraphics{images\/hmdb_results2.pdf}\n\t}\n\\end{table*}\n\n\nFigure \\ref{fig:exp-dog} shows the results of our ISR tested with multiple different features while using a linear kernel, and Table \\ref{table:kernels} shows the results with three different kernels. We are able to observe that our method1 always performs superior to conventional approaches including data augmentation or uniform transform selection, very reliably. 
Our method2's performance was less consistent, since it only uses information gain instead of taking advantage of ground-truth sample labels when learning transforms.\nNevertheless, method2 showed an overall performance meaningfully superior to the other approaches (e.g., method2 - 62.0 vs. data augmentation - 60.9 with a linear kernel).\n\nTable \\ref{table:dogcentric} compares our approach with other state-of-the-art approaches. The best performance reported on the DogCentric dataset is 73\\% with 320x240 videos using the PoT feature representation \\cite{ryoo15}, but this method obtains an accuracy of only 64.6\\% with 16x12 anonymized videos. Our inverse super resolution further improves this to 67.4\\% while using the same features, representation, and classifier.\n\n\n\n\\subsubsection{Tiny-HMDB dataset: 16x12 and 32x24}\n\n\n\n\nTable \\ref{table:exp-hmdb} shows the results with our ISR-method2. The results clearly suggest that inverse super resolution improves low resolution video recognition performance in all cases. Even with a small number of additional ISR samples (e.g., $n = 2$), the performance improved by a meaningful amount compared to the same classifier without inverse super resolution. Naturally, the classification accuracies with 32x24 videos were higher than those with 16x12. CNN performances were similar for both 16x12 and 32x24, since our CNN takes 16x12 resized videos as inputs.\n\nNotice that even with 16x12 downsampled videos, where visual information is lost and the use of trajectory-based features is infeasible, our methods were able to obtain performance superior to several previous methods such as the standard HOF\/HOG classifier (20.0\\% \\cite{kuehne11}) and ActionBank (26.9\\% \\cite{corso12}). That is, although we are extracting features from 16x12 videos where a person is sometimes as small as a few pixels, our approach performs better than certain methods using the original HR videos (i.e., videos larger than 320x240). The approach of \\cite{wang13}, which obtains the state-of-the-art performance of 57.2\\% with 320x240 HR videos, achieved only $\\sim$2\\% on LR videos, since no trajectories could be extracted from 16x12 or 32x24 frames.\n\n\n\n\n\n\\subsubsection{16x12 JPL-Interaction dataset}\n\nWe also conducted experiments with the segmented version of the JPL-Interaction dataset \\cite{ryoo13}, containing robot-centric videos. Figure \\ref{fig:videos} (c) shows examples of its 16x12 version, where we can observe that human faces are anonymized.\n\nTable \\ref{table:jpl-interaction} shows the results. We are able to confirm once more that the proposed concept of inverse super resolution benefits low resolution video recognition. Surprisingly, probably because each activity in this dataset shows appearance\/motion very distinct from the others, we were able to obtain activity classification accuracy comparable to the state of the art while only using 16x12 extreme low resolution videos.\n\n\n\n\n\n\n\\section{Conclusion}\n\nWe present an \\emph{inverse super resolution} method for improving classification performance on extreme low resolution video. We experimentally confirm its effectiveness using three different public datasets. The overall recognition was particularly successful with first-person video datasets, where capturing ego-motion is most important. Our approach is also computationally efficient in practice, requiring a number of learning iterations linear in the number of ISR samples when using our method2.
In contrast, to achieve similar performance with traditional data augmentation, an order of magnitude more examples are needed (e.g., $n$=$16$ vs. $n$=$175$).\n\n\\begin{table}\n \\small\n\t\\caption{Recognition performances of our approach tested with the 16x12 resized JPL-Interaction dataset. Although our result is based on extremely low resolution 16x12 videos, it obtained performance comparable to the other methods \\cite{ryoo13,wang13,narayan14} tested using much higher resolution 320x240 videos.}\n\t\\label{table:jpl-interaction}\n\n\t\\small\n\t\\center\n\t\\setlength\\extrarowheight{0pt}\n\n\t\t\\begin{tabular}\t{c|c|c}\n\t\t\t\\hline \tApproach & Resolution &Accuracy \\tabularnewline\n\t\t\t\\hline \tRyoo and Matthies \\shortcite{ryoo13} & 320x240\t& \t89.6 \\% \\tabularnewline\n\t\t\t Wang and Schmid \\shortcite{wang13} & 320x240\t& \t96.1 \\% \\tabularnewline\n\t\t\t Narayan et al. \\shortcite{narayan14} & 320x240\t& \t96.7 \\% \\tabularnewline\n\t\t\t\\hline\n\t\t\t Ryoo and Matthies \\shortcite{ryoo13} & 16x12\t& \t74.5 \\% \\tabularnewline\n\t\t\t\t\tPoT & 16x12 & \t92.9 \\% \\tabularnewline\n\t\t\t\t\tOurs (PoT + \\textbf{ISR}) & 16x12 & 96.4 \\% \\tabularnewline\n\t\t\t\\hline\n\t\t\\end{tabular}\n\n\\end{table}\n\n\n\n\n\n\\section{Discussions}\n\nOne natural question is whether the resolution of our testing videos is low enough to prevent human\/machine recognition of faces (i.e., whether our videos are really privacy-preserving).\n\n\\begin{figure}\n\t\\centering\n\t\\resizebox{1.0\\linewidth}{!}{\n\t \\includegraphics{images\/HR_recovery_examples.pdf}\n\t}\n\t\\caption{Example resolution enhancement attempts using \\cite{kim16sr}. We can observe that face details are not recovered properly, even after the x4 scale enhancement. This is particularly so with our 16x12 videos.}\n\t\\label{fig:hr-recovery}\n\\end{figure}\n\nThe state-of-the-art low resolution face recognition (i.e., face identification) algorithm using convolutional neural networks \\cite{lrface16} obtained around 50$\\sim$60\\% accuracy with 16x16 human face images. This 50$\\sim$60\\% classification accuracy is based on a dataset with 180 subjects, and the performance is expected to drop even lower in real-world environments with more subjects to consider. On the other hand, in our extreme low resolution videos (i.e., 16x12 videos), the resolution of a human face is at most 5x7. In many cases, the face resolution was as small as 2x2 or even 1x1. This suggests that reliable face recognition from our extreme low resolution videos will be difficult for both machines and humans. Furthermore, there is a user study \\cite{butler15} reporting that anonymizing videos in a way similar to ours significantly lowers viewers' privacy concerns.\n\nAnother relevant question is whether enhancing the resolution of the testing video (i.e., recovering high resolution faces from LR images) is possible with our extreme LR videos. In order to confirm that such recovery is not possible due to the information loss, we applied the state-of-the-art deep learning-based recovery approach \\cite{kim16sr} to the video frames. Figure \\ref{fig:hr-recovery} illustrates the results. Notice that these images are based on a scale factor of x4; any attempt with a higher scale factor gave us worse results. We observe that this deep learning-based resolution enhancement does not recover the face details, particularly in 16x12 videos.
The algorithm sharpens the edges compared to bicubic interpolation, but fails to recover the actual details.\n\n\n\n\n\n\\renewcommand{\\thefootnote}{\\fnsymbol{footnote}}\n\n\\section*{Acknowledgement}\n\nRyoo and Yang's research in this work was conducted as a part of EgoVid Inc.'s research activity on privacy-preserving computer vision. Ryoo and Yang are the corresponding authors.\n\n\n\n\n\n{\\small\n\\bibliographystyle{aaai}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{intro}\n\nIt is well known that prime factorization is a computationally difficult problem, and the security of RSA-type classical cryptographic systems derives from this difficulty \\cite{rivest1978method}. Although various RSA-type schemes for public key cryptography are still in use, the confidence in the\nsecurity provided by RSA-type cryptosystems has been considerably weakened since the pioneering work of Shor \\cite{shor1999polynomial,shor1995scheme,shor2000simple,shor1994algorithms}. Specifically, in \\cite{shor1999polynomial}, Shor showed that the factorization of bi-primes can be performed in polynomial time if a scalable quantum computer can be built. In other words, building a scalable quantum computer would imply the end of RSA-type classical cryptosystems and of a set of other classical cryptosystems, too. This fundamental importance of the factorisation problem and the benefit\nof implementing it using a quantum computer have led to a set of schemes for prime factorization, mainly experimental \\cite{Vandersypen2001Experimental,lu2007demonstration,lanyon2007experimental,politi2009shor,matthews2009compiled}. Initial implementations were based on nuclear magnetic resonance (NMR) and mostly followed Shor's original algorithm. For example, 15 was factorised using an NMR-based 7-qubit quantum computer \\cite{Vandersypen2001Experimental}. Interestingly, the largest number factored using Shor's algorithm until 2012 was 21, and a 10-qubit quantum computer was used for the purpose \\cite{lucero2012computing}. Due to the fact that considerably large quantum registers are required in Shor's original algorithm, it is now not usually used in its original form \\cite{Vandersypen2001Experimental}. Though Shor's algorithm, in principle, guarantees factorization of a bi-prime number in polynomial time, the requirement of very large quantum registers restricted its applicability in factorising relatively large bi-primes. This fact motivated researchers to look for alternative approaches. One such approach is to use a hybrid scheme; variants of this approach are reported in Refs. \\cite{peng2008quantum, pal2019hybrid, li2017high}. These variants are quite close to each other, and each of them requires fewer quantum resources than Shor's original approach. Here, it would be apt to note that hybrid schemes refer to those schemes where part of the factorisation task is done classically to reduce the requirement of costly quantum resources. \n\nIn 2008, Peng et al. \\cite{peng2008quantum} devised an algorithm utilizing adiabatic quantum computing on the basis of the work of Farhi et al. \\cite{farhi2001quantum} and demonstrated the factorization of 21 using 3 qubits. Furthermore, in 2012, Xu et al. improved the scheme by solving some of the equations mathematically. 
They further demonstrated the power of this class of factorisation algorithms by factorizing 143 using a 4-qubit NMR quantum processor \\cite{xu2012quantum}. Two years later, Dattani and Bryans established that Xu et al. had actually factored 3599, 11663, and 56153, without recognizing it. In the work of Dattani and Bryans, classical resources were used to partially simplify a set of bit-wise factoring equations, which allowed them to reduce the quantum overhead for a set of numbers \\cite{dattani2014quantum}. In 2019, Pal et al. demonstrated a hybrid scheme for the factorization of 551 using the resources of a 3-qubit system \\cite{pal2019hybrid}. \nProgress has continued: in 2017, the factorization of 35 was demonstrated using a single solid-state spin system under ambient conditions by Xu et al. \\cite{xu2017experimental}. Furthermore, a set of relatively large numbers has recently been experimentally factorized. Specifically, combining the concepts of quantum annealing and computational algebraic geometry, a new approach for quantum factorization was developed by Dridi and Alghassi \\cite{dridi2017prime}; it has been successfully used to factorize all bi-primes up to 200099 using the D-Wave 2X processor \\cite{anschuetz2019variational}, and the experimental factorization of 291311 was performed by Li et al. \\cite{li2017high} using a hybrid adiabatic quantum algorithm.\n\nThe importance of the factorization problem and the facts that (i) quantum factorization has so far been performed on only a few of the potential candidates for a scalable quantum computer and (ii) hybrid schemes have the potential to factorize large bi-primes using the small and noisy quantum computers available today have motivated us to modify the hybrid scheme given in \\cite{pal2019hybrid,xu2012quantum} to obtain a new scheme which can be implemented on another experimental platform, namely the IBM quantum processor. Specifically, the algorithm proposed here is designed for the factorization of bi-primes using a Josephson-qubit based quantum computer \\cite{IBMQE,devitt2016performing}.\nInterested readers may find a detailed user guide on how to use this computer at \\cite{IBMQE},\nand a lucid description of the working principle of a Josephson-qubit based quantum computer in Ref. \\cite{steffen2011quantum}. This quantum computer was placed on the cloud in 2016. It immediately drew considerable attention from the quantum information processing community, and several quantum information tasks have already been performed using this quantum computer. 
Specifically, in the domain of quantum communication, properties of different quantum channels that can be used for quantum communication have been studied experimentally \\cite{wei2018efficient}, and experimental realizations of a quantum analogue of a bank cheque \\cite{behera2017experimental} (claimed to work in a banking system having a quantum network), of a scheme for solving a set of linear equations \\cite{doronin2020solving}, and of teleportation of two-qubit quantum states using optimal resources \\cite{sisodia2017design} have been reported; in the field of quantum foundations, violation of the multi-party Mermin inequality has been realized for 3, 4, and 5 parties \\cite{alsina2016experimental}, and an information-theoretic version of the uncertainty relation and measurement reversibility has also been implemented \\cite{berta2016entropic}; in the area of quantum computation, a comparison of two architectures through the demonstration of an algorithm has been performed \\cite{linke2017experimental}, and a quantum permutation algorithm \\cite{yalccinkaya2017optimization} and a variational quantum eigensolver based experimental search for the ground-state energies of molecules of increasing size up to $\\rm{BeH_2}$ \\cite{kandala2017hardware} have been implemented recently. Further, non-abelian braiding of surface code defects \\cite{wootton2017demonstrating} and a compressed simulation of the transverse field one-dimensional Ising interaction (realized as a four-qubit Ising chain that utilizes only two qubits) \\cite{hebenstreit2017compressed}\nhave also been demonstrated. Clearly, the IBM quantum computer\nhas already been used for the experimental realization of various\nquantum information processing tasks. However, to the best of our\nknowledge, the IBM quantum computer has not yet been used to realize Shor's algorithm or hybrid algorithms for factorization. This paper aims to address that gap.\n\nThis paper is organized as follows. \nSec. \\ref{intro} describes the motivation behind choosing the factorization problem and details the gradual development of hybrid (i.e., combined classical and quantum) strategies for obtaining efficient solutions. In Sec. \\ref{theory}, we revisit the general scheme for hybrid factorization. In Sec. \\ref{fact35}, we elaborate on a specific case, the factorization of 35, using the hybrid scheme: we construct the bit-wise equations and the bit-wise multiplication table, optimize them to calculate the unknown carries, obtain the relation between the bit variables, and formulate the corresponding problem Hamiltonian using the procedure given in Sec. \\ref{theory}. In Sec. \\ref{exp}, we illustrate the experimental implementation of the quantum part of the hybrid scheme for factorizing 35 using the IBM architecture. For this purpose, we use adiabatic evolution to search for the ground state of the problem Hamiltonian constructed in the previous section, which encodes the desired solution. In Sec. \\ref{results}, we show the experimental results, which reveal the unknown qubit state and hence one factor of the composite number. Finally, we conclude in Sec. \\ref{concl}.\n\n\\section{Theory} \\label{theory}\n\nLet us consider a $b_{n}$-bit number $N=\\sum_{j=0}^{b_{n}-1}2^{j}n_{j}$. A composite number of bit length $b_{n}$ may have various sets of factors, and the bit length of the smaller factor of any factor pair can be at most $\\lceil \\frac{b_{n}}{2} \\rceil$, where $\\lceil x \\rceil$ denotes the ceiling function. In keeping with the requirements of cryptosystems, here we consider $N$ to be a bi-prime with two distinct prime factors. 
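For instance, $N=35$ is a $b_{n}=6$ bit number with $(n_{5},n_{4},n_{3},n_{2},n_{1},n_{0})=(1,0,0,0,1,1)$, since $35=2^{5}+2^{1}+2^{0}$. 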
Let us assume that the two factors of $N$ are $P=\\sum_{k=0}^{b_{p}-1}2^{k}p_{k}$ and $Q=\\sum_{l=0}^{b_{q}-1}2^{l}q_{l}$ with bit lengths $b_{p}$ and $b_{q}$, respectively. Since the factors of an odd bi-prime are odd primes, $p_{0}=q_{0}=1$, and by the definition of bit length, $p_{b_{p}-1}=q_{b_{q}-1}=1$. Thus, the identity $N=PQ$ becomes\n\\begin{equation}\n\\sum_{j=0}^{b_{n}-1}2^{j}n_{j} = \\sum_{k=0}^{b_{p}-1}2^{k}p_{k} \\sum_{l=0}^{b_{q}-1}2^{l}q_{l}. \\label{eq1}\n\\end{equation}\nAs follows from the above equation, either $b_{n}=b_{p}+b_{q}$ or $b_{n}= b_{p}+b_{q}-1$. The hybrid scheme for factorization can be divided into the following steps.\n\\begin{itemize}\n\\item Formulating the multiplication table for $P$ and $Q$.\n\\item Constructing the bit-wise equations from the multiplication table.\n\\item Simplifying the bit-wise equations using classical computation.\n\\item Constructing the bit-wise Hamiltonians and hence the problem Hamiltonian.\n\\item Obtaining the unitaries corresponding to the adiabatic evolution from the given initial Hamiltonian to the problem Hamiltonian.\n\\item Decomposing a given unitary using gates available in the IBM Clifford library.\n\\item Obtaining the solution of the problem Hamiltonian using adiabatic quantum computation.\n\\end{itemize}\n\nIn the following, we illustrate the hybrid scheme for factorization with the example of $N=35$. We also report the experimentally obtained factors using this scheme. The quantum part of the scheme, i.e., the constrained minimization using adiabatic evolution, has been implemented on a 5-qubit IBM quantum processor involving superconducting qubits. \n\\subsection{Simplification of bit-wise constraint equations}\nFor the purpose of bit-wise comparison of the coefficients on both sides, we rewrite the above equation by introducing a new index $c=k+l$. In terms of this new index, the equation becomes \n\\begin{equation}\n\\sum_{j=0}^{b_{n}-1}2^{j}n_{j} = \\sum_{c=0}^{b_{p}+b_{q}-2}2^{c} \\sum_{l=c_{min}}^{c_{max}}p_{c-l}q_{l}.\n\\label{rewritten_m}\n\\end{equation}\nHere, $c_{min}= \\max(0,c-b_{p}+1)$ and $c_{max}= \\min(c,b_{q}-1)$. Further, each term $p_{c-l}q_{l}$, together with the incoming carry, can be decomposed as $p_{c-l}q_{l}+C_{c-1,c}=s_{c,l}+2C_{c,c+1}$. Here, $C_{c-1,c}$ is the carry from the $(c-1)^{th}$ column to the $c^{th}$ column and $C_{c,c+1}$ is the carry from the $c^{th}$ column to the $(c+1)^{th}$ column. Such a decomposition allows an easy understanding of the construction of the multiplication table. In order to get simplified bit-wise factoring equations, without any loss of generality, we now add all the carries in a given column. The bit-wise factoring equation then takes the following form\n\\begin{equation}\n\\sum_{l=c_{min}}^{c_{max}}p_{c-l}q_{l}+C_{c}-2C_{c+1}= n_{c}.\n\\label{bitequation}\n\\end{equation}\nHere, the cumulative carry into column $c$ is $C_{c}= \\sum C_{c-1,c}$, and $n_{c}$ is the bit value of the number $N$ for the $c^{th}$ order (power) of the base value in the binary system. \n\\subsection{Constraint optimization using classical resources} \\label{classicaloptimization}\n\nConsider the following equation,\n\\begin{equation}\n \\sum_{j=0}^{b_{n}-1}2^{j}n_{j} = \\sum_{c=0}^{b_p+b_q-2} 2^{c}(n_{c}+2C_{c+1}-C_{c}), \n\\label{lastequation1}\n\\end{equation}\nobtained by substituting the value of $$\\sum_{l=c_{min}}^{c_{max}}p_{c-l}q_{l}$$ from Eq.(\\ref{bitequation}) into Eq.(\\ref{rewritten_m}), and the equation\n\\begin{equation}\n \\sum_{j=0}^{b_{n}-1}2^{j}n_{j}= \\sum_{c=0}^{b_p+b_q-2} 2^{c}n_{c}+ 2^{b_{p}+b_{q}-1}C_{b_{p}+b_{q}-1},\n \\label{lastequation2}\n\\end{equation}\nwhich follows from Eq.(\\ref{lastequation1}) since the carry terms telescope (using $C_{0}=0$). 
Writing Eq. (\\ref{lastequation2}) in this way allows us to calculate the value of the carry $C_{b_{p}+b_{q}-1}$: a direct comparison of the L.H.S. with the R.H.S. reveals it. Also, as there is no incoming carry to the first column, we set $C_{0}=0$. Moreover, the bit-wise equations for $c=0$ and $c=b_{p}+b_{q}-2$, i.e., $1+C_{0}=1+2C_{1}$ and $C_{b_{p}+b_{q}-2}+1=n_{b_{p}+b_{q}-2}+2C_{b_{p}+b_{q}-1}$, give $C_{1}$ and $C_{b_{p}+b_{q}-2}$. Next, we obtain the following equality by rewriting the bit-wise equation: \n\\begin{equation}\n \\mathrm{max}( C_{c+1} ) = \\lfloor \\mathrm{max}(\\frac{1}{2}\\sum_{l=c_{min}}^{c_{max}} p_{c-l}q_{l}+\\frac{1}{2}C_{c})-\\frac{n_{c}}{2} \\rfloor.\n\\end{equation}\nThis equality can be used to calculate an upper bound on $C_{c+1}$, and this upper bound can be used iteratively to obtain the upper bound on the next $C$ value. The bit equations obtained under these constraints further allow us to simplify the complete set of bit equations to a minimum number of independent parameters. \nFor the two cases, namely, Case A $\\colon$ when $b_n=b_p+b_q$ and Case B $\\colon$ when $b_n=b_p+b_q-1$, the values of the carry $C_{b_{p}+b_{q}-1}$ are $1$ and $0$, respectively.\n\n\\subsection{Construction of the problem Hamiltonian} \\label{probHam}\nIn 2008, Peng et al. developed a framework for solving the factorization problem using a quantum adiabatic algorithm. For this purpose, they formulated the factorization problem as a minimization problem by constructing the function $f(p,q)=(N-pq)^{2}$, whose minimization reveals the values of the classical variables $p$ and $q$. They further suggested that any corresponding quantum approach to the minimization problem must involve finding the ground state of a Hamiltonian of the form $H=\\sum_{p,q}f(p,q)\\ket{p,q}\\bra{p,q}$, whose ground state is the product state $\\ket{p,q}$ of $\\ket{p}$ and $\\ket{q}$ with the minimum eigenvalue $f(p,q)$. The problem Hamiltonian for the factorization problem takes the form $H=(N\\operatorname{I_{2^{n}}}-PQ)^2$, where $\\operatorname{P}=\\sum_{i}2^iA_{i}$ and $\\operatorname{Q}=\\sum_{i}2^iA'_{i}$, with $A_{i}$ and $A'_{i}$ acting on the qubits encoding $p_{i}$ and $q_{i}$, respectively, and $\\operatorname{A_{i}}= \\frac{\\operatorname{I}-\\operatorname{\\sigma_{iz}}}{2}$ with eigenvalues 0 and 1 for the eigenstates $\\ket{0}$ and $\\ket{1}$, respectively. Furthermore, Xu et al. \\cite{xu2012quantum} used another approach to construct a Hamiltonian, which requires fewer quantum resources than that of Ref. \\cite{peng2008quantum} but still more than that used by Pal et al. in Ref. \\cite{pal2019hybrid}. In this article, in order to construct the problem Hamiltonian, we have used the same approach as Xu et al. in Ref. \\cite{xu2012quantum}, i.e., to begin with, we transform the classical bit variables $p_{i}$ and $q_{i}$ into operators such that $p_{i} \\rightarrow A_{i}$ and $q_{i} \\rightarrow A_{i+b_{p}-2}$. \n\n \\subsection{Quantum adiabatic algorithm for ground state search}\nThe quantum adiabatic theorem states that, during the evolution of a quantum system under a slowly varying (as per the adiabaticity condition \\cite{farhi2000quantum}) time-dependent Hamiltonian $H(t)$, the system remains in the instantaneous eigenstate corresponding to the eigenstate in which it was prepared initially \\cite{mesiah1961quantum}. Given a problem, adiabatic quantum computation typically involves encoding the solution to the problem in the ground state of the final Hamiltonian. 
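To make this encoding concrete, the following NumPy sketch (an illustrative reconstruction following the formulation of Ref. \\cite{peng2008quantum}, not the code used in our experiments) builds $H=(N\\operatorname{I}-PQ)^2$ for $N=35$ with the two free bits $p_{1},q_{1}$ and confirms that the ground states are $\\ket{01}$ and $\\ket{10}$, i.e., $\\{P,Q\\}=\\{5,7\\}$:\n\\begin{verbatim}\nimport numpy as np\n\n# A = (I - sigma_z)/2 has eigenvalues 0 and 1 for |0> and |1>\nI2 = np.eye(2)\nA = (I2 - np.diag([1.0, -1.0])) / 2\n\nA_p1 = np.kron(A, I2)            # free bit p1 on qubit 0\nA_q1 = np.kron(I2, A)            # free bit q1 on qubit 1\nI4 = np.eye(4)\n\nP = 5 * I4 + 2 * A_p1            # P = 1 + 2*p1 + 4, since p0 = p2 = 1\nQ = 5 * I4 + 2 * A_q1            # Q = 1 + 2*q1 + 4, since q0 = q2 = 1\nH = (35 * I4 - P @ Q) @ (35 * I4 - P @ Q)\n\nprint(np.diag(H))                # [100, 0, 0, 196]: minima at |01> and |10>\n\\end{verbatim}\nThe two degenerate ground states correspond to $(p_{1},q_{1})=(0,1)$ and $(1,0)$, consistent with the constraint $p_{1}+q_{1}=1$ derived in Sec. \\ref{fact35}. 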
A suitable initial Hamiltonian $H_{i}(0)$, whose ground state can be prepared easily, is chosen and evolved to the final Hamiltonian $H_{f}(T)$. \n\nThe Hamiltonian of the system is varied slowly, such that the system stays in the ground state of the instantaneous Hamiltonian. The instantaneous Hamiltonian can be designed as an interpolation (linear or nonlinear) between the initial Hamiltonian and the final Hamiltonian \\cite{farhi2000quantum}. For linear interpolation, we define the parameter $s=\\frac{t}{T}$, with $0 \\leq s \\leq 1$, where $t$ is the evolution time and $T= \\vert \\frac{\\max(\\frac{dH(s)}{ds})}{\\epsilon \\Delta^2\/\\hbar} \\vert$ is the total evolution time, $\\Delta$ being the minimum gap between the ground and first excited states during the evolution. The adiabatic theorem guarantees that the system will evolve to the ground state of the final Hamiltonian with probability $1-\\epsilon^2$, and the interpolated Hamiltonian is\n\\begin{equation}\n\\operatorname{H(s)} = (1-s)\\operatorname{H_i} + s\\operatorname{H_f}. \n\\end{equation}\nApproximating the Hamiltonian as piecewise constant with $M$ pieces, the time $t$ can be written as $t=\\frac{m}{M}T$, where $0 \\le m \\le M$, and the Hamiltonian for the $m^{th}$ piece is\n\\begin{equation}\nH_{m} = (1-\\frac{m}{M})H_{i} + (\\frac{m}{M})H_{f}.\n\\end{equation}\nThe unitary evolution governed by the Hamiltonian $H_{m}$ is $U_{m} = \\exp{(-\\iota H_{m}\\delta t)}$, where $\\delta t$ is the duration of the $m^{th}$ piece of the evolution. The unitary operation for the total evolution is $U=\\prod_{m=1}^{M} U_{m}$. \n\n\\section{Factorization of 35 using IBM's 5-qubit quantum processor} \\label{fact35}\nIn order to demonstrate the method, we take the example of 35. As mentioned in Sec. \\ref{theory}, there are two possible cases for the choice of the bit lengths of the bi-prime factors; we start with the case $b_n=b_p+b_q$, thus $b_{p}=3$ and $b_{q}=3$. One can start with either of the two cases; if the first yields an inconsistent solution, a consistent solution is guaranteed in the other. We start by obtaining the multiplication table (see Tab. \\ref{bitwiseMultiplication}) for the composite number $N=35$. \n\n\\begin{table}[b]\n\\begin{tabular} { | c |c| c| c| c | c | c |} \n\\hline\n \\rm{ \\textbf {j}} $\\rightarrow$&5&4&3&2&1&0\\\\\n\\rm{\\textbf{l}} $\\downarrow$&&&&&&\\\\\n\\hline\n&&&&1&$p_{1}$&1\\\\ \n\n\n&&&&1&$q_{1}$&1\\\\\n\\hline\n0&&&&1 & $p_{1}$&1\\\\\n\\hline\n1&&& $q_{1}$&$p_{1}q_{1}$&$q_{1}$&\\\\\n\\hline\n2&&$1$ & $p_{1}$ & $1$&&\\\\\n\\hline\nCarry&$c_{4,5}$ & $c_{3,4}$&$c_{2,3}$&$c_{1,2}$&&\\\\\n& & $c_{2,4}$&&&&\\\\\n\\hline\nCumulative Carry &&&&&&\\\\\n$C_{c}$&$C_{5}$&$C_{4}$&$C_{3}$&$C_{2}$&$C_{1}$&$C_{0}$\\\\\n\\hline\n$n_{j}$&1&0&0&0&1&1\\\\\n\n\\hline\n\\end{tabular}\n\\caption{Bit-wise multiplication table for 35. Columns in the table correspond to the parameter $c$ introduced to get simplified bit-wise equations, while rows correspond to $l$ values (see Eq. (\\ref{rewritten_m})). 
Bit values $n_{j}$ for $j=0,\\ldots,5$ are provided in the last row of the table.}\n\\label{bitwiseMultiplication}\n\\end{table}\n\nThe bit-wise equations obtained from the multiplication table (Table \\ref{bitwiseMultiplication}) are$\\colon$\n\\begin{align*}\n 1+C_{0}=1,\\\\\n p_{1}+q_{1}+C_{1}-1=2C_{2},\\\\\n 1+p_{1}q_{1}+1+C_{2}-0=2C_{3},\\\\\n p_{1}+q_{1}+C_{3}-0=2C_{4},\\\\\n 1+C_{4}-0=2C_{5},\\\\\n C_{5}=1.\\\\ \n\\end{align*}\nWe then optimize the above set of equations using the constraint optimization conditions given in Sec. \\ref{classicaloptimization}. The carries thus obtained are $C_{0}=0, C_{1}=0, C_{2}=0,C_{3}=1, C_{4}=1, C_{5}=1$. This leaves us with only one bit equation, i.e., $p_1+q_{1}=1$. We then construct the operators corresponding to the bit values $p_{1}$ and $q_{1}$ as discussed in Sec. \\ref{probHam}: both $p_{1}$ and $q_{1}$ are mapped to the operator $A_{1}= \\frac{\\operatorname{I}-\\operatorname{\\sigma_{1z}}}{2}$ acting on the single work qubit, i.e., $P_{1}=Q_{1}=A_{1}$. Now the problem Hamiltonian becomes \n\n\\begin{align*}\nH_{p}= (P_{1}+Q_{1}-1)^2 \\\\ \n = (\\frac{\\operatorname{I}-\\operatorname{\\sigma_{z}}}{2}+\\frac{\\operatorname{I}-\\operatorname{\\sigma_{z}}}{2}-\\operatorname{I})^2\\\\\n = {\\operatorname{\\sigma_{z}}}^2\\\\\n = {\\operatorname{I}}. \n\\end{align*}\n\nWe now use quantum adiabatic evolution to find the ground state of the final Hamiltonian $H_{p}$, starting from the ground state of an easily initializable Hamiltonian on IBM's $QX_{4}$ processor, i.e., $H_{i}=J \\sigma_{z}$ (in units of $\\hbar$), with $J \\approx 2\\pi\\times10^{6}$ rad\/sec. The ground state of the initial Hamiltonian $H_{i}$ is $\\ket{1}$, while the ground state of the final Hamiltonian $H_{f}$ is degenerate: both $\\ket{0}$ and $\\ket{1}$ have eigenvalue 1. We adiabatically evolve the Hamiltonian in 8 steps. During each step the Hamiltonian can be considered piecewise constant, and the corresponding unitary operators can be obtained using the formula $U_{m}=\\exp{(-\\iota H_{m} \\Delta t)}$, where $H_{m}$ is the Hamiltonian in the $m^{th}$ step and $\\Delta t$ is the duration of the step. \n\nWe first optimize the number of steps to ensure that the adiabaticity condition is satisfied by the given set of Hamiltonians, i.e., we check that the ground state of the initial Hamiltonian reaches the ground state of the final Hamiltonian without anticrossing, as shown in Fig. \\ref{fig:Adb}. \n\n\\begin{figure}[h]\n\\includegraphics[width=8cm,trim=5cm 3cm 7cm 3cm]{fig4}\n\\caption{\\label{fig:Adb}(Color online) Simulation shows no anticrossing between the two states of the qubit q[0] of IBM's $QX_{4}$ quantum processor during the adiabatic evolution for the chosen time $T=10 \\mu s$ in 8 steps. The Hamiltonians used are $H_{i}= J \\sigma_{z}$ and $H_{f}= J\\operatorname{I}$.}\n\\end{figure}\n\\subsection{Decomposition of Unitaries}\nWe now decompose the unitaries obtained in Sec. \\ref{fact35} for each of the 8 steps into single-qubit Clifford+T gates, as required for the actual implementation of the adiabatic evolution of the system from the ground state of the initial Hamiltonian to that of the final Hamiltonian. The decomposition of a general unitary is shown in Eq. (\\ref{nine}), and the exact values of the $\\theta$s for each step are given in Tab. \\ref{theta}. 
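Since both $H_{i}$ and $H_{f}$ are diagonal, the angles of Tab. \\ref{theta} can be cross-checked directly from the eigenvalues of each $H_{m}$; the short NumPy sketch below (illustrative only; the value $J\\delta t = 1.25\\pi$ is our assumption, chosen so that the printed angles, in units of $\\pi$, reproduce the table) does exactly this:\n\\begin{verbatim}\nimport numpy as np\n\nM = 8\nJ_dt = 1.25 * np.pi                  # J times step duration (assumption)\nfor m in range(1, M + 1):\n    s = m / M\n    e0 = (1 - s) * (+1) + s * 1.0    # eigenvalue of H_m on |0>, units of J\n    e1 = (1 - s) * (-1) + s * 1.0    # eigenvalue of H_m on |1>\n    th1 = -e0 * J_dt / np.pi         # phase acquired by |0>, units of pi\n    th2 = -e1 * J_dt / np.pi         # phase acquired by |1>, units of pi\n    print(m, th1, th2)               # th1 = -1.25 always; th2: 0.9375 ... -1.25\n\\end{verbatim}\nNote that $\\theta_{1}^{m}$ is constant because the eigenvalue of $H_{m}$ on $\\ket{0}$ equals $J$ for every $s$. 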
At this stage, we need to be specific about the quantum processor to be used, as the available gate library and the ease with which different gates can be realized differ between implementations\/architectures. Here, we are interested in implementing the proposed scheme using an IBM QX4 processor, which requires us to decompose the unitaries in terms of the available Clifford gate library of the IBM Quantum Experience (QE). To be specific, to implement our scheme on IBM's QX4, any general unitary required as part of our scheme (circuit) has to be decomposed in terms of the Clifford+T gates. In general, a single-qubit unitary obtained with the chosen initial and final Hamiltonians is of the form \n$U = \\left(\n\\begin{array}{cc}\n a & 0 \\\\\n 0 & b\n\\end{array}\\right)$.\nEach element of this unitary is a complex number; thus $ a= r_{1}\\exp(\\iota \\theta_{1}) $ and $ b= r_{2}\\exp(\\iota \\theta_{2})$. Hence, corresponding to each unitary we have two matrices, one for the $r$ values and one for the $\\theta$ values, i.e., $R = \\left( \\begin{array}{cc}r_{1} & 0 \\\\ 0 & r_{2} \\end{array} \\right)$ and $ \\Theta = \\left( \\begin{array}{cc} \\exp(\\iota \\theta_{1}) & 0 \\\\ 0 & \\exp(\\iota \\theta_{2}) \\end{array} \\right)$, such that \n\\begin{equation}\n\\operatorname{U}= \\operatorname{R} \\cdot \\operatorname{\\Theta}.\n\\end{equation}\nThe values of $\\theta$ are given in Tab. \\ref{theta}. For the case of 35, the $R$ matrices for all the unitaries are identity, so $\\operatorname{U}= \\operatorname{\\Theta}$.\nThe IBM Clifford+T gate library consists of the following single-qubit unitaries:\\\\\n(i) the Pauli gates: $\\operatorname{I}$, $\\operatorname{X}$, $\\operatorname{Y}$, and $\\operatorname{Z}$;\\\\\n(ii) general gates: $\\operatorname{U_{1}\\left(\\theta \\right)}$, $\\operatorname{U_{2}}(\\phi,\\lambda)$, and $\\operatorname{U_{3}}(\\theta,\\phi,\\lambda)$;\\\\\n(iii) phase gates: $\\operatorname{S} (\\operatorname{S^{\\dagger}})$ and $\\operatorname{T} (\\operatorname{T^{\\dagger}})$; and\\\\\n(iv) other gates: $\\operatorname{H}$ (Hadamard). \\\\\nIn what follows, we will use the phase gate and the Pauli $X$ gate to construct the unitaries to be implemented in our experiments. 
The decomposition of $\\Theta$ in the Clifford+T gate library can be obtained as\n\n\\begin{widetext}\n\\begin{eqnarray}\n\\operatorname{\\Theta} &=&\\left({\\begin{array}{cc} 1 & 0\\\\ 0 & \\exp(\\iota \\theta_{2})\\end{array}}\\right) \\left({\\begin{array}{cc} 0 & 1\\\\ 1 & 0\\end{array}}\\right)\\left({\\begin{array}{cc} 1 & 0\\\\ 0 & \\exp(\\iota \\theta_{1})\\end{array}}\\right) \\left({\\begin{array}{cc} 0 & 1\\\\ 1 & 0\\end{array}}\\right)\\\\ \\nonumber\n&=& \\operatorname{U_{1}}\\left(\\theta_{2}\\right) \\cdot \\operatorname{X} \\cdot \\operatorname{U_{1}}\\left(\\theta_{1}\\right) \\cdot \\operatorname{X}.\n\\label{nine}\n\\end{eqnarray}\n\\end{widetext}\nCombining the $R$ and $\\Theta$ matrices, the unitary $U_{m}$ for the $m^{th}$ step can be written as\n \\begin{equation}\nU_{m} = \\operatorname{U_{1}}\\left(\\theta_{2}^{m}\\right) \\cdot \\operatorname{X} \\cdot \\operatorname{U_{1}}\\left(\\theta_{1}^{m}\\right) \\cdot \\operatorname{X}, \n\\label{um}\n \\end{equation}\nwhere $\\theta_{1}^{m}$ and $\\theta_{2}^{m}$ are the angles of the $m^{th}$ unitary.\nThus, the total unitary for the quantum part of the hybrid factorization scheme is $\\operatorname{U}= \\prod_{m=8}^{m=1} U_{m}$.\nHere, it is important to mention that the gates in the unitary are applied in right-to-left order. The values of $\\theta_{1}^{m}$ and $\\theta_{2}^{m}$ for the different steps are provided in Tab. \\ref{theta}.\n\n\\begin{center}\n\\begin{table}[b]\n\\begin{tabular}{|m{.5cm}|m{.8cm}|m{.8cm}|m{1.5cm}|m{1.5cm}|}\n\\hline\nm & $r_{1}$ & $r_{2}$ & $\\theta_{1}^{m}$ & $\\theta_{2}^{m}$\\\\\n\\hline\n1 & 1 & 1 & -1.2500 & 0.9375\\\\\n\\hline\n2 & 1 & 1 & -1.2500 & 0.6250 \\\\\n\\hline\n3 & 1 & 1 &-1.2500 & 0.3125\\\\\n\\hline\n4 & 1 & 1 & -1.2500 & 0 \\\\\n \\hline\n5 & 1 & 1 & -1.2500 & -0.3125\\\\\n \\hline\n 6& 1 & 1 & -1.2500 & -0.6250\\\\\n \\hline\n 7 & 1 & 1 & -1.2500 & -0.9375 \\\\\n \\hline\n 8 & 1 & 1 & -1.2500 & -1.2500\\\\\n \\hline\n\\end{tabular}\n\\caption{The $r$ and $\\theta$ values for the unitaries in each step of the adiabatic evolution.} \\label{theta}\n\\end{table}\n\\end{center}\n\n\\begin{figure}[h]\n\\includegraphics[width=5cm,trim=9cm 3cm 10cm 1cm]{fig3.pdf}\n\\caption{(Color online) Schematic procedure of the complete scheme for hybrid factorization. The top trace shows the steps of the scheme. The second trace shows the circuit for obtaining the ground state of the final Hamiltonian $H_{f}$ starting from $H_{i}$ through adiabatic evolution. The trace below that shows the quantum circuit implementing the quantum adiabatic evolution part on IBM's QX4 processor for the factorization of 35, initializing the qubit to the ground state, i.e., $\\ket{1}$, of the initial Hamiltonian $J \\sigma_{z}$, and the lowest trace shows the gate decomposition of the $k^{th}$ unitary $U_{k}$ in IBM's gate library. }\n\\label{schematic}\n\\end{figure}
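For readers who wish to reproduce the gate sequence, the full circuit can be sketched as follows (Qiskit-style pseudocode; illustrative only, with the angles of Tab. \\ref{theta} taken in units of $\\pi$, and the modern \\texttt{p} gate playing the role of $\\operatorname{U_{1}}$):\n\\begin{verbatim}\nimport numpy as np\nfrom qiskit import QuantumCircuit\n\nthetas = [(-1.25, 0.9375), (-1.25, 0.625), (-1.25, 0.3125), (-1.25, 0.0),\n          (-1.25, -0.3125), (-1.25, -0.625), (-1.25, -0.9375), (-1.25, -1.25)]\n\nqc = QuantumCircuit(1, 1)\nqc.x(0)                        # prepare |1>, the ground state of J*sigma_z\nfor th1, th2 in thetas:\n    # U_m = U1(th2) . X . U1(th1) . X, applied right to left\n    qc.x(0)\n    qc.p(th1 * np.pi, 0)\n    qc.x(0)\n    qc.p(th2 * np.pi, 0)\nqc.measure(0, 0)\n\\end{verbatim}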
\\section{Experiments} \\label{exp}\n This experiment has been performed on an open-access 5-qubit quantum processor placed on the cloud by IBM Corporation. In particular, we have used the architecture of IBM's QX4 (IBM Q 5 Tenerife) \\cite{IBMQE}, which consists of superconducting transmon qubits \\cite{malkoc2013quantum}. A schematic diagram and a description of this architecture can be found in \\cite{sisodia2017design,sisodia2017experimental} and references therein. The basic gate library for the single-qubit gates consists of $\\operatorname{H}$, the Pauli operators $\\operatorname{X}$, $\\operatorname{Y}$, $\\operatorname{Z}$, and the parametric gates $\\operatorname{U_{1}}$, $\\operatorname{U_{2}}$, and $\\operatorname{U_{3}}$. The operator $\\operatorname{U_{1}}$ depends on a single parameter $\\theta$, the operator $\\operatorname{U_{2}}$ depends on two parameters $\\theta, \\phi$, and the operator $\\operatorname{U_{3}}$ depends on three parameters $\\theta, \\phi, \\lambda$. We chose qubit q[0] for implementing the quantum adiabatic evolution for the ground-state search. We initialize the system to the ground state of the initial Hamiltonian by applying the $\\operatorname{X}$ gate ($\\sigma_{x}$) to the qubit q[0]. To evolve this state adiabatically, we apply the set of eight unitaries in their decomposed form, as shown in the previous section. In order to extract the probabilities of the final state, we perform quantum state tomography after each step. The directly measured observables in the IBM processor are $\\ket{0}\\bra{0}$ and $\\ket{1}\\bra{1}$, which allow us to calculate $\\expv{Z}$; this is sufficient to reveal the probabilities $p_{0}$ and $p_{1}$. In order to measure the real and imaginary parts of the coherence term, we used the method described in our earlier work \\cite{shukla2020complete}, i.e., we applied an $H$ gate followed by a $\\operatorname{Z}$-measurement to reveal the real part, and $S^{\\dagger}H$ followed by a $\\operatorname{Z}$-measurement to reveal the imaginary part.
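From these three measurement settings, the single-qubit density matrix is reconstructed as $\\rho = \\frac{1}{2}(\\operatorname{I} + \\expv{X}\\operatorname{X} + \\expv{Y}\\operatorname{Y} + \\expv{Z}\\operatorname{Z})$; a minimal sketch of this standard reconstruction (with hypothetical counts, not our measured data) is:\n\\begin{verbatim}\nimport numpy as np\n\ndef density_from_counts(counts_z, counts_x, counts_y, shots=8192):\n    # counts_* are {'0': n0, '1': n1} from the three settings:\n    # direct Z; H then Z (gives <X>); S-dagger, H, then Z (gives <Y>)\n    def expv(c):\n        return (c.get('0', 0) - c.get('1', 0)) / shots\n    ez, ex, ey = expv(counts_z), expv(counts_x), expv(counts_y)\n    I = np.eye(2)\n    X = np.array([[0, 1], [1, 0]])\n    Y = np.array([[0, -1j], [1j, 0]])\n    Z = np.diag([1.0, -1.0])\n    return 0.5 * (I + ex * X + ey * Y + ez * Z)\n\n# hypothetical run that ends close to |1>:\nrho = density_from_counts({'0': 410, '1': 7782},\n                          {'0': 4096, '1': 4096},\n                          {'0': 4096, '1': 4096})\nprint(np.round(rho, 3))\n\\end{verbatim}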
\\section{Results} \\label{results}\nThe QST results are shown in Fig. \\ref{fig:results1} and Fig. \\ref{fig:results2}. The measurement operators required to measure the final state are available in the Clifford library. The ground state of the final Hamiltonian provides the solution to our problem; in our case, we obtain the probabilities $p_{0}$ and $p_{1}$ of the states $\\ket{0}$ and $\\ket{1}$, respectively, after performing the experiment 8192 times (i.e., the maximum number of runs that one can select from the interface provided by IBM QE). The tomography results reveal that after the full adiabatic evolution of the system in 8 steps, the system is in the ground state of the final Hamiltonian. Since the Hamiltonian at this point is degenerate, the solution of the problem Hamiltonian can be either of the two states $\\ket{0}$ and $\\ket{1}$. If we consider $\\ket{0}$ as the solution, then the corresponding classical bit value is $p_{1}=0$, leading to the first factor of the composite number 35 in the binary system as $1 p_{1} 1 =101$ and consequently, in the decimal system, $P=\\sum_{k} 2^{k} p_{k}=5$. Using the identity $p_{1}+q_{1}=1$, the conjugate bit value is $q_{1}=1$; the corresponding prime factor in the binary system is $1 q_{1} 1=111$, i.e., $Q=\\sum_{l} 2^{l} q_{l}= 7$ in the decimal system. The two factors can also be obtained in the same way if we consider $\\ket{1}$ as the solution of the ground-state search of the adiabatic evolution. \n\n\\begin{figure}[h]\n\\includegraphics[width=7cm,trim=5cm 9cm 5cm 8cm]{fig1.pdf}\n\\caption{\\label{fig:results1} (Color online) Density matrices are shown in rows, with the columns representing the real and imaginary parts of the density matrix. The top row indicates the initial state; subsequent rows represent the state at even instances of applying the unitaries, i.e., after 2, 4, 6, and 8 unitaries, going down. Ticks 1, 2, 3, and 4 correspond to the elements $\\ket{0}\\bra{0}$, $\\ket{0}\\bra{1}$, $\\ket{1}\\bra{0}$, and $\\ket{1}\\bra{1}$.}\n\\end{figure}\n\n\\begin{figure}[h]\n\\includegraphics[width=9cm,trim=3cm 8cm 3.5cm 8cm]{fig2.pdf}\n\\caption{\\label{fig:results2} (Color online) Density matrices are shown in rows, with the columns representing the real and imaginary parts of the density matrix. Rows represent the state at odd instances of applying the unitaries to the initial state, i.e., after 1, 3, 5, and 7 unitaries, going down. Ticks 1, 2, 3, and 4 correspond to the elements $\\ket{0}\\bra{0}$, $\\ket{0}\\bra{1}$, $\\ket{1}\\bra{0}$, and $\\ket{1}\\bra{1}$. The results reveal the system to be in the ground state of the final Hamiltonian.}\n\\end{figure}\n\n \n\n\\section{Conclusions} \\label{concl}\nAs discussed above, the factorisation of bi-prime numbers is important for hacking RSA-type cryptographic schemes and for various related problems. However, the factorisation of some bi-primes is not as difficult as that of others. Specifically, for an even bi-prime, we already know that one factor is 2, and it is trivial to find the other factor. Further, there are excellent algorithms for finding square roots, so the factorisation of square bi-primes is easy. What we are left with are odd and square-free bi-primes, which are used in cryptography. Here, we report a quantum-classical hybrid scheme for the factorization of such (i.e., odd and square-free) bi-prime numbers. The scheme proposed here is hybrid in nature, implying that it utilizes both classical optimization techniques and adiabatic quantum optimization techniques. The advantage of such hybrid schemes lies in the fact that they require fewer quantum resources (which are fragile and costly at the moment) in comparison with purely quantum schemes designed for the same purpose. For example, it is already understood that the extremely large quantum registers required in Shor's original protocol are not required in the hybrid schemes proposed later. The same is true for our scheme as well as for the schemes \\cite{pal2019hybrid,xu2012quantum} which have been extended here. Thus, in short, the proposed scheme has the capability to factorise relatively large odd and square-free bi-primes using a small amount of quantum resources. To illustrate this, we have explicitly performed the factorisation of 35 (which is an odd and square-free bi-prime) using the smallest quantum computer available on the cloud (i.e., a five-qubit quantum processor called IBM's QX4). The quantum processor used here is known to be noisy, but we have correctly obtained the prime factors of 35 with only small signatures of noise, demonstrating the strength of the algorithm. We conclude the paper with the hope that, with the availability of larger quantum processors, larger bi-primes will be factored using this algorithm and that it will prove useful in the future development of hybrid algorithm design. \n\n\\section{Acknowledgements}\nThe authors thank the Defense Research And Development Organization (DRDO), India, for the support provided through the project number ERIP\/ER\/1403163\/M\/01\/1603. Abhishek Shukla thanks the Applied Physics Department, The Hebrew University of Jerusalem, for its support.\n\\bibliographystyle{apsrev4-1}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nAutonomous driving requires vehicles to efficiently sense and understand the surrounding 3D world in real time. 
Currently, most autonomous vehicles (Google, Uber, Ford, Baidu, etc.) are equipped with high-end 3D scanning systems such as Velodyne LIDARs, which provide accurate 3D measurements that are critical to the vehicles' decision making and planning. However, these high-end 3D scanning systems are quite expensive, with a cost comparable to that of the whole vehicle, which may hinder their adoption in the global consumer market.\n\nTo effectively reduce the cost of sensing the surrounding 3D world, a natural and cost-efficient alternative would be to use passive sensors such as monocular or stereo cameras, which can provide not only 3D measurements (by using structure-from-motion (SfM) or simultaneous localization and mapping (SLAM)) but also semantic information (not available from point clouds). However, computer vision based systems are either not robust (different weather conditions can result in dramatically different vision measurements) or inaccurate (compared with LIDAR based 3D scanning). \n\nIn this paper, we propose to investigate effective fusion of sparse 3D point clouds (low cost) and high-resolution colour images, such that dense 3D point clouds comparable to those from high-end 3D scanning systems can be generated. Specifically, the objective of this paper is to ``\\textbf{generate\/predict a dense depth map and the corresponding 3D point clouds from very sparse depth measurements with\/without colour images and colour-depth image datasets}''. Under our framework, we use a low-resolution LIDAR and a high-resolution colour image to generate dense depth maps\/3D point clouds (Fig.~\\ref{fig:conceptual-illustration}). This can be understood as using the high-resolution colour image to augment the sparse LIDAR map and predict a dense depth map\/3D point cloud. Our proposed methods provide not only accurate dense depth maps but also visually pleasing 3D point clouds, which are critical for autonomous driving in urban scenes. Specifically, we present a novel way of perceiving the surrounding 3D environment, which offers low cost compared with high-end LIDAR sensors, and high precision and efficiency compared with colour-camera-only solutions. The proposed framework has great potential for developing compact, high-resolution LIDAR sensing, at very low cost, for domain-specific applications (e.g., ADAS, autonomous driving).\n\n\\begin{figure}[!htp]\n \\centering\n \\includegraphics[width=1\\columnwidth]{Sparse-Dense.png} \n \\caption[Problem definition.]{\\label{fig:sparse-dense-concept} Conceptual illustration of the problem of predicting dense 3D point clouds\/depth maps from sparse LIDAR input, where the input is a collection of sparse 3D points and the desired output is a dense 3D point cloud\/depth map.}\n \\label{fig:conceptual-illustration}\n\\end{figure}\n\nOur dense depth map prediction framework differs from general depth interpolation methods and depth super-resolution methods in the following aspects: 1) the irregular pattern of depth points; 2) the very sparse depth measurements; and 3) the requirement that missing real obstacles or generating false obstacles within a certain range is unacceptable. The irregular depth point pattern creates a barrier to applying deep convolutional neural network (CNN) based methods, as it requires irregular convolution patterns, which is still an unsolved problem in deep learning. 
On the other hand, even though many traditional methods \\cite{Li2012,ferstl2013b,YangAR2014,Barron2016,Li2016} can deal with irregular patterns, they depend on a strong correlation between colour images and depth maps, so boundaries that only exist in the colour images will mislead these methods and generate false obstacles on the road. General smooth interpolation methods such as nearest neighbor, bilinear interpolation, bicubic interpolation, and \\cite{Garcia20101167} neglect the colour images in the problem formulation and therefore tend to generate over-smoothed results and miss real obstacles.\n\nIn this paper, targeting the above difficulties of existing methods, we propose a dense depth prediction approach that only uses the boundaries in the colour image (i.e., the superpixel over-segmentation). We resort to the piecewise planar model for general scene reconstruction, where the desired depth map\/point cloud is represented as a collection of 3D planes and the reconstruction problem is formulated as the optimization of the plane parameters. The resultant optimization involves a unary term evaluated at the sparse depth measurements and a smoothness term across neighboring planes. Furthermore, as urban driving scenarios are well structured, we propose a specifically designed model for them, the ``cardboard world'' model, i.e., a front-parallel orthogonal piecewise planar model, where each segment can only be assigned to either the road plane or a front-parallel object plane orthogonal to the road plane. We formulate the problem as a continuous CRF optimization problem and solve it through a particle based method (MP-PBP) \\cite{Yamaguchi14}. Extensive experiments on the KITTI visual odometry dataset show that the proposed method is highly resistant to false object boundaries and can generate useful 3D point clouds without missing obstacles.\n\n\\section{Related Work}\nGiven an incomplete depth map, the depth completion task is to fill in the missing depth values to obtain a dense depth map. A colour image captured with a camera is often used as the guidance image. Since most range sensors can only produce sparse or semi-dense depth maps, completing the missing depth values can increase the quality of depth maps for better applicability. This task has drawn increasing attention recently due to the rising interest in autonomous driving, augmented\/virtual reality and robotics. Although this task can be partially addressed by traditional image inpainting techniques, extra knowledge such as depth smoothness, normal smoothness, and colour guidance is yet to be utilized for achieving an accurate dense depth map. \n\nDepth completion includes three sub-tasks: depth inpainting, depth super-resolution, and depth reconstruction from sparse samples. Anisotropic diffusion \\cite{Perona1990} is a popular method for image inpainting, and it has been successfully adapted to the depth inpainting task in \\cite{Miao2012}. Energy minimization based depth inpainting methods \\cite{Liu2013icip,Chongyu2015} encode characteristics of depth maps as regularization terms in the energy minimization and generate more plausible results. Exemplar-based filling \\cite{Criminisi2004} works well for image inpainting but faces extra challenges in depth inpainting due to the lack of texture in depth maps. 
Matrix completion \\cite{Lu2014} assumes that a depth map lies in a low-dimensional subspace and can be approximated by a low-rank matrix. The main goal of the depth super-resolution task is to increase the spatial resolution of a depth image to match a high-resolution colour image; in this case, low-resolution depth maps are completed. By assuming depth discontinuities are often aligned with colour changes in the reference image, this task can be solved through Markov Random Fields (MRF) \\cite{DiebelT05,Andreasson07}. Such methods can be easily adapted to the task of depth reconstruction from sparse samples, as they do not require the depth points to be regularly sampled \\cite{DolsonBPT10}. In order to fully exploit the structural correlation between colour-depth pairs in colour guided depth super-resolution, non-local means (NLM) was introduced as a high-order regularization term \\cite{Park2011}. \\cite{YangAR2014} replaces the Gaussian kernel in the standard NLM with a bilateral kernel to enhance the structural correlation in colour-depth pairs and proposes an adaptive colour-guided auto-regressive (AR) model that formulates the depth upsampling task as AR prediction error minimization, which has a closed-form solution. A few approaches employ sparse signal representations for guided upsampling, making use of the wavelet domain \\cite{Hawe2011}, learned dictionaries \\cite{Li2012}, or co-sparse analysis models \\cite{Gong2014}. \\cite{Barron2016} and \\cite{Li2016} leverage edge-aware image smoothing techniques and formulate the task as a weighted least squares problem, while \\cite{Li2016} also applies a coarse-to-fine strategy to deal with very sparse inputs. \\cite{ku2018defense} proposed an efficient depth completion algorithm, using only image processing operations, that achieves state-of-the-art performance on the KITTI depth completion benchmark \\cite{Uhrig2017THREEDV}.\n\nWith the booming development of deep learning techniques, deep networks have been introduced to geometry learning tasks such as monocular depth estimation~\\cite{Zhong2018ECCV}, stereo matching~\\cite{zhong2017self,Zhong2018ECCV_rnn,zhong2020nipsstereo,zhong2020displacementinvariant}, and optical flow~\\cite{Zhong_2019_CVPR,zhong2020nipsflow}, and have achieved state-of-the-art performance on most benchmarks. By treating a depth map as a gray-scale image, image super-resolution networks \\cite{Dong2014} can be directly applied \\cite{Song2016}. Riegler \\emph{et~al.} \\cite{Riegler2016} proposed a deeper depth super-resolution network that has faster convergence and better performance. A natural barrier to applying deep methods to other depth completion tasks is the irregular depth pattern, as standard convolution layers are designed for regular grid inputs. Sparsity invariant CNNs \\cite{Uhrig2017THREEDV} address this problem, handling sparse and irregular inputs by introducing validity masks in the convolution layers. On the other hand, \\cite{jaritz2018sparse} claims that standard convolution layers can handle sparse inputs of various densities without any additional mask input. In \\cite{semantic2016}, it is shown that leveraging semantic information can improve depth completion performance, and vice versa~\\cite{zhong20183d}. 
In the recent stereo-LiDAR fusion work~\\cite{Cheng_2019_CVPR}, the piecewise planar model is also applied as a soft constraint in the network loss functions.\n\n\\section{Approach}\nReal-world driving scenarios generally consist of roads, surrounding buildings, vehicles, pedestrians, etc., which can be well approximated with a piecewise planar model in 3D. By representing traffic scenes with a piecewise planar model, we are able to exploit the structural information in the scenes, and the number of variables can be greatly reduced (each plane needs only 3 parameters). In this way, the number of required depth measurements is also greatly reduced, which enables us to work with very sparse LIDAR measurements. Furthermore, the parametric model is able to handle outlying LIDAR measurements. \n\n\\subsection{An overview of the proposed method}\nA high-level description of our method is given as follows: assume the LIDAR sensor and the camera have been geometrically calibrated (both intrinsically and extrinsically). Given the input of a single-frame monocular image with corresponding sparse depth points, we first perform image over-segmentation to obtain a fixed number of superpixels using the SLIC algorithm \\cite{SLIC2012}. Then we interpolate the sparse depth points to generate an initial dense depth map by using the penalized least squares method \\cite{Garcia20101167}. This dense depth map is used to provide initial plane parameters for each segment. As shown in Fig.~\\ref{fig:input}, on average each superpixel region contains a single depth point measurement. Also note that some superpixels do not contain any depth measurement. After fitting the initial depth measurements inside each superpixel with a plane, we formulate a Conditional Random Field (CRF) to optimize all the plane parameters and recalculate the depth map. A flowchart of our approach is given in Fig.~\\ref{fig:processcrf}. \n\n\\begin{figure}[!htp]\n\\begin{center} \n \\subfigure{\n\\includegraphics[width=1\\columnwidth]{input3.png} }\n \\subfigure{\n \\includegraphics[width=1\\columnwidth]{input.png} }\n \\caption[Inputs of our algorithm.]{\\label{fig:input} \\textbf{Inputs of our algorithm implementation.} Top: the SLIC over-segmentation of the input image. Bottom: the input of our algorithm: the superpixel segmentation and sparse depth measurements. Note that as the depth measurements are very sparse, there are only a few depth measurements in each superpixel, and there are a considerable number of superpixels where no depth measurements are available.}\n \\end{center}\n\\end{figure}\n\n\\begin{figure}[!htp]\n\\centering\n\\begin{center}\n\\includegraphics[width=.65\\columnwidth]{process.png}\n\\caption[Pipeline of our method.]{\\label{fig:processcrf}\\textbf{Pipeline of our method.} Given a high-resolution colour image and sparse depth measurements as input, our approach predicts a dense depth map with the same resolution as the colour image.}\n\\end{center}\n\\end{figure}\n\n\\subsection{Mathematical Formulation}\nUnder our piecewise planar model for the scenes, each segment is well approximated by a plane. In this way, the dense depth prediction problem is transformed into the optimization of the plane parameters. 
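To give a concrete sense of the initialization, a plane can be fitted to the back-projected 3D points of the depth samples inside a superpixel by least squares. The sketch below (hypothetical helper functions, assuming known camera intrinsics $\\mathbf{K}$) illustrates this:\n\\begin{verbatim}\nimport numpy as np\n\ndef backproject(K, uv, depth):\n    # lift pixels (u, v) with depth Z to 3D camera coordinates: Z * K^{-1} x\n    ones = np.ones((uv.shape[0], 1))\n    rays = np.linalg.solve(K, np.hstack([uv, ones]).T).T\n    return rays * depth[:, None]\n\ndef fit_plane(points):\n    # least-squares plane n^T P + d = 0 via SVD on the homogeneous system\n    A = np.hstack([points, np.ones((points.shape[0], 1))])\n    n_d = np.linalg.svd(A)[2][-1]\n    return n_d[:3], n_d[3]          # normal n and offset d, up to scale\n\n# toy example: samples on the front-parallel plane Z = 10\nK = np.array([[700., 0., 320.], [0., 700., 240.], [0., 0., 1.]])\nuv = np.array([[100., 120.], [300., 200.], [500., 400.]])\npts = backproject(K, uv, np.array([10., 10., 10.]))\nn, d = fit_plane(pts)\nprint(n / np.linalg.norm(n), d / np.linalg.norm(n))  # ~(0,0,1), -10, up to sign\n\\end{verbatim}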
Furthermore, we assume that the boundaries in the depth maps are a subset of the boundaries in the colour images, which gives us the freedom to select the real depth boundaries from among the colour boundaries.\n\n\nSpecifically, let $\\mathcal{S}$ denote the set of superpixels; each superpixel $i\\in \\mathcal{S}$ is associated with a segment $\\mathcal{R}_i$ in the image plane and a random variable $\\mathbf{s}_i = \\mathbf{n}_i$, where $\\mathbf{n}_i \\in \\mathbb{R}^3$ describes the plane parameters in 3D. Our goal is to infer the 3D geometry of each superpixel $\\mathbf{s}_i$ given the sparse depth measurements $\\mathrm{d}_i(\\mathbf{x})$. We define the energy of the system to be the sum of a data term $\\varphi_i$ and a smoothness term $\\psi_{i,j}$,\n\\begin{equation}\nE(\\mathbf{s}) = \\sum_{i\\in \\mathcal{S}}\\varphi_i(\\mathbf{s}_i) + \\sum_{i\\sim j}\\psi_{i,j}(\\mathbf{s}_i,\\mathbf{s}_j),\n\\label{energy}\n\\end{equation}\nwhere $\\varphi$ denotes the data term, $\\psi$ denotes the smoothness term, $\\mathbf{s} = \\{\\mathbf{s}_i|i\\in \\mathcal{S}\\}$, and $i\\sim j$ denotes the set of adjacent superpixels in $\\mathcal{S}$.\n\n\\noindent\\textbf{Data Term:} The data term encourages the depth measurements to lie on the plane by penalizing the discrepancy between the measurements and the predictions. To better enforce this constraint, we choose the $\\ell_2$ norm, which amplifies the cost. Therefore, our data term can be written as:\n\\begin{equation}\n\\varphi_i(\\mathbf{s}_i) = \\theta_1 \\sum_{\\mathbf{x}\\in \\mathcal{X}}\\|\\widehat{\\mathrm{d}}(\\mathbf{s}_i,\\mathbf{x}) - \\mathrm{d}_i(\\mathbf{x})\\|_2^2,\n\\label{dataenergy}\n\\end{equation}\nwhere $\\mathcal{X}$ is the set of pixels that have depth measurements, $\\widehat{\\mathrm{d}}(\\mathbf{s}_i,\\mathbf{x}) = \\frac{-\\widetilde{\\mathrm{d}}_i}{\\mathbf{n}_i^T \\mathbf{K}^{-1}\\mathbf{x}}$ represents the estimated depth value at a pixel $\\mathbf{x}$, $\\widetilde{\\mathrm{d}}_i$ represents the distance between the plane and the origin, $\\mathbf{n}_i^T$ is the normal vector of the plane, $\\mathbf{K}$ is the camera intrinsic matrix, and $\\mathbf{x} = (u,v,1)^T$ is the homogeneous representation of the pixel. $\\mathrm{d}_i(\\mathbf{x})$ is the depth measurement at pixel $\\mathbf{x}$. \n\n\\noindent\n\\textbf{Smoothness Term:}\nThe smoothness term encourages coherence of adjacent superpixels in terms of both depth and orientation. The depth coherence is defined by the difference between the depths of neighboring superpixels along their shared boundaries, while the orientation coherence is defined by the difference between the surface normals of neighbouring superpixels. Considering the discontinuities between neighboring superpixels due to scene structure, we use the truncated $\\ell_1$ norm to allow discontinuities in both depth and orientation. 
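Before detailing the smoothness potential, we note that the data term of Eq.~(\\ref{dataenergy}) can be evaluated directly from the plane parameters; a minimal NumPy sketch (hypothetical helper names, continuing the plane parameterization above) is:\n\\begin{verbatim}\nimport numpy as np\n\ndef predicted_depth(n, d_tilde, K, uv):\n    # plane-induced depth: d_hat(x) = -d_tilde / (n^T K^{-1} x)\n    x = np.hstack([uv, np.ones((uv.shape[0], 1))]).T\n    return -d_tilde / (n @ np.linalg.solve(K, x))\n\ndef data_term(n, d_tilde, K, uv, depth, theta1=1.0):\n    # unary cost: squared discrepancy at the sparse depth measurements\n    r = predicted_depth(n, d_tilde, K, uv) - depth\n    return theta1 * np.sum(r ** 2)\n\n# sanity check: a front-parallel plane Z = 10 fits depth-10 samples exactly\nK = np.array([[700., 0., 320.], [0., 700., 240.], [0., 0., 1.]])\nuv = np.array([[100., 120.], [300., 200.]])\nprint(data_term(np.array([0., 0., 1.]), -10.0, K, uv, np.array([10., 10.])))\n\\end{verbatim}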
\\noindent\n\\textbf{Smoothness Term:}\nThe smoothness term encourages coherence between adjacent superpixels in terms of both depth and orientation. Depth coherence is measured by the difference between the depths of neighbouring superpixels along their shared boundary, while orientation coherence is measured by the difference between the surface normals of neighbouring superpixels. To account for genuine discontinuities between neighbouring superpixels due to scene structure, we use truncated $\\ell_1$ penalties that allow discontinuities in both depth and orientation.\n\nFollowing \\cite{Menze2015CVPR}, our smoothness potential $\\psi_{i,j}(\\mathbf{s}_i,\\mathbf{s}_j)$ decomposes as\n\\begin{equation}\n\\psi_{i,j}(\\mathbf{s}_i,\\mathbf{s}_j) = \\theta_2 \\psi_{i,j}^{depth}(\\mathbf{n}_i,\\mathbf{n}_j) + \\theta_3 \\psi_{i,j}^{orient}(\\mathbf{n}_i,\\mathbf{n}_j),\n\\label{smenergy}\n\\end{equation}\nwith weights $\\theta_2,\\theta_3$ and\n\\begin{equation*}\n\\psi_{i,j}^{depth}(\\mathbf{n}_i,\\mathbf{n}_j) = \\sum_{\\mathbf{p}\\in \\mathcal{B}_{i,j}}\\rho_{\\tau_1}(\\mathrm{d}(\\mathbf{n}_i,\\mathbf{p})-\\mathrm{d}(\\mathbf{n}_j,\\mathbf{p})),\n\\end{equation*}\n\\begin{equation*}\n\\psi_{i,j}^{orient}(\\mathbf{n}_i,\\mathbf{n}_j) = \\rho_{\\tau_2}(1-|\\mathbf{n}_i^T\\mathbf{n}_j|\/(|\\mathbf{n}_i||\\mathbf{n}_j|)),\n\\end{equation*}\nwhere $\\mathcal{B}_{i,j}$ is the set of shared boundary pixels between superpixels $i$ and $j$, and $\\rho$ is the truncated $\\ell_1$ penalty function $\\rho_{\\tau}(x) = \\min(|x|,\\tau)$.\n
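A minimal sketch of these two potentials is given below (the shared boundary pixels are assumed to be supplied as pre-computed rays $\\mathbf{K}^{-1}\\mathbf{x}$, and the weights and truncation thresholds are hypothetical defaults):\n\\begin{verbatim}\nimport numpy as np\n\ndef rho(x, tau):\n    # Truncated l1 penalty; allows genuine scene discontinuities.\n    return np.minimum(np.abs(x), tau)\n\ndef smoothness_term(n_i, dt_i, n_j, dt_j, boundary_rays,\n                    theta2=1.0, theta3=1.0, tau1=1.0, tau2=0.1):\n    # boundary_rays: (|B_ij|, 3) rays K^{-1} x of the shared\n    # boundary pixels between superpixels i and j.\n    depth_i = -dt_i \/ (boundary_rays @ n_i)\n    depth_j = -dt_j \/ (boundary_rays @ n_j)\n    psi_depth = np.sum(rho(depth_i - depth_j, tau1))\n    cos = np.abs(n_i @ n_j) \/ (np.linalg.norm(n_i) * np.linalg.norm(n_j))\n    psi_orient = rho(1.0 - cos, tau2)\n    return theta2 * psi_depth + theta3 * psi_orient\n\\end{verbatim}\n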
\\subsection{CRF Optimization}\nThe optimization of the continuous CRF defined in Eq.~\\eqref{energy} is in general NP-hard. To solve it efficiently, we discretize the continuous variables and leverage particle convex belief propagation (PCBP) \\cite{PCBP2011}, an algorithm that is guaranteed to converge and gradually approach the optimum. It works as follows: after initialization, particles are sampled around the current state of each random variable. These particles then act as labels in a discretized MRF\/CRF, which can be solved by standard methods such as multi-label graph cuts or sequential tree-reweighted message passing (TRW-S) \\cite{Kolmogorov2006TRWS}, and the resulting MAP estimate becomes the current solution. The process is repeated for a fixed number of iterations or until convergence.\n\n\\begin{algorithm}[t!]\n\\caption{Solving Eq.~\\eqref{energy} via PCBP}\n\\label{Algorithm 1}\n\\begin{algorithmic}\n\\REQUIRE ~~\\\\\nSuperpixels $\\mathbf{S}$ and sparse depth measurements $\\mathbf{d}$, number of particles $n_p$, number of iterations $n_i$, parameters $\\theta_1$, $\\theta_2$, $\\theta_3$, $\\tau_1$, $\\tau_2$, $\\rho$, $\\sigma$. \n \\\\ \\vspace{0.2cm}\n\\hspace{-0.3cm}{\\bf Initialize:} Initial plane parameters $\\mathbf{s}_i$ for each superpixel segment. \\\\ \\vspace{0.2cm}\n\\WHILE {iteration $< n_i$}\n\\STATE 1). Sample particles: the first particle for each superpixel is the result of the previous iteration, the next $n_p\/2$ particles are randomly sampled around the state from the previous iteration, and the remaining $n_p\/2-1$ particles are taken from the neighbouring superpixels' current states;\n\n\\STATE 2). Evaluate the data term \\eqref{dataenergy} and the smoothness term \\eqref{smenergy};\n\n\\STATE 3). Solve the resulting discrete problem with TRW-S and update the plane parameters of each superpixel.\n\n\\ENDWHILE\n\\ENSURE ~Plane parameters $\\mathbf{S}$, recovered depth map $\\mathbf{D}$.\n\\end{algorithmic}\n\\end{algorithm}\n\n\\textbf{Implementation Details:} A 3D plane is defined by\n\\begin{equation}\naX_i+bY_i+cZ_i+d = 0,\n\\label{eq:abcdplane}\n\\end{equation}\nwhere $(a,b,c,d)$ are the plane parameters and $(X_i,Y_i,Z_i)$ are the coordinates of a 3D point, computed by\n\\begin{equation}\nX_i = (u_i - C_x)\\times Z_i\/f,\n\\end{equation}\n\\begin{equation}\nY_i = (v_i - C_y)\\times Z_i\/f,\n\\end{equation}\nwhere $Z_i = d_i$. Here $(u_i,v_i)$ are the point coordinates in the image plane, $(C_x,C_y)$ is the camera principal point and $f$ is the camera focal length.\n\nThus, the $i^{th}$ particle is defined as a $4\\times 1$ vector $(a_i,b_i,c_i,d_i)^T$, whose components are independently drawn from a normal distribution with standard deviation $\\sigma$ and mean $\\mu$, where $\\sigma$ is set by the user and $\\mu$ is the state from the previous iteration.\n\nOur approach to solving Eq.~\\eqref{energy} is outlined in Algorithm 1, where the parameters $\\theta_1$, $\\theta_2$, $\\theta_3$, $\\tau_1$, $\\tau_2$ have been defined in the previous equations, $\\rho$ is the decay rate for generating particles and $\\sigma$ is a $4\\times 1$ vector containing the standard deviations of the four particle parameters.\n\n\\textbf{Particle Generation:} Instead of using the regular PCBP particle generation scheme, which generates particles only via the MCMC framework, we partially adopt the PMBP \\cite{PMBP2014} scheme by adding neighbouring plane parameters to the particle set; the candidate particles are thus a mixture of MCMC samples around the previous states and the states of neighbouring planes. The advantage of this modification is that it allows neighbouring superpixels to fuse together and thereby decrease the energy. For example, the road area is often segmented into several segments. With the regular PCBP sampling strategy, each segment's parameters are generated independently from a normal distribution; even though the smoothness term encourages the planes to be close to each other, small gaps will remain between them because the algorithm cannot find exactly the same parameters among its candidate particles. In our modified version, neighbouring superpixels can share their parameters, which resolves this issue.\n\nIn each PCBP iteration, each superpixel has a fixed number of particles, and we need to find the label combination with the lowest energy. This problem can be solved efficiently with the tree-reweighted max-product message passing (TRW-S) algorithm. The processing time depends on the number of superpixels and the number of particles.\n
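The modified particle generation step can be sketched as follows (a minimal illustration of the mixture of MCMC perturbations and neighbour states described above; the function name and the use of NumPy's random generator are assumptions):\n\\begin{verbatim}\nimport numpy as np\n\ndef sample_particles(prev, neighbor_states, n_p, sigma, rng):\n    # prev: this superpixel's (a, b, c, d) state from the last\n    # iteration; neighbor_states: adjacent superpixels' states.\n    particles = [prev]\n    # n_p\/2 particles: Gaussian (MCMC-style) perturbations of prev.\n    for _ in range(n_p \/\/ 2):\n        particles.append(rng.normal(prev, sigma))\n    # n_p\/2 - 1 particles: borrowed from neighbouring planes, which\n    # lets adjacent superpixels fuse onto exactly the same plane.\n    for k in range(n_p \/\/ 2 - 1):\n        particles.append(neighbor_states[k % len(neighbor_states)])\n    return np.asarray(particles)  # (n_p, 4) candidate labels\n\\end{verbatim}\nIn an actual run one would pass, e.g., \\texttt{rng = np.random.default\\_rng()} and the user-set $\\sigma$ vector.\n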
\\subsection{Cardboard World Model: A More Constrained Model}\nIn the previous section we described our piece-wise planar model for the dense depth prediction problem, where each plane has three degrees of freedom in 3D space.\n\nFor real-world driving applications, man-made road scenes often carry stronger structural regularity (i.e., a stronger prior). In particular, modelling a front-view road scene as a combination of a ground plane and many front-parallel obstacle planes is convenient for the task of drivable free-space detection. Below we show how to infer such a simplified 3D road scene model using the same CRF framework.\n\nBased on these observations, we propose our ``cardboard world'' model for representing driving scenes. Under the ``cardboard world'' model there are only two kinds of planes: the road plane and object planes. We assume there is exactly one road plane in the scene and that all other planes are object planes. The two kinds of planes are orthogonal to each other, and object planes are front parallel. Fig.~\\ref{fig:cw} illustrates an example of our ``cardboard world'' model. \n\n\\begin{figure}[!htp]\n\\begin{center} \n\\includegraphics[width=1\\columnwidth]{cw001.png} \n \\caption[``Cardboard world'' model.]{\\label{fig:cw} \\textbf{An example of our ``cardboard world'' model:} object planes are front-parallel planes that are orthogonal to the road plane.}\n \\end{center}\n\\end{figure}\n\nThere are three advantages in replacing the slanted-plane model of the piecewise planar method with our ``cardboard world'' model:\n\\begin{compactenum}[1)]\n\\item The recovered 3D point clouds are more visually pleasant and the location of each object is more accurate. In some cases, when a single superpixel contains two depth points with very different depth values, the slanted-plane model fits them with a highly slanted plane, largely distorting the shape of that area. The ``cardboard world'' model instead forces the plane to be front parallel and therefore maintains the object shape.\n\\item As a byproduct, this method provides free-space detection for autonomous driving vehicles. We embed a binary labelling problem in our task that classifies superpixels into two clusters, road and object; superpixels labelled as road constitute the free space.\n\\item The processing time is greatly reduced. The front-parallel and orthogonality constraints greatly reduce the number of parameters to optimize, thereby decreasing the processing time.\n\\end{compactenum}\n\nA diagram of our cardboard model approach is shown in Fig.~\\ref{fig:process}.\n\\begin{figure}[!htp]\n\\begin{center}\n\\includegraphics[width=.65\\columnwidth]{process2.png}\n\\caption[Process flow of our method.]{\\label{fig:process}\\textbf{Process flow of our approach.} Given a high-resolution colour image and sparse depth measurements as input, our approach predicts a dense depth map with the same resolution as the colour image.}\n\\end{center}\n\\end{figure}\n\n\\textbf{Initialization:} Unlike the piecewise planar method, this method requires an initial estimate of the road plane. We fit the road plane directly to the sparse depth points using RANSAC. After obtaining the initial dense depth map using \\cite{Garcia20101167}, for each superpixel segment we project all depth points into 3D and compute the summed Euclidean distance between these points and the road plane. If this distance is smaller than a given threshold $\\epsilon$, we label the superpixel as road and fit it with the initial road plane parameters; otherwise, the superpixel is labelled as object and fitted with a front-parallel plane whose depth is the mean depth of the superpixel.\n\nThe optimization problem remains the same as before. However, since we must enforce the orthogonality and front-parallel constraints, given the road plane parameters $(a_r,b_r,c_r,d_r)^T$ the normal of every object plane is fixed and equal to $(0, -\\frac{c_r}{b_r},1)^T$. Hence each object plane has only one degree of freedom, namely its unknown depth. Note that a binary labelling problem is naturally embedded in the optimization step, since superpixels are divided into two groups: the road plane group and the object plane group. This labelling problem is solved jointly within the PCBP process by adding the road plane parameters to the particle set of every superpixel, so that every superpixel has the option to join the road plane when the fitting cost is low enough. 
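These constraints can be made concrete with the small sketch below (the road plane is assumed to have been fitted beforehand, e.g. with RANSAC; the function names and threshold handling are illustrative):\n\\begin{verbatim}\nimport numpy as np\n\ndef object_plane_normal(road):\n    # Road plane (a_r, b_r, c_r, d_r): every front-parallel object\n    # plane shares the fixed normal (0, -c_r\/b_r, 1), which is\n    # orthogonal to the road normal (a_r, b_r, c_r).\n    a_r, b_r, c_r, d_r = road\n    return np.array([0.0, -c_r \/ b_r, 1.0])\n\ndef init_labels(points_per_sp, road, eps):\n    # points_per_sp: superpixel id -> (m, 3) back-projected points.\n    norm = np.linalg.norm(road[:3])\n    labels = {}\n    for i, P in points_per_sp.items():\n        # Summed point-to-plane Euclidean distance.\n        dist = np.sum(np.abs(P @ road[:3] + road[3])) \/ norm\n        labels[i] = 'road' if dist < eps else 'object'\n    return labels\n\\end{verbatim}\n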
Our approach to solving Eq.~\\eqref{energy} under the ``cardboard world'' model is outlined in Algorithm 2.\n\n\\begin{algorithm}[t]\n\\caption{Solving Eq.~\\eqref{energy} under the ``cardboard world'' model via PCBP}\n\\label{Algorithm 2}\n\\begin{algorithmic}\n\\REQUIRE ~~\\\\\nSuperpixels $\\mathbf{S}$ and sparse depth measurements $\\mathbf{d}$, number of particles $n_p$, number of iterations $n_i$, parameters $\\theta_1$, $\\theta_2$, $\\theta_3$, $\\tau_1$, $\\tau_2$, $\\rho$, $\\sigma$, $\\epsilon$. \n\\\\ \n\\hspace{-0.3cm}{\\bf Initialize:} Estimate the road plane parameters $\\mathbf{n}_d$ and the initial dense depth map $\\mathrm{D}_0$ \n\\FORALL{$\\mathbf{S_i} \\in \\mathbf{S}$}\n \\STATE Project all depth points into 3D;\n \\STATE Calculate the sum of Euclidean distances between the 3D points and the road plane $\\mathbf{n}_d$;\n \\IF{the summed distance is less than a given threshold $\\epsilon$}\n \\STATE Fit $\\mathbf{S}_i$ with the road plane $\\mathbf{n}_d$\n \\ELSE\n \\STATE Fit $\\mathbf{S}_i$ with a front-parallel plane orthogonal to $\\mathbf{n}_d$\n \\ENDIF\n\\ENDFOR\n\\WHILE {iteration $< n_i$}\n\\STATE 1). Sample particles: the first particle for each superpixel is the state from the previous iteration, the next $n_p\/2-1$ particles are randomly generated around it, another $n_p\/2-1$ particles are taken from neighbouring superpixels' parameters, and the last particle is set to $\\mathbf{n}_d$;\n\\STATE 2). Evaluate the data term \\eqref{dataenergy} and the smoothness term \\eqref{smenergy};\n\\STATE 3). Solve with TRW-S and update the current MAP estimate ${\\bf n}$, which becomes the first particle of the next iteration;\\\\\n\\ENDWHILE\n\\ENSURE ~Plane parameters ${\\bf n}$, recovered depth map ${\\bf D}$.\n\\end{algorithmic}\n\\end{algorithm}\n\n\\section{Experiments}\n\\subsection{Experimental setup}\nWe perform both quantitative and qualitative evaluations of our methods on the KITTI VO dataset. The KITTI VO dataset consists of 22 sequences (43,596 frames) covering various driving scenarios such as highway, city and country roads. It provides stereo images and semi-dense $360^\\circ$ LIDAR points. In our experimental setting, we only use the left images and the sparse LIDAR points. Note that we recover depth for every 10th frame; thus 4,359 frames are recovered in total. \n\nWe evaluate on two subsets of the KITTI dataset, KITTI stereo and KITTI VO, both of which consist of challenging and varied road scene imagery collected from a test vehicle. Ground truth depth maps are obtained from 64-line LIDAR data. The main difference between the two datasets is that the ground truth depth maps of the stereo dataset were manually corrected and interpolated based on neighbouring frames. Note that we only use the lower part of the images ($200\\times 1226$), as the upper part generally contains a large area of sky for which no depth measurements are available. The input to our experiments is synthesized low-resolution LIDAR points, downsampled by a factor of 6 in the horizontal direction and 3 in the vertical direction.\n\n\\noindent\n\\textbf{Piece-wise planar method setting:} As this method operates at the superpixel level, we use the SLIC \\cite{SLIC2012} segmentation method, with the number of super-pixels manually set to 800. The number of particles is set to 10. The total number of PCBP iterations is set to 40 since, as shown in Fig.~\\ref{fig:energyDist}, both the unary and the total energy become flat beyond this point.\n
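For reference, the synthesized sparse input can be approximated on the image grid as in the following sketch (a simplified assumption; the actual subsampling of LIDAR returns may differ):\n\\begin{verbatim}\nimport numpy as np\n\ndef downsample_lidar(depth_map, fu=6, fv=3):\n    # Keep every 6th valid return horizontally and every 3rd\n    # vertically to synthesize the low-resolution input.\n    sparse = np.zeros_like(depth_map)\n    sparse[::fv, ::fu] = depth_map[::fv, ::fu]\n    return sparse\n\\end{verbatim}\n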
\\begin{figure}[t]\n\\begin{center} \n\n \\subfigure[Unary energy versus the number of iterations.]{\n\\includegraphics[width=0.45\\columnwidth]{unaryEnergyDistribution.png} }\n\n \\subfigure[Total energy versus the number of iterations.]{\n \\includegraphics[width=0.45\\columnwidth]{EnergyDistribution.png} }\n \\caption[Energy distribution.]{\\label{fig:energyDist} Energy as a function of the number of iterations in the CRF-based method.}\n \\end{center}\n\\end{figure}\n\n\n\\noindent\n\\textbf{``Cardboard world'' method setting:} We use the SLIC \\cite{SLIC2012} segmentation method, with the number of super-pixels manually set to 1200. In the TRW-S inference step the number of particles is set to 10, divided into three parts: 1 particle represents the road plane, 4 particles are sampled from the superpixel's neighbours and 5 particles are newly generated by the MCMC process. The number of PCBP iterations is set to 20, as only one parameter per object plane needs to be optimized in this case.\n\n\\subsection{Error Metrics}\nTo evaluate the performance of depth interpolation, we use the following three quantitative metrics:\n\\begin{compactenum}[1)]\n\\item \\textbf{Mean relative error} (MRE), defined as\n\\begin{equation}\n\\mathrm{e}_{MRE} = \\frac{1}{N}\\sum_{i=1}^N\\frac{|\\mathrm{d}_i-\\widehat{\\mathrm{d}_i}|}{\\mathrm{d}_i},\n\\end{equation}\nwhere $\\mathrm{d}_i$ and $\\widehat{\\mathrm{d}_i}$ are the ground truth depth and the inferred depth, respectively. A lower MRE indicates better dense depth prediction performance.\n\n\\item \\textbf{Bad pixel ratio} (BPR), which measures the percentage of erroneous positions, where a depth prediction is considered erroneous if its absolute error exceeds a given threshold $\\mathrm{d}_{th}$. In our experiments we set the bad pixel threshold to $\\mathrm{d}_{th} = 3$ meters on the VO dataset. A lower bad pixel ratio indicates better depth prediction results. \n\n\\item \\textbf{Mean absolute error} (MAE), defined as\n\\begin{equation}\n\\mathrm{e}_{MAE} = \\frac{1}{N}\\sum_{i=1}^N|\\mathrm{d}_i-\\widehat{\\mathrm{d}_i}|,\n\\end{equation}\nwhere $\\mathrm{d}_i$ and $\\widehat{\\mathrm{d}_i}$ are the ground truth depth and the predicted depth, respectively. A lower mean absolute error indicates better dense depth prediction performance; it also gives the average depth estimation error in meters.\n\\end{compactenum}\nThe bad pixel ratio, mean relative error and mean absolute error measure different statistics of the dense depth prediction results and jointly evaluate the prediction performance.\n
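These three metrics can be computed jointly as in the following sketch (ground truth is assumed to be valid wherever it is positive):\n\\begin{verbatim}\nimport numpy as np\n\ndef evaluate(d_gt, d_pred, d_th=3.0):\n    # Evaluate only where ground-truth LIDAR depth is available.\n    m = d_gt > 0\n    err = np.abs(d_gt[m] - d_pred[m])\n    mre = np.mean(err \/ d_gt[m])  # mean relative error\n    bpr = np.mean(err > d_th)     # bad pixel ratio (threshold in m)\n    mae = np.mean(err)            # mean absolute error (meters)\n    return mre, bpr, mae\n\\end{verbatim}\n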
\\subsection{Experiment Results}\nOur quantitative results are shown in Table~\\ref{kittivo}; the piece-wise planar method outperforms all the other methods. However, since our goal is to generate both useful and visually pleasant 3D point clouds, we also provide qualitative results, shown in Fig.~\\ref{fig:p3}. For comparison, we evaluate against several state-of-the-art depth super-resolution methods: the bilateral solver \\cite{Barron2016} as well as our colour-guided PCA-based depth interpolation method. The initial depth map used for the bilateral solver \\cite{Barron2016} was generated by a general smooth interpolation method \\cite{Garcia20101167}.\n\n\\begin{table}[!htb]\n\\centering\n\\caption[Evaluation on the KITTI VO dataset.]{Evaluation on the KITTI VO dataset.}\n\\label{kittivo}\n\\tabcolsep=0.38cm\n\\begin{tabularx}{\\columnwidth}{c|c|c|c|c}\n\\hline\n & Bilateral\\cite{Barron2016} & colour PCA & Piece-wise & ``Cardboard''\\\\ \\hline\nMRE(\\%) & 7.36 & 5.73 & \\textbf{4.87} & 7.85 \\\\ \\hline\nBPR(\\%) & 9.46 & 7.51 & \\textbf{5.82} & 8.57 \\\\ \\hline\nMAE(m) & 1.20 & 1.04 & \\textbf{0.80} & 1.29 \\\\ \\hline\n\\end{tabularx}\n\\end{table}\n\nAs we can observe, the bilateral solver \\cite{Barron2016} produces over-smoothed results, and large distortions can be observed in areas of colour variation. For example, in Fig.~\\ref{fig:p3}, when there are shadows on the road it tends to assign the same depth to same-coloured areas where information is lacking, creating stripe artifacts in the 3D point clouds. Our PCA-based colour-guided method, by contrast, shows some resistance to the false boundaries introduced by the colour images, thanks to its high-order smoothness term and the PCA bases acting as a global constraint; it can also recover the shape of cars. However, as it is a pixel-wise algorithm, it is hard to estimate the correct depth at every pixel, so holes appear on the road plane. This can be harmful for an autonomous driving system, as it creates false road pits and\/or false obstacles.\n\nThe ``cardboard world'' algorithm, on the other hand, does not suffer from these drawbacks. As can be seen from the results, the road plane is never fooled by shadows or road markings. For a better illustration of our algorithm, Fig.~\\ref{fig:pcVO1} shows the input and output of our algorithm along with the road plane segmentation. From top to bottom, each figure consists of the reference colour image, the input of our algorithm (LIDAR measurements and super-pixel segmentation), our recovered dense depth map and our road plane segmentation result. 
\\emph{Note that the colour image is only used to generate the super-pixel segmentation; only the sparse LIDAR measurements are used in the optimization.}\n\n\\begin{figure}[t]\n\\begin{center} \n \\subfigure{\n\\includegraphics[width=0.3\\columnwidth]{00_0_00_L00.png} } \\hspace{-.3cm}\n\\subfigure{\n\\includegraphics[width=0.3\\columnwidth]{00_0_00_L01.png} } \\hspace{-.3cm}\n\\subfigure{\n\\includegraphics[width=0.3\\columnwidth]{00_0_00_L02.png} }\\vspace{-.3cm}\n\\subfigure{\n\\includegraphics[width=0.3\\columnwidth]{00_300_00_L00.png} } \\hspace{-.3cm}\n\\subfigure{\n\\includegraphics[width=0.3\\columnwidth]{00_300_00_L01.png} } \\hspace{-.3cm}\n\\subfigure{\n\\includegraphics[width=0.3\\columnwidth]{00_300_00_L02.png} }\\vspace{-.3cm}\n\\subfigure{\n\\includegraphics[width=0.3\\columnwidth]{00_620_00_L00.png} } \\hspace{-.3cm}\n\\subfigure{\n\\includegraphics[width=0.3\\columnwidth]{00_620_00_L01.png} } \\hspace{-.3cm}\n\\subfigure{\n\\includegraphics[width=0.3\\columnwidth]{00_620_00_L02.png} }\\vspace{-.3cm}\n\\subfigure{\n\\includegraphics[width=0.3\\columnwidth]{02_190_00_L00.png} } \\hspace{-.3cm}\n\\subfigure{\n\\includegraphics[width=0.3\\columnwidth]{02_190_00_L01.png} } \\hspace{-.3cm}\n\\subfigure{\n\\includegraphics[width=0.3\\columnwidth]{02_190_00_L02.png} }\n\\caption[Comparison in coloured 3D point clouds.]{\\label{fig:p3} \\textbf{Comparison of coloured 3D point clouds.} Left column: results from \\cite{Barron2016}; middle column: results from the PCA-based colour-guided method; right column: results from the ``cardboard world'' method.}\n \\end{center}\n\\end{figure}\n\n\\begin{figure}[!htp]\n\\begin{center} \n \\subfigure{\n\\includegraphics[width=0.48\\columnwidth]{Results1_00_0.png} } \\hspace{-.3cm}\n\\subfigure{\n\\includegraphics[width=0.48\\columnwidth]{Results1_00_250.png} }\n\\caption[Results of ``Cardboard world'' method.]{\\label{fig:pcVO1}\\textbf{Example results of the ``cardboard world'' method:} top to bottom: reference colour frames, inputs of our method, recovered depth maps and segmented free space (coloured green).}\n \\end{center}\n \\vspace{-4mm}\n\\end{figure}\n\n\n\nAs the figures show, all of the dominant road plane space (labelled in dark green) has been accurately extracted. In typical city road scenarios (e.g., Fig.~\\ref{fig:pcVO1}), our method successfully extracts cars parked at the roadside, as well as a motorbike over 22 meters away with only 3 LIDAR points on it.\n\nTo better illustrate the advantage of our ``cardboard world'' model over the standard piece-wise planar model, we also provide a qualitative comparison between the two. As Fig.~\\ref{fig:cp1} shows, both methods successfully recover the road plane and are resistant to shadows. The quantitative results show that the CRF-based method achieves better performance. In terms of the quality of the 3D point clouds, however, the CRF-based method largely distorts the shapes of cars, as in (a) and (c), while the ``cardboard world'' method produces less distorted, much cleaner and visually better results. 
\n\n\\begin{figure}[!htp]\n\\begin{center} \n\\subfigure[\\scriptsize MRE:4.9 BPR:6.1 MAE:0.846 ]{\n\\includegraphics[width=0.48\\columnwidth]{pc00_L00.png} } \\hspace{-.3cm}\n \\subfigure[\\scriptsize MRE:6.5 BPR:8.7 MAE:1.041]{\n\\includegraphics[width=0.48\\columnwidth]{pc00_L01.png} }\\vspace{-.3cm}\n\\subfigure[\\scriptsize MRE:5.2 BPR:4.2 MAE:0.583]{\n\\includegraphics[width=0.48\\columnwidth]{pc03_L00.png} } \\hspace{-.3cm}\n \\subfigure[\\scriptsize MRE:9.5 BPR:6.2 MAE:1.060]{\n\\includegraphics[width=0.48\\columnwidth]{pc03_L01.png} }\n \\caption[CRF based method \\vs ``Cardboard world'' method.]{\\label{fig:cp1} Comparison between the CRF-based method (left) and the ``cardboard world'' method (right).}\n \\end{center}\n \\vspace{-4mm}\n\\end{figure}\n\n\\subsection{Processing time}\nUnlike pixel-level colour-guided methods, whose processing time largely depends on the input image resolution, the processing time of our method depends on the number of superpixels, the number of particles and the number of parameters to optimize: the higher these numbers, the longer the processing time. For the piece-wise planar solution, with 800 superpixels and 10 particles per superpixel, the running time is about 10 seconds on average. In the more constrained case, our ``cardboard world'' method needs only about 1 second on average, despite using more superpixels.\n\n\\section{Conclusion}\nThis paper tackles the challenging task of predicting dense depth maps from very sparse measurements. We have proposed two methods that exploit various local and global constraints of the problem. The first method is based on a piecewise planar model of the scene, where dense depth prediction is reformulated as the optimization of plane parameters. The second method enforces stronger regularization on the scene model to exploit the structural information of outdoor traffic scenes, which makes it more suitable for autonomous driving tasks. Unlike existing depth super-resolution methods, which are easily misled by markings or shadows on the road, our methods are inherently resistant to such false guidance. Experimental results on the KITTI VO dataset show that our methods can efficiently recover dense depth maps from fewer than $1000$ LIDAR points without losing information that is important for autonomous driving, i.e., obstacles on the road, and without creating false obstacles that could mislead self-driving vehicles. In future work, we plan to exploit temporal information to constrain the dense depth maps.\n\n\\bibliographystyle{unsrt}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}