\\section{Introduction}\n\\subsection{Statement of the problem}\n Our objective is to study a fluid structure interaction problem in a 2d channel. The fluid flow here is modeled by the compressible Navier-Stokes equations. Concerning the structure we will consider an Euler-Bernoulli damped beam located on a portion of the boundary. As remarked in \\cite{gavalos1}, such dynamical models arise in the study of many engineering systems (e.g., aircraft, bridges etc). In the present article we establish a result on the local in time existence of strong solutions of such a fluid structure interaction problem. To the best of our knowledge, this is the first article dealing with the existence of local in time strong solutions for the complete non-linear model considered here.\\\\\n We consider data and solutions which are periodic in the \\textquoteleft channel direction\\textquoteright\\, (with period $L,$ where $L>0$ is a constant). Here $L$-periodicity of a function $f$ (defined on $\\mathbb{R}$) means that $f(x+L)=f(x)$ for all $x\\in \\mathbb{R}.$\\\\ \nWe now define a few notations. Let $\\Omega$ be the domain $\\mathbb{T}_{L}\\times (0,1)\\subset {\\mathbb{R}}^{2},$ where $\\mathbb{T}_{L}$ is the one dimensional torus identified with $(0,L)$ with periodic conditions. The boundary of $\\Omega$ is denoted by $\\Gamma$. 
We set \n \\begin{equation}\\nonumber\n \\begin{array}{l}\n \\Gamma_{s}=\\mathbb{T}_{L}\\times \\{1\\},\n \\quad\n \\Gamma_{\\ell}=\\mathbb{T}_{L}\\times\\{0\\},\\quad\\Gamma=\\Gamma_{s}\\cup\\Gamma_{\\ell}.\n \\end{array}\n \\end{equation} \nNow for a given function $$\\eta : \\Gamma_{s}\\times (0,\\infty)\\rightarrow (-1,\\infty),$$ which will correspond to the displacement of the one dimensional beam, let us denote by $\\Omega_{t}$ and $\\Gamma_{s,t}$ the following sets\n\\begin{equation}\\nonumber\n\\begin{array}{ll}\n\\Omega_{t}=\\{(x,y) \\;\\ifnum\\currentgrouptype=16 \\middle\\fi|\\; x\\in (0,L),\\quad 0<y<1+\\eta(x,t)\\},\\\\[1.mm]\n\\Gamma_{s,t}=\\{(x,y) \\;\\ifnum\\currentgrouptype=16 \\middle\\fi|\\; x\\in (0,L),\\quad y=1+\\eta(x,t)\\}.\n\\end{array}\n\\end{equation}\nFor $T>0$ we further introduce the space-time domains\n\\begin{equation}\\nonumber\n\\begin{array}{l}\nQ_{T}=\\Omega\\times(0,T),\\quad \\Sigma^{s}_{T}=\\Gamma_{s}\\times(0,T),\\quad \\Sigma^{\\ell}_{T}=\\Gamma_{\\ell}\\times(0,T),\\quad \\Sigma_{T}=\\Gamma\\times(0,T),\\\\[1.mm]\n\\widetilde{Q_{T}}=\\bigcup\\limits_{t\\in(0,T)}\\Omega_{t}\\times\\{t\\},\\quad \\widetilde{\\Sigma^{s}_{T}}=\\bigcup\\limits_{t\\in(0,T)}\\Gamma_{s,t}\\times\\{t\\}.\n\\end{array}\n\\end{equation}\n \\begin{figure}[h!]\n \t\\centering\n \t\\caption{Domain $\\Omega_{t}$.}\n \\end{figure} \\\\ \n We consider a fluid with density $\\rho$ and velocity ${\\bf u}.$ The fluid structure interaction system coupling the compressible Navier-Stokes and the Euler-Bernoulli damped beam equation is modeled by\n \\begin{equation}\\label{1.1}\n \\left\\{\n \\begin{array}{ll}\n \\rho_{t}+\\mbox{div}(\\rho {\\bf u})=0\\quad &\\mbox{in} \\quad \\widetilde{Q_{T}},\n \\vspace{1.mm}\\\\\n (\\rho {\\bf u}_{t}+\\rho({\\bf u}.\\nabla){\\bf u})-(2\\mu \\mbox{div} (D({\\bf u}))+\\mu{'}\\nabla\\mbox{div}{\\bf u}) +\\nabla p(\\rho) =0\\quad &\\mbox{in} \\quad \\widetilde{Q_T},\n \\vspace{1.mm}\\\\\n {\\bf u}(\\cdot,t)=(0,\\eta_{t})\\quad & \\mbox{on}\\quad \\widetilde{\\Sigma^{s}_{T}},\n \\vspace{1.mm}\\\\\n {\\bf u}(\\cdot,t)=(0,0)\\quad &\\mbox{on}\\quad \\Sigma^{\\ell}_{T},\n \\vspace{1.mm}\\\\\n {\\bf u}(\\cdot,0)={\\bf u}_{0}\\quad& \\mbox{in} \\quad \\Omega_{\\eta(0)}=\\Omega,\n \\vspace{1.mm}\\\\\n \\rho(\\cdot,0)=\\rho_{0}\\quad &\\mbox{in}\\quad \\Omega_{\\eta(0)}=\\Omega,\n \\vspace{1.mm}\\\\\n \\eta_{tt}-\\beta \\eta_{xx}- \\delta\\eta_{txx}+\\alpha\\eta_{xxxx}=(T_{f})_{2} \\quad& \\mbox{on}\\quad \\Sigma^{s}_{T},\n \\vspace{1.mm}\\\\\n \\eta(\\cdot,0)=0\\quad \\mbox{and}\\quad \\eta_{t}(\\cdot,0)=\\eta_{1}\\quad &\\mbox{in}\\quad \\Gamma_{s}.\n 
\\end{array} \\right.\n \\end{equation} \n Here $D({\\bf u})=\\frac{1}{2}(\\nabla {\\bf u}+(\\nabla {\\bf u})^{T})$ denotes the symmetric part of the velocity gradient. The initial condition for the density is assumed to be positive and bounded. We fix the positive constants $m$ and $M$ such that\n \\begin{equation}\\label{cor0}\n \\begin{array}{l}\n 0<m\\leqslant \\rho_{0}\\leqslant M\\quad\\mbox{in}\\quad\\Omega.\n \\end{array}\n \\end{equation}\n The viscosity coefficients $\\mu$ and $\\mu'$ are assumed to satisfy\n $$\\mu>0,\\quad {\\mu}{'}\\geqslant 0.$$ \n In our case the fluid is isentropic, i.e.\\ the pressure $p(\\rho)$ is a function of the fluid density $\\rho$ alone and is given by\n $$p(\\rho)=a\\rho^{\\gamma},$$\n where $a>0$ and $\\gamma>1$ are constants.\\\\ \n We assume that there exists a constant external force ${p_{ext}}>0$ which acts on the beam. The external force ${p_{ext}}$ can be written as follows\n $${p_{ext}}=a\\overline{\\rho}^{\\gamma},$$\n for some positive constant $\\overline{\\rho}.$\\\\\n To incorporate this external forcing term ${p_{ext}}$ into the system of equations \\eqref{1.1}, we introduce the following \n \\begin{equation}\\label{1.2}\n P(\\rho)=p(\\rho)-{p_{ext}}=a\\rho^{\\gamma}-a\\overline{\\rho}^{\\gamma}.\n \\end{equation}\n Since $\\nabla p(\\rho)=\\nabla P(\\rho),$ from now onwards we will use $\\nabla P(\\rho)$ instead of $\\nabla p(\\rho)$ in the equation \\eqref{1.1}$_{2}.$\\\\\n In the beam equation, the constants $\\alpha>0,$ $\\beta\\geqslant0$ and $\\delta>0$ are respectively the adimensional rigidity, stretching and friction coefficients of the beam. 
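Since we work with densities bounded as in \\eqref{cor0}, let us also record an elementary consequence of the pressure law (stated here for later reference; the computation is immediate):\n \\begin{equation}\\nonumber\n \\begin{array}{l}\n P'(\\rho)=p'(\\rho)=a\\gamma\\rho^{\\gamma-1},\\quad\\mbox{so that}\\quad 0<a\\gamma m^{\\gamma-1}\\leqslant P'(\\rho)\\leqslant a\\gamma M^{\\gamma-1}\\quad\\mbox{for}\\quad m\\leqslant\\rho\\leqslant M.\n \\end{array}\n \\end{equation}\n In particular the term $P'(\\rho)\\nabla\\rho=\\nabla P(\\rho)$ stays well behaved as long as the density remains within such bounds.\n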
The non-homogeneous source term of the beam equation $(T_{f})_{2}$ is the net surface force on the structure, which is the resultant of the force exerted by the fluid on the structure and of the external force ${p_{ext}},$ and it is assumed to be of the following form\n\\begin{equation}\\label{1.3}\n(T_{f})_{2}=([-2\\mu D({\\bf u})-\\mu{'}(\\mbox{div}{\\bf u}){\\bf I}_{d}]\\cdot {{\\bf n}_{t}}+P{{\\bf n}_{t}})\\mid_{\\Gamma_{s,t}}\\sqrt{1+\\eta^{2}_{x}}\\cdot \\vec{e}_{2}\\quad\\mbox{on}\\quad \\Sigma^{s}_{T},\n\\end{equation}\nwhere ${\\bf I}_{d}$ is the identity matrix, ${\\bf n}_{t}$ is the outward unit normal to $\\Gamma_{s,t}$ given by\n$${\\bf n}_{t}=-\\frac{\\eta_{x}}{\\sqrt{1+\\eta^{2}_{x}}}\\vec{e}_{1}+\\frac{1}{\\sqrt{1+\\eta^{2}_{x}}}\\vec{e}_{2}$$\n($\\vec{e}_{1}=(1,0)$ and $\\vec{e}_{2}=(0,1)$).\\\\\nObserve that $(\\rho,{\\bf u},\\eta)=(\\overline{\\rho},0,0)$ is a stationary solution to \\eqref{1.1}-\\eqref{1.2}-\\eqref{1.3}: indeed, for this triplet the convective and viscous terms vanish, $\\nabla P(\\overline{\\rho})=0,$ and, since in this case ${\\bf n}_{t}=\\vec{e}_{2}$ and $P(\\overline{\\rho})=0,$ the source term \\eqref{1.3} vanishes as well.\n\\begin{remark}\n\tNow we can formally derive a priori estimates for the system \\eqref{1.1}-\\eqref{1.2}-\\eqref{1.3} and show the following energy equality \n\t\\begin{equation}\\label{1.15l}\n\t\\begin{split}\n\t&\\frac{1}{2}\\frac{d}{dt}\\left(\\int\\limits_{\\Omega_{t}}\\rho|{\\bf u}|^{2}\\,dx\\right)+\\frac{d}{dt}\\left(\\int\\limits_{\\Omega_{t}}\\frac{a}{(\\gamma-1)}\\rho^{\\gamma}\\,dx \\right) +\\frac{1}{2}\\frac{d}{dt}\\left(\\int\\limits_{0}^{L}|\\eta_{t}|^{2}\\,dx\\right)\n\t+\\frac{\\beta}{2}\\frac{d}{dt}\\left(\\int\\limits_{0}^{L}|\\eta_{x}|^{2}\\,dx\\right)\\\\[1.mm]\n\t& +\\frac{\\alpha}{2}\\frac{d}{dt}\\left(\\int\\limits_{0}^{L}|\\eta_{xx}|^{2}\\,dx\\right)\n\t+2\\mu\\int\\limits_{\\Omega_{t}}| D{\\bf u}|^{2}\\,dx+\\mu'\\int\\limits_{\\Omega_{t}}|\\mathrm{div}{\\bf u}|^{2}\\,dx+\\delta\\int\\limits_{0}^{L}|\\eta_{tx}|^{2}\\,dx =-{p_{ext}}\\int\\limits_{\\Gamma_{s}}\\eta_{t}.\n\t\\end{split}\n\t\\end{equation}\nThe equality \\eqref{1.15l} underlines the physical interpretation of each coefficient and in particular of 
the dissipative coefficients $\\mu,$ $\\mu'$ and $\\delta$.\n\\end{remark}\n \\begin{remark}\\label{eta0}\n \tObserve that in \\eqref{1.1} we have considered the initial displacement $\\eta(0)$ of the beam to be zero. This is because we prove the local existence of strong solutions of the system \\eqref{1.1} with the beam displacement $\\eta$ close to the steady state zero. There are several examples in the literature where the authors consider the initial displacement of the structure (in a fluid-structure interaction problem) to be equal to zero. For instance, the readers can consult the articles \\cite{kukavica} and \\cite{boukir}. We also refer to the article \\cite{veiga} where the initial displacement of the structure is non zero but is considered to be suitably small. The question of existence of strong solutions for the model \\eqref{1.1} with a non zero initial displacement $\\eta(0)$ of the beam remains open. The case of a system coupling the incompressible Navier-Stokes equations and an Euler-Bernoulli damped beam with a non zero initial beam displacement is addressed in \\cite{casanova}. \n \\end{remark}\n Our interest is to prove the local in time existence of a strong solution to the system \\eqref{1.1}-\\eqref{1.2}-\\eqref{1.3}, i.e.\\ we prove that given a prescribed initial datum $(\\rho_{0},{\\bf u}_{0},\\eta_{1}),$ there exists a solution of system \\eqref{1.1}-\\eqref{1.2}-\\eqref{1.3} with a certain Sobolev regularity in some time interval $(0,T),$ provided that the time $T$ is small enough.\\\\ \n We study the system \\eqref{1.1}-\\eqref{1.2}-\\eqref{1.3} by transforming it into the reference cylindrical domain $Q_{T}.$ This is done by defining a diffeomorphism from $\\Omega_{t}$ onto $\\Omega.$ We adapt the diffeomorphism used in \\cite{veiga} in the study of an incompressible fluid-structure interaction model. 
The reader can also look at \\cite{raymondbeam}, \\cite{grand} where the authors use a similar map in the context of a coupled fluid-structure model comprising an incompressible fluid. \n \\subsection{Transformation of the problem to a fixed domain}\\label{transfixdm}\n To transform the system \\eqref{1.1}-\\eqref{1.2}-\\eqref{1.3} into the reference configuration, for $\\eta$ satisfying\n $\n 1+\\eta(x,t)>0$ for all $(x,t)\\in\\Sigma^{s}_{T},\n$\n we introduce the following change of variables\n \\begin{equation}\\label{1.14}\n \\begin{array}{l}\n \\displaystyle\n {\\Phi}_{\\eta(t)}:\\Omega_{t}\\longrightarrow \\Omega\\quad\\mbox{defined by}\\quad {\\Phi}_{\\eta(t)}(x,y)=(x,z)=\\left(x,\\frac{y}{1+\\eta(x,t)}\\right),\\\\\n \\displaystyle\n {\\Phi}_{\\eta}:\\widetilde{{Q}_{T}}\\longrightarrow Q_{T}\\quad\\mbox{defined by}\\quad {\\Phi}_{\\eta}(x,y,t)=(x,z,t)=\\left(x,\\frac{y}{1+\\eta(x,t)},t\\right).\n \\end{array} \n \\end{equation}\n \\begin{remark}\n \tIt is easy to prove that for each $t\\in[0,T),$ the map ${\\Phi}_{\\eta(t)}$ is a $C^{1}$-diffeomorphism from $\\Omega_{t}$ onto $\\Omega$ provided that $(1+\\eta(x,t))>0$ for all $x\\in \\mathbb{T}_L$ and that $\\eta(\\cdot,t)\\in C^{1}(\\Gamma_{s}).$\n \\end{remark}\n Notice that since $\\eta(\\cdot,0)=0,$ ${\\Phi}_{\\eta(0)}$ is just the identity map. 
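For the reader's convenience we record how derivatives transform under the change of variables \\eqref{1.14}; these identities follow from a routine application of the chain rule. If $f$ is a function defined on $\\widetilde{Q_{T}}$ and $\\widehat{f}=f\\circ\\Phi^{-1}_{\\eta},$ then\n \\begin{equation}\\nonumber\n \\begin{array}{l}\n \\displaystyle\n \\partial_{y}f=\\frac{1}{(1+\\eta)}\\partial_{z}\\widehat{f},\\quad \\partial_{x}f=\\partial_{x}\\widehat{f}-\\frac{z\\eta_{x}}{(1+\\eta)}\\partial_{z}\\widehat{f},\\quad \\partial_{t}f=\\partial_{t}\\widehat{f}-\\frac{z\\eta_{t}}{(1+\\eta)}\\partial_{z}\\widehat{f},\n \\end{array}\n \\end{equation}\n where the right hand sides are evaluated at $(x,z,t)=\\Phi_{\\eta}(x,y,t).$ These identities explain the factors $\\frac{1}{(1+\\eta)},$ $z\\eta_{x}$ and $z\\eta_{t}$ which appear in the transformed system below.\n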
We set the following notations\n \\begin{equation}\\label{1.15}\n \\begin{array}{l}\n \\widehat{\\rho}(x,z,t)=\\rho(\\Phi^{-1}_{\\eta}(x,z,t)),\\,\\,\\widehat{{\\bf u}}(x,z,t)=(\\widehat{u}_{1},\\widehat{u}_{2})={\\bf u}(\\Phi^{-1}_{\\eta}(x,z,t)).\\\\\n\n \\end{array}\n \\end{equation}\n After transformation and using the fact that $\\widehat{{ u}}_{1,x}=0$ on $\\Sigma^{s}_{T}$ (since $\\widehat {\\bf u}=\\eta_{t}\\vec{e_{2}}$ on $\\Sigma^{s}_{T}$) the nonlinear system \\eqref{1.1}-\\eqref{1.2}-\\eqref{1.3} is rewritten in the following form\n \\begin{equation}\\label{1.16}\n \\left\\{\n \\begin{array}{ll}\n \\widehat{\\rho_{t}}+\\begin{bmatrix}\n \\widehat {u}_{1}\\\\\n \\frac{1}{(1+\\eta)}(\\widehat {u}_{2}-\\eta_{t}z-\\widehat{u}_{1}z\\eta_{x})\n \\end{bmatrix}\n \\cdot \\nabla\\widehat{\\rho}+\\widehat{\\rho}\\mbox{div}{\\widehat{\\bf u}}={F}_{1}(\\widehat{\\rho},\\widehat{\\bf u},\\eta)\\quad& \\mbox{in}\\quad Q_{T},\n \\vspace{1.mm}\\\\\n \\widehat\\rho \\widehat {\\bf u}_{t}-\\mu\\Delta \\widehat {\\bf u}-(\\mu'+\\mu)\\nabla(\\mbox{div}\\widehat {\\bf u})+ \\nabla P(\\widehat{\\rho}) ={F}_{2}(\\widehat \\rho,\\widehat {\\bf u},\\eta)\\quad &\\mbox{in} \\quad {Q}_{T},\n \\vspace{1.mm}\\\\\n \\widehat{\\bf u}=\\eta_{t}\\vec{e_{2}}\\quad& \\mbox{on}\\quad \\Sigma^{s}_{T},\\\\[1.mm]\n \\widehat {\\bf u}(\\cdot,t)=0\\quad& \\mbox{on}\\quad \\Sigma^{\\ell}_{T},\n \\vspace{1.mm}\\\\\n \\widehat {\\bf u}(\\cdot,0)={\\bf u}_{0}\\quad& \\mbox{in} \\quad \\Omega,\n \\vspace{1.mm}\\\\\n \\widehat{\\rho}(\\cdot,0)={\\rho_{0}}\\quad& \\mbox{in}\\quad \\Omega,\n \\vspace{1.mm}\\\\\n \\eta_{tt}-\\beta \\eta_{xx}- \\delta\\eta_{txx}+\\alpha\\eta_{xxxx}=F_{3}(\\widehat{\\rho},\\widehat{\\bf u},{\\eta}) \\quad& \\mbox{on}\\quad \\Sigma^{s}_{T},\n \\vspace{1.mm}\\\\\n \\eta(0)=0\\quad \\mbox{and}\\quad \\eta_{t}(0)=\\eta_{1}\\quad &\\mbox{in}\\quad \\Gamma_{s},\n \\end{array} \\right.\n \\end{equation}\n\n where \n \\begin{equation}\\label{F123}\n \\begin{aligned}\n 
\\displaystyle\n {F}_{1}(\\widehat {\\rho},\\widehat {\\bf u},\\eta)= & \\frac{1}{(1+\\eta)}(\\widehat { u}_{1,z}z\\eta_{x}\\widehat\\rho+\\eta\\widehat{\\rho}\\widehat{u}_{2,z}),\\\\\n {F}_{2}(\\widehat \\rho,\\widehat {\\bf u},\\eta)= & -\\eta\\widehat{\\rho}\\widehat {\\bf u}_{t}+z\\widehat\\rho\\widehat {\\bf u}_{z}\\eta_{t}-\\eta\\widehat \\rho\\widehat {u}_{1}\\widehat {\\bf u}_{x}+\\widehat { u}_{1}\\widehat {\\bf u}_{z}\\eta_{x}\\widehat \\rho z+\\mu \\big(\\eta\\widehat {\\bf u}_{xx}-\\frac{\\eta \\widehat {\\bf u}_{zz}}{(1+\\eta)}-2\\eta_{x}z\\widehat {\\bf u}_{zx}+\\frac{\\widehat {\\bf u}_{zz}z^{2}\\eta^{2}_{x}}{(1+\\eta)}\\\\\n & -\\widehat {\\bf u}_{z}\\big( \\frac{(1+\\eta)z\\eta_{xx}-2\\eta_{x}^{2}z}{(1+\\eta)}\\big)\\big)-\\widehat\\rho(\\widehat {\\bf u}.\\nabla)\\widehat {\\bf u}+(\\mu+\\mu')\\\\\n\n\n\n & \\displaystyle \\cdot\\begin{bmatrix}\n \\eta\\widehat {u}_{1,xx}-\\widehat {u}_{1,xz}z\\eta_{x}-\\eta_{x}z\\big(\\widehat { u}_{1,zx}-\\frac{\\widehat {u}_{1,zz}z\\eta_{x}}{(1+\\eta)}\\big)+\\widehat { u}_{1,z}\\big(\\frac{(1+\\eta)z\\eta_{xx}-2\\eta^{2}_{x}z}{(1+\\eta)}\\big) -\\frac{\\eta_{x}\\widehat {u}_{2,z}}{(1+\\eta)}-\\frac{\\eta_{x}z\\widehat { u}_{2,zz}}{(1+\\eta)}\\\\[2.mm]\n -\\frac{\\eta_{x}\\widehat {u}_{1,z}}{(1+\\eta)}-\\frac{\\eta_{x}z\\widehat { u}_{1,zz}}{(1+\\eta)}-\\frac{\\eta\\widehat {u}_{2,zz}}{(1+\\eta)}\n \\end{bmatrix}\\\\\n & -(\\eta P_{x}(\\widehat{\\rho})-P_{z}(\\widehat{\\rho})z\\eta_{x})\\vec{e_{1}},\\\\\n \\pagebreak\n F_{3}(\\widehat{\\rho},\\widehat {\\bf u},\\eta)= & -\\mu\\big(-{\\widehat{ u}_{2,z}}+\\eta_{x}\\widehat {u}_{2,x}+\\frac{\\widehat { u}_{2,z}}{(1+\\eta)}\\eta^{2}_{x}z-\\frac{2\\eta\\widehat { u}_{2,z}}{(1+\\eta)}-\\frac{\\eta_{x}\\widehat {u}_{1,z}}{(1+\\eta)}\\big)\n -\\mu'\\big(-2\\widehat{u}_{2,z}+\\frac{\\widehat { u}_{1,z}}{(1+\\eta)}\\eta_{x}z\\\\\n &-\\frac{\\eta\\widehat {u}_{2,z}}{(1+\\eta)}\\big)+P(\\widehat{\\rho}).&\n \\end{aligned}\n \\end{equation}\n The transport equation for 
density \\eqref{1.16}$_{1}$-\\eqref{1.16}$_{6}$ is of the form\n \\begin{equation}\\label{1.17}\n \\left\\{\n \\begin{array}{ll}\n \\widehat{\\rho_{t}}+\\begin{bmatrix}\n \\widehat {u}_{1}\\\\\n \\frac{1}{(1+\\eta)}(\\widehat {u}_{2}-\\eta_{t}z-\\widehat{u}_{1}z\\eta_{x})\n \\end{bmatrix}\n \\cdot \\nabla\\widehat{\\rho}+\\widehat{\\rho}\\mbox{div}{\\widehat{\\bf u}}={F}_{1}\\quad& \\mbox{in}\\quad Q_{T},\n \\vspace{1.mm}\\\\\n \\widehat{\\rho}(\\cdot,0)={\\rho_{0}}\\quad& \\mbox{in}\\quad \\Omega.\n \\end{array} \\right.\n \\end{equation}\n Due to the interface condition, $\\widehat {\\bf u}=\\eta_{t}\\vec{e_{2}}$ on $\\Sigma^{s}_{T},$ we get that the velocity field $(\n \\widehat {u}_{1},\n \\frac{1}{(1+\\eta)}(\\widehat {u}_{2}-\\eta_{t}z-\\widehat{u}_{1}z\\eta_{x})\n )$ satisfies\n $$\\begin{bmatrix}\n \\widehat {u}_{1}\\\\\n \\frac{1}{(1+\\eta)}(\\widehat {u}_{2}-\\eta_{t}z-\\widehat{u}_{1}z\\eta_{x})\n \\end{bmatrix}\\cdot {\\bf n}=0\\quad \\mbox{on}\\quad \\Sigma^{s}_{T},$$\n where ${\\bf n}$ is the unit outward normal to $\\Omega.$ \n Hence we shall not prescribe any boundary condition on the density for the system \\eqref{1.17} to be well posed.\\\\\nTo avoid working in domains which deform when time evolves, the meaning of solutions for \\eqref{1.1}-\\eqref{1.2}-\\eqref{1.3} will be understood as follows: The triplet $(\\rho,{\\bf u},\\eta)$ solves \\eqref{1.1}-\\eqref{1.2}-\\eqref{1.3} if and only if $(\\widehat{\\rho},\\widehat{\\bf u},\\eta)$ solves \\eqref{1.16}. This notion will be detailed in the next section. 
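For completeness let us sketch the computation behind the vanishing of the normal component of the transport velocity claimed above. On $\\Gamma_{s}$ we have $z=1$ and ${\\bf n}=\\vec{e}_{2},$ so that, using the interface condition $\\widehat{u}_{1}=0,$ $\\widehat{u}_{2}=\\eta_{t}$ on $\\Sigma^{s}_{T},$\n \\begin{equation}\\nonumber\n \\begin{array}{l}\n \\displaystyle\n \\frac{1}{(1+\\eta)}(\\widehat{u}_{2}-\\eta_{t}z-\\widehat{u}_{1}z\\eta_{x})=\\frac{1}{(1+\\eta)}(\\eta_{t}-\\eta_{t})=0.\n \\end{array}\n \\end{equation}\n Similarly on $\\Gamma_{\\ell}$ we have $z=0$ and ${\\bf n}=-\\vec{e}_{2},$ and the boundary condition $\\widehat{\\bf u}=0$ on $\\Sigma^{\\ell}_{T}$ gives $\\frac{1}{(1+\\eta)}\\widehat{u}_{2}=0.$ Hence the transport velocity is tangential on the whole boundary.\n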
\n\\subsection{Functional settings and the main result}\nIn the fixed domain $\\Omega$ we have the following spaces of functions with values in $\\mathbb{R}^{2},$\n$${\\bf H}^{s}(\\Omega)=H^{s}(\\Omega;\\mathbb{R}^{2})\\quad\\mbox{for all}\\quad s\\geqslant 0.$$\nWe also introduce the following spaces of vector valued functions \n\\begin{equation}\\label{fntlsp}\n\\begin{array}{l}\n{\\bf H}^{1}_{0}(\\Omega)=\\{{\\bf z}\\in{\\bf H}^{1}(\\Omega)\\;\\ifnum\\currentgrouptype=16 \\middle\\fi|\\; {\\bf z}=0\\,\\,\\mbox{on}\\,\\, \\Gamma \\},\\\\[1.mm]\n{\\bf H}^{2,1}(Q_{T})=L^{2}(0,T;{\\bf H}^{2}(\\Omega))\\cap H^{1}(0,T;{\\bf L}^{2}(\\Omega)),\\\\[1.mm]\n{\\bf H}^{2,1}_{\\Sigma_{T}}(Q_{T})=\\{{\\bf z}\\in {\\bf H}^{2,1}(Q_{T})\\;\\ifnum\\currentgrouptype=16 \\middle\\fi|\\; {\\bf z}=0\\,\\,\\mbox{on}\\,\\,\\Sigma_{T} \\}.\n\\end{array}\n\\end{equation}\nSimilarly for $s\\geqslant 0,$ we can define $H^{s}(\\Omega),$ the Sobolev space for the scalar valued functions defined on $\\Omega.$ Now for $\\theta,\\tau\\geqslant 0,$ we introduce the following spaces which we use to analyze the beam equation\n\\begin{equation}\\nonumber\n\\begin{array}{l}\n H^{\\theta,\\tau}(\\Sigma^{s}_{T})=L^{2}(0,T;H^{\\theta}(\\Gamma_{s}))\\cap H^{\\tau}(0,T;L^{2}(\\Gamma_{s})).\n\\end{array}\n\\end{equation} \n\\begin{remark}\n\tSince $\\Omega=\\mathbb{T}_{L}\\times (0,1)$ and $\\Gamma_{s}=\\mathbb{T}_{L}\\times \\{1\\},$ the above definitions of the functional spaces implicitly assert that the functions are $L-$ periodic in the $x$ variable. 
\n\\end{remark}\n\\begin{prop}\\label{pr1}\nLet $T>0.$ If $\\eta$ is regular enough in the space variable, say $\\eta(\\cdot,t)\\in H^{m}(\\Gamma_{s})$ for $m\\geqslant 2,$ and the following holds\n\\begin{equation}\\label{coet}\n\\begin{array}{l}\n1+\\eta(x,t)\\geqslant\\delta_{0}>0\\quad\\mbox{on}\\quad \\Sigma^{s}_{T},\n\\end{array}\n\\end{equation}\nfor some constant $\\delta_{0},$ then the map ${{g}}\\mapsto \\widehat{{g}}=g(\\Phi^{-1}_{\\eta(t)}(x,z))$ is a homeomorphism from ${ H}^{s}(\\Omega_{t})$ to ${ H}^{s}(\\Omega)$ for any $s\\leqslant m.$\n\\end{prop}\nThe proposition stated above can be proved in the same spirit as \\cite[Proposition 2, Section 3]{grand}.\\\\\nNow in view of Proposition \\ref{pr1}, we define the notion of strong solution of the system \\eqref{1.1}-\\eqref{1.2}-\\eqref{1.3} in terms of the strong solution of the system \\eqref{1.16}.\n\\begin{mydef}\\label{doss}\n\t The triplet $(\\rho,{\\bf u},\\eta)$ is a strong solution of the system \\eqref{1.1}-\\eqref{1.2}-\\eqref{1.3} if\n\t\t\\begin{equation}\\label{conoeta}\n\t\t\\begin{array}{ll}\n\t\t\\eta\\in C^{0}\\big([0,T]; H^{9\/2}(\\Gamma_{s})\\big),&\n\t\t\\eta_{t}\\in L^{2}\\big(0,T;{ H}^{4}(\\Gamma_{s})\\big)\\cap C^{0}\\big([0,T];{ H}^{3}(\\Gamma_{s}) \\big),\\\\[1.mm]\n\t\t\\eta_{tt}\\in L^{2}\\big(0,T;{ H}^{2}(\\Gamma_{s})\\big)\\cap C^{0}\\big([0,T];{ H}^{1}(\\Gamma_{s}) \\big),&\n\t\t\\eta_{ttt}\\in L^{2}\\big(0,T;{ L}^{2}(\\Gamma_{s})\\big),\n\t\t\\end{array}\n\t\t\\end{equation}\n\t \\eqref{coet} holds for every $(x,t)\\in \\Sigma^{s}_{T}$ and the triplet $(\\widehat{\\rho},\\widehat{\\bf u},\\eta)=({\\rho}\\circ\\Phi_{\\eta}^{-1},{\\bf u}\\circ\\Phi_{\\eta}^{-1},\\eta )$ solves \\eqref{1.16} in the following Sobolev spaces\n\t\\begin{equation}\\label{de2}\n\t\\begin{array}{l}\n\t\\widehat\\rho\\in C^{0}\\big([0,T]; H^{2}(\\Omega) \\big),\\,\n\t\\widehat\\rho_{t}\\in C^{0}\\big([0,T]; H^{1}(\\Omega)\\big),\\\\[1.mm]\n\t\\widehat{\\bf u}\\in L^{2}\\big(0,T;{\\bf H}^{3}(\\Omega)\\big)\\cap C^{0}\\big([0,T]; {\\bf H}^{5\/2}(\\Omega) \\big),\n\t\\vspace{1.mm}\\\\\n\t\\widehat{\\bf u}_{t}\\in L^{2}\\big(0,T;{\\bf H}^{2}(\\Omega)\\big)\\cap C^{0}\\big([0,T];{\\bf H}^{1}(\\Omega) \\big),\n\t\\vspace{1.mm}\\\\\n\t\\widehat{\\bf u}_{tt}\\in L^{2}\\big(0,T;{\\bf L}^{2}(\\Omega)\\big).\n\t\\end{array}\n\t\\end{equation}\n\t($\\eta$ is in the space mentioned in \\eqref{conoeta}). Note that $(\\rho,{\\bf u})$ can then be obtained from $(\\widehat{\\rho},\\widehat{\\bf u})$ by $(\\rho,{\\bf u})=(\\widehat{\\rho}\\circ\\Phi_{\\eta},\\widehat{\\bf u}\\circ\\Phi_{\\eta}).$\n\\end{mydef}\nIn relation with Definition \\ref{doss}, we introduce the following functional spaces\n\\begin{equation}\\label{dofYi}\n\\begin{split}\nY_{1}^{T}=&\\{{\\rho}\\in C^{0}([0,T];H^{2}(\\Omega))\\;\\ifnum\\currentgrouptype=16 \\middle\\fi|\\; \\rho_{t}\\in C^{0}([0,T]; H^{1}(\\Omega))\\},\\\\[1.mm]\nY_{2}^{T}=&\\{{\\bf u}\\in L^{2}(0,T;{\\bf H}^{3}(\\Omega))\\cap C^{0}([0,T];{\\bf H}^{5\/2}(\\Omega))\\;\\ifnum\\currentgrouptype=16 \\middle\\fi|\\; {\\bf u}_{t}\\in L^{2}(0,T;{\\bf H}^{2}(\\Omega))\n\\cap C^{0}([0,T];{\\bf H}^{1}(\\Omega)),\\\\[1.mm]\n&\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\quad{\\bf u}_{tt}\\in L^{2}(0,T;{\\bf L}^{2}(\\Omega)) \\},\\\\[1.mm]\nY_{3}^{T}=&\\{\\eta\\in C^{0}([0,T]; H^{9\/2}(\\Gamma_{s})),\\,\\eta(x,0)=0\\;\\ifnum\\currentgrouptype=16 \\middle\\fi|\\; \\eta_{t}\\in L^{2}(0,T;{ H}^{4}(\\Gamma_{s}))\\cap C^{0}([0,T];{ H}^{3}(\\Gamma_{s})), \\\\[1.mm]\n&\\qquad\\qquad\\qquad\\qquad\\qquad\\eta_{tt}\\in L^{2}(0,T;{ H}^{2}(\\Gamma_{s}))\\cap C^{0}([0,T];{ H}^{1}(\\Gamma_{s})),\n\\eta_{ttt}\\in L^{2}(0,T;{ L}^{2}(\\Gamma_{s}))\\}.\n\\end{split}\n\\end{equation}\nThe spaces $Y_{1}^{T},$ $Y_{2}^{T}$ and $Y_{3}^{T}$ are the spaces in which we look for the unknowns $\\widehat{\\rho},$ $\\widehat{\\bf u}$ and $\\eta$ respectively.\\\\\n Now we precisely state the main result of the 
article.\n\\begin{thm}\\label{main}\n\tAssume that \n\t\\begin{equation}\\label{cit}\n\t\\left\\{ \\begin{array}{ll}\n\t(i)\\,&(a)\\,\\,{Regularity\\,of\\,initial\\,conditions:}\\,\\rho_{0}\\in {H}^{2}(\\Omega),\\,\\, \\eta_{1}\\in {H}^{3}(\\Gamma_{s}),\\,\\, {\\bf u}_{0}\\in {\\bf H}^{3}(\\Omega).\\\\[2.mm]\n &(b)\\,\\,{Compatibility\\,between\\,initial\\, and\\,boundary\\,conditions}:\\\\\n\t&\\quad(b)_{1}\\,\\,\\left( {\\bf u}_{0}-\\begin{bmatrix}\n\t0\\\\\n\tz\\eta_{1}\n\t\\end{bmatrix}\\right)=0\\,\\mbox{on}\\,\\Gamma,\\\\[2.mm]\n\t &\\quad(b)_{2}\\,\\,-P'(\\rho_{0})\\nabla\\rho_{0}-(\\delta\\eta_{1,xx}-(\\mu+2\\mu')({ u}_{0})_{2,z}+P(\\rho_{0}))z\\rho_{0}\\vec{e}_{2}+z\\rho_{0}({\\bf u}_{0})_{z}\\eta_{1}\\\\\n\t &\\quad\\,\\,\\quad\\quad-\\rho_{0}({\\bf u}_{0}\\cdot\\nabla){\\bf u}_{0}-(-\\mu \\Delta -(\\mu+\\mu{'})\\nabla\\mathrm{div}){\\bf u}_{0}=0\\,\\mbox{on}\\,\\Gamma,\\\\\n\t(ii)\\,&\\eqref{cor0}\\,\\mbox{holds},\n\t\\end{array}\\right.\n\t\\end{equation}\n\t where we use the notations $P'({\\rho}_{0})\\nabla{\\rho}_{0}=\\nabla P({\\rho}_{0}),$ $P(\\rho_{0})=(a\\rho_{0}^{\\gamma}-a\\overline{\\rho}^{\\gamma})$ and ${\\bf u}_{0}=((u_{0})_{1},(u_{0})_{2}).$\n\tThen there exists $T>0$ such that the system \\eqref{1.16} admits a solution $(\\widehat{\\rho},\\widehat{\\bf u},\\eta)\\in Y_{1}^{T}\\times Y_{2}^{T}\\times Y_{3}^{T}.$ \n\tConsequently in the sense of Definition \\ref{doss} the system \\eqref{1.1}-\\eqref{1.2}-\\eqref{1.3} admits a strong solution $({\\rho},{\\bf u},\\eta).$ \n\\end{thm} \n \\begin{remark}\n \tOur analysis throughout the article can be suitably adapted to consider any pressure law $p(\\cdot)\\in C^{2}(\\mathbb{R}^{+})$ (in this article we present the proofs with the pressure law given by $p(\\rho)=a\\rho^{\\gamma},$ with $\\gamma>1$) such that there exists a positive constant $\\overline{\\rho}$ satisfying $p(\\overline{\\rho})={p_{ext}},$ where ${p_{ext}}(>0)$ is the external force acting on the beam. 
The adaptation is possible since we only consider the case where the fluid density $\\rho$ has a positive lower and upper bound. \n \\end{remark} \nNow let us sketch the strategy towards the proof of Theorem \\ref{main}.\n\\subsection{Strategy}\\label{Chap3strategy}\n(i) $\\mathit{Changing\\,\\,\\eqref{1.16}\\,\\, to\\,\\, a\\,\\, homogeneous\\,\\, boundary\\,\\, value\\,\\, problem}$: Recall that (see Remark \\ref{eta0}) we will prove the existence of local in time strong solution of the system \\eqref{1.1}-\\eqref{1.2}-\\eqref{1.3} only when the beam displacement $\\eta$ is close to zero. Again observe that $(\\widehat{\\rho}=\\overline{\\rho},\\widehat{\\bf u}=0,\\widehat{\\eta}=0)$ is a steady state solution of the system \\eqref{1.1}-\\eqref{1.2}-\\eqref{1.3} and hence of the system \\eqref{1.16}. So to work in a neighborhood of $\\eta=0,$ we make the following change of unknowns in \\eqref{1.16},\n\\begin{equation}\\label{1.20}\n\\begin{array}{l}\n\\sigma=\\widehat\\rho-\\overline{\\rho},\n\\quad\n{\\bf v}=(v_{1},v_{2})=\\widehat {\\bf u}-0,\n\\quad\n\\eta=\\eta-0.\n\\end{array}\n\\end{equation} \nIn view of the change of unknowns \\eqref{1.20} one obtains\n\\begin{equation}\\label{1.21}\n\\left\\{\n\\begin{array}{ll}\n{\\sigma_{t}}+\\begin{bmatrix}\n{v}_{1}\\\\\n\\frac{1}{(1+\\eta)}({v}_{2}-\\eta_{t}z-{v}_{1}z\\eta_{x})\n\\end{bmatrix}\n\\cdot \\nabla\\sigma+(\\sigma+\\overline{\\rho})\\mbox{div}({\\bf v})={F}_{1}(\\sigma+\\overline{\\rho},{\\bf v},\\eta)\\quad &\\mbox{in}\\quad Q_{T},\n\\vspace{1.mm}\\\\\n(\\sigma+\\overline{\\rho}){\\bf v}_{t}-\\mu \\Delta {\\bf v}-(\\mu+\\mu{'})\\nabla\\mathrm{div}{\\bf v}=-{P}'(\\sigma+\\overline{\\rho}) \\nabla \\sigma+{F}_{2}(\\sigma+\\overline{\\rho},{\\bf v},\\eta)\\quad &\\mbox{in} \\quad {Q}_{T},\n\\vspace{1.mm}\\\\\n{\\bf v}=\\eta_{t}\\vec{e_{2}}\\quad& \\mbox{on}\\quad \\Sigma^{s}_{T},\\\\[1.mm]\n{\\bf v}=0\\quad &\\mbox{on}\\quad\\Sigma^{\\ell}_{T},\n\\vspace{1.mm}\\\\\n{\\bf v}(\\cdot,0)={\\bf u}_{0}\\quad& 
\\mbox{in} \\quad \\Omega,\n\\vspace{1.mm}\\\\\n\\sigma(\\cdot,0)=\\sigma_{0}={\\rho_{0}}-{\\overline\\rho}\\quad &\\mbox{in}\\quad \\Omega,\n\\vspace{1.mm}\\\\\n\\eta_{tt}-\\beta \\eta_{xx}- \\delta\\eta_{txx}+\\alpha\\eta_{xxxx}={F}_{3}(\\sigma+\\overline{\\rho},{\\bf v},\\eta)\\quad &\\mbox{on}\\quad \\Sigma^{s}_{T},\n\\vspace{1.mm}\\\\\n\\eta(0)=0\\quad \\mbox{and}\\quad \\eta_{t}(0)=\\eta_{1}\\quad &\\mbox{in}\\quad \\Gamma_{s}.\n\\end{array} \\right.\n\\end{equation}\nWe transform the system \\eqref{1.21} into a homogeneous Dirichlet boundary value problem by performing further the following change of unknown\n\\begin{equation}\\label{cvhd}\n\\begin{array}{l}\n{\\bf w}=(w_{1},w_{2})={\\bf v}-z\\eta_{t}\\vec{e_{2}}.\n\\end{array}\n\\end{equation}\nSince ${\\bf v}$ and ${\\eta}_{t}$ both are $L$-periodic in the $x-$direction, the new unknown ${\\bf w}$ is also $L$-periodic in the $x-$direction. With the new unknown ${\\bf w},$ we write the transformed system in the following form\n\\begin{equation}\\label{chdb}\n\\left\\{\n\\begin{array}{ll}\n{\\sigma_{t}}+\\begin{bmatrix}\n{w}_{1}\\\\\n\\frac{1}{(1+\\eta)}({w}_{2}-{w}_{1}z\\eta_{x})\n\\end{bmatrix}\n\\cdot \\nabla\\sigma=G_{1}(\\sigma,{\\bf w},\\eta)\\quad& \\mbox{in}\\quad Q_{T}, \n\\vspace{1.mm}\\\\\n(\\sigma+\\overline{\\rho}){\\bf w}_{t}-\\mu \\Delta {\\bf w} -(\\mu+\\mu{'})\\nabla\\mathrm{div}{\\bf w}=G_{2}(\\sigma,{\\bf w},\\eta)\\quad& \\mbox{in}\\quad Q_{T},\n\\vspace{1.mm}\\\\\n{\\bf w}=0\\quad &\\mbox{on}\\quad \\Sigma_{T},\n\\vspace{1.mm}\\\\\n{\\bf w}(\\cdot,0)={\\bf w}_{0}={\\bf u}_{0}-z\\eta_{1}\\vec{e}_{2}\\quad &\\mbox{in} \\quad \\Omega,\n\\vspace{1.mm}\\\\\n\\sigma(\\cdot,0)=\\sigma_{0}={\\rho_{0}}-{\\overline\\rho}\\quad &\\mbox{in}\\quad \\Omega,\n\\vspace{1.mm}\\\\\n\\eta_{tt}-\\beta \\eta_{xx}- \\delta\\eta_{txx}+\\alpha\\eta_{xxxx}=G_{3}(\\sigma,{\\bf w},\\eta)\\quad & \\mbox{on}\\quad \\Sigma^{s}_{T},\n\\vspace{1.mm}\\\\\n\\eta(0)=0\\quad \\mbox{and}\\quad \\eta_{t}(0)=\\eta_{1}\\quad& 
\\mbox{in}\\quad \\Gamma_{s},\n\\end{array} \\right.\n\\end{equation}\nwhere\n\\begin{equation}\\label{1.22}\n\\begin{split}\n& G_{1}(\\sigma,{\\bf w},\\eta)=-(\\sigma+\\overline{\\rho})\\mbox{div}({\\bf w}+z\\eta_{t}\\vec{e_{2}})+{F}_{1}(\\sigma+\\overline{\\rho},{{\\bf w}+z\\eta_{t}\\vec{e_{2}}},\\eta),\\\\[1.mm]\n& G_{2}(\\sigma,{\\bf w},\\eta)=-{P}'(\\sigma+\\overline{\\rho}) \\nabla \\sigma-z\\eta_{tt}(\\sigma+\\overline{\\rho})\\vec{e_{2}}-(-\\mu \\Delta -(\\mu+\\mu{'})\\nabla\\mathrm{div})(z\\eta_{t}\\vec{e_{2}})\\\\[1.mm]\n& \\qquad\\qquad\\qquad+{F}_{2}(\\sigma+\\overline{\\rho},{\\bf w}+z\\eta_{t}\\vec{e_{2}},\\eta),\n\\vspace{1.mm}\\\\\n& G_{3}(\\sigma,{\\bf w},\\eta)={F}_{3}(\\sigma+\\overline{\\rho},{\\bf w}+z\\eta_{t}\\vec{e_{2}},\\eta).\n\\end{split}\n\\end{equation}\n(ii) $\\textit{Study\\,\\,of\\,\\,some\\,\\,decoupled\\,\\,linear\\,\\,problems}$: Observe that in the new system \\eqref{chdb} the coupling between the velocity of the fluid and the elastic structure appears only through the source terms. In order to solve the system \\eqref{chdb} we first study some linear equations in Section \\ref{lineqs}. In proving the local in time existence of strong solutions, the difficulty is to track the dependence of the constants (appearing in the inequalities) on the time parameter \\textquoteleft T\\textquoteright.\\ \\,In this direction we first obtain a priori estimates for the linear density and velocity equations with non homogeneous source terms in the spirit of \\cite{vallizak}. Then we prove the existence of strong solutions for a linear beam equation. The proof strongly relies on the analyticity of the corresponding beam semigroup (see \\cite{chen} for details). 
At this point we refer the readers to the articles \\cite{denk} (maximal $L^{p}-L^{q}$ regularity of structurally damped beam equation), \\cite{fanli} (analyticity and exponential stability of beam semigroup), \\cite{raymondbeam} (study of beam equation in the context of an incompressible fluid structure interaction problem) and the references therein for the existence and regularity issues of the damped beam equation. In our case, to obtain estimates for the beam equation with the constants independent of \\textquoteleft T\\textquoteright, we first fix a constant $\\overline{T}>0$ and restrict ourselves to work in the time interval $(0,T)$ where\n\\begin{equation}\\label{Tbar}\n\\begin{array}{l}\nT<\\overline{T}.\n\\end{array}\n\\end{equation}\nThis technique is inspired by \\cite{rayvan}.\\\\ \n(iii) $\\mathit{Fixed\\,\\,point\\,\\,argument}$: In Section \\ref{fixdpt} we prove the existence of a strong solution of \\eqref{chdb} by using Schauder's fixed point theorem based on \\eqref{chdb}-\\eqref{1.22}.\n\\begin{remark}\n\tSince $\\eta(0)=0,$ writing $\\eta(\\cdot,t)=\\int_{0}^{t}\\eta_{t}(\\cdot,s)\\,ds$ and using the embedding $H^{3}(\\Gamma_{s})\\hookrightarrow L^{\\infty}(\\Gamma_{s}),$ the regularity \\eqref{conoeta} of $\\eta$ guarantees that \n\t\\begin{equation}\\label{smalleta}\n\t\\begin{array}{l}\n\t\\|\\eta\\|_{L^{\\infty}(\\Sigma^{s}_{T})}\\leqslant CT\\|\\eta_{t}\\|_{L^{\\infty}(0,T;H^{3}(\\Gamma_{s}))},\n\t\\end{array}\n\t\\end{equation}\n\tfor a constant $C$ independent of $T.$ For small enough time $T,$ \\eqref{smalleta} shows that $\\eta$ remains small, and hence during small times the beam stays close to the steady state zero.\n\\end{remark}\n\\subsection{Comments on initial and compatibility conditions}\\label{incom}\n(i)\nRecall from \\eqref{cit}$(i)$$(a)$ that we assume ${\\bf u}_{0}\\in {\\bf H}^{3}(\\Omega).$ Also observe that in our solution (see \\eqref{de2}) the vector field $\\widehat{\\bf u}\\in C^{0}([0,T];{\\bf H}^{5\/2}(\\Omega)),$ i.e.\\ for the velocity field there is a loss of $\\frac{1}{2}$ space regularity as time evolves. 
One can find such instances of a loss of space regularity in many other articles in the literature; for instance, we refer the readers to \\cite{bougue3}, \\cite{kukavica} (for the coupling of fluid-elastic structure comprising a compressible fluid) and \\cite{cout1}, \\cite{cout2}, \\cite{rayvan} (for incompressible fluid structure interaction models).\\\\[2.mm] \n(ii) We use \\eqref{1.22}$_{3}$ to obtain the following expression of $G_{3}\\mid_{t=0}$ (the value of $G_{3}(\\sigma,{\\bf w},\\eta)$ at time $t=0$)\n\\begin{equation}\\label{G30}\n\\begin{array}{l}\nG_{3}\\mid_{t=0}=-(\\mu+2\\mu')({u}_{0})_{2,z}+P(\\rho_{0}).\n\\end{array}\n\\end{equation} \nUsing $\\rho_{0}\\in H^{2}(\\Omega),$ ${\\bf u}_{0}\\in {\\bf H}^{3}(\\Omega)$ (see \\eqref{cit}$(i)$$(a)$) and standard trace theorems one easily checks that \n\\begin{equation}\\label{regG3t0}\n\\begin{array}{l}\nG_{3}\\mid_{t=0}\\in H^{3\/2}(\\Gamma_{s}).\n\\end{array}\n\\end{equation}\nWe will use the regularity of $G_{3}\\mid_{t=0}$ (in fact we will only use $G_{3}\\mid_{t=0}\\in H^{1}(\\Gamma_{s})$) to prove the regularity of $\\eta.$ This will be detailed in Theorem \\ref{t23}.\\\\[2.mm]\n (iii) We use \\eqref{G30} and the equation \\eqref{chdb}$_{6}$ to check that\n $$\\eta_{tt}(\\cdot,0)=\\delta\\eta_{1,xx}-(\\mu+2\\mu')(u_{0})_{2,z}+P(\\rho_{0}).$$\n Hence using \\eqref{1.22}$_{2}$ one obtains the following expression of $G_{2}\\mid_{t=0}$ (the value of $G_{2}(\\sigma,{\\bf w},\\eta)$ at time $t=0$)\n\\begin{equation}\\label{iG0}\n\\begin{split}\nG_{2}\\mid_{t=0}&=-P'(\\rho_{0})\\nabla\\rho_{0}-(\\delta\\eta_{1,xx}-(\\mu+2\\mu')({ u}_{0})_{2,z}\n+P(\\rho_{0}))z\\rho_{0}\\vec{e}_{2}\n+z\\rho_{0}({\\bf u}_{0})_{z}\\eta_{1}\\\\\n&-\\rho_{0}({\\bf u}_{0}\\cdot\\nabla){\\bf u}_{0}-(-\\mu\\Delta-(\\mu+\\mu{'})\\nabla\\mathrm{div})(z\\eta_{1}\\vec{e_{2}}).\n\\end{split}\n\\end{equation}\n This gives\n\\begin{equation}\\label{compat}\n\\begin{split}\nG_{2}\\mid_{t=0}&-(-\\mu \\Delta 
-(\\mu+\\mu{'})\\nabla\\mathrm{div})\\left( {\\bf u}_{0}-\\begin{bmatrix}\n0\\\\\nz\\eta_{1}\n\\end{bmatrix}\\right)\n=-P'(\\rho_{0})\\nabla\\rho_{0}-(\\delta\\eta_{1,xx}-(\\mu+2\\mu')({ u}_{0})_{2,z}\\\\\n&+P(\\rho_{0}))z\\rho_{0}\\vec{e}_{2}\n+z\\rho_{0}({\\bf u}_{0})_{z}\\eta_{1}-\\rho_{0}({\\bf u}_{0}\\cdot\\nabla){\\bf u}_{0}-(-\\mu \\Delta -(\\mu+\\mu{'})\\nabla\\mathrm{div}){\\bf u}_{0}.\n\\end{split}\n\\end{equation}\nThe regularity assumptions \\eqref{cit}$(i)$$(a)$ and \\eqref{compat} furnish the following\n\\begin{equation}\\label{regcom}\n\\begin{array}{l}\nG_{2}\\mid_{t=0}-(-\\mu \\Delta -(\\mu+\\mu{'})\\nabla\\mathrm{div})\\left( {\\bf u}_{0}-\\begin{bmatrix}\n0\\\\\nz\\eta_{1}\n\\end{bmatrix}\\right)\\in {\\bf H}^{1}(\\Omega).\n\\end{array}\n\\end{equation}\nHence one obtains (recalling that ${\\bf w}_{0}={\\bf u}_{0}-z\\eta_{1}\\vec{e}_{2}$)\n\\begin{equation}\\label{asm2}\n\\begin{array}{ll}\n&\\mbox{the assumption}\\, \\eqref{cit}(i)(a)\\,\\mbox{and}\\, \\eqref{cit}(i)(b)_{2}\\\\\n&\\Longrightarrow G_{2}\\mid_{t=0}-(-\\mu \\Delta{\\bf w}_{0} -(\\mu+\\mu{'})\\nabla\\mathrm{div}{\\bf w}_{0})\\in {\\bf H}^{1}_{0}(\\Omega).\n\\end{array}\n\\end{equation}\nWe need this to prove some regularity of ${\\bf w}$ and hence of ${\\widehat{\\bf u}}.$ This will be detailed in Theorem \\ref{t21}.\\\\[2.mm] \n\\subsection{Bibliographical comments}\\label{bibcomlclext} Here we mainly focus on the existing literature devoted to the study of fluid structure interaction problems.\\\\ \nTo begin with we quote a few articles dedicated to the mathematical study of compressible Navier-Stokes equations. The existence of local in time classical solutions for the compressible Navier-Stokes equations in a time independent domain was first proved in \\cite{nash} and the uniqueness was established in \\cite{serrin}. The global existence of strong solutions for a small perturbation of a stable constant state was established in the celebrated work \\cite{matnis}. 
In the article \\cite{vallizak} the authors established the local in time existence of strong solutions in the presence of inflow and outflow of the fluid through the boundary. In the same article they also present the proof of global in time existence for small data in the absence of the inflow. P.-L. Lions proved (in \\cite{lions}) the global existence of renormalized weak solutions with bounded energy for an isentropic fluid (i.e.\\ $p(\\rho)=\\rho^{\\gamma}$) with the adiabatic constant $\\gamma>3d\/(d+2),$ where $d$ is the space dimension. E. Feireisl $\\mathit{et\\, al.}$ generalized the approach in \\cite{feireisl} to cover the range $\\gamma>3\/2$ in dimension $3$ and $\\gamma>1$ in dimension $2$. The well-posedness issues of the compressible Navier-Stokes equations for critical regularity data can be found in \\cite{danchin}, \\cite{danchin1}. For further references and a very detailed development of the mathematical theory of compressible flow we refer the reader to the books \\cite{novotny1} and \\cite{bresch1}.\\\\\nIn the last decades, fluid-structure interaction problems have been an area of active research. There is a rich literature concerning the motion of a structure inside or at the boundary of a domain containing a viscous incompressible Newtonian fluid, whose behavior is described by the Navier-Stokes equations. For instance, local existence and uniqueness of strong solutions of incompressible fluid-structure models with the structure immersed inside the fluid are studied in \\cite{cout1} (the elastic structure is modeled by linear Kirchhoff equations) and \\cite{cout2} (the elastic structure is governed by quasilinear elastodynamics). 
There also exist articles dealing with incompressible fluid-structure interaction problems where the structure appears on the fluid boundary and is modeled by Euler-Bernoulli damped beam equations \\eqref{1.1}$_{7}$-\\eqref{1.1}$_{8}.$ For example, we refer the readers to \\cite{veiga} (local in time existence of strong solutions), \\cite{esteban} (existence of weak solutions), \\cite{raymondbeam} (feedback stabilization), \\cite{grand} (global in time existence) and the references therein for a very detailed discussion of such problems.\\\\\nDespite the growing literature on incompressible fluids, the number of articles addressing compressible fluid-structure interaction problems is relatively limited, and this literature has developed only recently. One of the fundamental differences between the incompressible and compressible Navier-Stokes equations is that the pressure of the fluid in the incompressible Navier-Stokes equations is interpreted as a Lagrange multiplier, whereas in the case of the compressible Navier-Stokes equations the pressure is given as a function of the density, with the density modeled by a transport equation of hyperbolic nature. The strong coupling between the parabolic and hyperbolic dynamics is one of the intricacies in dealing with the compressible Navier-Stokes equations, and it results in regularity incompatibilities between the fluid and the solid structure. However, in the past few years there have been works exploring fluid-structure interaction problems comprising the compressible Navier-Stokes equations with an elastic body immersed in the fluid domain. For instance, in the article \\cite{bougue2} the authors prove the existence and uniqueness of strong solutions of a fluid structure interaction problem for a compressible fluid and a rigid structure immersed in a regular bounded domain in dimension 3. 
The result is proved in any time interval $(0,T),$ where $T>0,$ for a small perturbation of a stable constant state, provided there is no collision between the rigid body and the boundary $\\partial\\Omega$ of the fluid domain. In \\cite{boulakia1} the existence of a weak solution is obtained in three dimensions for an elastic structure immersed in a compressible fluid. The structure equation considered in \\cite{boulakia1} is strongly regularized in order to obtain suitable estimates on the elastic deformations. A result concerning the local in time existence and uniqueness of strong solutions for a problem coupling a compressible fluid and an elastic structure (immersed inside the fluid) can be found in \\cite{bougue3}. In the article \\cite{bougue3} the equation of the structure does not contain any extra regularizing term. The flow corresponding to a Lagrangian velocity is used in \\cite{bougue3} in order to transform the fluid structure interaction problem into a reference fluid domain $\\Omega_{F}(0),$ whereas in the present article we use the non-physical change of variables \\eqref{1.14} for the similar purpose of writing the entire system in a reference configuration. A Navier-Stokes-Lam\\'{e} system similar to that of \\cite{bougue3} is analyzed in \\cite{kukavica} to prove the existence of local in time strong solutions, but in a different Sobolev regularity framework. In the article \\cite{kukavica} the authors deal with less regular initial data. We also quote a very recent work \\cite{boukir} where the authors prove the local in time existence of a unique strong solution of a compressible fluid structure interaction model where the structure immersed inside the fluid is governed by the Saint-Venant Kirchhoff equations.\\\\\n On the other hand, there are very few works on compressible fluid-structure interaction problems with the structure appearing on the boundary of the fluid domain. 
The article \\cite{flori2} deals with a 1-D structure governed by plate equations coupled with a bi-dimensional compressible fluid where the structure is located at a part of the boundary. Here the authors consider the velocity field as a potential, and in their case the non-linearity occurs only in the equation modeling the density. Instead of writing the system in a reference configuration, in \\cite{flori2} the authors prove the existence and uniqueness of a solution in Sobolev-like spaces defined on time dependent domains. The existence of a weak solution for a different compressible fluid structure interaction model (with the structure appearing on the boundary) is studied in dimension three by the same authors in \\cite{flori1}. In the model considered in \\cite{flori1}, the fluid velocity $v$ satisfies $\\mbox{curl}v\\wedge n=0$ on the entire fluid boundary and the plate is clamped everywhere on the structural boundary. In a recent article \\cite{gavalos1} the authors prove the Hadamard well-posedness of a linear compressible fluid structure interaction problem (a three-dimensional compressible fluid interacting with a bi-dimensional elastic structure) defined in a fixed domain and considering the Navier-slip boundary condition at the interactive boundary. 
They write the coupled system in the form\n $$\\frac{d}{dt}\\begin{pmatrix}\n \\rho\\\\\n {\\bf u}\\\\\n \\eta\\\\\n \\eta_{t}\n \\end{pmatrix}=A\\begin{pmatrix}\n \\rho\\\\\n {\\bf u}\\\\\n \\eta\\\\\n \\eta_{t}\n \\end{pmatrix}\\,\\,\\mbox{in}\\,\\,(0,T),\\quad \\mbox{and}\\quad\\begin{pmatrix}\n \\rho(0)\\\\\n {\\bf u}(0)\\\\\n \\eta(0)\\\\\n \\eta_{t}(0)\n \\end{pmatrix}=\\begin{pmatrix}\n \\rho_{0}\\\\\n {\\bf u}_{0}\\\\\n \\eta_{1}\\\\\n \\eta_{2}\n \\end{pmatrix},$$\n and prove the existence of a mild solution $(\\rho,{\\bf u},\\eta,\\eta_{t})$ in the space $C^{0}([0,T];\\mathcal{D}(A))$ where $\\mathcal{D}(A)$ is the domain of the operator $A.$ Their approach is based on using the Lumer-Phillips theorem to prove that $A$ generates a strongly continuous semigroup. In yet another recent article \\cite{breit} the authors consider a three-dimensional compressible fluid structure interaction model where the structure located at the boundary is a shell of Koiter-type with some prescribed thickness. In the spirit of \\cite{lions} and \\cite{feireisl} the authors prove the existence of a weak solution for their model with the adiabatic constant restricted to $\\gamma>\\frac{12}{7}.$ They show that a weak solution exists until the structure touches the boundary of the fluid domain.\\\\\n To the best of our knowledge there is no existing work (in dimension 2 or 3) proving the existence of strong solutions for non-linear compressible fluid-structure interaction problems (defined in a time dependent domain) with the structure at the boundary of the fluid domain. In the present article we address this problem in the case of a fluid contained in a 2d channel and interacting with a 1d structure at the boundary. Our approach is different from that of \\cite{gavalos1} and \\cite{breit}. 
In \\cite{gavalos1}, since the problem itself is linearized in a fixed domain, the authors can directly use a semigroup formulation to study the existence of a mild solution, whereas \\cite{breit} considers weak solutions and a $4$-level approximation process (using artificial pressure, artificial viscosity, regularization of the boundary and Galerkin approximation for the momentum equation). In the study of weak solutions (in \\cite{lions}, \\cite{feireisl}, \\cite{breit}) one of the major difficulties is to pass to the limit in the non-linear pressure term, which is handled by introducing a new unknown called the effective viscous flux. In our strong regularity framework we do not need to introduce the effective viscous flux, and for small enough time $T,$ the term $\\nabla P(\\sigma+\\overline{\\rho})$ can be treated as a non-homogeneous source term. Our approach is based on studying the regularity properties of a decoupled parabolic equation, a continuity equation and a beam equation. This is done by obtaining some a priori estimates and exploiting the analyticity of the semigroup corresponding to the beam equation. Then the existence result for the non-linear coupled problem is proved by using Schauder's fixed point argument. We prove the existence of the fixed point in a suitable convex set, which is constructed very carefully based on the estimates of the decoupled problems and the estimates of the non-homogeneous source terms. This led us to choose this convex set as a product of balls (in various functional spaces) of different radii. In the present article we prove a local in time existence result of strong solutions whose incompressible counterpart was proved in \\cite{veiga}.\\\\ \n Let us also mention the very recent article \\cite{shibata} where the global existence for compressible viscous fluids (without any structure on the boundary) in a bounded domain is proved in the maximal $L^{p}-L^{q}$ regularity class. 
In this article the authors consider a slip-type boundary condition. More precisely, the fluid velocity ${\\bf u}$ satisfies the following on the boundary\n $$\nD({\\bf u}){\\bf n}-\\langle D({\\bf u}){\\bf n},{\\bf n}\\rangle {\\bf n}=0,\\quad\\mbox{and}\\quad {\\bf u}\\cdot {\\bf n}=0\\quad\\mbox{on}\\quad \\partial\\Omega\\times (0,T).$$\n On a similar note, one can consider a fluid structure interaction problem with a slip-type boundary condition. In that case the velocity field ${\\bf u}$ solves the following\n \\begin{equation}\\label{fslip}\n \\begin{array}{l}\n D({\\bf u}){\\bf n}-\\langle D({\\bf u}){\\bf n},{\\bf n}\\rangle {\\bf n}=0,\\quad\\mbox{and}\\quad {\\bf u}\\cdot {\\bf n}=\\eta_{t}\\quad\\mbox{on}\\quad \\Gamma_{s}\\times (0,T),\n \\end{array}\n\\end{equation}\n where $\\eta_{t}$ is the structural velocity at the interactive boundary $\\Gamma_{s}\\times(0,T).$ To the best of our knowledge, for a compressible fluid structure interaction problem the condition \\eqref{fslip} is treated only in \\cite{gavalos1}, which proves the existence of a mild solution. Of course the boundary condition \\eqref{fslip} is different from the one we consider in the present article, since in our case we do not allow the fluid to slip tangentially through the fluid structure interface (i.e.\\ recall that in our case ${ u}_{1}=0$ on $\\Sigma^{s}_{T}$).\\\\\n A more general slip boundary condition is considered in \\cite{muhacan} in the context of an incompressible fluid structure interaction problem. In the model examined in \\cite{muhacan} the structural displacement has both tangential and normal components with respect to the reference configuration. At the interface the fluid and the structural velocities are coupled via a kinematic coupling condition and a dynamic coupling condition (stating that the structural dynamics is governed by the jump of the normal stress at the interface). 
The kinematic coupling conditions at the interface treated in \\cite{muhacan} consist of the continuity of the normal velocities and a second condition stating that the slip between the tangential components of the fluid and structural velocities is proportional to the fluid normal stress. The authors in \\cite{muhacan} prove the existence of a weak solution for their model. \n \\subsection{Outline} Section \\ref{lineqs} contains results involving the existence and uniqueness of some decoupled linear equations. We state the existence and uniqueness result for a parabolic equation in Section \\ref{veleq}, for the continuity equation in Section \\ref{deneq}, and for the linear beam equation in Section \\ref{beameq}. In Section \\ref{fixdpt} we prove Theorem \\ref{main} by using the Schauder fixed point theorem. \n\n\\section{Analysis of some linear equations}\\label{lineqs}\nWe will prove the existence and uniqueness of strong solutions of a parabolic equation, a continuity equation and a damped beam equation with prescribed initial data and source terms in appropriate Sobolev spaces.\\\\\nFrom now onwards, all the constants appearing in the inequalities will be independent of the final time $T,$ unless otherwise specified. We also comment that we will denote many of the constants in the inequalities using the same notation, although they might vary from line to line. 
\n\\subsection{ Study of a parabolic equation}\\label{veleq}\nAt first we consider the following linear problem\n\\begin{equation}\\label{2.1.1}\n\\left\\{ \\begin{array}{ll}\n\\overline{\\sigma}{\\bf w}_{t}-\\mu \\Delta {\\bf w} -(\\mu+\\mu{'})\\nabla\\mathrm{div}{\\bf w}=G_{2}&\\quad \\mbox{in}\\quad Q_{T},\n\\vspace{1.mm}\\\\\n{\\bf w}=0&\\quad\\mbox{on}\\quad \\Sigma_{T},\\\\[1.mm]\n{\\bf w}(0)={\\bf w}_{0}&\\quad \\mbox{in}\\quad \\Omega,\n\\end{array}\\right.\n\\end{equation}\nwhere $\\overline{\\sigma},$ ${\\bf w}_{0}$ and $G_{2}$ are known functions which are $L$-periodic in the $x$ direction.\\\\\nLet $m$ and $M$ be positive constants such that $m0$ small enough and hence we will conclude Theorem \\ref{main}.\\\\\n\t\t\t Now we sketch the steps towards the proof of Theorem \\ref{main}:\\\\\n\t\t\t\t(i) First in Section \\ref{dfp} we define a suitable map for which a fixed point gives a solution of the system \\eqref{chdb}.\\\\\n\t\t\t\t(ii) Next we design a suitable convex set such that the map defined in step (i) maps this set into itself. This is done in Section \\ref{Enht}. \\\\\n\t\t\t\t(iii) In Section \\ref{comcon} we show that the convex set defined in step (ii) is compact in some appropriate topology. 
We further prove that the fixed point map from step (i) is continuous in that topology.\\\\\n\t\t\t\t(iv) At the end, in Section \\ref{conc}, we draw the final conclusion to prove Theorem \\ref{main}.\\\\\n\t\t\t\tIn what follows, all the constants appearing in the inequalities may vary from line to line but will never depend on $T.$\n\t\t\t\t \\subsection{Definition of the fixed point map}\\label{dfp}\n\t\t\t\t For $(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})$ satisfying\n\t\t\t\t \\begin{equation}\\label{regswe}\n\t\t\t\t \\left\\{ \\begin{array}{l}\n\t\t\t\t \\widetilde{\\sigma}\\in L^{\\infty}(0,T;H^{2}(\\Omega))\\cap W^{1,\\infty}(0,T;H^{1}(\\Omega)),\\\\\n\t\t\t\t \\widetilde{\\bf w}\\in L^{\\infty}(0,T;{\\bf H}^{5\/2}(\\Omega))\\cap L^{2}(0,T;{\\bf H}^{3}(\\Omega))\\cap W^{1,\\infty}(0,T;{\\bf H}^{1}(\\Omega))\\cap H^{1}(0,T;{\\bf H}^{2}(\\Omega))\\\\\n\t\t\t\t \\qquad\\cap H^{2}(0,T;{\\bf L}^{2}(\\Omega)),\\\\\n\t\t\t\t \\widetilde{\\eta}\\in L^{\\infty}(0,T;H^{9\/2}(\\Gamma_{s}))\\cap W^{1,\\infty}(0,T;H^{3}(\\Gamma_{s}))\\cap H^{1}(0,T;H^{4}(\\Gamma_{s}))\\cap W^{2,\\infty}(0,T;H^{1}(\\Gamma_{s}))\\\\\n\t\t\t\t \\qquad \\cap H^{2}(0,T;H^{2}(\\Gamma_{s}))\\cap H^{3}(0,T;L^{2}(\\Gamma_{s})),\n\t\t\t\t \\end{array}\\right.\n\t\t\t\t \\end{equation}\n\t\twe consider the following problem:\n\t\t\t\t \\begin{equation}\\label{3.2}\n\t\t\t\t \\left\\{\n\t\t\t\t \\begin{array}{lll}\n\t\t\t\t &{\\sigma_{t}}+\\widetilde{W}(\\widetilde{\\bf w},\\widetilde{\\eta})\n\t\t\t\t \\cdot \\nabla\\sigma=G_{1}(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})&\\quad \\mbox{in}\\quad Q_{T},\n\t\t\t\t \\vspace{1.mm}\\\\\n\t\t\t\t &(\\widetilde\\sigma+\\overline{\\rho}){\\bf w}_{t}-\\mu \\Delta {\\bf w} -(\\mu+\\mu{'})\\nabla\\mathrm{div}{\\bf w}=G_{2}(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})&\\quad \\mbox{in} \\quad {Q}_{T},\n\t\t\t\t \\vspace{1.mm}\\\\\n\t\t\t\t &{\\bf w}=0&\\quad \\mbox{on}\\quad \\Sigma_{T},\n\t\t\t\t 
\\vspace{1.mm}\\\\\n\t\t\t\t &{\\bf w}(\\cdot,0)={\\bf w}_{0}={\\bf u}_{0}-z\\eta_{1}\\vec{e}_{2}&\\quad \\mbox{in} \\quad \\Omega,\n\t\t\t\t \\vspace{1.mm}\\\\\n\t\t\t\t &\\sigma(\\cdot,0)=\\sigma_{0}={\\rho_{0}}-{\\overline\\rho}&\\quad\\mbox{in}\\quad \\Omega,\n\t\t\t\t \\vspace{1.mm}\\\\\n\t\t\t\t &\\eta_{tt}-\\beta \\eta_{xx}- \\delta\\eta_{txx}+\\alpha\\eta_{xxxx}=G_{3}(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta}) &\\quad\\mbox{on}\\quad \\Sigma^{s}_{T},\n\t\t\t\t \\vspace{1.mm}\\\\\n\t\t\t\t &\\eta(0)=0\\quad \\mbox{and}\\quad \\eta_{t}(0)=\\eta_{1}&\\quad\\mbox{in}\\quad \\Gamma_{s},\n\t\t\t\t \\end{array} \\right.\n\t\t\t\t \\end{equation}\n\t\t\t\t where $G_{1},$ $G_{2},$ $G_{3}$ are as defined in \\eqref{1.22} and $\\widetilde{W}(\\widetilde{\\bf w},\\widetilde{\\eta})$ is defined as follows\n\t\t\t\t \\begin{equation}\\label{dotW}\n\t\t\t\t \\begin{array}{l}\n\t\t\t\t \\displaystyle\n\t\t\t\t \\widetilde{W}(\\widetilde{\\bf w},\\widetilde{\\eta})=\\begin{bmatrix}\n\t\t\t\t \\widetilde{w}_{1}\\\\\n\t\t\t\t \\frac{1}{(1+\\widetilde{\\eta})}(\\widetilde{w}_{2}-\\widetilde{w}_{1}z\\widetilde{\\eta}_{x})\n\t\t\t\t \\end{bmatrix},\\qquad \\left(\\widetilde{\\bf w}=\\begin{pmatrix}\n\t\t\t\t \\widetilde{w}_{1}\\\\\n\t\t\t\t \\widetilde{w}_{2}\n\t\t\t\t \\end{pmatrix}\\right).\n\t\t\t\t \\end{array}\n\t\t\t\t \\end{equation}\n\t\t\t\t \n\t\t\t\t It turns out that it will be important for us to check that $G_{2}(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})$ and $ G_{3}(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})$ respectively coincide at time $t = 0$ with the values $G_2^0$ and $G_3^0$ computed in \\eqref{iG0} and \\eqref{G30}, and given as follows:\n\t\t\t\t \\begin{equation}\\label{iG0-bis}\n\\begin{split}\nG_{2}^0&=-P'(\\rho_{0})\\nabla\\rho_{0}-(\\delta\\eta_{1,xx}-(\\mu+2\\mu')({ u}_{0})_{2,z}\n+P(\\rho_{0}))z\\rho_{0}\\vec{e}_{2}\n+z\\rho_{0}({\\bf u}_{0})_{z}\\eta_{1}\\\\\n&-\\rho_{0}({\\bf 
u}_{0}\\cdot\\nabla){\\bf u}_{0}-(-\\mu\\Delta-(\\mu+\\mu{'})\\nabla\\mathrm{div})(z\\eta_{1}\\vec{e_{2}}),\n\\end{split}\n\\end{equation}\n\\begin{equation}\\label{G30-bis}\n\\begin{array}{l}\nG_{3}^0=-(\\mu+2\\mu')({u}_{0})_{2,z}+P(\\rho_{0}).\n\\end{array}\n\\end{equation} \n\t\t\t\t This will be imposed by assuming $(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta},\\widetilde{\\eta}_{t})(\\cdot,0)=(\\sigma_{0},{\\bf w}_{0},0,\\eta_{1})$ and \n\t\t\t\t \\begin{equation}\\label{tetatt0}\n\t\t\t\t \\begin{array}{l}\n\t\t\t\t\\displaystyle \\widetilde{\\eta}_{tt}(\\cdot,0)=\\delta\\eta_{1,xx}-(\\mu+2\\mu')(u_{0})_{2,z}+P(\\rho_{0})\\,\\,\\mbox{in}\\,\\,\\Gamma_{s}, \n\t\t\t\t \\\\\n\t \t\t \t\\displaystyle \\widetilde{\\bf w}_{t}(\\cdot,0)=\\frac{1}{\\rho_{0}}\\big(G_{2}^0-(-\\mu \\Delta -(\\mu+\\mu{'})\\nabla\\mathrm{div})({\\bf u}_{0}-z\\eta_{1}\\vec{e_{2}})\\big)\\,\\,\\mbox{in}\\,\\,\\Omega.\n\t\t\t\t \\end{array}\n\t\t\t\t \\end{equation}\n\t\t\t\tIndeed, under the above conditions, one can check from the expressions of $G_{2}(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})$ and $G_{3}(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})$ that $G_{2}(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})\\mid_{t=0} = G_2^0$ and $G_{3}(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})\\mid_{t=0} = G_3^0$.\n\t\t\t\t \\begin{lem}\\label{wellpos}\n\t\t\t\t\tLet the constant $\\delta_{0}$ be fixed by \\eqref{fixd0}.\n\t\t\t\t \tFor $T < \\overline{T}$, let us assume the following \n\t\t\t\t \t\\begin{equation}\\label{tsweY}\n\t\t\t\t \t\\begin{array}{l}\n\t\t\t\t \t(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})\\,\\,\\mbox{satisfies}\\,\\,\\eqref{regswe},\n\t\t\t\t \t\\end{array}\n\t\t\t\t \t\\end{equation}\n\t\t\t\t \t\\begin{equation}\\label{tweq0*}\n\t\t\t\t \t\\begin{array}{l}\n\t\t\t\t \t\\widetilde{\\bf w}=0\\,\\,\\mathrm{on}\\,\\,\\Sigma_{T},\n\t\t\t\t \t\\end{array}\n\t\t\t\t 
\t\\end{equation}\n\t\t\t\t \t\\begin{equation}\\label{tswe0*}\n\t\t\t\t \t\\begin{array}{l}\n\t\t\t\t \t(\\widetilde{\\sigma}(\\cdot,0),\\widetilde{\\bf w}(\\cdot,0),\\widetilde{\\eta}(\\cdot,0),\\widetilde{\\eta}_{t}(\\cdot,0))=(\\rho_{0}-\\overline{\\rho},{\\bf u}_{0}-z\\eta_{1}\\vec{e}_{2},0,\\eta_{1})\\,\\,\\mbox{in}\\,\\,\\Omega,\n\t\t\t\t \t\\end{array}\n\t\t\t\t \t\\end{equation}\n\t\t\t\t \t\\begin{equation}\\label{etatt0*}\n\t\t\t\t \t\\begin{array}{l}\n\t\t\t\t \t\\eqref{tetatt0}\\,\\,\\mbox{holds},\n\t\t\t\t \t\\end{array}\n\t\t\t\t \t\\end{equation}\n\t\t\t\t \t\\begin{equation}\\label{1etad0*}\n\t\t\t\t \t\\begin{array}{l}\n\t\t\t\t \t1+\\widetilde{\\eta}(x,t)\\geqslant\\delta_{0}>0\\,\\,\\mathrm{on}\\,\\,\\Sigma^{s}_{T},\n\t\t\t\t \t\\end{array}\n\t\t\t\t \t\\end{equation}\n\t\t\t\t \t\\begin{equation}\\label{tsgmM*}\n\t\t\t\t \t\\begin{array}{l}\n\t\t\t\t \t\\displaystyle 0<\\frac{m}{2}\\leqslant\\widetilde{\\sigma}+\\overline{\\rho}\\leqslant 2M\\,\\,\\mathrm{in}\\,\\, Q_{T},\n\t\t\t\t \t\\end{array}\n\t\t\t\t \t\\end{equation}\n\t\t\t\t \twhere $m$ and $M$ were fixed in \\eqref{cor0}.\\\\\n\t\t\t\t Then $G_{1}(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta}),$ $G_{2}(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})$ and $G_{3}(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})$ satisfy the following\n\t\t\t\t \\begin{equation}\\label{G123}\n\t\t\t\t \\begin{array}{ll}\n\t\t\t\t & G_{1}(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})\\in L^{1}(0,T;H^{2}(\\Omega))\\cap L^{\\infty}(0,T;H^{1}(\\Omega)),\\\\\n\t\t\t\t & G_{2}(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})\\in L^{2}(0,T;{\\bf H}^{1}(\\Omega))\\cap H^{1}(0,T;{\\bf L}^{2}(\\Omega))\\cap L^{\\infty}(0,T;{\\bf L}^{2}(\\Omega)),\\\\\n\t\t\t\t & G_{3}(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})\\in L^{\\infty}(0,T;H^{1\/2}(\\Gamma_{s}))\\cap H^{1}(0,T;L^{2}(\\Gamma_{s})),\\\\\n\t\t\t\t & 
\\widetilde{W}(\\widetilde{\\bf w},\\widetilde{\\eta})\\in L^{1}(0,T;{\\bf H}^{3}(\\Omega))\\cap L^{\\infty}(0,T;{\\bf H}^{2}(\\Omega)), \\\\\n\t\t\t\t& G_{2}(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})\\mid_{t=0} = G_2^0 \\text{ and } G_{3}(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})\\mid_{t=0} = G_3^0.\n\t\t\t\t \\end{array}\n\t\t\t\t \\end{equation}\n\t\t\t\t \\end{lem}\n\t\t\t\t \\begin{proof}\n\t\t\t\t \tThe detailed computations to verify \\eqref{G123} follow from Lemma \\ref{eog1} (for estimates of $G_{1}$), Lemma \\ref{eog2} (for estimates of $G_{2}$), Lemma \\ref{eog3} (for estimates of $G_{3}$) and Lemma \\ref{eoWs} (for estimates of $\\widetilde{W}$) in Section \\ref{estimates}.\n\t\t\t\t \\end{proof}\n\t\t\t\t Observe that the condition \\eqref{tweq0*} implies that $\\widetilde{W}(\\widetilde{\\bf w},\\widetilde{\\eta})\\cdot {\\bf n}=0$ (where $\\widetilde{W}$ is as defined in \\eqref{dotW}) on $\\Sigma_{T}.$ Hence in view of Lemma \\ref{wellpos}, for all $(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})$ satisfying the conditions \\eqref{tsweY}-\\eqref{tweq0*}-\\eqref{tswe0*}-\\eqref{etatt0*}-\\eqref{1etad0*}\n\t\t\t\t -\\eqref{tsgmM*}, the system \\eqref{3.2} admits a unique solution as a consequence of Theorem \\ref{t21}, Theorem \\ref{t22} and Theorem \\ref{t23} in the space $Z^{T}_{1}\\times Y^{T}_{2}\\times Z^{T}_{3},$ where $Y^{T}_{2}$ is defined in \\eqref{dofYi}, and $Z^{T}_{1}$ and $Z^{T}_{3}$ are defined as follows\n\t\t\t\t \\begin{equation}\\label{Z23}\n\t\t\t\t \\begin{array}{ll}\n\t\t\t\t & Z^{T}_{1}=\\{{\\rho}\\in C^{0}([0,T];H^{2}(\\Omega))\\;\\ifnum\\currentgrouptype=16 \\middle\\fi|\\; \\rho_{t}\\in L^{\\infty}(0,T; H^{1}(\\Omega))\\},\\\\\n\t\t\t\t & Z^{T}_{3}=\\{\\eta\\in L^{\\infty}(0,T; H^{9\/2}(\\Gamma_{s})),\\,\\eta(x,0)=0\\;\\ifnum\\currentgrouptype=16 \\middle\\fi|\\; \\eta_{t}\\in L^{2}(0,T;{ H}^{4}(\\Gamma_{s}))\\cap C^{0}([0,T];{ H}^{3}(\\Gamma_{s})), \\\\[1.mm]\n\t\t\t\t 
&\\qquad\\qquad\\qquad\\qquad\\qquad\\eta_{tt}\\in L^{2}(0,T;{ H}^{2}(\\Gamma_{s}))\\cap C^{0}([0,T];{ H}^{1}(\\Gamma_{s})),\n\t\t\t\t \\eta_{ttt}\\in L^{2}(0,T;{ L}^{2}(\\Gamma_{s}))\\}.\n\t\t\t\t \\end{array}\n\t\t\t\t \\end{equation}\n\t\t\t\t Observe that the only difference between $Y^{T}_{1}$ (defined in \\eqref{dofYi}) and $Z^{T}_{1}$ is that the elements of $Y^{T}_{1}$ belong to $C^{1}([0,T];H^{1}(\\Omega))$ while the elements of $Z^{T}_{1}$ are in $W^{1,\\infty}(0,T;H^{1}(\\Omega)).$ Also one observes that the elements of $Y^{T}_{3}$ (defined in \\eqref{dofYi}) are in $C^{0}([0,T];H^{9\/2}(\\Gamma_{s}))$ while $Z^{T}_{3}$ is only a subset of $L^{\\infty}(0,T;H^{9\/2}(\\Gamma_{s})).$\\\\\n\t\t\t\t Before defining a suitable fixed point map (in order to solve the non-linear problem \\eqref{chdb}), we will introduce a convex set $\\mathscr{C}_{T}$ (in which we will show the existence of a fixed point). The set $\\mathscr{C}_{T}$ will be defined as a subset of $L^{2}(0,T;L^{2}(\\Omega))\\times L^{2}(0,T;{\\bf L}^{2}(\\Omega))\\times L^{2}(0,T;L^{2}(\\Gamma_{s}))$ such that the elements of $\\mathscr{C}_{T}$ satisfy some norm bounds and some conditions at initial time $t=0.$\\\\\n\t\t\t\t Let us make precise the assumptions which will be used to define the set $\\mathscr{C}_{T}.$\\\\ \n\t\t\t\t $Regularity\\,\\,assumptions\\,\\,and\\,\\,norm\\,\\,bounds\\,\\,of\\,\\, (\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})$:\n\t\t\t\t\\begin{subequations}\\label{normbnd}\n\t\t\t\t \t\\begin{equation}\\label{tsB12}\n\t\t\t\t \t\\begin{array}{l}\n\t\t\t\t \t\\|\\widetilde{\\sigma}\\|_{L^{\\infty}(0,T;H^{2}(\\Omega))}\\leqslant B_{1},\\quad\\|\\widetilde\\sigma_{t}\\|_{L^{\\infty}(0,T; H^{1}(\\Omega))}\\leqslant B_{2},\n\t\t\t\t \t\\end{array}\n\t\t\t\t \t\\end{equation}\n\t\t\t\t \t\\begin{equation}\\label{twB3}\n\t\t\t\t \t\\begin{split}\n\t\t\t\t \t\\|\\widetilde{\\bf w}\\|_{L^{\\infty}(0,T;{\\bf H}^{2}(\\Omega))}+\\|\\widetilde{\\bf w}\\|_{{L^{2}}(0,T;{\\bf 
H}^{3}(\\Omega))}\n\t\t\t\t \t+\\|\\widetilde{\\bf w}_{t}\\|_{L^{\\infty}(0,T;{\\bf H}^{1}(\\Omega))}\n\t\t\t\t \t&+\\|\\widetilde{\\bf w}_{t}\\|_{L^{2}(0,T;{\\bf H}^{2}(\\Omega))}\\\\[1.mm]\n\t\t\t\t \t&+\\|\\widetilde{\\bf w}_{tt}\\|_{L^{2}(0,T;{\\bf L}^{2}(\\Omega))}\\leqslant B_{3},\n\t\t\t\t \t\\end{split}\n\t\t\t\t \t\\end{equation}\n\t\t\t\t \t\\begin{equation}\\label{teB4}\n\t\t\t\t \t\\begin{split}\n\t\t\t\t \t\\|\\widetilde\\eta\\|_{L^{\\infty}(0,T;H^{9\/2}(\\Gamma_{s}))}+\t\\|\\widetilde\\eta_{t}\\|_{L^{\\infty}(0,T;H^{3}(\\Gamma_{s}))}&+\t\\|\\widetilde\\eta_{t}\\|_{L^{2}(0,T;H^{4}(\\Gamma_{s}))}+\t\\|\\widetilde\\eta_{tt}\\|_{L^{\\infty}(0,T; H^{1}(\\Gamma_{s}))}\\\\[1.mm]\n\t\t\t\t \t&+\t\\|\\widetilde\\eta_{tt}\\|_{L^{2}(0,T;H^{2}(\\Gamma_{s}))}\n\t\t\t\t \t+\\|\\widetilde\\eta_{ttt}\\|_{L^{2}(0,T;L^{2}(\\Gamma_{s}))}\\leqslant B_{4},\n\t\t\t\t \t\\end{split}\n\t\t\t\t \t\\end{equation} \n\t\t\t\t \t\\vspace{1.mm}\\\\\n\t\t\t\t \t\\begin{equation}\\label{1etad0}\n\t\t\t\t \t\\begin{split}\n\t\t\t\t \t1+\\widetilde{\\eta}(x,t)\\geqslant\\delta_{0}>0\\,\\,\\mathrm{on}\\,\\,\\Sigma^{s}_{T},\n\t\t\t\t \t\\end{split}\n\t\t\t\t \t\\end{equation}\n\t\t\t\t \t\\begin{equation}\\label{tsgmM}\n\t\t\t\t \t\\begin{split}\n\t\t\t\t \t0<\\frac{m}{2}\\leqslant\\widetilde{\\sigma}+\\overline{\\rho}\\leqslant 2M\\,\\,\\mathrm{in}\\,\\, Q_{T},\n\t\t\t\t \t\\end{split}\n\t\t\t\t \t\\end{equation}\n\t\t\t\t \\end{subequations} \n\t\t\t\t where $B_{i}$'s ($1\\leqslant i\\leqslant 4$) are positive constants and will be chosen in the sequel. 
The norm bound \\eqref{twB3} implicitly asserts (by interpolation) that $\\widetilde{\\bf w}$ is in $C^{0}([0,T];{\\bf H}^{5\/2}(\\Omega)).$\\\\\n\t\t\t\t $Assumptions\\,\\,on\\,\\,initial\\,\\,and\\,\\,boundary\\,\\,conditions$:\\\\\n\t\t\t\t \\begin{subequations}\\label{inbndc}\n\t\t\t\t \t\\begin{equation}\\label{tweq0}\n\t\t\t\t \t\\begin{split}\n\t\t\t\t \t\\widetilde{\\bf w}=0\\,\\,\\mbox{on}\\,\\,\\Sigma_{T},\n\t\t\t\t \t\\end{split}\n\t\t\t\t \t\\end{equation}\n\t\t\t\t \t\\begin{equation}\\label{tswe0}\n\t\t\t\t \t\\begin{split}\n\t\t\t\t \t(\\widetilde{\\sigma}(\\cdot,0),\\widetilde{\\bf w}(\\cdot,0),\\widetilde{\\eta}(\\cdot,0),\\widetilde{\\eta}_{t}(\\cdot,0))=(\\rho_{0}-\\overline{\\rho},{\\bf u}_{0}-z\\eta_{1}\\vec{e}_{2},0,\\eta_{1})\\,\\,\\mbox{in}\\,\\,\\Omega,\n\t\t\t\t \t\\end{split}\n\t\t\t\t \t\\end{equation}\n\t\t\t\t \t\\begin{equation}\\label{etatt0}\n\t\t\t\t \t\\begin{split}\n\t\t\t\t \t\\widetilde{\\eta}_{tt}(\\cdot,0)=\\delta\\eta_{1,xx}-(\\mu+2\\mu')(u_{0})_{2,z}+P(\\rho_{0})\\,\\,\\mbox{in}\\,\\,\\Gamma_{s},\n\t\t\t\t \t\\end{split}\n\t\t\t\t \t\\end{equation}\n\t\t\t\t \t\\begin{equation}\\label{twt0}\n\t\t\t\t \t\\begin{split}\n\t\t\t\t \t\\widetilde{\\bf w}_{t}(\\cdot,0)=\\frac{1}{\\rho_{0}}\\big(G_{2}^0-(-\\mu \\Delta -(\\mu+\\mu{'})\\nabla\\mathrm{div})({\\bf u}_{0}-z\\eta_{1}\\vec{e_{2}})\\big)\\,\\,\\mbox{in}\\,\\,\\Omega.\n\t\t\t\t \t\\end{split}\n\t\t\t\t \t\\end{equation}\n\t\t\t\t \\end{subequations}\n\t\t\t\t For $T<\\overline{T},$ let us define the following set\n\t\t\t\t \\begin{equation}\\label{defRT}\n\t\t\t\t \\begin{split}\n\t\t\t\t \\mathscr{C}_{T}(B_{1},B_{2},B_{3},B_{4})= \\{(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})&\\in L^{2}(0,T;L^{2}(\\Omega))\\times L^{2}(0,T;{\\bf L}^{2}(\\Omega))\\times L^{2}(0,T;L^{2}(\\Gamma_{s}))\\;\\ifnum\\currentgrouptype=16 \\middle\\fi|\\;\\\\\n\t\t\t\t & \\mbox{the relations}\\, \\eqref{normbnd}-\\eqref{inbndc}\\,\\,\\mbox{are true}\\}.\n\t\t\t\t \\end{split}\n\t\t\t\t 
\\end{equation} \n\t\t\t\t Now for $(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})\\in\\mathscr{C}_{T}(B_{1},B_{2},B_{3},B_{4}),$ let $(\\sigma,{\\bf w},\\eta)\\in Z_{1}^{T}\\times Y_{2}^{T}\\times Z_{3}^{T}$ (recall the definition of $Y^{T}_{2},$ from \\eqref{dofYi} and $Z^{T}_{1},$ $Z^{T}_{3}$ are defined in \\eqref{Z23}) be the solution of the problem \\eqref{3.2} corresponding to $(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta}).$ This defines the map\n\t\t\t\t \\begin{equation}\\label{welposL}\n\t\t\t\t \\begin{matrix}\n\t\t\t\t L: \\mathscr{C}_{T}(B_{1},B_{2},B_{3},B_{4})&\\longrightarrow& Z_{1}^{T}\\times Y_{2}^{T}\\times Z_{3}^{T}\\\\\n\t\t\t\t (\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})&\\mapsto& ({\\sigma},{\\bf w},{\\eta}).\n\t\t\t\t \\end{matrix}\n\t\t\t\t \\end{equation}\n\t\t\t\t Now observe that if the map $L$ admits a fixed point $(\\sigma_{f},{\\bf w}_{f},\\eta_{f})$ on the set $\\mathscr{C}_{T}(B_{1},B_{2},B_{3},B_{4}),$ then the triplet $(\\sigma_{f},{\\bf w}_{f},\\eta_{f})$ is a solution to the system \\eqref{chdb}. 
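As a schematic recap, the strategy carried out in the remainder of this section (a summary of the steps established below, not a new statement) is the classical Schauder scheme:

```latex
% Scheme of the fixed-point argument for the map L of \eqref{welposL}:
%  (1) \mathscr{C}_{T}(B_{1},B_{2},B_{3},B_{4}) is non-empty for suitable B_{i} and T;
%  (2) \mathscr{C}_{T}(B_{1},B_{2},B_{3},B_{4}) is a compact subset of the ambient space;
%  (3) L maps \mathscr{C}_{T}(B_{1},B_{2},B_{3},B_{4}) into itself and is continuous there.
% Schauder's fixed point theorem then yields a triplet satisfying
\begin{equation}\nonumber
L(\sigma_{f},{\bf w}_{f},\eta_{f})=(\sigma_{f},{\bf w}_{f},\eta_{f})\quad\mbox{in}\quad Z_{1}^{T}\times Y_{2}^{T}\times Z_{3}^{T},
\end{equation}
% which, as observed above, solves \eqref{chdb}.
```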
\n\t\t\t\t Thus, our goal from now on is to prove the existence of a fixed point of the map $L.$ In that direction we first show that for suitable parameters $B_{i}$ ($1\\leqslant i\\leqslant 4$) and $T,$ the set $\\mathscr{C}_{T}(B_{1},B_{2},B_{3},B_{4})$ is non-empty.\n\t\t\t\t \\begin{lem}\\label{nonempty}\n\t\t\t\t \tThere exists a constant $B_{0}^{*}>0$ such that for all $B_{i}\\geqslant B^{*}_{0}$ ($1\\leqslant i \\leqslant 4$) there exists $T^{*}_{0}(B_{1},B_{2},B_{3},B_{4} )\\in (0, \\min\\{1,\\overline{T}\\})$ such that for all $0<T\\leqslant T^{*}_{0},$ the set $\\mathscr{C}_{T}(B_{1},B_{2},B_{3},B_{4})$ is non-empty.\n\t\t\t\t \\end{lem}\n\t\t\t\t \\begin{proof}\n\t\t\t\t \t$Step\\, 1.$\\, One first constructs ${\\bf w}^{*}$ with ${\\bf w}^{*}(\\cdot,0)={\\bf u}_{0}-z\\eta_{1}\\vec{e}_{2}$ and ${\\bf w}^{*}=0$ on $\\Sigma_{T},$ which satisfies \\eqref{twB3} provided $B^{*}_{0}$ is chosen large enough.\\\\\n\t\t\t\t \t$Step\\, 2.$\\, Similarly one constructs $\\eta^{*}$ with $\\eta^{*}(\\cdot,0)=0$ and $\\eta^{*}_{t}(\\cdot,0)=\\eta_{1},$ which satisfies \\eqref{teB4}. Since $\\eta^{*}(\\cdot,0)=0,$ possibly after reducing $T^{*}_{0}$ we have $$1+\\eta^{*}>0\\quad\\mbox{on}\\quad \\Sigma^{s}_{T},$$\n\t\t\t\t \ti.e.\\ $\\eta^{*}$ satisfies \\eqref{1etad0}.\\\\\n\t\t\t\t \t$Step\\, 3.$\\, One easily checks that $\\sigma^{*}={\\rho}_{0}-\\overline{\\rho}$ verifies \\eqref{tsB12} and \\eqref{tsgmM}.\\\\\n\t\t\t\t \tWe further observe that $(\\sigma^{*},{\\bf w}^{*},\\eta^{*})$ satisfies \\eqref{tweq0} and \\eqref{tswe0} automatically by construction.\\\\ \n\t\t\t\t \t So we have shown that if we choose $B^{*}_{0}$ (and hence $B_{i}\\geqslant B^{*}_{0},$ for all $1\\leqslant i\\leqslant 4$) as in \\eqref{coBi} and $0<T\\leqslant T^{*}_{0},$ then $(\\sigma^{*},{\\bf w}^{*},\\eta^{*})\\in \\mathscr{C}_{T}(B_{1},B_{2},B_{3},B_{4}),$ and the lemma is proved.\n\t\t\t\t \\end{proof}\n\t\t\t\tWe shall repeatedly use the following product estimate.\n\t\t\t\t\\begin{lem}\\label{lfpss}\n\t\t\t\t\tLet $\\Omega_{0}$ be a domain in $\\mathbb{R}^{d}$ ($d$ is either $1$ or $2$) and let $r>\\frac{d}{2},$ $0\\leqslant s\\leqslant r.$ If $v\\in H^{r}(\\Omega_{0})$ and $w\\in H^{s}(\\Omega_{0})$ then $vw\\in H^{s}(\\Omega_{0})$ with\n\t\t\t\t\t$$\\|vw\\|_{H^{s}(\\Omega_{0})}\\leqslant K(\\Omega_{0})\\|v\\|_{H^{r}(\\Omega_{0})}\\|w\\|_{H^{s}(\\Omega_{0})}.$$ \n\t\t\t\t\tSimilar estimates hold when $v$ and $w$ are vector valued functions, i.e.\\ for ${\\bf v}\\in {\\bf H}^{r}(\\Omega_{0})$ and ${\\bf w}\\in {\\bf H}^{s}(\\Omega_{0}).$\n\t\t\t\t\\end{lem}\n\t\t\t\t\\begin{lem}\\label{leoin}\n\t\t\t\t\tLet $T<\\overline{T}$ (recall that we have fixed $\\overline{T}$ in \\eqref{Tbar}). 
We assume that ${\\bf f}\\in {\\bf H}^{2,1}_{\\Sigma_{T}}(Q_{T}).$ As usual we use the notation ${\\bf f}_{z}$ to denote the directional derivative $\\partial_{z}{\\bf f}$ of ${\\bf f}$ with respect to $z.$ Also suppose that $\\Gamma_{s}$ is a smooth subset of $\\Gamma.$ Then the trace ${\\bf f}_{z}\\mid_{\\Sigma_{T}}$ on $\\Gamma_{s}$ (i.e the normal derivative of ${\\bf f}$ on $\\Gamma_{s}$) belongs to $H^{1\/6}(0,T;{\\bf L}^{2}(\\Gamma_{s})).$ In particular there exists a constant $K>0$ such that for all ${\\bf f}\\in {\\bf H}^{2,1}_{\\Sigma_{T}}(Q_{T})$ we have the following\n\t\t\t\t\t\\begin{equation}\\label{ibin}\n\t\t\t\t\t\t\\begin{array}{l}\n\t\t\t\t\t\t\t\\|{\\bf f}_{z}\\mid_{\\Sigma_{T}}\\|_{L^{2}(0,T;{\\bf L}^{2}(\\Gamma_{s}))}\\leqslant T^{1\/6}K(\\|{\\bf f}(0)\\|_{{\\bf H}^{1}_{0}(\\Omega)}+\\|{\\bf f}\\|_{{\\bf H}^{2,1}_{\\Sigma_{T}}(Q_{T})}),\n\t\t\t\t\t\t\\end{array}\n\t\t\t\t\t\\end{equation}\n\t\t\t\t\twhere ${\\bf f}(0)$ denotes the function ${\\bf f}$ at time $t=0.$ We specify that in our case the space ${\\bf H}^{2,1}_{\\Sigma_{T}}(Q_{T})$ is endowed with the following norm\n\t\t\t\t\t$$\\|{\\bf f}\\|_{{\\bf H}^{2,1}_{\\Sigma_{T}}(Q_{T})}=\\|{\\bf f}\\|_{L^{2}(0,T;{\\bf H}^{2}(\\Omega))}+\\|{\\bf f}_{t}\\|_{L^{2}(0,T;{\\bf L}^{2}(\\Omega))}.$$\n\t\t\t\t\\end{lem}\n\t\t\t\t\\begin{remark}\n\t\t\t\t\tThe appearance of ${\\bf f}(0)$ in the inequality \\eqref{ibin} might seem redundant since for all, ${\\bf f}\\in {\\bf H}^{2,1}_{\\Sigma_{T}}(Q_{T})$\n\t\t\t\t\t$$\\|{\\bf f}(0)\\|_{{\\bf H}^{1}_{0}(\\Omega)}\\leqslant K_{T}\\|{\\bf f}\\|_{{\\bf H}^{2,1}_{\\Sigma_{T}}(Q_{T})}.$$ \n\t\t\t\t\tBut the constant $K_{T}$ there may depend on $T$ while the constant $K$ in \\eqref{ibin} is independent of $T.$ This is the reason why we prefer working with \\eqref{ibin}.\n\t\t\t\t\\end{remark}\n\t\t\t\t\\begin{proof}[Proof of Lemma \\ref{leoin}]\n\t\t\t\t\tWe have to estimate $\\|{\\bf 
f}_{z}\\mid_{\\Sigma_{T}}\\|_{L^{2}(0,T;L^{2}(\\Gamma_\n\t\t\t\t\t\t{s}))}.$ Using H\\\"{o}lder's inequality we get the following\n\t\t\t\t\t\\begin{equation}\\label{il32}\n\t\t\t\t\t\\begin{array}{l}\n\t\t\t\t\t\\displaystyle\n\t\t\t\t\t\\left(\\int\\limits_{0}^{T}\\|{\\bf f}_{z}\\mid_{\\Sigma_{T}}\\|^{2}_{L^{2}(\\Gamma_{s})}\\right)^{1\/2}\\leqslant \\left(\\int\\limits_{0}^{{T}}\\|{\\bf f}_{z}\\mid_{\\Sigma_{T}}\\|^{3}_{L^{2}(\\Gamma_{s})}\\right)^{1\/3}T^{1\/6}\\leqslant K(\\Omega)\\left(\\int\\limits_{0}^{{T}}\\|{\\bf f}\\|^{3}_{{\\bf H}^{5\/3}{(\\Omega)}}\\right)^{1\/3}T^{1\/6}.\n\t\t\t\t\t\\end{array}\n\t\t\t\t\t\\end{equation}\n\t\t\t\t\tTo prove \\eqref{ibin}, in view of \\eqref{il32} it is enough to show the following inequality\n\t\t\t\t\t\t\\begin{equation}\\label{triain3}\n\t\t\t\t\t\t\\begin{array}{l}\n\t\t\t\t\t\t\\|{\\bf f}\\|_{L^{3}(0,T;{\\bf H}^{5\/3}(\\Omega))}\\leqslant K(\\Omega,\\overline{T})(\\|{\\bf f}\\|_{{\\bf H}^{2,1}_{\\Sigma_{T}}(Q_{T})}+\\|{\\bf f}(0)\\|_{{\\bf H}^{1}_{0}(\\Omega)}).\n\t\t\t\t\t\t\\end{array}\n\t\t\t\t\t\t\\end{equation}\n\t\t\t\t\tIn order to prove \\eqref{triain3}, first let us consider the solution ${\\bf{f}}^{*}$ of\n\t\t\t\t\t\\begin{equation}\\label{ellstf}\n\t\t\t\t\t\t\\left\\{ \\begin{array}{lll}\n\t\t\t\t\t\t\t{\\bf f}^{*}_{t}-\\Delta{\\bf f}^{*}=0&\\quad\\mbox{in}\\quad& Q_{T},\\\\[1.mm]\n\t\t\t\t\t\t\t{\\bf f}^{*}=0&\\quad\\mbox{on}\\quad &\\Sigma_{T},\\\\[1.mm]\n\t\t\t\t\t\t\t{\\bf f}^{*}(.,0)={\\bf f}(0)&\\quad\\mbox{in}\\quad&\\Omega.\n\t\t\t\t\t\t\\end{array}\\right.\n\t\t\t\t\t\\end{equation}\n\t\t\t\t\tAs ${\\bf f}(0)\\in {\\bf H}^{1}_{0}(\\Omega),$ ${\\bf f}^{*}\\in {\\bf H}^{2,1}_{\\Sigma_{T}}(Q_{T}).$ It is also well known that there exists a constant $K(\\Omega)$ such that ${\\bf f}^{*}$ satisfies the following inequalities 
\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\t\n\t\t\t\t\t\n\t\t\t\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\t\n\t\t\t\t\t\n\t\t\t\t\n\t\t\t\n\t\t\t\t\n\t\t\t\t\t\\begin{equation}\\label{inell}\n\t\t\t\t\t\t\\begin{array}{ll}\n\t\t\t\t\t\t\t(i)&\\|{\\bf f}^{*}\\|_{{\\bf H}^{2,1}_{\\Sigma_{T}}(Q_{T})}\\leqslant K(\\Omega)\\|{\\bf f}(0)\\|_{{\\bf H}^{1}_{0}(\\Omega)},\\\\[1.mm]\n\t\t\t\t\t\t\t(ii)& \\|{\\bf f}^{*}\\|_{L^{\\infty}(0,T;{\\bf H}^{1}_{0}(\\Omega))}+\\|{\\bf f}^{*}\\|_{L^{2}(0,T;{\\bf H}^{2}(\\Omega))}\\leqslant K(\\Omega)\\|{\\bf f}(0)\\|_{{\\bf H}^{1}_{0}(\\Omega)}.\n\t\t\t\t\t\t\\end{array}\n\t\t\t\t\t\\end{equation}\n\t\t\t\t\tNow we will estimate the norm of ${\\bf f}^{*}$ in $L^{3}(0,T;{\\bf H}^{5\/3}(\\Omega)).$\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\tUsing interpolation we have for $a.e$ $t$\n\t\t\t\t\t$$\\|{\\bf f}^{*}(t)\\|_{{\\bf H}^{5\/3}(\\Omega)}\\leqslant K(\\Omega)\\|{\\bf f}^{*}(t)\\|^{2\/3}_{{\\bf H}^{2}(\\Omega)}\\|{\\bf f}^{*}(t)\\|^{1\/3}_{{\\bf H}^{1}_{0}(\\Omega)}.$$\n\t\t\t\t\tFrom the last inequality one obtains the following\n\t\t\t\t\t\\begin{equation}\\label{iafin}\n\t\t\t\t\t\t\\begin{array}{l}\n\t\t\t\t\t\t\\displaystyle\n\t\t\t\t\t\t\t\\|{\\bf f}^{*}\\|_{L^{3}(0,T;{\\bf H}^{5\/3}(\\Omega))}=\\left(\\int\\limits_{0}^{T}\\|{\\bf f}^{*}(t)\\|^{3}_{{\\bf H}^{5\/3}(\\Omega)}\\right)^{1\/3}\\leqslant K(\\Omega)\\|{\\bf f}^{*}\\|^{1\/3}_{L^{\\infty}(0,T;{\\bf H}^{1}_{0}(\\Omega))}\\|{\\bf f}^{*}\\|^{2\/3}_{L^{2}(0,T;{\\bf H}^{2}(\\Omega))}.\n\t\t\t\t\t\t\\end{array}\n\t\t\t\t\t\\end{equation}\n\t\t\t\t\tHence using inequality $(ii)$ of \\eqref{inell} in \\eqref{iafin} we obtain\n\t\t\t\t\t\\begin{equation}\\label{iafin1}\n\t\t\t\t\t\t\\begin{array}{l}\n\t\t\t\t\t\t\t\\|{\\bf f}^{*}\\|_{L^{3}(0,T;{\\bf H}^{5\/3}(\\Omega))}\\leqslant K(\\Omega)\\|{\\bf f}(0)\\|_{{\\bf H}^{1}_{0}(\\Omega)}.\n\t\t\t\t\t\t\\end{array}\n\t\t\t\t\t\\end{equation}\n\t\t\t\t\tNow let us observe that $({\\bf f}-{\\bf 
f}^{*})(0)=0.$ Extend the function $({\\bf f}-{\\bf f}^{*})$ by defining it to be zero on the time interval $(T-\\overline{T},0)$ (the extended function is also denoted by $({\\bf f}-{\\bf f}^{*})$). In what follows we will use the notation \n\t\t\t\t\t\\begin{equation}\\nonumber\n\t\t\t\t\t\\begin{array}{l}\n\t\t\t\t\t{Q_{T-\\overline{T},T}}=\\Omega\\times(T-\\overline{T},T).\n\t\t\t\t\t\\end{array}\n\t\t\t\t\t\\end{equation}\n\t\t\t\t\tWe also introduce the space ${\\bf H}^{2,1}_{\\Sigma_{T}}({Q_{T-\\overline{T},T}})$ which is defined as in \\eqref{fntlsp} with $Q_{T}$ replaced by ${Q_{T-\\overline{T},T}}.$\\\\\n\t\t\t\t\tOne can check that the extended function $({\\bf f}-{\\bf f}^{*})\\in {\\bf H}^{2,1}_{\\Sigma_{T}}({Q_{T-\\overline{T},T}})$ and\n\t\t\t\t\t\\begin{equation}\\label{noeq}\n\t\t\t\t\t\t\\begin{array}{l}\n\t\t\t\t\t\t\t\\|({\\bf f}-{\\bf f}^{*})\\|_{{\\bf H}^{2,1}_{\\Sigma_{T}}({Q_{T-\\overline{T},T}})}=\\|({\\bf f}-{\\bf f}^{*})\\|_{{\\bf H}^{2,1}_{\\Sigma_{T}}(Q_{T})}.\n\t\t\t\t\t\t\\end{array}\n\t\t\t\t\t\\end{equation}\n\t\t\t\t\tAgain due to the embedding ${\\bf H}^{2,1}_{\\Sigma_{T}}({Q_{T-\\overline{T},T}})\\hookrightarrow H^{1\/6}(T-\\overline{T},{T};{\\bf H}^{5\/3}(\\Omega))$ we have the following \n\t\t\t\t\t\\begin{equation}\\label{il321}\n\t\t\t\t\t\t\\begin{array}{l}\n\t\t\t\t\t\t\t\\|{\\bf f}-{\\bf f}^{*}\\|_{{H}^{1\/6}(T-\\overline{T},T;{\\bf H}^{5\/3}(\\Omega))}\\leqslant K(\\overline{T},\\Omega)\\|{\\bf f}-{\\bf f}^{*}\\|_{{\\bf H}^{2,1}_{\\Sigma_{T}}({Q_{T-\\overline{T},T}})}.\n\t\t\t\t\t\t\\end{array}\n\t\t\t\t\t\\end{equation}\n\t\t\t\t\tSince $H^{1\/6}(T-\\overline{T},T)$ is continuously embedded into $L^{3}(T-\\overline{T},T),$ we deduce from \\eqref{il321} that\n\t\t\t\t\t\\begin{equation}\\label{il32ne}\n\t\t\t\t\t\t\\begin{array}{l}\n\t\t\t\t\t\t\t\\|{\\bf f}-{\\bf f}^{*}\\|_{{L}^{3}(T-\\overline{T},T;{\\bf H}^{5\/3}(\\Omega))}\\leqslant K(\\overline{T},\\Omega)\\|{\\bf f}-{\\bf f}^{*}\\|_{{\\bf 
H}^{2,1}_{\\Sigma_{T}}({Q_{T-\\overline{T},T}})}.\n\t\t\t\t\t\t\\end{array}\n\t\t\t\t\t\\end{equation}\n\t\t\t\t\tThe triangle inequality furnishes the following\n\t\t\t\t\t\\begin{equation}\\label{triain}\n\t\t\t\t\t\t\\begin{array}{l}\n\t\t\t\t\t\t\t\\|{\\bf f}\\|_{L^{3}(0,T;{\\bf H}^{5\/3}(\\Omega))}\\leqslant K(\\|{\\bf f}-{\\bf f}^{*}\\|_{L^{3}(0,T;{\\bf H}^{5\/3}(\\Omega))}+\\|{\\bf f}^{*}\\|_{L^{3}(0,T;{\\bf H}^{5\/3}(\\Omega))}).\n\t\t\t\t\t\t\\end{array}\n\t\t\t\t\t\\end{equation}\n\t\t\t\t\tIncorporate the inequalities \\eqref{iafin1} and \\eqref{il32ne} in \\eqref{triain} in order to obtain\n\t\t\t\t\t\\begin{equation}\\label{triain1}\n\t\t\t\t\t\t\\begin{array}{l}\n\t\t\t\t\t\t\t\\|{\\bf f}\\|_{L^{3}(0,T;{\\bf H}^{5\/3}(\\Omega))}\\leqslant K(\\Omega,\\overline{T})(\\|{\\bf f}-{\\bf f}^{*}\\|_{{\\bf H}^{2,1}_{\\Sigma_{T}}({Q_{T-\\overline{T},T}})}+\\|{\\bf f}(0)\\|_{{\\bf H}^{1}_{0}(\\Omega)}).\n\t\t\t\t\t\t\\end{array}\n\t\t\t\t\t\\end{equation}\n\t\t\t\t\tIn view of the equality \\eqref{noeq} we can obtain the following from \\eqref{triain1},\n\t\t\t\t\t\\begin{equation}\\label{triain2}\n\t\t\t\t\t\t\\begin{array}{l}\n\t\t\t\t\t\t\t\\|{\\bf f}\\|_{L^{3}(0,T;{\\bf H}^{5\/3}(\\Omega))}\\leqslant K(\\Omega,\\overline{T})(\\|{\\bf f}-{\\bf f}^{*}\\|_{{\\bf H}^{2,1}_{\\Sigma_{T}}(Q_{T})}+\\|{\\bf f}(0)\\|_{{\\bf H}^{1}_{0}(\\Omega)}).\n\t\t\t\t\t\t\\end{array}\n\t\t\t\t\t\\end{equation}\n\t\t\t\t\tUsing the triangle inequality and \\eqref{inell}$(i)$ once again, one proves \\eqref{triain3}.\\\\\n\t\t\t\t\tFinally use \\eqref{triain3} in \\eqref{il32} to show \\eqref{ibin}. This completes the proof.\n\t\t\t\t\\end{proof}\n\t\t\t\n\t\t\t\tThe following lemma is a simple consequence of the fundamental theorem of calculus, whose proof is left to the reader.\n\t\t\t\t\\begin{lem}\\label{futcl}\n\t\t\t\t\tFix $i\\geqslant 0$ and a domain $\\Omega_{0}$ in $\\mathbb{R}^{d}$ ($d$ is either $1$ or $2$). 
Then there exists a constant $K>0$ such that for all $\\psi\\in H^{1}(0,T;H^{i}(\\Omega_{0})),$ the following holds \n\t\t\t\t\t\\begin{equation}\\label{eoLit}\n\t\t\t\t\t\t\\begin{array}{l}\n\t\t\t\t\t\t\\displaystyle\n\t\t\t\t\t\t\t\\|\\psi\\|_{L^{\\infty}(0,T;H^{i}(\\Omega_{0}))}\\leqslant K(\\|\\psi(0)\\|_{H^{i}(\\Omega_{0})}+T^{1\/2}\\|\\psi_{t}\\|_{L^{2}(0,T;H^{i}(\\Omega_{0}))}),\n\t\t\t\t\t\t\\end{array}\n\t\t\t\t\t\\end{equation} \n\t\t\t\t\twhere $\\psi(0)$ denotes $\\psi$ at time $t=0.$ The inequality \\eqref{eoLit} is true even for a vector valued function ${\\Psi}\\in H^{1}(0,T;{\\bf H}^{i}(\\Omega_{0})).$\n\t\t\t\t\\end{lem}\n\t\t\t\t\\subsubsection{Estimates of $G_{1},$ $G_{2},$ $G_{3}$ and $\\widetilde{W}$} \\label{estimates} \n\t\t\t\t\\begin{lem}\\label{eog1}\n\t\t\t\t\tLet $B^{*}_{0}$ and $T^{*}_{0}$ be as in Lemma \\ref{nonempty} and $B_{i}\\geqslant B^{*}_{0}$ ($\\forall\\, 1\\leqslant i\\leqslant 4$). Then there exist $K_{1}=K_{1}(B_{1},B_{2},B_{3},B_{4})>0$ and $K_{2}>0$ such that for all $0<T\\leqslant T^{*}_{0},$ the term $G_{1}(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})$ satisfies the estimate \\eqref{eg1}. Moreover there exist $K_{3}=K_{3}(B_{1},B_{2},B_{3},B_{4})>0,$ $K_{4}=K_{4}(B_{1},B_{4})>0$ and $K_{5}>0$ such that for all $0<T\\leqslant T^{*}_{0},$ the estimates \\eqref{eg2} hold.\n\t\t\t\t\t\\begin{proof}\n\t\t\t\t\tSince $(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})\\in\\mathscr{C}_{T}(B_{1},B_{2},B_{3},B_{4}),$ one first checks that\n\t\t\t\t\t\\begin{equation}\\label{eog1*}\n\t\t\t\t\t\\begin{array}{l}\n\t\t\t\t\t\\displaystyle \\left\\|\\frac{1}{(1+\\widetilde{\\eta})}\\right\\|_{L^{\\infty}(0,T;H^{9\/2}(\\Gamma_{s}))}\\leqslant K(B_{4}).\n\t\t\t\t\t\\end{array}\n\t\t\t\t\t\\end{equation}\n\t\t\t\t\tNext observe that, since $\\gamma> 1,$\n\t\t\t\t\t\t$$(\\widetilde{\\sigma}+\\overline{\\rho})^{\\gamma-1}\\in C^{0}([0,T];H^{2}(\\Omega))\\,\\,\\mbox{and}\\,\\,(\\widetilde{\\sigma}+\\overline{\\rho})^{\\gamma-2}\\in C^{0}([0,T];H^{2}(\\Omega))$$ \n\t\t\t\t\t\tand\n\t\t\t\t\t\t\\begin{equation}\\label{eog2*}\n\t\t\t\t\t\t\\left\\{ \\begin{split}\n\t\t\t\t\t\t\\|(\\widetilde{\\sigma}+\\overline{\\rho})^{\\gamma-1}\\|_{L^{\\infty}(0,T;H^{2}(\\Omega))}\\leqslant K\\|\\widetilde{\\sigma}\\|_{L^{\\infty}(0,T;H^{2}(\\Omega))}\\leqslant K(B_{1}),\\\\\n\t\t\t\t\t\t\\|(\\widetilde{\\sigma}+\\overline{\\rho})^{\\gamma-2}\\|_{L^{\\infty}(0,T;H^{2}(\\Omega))}\\leqslant K\\|\\widetilde{\\sigma}\\|_{L^{\\infty}(0,T;H^{2}(\\Omega))}\\leqslant K(B_{1}).\n\t\t\t\t\t\t\\end{split}\\right.\n\t\t\t\t\t\t\\end{equation}\n\t\t\t\t\t\n\t\t\t\t\t\t (i)\tWe first 
estimate $G_{2}(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})$ in $L^{2}(0,T;{\\bf H}^{1}(\\Omega)).$\\\\ \n\t\t\t\t\t\t{Estimate of $P'(\\widetilde{\\sigma}+\\overline{\\rho})\\nabla\\widetilde{\\sigma}$ in $L^{2}(0,T;{\\bf H}^{1}(\\Omega))$}:\\\\\n\t\t\t\t\t\t\t\\begin{equation}\\label{eog21}\n\t\t\t\t\t\t\t\\begin{split}\n\t\t\t\t\t\t\t&\\|P'(\\widetilde{\\sigma}+\\overline{\\rho})\\nabla\\widetilde{\\sigma}\\|_{L^{2}(0,T;{\\bf H}^{1}(\\Omega))}\\\\[1.mm]\n\t\t\t\t\t\t\t& \\leqslant T^{1\/2}\\|P'(\\widetilde{\\sigma}+\\overline{\\rho})\\nabla\\widetilde{\\sigma}\\|_{L^{\\infty}(0,T;{\\bf H}^{1}(\\Omega))}\\\\[1.mm]\n\t\t\t\t\t\t\t&\\leqslant T^{1\/2}K(\\|(\\widetilde{\\sigma}+\\overline{\\rho})^{\\gamma-1}\\|_{L^{\\infty}(0,T;H^{2}(\\Omega))}\\|\\nabla{\\widetilde{\\sigma}}\\|_{L^{\\infty}(0,T;{\\bf H}^{1}(\\Omega))})\\\\[1.mm]\n\t\t\t\t\t\t\t&\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\quad(\\mbox{using the definition of}\\,P\\,\\mbox{and}\\,\\mbox{Lemma}\\,\\ref{lfpss})\\\\[1.mm]\n\t\t\t\t\t\t\t& \\leqslant K(B_{1})T^{1\/2},\\quad(\\mbox{using}\\,\\eqref{tsB12}).\n\t\t\t\t\t\t\t\\end{split}\n\t\t\t\t\t\t\t\\end{equation}\n\t\t\t\t\t\t\t{Estimate of $z\\widetilde{\\eta}_{tt}(\\widetilde{\\sigma}+\\overline{\\rho})\\vec{e}_{2}-(\\mu \\Delta +(\\mu+\\mu{'})\\nabla\\mathrm{div})(z\\widetilde{\\eta}_{t}\\vec{e}_{2})$ in $L^{2}(0,T;{\\bf H}^{1}(\\Omega))$:}\\\\\n\t\t\t\t\t\t\t\\begin{equation}\\label{eog22}\n\t\t\t\t\t\t\t\\begin{split}\n\t\t\t\t\t\t\t&\\|z\\widetilde{\\eta}_{tt}(\\widetilde{\\sigma}+\\overline{\\rho})\\vec{e}_{2}-(\\mu \\Delta +(\\mu+\\mu{'})\\nabla\\mathrm{div})(z\\widetilde{\\eta}_{t}\\vec{e}_{2})\\|_{L^{2}(0,T;{\\bf H}^{1}(\\Omega))}\\\\[1.mm]\n\t\t\t\t\t\t\t&\\leqslant T^{1\/2}\\|z\\widetilde{\\eta}_{tt}(\\widetilde{\\sigma}+\\overline{\\rho})\\vec{e}_{2}-(\\mu \\Delta +(\\mu+\\mu{'})\\nabla\\mathrm{div})(z\\widetilde{\\eta}_{t}\\vec{e}_{2})\\|_{L^{\\infty}(0,T;{\\bf H}^{1}(\\Omega))}\\\\[1.mm]\n\t\t\t\t\t\t\t&\\leqslant 
T^{1\/2}K(\\|\\widetilde{\\eta}_{tt}\\|_{L^{\\infty}(0,T; H^{1}(\\Omega))}\\|(\\widetilde{\\sigma}+\\overline{\\rho})\\|_{L^{\\infty}(0,T;H^{2}(\\Omega))}+\\|\\widetilde{\\eta}_{t}\\|_{L^{\\infty}(0,T; {H}^{3}(\\Omega))})\\\\[1.mm]\n\t\t\t\t\t\t\t&\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad(\\mbox{using Lemma}\\,\\ref{lfpss})\\\\[1.mm]\n\t\t\t\t\t\t\t&\\leqslant K(B_{1},B_{4})T^{1\/2},\\quad(\\mbox{using}\\,\\eqref{tsB12},\\eqref{teB4}).\n\t\t\t\t\t\t\t\\end{split}\n\t\t\t\t\t\t\t\\end{equation}\n\t\t\t\t\t\t\n\t\t\t\t\t\t\n\t\t\t\t\t\t\n\t\t\t\t\t\t\n\t\t\t\t\t\t\n\t\t\t\t\t\t\n\t\t\t\t\t\t\n\t\t\t\t\t\t\t{Estimate of $F_{2}(\\widetilde{\\sigma}+\\overline{\\rho},\\widetilde{\\bf w}+z\\widetilde{\\eta}_{t}\\vec{e}_{2},\\widetilde{\\eta})$ (defined in \\eqref{F123}) in $L^{2}(0,T;{\\bf H}^{1}(\\Omega))$:}\n\t\t\t\t\t\t We will only estimate the terms of $F_{2}(\\widetilde{\\sigma}+\\overline{\\rho},\\widetilde{\\bf w}+z\\widetilde{\\eta}_{t}\\vec{e}_{2},\\widetilde{\\eta})$ which are the most intricate to deal with. 
The others are left to the reader.\n\t\t\t\t\t\t\t\\begin{equation}\\label{eog23}\n\t\t\t\t\t\t\t\\begin{array}{ll}\n\t\t\t\t\t\t\t(a)&\\quad\\|\\widetilde{\\eta}(\\widetilde{\\sigma}+\\overline{\\rho})(\\widetilde{\\bf w}_{t}+z\\widetilde{\\eta}_{tt}\\vec{e_{2}})\\|_{L^{2}(0,T;{\\bf H}^{1}(\\Omega))}\\\\[1.mm]\n\t\t\t\t\t\t\t& \\leqslant T^{1\/2}\\|\\widetilde{\\eta}(\\widetilde{\\sigma}+\\overline{\\rho})(\\widetilde{\\bf w}_{t}+z\\widetilde{\\eta}_{tt}\\vec{e_{2}})\\|_{L^{\\infty}(0,T;{\\bf H}^{1}(\\Omega))}\\\\[1.mm]\n\t\t\t\t\t\t\t&\\leqslant T^{1\/2}K(\\|\\widetilde{\\eta}\\|_{L^{\\infty}(0,T;{H}^{2}(\\Gamma_{s}))}\\|(\\widetilde{\\sigma}+\\overline{\\rho})\\|_{L^{\\infty}(0,T;{H}^{2}(\\Omega))}\n\t\t\t\t\t\t\t\\|(\\widetilde{\\bf w}_{t}+z\\widetilde{\\eta}_{tt}\\vec{e_{2}})\\|_{{L^{\\infty}(0,T;{\\bf H}^{1}(\\Omega))}})\\\\[1.mm]\n\t\t\t\t\t\t\t&\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad(\\mbox{using Lemma}\\,\\ref{lfpss})\\\\[1.mm]\n\t\t\t\t\t\t\t&\\leqslant K(B_{1},B_{3},B_{4})T^{1\/2},\\quad(\\mbox{using}\\,\\eqref{tsB12},\\eqref{twB3},\\eqref{teB4}).\n\t\t\t\t\t\t\t\\end{array}\n\t\t\t\t\t\t\t\\end{equation}\n\t\t\t\t\t\t\t\\begin{equation}\\label{eog24}\n\t\t\t\t\t\t\t\\begin{array}{ll}\n\t\t\t\t\t\t\t(b)&\\quad\\|z(\\widetilde{\\sigma}+\\overline{\\rho})(\\widetilde{\\bf w}_{z}+\\widetilde{\\eta}_{t}\\vec{e_{2}})\\widetilde{\\eta}_{t}\\|_{L^{2}(0,T;{\\bf H}^{1}(\\Omega))}\\\\[1.mm]\n\t\t\t\t\t\t\t& \\leqslant T^{1\/2}\\|z(\\widetilde{\\sigma}+\\overline{\\rho})(\\widetilde{\\bf w}_{z}+\\widetilde{\\eta}_{t}\\vec{e_{2}})\\widetilde{\\eta}_{t}\\|_{L^{\\infty}(0,T;{\\bf H}^{1}(\\Omega))}\\\\[1.mm]\n\t &\\leqslant T^{1\/2} K(\\|(\\widetilde{\\sigma}+\\overline{\\rho})\\|_{L^{\\infty}(0,T;H^{2}(\\Omega))}\\|(\\widetilde{\\bf w}_{z}+\\widetilde{\\eta}_{t}\\vec{e_{2}})\\|_{L^{\\infty}(0,T;{\\bf H}^{1}(\\Omega))}\\|\\widetilde{\\eta}_{t}\\|_{L^{\\infty}(0,T;H^{2}(\\Gamma_{s}))})\\\\[1.mm]\n\t 
&\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad(\\mbox{using Lemma}\\,\\ref{lfpss})\\\\[1.mm]\n\t\t\t\t\t\t\t&\\leqslant K(B_{1},B_{3},B_{4})T^{1\/2},\\quad(\\mbox{using}\\,\\eqref{tsB12},\\eqref{twB3},\\eqref{teB4}).\n\t\t\t\t\t\t\t\\end{array}\n\t\t\t\t\t\t\t\\end{equation}\n\t\t\t\t\t\t\t\\begin{equation}\\label{eog25}\n\t\t\t\t\t\t\t\\begin{array}{ll}\n\t\t\t\t\t\t\t(c) &\\displaystyle\\quad\\left\\|\\frac{\\widetilde{\\bf w}_{zz}z^{2}\\widetilde{\\eta}^{2}_{x}}{(1+\\widetilde\\eta)}\\right\\|_{L^{2}(0,T;{\\bf H}^{1}(\\Omega))}\\\\[1.mm]\n\t\t\t\t\t\t\t&\\displaystyle\\leqslant K(\\|\\widetilde{\\bf w}_{zz}\\|_{L^{2}(0,T;{\\bf H}^{1}(\\Omega))}\\|\\widetilde{\\eta}^{2}_{x}\\|_{L^{\\infty}(0,T;H^{2}(\\Gamma_{s}))}\\left\\|\\frac{1}{(1+\\widetilde{\\eta})}\\right\\|_{L^{\\infty}(0,T;H^{9\/2}(\\Gamma_{s}))})\\\\\n\t\t\t\t\t\t\t&\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\,\\,(\\mbox{using Lemma}\\,\\ref{lfpss})\\\\[1.mm]\n\t\t\t\t\t\t\t&\\displaystyle\\leqslant T^{1\/2}K(\\|\\widetilde{\\bf w}_{zz}\\|_{L^{2}(0,T;{\\bf H}^{1}(\\Omega))}\\|(\\widetilde{\\eta}^{2}_{x})_{t}\\|_{L^{2}(0,T;H^{2}(\\Gamma_{s}))}\\left\\|\\frac{1}{(1+\\widetilde{\\eta})}\\right\\|_{L^{\\infty}(0,T;H^{9\/2}(\\Gamma_{s}))})\\\\[1.mm]\n\t\t\t\t\t\t\t&\\displaystyle\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad(\\mbox{using}\\,\\eqref{eoLit}\\,\\mbox{with}\\,\\psi=\\widetilde{\\eta}^{2}_{x}\\,\\mbox{and the fact}\\,\\widetilde{\\eta}_{x}(\\cdot,0)=0)\\\\[1.mm]\n\t\t\t\t\t\t\t&\\displaystyle\\leqslant K(B_{3},B_{4})T^{1\/2},\\quad(\\mbox{using}\\,\\eqref{tsB12},\\eqref{twB3},\\eqref{teB4}\\,\\mbox{and}\\,\\eqref{eog1*}).\n\t\t\t\t\t\t\t\\end{array}\n\t\t\t\t\t\t\t\\end{equation}\n\t\t\t\t\t\t\t \\,\\,\\,\\,\\,\\,\\quad$(d)$\\quad Using arguments similar to those in the computation \\eqref{eog21} we show the 
following\n\t\t\t\t\t\t\t\\begin{equation}\\label{eog26}\n\t\t\t\t\t\t\t\\begin{split}\n\t\t\t\t\t\t\t\\|(\\widetilde{\\eta} P'\\widetilde{\\sigma}_{x}- P'\\widetilde{\\sigma}_{z}z\\widetilde{\\eta}_{x})\\vec{e_{1}}\\|_{L^{2}(0,T; {\\bf H}^{1}(\\Omega))}\n\t\t\t\t\t\t\t\\leqslant K(B_{1},B_{4})T^{1\/2}.\n\t\t\t\t\t\t\t\\end{split}\n\t\t\t\t\t\t\t\\end{equation}\n\t\t\t\t\t\t\tNow the reader can deal with the other terms using similar arguments in order to prove\n\t\t\t\t\t\t\t\\begin{equation}\\label{eog27}\n\t\t\t\t\t\t\t\\begin{array}{l}\n\t\t\t\t\t\t\t\\|{F}_{2}(\\widetilde{\\sigma}+\\overline{\\rho},(\\widetilde{\\bf{w}}+z\\widetilde\\eta_{t}\\vec{e_{2}}),\\widetilde{\\eta})\\|_{L^{2}(0,T;{\\bf H}^{1}(\\Omega))}\\leqslant K(B_{1},B_{3},B_{4})T^{1\/2}.\n\t\t\t\t\t\t\t\\end{array}\n\t\t\t\t\t\t\t\\end{equation}\n\t\t\t\t\t\t\tCombining the estimates \\eqref{eog21}, \\eqref{eog22} and \\eqref{eog27} we conclude the proof of the inequality \\eqref{eg2}$(i)$.\\\\[2.mm]\n\t\t\t\t\t\t\t(ii)\n\t\t\t\t\t\t\tWe now estimate $\\|(G_{2}(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta}))_{t}\\|_{L^{2}(0,T;{\\bf L}^{2}(\\Omega))}.$\\\\\n\t\t\t\t\t\t\t{Estimate of $(P'(\\widetilde{\\sigma}+\\overline{\\rho})\\nabla\\widetilde{\\sigma})_{t}$ in $L^{2}(0,T;{\\bf L}^{2}(\\Omega))$:}\n\t\t\t\t\t\t\t\\begin{equation}\\label{eog28}\n\t\t\t\t\t\t\t\\begin{split}\n\t\t\t\t\t\t &\\|(P'(\\widetilde{\\sigma}+\\overline{\\rho})\\nabla\\widetilde{\\sigma})_{t}\\|_{L^{2}(0,T;{\\bf L}^{2}(\\Omega))}\\\\[1.mm]\n\t\t\t\t\t\t &\\leqslant T^{1\/2}\\|(P'(\\widetilde{\\sigma}+\\overline{\\rho})\\nabla\\widetilde{\\sigma})_{t}\\|_{L^{\\infty}(0,T;{\\bf L}^{2}(\\Omega))}\\\\[1.mm]\n\t\t\t\t\t\t &\\leqslant T^{1\/2}K(\\|(\\widetilde{\\sigma}+\\overline{\\rho})^{(\\gamma-2)}\\widetilde{\\sigma}_{t}\\nabla{\\widetilde\\sigma}\\|_{L^{\\infty}(0,T;{\\bf L}^{2}(\\Omega))}+\\|(\\widetilde{\\sigma}+\\overline{\\rho})^{\\gamma-1}\\nabla\\widetilde{\\sigma}_{t}\\|_{L^{\\infty}(0,T;{\\bf 
L}^{2}(\\Omega))})\\\\[1.mm]\n\t\t\t\t\t\t &\\leqslant T^{1\/2}K(\\|(\\widetilde{\\sigma}+\\overline{\\rho})^{\\gamma-2}\\|_{L^{\\infty}(0,T;H^{2}(\\Omega))}\\|\\widetilde{\\sigma}_{t}\\|_{L^{\\infty}(0,T; H^{1}(\\Omega))}\\|\\nabla\\widetilde{\\sigma}\\|_{L^{\\infty}(0,T; {\\bf H}^{1}(\\Omega))}\\\\[1.mm]\n\t\t\t\t\t\t &\\qquad+\\|(\\widetilde{\\sigma}+\\overline{\\rho})^{\\gamma-1}\\|_{L^{\\infty}(0,T;H^{2}(\\Omega))}\\|\\nabla\\widetilde{\\sigma}_{t}\\|_{L^{\\infty}(0,T;{\\bf L}^{2}(\\Omega))})\n\t\t\t\t\t\t \\qquad(\\mbox{using Lemma}\\,\\ref{lfpss})\\\\[1.mm]\n\t\t\t\t\t\t &\\leqslant K(B_{1},B_{2})T^{1\/2},\\quad(\\mbox{using}\\,\\eqref{tsB12}\\,\\mbox{and}\\,\\eqref{eog2*}).\n\t\t\t\t\t\t\t\\end{split}\n\t\t\t\t\t\t\t\\end{equation}\n\t\t\t\t\t\t\t{Estimate of $(z\\widetilde{\\eta}_{tt}(\\widetilde{\\sigma}+\\overline{\\rho})\\vec{e_{2}}-(\\mu \\Delta +(\\mu+\\mu{'})\\nabla\\mathrm{div})(z\\widetilde{\\eta}_{t}\\vec{e}_{2}))_{t}$ in $L^{2}(0,T;{\\bf L}^{2}(\\Omega))$:}\\\\\n\t\t\t\t\t\t\t\\begin{equation}\\label{eog29}\n\t\t\t\t\t\t\t\\begin{split}\n\t\t\t\t\t\t\t&\\|(z\\widetilde{\\eta}_{tt}(\\widetilde{\\sigma}+\\overline{\\rho})\\vec{e_{2}}-(\\mu \\Delta +(\\mu+\\mu{'})\\nabla\\mathrm{div})(z\\widetilde{\\eta}_{t}\\vec{e}_{2}))_{t}\\|_{L^{2}(0,T;{\\bf L}^{2}(\\Omega))}\\\\[1.mm]\n\t\t\t\t\t\t\t&\\leqslant T^{1\/2}K(\\|\\widetilde{\\eta}_{tt}\\|_{L^{\\infty}(0,T; H^{1}(\\Gamma_{s}))}\\|\\widetilde{\\sigma}_{t}\\|_{L^{\\infty}(0,T; H^{1}(\\Omega))})\n\t\t\t\t\t\t\t +K(\\|\\widetilde{\\eta}_{ttt}\\|_{L^{2}(0,T;L^{2}(\\Gamma_{s}))}\\|(\\widetilde{\\sigma}+\\overline{\\rho})\\|_{L^{\\infty}(0,T;H^{2}(\\Omega))}\\\\[1.mm]\n\t\t\t\t\t\t\t & \\qquad+\\|\\widetilde{\\eta}_{tt}\\|_{L^{2}(0,T;H^{2}(\\Gamma_{s}))})\n\t\t\t\t\t\t\t \\qquad(\\mbox{using Lemma}\\,\\ref{lfpss})\\\\[1.mm]\n\t\t\t\t\t\t\t & \\leqslant 
K(B_{2},B_{4})T^{1\/2}+K(B_{1},B_{4}).\n\t\t\t\t\t\t\t\\end{split}\n\t\t\t\t\t\t\t\\end{equation}\n\t\t\t\t\t\t\n\t\t\t\t\t\t\n\t\t\t\t\t\t\n\t\t\t\t\t\t\n\t\t\t\t\t\t\n\t\t\t\t\t\t\n\t\t\t\t\t\t\n\t\t\t\t\t\t\n\t\t\t\t\t\t\n\t\t\t\t\t\t\t{Estimate of $({F}_{2}(\\widetilde{\\sigma}+\\overline{\\rho},(\\widetilde{\\bf w}+z\\widetilde\\eta_{t}\\vec{e_{2}}),\\widetilde{\\eta}))_{t}$ in $L^{2}(0,T;{\\bf L}^{2}(\\Omega))$:}\\\\\n\t\t\t\t\t\t\\begin{equation}\\label{eog210}\n\t\t\t\t\t\t\t\\begin{array}{ll}\n\t\t\t\t\t\t\t(a)\\quad&\\|(\\widetilde{\\eta}(\\widetilde{\\sigma}+\\overline{\\rho})(\\widetilde{\\bf w}_{t}+z\\widetilde{\\eta}_{tt}\\vec{e_{2}}))_{t}\\|_{L^{2}(0,T;{\\bf L}^{2}(\\Omega))}\\\\[1.mm]\n\t\t\t\t\t\t\t&\\leqslant K(\\|(\\widetilde{\\eta}_{t}(\\widetilde{\\sigma}+\\overline{\\rho})(\\widetilde{\\bf w}_{t}+z\\widetilde{\\eta}_{tt}\\vec{e_{2}}))\\|_{L^{2}(0,T;{\\bf L}^{2}(\\Omega))}\n\t\t\t\t\t\t\t+\\|\\widetilde{\\eta}\\widetilde{\\sigma}_{t}(\\widetilde{\\bf w}_{t}+z\\widetilde{\\eta}_{tt}\\vec{e_{2}})\\|_{L^{2}(0,T;{\\bf L}^{2}(\\Omega))}\\\\[1.mm]\n\t\t\t\t\t\t\t &\\qquad+\\|(\\widetilde{\\eta}(\\widetilde{\\sigma}+\\overline{\\rho})(\\widetilde{\\bf w}_{tt}+z\\widetilde{\\eta}_{ttt}\\vec{e_{2}}))\\|_{L^{2}(0,T;{\\bf L}^{2}(\\Omega))})\\\\[1.mm]\n\t\t\t\t\t\t\t &\\leqslant T^{1\/2}K(\\|\\widetilde{\\eta}_{t}\\|_{L^{\\infty}(0,T;H^{2}(\\Gamma_{s}))}\\|(\\widetilde{\\sigma}+\\overline{\\rho})\\|_{L^{\\infty}(0,T;H^{2}(\\Omega))}\\|(\\widetilde{\\bf w}_{t}+z\\widetilde{\\eta}_{tt}\\vec{e_{2}})\\|_{L^{\\infty}(0,T;{\\bf H}^{1}(\\Omega))}\\\\[1.mm]\n\t\t\t\t\t\t\t &\\qquad+\\|\\widetilde{\\eta}\\|_{L^{\\infty}(0,T;H^{2}(\\Gamma_{s}))}\\|\\widetilde{\\sigma}_{t}\\|_{L^{\\infty}(0,T; H^{1}(\\Omega))}\\|(\\widetilde{\\bf w}_{t}+z\\widetilde{\\eta}_{tt}\\vec{e_{2}})\\|_{L^{\\infty}(0,T;{\\bf H}^{1}(\\Omega))})\\\\[1.mm]\n\t\t\t\t\t\t\t 
&\\qquad+\\|\\widetilde{\\eta}\\|_{L^{\\infty}(0,T;H^{2}(\\Gamma_{s}))}\\|(\\widetilde{\\sigma}+\\overline{\\rho})\\|_{L^{\\infty}(0,T;H^{2}(\\Omega))}\\|(\\widetilde{\\bf w}_{tt}+z\\widetilde{\\eta}_{ttt}\\vec{e_{2}})\\|_{L^{2}(0,T;{\\bf L}^{2}(\\Omega))}\\\\[1.mm]\n\t\t\t\t\t\t\t & \\leqslant T^{1\/2}K(B_{1},B_{2},B_{3},B_{4})+T^{1\/2}\\|\\widetilde{\\eta}_{t}\\|_{L^{2}(0,T;H^{2}(\\Gamma_{s}))}\\|(\\widetilde{\\sigma}+\\overline{\\rho})\\|_{L^{\\infty}(0,T;H^{2}(\\Omega))}\\\\\n\t\t\t\t\t\t\t &\\qquad\\|(\\widetilde{\\bf w}_{tt}+z\\widetilde{\\eta}_{ttt}\\vec{e_{2}})\\|_{L^{2}(0,T;{\\bf L}^{2}(\\Omega))}\n\t\t\t\t\t\t\t \\qquad(\\mbox{using}\\,\\eqref{eoLit}\\,\\mbox{with}\\,\\psi=\\widetilde{\\eta}\\,\\mbox{and the fact}\\,\\widetilde{\\eta}(\\cdot,0)=0)\\\\[1.mm]\n\t\t\t\t\t\t\t& \\leqslant K(B_{1},B_{2},B_{3},B_{4})T^{1\/2}.\n\t\t\t\t\t\t\t\\end{array}\n\t\t\t\t\t\t\t\\end{equation}\n\t\t\t\t\t\t\t\\,\\,\\,\\,\\,\\,\\,$(b)$\\quad Using similar estimates one can obtain the following\n\t\t\t\t\t\t\t\\begin{equation}\\label{eog211}\n\t\t\t\t\t\t\t\\begin{array}{l}\n\t\t\t\t\t\t\t\\|(z(\\widetilde{\\sigma}+\\overline{\\rho})(\\widetilde{\\bf w}_{z}+\\widetilde{\\eta}_{t}\\vec{e_{2}})\\widetilde{\\eta}_{t})_{t}\\|_{L^{2}(0,T;{\\bf L}^{2}(\\Omega))}\\leqslant K(B_{1},B_{2},B_{3},B_{4})T^{1\/2}.\n\t\t\t\t\t\t\t\\end{array}\n\t\t\t\t\t\t\t\\end{equation}\n\t\t\t\t\t\t\t\\,\\,\\,\\,\\,\\,\\,$(c)$\\quad Now we estimate \n\t\t\t\t\t\t\t$$\t\\left\\|\\left(\\frac{\\widetilde{\\bf w}_{zz}z^{2}\\widetilde{\\eta}^{2}_{x}}{(1+\\widetilde{\\eta})}\\right)_{t}\\right\\|_{L^{2}(0,T;{\\bf L}^{2}(\\Omega))}.$$\n\t\t\t\t\t\t\tTo start with, we have the following identity of distributional derivatives\n\t\t\t\t\t\t\t\\begin{equation}\n\t\t\t\t\t\t\t\t\\label{HugeTerms}\n\t\t\t\t\t\t\t\t\\left(\\frac{\\widetilde{\\bf w}_{zz}z^{2}\\widetilde{\\eta}^{2}_{x}}{(1+\\widetilde{\\eta})}\\right)_{t}=\\frac{z^{2}\\widetilde{\\bf 
w}_{tzz}\\widetilde{\\eta}^{2}_{x}}{(1+\\widetilde{\\eta})}+\\frac{2z^{2}\\widetilde{\\eta}_{x}\\widetilde{\\eta}_{xt}\\widetilde{\\bf w}_{zz}}{(1+\\widetilde{\\eta})}-\\frac{\\widetilde{\\bf w}_{zz}z^{2}\\widetilde{\\eta}^{2}_{x}\\widetilde{\\eta}_{t}}{(1+\\widetilde{\\eta})^{2}}.\n\t\t\t\t\t\t\t\\end{equation}\nWe now estimate the first summand. Using \\eqref{eog1*} one obtains\n\t\t\t\t\t\t\t\\begin{equation}\\label{eog212}\n\t\t\t\t\t\t\t\\begin{array}{l}\n\t\t\t\t\t\t\t\\displaystyle\n\t\t\t\t\t\t\t\\left\\|\\frac{z^{2}\\widetilde{\\bf w}_{tzz}\\widetilde{\\eta}^{2}_{x}}{(1+\\widetilde{\\eta})}\\right\\|_{L^{2}(0,T;{\\bf L}^{2}(\\Omega))}\\leqslant K(B_{4})(\\|\\widetilde{\\bf w}_{tzz}\\|_{{L^{2}(0,T;{\\bf L}^2(\\Omega))}}\\|\\widetilde{\\eta}_{x}\\|_{{{L^{\\infty}(\\Sigma^{s}_{T})}}}^2).\n\t\t\t\t\t\t\t\\end{array}\n\t\t\t\t\t\t\t\\end{equation}\n\t\t\t\t\t\t\tNow we use the inequality \\eqref{eoLit} and $\\widetilde{\\eta}_{x}(\\cdot,0)=0$ to get\n\t\t\t\t\t\t\t\\begin{equation}\\label{eog213}\n\t\t\t\t\t\t\t\\begin{array}{l}\n\t\t\t\t\t\t\t\\displaystyle\n\t\t\t\t\t\t\t\\|\\widetilde{\\eta}_{x}\\|_{{{L^{\\infty}(\\Sigma^{s}_{T})}}}\n\t\t\t\t\t\t\t\\leqslant\n\t\t\t\t\t\t\tC \\|\\widetilde{\\eta}_{x}\\|_{L^{\\infty}(0,T;{ H}^{2}(\\Gamma_{s}))}\\leqslant K(B_{3})T^{1\/2}.\n\t\t\t\t\t\t\t\\end{array}\n\t\t\t\t\t\t\t\\end{equation}\n\t\t\t\t\t\t\tHence we use \\eqref{eog213} in \\eqref{eog212} to obtain\n\t\t\t\t\t\t\t\\begin{equation}\\label{eog214}\n\t\t\t\t\t\t\t\\begin{array}{l}\n\t\t\t\t\t\t\t\\displaystyle\n\t\t\t\t\t\t\t\\left\\|\\frac{z^{2}\\widetilde{\\bf w}_{tzz}\\widetilde{\\eta}^{2}_{x}}{(1+\\widetilde{\\eta})}\\right\\|_{L^{2}(0,T;{\\bf L}^{2}(\\Omega))}\\leqslant K(B_{3},B_{4})T.\n\t\t\t\t\t\t\t\\end{array}\n\t\t\t\t\t\t\t\\end{equation} \n\t\t\t\t\t\t\tFor the second and third summands of \\eqref{HugeTerms}, we similarly 
obtain:\n\t\t\t\t\t\t\t\\begin{equation}\n\t\t\t\t\t\t\t\\begin{array}{l}\n\t\t\t\t\t\t\t\\displaystyle\n\t\t\t\t\t\t\t\\left\\|\\frac{2z^{2}\\widetilde{\\eta}_{x}\\widetilde{\\eta}_{xt}\\widetilde{\\bf w}_{zz}}{(1+\\widetilde{\\eta})}\\right\\|_{L^{2}(0,T;{\\bf L}^{2}(\\Omega))}\\leqslant K(B_{3},B_{4})T^{1\/2}, \\text{ and } \n\t\t\t\t\t\t\t%\n\t\t\t\t\t\t\t\\left\\|-\\frac{\\widetilde{\\bf w}_{zz}z^{2}\\widetilde{\\eta}^{2}_{x}\\widetilde{\\eta}_{t}}{(1+\\widetilde{\\eta})^{2}} \\right\\|_{L^{2}(0,T;{\\bf L}^{2}(\\Omega))}\\leqslant K(B_{3},B_{4})T.\n\t\t\t\t\t\t\t\\end{array}\n\t\t\t\t\t\t\t\\end{equation} \n\n\t\t\t\t\t\t\tSo altogether we get\n\t\t\t\t\t\t\t\\begin{equation}\\label{eog215}\n\t\t\t\t\t\t\t\\begin{array}{l}\n\t\t\t\t\t\t\t\\displaystyle\n\t\t\t\t\t\t\t\\left\\|\\left(\\frac{\\widetilde{\\bf w}_{zz}z^{2}\\widetilde\\eta^{2}_{x}}{(1+\\widetilde\\eta)}\\right)_{t}\\right\\|_{L^{2}(0,T;{\\bf L}^{2}(\\Omega))}\\leqslant K(B_{3},B_{4})T^{1\/2}.\n\t\t\t\t\t\t\t\\end{array}\n\t\t\t\t\t\t\t\\end{equation}\n\t\t\t\t\t\t\tThe remaining terms in the expression of ${F}_{2}$ are easier to deal with and hence we leave the details to the reader to show\n\t\t\t\t\t\t\t\\begin{equation}\\label{eog216}\n\t\t\t\t\t\t\t\\begin{array}{l}\n\t\t\t\t\t\t\t\\|({F}_{2}(\\widetilde{\\sigma}+\\overline{\\rho},\\widetilde{\\bf w}+z\\widetilde\\eta_{t}\\vec{e_{2}},\\widetilde{\\eta}))_{t}\\|_{L^{2}(0,T;{\\bf L}^{2}(\\Omega))}\\leqslant K(B_{1},B_{2},B_{3},B_{4})T^{1\/2}.\n\t\t\t\t\t\t\t\\end{array}\n\t\t\t\t\t\t\t\\end{equation}\n\t\t\t\t\t\t\tHence combining the estimates \\eqref{eog28}, \\eqref{eog29} and \\eqref{eog216} one gets \\eqref{eg2}$(ii).$\\\\[1.mm]\n\t\t\t\t\t\t\n\t\t\t\t\t\t\t(iii) In \\eqref{eoLit} replace $\\psi$ by $G_{2}(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})$ and use the estimate \\eqref{eg2}$(ii)$ to prove 
\\eqref{eg2}$(iii).$\n\t\t\t\t\t\\end{proof}\n\t\t\t\t\\end{lem}\n\t\t\t\t\\begin{lem}\\label{eog3}\n\t\t\t\t\t\t\tLet $B^{*}_{0}$ and $T^{*}_{0}$ are as in Lemma \\ref{nonempty} and $B_{i}\\geqslant B^{*}_{0}$ ($\\forall\\, 1\\leqslant i\\leqslant 4$). Then there exist $K_{6}>0$ and $K_{7}=K_{7}(B_{1},B_{2},B_{3},B_{4})>0$ such that for all $00,$ and $K_{9}=K_{9}(B_{3},B_{4})>0$ such that for all $00,&\\quad \\mbox{on}\\quad \\Sigma^{s}_{T}\\\\[1.mm]\n\t\t\t \t\\displaystyle \\frac{m}{2}\\leqslant \\sigma(x,z,t)+\\overline{\\rho}\\leqslant 2M,&\\quad \\mbox{in}\\quad Q_{T}.\n\t\t\t \t\\end{array}\n\t\t\t \t\\end{equation}\n\t\t\t \tAgain it follows from the equation \\eqref{3.2}$_{2}$ that ${\\bf w}_{t}(0)$ satisfies the condition \\eqref{twt0}. Similarly one uses \\eqref{3.2}$_{6}$ to show that $\\eta_{tt}(\\cdot,0)$ satisfies \\eqref{etatt0}. Now we set\n\t\t\t \t$$T^{*}=T^{*}(B_{1},B_{2},B_{3},B_{4})=T^{*}_{5}.$$\n\t\t\t \tHence if $B_{i}$ ($\\forall\\,1\\leqslant i\\leqslant 4$) is chosen as in \\eqref{chooseBi1}, \\eqref{chooseBi2} and \\eqref{chooseBi3} and $00\\quad \\mbox{on}\\quad \\Sigma^{s}_{T}.\n\t\t\t \n\t\t\t \t\\end{array}\n\t\t\t \t\\end{equation}\n\t\t\t \tObserve that the weak* convergence of $\\widetilde{\\sigma}_{n}$ to $\\overline{\\sigma}$ in $L^{\\infty}(0,T;L^{2}(\\Omega))$ is enough to conclude that (since $\\widetilde{\\sigma}_{n}$ satisfies \\eqref{tsgmM})\n\t\t\t \t\\begin{equation}\\label{bndsigma}\n\t\t\t \t\\begin{array}{l}\n\t\t\t \t\\displaystyle \\frac{m}{2}\\leqslant \\overline\\sigma(x,z,t)+\\overline{\\rho}\\leqslant 2M\\quad \\mbox{in}\\quad Q_{T}.\n\t\t\t \t\\end{array}\n\t\t\t \t\\end{equation}\n\t\t\t \tUsing the strong convergence of $(\\widetilde{\\sigma}_{n},\\widetilde{\\bf w}_{n},\\widetilde{\\eta}_{n})$ to $(\\overline{\\sigma},\\overline{\\bf w},\\overline{\\eta})$ in $\\mathcal{X}$ furnishes\n\t\t\t \t \\begin{equation}\\label{3.31}\n\t\t\t \t \\begin{array}{ll}\n\t\t\t \t \\overline{\\bf w}(\\cdot,0)=({\\bf 
u}_{0}-z\\eta_{1}\\vec{e_{2}})&\\quad \\mbox{in}\\quad \\Omega,\\\\\n\t\t\t \t \\overline{\\sigma}(\\cdot,0)=\\sigma_{0}&\\quad\\mbox{in}\\quad \\Omega.\n\t\t\t \t \\end{array}\n\t\t\t \t \\end{equation}\n\t\t\t \t Now we can use the uniform bounds of $\\|\\widetilde{\\bf w}_{n,t}\\|_{L^{\\infty}(0,T;{\\bf H}^{1}(\\Omega))}$ and $\\|\\widetilde{\\bf w}_{n,tt}\\|_{L^{2}(0,T;{\\bf L}^{2}(\\Omega))}$ and the Aubin Lions lemma to have the convergence $\\widetilde{\\bf w}_{n,t}\\rightarrow \\overline{\\bf w}_{t}$ in $C^{0}([0,T];{\\bf L}^{2}(\\Omega)).$ Consequently\n\t\t\t \t \\begin{equation}\\label{icbwt}\n\t\t\t \t \\begin{split}\n\t\t\t \t \\overline{\\bf w}_{t}(.,0)=\\frac{1}{\\rho_{0}}(G_{2}^0-(-\\mu \\Delta -(\\mu+\\mu{'})\\nabla\\mathrm{div})({\\bf u}_{0}-z\\eta_{1}\\vec{e}_{2})).\n\t\t\t \t \\end{split}\n\t\t\t \t \\end{equation}\n\t\t\t \tSo combining \\eqref{3.27}-\\eqref{3.28}-\\eqref{inoeta}-\\eqref{3.29}-\\eqref{3.30}-\\eqref{3.32}-\\eqref{bndsigma}-\\eqref{3.31}-\\eqref{icbwt} we conclude that the limit point $(\\overline{\\sigma},\\overline{\\bf w},\\overline{\\eta})\\in \\mathscr{C}_{T}$ and hence $\\mathscr{C}_{T}$ is closed in $\\mathcal{X}$.\n\t\t\t \t\\newline\n\t\t\t \tOnce again using Aubin Lions lemma we get that $\\mathscr{C}_{T}$ is a compact subset of $\\mathcal{X}.$ \n\t\t\t \\end{proof}\n\t\t\t \tNow to apply Schauder's fixed point theorem one only needs to prove that $L$ is continuous on $\\mathscr{C}_{T}.$ \n\t\t\t \t\\begin{lem}\\label{cont}\n\t\t\t \t\tLet $\\mathscr{C}_{T}$ be the set in \\eqref{TCT}. 
The map $L$ is continuous from $\\mathscr{C}_{T}$ into itself for the topology of $\\mathcal{X}.$\n\t\t\t \t\\end{lem}\n\t\t\t \t\\begin{proof}\n\t\t\t \t Suppose that $(\\widetilde{\\sigma}_{n},\\widetilde{\\bf w}_{n},\\widetilde{\\eta}_{n})\\in \\mathscr{C}_{T},$ converges to $(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})$ strongly in $\\mathcal{X}.$ Then, according to Lemma \\ref{com}, $(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})\\in \\mathscr{C}_{T}.$ We thus set $(\\widehat{\\sigma}_{n},\\widehat{\\bf w}_{n},\\widehat{\\eta}_{n})= L(\\widetilde{\\sigma}_{n},\\widetilde{\\bf w}_{n},\\widetilde{\\eta}_{n}),$ $(\\widehat{\\sigma},\\widehat{\\bf w},\\widehat{\\eta})= L(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta}).$ Our goal is to show that $(\\widehat{\\sigma}_{n},\\widehat{\\bf w}_{n},\\widehat{\\eta}_{n})$ strongly converges to $(\\widehat{\\sigma},\\widehat{\\bf w},\\widehat{\\eta})$ in $\\mathcal{X}.$ Using that $(\\widehat{\\sigma}_{n},\\widehat{\\bf w}_{n},\\widehat{\\eta}_{n})$ belongs to $\\mathscr{C}_{T}$ (see Lemma \\ref{inclusion}) we get that there exists a triplet $(\\overline{\\sigma},\\overline{\\bf w},\\overline{\\eta})$ such that up to a subsequence \n\t\t\t \t \\begin{equation}\\label{weakconv}\n\t\t\t \t \\begin{array}{lll}\n\t\t\t \t & \\widehat{\\sigma}_{n}\\stackrel{\\ast}{\\rightharpoonup}\\overline{\\sigma}&\\,\\,\\mbox{in}\\,\\, L^{\\infty}(0,T;H^{2}(\\Omega))\\cap W^{1,\\infty}(0,T;H^{1}(\\Omega)),\\\\\n\t\t\t \t & \\widehat{\\bf w}_{n}\\rightharpoonup \\overline{\\bf w}&\\,\\,\\mbox{in}\\,\\, L^{2}(0,T;{\\bf H}^{3}(\\Omega))\\cap H^{1}(0,T;{\\bf H}^{2}(\\Omega))\\cap H^{2}(0,T;{\\bf L}^{2}(\\Omega)),\\\\\n\t\t\t \t & \\widehat{\\bf w}_{n}\\stackrel{\\ast}{\\rightharpoonup} \\overline{\\bf w}&\\,\\,\\mbox{in}\\,\\, L^{\\infty}(0,T;{\\bf H}^{2}(\\Omega))\\cap W^{1,\\infty}(0,T;{\\bf H}^{1}(\\Omega)),\\\\\n\t\t\t \t & \\widehat{\\eta}_{n}\\rightharpoonup 
\\overline{\\eta}&\\,\\,\\mbox{in}\\,\\, H^{1}(0,T;H^{4}(\\Gamma_{s}))\\cap H^{2}(0,T;H^{2}(\\Gamma_{s}))\\cap H^{3}(0,T;L^{2}(\\Gamma_{s})),\\\\\n\t\t\t \t & \\widehat{\\eta}_{n}\\stackrel{\\ast}{\\rightharpoonup} \\overline{\\eta}&\\,\\,\\mbox{in}\\,\\, L^{\\infty}(0,T;H^{9\/2}(\\Gamma_{s}))\\cap W^{1,\\infty}(0,T;H^{3}(\\Gamma_{s}))\\cap W^{2,\\infty}(0,T;H^{1}(\\Gamma_{s})).\n\t\t\t \t \\end{array}\n\t\t\t \t \\end{equation}\n\t\t\t \t The compactness result proved in Lemma \\ref{com} provides the strong convergence in $\\mathcal{X}$ i.e, up to a subsequence, $(\\widehat{\\sigma}_{n},\\widehat{\\bf w}_{n},\\widehat{\\eta}_{n})$ converges strongly in $\\mathcal{X}$ to $(\\overline{\\sigma},\\overline{\\bf w},\\overline{\\eta}).$ It is clear that in order to prove that the map $L$ is continuous it is enough to show that $(\\overline{\\sigma},\\overline{\\bf w},\\overline{\\eta})=(\\widehat{\\sigma},\\widehat{\\bf w},\\widehat{\\eta}).$ This will be verified in the following steps.\\\\\n\t\t\t \t$(i)$ We first claim that $G_{2}(\\widetilde{\\sigma}_{n},\\widetilde{\\bf w}_{n},\\widetilde{\\eta}_{n})$ converges weakly to $G_{2}(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})$ in $L^{2}(0,T;{\\bf L}^{2}(\\Omega)).$\\\\\n\t\t\t \t Since $(\\widetilde{\\sigma}_{n},\\widetilde{\\bf w}_{n},\\widetilde{\\eta}_{n})$ belongs to $\\mathscr{C}_{T}$ and we have fixed $B_{i}$ (for all $1\\leqslant i\\leqslant 4$) and $T,$ one can use Lemma \\ref{eog2} to show that $\\|G_{2}(\\widetilde{\\sigma}_{n},\\widetilde{\\bf w}_{n},\\widetilde{\\eta}_{n})\\|_{L^{2}(0,T;L^{2}(\\Omega))}$ is uniformly bounded. 
Hence, to prove our claim it is enough to show that $G_{2}(\\widetilde{\\sigma}_{n},\\widetilde{\\bf w}_{n},\\widetilde{\\eta}_{n})$ converges to $G_{2}(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})$ in $\\mathcal{D}'(Q_{T})$ ($\\mathcal{D}'(Q_{T})$ is the space of distributions on $Q_{T}$).\\\\\n\t\t\t \t \n\t\t\t \n\t\t\t \n\t\t\t \tLet us consider the term $\\displaystyle\\frac{\\widetilde{\\bf w}_{n,zz}z^{2}\\widetilde{\\eta}^{2}_{n,x}}{(1+\\widetilde{\\eta}_{n})}.$\n\t\t\t \n\t\t\t \n\t\t\t \n\t\t\t\n\t\t\t\n\t\t\t\n\t\t\t \n\t\t\t \tFrom the uniform norm bound over $\\|\\widetilde{\\bf w}_{n,zz}\\|_{L^{2}(0,T;{\\bf H}^{1}(\\Omega))}$ we get that $\\widetilde{\\bf w}_{n,zz}$ converges weakly in $L^{2}(0,T;{\\bf H}^{1}(\\Omega))$ to $\\widetilde{\\bf w}_{zz}.$ Since $\\widetilde{\\eta}_{n}$ strongly converges to $\\widetilde{\\eta}$ in $C^{0}([0,T];H^{2}(\\Gamma_{s}))$ and both $\\widetilde{\\eta}_{n}$ and $\\widetilde{\\eta}$ satisfy \\eqref{1etad0}, $\\displaystyle\\frac{1}{(1+\\widetilde{\\eta}_{n})}$ and $\\widetilde{\\eta}_{n,x}$ converge strongly to $\\displaystyle\\frac{1}{(1+\\widetilde{\\eta})}$ and $\\widetilde{\\eta}_{x}$ respectively in the spaces $C^{0}([0,T];H^{2}(\\Gamma_{s}))$ and $C^{0}([0,T];H^{1}(\\Gamma_{s})).$\n\t\t\t \n\t\t\t \n\t\t\t \n\t\t\t \n\t\t\t \n\t\t\t \n\t\t\t \n\t\t\t \tHence one gets in particular the strong convergence of $\\widetilde{\\eta}^{2}_{n,x}$ to $\\widetilde{\\eta}^{2}_{x}$ in the space $C^{0}([0,T];L^{2}(\\Gamma_{s})).$ This implies that $\\displaystyle\\frac{\\widetilde{\\bf w}_{n,zz}z^{2}\\widetilde{\\eta}^{2}_{n,x}}{(1+\\widetilde{\\eta}_{n})}$ converges to $\\displaystyle\\frac{\\widetilde{\\bf w}_{zz}z^{2}\\widetilde{\\eta}^{2}_{x}}{(1+\\widetilde{\\eta})}$ weakly in $L^{2}(0,T;L^{1}(\\Omega))$ and hence particularly in the space $\\mathcal{D}'(Q_{T}).$\\\\\n\t\t\t \tNow we consider the term 
$P'\\widetilde{\\sigma}_{n,z}z\\widetilde{\\eta}_{n,x}\\vec{e}_{1}=(\\widetilde{\\sigma}_{n}+\\overline{\\rho})^{\\gamma-1}\\widetilde{\\sigma}_{n,z}z\\widetilde{\\eta}_{n,x}\\vec{e}_{1}.$ Since $\\|(\\widetilde{\\sigma}_{n}+\\overline{\\rho})\\|_{C^{0}(0,T;H^{2}(\\Omega))}$ is uniformly bounded so is $\\|(\\widetilde{\\sigma}_{n}+\\overline{\\rho})^{\\gamma-1}\\|_{C^{0}(0,T;H^{2}(\\Omega))}$ and hence $(\\widetilde{\\sigma}_{n}+\\overline{\\rho})^{\\gamma-1}$ converges weakly to $(\\widetilde{\\sigma}+\\overline{\\rho})^{\\gamma-1}$ in $L^{2}(0,T;H^{2}(\\Omega)).$ We also have that $\\widetilde{\\sigma}_{n,z}$ converges strongly to $\\widetilde{\\sigma}_{z}$ in $C^{0}([0,T];L^{2}(\\Omega)).$ Hence $(\\widetilde{\\sigma}_{n}+\\overline{\\rho})^{\\gamma-1}\\widetilde{\\sigma}_{n,z}$ converges weakly to $(\\widetilde{\\sigma}+\\overline{\\rho})^{\\gamma-1}\\widetilde{\\sigma}_{z}$ in $L^{2}(0,T;L^{2}(\\Omega)).$ Now the strong convergence of $\\widetilde{\\eta}_{n,x}$ to $\\widetilde{\\eta}_{x}$ in $C^{0}([0,T];H^{1}(\\Gamma_{s}))$ furnish that $(\\widetilde{\\sigma}_{n}+\\overline{\\rho})^{\\gamma-1}\\widetilde{\\sigma}_{n,z}z\\widetilde{\\eta}_{n,x}$ weakly converges to $(\\widetilde{\\sigma}+\\overline{\\rho})^{\\gamma-1}\\widetilde{\\sigma}_{z}z\\widetilde{\\eta}_{x}$ in $L^{2}(0,T;L^{1}(\\Omega)).$ Hence $(\\widetilde{\\sigma}_{n}+\\overline{\\rho})^{\\gamma-1}\\widetilde{\\sigma}_{n,z}z\\widetilde{\\eta}_{n,x}\\vec{e}_{1}$ converges to $(\\widetilde{\\sigma}+\\overline{\\rho})^{\\gamma-1}\\widetilde{\\sigma}_{z}z\\widetilde{\\eta}_{x}\\vec{e}_{1}$ in the space $\\mathcal{D}'(Q_{T}).$\\\\\n\t\t\t \tWe can apply similar line of arguments to prove that $G_{2}(\\widetilde{\\sigma}_{n},\\widetilde{\\bf w}_{n},\\widetilde{\\eta}_{n})$ converges to $G_{2}(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})$ in $\\mathcal{D}'(Q_{T}).$ Hence we have proved that $G_{2}(\\widetilde{\\sigma}_{n},\\widetilde{\\bf w}_{n},\\widetilde{\\eta}_{n})$ converges to 
$G_{2}(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})$ weakly in $L^{2}(0,T;{\\bf L}^{2}(\\Omega)).$\n\t\t\t \t\\newline\n\t\t\t \tAlso observe that $(\\widetilde{\\sigma}_{n}+\\overline{\\rho})$ converges strongly to $(\\widetilde{\\sigma}+\\overline{\\rho})$ in $C^{0}([0,T];H^{1}(\\Omega))$ and $\\widehat{\\bf w}_{n,t},$ $(-\\mu\\Delta-(\\mu'+\\mu)\\nabla(\\mbox{div}))\\widehat{\\bf w}_{n}$ converge up to a subsequence weakly to $\\displaystyle\\overline{\\bf w}_{t}$ and $\\displaystyle(-\\mu\\Delta-(\\mu'+\\mu)\\nabla(\\mbox{div}))\\overline{\\bf w}$ respectively in the spaces $L^{2}(0,T;{\\bf H}^{2}(\\Omega))$ and $L^{2}(0,T;{\\bf H}^{1}(\\Omega)).$ Hence, up to a subsequence, one obtains in particular the following convergence\n\t\t\t \t$$(\\widetilde\\sigma_{n}+\\overline{\\rho})\\widehat{\\bf w}_{n,t}-\\mu\\Delta\\widehat{\\bf w}_{n}-(\\mu'+\\mu)\\nabla(\\mbox{div}\\widehat {\\bf w}_{n})\\rightharpoonup (\\widetilde\\sigma+\\overline{\\rho})\\overline{\\bf w}_{t}-\\mu\\Delta\\overline{\\bf w}-(\\mu'+\\mu)\\nabla(\\mbox{div}\\overline{\\bf w})\\quad\\mbox{in}\\quad L^{2}(0,T;{\\bf L}^{2}(\\Omega)).$$ \n\t\t\t \tNow consider \\eqref{3.2}$_{2}$ with $(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})$ and ${\\bf w}$ replaced respectively by $(\\widetilde{\\sigma}_{n},\\widetilde{\\bf w}_{n},\\widetilde{\\eta}_{n})$ and $\\widehat{\\bf w}_{n}.$ The weak convergences discussed so far allow us to pass to the limit on both sides of this equation. 
So, using the uniqueness of the weak solution of the linear problem \\eqref{2.1.1}, we conclude that $\\overline{\\bf w}=\\widehat{\\bf w}.$\\\\\n\t\t\t \t$(ii)$ Now we claim that $G_{1}(\\widetilde{\\sigma}_{n},\\widetilde{\\bf w}_{n},\\widetilde{\\eta}_{n})$ converges weakly to $G_{1}(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})$ in $L^{2}(0,T;L^{2}(\\Omega)).$\\\\ \n\t\t\t \tLet us consider the term $\\displaystyle\\frac{1}{(1+\\widetilde{\\eta}_{n})}(\\widetilde{ w}_{n})_{1,z}z\\widetilde{\\eta}_{n,x}(\\widetilde{\\sigma}_{n}+\\overline{\\rho}).$ We already know that $\\displaystyle\\frac{1}{(1+\\widetilde{\\eta}_{n})}$ and $\\widetilde{\\eta}_{n,x}$ converge strongly to $\\displaystyle\\frac{1}{(1+\\widetilde{\\eta})}$ and $\\widetilde{\\eta}_{x}$ respectively in the spaces $C^{0}([0,T];H^{2}(\\Gamma_{s}))$ and $C^{0}([0,T];H^{1}(\\Gamma_{s})).$ One also observes that $(\\widetilde{w}_{n})_{1,z}$ weakly converges to $\\widetilde{w}_{1,z}$ in $L^{2}(0,T;{H}^{2}(\\Omega))$ (since $\\widetilde{\\bf w}_{n}\\rightharpoonup \\widetilde{\\bf w}$ in $L^{2}(0,T;{\\bf H}^{3}(\\Omega))$). 
Finally the strong convergence of $(\\widetilde{\\sigma}_{n}+\\overline{\\rho})$ to $(\\widetilde{\\sigma}+\\overline{\\rho})$ in $C^{0}([0,T];H^{1}(\\Omega))$ furnish the weak convergence of $\\displaystyle\\frac{1}{(1+\\widetilde{\\eta}_{n})}(\\widetilde{ w}_{n})_{1,z}z\\widetilde{\\eta}_{n,x}(\\widetilde{\\sigma}_{n}+\\overline{\\rho})$ to $\\displaystyle\\frac{1}{(1+\\widetilde{\\eta})}(\\widetilde{ w})_{1,z}z\\widetilde{\\eta}_{x}(\\widetilde{\\sigma}+\\overline{\\rho})$ in $L^{2}(0,T;L^{2}(\\Omega)).$ We can apply similar arguments for other terms in the expression of $G_{1}(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})$ in order to prove the weak convergence of $G_{1}(\\widetilde{\\sigma}_{n},\\widetilde{\\bf w}_{n},\\widetilde{\\eta}_{n})$ to $G_{1}(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})$ in $L^{2}(0,T;L^{2}(\\Omega)).$\\\\\n\t\t\t \t We further observe that $\\nabla\\widehat{\\sigma}_{n}$ strongly converges to $\\nabla\\overline{\\sigma}$ in $C^{0}([0,T];L^{2}(\\Omega)).$ Since $(\\widetilde{ w}_{n})_{1}$ weakly converges to $\\widetilde{w}_{1}$ in $L^{2}(0,T;H^{3}(\\Omega)),$ $(\\widetilde{\\eta}_{n})_{x}$ strongly converges to $\\widetilde{\\eta}_{x}$ in $L^{\\infty}(\\Sigma^{s}_{T})$ (because $(\\widetilde{\\eta}_{n})_{x}$ strongly converges to $\\widetilde{\\eta}_{x}$ in $C^{0}([0,T];H^{1}(\\Gamma_{s}))$ and the embedding $H^{1}(\\Gamma_{s})\\hookrightarrow L^{\\infty}(\\Gamma_{s})$ is continuous) and $\\frac{1}{(1+\\widetilde{\\eta}_{n})}$ strongly converges to $\\frac{1}{(1+\\widetilde{\\eta})}$ in $C^{0}([0,T];H^{2}(\\Gamma_{s}))$, $\\frac{1}{(1+\\widetilde{\\eta}_{n})}(\\widetilde{w}_{n})_{1}z(\\widetilde{\\eta}_{n})_{x}(\\widehat{\\sigma}_{n})_{z}$ weakly converges to $\\frac{1}{(1+\\widetilde{\\eta})}\\widetilde{w}_{1}z\\widetilde{\\eta}_{x}\\widehat{\\sigma}_{z}$ in $L^{2}(0,T;L^{2}(\\Omega)).$ Besides, up to a subsequence $(\\widehat{\\sigma}_{n})_{t}$ weakly converges to $\\overline{\\sigma}_{t}$ in 
$L^{2}(0,T;L^{2}(\\Omega)).$ Hence, up to a subsequence, we have \n\t\t\t $$(\\widehat{\\sigma}_{n})_{t}+\\begin{bmatrix}\n\t\t\t (\\widetilde{w}_{n})_{1}\\\\\n\t\t\t \\frac{1}{(1+\\widetilde{\\eta}_{n})}((\\widetilde{w}_{n})_{2}-(\\widetilde{ w}_{n})_{1}z(\\widetilde{\\eta}_{n})_{x})\n\t\t\t \\end{bmatrix}\\cdot\\nabla\\widehat{\\sigma}_{n}\\rightharpoonup {\\overline{\\sigma}_{t}}+\\begin{bmatrix}\n\t\t\t \\widetilde{ w}_{1}\\\\\n\t\t\t \\frac{1}{(1+\\widetilde{\\eta})}(\\widetilde{w}_{2}-\\widetilde{ w}_{1}z\\widetilde\\eta_{x})\n\t\t\t \\end{bmatrix}\n\t\t\t \\cdot \\nabla\\overline{\\sigma}\\quad\\mbox{in}\\quad L^{2}(0,T;L^{2}(\\Omega)).$$\n\t\t\t \tNow consider \\eqref{3.2}$_{1}$ with $(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})$ and ${\\sigma}$ replaced respectively by $(\\widetilde{\\sigma}_{n},\\widetilde{\\bf w}_{n},\\widetilde{\\eta}_{n})$ and $\\widehat{\\sigma}_{n}.$ The weak-type convergences discussed so far allow us to pass to the limit on both sides of this equation. 
Hence, from the uniqueness of the weak solution of the linear problem \\eqref{2.2.1}, we conclude that $\\overline{\\sigma}=\\widehat{\\sigma}.$\\\\\n\t\t\t \t$(iii)$ One can use a similar line of argument to show that $G_{3}{(\\widetilde{\\sigma}_{n},\\widetilde{\\bf w}_{n},\\widetilde{\\eta}_{n})}$ converges weakly to $G_{3}{(\\widetilde{\\sigma},\\widetilde{\\bf w},\\widetilde{\\eta})}$ in $L^{2}(0,T;L^{2}(\\Gamma_{s})).$ Using the norm bounds of $\\widehat{\\eta}_{n}$ (since $(\\widehat{\\sigma}_{n},\\widehat{\\bf w}_{n},\\widehat{\\eta}_{n})\\in \\mathscr{C}_{T}$) we can prove that, up to a subsequence, the left-hand side of \\eqref{3.2}$_{6}$ with $\\eta$ replaced by $\\widehat{\\eta}_{n}$ converges weakly to $$\\displaystyle\\overline{\\eta}_{tt}-\\beta \\overline{\\eta}_{xx}- \\delta\\overline{\\eta}_{txx}+\\alpha\\overline{\\eta}_{xxxx}$$\n\t\t\t \tin $L^{2}(0,T;L^{2}(\\Gamma_{s})).$ Now the uniqueness of the weak solution to the problem \\eqref{2.3.1} furnishes $\\overline{\\eta}=\\widehat{\\eta}.$\n\t\t\t \tThis completes the proof of Lemma \\ref{cont}.\n\t\t\t \t\\end{proof}\n\t\t\t \t\\subsection{Conclusion}\\label{conc}\n\t\t\t The following properties hold:\\\\\n\t\t\t \t(i) The convex set $\\mathscr{C}_{T}$ is non-empty (Lemma \\ref{nonempty}) and is a compact subset of $\\mathcal{X}$ (Lemma \\ref{com}).\\\\\n\t\t\t \t(ii) The map $L$, defined in \\eqref{welposL}, is continuous on $\\mathscr{C}_{T}$ in the topology of $\\mathcal{X}$ (Lemma \\ref{cont}).\n\t\t\t\t\\\\\n\t\t\t\t(iii) The map $L$ maps $\\mathscr{C}_{T}$ to itself (Lemma \\ref{inclusion}).\\\\ \n\t\t\t \tThus, all the assumptions of the Schauder fixed point theorem are satisfied by the map $L$ on $\\mathscr{C}_{T},$ endowed with the topology of $\\mathcal{X}.$ Therefore, the Schauder fixed point theorem yields a fixed point $(\\sigma_{f},{\\bf w}_{f},\\eta_{f})$ of the map $L$ in $\\mathscr{C}_{T}.$ From the definition of the map $L$, one has $(\\sigma_{f},{\\bf w}_{f},\\eta_{f})\\in 
Z^{T}_{1}\\times Y^{T}_{2}\\times Z^{T}_{3}.$ Hence we have the following time continuities (since so far one only has the regularities \\eqref{eg1} of $G_{1}(\\sigma_{f},{\\bf w}_{f},\\eta_{f}),$ \\eqref{eg2} of $G_{2}(\\sigma_{f},{\\bf w}_{f},\\eta_{f})$ and \\eqref{eg3} of $G_{3}(\\sigma_{f},{\\bf w}_{f},\\eta_{f})$)\n\t\t\t \t\\begin{equation}\\label{timecont}\n\t\t\t \t\\begin{array}{l}\n\t\t\t \t{\\sigma}_{f}\\in C^{0}([0,T];H^{2}(\\Omega)),\\\\\n\t\t\t \t{\\bf w}_{f}\\in C^{0}([0,T];{\\bf H}^{5\/2}(\\Omega))\\cap C^{1}([0,T];{\\bf H}^{1}(\\Omega)),\\\\\n\t\t\t \t\\eta_{f} \\in C^{0}([0,T];H^{4}(\\Gamma_{s}))\\cap C^{1}([0,T];H^{3}(\\Gamma_{s}))\\cap C^{2}([0,T];H^{1}(\\Gamma_{s})).\n\t\t\t \t\\end{array}\n\t\t\t \t\\end{equation}\n\t\t\t \tThe regularities \\eqref{timecont} can be used to further check that $G_{1}(\\sigma_{f},{\\bf w}_{f},\\eta_{f})\\in C^{0}([0,T];H^{1}(\\Omega))$ and $G_{3}(\\sigma_{f},{\\bf w}_{f},\\eta_{f})\\in C^{0}([0,T];H^{1\/2}(\\Gamma_{s})).$ Hence we use Corollary \\ref{dencor} and Corollary \\ref{timebeam} to obtain the following\n\t\t\t \t\\begin{equation}\\nonumber\n\t\t\t \t\\begin{array}{l}\n\t\t\t \t(\\sigma_{f})_{t}\\in C^{0}([0,T];H^{1}(\\Omega))\\,\\,\\mbox{and}\\,\\, \\eta_{f}\\in C^{0}([0,T];H^{9\/2}(\\Gamma_{s})).\n\t\t\t \t\\end{array}\n\t\t\t \t\\end{equation} \n\t\t\t \t Hence, $(\\sigma_{f},{\\bf w}_{f},\\eta_{f})\\in Y^{T}_{1}\\times Y^{T}_{2}\\times Y^{T}_{3}.$ The trajectory $(\\sigma_{f},{\\bf w}_{f},\\eta_{f})$ solves the nonlinear problem \\eqref{chdb} in $Y^{T}_{1}\\times Y^{T}_{2}\\times Y^{T}_{3}.$ Consequently, the system \\eqref{1.21} admits a solution. This further implies that the original system \\eqref{1.1}-\\eqref{1.2}-\\eqref{1.3} admits a strong solution in the sense of Definition \\ref{doss}. 
Finally, the proof of Theorem \\ref{main} is complete.\n\t\t\t \t\\end{proof}\n\n\\bibliographystyle{plain}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\n\\subsection{Physical motivation} The study of quantized vortex dynamics in Bose-Einstein condensates (BECs)\nis a topic of intense experimental and theoretical investigation. A particularly interesting situation is created when the BEC is stirred through an\nexternal {\\it rotating} confinement potential. Indeed, if the rotation speed exceeds some critical value, \n{\\it vortices} and, more generally, {\\it vortex lattices} are created, see, e.g., \\cite{A, CPRY} for a broader introduction. \n\nFrom a mathematical point of view, rotating BECs can be described within the realm of a mean-field model, the so-called \n{\\it Gross-Pitaevskii equation} \\cite{PiSt}. In the following, we shall assume, without loss of generality, that the system rotates around the $z$-axis with a \ngiven speed $\\Omega \\in \\R$. 
Placing ourselves in the associated rotating reference frame, \nthe corresponding mathematical model is a {\\it nonlinear \nSchr\\\"odinger equation} (NLS) given by\n\\begin{equation}\n\\label{NLS_rot}\ni \\partial_t \\psi = -\\frac{1}{2} \\Delta \\psi + \\lambda |\\psi|^{2} \\psi + V(x) \\psi - \\Omega L \\psi .\n\\end{equation}\nHere, $t\\in \\R$, $x\\in \\R^d$ with $d=3$, or $d=2$, respectively. The latter corresponds to the assumption of homogeneity of the BEC \nalong the $z$-axis (see, e.g., \\cite{BMSW, MeSp}, for a rigorous scaling limit from three to effective two-dimensional models for BEC). The parameter $\\lambda \\ge 0$ describes \nthe strength of the inter-particle interaction, which in this work is assumed to be repulsive.\nThe potential $V$ describes the magnetic trap and is usually taken in the form of a harmonic oscillator, i.e.\n\\begin{equation}\n\\label{Vquadr}\nV(x)=\\frac{1}{2} \\omega^2 |x|^2, \\quad \\omega \\in \\R.\n\\end{equation}\nHere, and in the following, we choose $V$ to be rotationally symmetric for simplicity. All our results can be easily generalized to the case of \nan anisotropic harmonic oscillator. \nFinally, $ \\Omega L\\psi$ describes the rotation around the $z$-axis, where \n\\begin{equation}\\label{eq:angular_momentum}\n L \\psi:= - i (x_1 \\partial_{x_2} \\psi - x_2 \\partial_{x_1}\\psi)\\equiv -i x^\\perp \\cdot \\nabla \\psi,\n\\end{equation}\ndenotes the corresponding quantum mechanical rotation operator. \n\nMost rigorous mathematical results on vortex creation are based on standing wave solutions of \\eqref{NLS_rot}, i.e. 
\nsolutions of the form $\\psi(t,x) = \\varphi(x) e^{-i \\mu t}$, $\\mu\\in \\R$, which leads to the following nonlinear elliptic equation\n\\begin{equation}\n\\label{stat_NLS}\n-\\frac{1}{2} \\Delta \\varphi + \\lambda |\\varphi|^{2} \\varphi + V(x) \\varphi - \\Omega L \\varphi - \\mu \\varphi =0.\n\\end{equation}\nEquation \\eqref{stat_NLS} can be interpreted as the \nEuler-Lagrange equation of the associated Gross-Pitaevskii energy functional \\cite{PiSt, S}:\n\\begin{equation}\\label{GPenergy}\nE_{\\rm GP}(\\varphi):= \\int_{\\R^d} \\bigg( \\frac{1}{2} |\\nabla \\varphi|^2 + V(x) |\\psi|^2 + \\frac{\\lambda}{2}|\\varphi |^{4} - \\Omega \\overline{\\varphi}\nL \\varphi \\bigg) \\;dx,\n\\end{equation}\nOne possible way of constructing solutions to \\eqref{stat_NLS} is thus to minimize \\eqref{GPenergy} under the constraint $\\|\\varphi \\|^2_{L^2}=M$, \nwhere $M>0$ denotes a given mass. This consequently yields a {\\it chemical potential} $\\mu=\\mu(M)\\ge 0$ playing the role of a Lagrange multiplier. \nIn order to do so, one requires $\\omega>|\\Omega| $ which ensures that $E_{\\rm GP}$ is bounded from below. \nPhysically speaking, this condition means that the confinement potential $V(x)$ is stronger than the rotational forces, \nensuring that the BEC stays trapped.\nWithin this framework, it was proved in \\cite{S} \nthat the hereby obtained physical {\\it ground states}, i.e. energy minimizing solutions of \\eqref{stat_NLS}, undergo a symmetry breaking (of the rotational symmetry) \nfor sufficiently strong $\\Omega$ and\/or $\\lambda \\ge 0$. The latter is interpreted as the onset of vortex-lattice creation. \n\nOn the other hand, it is often argued in the physics literature that a small amount of dissipation must be present for the \nexperimental realization of stable vortex lattices, cf. \\cite{CMK, KT, KNKM}. 
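Filling in the standard argument behind the condition $\omega>|\Omega|$ mentioned above (a sketch, with no attempt at optimal constants): since $|x^\perp|\le |x|$, the Cauchy--Schwarz inequality yields
\[
\Big| \, \Omega \int_{\R^d} \overline{\varphi}\, L \varphi \, dx \, \Big| \le |\Omega| \, \|x\varphi\|_{L^2} \|\nabla \varphi\|_{L^2},
\]
and hence, for $\lambda \ge 0$, completing the square gives
\[
E_{\rm GP}(\varphi) \ge \frac{1}{2}\|\nabla \varphi\|_{L^2}^2 + \frac{\omega^2}{2}\|x\varphi\|_{L^2}^2 - |\Omega|\,\|x\varphi\|_{L^2}\|\nabla \varphi\|_{L^2}
= \frac{1}{2}\big( \|\nabla \varphi\|_{L^2} - |\Omega| \|x\varphi\|_{L^2} \big)^2 + \frac{\omega^2-\Omega^2}{2}\|x\varphi\|_{L^2}^2 \ge 0,
\]
so $E_{\rm GP}$ is indeed bounded from below, and coercive in $\|x\varphi\|_{L^2}$, as soon as $\omega > |\Omega|$.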
\nIn order to describe such dissipative effects, not present in the original Gross-Pitaevskii equation \\eqref{NLS_rot}, \nthe following phenomenological model has been proposed in \\cite{TKU} and subsequently been studied in, e.g., \\cite{CGKT, CP, GAF, KT, KNKM}:\n\\begin{equation}\\label{dissGP}\n(i\\beta-\\gamma) \\partial_t \\psi = -\\frac{1}{2} \\Delta \\psi + \\lambda |\\psi|^{2} \\psi + V(x) \\psi - \\Omega L \\psi - \\mu \\psi.\n\\end{equation}\nHere $\\beta\\in\\R$ and $\\gamma>0$ are physical parameters whose ratio describes the strength of the dissipation. (In \\cite{GAF} the authors use \nformal arguments based on quantum kinetic theory to obtain $\\frac{\\gamma}{\\beta} \\approx\n0.03$.) Note that any {\\it time-independent solution} $\\psi = \\varphi(x)$ of \\eqref{dissGP} solves the stationary NLS \\eqref{stat_NLS}. \nIn contrast to \\eqref{NLS_rot}, equation \\eqref{dissGP} is no longer Hamiltonian and only makes sense for $t\\in \\R_+$.\n\n\\subsection{Mathematical setting and main result} \nThis work is devoted to a rigorous mathematical analysis of \\eqref{dissGP}. In particular, we shall be interested in the long time behavior of its solutions as $t \\to +\\infty$. \nTo this end, it is convenient to re-scale time such that $\\beta^2+\\gamma^2=1$. \nThen we can write \\[\ni\\beta-\\gamma = -e^{i\\vartheta}, \\quad \\text{for some $\\vartheta\\in\\Big(-\\frac \\pi 2,\\frac \\pi 2\\Big)$.}\n\\] \nNote that by doing so, the real part of $e^{i\\vartheta}$ has the same (positive) sign as $\\gamma > 0$. 
We shall thus be concerned with the following initial value problem for $(t,x)\\in \\R_+\\times \\R^d$ and $d=2,3$:\n\\begin{equation}\n\\label{NLS_diss}\n-e^{i\\vartheta}\\partial_t \\psi = -\\frac{1}{2} \\Delta \\psi + \\lambda |\\psi|^{2\\sigma} \\psi + V(x) \\psi - \\Omega L \\psi - \\mu \\psi, \\quad \n\\psi _{\\mid t =0} =\\psi_0(x),\n\\end{equation}\nwhere $\\psi_0$ will be chosen in some appropriate function space (see below), and $\\sigma > 0$ a generalized nonlinearity. \nFormally, the usual Gross-Pitaevskii equation \\eqref{NLS_rot} is obtained from \\eqref{NLS_diss} in the limit $\\vartheta \\to \\pm \\frac{\\pi}{2}$. On the other hand, if $\\vartheta =0$ the Hamiltonian character \nof the model is completely lost and \\eqref{NLS_diss} instead resembles a nonlinear parabolic \nequation of {\\it complex Ginzburg-Landau} (GL) type, cf. \\cite{AK} for a review on this type of models.\n\n\nEquation \\eqref{NLS_diss} can thus be seen as a hybrid between the Gross-Pitaevskii\/Non-linear Schr\\\"odinger equation and \nthe complex Ginzburg-Landau equation. Both kind of models have been extensively studied in the mathematical literature:\nFor local and global well-posedness results on NLS, \nwith or without quadratic potentials $V$, we refer to \\cite{Caze, Car05, Car09}. Allowing for the inclusion of a rotation term, the initial value problem for \\eqref{NLS_rot} \nhas been analyzed in \\cite{AMS}.\nSimilarly, well-posedness results for the complex GL equation in various spaces can be found in \\cite{DGL, GV1,GV2}. The existence and basic properties of \na global attractor for solutions to GL (on bounded domains $D\\subset \\R^d$) are studied in \\cite{Te} and \\cite{MM}. \nMoreover, the so-called {\\it inviscid limit} which links solutions of \nGL to solutions of NLS has been established in \\cite{Wu}. 
However, none of the aforementioned results directly apply to the model \n\\eqref{NLS_diss}, which involves an unbounded (quadratic) potential $V$ and a rotation term, \nneither of which has been included in the studies on GL cited above. \nOne should also note that the GL equation in its most general form allows for \ndifferent complex pre-factors in front of the Laplacian and the nonlinearity. In our case those\npre-factors coincide, allowing for a closer connection to NLS.\nVery recently, a similar type of restricted GL model with $\\lambda <0$ (and without potential and rotation terms) has \nbeen studied in \\cite{CDW1, CDW2} as an ``intermediate step\"\nbetween the NLS and the nonlinear heat equation. \nFinally, we also mention that equation \\eqref{dissGP} with $\\beta=0$ is used to \nnumerically obtain the Gross-Pitaevskii ground states, cf. \\cite{BaoDu, CST}. \n\n\nAs announced before, we shall mainly be interested in the long time behavior of solutions to \\eqref{NLS_diss}. In view of this, the main result \nof our paper can be stated in the following form:\n\n\\begin{theorem}\\label{thmmain} Let $d\\in \\{2,3\\}$, $\\omega>|\\Omega|$, $\\vartheta \\in (-\\frac \\pi 2 , \\frac \\pi 2 )$, $\\lambda \\ge 0$, and $0< \\sigma < \\frac{d}{2(d-2)}$. \nThen for any \n\\[\n\\psi_0 \\in \\Sigma:=\\big\\{f\\in H^1(\\R^d)\\; : \\; |x|f\\in L^2(\\R^d)\\big\\}\n\\]\nthere exists a unique strong solution $\\psi \\in C([0,\\infty), \\Sigma)$ to \\eqref{NLS_diss}. The associated mass and energy thereby satisfy the identities \\eqref{eq:mass} and \\eqref{eq:energy} below. \nIf, in addition, $\\lambda >0$, the evolutionary system \\eqref{NLS_diss} possesses a global attractor $\\mathcal A\\subset \\Sigma$, i.e., $\\mathcal A$ \nis invariant under the time-evolution associated to \\eqref{NLS_diss} and such that\n\\[\n\\inf_{\\phi \\in \\mathcal A} \\| \\psi(t) - \\phi \\|_{L^2(\\R^d)} \\stackrel{t\\to +\\infty}{\\longrightarrow} 0. 
\n\\]\nMore precisely, \n\\[\n\\mathcal A=\\big \\{\\psi_0 : \\psi_0 = \\psi(0) \\mbox{ for some } \\psi \\in C((-\\infty, \\infty);\\Sigma) \\, \\text{solution to \\eqref{NLS_diss}} \\big \\} \n\\]\nis a connected compact set in $L^2(\\R^d)$ and uniformly attracts bounded sets in $L^2(\\R^d)$. \nFurthermore, $\\mathcal A$ has finite Hausdorff and fractal dimensions which depend on the given parameters as described in Proposition \\ref{prop:dimension}.\nFinally, if $\\mu < \\frac{\\omega d}{2}$, then $\\mathcal A = \\{ 0 \\}$.\n\\end{theorem}\n\nHere, $\\Sigma$ is the physical energy space ensuring that $E_{\\rm GP}(\\psi(t))$ is finite. \nThe assumption on $\\sigma>0$ is thereby slightly more restrictive than the one for the usual $H^1$-subcritical nonlinearities (see Remark \\ref{remH1} below). \nNote, however, that we may always take $\\sigma = 1$ in the above theorem, which corresponds to the usual cubic nonlinearity. In addition, the condition $\\omega>|\\Omega|$ \nensures that the confinement is stronger than the rotation, and thus, the system remains trapped for all times $t\\ge 0$. \n\nAs we shall see, neither the mass nor the (total) energy is a conserved quantity of the time-evolution, but for $\\lambda >0$, there are {\\it absorbing balls} for $\\psi$ in both \nthe mass and the energy space, see Section \\ref{sec:uniform} for a precise definition. The existence of a global attractor $\\mathcal A$ therefore requires the presence of the nonlinearity \nand, of course, the presence of the confining potential $V$. Clearly, all stationary solutions $\\varphi \\in \\Sigma$ of \\eqref{stat_NLS} \nare members of $\\mathcal A$. However, since for $\\mu$ sufficiently large \nthere are always at least two such solutions (namely, zero and the nontrivial energy minimizer) and since $\\mathcal A$ is connected, it is unclear what the precise \nlong-time behavior of \\eqref{NLS_diss} is. 
\nIndeed, in the case of the GL equation for superconducting materials it is known \\cite{TW} \nthat the global attractor contains not only all possible steady state solutions, but also the heteroclinic orbits joining these steady states, and we \nconsequently expect a similar behavior to hold also in our model. \n\nExcept in the case $\\mu<\\frac{\\omega d}{2}$, the precise dependence of the dimension of $\\mathcal A$ on the given physical parameters is not known. \nIn Section \\ref{sec:dimension} we shall prove that the Hausdorff dimension ${\\rm dim}_{\\rm H}(\\mathcal A)\\le m$, where $m$ depends in a \nrather complicated way on all the involved parameters. It is interesting, however, to check that $m\\to +\\infty$, as $|\\Omega|\\to \\omega$. \nIn other words, the influence of the rotation term potentially increases the dimension of the attractor. This is consistent with \nnumerical and physical experiments on the creation of vortex lattices in rotating BEC. For a recent (non-rigorous) study which employs numerical simulations and asymptotic analysis \nto investigate the corresponding pattern formation mechanism, we refer to \\cite{CGKT}. In fact, one easily observes that in the linear case ($\\lambda =0$) the dynamics \nadmits exponentially growing modes, cf. Section \\ref{sec:linear} below for more details. It is argued in \\cite{CGKT} that this type of instability mechanism is responsible for the \nnucleation of a large number of vortices at the periphery of the atomic cloud, as can be seen in physical experiments.\\\\\n\n\n\nThe proof of Theorem \\ref{thmmain} will be done in several steps: First, we shall establish local (in-time) well-posedness of \\eqref{NLS_diss} in Section \\ref{sec:local} below.\nThen, we will show how to extend this result to global in-time solutions in Section \\ref{sec:global}, where we also prove that for $\\mu < \\frac{\\omega d}{2}$ solutions decay to zero as $t\\to +\\infty$.
\nThe main technical step for the existence of an attractor is then to prove certain uniform bounds on the total mass and energy as done in Section \\ref{sec:uniform}.\nThis will allow us to conclude the existence of an absorbing ball and of a global attractor in Section \\ref{sec:attractor}, where we shall also prove the announced estimates on the dimension. \nFinally, we collect some basic computations regarding the kernel of the linear semigroup in the appendix.\n\n\n\\section{Mathematical preliminaries}\n\nIn this section we shall collect several preliminary results to be used later on.\n\n\\subsection{Spectral properties of the linear Hamiltonian}\\label{sec:linear}\nIn the following, we denote by\n\\begin{equation}\\label{H}\nH_\\Omega := -\\frac{1}{2} \\Delta + V(x) - \\Omega L ,\\quad x\\in \\R^d, \n\\end{equation}\nthe linear Hamiltonian operator, with $V(x)$ given in \\eqref{Vquadr}. Note that in the case without rotation, i.e. $\\Omega=0$, the operator \n\\begin{equation}\\label{H_0}\nH_0=\\frac12 \\left(- \\Delta+\\omega ^2 {|x|^2}\\right),\n\\end{equation} is nothing but the \n(isotropic) quantum mechanical harmonic oscillator in $d=2$ or $3$ spatial dimensions. The spectral properties of $H_0$ are well known \\cite{Fl, T}: \n\\begin{lemma}\\label{lem:H}\n$H_0$ is essentially self-adjoint on $C_0^\\infty(\\R^d)\\subset L^2(\\R^d)$ with compact resolvent. The \nspectrum of $H_0$ is given by $\\sigma(H_0) = \\{ E_{0,n}\\}_{n\\in \\N}$, where\n\\[\nE_{0,n} = \\omega \\Big( \\frac{d}{2} +n-1 \\Big),\\quad n =1,2, \\dots.\n\\]\nIn addition, the eigenvalue $E_{0,n}$ is $\\left(\\begin{matrix} d+n -2 \\\\ n-1 \\end{matrix}\\right)-$fold degenerate.\n\\end{lemma}\nIn particular, $E_{0,n} \\ge E_{0,1}\\equiv \\frac{\\omega d}{2}>0$, for all $n \\in\\N$. \nThe associated eigenfunctions form a complete orthonormal basis of $L^2(\\R^d)$.
In $d=2$, they are explicitly given by \\cite{Fl}:\n\\[\n\\chi^0_{n_1, n_2}(x_1, x_2) = f_{n_1}(x_1) f_{n_2}(x_2),\\quad n_j\\in \\N,\n\\]\nwhere $n_1 + n_2 = n$ and the $f_{n_j}\\in \\mathcal S(\\R)$ are the eigenfunctions of the one-dimensional harmonic oscillator, i.e., an appropriately normalized Gaussian times a Hermite polynomial of order $n_j-1$. \nAn analogous formula holds in $d=3$ dimensions.\n\nIn the case with $\\Omega\\not =0$, we first note that the commutator $[H_\\Omega, L]=0$, due to the rotational symmetry of the potential $V$. \nThis implies that $H_\\Omega$ and $L$ have a common orthonormal basis of eigenfunctions $\\{\\chi_n\\}_{n\\in \\N}$, which can be obtained by taking appropriate linear combinations of the \neigenfunctions of $H_0$, see \\cite{Fl}. An important assumption throughout this work will \nbe that $\\omega>|\\Omega|$, ensuring confinement of the BEC. \nIn mathematical terms, this condition implies that the rotational term can be seen as a perturbation of the positive definite operator\n$H_0$, such that $H_\\Omega$ is still positive definite. In other words, we have that \n\\begin{equation}\\label{eigenvalue}\nH_\\Omega \\chi_n = E_{\\Omega,n} \\chi_n,\n\\end{equation}\nwhere the new eigenvalues $E_{\\Omega, n}\\in \\R$ (indexed in increasing order) are related to the unperturbed $E_{0,n}$ via \n\\[\n\\{E_{\\Omega, n}, \\ n \\in \\N\\}= \\{ E_{0,\\ell} + m \\Omega, \\ -\\ell+1 \\le m\\le \\ell-1, \\, \\text{for}\\, \\ell \\in\\N\\}.\n\\] \nIn particular, under the assumption that $\\omega > |\\Omega|$, we still have: $E_{\\Omega, n}\\ge \\frac{\\omega d}{2}$, for all $n\\in \\N$. Thus, the ground state energy \neigenvalue stays the same with and without rotation.
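The combinatorics of Lemma \ref{lem:H} and the lower bound $E_{\Omega,n}\ge \frac{\omega d}{2}$ can be spot-checked numerically. The following sketch enumerates the oscillator levels directly; the concrete values $\omega=1$, $\Omega=0.7$ are illustrative and not taken from the text.

```python
from itertools import product
from math import comb

def degeneracy(d, n):
    # multiplicity of E_{0,n} = omega*(d/2 + n - 1): the number of
    # multi-indices (k_1, ..., k_d) >= 0 with k_1 + ... + k_d = n - 1
    k = n - 1
    return sum(1 for ks in product(range(k + 1), repeat=d) if sum(ks) == k)

# matches the binomial coefficient binom(d + n - 2, n - 1) from Lemma lem:H
for d in (2, 3):
    for n in range(1, 7):
        assert degeneracy(d, n) == comb(d + n - 2, n - 1)

# perturbed eigenvalues E_{0,l} + m*Omega with |m| <= l - 1 stay >= omega*d/2
omega, Omega, d = 1.0, 0.7, 2   # illustrative values with omega > |Omega|
levels = [omega * (d / 2 + l - 1) + m * Omega
          for l in range(1, 40) for m in range(-l + 1, l)]
assert min(levels) >= omega * d / 2
```

The check uses the eigenvalue relation exactly as stated above: each unperturbed level $E_{0,\ell}$ splits into the values $E_{0,\ell}+m\Omega$ with $|m|\le \ell-1$, and for $\omega>|\Omega|$ none of these drops below the ground state energy $\frac{\omega d}{2}$.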
\\\\\n\nWith these spectral data at hand, we can now look at the {\\it linear} time-evolution ($\\lambda =0$) associated to \\eqref{dissGP}, i.e.\n\\begin{equation}\\label{linGP}\n(i\\beta-\\gamma) \\partial_t \\psi =H_\\Omega \\psi - \\mu \\psi.\n\\end{equation}\nUsing the fact that $\\{\\chi_n\\}_{n\\in \\N}$ comprises an orthonormal basis, we can decompose the solution to this equation via\n\\begin{equation}\\label{decomp}\n\\psi(t,x) = \\sum_{n \\in \\N} c_n(t) \\chi_n (x),\n\\end{equation}\nwhere $ \\{ c_n (t) \\}_{n\\in \\N} \\in \\ell^2$, i.e. $\\sum |c_n(t)|^2 < +\\infty$. In view of \\eqref{eigenvalue}, \\eqref{linGP} we find\n\\[\nc_n(t) = c_n(0) \\exp (-(i\\beta + \\gamma)(E_{\\Omega, n}-\\mu) t).\n\\]\nIn particular, the normalization $\\beta^2+\\gamma^2=1$ yields \n\\[\n\\| \\psi (t) \\|^2_{L^2} \\equiv \\sum_{n=1}^\\infty |c_n(t)|^2 = \\sum_{n=1}^\\infty |c_n(0)|^2 e^{- 2\\cos\\vartheta (E_{\\Omega, n} - \\mu) t},\n\\]\nwhere we identify $\\gamma =\\cos \\vartheta$.\nFor $ \\vartheta \\in (-\\frac{\\pi}{2}, \\frac{\\pi}{2})$ \nthe right hand side exponentially decays to zero as $t\\to+\\infty$, provided $\\mu < E_{\\Omega, n}$, for all $n\\in \\N$. This is equivalent to saying that $\\mu < E_{\\Omega, 1} = \\frac{\\omega d}{2}>0$. Hence, given a $\\mu > E_{\\Omega, 1}$, the solution is exponentially decaying as long as \nthe initial data is such that $c_n(0)=0$ for all $n\\in \\N$ for which $E_{\\Omega, n} < \\mu$. Otherwise, we have, in general, exponential growth of the \n$L^2$-norm of $\\psi(t)$. \n\n\\begin{remark}\nIn the case where we choose $\\mu = E_{\\Omega, m}$ for some fixed $m\\in \\N$, we see that $|c_m(t)|^2 = |c_m(0)|^2$ is a conserved quantity of the linear time evolution. \nAll higher modes exponentially decay towards zero, whereas all lower modes will exponentially increase.
We consequently expect linear instability of stationary states of the nonlinear system.\n\\end{remark}\n\n\n\n\n\n\\subsection{Dispersive properties of the linear semi-group} \nIn order to set up a well-posedness result for the nonlinear equation \\eqref{dissGP}, we need to study the regularizing properties of the \nlinear semigroup associated to $H_\\Omega$, i.e.\n\\[\nS_\\Omega(t):= \\exp\\left(-e^{-i\\vartheta} t H_\\Omega\\right), \\quad t\\in \\R_+.\n\\]\nAs usual we identify $S_\\Omega(t)$ with its associated integral kernel via\n\\[\nS_\\Omega(t) f(x) = \\int_{\\R^d} S_\\Omega(t,x,y) f(y) \\, dy,\\quad f \\in L^2(\\R^d).\n\\]\nThe following lemma states some basic properties of $S_\\Omega(t)$ to be used later on.\n\n\\begin{lemma}\\label{lem:kernel}\nLet $\\vartheta \\in (-\\frac{\\pi}{2}, \\frac{\\pi}{2})$ and $t>0$. Then \n\\begin{align}\\label{SG_diss}\n S_\\Omega(t,x,y) = \\bigg(\\frac{\\omega}{2 \\pi \\sinh(e^{-i\\vartheta} \\omega t)}\\bigg)^\\frac{d}{2} \\exp\\left({\\Phi(t,x,y)}\\right),\n\\end{align}\nwhere the pre-factor in front of the exponent is understood in terms of the principal value of the complex logarithm, and the phase function $\\Phi$ is given by\n\\begin{align*}\n \\Phi(t,x,y) &= -\\frac{\\omega}{\\sinh(e^{-i\\vartheta} \\omega t)}\\bigg(\\frac{1}{2}(x^2+y^2)\\cosh(e^{-i\\vartheta}\n\\omega t) - \\cosh(e^{-i\\vartheta}\\Omega t) (x_1 y_1 + x_2 y_2) \\\\&\\qquad\\qquad\\qquad\\qquad\\qquad + i \\sinh(e^{-i\\vartheta} \\Omega t) (x_2y_1 - x_1y_2) \\bigg).\n\\end{align*}\nMoreover, for $\\omega>|\\Omega|$, there exists $\\delta>0 $ such that \n\\begin{equation}\\label{kernel_Lp}\n \\|S_\\Omega(t) f \\|_{L^r} \\le C\\ t^{\\frac{d}{2}(\\frac{1}{r}-\\frac{1}{q})} \\| f \\|_{L^q}\n\\end{equation}\nand\n\\begin{equation}\\label{kernel_Sigma}\n \\|\\nabla S_\\Omega(t) f \\|_{L^r} + \\|x S_\\Omega(t) f \\|_{L^r}\n\\le C\\ t^{-\\frac{1}{2}+\\frac{d}{2}(\\frac{1}{r}-\\frac{1}{q})} \\| f \\|_{L^q},\n\\end{equation}\nfor all $1\\le q\\le r \\le
\\infty$ and all $0|\\Omega|$, and $d\\in \\{2,3\\}$. \n\\begin{itemize}\n\\item[(i)] Let $p > \\max (\\sigma d, 2\\sigma +1)$ and $\\psi_0\\in L^p(\\R^d)$. Then there exists a time $T>0$ and a unique solution $\\psi \\in C([0,T];L^p(\\R^d))$ to \\eqref{NLS_diss}, depending continuously on the initial data.\n\\item[(ii)] If, in addition, $0 < \\sigma < \\frac{d}{2(d-2)}$ and $\\psi_0 \\in \\Sigma $, then there exists a $T^*>0$ such that the solution from ${\\rm (i)}$ satisfies \n\\[\n \\psi \\in C([0,T^*]; \\Sigma).\n\\]\nMoreover, the solution is maximal in the sense that either $T^*=+\\infty$, or the following blow-up alternative holds:\n\\begin{equation*}\\label{blowup_alt}\n\\lim_{t\\to T_-^*} \\| \\psi(t)\\|_{\\Sigma} = \\infty.\n\\end{equation*}\n\n\\end{itemize}\n\\end{proposition}\n\n\\begin{proof}\nThe proof is based on a fixed point argument using Duhamel's formula and the properties of the semigroup $S_\\Omega(t)$. To this end, we first note that the term $\\mu\\psi$ is of no importance here, as it can always be added in a subsequent step (in fact,\nwe could have included it in the kernel of $S_\\Omega(t)$). Hence let us assume that $\\mu=0$ for notational convenience. \n\nTo prove (i), we will show that the mapping\n\\[\n\\psi \\mapsto \\Xi(\\psi)(t) := S_\\Omega(t)\\psi_0 - e^{-i\\vartheta} \\int_0^t S_\\Omega(t-\\tau) \\big(\\lambda |\\psi(\\tau)|^{2\\sigma}\\psi(\\tau)\\big) \\;d\\tau\n\\]\nis a contraction in the space\n\\[\n X_T:=\\big\\{ \\psi\\in C([0,T];L^p(\\R^d)) \\ : \\ \\|\\psi\\|_{L^\\infty(0,T;L^p)} \\le 2 \\|\\psi_0\\|_{L^p} \\big\\}\n\\]\nfor small enough $T>0$. 
To do so, we can use the kernel estimate \\eqref{kernel_Lp} with the following choice of parameters:\n\\[\n\\begin{split}\nr=p\\ge 2\\sigma+1, \\quad q=\\frac{p}{2\\sigma+1},\\qquad \\text{when} \\qquad d&=2,\\\\\nr=p > \\max (\\sigma d, 2\\sigma +1), \\quad q=\\frac{p}{2\\sigma+1},\\qquad \\text{when} \\qquad d&=3.\n\\end{split} \n\\]\nNote that any such choice of $p$ implies that $d\\sigma <p$, so that the singularity $(t-\\tau)^{-{d\\sigma}\/{p}}$ appearing in the Duhamel term below is integrable, with $1-\\frac{d\\sigma}{p}>0$. Thus, for $T>0$ sufficiently small, we conclude that $\\Xi$ indeed maps $X_T$ into itself.\nLikewise it holds that for two solutions $\\psi$ and $\\tilde \\psi$\n\\begin{align*}\n& \\|\\Xi(\\tilde\\psi)(t)-\\Xi(\\psi)(t)\\|_{L^p}\\\\ &\\le \\lambda \\int_0^t\n\\big\\|S_\\Omega(t-\\tau)\\big(|\\tilde\\psi(\\tau)|^{2\\sigma}\\tilde\\psi(\\tau)-|\\psi(\\tau)|^{2\\sigma}\\psi(\\tau)\\big)\\big\\|_{L^p} \\;d\\tau\\\\\n &\\le C \\int_0^t (t-\\tau)^{-{d\\sigma}\/{p}} \\big(\\|\\tilde\\psi(\\tau)\\|_{L^p}^{2\\sigma} + \\|\\psi(\\tau)\\|_{L^p}^{2\\sigma}\\big) \\|\n\\tilde\\psi(\\tau)-\\psi(\\tau) \\|_{L^p} \\;d\\tau\\\\\n &\\le C T^{1-\\frac{d\\sigma}{p}} \\|\\psi\\|_{L^\\infty(0,T;L^p)}^{2\\sigma} \\| \\tilde\\psi-\\psi\n\\|_{L^\\infty(0,T;L^p)},\n\\end{align*}\nwhich shows that $\\Xi$ is a contraction for $T>0$ sufficiently small. \n\n\\medskip\n\nTo prove (ii), we first note that by Sobolev imbedding $\\Sigma \\hookrightarrow L^p(\\R^d)$, for $p< p^*=\\frac{2d}{d-2}$\nwhen $d=3$ and $p<\\infty$ when $d=2$, respectively. Thus $\\Sigma\\cap L^p(\\R^d) = \\Sigma$ for $p<p^*$. Repeating the fixed point argument above, now in combination with the estimates of Lemma \\ref{lem:kernel}, one obtains a corresponding bound for $\\nabla \\psi$, valid for $T>0$ sufficiently small (depending on the size of $\\|\\psi\\|_{L^\\infty L^p}$). Similar arguments for $\\psi$ and $x\\psi$ imply\n\\begin{align*}\n\\| \\psi \\|_{L^\\infty(0,T;\\Sigma)} \\le \\| \\psi_0 \\|_{\\Sigma}+ C T^{1-\\frac{d\\sigma}{p}} \\|\\psi\\|_{L^\\infty(0,T;L^p)}^{2\\sigma} \\| \\psi\\|_{L^\\infty(0,T;\\Sigma)} .\n\\end{align*}\nChoosing $T>0$ even smaller, if necessary, the second term on the right hand side can be absorbed on the left hand side and we are done.
\nAs before, this inequality also applies to the differences of two solutions $\\psi, \\tilde \\psi$, which yields the continuity of $\\psi$ in $\\Sigma$. \n\nWe denote by $T^*>0$ the maximal time of existence in $\\Sigma$. This is always less than or equal to $T>0$, the maximal time of existence in $L^p(\\R^d)$. \nTo prove the blow-up alternative, assume by contradiction that $T^*<\\infty$, and $\\| \\psi(t, \\cdot)\\|_{\\Sigma}$ remains bounded for $t\\in [0, T^*)$. Then, by Sobolev imbedding \n$\\| \\psi(t, \\cdot)\\|_{L^p}$ also remains bounded and thus, we can restart the local existence argument in $\\Sigma$ leading to a contradiction. \n\\end{proof}\n\n\n\\begin{remark}\\label{remH1}\nUnfortunately, our method of proof does not yield existence of solutions for the full $H^1$-subcritical regime, i.e., $\\sigma < \\frac{2}{d-2}$. We expect that \nthis is only a technical issue that can be overcome using a different approach (for example, by using ideas from \\cite{GV1}, or by \ngeneralizing the space-time estimates of \\cite{Bax} to $S_\\Omega$). Note, however, that our slightly more restrictive condition $\\sigma < \\frac{d}{2(d-2)}$ still allows us to take $\\sigma =1$ in \n$d=3$. Hence, the physically most relevant case of a cubic nonlinearity is covered.\n\\end{remark}\n\n\\section{Global existence and asymptotic vanishing of solutions} \\label{sec:global}\n\nIn this section, we shall first prove the global existence of solutions in the energy space before showing that for any choice of $\\mu < E_{\\Omega,1}$, the \nsolutions asymptotically vanish as $t\\to +\\infty$.\n\n\n\\subsection{Global existence}\n\nIn order to prove global well-posedness of \\eqref{NLS_diss}, we \nwill need to collect some useful a-priori estimates.
To this end, we denote for $\\psi\\in \\Sigma$ the {\\it total mass} by\n\\begin{equation}\\label{mass}\nM(\\psi):= \\| \\psi \\|_{L^2}^2,\n\\end{equation}\nand the {\\it total energy} by\n\\begin{equation}\\label{energy}\nE(\\psi):= \\int_{\\R^d} \\bigg( \\frac{1}{2} |\\nabla \\psi|^2 + V(x) |\\psi|^2 + \\frac{\\lambda}{\\sigma +1}|\\psi |^{2\\sigma+2} - \\Omega \\overline{\\psi}\nL \\psi \\bigg) \\;dx.\n\\end{equation}\nThe latter is nothing but the sum of the kinetic, potential, nonlinear potential, and rotational energy. Clearly, for $\\psi \\in \\Sigma$, \nSobolev's imbedding implies that all the terms in $E(\\psi)$ are finite, provided $\\sigma < \\frac{2}{d-2}$ (and hence also for our range of $\\sigma$). \nFor simplicity of notation, we will write $E(t) \\equiv E(\\psi(t,\\cdot))$ and likewise for $M(t)$, whenever we compute the \nmass and energy of the time-dependent solution $\\psi(t,x)$ to \\eqref{NLS_diss}.\nIn addition, the {\\it free energy} is \ngiven by\n\\begin{equation}\\label{free_energy}\nF(\\psi):=E(\\psi) - \\mu M(\\psi).\n\\end{equation}\n\nIn the case of the usual Gross-Pitaevskii equation, i.e. $\\vartheta = \\pm \\frac \\pi 2$, one finds that both $M(t)=M(0)$ and $E(t)=E(0)$ are conserved in time \\cite{AMS}. \nIn our dissipative model this is no longer the case. Instead we have the following result, which can be seen as an extension of \nsome well-known identities proved for the classical GL equation, cf. \\cite{DGL, GV2, Te, Wu}.\n\n\\begin{lemma}\\label{lem:Lyapunov}\nLet $\\sigma <\\frac{d}{2(d-2)}$ and $\\psi \\in C([0,T]; \\Sigma)$ be a solution to \\eqref{NLS_diss}.
Then the following identities hold:\n\\begin{equation}\\label{eq:mass}\nM(t) + 2\\cos \\vartheta \\int_0^t \\left( E(s) + \\frac{\\lambda\\sigma}{\\sigma+1} \\| \\psi(s,\\cdot) \\|_{L^{2\\sigma+2}}^{2\\sigma+2} - \\mu M(s)\\right) ds =M(0),\n\\end{equation}\nand\n\\begin{equation}\\label{eq:energy}\nF(t) + 2\\cos \\vartheta \\int_0^t \\int_{\\R^d} | \\partial_t \\psi(s,x)|^2\\, dx \\, ds= F(0).\n\\end{equation}\nIn particular, for $\\vartheta \\in (-\\frac \\pi 2 ,\\frac \\pi 2 )$, the free energy $F(\\psi)$ is a non-increasing functional along solutions of \\eqref{NLS_diss}.\n\n\\end{lemma}\n\\begin{proof} In a first step, let us assume sufficient regularity (and spatial decay) of $\\psi$, such that all the following calculations are justified. Then, \nas in the case of the usual NLS, \nidentity \\eqref{eq:mass} is obtained by multiplying \\eqref{NLS_diss} by $\\bar \\psi$, integrating with respect to $x\\in \\R^d$ and taking the real part of the resulting expression (see, e.g., \\cite{AMS, Caze}). \nThis yields\n\\begin{equation}\\label{mass-derivative}\n\\frac{d}{dt} M(t) = -2\\cos \\vartheta \\left( E(t) + \\frac{\\lambda\\sigma}{\\sigma+1} \\| \\psi(t) \\|_{L^{2\\sigma+2}}^{2\\sigma+2} - \\mu M(t)\n\\right)\n\\end{equation}\nwhich directly implies \\eqref{eq:mass} after an integration in time. Similarly, after multiplying \\eqref{NLS_diss} by $\\partial_t \\bar \\psi$, \nintegrating with respect to $x$, and taking the real part, we obtain\n\\begin{equation}\\label{energy-derivative}\n \\frac{d}{dt} \\big(E(t) - \\mu M(t)\\big) = -2\\cos \\vartheta \\int_{\\R^d} | \\partial_t \\psi(t,x)|^2\\, dx,\n\\end{equation}\nwhich yields \\eqref{eq:energy} after integration w.r.t. time. \n\nThe second step then consists of a classical density argument (cf. 
\\cite{CDW1}), which, together with the fact that \n$\\psi(t)$ depends continuously on the initial data $\\psi_0\\in \\Sigma$, \nallows us to extend \\eqref{eq:mass} and \\eqref{eq:energy} to the case of general solutions $\\psi \\in C([0,T];\\Sigma)$. \nFinally, we note that for $\\vartheta \\in (-\\frac \\pi 2 ,\\frac \\pi 2 )$ we have $\\cos \\vartheta >0$, and thus \n\\eqref{eq:energy} directly implies that $F(t) \\le F(0)$, for all $t\\ge 0$.\n\\end{proof}\n\n\n\nHaving in mind that $\\psi \\in C([0,T]; \\Sigma)$, the assumption on $\\sigma$ implies (via Sobolev imbedding) that the integrand \nappearing in identity \\eqref{eq:mass} is a continuous function of time. \nThe fundamental theorem of calculus therefore allows us to differentiate \\eqref{eq:mass} w.r.t. $t$ and consequently use the \ndifferential inequality \\eqref{mass-derivative}. However, the same is not true for \\eqref{eq:energy}, i.e., we cannot use \\eqref{energy-derivative}, \nsince at this point we do not know whether $\\partial_t \\psi \\in C([0,T]; L^2(\\R^d))$ holds true. This fact will play a role in some of the proofs given below. \\\\\n\n\n\nAnother preliminary result, to be used several times in the following, is the fact that under our assumptions on the parameters $\\omega, \\Omega, \\lambda, \\sigma$, \nthe energy is indeed non-negative.\n\n\\begin{lemma} \\label{lem:Ebound}\nLet $\\omega>|\\Omega|$, $\\lambda \\ge 0$, and $\\sigma < \\frac{2}{d-2}$. Then for any \n$u \\in \\Sigma$ there exists a constant $c=c(\\omega, \\Omega, \\lambda , \\sigma)>0$, \nsuch that \n\\[\n\\|\\nabla u \\|_{L^2}^2 + \\| x u \\|_{L^2}^2 + \\| u \\|_{L^{2\\sigma +2}}^{2\\sigma+2} \\le cE(u).\n\\]\n\\end{lemma}\n\\begin{proof}\nSince $\\lambda\\ge0$, the only possibly negative term within $E(u)$ is given by the rotational energy.
However, since \n$\\Omega^2\/\\omega^2=:\\epsilon < 1$, Young's inequality applied to~\\eqref{eq:angular_momentum} yields the pointwise interpolation estimate\n\\[\n\\big| \\Omega \\overline{u} L u \\big| \\le \\frac{\\omega^2}{2} |x^\\bot|^2 | u |^2 \\, dx +\\frac{\\epsilon}{2} |\\nabla^\\bot u|^2 \\le V(x) |u|^2 + \\frac{\\epsilon}{2} |\\nabla u |^2.\n\\]\nWe therefore can bound the energy from below via\n\\begin{equation*}\\label{grad_by_en}\n 0 \\le \\frac{1-\\epsilon}{2} \\|\\nabla u \\|_{L^2}^2 + \\frac{\\lambda\\sigma}{\\sigma+1} \\| u \\|_{L^{2\\sigma+2}}^{2\\sigma+2} \\le E(u).\n\\end{equation*}\nAnalogously, we have\n\\[\n 0 \\le \\frac{1-\\epsilon}{2} \\|x u \\|_{L^2}^2+ \\frac{\\lambda\\sigma}{\\sigma+1} \\| u \\|_{L^{2\\sigma+2}}^{2\\sigma+2} \\le E(u).\n\\]\nCombining these two estimates then yields the desired result with a constant\n\\[\nc=\\frac{4}{\\min\\{1-\\epsilon, \\frac{2\\lambda \\sigma}{\\sigma +1}\\}}.\n\\]\nNote that $c\\to +\\infty$ as $|\\Omega|\\to \\omega$.\n\\end{proof}\n\n\nThe mass\/energy-relations stated in Lemma \\ref{lem:Lyapunov} can now be used to infer global existence of solutions in the \ncase of {\\it defocusing} case $\\lambda > 0$.\n\n\\begin{proposition} \\label{prop:global} Let $\\omega>|\\Omega|$, $\\vartheta \\in (-\\frac \\pi 2 , \\frac \\pi 2 )$, $\\lambda \\ge 0$, and $\\sigma < \\frac{d}{2(d-2)}$. \nThen, for any $\\psi_0 \\in\n\\Sigma$\nthere exists a unique global-in-time solution $ \\psi \\in C([0,\\infty); \\Sigma)$ to \\eqref{NLS_diss}.\n\\end{proposition}\n\\begin{proof}\nIn view of the blow-up alternative stated in Proposition \\ref{prop:loc_ex}, all we need to show is that the $\\Sigma$-norm remains bounded for all $t\\ge 0$. \nLemma \\ref{lem:Ebound} implies that this is the case, as soon as we we can show that both $M(t)$ and $E(t)$ are bounded. \nIn order to do so, we first consider the case $\\mu <0$ and recall that $\\cos \\vartheta >0$ for $\\vartheta \\in (-\\frac \\pi 2 , \\frac \\pi 2 )$. 
In this case \nidentity \\eqref{eq:energy} implies\n\\[\nE(t) + |\\mu| M(t) \\le F(0) <+\\infty,\n\\]\nand since both $E(t)$ and $M(t)$ are non-negative, we directly infer the required bound on the mass and energy.\n\nOn the other hand, for $\\mu \\ge 0$, identity \\eqref{eq:mass} yields (since $\\lambda \\ge 0$)\n\\[\nM(t) \\le M(0) + 2 \\mu \\cos \\vartheta \\int_0^t M(s) \\, ds,\n\\]\nand hence, Gronwall's lemma implies\n\\begin{equation}\\label{Mbound}\nM(t) \\le M(0)\\left( 1+ 2 \\mu t \\cos \\vartheta \\, e^{2\\mu t \\cos \\vartheta } \\right).\n\\end{equation}\nUsing this estimate in identity \\eqref{eq:energy} we obtain\n\\[\n E(t) \\le F(0) + \\mu M(t) \\le E(0) + 2 \\mu^2 t \\cos \\vartheta M(0) e^{2\\mu t \\cos \\vartheta } .\n\\]\nThe right hand side is finite for all $t\\ge0$, and thus the assertion is proved.\n\\end{proof}\n\n\\begin{remark}\nThe global in-time strong solutions constructed above are of the same type as the corresponding solutions for NLS with quadratic potentials, cf. \\cite{AMS, Car05}. \nIt is certainly possible to, alternatively, construct global weak solutions to \\eqref{NLS_diss} as has been done for the usual GL model in, e.g., \\cite{DGL, GV1}. \nBut since we consider the equation \\eqref{NLS_diss} as a \ntoy model describing possible relaxation phenomena in the mean-field dynamics of BEC, we have decided to remain as close as possible to the corresponding NLS theory. \nIn particular, we do not make any use of the strong smoothing property of the linear (heat type) semigroup $S_\\Omega(t)$ for $\\vartheta \\in (-\\frac{\\pi}{2}, \\frac \\pi 2)$.
\nWe finally note that our set-up makes it possible to directly generalize the inviscid limit results of \\cite{Wu} to our model.\n\\end{remark}\n\n\\subsection{Asymptotically vanishing solutions}\n\nThe discussion in Section \\ref{sec:linear} shows that solutions to the linear time evolution ($\\lambda =0$) asymptotically vanish, provided \n$\\mu < E_{\\Omega,1}$, i.e., the lowest (positive) energy eigenvalue of $H_\\Omega$. \nWe shall prove that the same is true in the nonlinear case $\\lambda >0$.\n\n\\begin{proposition} Let $\\vartheta \\in (-\\frac \\pi 2 , \\frac \\pi 2 )$, $\\lambda \\ge 0$, $\\omega>|\\Omega|$, and $\\psi \\in C([0,\\infty), \\Sigma)$ be a solution of \\eqref{NLS_diss} with \n$\\mu < E_{\\Omega,1}$. Then $\\| \\psi(t,\\cdot)\\|_{L^2} \\to 0$ exponentially fast, as $t\\to +\\infty$.\n\\end{proposition}\n\\begin{proof}\nSince $\\lambda \\ge 0$, we have $E(t) \\ge \\langle H_\\Omega \\psi(t), \\psi(t)\\rangle_{L^2}$, and the decomposition \\eqref{decomp} together with \\eqref{eigenvalue} yields $\\langle H_\\Omega \\psi(t), \\psi(t)\\rangle_{L^2} \\ge E_{\\Omega,1} M(t)$. The identity \\eqref{mass-derivative} therefore implies\n\\[\n\\frac{d}{dt} M(t) \\le -2\\cos \\vartheta \\, (E_{\\Omega,1} - \\mu) M(t),\n\\]\nwhere we have also used that $E_{\\Omega,1} - \\mu >0$, and $M(t) = \\sum_{n=1}^\\infty |c_n(t)|^2$.\nThe inequality above can thus be rewritten as\n\\[\n\\frac{d}{dt} \\left(e^{+2 t\\cos \\vartheta (E_{\\Omega,1} - \\mu) } M(t) \\right) \\le 0,\n\\]\nwhich after an integration in time implies \n\\[\nM(t)\\le M(0) e^{-2 t\\cos \\vartheta (E_{\\Omega,1} - \\mu) } \\xrightarrow{t\\to +\\infty}0,\n\\]\nsince $\\vartheta \\in (-\\frac \\pi 2 , \\frac \\pi 2 )$. \n\\end{proof}\n\nAt this point, it is unclear if the decay rate given above is indeed sharp.\n\n\\begin{remark} In the case where $\\mu <0$, one does not need to use the decomposition of $\\psi$ via the spectral subspaces of $H_\\Omega$, at the expense of a slightly worse decay rate. \nIndeed, for $\\mu<0$, the inequality \\eqref{mass-derivative} directly yields\n\\[\n\\frac{d}{dt} M(t) \\le - 2 |\\mu| \\cos \\vartheta M(t),\n\\]\nand thus\n\\[\n M(t) \\le M(0) e^{-2 t|\\mu| \\cos \\vartheta} , \\quad \\forall t \\ge 0.\n\\]\nNote that for $\\mu <0$ there are no nontrivial steady states $\\varphi(x)\\not =0$, satisfying \\eqref{stat_NLS}.
This can be seen by \nmultiplying equation \\eqref{stat_NLS} with $\\bar \\varphi$, integrating in $x\\in \\R^d$, \nand recalling the restriction $\\omega>|\\Omega|$, which implies that $\\mu$ has to be non-negative.\n\\end{remark}\n\n\\section{Bounds on the mass and energy}\\label{sec:uniform}\n\nIn this section we shall prove the existence of absorbing balls in both $L^2(\\R^d)$ and $\\Sigma$ for solutions to \\eqref{NLS_diss}.\nIn view of the discussion on the linear model, cf. Section \\ref{sec:linear}, this might seem surprising, given that for general $\\mu>0$ we can expect exponentially growing modes. \nHowever, we shall see that for $\\lambda>0$, the nonlinearity, combined with the confining potential, mixes the dynamics in a way that \nmakes it possible to infer a uniform bound on the mass and energy, and hence on the $\\Sigma$--norm of the solution.\nTo this end, the following lemma is the key technical step.\n\n\\begin{lemma}\\label{lem:glob_bound}\nLet $\\lambda > 0$, $\\omega > |\\Omega|$ and $0 < \\sigma < \\frac{d}{2(d-2)}$. Then there exists a constant $C=C(\\omega, \\Omega, \\lambda, \\sigma)>0$, such that for all $\\psi \\in \\Sigma$\n\\[\n M(\\psi) \\le C E(\\psi)^{\\frac{\\sigma\\theta + 1}{\\sigma+1}} ,\\quad \\text{with $\\theta = \\frac{d\\sigma}{2\\sigma + 2 + d\\sigma}$.}\n \\]\n\\end{lemma}\n\\begin{proof}\nThe proof of this result relies on the following localization property: \n For all $d\\ge1$ and all $p\\ge2$ and any $f\\in C^\\infty_0(\\R^d)$:\n\\begin{equation}\\label{localize}\n \\| f \\|_{L^2(\\R^d)} \\le { 2} \\| x f \\|_{L^2(\\R^d)}^{\\theta} \\| f \\|_{L^p(\\R^d)}^{1-\\theta},\n\\end{equation}\nwith \n\\[\n\\theta = \\frac{d(\\frac{1}{2}-\\frac{1}{p})}{1+d(\\frac{1}{2}-\\frac{1}{p})} = \\frac{d(p-2)}{2p + d(p-2)}.\n\\]\nIn order to show this, let $B_r$ denote the ball around the origin of radius $r>0$.
We rewrite \n\\begin{align*}\n \\|f\\|_{L^2(\\R^d)} &\\le \\|f\\|_{L^2(B_r)} + \\|f\\|_{L^2(\\R^d\\setminus B_r)} \\le r^{d(\\frac{1}{2} - \\frac{1}{p})} \\|f\\|_{L^p(B_r)} + \\frac{1}{r} \\|x f\\|_{L^2(\\R^d\\setminus B_r)}\\\\\n &\\le r^{d(\\frac{1}{2} - \\frac{1}{p})} \\|f\\|_{L^p(\\R^d)} + \\frac{1}{r} \\|x f\\|_{L^2(\\R^d)}.\n\\end{align*}\nThe right-hand side is minimal if both summands are of the same order of magnitude, i.e.\n\\[\n r^{1 + d(\\frac{1}{2} - \\frac{1}{p})} = \\frac{\\|x f\\|_{L^2(\\R^d)}}{\\|f\\|_{L^p(\\R^d)}}.\n\\]\nWith this choice of $r$, the estimate \\eqref{localize} follows and a density argument allows us to extend it to any $f\\in \\Sigma$. \nSpecifying $p=2\\sigma+2$ consequently yields\n\\begin{equation}\\label{proof_glob_bound}\n\\|\\psi \\|_{L^2}^2 \\le { 2} \\bigg( \\int_{\\R^d} |x|^2 |\\psi(x)|^2 \\;dx \\bigg)^{\\theta} \\bigg( \\int_{\\R^d} |\\psi(x)|^{2\\sigma+2} \\;dx \\bigg)^{\\frac{1-\\theta}{\\sigma+1}},\n\\end{equation}\nwhere $\\theta = \\frac{d\\sigma}{2\\sigma + 2 + d\\sigma}.$ \nIn view of Lemma \\ref{lem:Ebound}, both factors on the right hand side of \\eqref{proof_glob_bound} are bounded by the energy. \nMore precisely, \n\\[\n M(\\psi) \\le 2(c E(\\psi))^{\\theta + \\frac{1-\\theta}{\\sigma+1}} = C E(\\psi)^{\\frac{\\sigma\\theta+1}{\\sigma+1}},\n\\]\nwhere $C=2c^{\\frac{\\sigma\\theta+1}{\\sigma+1}}$ and $c=c(\\omega, \\Omega, \\lambda, \\sigma)>0$ is the constant from Lemma~\\ref{lem:Ebound}. \n\\end{proof}\n\\begin{remark}\nNote that in order to infer this bound one needs \nthe presence of {\\it both} the confinement and the nonlinearity, since the proof requires $\\sigma>0$, $\\lambda>0$ and $\\omega>0$.
Moreover, one checks that $C\\to +\\infty$, as $|\\Omega| \\to \\omega$.\n\\end{remark}\n\n\nWith this result in hand, we can deduce global bounds on $M(t)$ and $E(t)$ along solutions of \\eqref{NLS_diss}.\n\\begin{proposition}\\label{prop:Ebound}\n Let $\\psi \\in C([0,\\infty), \\Sigma)$ be a solution to \\eqref{NLS_diss} with $\\vartheta \\in (-\\frac \\pi 2 ,\\frac \\pi 2 )$.\n Under the assumptions of Lemma \\ref{lem:glob_bound}, if additionally $\\mu>0$, there exists a constant $K=K(\\omega, \\Omega, \\sigma, \\lambda, \\mu)>0$, independent of time, \n such that\n\\[\n E(t) \\le K + e^{- t \\mu \\cos \\vartheta} E(0) , \\quad \\forall \\, t\\ge 0.\n\\]\n\\end{proposition}\n\\begin{proof}\nWe first note that Lemma \\ref{lem:glob_bound} and the differential inequality \\eqref{mass-derivative} imply\n\\[\n\\frac{d}{dt}M(t) \\le -2\\cos \\vartheta\\, E(t) + C \\mu E(t)^{\\frac{\\sigma\\theta + 1}{\\sigma+1}} .\n\\]\nNow, for any $\\vartheta \\in (-\\frac \\pi 2 ,\\frac \\pi 2 )$ and $\\tilde\\theta= \\frac{\\sigma\\theta + 1}{\\sigma+1}$, by Young's inequality, we obtain\n\\[\n E(t)^{\\tilde\\theta} \\le \\frac{\\cos\\vartheta}{C\\mu} E(t) + (1-\\tilde \\theta) \\left(\\frac{ C \\mu \\tilde \\theta}{\\cos \\vartheta}\\right)^{\\frac{\\tilde \\theta} {1-\\tilde \\theta}} = \n \\frac{\\cos \\vartheta}{C\\mu} \\, E(t)+\\tilde C,\n\\]\nwhere $\\tilde C>0$ depends on all the parameters involved, but not on time.
Thus, we have\n\\[\n\\frac{d}{dt}M(t) \\le - \\cos\\vartheta E(t) + \\mu C \\tilde C.\n\\]\nOn the other hand, identity \\eqref{eq:energy} implies \n\\[\nE(t) -E(t_0) \\leq \\mu M(t)-\\mu M(t_0), \\qquad 0 \\leq t_0 \\leq t,\n\\]\nand hence\n\\[\nE(t)-E (s) \\le \\int_{s}^t (- \\mu \\cos \\vartheta \\, E(\\tau) + \\mu^2 C \\tilde C )\\, d\\tau, \\qquad 0 \\leq s \\leq t,\n\\]\nas well as\n\\[\nE(t)-E (s) \\ge \\int_{s}^t (- \\mu \\cos \\vartheta \\, E(\\tau) + \\mu^2 C \\tilde C )\\, d\\tau, \\qquad 0 \\leq t \\leq s.\n\\]\n\nNow, given any positive bump function $\\chi \\in C_0^{\\infty}((t-\\epsilon,t+\\epsilon))$, such that\n$\\chi' \\ge 0$ on $(t-\\epsilon, t)$ and $\\chi' \\le 0$ on $(t,t+\\epsilon)$, we multiply by $\\chi'(s)$ and integrate in $s$, to obtain\n\\[\n\\begin{split}\n\\int_{t-\\epsilon}^{t+\\epsilon} [E(t)-E(s)]\\chi'(s)\\, ds \\le& \\int_{t-\\epsilon}^{t+\\epsilon} \\int_s^t \n(- \\mu \\cos \\vartheta \\, E(\\tau) + \\mu^2 C \\tilde C ) \\chi'(s) \\, d\\tau \\, ds\\\\\n =& \\int_{t-\\epsilon}^{t} \\int_{t-\\epsilon}^\\tau \n(- \\mu \\cos \\vartheta \\, E(\\tau) + \\mu^2 C \\tilde C ) \\chi'(s) \\, ds \\, d\\tau \\\\\n&-\\int_{t}^{t+\\epsilon} \\int_\\tau^{t+\\epsilon} \n(- \\mu \\cos \\vartheta \\, E(\\tau) + \\mu^2 C \\tilde C ) \\chi'(s) \\, ds \\, d\\tau \\\\\n =& \\int_{t-\\epsilon}^{t+\\epsilon} \n(- \\mu \\cos \\vartheta \\, E(\\tau) + \\mu^2 C \\tilde C ) \\chi(\\tau) \\, d\\tau.\\\\\n\\end{split}\n\\]\nA similar computation gives the same inequality for a negative bump function $\\chi \\in C_0^{\\infty}((t-\\epsilon,t+\\epsilon))$, such that\n$\\chi' \\le 0$ on $(t-\\epsilon, t)$ and $\\chi' \\ge 0$ on $(t,t+\\epsilon)$.
Since an arbitrary test function can be written as a linear\ncombination of positive and negative bump functions, we have \n\\[\n-\\int_{t_0}^t E(\\tau) \\chi'(\\tau) \\, d \\tau \\le \\int_{t_0}^t \\left(- \\mu \\cos \\vartheta \\, E(\\tau) + \\mu^2 C \\tilde C \\right) \\, \\chi(\\tau) \\, d\\tau,\n\\]\nfor any $\\chi \\in C_0^{\\infty}((t_0,t))$. Here, we have also used the fact that $\\chi$ has compact support on $(t_0,t)$. \nChoosing $\\chi(\\tau)= e^{\\mu \\tau \\cos \\vartheta } \\phi(\\tau)$ with $\\phi \\in C_0^{\\infty}((t_0,t))$, we obtain\n\\[\n-\\int_{t_0}^t E(\\tau) \\left(e^{\\mu \\tau \\cos \\vartheta } \\phi(\\tau)\\right)' \\, d \\tau \\le \\int_{t_0}^t \\left(- \\mu \\cos \\vartheta \\, E(\\tau) + \\mu^2 C \\tilde C \\right) \\, e^{\\mu \\tau \\cos \\vartheta } \\phi(\\tau) \\, d\\tau,\n\\]\nand thus \n\\[\n\\begin{split}\n-\\int_{t_0}^t E(\\tau) e^{\\mu \\tau \\cos \\vartheta } \\phi'(\\tau) \\, d \\tau &\\le\n\\int_{t_0}^t \\mu^2 C \\tilde C e^{\\mu \\tau \\cos \\vartheta } \\phi(\\tau) \\, d\\tau\\\\\n&\\le \\int_{t_0}^t \\frac{\\mu^2 C \\tilde C}{\\mu \\cos \\vartheta}(1- e^{\\mu \\tau \\cos \\vartheta }) \\phi'(\\tau) \\, d\\tau.\n\\end{split}\n\\]\nHence\n\\[\nE(t) e^{\\mu t \\cos \\vartheta } + \\frac{\\mu^2 C \\tilde C}{\\mu \\cos \\vartheta}(1- e^{\\mu t \\cos \\vartheta })\n\\le E(t_0) e^{\\mu t_0 \\cos \\vartheta } + \\frac{\\mu^2 C \\tilde C}{\\mu \\cos \\vartheta}(1- e^{\\mu t_0 \\cos \\vartheta }),\n\\]\nfor almost all $0\\leq t_0 \\leq t$. 
In summary, for almost all $t\ge 0$ we have\n\[\nE(t) \le E(0) e^{-\mu t \cos \vartheta } +K(1- e^{-\mu t \cos \vartheta }),\n\]\nwhere\n\[\nK= \frac{\mu C \tilde C}{\cos \vartheta}.\n\]\nHowever, since $\psi\in C([0,\infty);\Sigma)$ implies that $E(t)$ is continuous in time, we \nconsequently infer the inequality for all $t\ge 0$.\end{proof} \n\n\begin{remark}\nThe proof above is slightly complicated due to the fact that we cannot use the energy identity \eqref{eq:energy} in its differentiated form \eqref{energy-derivative}, see \nthe discussion below the proof of Lemma \ref{lem:Lyapunov}. \nIf we ignore this problem for the moment, then we have\n\[\n\frac{d}{dt} E(t) \le\mu\frac{d}{dt}M(t)\le - \mu \cos \vartheta E(t) + \mu^2 C \tilde C,\n\]\nwhich directly allows us to conclude the assertion proved above.\n\end{remark}\n\nIn view of Lemma \ref{lem:glob_bound} the bound on $E(t)$ obtained above implies a similar bound on $M(t)$. In particular, \nthere is some constant $\rho_M >0$ and a function $t_M(\cdot)$ such that for all solutions $\psi\in C([0,\infty);\Sigma)$ of \eqref{NLS_diss} it holds\n\[\n\| \psi (t, \cdot)\|_{L^2} \le \rho_M, \quad \forall t \ge t_M(M(0)).\n\]\nTherefore $$\{ \psi \in L^2(\R^d): \|\psi\|_{L^2} \le \rho_M \}\subset L^2(\R^d)$$\nis an absorbing ball for trajectories $t \mapsto \psi(t, \cdot)$. 
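The absorbing mechanism behind this bound can be checked on the underlying scalar model $E'(t) = -\mu \cos\vartheta\, E(t) + \mu^2 C \tilde C$, whose explicit solution realizes the estimate above with equality. The following minimal numerical sketch compares a forward-Euler integration of this ODE with the closed-form bound; all parameter values ($\mu$, $\vartheta$, $C$, $\tilde C$, $E(0)$) are arbitrary placeholders, not values coming from the analysis.

```python
import numpy as np

# Scalar model E'(t) = -mu*cos(theta)*E(t) + mu^2*C*Ct (the differential
# inequality saturated).  Its solution realizes the bound
#   E(t) = E(0) e^{-mu t cos(theta)} + K (1 - e^{-mu t cos(theta)}),
# with K = mu*C*Ct/cos(theta).  All parameter values are placeholders.
mu, theta, C, Ct, E0 = 0.8, 0.3, 1.5, 2.0, 10.0
a, b = mu * np.cos(theta), mu**2 * C * Ct
K = mu * C * Ct / np.cos(theta)          # equals b/a

# forward-Euler integration of the saturated ODE
dt, T = 1e-3, 20.0
ts = np.arange(0.0, T, dt)
E = np.empty_like(ts)
E[0] = E0
for i in range(len(ts) - 1):
    E[i + 1] = E[i] + dt * (-a * E[i] + b)

closed = E0 * np.exp(-a * ts) + K * (1.0 - np.exp(-a * ts))
print(np.max(np.abs(E - closed)))  # small Euler discretization error
print(abs(E[-1] - K))              # trajectory absorbed near the level K
```

For any choice of $E(0)$ the trajectory relaxes to the level $K$ at the rate $\mu\cos\vartheta$, which is exactly the absorbing-ball statement above.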
Similarly, we know that there exist a \n$\rho_\Sigma\ge \rho_M$ and a function $t_\Sigma(\cdot)$ such that\n\[\n\|\psi (t, \cdot)\|_\Sigma \le \rho_\Sigma, \quad \forall t \ge t_\Sigma(\|\psi(0)\|_\Sigma).\n\]\nIn other words, \n\begin{equation}\label{X}\nX:=\{ \psi \in \Sigma: \|\psi\|_{\Sigma} \leq \rho_\Sigma \}\n\end{equation}\nis an absorbing ball in $\Sigma$ for trajectories $t\mapsto \psi(t, \cdot)$.\nIn our study of the long-time dynamics of \eqref{NLS_diss}, the set $X$ will play the role of a {\it phase space}.\n\n\section{The global attractor and its properties} \label{sec:attractor}\nIn the previous section we proved that solutions $\psi(t)$ exist globally in $\Sigma$, and, moreover, all such solutions\nremain within an absorbing ball $X \subset \Sigma$ for $t>0$ large enough. It is therefore natural to ask whether\nthere exists an $\mathcal{A} \subset \Sigma$ that {\it attracts} all trajectories $t\mapsto \psi(t, \cdot)\in \Sigma$. Unfortunately,\nclassical theories of global attractors (see, e.g., \cite{CV, Te}) do not apply to our situation, as they typically require asymptotic\ncompactness, which is unknown in $\Sigma$. However, the trajectories might still converge to the global attractor $\mathcal{A}$\nin some weaker metric, say $L^2$. To prove this we revisit the rather general framework of evolutionary systems introduced in \cite{C5} and adapt it to our situation. \n\n\n\subsection{Existence of a global attractor}\n\n\nFirst, recall that our phase space is the metric space $(X,\mathrm{d}_{L^2}(\cdot,\cdot))$, where $X\subset \Sigma$ is given by \eqref{X} and \n$\mathrm{d}_{L^2}({\psi,\phi}) = \| {\psi - \phi} \|_{L^2}$. 
We note that $X$ is $\mathrm{d}_{L^2}$-compact.\nIn addition, we also have the stronger $\Sigma$-metric $\mathrm{d}_\Sigma({\psi,\phi}):= \| {\psi-\phi}\|_{\Sigma}$ on $X$, which satisfies: If $\mathrm{d}_\Sigma(\psi_n, \phi_n) \to 0$ as $n \to \infty$ for some\n$\psi_n, \phi_n \in X$, then $\mathrm{d}_{L^2}(\psi_n, \phi_n) \to 0$ as $n \to \infty$.\nNote that any $\Sigma$-compact set is $L^2$-compact, and any $L^2$-closed set is $\Sigma$-closed.\n\nNow, let $C([a, b];X_\bullet)$, where $\bullet = \Sigma$ or $L^2$, be the space of $\mathrm{d}_{\bullet}$-continuous $X$-valued\nfunctions on $[a, b]$ endowed with the metric\n\[\n\,d_{C([a, b];X_\bullet)}(\psi,\phi) := \sup_{t\in[a,b]}\mathrm{d}_{\bullet}(\psi(t),\phi(t)). \n\]\nAlso, let $C([a, \infty);X_\bullet)$ be the space of $\mathrm{d}_{\bullet}$-continuous\n$X$-valued functions on $[a, \infty)$ endowed with the metric\n\[\n\,d_{C([a, \infty);X_\bullet)}(\psi,\phi) := \sum_{T\in \mathbb{N}} \frac{1}{2^T} \, \frac{\sup\{\mathrm{d}_{\bullet}(\psi(t),\phi(t)):a\leq t\leq a+T\}}\n{1+\sup\{\mathrm{d}_{\bullet}(\psi(t),\phi(t)):a\leq t\leq a+T\}}.\n\]\nIn order to define a general evolutionary system, we introduce\n\[\n\mathcal{T} := \{ I: \ I=[T,\infty) \subset \mathbb{R}, \mbox{ or } \nI=(-\infty, \infty) \},\n\]\nand for each $I \in \mathcal{T}$, we denote \nthe set of all $X$-valued functions on $I$ by $\mathcal X(I)$.\n\n\begin{definition} \label{Dc}\nA map $\mathcal{E}$ that associates to each $I\in \mathcal{T}$ a subset\n$\mathcal{E}(I) \subset \mathcal X(I)$ will be called an {\it evolutionary system} if\nthe following conditions are satisfied:\n\n\begin{itemize}\n\item[(i)] $\mathcal{E}([0,\infty)) \ne \emptyset$.\n\item[(ii)]\n$\mathcal{E}(I+s)=\{\psi(\cdot): \ \psi(\cdot +s) \in \mathcal{E}(I) \}$ for\nall $s \in \mathbb{R}$.\n\n\item[(iii)] For all pairs $I_2\subset I_1 \in 
\\mathcal{T}$: $\\{\\psi(\\cdot)|_{I_2} : \\psi(\\cdot) \\in \\mathcal{E}(I_1)\\}\n\\subset \\mathcal{E}(I_2)$.\n\n\\item[(iv)]\n$\\mathcal{E}((-\\infty , \\infty)) = \\{\\psi(\\cdot) : \\ \\psi(\\cdot)|_{[T,\\infty)}\n\\in \\mathcal{E}([T, \\infty)) \\ \\forall T \\in \\mathbb{R} \\}.$\n\\end{itemize}\nIn general, $\\mathcal{E}(I)$ will be referred to as {\\it set of trajectories} on the time interval $I$, and trajectories in $\\mathcal{E}((-\\infty,\\infty))$ will be called {\\it complete}.\n\\end{definition}\n\nWe now consider the specific evolutionary system induced by the family of trajectories of \\eqref{NLS_diss}\nin $X$. More precisely, we set\n\\begin{equation}\n\\begin{split}\n\\mathcal{E}([T,\\infty)) := \\Big \\{ &\\, \\psi \\in C([T, \\infty);X) \\ \\text{a solution to \\eqref{NLS_diss}, with $\\vartheta \\in \\big (-\\frac \\pi 2 ,\\frac \\pi 2 \\big)$,} \\\\ \n&\\, \\text{$\\lambda, \\mu > 0$, $\\omega > |\\Omega| $, and $0 < \\sigma < \\frac{d}{2(d-2)}$} \\Big \\} .\n\\end{split}\n\\label{eq:syst}\n\\end{equation}\nClearly, the properties (i)--(iv) above hold for the evolutionary system associated to \\eqref{NLS_diss}. In addition, due to Proposition \\ref{prop:global}, for\nany $\\psi_0 \\in X$ there exists $\\psi \\in \\mathcal{E}([{T},\\infty))$ with $\\psi({T})=\\psi_0$. Standard techniques then imply the following lemma:\n\\begin{lemma} \\label{l:convergenceofLH}\nLet $(\\psi_n)_{n\\in \\N}$ be a sequence of functions, such that $\\psi_n \\in \\mathcal{E}([T_1, \\infty))$ for all $n\\in \\N$. Then for any $T_2>T_1$ there exists a sub-sequence $(\\psi_{n_j})_{j\\in \\N}$ which converges in \n$C([T_1, T_2]; X_{L^2})$ to $\\psi\\in \\mathcal{E}([T_1, \\infty))$.\n\\end{lemma}\n\n\\begin{proof}\nSince $X$ is compact in $L^2(\\R^d)$, there exists a sequence $(\\psi_{n_j})_{j\\in \\N}$ such that $\\psi_{n_j}(T_1) \\to \\tilde \\psi$ for some $\\tilde \\psi \\in L^2(\\R^d)$. 
\nHowever, since weak lower semicontinuity of the $\Sigma$-norm and the definition of $X$ yield\n\[\n\| \tilde \psi \|_\Sigma \le \liminf_{j \to \infty} \| \psi_{n_j}(T_1, \cdot) \|_\Sigma \le \rho_\Sigma,\n\]\nwe have that $\tilde \psi \in X$. In view of Proposition \ref{prop:global} there exists $\psi \in \mathcal{E}([T_1, \infty))$ with $\psi(T_1) = \tilde \psi$. \nContinuous dependence on the initial data then gives the desired result.\n\end{proof}\n\n\nUsing this, we can prove one of the main structural properties of the set of trajectories induced by \eqref{NLS_diss}:\n\begin{proposition} \label{prop:compact} $\mathcal{E}([0,\infty))$ is a compact set in $C([0,\infty); X_{L^2})$.\n\end{proposition}\n\begin{proof}\nFirst note that $\mathcal{E}([0,\infty)) \subset C([0,\infty);X_{L^2})$. Now take any sequence\n$(\psi_n)_{n\in \N} \subset \mathcal{E}([0,\infty))$.\nThanks to Lemma~\ref{l:convergenceofLH}, there exists\na subsequence, still denoted by $\psi_n$, that converges\nto some $\psi^{1} \in \mathcal{E}([0,\infty))$ in $C([0, 1];X_{L^2})$ as $n \to \infty$.\nPassing to a subsequence and dropping a subindex once more, we obtain that\n$\psi_n \to \psi^2$ in $C([0, 2];X_{L^2})$ as $n \to \infty$ for some \n$\psi^{2} \in \mathcal{E}([0,\infty))$.\nNote that $\psi^1(t)=\psi^2(t)$ on $[0, 1]$.\nContinuing and picking a diagonal sequence, we obtain a subsequence $\psi_{n_j}$\nof $\psi_n$ that converges\nto some $\psi \in \mathcal{E}([0,\infty))$ in $C([0, \infty);X_{L^2})$ as $j \to \infty$.\n\end{proof}\n\n\nIn order to proceed further, we denote, as usual, the set of all subsets of $X$ by $P(X)$.\nFor every $t \ge 0$, we can then define a map $R(t):P(X) \to P(X)$ by\n\[\nR(t)A := \{\psi(t): \psi(0) \in A \ \text{for some} \ \psi \in \mathcal{E}([0,\infty))\}, \quad\n\text{for any $A \subset X.$}\n\]\nNote that the assumptions on $\mathcal{E}$ imply that $R(s)$ enjoys\nthe following 
property:\n\begin{equation} \label{eq:propR(T)}\nR(t+s)A \subset R(t)R(s)A, \qquad A \subset X,\quad t,s \ge 0.\n\end{equation}\n\n\begin{definition}\nA set $A$ is called {\it invariant} under the dynamics if $R(t)A = A$ for all $t\ge 0$.\n\end{definition}\n\nWe also recall the standard notion of an $\omega$-limit associated to an evolutionary system (see also \cite{Te}).\n\n\begin{definition}The ${\omega}_{\bullet}${\it-limit} ($\bullet= \Sigma, \, L^2$) of a set $A\subset X$ is\n\[{\omega}_{\bullet}(A):=\bigcap_{T\ge0}\overline{\bigcup_{t\ge T}R(t)A}^{\bullet}.\]\end{definition}\n\nWe also note that an equivalent definition of the ${\omega}_{\bullet}$-limit set is given by\n\[\n\begin{split}\n{\omega}_{\bullet}(A)=\big\{&{\psi}\in X: \mbox{ there exist sequences }\nt_n \xrightarrow{n\to \infty}\infty \mbox { and } {\psi_n} \in R(t_n)A,\\\n& \mbox{such that } {\psi_n}\xrightarrow{n\to \infty}\psi \mbox{ in the } \mathrm{d}_{\bullet}\mbox{-metric} \big \}.\n\end{split}\n\]\n\nFinally, we will give a precise definition of what we mean by an attractor.\n\begin{definition}\nA set $A \subset X$ is a $\mathrm{d}_{\bullet}${\it -attracting set}, if it uniformly\nattracts $X$ in the $\mathrm{d}_{\bullet}$-metric, i.e. 
\n\\[\n\\liminf_{{\\phi} \\in A}\\mathrm{d}_{\\bullet}(R(t) X, {\\phi}) \\xrightarrow{t\\to +\\infty}0.\n\\]\nA set\n$\\mathcal{A} \\subset X$ is a\n$\\mathrm{d}_{\\bullet}$-{\\it global attractor} if\n$\\mathcal{A}$ is a minimal $\\mathrm{d}_{\\bullet}$-closed\n$\\mathrm{d}_{\\bullet}$-attracting set.\n\\end{definition}\n\nAfter these preparations, we are able to prove the main result of this section:\n\\begin{corollary} \\label{thm:Attractor}\nThe evolutionary system \\eqref{eq:syst} possesses a unique $d_{L^2}$-global attractor $\\mathcal{A}=\\omega_{L^2}(X)$, which has the following structure\n\\[\n\\mathcal{A}=\\{\\psi_0: \\psi_0 =\\psi(0) \\mbox { for some } \\psi\\in \\mathcal{E}((-\\infty,\\infty))\\}\\\\\n\\]\nFurthermore, it holds:\n\\begin{enumerate}\n\\item For any $\\epsilon > 0$ and $T > 0$, there exists a $t_0\\in \\R$, such that for any $t^* > t_0$, every trajectory\n $\\psi\\in\\mathcal{E}([0,\\infty))$ satisfies $\\mathrm{d}_{L^2}(\\psi(t), \\phi(t)) < \\varepsilon$, for all $t \\in [t^*, t^* +T ]$, where $\\phi \\in \\mathcal{E} ((-\\infty, \\infty))$ is some complete trajectory, i.e., \n the uniform tracking property holds.\n\\item If the $\\Sigma$ global attractor exists, then it coincides with $\\mathcal{A}$.\n\\item $\\mathcal{A}$ is connected in $L^2$.\n\\item $\\mathcal A$ is the maximal invariant set.\n\\end{enumerate}\n\\end{corollary}\n\n\\begin{proof}\nAssertion (1) and (2) follow from the results proved in \\cite{C5}. To this end, one first shows that the $\\omega_{L^2}$-limit of $X$ is an attracting set, which by definition is closed \nand the minimal set satisfying these two properties. Then, using Proposition \\ref{prop:compact} and a diagonalization process, one can prove the structural properties of $\\mathcal A$, cf. 
\n\\cite[Theorem 5.6]{C5}.\nThe fact that $\\mathcal A$ is connected then follows from Lemma~\\ref{l:convergenceofLH} and uniqueness: We argue by contradiction and hence assume \nthat $\\mathcal{A}$ is not $L^2$-connected. Then there exist disjoint $\\mathrm{d}_{L^2}$-open sets $U_1, U_2 \\in X$\nsuch that $\\mathcal{A} \\subset U_1 \\cup U_2$ and $\\mathcal{A} \\cap \\, U_1$, $\\mathcal{A} \\cap U_2$ are nonempty. Define\n\\[\nX_j= \\{ {\\psi}\\in X: \\omega_{L^2}({\\psi}) \\in U_j\\}, \\qquad j=1,2.\n\\] \nSince $U_1$ and $U_2$ are disjoint, we also have that $X_1$, $X_2$ are disjoint. Continuity of trajectories implies that\n$X_1 \\cup X_2 =X$. Since $\\mathcal{A}$ is $\\mathrm{d}_{L^2}$-attracting, there exists $T> 0$ such that\n\\[\nR(t)X \\in U_1 \\cup U_2, \\qquad \\forall t > T.\n\\]\nBy continuity of trajectories we have that for each $\\psi \\in \\mathcal{E}([0,\\infty))$, either $\\psi(t) \\in U_1$ for all $t >T$, or $\\psi(t) \\in U_2$ for all $t >T$. \nThis implies that both $X_1$ and $X_2$ are nonempty. Moreover, Lemma~\\ref{l:convergenceofLH} implies that $X_1$ and\n$X_2$ are $\\mathrm{d}_{L^2}$-open. This contradicts the fact that $X$ is $\\mathrm{d}_{L^2}$-connected. \nFinally we note that the structure of $\\mathcal A$, together with uniqueness of solutions, imply that $\\mathcal A$ is an invariant set. \nClearly, only complete trajectories are invariant, hence $\\mathcal A$ is the maximal invariant set.\n\\end{proof}\n\n\\begin{remark}\nIn the case of the usual GL equation (posed on bounded domains $D\\subset \\R^d$) many more details concerning the global attractor are known, see, e.g., \\cite{MM, Te, TW}. 
\nIt is an interesting open problem to check which of these results can be extended to our situation and what the main structural differences between \eqref{NLS_diss} and the usual GL equation are.\n\end{remark}\n\n\subsection{Dimension of the attractor} \label{sec:dimension} Here we follow the by-now classical theory of estimating the Lyapunov numbers associated to $\mathcal{E}([0,\infty))$ by studying \nthe evolution of an $m$-dimensional volume element of our phase space $X$, cf. \cite[Chapter V]{Te} for a general introduction. Using this technique, \nthe case of the usual GL equation on bounded domains $D\subset \R^n$, \nwith $n=1,2$, is studied, e.g., in \cite[Chapter VI, Section 7]{Te}. In our case, the same idea works, but it requires several adaptations on a technical level. \n\nTo this end, we first rewrite \eqref{NLS_diss} as\n\[\n\partial_t \psi = - e^{-i \vartheta } G(\psi),\quad \psi_{\mid t =0} = \psi_0,\n\]\nand, for any $\psi_0 \in \mathcal A$, consider the linearization around a given orbit $\psi(t) = R(t) \psi_0$, i.e.,\n\begin{equation}\label{eq:linear}\n\partial_t \phi = - e^{-i \vartheta } G'(\psi) \phi, \quad \phi_{\mid t =0} = \xi.\n\end{equation}\nHere, $\xi \in X$ and $G'$ denotes the Fr\'echet derivative\n\[\nG'(\psi) \phi = H_{\Omega} \phi - \mu \phi + \lambda \left(|{\psi}|^{2\sigma} \phi + \sigma \psi |\psi|^{2\sigma -2} \text{Re}\, (\overline \psi \phi) \right),\n\]\nwhere $H_\Omega$ is the linear Hamiltonian (with rotation) defined \nin \eqref{H}. It is easy to see that the linearized equation \eqref{eq:linear} admits a unique strong solution for any given $\xi \in X$ and $\psi\in \mathcal A$. 
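The volume-element representation used in the next step is the classical Liouville (Abel) identity. As a finite-dimensional sanity check, it can be verified for a full frame ($m=n$, so that $P_m$ is the identity) with a generic constant complex matrix $B$ standing in for $-e^{-i \vartheta } G'(\psi)$; this sketch is purely illustrative and does not involve the operator of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
# generic complex matrix standing in for -e^{-i*theta} G'(psi)
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# propagator exp(tB) via eigendecomposition (a random B is
# diagonalizable with probability one)
t = 0.7
w, V = np.linalg.eig(B)
exp_tB = V @ np.diag(np.exp(t * w)) @ np.linalg.inv(V)

# initial frame xi_1, ..., xi_n (columns) and its evolution under phi' = B phi
Xi = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Phi = exp_tB @ Xi

# Liouville identity: |phi_1 ^ ... ^ phi_n|(t) = |xi_1 ^ ... ^ xi_n| e^{t Re Tr B}
vol_t = abs(np.linalg.det(Phi))
predicted = abs(np.linalg.det(Xi)) * np.exp(t * np.real(np.trace(B)))
print(abs(vol_t - predicted) / predicted)  # small relative error
```

For $m<n$ the projection $P_m(s)$ onto the evolved frame enters the trace, which is exactly the formula displayed below.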
\nWe now consider $\\phi_1(t), \\dots, \\phi_m(t)$ solutions to \\eqref{eq:linear}, corresponding to initial data $\\xi_1, \\dots, \\xi_m$, $m\\in \\N$, and choose an \n$L^2$-orthonormal basis $\\chi_1(t), \\dots, \\chi_m(t)$ of \n\\[\nP_m(t) X:= \\text{span}\\{ \\phi_1(t), \\dots, \\phi_m(t)\\},\n\\]\nwhere $P_m$ denotes the corresponding orthogonal projection. Then, it is easy to see (cf. \\cite{Te}), that the evolution of the $m$-dimensional volume element in $X$ is given by\n\\[\n|\\phi_1(t)\\wedge \\dots \\wedge \\phi_m(t)| = | \\xi_1 \\wedge \\dots \\wedge \\xi_m| \\exp \\left(- \\int_0^t \\text{Re} \\, \\text{Tr}\\, e^{-i \\vartheta } G'(\\psi(s)) \\circ P_m(s) \\, ds \\right).\n\\]\nIn order to proceed, we first note that:\n\n\\begin{lemma}\\label{lem:collective} Let $H_0$ be given by \\eqref{H_0}. Then, for any orthonormal family $\\{ \\chi_j\\}_{j=1}^m \\subset L^2(\\R^d)$ there exists a constant $c=c(\\omega, d)>0$, such that\n\\[\n\\sum_{j=1}^m \\langle H_0 \\chi_j , \\chi_j \\rangle_{L^2} \\ge c \\, m^{1+1\/d}.\n\\]\n\\end{lemma}\n\n\\begin{proof}\nHaving in mind the form and the degeneracy of the eigenvalues stated Lemma \\ref{lem:H}, one checks that when counted with multiplicity $E_{0,m}\\sim m^{1\/d}$, as $m\\to \\infty$.\nThe desired result then follows directly from \\cite[Chapter VI, Lemma 2.1]{Te}. 
\end{proof}\n\nUsing this, we can prove the following result for the dimension of $\mathcal A$:\n\n\begin{proposition}\label{prop:dimension}\nConsider the dynamical system \eqref{eq:syst} and let $m$ be defined by\n\[\nm-1 < \left( \frac{2 \kappa_2 }{\kappa_1} \right)^{d\/(d+1)}\le m,\n\]\nwhere\n\[\n\kappa_1 = \frac{\gamma c}{4}\Big(1- \frac{\Omega^2}{\omega^2}\Big), \quad \kappa_2 = c' \gamma \mu^{1+d} \Big(1- \frac{\Omega^2}{\omega^2}\Big)^{-d}+\n\frac{c''( \lambda |\beta| )^{1+\alpha} }{\gamma^\alpha}\Big(1- \frac{\Omega^2}{\omega^2}\Big)^{-\alpha} \delta,\n\]\nwith $c, c', c'', \alpha, \tilde \alpha$ positive constants depending only on $\omega, d, \sigma$, and\n\[\n\delta = \limsup_{t\to \infty} \sup_{\psi_0 \in \mathcal A} \left( \frac{1}{t} \int_0^t \| R(s) \psi_0\|^{2\sigma \tilde \alpha}_{L^{2\sigma +2}} \, ds \right) \le \left(K\frac{\sigma+1}{\lambda}\right)^{\frac{2\sigma}{2\sigma+2-d\sigma}}.\n\]\nHere, $K$ is the constant from Proposition~\ref{prop:Ebound}.\n\nThen, as $t\to +\infty$, the $m$-dimensional volume element in $X$ is exponentially decaying.\nMoreover, the Hausdorff dimension of $\mathcal A$ is less than or equal to $m$ and its fractal dimension is less than or equal to $2m$.\n\end{proposition}\n\n\begin{proof}\nHaving in mind the representation formula for the $m$-dimensional volume element given above, we introduce\n\[\nq_m:= \limsup_{t\to \infty} \sup_{ \| \xi_j \|_{L^2} \le 1} \left(- \frac{1}{t} \int_0^t \text{Re} \, \text{Tr}\, e^{-i \vartheta } G'(\psi(s)) \circ P_m(s) \, ds \right)\n\]\nand quote the following result from \cite[Chapter VI, Lemma 2.2]{Te}: If there are constants $\kappa_{1,2}\ge 0$ such that\n\[\nq_j \le - \kappa_1 j^{\theta} + \kappa_2, \quad \forall j=1, \dots, m,\n\]\nthen the Hausdorff dimension of $\mathcal A $ is less than or equal to $m$ and its fractal dimension is less than or 
equal to $2m$, where\n\[\nm-1< \left( \frac{2 \kappa_2 }{\kappa_1} \right)^{1\/\theta}\le m.\n\]\nIn order to obtain the required estimate on $q_j$, we first note that\n\[\n\text{Re}\, \text{Tr} \, e^{-i \vartheta } G'(\psi(t)) \circ P_m(t) = \sum_{j=1}^m \text{Re}\, \langle e^{-i \vartheta } G'(\psi(t)) \chi_j(t), \chi_j(t)\rangle_{L^2}.\n\]\nNext, we recall that $e^{-i \vartheta } = \gamma + i \beta$, with $\gamma >0$, and compute (suppressing all the $t$-dependence for a moment)\n\begin{align*}\n&\, - \text{Re}\, \langle e^{-i \vartheta } G'(\psi) \chi_j, \chi_j\rangle_{L^2} = - \frac{\gamma}{2}\left( \| \nabla \chi_j \|^2_{L^2} + \omega^2 \n\| x \chi_j \|_{L^2}^2\right) + \gamma \Omega \int_{\R^d} \overline {\chi_j}L\chi_j \, dx + \gamma \mu \\\n&\ - \lambda \gamma \int_{\R^d} |\psi|^{2\sigma} |\chi_j|^2 \, dx + \sigma \lambda \int_{\R^d} |\psi|^{2\sigma -2} \n\text{Re}\, (\overline \psi \chi_j)\big( \beta \text{Im}\, (\psi \overline{\chi_j})- \gamma \text{Re}\, (\psi \overline {\chi_j}) \big)\, dx,\n\end{align*}\nwhere we have also used the fact that $\| \chi_j(t) \|_{L^2}=1$. 
Next, we estimate the term proportional to $\Omega$ as we did in the proof of Lemma \ref{lem:Ebound} and we also \nuse the fact that \n\[\n\beta \int_{\R^d} |\psi|^{2\sigma-2}\, \text{Re}\, (\overline \psi \chi_j)\, \text{Im}\, (\psi \overline{\chi_j})\,dx \le |\beta| \int_{\R^d} |\psi|^{2\sigma} |\chi_j|^2\, dx.\n\]\nIn summary, this yields\n\begin{align*}\n - \text{Re}\, \langle e^{-i \vartheta } G'(\psi) \chi_j, \chi_j\rangle_{L^2} \le &\, - \frac{\gamma}{2}\Big(1- \frac{\Omega^2}{\omega^2}\Big)\left( \| \nabla \chi_j \|^2_{L^2} + \omega^2 \| x \chi_j \|_{L^2}^2\right) \n + \gamma \mu \\\n&\ - \lambda (\gamma- \sigma |\beta| ) \int_{\R^d} |\psi|^{2\sigma} |\chi_j|^2 \, dx .\n\end{align*}\nThus,\n\begin{equation}\label{mest1}\n\begin{split}\n- \sum_{j=1}^m \text{Re}\, \langle e^{-i \vartheta } G'(\psi) \chi_j, \chi_j\rangle_{L^2} \le &\, - \gamma \Big(1- \frac{\Omega^2}{\omega^2}\Big)\sum_{j=1}^m \langle H_0 \chi_j , \chi_j \rangle_{L^2} + \gamma \mu m \\\n&\, + \sigma \lambda |\beta| \sum_{j=1}^m \int_{\R^d} |\psi|^{2\sigma} |\chi_j|^2 \, dx,\n\end{split}\n\end{equation}\nin view of definition \eqref{H_0}. To further estimate the right hand side of \eqref{mest1}, we use H\"older's inequality and the Gagliardo--Nirenberg inequality to obtain\n\[\n\int_{\R^d} |\psi|^{2\sigma} |\chi_j|^2 \, dx \le \| \psi\|^{2\sigma}_{L^{2\sigma +2}} \| \chi_j \|_{L^{2\sigma+2}}^2 \le c_1 \| \psi\|^{2\sigma}_{L^{2\sigma +2}} \| \nabla \chi_j\|_{L^2}^{d\sigma \/(\sigma +1)},\n\]\nwhere $c_1=c_1(d,\sigma)>0$ is a constant. 
Young's inequality then implies that for any $\varepsilon >0$, there exists a $c_2 = c_2(d, \sigma)>0$, such that\n\begin{align*}\n \| \psi\|^{2\sigma}_{L^{2\sigma +2}} \| \nabla \chi_j\|_{L^2}^{d\sigma \/(\sigma +1)} \le &\, \frac{c_2}{\varepsilon^{\alpha}} \| \psi\|^{2\sigma \tilde \alpha}_{L^{2\sigma +2}} + \varepsilon \| \nabla \chi_j\|^2_{L^2} \\\n \le & \, \frac{c_2}{\varepsilon^{\alpha}} \| \psi\|^{2\sigma \tilde \alpha}_{L^{2\sigma +2}} + \varepsilon \langle H_0 \chi_j , \chi_j \rangle_{L^2},\n \end{align*}\n where $\alpha = \frac{d\sigma}{2\sigma +2-d\sigma}$, and $\tilde \alpha = \frac{2\sigma+2}{2\sigma +2-d\sigma}$. Note that both of these exponents are positive for $\sigma <\frac{d}{2(d-2)}$.\nThus, with an appropriate choice of $\varepsilon$, we obtain from \eqref{mest1} that\n\begin{equation*}\label{mest2}\n\begin{split}\n- \sum_{j=1}^m \text{Re}\, \langle e^{-i \vartheta } G'(\psi) \chi_j, \chi_j\rangle_{L^2} \le &\, - \frac{\gamma}{2} \Big(1- \frac{\Omega^2}{\omega^2}\Big)\sum_{j=1}^m \langle H_0 \chi_j , \chi_j \rangle_{L^2} + \gamma \mu m \\\n&\, + \frac{c_3( \lambda |\beta| )^{1+\alpha} }{\gamma^\alpha}\Big(1- \frac{\Omega^2}{\omega^2}\Big)^{-\alpha} \| \psi\|^{2\sigma \tilde \alpha}_{L^{2\sigma +2}}.\n\end{split}\n\end{equation*}\nNow, using the estimate from Lemma \ref{lem:collective} above, we have\n\begin{equation*}\label{mest3}\n\begin{split}\n- \sum_{j=1}^m \text{Re}\, \langle e^{-i \vartheta } G'(\psi) \chi_j, \chi_j\rangle_{L^2} \le &\, - \frac{\gamma c}{2} \Big(1- \frac{\Omega^2}{\omega^2}\Big)m^{1+1\/d} + \gamma \mu m \\\n&\, + \frac{c_3( \lambda |\beta| )^{1+\alpha} }{\gamma^\alpha}\Big(1- \frac{\Omega^2}{\omega^2}\Big)^{-\alpha} \| \psi\|^{2\sigma \tilde \alpha}_{L^{2\sigma +2}}.\n\end{split}\n\end{equation*}\nThis can be estimated further by\n\[\n- \sum_{j=1}^m 
\\text{Re}\\, \\langle e^{-i \\vartheta } G'(\\psi(t)) \\chi_j, \\chi_j\\rangle_{L^2} \\le - \\kappa_1 m^{1+1\/d} + \\rho(t) ,\n\\]\nwhere $\\kappa_1$ is as defined above and\n\\[\n\\rho(t) = c_4 \\gamma \\mu^{1+d} \\Big(1- \\frac{\\Omega^2}{\\omega^2}\\Big)^{-d}+\n\\frac{c_3( \\lambda |\\beta| )^{1+\\alpha} }{\\gamma^\\alpha}\\Big(1- \\frac{\\Omega^2}{\\omega^2}\\Big)^{-\\alpha} \\| \\psi (t)\\|^{2\\sigma \\tilde \\alpha}_{L^{2\\sigma +2}}, \n\\]\nwith $c_4=c_4(\\omega, d)>0$. \n\nNow, for $\\psi(t) = R(t)\\psi_0 \\in \\mathcal A$, we have that\n\\[\n\\delta = \\limsup_{t\\to \\infty} \\sup_{\\psi_0 \\in \\mathcal A} \\left( \\frac{1}{t} \\int_0^t \\| R(s) \\psi_0\\|^{2\\sigma \\tilde \\alpha}_{L^{2\\sigma +2}} \\, ds \\right) <\\infty,\n\\]\ndue to Lemma \\ref{lem:Ebound} and Proposition \\ref{prop:Ebound}, which imply that for $\\psi(t)\\in \\mathcal A$:\n\\[\n\\| \\psi(t) \\|^{2\\sigma \\tilde \\alpha}_{L^{2\\sigma +2}} \\lesssim \\| \\psi(t) \\|^{2\\sigma \\tilde \\alpha\/(2\\sigma +2)}_{\\Sigma} \\lesssim \\rho_\\Sigma ^{2\\sigma \\tilde \\alpha\/(2\\sigma +2)}.\n\\]\nThis consequently yields\n\\[\nq_m \\le - \\kappa_1 m^{1+1\/d} + \\kappa_2, \\quad \\text{ for all $m\\ge 1$,}\n\\] \nwhich finishes the proof.\n\\end{proof}\n\n\\begin{remark}\nIn comparison to many \nclassical results on the dimensions of global attractors (cf. \\cite{Te}), the proof above avoids the use of a Lieb-Thirring type inequality to control the \nterm proportional to $ \\lambda$. \n\\end{remark}\n\nWe expect that a similar analysis can be done to estimate the box dimension of the attractor, cf. 
\cite{CV} for more details.\nWe finally note that a careful analysis of all the involved constants in $\kappa_1, \kappa_2$ shows that for a given, fixed $\omega>0$, \nthe fraction\n\[\n\left( \frac{ \kappa_2 }{\kappa_1} \right) \to +\infty, \quad \text{as $|\Omega| \to \omega$.}\n\]\nThe bound on the dimension of $\mathcal A$ thus grows as the rotation speed $|\Omega|$ approaches $\omega$.\n\n\section{Introduction}\nLet $\Omega \subset \mathbb{R}^{n}$ ($n \in \{2,3\}$) be a bounded and convex domain with boundary $\Gamma := \partial \Omega$ and $s \in (0,1)$. \nFor $u_{d} : \Omega \rightarrow \mathbb{R}$ we\ndefine the objective functional\n\begin{equation}\label{eq:Functional}\nJ(u, z) := \frac{1}{2}\|u - u_d\|_{L^{2}(\Omega)}^{2} + \frac{\mu}{2} \|z\|_{L^{2}(\Omega)}^{2},\n\end{equation}\nwhere $\mu > 0$ denotes a regularization parameter. \nIn this work we consider the optimal control problem of finding\n\begin{equation}\n\label{eq:min_J}\n\argmin_z J(u,z),\n\end{equation}\nsubject to the \textit{fractional state equation}\n\begin{equation}\n\label{eq:state_equation}\n\left(-\Delta\right)^{s}u = z \textrm{ in } \Omega, \quad u=0 \textrm{ on } \Gamma,\n\end{equation}\nand the \textit{control constraints}\n\begin{equation}\n\label{eq:control_constraints}\na \leq z(x) \leq b \textrm{ a.e. in } \Omega,\n\end{equation}\nwith constants $a, b \in \mathbb{R}$ satisfying $a \leq 0 \leq b$.\nHere, we understand the operator $(-\Delta)^{s}$ in the sense of its spectral\ndefinition, compare e.g.~\cite{Cabre, Capella, Nochetto}.\n \nThe main difficulty in studying this problem is the nonlocality of the fractional Laplace operator \n\cite{Cabre}. 
\nOne way to overcome this issue is based on the Caffarelli--Silvestre extension~\cite{Cafarelli} on unbounded domains\nand its extension to bounded domains~\cite{Cabre, Capella, Stinga}.\n In this approach, an auxiliary problem in an extended domain \n $\mathcal{C} := \Omega \times (0,\infty)$ is introduced and the solution of\n the state equation~\eqref{eq:state_equation} is then given as the Dirichlet trace on $\Omega \times \left\{0\right\}$ of the solution to the extended problem.\nExponential decay of the solution in the artificial dimension allows the construction of various numerical methods, \nsee e.g. \cite{Bonito, MeidPSV_2017_hpFE_fracDiff, Nochetto}. \nIn these publications, the problem is discretized by introducing a tensor product mesh of the domain \n$\mathcal{C}_{\mathcal{Y}} = \Omega \times (0,Y)$, \nwhich is constructed by a conforming triangulation of $\Omega$ \nand a graded mesh in the artificial direction, \nsee e.g. \cite[Section 5.1]{Nochetto}.\nA convergence rate of $h^{1 + s}$ (up to some logarithmic term) \nin the $L^2(\Omega)$-norm can be obtained~\cite{Antil, Nochetto2}, \nprovided that $z \in \mathbb{H}^{1-s}(\Omega)$, where $h$ denotes the global mesh parameter.\nHowever, numerical experiments show that this convergence rate is not optimal in a specific range of fractional powers~$s$.\nThe cost of solving the problem is related to the number of elements in $\mathcal{C}_{\mathcal{Y}}$,\nand not only to the number of elements in $\Omega$, \nresulting in an increased computational complexity.\nThis issue was first overcome in \cite{MeidPSV_2017_hpFE_fracDiff} by \nexploiting $p$-finite elements in the extended direction.\n\n\nAn alternative approach for solving~\textup {(\ref {eq:state_equation})} uses the \nBalakrishnan representation formula \cite[IX. 
11.]{Yosida_FuncAna}, namely for\n$s \in (0,1)$ and $z \in \mathbb{H}^{-s}(\Omega)$\n\begin{equation}\n\label{eq:balakrishnan_formula}\n(-\Delta)^{-s}z = \frac{\sin{(s\pi)}}{\pi} \int_{0}^{\infty} \nu^{-s} (\nu I - \Delta)^{-1}z \, d\nu.\n\end{equation}\nNumerical approximation of \textup {(\ref {eq:balakrishnan_formula})} is then based on a\nsuitable quadrature formula for \textup {(\ref {eq:balakrishnan_formula})} with respect to\n$\nu$ and a discretization of the operator $\nu I-\Delta$ using the finite element method,\nsee \cite{Bonito}. \n\n\nWhile the numerical analysis of the optimal control problem\n\eqref{eq:min_J}--\eqref{eq:control_constraints} using an equivalent formulation with the Caffarelli--Silvestre extension is well established \cite{Antil}, the numerical analysis using the Balakrishnan formula is still open.\n\n\nIn this article we propose and analyze two discrete schemes for the approximation of the solution to the\n optimal control problem \eqref{eq:min_J}--\eqref{eq:control_constraints} using\n the Balakrishnan representation of the solution $u$ of the state equation\n \eqref{eq:state_equation}.\n Both schemes rely on a finite element discretization of the operator $\nu I - \Delta$ in \eqref{eq:balakrishnan_formula} \n and a sinc quadrature approximation \cite{2018_BonitoLeiPasciak} of the integral in \eqref{eq:balakrishnan_formula}. \n The first method is the variational discretization approach \cite{Hinze}, \n where the set of controls is not discretized a priori. However, it inherits its approximation properties from the approximation of the adjoint state. \n The second one uses a fully discrete setting, where the set of controls is discretized by piecewise constant functions \cite{Arada2002, Casas2005, Roesch2006}. 
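To make the quadrature-based approach concrete, the following sketch approximates $(-\Delta)^{-s}z$ by applying the change of variables $\nu = e^{y}$ in \eqref{eq:balakrishnan_formula} and truncating an equispaced (sinc-type) quadrature in $y$. It is purely illustrative: the operator is a one-dimensional finite-difference Dirichlet Laplacian rather than the finite element discretization analyzed in this article, and the step size and truncation indices are ad hoc choices, not the optimized ones of the sinc quadrature literature.

```python
import numpy as np

def fractional_inverse(A, z, s, k=0.25, M=80, N=80):
    """Approximate A^{-s} z via the Balakrishnan formula
        A^{-s} z = sin(s*pi)/pi * int_0^inf nu^{-s} (nu I + A)^{-1} z dnu,
    using nu = e^y and an equispaced quadrature over y_j = j*k."""
    n = A.shape[0]
    I = np.eye(n)
    acc = np.zeros(n)
    for j in range(-M, N + 1):
        y = j * k
        acc += np.exp((1.0 - s) * y) * np.linalg.solve(np.exp(y) * I + A, z)
    return (k * np.sin(s * np.pi) / np.pi) * acc

# 1D finite-difference Dirichlet Laplacian A = -Delta_h on (0,1)
n, s = 49, 0.5
h = 1.0 / (n + 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
x = np.linspace(h, 1.0 - h, n)
z = np.sin(np.pi * x)                         # discrete eigenvector of A
lam = 2.0 * (1.0 - np.cos(np.pi * h)) / h**2  # its eigenvalue

u = fractional_inverse(A, z, s)
print(np.max(np.abs(u - lam**(-s) * z)))  # small quadrature error
```

Halving the step $k$ and enlarging the truncation indices $M$, $N$ drives the quadrature error down exponentially fast, which is the mechanism exploited in the sinc quadrature analysis.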
\n We derive $L^2(\Omega)$-error estimates for the state and control for both types of FE discretization of the optimal control problem.\n\nRegarding the variational approach for the discretization of the optimal control problem \eqref{eq:min_J}--\eqref{eq:control_constraints} \nwe show an optimal convergence rate of $h^{\min{(2, 3\/2+2s - \varepsilon)}}$ for the control and the state in the $L^{2}(\Omega)$-norm, \nwhereas using the extension approach \cite{Antil} yields a convergence rate of $h^{1+s}$ (up to some logarithmic term). \nIn the case of the fully discrete scheme we show the expected linear convergence for the control in the $L^{2}(\Omega)$-norm \nand for the state in the $\mathbb{H}^{s}(\Omega)$-norm. \nNumerically, we also consider the post-processing approach \cite{MeyerRoesch2004}\nfor the optimal control and again observe the rate $h^{\min{(2, 3\/2+2s - \varepsilon)}}$ for the \npost-processed optimal control.\nSimilar results are shown for the extension approach in \cite{Antil}.\nWhile the convergence rate for the optimal control is optimal, as confirmed by numerical experiments, \nthere is still a gap between the theoretical and practical rates for the state, \nwhich will be addressed in future work. \n\nThe outline of this paper is as follows.\n In Section~\ref{sec:fractional_optimal_control} we review existence and uniqueness results for the \n fractional optimal control problem based on \cite{Antil} as well as regularity properties of the optimal control problem. \n The numerical analysis of the mentioned discretization methods is conducted in Section~\ref{sec:error_estimates}, \n starting with the derivation of the error estimates for the discretization of the state equation \eqref{eq:state_equation} \n using the Balakrishnan formula. 
\n In Section~\\ref{subsec:semidiscrete_scheme} we study the convergence properties of the optimal control \n and state using the semidiscrete approach, \n while Section~\\ref{subsec:full_discrete_scheme} is devoted to the numerical analysis of the fully discrete scheme. \n In Section~\\ref{sec:Implementation} we introduce a solver for the finite element approximation of the problem. \n Numerical results validating the theoretical convergence results for the proposed discretization techniques are presented in Section~\\ref{sec:Numerics}.\n \n \n \n \n \n \n\\section{Existence and regularity of optimal controls}\n\\label{sec:fractional_optimal_control}\nIn this section we review existence and uniqueness as well as the regularity results for the \noptimal control problem \\eqref{eq:min_J}--\\eqref{eq:control_constraints} based on \\cite[Sec. 3]{Antil}.\nWe start this section with a brief introduction of the spectral definition of the fractional operator\n$(-\\Delta)^s$ following~\\cite{Cabre, Capella}.\n\nThe eigenfunctions $\\left\\{\\varphi_{k}\\right\\}_{k\\in \\mathbb{N}}$ with eigenvalues $\\left\\{\\lambda_{k}\\right\\}_{k \\in \\mathbb{N}}$ of the Laplace operator, i.e.,\n\\begin{equation*}\n-\\Delta \\varphi_{k} = \\lambda_{k} \\varphi_{k} \\textrm{ in } \\Omega, \\quad \\varphi_{k} = 0 \\textrm{ on } \\Gamma, \\quad k \\in \\mathbb{N}\n\\end{equation*}\nform an orthonormal basis of $L^{2}(\\Omega)$. 
The spectral fractional Laplace operator for ${w \in C_{0}^{\infty}(\Omega)}$ is then defined as\n\begin{equation*}\n(-\Delta)^{s}w:= \sum_{k = 1}^{\infty} \lambda_{k}^{s} w_{k} \varphi_{k}, \quad w_{k} := \int_{\Omega} w \varphi_{k} \, dx, \quad k \in \mathbb{N}.\n\end{equation*}\nThis definition can be extended by density to the space $\mathbb{H}^{s}(\Omega)$ \cite{Antil, Nochetto} defined as\n\begin{equation}\n\label{eq:fractional_sobolev_spaces}\n\mathbb{H}^{s}(\Omega) := \left\{w = \sum_{k=1}^{\infty} w_{k} \varphi_{k} : \sum_{k=1}^{\infty} \lambda_{k}^{s}w_{k}^{2} < \infty \right\} = \n\begin{cases}\nH^{s}(\Omega) \equiv H_{0}^{s}(\Omega) & \textrm{if } s \in (0,\frac{1}{2}),\\\nH_{00}^{1\/2}(\Omega) & \textrm{if } s = \frac{1}{2}, \\\nH_{0}^{s}(\Omega) & \textrm{if } s \in (\frac{1}{2}, 1).\n\end{cases}\n\end{equation}\nThe characterization of the fractional Sobolev spaces on the right hand side in \eqref{eq:fractional_sobolev_spaces} can be found, e.g., in \cite{mclean2000strongly}. For $s \in [1,2]$ we set $\mathbb{H}^{s}(\Omega) := H^{s}(\Omega) \cap H^{1}_{0}(\Omega)$, whereas $\mathbb{H}^{0}(\Omega) := L^{2}(\Omega)$. We denote the dual space of $\mathbb{H}^{s}(\Omega)$ by $\mathbb{H}^{-s}(\Omega)$.\nWe stress that this definition of $(-\Delta)^s$ inherently assumes homogeneous Dirichlet boundary data (in a suitable sense). \nFor a generalization to inhomogeneous Dirichlet boundary data we refer to \cite{Antil_Pfefferer_Rogovs} and the references therein.\n\n\n\bigskip\n\n\nLet $u_{d} \in L^{2}(\Omega)$ and $a, b \in \mathbb{R}$ with $a \leq 0 \leq b$ be given.\nWe define the set of admissible controls $Z_{ad}$ by\n\begin{equation*}\nZ_{ad} := \left\{w \in L^{2}(\Omega): a \leq w(x) \leq b \textrm{ a.e.
in } \\Omega \\right\\}.\n\\end{equation*}\nLet ${S: \\mathbb{H}^{-s}(\\Omega) \\rightarrow \\mathbb{H}^{s}(\\Omega)}$ denote the control to state operator defined as $Sz := u$, \nwhere $u \\in \\mathbb{H}^{s}(\\Omega)$ is the unique solution of the state equation \\eqref{eq:state_equation}. \nIt holds, that for any $z \\in \\mathbb{H}^{-s}(\\Omega)$, \nthe boundary value problem \\eqref{eq:state_equation} has a unique solution $u \\in \\mathbb{H}^{s}(\\Omega)$, see e.g. \\cite{MeidPSV_2017_hpFE_fracDiff}.\nWe may also consider the operator $S$ acting on $L^{2}(\\Omega)$ with range in $L^{2}(\\Omega)$. Note also that $S$ is self-adjoint, since the operator $(-\\Delta)^{s}$ is self-adjoint. The adjoint state $p \\in \\mathbb{H}^{s}(\\Omega)$ for $z \\in \\mathbb{H}^{-s}(\\Omega)$ is then given by $p = S(Sz - u_{d})$. \nIn \\cite[Section 3.1]{Antil} the existence and uniqueness of a solution to the optimal control problem \\eqref{eq:min_J}--\\eqref{eq:control_constraints} is shown. Let us recall the main result from that reference.\n\n\\begin{theorem}[existence, uniqueness, and optimality conditions, {\\cite[Section 3.1]{Antil}}]\nThe fractional optimal control problem \\eqref{eq:min_J}--\\eqref{eq:control_constraints} has a unique optimal solution \n$(\\bar{u}, \\bar{z}) \\in \\mathbb{H}^{s}(\\Omega) \\times Z_{ad}$. 
\nThese fulfill the necessary and sufficient optimality conditions\n\\begin{align}\n \\bar{u} &= S \\bar{z} \\in \\mathbb{H}^{s}(\\Omega),\\\\\n \\bar{p} &= S(\\bar{u}-u_{d}) \\in \\mathbb{H}^{s}(\\Omega),\\\\\n \\bar{z} \\in Z_{ad}, &\\quad (\\mu \\bar{z} + \\bar{p}, z - \\bar{z})_{L^{2}(\\Omega)} \\geq 0 \\quad \\textrm{for all } z \\in Z_{ad}.\n \\label{eq:variational_inequality}\n\\end{align}\n\\end{theorem}\n\n\n\nFor $\\mu > 0$ and $\\bar{p} = S(\\bar{u}-u_{d})$ the variational inequality \\eqref{eq:variational_inequality} is equivalent to the projection formula \\cite{Fredi}\n\\begin{equation*}\n\\bar{z}(x) = \\textrm{proj}_{[a, b]} \\left(-\\frac{1}{\\mu} \\bar{p}(x)\\right)\n\\end{equation*}\nwhere $\\textrm{proj}_{[a, b]}(v) := \\min{\\left\\{b, \\max{\\left\\{a,v\\right\\}}\\right\\}}$.\nSince we assume that $\\Omega$ is a convex domain and that $a \\leq 0 \\leq b$ we can prove the following regularity results for the control.\n\\begin{lemma}[$H^{1}$-regularity of the optimal control, {\\cite[Lemma 3.5]{Antil}}]\nLet $\\bar{z} \\in Z_{ad}$ be the optimal control and $u_{d} \\in \\mathbb{H}^{1-s}(\\Omega)$. \nThen $\\bar{z} \\in H_0^{1}(\\Omega)$.\n\\end{lemma}\n\\begin{proof}\nThe proof is based on bootstrapping. \nWe only comment on the case $s \\in (0,\\frac{1}{4})$. In this case,\nan intermediate regularity result is $\\bar{z} \\in \\mathbb{H}^{s}(\\Omega)$. \nAs $\\bar z = \\textrm{proj}_{[a,b]}\\left( -\\frac{1}{\\mu}\\bar p\\right)$ this,\nin turn, requires $a \\leq 0 \\leq b$.\n\\end{proof}\n\\begin{lemma}\n \\label{lem:regularity_control}\nLet $\\bar{z} \\in Z_{ad}$ be the solution of the optimal control problem \\textup {(\\ref {eq:min_J})}--\\textup {(\\ref {eq:control_constraints})} \nwith $u_{d} \\in \\mathbb{H}^{3\/2}(\\Omega)$. 
\nThen $\\bar{z} \\in \\mathbb{H}^{3\/2-\\varepsilon}(\\Omega)$, where $\\varepsilon$ is a positive, arbitrary small number.\n\\end{lemma}\n\\begin{proof}\nThe proof follows from the standard bootstraping argument.\n\\end{proof}\n\\section{A priori error estimates}\n\\label{sec:error_estimates}\n\nIn this section, we analyse two finite element approximations of \nthe fractional optimal control problem \\eqref{eq:min_J}--\\eqref{eq:control_constraints}. \nFirst, we investigate the variational approach \\cite{Hinze}, \nwhere the control set is not discretized, and then move to a fully discrete scheme. \nBoth techniques are based on a finite element discretization of the state \nequation \\eqref{eq:state_equation} using the Balakrishnan formula \\eqref{eq:balakrishnan_formula}. \nIn the following subsection, we review the resulting FE error estimates, based on \\cite[Sec. 4]{Bonito}.\n \n \n\\begin{assumption} \nThroughout this and the following sections we assume that $\\Omega$ is a polygonal or polyhedral domain and that $u_{d} \\in \\mathbb{H}^{3\/2}(\\Omega)$, hence the regularity result from Lemma \\ref{lem:regularity_control} holds.\n\\end{assumption} \n\n\n\\subsection{A finite element method for the state equation}\n\\label{subsec:fem_state_equation}\n\nLet $\\mathbb{U}(\\mathcal{T}_{h})$ be the space of piecewise linear and globally continuous functions vanishing on the boundary $\\partial \\Omega$, defined with respect to a conforming quasi-uniform triangulation $\\mathcal{T}_{h}$ of the domain $\\Omega$. 
\nAn FE approximation of problem \eqref{eq:state_equation} for $z \in L^{2}(\Omega)$ is given by\n\begin{equation}\n\label{eq:dunford_taylor_FE}\nu_{h} = \frac{\sin{(s \pi)}}{\pi} \int_{0}^{\infty} \nu^{-s}(\nu I - \Delta_{h})^{-1} z\,d\nu\n= \frac{\sin{(s \pi)}}{\pi}\int_{-\infty}^{\infty} e^{(1-s)t}\left(e^tI-\Delta_h\right)^{-1}z\,dt,\n\end{equation}\nwhere $\Delta_{h}$ denotes the discrete Laplace operator.\n\nFor $k > 0$ we define the numbers\n$N_+ := \Bigg\lceil \frac{\pi^2}{4 s k^2} \Bigg\rceil$ and $N_- := \Bigg\lceil \frac{\pi^2}{4 (1-s) k^2} \Bigg\rceil$.\nThe sinc quadrature approximation of $u_{h}$ is then given by\n\begin{equation}\n\label{eq:sinc_quadrature}\nu_{h}^{k} := \frac{\sin{(s \pi)}}{\pi} k \sum_{l = -N_{-}}^{N_{+}} e^{(1-s)kl}(e^{kl}I - \Delta_{h})^{-1} z.\n\end{equation}\n\nPractical aspects of the numerical implementation of this method are discussed in Section\nobreakspace \ref {sec:Implementation}.\n\nIn our problem set-up the following error estimates hold.\n\n\n\begin{theorem}[finite element approximation, {\cite[Theorem 4.2]{Bonito}}]\n\label{thm:fe_error_dunford_talyor} \nGiven $r \in [0,1]$ with $r \leq 2s$, set $\alpha_{\star} := \frac{1}{2}(\alpha + \min{(1-r, \alpha)})$ with ${\alpha \in (0,1]}$ \nand $\gamma := \max{(r+2\alpha_{\star} - 2s, 0)}$. \nIf $z \in \mathbb{H}^{\delta}(\Omega)$ for $\delta \geq \gamma$, then \n\begin{equation*}\n\norm{u - u_{h}}_{\mathbb{H}^{r}(\Omega)} \leq C_{h}\, h^{2\alpha_{\star}} \norm{z}_{\mathbb{H}^{\delta}(\Omega)},\n\end{equation*}\nwhere $C_{h} \leq c \log{(2\/h)}$ if $\delta = \gamma$ and $r+2\alpha_{\star} \geq 2s$, and $C_{h} \leq c$ otherwise. \n\end{theorem}\nNote that we obtain a convergence rate of $h^{2 - r}$ if we set $\alpha = 1$ and if $z$ is regular enough.
However, in order to obtain the convergence rates depending on $s \in (0,1)$ and on the regularity of $z$, we have to choose $\alpha$ in Theorem \ref{thm:fe_error_dunford_talyor} appropriately.\nChoosing $r=0$ and $r=s$, respectively, in Theorem \ref{thm:fe_error_dunford_talyor}, we conclude the following error estimates.\n\begin{corollary}\nFor $z \in \mathbb{H}^{\delta + \varepsilon'}(\Omega)$ with $\varepsilon' > 0$ arbitrarily small and $\delta \geq - \varepsilon'$ there holds\n\begin{equation}\n\label{eq:fe_error_l2_hs}\n\begin{aligned}\n\norm{u - u_{h}}_{L^{2}(\Omega)} &\leq c \, h^{\min{(2, \delta + 2s)}} \norm{z}_{\mathbb{H}^{\delta + \varepsilon'}(\Omega)},\\\n\norm{u - u_{h}}_{\mathbb{H}^{s}(\Omega)} &\leq c \, h^{\min{(2-s, \delta + s)}} \norm{z}_{\mathbb{H}^{\delta + \varepsilon'}(\Omega)}.\n\end{aligned}\n\end{equation}\n\end{corollary}\n\nThe quadrature formula \eqref{eq:sinc_quadrature} possesses the following approximation property.\n\begin{theorem}[sinc quadrature approximation, {\cite[Theorem 4.3]{2018_BonitoLeiPasciak}}]\nFor $r \in [0,1]$ and $z \in \mathbb{H}^{r}(\Omega)$ there holds\n\begin{equation*}\n\norm{u_{h} - u_{h}^{k}}_{\mathbb{H}^{r}(\Omega)} \n\leq c \, e^{-\pi^{2}\/(2k)} \norm{z}_{\mathbb{H}^{\max(0,r-2s+\epsilon)}(\Omega)}\n\leq c \, e^{-\pi^{2}\/(2k)} \norm{z}_{\mathbb{H}^{r}(\Omega)}.\n\end{equation*}\n\end{theorem}\n\nHence, if we choose $k$ appropriately, we can balance the sinc quadrature and the finite element errors.\n\begin{lemma}\n\label{lem:fe_approximation_error}\nAssume that the step size $k$ in the\n sinc quadrature~\textup {(\ref {eq:sinc_quadrature})} is chosen such that the quadrature error is balanced with the FE errors\n \textup {(\ref {eq:fe_error_l2_hs})}, i.e., $k \in \mathcal O(\big|\ln{h}\big|^{-1})$.
For\n $z \\in \\mathbb{H}^{\\delta+\\varepsilon'}(\\Omega)$ with $\\varepsilon' > 0$ and $\\delta \\geq - \\varepsilon'$ we obtain\n\\begin{equation}\n\\label{eq:fe_approximation_error}\n\\begin{aligned}\n\\norm{u - u_{h}^{k}}_{L^{2}(\\Omega)} &\\leq c \\, h^{\\min{(2, \\delta+2s)}} \\norm{z}_{\\mathbb{H}^{\\delta+\\varepsilon'}(\\Omega)},\\\\\n\\norm{u - u_{h}^{k}}_{\\mathbb{H}^{s}(\\Omega)} &\\leq c \\, h^{\\min{(2-s, \\delta+s)}} \\norm{z}_{\\mathbb{H}^{\\delta+\\varepsilon'}(\\Omega)}.\n\\end{aligned}\n\\end{equation}\n\\end{lemma}\n\nGiven the regularity results of ~Lemma\\nobreakspace \\ref {lem:regularity_control} for $u_{d} \\in \\mathbb{H}^{3\/2}(\\Omega)$ we conclude the following error estimates.\n\\begin{corollary}\n\\label{cor:fe_error_estimates_regular}\nFor $z \\in \\mathbb{H}^{3\/2-\\varepsilon}(\\Omega)$ there holds\n\\begin{equation*}\n\\norm{u - u_{h}^{k}}_{L^{2}(\\Omega)} \\leq c \\, h^{\\min{(2, 3\/2+2s-\\varepsilon')}} \\norm{z}_{\\mathbb{H}^{3\/2-\\varepsilon}(\\Omega)}\n\\end{equation*} \nand\n\\begin{equation*}\n\\norm{u - u_{h}^{k}}_{\\mathbb{H}^{s}(\\Omega)} \\leq c \\, h^{\\min{(2-s, 3\/2 + s - \\varepsilon')}} \\norm{z}_{\\mathbb{H}^{3\/2-\\varepsilon}(\\Omega)}\n\\end{equation*}\nwith $\\varepsilon' > 0$ and $\\varepsilon > 0$ arbitrary small and $\\varepsilon < \\varepsilon'$. 
Note that the approximation $u^k_{h}$ converges quadratically in the\n$L^{2}(\Omega)$-norm, provided that $s > \frac{1}{4}$.\n\end{corollary}\n\nIn the following we drop the superscript $k$ and write $z_h$, $u_h$, and $p_h$ for the discrete approximations of $z$, $u$, and $p$.\n\n\n\subsection{Variational discretization}\n\label{subsec:semidiscrete_scheme}\n\nWe define the variational discretization of the optimal control problem\n\eqref{eq:min_J}--\eqref{eq:control_constraints}\nas finding $\bar{z}_h \in Z_{\text{ad}}$ such that\n\begin{align}\label{eq:var_dis}\n\bar{z}_h := \argmin_{z \in Z_{\text{ad}}}J_h(z) = \argmin_{z \in Z_{\text{ad}}}\frac{1}{2} \lVert S_h z -u_d \rVert_{L^{2}(\Omega)}^{2} + \frac{\mu}{2}\lVert z\rVert_{L^{2}(\Omega)}^{2}.\n\end{align}\n\nAs in the continuous setting, the discrete optimal control problem \eqref{eq:var_dis} has a unique solution $\bar{z}_h \in Z_{\text{ad}}$.\nWe denote by $\bar{u}_h:= S_h \bar{z}_h$ the optimal discrete state and by \n$\bar{p}_h := S_h(S_h \bar{z}_h-u_d)$ the optimal discrete adjoint state.
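In practice, $\bar{z}_h$ can be computed by the fixed-point iteration induced by projecting the discrete adjoint state, in analogy to the continuous projection formula. The following sketch is a minimal illustration; the callback apply_Sh standing for the discrete solution operator $S_h$ is an assumption, and the plain iteration is guaranteed to contract only if $\lVert S_h \rVert^2 < \mu$.

```python
import numpy as np

def solve_variational(apply_Sh, u_d, mu, a, b, max_iter=500, tol=1e-12):
    """Fixed-point iteration z <- proj_[a,b](-(1/mu) * S_h(S_h z - u_d))
    for the variational discretization.  `apply_Sh` evaluates the discrete
    solution operator S_h (e.g. the sinc quadrature sum); the plain
    iteration contracts only if ||S_h||^2 < mu."""
    z = np.zeros_like(u_d)
    for _ in range(max_iter):
        p = apply_Sh(apply_Sh(z) - u_d)    # discrete adjoint state
        z_new = np.clip(-p / mu, a, b)     # pointwise projection onto [a, b]
        if np.linalg.norm(z_new - z) <= tol * max(1.0, np.linalg.norm(z_new)):
            return z_new
        z = z_new
    return z
```

For small contraction factors this simple scheme suffices; otherwise a semismooth Newton or projected gradient method with step-size control would be preferred.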
In this case the variational inequality reads as\n\\begin{equation}\\label{eq:var_ineq_var}\n (\\bar{p}_h + \\mu \\bar{z}_h,z-\\bar{z}_h)_{L^2(\\Omega)} \\geq 0\\quad \\forall z \\in Z_{ad},\n\\end{equation}\nwhich implies \n \\begin{equation}\n \\label{eq:disc_proj}\n \\bar{z}_h = \\textrm{proj}_{[a,b]}\\left( -\\frac{1}{\\mu}\\bar{p}_h \\right).\n \\end{equation}\n Here and in the following, we denote by $S_h$ the discrete, self-adjoint solution operator defined by \\textup {(\\ref {eq:sinc_quadrature})}.\n \n\\begin{lemma}\\label{lemma:stab}\nThe following stability estimates hold\n\\begin{align}\n\\lVert Sv \\rVert_{L^2(\\Omega)} &\\leq c \\lVert v \\rVert_{L^2(\\Omega)}, \\quad \\lVert S_h v \\rVert_{L^2(\\Omega)} \\leq c \\lVert v \\rVert_{L^2(\\Omega)}, \\quad \\lVert S_h v \\rVert_{\\mathbb{H}^s(\\Omega)} \\leq c \\lVert v \\rVert_{L^2(\\Omega)}.\n \\label{eq:Sv}\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nThe first estimate follows from a trivial embedding and the $2s$-shift of the fractional Laplace operator. \n\\[\n\\lVert Sv \\rVert_{L^2(\\Omega)} \\leq c \\lVert Sv \\rVert_{\\mathbb{H}^{2s}(\\Omega)} \\leq c \\lVert v \\rVert_{L^2(\\Omega)} .\n\\]\nTo prove the second estimate we introduce the intermediate function $Sv$ to obtain\n\\[\n\\norm{S_{h}v}_{L^{2}(\\Omega)} \\leq \\lVert (S_h - S) v \\rVert_{L^2(\\Omega)} + \\lVert Sv \\rVert_{L^2(\\Omega)},\n\\]\nand apply the a priori error estimate \\eqref{eq:fe_approximation_error} \nwith $\\delta = -\\varepsilon'$ \nas well as the first estimate in \\eqref{eq:Sv}. \nThe proof of the third estimate follows the same path, using the stability of the operator $S: L^{2}(\\Omega) \\rightarrow \\mathbb{H}^{s}(\\Omega)$.\n\\end{proof}\n \n\\begin{theorem}\nLet the pairs $(\\bar{u}(\\bar{z}),\\bar{z})$ and $(\\bar{u}_h(\\bar{z}_h),\\bar{z}_h)$ be the solutions to problems \\eqref{eq:min_J} and \\eqref{eq:var_dis}, respectively. 
Then the estimates\n\\begin{align}\\label{eq:var_a_priori_z}\n\\lVert \\bar{z} - \\bar{z}_h \\rVert_{L^2(\\Omega)} &\\leq c \\, h^{\\min(2,3\/2+2s -\\varepsilon')} \\left(\\lVert \\bar{z} \\rVert_{\\mathbb{H}^{3\/2-\\varepsilon}(\\Omega)} + \\lVert u_d \\rVert_{\\mathbb{H}^{3\/2-\\varepsilon}(\\Omega)} \\right),\\\\\\label{eq:var_a_priori_u}\n\\lVert \\bar{u} - \\bar{u}_h \\rVert_{L^2(\\Omega)} &\\leq c \\, h^{\\min(2,3\/2+2s -\\varepsilon')} \\left(\\lVert \\bar{z} \\rVert_{\\mathbb{H}^{3\/2-\\varepsilon}(\\Omega)} + \\lVert u_d \\rVert_{\\mathbb{H}^{3\/2-\\varepsilon}(\\Omega)} \\right),\\\\\\label{eq:var_a_priori_u_s}\n\\lVert \\bar{u} - \\bar{u}_h \\rVert_{\\mathbb{H}^s(\\Omega)} &\\leq c \\, h^{\\min(2-s,3\/2+s -\\varepsilon')} \\left(\\lVert \\bar{z} \\rVert_{\\mathbb{H}^{3\/2-\\varepsilon}(\\Omega)} + \\lVert u_d \\rVert_{\\mathbb{H}^{3\/2-\\varepsilon}(\\Omega)} \\right)\n\\end{align}\nhold, provided $\\varepsilon < \\varepsilon'$.\n\\end{theorem}\n\\begin{proof}\nWe begin by showing the first estimate.\nThe proof is similar to the proof of \\cite[Theorem 5.10]{Antil} based on ideas introduced in \\cite{Hinze}.\nTesting variational inequalities \\eqref{eq:variational_inequality} and \\eqref{eq:var_ineq_var} with $\\bar{z}_h \\in Z_{\\text{ad}}$ and $\\bar{z} \\in Z_{\\text{ad}}$, respectively, and adding both expressions, we arrive at\n\\begin{align}\n\\mu \\lVert \\bar{z}-\\bar{z}_h \\rVert_{L^2(\\Omega)}^2& \\leq ( \\bar{p}-\\bar{p}_h, \\bar{z}_h-\\bar{z})\\nonumber\\\\\n &\\leq ((S-S_h)S\\bar{z}, \\bar{z}_h-\\bar{z})+ (S_{h}(S-S_h)\\bar{z}, \\bar{z}_h-\\bar{z})\\nonumber\\\\\n &\\quad +((S_h-S) u_d, \\bar{z}_h-\\bar{z}) + (S_h^2(\\bar{z}-\\bar{z}_h), \\bar{z}_h-\\bar{z}).\\label{eq:4_terms_var}\n\\end{align}\nThe first two terms can be estimated using the Cauchy--Schwarz inequality, Lemma \\ref{lemma:stab} and the a priori estimate \\eqref{eq:fe_approximation_error}\n\\begin{align}\n((S-S_h)S\\bar{z}, \\bar{z}_h-\\bar{z}) &\\leq c \\, h^{\\min(2,3\/2+2s 
-\\varepsilon')}\n\\lVert \\bar{z} \\rVert_{\\mathbb{H}^{3\/2-\\varepsilon}(\\Omega)}\\lVert\n\\bar{z}-\\bar{z}_h \\rVert_{L^2(\\Omega)},\\label{eq:var_proof_1}\\\\\n(S_h(S-S_h)\\bar{z}, \\bar{z}_h-\\bar{z}) &\\leq c \\, h^{\\min(2,3\/2+2s -\\varepsilon')} \\lVert \\bar{z} \\rVert_{\\mathbb{H}^{3\/2-\\varepsilon}(\\Omega)}\\lVert \\bar{z}-\\bar{z}_h \\rVert_{L^2(\\Omega)}.\n\\end{align}\nThe estimate of the third term follows from the Cauchy--Schwarz inequality and the estimate \\eqref{eq:fe_approximation_error}\n\\begin{equation}\\label{eq:var_proof_3}\n((S_h-S) u_d, \\bar{z}_h-\\bar{z}) \\leq c \\, h^{\\min(2,3\/2+2s -\\varepsilon')} \\lVert u_d \\rVert_{\\mathbb{H}^{3\/2-\\varepsilon}(\\Omega)} \\lVert \\bar{z}-\\bar{z}_h \\rVert_{L^2(\\Omega)},\n\\end{equation}\n and the last term is non-positive, since $S_h$ is self-adjoint and therefore\n\\begin{equation}\\label{eq:var_proof_4}\n(S_h^2(\\bar{z}-\\bar{z}_h), \\bar{z}_h-\\bar{z}) \\leq -\\lVert S_h(\\bar{z}-\\bar{z}_h) \\rVert_{L^2(\\Omega)}^2 \\leq 0.\n\\end{equation}\nThe desired estimate follows from estimates \\eqref{eq:var_proof_1}--\\eqref{eq:var_proof_4}.\n\nApplication of Lemma\\nobreakspace \\ref {lem:fe_approximation_error}, Lemma\\nobreakspace \\ref {lemma:stab} and \\textup {(\\ref {eq:var_a_priori_z})} leads to \n\\begin{align*}\n\\lVert \\bar{u} - \\bar{u}_h \\rVert_{L^2(\\Omega)} &\\leq \\lVert S \\bar{z} -S_h \\bar{z} \\rVert_{L^2(\\Omega)} + \\lVert S_h \\bar{z} -S_h \\bar{z}_h \\rVert_{L^2(\\Omega)} \\\\\n&\\leq c \\, h^{\\min(2,3\/2+2s -\\varepsilon')} \\left(\\lVert \\bar{z} \\rVert_{\\mathbb{H}^{3\/2-\\varepsilon}(\\Omega)} + \\lVert u_d \\rVert_{\\mathbb{H}^{3\/2-\\varepsilon}(\\Omega)} \\right),\n\\end{align*}\nand this proves \\textup {(\\ref {eq:var_a_priori_u})}. 
The proof of\n\\textup {(\\ref {eq:var_a_priori_u_s})} follows the same path.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\\subsection{A fully discrete scheme}\n\\label{subsec:full_discrete_scheme}\n\nIn this section we consider a fully discrete \nscheme for the optimal control problem \\eqref{eq:min_J}--\\eqref{eq:control_constraints}. \nWe discretize the set of admissible controls with piecewise constant functions\n\\begin{align*}\n Z_h &:= \\{ z_h \\in L^{\\infty}(\\Omega) : z_h \\vert_T \\in \\mathcal{P}_0 \\text{ for all } T \\in \\mathcal{T}_h \\}, \n \\quad \\text{and }Z_h^{\\text{ad}} := Z_h \\cap Z_{\\text{ad}} .\n\\end{align*}\nThe discretized optimal control problem reads as: find $\\bar z_h \\in Z_h^{\\text{ad}}$ such that\n\\begin{align}\\label{eq:opt_cont_pb_full}\n\\bar{z}_h &= \\argmin_{z_h \\in Z^{\\text{ad}}_h}J_h (z_h) = \\argmin_{z_h \\in Z^{\\text{ad}}_h} \\frac{1}{2} \\lVert S_h z_h - u_d \\rVert^2_{L^2(\\Omega)} + \\frac{\\mu}{2} \\lVert z_h \\rVert^2_{L^2(\\Omega)}.\n\\end{align}\nUsing the same argumentation as in the continuous case, \nit can be shown that the optimal control problem \\eqref{eq:opt_cont_pb_full} \nhas a unique solution $\\bar{z}_h \\in Z_{h}^{\\text{ad}}$.\n Let $\\bar{u}_h = S_h\\bar{z}_h$ and $\\bar{p}_h = S_h(S_h \\bar{z}_h - u_d)$ \n be the optimal discrete state and optimal discrete adjoint state,\n respectively, associated with $\\bar{z}_h $. 
\n Then the discrete optimality condition reads as \n \begin{equation}\label{eq:full_ineq_var}\n (\bar{p}_h + \mu \bar{z}_h, z_h - \bar{z}_h)_{L^2(\Omega)} \geq 0\quad \forall z_h \in Z_h^{\text{ad}}.\n \end{equation}\nBefore we state the main result of this section, \nwe define the $L^2(\Omega)$-projection operator $Q_h: L^2(\Omega) \rightarrow Z_h$ by\n\[\n\int_{\Omega} (z - Q_h z) v_h \, dx = 0 \quad \forall v_h \in Z_h,\n\]\nwhich has the following properties:\n\begin{enumerate}\n\item[(L1)] $\lVert Q_h v \rVert_{L^2(\Omega)} \leq c\lVert v \rVert_{L^2(\Omega)} \quad \forall v \in L^2(\Omega)$,\n\item[(L2)] $\lVert v - Q_h v \rVert_{L^2(\Omega)} \leq c \, h \norm{v}_{H^{1}(\Omega)} \quad \forall v \in H^{1}(\Omega)$.\n\end{enumerate}\n\n\begin{theorem}\n\label{thm:FD:rates}\nLet the pairs $(\bar{u}(\bar{z}),\bar{z})$ and $(\bar{u}_h(\bar{z}_h),\bar{z}_h)$ \nbe the solutions to problems \eqref{eq:min_J} and \eqref{eq:opt_cont_pb_full}, respectively. Then the estimates\n\begin{align}\label{eq:full_a_priori_z}\n\lVert \bar{z} - \bar{z}_h \rVert_{L^2(\Omega)} &\leq c \, h \left(\norm{\bar{z}}_{\mathbb{H}^{1}(\Omega)} \n+ \lVert u_d \rVert_{\mathbb{H}^{\max{(0, 1-2s+\varepsilon)}}(\Omega)} \right),\\\n\label{eq:full_a_priori_u}\n\lVert \bar{u} - \bar{u}_{h} \rVert_{\mathbb{H}^{s}(\Omega)} &\leq c \, h \left(\lVert \bar{z} \rVert_{\mathbb{H}^{1}(\Omega)} \n+ \lVert u_d \rVert_{\mathbb{H}^{\max{(0, 1-2s+\varepsilon)}}(\Omega)} \right),\\\n\lVert \bar{u} - \bar{u}_{h} \rVert_{L^{2}(\Omega)} &\leq c \, h \left(\lVert \bar{z} \rVert_{\mathbb{H}^{1}(\Omega)} \n+ \lVert u_d \rVert_{\mathbb{H}^{\max{(0, 1-2s+\varepsilon)}}(\Omega)} \right)\n\end{align}\nhold.\n\end{theorem}\n\begin{proof}\nThe proof is similar to the proof of \cite[Theorem 5.16]{Antil}.
\nFirst, we use $z = \bar{z}_h \in Z_{\textrm{ad}}$ in the continuous optimality condition \eqref{eq:variational_inequality} \nto get\n\[\n(\bar{p} + \mu \bar{z},\bar{z}_h - \bar{z}) \geq 0. \n\]\nSecond, using $z_h = Q_h \bar{z} \in Z_h^{\text{ad}}$ in the discrete optimality condition\n\eqref{eq:full_ineq_var}\nand introducing $\bar{z}$, we arrive at\n\[\n(\bar{p}_h + \mu \bar{z}_h , Q_h \bar{z}- \bar{z}) + (\bar{p}_h + \mu \bar{z}_h , \bar{z}- \bar{z}_h) \geq 0.\n\]\nConsequently, adding the previous two inequalities together we get\n\[\n(\bar{p} -\bar{p}_h + \mu (\bar{z} - \bar{z}_h) ,\bar{z}_h - \bar{z}) + (\bar{p}_h + \mu \bar{z}_h , Q_h \bar{z}- \bar{z}) \geq 0.\n\]\nHence, we can conclude\n\begin{equation}\label{eq:full_2_terms}\n\mu \lVert \bar{z} - \bar{z}_h \rVert_{L^2(\Omega)}^2 \leq (\bar{p} -\bar{p}_h ,\bar{z}_h - \bar{z}) + (\bar{p}_h + \mu \bar{z}_h , Q_h \bar{z}- \bar{z}).\n\end{equation}\nThe estimate for the first term on the right hand side of \eqref{eq:full_2_terms} \nfollows from the estimate for \eqref{eq:4_terms_var} \nwith an appropriate application of estimate \eqref{eq:fe_approximation_error}:\n\begin{equation}\label{eq:full_2_1_terms}\n(\bar{p} -\bar{p}_h ,\bar{z}_h - \bar{z}) \leq c \, h \left( \lVert \bar{z} \rVert_{\mathbb{H}^{1}(\Omega)} + \lVert u_d \rVert_{\mathbb{H}^{\max{(0, 1-2s+\varepsilon)}}(\Omega)} \right) \lVert \bar{z}-\bar{z}_h \rVert_{L^2(\Omega)}.\n\end{equation}\nTo estimate the second term we add and subtract $\bar{p}$ and $\mu \bar{z}$ and get \n\begin{equation}\label{eq:full_3_terms}\n(\bar{p}_h + \mu \bar{z}_h , Q_h \bar{z}- \bar{z}) \n= (\bar{p} + \mu \bar{z} , Q_h \bar{z}- \bar{z}) + \mu (\bar{z}_h - \bar{z} , Q_h \bar{z}- \bar{z}) + (\bar{p}_h - \bar{p} , Q_h \bar{z}- \bar{z}).\n\end{equation}\nTo estimate the first term on the right hand side of \eqref{eq:full_3_terms} we use the definition of
the operator $Q_h$ and obtain\n\[\n(\bar{p} + \mu \bar{z} , Q_h \bar{z}- \bar{z}) = (\bar{p} + \mu \bar{z} - Q_h(\bar{p} + \mu \bar{z}) , Q_h \bar{z}- \bar{z}) \leq c h^2 \lVert \bar{p} + \mu \bar{z} \rVert_{\mathbb{H}^{1}(\Omega)} \lVert \bar{z} \rVert_{\mathbb{H}^{1}(\Omega)},\n\]\nwhere the last inequality follows from property (L2) of the $L^2$-projection. \nThe application of the Cauchy--Schwarz inequality yields the desired estimate of the second term\n\begin{equation}\label{eq:full_3_1_terms}\n\mu (\bar{z}_h - \bar{z} , Q_h \bar{z}- \bar{z}) \leq c \, h \lVert \bar{z} - \bar{z}_h \rVert_{L^2(\Omega)} \lVert \bar{z} \rVert_{\mathbb{H}^{1}(\Omega)}.\n\end{equation}\nThe estimate of the third term can be shown analogously to \eqref{eq:full_2_1_terms} with an application of (L2)\nand yields\n\begin{align}\n (\bar p_h- \bar p,Q_h \bar z - \bar z) \n &\leq c \, h \left( \lVert \bar{z} \rVert_{\mathbb{H}^{1}(\Omega)} + \lVert u_d \rVert_{\mathbb{H}^{\max{(0, 1-2s+\varepsilon)}}(\Omega)} \right)\n \lVert Q_h \bar z - \bar z \rVert_{L^2(\Omega)} \nonumber \\\n &\leq c \, h^2\left( \lVert \bar{z} \rVert_{\mathbb{H}^{1}(\Omega)} + \lVert u_d \rVert_{\mathbb{H}^{\max{(0, 1-2s+\varepsilon)}}(\Omega)} \right)\lVert \bar z \rVert_{\mathbb{H}^{1}(\Omega)}.\n \label{eq:full_3_2_terms}\n\end{align}\nEstimates \eqref{eq:full_2_1_terms}--\eqref{eq:full_3_2_terms} together with an \nappropriate application of H\"older's and Young's inequalities yield the desired estimate \eqref{eq:full_a_priori_z}.\n\nIn order to prove estimate \eqref{eq:full_a_priori_u} we proceed as follows.
Introducing intermediate functions, applying the triangle inequality, and using the stability results from Lemma \ref{lemma:stab} yields\n\begin{equation}\n\label{eq:full_a_priori_u_1}\n\begin{aligned}\n\norm{\bar{u} - \bar{u}_{h}}_{\mathbb{H}^{s}(\Omega)} &\leq \norm{(S-S_{h})\bar{z}}_{\mathbb{H}^{s}(\Omega)} + \norm{S_{h}(\bar{z} - \bar{z}_{h})}_{\mathbb{H}^{s}(\Omega)}\\\n&\leq \norm{(S-S_{h})\bar{z}}_{\mathbb{H}^{s}(\Omega)} + c \norm{\bar{z} - \bar{z}_{h}}_{L^{2}(\Omega)}.\n\end{aligned}\n\end{equation}\nHence, an application of the a priori error estimate \eqref{eq:fe_approximation_error} with $\delta = 1 - \varepsilon'$ and estimate \eqref{eq:full_a_priori_z} proves \eqref{eq:full_a_priori_u}.\nThe third estimate is obtained in the same way.\n\end{proof}\n\n\n\begin{remark}\n We see that the rates obtained in Theorem\nobreakspace \ref {thm:FD:rates} are not optimal with respect to the state;\n they are dictated by the optimal linear rate for the control.\n In Section\nobreakspace \ref {sec:Numerics} we numerically measure higher rates for the optimal state, namely the same\n as for the variational discretization. A proof of these higher rates will be addressed in future work and is mainly based on supercloseness results \cite{MeyerRoesch2004} for the control. These convergence rates carry over to the rates for the discrete control computed by the so-called post-processing step, i.e., using the projection formula \textup {(\ref {eq:disc_proj})} to obtain a new, piecewise linear approximation of the control. Numerical experiments for this post-processing approach are also contained in Section\nobreakspace \ref {sec:Numerics}.
The theoretical analysis is left for future work.\n\end{remark}\n\section{Implementation}\n\label{sec:Implementation}\n\nIn this section, we introduce a solver for the finite element approximation\n\textup {(\ref {eq:dunford_taylor_FE})}.\nThe use of the Balakrishnan formula for inverting the fractional operator requires the solution of a large number of independent linear systems of equations to obtain an accurate solution. \nHowever, these systems carry a lot of structure that can be used to design efficient \niterative schemes based on tailored Krylov subspace methods.\n\nFollowing \cite{Bonito}, application of the sinc quadrature to the\nBalakrishnan representation \eqref{eq:balakrishnan_formula}\ngives rise to the discretization of the state\nequation \eqref{eq:state_equation}.\nFor convenience, we repeat the resulting approximation here:\n\begin{equation}\n \label{eq:im:uhk}\nu_{h}^{k} = \frac{\sin{(s \pi)}}{\pi} k \sum_{l = -N_{-}}^{N_{+}} e^{(1-s)kl}v_{h}^{l},\n\end{equation}\nwhere \n$v_{h}^{l} \in \mathbb{U}(\mathcal{T}_{h})$ is the unique solution of the Galerkin variational problem\n\begin{equation}\n\label{eq:DT_FEM_systems}\n\int_{\Omega} \nabla v_{h}^l \cdot \nabla w_{h} \, dx \n+ e^{kl} \int_{\Omega} v_{h}^l w_{h} \, dx \n= \int_{\Omega} z w_{h} \, dx \quad \forall w_{h} \in \mathbb{U}(\mathcal{T}_{h}).\n\end{equation}\nThe evaluation of \textup {(\ref {eq:im:uhk})} requires the solution of $N_{+} + N_{-} + 1$ linear\nsystems of the form\n\begin{equation}\n \label{eq:im:DiscreteDTSystem}\n\left( A + \alpha_l M \right) V^l = Z, \quad -N_{-} \leq l \leq N_{+}.\n\end{equation}\nHere, $\alpha_l = e^{kl}$, $A$ and $M$ denote the stiffness and mass\nmatrices of the system, respectively, $Z$ denotes the load vector, and $V^l$ denotes the node vector for\n$v^l_h$.\nNotice that the $N_{-}~+~N_{+}~+~1$ linear systems in~\eqref{eq:im:DiscreteDTSystem} are independent \nfor different values
of~$l$.\nHence, a first approach for solving systems \textup {(\ref {eq:im:DiscreteDTSystem})} might be the use of massive\nparallelization.\nHowever, we shall follow a more efficient approach that exploits the structure of the\nlinear systems and uses tailored conjugate gradient solvers.\n \nWe start by normalizing the systems.\nApplication of a standard mass-lumping strategy results in a diagonal mass matrix~$M_h$. \nWe define $\rho := \|M_h^{-1\/2} A M_h^{-1\/2}\|_{\infty}$,\n$\tilde{A} = \frac{1}{\rho}{M_h}^{-1\/2}A{M_h}^{-1\/2}$, \n$\tilde{\alpha}_l = \frac{1}{\rho}\alpha_l$, \n$\tilde{V}^l = {M_h}^{1\/2} V^l$ \nand $\tilde{Z} = \frac{1}{\rho}{M_h}^{-1\/2}Z$.\nThen, the linear systems~\textup {(\ref {eq:im:DiscreteDTSystem})} can be reformulated as\n\begin{equation}\n \label{eq:im:DiscreteDTSystem_modified}\n\Big( \tilde{A} + \tilde{\alpha}_l I \Big) \tilde{V}^l = \tilde{Z}, \quad -N_{-} \leq l \leq N_{+}.\n\end{equation}\nWe can estimate the condition number with respect to the $2$-norm of system $l$ in \textup {(\ref {eq:im:DiscreteDTSystem_modified})}\nby\n\begin{align}\n \label{eq:im:estimateCond}\n \kappa\left(\tilde{A} + \tilde{\alpha}_l I\right) = \n \frac{\lambda_{\max}(\tilde{A} + \tilde{\alpha}_l I)}\n {\lambda_{\min}(\tilde{A} + \tilde{\alpha}_l I)}\n = \n \frac{\lambda_{\max}(\tilde{A}) + \tilde{\alpha}_l}\n {\lambda_{\min}(\tilde{A}) + \tilde{\alpha}_l} \n \leq 1 + \min\left(\n\frac{\lambda_{\max}(\tilde{A})}{\tilde{\alpha}_l},\n \kappa(\tilde{A})\n\right),\n\end{align}\nwhere $\lambda_{\max}(\tilde A)$ and $\lambda_{\min}(\tilde A)$ denote the largest and smallest\neigenvalue of the symmetric positive definite matrix $\tilde A$, respectively.\nFrom \textup {(\ref {eq:im:estimateCond})} we observe that for small $\tilde \alpha_l$ the condition number of\n$\tilde A + \tilde \alpha_l I$ is close to the condition number of $\tilde A$, which is a scaled\nstiffness matrix, while for
large $\\tilde \\alpha_l$ the condition number converges to 1.\nBy introducing the scaling with $\\rho$, we ensure $\\lambda_{\\max}(\\tilde A) \\leq 1$.\n\nThus, as $l$ decreases from $N_+$ to $-N_-$, the condition number of the linear system $\\tilde A + \\tilde \\alpha_l I$\nincreases. While for $l = N_+$ the conjugate gradient method without preconditioning is a well-suited solver, \nfor $l = -N_-$ preconditioning is in general required. \nBased on this observation, we consider two adapted linear solvers.\n\n\\begin{itemize}\n\\item Linear problems for which $l$ is sufficiently large \nare considered to be well-conditioned, and no further preconditioning is needed to obtain fast convergence of the conjugate gradient solver.\nThanks to the shift-invariance property of Krylov subspace methods, \nthe Krylov spaces that are generated during the conjugate gradient method are independent of $l$.\nAs the build-up of the Krylov space contains the only matrix-vector multiplication in the conjugate gradient method,\nthe dimension of the space is equal to the number of matrix-vector multiplications. We fix a number $N_{\\max}$ of multiplications and proceed as follows.\nStarting with $l = N_+$, we solve linear systems for decreasing $l$, where we reuse the Krylov spaces from previous solutions.\nWe stop at $l = N_0$ as soon as the required Krylov space has reached the dimension $N_{\\max}$. \n\nFor the implementation, we use a variant of \nthe conjugate gradient method proposed in~\\cite{Frommer1999}.\nIn Algorithm\\nobreakspace \\ref {alg:im:ShiftInvariant} we summarize the\npseudocode.\n\\item For the remaining systems $-N_-,\\ldots, N_0$, preconditioning is necessary, which conflicts\nwith the shift-invariance property of Krylov methods.\nNote that subsequent systems are still similar, and thus a preconditioner for system\n$l$ is also a (worse, but not necessarily bad) preconditioner for system $l+1$. 
Therefore, we use the standard\napproach of solving the systems \\textup {(\\ref {eq:im:DiscreteDTSystem_modified})} sequentially and recalculating a\nnew preconditioner whenever the old one is no longer good enough, i.e., as soon as a given maximum number of iterations is exceeded in the conjugate gradient method, see Algorithm\\nobreakspace \\ref {alg:im:sequential}.\n\n\\end{itemize}\n\n\nIn Section\\nobreakspace \\ref {ssec:num:DunfordTaylor} we report on the behaviour of the proposed solver.\n\n\n\\begin{algorithm}\n\\SetAlgoLined\n\\KwIn{$\\tilde A,\\tilde \\alpha,\\tilde Z, N_{\\max}$}\n\\KwData{Set: $\\mathcal K=\\emptyset$, $l=N_+$}\n\\KwOut{$N_0$}\n\\While{$\\dim (\\mathcal K) < N_{\\max}$}\n{ \n\\For(\\tcp*[h]{cg-iteration for system $l$}){$k=1,\\ldots$}\n{\n\\If{$k>\\dim(\\mathcal K)$}\n{ \nCalculate basis vector for $k$-th Krylov space and store into $\\mathcal K$\\;\n\\label{alg:im:ShiftCG_increaseKrylovSpace}\n}\nSolve linear system $l$ in space $\\mathcal K_k$\nusing \\cite[Alg.~4]{Frommer1999}\\;\n\\label{alg:im:ShiftCG_solveInKrylovSpace}\n}\n$l:=l-1$\\;\n}\n$N_0 := l$\\;\n\\caption{Pseudocode for solving the well-conditioned systems. 
The only matrix-vector\nmultiplication appears in line\\nobreakspace \\ref {alg:im:ShiftCG_increaseKrylovSpace}.\nHere the space $\\mathcal K_k$ in line\\nobreakspace \\ref {alg:im:ShiftCG_solveInKrylovSpace} is\nthe span of the first $k$ basis elements of $\\mathcal K$.}\n\\label{alg:im:ShiftInvariant}\n\\end{algorithm} \n\n\\begin{algorithm}\n\\SetAlgoLined\n\\KwIn{$\\tilde A,\\tilde \\alpha,\\tilde Z, N_0$}\n\\KwData{Set: $N_{\\max}>0$, $N_{iter} = N_{\\max}+1$}\n\\For{$l=-N_-,\\ldots, N_0$}\n{ \n\\If{$N_{iter} >N_{\\max}$ }\n{\nBuild up amg preconditioner $P$ for system $l$\\;\n}\nSolve system $l$ with the preconditioned conjugate gradient method using preconditioner $P$ with\n$N_{iter}$ iterations\\;\n}\n\\caption{Pseudocode for solving the ill-conditioned systems.}\n\\label{alg:im:sequential}\n\\end{algorithm}\n\n\\begin{remark}[An alternative solver]\n In \\cite{Chan1999} a conjugate gradient method is proposed that uses the Krylov spaces generated for one of the linear systems \\textup {(\\ref {eq:im:DiscreteDTSystem_modified})}, called the seed\n system, to generate good initial values, or even solutions, for the other systems.\n Thanks to the particular structure of\n the systems \\textup {(\\ref {eq:im:DiscreteDTSystem_modified})} this can be done without additional\n matrix-vector multiplications, see \\cite[Sec.~3.1]{Chan1999}. \n Upon convergence, another system is chosen as the seed system, for which the Krylov spaces are\n generated.\n In our implementation, we combine this approach with algebraic multigrid (amg) preconditioning \n and choose as the next seed system the system that\n currently has the largest residual.\n \n This approach requires storing the solutions of all systems in memory to apply the Krylov spaces and to find the next seed system. 
Unfortunately, this turned out not to be feasible for\n fine meshes in 3D, but we obtained very fast convergence of the method when applicable.\n \n A combination of the proposed sequential solver as in Algorithm\\nobreakspace \\ref {alg:im:sequential} and the solver\n proposed in \\cite{Chan1999} seems possible (at least with some restrictions) and will be subject\n to future work.\n\\end{remark}\nFinally, we note that a large number of tailored Krylov methods has been proposed to deal with shifted\nsystems that require preconditioning; we refer to\n\\cite{2003_Benzi_ApproximateInversePrecond_for_Shift,\n2003_Frommer_bicgstab_shiftedSystems,\n2016_Soodhalter_recursiveGMRES_generalPrecond,\n2014_SoodhalterSzydXue_KrylovRecycling_shiftesSystems,\n2017_ZhongGu_FlexibleAdaptiveSGMRES_shiftedSystems}.\n\n\\section{Numerical results}\n\\label{sec:Numerics}\n\nIn this section, we validate the theoretical rates of convergence derived in Section\\nobreakspace \\ref {sec:error_estimates}. \nIn Section\\nobreakspace \\ref {ssec:num:DunfordTaylor}, we investigate the solver \n for the fractional Laplacian proposed in Section\\nobreakspace \\ref {sec:Implementation}.\nIn Section\\nobreakspace \\ref {ssec:OptControl}, we present the convergence rates of the fully discrete finite element scheme for the approximation of the optimal control problem. 
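The computational core of the proposed solver, looping over the shifted systems \textup{(\ref{eq:im:DiscreteDTSystem})} with one conjugate gradient solve per system and accumulating the sinc-quadrature sum \textup{(\ref{eq:im:uhk})}, can be condensed into a short sketch. The following toy 1D stand-in (uniform mesh, lumped mass matrix, one independent unpreconditioned CG solve per system, and invented quadrature parameters $k$ and $N$) is not the PETSc and FEniCS implementation used below, and it omits the Krylov-space reuse of Algorithm~\ref{alg:im:ShiftInvariant}:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

def fractional_inverse_apply(A, m_diag, Z, s, k=0.3, N=60):
    """Sinc-quadrature approximation of the fractional inverse,
        u = (sin(s*pi)/pi) * k * sum_{l=-N}^{N} exp((1-s)*k*l) * v_l,
    where each v_l solves the shifted system (A + exp(k*l) * M) v_l = Z.
    Every system is solved independently by unpreconditioned CG, i.e.
    without the Krylov-space reuse of the tailored solver."""
    M = diags(m_diag)                  # lumped (diagonal) mass matrix
    u = np.zeros_like(Z)
    for l in range(-N, N + 1):         # the N_- + N_+ + 1 shifted systems
        v, info = cg(A + np.exp(k * l) * M, Z, atol=1e-12, maxiter=10000)
        assert info == 0, "CG did not converge"
        u += np.exp((1.0 - s) * k * l) * v
    return (np.sin(s * np.pi) / np.pi) * k * u

# Toy problem: piecewise linear FEM for the 1D Dirichlet Laplacian on (0,1),
# uniform mesh with n interior nodes.
n = 40
h = 1.0 / (n + 1)
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n)) / h  # stiffness
m_diag = h * np.ones(n)                                             # lumped mass
z = np.sin(np.pi * np.arange(1, n + 1) * h)     # nodal values of sin(pi*x)
u = fractional_inverse_apply(A, m_diag, m_diag * z, s=0.5)          # load Z = M z
```

For this choice of $k$ and $N$, comparing $u$ with the exact discrete fractional power obtained from a dense eigendecomposition of $M^{-1}A$ agrees to roughly three digits; the quadrature error decreases exponentially as $k$ is reduced and $N$ is increased.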
\nAll numerical experiments are conducted for a range of values of the fractional exponent~$s$.\n\n \nWe implement the solver proposed in Section\\nobreakspace \\ref {sec:Implementation} in C++ using the\nPETSc linear algebra package \\cite{petsc-web-page} and solve the optimal control problem\nwith the TAO package of PETSc, using the bound-constrained limited-memory variable-metric method\n(tao\\_blmvm), a limited-memory BFGS method.\n We generate meshes and finite element functions and assemble matrices\nusing FEniCS \\cite{LoggMardalEtAl2012a} through the C++ interface.\n \n\n For the solver of the fractional operator proposed in Section\\nobreakspace \\ref {sec:Implementation} we fix $N_{\\max} = 500$ for 2D simulations and \n $N_{\\max} = 250$ for 3D simulations. \n The individual linear systems are solved up to a relative accuracy of $10^{-8}$. \n For Algorithm\\nobreakspace \\ref {alg:im:sequential}, we calculate a \n new preconditioner as soon as more than $N_{\\max}=20$ iterations are taken in the preconditioned conjugate gradient method.\n As amg preconditioner we use 2 V-cycles of Hypre \\cite{hypre-web-page}, accessed through the PETSc interface. We stop the optimization as soon as the $l^2$-norm of the projected gradient is smaller than or equal to $10^{-5} \\sqrt{h^n}$, where $h$ is the length of the longest edge in the finite element mesh. 
Here the scaling with $h$ mimics the different scaling of the $l^2$-norm and the\n $L^2(\\Omega)$-norm.\n\n\n\\subsection{The solver for the fractional operator}\n\\label{ssec:num:DunfordTaylor}\nLet us first report on the performance of the proposed solver for the systems\n\\textup {(\\ref {eq:im:DiscreteDTSystem_modified})}.\nAs a test example we use $\\Omega = (0,1)^n$, $n \\in \\{2,3\\}$, and\nset $f = \\min(0.25,f_0)$, where $f_0(m) = 0.5$, with $m$ the center of $\\Omega$, \n$f_0|_{\\partial\\Omega} = 0$, and $f_0$ is linearly interpolated between these values.\nNote that no analytical solution is known for this\nright-hand side $f$, and that $f$ enjoys $\\mathbb{H}^{3\/2-\\varepsilon}(\\Omega)$ regularity, which is the maximal regularity of the optimal control $z$.\nWe solve the equation $(-\\Delta)^s u_h^k = f$ on a sequence of uniformly refined meshes and\nuse the solution on the finest mesh (with $N_\\Omega = 4198401$ nodes for $n=2$) as the reference solution. \nThese meshes are chosen such that all\nkinks in $f$ are resolved and the integration of $f$ is carried out without numerical error.\n\nIn Table\\nobreakspace \\ref {tab:num:DunfordTaylor_2D} and Table\\nobreakspace \\ref {tab:num:DunfordTaylor_3D} we report on the solver for the cases $n=2$ and $n=3$, respectively, and for $s=0.05$, $s=0.5$, and $s=0.95$.\n \n\\begin{table}\n\\centering\n\\input{DF_generic_2D.tab.tex}\n\n\\caption{The behavior of the proposed solver for the fractional operator for 2D simulations.\n$N_\\Omega$ denotes the number of degrees of freedom in $\\Omega$, $N_\\alpha = N_- + N_+ + 1$ denotes the number of linear systems to solve. 
In brackets, we show how many systems are solved by\nAlgorithm\\nobreakspace \\ref {alg:im:ShiftInvariant} and Algorithm\\nobreakspace \\ref {alg:im:sequential}, respectively.\nWe give results for $s=0.05$, $s=0.5$, and $s=0.95$.\nThe number of amg setups in Algorithm\\nobreakspace \\ref {alg:im:sequential} is equal in all cases.\n}\n\\label{tab:num:DunfordTaylor_2D}\n\\end{table}\n \n \\begin{table}\n \\centering\n\\input{DF_generic_3D.tab.tex}\n\\caption{Behavior of the proposed solver for the fractional operator for 3D simulations.\nFor an explanation of the abbreviations, see Table\\nobreakspace \\ref {tab:num:DunfordTaylor_2D}.\n}\n\\label{tab:num:DunfordTaylor_3D}\n\\end{table} \n\nWe observe that the number of amg setups is in fact very small, or that no setup is necessary at all, which indicates how closely\nrelated the systems are.\n\nFinally, for small $s$ the operator is closer to the identity, and thus more systems are\nwell-conditioned, as can be seen by comparing the number of systems solved by\nAlgorithm\\nobreakspace \\ref {alg:im:ShiftInvariant} with the number of systems solved by\nAlgorithm\\nobreakspace \\ref {alg:im:sequential}.\n\nLet us briefly comment on the convergence rate from Corollary\\nobreakspace \\ref {cor:fe_error_estimates_regular} for $u_h^k$.\nAs the right-hand side defined above satisfies $f \\in \\mathbb{H}^{3\/2-\\varepsilon}(\\Omega)$,\nwe expect a rate of $h^{\\min(2,3\/2+2s-\\varepsilon')}$.\nIn Table\\nobreakspace \\ref {tab:num:DF_ratesL2} we show the observed convergence rates for $n=2$ and $s\\in \\{0.05,0.1,0.25\\}$, \nwhich indeed confirm the theoretical predictions. For $n=3$, memory consumption\nrestricts the quality of the reference solution, so that we do not measure a rate for $n=3$.\n \n\\begin{table}\n \n \\input{DF_generic_2D_rates.tab.tex}\n \n \\caption{Experimental convergence rates for the numerical solution of the fractional Laplace\n equation with $f \\in \\mathbb{H}^{3\/2-\\varepsilon}(\\Omega)$. 
\n We observe that the expected rate $r^s_{L^2(\\Omega)} = \\min(2,3\/2+2s-\\varepsilon')$ is attained in all examples.\n } \n \\label{tab:num:DF_ratesL2} \n\\end{table}\n\n\\subsection{Optimal Control Problem}\\label{ssec:OptControl}\nTo verify the theoretical convergence rates of the finite element discretization of the optimal control problem, \nwe perform numerical experiments without a known optimal solution.\nWe use the domain $\\Omega = (0, 1)^2$ and set the desired state to be equal to an eigenfunction of the Laplacian on the square, \nnamely $u_d = \\sin (2 \\pi x) \\sin (2 \\pi y)$, and $f \\equiv 0$. \nWe consider three different values of the fractional parameter, \nnamely $s \\in \\lbrace 0.05, 0.25, 0.5\\rbrace$, and choose $a = -0.8, b = 0.8$, \nsuch that the box constraints are active in some subdomain of~$\\Omega$.\nThe optimal solution for $h=0.0014$ is considered as the reference solution.\n\nResults of the numerical tests are summarized in Figure\\nobreakspace \\ref {fig:OPT_Plots}. \nFirst-order convergence of the approximation of the control is obtained, \nwhich is in line with~\\textup {(\\ref {eq:full_a_priori_z})}. \n\nWe also report on results using the post-processing approach \\cite{MeyerRoesch2004}.\nHere the projection formula \\textup {(\\ref {eq:disc_proj})} is used to obtain a higher order approximation of the optimal control.\nHigher order means that instead of an approximation with piecewise constant functions, \na piecewise linear approximation is obtained that has the same structure as the optimal control obtained with variational discretization. 
\nWe thus expect the same optimal rate of convergence for the post-processed optimal control $\\bar{z}_h^{PP}$ \nas for the variational discretization approach, namely \n$\\|\\bar{z}-\\bar{z}_h^{PP}\\|_{L^2(\\Omega)} \\leq c \\, h^{\\min(2,3\/2+2s-\\varepsilon')}$.\nIn Figure\\nobreakspace \\ref {fig:OPT_Plots} we observe the expected higher rates for the optimal controls with the post-processing approach for $n=2$.\n\nWe also investigate the finite element approximation of the state in the $L^2(\\Omega)$- and $H^s(\\Omega)$-norms, \nthe latter being estimated using the Gagliardo--Nirenberg interpolation inequality \n$\\| \\bar{u} - \\bar{u}_h\\|_{H^s(\\Omega)} \\lesssim \\| \\bar{u} - \\bar{u}_h\\|^{1 - s}_{L^2(\\Omega)} \\|\\bar{u} - \\bar{u}_h \\|^s_{H^1(\\Omega)}$. We observe $h^{\\min(2,3\/2 + 2s)}$ order of convergence in the $L^2(\\Omega)$-norm and $h^{\\min(2-s,3\/2 + s)}$ order of convergence in the $H^s(\\Omega)$-norm. \nThe theoretical justification as well as the analysis of the post-processing approach is left for future work.\n\\begin{figure}[!]\n \n\n\\input{Plot_2D_control.new.tex}\n\\input{Plot_2D_state.new.tex}\n\n\n\\caption{Convergence rates of the discretization of the optimal control problem in 2D. \nThe figure on the left-hand side presents approximations of the control~$z$. \nFirst-order convergence of the piecewise constant approximation can be observed. \nApplication of the additional post-processing significantly improves the convergence properties, \nand we observe $h^{\\min(2,3\/2 + 2s)}$ convergence. \nOn the right-hand side, the convergence of the approximation of the state~$u$ is shown. \nThe convergence order of the piecewise linear finite element method measured in the $L^2(\\Omega)$-norm depends \non the choice of~$s$ and varies between $3\/2$ and $2$, where the latter is attained for sufficiently large~$s$. 
\nThe convergence order in the $H^s(\\Omega)$-norm is included for completeness.}\n\\label{fig:OPT_Plots}\n\\end{figure}\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Approach}\n\nIn this section, we describe our classification model that incorporates the attention mechanism. It can be applied to both text and tabular data and is inspired by work on attention-based models in text classification~\\cite{zhou-etal-2016-attention}. We incorporate attention over the input features. Next, we describe how this attention over features can attribute the model's unfairness to certain features. Finally, using this attribution framework, we propose a post-processing approach for mitigating unfairness.\n\nIn this work, we focus on binary classification tasks. We assume access to a dataset of triplets $\\mathcal D = \\{x_i , y_i , a_i \\}^N_{i=1}$, where $x_i,y_i,a_i$ are i.i.d. samples from the data distribution $p(\\mathbf x, \\mathbf y, \\mathbf a)$. $\\mathbf a \\in \\{a_1,\\ldots a_l\\}$ is a discrete variable with $l$ possible values and denotes the sensitive or protected attributes with respect to which we want to be fair, $\\mathbf y\\in \\{0,1\\}$ is the true label, and $\\mathbf x\\in \\mathbb{R}^m $ are the features of the sample, which may include sensitive attributes. We use $\\hat y_o$ to denote the binary outcome of the original model, and $\\hat y_z^k$ to denote the binary outcome of a model in which the attention weights corresponding to the $k^\\text{th}$ feature are zeroed out. Our framework is flexible and general in that it can be used to find attribution for any fairness notion. 
More particularly, we work with the group fairness measures like \\textit{Statistical Parity}~\\cite{Dwork:2012:FTA:2090236.2090255}, \\textit{Equalized Odds}~\\cite{NIPS2016_9d268236}, and \\textit{Equality of Opportunity}~\\cite{NIPS2016_9d268236}, which are defined as:\\footnote{We describe and use the definition of these fairness measures as implemented in \\textit {Fairlearn }package~\\cite{bird2020fairlearn}. } \n\n\\noindent\n\\textbf{Statistical Parity Difference (SPD)}:\n$$\n\\text{SPD}(\\hat {\\mathbf y}, \\mathbf a ) =\n \\max_{a_i, a_j}|P(\\hat{\\mathbf y}=1\\mid \\mathbf a=a_i)-P(\\hat{\\mathbf y}=1\\mid \\mathbf a =a_j)|\n$$\n\\textbf{Equality of Opportunity Difference (EqOpp)}:\n\\begin{align*}\n\\text{EqOpp}(\\hat {\\mathbf y}, \\mathbf a, \\mathbf y) =\n \\max_{a_i, a_j} |P(\\hat{\\mathbf y}=1\\mid\\mathbf a = a_i,\\mathbf y=1)&\\\\-P(\\hat{\\mathbf y}=1\\mid\\mathbf a = a_j,\\mathbf y=1)|\n\\end{align*}\n\\textbf{Equalized Odds Difference (EqOdd)}:\n\\begin{align*}\n\\text{EqOdd}(\\hat {\\mathbf y}, \\mathbf a, \\mathbf y) = \n\\max_{a_i, a_j} \\max_{y\\in\\{0,1\\}} |P(\\hat{\\mathbf y}=1\\mid\\mathbf a = a_i,\\mathbf y=y)&\\\\- P(\\hat{\\mathbf y}=1\\mid\\mathbf a = a_j,\\mathbf y=y)|\n\\end{align*}\n\n\n\n\\begin{figure}[h]\n \\centering\n \\begin{subfigure}[b]{0.21\\textwidth}\n \\includegraphics[width=0.96\\textwidth,trim=0cm 0cm 0cm 0cm,clip=true]{images\/general_model.pdf}\n \\caption{Classification model.}\n \\label{fig:original_model}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.24\\textwidth}\n \\includegraphics[width=\\textwidth,trim=0cm 0cm 0cm 0cm,clip=true]{images\/attr_framework.pdf}\n \\caption{Attribution framework.}\n \\label{fig:intervention_model}\n \\end{subfigure}\n \\caption{(a) In general classification model, for each feature $f_k$ a vector representation $e_k$ of length $d^e$ is learned. 
This is passed to the attention layer which produces a $d^e$-dimensional vector representation for the sample instance $i$ which is passed to two dense layers to get the final classification output. (b) The Attribution framework has the same architecture as the general model. One outcome is obtained through the original model and another through the model that has some attention weights zeroed. The observed difference in accuracy and fairness measures will indicate the effect of the zeroed out features on accuracy and fairness.}\n \\label{fig:model}\n \n\\end{figure}\n\n\n\n\\begin{algorithm}[h]\n\\SetAlgoLined\nInput: decay rate $d_r$ ($0 \\leq d_r <1$), $n$ test samples indexed by variable $i$. \\\\\nOutput: final predictions, unfair features. \\\\\nCalculate the attention weights $\\alpha_{ki}$ for $k^{\\text{th}}$ feature in sample $i$ using the attention layer as in Eq.~\\ref{eq:attention_op1}.\\\\\n{unfair\\_feature\\_set} = \\{\\} \\\\\n\\For{each feature (index) $k$}{\n \\If{$\\text{SPD}(\\hat {\\mathbf y}_o, \\mathbf {a})- \\text{SPD}(\\hat {\\mathbf y}_z^k, \\mathbf {a}) \\geq 0$}{\n {unfair\\_feature\\_set} = {unfair\\_feature\\_set} $\\cup\\ \\{k\\}$\n }\n}\n\\For{each feature (index) $k$}{\n\\If{$k$ in \\text{unfair\\_feature\\_set}}{\nSet $\\alpha_{ki} \\leftarrow (d_r \\times \\alpha_{ki})$ for all $n$ samples\n}\n}\nUse new attention weights \nto obtain the final predictions $\\hat Y$. \\\\\n\\Return $\\hat Y$, {unfair\\_feature\\_set}\n \\caption{Bias Mitigation with Attention}\n \\label{mitig_alg}\n\\end{algorithm}\n\n\\subsection{General Model: Incorporating Attention over Inputs in Classifiers}\n\nWe consider each feature value as an individual entity (like the words are considered in text-classification) and learn a fixed-size embedding $\\{e_k\\}_{k=1}^m,\\ e_k \\in \\mathbb{R}^{d^e}$ for each feature, $\\{f_k\\}_{k=1}^m$. These vectors are passed to the attention layer. 
The computation of the attention weights and the final representation for a sample is described in Eq.~\\ref{eq:attention_op1}. $E = [e_1 \\ldots e_m], \\ E \\in \\mathbb R ^{d^e\\times m} $ is the concatenation of all the embeddings, $w \\in \\mathbb{R}^{d^e} $ is a learnable parameter, $r \\in \\mathbb{R}^{d^e}$ denotes the overall sample representation, and $\\alpha \\in \\mathbb{R}^{m}$ denotes the attention weights.\n\\begin{align}\nH = \\tanh(E) \\quad \\label{eq:attention_op1}\n\\alpha = \\text{softmax}(w^TH) \\quad\nr = \\tanh(E\\alpha^T)\n\\end{align}\nThe resulting representation, $r$, is passed to the feed-forward layers for classification. In this work, we use \ntwo feed-forward layers (see Fig.~\\ref{fig:model} for the overall architecture).\n\n\\subsection{Fairness Attribution with Attention Weights}\nThe aforementioned classification model with the attention mechanism combines the input feature embeddings by taking a weighted combination. By manipulating the weights, we can intuitively capture the effects of specific features on the output. To this end, we observe the effect of each attribute on the fairness of outcomes by zeroing out or reducing its attention weights and recording the change. Other works have used similar ideas to understand the effect of attention weights on accuracy and to evaluate the interpretability of the attention weights by comparing the difference in outcomes in terms of measures such as the Jensen-Shannon divergence~\\cite{serrano-smith-2019-attention}, but not for fairness. We are interested in the effect of features on fairness measures. Thus, we measure the difference in the fairness of the outcomes based on the desired fairness measure. A large change in the fairness measure and a small change in the performance of the model would indicate that this feature is mostly responsible for unfairness, and that it can be dropped without a large impact on performance. The overall framework is shown in Fig.~\\ref{fig:model}. 
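To make the layer concrete, Eq.~\ref{eq:attention_op1} can be written out for a single sample in a few lines of NumPy. This is an illustrative sketch only; the embeddings $E$ and the parameter $w$ below are random placeholders standing in for learned quantities:

```python
import numpy as np

def attention_forward(E, w):
    """Eq. (attention_op1): H = tanh(E), alpha = softmax(w^T H),
    r = tanh(E alpha^T).  E has shape (d_e, m), one column per feature
    embedding; w has shape (d_e,)."""
    H = np.tanh(E)
    scores = w @ H                          # one score per feature, shape (m,)
    alpha = np.exp(scores - scores.max())   # numerically stable softmax
    alpha /= alpha.sum()
    r = np.tanh(E @ alpha)                  # weighted combination, shape (d_e,)
    return r, alpha

rng = np.random.default_rng(0)
d_e, m = 8, 5                       # embedding size, number of features
E = rng.normal(size=(d_e, m))       # placeholder feature embeddings
w = rng.normal(size=d_e)            # placeholder attention parameter
r, alpha = attention_forward(E, w)  # alpha sums to one over the m features
```

Zeroing out or decaying an entry of $\alpha$ before forming $r$ is exactly the intervention used for attribution and mitigation in the following subsections.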
First, the outcomes are recorded with the original attention weights intact (Fig.~\\ref{fig:original_model}). Next, attention weights corresponding to a particular feature are zeroed out, and the difference in performance and fairness measures is recorded (Fig.~\\ref{fig:intervention_model}). Based on the observed differences, one may conclude how incorporating this feature contributes to fairness\/unfairness.\n\nTo measure the effect of the $k^{th}$ feature on different fairness measures, we consider the difference in the fairness of outcomes of the original model and model with $k^{th}$ feature's effect removed. For example, for statistical parity difference, we will consider $\\text{SPD}(\\hat {\\mathbf y}_o, \\mathbf {a})- \\text{SPD}(\\hat {\\mathbf y}_z^k, \\mathbf {a})$. A negative value will indicate that the $k^{th}$ feature helps mitigate unfairness, and a positive value will indicate that the $k^{th}$ feature contributes to unfairness. This is because $\\hat{y}_z^k$ captures the exclusion of the $k^{th}$ feature (zeroed out attention weight for that feature) from the decision-making process. If the value is positive, it indicates that not having this feature makes the bias lower than when we include it. Notice here, we focus on global attribution, so we measure this over all the samples; however, this can also be turned into local attribution by focusing on individual sample $i$ only.\n\\subsection{Mitigating Bias by Removing Unfair Features} \\label{mitig_sec}\nAs discussed in the previous section, we can identify features that contribute to unfair outcomes according to different fairness measures. A simple technique to mitigate or reduce bias is to reduce the attention weights of these features. This mitigation technique is outlined in Algorithm~\\ref{mitig_alg}. In this algorithm, we first individually set attention weights for each of the features in all the samples to zero and monitor the effect on the desired fairness measure. 
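The attribution test and the subsequent decay-based mitigation can be illustrated end to end on a toy stand-in for the classifier. Here the predictions come from a fixed linear score over attention-weighted features (the coefficients, the threshold, and the decay rate are invented for illustration), $f_1$ plays the role of a sensitive attribute that leaks into the decision, and SPD is computed as defined above:

```python
import numpy as np

def spd(y_hat, a):
    """Statistical parity difference for binary predictions y_hat
    and a discrete sensitive attribute a."""
    rates = [y_hat[a == g].mean() for g in np.unique(a)]
    return max(rates) - min(rates)

rng = np.random.default_rng(0)
n = 2000
f1 = rng.integers(0, 2, n)           # sensitive attribute (plays the role of a)
f2 = rng.integers(0, 2, n)           # task-relevant feature
f3 = rng.normal(size=n)              # irrelevant feature
X = np.column_stack([f1, f2, f3]).astype(float)

coef = np.array([1.0, 2.0, 0.0])     # stand-in model that leaks f1
def predict(alpha):                  # alpha: per-feature attention weights
    return ((X * alpha) @ coef > 0.5).astype(int)

alpha = np.ones(3)
base = spd(predict(alpha), f1)

unfair = []                          # attribution: zero out one feature at a time
for k in range(3):
    a_k = alpha.copy()
    a_k[k] = 0.0
    if base - spd(predict(a_k), f1) >= 0:   # excluding k does not increase bias
        unfair.append(k)

alpha_mit = alpha.copy()
alpha_mit[unfair] *= 0.1             # decay rate d_r = 0.1
mitigated = spd(predict(alpha_mit), f1)
```

With these numbers the base predictions fire whenever $f_1=1$ or $f_2=1$, so $f_1$ (and the inert $f_3$) are flagged while $f_2$ is not, since removing $f_2$ would force the model onto $f_1$ and increase bias; after the decay the predictions reduce to $f_2$ and the SPD drops to sampling noise. In the actual method the weights come from the attention layer and are modified per sample at test time.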
We have demonstrated the algorithm for SPD, but other measures, such as EqOdd, EqOpp, and even accuracy, can be used (in which case the ``unfair\\_feature\\_set'' can be renamed to the set of features that harm accuracy instead of fairness). If the $k^{th}$ feature contributes to unfairness, we reduce its attention weight using the decay rate $d_r$. This is because $\\hat {\\mathbf y}_z^k$ captures the exclusion of the $k^{th}$ feature (zeroed attention weight for that feature) compared to the original outcome $\\hat {\\mathbf y}_o$, for which all the feature weights are intact; otherwise, we use the original attention weight. We can also control the fairness-accuracy trade-off by putting more attention weight on features that boost accuracy while keeping the fairness of the model the same, and down-weighting features that hurt accuracy, fairness, or both. \n\nThis post-processing technique has several advantages over previous bias mitigation and fair classification approaches. First, the post-processing approach is computationally efficient, as it does not require model retraining to ensure fairness for each sensitive attribute separately. Instead, the model is trained once incorporating all the attributes, and then one manipulates the attention weights at test time according to particular needs and use-cases. Second, the proposed mitigation method provides an explanation and can control the fairness-accuracy trade-off: \nmanipulating the attention weights reveals which features are important for obtaining the desired outcome and by how much, which explains the outcome, while the amount of manipulation provides a mechanism to control the trade-off.\n\n\\section{Experiments}\n\\vspace{-0.2cm}\nWe perform a suite of experiments on synthetic and real-world datasets to evaluate our attention-based interpretable fairness framework. 
First, we elucidate\ninterpretability on the synthetic dataset on which we can control the relations between input and output features. We verify that our interpretability framework recovers expected attributions and behaviour. We also demonstrate interpretability on the real-world dataset and identify features that cause unfairness. Exploiting these attributions, we test our proposed post-processing bias mitigation framework and compare it with various recent mitigation approaches. Finally, we show the effectiveness of our mitigation approach on non-tabular data format, namely text.\n\n\\subsection{Datasets}\n\n\\newcommand{\\text{Ber}}{\\text{Ber}}\n\n\\subsubsection{Synthetic Data} \\label{syn_data_sec}\nTo validate the interpretability framework, we created two synthetic datasets in which we control how features interact with each other and contribute to the accuracy and fairness of the outcome variable.\nThese datasets capture some of the common scenarios arising in decision or classification problems.\n\n\\paragraph{Scenario 1:} First, we create a simple scenario to demonstrate that our framework identifies correct feature attributions for fairness and accuracy. To this end, we create a feature that is correlated with the outcome (responsible for accuracy), a discrete feature that would cause the prediction outcomes to be biased (responsible for fairness), and a continuous feature that is independent of the label or the task; thus, irrelevant for the task. Intuitively, suppose the attention-based interpretability framework works correctly. In that case, we expect to see a reduction in accuracy upon removing (i.e., making the attention weight zero) the feature responsible for the accuracy, reduction in bias upon removing the feature responsible for bias, and very little or no change upon removing the irrelevant feature. 
With this objective, we generated a synthetic dataset with three features, i.e., $x = [f_1, f_2, f_3]$, as follows~\\footnote{We use $x\\sim \\text{Ber}(p)$ to denote that $x$ is a Bernoulli random variable with $P(x=1)=p$.}.\n\\begin{equation*}\n f_1 \\sim \\text{Ber}(0.9)\n \\quad f_2\\sim \\text{Ber}(0.5)\n \\quad f_3\\sim \\mathcal N (0,1)\n \\quad y\\sim \\begin{cases}\n \\text{Ber}(0.9) & \\text{if } f_2=1\\\\\n \\text{Ber}(0.1) & \\text{if } f_2=0\n \\end{cases}\n\\end{equation*}\nClearly, $f_2$ has the most predictive information for the task and is responsible for accuracy. Here, we consider $f_1$ as the sensitive attribute. $f_1$ is an imbalanced feature that can bias the outcome and is generated such that there is no intentional correlation between $f_1$ and the outcome $y$ or $f_2$. $f_3$ is sampled from a normal distribution independent of the outcome $y$ and the other features, making it irrelevant to the task.\nThus, an ideal classifier would be fair if it captures the correct outcome without being affected by the imbalance in $f_1$. However, due to limited data and the skew in $f_1$, there will be some undesired bias --- a few errors when $f_1=0$ can lead to a large statistical parity difference. \n\n\\paragraph{Scenario 2:} Using features that are not identified as sensitive attributes can result in unfair decisions due to their implicit relations or correlations with the sensitive attributes. This phenomenon is called indirect discrimination~\\cite{zliobaite2015survey,6175897,ijcai2017-549}. We designed this synthetic dataset to demonstrate and characterize the behavior of our framework under indirect discrimination. Similar to the previous scenario, we consider three features. Here, $f_1$ is considered as the sensitive attribute, and $f_2$ is correlated with $f_1$ and the outcome, $y$. 
The generative process is as follows:\n\\begin{equation*}\n f_1\\sim \\begin{cases}\n \\text{Ber}(0.9) & \\text{if } f_2=1\\\\\n \\text{Ber}(0.1) & \\text{if } f_2=0\n \\end{cases}\n \\quad f_2\\sim\\text{Ber}(0.5)\n \\quad f_3\\sim \\mathcal N (0,1)\n \\quad y\\sim \\begin{cases}\n \\text{Ber}(0.7) & \\text{if } f_2=1\\\\\n \\text{Ber}(0.3) & \\text{if } f_2=0\n \\end{cases}\n\\end{equation*}\n\nNote that in this case both $f_1$ and $y$ are correlated with \n$f_2$. The model should mostly rely on $f_2$ for its decisions. \nHowever, due to the correlation between $f_1$ and $f_2$, we expect $f_2$ to affect both the accuracy and the fairness of the model. Thus, in this case, indirect discrimination is possible. Using such a synthetic dataset, we demonstrate a) indirect discrimination and b) the need to have an interpretable framework to reason about unfairness rather than blindly focusing on the sensitive attributes for bias mitigation.\n\n\n\n\\subsubsection{Real-world Datasets}\nIn addition to synthetic datasets, we demonstrate our approach on various real-world datasets, which we discuss next.\n\n\\paragraph{Tabular Datasets:}\nWe conduct our experiments on two real-world tabular datasets often used to benchmark fair classification techniques --- the \\textit{UCI Adult}~\\cite{Dua:2019} and \\textit{Heritage Health}~\\footnote{https:\/\/www.kaggle.com\/c\/hhp} datasets. The \\textit{UCI Adult} dataset contains census information about individuals, with the prediction task being whether the income of the individual is higher than \\$50k or not. The sensitive attribute, in this case, is gender (male\/female). The \\textit{Heritage Health} dataset contains patient information, and the task is to predict the Charlson Index (a comorbidity index, which is a patient survival indicator). Each patient is grouped into one of 9 possible age groups, and we consider this as the sensitive attribute. 
We used the same pre-processing and train-test splits as in~\\cite{gupta2021controllable}.\n\n\\paragraph{Non-Tabular or Text Dataset:}\nTo demonstrate the flexibility of our approach, we also experiment with a non-tabular text dataset. We used the \\textit{biosbias} dataset~\\cite{de2019bias}. The dataset contains short bios of individuals, and the task is to predict the occupation of the individual from their bio.\nWe utilized the bios from the year 2018 from the \\texttt{2018\\_34} archive and considered two occupations for our experiments, namely nurse and dentist. The dataset was split into $70-15-15$ train, validation, and test splits. \\cite{de2019bias} demonstrated the existence of gender bias in this prediction task and showed that certain gender words are associated with certain job types. For example, for the bios that we consider, we expect female-associated words, such as \\textit{she}, to skew the prediction towards the job nurse, and male-associated words, such as \\textit{he}, to skew the prediction towards the job dentist.\n\n\\subsection{Interpreting Fairness with Attention}\n\\label{sec:interpret_fairness}\n\\begin{figure*}[t]\n\\begin{subfigure}[b]{0.5\\textwidth}\n\\includegraphics[width=\\textwidth,trim=2.5cm 0cm 4.5cm 1.0cm,clip=true]{.\/images\/inter_imgs\/no_err_newscen1.pdf}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.5\\textwidth}\n\\includegraphics[width=\\textwidth,trim=2.5cm 0cm 4.5cm 1.0cm,clip=true]{.\/images\/inter_imgs\/no_err_newscen2.pdf}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.5\\textwidth}\n\\includegraphics[width=\\textwidth,trim=2.5cm 0cm 4.5cm 1.0cm,clip=true]{.\/images\/inter_imgs\/no_err_real_data_adult_subset2.pdf}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.5\\textwidth}\n\\includegraphics[width=\\textwidth,trim=2.4cm 0cm 4.5cm 1.0cm,clip=true]{.\/images\/inter_imgs\/no_err_real_data_health_subset2.pdf}\n\\end{subfigure}\n\\caption{Results from the synthetic datasets on top and real-world datasets in 
the bottom. Labels on the points represent the feature name that was removed. The results show how the accuracy and fairness of the model (in terms of statistical parity difference) change upon exclusion of each feature. The red \\textit{original} point represents the results of the model including all the features.}\n\\label{fig:interpret_results}\n\\end{figure*}\n\n\nFirst, we validate that our interpretability framework can capture correct attributions through controlled experiments with synthetic data. We also experiment with \\textit{UCI Adult} and \\textit{Heritage Health} and argue that the captured attributions are reasonable. Fig.~\\ref{fig:interpret_results} summarizes our results. The interpretability framework correctly captures the intentional behavior created in the synthetic datasets.\n\nIn \\textit{Scenario 1}, as expected, $f_2$ is correctly attributed as responsible for accuracy, and removing it hurts the accuracy drastically. Similarly, $f_1$ is correctly shown to be responsible for unfairness, and removing it creates a fairer outcome. Ideally, the model should not use any information about $f_1$, as it is independent of the task, but it does. Therefore, by removing $f_1$, we can be sure that this information is not used and hence the outcomes are fairer. Lastly, as expected, $f_3$ is the irrelevant feature, and its effects on accuracy and fairness are negligible. Another interesting observation is that $f_2$ helps the model achieve fairness, since its exclusion means that the model must rely on $f_1$ for decision making, resulting in more bias; thus, removing $f_2$ harms both accuracy and fairness, as expected.\n\nIn \\textit{Scenario 2}, our framework captures the effect of indirect discrimination. We can see that removing $f_2$ reduces bias as well as accuracy drastically. This is because $f_2$ is the predictive feature, but due to its correlation with $f_1$, it can also indirectly affect the model's fairness. 
More interestingly, although $f_1$ is the sensitive feature, removing it does not drastically affect fairness or accuracy. This is an important finding, as it shows why removing $f_1$ on its own cannot give us a fairer model: correlations with other features enable indirect discrimination. \n\nLastly, Fig.~\\ref{fig:interpret_results} also shows results on a subset of the features from the \\textit{UCI Adult} and \\textit{Heritage Health} datasets (to keep the plots uncluttered and readable, we include only the most interesting features). These results provide some intuition about how different features in these datasets contribute to the fairness and accuracy of the model. While features such as capital gain and capital loss in the \\textit{UCI Adult} dataset are responsible for improving accuracy and reducing bias, we observe in Fig.~\\ref{fig:interpret_results} that features such as relationship or marital status, which can be directly or indirectly correlated with sex, are causing unfairness. 
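The per-feature attribution loop behind these plots can be sketched as follows. This is a minimal illustration, not our exact implementation; `predict_with_mask` is a hypothetical stand-in for a trained model that accepts per-feature attention-weight multipliers.

```python
import numpy as np

def spd(y_pred, s):
    """Statistical parity difference: |P(y_pred=1 | s=1) - P(y_pred=1 | s=0)|."""
    return abs(y_pred[s == 1].mean() - y_pred[s == 0].mean())

def attribute_features(predict_with_mask, X, y, s, n_features):
    """Zero out one feature's attention weight at a time and record the
    resulting change in accuracy and statistical parity difference."""
    full = np.ones(n_features)
    base = predict_with_mask(X, full)
    base_acc, base_spd = (base == y).mean(), spd(base, s)
    deltas = {}
    for j in range(n_features):
        mask = full.copy()
        mask[j] = 0.0                               # "remove" feature j
        pred = predict_with_mask(X, mask)
        deltas[j] = ((pred == y).mean() - base_acc,  # accuracy change
                     spd(pred, s) - base_spd)        # fairness change
    return deltas
```

Features whose removal lowers the SPD while leaving accuracy largely intact are the natural targets for mitigation.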
These results are intuitive and once again confirm the reliability of our framework.\n\nThe above results validate that attention weights capture the true or expected behavior and that this framework can provide reliable attributions for the fairness and accuracy of the model.\n\n\\vspace{-0.3cm}\n\\subsection{Attention as a Mitigation Technique}\n\\vspace{-0.3cm}\n\\begin{figure*}[t]\n    \\begin{subfigure}[b]{0.49\\textwidth}\n        \\includegraphics[width=\\textwidth,trim=0cm 0cm 1.8cm 1.0cm,clip=true]{.\/images\/mitig_imgs\/adult_results_err.pdf}\n    \\end{subfigure}\n    \\begin{subfigure}[b]{0.49\\textwidth}\n        \\includegraphics[width=\\textwidth,trim=0cm 0cm 1.8cm 1cm,clip=true]{.\/images\/mitig_imgs\/health_results_err.pdf}\n    \\end{subfigure}\n    \\caption{Accuracy vs parity curves for UCI Adult and Heritage Health datasets.}\n    \\label{fig:mitig_results}\n\\end{figure*}\n\nAs we have highlighted earlier, understanding how the information within features interacts and contributes to the decision making can be used to design effective bias mitigation strategies. One such example was shown in Sec.~\\ref{sec:interpret_fairness}. Real-world datasets often have features which cause indirect discrimination, due to which fairness cannot be achieved by simply eliminating the sensitive feature from the decision process. \nUsing the attributions derived from our attention-based interpretability framework, we propose a post-processing mitigation strategy. Our strategy is to intervene on attention weights as discussed in Sec.~\\ref{mitig_sec}. We first identify the features responsible for the unfairness of the outcomes, i.e., all the features whose exclusion decreases the bias compared to the original model's outcomes, and then gradually decrease their attention weights to zero. We show that this procedure provides competitive trade-offs compared to several recent fair machine learning approaches. 
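Concretely, the intervention can be sketched as follows; this is a minimal sketch under our assumptions, with `predict_with_mask` again standing in for a trained model that accepts per-feature attention scaling.

```python
import numpy as np

def mitigation_curve(predict_with_mask, X, unfair_feats, n_features, steps=11):
    """Gradually scale the attention weights of the features attributed to
    unfairness from 1 down to 0; each step yields one set of predictions,
    i.e., one point on the fairness-accuracy trade-off curve."""
    points = []
    for alpha in np.linspace(1.0, 0.0, steps):
        mask = np.ones(n_features)
        mask[list(unfair_feats)] = alpha      # attenuate problematic features
        points.append((alpha, predict_with_mask(X, mask)))
    return points
```

Because the model is trained once and only the masks change, sweeping `alpha` is cheap and requires no re-training.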
\n\n\\subsubsection{Experiments with Tabular Datasets}\n\\paragraph{Datasets, Baselines, and Evaluation Procedure:}\nMost of the fair decision-making algorithms are benchmarked on tabular datasets. To this end, we evaluate our approach on two real-world datasets, \\textit{UCI Adult} and \\textit{Heritage Health}. In this section, we use statistical parity as our notion of fairness. However, due to the flexibility of our interpretability framework, it can easily be extended to other fairness measures (discussed in Appendix Section A). We compare our method against several recent baselines that are specifically optimized to achieve statistical parity.\nSpecifically, we consider methods that use information-theoretic objectives to learn representations of data so that information about sensitive attributes is eliminated.\n\\textit{\\textbf{CVIB}}~\\cite{NEURIPS2018_415185ea} realize this objective through a conditional variational autoencoder, whereas \\textit{\\textbf{MIFR}}~\\cite{pmlr-v89-song19a} used a combination of information bottleneck term and adversarial learning to optimize the fairness objective. \n\\textit{\\textbf{FCRL}}~\\cite{gupta2021controllable} showed that optimizing information theoretic objectives can be used to achieve good trade-offs between fairness and accuracy by using specialized contrastive information estimators~\\cite{gupta2021controllable}. \nIn addition to information-theoretic approaches, we also considered baselines that use adversarial learning such as \\textit{\\textbf{MaxEnt-ARL}}~\\cite{roy2019mitigating}, \\textit{\\textbf{LAFTR}}~\\cite{pmlr-v80-madras18a}, and \\textit{\\textbf{Adversarial Forgetting}}~\\cite{advforg}. \n\nFor all these fair representation learning baselines, we used the approach outlined in~\\cite{gupta2021controllable} for training a downstream classifier and evaluating the accuracy\/fairness trade-offs. The downstream classifier was a 1-hidden-layer MLP with 50 neurons along with ReLU activation function. 
Our experiments were performed on an Nvidia GeForce RTX 2080. Each method was trained with five different seeds, and we report the average accuracy and fairness, measured as the statistical parity difference (SPD). \\textit{\\textbf{CVIB}}, \\textit{\\textbf{MaxEnt-ARL}}, \\textit{\\textbf{Adversarial Forgetting}}, and \\textit{\\textbf{FCRL}} are designed for the statistical parity notion of fairness and are not applicable to other measures like Equalized Odds and Equality of Opportunity. \\textit{\\textbf{LAFTR}} can only deal with binary sensitive attributes and is thus not applicable to the Heritage Health dataset. For our approach, we vary the attention weights and report the resulting fairness-accuracy trade-offs. \n\nNote that, in contrast to our approach, the baselines described above are not interpretable, as they are incapable of directly attributing features to fairness outcomes.\n\n\\paragraph{Results:}\nFig.~\\ref{fig:mitig_results} compares the fairness-accuracy trade-offs of different bias mitigation approaches. We desire outcomes that are fairer, i.e., lower values of \\texttt{SPD}, and more accurate, i.e., towards the right. \nOur mitigation framework based on the manipulation of the attention weights is competitive with state-of-the-art mitigation strategies. However, most of these approaches are specifically designed and optimized to achieve parity and do not provide any interpretability. \nOur model not only achieves comparable and competitive results, but also provides explanations, so that users know exactly which feature was manipulated, and by how much, to obtain the corresponding outcome. Another advantage of our model is that it needs only one round of training. The adjustments to attention weights are made post-training, and thus it is possible to achieve different trade-offs. 
Moreover, our approach does not need to know the sensitive attributes during training; thus, it can also work with sensitive attributes that are not known beforehand or during training. Lastly, here we merely focused on mitigating bias (as our goal was to show that the interpretability framework can identify problematic features whose removal results in bias mitigation) and did not focus on improving accuracy and achieving the best trade-off curve. We manipulated the attention weights of all the features that contributed to unfairness, irrespective of whether they helped maintain high accuracy. However, the trade-offs can be improved by carefully weighing each feature's contribution to both accuracy and fairness (e.g., using the results from Fig.~\\ref{fig:interpret_results}), for instance by removing problematic features that contribute to unfairness only if their contribution to accuracy is below a certain threshold value; we leave this as a future direction. 
Picking the right threshold value is challenging and needs further investigation.\n\n\n\n\n\n\n\\vspace{-0.3cm}\n\\subsubsection{Experiments with Non-Tabular Data}\n\\vspace{-0.1cm}\n\\begin{table}[t]\n\\centering\n\\begin{tabular}{ p{3.5cm} c c c}\n \\toprule\n\\textbf{Method}&\\textbf{Dentist TPRD (stdev)}&\\textbf{Nurse TPRD (stdev)}&\\textbf{Accuracy (stdev)}\\\\\n \\midrule\n \\parbox[t]{2mm}{\\multirow{1}{*}{\\shortstack[l]{Post-Processing (Ours)}}}\n&\\textbf{0.0202 (0.010)}&\\textbf{0.0251 (0.020)}&0.951 (0.013)\\\\[0.5pt]\n\n \\parbox[t]{2mm}{\\multirow{1}{*}{\\shortstack[l]{Pre-Processing}}}\n&0.0380 (0.016)&0.0616 (0.025)&0.946 (0.011)\\\\[0.5pt]\n\\midrule\n \\parbox[t]{2mm}{\\multirow{1}{*}{\\shortstack[l]{Not Debiased Model}}}\n&0.0474 (0.025)&0.1905 (0.059)&\\textbf{0.958 (0.011)}\\\\[0.5pt]\n\n\n \\bottomrule\n\\end{tabular}\n \\caption{Difference of the True Positive Rates (TPRD) amongst different genders for the dentist and nurse occupations on the biosbias dataset. Our introduced post-processing method is shown to be the most effective in reducing the disparity for both occupations compared to the previously used pre-processing technique.}\n\\label{bios_results}\n\\end{table}\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\textwidth,trim=0cm 8cm 0cm 3cm,clip=true]{.\/images\/NLP_qual\/ex1.pdf}\n \\includegraphics[width=\\textwidth,trim=0.24cm 7cm 1.4cm 3cm,clip=true]{.\/images\/NLP_qual\/ex2.pdf}\n \\caption{Qualitative results from the non-tabular data experiment on the job classification task based on the bio texts. Green regions represent top three words that the model used for its prediction based on the attention weights. While the Not Debiased Model mostly focused on gendered words, our method focused on more useful words, such as R.N. 
(Registered Nurse), to predict the nurse label for the corresponding bio.}\n    \\label{NLP_qualitative}\n\\end{figure}\n\nIn addition to providing interpretability, our approach is flexible and useful for controlling fairness in modalities other than tabular datasets. To put this to the test, we applied our model to mitigate bias in text-based data. We consider the \\textit{biosbias} dataset~\\cite{de2019bias} and use our mitigation technique to reduce observed biases in the classification task performed on this dataset. We compare our approach with the debiasing technique proposed in the original paper~\\cite{de2019bias}, which works by masking the gender-related words and then training the model on this masked data. As discussed earlier, such a method is computationally inefficient: it requires re-training the model, or creating a new masked dataset, each time the model must be debiased against a different attribute, such as gender vs. race. \nFor the baseline pre-processing method, we masked the gender-related words, such as names and gender words, as provided in the biosbias dataset, and trained the model on the filtered dataset. For our post-processing method, on the other hand, we trained the model on the raw bios and only manipulated the attention weights of the gender words at test time. Note that we eliminated the same features for both methods; only the timing of the elimination differed (before training for the former, and after training, by manipulating the attention weights, for the latter).\n\n\nIn order to measure the bias, we used the same measure as in~\\cite{de2019bias}, which is based on the equality of opportunity notion of fairness~\\cite{NIPS2016_9d268236}, and report the True Positive Rate Difference (TPRD) of each occupation amongst different genders. 
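This metric can be sketched as follows; the function and variable names are ours, and a binary gender encoding is assumed for illustration.

```python
import numpy as np

def tprd(y_true, y_pred, gender, occupation):
    """True positive rate difference for one occupation between two gender
    groups, following the equality-of-opportunity notion of fairness."""
    tpr = []
    for g in (0, 1):
        # group members whose true label is `occupation`
        pos = (gender == g) & (y_true == occupation)
        tpr.append((y_pred[pos] == occupation).mean())
    return abs(tpr[1] - tpr[0])
```

A TPRD of zero means both gender groups are recognized as holding the occupation at equal rates.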
As shown in Table~\\ref{bios_results}, our post-processing mitigation technique provides the lowest TPRD while being more accurate, followed by the technique that masks out the gendered words before training. Although both methods reduce the bias compared to a model trained on raw bios without any masking of or invariance to gendered words, the post-processing method is more effective. \nFig.~\\ref{NLP_qualitative} also shows some qualitative results and highlights the differences between the models in terms of their most attentive features for the prediction task. As shown in the results, our post-processing technique is able to use more meaningful words, such as R.N., which stands for registered nurse, to predict the outcome label nurse, while the not-debiased model focuses on gendered words.\n\n\\section{Discussion}\nIn this work, we analyzed how attention weights contribute to the fairness and accuracy of a predictive model. To do so, we proposed an attribution method that leverages the attention mechanism and showed the effectiveness of this approach on both tabular and text data. Using this interpretable attribution framework, we then introduced a post-processing bias mitigation strategy based on attention weight manipulation. We validated the proposed framework through experiments with different baselines, fairness metrics, and data modalities.\n\nAlthough our work can have a positive impact by allowing practitioners to reason about the fairness and accuracy of models and reduce their bias, it can also have negative societal consequences if used unethically. For instance, it has been shown that interpretability frameworks can be used as a means for fairwashing, in which malicious users generate fake explanations to justify their unfair decisions \\cite{pmlr-v119-anders20a}. 
In addition, it has previously been shown that interpretability frameworks are vulnerable to adversarial attacks \\cite{slack2020fooling}. We acknowledge that our framework may also be targeted by malicious users, who could manipulate attention weights to generate fake explanations or unfair outcomes. An important future direction is to analyze and improve the robustness of our framework along with others.
\\section{Related Work}\n\\para{Fairness.} The research in fairness concerns itself with various topics, such as defining fairness metrics, proposing solutions for bias mitigation, and analyzing existing harms in various systems~\\cite{10.1145\/3457607}. In this work, we utilized different metrics that were introduced previously, such as statistical parity~\\cite{Dwork:2012:FTA:2090236.2090255}, equality of opportunity and equalized odds~\\cite{NIPS2016_9d268236}, to measure the amount of bias. We also used different bias mitigation strategies to compare against our mitigation strategy, such as FCRL~\\cite{gupta2021controllable}, CVIB~\\cite{NEURIPS2018_415185ea}, MIFR~\\cite{pmlr-v89-song19a}, adversarial forgetting~\\cite{advforg}, MaxEnt-ARL~\\cite{roy2019mitigating}, and LAFTR~\\cite{pmlr-v80-madras18a}.
We also utilized concepts and datasets from work analyzing existing biases in NLP systems, such as~\\cite{de2019bias}, which studied the biases of NLP systems on the occupation classification task on the bios dataset.\n\n\\para{Interpretability.} In this work, we introduced an attribution framework based on the attention weights that can analyze the fairness and accuracy of models at the same time and reason about the importance of each feature for fairness and accuracy. There is a body of work in the NLP literature that has tried to analyze the effect of the attention weights on the interpretability of the model~\\cite{wiegreffe-pinter-2019-attention,jain-wallace-2019-attention,serrano-smith-2019-attention}. Other work has also utilized attention weights to define an attribution score to be able to reason about how transformer models such as BERT work~\\cite{hao2021self}. Notice that although \\citet{jain-wallace-2019-attention} claim that attention might not be explanation, a body of work has argued otherwise, including~\\cite{wiegreffe-pinter-2019-attention}, in which the authors directly target the work of \\citet{jain-wallace-2019-attention} and analyze in detail the problems associated with that study. In our work, we also find that attention can be useful and can extract meaningful information, which can be beneficial in many aspects. In addition,~\\citet{NEURIPS2020_92650b2e} analyze the effect of the attention weights in transformer models for bias analysis in language models. However, their approach is different and has a more causal take on investigating the bias. Their study is specific to language models and does not necessarily apply to broader tasks and existing fairness definitions.
Aside from interpretability and fairness, we utilized concepts from the NLP literature for designing our attention-based model so that it is applicable to tabular data~\\cite{NIPS2017_3f5ee243,zhou-etal-2016-attention}.\n \n\\section{Experimental Setup}\nWe perform a suite of experiments on synthetic and real-world datasets to evaluate our attention-based interpretable fairness framework. The experiments on synthetic data are intended to elucidate interpretability in controlled settings, where we can manipulate the relations between input and output features. The experiments on real-world data aim to validate the effectiveness of the proposed approach on both tabular and non-tabular (textual) data. Below we describe the experiments, datasets, and respective baselines. \n\n\\subsection{Types of Experiments}\nWe enumerate the experiments and their goals as follows: \\\\\n\\noindent{\\bf Experiment 1: Attributing Fairness with Attention} The purpose of this experiment is to demonstrate that our attribution framework captures correct attributions of features to fairness outcomes. We present our results for tabular data in Sec.~\\ref{sec:interpret_fairness}.\\\\\n\\noindent {\\bf Experiment 2: Bias Mitigation via Attention Weight Manipulation} In this experiment, we seek to validate the proposed post-processing bias mitigation framework and compare it with various recent mitigation approaches. The results for real-world tabular data are presented in Sec.~\\ref{sec:bias_mitigation}.\\\\\n\\noindent{\\bf Experiment 3: Validation on Textual Data} The goal of this experiment is to demonstrate the flexibility of the proposed attention-based method by conducting experiments on non-tabular, textual data.
The results are presented in Sec.~\\ref{sec:textual_data}.\n\n\n\\subsection{Datasets}\n\n\\subsubsection{Synthetic Data} \\label{syn_data_sec}\nTo validate the attribution framework, we created two synthetic datasets in which we control how features interact with each other and contribute to the accuracy and fairness of the outcome variable.\nThese datasets capture some of the common scenarios, namely the data imbalance (skewness) and indirect discrimination issues, arising in fair decision or classification problems.\n\n\\paragraph{Scenario 1:} First, we create a simple scenario to demonstrate that our framework identifies correct feature attributions for fairness and accuracy. We create a feature that is correlated with the outcome (responsible for accuracy), a discrete feature that causes the prediction outcomes to be biased (responsible for fairness), and a continuous feature that is independent of the label or the task (irrelevant for the task). For intuition, suppose the attention-based attribution framework works correctly. In this case, we expect to see a reduction in accuracy upon removing (i.e., making the attention weight zero) the feature responsible for the accuracy, reduction in bias upon removing the feature responsible for bias, and very little or no change upon removing the irrelevant feature. With this objective, we generated a synthetic dataset with three features, i.e., $x = [f_1, f_2, f_3]$ as follows\\footnote{We use $x\\sim \\text{Ber}(p)$ to denote that $x$ is a Bernoulli random variable with $P(x=1)=p$.}:\n\\begin{align*}\n f_1 \\sim \\text{Ber}(0.9)\n \\quad f_2\\sim \\text{Ber}(0.5) \n \\quad f_3\\sim &\\mathcal N (0,1) \\\\\n \\quad y\\sim \\begin{cases}\n \\text{Ber}(0.9) & \\text{if } f_2=1\\\\\n \\text{Ber}(0.1) & \\text{if } f_2=0\n \\end{cases}\n\\end{align*}\nClearly, $f_2$ has the most predictive information for the task and is responsible for accuracy.
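For concreteness, the Scenario 1 generative process above can be sketched with the standard library alone (a minimal sketch; the function and variable names are ours, not from our released code):

```python
import random

def bernoulli(p, rng):
    # Draw x ~ Ber(p), i.e., P(x = 1) = p.
    return 1 if rng.random() < p else 0

def generate_scenario1(n, seed=0):
    """Scenario 1: f1 is a skewed sensitive attribute, f2 drives the
    label, and f3 is irrelevant continuous noise."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        f1 = bernoulli(0.9, rng)          # imbalanced sensitive attribute
        f2 = bernoulli(0.5, rng)          # predictive feature
        f3 = rng.gauss(0.0, 1.0)          # irrelevant continuous feature
        y = bernoulli(0.9 if f2 == 1 else 0.1, rng)  # label driven by f2
        rows.append(((f1, f2, f3), y))
    return rows
```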
Here, we consider $f_1$ as the sensitive attribute. $f_1$ is an imbalanced feature that can bias the outcome; it is generated so that there is no intentional correlation between $f_1$ and the outcome $y$ or $f_2$. $f_3$ is sampled from a normal distribution independent of the outcome $y$ and the other features, making it irrelevant for the task.\nThus, an ideal classifier would be fair if it captures the correct outcome without being affected by the imbalance in $f_1$. However, due to limited data and the skew in $f_1$, there will be some undesired bias --- a few errors when $f_1=0$ can lead to a large statistical parity difference. \n\n\n\\paragraph{Scenario 2:} Using features that are not identified as sensitive attributes can result in unfair decisions due to their implicit relations or correlations with the sensitive attributes. This phenomenon is called indirect discrimination~\\cite{zliobaite2015survey,6175897,ijcai2017-549}. We designed this synthetic dataset to demonstrate and characterize the behavior of our framework under indirect discrimination. Similar to the previous scenario, we consider three features. Here, $f_1$ is considered the sensitive attribute, and $f_2$ is correlated with $f_1$ and the outcome, $y$. The generative process is as follows:\n\\begin{align*}\n f_1\\sim \\begin{cases}\n \\text{Ber}(0.9) & \\text{if } f_2=1\\\\\n \\text{Ber}(0.1) & \\text{if } f_2=0\n \\end{cases} \n \\quad f_2\\sim\\text{Ber}(0.5) &\n \\quad f_3\\sim \\mathcal N (0,1) \\\\ \n \\quad y\\sim \\begin{cases}\n \\text{Ber}(0.7) & \\text{if } f_2=1\\\\ \n \\text{Ber}(0.3) & \\text{if } f_2=0\n \\end{cases} \n\\end{align*}\n\nIn this case, $f_1$ and $y$ are correlated with $f_2$. The model should mostly rely on $f_2$ for its decisions. \nHowever, due to the correlation between $f_1$ and $f_2$, we expect $f_2$ to affect both the accuracy and fairness of the model. Thus, in this case, indirect discrimination is possible.
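Both scenarios are evaluated below in terms of the statistical parity difference (SPD). A minimal sketch of that metric for binary predictions and a binary sensitive attribute (implementation ours):

```python
def statistical_parity_difference(y_pred, sensitive):
    """|P(y_hat = 1 | s = 0) - P(y_hat = 1 | s = 1)| for binary
    predictions y_pred and a binary sensitive attribute s."""
    g0 = [p for p, s in zip(y_pred, sensitive) if s == 0]
    g1 = [p for p, s in zip(y_pred, sensitive) if s == 1]
    return abs(sum(g0) / len(g0) - sum(g1) / len(g1))
```

A perfectly parity-fair classifier has an SPD of zero; the mitigation experiments below report exactly this gap.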
Using such a synthetic dataset, we demonstrate a) indirect discrimination and b) the need to have an attribution framework to reason about unfairness rather than blindly focusing on the sensitive attributes for bias mitigation.\n\n\n\n\\subsubsection{Real-world Datasets}\nWe demonstrate our approach on the following real-world datasets:\\\\\n\\noindent{\\bf Tabular Datasets:} We conduct our experiments on two real-world tabular datasets often used to benchmark fair classification techniques --- the \\textit{UCI Adult}~\\cite{Dua:2019} and \\textit{Heritage Health}\\footnote{https:\/\/www.kaggle.com\/c\/hhp} datasets. The \\textit{UCI Adult} dataset contains census information about individuals, with the prediction task being whether the income of the individual is higher than \\$50k or not. The sensitive attribute, in this case, is gender (male\/female). The \\textit{Heritage Health} dataset contains patient information, and the task is to predict the Charlson Index (a comorbidity index, which is a patient survival indicator). Each patient is grouped into one of 9 possible age groups, and we consider this as the sensitive attribute. We used the same pre-processing and train-test splits as in~\\citet{gupta2021controllable}.\\\\\n\\noindent{\\bf Non-Tabular or Text Dataset:}\nTo demonstrate the flexibility of our approach, we also experiment with a non-tabular, text dataset. We used the \\textit{biosbias} dataset~\\cite{de2019bias}. The dataset contains short bios of individuals. The task is to predict the occupation of the individual from their bio.\nWe utilized the bios from the year 2018 from the \\texttt{2018\\_34} archive and considered two occupations for our experiments, namely, nurse and dentist. The dataset was split into 70-15-15 train, validation, and test splits.
\\citet{de2019bias} has demonstrated the existence of gender bias in this prediction task and shown that certain gender words are associated with certain job types (e.g., \\textit{she} to nurse and \\textit{he} to dentist). \n\n\\subsection{Bias Mitigation Baselines} \n\\label{sec:baselines}\nWe compared our bias mitigation approach to a number of recent state-of-the-art methods. For our experiments with tabular data, we focus on methods that are specifically optimized to achieve statistical parity. Results for other fairness notions can be found in the appendix. For our baselines, we consider methods that learn representations of data so that information about sensitive attributes is eliminated.\n\\textit{\\textbf{CVIB}}~\\cite{NEURIPS2018_415185ea} realizes this objective through a conditional variational autoencoder, whereas \\textit{\\textbf{MIFR}}~\\cite{pmlr-v89-song19a} uses a combination of an information bottleneck term and adversarial learning to optimize the fairness objective. \n\\textit{\\textbf{FCRL}}~\\cite{gupta2021controllable} optimizes information-theoretic objectives that can be used to achieve good trade-offs between fairness and accuracy by using specialized contrastive information estimators. \nIn addition to information-theoretic approaches, we also considered baselines that use adversarial learning, such as \\textit{\\textbf{MaxEnt-ARL}}~\\cite{roy2019mitigating}, \\textit{\\textbf{LAFTR}}~\\cite{pmlr-v80-madras18a}, and \\textit{\\textbf{Adversarial Forgetting}}~\\cite{advforg}. Note that in contrast to our approach, the baselines described above are not interpretable, as they are incapable of directly attributing features to fairness outcomes. For the textual data, we compare our approach with the debiasing technique proposed in~\\citet{de2019bias}, which works by masking the gender-related words and then training the model on this masked data.
\n\n\n\n\n\\section{Results}\n\\subsection{Attributing Fairness with Attention} \n\\label{sec:interpret_fairness}\n\nFirst, we test our method's ability to capture correct attributions in controlled experiments with synthetic data (described in Sec.~\\ref{syn_data_sec}). We also conduct a similar experiment with the \\textit{UCI Adult} and \\textit{Heritage Health} datasets, which can be found in the appendix. Fig.~\\ref{fig:interpret_results} summarizes our results by visualizing the attributions, which we now discuss. \n\nIn \\textit{Scenario 1}, as expected, $f_2$ is correctly attributed as being responsible for the accuracy, and removing it hurts the accuracy drastically. Similarly, $f_1$ is correctly shown to be responsible for unfairness, and removing it creates a fairer outcome. Ideally, the model should not be using any information about $f_1$, as it is independent of the task, but it does. Therefore, by removing $f_1$, we can ensure that this information is not used and hence that the outcomes are fair. Lastly, as expected, $f_3$ is the irrelevant feature, and its effects on accuracy and fairness are negligible. Another interesting observation is that $f_2$ helps the model achieve fairness, since its exclusion means that the model must rely on $f_1$ for decision making, resulting in more bias; thus, removing $f_2$ harms both accuracy and fairness, as expected.\n\nIn \\textit{Scenario 2}, our framework captures the effect of indirect discrimination. We can see that removing $f_2$ reduces bias as well as accuracy drastically. This is because $f_2$ is the predictive feature, but due to its correlation with $f_1$, it can also indirectly affect the model's fairness. More interestingly, although $f_1$ is the sensitive feature, removing it does not play a drastic role in the fairness or accuracy of the model. This is an important finding, as it shows why removing $f_1$ on its own cannot give us a fairer model, due to the existence of correlations with other features and indirect discrimination.
Overall, our results are intuitive and thus validate our assumption that the attention-based framework can provide reliable feature attributions for the fairness and accuracy of the model. \n\n\n\n\n\\subsection{Attention as a Mitigation Technique}\n\\label{sec:bias_mitigation}\n\\begin{figure}[h]\n\\centering\n\\begin{subfigure}[b]{0.35\\textwidth}\n\\includegraphics[width=\\textwidth,trim=2.5cm 0cm 4.5cm 1.0cm,clip=true]{.\/images\/inter_imgs\/attr_vis_sen1.pdf}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.35\\textwidth}\n\\includegraphics[width=\\textwidth,trim=2.5cm 0cm 4.5cm 1.0cm,clip=true]{.\/images\/inter_imgs\/attr_vis_sen2.pdf}\n\\end{subfigure}\n\\caption{Results from the synthetic datasets. Following the $\\hat{y}_o$ and $\\hat{y}_z$ notations, $\\hat{y}_o$ represents the original model outcome with all the attention weights intact, while $\\hat{y}_z^k$ represents the outcome of the model in which the attention weights corresponding to the $k^{th}$ feature are zeroed out (e.g., $\\hat{y}_z^1$ represents the case in which the attention weights of feature $f_1$ are zeroed out). The results show how the accuracy and fairness of the model (in terms of the statistical parity difference) change with the exclusion of each feature.}\n\\label{fig:interpret_results}\n\\end{figure}\nAs we have highlighted earlier, understanding how the information within features interacts and contributes to the decision making can be used to design effective bias mitigation strategies. One such example was shown in Sec.~\\ref{sec:interpret_fairness}. Often, real-world datasets have features that cause indirect discrimination, due to which fairness cannot be achieved by simply eliminating the sensitive feature from the decision process. \nUsing the attributions derived from our attention-based attribution framework, we propose a post-processing mitigation strategy. Our strategy is to intervene on attention weights as discussed in Sec.~\\ref{mitig_sec}.
We first identify the features responsible for the unfairness of the outcomes, i.e., all the features whose exclusion decreases the bias relative to the original model's outcomes, and then gradually decrease their attention weights to zero, as outlined in Algorithm~\\ref{mitig_alg}. Concretely, we start from the learned attention weights and scale them by a fraction that decreases gradually from one to zero, until the weights are completely zeroed out. \n\n\nFor all the baselines described in Sec.~\\ref{sec:baselines}, we used the approach outlined in~\\citet{gupta2021controllable} for training a downstream classifier and evaluating the accuracy\/fairness trade-offs. The downstream classifier was a 1-hidden-layer MLP with 50 neurons along with the ReLU activation function. Our experiments were performed on an Nvidia GeForce RTX 2080. Each method was trained with five different seeds, and we report the average accuracy and fairness, measured as the statistical parity difference (SPD). Results for other fairness notions can be found in the appendix. \\textit{\\textbf{CVIB}}, \\textit{\\textbf{MaxEnt-ARL}}, \\textit{\\textbf{Adversarial Forgetting}} and \\textit{\\textbf{FCRL}} are designed for the statistical parity notion of fairness and are not applicable to other measures like Equalized Odds and Equality of Opportunity. \\textit{\\textbf{LAFTR}} can only deal with binary sensitive attributes and is thus not applicable to the Heritage Health dataset. Notice that our approach does not have these limitations. For our approach, we vary the attention weights and report the resulting fairness-accuracy trade-offs.
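The gradual intervention described above can be sketched as follows (a simplified sketch with names of our choosing; in the actual model the scaling is applied to the attention weights inside the network, and whether the remaining weights are renormalized is a design choice):

```python
def intervene(attn_weights, problematic, fraction):
    """Scale the attention weights of the problematic features by
    `fraction` in [0, 1]; fraction = 1 keeps the learned weights,
    fraction = 0 zeroes them out entirely."""
    return [w * fraction if k in problematic else w
            for k, w in enumerate(attn_weights)]

# Sweep the fraction from 1 to 0 to trace a fairness-accuracy trade-off curve.
trade_off = [intervene([0.5, 0.3, 0.2], {1}, f)
             for f in (1.0, 0.75, 0.5, 0.25, 0.0)]
```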
\n\n\n\n\n\n\n\n\n\n\\begin{figure*}[t]\n\\centering\n \\begin{subfigure}[b]{0.43\\textwidth}\n \\includegraphics[width=\\textwidth,trim=0cm 0cm 1.8cm 1.0cm,clip=true]{.\/images\/mitig_imgs\/adult_results_err.pdf}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.43\\textwidth}\n \\includegraphics[width=\\textwidth,trim=0cm 0cm 1.8cm 1cm,clip=true]{.\/images\/mitig_imgs\/health_results_err.pdf}\n \\end{subfigure}\n \\caption{Accuracy vs parity curves for the UCI Adult and Heritage Health datasets.}\n \\label{fig:mitig_results}\n\\end{figure*}\n\nFig.~\\ref{fig:mitig_results} compares the fairness-accuracy trade-offs of different bias mitigation approaches. We desire outcomes that are fairer, i.e., lower values of \\texttt{SPD}, and more accurate, i.e., towards the right. The results show that using attention attributions can indeed be beneficial for reducing bias. Moreover, our mitigation framework based on the manipulation of the attention weights is competitive with state-of-the-art mitigation strategies, most of which are specifically designed and optimized to achieve parity and do not provide any interpretability. \nOur model not only achieves comparable and competitive results, but it is also able to provide explanations, so that users know exactly which features were manipulated, and by how much, to obtain the corresponding outcome. Another advantage of our model is that it needs only one round of training. The adjustments to attention weights are made post-training; thus, it is possible to achieve different trade-offs with a single trained model. Moreover, our approach does not need to know the sensitive attributes during training; thus, it can work with sensitive attributes not known beforehand or during training.
Lastly, here we merely focused on mitigating bias (as our goal was to show that the attribution framework can identify problematic features and that their removal results in bias mitigation) and did not focus on improving accuracy and achieving the best trade-off curve, which can be considered a current limitation of our work. We manipulated the attention weights of all the features that contributed to unfairness, irrespective of whether they helped maintain high accuracy. However, the trade-off results can be improved by carefully considering the trade-off each feature contributes to with regard to both accuracy and fairness (e.g., using results from Fig.~\\ref{fig:interpret_results}), which can be investigated as a future direction (e.g., removing problematic features that contribute to unfairness only if their contribution to accuracy is below a certain threshold value). The advantage of our work is that this trade-off curve can be controlled by choosing how many features to manipulate and by how much, which is not the case for most existing work.\n\n\n\n\n\n\n\n\\subsection{Experiments with Non-Tabular Data}\n\\label{sec:textual_data}\n\n\nIn addition to providing interpretability, our approach is flexible and useful for controlling fairness in modalities other than tabular datasets. To put this to the test, we applied our model to mitigate bias in text-based data. We consider the \\textit{biosbias} dataset~\\cite{de2019bias}, and use our mitigation technique to reduce observed biases in the classification task performed on this dataset. We compare our approach with the debiasing technique proposed in the original paper~\\cite{de2019bias}, which works by masking the gender-related words and then training the model on this masked data. As discussed earlier, such a method is computationally inefficient.
It requires re-training the model or creating a new masked dataset, each time it is required to debias the model against different attributes, such as gender vs. race. \nFor the baseline pre-processing method, we masked the gender-related words, such as names and gender words, as provided in the \\textit{biosbias} dataset and trained the model on the filtered dataset. On the other hand, we trained the model on the raw bios for our post-processing method and only manipulated attention weights of the gender words during the testing process as also provided in the \\textit{biosbias} dataset. \n\\begin{table*}[!t]\n\\centering\n\\begin{tabular}{ p{3.5cm} c c c}\n \\toprule\n\\textbf{Method}&\\textbf{Dentist TPRD (stdev)}&\\textbf{Nurse TPRD (stdev)}&\\textbf{Accuracy (stdev)}\\\\\n \\midrule\n \\parbox[t]{2mm}{\\multirow{1}{*}{\\shortstack[l]{Post-Processing (Ours)}}}\n&\\textbf{0.0202 (0.010)}&\\textbf{0.0251 (0.020)}&0.951 (0.013)\\\\[0.5pt]\n\n \\parbox[t]{2mm}{\\multirow{1}{*}{\\shortstack[l]{Pre-Processing}}}\n&0.0380 (0.016)&0.0616 (0.025)&0.946 (0.011)\\\\[0.5pt]\n\\midrule\n \\parbox[t]{2mm}{\\multirow{1}{*}{\\shortstack[l]{Not Debiased Model}}}\n&0.0474 (0.025)&0.1905 (0.059)&\\textbf{0.958 (0.011)}\\\\[0.5pt]\n\n\n \\bottomrule\n\\end{tabular}\n \\caption{Difference of the True Positive Rates (TPRD) amongst different genders for the dentist and nurse occupations on the biosbias dataset. Our introduced post-processing method is the most effective in reducing the disparity for both occupations compared to the pre-processing technique.}\n\\label{bios_results}\n\\end{table*}\n\\begin{figure*}[h]\n \\centering\n \\includegraphics[width=0.95\\textwidth,trim=0cm 8cm 0cm 3.49cm,clip=true]{.\/images\/NLP_qual\/ex1.pdf}\n \\includegraphics[width=0.95\\textwidth,trim=0.24cm 7.8cm 1.4cm 3.49cm,clip=true]{.\/images\/NLP_qual\/ex2.pdf}\n \\caption{Qualitative results from the non-tabular data experiment on the job classification task based on bio texts. 
Green regions are the top three words used by the model for its prediction based on the attention weights. While the Not Debiased Model mostly focuses on gendered words, our method focuses on profession-based words, such as R.N. (Registered Nurse), to correctly predict ``nurse.''}\n \\label{NLP_qualitative}\n\\end{figure*}\nIn order to measure the bias, we used the same measure as in~\\cite{de2019bias}, which is based on the equality of opportunity notion of fairness~\\cite{NIPS2016_9d268236}, and reported the True Positive Rate Difference (TPRD) for each occupation amongst different genders. As shown in Table~\\ref{bios_results}, our post-processing mitigation technique provides the lowest TPRD while being more accurate, followed by the technique that masks the gendered words before training. Although both methods reduce the bias compared to a model trained on raw bios without applying any mask or invariance to gendered words, our post-processing method is more effective. Fig.~\\ref{NLP_qualitative} also highlights qualitative differences between models in terms of their most attentive features for the prediction task. As shown in the results, our post-processing technique uses more meaningful words than both baselines, such as R.N. (registered nurse), to predict the outcome label nurse, while the non-debiased model focuses on gendered words.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Introduction}\n\n\nMachine learning algorithms that optimize for performance (e.g., accuracy) often result in unfair outcomes~\\cite{10.1145\/3457607}. These algorithms capture biases present in the training datasets, causing discrimination toward different groups. As machine learning continues to be adopted into fields where discriminatory treatments can lead to legal penalties, fairness and interpretability have become a necessity and a legal incentive in addition to an ethical responsibility~\\cite{barocas2016big,hacker2020explainable}.
Existing methods for fair machine learning include applying complex transformations to the data so that the resulting representations are fair~\\cite{gupta2021controllable,NEURIPS2018_415185ea,roy2019mitigating,advforg, pmlr-v89-song19a}, adding regularizers to incorporate fairness~\\cite{pmlr-v54-zafar17a,kamishima2012fairness,mehrabi2020statistical}, or modifying the outcomes of unfair machine learning algorithms to ensure fairness~\\cite{NIPS2016_9d268236}, among others. Here we present an alternative approach\\footnote{Code can be found at: \\url{https:\/\/github.com\/Ninarehm\/Attribution}}, which works by identifying the significance of different features in causing unfairness and reducing their effect on the outcomes using an attention-based mechanism.\n\n\nWith the advancement of transformer models and the attention mechanism~\\cite{NIPS2017_3f5ee243}, recent research in Natural Language Processing (NLP) has tried to analyze the effects and the interpretability of the attention weights on the decision making process~\\cite{wiegreffe-pinter-2019-attention,jain-wallace-2019-attention,serrano-smith-2019-attention,hao2021self}. Taking inspiration from these works, we propose to use an attention-based mechanism to study the fairness of a model. The attention mechanism provides an intuitive way to capture the effect of each attribute on the outcomes. Thus, by introducing the attention mechanism, we can analyze the effect of specific input features on the model's fairness. We form visualizations that explain model outcomes and help us decide which attributes contribute to accuracy vs.\\ fairness. We also confirm the effect of indirect discrimination observed in previous work~\\cite{zliobaite2015survey,6175897,ijcai2017-549}: even in the absence of the sensitive attribute, a model can still be unfair due to the existence of proxy attributes.
Furthermore, we show that in certain scenarios those proxy attributes contribute more to the model's unfairness than the sensitive attribute itself.\n\nBased on the above observations, we propose a post-processing bias mitigation technique that diminishes the weights of the features most responsible for causing unfairness. We perform studies on datasets with different modalities and show the flexibility of our framework on both tabular and large-scale text data, which is an advantage over existing interpretable non-neural and non-attention-based models. Furthermore, our approach provides a competitive and interpretable baseline compared to several recent fair learning techniques.\n\nTo summarize, the contributions of this work are as follows: (1) We propose a new framework for attention-based classification in tabular data, which is interpretable in the sense that it allows us to quantify the effect of each attribute on the outcomes; (2) We then use these attributions to study the effect of different input features on the fairness and accuracy of the models; (3) Using this attribution framework, we propose a post-processing bias mitigation technique that can reduce unfairness and provide competitive accuracy vs. fairness trade-offs; (4) Lastly, we show the versatility of our framework by applying it to large-scale non-tabular data such as text.\n\n\n\n\n\n\n\n\n\n\n\\section{Appendix}\nIn this supplementary material, we include additional bias mitigation results using other fairness metrics, such as equality of opportunity and equalized odds, on both the Adult and Heritage Health datasets. We also include additional post-processing results along with additional qualitative results for both the tabular and non-tabular dataset experiments. More details can be found under each sub-section.\n\\subsection{Results on Tabular Data}\nHere, we show the results of our mitigation framework considering the equality of opportunity and equalized odds notions of fairness.
We included baselines that were applicable for these notions. Notice that not all the baselines used in our previous analysis for statistical parity were applicable for the equality of opportunity and equalized odds notions of fairness; thus, we only included the applicable ones. In addition, LAFTR is only applicable when the sensitive attribute is a binary variable, so it could not be included in the analysis for the Heritage Health data, where the sensitive attribute is non-binary. Results of these analyses are shown in Figures~\\ref{mitig_results_eqop} and \\ref{mitig_results_eqod}. We once again show competitive and comparable results to other baseline methods, while having the advantage of being interpretable and not requiring multiple trainings to satisfy different fairness notions or fairness on different sensitive attributes. Our framework is also flexible for different fairness measures and can be applied to binary or non-binary sensitive features.\n\n\nIn addition, we show how different features contribute differently under different fairness notions. Fig.~\\ref{features} demonstrates the top three features that contribute to unfairness the most, along with the percentages of the fairness improvement upon their removal for each of the fairness notions. As observed from the results, while equality of opportunity and equalized odds are similar in terms of their problematic features, statistical parity has different trends. This is also expected, as equality of opportunity and equalized odds are similar fairness notions in nature compared to statistical parity.\n\nWe also compared our mitigation strategy with the Hardt et al. post-processing approach~\\cite{NIPS2016_9d268236}. Using this post-processing implementation~\\footnote{\\url{https:\/\/fairlearn.org}}, we obtained the optimal solution that tries to satisfy different fairness notions subject to accuracy constraints.
For our results, we report the numbers obtained by zeroing out all the attention weights corresponding to the problematic features that were detected by our interpretability framework. However, notice that since our mitigation strategy can control different trade-offs, we can have different results depending on the scenario. Here, we reported the results from zeroing out the problematic attention weights, which mostly targets fairness. From the results demonstrated in Tables~\\ref{adult_post_proc} and~\\ref{health_post_proc}, we can see comparable numbers to those obtained from~\\cite{NIPS2016_9d268236}. This again shows that our interpretability framework captures the correct responsible features and that the mitigation strategy works as expected.\n\n\\subsection{Results on Non-tabular Data}\nWe also include some additional qualitative results from the experiments on non-tabular data in Fig.~\\ref{fig:NLP_qualitative_appendix}.\n\\begin{figure*}[h]\n \\centering\n \\includegraphics[width=\\textwidth,trim=0cm 6cm 0cm 1cm,clip=true]{.\/images\/NLP_qual\/ex3.pdf}\n \\caption{Additional qualitative results from the non-tabular data experiment on the job classification task based on the bio texts. Green regions represent the top three words that the model used for its prediction based on the attention weights.}\n \\label{fig:NLP_qualitative_appendix}\n\\end{figure*}\n\n\\subsection{Interpreting Fairness with Attention}\nFig.~\\ref{fig:interpret_results_2} shows results on a subset of the features from the \\textit{UCI Adult} and \\textit{Heritage Health} datasets (to keep the plots uncluttered and readable, we incorporated the most interesting features in the plot), and provides some intuition about how different features in these datasets contribute to the model's fairness and accuracy.
While features such as {\\em capital gain} and {\\em capital loss} in the \\textit{UCI Adult} dataset are responsible for improving accuracy and reducing bias, we can observe that features such as {\\em relationship} or {\\em marital status}, which can be indirectly correlated with the feature {\\em sex}, have a negative impact on fairness. For the \\textit{Heritage Health} dataset, including the features {\\em drugCount ave} and {\\em dsfs max} provides accuracy gains but at the expense of fairness, while including {\\em no Claims} and {\\em no Specialities} negatively impacts both accuracy and fairness. \n\n\n\\subsection{Information on Datasets and Features}\nMore details about each of the datasets, along with descriptions of each feature, can be found at\\footnote{https:\/\/archive.ics.uci.edu\/ml\/datasets\/adult} for the Adult dataset and at\\footnote{https:\/\/www.kaggle.com\/c\/hhp} for the Heritage Health dataset. In our qualitative results, we used the feature names as marked in these datasets. If the names or acronyms are unclear, kindly refer to the references above for a more detailed description of each feature. Although most of the features in the Adult dataset are self-descriptive, the Heritage Health dataset includes some abbreviations that we list in Table~\\ref{feature_abbriv} for ease of interpreting each feature's meaning.\n\n\\begin{table}[h]\n\\centering\n\\begin{tabular}{|c|c|}\n\\hline\n\\textbf{Abbreviation} & \\textbf{Meaning} \\\\ \\hline\n\\multirow{1}{*}{PlaceSvcs} & Place where the member was treated. \n \\\\ \\hline\n\\multirow{1}{*}{LOS} & Length of stay. \n \\\\ \\hline\n\\multirow{1}{*}{dsfs} & Days since first service that year. \n \\\\ \\hline\n\\end{tabular}\n\\caption{Some abbreviations used in the Heritage Health dataset's feature names.
These abbreviations are listed to clarify each feature's meaning, specifically in our qualitative analysis and attribution visualizations.}\n\\label{feature_abbriv}\n\\end{table}\n\n\\begin{figure*}[h]\n\\begin{subfigure}[b]{0.49\\textwidth}\n\\includegraphics[width=\\textwidth,trim=0cm 0cm 1.8cm 1.0cm,clip=true]{.\/images\/appendix_imgs\/adult_results_err_eqop.pdf}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.49\\textwidth}\n\\includegraphics[width=\\textwidth,trim=0cm 0cm 1.8cm 1cm,clip=true]{.\/images\/appendix_imgs\/health_results_err_eqop.pdf}\n\\end{subfigure}\n\\caption{Accuracy vs.\\ equality of opportunity curves for the UCI Adult and Heritage Health datasets.}\n\\label{mitig_results_eqop}\n\\end{figure*}\n\n\n\\begin{figure*}[h]\n\\begin{subfigure}[b]{0.49\\textwidth}\n\\includegraphics[width=\\textwidth,trim=0cm 0cm 1.8cm 1.0cm,clip=true]{.\/images\/appendix_imgs\/adult_results_err_eqod.pdf}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.49\\textwidth}\n\\includegraphics[width=\\textwidth,trim=0cm 0cm 1.8cm 1cm,clip=true]{.\/images\/appendix_imgs\/health_results_err_eqod.pdf}\n\\end{subfigure}\n\\caption{Accuracy vs.\\ equalized odds curves for the UCI Adult and Heritage Health datasets.}\n\\label{mitig_results_eqod}\n\\end{figure*}\n\n\\begin{figure*}[h]\n\\centering\n\\begin{subfigure}[b]{0.44\\textwidth}\n\\includegraphics[width=\\textwidth,trim=7cm 2cm 5cm 4cm,clip=true]{.\/images\/appendix_imgs\/adult_features.pdf}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.50\\textwidth}\n\\includegraphics[width=\\textwidth,trim=4cm 3cm 4cm 3cm,clip=true]{.\/images\/appendix_imgs\/health_features.pdf}\n\\end{subfigure}\n\\caption{Top three features for each fairness definition whose removal caused the most improvement in the corresponding fairness definition.
The percentage of improvement upon removal is marked on the $y$-axis for adult and heritage health datasets.}\n\\label{features}\n\\end{figure*}\n\n\\begin{table*}[h]\n\\centering\n\\scalebox{0.9}{\n \\begin{tabular}{c | c c | c c |cc}\n \\toprule\n & Accuracy & SPD & Accuracy & EQOP & Accuracy & EQOD\\\\\n \\midrule\n Attention (Ours)&\\textbf{0.77 (0.006)}&\\textbf{0.012 (0.003)}&0.81 (0.013)&\\textbf{0.020 (0.019)}&\\textbf{0.81 (0.021)}&\\textbf{0.027 (0.023)}\\\\\n \\midrule\n Hardt et al.&\\textbf{0.77 (0.012)}&0.013 (0.005)&\\textbf{0.83 (0.005)}&0.064 (0.016)&\\textbf{0.81 (0.007)}&0.047 (0.014)\\\\\n \\bottomrule\n \\end{tabular}}\n \\caption{Adult results on post-processing approach from Hardt et al. vs our attention method when all problematic features are zeroed out.}\n \\label{adult_post_proc}\n \\vspace{-1em}\n\\end{table*}\n\n\n\\begin{table*}[h]\n\\centering\n\\scalebox{0.9}{\n \\begin{tabular}{c | c c | c c |cc}\n \\toprule\n & Accuracy & SPD & Accuracy & EQOP & Accuracy & EQOD\\\\\n \\midrule\n Attention (Ours)&\\textbf{0.68 (0.004)}&\\textbf{0.04 (0.015)}&0.68 (0.015)&\\textbf{0.15 (0.085)}&0.68 (0.015)&\\textbf{0.10 (0.085)}\\\\\n \\midrule\n Hardt et al.&\\textbf{0.68 (0.005)}&0.05 (0.018)&\\textbf{0.75 (0.001)}&0.20 (0.033)&\\textbf{0.69 (0.012)}&0.19 (0.031)\\\\\n \\bottomrule\n \\end{tabular}\n }\n \\caption{Heritage Health results on post-processing approach from Hardt et al. vs our attention method when all problematic features are zeroed out.}\n \\label{health_post_proc}\n \\vspace{-1em}\n\\end{table*}\n\n\\begin{figure*}[h]\n\\centering\n\\begin{subfigure}[b]{0.45\\textwidth}\n\\includegraphics[width=\\textwidth,trim=2.5cm 0cm 4.5cm 1.0cm,clip=true]{.\/images\/inter_imgs\/attr_vis_adult.pdf}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.45\\textwidth}\n\\includegraphics[width=\\textwidth,trim=2.4cm 0cm 4.5cm 1.0cm,clip=true]{.\/images\/inter_imgs\/attr_vis_health.pdf}\n\\end{subfigure}\n\\caption{Results from the real-world datasets. 
Note that in our $\\hat{y}_z$ notation we replaced indexes with actual feature names for clarity in these results on real-world datasets, as there is not one universal indexing schema, but the feature names are more universal and descriptive for this case. Labels on the points represent the feature name that was removed (zeroed out) according to our $\\hat{y}_z$ notation. The results show how the accuracy and fairness of the model (in terms of statistical parity difference) change with the exclusion of each feature.}\n\\label{fig:interpret_results_2}\n\\end{figure*}","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nWe consider a map $T \\colon [0,1] \\to [0,1]$. Let $r =\n(r_n)_{n=1}^\\infty$ be a sequence of decreasing positive numbers. In\nthis paper we shall investigate the size of the set\n\\begin{align*}\n E (x, r) &= \\{\\, y \\in [0,1] : d (T^n x, y) < r_n\n \\text{ for infinitely many } n \\,\\} \\\\ &= \\limsup_{n \\to \\infty}\n B(T^n x, r_n).\n\\end{align*}\n\nSets of this form with $T \\colon x \\mapsto 2x \\mod 1$ were studied by\nFan, Schmeling and Troubetzkoy in \\cite{Fanetal}. Li, Wang, Wu and Xu\nstudied in \\cite{Lietal} a related but different set in the case when\n$T$ is the Gau\\ss{} map.\n\nIn the paper \\cite{LiaoSeuret}, Liao and Seuret studied the case when\n$T$ is an expanding Markov map with a Gibbs measure $\\mu$. They proved\nthat if $r_n = n^{-\\alpha}$, then for $\\mu$-almost all $x$, the set $E\n(x, r)$ has Hausdorff dimension $1\/\\alpha$, provided that $1\/\\alpha$\nis not larger than the dimension of the measure $\\mu$.\n\nIn this paper we will consider more general maps than those studied by\nLiao and Seuret and prove results similar to those of the three papers\nmentioned above. We will use a method of a statistical nature very\nsimilar to the one used in \\cite{Persson}.
The maps we will work with\nare mostly piecewise expanding interval maps, but some of our results\nare valid for more abstract maps with certain statistical properties.\n\nWe will not assume that the maps have a Markov partition. In the case\nthat $\\mu$ is a measure that is absolutely continuous with respect to\nLebesgue measure, we can consider the sets $E(x,r)$ with $r_n =\nn^{-\\alpha}$ for any $\\alpha > 1$. However, for other measures $\\mu$\nwe have to impose extra restrictions on $\\alpha$, and our results are\nonly valid for sufficiently large $\\alpha$. This extra restriction is\nnot present in the works of Fan, Schmeling and Troubetzkoy; Li, Wang,\nWu and Xu; and Liao and Seuret.\n\nThe results of this paper are presented in two main theorems, found in\nSections~\\ref{sec:acim} and \\ref{sec:gibbs}. The first theorem treats\nthe case when $\\mu$ is absolutely continuous with respect to the\nLebesgue measure, and no extra restriction is imposed on $\\alpha$. In\nthis case our result is a generalisation of the corresponding result\nby Liao and Seuret, and we also prove that for almost all $x$ the set\n$E(x, r)$ has large intersections. This means that the set $E(x,r)$\nbelongs for some $0 < s < 1$ to the class $\\mathscr{G}^s$ of\n$G_\\delta$-sets, with the property that any countable intersection of\nbi-Lipschitz images of sets in $\\mathscr{G}^s$ has Hausdorff dimension\nat least $s$. See Falconer's paper \\cite{Falconer} for more details\nabout those classes of sets. The large intersection property was not\nproved in any of the papers \\cite{Fanetal}, \\cite{Lietal} and\n\\cite{LiaoSeuret}.\n\nThe second theorem treats more general measures and is only valid for\nsufficiently large $\\alpha$. Restrictions of this type are not present\nin the papers \\cite{Fanetal}, \\cite{Lietal} and \\cite{LiaoSeuret}. We\nhave not been able to prove the large intersection property in this\ncase in the general setting.
However, we prove that if the map is a\nMarkov map, then the large intersection property holds.\n\nIn Section~\\ref{sec:examples} we provide explicit examples of maps\nthat satisfy the assumptions of the two main theorems. Most examples\nare for uniformly expanding maps, but we also give some examples with\nnon-uniformly expanding maps.\n\nOne can also study the Hausdorff dimension of the complement of $E\n(x,r)$. That was done both in the paper by Liao and Seuret as well as\nthat by Fan, Schmeling and Troubetzkoy, but we shall not do so in this\npaper.\n\n\n\\section{Maps with Absolutely Continuous Invariant Measures} \\label{sec:acim}\n\nWe will first work with maps $T \\colon [0,1] \\to [0,1]$ satisfying the\nfollowing assumptions.\n\\begin{assumption} \\label{as:acim}\n There exists an invariant measure $\\mu$ that is\n absolutely continuous with respect to Lebesgue measure, with density\n $h$ such that $c_h^{-1} < h < c_h$ holds Lebesgue almost everywhere\n for some constant $c_h > 0$.\n\\end{assumption}\n\\begin{assumption} \\label{as:decay1}\n Correlations decay with summable speed for functions of bounded\n variation: There is a function $p \\colon \\mathbb{N} \\to (0,\\infty)$\n such that if $f \\in L^1$ and $g$ is of bounded variation, then\n \\[\n \\biggl| \\int f \\circ T^n \\cdot g \\, \\mathrm{d} \\mu - \\int f \\,\n \\mathrm{d} \\mu \\int g \\, \\mathrm{d} \\mu \\biggr| \\leq \\lVert f\n \\rVert_1 \\lVert g \\rVert p(n),\n \\]\n where $\\lVert \\psi \\rVert = \\lVert \\psi \\rVert_1 + \\var \\psi$, and\n we assume that the correlations are summable in the sense that\n \\[\n C := \\sum_{n=0}^\\infty p(n) < \\infty.\n \\]\n\\end{assumption}\n\nWe prove the following theorem. 
The proof is in\nSection~\\ref{sec:absctsproof}.\n\n\\begin{theorem} \\label{the:shrinkingtarget}\n Under the Assumptions~\\ref{as:acim} and \\ref{as:decay1} above,\n $\\dimH E (x, r) \\geq s$ for Lebesgue almost all $x \\in [0,1]$, where\n \\[\n s = \\sup \\{\\, t : \\exists c, \\forall n : n^{-2} \\sum_{j=1}^n\n r_j^{-t} < c \\,\\}.\n \\]\n Moreover, the set $E(x,r)$ belongs to the class $\\mathscr{G}^s$ of\n $G_\\delta$-sets with large intersections for Lebesgue almost all\n $x$.\n\n In particular, if $r_n = n^{-\\alpha}$ then $\\dimH E (x, r) =\n 1\/\\alpha$ for Lebesgue almost all $x \\in [0,1]$.\n\\end{theorem}\n\nIn Section~\\ref{sec:examples} we provide some examples of maps\nsatisfying the assumptions of Theorem~\\ref{the:shrinkingtarget}.\n\n\n\\section{Maps with Gibbs Measures} \\label{sec:gibbs}\n\nWe will now consider a map $T \\colon [0,1] \\to [0,1]$ with a Gibbs\nmeasure $\\mu_\\phi$. Our assumptions are as follows.\n\n\\begin{assumption} \\label{as:piecewise}\n $T$ is piecewise monotone and expanding with respect to a finite\n partition, and there is bounded distortion for the derivative $T'$.\n\\end{assumption}\n\n\\begin{assumption} \\label{as:gibbs}\n The potential $\\phi \\colon [0,1] \\to \\mathbb{R}$ is of bounded\n distortion, and there is a Gibbs measure $\\mu_\\phi$ to the potential\n $\\phi$, with $\\mu_\\phi = h_\\phi \\nu_\\phi$ where $h_\\phi$ is a\n bounded function that is bounded away from zero, and $\\nu_\\phi$ is a\n conformal measure, that is, for any subset $A$ of a partition\n element holds\n \\[\n \\nu_\\phi (T(A)) = \\int_A e^{P(\\phi)-\\phi} \\, \\mathrm{d} \\nu_\\phi,\n \\]\n where $P (\\phi)$ denotes the topological pressure of $\\phi$.\n\\end{assumption}\n\n\\begin{assumption} \\label{as:decay2}\n We have summable decay of correlations for functions of bounded\n variation. 
That is, we assume that there is a function $p \\colon\n \\mathbb{N} \\to (0,\\infty)$ such that if $f \\in L^1 (\\mu_\\phi)$ and\n $g$ is of bounded variation, then\n \\[\n \\biggl| \\int f \\circ T^n \\cdot g \\, \\mathrm{d} \\mu_\\phi - \\int f\n \\, \\mathrm{d} \\mu_\\phi \\int g \\, \\mathrm{d} \\mu_\\phi \\biggr| \\leq\n \\lVert f \\rVert_1 \\lVert g \\rVert p(n)\n \\]\n holds for all $n$, and we assume that\n \\[\n C := \\sum_{n=0}^\\infty p(n) < \\infty.\n \\]\n\\end{assumption}\n\n\\begin{assumption} \\label{as:measureofball}\n There is a number $s_0 > 0$ such that for any $s < s_0$ there is a\n constant $c_s$ such that $\\mu_\\phi (I) \\leq c_s |I|^s$ holds for any\n interval $I \\subset [0,1]$.\n\\end{assumption}\n\n\\begin{remark} \\label{rem:dimension}\n We note that Assumption~\\ref{as:measureofball} implies that\n \\begin{equation} \\label{eq:localenergy}\n \\iint | x - y |^{-t} \\, \\mathrm{d} \\mu_\\phi (x) \\mathrm{d}\n \\mu_\\phi (y) \\leq \\frac{t c_s}{s-t}\n \\end{equation}\n for any $t < s < s_0$. This follows since, for any $x$, we have\n \\begin{multline} \\label{eq:energyatx}\n \\int |x - y|^{-t} \\, \\mathrm{d} \\mu_\\phi (y) = \\int_1^\\infty\n \\mu_\\phi (B (x, u^{-1\/t})) \\, \\mathrm{d}u \\\\ \\leq \\int_1^\\infty\n c_s u^{-s\/t} \\, \\mathrm{d} u = \\frac{t c_s}{s-t},\n \\end{multline}\n which implies \\eqref{eq:localenergy}.\n\n Note also that \\eqref{eq:localenergy} implies that the lower pointwise\n dimension of $\\mu_\\phi$ is at least $s_0\/2$ at any point in\n $[0,1]$. Indeed, since $|I|^{-s} \\leq |x - y|^{-s}$ holds whenever\n $x, y \\in I$, we have together with \\eqref{eq:localenergy} that\n \\begin{align*}\n |I|^{-s} \\mu_\\phi (I)^2 &\\leq \\iint_{I \\times I} | x - y |^{-s} \\,\n \\mathrm{d} \\mu_\\phi (x) \\mathrm{d} \\mu_\\phi (y) \\\\ & \\leq \\iint |\n x - y |^{-s} \\, \\mathrm{d} \\mu_\\phi (x) \\mathrm{d} \\mu_\\phi (y) =\n c\n \\end{align*}\n holds whenever $s < s_0$.
Hence $\\mu_\\phi (I) \\leq \\sqrt{c}\n |I|^{s\/2}$ and the claim follows.\n\\end{remark}\n\nIn this setting we can prove a similar result to\nTheorem~\\ref{the:shrinkingtarget}. The proof of the following theorem\nis in Section~\\ref{sec:gibbsproof}.\n\n\\begin{theorem} \\label{the:gibbs}\n Assume that $T \\colon [0,1] \\to [0,1]$ satisfies the\n Assumptions~\\ref{as:piecewise}, \\ref{as:gibbs}, \\ref{as:decay2} and\n \\ref{as:measureofball}. Then, we have that $\\dimH E(x,r) \\geq s$ for\n $\\mu_\\phi$-almost all $x$, where\n \\[\n s = \\sup \\{\\, t < s_0 : \\exists c, \\forall n : n^{-2} \\sum_{j=1}^n\n r_j^{-t} < c \\,\\}.\n \\]\n In particular, if $r_n = n^{-\\alpha}$ and $\\alpha > 1 \/ s_0$, then\n $\\dimH E (x,r) = 1\/\\alpha$ for $\\mu_\\phi$-almost every $x$.\n\\end{theorem}\n\n\\begin{remark}\n Note that if $\\alpha \\leq 1\/s_0$ then Theorem~\\ref{the:gibbs} gives us\n the result that $\\dimH E (x,r) \\geq s_0$. However, one would expect\n that $\\dimH E(x,r) = 1\/\\alpha$ as long as $1\/\\alpha$ is not larger\n than the dimension of $\\mu_\\phi$, which is the result proved by Liao\n and Seuret in their setting.\n\n As is clear from Remark~\\ref{rem:dimension}, our method cannot work\n for the full range of $\\alpha$, since we rely on\n Assumption~\\ref{as:measureofball}, so that we cannot consider\n $\\alpha$ such that $(2\\alpha)^{-1}$ is larger than the lower\n pointwise dimension of $\\mu_\\phi$ at any point.\n\\end{remark}\n\nIf we also assume that the map is Markov, then we can prove the large\nintersection property of the set $E (x,r)$.\n\n\\begin{theorem} \\label{the:markov}\n Assume that $T \\colon [0,1] \\to [0,1]$ is a Markov map that\n satisfies the Assumptions~\\ref{as:piecewise}, \\ref{as:gibbs},\n \\ref{as:decay2} and \\ref{as:measureofball}. 
Then, we have that\n $E(x,r) \\in \\mathscr{G}^s$ for $\\mu_\\phi$-almost all $x$, where\n \\[\n s = \\sup \\{\\, t < s_0 : \\exists c, \\forall n : n^{-2} \\sum_{j=1}^n\n r_j^{-t} < c \\,\\}.\n \\]\n\\end{theorem}\n\nIn the next section we give examples of maps satisfying the\nassumptions of Theorem~\\ref{the:gibbs}.\n\n\n\\section{Examples} \\label{sec:examples}\n\n\n\\subsection{Examples for Theorem~\\ref{the:shrinkingtarget}} \\label{ssec:example1}\nSome dynamical systems obviously satisfy the\nassumptions of Theorem \\ref{the:shrinkingtarget}, for example $n$-to-$1$\nexpanding maps of the circle. We are going to present less\nobvious examples of applications of our results.\n\nFor instance, the maps studied by Liverani in \\cite{Liverani} satisfy\nthe assumptions of Theorem~\\ref{the:shrinkingtarget}. These maps are\ndefined as follows. Assume that there is a finite partition\n$\\mathscr{P}$ of $[0,1]$ into intervals, such that on every interval\n$I \\in \\mathscr{P}$, the map $T$ can be extended to a $C^2$ map on a\nneighbourhood of the closure of $I$, and assume that there is a\n$\\lambda > 1$ such that $|T'| \\geq \\lambda$ holds everywhere. In short,\n$T$ is piecewise $C^2$ with respect to a finite partition,\nand uniformly expanding. We assume also that $T$ is weakly covering,\nas defined by Liverani: The map $T$ is said to be weakly covering if\nthere exists an $N_0 \\in \\mathbb{N}$ such that if $I \\in \\mathscr{P}$,\nthen\n\\[\n \\bigcup_{k=0}^{N_0} T^k (I) \\supset [0,1] \\setminus W,\n\\]\nwhere $W$ is the set of points that never hit the discontinuities of\n$T$. Under the assumptions mentioned above, it is shown in\n\\cite{Liverani} that $T$ has an invariant measure $\\mu$ satisfying\nAssumption~\\ref{as:acim} above, and the correlations decay\nexponentially. Hence they are summable and Assumption~\\ref{as:decay1}\nholds.
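As a quick sanity check on the exponent appearing in the theorems: for $r_j = j^{-\\alpha}$ one has $n^{-2} \\sum_{j=1}^n r_j^{-t} \\sim n^{\\alpha t - 1}\/(\\alpha t + 1)$, which stays bounded precisely when $t \\leq 1\/\\alpha$, so for $\\alpha \\geq 1$ the supremum defining $s$ equals $1\/\\alpha$. The following numerical sketch (our own illustration, not part of the proofs) probes this boundedness:

```python
def ratio(alpha, t, n):
    # n^{-2} * sum_{j<=n} r_j^{-t} with r_j = j^{-alpha}
    return sum(j ** (alpha * t) for j in range(1, n + 1)) / n ** 2

alpha = 2.0  # critical value of t is 1/alpha = 0.5
for t in (0.4, 0.5, 0.6):
    print(t, [round(ratio(alpha, t, n), 4) for n in (10**3, 10**4, 10**5)])
# t < 0.5: the ratios shrink; t = 0.5: they stay near 1/2; t > 0.5: they grow.
```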
We therefore have the following corollary.\n\n\\begin{corollary}\n If $T \\colon [0,1] \\to [0,1]$ is piecewise $C^2$ with respect to a\n finite partition, uniformly expanding, and weakly covering, then\n with $r_n = n^{-\\alpha}$, $\\alpha \\geq 1$, we have\n \\[\n \\dimH E (x,r) = \\frac{1}{\\alpha}\n \\]\n and $E (x,r) \\in \\mathscr{G}^{1\/\\alpha}$ for Lebesgue almost every\n $x \\in [0,1]$.\n\\end{corollary}\n\nIn fact, it is not necessary to assume that the map is piecewise\n$C^2$. It is sufficient that the derivative is of bounded variation,\nsince then one can combine the estimates by Rychlik \\cite{Rychlik}\nwith the method of Liverani \\cite{Liverani} to get the same result.\n\nIf the map is piecewise expanding with an indifferent fixed point,\nthen Assumption~\\ref{as:decay1} does not hold. However, as we will see\nbelow, we can still use Theorem~\\ref{the:shrinkingtarget} to get the\nfollowing result.\n\n\\begin{corollary}\n Let $T_\\beta \\colon [0,1) \\to [0,1)$ with $\\beta > 1$ be the\n Manneville--Pomeau map\n \\[\n x \\mapsto \\left\\{ \\begin{array}{ll} x + 2^{\\beta-1} x^{\\beta} & x <\n 1\/2 \\\\ 2x - 1 & x \\geq 1\/2 \\end{array} \\right.\n \\]\n and $r_j = j^{-\\alpha}$, $\\alpha \\geq 1$. If\\\/ $1 < \\beta < 2$, then\n for Lebesgue almost every $x$ we have that $\\dimH E(x,r) =\n 1\/\\alpha$ and $E (x,r) \\in \\mathscr{G}^{1\/\\alpha}$. If $\\beta \\geq\n 2$, then for Lebesgue almost every $x$ we have that $\\dimH E(x,r) =\n \\frac{1}{\\alpha (\\beta - 1)}$ and $E (x,r) \\in\n \\mathscr{G}^{1\/(\\alpha (\\beta - 1))}$.\n\\end{corollary}\n\n\\begin{proof}\n Let $S_\\beta$ be the first return map on the interval $[1\/2, 1)$.\n Then there exists an $S_\\beta$-invariant measure $\\nu$ that is\n absolutely continuous with respect to Lebesgue measure, and $\\nu$\n is ergodic.\n\n Let $R(x)$ be the return time of $x$ to $[1\/2, 1)$, that is, we have\n $T_\\beta^{R(x)} (x) = S_\\beta (x)$.\n\n In the case $1 < \\beta < 2$ we proceed as follows.
In this case $R$\n is integrable and so, for almost all $x$, there is a constant $c > 0$\n such that\n \\begin{equation} \\label{eq:returntimeestimate}\n n \\leq \\sum_{k=1}^n R_k (x) \\leq c n\n \\end{equation}\n for all sufficiently large $n$. (The lower bound always holds, since\n $R \\geq 1$.) We put\n \\[\n r_j' = (cj)^{-\\alpha} \\qquad \\text{and} \\qquad r_j'' = j^{-\\alpha}.\n \\]\n Then for almost all $x$ we will have that\n \\[\n B (S_\\beta^j (x), r_j') \\subset B (T_\\beta^{\\sum_{k=1}^j R_k (x)}\n (x), r_{\\sum_{k=1}^j R_k (x)}) \\subset B(S_\\beta^j (x), r_j'')\n \\]\n for sufficiently large $j$. Hence, with\n \\begin{align*}\n E' (x,r') &:= \\limsup_{j \\to \\infty} B (S_\\beta^j (x), r_j'), \\\\\n E'' (x,r'') &:= \\limsup_{j \\to \\infty} B (S_\\beta^j (x), r_j''),\n \\end{align*}\n we have\n \\[\n E' (x,r') \\subset E (x,r) \\cap [1\/2,1] \\subset E''(x,r'')\n \\]\n for almost all $x$.\n\n Now, Theorem~\\ref{the:shrinkingtarget} implies that $E' (x,r') \\cap\n [1\/2, 1) \\in \\mathscr{G}^{1\/\\alpha}$ for almost all $x$ and $\\dimH\n E'' (x,r'') = 1\/\\alpha$ for almost all $x$. This implies the\n desired result for $E(x,r) \\cap [1\/2, 1)$.\n\n In the same way we can get the result for $E (x,r) \\cap I_n$, where\n $I_n = [x_n, 1)$ and $x_n$ is the $n$-th pre-image of $1\/2$ with\n respect to the left branch of $T_\\beta$. This concludes the proof\n for the case $1 < \\beta < 2$.\n\n The method above does not quite work when $\\beta \\geq 2$, since then\n $\\int R\\, \\mathrm{d} \\nu = \\infty$, and the upper bound of\n \\eqref{eq:returntimeestimate} fails. However, whenever $\\varepsilon\n > 0$, we have for almost all $x$ that\n \\[\n n^{\\beta - 1 - \\varepsilon} \\leq \\sum_{k = 1}^n R_k (x) \\leq\n n^{\\beta - 1 + \\varepsilon}\n \\]\n holds for large $n$. The upper bound above follows from\n Theorem~2.3.1 of \\cite{Aaronson}.
The lower bound follows using\n Theorem~1 in \\cite{AaronsonDenker}.\n\n We now proceed as in the case $1 < \\beta < 2$. Put\n \\[\n r_j' = (c_2 j)^{-\\alpha(\\beta - 1 + \\varepsilon)} \\qquad \\text{and}\n \\qquad r_j'' = (c_1 j)^{-\\alpha (\\beta - 1 - \\varepsilon)}.\n \\]\n With the same notation as previously, we then have that\n \\[\n E' (x,r') \\subset E (x,r) \\cap [1\/2,1] \\subset E''(x,r'')\n \\]\n for almost all $x$.\n\n Theorem~\\ref{the:shrinkingtarget} implies that for almost all $x$,\n $E'(x, r') \\cap [1\/2, 1) \\in \\mathscr{G}^{1\/(\\alpha(\\beta - 1 +\n \\varepsilon))}$ and $\\dimH (E''(x, r'') \\cap [1\/2,1)) = (\\alpha\n (\\beta - 1 - \\varepsilon))^{-1}$. Since $\\varepsilon > 0$ can be\n chosen arbitrarily small, this implies the result for $E (x,r)\n \\cap [1\/2,1)$. As before, we get the result stated in the\n corollary by considering $E (x,r) \\cap I_n$ in the same way.\n\\end{proof}\n\n\n\\subsection{Examples for Theorem~\\ref{the:gibbs}}\n\nHere we will show that the Assumptions~\\ref{as:decay2} and\n\\ref{as:measureofball} are satisfied for a natural class of\nsystems. Consider a map $T$ which is piecewise $C^2$ with respect to a\nfinite partition, and uniformly expanding, as defined in\nSection~\\ref{ssec:example1}. Then Assumption~\\ref{as:piecewise} is\nsatisfied.\n\nSuppose that $\\phi$ satisfies the assumptions of Liverani, Saussol and\nVaienti in \\cite{Liveranietal}, that is, $e^\\phi$ is of bounded\nvariation and there exists an $n_0$ such that\n\\begin{equation} \\label{eq:contractingpotential}\n \\sup e^{S_{n_0} \\phi} < \\inf L_\\phi^{n_0} 1,\n\\end{equation}\nwhere $S_{n_0} \\phi = \\phi + \\phi \\circ T + \\cdots + \\phi \\circ\nT^{n_0-1}$ and\n\\[\nL_\\phi f (x) = \\sum_{T(y) = x} e^{\\phi (y)} f(y)\n\\]\nis the transfer operator with respect to the potential $\\phi$.
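Condition \\eqref{eq:contractingpotential} can be checked numerically for a concrete example. The sketch below is our own illustration (the doubling map and the perturbed potential are assumptions, not taken from \\cite{Liveranietal}); it compares $\\sup e^{S_{n_0} \\phi}$ with $\\inf L_\\phi^{n_0} 1$ on a grid for $T(x) = 2x \\bmod 1$ and $\\phi(x) = -\\tfrac{1}{2} \\log 2 + 0.1 \\cos (2 \\pi x)$:

```python
import math

def T(x):
    # doubling map (assumed example, uniformly expanding)
    return (2.0 * x) % 1.0

def phi(x):
    # assumed potential: -(1/2) log|T'| plus a small smooth perturbation
    return -0.5 * math.log(2.0) + 0.1 * math.cos(2.0 * math.pi * x)

def birkhoff(x, n):
    # S_n phi(x) = phi(x) + phi(Tx) + ... + phi(T^{n-1} x)
    total = 0.0
    for _ in range(n):
        total += phi(x)
        x = T(x)
    return total

def L_power_one(x, n):
    # (L_phi^n 1)(x) = sum over the 2^n preimages y = (x + j)/2^n of e^{S_n phi(y)}
    return sum(math.exp(birkhoff((x + j) / 2.0**n, n)) for j in range(2**n))

n0 = 6
grid = [i / 200.0 for i in range(200)]
sup_exp = max(math.exp(birkhoff(x, n0)) for x in grid)
inf_L = min(L_power_one(x, n0) for x in grid)
print(sup_exp < inf_L)  # the contraction condition holds for this example
```

For the doubling map the $2^{n_0}$ preimages of a point can be enumerated explicitly, which is what makes the transfer operator easy to evaluate here.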
We\nassume moreover that $\\phi$ is piecewise $C^2$ with respect to the\npartition of the map, so that the bounded distortion part of\nAssumption~\\ref{as:gibbs} is satisfied.\n\nFinally, we assume that $T$ is covering, in the sense that for any\nnon-trivial interval $I$ there is an $n$ such that $T^n (I) \\supset [0,1]\n\\setminus W$, where $W$ is the set of points that never hit the\ndiscontinuities of $T$. Under these assumptions, there exists a\nunique Gibbs measure $\\mu_\\phi$ and the Assumptions~\\ref{as:gibbs} and\n\\ref{as:decay2} hold; see Theorem 3.1 in \\cite{Liveranietal}. In this\nsetting, Assumption~\\ref{as:measureofball} will also be satisfied.\n\n\\begin{corollary} \n Assume that $T \\colon [0,1] \\to [0,1]$ is piecewise $C^2$ with\n respect to a finite partition, uniformly expanding and covering. If\n $\\phi$ satisfies the assumptions above, then\n Assumption~\\ref{as:measureofball} is satisfied with\n \\[\n s_0 =\n \\limsup_{m \\to \\infty} \\inf \\frac{S_m \\phi - m P(\\phi)}{-\\log\n |(T^m)'|}.\n \\]\n Hence, if $r_n = n^{-\\alpha}$, $\\alpha > 1\/s_0$, then\n \\[\n \\dimH E(x,r) = \\frac{1}{\\alpha}\n \\]\n for $\\mu_\\phi$-almost every $x \\in [0,1]$.\n\\end{corollary}\n\n\\begin{proof}\n We will rely on the part of Assumption~\\ref{as:gibbs} that says that\n if $A$ is a subset of one of the partition elements, then\n \\begin{equation} \\label{eq:conformal}\n \\nu_\\phi (T(A)) = \\int_A e^{P(\\phi)-\\phi} \\, \\mathrm{d} \\nu_\\phi,\n \\end{equation}\n where $P (\\phi) = \\lim_{n \\to \\infty} n^{-1} \\log \\inf L_\\phi^n 1$\n denotes the topological pressure of $\\phi$. Since there are\n constants $c_1$ and $c_2$ such that $0 < c_1 < h_\\phi < c_2$, it suffices\n to prove Assumption~\\ref{as:measureofball} for the measure\n $\\nu_\\phi$.\n\n Let $r_0 > 0$ be such that any interval of length $r_0$ intersects\n at most two partition elements.
If $r < r_0$ and $I$ is an interval\n of length $r$, then $I$ intersects at most two different partition\n elements and therefore $T(I)$ consists of at most two intervals of\n length at most $r \\sup |T'|$. By \\eqref{eq:conformal}, it follows\n that\n \\[\n \\nu_\\phi (I) \\inf_I e^{P(\\phi) - \\phi} \\leq 2 \\sup_{|I_1| = r \\sup_I\n |T'|} \\nu_\\phi (I_1).\n \\]\n Hence\n \\[\n \\nu_\\phi (I) \\leq 2 \\sup_I e^{\\phi - P(\\phi)} \\sup_{|I_1| = r \\sup_I\n |T'|} \\nu_\\phi (I_1).\n \\]\n By induction, we conclude that\n \\[\n \\nu_\\phi (I) \\leq \\bigl(2 \\sup_I e^{\\phi - P(\\phi)} \\bigr)^n,\n \\]\n where $n$ is the largest integer such that $r (\\sup_I |T'|)^n \\leq\n r_0$. Hence there is a constant $C_1$, not depending on $I$, such that\n \\[\n \\nu_\\phi (I) \\leq C_1 r^{\\theta_1} = C_1 |I|^{\\theta_1}, \\quad\n \\theta_1 = \\frac{\\log 2 + \\log \\sup_I e^{\\phi - P(\\phi)}}{- \\log\n \\sup_I |T'|}.\n \\]\n By making the constant $C_1$ sufficiently large, we can ensure that\n the estimate above holds for all intervals $I$, not only those that\n are sufficiently small.\n\n By considering $T^m$ instead of $T$, where $m$ is a positive\n integer, the same argument gives us the existence of a constant $C_m$\n such that\n \\[\n \\nu_\\phi (I) \\leq C_m |I|^{\\theta_m}, \\quad \\theta_m = \\frac{\\log 2\n + \\log \\sup_I e^{S_m \\phi - m P(\\phi)}}{- \\log \\sup_I |(T^m)'|}\n \\]\n holds for any interval $I$.\n\n This shows that we may take $s_0 = \\limsup_{m \\to \\infty} \\inf\n \\frac{S_m \\phi - m P(\\phi)}{-\\log |(T^m)'|}$. The assumption\n \\eqref{eq:contractingpotential} guarantees that $s_0 > 0$.\n\\end{proof}\n\n\n\\section{Proof of Theorem~\\ref{the:shrinkingtarget}} \\label{sec:absctsproof}\n\nThe proof of Theorem~\\ref{the:shrinkingtarget} will be based on the\nfollowing lemma. It is a special case of Theorem~1 in\n\\cite{PerssonReeve}.
We refer to \\cite{PerssonReeve} for a proof.\n\n\\begin{lemma} \\label{lem:frostman}\n Let $E_n$ be open subsets of $[0,1]$, and $\\mu_n$ Borel probability\n measures with support in $E_n$, that converge weakly to a measure\n $\\mu$ that is absolutely continuous with respect to Lebesgue measure\n and with density that is bounded and bounded away from zero. Suppose\n there exists a constant $C$ such that\n \\[\n \\iint |x-y|^{-s} \\, \\mathrm{d} \\mu_n (x) \\mathrm{d} \\mu_n (y) < C\n \\]\n holds for all $n$. Then the set $\\displaystyle \\limsup_{n \\to\n \\infty} E_n$ belongs to the class $\\mathscr{G}^s$ and has\n Hausdorff dimension at least $s$.\n\\end{lemma}\n\nWe will also make use of the following two lemmata.\n\n\\begin{lemma} \\label{lem:energy}\n Let $0 < s < 1$. There is a constant $c_s > 0$ such that if $B_1 =\n B(x_1, r_1)$ and $B_2 = B(x_2,r_2)$ are two balls, then\n \\[\n \\frac{1}{r_1 r_2} \\int_{B_1} \\int_{B_2} |x - y|^{-s} \\, \\mathrm{d}x\n \\mathrm{d}y \\leq c_s \\min \\{|x_1 - x_2|^{-s}, r_1^{-s}, r_2^{-s} \\},\n \\]\n and for any fixed $x_2$, the variation of the function\n \\[\n x_1 \\mapsto \\frac{1}{r_1 r_2} \\int_{B_1} \\int_{B_2} |x - y|^{-s} \\,\n \\mathrm{d}x \\mathrm{d}y\n \\]\n is less than $2 c_s \\min \\{r_1^{-s}, r_2^{-s}\\}$.\n\\end{lemma}\n\n\\begin{proof}\n This is intuitively clear, but we provide a proof.\n\n We suppose that $r_1 \\geq r_2$. Let\n \\[\n I (x_1,x_2) = \\frac{1}{r_1 r_2} \\int_{B_1} \\int_{B_2} |x - y|^{-s}\n \\, \\mathrm{d}x \\mathrm{d}y.\n \\]\n It is clear that $I$ achieves its maximal value when $x_1 = x_2$,\n for instance when $x_1 = x_2 = 1\/2$. Then a direct calculation shows\n that there is a constant $c_1$ such that\n \\[\n I (1\/2, 1\/2) \\leq c_1 r_1^{-s}.\n \\]\n Hence $I (x_1,x_2) \\leq c_1 r_1^{-s} = c_1 \\min \\{r_1^{-s}, r_2^{-s}\n \\}$.\n\n Suppose that $|x_1 - x_2| > r_1$. It suffices to show that\n $I (x_1,x_2) \\leq c_2 |x_1 - x_2|^{-s}$ holds for some constant\n $c_2$.
By a change of variables, we have that\n \\begin{align*}\n I (x_1, x_2) &= |x_1 - x_2|^{-s} \\int_{-1}^1 \\int_{-1}^1 \\Bigl| 1\n - \\frac{r_1}{|x_1-x_2|} u - \\frac{r_2}{|x_1 - x_2|} v \\Bigr|^{-s}\n \\, \\mathrm{d}u \\mathrm{d}v \\\\ &\\leq 4 |x_1 - x_2|^{-s} \\int_0^1\n \\int_0^1 \\Bigl| 1 - \\frac{r_1}{|x_1-x_2|} u - \\frac{r_2}{|x_1 -\n x_2|} v \\Bigr|^{-s} \\, \\mathrm{d}u \\mathrm{d}v.\n \\end{align*}\n Since $r_1 \/ |x_1 - x_2|$ and $r_2 \/ |x_1 - x_2|$ are not larger\n than $1$, we have that\n \\[\n I (x_1, x_2) \\leq 4 |x_1 - x_2|^{-s} \\int_0^1 \\int_0^1 | 1 - u - v\n |^{-s} \\, \\mathrm{d}u \\mathrm{d}v = c_2 |x_1 - x_2|^{-s}.\n \\]\n We can now conclude that $I (x_1, x_2) \\leq c_s \\min \\{|x_1 -\n x_2|^{-s}, r_1^{-s}, r_2^{-s} \\}$, with $c_s = \\max \\{c_1, c_2 \\}$.\n\n The statement about the variation is now a direct consequence since\n the function\n \\[\n x_1 \\mapsto \\frac{1}{r_1 r_2} \\int_{B_1} \\int_{B_2} |x - y|^{-s} \\,\n \\mathrm{d}x \\mathrm{d}y,\n \\]\n is positive, unimodal and with maximal value at most $c_s \\min\n \\{r_1^{-s}, r_2^{-s}\\}$.\n\\end{proof}\n\n\\begin{lemma} \\label{lem:decay}\n Suppose that $F \\colon [0,1]^2 \\to \\mathbb{R}$ is a continuous and\n non-negative function, and that $D$ and $E$ are constants such that\n for each fixed $x$ the function $f \\colon y \\mapsto F(x,y)$\n satisfies $\\var f \\leq D$ and $\\int f \\, \\mathrm{d} \\mu \\leq\n E$. Then\n \\[\n \\int F(T^n x, x) \\, \\mathrm{d} \\mu (x) \\leq E + (D + E) p(n).\n \\]\n\\end{lemma}\n\n\\begin{proof}\n Let $\\varepsilon > 0$. Let $I_k = [k\/m, (k+1)\/m)$. 
There is an $m$\n such that if\n \\[\n G (x, y) = \\sum_{k = 0}^{m-1} F (k\/m, y) {1}_{I_k} (x),\n \\]\n where ${1}_{I_k}$ denotes the indicator function on $I_k$, then\n \\[\n | F (x,y) - G(x,y) | < \\varepsilon.\n \\]\n Hence we have\n \\[\n \\biggl| \\int F (T^n x, x) \\, \\mathrm{d} \\mu (x) - \\int G (T^n x, x)\n \\, \\mathrm{d} \\mu (x) \\biggr| < \\varepsilon.\n \\]\n\n For each term $F (k\/m, y) {1}_{I_k} (x)$ in the sum defining $G$, we\n have\n \\begin{multline*}\n \\biggl| \\int F (k\/m, x) {1}_{I_k} (T^n x) \\, \\mathrm{d} \\mu (x) -\n \\int F (k\/m, x) \\, \\mathrm{d} \\mu (x) \\int 1_{I_k} \\, \\mathrm{d}\n \\mu \\biggr| \\\\ \\leq \\mu (I_k) (D + E) p(n)\n \\end{multline*}\n by the decay of correlations. As a consequence, we have\n \\[\n \\int F (k\/m, x) {1}_{I_k} (T^n x) \\, \\mathrm{d} \\mu (x) \\leq E \\mu\n (I_k) + \\mu (I_k) (D + E) p(n),\n \\]\n and so\n \\[\n \\int F (T^n x, x) \\, \\mathrm{d} \\mu (x) \\leq \\varepsilon + \\int G\n (T^n x, x) \\, \\mathrm{d} \\mu (x) \\leq \\varepsilon + E + (D + E)\n p(n).\n \\]\n Letting $\\varepsilon \\to 0$ completes the proof.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem~\\ref{the:shrinkingtarget}]\n Let $B_n (x) = B(T^n x, r_n)$. We consider the sets\n \\[\n V_n (x) = \\bigcup_{k = m(n)}^n B_k (x)\n \\]\n where $m(n)$ is a slowly increasing sequence such that $m(n) < n$\n and $m(n) \\to \\infty$ as $n \\to \\infty$. It then holds that $\\limsup\n V_n (x) = \\limsup B_n (x)$.\n\n We define probability measures $\\mu_{n, x}$ with support in $V_n\n (x)$ by\n \\[\n \\mu_{n,x} = \\frac{1}{n - m(n) + 1} \\sum_{k = m(n)}^n \\lambda_{B_k\n (x)},\n \\]\n where $\\lambda_A$ denotes the Lebesgue measure restricted to the set\n $A$ and normalised so that $\\lambda_A (A) = 1$. 
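The weak convergence of $\mu_{n,x}$ that is used next can be sketched as follows; this is only an illustration, under the additional assumption that $r_k \to 0$ as $k \to \infty$ (which holds in the typical shrinking-target setting):

```latex
% Sketch (not part of the argument above): for a continuous test
% function f, the ball average over B_k(x) is close to the value of f
% at the centre T^k x when r_k is small, so
\[
\int f \, \mathrm{d} \mu_{n,x}
  = \frac{1}{n - m(n) + 1} \sum_{k = m(n)}^{n}
    \frac{1}{2 r_k} \int_{B_k (x)} f \, \mathrm{d} \lambda
  = \frac{1}{n - m(n) + 1} \sum_{k = m(n)}^{n}
    \bigl( f (T^k x) + o(1) \bigr),
\]
% and by Birkhoff's ergodic theorem the right-hand side converges to
% \int f \, d\mu for \mu-almost every x, since m(n) < n/2 and hence
% the average over [m(n), n] is a difference of two Birkhoff averages.
```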
It is clear that\n $\\mu_{n,x}$ converges weakly to $\\mu$ as $n \\to \\infty$ for almost\n every $x$.\n\n We shall consider the quantities\n \\[\n I_s (\\mu_{n,x}) = \\iint |y - z|^{-s} \\, \\mathrm{d} \\mu_{n,x} (y)\n \\mathrm{d} \\mu_{n,x} (z).\n \\]\n From the definition of the measure $\\mu_{n,x}$ it follows that\n \\[\n I_s (\\mu_{n,x}) = \\frac{1}{(n - m(n) + 1)^2} \\sum_{i = m(n)}^n\n \\sum_{j = m(n)}^n \\frac{1}{4 r_i r_j} \\int_{B_i} \\int_{B_j} |y -\n z|^{-s} \\, \\mathrm{d}y \\mathrm{d}z.\n \\]\n We now assume that $m(n) < n \/ 2$. Together with\n Lemma~\\ref{lem:energy} we then get that\n \\[\n I_s (\\mu_{n,x}) \\leq \\frac{4 c_s}{n^2} \\sum_{m(n) \\leq i \\leq j \\leq\n n} \\min \\{|T^i x - T^j x|^{-s}, r_i^{-s}, r_j^{-s} \\}.\n \\]\n\n Using that $\\mu$ is $T$-invariant, we can write\n \\[\n \\int I_s (\\mu_{n,x}) \\, \\mathrm{d}\\mu (x) \\leq \\frac{4 c_s}{n^2}\n \\sum_{m(n) \\leq i \\leq j \\leq n} \\int \\min \\{|T^{j -i} x - x|^{-s},\n r_i^{-s} \\wedge r_j^{-s} \\} \\, \\mathrm{d} \\mu (x),\n \\]\n where $a \\wedge b$ denotes the minimum of $a$ and $b$.\n\n An application of Lemma~\\ref{lem:decay} gives that\n \\begin{align*}\n \\int I_s (\\mu_{n,x}) \\, \\mathrm{d}\\mu (x) & \\leq \\frac{1}{n^2}\n \\sum_{m(n) \\leq i \\leq j \\leq n} \\bigl( C_1 + (C_1 + 2 (r_i^{-s}\n \\wedge r_j^{-s}) ) p(j-i) \\bigr) \\\\ & \\leq \\frac{1}{n^2}\n \\sum_{m(n) \\leq i \\leq j \\leq n} C_2 (1 + (r_i^{-s} \\wedge\n r_j^{-s}) p(j-i)) \\\\ & \\leq C_2 + \\frac{C_2}{n^2} \\sum_{j = 1}^n\n \\sum_{i=1}^j (r_i^{-s} \\wedge r_j^{-s}) p (j-i).\n \\end{align*}\n Since $p$ is summable, we can estimate that\n \\[\n \\sum_{j = 1}^n \\sum_{i=1}^j (r_i^{-s} \\wedge r_j^{-s}) p (j-i) \\leq\n \\sum_{j = 1}^n \\sum_{i=1}^j r_j^{-s} p (j-i) = \\sum_{j = 1}^n\n \\sum_{i=0}^{j-1} r_j^{-s} p (i) \\leq C \\sum_{j=1}^n r_j^{-s}.\n \\]\n (This estimate is actually not too rough, since\n \\[\n \\sum_{j = 1}^n \\sum_{i=1}^j (r_i^{-s} \\wedge r_j^{-s}) p (j-i) =\n \\sum_{j = 1}^n \\sum_{i=0}^{j-1} 
(r_{j-i}^{-s} \\wedge r_j^{-s}) p (i)\n \\geq \\sum_{j = 1}^n r_j^{-s} p(0),\n \\]\n which is of the same order of magnitude if $r_i \\to 0$ as $i \\to \\infty$.)\n\n We conclude that\n \\[\n \\int I_s (\\mu_{n,x}) \\, \\mathrm{d} \\mu (x) \\leq C_2 + \\frac{C\n C_2}{n^2} \\sum_{j=1}^n r_j^{-s},\n \\]\n and this is uniformly bounded for all $n$ if\n \\[\n s < \\sup \\{\\, t : \\exists c, \\forall n : n^{-2} \\sum_{j=1}^n\n r_j^{-t} < c \\,\\}.\n \\]\n\n Suppose $s$ satisfies the inequality above. Then, by Birkhoff's\n ergodic theorem, for $\\mu$-almost all $x$ the measures $\\mu_{n,x}$\n converge weakly to the measure $\\mu$, and, as follows from the\n considerations above, for $\\mu$-almost all $x$, there is a sequence\n $n_k$, with $n_k \\to \\infty$, such that the sequence $(I_s\n (\\mu_{n_k,x}))_{k=1}^\\infty$ is bounded. We can now apply\n Lemma~\\ref{lem:frostman} and conclude that for $\\mu$-almost all $x$\n the set $E(x, r)$ belongs to the class $\\mathscr{G}^s$. This proves\n the first part of Theorem~\\ref{the:shrinkingtarget}.\n\n If $r_n = n^{-\\alpha}$, then it is easy to check that the result\n above gives us that the set $E (x,r)$ belongs to\n $\\mathscr{G}^{1\/\\alpha}$ for almost all $x$. A simple covering\n argument shows that in fact the dimension is not larger than\n $1\/\\alpha$.\n\\end{proof}\n\n\n\\section{Proof of Theorems~\\ref{the:gibbs} and \\ref{the:markov}} \\label{sec:gibbsproof}\n\nAssume that we have a sequence of open sets $E_n$, such that each\n$E_n$ is a finite union of disjoint intervals, and that the diameters\nof these intervals go to zero as $n$ grows. We are first going to\nstudy the Hausdorff dimension of the set $\\limsup E_n$ in the\nfollowing lemmata. The proof of Theorem~\\ref{the:gibbs} will then be\nsimilar to that of Theorem~\\ref{the:shrinkingtarget}, but will instead\nbe based on the lemmata below.\n\n\\begin{lemma} \\label{lem:localfrostman}\n Let $E_n$ be open subsets of $[0,1]$. 
Suppose there are Borel\n probability measures $\\mu_n$ with support in $E_n$ that converge\n weakly to a measure $\\mu$ that satisfies \\eqref{eq:localenergy}. If\n for some $t < s < s_0$ there is a constant $C$ such that\n \\[\n \\iint |x - y|^{-s} \\, \\mathrm{d} \\mu_n (x) \\mathrm{d} \\mu_n (y) < C\n \\]\n for all $n$, then, whenever $I$ is an interval with\n \\[\n \\iint_{I \\times I} |x - y|^{-t} \\, \\mathrm{d} \\mu (x) \\mathrm{d}\n \\mu (y) < c |I|^{-t} \\mu (I)^2,\n \\]\n there is an $n_I$ such that\n \\[\n \\sum |U_k|^t \\geq \\frac{1}{2c} |I|^t\n \\]\n holds for any cover $\\{U_k\\}$ of $E_n \\cap I$, $n > n_I$.\n\\end{lemma}\n\n\\begin{proof}\n The assumptions imply that for any $t < s$\n \\[\n \\iint_{I \\times I} |x - y|^{-t} \\, \\mathrm{d} \\mu_n (x) \\mathrm{d}\n \\mu_n (y) \\to \\iint_{I \\times I} |x - y|^{-t} \\, \\mathrm{d} \\mu (x)\n \\mathrm{d} \\mu (y),\n \\]\n as $n \\to \\infty$. (See Corollary 2.3 of \\cite{PerssonReeve}.)\n\n For a measure $\\nu$ on $I$ we write $R_t \\nu (x) = \\int |x-y|^{-t}\n \\, \\mathrm{d} \\nu (y)$.\n\n Take an interval $I \\subset [0,1]$ satisfying the assumption of the\n lemma, and define the measure $\\nu_n$ on $I$ by\n \\begin{equation} \\label{eq:nudefinition}\n \\nu_n (A) = \\frac{\\int_A (R_t \\mu_n |_I)^{-1} \\, \\mathrm{d}\n \\mu_n}{\\int_I (R_t \\mu_n |_I)^{-1} \\, \\mathrm{d} \\mu_n},\n \\end{equation}\n where $\\mu_n |_I$ denotes the restriction of $\\mu_n$ to $I$.\n\n There is an $n_I$ such that if $n > n_I$ then\n \\begin{equation} \\label{eq:nuestimate}\n \\nu_n (U) \\leq 2 c \\frac{|U|^t}{|I|^t}\n \\end{equation}\n holds for all intervals $U \\subset I$. This is proved as follows. 
By\n the definition of $\\nu_n$ the estimate \\eqref{eq:nuestimate} is\n equivalent to\n \\[\n \\frac{1}{|U|^t} \\int_U (R_t \\mu_n |_I)^{-1} \\, \\mathrm{d} \\mu_n \\leq\n \\frac{2 c}{|I|^t} \\int_I (R_t \\mu_n |_I)^{-1} \\, \\mathrm{d} \\mu_n.\n \\]\n We prove the stronger statement that\n \\begin{equation} \\label{eq:Rtintegrals}\n \\frac{1}{|U|^t} \\int_U (R_t \\mu_n |_I)^{-1} \\, \\mathrm{d} \\mu_n \\leq\n 1 \\leq \\frac{2 c}{|I|^t} \\int_I (R_t \\mu_n |_I)^{-1} \\, \\mathrm{d}\n \\mu_n.\n \\end{equation}\n\n The first inequality in \\eqref{eq:Rtintegrals} is proved in\n \\cite{PerssonReeve}. (Use Lemma~2.4 of \\cite{PerssonReeve} and\n approximate with measures that are absolutely continuous with\n respect to Lebesgue.) To prove the second inequality we use Jensen's\n inequality and Assumption~\\ref{as:measureofball} (in particular\n \\eqref{eq:localenergy}) to conclude that\n \\begin{align*}\n \\int_I (R_t \\mu_n |_I)^{-1} \\, \\frac{\\mathrm{d} \\mu_n}{\\mu_n (I)}\n & \\geq \\biggl(\\int_I (R_t \\mu_n |_I) \\, \\frac{\\mathrm{d}\n \\mu_n}{\\mu_n (I)} \\biggr)^{-1} \\\\ &= \\biggl( \\frac{1}{\\mu_n (I)}\n \\iint_{I \\times I} |x-y|^{-t} \\mathrm{d} \\mu_n (x) \\mathrm{d}\n \\mu_n (y) \\biggr)^{-1} \\\\ &\\geq \\frac{1}{\\sqrt 2} \\biggl(\n \\frac{1}{\\mu (I)} \\iint_{I \\times I} |x-y|^{-t} \\mathrm{d} \\mu (x)\n \\mathrm{d} \\mu (y) \\biggr)^{-1} \\\\ &\\geq \\frac{1}{2c}\n |I|^t \\mu_n (I)^{-1},\n \\end{align*}\n provided $n > n_I$ for some $n_I$. Hence\n \\[\n \\frac{1}{|I|^t} \\int_I (R_t \\mu_n |_I)^{-1} \\, \\mathrm{d} \\mu_n\\geq\n \\frac{1}{2c}\n \\]\n and \\eqref{eq:Rtintegrals} follows.\n\n We have now proved \\eqref{eq:nuestimate}, and will use it as\n follows. Suppose that $\\{U_k\\}$ is a cover of $E_n \\cap I$, and $n >\n n_I$. 
Then\n \\[\n 1 = \\nu_n (\\bigcup_k U_k) \\leq \\sum_k \\nu_n (U_k) \\leq\n \\frac{2 c}{|I|^t} \\sum_k |U_k|^t.\n \\]\n This shows that $\\sum_k |U_k|^t \\geq \\frac{1}{2c} |I|^t$ for\n any cover $\\{U_k\\}$ of $E_n \\cap I$.\n\\end{proof}\n\nIf we knew that, for some constant $c$, the estimate\n\\[\n\\iint_{I \\times I} |x - y|^{-t} \\, \\mathrm{d} \\mu_\\phi (x) \\mathrm{d}\n\\mu_\\phi (y) < c |I|^{-t} \\mu_\\phi (I)^2\n\\]\nholds for any $I$, then we could use this to prove that the set\n$E (x, r)$ has a large intersection property; see the proof of\nTheorem~\\ref{the:gibbs}. However, we are unable to prove that such a\nconstant exists, and our strategy is instead to prove that we have\nsuch an estimate for sufficiently many intervals to get the dimension\nresult. The lemma below is what we need.\n\nIf $\\mathscr{Z}$ is the partition with respect to which $T$ is\npiecewise expanding, then the elements of the partition $\\mathscr{Z}\n\\vee T^{-1} \\mathscr{Z} \\vee \\cdots \\vee T^{-n+1} \\mathscr{Z}$\nare called cylinders of generation $n$.\n\n\\begin{lemma} \\label{lem:energyanddistorsion}\n Let $d_0 > 0$ be given and suppose that \\eqref{eq:localenergy} holds\n and that $s < s_0$. 
Then there is a constant $K = K (d_0)$ such\n that if $I$ is an interval that is a subset of a cylinder of\n generation $n$ and $|T^n (I)| > d_0$, then\n \\begin{equation} \\label{eq:goodinterval}\n \\iint_{I \\times I} |x - y|^{-s} \\, \\mathrm{d} \\mu_\\phi (x)\n \\mathrm{d} \\mu_\\phi (y) < K |I|^{-s} \\mu_\\phi (I)^2.\n \\end{equation}\n\\end{lemma}\n\n\\begin{proof}\n Let\n \\[\n K_0 = \\sup_{|I| > d_0} \\frac{|I|^s}{\\mu_\\phi (I)^2} \\iint_{I \\times\n I} |x - y|^{-s} \\, \\mathrm{d} \\mu_\\phi (x) \\mathrm{d} \\mu_\\phi (y)\n < \\infty.\n \\]\n By the bounded distortion, there exists a constant $K_1$ such that\n \\begin{multline*}\n \\frac{|I|^s}{\\mu_\\phi (I)^2} \\iint_{I \\times I} |x - y|^{-s} \\,\n \\mathrm{d} \\nu_\\phi (x) \\mathrm{d} \\nu_\\phi (y)\\\\ < K_1\n \\frac{|T^n(I)|^s}{\\nu_\\phi (T^n (I))^2} \\iint_{T^n (I) \\times\n T^n (I)} |x - y|^{-s} \\, \\mathrm{d} \\nu_\\phi (x) \\mathrm{d}\n \\nu_\\phi (y),\n \\end{multline*}\n whenever $I$ is an interval contained in a cylinder of generation\n $n$. Since $\\mu_\\phi = h_\\phi \\nu_\\phi$, where $h_\\phi$ is bounded\n and bounded away from zero, the combination of these two estimates\n gives us the desired result.\n\\end{proof}\n\nBy Lemma~\\ref{lem:energyanddistorsion} we know that some particular\nintervals are good, in the sense that we have the estimate\n\\eqref{eq:goodinterval}. We will now use these intervals to construct\na Cantor set $N = \\cap N_n \\subset \\limsup E_n$ with large\ndimension. 
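Before turning to the construction, the distortion step in the proof of Lemma~\ref{lem:energyanddistorsion} above can be spelled out; this is a sketch, in which the constant $K_1$ absorbs the distortion and conformality constants:

```latex
% For an interval I inside a cylinder Z of generation n, the map T^n
% is injective on Z, and bounded distortion gives a constant D with
%   D^{-1} <= |T^n x - T^n y| / ( |(T^n)'(z)| |x - y| ) <= D
% for all x, y, z in I.  By conformality, \nu_\phi(J) is comparable
% to e^{S_n \phi(z) - n P(\phi)} \nu_\phi(T^n J) for J \subset I, so
% substituting u = T^n x, v = T^n y rescales the energy integral:
\[
  \frac{|I|^s}{\nu_\phi (I)^2} \iint_{I \times I} |x - y|^{-s}
  \, \mathrm{d} \nu_\phi (x) \, \mathrm{d} \nu_\phi (y)
  \leq K_1 \frac{|T^n (I)|^s}{\nu_\phi (T^n (I))^2}
  \iint_{T^n (I) \times T^n (I)} |u - v|^{-s}
  \, \mathrm{d} \nu_\phi (u) \, \mathrm{d} \nu_\phi (v).
\]
% The factors |(T^n)'|^{\pm s} coming from |x - y|^{-s} and |I|^s,
% and the factors e^{\pm(S_n \phi - n P(\phi))} coming from the
% measures, cancel up to the constant K_1.
```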
The following lemma describes the important properties of\n this construction.\n\n\\begin{lemma} \\label{lem:net}\n Suppose that the assumptions of Lemma~\\ref{lem:localfrostman} hold\n with $\\mu = \\mu_\\phi$, and that \\eqref{eq:localenergy} is satisfied.\n Then, for any $\\varepsilon > 0$, there is a sequence of sets $N_n$\n with the following properties.\n \\begin{enumerate}\n \\addtocounter{enumi}{1}\n \\item[(\\roman{enumi})] All $N_n$ are compact, each $N_n = \\cup\n N_{n,i}$ is a finite and disjoint union of intervals $N_{n,i}$,\n and $N_{n+1} \\subset N_n$. \\addtocounter{enumi}{1}\n \\item[(\\roman{enumi})] There is an increasing sequence $m_n$ such\n that $N_n \\subset E_{m_n}$. \\addtocounter{enumi}{1}\n \\item[(\\roman{enumi})] For any $N_{n,i}$ we have\n \\[\n \\sum |U_k|^t \\geq \\frac{1}{4K} |N_{n,i}|^t,\n \\]\n for any cover $\\{U_k\\}$ of $N_{n,i} \\cap N_{n+1}$.\n \\addtocounter{enumi}{1}\n \\item[(\\roman{enumi})] For any $N_{n,i}$ and $N_{n+1,j}$ we have\n \\[\n \\frac{|N_{n,i}|}{|N_{n+1,j}|} > (4K)^{1\/\\varepsilon}.\n \\]\n \\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\n By Hofbauer \\cite{Hofbauer}, Lemma 13, we have that if we choose\n $d_0$ sufficiently small, then the Hausdorff dimension of the set of\n points for which $|T^n I_n (x)| > d_0$ holds for only finitely many\n $n$ is arbitrarily close to $0$. In particular, if\n we choose $d_0$ sufficiently small, then there is a set $A$ of full\n measure such that for any $x \\in A$ there are infinitely many $n$\n with $|T^n I_n (x)| > d_0$.\n\n If $x \\in A$ and $I_n (x)$ has the property that $|T^n I_n (x)| >\n d_0$, then we let $J_{x,n} = I_n (x)$. We denote by $\\mathscr{J}$\n the set of all $J_{x,n}$, that is\n \\[\n \\mathscr{J} = \\{\\, J_{x,n} : x \\in A \\, \\}.\n \\]\n\n We will define the sets $N_n$ inductively as follows. We set $N_0 =\n [0,1]$. Clearly $[0,1]$ satisfies the assumptions of\n Lemma~\\ref{lem:energyanddistorsion}. 
We let $m_0 = n_{[0,1]}$, where\n $n_{[0,1]}$ is given by Lemma~\\ref{lem:localfrostman}.\n\n Suppose that $N_n$ has been defined together with a number $m_n$\n such that for any $N_{n,i}$, Lemma~\\ref{lem:localfrostman} is\n satisfied with $n_{N_{n,i}} \\leq m_n$.\n\n We wish to define $N_{n+1}$. The set $A \\cap N_n$ has full measure\n in $N_n$. Hence, for any $\\varepsilon_n > 0$, we can find a finite\n and disjoint collection $\\mathscr{J}_n \\subset \\mathscr{J}$ such\n that for all $J_{x,n} \\in \\mathscr{J}_n$ we have $|J_{x,n}| <\n \\varepsilon_n$ and $J_{x,n} \\subset E_{m_n}$. Moreover, we can\n choose the collection $\\mathscr{J}_n$ such that for any $N_{n,i}$,\n if $\\mathscr{J}_n'$ denotes the elements of $\\mathscr{J}_n$ that are\n subsets of $N_{n,i}$, then\n \\[\n \\nu_n (\\cup \\mathscr{J}_n') > \\frac{1}{2},\n \\]\n where the measure $\\nu_n$ is defined by \\eqref{eq:nudefinition}. As\n in the proof of Lemma~\\ref{lem:localfrostman}, we can then conclude\n that for any $N_{n,i}$ we have\n \\[\n \\sum |U_k|^t \\geq \\frac{1}{4}\n K^{-1} |N_{n,i}|^t,\n \\]\n for any cover $\\{U_k\\}$ of $N_{n,i} \\cap \\bigcup \\mathscr{J}_n'$.\n\n We put $N_{n+1} = \\cup \\mathscr{J}_n$ and $\\{N_{n+1,i}\\} =\n \\mathscr{J}_n$. The number $m_{n+1}$ is taken to be an upper bound\n of $\\{\\, n_I : I \\in \\mathscr{J}_n \\,\\}$. By taking $\\varepsilon_n$\n sufficiently small we can achieve that\n \\[\n \\frac{|N_{n,i}|}{|N_{n+1,j}|} > (4K)^{1\/\\varepsilon}\n \\]\n holds for any $N_{n,i}$ and $N_{n+1,j}$.\n\n By induction, we now get the sets $N_n$ with the desired properties.\n\\end{proof}\n\n\\begin{lemma} \\label{lem:dimofcantorset}\n With the assumptions and notation of Lemma~\\ref{lem:net} we have\n that $\\dimH N \\geq t - \\varepsilon$, where $N = \\cap N_n$.\n\\end{lemma}\n\n\\begin{proof}\n Consider any countable cover $\\mathscr{U}=\\{U_k\\}$ of the set\n $N$. Since $N$ is compact, we can assume that $\\mathscr{U}$ is a\n finite cover. 
We will consider the sum\n \\[\n Z_{t-\\varepsilon}(\\mathscr{U}) = \\sum_k |U_k|^{t-\\varepsilon},\n \\]\n trying to prove that it is uniformly bounded away from 0.\n\n Step 1. There exist an $n_0$ and a finite cover\n $\\mathscr{U}' = \\{U_k'\\}$ of $N$ such that each intersection $U_k'\n \\cap N$ is a finite union of $N \\cap N_{n_0,i_\\ell}$ and\n \\begin{equation} \\label{eqn:ind1}\n Z_{t- \\varepsilon}(\\mathscr{U}) \\geq \\frac 12 Z_{t-\\varepsilon}\n (\\mathscr{U}').\n \\end{equation}\n This can be done by taking $n_0$ so large that the intervals\n $N_{n_0,i}$ are much smaller than all the (finitely many) elements\n of the cover $\\mathscr{U}$, and then perturbing each $U_k$ so that it\n is aligned with the intervals $N_{n_0,i}$.\n\n Step 2. Consider a new cover $\\mathscr{U}''$, obtained in the\n following way. For any $U_k'$, the set $U_k'\\cap N$ must be\n contained in some $N_{n,i}$. There are at most two sets $N_{n+1, j}$\n that intersect $U_k'$ but are not contained in $U_k'$. We replace\n $U_k'$ by at most three open sets: $U_k' \\cap N_{n,i} \\cap N_{n+1,\n j_1}$, $U_k' \\cap N_{n,i} \\cap N_{n+1, j_2}$, and $U_k' \\cap\n N_{n,i} \\setminus \\overline{(N_{n+1, j_1} \\cup N_{n+1, j_2})}$. The\n latter we leave as is; with the former two we repeat the\n procedure. The end result of this procedure is that instead of $U_k'$ we\n have a finite family of open sets $U_\\ell''$, each of which contains\n a finite union of $N_{n',i}$ for some $n'$ and does not intersect\n other $N_{n',j}$ (we will call this the {\\it wholeness\n property}). We will call such $U_\\ell''$ an $n'$-th level element.\n\n Note that in this subfamily there will be at most one element of\n level $n$ and at most two elements of each level $n'$, $n