\\section*{Introduction}\n\nLet $f\\colon(\\C^2,0)\\to (\\C^2,0)$ be a germ of a holomorphic map that fixes the origin $0\\in \\C^2$, and which is finite-to-one near $0$. Suppose that $C$ and $D$ are two germs of holomorphic curves passing through $0$. In this article, we will study the sequence $\\mu(n) := C\\cdot f^n(D)$ of local intersection multiplicities at the origin, for $n\\geq 0$. Specifically, we will address the question: \\emph{how fast can the sequence $\\mu(n)$ grow?} This and related questions were posed and studied by V.\\ ~I.\\ ~Arnold, who conjectured that, if $\\mu(n)<\\infty$ for every $n$, the sequence $\\mu(n)$ grows at most exponentially fast, see \\cite[\\S5]{MR1215971}, \\cite[p.\\ 215]{MR1350971}, and \\cite[problems 1994-49 and 1994-50]{MR2078115}. Arnold proved this conjecture in the case when $f$ is a local biholomorphism and in some cases when the complex derivative $f'(0)$ has exactly one zero eigenvalue \\cite[\\S5]{MR1215971}, but the general case appears to be unknown. A new proof of the conjecture in the case when $f$ is a local biholomorphism has recently been obtained by Seigal and Yakovenko \\cite{SY}.\n\nIn this article, we will show by explicit construction that Arnold's conjecture is false in general, and that in fact the sequence $\\mu(n)$ can grow arbitrarily fast. More precisely, we will prove the following theorem.\n\n\\begin{letthm}\\label{thmA} Let $f\\colon \\C^2\\to \\C^2$ be the polynomial map $f(x,y) = (x^2 - y^4,y^4)$ and $\\nu\\colon \\N\\to \\R$ be any function. Then there exist germs of holomorphic curves $C$ and $D$ through the origin such that the local intersection multiplicities $\\mu(n) = C\\cdot f^n(D)$ are always finite, and such that $\\mu(n)>\\nu(n)$ for infinitely many $n$.\n\\end{letthm}\n\nNote that the complex derivative $f'(0) $ for this map is $0$, so $f$ defines a \\emph{superattracting} germ at $0$ (in general, $f$ is superattracting if $f'(0)$ is nilpotent). The dynamics of superattracting germs is an active area of research in holomorphic dynamics in several variables, see for instance \\cite{MR1275463, MR1759437, MR2339287, MR2904007, Rug2, MR2853790, BEK, GR} and the notes \\cite{Mattias}.\n\nIt has long been known that intersection multiplicities in \\emph{smooth} dynamical systems (local or global) can grow arbitrarily fast \\cite{MR1039340, MR1167716, MR1199715}. On the other hand, it is also known that intersection multiplicities ``generically\" grow at most exponentially (see \\cite{MR1139553} for a precise formulation). Our second theorem is a holomorphic version of this principle.\n\n\\begin{letthm}\\label{thmB} Let $f\\colon (\\C^d, 0)\\to (\\C^d,0)$ be a holomorphic fixed point germ at the origin $0\\in \\C^d$, where $d\\geq 2$, such that $f$ is finite-to-one near $0$. Fix holomorphic function germs $\\psi_1,\\ldots, \\psi_m$ at the origin such that $\\{\\psi_1 = 0\\}\\cap \\cdots\\cap \\{\\psi_m = 0\\} = \\{0\\}$. For each $z\\in \\C^m$, let $D_z$ denote the hypersurface germ $D_z = \\{z_1\\psi_1 + \\cdots + z_m\\psi_m = 0\\}$ through the origin. Fix an integer $k\\in \\{1,\\ldots, d-1\\}$. 
For each $a = (a_1,\\ldots, a_k)\\in (\\C^m)^k$ and $b = (b_1,\\ldots, b_{d-k})\\in (\\C^m)^{d-k}$, let $V_a$ and $W_b$ be the local intersection cycles $V_a := D_{a_1}\\cdot\\ldots \\cdot D_{a_k}$ and $W_b := D_{b_1}\\cdot\\ldots\\cdot D_{b_{d-k}}$. Then there exists a dense set $U\\subseteq (\\C^m)^k\\times (\\C^m)^{d-k}$, given as the complement of a countable union of algebraic subsets, with the property that for all $(a,b)\\in U$, the cycles $V_a$ and $W_b$ are of codimension $k$ and $d-k$, respectively, and the sequence of local intersection multiplicities $\\mu(n) := V_a\\cdot f^n_*W_b$ grows at most exponentially, that is to say, $\\mu(n)\\leq AB^n$ for some $A,B>0$.\n\\end{letthm}\n\nIn the special case when $d = 2$ and $f$ is superattracting, we will in fact be able to prove much more about the sequence $\\mu(n)$.\n\n\\begin{letthm}\\label{thmD} With the same setup and notations as \\hyperref[thmB]{Theorem~\\ref*{thmB}} but with the additional assumptions that $d = 2$ and $f$ is superattracting, we have for each $(a,b)\\in U$ that the sequence of local intersection multiplicities $\\mu(n) := V_a\\cdot f^n_*W_b$ at the origin eventually satisfies an integral linear recursion relation. Moreover, there exist constants $A_1, A_2>0$ such that $A_1c_\\infty^n \\leq \\mu(n)\\leq A_2 c_\\infty^n$ as $n\\to \\infty$, where here $c_\\infty>1$ denotes the asymptotic attraction rate of $f$ (see \\S5 for the definition of $c_\\infty$). If one replaces $f$ by $f^2$, then in fact there is a constant $A>0$ such that $\\mu(n)\\sim Ac_\\infty^n$.\n\\end{letthm}\n\nThe proof of \\hyperref[thmB]{Theorem~\\ref*{thmB}}, which will be given in \\S4, is a rather easy application of Teissier's theory of mixed multiplicities. \\hyperref[thmD]{Theorem~\\ref*{thmD}}, which we will prove in \\S5, relies on recent non-elementary results of the author and M.\\ Ruggiero \\cite{GR} within the subject of dynamics on valuation spaces. Unlike Theorems \\ref{thmB} and \\ref{thmD}, our proof of \\hyperref[thmA]{Theorem~\\ref*{thmA}} requires no high-powered techniques, and lends itself to an easy overview, which we give now.\n\n\nLet $S$ be the space of binary sequences $S = \\{0,1\\}^\\N$, and let $\\sigma\\colon S\\to S$ denote the left-shift map on $S$. For any two sequences $s,t\\in S$, set $M(s,t)$ to be the smallest index $m$ such that $s_m\\neq t_m$, with $M(s,t) = \\infty$ if $s = t$. To prove \\hyperref[thmA]{Theorem~\\ref*{thmA}}, we will construct a family $\\{C_s\\}_{s\\in S}$ of holomorphic curve germs through the origin with the properties that \\begin{enumerate}\n\\item[1.] $f(C_s) = C_{\\sigma(s)}$ for each $s\\in S$, and\n\\item[2.] for any $s,t\\in S$, the local intersection multiplicity $C_s\\cdot C_t$ is $\\asymp 4^{M(s,t)}$.\n\\end{enumerate} The theorem then follows easily from the following simple proposition, the proof of which is left to the reader; one possible construction is sketched below.\n\n\\begin{nonumprop} Let $\\nu\\colon \\N\\to \\R$ be any function. Then there exist sequences $s,t\\in S$ such that $M(s,\\sigma^n(t))$ is finite for all $n\\geq 0$, and such that $M(s,\\sigma^n(t))>\\nu(n)$ for infinitely many $n$.\n\\end{nonumprop}\n\nIn \\S1, we will construct the $C_s$ as \\emph{formal} curves, that is, as curves defined by irreducible formal power series $\\varphi_s\\in \\C\\llbracket x, y\\rrbracket$. The coefficients of the power series $\\varphi_s$ will be determined via a recursive procedure that guarantees properties 1.\\ and 2.\\ are satisfied. 
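\n\nTo sketch one possible construction of sequences as in the proposition: take $s = 000\\cdots$ to be the constant sequence, and let $t = 10^{m_1}10^{m_2}10^{m_3}\\cdots$, where $0^m$ denotes a block of $m$ zeros and the block lengths $m_k$ are chosen recursively so that $m_k>\\nu(p_k+1)$, with $p_k$ denoting the index of the $k$-th $1$ in $t$. Since $t$ contains infinitely many $1$'s, $M(s,\\sigma^n(t))$ is finite for every $n\\geq 0$, while $M(s,\\sigma^{p_k+1}(t)) = m_k>\\nu(p_k+1)$ for every $k$.\n\n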
In \\S2, we will prove that each formal power series $\\varphi_s$ is actually convergent, and hence that the curve germs $C_s$ are \\emph{holomorphic}. It should be noted that the construction of the power series $\\varphi_s$ in \\S1 is purely algebraic, and that if we replace the word \\emph{holomorphic} with \\emph{formal}, \\hyperref[thmA]{Theorem~\\ref*{thmA}} holds when $\\C$ is replaced by any field of characteristic $\\neq 2$. \n\nIn \\S3, we will sketch an alternative, geometric construction of the curves $C_s$, valid when working over $\\C$. The construction realizes the curves as a \\emph{Cantor bouquet} of holomorphic stable manifolds, similar to the construction carried out first in \\cite{MR1808626} for rational maps; see also the related works \\cite{Yam2, MR2195140, MR2307152, Tomoko, MR2629648}. Finally, it is worth mentioning that this counterexample is by no means isolated; one can construct similar Cantor bouquets for many other superattracting germs.\n\n\\subsection*{Acknowledgements} I would like to wholeheartedly thank Mattias Jonsson and Charles Favre for their support and guidance during the course of this project. I would also like to thank the referee for very useful commentary, and especially for pointing out the work of Yamagishi \\cite{MR1808626}. This work was supported by the grants DMS-1001740 and DMS-1045119, as well as the ERC-Starting grant ``Nonarcomp\" no. 307856.\n\n\n\\section{The formal construction of the curves $C_s$}\n\nIn this and the next two sections, we let $f\\colon \\C^2\\to \\C^2$ be the polynomial map $f(x,y) = (x^2 - y^4, y^4)$. We will write $S$ to denote the space of binary sequences $S = \\{0,1\\}^\\N$, and write $\\sigma$ to denote the left-shift map $\\sigma\\colon S\\to S$.\n\nWe now define a family $\\{\\varphi_s\\}_{s\\in S}$ of irreducible formal power series $\\varphi_s\\in \\C\\llbracket x,y\\rrbracket$ of the form \\begin{equation}\\label{eqn1} \\varphi_s(x,y) = x + a_0^sy^2 + a_1^sy^6 + \\cdots + a_n^sy^{2+4n} + \\cdots,\\end{equation} by recursively defining the coefficients $a_n^s$, in the following manner. First, we set $a_0^s = (-1)^{s_0}$; then, assuming $a_0^s,\\ldots, a_n^s$ have been defined for all $s\\in S$, we set\\begin{equation}\\label{recursion}\na_{n+1}^s = \\begin{cases}\\displaystyle-\\frac{1}{2a_0^s}\\sum_{\\substack{i+j = n+1\\\\i,j\\geq 1}} a_i^sa_j^s & \\mbox{ if }4\\nmid n.\\bigskip\\\\\n\\displaystyle-\\frac{a_{n\/4}^{\\sigma(s)}}{2a_0^s} - \\frac{1}{2a_0^s}\\sum_{\\substack{i+j = n+1\\\\i,j\\geq 1}} a_i^sa_j^s & \\mbox{ if }4\\mid n.\\end{cases}\\end{equation} Let $C_s$ denote the formal curve through the origin in $\\C^2$ defined by $\\varphi_s$. As the next proposition shows, the curves $\\{C_s\\}_{s\\in S}$ have the properties discussed in the introduction\n\n\\begin{prop}\\label{formalprop} The formal curves $\\{C_s\\}_{s\\in S}$ satisfy \\begin{enumerate}\n\\item[$1.$] $f(C_s) = C_{\\sigma(s)}$ for all $s\\in S$, and\n\\item[$2.$] the local intersection multiplicity $C_s\\cdot C_t$ is $\\frac{1}{3}(4^{m+1} + 2)$, where $m$ is the smallest index such that $s_m\\neq t_m$.\n\\end{enumerate}\n\\end{prop}\n\\begin{proof} To prove 1., we must show that $\\varphi_s\\mid (\\varphi_{\\sigma(s)}\\circ f)$ in the ring $\\C\\llbracket x,y\\rrbracket$. Indeed, we show that $\\varphi_{\\sigma(s)}\\circ f = (x + a_0^sy^2 + a_1^sy^6 + \\cdots)(x - a_0^sy^2 - a_1^sy^6 - \\cdots)$. 
To see this, observe that \\begin{equation}\\label{eqn3}\n(x + a_0^sy^2 + a_1^sy^6 + \\cdots)(x - a_0^sy^2 - a_1^sy^6 - \\cdots) = x^2 - y^4 - \\sum_{n\\geq 0}\\sum_{\\substack{i+j = n+1\\\\i,j\\geq 0}}a_i^sa_j^sy^{4(n+2)}.\\end{equation} The recursion formula (\\ref{recursion}) then gives that the coefficient of $y^{4(n+2)}$ in this expression is $0$ when $4\\nmid n$ and is $a_{n\/4}^{\\sigma(s)}$ when $4\\mid n$, so the right hand side of (\\ref{eqn3}) is \\[x^2 - y^4 + \\sum_{k\\geq 0} a_k^{\\sigma(s)}y^{4(4k + 2)} = x^2 - y^4 + \\sum_{k\\geq 0}a_k^{\\sigma(s)}y^{8 + 16k} = \\varphi_{\\sigma(s)}\\circ f.\\] This completes the proof of 1. \n\nTo prove 2., we first make the easy observation that the intersection multiplicity $C_s \\cdot C_t$ is precisely the smallest integer $k$ such that the coefficients of $y^k$ in the power series $\\varphi_s$ and $\\varphi_t$ differ. From equation (\\ref{eqn1}), it then follows that $C_s\\cdot C_t = 2 + 4n$, where $n$ is the smallest integer such that $a_n^s\\neq a_n^t$. We will prove 2.\\ by induction on $m\\geq 0$, where $m$ is the smallest index such that $s_m\\neq t_m$. If $m = 0$, then $a_0^s\\neq a_0^t$, and hence $C_s\\cdot C_t = 2$, establishing the base case of the induction. Now assume that $m>0$ is the smallest index such that $s_m\\neq t_m$. Then, by induction, $C_{\\sigma(s)}\\cdot C_{\\sigma(t)} = \\frac{1}{3}(4^m + 2)$, from which it follows that the first index $n$ for which $a_n^{\\sigma(s)}\\neq a_n^{\\sigma(t)}$ is $n = \\frac{1}{3}(4^{m-1}-1)$. Using the recursion formula (\\ref{recursion}), we can then conclude that the first index $n$ such that $a_n^s\\neq a_n^t$ is \\[\nn = 1 + \\frac{4}{3}(4^{m-1} - 1) = \\frac{1}{3}(4^m-1).\\] Thus $C_s\\cdot C_t = 2 + \\frac{4}{3}(4^m-1) = \\frac{1}{3}(4^{m+1}+2),$ completing the induction, and the proof.\n\\end{proof}\n\n\\section{Analyticity}\n\nIn this section, we complete the proof of \\hyperref[thmA]{Theorem~\\ref*{thmA}} by proving that each of the power series $\\varphi_s$ constructed in \\S1 is convergent. Indeed, using very crude estimates, we will prove the following proposition.\n\n\\begin{prop}\\label{analyticity} Let $C = 1\/20$ and $R = 10$. Then $|a_n^s|\\leq CR^n\/n^2$ for each $n\\geq 1$ and each $s\\in S$. In particular, $\\varphi_s$ converges on the set $\\{(x,y)\\in \\C^2 : |y|< 1\/10\\}$.\n\\end{prop}\n\nTo prove the proposition, we will make use of the following lemma.\n\n\\begin{lem}\\label{lemma} Let $n\\geq 1$ be an integer. Then \\[\\sum_{k=1}^n\\frac{1}{k^2(n-k+1)^2}\\leq \\frac{20}{(n+1)^2}.\\]\n\\end{lem}\n\\begin{proof} The symmetry in the terms of the left hand sum implies that \\[\n\\sum_{k=1}^n\\frac{1}{k^2(n-k+1)^2}\\leq 2\\sum_{k=1}^{\\lfloor\\frac{n+1}{2}\\rfloor} \\frac{1}{k^2(n-k+1)^2}.\\] Multiplying both sides of this inequality by $(n+1)^2$ yields \\[\\sum_{k=1}^n\\frac{(n+1)^2}{k^2(n-k+1)^2}\\leq 2\\sum_{k=1}^{\\lfloor\\frac{n+1}{2}\\rfloor} \\frac{(n+1)^2}{k^2(n-k+1)^2} = 2\\sum_{k=1}^{\\lfloor\\frac{n+1}{2}\\rfloor}\\frac{1}{k^2(1 - \\frac{k}{n+1})^2}\\leq 8\\sum_{k=1}^{\\lfloor\\frac{n+1}{2}\\rfloor}\\frac{1}{k^2}<\\frac{8\\pi^2}{6}.\\] Since $8\\pi^2\/6<20$, the proof is complete.\n\\end{proof}\n\n\\begin{proof}[Proof of Proposition \\ref{analyticity}] We will prove the proposition by induction on $n\\geq 1$. When $n = 1$, the recursion formula (\\ref{recursion}) gives $a_1^s = -a_0^{\\sigma(s)}\/2a_0^s =\\pm\\frac{1}{2}$ for each $s\\in S$, and hence $|a_1^s| = \\frac{1}{2} = CR$, establishing the base case of the induction. 
Now assume that the proposition holds for $a_k^s$ when $k\\leq n$. If $4\\nmid n$, then (\\ref{recursion}), the triangle inequality, and \\hyperref[lemma]{Lemma~\\ref*{lemma}} give that\\[\n|a_{n+1}^s|\\leq \\frac{1}{2} \\sum_{k=1}^n\\frac{C^2R^{n+1}}{k^2(n-k+1)^2}\\leq \\frac{20C^2R^{n+1}}{2(n+1)^2} = \\frac{CR^{n+1}}{2(n+1)^2}<\\frac{CR^{n+1}}{(n+1)^2},\\] establishing the proposition in this case. If $4\\mid n$, then (\\ref{recursion}) gives \\begin{equation}\\label{eqn4}|a_{n+1}^s|\\leq \\frac{CR^{n\/4}}{2(n\/4)^2} + \\frac{1}{2}\\sum_{k=1}^n\\frac{C^2R^{n+1}}{k^2(n-k+1)^2}\\leq \\frac{CR^{n\/4}}{2(n\/4)^2} + \\frac{CR^{n+1}}{2(n+1)^2}.\\end{equation}Since $4\\mid n$, and in particular $n\\geq 4$, the inequality $\\frac{n}{4}\\leq (n+1) - 4$ is valid, and hence \\[\\frac{CR^{n\/4}}{2(n\/4)^2} = \\frac{8CR^{n\/4}}{n^2}\\leq \\frac{8CR^{n+1}}{n^2R^4}\\leq\\frac{16CR^{n+1}}{(n+1)^2R^4},\\] where in the last step we have also used that $(n+1)^2\\leq 2n^2$ for $n\\geq 4$. Putting this estimate into (\\ref{eqn4}), we see that \\[\n|a_{n+1}^s|\\leq \\left(\\frac{16}{R^4} + \\frac{1}{2}\\right)\\frac{CR^{n+1}}{(n+1)^2}<\\frac{CR^{n+1}}{(n+1)^2}.\\] This completes the proof.\n\\end{proof}\n\nWe have thus shown that the curves $C_s$ are holomorphic. On the other hand, it should be noted that they cannot all be algebraic. This is because if $C$ and $D$ are algebraic plane curves passing through $0$, then, as a simple consequence of Bezout's theorem, the local intersection multiplicities $C\\cdot f^n_*D$ grow at most exponentially in $n$. We are then led to the following interesting, but possibly difficult, questions.\n\n\\begin{Quest} Which, if any, of the curves $C_s$ just constructed are (local irreducible components of) germs of algebraic curves? More specifically, if a sequence $s\\in S$ is not eventually periodic, is it possible for $C_s$ to be a germ of an algebraic curve?\n\\end{Quest}\n\n\n\\section{Realization as a Cantor bouquet}\n\nIn this section, we reconstruct the curves $C_s$ from \\S1 as a \\emph{Cantor bouquet} of holomorphic stable manifolds using a geometric procedure given first by Yamagishi in \\cite{MR1808626}, see also \\cite{Yam2, MR2195140, MR2307152, Tomoko, MR2629648}. We only sketch this construction here; for details see \\cite[\\S2]{MR1808626}.\n\nLet $\\pi\\colon X\\to \\C^2$ denote the blowup of the origin in $\\C^2$, and let $E$ denote the exceptional divisor of $\\pi$. It is easy to check that the lift $f_X\\colon X\\dashrightarrow X$ of $f$ has exactly one indeterminacy point $q$, given by $z = w = 0$ in the local coordinates $z = x\/y$ and $w = y$. \n\nNow let $\\pi'\\colon X'\\to X$ denote the blowup of the point $q$. It is a straightforward computation to check that $f$ lifts to an (everywhere defined) holomorphic map $F\\colon X'\\to X$. Moreover, with respect to the local coordinates $z,w$ on $X$ and $u = z\/w$, $v = w$ on $X'$, the map $F$ is given simply as $F(u,v) = (u^2 - 1, v^4)$. The preimage $F^{-1}(q)$ then consists of two points, $q_0 = (1,0)$ and $q_1 = (-1,0)$. Because $F$ is not a local biholomorphism near either $q_0$ or $q_1$, we are not in an identical situation to the one considered in \\cite[\\S2]{MR1808626}, but nonetheless the map $(\\pi')^{-1}\\circ F$, defined away from the $q_i$, exhibits local dynamics around the $q_i$ similar to that of the maps studied in \\cite[\\S2]{MR1808626}; specifically, $(\\pi')^{-1}\\circ F$ is contracting in the $v$-direction and expanding in the $u$-direction near the $q_i$. It is this behavior that allows us to consider holomorphic stable manifolds of $q$. 
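\n\nTo make this behavior explicit, note that since $\\pi'(u,v) = (uv, v)$ in the coordinates above, one has $(\\pi')^{-1}\\circ F(u,v) = ((u^2-1)\/v^4, v^4)$ away from $\\{v = 0\\}$; writing $u = \\pm 1 + \\tilde{u}$ near $q_0$ and $q_1$, respectively, the first coordinate becomes $(\\pm 2\\tilde{u} + \\tilde{u}^2)\/v^4$, which expands strongly in the $\\tilde{u}$-direction, while the second coordinate $v\\mapsto v^4$ contracts in the $v$-direction.\n\n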
Let $U$ be a small neighborhood of $q$ in $X$, so that $F^{-1}(U)$ is a disjoint union $U_0\\sqcup U_1$ of neighborhoods of $q_0$ and $q_1$, respectively. Let \\begin{align*}\nB & = \\{p\\in X\\smallsetminus E : f_X^n(p)\\in U \\mbox{ for all } n\\geq 0\\}\\\\\n& = \\{p\\in X\\smallsetminus E : f_X^n(p)\\in \\pi'(U_0\\sqcup U_1)\\mbox{ for all }n\\geq0\\}.\n\\end{align*} Near $q$, the set $B$ will be the union of the (strict transforms in $X$ of the) curves $C_s$ constructed in \\S1. To be precise, if $s\\in S := \\{0,1\\}^\\N$, then the set $\\wt{C}_s := \\{p\\in X\\smallsetminus E : f_X^n(p)\\in \\pi'(U_{s_n})$ for all $n\\geq 0\\}$ is a curve transverse to $E$ at $q$, and the family $\\{\\wt{C}_s\\}_{s\\in S}$ consists of the strict transforms in $X$ of the curves $C_s$ from \\S1. The $\\wt{C}_s$ are \\emph{local stable manifolds} of $q$ in the sense that they form an invariant family for $f_X$, and $f_X^n(p)\\to q$ as $n\\to\\infty$ for all $p\\in B = \\bigcup_s \\wt{C}_s$ near $q$. Moreover, it is clear from this construction that $f(\\wt{C}_s) = \\wt{C}_{\\sigma(s)}$, where $\\sigma\\colon S\\to S$ is the left shift map.\n\nIt is also possible to use the geometry in this construction to compute the local intersection multiplicities of the curves $C_s$ at the origin, rederiving \\hyperref[formalprop]{Proposition~\\ref*{formalprop}(2)}. To see how, first observe that because the $\\wt{C}_s$ are transverse to $E$ at $q$, the projection formula gives \\[\nC_s\\cdot C_t = (\\pi^*C_s \\cdot \\pi^*C_t) = (\\wt{C}_s + E)\\cdot (\\wt{C}_t + E) = \\wt{C}_s\\cdot \\wt{C}_t + 1.\\] Similarly, if $D_s$ denotes the strict transform of $C_s$ in $X'$, then $\\wt{C}_s\\cdot \\wt{C}_t = D_s\\cdot D_t + 1$, and thus $C_s\\cdot C_t = D_s\\cdot D_t + 2$. If $s_0\\neq t_0$, then the germs $D_s$ and $D_t$ lie in different open sets $U_0$ and $U_1$, and hence do not intersect, proving that $C_s\\cdot C_t = 2$, as previously derived. Suppose, on the other hand, that $s_0 = t_0$, say without loss of generality $s_0 = t_0 = 0$. If $F_0$ denotes the restriction $F_0 = F|_{U_0}$, then $D_s = F_0^*\\wt{C}_{\\sigma(s)}$ and $D_t = F_0^*\\wt{C}_{\\sigma(t)}$. Because $F_0$ has local topological degree $4$ at $q_0$, it follows that \\begin{align*}\nC_s\\cdot C_t & = 2 + D_s\\cdot D_t = 2 + F_0^*\\wt{C}_{\\sigma(s)}\\cdot F_0^*\\wt{C}_{\\sigma(t)} = 2 + 4(\\wt{C}_{\\sigma(s)}\\cdot \\wt{C}_{\\sigma(t)})\\\\\n& = 4(C_{\\sigma(s)}\\cdot C_{\\sigma(t)}) - 2\\end{align*} when $s_0= t_0$. Using this identity and the fact that $C_s\\cdot C_t = 2$ when $s_0\\neq t_0$, one easily rederives \\hyperref[formalprop]{Proposition~\\ref*{formalprop}(2)}.\n\nFinally, it is worth pointing out that this argument applies equally well to any other superattracting germ with similar geometry. For instance, using either the methods in this section or in \\S1, one can show that the maps $f(x,y) = (x^p - y^q, y^r)$ where $2\\leq p < r\\leq q$ all have Cantor bouquets of curves.\n\n\\section{Mixed multiplicities and the proof of Theorem B}\n\nIn this section, we give a proof of \\hyperref[thmB]{Theorem~\\ref*{thmB}} using Teissier's theory of \\emph{mixed multiplicities}. In fact, we will prove a slightly stronger theorem, namely \\hyperref[thmC]{Theorem~\\ref*{thmC}} below. For ease of notation, let $R$ denote the formal power series ring $\\C\\llbracket x_1,\\ldots, x_d\\rrbracket$, where $d\\geq 2$ is a fixed integer. Recall that $R$ is a local ring with maximal ideal $\\mf{m} = (x_1,\\ldots, x_d)$. 
An ideal $\\mf{a}$ of $R$ is said to be $\\mf{m}$-primary if $\\mf{a}$ contains some power of $\\mf{m}$, or equivalently if $\\mf{a}$ defines the origin $0\\in \\C^d$.\n\n\\begin{thm}\\label{thmC} Let $f\\colon (\\C^d,0)\\to (\\C^d,0)$ be a holomorphic fixed point germ at the origin $0\\in \\C^d$ that is finite-to-one near $0$. Let $\\mf{a}_1,\\ldots, \\mf{a}_d$ be $\\mf{m}$-primary ideals of $R$. Choose generators $\\psi_1^{(i)},\\ldots, \\psi_{m_i}^{(i)}$ for each of the ideals $\\mf{a}_i$. For every point $z\\in \\C^{m_i}$, let $D_{z}^{(i)}$ denote the formal hypersurface germ $D_{z}^{(i)} = \\{z_{1}\\psi_1^{(i)} + \\cdots + z_{m_i}\\psi_{m_i}^{(i)} = 0\\}$ through the origin. Fix an integer $k\\in \\{1,\\ldots, d-1\\}$. For each point $a = (a_1,\\ldots, a_k)\\in \\C^{m_1}\\times\\cdots\\times \\C^{m_k}$ and $b = (b_1,\\ldots, b_{d-k})\\in \\C^{m_{k+1}}\\times\\cdots\\times \\C^{m_d}$, let $V_a$ and $W_b$ be the local intersection cycles $V_a := D_{a_1}^{(1)}\\cdot\\ldots\\cdot D_{a_k}^{(k)}$ and $W_b := D_{b_1}^{(k+1)}\\cdot\\ldots \\cdot D_{b_{d-k}}^{(d)}$. Then there is a dense subset $U\\subseteq (\\C^{m_1}\\times\\cdots\\times \\C^{m_k})\\times(\\C^{m_{k+1}}\\times\\cdots\\times \\C^{m_d})$, given as the complement of a countable union of algebraic subsets, such that for all $(a,b)\\in U$, the cycles $V_a$ and $W_b$ are of codimension $k$ and $d - k$, respectively, and the sequence of local intersection multiplicities $\\mu(n) = V_a\\cdot f^n_*W_b$ grows at most exponentially.\n\\end{thm}\n\nObserve that \\hyperref[thmC]{Theorem~\\ref*{thmC}} implies \\hyperref[thmB]{Theorem~\\ref*{thmB}} by simply taking each of the ideals $\\mf{a}_i$ of \\hyperref[thmC]{Theorem~\\ref*{thmC}} to be the same ideal $(\\psi_1,\\ldots, \\psi_m)$.\n\nBefore beginning the proof of \\hyperref[thmC]{Theorem~\\ref*{thmC}}, we recall some basic facts from the theory of mixed multiplicities developed by B.\\ Teissier in the 70s \\cite{MR0374482, MR0467800, MR645731, MR708342, MR518229}. A concise and clear overview of the topic that suffices for our purposes can be found in \\cite[\\S1.6.8]{MR2095471}. For us, the relevant results are the following.\n\n\\begin{thm}[Teissier]\\label{teissier1} Let $\\mf{b}_1,\\ldots, \\mf{b}_d$ be $\\mf{m}$-primary ideals of $R$. Fix generators $\\varphi_1^{(i)},\\ldots, \\varphi_{m_i}^{(i)}$ of each of the ideals $\\mf{b}_i$. For a point $z\\in \\C^{m_i}$, let $D_{z}^{(i)} = \\{z_{1}\\varphi_1^{(i)} + \\cdots + z_{m_i}\\varphi_{m_i}^{(i)} = 0\\}$. Then there is an integer $e(\\mf{b}_1;\\cdots; \\mf{b}_d)\\geq 1$ and a nonempty Zariski open subset $U\\subseteq \\C^{m_1}\\times\\cdots\\times \\C^{m_d}$ such that if $ (a_1,\\ldots, a_d)\\in U$, then the hypersurface germs $D_{a_1}^{(1)},\\ldots, D_{a_d}^{(d)}$ intersect properly at the origin, and $e(\\mf{b}_1; \\cdots; \\mf{b}_d)$ is exactly the local intersection multiplicity $ D_{a_1}^{(1)}\\cdot D_{a_2}^{(2)}\\cdot\\ldots\\cdot D_{a_d}^{(d)}$. 
Moreover, one has the inequality \\[\ne(\\mf{b}_1; \\cdots; \\mf{b}_d) \\leq e(\\mf{b}_1)^{1\/d}\\cdots e(\\mf{b}_d)^{1\/d},\\] where here $e(\\mf{b}_i)$ denotes the standard Samuel multiplicity of $\\mf{b}_i$, that is \\[\ne(\\mf{b}_i) := \\lim_{n\\to \\infty} \\frac{d!}{n^d}\\,\\mathrm{length}_R(R\/\\mf{b}_i^{n+1}) \\in \\N.\\] The integer $e(\\mf{b}_1;\\cdots; \\mf{b}_d)$ is called the \\emph{mixed multiplicity} of the ideals $\\mf{b}_i$.\n\\end{thm}\n\nWith these facts at our disposal, we can now prove \\hyperref[thmC]{Theorem~\\ref*{thmC}}.\n\n\\begin{proof}[Proof of {\\hyperref[thmC]{Theorem~\\ref*{thmC}}}] The projection formula says precisely that \\begin{equation}\\label{proj}\n\\mu(n) = V_a\\cdot f^n_*W_b = f^{n*}D_{a_1}^{(1)}\\cdot \\ldots \\cdot f^{n*}D_{a_k}^{(k)}\\cdot D_{b_1}^{(k+1)}\\cdot\\ldots\\cdot D_{b_{d-k}}^{(d)}.\\end{equation} By \\hyperref[teissier1]{Theorem~\\ref*{teissier1}}, for each $n$ there is a nonempty Zariski open subset $U_n\\subseteq (\\C^{m_1}\\times\\cdots\\times \\C^{m_k})\\times(\\C^{m_{k+1}}\\times\\cdots\\times \\C^{m_d})$ such that if $(a,b)\\in U_n$, then the right hand side of equation (\\ref{proj}) is exactly the mixed multiplicity $e(f^{n*}\\mf{a}_1; \\cdots; f^{n*}\\mf{a}_k; \\mf{a}_{k+1}; \\cdots; \\mf{a}_d)$, where $f^{n*}\\mf{a}_i$ is the ideal $(\\psi_1^{(i)}\\circ f^n, \\ldots, \\psi_{m_i}^{(i)}\\circ f^n)$. We point out that $f^{n*}\\mf{a}_i$ is an $\\mf{m}$-primary ideal by our assumption that $f$ is finite-to-one near $0$. Let $U = \\bigcap_n U_n$. For $(a,b)\\in U$, this proves that $\\mu(n) = e(f^{n*}\\mf{a}_1; \\cdots; f^{n*}\\mf{a}_k; \\mf{a}_{k+1}; \\cdots; \\mf{a}_d)$ for all $n$.\n\nThe problem is now reduced to showing that the sequence $e(f^{n*}\\mf{a}_1; \\cdots; f^{n*}\\mf{a}_k; \\mf{a}_{k+1}; \\cdots; \\mf{a}_d)$ grows at most exponentially. Again using \\hyperref[teissier1]{Theorem~\\ref*{teissier1}}, we see \\[\ne(f^{n*}\\mf{a}_1; \\cdots; f^{n*}\\mf{a}_k; \\mf{a}_{k+1}; \\cdots; \\mf{a}_d)\\leq e(f^{n*}\\mf{a}_1)^{1\/d}\\cdots e(f^{n*}\\mf{a}_k)^{1\/d}e(\\mf{a}_{k+1})^{1\/d}\\cdots e(\\mf{a}_d)^{1\/d},\\] so it suffices to show that $e(f^{n*}\\mf{a}_i)$ grows at most exponentially. Let $r\\geq 1$ be an integer such that $\\mf{m}^r\\subseteq \\mf{a}_i$ for each $i$, and let $s\\geq 1$ be an integer such that $\\mf{m}^s\\subseteq f^*\\mf{m}$. Then one has inclusions\\[\nf^{n*}\\mf{a}_i\\supseteq f^{n*}\\mf{m}^r\\supseteq f^{(n-1)*}\\mf{m}^{sr}\\supseteq f^{(n-2)*}\\mf{m}^{s^2r}\\supseteq\\cdots\\supseteq \\mf{m}^{s^nr}.\\] It follows that \\[\ne(f^{n*}\\mf{a}_i) \\leq e(\\mf{m}^{s^nr}) = (s^nr)^d e(\\mf{m}) = r^ds^{dn},\\] which grows at most exponentially, completing the proof. \n\\end{proof}\n\n\\begin{rem} With very little extra work, one can show that the exponential growth rate of the sequence $e(f^{n*}\\mf{a}_i)^{1\/d}$ can be bounded above in the following manner:\\[ \n\\lim_{n\\to \\infty} \\frac{1}{n} \\log e(f^{n*}\\mf{a}_i)^{1\/d} \\leq \\lim_{n\\to \\infty}\\frac{1}{n} \\log\\min\\{s\\geq 1 : \\mf{m}^s\\subseteq f^{n*}\\mf{m}\\}.\\] We mention this because quantities such as that on the right hand side have been recently studied by Majidi-Zolbanin, Miasnikov, and Szpiro \\cite{MZMS}. In their notation, the right hand side of this inequality is exactly $w_h(f)$.\n\\end{rem}\n\n\n\n\n\\section{Valuative dynamics and the proof of Theorem C}\n\nIn this final section, we will prove \\hyperref[thmD]{Theorem~\\ref*{thmD}} using techniques from valuation theory and dynamics on valuation spaces. 
Let us begin by recalling the setup of the theorem. We fix a superattracting holomorphic fixed point germ $f\\colon (\\C^2,0)\\to (\\C^2,0)$, which we assume to be finite-to-one near $0$. Let $\\mf{a} = (\\psi_1,\\ldots, \\psi_m)$ be an $\\mf{m}$-primary ideal in the formal power series ring $\\C\\llbracket x,y\\rrbracket$. For any $z\\in \\C^m$, we set $D_z = \\{z_1\\psi_1 + \\cdots + z_m\\psi_m = 0\\}$. We aim to show that there is a dense subset $U\\subseteq \\C^m\\times \\C^m$, given as the complement of a countable union of algebraic subsets, such that for all $(z,w)\\in U$, the sequence $\\mu(n) := D_z\\cdot f^n_*D_w$ eventually satisfies an integral linear recursion relation. We have already seen in \\S4 that we can find such a set $U$ for which one has $\\mu(n) = e(f^{n*}\\mf{a}; \\mf{a})$ for all $(z,w)\\in U$ and all $n\\geq 1$. Our starting point in this section is the following alternate characterization of mixed multiplicities, which can be found in \\cite[\\S1.6.8]{MR2095471} and \\cite{MR0354663}.\n\n\\begin{thm}\\label{teissier2} Let $\\mf{b}_1$ and $\\mf{b}_2$ be two $\\mf{m}$-primary ideals of $\\C\\llbracket x,y\\rrbracket$. Let $\\pi\\colon X\\to (\\C^2,0)$ be a modification over $0$ which dominates the normalized blowup of each of the ideals $\\mf{b}_i$. Then there exist divisors $Z_1$ and $Z_2$ of $X$, both supported within the exceptional locus $\\pi^{-1}(0)$ of $\\pi$, for which $\\mf{b}_i\\cdot \\mc{O}_X = \\mc{O}_X(Z_i)$. The mixed multiplicity $e(\\mf{b}_1; \\mf{b}_2)$ is given by the intersection number $-(Z_1\\cdot Z_2)$. Finally, the divisors $Z_i$ are \\emph{relatively nef}, which is to say that for every irreducible component $E$ of $\\pi^{-1}(0)$, one has $Z_i\\cdot E \\geq 0$.\n\\end{thm}\n\nHere, and for the rest of the article, a \\emph{modification} $\\pi\\colon X\\to (\\C^2,0)$ over $0$ is defined to be a proper birational morphism $\\pi\\colon X\\to \\C^2$ from a normal variety $X$ to $\\C^2$ that is an isomorphism over $\\C^2\\smallsetminus \\{0\\}$. Such a modification will be called a \\emph{blowup} if it is obtained as a composition of point blowups. If $\\pi\\colon X\\to (\\C^2,0)$ is any modification, and $E_1,\\ldots, E_r$ denote the irreducible components of the exceptional locus $\\pi^{-1}(0)$, we define $\\Div(\\pi)$ to be the vector space of $\\R$-divisors $\\Div(\\pi) = \\bigoplus_{i=1}^r \\R E_i$. The intersection pairing on $\\Div(\\pi)$ is nondegenerate by the Hodge index theorem \\cite[Theorem V.1.9]{MR0463157}, and thus there is a dual basis $\\check{E}_1,\\ldots, \\check{E}_r\\in \\Div(\\pi)$, i.e., a basis satisfying the relation $\\check{E}_i\\cdot E_j = \\delta_{ij}$.\n\nBefore beginning the proof of \\hyperref[thmD]{Theorem~\\ref*{thmD}}, let us give a (very) brief overview of the \\emph{valuative tree} $\\mc{V}$ at the origin $0\\in \\C^2$ of Favre-Jonsson, which is the relevant valuation space for us. A full-on introduction to the valuative tree would take us too far afield here; detailed references can be found in \\cite{MR2097722, Mattias}, and more concise introductions can be found in \\cite{MR2339287, GR}. \n\n\n\nThe valuative tree $\\mc{V}$ at $0\\in \\C^2$ is defined as the set of all semivaluations $\\nu\\colon \\C\\llbracket x,y\\rrbracket\\to \\R\\cup\\{+\\infty\\}$ with the properties that $\\nu|_{\\C^\\times} \\equiv 0$ and $\\min\\{\\nu(x), \\nu(y)\\} = 1$. For us, the most important example is that of a \\emph{divisorial valuation}. 
A valuation $\\nu\\in \\mc{V}$ is divisorial if there is a blowup $\\pi\\colon X\\to (\\C^2,0)$, an irreducible component $E$ of the exceptional locus $\\pi^{-1}(0)$, and a constant $\\lambda\\in \\R$ such that $\\nu(P) = \\lambda\\ord_E(P\\circ \\pi)$ for all $P\\in \\C\\llbracket x,y\\rrbracket$. In this case, the constant $\\lambda$ is exactly $\\lambda = b_E^{-1}$, where $b_E = \\min\\{\\ord_E(x\\circ \\pi), \\ord_E(y\\circ \\pi)\\}\\in \\N$. The constant $b_E$ is sometimes called the \\emph{generic multiplicity} of $E$. If $\\nu$ is a divisorial valuation of the above form, we will denote it simply as $\\nu_E$.\n\nSuppose that $\\pi\\colon X\\to (\\C^2,0)$ is a blowup, and $E_1,\\ldots, E_r$ are the irreducible components of $\\pi^{-1}(0)$. Then any divisorial valuation $\\nu\\in \\mc{V}$ defines a linear functional $\\nu\\colon\\Div(\\pi)\\to \\R$; essentially, $\\nu(E_i)$ is the $\\nu$-valuation of a local defining equation of $E_i$ at the \\emph{center} of $\\nu$ in $\\pi$, see \\cite[\\S1.2]{favre:holoselfmapssingratsurf} for details. Since the intersection pairing is nondegenerate, it follows that there is a divisor $Z_{\\nu, \\pi}\\in \\Div(\\pi)$ such that $\\nu(D) = Z_{\\nu, \\pi}\\cdot D$ for all $D\\in \\Div(\\pi)$. If $\\pi'\\colon X'\\to (\\C^2, 0)$ is a blowup dominating $\\pi$, say $\\eta\\colon X'\\to X$ is such that $\\pi' = \\pi\\eta$, then $Z_{\\nu, \\pi} = \\eta_*Z_{\\nu, \\pi'}$; if moreover $\\nu = \\nu_{E_i}$ for one of the components $E_i$ of $\\pi^{-1}(0)$, then also $Z_{\\nu, \\pi'} = \\eta^*Z_{\\nu, \\pi}$. Finally, if $\\nu = b_{E_i}^{-1}\\ord_{E_i}$ for one of the exceptional components $E_i$ of $\\pi^{-1}(0)$, then one easily checks that $Z_{\\nu, \\pi} = b_{E_i}^{-1}\\check{E}_i$.\n\nThe valuative tree $\\mc{V}$ has a natural topology and a natural poset structure $(\\mc{V}, \\leq)$. With respect to these structures, $\\mc{V}$ is a \\emph{rooted tree} (see \\cite[\\S2]{Mattias} for a precise definition). For any two elements $\\nu_1,\\nu_2\\in \\mc{V}$, there is a unique greatest element $\\nu_1\\wedge \\nu_2$ that is both $\\leq \\nu_1$ and $\\leq \\nu_2$. In addition, there is defined on $\\mc{V}$ an increasing function $\\alpha\\colon \\mc{V}\\to [1,+\\infty]$, called the \\emph{skewness} function, which is finite on divisorial valuations and has the following geometric property: if $\\pi\\colon X\\to (\\C^2,0)$ is a blowup and $E_1$ and $E_2$ are two irreducible components of the exceptional locus $\\pi^{-1}(0)$, then \n\\begin{equation}\\label{Beqn3}\\alpha(\\nu_{E_1}\\wedge \\nu_{E_2}) = -(Z_{\\nu_{E_1}, \\pi}\\cdot Z_{\\nu_{E_2}, \\pi}).\\end{equation}\n\nFinally, we note that $f$ induces in a natural way a dynamical system $f_\\bullet \\colon \\mc{V}\\to \\mc{V}$. Indeed, if $\\nu\\in \\mc{V}$, then we obtain a semivaluation $f_*\\nu$ defined by $(f_*\\nu)(P) = \\nu(P\\circ f)$. In general the value $c(f,\\nu):= \\min\\{(f_*\\nu)(x), (f_*\\nu)(y)\\}$ is greater than $1$, so $f_*\\nu$ is not an element of $\\mc{V}$, but by appropriately normalizing we obtain a semivaluation $f_\\bullet\\nu = c(f,\\nu)^{-1}f_*\\nu\\in \\mc{V}$. The quantity $c(f,\\nu)$ is called the \\emph{attraction rate} of $f$ along $\\nu$, and is the primary object of study in the paper \\cite{GR}, the results of which we will use shortly.\n\n\\begin{proof}[Proof of {\\hyperref[thmD]{Theorem~\\ref*{thmD}}}] We begin the proof by deriving an alternate expression for the mixed multiplicity $e(f^*\\mf{a}; \\mf{a})$ using the valuative language just discussed. 
Let $\\pi_1\\colon X_1\\to (\\C^2,0)$ and $\\pi_2\\colon X_2\\to (\\C^2,0)$ be the normalized blowups of the ideals $f^*\\mf{a}$ and $\\mf{a}$, respectively, and let $Z_i$ be the divisors on $X_i$ for $i = 1,2$ such that $f^*\\mf{a}\\cdot \\mc{O}_{X_1} = \\mc{O}_{X_1}(Z_1)$ and $\\mf{a}\\cdot \\mc{O}_{X_2} = \\mc{O}_{X_2}(Z_2)$. Let $E_1,\\ldots, E_r$ be the irreducible components of the exceptional locus $\\pi_2^{-1}(0)$ of $\\pi_2$. For each $i = 1,\\ldots, r$, let $a_i\\in \\Z$ be the integer $Z_2\\cdot E_i$, so that $Z_2$ can be written $Z_2 = \\sum_{i=1}^r a_i\\check{E_i}$. By \\hyperref[teissier2]{Theorem~\\ref*{teissier2}}, these integers $a_i$ are nonnegative.\n\nAs a consequence of Hironaka's theorem on resolution of singularities, it is possible to find blowups $\\eta_1\\colon Y_1\\to (\\C^2,0)$ and $\\eta_2\\colon Y_2\\to (\\C^2,0)$ over $0$ with the following properties:\\begin{enumerate}\n\\item[1.] Each $\\eta_i$ is a \\emph{log resolution} of both of the ideals $\\mf{a}$ and $f^*\\mf{a}$, that is to say, the ideals $\\mf{a}\\cdot \\mc{O}_{Y_i}$ and $f^*\\mf{a}\\cdot \\mc{O}_{Y_i}$ are locally principal. In particular, the $\\eta_i$ dominate both $\\pi_1$ and $\\pi_2$, so there exist proper birational morphisms $\\sigma_i\\colon Y_1\\to X_i$ and $\\gamma_i\\colon Y_2\\to X_i$ for $i = 1,2$ such that one has $\\eta_1 = \\pi_i\\sigma_i$ and $\\eta_2 = \\pi_i\\gamma_i$.\n\\item[2.] The map $f$ lifts to a holomorphic map $F \\colon Y_1\\to Y_2$, or in other words, $F = \\eta_2^{-1}f\\eta_1$ has no indeterminacy points.\n\\item[3.] If $\\wt{E}_1,\\ldots, \\wt{E}_r$ denote the strict transforms of the $E_i$ in $Y_1$ under $\\sigma_2$, then $F$ does not contract any of the $\\wt{E}_i$ to a point.\n\\end{enumerate} The ideal $f^*\\mf{a}\\cdot \\mc{O}_{Y_1}$ is obtained on the one hand by first pulling back the ideal $\\mf{a}$ by $f$ to get $f^*\\mf{a}$, and then by pulling back $f^*\\mf{a}$ by $\\eta_1$ to get $f^*\\mf{a}\\cdot \\mc{O}_{Y_1}$. On the other hand, because $f\\eta_1 = \\eta_2F$, we may also obtain $f^*\\mf{a}\\cdot \\mc{O}_{Y_1}$ by first pulling back $\\mf{a}$ by $\\eta_2$ to get $\\mf{a}\\cdot \\mc{O}_{Y_2} = \\mc{O}_{Y_2}(\\gamma_2^*Z_2)$, and then pulling this back by $F$ to get $f^*\\mf{a}\\cdot \\mc{O}_{Y_1} = \\mc{O}_{Y_1}(F^*\\gamma_2^*Z_2)$. 
Using this and the fact that $\\mf{a}\\cdot \\mc{O}_{Y_1} = \\mc{O}_{Y_1}(\\sigma_2^*Z_2)$, \\hyperref[teissier2]{Theorem~\\ref*{teissier2}} implies that \\[\ne(f^*\\mf{a};\\mf{a}) = -( \\sigma_2^*Z_2\\cdot F^*\\gamma_2^*Z_2) = -(F_*\\sigma_2^*Z_2\\cdot \\gamma_2^*Z_2).\\] Using our previously derived expression $Z_2 = \\sum_{i=1}^r a_i\\check{E}_i$, we can express this as \\begin{equation}\\label{Beqn1}\ne(f^*\\mf{a}; \\mf{a}) = -\\sum_{i,j = 1}^r a_ia_j(F_*\\sigma_2^*\\check{E}_i \\cdot \\gamma_2^*\\check{E}_j) = -\\sum_{i,j=1}^r a_ia_jb_{E_i}b_{E_j}(F_*Z_{\\nu_{E_i},\\eta_1} \\cdot Z_{\\nu_{E_j},\\eta_2}).\\end{equation} Because of our assumption that $F$ does not contract any of the $\\wt{E}_i$ to a point, we may now apply \\cite[Lemma 1.10]{favre:holoselfmapssingratsurf} to conclude that \\begin{align*}\ne(f^*\\mf{a}; \\mf{a}) & = -\\sum_{i,j=1}^r a_ia_jb_{E_i}b_{E_j}c(f, \\nu_{E_i})(Z_{f_\\bullet\\nu_{E_i}, \\eta_2}\\cdot Z_{\\nu_{E_j}, \\eta_2})\\\\\n& = \\sum_{i,j =1}^r a_ia_jb_{E_i}b_{E_j}\\alpha(f_\\bullet\\nu_{E_i}\\wedge \\nu_{E_j})c(f, \\nu_{E_i}).\n\\end{align*} Of course, this identity is equally valid for any iterate of $f$, leading us to our final equation for the local intersection multiplicities $\\mu(n)$. Namely, if $(z,w)\\in U$, then \\begin{equation}\\label{Beqn4}\n\\mu(n) = \\sum_{i,j = 1}^r a_ia_jb_{E_i}b_{E_j}\\alpha(f_\\bullet^n\\nu_{E_i}\\wedge \\nu_{E_j})c(f^n, \\nu_{E_i})\\,\\,\\,\\,\\,\\,\\mbox{ for all $n\\geq 1$}.\\end{equation}\n\nTo prove that $\\mu(n)$ eventually satisfies an integral linear recursion relation, it therefore suffices to show that for each $i$ and $j$, the sequence $\\alpha(f^n_\\bullet\\nu_{E_i}\\wedge \\nu_{E_j})c(f^n, \\nu_{E_i})$ eventually satisfies an integral linear recursion relation. More generally, we will prove the following: if $\\nu\\in \\mc{V}$ is any divisorial valuation, then $\\alpha(f^n_\\bullet\\nu\\wedge \\nu_{E_j})c(f^n, \\nu)$ eventually satisfies an integral linear recursion relation. \n\nWe first observe that we may without loss of generality prove this for any iterate $f^p$ of $f$. Indeed, the sequence $\\alpha(f^n_\\bullet\\nu\\wedge \\nu_{E_j})c(f^n,\\nu)$ is obtained by joining the sequences \\[\n\\{\\alpha((f^{np}_\\bullet f^k_\\bullet\\nu)\\wedge \\nu_{E_j})c(f^{np}, f^k_\\bullet\\nu)\\}_{n=1}^\\infty\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,k = 0,1,\\ldots, p-1\\] in alternating fashion, after multiplying the $k$-th of them by the constant $c(f^k,\\nu)$, since $c(f^{np+k},\\nu) = c(f^{np}, f^k_\\bullet\\nu)c(f^k,\\nu)$; such constant factors do not affect whether a sequence eventually satisfies an integral linear recursion relation. If each of these sequences eventually satisfies an integral linear recursion relation, then so does the combined sequence.\n\nLet us immediately replace $f$ by $f^2$. By doing so, we may apply \\cite[Theorem 3.1]{GR} to conclude that there is a fixed point $\\nu_\\star\\in \\mc{V}$ for $f_\\bullet$ such that $f^n_\\bullet\\nu\\to \\nu_\\star$ in a strong sense as $n\\to \\infty$. In particular, $\\alpha(f^n_\\bullet\\nu\\wedge \\nu_{E_j})\\to \\alpha(\\nu_\\star\\wedge \\nu_{E_j})<+\\infty$. If the sequence $\\alpha(f^n_\\bullet\\nu\\wedge \\nu_{E_j})$ is eventually constant, then we are done by \\cite[Theorem 6.1]{GR}, which says that $c(f^n,\\nu)$ eventually satisfies an integral linear recursion relation.\n\n\n\nWe may assume, therefore, that the sequence $\\alpha(f^n_\\bullet\\nu\\wedge \\nu_{E_j})$ is not eventually constant; this implies, in particular, that $f^n_\\bullet \\nu \\leq \\nu_{E_j}$ for infinitely many $n$. Such a condition imposes strong restrictions on the possible asymptotic behavior of the sequence $f^n_\\bullet\\nu$. 
Indeed, the work done in \\cite{GR} shows that in this case, $\\nu_\\star\\leq \\nu_{E_j}$, and $f^n_\\bullet\\nu\\to \\nu_\\star$ along a periodic cycle of tangent directions $\\vec{v}_1,\\ldots, \\vec{v}_p$ at $\\nu_\\star$ (see \\cite[\\S2]{Mattias} for the notion of tangent directions at a valuation). Replacing $f$ by $f^p$, we can assume without loss of generality that $f^n_\\bullet \\nu\\to \\nu_\\star$ along a fixed tangent direction $\\vec{v}$.\n\n\n\nIn this situation, it is a rather non-trivial fact (see \\cite[\\S5.2]{MR2339287}) that one can find a blowup $\\pi\\colon X\\to (\\C^2,0)$ and irreducible components $V, W$ of $\\pi^{-1}(0)$ which intersect transversely such that the following hold: \\begin{enumerate}\n\\item[1.] The interval $I:= [\\nu_{V}, \\nu_{W}]\\subset \\mc{V}$ is invariant for $f_\\bullet$, that is, $f_\\bullet (I)\\subseteq I$.\n\\item[2.] The interval $I$ contains $\\nu_\\star$ and intersects the tangent direction $\\vec{v}$.\n\\item[3.] The divisorial valuation $\\nu_{E_j}$ corresponds to an irreducible component of $\\pi^{-1}(0)$.\n\\end{enumerate} We now proceed similarly to the proof of \\cite[Lemma 6.2]{GR}. For any valuation $\\lambda\\in I$ and any $n\\geq 1$, one has that $Z_{f^{n}_* \\lambda, \\pi} = r_n\\check{V} + s_n\\check{W}$ for some constants $r_n, s_n\\geq 1$, and so \\[\nc(f^{n}, \\lambda)\\alpha(f^{n}_\\bullet \\lambda\\wedge \\nu_{E_j}) = -(Z_{f^{n}_*\\lambda, \\pi}\\cdot Z_{\\nu_{E_j}, \\pi}) = -r_n(\\check{V}\\cdot Z_{\\nu_{E_j}, \\pi}) - s_n(\\check{W}\\cdot Z_{\\nu_{E_j}, \\pi}).\\] Just as in the proof of \\cite[Lemma 6.2]{GR}, there is a $2\\times 2$ integer matrix $M$ for which one has the identity $(r_n, s_n) = (r_{n-1}, s_{n-1})M$. We conclude that $c(f^{n}, \\lambda)\\alpha(f^{n}_\\bullet\\lambda\\wedge \\nu_{E_j})$ satisfies the integral linear recursion relation with characteristic polynomial $t^2 - \\mathrm{tr}(M)t + \\det(M)$. If $N\\geq 1$ is large enough that $f^N_\\bullet\\nu\\in I$, this proves that the sequence $\\{\\alpha(f^n_\\bullet\\nu\\wedge \\nu_{E_j})c(f^n, \\nu)\\}_{n = N}^\\infty$ satisfies an integral linear recursion relation, completing the proof.\n\\end{proof}\n\n\\begin{rem} One consequence of the study of the sequences $c(f^n, \\nu)$ in \\cite{GR} is that for any divisorial valuation $\\nu$, there is a constant $B = B(\\nu)$ such that $c(f^n,\\nu)\\sim Bc_\\infty^n$, where $c_\\infty>1$ is the \\emph{asymptotic attraction rate} of $f$, that is, \\[\nc_\\infty := \\lim_{n\\to \\infty}\\left(\\max\\{s\\geq 1 : f^{n*}\\mf{m}\\subseteq \\mf{m}^s\\}\\right)^{1\/n}.\\] Since one has $\\alpha(f^n_\\bullet\\nu_{E_i}\\wedge \\nu_{E_j}) \\leq \\alpha(\\nu_{E_j})<+\\infty$ for all $n$, we can conclude from equation (\\ref{Beqn4}) that there exist constants $A_1, A_2>0$ such that $A_1c_\\infty^n\\leq \\mu(n)\\leq A_2c_\\infty^n$ for all $n$. Moreover, as we saw in the proof of \\hyperref[thmD]{Theorem~\\ref*{thmD}}, if we replace $f$ by $f^2$, then $f^n_\\bullet \\nu_{E_i}\\to \\nu_\\star$ for some $\\nu_\\star\\in \\mc{V}$, and thus $\\alpha(f^n_\\bullet\\nu_{E_i}\\wedge \\nu_{E_j})\\to \\alpha(\\nu_\\star\\wedge\\nu_{E_j})<+\\infty$. Therefore in this case equation (\\ref{Beqn4}) implies that $\\mu(n)\\sim Ac_\\infty^n$ for some constant $A>0$.\n\\end{rem}\n\n\\begin{rem} For any irreducible curve germ $C$ through the origin in $\\C^2$, there is a corresponding \\emph{curve valuation} $\\nu_C\\in \\mc{V}$, and one has $f_\\bullet \\nu_C = \\nu_{f(C)}$. 
Thus the dynamics of $f$ on curves $C$ through the origin is reflected in the dynamics of $f_\\bullet$ on curve valuations $\\nu_C$. With this in mind, it should not come as a surprise that one can study Arnold's conjecture by examining the dynamics of $f_\\bullet$ on $\\mc{V}$. In brief, the outline of our proof of \\hyperref[thmD]{Theorem~\\ref*{thmD}} can be expressed as follows: for ``general\" enough curves $C$, the dynamics of $f_\\bullet$ on the curve valuations $\\nu_C$ is reflected in the dynamics of $f_\\bullet$ on certain associated divisorial valuations. The dynamics of $f_\\bullet$ on divisorial valuations is very regular: \\cite[Theorem 3.1]{GR} says that there is a set $K$ of fixed valuations of $f_\\bullet$ that attract all $\\nu\\in \\mc{V}$ with $\\alpha(\\nu)<+\\infty$, which includes all divisorial valuations.\n\nTo emphasize this point further, when $f(x,y) = (x^2-y^4, y^4)$ is the example studied in \\S\\S1-3, the attracting set $K\\subset \\mc{V}$ consists of a single point $\\nu_\\star\\in \\mc{V}$, and the points $\\nu\\in \\mc{V}$ that are \\emph{not} attracted to $\\nu_\\star$ are exactly the curve valuations $\\nu_{C_s}$ associated to the curves $C_s$ constructed in \\S1. That is, the curve valuations $\\nu_{C_s}$ are exactly the points of $\\mc{V}$ where the dynamics of $f_\\bullet$ is \\emph{not} regular.\n\nIn the case when $f\\colon (\\C^2,0)\\to (\\C^2,0)$ is a finite germ such that the derivative $f'(0)$ has exactly one nonzero eigenvalue, M.\\ Ruggiero \\cite{MR2904007} has studied the dynamics of $f_\\bullet\\colon \\mc{V}\\to \\mc{V}$, and found that there exist fixed curve valuations $\\nu_1,\\nu_2\\in \\mc{V}$ such that $f^n_\\bullet\\nu\\to \\nu_1$ as $n\\to \\infty$ for all $\\nu\\in \\mc{V}\\smallsetminus\\{\\nu_2\\}$. From this one can conclude that the local intersection multiplicities $\\mu(n) = C\\cdot f^n(D)$ grow at most exponentially fast for all curve germs $C,D$ through $0$, provided these numbers are always finite, confirming \\cite[Theorem 4]{MR1215971}.\n\\end{rem}\n\n\n\n\n\\begin{comment}\n\n\\section{D}\n\nIn this final section, we will prove \\hyperref[thmB]{Theorem~\\ref*{thmB}}. Let us start by recalling the setup of the theorem. Let $f\\colon (\\C^2, 0)\\to (\\C^2,0)$ be a holomorphic superattracting fixed point germ at the origin $0\\in \\C^2$, which we assume to be finite-to-one near $0$. Let $\\psi_1,\\ldots, \\psi_m$ be germs of holomorphic functions at $0$ such that $\\{\\psi_1 = 0\\}\\cap \\cdots\\cap \\{\\psi_m = 0\\} = \\{0\\}$, or, in other words, such that the ideal $\\mf{a} = (\\psi_1,\\ldots, \\psi_m)\\subseteq \\C\\llbracket x, y\\rrbracket$ is primary for the maximal ideal $\\mf{m}$ of $\\C\\llbracket x,y\\rrbracket$. We will consider the holomorphic curve germs $D_w$ through the origin of the form $D_w := \\{w_1\\psi_1 + \\cdots + w_m\\psi_m = 0\\}$, where $w\\in \\C^m$. We aim to show that if $(z,w)\\in \\C^m\\times \\C^m$ is chosen outside of a certain countable union of proper algebraic subsets, then the sequence $\\mu(n) := D_z\\cdot f^n(D_w)$ of local intersection multiplicities grows at most exponentially in $n$.\n\nBefore beginning the proof, we first establish some notation. Suppose $\\pi\\colon X\\to (\\C^2,0)$ is a \\emph{modification}, by which we mean a proper birational morphism $\\pi$ from a normal variety $X$ to $\\C^2$ that is an isomorphism over $\\C^2\\smallsetminus\\{0\\}$. 
If the \\emph{exceptional locus} $\\pi^{-1}(0)$ of $\\pi$ has irreducible components $E_1,\\ldots, E_r$, then the Hodge index theorem \\cite[Theorem V.1.9]{MR0463157} gives that the intersection product restricted to the vector space $\\bigoplus_i \\R E_i$ is negative definite. In particular, there exists a dual basis $\\check{E}_1,\\ldots, \\check{E}_r\\in \\bigoplus_i \\R E_i$, that is, a basis satisfying the relation $\\check{E}_i\\cdot \\check{E}_j = \\delta_{ij}$.\n\nThe proof of \\hyperref[thmB]{Theorem~\\ref*{thmB}} will use some basic facts from the theory of \\emph{mixed multiplicities}, developed by B.\\ Teissier in the 70s [\\color{red}CITES\\color{black}]. A concise introduction to the topic that suffices for our purposes can be found in [\\color{red}CITES, \\S1.6.8\\color{black}]. We collect the relevant results for us in the following theorem.\n\n\\begin{thm}[Teissier]\\label{teissier} Let $\\mf{b}_1 = (g_1,\\ldots, g_r)$ and $\\mf{b}_2 = (h_1,\\ldots, h_s)$ be two $\\mf{m}$-primary ideals of the ring $\\C\\llbracket x,y\\rrbracket$. Then there is an integer $e(\\mf{b}_1; \\mf{b}_2)\\in \\N$, called the \\emph{mixed multiplicity} of $\\mf{b}_1$ and $\\mf{b}_2$, which can be computed in the following two ways:\\begin{enumerate}\n\\item[$1.$] Let $A_\\alpha$ be the curve $A_\\alpha = \\{\\alpha_1g_1 + \\cdots + \\alpha_rg_r = 0\\}$ for $\\alpha\\in \\C^r$ and $B_\\beta$ be the curve $B_\\beta = \\{\\beta_1h_1 + \\cdots + \\beta_sh_s = 0\\}$ for $\\beta\\in \\C^s$. Then we have for $(\\alpha,\\beta)\\in \\C^r\\times \\C^s$ outside of a certain proper algebraic subset that the mixed multiplicity $e(\\mf{b}_1; \\mf{b}_2)$ is exactly the local intersection number $A_\\alpha\\cdot B_\\beta$ at the origin.\n\\item[$2.$] Let $\\pi\\colon X\\to (\\C^2,0)$ be any modification that dominates the blowups of the ideals $\\mf{b}_1$ and $\\mf{b}_2$. Then $\\mf{b}_1\\cdot\\mc{O}_X = \\mc{O}_X(Z_1)$ and $\\mf{b}_2\\cdot\\mc{O}_X = \\mc{O}_X(Z_2)$ for some divisors $Z_1$ and $Z_2$ supported on the exceptional locus $\\pi^{-1}(0)$ of $\\pi$, and the mixed multiplicity $e(\\mf{b}_1; \\mf{b}_2)$ is given by the intersection number $-(Z_1\\cdot Z_2)$.\n\\end{enumerate} Moreover, the divisors $Z_1$ and $Z_2$ in statement $2.$ are \\emph{relatively nef} in the sense that for any irreducible component $E$ of $\\pi^{-1}(0)$, one has $Z_i\\cdot E \\geq 0$ for $i = 1,2$.\n\\end{thm}\n\nWe are now ready to prove \\hyperref[thmB]{Theorem~\\ref*{thmB}}.\n\n\\begin{proof}[Proof of {\\hyperref[thmB]{Theorem~\\ref*{thmB}}}] Let $z,w\\in \\C^m$. We begin by estimating the local intersection number $D_z\\cdot f(D_w)$ at the origin. Certainly one has the inequality $D_z\\cdot f(D_w) \\leq D_z\\cdot f_*D_w = f^*D_z\\cdot D_w$, so we will proceed by computing $f^*D_z\\cdot D_w$. 
Observe that $f^*D_z$ is the curve \\[\nf^*D_z = \\{z_1(\\psi_1\\circ f) + \\cdots + z_m(\\psi_m\\circ f) = 0\\}.\\] Applying \\hyperref[teissier]{Theorem~\\ref*{teissier}(1)} to the $\\mf{m}$-primary ideals $\\mf{a}$ and $f^*\\mf{a} = (\\psi_1\\circ f, \\ldots, \\psi_m\\circ f)$, we conclude that for $(z,w)\\in \\C^m\\times \\C^m$ chosen outside of a certain proper algebraic subset, the local intersection number $f^*D_z\\cdot D_w$ is exactly the mixed multiplicity $e(\\mf{a}; f^*\\mf{a})$.\n\nLet $\\pi_1\\colon X_1\\to (\\C^2,0)$ and $\\pi_2\\colon X_2\\to (\\C^2,0)$ be the normalized blowups of the ideals $f^*\\mf{a}$ and $\\mf{a}$, respectively, and let $Z_i$ be the divisors on $X_i$ for $i = 1,2$ such that $f^*\\mf{a}\\cdot \\mc{O}_{X_1} = \\mc{O}_{X_1}(Z_1)$ and $\\mf{a}\\cdot \\mc{O}_{X_2} = \\mc{O}_{X_2}(Z_2)$. Let $E_1,\\ldots, E_r$ be the irreducible components of the exceptional locus $\\pi_2^{-1}(0)$ of $\\pi_2$. For each $i = 1,\\ldots, r$, let $a_i\\in \\Z$ be the integer $Z_2\\cdot E_i$, so that $Z_2$ can be written $Z_2 = \\sum_{i=1}^r a_i\\check{E_i}$. By \\hyperref[teissier]{Theorem~\\ref*{teissier}}, these integers $a_i$ are nonnegative.\n\nAs a consequence of Hironaka's theorem on resolution of singularities, it is possible to find modifications $\\eta_1\\colon Y_1\\to (\\C^2,0)$ and $\\eta_2\\colon Y_2\\to (\\C^2,0)$ with the following properties:\\begin{enumerate}\n\\item[1.] Each $\\eta_i$ is a \\emph{log resolution} of both of the ideals $\\mf{a}$ and $f^*\\mf{a}$. In other words, each $\\eta_i$ is a composition of point blowups over the origin, and the ideals $\\mf{a}\\cdot \\mc{O}_{Y_i}$ and $f^*\\mf{a}\\cdot \\mc{O}_{Y_i}$ are locally principal. In particular, the $\\eta_i$ dominate both $\\pi_1$ and $\\pi_2$, so there exist proper birational morphisms $\\sigma_i\\colon Y_1\\to X_i$ and $\\gamma_i\\colon Y_2\\to X_i$ for $i = 1,2$ such that one has $\\eta_1 = \\pi_i\\sigma_i$ and $\\eta_2 = \\pi_i\\gamma_i$.\n\\item[2.] The map $f$ lifts to a holomorphic map $F \\colon Y_1\\to Y_2$, or in other words, $F = \\eta_2^{-1}f\\eta_1$ has no indeterminacy points.\n\\item[3.] If $\\wt{E}_1,\\ldots, \\wt{E}_r$ denote the strict transforms of the $E_i$ in $Y_1$ under $\\sigma_2$, then $F$ does not contract any of the $\\wt{E}_i$ to a point.\n\\end{enumerate} We will now use these modifications and \\hyperref[teissier]{Theorem~\\ref*{teissier}(2)} to obtain a convenient formula for the mixed multiplicity $e(\\mf{a}; f^*\\mf{a})$. \n\nThe ideal $f^*\\mf{a}\\cdot \\mc{O}_{Y_1}$ is obtained on the one hand by first pulling back the ideal $\\mf{a}$ by $f$ to get $f^*\\mf{a}$, and then by pulling back $f^*\\mf{a}$ by $\\eta_1$ to get $f^*\\mf{a}\\cdot \\mc{O}_{Y_1}$. On the other hand, because $f\\eta_1 = \\eta_2F$, we may also obtain $f^*\\mf{a}\\cdot \\mc{O}_{Y_1}$ by first pulling back $\\mf{a}$ by $\\eta_2$ to get $\\mf{a}\\cdot \\mc{O}_{Y_2} = \\mc{O}_{Y_2}(\\gamma_2^*Z_2)$, and then pulling this back by $F$ to get $f^*\\mf{a}\\cdot \\mc{O}_{Y_1} = \\mc{O}_{Y_1}(F^*\\gamma_2^*Z_2)$. 
Since $\\mf{a}\\cdot \\mc{O}_{Y_1} = \\mc{O}_{Y_1}(\\sigma_2^*Z_2)$, we conclude that the mixed multiplicity $e(\\mf{a};f^*\\mf{a})$ is exactly the intersection number \\[\ne(\\mf{a};f^*\\mf{a}) = -( \\sigma_2^*Z_2\\cdot F^*\\gamma_2^*Z_2) = -(F_*\\sigma_2^*Z_2\\cdot \\gamma_2^*Z_2).\\] Using our previously derived expression $Z_2 = \\sum_{i=1}^r a_i\\check{E}_i$, we can express this as \\begin{equation}\\label{Beqn1}\ne(\\mf{a}; f^*\\mf{a}) = -\\sum_{i,j = 1}^r a_ia_j(F_*\\sigma_2^*\\check{E}_i \\cdot \\gamma_2^*\\check{E}_j).\\end{equation} \n\nLet $G_1,\\ldots, G_s\\subset Y_2$ be the irreducible components of the exceptional locus $\\eta_2^{-1}(0)$ of $\\eta_2$. Recall that $F$ does not contract any of the $\\wt{E}_i$ to a point, and thus for each $i=1,\\ldots, r$, there is an index $\\theta(i)\\in \\{1,\\ldots, s\\}$ such that $F$ maps $\\wt{E}_i$ onto $G_{\\theta(i)}$. If $k\\in \\{1,\\ldots, s\\}$ is any index, then the coefficient of $\\wt{E}_i$ in the divisor $F^*G_k$ is $0$ if $k\\neq \\theta(i)$, and is some positive integer $\\lambda_i$ if $k = \\theta(i)$. It follows that \\[\nF_*\\sigma_2^*\\check{E}_i\\cdot G_k = \\check{E}_i\\cdot \\sigma_{2*}F^*G_k = \\begin{cases} \\lambda_i & k = \\theta(i).\\\\ 0 & k\\neq \\theta(i).\\end{cases}\\] In other words, $F_*\\sigma_2^*\\check{E}_i = \\lambda_i\\check{G}_{\\theta(i)}$. Putting this into (\\ref{Beqn1}) then yields \\begin{equation}\\label{Beqn2} e(\\mf{a}; f^*\\mf{a}) = -\\sum_{i,j=1}^r a_ia_j\\lambda_i (\\check{G}_{\\theta(i)}\\cdot \\gamma_2^*\\check{E}_j).\n\\end{equation} \n\nExpressions such as (\\ref{Beqn2}) are common in the study of dynamics on valuation spaces [\\color{red}CITES\\color{black}], and in order to proceed we will translate (\\ref{Beqn2}) into this language. The relevant valuation space here is the \\emph{valuative tree} $\\mc{V}$ at $0\\in \\C^2$ of Favre-Jonsson. A full-on introduction to the valuative tree would take us too far afield here; detailed references can be found in [\\color{red}CITES\\color{black}], and more concise introductions can be found in [\\color{red}CITES\\color{black}]. For us, the following discussion should suffice.\n\nThe valuative tree $\\mc{V}$ at the origin $0\\in \\C^2$ is defined as the set of all semivaluations $\\nu\\colon \\C\\llbracket x,y\\rrbracket\\to \\R\\cup\\{+\\infty\\}$ with the properties that $\\nu|_{\\C^\\times} \\equiv 0$ and $\\min\\{\\nu(x), \\nu(y)\\} = 1$. For us, the most important example of is that of a \\emph{divisorial valuation}. A valuation $\\nu\\in \\mc{V}$ is divisorial if there is a composition of point blowups $\\pi\\colon X\\to (\\C^2,0)$ over the origin, an irreducible component $E$ of the exceptional locus $\\pi^{-1}(0)$, and a constant $\\lambda\\in \\R$ such that $\\nu(P) = \\lambda\\ord_E(P\\circ \\pi)$ for all $P\\in \\C\\llbracket x,y\\rrbracket$. In this case, the constant $\\lambda$ is exactly $\\lambda = b_E^{-1}$, where $b_E = \\min\\{\\ord_E(x\\circ \\pi), \\ord_E(y\\circ \\pi)\\}\\in \\N$. The constant $b_E$ is sometimes called the \\emph{generic multiplicity} of $E$. If $\\nu$ is a divisorial valuation of the above form, we will denote it simply as $\\nu_E$.\n\nThe valuative tree $\\mc{V}$ has a natural topology and a natural poset structure $(\\mc{V}, \\leq)$. With respect to the poset structure, $\\mc{V}$ is a \\emph{rooted tree} (see [\\color{red}CITE\\color{black}]) for a precise definition. 
For any two elements $\\nu_1,\\nu_2\\in \\mc{V}$, there is a unique greatest element $\\nu_1\\wedge \\nu_2$ that is both $\\leq \\nu_1$ and $\\leq \\nu_2$. In addition, there is defined on $\\mc{V}$ an increasing function $\\alpha\\colon \\mc{V}\\to [1,+\\infty]$, called the \\emph{skewness} function, which is finite on divisorial valuations and has the following geometric property: if $\\pi\\colon X\\to (\\C^2,0)$ is a composition of point blowups over the origin and $E_1$ and $E_2$ are two irreducible components of the exceptional locus $\\pi^{-1}(0)$, then \n\\begin{equation}\\label{Beqn3}b_{E_1}b_{E_2}\\alpha(\\nu_{E_1}\\wedge \\nu_{E_2}) = -(\\check{E}_1\\cdot \\check{E}_2).\\end{equation}\n\nFinally, we note that $f$ induces in a natural way a dynamical system $f_\\bullet \\colon \\mc{V}\\to \\mc{V}$. Indeed, if $\\nu\\in \\mc{V}$, then we obtain a semivaluation $f_*\\nu$ defined by $(f_*\\nu)(P) = \\nu(P\\circ f)$. In general the value $c(f,\\nu):= \\min\\{(f_*\\nu)(x), (f_*\\nu)(y)\\}$ is greater than $1$, so $f_*\\nu$ is not an element of $\\mc{V}$, but by appropriately normalizing we obtain a semivaluation $f_\\bullet\\nu = c(f,\\nu)^{-1}f_*\\nu\\in \\mc{V}$. The quantity $c(f,\\nu)$ is called the \\emph{attraction rate} of $f$ along $\\nu$, and is the primary object of study in the paper [\\color{red}CITE\\color{black}], the results of which we will use shortly.\n\nWe now continue with the proof of \\hyperref[thmB]{Theorem~\\ref*{thmB}}. The divisors $E_1,\\ldots, E_r$ and $G_1,\\ldots, G_s$ define divisorial valuations $\\nu_{E_1},\\ldots, \\nu_{E_r}$ and $\\nu_{G_1},\\ldots, \\nu_{G_s}$ in $\\mc{V}$. Moreover, the fact that $F$ maps $\\wt{E}_i$ onto $G_{\\theta(i)}$ implies that $f_\\bullet \\nu_{E_i} = \\nu_{G_{\\theta(i)}}$. Using (\\ref{Beqn3}), we may then rewrite (\\ref{Beqn2}) as \\[\ne(\\mf{a};f^*\\mf{a}) = \\sum_{i,j=1}^r a_ia_j\\lambda_ib_{G_{\\theta(i)}}b_{E_j}\\alpha(f_\\bullet\\nu_{E_i}\\wedge \\nu_{E_j}).\\] By [\\color{red}CITE, Proposition 2.5\\color{black}], we have the equality $\\lambda_ib_{G_{\\theta(i)}} = c(f, \\nu_{E_i})b_{E_i}$, and thus \\begin{equation}\\label{Beqn4}\ne(\\mf{a}; f^*\\mf{a}) = \\sum_{i,j = 1}^r a_ia_jb_{E_i}b_{E_j}\\alpha(f_\\bullet\\nu_{E_i}\\wedge \\nu_{E_j})c(f, \\nu_{E_i}).\\end{equation} In this expression, the quantities $a_i$, $a_j$, $b_{E_i}$, and $b_{E_j}$ depend only on the ideal $\\mf{a}$, and not on the map $f$. While $\\alpha(f_\\bullet \\nu_{E_i}\\wedge \\nu_{E_j})$ does depend on $f$, it is bounded above by $\\alpha(\\nu_{E_j})$, which depends only on $E_j$ and not on $f$. Thus there is a positive constant $K = K(\\mf{a})$ such that \\[\ne(\\mf{a}; f^*\\mf{a}) \\leq K\\sum_{j=1}^r c(f, \\nu_{E_j}).\\] Therefore, if $(z,w)\\in \\C^m\\times \\C^m$ is chosen outside of a certain proper algebraic subset, we obtain the inequality \\[\nf^*D_z\\cdot D_w = e(\\mf{a};f^*\\mf{a}) \\leq K\\sum_{j=1}^r c(f,\\nu_{E_j}).\\]\n\nIf we now repeat this argument for each of the iterates $f^n$, we see that for $(z,w)\\in \\C^m\\times \\C^m$ chosen outside of a \\emph{countable} union of proper algebraic subsets that \\[\n\\mu(n) = D_z\\cdot f^n(D_w) \\leq f^{n*}D_z\\cdot D_w = e(\\mf{a}; f^{n*}\\mf{a}) \\leq K\\sum_{j=1}^r c(f^n, \\nu_{E_j}),\\] where again, the constant $K$ does not depend on $n$. To complete the proof, it suffices to know that for any $j = 1,\\ldots, r$, the sequence $c(f^n, \\nu_{E_j})$ grows subexponentially. 
Indeed, in [\\color{red}CITE, Theorem 6.1\\color{black}], it was shown that this sequence eventually satisfies an integral linear recursion relation, and thus in particular grows subexponentially. This completes the proof.\n\\end{proof}\n\n\\begin{rem} One has more information about the sequence $c(f^n, \\nu_{E_j})$ from [\\color{red}CITE\\color{black}] than simply that it satisfies an integral linear recursion relation. In fact, $c(f^n, \\nu_{E_j})\\sim Ac_\\infty^n$ for some constant $A>0$, where $c_\\infty>1$ is the \\emph{asymptotic attraction rate} of $f$, defined as follows. For each $n\\geq 1$, express the iterate $f^n$ as $f^n(x,y) = g_c(x, y) + g_{c+1}(x,y) + \\cdots$, where each $g_i$ is a homogeneous polynomial of degree $i$, and $g_c\\neq 0$. Then $c = c(f^n)$ is called the \\emph{attraction rate} of $f^n$. The asymptotic attraction rate $c_\\infty$ is defined to be the exponential growth rate $c_\\infty = \\lim_{n\\to \\infty} c(f^n)^{1\/n}$ of the sequence $c(f^n)$. We can therefore conclude that for $(z,w)\\in \\C^m\\times\\C^m$ outside of a certain countable union of proper algebraic subsets that $\\mu(n) = D_z\\cdot f^n(D_w) = O(c_\\infty^n)$.\n\\end{rem}\n\n\\begin{rem} The main tool used in the proof of \\hyperref[thmB]{Theorem~\\ref*{thmB}} was the result of [\\color{red}CITE\\color{black}] that the sequence $c(f^n, \\nu_{E_j})$ eventually satisfies an integral linear recursion relation. But this result is itself a consequence of a theorem [\\color{red}CITE, Theorem 3.1\\color{black}] about the dynamics of $f_\\bullet$ on the valuative tree $\\mc{V}$, which very roughly says the following: there is a subset $A\\subset \\mc{V}$ of fixed points of $f_\\bullet$ which is attracting in the sense that for every $\\nu\\in \\mc{V}$ with $\\alpha(\\nu)<+\\infty$, the sequence $f^n_\\bullet \\nu$ converges to $A$. For the example $f(x,y) = (x^2-y^4, y^4)$ discussed in $\\S\\S1-3$, the attracting set $A$ consists of a single point $\\nu_\\star$ (that is a curve valuation), and the semivaluations $\\nu\\in \\mc{V}$ that are \\emph{not} attracted to $\\nu_\\star$ under the dynamics of $f_\\bullet$ are exactly the curve valuations $\\nu_{C_s}\\in \\mc{V}$ for the curves $C_s$ previously constructed. In this way, we see that the validity of Arnold's conjecture for a given germ $f$ is closely connected to the basin of attraction of the attracting set $A$.\n\\end{rem}\n\n\\end{comment}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{comment}\nBefore beginning the proof, we first establish some notation. Suppose $\\pi\\colon X\\to (\\C^2,0)$ is a \\emph{modification}, by which we mean a proper birational morphism that is an isomorphism over $\\C^2\\smallsetminus\\{0\\}$. If the \\emph{exceptional locus} $\\pi^{-1}(0)$ of $\\pi$ has irreducible components $E_1,\\ldots, E_r$, the Hodge index theorem \\cite[Theorem V.1.9]{MR0463157} gives that the intersection product restricted to the vector space $\\bigoplus_i \\R E_i$ is negative definite. In particular, there exists a dual basis $\\check{E}_1,\\ldots, \\check{E}_r\\in \\bigoplus_i \\R E_i$, that is, a basis satisfying the relation $\\check{E}_i\\cdot \\check{E}_j = \\delta_{ij}$. In the proof of \\hyperref[thmB]{Theorem~\\ref*{thmB}}, we will make use of the following algebro-geometric result of Teissier:\n\n\\begin{thm}[\\color{red}CITE\\color{black}] Let $\\pi\\colon Y\\to (\\C^2,0)$ be the normalized blowup of the ideal $\\mf{a}$, and let $E_1,\\ldots, E_r$ denote the irreducible components of the exceptional locus $\\pi^{-1}(0)$. 
Then there exist positive integers $a_1,\\ldots, a_r$ such that for all $w\\in \\C^m$ chosen outside of a proper algebraic subset, one has that \\[\n\\pi^*D_w = -\\sum_{i=1}^r a_i\\check{E}_i + \\sum_{i=1}^r\\sum_{j=1}^{a_i} \\wt{D}_w^{ij}\\] as divisors, where the $\\wt{D}^{ij}_w$ are pairwise disjoint irreducible smooth formal curves with the property that each $\\wt{D}^{ij}_w$ intersects $E_i$ transversely, and does not intersect $E_k$ for any $k\\neq i$. Moreover, the divisors $\\sum_{i,j}\\wt{D}_w^{ij}$ have no common point of intersection on the exceptional locus $\\pi^{-1}(0)$ as $w\\in \\C^m$ varies.\n\\end{thm}\n\nWe are now in a position to prove \\hyperref[thmB]{Theorem~\\ref*{thmB}}.\n\n\n\\begin{proof}[Proof of {\\hyperref[thmB]{Theorem~\\ref*{thmB}}}] Let $z,w\\in \\C^m$. We start by estimating the local intersection number $D_z\\cdot f(D_w)$. Certainly one has the inequality $D_z\\cdot f(D_w) \\leq D_z\\cdot f_*D_w = f^*D_z\\cdot D_w$, so we will proceed by computing $f^*D_z\\cdot D_w$.\n\nLet $\\pi_1\\colon X_1\\to (\\C^2,0)$ and $\\pi_2\\colon X_2\\to (\\C^2,0)$ be two smooth modifications dominating $\\pi$, say with $\\eta_i\\colon X_i\\to Y$ satisfying $\\pi_i = \\pi\\circ \\eta_i$. Suppose, moreover, that the $\\pi_i$ are chosen with the following two properties: \\begin{enumerate}\n\\item[(a)] $f$ lifts to a holomorphic map $F\\colon X_1\\to X_2$, and \n\\item[(b)] if $\\wt{E}_i$ denotes the strict transform of $E_i$ in $X_1$ for $i = 1,\\ldots, r$, then $F$ does not contract any of the $\\wt{E}_i$ to a point. \n\\end{enumerate} One can always find such modifications. To compute $f^*D_z\\cdot D_w$, we now note that by the preceding theorem, if $z,w\\in \\C^m$ are chosen outside a suitable proper algebraic subset, then \\begin{align*}\nf^*D_z\\cdot D_w & = \\pi_1^*f^*D_z\\cdot \\pi_1^*D_w = F^*\\pi_2^*D_z\\cdot \\pi_1^*D_w\\\\\n& = F^*\\left(-\\sum_{i=1}^r a_i\\eta_2^*\\check{E}_i + \\sum_{i=1}^r\\sum_{j=1}^{a_i} \\eta_2^*\\wt{D}^{ij}_z\\right)\\cdot \\left(-\\sum_{i=1}^ra_i\\eta_1^*\\check{E}_i + \\sum_{i=1}^r\\sum_{j=1}^{a_i}\\eta_1^*\\wt{D}_w^{ij}\\right),\n\\end{align*} where the $\\eta_2^*\\wt{D}^{ij}_z$ and $\\eta_1^*\\wt{D}^{ij}_w$ are smooth irreducible formal curves intersecting the exceptional locus transversely at exactly one component, namely the component that is the strict transform of $E_i$ in $X_2$ and $X_1$, respectively. Moreover, because the divisors $\\sum_{i,j} \\eta_2^*\\wt{D}_z^{ij}$ and $\\sum_{i,j} \\eta_1^*\\wt{D}^{ij}_w$ have no common point of intersection on the exceptional loci $\\pi_2^{-1}(0)$ and $\\pi_1^{-1}(0)$, respectively, as $z$ and $w$ vary, we can conclude that for $(z,w)\\in \\C^m\\times \\C^m$ chosen outside of a proper algebraic subset, the divisors $F^*\\sum_{i,j}\\eta_2^*\\wt{D}_z^{ij}$ and $\\sum_{i,j}\\eta_1^*\\wt{D}^{ij}_w$ will have disjoint supports; in particular, these two divisors yield $0$ when intersected. 
For such pairs $(z,w)$, we therefore have\\begin{align*}\nf^*D_z\\cdot D_w & = F^*\\left(\\sum_{i=1}^ra_i\\eta_2^*\\check{E}_i\\right)\\cdot\\sum_{i=1}^r a_i\\eta_1^*\\check{E}_i - F^*\\left(\\sum_{i=1}^ra_i\\eta_2^*\\check{E}_i\\right)\\cdot\\sum_{i=1}^r\\sum_{j=1}^{a_i}\\eta_1^*\\wt{D}_w^{ij}\\\\\n& \\hspace{1 cm} - F^*\\left(\\sum_{i=1}^r\\sum_{j=1}^{a_i}\\eta_2^*\\wt{D}^{ij}_z\\right)\\cdot \\sum_{i=1}^r a_i\\eta_1^*\\check{E}_i\n\\end{align*} If, for simplicity, we let $V = \\sum_i a_i\\check{E}_i$, this can be rewritten \\begin{align*}\nf^*D_z\\cdot D_w & = (F^*\\eta_2^*V)\\cdot(\\eta_1^*V) - (F^*\\eta_2^*V)\\cdot(\\pi_1^*D_w + \\eta_1^*V) - F^*(\\pi_2^*D_z + \\eta_2^*V)\\cdot (\\eta_1^*V)\n\\end{align*} By the projection formula, $(F^*\\eta_2^*V)\\cdot (\\pi_1^*D_w) = 0 = (F^*\\pi_2^*D_z)\\cdot (\\eta_1^*V)$, thus giving \\[\nf^*D_z\\cdot D_w = - (F^*\\eta_2^*V)\\cdot(\\eta_1^*V) = -(\\eta_2^*V)\\cdot(F_*\\eta_1^*V).\\] Quantities such as the right hand side have been studied in the context of dynamics on valuation spaces, see for instance \\color{red}[CITES]\\color{black}. In the language of these works, the right hand side can be expressed as \\[\nf^*D_z\\cdot D_w = \\sum_{i=1}^r\\sum_{j=1}^r a_ia_jb_{E_i}b_{E_j}\\alpha(\\nu_{E_i}\\wedge f_\\bullet\\nu_{E_j})c(f, \\nu_{E_j}).\\] We will not precisely define each of the terms appearing in this sum, but instead simply say the following: \\begin{enumerate}\n\\item[1.] $b_{E_i}$ is sometimes called the \\emph{generic multiplicity} of the divisor $E_i$, and is a positive constant depending on $E_i$, but not on $f$. \\color{red}[CITES]\\color{black}\n\\item[2.] $\\alpha(\\nu_{E_i}\\wedge f_\\bullet\\nu_{E_j})$ denotes the so-called \\emph{skewness} of the \\emph{valuation} $\\nu_{E_i}\\wedge f_\\bullet \\nu_{E_j}$. It is a positive real number that is bounded above by $\\alpha(\\nu_{E_i})$, which itself is a positive constant depending on $E_i$, but not on $f$. \\color{red}[CITES]\\color{black}\n\\item [3.] $c(f,\\nu_{E_j})$, sometimes called the \\emph{attraction rate} of $f$ along $E_j$, will be the most relevant term for us here. It is positive, and depends on both $E_j$ and $f$. \\color{red}[CITES]\\color{black}\n\\end{enumerate} We have therefore derived that there is a constant $K>0$ depending only on $\\mf{a}$ such that for $(z,w)\\in \\C^m\\times \\C^m$ chosen in a suitable nonempty Zariski open set, \\[\nf^*D_z\\cdot D_w \\leq K\\sum_{j=1}^r c(f,\\nu_{E_j}).\\] This reasoning applies equally to any iterate $f^n$ of $f$, and therefore for $(z,w)\\in \\C^m\\times \\C^m$ chosen away from a suitable \\emph{countable} union of proper algebraic subsets, \\[\n\\mu(n) = D_z\\cdot f^n(D_w) \\leq f^{n*}D_z\\cdot D_w \\leq K\\sum_{j=1}^r c(f^n, \\nu_{E_j})\\,\\,\\,\\,\\,\\mbox{ for all $n$}.\\] To complete the proof, then, it suffices to know that the sequence $c(f^n, \\nu_{E_j})$ grows subexponentially in $n$. 
Indeed, it was shown in \\color{red}[CITE] \\color{black} that in fact this sequence satisfies an integral linear recursion relation, and hence in particular grows subexponentially, completing the proof.\n\\end{proof}\n\\end{comment}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\bibliographystyle{alpha}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction and motivation}\nPDE models for the propagation of ultrasound waves -- more specifically, high intensity ultrasound propagation (HIUP) -- are relevant to a number of medical and industrial applications.\nTo name but a few, lithotripsy, thermoterapy, (ultrasound) welding, sonochemistry; \ncf., e.g., \\cite{dreyer}. \nThe excitation of induced acoustic fields in order to attain a given task, such as destroying certain `obstacles' (stones in kidneys or deposits resulting from chemical reactions), renders the presence of control functions within the model well-founded. \n \nThe subject of the present investigation is an optimal control problem for a third order in time PDE,\nreferred to in the literature as the Moore-Gibson-Thompson equation, which is the linearization of \nthe Jordan-Moore-Gibson-Thompson (JMGT) equation, arising in the modeling\nof ultrasound waves; see \\cite{jordan,jordan1}, \\cite{kalten-eect}, \\cite{straughan}.\nIn contrast with the renowned Westervelt (\\cite{westervelt}) and Kuznetsov equations, the JMGT equation displays a {\\em finite} speed of propagation of acoustic waves,\nthereby providing a solution to the infinite speed of propagation paradox.\nThis is achieved by replacing the Fourier's law of heat conduction by the Cattaneo law\n(\\cite{cattaneo_1958}); the distinct constitutive law brings about an additional time\nderivative of the acoustic velocity field (or acoustic pressure).\n\nRestricting the analysis to the relevant spatial dimensions $n=2,3$, a Neumann boundary control will be acting as a force on a manifold $\\Gamma_0$ of dimension $n-1$; $\\Gamma_0$ will eventually represent a boundary portion of a bounded domain $\\Omega \\subset \\mathbb{R}^n$.\n(It is an established procedure to reduce the analysis of wave processes on {\\em unbounded} domains to boundary or initial\/boundary value problems (IBVP) on {\\em bounded} domains via the introduction of artificial boundaries.)\nThus, {\\em absorbing} boundary conditions (BC) will be taken on a complementary part of the boundary\n$\\Gamma_1 =\\partial \\Omega \\setminus \\Gamma_0$; see section~\\ref{ss:setting}.\nWe shall assume that the two parts of the boundary do not intersect.\nThe optimal control problem arises from the minimization of the acoustic pressure in $\\Omega$.\nThis setup, which is motivated by significant applications and technologies, has been already adopted\nin the literature in connection with the said nonlinear PDE's; \nsee \\cite{kalten-eect}, \\cite{clason-etal_2009}, \\cite{vania}, \\cite{vania1},\\cite{clason-k} and references therein. \n\nFrom the mathematical point of view, two main challenges appear.\nThe first one is due to the presence of {\\em boundary} controls, which naturally bring about\nunbounded input operators $B$ into the (linear) abstract state equation $y'=Ay+Bg$; see \\cite{bddm}, \\cite{redbook}. 
\nIt is well known that this issue can be dealt with by exploiting the additional regularity of the\nPDE dynamics: this occurs in the case of parabolic-like dynamics, plainly governed by analytic semigroups $e^{At}$.\nThe reader is referred to the classical texts \\cite{bddm} and \\cite[Vol.~I]{redbook} for a thorough study of the Linear-Quadratic (LQ) problem for parabolic-like PDE's, along with the related differential\nand algebraic Riccati equations.\n\\\\\n(We note that the same is actually valid in the case of PDE problems whose corresponding abstract control systems satisfy the so called {\\em singular estimates} for $e^{At}B$, even if the semigroup \n$e^{At}$ is not analytic (\\cite{las-cbms}).\nAnd, further, appropriate regularity properties can be displayed by certain coupled systems of hyperbolic-parabolic PDE's subject to boundary control -- including thermoelastic systems, acoustic-structure and fluid-structure interactions --, which ensure the solvability of the associated optimal control problems (with quadratic functionals), along with well-posed Riccati equations.\nThe ultimate finite and infinite time horizon theories, as well as references to the motivating\nPDE systems, are found respectively in \\cite{abl-2005} and \\cite{abl_2013}.)\n\nReturning to the PDE under investigation, as we know from \\cite{marchand} and \\cite{kalten-las-mar_2011}, the dynamics of the (uncontrolled) SMGT equation, with classical Dirichlet or Neumann\nBC, is described by a {\\em group} of operators, displaying an intrinsic hyperbolic character, and hence a lack of regularity of its dynamics. \nIn addition, a major challenge is brought about by the presence -- that cannot be eluded -- of the time derivative of the control function $g(t,x)$ within the control system, which becomes\n\\begin{equation} \\label{e:g_t}\ny'=Ay+B_0g+B_1g_t\\,, \n\\end{equation}\nwhereas on the other hand, penalization involves only the $L^2$ (in time) norm of the controls. \nThis means that the cost functional is not coercive with respect to $g_t$. \nThe resulting linear-quadratic problem becomes {\\em singular}.\nIt must be recalled that these features have been already encountered and dealt with\nin the study of optimal boundary control of (second-order in time) wave equations with structural damping; see the former study \\cite{bucci_1992} and the subsequent analysis and solutions proposed in \\cite{LLP} and \\cite{LPT}.\nBecause of the strong damping, in the aforementioned case the free dynamics yields an analytic semigroup, along with an enhanced regularity of the control-to-state map; this feature has been exploited in the studies \\cite{bucci_1992}, \\cite{trig_1994}, \\cite{LLP}, \\cite{LPT}. \nInstead, the present PDE problem is of hyperbolic type.\n\nThe goal of the present paper is to provide a framework for such class of singular control problems, in the case of a hyperbolic-like dynamics which intrinsically does not exhibit \nregularizing effects on its evolution.\nIt is important to emphasize that while the singularity of the control is reflected in difficulties when treating {\\em time} dependence, unbounded inputs affect the analysis of {\\em space} dependence. \nSo, the infinite-dimensional aspect of evolution is at the heart of the problem studied. \nTo the authors' best knowledge this is a first investigation where a singular control \nproblem and the control system \\eqref{e:g_t} appear simultaneously, \nin an infinite dimensional context and with a general semigroup governing the free dynamics. 
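\n\nFor the reader's orientation we record here, at a purely formal level (say, for smooth controls $g$), the elementary identity that will allow us to dispense with the time derivative of the control in \\eqref{e:g_t}: differentiating $s\\mapsto e^{A(t-s)}B_1g(s)$ and integrating over $(0,t)$ yields\n\\begin{equation*}\n\\int_0^t e^{A(t-s)}B_1\\,g_t(s)\\,ds = B_1 g(t) - e^{At}B_1 g(0) + A\\int_0^t e^{A(t-s)}B_1\\,g(s)\\,ds\\,,\n\\end{equation*}\nso that the derivative $g_t$ is traded for the initial value $g(0)$ and for an integral term involving the unbounded factor $A$, which is responsible for states that are `rougher' in the space variable. A precise meaning to this computation, in appropriate extrapolation spaces, will be given in Lemma~\\ref{l:rep}.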
\n\n\\subsection{The nonlinear model and its linearization}\nThe Jordan-Moore-Gibson-Thompson (JMGT) equation is one of the fundamental equations in nonlinear acoustics\nwhich describes wave propagation in viscous thermally relaxing fluids.\nIts linearization is found in the literature as the Moore-Gibson-Thompson (MGT) equation.\n(In recognition of the original work on it by Stokes (\\cite{stokes}), it might rather be termed \nStokes-Moore-Gibson-Thompson equation, as Pedro Jordan himself suggested; \nhence the acronym SMGT (in place of MGT) will be utilized throughout the paper.)\nThe fully nonlinear PDE, that is the JMGT equation, is the following one:\n\\begin{equation}\\label{e:jmgt}\n\\tau \\psi_{ttt} + \\psi_{tt}-c^2\\Delta \\psi - b\\Delta \\psi_t=\n\\frac{\\partial}{\\partial t}\\Big(\\frac1{c^2}\\frac{B}{2A}\\psi^2_t+ |\\nabla \\psi|^2\\Big)\n\\end{equation}\nwhere $\\tau > 0 $ is a time relaxation parameter, the unknown $\\psi=\\psi(t,x)$ is the {\\em acoustic velocity potential}, the space variable $x$ varies in a bounded domain $\\Omega\\subset \\mathbb{R}^n$, $c$ is the speed of sound, parameter $b$ stands for diffusivity, $\\alpha >0 $ is a damping parameter and $A, B$ are suitable nonlinearity constants; then, $-\\nabla \\psi$ is the acoustic {\\em particle velocity}. \n\nWhen $\\tau=0$ the model becomes the Kuznetsov equation, that is \n\\begin{equation}\\label{e:kuzn}\n\\psi_{tt}-c^2\\Delta \\psi - b\\Delta \\psi_t=\n\\frac{\\partial}{\\partial t}\\Big(\\frac1{c^2}\\frac{B}{2A}\\psi^2_t+ |\\nabla \\psi|^2\\Big)\\,,\n\\end{equation} \na (second order in time) quasilinear PDE characterized by an infinite speed of propagation.\nThe positive diffusivity coefficient $b$ provides a regularizing effect on its evolution;\nthe corresponding linearized equation is of parabolic type, as its dynamics is governed by\nan analytic semigroup. \nInstead, as found out in the former works \\cite{kalten-eect} and \\cite{marchand}, in the case\n$\\tau > 0$ the PDE turns into a finite speed of propagation and to a hyperbolic character. \n\nOptimal control problems with quadratic functional for both the Kuznetsov and Westervelt\nequations have been studied first in \\cite{clason-etal_2009} and \\cite{clason-k}; \nsee also \\cite{kalten-eect}. \nThe latter reads as \n\\begin{equation*}\nu_{tt}-c^2\\Delta u - b\\Delta u_t= \\beta \\frac{\\partial^2}{\\partial t^2}\\big(u^2\\big)\n\\end{equation*}\nin terms of the acoustic pressure $u$, where $\\beta > 0$ is a suitable parameter of nonlinearity.\n\\\\\n(The relation $u=\\rho \\psi_t$ between the acoustic pressure and velocity potential \n-- $\\rho(x)$ being the mass density -- allows another formulation of the Kuznetsov equation, \nwith the pressure as the unknown variable.) \nThen, the ultrasound excitation on a certain manifold $\\Gamma_0$ (of dimension~$n-1$)\ncan be represented by means of the Neumann boundary condition \n$\\frac{\\partial u}{\\partial \\nu}=g$ on $\\Gamma_0$, where $g$ is the control function.\nA question which arises is to minimize appropriate cost functionals associated with the controlled PDE. 
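\n\n\\begin{remark}\n\\begin{rm}\nFor later reference we note that, formally discarding the quadratic right-hand side of \\eqref{e:jmgt}, one obtains the linear third order equation\n\\begin{equation*}\n\\tau \\psi_{ttt} + \\psi_{tt}-c^2\\Delta \\psi - b\\Delta \\psi_t=0\\,,\n\\end{equation*}\nthat is the SMGT equation. Allowing a damping coefficient $\\alpha>0$ in front of the second time derivative, and writing the equation in terms of the acoustic pressure, this is precisely the equation appearing in the boundary value problem \\eqref{e:bvp-0} below, which constitutes the object of the present study.\n\\end{rm}\n\\end{remark}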
\n\nIn the works \\cite{clason-etal_2009} and \\cite{clason-k} quadratic functionals of tracking type are taken into consideration, such as \n\\begin{equation*}\nJ(g)= \\frac{1}{2}\\int_{\\Omega} |u(T,x)-u^d(x)|^2\\, dx \n+ \\frac{\\alpha}{2}\\int_0^T\\!\\!\\!\\int_{\\Gamma_0} |g|^2 \\,d \\sigma\\,dt\n\\end{equation*}\nand\n\\begin{equation*}\nJ(g)= \\frac{1}{2}\\int_0^T \\!\\!\\!\\int_{\\Omega} |u - u^d|^2\\, dx\\,dt \n+ \\frac{\\alpha}{2}\\int_0^T\\!\\!\\!\\int_{\\Gamma_0} |g|^2 \\,d \\sigma\\,dt\\,,\n\\end{equation*}\nrespectively, where $u^d$ is a given reference pressure;\nthe class of admissible controls $G^{ad}$ is a suitably chosen space \nwhose topology is induced by \n\\begin{equation} \\label{e:smooth-space}\nH^1(0,T; H^{1\/2}(\\Gamma_0)) \\cap H^2(0,T;H^{-1\/2}(\\Gamma_0))\\,.\n\\end{equation}\nA critical role in these studies was played by (i) the assumption that $G^{ad}$ represents a space of smooth controls -- more precisely, differentiable in time and subject to appropriate compatibility conditions (with respect to initial data) --, as well as \n(ii) the control constructed is an open-loop one, rather than a feedback one;\n(iii) the solutions considered are suitably small and the state equation is of parabolic type.\n\\\\ \nFor such class of controls existence, uniqueness of solutions for {\\it small data} (due to quasilinearity) has\nbeen derived (\\cite{kalten-las-pos_2012}, \\cite{kalten-eect}). \nThe optimal control is characterized via the Pontryagin Maximum Principle; see \\cite{clason-etal_2009}. \n\nThe present study, although focused on a simpler linear equation, \ndeparts from the avenues (i)--(iii), guided by two major goals.\nOn one hand, we aim at minimizing a quadratic functional that penalizes controls functions in the $L^2$\n(in time and space) norm, with (state) solutions under consideration not necessarily smooth (in space). \nA set of admissible controls that possess a low regularity is consistent with physical and engineering applications; see, e.g., \\cite{dreyer}. \nIn addition, feedback or closed-loop controls\nare of particular interest. \n\nOn the other hand, as already apparent in the case of the Westervelt equation -- as well\nas in the case of its linearization, that is the strongly damped wave equation (\\cite{bucci_1992}) --, the\nmodeling of boundary control actions naturally brings about the time derivative of the control function,\nwhich is somehow ``hidden'' within the PDE problem. \nThis intrinsic analytical aspect will be made clear later -- once we derive the \ninput-to-state solution formula.\nIf one were to pursue such a study in the case of the JMGT equation, a natural choice would\nbe to begin with the linear dynamics: it is already there where non-smoothness of controls will provide sufficient challenge.\nIn fact, the minimization problem overall $L^2$ controls may not ensure an optimal solution even in the linear case, as already noted in \\cite{LPT}.\nWe shall confirm this finding in the case of the problem under consideration.\n\nThe above suggests that appropriate adjustments in the formulation of the problem and its modeling need to be made.\nWe shall show that by enlarging slightly the class of controls resolves the issue of existence of optimal solution.\nHaving established this, we shall proceed with the optimality analysis and the construction of a\nfeedback control for the PDE which will still display `rough' states. 
\nHowever, the feedback solution will be shown to generate sufficiently regular outputs which can be used to control the system on-line -- via the solution to a {\\em non-standard} differential Riccati Equation (RE).\nThe well-posedness of these corresponding non-standard Riccati equations provides a contribution of independent interest. \nIn fact, the construction of solutions to the RE requires the extension of the dynamics to extrapolation spaces with very low regularity. \nThis is needed in order to make the dynamics invariant. \n\n\\smallskip\nTo recapitulate: \nthe novel contribution of the present work pertains to optimal feedback control of the acoustic \nSMGT equation; the closed-loop control will be generated by an appropriate non-standard Riccati equation.\n(The non-standard structure is due to the singular nature of the optimization problem.)\nFocus is placed on the linearized version of the model, which already provides significant challenges in terms of the underlying analysis and constitutes a necessary step for a further treatment of nonlinear problems.\nThe expectation\nis that once a solution is given for the optimal feedback control of the linearized dynamics, such control may be used\nfor the nonlinear problem, which then will have to be considered with small initial data.\nA similar approach has been pursued successfully in the case of the Navier-Stokes equations;\ncf. \\cite{barbu}, \\cite{barbu1}. \n\n\n\\subsection{Mathematical setting} \\label{ss:setting}\nWe consider the problem of controlling the acoustic excitation on a certain closed region \n$\\Gamma_0$ while maintaining the acoustic pressure below a certain threshold; $\\Gamma_0$ will be\nsubsequently identified as a part of the boundary of an introduced bounded domain $\\Omega$. 
\nThen, as usually done in the study of wave propagation phenomena in unbounded spatial domains, an artificial boundary $\\Gamma_1$ is introduced in order to limit the area of observation\/computation.\nThe {\\em absorbing} boundary conditions (BC) on $\\Gamma_1$ are then used to avoid reflections:\nroughly, no waves can `come back'.\nAccordingly, and consistently with the analysis carried out in \\cite{clason-etal_2009} \n(on a classical nonlinear model for ultrasound wave propagation like the Westervelt equation), \nwe will complement the SMGT equation with the BC which are the most pertinent: namely, \n\\begin{itemize}\n\\item\nNeumann {\\em boundary control} acting on\n$\\Gamma_0$ (the so called excited boundary); $g$ below represents a surface force;\n\\item\nabsorbing BC on the complement $\\Gamma_1=\\partial\\Omega\\setminus \\Gamma_0$ \n(the so called {\\em absorbing boundary}).\n\\end{itemize}\nThus, the boundary value problem (BVP) is as follows:\n\\begin{equation} \\label{e:bvp-0}\n\\begin{cases}\n\\tau u_{ttt} + \\alpha u_{tt} -c^2 \\Delta u -b \\Delta u_t =0 & \\textrm{on $(0,T)\\times\\Omega$}\n\\\\\n\\frac{\\partial u}{\\partial \\nu}=g & \\textrm{on $(0,T)\\times\\Gamma_0$}\n\\\\\n\\frac{\\partial u}{\\partial \\nu}+\\frac{1}{c}u_t=0 & \\textrm{on $(0,T)\\times\\Gamma_1$}\n\\end{cases}\n\\end{equation}\nto be supplemented with initial conditions.\n\n\\noindent\nAiming at studying optimal control problems with quadratic functionals associated with the IBVP \\eqref{e:bvp-0}, the following features need to be taken into account: \n\\\\\n(i) {\\em finite} time horizon problems, in the absence of penalization of the final time are the most pertinent ones (e.g., in lithotripsy);\n\\\\\n(ii) with $u$ representing the acoustic pressure, the quantity to be minimized (under the action of the surface force $g$) is\n$\\|u-u^d\\|_{L^2(0,T;L^2(\\Omega))}^2$, where $u^d$ is a reference pressure;\n\\\\\n(iii)\nlonger times (i.e. $T=+\\infty$) might be taken into consideration (e.g., in connection with thermotherapy).\n\nDepending on the applications, different cost functionals may be considered.\nIn what follows we shall focus on the physically significant minimization of the following \ncost functional (of tracking type): \n\\begin{equation} \\label{e:cost-funct}\nJ(g) = \\int_0^T\\!\\!\\!\\int_{\\Omega} |u-u^d|^2 \\,dx\\, dt + \\int_0^T\\!\\!\\! \\int_{\\Gamma_0} |g|^2 \\,d\\sigma\\, dt\\,.\n\\end{equation}\n\n\\begin{remark}\n\\begin{rm}\nThe fact that the functional cost penalizes the control $g$ only in the $L^2$ norm renders the optimization problem a singular one. \nIndeed, if one penalizes also the velocity $g_t$ of the control, then we would obtain a standard boundary control problem with coercive cost functional.\n\\end{rm} \n\\end{remark}\n\nControl problems associated with acoustic equations (Westervelt, Kuznetsov, JMGT ones) have\nbeen recently studied in the literature; see the review paper \\cite{kalten-eect}. \nHowever, the principal difference is that the present minimization involves control functions which belong to $L^2(\\Sigma)$, $\\Sigma := (0,T)\\times \\Gamma_0$, \nrather than more regular -- time-space differentiable -- controls (see \\eqref{e:smooth-space}, \nand the optimal control problems studied in \\cite{clason-etal_2009} and \\cite{vania}). \nIn addition, control laws provided in the past literature were {\\it open loop} controls. 
\nOur goal is to construct {\\it feedback control} with controls of limited regularity and control gains represented by solutions to Riccati equation. \nThis last aspect is the main trait of our contribution. \nA brief outline-guide to the paper follows below. \n\n\\smallskip\nIn order to state our results and to explain ramifications of the low regularity of the control, \nit is necessary to derive an abstract input-to-state formula of the IBVP problem, within the realm of classical control theory.\nThis means we will seek an explicit representation for the map \n\\begin{equation}\\label{gu}\ng \\longrightarrow (u,u_t,u_{tt})\n\\end{equation}\nThis will be accomplished in the next Section~\\ref{s:semigroup-perspective} by using semigroup theory. \nStarting with uncontrolled dynamics and its representation via generator of a strongly continuous semigroup, we shall then proceed introducing boundary controls into the ``variation of parameters formula'' which will provide an explicit map \\eqref{gu} -- singular and defined on appropriately selected extrapolation spaces, though. \n\nIn the next step we shall formulate control problems associated with the input-state dynamics and we shall discuss existence and non-existence of optimal solutions. \nThe final result pertaining to well-posedness of Riccati equations and to the feedback synthesis \nof the optimal control is presented in Section~\\ref{s:auxiliary-and-riccati}. \nIt is important to notice here that in spite of the singularity of input-state dynamics, the feedback synthesis and the resulting Riccati equations are defined and well-posed on the basic state and control spaces. \nThis is due to the effects of the observation. \n\nThe proofs of the auxiliary and main results are deferred to Sections~\\ref{s:proofs_1} \nand \\ref{s:proofs_2}. \nThe proofs will rely on techniques introduced in the study of the LQ problem for hyperbolic-like equations with unbounded inputs, where the dynamics does not provide beneficial regularizing effects. \nTo handle this issue, we establish appropriate bounds by exploiting structural properties of the observation; see \\cite[Vol.~II]{redbook}.\n\n\n\\section{Input-to-state formulation of the PDE problem} \\label{s:semigroup-perspective}\nA prerequisite step for the understanding of the control-theoretic properties of the \ninitial\/boundary value problem (IBVP)\n\\begin{equation} \\label{e:ibvp-1}\n\\begin{cases}\n\\tau u_{ttt} + \\alpha u_{tt} -c^2 \\Delta u -b \\Delta u_t =0 & \\textrm{on $(0,T)\\times\\Omega$}\n\\\\\n\\frac{\\partial u}{\\partial \\nu}=g & \\textrm{on $(0,T)\\times\\Gamma_0$}\n\\\\\n\\frac{\\partial u}{\\partial \\nu}+\\frac{1}{c} u_t= 0& \\textrm{on $(0,T)\\times\\Gamma_1$}\n\\\\\nu(0,x)=u_0(x)\\,,\\; u_t(0,x)=u_1(x)\\,; u_{tt}(0,x)=u_2(x) & \\textrm{on $\\Omega$}\n\\end{cases}\n\\end{equation}\nfor the SMGT equation is to introduce the corresponding abstract operator model in \nan appropriate function spaces.\n\n\\subsection{Abstract setup. Preliminary analysis}\nIn order to incorporate into the equation the boundary control action on $\\Gamma_0$,\nalong with the absorbing BC on $\\Gamma_1$, we follow a well-established method. 
\n\nLet ${\\mathcal A}$ be the realization of $-\\Delta$ in $L^2(\\Omega)$ with Neumann BC: namely,\n\\begin{equation*}\n{\\mathcal A}=-\\Delta\\,, \\quad \n{\\mathcal D}({\\mathcal A}) =\\Big\\{ f\\in H^2(\\Omega): \\; \\frac{\\partial f}{\\partial \\nu}\\Big|_{\\partial \\Omega}\n=0\\Big\\}\\,.\n\\end{equation*}\nIt is well known that ${\\mathcal A}$ is not boundedly invertible on $L^2(\\Omega)$; it has\nbounded inverse on\n\\begin{equation*}\nL^2_0(\\Omega):=L^2(\\Omega)\/\\ker({\\mathcal A})\n=\\Big\\{f\\in L^2(\\Omega)\\colon \\int_\\Omega f\\,d\\Omega=0\\Big\\}\\,,\n\\end{equation*}\nwhere $\\ker({\\mathcal A})$ is the null space of ${\\mathcal A}$ spanned by the normalized constant functions.\nThen, introduce the Green maps $N_i$, $i=0,1$, which define appropriate harmonic extensions into $\\Omega$ of data defined on $\\partial\\Omega$. \nMore precisely, for $\\varphi\\in L^2(\\Gamma_i)$, $N_i$ will be defined as follows: \n\\begin{equation}\\label{e:N_i}\nN_i\\colon\\varphi\\longmapsto N_i\\varphi=:v \\;\n\\Longleftrightarrow \\;\n\\begin{cases}\n\\Delta v -v =0 & \\textrm{on $\\Omega$}\n\\\\[1mm]\n\\frac{\\partial v}{\\partial \\nu}=\\varphi & \\textrm{on $\\Gamma_i$}\n\\\\[1mm]\n\\frac{\\partial v}{\\partial \\nu}=0 & \\textrm{on $ \\partial \\Omega \\setminus \\Gamma_i$.}\n\\end{cases}\n\\end{equation}\nEither elliptic problem that defines the operator $N_i$ in \\eqref{e:N_i} admits a unique\nsolution $v_i\\in H^{3\/2}(\\Omega)$, for (respective) boundary data $\\varphi\\in L^2(\\Gamma_i)$,\n$i=0,1$.\nThen, by elliptic theory one has for each $i=0,1$ and any positive $\\sigma<3\/4$ \n\\begin{equation} \\label{e:neumann-regularity}\nN_i \\; \\textrm{continuous}\\colon L^2(\\Gamma_i) \\longrightarrow H^{3\/2}(\\Omega)\n\\subset H^{3\/2-2\\sigma}(\\Omega)\\equiv {\\mathcal D}((I+{\\mathcal A})^{3\/4-\\sigma})\\,,\n\\end{equation}\nwith the identification of the Sobolev spaces $H^s(\\Omega)$ with the domains of the fractional powers of the operator $(I+{\\mathcal A})$ (and with equivalence of the respective norms), an identification that will be especially useful in the sequel.
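\n\nWe also record, for later use, the standard identifications ${\\mathcal D}((I+{\\mathcal A})^{1\/2})= H^1(\\Omega)$ and, more generally, ${\\mathcal D}((I+{\\mathcal A})^{s})= H^{2s}(\\Omega)$ for $0\\le s<3\/4$ (with equivalence of the respective norms); no boundary conditions enter the description of these domains below the threshold $s=3\/4$.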
\n\nIf now $N_i^*$ denote the respective adjoint operators of $N_i$, $i=0,1$ -- defined by \n$(N_i\\phi,w)_{L^2(\\Omega)}=(\\phi,N_i^*w)_{L^2(\\Gamma_i)}$ --, it then follows\nfor each $i=0,1$ and any $\\sigma\\in (0,3\/4)$,\n\\begin{equation*}\n(I+{\\mathcal A})^{3\/4-\\sigma}N_i\\in {\\mathcal L}(L^2(\\Gamma_i),L^2(\\Omega))\\,,\nN^*_i(I+{\\mathcal A})^{3\/4-\\sigma}\\in {\\mathcal L}(L^2(\\Omega),L^2(\\Gamma_i))\\,.\n\\end{equation*} \nAs in \\cite{trig_1994}, a computation which utilizes the (second) Green Theorem yields, for $f\\in {\\mathcal D}({\\mathcal A})$, \nthe following fundamental trace results:\n\\begin{equation} \\label{e:trace-result-ni}\nN_i^*({\\mathcal A}+I) f =f|_{\\Gamma_i} \\qquad \\textrm{$i=0,1$.}\n\\end{equation}\n(For the reader's convenience: take $v\\in {\\mathcal D}({\\mathcal A})$, $\\varphi\\in L^2(\\Gamma_0)$,\nand compute\n\\begin{equation*}\n\\begin{split}\n& -\\big(N_0^*({\\mathcal A}+I) v,\\varphi\\big)_{\\Gamma_0}= \\big(-({\\mathcal A}+I) v,N_0\\varphi\\big)_\\Omega\n= (\\Delta v,N_0\\varphi)_\\Omega-(v,N_0\\varphi)_\\Omega=\n\\\\\n& \\qquad = \\big(v,\\Delta (N_0\\varphi)\\big)_\\Omega\n+ \\cancel{\\Big(\\frac{\\partial v}{\\partial\\nu},N_0\\varphi\\Big)_{\\partial\\Omega}}\n- \\Big(v,\\frac{\\partial N_0\\varphi}{\\partial\\nu}\\Big)_{\\partial\\Omega}\n-(v,N_0\\varphi)_\\Omega=\n\\\\\n& \\qquad = (v,N_0\\varphi)_\\Omega\n-(v,\\varphi)_{\\Gamma_0}-(v,N_0\\varphi)_{\\Omega}\n=-(v,\\varphi)_{\\Gamma_0}\\,.\n\\end{split}\n\\end{equation*}\nThe above shows that \\eqref{e:trace-result-ni} holds true when $i=0$; the case $i=1$ is proved in the same way.\nWe note that it has been used that since $v$ belongs to ${\\mathcal D}({\\mathcal A})$,\nthen $\\frac{\\partial v}{\\partial\\nu}=0$ on $\\partial\\Omega$; in addition, the definition\nof $N_0\\varphi$ in \\eqref{e:N_i} -- as the solution of an elliptic problem -- gives\nin particular $\\Delta (N_0\\varphi)=N_0\\varphi$.)\n\n\\smallskip\nIn view of the definition of the introduced operators $N_i$, $i=0,1$, we see that\n\\begin{equation*}\n\\begin{cases}\n(\\Delta -I)\\big(u+\\frac1{c}N_1 u_t|_{\\Gamma_1}-N_0g\\big)=(\\Delta -I)u & \n\\textrm{on $\\Omega\\times (0,T)$}\n\\\\[1mm]\n\\frac{\\partial}{\\partial \\nu}(u+\\frac1{c}N_1 u_t-N_0g)=0 \n& \\textrm{on $\\Gamma_0\\times (0,T)$}\n\\\\[1mm]\n\\frac{\\partial}{\\partial \\nu}(u+\\frac1{c}N_1 u_t-N_0g)=0\\,; \n& \\textrm{on $\\Gamma_1\\times (0,T)$}\n\\end{cases}\n\\end{equation*}\nproceeding formally we get\n\\begin{equation*}\n\\begin{split}\n\\Delta u &= (\\Delta -I)\\big(u+\\frac1{c}N_1 u_t|_{\\Gamma_1}-N_0g\\big) +u\\,,\n\\\\\n\\Delta u_t &= (\\Delta -I)\\big(u_t+\\frac1{c}N_1 u_{tt}|_{\\Gamma_1}-N_0 g_t\\big) +u_t\\,, \n\\end{split}\n\\end{equation*}\nwhich enable us to rewrite the SMGT equation as \n\\begin{equation*}\n\\begin{split}\n& \\tau u_{ttt}+\\alpha u_{tt} -c^2 (\\Delta - I)\\big(u+\\frac1{c}N_1 u_t|_{\\Gamma_1}-N_0g\\big)-c^2 u-\n\\\\\n& \\qquad\\qquad\\qquad - b(\\Delta -I)\\big(u_t+\\frac1{c}N_1 u_{tt}|_{\\Gamma_1}-N_0 g_t\\big) - b u_t=0\\,,\n\\end{split}\n\\end{equation*}\nwhere $\\frac{\\partial}{\\partial \\nu}\\big(u+\\frac1{c}N_1 u_t|_{\\Gamma_1}-N_0g\\big)\\big|_\\Gamma=0$.\nThus, by using the trace results \\eqref{e:trace-result-ni}, the BVP \\eqref{e:bvp-0} for the SMGT equation\ntranslates to the following abstract equation, where both the absorbing BC on $\\Gamma_1$ and the boundary control action on $\\Gamma_0$ are incorporated:\n\\begin{equation*}\n\\begin{split}\n& \\tau u_{ttt}+\\alpha u_{tt} +c^2 ({\\mathcal 
A}+I)\\Big[u+\\frac1{c}N_1 N_1^*({\\mathcal A}+I)u_t-N_0 g\\Big]\n-c^2 u+\n\\\\[1mm]\n& \\qquad\\qquad\\qquad + b({\\mathcal A}+I)\\Big[u_t+\\frac1{c}N_1N_1^*({\\mathcal A}+I) u_{tt}-N_0 g_t\\Big] - b u_t=0\\,,\n\\end{split}\n\\end{equation*}\nthat is \n\\begin{equation}\\label{e:controlled-eq}\n\\begin{split}\n& \\tau u_{ttt}+\\alpha u_{tt} +c^2 {\\mathcal A} u +c({\\mathcal A}+I)N_1 N_1^*({\\mathcal A}+I)u_t + b {\\mathcal A} u_t +\n\\\\[1mm]\n& \\qquad\\qquad\\qquad +\\frac{b}{c} ({\\mathcal A}+I)N_1 N_1^*({\\mathcal A}+I) u_{tt} = c^2 ({\\mathcal A}+I)N_0 g + b ({\\mathcal A}+I)N_0 g_t\\,;\n\\end{split}\n\\end{equation} \nthe equality is understood with respect to the duality pairing, i.e. in $[{\\mathcal D}({\\mathcal A})]'$. \n\nThe third order abstract equation \\eqref{e:controlled-eq} gives rise readily to a first order control system,\ninitially defined on an extended space $L^2(\\Omega) \\times L^2(\\Omega) \\times [{\\mathcal D}({\\mathcal A})]'$:\n\\begin{equation}\\label{e:1order-system}\n\\frac{d}{dt}\\begin{pmatrix}\nu\\\\\nu_t\\\\\nu_{tt}\n\\end{pmatrix} = A \\begin{pmatrix}\nu\\\\\nu_t\\\\\nu_{tt}\n\\end{pmatrix} + B_0 g + B_1 g_t\\,,\n\\end{equation}\nwhere the operator describing the {\\em free} dynamics is\n{\\small\n\\begin{equation}\\label{e:generator}\nA= \n\\begin{pmatrix}\n0 & I & 0\n\\\\[1mm]\n0 & 0 & I\n\\\\[1mm]\n- \\tau^{-1} c^2{\\mathcal A} & -\\tau^{-1} \\big[b {\\mathcal A}+c({\\mathcal A}+I)N_1N_1^*({\\mathcal A}+I)\\big] \n& - \\tau^{-1} \\big[\\alpha I +\\frac{b}{c}({\\mathcal A}+I)N_1 N_1^*({\\mathcal A}+I)\\big]\n\\end{pmatrix}\n\\end{equation}\n}\nwhile the input operators $B_i\\in {\\mathcal L}(L^2(\\Gamma_0),[{\\mathcal D}({\\mathcal A})]'), i=0,1$, are given by\n\\begin{equation}\\label{e:input-operators}\nB_0=\n\\begin{pmatrix}\n0 \n\\\\[1mm]\n0 \n\\\\[1mm]\n\\tau^{-1}c^2 ({\\mathcal A}+I)N_0\n\\end{pmatrix}\\,,\n\\qquad \nB_1=\n\\begin{pmatrix}\n0 \n\\\\[1mm]\n0 \n\\\\[1mm]\n\\tau^{-1}b ({\\mathcal A}+I)N_0\n\\end{pmatrix} \n=\\frac{b}{c^2}B_0\\,.\n\\end{equation}\nThe (free dynamics) operator $A$ in \\eqref{e:generator} will be shown to generate a $C_0$-semigroup on the space $Y = H^1(\\Omega) \\times H^1(\\Omega) \\times L^2(\\Omega)$. \n\n\\begin{remark}\n\\begin{rm}\nThe first order equation \\eqref{e:1order-system} is a control system in (extended to) the dual space\n$[{\\mathcal D}(A^*)]'$;\nthis is due to the fact that $A^{-1} B_i \\in {\\mathcal L}(U,Y)$, $i=0,1$, as it will be verified later.\nHowever, the given formulation involves the time derivative of control, which is not supported by the\ncost functional; as a consequence, the minimization problem lacks coercivity.\nTo cope with this, we will follow \\cite{LLP}: \nintegration by parts in the input-to-state formula enables to `eliminate' the time derivative of the control function, however with the drawback that the states will become `rougher'.\nThe smoothing properties of the observation operator $R$ -- here, {\\em intrinsic} -- will play a major role\nin the entire subsequent analysis, which will eventually bring about the solution of the optimization problem.\n\\end{rm}\n\\end{remark}\n \nBefore we proceed, let us consider the uncontrolled equation first. \nThis step is necessary in order to formulate a correct notion of duality -- which is always with respect to\nthe generator of the semigroup underlying the dynamics. \n\n\n\\subsection{The uncontrolled equation. 
Semigroup well-posedness}\nIn order to pinpoint the control-theoretic properties of the abstract system \n\\eqref{e:1order-system} -- an ineludible preliminary step for the analysis \nof the optimal control problem --, we consider first the uncontrolled equation, that is\nequation \\eqref{e:controlled-eq} in the absence of the boundary action $g$.\nWith $g\\equiv 0$, the equation \\eqref{e:controlled-eq} reads as\n\\begin{equation}\\label{e:free-eq}\n\\begin{split}\n& \\tau u_{ttt}+\\alpha u_{tt} +c^2 {\\mathcal A} u +c({\\mathcal A}+I)N_1 N_1^*({\\mathcal A}+I)u_t + b {\\mathcal A} u_t +\n\\\\\n& \\qquad\\qquad\\qquad + \\frac{b}{c} ({\\mathcal A}+I)N_1 N_1^*({\\mathcal A}+I) u_{tt}=0\\,.\n\\end{split}\n\\end{equation} \nWe follow an idea introduced\nand utilized in \\cite{kalten-las-mar_2011} and \\cite{marchand}.\nCalculations below might appear formal: however, they are fully justified with respect to the duality in\n$[{\\mathcal D}(A^*)]'$.\nAfter having set $\\tau=1$ for the sake of simplicity, the rewriting of equation \\eqref{e:free-eq} as\n\\begin{equation}\\label{e:free-eq-1}\n(u_t+\\alpha u)_{tt}+b {\\mathcal A}\\Big(u_t+\\frac{c^2}{b} u\\Big) \n+\\frac{b}{c}({\\mathcal A}+I)N_1 N_1^*({\\mathcal A}+I)\\Big(u_{tt}+ \\frac{c^2}{b} u_t\\Big)=0\\,,\n\\end{equation} \nsuggests the introduction of the auxiliary variable \n\\begin{equation} \\label{e:def-of-z}\nz:= u_t+\\frac{c^2}{b} u\\,.\n\\end{equation} \nThe new variable $z$ plays a major role in deriving well-posedness\nresults for the third order equation \\eqref{e:free-eq} in the unknown $u$;\nthis is because it allows us to connect the (free) equation under investigation with\nthe following system in the unknowns $(u,z)$:\n\\begin{equation}\\label{e:auxiliary-free-sys}\n\\begin{cases}\nu_t=-\\frac{c^2}{b} u+z\n\\\\[1mm]\nz_{tt}=-b {\\mathcal A} z -\\frac{b}{c}({\\mathcal A}+I)N_1 N_1^*({\\mathcal A}+I)z_t -\\gamma z_t+\\gamma \\frac{c^2}b z \n- \\gamma \\Big(\\frac{c^2}b\\Big)^2 u\n\\end{cases}\n\\end{equation}\nwhere $\\gamma:= \\alpha - \\frac{c^2}{b}$ will be assumed to be positive.\nThe explicit statement and proof of this claim, which is an immediate generalization of \nwhat is done in \\cite{kalten-las-mar_2011}, is given below for the reader's convenience\nand the sake of completeness.\n \n\\begin{lemma}\nThe uncontrolled third order (in time) equation \\eqref{e:free-eq} is equivalent to \nthe coupled ODE-PDE system \\eqref{e:auxiliary-free-sys}, with \n$\\gamma=\\alpha - \\frac{c^2}{b}$.\n\\end{lemma}\n\n\\begin{proof}\nThe starting point is equation \\eqref{e:free-eq-1}, which is nothing but a rewriting of\n\\eqref{e:free-eq}.\nWith the new variable $z= u_t+c^2\/b u$, the term $u_t+\\alpha u$ in \\eqref{e:free-eq-1}\nis rewritten in terms of $z$ and $u$ as follows:\n\\begin{equation*}\nu_t+\\alpha u= z+\\gamma u\\,, \\quad \\gamma:= \\alpha - \\frac{c^2}{b}\\,,\n\\end{equation*} \nso that \\eqref{e:free-eq-1} becomes\n\\begin{equation}\\label{e:free-eq-2}\nz_{tt}+b {\\mathcal A} z +\\frac{b}{c}({\\mathcal A}+I)N_1 N_1^*({\\mathcal A}+I)z_t + \\gamma u_{tt}=0\\,.\n\\end{equation} \nOn the other hand, using once again the definition of $z$\nwe see that $u_t=z-c^2\/b \\,u$, which gives \n\\begin{equation}\\label{e:u_tt}\nu_{tt}=z_t-\\frac{c^2}{b} u_t=z_t-\\frac{c^2}{b} z+\\Big(\\frac{c^2}{b}\\Big)^2 u\\,;\n\\end{equation} \nthe above, inserted in \\eqref{e:free-eq-2}, yields the following equation:\n\\begin{equation*}\nz_{tt}+b {\\mathcal A} z +\\frac{b}{c}({\\mathcal A}+I)N_1 N_1^*({\\mathcal A}+I)z_t + \\gamma z_t-\\gamma 
\\frac{c^2}b z + \n\\gamma\\Big(\\frac{c^2}b\\Big)^2 u=0\\,.\n\\end{equation*}\n\nThe latter second order in time equation for $z$, combined with\n\\eqref{e:u_tt}\nleads to the following coupled system of (second-order in time) equations in the \nunknowns $(u,z)$\n\\begin{equation*}\n\\begin{cases}\nu_{tt}=z_t-\\frac{c^2}{b} u_t\n\\\\[1mm]\nz_{tt}+b {\\mathcal A} z +\\frac{b}{c}({\\mathcal A}+I)N_1 N_1^*({\\mathcal A}+I)z_t + \\gamma z_t-\\gamma \\frac{c^2}b u_t =0\\,;\n\\end{cases}\n\\end{equation*}\nor, equivalently, to the coupled ODE-PDE system \\eqref{e:auxiliary-free-sys}.\n\\end{proof}\n\nWe establish semigroup well-posedness of the Cauchy problems associated with\nsystem \\eqref{e:auxiliary-free-sys} in three different function spaces.\n\n\\begin{theorem}[Equivalent system. Well-posedness, I] \\label{t:first}\nThe (first order in time) system in the unknown $(u,z,z_t)$ corresponding to\nsystem \\eqref{e:auxiliary-free-sys} is well-posed in the space\n\\begin{equation*}\nY= \\underbrace{H^1(\\Omega)}_{u}\\times \\underbrace{H^1(\\Omega)\\times L^2(\\Omega)}_{(z,z_t)}\\,.\n\\end{equation*}\nIts dynamics is described by a closed operator $\\tilde{A}: {\\mathcal D}(\\tilde{A})\\subset Y\\to Y$ which is the generator of a $C_0$-semigroup $e^{\\tilde{A} t}$ on $Y$, $t\\ge 0$. \n\\end{theorem}\n\n\\begin{proof}\nThe second-order system \\eqref{e:auxiliary-free-sys} is rewritten as \na first-order system\n\\begin{equation*}\n\\begin{pmatrix}\nu\\\\\nz\\\\\nz_t\n\\end{pmatrix}_t= \\tilde{A}\\begin{pmatrix}\nu\\\\\nz\\\\\nz_t\n\\end{pmatrix}\\,,\n\\end{equation*}\nwith dynamics operator\n\\begin{equation*}\n\\tilde{A}=\n\\begin{pmatrix}\n-\\frac{c^2}{b}\\,I & I & 0\n\\\\[1mm]\n0 & 0 & I\n\\\\[1mm]\n- \\gamma\\big(\\frac{c^2}b\\big)^2 \\,I & -b {\\mathcal A}+\\gamma \\frac{c^2}b\\,I & -\\gamma I \n- \\frac{b}{c}({\\mathcal A}+I)N_1 N_1^*({\\mathcal A}+I) \n\\end{pmatrix}\\,.\n\\end{equation*}\nIt is then natural to observe that the decomposition \n\\begin{equation*}\n\\tilde{A}= \\tilde{A}_1+ C_1+K_1\n\\end{equation*}\nholds true, where we set\n\\begin{equation*}\n\\tilde{A}_1=\n\\begin{pmatrix}\n-\\frac{c^2}{b}\\,I & 0 & 0\n\\\\\n0 & 0 & I\n\\\\\n0 & -b ({\\mathcal A}+I) & -\\gamma I - \\frac{b}{c}({\\mathcal A}+I)N_1 N_1^*({\\mathcal A}+I) \n\\end{pmatrix}\\,,\n\\end{equation*}\n\\begin{equation*}\nC_1=\\begin{pmatrix}\n0 & I & 0\n\\\\\n0 & 0 & 0 \n\\\\\n-\\gamma \\big(\\frac{c^2}{b}\\big)^2\\,I & 0 & 0 \n\\end{pmatrix}\\,,\n\\qquad \nK_1=\\begin{pmatrix}\n0 & 0 & 0\n\\\\\n0 & 0 & 0\n\\\\\n0 & (\\gamma \\frac{c^2}{b}+b)\\,I & 0\n\\end{pmatrix}\\,.\n\\end{equation*}\nIt is enough to single out the following respective features: \n\\\\\n(i) the operator $\\tilde{A}_1: {\\mathcal D}(\\tilde{A}_1)\\subset Y\\longrightarrow Y$ is a \n(maximally) dissipative operator on \n\\begin{equation*}\n\\underbrace{H^1(\\Omega)}_{u}\\times \\underbrace{{\\mathcal D}(({\\mathcal A}+I)^{1\/2})\\times L^2(\\Omega)}_{(z,z_t)}\n\\end{equation*}\nand hence it is the generator of a $C_0$-semigroup of {\\em contractions} \n$e^{\\tilde{A}_1 t}$ on $Y$ (which, however, is {\\em not} analytic); \n\\\\\n(ii)\n$C_1$ is a {\\em bounded} operator from $Y$ into itself;\n\\\\\n(iii)\n$K_1$ is a {\\em compact} operator: in fact, with $f\\in {\\mathcal D}(({\\mathcal A}+I)^{1\/2})$ one has \n\\begin{equation*}\n\\Big(\\gamma \\frac{c^2}{b}+b\\Big)\\,f\n=\\Big(\\gamma \\frac{c^2}{b}+b\\Big)({\\mathcal A}+I)^{-1\/2}[({\\mathcal A}+I)^{1\/2}f]\\,.\n\\end{equation*}\nThe generation of a $C_0$-semigroup $e^{\\tilde{A} t}$ on $Y$ 
follows\nby semigroup theory.\n\\end{proof}\n\n\n\\begin{remark}\n\\begin{rm}\nThe space $Y$ will provide an appropriate functional setting where the original uncontrolled\nsystem is well-posed, and a state space for the optimal control problem under investigation.\nIt is however important to add that well-posedness remains valid in distinct functional spaces;\nthe corresponding results are stated below for the sake of completeness, while the respective proofs are\nomitted.\n\\end{rm}\n\\end{remark}\n\n\n\\begin{corollary}[Equivalent system. Well-posedness, II]\nThe uncontrolled problem is well-posed in\n\\begin{equation*}\nY_2= \\underbrace{H^2(\\Omega)}_{u}\\times \\underbrace{H^1(\\Omega)\\times L^2(\\Omega)}_{(z,z_t)}\\,.\n\\end{equation*}\n\\end{corollary}\n\nThus, in view of the definition of the domain of the generator $\\tilde{A}$, that is\n\\begin{equation*}\n\\begin{split}\n{\\mathcal D}(\\tilde{A}) &= \\big\\{ (u,z,z_t)\\in [H^1(\\Omega)]^3\\colon\nz+\\frac{1}{c}N_1\\,N_1^*({\\mathcal A}+I)z_t\\in {\\mathcal D}({\\mathcal A})\\big\\}=\n\\\\[1mm]\n& = \\Big\\{ (u,z,z_t)\\in H^1(\\Omega)\\times H^2(\\Omega)\\times H^1(\\Omega)\\,\\colon\n\\; \\frac{\\partial z}{\\partial \\nu}\\Big|_{\\Gamma_0} =0 \\,, \\;\n\\Big[c\\frac{\\partial z}{\\partial \\nu}+z_t\\Big]_{\\Gamma_1} =0 \\Big\\}\\,,\n\\end{split}\n\\end{equation*}\ntaking the dual $[{\\mathcal D}(\\tilde{A})]'$ (duality with respect to $Y_2$), we are able to infer the following result.\n\\begin{corollary}[Equivalent system. Well-posedness, III]\nThe uncontrolled problem is well-posed in\n\\begin{equation}\nY_0 \\sim \\underbrace{H^1(\\Omega)}_{u}\\times \\underbrace{L^2(\\Omega)\\times [H^1(\\Omega)]'}_{(z,z_t)}\\,,\n\\end{equation}\nwhere $\\sim$ indicates topological equivalence.\n\\end{corollary}\n\nThe next Theorem~\\ref{t:wellposed_1} summarizes relevant well-posedness results which will be used throughout. \n\n\\begin{theorem}[The uncontrolled equation. Well-posedness and stability] \\label{t:wellposed_1}\nWith reference to the third order abstract equation \\eqref{e:free-eq} describing the free dynamics,\nthe following statements hold true.\n\\begin{enumerate}\n\\item[i)]\nThe boundary value problem \\eqref{e:bvp-0} with $g\\equiv 0$ admits the abstract formulation\n\\eqref{e:free-eq} as a third order equation; \nequivalently, it is rewritten as a first order abstract system $y'=Ay$, where $y$ denotes the\nstate variable $(u,u_t,u_{tt})$.\n\n\\item[ii)]\nThe operator $A$ which governs the free dynamics, detailed in \\eqref{e:generator},\nis the generator of a $C_0$-semigroup $\\{e^{At}\\}_{t\\ge 0}$ on the function space\n$Y=H^1(\\Omega)\\times H^1(\\Omega)\\times L^2(\\Omega)$.\n\n\\item[iii)] \nThe semigroup $e^{At}$ is exponentially stable when $\\gamma > 0$. \n\n\\end{enumerate}\n\\end{theorem}\n\n\\begin{remark}\n\\begin{rm}\nIn the critical case, when $\\gamma =0$, it is expected that with $\\Gamma_0$ subject to \nthe ``star-shaped'' Geometric Condition (cf.~\\cite{lagnese}) the resulting semigroup is exponentially stable.\n\\end{rm} \n\\end{remark}\n\n\\begin{remarks}\n\\begin{rm}\nThe first assertion in Theorem \\ref{t:wellposed_1} establishes the existence of a linear semigroup defined\non $Y$ which describes the original uncontrolled dynamics. \nIt is worth noting that if the SMGT equation is complemented with either Dirichlet or Neumann\nBC the same result holds true, as it was first proved in \\cite{marchand} and \\cite{kalten-las-mar_2011};\nin that case the semigroup is actually a {\\em group} on $Y$. 
\nInstead, the group property is not valid any more in the presence of absorbing BC on $\\Gamma_1$. \n\nThe studies \\cite{kalten-las-mar_2011} and \\cite{marchand} -- the latter providing a clarifying \nspectral analysis -- obtain that (still in the case of Dirichlet or Neumann BC)\nthe semigroup $e^{tA}$ is exponentially stable on the factor space $Y\/{\\ker(A)}$, \nprovided $\\gamma > 0$; it is marginally stable when $\\gamma =0$ and unstable when $\\gamma < 0$.\nIn the present case, assuming appropriate geometric conditions on $\\Gamma_0$, the absorbing boundary conditions\nturn marginal stability ($\\gamma =0$) into stability. \nThis issue has not been fully investigated so far, yet it is expected that the multipliers' method combined\nwith a background on wave equations would provide the tools. \n\\end{rm}\n\\end{remarks}\n\n\\begin{remark}\n\\begin{rm}\n({\\em A distinct perspective})\nThe connection between the MGT equation and wave equations with memory has been pointed out in \nthe recent independent works \\cite{pata1} and \\cite{bucci-pan_arxiv2017}.\nThe critical role of $\\gamma$ as a threshold for uniform stability is revisited and recovered in \n\\cite{pata1} via the analysis of a corresponding viscoelastic equation. \nIt is apparent that appropriate compatibility conditions on initial data must be assumed,\nin order to study the third order (in time) equation by using theories pertaining to\nwave equations with a non-local term.\n\nAnd yet, the perspective of equations with memory opens a distinct avenue of investigation\nof the (interior and trace) regularity properties of the corresponding solutions, fruitfully explored in \\cite{bucci-pan_arxiv2017} -- as well as, possibly, of other control-theoretic properties.\nIn this connection, we mention the paper \\cite{pandolfi-TAC_2018}, which provides an analysis\nof the LQ problem and Riccati equations for finite dimensional systems with memory. \n\\end{rm} \n\\end{remark}\n\n\n\n\\subsection{Domain of the generator} \\label{ss:domain-generator}\nWe give an explicit description of the natural domain of the (free) dynamics generator $A$ introduced in\n\\eqref{e:generator}: given the state space $Y = H^1(\\Omega) \\times H^1(\\Omega) \\times L^2(\\Omega)$, \none has\n\\begin{equation*}\n\\begin{split}\ny\\in {\\mathcal D}(A) \\Longleftrightarrow \\; & y\\in \\Big\\{y =(y_1,y_2,y_3) \\in Y\\colon y_3 \\in H^1(\\Omega)\\,,\n\\\\[1mm]\n& \\qquad\\qquad\\qquad \nc^2 y_1 + b y_2 + N_1 \\big(c N_1^* ({\\mathcal A}+I) y_2 + \\frac{b}{c} N_1^* ({\\mathcal A}+I) y_3\\big)\\in {\\mathcal D}({\\mathcal A})\\Big\\}\\,. \n\\end{split}\n\\end{equation*}\nFrom the PDE viewpoint the above corresponds to \n\\begin{equation*}\n\\begin{split}\ny\\in {\\mathcal D}(A) \\Longleftrightarrow \\; & y\\in \\Big\\{y \\in [H^1(\\Omega)]^3\\colon \\Delta (c^2 y_1 + b y_2) \\in L^2(\\Omega)\\,, \n\\;{ \\frac{\\partial}{\\partial \\nu } }(c^2 y_1 + b y_2 ) =0 \\;\\textrm{on $\\Gamma_0$}\\,,\n\\\\[1mm]\n& \\qquad\\qquad\\qquad \nc{ \\frac{\\partial}{\\partial \\nu } } (c^2 y_1 + b y_2 )\\Big|_{\\Gamma_1} = - \\Big[c^2y_2+ b y_3\\Big]\\Big|_{\\Gamma_1}\n\\;\\textrm{on $\\Gamma_1$}\\Big\\}\\,.\n\\end{split}\n\\end{equation*}\nNotice that by a standard variational argument the normal derivatives are first well defined on \n$H^{-1\/2}(\\Gamma)$. 
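\n(Indeed, if $w\\in H^1(\\Omega)$ is such that $\\Delta w\\in L^2(\\Omega)$ -- which is the case for $w=c^2 y_1 + b y_2$ above -- then the normal trace $\\frac{\\partial w}{\\partial \\nu}\\in H^{-1\/2}(\\Gamma)$ is defined through the generalized Green formula\n\\begin{equation*}\n\\Big\\langle \\frac{\\partial w}{\\partial \\nu},\\phi\\big|_{\\Gamma}\\Big\\rangle_{\\Gamma} = \\int_\\Omega \\Delta w\\, \\phi\\,dx + \\int_\\Omega \\nabla w\\cdot\\nabla \\phi\\,dx\\,, \\qquad \\phi\\in H^1(\\Omega)\\,;\n\\end{equation*}\nthis is the variational argument alluded to above.)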
\nThen, the $H^{1\/2}(\\Gamma)$-regularity of $y_i$, $i =1,2,3$, along with elliptic theory gives\n\\begin{equation}\\label{e:domain-pde}\n\\begin{split}\n& {\\mathcal D}(A) = \\Big\\{y \\in [H^1(\\Omega)]^3\\colon (c^2 y_1 + b y_2) \\in H^2(\\Omega)\\,, \n\\;{ \\frac{\\partial}{\\partial \\nu } }(c^2 y_1 + b y_2 ) =0 \\;\\textrm{on $\\Gamma_0$}\\,,\n\\\\[1mm]\n& \\qquad\\qquad\\qquad \nc{ \\frac{\\partial}{\\partial \\nu } } (c^2 y_1 + b y_2 )\\Big|_{\\Gamma_1} = - \\Big[c^2y_2+ b y_3\\Big]\\Big|_{\\Gamma_1}\n\\;\\textrm{on $\\Gamma_1$}\\Big\\}\\,.\n\\end{split}\n\\end{equation}\nWe also note that the resolvent of $A$ is not compact, which is important to be pointed out.\n\n\n\\subsection{The SMGT equation subject to smooth controls}\nNow let us turn our attention to the controlled (abstract) equation \\eqref{e:controlled-eq} corresponding to the BVP \\eqref{e:bvp-0} and to its reformulation as the first-order control system \\eqref{e:1order-system}.\nThis system produces readily a solution formula, assuming that $g\\in H^1(0,T;L^2(\\Gamma_0))$:\nthe following Proposition provides a rigorous justification.\n \n\\begin{proposition} \\label{p:firstorder-controlled-wellposed}\nAssume that $g\\in H^1(0,T;L^2(\\Gamma_0))$.\nThe boundary value problem \\eqref{e:bvp-0} for the SMGT equation can be recast as the (third order in time) abstract equation \\eqref{e:controlled-eq}; equivalently, it is rewritten as a first order abstract system \\eqref{e:1order-system} that is\n\\begin{equation} \\label{e:state-eq_0}\ny'=Ay+B_0g+B_1g_t\\,;\n\\end{equation}\n$y$ denotes the state variable $(u,u_t,u_{tt})$ and $g$ is the control variable,\nwhile the linear operators $A$ and $B_i$ satisfy the following analytical properties.\n\\begin{enumerate}\n\n\\item[i)] \nthe operator $A$ which describes the free dynamics, detailed in \\eqref{e:generator},\nis the generator of a $C_0$-semigroup $\\{e^{At}\\}_{t\\ge 0}$ on the function space\n$Y=H^1(\\Omega)\\times H^1(\\Omega)\\times L^2(\\Omega)$, with domain \n${\\mathcal D}(A)$ given in \\eqref{e:domain-pde};\n\n\\item[ii)]\nthe control operators $B_i$, $i=0,1$ defined in \\eqref{e:input-operators} satisfy \n$B_i\\in {\\mathcal L}(U,[{\\mathcal D}(A^*)]')$. 
\n\\end{enumerate}\nThen, the third order equation \\eqref{e:controlled-eq} is understood on the \nextrapolation space $[{\\mathcal D}({\\mathcal A})]'$.\n\n\\end{proposition}\n\n\\begin{proof}\nSince the Neumann maps $N_i$ defined in \\eqref{e:N_i} enjoy the regularity in \\eqref{e:neumann-regularity}, \nthat is $N_i \\in {\\mathcal L}(L^2(\\Gamma_i),{\\mathcal D}({\\mathcal A}^{3\/4-\\sigma}))$, we accordingly have \nthat the distributional range of the control maps $B_i$ is such that \n\\begin{equation*}\n{\\mathcal R}(B_i) \\subset \\{0\\}\\times \\{0\\} \\times [{\\mathcal D}({\\mathcal A}^{1\/4 + \\sigma})]'\\,.\n\\end{equation*}\nTo see this, just recall the explicit form of the input operator $B_0$ in \\eqref{e:input-operators},\nwhich gives \n\\begin{equation*}\n\\begin{split}\n|(B_0 g,y)|_Y&=|(c^2 ({\\mathcal A}+I)N_0 g,y)|_Y\n=c^2 |(g,y_3|_{\\Gamma_0})_{L^2(\\Gamma_0)}|\n\\le C\\,c^2 \\,|y_3|_{H^{1\/2+2\\sigma}(\\Omega)} |g|_{L^2(\\Gamma_0)}\n\\\\[1mm]\n& \\le C\\,c^2\\,|{\\mathcal A}^{1\/4+\\sigma} y_3 |_{L^2(\\Omega)} |g|_{L^2(\\Gamma_0)}\\,,\n\\end{split}\n\\end{equation*}\nwhich proves that there exists a positive constant $C$ such that \n\\begin{equation*}\n|(B_i g,y)|_Y\\le C\\,|{\\mathcal A}^{1\/4+\\sigma} y_3 |_{L^2(\\Omega)} |g|_{L^2(\\Gamma_0)}\n\\le C\\,|A y|_Y \\,|g|_{L^2(\\Gamma_0)}\\,, \\qquad i=0,1\\,,\n\\end{equation*}\nsince $B_1=b\/{c^2} B_0$.\n\nBy using interpolation trace results, a stronger inequality is obtained:\nfor any $\\epsilon > 0 $ one has \n\\begin{equation*}\n|(B_i g, y)|_Y\\le C \\,|Ay|_Y^{1\/2}|y|_Y^{1\/2} |g|_{L^2(\\Gamma_i)}\n\\le \\big(\\epsilon |Ay|_Y + C_\\epsilon |y|_Y\\big)\\,|g|_{L^2(\\Gamma_i)}\\,,\n\\end{equation*}\nwhich gives \n\\begin{equation*}\n|B_i^*y|_{L_2(\\Gamma_i)} \\le \\epsilon |Ay|_Y + C_{\\epsilon} |y|_Y \\qquad \\forall \\epsilon > 0\\,.\n\\end{equation*}\n\\end{proof}\n\nIn view of Proposition~\\ref{p:firstorder-controlled-wellposed} (hence, still under the assumption\n$g\\in H^1(0,T;U)$), semigroup theory yields a first input-to-state formula in the extrapolation space $[{\\mathcal D}(A^*)]'$.\n\n\\begin{corollary}\\label{c:state-in-extrapolation}\nFor any initial state $y_0 \\in [{\\mathcal D}(A^*)]'$ and any control $g\\in H^1(0,T;U)$,\nthe control system \\eqref{e:state-eq_0} has a unique mild solution $y\\in C([0,T];[{\\mathcal D}(A^*)]')$\ngiven by \n\\begin{equation}\\label{e:sln_0}\n\\begin{split}\ny(t) &= e^{At} y(0) + \\int_0^t e^{A(t-s)} \\big(B_0 g(s)+B_1 g_t(s)\\big)\\,ds=\n\\\\[1mm]\n&= e^{At} y(0) + \\int_0^t e^{A(t-s)} B_0 \\Big(g(s) + \\frac{b}{c^2}g_t(s)\\Big)\\,ds\\,.\n\\end{split}\n\\end{equation}\n\n\\end{corollary}\n\n\n\\section{The control problem. 
Main results} \\label{s:auxiliary-and-riccati}\nIf the cost functional \\eqref{e:cost-funct} penalized (quadratically) the time derivative of\nthe control function, we might choose as space of admissible controls \n${\\mathcal U}=H^1(0,T;L^2(\\Gamma_0))$, and the obtained semigroup solution formula \\eqref{e:sln_0}\nas the state equation.\nRemember however that we seek to minimize the functional \\eqref{e:cost-funct} over all controls $g$ which belong to $L^2(0,T;L^2(\\Gamma_0))$, where the acoustic pressure $u$ satisfies the IBVP \\eqref{e:ibvp-1}.\nHence, in this Section we first derive from \\eqref{e:sln_0} a solution formula which requires\ncontrols which just belong to $L^2(0,T;L^2(\\Gamma_0))$ and are continuous at time $t=0$;\nthis is done by an elementary integration (in time) by parts.\nThen, following an idea proposed in \\cite{LLP} and \\cite{LPT}, we introduce an (auxiliary) optimal control\nproblem associated to an equation depending on a parameter $g_0\\in L^2(\\Gamma_0)=:U$.\nThe main result pertaining to the auxiliary problem and the connection with the original one \nare stated collectively in the section. The respective proofs are the subject of the\nsubsequent two sections.\n\n\\subsection{Control problem with the observation} \nOur next step is to provide a representation formula for the solutions to the controlled dynamics \nby assuming that controls belong to $L^2(0,T;U)$. \nThis is done, as usual, integrating by parts (in a dual space) and exploiting the structure of the domain\nof the generator. \n\n\\begin{lemma}\\label{l:rep}\nGiven an initial state $y_0\\in [{\\mathcal D}({A^2}^*)]'$ and any control function $g\\in C([0,T;U)$, the solution\nto the original control system \\eqref{e:1order-system}, represented via the\ninput-to-state formula \\eqref{e:sln_0}, is equivalently given by \n\\begin{equation}\\label{e:eq-for-U}\ny(t) = e^{At}[y_0 - B_1g(0)] + Lg(t)\\,,\n\\end{equation}\nwith \n\\begin{equation}\\label{e:input-to-state-map}\n\\begin{split}\n(Lg)(t) &= B_1 g(t) + (L_0 g)(t)\\,,\n\\\\[1mm]\n(L_0 g)(t) &= \\int_0^t e^{A(t-s)} B_0 g(s) ds + A\\int_0^t e^{A(t-s)} B_1 g(s) ds\\,.\n\\end{split}\n\\end{equation}\nThe map $(y_0,g) \\rightarrow y(\\cdot)$ is bounded from \n$[{\\mathcal D}({A^2}^*)]' \\times C([0,T; U) \\rightarrow C([0,T;[{\\mathcal D}({A^2}^*)]')$.\n\\end{lemma}\n\n\\begin{proof}\nThe novel representation formula \\eqref{e:eq-for-U} is easily established integrating by parts in \n\\eqref{e:sln_0}; what we need to justify rigorously is the claimed regularity. \nWe know already that $e^{At}$ generates a $C_0$-semigroup on $[{\\mathcal D}({A^2}^*)]'$, and that \n$A^{-1} B_i \\in L(U,Y)$.\nThen, it suffices to analyze the regularity of the operator $L_0$ in \\eqref{e:input-to-state-map},\nwhich depends on the one of the operator $AB_1$. \nRecalling the definitions of $A$ and $B_1$, it is easily seen that \n\\begin{equation*}\nAB_1 =b\n\\begin{pmatrix}\n0 \n\\\\[1mm]\n({\\mathcal A}+I)N_0\n\\\\[1mm]\n- \\alpha ({\\mathcal A}+I)N_0\n\\end{pmatrix} \n\\end{equation*}\nwhere we have used that the distributions on $\\Gamma_0$ and $\\Gamma_1$ have disjoint support.\nThis implies that the contribution of the operator $N_1$ in the definition of $A$, when applied to $B_1$, produces the zero element. 
\nAs a consequence, we obtain that \n\\begin{equation*}\n{\\mathcal R}(AB_1) \\subset \\{0\\} \\times [{\\mathcal D}({\\mathcal A}^{1\/4+\\epsilon}]' \\times [{\\mathcal D}({\\mathcal A}^{1\/4+\\epsilon}]' \n\\subset [{\\mathcal D}(A^{2*})]'\\,,\n\\end{equation*}\nwhich gives the desired conclusion.\n\\end{proof}\n\nObserve that -- just like in the works \\cite{LLP} and \\cite{LPT} -- the drawback of the chosen approach\nis that the space regularity of the state function gets worse.\nMoreover, in contrast with the dynamics under investigation therein, whose underlying semigroup is \nanalytic, we are dealing with a purely hyperbolic problem.\n\\\\\nOn the other hand, recall that the goal is to minimize the $L^2(\\Omega)$-norm of the acoustic pressure,\ndescribed by the state variable $u$, that is the first component of the state variabile $y$.\nBy setting $u^d=0$ in \\eqref{e:cost-funct} just for the sake of simplicity, \nthe cost functional is abstractly rewritten as\n\\begin{equation}\\label{J}\nJ(g) = \\int_0^T \\|R y\\|_Y^2 \\,dt + \\int_0^T \\|g\\|_U^2\\,dt\\,,\n\\end{equation}\nwhere $U$ denotes the control space, i.e. $U=L^2(\\Gamma_0)$, and the observation operator\n$R$ is acting as follows: for any $y = [y_1,y_2,y_3]^T$, it holds \n\\begin{equation}\\label{e:def-observation-op}\nR y = \\begin{pmatrix}\n{\\mathcal A}^{-1\/2}y_1 \n\\\\[1mm]\n0\n\\\\[1mm]\n0\n\\end{pmatrix}\\,.\n\\end{equation}\nIn fact, after identifying $H^1(\\Omega)$ with ${\\mathcal D}({\\mathcal A}^{1\/2})$, we see that\n\\begin{equation*}\n\\|Ry\\|_Y = \\|{\\mathcal A}^{1\/2} {\\mathcal A}^{-1\/2} y_1\\|_{L^2(\\Omega)} = |y_1|_{L^2(\\Omega)}\\,.\n\\end{equation*}\nThus, the simple -- and yet natural -- quadratic functional taken into consideration,\nattributes to the observation operator $R$ a very special structure and an {\\em intrinsic} strong smoothing effect.\nThe improved regularity of the observed states enables us to pursue an adaptation of \nthe theory developed in \\cite[Vol.~II]{redbook} in the study of hyperbolic-like PDE's\nwith boundary or point control actions and ``smoothing'' observations. \n\n\\medskip\n\n\n\\subsection{Main Results}\nIn this subsection we shall formulate the main results, while the proofs are relegated to the next section. \nWe shall begin with a negative result.\n\nConsider the following minimization problem.\n\n\\begin{problem} \\label{p:pbm_0}\nFor any $y_0 \\in Y$, minimize the cost functional \\eqref{J} over all controls $L^2((0,T)\\times \\Gamma_0)$, \nwhere $y(\\cdot)$ satisfies the controlled equation \\eqref{e:eq-for-U}.\n\\end{problem}\n\n\\begin{theorem} \\label{l:neg} \nIf the initial state $y_0$ belongs to ${\\mathcal R}(B_1)$, then Problem \\ref{p:pbm_0} does not have a solution. \n\\end{theorem}\n\n\nGiven this negative result, one might wonder what are the additional constraints which render the problem solvable. \nThe proof of the negative result (cf.~\\cite{LPT}) reveals that the issue is in singularity of control, as\nthe ``candidate'' to be the optimal control is no longer in the space $L^2(0,T; U)$.\n(This depends upon the appearance of a (time-)trace operator -- intrisincally uncloseable -- in the definition of the state.) \n\nIn view of the above, we shall consider an input-to-state formula depending on a given parameter $g_0 \\in U$, that is \n\\begin{equation}\\label{g0}\ny_{g_0}(t) = e^{At}(y_0 - B_1g_0) + Lg(t)\\,,\n\\end{equation}\nwith $L$ defined in \\eqref{e:input-to-state-map}. 
This idea has been developed in \\cite{LLP,LPT}.\nWhen $g(0) =g_0$ the above controlled dynamics coincides with the one given by \\eqref{e:eq-for-U}.\nWith \\eqref{g0} we associate the same cost functional \\eqref{J}.\nA new (`extended') optimal control problem is formulated as follows.\n\n\\begin{problem} \\label{p:pbm_1-parameter}\nFor any $y_0 \\in [{\\mathcal D}({A^*}^2)]'$, $g_0\\in U$, minimize the cost functional \\eqref{J} overall controls $g\\in L^2((0,T)\\times \\Gamma_0)$, with $y$ subject to \\eqref{g0}.\n\\end{problem}\nFor this problem the following results holds true.\n\n\\begin{theorem}\\label{l:pos} \nThe optimization Problem \\ref{p:pbm_1-parameter} has a unique solution \n$\\hat{g}_{g_0} \\in L^2(0,T;U)$.\nIts corresponding optimal trajectory satisfies\n\\begin{equation}\\label{e:memberships} \n\\hat{y}_{g_0} \\in C([0,T];[{\\mathcal D}({A^*}^2]')\\,, \n\\quad\nR\\hat{y}_{g_0} \\in C([0,T];Y)\\,.\n\\end{equation} \n\\end{theorem}\n\nThe first main result of the paper establishes the feedback synthesis of the optimal control referred to in Theorem~\\ref{l:pos}. \nFor clarity of the exposition, we shall take $u_d =0$. \n\n\\begin{theorem}\\label{T0}\nWith reference to the minimization Problem \\ref{p:pbm_1-parameter}, the following \nstatements are valid.\n\n\\begin{enumerate}\n\n\\item[i)] (Partial regularity)\nFor any $y_0 \\in [{\\mathcal D}({A^2}^*)]'$, and any $g_0 \\in U$, the unique optimal control \n$\\hat{g}_{g_0}$ belongs to $C([0,T];U]$, and produces the output \n$R\\hat{y}_{g_0} \\in C([0,T];Y)$.\n \n\\item[ii)] (Riccati Equation) \nFor every $t \\in [0,T]$, there exists a self-adjoint positive operator $P(t)$ on $L(Y)$, \nwhose regularity is as follows,\n\\begin{equation*}\nA^* P(t) \\in {\\mathcal L}(Y)\\,, \\quad B_1^* A^* P(t) \\in {\\mathcal L}(Y,U) \\; \\textrm{continuously in time,}\n\\end{equation*}\nand which satisfies the following (non-standard) Riccati equation: \n\\begin{equation}\\label{e:RE-0}\n\\begin{split}\n& \\frac{d}{dt}(P(t) y, w)_Y +(Ay, P(t) w)_Y + (P(t) y, Aw)_Y + (Ry, Rw)_Y = \n\\\\[1mm] \n& \\qquad =((B_0^* + B_1^* A^*) P(t)y,([B_0^* + B_1 A^*) P(t) w)_{U} \n\\quad \\textrm{for all $y, w\\in {\\mathcal D}(A)$}\n\\end{split}\n\\end{equation}\nwith terminal condition $P(T) =0$.\nThe equation \\eqref{e:RE-0} actually extends to all $y, w \\in Y$.\n\n\\item[iii)] (Feedback synthesis) \nThe optimal control $\\hat{g}_{g_0}(\\cdot)$ has the following feedback representation:\n\\begin{equation*} \n\\hat{g}_{g_0}(t) = - \\big(I - [B_0^* + B_1^* A^*] P(t) B_1 \\big)^{-1} \n[B_0^* + B_1^* A^* ] P(t) \\hat{y}_{g_0}(t)\\,,\n\\end{equation*}\nwhere the operator $G(t)=I - [B_0^* + B_1^* A^*] P(t) B_1$ is boundedly invertible on $U$ for each \n$t \\in [0,T]$.\n\n\\end{enumerate}\n\n\\end{theorem}\nFrom the structure of the Riccati equation \\eqref{e:RE-0}, along with the space regularity of the operator \n$P(t)$ asserted in Theorem \\ref{T0}, some additional regularity of the operator $P(t)$ follows. \n\\begin{corollary}\nThe Riccati operator $P(t)$ is time differentiable from $Y$ into itself. \nMore precisely, the operator $ \\frac{d}{dt} P(t)\\colon Y \\rightarrow C([0,T];Y)$ is bounded. \n\\end{corollary}\n\n\n\\begin{remark}\n\\begin{rm}\nWe note that the Riccati equation \\eqref{e:RE-0} is termed {\\em non-standard} \n(already in \\cite{LPT}) because of the special structure of its quadratic term. 
\nThis feature results from the lack of coercivity in the functional cost, a cause for singularity of the minimization problem.\nThen, the feedback formula which allows the synthesis of the optimal control of Problem~\\ref{p:pbm_1-parameter}\ninvolves the inverse of certain operator defined on the control space $U$.\nInvertibility of the said operator is an issue already encountered in \\cite{LLP} and \\cite{LPT}: however, differently from those studies, in the present case we cannot appeal to the analyticity of the semigroup underlying the controlled dynamics.\n\\end{rm}\n\\end{remark}\n\nTheorem \\ref{T0} provides the optimal control and the optimal synthesis for the input-state dynamics\n\\eqref{g0}, given $y_0$ and the parameter $g_0$. \nOne aims then at exploring the relation between the parameter $g_0$ with the optimal control $\\hat{g}$,\nwhich is known from Theorem~\\ref{T0} to be continuous on $[0,T]$. \nThus, a question of major concern is whether the parameter $g_0\\in U$ can be selected \nin order that $\\hat{g}(0) = g_0$. \nThe validity of this property will prove the equivalence of the state description in \\eqref{e:eq-for-U} with the one in \\eqref{g0}, thereby ensuring that the latter system corresponds to the original PDE model. \nThe answer to this question is positive, as asserted by the Theorem below. \n\n\\begin{theorem}\\label{T:1}\nThe operator $[I + G(0)B_1]$ is bounded invertible on $U$; in particular, $[I + G(0)B_1]^{-1} \\in {\\mathcal L}(U)$. \nBy choosing $g_0 = [I + G(0) B_1]^{-1} G(0) y_0$, one obtains that\n\\begin{equation*}\n\\hat{y}(t) = e^{At }[ y_0 -B_1 \\hat{g}(0) ] + (L \\hat{g})(t)\\,,\n\\end{equation*} \nso that the original dynamics \\eqref{e:eq-for-U} coincides with the one in \\eqref{g0}. \nMoreover, the obtained $\\hat{g}$ is continuous in time, i.e. $\\hat{g}\\in C([0,T];U)$. \n\\end{theorem}\n\nForcing the original model with continuity of the control at the origin may compromise the optimality. \nInstead, the additional `player' $g_0 \\in U$ is advantageous from the optimality point of view. \nWhile we know that in general there is no optimal control in the class of $L^2(0,T;U)$ functions \n(cf.~Theorem~\\ref{l:neg}), reformulating the solution formula as in \\eqref{g0}, with an additional\nparameter, gives additional possibilities for optimization with respect to the parameter. \n\n\\begin{theorem}\\label{T:2}\nLet $U_0 \\subset U$ be a bounded and weakly closed set in $U$. \nThen, there exists a $g^* \\in U_0$ such that the resulting control $\\hat{g}_{g^*}$ attains the infimum of the functional $J(g)$ with respect to $g_0 \\in U_0$, $g \\in L^2(0,T;U)$ and $y$ satisfying (\\ref{g0}).\nMoreover, the following characterization holds true: either $g^*$ is such that \n$y_0 - B_1 g^* \\in \\ker(B_1^* P(0))$, or $g^* \\in \\partial U_0$.\n\\end{theorem}\n\n\n\\begin{remark}\n\\begin{rm}\nNote that the optimal control of Theorem~\\ref{T:2} provides control which is in a larger space than just\n$L^2(0,T; U)$. \nThis is singular control.\nThe corresponding state is described by \\eqref{g0} and it satisfies $R\\hat{y}_{\\hat{g}_{g^*}} \\in C([0,T]; Y)$.\n\\end{rm}\n\\end{remark}\n\nIt is important to note that from both the point of view of applications as well as of mathematical developments, it is significant to have two versions of optimal solutions corresponding to two different formulations of the input-state map. 
\nIf one is to develop nonlinear versions of the problem, where regularity of controls and of the states is of paramount importance, the first version in Theorem~\\ref{T:1} is the most relevant. \nHowever, from the point of view of automatic control -- where discontinuous inputs are feasible and lead to `better' optimization solutions --, Theorem~\\ref{T:2} becomes more relevant. \nClearly, further discussion of the topic along with relevant examples is appropriate and desirable. \n\n\\begin{remark}\n\\begin{rm}\nThere are several open problems sparked off by the present work. We name but a few.\n \n\\begin{enumerate}\n\\item[i)]\nExtension of the theory to more general observation operators $R$. \nHowever, it is clear that $R$ should display some kind of smoothing effect. \nMoreover, the structure of the problem -- namely, an appropriate interplay between control and observation\noperators -- will need to be carefully chosen, in order that the optimal ($L^2$) solution does exist. \n\n\\item[ii)]\nThe infinite horizon LQ problem in both the stable and the critical case. \nIt is expected that under suitable geometric conditions imposed on $\\Gamma_1$ one could guarantee \nsolvability of the optimization problem, along with a feedback synthesis of the optimal control. \n\n\\item[iii)]\nApplication of the previous result to the feedback control of the nonlinear equation. \nA local theory for small initial data should emerge, while the feedback control should provide a stabilizing effect on the nonlinear dynamics. \n\n\\end{enumerate}\n\\end{rm}\n\\end{remark}\n\nThe remaining parts of the paper are devoted to proofs of four Theorems. \n\n\\section{Proofs of Theorems \\ref{l:neg}, \\ref{l:pos}} \\label{s:proofs_1}\nWe point out at the outset that the main challenge in proving the stated results is to be able to `run' the dynamics on much larger dual spaces, still preserving the invariance \nof the said dynamics. \nThe following Proposition singles out some basic regularity and structural properties \npertaining to the observation operator $R$.\n\n\\begin{proposition}\\label{p:R}\nThe observation $R$ satisfies the following properties. \n\\begin{itemize}\n\\item\n$R \\colon Y \\rightarrow {\\mathcal D}({\\mathcal A}) \\times \\{0\\} \\times \\{0\\}$ is bounded;\n\\item\n$R \\in {\\mathcal L}(Y,{\\mathcal D}(A))$; \n\\item\n$R = R^* $ on $Y$, hence $R \\in {\\mathcal L}([{\\mathcal D}(A^*)]',Y)$.\n\\end{itemize}\n\n\\end{proposition}\n\n\\begin{proof}\nFor the first statement, take $y\\in Y$: then $y_1 \\in {\\mathcal D}({\\mathcal A}^{1\/2})$, and since\n$Ry =({\\mathcal A}^{-1\/2} y_1, 0, 0)^T$ we obtain ${\\mathcal A}^{-1\/2} y_1 \\in {\\mathcal D}({\\mathcal A})$. \n\nThe second statement follows from the calculation with $y \\in Y$\n\\begin{equation*}\nA R y = [0,0,-\\tau^{-1} c^2 {\\mathcal A}^{1\/2} y_1]^T \\in Y \n\\end{equation*}\nWe also note that $A^{-1} \\in {\\mathcal L}(Y,[{\\mathcal D}({\\mathcal A}^{1\/2})]^3)$. \nThe third statement follows from direct calculations using the inner product in $Y$. \n\nThe fourth statement follows combining the third with the second one. 
\n\\end{proof}\n\n\\subsection{Properties of the input-to-output map}\nThe following Lemma captures \na set of functional-analytic properties pertaining to appropriate combination of the involved abstract operators -- namely, the dynamics, control and observation operators --, which will play a major role in the proof of well-posedness of generalized differential\/integral Riccati equations, eventually leading to solvability of the optimal control problem.\n\n\\begin{lemma} \\label{l:abstract-basics}\nLet $A$, $B_i$ and $R$ the dynamics, control, observation operators defined by\n\\eqref{e:generator}, \\eqref{e:input-operators}, \\eqref{e:def-observation-op}, respectively.\nThen,\n\\begin{enumerate}\n\\item[i)]\n$RA^2$ can be extended to a {\\em bounded} operator on the state space $Y$;\n\\item[ii)]\n$RB_1=0$;\n\\item[iii)]\n$(I+A)^{-1}B_i$ are bounded and compact operators from $L^2(\\Gamma_i)$ into $Y$, $i=0,1$.\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\ni) We take an element $y=(y_1,y_2,y_3)$ initially assumed in ${\\mathcal D}(A^2)$, and compute\n\\begin{equation*}\n\\begin{split}\n& A^2y = A(Ay)= \n\\\\[1mm]\n& \\quad= A\n\\begin{pmatrix}\ny_2\n\\\\[1mm]\ny_3\n\\\\[1mm]\n- c^2{\\mathcal A} y_1- [b {\\mathcal A}+c({\\mathcal A}+I)N_1N_1^*({\\mathcal A}+I)]y_2\n-\\big[\\alpha I +\\frac{b}{c}({\\mathcal A}+I)N_1 N_1^*({\\mathcal A}+I)\\big]y_3\n\\end{pmatrix}\n\\\\[1mm]\n& \\quad = \\begin{pmatrix}\ny_3\n\\\\[1mm]\n\\ldots\\ldots\\ldots\n\\\\[1mm]\n\\ldots\\ldots\\ldots\\ldots\\ldots\\ldots\n\\end{pmatrix}\n\\end{split}\n\\end{equation*}\nwhere the second and third component of $A^2y$ are unspecified, owing to the \nstructure of the observation operator $R$ to be applied.\nConsequently, \n\\begin{equation*}\nR\\, A^2y = \\begin{pmatrix}\n(I+{\\mathcal A})^{-1\/2} y_3 \\\\ 0 \\\\ 0\n\\end{pmatrix}\n\\end{equation*}\nand \n\\begin{equation*}\n\\|R\\, A^2y\\|_Y = \\left\\| \\begin{pmatrix} (I+{\\mathcal A})^{-1\/2} y_3 \\\\ 0 \\\\ 0 \\end{pmatrix}\\right\\|_Y\n= \\big\\|(I+{\\mathcal A})^{1\/2}(I+{\\mathcal A})^{-1\/2}y_3\\big\\|=\\|y_3\\|_{L^2(\\Omega)}\n\\end{equation*}\n\n\\smallskip\n\\noindent\nii) It is immediately verified that for any $h\\in L^2(0,T;L^2(\\Gamma_1))$ \n\\begin{equation*}\nR\\,B_1 h=R \\begin{pmatrix}\n0 \n\\\\[1mm]\n0 \n\\\\[1mm]\nb ({\\mathcal A}+I)N_1\\,h\n\\end{pmatrix}\n=(I+{\\mathcal A})^{-1\/2}\\,0=0\\,.\n\\end{equation*}\n\n\\smallskip\n\\noindent\niii) It is clear that the resolvent $(I+A)^{-1}$ is not compact.\nHowever, we have\n\\begin{equation}\n(I+A)^{-1} B_0 = c^2 \\begin{pmatrix} N_0 \\\\ 0 \\\\ 0 \\end{pmatrix}\\,, \n\\qquad\n(I+A)^{-1} B_1 = \\frac{b}{c^2} (I+A)^{-1} B_0=\nbc^{-2}\\begin{pmatrix} N_0 \\\\ 0 \\\\ 0 \\end{pmatrix}\n\\end{equation}\nand because ${\\mathcal R}(N_0) \\subset H^{3\/2}(\\Omega)$, the operators $A^{-1} B_i$\nare not only bounded from $L_2(\\Gamma_0) \\rightarrow Y$, but also compact. \n\n\\end{proof}\n\nThe following Lemma pertains to the regularity of the map $RL_0$.\n\n\\begin{lemma}\\label{l:L}\nLet $L_0$ be the operator defined by \\eqref{e:input-to-state-map}.\nThen \n\\begin{itemize}\n\\item\n$RL_0$ is a compact operator from $L^2(0,T;L^2(\\Gamma_0))$ into $C([0,T];Y)$.\n\\item\n$R e^{A\\cdot} B_i \\colon L^2(0,T;L^2(\\Gamma_0)) \\rightarrow C([0,T];Y)$, $i=0,1$,\nare compact. 
\n\\end{itemize}\n\n\\end{lemma}\n\n\\begin{proof}\nThe first statement follows computing \n\\begin{equation}\n\\begin{split}\nR L_0\\colon g\\longmapsto R L_0 g \n& = R \\int_0^t e^{A(t-s)} B_0 g(s) ds - R A\\int_0^t e^{A(t-s)} B_1 g(s) ds\n\\\\[1mm]\n& = R A^2 \\int_0^t e^{A(t-s)} A^{-2} B_0 g(s) - R A^2 \\int_0^t e^{A(t-s)} A^{-1} B_1 g(s)ds\n\\end{split}\n\\end{equation}\nin view of Lemma~\\ref{l:abstract-basics}, combined with Aubin-Simon compactness criterion. \n\nThe second statement follows rewriting $R e^{At } B_0$ as follows,\n\\begin{equation*}\nR e^{At } B_0 = R A e^{At} A^{-1} B_0\\,,\n\\end{equation*}\nwhere $RA\\in {\\mathcal L}(Y)$ and $A^{-1}B_0 \\colon U \\rightarrow Y$ compactly. \nThe strong additional regularity $R A^2 \\in {\\mathcal L}(Y) $ allows to handle \nthe time derivative \n\\begin{equation*}\n\\frac{d}{dt} R Ae^{At } A^{-1} B_0 = RA^2 e^{At} A^{-1} B_0 \\in {\\mathcal L}(U,Y)\\,,\n\\end{equation*}\nas needed for the applicability of the Aubin-Simon compactness criterion. \n\\end{proof}\n\n\\subsection{Proof of Theorem~\\ref{l:neg}} \nWe will denote by $J(g)$ the cost functional $J(g,y)$, where $y(\\cdot)=y(\\cdot;g)$ corresponds to the state variable given by \\eqref{e:eq-for-U}. \nTake $y_0 \\in {\\mathcal R}(B_1)\\in [{\\mathcal D}(A^*)]'$ and select a sequence of controls $g_n \\in H^1(0,T; U )$\nsuch that \n\\begin{itemize}\n\\item[i)]\n$B_1 g_n(0) = y_0$, \n\\item[ii)]\n$g_n \\rightarrow 0$ in $L^2(0,Y;U)$. \n\\end{itemize}\nThen, with $y_n(t) =y_n(t,g_n) = e^{At} \\big(y_0 - B_1 g_n(0)\\big) + (L_0 g_n )(t) \n+ B_1 g_n (t)$ \nwe have \n\\begin{equation*}\nRy_n = R L_0 g_n \\longrightarrow 0 \\quad \\textrm{in $L^2(0,T;Y)$,} \n\\end{equation*}\non the strength of Lemma~\\ref{l:L}. \nConsequently, $J(g_n) \\rightarrow 0$.\n\nSince $g_n \\rightarrow 0$ in $L^2(0,T;U )$, we turn to\n$J(0) = \\int_0^T |R e^{At} y_0 |_Y^2 dt > 0$, \nwhich combined with $g_n \\rightarrow 0$ contradicts the existence of a minimizer. \n\n\\subsection{Proof of Theorem \\ref{l:pos}} \nThe argument is in principle standard, as it is based on proving weak lower semicontinuity\nof the cost functional. \nThus, the challenge is to establish appropriate regularity of the input-to-state map,\nwhich is not obvious in view of the high unboundedness of the control input operators. \nHowever, this is possible exploiting the smoothing effect of the observation operator\nas well as the properties specifically established for the input-to-output map (cf.~Lemma~\\ref{l:L}). \nTo wit: for a given $g_0 \\in U$ consider a minimizing sequence $g_n \\in L^2(0,T;U)$,\nso that $J(g_n) \\rightarrow d = \\inf_{g_n \\in L^2(0,T;U)} J(g_n)$.\nThen, coercivity of the cost in $L^2(0,T;U)$ gives the bound $\\|g_n\\|_{L_2(0,T;U)} \\le M$\nwhich implies that\n\\begin{equation} \ng_n \\rightarrow g \\quad \\text{weakly in $L^2(0,T;U)$.} \n\\end{equation}\nWe also have \n\\begin{equation*}\nR y_n(t) = Re^{At} (y_0 - B_1 g_0) + (RL_0 g_n)(t)\\,.\n\\end{equation*}\nOn the strength of Lemma~\\ref{l:abstract-basics} and Lemma \\ref{l:L}, for a subsequence\n-- denoted by the same symbol -- it follows $R L_0 g_n \\rightarrow R L_0 g$ in $L^2(0,T;Y)$.\nIn addition, $R e^{At} B_1 = R A e^{At}A^{-1} B_1$ is bounded from $L^2(0,T;U)$ into\n$L^2(0,T;Y)$. 
\nThis implies the weak lower semicontinuity of $J(g)$, along with $J(g) \\le d $, which proves optimality.\nThe regularity in \\eqref{e:memberships} pertaining to the observed optimal state,\nfollows in view of the obtained regularity of the three summands in \n\\begin{equation*}\nR y(t) = Re^{At} (y_0 - B_1 g_0) + (RL_0 g)(t)\\,,\n\\end{equation*}\nwhere in particular $Re^{At} y_0=RA^2 e^{At} A^{-2}y_0\\in C([0,T];Y)$ for any \n$y_0\\in [{\\mathcal D}((A^2)^*)]'$, \nthanks to the property i) of Lemma~\\ref{l:abstract-basics}. \n\n\\section{Proof of Theorem \\ref{T0}} \\label{s:proofs_2}\nGiven the the solution formula \\eqref{g0}, with the input-to-state map $L$ defined\nin \\eqref{e:input-to-state-map}, let us consider the dynamics \n\\begin{equation}\\label{alpha}\ny_{\\alpha}(t) = e^{At} \\alpha + (Lg)(t) \n\\end{equation}\ndepending on the parameter $\\alpha \\in [{\\mathcal D}(A^*)]'$.\nThis choice is justified by $B_1 g_0 \\in [{\\mathcal D}(A^*)]'$ for $g_0\\in U$.\n(We note that $y_{g_0}(\\cdot)$ has been used to denote the function in \\eqref{g0}, with emphasis on the dependence of $y$ on $g_0\\in U$, beside to $y_0$. \nIn the present section, although with a certain abuse of notation, with $y_{\\alpha}(\\cdot)$\nwe shall be always referring to the ``full'' parameter $\\alpha$, rather than to its component $g_0$.) \nRecall that \n\\begin{equation}\\label{A-1B}\nA^{-1} B_0 g= [{\\mathcal A}^{-1} ({\\mathcal A} + I ) N_0g, 0, 0 ]^T \\in \nH^{3\/2}(\\Omega) \\times \\{0\\} \\times \\{0\\} \\subset Y\\,. \n\\end{equation}\nWe add that on the strength of \\eqref{A-1B} and $A^{-2} AB_1 = A^{-1} B_1$,\none gets \n\\begin{equation}\\label{L}\nL\\in {\\mathcal L}(L^2(0,T;U),C([0,T];[{\\mathcal D}(A^*)^2]'))\\,.\n\\end{equation}\n\nThe following auxiliary control problem is naturally associated to \\eqref{alpha}.\n\n\\begin{problem}[\\bf Problem ${\\mathcal P}_\\alpha$] \\label{p:alfa}\nFor any $\\alpha \\in [{\\mathcal D}(A^{2*})]'$, minimize the functional \n\\begin{equation}\\label{Jx}\nJ(g,y_{\\alpha}) = \\int_0^T \\|R y_{\\alpha}\\|_Y^2 \\,dt + \\int_0^T \\|g\\|^2_{U}\\,dt\\,,\n\\end{equation}\noverall controls $g\\in L^2(0,T;U)$, with $y_{\\alpha}(\\cdot)$ solution to \\eqref{alpha}.\n\\end{problem}\n\nOf course, our goal is to obtain the results in the topology of the original spaces $Y$ and $U$. While this is not possible for the entire control system, \nit turns out that the optimal solution displays an additional regularity that will make it\npossible the return to the original state space. \nThe corresponding result is formulated below. \nFor simplicity of notation we shall set $C(Y) = C([0,T];Y)$ and $L^2(Y) = L_2(0,T;Y)$;\na similar notation will be adopted with $Y$ replaced by $U$. \n\n\\begin{proposition}\\label{T}\nWith reference to the parametrized control Problem~\\ref{p:alfa}, the following statements\nare valid. \n\\begin{enumerate}\n\n\\item[i)]\nFor any $\\alpha \\in [{\\mathcal D}({A^*}^2)]'$, there exists a unique optimal control\n$g^0 (\\cdot)\\in L^2(0,T;U)$, which additionally satisfies $g^0 \\in C([0,T];U)$. \nMoreover, $R y_{\\alpha}^0 \\in C[[0,T];Y)$. 
\n\n\\item[ii)]\nThere exists a selfadjoint, positive operator $P(t)$ on ${\\mathcal L}(Y)$ with the\nfollowing regularity,\n\\begin{equation*}\nA^* P(t)A \\in {\\mathcal L}(Y,C(Y))\\,, \\quad B_1^* A^* P(t) \\in {\\mathcal L}(Y,C(U))\\,,\n\\quad P_t \\in {\\mathcal L}(Y,C(Y))\\,;\n\\end{equation*}\n$P(t)$ satisfies the following (non-standard) Riccati equation, valid for any $y, w \\in {\\mathcal D}(A)$:\n\\begin{equation}\\label{Ric}\n\\begin{split}\n& \\frac{d}{dt}(P(t)y,w)_Y +(Ay, P(t)w)_Y + (P(t) y, Aw)_Y + (Ry, R\\hat{y} )_Y = \n\\\\[1mm] \n& \\; \\big( (B_0^* + B_1^* A^*) P y, [I + B_1^* R^* R B_1]^{-1} \n[ (B_0^* + B_1 A^*) P(t)w] \\big)_U\\,,\n\\end{split}\n\\end{equation}\nwith terminal condition $P(T) =0$.\n\n\\item[iii)]\nFor every $\\alpha \\in {\\mathcal D}({A^2}^*)]'$, the optimal cost \n$J(g^0) = \\min_{g \\in L_2(0,T; U )} J(g,y_{\\alpha})$ is given by \n$J(g^0) = (P(0)\\alpha, \\alpha)_Y$. \n\n\\item[iv)]\nThe optimal control has the following feedback representation:\n\\begin{equation*} \ng^0(t) = - \\big[I - (B_0^* + B_1^* A^*) P(t) B_1\\big]^{-1} \n\\big[ (B_0^* + B_1^* A^*)P(t)\\big] y_{\\alpha}^0(t)\\,,\n\\end{equation*}\nwhere the operator $I - (B_0^* + B_1^* A^*)P(t)B_1$ is boundedly invertible on $U$ \nfor each $t \\ge 0$. \n\n\\end{enumerate}\n\\end{proposition}\n\n\\ifdefined\\xxxxxx\n\\subsection{Matching the initial condition}. \nWe note that for any $x= y_0 - B_1g_0 $ with $y_0 \\in [{\\mathcal D}(A^*)]'$ and $g_0 \\in U$, \nthe corresponding optimal control $g^0 \\in C([0,T];U)$. \nThe latter follows from Part 1 of Theorem~\\ref{T}. \nTherefore, in order to comply with the original model one is asking for the following selection of $g_0$ $g_0 = g^0(0) $. This amounts to \n$$ g^0_{x}(t=0)=g_0, ~x = y_0 - B_1g_0 $$\nThe above implicit relation is always uniquely solvable for $g_0 \\in U $ as shown in \\cite{T}. \nIn fact, the matching condition amounts to solving \n$$ g_0 = G x = G (y_0 - B_1 g_0 ) $$\n$$[I + G B_1] g_0 = Gy_0 $$\nwhere $G\\equiv - [ I + B_1^* R^* R B_1]^{-1} [ B_1^* R^* R + ( B_0^* + B_1^* A^* ) P(0) ]$\n\nWith the key properties $G \\in L([D(A^*)]') \\rightarrow U )$ \nand $ [ I + GB_1 ]^{-1}\\in L(U ) $ to be shown later. \nThus we obtain \n\\begin{corollary}\nLet $y_0 \\in D(A^*)]' $ be given. Consider Problem $\\mathcal{P}_x $ with $x = y_0 -B_1g_0$ and $g_0 \\in U $ is given by\n\\begin{equation}\\label{g00}\n g_0 = [ I + GB_1 ]^{-1} G y_0 \n \\end{equation}\n Then there exists unique optimal control $g^0\\in L_2(0,T; U) $ with the \n corresponding trajectory (\\ref{e:eq-for-U}) and initial condition $y^0(0) = y_0 $, such that \n the results of Theorem \\ref{T} holds with $x = y_0 - B_1 g_0 $ .\n \\end{corollary}\n \nIn other words, by solving parametrized optimal control with a given $x = y_0 - B_1 g_0 $ and a parameter p $g_0 \\in U $ we solve a family of parametrized optimal control problems, which always has a unique solution. \n The original dynamics is included in tis family. \n By selecting $g_0$ according to the matching condition, we make a selection of a problem whose dynamics coincides with the original one. However, the above does not imply that the constructed optimal control for parametrized control problem is also optimal for the original problem -when considered within $L_2(U)$ framework for optimal controls. In fact, the latter may not have optimal solution at all when \n$y_0 \\in {\\mathcal R}(B_1) $ -as shown in \\cite{LPT}. Thus, the constructed control is suboptimal -but it corresponds \n the original dynamics. 
\n However, if the original problem does have $L_2(U) $ optimal control, then such control coincides with \n a parametrized control where $g_0$ is selected according to the matching condition. \n\\fi\n\n\\subsection{Proof of Proposition \\ref{T} }\n\\ifdefined\\xxx\n\n\\subsection{Properties of the input-output map}\nThe following Lemma captures\na set of functional-analytic properties pertaining to appropriate combination of the involved abstract operators -- namely, the dynamics, control and observation operators --, which will play a major role in the proof of well-posedness of generalized differential\/integral Riccati equations, eventually leading to solvability of the optimal control problem.\n\n\\begin{lemma} \\label{l:abstract-basics}\nLet $A$, $B_i$ and $R$ the dynamics, control, observation operators defined by\n\\eqref{e:generator}, \\eqref{e:input-operators}, \\eqref{e:def-observation-op} respectively.\nThen, we have\n\\begin{enumerate}\n\\item[i)]\n$RA^2$ can be extended to a {\\em bounded} operator on the state space $Y$;\n\\item[ii)]\n$RB_1=0$;\n\\item[iii)]\n$(I+A)^{-1}B_i$ are bounded and compact operators from $L^2(\\Gamma_i)$ into $Y$, $i=0,1$.\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\ni) We take an element $y=(y_1,y_2,y_3)$ initially assumed in ${\\mathcal D}(A^2)$, and compute\n\\begin{equation*}\n\\begin{split}\n& A^2y = A(Ay)= \n\\\\[1mm]\n& \\quad= A\n\\begin{pmatrix}\ny_2\n\\\\[1mm]\ny_3\n\\\\[1mm]\n- c^2{\\mathcal A} y_1- [b {\\mathcal A}+c({\\mathcal A}+I)N_1N_1^*({\\mathcal A}+I)]y_2\n-\\big[\\alpha I +\\frac{b}{c}({\\mathcal A}+I)N_1 N_1^*({\\mathcal A}+I)\\big]y_3\n\\end{pmatrix}\n\\\\[1mm]\n& \\quad = \\begin{pmatrix}\ny_3\n\\\\[1mm]\n\\ldots\\ldots\\ldots\n\\\\[1mm]\n\\ldots\\ldots\\ldots\\ldots\\ldots\\ldots\n\\end{pmatrix}\n\\end{split}\n\\end{equation*}\nwhere the second and third component of $A^2y$ are unspecified, owing to the \nstructure of the observation operator $R$ to be applied.\nConsequently, \n\\begin{equation*}\nR\\, A^2y = \\begin{pmatrix}\n(I+{\\mathcal A})^{-1\/2} y_3 \\\\ 0 \\\\ 0\n\\end{pmatrix}\n\\end{equation*}\nand \n\\begin{equation*}\n\\|R\\, A^2y\\|_Y = \\left\\| \\begin{pmatrix} (I+{\\mathcal A})^{-1\/2} y_3 \\\\ 0 \\\\ 0 \\end{pmatrix}\\right\\|_Y\n= \\big\\|(I+{\\mathcal A})^{1\/2}(I+{\\mathcal A})^{-1\/2}y_3\\big\\|=\\|y_3\\|_{L^2(\\Omega)}\n\\end{equation*}\n\n\\smallskip\n\\noindent\nii) It is immediatly verified that for any $h\\in L^2(0,T;L^2(\\Gamma_1))$ \n\\begin{equation*}\nR\\,B_1 h=R \\begin{pmatrix}\n0 \n\\\\[1mm]\n0 \n\\\\[1mm]\nb ({\\mathcal A}+I)N_1\\,h\n\\end{pmatrix}\n=(I+{\\mathcal A})^{-1\/2}\\,0=0\\,.\n\\end{equation*}\n\n\\smallskip\n\\noindent\niii) It is clear that the resolvent $(I+A)^{-1}$ is not compact.\nHowever, we have\n\\begin{equation}\n(I+A)^{-1} B_0 = c^2 \\begin{pmatrix} N_0 \\\\ 0 \\\\ 0 \\end{pmatrix}\\,, \n\\qquad\n(I+A)^{-1} B_1 = \\frac{b}{c^2} (I+A)^{-1} B_0=\nbc^{-2}\\begin{pmatrix} N_0 \\\\ 0 \\\\ 0 \\end{pmatrix}\n\\end{equation}\nand because ${\\mathcal R}(N_0) \\subset H^{3\/2}(\\Omega)$, the operators $A^{-1} B_i$\nare not only bounded from $L_2(\\Gamma_0) \\rightarrow Y$, but also compact. 
\n\n\\end{proof}\n\nThe following Lemma pertains to regularity of the map $RL_0$, that is\n\n\\begin{lemma}\nLet $L_0$ be the operator defined by \\eqref{e:input-to-state-map}.\nThen $RL_0$ is a compact operator from $L^2(0,T;L^2(\\Gamma_0))$ into $C([0,T];Y)$.\n\\end{lemma}\n\n\\begin{proof}\nFollows from Lemma~\\ref{l:abstract-basics} \nfollowed by\n\\begin{equation}\n\\begin{split}\nR L_0\\colon g\\longmapsto R L_0 g \n& = R \\int_0^t e^{A(t-s)} B_0 g(s) ds - R A\\int_0^t e^{A(t-s)} B_1 g(s) ds\n\\\\[1mm]\n& = R A^2 \\int_0^t e^{A(t-s)} A^{-2} B_0 g(s) - R A^2 \\int_0^t e^{A(t-s)} A^{-1} B_1 g(s)ds\n\\end{split}\n\\end{equation}\nand combined with Aubin-Simon compactness criterion. \n\\end{proof}\n\\fi\n\n\\subsubsection{The parametrized LQ-problem} \nThe starting point is the semigroup solution $y(t)= e^{At} \\alpha + Lg(t)$.\nIn order to derive the synthesis for the ``enlarged'' problem by introducing a parameter \n$\\alpha \\in Y$ and later considering the family of control problems depending on a parameter \n$\\alpha \\in Y \\oplus {\\mathcal R}(B_1)$, one needs to develop a dynamics that is invariant on the space compatible with initial data. \n\nIn view of the above, it is essential to extend the action of the semigroup $e^{At}$, originally defined on $Y$, to a larger space which contains $Y \\oplus {\\mathcal R} (B_1)$. \nThis can be done on the strength of the extended regularity of the operator $B_i$ as acting \ninto dual spaces of ${\\mathcal D}(A^*)$. This will be seen below. \nThe low regularity of the input-to-state mapping $L$ will force us to run the dynamics written below on an even larger space which is $[{\\mathcal D}({A^2}^*)]'$. \n\\begin{equation}\\label{e:eq-in-U-with-alpha}\ny(t) = e^{At} \\alpha + Lg(t) = e^{At} \\alpha + B_1 g(t) +[ L_0 g](t) \\,,\n\\end{equation}\nIt is important to emphasize that $y(0)\\ne \\alpha$, whereas $y(0)= \\alpha+B_1g(0)$. \nSince\n\\begin{equation*}\nA^{-1} B_1g = \n\\begin{pmatrix}b c^{-2} {\\mathcal A}^{-1} ({\\mathcal A} + I ) N_0g\n\\\\\n0\n\\\\\n0\n\\end{pmatrix}\\,,\n\\end{equation*}\nwe have $A^{-1} B_1 \\in {\\mathcal L}(U,Y)$ (in fact, compactly). \nThis follows from the regularity of the Neumann map where \n$N_0 \\in {\\mathcal L}(L^2(\\Gamma_0),H^{3\/2}(\\Omega))$, where $H^{3\/2}(\\Omega) \\subset {\\mathcal D}({\\mathcal A}^{1\/2})$ \n(the latter being a compact embedding). \nWe can thus take $\\alpha$ in $[{\\mathcal D}({A^2}^*)]'$.\nSo the dynamics operator with $g \\in C([0,T];U)$ will have values in the dual space \n$[{\\mathcal D}({A^2}^*)]'$.\n\nBy the same arguments as these used for the proof of Theorem \\ref{l:pos} we obtain\n\n\\begin{lemma}[Auxiliary optimal control problem] \\label{p:auxiliary}\nGiven $\\alpha \\in [{\\mathcal D}({A^*}^2]'$, there exists a control function \n$g^0\\in L^2(0,T;U)$ which minimizes the cost functional \\eqref{Jx}, where \n$y(\\cdot)$ is the solution to \\eqref{e:eq-in-U-with-alpha} corresponding to\nthe control $g(\\cdot)$.\n\\end{lemma}\nOur main goal is to provide a feedback synthesis of the optimal control $g^0$. 
\n\nWhile the existence of optimal solution for the parametrized problem follows from\nLemma~\\ref{p:auxiliary}, in order to provide a (pointwise in time) feedback representation\nof the optimal control -- via the optimal cost operator $P(t)$ -- one needs to introduce, for any $s\\in [0,T)$, the dynamics described by the equation \n\\begin{equation}\\label{e:s-eq-in-U-with-alpha}\ny(t,s;\\alpha) = e^{A(t-s)} \\alpha + L_sg(t)\\,, \\qquad s\\le t\\le T\\,,\n\\end{equation}\nas well as the cost functional\n\\begin{equation}\\label{e:s-cost}\nJ_{s,T}(g) \\equiv \\int_s^T \\big(\\|Ry(t)\\|^2_{Y} + \\|g(t)\\|^2_U \\big)\\,dt\\,, \n\\end{equation}\nwhere as before $y=(u,u_t,u_t)$ and $L_{s,T}$ -- $L_s$, in short -- is the operator defined by \n\\begin{equation}\\label{e:s-input-to-state-operator}\n\\{L_sg\\}(t)= \\int_s^t e^{A(t-\\tau)} B_0 g(\\tau)\\,d\\tau \n+ A\\int_s^t e^{A(t-\\tau)} B_1 g(\\tau)\\,d\\tau +B_1 g(t) \\qquad \\forall t\\in [s,T]\\,. \n\\end{equation}\n(Note that the subscript {\\em s} refers to initial time: in order to avoid confusion, the former operator $L_0=L-B_1$ is written $L^0$.)\n\n\\begin{lemma}\\label{l:regularity-input-to-state}\nOne has the following basic regularity of the input-to-state map:\n\\begin{equation*}\nL^0_s \\; \\text{is continuous}\\colon L^2(s,T;U) \\longrightarrow C([s,T];[{\\mathcal D}({A^*}^2)]')\\,,\n\\end{equation*}\n\\begin{equation*}\nL_s \\; \\text{is continuous}\\colon L^2(s,T;U) \\longrightarrow \nL^2(s,T;[{\\mathcal D}(A^*)]')\\oplus C([s,T];[{\\mathcal D}({A^*}^2)]')\\,,\n\\end{equation*}\nThe above regularity improves when input-to-state map is combined with the observation operator\n$R$; indeed, for the operator $RL$ and its adjoint it holds\n\\begin{equation*}\n\\begin{split}\n& RL_s \\; \\text{continuous}\\colon L^1(s,T;U) \\longrightarrow C([s,T];Y)\\,;\n\\\\[1mm]\n& L_s^*R^* \\; \\text{continuous}\\colon L^1(s,T;Y) \\longrightarrow C([s,T];U)\\,.\n\\end{split}\n\\end{equation*}\nIn addition, the operator $L_s^*R^*$ satisfies \n\\begin{equation*}\nL_s^*R^* \\; \\text{continuous}\\colon L^2(s,T;Y) \\longrightarrow C([s,T];U)\n\\end{equation*}\nuniformly with respect to $s\\in [0,T)$. \n\\end{lemma}\n\n\\begin{proof}\nThe regularity of the control-state map follows from the quantified regularity of $B_1$ map which takes boundedly $U$ into $[{\\mathcal D}((A^*)]'$. \nThen the first statement in the Lemma follows from the structure of $L$ operator.\nThe key in the regularity control $\\rightarrow$ observation operator is the combination of the three properties $A^{-2}B_i\\in{\\mathcal L}(U,Y)$, $i=1,2$, $RA^2\\in {\\mathcal L}(Y)$, $RB_1=0$.\n\\end{proof}\n\n\\begin{lemma}\nWith reference to the optimal control problem \\eqref{e:s-eq-in-U-with-alpha}-\\eqref{e:s-cost},\nthe following statements are valid:\n\\begin{enumerate}\n\\item[i)]\n{\\bf (Optimal pair). }\nGiven $\\alpha\\in [{\\mathcal D}({A^*}^2)]'$, there exists a unique optimal pair\n\\begin{equation*}\n(\\hat{y}(t,s;\\alpha),\\hat{g}(t,s;\\alpha)) \n\\end{equation*}\nfor Problem~\\ref{p:auxiliary}, with \n\\begin{subequations}\n\\begin{align}\n& \\hat{g}(t,s;\\alpha)=[I+L_s^*R^*RL_s]^{-1}L_s^*R^*Re^{A(\\cdot-s)}\\alpha\\in C([s,T];U)\\,,\n\\\\[2mm]\n& \\hat{y}(t,s;\\alpha)= \ne^{A(t-s)}\\alpha + \\{L_s\\hat{g}(\\cdot,s;\\alpha)\\}(t)\\in C([s,T];[{\\mathcal D}({A^*}^2)]')\\,,\n\\label{e:basicregularity}\\\\[2mm]\n& R\\hat{y}(t,s;\\alpha)= [I+RL_sL_s^*R^*]^{-1}Re^{A(\\cdot-s)}\\alpha \\in C([s,T];Y)\\,.\n\\end{align}\n\\end{subequations}\n\\item[ii)]\n{\\bf (Riccati operator). 
}\nThe operator $P(t)\\in {\\mathcal L}(Y)$, $t\\in [s,T]$, is \ngiven by\n\\begin{equation}\\label{e:riccati-operator-2}\nP(t) \\alpha = \\int_t^T e^{A^*(\\tau-t)}R^*R \\hat{y}(\\tau,t;\\alpha)\\,d\\tau\\,, \n\\end{equation} \nThe operator $P(t)$ is positive selfadjoint on $Y$, and represents the optimal cost (or Riccati) operator; its regularity properties are detailed separately (cf.~Proposition~\\ref{p:Riccati-operator} below).\n\\item[iii)]\n{\\bf (Implicit feedback formula). }\nThe optimal control satisfies\n\\begin{equation*}\n\\hat{g}(t,s;\\alpha)= -[B_0^*P(t) +B_1^*A^*P(t)]\\Phi(t,s)\\alpha\\,,\n\\end{equation*}\nthat is the following implicit equation\n\\begin{equation*}\n\\hat{g}(t,s;\\alpha)= -[B_0^*P(t) +B_1^*A^*P(t)]\\hat{y}(t,s;\\alpha)\n+[B_0^*P(t) +B_1^*A^*P(t)]B_1\\hat{g}(t,s;\\alpha)\\,,\n\\end{equation*}\nwhere the operator $\\Phi(t,s)$ is defined in \\eqref{e:transition}.\n\\item[iv)]\n{\\bf (Optimal cost). }\nThe optimal cost for Problem~\\ref{p:auxiliary} is given by\n\\begin{equation*}\nJ_s(\\hat{g}) =\\int_s^T \\big(\\|R\\hat{y}\\|^2_Y + |\\hat{g}(t)|^2_U \\big)\\,dt\n= \\|[I+RL_sL_s^*R^*]^{-1\/2}\\, Re^{A(\\cdot-s)}\\alpha\\|_{L^2(s,T;Y)}^2 \n\\end{equation*}\nwhich is rewritten in terms of the optimal cost (or Riccati) operator as follows\n\\begin{equation}\n\\begin{split}\nJ_s(\\hat{g}) &=(P(s)\\alpha,\\alpha)=\n\\\\\n&= \\big([I+RL_sL_s^*R^*]^{-1}\\, Re^{A(\\cdot-s)}\\alpha,Re^{A(\\cdot-s)}\\alpha\\big)_{L^2(s,T;Y)}\\,,\n\\end{split}\n\\end{equation}\nthereby providing \n\\begin{equation}\nP(s)\\alpha=e^{A^*(\\cdot-s)}R^*\\,[I+RL_sL^*_sR^*]^{-1}\\,Re^{A(\\cdot-s)}\\alpha \n\\quad \\forall \\alpha \\in [{\\mathcal D}({A^*}^{2}]'\\,.\n\\end{equation}\n\n\\end{enumerate}\n\n\\end{lemma}\n\n\\begin{proof}\n1. The first statement follows by standard variational arguments applied to the \nLQ-problem (cf.~\\cite{redbook}), \nafter taking into consideration the regularity of input-output map stated\nin the preceding Lemma. \nThe formulas for the optimal control, optimal state, observed state are derived\nas usual from the optimality conditions. \nThe regularity of the optimal quantities follows from the regularity of the map $L$. \nIn fact $A^{-2}\\alpha \\in Y$ gives $R e^{At} \\alpha = RA^2 e^{At} A^{-2}\\alpha \\in C([0,T];Y)$ and by Lemma~\\ref{l:regularity-input-to-state}\n\\begin{equation*}\nL_s^* R^* R e^{A\\cdot} \\alpha\\in C([0,T];U)\\,.\n\\end{equation*}\n\nWe note that the invertibility of the operator $I + L_s^* R^* R L_s$ on $C([s,T];U)$ follows\ncombining the self-adjointness and positivity of $L_s^* R^* R L_s$ -- which guarantees the invertibility on $L_2(U)$ -- with boundedness of the latter operator on $C([s,T];U)$. \nA classical bootstrap argument yields the claimed regularity: \none starts from \n\\begin{equation*}\nv=[I + L_s^* R^* R L_s ]^{-1} g\\,,\n\\end{equation*}\n with $g \\in C(U)$, obtaining first \n$v \\in L^2(U)$; then, since $v = - L_s^* R^* R L_s v +g$, the regularity improves to \n$v \\in C(U)$. \n\nA word of caution: while $RL_0 $ is compact on $L^2(U)$, this is no longer the case for $RL$,\nowing to the presence of the summand $R B_1$, which is not time compact. \n\nThe regularity of $R\\hat{y}(t,s;\\alpha)$ is a consequence of the regularity of the operator \n$RL$ in Lemma~\\ref{l:regularity-input-to-state}. 
\nThen, by the optimality condition \n\\begin{equation}\\label{e:optimal-control-from-optimality}\n\\hat{g}(t,s;\\alpha)=-\\{L_s^*R^*[R\\hat{y}(\\cdot,s;\\alpha)]\\}(t)\\,,\n\\end{equation}\nwhich combined with the regularity of the operator $L_s^*R^*$ yields continuity (in time)\nof the optimal control. \n\n\\noindent\n\\smallskip\n2. All the statements ii)-iv) follow by variational arguments, \nby using the structure of the optimal quantities, once several properties \nthat specifically pertain the operators $\\Phi(\\cdot,\\cdot)$ and $P(\\cdot)$ are proved. \nThese technical results are given in the Propositions which follow next. \n\\end{proof}\n\n\\begin{remark}\n\\begin{rm}\nA peculiarity of the parametrized minimization problem is that the optimal trajectory\ndoes not satisfy the evolution property. \n(For this reason the Riccati operator and the resulting synthesis cannnot be standard, as certain cancellations do not occur.)\nIn the next section we study the evolution operator, defined only on a dual (extrapolation) space. \nThis is a consequence of the low regularity of the control-to-state map. \n\\end{rm}\n\\end{remark}\n\n\\subsubsection{The operator $\\Phi(t,s)$}\nOne of the most critical ingredients of Riccati theory is the evolution operator which\ndescribes controlled dynamics. \nWhile in the standard theory the evolution operator is constructed directly from the optimal trajectory, this is not the case in singular theory. \nThe reason is that such operator will not display the evolution property -- the most fundamental feature. \nFor this reason we define evolution differently, as in the formula below. \n \nFor any couple $(t,s)$ such that $0\\le s\\le t\\le T$, let \n$\\Phi(t,s)\\colon [{\\mathcal D}({A^*}^2)]'\\rightarrow [{\\mathcal D}({A^*}^2)]'$ defined by\n\\begin{equation}\\label{e:transition}\n\\Phi(t,s)\\alpha := \\hat{y}(t,s;\\alpha)-B_1\\hat{g}(t,s;\\alpha)=\ne^{A(t-s)} \\alpha + \\{L^0_s \\hat{g}(\\cdot,s;\\alpha)\\}(t)\\,.\n\\end{equation}\nThe regularity properties of the operator $\\Phi(\\cdot,\\cdot)$, which a priori\nbelongs to ${\\mathcal L}([{\\mathcal D}({A^*}^2)]')$ (for $(t,s)$ given), are collected in the following Proposition.\n\n\\begin{proposition}\\label{p:Phi}\nFor the operator $\\Phi(\\cdot,\\cdot)$ defined in \\eqref{e:transition} \nthe following properties are valid:\n\\begin{enumerate}\n\\item[i)]\n$\\Phi(t,t)\\alpha =\\alpha$ for all $\\alpha\\in [{\\mathcal D}({A^*}^2)]'$;\n\\item[ii)]\nfor any $s, \\tau$ with $0\\le s\\le \\tau\\le T$, it holds\n\\begin{equation}\\label{e:transition-item2}\nRe^{A(\\cdot-\\tau)}\\Phi(\\tau,s)\\alpha\\in C([\\tau,T];Y) \\qquad \\forall \\alpha\\in [{\\mathcal D}({A^*}^2)]'\n\\end{equation}\ncontinuously with respect to $\\alpha$ and uniformly in $s$ and $\\tau$;\n\\item[iii)]\nfor any $s, \\tau$ with $0\\le s\\le \\tau\\le T$, it holds\n\\begin{equation*}\nR\\Phi(\\cdot,\\tau)\\,\\Phi(\\tau,s)\\alpha\\in C([\\tau,T];Y) \\qquad \\forall \\alpha\\in [{\\mathcal D}({A^*}^2)]'\n\\end{equation*} \ncontinuously with respect to $\\alpha$ and uniformly in $s$ and $\\tau$;\n\\item[iv)]\nfor any $s,\\tau, t$ with $0\\le s\\le \\tau\\le t\\le T$, it holds in $Y$ \n\\begin{equation*}\nR\\Phi(t,\\tau)\\,\\Phi(\\tau,s)\\alpha= R \\Phi(t,s)\\alpha \\qquad \\forall \\alpha\\in [{\\mathcal D}({A^*}^2)]'\n\\end{equation*} \n\\end{enumerate}\n\n\\end{proposition}\n\n\\begin{proof}\nSince the operator $\\Phi(t,s)$ -- as defined above -- has the same algebraic structure as in\nthe classical LQ-theory, we can treat this operator 
as an evolution on the dual space to \n${\\mathcal D}({A^*}^2)$. \nThe needed regularity is established by referring to preceding Lemmas: in particular,\nto Lemma~\\ref{l:regularity-input-to-state}.\nThe proof of the above properties can be produced along the lines of Lemma~8.3.2.3\nand Lemma~8.3.2.4 in \\cite{redbook}, on the basis of the powerful facts\n$RB_1=0$, $R A^2\\in {\\mathcal L}(Y)$, beside $A^{-2}B_i\\in{\\mathcal L}(U,Y)$, $i=1,2$. \n\\end{proof}\n\n\n\\subsubsection{The optimal cost operator} \\label{s:riccati-operator}\nWe note that the Riccati Operator defined via optimal trajectory coincides with \n\\begin{equation}\\label{e:riccati-operator-1}\nP(t) \\alpha = \\int_t^T e^{A^*(\\tau-t)}R^*R \\Phi(\\tau,t)\\alpha\\,d\\tau\\,,\n\\qquad 0\\le t\\le T\\,, \\; \\alpha\\in [{\\mathcal D}({A^*}^2)]'\\,,\n\\end{equation} \nwhere $\\Phi(\\tau,t)$ is defined in \\eqref{e:transition}.\nIt is readily seen that, combining\n$\\Phi(\\tau, t)\\alpha=\\hat{y}(\\tau,t;\\alpha)-B_1 \\hat{g}(\\tau,t;\\alpha)$ with $RB_1=0$, \n\\eqref{e:riccati-operator-1} is actually equivalently rewritten as follows\n\\begin{equation*} \nP(t) \\alpha = \\int_t^T e^{A^*(\\tau-t)}R^*R \\hat{y}(\\tau,t;\\alpha)\\,d\\tau\\,,\n\\qquad 0\\le t\\le T\\,, \\; \\alpha\\in [{\\mathcal D}({A^*}^2)]' \n\\end{equation*} \nwhich confirms the equivalent relation \\eqref{e:riccati-operator-2}.\n\n\\begin{proposition} \\label{p:Riccati-operator}\nThe optimal cost operator $P(t)$ defined by \\eqref{e:riccati-operator-1} (equivalently,\nby \\eqref{e:riccati-operator-2}) satisfies the following (enhanced) regularity properties:\n\n\\begin{enumerate}\n\\item\n{\\rm (\\bf Space regularity)}\nFor any given $t\\in [0,T]$, one has \n\\begin{equation}\\label{e:riccati-spaceregularity}\n{A^*}^2 P(t) A^2 \\in {\\mathcal L}(Y)\\,;\n\\end{equation}\nequivalently,\n\\begin{equation\nP(t)\\in {\\mathcal L}([{\\mathcal D}({A^*}^{\\gamma_1})]',{\\mathcal D}({A^*}^{\\gamma_2})) \n\\qquad \\forall \\gamma_1, \\gamma_2\\le 2\\,.\n\\end{equation}\n\nAs a consequence, $B_i^*P(\\cdot)A^2\\in {\\mathcal L}(Y,U)$, $i=1,2$ and\nthe gain operator $B^*P(t)\\equiv B_0^*P(t)+B_1^*A^*P(t)$ satisfies $B^*P(t) A^2 \\in {\\mathcal L}(Y,U)$;\nnamely,\n\\begin{equation}\\label{gain-spaceregularity}\nB_i^*P(t)\\in {\\mathcal L}([{\\mathcal D}({A^*}^2)]',U))\\,; \\qquad i =0,1\\,.\n\\end{equation}\n\\item\n{\\bf (Time regularity)} As for the regularity in time of the optimal cost operator -- then,\nof the value function -- one has\n\\begin{equation}\\label{e:riccati-timeregularity}\nP(\\cdot) \\;\\textrm{continuous} \\colon \n[{\\mathcal D}({A^*}^2)]' \\longrightarrow C(0,T;{\\mathcal D}({A^*}^2))\\,.\n\\end{equation}\n\n\\end{enumerate}\n\\end{proposition}\n\n\\begin{proof}\n1. 
Let $\\alpha\\in [{\\mathcal D}({A^*}^2)]'$ be given.\nWe write down and compute, with $0\\le t\\le T$, \n\\begin{equation*}\n\\begin{split}\n(-A^*)^2P(t)\\alpha &= (-A^*)^2\\int_t^T e^{A^*(\\tau-t)}R^*R \\Phi(\\tau,t\n\\alpha\\,d\\tau \n\\\\[1mm]\n& = \\int_t^T e^{A^*(\\tau-t)}[(-A^*)^2R^*]\\,R \\Phi(\\tau,t)\\alpha\\,d\\tau \n\\end{split}\n\\end{equation*}\nwhere the application of the operator $(-A^*)^2$ commutes with the integration\nin time on the extrapolation space.\n \nThen, the conclusion in \\eqref{e:riccati-spaceregularity} follows recalling that\nthe function $R \\Phi(\\cdot,t)\\alpha$ takes values in $Y$ (cf.~\\eqref{e:transition-item2}), whilst $(-A^*)^2R^*$ is a bounded operator on $Y$.\n\\\\\nAs for gain operator, on the basis of \\eqref{e:riccati-spaceregularity}, we next obtain \n\\begin{equation*}\nB_i^*P(\\cdot)A^2=[B_i^*(-A^*)^{-\\gamma_0}]\\,(-A^*)^{\\gamma_0}P(\\cdot)A^2 \\in {\\mathcal L}(Y,U)\\,,\n\\qquad i=1,2\\,,\n\\end{equation*}\nowing to $B_i^*(-A^*)^{-\\gamma_0}\\in {\\mathcal L}(Y,U)$, $ \\gamma_0 =1 $ thereby confirming the exceptional \nboundedness and smoothing effect of the gain operator in \\eqref{gain-spaceregularity}.\n\n\\smallskip\n\\noindent\n2. Finally, the regularity in time of \\eqref{e:riccati-timeregularity}\nfollows combining the continuity in time of the function $R \\Phi(\\cdot,t)\\alpha$ \n(see, once again, \\eqref{e:transition-item2}) with more standard semigroup properties;\nsee the proof in \\cite[p.~697]{redbook}.\n\\end{proof}\n\n\\subsubsection{The Riccati equation}\nIn this section we shall provide several key relations which lead to a characterization of the Riccati operator via Differential Riccati equation. \nOne of the fundamental properties is time evolution (of the evolution operator) with respect to the initial time, that is the second argument. \nIn the case of semigroups both evolutions are the same. \nHowever, in the case of time dependent evolutions -- as in the present case -- proving differentiability with respect to the initial time is challenging.\nThe challenge is due to compromised regularity and the intrinsic lack of invariance. \n\n\\begin{lemma}[Differentiability of the evolution with respect to initial time]\\label{Right}\n\nThe evolution operator $\\Phi(\\tau,t)$ defined in \\eqref{e:transition} satisfies\n\\begin{equation*}\n\\frac{d}{dt} \\big (R\\Phi(\\tau,t)\\alpha\\big)=- R\\Phi(\\tau,t)\\big[A-BB^*P(t)\\big]\\alpha\n\\qquad \\forall \\alpha \\in [{\\mathcal D}({A^*})]'\\,, \\quad \\textrm{a.e. in $t$,}\n\\end{equation*}\nwhere $B$ denotes $B_0 + A B_1$.\n\\end{lemma}\n\n\\begin{proof}\nWe will sketch the major steps of the proof.\n\n\\noindent\n1. 
We have seen that $R\\Phi(t,s)$ may be defined on the extrapolation space \n$[{\\mathcal D}({A^*}^2)]'$.\nIn particular, it makes sense $R\\Phi(t,s)Bu$ and it holds\n\\begin{equation*}\n\\sup_{0\\le t\\le T}\\|R\\Phi(\\cdot,t)Bu\\|_{L^1(t,T;Y)}\\le c_T\\|u\\|_U\\,.\n\\end{equation*}\nTo justify the above assertion: we recall that \n\\begin{equation*}\nR\\Phi(\\cdot,t)\\alpha=Re^{A(\\tau-t)}\\alpha+R\\{L_t \\hat{g}(\\cdot,t;\\alpha)\\}(\\tau) \n\\end{equation*}\nwhich combined with \\eqref{e:optimal-control-from-optimality} gives \n\\begin{equation}\\label{e:rphi}\nR\\Phi(\\tau,t)\\alpha = \\big\\{ \\big[I+RL_tL_t^*R^*\\big]^{-1} \\, Re^{A(\\tau-t)}\\alpha\\big\\}(\\tau)\\,,\n\\quad \\alpha \\in [{\\mathcal D}({A^*}^2)]'\n\\end{equation}\nInsertion of $Bu \\in [D(A^{2*})]'$ in place of $\\alpha$ brings about the estimate \n\\begin{equation*}\n\\sup_{0\\le t\\le T}\\|R\\Phi(\\cdot,t)Bu\\|_{L^1(t,T;Y)}\\le \\dots \\le \n\\|Re^{A(\\tau-t)}\\alpha\\|_{L^?(t,T;Y)}\n\\le c_T\\|u\\|_U\\,.\n\\end{equation*}\n\n\\smallskip\n\\noindent\n2. A major step is to show existence (as well as to pinpoint the regularity) of the derivative\nof $R\\Phi(\\tau,t)\\alpha$ with respect to $t$, with $\\alpha$ belonging to the largest possible space. \nThe arguments here owe to \\cite[Vol.~II, Lemma~8.3.4.2]{redbook}. \n\\\\\nRewrite\n\\begin{equation}\\label{e:implicit-eq-for-rphi}\nR\\Phi(\\tau,t)\\alpha+ \\big\\{RL_tL_t^*R^*\\, R \\Phi(\\cdot,t)\\alpha\\big\\}(\\tau)\n= Re^{A(\\tau-t)}\\alpha\n\\end{equation}\nand notice that if $\\alpha\\in [{\\mathcal D}(A^*)]'$\n(please note that here it is {\\bf not} $\\alpha\\in [{\\mathcal D}({A^*}^2)]'$), \nthen \n\\begin{equation*}\nRe^{A(\\tau-t)} x= RA^2\\,A^{-1}e^{A(\\tau-t)}A^{-1}x\\,,\n\\end{equation*}\nwhich gives \n\\begin{equation*}\n\\frac{d}{dt} Re^{A(\\tau-t)} x= -[RA^2]e^{A(\\tau-t)} \\underbrace{A^{-1}x}_{\\in H}\\,.\n\\end{equation*}\nRewrite next \\eqref{e:implicit-eq-for-rphi} explicitly:\n\\begin{equation*}\n\\begin{split}\n& R\\Phi(\\tau,t)\\alpha + R\\int_t^\\tau e^{A(\\tau-\\sigma)}B\\int_\\sigma^T B^*e^{A^*(r-\\sigma)}\nR^*R \\Phi(r,t)\\alpha\\, dr\\, d\\sigma=\n\\\\[1mm]\n& \\qquad\\qquad\\qquad = Re^{A(\\tau-t)}\\alpha\n\\end{split}\n\\end{equation*}\nwhich implies\n\\begin{equation*}\n\\begin{split}\n& \\frac{d}{dt} \\big(R\\Phi(\\tau,t)\\alpha\\big)- Re^{A(\\tau-t)}B\\int_t^T B^*e^{A^*(r-t)}\nR^*R \\Phi(r,t)\\alpha\\, dr \\,+\n\\\\[1mm]\n& \\qquad +\\,R\\int_t^\\tau e^{A(\\tau-\\sigma)}B\\int_\\sigma^T B^*e^{A^*(r-\\sigma)}\nR^*R \\frac{d}{dt} \\big(R\\Phi(\\tau,t)\\alpha\\big)\\, dr\\, d\\sigma\n\\\\[1mm]\n& \\qquad\\qquad\\qquad =- Re^{A(\\tau-t)}A\\alpha\\,.\n\\end{split}\n\\end{equation*}\nThe above implicit equation is rewritten as\n\\begin{equation*}\n\\big[I+RL_tL_t^*R^*\\big]\\frac{d}{dt}\\big(R\\Phi(\\cdot,t)\\alpha\\big) =\n-\\underbrace{Re^{A(\\tau-t)}A\\alpha}_{T_1(\\tau,t)}+ \\underbrace{Re^{A(\\tau-t)}BB^*P(t)\\alpha}_{T_2(\\tau,t)}\n\\end{equation*}\nwhich makes sense at least in $H^{-1}(0,T;Y)$.\n\nThen, noting that \n\\begin{equation*}\nT_1(\\cdot,t)\\in C([t,T];Y)\\,, \\qquad T_2(\\cdot,t)\\in L^\\infty(t,T;Y)\n\\end{equation*}\nwe get \n\\begin{equation*}\n\\frac{d}{dt} \\big(R\\Phi(\\tau,t)\\alpha\\big)=\\big[I+RL_tL_t^*R^*\\big]^{-1}\n\\Big\\{- Re^{A(\\tau-t)}A\\alpha+Re^{A(\\tau-t)}BB^*P(t)\\alpha\\Big\\}\\in L^2(t,T;HY\\,.\n\\end{equation*}\nRecalling \\eqref{e:rphi} we finally obtain\n\\begin{equation*}\n\\frac{d}{dt} \\big(R\\Phi(\\tau,t)\\alpha\\big)=- 
R\\Phi(\\tau,t)A\\alpha+R\\Phi(\\tau,t)BB^*P(t)\\alpha\n\\end{equation*}\n(cf.~\\cite[Vol.~II, \\S~8.3.4, p.~701]{redbook}), thereby providing with \n\\begin{equation*}\n\\begin{split}\n& \\frac{d}{dt} (\\big(R\\Phi(\\tau,t)x\\big),y)_Y=\n\\\\[1mm]\n& \\qquad\n=- (R\\Phi(\\tau,t)\\,[A-(B_0 + A B_1 )\\,(B_0^* + B_1^* A^*)P(t)]x, y)_Y\\,,\n\\quad x\\in [{\\mathcal D}(A^*)]', y\\in Y\\,.\n\\end{split}\n\\end{equation*} \n\\end{proof}\n\n\n\\begin{lemma}[\\bf First Feedback Synthesis] \\label{l:F}\nThe optimal control $\\hat{g}$ admits the representation\n\\begin{equation*}\n\\hat{g}(\\tau,t;\\alpha) =- [B_0^* + B_1^* A^*] P(\\tau)\\Phi(\\tau,t)\\alpha\n\\qquad \\forall \\alpha \\in [{\\mathcal D}({A^*}^2)]'\\,.\n\\end{equation*}\n\n\\end{lemma}\n\n\\begin{proof}\nFrom the optimality conditions we know that \n\\begin{equation*}\n\\hat{g}(\\tau,t;\\alpha) =-\\{L_t^*R^*R\\hat{y}(\\cdot,t;\\alpha)\\}(\\tau)\\,.\n\\end{equation*}\nBecause $RB_1 =0$, and exploiting the evolution property enjoyed by $\\Phi$, it follows\n\\begin{equation*}\n\\hat{g}(\\tau,t;\\alpha) =-L_t^*R^*R \\Phi(\\cdot,t) \\alpha\\,.\n\\end{equation*}\nObserving that for any $\\alpha \\in [{\\mathcal D}({A^*})]'$ one has $R\\Phi(t,s) \\alpha \\in Y$ and \n$L_t^* R^* \\colon L^1(Y) \\rightarrow C(U)$, makes the above composition of operators \nmeaningful -- as acting on appropriate domains. \nThis concludes the optimal synthesis as stated in the Lemma. \n\\end{proof}\n\n\\begin{lemma}[\\bf Riccati Equation]\\label{RIC}\nFor all $x, y \\in {\\mathcal D}(A)$ the Riccati operator $P(\\cdot)$ satisfies \n\\begin{equation*}\n\\begin{split}\n& \\big(\\frac{d}{dt} P(t)x,y\\big)_Y= -(R^*Rx,y)_H-(A^* P(t)x,y)_Y -\n\\\\[1mm]\n& \\qquad\\qquad\\qquad - (P(t)Ax,y)_Y- ([B_0^*+B_1^*A^*]P(t)x,[B_0^*+B_1^*A^*]P(t)y)_Y\\,,\n\\end{split}\n\\end{equation*}\nwith \n\\begin{equation*}\n\\begin{cases}\nA^*P_t(t) A \\in {\\mathcal L}(Y)\\,, \n\\\\[1mm]\n\\textrm{$A^*P_t(t) A$ continuous $\\colon Y\\longrightarrow L^\\infty(0,T;Y)$.}\n\\end{cases}\n\\end{equation*}\n\n\\end{lemma}\n\n\\begin{proof}\nIn order to derive the Riccati equation, we follow the so called direct approach (cf.~\\cite{redbook}).\nDifferentiation (in a weak sense) of the Riccati operator requires the characterization of the left derivative (with respect to the initial time) of the evolution.\nHowever, in the present case, Proposition~\\ref{p:Phi} provides the needed regularity for the evolution when acted upon by the observation. \nThis allows to obtain the critical representation for the right evolutionary derivative which is given by Lemma~\\ref{Right}. \nThe said representation, when combined with the ``first feedback synthesis'' in Lemma~\\ref{l:F}\ngives the final conclusion.\n\nCalculations are justified by the already proved regularity of the quantities involved. \nIn particular, the compromised regularity of the derivative of the evolution (which requires \n$\\alpha \\in [{\\mathcal D}(A^*)]'$, is sufficient to obtain the final conclusion. \n\\end{proof}\n\nWe note that the feedback synthesis given in Lemma \\ref{l:F} is in terms of the evolution operator $\\Phi(t,s)$. \nWhat is needed, instead, is the feedback synthesis in terms of the actual trajectory $\\hat{y}$.\nThis is attained below. 
\n\n\\begin{lemma}[\\bf Feedback Synthesis] \\label{l:feed} \nFor any $\\alpha \\in [{\\mathcal D}({A^{2*}}]'$, the following feedback representation of the\noptimal control $\\hat{g}(t;\\alpha)$ holds true: \n\\begin{equation*}\n\\hat{g}(t;\\alpha) =- \\big[I - [B_0^* + B_1^* A^*] P(t) B_1\\big]^{-1} \n[B_0^* + B_1^* A^*]P(t)\\hat{y}(t,\\alpha)\\,; \n\\end{equation*}\nthe formula provides an ``on line'' optimal control $\\hat{g}(\\cdot, \\alpha ) \\in L_2(U)$\nfor the $\\alpha$-parametrized problem. \n\\end{lemma}\n\n\\begin{proof}\nFor the feedback synthesis of the optimal control it remains to discuss the invertibility of the operator \n\\begin{equation*}\nI-[{B_0}^*+{B_1}^*A^*]P(t)B_1\\,.\n\\end{equation*}\n\n\\begin{proposition}\\label{p:feed}\nThe operator $I-[{B_0}^*+{B_1}^*A^*]P(t)B_1$ is boundedly invertible on $U$ for each \n$t \\in [0, T]$. \n\\end{proposition}\n\n\\begin{proof}\n{\\sl Step 1.} \nWe shall first prove the injectivity of the operator \n$I-[{B_0}^*+{B_1}^*A^*]P(t)B_1$ for $t =0$. \nThen, the dynamic programming argument extends the argument to all $t\\in [0,T]$.\n\nBy contradiction, let $v \\in U$ be such that $v \\ne 0$, and \n\\begin{equation}\\label{1}\nv=[{B_0}^*+{B_1}^*A^*]P(t)B_1v\\,.\n\\end{equation}\nConsider then the optimal control problem with $y_0=0$, and $\\alpha =-B_1v$.\nThe (implicit) optimal synthesis gives \n\\begin{equation}\\label{2}\n\\hat{g}_{\\alpha}(0) = - [B_0^* + B_1^* A^*] P(0) \n\\big(\\hat{ y}_{\\alpha}(0) - B_1 \\hat{g}_{\\alpha} (0)\\big)\\,.\n\\end{equation}\nBut from the continuity of optimal control, we also have \n$\\hat{y}_{\\alpha}(0) = \\alpha + B_1 \\hat{g}_{\\alpha}(0)$. \nThis, combined with \\eqref{2} give\n\\begin{equation}\\label{3}\n\\hat{g}_{\\alpha}(0) =\n- [B_0^* + B_1^* A^*] P(0) [\\hat{ y}_{\\alpha}(0) - B_1 \\hat{g}_{\\alpha} (0) ]\n= [B_0^* + B_1^* A^*] P(0) B_1v\\,. \n\\end{equation}\nFrom the contradiction argument \\eqref{1} it follows that $g^0_{\\alpha}(0) =v$.\nOn the other hand, the optimal control problem with $y_0 =0$ produces only one solution \nwhich is equal identically to zero. \nTherefore, the optimal control $g^0$ should be zero as well. \nThis contradicts the fact that $v \\ne 0$. \n\nThe same argument applied to the dynamics originating at the time $t$ yields\ninjectivity of $I - [B_0^* + B_1^* P(t) ]B_1$ on $U$, for any $t \\in [0,T]$. \n\n\\smallskip\n\\noindent\n{\\sl Step 2}. \nCompactness of the operator $[B_0^* + B_1^* P(t)]B_1$. \nThis follows from regularity properties of $P(t)$ which asserts that \n$P(t)\\colon {\\mathcal D}({A^*}^2)]' \\rightarrow {\\mathcal D}({A^*}^2)$ is bounded. \nHowever, the injection $B_1 \\colon U \\rightarrow {\\mathcal D}({A^*}^2)$ is compact.\nThe latter follows from the fact \n$A^{-1} B_1g = [ b c^{-2} {\\mathcal A}^{-1} ( {\\mathcal A} + I ) N_0 g, 0, 0]$ and elliptic theory giving \n$N_0 : L_2(\\Gamma_0 ) \\rightarrow H^1(\\Omega) $ is compact. \n\nThus, the final conclusion follows from spectral theory of compact operator. \n\\end{proof}\n\nNow, the conclusion in Lemma \\ref{l:feed} follows from the Proposition \\ref{p:feed} and the representation in Lemma \\ref{l:F} supported by definition of evolution operator $\\Phi$. \n\\end{proof}\nCompletion of the proof of Proposition \\ref{T}: combine the results stated in Proposition \\ref{p:Riccati-operator}, Lemma \\ref{RIC} and Lemma \\ref{l:feed}. \n\nCompletion of the proof of Theorem \\ref{T0}: setting $\\alpha = y_0 - B_1 g_0$ provides the conclusions stated in Theorem \\ref{T0}. 
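We close this part by making explicit the standard operator-theoretic fact invoked at the end of Step~2 in the proof of Proposition~\\ref{p:feed}; it is a direct consequence of the Riesz--Schauder theory (Fredholm alternative). If $K(t):=[B_0^*+B_1^*A^*]P(t)B_1\\in{\\mathcal L}(U)$ is compact, then
\\begin{equation*}
\\ker\\big(I-K(t)\\big)=\\{0\\}
\\;\\Longrightarrow\\;
\\big(I-K(t)\\big)^{-1}\\in{\\mathcal L}(U)\\,,
\\end{equation*}
so that the injectivity obtained in Step~1, combined with the compactness obtained in Step~2, yields the bounded invertibility asserted in Proposition~\\ref{p:feed}.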
\n \n\\subsection{Proof of Theorem \\ref{T:1}}\nIt remains to be shown that $\\hat{g}(0)$ coincides with the parameter $g_0$.\nThis is done below. \n\nLet $y_0 \\in [{\\mathcal D}({A^2}^*)]'$ and $g_0 \\in U$ be given.\nWith $\\alpha= y_0 - B_1g_0$, we know from from Part 1 of Theorem \\ref{T0} that the optimal control $g^0$ belongs $C([0,T];U)$. \nTherefore, in order to comply with the original model one is asking for the following selection of the parameter $g_0$: $g_0 = g^0(0)$. \nThis amounts to \n\\begin{equation*}\ng^0_\\alpha(t=0)=g_0\\,, \\qquad \\alpha = y_0 - B_1g_0\\,.\n\\end{equation*}\nThe above implicit relation is always uniquely solvable for some $g_0 \\in U$.\nIn fact, the matching condition amounts to solving \n$g_0 = F \\alpha = F (y_0 - B_1 g_0 )$, that is $(I - F B_1) g_0 = Fy_0$,\nwhere $F\\equiv [B_0^* + B_1^* A^*] P(0)$.\n\nWith the key properties $F \\in {\\mathcal L}([{\\mathcal D}(A^*)]',U)$ and $(I - FB_1)^{-1}\\in {\\mathcal L}(U)$.\nHowever, we recognize that $I - F B_1$ coincides with the operator $G(0)$, for which the requisite boundeness and invertibility have been shown in Proposition \\ref{p:feed}. \n\nThus we obtain \n\\begin{corollary}\nLet $y_0 \\in {\\mathcal D}({A^2}^*)]'$ be given. \nConsider Problem $\\mathcal{P}_{\\alpha}$ with $\\alpha = y_0 -B_1g_0$ and $g_0\\in U$\ngiven by\n\\begin{equation}\\label{g00}\ng_0 = (I - FB_1 )^{-1} F y_0\\,. \n \\end{equation}\nThen, there exists a unique optimal control $g^0\\in C([0,T]; U)$ and a \ncorresponding trajectory \\eqref{e:eq-for-U}, with $y^0(0) = y_0$, \nsuch that the results of Proposition \\ref{T} hold with $\\alpha= y_0 - B_1 g_0$ and \n$g_0$ given by \\eqref{g00}.\n\\end{corollary}\n \nIn other words, by solving the parametrized optimal control problem with a given \n$\\alpha = y_0 - B_1 g_0$ and a parameter $g_0 \\in U $ we solve a family of parametrized optimal control problems, which always has a unique solution. \nThe original dynamics is included in this family. \nBy selecting $g_0\\in U$ according to the matching condition, we make a selection of a problem whose dynamics coincides with the original one.\nHowever, the above does not imply that the constructed optimal control for the parametrized control problem is also optimal for the original problem -- when considered within the $L_2(U)$ framework for optimal controls.\nIn fact, the latter may not have an optimal solution at all when $y_0 \\in {\\mathcal R}(B_1)$, as shown in Theorem \\ref{l:neg}; see also \\cite{LPT}.\nThus, the constructed control is suboptimal, yet it corresponds to the original dynamics. \nHowever, if the original problem does have an $L_2(U)$ optimal control, then such control coincides with a parametrized control where $g_0$ is selected according to the matching condition.\n \n\\subsection{Proof of Theorem \\ref{T:2}}\nTheorem \\ref{T:2} follows from Theorem \\ref{T:1} by using a rather standard argument in calculus of variations. \nTo wit: we recall from Proposition \\ref{T} that the optimal value for the parametrized problem equals \n\\begin{equation*} \nJ(\\hat{g},\\hat{y}_{g_0}) = (P(0) \\alpha, \\alpha )_Y \n= (P(0) (y_0 - B_1 g_0), y_0 - B_1 g_0 )_Y\\,. 
\n\\end{equation*}\nOn the strength of positivity and selfadjointness of $P(0)$ we can write the above as \n\\begin{equation*} \nJ(\\hat{g},\\hat{y}_{g_0}) = || P^{1\/2} (0) (y_0 - B_1 g_0 ) ||^2_Y\\,.\n\\end{equation*}\nAppealing to the regularity properies of $P(0)$ listed in Theorem \\ref{T:1} we obtain that \n$J(g_0) \\equiv J(\\hat{g},\\hat{y}_{g_0})$ is weakly lower semicontinuous on $U$. \nIndeed, the latter follows from \n\\begin{eqnarray}\\label{opt}\nJ(\\hat{g},\\hat{y}_{g_0})= (P(0) (y_0 - B_1 g_0), y_0 - B_1 g_0 )_Y = (P(0) y_0, y_0 )_Y \n\\notag \\\\ \n- 2 (P(0) y_0, B_1 g_0 )_Y + (P(0) B_1 g_0, B_1 g_0 )_Y\\,,\n\\end{eqnarray}\nwhere $A^{-1} B_1 \\colon U \\rightarrow Y$ is compact and $A^*P(0) A \\colon Y \\rightarrow Y$\nis bounded. \nThis gives compactness of the map $g \\rightarrow P^{1\/2}(0) B_1 g$ from $U$ to $Y$,\nadressing the convergence of the last quadratic term in \\eqref{opt}.\n \nAs for the first term, we simply recall Proposition \\ref{p:Riccati-operator} which states $ {A^*}^2P(0) A^2 :Y \\rightarrow Y $ is also bounded.\nStrong continuity of the second term (linear in $g_0$ ) follows now from $ A^{-1} B_1 \\in L(Y) $ and $A^* P(0) A^2 \\in L(Y) $. \nThus the regularity of the Riccati operator $ P(0)$ along with $ A^{-1} B_1 \\in L(Y) $ implies weal lower-semicontinuity of the functional. Since $U_0$ is weakly compact, we obtain a minimizing sequence $ g_n \\in U_0$ such that $J(g_n) \\rightarrow d = \\inf_ {g_0\\in {U_0}} J(g_0) $ \nand $g_n \\rightarrow g^* \\in U_0 $ weakly in $U$. \n Weak lower semicontinuity of $ J(g_0)$ gives an existence of a minimizer. The characterization of the minimizer follows now from a standard argument in calculus of variations,\nafter taking into consideration the representation of the functional via Riccati operator. This leads to the final conclusion stated in Theorem \\ref{T:2}.\n \n\\ifdefined\\xxx\n1. Notice that unlike the more general framework of Lasiecka-Lukes-Pandolfi and \nLasiecka-Pandolfi-Triggiani, in the present case there is no need of an analysis \nof a non-standard LQ-problem, along with the invocation of a Dissipation Inequality\nsatisfied by the value function. \nIndeed, in view of $RB_1\\equiv 0$, the dynamics $y(t) -B_1g(t)$ is such that \n\\begin{equation}\n\\|R(y(t) -B_1g(t))\\|^2\\equiv \\|Ry(t)\\|_H^2 \n\\end{equation}\nand the cost functional is the same for both dynamics.\n\n\n\\smallskip\n\\noindent\n2. I have analyzed the case $d>0$ and $d_0=0$: we have once more $RA^2$ bounded and $RB_1=0$;\nit must be computed the value of $\\lambda\\ne 0$ for wich $(\\lambda-A)^{-1}B_i$ is bounded. \nWe will return on this after your first feedback.\n\\end{remarks}\n\\fi\n\n\n\\section*{Acknowledgements}\nThe authors are grateful to Barbara Kaltenbacher, whose work has provided motivation for studying \ncontrol problems associated with the SMGT acoustic model. \nInspiring and illuminating mathematical conversations of both authors with Barbara are gratefully\nacknowledged. \n\n\\smallskip\n\\noindent\nThe research of F.B. was partially supported by the Universit\\`a degli Studi di Firenze under the Project \n{\\em Analisi e controllo di sistemi di Equazioni a Derivate Parziali di evoluzione}, and by the GDRE (Groupement de Recherche Europ\\'een) ConEDP (Control of PDEs). \nF.B. is a member of the Gruppo Nazionale per l'Analisi Matematica, la Probabilit\\`a e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM), whose occasional support is acknowledged. \n\\\\\nThe research of I.L. 
was partially supported by the NSF Grant DMS-1713506.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn this note, we consider the question of determining the number of\ncovers between projective lines in positive characteristic with\nspecified ramification data and fixed branch points. The ramification\ndata considered are the degree of the cover, together with a list of\nthe ramification indices in the fibers of the branch points. Over an\nalgebraically closed field of characteristic zero, it is in principal\n possible to solve this problem by Riemann's Existence Theorem. Namely, the number of covers can be expressed as the cardinality of a finite set,\nwhich can be explicitly constructed in concrete cases. In particular, this approach\nshows that the number of covers is finite and does not depend on the\nposition of the branch points. \n\nIn positive characteristic, the situation is drastically different. For example, the\nnumber of covers with fixed ramification depends on the position\nof the branch points. Moreover, if the characteristic $p$ divides one\nof the ramification indices, the number of covers is in general\ninfinite. There are only few general results on the number of covers\nin this situation (we refer to \\cite{BO} for an overview).\n\n The work of Osserman (\\cite{Osserman1}, \\cite{Osserman2}, \\cite{LO})\n suggests that a particularly nice case to look at is that of covers\n $f:\\mathbb{P}^1\\to \\mathbb{P}^1$ of degree $d$ which are ramified at $r$ points\n $x_1, \\ldots, x_r$ with $f(x_i)$ pairwise distinct (the\n so-called {\\sl single-cycle} case). We write $h(d; e_1, e_2, e_3,\n \\ldots, e_r)$ for the number of single-cycle covers with fixed branch\n locus over $\\mathbb{C}$, where $e_i$ is the ramification index of\n $x_i$; this number is called the {\\sl Hurwitz number}.\n\nLet $k$ be an algebraically closed field of positive characteristic $p$. We\nonly consider covers $f:\\mathbb{P}^1_k\\to \\mathbb{P}^1_k$ in the tame and single-cycle\ncase. We denote by $h_p(d; e_1, e_2, e_3, \\ldots,\ne_r)$ the maximal number of covers with fixed branch locus, where\nthe maximum is taken over all possible branch loci. This number is called the\n$p$-{\\sl Hurwitz number}. Since $p\\nmid e_i$ for all $i$, this number\nis finite and does not depend on $k$. It can be shown that there the maximum is attained if the branch locus belongs to a dense open subset $U\\subset (\\mathbb{P}^1_k)^r\\setminus \\Delta$. Here $\\Delta$\nis the fat diagonal.\n\n\nWe start by summarizing the results on the number of covers with fixed\nbranch locus in the single-cycle case for $r\\in \\{3,\n4\\}$. In \\cite{LO}, F. Liu and B. Osserman give a closed formula for the number\nof such covers in characteristic zero. In \\cite{Osserman1} and \n\\cite{Osserman2}, B. Osserman determines the $p$-Hurwitz number $h_p(d;\ne_1, e_2, e_3)$ using linear series. In \\cite{BO} the number $h_p(p;\ne_1, e_2, e_3, e_4)$ is computed. This last case is substantially more\ndifficult and he proof relies on the theory of stable reduction of covers. \n\nIn this note, we also consider covers $f:\\mathbb{P}^1_k\\to \\mathbb{P}^1_k$ of\nramification type $(d; e_1, e_2, e_3, e_4)$. In contrast with the\nsituation in \\cite{BO}, the degree $d$ is not fixed. We consider two\nelementary constructions, which yield previously unknown results on\nsome $p$-Hurwitz numbers $h_p(d; e_1, e_2, e_3, e_4)$. 
Both\nconstructions were known before and can be found for example in\n\\cite{Osserman1}. However, the implications for the $p$-Hurwitz\nnumbers have not been fully exploited. As an additional result, we\nobtain rather complete information on the structure of the Hurwitz\ncurve, parameterizing covers of the type considered, in positive\ncharacteristic. These are the first such results.\n\nThe first result deals with the case $1< e_i< p$ and $e_4=p-1$. In\nthis situation, we compute the $p$-Hurwitz number $h_p(d; e_1, e_2, e_3, e_4)$. We can even obtain\nsomething stronger, namely an explicit description of the Hurwitz curve\n$\\mathcal{H}_p(d; e_1, e_2, e_3, e_4)$ parameterizing all covers of type $(d;\ne_1, e_2, e_3, e_4)$. This yields, in particular, not only a formula for\nall covers with generic branch locus, but also exactly describes the values for which the number of covers drops. As far as we know, this is the first nontrivial example of a complete description of Hurwitz curves in positive characteristic. Other papers on Hurwitz curves in positive characteristic (eg\n\\cite{crelle}, \\cite{meta} and \\cite{BO}) do not yield such description.\nWe refer to \\S\\ \\ref{multconstsec} for the precise\nstatement of the result.\n\nThe second result considers the case $e_1>p$ and $20$.\n\\begin{itemize}\n\\item[(a)] The $p$-Hurwitz number $h_p({\\boldsymbol C})$ only depends\n on $p$, and not on the field $k$.\n\\item[(b)] We have $h_p({\\boldsymbol C})\\leq h({\\boldsymbol C})$ with\n equality if $d0$.\n\nLet $k$ be an algebraically closed field of characteristic $p>0$. We\nfix a genus-$0$ ramification type ${\\boldsymbol C}=(d; e_1, e_2, e_3,\np-1)$ and a branch locus ${\\boldsymbol x}=(x_1=0, x_2=1, x_3=\\infty,\nx_4=:\\lambda)$. We assume that $11,\\\\\ne_3&0$.\n\nComputing $p$-Hurwitz numbers is in general more difficult than\ncounting covers with fixed ramification. Beside the classical result\nfrom Lemma \\ref{p-hurwitzlem}.(b), the only general result on\n$p$-Hurwitz numbers is the main result of \\cite{BO}, which\ncomputes $h_p(p; e_1, e_2, e_3, e_4)$. That result relies on subtle\nand deep results on the stable reduction of Galois covers. \n\\end{rem}\n\n\nThe following corollary translates the statement of Proposition\n\\ref{hurwitznrprop} into a statement on the Hurwitz curve\n$\\mathcal{H}_p({\\boldsymbol C})$.\n\n\\begin{cor}\\label{hurwitznrcor} Let ${\\boldsymbol C}=(d; e_1, e_2, e_3, p-1)$\n be as in the \nstatement of Proposition \\ref{hurwitznrprop}. The Hurwitz curve\n$\\mathcal{H}_p({\\boldsymbol C})$ is connected.\n\\end{cor}\n\n{\\bf Proof:\\ } The statement immediately follows from the proposition. Let\n$\\pi_p:\\mathcal{H}_p({\\boldsymbol C})\\to \\mathbb{P}^1_\\lambda$ be the natural map which\nsends a cover of type ${\\boldsymbol C}$ to the branch point\n$\\lambda$. Then $\\pi$ is birationally equivalent to the map\n$\\mu\\mapsto \\lambda$ described in the prof of that proposition.\n\\hspace*{\\fill} $\\Box$ \\vspace{1ex} \\noindent \n\n\n\nLet ${\\boldsymbol C}:=(d; e_1, e_2, e_3, p-1)$ be a\nramification type satisfying the equivalent conditions of Proposition\n\\ref{hurwitznrprop}.(a). Put $\\tilde{{\\boldsymbol C}}=(\\tilde{d}; e_1,\ne_2, p-e_3)$ and let $h:\\mathbb{P}^1_k\\to \\mathbb{P}^1_k$ be the unique cover of type\n$\\tilde{{\\boldsymbol C}}$ (compare with the proof of Proposition\n\\ref{hurwitznrprop}.(a)). 
We may write $h(y)=h_1(y)\/h_2(y)$, where the\n$h_i\\in k[y]$ are relatively prime and satisfy the relations\n$\\deg(h_1)=\\tilde{d}-e_1$ and\n$\\deg(h_2)=\\tilde{d}-(p-e_3)=(e_1+e_2+e_3-p-1)\/2$. \n\nIt follows from Lemma \\ref{p-hurwitzlem} that there exist finitely many values $\\lambda\\in \\mathbb{P}^1_\\lambda\\setminus\\{ 0,1,\\infty\\}$ for\nwhich the number of covers of ramification type ${\\boldsymbol C}$ and\nbranch locus $(0, 1, \\infty, \\lambda)$ is strictly less than\n$h_p({\\boldsymbol C})$. We let $\\Sigma({\\boldsymbol C})\\subset\n\\mathbb{P}^1_\\lambda\\setminus\\{0,1,\\infty\\}$ be this exceptional set and\ncall it the {\\sl supersingular locus}.\n\n\n\\begin{cor}\\label{supersingcor}\nWith the above notation, we have\n\\[\n\\Sigma({\\boldsymbol C})=\\{y\\in \\mathbb{P}^1_k\\setminus\\{0,1,\\infty\\}\\mid h_2(y)=0\\}.\n\\]\n\\end{cor}\n\n{\\bf Proof:\\ } We recall that the construction of Lemma\n\\ref{multconstlem}.(b) works if and only if $h(\\mu)\\neq 0,1, \\infty,\n\\mu^p$. Moreover, equation (\\ref{lambdaeq}) gives an expression of the fourth branch point $\\lambda$ of $f$ as function of $\\mu$. \n\nAssume that $h(\\mu)=0$. Then (\\ref{lambdaeq}) implies that either\n$\\mu=0$ or $h(\\mu)=1$. By definition $0, 1\\not\\in \\Sigma({\\boldsymbol\n C})$. Therefore it suffices to consider the solutions of $h(\\mu)=1$\nwith $\\mu\\neq 1$. We may write $h(\\mu)-1=\\mu^{e_2}\\varphi(\\mu)$, where\n$\\varphi(1)\\neq 1$. Substituting this in (\\ref{lambdaeq}) yields\n\\[\n\\lambda(\\mu)=\\frac{\\mu^p\\varphi(\\mu)}{(\\mu-1)^{p-e_2}-\\varphi(\\mu)}.\n\\]\nIn particular, it follows that $\\lambda(\\mu)=0$ if $\\mu$ is a zero of\n$\\varphi$. Hence these zeroes are not contained in $\\Sigma({\\boldsymbol\n C})$. Similarly it follows that the solutions of $h(\\mu)=1$ don't belong to $\\Sigma({\\boldsymbol C})$.\n\nAssume that $h(\\mu)=\\infty$ and $\\mu\\neq \\infty$, i.e.\\ $h_2(\\mu)=0$ according to the notation introduced above the statement of the corollary. We then have the identity $\\lambda(\\mu)=\\mu^p$. Therefore $\\mu\\in\n\\Sigma({\\boldsymbol C})$, since $\\mu\\neq 0,1,\\infty$.\n\nFinally, assume that $\\mu=\\mu^p$ and $\\mu\\not\\in\\{0,1,\\infty\\}$. Then\n$\\lambda=\\infty$, hence this does not yields any new value.\n\\hspace*{\\fill} $\\Box$ \\vspace{1ex} \\noindent \n\n\n\n\\begin{exa} We illustrate the results of this section with two concrete \nexamples.\n\n(a) Let $p\\geq 5$ be a prime, and consider the genus-$0$ ramification\ntype ${\\boldsymbol C}=(d; 2,2,p-3, p-1)$. Note that the condition of\nProposition \\ref{hurwitznrprop}.(a) is satisfied. Hence Proposition\n\\ref{hurwitznrprop}.(b) implies that $h_p({\\boldsymbol C})=p-1$ and\nProposition \\ref{4ptprop}.(a) leads to $h({\\boldsymbol\n C})=\\min(3(p-3), 2(p-2),p-1)=p-1$. We therefore find the equality $h({\\boldsymbol\n C})=h_p({\\boldsymbol C})$, so that all covers of this type with generic\nbranch locus have good reduction.\n\nThe unique normalized cover $h:\\mathbb{P}^1\\to \\mathbb{P}^1$ of\ntype $(3; 2,2,3)$ is given by\n\\[\nh(y)=3y^3-2y^2.\n\\]\nTherefore \n\\[\n\\lambda(\\mu)=\\frac{\\mu^{p}(1+2\\mu^3-3\\mu^2)}{\\mu^p+2\\mu^3-3\\mu^2}=\n\\frac{\\mu^{p-2}(2\\mu-1)}{\\sum_{i=1}^{p-4}i\n \\mu^{p-3-i}}.\n\\]\nThis confirms that the degree $\\deg(\\lambda)$ equals\n$(3p-(e_1+e_2+e_3))\/2=p-1$.\n\nWe have already remarked that all covers with generic branch locus\nhave good reduction to characteristic $p>0$. 
Arguing as in the proof\nof \\cite[Theorem 4.2]{meta}, one may deduce from this observation that\nthe map $\\pi_p:\\mathcal{H}_p({\\boldsymbol C})\\to\n\\mathbb{P}^1_\\lambda\\setminus\\{0,1,\\infty\\}$ is finite. However, in this\nconcrete example the finiteness of $\\pi_p$ immediately follows from\nCorollary \\ref{supersingcor}.\n\n(b) Next, we consider the case ${\\boldsymbol C}=(p; 3,2,p-2, p-1)$\nand ${\\tilde{\\boldsymbol C}}=(3; 3,2,2)$, again assuming that $p\\geq\n5$. The unique cover $h:\\mathbb{P}^1_k\\to \\mathbb{P}^1_k$ of type\n${\\tilde{\\boldsymbol C}}$ is given by $h(y)=y^3\/(3y-2)$ and a direct computation leads to the expression\n\\[\n\\lambda(\\mu)=\\frac{\\mu^{p-3}(-\\mu^3+3\\mu-2)}{3\\mu^{p-2}-2\\mu^{p-3}-1}.\n\\]\nDividing the numerator and the denominator by $(\\mu-1)^2$, we find\n$\\deg(\\lambda)=p-2$, which confirms Proposition\n\\ref{hurwitznrprop}.(b). As in Corollary \\ref{supersingcor}, the supersingular values are the poles of $h$ different from $\\infty$. In this concrete example, we find a unique value, namely $\\mu=2\/3$. The case $d=p$ has been considered in \\cite{BO}. This result may also be deduced from\n\\cite[Remark 8.3]{BO} (note, however, that the proof relies on subtle\narguments involving stable reduction, which are only sketched in that\npaper). As in the previous example, Proposition \\ref{4ptprop} implies that $h({\\boldsymbol\n C})=2(p-1)$ and Proposition \\ref{hurwitznrprop}.(b) asserts that\n$h_p({\\boldsymbol C})=p-2$ We conclude that $h({\\boldsymbol\n C})-h_p({\\boldsymbol C})=p$, which confirms the main result of \\cite{BO}.\n\\end{exa}\n\n\n\nFix a genus-$0$ ramification type ${\\boldsymbol C}=(d; e_1, e_2, e_3,\np-1)$ which is $p$-tame, i.e.\\ $p\\nmid e_i$. Recall that\n$h({\\boldsymbol C})-h_p({\\boldsymbol C})$ denotes the ``bad degree''\nof the ramification type. This is the number of covers with generic\nbranch locus which have bad reduction to characteristic $p$. \n\n\\begin{prop}\\label{baddegprop}\nThe notation being as above, assume that the minimum\\\\ $\\min_{i\\in\n \\{1,2,3\\}} e_i(d+1-e_i)$ is attained for $e_1$. This is not a\nrestriction, since\\\\ we may permute the branch points. Then, the bad degree\n$h(\\boldsymbol{C})-h_p(\\boldsymbol{C})$ is\\\\ given by\n\\[\n\\begin{cases} \n0& \\text{ if }d\\leq p-1,\\\\\np(d+1-p)& \\text{ if }p\\leq d\\leq p-2+e_1,\\\\\nh({\\boldsymbol C})=e_1(d+1-e_1)&\\text{ otherwise}.\n\\end{cases}\n\\]\n\\end{prop}\n\n{\\bf Proof:\\ } The first case immediately follows from Lemma\n\\ref{p-hurwitzlem}. Assume that $p\\leq d\\leq p-2+e_1$. In this case, Propositions \\ref{4ptprop}.(a) and\n\\ref{hurwitznrprop}.(a) assert that $h({\\boldsymbol C})=(p-1)(d+2-p)$ and\n$h_p({\\boldsymbol C})\\neq 0$. Statement (b) therefore follows from\nProposition \\ref{hurwitznrprop}.(b).\n\nFor $d> p-2+e_1$, Proposition \\ref{4ptprop}.(a) implies\nthat $h({\\boldsymbol C})=e_1(d+1-e_1)$. Since $d>p-2+\\min_i\ne_i=p-2+e_1$ by assumption, we conclude from Proposition\n\\ref{hurwitznrprop}.(a) that $h_p({\\boldsymbol C})=0$ and statement (c)\nfollows. \\hspace*{\\fill} $\\Box$ \\vspace{1ex} \\noindent \n\nIn the second case of Proposition \\ref{baddegprop}, some covers have\ngood reduction while others have bad reduction. In the first\n(resp.\\ third) case all covers have good (resp.\\ bad) reduction to\ncharacteristic $p>0$. The following corollary therefore follows\nfrom Proposition \\ref{baddegprop} and its proof. 
A similar phenomenon\noccurs in the situation of \\cite[Section 4]{meta}.\n\n\\begin{cor}\\label{baddegcor}\nLet ${\\boldsymbol C}$ be as in Proposition \\ref{baddegprop}, and\nassume that $h({\\boldsymbol C})\\neq h_p({\\boldsymbol C})\\neq 0$. Then\nthe bad degree $h({\\boldsymbol C})-h_p({\\boldsymbol C})$ is divisible\nby $p$.\n\\end{cor}\n\n\\section{A variant}\\label{addconstsec}\nIn this section, we present a variant of the construction of Section\n\\ref{multconstsec}. This construction and the idea of the proof of the\nfollowing lemma has been taken from \\cite[Prop. 5.4]{Osserman1}. We\nfix integers $e_1>p$ and $1p$ it follows also that the ramification index of $f_c$ in\n$x=\\infty$ is $e_1$. Similarly, the ramification\nindices of $f_c$ in $x=1, \\rho$ are $e_2, e_3$, respectively.\n\nThe equality\n\\[\n\\frac{\\partial f_c}{\\partial x}=\\frac{\\partial f}{\\partial x}\n\\]\nimplies that $f_c$ is unramified outside $x=0, 1, \\rho,\\infty$. The\nassumption on $c$ implies that the image of $x=0, 1, \\rho,\\infty$\nunder $f_c$ are all distinct and the statement of the lemma follows.\n\n(b) Let $g$ be as in the statement of the lemma. Define $g_c=g+cx^p.$\nSince $\\partial g_c\/\\partial x=\\partial g\/\\partial x\\neq 0$ it follows\nthat $g_c$ is separable. Moreover, for all $c$ such that the image\nunder $g_c$ of the ramification points are pairwise distinct, the\nramification type of $g_c$ is still $\\boldsymbol{C}=(d; e_1, e_2, e_3,\ne_4)$. Assume that two of the ramification points, for example $x_3$\nand $x_4$, have the same image under $g_c$. \nThen the ramification type is $\\tilde{\\boldsymbol{C}}=(d; e_1,\ne_2, e_3\\text{-}e_4)$. The connectedness of the Hurwitz curve\n$\\mathcal{H}_p(\\boldsymbol{C})$ (Proposition \\ref{4ptprop}.(b)) implies that\nthere exists a $c$ such that $g_c(x_3)=g_c(x_4)$, which proves (b).\n\\hspace*{\\fill} $\\Box$ \\vspace{1ex} \\noindent \n\n\nThe following proposition is a direct consequence of Lemma \\ref{addconstlem}.\n\n\n\\begin{prop}\\label{addconstprop} The assumptions being as above, assume additionally that $e_3\\neq\ne_4$. \n\\begin{itemize}\n\\item[(a)]\nWe then have the equality\n\\[\nh_p(d, e_1, e_2, e_3, e_4)=h_p(d; e_1, e_2, e_3\\text{-}e_4).\n\\]\n\\item[(b)] If $h_p(d; e_1, e_2, e_3\\text{-}e_4)>0$ then the Hurwitz\n curve $\\mathcal{H}_p(d, e_1, e_2, e_3, e_4)$ contains $h_p(d; e_1, e_2,\n e_3\\text{-}e_4)$ irreducible components of genus $0$. Moreover, the restriction of\n the natural map $\\pi: \\mathcal{H}_p(d, e_1, e_2, e_3, e_4)\\to \\mathbb{P}^1_\\lambda$\n which sends $[f]$ to its fourth branch point has degree $1$ on each\n of these components.\n\\end{itemize}\n\\end{prop}\n\n\n{\\bf Proof:\\ } To prove (a), it is sufficient to show that nonisomorphic covers $f_i$ of\ntype $(d, e_1, e_2, e_3, e_4)$ give rise to nonisomorphic covers\nunder the construction of Lemma \\ref{addconstlem}.\n\nLet $f^i:\\mathbb{P}^1_k\\to \\mathbb{P}^1_k$ be two nonisomorphic covers of type $(d,\ne_1, e_2, e_3$-$e_4)$, and assume they are normalized as in the\nstatement of Lemma \\ref{addconstlem}. The branch points of $f^i_c$ are\n$\\infty, 0, 1+c, 1+c\\rho^p$. Normalizing the third branch point to $1$\nyields the normalized cover $g^i_c(x):=f^i_c(x)\/(1+c)$ with branch\npoints $\\infty, 0, 1, (1+c\\rho^p)\/(1+c)=:\\lambda_i$. The assertion\nthat the $g^i_c$ are nonisomorphic follows immediately from the\nassumption that $e_3\\neq e_4$. 
\n\nStatement (b) follows immediately from the explicit expression\nfor the cover $f_c$ given in the proof of Lemma \\ref{addconstlem}.\n\\hspace*{\\fill} $\\Box$ \\vspace{1ex} \\noindent \n\nIn the rest of this section, we discuss a concrete application of this\nresult to Hurwitz curves in positive characteristic.\n\n\n\\begin{lem}\\label{3ptaddlem}\nLet $p>3$ be a prime, and choose $e_1=p+2, e_2=3, 2\\leq e_3< e_43$ be a prime, and choose $e_1=p+2, e_2=3, 2\\leq e_3< e_4 0}|\\langle n|\\hat{D}|0\\rangle|^2\\delta(E-(E_n-E_0)) ,\n\\end{equation} \nwhere $E_n$ are the excitation energies of the states $|n\\rangle$ while\n$E_0$ is the energy of the ground state $\\displaystyle |0\\rangle=|\\Phi_{0} \\rangle$. In our approach this is obtained \nfrom the imaginary part of the Fourier transform of the time-dependent expectation value of \nthe dipole momentum $ \\displaystyle D(t) = \\frac{NZ}{A} X(t)= \\langle \\Phi (t) |\\hat{D}| \\Phi (t) \\rangle $ extracted\nfrom our simulations (see Fig. \\ref{dip}). We have:\n\\begin{equation}\n S(E) =\\frac{Im(D(\\omega))}{\\pi \\eta \\hbar}~~,\n\\label{stre}\n\\end{equation}\nwhere $\\displaystyle D(\\omega) =\\int_{t_0}^{t_{max}} D(t) e^{i\\omega t} dt$.\nWe consider the initial perturbation along the z-axis and integrate numerically\nthe Vlasov equations (\\ref{vlaprot}, \\ref{vlaneut}) until $t_{max}=1830fm\/c$.\n$\\eta$ was determined from the numerical value of the collective momentum at $t=t_0=30fm\/c$. \n\\begin{figure}\n\\begin{center}\n\\includegraphics*[scale=0.36]{stre_sn132_148.eps}\n\\end{center}\n\\caption{(Color online) The strength function for $^{132}Sn$ [the blue (solid) lines] and $^{148}Sn$ [the red (dashed) lines].\nAsystiff EOS.}\n\\label{stre} \n\\end{figure} \nIn order to eliminate the artifacts resulting from a finite time domain analysis of the signal a filtering procedure, as described\nin \\cite{reiPRE2006}, was considered. A smooth cut-off function was introduced such\nthat $D(t) \\rightarrow D(t)cos^{2}(\\frac{\\pi t}{2 t_{max}}) $. \nThe E1 strength functions of $^{132}Sn$ and $^{148}Sn$ \nare represented in Fig. \\ref{stre}. A test of the quality of our method is the comparison of the numerically estimated\nvalue of the first moment $\\displaystyle m_1=\\int_0^\\infty E S(E) dE$ with the value predicted by the\nThomas-Reiche-Kuhn (TRK) sum rule $\\displaystyle m_1= \\frac{\\hbar^2}{2m} \\frac{N Z}{A}$. In all cases \nthe difference was below $5\\%$. \n\nBefore discussing the dipole response below GDR region let us observe that from the strength function one can determine the \nnuclear dipole polarizability:\n\\begin{equation}\n\\alpha_D = 2 e^2 \\int_0^{\\infty}\\frac{S(E)}{E} dE ~.\n\\end{equation}\nFor $^{68}Ni$ the experimental value of $\\displaystyle \\alpha_D$ reported recently, \\cite{rosPRL2013} is $3.14 fm^3$ while we\nobtained values from $4.1 fm^3$ from $5.7 fm^3$ when we pass from asysoft to asysuperstiff EOS \\cite{barPRC2013}. In the case of $^{208}Pb$ the \nexperimental value of $\\displaystyle \\alpha_D$ is around $20.1 fm^3$ \\cite{tamPRL2011}. In our approach it changes from \n$21.1 fm^3$ for asysoft EOS to $28.6 fm^3$ for asysuperstiff.\nHere we want to explore the mass dependence of this quantity for the three asy-EOS. 
In order to accomplish this goal we\nconsider the systems $^{48}Ca, $$^{68}Ni$, $^{86}Kr$, $^{208}Pb$ as well as the mentioned isotopic chain of Sn.\nThe Migdal estimation of polarizability \\cite{migJP1944}, valid for large systems, provides a $A^{5\/3}$ dependence with mass: \n\\begin{equation}\n\\alpha_D = \\frac{e^2 A }{24 \\epsilon_{sym}}=\\frac{1.44 e^2}{40 \\epsilon_{sym}} A^{\\frac{5}{3}} ~,\n\\end{equation}\nconsidering that $ \\displaystyle = \\frac{3}{5} R^2$ and $\\displaystyle R=1.2A^{\\frac{1}{3}}$.\nSince $\\epsilon_{sym}$, at saturation, has similar values for the three asy-EOS one also expect, in this situation, close values for the polarizability. \nIn Fig. \\ref{polar} we show the polarizability $\\displaystyle \\alpha_D$ as a function of $A^{5\/3}$. The linear correlation is quite well\nverified. Nevertheless a clear dependence of the slope with asy-EOS is evidenced. This can be related to the surface effects and the interplay\nbetween surface and volume symmetry energy, expected to manifest in finite systems \\cite{lipPLB1982} and which will be influenced by the\nsymmetry energy slope parameter L. \n\n\\begin{figure}\n\\begin{center}\n\\includegraphics*[scale=0.36]{polariz_mass.eps}\n\\end{center}\n\\caption{(Color online) The dipole polarizability as a function of $A^{5\/3}$\nfor asysuperstiff (blue squares) asystiff (red circles) and asysoft (green diamonds) EOS. \nThe corresponding dashed lines provide the best linear fit of $\\alpha_D$ with $A^{5\/3}$.\nThe correlation coefficients $r_{fit}$ are $97\\%$, $98\\%$ and $99\\%$ respectively.}\n\\label{polar} \n\\end{figure}\n\nReturning to the strength function, one can identify the appearance of a resonant response below GDR, more important when the number\nof neutrons in excess is larger. In the present model the energy centroid \nis very well described by the parametrization $\\displaystyle 41 A^{- \\frac{1}{3}}$ \\cite{barPRC2013}, in nice agreement with several experimental data.\nThis new mode we associate with Pygmy Dipole Resonance (PDR)\nand notice that other studies based on Vlasov equations\narrived at similar conclusions \\cite{barPRC2012,barRJP2012,abrJU2009,urbPRC2012}. \nHere our purpose is to investigate the dependence of PDR response on the isospin parameter $I$.\nWe calculate EWSR exhausted by this mode by integrating over the low-energy resonance region:\n\\begin{equation}\n m_{1,y} = \\int_{PDR} E S(E) dE ~.\n\\end{equation}\nand plot its dependence on $\\displaystyle I=\\frac{N-Z}{A}$ in Fig. \\ref{m1y}. \nFrom our calculations, for Sn isotopes, a quadratic correlation appears to describe quite well the observed dependence of $\\displaystyle m_{1y}$\nwith the isospin parameter $I$. We remark that, as in the case of polarization, the linear correlation between\n$ m_{1,y}$ and $I^2$ is influenced by the symmetry energy slope parameter $L$. We can therefore conclude that\npolarization effects in the isovector density play an important role in the dynamics of Pygmy Dipole Resonance. \n\\begin{figure}\n\\begin{center}\n\\includegraphics*[scale=0.36]{m1y_iso2.eps}\n\\end{center}\n\\caption{(Color online) The EWSR exhausted by PDR as a function of $I$ square, for asysuperstiff (blue squares),\nasystiff (red circles) and asysoft (green diamonds) EOS. Were\n considered the systems $^{108}Sn$, $^{116}Sn$, $^{124}Sn$, $^{132}Sn$, $^{140}Sn$ and $^{148}Sn$.\nThe dashed lines correspond to the best fit of $ m_{1,y}$ with $I^2$. 
The correlation coefficients\n$r_{fit}$ are $99.3\\%$, $99.6\\%$ and $99.1\\%$ respectively.}\n\\label{m1y} \n\\end{figure}\n\nThe PDR was observed experimentally for several systems \\cite{aumPS2013,savPPNP2013} and discussed in different theoretical models \\cite{paaJPG2010} for various nuclei, including\nthe Sn isotopic chain \\cite{paaPLB2005,tsoPRC2008,artPRC2009,daoPRC2012,papPRC2014}. Concerning the features of this mode, in literature still exists\nan intense debate about the collective character of this mode, about the role of symmetry energy as well as about its isovector\/isoscalar structure.\nWhile the relativistic quasiparticle RPA (RQRPA) \\cite{paaRPP2007,paaPRL2009} provides evidences about collectivity of PDR, from amplitudes and transition matrix elements, the nonrelativistic Hartree-Fock-Bogoliubov treatment within quasiparticle-phonon model \\cite{tsoPLB2004}, assign to the resonant\nstructures noncollective properties. The calculations based on relativistic time-blocking \\cite{litPRC2009} also shows in the dipole spectra\nof even-even $\\displaystyle ^{130}Sn$-$^{140}Sn$ nuclei two well separated collective structures, the lower lying one, having a specific\nbehavior of the transition densities of states, being ascribed to PDR. For $\\displaystyle ^{34}Mg$, from the time-dependent density plots obtained within TDHF calculations\nwith Skyrme interaction, was identified a superimposed surface mode, not fully coupled to the bulk dynamics. This was related to the pygmy-like peak, obtained around 10 MeV, in the dipole response strength \\cite{briIJMPE2006}.\n\n\\section{Conclusions}\n\\label{concl}\n\nSummarizing, the main task in this paper was to present new results regarding the collective dipole response\nin connection with the properties of the symmetry energy below saturation.\nOur investigation was performed in a microscopic transport model based on a system of two coupled \nVlasov equations for protons and neutrons. \n\n For all studied asy-EOS our model predicts that the energy weighted sum-rule exhausted by the\nPygmy Dipole Resonance manifests a linear dependence with the square of isospin parameter $I=(N-Z)\/A$, with a slope which\nis influenced by the variation rate with density of the symmetry energy around saturation. \nEven if were considered asy-EOS providing similar values of symmetry energy at saturation\nwas also observed that the slope of dipole polarizability as a function of $\\displaystyle A^{5\/3}$\nchanges with the symmetry energy slope parameter L. \nWe interpret these results as an indication of the surface effects associated to the polarization of \nisovector density in finite nuclei. \n\nIn conclusion, the models based on Vlasov equation prove to be appropriate tools for the study of several\naspects of nuclear dynamics, including the development of quite feeble modes as is Pygmy Dipole Resonance, \nfor which provides qualitative insights but also quantitative information regarding its dependence on the\nsymmetry energy or its evolution with the isospin parameter and mass number. \n\n\\section{Acknowledgments}\nThis work for V. Baran and A. Croitoru was supported by a grant of the Romanian National\nAuthority for Scientific Research, CNCS - UEFISCDI, project number PN-II-ID-PCE-2011-3-0972.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}