diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzqclb" "b/data_all_eng_slimpj/shuffled/split2/finalzzqclb" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzqclb" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\label{intro}\n\nLet $M$ be a differentiable manifold. The question of whether there is a `best'\nRiemannian metric on $M$ is intriguing. A great deal of deep results in\nRiemannian geometry have been motivated, and even inspired, by this single natural\nquestion. For several good reasons, an Einstein metric is a good candidate, if\nnot the best, at least a very distinguished one (see \\cite[Chapter 0]{Bss}). A\nRiemannian metric $g$ on $M$ is called {\\it Einstein} if its Ricci tensor $\\ricci_g$\nsatisfies\n\\begin{equation}\\label{eeq}\n\\ricci_g=cg, \\qquad \\mbox{for some}\\; c\\in\\RR.\n\\end{equation}\n\nThis notion can be traced back to \\cite{Hlb}, where Einstein metrics emerged as\ncritical points of the total scalar curvature functional on the space of all metrics\non $M$ of a given volume. Equation (\\ref{eeq}) is a non-linear second order PDE\n(recall that the number of parameters is $\\tfrac{n(n+1)}{2}$ on both sides,\n$n=\\dim{M}$), which also gives rise to some hope, but a good understanding of the\nsolutions in the general case seems far from being attained. A classical reference for Einstein\nmanifolds is the book \\cite{Bss}, and some updated expository articles are\n\\cite{And}, \\cite{LbrWng}, \\cite[III,C.]{Brg1} and \\cite[11.4]{Brg2}.\n\nThe Einstein condition (\\ref{eeq}) is very subtle, even when restricted to almost\nany subclass of metrics on $M$ one may like. It is too strong to allow general\nexistence results, and sometimes even just to find a single example, and at the same\ntime, it is too weak to get obstructions or classification results.\n\nBut maybe the difficulty comes from PDEs, so let us `algebrize' the problem\n(algebra is always easier for a geometer ...). Let us consider homogeneous\nRiemannian manifolds. Indeed, the Einstein equation for a homogeneous metric is\njust a system of $\\tfrac{n(n+1)}{2}$ algebraic equations, but unfortunately, a quite\ninvolved one, and the following main general question is still open in both compact\nand noncompact cases:\n\n\\begin{quote}\nWhich homogeneous spaces $G\/K$ admit a $G$-invariant Einstein Riemannian metric?\n\\end{quote}\n\nWe refer to \\cite{BhmWngZll} and the references therein for an update in the compact\ncase. In the noncompact case, the only known examples until now are all of a very\nparticular kind; namely, simply connected solvable Lie groups endowed with a left\ninvariant metric (so called {\\it solvmanifolds}). According to the following long\nstanding conjecture, these might exhaust all the possibilities for noncompact\nhomogeneous Einstein manifolds.\n\n\\begin{quote}\n{\\bf Alekseevskii's conjecture} \\cite[7.57]{Bss}. If $G\/K$ is a homogeneous\nEinstein manifold of negative scalar curvature then $K$ is a maximal compact\nsubgroup of $G$ (which implies that $G\/K$ is a solvmanifold when $G$ is a linear\ngroup).\n\\end{quote}\n\nThe conjecture is wide open, and it is known to be true only for $\\dim \\leq 5$, a\nresult which follows from the complete classification in these dimensions given in\n\\cite{Nkn}. 
One of the most intriguing facts related to this conjecture, and maybe\nthe only reason so far to consider Alekseevskii's conjecture as too optimistic, is\nthat the Lie groups $\\Sl_n(\\RR)$, $n\\geq 3$, do admit left invariant metrics of\nnegative Ricci curvature, as well as does any complex simple Lie group (see\n\\cite{DttLt}, \\cite{DttLtMtl}). However, an inspection of the eigenvalues of the\nRicci tensors in these examples shows that they are far from being close to each\nother, giving back some hope.\n\nLet us now consider the case of left invariant metrics on Lie groups. Let $\\ggo$ be\na real Lie algebra. Each basis $\\{ X_1,...,X_n\\}$ of $\\ggo$ determines structural\nconstants $\\{ c_{ij}^k\\}\\subset\\RR$ given by\n$$\n[X_i,X_j]=\\sum_{k=1}^nc_{ij}^kX_k, \\qquad 1\\leq i,j\\leq n.\n$$\nThe left invariant metric on any Lie group with Lie algebra $\\ggo$ defined by the\ninner product given by $\\la X_i,X_j\\ra=\\delta_{ij}$ is Einstein if and only if the\n$\\tfrac{n^2(n+1)}{2}$ numbers $c_{ij}^k$'s satisfy the following $\\tfrac{n(n+1)}{2}$\nalgebraic equations for some $c\\in\\RR$:\n\\begin{equation}\\label{condi}\n\\sum_{kl}-\\unm c_{ik}^lc_{jk}^l +\\unc c_{kl}^ic_{kl}^j-\\unm c_{jk}^lc_{il}^k+\\unm\nc_{kl}^lc_{ki}^j +\\unm c_{kl}^lc_{kj}^i = c\\delta_{ij}, \\quad 1\\leq i\\leq j\\leq n.\n\\end{equation}\n\nIn view of this, one may naively think that the classification of Einstein left\ninvariant metrics on Lie groups is at hand. However, the following natural\nquestions remain open:\n\n\\begin{itemize}\n\\item[ ]\n\\item[(i)] Is any Lie group admitting an Einstein left invariant metric either solvable or compact?\n\n\\item[ ]\\item[(ii)] Does every compact Lie group admit only finitely many Einstein left invariant metrics up to isometry and scaling?\n\n\\item[ ]\\item[(iii)] Which solvable Lie groups admit an Einstein left invariant metric?\n\n\\item[ ]\n\\end{itemize}\n\nWe note that question (i) is just Alekseevskii Conjecture restricted to Lie groups,\nand question (ii) is contained in \\cite[7.55]{Bss}. The only group for which the\nanswer to (ii) is known is $\\SU(2)$, where there is only one (see \\cite{Mln}). For\nmost of the other compact simple Lie groups many Einstein left invariant metrics\nother than minus the Killing form are explicitly known (see \\cite{DtrZll}).\n\nEven if one is very optimistic and believes that Alekseevskii Conjecture is true, a\nclassification of Einstein metrics in the noncompact homogeneous case will depend on\nsome kind of answer to question (iii). The aim of this expository paper is indeed\nto give a report on the present status of the study of Einstein solvmanifolds.\n\nPerhaps the main difficulty in trying to decide if a given Lie algebra $\\ggo$ admits\nan Einstein inner product is that one must check condition (\\ref{condi}) for any\nbasis of $\\ggo$, and there are really too many of them. In other words, there are\ntoo many left invariant metrics on a given Lie group, any inner product on the\nvector space $\\ggo$ is playing. This is quite in contrast to what happens in\nhomogeneous spaces $G\/K$ with not many different $\\Ad(K)$-irreducible components in\nthe decomposition of the tangent space $\\tang_{eK}(G\/K)$. Another obstacle is how to\nrecognize your Lie algebra by just looking at the structural constants $c_{ij}^k$'s.\nThough even in the case when we have two solutions to (\\ref{condi}), and we know they define\nthe same Lie algebra, to be able to guarantee that they are not isometric, i.e. 
that we\nreally have two Einstein metrics, is usually involved.\n\nIf we fix a basis $\\{ X_1,...,X_n\\}$ of $\\ggo$, then instead of varying all possible\nsets of structural constants $\\{ c_{ij}^k\\}$'s by running over all bases, one may\nact on the Lie bracket $\\lb$ by $g.\\lb=g[g^{-1}\\cdot,g^{-1}\\cdot]$, for any\n$g\\in\\Gl(\\ggo)$, and look at the structural constants of $g.\\lb$ with respect to the\nfixed basis $\\{ X_1,...,X_n\\}$. This give rises to an orbit $\\Gl(\\ggo).\\lb$ in the\nvector space $V:=\\Lambda^2\\ggo^*\\otimes\\ggo$ of all skew-symmetric bilinear maps\nfrom $\\ggo\\times\\ggo$ to $\\ggo$, which parameterizes, from a different point of\nview, the set of all inner products on $\\ggo$. Indeed, if $\\ip$ is the inner product\ndefined by $\\la X_i,X_j\\ra=\\delta_{ij}$ then\n\n\\begin{quote}\n$(\\ggo,g.\\lb,\\ip)$ is isometric to $(\\ggo,\\lb,\\la g\\cdot,g\\cdot\\ra)$ for any\n$g\\in\\Gl(\\ggo)$.\n\\end{quote}\n\nThe subset $\\lca\\subset V$ of those elements satisfying the Jacobi condition is\nalgebraic, $\\Gl(\\ggo)$-invariant and the $\\Gl(\\ggo)$-orbits in $\\lca$ are precisely\nthe isomorphism classes of Lie algebras. $\\lca$ is called the {\\it variety of Lie\nalgebras}. Furthermore, if $\\Or(\\ggo)\\subset\\Gl(\\ggo)$ denotes the subgroup of\n$\\ip$-orthogonal maps, then two points in $\\Gl(\\ggo).\\lb$ which lie in the same\n$\\Or(\\ggo)$-orbit determine isometric left invariant metrics, and the converse holds\nif $\\ggo$ is completely solvable (see \\cite{Alk2}).\n\nThis point of view is certainly a rather tempting invitation to try to use geometric\ninvariant theory in any problem which needs a running over all left invariant\nmetrics on a given Lie group, or even on all Lie groups of a given dimension. We\nshall see throughout this article that indeed, starting in \\cite{Hbr}, the approach `by varying\nLie brackets' has been very fruitful in the study of Einstein solvmanifolds\nduring the last decade.\n\nThe latest fashion generalization of Einstein metrics, although they were introduced\nby R. Hamilton more than twenty years ago, is the notion of {\\it Ricci soliton}:\n\\begin{equation}\\label{rseq}\n\\ricci_g=cg+L_Xg, \\qquad\\mbox{for some}\\; c\\in\\RR, \\quad X\\in\\chi(M),\n\\end{equation}\n\n\\noindent where $L_Xg$ is the usual Lie derivative of $g$ in the direction of the\nfield $X$. A more intuitive equivalent condition to (\\ref{rseq}) is that $\\ricci_g$\nis tangent at $g$ to the space of all metrics which are homothetic to $g$ (i.e.\nisometric up to a constant scalar multiple). Recall that Einstein means\n$\\ricci_{g}$ tangent to $\\RR_{>0}g$. Ricci solitons correspond to solutions of the\nRicci flow\n$$\n\\ddt g(t)=-2\\ricci_{g(t)},\n$$\nthat evolves self similarly, that is, only by scaling and the action by\ndiffeomorphisms, and often arise as limits of dilations of singularities of the\nRicci flow. We refer to \\cite{soliton}, \\cite{GntIsnKnp}, \\cite{libro} and the\nreferences therein for further information on the Hamilton-Perelman theory of Ricci\nflow and Ricci solitons and the role played by nilpotent Lie groups in the story.\n\nA remarkable fact is that if $S$ is an Einstein solvmanifold, then the metric\nrestricted to the submanifold $N:=[S,S]$ is a Ricci soliton, and conversely, any\nRicci soliton left invariant metric on a nilpotent Lie group $N$ (called {\\it\nnilsolitons}) can be uniquely `extended' to an Einstein solvmanifold. 
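For instance (a standard computation, included here just to illustrate this correspondence): on the $3$-dimensional Heisenberg group, the left invariant metric defined by declaring a basis $\\{ X,Y,Z\\}$ with $[X,Y]=Z$ orthonormal has Ricci operator with eigenvalues $-\\unm,-\\unm,\\unm$ on $X,Y,Z$, respectively; that is, the Ricci operator equals $-\\tfrac{3}{2}I+D$, where $D\\in\\Der(\\hg_3)$ is the derivation with eigenvalues $1,1,2$, so the metric is a nilsoliton. Its rank-one Einstein extension $\\RR H\\oplus\\hg_3$, with $\\ad{H}|_{\\hg_3}$ a suitable positive multiple of $D$, is isometric (up to scaling) to the complex hyperbolic space $\\CC H^2$.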
This\none-to-one correspondence is complemented with the uniqueness up to isometry of\nnilsolitons, which finally turns the classification of Einstein solvmanifolds into a\nclassification problem on nilpotent Lie algebras. These are not precisely good\nnews. Historically, as the literature and experience shows us, any classification\nproblem involving nilpotent Lie algebras is simply a headache.\n\n\\vs \\noindent {\\it Acknowledgements.} I am grateful to the Scientific\nCommittee for the invitation to give a talk at the `Sixth Workshop on Lie Theory and\nGeometry', November 13-17, 2007, Cruz Chica, C\\'ordoba, Argentina. I wish to thank Yuri Nikolayevsky and Cynthia Will for very useful comments on a first version of the paper, and to Roberto Miatello for going over the manuscript. I also wish to express my gratitude to the young collaborators Alejandra \\'Alvarez, Adri\\'an\nAndrada, Gast\\'on Garc\\'{\\i}a and Emilio Lauret for the invaluable help they generously provided to the organization of the workshop.\n\n\n\n\n\n\n\n\n\n\n\\section{Structure and uniqueness results on Einstein solvmanifolds}\\label{pre}\n\nA {\\it solvmanifold} is a simply connected solvable Lie group $S$ endowed with a\nleft invariant Riemannian metric. A left invariant metric on a Lie group $G$ will\nbe always identified with the inner product $\\ip$ determined on the Lie algebra\n$\\ggo$ of $G$, and the pair $(\\ggo,\\ip)$ will be referred to as a {\\it metric Lie\nalgebra}. If $S$ is a solvmanifold and $(\\sg,\\ip)$ is its metric solvable Lie\nalgebra, then we consider the $\\ip$-orthogonal decomposition\n$$\n\\sg=\\ag\\oplus\\ngo,\n$$\nwhere $\\ngo:=[\\sg,\\sg]$ is the derived algebra (recall that $\\ngo$ is nilpotent).\n\n\\begin{definition}\\label{stan}\nA solvmanifold $S$ is said to be {\\it standard} if\n$$\n[\\ag,\\ag]=0.\n$$\n\\end{definition}\n\nThis is a very simple algebraic condition, which may appear as kind of technical,\nbut it has nevertheless played an important role in many questions in homogeneous\nRiemannian geometry:\n\n\\begin{itemize}\n\\item \\cite{GndPttVnb} K$\\ddot{{\\rm a}}$hler-Einstein noncompact homogeneous manifolds are all standard solvmanifolds.\n\n\\item\\cite{Alk,Crt} Every quaternionic K$\\ddot{{\\rm a}}$hler solvmanifold (completely real) is standard.\n\n\\item \\cite{AznWls} Any homogeneous manifold of nonpositive sectional curvature is a standard solvmanifold.\n\n\\item \\cite{Hbr2} All harmonic noncompact homogeneous manifolds are standard solvmanifolds (with $\\dim{\\ag}=1$).\n\\end{itemize}\n\nPartial results on the question of whether Einstein solvmanifolds are all standard\nwere obtained for instance in \\cite{Hbr} and \\cite{Sch}, who gave several sufficient\nconditions. The answer was known to be yes in dimension $\\leq 6$ (see \\cite{NktNkn})\nand followed from a complete classification of Einstein solvmanifolds in these\ndimensions. On the other hand, it is proved in \\cite{Nkl0} that many classes of\nnilpotent Lie algebras can not be the nilradical of a non-standard Einstein\nsolvmanifold.\n\n\n\\begin{theorem}\\cite{standard}\\label{stand}\nAny Einstein solvmanifold is standard.\n\\end{theorem}\n\nAn idea of the proof of this theorem will be given in Section \\ref{proof}. Standard\nEinstein solvmanifolds were extensively investigated in \\cite{Hbr}, where the\nremarkable structural and uniqueness results we next describe are derived. 
Recall\nthat combined with Theorem \\ref{stand}, all of these results are now valid for any\nEinstein solvmanifold.\n\n\\begin{theorem}\\label{u}\\cite[Section 5]{Hbr} {\\rm (}{\\bf Uniqueness}{\\rm )}\nA simply connected solvable Lie group admits at most one standard Einstein left\ninvariant metric up to isometry and scaling.\n\\end{theorem}\n\nA more general result is actually valid: if a noncompact homogeneous manifold $G\/K$\nwith $K$ maximal compact in $G$ admits a $G$-invariant metric $g$ isometric to an\nEinstein solvmanifold, then $g$ is the unique $G$-invariant Einstein metric on $G\/K$\nup to isometry and scaling. This is in contrast to the compact homogeneous case,\nwhere many pairwise non isometric $G$-invariant Einstein metrics might exist (see\n\\cite{BhmWngZll} and the references therein), although it is open if only finitely\nmany (see \\cite[7.55]{Bss}).\n\nIn the study of Einstein homogeneous manifolds, the compact case is characterized by\nthe positivity of the scalar curvature and Ricci flat implies flat (see\n\\cite{AlkKml}). The following conditions on an Einstein solvmanifold $S$ are\nequivalent:\n\n\\begin{itemize}\n\\item[(i)] $\\sg$ is unimodular (i.e. $\\tr{\\ad{X}}=0$ for all $X\\in\\sg$).\n\n\\item[(ii)] $S$ is Ricci flat (i.e. $\\scalar(S)=0$).\n\n\\item[(iii)] $S$ is flat.\n\\end{itemize}\n\nWe can therefore consider from now on only nonunimodular solvable Lie algebras.\n\n\\begin{theorem}\\label{ror}\\cite[Section 4]{Hbr} {\\rm (}{\\bf Rank-one reduction}{\\rm )}\nLet $\\sg=\\ag\\oplus\\ngo$ be a nonunimodular solvable Lie algebra endowed with a\nstandard Einstein inner product $\\ip$, say with $\\ricci_{\\ip}=c\\ip$. Then $c<0$\nand, up to isometry, it can be assumed that $\\ad{A}$ is symmetric for any $A\\in\\ag$.\nIn that case, the following conditions hold.\n\\begin{itemize}\n\\item[(i)] There exists $H\\in\\ag$ such that the eigenvalues of $\\ad{H}|_{\\ngo}$ are all positive integers without a common divisor.\n\n\\item[(ii)] The restriction of $\\ip$ to the solvable Lie algebra $\\RR H\\oplus\\ngo$ is also Einstein.\n\n\\item[(iii)] $\\ag$ is an abelian algebra of symmetric derivations of $\\ngo$ and the inner product on $\\ag$ must be given by $\\la A,A'\\ra=-\\tfrac{1}{c}\\tr{\\ad{A}\\ad{A'}}$ for all $A,A'\\in\\ag$.\n\\end{itemize}\n\\end{theorem}\n\nThe Ricci tensor for these solvmanifolds has the following simple formula.\n\n\\begin{lemma}\\label{fr}\nLet $S$ be a standard solvmanifold such that $\\ad{A}$ is symmetric and nonzero for\nany $A\\in\\ag$. Then the Ricci tensor of $S$ is given by\n\\begin{itemize}\n\\item[(i)] $\\ricci(A,A')=-\\tr{\\ad{A}\\ad{A'}}$ for all $A,A'\\in\\ag$.\n\n\\item[(ii)] $\\ricci(\\ag,\\ngo)=0$.\n\n\\item[(iii)] $\\ricci(X,Y)=\\ricci_{\\ngo}(X,Y) -\\la\\ad{H}(X),Y\\ra$, for all $X,Y\\in\\ngo$, where $\\ricci_{\\ngo}$ is the Ricci tensor of $(\\ngo,\\ip|_{\\ngo\\times\\ngo})$ and $H\\in\\ag$ is defined by $\\la H,A\\ra=\\tr{\\ad{A}}$ for any $A\\in\\ag$.\n\\end{itemize}\n\\end{lemma}\n\nThe natural numbers which have appeared as the eigenvalues of $\\ad{H}$ when\n$(\\sg,\\ip)$ is Einstein play a very important role.\n\n\\begin{definition}\\label{et}\nIf $d_1,...,d_r$ denote the corresponding multiplicities of the positive integers\nwithout a common divisor $k_1<...0$. We\ncan therefore assume that $\\la\\beta,\\alpha_i\\ra=||\\beta||^2$ for all $i$, and also\nthat $\\beta=\\mcc(\\{\\alpha_1,...,\\alpha_s\\})$, where $\\{\\alpha_1,...,\\alpha_s\\}$ is a\nlinearly independent subset of $X$. 
Thus the $s\\times s$ matrix\n$U:=\\left[\\la\\alpha_i,\\alpha_j\\ra\\right]$ is invertible and satisfies\n\\begin{equation}\\label{rat}\nU\\left[\\begin{smallmatrix} c_1 \\\\ \\vdots \\\\ c_s\\end{smallmatrix}\\right]=\n||\\beta||^2\\left[\\begin{smallmatrix} 1 \\\\ \\vdots \\\\ 1\\end{smallmatrix}\\right].\n\\end{equation}\n\nIn particular, if all the entries of $\\alpha_i$ are in $\\QQ$ for any $i=1,...,r$,\nthen also the entries of $\\beta:=\\mcc(X)$ are all in $\\QQ$. Indeed,\n$\\tfrac{c_i}{||\\beta||^2}\\in\\QQ$ for all $i$ and so their sum\n$\\tfrac{1}{||\\beta||^2}\\in\\QQ$, which implies that $c_i\\in\\QQ$ for all $i$ and\nconsequently $\\beta$ has all its coefficients in $\\QQ$.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Variational approach to Einstein solvmanifolds}\\label{va}\n\nEinstein metrics are often considered as the nicest, or most privileged ones on a\ngiven differentiable manifold (see for instance \\cite[Introduction]{Bss}). One of\nthe justifications is the following result due to Hilbert (see \\cite{Hlb}): the\nEinstein condition for a compact Riemannian manifold $(M,g_{\\circ})$ of volume one\nis equivalent to the fact that the total scalar curvature functional\n$$\n\\scalar : g\\mapsto\\int_M\\scalar(g)\\mu_g\n$$\nadmits $g_{\\circ}$ as a critical point on the space of all metrics of volume one\n(see also \\cite[4.21]{Bss}). This variational approach still works for $G$-invariant\nmetrics on $M$, where $G$ is any compact Lie group acting transitively on $M$ (see\n\\cite[4.23]{Bss}).\n\nOn the other hand, it is proved in \\cite{Jns} that in a unimodular $n$-dimensional\nLie group, the Einstein left invariant metrics are precisely the critical points of\nthe scalar curvature functional on the set of all left invariant metrics having a\nfixed volume element. However, this fails in the non-unimodular case. For\ninstance, if $\\sg$ is a solvable non-unimodular Lie algebra, then the scalar\ncurvature functional restricted to any leaf\n$F=\\{t\\}\\times\\Sl(\\sg)\/\\SO(\\sg)\\subset\\pca$ of inner products, has no critical\npoints (see \\cite[3.5]{Hbr}). Thus, the approach to study Einstein solvmanifolds by\na variational method should be different.\n\nIn this section, we shall describe the approach proposed in the introduction: to\nvary Lie brackets rather than inner products. Recall that when $\\ngo$ is an\n$n$-dimensional nilpotent Lie algebra, then the set of all inner products on $\\ngo$\nis very nice, it is parameterized by the symmetric space $\\Gl_n(\\RR)\/\\Or(n)$.\nHowever, isometry classes are precisely the orbits of the action on\n$\\Gl_n(\\RR)\/\\Or(n)$ of the group of automorphisms $\\Aut(\\ngo)$, a group mostly\nunknown, hard to compute, and far from being reductive, that is, ugly from the point\nof view of invariant theory. If we instead vary Lie brackets, isometry classes will\nbe given by $\\Or(n)$-orbits, a beautiful group. But since nothing is for free in\nmathematics, the set of left invariant metrics will now be parameterized by a\n$\\Gl_n(\\RR)$-orbit in the variety $\\nca$ of $n$-dimensional nilpotent Lie algebras,\na terrible space.\n\nWe fix an inner product vector space\n$$\n(\\sg=\\RR H\\oplus\\RR^n,\\ip),\\qquad \\la H,\\RR^n\\ra=0,\\quad \\la H,H\\ra=1,\n$$\nsuch that the restriction $\\ip|_{\\RR^n\\times\\RR^n}$ is the canonical inner product\non $\\RR^n$, which will also be denoted by $\\ip$. A linear operator on $\\RR^n$ will\nbe sometimes identified with its matrix in the canonical basis $\\{ e_1,...,e_n\\}$ of\n$\\RR^n$. 
The metric Lie algebra corresponding to any $(n+1)$-dimensional rank-one\nsolvmanifold, can be modeled on $(\\sg=\\RR H\\oplus\\ngo,\\ip)$ for some nilpotent Lie\nbracket $\\mu$ on $\\RR^n$ and some $D\\in\\Der(\\mu)$, the space of derivations of\n$(\\RR^n,\\mu)$. Indeed, these data define a solvable Lie bracket $[\\cdot,\\cdot]$ on\n$\\sg$ by\n\\begin{equation}\\label{solv}\n[H,X]=DX,\\qquad [X,Y]=\\mu(X,Y), \\qquad X,Y\\in\\RR^n,\n\\end{equation}\n\n\\noindent and the solvmanifold is then the simply connected Lie group $S$ with Lie\nalgebra $(\\sg,[\\cdot,\\cdot])$ endowed with the left invariant Riemannian metric\ndetermined by $\\ip$. We shall assume from now on that $\\mu\\ne 0$ since the case\n$\\mu=0$ (i.e. abelian nilradical) is well understood (see \\cite[Proposition\n6.12]{Hbr}). We have seen in the paragraph above Definition \\ref{enil} that for a\ngiven $\\mu$, there exists a unique symmetric derivation $D_{\\mu}$ to consider if we\nwant to get Einstein solvmanifolds. We can therefore associate to each nilpotent\nLie bracket $\\mu$ on $\\RR^n$ a distinguished rank-one solvmanifold $S_{\\mu}$,\ndefined by the data $\\mu,D_{\\mu}$ as in (\\ref{solv}), which is the only one with a\nchance of being Einstein among all those metric solvable extensions of $(\\mu,\\ip)$.\n\nWe note that conversely, any $(n+1)$-dimensional rank-one Einstein solvmanifold is\nisometric to $S_{\\mu}$ for some nilpotent $\\mu$. Thus the set $\\nca$ of all\nnilpotent Lie brackets on $\\RR^n$ parameterizes a space of $(n+1)$-dimensional\nrank-one solvmanifolds\n$$\n\\{ S_{\\mu}:\\mu\\in\\nca\\},\n$$\ncontaining all those which are Einstein in that dimension.\n\nConcerning the identification\n$$\n\\mu\\longleftrightarrow (N_{\\mu},\\ip),\n$$\nwhere $N_{\\mu}$ is the simply connected nilpotent Lie group with Lie algebra\n$(\\RR^n,\\mu)$, the $\\G$-action on $\\nca$ defined in (\\ref{action}) has the following\ngeometric interpretation: each $g\\in\\G$ determines a Riemannian isometry\n\\begin{equation}\\label{id}\n(N_{g.\\mu},\\ip)\\longrightarrow (N_{\\mu},\\la g\\cdot,g\\cdot\\ra)\n\\end{equation}\n\n\\noindent by exponentiating the Lie algebra isomorphism\n$g^{-1}:(\\RR^n,g.\\mu)\\longrightarrow(\\RR^n,\\mu)$. Thus the orbit $\\G.\\mu$ may be\nviewed as a parametrization of the set of all left invariant metrics on $N_{\\mu}$.\nBy a result of E. Wilson, two pairs $(N_{\\mu},\\ip)$, $(N_{\\lambda},\\ip)$ are\nisometric if and only if $\\mu$ and $\\lambda$ are in the same $\\Or(n)$-orbit (see\n\\cite[Appendix]{minimal}), where $\\Or(n)$ denotes the subgroup of $\\G$ of orthogonal\nmatrices. Also, two solvmanifolds $S_{\\mu}$ and $S_{\\lambda}$ with\n$\\mu,\\lambda\\in\\nca$ are isometric if and only if there exists $g\\in\\Or(n)$ such\nthat $g.\\mu=\\lambda$ (see \\cite[Proposition 4]{critical}). From (\\ref{id}) and the\ndefinition of $S_{\\mu}$ we obtain the following result.\n\n\\begin{lemma}\\label{enilmu}\nIf $\\mu\\in\\nca$ then the nilpotent Lie algebra $(\\RR^n,\\mu)$ is an Einstein\nnilradical if and only if $S_{g.\\mu}$ is Einstein for some $g\\in\\G$.\n\\end{lemma}\n\nRecall that being an Einstein nilradical is a property of a whole $\\G$-orbit in\n$\\nca$, that is, of the isomorphism class of a given $\\mu$.\n\nFor any $\\mu\\in\\nca$ we have that the scalar curvature of $(N_{\\mu},\\ip)$ is given\nby $\\scalar(\\mu)=-\\unc ||\\mu||^2$, which says that normalizing by scalar curvature\nand by the spheres of $V$ is actually equivalent. 
The critical points of any\nscaling invariant curvature functional on $\\nca$ appear then as very natural\ncandidates to be distinguished left invariant metrics on nilpotent Lie groups.\n\n\\begin{theorem}\\label{crit}\\cite{soliton,critical,einsteinsolv}\nFor a nonzero $\\mu\\in\\nca$, the following conditions are equivalent:\n\\begin{itemize}\n\\item[(i)] $S_{\\mu}$ is Einstein.\n\n\\item[(ii)] $(N_{\\mu},\\ip)$ is a nilsoliton.\n\n\\item[(iii)] $\\mu$ is a critical point of the functional $F:V\\longrightarrow\\RR$ defined by\n$$\nF(\\mu)=\\tfrac{16}{||\\mu||^4}\\tr{\\Ric_{\\mu}^2},\n$$\nwhere $\\Ric_{\\mu}$ denotes the Ricci operator of $(N_{\\mu},\\ip)$.\n\n\\item[(iv)] $\\mu$ is a minimum of $F|_{\\G.\\mu}$ (i.e. $(N_{\\mu},\\ip)$ is minimal).\n\n\\item[(v)] $\\Ric_{\\mu}\\in\\RR I\\oplus\\Der(\\mu)$.\n\\end{itemize}\nUnder these conditions, the set of critical points of $F$ lying in $\\G.\\mu$ equals\n$\\Or(n).\\mu$ (up to scaling).\n\\end{theorem}\n\nThus another natural approach to find rank-one Einstein solvmanifolds would be to\nuse the negative gradient flow of the functional $F$. It follows from \\cite[Lemma\n6]{critical} that if $\\pi$ is the representation defined in (\\ref{actiong}) then\n$$\n\\grad(F)_{\\mu}=\\tfrac{16}{||\\mu||^6}\\left(||\\mu||^2\\pi(\\Ric_{\\mu})\\mu-4\\tr{\\Ric_{\\mu}^2}\\mu\\right).\n$$\nSince $F$ is invariant under scaling we know that $||\\mu||$ will remain constant in\ntime along the flow. We may therefore restrict ourselves to the sphere of radius\n$2$, where the negative gradient flow $\\mu=\\mu(t)$ of $F$ becomes\n\\begin{equation}\\label{flow}\n \\ddt\\mu=-\\pi(\\Ric_{\\mu})\\mu+\\tr{\\Ric_{\\mu}^2}\\mu.\n\\end{equation}\n\nNotice that $\\mu(t)$ is a solution to this differential equation if and only if\n$g.\\mu(t)$ is so for any $g\\in\\Or(n)$, according to the $\\Or(n)$-invariance of $F$.\nThe existence of $\\lim\\limits_{t\\to\\infty}\\mu(t)$ is guaranteed by the compactness\nof the sphere and the fact that $F$ is a polynomial (see for instance \\cite[Section\n2.5]{Sjm}).\n\n\\begin{lemma}\\label{strataflow}\\cite{einsteinsolv}\nFor $\\mu_0\\in V$, $||\\mu_0||=2$, let $\\mu(t)$ be the flow defined in {\\rm\n(\\ref{flow})} with $\\mu(0)=\\mu_0$ and put $\\lambda=\\lim\\limits_{t\\to\\infty}\\mu(t)$.\nThen\n\\begin{itemize}\n \\item[(i)] $\\mu(t)\\in\\G.\\mu_0$ for all $t$.\n \\item[(ii)] $\\lambda\\in\\overline{\\G.\\mu_0}$.\n \\item[(iii)] $S_{\\lambda}$ is Einstein.\n \\end{itemize}\n\\end{lemma}\n\nPart (i) follows from the fact that $\\ddt\\mu\\in\\tang_{\\mu}\\G.\\mu$ for all $t$ (see\n(\\ref{flow})), and part (ii) is just a consequence of (i). Condition (ii) is often\nreferred in the literature as the Lie algebra $\\mu_0$ {\\it degenerates} to the Lie\nalgebra $\\lambda$. Some interplays between degenerations and Riemannian geometry of\nLie groups have been explored in \\cite{inter}, by using the fact that for us, the\norbit $\\G.\\mu_0$ is the set of all left invariant metrics on $N_{\\mu_0}$. We note\nthat if the limit $\\lambda\\in\\G.\\mu_0$, then $\\mu_0$ is an Einstein nilradical. We\ndo not know if the converse holds. Since $\\lambda$ is a critical point of $F$ and\n$\\lambda\\in\\nca$ by (ii) and the fact that $\\nca$ is closed, we have that part (iii)\nfollows from Theorem \\ref{crit}.\n\nIn geometric invariant theory, a moment map for linear reductive Lie group actions\nover $\\CC$ has been defined in \\cite{Nss} and \\cite{Krw1} (see Appendix). 
In our\nsituation, it is an $\\Or(n)$-equivariant map\n$$\nm:V\\smallsetminus\\{ 0\\}\\longrightarrow\\sym(n),\n$$\ndefined implicitly by\n\\begin{equation}\\label{defmm}\n\\la m(\\mu),\\alpha\\ra=\\tfrac{1}{||\\mu||^2}\\la\\pi(\\alpha)\\mu,\\mu\\ra, \\qquad \\mu\\in\nV\\smallsetminus\\{ 0\\}, \\; \\alpha\\in\\sym(n).\n\\end{equation}\n\n\\noindent We are using $\\g=\\sog(n)\\oplus\\sym(n)$ as the Cartan decomposition for the\nLie algebra $\\g$ of $\\G$, where $\\sog(n)$ and $\\sym(n)$ denote the subspaces of\nskew-symmetric and symmetric matrices, respectively.\n\nRecall that $\\nca\\subset V$ and each $\\mu\\in\\nca$ determines two Riemannian\nmanifolds $S_{\\mu}$ and $(N_{\\mu},\\ip)$. A remarkable fact is that this moment map\nencodes geometric information on $S_{\\mu}$ and $(N_{\\mu},\\ip)$; indeed, it was\nproved in \\cite{minimal} that\n\\begin{equation}\\label{mmR}\nm(\\mu)=\\tfrac{4}{||\\mu||^2}\\Ric_{\\mu}.\n\\end{equation}\n\nThis allows us to use strong and well-known results on the moment given in\n\\cite{Krw1} and \\cite{Nss}, and proved in \\cite{Mrn} for the real case (see the\nAppendix for an overview on such results). We note that the functional $F$ defined\nin Theorem \\ref{crit}, (iii) is precisely $F(\\mu)=||m(\\mu)||^2$, and so the\nequivalence between (iii) and (iv) in Theorem \\ref{crit} follows from Theorem\n\\ref{marian}, (i). It should be pointed out that actually most of the results in\nTheorem \\ref{crit} follow from general results on the moment map proved in\n\\cite{Mrn}. For instance, the last sentence about uniqueness of critical points of\n$F$ (see Theorem \\ref{marian}, (ii)), is easily seen to be equivalent to the\nuniqueness of standard Einstein solvmanifolds (see Theorem \\ref{u}) and nilsolitons\n(see Theorem \\ref{un}).\n\nIn Section \\ref{st}, we shall see that one can go further in the application of\ngeometric invariant theory to the study of Einstein solvmanifolds, by considering a\nstratification for $\\nca$ intimately related to the moment map.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{On the classification of Einstein solvmanifolds}\\label{clas}\n\nAs we have seen in Section \\ref{pre}, the classification of Einstein solvmanifolds\nis essentially reduced to the rank-one case. There is a bijection between the set\nof all isometry classes of rank-one Einstein solvmanifolds and the set of isometry\nclasses of certain distinguished left invariant metrics on nilpotent Lie groups\ncalled nilsolitons, and the uniqueness up to isometry of nilsolitons finally\ndetermines a new bijection with the set of all isomorphism classes of Einstein\nnilradicals. 
For better or worse, what we get in the end is then a classification\nproblem on nilpotent Lie algebras.\n\nRecall that a nilpotent Lie algebra $\\ngo$ is an Einstein nilradical if and only if\n$\\ngo$ admits a nilsoliton, that is, an inner product $\\ip$ such that the\ncorresponding Ricci operator $\\Ric_{\\ip}$ satisfies\n$$\n\\Ric_{\\ip}=cI+D, \\qquad \\mbox{for some}\\; c\\in\\RR,\\; D\\in\\Der(\\ngo).\n$$\nTherefore, in order to understand or classify Einstein nilradicals, a main problem\nwould be how to translate this condition based on the existence of an inner product\non $\\ngo$ having a certain property into purely Lie theoretic conditions on $\\ngo$.\nThe following questions also arise:\n\n\\begin{itemize}\n\\item[(A)] Besides the existence of an $\\NN$-gradation, is there any other neat structural obstruction for a nilpotent Lie algebra to be an Einstein nilradical?\n\n\\item[ ]\\item[(B)] Is there any algebraic condition on a nilpotent Lie algebra which is sufficient to be an Einstein nilradical?\n\n\\item[ ]\\item[(C)] An $\\NN$-graded nilpotent Lie algebra can or can not be an\nEinstein nilradical, what is more likely?\n\\end{itemize}\n\nLet us now review what we do know on the classification of Einstein nilradicals.\n\nAny nilpotent Lie algebra of dimension $\\leq 6$ is an Einstein nilradical (see\n\\cite{Wll}). There are $34$ of them in dimension $6$, giving rise to $29$ different\neigenvalue-types (there are $5$ eigenvalue-types with exactly two algebras). In\ndimension $7$, the first nilpotent Lie algebras without any $\\NN$-gradation appear,\nbut also do the first examples of $\\NN$-graded Lie algebras which are not Einstein\nnilradicals. The family of $7$-dimensional nilpotent Lie algebras defined for any\n$t\\in\\RR$ by\n\\begin{equation}\\label{ex7}\n\\begin{array}{lll}\n[X_1,X_2]_t=X_3, & [X_1,X_5]_t=X_6, & [X_2,X_4]_t=X_6, \\\\ \\\\\n\n[X_1,X_3]_t=X_4, & [X_1,X_6]_t=X_7, & [X_2,X_5]_t=tX_7, \\\\ \\\\\n\n[X_1,X_4]_t=X_5, & [X_2,X_3]_t=X_5, & [X_3,X_4]_t=(1-t)X_5,\n\\end{array}\n\\end{equation}\n\n\\noindent is really a curve in the set of isomorphism classes of algebras (i.e.\n$\\lb_t\\simeq\\lb_s$ if and only if $t=s$) and $\\lb_t$ turns to be an Einstein\nnilradical if and only if $t\\ne 0,1$ (see \\cite{einsteinsolv}). Recall that all of\nthem admit the gradation $\\ngo=\\ngo_1\\oplus\\ngo_2\\oplus...\\oplus\\ngo_7$, $\\ngo_i=\\RR\nX_i$ for all $i$. This example in particular shows that to be an Einstein\nnilradical is not a property which depends continuously on the structural constants\nof the Lie algebra.\n\nPerhaps the nicest source of examples of Einstein nilradicals is the following.\n\n\\begin{theorem}\\cite{Tmr}\nLet $\\ggo$ be a real semisimple Lie algebra. Then the nilradical of any parabolic\nsubalgebra of $\\ggo$ is an Einstein nilradical.\n\\end{theorem}\n\nIf we add to this that H-type Lie algebras and any nilpotent Lie algebra admitting a\nnaturally reductive left invariant metric are Einstein nilradicals, one may get the\nimpression that any nilpotent Lie algebra which is special or distinguished in some\nway, or just has a `name', will be an Einstein nilradical. 
This is contradicted by\nthe following surprising result, which asserts that free nilpotent Lie algebras are\nrarely Einstein nilradicals.\n\n\\begin{theorem}\\cite{Nkl1}\nA free $p$-step nilpotent Lie algebra on $m$ generators is an Einstein nilradical if\nand only if\n\\begin{itemize}\n\\item $p=1,2$;\n\\item $p=3$ and $m=2,3,4,5$;\n\\item $p=4$ and $m=2$;\n\\item $p=5$ and $m=2$.\n\\end{itemize}\n\\end{theorem}\n\nA nilpotent Lie algebra $\\ngo$ is said to be {\\it filiform} if $\\dim{\\ngo}=n$ and\n$\\ngo$ is $(n-1)$-step nilpotent. These algebras may be seen as those which are as\nfar as possible from being abelian along the class of nilpotent Lie algebras, and in\nfact most of them admit at most one $\\NN$-gradation. Several families of filiform\nalgebras which are not Einstein nilradicals have been found in \\cite{Nkl2}, as well\nas many isolated examples of non-Einstein nilradicals belonging to a curve of\nEinstein nilradicals as in example (\\ref{ex7}). In \\cite{Arr}, a weaker version of\nTheorem \\ref{Upos} given in \\cite{Nkl2} is used to get a classification of\n$8$-dimensional filiform Einstein nilradicals.\n\nThe lack of $\\NN$-gradations is not however the only obstacle one can find for\nEinstein nilradicals. Several examples of non-Einstein nilradicals are already\nknown in the class of $2$-step nilpotent Lie algebras (i.e. $[\\ngo,[\\ngo,\\ngo]]=0$),\nthe closest ones to being abelian and so algebras which usually admit plenty of\ndifferent $\\NN$-gradations.\n\n\\begin{definition} A $2$-step nilpotent Lie algebra $\\ngo$ is said to be of {\\it type} $(p,q)$ if $\\dim{\\ngo}=p+q$ and $\\dim{[\\ngo,\\ngo]}=p$.\n\\end{definition}\n\nIn \\cite{einsteinsolv}, certain $2$-step nilpotent Lie algebras attached to graphs\nare considered (of type $(p,q)$ if the graph has $q$ vertices and $p$ edges) and it\nis proved that they are Einstein nilradicals if and only if the graph is positive\n(i.e. when certain uniquely defined weighting on the set of edges is positive). For\ninstance, any regular graph and also any tree such that any of its edges is adjacent\nto at most three other edges is positive. On the other hand, a graph is not\npositive under the following condition: there are two joined vertices $v$ and $w$\nsuch that $v$ is joined to $r$ vertices of valency $1$, $w$ is joined to $s$\nvertices of valency $1$, both are joined to $t$ vertices of valency $2$ and\n$(r,s,t)$ is not in a set of only a few exceptional small triples. This provides a\ngreat deal of $2$-step non-Einstein nilradicals, starting from types $(5,6)$ and\n$(7,5)$, and any dimension $\\geq 11$ is attained.\n\nMany other $2$-step algebras of type $(6,5)$ and $(7,5)$ which are not Einstein\nnilradicals have appeared from the complete classification for types $(p,q)$ with\n$q\\leq 5$ and $(p,q)\\ne (5,5)$ carried out in \\cite{Nkl3}.\n\nCuriously enough, at this point of the story, with so many examples of non-Einstein\nnilradicals available, a curve was still missing. In each fixed dimension,\nonly finitely many nilpotent Lie algebras which are not Einstein nilradicals have\nshowed up. But this potential candidate to a conjecture has recently been dismissed\nby the following result.\n\n\\begin{theorem}\\cite{Wll2}\nLet $\\ngo_t$ be the $9$-dimensional Lie algebra with Lie bracket defined by\n$$\n\\begin{array}{lll}\n\n[X_5,X_4]_t=X_7, & [X_1,X_6]_t=X_8, & [X_3,X_2]_t=X_9, \\\\ \\\\\n\n[X_3,X_6]_t=tX_7, & [X_5,X_2]_t=tX_8, & [X_1,X_4]_t=tX_9, \\\\ \\\\\n\n[X_1,X_2]_t=X_7. 
&&\n\\end{array}\n$$\n\nThen $\\ngo_{t},$ $t\\in (1,\\infty)$, is a curve of pairwise non-isomorphic $2$-step\nnilpotent Lie algebras of type $(3,6)$, none of which is an Einstein nilradical.\n\\end{theorem}\n\nThe following definition is motivated by (\\ref{pE}), a condition a rank-one solvable\nextension of a nilpotent Lie algebra must satisfy in order to have a chance of being\nEinstein.\n\n\\begin{definition}\\label{preE} A derivation $\\phi$ of a real Lie algebra $\\ggo$ is called {\\it pre-Einstein} if it is diagonalizable over $\\RR$ and\n$$\n\\tr{\\phi\\psi}=\\tr{\\psi}, \\qquad\\forall\\; \\psi\\in\\Der(\\ggo).\n$$\n\\end{definition}\n\nThe following result is based on the fact that $\\Aut(\\ggo)$ is an algebraic group.\n\n\\begin{theorem}\\cite{Nkl3}\\label{eupreE}\nAny Lie algebra $\\ggo$ admits a pre-Einstein derivation, which is unique up to\n$\\Aut(\\ggo)$-conjugation and has eigenvalues in $\\QQ$.\n\\end{theorem}\n\nLet $\\ngo$ be a nilpotent Lie algebra with pre-Einstein derivation $\\phi$. We note\nthat if $\\ngo$ admits a nilsoliton metric, say with $\\Ric_{\\ip}=cI+D$, then $D$\nnecessarily equals $\\phi$ up to scaling and conjugation (see (\\ref{pE})), and thus\nthe eigenvalue-type of the corresponding Einstein solvmanifold is the set of\neigenvalues of $\\phi$ up to scaling. In particular, $\\phi>0$. It is proved in\n\\cite{Nkl3} that also $\\ad{\\phi}\\geq 0$ as long as $\\ngo$ is an Einstein nilradical.\nThese conditions are not however sufficient to guarantee that $\\ngo$ is an Einstein\nnilradical (see \\cite{Nkl0}). In order to get a necessary and sufficient condition\nin terms of $\\phi$ we have to work harder.\n\nLet us first consider\n\\begin{equation}\\label{gphi}\n\\ggo_{\\phi}:=\\{\\alpha\\in\\glg(\\ngo):[\\alpha,\\phi]=0, \\quad\\tr{\\alpha\\phi}=0,\n\\quad\\tr{\\alpha}=0\\}\n\\end{equation}\n\n\\noindent and let $G_{\\phi}$ be the connected Lie subgroup of $\\Gl(\\ngo)$ with Lie\nalgebra $\\ggo_{\\phi}$. Recall that the Lie bracket $\\lb$ of $\\ngo$ belongs to the\nvector space $\\Lambda^2\\ngo^*\\otimes\\ngo$ of skew-symmetric bilinear maps from\n$\\ngo\\times\\ngo$ to $\\ngo$, on which $\\Gl(\\ngo)$ is acting naturally by\n$g.\\lb=g[g^{-1}\\cdot,g^{-1}\\cdot]$.\n\n\n\\begin{theorem}\\cite{Nkl3}\\label{madreN}\nLet $\\ngo$ be a nilpotent Lie algebra with pre-Einstein derivation $\\phi$. Then\n$\\ngo$ is an Einstein nilradical if and only if the orbit $G_{\\phi}.\\lb$ is closed\nin $\\Lambda^2\\ngo^*\\otimes\\ngo$.\n\\end{theorem}\n\nThis is certainly the strongest general result we know so far concerning questions\n(A) and (B) above, and of course it has many useful applications, some of which we will now\ndescribe (see also Theorem \\ref{madre} for a turned to be equivalent result).\n\n\\begin{definition}\\label{nice} Let $\\{ X_1,...,X_n\\}$ be a basis for a nilpotent Lie algebra $\\ngo$,\nwith structural constants $c_{ij}^k$'s given by\n$[X_i,X_j]=\\sum\\limits_{k=1}^{n}c_{ij}^kX_k$. Then the basis $\\{ X_i\\}$ is said to\nbe {\\it nice} if the following conditions hold:\n\\begin{itemize}\n \\item for all $i0$) or $\\hg_3\\oplus\\hg_3$ ($h<0$). This implies that\nthe union of the two $\\Gl_4(\\RR)\\times\\Gl_2(\\RR)$-orbits corresponding to\n$\\hg_3\\oplus\\CC$ and $\\hg_3\\oplus\\hg_3$, which coincides with the set of all\nEinstein nilradicals of eigenvalue type $(1<2;4,2)$ in $V_{4,2}$, is open and dense\nin $V_{4,2}$. 
However, recall that the net probability of being an Einstein\nnilradical of eigenvalue type $(1<2;4,2)$ in $V_{4,2}$ is $\\tfrac{2}{7}$.\n\nOne may try to avoid this by working on the quotient space\n$V_{q,p}\/\\Gl_q(\\RR)\\times\\Gl_p(\\RR)$, where Theorem \\ref{generic} is by the way also\ntrue, but the topology here is so ugly that an open and dense subset can never be\ntaken as a probability one subset. In fact, there could be a single point set which\nis open and dense. On the other hand, the coset of $0$ is always in the closure of\nany other subset, which shows that this quotient space is far from being $T_1$.\n\nIt has very recently appeared in \\cite{Nkl4} a complete classification for $2$-step\nEinstein nilradicals of type $(2,q)$ for any $q$. In \\cite{Jbl}, a construction\ncalled concatenation of $2$-step nilpotent Lie algebras is used to obtain Einstein\nnilradicals of type $(1<2;q,p)$ from smaller ones, as well as many new examples of\n$2$-step non-Einstein nilradicals.\n\n\n\n\n\n\n\n\n\n\\section{Known examples and non examples}\\label{ene}\n\nAs far as we know, the following is a complete chronological list of nilpotent Lie\nalgebras which are known to be Einstein nilradicals, or equivalently, of known\nexamples of rank-one Einstein solvmanifolds:\n\\begin{itemize}\n\n\\item[ ]\n\\item \\cite{Cart} The Lie algebra of an Iwasawa $N$-group: $G\/K$ irreducible symmetric space of\nnoncompact type and $G=KAN$ the Iwasawa decomposition.\n\n\\item[ ]\n\\item\\cite{GndPttVnb} Nilradicals of normal $j$-algebras (i.e. of\nnoncompact homogeneous K$\\ddot{{\\rm a}}$hler Einstein spaces).\n\n\\item[ ]\n\\item\\cite{Alk, Crt} Nilradicals of homogeneous quaternionic K$\\ddot{{\\rm\na}}$hler spaces.\n\n\\item[ ]\n\\item\\cite{Dlf} Certain $2$-step nilpotent Lie algebras for which there\nis a basis with very uniform properties (see also \\cite[1.9]{Wlt}).\n\n\\item[ ]\n\\item\\cite{Bgg} $H$-type Lie algebras (see also \\cite{Lnz}).\n\n\\item[ ]\n\\item\\cite{EbrHbr, manus} Nilpotent Lie algebras admitting a naturally reductive left invariant metric.\n\n\\item[ ]\n\\item\\cite{Hbr} Families of deformations of Lie algebras of Iwasawa $N$-groups in the rank-one\ncase.\n\n\\item[ ]\n\\item\\cite{Fan1,Fan2} Certain $2$-step nilpotent Lie algebras\nconstructed via Clifford modules.\n\n\\item[ ]\n\\item\\cite{GrdKrr} A $2$-parameter family of $2$-step nilpotent Lie algebras of type $(3,6)$ and certain\nmodifications of the Lie algebras of Iwasawa $N$-groups (rank $\\geq 2$).\n\n\\item[ ]\n\\item\\cite{finding} Any nilpotent Lie algebra with a codimension one abelian ideal.\n\n\\item[ ]\n\\item\\cite{finding} A curve of $6$-step nilpotent Lie algebras of dimension $7$, which\nis the lowest possible dimension for a continuous family.\n\n\\item[ ]\n\\item\\cite{Mor} (and Yamada), Certain $2$-step nilpotent Lie algebras defined from\nsubsets of fundamental roots of complex simple Lie algebras.\n\n\\item[ ]\n\\item\\cite{finding} Any nilpotent Lie algebra of dimension $\\leq 5$.\n\n\\item[ ]\n\\item\\cite{Wll} Any nilpotent Lie algebra of dimension $6$.\n\n\\item[ ]\n\\item\\cite{inter} A curve of $2$-step nilpotent Lie algebras of type $(5,5)$.\n\n\\item[ ]\n\\item\\cite{Krr} A $2$-parameter family of deformations of the nilradical of the $12$-dimensional quaternionic hyperbolic space.\n\n\\item[ ]\n\\item\\cite{Pyn} Any filiform (i.e. 
$n$-dimensional and $(n-1)$-step nilpotent) Lie algebra with at least two linearly independent semisimple derivations.\n\n\\item[ ]\n\\item\\cite{einsteinsolv} Certain $2$-step nilpotent Lie algebras attached to graphs as soon as a uniquely defined weighting on the graph is positive. Regular graphs and trees without any edge adjacent to four or more edges are positive.\n\n\\item[ ]\n\\item\\cite{Nkl1} The free $p$-step nilpotent Lie algebras $\\fg(m,p)$ on $m$ generators for $p=1,2$; $p=3$ and $m=2,3,4,5$; $p=4$ and $m=2$; $p=5$ and $m=2$.\n\n\\item[ ]\n\\item\\cite{Nkl2} Several families of filiform Lie algebras.\n\n\\item[ ]\n\\item\\cite{Tmr} The nilradical of any parabolic subalgebra of a semisimple\nLie algebra.\n\n\\item[ ]\n\\item\\cite{Nkl3} Any $2$-step nilpotent Lie algebra of type $(p,q)$ (i.e. $p+q$-dimensional and $p$-dimensional derived algebra) with $q\\leq 5$ and $(p,q)\\ne (5,5)$, with the only exceptions of the real forms of six complex algebras of type $(6,5)$ and three of type $(7,5)$.\n\n\\item[ ]\n\\end{itemize}\n\n\nWe now give an up to date list of $\\NN$-graded nilpotent Lie\nalgebras which are not Einstein nilradicals, that is, they do not admit any\nnilsoliton metric.\n\n\\begin{itemize}\n\\item[ ]\n\\item\\cite{einsteinsolv} Three $6$-step nilpotent Lie algebras of dimension $7$, and\ncertain $2$-step nilpotent Lie algebras attached to graphs in any dimension $\\geq\n11$ (only finitely many in each dimension).\n\n\\item[ ]\n\\item\\cite{Nkl1} The free $p$-step nilpotent Lie algebras $\\fg(m,p)$ on $m$ generators\nfor $p=3$ and $m\\geq 6$; $p=4$ and $m\\geq 3$; $p=5$ and $m\\geq 3$; $p\\geq 6$.\n\n\\item[ ]\n\\item\\cite{Nkl2} Many filiform Lie algebras starting from dimension $8$ (see also \\cite{Arr}).\n\n\\item[ ]\n\\item\\cite{Nkl3} Real forms of six complex $2$-step nilpotent Lie algebras of\ntype $(6,5)$ and three of type $(7,5)$.\n\n\\item[ ]\n\\item\\cite{Wll2} Two curves of $2$-step nilpotent Lie algebras of type $(3,6)$.\n\n\\item[ ]\n\\end{itemize}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{A stratification for the variety of nilpotent Lie algebras}\\label{st}\n\nIn this section, we define a $\\G$-invariant stratification for the representation\n$V=\\lam$ of $\\G$ by adapting to this context the construction given in \\cite[Section\n12]{Krw1} for reductive group representations over an algebraically closed field.\nThis construction, in turn, is based on some instability results proved in\n\\cite{Kmp} and \\cite{Hss}. We decided to give in \\cite[Section 2]{standard} a\nself-contained proof of all these results, bearing in mind that a direct application\nof them does not seem feasible (see also \\cite{strata}).\n\nWe shall use the notation given in Section \\ref{Tb}. For any $\\mu\\in V$ we have\nthat\n$$\n\\lim\\limits_{t\\to\\infty}e^{tI}.\\mu= \\lim\\limits_{t\\to\\infty}e^{-t}\\mu=0,\n$$\nand hence $0\\in\\overline{\\G.\\mu}$, that is, any element of $V$ is unstable for our\n$\\G$-action (see Appendix). Therefore, in order to distinguish two elements of $V$\nfrom the point of view of geometric invariant theory, we would need to measure in\nsome sense `how' unstable each element of $V$ is. 
Maybe the above is not the\noptimal way to go to $0$ along the orbit starting from $\\mu$.\n\nLet us consider $\\mu\\in V$ and $\\alpha\\in\\dca$, where $\\dca$ denotes the set of all\n$n\\times n$ matrices which are diagonalizable, that is,\n$$\n\\dca=\\bigcup_{g\\in\\G}g\\tg g^{-1}.\n$$\nThus $\\pi(\\alpha)$ is also diagonalizable (see (\\ref{actiong})), say with\neigenvalues $a_1,...,a_r$ and eigenspace decomposition $V=V_1\\oplus...\\oplus V_r$.\nThis implies that if $\\mu\\ne 0$ and $\\mu=\\mu_1+...+\\mu_r$, $\\mu_i\\in V_i$, then\n$$\ne^{-t\\alpha}.\\mu= \\sum_{i=1}^r e^{-ta_i}\\mu_i,\n$$\nand so $e^{-t\\alpha}.\\mu$ goes to $0$ when $t\\to\\infty$ if and only if $\\mu_i=0$ as\nsoon as $a_i\\leq 0$. Moreover, in that case, the positive number\n$$\nm(\\mu,\\alpha):=\\min\\{ a_i : \\mu_i\\ne 0\\},\n$$\nmeasures the degree of instability of $\\mu$ relative to $\\alpha$, in the sense that\nthe train has not arrived until the last wagon has. Indeed, the larger\n$m(\\mu,\\alpha)$ is, the faster $e^{-t\\alpha}.\\mu$ will converge to $0$ when\n$t\\to\\infty$. Recall that for an action in general the existence of such $\\alpha$\nfor any unstable element is guaranteed by Theorem \\ref{RS}, (iv).\n\nNotice that $m(\\mu,c\\alpha)=c m(\\mu,\\alpha)$ for any $c>0$. We can therefore\nconsider the most efficient directions (up to the natural normalization) for a given\n$\\mu\\in V$, given by\n$$\n\\Lambda(\\mu):=\\left\\{\\beta\\in\\dca: m(\\mu,\\beta)=1=\\sup\\limits_{\\alpha\\in\\dca}\\left\\{\nm(\\mu,\\alpha):\\tr{\\alpha^2}=\\tr{\\beta^2}\\right\\}\\right\\}.\n$$\nA remarkable fact is that $\\Lambda(\\mu)$ lie in a single conjugacy class, that is,\nthere exists an essentially unique direction which is `most responsible' for the\ninstability of $\\mu$. All the parabolic subgroups $P_{\\beta}$ of $\\G$\nnaturally associated to any $\\beta\\in\\Lambda(\\mu)$ defined in (\\ref{para}) coincide, and\nhence they define a unique parabolic subgroup $P_{\\mu}$ which acts transitively on\n$\\Lambda(\\mu)$ by conjugation. A very nice property $P_{\\mu}$ has is that\n\\begin{equation}\\label{aut}\n\\Aut(\\mu)\\subset P_{\\mu}.\n\\end{equation}\n\nSince\n$$\n\\Lambda(g.\\mu)=g\\Lambda(\\mu)g^{-1}, \\qquad \\forall\\mu\\in V, \\quad g\\in\\G,\n$$\nwe obtain that $\\Lambda(g.\\mu)$ will meet the Weyl chamber $\\tg^+$ for some\n$g\\in\\G$, and the intersection set will consist of a single element $\\beta\\in\\tg^+$\n(see (\\ref{weyl})).\n\nSummarizing, we have been able to attach to each nonzero $\\mu\\in V$, and actually to\neach nonzero $\\G$-orbit in $V$, a uniquely defined $\\beta\\in\\tg^+$ which comes from\ninstability considerations.\n\n\\begin{definition}\nUnder the above conditions, we say that $\\mu\\in\\sca_{\\beta}$ and call the subset\n$\\sca_{\\beta}\\subset V$ a {\\it stratum}.\n\\end{definition}\n\nWe note that $\\sca_{\\beta}$ is $\\G$-invariant for any $\\beta\\in\\tg^+$ and\n$$\nV\\smallsetminus\\{ 0\\}=\\bigcup\\limits_{\\beta\\in\\tg^+}\\sca_{\\beta},\n$$\na disjoint union. An alternative way to define $\\sca_{\\beta}$ is\n$$\n\\sca_{\\beta}=\\G.\\left\\{\\mu\\in V:\\tfrac{\\beta}{||\\beta||^2}\\in\\Lambda(\\mu)\\right\\},\n$$\nwhich actually works for any $\\beta\\in\\tg$. From now on, we will always denote by\n$\\mu_{ij}^k$ the structure constants of a vector $\\mu\\in V$ with respect to the\nbasis $\\{ v_{ijk}\\}$:\n$$\n\\mu=\\sum\\mu_{ij}^kv_{ijk}, \\qquad \\mu_{ij}^k\\in\\RR, \\qquad {\\rm i.e.}\\quad\n\\mu(e_i,e_j)=\\sum_{k=1}^n\\mu_{ij}^ke_k, \\quad i0$. 
By\nletting $E=\\ad{H}$ in (\\ref{einstein}) we get\n\\begin{equation}\\label{c}\nc=-\\tfrac{\\tr{S(\\ad{H})^2}}{\\tr{S(\\ad{H})}}<0.\n\\end{equation}\n\nIn order to apply the results in Section \\ref{st}, we identify $\\ngo$ with $\\RR^n$\nvia an orthonormal basis $\\{ e_1,...,e_n\\}$ of $\\ngo$ and we set\n$\\mu:=[\\cdot,\\cdot]|_{\\ngo\\times\\ngo}$. In this way, $\\mu$ can be viewed as an\nelement of $\\nca\\subset V$. If $\\mu\\ne 0$ then $\\mu$ lies in a unique stratum\n$\\sca_{\\beta}$, $\\beta\\in\\bca$, by Theorem \\ref{strata}, and it is easy to see that\nwe can assume (up to isometry) that $\\mu$ satisfies (\\ref{cond}), so that one can\nuse all the additional properties stated in the theorem. In particular, the\nfollowing crucial technical result follows. Consider $E_{\\beta}\\in\\End(\\sg)$\ndefined by\n$$\nE_{\\beta}=\\left[\\begin{smallmatrix} 0&0\\\\\n0&\\beta+||\\beta||^2I\\end{smallmatrix}\\right],\n$$\nthat is, $E|_{\\ag}=0$ and $E|_{\\ngo}=\\beta+||\\beta||^2I$.\n\n\\begin{lemma}\\label{pie}\nIf $\\mu\\in\\sca_{\\beta}$ satisfies {\\rm (\\ref{cond})} then\n$\\la\\pi(E_{\\beta})[\\cdot,\\cdot],[\\cdot,\\cdot]\\ra\\geq 0$.\n\\end{lemma}\n\nWe then apply (\\ref{einstein}) to $E_{\\beta}\\in\\End(\\sg)$ and obtain from Lemma\n\\ref{pie}, (\\ref{c}), (\\ref{trb}) and (\\ref{betaort}) that\n$$\n\\tr{S(\\ad{H})^2}\\tr{E_{\\beta}^2}\\leq (\\tr{S(\\ad{H})E_{\\beta}})^2,\n$$\na `backwards' Cauchy-Schwartz inequality. This turns all inequalities which\nappeared in the proof of Lemma \\ref{pie} into equalities, in particular:\n$$\n\\unc\\sum_{rs}\\la(\\beta+||\\beta||^2I)[A_r,A_s],[A_r,A_s]\\ra=0,\n$$\nwhere $\\{ A_i\\}$ is an orthonormal basis of $\\ag$. We finally get that $\\ag$ is\nabelian since $\\beta+||\\beta||^2I$ is positive definite by (\\ref{betapos}).\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{The stratification and Einstein solvmanifolds via closed orbits}\\label{smadre}\n\nWe shall describe in this section some other applications of the strata defined in\nSection \\ref{st} to the study of Einstein solvmanifolds.\n\nLet $\\ngo$ be a nonabelian nilpotent Lie algebra of dimension $n$. We fix any basis\n$\\{ X_1,...,X_n\\}$ of $\\ngo$ and consider the corresponding structural constants:\n$$\n[X_i,X_j]=\\sum_{k=1}^n c_{ij}^kX_k, \\qquad 1\\leq i0$ (see (\\ref{betapos})) and $\\ad{\\phi}\\geq 0$ (see\n(\\ref{adbeta})) are necessary conditions in order to have\n$0\\notin\\overline{G_{\\beta}.\\lb}$ (i.e. $\\lb\\in\\sca_{\\beta}$). These conditions are not however sufficient (compare with the paragraph below Theorem \\ref{eupreE}). For instance, any free nilpotent Lie algebra which is not an Einstein nilradical provides a counterexample (see \\cite[Remark 2]{Nkl3}). \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Open problems}\\label{op}\n\nLet $\\ngo$ be an $\\NN$-graded nilpotent Lie algebra.\n\n\\begin{enumerate}\n\\item {\\bf Obstructions}. To find algebraic necessary conditions on $\\ngo$ to be an Einstein nilradical.\n\n\\item {\\bf Existence}. Are there algebraic conditions on $\\ngo$ which are sufficient to be an Einstein nilradical?\n\n\\item Does the assertion `$\\ngo$ is an Einstein nilradical' have probability $1$ in some sense?\n\n\\item Does the assertion `$\\ngo$ is not an Einstein nilradical' have probability $1$ in some sense?\n\n\\item Assume that $\\ngo$ is an Einstein nilradical with Lie bracket $\\mu_0\\in\\nca$, and consider the flow $\\mu(t)$ defined in (\\ref{flow}) with $\\mu(0)=\\mu_0$. 
Does $\\lambda=\\lim\\limits_{t\\to\\infty}\\mu(t)$ necessarily belong to $\\G.\\mu_0$? (this would provide a nice obstruction).\n\n\\item To exhibit an explicit example or prove the existence of a nilpotent Lie algebra which does not admit a nice basis (see Definition \\ref{nice}).\n\n\\item Are there only finitely many $\\NN$-graded filiform Lie algebras which are not Einstein nilradicals in each dimension?\n\\end{enumerate}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Appendix: Real geometric invariant theory}\\label{git}\n\n\nLet $G$ be a real reductive group acting linearly on a finite dimensional real\nvector space $V$ via $(g,v)\\mapsto g.v$, $g\\in G,v\\in V$. The precise definition of\nour setting is the one considered in \\cite{RchSld}. We also refer to \\cite{EbrJbl},\nwhere many results from geometric invariant theory are adapted and proved over\n$\\RR$.\n\nThe Lie algebra $\\ggo$ of $G$ also acts linearly on $V$ by the derivative of the\nabove action, which will be\n denoted by $(\\alpha,v)\\mapsto\\pi(\\alpha)v$, $\\alpha\\in\\ggo$, $v\\in V$. We consider\n a Cartan decomposition\n$\\ggo=\\kg\\oplus\\pg$, where $\\kg$ is the Lie algebra of a maximal compact subgroup\n$K$ of $G$. Endow $V$ with a fixed from now on $K$-invariant inner product $\\ip$\nsuch that $\\pg$ acts by symmetric operators, and endow $\\pg$ with an\n$\\Ad(K)$-invariant inner product $\\ipp$.\n\nThe function $m:V\\smallsetminus\\{ 0\\}\\longrightarrow\\pg$ implicitly defined by\n$$\n(m(v),\\alpha)=\\isn\\la\\pi(\\alpha)v,v\\ra, \\qquad \\forall\\alpha\\in\\pg,\\; v\\in V,\n$$\nis called the {\\it moment map} for the representation $V$ of $G$. Since\n$m(cv)=m(v)$ for any nonzero $c\\in\\RR$, we also may consider the moment map on the\nprojective space of $V$, $m:\\PP V\\mapsto\\pg$, with the same notation and definition\nas above for $m([v])$, $[v]$ the class of $v$ in $\\PP V$. It is easy to see that\n$m$ is $K$-equivariant: $m(k.v)=\\Ad(k)m(v)$ for all $k\\in K$.\n\nIn the complex case (i.e. 
for a complex representation of a complex reductive\nalgebraic group), under the natural identifications $\\pg=\\pg^*=(\\im\\kg)^*=\\kg^*$,\nthe function $m$ is precisely the moment map from symplectic geometry, corresponding\nto the Hamiltonian action of $K$ on the symplectic manifold $\\PP V$ (see for\ninstance the survey \\cite{Krw2} or \\cite[Chapter 8]{Mmf} for further information).\nFor real actions, this nice interplay with symplectic geometry is lost ($\\PP V$\ncould even be odd dimensional), but the moment map is nevertheless a very natural\nobject attached to a real representation encoding a lot of information on the\ngeometry of $G$-orbits and the orbit space $V\/G$.\n\nLet $\\mca=\\mca(G,V)$ denote the set of {\\it minimal vectors}, that is,\n$$\n\\mca=\\{ v\\in V: ||v||\\leq ||g.v||\\quad\\forall g\\in G\\}.\n$$\nFor each $v\\in V$ define\n$$\n\\rho_v:G\\mapsto\\RR, \\qquad \\rho_v(g)=||g.v||^2.\n$$\nIn \\cite{RchSld}, it is shown that the nice interplay between closed orbits and\nminimal vectors discovered in \\cite{KmpNss} for actions of complex reductive\nalgebraic groups, is still valid in the real situation.\n\n\\begin{theorem}\\cite{RchSld}\\label{RS}\nLet $V$ be a real representation of a real reductive group $G$, and let $v\\in V$.\n\\begin{itemize}\n\\item[(i)] The orbit $G.v$ is closed if and only if $G.v$ meets $\\mca$.\n\n\\item[(ii)] $v\\in\\mca$ if and only if $\\rho_v$ has a critical point at $e\\in G$.\n\n\\item[(iii)] If $v\\in\\mca$ then $G.v\\cap\\mca=K.v$.\n\n\\item[(iv)] The closure $\\overline{G.v}$ of any orbit $G.v$ always meets $\\mca$. Moreover, there always exists\n$\\alpha\\in\\pg$ such that $\\lim\\limits_{t\\to\\infty}\\exp(-t\\alpha).v=w$ exists and\n$G.w$ is closed.\n\n\\item[(v)] $\\overline{G.v}\\cap\\mca$ is a single $K$-orbit, or in other words, $\\overline{G.v}$ contains a unique\nclosed $G$-orbit.\n\\end{itemize}\n\\end{theorem}\n\nAs usual in the real case, classical topology of $V$ is always considered rather\nthan Zarisky topology, unless explicitly indicated.\n\nLet $(\\dif\\rho_v)_e:\\ggo\\mapsto\\RR$ denote the differential of $\\rho_v$ at the\nidentity $e$ of $G$. It follows from the $K$-invariance of $\\ip$ that\n$(\\dif\\rho_v)_e$ vanishes on $\\kg$, and so we can assume that\n$(\\dif\\rho_v)_e\\in\\pg^*$, the vector space of real-valued functionals on $\\pg$. If\nwe identify $\\pg$ and $\\pg^*$ by using $\\ipp$, then it is easy to see that\n$$\nm(v)=\\tfrac{1}{2||v||^2}(\\dif\\rho_v)_e.\n$$\nThe moment map at $v$ is therefore an indicator of the behavior of the norm along\nthe orbit $G.v$ in a neighborhood of $v$. It follows from Theorem \\ref{RS}, (ii)\nthat\n$$\n\\mca\\smallsetminus\\{ 0\\} = \\{ v\\in V\\smallsetminus\\{ 0\\}: m(v)=0\\}.\n$$\nThus if we consider the functional square norm of the moment map\n\\begin{equation}\\label{norm}\nF:V\\smallsetminus\\{ 0\\}\\mapsto\\RR, \\qquad F(v)=||m(v)||^2,\n\\end{equation}\nwhich is a $4$-degree homogeneous polynomial times $||v||^{-4}$, $\\mca\\setminus\\{ 0\\}$\ncoincides with the set of zeros of $F$. It then follows from Theorem \\ref{RS}, parts\n(i) and (iii), that a nonzero orbit $G.v$ is closed if and only if $F(w)=0$ for some\n$w\\in G.v$, and in that case, the set of zeros of $F|_{G.v}$ coincides with $K.v$.\nRecall that $F$ is scaling invariant and so it is actually a function on any sphere\nof $V$ or on $\\PP V$.\n\nA natural question arises: what is the role played by the remaining critical points\nof $F$ (i.e. 
those for which $F(v)>0$) in the study of the $G$-orbit space of the\naction of $G$ on $V$? This was studied independently in \\cite{Krw1} and \\cite{Nss}\nin the complex case, where it was shown that non-minimal critical points still enjoy\nmost of the nice properties of minimal vectors stated in Theorem \\ref{RS}. In the\nreal case, the analogues of some of these results have been proved in \\cite{Mrn}.\n\nWe endow $\\PP V$ with the Fubini-Study metric defined by $\\ip$ and denote by\n$x\\mapsto \\alpha_x$ the vector field on $\\PP V$ defined by $\\alpha\\in\\ggo$ via the\naction of $G$ on $\\PP V$, that is, $\\alpha_x=\\ddt|_0\\exp(t\\alpha).x$. We will also\ndenote by $F$ the functional $F:\\PP V\\longrightarrow\\RR$, $F([v])=||m([v])||^2$.\n\n\n\\begin{lemma}\\cite{Mrn}\\label{marian2}\nThe gradient of the functional $F:V\\setminus\\{ 0\\}\\longrightarrow\\RR$ is given by\n$$\n\\grad(F)_{v}=\\tfrac{4}{||v||^2}\\Big(\\pi(m(v))v-||m(v)||^2v\\Big), \\qquad v\\in\nV\\setminus\\{ 0\\},\n$$\nand for $F:\\PP V\\longrightarrow\\RR$ we have that\n$$\n\\grad(F)_{[v]}=4m([v])_{[v]}, \\qquad [v]\\in\\PP V.\n$$\n\\end{lemma}\n\nTherefore, $v$ is a critical point of $F$ (or equivalently, of $F|_{G.v}$) if and\nonly if $v$ is an eigenvector of $\\pi(m(v))$, and $[v]$ is a critical point of $F$\n(or equivalently, of $F|_{G.[v]}$) if and only if $\\exp(tm([v]))$ fixes $[v]$ for all $t\\in\\RR$.\n\n\\begin{theorem}\\cite{Mrn}\\label{marian}\nLet $V$ be a real representation of a real semisimple Lie group $G$.\n\\begin{itemize}\n\\item[(i)] If $x\\in\\PP V$ is a critical point of $F$ then the functional $F|_{G.x}$ attains its minimum value at\n$x$.\n\n\\item[(ii)] If nonempty, the critical set of $F|_{G.x}$ consists of a single $K$-orbit.\n\\end{itemize}\n\\end{theorem}\n\n\n\\begin{definition}\\label{stable}\nA nonzero vector $v\\in V$ is called {\\it unstable} if $0\\in\\overline{G.v}$, and {\\it\nsemistable} otherwise. If a semistable vector has in addition a compact isotropy\nsubgroup then it is called {\\it stable}.\n\\end{definition}\n\nIf the orbit of a nonzero $v\\in V$ is closed then $v$ is clearly semistable. More\ngenerally, $v\\in V$ is semistable if and only if the unique (up to $K$-action) zero\nof $F$ which belongs to $\\overline{G.v}$ is a nonzero vector. In contrast, any\ncritical point of $F$ which is not a zero of $F$ is unstable. Indeed, if\n$\\pi(m(v))v=cv$, $c=||m(v)||^2>0$ (see Lemma \\ref{marian2}), then\n$$\n\\lim_{t\\to\\infty}\\exp(-tm(v)).v=\\lim_{t\\to\\infty}e^{-tc}v=0,\n$$\nand so $0\\in\\overline{G.v}$. Thus the study of critical points of $F$ other than\nzeroes gives useful information on the orbit space structure of the subset of all\nunstable vectors, often called the {\\it nullcone} of $V$.\n\n\n\\begin{example}\\label{hompol1}\nLet us consider the example of $G=\\Sl_3(\\RR)$ and $V=P_{3,3}(\\RR)$, the vector space\nof all homogeneous polynomials of degree $3$ in $3$ variables. The action is given\nby a linear change of variables on the left\n$$\n(g.p)(x_1,x_2,x_3)=p\\left(g^{-1}\\left[\\begin{smallmatrix} x_1\\\\ x_2\\\\ x_3\n\\end{smallmatrix}\\right]\\right), \\qquad \\forall g\\in\\Sl_3(\\RR),\\quad p\\in P_{3,3}(\\RR).\n$$\nIt follows that $\\ggo=\\slg_3(\\RR)$, $K=\\SO(3)$, $\\kg=\\sog(3)$ and $\\pg=\\sym_0(3)$ is\nthe space of traceless symmetric $3\\times 3$ matrices. 
As an $\\Ad(K)$-invariant\ninner product on $\\pg$ we take $(\\alpha,\\beta)=\\tr{\\alpha\\beta}$, and it is easy to\nsee that the inner product $\\ip$ on $V$ for which the basis of monomials\n$$\n\\{ x^D:=x_1^{d_1}x_2^{d_2}x_3^{d_3}:d_1+d_2+d_3=3,\\; D=(d_1,d_2,d_3)\\}\n$$\nis orthogonal and\n$$\n||x^D||^2=d_1!d_2!d_3!, \\qquad\\forall D=(d_1,d_2,d_3),\n$$\nsatisfies the required conditions. Let $E_{ij}$ denote as usual the $n\\times n$\nmatrix whose only nonzero coefficient is a $1$ in the entry $ij$. Since\n$$\n\\pi(E_{ij})p=\\ddt|_0p(e^{-tE_{ij}}\\cdot )=-x_j\\tfrac{\\partial p}{\\partial x_i},\n$$\nwe obtain that the moment map $m:P_{3,3}(\\RR)\\longrightarrow\\sym_0(3)$ is given by\n$$\nm(p)=I-\\tfrac{1}{||p||^2}\\left[\\la x_j\\tfrac{\\partial p}{\\partial x_i},p\\ra\\right].\n$$\nWe are using here that $\\la x_j\\tfrac{\\partial p}{\\partial x_i},p\\ra=\\la\nx_i\\tfrac{\\partial p}{\\partial x_j},p\\ra$ for all $i,j$.\n\nIt is also easy to see that the action of a diagonal matrix $\\alpha\\in\\slg_3(\\RR)$\nwith entries $a_1,a_2,a_3$ is given by\n\\begin{equation}\\label{diagact}\n\\pi(\\alpha)x^D=-\\left(\\sum_{i=1}^3a_id_i\\right)x^D, \\qquad\\forall D=(d_1,d_2,d_3).\n\\end{equation}\nA first general observation is that any monomial is a critical point of $F$. Indeed,\n$$\nm(x^D)=\\left[\\begin{smallmatrix} 1-d_1&&\\\\ &1-d_2&\\\\ &&1-d_3\n\\end{smallmatrix}\\right],\n$$\nand so $x^D$ is an eigenvector of $m(x^D)$ with eigenvalue $F(x^D)=\\sum d_i^2-3$\n(see Lemma \\ref{marian2}). It follows that $m(p)=0$ for $p=x_1x_2x_3$, that is, $p$\nis a minimal vector and its $\\Sl_3(\\RR)$-orbit is therefore closed. We also have in\nthis case that $p_1=p+x_1^3$ is a semistable vector whose orbit is not closed.\nIndeed, by acting by diagonal elements with entries $t,\\tfrac{1}{t},1$ we get that\n$p+t^{-3}x_1^3\\in\\Sl_3(\\RR).p_1$ for all $t\\ne 0$ and so $p\\in\\overline{\\Sl_3(\\RR).p_1}$\n(recall that $p$ and $p_1$ can never lie in the same orbit since they have\nnon-isomorphic isotropy subgroups).\n\nFor the vector $q=x_1^2x_3+x_1x_2^2$ we have that\n$$\nm(q)=\\left[\\begin{smallmatrix} -\\tfrac{1}{2}&&\\\\ &0&\\\\\n&&\\tfrac{1}{2}\n\\end{smallmatrix}\\right].\n$$\nIt follows from (\\ref{diagact}) that $\\pi(m(q))q=\\tfrac{1}{2}q$ proving that $q$ is\na critical point of $F$ with critical value $F(q)=\\tfrac{1}{2}>0$. On the other\nhand, the family $p_{a,b}=ax_1^2x_3+bx_2^3$, $a,b\\ne 0$, lies in a single orbit and\n$$\nm(p_{a,b})=I-\\tfrac{1}{2a^2+6b^2} \\left[\\begin{smallmatrix} 4a^2&&\\\\ &18b^2&\\\\\n&&2a^2\n\\end{smallmatrix}\\right].\n$$\nIt is then easy to see by using (\\ref{diagact}) that $p_{a,b}$ is a critical point\nif and only if $5a^2=27b^2$, in which case $m(p_{a,b})$ has diagonal entries\n$-\\tfrac{2}{7},-\\tfrac{1}{14},\\tfrac{5}{14}$ and the critical value equals $\\tfrac{3}{14}$, a\nnumber smaller than $\\unm$. In particular, $p_{a,b}$ cannot be in the closure of\nthe orbit of $q$ by Theorem \\ref{marian}, (i).\n\\end{example}\n\n\n\n\n\n\n\n\n\\bibliographystyle{amsalpha}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Autographic Visualization}\nWe present autographic visualization as a counter-model to data visualization, focusing on material rather than data encoded in numbers and symbols. It is a counter-model not in the sense that it excludes data visualization---there is considerable overlap between the two models---but in the sense that it clarifies the characteristics of each model through this juxtaposition. 
\n\nWe refer to autographic visualization as a set of techniques for revealing material phenomena as visible traces and guiding their interpretation. Designing an autographic display means setting the conditions that allow a trace to emerge. We understand a trace as any transient or persistent configuration of matter presenting itself to the senses.\n\nA central goal of autographic visualization is to make environmental information legible and the processes of data collection and their underlying causalities experiential and accountable. Since a data set is the outcome rather than the starting point (Fig.~\\ref{fig:sketch}, middle), autographic visualization cannot rely on the representation of data. It is non-representational: rather than re-presenting something absent, the phenomenon presents itself. Autographic visualizations can be accidental, such as the desire paths across grass areas in the city or the uneven traces of wear on a staircase or a computer keyboard. But in general, they are the outcome of design operations that aim to reveal, isolate, amplify, conserve, and present material traces as records of past processes and events. For example, the display of the sundial~\\cite{Peters2015Marvelous} is a product of a natural phenomenon untouched by human intervention. At the same time, it is a computational device designed to calculate not only the time of day but also month and season. Its display often incorporates a calendar---a classic form of data visualization---geometrically aligned with the path of the sun in the particular location.\n\n\nAutographic visualization techniques draw from a long history of epistemic and material cultures that deal with the visual interpretation of traces, symptoms, or signatures as forms of material evidence. Its practices range from scientific experimentalism to ancient techniques of hunting, navigating, and healing. This paper is based on two premises. First, the diverse space of practices engaging with traces can be generalized into several distinct design operations. And second, these visual operations of autographic visualization are closely related to the modes of exploration facilitated by information visualization.\n\nWhile the interpretation of medical symptoms, the design of experimental systems, or the design of shape-changing materials are usually considered in isolation, autographic visualization identifies common visual strategies across all of these practices. Table~\\ref{infovsauto} summarizes the main differences between InfoVis and autographic visualization. 
To avoid confusion, we use \\textit{symbolic data} to refer to digital data.\n\n\\begin{table}\n\t\\caption{Comparison between InfoVis and Autographic Visualization}\\label{infovsauto}\\centering\\scriptsize\\medskip\n\t\\begin{tabular}\n\t\t{L{1.8cm} L{2.7cm} L{3cm}} & InfoVis & Autographic Visualization\\\\\n\t\t\\midrule Role of symbolic data & Begins with data & Ends with data\\medskip\\\\\n\t\tFocus & Inwards: reveals patterns within data & Outwards: reveals the process of data generation\\medskip\\\\\n\t\tRole of representation & Representational: visual marks stand for a phenomenon & Non-representational: the phenomenon presents itself\\medskip\\\\\n\t\tRole of design & Mapping data to visual variables \\& layouts & Elucidating qualities of a phenomenon\\\\\n\t\\end{tabular}\n\\end{table}\n\nDespite these differences, there is a close kinship between autographic and information visualization\u2014both are rooted in the same visual culture and take advantage of similar perceptual mechanisms~\\cite{Noe2004Action,Haroz2006Natural}. Foundational literature in visualization and HCI frequently invokes natural phenomena as metaphor or inspiration. Whether charts and graphs should be viewed as abstractions of natural phenomena based on shared organizational principles or as metaphorical references will not be elaborated here. However, it is worth noting that both InfoVis and autographic visualization were at one point considered to be the same approach. Etienne-Jules Marey's late 19\\textsuperscript{th}~century \\emph{Methode Graphique} encompasses both the charting of statistical information and the construction of self-registering devices for recording blood pressure, the flight of birds, or the turbulence of air. In pursuit of his declared goal to capture \"the language of the phenomena themselves,\" Marey's pioneering work included autographic devices such as the wind tunnel and, most prominently, his invention of chronophotography~\\cite{Marey1878La}, inspiring other non-mimetic uses of photography~\\cite{felton_photoviz:_2016}.\n\nAnalog information visualizations are often at the same time physical traces. Mechanically excited by seismic movements, a simple seismometer produces a classic line chart. This dual role creates a conceptual ambiguity that blurs the boundary between InfoVis and autographic visualization. If the line chart produced by the seismometer is a physical trace, what about a satellite image, what about the electrical charge generated by a digital sensor connected to a computer? The difference between an analog and a digital medium is not relevant for the underlying causality since both devices operate in a deterministic way. From this perspective, many symbolic datasets indeed share the character of a material trace; the material aspects of data collection inscribe themselves, sometimes unintentionally, into the data set~\\cite{Dourish2013Media}. \n\nThis can be illustrated through a public data set of GPS traces of drop-off and pick-up locations of NYC taxis. Plotting the data set in Cartesian space yields, unsurprisingly, a figure that resembles a map of the city. Some areas on this map, however, appear blurrier than others: an artifact of diminished GPS reception between tall buildings. In other words, the two-dimensional geographic datum contains hidden information about the three-dimensional shape of the city. But this latent information is only accessible if the materiality of GPS is understood and considered. 
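As a hedged illustration of such a material reading, the following sketch summarizes the within-cell scatter of the plotted points instead of discarding it as noise. The file name \texttt{taxi\_pickups.csv}, the \texttt{lon}/\texttt{lat} column names, and the grid resolution are illustrative assumptions, not a reference to the actual NYC data set.
\begin{verbatim}
# Sketch: reading GPS "dust" as a signal rather than as noise to be cleaned.
# Assumes a hypothetical file taxi_pickups.csv with "lon" and "lat" columns.
import pandas as pd

pts = pd.read_csv("taxi_pickups.csv")

# Assign each point to a cell of a coarse 200 x 200 grid over the city.
pts["ix"] = pd.cut(pts["lon"], 200, labels=False)
pts["iy"] = pd.cut(pts["lat"], 200, labels=False)

# Instead of dropping scattered points as errors, summarize the within-cell
# scatter; it partly reflects degraded GPS reception between tall buildings
# rather than where passengers were actually picked up.
local_scatter = (pts.groupby(["ix", "iy"])[["lon", "lat"]]
                    .std()
                    .mean(axis=1)
                    .rename("local_scatter"))

print(local_scatter.sort_values(ascending=False).head())
\end{verbatim}
Cells with unusually high scatter would correspond to the "blurry" areas described above, so that an artifact of the collection process becomes a readable trace of the built environment.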
A material reading that takes advantage of such artifacts, or \"dust\" in the data~\\cite{Loukissas2016Taking}, differs from a classic approach of cleaning the data set by excluding obvious errors, e.g., points that fall into the ocean or within buildings. While information- and autographic visualization may differ in the length of the causal chains that link phenomenon and representation, the autographic perspective can to some extent also be applied to digital information, further explored in Section~\\ref{performative}.\n\n\\begin{figure*}\n\t[t] \\centering \n\t\\includegraphics[width=18cm]{collage.jpg} \n\t\\caption{Autographic visualizations and their design operations (Table~\\ref{designop}), top-left to bottom-right: (a)~Cyanometer, a device for measuring the blueness of the sky, \\emph{framing} and \\emph{encoding}~\\cite{Saussure1791Description}; (b)~Mercury-in-glass thermometer, \\emph{constraining} and \\emph{encoding}; (c)~filter for sampling airborne particulate matter, \\emph{aggregating}; (d) southern blot for DNA electrophoresis, \\emph{separating, registering}; (e)~EJ Marey's smoke machine to visualize airflow, \\emph{coupling}; (f)~Campbell-Stokes sunshine recorder, \\emph{registering, encoding}; (g)~Chladni figure revealing sound waves, \\emph{coupling, registering}; (h)~a planning diagram for neuro-surgery, \\emph{annotating}~\\cite{Wulfingen2017Traces:}; (i)~pedocomparator for sampling and comparing soil samples, \\emph{aggregating, encoding}~\\cite{Latour1999Pandora's}; (j)~reagent strips for ozone detection, \\emph{registering, encoding}.}\\label{collage}\n\\end{figure*}\n\n\\section{Theoretical Perspectives on Material Traces}\nClassic semiology, in many ways foundational for information design and visualization~\\cite{Bertin1983Semiology,Robinson1975Map,Cleveland1987Graphical}, offers a framework for analyzing physical traces. Philosopher Charles Sanders Peirce differentiates \\emph{symbol}, \\emph{icon}, and \\emph{index} as three kinds of (non-exclusive) relationships a sign can have with a corresponding object or concept in the world. The symbol is linked to its object based on arbitrary convention; the icon is based on a relationship of resemblance; and in the case of the index, the relationship is an existential connection such as the causal link between a footprint and the person that left it~\\cite{Peirce1998What}. While \\emph{icons} (Peirce includes diagrams in this category) and \\emph{symbols} play a prominent role in information design and visualization, indexical signs appear only implicitly; for example, as patterns and signals in data sets. \n\nIndexical phenomena have been explored in HCI, ubiquitous computing, and to a lesser extent, information visualization~\\cite{Moere2009Analyzing,Offenhuber2012Kuleshov's,Schofield2013Indexicality,Offenhuber2015Indexical}. The application of indexicality to physical traces, however, is somewhat limited by the central role of linguistic concepts in semiotic theory. Peirce, for example, describes a pointing finger, a physical trace, and the word \"there\" as equivalent examples of indexical signs. By relying on the sign as the universal vehicle of meaning, semiotic perspectives reduce the trace to its role as a signifier. Scholars have critiqued the semiotic model of representation, in which meaning is conveyed by signs that stand for concepts in the world. 
To paraphrase historian Lorraine Daston, the proposition of a one-to-one correspondence between a sign and its object turned out to be as useless as the Borgesian 1-to-1 map that fully covers the territory~\\cite{Coopmans2014Representation}.\n\nWhen considering all the processes, actions, and material conditions involved in exploring traces, it is not always useful to make explicit what exactly constitutes a sign and how it is used to generate meaning. Scholars in science, technology, and society (STS) have formulated alternative perspectives that focus on the performative and embodied modes of cognition with regard to the roles of traces and trace-making in the history of science. Bruno Latour describes data, traces, and visualizations as \\emph{immutable mobiles}: aspects of the world that have been stabilized, flattened, and made mobile to support arguments in scientific discourse~\\cite{Latour1990Visualisation,Latour1999Pandora's}. In a similar vein, Hans-J\u00f6rg Rheinberger speaks about epistemic things: objects manipulated in the laboratory that should not just be regarded as samples collected from the world, but as materializations of research questions and scientific models that are embodied in the countless transformations applied to them~\\cite{Rheinberger1997Toward}.\n\nThe notion of the trace as objective evidence and science as a process of trace-making has blossomed in the 19\\textsuperscript{th}~century paradigm of \\emph{mechanical objectivity}. Charting the history of objectivity through scientific atlases and visualization, historians Daston and Galison describe the paradigm as a pursuit to develop modes of inscription that create pure and objective visualizations without human intervention, even if just for the sake of removing dirt and imperfections~\\cite{Daston2007Objectivity}. Culminating in the work of E. J. Marey, mechanical objectivity still resonates in contemporary efforts to develop canonical visualization principles based on scientific criteria. \n\nMechanical objectivity in its purest ambition of tracing \"nature's pencil,\" however, was bound to fail due to the indispensability of narrative explanation and the ambiguous nature of the trace. Historian Carlo Ginzburg describes the interpretation of traces, clues, and symptoms as a method of conjecture rather than computation~\\cite{Ginzburg1979Clues}. Philosopher Sybille Kr\u00e4mer locates traces \"at the seam of where the meaningless becomes meaningful,\" embodying meaning through material configuration rather than verbal attribution. In her understanding, traces are not found, but constructed in the act of reading: a trace is whatever is recognized as a trace~\\cite{Kraemer2007Spur}. Contemporary thinkers under the umbrella of \\emph{new materialism}, however, do not insist on the centrality of the human observer~\\cite{VanderTuin2012New}. Distinct from both \\emph{realist} (focusing on the external world) and \\emph{anti-realist} (focusing on the relationships among signs) perspectives, Karen Barad's concept of \\emph{agential realism} considers the human subject as a part, but not the center of an external phenomenon~\\cite{Barad2007Meeting}. 
Avoiding any dualism between objects in the world and their representations, Barad understands a phenomenon as an ongoing process of what she describes as intra-actions rather than a fixed set of objects and their relationships.\n\nTranslated to the subject at hand, this implies that autographic visualizations are not stable artifacts whose correct interpretation is just a matter of visual literacy, but phenomena that emerge from a recipient's extensive engagement with the world and with the knowledge of others, like a hunter who learns to spot latent animal tracks that are not just invisible but non-existent for an unskilled person. Philosopher Michael Polanyi aptly describes how a complex trace can depend on theoretical concepts and language~\\cite{Polanyi1998Personal}:\n\\begin{quote}\n\tThink of a medical student attending a course in the X-ray diagnosis of pulmonary diseases. He watches, in a darkened room, shadowy traces on a fluorescent screen placed against a patient's chest, and hears the radiologist commenting to his assistants, in technical language, on the significant features of these shadows. At first, the student is completely puzzled. [\\dots] The experts seem to be romancing about figments of their imagination- he can see nothing that they are talking about. Then, as he goes on listening for a few weeks, looking carefully at ever-new pictures of different cases, a tentative understanding will dawn on him; he will gradually forget about the ribs and begin to see the lungs. And eventually, if he perseveres intelligently, a rich panorama of significant details will be revealed to him (p. 106).\n\\end{quote}\n\n\\section{Autographic Visualization Neighbors}\nAutographic visualization shares a space with other visualization models concerned with physical information displays, embedded in physical environments and contexts of action~\\cite{willett_embedded_2017}. They can be seen in the tradition of ubiquitous computing and its explorations of tangible media, ambient and situated displays~\\cite{Weiser1991computer,Weiser1996Designing,IshiiTangible,Wisneski1998Ambient}. \n\nWithin the information visualization discourse, the field of data physicalization is closest to the concept of autographic visualization. Data physicalization investigates three-dimensional physical embodiments of information and their possible advantages for data communication and exploration~\\cite{jansen_opportunities_2015}. Unlike autographic visualization, however, physicalization (or physical visualization) is a data-first approach. As Jansen et al.\\ explain: \"Traditional visualizations map data to pixels or ink, whereas physical visualizations map data to physical form\"~\\cite{Jansen2013Evaluating}. Data physicalization aims to take advantage of the cognitive processes involved in examining, manipulating, and constructing three-dimensional objects that may not be accessible through visual observation of two-dimensional representations. The goal of data physicalization is therefore epistemological---supporting data analysis---while autographic visualization emphasizes ontological questions such as what constitutes a datum and how it relates to the world.\n\nBased on the Peircean concept of the index, indexical visualization presents a design space spanned by the dimensions of symbolic and causal distance~\\cite{Offenhuber2015Indexical}; the former describes the amount of symbolic mediation used to transform a phenomenon into a display, the latter the number of transformations in the causal chain. 
Despite its short causal distance, a simple seismometer involves a high degree of symbolic mediation; its line chart can no longer be connected to the phenomenon without knowledge of the process that created it. Conversely, an ambient display that mimics the outdoor sky based on weather data would have a short symbolic distance, but a long causal distance because of the complexity of the mediating apparatus. In place of these two dimensions, the concept of qualitative displays elegantly presents a one-dimensional measure of \"directness,\" describing the degree of intervention by a designer~\\cite{Lockton2017Exploring}. This dimension spans five different levels ranging from visual phenomena that are their own visualization to highly artificial data physicalizations at both extremes of the scale. The authors argue that visualization, so far, has been biased towards quantitative information while neglecting qualitative aspects.\n\nIndexical visualization and qualitative displays are both motivated by a gap in existing frameworks: the neglect of the index compared to icons and symbols in the former, the neglect of qualitative information in the latter. Both emphasize the embeddedness of visualizations in the physical world~\\cite{willett_embedded_2017}. Neither, however, fully captures the nature of analog visualizations of material information: Indexicality requires adhering to a semiotic framework that insists on explicating visual codes. The term qualitative display, on the other hand, seems overly broad as a descriptor of material displays. The term \\emph{autographic} addresses the main difference from information design, InfoVis, and data physicalization: the self-inscribing nature of material displays, in which the designer creates the apparatus that lets traces emerge rather than explicitly defining symbolic mappings. Areas of intersection exist: for example, data visualization software that generates and displays its own data from user interaction and therefore assumes autographic qualities, or projects such as \\emph{Dear Data}, where the signature of the author is considered as a trace~\\cite{lupi_dear_2016}.\n\nAutographic visualization continues the explorations into self-illustrating phenomena, first presented by Pat Hanrahan in his 2004 IEEE Vis capstone talk~\\cite{Hanrahan2004Self-illustrating}. Referencing a concept from the history of scientific representation~\\cite{Robin1992scientific}, Hanrahan focused on scientific experiments rather than the broader cultural field of visual practices. Autographic displays, however, are not limited to science but can be found throughout history and culture. The term autographic not only reflects the process of visualization and the role of the designer in this process but is also historically accurate, since the term was widely used during the late 19\\textsuperscript{th} and early 20\\textsuperscript{th}~century to describe self-inscribing mechanisms~\\cite{Siegel2014Forensic}. 
As reflected by a Google n-gram search, the terms \"autographic\" and \"self-registering\" saw their peak in the early 20\\textsuperscript{th}~century, where they show up in many patent applications for mechanical visualization devices and photographic techniques, before losing popularity later in the 20\\textsuperscript{th}~century.\\footnote{See~\\url{https:\/\/books.google.com\/ngrams\/graph?content=autographic\\%2Cself-registering&year_start=1800}}\n\n\\section{Autographic Design Operations} \n\n\\begin{table}\n\t\\caption{Overview of autographic design operations}\\label{designop}\\scriptsize\\centering\\medskip\n\t\\begin{tabular}\n\t\t{L{2cm} L{2cm} L{3.2cm}} Objective&Operations&Description\\\\\n\t\t\\midrule Establishing perceptual space~\\cite{Latour1990Visualisation,Lynch1985Discipline} & Framing& Establishing a perceptual context to isolate a phenomenon~\\cite{Bateson1972Steps,Berger2008Ways}\\medskip\\\\\n\t\t&Constraining&Isolating a single quality by constraining other qualities~\\cite{Rheinberger1997Toward,Peirce1998Essential}\\medskip\\\\\n\t\tTuning scale and intensity&Aggregating&Making visible by aggregating material~\\cite{Rheinberger1997Toward,Latour1999Pandora's}\\medskip\\\\\n\t\t&Separating&Making visible by separating material~\\cite{Rheinberger1997Toward,Daston2007Objectivity}\\medskip\\\\\n\t\tTrace-making& Coupling & Making visible by allowing the phenomenon to interact with another substance~\\cite{Rheinberger1997Toward,Daston2007Objectivity}\\medskip\\\\\n\t\t&Registering&Creating a persistent trace~\\cite{Latour1990Visualisation,Daston2007Objectivity,Siegel2014Forensic,Goodman1968Language}\\medskip\\\\\n\t\tMeasuring and Interpretation& Annotating & Adding graphical elements to guide the interpretation~\\cite{Bredekamp2015technical,Wulfingen2017Traces:}\\medskip\\\\\n\t\t& Encoding & Adding a scale for discretizing a phenomenon~\\cite{Goodman1968Language,Cole2002Suspect}\\\\\n\t\\end{tabular}\n\\end{table}\n\nThe production of interpretable traces is facilitated by cultural techniques that involve various degrees of intervention. In the most simple case, an environmental trace presents itself to a skilled observer. At the other end of the spectrum, traces are the product of a complex experimental apparatus involving many transformations. Along this continuum, the engagement with traces can be articulated as a design process that comprises a set of operations to turn a phenomenon into encodable data. The designer has to decide which aspect of a phenomenon can be used as an indicator and proceed to apply different operations that make this indicator legible. \n\nThe visual vocabulary of information visualization is formalized in schemata ranging from the foundational concept of visual variables to the grammar of graphics, organized by data structure and user needs~\\cite{Bertin1983Semiology,Shneiderman1996eyes,Card1997structure,Wilkinson2005grammar,Wickham2010Layered}. A taxonomic approach that categorizes trace-phenomena into visual variables seems impractical and would introduce another level of symbolic representation. Instead, our approach focuses on the design operations involved in autographic design (Fig.~\\ref{collage}). Table~\\ref{designop} provides an overview of these operations, grouped by the kinds of transformations they achieve. Literature categorizing traces exists in domain-specific areas, from the forensic analysis of crash skid marks~\\cite{Struble2013Automotive} to the identification of animal tracks~\\cite{Liebenberg1990art}. 
But to our knowledge, there are no overarching accounts that generalize the visual operations of trace-making across disciplines. The following taxonomy is an attempt to this effect. \n\nThe construction of the proposed autographic design space involved multiple steps. The fundamental concepts were drawn from theoretical literature, including history of science~\\cite{Daston2007Objectivity}, theory of scientific representation~\\cite{Latour1990Visualisation,Coopmans2014Representation}, and other perspectives on the ontology and epistemology of the trace~\\cite{derrida_grammatology_2016,Kraemer2007Spur,Rheinberger2011Infra-experimentality}. We also included professional literature from fields concerned with preparing traces, especially in medicine and the forensic sciences~\\cite{Struble2013Automotive,Weizman2017Forensic,Cole2002Suspect}. \n\nThe reviewed literature made clear that the goals of trace-making diverge across fields. While the natural sciences are interested in the generalization of the phenomenon behind the trace, forensic science is striving for individualization: finding what differentiates a particular object from all other things in the world, e.g., the gun that fired a bullet. However, despite these different objectives, the practices of identifying, preparing, and transcribing a trace share more similarities than differences. The review, therefore, focused on practices more than the underlying intent. \n\nThe next step involved the collection and analysis of 800 examples of traces and techniques of trace-making in the broadest sense.\\footnote{For reference~\\url{https:\/\/www.pinterest.com\/dietmaro\/autographic-visualization}} These examples were examined considering the theoretical concepts identified earlier and used to reflect on these concepts. To include autographic devices such as the sundial or the Cyanometer (Fig.~\\ref{collage}a), we expanded the definition of the trace from the narrow meaning of a persistent imprint~\\cite{Kraemer2007Spur} to a broader definition that includes ephemeral phenomena such as shadows or sound.\n\nThe resulting design space is grouped into four sections that loosely correspond to the steps involved in trace-making: 1.\\ frame the context in which the phenomenon can emerge, 2.\\ adjust the intensity of a phenomenon to make it intelligible, 3.\\ register the trace phenomenon and make it persistent, and 4.\\ annotate and encode it into data. The steps are not strictly sequential (steps 2 and 3 are sometimes skipped), so the term \"pipeline\" does not seem appropriate, but they are generally traversed in one direction. The four steps synthesize and simplify operations discussed in the literature under technical image production and \\emph{chains of representation}~\\cite{Bredekamp2015technical,Latour1999Pandora's}.\n\n\\subsection{Establishing Perceptual Space}\n\nIn an environment saturated with latent information, the first step involves defining the space and context for the autographic visualization and thus offering a scaffolding for its reception. This problem rarely emerges in InfoVis, where charts are usually recognized as such and appear in familiar contexts such as newspapers, websites, or exhibitions. But to facilitate decoding, also traditional visualizations have to define a spatial reference system and clarify the domain covered by the data~\\cite{Lynch1985Discipline}. 
Framing and constraining are two families of operations that focus the attention, isolate the phenomenon from its background, and offer a reference for comparison.\n\n\\paragraph{Framing}As the most fundamental autographic operation, framing circumscribes the perceptual space of the visualization. Framing guides the attention to a particular quality of a phenomenon while masking the many other qualities that are not considered relevant. Framing manipulates the context of a phenomenon without touching it. Nevertheless, framing determines how the phenomenon presents itself, shaping the qualities of the display. Framing can be illustrated through the Cyanometer, a historical device for measuring the blueness of the sky consisting of a numbered color scale with a hole at the center~\\cite{Saussure1791Description}. The frame separates the color as the quality of interest from its surrounding context and simultaneously allows constructing a different context that allows comparison or measurement~(Fig.~\\ref{collage}a). Framing is also a rhetorical strategy and, as such, omnipresent in information design and visualization. In communication theory, framing describes a form of meta-communication that places a message into an existing interpretative context~\\cite{Bateson1972Steps}. \n\n\\paragraph{Constraining}As a stronger form of framing, constraining involves physically manipulating the phenomenon. Constraining isolates a particular quality of a phenomenon from all others, but unlike framing it makes it observable by physically limiting the degrees of freedom of other qualities and behaviors. As an example, a glass thermometer allows a liquid to expand with temperature, but only in a single direction and by amplifying the expansion through the diameter of the tube~(Fig.~\\ref{collage}b). As a sentinel species, the proverbial canary in the coal mine is constrained in a cage, so that its demise can alert miners of dangerous gases in the surrounding atmosphere. Constraining can be compared to similar interaction techniques in data visualization that control for changes in a specific variable by keeping the others constant.\n\n\\subsection{Tuning Scale and Intensity} \nAfter a perceptual space is established, the phenomenon might still be invisible because it is too faint or too intense, too large or too small, too fast or too slow to be perceived. The second group of autographic operations is therefore aimed at tuning the scale, speed, and intensity of a phenomenon to the gamut of human perception. Hans-J\u00f6rg Rheinberger goes as far as describing compression and dilatation as the two fundamental procedures of scientific experimentation~\\cite{Rheinberger2011Infra-experimentality}. In the following, aggregation refers to operations that compress a phenomenon in space, time, and magnitude, while separation achieves the opposite. In data visualization, the two operations are equivalent, usually accomplished by tweaking visual variables. In autographic space, however, aggregation and separation are quite different in terms of operations and levels of complexity. We therefore discuss them separately.\n\n\\paragraph{Aggregation}A straightforward way to amplify the visual intensity of a material substance is to aggregate the substance over time until visual differences become apparent. Air-borne particulate matter is invisible but reveals itself in the filter of a dust mask worn over an extended period in polluted air. 
An example of spatial compression involves the collection of soil samples from a larger territory, which can then be organized into a grid that serves as a compact visualization of the territory~(Fig.~\\ref{collage}i). Aggregating material under controlled conditions is at the core of many methods of sensing and measurement, such as the gravimetric measurement of particulate matter~(Fig.~\\ref{collage}c). InfoVis methods designed to reveal patterns based on aggregation include scatter plots and heat map displays.\n\n\\paragraph{Separation}Often the opposite is necessary, untangling a material mixture and spatially separating it into its components based on their physical properties. A prism separates white light into its different wavelengths. In paper chromatography, the components of an ink blot on a piece of paper dipped into water are separated by the force of capillary action. DNA Electro-chromatography works by the same principle, separating fragments of DNA embedded in a gel driven by the force of an electric field~(Fig.~\\ref{collage}d). The analytical separation of multiple correlated variables is a central task in InfoVis, addressed in methods such as scatterplot matrices, parallel coordinate displays, or faceting.\n\n\\subsection{Trace-making techniques}\nMany phenomena of interest are non-visual but can be visualized through their interaction with certain substances and processes. Other visual phenomena are ephemeral; operations of marking and tracing can help to preserve traces and create a persistent record. The operations under this rubric include most analog visualization techniques, which are historically and visually linked to the contemporary languages of data visualization. \n\n\\paragraph{Coupling} links an invisible phenomenon to a second phenomenon that serves as a visible indicator or proxy. Wind itself may be invisible, but reveals itself in the movement of grass; smoke injected into a wind tunnel serves the same purpose~(Fig.~\\ref{collage}e). Many archaic skills of farming, hunting, or navigating rely on observing such proxy indicators linked with the phenomenon of interest. Coupling can involve adding a tracer substance, such as color dyes to visualize the movement of liquids. 18\\textsuperscript{th}~century experimentalist Ernst Chladni visualized the shape of sound-waves by adding sand on a metal plate struck by a violin bow~(Fig.~\\ref{collage}g). Mechanical instruments can be designed to respond to a phenomenon with visible changes, as illustrated by pressure gauges or meteorological instruments. By translating the phenomenon into changes in a defined visual layout, such instruments provide a bridge into the space of symbolic representation. \n\n\\paragraph{Registering or marking} is a stronger form of coupling that aims to create a permanent trace---a cast from a footprint, the groove of a vinyl record, or the photochemical reaction in an exposed photograph. Tracer substances can also be used to create a permanent trace, including dyes to reveal structure in biological specimen, radioactive tracers used in radiology, or powder to reveal latent fingerprints. Campbell-Stokes' sunshine recorder serves its literal purpose through a spherical lens that burns a linear trace into a paper strip~(Fig.~\\ref{collage}f). Registration creates not only a spatial but also a temporal record of a phenomenon. 
James Watt's indicator mechanism, producing a two-dimensional line chart of steam pressure over piston displacement, became the first of many self-registering devices, including the black boxes used in aviation~\\cite{Siegel2014Forensic}. Real-time data visualizations from sensor input are, to some extent, autographic visualizations.\n\n\\subsection{Measuring and Interpretation}\\label{encode}\nThe last step in the design of autographic visualizations involves interpreting the trace and making it comparable to other traces. This step can take many different forms, from manual annotations of traces during the analysis and visual guides for non-expert viewers to scales for encoding the trace into symbolic data.\n\n\\paragraph{Annotating} Graphical annotations are frequently found where traces and records of measurements are interpreted, creating a hybrid of graphic and autographic displays~(Fig.~\\ref{collage}h). Such scribbles can themselves be seen as traces of a thought process or collective discourse. Annotations and legends can also guide the attention and support the discovery of traces where implicit framing is not sufficient. Museum displays of archaeological artifacts often include abstracted representations that point out significant features of the object. In all of these examples, annotations serve largely identical purposes in information design and autographic visualization.\n\n\\paragraph{Encoding} represents the last step in the process of translating a phenomenon into data. Encoding begins by marking different conditions over time, registering observations~\\cite{lupi_dear_2016}. A systematic application of such marks becomes a scale that allows encoding the phenomenon into discrete elements~(Fig.~\\ref{collage}j). Fingerprints became a viable means of identification only after an efficient system of encoding their intricate appearance into a sparse sequence of symbols was developed~\\cite{Cole2002Suspect}. In the operation of encoding, an analog system becomes a digital system. Nelson Goodman describes a pressure gauge as an analog system if the marks on the gauge face are used for mere orientation, but it becomes a digital system once only the discrete intervals are considered~\\cite{Goodman1968Language}.\n\n\\section{Autographic Systems} \nThe design operations described in the previous section form the basic vocabulary of autographic visualization. Revealing a phenomenon, however, typically requires the application of several operations, combined in an \\emph{autographic system}. Most examples given earlier, when applied in practice, form autographic systems of various complexity, comprising a system of operations for framing, tuning, and recording a trace and making it measurable. A set of design operations can either be deployed in parallel to create a controlled environment to observe a phenomenon in isolation, or sequentially as a series of material transformations to make an invisible phenomenon accessible to the senses. While desire paths may appear accidental and free of design intent, complex autographic systems can be highly artificial displays that represent through analogies---as in the case of MONIAC, a hydraulic model meant to simulate economic flows~\\cite{bissell_historical_2007}. In the following, three common types of autographic systems will be discussed. 
The list in Table~\\ref{autosys}, neither exclusive nor exhaustive, includes experimental apparatuses, analog visualizations and simulations, hybrid systems using digital and analog components, and new materials with intentionally designed properties and behaviors.\n\n\\begin{table}\n\t\\caption{Autographic systems}\\label{autosys}\\scriptsize\\centering \n\t\\begin{tabular}\n\t\t{L{3cm} L{5cm}} Autographic systems&\\\\\n\t\t\\midrule Autographic environments&Systems that combine operations to generate traces under controlled conditions\\medskip\\\\\n\t\tDigital\/physical systems&Coupling analog and digital systems\\medskip\\\\\n\t\tAutographic materials&Encoding behavior into smart materials or synthetic organisms\\\\\n\t\\end{tabular}\n\\end{table}\n\n\\subsection{Autographic Environments} \nAutographic environments are environments in which a phenomenon can unfold, largely isolated from external influences. Operations such as \\emph{framing, constraining, coupling,} and \\emph{registering} are mobilized to turn a phenomenon into a usable trace under controlled conditions. The wind tunnel is a simulated environment isolated from its surroundings. It includes mechanisms for producing traces (e.g., through smoke or silk strings), for observing and recording them. Another example is the large bacterial growth area in Fig.~\\ref{antibiotic} to study the adaptation of bacteria to antibiotic environments. Autographic environments are analog computers and visualization systems, allowing us to perform the same visualization tasks on different inputs and observe the results.\n\n\\begin{figure}\n\t[htb] \\centering \n\t\\includegraphics[width=3.45in]{antibiotic.png} \n\t\\caption{Autographic environment: gel with graduated antibiotic presence to observe microbial evolution towards antibiotic tolerance~\\cite{Baym2016Spatiotemporal}}\\label{antibiotic}\n\\end{figure}\n\n\\subsection{Digital\/Analog Systems} \nDigital\/analog systems are a special case of autographic environments, which utilize both digital and analog forms of computation for controlling a phenomenon. The digital\/analog coupling can happen in three different ways. First, the physical conditions in the autographic environment can be computationally controlled to achieve different results. Another possibility is to digitize the outputs of the apparatus and subject them to further computational analysis. The third possibility is to couple digital and analog processes in a dynamic feedback loop, creating a hybrid, autopoietic system. Since, as discussed earlier, digital sensors and circuits process material information, the line between digital and analog components, discrete logic and continuous feedback, is somewhat blurry. Fig.~\\ref{cloud} shows an example of an autopoietic digital\/analog system, a cloud chamber controlled by a digital algorithm aiming to sculpt the shape of generated clouds.\n\n\\begin{figure}\n\t[htb] \\centering \n\t\\includegraphics[width=3.45in]{cloud.png} \n\t\\caption{Digital\/physical system: Clemens Winkler. Per-forming clouds. 2018 \u2013 art project attempting to create rectangular clouds~\\cite{Winkler2018Per-Forming}}\\label{cloud}\n\\end{figure}\n\n\\subsection{Autographic Materials} \nThe last group of autographic systems avoids the complexity of digital\/analog apparatuses and aims to develop materials and biological organisms with truly autographic properties~\\cite{telhan_designature_2016}. 
The concept of \\emph{4D printing} investigates geometries and materials that dynamically respond to environmental changes or possess the capacity for self-assembly~\\cite{tibbits_4d_2014}. Autographic materials also include work in synthetic biology with modified bacteria that react to environmental changes with perceptible changes of, for example, color or smell (Fig.~\\ref{echromi}).\n\\begin{figure}\n\t[htb] \\centering \n\t\\includegraphics[width=3.45in]{chromi.png} \n\t\\caption{Autographic material: Daisy Ginsberg, James King, E. chromi, 2009 \u2013 color-changing bacteria to detect gut diseases, a speculative design project~\\cite{Ginsberg2009E-Chromi}}\\label{echromi}\n\\end{figure}\n\n\\section{Rhetoric Techniques of Autographic Visualization} \nConsidering the pervasive availability of digital information and the extent to which our experience is shaped by it, are there compelling reasons, beyond nostalgia for the analog, to engage with the slow, ambiguous, and bespoke domain of material displays? The beginning of this paper presented the argument that autographic visualizations allow viewers to experience the phenomenon behind the data and render legible the circumstances of data collection. Traces are not representations; they present themselves. This argument, however, deserves more scrutiny. Traces are often equated with incontestable evidence. As unintentional side-products of past processes and events, material traces are considered trustworthy, and their display can achieve a persuasive effect that is difficult to attain with digital representations. However, as discussed earlier, traces neither speak for themselves nor are they recognized by everyone. The persuasiveness of traces is shaped by social and cultural processes, as illustrated by the slow acceptance of fingerprinting and DNA identification~\\cite{Cole2002Suspect}. Traces require interpretation, and their interpretation relies on assumptions and often speculation. The most apparent patterns are often misleading. Interpretation, therefore, requires a rhetoric scaffolding for integrating material displays into a framing argument. The following section discusses examples of autographic design principles as they are used in practice. In all of these cases, material traces are used as rhetorical devices, as visual arguments that support specific claims. \n\n\\subsection{Evidentiary Aesthetics in Citizen Science}\nTufte's imperative of information visualization to \"above all else show the data\" becomes \"above all else move closer to the phenomenon\" in autographic visualization. The tactic to enable the sensory experience of causality is often found in the domain of citizen science~\\cite{Snyder2017Vernacular}. Groups who investigate issues of environmental pollution, in particular, are often met with skepticism of the data they collect\u2014whether their methods are rigorous, their instruments accurate, or their biases reflected in data collection. Many grassroots scientists lack institutional affiliations and scientific credentials, which often makes their data sets seem less trustworthy in public perception. In such an environment, the evidentiary aesthetics of a raw trace can be a more effective way to generate trust than a well-designed chart using canonical visualization principles.\n\n\\begin{figure}\n\t[ht] \\centering \n\t\\includegraphics[width=3.45in]{wylie.png} \n\t\\caption{Map of photo strips tarnished by H2S emanation on the field site, Public Lab. 
Registering, annotating~\\cite{Wylie2017Materializing}}\\label{h2s}\n\\end{figure}\n\nThe Public Lab is a grassroots science collective investigating environmental pollution and the social impacts of oil spills and fracking, issues that in their view do not receive enough attention from environmental agencies~\\cite{Wylie2017Materializing}. The group has developed a DIY method using photo paper to detect harmful hydrogen sulfide gas emanating from the ground in proximity to fracking sites. The maps presenting their results incorporate an arrangement of small pieces of the original, stained photo papers over an abstracted representation of the landscape (Fig.~\\ref{h2s}). The grid of samples not only shows a visual pattern similar to a heat map indicating the locations of highest exposure but also suggests physical circumstances to the uninitiated viewer: that a chemical reaction that stains photo paper is associated with these places, implying the papers were actually exposed at the indicated locations (a rhetorical claim that cannot be verified using the map). In combination with an encoded data set, the trace map serves as an illustration and justification of the method. But the autographic map is not only directed outwards towards a skeptical audience but also inwards at the group's own collaborators. The modes of data collection are often participatory and depend on the engagement of volunteers and members of the affected community. To this end, data collection is staged as a public experiment; the physical traces serve as rhetorical devices to make the nature of pollution tangible for collaborators, and the results of their voluntary efforts visible. It may be a lucky accident that the pollutant leaves a noticeable and reproducible trace that can be used to visualize environmental harms. Autographic design involves discovering such opportunities and building an explanatory framework around a suitable indicator. The first rhetoric strategy involves choosing a presentation that suggests immediacy and direct causal connection over more abstract representations. In the case of Public Lab's grassroots science, the presentation of causality is not just concerned with data veracity, but, more importantly, with the methods and practices of its researchers. \n\n\\subsection{Performative Mapping and Annotated Walkthroughs}\\label{performative}\nThe second rhetoric strategy involves guiding the audience through the causal chain, helping them to \"connect the dots\" and perform the analysis themselves. Traces imply causality, but the immediate cause is absent from the trace\u2014what is left is an imprint. Annotated walkthroughs put the traces that are considered relevant next to each other and allow the recipient to explore the latent connections. \n\nThe example to illustrate this approach is based on digital information---satellite images, video footage, and other media formats read through a material-forensic lens, highlighting the indexical and material \"residues\" in digital data. In recent years, an active community of conflict mappers and amateur forensic experts has emerged that analyzes and compares social media content from conflict regions; it gained prominence during the Syrian civil war~\\cite{Kurgan2017Conflict,Weizman2017Forensic}. In the first major military conflict in which social media played a decisive role, all adversaries made extensive use of platforms such as YouTube or Twitter to disseminate footage from the frontlines recorded by drones, smartphones, and body-cams. 
Social media served not only propagandistic purposes but also as a backchannel for reporting military success to foreign donors, which in some cases even involved staging fake battles~\\cite{Atwan2015Islamic}.\n\n\\begin{figure}\n\t[htb] \\centering \n\t\\includegraphics[width=3in]{syria.png} \n\t\\caption{Amateur visual forensics, Institute for United Conflict Analysts \u2013 framing, annotating (IUCA)~\\cite{IUCA2016Syria}}\\label{syria}\n\\end{figure}\n\nThe community of conflict mappers took it upon themselves to verify claims of the warring parties by geo-referencing buildings and locations shown in the videos and placing the events into a temporal sequence in order to track the shifting frontlines~\\cite{Offenhuber2017Maps}. As in the case of the grassroots scientists, the conflict mappers are not represented by certified institutions, and therefore choose their visual displays to address questions of credibility. For their frontline analysis, conflict mappers have created a specific format of display. It does not synthesize information from individual sources into a consistent cartographic language but arranges snippets from the raw sources. Elements in this tableau are annotated with simple shapes, indicating the same building or location from various angles in smartphone footage and satellite images. In the example featured in Fig.~\\ref{syria}, conflict mappers countered the claim of the Syrian military to have captured a particular city by demonstrating that the shown location is not inside the city, but rather on its outskirts. To interpret these displays, the recipient is forced to do the work of the cartographer, judging whether the highlighted elements are the same building or whether they were shot at the same time of day. This rhetoric strategy has been described as non-representational or performative cartography~\\cite{Kitchin2007Rethinking}. The same strategy of guiding the viewer by juxtaposing elements, implying connection through spatial proximity, and highlighting relevant aspects can equally be applied to material traces, as in museum displays of archaeological artifacts. The displays of the conflict mappers try to persuade not by encoding information into visual variables, but by emphasizing the authenticity of the sources and inviting the viewers to \"see for themselves.\" Their use of publicly available tools such as Google Earth without bothering to modify their visual defaults underscores this invitation, as if to say that anyone can conduct this investigation and arrive at the same conclusion. \n\n\\subsection{Sensory Accountability---Exploring Data Materiality}\nThe third rhetoric strategy contextualizes digital data and computational models with material displays to explore their grounding in physical reality. A considerable number of artists have taken on the task of visualizing the materiality of climate change and environmental pollution through situated and material displays. Proxy data sources play, implicitly or explicitly, a major role in this genre: from visceral explorations of the rich material qualities of ice core samples and arctic ice to the instrumentalization of bioindicators and sentinel species---plants and animals that are especially sensitive to particular conditions and can serve as environmental sensors. 
Many of these projects not only comment on environmental phenomena such as climate change, but also on the aesthetic and ontological dimensions of scientific measurement: what is it that is measured, which qualities are captured or overlooked, and what are the political underpinnings of how a problem is articulated and operationalized. \n\n\\begin{figure}\n\t[htb] \\centering \n\t\\includegraphics[width=3.45in]{staub.png} \n\t\\caption{D. Offenhuber, Staubmarke \u2013 reverse graffiti washed into concrete, calling attention to air pollution, aggregating, encoding~\\cite{Offenhuber2018Dust}}\\label{staub}\n\\end{figure}\n\nIn public controversies around air pollution stemming from particulate matter and ozone, the discourse among actors with conflicting positions often quickly converges on technicalities of measurement such as appropriate threshold values and exposure times. The work of grassroots science initiatives thus has a strong political dimension: challenging conventional protocols of measurement by revealing what is not shown by them. Their perspective of data collection values richness and completeness over accuracy; aims to capture the implications of pollution on the lives of individuals and communities even if the accuracy of the cheap sensors used in these projects is \"good enough\" rather than perfect. In this context, material displays can be used to call attention to the basic assumptions of environmental sensing and their political implications. We use the term \"sensory accountability\" to describe approaches that call attention to the complex material qualities of a phenomenon that is often reduced to a single quantitative dimension. As an example, the project \"Staubmarke\" (Fig.~\\ref{staub}) visualizes air pollution by applying visual markers on urban surfaces to make accumulations of particulate matter legible. These markers are executed as \\emph{reverse graffiti}, which are based on selectively cleaning dirty surfaces rather than applying paint. With ongoing pollution, the markers will fade over time, starting with the most delicate textures in the pattern. The markers were applied at locations also surveyed by the sensors installed by a citizen sensing initiative, allowing the comparison between data values and physical appearance of the markers. Sensory accountability is a critical inquiry into sensing methods: what is captured by the sensor, what is ignored, and how do the recorded values correspond with sensory phenomena. Highlighting changes in the physical environment and contextualizing these changes with corresponding data offers an interesting space for visualization.\n\n\\section{Discussion---Reviving the Public Experiment}\nPhysical expressions of data currently enjoy burgeoning interest, from the popularity of data physicalization in design and education to the wide range of artistic projects focusing on data and the environment. One can make the argument that this is not merely a short-lived design trend, but an expression of a new sensibility for the relationship between digital data and the world around us. The current fascination with materiality in design and the humanities coincides with renewed critiques of the central role of data in society and a discomfort with the explicit encoding of the world into symbolic categories. Fields such as critical data studies probe the assumptions behind data collection, the politics of categorization, and the opacity of algorithmic decision systems. 
Research on the interpretability of machine learning and algorithmic decision-making models is currently thriving in information science and information visualization. Autographic visualization also resonates with the recent resurgence of analog computing, advances in microfluidics and synthetic biology~\\cite{Ulmann2013Analog,Whitesides2006origins}. The disconnect between science and public opinion in the issue of climate change and the role of climate data as a proxy-site to negotiate political positions further demonstrate the need to probe the material foundations of data generation. Disinformation and the phenomenon fake news also provide reasons why it is crucial to investigate data generation through a forensic-material lens, based on the premise that \"no two things in the physical world are ever exactly alike\"~\\cite{Kirschenbaum2008Mechanisms:}, which extends to digital camera sensors and hard drives. We argue that these issues also justify a shift in the agenda of visualization from the patterns inside data to the conditions of data generation. Surely, misleading patterns and false narratives can be sufficiently addressed with better explanations and more nuanced visualizations that are contextualized with scientific arguments. But the original impetus of visualization has always been to show rather than tell, to produce images that say more than a 1000 words. Autographic visualization continues this project by extending information visualization beyond the space of symbolic encodings into the spaces where data take shape. The practices of grassroots science offer an interesting model that connects to the history of public experiments during the enlightenment period, where natural phenomena were explained through spectacular public demonstrations~\\cite{Nieto-Galan2016Science}.\n\n\\section{Limitations of Autographic Visualization}\nMany trace-phenomena have charismatic qualities; they fascinate and invite exploration while remaining elusive. As Michelangelo Antonioni's movie \"Blow Up\" illustrates, as one gets closer to a trace, the meaning seems to disappear: through successive magnification, a candid photograph reveals a murder scene but eventually dissolves into ambiguous patterns.\n\nOne has to avoid a na\u00efve empiricism that uncritically elevates trace-reading over theoretical inquiry as a source of knowledge. Traces often seem to inspire attributions of meaning even when there is none. With regards to trace-reading, science and superstition are often uncomfortably close~\\cite{Ginzburg1979Clues,butz_superstition_2007,huxley_method_1880}. As previous sections have pointed out, the charisma of traces can be exploited for rhetorical purposes by selectively curating, framing, and guiding the process of interpretation, and autographic visualization is not different from any other visual practice in this regard.\n\nThe strongest limitations are encountered on the practical level. Compared to data visualization, the creation of autographic visualizations is slow and limited by the available material. Autographic visualizations lack the agility, versatility, and potential scale of computational analysis. Where data visualization is limited by the gap between data and physical phenomenon, autographic visualization cannot compete with the full scope of computational possibilities. \n\n\\section{Conclusion}\nThis paper introduces the concept of autographic visualization, which examines the discovery and preparation of material traces. 
It offers a preliminary systematization of the design space of autographic display including its design operations, their combination into composite autographic systems, and the rhetorical strategies of presenting material traces. Autographic visualization considers the structures in the physical world as a form of data and thus serves as a speculative counter-model to data visualization, which is limited to the space of symbolic representation. \n\nIn the tradition of Marey's \\emph{graphical method}, which encompasses both the production of traces and the graphical display of data, we argue that InfoVis and autographic visualization can complement each other. Expanding the scope of visualization beyond symbolic data would open fertile areas for research, addressing questions such as: how do we see traces, and how do these perceptions relate to individual knowledge and skills? J.J. Gibson's theory of affordances~\\cite{Gibson1979ecological} has been foundational for the entire field of HCI but has been so far mostly operationalized from a functional perspective, without further attention to the phenomenology of affordances. In this context, also the aesthetics of experience and its relationship to knowledge construction (\\emph{aisthesis}) deserve attention.\n\nBeyond academic questions, what is the place of autographic approaches in visualization practice? Artists and citizen scientists provide examples that show how the immediacy, directness, and richness of material information can be utilized. Material traces serve as visual means of evidence construction: public experiments turn data collection from a bureaucratic exercise into a sensory experience of causality; physical data proxies make abstract climate models and their predictions relatable and observable.\n\nAutographic visualization aims not just to bridge the gap between data and phenomenon, but also the one between observer and display. Designers can no longer blindly rely on normative conventions on how to visualize data for an idealized, data-literate audience. In the space of material information, the observer becomes an experimenter, having to actively construct evidence by connecting the dots. Autographic visualization is therefore a critical practice in the sense that it de-naturalizes the concept of data and its underlying assumptions.\n\n\\acknowledgments{My gratitude goes to my long-term collaborators Orkan Telhan and Gerhard Dirmoser for their ideas and inspiring discussions. I also would like to thank the reviewers for their close attention and helpful feedback. }\n\n\\bibliographystyle{abbrv}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe theory of modulated structures in macroscopic ferromagnets \\cite%\n{dzialoshin}, ferroelectrics \\cite{levaniuk} and metal alloys\n\\cite{hachatur} has a rich history. The number of works dedicated\nto a study of the modulated structures in small samples is\nsignificantly less \\cite{char}. The standard approach to the\ntheory of the modulated structures (including the incommensurate\nones) uses the constant order parameter amplitude function\napproximation \\cite{levaniuk}, \\cite{izium}. Our consideration of\nthe bounded sample (thin film, in particular) requires us to drop\nthis approximation: the order parameter must satisfy some boundary\nconditions and it is unlikely to find a solution with constant\norder parameter satisfying these conditions. 
That is why we have\nto consider a generic case of connected set of the nonlinear\nequations for the order parameter amplitude and phase functions.\nWe will show here that the solutions of this system are\nsignificantly more complicated, than in the constant order\nparameter approximation.\n\n\\section{Free energy and the equations for the order parameter for a\nferroelectric with the Lifshitz invariants}\n\n\\qquad The Landau free energy functional reads \\cite{izium}:%\n\\begin{eqnarray}\n\\Phi & =\\int dz\\left\\{\n-r(\\eta_{1}^{2}+\\eta_{2}^{2})+u_{1}(\\eta_{1}^{2}+\\eta_{2}^{2})^{2}+u_{2}(%\n\\eta_{1}^{2}\\eta_{2}^{2})\\right\\} + \\notag \\\\\n& +\\int dz\\left\\{ \\sigma\\left( \\eta_{2}\\frac{\\partial\\eta_{1}}{\\partial z}%\n-\\eta_{1}\\frac{\\partial\\eta_{2}}{\\partial z}\\right) +\\gamma\\left[ \\left(\n\\frac{\\partial\\eta_{1}}{\\partial z}\\right) ^{2}+\\left( \\frac{\\partial\n\\eta_{2}}{\\partial z}\\right) ^{2}\\right] \\right\\} , \\label{free}\n\\end{eqnarray}\nwhere $\\eta_{1}$ and $\\eta_{2}$\\ are the order parameter components; we\nconsider only one-dimensional configurations; parameters $r,$\\ $\\sigma$, $%\nu_{1},u_{2}$ and $\\gamma$ are the Landau free energy expansion coefficient.\nIntroducing the amplitude and phase variables\n\n\\begin{equation*}\n\\eta_{1}=\\rho\\cos\\varphi,\\eta_{2}=\\rho\\sin\\varphi,\n\\end{equation*}\nwe obtain the following expression for the Landau free energy:%\n\\begin{eqnarray}\n\\Phi=\\int dz \\{ -r\\rho^{2}+u\\rho^{4}+w\\rho^{n}(1+\\cos n\\varphi) \\notag \\\\\n-\\sigma\\rho^{2}\\frac{\\partial\\varphi}{\\partial z}+ \\gamma\\left[ \\left( \\frac{%\n\\partial\\rho}{\\partial z}\\right) ^{2}+\\rho^{2}\\left( \\frac {\\partial\\varphi}{%\n\\partial z}\\right) ^{2}\\right] \\} . \\label{free2}\n\\end{eqnarray}\nVarying the free energy we obtain the equilibrium equations \\cite{izium}%\n\\begin{eqnarray}\n-r\\rho+2u\\rho^{3}+\\frac{n}{2}w\\rho^{n-1}(1+\\cos n\\varphi)+\\gamma\\rho (\\frac{%\n\\partial\\varphi}{\\partial z})^{2}- \\notag \\\\\n\\gamma\\frac{\\partial^{2}\\rho }{\\partial z^{2}}-\\sigma\\rho\\frac{%\n\\partial\\varphi}{\\partial z}=0, \\label{ampeq}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\gamma\\rho^{2}\\frac{\\partial^{2}\\varphi}{\\partial z^{2}}+2\\gamma\\rho \\frac{%\n\\partial\\rho}{\\partial z}\\frac{\\partial\\varphi}{\\partial z}-\\sigma \\rho\\frac{%\n\\partial\\rho}{\\partial z}+\\frac{n}{2}w\\rho^{n}\\sin n\\varphi=0.\n\\label{phaseeq}\n\\end{eqnarray}\n\nHere $n$ is an integer number describing the system symmetry. Now we\nintroduce the dimensionless variables:%\n\\begin{eqnarray}\n\\rho=\\sqrt{\\frac{r}{2u}}R,\\xi=z\\sqrt{\\frac{r}{\\gamma}},\\frac{dR}{dz}=\\frac {%\ndR}{d\\xi}\\sqrt{\\frac{r}{\\gamma}}, \\notag \\\\\n\\frac{\\sigma}{\\sqrt{\\gamma r}}=T,u^{1-\\frac{n}{2}}nwr^{\\frac{n}{2}-2}2^{-%\n\\frac{n}{2}}=K. \\label{dimlessvariables}\n\\end{eqnarray}\nThen the equations (\\ref{ampeq}) and (\\ref{phaseeq}) take the form%\n\\begin{eqnarray}\nR^{\\prime\\prime}-R^{3}+(1-\\varphi^{\\prime2}+T\\varphi^{\\prime})R- \\notag \\\\\nR^{n-1}K(\\cos n\\varphi+1)=0, \\label{ampeqdimless}\n\\end{eqnarray}\n\\begin{eqnarray}\n\\varphi^{\\prime\\prime}+2\\frac{R^{\\prime}}{R}\\varphi^{\\prime}-\\frac{R^{\\prime}%\n}{R}T+R^{n-2}K\\sin n\\varphi=0. 
\\label{phaseeqdimless}\n\\end{eqnarray}\n\n\\section{Approximate analytic solution}\n\nLet us begin with the limit $K=0.$ Then the equation (\\ref{phaseeqdimless})\ncan be solved:%\n\\begin{eqnarray}\n\\varphi^{\\prime}\\equiv\\psi=\\frac{C_{0}}{R^{2}}+\\frac{T}{2},\n\\label{phasesolution}\n\\end{eqnarray}\nwhere $C_{0}=\\left[ \\psi\\left( 0\\right) -\\frac{T}{2}\\right] R\\left( 0\\right)\n^{2}$ is the integration constant, which is determined by the initial\nconditions $\\psi\\left( 0\\right) $ and $R\\left( 0\\right) $. Now we can\nsubstitute this expression for $\\varphi^{\\prime}$ into the equation (\\ref%\n{ampeqdimless}). As a result, we obtain a closed equation for the amplitude\nfunction $R$:%\n\\begin{eqnarray}\nR^{\\prime\\prime}-R^{3}+R(1+\\frac{T^{2}}{4})-\\frac{C_{0}^{2}}{R^{3}}=0.\n\\label{ampclosed}\n\\end{eqnarray}\n\nThis equation can be interpreted as a dynamics equation with the effective\npotential:%\n\\begin{eqnarray}\nU=\\frac{R^{2}}{2}(1+\\frac{T^{2}}{4})-\\frac{R^{4}}{4}+\\frac{C_{0}^{2}}{2R^{2}}%\n. \\label{effpot}\n\\end{eqnarray}\n\nAn interesting feature of this potential is its dependence on the initial\nconditions via the constant $C_{0}^{2}.$ There exists a domain of\nparameters where the potential has a minimum and, therefore, an oscillating\nsolution for the amplitude function can take place. The condition that the\nmaximum and minimum points merge into an inflection point with a\nhorizontal tangent, $U^{\\prime}\\left( R\\right) =U^{\\prime\\prime}\\left(\nR\\right) =0$, gives the equation\n\n\\begin{equation*}\nT^{6}+12T^{4}+48T^{2}+64-432C_{0}^{2}=0\n\\end{equation*}\n\nHowever, the presence of a minimum is a necessary but not sufficient condition:\nan oscillating solution will not exist if the initial point is situated\noutside the potential well.\n\nThe border line of the domain where oscillating solutions can exist is\ndepicted in FIG. 1:\n\n\\begin{center}\n\\includegraphics[\nheight=2in, width=1.5in, angle=270, trim = 0.1in 0.1in 0.1in 0.1in, clip\n]{cond.eps}\n\nFIG. 1. The border line of the domain where oscillating solutions can exist.\n\\end{center}\n\n\\bigskip\n\nIf we have a solution for the amplitude function $R(\\xi),$ the phase\nfunction $\\varphi(\\xi)$ is given by equation (\\ref{phasesolution}). The\nphase function consists of two terms: a slow contribution stemming from\nthe second term in (\\ref{phasesolution}) and more or less diffused jumps due\nto the first term in this equation. It is important that the jump value\nequals exactly $\\pi$:\n\nThe first integral of (\\ref{ampclosed}) reads $\\frac{dR}{d\\xi}=\\sqrt {2(E-U)%\n},$ so $d\\xi=\\frac{dR}{\\sqrt{2(E-U)}}.$\n\nNotice that the jump value $\\int\\frac{C_{0}}{R^{2}}d\\xi$ calculated in the\nvicinity of the $R(\\xi)$ minimum equals\n\\begin{eqnarray}\n\\Delta\\varphi & =\\int\\frac{C_{0}}{R^{2}}d\\xi=\\int\\frac{C_{0}}{R^{2}}\\frac {dR%\n}{\\sqrt{2(E-U)}}\\simeq2\\int_{R_{0}}^{R_{e}}\\frac{C_{0}dR}{R^{2}\\sqrt{2(\\frac{%\nC_{0}^{2}}{2R_{0}^{2}}-\\frac{C_{0}^{2}}{2R^{2}})}}= \\notag \\\\\n& =2\\int_{R_{0}}^{R_{e}}\\frac{dR}{R\\sqrt{\\frac{R^{2}}{R_{0}^{2}}-1}}%\n=2\\arccos\\left\\vert \\frac{R_{0}}{R_{e}}\\right\\vert \\simeq\\pi,\n\\label{phasejump}\n\\end{eqnarray}\nwhere $R_{e}\\gg R_{0}.$ We have used the first integral of\n(\\ref{ampclosed}) above. Only leading terms of $U(R)$ were taken\ninto account.
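\n\nAs an illustration, the closed equation (\\ref{ampclosed}) together with (\\ref{phasesolution}) can be integrated by any standard ODE solver; the following minimal Python sketch (given only for illustration, it is not the code used to produce the figures below) uses the parameters of FIG. 2:\n\\begin{verbatim}\n# integrate the closed amplitude equation (K = 0) and rebuild the phase\nimport numpy as np\nfrom scipy.integrate import solve_ivp\n\nT, R0, dR0, phi0, dphi0 = 1.0, 0.3, 0.0, 0.0, 0.3   # parameters of FIG. 2\nC0 = (dphi0 - 0.5*T)*R0**2                           # integration constant C_0\n\ndef rhs(xi, y):                                      # y = (R, R', phi)\n    R, dR, phi = y\n    return [dR,\n            R**3 - (1.0 + 0.25*T**2)*R + C0**2 * R**(-3),\n            C0 * R**(-2) + 0.5*T]\n\nsol = solve_ivp(rhs, (0.0, 40.0), [R0, dR0, phi0], max_step=1e-2)\n# sol.y[0] and sol.y[2] show the oscillating amplitude and the staircase-like\n# phase discussed above; the integration interval is chosen arbitrarily\n\\end{verbatim}\n\n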
Note that the model (\\ref{free2}), which can be\nreduced to the sine-Gordon model in the limit of $R=const$, admits\nsolutions with jumps exactly equal to $\\pm\\pi$ (topological charge\n\\cite{soliton}). However, we have obtained a similar\nresult within the approximation $K=0$, i.e. neglecting the cosine term in (%\n\\ref{free2})! As it is seen from the formulae \\ref{phasejump}, these phase\njumps appear due to the excursion of the amplitude function near the\nsingularity $\\frac{C_{0}}{R^{2}}$.\n\nThe equation (\\ref{ampclosed}) can be solved analytically, but we\nwill present below numerical solutions both for $K=0$ and\n$K\\neq0.$\n\n\\section{Numerical solution for K = 0}\n\nSome results of calculation of the amplitude and phase functions spatial\ndependence for the case of $K=0$ are presented below. Figures 2 -- 5 and 6\n-- 9 differ only by initial values of the amplitude function: in the case of\nfigures 2 -- 5 we have small initial amplitude function and, therefore,\noscillations of the amplitude function are near to harmonic ones, while in\nthe case of figures 6 -- 9 the initial amplitude function is near to the\napex of the hump and, therefore, we have a train of solitons. \\newpage\n\n\\begin{center}\n\\includegraphics[\nheight=2in, width=1.5in, angle=270, trim = 0.1in 0.1in 0.1in 0.1in, clip\n]{U1.eps}\n\nFIG.2. Effective potential for $n=4,$ $K=0,$ $T=1,$ $R(0)=0.3,$ $R^{\\prime\n}(0)=0,$ $\\varphi(0)=0,$ $\\varphi^{\\prime}(0)=0.3$.\n\nVertical dash marks $R(0)$ value.\n\n\\includegraphics[\nheight=2in, width=1.5in, angle=270, trim = 0.1in 0.1in 0.1in 0.1in, clip\n]{R1.eps}\n\nFIG. 3. Spatial dependence of the amplitude function for $n=4,$ $K=0,$ $T=1,$\n$R(0)=0.3,$ $R^{\\prime}(0)=0,$ $\\varphi(0)=0,$ $\\varphi^{\\prime}(0)=0.3$\n\n\\includegraphics[\nheight=2in, width=1.5in, angle=270, trim = 0.1in 0.1in 0.1in 0.1in, clip\n]{F1.eps}\n\nFIG. 4. Spatial dependence of the phase function for $n=4,$ $K=0,$ $T=1,$ $%\nR(0)=0.3,$ $R^{\\prime}(0)=0,$ $\\varphi(0)=0$\n\n(a) thin line $\\varphi^{\\prime}(0)=0.3$\n\n(b) heavy line $\\varphi^{\\prime}(0)=0.7.$\n\n\\includegraphics[\nheight=1.8in, width=1.8in, angle=270, trim = 0.1in 0.1in 0.1in 0.1in, clip\n]{P1.eps}\n\nFIG. 5. Amplitude-phase polar diagram for $n=4,$ $K=0,$ $T=1,$ $R(0)=0.3,$ $%\nR^{\\prime}(0)=0,$ $\\varphi(0)=0$\n\n(a) thin line $\\varphi^{\\prime}(0)=0.3$\n\n(b) heavy line $\\varphi^{\\prime}(0)=0.7.$\n\\end{center}\n\n\\newpage\n\n\\begin{center}\n\\includegraphics[\nheight=2in, width=1.5in, angle=270, trim = 0.1in 0.1in 0.1in 0.1in, clip\n]{U2.eps}\n\nFIG. 6. Effective potential for $n=4,$ $K=0,$ $T=1,$ $R(0)=1.099,$ $%\nR^{\\prime }(0)=0,$ $\\varphi(0)=0,$ $\\varphi^{\\prime}(0)=0.3$\n\nVertical dash marks $R(0)$ value.\n\n\\includegraphics[\nheight=2in, width=1.5in, angle=270, trim = 0.1in 0.1in 0.1in 0.1in, clip\n]{R2.eps}\n\nFIG. 7. Spatial dependence of the amplitude function for $n=4,$ $K=0,$ $T=1,$\n$R(0)=1.099,$ $R^{\\prime}(0)=0,$ $\\varphi(0)=0,$ $\\varphi^{\\prime}(0)=0.3$\n\n\\includegraphics[\nheight=2in, width=1.5in, angle=270, trim = 0.1in 0.1in 0.1in 0.1in, clip\n]{F2.eps}\n\nFIG. 8. Spatial dependence of the phase function for $n=4,$ $K=0,$ $T=1,$ $%\nR(0)=1.099,$ $R^{\\prime}(0)=0,$ $\\varphi(0)=0$\n\n(a) thin line $\\varphi^{\\prime}(0)=0.3$\n\n(b) heavy line $\\varphi^{\\prime}(0)=0.7.$\n\n\\includegraphics[\nheight=1.8in, width=1.8in, angle=270, trim = 0.1in 0.1in 0.1in 0.1in, clip\n]{P2.eps}\n\nFIG. 9. 
Amplitude-phase polar diagram for $n=4,$ $K=0,$ $T=1,$ $R(0)=1.099,$\n$R^{\\prime}(0)=0,$ $\\varphi(0)=0$\n\n(a) thin line $\\varphi^{\\prime}(0)=0.3$\n\n(b) heavy line $\\varphi^{\\prime}(0)=0.7.$\n\\end{center}\n\n\\newpage\n\n\\section{Numerical solution for K}\n\nResults of the numerical solution of our equations in the case of $K\\neq0$\nare presented in the figures 10 -- 15. The following important distinctions\nfrom the case of $K=0$ must be mentioned:\n\n(i) The periodicity of the spatial dependence is broken.\n\n(ii) The frequency and amplitude modulation can be seen in Figs. 10 and 11.\n\n(iii) A direction of the phase function staircase spatial dependence may\nchange to the opposite after some number of steps. The number of steps\nbetween neighboring direction changes is random (see FIG. 15).\n\n(iv) There exists a parameter range, where the trajectories in the\npolar diagram show closed periodic movement: a synchronization\nwith the periodic contribution of the sine of the monotonously\nincreasing component of the phase function takes place.\n\n\\begin{center}\n\\includegraphics[\nheight=3in, width=2in, angle=270, trim = 0.1in 0.1in 0.1in 0.1in, clip\n]{R3.eps}\n\nFIG. 10. Spatial dependence of the amplitude function for $n=4,$ $K=1.6,$ $%\nT=1,$ $R(0)=0.3,$ $R^{\\prime}(0)=0,$ $\\varphi(0)=0,$ $\\varphi^{\\prime\n}(0)=0.3$\n\n\\includegraphics[\nheight=3in, width=2in, angle=270, trim = 0.1in 0.1in 0.1in 0.1in, clip\n]{R4.eps}\n\nFIG. 11. Spatial dependence of the amplitude function for $n=4,$ $K=1.6,$ $%\nT=1,$ $R(0)=0.3,$ $R^{\\prime}(0)=0,$ $\\varphi(0)=0,$ $\\varphi^{\\prime\n}(0)=0.75$: thin line\n\nSpatial dependence of the parameter $C^{2}$ (see below) in arbitrary units:\nheavy line.\n\n\\includegraphics[\nheight=3in, width=2in, angle=270, trim = 0.1in 0.1in 0.1in 0.1in, clip\n]{F34.eps}\n\nFIG. 12. Spatial dependence of the phase function for $n=4,$ $K=1.6,$ $T=1,$\n$R(0)=0.3,$ $R^{\\prime}(0)=0,$ $\\varphi(0)=0$\n\n(a) thin line $\\varphi^{\\prime}(0)=0.3$\n\n(b) heavy line $\\varphi^{\\prime}(0)=0.75.$\n\n\\includegraphics[\nheight=2.5in, width=2.5in,angle=270, trim = 0.1in 0.1in 0.1in 0.1in, clip\n]{P3.eps}\n\nFIG. 13. Amplitude-phase polar diagram for $n=4,$ $K=1.6,$ $T=1,$ $R(0)=0.3,$\n$R^{\\prime}(0)=0,$ $\\varphi(0)=0,\\varphi^{\\prime}(0)=0.3$\n\n\\includegraphics[\nheight=2.5in, width=2.5in, angle=270, trim = 0.1in 0.1in 0.1in 0.1in, clip\n]{P4.eps}\n\nFIG. 14. Amplitude-phase polar diagram for $n=4,$ $K=1.6,$ $T=1,$ $R(0)=0.3,$\n$R^{\\prime}(0)=0,$ $\\varphi(0)=0,\\varphi^{\\prime}(0)=0.75$\n\\end{center}\n\n\\newpage\n\n\\section{Discussion}\n\nThus, the main distinctions of the solution in the case of the\nnon-vanishing anisotropy parameter $K$ are a random change of the\njumps direction (see FIG. 12) and an amplitude modulation of the\namplitude function (see FIG. 11). Just these random direction\nchanges lead to the tangled trajectory in polar coordinates\npresented in FIG 14.\n\nIn the case of $K=0$ we introduced the integration constant $C_{0}$ (\\ref%\n{phasesolution}). 
Let us introduce the function:\n\n\\begin{equation*}\nC(\\xi)=\\left[ \\psi(\\xi)-\\frac{T}{2}\\right] R(\\xi)^{2}.\n\\end{equation*}\n\nDirect differentiation shows that this function reduces to a constant in the\ncase of vanishing anisotropy parameter $K$:%\n\\begin{eqnarray}\n\\frac{dC}{d\\xi}=2R\\frac{dR}{d\\xi}\\left[ \\psi-\\frac{T}{2}\\right] +R^{2}\\frac{%\nd\\psi}{d\\xi}= \\notag \\\\\n=R^{2}(2\\frac{R^{\\prime}}{R}\\varphi^{\\prime} -T\\frac{R^{\\prime}}{R}%\n+\\varphi^{\\prime\\prime})= \\notag \\\\\n=R^{2}(-R^{n-2}K\\sin n\\varphi)=-R^{n}K\\sin n\\varphi.\n\\end{eqnarray}\n\n\\begin{center}\n\\includegraphics[\nheight=3in, width=2in, angle=270, trim = 0.1in 0.1in 0.1in 0.1in, clip\n]{C4.eps}\n\nFIG. 15. Spatial dependence of the parameter $C$ for $n=4,$ $K=1.6,$ $T=1,$ $%\nR(0)=0.3,$ $R^{\\prime}(0)=0,$ $\\varphi(0)=0,\\varphi^{\\prime}(0)=0.75$\n\n(a) thin line $C^{\\prime}$\n\n(b) heavy line $C.$\n\\end{center}\n\nComparison of FIG. 15 and FIG. 12 confirms that the changes of direction\ntake place at the points where the function $C(\\xi)$ changes its sign.\n\n\\section{Conclusion}\n\nWe have shown in this paper that the constant amplitude\napproximation gives a poor description of the real picture of the\nspatial evolution of the amplitude and phase functions for the\nmodel of the incommensurate ferroelectric with the Lifshitz\ninvariant.\n\n\n\n\\begin{thebibliography}{99}\n\n\\bibitem{dzialoshin}{I. Dzyaloshinskii, Soviet Physics JETP 19, 960\n(1964).}\n\n\\bibitem{levaniuk}{A.P. Levaniuk and D.G. Sannikov, Fiz. Tverd. Tela 18,\n423 (1976).}\n\n\\bibitem{hachatur}{A.G. Khachaturian, \\textit{Theory of Structural\nTransformations in Solids}, Wiley, 1983.}\n\n\\bibitem{char}{E.V. Charnaya, S.A. Ktitorov, O.S. Pogorelova,\nFerroelectrics, 297, 29 (2003).}\n\n\\bibitem{izium}{Ju. A. Iziumov, V.N. Syromiatnikov, Phase Transitions and\nCrystal Symmetry, Moscow, Nauka, 1984.}\n\n\\bibitem{soliton}{Mark J. Ablowitz and Harvey Segur, Solitons\nand the Inverse Scattering Transform, SIAM, Philadelphia, 1981.}\n\\end{thebibliography}\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\nAtherosclerotic plaque formation is today seen as a chronic inflammation of the arterial wall which grows over decades and may finally lead to a heart attack in case the artery is occluded or a thrombus forms through a rupture of the plaque. Regarding the mechanisms of the chronic inflammation, it was recently shown in \\cite{kuhlmann2012implantation} that genetically modified (apoE knockdown) mice with a cuff around their carotid artery develop atherosclerotic plaque formation up- and downstream of the cuff after they were fed with a Western diet. A low wall shear stress of the blood on the arterial wall or highly oscillating blood flow was shown to be an important indicator for the development of plaque because it damages the endothelial layer. \n\nAt this point our mathematical model (cf. \\cite{ibragimov2005mathematical}), which we present in Section 2, comes into play: A dysfunction of the endothelium allows low-density lipoproteins (LDL) to enter the artery wall. Once inside the arterial wall, the LDL becomes oxidized, which leads to a recruitment of immune cells, i.e. monocytes. Monocytes differentiate into active macrophages once inside the arterial wall and start continuously absorbing the oxidized LDL. Finally, the macrophages differentiate into foam cells, die and build a necrotic core.
Smooth muscle cells (SMCs) from the outer regions of the arterial wall can migrate into the lesion and either become apoptotic cells or migrate around the lesion to form a fibromuscular cap overlaying the plaque. \n\nSection 3 describes the spatial and temporal discretization of the CDG2 method, which was successfully tested for elliptic problems, scalar convection-diffusion equations and compressible Navier-Stokes equations in \\cite{brdar2012cdg2,klofkorn2011benchmark,dgimpl:12}.\n\nWe summarize our paper with some 2D and 3D benchmark tests in Section 4 and a conclusion in Section 5. \n\n\n\\section{Mathematical Model for Atherosclerotic Inflammation}\n\nA variety of mathematical models dealing with atherosclerotic plaque formation exist, \nsee \\cite{calvez2010mathematical,ibragimov2005mathematical}. Here, we focus on six species: immune cells $n_1$ (we do not distinguish between monocytes and macrophages here), SMCs $n_2$, debris $n_3$ (i.e. all dead or apoptotic cells), chemoattractant $c_1$ (to which immune cells and SMCs are attracted), non-oxidized LDL $c_2$ and oxidized LDL $c_3$. Let $\\Omega\\subset\\mathbb{R}^d$, $d=2,3$, be the domain of the arterial wall, $\\Gamma_1$ the boundary between the arterial wall and the lumen and $\\Gamma_2$ the outer boundary of the arterial wall.\n\nLet us suppose that for all $x\\in\\Omega$ and $t>0$ the following system holds:\n\\begin{eqnarray}\\label{eq:01}\n\\partial_t n_1 &=& \\nabla\\cdot \\left( \\mu_1 \\nabla n_1 -\\chi(n_1,c_1,\\chi_{11}^0,\\chi_{11}^{th})\\nabla c_1 - \\chi(n_1,c_3,\\chi_{13}^0,\\chi_{13}^{th})\\nabla c_3 \\right) - d_1n_1,\\\\\n\\partial_t n_2 &=& \\nabla\\cdot \\left( \\mu_2 \\nabla n_2 -\\chi(n_2,c_1,\\chi_{21}^0,\\chi_{21}^{th})\\nabla c_1 + \\chi( n_2,n_1,\\chi_{21}^0,\\chi_{21}^{th})\\nabla n_1 \\right) - d_2n_2,\\\\\n\\partial_t n_3 &=& \\nabla\\cdot ( \\mu_3 \\nabla n_3 ) + d_1 n_1 + d_2 n_2 - F(n_3,c_3) n_1,\\\\\n\\partial_t c_1 &=& \\nabla\\cdot ( \\nu_1 \\nabla c_1 ) - \\alpha_1 n_1 c_1 - \\alpha_2 n_2 c_1 + f_1(n_3)n_3,\\\\\n\\partial_t c_2 &=& \\nabla\\cdot (\\nu_2 \\nabla c_2 ) - k c_2,\\\\\n\\partial_t c_3 &=& \\nabla\\cdot (\\nu_3 \\nabla c_3 ) + k c_2.\n\\end{eqnarray}\nIn our model we assume the motility coefficients $\\mu_1$, $\\mu_2$, $\\mu_3$, $\\nu_1$, $\\nu_2$ and $\\nu_3$ to be constant. The parameters $d_1$ and $d_2$ are also constant and describe the death rates of immune cells and SMCs. Chemoattractant is neutralized by immune cells and SMCs, which is described by $\\alpha_1$ and $\\alpha_2$. The parameter $k$ describes how fast the native LDL becomes oxidized. \n\nThe function $\\chi$ is called the tactic sensitivity function.\nWe have chosen $\\chi(x,y,a,b) = a \\frac{x}{y+b}$ to mimic a high sensitivity of cells to the relative gradient $\\frac{\\nabla c}{c}$ of a chemoattractant (or other cells) $c$ on the one hand and a small penalization term to regularize the (chemo-)tactic movement for small concentrations $c$ on the other hand. Many other tactic sensitivity functions are possible as well. Our tactic sensitivity functions are defined by the constants $\\chi_{ij}^0$ and $\\chi_{ij}^{th}$.\n\nFor a healthy immune system, debris is degraded, which is indicated by a general function $F>0$. We suppose $\\gamma:=F<0$ to be constant, indicating a diseased state. The function $f_1$ is a debris-dependent production term. \nWe allow LDL and immune cells to enter the arterial wall through the inner boundary and SMCs to enter through the outer arterial wall.
The immune cell (SMC) inflow is triggered when a threshold $c_1^{*}$ ($c_1^{**}$) of chemoattractant is exceeded, i.e.\n\\begin{eqnarray}\\label{eq:03}\n\t\\partial_n n_1 &=& -\\beta_1 H(c_1 -c_1^{*} ) \\quad\\quad \\forall x\\in \\Gamma_1,\\:t>0,\\\\\n\t\\partial_n n_2 &=& -\\beta_2 H(c_1 -c_1^{**} ) \\quad\\quad \\forall x\\in \\Gamma_2,\\:t>0,\\\\\n\t\\partial_n c_2 &=& -\\sigma \\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad \\forall x\\in \\Gamma_{1,in},\\:t>0,\n\\end{eqnarray}\nwith Heaviside function $H$ and a boundary $\\Gamma_{1,in}\\subset\\Gamma_1$ for the inflow of LDL. Here, $\\beta_1$, $\\beta_2$ and $\\sigma$ denote constant inflow rates for immune cells, SMCs and LDL, respectively. For all other boundary conditions we choose a no-flow condition.\nThe initial data is supposed to be given by $n_i(x,0)=n_i^0(x)$ and $c_i(x,0)=c_i^0(x)$, $i=1,2,3$, $x\\in\\Omega$.\n\nDefining a vector $U := (n_1, n_2, n_3, c_1, c_2, c_3 )$ and functions $\\mathcal{F}:\\mathbb{R}^{6}\\rightarrow\\mathbb{R}^{6\\times d}$, $\\mathcal{A}:\\mathbb{R}^{6}\\rightarrow\\mathbb{R}^{6\\times 6}$ and $S:\\mathbb{R}^{6}\\rightarrow\\mathbb{R}^{6}$ by\n\\begin{eqnarray*}\n\t\\mathcal{F}(U)&:=&(\\chi(n_1,c_1,\\chi_{11}^0,\\chi_{11}^{th})\\nabla c_1 + \\chi(n_1,c_3,\\chi_{13}^0,\\chi_{13}^{th})\\nabla c_3,\\\\\n &&\\chi(n_2,c_1,\\chi_{21}^0,\\chi_{21}^{th})\\nabla c_1 - \\chi( n_2,n_1,\\chi_{21}^0,\\chi_{21}^{th})\\nabla n_1,0\\ldots0),\\\\\n\t\\mathcal{A}(U)&:=&\\mathrm{diag}(\\mu_1,\\mu_2,\\mu_3,\\nu_1,\\nu_2,\\nu_3 ),\\\\\n\tS(U)&:=&-(d_1n_1,d_2n_2,-d_1 n_1 - d_2 n_2 + \\gamma n_1,\\alpha_1 n_1 c_1+\\alpha_2 n_2 c_2 - f_1n_3, k c_2,-k c_2)\n\\end{eqnarray*}\nequation \\eqref{eq:01} can be written as \n\\begin{equation}\n\t\\partial_t U = - \\nabla\\cdot( \\mathcal{F}(U) - \\mathcal{A}(U)\\nabla U) + S(U).\n\\end{equation}\n\n\n\\section{Discretization}\n\\label{seq:discretization}\n\nThe considered discretization is based on the Discontinuous Galerkin (DG) approach and \nimplemented in \\textsc{Dune-Fem}\\xspace \\cite{dedner2010generic} a module of the\n\\textsc{Dune}\\xspace framework \\cite{bastian2008generic}.\nThe current state of development allows for simulation of convection dominated \n(cf.~\\cite{limiter:11}) as well as viscous flow (cf.~\\cite{brdar2012cdg2}).\nWe consider the CDG2 method from \n\\cite{brdar2012cdg2} for various polynomial orders in space and 2nd (or 3rd) order \nin time for the numerical investigations carried out in this paper.\n\n\\subsection{Spatial Discretization}\n\nThe spatial discretization is derived in the following way. \nGiven a tessellation ${\\mathcal{T}_h}$ of the domain $\\Omega$ with \n$\\cup_{K \\in {\\mathcal{T}_h}} K = \\Omega$ the \ndiscrete solution $\\vecU_h$ is sought in the piecewise polynomial space \n\\begin{equation}\n\\label{eqn:vspace}\n V_h = \\{\\vect{v}\\in L^2(\\Omega,\\mathbb{R}^{n_{spec}}) \\; \\colon\n \\vect{v}|_{K}\\in[\\mathcal{P}_k(K)]^{n_{spec}}, \\ K\\in{\\mathcal{T}_h}\\}\n \\quad\\textrm{for some}\\;k \\in {\\mathbbm N}, \\nonumber\n\\end{equation}\nwhere $n_{spec}$ is the number of species and $\\mathcal{P}_k(K)$ is a space containing polynomials up to degree\n$k$. \n\n\\newcommand{\\dual}[1]{\\langle \\vect{\\varphi}, #1 \\rangle\nWe denote with $\\Gamma_i$ the set of all intersections between two \nelements of the grid ${\\mathcal{T}_h}$ and accordingly with $\\Gamma$ the set of all\nintersections, also with the boundary of the domain $\\Omega$. 
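\n\nTo give an impression of the size of $V_h$: if $\\mathcal{P}_k(K)$ is chosen as the full space of polynomials of degree $k$, it contributes $(k+1)(k+2)\/2$ basis functions per component on a triangle, so that for the six-species model a grid with 81,920 triangles and $k=4$ yields the 7,372,800 degrees of freedom of the reference solution used in the numerical results below.\n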
\nThe following discrete form is not the most general but still\ncovers a wide range of well established DG methods. \nFor all basis functions $\\vect{\\varphi} \\in V_h$ we \ndefine \n\\begin{equation}\n\\label{convDiscr}\n\\dual { \\oper{L}_h(\\vecU_h) } := \\dual{ \\oper{K}_h(\\vecU_h) } + \\dual{ \\oper{I}_h(\\vecU_h) }\n\\end{equation}\nwith the element integrals \n\\begin{eqnarray}\n\\label{eqn:elementint}\n \n \\dual{ \\oper{K}_h(\\vecU_h) } &:=&\n \n \\sum_{\\entity \\in {\\mathcal{T}_h}} \\int_{\\entity}\n \\big( ( \\mathcal{F}(\\vecU_h) - \\mathcal{A}(\\vect{U}_h) \\nabla \\vecU_h ) : \\nabla\\vect{\\varphi} + S(\\df)\n \\cdot \\vect{\\varphi} \\big),\n\\end{eqnarray}\nand the surface integrals (by introducing appropriate numerical fluxes \n$\\flux{\\mathcal{F}}_{\\isec}$, $\\flux{\\mathcal{A}}_{\\isec}$ for the convection and diffusion terms, respectively) \n\\begin{eqnarray}\n\\label{eqn:surfaceint}\n \n \\dual{ \\oper{I}_h(\\vecU_h) } &:=&\n \\sum_{e \\in \\Gamma_i} \\int_e \\big(\n \\vaver{\\mathcal{A}(\\vect{U}_h)^T\\nabla\\vect{\\varphi}} : \\vjump{\\vecU_h} +\n \\vaver{\\mathcal{A}(\\vect{U}_h)\\nabla\\vecU_h} : \\vjump{\\vect{\\varphi}} \\big) \\nonumber \\\\\n &-& \\sum_{e \\in \\Gamma} \\int_e \\big( \\flux{\\mathcal{F}}_{\\isec}(\\vecU_h) - \\flux{\\mathcal{A}}_{\\isec}(\\vecU_h)\\big) :\n \\vjump{\\vect{\\varphi}}, \n\\end{eqnarray}\nwhere $\\vaver{ \\vect{V} } = \\frac{1}{2}( \\vect{V}^+ + \\vect{V}^- )$ denotes the average and \n$\\vjump{ \\vect{V} } = (\\boldsymbol{n}^+ \\otimes \\vect{V}^+ + \\boldsymbol{n}^-\\otimes \\vect{V}^-) $ the jump of the\ndiscontinuous function $\\vect{V}\\in V_h$ over element boundaries.\nFor matrices $\\sigma,\\tau\\in\\mathbb{R}^{m\\times n}$ we use standard notation\n$\\sigma : \\tau = \\sum_{j=1}^m\\sum_{l=1}^n\\sigma_{jl}\\tau_{jl}$. Additionally, for vectors\n$\\vect{v} \\in \\mathbb{R}^m,\\vect{w}\\in\\mathbb{R}^n$, we define $\\vect{v}\\otimes\\vect{w}\\in\\mathbb{R}^{m\\times n}$\naccording to $(\\vect{v}\\otimes\\vect{w})_{jl}=\\vect{v}_j \\vect{w}_l$ for $1\\leq j\\leq m$, $1\\leq l\\leq n$.\n\nThe convective numerical flux $\\flux{\\mathcal{F}}_{\\isec}$ can be any appropriate numerical flux known for\nstandard finite volume methods. \nFor the results presented in this paper we choose $\\flux{\\mathcal{F}}_{\\isec}$ to be the widely used\nlocal Lax-Friedrichs numerical flux function.\n\nA wide range of diffusion fluxes $\\flux{\\mathcal{A}}_{\\isec}$ can be found in the\nliterature, for a summary see \\cite{arnold2002unified}.\nWe choose the CDG2 flux\n\\begin{eqnarray}\n\\flux{\\mathcal{A}}_{\\isec}(\\vect{V}) := 2\\chi_e \\big(\\mathcal{A}(\\vect{V}){\\liftr_e}(\\vjump{\\vect{V}})\\big)|_{{K^-_e}}\n\\quad\\mbox{for } \\vect{V}\\in V_h,\n\\end{eqnarray}\nwhich was shown to be highly efficient for advection-diffusion equations (cf. \\cite{brdar2012cdg2}). \nBased on stability results, we choose ${K^-_e}$ to be the element adjacent to the edge $e$ with the smaller\nvolume. ${\\liftr_e}(\\vjump{\\vect{V}})\\in [V_h]^d$ is the lifting of the jump of $\\vect{V}$ defined by\n\\begin{eqnarray}\n \\int_\\Omega {\\liftr_e}(\\vjump{\\vect{V}}) : \\boldsymbol{\\tau} = -\\int_e\n \\vjump{\\vect{V}} : \\vaver{\\boldsymbol{\\tau}} \\quad\n \\mbox{for all}\\;\\boldsymbol{\\tau}\\in [V_h]^d.\n\\end{eqnarray}\nFor the numerical experiments in this paper we use $\\chi_e= \\frac{1}{2}\\mathcal{N}_\\grid$,\nwhere $\\mathcal{N}_\\grid$ is the maximal number of intersections one element in the grid\ncan have (cf. \\cite{brdar2012cdg2}). 
We use triangular elements \nwhere $\\chi_e=1.5$ for all $e \\in \\Gamma$, and tetrahedral elements \nwhere $\\chi_e=2$ for all $e \\in \\Gamma$.\n\n\\subsection{Temporal discretization}\n\\label{TimeDisc}\n\nThe discrete solution $\\vecU_h(t) \\in V_h$ \nhas the form $\\vecU_h(t,x) = \\sum_i \\vect{U}_i(t)\\vect{\\varphi}_i(x)$.\nWe get a system of ODEs for the coefficients of $\\vect{U}(t)$ which reads \n\\begin{eqnarray}\n \\label{eqn:ode}\n \\vect{U}'(t) &=& f(\\vect{U}(t),t) \\mbox{ in } (0,T]\n\\end{eqnarray}\nwith $f(\\vect{U}(t),t) = M^{-1}\\oper{L}_h(\\vecU_h(t),t)$, $M$ being the mass matrix which is in\nour case block diagonal or even diagonal, depending on the choice of basis\nfunctions. $\\vect{U}(0)$ is given by the projection of $\\vect{U}_0$ onto $V_h$.\n\nFor the numerical results \nwe have chosen Diagonally Implicit Runge-Kutta (DIRK) \nsolvers of order $2$, $3$, or $4$ depending\non the polynomial order of the basis functions. The DIRK solvers are \nbased on a Jacobian-free Newton-Krylov method (see \\cite{knoll:04}).\nThe Krylov method is chosen to be GMRES. \nThe implicit solver relies on a \\textbf{matrix-free} implementation of the discrete operator \n$\\oper{L}_h$. In a follow-up paper we will compare this approach to a fully assembled\napproach.\n\n\\section{Numerical Results}\nIn this section we present some benchmark tests for 2D and 3D focusing on parallelization and higher order DG schemes. Due to the lack of an exact solution $U$ we have computed the $L^2$-error between the discrete solution $U_h$ and a very fine, higher order solution $U_{h'}$. The quadrature order to compute $\\|U_h-U_{h'}\\|_{L^2(\\Omega)}$ was chosen to be $2k+4$, where $k$ denotes the order of the scheme. All computations are done on an unstructured, tetrahedral mesh.\n\n\\subsection{A 2D numerical experiment with six species}\n\n\n\\begin{table}[t]\n\\small\n\\caption{Accuracy of the CDG2 scheme with 32 threads}\n\\begin{tabular}{ll|lll|lll|lll}\n\\hline\\noalign{\\smallskip}\n & & & linear & & & quadratic & & & cubic & \\\\\nlevel & grid size & time$^a$ & $L^2$-error & EOC$^b$ & time$^a$ & $L^2$-error & EOC$^b$ & time$^a$ & $L^2$-error & EOC$^b$ \\\\\n\\hline\n0 & 80 & 5.72E-1& 2.42E-3 & --- & 2.00E0 & 2.18E-3 & --- & 6.52E0 & 1.96E-3 & --- \\\\\n1 & 320 & 5.56E0 & 2.10E-3 & 0.20311 & 2.33E1 & 1.82E-3 & 0.26650 & 8.63E1 & 1.50E-3 & 0.38074 \\\\\n2 & 1280 & 3.98E2 & 1.82E-3 & 0.21263 & 2.09E2 & 1.34E-3 & 0.43315 & 8.22E2 & 9.26E-4 & 0.69823 \\\\\n3 & 5120 & 3.33E3 & 1.39E-3 & 0.38944 & 2.21E3 & 7.92E-4 & 0.76429 & 9.12E3 & 4.32E-4 & 1.0993 \\\\\n4 & 20480 & 3.01E4 & 8.28E-4 & 0.74208 & 2.10E4 & 2.94E-4 & 1.4284 & 8.02E4 & 8.77E-5 & 2.3024 \\\\\n5 & 81920 & 2.67E5 & 3.21E-4 & 1.3659 & 1.93E5 & 7.26E-5 & 2.0193 & 6.96E5 & 2.33E-5 & 1.9122 \\\\\n\\hline\n\\end{tabular}\n$^a$ total CPU time, $^b$ experimental order of convergence, \n\\label{tab_CGD2}\n\\end{table}\n\n$U_{h'}$ was calculated using the 4th order CDG2 scheme on a grid with 81,920 elements (refinement level $5$), i.e. 7,372,800 degrees of freedom. For each $h$-refinement of the grid we bisect the time step size. Results for linear, quadratic and cubic DG schemes can be seen in table \\ref{tab_CGD2}. In figure \\ref{fig:eoc} (left picture) we compare on a log-log scale the total CPU time of all threads with the $L^2$-error. Although the convergence rate is not as high as from the theory for parabolic problems, we see better rates for higher order schemes. 
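\n\nThe EOC values in table \\ref{tab_CGD2} are computed from the errors on consecutive refinement levels in the usual way, using that the mesh width is halved in every refinement step; a short script (ours, shown only for illustration) reads\n\\begin{verbatim}\nimport numpy as np\n\n# L2-errors of the linear scheme on levels 0 to 5, taken from the table above\nerr = [2.42e-3, 2.10e-3, 1.82e-3, 1.39e-3, 8.28e-4, 3.21e-4]\neoc = [np.log2(e0) - np.log2(e1) for e0, e1 in zip(err, err[1:])]\nprint(eoc)  # approx. 0.20, 0.21, 0.39, 0.75, 1.37; small deviations from the\n            # tabulated EOC values stem from rounding of the printed errors\n\\end{verbatim}\n\n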
We assume that re-entrant corners \nare responsible for the reduced convergence rates, see re-entrant corners in left picture of figure \\ref{fig:2d}.\n\n\n\\begin{figure}[t]\n\\includegraphics[scale=0.215]{girke-2d.eps}\n\\caption{Left: The coarsest grid for the EOC calculations containing 80 elements visualising a re-entrant corner (blue). The angle of $171^{\\circ}$ stays fixed for all refinements. Middle: Initial distribution for the immune cells. Right: Solution for 6 species from left to right, up to down: Immune cells, SMCs, debris, chemoattractant, native LDL, oxidized LDL. (data visualisation: Paraview.)}\n\\label{fig:2d}\n\\end{figure}\n\nThe right picture of figure \\ref{fig:eoc} shows that the CDG2 is as good as the BR2 scheme and outperforms other DG schemes. \n\n\\begin{figure}[t]\n\\includegraphics[scale=0.17]{girke-eoc2.eps}\n\\caption{Plot CPU time vs. $L^2$-error: left: 1st, 2nd and 3rd order CDG2 scheme, right: 1st order CDG, CDG2, Baumann-Oden (BO), Bassy-Rebay (BR2), interior penalty (IP) scheme (Visualisation of graphs: gnuplot.)}\n\\label{fig:eoc}\n\\end{figure}\n\n\n\n\\subsection{A 3D numerical experiment with three species}\n\nFor the 3D benchmark we simplify our model and do our simulation only for immune cells, debris and chemoattractant. This reduces the considered model to \n\\begin{eqnarray}\\label{eq:02}\n\\partial_t n_1 &=& \\nabla\\cdot \\left( \\mu_1 \\nabla n_1 -\\chi(n_1,c_1,\\chi_{11}^0,\\chi_{11}^{th})\\nabla c_1\\right),\\\\\n\\partial_t n_3 &=& \\nabla\\cdot ( \\mu_3 \\nabla n_3 ) + d_1 n_1 + d_2 n_2 - F(n_3,c_3) n_1,\\\\\n\\partial_t c_1 &=& \\nabla\\cdot ( \\nu_1 \\nabla c_1 ) - \\alpha_1 n_1 c_1 + f_1(n_3)n_3.\n\\end{eqnarray}\nWe cannot trigger the inflammation through an inflow of LDL anymore. Thus, we suppose that the inflammation is triggered by a local, high concentration of debris and keep all other boundary and initial data from the last section.\n\nIn the 3D benchmark we examine parallelization using MPI and present in table \\ref{tab_parallel} strong scaling results for a third order CDG2 scheme on a grid with 113,549 elements and 13,625,880 degrees of freedom. Figure \\ref{fig:3d} shows the distribution of the processors and a discrete solution of the chemoattractant calculated using first order CDG2.\n\n\\begin{table}[t]\n\\caption{CPU time for a parallel runs using the cubic CDG2 method for computation of\n$10$ time steps.}\n\\begin{tabular}{p{2.5cm} p{1.0cm}p{1.0cm}p{1.0cm}p{1.0cm}p{1.0cm}p{1.0cm}}\n\\hline\nprocessors & 8 & 16 & 32 & 64 & 128 & 256 \\\\\nCPU time in sec & $1177$ & $528$ & $277$ & $142$ & $75$ & $39$\\\\\nspeedup & --- & $2.23$ & $4.29$ & $8.29$ & $15.7$ & $30.18$\\\\\n\\hline\n\\end{tabular}\n\\label{tab_parallel}\n\\end{table}\n\n\n\n\n\\begin{figure}[t]\n\\includegraphics[scale=0.16]{girke-3d.eps}\n\\caption{3D cuff model. Left: Each colour denotes a processor in a parallel run with 32 processors, right: Isolines of the distribution of the chemoattractant after the inflammation has started}\n\\label{fig:3d}\n\\end{figure}\n\n\\section{Conclusion}\n\nWe have shown that Discontinuous Galerkin schemes are well suited for solving huge coupled reactive diffusion transport systems. Modern techniques, such as parallelization, help to handle large systems in an appropriate CPU time.\nFurthermore, we have shown that it is possible to model the early stages of atherosclerotic plaque formation. 
A lot of more work needs to be done: In a future paper we will model the wall shear stress and some more species to understand later stages of atherosclerosis. \n\n\\section*{Acknowledgement}\nThis work was supported by the Deutsche Forschungsgemeinschaft, Collaborative Research Center\nSFB 656 ``Cardiovascular Molecular Imaging'', project B07, M{\\\"u}nster, Germany. The scaling results were produced using the super computer Yellowstone\n(ark:\/85065\/d7wd3xhc) provided by NCAR's Computational and Information Systems\nLaboratory, sponsored by the National Science Foundation.\nRobert Kl\\\"ofkorn is partially funded by \nthe DEO program BER under award DE-SC0006959.\n\n\n\\bibliographystyle{unsrt}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\\label{sec:introduction}\n\nIn this paper, we study a class of second-order ordinary differential equations (ODEs) of the form\n\\begin{align} \\label{eq: nonlinear ode}\nM \\ddot{x} + D \\dot{x} + f(x) = 0,\n\\end{align}\nand its corresponding first-order system\n\\begin{align} \\label{eq: nonlinear ode 1 order_intro}\n\\begin{bmatrix} \\dot{x} \\\\ \\dot{y}\n\\end{bmatrix}\n= \n\\begin{bmatrix}\n0 & I \\\\\n0 & -M^{-1}D\n\\end{bmatrix}\n\\begin{bmatrix} {x} \\\\ {y}\n\\end{bmatrix}\n- M^{-1} \n\\begin{bmatrix} 0 \\\\ f(x)\n\\end{bmatrix},\n\\end{align}\nwhere $f:\\mathbb{R}^n\\to \\mathbb{R}^n$ is a continuously differentiable function, the dot denotes differentiation with respect to the independent variable $t\\ge0$, the dependent variable $x\\in\\mathbb{R}^n$ is a vector of state variables, and the coefficients $M\\in\\mathbb{S}^n$ and $D\\in\\mathbb{S}^n$ are constant $n\\times n$ real symmetric matrices.\nWe refer to $M$ and $D$ as the inertia and damping matrices, respectively. \nWe restrict our attention to the case where $M$ is nonsingular, thereby avoiding differential algebraic equations, and $D \\in \\mathbb{S}^n_+$ is positive semi-definite (PSD). We also investigate and discuss the case where $M$ and $D$ are not symmetric. \n\n\nAn important example of \\eqref{eq: nonlinear ode} is an electric power system with the set of interconnected generators $\\mathcal{N}=\\{1,\\cdots,n\\}, n\\in\\mathbb{N}$ characterized by the second-order system\n\\begin{align} \\label{eq: 2nd order swing-intro}\n\t\t\\frac{m_j}{\\omega_s} \\ddot{\\delta}_j(t)+ \\frac{d_j}{\\omega_s} {\\dot{\\delta}}_j(t) = P_{m_j} - \\sum \\limits_{k = 1}^n { V_j V_k Y_{jk} \\cos \\left( \\theta _{jk} - \\delta _j + \\delta _k \\right)} && \\forall j \\in \\mathcal{N},\n\\end{align}\nwhere $\\delta\\in\\mathbb{R}^n$ is the vector of state variables. The inertia and damping matrices in this case\nare $M= \\frac{1}{\\omega_s} \\mathbf{diag}(m_1,\\cdots,m_n)$ and $D=\\frac{1}{\\omega_s}\\mathbf{diag}(d_1,\\cdots,d_n)$. 
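\nIn the notation of \\eqref{eq: nonlinear ode}, the nonlinearity in \\eqref{eq: 2nd order swing-intro} is the map $f:\\mathbb{R}^n\\to\\mathbb{R}^n$ with components\n\\begin{align*}\nf_j(\\delta) = -P_{m_j} + \\sum \\limits_{k = 1}^n V_j V_k Y_{jk} \\cos \\left( \\theta _{jk} - \\delta _j + \\delta _k \\right), \\qquad j \\in \\mathcal{N}.\n\\end{align*}\n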
\nSystem \\eqref{eq: 2nd order swing-intro}, which is known as the \\emph{swing equations}, describes the nonlinear dynamical relation between the power output and voltage angle of generators \\cite{1994-kundur-stability, 2020-fast-certificate}.\nThe first-order system associated with swing equations is also of the form \\eqref{eq: nonlinear ode 1 order_intro}, i.e.,\n\\begin{subequations} \\label{eq: swing equations -intro}\n\t\\begin{align}\n\t\t& \\dot{\\delta}_j(t) = \\omega_j(t) && \\forall j \\in \\mathcal{N}, \\label{eq: swing equations a-intro}\\\\\n\t\t& \\frac{m_j}{\\omega_s} \\dot{\\omega}_j(t)+ \\frac{d_j}{\\omega_s} \\omega_j(t) = P_{m_j} - \\sum \\limits_{k = 1}^n { V_j V_k Y_{jk} \\cos \\left( \\theta _{jk} - \\delta _j + \\delta _k \\right)} && \\forall j \\in \\mathcal{N}, \\label{eq: swing equations b-intro}\n\t\\end{align}\n\\end{subequations}\nwhere $(\\delta,\\omega)\\in\\mathbb{R}^{n+n}$ is the vector of state variables. Note that each generator $j$ is a second-order oscillator, which is coupled to other generators through the cosine term in \\cref{eq: swing equations b-intro} and the admittance $Y_{jk}$ encodes the graph structure of the power grid (see \\Cref{Sec: Power System Model and Its Properties} for full details on swing equations).\n\n\n\n\nAmong the various aspects of model \\eqref{eq: nonlinear ode}, the impact of damping matrix $D$ on the stability properties of the model is one of the most intriguing topics \\cite{1980-Miller-Asymptotic,2013-adhikari-structural-book, 2017-koerts-second-order}. Moreover, better understanding of the damping impact in swing equations \\eqref{eq: 2nd order swing-intro} is of particular importance to the stability analysis of electric power systems \\cite{2012-Dorfler-synchronization}. \nUndamped modes and oscillations are the root causes of several blackouts, such as the WECC blackout on August 10, 1996 \\cite{1996-blackout} as well as the more recent events such as the forced oscillation event on January 11, 2019 \\cite{nerc2019} in the Eastern Interconnection of the U.S. power grid. In order to maintain system stability in the wake of unexpected equipment failures, many control actions taken by power system operators are directly or indirectly targeted at changing the effective damping of system \\eqref{eq: 2nd order swing-intro} \\cite{1994-kundur-stability,2019-Patrick-koorehdavoudi-input,2012-aminifar-wide-area-damping}. In this context, an important question is \\emph{how the stability properties of power system equilibrium points change as the damping of the system changes}. Our main motivation is to rigorously address this question for the general model \\eqref{eq: nonlinear ode} and show its applications in power system model \\eqref{eq: 2nd order swing-intro}.\n\n\n\n\n\n\n\n\n\\subsection{Literature Review}\nThe dynamical model \\eqref{eq: nonlinear ode} has been of interest to many researchers who have studied necessary and sufficient conditions for its local stability \\cite{1980-skar-nontrivial-transfer-conductance1, 2020-fast-certificate} or characterization of its stability regions \\cite{1988-chiang-stability-of-nonlinear}. 
\nWhen $f(x)$ is a linear function, this model coincides with the model of $n$-degree-of-freedom viscously damped vibration systems which are also extensively studied in the structural dynamics literature \\cite{1984-laub-controllability,2010-Ma-decoupling,2013-adhikari-structural-book}.\nEquation \\eqref{eq: nonlinear ode} is also the cornerstone of studying many physical and engineering systems such as an $n$-generator electric power system \\cite{2012-Dorfler-synchronization}, an $n$-degree-of-freedom rigid body \\cite{1988-chiang-stability-of-nonlinear}, and a system of $n$ coupled oscillators \\cite{2011-dorfler-critical-coupling, 2012-Dorfler-synchronization, 2005-kuramoto-review}, in particular Kuramoto oscillators with inertia \\cite{2016-Igor-Inertia-Kuramoto,2020-Igor-Inertia-Kuramoto}.\n\n\n\n\nRegarding damping effects in power systems, the results are sporadic and mostly based on empirical studies of small scale power systems. For example, it is known that the lossless swing equations (i.e., when the transfer\nconductances of power grid are zero, which corresponds to $\\nabla f(x)$ in \\eqref{eq: nonlinear ode} being a real symmetric matrix for all $x$) have no periodic solutions, provided that all generators have a positive damping value \\cite{1982-Arapostathis-global-analysis-periodicSol}. It is also shown by numerical simulation that subcritical and supercritical Hopf bifurcations, and as a consequence, the emergence of periodic solutions, could happen if the swing equations of a two-generator network are augmented to include any of the following four features: variable damping, frequency-dependent electrical torque, lossy transmission lines, and excitation control \\cite{1981-Abed-Oscillations, 1986-Alexander-Oscillatory-Solutions}. Hopf bifurcation is also demonstrated in a three-generator undamped system as the load of the system changes \\cite{1989-Kwatny-Energy-Analysis-Flutter}, where several energy functions for such undamped lossy swing equations in the neighborhood of points of Hopf bifurcation are developed to help characterize Hopf bifurcation in terms of energy properties. Furthermore, a frequency domain analysis to identify the stability of the periodic orbits created by a Hopf bifurcation is presented in \\cite{1990-Kwatny-Frequency-Analysis_Hopf}.\nThe existence and the properties of limit cycles in power systems with higher-order models are also numerically analyzed in \\cite{2007-Hiskens-Limit-cycle2,2005-Hiskens-limit-cycle1}.\n %\n\n\nAnother set of literature relevant to our work studies the role of power system parameters in the stability of its equilibrium points. For instance, the work presented in \\cite{2019-Patrick-koorehdavoudi-input} examines the dependence of the transfer functions on the system parameters in the swing equation model. In \\cite{2017-paganini-global}, the role of inertia in the frequency response of the system is studied. Moreover, it is shown how different dynamical models can lead to different conclusions. Finally, the works on frequency and transient stabilities in power systems \\cite{2013-Lee-Power-dynamics-stochastic,2014-Low-NaLi-stability,2016-Vu-framework,2005-ortega-transient,2018-Dorfler-robust} are conceptually related to our work.\n\n\n\n\n\n\n\n\n\n\\subsection{Contributions}\nThis paper presents a thorough theoretical analysis of the role of damping in the stability of model \\eqref{eq: nonlinear ode}-\\eqref{eq: nonlinear ode 1 order_intro}. 
Our results provide rigorous formulation and theoretical justification for the intuitive notion that damping increases stability. The results also characterize the hyperbolicity and Hopf bifurcation of an equilibrium point of \\eqref{eq: nonlinear ode 1 order_intro} through the inertia $M$, damping $D$, and Jacobian $\\nabla f$ matrices. These general results are applied to swing equations \\eqref{eq: 2nd order swing-intro} to provide new insights into the damping effects on the stability of power grids.\n\nThe contributions and main results of this paper are summarized below.\n\\begin{enumerate}\n \\item We show that increasing damping has a monotonic effect on the stability of equilibrium points in a large class of ODEs of the form \\eqref{eq: nonlinear ode} and \\eqref{eq: nonlinear ode 1 order_intro}. In particular, we show that, when $M$ is nonsingular symmetric, $D$ is symmetric PSD, and $\\nabla f(x_0)$ is symmetric at an equilibrium point $(x_0,0)$ of the first-order system \\eqref{eq: nonlinear ode 1 order_intro}, if the damping matrix $D$ is perturbed to $D'$ which is more PSD than $D$, i.e. $D'-D\\in\\mathbb{S}^n_+$, then the set of eigenvalues of the Jacobian of \\cref{eq: nonlinear ode 1 order_intro} at $(x_0,0)$ that have a zero real part will not enlarge as a set (\\cref{thrm: monotonic damping behaviour}). We also show that these conditions on $M, D, \\nabla f(x_0)$ cannot be relaxed. To establish this result, we prove that the rank of a complex symmetric matrix with PSD imaginary part does not decrease if its imaginary part is perturbed by a real symmetric PSD matrix (\\cref{thrm: rank}), which may be of independent interest in the matrix perturbation theory.\n \n \n \n \n\n \\item\n We propose a necessary and sufficient condition for an equilibrium point $(x_0,0)$ of the first-order system \\eqref{eq: nonlinear ode 1 order_intro} to be hyperbolic. Specifically, when $M$ and $\\nabla f(x_0)$ are symmetric positive definite and $D$ is symmetric PSD, then $(x_0,0)$ is hyperbolic if and only if the pair ($M^{-1}\\nabla f(x_0), M^{-1}D$) is observable (\\cref{thrm: nec and suf for pure imaginary lossless}). We extend the necessary condition to the general case where $M, D, \\nabla f(x_0)$ are not symmetric (\\cref{thrm: nec and suf for pure imaginary lossy}). Moreover, we characterize a set of sufficient conditions for the occurrence of Hopf bifurcation, when the damping matrix varies as a smooth function of a one dimensional bifurcation parameter (\\cref{coro: Hopf bifurcation} and \\cref{thm: fold and Hopf bifurcation}).\n \n \n \n \\item We show that the theoretical results have key applications in the stability of electric power systems. \n \n We propose a set of necessary and sufficient conditions for breaking the hyperbolicity in lossless power systems (\\cref{prop: nec and suf for pure imaginary lossless power system}). We prove that in a lossy system with two or three generators, as long as only one generator is undamped, any equilibrium point is hyperbolic (\\cref{prop:hyperbolicity n2 n3}), and as soon as there are more than one undamped generator, a lossy system with any $n\\ge 2$ generators may lose hyperbolicity at its equilibrium points (\\cref{prop: non-hyper example}).\n Finally, we perform bifurcation analysis to detect Hopf bifurcation and identify its type based on two interesting case studies.\n \n \\end{enumerate}\n\n\\subsection{Organization}\nThe rest of our paper is organized as follows. 
\Cref{sec: Background} introduces some notation and provides the problem statement. In \Cref{Sec: Monotonic Behavior of Damping}, we rigorously prove that damping has a monotonic effect on the local stability of a large class of ODEs. \Cref{Sec: Impact of Damping in Hopf Bifurcation} further investigates the impact of damping on hyperbolicity and bifurcation and presents a set of necessary and\/or sufficient conditions for breaking the hyperbolicity and occurrence of bifurcations. \Cref{Sec: Power System Model and Its Properties} introduces the power system model (i.e., swing equations), provides a graph-theoretic interpretation of the system, and analyzes the practical applications of our theoretical results in power systems. \Cref{Sec: Computational Experiments} further illustrates the developed theoretical results through numerical examples, and finally, the paper concludes with \Cref{Sec: Conclusions}.\n\section{Background} \label{sec: Background}\n\subsection{Notations}\nWe use $\mathbb{C_{-\/+}}$ to denote the set of complex numbers with negative\/positive real part, and $\mathbb{C}_{0}$ to denote the set of complex numbers with zero real part. $\frak{i}=\sqrt{-1}$ is the imaginary unit. If $A\in\mathbb{C}^{m\times n}$, the transpose of $A$ is denoted by $A^\top$, the real part of $A$ is denoted by $\mathrm{Re}(A)$, and the imaginary part of $A$ is denoted by $\mathrm{Im}(A)$. The conjugate transpose of $A$ is denoted by $A^*$ and defined by $A^* = \Bar{A}^\top$, in which $\Bar{A}$ is the entrywise conjugate.\nThe matrix $A\in\mathbb{C}^{n \times n}$ is said to be symmetric if $A^\top = A$, Hermitian if $A^* = A$, and unitary if $A^*A = I$. The spectrum of a matrix $A\in\mathbb{R}^{n\times n}$ is denoted by $\sigma(A)$. We use $\mathbb{S}^n$ to denote the set of real symmetric $n\times n$ matrices, $\mathbb{S}^n_+$ to denote the set of real symmetric PSD $n\times n$ matrices, and $\mathbb{S}^n_{++}$ to denote the set of real symmetric positive definite $n\times n$ matrices. For matrices $A$ and $B$, the relation $B \succeq A$ means that $A$ and $B$ are real symmetric matrices of the same size such that $B-A$ is PSD; we write $A \succeq 0$ to express the fact that $A$ is a real symmetric PSD matrix. The strict version $B \succ A$ means that $B-A$ is real symmetric positive definite, and $A\succ 0$ means that $A$ is real symmetric positive definite.\n\n\subsection{Problem Statement} \label{subsec: Problem Statement}\nConsider the second-order dynamical system \eqref{eq: nonlinear ode}.\nThe smoothness (continuous differentiability) of $f$ is a sufficient condition for the existence and uniqueness of solutions.\nWe transform \eqref{eq: nonlinear ode} into a system of $2n$ first-order ODEs of the form\n\begin{align} \label{eq: nonlinear ode 1 order}\n\begin{bmatrix} \dot{x} \\ \dot{y}\n\end{bmatrix}\n= \n\begin{bmatrix}\n0 & I \\\n0 & -M^{-1}D\n\end{bmatrix}\n\begin{bmatrix} {x} \\ {y}\n\end{bmatrix}\n- \n\begin{bmatrix} 0 \\ M^{-1}f(x)\n\end{bmatrix}.\n\end{align}\nIf $f(x_0)=0$ for some $x_0\in\mathbb{R}^n$, then $(x_0,0)\in\mathbb{R}^{n+n}$ is called an equilibrium point. The stability of such equilibrium points can be revealed by the spectrum of the Jacobian of the $2n$-dimensional vector field in \eqref{eq: nonlinear ode 1 order} evaluated at the equilibrium point. 
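As an illustrative numerical sketch (not part of the formal development; the system data below are hypothetical), the reformulation \eqref{eq: nonlinear ode 1 order} can be simulated directly, for instance in Python:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical 2-degree-of-freedom example (illustrative values only)
M = np.diag([1.0, 2.0])                   # nonsingular inertia matrix
D = np.diag([0.3, 0.0])                   # PSD damping; the second coordinate is undamped
K = np.array([[2.0, -1.0], [-1.0, 2.0]])  # used to build a nonlinearity f with f(0) = 0

def f(x):
    return K @ x + 0.1 * np.sin(x)        # f(0) = 0, so (x, y) = (0, 0) is an equilibrium point

Minv = np.linalg.inv(M)

def vector_field(t, z):                   # z = (x, y) with y = dx/dt, as in the first-order form
    x, y = z[:2], z[2:]
    return np.concatenate([y, -Minv @ (D @ y + f(x))])

sol = solve_ivp(vector_field, (0.0, 200.0), [0.2, -0.1, 0.0, 0.0], rtol=1e-8)
print(sol.y[:, -1])   # for this data the state has decayed close to the equilibrium (0, 0)
\end{verbatim}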
Note that $f:\mathbb{R}^n\to\mathbb{R}^n$ is a vector-valued function, and its derivative at any point $x\in\mathbb{R}^n$ is referred to as the Jacobian of $f$ and denoted by $\nabla f(x)\in\mathbb{R}^{n\times n}$. This Jacobian of $f$ should not be confused with the Jacobian of the $2n$-dimensional vector field in the right-hand side of \eqref{eq: nonlinear ode 1 order}, which is \n\begin{align}\label{eq: J general case}\nJ(x) := \begin{bmatrix}\n0 & I \\\n-M^{-1} \nabla f(x) & - M^{-1}D \\\n\end{bmatrix} \in\mathbb{R}^{2n\times 2n}.\n\end{align} \nIf the Jacobian $J$ at an equilibrium point $(x_0,0)\in\mathbb{R}^{n+n}$\nhas all its eigenvalues off the imaginary axis, then we say that $(x_0,0)$ is a \emph{hyperbolic} equilibrium point. \nAn interesting feature of hyperbolic equilibrium points is that they are either unstable or asymptotically stable. Breaking the hyperbolicity (say, due to changing a parameter of the system) leads to bifurcation.\nAs mentioned before, we restrict our attention to the case where the inertia matrix $M$ is nonsingular. Instead, we scrutinize the case where the damping matrix $D$ is not full rank, i.e., the system is partially damped. This is a feasible scenario in real-world physical systems \cite{2017-koerts-second-order}, and as will be shown, has important implications, especially in power systems.\nNow, it is natural to ask the following questions:\n\begin{enumerate}[(i)]\n\t\item How does changing the damping matrix $D$ affect the stability and hyperbolicity of equilibrium points of system \cref{eq: nonlinear ode 1 order}? \label{Q1}\n\t\item What are the conditions on $D$ under which an equilibrium point is hyperbolic? \label{Q2}\n\t\item When we lose hyperbolicity due to changing $D$, what kind of bifurcation happens? \label{Q3}\n\end{enumerate}\nNote that in these questions, the inertia matrix $M$ is fixed, and the bifurcation parameter only affects the damping matrix $D$.\nQuestions \eqref{Q1}-\eqref{Q3} will be addressed in the following sections, but before that, we present \cref{lemma: relation between ev J and ev J11} \cite{2020-fast-certificate} which provides some intuition behind the role of different factors in the spectrum of the Jacobian matrix $J$.\nLet us define the concept of matrix pencil \cite{2001-Tisseur-Pencil}. Consider $n\times n$ matrices $Q_0,Q_1,$ and $Q_2$. A quadratic matrix pencil is a matrix-valued function $P:\mathbb{C}\to\mathbb{C}^{n \times n}$ given by $\lambda \mapsto P(\lambda)$ such that $P(\lambda) = \lambda^2Q_2 + \lambda Q_1 + Q_0$.\n\begin{lemma} \label{lemma: relation between ev J and ev J11}\n\tFor any $x\in\mathbb{R}^n$, $\lambda$ is an eigenvalue of $J(x)$ if and only if the quadratic matrix pencil $P(\lambda):= \lambda^2 M + \lambda D + \nabla f(x)$ is singular.\n\end{lemma}\n\begin{proof}\n\tFor any $x\in\mathbb{R}^n$, let $\lambda$ be an eigenvalue of $J(x)$ and $( v , u )$ be the corresponding eigenvector. Then\n\t\begin{align} \label{eq: J cha eq}\n\t\begin{bmatrix}\n\t0 & I \\\n\t-M^{-1} \nabla f(x) & - M^{-1}D \\\n\t\end{bmatrix} \begin{bmatrix} v \\u \end{bmatrix} = \lambda \begin{bmatrix} v \\u \end{bmatrix},\n\t\end{align}\n\twhich implies that $ u = \lambda v$ and $-M^{-1} \nabla f(x) v - M^{-1}D u = \lambda u$. Substituting the first equality into the second one, we get\n\t\begin{align}\n\t\left( \nabla f(x) + \lambda D + \lambda^2 M \right) v = 0. 
\label{eq: quadratic matrix pencil}\n\t\end{align}\n\tSince $v \not = 0$ (otherwise $u = \lambda v = 0$ and the eigenvector $(v,u)$ would be zero, a contradiction), equation \eqref{eq: quadratic matrix pencil} implies that the matrix pencil $P(\lambda)= \lambda^2 M + \lambda D + \nabla f(x)$ is singular.\n\n\tConversely, for any $x\in\mathbb{R}^n$,\n\tsuppose there exists $\lambda \in \mathbb{C}$ such that $P(\lambda)= \lambda^2 M + \lambda D + \nabla f(x)$ is singular. Choose a nonzero $v \in \ker(P(\lambda))$ and let $ u := \lambda v$.\n\tAccordingly, the eigenvalue equation \eqref{eq: J cha eq} holds, and consequently, $\lambda$ is an eigenvalue of $J(x)$.\n\end{proof}\n\nTo give some intuition, let us pre-multiply \eqref{eq: quadratic matrix pencil} by $v^*$ to get the quadratic equation\n\begin{align} \nv^* \nabla f(x) v + \lambda v^*Dv + \lambda^2 v^*Mv = 0, \label{eq: quadratic matrix pencil equation}\n\end{align}\nwhich has roots\n\begin{align} \n\lambda_{\pm} = \frac{-v^*Dv \pm \sqrt{(v^*Dv)^2-4(v^*Mv)(v^* \nabla f(x) v)}}{2v^*Mv}.\n\label{eq: quadratic matrix pencil equation roots}\n\end{align}\nEquation \eqref{eq: quadratic matrix pencil equation roots} provides some insights into the impact of matrices $D$, $M$, and $\nabla f(x)$ on the eigenvalues of $J$. For instance, when $D\succeq0$, it seems that increasing the damping matrix $D$ (i.e., replacing $D$ with $\hat{D}$, where $\hat{D}\succeq D$) will lead to more over-damped eigenvalues. However, this argument is not quite compelling because by changing $D$, the eigenvector $v$ would also change. \nAlthough several researchers have mentioned such arguments about the impact of damping \cite{2010-Ma-decoupling}, to the best of our knowledge, this impact has not been studied in the literature in a rigorous fashion. We will discuss this impact in the next section.\n\section{Monotonic Effect of Damping}\n\label{Sec: Monotonic Behavior of Damping}\nIn this section, we analytically examine the role of the damping matrix $D$ in the stability of system \eqref{eq: nonlinear ode}. Specifically, we answer the following question:\nlet System-I and System-II be two second-order dynamical systems \eqref{eq: nonlinear ode} with partial damping matrices $D_I\succeq 0$ and $D_{II}\succeq 0$, respectively. Suppose the two systems are identical in all other parameters (i.e., everything except their damping matrices) and $(x_0,0)\in\mathbb{R}^{2n}$ is an equilibrium point for both systems. Observe that changing the damping of system \eqref{eq: nonlinear ode} does not change the equilibrium points. Here, we focus on the case where $M$ and $L:=\nabla f(x_0)$ are symmetric (these are reasonable assumptions in many dynamical systems such as power systems). Now, if System-I is asymptotically stable, what kind of relationship between $D_I$ and $D_{II}$ will ensure that System-II is also asymptotically stable? \nThis question has important practical consequences. \nFor instance, the answer to this question will illustrate how changing the damping coefficients of generators (or equivalently, the corresponding controller parameters of inverter-based resources) in power systems will affect the stability of equilibrium points. Moreover, this question is closely intertwined with a problem in matrix perturbation theory, namely:\ngiven a complex symmetric matrix with PSD imaginary part, how does a real PSD perturbation of its imaginary part affect the rank of the matrix? 
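Before developing the formal machinery, the following short numerical sketch (illustrative only; the matrices are randomly generated and are not tied to any particular system) probes this rank question: it builds a real symmetric $A$ and a real PSD $D$ sharing a common nullspace, so that $A+\frak{i} D$ is rank deficient, and checks that a further real PSD perturbation $\frak{i} E$ of the imaginary part never lowers the rank.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 6

def rnk(X, tol=1e-8):
    return np.linalg.matrix_rank(X, tol=tol)

for _ in range(200):
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random real orthogonal basis
    r = rng.integers(1, n)                             # intended rank of A + iD
    a = np.concatenate([rng.uniform(1.0, 2.0, r) * rng.choice([-1.0, 1.0], r), np.zeros(n - r)])
    d = np.concatenate([rng.uniform(0.0, 1.0, r), np.zeros(n - r)])
    A = Q @ np.diag(a) @ Q.T                           # real symmetric, rank r
    D = Q @ np.diag(d) @ Q.T                           # real symmetric PSD, same nullspace
    B = rng.standard_normal((n, rng.integers(0, n + 1)))
    E = B @ B.T                                        # arbitrary real PSD perturbation
    assert rnk(A + 1j * D) <= rnk(A + 1j * D + 1j * E)

print("rank(A + iD) <= rank(A + iD + iE) held in all trials")
\end{verbatim}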
We answer the matrix perturbation question in \\Cref{thrm: rank}, which requires \\Cref{lemma: Autonne} to \\Cref{lemma: rank principal} and \\Cref{Prop: diagonal complex update nonsingularity}. Finally, the main result about the monotonic effect of damping is proved in \\Cref{thrm: monotonic damping behaviour}. \n\n\nThe following lemma on Autonne-Takagi factorization is useful.\n\n\\begin{lemma}[Autonne-Takagi factorization] \\label{lemma: Autonne}\nLet $S\\in \\mathbb{C}^{n\\times n}$ be a complex matrix. Then $S^\\top=S$ if and only if there is a unitary $U\\in \\mathbb{C}^{n\\times n}$ and a nonnegative diagonal matrix $\\Sigma\\in \\mathbb{R}^{n\\times n}$ such that $S=U\\Sigma U^\\top$. The diagonal entries of $\\Sigma$ are the singular values of $S$.\n\\end{lemma}\t\n\\begin{proof}\nSee e.g. \\cite[Chapter 4]{2013-Horn-matrix-analysis}.\n\\end{proof}\n\nWe also need the following lemmas to derive our main results. \\Cref{lemma: definiteness inverse imaginary part} generalizes a simple fact about complex numbers to complex symmetric matrices: a complex scalar $z\\in\\mathbb{C}, z\\ne0$ has a nonnegative imaginary part if and only if $z^{-1}$ has a nonpositive imaginary part.\n\\begin{lemma} \\label{lemma: definiteness inverse imaginary part}\n\tLet $S\\in \\mathbb{C}^{n\\times n}$ be a nonsingular complex symmetric matrix. Then $\\mathrm{Im}(S) \\succeq 0$ if and only if $\\mathrm{Im}(S^{-1})\\preceq 0$. \n\\end{lemma}\t\n\\begin{proof}\nSince $S$ is nonsingular complex symmetric, by Autonne-Takagi factorization, there exists a unitary matrix $U$ and a diagonal positive definite matrix $\\Sigma$ such that $S = U\\Sigma U^\\top$. The inverse $S^{-1}$ is given by $S^{-1} = \\bar{U}\\Sigma^{-1}U^*$. The imaginary parts of $S$ and $S^{-1}$ are \n\t\\begin{align*}\n\t& \\mathrm{Im}(S) = -\\frac{1}{2} \\frak{i} (U\\Sigma U^\\top - \\bar{U}\\Sigma U^*), \\\\\n\t& \\mathrm{Im}(S^{-1}) = -\\frac{1}{2} \\frak{i} (\\bar{U}\\Sigma^{-1} U^* - U\\Sigma^{-1} U^\\top).\n\t\\end{align*}\n\tThe real symmetric matrix $2\\mathrm{Im}(S^{-1})=\\frak{i} (U\\Sigma^{-1} U^\\top-\\bar{U}\\Sigma^{-1} U^*)$ is unitarily similar to the Hermitian matrix $\\frak{i}(\\Sigma^{-1}U^\\top U - U^*\\bar{U}\\Sigma^{-1})$ as\n\t\\begin{align*}\n\t U^*(2\\mathrm{Im}(S^{-1}))U &= U^*(\\frak{i}(U\\Sigma^{-1} U^\\top - \\bar{U}\\Sigma^{-1} U^*))U \\\\\n\t & = \\frak{i} (\\Sigma^{-1}U^\\top U - U^*\\bar{U}\\Sigma^{-1}),\n\t\\end{align*} \n\tand is *-congruent to $\\frak{i} (U^\\top U \\Sigma - \\Sigma U^*\\bar{U})$ as\n\t\\begin{align*}\n \\Sigma U^*(2\\mathrm{Im}(S^{-1}))U\\Sigma &= \\frak{i}\\Sigma(\\Sigma^{-1}U^\\top U - U^*\\bar{U}\\Sigma^{-1})\\Sigma \\\\\n\t& = \\frak{i} (U^\\top U \\Sigma - \\Sigma U^*\\bar{U}).\n\t\\end{align*}\n\tNote that the latter transformation is a *-congruence because $U\\Sigma$ is nonsingular but not necessarily unitary. Hence, $2\\mathrm{Im}(S^{-1})$ has the same eigenvalues as $\\frak{i}(\\Sigma^{-1}U^\\top U - U^*\\bar{U}\\Sigma^{-1})$ and has the same inertia as $\\frak{i}(U^\\top U \\Sigma - \\Sigma U^*\\bar{U})$ by Sylvester's law of inertia. \n\tFurthermore, since $U^\\top U$ is unitary and $\\overline{U^\\top U}=U^*\\bar{U}$, then\n\t\\begin{align*}\n\t\\frak{i}(U^\\top U \\Sigma - \\Sigma U^*\\bar{U}) = (U^\\top U)(\\frak{i}(\\Sigma U^\\top U - U^*\\bar{U}\\Sigma))(U^*\\bar{U}),\n\t\\end{align*}\n\twhich implies that $\\frak{i}(U^\\top U \\Sigma - \\Sigma U^*\\bar{U})$ has the same eigenvalues as $\\frak{i}(\\Sigma U^\\top U - U^*\\bar{U}\\Sigma)$. 
Furthermore, since\n\t\begin{align*}\n\tU(\frak{i}(\Sigma U^\top U - U^*\bar{U}\Sigma))U^* = \frak{i}(U\Sigma U^\top - \bar{U}\Sigma U^*) = -2\mathrm{Im}(S),\n\t\end{align*}\n\t$\mathrm{Im}(S^{-1})$ and $-\mathrm{Im}(S)$ have the same inertia, i.e., they have the same number of positive eigenvalues and the same number of negative eigenvalues. Therefore, $\mathrm{Im}(S)\succeq 0$ if and only if all eigenvalues of $\mathrm{Im}(S^{-1})$ are nonpositive, that is, if and only if $\mathrm{Im}(S^{-1})\preceq0$.\n\end{proof}\n\n\n\Cref{lemma: rank-one imaginary PSD update} shows that a rank-one perturbation of the imaginary part of a nonsingular complex symmetric matrix preserves its nonsingularity.\n\begin{lemma} \label{lemma: rank-one imaginary PSD update}\n\tLet $S\in\mathbb{C}^{n\times n}$ be a nonsingular complex symmetric matrix. If $\mathrm{Im}(S)\succeq 0 $, then $S+\frak{i} vv^\top$ is nonsingular for any real vector $v\in\mathbb{R}^n$.\n\end{lemma}\n\begin{proof}\n\tWe use Cauchy's formula for the determinant of a rank-one perturbation \cite{2013-Horn-matrix-analysis}:\n\t\begin{align*}\n\t\det(S + \frak{i} vv^\top) &= \det(S) + \frak{i} v^\top\mathrm{adj}(S)v\\\n\t&= \det(S) + \frak{i} v^\top S^{-1}v \det(S) \\\n\t&= \det(S)(1 + \frak{i} v^\top S^{-1}v) \\\n\t&= \det(S)(1 - v^\top\mathrm{Im}(S^{-1})v + \frak{i} v^\top\mathrm{Re}(S^{-1})v),\n\t\end{align*}\n\twhere $\mathrm{adj}(S)$ is the adjugate of $S$, which satisfies $\mathrm{adj}(S)=(\det(S))S^{-1}$. \n\n\tSince $\det(S)\ne0$, we only need to prove that the complex scalar $z:=(1 - v^\top\mathrm{Im}(S^{-1})v + \frak{i} v^\top\mathrm{Re}(S^{-1})v)$ is nonzero for any $v\in\mathbb{R}^n$.\n\tBy \cref{lemma: definiteness inverse imaginary part}, $\mathrm{Im}(S^{-1}) \preceq 0$, thus $\mathrm{Re}(z)=1-v^\top\mathrm{Im}(S^{-1})v \ge 1$. This proves that the matrix resulting from any such rank-one update of the imaginary part of $S$ is nonsingular.\n\end{proof}\n\nNow, we extend \cref{lemma: rank-one imaginary PSD update} to the case where the perturbation is a general real PSD matrix.\n\begin{proposition} \label{Prop: diagonal complex update nonsingularity}\n\tLet $S\in\mathbb{C}^{n\times n}$ be a nonsingular complex symmetric matrix with $\mathrm{Im}(S)\succeq 0 $. Then, for any real PSD matrix $E\in\mathbb{S}^{n}_+$, $S+\frak{i} E$ is nonsingular.\n\end{proposition}\n\begin{proof}\n\tSince $E$ is a real PSD matrix, its eigendecomposition gives $E=\sum_{\ell=1}^n v_\ell v_\ell^\top$, where $v_\ell$ is an eigenvector of $E$ scaled by the square root of the $\ell$-th eigenvalue. Now, we need to show that $S+\sum_{\ell=1}^n \frak{i} v_\ell v_\ell^\top$ is nonsingular. According to \cref{lemma: rank-one imaginary PSD update}, $\tilde{S}_1:=S + \frak{i} v_1 v_1^\top$ is nonsingular; moreover, $\tilde{S}_1$ is a complex symmetric matrix with $\textrm{Im}(\tilde{S}_1)\succeq 0$, so it satisfies the hypotheses of \cref{lemma: rank-one imaginary PSD update} again. Therefore, the lemma can be applied successively to $\tilde{S}_\ell:=\tilde{S}_{\ell-1} + \frak{i} v_\ell v_\ell^\top$, $\ell = 2,\cdots,n$, to conclude that $S+\frak{i} E$ is nonsingular.\n\end{proof}\n\begin{remark}\n\tThe assumption of $S$ being complex symmetric cannot be relaxed. 
For example, consider the unsymmetric matrix\n\t\begin{align*}\n\tS = \begin{bmatrix}\n\t1+\frak{i} & \sqrt{2} \\ -\sqrt{2} & -1\n\t\end{bmatrix}, E = \begin{bmatrix} 0 & 0\\0 & 1\end{bmatrix}, S+\frak{i} E = \begin{bmatrix}\n\t1+\frak{i} & \sqrt{2} \\ -\sqrt{2} & -1+\frak{i} \n\t\end{bmatrix}.\n\t\end{align*}\n\tThen, $\textrm{Im}(S)\succeq 0$, $\det(S)=1-\frak{i} $, but $\det(S+\frak{i} E)=0$. Likewise, the assumption of $E$ being real PSD cannot be relaxed.\n\end{remark}\nBefore proceeding further with the analysis, let us recall the concept of principal submatrix. For $A\in\mathbb{C}^{n\times n}$ and $\alpha \subseteq \{1,\cdots,n\}$, the (sub)matrix of entries that lie in the rows and columns of $A$ indexed by $\alpha$ is called a principal submatrix of $A$ and is denoted by $A[\alpha]$. We also need \cref{lemma: rank principal} about rank principal matrices. In what follows, the direct sum of two matrices $A$ and $B$ is denoted by $A\oplus B$.\n\begin{lemma} [rank principal matrices] \label{lemma: rank principal}\n\tLet $S\in \mathbb{C}^{n\times n}$ and suppose that $n>\mathop{\bf rank}(S)=r \ge 1$. If $S$ is similar to $B\oplus0_{n-r}$ (so $B \in \mathbb{C}^{r\times r}$ is nonsingular), then $S$ has a nonsingular $r$-by-$r$ principal submatrix, that is,\n\t$S$ is rank principal.\n\end{lemma}\n\begin{proof}\n See \cref{proof of lemma: rank principal}.\n\end{proof}\nNow we are ready to state our main matrix perturbation result.\n\begin{theorem} \label{thrm: rank}\nSuppose $A\in\mathbb{S}^{n}$ is a real symmetric matrix and $D\in\mathbb{S}^{n}_+$ and $E\in\mathbb{S}^{n}_+$ are real symmetric PSD matrices. \nThen $\mathop{\bf rank}(A+\frak{i} D)\le \mathop{\bf rank}(A+\frak{i} D+\frak{i} E)$. \n\end{theorem}\n\begin{proof}\nDefine $r:=\mathop{\bf rank}(A+\frak{i} D)$ and note that if $r=0$, i.e., $A+\frak{i} D$ is the zero matrix, then the rank inequality holds trivially. If $r\ge1$, the following two cases are possible.\n\n\tFor $r=n$: in this case $S:=A+\frak{i} D$ is a nonsingular complex symmetric matrix with $\mathrm{Im}(S)\succeq 0 $, and according to \cref{Prop: diagonal complex update nonsingularity}, $A+\frak{i} D+\frak{i} E$ is also nonsingular, i.e., $\mathop{\bf rank}(A+\frak{i} D+\frak{i} E)=n$. Thus, in this case, the rank inequality $\mathop{\bf rank}(A+\frak{i} D)\le \mathop{\bf rank}(A+\frak{i} D+\frak{i} E)$ holds with equality.\n\n\tFor $1\le r < n$: since $A+\frak{i} D$ is complex symmetric, using Autonne-Takagi factorization in \cref{lemma: Autonne}, $A+\frak{i} D = U\Sigma U^\top$ for some unitary matrix $U$ and a diagonal real PSD matrix $\Sigma$. Moreover, $r=\mathop{\bf rank}(A+\frak{i} D)$ will be equal to the number of positive diagonal entries of $\Sigma$. In this case, $A+\frak{i} D$ is unitarily similar to $\Sigma=B\oplus0_{n-r}$, for some nonsingular diagonal $B\in\mathbb{R}^{r\times r}$. According to \cref{lemma: rank principal}, there exists a principal submatrix of $A+\frak{i} D$ with size $r$ that is nonsingular, that is, there exists an index set $\alpha \subseteq \{1,\cdots,n\}$ with $\textrm{card}(\alpha)=r$ such that $A[\alpha]+\frak{i} D[\alpha]$ is nonsingular. Note that $A[\alpha]+\frak{i} D[\alpha]$ is also complex symmetric. Now, using the same index set $\alpha$ of rows and columns, we select the principal submatrix $E[\alpha]$ of $E$. 
Recall that if a matrix is PSD then all its principal submatrices are also PSD. Therefore, $D[\\alpha] \\succeq 0$ and $E[\\alpha] \\succeq 0$.\n\t %\n\t\n \n %\n Using the same argument as in the previous case of this proof, we have $\\mathop{\\bf rank}(A[\\alpha]+\\frak{i} D[\\alpha]) = \\mathop{\\bf rank}(A[\\alpha]+\\frak{i} D[\\alpha]+\\frak{i} E[\\alpha])=r$. \n On the one hand, according to our assumption, we have $\\mathop{\\bf rank}(A+\\frak{i} D)=\\mathop{\\bf rank}(A[\\alpha]+\\frak{i} D[\\alpha])=r$. On the other hand, we have\n\t\\begin{align} \\label{eq: rank ineq of rank thrm}\n\t\t \\mathop{\\bf rank}(A+\\frak{i} D+\\frak{i} E) \\ge \\mathop{\\bf rank}(A[\\alpha]+\\frak{i} D[\\alpha]+\\frak{i} E[\\alpha]) = r = \\mathop{\\bf rank}(A+\\frak{i} D).\n\t\\end{align}\n\tNote that the inequality in \\eqref{eq: rank ineq of rank thrm} holds because the rank of a principal submatrix is always less than or equal to the rank of the matrix itself. In other words, by adding more columns and rows to a (sub)matrix, the existing linearly independent rows and columns will remain linearly independent. Therefore, the rank inequality $\\mathop{\\bf rank}(A+\\frak{i} D)\\le \\mathop{\\bf rank}(A+\\frak{i} D+\\frak{i} E)$ also holds in this case.\n\\end{proof}\n\nWe now use \\cref{thrm: rank} to answer the question of how damping affects stability. In particular, \\cref{thrm: monotonic damping behaviour} shows a monotonic effect of damping on system stability. Namely, when $\\nabla f(x_0)$ is symmetric at an equilibrium point $(x_0,0)$, the set of eigenvalues of the Jacobian $J(x_0)$ that lie on the imaginary axis does not enlarge, as the damping matrix $D$ becomes more positive semidefinite. \n\\begin{theorem}[Monotonicity of imaginary eigenvalues in response to damping] \\label{thrm: monotonic damping behaviour}\n \n Consider the following two systems,\n \\begin{align} \n \\quad & M \\ddot{x} + D_{I} \\dot{x} + f(x) = 0, \\tag{System-I} \\label{System-I} \\\\\n \\quad & M \\ddot{x} + D_{II} \\dot{x} + f(x) = 0, \\tag{System-II} \\label{System-II}\n \\end{align}\n where $M\\in\\mathbb{S}^n$ is nonsingular and $D_{I},D_{II} \\in \\mathbb{S}^n_+$. Suppose $(x_0,0)$ is an equilibrium point of the corresponding first-order systems defined in \\eqref{eq: nonlinear ode 1 order}. Assume $L:=\\nabla f(x_0)\\in\\mathbb{S}^n$. Denote $J_I, J_{II}$ as the associated Jacobian matrices at $x_0$ as defined in \\eqref{eq: J general case}. Furthermore,\n %\n \n \n let $\\mathcal{C}_{I} \\subseteq \\mathbb{C}_{0} $ (resp. $\\mathcal{C}_{II} \\subseteq \\mathbb{C}_{0}$) be the set of eigenvalues of $J_{I}$ (resp. $J_{II}$) with a zero real part, which may be an empty set. Then the sets $C_I, C_{II}$ of eigenvalues on the imaginary axis satisfy the following monotonicity property, \n \\begin{align}\n D_{II} \\succeq D_{I} \\implies \\mathcal{C}_{II} \\subseteq \\mathcal{C}_{I}. \n \\end{align}\n \n\\end{theorem}\n\\begin{proof}\nRecall the Jacobian matrices are defined as\n\\begin{align*}\n J_I = \\begin{bmatrix}\n0 & I \\\\\n- M^{-1} L & -M^{-1} D_{I} \\\\\n\\end{bmatrix},\nJ_{II} = \\begin{bmatrix}\n0 & I \\\\\n- M^{-1} L & - M^{-1} D_{II} \\\\\n\\end{bmatrix},\n\\end{align*}\nat an equilibrium point $(x_0,0)$.\nAccording to \\cref{lemma: relation between ev J and ev J11}, $\\frak{i} \\beta\\in\\mathcal{C}_{I}$ if and only if the quadratic matrix pencil $P_I(\\frak{i} \\beta):= (L - \\beta^2 M) + \\frak{i} \\beta D_I$ is singular. The same argument holds for $\\mathcal{C}_{II}$. 
Since $J_I$ and $J_{II}$ are real matrices, their complex eigenvalues will always occur in complex conjugate pairs. Therefore, without loss of generality, we assume $\\beta\\ge0$. Note that for any $\\beta\\ge0$ such that $\\frak{i} \\beta\\notin\\mathcal{C}_{I}$ the pencil $P_I(\\frak{i} \\beta)= (L - \\beta^2 M) + \\frak{i} \\beta D_I$ is nonsingular. \nMoreover, $(L-\\beta^2M)\\in\\mathbb{S}^{n}$ is a real symmetric matrix and $\\beta D_I\\in\\mathbb{S}^{n}_+$ is a real PSD matrix. According to \\cref{thrm: rank}, \n\\begin{align*}\nr & = \\mathop{\\bf rank}(L-\\beta^2 M +\\frak{i} \\beta D_I) \\\\\n &\\le \\mathop{\\bf rank}(L-\\beta^2 M +\\frak{i} \\beta D_I + \\frak{i} \\beta(D_{II}-D_{I})) \\\\\n & = \\mathop{\\bf rank}(L-\\beta^2 M +\\frak{i} \\beta D_{II}),\n\\end{align*}\nconsequently, $P_{II}(\\frak{i} \\beta)=L-\\beta^2M+\\frak{i} \\beta D_{II}$ is also nonsingular and $\\frak{i} \\beta\\notin\\mathcal{C}_{II}$. This implies that $\\mathcal{C}_{II} \\subseteq \\mathcal{C}_{I}$ and completes the proof.\n\\end{proof}\n\\begin{remark}\n In the above theorem, the assumption of $L=\\nabla f(x_0)$ being symmetric cannot be relaxed. For example, consider \n\t\\begin{align*}\n\tf(x_1,x_2) = \\begin{bmatrix}\n\t2x_1 + \\sqrt{2} x_2 \\\\ -\\sqrt{2} x_1\n\t\\end{bmatrix},\n\tD_{I} = \\begin{bmatrix} 1 & 0\\\\0 & 0\\end{bmatrix}, D_{II} = \\begin{bmatrix} 1 & 0\\\\0 & 1\\end{bmatrix}, M = \\begin{bmatrix} 1 & 0\\\\0 & 1\\end{bmatrix}.\n\t\\end{align*}\n\tHere, the origin is the equilibrium point of the corresponding first-order systems, and $L=\\nabla f(0,0)$ is not symmetric. The set of eigenvalues with zero real part in \\eqref{System-I} and \\eqref{System-II} are $\\mathcal{C}_I=\\emptyset$ and $\\mathcal{C}_{II}=\\{ \\pm \\frak{i} \\}$. Accordingly, we have $D_{II}\\succeq D_{I}$, but $\\mathcal{C}_{II} \\not \\subseteq \\mathcal{C}_{I}$.\n\\end{remark}\n\\section{Impact of Damping on Hyperbolicity and Bifurcation}\n\\label{Sec: Impact of Damping in Hopf Bifurcation}\n\\subsection{Necessary and Sufficient Conditions for Breaking Hyperbolicity}\n\\label{subsec: Sufficient and Necessary Conditions}\nWe use the notion of observability from control theory to provide a necessary and sufficient condition for breaking the hyperbolicity of equilibrium points in system \\eqref{eq: nonlinear ode 1 order} when the inertia, damping, and Jacobian of $f$ satisfy $M\\in\\mathbb{S}^{n}_{++}, D\\in\\mathbb{S}^n_+, \\nabla f(x_0)\\in\\mathbb{S}^n_{++}$ at an equilibrium point $(x_0,0)$ (\\cref{thrm: nec and suf for pure imaginary lossless}). We further provide a sufficient condition for the existence of purely imaginary eigenvalues in system \\eqref{eq: nonlinear ode 1 order} when $M, D, \\nabla f(x_0)$ are not symmetric (\\cref{thrm: nec and suf for pure imaginary lossy}). Such conditions will pave the way for understanding Hopf bifurcations in these systems. Observability was first related to stability of second-order system \\eqref{eq: nonlinear ode} by Skar \\cite{1980-Skar-stability-thesis}.\n\\begin{definition}[observability] \\label{def: Observability}\n\tConsider the matrices $A\\in\\mathbb{R}^{m\\times m}$ and $B\\in\\mathbb{R}^{n\\times m}$. The pair $(A,B)$ is observable if $Bx\\ne 0$ for every right eigenvector $x$ of $A$, i.e.,\n\t\\begin{align*}\n\t\\forall \\lambda \\in \\mathbb{C}, x \\in \\mathbb{C}^m, x\\ne 0 \\: \\text{ s.t. 
} Ax = \\lambda x \\implies Bx \\ne 0.\n\t\\end{align*}\n\\end{definition}\n\nWe will show that the hyperbolicity of an equilibrium point $(x_0,0)$ of system \\eqref{eq: nonlinear ode 1 order} is intertwined with the observability of the pair $(M^{-1}\\nabla f(x_0),M^{-1}D)$. Our focus will remain on the role of the damping matrix $D\\in\\mathbb{S}^n_+$ in this matter. Note that if the damping matrix $D$ is nonsingular, the pair $(M^{-1}\\nabla f(x_0),M^{-1}D)$ is always observable because the nullspace of $M^{-1}D$ is trivial. Furthermore, if the damping matrix $D$ is zero, the following lemma holds.\n\\begin{lemma} \\label{lemma: undamped systems}\n In an undamped system (i.e., when $D=0$), for any $x\\in\\mathbb{R}^n$ the pair $(M^{-1}\\nabla f(x),M^{-1}D)$ can never be observable. Moreover, for any $x\\in\\mathbb{R}^n$\n \\begin{align*}\n \\lambda \\in\\sigma(J(x))\\iff \\lambda^2\\in\\sigma(-M^{-1}\\nabla f(x)).\n \\end{align*}\n\\end{lemma}\n\\begin{proof}\nThe first statement is an immediate consequence of \\cref{def: Observability} and the second one follows from \\cref{lemma: relation between ev J and ev J11}.\n\\end{proof}\n\nThe next theorem yields a necessary and sufficient condition on the damping matrix $D$ for breaking the hyperbolicity of an equilibrium point.\n\\begin{theorem}[hyperbolicity in second-order systems: symmetric case] \\label{thrm: nec and suf for pure imaginary lossless}\n Consider the second-order ODE system \\eqref{eq: nonlinear ode} with inertia matrix $M\\in\\mathbb{S}^n_{++}$ and damping matrix $D \\in\\mathbb{S}^n_+$. Suppose $(x_0,0)\\in \\mathbb{R}^{n+n}$ is an equilibrium point of the corresponding first-order system \\eqref{eq: nonlinear ode 1 order} with the Jacobian matrix $J\\in\\mathbb{R}^{2n\\times 2n}$ defined in \\eqref{eq: J general case} such that $L=\\nabla f(x_0)\\in \\mathbb{S}^n_{++}$. Then, the equilibrium point $(x_0,0)$ is hyperbolic if and only if the pair $(M^{-1}L,M^{-1}D)$ is observable.\n %\n %\n %\n \n \n \n \n \n \n \n \n \n \n \n \n\\end{theorem}\n\\begin{proof}\n According to \\cref{lemma: undamped systems}, if $D=0$, the pair $(M^{-1}L,M^{-1}D)$ can never be observable. \n\t%\n\tMoreover, $M^{-1}L=M^{-\\frac{1}{2}} \\hat{L} M^{\\frac{1}{2}}$, where $\\hat{L}:=M^{-\\frac{1}{2}} L M^{-\\frac{1}{2}}$. This implies that $M^{-1}L$ is similar to (and consequently has the same eigenvalues as) $\\hat{L}$. \n \tNote that $\\hat{L}$ is *-congruent to $L$. According to Sylvester's law of inertia, $\\hat{L}$ and $L$ have the same inertia. Since $L\\in \\mathbb{S}^n_{++}$, we conclude that $\\hat{L}\\in \\mathbb{S}^n_{++}$. Therefore, the eigenvalues of $M^{-1}L$ are real and positive, i.e., $\\sigma(M^{-1}L)\\subseteq \\mathbb{R_{++}}=\\{\\lambda\\in\\mathbb{R}:\\lambda>0\\}$.\n %\n\n\tMeanwhile, when $D=0$, we have $\\mu\\in\\sigma(J)\\iff\\mu^2\\in\\sigma(-M^{-1}L)$, hence all eigenvalues of $J$ would have zero real parts, i.e., $\\sigma(J)\\subseteq\\mathbb{C}_0$, and consequently, the theorem holds trivially. In the sequel, we assume that $D\\not=0$.\n\n \n\t\\emph{Necessity:}\n\tSuppose the pair $(M^{-1}L,M^{-1}D)$ is observable, but assume the equilibrium point is not hyperbolic, and let us lead this assumption to a contradiction. Since $L=\\nabla f(x_0)$ is nonsingular, \\cref{lemma: relation between ev J and ev J11} asserts that $0\\not\\in\\sigma(J)$. Therefore, there must exist $\\beta>0$ such that $\\frak{i} \\beta \\in \\sigma(J)$. 
\n\n\tBy \\cref{lemma: relation between ev J and ev J11}, $\\frak{i}\\beta \\in \\sigma(J)$ if and only if the matrix pencil $(M^{-1}L + \\frak{i}\\beta M^{-1}D - \\beta^2 I)$ is singular: \n\n\t\\begin{align*}\n\t\n\t& \\det \\left( M^{-\\frac{1}{2}}(M^{-\\frac{1}{2}}LM^{-\\frac{1}{2}} + \\frak{i} \\beta M^{-\\frac{1}{2}}DM^{-\\frac{1}{2}} - \\beta^2 I) M^{\\frac{1}{2}} \\right) = 0,\n\t\\end{align*}\n\tor equivalently, $\\exists (x + \\frak{i} y) \\ne 0$ such that $x,y\\in\\mathbb{R}^n$ and\n\t\\begin{align} \\label{eq: proof of parially damped hopf bifurcation}\n\t\\notag & (M^{-\\frac{1}{2}}LM^{-\\frac{1}{2}} + \\frak{i} \\beta M^{-\\frac{1}{2}} D M^{-\\frac{1}{2}} - \\beta^2 I)(x+\\frak{i} y) = 0 \\\\\n\t& \\iff \\begin{cases}\n\t(M^{-\\frac{1}{2}}LM^{-\\frac{1}{2}}-\\beta^2 I)x - \\beta M^{-\\frac{1}{2}}DM^{-\\frac{1}{2}} y = 0, \\\\ \n\t(M^{-\\frac{1}{2}}LM^{-\\frac{1}{2}}-\\beta^2 I)y + \\beta M^{-\\frac{1}{2}}DM^{-\\frac{1}{2}} x = 0.\n\t\\end{cases}\n\t\\end{align}\n\tLet $\\hat{L}:=M^{-\\frac{1}{2}}LM^{-\\frac{1}{2}}$, $\\hat{D}:=M^{-\\frac{1}{2}}DM^{-\\frac{1}{2}}$, and observe that\n\t\\begin{align*}\n\t\\begin{cases}\n\ty^\\top (\\hat{L}-\\beta^2 I)x = y^\\top (\\beta \\hat{D} y) = \\beta y^\\top \\hat{D} y \\ge 0,\n\t\\\\\n\tx^\\top (\\hat{L}-\\beta^2 I)y = x^\\top (-\\beta \\hat{D} x) = -\\beta x^\\top \\hat{D} x \\le 0,\n\t\\end{cases}\n\t\\end{align*}\n %\n\twhere we have used the fact that $\\hat{D}$ is *-congruent to $D$. According to Sylvester's law of inertia, $\\hat{D}$ and $D$ have the same inertia. Since $D\\succeq 0$, we conclude that $\\hat{D}\\succeq0$.\n\n\tAs $(\\hat{L}-\\beta^2 I)$ is symmetric, we have $x^\\top (\\hat{L}-\\beta^2 I)y = y^\\top (\\hat{L}-\\beta^2 I)x$. Therefore, we must have $x^\\top \\hat{D} x = y^\\top \\hat{D} y =0$. Since $\\hat{D}\\succeq0$, we can infer that $x\\in\\ker(\\hat{D})$ and $y\\in\\ker(\\hat{D})$.\n\t%\n\n \n \n \n \n\tNow considering $\\hat{D} y = 0$ and using the first equation in \\eqref{eq: proof of parially damped hopf bifurcation} we get \n\t\\begin{align}\n\t( \\hat{L}-\\beta^2 I)x = 0 \\iff M^{-\\frac{1}{2}}LM^{-\\frac{1}{2}} x=\\beta^2 x,\n\t\\end{align}\n\tmultiplying both sides from left by $M^{-\\frac{1}{2}}$ we get $M^{-1}L (M^{-\\frac{1}{2}} x) = \\beta^2 (M^{-\\frac{1}{2}} x)$. Thus, $ \\hat{x}:= M^{-\\frac{1}{2}} x$ is an eigenvector of $M^{-1}L$. Moreover, we have \n\t\\begin{align*}\n\t M^{-1}D \\hat{x} = M^{-1}DM^{-\\frac{1}{2}} x = M^{-\\frac{1}{2}} ( \\hat{D} x) = 0, \n\t\\end{align*}\n\twhich means that the pair $(M^{-1}L,M^{-1}D)$ is not observable; we have arrived at the desired contradiction.\n\n\n\n\n\n\t\\emph{Sufficiency:}\n\n\t%\n \n \n \n \n \n \n \n\n\tSuppose the equilibrium point is hyperbolic, but assume that the pair~ $\\allowbreak (M^{-1}L, M^{-1}D)$ is not observable; we will show that this assumption leads to a contradiction.\tAccording to \\cref{def: Observability}, $\\exists \\lambda \\in \\mathbb{C}, x \\in \\mathbb{C}^n , x\\ne 0$ such that\n\t\\begin{align} \\label{eq: observability in lossless general}\n\t M^{-1}Lx = \\lambda x \\text{ and } M^{-1}Dx = 0.\n\t\\end{align}\n \n\tWe make the following two observations.\n\tFirstly, as it is shown above, we have $\\sigma(M^{-1}L)\\subseteq \\mathbb{R_{++}}$. 
\n\t%\n\tSecondly, since $L$ is nonsingular, the eigenvalue $\\lambda$ in \\eqref{eq: observability in lossless general} cannot be zero.\n\t%\n\n \n \n \n \n \n %\n\tBased on the foregoing two observations, when the pair $(M^{-1}L,M^{-1}D)$ is not observable, there must exist $\\lambda \\in \\mathbb{R_{+}}, \\lambda\\ne0$ and $x \\in \\mathbb{C}^n , x\\ne 0$ such that \\eqref{eq: observability in lossless general} holds.\n\n \n \n \n\n\tDefine $\\xi=\\sqrt{-\\lambda}$, which is a purely imaginary number. The quadratic pencil $M^{-1}P(\\xi)=\\xi^2 I + \\xi M^{-1}D + M^{-1}L$ is singular because $M^{-1}P(\\xi)x = \\xi^2 x + \\xi M^{-1}Dx + M^{-1}Lx = -\\lambda x + 0 + \\lambda x = 0$. By \\cref{lemma: relation between ev J and ev J11}, $\\xi$ is an eigenvalue of $J$. Similarly, we can show $-\\xi$ is an eigenvalue of $J$. Therefore, the equilibrium point is not hyperbolic, which is a desired contradiction.\n\\end{proof}\n\n\nAs mentioned above, if matrix $D$ is nonsingular, the pair $(M^{-1}\\nabla f(x_0),\\allowbreak M^{-1}D)$ is always observable. Indeed, if we replace the assumption $D\\in\\mathbb{S}^n_+$ with $D\\in\\mathbb{S}^n_{++}$ in \\cref{thrm: nec and suf for pure imaginary lossless}, then the equilibrium point $(x_0,0)$ is not only hyperbolic but also asymptotically stable. This is proved in \\Cref{thrm: Stability of Symmetric Second-Order Systems with nonsing damping} in \\Cref{appendix: Stability of Symmetric Second-Order Systems with Nonsingular Damping}.\n\n\n\n\n\nAnother interesting observation is that when an equilibrium point is hyperbolic, \\cref{thrm: nec and suf for pure imaginary lossless} confirms the monotonic behaviour of damping in \\cref{thrm: monotonic damping behaviour}. Specifically, suppose an equilibrium point $(x_0,0)$ is hyperbolic for a value of damping matrix $D_I\\in\\mathbb{S}^n_+$. \\cref{thrm: nec and suf for pure imaginary lossless} implies that the pair $(M^{-1}\\nabla f(x_0),M^{-1}D_I)$ is observable. Note that if we change the damping to $D_{II}\\in\\mathbb{S}^n_+$ such that $D_{II}\\succeq D_I$, then the pair $(M^{-1}\\nabla f(x_0),M^{-1}D_{II})$ is also observable. Hence, the equilibrium point $(x_0,0)$ of the new system with damping $D_{II}$ is also hyperbolic. This is consistent with the monotonic behaviour of damping which is proved in \\cref{thrm: monotonic damping behaviour}.\n\n\n\n\n\n\n\n\n\n\nUnder additional assumptions, \\cref{thrm: nec and suf for pure imaginary lossless} can be partially generalized to a sufficient condition for the breakdown of hyperbolicity when $L$, $M$, and $D$ are not symmetric as in the following\n\\begin{theorem}[hyperbolicity in second-order systems: unsymmetric case]\\label{thrm: nec and suf for pure imaginary lossy}\nConsider the second-order ODE system \\eqref{eq: nonlinear ode} with nonsingular inertia matrix $M\\in\\mathbb{R}^{n\\times n}$ and damping matrix $D \\in\\mathbb{R}^{n\\times n}$. Suppose $(x_0,0)\\in \\mathbb{R}^{n+n}$ is an equilibrium point of the corresponding first-order system \\eqref{eq: nonlinear ode 1 order} with the Jacobian matrix $J\\in\\mathbb{R}^{2n\\times 2n}$ defined in \\eqref{eq: J general case} such that $L=\\nabla f(x_0)\\in \\mathbb{R}^{n\\times n}$. 
If $M^{-1}L$ has a positive eigenvalue $\\lambda$ with eigenvector $x$ such that $x$ is in the nullspace of $M^{-1}D$, then the spectrum of the Jacobian matrix $\\sigma(J)$ contains a pair of purely imaginary eigenvalues.\n\\end{theorem}\n\\begin{proof}\n The proof is similar to that of \\Cref{thrm: nec and suf for pure imaginary lossless} and is given in \\cref{proof of thrm: nec and suf for pure imaginary lossy}.\n\\end{proof}\n\n\n\\subsection{Bifurcation under Damping Variations}\n\\label{subsec: Bifurcation under Damping Variations}\nIn \\Cref{subsec: Sufficient and Necessary Conditions}, we developed necessary and\/or sufficient conditions for breaking the hyperbolicity through purely imaginary eigenvalues. Naturally, the next question is: what are the consequences of breaking the hyperbolicity? To answer this question, consider the parametric ODE \n\\begin{align} \\label{eq: parameter-dependent second order system}\n M \\ddot{x} + D(\\gamma) \\dot{x} + f(x) = 0,\n\\end{align}\nwhich satisfies the same assumptions as \\eqref{eq: nonlinear ode}. Suppose $D(\\gamma)$ is a smooth function of $\\gamma\\in\\mathbb{R}$, and $(x_0,0)\\in \\mathbb{R}^{n+n}$ is a hyperbolic equilibrium point of the corresponding first-order system at $\\gamma=\\gamma_1$ with the Jacobian matrix $J(x,\\gamma)$ defined as\n\\begin{align}\\label{eq: J general case hopd thrm proof}\n\tJ(x, \\gamma) = \\begin{bmatrix}\n\t0 & I \\\\\n\t-M^{-1} \\nabla f(x) & - M^{-1} D(\\gamma) \\\\\n\t\\end{bmatrix} \\in\\mathbb{R}^{2n\\times 2n}.\n\\end{align} \nLet us vary $\\gamma$ from $\\gamma_1$ to $\\gamma_2$ and monitor the equilibrium point. There are two ways in which the hyperbolicity can be broken. Either a simple real eigenvalue approaches zero and we have $0\\in\\sigma(J(x_0,\\gamma_2))$, or a pair of simple complex eigenvalues approaches the imaginary axis and we have $\\pm \\frak{i} \\omega_0 \\in \\sigma( J(x_0,\\gamma_2) )$ for some $\\omega_0>0$. The former corresponds to a fold bifurcation, while the latter is associated with a Hopf (more accurately, Poincare-Andronov-Hopf) bifurcation\\footnote{It can be proved that we need more parameters to create extra eigenvalues on the imaginary axis unless the system has special properties such as symmetry \\cite{2004-kuznetsov-hopf}.}. The next theorem states the precise conditions for a Hopf bifurcation to occur in system \\eqref{eq: parameter-dependent second order system}.\n\n\\begin{theorem} \\label{coro: Hopf bifurcation}\n\tConsider the parametric ODE \\eqref{eq: parameter-dependent second order system}, with inertia matrix $M\\in\\mathbb{S}^n_{++}$ and damping matrix $D(\\gamma) \\in\\mathbb{S}^n_+$. Suppose $D(\\gamma)$ is a smooth function of $\\gamma$, $(x_0,0)\\in \\mathbb{R}^{n+n}$ is an isolated equilibrium point of the corresponding first-order system, and $L:=\\nabla f(x_0)\\in \\mathbb{S}^n_{++}$. Assume the following conditions are satisfied:\n\t\\begin{enumerate}[(i)]\n\t\t\\item There exists $\\gamma_0\\in\\mathbb{R}$ such that the pair $(M^{-1}L, M^{-1}D(\\gamma_0) )$ is not observable, that is, $\\exists \\lambda\\in\\mathbb{C},v\\in\\mathbb{C}^n, v\\ne0$ such that \n\t\t\\begin{align} \\label{eq: hopf observ}\n\t\tM^{-1} L v = \\lambda v \\text{ and } M^{-1}D(\\gamma_0)v=0.\n\t\t\\end{align}\\label{coro: hopf conditions_1}\n\t\t\\vspace{-5mm}\n\t\t\\item $\\frak{i} \\omega_0$ is a simple eigenvalue of $J(x_0,\\gamma_0)$, where $\\omega_0=\\sqrt{\\lambda}$. 
\n\t\t\\label{coro: hopf conditions_3}\n \n\t\n\t\n\t\t\\item $\\mathrm{Im}( q^* M^{-1} D'(\\gamma_0) v ) \\ne 0$, where $D'(\\gamma_0)$ is the derivative of $D(\\gamma)$ at $\\gamma=\\gamma_0$, $\\ell_0=(p,q)\\in\\mathbb{C}^{n+n}$ is a left eigenvector of $J(x_0, \\gamma_0)$ corresponding to eigenvalue $\\frak{i} \\omega_0$, and $\\ell_0$ is normalized so that $\\ell_0^*r_0 = 1$ where $r_0=(v,\\frak{i} \\omega_0 v)$.\n\t\t\\label{coro: hopf conditions_transvers} \n\t\t%\n \n\t\t%\n\t\n\t\t\\item $\\det({P(\\kappa)})\\ne0$ for all $\\kappa\\in\\mathbb{Z} \\setminus \\{-1,1\\}$, where $P(\\kappa)$ is the quadratic matrix pencil given by $P(\\kappa):= \\nabla f(x_0)-\\kappa^2\\omega_0^2 M + \\frak{i} \\kappa\\omega_0 D(\\gamma_0)$.\n\t\t\\label{coro: hopf conditions_4}\n %\n\t\n\t\t%\n\t\n\t\\end{enumerate}\n\n\tThen, there exists smooth functions $\\gamma=\\gamma(\\varepsilon)$ and $T=T(\\varepsilon)$ depending on a parameter $\\varepsilon$ with $\\gamma(0)=\\gamma_0$ and $T(0)=2\\pi |\\omega_0|^{-1}$ such that there are nonconstant periodic solutions of \\eqref{eq: parameter-dependent second order system} with period $T(\\varepsilon)$ which collapses into the equilibrium point $(x_0,0)$ as $\\varepsilon\\to 0$.\n\\end{theorem}\n\\begin{proof}\n\n\tBy \\cref{thrm: nec and suf for pure imaginary lossless}, condition (\\ref{coro: hopf conditions_1}) implies that the Jacobian matrix \\eqref{eq: J general case hopd thrm proof} at $(x,\\gamma) = (x_0,\\gamma_0)$ possesses a pair of purely imaginary eigenvalues $\\pm \\frak{i} \\omega_0$, where $\\omega_0=\\sqrt{\\lambda}$. Moreover, a right eigenvector of $\\frak{i} \\omega_0$ is $(v, \\frak{i} \\omega_0 v)$, where $v$ is from \\eqref{eq: hopf observ}. \n\t%\n\tAccording to condition (\\ref{coro: hopf conditions_3}), the eigenvalue $\\frak{i} \\omega_0$ is simple. Therefore, according to the eigenvalue perturbation theorem \\cite[Theorem 1]{2020-greenbaum-perturbation}, for $\\gamma$ in a neighborhood of $\\gamma_0$, the matrix $J(x_0,\\gamma)$ has an eigenvalue $\\xi(\\gamma)$ and corresponding right and left eigenvectors $r(\\gamma)$ and $\\ell(\\gamma)$ with $\\ell(\\gamma)^*r(\\gamma)=1$ such that $\\xi(\\gamma)$, $r(\\gamma)$, and $\\ell(\\gamma)$ are all analytic functions of $\\gamma$, satisfying $\\xi(\\gamma_0)=\\frak{i} \\omega_0$, $r(\\gamma_0)=r_0$, and $\\ell(\\gamma_0)=\\ell_0$. Let us differentiate the equation $J(x_0,\\gamma) r(\\gamma) = \\xi(\\gamma) r(\\gamma)$ and set $\\gamma=\\gamma_0$, to get\n\t\\begin{align}\n\t J'(x_0,\\gamma_0) r(\\gamma_0) + J(x_0,\\gamma_0) r'(\\gamma_0) = \\xi'(\\gamma_0) r(\\gamma_0) + \\xi(\\gamma_0) r'(\\gamma_0).\n\t\\end{align}\n\tAfter left multiplication by $\\ell_0^*$, and using $\\ell_0^*r_0=1$, we obtain the derivative of $\\xi(\\gamma)$ at $\\gamma = \\gamma_0$:\n\t\\begin{align*}\n\t\\xi'(\\gamma_0) = \n\t\\begin{bmatrix}\n\t p^{*} & q^{*}\n\t\\end{bmatrix}\n\t\\begin{bmatrix}\n\t0 & 0 \\\\\n\t0 & - M^{-1} D'(\\gamma_0) \\\\\n\t\\end{bmatrix} \n\t\\begin{bmatrix}\n\tv \\\\ \\frak{i} \\omega_0 v\n\t\\end{bmatrix} = - \\frak{i} \\omega_0 q^{*} M^{-1} D'(\\gamma_0) v.\n\t\\end{align*}\n\t%\n\t%\n \n %\n \n %\n \n \n \n \n \n \n \n \n \n \n %\n\tNow, $\\mathrm{Im}( q^* M^{-1} D'(\\gamma_0) v ) \\ne 0$ in condition (\\ref{coro: hopf conditions_transvers}) implies that $\\mathrm{Re}(\\xi'(\\gamma_0)) \\ne 0$ which is a necessary condition for Hopf bifurcation. \n %\n\n \n %\n\n\tTherefore, the results follow from the Hopf bifurcation theorem \\cite[Section 2]{1978-Schmidt-hopf}. 
Note that $J(x_0,\\gamma)$ is singular if and only if $\\nabla f(x_0)$ is singular. Thus, nonsingularity of $\\nabla f(x_0)$ is necessary for Hopf bifurcation.\n\\end{proof}\n\n\nIf one or more of the listed conditions in \\cref{coro: Hopf bifurcation} are not satisfied, we may still have the birth of a periodic orbit but some of the conclusions of the theorem may not hold true. The bifurcation is then called a degenerate Hopf bifurcation. For instance, if the transversality condition (\\ref{coro: hopf conditions_transvers}) is not satisfied, the stability of the equilibrium point may not change, or multiple periodic orbits may bifurcate \\cite{1978-Schmidt-hopf}. The next theorem describes a safe region for damping variations such that fold and Hopf bifurcations will be avoided.\n\n\\begin{theorem}\\label{thm: fold and Hopf bifurcation}\n Consider the parametric ODE \\eqref{eq: parameter-dependent second order system}, with a nonsingular inertia matrix $M\\in\\mathbb{R}^{n\\times n}$. Suppose the damping matrix $D(\\gamma)\\in\\mathbb{R}^{n\\times n}$ is a smooth function of $\\gamma$, $(x_0,0)\\in \\mathbb{R}^{n+n}$ is a hyperbolic equilibrium point of the corresponding first-order system at $\\gamma=\\gamma_0$, and $L:=\\nabla f(x_0)\\in\\mathbb{R}^{n\\times n}$. Then, the following statements hold:\n \\begin{enumerate}[(i)]\n \\item Variation of $\\gamma$ in $\\mathbb{R}$ will not lead to any fold bifurcation.\n %\n \n %\n \\item Under the symmetric setting, i.e., when $M\\in\\mathbb{S}^n_{++}$, $D(\\gamma) \\in\\mathbb{S}^n_+$, and $L\\in \\mathbb{S}^n_{++}$, variation of $\\gamma$ in $\\mathbb{R}$ will not lead to any Hopf bifurcation, as long as $D(\\gamma) \\succeq D(\\gamma_0)$. If in addition the equilibrium point is stable, variation of $\\gamma$ in $\\mathbb{R}$ will not make it unstable, as long as $D(\\gamma) \\succeq D(\\gamma_0)$. \n %\n \n \\end{enumerate}\n %\n \n\n\n\\end{theorem}\n\\begin{proof}\n According to \\cref{lemma: relation between ev J and ev J11}, zero is an eigenvalue of $J$ if and only if the matrix $L=\\nabla f(x_0)$ is singular. Therefore, the damping matrix $D$ has no role in the zero eigenvalue of $J$. The second statement follows from \\Cref{thrm: monotonic damping behaviour}.\n\\end{proof}\nThe above theorem can be straightforwardly generalized to bifurcations having higher codimension.\n\n\n\n\n\n\n\n\n\\section{Power System Models and Impact of Damping}\n\\label{Sec: Power System Model and Its Properties}\nThe Questions \\eqref{Q1}-\\eqref{Q3} asked in \\Cref{subsec: Problem Statement} and the theorems and results discussed in the previous parts of this paper arise naturally from the foundations of electric power systems. These results are useful tools for analyzing the behaviour and maintaining the stability of power systems. In the rest of this paper, we focus on power systems to further explore the role of damping in these systems.\n\\subsection{Power System Model} \\label{subsec: Multi-Machine Swing Equations}\nConsider a power system with the set of interconnected generators $\\mathcal{N}=\\{1,\\cdots,n\\}, n\\in\\mathbb{N}$. 
Based on the classical small-signal stability assumptions \\cite{2008-anderson-stability}, the mathematical model for a power system is described by the following second-order system:\n\\begin{align} \\label{eq: 2nd order swing}\n\t\t\\frac{m_j}{\\omega_s} \\ddot{\\delta}_j(t)+ \\frac{d_j}{\\omega_s} {\\dot{\\delta}}_j(t) = P_{m_j} - P_{e_j}(\\delta(t)) && \\forall j \\in \\mathcal{N}.\n\\end{align}\nConsidering the state space $\\mathcal{S}:= \\{ (\\delta,\\omega): \\delta \\in \\mathbb{R}^n, \\omega \\in \\mathbb{R}^n \\}$, the dynamical system \\eqref{eq: 2nd order swing} can be represented as a system of first-order nonlinear autonomous ODEs, aka swing equations: \n\\begin{subequations} \\label{eq: swing equations}\n\t\\begin{align}\n\t\t& \\dot{\\delta}_j(t) = \\omega_j(t) && \\forall j \\in \\mathcal{N}, \\label{eq: swing equations a}\\\\\n\t\t& \\frac{m_j}{\\omega_s} \\dot{\\omega}_j(t)+ \\frac{d_j}{\\omega_s} \\omega_j(t) = P_{m_j} - P_{e_j}(\\delta(t)) && \\forall j \\in \\mathcal{N}, \\label{eq: swing equations b}\n\t\\end{align}\n\\end{subequations}\nwhere for each generator $j\\in\\mathcal{N}$, $P_{m_j}$ and $P_{e_j}$ are mechanical and electrical powers in per unit, $m_j$ is the inertia constant in seconds, $d_j$ is the unitless damping coefficient, $\\omega_{s}$ is the synchronous angular velocity in electrical radians per seconds, $t$ is the time in seconds, $\\delta_j(t)$ is the rotor electrical angle in radians, and finally $\\omega_j(t)$ is the deviation of the rotor angular velocity from the synchronous velocity in electrical radians per seconds. For the sake of simplicity, henceforth we do not explicitly write the dependence of the state variables $\\delta$ and $\\omega$ on time $t$. \nThe electrical power $P_{e_j}$ in \\eqref{eq: swing equations b} can be further spelled out: \n\\begin{align} \\label{eq: flow function}\n\tP_{e_j}(\\delta) & = \\sum \\limits_{k = 1}^n { V_j V_k Y_{jk} \\cos \\left( \\theta _{jk} - \\delta _j + \\delta _k \\right)}\n\n\\end{align}\nwhere $V_j$ is the terminal voltage magnitude of generator $j$, and\n$Y_{jk}\\exp{({\\frak{i} \\theta_{jk}})}$ is the $(j,k)$ entry of the reduced admittance matrix, with $Y_{jk}\\in\\mathbb{R}$ and $\\theta_{jk}\\in\\mathbb{R}$. The reduced admittance matrix encodes the underlying graph structure of the power grid, which is assumed to be a connected graph in this paper.\nNote that for each generator $j\\in\\mathcal{N}$, the electrical power $P_{e_j}$ in general is a function of angle variables $\\delta_k$ for all $k\\in\\mathcal{N}$. Therefore, the dynamics of generators are interconnected through the function $P_{e_j}(\\delta)$ in \\eqref{eq: 2nd order swing} and \\eqref{eq: swing equations}.\n\\begin{definition}[flow function] \\label{def: flow function}\n\tThe smooth function $P_e:\\mathbb{R}^n \\to \\mathbb{R}^n$ given by $\\delta \\mapsto P_e(\\delta)$ in \\eqref{eq: flow function} is called the flow function.\n\\end{definition}\nThe smoothness of the flow function (it is $\\mathcal{C}^\\infty$ indeed) is a sufficient condition for the existence and uniqueness of the solution to the ODE \\eqref{eq: swing equations}. \nThe flow function is translationally invariant with respect to the operator $\\delta \\mapsto \\delta + \\alpha \\mathbf{1}$, where $\\alpha \\in \\mathbb{R}$ and $\\mathbf{1}\\in\\mathbb{R}^n$ is the vector of all ones. In other words, $P_e(\\delta + \\alpha \\mathbf{1})=P_e(\\delta)$. 
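As a quick numerical sanity check of this invariance (a sketch with hypothetical network data, not values from any test system), one can evaluate the flow function directly:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 3
# Hypothetical reduced-network data (illustrative values only)
V = np.array([1.02, 0.98, 1.00])                       # terminal voltage magnitudes
Y = rng.uniform(0.5, 1.5, (n, n)); Y = (Y + Y.T) / 2   # magnitudes |Y_jk| of the reduced admittance matrix
theta = rng.uniform(np.pi/3, np.pi/2, (n, n)); theta = (theta + theta.T) / 2   # admittance angles

def P_e(delta):
    # P_e_j(delta) = sum_k V_j V_k Y_jk cos(theta_jk - delta_j + delta_k)
    diff = delta[:, None] - delta[None, :]
    return (np.outer(V, V) * Y * np.cos(theta - diff)).sum(axis=1)

delta = rng.uniform(-0.3, 0.3, n)
alpha = 0.7
print(np.allclose(P_e(delta), P_e(delta + alpha * np.ones(n))))   # True: translational invariance
\end{verbatim}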
A common way to deal with this situation is to define a reference bus and refer all other bus angles to it. This is equivalent to projecting the original state space $\\mathcal{S}$ onto a lower dimensional space. We will delve into this issue in \\Cref{subsec: Referenced Model}.\n\\subsection{Jacobian of Swing Equations} \\label{SubSec: Graph Theory Interpretation and Spectral Properties}\nLet us take the state variable vector $(\\delta ,\\omega)\\in\\mathbb{R}^{2n}$ into account and note that the Jacobian of the vector field in \\eqref{eq: swing equations} has the form \\eqref{eq: J general case}\nwhere $M= \\frac{1}{\\omega_s} \\mathbf{diag}(m_1,\\cdots,m_n)$ and $D=\\frac{1}{\\omega_s}\\mathbf{diag}(d_1,\\cdots,d_n)$. Moreover, $f=P_e-P_m$ and $\\nabla f = \\nabla P_e (\\delta) \\in\\mathbb{R}^{n\\times n}$ is the Jacobian of the flow function with the entries:\n\\begin{align*}\n\t& \\frac {\\partial P_{e_j}} {\\partial \\delta_j} = \\sum \\limits_{k \\ne j} { V_j V_k Y_{jk} \\sin \\left( {\\theta _{jk} - {\\delta _j} + {\\delta _k}} \\right) }, \\forall j \\in \\mathcal{N}\\\\\n\t%\n\t& \\frac{\\partial P_{e_j} } {\\partial \\delta _k} = - {V_j} {V_k}{Y_{jk}}\\sin \\left( {{\\theta _{jk}} - {\\delta _j} + {\\delta _k}} \\right),\\forall j,k \\in \\mathcal{N}, k \\neq j.\n\\end{align*}\nLet $\\mathcal{L}$ be the set of transmission lines of the reduced power system. We can rewrite $ {\\partial P_{e_j}}\/ {\\partial \\delta_j}=\\sum_{k=1, k\\ne j}^n w_{jk}$ and ${\\partial P_{e_j} }\/ {\\partial \\delta _k}=-w_{jk}$, where \n\\begin{align} \\label{eq: weights of digraph lossy}\nw_{jk} = \n\\begin{cases}\n {V_j} {V_k}{Y_{jk}}\\sin \\left( \\varphi_{jk} \\right) & \\forall \\{ j,k \\} \\in \\mathcal{L} \\\\\n 0 & \\text{otherwise},\n\\end{cases}\n\\end{align}\nand $\\varphi_{jk} = {{\\theta _{jk}} - {\\delta _j} + {\\delta _k}}$. Typically, \nwe have $\\varphi_{jk} \\in ( 0,\\pi )$ for all $\\{j,k \\} \\in \\mathcal{L}$ \\cite{2020-fast-certificate}. Thus, it is reasonable to assume that the equilibrium points $(\\delta^{0},\\omega^{0})$ of the dynamical system \\eqref{eq: swing equations} are located in the set $\\Omega$ defined as\n\\begin{align*}\n\\Omega = \\left\\{ (\\delta,\\omega)\\in\\mathbb{R}^{2n} : 0 < \\theta_{jk}-\\delta_j+\\delta_k < \\pi , \\forall \\{j,k\\} \\in \\mathcal{L},\\omega = 0 \\right\\}.\n\\end{align*}\nUnder this assumption, the terms $w_{jk} > 0$ for all transmission lines $\\{j,k\\} \\in \\mathcal{L}$.\nConsequently, $ {\\partial P_{e_j}} \/ {\\partial \\delta_j}\\ge0, \\forall j \\in \\mathcal{N}$ and ${\\partial P_{e_j} }\/ {\\partial \\delta _k} \\le 0, \\forall j,k \\in \\mathcal{N}, k \\neq j$. Moreover, $\\nabla P_e (\\delta)$ has a zero row sum, i.e., $ \\nabla P_e (\\delta)\\mathbf{1} = 0 \\implies 0 \\in \\sigma(\\nabla P_e (\\delta))$. Given these properties, $\\nabla P_e (\\delta^0)$ turns out to be a singular M-matrix for all $(\\delta^{0},\\omega^{0})\\in\\Omega$ \\cite{2020-fast-certificate}. Recall that a matrix $A$ is an \\emph{M-matrix} if the off-diagonal elements of $A$ are nonpositive and the nonzero eigenvalues of $A$ have positive real parts \\cite{1974-M-matrices}. Finally, if the power system under study has a connected underlying undirected graph, the zero eigenvalue of $\\nabla P_e (\\delta^0)$ will be simple \\cite{2020-fast-certificate}. \n\n\n\n In general, the Jacobian $\\nabla P_e (\\delta)$ is not symmetric. 
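The sign pattern and the zero row sum described above are easy to check numerically. The following sketch (hypothetical lossy network data, illustrative only) assembles $\nabla P_e(\delta)$ from the weights $w_{jk}$ and also confirms that the lossy Jacobian is, in general, not symmetric:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n = 4
# Hypothetical lossy reduced network (illustrative values only)
V = rng.uniform(0.95, 1.05, n)
Y = rng.uniform(0.5, 1.5, (n, n)); Y = (Y + Y.T) / 2
theta = rng.uniform(np.pi/3, np.pi/2, (n, n)); theta = (theta + theta.T) / 2  # != pi/2, i.e., lossy
delta = rng.uniform(-0.1, 0.1, n)   # operating point with phi_jk = theta_jk - delta_j + delta_k in (0, pi)

def jacobian_Pe(delta):
    phi = theta - delta[:, None] + delta[None, :]
    W = np.outer(V, V) * Y * np.sin(phi)   # weights w_jk (for j != k)
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W      # dPe_j/ddelta_j = sum_k w_jk, dPe_j/ddelta_k = -w_jk

L = jacobian_Pe(delta)
print(np.allclose(L @ np.ones(n), 0.0))               # zero row sums, so 0 is an eigenvalue of L
print(np.all(L - np.diag(np.diag(L)) <= 1e-12))       # off-diagonal entries are nonpositive
print(np.allclose(L, L.T))                            # typically False: the lossy Jacobian is not symmetric
\end{verbatim}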
When the power system is \\emph{lossless}, i.e., when the transfer conductances of the grid are zero, then $\\theta_{jj}=-\\frac{\\pi}{2}, \\forall j \\in \\mathcal{N}$ and $\\theta_{jk}=\\frac{\\pi}{2}, \\forall \\{j,k\\} \\in \\mathcal{L}$. In a lossless system, $\\nabla P_e (\\delta)$ is symmetric. If in addition an equilibrium point $(\\delta^0,\\omega^0)$ belongs to the set $\\Omega$, then $\\nabla P_e (\\delta^0) \\in\\mathbb{S}^n_+$, because $\\nabla P_e (\\delta^0)$ is real symmetric and diagonally dominant \\cite{2021-gholami-sun-MMG-stability}. \n\n\\subsection{Referenced Power System Model}\n\\label{subsec: Referenced Model}\n\nThe translational invariance of the flow function $P_e$ gives rise to a zero eigenvalue in the spectrum of $\\nabla P_e (\\delta)$, and as a consequence, in the spectrum of $J(\\delta)$. This zero eigenvalue and the corresponding nullspace pose some difficulties in monitoring the hyperbolicity of the equilibrium points, specially during Hopf bifurcation analysis. As mentioned in \\Cref{subsec: Multi-Machine Swing Equations}, this situation can be dealt with by defining a reference bus and referring all other bus angles to it. Although this is a common practice in power system context \\cite{2008-anderson-stability}, the spectral and dynamical relationships between the original system and the referenced system are not rigorously analyzed in the literature. In this section, we fill this gap to facilitate our analysis in the later parts.\n\n\\subsubsection{Referenced Model}\nDefine $\\psi_j:=\\delta_j-\\delta_n, \\forall j \\in \\{1,2,...,n-1\\}$ and reformulate the swing equation model \\eqref{eq: swing equations} into the \\emph{referenced model}\n\\begin{subequations} \\label{eq: Swing Equation Polar referenced}\n\t\\begin{align}\n\t& \\dot{\\psi}_j = \\omega_j - \\omega_n \\: \\qquad \\qquad \\qquad \\qquad \\forall j \\in \\{1,...,n-1\\}, \\\\\n\t& \\dot{\\omega}_j = - \\frac{d_j}{m_j} \\omega_j + \\frac{\\omega_s}{m_j} (P_{m_j} - P_{e_j}^r(\\psi)) \\: \\: \\: \\: \\forall j \\in \\{1,...,n\\},\n\t\\end{align}\n\\end{subequations}\nwhere for all $j$ in $\\{1,...,n\\}$ we have\n\t\\begin{align} \\label{eq: flow function referenced compact}\n\tP_{e_j}^r(\\psi) & = \\sum \\limits_{k = 1}^{n} { V_j V_k Y_{jk} \\cos \\left( \\theta _{jk} - \\psi_j + \\psi_k \\right)},\n\t\\end{align}\nand $\\psi_n\\equiv0$. The function $P_e^r: \\mathbb{R}^{n-1} \\to \\mathbb{R}^n$ given by \\eqref{eq: flow function referenced compact} is called the referenced flow function. \n\\subsubsection{The Relationship}\nWe would like to compare the behaviour of the two dynamical systems \\eqref{eq: swing equations} and \\eqref{eq: Swing Equation Polar referenced}. Let us define the linear mapping $\\Psi:\\mathbb{R}^n\\times\\mathbb{R}^n \\to \\mathbb{R}^{n-1} \\times\\mathbb{R}^n$ given by $(\\delta,\\omega)\\mapsto(\\psi,\\omega)$ such that\n\\begin{align*}\n \\Psi(\\delta,\\omega) = \\left \\{ (\\psi,\\omega) : \\psi_j:=\\delta_j-\\delta_n, \\forall j \\in \\{1,2,...,n-1\\} \\right \\}. 
\n\\end{align*}\nThis map is obviously smooth but not injective.\nIt can also be written in matrix form \n\\begin{align*}\n \\Psi(\\delta,\\omega) = \\begin{bmatrix} T_1 & 0 \\\\ 0 & I_n \\end{bmatrix}\n \\begin{bmatrix} \\delta \\\\ \\omega \\end{bmatrix}\n\\end{align*}\nwhere $I_n\\in\\mathbb{R}^{n\\times n}$ is the identity matrix, $\\mathbf{1}\\in\\mathbb{R}^{n-1}$ is the vector of all ones, and\n\\begin{align}\nT_1:=\\begin{bmatrix}\nI_{n-1}& -\\mathbf{1}\n\\end{bmatrix} \\in \\mathbb{R}^{(n-1)\\times n}.\n\\end{align}\n\n\nThe next proposition, which is proved in \\cref{proof of thrm: original model vs referenced model}, establishes the relationship between the original model \\eqref{eq: swing equations} and the referenced model \\eqref{eq: Swing Equation Polar referenced}.\n\\begin{proposition} \\label{thrm: original model vs referenced model}\nLet $(\\delta^0,\\omega^0)$ be an equilibrium point of the swing equation \\eqref{eq: swing equations} and $(n_-,n_0,n_+)$ be the inertia\\footnote{Inertia of a matrix (see e.g. \\cite{2013-Horn-matrix-analysis} for a definition) should not be confused with the inertia matrix $M$.} of its Jacobian at this equilibrium point. The following two statements hold:\n\\begin{enumerate}[(i)]\n \\item $\\Psi(\\delta^0,\\omega^0)$ is an equilibrium point of the referenced model \\eqref{eq: Swing Equation Polar referenced}.\n \\item $(n_-,n_0-1,n_+)$ is the inertia of the Jacobian of \\eqref{eq: Swing Equation Polar referenced} at $\\Psi(\\delta^0,\\omega^0)$.\n \n\\end{enumerate}\n\\end{proposition}\n\n\n\\begin{remark}\n Note that the equilibrium points of the referenced model \\eqref{eq: Swing Equation Polar referenced} are in the set \n \\begin{align*}\n \\Tilde{\\mathcal{E}} = \\biggl\\{(\\psi,\\omega)\\in \\mathbb{R}^{n-1}\\times\\mathbb{R}^n : \\; & \\omega_j = \\omega_n \\:, \\forall j \\in \\{1,...,n-1\\}, \\\\ & P_{m_j} = P_{e_j}^r(\\psi) + d_j \\omega_n\/\\omega_s , \\: \\forall j \\in \\{1,...,n\\} \\biggr\\}\n \\end{align*}\n where $\\omega_n$ is not necessarily zero. Therefore, the referenced model \\eqref{eq: Swing Equation Polar referenced} may have extra equilibrium points which do not correspond to any equilibrium point of the original model \\eqref{eq: swing equations}.\n \\end{remark}\n\n\n\n\n\\subsection{Impact of Damping in Power Systems}\n\\label{Sec: Impact of Damping in Power Systems}\nThe theoretical results in \\Cref{Sec: Monotonic Behavior of Damping} and \\Cref{Sec: Impact of Damping in Hopf Bifurcation} have important applications in electric power systems. For example, \\cref{thrm: monotonic damping behaviour} is directly applicable to lossless power systems, and provides new insights to improve the situational awareness of power system operators. Recall that many control actions taken by power system operators are directly or indirectly targeted at changing the effective damping of the system \\cite{1994-kundur-stability,2019-Patrick-koorehdavoudi-input,2012-aminifar-wide-area-damping}. In this context, \\cref{thrm: monotonic damping behaviour} determines how the system operator should change the damping of the system in order to avoid breaking the hyperbolicity and escaping dangerous bifurcations. \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n %\n \n \n\n \n \n \n Now, consider the case where a subset of power system generators have zero damping coefficients. 
Such partial damping is possible in practice specially in inverter-based resources (as damping coefficient corresponds to a controller parameter which can take zero value). The next theorem and remark follow from \\cref{thrm: nec and suf for pure imaginary lossless}, and show how partial damping could break the hyperbolicity in lossless power systems.\n %\n \n %\n \n\n \n\\begin{theorem}[purely imaginary eigenvalues in lossless power systems] \\label{prop: nec and suf for pure imaginary lossless power system}\n Consider a lossless network \\eqref{eq: swing equations} with an equilibrium point $(\\delta^{0},\\omega^{0})\\in \\Omega$. Suppose all generators have positive inertia and nonnegative damping coefficients. Then, $\\sigma(J(\\delta^{0}))$ contains a pair of purely imaginary eigenvalues if and only if the pair $(M^{-1}\\nabla P_e (\\delta^0), \\allowbreak M^{-1}D)$ is not observable.\n\\end{theorem}\n\n\\begin{proof}\n As mentioned above, we always assume the physical network connecting the power generators is a connected (undirected) graph. Under this assumption,\n as mentioned in \\Cref{SubSec: Graph Theory Interpretation and Spectral Properties}, matrix $L:=\\nabla P_e (\\delta^0)$ has a simple zero eigenvalue with a right eigenvector $\\mathbf{1}\\in\\mathbb{R}^{n}$, which is the vector of all ones \\cite{2020-fast-certificate}.\n Moreover, since the power system is lossless and $(\\delta^{0},\\omega^{0})\\in \\Omega$, we have $L\\in\\mathbb{S}^n_+$.\n %\n If $D=0$, the pair $(M^{-1}L,M^{-1}D)$ can never be observable. Using a similar argument as in the first part in the proof of \\cref{thrm: nec and suf for pure imaginary lossless}, it can be shown that $M^{-1}L$ has a simple zero eigenvalue and the rest of its eigenvalues are positive, i.e., $\\sigma(M^{-1}L)\\subseteq \\mathbb{R_{+}}=\\{\\lambda\\in\\mathbb{R}:\\lambda\\ge0\\}$. Meanwhile, when $D=0$, we have $\\mu\\in\\sigma(J)\\iff\\mu^2\\in\\sigma(-M^{-1}L)$. Notice that a power grid has at least two nodes, i.e. $n\\ge2$, and hence, $M^{-1}L$ has at least one positive eigenvalue, i.e., $\\exists \\lambda\\in\\mathbb{R}_+, \\lambda>0$ such that $\\lambda\\in\\sigma(M^{-1}L)$. Hence, $\\mu= \\sqrt{-\\lambda}$ is a purely imaginary number and is an eigenvalue of $J$. Similarly, we can show that $-\\mu$ is an eigenvalue of $J$. Consequently, the theorem holds in the case of $D=0$. In the sequel, we assume that $D\\not=0$.\n\t%\n\t\n\t\n \n \n\t\\emph{Necessity:}\n\tAssume there exists $\\beta>0$ such that $\\frak{i} \\beta \\in \\sigma(J)$. 
We will show that the pair $(M^{-1}L,M^{-1}D)$ is not observable.\n\t%\n\n\tBy \\cref{lemma: relation between ev J and ev J11}, $\\frak{i}\\beta \\in \\sigma(J)$ if and only if the matrix pencil $(M^{-1}L + \\frak{i}\\beta M^{-1}D - \\beta^2 I)$ is singular: \n\n\t\\begin{align*}\n\t\n\t& \\det \\left( M^{-\\frac{1}{2}}(M^{-\\frac{1}{2}}LM^{-\\frac{1}{2}} + \\frak{i} \\beta M^{-\\frac{1}{2}}DM^{-\\frac{1}{2}} - \\beta^2 I) M^{\\frac{1}{2}} \\right) = 0,\n\t\\end{align*}\n\tor equivalently, $\\exists (x + \\frak{i} y) \\ne 0$ such that $x,y\\in\\mathbb{R}^n$ and\n\t\\begin{align} \\label{eq: proof of parially damped hopf bifurcation lossless swing}\n\t\\notag & (M^{-\\frac{1}{2}}LM^{-\\frac{1}{2}} + \\frak{i} \\beta M^{-\\frac{1}{2}} D M^{-\\frac{1}{2}} - \\beta^2 I)(x+\\frak{i} y) = 0 \\\\\n\t& \\iff \\begin{cases}\n\t(M^{-\\frac{1}{2}}LM^{-\\frac{1}{2}}-\\beta^2 I)x - \\beta M^{-\\frac{1}{2}}DM^{-\\frac{1}{2}} y = 0, \\\\ \n\t(M^{-\\frac{1}{2}}LM^{-\\frac{1}{2}}-\\beta^2 I)y + \\beta M^{-\\frac{1}{2}}DM^{-\\frac{1}{2}} x = 0.\n\t\\end{cases}\n\t\\end{align}\n\tLet $\\hat{L}:=M^{-\\frac{1}{2}}LM^{-\\frac{1}{2}}$, $\\hat{D}:=M^{-\\frac{1}{2}}DM^{-\\frac{1}{2}}$, and observe that\n\t\\begin{align*}\n\t\\begin{cases}\n\ty^\\top (\\hat{L}-\\beta^2 I)x = y^\\top (\\beta \\hat{D} y) = \\beta y^\\top \\hat{D} y \\ge 0,\n\t\\\\\n\tx^\\top (\\hat{L}-\\beta^2 I)y = x^\\top (-\\beta \\hat{D} x) = -\\beta x^\\top \\hat{D} x \\le 0,\n\t\\end{cases}\n\t\\end{align*}\n \n %\n\twhere we have used the fact that $\\hat{D}$ is *-congruent to $D$. According to Sylvester's law of inertia, $\\hat{D}$ and $D$ have the same inertia. Since $D\\succeq 0$, we conclude that $\\hat{D}\\succeq0$.\n\n\tAs $(\\hat{L}-\\beta^2 I)$ is symmetric, we have $x^\\top (\\hat{L}-\\beta^2 I)y = y^\\top (\\hat{L}-\\beta^2 I)x$. Therefore, we must have $x^\\top \\hat{D} x = y^\\top \\hat{D} y =0$. Since $\\hat{D}\\succeq0$, we can infer that $x\\in\\ker(\\hat{D})$ and $y\\in\\ker(\\hat{D})$.\n\t%\n\n \n \n \n \n\tNow considering $\\hat{D} y = 0$ and using the first equation in \\eqref{eq: proof of parially damped hopf bifurcation lossless swing} we get \n\t\\begin{align}\n\t( \\hat{L}-\\beta^2 I)x = 0 \\iff M^{-\\frac{1}{2}}LM^{-\\frac{1}{2}} x=\\beta^2 x,\n\t\\end{align}\n\tmultiplying both sides from left by $M^{-\\frac{1}{2}}$ we get $M^{-1}L (M^{-\\frac{1}{2}} x) = \\beta^2 (M^{-\\frac{1}{2}} x)$. Thus, $ \\hat{x}:= M^{-\\frac{1}{2}} x$ is an eigenvector of $M^{-1}L$. Moreover, we have \n\t\\begin{align*}\n\t M^{-1}D \\hat{x} = M^{-1}DM^{-\\frac{1}{2}} x = M^{-\\frac{1}{2}} ( \\hat{D} x) = 0, \n\t\\end{align*}\n\twhich means that the pair $(M^{-1}L,M^{-1}D)$ is not observable.\n\n\n\n\n\t\n\n\t%\n \n \n \n \n \n \n \n\n\t\n\t\\emph{Sufficiency:}\n\tSuppose the pair~ $\\allowbreak (M^{-1}L, M^{-1}D)$ is not observable. We will show that $\\sigma(J)$ contains a pair of purely imaginary eigenvalues.\n\n\tAccording to \\cref{def: Observability}, $\\exists \\lambda \\in \\mathbb{C}, x \\in \\mathbb{C}^n , x\\ne 0$ such that\n\t\\begin{align} \\label{eq: observability in lossless general swing}\n\t M^{-1}Lx = \\lambda x \\text{ and } M^{-1}Dx = 0.\n\t\\end{align}\n \n\tWe make the following two observations.\n\tFirstly, as it is shown above, we have $\\sigma(M^{-1}L)\\subseteq \\mathbb{R_{+}}$. \n\t%\n\tSecondly, \n\n\t%\n\n \n \n $L$ has a simple zero eigenvalue and a one-dimensional nullspace spanned by $\\mathbf{1}\\in\\mathbb{R}^n$. 
We want to emphasize that this zero eigenvalue of $L$ cannot break the observability of the pair $(M^{-1}L,M^{-1}D)$. Indeed, $\\ker(M^{-1}L)=\\ker(L)$ is spanned by $\\mathbf{1}$, and $M^{-1}D \\mathbf{1} \\ne0$ because $D$ is a nonzero nonnegative diagonal matrix.\n\tBased on the foregoing two observations, when the pair $(M^{-1}L,M^{-1}D)$ is not observable, there must exist $\\lambda \\in \\mathbb{R_{+}}, \\lambda\\ne0$ and $x \\in \\mathbb{C}^n , x\\ne 0$ such that \\eqref{eq: observability in lossless general swing} holds.\n\n\tDefine $\\xi=\\sqrt{-\\lambda}$, which is a purely imaginary number. The quadratic pencil $M^{-1}P(\\xi)=\\xi^2 I + \\xi M^{-1}D + M^{-1}L$ is singular because $M^{-1}P(\\xi)x = \\xi^2 x + \\xi M^{-1}Dx + M^{-1}Lx = -\\lambda x + 0 + \\lambda x = 0$. By \\cref{lemma: relation between ev J and ev J11}, $\\xi$ is an eigenvalue of $J$. Similarly, we can show $-\\xi$ is an eigenvalue of $J$. Therefore, $\\sigma(J)$ contains the pair of purely imaginary eigenvalues $\\pm\\xi$.\n\\end{proof}\n\nThe following remark illustrates how \\cref{prop: nec and suf for pure imaginary lossless power system} can be used in practice to detect and damp oscillations in power systems.\n\\begin{remark}\n\tConsider the assumptions of \\cref{prop: nec and suf for pure imaginary lossless power system} and suppose there exists a pair of purely imaginary eigenvalues $\\pm \\frak{i} \\beta \\in \\sigma(J(\\delta^0))$ which give rise to a Hopf bifurcation and oscillatory behaviour of the system. This issue can be detected by observing the oscillations in power system state variables (through phasor measurement units (PMUs) \\cite{2012-aminifar-wide-area-damping}). According to \\cref{prop: nec and suf for pure imaginary lossless power system}, we conclude that $\\beta^2\\in\\sigma(M^{-1}\\nabla P_e (\\delta^0))$. Let $\\mathcal{X}:= \\{x^1,...,x^k\\}$ be a set of linearly independent eigenvectors associated with the eigenvalue $\\beta^2\\in\\sigma(M^{-1}\\nabla P_e (\\delta^0))$, i.e., we assume that the corresponding eigenspace is $k$-dimensional. According to \\cref{prop: nec and suf for pure imaginary lossless power system}, there exists a nonzero vector $x$ in this eigenspace with $M^{-1}Dx=0$, or equivalently, $Dx=0$. Since $D$ is diagonal, this means $d_jx_j=0, \\forall j\\in\\{1,\\cdots,n\\}$.\n\tIn order to remove the purely imaginary eigenvalues, we need to make sure that $\\forall x^\\ell\\in\\mathcal{X}, \\exists j \\in \\{1,\\cdots,n\\}$ such that $d_j x_j^\\ell\\ne0$. This can be done for each $x^\\ell\\in\\mathcal{X}$ by choosing a $j\\in\\{1,\\cdots,n\\}$ such that $x_j^\\ell\\ne0$ and then increasing the corresponding damping $d_j$ from zero to some positive number, thereby rendering the pair $(M^{-1}\\nabla P_e (\\delta^0),M^{-1}D)$ observable.\n\\end{remark}\n\n\\Cref{prop: nec and suf for pure imaginary lossless power system} gives a necessary and sufficient condition for the existence of purely imaginary eigenvalues in a lossless power system with \\emph{nonnegative} damping and positive inertia. 
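\n\nBoth this condition and the damping-selection recipe of the preceding remark are straightforward to check numerically. The following Python sketch is purely illustrative: it uses a PBH-type rank test for observability, and the matrices are toy data (they coincide with the Case $1$ data of \\Cref{subsubsec: case 1} at $\\gamma=0$, with $M=I_3$).\n\\begin{verbatim}\n# PBH-type observability check for the pair (M^{-1} L, M^{-1} D) on toy data.\nimport numpy as np\n\nL = np.array([[ 1.0, -0.5, -0.5],    # toy lossless nabla P_e(delta^0) (assumed)\n              [-0.5,  1.0, -0.5],\n              [-0.5, -0.5,  1.0]])\nM = np.eye(3)\nD = np.diag([0.0, 0.0, 1.5])         # two undamped generators (assumed)\n\ndef unobservable_eigs(A, C, tol=1e-8):\n    # PBH test: an eigenvalue lam of A is unobservable iff rank([A - lam*I; C]) < n\n    n = A.shape[0]\n    return [lam for lam in np.linalg.eigvals(A)\n            if np.linalg.matrix_rank(np.vstack([A - lam * np.eye(n), C]), tol) < n]\n\nA, C = np.linalg.solve(M, L), np.linalg.solve(M, D)\nprint(unobservable_eigs(A, C))       # approx [1.5, 1.5]: J has a pair with beta**2 = 1.5\nD[0, 0] = 0.1                        # add damping at a generator with x_j != 0\nprint(unobservable_eigs(A, np.linalg.solve(M, D)))   # []: observability restored\n\\end{verbatim}\n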
It is instructive to compare this condition with an earlier result in \\cite{2021-gholami-sun-MMG-stability}, which shows that when all the generators in a lossless power system have \\emph{positive} damping $d_j$ and positive inertia $m_j$, then any equilibrium point in the set $\\Omega$ is asymptotically stable.\nThis is also proved in \\Cref{thrm: Stability of Symmetric Second-Order Systems with nonsing damping} of \\Cref{appendix: Stability of Symmetric Second-Order Systems with Nonsingular Damping} for the general second-order model \\eqref{eq: nonlinear ode}.\n\nRecall that the simple zero eigenvalue of the Jacobian matrix $J(\\delta^0)$ in model \\eqref{eq: swing equations} stems from the translational invariance of the flow function defined in \\Cref{def: flow function}. As mentioned earlier, we can eliminate this eigenvalue by choosing a reference bus and referring all other bus angles to it. According to \\Cref{thrm: original model vs referenced model}, aside from the simple zero eigenvalue, the Jacobians of the original model \\eqref{eq: swing equations} and the referenced model \\eqref{eq: Swing Equation Polar referenced} have the same number of eigenvalues with zero real part. Hence, \\cref{prop: nec and suf for pure imaginary lossless power system} also provides a necessary and sufficient condition for breaking the hyperbolicity of the referenced lossless power system model \\eqref{eq: Swing Equation Polar referenced}.\n\nIn lossy power systems, the matrix $\\nabla P_e (\\delta)$ may not be symmetric. In this case, \\cref{thrm: nec and suf for pure imaginary lossy} can be used for detecting purely imaginary eigenvalues. Meanwhile, let us discuss some noteworthy cases in more detail. \\Cref{prop:hyperbolicity n2 n3} asserts that in small lossy power networks with only one undamped generator, the equilibrium points are always hyperbolic. The proof is provided in \\cref{proof of prop:hyperbolicity n2 n3}.\n\\begin{theorem} \\label{prop:hyperbolicity n2 n3}\nLet $n\\in\\{2,3\\}$ and consider an $n$-generator system with only one undamped generator. Suppose that $(\\delta^{0},\\omega^{0})\\in \\Omega$ holds and that the underlying undirected power network graph is connected. Then the Jacobian matrix $J(\\delta^0)$ has no purely imaginary eigenvalues.\nWe allow the network to be lossy, but we assume ${\\partial P_{e_j} }\/ {\\partial \\delta _k}=0$ if and only if ${\\partial P_{e_k} }\/ {\\partial \\delta _j}=0$. 
The lossless case is a special case of this.\n\\end{theorem}\n\n\n\t\n\t\n\n\n\n\t\n\t\n \n\n\n\nThe following counterexample shows that as long as there are two undamped generators, the Jacobian $J(\\delta)$ at an equilibrium point may have purely imaginary eigenvalues.\n\\begin{proposition} \\label{prop: non-hyper example}\nFor any $n\\ge 2$, consider an $(n+1)$-generator system with $2$ undamped generators and the following $(n+1)$-by-$(n+1)$ matrices $L=\\nabla P_e (\\delta^0)$, $D$, and $M$:\n\\begin{align*}\nL = \\begin{bmatrix} \n1 & -\\frac{1}{n} & -\\frac{1}{n} & \\cdots & -\\frac{1}{n} \\\\\n-\\frac{1}{n} & 1 & -\\frac{1}{n} & \\cdots & -\\frac{1}{n} \\\\\n\\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n-\\frac{1}{n} & -\\frac{1}{n} & -\\frac{1}{n} & \\cdots & 1\n\\end{bmatrix},\\\\\nD= \\mathbf{diag} ([0,0,d_3,d_4,\\cdots,d_{n+1}]), \\: M = I_{n+1}.\n\\end{align*}\nThen $\\pm \\frak{i} \\beta \\in \\sigma(J(\\delta^0))$, where $\\beta^2 = 1+\\frac{1}{n}$.\n\\end{proposition}\n\\begin{proof}\nLet $\\beta^2 = 1+\\frac{1}{n}$ and observe that $\\mathop{\\bf rank}(L-\\beta^2 M)=1$ and $\\mathop{\\bf rank}(\\beta D)=(n+1)-2=n-1$. The rank-sum inequality \\cite{2013-Horn-matrix-analysis} implies that\n\\begin{align*}\n\\mathop{\\bf rank}(L-\\beta^2 M- \\frak{i} \\beta D) &\\le \\mathop{\\bf rank}(L-\\beta^2 M)+\\mathop{\\bf rank}(- \\frak{i}\\beta D) = 1+(n-1)=n,\n\\end{align*}\nthat is $\\det\\left( L + \\frak{i} \\beta D - \\beta^2 M \\right) = 0$. Now according to \\cref{lemma: relation between ev J and ev J11}, the latter is equivalent to $\\frak{i} \\beta \\in \\sigma(J(\\delta^0))$. This completes the proof.\nAlso note that the constructed $L$ is not totally unrealistic for a power system.\n\\end{proof}\n\n\n\n\\section{Illustrative Numerical Power System Examples} \\label{Sec: Computational Experiments}\nTwo case studies will be presented to illustrate breaking the hyperbolicity and the occurrence of Hopf bifurcation under damping variations. Additionally, we adopt the center manifold theorem to determine the stability of bifurcated orbits. Note that using the center manifold theorem, a Hopf bifurcation in an $n$-generator network essentially reduces to a planar system provided that aside from the two purely imaginary eigenvalues no other eigenvalues have zero real part at the bifurcation point. Therefore, for the sake of better illustration we focus on small-scale networks.\n\\subsection{Case $1$} \\label{subsubsec: case 1}\nConsider a $3$-generator system with $D=\\mathbf{diag} ([\\gamma,\\gamma,1.5])$, $M=I_3$, $Y_{12}=Y_{13}=2Y_{23}=\\frak{i} $ p.u., $P_{m_1}=-\\sqrt{3}$ p.u., and $P_{m_2}=P_{m_3}=\\sqrt{3}\/2$ p.u. The load flow problem for this system has the solution $V_j=1$ p.u. $\\forall j$ and $\\delta_1=0$, $\\delta_2=\\delta_3=\\pi\/3$. Observe that when $\\gamma=0$, the pair $(M^{-1}\\nabla P_e (\\delta^0),M^{-1}D)$ is not observable, and \\cref{prop: nec and suf for pure imaginary lossless power system} implies that the spectrum of the Jacobian matrix $\\sigma(J)$ contains a pair of purely imaginary eigenvalues. Moreover, this system satisfies the assumptions of \\cref{prop: non-hyper example}, and consequently, we have $\\pm \\frak{i} \\sqrt{1.5} \\in \\sigma(J)$. In order to eliminate the zero eigenvalue (to be able to use the Hopf bifurcation theorem), we adopt the associated referenced model using the procedure described in \\Cref{subsec: Referenced Model}. 
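\n\nBefore verifying the conditions of the Hopf bifurcation theorem, the purely imaginary pair at $\\gamma=0$ can be confirmed directly from the spectrum of $J$. The Python sketch below is illustrative only; it assembles $J$ in the block companion form consistent with \\eqref{eq: J general case} (the $\\omega_s$ scaling is immaterial here since $M=I_3$).\n\\begin{verbatim}\n# Case 1 at gamma = 0: the spectrum of J contains 0 and an imaginary pair with\n# beta**2 = 1.5.\nimport numpy as np\n\nL = np.array([[ 1.0, -0.5, -0.5],    # nabla P_e(delta^0) computed from the Case 1 data\n              [-0.5,  1.0, -0.5],\n              [-0.5, -0.5,  1.0]])\nD = np.diag([0.0, 0.0, 1.5])         # D = diag([gamma, gamma, 1.5]) with gamma = 0\nM = np.eye(3)\n\nJ = np.block([[np.zeros((3, 3)), np.eye(3)],\n              [-np.linalg.solve(M, L), -np.linalg.solve(M, D)]])\nprint(np.round(np.linalg.eigvals(J), 4))   # includes 1.2247i and -1.2247i\n\\end{verbatim}\n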
\nThe conditions (\\ref{coro: hopf conditions_1})-(\\ref{coro: hopf conditions_4}) of \\cref{coro: Hopf bifurcation} are satisfied (specifically, the transversality condition (\\ref{coro: hopf conditions_transvers}) holds because $\\mathrm{Im}( q^* M^{-1} D'(\\gamma_0) v ) = -0.5$), and accordingly, a periodic orbit bifurcates at this point. \nTo determine the stability of the bifurcated orbit, we compute the \\emph{first Lyapunov coefficient} $l_1(0)$ as described in \\cite{2004-kuznetsov-hopf}. If the first Lyapunov coefficient is negative, the bifurcating limit cycle is stable, and the bifurcation is supercritical. Otherwise, it is unstable and the bifurcation is subcritical. In this example, we get $l_1(0)=-1.7\\times10^{-3}$, confirming that the Hopf bifurcation is supercritical and a stable limit cycle is born. Figs. (\\ref{fig:sfig1 case1 bifurcation})-(\\ref{fig:sfig3 case1 bifurcation}) depict these limit cycles when the parameter $\\gamma$ changes. Moreover, Fig. (\\ref{fig:sfig4 case1 bifurcation}) shows the oscillations observed in the voltage angles and frequencies when $\\gamma=0$.\n\\begin{figure}\n\\centering\n\\begin{subfigure}{0.33\\textwidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/case1\/y1y2_rev.pdf}\n \\caption{}\n \\label{fig:sfig1 case1 bifurcation}\n\\end{subfigure}%\n\\begin{subfigure}{0.35\\textwidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/case1\/w1w2w3.pdf}\n \\caption{}\n \\label{fig:sfig2 case1 bifurcation}\n\\end{subfigure}\n\\begin{subfigure}{0.34\\textwidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/case1\/w2w3d.pdf}\n \\caption{}\n \\label{fig:sfig3 case1 bifurcation}\n\\end{subfigure}\n\\begin{subfigure}{0.35\\textwidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/case1\/y1y2w1w2w3_rev.pdf}\n \\caption{}\n \\label{fig:sfig4 case1 bifurcation}\n\\end{subfigure}\n\\caption{Occurrence of supercritical Hopf bifurcation in Case $1$. (a)-(c) Projection of limit cycles into different subspaces as the parameter $\\gamma$ changes. (d) Oscillations of the voltage angles $\\psi$ in radians and the angular frequency deviation $\\omega$ in radians per second when $\\gamma=0$. Note that $\\psi_3 \\equiv 0$.}\n\\label{fig: case1 bifurcation}\n\\end{figure}\n\\subsection{Case $2$} \\label{subsubsec: case 2}\nNext, we explore how damping variations could lead to a Hopf bifurcation in lossy systems. It is proved in \\cref{prop:hyperbolicity n2 n3} that a $2$-generator system with only one undamped generator cannot experience a Hopf bifurcation. To complete the discussion, let us consider a fully-damped (i.e., all generators have nonzero damping) lossy $2$-generator system here. Note also that the discussion about a fully-undamped case is irrelevant (see \\cref{lemma: undamped systems}). Suppose $M=I_2$, $D= \\mathbf{diag} ([\\gamma,1])$, $Y_{12}=-1+\\frak{i} 5.7978$ p.u., $P_{m_1}=6.6991$ p.u., and $P_{m_2}=-4.8593$ p.u. The load flow problem for this system has the solution $V_j=1$ p.u. $\\forall j$ and $\\delta_1=1.4905$, $\\delta_2=0$. \nWe observe that $\\gamma=0.2$ will break the hyperbolicity and lead to a Hopf bifurcation with the first Lyapunov coefficient $l_1(0.2)=1.15$. This positive value of $l_1(0.2)$ confirms that the Hopf bifurcation is subcritical and an unstable limit cycle bifurcates for $\\gamma\\ge0.2$. 
Therefore, the situation can be summarized as follows:\n\\begin{itemize}\n \\item If $\\gamma <0.2$, there exists one unstable equilibrium point.\n \\item If $\\gamma =0.2$, a subcritical Hopf bifurcation takes place and a unique small unstable limit cycle is born.\n \\item If $\\gamma >0.2$, there exists a stable equilibrium point surrounded by an unstable limit cycle. \n\\end{itemize}\n\nFigs. (\\ref{fig:sfig1 case2 bifurcation})-(\\ref{fig:sfig3 case2 bifurcation}) depict the bifurcating unstable limit cycles when the parameter $\\gamma$ changes in the interval $[0.2,0.35]$. This case study sheds light on an important fact: bifurcation can happen even in fully damped systems, provided that the damping matrix $D$ reaches a critical point (say $D_c$). When $D \\preceq D_c$, the equilibrium point is unstable. On the other hand, when $D \\succ D_c$, the equilibrium point becomes stable but it is still surrounded by an unstable limit cycle. As we increase the damping parameter, the radius of the limit cycle increases, which enlarges the region of attraction of the equilibrium point (recall that this region is bounded by the unstable limit cycle). This also confirms the monotonic behaviour of damping proved in \\cref{thrm: monotonic damping behaviour}. Fig. (\\ref{fig:sfig4 case2 bifurcation}) shows the region of attraction surrounded by the unstable limit cycle (in red) when $\\gamma=0.25$. In this figure, the green orbits located inside the cycle are spiraling in towards the equilibrium point while the blue orbits located outside the limit cycle are spiraling out.\n\\begin{figure}\n\\centering\n\\begin{subfigure}{0.35\\textwidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/case2\/y1w1w2_rev.pdf}\n \\caption{}\n \\label{fig:sfig1 case2 bifurcation}\n\\end{subfigure}%\n\\begin{subfigure}{0.35\\textwidth}\n \\centering\n \\includegraphics[width=0.9\\linewidth]{figures\/case2\/y1d_rev.pdf}\n \\caption{}\n \\label{fig:sfig2 case2 bifurcation}\n\\end{subfigure}\n\\begin{subfigure}{0.35\\textwidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/case2\/y1w1d_rev.pdf}\n \\caption{}\n \\label{fig:sfig3 case2 bifurcation}\n\\end{subfigure}\n\\begin{subfigure}{0.34\\textwidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/case2\/y1w1_rev.pdf}\n \\caption{}\n \\label{fig:sfig4 case2 bifurcation}\n\\end{subfigure}\n\\caption{Occurrence of subcritical Hopf bifurcation in Case $2$. (a) Unstable limit cycles as the parameter $\\gamma$ changes. (b)-(c) Projection of limit cycles into different subspaces as the parameter $\\gamma$ changes. (d) The region of attraction of the equilibrium point when $\\gamma=0.25$. The unstable limit cycle is shown in red, while the orbits inside and outside of it are shown in green and blue, respectively. Note that $\\psi_2 \\equiv 0$.}\n\\label{fig: case2 bifurcation}\n\\end{figure}\n\nAlthough both supercritical and subcritical Hopf bifurcations lead to the birth of limit cycles, they have quite different practical consequences. The supercritical Hopf bifurcation which occurred in \\Cref{subsubsec: case 1} corresponds to a soft or noncatastrophic stability loss because a stable equilibrium point is replaced with a stable periodic orbit, and the system remains in a neighborhood of the equilibrium. In this case, the system operator can take appropriate measures to bring the system back to the stable equilibrium point. 
Conversely, the subcritical Hopf bifurcation in \\Cref{subsubsec: case 2} comes with a sharp or catastrophic loss of stability. This is because the region of attraction of the equilibrium point (which is bounded by the unstable limit cycle) shrinks as we decrease the parameter $\\gamma$ and disappears once we hit $\\gamma=0.2$. In this case, the system operator may not be able to bring the system back to the stable equilibrium point as the operating point may have left the region of attraction.\n\n\n\n\\section{Conclusions} \\label{Sec: Conclusions}\nWe have presented a comprehensive study on the role of damping in a large class of dynamical systems, including electric power networks. Paying special attention to partially-damped systems, we have shown that damping plays a monotonic role in the hyperbolicity of the equilibrium points. We have proved that the hyperbolicity of the equilibrium points is intertwined with the observability of a pair of matrices involving the damping matrix. We have also studied the aftermath of hyperbolicity collapse, and have shown that both subcritical and supercritical Hopf bifurcations can occur as damping changes. It is shown that Hopf bifurcation cannot happen in small power systems with only one undamped generator. In the process, we have developed auxiliary results by proving some important spectral properties of the power system Jacobian matrix, establishing the relationship between a power system model and its referenced counterpart, and finally addressing a fundamental question from matrix perturbation theory. Among other things, the numerical experiments have illustrated how damping can change the region of attraction of the equilibrium points.\nWe believe our results are of general interest to the community of applied dynamical systems, and provide new insights into the interplay of damping and oscillation in one of the most important engineering systems, the electric power system.\n\\section{Proof of Theorem \\ref{thrm: rank}}\n\\section{Proof of \\Cref{lemma: rank principal}} \\label{proof of lemma: rank principal}\n\\begin{proof}\n \tAssume that all $r$-by-$r$ principal submatrices of $S$ are singular, and let us lead this assumption to a contradiction. Since $\\mathop{\\bf rank}(S)=r$, all principal submatrices of size larger than $r$ are also singular. Therefore, zero is an eigenvalue of every $m$-by-$m$ principal submatrix of $S$ for each $m\\ge r$. Consequently, all principal minors of $S$ of size $m$ are zero for each $m\\ge r$. Let $E_\\ell(S)$ denote the sum of principal minors of $S$ of size $\\ell$ (there are $n\\choose \\ell$ of them), and observe that we have $E_m(S) = 0, \\forall m \\ge r$. Moreover, let $p_S (t)=\\sum_{\\ell=0}^n a_\\ell t^\\ell$ with $a_n=1$ be the characteristic polynomial of $S$, viewed as a formal polynomial in $t$, and recall that the $k$-th derivative of $p_S(t)$ at $t=0$ is $p_S^{(k)}(0)=k!(-1)^{n-k}E_{n-k}(S), \\forall k\\in\\{0,1,\\cdots,n-1\\}$, so that the coefficients of the characteristic polynomial are $a_k=\\frac{1}{k!}p_S^{(k)}(0)$. \n \tIn this case, our assumption leads to $a_k=\\frac{1}{k!}p_S^{(k)}(0)=0,\\forall k\\in\\{0,1,\\cdots,n-r\\}$, i.e., zero is an eigenvalue of $S$ with algebraic multiplicity at least $n-r+1$. 
But from the assumption of the lemma we know $S$ is similar to $B\\oplus0_{n-r}$, that is, zero is an eigenvalue of $S$ with algebraic multiplicity exactly $n-r$, and we arrive at the desired contradiction.\n\\end{proof}\n\n\n\n\\section{Stability of Symmetric Second-Order Systems with Nonsingular Damping}\n\\label{appendix: Stability of Symmetric Second-Order Systems with Nonsingular Damping}\n\n\n\n\n\n\\Cref{thrm: nec and suf for pure imaginary lossless} provides a necessary and sufficient condition for the hyperbolicity of an equilibrium point $(x_0,0)$ of the second-order system \\eqref{eq: nonlinear ode}, when the inertia, damping, and Jacobian of $f$ satisfy $M\\in\\mathbb{S}^{n}_{++}, D\\in\\mathbb{S}^n_+, \\nabla f(x_0)\\in\\mathbb{S}^n_{++}$. \nIn this section, we prove that if we replace the assumption $D\\in\\mathbb{S}^n_+$ with $D\\in\\mathbb{S}^n_{++}$, then the equilibrium point $(x_0,0)$ is not only hyperbolic but also asymptotically stable. This asymptotic stability is proved for lossless swing equations in \\cite[Theorem 1, Part d]{2021-gholami-sun-MMG-stability}.\nThe next theorem generalizes \\cite[Theorem 1, Part d]{2021-gholami-sun-MMG-stability} to the second-order system \\eqref{eq: nonlinear ode} where the damping and inertia matrices are not necessarily diagonal.\n\\begin{theorem}[stability in second-order systems: symmetric case] \\label{thrm: Stability of Symmetric Second-Order Systems with nonsing damping}\n Consider the second-order ODE system \\eqref{eq: nonlinear ode} with inertia matrix $M\\in\\mathbb{S}^n_{++}$ and damping matrix $D \\in\\mathbb{S}^n_{++}$. Suppose $(x_0,0)\\in \\mathbb{R}^{n+n}$ is an equilibrium point of the corresponding first-order system \\eqref{eq: nonlinear ode 1 order} with the Jacobian matrix $J\\in\\mathbb{R}^{2n\\times 2n}$ defined in \\eqref{eq: J general case} such that $L=\\nabla f(x_0)\\in \\mathbb{S}^n_{++}$. Then, the equilibrium point $(x_0,0)$ is locally asymptotically stable.\n\\end{theorem}\n\n\n\n\\begin{proof}\nWe complete the proof in three steps:\\newline\n\\textbf{Step 1:} First, we show all real eigenvalues\nof $J$ are negative. Assume $\\lambda\\in\\mathbb{R}, \\lambda\\ge0$ is a nonnegative eigenvalue of $J(x_0)$, and let us lead this assumption to a contradiction. According to \\Cref{lemma: relation between ev J and ev J11},\n\\begin{align} \\label{eq: pencil det zero appendix}\n \\det\\left(\\lambda^2 M + \\lambda D + L \\right) = 0.\n\\end{align}\nSince all three matrices $L, D$, and $M$ are positive definite, the\nmatrix pencil $P(\\lambda)=\\lambda^2 M + \\lambda D + L $ is also a positive definite matrix for any nonnegative $\\lambda$. Hence $P(\\lambda)$ is nonsingular, contradicting \\eqref{eq: pencil det zero appendix}.\n\n\n\n\\noindent\n\\textbf{Step 2:} Next, we prove that the eigenvalues of $J$ cannot be purely imaginary. We provide two different proofs for this step.\nAccording to our assumption, the damping matrix $D$ is nonsingular, and the pair $(M^{-1}\\nabla f(x_0),M^{-1}D)$ is always observable because the nullspace of $M^{-1}D$ is trivial. Hence, according to\n \\Cref{thrm: nec and suf for pure imaginary lossless}, the equilibrium point $(x_0,0)$ is hyperbolic, and $J(x_0)$ does not have any purely imaginary eigenvalue, so the first proof of this step is complete. 
For the second proof,\nlet $\\lambda \\in \\sigma(J(x_0))$, then according to \\Cref{lemma: relation between ev J and ev J11}, $\\exists v\\in\\mathbb{C}^n, v\\ne0$ such that $(\\lambda^2 M + \\lambda D + L) v = 0.$\n\tSuppose, for the sake of contradiction, that $\\lambda = \\frak{i} \\beta \\in \\sigma(J(x_0))$ for some nonzero real $\\beta$. Let $v = x + \\frak{i} y$, then $((L - \\beta^2 M) + \\frak{i}\\beta D) (x+\\frak{i} y) = 0$,\n\twhich can be equivalently written as\n\t\\begin{align}\n\t\\begin{bmatrix}\n\tL - \\beta^2 M & -\\beta D \\\\\n\t\\beta D & L - \\beta^2 M\n\t\\end{bmatrix} \\begin{bmatrix}\n\tx\\\\y\n\t\\end{bmatrix} = \\begin{bmatrix}\n\t0\\\\0\n\t\\end{bmatrix}.\n\t\\end{align}\n\tDefine the matrix \n\t\\begin{align} \\label{eq:M}\n\tH(\\beta) := \\begin{bmatrix}\n\t\\beta D & L - \\beta^2 M\\\\\n\t L - \\beta^2 M & -\\beta D\n\t\\end{bmatrix}. \n\t\\end{align}\n\tSince $L \\in \\mathbb{S}^n_{++}$, $H(\\beta)$ is a symmetric matrix.\n\tNotice also that $H(\\beta)$ cannot be positive semidefinite due to the diagonal blocks $\\pm\\beta D$. \n\tSince $D\\in \\mathbb{S}^n_{++}$, the determinant of $H(\\beta)$ can be expressed using Schur complement as\n\t\\begin{align*}\n\t {\\det}(H(\\beta)) = {\\det} (-\\beta D) {\\det} (\\beta D \n\t + \\beta^{-1} ( L - \\beta^2 M )D^{-1}( L - \\beta^2 M)).\n\t\\end{align*}\n\tSo we only need to consider the nonsingularity of the Schur complement.\n\tDefine the following matrices for the convenience of analysis: \n\t\\begin{align*}\n\t\tA(\\beta) &:= L - \\beta^2 M, \\\\\n\t\tB(\\beta) &:= D^{-\\frac{1}{2}}A(\\beta)D^{-\\frac{1}{2}},\\\\\n\t\tE(\\beta) &:= I + \\beta^{-2} B(\\beta)^2.\n\t\\end{align*}\n\tThe inner matrix of the Schur complement can be written as\n\n\n\t\t\\begin{align*}\n\t\t& \\beta D + \\beta^{-1} ( L - \\beta^2 M )D^{-1}( L - \\beta^2 M) \\\\ \n\t & = \\beta D^{\\frac{1}{2}} (I + \\beta^{-2} D^{-\\frac{1}{2}}A(\\beta)D^{-1}A(\\beta)D^{-\\frac{1}{2}})D^{\\frac{1}{2}} \\\\\n\t & = \\beta D^{\\frac{1}{2}} (I + \\beta^{-2} B(\\beta)^2)D^{\\frac{1}{2}} = \\beta D^{\\frac{1}{2}} E(\\beta) D^{\\frac{1}{2}}.\n\t\t\\end{align*}\n\n\tNotice that {$E(\\beta)$ and $B(\\beta)$ have the same eigenvectors and the eigenvalues of $E(\\beta)$ and $B(\\beta)$ have a one-to-one correspondence: $\\mu$ is an eigenvalue of $B(\\beta)$ if and only if $1 + \\beta^{-2} \\mu^2$ is an eigenvalue of $E(\\beta)$}. Indeed, we have \n\t $E(\\beta)v = v + \\beta^{-2}B(\\beta)^2v = v + \\beta^{-2}\\mu^2 v = (1+\\beta^{-2}\\mu^2)v$ for any eigenvector $v$ of $B(\\beta)$ with eigenvalue $\\mu$. \n\t%\n\tSince $B(\\beta)$ is symmetric, $\\mu$ is a real number. Hence, $E(\\beta)$ is positive definite (because $1+\\beta^{-2}\\mu^2>0$), therefore $H(\\beta)$ is nonsingular for any real nonzero $\\beta$. Then, the eigenvector $v=x+\\frak{i} y$ is zero which is a contradiction. This proves that $J(x_0)$ has no eigenvalue on the punctured imaginary axis.\n\n\n \n\n\n\t\n\t\\noindent \\textbf{Step 3:}\n\tFinally, we prove that any complex nonzero eigenvalue of $J(x_0)$ has a negative real part. 
\n\t%\n\t For a complex eigenvalue $\\alpha + \\frak{i} \\beta$ of $J(x_0)$ with $\\alpha\\ne 0, \\beta\\ne 0$, by setting $v = x + \\frak{i} y$, the pencil singularity equation becomes\n\t\\begin{align*}\n\t( L + (\\alpha + \\frak{i} \\beta) D + (\\alpha^2 - \\beta^2 + 2\\alpha\\beta \\frak{i} )M)(x+\\frak{i} y) = 0.\n\t\\end{align*}\n\tSimilar to Step 2 of the proof, define the matrix $H(\\alpha,\\beta)$ as\n\t\\begin{align*}\n\tH(\\alpha,\\beta):=\\begin{bmatrix}\n\t L + \\alpha D + (\\alpha^2-\\beta^2) M & -\\beta(D + 2\\alpha M) \\\\\n\t\\beta(D+2\\alpha M) & L + \\alpha D + (\\alpha^2-\\beta^2) M\n\t\\end{bmatrix}.\n\t\\end{align*}\n\tWe only need to consider two cases, namely 1) $\\alpha > 0, \\beta > 0$ or 2) $\\alpha <0, \\beta > 0$. For the first case, $\\beta(D+{2}\\alpha M)$ is invertible and positive definite, therefore, we only need to look at the invertibility of the Schur complement\n\t\\begin{align*}\n\t& S(\\alpha,\\beta) + T(\\alpha,\\beta) S^{-1}(\\alpha,\\beta) T(\\alpha,\\beta),\n\t\\end{align*}\n\twhere $S(\\alpha,\\beta):= \\beta(D+{2}\\alpha M)$ and $T(\\alpha,\\beta):= L + \\alpha D + (\\alpha^2-\\beta^2) M$.\n\tUsing the same manipulation as in Step 1 of the proof, we can see that the Schur complement is always invertible for any $\\alpha >0, \\beta>0$. This implies the eigenvector $v$ is 0, which is a contradiction. Therefore, the first case is not possible. So any complex nonzero eigenvalue of $J(x_0)$ has a negative real part.\n\\end{proof}\t\n\n\n\n\n\n\n\n\n\n\n\n\\section{Proof of \\cref{thrm: nec and suf for pure imaginary lossy}}\n\\label{proof of thrm: nec and suf for pure imaginary lossy}\n\\begin{proof}\n %\n \n\n\tThere exist\t$\\lambda \\in \\mathbb{R_{+}}, \\lambda\\ne0$ and $x \\in \\mathbb{C}^n , x\\ne 0$ such that\n\t\\begin{align} \\label{eq: observability in lossy nonsymmetric}\n\t M^{-1}Lx = \\lambda x \\text{ and } M^{-1}Dx = 0.\n\t\\end{align}\n\n\tDefine $\\xi=\\sqrt{-\\lambda}$, which is a purely imaginary number. The quadratic matrix pencil $M^{-1}P(\\xi)=\\xi^2 I + \\xi M^{-1}D + M^{-1}L$ is singular because $M^{-1}P(\\xi)x = \\xi^2 x + \\xi M^{-1}Dx + M^{-1}Lx = -\\lambda x + 0 + \\lambda x = 0$. By \\cref{lemma: relation between ev J and ev J11}, $\\xi$ is an eigenvalue of $J$. Similarly, we can show $-\\xi$ is an eigenvalue of $J$. Therefore, $\\sigma(J)$ contains a pair of purely imaginary eigenvalues.\n\\end{proof}\n\n\n\n\n\n\n\n\t\n\n\t\n\t\n\n\n\n\n\\section{Proof of \\cref{thrm: original model vs referenced model}}\n\\label{proof of thrm: original model vs referenced model}\n\n\nLet us first prove the following useful lemma.\n\\begin{lemma} \\label{lemma: pencil for referenced systems}\n Let $(\\delta^0,\\omega^0)$ be an equilibrium point of the swing equation \\eqref{eq: swing equations} and $\\Psi(\\delta^0,\\omega^0)$ be the corresponding equilibrium point of the referenced model \\eqref{eq: Swing Equation Polar referenced}. 
Let $J^r$ denote the Jacobian of the referenced model at this equilibrium point.\n\tFor any $\\lambda \\ne 0$, $\\lambda$ is an eigenvalue of $J^r$ if and only if the quadratic matrix pencil $P(\\lambda):= \\lambda^2 M + \\lambda D + \\nabla P_e (\\delta^0)$ is singular.\n\\end{lemma}\n\\begin{proof}\n The referenced model \\eqref{eq: Swing Equation Polar referenced} can be written as\n\\begin{align}\\label{eq:swing reduced}\n\\begin{bmatrix}\n\\dot {\\psi} \\\\\n\\dot \\omega \n\\end{bmatrix}\n=\n\\begin{bmatrix}\nT_1 \\omega \\\\\n- D M^{-1} \\omega + M^{-1} (P_m -P_e^r(\\psi))\n\\end{bmatrix}.\n\\end{align}\nNote that the Jacobian of the referenced flow function $\\nabla P^r_e(\\psi)$ is an $n \\times (n-1)$ matrix and we have $\\nabla P^r_e(\\psi^0) = \\nabla P_e (\\delta^0) T_2$, where\n\\begin{align}\nT_2:=\\begin{bmatrix}\nI_{n-1} \\\\ 0_{1\\times(n-1)}\n\\end{bmatrix} \\in \\mathbb{R}^{n\\times(n-1)}.\n\\end{align}\nAccordingly, the Jacobin of the referenced model \\eqref{eq: Swing Equation Polar referenced} is\n\\begin{align}\nJ^r=\\begin{bmatrix}\n0_{(n-1)\\times (n-1)} & T_1 \\\\\n-M^{-1} \\nabla P_e (\\delta^0) T_2 & -DM^{-1}\n\\end{bmatrix}.\n\\end{align}\n\t\\textit{Necessity:} Let $\\lambda$ be a nonzero eigenvalue of $J^r$ and $ (v_1 , v_2 ) $ be the corresponding eigenvector with $v_1\\in\\mathbb{C}^{n-1}$ and $v_2\\in\\mathbb{C}^{n}$. Then\t\n\t\\begin{align} \\label{eq: J cha eq referenced}\n\t\\begin{bmatrix}\n\t0_{(n-1)\\times (n-1)} & T_1 \\\\\n\t-M^{-1}\\nabla P_e (\\delta^0) T_2 & -DM^{-1}\n\t\\end{bmatrix} \\begin{bmatrix} v_1 \\\\v_2 \\end{bmatrix} = \\lambda \\begin{bmatrix} v_1 \\\\v_2 \\end{bmatrix},\n\t\\end{align}\n\twhich implies that $ T_1 v_2 = \\lambda v_1$. Since $\\lambda \\ne 0$, we can substitute $ \\lambda^{-1} T_1 v_2 = v_1$ in the second equation to obtain \n\t\\begin{align} \n\t\\left(\\lambda^2 M + \\lambda D + \\nabla P_e (\\delta^0) T_2 T_1 \\right) v_2 = 0. \\label{eq: quadratic matrix pencil referenced}\n\t\\end{align} \n\tSince the eigenvector $(v_1,v_2)$ is nonzero, we have $v_2 \\not = 0$ (otherwise $ v_1=\\lambda^{-1} T_1 0 = 0 \\implies (v_1,v_2) = 0 $), Eq. \\eqref{eq: quadratic matrix pencil referenced} implies that the matrix pencil $P(\\lambda)= \\lambda^2 M + \\lambda D + \\nabla P_e (\\delta^0) T_2 T_1$ is singular. Next, we show that $\\nabla P_e (\\delta^0) T_2 T_1 = \\nabla P_e (\\delta)$. Since $\\nabla P_e (\\delta^0)$ has zero row sum, it can be written as\n\t\\begin{align*}\n\t\\nabla P_e (\\delta^0)=\\begin{bmatrix}\n\tA&b\\\\c^\\top&d\n\t\\end{bmatrix}, \\text{ where } A\\mathbf{1}=-b, c^\\top \\mathbf{1}=-d.\n\t\\end{align*}\n\tTherefore, we have\n\t\\begin{align*}\n\t\\nabla P_e (\\delta^0) T_2 T_1 = \\begin{bmatrix}\n\tA&b\\\\c^\\top&d\n\t\\end{bmatrix} \n\t\\begin{bmatrix}\n\tI_{n-1} \\\\ 0\n\t\\end{bmatrix}\n\t\\begin{bmatrix}\n\tI_{n-1}& -\\mathbf{1}\n\t\\end{bmatrix}\n\t= \n\t\\begin{bmatrix}\n\tA&-A\\mathbf{1}\\\\c^\\top&-c^\\top\\mathbf{1}\n\t\\end{bmatrix}\n\t=\n\t\\begin{bmatrix}\n\tA&b\\\\c^\\top&d\n\t\\end{bmatrix}. \n\t\\end{align*}\n\t\n\t\\textit{Sufficiency:}\t\n\tSuppose there exists $\\lambda \\in \\mathbb{C}, \\lambda \\ne 0$ such that $P(\\lambda)= \\lambda^2 M + \\lambda D + \\nabla P_e (\\delta^0)$ is singular. Choose a nonzero $v_2 \\in \\ker (P(\\lambda))$ and let $v_1:=\\lambda^{-1} T_1 v_2$. 
\n\tAccordingly, the characteristic equation \\eqref{eq: J cha eq referenced} holds, and consequently, $\\lambda$ is a nonzero eigenvalue of $J^r$.\n\\end{proof}\n\nNow, we are ready to prove \\cref{thrm: original model vs referenced model}.\n\n\\begin{proof}\nAny equilibrium point $(\\delta^0,\\omega^0)$ of the swing equation model \\eqref{eq: swing equations} is contained in the set\n\\begin{align*}\n \\mathcal{E}:=\\left \\{ (\\delta,\\omega)\\in \\mathbb{R}^{2n} : \\omega = 0, P_{m_j} = P_{e_j}(\\delta), \\quad \\forall j \\in \\{1,...,n\\} \\right \\}.\n\\end{align*}\nLet $(\\psi^0,\\omega^0)=\\Psi(\\delta^0,\\omega^0)$, and note that $\\omega^0=0$. From \\eqref{eq: flow function} and \\eqref{eq: flow function referenced compact}, we observe that $P_{e_j}(\\delta^0) = P_{e_j}^r(\\psi^0), \\forall j \\in \\{1,...,n\\}$ where $\\psi^0_n :=0$. Therefore, $(\\psi^0,\\omega^0)$ is an equilibrium point of the the referenced model \\eqref{eq: Swing Equation Polar referenced}. \n\\\\\nTo prove the second part, recall that $\\lambda$ is an eigenvalue of the Jacobian of \\eqref{eq: swing equations} at $(\\delta^0,\\omega^0)$ if and only if $\\det( \\nabla P_e (\\delta^0) +\\lambda D + \\lambda^2 M)=0$. \nAccording to \\cref{lemma: pencil for referenced systems}, the nonzero eigenvalues $J$ and $J^r$ are the same. Moreover, the referenced model \\eqref{eq: Swing Equation Polar referenced} has one dimension less than the swing equation model \\eqref{eq: swing equations}. This completes the proof.\n\\end{proof}\n\n\n\n\\section{Proof of \\Cref{prop:hyperbolicity n2 n3}}\n\\label{proof of prop:hyperbolicity n2 n3}\n\nWe prove the following lemmas first:\n\\begin{lemma} \\label{lemma: complex representation}\nLet $A,B\\in\\mathbb{R}^{n\\times n}$ and define \n$$C:= \\small{ \\left[\\begin{array}{cc}A&-B\\\\B&A\\end{array}\\right] }.$$\nThen $\\mathop{\\bf rank}(C)=2\\mathop{\\bf rank}(A+\\frak{i} B)$ which is an even number.\n\\end{lemma}\n\\begin{proof}\nLet $V:= \\frac{1}{\\sqrt{2}} \\left[\\begin{array}{cc}I_n&\\frak{i} I_n\\\\\\frak{i} I_n&I_n\\end{array}\\right] $ and observe that $V^{-1}=\\bar{V}=V^*$, where $\\bar{V}$ stands for the entrywise conjugate and $V^*$ denotes the conjugate transpose of $V$. We have \n\\begin{align*}\nV^{-1}CV=\\left[\\begin{array}{cc}A-\\frak{i} B&0\\\\0&A+\\frak{i} B\\end{array}\\right] = (A-\\frak{i} B) \\oplus (A+\\frak{i} B).\n\\end{align*}\nSince rank is a similarity invariant, we have $\\mathop{\\bf rank}(C)=\\mathop{\\bf rank}((A-\\frak{i} B) \\oplus (A+\\frak{i} B))=2\\mathop{\\bf rank}(A+\\frak{i} B)$.\n\\end{proof}\n\\begin{lemma} \\label{lemma: matrix form of pencil sinularity}\n$\\lambda = \\frak{i} \\beta$ is an eigenvalue of $J$ if and only if\nthe matrix\n\\begin{align*}\n\\mathcal{M}(\\beta):= \\begin{bmatrix}\nL-\\beta^2 M & -\\beta D\\\\ \\beta D & L-\\beta^2 M \n\\end{bmatrix}\n\\end{align*}\nis singular. Here $L=\\nabla P_e (\\delta^0)$.\n\\end{lemma}\n\\begin{proof}\nAccording to \\Cref{lemma: relation between ev J and ev J11}, $\\frak{i} \\beta \\in \\sigma(J)$ if and only if $\\exists x\\in\\mathbb{C}^n,x\\ne0$ such that\n \\begin{align} \\label{eq: pencil matrix singularity theorem proof}\n \\left( L - \\beta^2 M + \\frak{i} \\beta D \\right) x = 0.\n \\end{align}\nDefine $A:=L - \\beta^2 M$, $B:=\\beta D$, and let $x=u+\\frak{i} v$. 
Rewrite \\eqref{eq: pencil matrix singularity theorem proof} as $(A+\\frak{i} B)(u+\\frak{i} v) = (Au - Bv) + \\frak{i} (Av + Bu) = 0$,\nwhich is equivalent to \n\\begin{align*}\n\\begin{bmatrix}\nA & -B \\\\ B & A \n\\end{bmatrix} \\begin{bmatrix}\nu \\\\ v\n\\end{bmatrix} = 0.\n\\end{align*}\n\\end{proof}\nNow, we are ready to prove \\Cref{prop:hyperbolicity n2 n3}:\nAccording to \\Cref{lemma: matrix form of pencil sinularity}, $\\frak{i} \\beta\\in\\sigma(J)$ for some nonzero real $\\beta$ if and only if the matrix\n\\begin{align*} \n\\mathcal{M}(\\beta) := \\begin{bmatrix}\nL - \\beta^2 M & -\\beta D\\\\\n\\beta D & L - \\beta^2 M\n\\end{bmatrix} \n\\end{align*}\nis singular. Recall that $L:=\\nabla P_e (\\delta^0)$. In the sequel, we will show under the assumptions of \\Cref{prop:hyperbolicity n2 n3}, $\\mathcal{M}(\\beta)$ is always nonsingular. First, we prove the theorem for $n=2$. In this case, \n$$\nL= \\left[ \\begin{array}{cc}a_{12}&-a_{12}\\\\-a_{21}&a_{21}\\end{array}\\right], a_{12}>0, a_{21}>0.\n$$\nAccording to \\Cref{lemma: complex representation}, we have $\\mathop{\\bf rank}(\\mathcal{M}(\\beta))=2\\mathop{\\bf rank}(L-\\beta^2M- \\frak{i} \\beta D)$, and $L-\\beta^2M- \\frak{i} \\beta D$ is full rank because \n\\begin{align*}\nL-\\beta^2M- \\frak{i} \\beta D= \\left[ \\begin{array}{cc}a_{12}-\\beta^2 m_1&-a_{12}\\\\-a_{21}&a_{21}-\\beta^2 m_2- \\frak{i} \\beta d_2\\end{array}\\right],\n\\end{align*}\nand $\\det(L-\\beta^2M- \\frak{i} \\beta D)=(a_{12}-\\beta^2 m_1)(a_{21}-\\beta^2 m_2- \\frak{i} \\beta d_2)-a_{12}a_{21}$. It is easy to see that the real part and imaginary parts of the determinant cannot be zero at the same time. Therefore, $ \\mathcal{M}(\\beta)$ is also nonsingular and a partially damped $2$-generator system cannot have any pure imaginary eigenvalues. \n\nNow, we prove the theorem for $n=3$. Let $A\\in\\mathbb{R}^{2n\\times 2n}$. For index sets $\\mathcal{I}_1\\subseteq\\{1,\\cdots,2n\\}$ and $\\mathcal{I}_2\\subseteq\\{1,\\cdots,2n\\}$, we denote by $A[\\mathcal{I}_1,\\mathcal{I}_2]$ the (sub)matrix of entries that lie in the rows of $A$ indexed by $\\mathcal{I}_1$ and the columns indexed by $\\mathcal{I}_2$. For a $3$-generator system, the matrix $L$ can be written as\n\\begin{align*}\n L = \\begin{bmatrix}\n a_{12}+a_{13} & -a_{12} & -a_{13}\\\\\n -a_{21} & a_{21}+a_{23} &-a_{23}\\\\\n -a_{31} & - a_{32} & a_{31} + a_{32}\n \\end{bmatrix}\n\\end{align*}\nwhere $a_{jk}\\ge0, \\forall j,k\\in\\{1,2,3\\}, j\\ne k$ and $a_{jk}=0 \\iff a_{kj}=0$. Moreover, $M=\\mathbf{diag}(m_1,m_2,m_3)$ and $D=\\mathbf{diag}(0,d_2,d_3)$.\nWe complete the proof in three steps:\n\\begin{itemize}\n\\item Step $1$: We show that the first four columns of $\\mathcal{M}(\\beta)$ are linearly independent, i.e., $\\mathop{\\bf rank}(\\mathcal{M}(\\beta))\\ge4$.\\\\\nTo do so, we show that the equation \n\\begin{align*}\n \\mathcal{M}(\\beta)\\left [ \\{1,...,6\\},\\{1,2,3,4\\}\\right ] \n \\begin{bmatrix}\nx_1 \\\\ x_2 \\\\x_3 \\\\ x_4\n\\end{bmatrix} = 0\n\\end{align*}\nhas only the trivial solution.\n\n\\begin{enumerate}[(i)]\n\t\\item If $a_{12}+a_{13}-\\beta^2 m_1\\ne0$, then $x_4=0$. Moreover, we have $\\beta d_2 x_2 = 0$ and $\\beta d_3 x_3 = 0$ which imply $x_2=x_3=0$ because $\\beta, d_2$, and $d_3$ are nonzero scalars. 
Finally, the connectivity assumption requires that at least one of the two entries $a_{21}$ and $a_{31}$ are nonzero, implying that $x_1=0$.\n\t\n\t\\item If $a_{12}+a_{13}- \\beta^2 m_1=0$, then by expanding the fifth and sixth rows we get\n\t\\begin{align*}\n\t & \\beta d_2x_2 -a_{21}x_4=0 \\implies x_2=\\frac{a_{21}}{\\beta d_2}x_4,\\\\\n\t\t& \\beta d_3x_3 -a_{31}x_4=0, \\implies x_3= \\frac{a_{31}}{\\beta d_3}x_4.\n\t\\end{align*}\n\tExpanding the first row and substituting $x_2$ and $x_3$ from above gives\n\t\\begin{align*}\n\t& -a_{12}x_2-a_{13}x_3=0 \\implies -\\frac{a_{12} a_{21}}{\\beta d_2}x_4 -\\frac{a_{13}a_{31}}{\\beta d_3}x_4 =0.\n\t\\end{align*}\n\tThe connectivity assumption (and the fact that $a_{kj}\\ge0, \\forall k\\ne j$ and $a_{kj}=0\\iff a_{jk}=0$) leads to $x_4=0$. This implies $x_2=x_3=0$ and further $x_1=0$ due to the connectivity assumption.\n\t\n\\end{enumerate}\n\n\\item Step $2$: We prove that the first five columns of $\\mathcal{M}(\\beta)$ are linearly independent, i.e., $\\mathop{\\bf rank}(\\mathcal{M}(\\beta))\\ge5$.\\\\\nTo do so, we show that the equation \n\n\n\\begin{align*}\n \\mathcal{M}(\\beta) \\left [\\{1,...,6\\},\\{1,2,3,4\\} \\right ] \n \\begin{bmatrix}\nx_1 \\\\ x_2 \\\\x_3 \\\\ x_4\n\\end{bmatrix} = \\begin{bmatrix}\n0 \\\\ -\\beta d_2 \\\\0 \\\\ -a_{12} \\\\ a_{21}+a_{23}- \\beta^2 m_2 \\\\ -a_{32}\n\\end{bmatrix}\n\\end{align*}\nhas no solution, i.e., the fifth column is not in the span of the first four columns. Based on the equation in the fourth row we consider the following situations:\n\\begin{enumerate}[(i)]\n\t\\item If $a_{12}+a_{13}- \\beta^2 m_1=0$ and $a_{12}\\ne0$, then there exists no solution.\n\t\n\t\\item If $a_{12}+a_{13}- \\beta^2 m_1=0$ and $a_{12}=0$, then $a_{13}=\\beta^2 m_1$. Expanding the first row yields $-a_{13}x_3=0\\implies x_3=0$. Expanding the second row provides $(a_{23}-\\beta^2 m_2)x_2=-\\beta d_2 \\implies x_2=- \\frac{\\beta d_2}{(a_{23}- \\beta^2 m_2)}$. Note that we assume $(a_{23}-\\beta^2 m_2)\\ne0$, since otherwise the system has no solution.\n\tFinally, we expand the fifth row and substitute $x_2$ into it:\n\t\\begin{align*}\n\t \\beta d_2 x_2 = a_{23}-\\beta^2 m_2 \n\t& \\implies -\\frac{(\\beta d_2)^2}{(a_{23}-\\beta^2 m_2)} = a_{23}-\\beta^2 m_2 \\\\ \n\t& \\implies -(\\beta d_2)^2 = (a_{23}-\\beta^2 m_2)^2\n\t\\end{align*}\n\twhich is a contradiction.\n\t\n\t\\item If $a_{12}+a_{13}-\\beta^2 m_1\\ne0$ and $a_{12}=0$, then $x_4=0$. By expanding the fifth and sixth rows we get\n\t\\begin{align*}\n\t& \\beta d_2x_2 = a_{23}- \\beta^2 m_2 \\implies x_2=\\frac{a_{23}-\\beta^2 m_2}{\\beta d_2},\\\\\n\t& \\beta d_3x_3 = -a_{32} , \\implies x_3= -\\frac{a_{32}}{\\beta d_3}.\n\t\\end{align*}\n\tExpanding the second row and substituting $x_2$ and $x_3$ from above gives\n\t\\begin{align*}\n\t (a_{23}-\\beta^2 m_2)x_2-a_{23}x_3 = -\\beta d_2 \n\t \\implies \\frac{(a_{23}-\\beta^2 m_2)^2}{\\beta d_2} + \\frac{a_{23}a_{32}}{\\beta d_3} = -\\beta d_2\n\t\\end{align*}\n which is a contradiction.\n \n \\item If $a_{12}+a_{13}-\\beta^2 m_1\\ne0$ and $a_{12}\\ne0$, then $x_4=\\frac{-a_{12}}{a_{12}+a_{13}- \\beta^2 m_1}$. By expanding the fifth and sixth rows and substituting $x_4$ we get\n \\begin{align*}\n & \\beta d_2x_2 + \\frac{a_{12}a_{21}}{a_{12}+a_{13}- \\beta^2 m_1} = a_{21}+a_{23}- \\beta^2 m_2,\\\\\n & \\beta d_3x_3 + \\frac{a_{12}a_{31}}{a_{12}+a_{13}-\\beta^2 m_1} = -a_{32}.\n \\end{align*}\n Now we expand the first row to get $ x_1=\\frac{a_{12}x_2+a_{13}x_3}{a_{12}+a_{13}-\\beta^2 m_1}$. 
Finally, we expand the second row and substitute for $x_1, x_2$, and $x_3$:\n \\begin{align*}\n -a_{21}\\frac{a_{12}x_2+a_{13}x_3}{a_{12}+a_{13}-\\beta^2 m_1} + (a_{21}+a_{23}-\\beta^2 m_2)x_2 -a_{23}x_3=-\\beta d_2, \n \\end{align*}\n which implies\n \\begin{align*}\n & ( (a_{21}+a_{23}- \\beta^2 m_2) -\\frac{a_{12}a_{21}}{a_{12}+a_{13}-\\beta^2 m_1} ) x_2 \\\\\n & - (a_{23} +\\frac{a_{13}a_{21}}{a_{12}+a_{13}-\\beta^2 m_1}) x_3 =-\\beta d_2, \n \\end{align*}\n or equivalently,\n \\begin{align*}\n & \\frac{1}{\\beta d_2} ( (a_{21}+a_{23}- \\beta^2 m_2) -\\frac{a_{12}a_{21}}{a_{12}+a_{13}- \\beta^2 m_1} )^2 \\\\\n & + \\frac{1}{\\beta d_3} (a_{23} +\\frac{a_{13}a_{21}}{a_{12}+a_{13}- \\beta^2 m_1})^2 =-\\beta d_2,\n \\end{align*}\n which is a contradiction.\n\\end{enumerate}\n\n\\item Step $3$: $\\mathop{\\bf rank}(\\mathcal{M}(\\beta))$ is an even number.\\\\\nFinally, \\Cref{lemma: complex representation} precludes the rank of $\\mathcal{M}(\\beta)$ from being equal to $5$. Therefore, $\\mathop{\\bf rank}(\\mathcal{M}(\\beta))=6$, i.e., $\\mathcal{M}(\\beta)$ is always nonsingular. This completes the proof.\n\\end{itemize}
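\n\nAs a numerical sanity check of \\Cref{prop:hyperbolicity n2 n3} for $n=3$, one can sample random data that respect the assumptions of the theorem (positive weights $a_{jk}$ on all lines, positive inertia, and exactly one undamped generator) and confirm that the oscillatory eigenvalues of $J$ stay away from the imaginary axis. The Python sketch below is illustrative only; the sampled matrices are synthetic and do not correspond to any particular power system.\n\\begin{verbatim}\n# Random sanity check of the n = 3 case: oscillatory eigenvalues of J are\n# never purely imaginary when exactly one generator is undamped.\nimport numpy as np\n\nrng = np.random.default_rng(0)\nworst = np.inf\nfor _ in range(1000):\n    A = rng.uniform(0.1, 2.0, size=(3, 3))      # a_jk > 0 (lossy allowed, all lines present)\n    L = -A\n    np.fill_diagonal(L, 0.0)\n    np.fill_diagonal(L, -L.sum(axis=1))         # zero row sums, as for nabla P_e\n    M = np.diag(rng.uniform(0.5, 2.0, size=3))  # positive inertia\n    D = np.diag(np.r_[0.0, rng.uniform(0.1, 2.0, size=2)])   # one undamped generator\n    J = np.block([[np.zeros((3, 3)), np.eye(3)],\n                  [-np.linalg.solve(M, L), -np.linalg.solve(M, D)]])\n    e = np.linalg.eigvals(J)\n    osc = e[np.abs(e.imag) > 1e-6]              # complex (oscillatory) eigenvalues\n    if osc.size:\n        worst = min(worst, np.abs(osc.real).min())\nprint(worst)   # stays bounded away from zero: no purely imaginary eigenvalues encountered\n\\end{verbatim}\n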