diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzqhna" "b/data_all_eng_slimpj/shuffled/split2/finalzzqhna" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzqhna" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\label{sec1}\n\nThe space of flattenings $F(L)$ associated to a simplicial $k$-sphere $L$ is the space of all simplicial embeddings of the cone $cL$ of $L$ in $\\mathbb{R}^{k+1}$. The space of flateenings of simplicial spheres was first studied by S.S Cairns \\cite{cairns} in the early 1940s. In \\cite{cairns}, Cairns proved that the space of all flattenings of a simplicial 2-sphere which have an orientation preserving isomorphism onto a given triangulation is path connected. \n\n\nThe space of flattenings associated to a simplicial 1-sphere is homotopy equivalent to the orthogonal group $O(2)$ as we will prove in Theorem \\ref{my_flat} \\cite{kayman}. The only known positive result about the topology of $F(L)$ when $\\dim(L) > 2$ is the theorem of N.H Kuiper \\cite{kuiper:nh} where he proved that $F(L)$ has the homotopy type of $O(n+1)$ when $L$ is the boundary of the $(n+1)-$simplex. In \\cite{milin}, Milin showed that there exist a simplicial sphere $L$ of dimension $3$ whose subset of $F(L)$ consisting of flattenings which have an orientation preserving isomorphism onto a given triangulation is not path connected.\n\n\nLet $\\Delta^n$ denote an $n$-simplex. We will establish that the space of flattenings associated to $\\partial\\Delta^n \\ast \\partial\\Delta^1$ has the homotopy type of an orthogonal group. \n\nLet $L$ be a simplicial k-sphere and CF(L) denote the quotient of $F(L)$ by $\\mathrm{GL}_{k+1}$ the invertible $(k+1)\\times (k+1)$ matrices. To establish the above statement, we will first prove that $CF(L)$ is contractible. In Section \\ref{or:flat}, we will introduce a poset $P(L)$ called \\textit{The poset of oriented matroid flattenings} of $L$ and a poset stratification map $\\pi: CF(L) \\rightarrow P(L)$. We will prove that $P(L)$ is contractible when $L$ is either a simplicial 1-sphere, $\\partial\\Delta^{n+1}$ the boundary of an $(n+1)$-simplex or $\\partial \\Delta^1 \\ast \\partial\\Delta^{n+1}$\n\nIn Section \\ref{top:flat}, we will prove for the above simplicial spheres that $\\{\\overline{\\pi^{-1}(M)} : M \\in P(L)\\}$ is a totally normal cellular decomposition of $CF(L)$. Theorem \\ref{thm:total} will then conclude that $\\|P(L)\\|$ can be embedded in $CF(L)$ as a deformation retract.\n\n\n\n\\section{Oriented Matroids} \\label{sec:back}\n\nWe will view elements of $\\mathbb{R}^n$ as $1\\times n$ row vectors so that $X$ is the rowspace of a $r \\times n$ matrix. Suppose $X \\in \\mathrm{Gr}(r, \\mathbb{R}^n)$ so that $X = \\mathrm{Rowspace}(v_1\\; v_2 \\; v_3 \\ldots \\; v_n)$. We consider the following function $\\chi : [n]^r \\rightarrow \\{+, - , 0\\}$ associated to $X$\n\t$$\\chi(i_1, i_2, \\ldots , i_r) = \\mathrm{sign}(\\det(v_{i_1} \\; v_{i_2} \\cdots v_{i_r}) )$$\nThe collection $\\{\\pm \\chi\\}$ is independent of the choice of basis vectors for $X$. The resulting functions $(\\pm \\chi)$ defines a rank $r$ \\textit{oriented matroid}. \n\t\n\t\nIn general, an oriented matroid can be obtained from an arrangement of pseudospheres as evident by the following theorem. 
Figure \\ref{pseudo} illustrates an arrangement of pseudospheres.\n\n\\begin{thm}(\\cite{jim:law}){The Topological Representation Theorem (Folkman-Lawrence 1978)}\n\tThe rank $r$ oriented matroids are exactly the sets $(E,\\mathcal{V}^*)$ arising from essential {\\em arrangements of pseudospheres} in $S^{r-1}$.\n\t \\end{thm}\n\n\n\n\\begin{figure}[htb]\n\\begin{tikzpicture}\n\t\\begin{scope}[scale=.7]\n\t\t\\draw (0,0) circle(3);\n\t\n\t\t\\draw[->](64:2.8)--(56:2.8);\n\t\t\\draw[->](111:2.8)--(118:2.8);\n\t\t\\draw[->, rotate=32](62:2.8)--(55:2.8);\n\t\t\\draw[->, rotate=121](62:2.8)--(55:2.8);\n\t\t\\draw[->, rotate=140](62:2.9)--(55:2.8);\n\t\t\\draw[rotate=120] (60:3) arc[radius = 5, start angle= 110, end angle= 184] node[above] at (0:3){$3$};\n\t\t\\draw[rotate=240] (60:3) arc[radius = 5, start angle= 110, end angle= 184] node[left] at (300:3){$4$};\n\t\t\\draw[rotate=30] (60:3) arc[radius = 5, start angle= 110, end angle= 184] node[above] at (60:3){$2$};\n\t\t\\draw(195:3) arc[radius=3.15, start angle=210, end angle =355] node[left] at (200:3){$5$};\n\t\t\n\t\t\\draw (60:3)..controls (100:2.5) and (120:1.5)..(120:1)..controls(-60:.9) and (-80:1)..(240:3) node[left] at (60:3.5){$1$};\n\t\n\\draw[] (160:3)..controls (170:2.5) and (200:1.5)..(200:1)..controls(195:.9) and (30:1)..(20:1)..controls(22:1.2) and (-10:2.5)..(-20:3) node[left] at (160:3){$6$};\n\\draw[->] (162:2.8)--(155:2.8);\n\\end{scope}\n\\end{tikzpicture}\n\\caption{Arrangement of Pseudospheres}\n\\label{pseudo}\n\\end{figure}\n\nA detailed introduction to the theory of oriented matroids can be found in the in the book \\cite{anders:bjo}. Associated to a rank $r$ oriented matroid on $n$ elements are the functions $\\pm \\chi : [n]^r \\rightarrow \\{+, - , 0\\}$ called the chirotopes. Let $\\{+, -, 0\\}$ be a poset with the partial order $0 < -$ and $0< +$.\n\n\\begin{defn}(\\cite{macp:rob})\n Let $\\mathcal{N} = (\\pm \\chi_1)$ and $\\mathcal{M} = (\\pm \\chi_2)$ be two rank $r$ oriented matroids. We say that $\\mathcal{N} \\leq \\mathcal{M}$ if and only if $\\chi_1 \\leq \\chi_2$ or $\\chi_1 \\leq -\\chi_2$. The oriented matroid $\\mathcal{M}$ is said to {\\em weak map} to $\\mathcal{N}$. \n\\end{defn}\n\n\n\n \\begin{defn}(\\cite{macp:rob})\n $\\mathrm{MacP}(p,n)$ denotes the poset of all rank $p$ oriented matroids on elements $\\{1,2,\\ldots, n\\}$, with weak map as the partial order. The poset is called the \\textit{MacPhersonian} ~\\cite{macp:rob}.\n \\end{defn}\n\nWe have explained how to obtain a rank $r$ oriented matroid on $n$ elements from a rank $r$ subspace of $\\mathbb{R}^n$. That is, there is a function $\\mu: \\mathrm{Gr}(r, \\mathbb{R}^n) \\rightarrow \\mathrm{MacP}(r,n): X \\to (\\pm \\chi_X)$.\n\nThe following Proposition and Theorem are from the work of the author in \\cite{kay:ab}, \\cite{kayman}.\n\n\t\\begin{prop}(\\cite{kay:ab})\\label{ref1}\n\t\tLet $M \\in \\mbox{MacP}(2,n)$. Then $\\partial \\overline{\\mu^{-1}(M)} = \\bigcup_{N < M} \\mu^{-1}(N)$\n\t\\end{prop}\n\n\\begin{thm}(\\cite{kay:ab})\\label{ref2} $\\{\\overline{\\mu^{-1}(M)} : M \\in \\; \\mbox{MacP}(2, n)\\}$ is a regular cell decomposition of $Gr(2,\\mathbb{R}^n)$.\n\t\\end{thm}\n\t\n\t\n\n\\section{Cellular stratified spaces }\\label{cellular}\n \n\\begin{defn}(\\cite{Dai:Tam})\n A \\textit{globular} $n$-cell is a subset $D$ of $D^n$ containing $H= \\mathrm{Int}(D^n)$. We call $D \\cap \\partial D^n$ the \\textit{boundary} of $D$ and denote it by $\\partial D$. The number $n$ is called the \\textit{globular dimension} of $D$. 
\n\\end{defn}\n\nA globular $n$-cell was introduced by Tamaki \\cite{Dai:Tam} as an extension of closure of $n$-cells to non-closed cells.\n\n\n\\begin{defn} (\\cite{Dai:Tam})\n Let $X$ be a topological space. For a non-negative integer $n$, an $n$-cell structure on a subspace $e \\subset X$ is a pair $(D, \\varphi)$ of a globular $n$-cell $D$ and a continuous map $$\\varphi: D \\rightarrow X$$\n satsifying the following conditions:\n \\begin{itemize}\n \\item $\\varphi(D) = \\bar{e}$ and $\\varphi : D \\rightarrow \\bar{e}$ is a quotient map.\n \n \\item The restriction $\\varphi$ : $H \\rightarrow e$ is a homeomorphism.\n \\end{itemize}\n \n\\end{defn}\n\n\\begin{defn}(\\cite{Dai:Tam})\nLet $X$ be a topological space and $P$ be a poset with the Alexandroff topology. A stratification of $X$ indexed by $P$ is an open continuous map \n$$\\pi : X \\rightarrow P$$\nsatisfying the condition that for each $\\lambda \\in P$, $e_\\lambda = \\pi^{-1}(\\lambda)$ is connected and locally closed. $X$ is called a \\textit{cellular stratified space} if each $e_\\lambda$ is homeomorphic to an open ball.\n\\end{defn}\n\n\n\\begin{defn}(\\cite{Dai:Tam}, \\cite{Fur:Muk})\n Let $X$ be a cellular stratifed space. $X$ is called totally normal if for each globular $n$-cell $(D_\\lambda, \\varphi)$, and $e_\\lambda = \\varphi(\\mathrm{Int}(D_\\lambda))$\n \\begin{enumerate}[(i)]\n \\item If $e_\\lambda \\cap \\overline{e_\\mu} \\neq \\emptyset $, then $e_\\lambda \\subseteq \\overline{e_\\mu}$. \n \\item There exists a structure of a regular cell complex on $S^{n-1}$ containing $\\partial D_\\lambda$ as a cellular stratified subspace of $S^{n-1}$.\n \\item For each cell $e$ in the cellular stratification on $\\partial D_\\lambda$, there exists a cell $e_\\eta$ in $X$ and a map $b: D_\\eta \\rightarrow \\partial D_\\lambda$ such that $b(\\mathrm{Int}(D_\\eta)) = e$ and $\\varphi_\\lambda \\circ b = \\varphi_\\eta$.\n \\end{enumerate}\n\\end{defn}\n\n\\begin{thm}\\label{thm:total} (\\cite{Dai:Tam})\nFor a totally normal cellular stratified space $X$ with stratification $\\pi : X \\rightarrow P$, there is an embedding of $\\|P\\|$ as a strong deformation retract of $X$. \n\\end{thm}\n\n\n\n\n\n\n\n\\section{Flattenings} \\label{def:flattenings}\n\n\\begin{defn} (\\cite{losik:1}, \\cite{milin})\n\tLet $L$ be a triangulation of a $k$-sphere, and let $cL$ be a simplicial cone over $L$. A flattening of $L$ is an embedding $\\psi : cL \\rightarrow \\mathbb{R}^{k+1}$ that maps the cone vertex to the origin and it is linear on simplices of $cL$. \n\\end{defn}\n\n\n\\begin{nota}\nLet $L$ be a simplicial $k$-sphere. We denote as in \\cite{losik:1} by $F(L)$ the space of all flattenings of $L$. Also, the group $GL_{k+1}$ of invertible $(k+1) \\times (k+1)$ matrices acts on $F(L)$; the quotient space denoted by $CF(L)$ is the configuration space of $L$.\n\\end{nota}\n\nThe space $F(L)$ is an open subset of $\\mathbb{R}^{(k+1)|\\mbox{Vert}(L)|}$, and so has a natural smooth manifold structure. The space of flattenings comes up in the problem of existence and uniqueness of differentiable structures on triangulated manifolds (see \\cite{losik:2}, \\cite{kuiper:nh}). \n\nWe will show that $CF(L)$ is contractible when $L$ is a simplicial $1$-sphere, and so, $F(L)$ has the homotopy type of $O(2)$. Some few other non-trivial results that are known about the topology of $CF(L)$ and $F(L)$ are as follows.\n\n\\begin{thm}(\\cite{cairns})\n\tLet $L$ be a triangulated $2$-sphere. Then $CF(L)$ is path connected. 
\n\\end{thm}\n\nFor $\\dim(L) \\geq 3$, Cairns \\cite{cairns} also showed that $CF(L)$ can be empty. When the dimension of $L$ is greater than $2$, Milin \\cite{milin} obtained the following negative result about the topology of $CF(L)$.\n \n \\begin{thm}\\cite{milin}\n \tThere exists a $3$ dimensional simplicial sphere $L$ such that $CF(L)$ is disconnected.\n \\end{thm}\n \nSo far, for $n > 2$ the only known positive result about the homotopy type of $F(L)$ is the following result of Kuiper.\n\n\\begin{thm}(\\cite{kuiper:nh})\\label{kuip}\n\tLet $\\partial \\Delta^{n+1}$ be the boundary of an $(n+1)$-simplex. Then $F(\\partial \\Delta^{n+1})$ has the homotopy type of $O(n+1)$. \n\\end{thm}\n\\begin{cor}(\\cite{kuiper:nh})\n\tLet $\\partial \\Delta^{n+1}$ be the boundary of an $(n+1)$-simplex. Then any two smoothings of $\\partial \\Delta^{n+1}$ are diffeomorphic. \n\\end{cor}\n\nAs in Theorem \\ref{kuip}, we also obtain the following positive result for the simplicial sphere $\\partial \\Delta^1 \\ast \\partial \\Delta^{n+1}$.\n\n\\begin{thm}\\label{my_flat}\nLet $\\partial \\Delta^{n+1}$ be the boundary of an $(n+1)$-simplex . Then $F(\\partial \\Delta^1 \\ast \\partial \\Delta^{n+1})$ has the homotopy type of $O(n+2)$. Let $L$ be a simplicial $1$-sphere. Then $F(L)$ has the homotopy type of $O(2)$. \n\\end{thm}\n\n\\begin{cor}\\label{cor_flat}\n\tLet $\\partial \\Delta^{n+1}$ be the boundary of an $(n+1)$-simplex. Then any two smoothings of $\\partial \\Delta^1 \\ast \\partial \\Delta^{n+1}$ are diffeomorphic. \n\\end{cor}\n\n\n\\section{Oriented matroid flattenings} \\label{or:flat}\nLet $L$ be a triangulated of a $k$-sphere, and $\\psi : cL \\rightarrow \\mathbb{R}^{k+1}$ a flattening of $L$. Then the arrangement of vectors $(\\psi(v): v \\in \\mathrm{Vert}(L))$ determines a rank $k+1$ oriented matroid $M$. 
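To illustrate this with an arbitrarily chosen configuration (one made up for this example, not taken from the figures below or from the literature), suppose a flattening of a simplicial $1$-sphere on the vertices $1,2,3,4$ sends them to $\\psi(1)=(2,0)$, $\\psi(2)=(1,2)$, $\\psi(3)=(-2,1)$ and $\\psi(4)=(-1,-3)$, with the cone vertex at the origin. The chirotope of Section \\ref{sec:back} then takes the values\n$$\\chi(1,2)=\\chi(1,3)=\\chi(2,3)=\\chi(3,4)=+, \\qquad \\chi(1,4)=\\chi(2,4)=-,$$\nand $(\\pm\\chi)$ is the rank $2$ oriented matroid determined by this flattening. 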
Definition \\ref{comb_abstr} gives a combinatorial abstraction for oriented matroids obtained from flattenings of a simplicial sphere.\n\n\n\\begin{figure}[htb]\n\\centering\n\\begin{subfigure}[t]{0.35\\textwidth}\n \\begin{tikzpicture}[line join=bevel,z=-5.5]\n \\coordinate (A1) at (-0.2,-2);\n \\coordinate (A2) at (-2,0);\n \\coordinate (A3) at (-0.67,2);\n \\coordinate (A5) at (0,0);\n \\coordinate (A4) at (2,0.67);\n \\draw (A1) -- (A2) -- (A3) ;\n \\draw (A1) -- (A4) -- (A3);\n \\path[->] (0,0) edge node[at end, left]{$3$} (-0.22, -2.2);\n \\path[->] (0,0) edge node[at end, left]{$4$} (-2.08, 0);\n \\path[->] (0,0) edge node[at end, left]{$1$} (-0.707, 2.06);\n \\path[->] (0,0) edge node[at end, above]{$2$} (2.1, 0.7);\n \\foreach \\Coor\/\\Texto\/\\Pos in \n {A5\/0\/below\n }\n \\node[circle,draw,inner sep=1.5pt,fill=black,label={\\Pos:$\\Texto$}] \n at (\\Coor) {};\n \\end{tikzpicture}\n\\end{subfigure}\n\n~\n\n\\begin{subfigure}[t]{0.35\\textwidth}\n \\begin{tikzpicture}[line join=bevel,z=-5.5]\n \\coordinate (A1) at (-0.2,-2);\n \\coordinate (A2) at (-2,0);\n \\coordinate (A3) at (0.6,2);\n \\coordinate (A5) at (0,0);\n \\coordinate (A4) at (2,0);\n \\draw (A1) -- (A2) -- (A3) ;\n \\draw (A1) -- (A4) -- (A3);\n \\path[->] (0,0) edge node[at end, left]{$3$} (-0.22, -2.2);\n \\path[->] (0,0) edge node[at end, left]{$4$} (-2.08, 0);\n \\path[->] (0,0) edge node[at end, left]{$1$} (0.613, 2.133);\n \\path[->] (0,0) edge node[at end, above]{$2$} (2.093, 0);\n \\foreach \\Coor\/\\Texto\/\\Pos in \n {A5\/0\/below\n }\n \\node[circle,draw,inner sep=1.5pt,fill=black,label={\\Pos:$\\Texto$}] \n at (\\Coor) {};\n \\end{tikzpicture}\n\\end{subfigure}\n\\caption{Flattenings of a simplicial $1$-sphere.}\n\\end{figure}\n\n\n\\begin{defn}\\label{comb_abstr}\n Let $L$ be a simplicial sphere of dimension $k$. An oriented matroid flattening of $L$ is a rank $k+1$ oriented matroid $\\mathcal{M}$ satisfying the following:\n \\begin{enumerate}[(i)]\n \\item The elements of $\\mathcal{M}$ are the vertices of $L$.\n \\item The set of vertices in a simplex are independent.\n \\item The set of vertices in a simplex has no other elements in its convex hull.\n \n \\end{enumerate}\n\\end{defn}\n\n \\begin{nota}\n The poset of all oriented matroid flattenings of $L$ is denoted by $P(L)$. \n\\end{nota}\n\n\\begin{prop}\\label{contr_prop}\nLet $L$ be a simplicial sphere and $\\partial \\Delta^n$ the boundary of an $n$-simplex. Then $\\|P(L)\\|$ is contractible when $L$ is either a simplicial $1$-sphere, $\\partial \\Delta^n$ or $\\partial \\Delta^1 \\ast \\partial \\Delta^n$. \n\\end{prop}\n\n\\begin{proof}\nThe poset $P(\\partial \\Delta^n)$ consists of a point. Let $P(\\partial \\Delta^n) = \\{\\mathcal{M}_n\\}$. For the sphere $\\partial\\Delta^1 \\ast \\partial \\Delta^n$, $P(\\partial\\Delta^1 \\ast \\partial \\Delta^n)$ has a minimum; given by the join $\\mathcal{M}_1 \\oplus \\mathcal{M}_n$ of two oriented matroids.\n\nIn the case when $L$ is a simplicial $1$-sphere, this will follow by induction on the number of vertices in $\\mathrm{Vert}(L)$. Let $L_n$ denote a simplical $1$-sphere on $n$ vertices. We know that $P(L_3)$ consists of a point say $M_0 = (\\pm \\chi_0)$. In the following argument, we will consider chirotopes with positive value on the basis $\\{1,2\\}$ \n\nLet $\\Sigma^{n+1}$ denote a subposet of $P(L_{n+1})$ consisting of $\\mathcal{M}'$ such that $\\mathcal{M}'\\setminus \\{n+1\\}$ is an element of $P(L_n)$. 
An oriented matroid in $\\Sigma^{n+1}$ is thus an extension of an oriented matroid $\\mathcal{M}$ in $P(L_n)$ by an element ${n+1}$, with $n+1$ lying in the convex hull of $\\{1,n\\}$.\n\nThere is a poset map $P(L_{n+1}) \\rightarrow \\Sigma^{n+1}$ obtained as composition of some poset maps as given below.\nLet $f_0 : P(L_{n+1}) \\rightarrow P(L_{n+1})$ defined as:\n$$f_0(\\chi)(B) = \\left\\{\\begin{array}{ccc}\n \\chi(B) & \\mbox{if} & B \\neq (n, 1) \\\\\n 0 & \\mbox{if} & B = (n,1) \\; \\mbox{and}\\; \\chi(n, 1) \\in \\{0, -\\}\\\\\n + & \\mbox{if} & B = (n,1) \\; \\mbox{and} \\; \\chi(n,1) = +\n\\end{array}\\right\\}$$\nLet $P_0 = f_0(P(L_{n+1}))$. The poset map $f_0$ is a lowering homotopy, and so $\\|P_0\\|$ is homotopy equivalent to $\\|P(L_{n+1})\\|$. We again consider another poset map $f_1: P_0 \\rightarrow P_0$ defined as:\n\n$$f_1(\\chi)(B) = \\left\\{\\begin{array}{ccc}\n \\chi(B) & \\mbox{if} & B \\neq (n,1) \\\\\n + & \\mbox{if} & B = (n,1) \n\\end{array}\\right\\}$$\n\nThe image of $f_1$ is denoted is given by $f_1(P_0) = \\Sigma^{n+1}$. The poset map $f_1 : P_0 \\rightarrow P_0$ is a raising homotopy, and so $\\|P_0\\|$ is homotopy equivalent to $\\|\\Sigma^{n+1}\\|$. The poset map $\\Sigma^{n+1} \\rightarrow P(L_n)$ induces a homotopy equivalence between $\\|\\Sigma^{n+1}\\|$ and $\\|P(L_n)\\|$.\n\\end{proof}\n\nFor a simplicial sphere $L$, there is a stratification map $\\mu_0: CF(L) \\rightarrow P(L)$. \n\n\\begin{conj}\n\tLet $L$ be a simplicial sphere of dimension at least $2$. Then $\\|P(L)\\|$ is contractible.\n\\end{conj}\n\n\n\n\n\n\n\n\\section{Topology of space of flattenings of some spheres} \\label{top:flat}\nLet $\\mu' : \\mathrm{Gr}(r, \\mathbb{R}^{r+2}) \\rightarrow \\mathrm{MacP}(r, r+2)$ and $\\mu: \\mathrm{Gr}(2, \\mathbb{R}^n) \\rightarrow \\mathrm{MacP}(2,n) $. Let $\\mu_0 : CF(L) \\rightarrow P(L)$ be the restriction of $\\mu'$ to $CF(L)$ when $L = \\partial \\Delta^1 \\ast \\partial \\Delta^{r-1}$ or the restriction of $\\mu$ when $L$ is a simplicial 1-sphere on $n$ vertices.\n\n\nThe stratification map $\\mu_0: CF(L) \\rightarrow P(L)$ gives a decomposition of $CF(L)$ into semi-algebraic sets $\\{\\mu_0^{-1}(M) : M \\in P(L)\\}$. When $L$ is a simplicial 1-sphere or $L = \\partial \\Delta^1 \\ast \\partial \\Delta^n$, we will show that the decomposition is a totally normal cellular decomposition. \n\n\nWe have the following commutative diagram\n\n\\begin{tikzcd}\n\\mathrm{Gr}(r, \\mathbb{R}^{r+2}) \\arrow[r, \"\\mu'\"] \\arrow[d, \"V \\mapsto V^{\\perp}\", labels = left ] & \\mathrm{MacP}(r, r+2) \\arrow[d, \"M\\mapsto M^*\"]\\\\\n\\mathrm{Gr}(2, \\mathbb{R}^{r+2}) \\arrow[r, \"\\mu\"] & \\mathrm{MacP}(2, r+2)\n\\end{tikzcd}\n\nThe commutativity of the diagram follows from the fact that \n\n$V = (I_r | A) \\in \\mathrm{Gr}(r, \\mathbb{R}^{r+2})$ if and only if $V^{\\perp} = \\mathrm{Rowspace}(-A^T|I_2) \\in \\mathrm{Gr}(2, \\mathbb{R}^{r+2})$. 
The oriented matroid $M^*$ is called the dual of $M$.\n\nThe map $\\mathrm{Gr}(r, \\mathbb{R}^{r+2}) \\rightarrow \\mathrm{Gr}(2, \\mathbb{R}^{r+2}) : V \\mapsto V^{\\perp}$ is a homeomorphism and the poset map $\\mathrm{MacP}(r, r+2) \\rightarrow \\mathrm{MacP}(2, r+2) : M \\mapsto M^*$ is a poset isomorphism.\n\n\nThe following result thus follows from Theorem \\ref{ref2}\nand the commutativity of the diagram described above.\n\n\n\\begin{thm}\\label{thm:reg}\n\tLet $M \\in \\mbox{MacP}(r, r+2)$ be a rank $r$ oriented matroid on $r+2$ elements, and $\\mu' : \\mathrm{Gr}(r, \\mathbb{R}^{r+2}) \\rightarrow \\mbox{MacP}(r,r+2) $. Then $\\{\\overline{(\\mu')^{-1}(M)}: M \\in \\mathrm{MacP}(r, r+2)\\}$ is a regular cell decomposition of $\\mathrm{Gr}(r, \\mathbb{R}^{r+2})$.\n\\end{thm}\n\n\n\\begin{prop}\\label{tot_nor}\nLet $L$ be a simplicial sphere and $\\mu_0 : CF(L) \\rightarrow P(L)$ a stratification map. If $L$ is a simplicial $1$-sphere or $L = \\partial \\Delta^1 \\ast \\partial \\Delta^n$, then the decomposition $\\{\\mu_0^{-1}(M) : M \\in P(L)\\}$ is a totally normal cellular decomposition of $CF(L).$\n\\end{prop}\n\n\\begin{proof}\n\\begin{enumerate}[(i)]\n \\item Suppose $L$ is as given above. It was proven in Proposition \\ref{ref1} that if $N, M \\in P(L)$ such that $N < M$, then $\\mu_0^{-1}(N) \\subseteq \\overline{\\mu_0^{-1}(M)}$. So, the decomposition $\\{\\mu_0^{-1}(M) : M \\in P(L)\\}$ is normal.\n \n \\item In Theorem \\ref{ref2}, it was proven that $\\{\\overline{\\mu^{-1}(M)} : M \\in \\; \\mbox{MacP}(2, |\\mathrm{Vert}(L)|)\\}$ is a regular cell decomposition of $Gr(2,\\mathbb{R}^{|\\mathrm{Vert}(L)|})$. Similarly, we have in Theorem \\ref{thm:reg} that $\\{\\overline{(\\mu')^{-1}(M)} : M \\in \\; \\mbox{MacP}(r, r+2)\\}$ is a regular cell decomposition of $Gr(r,\\mathbb{R}^{r+2})$.\n \n If $L$ is a simplicial $1$-sphere, and $M \\in P(L)$, then $\\partial \\overline{\\mu^{-1}(M)}$ is a regular cellular cell complex homeomorphic to a sphere. Let $\\overline{\\mu_0^{-1}(M)}$ denote the closure of $\\mu_0^{-1}(M)$ in $CF(L)$. Then $\\partial \\overline{\\mu^{-1}(M)}$ contains $\\partial \\overline{\\mu_0^{-1}(M)}$ as a cellular stratified subspace. Similarly when $L = \\partial \\Delta^1 \\ast \\partial \\Delta^n$ and $M \\in P(L)$, $\\partial \\overline{(\\mu')^{-1}(M)}$ contains $\\partial \\overline{\\mu_0^{-1}(M)}$ as a cellular stratified subspace.\n \n \\item $D_M = \\overline{\\mu_0^{-1}(M)}$, and let $\\varphi_M$ be the restriction to $D_M$ of the characteristic map of the cell $\\overline{\\mu^{-1}(M)}$ if $L$ is a simplicial $1$-sphere or restriction of the characteristic map of $\\overline{(\\mu')^{-1}(M)}$ if $L = \\partial \\Delta^1 \\ast \\partial \\Delta^n$.\n \n For a cell $e$ in the boundary of $D_M$, there exists an oriented matroid $N$ in $P(L)$ such that $N < M$ and $\\mu_0^{-1}(N) = e$. The map $b : D_N \\rightarrow \\partial D_M$ is given by $b = (\\varphi_M)^{-1} \\circ \\varphi_N.$ \n \n \\end{enumerate}\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{my_flat}]\nSuppose $L$ is a simplicial $1$-sphere or $L= \\partial \\Delta^1 \\ast \\partial \\Delta^n$. The decomposition $\\{\\mu_0^{-1}(M): M \\in P(L)\\}$ is a totally normal cellular decomposition of $CF(L)$ by Proposition \\ref{tot_nor}. It thus follows from Theorem \\ref{thm:total} that $\\|P(L)\\|$ is a deformation retract of $CF(L)$. We know from Proposition \\ref{contr_prop} that $\\|P(L)\\|$ is contractible. 
Hence, $CF(L)$ is contractible.\n\nSuppose $L$ is a simplicial $1$-sphere. We know that $F(L)|_H \\cong \\mathrm{GL}_2(\\mathbb{R}) \\times CF(L)$. Hence $F(L)$ has the homotopy type of $O(2)$. Similarly, if $L = \\partial \\Delta^1 \\ast \\partial \\Delta^n$, then $F(L) \\cong \\mathrm{GL}_{n+1}(\\mathbb{R}) \\times CF(L)$. Hence $F(L)$ has the homotopy type of $O(n+1)$.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{ Introduction }\n\nThe development of clever fabrication techniques in semiconductors has\nbrought the reduction of the effective dimension of electronic states\nfrom their usual three-dimensional character in bulk materials, to \n``zero-dimensional'' states in quantum dots.\\cite{1} The quantum effects\nof these lower-dimensional systems have attracted much attention in\nrecent years, due in part to possible applications which include\nelectronic devices based on parallel and perpendicular transport,\nquantum well lasers, and optical devices. \\cite{2} Two-dimensional\nquantum-well or quantum-film structures, which provide confinement in\none space dimension, have been well investigated, and quantum exciton\neffects observable even at room temperature have been studied.\\cite{3} \nThe confinement of excitons has also been shown to result in very large\nelectro-optical shifts of the absorption peaks, producing the so-called\nquantum-confined Stark effect.\\cite{3,4}\n \nIn quasi-zero-dimensional quantum dot systems, the additional quantum\nconfinement dramatically changes the optical and electronic properties,\ncompared to those in higher-dimensional structures, as the whole\nsingle-particle spectrum is now discrete. Correspondingly, the\nexcitonic spectrum is expected to be strongly affected. The properties\nof excitons confined in quantum boxes were first analyzed theoretically\nby Bryant,\\cite{5} who used variational and configuration-interaction\nrepresentations. Later on, excitons and biexcitons have been studied,\n\\cite{6,7} as well as excitons in the presence of a strong magnetic\nfield,\\cite{8} using numerical matrix diagonalization schemes. \n\nOn the experimental side, interband optical spectroscopies, such as\nphotoluminescence, have been used to study various quantum dot systems\n--- such as those produced in the GaAs\/Al$_x$Ga$_{1-x}$As structure, with its\nbandgap modulation.\\cite{9,10,11} More recently, fascinating studies\non so-called ``self-assembled'' quantum dots, such as \nInAs and In$_x$Ga$_{1-x}$As\nclusters on GaAs substrates, have also been reported.\\cite{12,13,14,15}\n Most of the theoretical investigations are based on the assumption\nthat the shape of quantum dots is a simple sphere or box, having a\ngreat deal of symmetry, both because it simplifies calculations and\nbecause quantities such as the exciton binding energy scale very well\nwith the overall dot size. However, realistic dot shapes are probably\nmuch less symmetrical, as well as being typically flat and more\ntwo-dimensional in shape.\\cite{12,13,14,15} \n \nHere, we consider the effect that less symmetric structures, namely\nflat quantum dots with elliptical cross sections, or ``elliptical\nquantum disks'', have on the excitonic optical properties. 
To date,\nlittle work has been reported on the properties of nonsymmetric\nquantum dots, probably because this system has more complicated\nsolutions.\\cite{13,15} Our studies within the effective mass\napproximation yield some very interesting consequences of the\nelliptical asymmetry: apart from the expected blue shift of the first\nexcitonic transition for dots with the same overall area but different\naxes, we find a rearrangement of the oscillator strength which\ncharacterizes individual dot shapes. In particular, since elliptical\ncross-section dots have less symmetry, some of the accidental\ndegeneracies in circular dots giving rise to stronger and fewer peaks\nin the imaginary part of the optical susceptibility are split. This\ngives rise to a more monotonically decreasing peak intensity for higher\nenergy features in the susceptibility of noncircular dots. This\nbehavior can in turn be used to structurally characterize specific dots\nfrom their photoluminescence excitation response. \n \nThe remainder of the paper is organized as follows. We introduce the\ntheoretical method in Sec. II. Here, we outline the effective mass\nHamiltonian approach and introduce the various basis function\nrepresentations which allow us to use numerical methods to calculate\nthe eigenvalues and eigenfunctions of excitons in these quantum dots. \nA great deal of care is needed to assure that the solutions obtained\nare well behaved and converged with a finite computational effort. We\ndiscuss in this section how this is accomplished. In Sec. III, we\ndiscuss the main geometrical effects on various exciton\ncharacteristics, such as the exciton binding energy, electron-hole\nseparation, and the linear optical susceptibility. Solutions for\nexcitons in quantum dots with both circular and elliptical\ncross sections are shown, using large enough basis sets, and a set of\noptimized basis functions, which improve the accuracy of the solutions\nat a modest computational cost. Finally, we summarize our conclusions\nin Sec. IV. The Appendix contains an outline of the derivation of\nthe Coulomb matrix element with these basis functions. The analytical\nexpression presented there greatly simplifies our calculations.\n\n\n\\section{ Theoretical Method }\n\nFor concreteness, and to simulate recent quantum dot\nsystems,\\cite{12,13,14} we assume quantum dots with an oblate\nspheroidal profile where the lateral $xy$ confinement is much weaker\n(or larger size) than that along the $z$ direction. Correspondingly,\nthe electrons and holes are assumed confined in an effectively\ntwo-dimensional potential with a constant $z$ profile, $V_z$. We\nassume $V_z$ to be a hard-wall confinement potential, so that the\n$z$ component of the energy is $\\hbar^2 \\pi^2\/2mL_z^2$, with $L_x, L_y\n\\gg L_z$. Further, we approximate the single $z$ wavefunction in the\nproblem as a $\\delta$ function centered at the origin, so that the problem\ncan be described by a separable Hamiltonian in two dimensions. The\nlateral confinement is modeled via harmonic potentials with two\ndifferent frequencies $\\omega_x$ and $\\omega_y$, which yield the\nelliptical cross section of the dots with axes ratio given by $L_x \/\nL_y = \\sqrt{\\omega_y\/\\omega_x}$, for both electrons and holes. 
The\nsmoothly varying potential should mimic well the situation in\nexperiments where the dots are effectively embedded in a dielectric\nmatrix.\\cite{1,12}\n\nThe effective-mass parabolic-band Hamiltonian for an electron-hole pair\nis given by $H = H_e + H_h + H_{e-h}$, where the subscripts $e$ and $h$\nrepresent electron and hole, and \n \\begin{equation}\nH_e = \\frac{p^2}{2m_e} + \\frac{1}{2}\nm_e \\omega_x^2 x_e^2 + \\frac{1}{2} m_e \\omega_y^2 y_e^2 + V_{ze} \\, ,\n \\end{equation}\n with a similar expression for the Hamiltonian of the hole,\n$H_h$.\\cite{NOTE} The Coulomb interaction between electron and hole is\nscreened by a background dielectric constant $\\epsilon$, so that\n$H_{e-h} = -e^2\/\\epsilon r_{e-h}$. \n \nWe rewrite the Hamiltonian into relative and center of mass\ncoordinates, described by ${\\bf r} = {\\bf r}_e - {\\bf r}_h$, and\n${\\bf R} = (m{_e}{\\bf r}_e + m_h{\\bf r}_h)\/M$.\n The total and reduced masses are given by $M = m{_e} + m{_h}$, and\n$\\mu = m{_e}m{_h}\/M$, respectively. The total Hamiltonian of this\nsystem can then be written in the form \n$H = H{_{c.m.}} + H{_{rel}}$,\nwith the expected expressions \n \\begin{equation}\nH_{c.m.} = \\frac{P^2}{2M} + \\frac{1}{2}M\\omega_x^2\nX^2 + \\frac{1}{2}M\\omega_y^2 Y^2 + V_Z \\, ,\n \\end{equation}\n and \n \\begin{equation}\nH{_{rel}} = \\frac{p{{^2}}}{2\\mu} + \\frac{1}{2}{\\mu}\\omega_x^2 x{^2}\n + \\frac{1}{2}{\\mu}\\omega_y^2 y{^2}\n - \\frac{e{^2}}{\\epsilon \\sqrt{x{^2} + y{^2}}} + V_z \\, .\n \\end{equation} \n The Hamiltonian of the center of mass is obviously a two-dimensional\nharmonic oscillator in the $XY$ plane, with wavefunction $\\Psi_{N_X\nN_Y} = \\phi_{N_X}(X)\\, \\phi_{N_Y}(Y)$, and energy $E_{c.m.}$, where\n \\begin{equation} \n \\phi{_{N}}(X) = \\left(\\frac{{\\alpha_M}}{{\\pi^{1\/2}}2{^{N}}N!}\\right)^{1\/2} \\,\ne^{-\\alpha_M^2 X^2\/2} H_N \\left( \\alpha_M X \\right) \\, , \n \\end{equation} \n$\\alpha_M = \\sqrt{M \\omega_x \/ \\hbar} \\, ,$ and \n \\begin{equation}\nE_{c.m.} = \\left(\nN_X + \\frac{1}{2} \\right) \\hbar \\omega_x + \\left( N_Y + \\frac{1}{2} \\right)\n\\hbar \\omega_y + \\frac{\\hbar^2 \\pi^2}{2ML_z^2} \\, .\n \\end{equation}\n Here, $N{_X}$ and $N{_Y}$ are quantum numbers for the center of mass\ncoordinate, and $H{_N}$ is a Hermite polynomial.\\cite{17}\n\nThe physics of the problem is determined to a great extent by the ratio\nbetween the effective Bohr radius, $a_B^* = \\hbar^2 \\epsilon \/ \\mu\ne^2$, and the size of the dot, $L=\\sqrt{L_x L_y}$, where\n$L_i=\\sqrt{\\hbar \/\\mu \\omega_i}$. The strong confinement limit for $L\n\\leq a_B^*$ is characterized by a weak electron-hole correlation and by\nthe Coulomb term being a small perturbation of the single-particle\nconfined-level energy. On the other hand, the weak-confinement limit\nfor $L \\geq a_B^*$ reduces asymptotically to the problem of a free\ntwo-dimensional exciton for large $L$, where the Coulomb interaction \ndominates the state of the exciton \\cite{1}. \n \nWith this in mind, the effects of the Coulomb term $H_1$ in $H_{rel} =\nH_0 + H_1$, are treated by using the solutions of $H_0$ as the basis\nset in the diagonalization of $H_{rel}$. 
The unperturbed Hamiltonian of\nthe relative coordinate $H_0$ is also a two-dimensional harmonic\noscillator, so that the wavefunction of the interacting electron-hole\npair is described by a linear combination of wavefunctions, $\\psi_{n_x\nn_y}=\\phi_{n_x}(x) \\phi_{n_y}(y)$, with the $\\phi$'s satisfying a\nsimilar expression to Eq.\\ (4), and correspondingly \n \\begin{equation}\nE{_{rel}^0} =\n\\left( n_x + \\frac{1}{2} \\right) \\hbar \\omega_x + \\left( n_y +\n\\frac{1}{2} \\right) \\hbar \\omega_y + \\frac{\\hbar^2 \\pi^2}{2\\mu (2L_z)^2} \\, ,\n \\end{equation} \n where $n_x$ and $n_y$ are quantum numbers for the relative coordinate.\n(Notice $z$ confinement length for this coordinate is $2L_z$.)\n \nWith this basis set, the interaction matrix elements of the\nelectron-hole pair can be calculated analytically and expressed in\nterms of hypergeometric functions as outlined in the Appendix. This\nanalytical expression greatly simplifies the calculation, as most of\nthe computational time is spent on the calculation of the matrix\nelements rather than on the diagonalization of the matrix. The\nresulting Hamiltonian matrix is real, symmetric, and sparse. The\nenergies and eigenfunctions are calculated from the numerical\ndiagonalization of the matrix, for a given size of the basis. The\ndiagonalization is repeated with larger basis sets until the desired\nconvergence is achieved (see below).\n \n\\subsection {Circular limit} \n As an additional test of our numerical procedures we use the resulting\nradial equation of the relative Hamiltonian for the circular dot case,\n$\\omega_x = \\omega_y = \\omega$, which is then given by \n \\begin{equation}\n H_{rel} = \\frac{p^2}{2\\mu} + \\frac{1}{2} \\mu \\omega^2 r^2 - \n \\frac{e^2}{\\epsilon r} \\, .\n \\label{neweq7}\n \\end{equation}\n The resulting one-dimensional equation can be directly integrated\nnumerically, as reported by Que.\\cite{7} Furthermore, the problem can\nalso be solved using a harmonic basis set of radial states as those\ndescribed above. For a large enough basis set, this approach yields\nthe same results as those obtained by direct numerical\nintegration.\\cite{7} Notice also that the large dot limit $(\\omega\n\\approx 1\/L \\approx 0)$ is easily solved as an expansion in terms of\nthe Laguerre polynomial-based solutions of the free exciton\nproblem.\\cite{7} This allows one to obtain very accurate solutions for\n$1\/L \\approx 0$ with little computation. These results, moreover,\nallow us to test the convergence of the irreducible two-dimensional\nnonsymmetric problem $(\\omega_x \\neq \\omega_y)$, as we discuss below.\n \n\\subsection {Optimized basis}\n Solutions for these elliptic cylinderlike quantum dots are also\ncarried out using an optimized basis set, as the solution method\ndiscussed above requires a rather large number of states for full\nconvergence, especially for large dots. Here, one notices that for\nlarge dots it is the Coulomb interaction that dominates the exciton\nstates (since confinement becomes less important). Correspondingly, we\nchoose a set of optimized frequencies, ${\\Omega}_x$ and ${\\Omega}_y$, \nwhich are larger than the original frequencies, ${\\omega}_x$ and\n${\\omega}_y$. The values of the $\\Omega$'s are determined\nvariationally, and allow one to consider $H$-matrix systems that are\nmuch smaller than those required when one uses the $\\omega$ basis. 
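\n\nTo make the numerical procedure concrete, a schematic sketch is given below. It is illustrative only: the Coulomb elements are obtained by a simple polar-coordinate quadrature rather than the analytic expression of the Appendix, effective atomic units ($\\hbar = \\mu = e^2\/\\epsilon = 1$) are used, and the frequencies and basis cutoff are arbitrary choices.\n\\begin{verbatim}\n# Schematic sketch only: Coulomb matrix elements by polar-coordinate
# quadrature (the 1/r singularity is cancelled by the Jacobian), not the
# analytic hypergeometric expression of the Appendix.  Effective atomic
# units (hbar = mu = e^2/epsilon = 1), frequencies and cutoff are assumed.
import numpy as np
from scipy.special import eval_hermite, gammaln

wx, wy, nmax = 1.0, 4.0, 6                      # illustrative parameters

def phi(n, a, u):                               # 1D oscillator state, cf. Eq. (4)
    lognorm = 0.5 * np.log(a / np.sqrt(np.pi)) - 0.5 * (n * np.log(2.0) + gammaln(n + 1))
    return np.exp(lognorm - 0.5 * (a * u) ** 2) * eval_hermite(n, a * u)

# Gauss-Legendre grids in r and theta
rmax = 12.0 / np.sqrt(min(wx, wy))
xr, wr = np.polynomial.legendre.leggauss(240)
xt, wt = np.polynomial.legendre.leggauss(240)
r, t = 0.5 * rmax * (xr + 1.0), np.pi * (xt + 1.0)
R, T = np.meshgrid(r, t, indexing='ij')
W = np.outer(0.5 * rmax * wr, np.pi * wt).ravel()
X, Y = (R * np.cos(T)).ravel(), (R * np.sin(T)).ravel()

# basis functions evaluated on the grid, for each quantum number
PX = np.array([phi(n, np.sqrt(wx), X) for n in range(nmax + 1)])
PY = np.array([phi(n, np.sqrt(wy), Y) for n in range(nmax + 1)])

states = [(nx, ny) for nx in range(nmax + 1) for ny in range(nmax + 1)]
H = np.zeros((len(states), len(states)))
for i, (nx1, ny1) in enumerate(states):
    H[i, i] = wx * (nx1 + 0.5) + wy * (ny1 + 0.5)   # in-plane part of Eq. (6)
    for j, (nx2, ny2) in enumerate(states[:i + 1]):
        v = -np.sum(W * PX[nx1] * PY[ny1] * PX[nx2] * PY[ny2])   # Coulomb term
        H[i, j] += v
        if i != j:
            H[j, i] += v

E0 = np.linalg.eigh(H)[0][0]
print('ground state:', E0, '  binding energy:', 0.5 * (wx + wy) - E0)\n\\end{verbatim}\nThe same workflow applies in the optimized basis, with the oscillator functions taken at the frequencies $\\Omega_{x,y}$ and the $H_0$ elements of Eq.\\ (\\ref{oldeq7}) used in place of the purely diagonal harmonic term; the practical gain is that far fewer basis states are then needed for a converged ground state. 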
\nThe physical reason for this is that as the dot size increases, the\nexciton size converges to $a_B^{2D}$ (the radius of the free\ntwo-dimensional exciton), and one needs a large number of\n$\\omega$ states to describe the small-scale structure of the exciton.\n\nNotice that the harmonic $\\Omega$ basis allows also an easy calculation\nof the $H$-matrix elements in this case, so that for example,\n \\begin{eqnarray}\n \\langle n_x{^{\\prime}} \\, n_y{^{\\prime}} \\, &| &H_{0} | n_x \\, n_y\n\\, \\rangle_\\Omega = \\left[{\\hbar}\\Omega{_x}(n{_x}+\\frac{1}{2}) \\, + \\,\n{\\hbar}\\Omega{_y}(n{_y}+\\frac{1}{2}) \\right. \\nonumber \\\\ \\,\n & & - \\, \\left.\n\\frac{\\hbar}{2\\Omega{_x}}(\\Omega_x^2-\\omega_x^2)(n{_x}+\\frac{1}{2})\n\\, - \\,\n\\frac{\\hbar}{2\\Omega{_y}}(\\Omega_y^2-\\omega_y^2)(n{_y}+\\frac{1}{2})\n\\right] \\, \\delta{_{n{_{x^\\prime}},n{_x}}} \\,\n\\delta_{n_{y^\\prime},n_{y}} \\nonumber \\\\ \n & &- \\,\n\\frac{\\hbar}{4\\Omega{_x}}(\\Omega_x^2- \\omega_x^2) \\left[\n\\sqrt{n_x(n{_x}-1)} \\, \\delta{_{n{_{x^\\prime}},n{_x}-2}} +\n\\sqrt{(n_x+1)(n{_x}+2)} \\, \\delta{_{n{_x^{\\prime}},n{_x}+2}} \\right]\n \\, \\delta{_{n_{y^\\prime},n_y}} \n \\nonumber \\\\ \n & & - \\,\n\\frac{\\hbar}{4\\Omega{_y}}(\\Omega_y^2-\\omega_y^2) \\left[\n\\sqrt{n_y(n{_y}-1)} \\, \\delta{_{n{_{y^\\prime}},n_y-2}} +\n\\sqrt{(n_y+1)(n{_y}+2)} \\, \\delta{_{n{{_{y^{\\prime}}}},n{_y}+2}}\n\\right] \\, \\delta{_{n{{_{x^\\prime}}},n{_x}}} \\,\\, .\n \\label{oldeq7}\n \\end{eqnarray}\n Notice this reduces to the obvious diagonal matrix for $\\Omega\n\\rightarrow \\omega$. This expression allows one to evaluate the\nHamiltonian matrix rather conveniently, even for this other basis set.\n\n\\subsection {Exciton characteristics} \n The wavefunctions of the relative coordinate problem can then be\nwritten as $| \\psi \\rangle = \\sum_{n{_x}n{_y}} a{_{n{_x}n{_y}}} |\nn{_x} , n{_y} \\rangle$, with either the $\\omega$ or $\\Omega$ basis\nstates, which can be used to study various characteristic properties of\nthe exciton system. For example, the mean electron-hole separation\n$r_s$, is given by\n \\begin{eqnarray}\n r_s^2 &=& \\langle \\psi | r^2 | \\psi \\rangle = \\sum_{n{_x}n_y}\n \\left[\\frac{\\hbar}{{\\mu}\\Omega{_y}}\n(n{_y}+\\frac{1}{2})+\\frac{\\hbar}{{\\mu}\\Omega{_x}}(n{_x}+\\frac{1}{2})\\right]\n|a{_{n{_x},n{_y}}}|^2 \\, \\nonumber \\\\ \n & &+\n \\, \\frac{1}{2}\\frac{\\hbar}{{\\mu}\\Omega{_y}}\\left[\\sqrt{(n{_y}+2)(n{_y}+1)}\n \\, a{{^*}{_{n{_x},n{_y}+2}}} +\n\\sqrt{n{_y}(n{_y}-1)}\n \\, a{{^*}{_{n{_x},n{_y}-2}}}\\right]a{_{n{{_x}},n{{_y}}}} \\nonumber \\\\\n & &+ \\,\n\\frac{1}{2}\\frac{\\hbar}{{\\mu}\\Omega{_x}}\\left[\\sqrt{(n{_x}+2)(n{_x}+1)}\n \\, a{{^*}{_{n{_x}+2,n{_y}}}} +\n\\sqrt{n{_x}(n{_x}-1)}\n \\, a{{^*}{_{n{_x}-2,n{_y}}}}\\right]a{_{n{{_x}},n{{_y}}}} \\, ,\n \\end{eqnarray}\n which gives an idea of the exciton size.\n\n One can also use the diagonalization results to calculate\ndirectly measurable properties, such as the linear optical\nsusceptibility of the quantum dot\/disk. The linear optical\nsusceptibility is proportional to the dipole matrix elements between\none electron-hole pair j state and the vacuum state, $\\langle 0 | P |1 \n\\rangle_j$.\\cite{5} These in turn are proportional to the interband\nmatrix element, $p_{cv}$,\\cite{18} which is the matrix element formed\nbetween an electron and hole in the conduction and valence bands,\nrespectively. 
The form of the dipole matrix elements for a single\nexciton in the envelope function approximation is given by\\cite{5,7} \n \\begin{equation}\n|\\langle 0 | P |1 \n\\rangle|^2 = |p_{cv}|^2 {|{\\psi}(0)|}^2 \\left|{\\int}{\\int} {\\Psi}(X_e,Y_e)\n \\, dX_e \\, dY_e \\right|^2 \\, .\n \\end{equation}\n Here, the wavefunction for the relative coordinate is given as\nabove, so that \n \\begin{equation}\n |\\psi(0)|^2 = (\\mu\/\\hbar \\pi) \\sqrt{w_ x w_y} \\, \\left|\\sum_{n_xn_y}\n(2^{n_x+n_y}n_x!n_y!)^{-1\/2} \\, a_{n_xn_y}\\right|^2 \\, ,\n \\end{equation}\n where the $ a_{n_xn_y}$ coefficients are obtained from the diagonalization\nof the relative-coordinate Hamiltonian, and\n \\begin{equation}\n \\left|{\\int}{\\int} {\\Psi}(X_e,Y_e) dX_e dY_e \\right|^2 = 4 \\pi^2 \\hbar\nN_X! \\, N_Y! \\, \\left[\\pi M \\sqrt{w_xw_y} \\,\\, 2^{N_X+N_Y} \\,\n(N_X\/2)!^2 \\, (N_Y\/2)!^2 \\right]^{-1} \\, , \n \\end{equation}\n with $N_X= {\\rm even}$ and $N_Y = {\\rm even}$, for nonzero matrix\nelements. Finally, the dipole matrix elements have the form \n \\begin{equation} \n |\\langle0| P |1\\rangle|^2 = 4 |p_{cv}|^2 \\frac{\\mu}{M} N_X! \\, N_Y! \\,\n\\left[2^{N_X+N_Y} \\, (N_X\/2)!^2 \\, (N_Y\/2)!^2 \\, \\right]^{-1} \\, \\left|\n\\sum_{n_x n_y} (2^{n_x+n_y}n_x! \\, n_y!)^{-1\/2} \\, a_{n_x n_y} \\,\n\\right|^2 \\, .\n \\end{equation}\n The linear optical susceptibility can then be calculated from\n \\begin{equation}\n\\chi ( \\omega ) = \\sum_{j} |\\langle 0 | P | 1 \\rangle_j |^2 (\\hbar\n\\omega -E_j -i \\hbar \\Gamma)^{-1} \\, ,\n \\end{equation}\n where $\\Gamma$ is introduced as a phenomenological level broadening\nconstant.\n \nNotice also that since experimental systems are typically configured\nto analyze a large collection of nearby dots, one should in principle\nbe concerned by the effect of local fields. However, in typical\nexperimental systems so far, where the separation between dots can be\nseveral microns, it is valid to assume that dots are basically\nindependent. In the case of higher-dot densities, however, the\ndynamical response of the system may be affected by the local fields\nproduced by neighboring dots, and one can obtain that response from the\nindividual microscopic polarizabilities.\\cite{16} \n \n\n\\section { Results }\n\nAs an interesting example of a typical system, we use parameters to\ndescribe GaAs quantum dots, so that the dielectric constant is\n$\\epsilon=13.1$, and the carrier masses are $m{_e}=0.067m{_0}$, and (the\nheavy hole effective mass) $m_{hh}=0.37m{_0}$.\\cite{NOTE} We present\nthe numerical results for heavy-hole excitons in GaAs quantum dots with\nelliptical and circular cross sections. The solutions can be\ncalculated using a sufficiently large basis set and\/or an optimized\nbasis set, as described above. Results obtained from these methods and\ndifferent basis sets are shown in Fig.\\ 1. The exciton binding energy\nand normalized electron-hole separation are shown as a function of\nquantum dot size ranging from 2 to 100 nm.\n \nIn the insets, results are shown for {\\em circular} dots, with dot,\ndashed, and dot-dashed curves showing results for basis sets with\n$M=30, 100$, and $500$ wavefunctions, respectively. Here, states with\n$n$ and $n{^{\\prime}}$ from $0$ to $29$, $99$, and $499$ are used in\nEq.\\ (\\ref{neweq7}). 
(The matrix size is obviously $M \\times M$, and\nis diagonalized by a QL decomposition technique.\\cite{19}) The results\nof the one-dimensional radial equation in the weak-confinement limit\nare shown with the solid line for comparison, and represent the exact\nquantity (both $E_b$ and $r_s$) for large $L$. The transition between\nthe strong and weak confinement regimes comes appropriately when the\nsize of the quantum dot is near the effective Bohr radius, $L \\approx\na_B^* =12.2$ nm. Notice that it is for $L \\approx 15$ nm that the\n$M=100$ curve (dashed) departs from the exact result (solid line), and\nthat one requires larger $M$ values as $L$ increases to achieve better\nconvergence. The $M=500$ basis set (dot-dashed curve) yields the\nconvergent solutions with acceptable accuracy and execution time for a\nlarger range of $L$ values ($\\leq 60$ nm). Similar behavior can be\nseen in the electron-hole separation in the ground state [inset in\npanel (b)].\n\nThe inset in 1(a) also shows the difference between the $\\omega$ basis\nand $\\Omega$ basis results. Diamonds show results for the $M=400$\nbasis set with optimized $\\Omega$ frequency, while triangles show\nresults for $M=400$ with the original $\\omega$ basis set. For both\ncases, the states with $n_x$, $n_y$, $n_x{^{\\prime}}$, and\n$n_y{^{\\prime}}$ from $0$ to $19$, respectively, are included for the\n$M=400$ basis set (see Eq.\\ \\ref{oldeq7}). These states are used as\nthe basis set for elliptical quantum dots. The results with the\noptimized basis set are essentially identical to the convergent\nsolutions. The optimized frequencies used are also given as a\nfunction of dot size in the inset 1(a) (with $E_\\Omega = \\hbar\n\\Omega$) as a long-dashed curve. For dot sizes $30$ nm and larger, the\noptimized frequency $\\Omega$ converges to 80 meV, corresponding to a\ndot size $L_{\\Omega}=\\sqrt{\\hbar\/{\\mu}{\\Omega}} =4.1$ nm. This latter\nvalue is close to the two-dimensional effective Bohr radius\n($a^{2D}_B=a^*_B\/2\\sqrt{2}=4.3$ nm), as one would expect. Notice that\nfor most of the range of $L$'s shown, one is in the weak-confinement\nregime, where $L \\ge a^{2D}_B$. This is why the optimized\nbasis set gives very reasonable results with minimal effort. It is\nalso interesting that the agreement continues also for smaller dot\nsizes, completing the range from $2$ to $100$ nm. The exciton ground\nstate binding energies and the normalized electron-hole separation\nobtained with the optimized $\\Omega$ basis approach are basically exact\nto the fully converged results.\n\nThe main panels in Fig.\\ 1 (a) and (b) show the geometrical confinement\neffects of excitons with several different size axis ratios in the\n$xy$ plane. The plots show results versus $L=\\sqrt{L_xL_y}$, the\neffective size of the dot, for ${\\omega_x}$\/${\\omega_y}=1$, $4$, and\n$9$ ($L_y\/L_x=1$, $2$, and $3$) with diamonds, pluses, and triangles,\nrespectively. \nThe exciton binding energy increases for a small $L$ as the axis \nratio is increased; meanwhile, the normalized electron-hole separation \nis basically unchanged, \nexcept for small $L$ values, where\nthe confinement energy dominates. Notice that as the axis ratio\nincreases, the single-particle and exciton states move up in energy but \nthe {\\em binding} energy increases. \nThis increase in binding energy for\nthe elliptical dots is then related to the increase of the Coulomb\nenergy {\\em relative} to the confinement contribution to the total\nenergy. 
(In fact, since $r_s\/L$ is basically unchanged with geometry,\nthe Coulomb interaction energy is nearly constant in all these cases.)\n\nTo better explore the geometrical confinement effects on excitons, we\nshow the linear optical susceptibility of the GaAs quantum disks with\nelliptical cross sections and lateral mean size of $L=\\sqrt{L_xL_y}=5$\nnm (Fig.\\ 2), and 10 nm (Fig.\\ 3). The dot thickness along the $z$-\ndirection is kept constant at $L_z=3$ nm, and several size ratios for\neach axis in the $xy$ plane are shown. We use here also a value for\nthe optical bandgap of $E_g = 1.51$ eV. The results presented here\nwere obtained using the optimized $\\Omega$ basis set approach discussed\nabove. Notice that since this function represents all of the possible\ntransitions of this excitonic system, its features would be measurable\nvia photoluminescence excitation measurements. On the other hand,\nthe photoluminescence response would correspond to the first (lower\nenergy) feature in these traces, associated with the ground state of\nthe excitonic system.\n\nFigure 2 shows the imaginary part of the linear optical susceptibility\nas a function of frequency for a dot with $L = \\sqrt{L_x L_y} = 5$ nm\n(a broadening of $\\Gamma=2$ meV is used). The bottom trace is for a\ncircular dot, so that $L_x = L_y = 5$ nm. The upper two traces show\nresults for elliptical dots with a size ratio $L_y:L_x = 2:1$\n($L_y=7.0$ nm, and $L_x=3.5$ nm), and $3:1$ ($L_y=8.7$ nm, $L_x=2.9$\nnm), respectively, all having the same mean size $L=5$ nm. [Notice\nthat although the peak heights are in arbitrary units, the ratio\nbetween different peaks or traces is real, reflecting the different\ndipole matrix elements involved.]~ One obvious difference among traces\nis that the first transition energy shifts to higher values by $\\approx\n26$ and 69 meV, as the size ratio changes from 1:1 to 2:1 and 3:1,\nrespectively. Notice that while increasing the size ratio, the exciton\nbinding energies {\\em increase} from 47 meV to 48 and 49 meV for each\nratio. It is then clear that the larger blue shifts are due mostly to\nthe increasing confinement as the disk becomes more elliptical, and not\ndue to the binding energy between electron and hole. In\nall the cases shown, the transition involving the ground state of the\nexciton is dominant, as the excited states appear only with smaller\noscillator strength. Notice further that for the larger length ratio,\nthe spectrum is understandably sparser, as the levels associated with\nthe narrow dimension are quickly pushed upwards in energy. Finally,\nsince the elliptical dots have lower symmetry, accidental degeneracies\nare fewer and the transition peaks show nearly monotonically decreasing\nintensities (unlike the circular dot case). [Notice these all the\nsusceptibility traces include transitions between different\ncenter-of-mass states up to the $N_X=N_Y=10$ levels. Additional $N_X$\nand $N_Y$ values would yield higher energy structure.]\n\nFigure 3 shows also the imaginary part of the susceptibility but for a\ndot size of 10 nm, and here with level broadening of 1 meV.\\@ The first\ntransition energy in the circular quantum dot is 1.78 eV, while the\nvalues in the elliptical dots are 1.79 and 1.80 eV, for ratios $2:1$\nand $3:1$. The transition energies result here to be lower than in\nFig.\\ 2 since the confinement is not as strong, reducing the effective\ngap energy. 
In this set of curves, the first transition energy\n(involving the ground state of the exciton) is shifted upwards for\nelliptical dots, due to\nthe increased confinement, although the shift is not as large as in\nFig.\\ 2, since the overall lengths are larger. The first excited state\nappears split because the two-fold degeneracy of the excited state is\nbroken as the dot becomes elliptical.\n\nAs dot size is increased, it is apparent that the geometrical effects\nare not as prominent, producing only a small shift of the spectrum of\ntransitions. Incidentally, the onset of transitions for larger size\ndots ($L \\approx 100$ nm) compares qualitatively well with experimental\nphotoluminescence spectra in dots with similar disk geometry,\\cite{10}\nwhere features appear in the energy range 1.73 -- 1.74 eV, for dots\nwith radius thought to be in the range 150--200 nm. According to the\nexperimental results, the additional confinements give the observed\nblue shift, compared to two-dimensional quantum-well exciton case.\nSimilarly, a blue shift appears due to the elliptical shape, although\nfor $L \\approx 100$ nm they are only $\\leq 10$ meV, as the size ratio\nincreases to 2:1 and 3:1.\n\nAs an example of the effects for different materials, Fig.\\ 4 shows the\nimaginary part of the susceptibility of InAs quantum dots with lateral\nmean size of $12$ nm and thickness of $2.8$ nm --- having a dielectric\nconstant $\\epsilon=14.6$, $m{_e}=0.026m{_0}$, and heavy hole effective\nmass $m_{hh}=0.41m{_0}$. Here, the energy gap is taken as 0.43 eV, and\nwe also use the optimized $\\Omega$ basis set to obtain the results\nshown. The convergent optimized size, $L_{\\Omega}=10.5$ nm, is close\nto the two dimensional effective Bohr radius ($a^{2D}_B=10.54$ nm). In\nthis case, the first transition energy shifts to higher values by\n$\\approx 11$ and 28 meV, as the size ratio changes from 1:1 to 2:1 and\n3:1, respectively. Notice the similarity with Fig.\\ 2, although the\nenergy scale and size (12 nm) are completely different here. This\nsimilarity is due to the scaling of the problem in terms of $a^*_B$.\nFor these InAs parameters, $a^*_B=29.8$ nm, so that $a^*_B\/L \\approx\n2.5$, comparable to the value in Fig.\\ 2 for GaAs, where $a^*_B=12.2$,\n$L=5$, and $a^*_B\/L \\approx 2.4$.\n\n\n\\section{ Conclusions}\n \nWe have demonstrated that strong geometrical confinement effects appear\non excitons in GaAs and InAs quantum dots with elliptical\ncross sections. The solutions have been obtained using sufficiently\nlarge basis sets as well as with an optimized basis set. The results\nobtained with the optimized basis sets, such as exciton binding energy\nand normalized electron-hole separation, are extremely close to the\nbest converged results, and only with a relatively modest computational\neffort.\n\nThe linear optical susceptibilities are calculated for several\ndifferent lateral size ratios of each axis ($x$ and $y$). Strong blue\nshifts in the susceptibilities are observed as the size ratio is\nincreased and the shifts due to the geometrical shape effects are\nespecially important for the smaller dot sizes ($ \\leq 25$ nm). The\nshifts are due mostly to the increasing confinement as the dot becomes\nmore elliptical, and not due to the interaction energy between\nelectron and hole. A splitting of the first few excited states appears\nin the elliptical cross section cases since the symmetry-related\ndegeneracy of the excited states in the circular dot is broken. 
This\ngives also rise to a more monotonic decrease of the peak intensities\nseen as the energy of the transition increases.\n\n\n \\acknowledgments\n\nWe would like to thank R.L. Cappelletti and D.A. Drabold for helpful\ndiscussions, and the support of the US Department of Energy through\ngrant no.\\ DE--FG02--91ER45334. Calculations were partially performed\nat the Cray Y\/MP of the Ohio Supercomputer Center. S.E.U. acknowledges\nsupport of the A. v. Humboldt Foundation.\n\n ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\n\nRecent years have seen significant advances in the capacity of Artificial Intelligence (AI), which is growing in sophistication, complexity and autonomy. A continuously veritable and explosive growth of data with a rapid iteration of computing hardware advancement provides a \\emph{turbo boost} for the development of AI. \n\nAI is a generic concept and an umbrella term that implies the use of a machine with limited human interference to model intelligent actions. It covers a broad range of research studies from machine intelligence for computer vision, robotics, natural language processing to more theoretical machine learning algorithms design and recently re-branded and thrived \\emph{deep learning} development (Figure \\ref{fig:fig1}).\n\n\\subsection{Born of AI}\n\nAI changes almost every sector globally, e.g., enhancing (digital) healthcare (e.g., making diagnosis more accurate, allowing improved disease prevention), accelerating drug\/vaccine development and repurposing, raising agricultural productivity, leading to mitigation and adaptation in climate change, improving the efficiency of manufacturing processes by predictive maintenance, supporting the development of autonomous vehicles and programming more efficient transport networks, and in many other successful applications, which make significant positive socio-economic impact. Besides, AI systems are being deployed in highly-sensitive policy fields, such as facial recognition in the police or recidivism prediction in the criminal justice system, and in areas where diverse social and political forces are presented. Therefore, nowadays, AI systems are incorporated into a wide variety of decision-making processes. As AI systems become integrated into all kinds of decision-making processes, the degree to which people who develop AI, or are subject to an AI-enabled decision, can understand how the resulting decision-making mechanism operates and why a specific decision is reached, has been increasingly debated in science and policy communities.\n\nA collection of innovations, which are typically correlated with human or animal intelligence, is defined as the term \"artificial intelligence\". John McCarthy, who coined this term in 1955, described it as \"the scientific and technical expertise in the manufacture of intelligent machines\", and since then many different definitions have been endowed.\n\n\\begin{figure}\n\\begin{center}\n \\includegraphics[width=\\linewidth]{.\/figures\/Figure1.pdf} \n \n \\caption{Left: Terminology and historical timeline of AI, machine learning and deep learning. Right: We are still at the stage of narrow AI, a concept used to describe AI systems that are capable of handling a single or limited task. General AI is the hypothetical wisdom of AI systems capable of comprehending or learning any intelligent activity a human being might perform. 
Super AI is an AI that exceeds human intelligence and skills.}\n \\label{fig:fig1}\n\\end{center}\n\\end{figure}\n\n\\subsection{Growth of Machine Learning}\n\nMachine learning is a subdivision of AI that helps computer systems to intelligently execute complex tasks. Traditional AI methods, which specify step by step how to address a problem, are normally based on hard-coded rules. Machine learning framework, by contrast, leverages the power of a large amount of data (as examples and not examples) for the identification of characteristics to accomplish a pre-defined task. The framework then learns how the target output will be better obtained. Three primary subdivisions of machine learning algorithms exist:\n\n\\begin{itemize}\n\t\\item A machine learning framework, which is trained using labelled data, is generally categorised as supervised machine learning. The labels of the data are grouped into one or more classes at each data point, such as \"cats\" or \"dogs.\" The supervised machine learning framework exploits the nature from these labelled data (i.e., training data), and forecasts the categories of the new or so called test data.\n\t\n \\item Learning without labels is referred to as unsupervised learning. The aim is to identify the mutual patterns among data points, such as the formation of clusters and allotting data points to these clusters.\n \n \\item Reinforcement learning on the other hand is about knowledge learning, i.e., learning from experience. In standard reinforcement learning settings, an agent communicates with its environment, and is given a reward function that it tries to optimise. The purpose of the agent is to understand the effect of its decisions, and discover the best strategies for maximising its rewards during the training and learning procedure.\n \n\\end{itemize}\n\nIt is of note that some hybrid methods, e.g., semi-supervised learning (using partially labelled data) and weakly supervised (using indirect labels), are also under development.\n\nAlthough not achieving the human-level intelligence often associated with the definition of the general AI, the capacity to learn from knowledge increases the amount and sophistication of tasks that can be tackled by machine learning systems (Figure \\ref{fig:fig1}). A wide variety of technologies, many of which people face on a daily basis, are nowadays enabled by rapid developments in machine learning, contributing to current advancements and dispute about the influence of AI in society. Many of the concepts that frame the existing machine learning systems are not new. The mathematical underpinnings of the field date back many decades, and since the 1950s, researchers have developed machine learning algorithms with varying degrees of complexity. In order to forecast results, machine learning requires computers to process a vast volume of data. How systems equipped with machine learning can handle probabilities or uncertainty in decision-making is normally informed by statistical approaches. Statistics, however, often cover areas of research that are not associated with the development of algorithms that can learn to make forecasts or decisions from results. Although several key principles of machine learning are rooted in data science and statistical analysis, some of the complex computational models do not converge with these disciplines naturally. Symbolic approaches, compared to statistical methods, are also used for AI. 
In order to create interpretations of a problem and to reach a solution, these methods use logic and inference.\n\n\\subsection{Boom of Deep Learning}\n\nDeep learning is a relatively recent congregation of approaches that have radically transformed machine learning. Deep learning is not an algorithm per se, but a range of algorithms that implements neural networks with deep layers. These neural networks are so deep that they can only be implemented on computer node clusters --- modern methods of computing --- such as graphics processing units (GPUs), are needed to train them successfully. Deep learning functions very well for vast quantities of data, and it is never too difficult to engineer the functionality even if a problem is complex (for example, due to the unstructured data). When it comes to image detection, natural language processing, and voice recognition, deep learning can always outperform the other types of algorithms. Deep learning assisted disease screening and clinical outcome prediction or automated driving, which were not feasible using previous methods, are well manifested now. Actually, the deeper the neural network with more data loaded for training, the higher accuracy a neural network can produce. The deep learning is very strong, but there are a few disadvantages to it. The reasoning of how deep learning algorithms reach to a certain solution is almost impossible to reveal clearly. Although several tools are now available that can increase insights into the inner workings of the deep learning model, this black-box problem still exists. Deep learning often involves long training cycles, a lot of data and complex hardware specifications, and it is not easy to obtain the specific skills necessary to create a new deep learning approach to tackle a new problem. \n\nAlthough acknowledging that AI includes a wide variety of scientific areas, this paper uses the umbrella word 'AI' and much of the recent interest in AI has been motivated by developments in machine learning and deep learning. More importantly, we should realise that there is not one algorithm, though, that will adapt or solve all issues. Success normally depends on the exact problem that needs to be solved and the knowledge available. A hybrid solution is often required to solve the problem, where various algorithms are combined to provide a concrete solution. Each issue involves a detailed analysis into what constitutes the best-fit algorithm. Transparency of the input size, capabilities of the deep neural network and time efficiency should also be taken into consideration, since certain algorithms take a long time to train.\n\n\\subsection{Stunt by the Black-box and Promotion of the Explainable AI}\n\nAny of today's deep learning tools are capable of generating extremely reliable outcomes, but they are often highly opaque, if not fully invisible, making it difficult to understand their behaviours. For even skilled experts to completely comprehend these so-called 'black-box' models may be still difficult. 
As these deep learning tools are applied on a wide scale, researchers and policymakers can challenge whether the precision of a given task outweighs more essential factors in the decision-making procedure.\n\nAs part of attempts to integrate ethical standards into the design and implementation of AI-enabled technologies, policy discussions around the world increasingly involve demands for some form of Trustable AI, which includes Valid AI, Responsible AI, Privacy-Preserving AI, and Explainable AI (XAI), in which the XAI want to address the fundamental question about the rationale of the decision making process including both human level XAI and machine level XAI (Figure \\ref{fig:fig2}). For example, in the UK, such calls came from the AI Committee of the House of Lords, which argued that the development of intelligible AI systems is a fundamental requirement if AI will be integrated as a trustworthy tool for our society. In the EU, the High-Level Group on AI has initiated more studies on the pathway towards XAI (Figure \\ref{fig:fig2}). Similarly, in the USA, the Defence Advanced Research Projects Agency funds a new research effort aiming at the development of AI with more explainability. These discussions will become more urgent as AI approaches are used to solve problems in a wide variety of complicated policy making areas, as experts increasingly work alongside AI-enabled decision-making tools, for example in clinical studies, and as people more regularly experience AI systems in real life when decisions have a major impact. Meanwhile, research studies in AI continue to progress at a steady pace. XAI is a vigorous area with many on-going studies emerging and several new strategies evolving that make a huge impact on AI development in various ways.\n\n\\begin{figure}\n\\begin{center}\n \\includegraphics[width=\\linewidth]{.\/figures\/Figure2.pdf} \n \n \\caption{Left: Trustable AI or Trustworthy AI includes Valid AI, Responsible AI, Privacy-Preserving AI, and Explainable AI (XAI). Right: EU General Data Protection Regulation (GDPR) highlights the Fairness, Privacy, Transparency and Explainability of the AI.}\n \\label{fig:fig2}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\\begin{center}\n \\includegraphics[width=\\linewidth]{.\/figures\/Figure3.pdf} \n \n \\caption{Schema of the added explainable surrogate module for the normal machine or deep learning procedure that can achieve a more transparent and trustworthy model.}\n \\label{fig:fig3}\n\\end{center}\n\\end{figure}\n\nWhile the usage of the term is inconsistent, \"XAI\" refers to a class of systems that have insight into how an AI system makes decisions and predictions. XAI explores the reasoning for the decision-making process, presents the positives and drawbacks of the system, and offers a glimpse of how the system will act in the future. By offering accessible explanations of how AI systems perform their study, XAI can allow researchers to understand the insights that come from research results. For example, in Figure \\ref{fig:fig3}, an additional explainable surrogate module can be added to the learnt model to achieve a more transparent and trustworthy model. In other words, for a conventional machine or deep learning model, only generalisation error has been considered while adding an explainable surrogate, both generalisation error and human experience can be considered and a verified prediction can be achieved. 
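To make the explainable surrogate idea of Figure \\ref{fig:fig3} concrete, the short sketch below trains an arbitrary black-box classifier and then fits a shallow decision tree on the predictions of that classifier, so that the tree, rather than the black-box itself, is what the end-user inspects. The dataset, the choice of a random forest as the black-box and the fidelity check are illustrative assumptions rather than a prescribed recipe.\n\n\\begin{verbatim}\n# A minimal sketch of a post-hoc global surrogate (illustrative only).\nfrom sklearn.datasets import load_breast_cancer\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.tree import DecisionTreeClassifier, export_text\n\nX, y = load_breast_cancer(return_X_y=True)\nX_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)\n\nblack_box = RandomForestClassifier(n_estimators=200, random_state=0)\nblack_box.fit(X_tr, y_tr)\n\n# The surrogate mimics the black-box predictions, not the ground truth.\nsurrogate = DecisionTreeClassifier(max_depth=3, random_state=0)\nsurrogate.fit(X_tr, black_box.predict(X_tr))\n\n# Fidelity: how often the readable model agrees with the black-box.\nfidelity = accuracy_score(black_box.predict(X_te), surrogate.predict(X_te))\nprint('surrogate fidelity:', fidelity)\nprint(export_text(surrogate))\n\\end{verbatim}\n\nA high fidelity suggests that the readable tree is a faithful proxy of the black-box on these data, whereas a low fidelity warns that the explanation should not be trusted; in either case the prediction itself is still produced by the original model.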
In contrast, a learnt black-box model without an explainable surrogate module will cause concerns for the end-users although the performance of the learnt model can be high. Such a black-box model can always cause confusions like ``Why did you do that?\", ``Why did you not do that?\", ``When do you succeed or fail?\", ``How do I correct an error?\", and ``Can I trust the prediction?\". The XAI powered model, on the other hand, can provide clear and transparent predictions to reassure ``I understand why.\", ``I understand why not.\", ``I know why you succeed or fail.\", ``I know how to correct an error.\", and ``I understand, therefore I trust\". A typical feedback loop of the XAI development can be found in Figure \\ref{fig:fig4}, which includes seven steps from training, quality assurance (QA), deployment, prediction, split testing (A\/B test), monitoring, and debugging. \n\n\\begin{figure}[!htbp]\n\\begin{center}\n \\includegraphics[width=0.8\\linewidth]{.\/figures\/Figure4.pdf} \n \n \\caption{A typical feedback loop of the XAI development that includes seven steps from training, quality assurance (QA), deployment, prediction, split testing (A\/B test), monitoring, and debugging.}\n \\label{fig:fig4}\n\\end{center}\n\\end{figure}\n\nA variety of terms are used to define certain desired characteristics of an XAI system in research, public, and policy debates, including:\n\n\\begin{itemize}\n\t\\item Interpretability: it means a sense of knowing how the AI technology functions.\n\t\n \\item Explainability: it provides an explanation for a wider range of users that how a decision has been drawn.\n \n \\item Transparency: it measures the level of accessibility to the data or model.\n\n \\item Justifiability: it indicates an understanding of the case to support a particular outcome.\n \n \\item Contestability: it implies how the users can argue against a decision. \n \n\\end{itemize}\n\n\\begin{figure}[!htb] \n\\begin{center}\n \\includegraphics[width=0.7\\linewidth]{.\/figures\/Figure5.pdf} \n \n \\caption{Model explainability vs. model performance for widely used machine learning and deep learning algorithms. The ideal solution should have both high explainability and high performance. However, existing linear models, rule-based models and decision trees are more transparent, but with lower performance in general. In contrast, complex models, e.g., deep learning and ensembles, manifest higher performance while less explainability can be obtained. HBN: Hierarchical Bayesian Networks; SLR: Simple Linear Regression; CRF: Conditional Random Fields; MLN: Markov Logic Network; SVM: Support Vector Machine; AOG: Stochastic And-Or-Graphs; XGB: XGBoost; CNN: Convolutional Neural Network; RNN: Recurrent Neural Network; and GAN: Generative Adversarial Network.}\n \\label{fig:fig5}\n\\end{center}\n\\end{figure}\n\nComprehensive surveys on general XAI can be found elsewhere, e.g., \\cite{arrieta2020explainable,adadi2018peeking,samek2019towards,rai2020explainable}; therefore, here we provide an overview of most important concepts of the XAI. Broadly speaking, XAI can be categorised into model-specific or model-agnostic based approaches. Besides, these methods can be classified into local or global methods that can be either intrinsic or post-hoc \\cite{arrieta2020explainable}. Essentially, there are many machine learning models that are intrinsically explainable, e.g., linear models, rule-based models and decision trees, which are also known as transparent models or white-box models. 
However, these relatively simple models may have a relatively lower performance (Figure \\ref{fig:fig5}). For more complex models, e.g, support vector machines (SVM), convolutional neural networks (CNN), recurrent neural networks (RNN) and ensemble models, we can design model-specific and post-hoc XAI strategies for each of them. For example, commonly used strategies include explanation by simplification, architecture modification, feature relevance explanation, and visual explanation \\cite{arrieta2020explainable}. Clearly these more complex models can achieve better performance while the explainability becomes lower (Figure \\ref{fig:fig5}). \n\nRecently, model-agnostic based approaches attract great attention that rely on a simplified surrogate function to explain the predictions \\cite{samek2019towards}. Model-agnostic approaches are not attached to a specific machine learning model. This class of techniques, in other words, distinguishes prediction from explanation. Model-agnostic representations are usually post-hoc that are generally used to explain deep neural networks with interpretable surrogates that can be local or global \\cite{adadi2018peeking}. Below is some summary for XAI in more complex deep learning based models.\n\n\\subsubsection{Model-Specific Global XAI}\n\nBy integrating interpretability constraints into the procedure of deep learning, these model-specific global XAI strategies can improve the understandability of the models. Structural restrictions may include sparsity and monotonicity, where fewer input features are leveraged or the correlation between features and predictions is confined as monotonic). Semantic prior knowledge can also be impelled to restrict the higher-level abstractions derived from the data. For instance, in a CNN based brain tumour detection\/classification model using multimodal MRI data fusion, constraints can be imposed by forcing disengaged representations that are recognisable to each MRI modality (e.g., T1, T1 post-contrast and FLAIR), respectively. In doing so, the model can identify crucial information from each MRI modality and distinguish brain tumours and sub-regions into necrotic, more or less infiltrative that can provide vital diagnosis and prognosis information. On the contrary, simple aggregation based information fusion (combining all the multimodal MRI data like a sandwich) would not provide such explainability.\n\n\\subsubsection{Model-Specific Local XAI}\n\nIn a deep learning model, a model-specific local XAI technique offers an interpretation for a particular instance. Recently, novel attention mechanisms have been proposed to emphasise the importance of different features of the high-dimensional input data to provide an explanation of a representative instance. Consider a deep learning algorithm that encodes an X-ray image into a vector using a CNN and then use an RNN to produce a clinical description for the X-ray image by using the encoded vector. For the RNN, an attention module can be applied to explain to the user what image fragments the model focuses on to produce each substantive term for the clinical description. 
For example, the attention mechanism will represent the appropriate segments of the image corresponding to the clinical key words derived by the deep learning model when a clinician is baffled to link the clinical key words to the regions of interest in the X-ray image.\n\n\\subsubsection{Model-Agnostic Global XAI}\n\nIn model-agnostic global XAI, a surrogate representation is developed to approximate an interpretable module for the black-box model. For instance, an interpretable decision tree based model can be used to approximate a more complex deep learning model on how clinical symptoms impact treatment response. A clarification of the relative importance of variables in affecting treatment response to clinical symptoms can be given by the IF-THEN logic of the decision tree. Clinical experts can analyse these variables and are likely to believe the model to the extent that particular symptomatic factors are known to be rational and confounding noises can be accurately removed. Diagnostic methods can also be useful to produce insights into the significance of individual characteristics in the predictions of the model. Partial dependence plots can be leveraged to determine the marginal effects of the chosen characteristics vs. the performance of the forecast, whereas individual conditional expectation can be employed to obtain a granular explanation of how a specific feature affects particular instances and to explore variation in impacts throughout instances. For example, a partial dependency plot can elucidate the role of clinical symptoms in reacting favourably to a particular treatment strategy, as observed by a computer-aided diagnosis system. On the other hand, individual conditional expectation can reveal variability in the treatment response among subgroups of patients.\n\n\\subsubsection{Model-Agnostic Local XAI}\n\nFor this type of XAI approaches, the aim is to produce model-agnostic explanations for a particular instance or the vicinity of a particular instance. Local Interpretable Model-Agnostic Explanation (LIME) \\cite{ribeiro2016lime}, a well-validated tool, can provide an explanation for a complex deep learning model in the neighbourhood of an instance. Consider a deep learning algorithm that classifies a physiological attribute as a high-risk factor for certain diseases or cause of death, for which the clinician requires a post-hoc clarification. The interpretable modules are perturbed to determine how the predictions made by the change of those physiological attributes. For this perturbed dataset, a linear model is learnt with higher weights given to the perturbed instances in the vicinity of the physiological attribute. The most important components of the linear model can indicate the influence of a particular physiological attribute that can suggest a high-risk factor or the contrary can be implied. This can provide comprehensible means for the clinicians to interpret the classifier. \n\n\\section{Related Studies in AI for Healthcare and XAI for Healthcare}\n\n\\subsection{AI in Healthcare}\n\nAI attempts to emulate the neural processes of humans, and it introduces a paradigm change to healthcare, driven by growing healthcare data access and rapid development in analytical techniques. We survey briefly the present state of healthcare AI applications and explore their prospects. For a detailed up to date review, the readers can refer to Jiang et al. \\cite{jiang2017artificial}, Panch et al. \\cite{panch2019inconvenient}, and Yu et al. 
\\cite{yu2018artificial} on general AI techniques for healthcare and Shen et al. \\cite{shen2017deep}, Litjens et al. \\cite{litjens2017survey} and Ker et al. \\cite{ker2017deep} on medical image analysis.\n\nIn the medical literature, the effects of AI have been widely debated \\cite{dilsizian2014artificial,patel2009coming,jha2016adapting}. Sophisticated algorithms can be developed using AI to 'read' features from a vast amount of healthcare data and then use the knowledge learnt to help clinical practice. To increase its accuracy based on feedback, AI can also be fitted with learning and self-correcting capabilities. By presenting up-to-date medical knowledge from journals, manuals and professional procedures to advise effective patient care, an AI-powered device \\cite{strickland2019ibm} will support clinical decision making. Besides, in human clinical practice, an AI system may help to reduce medical and therapeutic mistakes that are unavoidable (i.e., more objective and reproducible) \\cite{dilsizian2014artificial,patel2009coming,strickland2019ibm,weingart2000epidemiology, graber2005diagnostic,winters2012diagnostic,lee2013cognitive}. In addition, to help render real-time inferences for health risk warning and health outcome estimation, an AI system can handle valuable knowledge collected from a large patient population \\cite{neill2013using}. \n\nAs AI has recently re-emerged into the scientific and public consciousness, AI in healthcare has new breakthroughs and clinical environments are imbued with novel AI-powered technologies at a breakneck pace. Nevertheless, healthcare was described as one of the most exciting application fields for AI. Researchers have suggested and built several systems for clinical decision support since the mid-twentieth century \\cite{miller1994medical,musen2014clinical}. Since the 1970s, rule-based methods had many achievements and have been seen to interpret ECGs \\cite{kundu2000knowledge}, identify diseases \\cite{de1972computer}, choose optimal therapies \\cite{shortliffe1975computer}, offer scientific logic explanations \\cite{barnett1987dxplain} and assist doctors in developing diagnostic hypotheses and theories in challenging cases of patients \\cite{miller1986internist}. Rule-based systems, however, are expensive to develop and can be unstable, since they require clear expressions of decision rules and, like any textbook, require human-authored modifications. Besides, higher-order interactions between various pieces of information written by different specialists are difficult to encode and the efficiency of the structures is constrained by the comprehensiveness of prior medical knowledge \\cite{berner1994performance}. To narrow down the appropriate psychological context, prioritise medical theories, and prescribe treatment, it was also difficult to incorporate a method that combines deterministic and probabilistic reasoning procedures \\cite{szolovits1978categorical,szolovits1994categorical}.\n\nRecent AI research has leveraged machine learning approaches, which can account for complicated interactions \\cite{deo2015machine}, to recognise patterns from the clinical results, in comparison to the first generation of AI programmes, which focused only on the curation of medical information by experts and the formulation of rigorous decision laws. The machine learning algorithm learns to create the correct output for a given input in new instances by evaluating the patterns extracted from all the labelled input-output pairs \\cite{yu2016omics}. 
Supervised machine learning algorithms are programmed to determine the optimal parameters in the models in order to minimise the differences between their training case predictions and the effects observed in these cases, with the hope that the correlations found are generalisable to cases not included in the dataset of training. The model generalisability can be then calculated using the test dataset. For supervised machine learning models, grouping, regression and characterisation of the similarity between instances with similar outcome labels are among the most commonly used tasks. For the unlabelled dataset, unsupervised learning infers the underlying patterns for discovering sub-clusters of the original dataset, for detecting outliers in the data, or for generating low-dimensional data representations. However, it is of note that in a supervised manner, the recognition of low-dimensional representations for labelled dataset may be done more effectively. Machine-learning approaches allow the development of AI applications that promote the exploration of previously unrecognised data patterns without the need to define decision-making rules for each particular task or to account for complicated interactions between input features. Machine learning has therefore been the preferred method for developing AI utilities \\cite{deo2015machine,roberts2017biomedical,rogers2020radiomics}.\n\nThe recent rebirth of AI has primarily been motivated by the active implementation of deep learning---which includes training a multi-layer artificial neural network (i.e., a deep neural network) on massive datasets---to wide sources of labelled data \\cite{goodfellow2016deep}. Existing neural networks are getting deeper and typically have $>$100 layers. Multi-layer neural networks may model complex interactions between input and output, but may also require more data, processing time, or advanced architecture designs to achieve better performance. Modern neural networks commonly have tens of millions to hundreds of millions of parameters and require significant computing resources to perform the model training \\cite{yu2018artificial}. Fortunately, recent developments in computer-processor architecture have empowered the computing resources required for deep learning \\cite{wang2019benchmarking}. However, in labelled instances, deep-learning algorithms are incredibly 'data hungry.' Huge repositories of medical databases that can be integrated into these algorithms have only recently become readily available, due to the establishment of a range of large-scale research (in particular the Cancer Genome Atlas \\cite{tomczak2015cancer} and the UK Biobank \\cite{sudlow2015uk}), data collection platforms (e.g., Broad Bioimage Benchmark Collection \\cite{ljosa2012annotated} and the Image Data Resources \\cite{williams2017image}) and the Health Information Technology for Economic and Clinical Health (HITECH) Act, which has promised to provide financial incentives for the use of electronic health records (EHRs) \\cite{desroches2008electronic,hsiao2013office}. 
In general, deep learning based AI algorithms have been developed for image-based classification \\cite{hu2020weakly}, diagnosis \\cite{litjens2016deep,zhang2019deep,cao2020multiparameter} and prognosis \\cite{cheerla2019deep,roberts2020machine}, genome interpretation \\cite{zou2019primer}, biomarker discovery \\cite{waldstein2020unbiased,li2020atrial}, monitoring by wearable life-logging devices \\cite{lane2015early}, and automated robotic surgery \\cite{chen2020deeprobotic} to enhance the digital healthcare \\cite{esteva2019guide}. The rapid explosion of AI has given rise to the possibilities of using aggregated health data to generate powerful models that can automate diagnosis and also allow an increasingly precise approach to medicine by tailoring therapies and targeting services with optimal efficacy in a timely and dynamic manner. A non-exhaustive map of possible applications is showing in Figure \\ref{fig:fig6}. \n\n\\begin{figure}[!htb] \n\\begin{center}\n \\includegraphics[width=0.75\\linewidth]{.\/figures\/Figure6.pdf} \n \n \\caption{A non-exhaustive map of the AI in healthcare applications.}\n \\label{fig:fig6}\n\\end{center}\n\\end{figure}\n\nWhile AI is promising to revolutionise medical practice, several technological obstacles lie ahead. Because deep learning based approaches rely heavily on the availability of vast volumes of high-quality training data, caution must be taken to collect data that is representative of the target patient population. For example, data from various healthcare settings, which include different forms of bias and noise, may cause a model trained in the data of one hospital to fail to generalise to another \\cite{obermeyer2016predicting}. Where the diagnostic role has an incomplete inter-expert agreement, it has been shown that consensus diagnostics could greatly boost the efficiency of the training of the deep learning based models \\cite{krause2018grader}. In order to manage heterogeneous data, adequate data curation is important. However, achieving a good quality gold standard for identifying the clinical status of the patients requires physicians to review their clinical results independently and maybe repeatedly, which is prohibitively costly at a population scale. A silver standard \\cite{rebholz2010calbc} that used natural-language processing methods and diagnostic codes to determine the true status of patients has recently been proposed \\cite{kirby2016phekb}. Sophisticated algorithms that can handle the idiosyncrasies and noises of different datasets can improve the efficiency and safety of prediction models in life-and-death decisions.\n\nMost of the recent advancement in neural networks has been limited to well-defined activities that do not require data integration across several modalities. Approaches for the application of deep neural networks to general diagnostics (such as analysis of signs and symptoms, prior medical history, laboratory findings and clinical course) and treatment planning are less simple. While deep learning has been effective in image detection \\cite{zhao2019object}, translation \\cite{singh2017machine}, speech recognition \\cite{amodei2016deep,nassif2019speech}, sound synthesis \\cite{purwins2019deep} and even automated neural architecture search \\cite{elsken2019neural}, clinical diagnosis and treatment tasks often need more care (e.g., patient interests, beliefs, social support and medical history) than the limited tasks that deep learning can be normally adept. 
Moreover, it is unknown if transfer learning approaches will be able to translate models learnt from broad non-medical datasets into algorithms for the study of multi-modality clinical datasets. This suggests that more comprehensive data-collection and data-annotation activities are needed to build end-to-end clinical AI programmes.\n\nThe design of a computing system for the processing, storage and exchange of EHRs and other critical health data remains a problem \\cite{lee2009ethical}. Privacy-preserving approaches, e.g., via federated learning, can allow safe sharing of data or models across cloud providers \\cite{narayan2010privacy}. However, the creation of interoperable systems that follow the requirement for the representation of clinical knowledge is important for the broad adoption of such technology \\cite{dolin2006hl7}. Deep and seamless incorporation of data across healthcare applications and locations remains questionable and can be inefficient. However, new software interfaces for clinical data are starting to show substantial adoption through several EHR providers, such as Substitutable Medical Applications and Reusable Technologies on the Fast Health Interoperability Resources platform \\cite{mandl2012escaping,mandel2016smart}. Most of the previously developed AI in healthcare applications were conducted on retrospective data for the proof of concept \\cite{topol2019high}. Prospective research and clinical trials to assess the efficiency of the developed AI systems in clinical environments are necessary to verify the real-world usefulness of these medical AI systems \\cite{yu2019framing}. Prospective studies will help recognise the fragility of the AI models in real-world heterogeneous and noisy clinical settings and identify approaches to incorporate medical AI for existing clinical workflows.\n\nAI in medicine would eventually result in safety, legal and ethical challenges \\cite{miller2019medical} with respect to medical negligence attributed to complicated decision-making support structures, and have to face the regulation hurdles \\cite{challen2019artificial}. If malpractice lawsuits involving medical AI applications occur, the judicial system will continue to provide specific instructions as to which agency is responsible. Health providers with malpractice insurance have to be clear on coverage as health care decisions are taken in part by the AI scheme \\cite{yu2018artificial}. With the deployment of automatic AI for particular clinical activities, the criteria for diagnostic, surgical, supporting and paramedical tasks will need to be revised and the functions of healthcare practitioners will begin to change as different AI modules are implemented into the quality of treatment, and the bias needs to be minimised while the patient satisfaction must be maximised \\cite{decamp2020latent,esmaeilzadeh2020use}.\n\n\n\\subsection{XAI in Healthcare}\n\nDespite deep learning based AI technologies will usher in a new era of digital healthcare, challenges exist. XAI can play a crucial role, as an auxiliary development (Figure \\ref{fig:fig6}), for potentially solving the small sample learning by filter out clinically meaningless features. Moreover, many high-performance deep learning models produce findings that are impossible for unaided humans to understand. 
While these models can produce better-than-human efficiency, it is not easy to express intuitive interpretations that can justify model findings, define model uncertainties, or derive additional clinical insights from these computational 'black-boxes.' With potentially millions of parameters in the deep learning model, it can be tricky to understand what the model sees in the clinical data, e.g., radiological images \\cite{england2019artificial}. For example, research investigation has explicitly stated that being a black box is a \"strong limitation\" for AI in dermatology since it is not capable of doing a personalised evaluation by a qualified dermatologist that can be used to clarify clinical facts \\cite{gomolin2020artificial}. This black-box design poses an obstacle for the validation of the developed AI algorithms. It is necessary to demonstrate that a high-performance deep learning model actually identifies the appropriate area of the image and does not over-emphasise unimportant findings. Recent approaches have been developed to describe AI models including the visualisation methods. Some widely used levers include occlusion maps \\cite{zeiler2014visualizing}, salience maps \\cite{Simonyan14a}, class activation maps \\cite{selvaraju2017grad}, and attention maps \\cite{zhang2017mdnet}. Localisation and segmentation algorithms can be more readily interpreted since the output is an image. Model understanding, however, remains much more difficult for deep neural network models trained on non-imaging data other than images that is a current open question for ongoing research efforts \\cite{ribeiro2016lime}.\n\nDeep learning-based AI methods have gained popularity in the medical field, with a wide range of work in automatic triage, diagnosis, prognosis, treatment planning and patient management \\cite{jiang2017artificial}. We can find many open questions in the medical field that have galvanised clinical trials leveraging deep learning and AI approaches (e.g., from grand-challenge.org). Nevertheless, in the medical field, the issue of interpretability is far from theoretical development. More precisely, it is noted that interpretabilities in the clinical sectors include considerations not recognised in other areas, including risk and responsibilities \\cite{croskerry2017diagnosis,panch2019inconvenient}. Life may be at risk as medical responses are made, and leaving those crucial decisions to AI algorithms that without explainabilities and accountabilities will be irresponsible \\cite{quinn2020three}. Apart from legal concerns, this is a serious vulnerability that could become disastrous if used with malicious intent. \n\nAs a result, several recent studies \\cite{zhang2017mdnet,pmlr-v106-tonekaboni19a,holzinger2019causability} have been devoted to the exploration of explainability in medical AI. More specifically, specific analyses have been investigated, e.g., chest radiography \\cite{kallianos2019far}, emotion analysis in medicine \\cite{zucco2018explainable}, COVID-19 detection and classification \\cite{hu2020weakly}, and the research encourages understanding of the importance of interpretability in the medical field \\cite{langlotz2019roadmap}. Besides, the exposition argues \\cite{london2019artificial} that a certain degree of opaqueness is appropriate, that is, it would be more important for us to deliver empirically checked reliable findings than to dwell too hard on how to unravel the black-box. 
It is advised that readers consult these studies first, at least for an overview of interpretability in medical AI.\n\nAn obvious XAI approach, taken by many researchers, is to provide their predictive models with interpretability by design. These methods depend primarily on maintaining the interpretability of less complicated AI models while improving their performance through refinement and optimisation techniques. For example, as Figure \\ref{fig:fig5} shows, decision tree based methods are normally interpretable; research studies have therefore explored automated pruning of decision trees for the classification of various illnesses \\cite{stiglic2012comprehensive} and accurate, boosting-derived decision trees for patient stratification \\cite{valdes2016mediboost}. However, such model optimisation is not always straightforward and is not a trivial task.\n\nPrevious survey studies on XAI in healthcare can be found elsewhere, e.g., Tjoa and Guan \\cite{tjoa2020survey} on medical XAI and Payrovnaziri et al. \\cite{payrovnaziri2020explainable} on XAI for EHR. For specific applications, e.g., digital pathology, the readers can refer to Pocevivciute et al. \\cite{pocevivciute2020survey} and Tosun et al. \\cite{tosun2020histomapr}. Research studies in XAI and medical XAI have increased exponentially, especially after 2018, alongside the increasing development of multimodal clinical information fusion (Figure \\ref{fig:fig7}). In this mini-review, we only survey the most recent studies that were not covered by previous, more comprehensive reviews, and we classify XAI in medicine and healthcare into five categories, synthesising the approach of Payrovnaziri et al. \\cite{payrovnaziri2020explainable}: (1) XAI via dimension reduction, (2) XAI via feature importance, (3) XAI via attention mechanism, (4) XAI via knowledge distillation, and (5) XAI via surrogate representations (Table \\ref{tab:tab1}).\n\n\\subsubsection{XAI via Dimension Reduction}\n\nDimension reduction methods, e.g., principal component analysis (PCA) \\cite{wold1987principal}, independent component analysis (ICA) \\cite{comon1994independent}, Laplacian Eigenmaps \\cite{belkin2001laplacian} and other more advanced techniques, are commonly used approaches to decipher AI models by exposing the most important features. For example, by integrating multi-label k-nearest neighbour and genetic algorithm techniques, Zhang et al. \\cite{zhang2015predicting} developed a model for drug side effect estimation based on an optimal subset of the input features. Yang et al. \\cite{yang2015manifold} proposed a nonlinear dimension reduction method to improve unsupervised classification of $^1$H MRS brain tumour data and to extract the most prominent features using Laplacian Eigenmaps. Zhao and Bolouri \\cite{zhao2016object} stratified stage-one lung cancer patients by identifying the most informative examples via a supervised learning scheme: in order to recognise a group of \"exemplars\" for constructing a \"dense data matrix\", they introduced a hybrid dimension reduction method combining pattern recognition with regression analytics, and then retained the exemplars that were most predictive of the outcome in the final model. Based on domain knowledge, Kim et al. \\cite{kim2016opening} developed a deep learning method to extract and rank the most important features based on their weights in the model, and visualised the outcome for predicting cell-type-specific enhancers. 
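Before turning to further domain-specific studies, the core idea of this category can be sketched generically. The snippet below (a minimal illustration that is not tied to any of the cited systems) uses an L1-penalised linear model so that most feature weights shrink to zero and only a small, inspectable subset of features drives the prediction; the dataset, penalty strength and threshold are assumptions made purely for illustration.\n\n\\begin{verbatim}\n# Illustrative sparsity-based dimension reduction for interpretability.\n# Assumes scikit-learn; the dataset and penalty are placeholders.\nfrom sklearn.datasets import load_diabetes\nfrom sklearn.linear_model import Lasso\nfrom sklearn.preprocessing import StandardScaler\n\ndata = load_diabetes()\nX = StandardScaler().fit_transform(data.data)\ny = data.target\n\nmodel = Lasso(alpha=0.5).fit(X, y)\n\n# Non-zero coefficients identify the reduced feature subset that a\n# clinician could review feature by feature.\nkept = [(name, coef) for name, coef in zip(data.feature_names, model.coef_)\n        if abs(coef) > 1e-6]\nprint(sorted(kept, key=lambda t: -abs(t[1])))\n\\end{verbatim}\n\nThe retained coefficients give a compact, human-readable summary of what the model relies on, at the cost of possibly discarding features that still matter for individual cases, a limitation discussed at the end of this subsection.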
To explore the gene pathways and their associations in patients with the brain tumour, Hao et al. \\cite{hao2018pasnet} proposed a pathway-associated sparse deep learning method. Bernardini et al. \\cite{bernardini2019discovering} used the least absolute shrinkage and selection operator (LASSO) to prompt sparsity for SVMs for the early diagnosis of type 2 diabetes.\n\nSimplifying the information down to a small subset using dimension reduction methods can make the underlying behaviour of the model understandable. Besides, with potentially more stable regularised models, they are less prone to overfitting, which may also be beneficial in general. Nevertheless, the possibility of losing crucial features, which may still be relevant for clinical predictions on a case-by-case basis, can be common and these important features may be neglected unintentionally by the dimensional reduced models. \n\n\n\\subsubsection{XAI via Feature Importance}\n\nResearchers have leveraged the feature importance to explain the characteristics and significance of the extracted features and the correlations among features and between features and the outcomes for providing interpretability for AI models \\cite{carvalho2019machine,adadi2018peeking,linardatos2021explainable}. Ge et al. \\cite{ge2018interpretable} used feature weights to rank the top ten extracted features to predict mortality of the intensive care unit. Suh et al. \\cite{suh2020development} developed a risk calculator model for prostate cancer (PCa) and clinically significant PCa with XAI modules that used Shapley value to determine the feature importance \\cite{lundberg2020local}. Sensitivity analysis of the extracted features can represent the feature importance, and essentially the more important features are those for which the output is more sensitive \\cite{montavon2018methods}. Eck et al. \\cite{eck2017interpretation} defined the most significant features of a microbiota-based diagnosis task by roughly marginalising the features and testing the effect on the model performance.\n\nShrikumar et al. \\cite{shrikumar17a} implemented the Deep Learning Important FeaTures (DeepLIFT)---a backpropagation based approach to realise interpretability. Backpropagation approaches measure the output gradient for input through the backpropagation algorithm to report the significance of the feature. Zuallaert et al. \\cite{zuallaert2018splicerover} developed the DeepLIFT based method to create interpretable deep models for splice site prediction by measuring the contribution score for each nucleotide. A recent comparative study of different models of XAI, including DeepLIFT \\cite{shrikumar17a}, Guided backpropagation (GBP) \\cite{DB15a}, Layer wise relevance propagation (LRP) \\cite{bach2015pixel}, SHapley Additive exPlanations (SHAP) \\cite{Chen2021} and others, was conducted for ophthalmic diagnosis \\cite{singh2020interpretation}.\n\nXAI, by the extraction of feature importance, can not only explain essential feature characteristics, but may also reflect their relative importance to clinical interpretation; however, numerical weights are either not easy to understand or maybe misinterpreted.\n\n\\subsubsection{XAI via Attention Mechanism}\n\nThe core concept behind the attention mechanism \\cite{BahdanauCB14} is that the model \"pays attention\" only to the parts of the input where the most important information is available. 
It was originally proposed for tackling the relation extraction task in machine translation and other natural language processing problems. Because certain words are more relevant than others in the relation extraction task, the attention mechanism can assess the importance of the words for the purpose of classification, generating a meaning representation vector. There are various types of attention mechanisms, including global attention, which uses all words to build the context, local attention, which depends only on a subset of words, or self-attention, in which several attention mechanisms are implemented simultaneously, attempting to discover every relation between pairs of words \\cite{putelli2019applying}. The attention mechanism has also been shown to contribute to the enhancement of interpretability as well as to technical advances in the field of visualisation \\cite{mascharka2018transparency}.\n\nKaji et al. \\cite{kaji2019attention} demonstrated particular occasions when the input features have mostly influenced the predictions of clinical events in ICU patients using the attention mechanism. Shickel et al. \\cite{shickel2019deepsofa} presented an interpretable acuity score framework using deep learning and attention-based sequential organ failure assessment that can assess the severity of patients during an ICU stay. Hu et al. \\cite{hu2019deephint} provided \"mechanistic explanations\" for the accurate prediction of HIV genome integration sites. Zhang et al. \\cite{zhang2018patient2vec} also built a method to learn how to represent EHR data that could document the relationship between clinical outcomes within each patient. Choi et al. \\cite{choi2016retain} implemented the Reverse Time Attention Model (RETAIN), which incorporated two sets of attention weights, one for visit level to capture the effect of each visit and the other at the variable-level. RETAIN was a reverse attention mechanism intended to maintain interpretability, to replicate the actions of clinicians, and to integrate sequential knowledge. Kwon et al. \\cite{kwon2018retainvis} proposed a visually interpretable cardiac failure and cataract risk prediction model based on RETAIN (RetainVis). The general intention of these research studies is to improve the interpretability of deep learning models by highlighting particular position(s) within a sequence (e.g., time, visits, DNA) in which those input features can affect the prediction outcome.\n\nClass activation mapping (CAM) \\cite{zhou2016learning} method and its variations have been investigated for XAI since 2016, and have been subsequently used for digital healthcare, especially the medical image analysis areas. Lee et al. \\cite{lee2019explainable} developed an XAI algorithm for the detection of acute intracranial haemorrhage from small datasets that is one of the most famous studies using CAM. Kim et al. \\cite{kimartificial2020} summarised AI based breast ultrasonography analysis with CAM based XAI. Zhao et al. \\cite{zhao2018respond} reported a Respond-CAM method that offered a heatmap-based saliency on 3D images obtained from cryo-tomography of cellular electrons. The region where macromolecular complexes were present was marked by the high intensity in the heatmap. Izadyyazdanabadi et al. \\cite{izadyyazdanabadi2018weakly} developed a multilayer CAM (MLCAM), which was used for brain tumour localization. Coupling with CNN, Couture et al. 
\\cite{couture2018multiple} proposed a multi-instance aggregation approach to classify breast tumour tissue microarray for various clinical tasks, e.g., histologic subtype classification, and the derived super-pixel maps could highlight the area where the tumour cells were and each mark corresponded to a tumour class. Rajpurkar et al. \\cite{rajpurkar2020appendixnet} used Grad-CAM for the diagnosis of appendicitis from a small dataset of CT exams using video pretraining. Porumb et al. \\cite{porumb2020precision} combined CNN and RNN for electrocardiogram (ECG) analysis and applied Grad-CAM for the identification of the most relevant heartbeat segments for the hypoglycaemia detection. In Hu et al. \\cite{hu2020weakly}, a COVID-19 classification system was implemented with multiscale CAM to highlight the infected areas. By the means of visual interpretability, these saliency maps are recommended. The clinician analysts who examine the AI output can realise that the target is correctly identified by the AI model, rather than mistaking the combination of the object with the surrounding as the object itself.\n\nAttention based XAI methods do not advise the clinical end user specifically of the response, but highlight the areas of greater concern to facilitate easier decision-making. Clinical users can, therefore, be more tolerant of imperfect precision. However, it might not be beneficial to actually offer this knowledge to a clinical end user because of the major concerns, including information overload and warning fatigue. It can potentially be much more frustrating to have areas of attention without clarification about what to do with the findings if the end user is unaware of what the rationale of a highlighted segment is, and therefore the end user can be prone to ignore non-highlighted areas that could also be critical.\n\n\\subsubsection{XAI via Knowledge Distillation and Rule Extraction}\n\nKnowledge distillation is one form of the model-specific XAI, which is about eliciting knowledge from a complicated model to a simplified model---enables to train a student model, which is usually explainable, with a teacher model, which is hard to interpret. For example, this can be accomplished by model compression \\cite{polino2018model} or tree regularisation \\cite{wu2018beyond} or through a coupling approach of model compression and dimension reduction \\cite{carvalho2019machine}. Research studies have investigated this kind of technique for several years, e.g., Hinton et al. \\cite{Hinton44873}, but has recently been uplifted along with the development of AI interpretability \\cite{Hinton46495,yang2020auto,li2020survey}. Rule extraction is another widely used XAI method that is closely associated with knowledge distillation and can have a straightforward application for digital healthcare, for example, decision sets or rule sets have been studied for interpretability \\cite{lage2019evaluation} and Model Understanding through Subspace Explanations (MUSE) method \\cite{lakkaraju2019faithful} has been developed to describe the projections of the global model by considering the various subgroups of instances defined by user interesting characteristics that also produces explanation in the form of decision sets.\n\nChe et al. \\cite{che2016interpretable} introduced an interpretable mimic-learning approach, which is a straightforward knowledge-distillation method that uses gradient-boosting trees to learn interpretable structures and make the baseline model understandable. 
The approach used the information distilled to construct an interpretable prediction model for the outcome of the ICU, e.g., death, ventilator usage, etc. A rule-based framework that could include an explainable statement of death risk estimation due to pneumonia was introduced by Caruana et al. \\cite{caruana2015intelligible}. Letham et al. \\cite{letham2015interpretable} also proposed an XAI model named Bayesian rule lists, which offered certain stroke prediction claims. Ming et al. \\cite{ming2018rulematrix} developed a visualisation approach to derive rules by approximating a complicated model via model induction at different tasks such as diagnosis of breast cancer and the classification of diabetes. Xiao et al. \\cite{xiao2018readmission} built a deep learning model to break the dynamic associations between readmission to hospital and possible risk factors for patients by translating EHR incidents into embedded clinical principles to characterise the general situation of the patients. Classification rules were derived as a way of providing clinicians interpretable representations of the predictive models. Davoodi and Moradi \\cite{davoodi2018mortality} developed a rule extraction based XAI technique to predict mortality in ICUs and Das et al. \\cite{das2019interpretable} used a similar XAI method for the diagnosis of Alzheimer's disease. In the LSTM-based breast mass classification, Lee et al. \\cite{lee2019generation} incorporated the textual reasoning for interpretability. For the characterisation of stroke and risk prediction, Prentzas et al. \\cite{prentzas2019integrating} implemented the argumentation theory for their XAI algorithm training process by extracting decision rules. \n\nXAI approaches, which rely on knowledge distillation and rule extraction, are theoretically more stable models. The summarised representations of complicated clinical data can provide clinical end-users with the interpretable results intuitively. However, if the interpretation of these XAI results could not be intuitively understood by clinical end-users, then the representations are likely to make it much harder for the end-users to comprehend.\n\n\\subsubsection{XAI via Surrogate Representation}\n\nAn effective application of XAI in the medical field is the recognition of individual health-related factors that lead to disease prediction using the local interpretable model-agnostic explanation (LIME) method \\cite{ribeiro2016lime} that offers explanations for any classifier by approximating the reference model with a surrogate interpretable and \"locally faithful\" representation. LIME disrupts an instance, produces neighbourhood data, and learns linear models in the neighbourhood to produce explanations \\cite{liang2021explaining}. \n\nPan et al. \\cite{pan2019development} used LIME to analyse the contribution of new instances to forecast central precocious puberty in children. Ghafouri-Fard et al. \\cite{ghafouri2019application} have applied a similar approach to diagnose autism spectrum disorder. Kovalev et al. \\cite{KOVALEV2020106164} proposed a method named SurvLIME to explain AI base survival models. Meldo et al. \\cite{meldo2020natural} used a local post-hoc explanation model, i.e., LIME, to select important features from a special feature representation of the segmented lung suspicious objects. Panigutti et al. \\cite{Panigutti2020} developed the \"Doctor XAI\" system that could predict the readmission, diagnosis and medications order for the patient. 
Similar to LIME, the implemented system trained a local surrogate model to mimic the black-box behaviour with a rule-based explanation, which can then be mined using a multi-label decision tree. Lauritsen et al. \\cite{lauritsen2020explainable} tested an XAI method using Layer-wise Relevance Propagation \\cite{montavon2019layer} for the prediction of acute critical illness from EHR.\n\nSurrogate representation is a widely used scheme for XAI; however, the white-box approximation must accurately describe the black-box model to gain trustworthy explanation. If the surrogate models are too complicated or too abstract, the clinician comprehension might be affected.\n\n\n\\begin{sidewaystable}\n\n\\resizebox{0.55\\textwidth}{!}{\\begin{minipage}{\\textwidth}\n\n\\begin{tabular}{@{}lllllll@{}}\n\\toprule\nXAI Category & Reference & Method & Intrinsic\/Post-hoc & Local\/Global & Model-specific\/Model-agnostic & Application \\\\ \\midrule\nDimension Reduction & Zhang et al. \\cite{zhang2015predicting} & Optimal feature selection & Intrinsic & Global & Model-specific & Drug side effect estimation \\\\\n & Yang et al. \\cite{yang2015manifold} & Laplacian Eigenmaps & Intrinsic & Global & Model-specific & Brain tumour classification using MRS \\\\\n & Zhao and Bolouri \\cite{zhao2016object} & Cluster analysis and LASSO & Intrinsic & Global & Model-agnostic & Lung cancer patients stratification \\\\\n & Kim et al. \\cite{kim2016opening} & Optimal feature selection & Intrinsic & Global & Model-agnostic & Cell-type specific enhancers prediction \\\\\n & Hao et al. \\cite{hao2018pasnet} & Sparse deep learning & Intrinsic & Global & Model-agnostic & Long-term survival prediction for glioblastoma multiforme \\\\\n & Bernardini et al. \\cite{bernardini2019discovering} & Sparse-balanced SVM & Intrinsic & Global & Model-agnostic & Early diagnosis of type 2 diabetes \\\\ \\midrule\nFeature Importance & Eck et al. \\cite{eck2017interpretation} & Feature marginalisation & Post-hoc & Global, Local & Model-agnostic & Gut and skin microbiota\/inflammatory bowel diseases diagnosis \\\\\n & Ge et al. \\cite{ge2018interpretable} & Feature weighting & Post-hoc & Global & Model-agnostic & ICU mortality prediction (all-cause) \\\\\n & Zuallaert et al. \\cite{zuallaert2018splicerover} & DeepLIFT & Post-hoc & Global & Model-agnostic & Splice site detection \\\\\n & Suh et al. \\cite{suh2020development} & Shapley value & Post-hoc & Global, Local & Model-agnostic & Decision-supporting for prostate cancer \\\\\n & Singh et al. \\cite{singh2020interpretation} & DeepLIFT and others & Post-hoc & Global, Local & Model-agnostic & Ophthalmic diagnosis \\\\ \\midrule\nAttention Mechanism & Kwon et al. \\cite{kwon2018retainvis} & Attention & Intrinsic & Global, Local & Model-specific & Clinical risk prediction (cardiac failure\/cataract) \\\\\n & Zhang et al. \\cite{zhang2018patient2vec} & Attention & Intrinsic & Local & Model-specific & EHR based future hospitalisation prediction \\\\\n & Choi et al. \\cite{choi2016retain} & Attention & Intrinsic & Local & Model-specific & Heart failure prediction \\\\\n & Kaji et al. \\cite{kaji2019attention} & Attention & Intrinsic & Global, Local & Model-specific & Predictions of clinical events in ICU \\\\\n & Shickel et al. \\cite{shickel2019deepsofa} & Attention & Intrinsic & Global, Local & Model-specific & Sequential organ failure assessment\/in-hospital mortality \\\\\n & Hu et al. 
\\cite{hu2019deephint} & Attention & Intrinsic & Local & Model-specific & Prediction of HIV genome integration site \\\\ \\cmidrule{2-7}\n & Izadyyazdanabadi et al. \\cite{izadyyazdanabadi2018weakly} & MLCAM & Intrinsic & Local & Model-specific & Brain tumour localisation \\\\\n & Zhao et al. \\cite{zhao2018respond} & Respond-CAM & Intrinsic & Local & Model-specific & Macromolecular complexes \\\\\n & Couture et al. \\cite{couture2018multiple} & Super-pixel maps & Intrinsic & Local & Model-specific & Histologic tumour subtype classification \\\\\n & Lee et al. \\cite{lee2019explainable} & CAM & Intrinsic & Local & Model-specific & Acute intracranial haemorrhage detection \\\\\n & Kim et al. \\cite{kimartificial2020} & CAM & Intrinsic & Local & Model-specific & Breast neoplasm ultrasonography analysis \\\\\n & Rajpurkar et al. \\cite{rajpurkar2020appendixnet} & Grad-CAM & Intrinsic & Local & Model-specific & Diagnosis of appendicitis \\\\\n & Porumb et al. \\cite{porumb2020precision} & Grad-CAM & Intrinsic & Local & Model-specific & ECG based hypoglycaemia detection \\\\\n & Hu et al. \\cite{hu2020weakly} & Multiscale CAM & Intrinsic & Local & Model-specific & COVID-19 classification \\\\ \\midrule\nKnowledge Distillation & Caruana et al. \\cite{caruana2015intelligible} & Rule-based system & Intrinsic & Global & Model-specific & Prediction of pneumonia risk and 30-day readmission forecast \\\\\n & Letham et al. \\cite{letham2015interpretable} & Bayesian rule lists & Intrinsic & Global & Model-specific & Stroke prediction \\\\\n & Che et al. \\cite{che2016interpretable} & Mimic learning & Post-hoc & Global, Local & Model-specific & ICU outcome prediction (acute lung injury) \\\\\n & Ming et al. \\cite{ming2018rulematrix} & Visualization of rules & Post-hoc & Global & Model-specific & Clinical diagnosis and classification (breast cancer, diabetes) \\\\\n & Xiao et al. \\cite{xiao2018readmission} & Complex relationships distilling & Post-hoc & Global & Model-specific & Prediction of the heart failure caused hospital readmission \\\\\n & Davoodi and Moradi \\cite{davoodi2018mortality} & Fuzzy rules & Intrinsic & Global & Model-specific & In-hospital mortality prediction (all-cause) \\\\\n & Lee et al. \\cite{lee2019generation} & Visual\/textual justification & Post-hoc & Global, Local & Model-specific & Breast mass classification \\\\\n & Prentzas et al. \\cite{prentzas2019integrating} & Decision rules & Intrinsic & Global & Model-specific & Stroke Prediction \\\\ \\midrule\nSurrogate Models & Pan et al. \\cite{pan2019development} & LIME & Post-hoc & Local & Model-agnostic & Forecast of central precocious puberty \\\\\n & Ghafouri-Fard et al. \\cite{ghafouri2019application} & LIME & Post-hoc & Local & Model-agnostic & Autism spectrum disorder diagnosis \\\\\n & Kovalev et al. \\cite{KOVALEV2020106164} & LIME & Post-hoc & Local & Model-agnostic & Survival models construction \\\\\n & Meldo et al. \\cite{meldo2020natural} & LIME & Post-hoc & Local & Model-agnostic & Lung lesion segmentation \\\\\n & Panigutti et al. \\cite{Panigutti2020} & LIME like with rule-based XAI & Post-hoc & Local & Model-agnostic & Prediction of patient readmission, diagnosis and medications \\\\\n & Lauritsen et al. 
\\cite{lauritsen2020explainable} & Layer-wise relevance propagation & Post-hoc & Local & Model-agnostic & Prediction of acute critical illness from EHR \\\\ \\bottomrule\n\\end{tabular}\n\n\\end{minipage}}\n\\caption{Summary of various XAI methods in digital healthcare and medicine including their category (XAI via dimension reduction, feature importance, attention mechanism, knowledge distillation, and surrogate representations), reference, key idea, type (Intrinsic or Post-hoc, Local or Global, and Model-specific or Model-agnostic) and specific clinical applications.}\n \\label{tab:tab1}\n\n\\end{sidewaystable}\n\n\n\\begin{figure}[!htb] \n\\begin{center}\n \\includegraphics[width=0.8\\linewidth]{.\/figures\/Figure7.pdf} \n \n \\caption{Publications per year for XAI and medical XAI (top) and percentage for two categories of research (bottom). Data retrieved from Scopus\u00ae (Jan 8th, 2021) by using these commands when querying this database---XAI: (ALL(\"Explainable AI\") OR ALL(\"Interpretable AI\") OR ALL(\"Explainable Artificial Intelligence\") OR ALL(\"Interpretable Artificial Intelligence\") OR ALL(\"XAI\")) AND PUBYEAR = 20XX; Medical XAI: (ALL(\"Explainable AI\") OR ALL(\"Interpretable AI\") OR ALL(\"Explainable Artificial Intelligence\") OR ALL(\"Interpretable Artificial Intelligence\") OR ALL(\"XAI\")) AND (ALL(\"medical\") OR ALL(\"medicine\")) AND PUBYEAR = 20XX, in which XX represents the actual year.}\n \\label{fig:fig7}\n\\end{center}\n\\end{figure}\n\n\n\n\\section{Proposed Method}\n\n\\subsection{Problem Formulation}\n\nIn this study, we demonstrate two typical but important applications of XAI, developed for classification and segmentation---two of the most widely discussed problems in medical image analysis and AI-powered digital healthcare. Our developed XAI techniques are demonstrated on CT image classification for COVID-19 patients and on brain ventricle segmentation for hydrocephalus patients using CT and MRI datasets.\n\n\\subsection{XAI for Classification}\n\nIn this subsection, we provide a practical XAI solution for explainable COVID-19 classification that is capable of alleviating the domain shift problem caused by multicentre data collected for distinguishing COVID-19 patients from other lung diseases using CT images. The main challenge for multicentre data is that hospitals are likely to use different scanning protocols and parameters for CT scanners when collecting data from patients, leading to distinct data distributions. Moreover, it can be observed that images obtained from various hospitals are visually different although they are imaging the same organ. If a machine learning model is trained on data from one hospital and tested on the data from another hospital (i.e., another centre), the performance of the model often degrades drastically. Another challenge is that, in common practice, only patient-level annotations are available, whereas image-level labels are not, since annotating individual slices would take radiologists a large amount of time \\cite{lin2005emergency}. Therefore, we propose a weakly supervised learning based classification model to cope with these two problems. In addition, an explainable diagnosis module in the proposed model offers auxiliary diagnostic information visually for radiologists.
The overview of our proposed model is illustrated in Figure \\ref{fig:cls_network}.\n\n\n\\begin{figure}\n\\begin{center}\n \\includegraphics[width=\\linewidth]{.\/figures\/cls_network.pdf} \n \n \\caption{The overview of our proposed model. $P(c \\given S_i)$ denotes the probability for section $S_i$, and $P(c \\given \\mathcal{P})$ represents the probability that the patient is COVID-19 positive. $Q\\in \\mathbb{R}^{2\\times 2 \\times C}$ indicates the noise transition from the probability of the true label $P(y_c \\given \\mathcal{I})$ to that of the noisy label $P(z_c \\given \\mathcal{I})$. Besides, $\\phi(\\cdot)$ is a non-linear feature transformation function, which projects the features into an embedding space.}\n \\label{fig:cls_network}\n\\end{center}\n\\end{figure}\n\n\\subsubsection{Explainable Diagnosis Module (EDM)}\n\nBecause the prediction process of deep learning models is a black box, it is desirable to develop an explainable technique for medical image diagnosis that provides an auxiliary interpretive tool for radiologists. In common practice, CAM can generate localisation maps for the prediction through a weighted sum of the feature maps from a backbone network such as ResNet \\cite{he2016resnet}. Suppose $F^{k} \\in \\mathbb{R}^{H'\\times W'}$ is the $k$-th feature map with the shape of $H' \\times W'$, and $W^{fc} \\in \\mathbb{R}^{K \\times C}$ is the weight matrix of the last fully connected layer, where $K$ is the number of feature maps and $C$ is the number of classes. The class score for class $c$ can then be computed as\n\\begin{align}\n s_c = \\sum_{k=1}^K W_{k,c}^{fc} \\left( \\frac{1}{H'W'} \\sum_{i=1}^{H'}\\sum_{j=1}^{W'}F_{i,j}^k \\right).\n\\label{eq:cls_cam}\n\\end{align}\nAccordingly, the activation map $A_c^{fc}$ for class $c$ can be defined by \n\\begin{align}\n (A_c^{fc})_{i,j} = \\sum_{k=1}^K W_{k,c}^{fc} F_{i,j}^k.\n\\end{align}\n\nHowever, generating CAMs in this way is not an end-to-end process: the network must first be trained on the dataset, and the weights of the last fully connected layer are then used to compute the CAMs, which brings extra computation. To tackle this drawback, in our explainable diagnosis module (EDM), we replace the fully connected layer with a $1\\times 1$ convolutional layer whose weight $W^{conv}$ has the same mathematical form as $W^{fc}$, so that Eq.(\\ref{eq:cls_cam}) can be reformulated as\n\\begin{align}\n s_c = \\frac{1}{H'W'} \\sum_{i=1}^{H'}\\sum_{j=1}^{W'} \\left( \\sum_{k=1}^K W_{k,c}^{conv}F_{i,j}^k \\right) = \\frac{1}{H'W'} \\sum_{i=1}^{H'}\\sum_{j=1}^{W'} (A_c^{conv})_{i,j},\n\\end{align}\nwhere $A_c^{conv}$ is the activation map for class $c$, which is learnt adaptively during the training procedure. The activation map produced by the EDM not only indicates the importance of each region of the CT image and locates the infected areas, but also offers explainable results that account for the prediction.\n\n\\subsubsection{Slice Integration Module (SIM)}\n\nIntuitively, each COVID-19 case has a different severity. Some patients are severely infected with large lesions, while most positive cases are mild, with only a small portion of the CT volume infected. Therefore, if we directly apply the patient-level annotations as labels for individual image slices, the labels would be extremely noisy, leading to poor performance as a consequence.
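\nBefore turning to how slice-level predictions are aggregated into a patient-level decision, the EDM computation above can be summarised with a minimal PyTorch-style sketch; the channel sizes, variable names, and the two-class setting here are illustrative assumptions rather than the exact implementation:\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\nclass EDM(nn.Module):\n    # A 1x1 convolution replaces the fully connected layer, so the class\n    # activation maps A^conv are produced directly in the forward pass.\n    def __init__(self, in_channels=2048, num_classes=2):\n        super().__init__()\n        self.classifier = nn.Conv2d(in_channels, num_classes, kernel_size=1)\n\n    def forward(self, features):           # features: (B, K, H', W')\n        cams = self.classifier(features)   # A^conv:   (B, C, H', W')\n        scores = cams.mean(dim=(2, 3))     # s_c: spatial average of A^conv\n        return scores, cams\n\\end{verbatim}\nIn this view, the CAMs are simply the pre-pooling activations of the classification head, which is why they can be read off at inference time without a separate post-hoc pass.\n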
To overcome this label-noise problem, instead of relying on single images, we propose a slice integration module (SIM) and use the joint distribution of the image slices to model the probability of the patient being infected or not. In our SIM, we assume that the lesions are consecutive and the distribution of the lesion positions is consistent. Therefore, we adopt a section-based strategy to handle this problem and fit it into a Multiple Instance Learning (MIL) framework \\cite{zhou2004mil}. In MIL, each sample is regarded as a bag, which is composed of a set of instances. A positive bag contains at least one positive instance, while a negative bag solely consists of negative instances. In our scenario, only patient annotations (bag labels) are provided, and the sections can be regarded as instances in the bags.\n\nGiven a patient $\\mathcal{P} = [\\mathcal{I}_1, \\mathcal{I}_2, \\cdots, \\mathcal{I}_n]$ with $n$ CT slices, we divide them into disjoint sections $\\mathcal{P} = \\{S_i\\}_{i=1}^{|S|}$, where $|S|$ is the total number of sections for patient $\\mathcal{P}$, that is\n\\begin{align}\n |S| = \\max \\left(1, \\left\\lfloor\\frac{n}{l_s}\\right\\rfloor \\right).\n\\end{align}\n\nHere $l_s$ is the section length, which is a design parameter. Then we integrate the probability of each section into the probability of the patient, that is\n\\begin{align}\n P(c\\givenbase \\mathcal{P}) = P(c \\givenbase \\{S_i\\}_{i=1}^{|S|}) = \\frac{1}{1+ \\prod_{i=1}^{|S|} (\\frac{1}{P(c\\givenbase S_i)} - 1)}, \n\\end{align}\nwhere $P(c\\givenbase S_i)$ is the probability that the $i$-th section $S_i$ belongs to class $c$. By taking the $k$-max probability of the images for each class to compute the section probability, we can mitigate the problem that some slices may contain few infections, which can hinder the prediction for the section. The $k$-max selection method can be formulated as\n\\begin{align}\n P(c \\given S_i) = \\sigma\\left( \\frac{1}{k} \\max_{\\substack{s^{(j)} \\in M}} \\sum_{j=1}^k s^{(j)}_c \\right), \\nonumber \\\\ s.t. \\quad M \\subset S_i, |M| = k,\n\\end{align}\nwhere $s^{(j)}_c$ is the $j$-th largest class-$c$ score among the slices in the $i$-th section, and $\\sigma (\\cdot)$ represents the Sigmoid function. Then we apply the patient annotations $\\textbf{y}$ to compute the classification loss, which can be formulated as\n\\begin{align}\n \\mathcal{L}_{cls} = -\\sum_{c = 0}^1 \\left[y_c\\log P(c \\given \\mathcal{P}) + (1-y_c)\\log (1-P(c \\given \\mathcal{P})) \\right].\n\\label{eq:loss_cls}\n\\end{align}\n\n\\subsubsection{Noisy Correction Module (NCM)}\n\nIn real-world applications, radiologists often need to diagnose the disease from a single image. Therefore, improving prediction accuracy on single images is also important. However, the image-level labels are extremely noisy since only patient-level annotations are available. To further alleviate the negative impact of this label noise, we propose a noisy correction module (NCM).
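\nBefore detailing the NCM, the slice-integration computation above can be illustrated with a minimal sketch; the tensor shapes, variable names, and the small clamping constant are our own assumptions for illustration:\n\\begin{verbatim}\nimport torch\n\ndef section_probability(slice_scores, k=8):\n    # slice_scores: (num_slices_in_section, num_classes) raw class scores\n    topk = slice_scores.topk(min(k, slice_scores.shape[0]), dim=0).values\n    return torch.sigmoid(topk.mean(dim=0))   # sigmoid of the mean top-k score\n\ndef patient_probability(section_probs, eps=1e-6):\n    # section_probs: (num_sections, num_classes)\n    # combine sections by multiplying their odds, matching P(c|P) above\n    inv_odds = section_probs.clamp(eps, 1 - eps).reciprocal() - 1.0\n    return (1.0 + inv_odds.prod(dim=0)).reciprocal()\n\\end{verbatim}\nThe product over sections mirrors the patient-level probability defined above: the patient-level odds are simply the product of the section-level odds.\n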
Inspired by \\cite{bekker2016training}, we model the noise transition distribution $P(z_c = i \\given y_c = j, \\mathcal{I})$, which transforms the true posterior distribution $P(y_c \\given \\mathcal{I})$ to the noisy label distribution $P(z_c \\given \\mathcal{I})$ by \n\\begin{align}\n P(z_c = i \\given \\mathcal{I}) = \\sum_j P(z_c = i \\given y_c = j, \\mathcal{I}) P(y_c = j \\given \\mathcal{I}).\n\\label{eq:noise_prob_simple}\n\\end{align}\n\nIn practice, we estimate the noise transition distribution $Q^c_{ij} = P(z_c = i \\given y_c = j, \\mathcal{I})$ for the class $c$ via\n\\begin{align}\n Q^c_{ij} = P(z_c = i \\given y_c = j, \\mathcal{I}) = \\frac{\\exp(w^c_{ij} \\phi(\\mathcal{I}) + b^c_{ij})}{\\sum_i\\exp(w^c_{ij} \\phi(\\mathcal{I}) + b^c_{ij})},\n\\label{eq:noise_trans}\n\\end{align}\nwhere $i, j \\in \\{0,1\\}$; $\\phi(\\cdot)$ is a nonlinear mapping function implemented by convolution layers; $w^c_{ij}$ and $b^c_{ij}$ are the trainable parameters. The noise transition score $T^c_{ij} = w^c_{ij} \\phi(\\mathcal{I}) + b^c_{ij}$ represents the confidence score of the transition from the true label $j$ to the noisy label $i$ for the class $c$. Therefore, Eq.(\\ref{eq:noise_prob_simple}) can be reformulated as\n\\begin{align}\n P(z_c = i \\given \\mathcal{I}) = \\sum_j Q^c_{ij} P(y_c = j \\given \\mathcal{I}).\n\\label{eq:noise_prob}\n\\end{align}\n\nBy estimating the noisy label distribution $P(z_c \\given \\mathcal{I}_n)$ for each slice of patient $\\mathcal{P}$, the noisy classification loss can be computed by\n\\begin{align}\n \\mathcal{L}_{noisy} = -\\frac{1}{N} \\sum_{n=1}^N\\sum_{c = 0}^1 [y_c^n\\log P(z_c = 1 \\given \\mathcal{I}_n) \\nonumber \\\\+ (1-y_c^n)\\log P(z_c = 0 \\given \\mathcal{I}_n)].\n\\label{eq:loss_noisy}\n\\end{align}\n\nBy combining Eq. (\\ref{eq:loss_cls}) and Eq. (\\ref{eq:loss_noisy}), we obtain the total loss function of our explainable COVID-19 classification solution, that is\n\\begin{align}\n \\mathcal{L} = \\mathcal{L}_{cls} + \\lambda \\mathcal{L}_{noisy},\n\\label{eq:loss}\n\\end{align}\nwhere $\\lambda$ is a hyper-parameter to balance the classification loss $\\mathcal{L}_{cls}$ and the noisy classification loss $\\mathcal{L}_{noisy}$.\n\n\n\n\\subsection{XAI for Segmentation}\n\nIn this subsection, we introduce an XAI model that is applicable to explainable brain ventricle segmentation using multimodal CT and MRI data acquired from hydrocephalus patients. Previous methods \\cite{qian2017objective, cherukuri2017learning} have conducted experiments using images with a slice thickness of less than 3 mm. This is because the smaller the slice thickness, the more images can be obtained, which helps improve the representation power of the model. However, in a real-world scenario, it is not practical for clinicians to use these models because labelling thin image slices is extremely labour-intensive and time-consuming. Therefore, annotations are more commonly available for images with larger slice thickness, whereas annotated thin-slice images are scarce. Besides, models trained only on thick-slice images have poor generalisation on thin-slice images.
To alleviate these problems, we propose a thickness-agnostic image segmentation model, which is applicable to both thick-slice and thin-slice images but only requires the annotations of thick-slice images during the training procedure.\n\n\\begin{figure}\n\\begin{center}\n \\includegraphics[width=\\linewidth]{.\/figures\/seg_network.pdf} \n \n \\caption{Overview of our proposed XAI model for explainable segmentation. Here ResBlock represents the residual block proposed in the ResNet \\cite{he2016resnet}. }\n \\label{fig:seg_network}\n\\end{center}\n\\end{figure}\n\n\nSuppose we have a set of thick-slice images $\\mathcal{D}_\\mathcal{S} = \\{(x_s, y_s) | x_s \\in \\mathbb{R}^{H\\times W \\times 3}, y_s \\in \\mathbb{R}^{H \\times W}\\}$ and a set of thin-slice images $\\mathcal{D}_\\mathcal{T} = \\{ x_t | x_t \\in \\mathbb{R}^{H\\times W \\times 3}\\}$. The main idea of our model is to utilise the unlabelled thin-slice images $\\mathcal{D}_\\mathcal{T}$ to minimise the model performance gap between thick-slice and thin-slice images, while a post-hoc XAI analysis can also be developed. \n\n\\subsubsection{Segmentation Network}\n\nWith the wide application of deep learning methods, encoder-decoder based architectures are usually adopted for automated, high-accuracy medical image segmentation. The workflow of our proposed segmentation network is illustrated in Figure \\ref{fig:seg_network}. Inspired by the U-Net \\cite{ronneberger2015unet} model, we replace the original encoder with ResNet-50 \\cite{he2016resnet} pre-trained on the ImageNet dataset \\cite{deng2009imagenet}, since it provides better feature representations of the input images. In addition, the decoder of the U-Net has at least a couple of drawbacks: 1) increasing the resolution of the low-resolution feature maps can introduce a large amount of computational complexity, and 2) interpolation methods \\cite{dong2015bilinear} such as bilinear interpolation and bicubic interpolation do not bring extra information to improve the segmentation. Instead, the decoder of our model adopts sub-pixel convolution for constructing segmentation results. The sub-pixel convolution can be represented as \n\\begin{align}\n F^{L} = SP(W_L * F^{L-1} + b_L),\n\\end{align}\nwhere the $SP(\\cdot)$ operator transforms and rearranges a tensor of shape $H \\times W \\times (C \\cdot r^2)$ into a tensor of shape $rH \\times rW \\times C$, and $r$ is the scaling factor. $F^{L-1}$ and $F^L$ are the input and output feature maps, respectively. $W_L$ and $b_L$ are the parameters of the sub-pixel convolution operator for layer $L$.\n\n\\subsubsection{Multimodal Training}\n\nAs aforementioned, only the thick-slice images come with annotations. Therefore, in order to minimise the performance gap between thick-slice and thin-slice images, we apply a multimodal training procedure to jointly optimise for both types of images. Overall, the objective function of our proposed multimodal training can be computed as\n\\begin{align}\n \\mathcal{L}(x_t, x_s) = \\mathcal{L}_\\mathcal{S}(p_s, y_s) + \\beta \\mathcal{L}_\\mathcal{T}(p_t),\n\\end{align}\nwhere $\\beta$ is a hyper-parameter for weighting the impact of $\\mathcal{L}_\\mathcal{S}$ and $\\mathcal{L}_\\mathcal{T}$, and $p_s$ and $p_t$ are the predicted segmentation probability maps of shape $H\\times W\\times C$ for thick-slice and thin-slice images, respectively.
In particular, $\\mathcal{L}_\\mathcal{S}$ is the cross-entropy loss defined as follows\n\\begin{align}\n \\mathcal{L}_\\mathcal{S} (p_s, y_s) = -\\frac{1}{HWC} \\sum_{n=1}^{HW} \\sum_{c=1}^C y_s^{n,c} \\log p_s^{n,c}.\n\\end{align}\n\nFor the unlabelled thin-slice images, we assume that $\\mathcal{L}_\\mathcal{T}$ can push the features away from the decision boundary of the feature distributions of the thick-slice images, thus achieving distribution alignment. Besides, following the entropy-minimisation view of \\cite{grandvalet2005semi}, increasing the distance between the prediction distribution $p$ and the uniform distribution $\\mathcal{U} = \\frac{1}{C}$ diminishes the uncertainty of the prediction, which motivates the negative sign below. To measure the distance between these two distributions, the objective function $\\mathcal{L}_\\mathcal{T}$ can be modelled with an $f$-divergence, that is\n\\begin{align}\n \\mathcal{L}_\\mathcal{T} (p_t) = -\\frac{1}{HWC} \\sum_{n=1}^{HW} \\sum_{c=1}^C D_f(p_t^{n,c} || \\mathcal{U}) = -\\frac{1}{HWC} \\sum_{n=1}^{HW} \\sum_{c=1}^Cf(Cp_t^{n,c}).\n\\end{align}\n\nMost existing methods \\cite{grandvalet2005semi, vu2019advent} tend to choose $f(x) = x\\log x$, which corresponds to the KL-divergence. However, one of the main obstacles is that when adopting $f(x) = x\\log x$, the gradient of $\\mathcal{L}_\\mathcal{T}$ would be extremely imbalanced. To be more specific, it can assign a large gradient to easily classified samples, while assigning a small gradient to hard-to-classify samples. Therefore, in order to mitigate this imbalance during the optimisation, we adopt the Pearson $\\chi^2$-divergence (i.e., $f(x) = x^2-1$) rather than the KL-divergence for $\\mathcal{L}_\\mathcal{T}$, that is \n\\begin{align}\n \\mathcal{L}_\\mathcal{T} (p_t) = -\\frac{C}{HW} \\sum_{n=1}^{HW} \\sum_{c=1}^C(p_t^{n,c})^2.\n\\end{align}\n\nAfter applying the Pearson $\\chi^2$-divergence, the gradient imbalance issue can be mitigated since the slope of the gradient is constant, which can be verified by taking the second-order derivative of $\\mathcal{L}_\\mathcal{T}$.\n\nDuring the training procedure, $\\mathcal{L}(x_t, x_s)$ is optimised alternately for thick-slice and thin-slice images.\n\n\\subsubsection{Latent Space Explanation}\n\nOnce the model is trained using multimodal datasets, the performance of the network can be quantitatively evaluated by volumetric or regional overlap metrics, e.g., Dice scores. However, the relation between network performance and input samples remains unclear. In order to provide information about the characteristics of the data and their effect on model performance, so that users can set their expectations accordingly, we investigate the feature space and its correlation with model performance. For feature space visualisation, we extract the outputs of the encoder module of our model, and then decompose them into a two-dimensional space via Principal Component Analysis (PCA). For estimating the whole space, we use a multi-layer perceptron to fit the decomposed samples and their corresponding Dice scores, which can provide an understanding of Dice scores for particular regions of interest in the latent space where no data are available.
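\nA minimal sketch of this latent-space analysis is given below, assuming the encoder outputs have already been collected into an array with one row per image and the corresponding Dice scores into a vector; the array names, the regressor size, and the grid resolution are illustrative assumptions:\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.decomposition import PCA\nfrom sklearn.neural_network import MLPRegressor\n\n# feats: (num_images, feature_dim) encoder outputs; dice: (num_images,)\ncoords = PCA(n_components=2).fit_transform(feats)\n\n# fit a small MLP so Dice can be estimated over the whole 2D latent plane\nreg = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(coords, dice)\n\n# evaluate the fitted surface on a grid for visualisation\nxs = np.linspace(coords[:, 0].min(), coords[:, 0].max(), 100)\nys = np.linspace(coords[:, 1].min(), coords[:, 1].max(), 100)\ngx, gy = np.meshgrid(xs, ys)\nsurface = reg.predict(np.c_[gx.ravel(), gy.ravel()]).reshape(gx.shape)\n\\end{verbatim}\nThe fitted surface is only an interpolation of observed scores, so regions of the plane that are far from any projected sample should be read with caution.\n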
Therefore, through analysing the characteristics of the samples in the latent space, we can retrieve information about the relationship between samples and the model's predictive performance on them.\n\n\\subsection{Implementation Details}\nFor both our classification and segmentation tasks, we used ResNet-50 \\cite{he2016resnet} as the backbone network pre-trained on ImageNet \\cite{deng2009imagenet}. For classification, we resized the images to a spatial resolution of $224 \\times 224$. During the training procedure, we set $\\lambda = 1 \\times 10^{-4}$, the dropout rate to $0.7$, and the $L_2$ weight decay coefficient to $1\\times 10^{-5}$. Besides, $l_s$ was set to 16 and $k$ was set to 8 for computing the patient-level probability. For segmentation, we set $\\beta = 1 \\times 10^{-2}$ to balance the impact of the supervised and unsupervised losses. During the training, the Adam \\cite{kingma2014adam} optimiser was utilised with a learning rate of $1 \\times 10^{-3}$. The training procedure was terminated after 4,000 iterations with a batch size of 8. All of the experiments were conducted on a workstation with 4 NVIDIA RTX GPUs using the PyTorch framework (version 1.5). \n\n\\section{Experimental Settings and Results}\n\n\\subsection{Showcase I: Classification for COVID-19} \n\n\\subsubsection{Datasets}\nWe collected CT data from four different local hospitals in China and removed the personal information to ensure data privacy. The information of our collected data is summarised in Figure \\ref{fig:cls_dataset}. In total, there were 380 CT volumes of patients who tested COVID-19 positive (confirmed by reverse transcription polymerase chain reaction tests) and 424 COVID-19 negative CT volumes. For a fair comparison, we trained the model on the cross-centre datasets collected from hospitals A, B, C, and D. For unbiased independent testing, the publicly available CC-CCII dataset \\cite{zhang2020ccii}, which contains 2,034 CT volumes with 130,511 images, was adopted to verify the effectiveness of the trained models.\n\n\\begin{figure}[!ht]\n\\begin{center}\n \\subfigure[Patient-level Statistics]{\n \\includegraphics[width=0.9\\linewidth]{.\/figures\/number_of_patient.pdf} \n }\n \\subfigure[Image-level Statistics]{\n \\includegraphics[width=0.9\\linewidth]{.\/figures\/number_of_images.pdf} \n }\n \n \\caption{Class distribution of the collected CT data. The numbers in the sub-figures (a) and (b) represent the counts for the patient-level statistics and image-level statistics, respectively. The data collected from several clinical centres can result in great challenges in learning discriminative features from those class-imbalanced centres.}\n \\label{fig:cls_dataset}\n\\end{center}\n\\end{figure}\n\n\n\n\\subsubsection{Data Standardisation, Pre-Processing and Augmentation}\nFollowing the protocol described in \\cite{zhang2020ccii}, we used the U-Net segmentation network \\cite{ronneberger2015unet} to segment the CT images. Then, we randomly cropped a rectangular region whose aspect ratio was randomly sampled in $[3\/4, 4\/3]$ and whose area was randomly sampled in $[90\\%, 100\\%]$, and the region was then resized to $224 \\times 224$. Meanwhile, we randomly flipped the input volumes horizontally with a probability of 0.5.
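\nOne possible realisation of this augmentation pipeline with torchvision is sketched below; the exact implementation may differ, and the final tensor conversion is an assumption about the surrounding data loader:\n\\begin{verbatim}\nfrom torchvision import transforms\n\n# random crop with aspect ratio in [3\/4, 4\/3] and area in [90%, 100%],\n# resized to 224x224, followed by a horizontal flip with probability 0.5\naugment = transforms.Compose([\n    transforms.RandomResizedCrop(224, scale=(0.9, 1.0), ratio=(0.75, 1.3333)),\n    transforms.RandomHorizontalFlip(p=0.5),\n    transforms.ToTensor(),\n])\n\\end{verbatim}\nThe scale and ratio arguments correspond directly to the sampled area and aspect-ratio ranges described above.\n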
The resulting input data were sets of CT volumes composed of consecutive CT image slices.\n\n\\subsubsection{Quantitative Results}\nWe compared our proposed classification model with several state-of-the-art COVID-19 CT classification models \\cite{he2016resnet, wang2020covid, li2020artificial, ouyang2020dual}. Table \\ref{tb:sota} summarises the experimental results of COVID-19 classification on the CC-CCII data. For image-level annotations, ResNet-50 \\cite{he2016resnet} and COVID-Net \\cite{wang2020covid} simply treated patient-level labels as the image labels. Different from the methods proposed by \\cite{he2016resnet, wang2020covid, li2020artificial}, VB-Net \\cite{ouyang2020dual} utilised a 3D residual convolutional neural network trained with patient-level annotations on whole CT volumes rather than single slices. Besides, COVNet \\cite{li2020artificial} extracted prediction scores from each slice in the CT volumes with ResNet and aggregated the prediction scores via a max-pooling operator to get the patient-level probability. \n\nIn Table \\ref{tb:sota}, we can find that our method achieved the best performance among these SOTA methods. In particular, our method improved the AUC by 7.2\\% compared to VB-Net \\cite{ouyang2020dual} at the patient level, indicating that our method is applicable to real-world scenarios. This also verified the benefit of modelling section information in the CT volumes via our proposed SIM, which we believe is vital to the improvement of the classification performance. Besides, our method significantly outperformed other methods by at least 40\\% with respect to specificity while maintaining high sensitivity, which is also crucial for diagnosing COVID-19. In addition, models trained on patient-level annotations achieved better performance than those trained on image-level labels. This is because the noise in the image labels could have a negative impact during training, which might degrade the representation ability of the model. According to \\cite{geirhos2018imagenet}, models trained on images may rely on learning image textures that were highly discriminative among multiple centres. Therefore, these trained models might be overfitted and biased towards the texture features of the images collected from different centres, which could explain the phenomenon that these methods (i.e., \\cite{he2016resnet, wang2020covid}) performed poorly on unseen centres.\n\nFrom another perspective, for CT volumes, the sequential ordering of CT image slices is also informative. COVNet \\cite{li2020artificial} took the most discriminative slice as the representation of the whole CT volume, which ignored the encoding of adjacent slices. This forces the model to rely only on the most discriminative slice, leading to a bias towards positive cases, which impedes the detection of negative cases and results in low specificity. By contrast, VB-Net proposed by Ouyang et al. \\cite{ouyang2020dual} preserved the sequential information by training on whole CT volumes. In our method, we partitioned the CT volume into several sections in order to preserve the sequential information to some extent. Besides, VB-Net was trained with stronger supervision in that it utilised additional masks for its supervised training. Our method only used patient-level annotations, which are much more efficient to obtain.
More importantly, our method achieved better performance on both AUC and accuracy compared to VB-Net \\cite{ouyang2020dual} and COVNet \\cite{li2020artificial}.\n\n\\begin{table*}\n\\begin{center}\n\\resizebox{1.0\\linewidth}{!}{%\n\\begin{tabular}{c|c|c|c|c|c|c}\n\\hline\n\\textbf{Annotation} & \\textbf{Method} & \\textbf{Patient Acc. (\\%)} & \\textbf{Precision (\\%)} & \\textbf{Sensitivity (\\%)} & \\textbf{Specificity (\\%)} & \\textbf{AUC (\\%)} \\\\ \\hline\n\\multirow{5}{*}{Patient-level} & ResNet-50 \\cite{he2016resnet} & 53.44 & 64.45 & 63.03 & 35.71 & 53.24 \\\\\n & COVID-Net \\cite{wang2020covid} & 57.13 & 62.53 & 84.70 & 6.16 & 49.58 \\\\\n & COVNet \\cite{li2020artificial} & 69.96 & 70.20 & \\textbf{93.33} & 26.75 & 81.61 \\\\\n & VB-Net \\cite{ouyang2020dual} & 76.11 & 75.84 & 92.73 & 45.38 & 88.34 \\\\\n & Ours & \\textbf{89.97} & \\textbf{92.99} & 91.44 & \\textbf{87.25} & \\textbf{95.53} \\\\ \\hline\n\\multirow{4}{*}{Image-level} & ResNet-50 \\cite{he2016resnet} & 52.56 & 61.60 & 71.27 & 18.06 & 50.19 \\\\\n & COVID-Net \\cite{wang2020covid} & 60.03 & 64.81 & \\textbf{83.91} & 15.98 & 58.39 \\\\\n & COVNet \\cite{li2020artificial} & 75.55 & 79.90 & 83.24 & 61.37 & 79.48 \\\\\n & Ours & \\textbf{80.41} & \\textbf{88.56} & 80.15 & \\textbf{80.89} & \\textbf{86.06} \\\\ \\hline\n\\end{tabular}\n}\n\\end{center}\n\\caption{Comparison results of our method vs. state-of-the-art methods on the CC-CCII dataset.}\n\\label{tb:sota}\n\\end{table*}\n\nIn addition, we also provided the Precision-Recall (PR) and Receiver Operating Characteristic (ROC) curves to compare the different methods trained on patient-level and image-level annotations (Figures \\ref{fig:pr} and \\ref{fig:roc}). From these figures, we can observe that models trained on image-level annotations (e.g., ResNet \\cite{he2016resnet} and COVID-Net \\cite{wang2020covid}) performed poorly, since their AUCs were close to 50\\%, which indicates random guessing. In contrast, models trained on patient-level annotations were more reliable since their AUCs were greater than 50\\%. In particular, we found that overall our proposed method remained the best-performing algorithm, with an AUC of 95.53\\% at the patient level and 86.06\\% at the image level. These results verified our assumption that for mild COVID-19 cases, most of the image slices are disease-free.\n\n\n\\begin{figure}\n\\begin{center}\n \\subfigure[Patient-level Annotation]{\n \\includegraphics[width=0.46\\linewidth]{.\/figures\/patient_roc.pdf} \n }\n \\quad\n \\subfigure[Image-level Annotation]{\n \\includegraphics[width=0.46\\linewidth]{.\/figures\/image_roc.pdf} \n }\n \n \\caption{The Receiver Operating Characteristic (ROC) curves of different compared methods.}\n \\label{fig:roc}\n\\end{center}\n\\end{figure}\n\n\n\\begin{figure}\n\\begin{center}\n \\subfigure[Patient-level Annotation]{\n \\includegraphics[width=0.46\\linewidth]{.\/figures\/patient_pr.pdf} \n }\n \\quad\n \\subfigure[Image-level Annotation]{\n \\includegraphics[width=0.46\\linewidth]{.\/figures\/image_pr.pdf} \n }\n \n \\caption{The Precision-Recall (PR) curves of different compared methods.}\n \\label{fig:pr}\n\\end{center}\n\\end{figure}\n\n\n\n\n\\subsubsection{Qualitative Results}\n\nIn order to make the predictions more explainable, we used the trained model to visualise the CAMs and bounding boxes generated by our EDM as described above. Figure \\ref{fig:cls_cam} shows the visualisation results of the derived CAMs (i.e., $A^{conv}$).
In this figure, we can clearly observe that our method tended to pay more attention to the discriminative parts of the images when making predictions. For example, in the first column, the lower left part of the lung was seriously infected and had a large area of lesions. Therefore, our method classified the image as COVID-19 positive, demonstrating the capability of our XAI model to make explainable predictions.\n\nIn addition, we also extracted lesion bounding boxes from the derived CAMs. It can be found that our method was capable of yielding accurate bounding boxes from the salient parts of the CAMs, as illustrated in Figure \\ref{fig:cls_cam}, which further confirmed that our XAI method is applicable as an auxiliary diagnostic tool for clinicians. \n\n\n\\begin{figure}\n\\begin{center}\n\n \\includegraphics[width=\\linewidth]{.\/figures\/new_cams.pdf} \n \n \\caption{Examples of the CAMs $A^{conv}$ generated by our proposed EDM for classifying COVID-19 positive patients. The first row contains the original CT-scan image slices, and the second row illustrates the heatmaps of CAMs $A^{conv}$ with bounding boxes confined to the infected areas.}\n \\label{fig:cls_cam}\n\\end{center}\n\\end{figure}\n\nTo further illustrate the learnt features, we extracted features from the backbone network of our model, transformed them using the T-SNE visualisation technique \\cite{maaten2008tsne}, and visualised the distribution of the classified images, as shown in Figure \\ref{fig:t_sne}. In this figure, we can find the distinctive visual characteristics of the CT images from different hospitals (i.e., Hospitals A, B, C, and D). Besides, it can be observed that the COVID-19 positive images were mostly clustered together, and negative images were mainly distributed in another cluster. More interestingly, within the cluster of negative images we can find several images labelled as positive, since these slices were scanned from patients who tested COVID-19 positive. Intuitively, we assume that for some mild cases, lesions are not present in all of the CT slices. Therefore, there were indeed disease-free CT slices that were falsely labelled as COVID-19 positive, which verified our assumption.\n\n\n\\begin{figure*}[!ht]\n\\centering\n\\includegraphics[width=\\textwidth]{.\/figures\/TSNE.pdf}\n\\caption{T-SNE visualisation \\cite{maaten2008tsne} of the learnt features from CT images. Original images are sampled from four different hospitals and represented in the figure. Besides, a falsely annotated image is drawn from the negative cluster.}\n\\label{fig:t_sne}\n\\end{figure*}\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=\\textwidth]{.\/figures\/lime.pdf}\n\\caption{Visualisation of the super-pixels that contributed positively to the predictions via the LIME method \\cite{ribeiro2016lime}.}\n\\label{fig:lime}\n\\end{figure*}\n\n\nAdditionally, in order to explain each individual prediction, we adopted the LIME method \\cite{ribeiro2016lime} to investigate the contribution of each pixel to the prediction. Instead of using individual pixels, we divided each image into super-pixels, which are composed of interconnected pixels with similar imaging patterns. Figure \\ref{fig:lime} shows the explanations via LIME for COVID-19 positive images.
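\nAs a rough illustration of this step, such a super-pixel explanation can be produced with the lime package along the following lines; the classifier wrapper, the number of perturbation samples, and the number of highlighted super-pixels are assumptions for illustration rather than the exact settings used here:\n\\begin{verbatim}\nfrom lime import lime_image\nfrom skimage.segmentation import mark_boundaries\n\nexplainer = lime_image.LimeImageExplainer()\n# predict_fn is an assumed wrapper mapping a batch of RGB slices to\n# class probabilities; ct_slice is a single slice as an RGB array\nexplanation = explainer.explain_instance(ct_slice, predict_fn,\n                                         top_labels=1, num_samples=1000)\nimg, mask = explanation.get_image_and_mask(explanation.top_labels[0],\n                                           positive_only=True, num_features=5)\noverlay = mark_boundaries(img, mask)   # super-pixels supporting the prediction\n\\end{verbatim}\nThe returned mask marks the super-pixels that push the prediction towards the chosen label, which is what Figure \\ref{fig:lime} visualises.\n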
In each pair of images, we visualised the super-pixels that contributed to the COVID-19 positive prediction results. We can observe that the lesion regions account for the positive predictions, which indicates that the reasoning of our deep learning model is plausible.\n\n\n\n\\begin{figure*}[!ht]\n\\centering\n\\includegraphics[width=\\textwidth]{.\/figures\/shap.pdf}\n\\caption{The SHAP values for different super-pixels of the sampled images. We computed the SHAP values through the Kernel SHAP method \\cite{lundberg2017shap}. A super-pixel with a positive SHAP value has a positive impact on the positive prediction, while a negative value means that the super-pixel contributes to the negative prediction.}\n\\label{fig:shap}\n\\end{figure*}\n\nHowever, the LIME method can only estimate the importance according to how close a perturbed combination of super-pixels is to the original instance, and it discards a global view of the contribution of each individual super-pixel. To overcome this drawback, we further leveraged the Kernel SHapley Additive exPlanations (Kernel SHAP) method \\cite{lundberg2017shap} to estimate the contribution of each super-pixel quantitatively by its SHAP value. Samples explained by Kernel SHAP are demonstrated in Figure \\ref{fig:shap}. We can observe that the super-pixels containing lesion areas contributed positively to the positive prediction, while super-pixels related to the background or disease-free areas contributed to the negative prediction.\n\n\n\n\\subsection{Showcase II: Segmentation for Hydrocephalus} \n\n\\subsubsection{Datasets}\nThe studied cohort included 20 normal elderly people, 20 patients with cerebral atrophy, 64 patients with normal pressure hydrocephalus, and 51 patients with acquired hydrocephalus (caused by subarachnoid haemorrhage, brain trauma or brain tumour). CT scans of the head were performed using two CT instruments, the SOMATOM Definition Flash and the SOMATOM Emotion 16 (both Siemens, Germany). MRI examinations were conducted using a 1.5T MR scanner (Avanto, Siemens, Erlangen, Germany) and a 3.0T MRI scanner (Prisma, Siemens, Erlangen, Germany). The slice thicknesses of the CT images include 0.5 mm, 1.0 mm, 1.5 mm, 2.0 mm, 4.8 mm, and 5.0 mm. The slice thicknesses of the MRI images include 1.0 mm, 7.8 mm, and 8.0 mm. For the experiments, we randomly split the thick-slice and thin-slice images into training, validation and testing sets. The details of the dataset are summarised in Table \\ref{tb:seg_dataset}.\n\n\n\\begin{table}[]\n\\begin{center}\n\\resizebox{1.0\\linewidth}{!}{%\n\\begin{tabular}{l|c|c|c|c|cc}\n\\hline\n\\multirow{2}{*}{Modality} & \\multicolumn{2}{c|}{Training Set} & \\multicolumn{2}{c|}{Validation Set} & \\multicolumn{2}{c}{Test Set} \\\\ \\cline{2-7} \n & Thick-slice & Thin-Slice & Thick-slice & Thin-Slice & \\multicolumn{1}{c|}{Thick-slice} & Thin-Slice \\\\ \\hline\nMRI & 810 & 1,303 & 203 & 326 & \\multicolumn{1}{c|}{189} & 982 \\\\\nCT & 2,088 & 2,076 & 523 & 519 & \\multicolumn{1}{c|}{309} & 492 \\\\ \\hline\n\\end{tabular}\n}\n\\end{center}\n\\caption{The number of thick-slice and thin-slice images used in our study.}\n\\label{tb:seg_dataset}\n\\end{table}\n\n\n\\subsubsection{Data Standardisation, Pre-Processing and Augmentation}\nFor the pre-processing of these data, we normalised each image using the $z$-score normalisation scheme, i.e., by subtracting its mean and then dividing by its standard deviation.
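\nA minimal NumPy sketch of this standardisation, combined with the percentile clipping of anomalous pixels described next, could look as follows; the small epsilon and the ordering of the two operations are our own assumptions:\n\\begin{verbatim}\nimport numpy as np\n\ndef standardise(img):\n    # clip anomalous intensities to the 1st and 99th percentiles\n    lo, hi = np.percentile(img, [1, 99])\n    img = np.clip(img, lo, hi)\n    # z-score normalisation: zero mean, unit standard deviation\n    return np.divide(img - img.mean(), img.std() + 1e-8)\n\\end{verbatim}\n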
Anomalous pixel values were clipped to the range between the 1st and 99th percentiles. For data augmentation, we resized the images using bicubic interpolation and resized the masks with nearest-neighbour interpolation. Then we flipped the images horizontally with a probability of 0.5, and scaled the hue, saturation, and brightness with coefficients uniformly drawn from $[0.8, 1.2]$. \n\n\\subsubsection{Quantitative Results}\nTable \\ref{tb:Dice_seg} shows the segmentation performance of the various compared models on different modalities. All of the models were trained on the thick-slice images with annotations and the unlabelled thin-slice images. We can observe that our proposed method outperformed all of the compared state-of-the-art methods by a large margin on the mixed datasets (i.e., the mixture of thick-slice and thin-slice images), by at least 4.4\\% in Dice score. It is of note that all three models achieved similar segmentation performance on thick-slice images. However, our proposed method gained a significant improvement on the thin-slice images for both MRI and CT scans. The primary reason is that our model could diminish the uncertainty of these thin-slice images while achieving distribution alignment between thick-slice and thin-slice images, which could enhance the representation and generalisation capabilities of our model. Besides, we also investigated the effectiveness of $\\mathcal{L}_\\mathcal{S}$ and $\\mathcal{L}_\\mathcal{T}$ in Table \\ref{tb:seg_ablation}. In the table, we can find that when trained only on thick-slice images, the model performed well on thick-slice images while performing poorly on thin-slice images, since the distributions of these two kinds of slices differ. Moreover, the performance of the models trained only on unlabelled thin-slice images degraded sharply because of the lack of annotations to guide the segmentation. In Exp. 3, as shown in Table \\ref{tb:seg_ablation}, our model gained a significant improvement on the thin-slice images while preserving good performance on the thick-slice images, which demonstrated that our trained model is applicable to both types of images in both CT and MRI modalities.\n\n\n\\begin{table}[]\n\\begin{center}\n\\begin{tabular}{l|c|c|c|ccc}\n\\hline\n\\multirow{2}{*}{Method} & \\multicolumn{3}{c|}{MRI} & \\multicolumn{3}{c}{CT} \\\\ \\cline{2-7} \n & Thick & Thin & Mixed & \\multicolumn{1}{c|}{Thick} & \\multicolumn{1}{c|}{Thin} & Mixed \\\\ \\hline\nU-Net \\cite{ronneberger2015unet} & 0.9226 & 0.7665 & 0.8353 & \\multicolumn{1}{c|}{0.9351} & \\multicolumn{1}{c|}{0.7987} & 0.8513 \\\\\nU-Net++ \\cite{zhou2018unet++} & \\multicolumn{1}{l|}{0.9159} & \\multicolumn{1}{l|}{0.8495} & \\multicolumn{1}{l|}{0.8602} & \\multicolumn{1}{l|}{\\textbf{0.9421}} & \\multicolumn{1}{l|}{0.7797} & \\multicolumn{1}{l}{0.8424} \\\\\nOurs & \\textbf{0.9323} & \\textbf{0.9056} & \\textbf{0.9099} & \\multicolumn{1}{c|}{0.9365} & \\multicolumn{1}{c|}{\\textbf{0.8697}} & \\textbf{0.8954} \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\caption{Comparison results (Dice scores) of our method vs. other state-of-the-art methods.
Mixed represents the test set containing both thick-slice and thin-slice images.}\n\\label{tb:Dice_seg}\n\\end{table}\n\n\n\\begin{table}[]\n\\begin{center}\n\\begin{tabular}{l|c|c|c|c|c|ccc}\n\\hline\n\\multirow{2}{*}{\\textbf{Exp.}} & \\multirow{2}{*}{$\\mathcal{L}_\\mathcal{S}$} & \\multirow{2}{*}{$\\mathcal{L}_\\mathcal{T}$} & \\multicolumn{3}{c|}{\\textbf{MRI}} & \\multicolumn{3}{c}{\\textbf{CT}} \\\\ \\cline{4-9} \n & & & Thick & Thin & Mixed & \\multicolumn{1}{c|}{Thick} & \\multicolumn{1}{c|}{Thin} & Mixed \\\\ \\hline\n1 & $\\surd$ & & \\textbf{0.9390} & 0.8199 & 0.8391 & \\multicolumn{1}{c|}{\\textbf{0.9438}} & \\multicolumn{1}{c|}{0.8345} & 0.8767 \\\\\n2 & & $\\surd$ & 0.0034 & 0.0108 & 0.0110 & \\multicolumn{1}{c|}{0.0109} & \\multicolumn{1}{c|}{0.0006} & 0.0069 \\\\\n3 & $\\surd$ & $\\surd$ & 0.9323 & \\textbf{0.9056} & \\textbf{0.9099} & \\multicolumn{1}{c|}{0.9365} & \\multicolumn{1}{c|}{\\textbf{0.8697}} & \\textbf{0.8954} \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\caption{Comparison of Dice scores verifying the effectiveness of each loss term. Mixed represents the test set containing both thick-slice and thin-slice images.}\n\\label{tb:seg_ablation}\n\\end{table}\n\n\n\n\\begin{figure*}[!ht]\n\\centering\n\\includegraphics[width=\\textwidth]{.\/figures\/ious.pdf}\n\\caption{The visualisation of the Dice scores of the projected images. The plane was computed by smoothing the Dice scores. It is of note that images sharing similar characteristics were clustered together. On the left-hand side and right-hand side, samples from different regions of the plane are presented.}\n\\label{fig:ious}\n\\end{figure*}\n\nBesides, in order to interpret the black-box segmentation model, we extracted the features from the deepest layer of the encoder and projected them into a 2D latent space using the PCA technique. We then computed the Dice score for each sample and visualised it in Figure \\ref{fig:ious}. In this figure, we can observe that the slices sampled from the orange circle all contained only a small ventricle region, on which the model could not perform well. However, images from the green and yellow circles contained multiple ventricles occupying a large proportion of the image. Therefore, these images could be predicted well by our model.\n\n\n\\subsubsection{Qualitative Results}\nTo qualitatively examine the performance of our model and other state-of-the-art models, we presented some visualisation results of thin-slice CT and MRI images in Figure \\ref{fig:seg_results}, and computed the Dice scores for each segmentation result. For MRI images, our model and U-Net++ \\cite{zhou2018unet++} were able to segment the four ventricles in the brain. In particular, our model could predict the third ventricle more completely than U-Net++ \\cite{zhou2018unet++}, owing to the informative feature representations of the pre-trained encoder. However, for CT images, the performance varied among the different models. The primary reason is that the original CT volumes contained the skull, which made the brain tissue visually unclear; after removing the skull, the contrast of the resulting images could differ substantially. More concretely, for those images with low contrast (e.g., rows 1 and 5 in Figure \\ref{fig:seg_results}), all three compared methods were capable of predicting the left and right lateral ventricles.
However, for those images with high contrast (e.g., rows 2 and 4 in Figure \\ref{fig:seg_results}), our proposed method could predict most of the ventricular regions in the brain while U-Net and U-Net++ failed.\n\n\\begin{sidewaysfigure}\n\\begin{center}\n\n \\includegraphics[width=\\linewidth]{.\/figures\/results_seg.pdf} \n \n \\caption{Visualisation of the 3D brain ventricle segmentation results using the different compared models. The right lateral ventricle is coloured in red; the left lateral ventricle is coloured in green; the yellow coloured region represents the third ventricle; and the blue region represents the fourth ventricle.}\n \\label{fig:seg_results}\n\\end{center}\n\\end{sidewaysfigure}\n\nIn addition, we used the segmentation results generated by the compared models to reconstruct 3D images of each ventricle. An example is illustrated in Figure \\ref{fig:seg_3d_results}. We can observe that U-Net \\cite{ronneberger2015unet} could hardly predict the ventricles on thin-slice images, while U-Net++ \\cite{zhou2018unet++} was able to segment the left lateral and right lateral ventricles by taking advantage of dense connections of the intermediate feature maps. In contrast, our proposed method could not only predict the two ventricles mentioned above, but could also segment the third ventricle and the fourth ventricle well. One limitation of our model is that it could not predict the connection region between the third ventricle and the fourth ventricle because the area is too small to be distinguished.\n\n\\begin{figure}[!ht]\n\\begin{center}\n\n \\includegraphics[width=\\linewidth]{.\/figures\/3d_visual.pdf} \n \n \\caption{Three-dimensional visualisation of the predictions on thin-slice MRI images for each ventricle segmented by different comparison models. The 3D segmentation results were visualised from the axial plane, the coronal plane, and the sagittal plane. The colouring scheme is consistent with Figure \\ref{fig:seg_results}.}\n \\label{fig:seg_3d_results}\n\\end{center}\n\\end{figure}\n\n\\subsection{Discussions}\n\nAI is being deployed in missions that are increasingly vital to human healthcare. Automated decisions should be explainable in order to create trust in AI and prevent an algorithm-based totalitarian society. This is not just a human right, enshrined for example in the European GDPR, but an ultimate goal for algorithm developers who want to know whether the necessary clinical characteristics are captured by the decision support systems. XAI should be able to provide explanations in a systematic manner in order to make explainability scalable. To construct a surrogate white-box model for the black-box model used to make a prediction, a typical solution is to use simpler, more intuitive decision algorithms. There is a chance, though, that the surrogate model is too complicated or too abstract for it to be truly understandable for humans. \n\nIn this study, we have first provided a mini-review of XAI methods and their specific applications in medicine and digital healthcare, followed by two example showcases that we have developed. From our two showcases, we have explored the classification model and the segmentation model in terms of sensitivity (i.e., LIME \\cite{ribeiro2016lime} and Kernel SHAP \\cite{lundberg2017shap}) and decomposition (i.e., T-SNE \\cite{maaten2008tsne} and CAMs). For LIME and Kernel SHAP methods, the individual sample can be analysed and interpreted with each super-pixel, which is useful for individual diagnosis.
These methods can provide a straightforward view of how local explanations affect the final predictions. \n\nOn the other hand, T-SNE provides us with an insight into the strengths and weaknesses of our proposed models. For example, in Figure \\ref{fig:ious}, the distribution of the decomposed image features has an association with the prediction performance, which indicates the weaknesses of the black-box segmentation models. Meanwhile, the distribution of decomposed image features also reveals the clustered characteristics of the raw inputs (Figure \\ref{fig:t_sne}), which can help us to find the reason why a model would make such predictions.\n\nIn consequence, these methods can also be classified into two categories, named perceptive interpretability and mathematical interpretability. When visual evidence is not useful or is erroneous, the mathematical evidence can be used as a complement for interpretability. Therefore, various methods should be applied simultaneously for the sake of providing reliable interpretability.\n\nNevertheless, a significant drawback of the current studies on XAI is that the interpretations are focused on the intuition of experts rather than on the demands of the end-users \\cite{du2019techniques}. Current local explanations are typically provided in a feature-importance vector format, which is a full causal attribution and a low-level interpretation. This format would be satisfactory if the description viewers were developers and analysts, since they could use the mathematical study of the distribution of features to debug the models. However, this type of XAI is less accommodating if the description receivers are lay-users of the AI. Such XAI can explain the complete judgement logic of the model, which includes a large amount of repetitive information that can confuse lay-users. The presentation of XAI algorithms should therefore be further improved to increase user satisfaction.\n\nThe poor abstraction level of explanations is another drawback. For example, although XAI-derived heatmaps can indicate that individual pixels are important, no correlation is normally computed between these salient regions and more abstract concepts such as the anatomical or pathological regions shown in the images. More importantly, explanations ought to be understandable by humans so that they can make sense of them and grasp the behaviour of the model. It is indeed desirable to provide meta-explanations that can integrate evidence from these low-level heatmaps to describe the behaviour of the model at a more abstract, more humanly understandable level. However, this level of understanding can be hard to achieve and error-prone. Methods have recently been proposed to aggregate low-level explanations and to measure the semantics of neural representations. Thus, a constructive topic for future study is the development of more advanced meta-explanations that leverage multimodal information fusion.\n\nBecause the audiences of XAI results are essentially human users, an important future research direction is the use of XAI in human-machine interaction; therefore, research studies in XAI need to explore human factors. A prerequisite for good human-machine interaction is to construct explanations with the right user focus, for instance, developing XAI systems that ask the correct questions in the proper manner, which is crucial in the clinical environment. Optimisation of the reasoning procedure for optimal human use, however, is still a problem that demands more research.
Finally, a broad open gap in XAI is the use of interpretability beyond visualisation techniques. Future studies will demonstrate how to incorporate XAI into a broader optimisation mechanism in order to, e.g., boost the efficiency of the model and reduce the model complexity.\n\n\\section{Conclusion}\n\nThe recent confluence of large-scale annotated clinical databases, the innovation of deep learning approaches, open-source software packages, and inexpensive and rapidly increasing computing capacity and cloud storage has fuelled the recent exponential growth in AI. This promises to change the landscape of medical practice in the near future. AI systems have achieved specialised success in certain clinical activities, in some cases assessing patient prognosis better than doctors, and can assist in surgical procedures. If deep learning models continue to advance, there is a growing chance that AI could revolutionise medical practice and redefine the role of clinicians in the process. Our mini-review has demonstrated the research trend towards trustable or trustworthy AI, which promotes XAI globally, and has shown that XAI methods in medicine and digital healthcare are in high demand. Additionally, our two showcases have shown promising XAI results for the two most widely investigated classification and segmentation problems in medical image analysis. We envisage that further development of XAI in medicine and digital healthcare, integrating information fusion from cross-modality imaging and non-imaging clinical data, can be a stepping stone toward a more general acceptance of AI in clinical practice. Ultimately, trustable AI will promote confidence in and openness towards its deployment in the clinical arena and also make it easier to comply with the GDPR legislation and the regulations of NHSX in the UK, CE marking in the EU, the FDA in the USA, and the NMPA in China. \n\n\\section*{Acknowledgement}\n\nThis work was supported in part by the European Research Council Innovative Medicines Initiative on Development of Therapeutics and Diagnostics Combatting Coronavirus Infections Award 'DRAGON: rapiD and secuRe AI imaging based diaGnosis, stratification, fOllow-up, and preparedness for coronavirus paNdemics' [H2020-JTI-IMI2 101005122], in part by the British Heart Foundation [PG\/16\/78\/32402], in part by the AI for Health Imaging Award 'CHAIMELEON: Accelerating the Lab to Market Transition of AI Tools for Cancer Management' [H2020-SC1-FA-DTS-2019-1 952172], in part by the Hangzhou Economic and Technological Development Area Strategical Grant [Imperial Institute of Advanced Technology], in part by the Project of Shenzhen International Cooperation Foundation (GJHZ20180926165402083), and in part by the Clinical Research Project of Shenzhen Health and Family Planning Commission (SZLY2018018).\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\section{Introduction}\n\nThere has been a large interest in generating automatic text descriptions \\cite{mckeown1992text} of tabular data -- for example, prior work has sought to generate biographies from tables of biographical information \\cite{lebret2016neural}, and to generate descriptions from structured meaning representations \\cite{gardent2017webnlg}. 
\nHowever, in many of these tasks, the main focus is on designing systems that are able to \\emph{select entries} from tabular or equivalent data during generation by using neural attention mechanisms.\nIn many naturally occurring descriptions of tabular data, humans often refer to higher-level patterns; for example, in the description of stock index pricing over the week in Fig. \\ref{fig:pull}, \nthe speaker refers to how the stock price peaks towards the end. Some recent work has looked into setups that require non-trivial inference \\cite{wiseman2017challenges,chen2020logical}. However, these typically do not involve inference about numerical patterns in time series data. \nMoreover, much recent prior work on identifying more complex patterns in data for captioning has relied on deep neural networks, often employing neural encoders and attention mechanisms. \nHowever, such approaches often fail to generate faithful responses and lack interpretability \\cite{DBLP:journals\/corr\/abs-1910-08684,dhingra2019handling,parikh2020totto}. \n\n\\begin{figure}[t]\n \\includegraphics[width=0.95\\textwidth]{figures\/pull.png}\n \\vspace{-0.1\\abovedisplayskip}\n \\caption{\\footnotesize We propose a neural truth-conditional model for high precision and diverse time series caption generation. \n }\n \\label{fig:pull}\n \\vspace{-3mm}\n\\end{figure}\n\n\n\n\n\\begin{figure}[t]\n\\begin{center}\n \\includegraphics[width=0.80\\textwidth]{figures\/model_overview.png}\n \\vspace{-0.3\\abovedisplayskip}\n \\caption{\\footnotesize Method Overview: We present a truth-conditional model for time series captioning, which first identifies patterns (composed of simpler modules) that hold true for a given data point. The decoder conditions only on a sampled program $z$ (and not on the input $x$), generating high precision outputs. \n } \n \\label{fig:ts_method_overview}\n \\end{center}\n \\vspace{-2mm}\n\\end{figure}\n\n\n\\begin{figure}[t]\n\\begin{center}\n \\includegraphics[width=0.95\\textwidth]{figures\/example7.png}\n \\vspace{-0.3\\abovedisplayskip}\n \\caption{\\footnotesize \n A program $z=(z_P,z_L)$ operates on an input time series $x$ to give a final output score $s_z(x)$. The module instances are learned from scratch during training. \n } \n \\label{fig:vizoutput}\n \\end{center}\n \\vspace{-2mm}\n\\end{figure}\n\nWe present a novel neural\ntruth-conditional model for time series captioning, which learns to identify patterns that hold true for the input time series (Figure \\ref{fig:ts_method_overview}). \nWe first sample a latent program from the space of learned neural operators. Each program produces a soft truth-value. Then, with probability proportional to each program's truth-value, a language decoder generates a caption. Thus, programs that yield low truth values do not produce captions. Critically, the decoder takes \\textit{an encoding of the program itself}, rather than the time series, in order to determine the output text. Overall, this approach allows for both: (a) precision in the generated output, through explicit truth conditioning and explicit program structure as a representation of time series trends, and (b) diversity in caption generation, through the sampling process.\n\n\nWhile some of the patterns in data are complex, they can be considered to have been constructed by composing simpler concepts such as slope (rate of change of value) or comparisons (between values at given points).
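\nAs a rough sketch of this generation process, consider the following simplified PyTorch-style illustration; the module definitions, parameter shapes, and the hard threshold are our own simplifying assumptions (the actual model samples a caption with probability proportional to the soft truth value):\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\nclass LocateModule(nn.Module):\n    # soft attention over time steps, e.g. focusing 'towards the end'\n    def __init__(self, length):\n        super().__init__()\n        self.logits = nn.Parameter(torch.zeros(length))\n    def forward(self, x):                         # x: (length,)\n        return torch.softmax(self.logits, dim=0)  # soft temporal span\n\nclass PatternModule(nn.Module):\n    # soft truth value for a simple concept such as a rising trend\n    def __init__(self):\n        super().__init__()\n        self.scale = nn.Parameter(torch.ones(1))\n    def forward(self, x, span):\n        diffs = x[1:] - x[:-1]                    # local slopes\n        weighted = (diffs * span[1:]).sum()       # slope inside the span\n        return torch.sigmoid(self.scale * weighted)\n\ndef generate(x, locate, pattern, program_embedding, decoder):\n    truth = pattern(x, locate(x))    # soft truth value of the program\n    # the decoder sees only the program embedding, never the raw series\n    return decoder(program_embedding) if truth > 0.5 else None\n\\end{verbatim}\nBecause the decoder never conditions on the raw series, any generated caption is tied to the program that licensed it, which is the source of the precision discussed above.\n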
As such, our programs are constructed by composing simpler operations\/modules.\nSuch a modular design enables sharing of modules across multiple programs, leading to more data efficient learning of module parameters, and also providing better generalization to unseen compositions of modules. \nWe consider a relatively simple space of three module types, using which our model is able to capture a significant fraction of the patterns present in data. The module types could be expanded in future to capture more complex patterns.\nOur model treats the choice of composed computation graph of programs as a latent variable, learned using natural language descriptions as the only supervision. \nIn this respect, our approach is related to neural module networks used in \\citet{andreas2016learning,andreas2016neural}, which condition on a question to generate a program, which then operates on an image or other data to predict an answer. \nIn our case, the constructed computation graph operates and identifies salient patterns in the source data directly, without being guided by an input question. \n\n\n\n\nOur main contributions are as follows:\nWe propose a novel method for time series captioning which first induces useful patterns via composing simpler modules, identifies the programs which hold true, and finally generates text describing the selected program.\nTowards this end, we collect and release two datasets consisting of time series data with accompanying English language description of salient patterns.\nWe observe that the proposed method is able to learn useful patterns, exhibits compositionality and interpretability, and generates outputs that are \nmuch more faithful to the input \ncompared to strong traditional neural baselines.\n\\footnote{Data and code can be found at \\url{https:\/\/github.com\/harsh19\/TRUCE}.}\n\n\n\n\n\n\n\n\n\\section{Related Work}\n\n\n\\noindent \\textbf{Time-Series Numerical Data and Natural Language}\n\\newcite{andreas2014grounding} worked on grounding news headlines to stock time series data by aligning sub-trees in sentence parses to segments of time series. \n\\newcite{DBLP:conf\/acl\/MurakamiWMGYTM17} generate stock data commentary using encoders such as convolutional and recurrent neural networks, similar to the baselines used in our experiments. \n\\newcite{sowdaboina2014learning} focus on the task of describing wind speed and direction. \nTime series data in the form of charts has been utilized in some prior work in figure question answering \\cite{DBLP:conf\/iclr\/KahouMAKTB18,DBLP:journals\/corr\/abs-1906-02850}. \n\n\n\nPast work has explored ways to handle numerical data in a variety of input data domains using neural networks.\n\\newcite{trask2018neural} propose neural logic unit for tasks such as counting objects in images. \nPrior work has investigated handling of numeracy in question answering datasets \\cite{dua2019drop,andor2019giving,DBLP:conf\/iclr\/GuptaLR0020}, typically using a predefined set of executable operations or using specific distributions for number prediction \\cite{DBLP:conf\/emnlp\/Berg-Kirkpatrick20,DBLP:conf\/naacl\/ThawaniPIS21}.\n\n\\noindent \\textbf{Neuro-Symbolic Methods:}\n\\citet{andreas2016neural} proposed to use neural modular networks for visual question answering. 
Since then, similar approaches have been used for several other tasks such as referring expression comprehension \\cite{DBLP:conf\/aaai\/CirikBM18}, image captioning \\cite{DBLP:conf\/iccv\/YangZC19}, and text question answering {\\cite{andreas2016learning,DBLP:conf\/naacl\/KhotKRCS21}}. %\nCompared to such past efforts, we induce the latent numerical and temporal detection operations, pick a high-scoring program, and condition only on a program encoding to generate the output description. \nIn this respect, our work is also related to prior work on neural discrete representation learning \\cite{DBLP:conf\/nips\/OordVK17,DBLP:conf\/acl\/EskenaziLZ18}, though none of these past works explore utilizing such techniques for data to text problems. \nOur proposed model abstracts the numerical pattern detection from text generation. Related ideas have been explored in the past in other domains and tasks \\cite{DBLP:conf\/emnlp\/GehrmannDR18,DBLP:conf\/emnlp\/JhamtaniB18,DBLP:conf\/icml\/AmizadehPPHK20}. \n \n\n\n\\noindent \\textbf{Data to Text:}\nTabular or structured data to text generation has been explored in prior work \\cite{lebret2016neural,DBLP:conf\/sigdial\/NovikovaDR17,wiseman2017challenges,jhamtani2018chess,DBLP:journals\/corr\/abs-2102-01672}. \nThe Rotowire dataset \\cite{wiseman2017challenges} is comprised of sports summaries for tabular game data which may require modeling of numerical operations and trends.\nHowever, much of the past work has relied on neural models with attention mechanisms, without explicit and interpretable notions of numerical operations.\nFidelity to the input in the context of neural text generation has received a lot of attention lately \\cite{DBLP:conf\/aaai\/CaoWLL18}. \nPrior work has approached the aspect of fidelity to input through changes in model training and\/or decoding methods \\cite{DBLP:journals\/corr\/abs-1910-08684,DBLP:conf\/acl\/KangH20,DBLP:conf\/acl\/MajumderBMJ20,DBLP:conf\/naacl\/GoyalD21,DBLP:conf\/aaai\/0001ZCS21}. \nWe explore a different approach that increases fidelity through conditional independence structure and model parameterization.\n\n\n\n\n\n\n\\section{Experiment Setup}\n\\section{Experiments with Synthetic Data}\n\n\\begin{table}[t]\n \\centering\n \\footnotesize\n \\begin{tabular}{@{}l@{\\hskip 0.05in}l@{\\hskip 0.05in}l@{\\hskip 0.05in}l@{\\hskip 0.05in}l@{\\hskip 0.05in}l@{\\hskip 0.05in}lll@{}}\n \\toprule\n \\bf Method & \\bf COR & \\bf PPL & \\bf Bleu-3\/4 & \\bf Cider & \\bf Rouge & \\bf BERT \\\\ \\midrule\n \\textsc{TRUCE}{} & $\\bf 92\\%$ & $13.9$ & $0.61\/0.46$ & $1.40$ & $0.74$ & $0.77$ \\\\ \n \\textsc{FcEnc}{} & $39\\%$ & $16.7$ & $0.45\/0.28$ & $0.81$ & $0.61$ & $0.65$ \\\\ \n \\textsc{LstmEnc}{} & $45\\%$ & $11.2$ & $0.43\/0.28$ & $0.87$ & $0.62$ & $0.63$ \\\\ \n \\textsc{ConvEnc}{} & $53\\%$ & $11.0$ & $0.47\/0.32$ & $1.00$ & $0.66$ & $0.67$ \\\\ \n \n \\textsc{FftEnc}{} & $39\\%$ & $22.7$ & $0.38\/0.22$ & $0.67$ & $0.58$ & $0.54$ \\\\ %\n \\textsc{NearNbr}{} & $71\\%$ & NA & $0.28\/0.14$ & $0.60$ & $0.40$ & $0.48$ \\\\ \n \\bottomrule\n \\end{tabular}\n \\caption{\\footnotesize\n Results on test split of SYNTH dataset: Human evaluation for correctness (COR) and various automated metrics. \\textsc{TRUCE}{} performs much better than baselines as per correctness evaluation. \n }\n \\label{tab:synth6_results}\n\\end{table}\n\n\n\n\\subsection{Methods}\nFor SYNTH data, we consider several baselines listed below (More detailed descriptions are provided in the Appendix). 
Note that all non-retrieval baselines use the same LSTM decoder architecture as our model. \n(1) \\textbf{\\textsc{NearNbr}{}:} The ground-truth caption of the closest matching training data instance is used as the prediction. The closest matching instance is identified via L2 distance between input time series.\n(2) \\textbf{\\textsc{FcEnc}{}}: Encodes the input time series sequence using a multi-layer feed-forward encoder.\n(3) \\textbf{\\textsc{LstmEnc}{}}: Encodes the input time series sequence using a LSTM recurrent neural network.\n(4) \\textbf{\\textsc{ConvEnc}{}}: Encodes time series using a multi layer convolutional neural network. \n(5) \\textbf{\\textsc{FftEnc}{}}: Encodes time series using Fourier transform features of the input. \n\n\n\n\n\n\n\n\n\n\\subsection{Results}\n\nFor \\textsc{TRUCE}{}, we pick the highest scoring program, according to the prior, for description generation. \nWe generate captions (using greedy decoding) from each of the methods for the test split.\n\\\\\n\\textbf{Automated metrics} measure overlap between model generated caption and the reference ground truth captions. We report Perplexity (\\textbf{PPL}), BLEU-3\/4 \\shortcite{papineni2002bleu}, METEOR \\cite{banerjee2005meteor}, ROUGE-L (\\textbf{Rouge}) \\cite{lin2004rouge}, and BertScore-Precision (\\textbf{BERT}) \\cite{DBLP:conf\/iclr\/ZhangKWWA20}. The proposed \\textsc{TRUCE}{} method gets favorable scores as per various automated metrics on the test split of SYNTH (Table \\ref{tab:synth6_results}).\n\n\\noindent \\textbf{Human Evaluations for Correctness:} \nAutomated metrics may not correlate well with actual quality of the generated output in text generation tasks \\cite{celikyilmaz2020evaluation}.\nAs such, we report human evaluation results as well. We recruit human annotators who are requested to provide a binary label on factual correctness \\textbf{(COR)} of the captions for the test split. Each caption is annotated by three annotators, and the majority label is used. The proposed method is able to achieve a high correctness score of $92\\%$, which is much better than the baselines. This demonstrates the usefulness of the proposed truth-conditional model in generating highly faithful captions. \nOutput samples are provided in the Appendix.\n \n\n\n\n\\begin{SCtable}[]\n \n \\centering\n \\footnotesize\n \\begin{tabular}{@{}ll@{}}\n \\toprule\n \\bf Method & \\bf COR \\\\ \\midrule\n \\textsc{TRUCE}{} & $\\bf 97\\%$ \\\\ \n \\textsc{FcEnc}{} & $38\\%$ \\\\ \n \\textsc{LstmEnc}{} & $50\\%$ \\\\ \n \\textsc{ConvEnc}{} & $59\\%$ \\\\\n \\textsc{FftEnc}{} & $39\\%$ \\\\ \n \\textsc{NearNbr}{} & $72\\%$ \\\\ \n \\bottomrule\n \\end{tabular}\n \\label{tab:synthetic_clf_transfer}\n \\caption{\\footnotesize Models trained on SYNTH data (where each time series has T=12 values) are tested on another synthetic data with T=24 without any fine-tuning.\n }\n\\end{SCtable}\n\n\n\n\\subsection{Analysis}\n\\noindent \\textbf{Generalization to different time series duration:} SYNTH data consists of time series instances with T=12 sequence of values. We experiment the extent to which models trained on SYNTH can accurately detect patterns in time series data of different lengths without any fine-tuning. For this, we evaluate results on a separate synthetic data consisting of 100 time series with T'=24 values per time series (dataset created in the same manner as SYNTH and consists of the same set of 6 classes as in SYNTH). 
\n\nWe observe that \\textsc{TRUCE}{} retains high correctness of the output captions (Table \\ref{tab:synthetic_clf_transfer}), whereas some of the high performing baseline show significant reduction in correctness.\nNote that some of the employed methods like \\textsc{NearNbr}{} and \\textsc{FcEnc}{} cannot work directly on inputs of length different than present in the training data. For such models, we first adjust length of series. For example, for length 24 input, we consider alternate values only, thereby reducing the series to length 12 (same as in the training data). \n\\vspace{2mm}\n\n\n\n\n\\begin{table}\n\\footnotesize\n\\begin{tabular}{l@{\\hskip 0.1in}l}\n\\toprule\n\\bf Module & \\bf Most freq. words associated \\\\\n\\bf id & \\bf with learned modules \\\\ \\midrule\npattern-1 & increases, rises \\\\\npattern-2 & decreases, decline, dips \\\\ \nlocate-1 & end, late \\\\ \nlocate-2 & beginning , start, initial \\\\\nlocate-3 & middle, halfway \\\\\n\\bottomrule\n\\end{tabular}\n \\caption{\\footnotesize Some of the most frequent words associated with some of the learned module instances for SYNTH data.\n \\label{tab:module_analysis}\n}\n\\end{table}\n\n\\noindent \\textbf{Analyzing Learned Modules:}\nWe analyze the characteristics of the learned modules by identifying the top words (excluding stop words) associated with each learned module. To do so, for a given series, we find program with highest score, and associate the annotations for that series to corresponding modules in that program. Finally, we collect the most frequent words in annotations associated with each module. We show a summary in the Table \\ref{tab:module_analysis}. The two trend modules seem to be getting activated for increase and decrease patterns respectively.\n\\vspace{2mm}\n\n\n\n\n\n\\noindent \\textbf{Compositionality of Learned Modules}\nWe analyze if the proposed model uses its compositional parameterization effectively.\nTo do so,\nwe conduct a simple analysis as follows:\nWe train \\textsc{TRUCE}{} on a subset of synthetic data consisting of only the following 4 patterns: increase-beginning, decreases-end, increase-middle, decreases-middle. \nWe examine this trained model's behavior on test data points consisting of the two unseen patterns: increase-end and decrease-beginning. More specifically, we analyze the argmax program prediction as per the conditional prior. Based on manual inspection of modules (similar to what we discussed for analysis in Table \\ref{tab:module_analysis}), we know before hand the program which should be selected for these patterns. Model's prediction is considered to be correct if, for example, for an input with `decrease-beginning' pattern, model assigns highest score to the program composed using modules corresponding to `decrease' and `beginning'.\nWe observe that the highest scoring program is the correct\/expected program for 92\\% of the cases in the test split. \n\n\n\\section{Experiments with STOCK Dataset}\n\n\n\\subsection{Posterior Regularization:}\nIn the initial experiments with STOCK dataset, we observe that our model suffers from model collapse, and degenerates into learning a single program only. \nThis is perhaps because randomly initialized modules do not \nhave much guidance to begin with. To mitigate such mode collapse issues, prior work has used mutual posterior divergence (MPD) regularization \\cite{ma2019mae} \n$-E_{y_i,y_j} KL(q(z|y_i)||q(z|y_j)) $,\nwhere $y_i$ and $y_j$ captions for two randomly chosen data points. 
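\n\nFor concreteness, the following is a minimal PyTorch-style sketch (ours, not taken from the released code) of this regularizer, assuming the inference network outputs a categorical distribution over programs for each caption in two randomly paired batches:\n\\begin{verbatim}
import torch
import torch.nn.functional as F

def mpd_term(logits_i, logits_j):
    # logits_*: [batch, num_programs] inference-network scores for
    # two randomly paired batches of captions y_i and y_j
    log_qi = F.log_softmax(logits_i, dim=-1)
    log_qj = F.log_softmax(logits_j, dim=-1)
    kl = (log_qi.exp() * (log_qi - log_qj)).sum(dim=-1)
    # the regularizer is -E[KL]; adding it to the loss being minimized
    # pushes the paired posteriors apart
    return -kl.mean()
\\end{verbatim}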
\n\nHowever, we note that MPD term enforces the divergence in an indiscriminate manner -- divergence is encouraged even if captions are paraphrases of each other. An alternate way to encourage divergence in the inference network prediction is to encourage divergence only when two captions $y_i$ and $y_j$ represent different programs or patterns. However, such information is not available in the training data. \nInstead, we use an approximation as follows: \nWe identify the $M$ most frequently occurring words excluding stop-words (list available in Appendix) in the captions and are manually labelled to to represent pattern or locate or neither. Each of the words labelled to be of type pattern or locate is assigned a unique \\emph{pattern} or \\emph{locate} module id respectively. \nThe corresponding captions thus get tagged with some heuristic (but potentially noisy) labels for module ids.\nOnly those captions are tagged which have exactly one `locate' word and one `pattern' word.\nThis leads to about 31\\% of the captions being assigned such heuristic labels, while the remaining data stays unlabelled. \n\nThe above procedure does involve a small human-in-the-loop component. However, we note that it is a pretty light-weight involvement. For example, the system presents M(=10) most frequent pairs of words (excluding stopwords) in captions, and a person spends a couple of minutes labeling their type (locate or pattern).\n\n\n\n\n\n\\subsection{Results}\nWe now report results with STOCK dataset.\nAs mentioned above,\nwe utilize heuristic labels as an auxiliary loss when training the proposed method. Thus, for a fair comparison,\nthe baselines \\textbf{\\textsc{Lstm-Multi}{}}, \\textbf{\\textsc{Conv-Multi}{}} and \\textbf{\\textsc{FcEnc}{}} also use the same set of heuristic labels via a classification loss on the encoded representation in a multi-task learning setup.\n \nThe proposed method \\textsc{TRUCE}{} produces high precision captions as judged by human annotators (Table \\ref{tab:stock_results}). \nWe additionally report automated text overlap scores against reference captions, though the automated metrics seem only mildly correlated with human judgement ratings. \nInterestingly, some of the baselines show large differences in performance in STOCK vs SYNTH datasets. For example, \\textsc{NearNbr}{} performs well on SYNTH but rather poorly on STOCK dataset, perhaps because of variety in time series instances in SYNTH being small, while the same being large in STOCK.\n\\vspace{2mm}\n\n\n\\noindent \\textbf{Diversity and Coverage:}\nIdeally, we want models which can identify all the interesting patterns present in an input time series. Correctness results discussed earlier are indicative of faithful generation but do not necessarily capture coverage of patterns.\nWe compute coverage of various models via the following procedure. First, we collect L(=12) samples per data point from the model. Next, we recruit human annotators to rate whether a human written reference annotations for that data point is covered by the set of L generated captions or not.\nFor \\textsc{TRUCE}{}, we perform sampling at the program selection stage, while baselines admit sampling only at the token generation stage. 
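\n\nAs an illustration of the first regime, the following minimal sketch (ours; the helper functions are illustrative stand-ins rather than names from the released code) draws diverse captions by sampling programs from the prior and decoding each one:\n\\begin{verbatim}
import torch

def sample_captions(x, L=12, lam=5.0):
    # lam is the prior temperature hyper-parameter (illustrative value);
    # scores are s_z(x) for every program, produced by the learned modules
    scores = score_all_programs(x)               # [num_programs]
    probs = torch.softmax(lam * scores, dim=-1)  # prior p(z|x)
    zs = torch.multinomial(probs, L, replacement=True)
    # the decoder sees only the program embedding, never x itself;
    # baselines instead vary captions via token-level (top-p) sampling
    return [decode(program_embedding(z)) for z in zs]
\\end{verbatim}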
\n\n\n\\begin{table}[]\n \\centering\n \\footnotesize\n\\begin{tabular}{@{}l@{\\hskip 0.05in}l@{\\hskip 0.09in}l@{\\hskip 0.09in}l@{\\hskip 0.05in}l@{\\hskip 0.05in}l@{\\hskip 0.05in}ll@{}}\n\\toprule\n\\textbf{Method} & \\textbf{COR} & \\textbf{Bleu-3\/4} & \\textbf{Cider} & \\textbf{Rouge} & \\textbf{BERT} \\\\\n\\midrule\n\\textsc{TRUCE}{}(Ours) & \\bf 88.4\\% & 0.35 \/ 0.19 & 0.36 & 0.50 & 0.57 \\\\\n\\textsc{FcEnc}{} & $64.2\\%$ & 0.32 \/ 0.19 & 0.43 & 0.47 & 0.56 \\\\\n\\textsc{Lstm-Multi}{} & $65.5\\%$ & 0.35 \/ 0.21 & 0.41 & 0.50 & 0.61 \\\\\n\\textsc{Conv-Multi}{} & $65.9\\%$ & 0.33 \/ 0.18 & 0.41 & 0.49 & 0.59 \\\\\n\\textsc{FftEnc}{} & 61.8\\% & 0.34 \/ 0.19 & 0.39 & 0.49 & 0.58 \\\\\n\\textsc{NearNbr}{} & 47.2\\% & 0.12 \/ 0.06 & 0.14 & 0.28 & 0.35 \\\\\n\\bottomrule\n\\end{tabular}\n \\caption{\\footnotesize Results with STOCK data: Proposed method \\textsc{TRUCE}{} scores the best on correctness evaluation. The best performing baseline scores $20\\%$ less on correctness evaluation. Greedy decoding was used for all the methods.}\n \\label{tab:stock_results}\n\\end{table}\n\n\\begin{figure}[t]\n\\begin{center}\n \\includegraphics[width=0.65\\textwidth]{figures\/coverage.png}\n \\vspace{-0.5\\abovedisplayskip}\n \\caption{\\footnotesize Coverage and Correctness of model outputs at different sampling settings. In general, settings with higher coverage of human written captions have lower precision of generated captions. \\textsc{TRUCE}{} achieves much higher correctness scores compared to baselines for similar coverage values.}\n \\label{fig:coverage}\n \\end{center}\n \\vspace{-3mm}\n\\end{figure}\nNote that this makes the coverage score depend on the settings used in the sampling process (e.g. top-p value in nucleus sampling), which will also affect the correctness of the generated captions. In Figure \\ref{fig:coverage}, we demonstrate coverage and correctness values of \\textsc{TRUCE}{} and two of the baseline models under different sampling conditions. In general, restricting samples to a low value of top-p leads to lower coverage but higher correctness. \nOverall, \\textsc{TRUCE}{} behaves in a more favorable manner. For example, comparing \\textsc{TRUCE}{} against \\textsc{ConvEnc}{}, for roughly same level of coverage (e.g. ~50\\%), correctness is much higher for \\textsc{TRUCE}{} (~83\\% against ~45\\% for \\textsc{ConvEnc}{}). \nHowever, there still seems to be a gap in the coverage of patterns, and can perhaps be addressed by incorporating more module types. \n\\vspace{2mm}\n\n\n\n\n\n\n\\subsection{Analysis}\n\n\n\n\\noindent \\textbf{Direct conditioning on the input:}\nOur decoder conditions only an encoding of a sampled program. We hypothesize that such an approach creates a bottleneck discouraging the decoder from learning spurious correlations between the input time series and the output text. \nTo inspect the usefulness of the proposed abstraction, we consider an alternative model wherein the decoder conditions on the input time series as well -- by providing output of a convolutional encoder (same as in \\textsc{ConvEnc}{}) to the decoder. More specifically, the program representation and the encoder representation are concatenated before being fed to the decoder. Lets refer to such a model with decoder having direct access to the input as \\textsc{TRUCE}\\textsc{-D}. For STOCK data, \\textsc{TRUCE}\\textsc{-D} gets correctness of $69\\%$ compared to $88\\%$ for \\textsc{TRUCE}{}. 
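\n\nThe two variants differ only in what the decoder is allowed to condition on; a minimal sketch (ours; the two helper functions are illustrative stand-ins) of how the decoder context is built in each case:\n\\begin{verbatim}
import torch

def decoder_context(z, x, direct=False):
    # TRUCE  : context is the embedding of the sampled program z only
    # TRUCE-D: context additionally includes a conv encoding of x,
    #          concatenated with the program embedding
    ctx = program_embedding(z)
    if direct:
        ctx = torch.cat([ctx, conv_encode(x)], dim=-1)
    return ctx
\\end{verbatim}\n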
\\\\\n\n\n\n\\noindent \\textbf{Analysis of Inference Network:}\nWe analyze the predictions of the inference network at the end of model training. Particularly, we associate the set of ground truth annotations in validation split to module-ids present in the argmax program prediction from the inference network. Next, we identify the most frequently occurring tokens present for each module-id\/module-instance. We observe that the inference network seems to be associating semantically similar words to the same module instance (\\autoref{tab:inference}). \n\\begin{table}[]\n \\centering\n \\footnotesize\n \\begin{tabular}{l@{\\hskip 0.1in}l}\n \\toprule\n \\bf Module id & \\bf Most frequently associated words\\\\\n \\midrule\n pattern-1 & increases, rises, gains \\\\\n pattern-3 & stays, remains, flat \\\\\n pattern-4 & bottoms, out, decline, dips \n \\\\ \n loc-1 & start, beginning, initially \\\\ \n \\bottomrule\n \\end{tabular}\n \\caption{\\footnotesize Inference Network Analysis: Analyzing words frequently present in captions when the argmax program prediction from inference network comprises of a give module-id.}\n \\label{tab:inference}\n\\end{table}\n\n\n\n\n\n\\subsection{Model}\n\nOur goal is to generate a text caption $y$\ndescribing a salient pattern in an input time series $x$. Our model's generative process is depicted in Figure \\ref{fig:ts_method_overview} and operates as follows: Conditioned on an input time series $x$, we first sample a program $z$ from a learned prior, $p(z|x)$. \nThe latent program $z$ is composed of several operations\/modules composed together, and outputs a truth value score. The prior is governed by the truth-values of corresponding programs so that we are likely to sample programs with high truth values.\nNext, we sample caption $y$ conditioning \\emph{only} on the encoding of sampled program $z$ to generate the final text -- i.e. $y$ is independent of $x$ given $z$. Intuitively, if the latent program encodes sufficient information to describe the pattern it detects, caption needs to only depend on the program itself. \n\n\nThe set of latent `programs' in our model are learned from data. On executing a program $z$ on the input time series data $x$, we obtain an output score $s_z(x)$ (between 0 and 1, both inclusive). Score $s_z(x)$ represents the model's confidence about whether the pattern corresponding to the program holds true for the given input time series. Note that $s_z(x)$ does \\emph{not} represent the prior probability of program $z$ -- since multiple programs can be true for a given time series, and $\\sum_z s_z(x) \\neq 1$. \nWe provide our model with a set of building blocks\/modules, which combine to form programs. The composition of modules into programs as well as the module parameters are unobserved in data and are learned during model training. The compositionality in the program space enables modules to be shared across programs, leading to more efficient learning. \nThe programs we consider will prove quite effective in experiments, but are actually relatively simple, being composed of only three module types. Our framework is extensible, however, and future work might consider larger program spaces.\nWe refer to our proposed method as \\textsc{TRUCE}{} (\\textbf{TRU}th \\textbf{C}onditional g\\textbf{E}neration).\n\n\n\n\\subsection{Programs and Modules}\nAs previously mentioned, each program $z$ in our model is composed of several learnable operations\/modules. 
\nFollowing prior work on neural modular networks \\cite{andreas2016neural}, we consider multiple module types, and incorporate inductive biases in their architecture to learn useful numerical patterns. In the current study, however, we limit to three simple types of patterns: \\emph{pattern}, \\emph{locate}, and \\emph{combine}, leaving extensions to the module space as a future direction.\nThese modules are composed together into programs that operate on the input time series (Figure \\ref{fig:ts_method_overview})\n\nThe module types \\emph{pattern} and \\emph{locate}, output a vector of the same length as the input vector. Both of them output a temporally localized vector, with each value between 0 and 1 (achieved by applying a sigmoid activation function), representing the degree of confidence that the pattern it represents is present at the corresponding position on the temporal axis. For example, as shown in Figure \\ref{fig:vizoutput}, the output of a learned \\emph{locate} module is a vector with high values in the middle part, and the output of the \\emph{pattern} module is high on those positions where there is a decrease in the value in the input time series.\n\nFor the current study, we restrict the space of programs to consist of one \\emph{pattern} ($z_P$) module instance and one \\emph{locate} ($z_L$) module instance. Outputs from the two modules are combined using a \\emph{combine} module, which carries out position-wise multiplication of outputs from $z_P$ and $z_L$, followed by a feed-forward layer and a sigmoid non-linearity. \n\n\\emph{Pattern} modules are aimed at learning patterns such as peaks, dips, increasing trend, and so on. \nWe realize \\emph{pattern} modules through multi layer 1-D convolutions. We argue that 1D convolutions provide an appropriate architecture to induce aspects such as slopes, and compose them to identify patterns such as peaks. \nThe \\emph{locate} module types are realized though a mixture model of K fixed Gaussians placed at equal intervals on the temporal axis of given length $T$. The weights of the components represent learnable parameters for such types of modules. \nThe \\emph{combine} module type learns to transform the position-wise multiplied outputs to a real-valued score, which is then passed through a sigmoid function. \n\n\n\\subsection{Prior}\nAs discussed above, the output of each program $z$ is a real-valued score between 0 and 1. We define prior over the set of programs $Z$ as $p(z) \\propto e^{\\lambda s(z)}$, where $\\lambda$ is a hyperparameter.\nThis formulation makes an implicit assumption that a program $z$ being true for an input time series will make other programs less probable through conservation of probability mass. Such an assumption is necessary, as otherwise directly trying to optimize the likelihood without normalizing across programs will lead to trivial solutions, wherein each program will output a high score for every input. \nNote that an alternative formulation could directly use softmax on an unrestricted real-value output from modules -- such a formulation loses out on the semantics of soft truth output from the programs, and also fared worse in our preliminary experimental evaluations in comparison with the proposed formulation. \n\n\n\n\\subsection{Decoder}\nAs mentioned previously, our decoder conditions only on the program $z$ sampled from the prior $p(z|x)$ to generate final text. \nTo achieve this, we need to pass a program representation to the decoder. 
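\n\nBefore describing the decoder itself, we summarize the two preceding subsections in a minimal PyTorch-style sketch (ours; layer sizes, the Gaussian width and the temperature are illustrative choices rather than the released configuration) of how one program, built from a pattern instance and a locate instance, scores an input series, and how program scores induce the prior:\n\\begin{verbatim}
import torch
import torch.nn as nn

class PatternModule(nn.Module):      # e.g. learns "increases", "dips"
    def __init__(self, h=16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, h, 3, padding=1), nn.ReLU(),
            nn.Conv1d(h, 1, 3, padding=1))
    def forward(self, x):            # x: [B, T]
        return torch.sigmoid(self.conv(x.unsqueeze(1))).squeeze(1)

class LocateModule(nn.Module):       # e.g. learns "beginning", "middle"
    def __init__(self, T, K=6):
        super().__init__()
        self.w = nn.Parameter(torch.zeros(K))    # learnable mixture weights
        pos = torch.arange(T).float()
        centers = torch.linspace(0, T - 1, K)    # fixed, equally spaced
        sigma = T / K                            # illustrative width
        self.register_buffer(
            "g", torch.exp(-(pos[None] - centers[:, None]) ** 2
                           / (2 * sigma ** 2)))  # [K, T]
    def forward(self, batch_size):
        mix = torch.softmax(self.w, 0) @ self.g  # [T]
        return mix.expand(batch_size, -1)        # [B, T]

class Combine(nn.Module):
    def __init__(self, T):
        super().__init__()
        self.ff = nn.Linear(T, 1)
    def forward(self, p_out, l_out):             # position-wise product
        return torch.sigmoid(self.ff(p_out * l_out)).squeeze(-1)  # s_z(x)

# a program pairs one pattern instance with one locate instance:
#   s = combine(pattern(x), locate(x.size(0)))
# and the prior over all programs is p(z|x) = softmax(lam * scores)
\\end{verbatim}\n\n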
We consider an auto-regressive neural decoder such as LSTM or Transformer. At every step, the decoder considers embedding of the previous token as well as the input program representation. \n\nA straightforward approach to obtain program representation is to associate each unique program with a low dimension embedding vector. However, such an approach will not fully exploit the program structures and shared modules. \nInstead, we first associate each module with an embedding. Next, the representation of a program is constructed by appending the embeddings of the corresponding modules (using a fixed pre-determined order of module types). Such a representation achieves sharing of module embeddings across programs. Moreover, it enables obtaining the representation of a new (unseen) program composed using the same set of modules. \n\n\n\\section{Datasets}\n\\label{sec:data}\n\nWe are interested in modeling numerical patterns and trends in time series data. However, there is a lack of existing data sources with time series data paired with natural language descriptions. \nSome prior work on weather forecasting data (such as Sumtime-Mausam \\cite{sripada2003sumtime}) are typically small (only 1045 data instances), and are limited in the scope of patterns they encompass.\nToTTo dataset \\cite{parikh2020totto} contains a small fraction of descriptions based on numerical reasoning and patterns - however, the main challenge is to find the correct value(s) by identifying the relevant row and column in a table.\nLOGIC-NLG \\cite{chen2020logical} consists of 37K tables and corresponding natural language descriptions, some of which require comparisons of cells in a table. \nIn contrast, we focus on trends and patterns in time series data. \nThus, we construct a new dataset where natural language descriptions are collected for naturally occurring stock price time series data (Section \\ref{sec:data:stock}). Additionally, we collect natural language descriptions for a synthetically constructed set of time series to evaluate and analyse our models in a more controlled setup (Section \\ref{sec:data:synth}).\n\n\n\\subsection{STOCK Dataset}\n\\label{sec:data:stock}\n\nWe collect naturally occurring time series data in the form of stock prices. We utilize the Google Finance API to collect stock prices of 7 randomly chosen technology companies over a period of 20 years. We collect weekly (beginning of week) as well as and daily stock price values. \nWe sub-select a total of 1900 instances, each of consists of sequence of T(=12) values. \nEach instance is sampled from the stock data as follows: (1) we pick one of the companies uniformly at random (2) we randomly pick weekly or daily series with equal probability, (3) we pick a sequence of values of given length T, ensuring no overlap with any previously selected time series. (4) Additionally, since different company stocks can be in very different range of values, we normalize such that all the values are between 0 and 100: $v' = 100*(v-min)\/(max-min) $ . However, normalizing this way directly would create undesirable biases in the dataset since each time series would necessarily cover entire range 0-100. Instead, to compute \\emph{max} and \\emph{min}, we additionally consider 10 values (chosen based on manual inspection) just before and just after the currently selected range. 
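\n\nA small sketch (ours; the function and argument names are illustrative) of this normalization step:\n\\begin{verbatim}
import numpy as np

def normalize_window(prices, start, T=12, context=10):
    # min and max are taken over the selected window plus `context`
    # values on each side, so a window need not span the full 0-100 range
    lo = max(0, start - context)
    hi = min(len(prices), start + T + context)
    vmin, vmax = prices[lo:hi].min(), prices[lo:hi].max()
    window = prices[start:start + T]
    return 100.0 * (window - vmin) / (vmax - vmin)
\\end{verbatim}\n\n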
\\vspace{2mm}\n\n\\noindent \\textbf{Annotation collection:}\nWe collect 3 natural language annotations for each of the 1900 data points, leading to a total of 5700 paired time-series with natural language descriptions. We split the 1900 unique time series and associated captions into train, dev, and test splits with ratio 8:1:1. \n\n\n\\noindent \\textbf{Annotator description:}\nWe use Amazon Mechanical Turk as a crowd-sourcing platform.\nWe limit to annotators from Anglophone countries, with HIT (Human Intelligence Task) acceptance rates of more than $90\\%$, and minimum number of accepted HITs as 100. Annotators were paid 25 cents for each annotation (which comes to average hourly rate of over USD 23).\n\n\n\n\\noindent \\textbf{Quality Control:}\nBased on initial pilot studies, we found it useful to show annotators plots instead of tables of values, as we are interested in high level patterns rather than specific values. We do not label the plot lines with actual stock names to remove any potential biases one may have about specific company stocks. Finally, we restrict annotations to a maximum of 9 words, so that one annotation reflects only one pattern.\nEach HIT is labelled by 3 different annotators. We manually inspected at least one annotation from each unique annotator, and ruled out (but still paid) annotations for about 7\\% annotators for being poor quality.\n\n\n\n\\noindent \\textbf{Encouraging Lexical Diversity:}\nWe encouraged annotators (through instructions) to not limit themselves to words shown in examples. Additionally, we limit each annotator to a maximum of 10 HITs to increase diversity in annotations. \n\n\n\n\\noindent \\textbf{Dataset Statistics:}\nThere are a total of 861 unique words across the 5700 captions. Most annotation sentences follow a simple syntactic structure. Additionally, we picked a random subset of 100 data points, and manually classified most of them into following major buckets: trend (increase\/decrease trends: 48\\%) superlative(max\/min values; peaks and troughs: 20\\%); comparisons(comparison of start and end values: 10\\%); volatility (flat\/smooth; irregular: 12\\%). \n\n\n\n\n\\subsection{Synthetic Time Series (SYNTH)}\n\\label{sec:data:synth}\nTo develop and test models in a more controlled setup, we synthetically construct time series data. \nOur synthetic time series data is constructed such that each time series has exactly one of the following 6 patterns: increases-in-beginning, increases-in-middle, increases-in-end, decreases-in-beginning, decreases-in-middle, decreases-in-end. \nThe resulting dataset consists of a total of paired 720 time series - natural language annotations. \n\n\nEach synthetic time series is generated as follows: \nFirst, the trend is chosen: increase or decrease. A trend is realized through a straight line of length $L<=T\/3$, with randomly chosen intercept and slope within a range based on the trend selected. \nNext, we randomly select one of the 3 temporal locations : begin, middle, end -- and based on the choice, the pattern is placed in first 40 percentile, 30-70 percentile, or 60-100 percentile respectively, of the entire length T. The region outside the trend is flat. \nFinally, small noise is added to each point. The setup is such that the resulting values are always in (0,100) range. Examples and more specific details can be found in Appendix.\n\n\\section{Conclusion}\nWe present a truth-conditional neural model for time series captioning. 
Our model composes learned operations\/modules to identify patterns which hold true for a given input. Outputs from the proposed model demonstrate higher precision and diversity compared to various baselines. Further, the proposed model (and some of the baselines) successfully generalize, to some extent, to multiple input sizes. We release two new datasets (in English) for the task of time series captioning. Future work might expand to a broader set of module types to cover more numerical patterns. \n\n\\section*{Acknowledgements}\nWe thank anonymous EMNLP reviewers for insightful comments and feedback. We thank Nikita Duseja for useful discussions.\n\n\n\n\n\n\\section{Truth-Conditional Natural Language Description} \n\nOur goal is to learn models for describing salient patterns in time series data. The main research challenge involved is to learn the types of patterns that humans find salient in time series data, using natural language descriptions as the only source of supervision during training.\nBased on the novel dataset we collect (described in Section \\ref{sec:data} \n, we find that the patterns humans identify tend to describe increasing or decreasing trends, volatility, comparisons of start and end values, presence of peaks and dips. They also mention the temporal location of patterns, such as `at the beginning'\nof the time series.\nThus, our model should be able to learn patterns such as `increase' or `ends with higher value compared to start', and temporal aspects such as `begin' or `end'. \n\n\nOne way to operationalize this process is through the lens of formal logic: e.g. an increasing trend at the beginning of a time series $x$ \ncan be represented trough the logic $z$: $\\big[ \\exists_i$ s.t. \\textsc{increase}($x_i$) AND \\textsc{begin}($i$) $\\big]$ \nThereafter, if the program returns \\texttt{true} on the input, one can condition on only the logical program $z$ to generate output text that describes this pattern via a decoder, $p(y|z)$. However, this still requires learning or defining modules for patterns and temporal location. Inspired by neural module networks \\cite{andreas2016learning,andreas2016neural}, we propose to use functions parameterized by neural networks (Figure \\ref{fig:ts_method_overview}) as modules, incorporating inductive bias through architecture design. However, unlike past work, we condition only on an encoding of sampled programs that return \\texttt{true} to generate output text. \n\\section{Additional Details on Data Sets}\n\n\n\nA downloadable json file for each of the two datasets is provided in the github repository \\footnote{\\url{https:\/\/github.com\/harsh19\/TRUCE}}.\n\n\n\\subsection{Synthetic Data}\n\\label{appendix:sec:synth}\n\nOur synthetic time series data is constructed such that each time series has exactly one of the following 6 patterns: increases-in-beginning, increases-in-middle, increases-in-end, decreases-in-beginning, decreases-in-middle, decreases-in-end. \nThe position in which the pattern is placed is based on the temporal choice (begin\/middle\/end). i.e. L must lie withing first one-third of the time-series (0,T\/3) in case of `begin' pattern, should lie in middle one-third for `middle', and last one third for `end' respectively. We consider equation a*x+b of a line, where `a' represents the slope and `b' represents the y-axis intercept. We pick a random slope value between 0 and 2, and a random intercept value between 1 and 20. 
Finally, we pick $|L|$ random integral values for x such that ax+b point lies between 0 and 1. The points in the time series outside the pattern are fixed to be same as the nearest point in the patter. Finally, small noise is added to each point using U(-2,2). \n\nSome random data samples are shown in Fig. \\ref{fig:syntheg1}. The text corresponding to `HUMAN' marker represents one of the collected annotations for the corresponding time series data. \n\n\n\n\n\\subsection{STOCK data}\n\nFigures \\ref{fig:stockeg1} show data samples for STOCK dataset. The text corresponding to `HUMAN' marker represents one of the collected annotations for the corresponding time series data. \nThe total number of unique words (considering train and validation splits) are 861, out of which only 560 words occur more than once in the dataset.\n\n\n\n\n\n\\section{Additional Results}\n\n\n\\subsection{SYNTH: Generated Samples}\n\\label{appendix:sec:synthgensamples}\nAdditional examples are provided in Figure \\ref{fig:syntheg1}.\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.85\\textwidth]{samples\/egall.png}\n \\caption{SYNTH: Data and Generated Samples. The captions marked in red were judged as incorrect by human annotators. \\textsc{TRUCE}{} achieves very high precision of 95\\% on outputs for the test split of SYNTH dataset. }\n \\label{fig:syntheg1}\n\\end{figure*}\n\n\n\n\n\n\n\n\\subsection{STOCK: Generated Samples}\n\\label{appendix:sec:stockgensamples}\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.85\\textwidth]{samples\/egstockall.png}\n \\caption{STOCK: Data and Generated Samples. The captions marked in red were judged as incorrect by human annotators. (Best viewed in color)}\n \\label{fig:stockeg1}\n\\end{figure*}\nFigure \\ref{fig:stockeg1} shows some generated samples on STOCK dataset. \n\n\n\n\n\n\n\n\\subsection{Validation Split Results}\nTables \\ref{tab:synth6_results_val} and \\ref{tab:stock_results_val} show automated metrics on the validation split.\n\n\\begin{table}[]\n \\centering\n \\footnotesize\n \\begin{tabular}{@{}l@{\\hskip 0.05in}l@{\\hskip 0.05in}l@{\\hskip 0.05in}l@{\\hskip 0.05in}l@{\\hskip 0.05in}l@{\\hskip 0.05in}lll@{}}\n \\toprule\n \\bf Method & \\bf PPL & \\bf Bleu-3\/4 & \\bf Cider & \\bf Rouge & \\bf BERT \\\\ \\midrule\n \\textsc{TRUCE}{} & $9.02$ & $0.61\/0.50$ & $1.92$ & $0.74$ & $0.76$ \\\\ \n \\textsc{FcEnc}{} & $9.66$ & $0.41\/0.34$ & $1.17$ & $0.63$ & $0.57$ \\\\ \n \\textsc{LstmEnc}{} & $7.5$ & $0.43\/0.35$ & $1.39$ & $0.63$ & $0.63$ \\\\ \n \\textsc{ConvEnc}{} & $7.6$ & $0.63\/0.53$ & $1.99$ & $0.73$ & $0.71$ \\\\ \n \\textsc{FftEnc}{} & $15.7$ & $0.39\/0.29$ & $1.26$ & $0.61$ & $0.62$ \\\\ %\n \\textsc{NearNbr}{} & NA & $0.32\/0.19$ & $0.68$ & $0.50$ & $0.48$ \\\\ \n \\bottomrule\n \\end{tabular}\n \\caption{\\footnotesize\n Results on validation split for SYNTH dataset. 
\n }\n \\label{tab:synth6_results_val}\n\\end{table}\n\n\n\n\n\\begin{table}[]\n \\centering\n \\footnotesize\n\\begin{tabular}{@{}l@{\\hskip 0.05in}l@{\\hskip 0.09in}l@{\\hskip 0.09in}l@{\\hskip 0.05in}l@{\\hskip 0.05in}l@{\\hskip 0.05in}ll@{}}\n\\toprule\n\\textbf{Method} & \\textbf{Bleu-3\/4} & \\textbf{Cider} & \\textbf{Rouge} & \\textbf{BERT} \\\\\n\\midrule\n\\textsc{TRUCE}{}(Ours) & 0.36 \/ 0.22 & 0.40 & 0.50 & 0.58 \\\\\n\\textsc{FcEnc}{} & 0.32 \/ 0.20 & 0.38 & 0.47 & 0.56 \\\\\n\\textsc{Lstm-Multi}{} & 0.34 \/ 0.18 & 0.33 & 0.51 & 0.61 \\\\\n\\textsc{Conv-Multi}{} & 0.34 \/ 0.17 & 0.35 & 0.50 & 0.60 \\\\\n\\textsc{FftEnc}{} & 0.32 \/ 0.18 & 0.36 & 0.48 & 0.56 \\\\\n\\textsc{NearNbr}{} & 0.11 \/ 0.05 & 0.11 & 0.27 & 0.37 \\\\\n\\bottomrule\n\\end{tabular}\n \\caption{\\footnotesize Results on validation split of STOCK data.\n }\n \\label{tab:stock_results_val}\n\\end{table}\n\n\n\\subsection{Analyzing Learned Modules}\nFigure \\ref{fig:loc_module_viz} shows visualization of a learned \\emph{locate} module when model is trained on SYNTH data.\n\n\\begin{figure*}[]\n \\includegraphics[width=0.75\\textwidth]{figures\/temporal_synth_begin.png}\n \\caption{Visualizing a learned 'locate' module. Our locate modules are weighted mixtures of equally spaced Gaussians. The module's weight on each of these components is shown, along with the resulting distribution -- the module being visualized seems to have learned to focus on middle part of the time series.}\n \\label{fig:loc_module_viz}\n\\end{figure*}\n\n\n\\subsection{Additional Ablation Studies}\nWe consider following ablations for the \\textsc{TRUCE}{}:\n(1) \\textsc{TRUCE}\\textsc{-NoInf}: Train \\textsc{TRUCE}{} without the use of inference network\n(2) \\textsc{TRUCE}\\textsc{-NoHeur}: Train \\textsc{TRUCE}{} without the use of heuristic labels\n\n\n\n\n\n\n\n\\section{Additional Training Details}\n\nWe code our models in Pytorch library. \n\n\\subsection{Heuristic Labels}\nList of the keywords selected for use in constructing heuristic labels: \\\\\n--- `locate':[`beginning',`middle',`end',`throughout'], \\\\ \n--- `pattern':[`increase',`decrease',`peak',`flat',`dip']\n\n\n\\subsection{Optimizer}\nWe use Adam optimizer with initial learning rate of $1e-4$.\n\n\\subsection{Infrastructure}\nWe use GeForce RTX 2080 GPUs for training models.\n\n\n\\subsection{Additional method details} \nWhile the automated metrics are only moderately correlated with quality, we found it reasonable to select best model configurations based on the Bleu-4 scores on validation split. \nThe model configurations, when using STOCK dataset, are as follows:\n\\begin{itemize}\n \\item LSTM Decoder: Token embedding size and hidden size are varied from the set \\{32,64,128,256\\}. \n \\item Weight for the classification loss term (in case of multitask objective in baselines): Following three weights of classification loss (i.e. the weight of the classification term which is present in addition to the conditional language modeling objective) are tried: 0.3,1.0,3.0. \n \\item \\textsc{TRUCE}{}: Program embedding encoding size. Number of module instantiations are varied in following ranges:\n \\begin{itemize}\n \\item LOCATE: 4-7 instantiations of each of locate \n \\item PATTERN: 6-10 instantiations of each of trend\n \\item COMBINE: 1 instantiation\n \\end{itemize}\n - Module embedding is varied in the set \\{9,18,36,72\\}. Final module embedding size is 18. 
\\\\\n - Number of trainable parameters: 466K (excluding inference network parameters since inference network is used only at training and not at prediction time)\n \\item \\textsc{FftEnc}{}: \n - Number of trainable parameters: 462K\n - Construct features based on numpy:fft:rfft functions, using real as well as imaginary components from the transformation.\n \\item \\textsc{ConvEnc}{}: \n Number of trainable parameters: 463K\n \\item \\textsc{LstmEnc}{}: \n - Representation: A single LSTM step involves feeding an embedding of the input and using the previous step's hidden state. To construct an input embedding of size $h$ for a given number $x_t$, we simply repeat the number $x_t$ for $h$ times. \\\\\n - Number of trainable parameters: 464K \n \\item \\textsc{NearNbr}{}: \n We experiment with L2 distance and L1 distance, and observed former to perform better in terms of automated as well as human evaluations. \n\\end{itemize}\n\n\n\n\\section{Learning and Inference}\n\nThe log probability of observing a natural language description $y$ of the time series $x$ under the model can be written as follows:\n\\begin{equation*}\n \\log p(y|x) = \\log \\sum_{z \\in \\mathcal{Z}} p_\\phi(z|x)p_\\theta(y|z)\n\\end{equation*}\nwhere $\\mathcal{Z}$ is the set of all possible programs, and $\\theta$ and $\\phi$ are learnable model parameters.\nThe model is trained to maximize the log likelihood of the the observed descriptions conditioned on the corresponding time series data. Since the programs $z$ are unobserved at training, we must marginalize over all possible values of $z$.\n\\vspace{2mm}\n\n\n\n\\noindent \\textbf{Inference Network:}\nThe space of programs we currently employ is relatively small (about 20-60 number of programs), which makes it feasible to marginalize over the program space. However, any future work expanding the space of programs might run into feasibility issues when computing the exact likelihood. In such cases, we can perhaps resort to variational learning to optimize a lower bound to the likelihood by drawing samples from an inference network. \n\nAdditionally, use of inference networks can provide a useful inductive bias by using the observed text descriptions to guide the model learning. For example, words `increase' and `begin' in a caption could inform the inference network about a high chance of the presence of an increase pattern in the initial duration of the time series. \nWe observe that training with inference networks results in models which can better capture the patterns in data. \nNote that the inference network is used only for model training. At test time, we sample from the learned prior and decoder without regard to the inference network.\n\n\nWe use amortized variational learning by introducing an inference network $q_\\gamma$, and train the model to maximize the following evidence lower-bound (ELBO):\n\\begin{align*}\n \\mathbb{E}_{z \\sim q_\\gamma(z|y)} [\\log p_\\theta(y|z)] - \\text{KL}(q_\\gamma(z|y)||p_\\phi(z|x))\n\\end{align*}\n\n\n\nWe use a BiLSTM encoder to encode the caption $y$, followed by a classification layer to predict the approximate posterior $q_\\gamma(z|y)$ over the programs.\nWe also considered fine-tuning of a pre-trained BERT model instead of BiLSTM, but did not observe any improvement in the model performance during the initial experiments.\n\\vspace{2mm}\n\n\n\n\\noindent \\textbf{Optimization:}\n$\\theta$, $\\phi$ and $\\gamma$\nare learned through directly optimizing the ELBO term. 
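\n\nA minimal sketch (ours) of this objective when the program space is small enough to enumerate, so that both terms are computed exactly rather than by sampling:\n\\begin{verbatim}
import torch

def elbo(log_q, log_prior, log_lik):
    # log_q:     [B, Z] log q_gamma(z | y)  from the inference network
    # log_prior: [B, Z] log p_phi(z | x)    from the program prior
    # log_lik:   [B, Z] log p_theta(y | z)  decoder score for every program
    q = log_q.exp()
    recon = (q * log_lik).sum(-1)             # E_q[ log p(y|z) ], exact
    kl = (q * (log_q - log_prior)).sum(-1)    # KL(q || p), exact
    return (recon - kl).mean()
\\end{verbatim}\n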
\nWe compute the exact reconstruction and the KL-terms -- the number of programs in our case is small enough to enable this exact computation (typically we consider 6-10 instances each of \\emph{pattern} and \\emph{locate} module). \n\n\n\\section*{Ethics Statement}\nWe collect natural language annotations from a crowd-sourcing platform. We do not collect or store any person identifiable information. We did not observe any toxic or hateful language in our dataset -- though researchers working on the dataset in future are advised due caution since the annotations are crowd-sourced, and might reflect certain biases.\nOur work primarily performs experiments on text generation in English language. \nOur method generates high precision text output -- much higher than all the baselines considered. However, it is still not perfect, and must be used cautiously in any real world deployment.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe spin group $Spin(n)$ is the universal cover of the special orthogonal\ngroup $SO(n)$. The spin$^{c}$ group $Spin^{c}(n)$ is the central extension \nSpin(n)\\times _{\\mathbb{Z}_{2}}U(1)$ of $SO(n)$ by the circle group $U(1)$.\nIn the first part of the paper we introduce a pair $F=(\\alpha ,\\gamma )$ of\nsecondary cohomology operations, which is applied to construct the integral\ncohomology rings of the classifying spaces $B_{Spin^{c}(n)}$ and \nB_{Spin(n)} $.\n\nThe $\\func{mod}2$ cohomology of the space $B_{Spin(n)}$ has been determined\nby Borel \\cite{B1} for $n\\leq 10$, and completed by Quillen in \\cite{Q}.\nConcerning the integral cohomology $H^{\\ast }(B_{Spin(n)})$ partial\ninformation are known \\cite{Web,Web1}. In \\cite{Th} Thomas calculated the\ncohomology $H^{\\ast }(B_{Spin(n)})$ in the stable range $n=\\infty $, but his\nresult relies on two sequences $\\{\\Phi _{i}\\},\\{\\Psi _{i}\\}$ of\nindeterminacies. Another inspiring approach is due to Benson and Wood. By\ncomputing with the Weyl invariants a partial presentation of the ring \nH^{\\ast }(B_{Spin(n)})$ is formulated in \\cite[Theorem 11.1]{BW}, where the\ndetermination of explicit generators and relations is noted to be a rather\ndaunting task. For the difficulties that one encounters when computing with\nthe cohomologies of the classifying space $B_{G}$ of a Lie group $G$, we\nrefer to Feshbach \\cite[Final remarks]{Fe}. In our approach the pair \nF=(\\alpha ,\\gamma )$ of cohomology operators will make the structure of the\nring $H^{\\ast }(B_{Spin(n)})$ appearing in a new light, see Remarks 8.6 and\n9.5.\n\nKnowing the integral cohomology of the classifying space $B_{G}$ of a Lie\ngroup $G$ has direct consequences in geometry and invariant theory. In\nparticular, assuming that a minimal system $\\{q_{1},\\cdots ,q_{m}\\}$ of\ngenerators of the ring $H^{\\ast }(B_{G})$ has been specified, one can\nintroduce the characteristic classes for a principle $G$ bundle $\\xi $ over\na space $X$ by letting\n\n\\begin{quote}\n$q_{r}(\\xi ):=f_{\\xi }^{\\ast }(q_{r})\\in H^{\\ast }(X)$, $1\\leq r\\leq m$,\n\\end{quote}\n\n\\noindent where $f_{\\xi }:X\\rightarrow B_{G}$ is the classifying map of the\nbundle $\\xi $. 
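\n\nFor example, when $G=U(n)$ one has $H^{\\ast }(B_{U(n)})=\\mathbb{Z}[c_{1},\\cdots ,c_{n}]$ with $\\deg c_{r}=2r$, and the classes\n\n\\begin{quote}\n$c_{r}(\\xi ):=f_{\\xi }^{\\ast }(c_{r})\\in H^{2r}(X)$, $1\\leq r\\leq n$,\n\\end{quote}\n\n\\noindent are the Chern classes of the complex vector bundle associated to the principal $U(n)$ bundle $\\xi $.\n\n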
One obtains also the basic Weyl invariants of the group $G$\nby setting\n\n\\begin{quote}\n$d_{r}:=B_{t}^{\\ast }(q_{r})\\in H^{\\ast }(B_{T})$, $1\\leq r\\leq m$,\n\\end{quote}\n\n\\noindent where $T$ is a maximal torus on $G$, and the map \nB_{t}:B_{T}\\rightarrow B_{G}$ is induced by the inclusion $T$ $\\subset G$.\nFor the classical groups $G=U(n),SO(n)$ and $Sp(n)$ these stories have been\nwell understood by the 1950's \\cite{B,BH}. In the second part of the paper\nwe complete the projects for the spinor groups $Spin(n)$ and $Spin^{c}(n)$.\n\nIn mathematical physics the Postnikov tower anchored by the classifying\nspace $B_{SO(n)}$ is\n\n\\begin{quote}\n$\\cdots \\rightarrow $ $B_{Fivebrane(n)}\\rightarrow B_{String(n)}\\rightarrow\nB_{Spin(n)}\\rightarrow B_{SO(n)}$,\n\\end{quote}\n\n\\noindent indicating that the calculation of ring $H^{\\ast }(B_{Spin(n)})$\nis a necessary step towards the integral cohomologies of the further spaces \nB_{String(n)}$ and $B_{Fivebrane(n)}$ in the tower. In addition, the\nintegral cohomology $H^{\\ast }(B_{Spin(n)})$ is essential to form the\ncohomology of the Thom spectrum $\\{M_{Spin(n)},n\\geq 3\\}$ \\cite{ABP}, is\nexplicitly requested by the study of the complex cobordism of the\nclassifying space $B_{Spin(n)}$ \\cite[Corollary 1.4]{T}, and by the\nunderstanding of the total Chern class of the complex spin presentation of\nthe group $Spin(n)$ \\cite{ABS,Q}.\n\n\\bigskip\n\n\\noindent \\textbf{Remark 1.1.} If $3\\leq n\\leq 6$ there hold the following\ngroup isomorphisms\n\n\\begin{quote}\n$Spin(3)=SU(2)$, $Spin(4)=SU(2)\\times SU(2)$,\n\n$Spin(5)=Sp(2)$, $Spin(6)=SU(4)$,\n\\end{quote}\n\n\\noindent where $SU(k)$ is the special unitary group of rank $k$, $Sp(2)$ is\nthe symplectic group of rank $2$. Therefore, we may assume $n\\geq 7$ to\nexclude these cases.$\\square $\n\n\\section{The main results}\n\nFor a topological space $X$ let $Sq^{k}$ be the Steenrod squares on the \n\\func{mod}2$ cohomology algebra $H^{\\ast }(X;\\mathbb{Z}_{2})$, and denote by \n$\\delta _{m}$ the Bockstein homomorphism from the $\\func{mod}m$ cohomology \nH^{r}(X;\\mathbb{Z}_{m})$ into the integral cohomology $H^{r+1}(X)$. For the\nhomomorphisms of coefficients groups\n\n\\begin{quote}\n$\\theta :\\mathbb{Z}_{2}\\rightarrow \\mathbb{Z}_{4}$ by $\\theta (1)=2$, and \n\\rho _{m}:\\mathbb{Z}\\rightarrow \\mathbb{Z}_{m}$ by $\\rho _{m}(1)=1$,\n\\end{quote}\n\n\\noindent the same notion are applied to denote their induced maps on the\ncohomologies.\n\nLet $\\mathcal{B}:$ $H^{2r}(X;\\mathbb{Z}_{2})\\rightarrow H^{4r}(X;\\mathbb{Z\n_{4})$ be the Pontryagin square \\cite{BT}. For an even degree class $u\\in\nH^{2r}(X;\\mathbb{Z}_{2})$ there holds the following universal relation\n\n\\begin{enumerate}\n\\item[(2.1)] $\\delta _{2}(u\\cup u)=2\\delta _{4}\\mathcal{B}(u)$ on \nH^{4r+1}(X)$ (see (3.1)).\n\\end{enumerate}\n\n\\noindent \\textbf{Definition 2.1.} The space $X$ is called $\\delta _{2}\n\\textsl{--formal} if $\\delta _{2}(u\\cup u)=0$ for every $u\\in H^{2r}(X\n\\mathbb{Z}_{2})$, $r\\geq 1$.$\\square $\n\n\\bigskip\n\nIt follows from (2.1) that, if $X$\\ is a space whose integral cohomologies \nH^{\\ast }(X)$ in degrees $4r+1$ have no torsion element of order $4$, then \nX $\\ is $\\delta _{2}$--formal. 
In particular, all the $1$--connected Lie\ngroups, the classifying spaces $B_{SO(n)}$, $B_{Spin(n)}$ and \nB_{Spin^{c}(n)}$ of our concern, as well as the Thom spectrum \n\\{M_{Spin(n)},n\\geq 3\\}$, are all examples of $\\delta _{2}$--formal spaces.\n\nTo introduce the promised operations observe that the Bockstein operator \nSq^{1}=\\rho _{2}\\circ \\delta _{2}$ defines the following decomposition on\nthe $\\mathbb{Z}_{2}$ space $H^{\\ast }(X;\\mathbb{Z}_{2})$\n\n\\begin{quote}\n$H^{\\ast }(X;\\mathbb{Z}_{2})=\\ker Sq^{1}\\oplus S_{2}^{\\ast }(X)$ with \nS_{2}^{\\ast }(X)=H^{\\ast }(X;\\mathbb{Z}_{2})\/\\ker Sq^{1}$.\n\\end{quote}\n\n\\noindent \\textbf{Theorem A.} \\textsl{For any }$\\delta _{2}$\\textsl{--formal\nspace }$X$\\textsl{\\ there exists a unique pair of cohomological operations}\n\n\\begin{quote}\n$F:H^{2r}(X;\\mathbb{Z}_{2})\\rightarrow H^{4r}(X;\\mathbb{Z}_{4})\\times\nS^{4r}(X;\\mathbb{Z}_{2})$\\textsl{,}\n\\end{quote}\n\n\\noindent \\textsl{written }$F(u)=(\\alpha (u),\\gamma (u))$\\textsl{,} $u\\in\nH^{2r}(X;\\mathbb{Z}_{2})$\\textsl{, that is characterized by the following\nthree properties:}\n\n\\begin{quote}\n\\textsl{i) }$\\alpha (u)\\in \\func{Im}\\rho _{4}$\\textsl{;}\n\n\\textsl{ii)} $\\mathcal{B}(u)=\\alpha (u)+\\theta (\\gamma (u))$\\textsl{;}\n\n\\textsl{iii)} $Sq^{1}(\\gamma (u))=Sq^{2r}Sq^{1}(u)+u\\cup Sq^{1}(u)$\\textsl{.}\n\\end{quote}\n\nThe uniqueness assertion of Theorem A implies that the properties i), ii)\nand iii) can be taken as \\textsl{an axiomatic definition} of the pair \nF=(\\alpha ,\\gamma )$ of operators. In particular, since $Sq^{1}$ injects on \nS_{2}^{\\ast }(X)$ while $\\gamma (u)\\in S_{2}^{\\ast }(X)$, the operator \n\\gamma $ is determined uniquely by the relation iii). Thus, applying $Sq^{1}$\nto both sides verifies at once the following equalities useful to evaluate \n\\gamma $.\n\n\\bigskip\n\n\\noindent \\textbf{Corollary 2.2.} \\textsl{For any }$\\delta _{2}$\\textsl\n--formal space }$X$\\textsl{\\ and }$u_{1}$\\textsl{,}$u_{2}\\in H^{\\ast }(X\n\\mathbb{Z}_{2})$ \\textsl{with} $\\deg u_{i}=2r_{i}$\\textsl{,} \\textsl{one has}\n\n\\begin{quote}\n\\textsl{i)} $\\gamma (u_{1}+u_{2})\\equiv \\gamma (u_{1})+\\gamma\n(u_{2})+u_{1}\\cup u_{2}\\func{mod}\\ker Sq^{1}$\n\n\\textsl{ii)} $\\gamma (u_{1}\\cup u_{2})\\equiv u_{1}^{2}{}\\cup \\gamma\n(u_{2})+u_{2}^{2}{}\\cup \\gamma (u_{1})+u_{1}\\cup Sq^{1}u_{1}\\cup\nSq^{2r_{2}-1}u_{2}$\n\n$\\qquad +u_{2}\\cup Sq^{1}u_{2}\\cup Sq^{2r_{1}-1}u_{1}\\func{mod}\\ker Sq^{1}$.\n\\square $\n\\end{quote}\n\nThe operator $\\gamma $ can be iterated to yield the following notion.\n\n\\bigskip\n\n\\noindent \\textbf{Definition 2.3. }For an even degree cohomology class $u\\in\nH^{2r}(X;\\mathbb{Z}_{2})$ of a $\\delta _{2}$--formal space $X$, the sequence \n$\\left\\{ u^{(0)},u^{(1)},u^{(2)},\\cdots \\right\\} $ on $H^{\\ast }(X;\\mathbb{Z\n_{2})$ defined recursively by $u^{(0)}=u$, $u^{(k+1)}=\\gamma (u^{(k)})$ is\ncalled \\textsl{the derived sequence} \\textsl{of} $u$.$\\square $\n\n\\bigskip\n\n\\noindent \\textbf{Example 2.4.} Recall that $H^{\\ast }(B_{SO(n)};\\mathbb{Z\n_{2})=\\mathbb{Z}_{2}[w_{2},\\cdots ,w_{n}]$, where $w_{i}$ is the $i^{th}$\nStiefel--Whitney class of the canonical real $n$--bundle on $B_{SO(n)}$. 
For \n$u=w_{2r}$ solving the equation iii) of Theorem A using\ncoefficient--comparison yields\n\n\\begin{enumerate}\n\\item[(2.2)] $\\gamma (w_{2r})=w_{4r}+w_{2}w_{4r-2}+\\cdots +w_{2r-2}w_{2r+2}$.\n\\end{enumerate}\n\n\\noindent Since the $Sq^{k}$ action on $H^{\\ast }(B_{SO(n)};\\mathbb{Z}_{2})$\nis determined by the Wu--formula, the formula (2.2), together with the\nalgorithms given by Corollary 2.2, suffices to evaluate the $\\gamma $ action\non $H^{\\ast }(B_{SO(n)};\\mathbb{Z}_{2})$. For example when $u=w_{2}$ we get\nthat\n\n\\begin{quote}\n$w_{2}^{(1)}=w_{4},$\n\n$w_{2}^{(2)}=w_{8}+w_{2}w_{6},$\n\n\nw_{2}^{(3)}=w_{16}+w_{2}w_{14}+w_{4}w_{12}+w_{6}w_{10}+w_{2}w_{6}w_{8}+w_{4}w_{6}^{2}+w_{2}w_{7}^{2} \n$\n\n$\\qquad\n+w_{3}^{2}(w_{10}+w_{2}w_{8}+w_{4}w_{6})+w_{2}^{2}(w_{12}+w_{2}w_{10}+w_{4}w_{8}), \n$\n\\end{quote}\n\n\\noindent and in general, if $2^{k}\\leq n$, that\n\n\\begin{quote}\n$w_{2}^{(k)}=w_{2^{k}}+w_{2}w_{2^{k}-2}+\\cdots +w_{2^{k-1}-2}w_{2^{k-1}+2}+$\nhigher terms.\n\\end{quote}\n\n\\noindent In contrast to the structure of the algebra $H^{\\ast }(B_{SO(n)}\n\\mathbb{Z}_{2})$ as a module over the Steenrod algebra \\cite[Lemma 3.2]{PW},\nthese calculation reveal a striking property about the operator $\\gamma $:\nmodulo the decomposable elements the derived sequence of $w_{2}$ consists of\nall the $2$--power Stiefel--Whitney classes:\n\n\\begin{quote}\n$\\{w_{2}^{(0)},w_{2}^{(1)},w_{2}^{(2)},\\cdots \\}\\equiv \\{w_{2},w_{4},\\cdots\n,w_{2^{l(n)}},0,\\cdots \\}$, $l(n)=\\left[ \\ln n\\right] $.\n\\end{quote}\n\n\\noindent This sequence will play a central role in the construction and\ncalculation of this paper.$\\square $\n\n\\bigskip\n\nAn essential point of the operator $\\alpha $ is that it always admits an\nintegral lift by i) of Theorem A. That is, it can be factored into $\\rho\n_{4}\\circ f_{\\alpha }$ for some $f_{\\alpha }:H^{2r}(X;\\mathbb{Z\n_{2})\\rightarrow H^{4r}(X)$. In the case $X=B_{SO(n)}$ of our interest a\ncanonical choice of such an integral lift $f_{\\alpha }$ can be made\nexplicitly. Recall from Brown \\cite{Br} and Feshbach \\cite{Fe} that the\nintegral cohomology ring of $B_{SO(n)}$ is\n\n\\begin{enumerate}\n\\item[(2.3)] $H^{\\ast }(B_{SO(n)})=\\left\\{ \n\\begin{tabular}{l}\n$\\mathbb{Z}[p_{1},p_{2},\\cdots ,p_{\\left[ \\frac{n-1}{2}\\right]\n},e_{n}]\\oplus \\tau (B_{SO(n)})$ if $n$ is even; \\\\ \n$\\mathbb{Z}[p_{1},p_{2},\\cdots ,p_{\\left[ \\frac{n-1}{2}\\right] }]\\oplus \\tau\n(B_{SO(n)})$ if $n$ is odd\n\\end{tabular\n\\right. $\n\\end{enumerate}\n\n\\noindent where $\\tau (X)$ denotes the torsion ideal of the integral\ncohomology $H^{\\ast }(X)$ of a complex $X$, $2\\tau (B_{SO(n)})=0$, and where \n$p_{i}$ (resp. $e_{n}$) is the $i^{th}$ Pontryagin class (resp. the Euler\nclass) of the canonical real $n$--bundle on $B_{SO(n)}$. With respect to\n(2.3) Thomas introduced in \\cite[\\S 3]{Th} an operator $f:H^{r}(B_{SO(n)}\n\\mathbb{Z}_{2})\\rightarrow H^{2r}(B_{SO(n)})$, i.e. \\textsl{the integral\nrepresentation}, by the following practical rules:\n\n\\begin{enumerate}\n\\item[(2.4)] $f(u):=\\left\\{ \n\\begin{tabular}{l}\n$p_{r}$, $\\delta _{2}(Sq^{2r}w_{2r+1})$ or $e_{n}^{2}$ if $u=w_{2r},w_{2r+1}$\nor $w_{n}$ if $n$ is even; \\\\ \n$f(w_{i_{1}})\\cdots f(w_{i_{k}})$ if $u=w_{i_{1}}\\cdots w_{i_{k}}$ is a\nmonomial; \\\\ \n$f(u_{1})+\\cdots +f(u_{k})$ if $u=$ $u_{1}+\\cdots +u_{k}$\n\\end{tabular\n\\right. 
$\n\\end{enumerate}\n\n\\noindent where $2r,2r+1<\\left[ \\frac{n-1}{2}\\right] $, and where the $u_{i}\n's are distinct monomials in $w_{2},\\cdots ,w_{n}$. Based on Theorem A we\nshall show that\n\n\\bigskip\n\n\\noindent \\textbf{Theorem B.} \\textsl{The pair }$(f,\\gamma )$\\textsl{\\ of\noperators on }$H^{\\ast }(B_{SO(n)};\\mathbb{Z}_{2})$ \\textsl{satisfies,} \n\\textsl{for any }$u\\in S_{2}^{\\ast }(B_{SO(n)})$\\textsl{, that}\n\n\\begin{quote}\n\\textsl{i)} $\\mathcal{B}(u)=\\rho _{4}(f(u))+\\theta (\\gamma (u))$\\textsl{,}\n\n\\textsl{ii)} $Sq^{1}(\\gamma (u))=Sq^{2r}Sq^{1}(u)+u\\cup Sq^{1}(u)$.\n\\end{quote}\n\n\\noindent \\textbf{Example 2.5.} For $u=w_{2r}\\in S_{2}^{\\ast }(B_{SO(n)})$\nwe have $f(w_{2r})=p_{r}$ by the definition of $f$. Substituting this and\n(2.2) into i) of Theorem B we obtain the formula\n\n\\begin{quote}\n$\\mathcal{B}(w_{2r})=\\rho _{4}(p_{r})+\\theta (w_{4r}+w_{2}w_{4r-2}+\\cdots\n+w_{2r-2}w_{2r+2})$\n\\end{quote}\n\n\\noindent implying that the $\\func{mod}4$ reductions of the Pontryagin\nclasses of a manifold are homotopy invariants. This formula was originally\nobtained by W. Wu \\cite{Wu} by computing with the Schubert cells on \nB_{SO(n)}$. S.S. Chern suggested a different approach which was implemented\nby Thomas in \\cite[Theorem C]{Th1}.\n\nConcerning this topic property ii) of Theorem A may be called \\textsl{the} \n\\textsl{generalized Wu--formula }on $\\delta _{2}$--formal spaces.$\\square $\n\n\\bigskip\n\nTurning to our main concerns the classifying spaces $B_{Spin(n)}$ and \nB_{Spin^{c}(n)}$ fit into the fibered sequences\n\n\\begin{enumerate}\n\\item[(2.5)] $\\mathbb{C}P^{\\infty }\\overset{i}{\\rightarrow }B_{Spin^{c}(n)\n\\overset{\\pi }{\\rightarrow }B_{SO(n)}$ and\n\n\\item[(2.6)] $U(1)\\rightarrow B_{Spin(n)}\\overset{\\psi }{\\rightarrow \nB_{Spin^{c}(n)}\\overset{\\iota }{\\rightarrow }\\mathbb{C}P^{\\infty }$,\n\\end{enumerate}\n\n\\noindent where the maps $\\pi $ and $\\iota $ are induced by the obvious\nepimorphisms\n\n\\begin{quote}\n$Spin(n)\\times _{\\mathbb{Z}_{2}}U(1)\\rightarrow SO(n)$ and $Spin(n)\\times _\n\\mathbb{Z}_{2}}U(1)\\rightarrow U(1)$,\n\\end{quote}\n\n\\noindent respectively. Let $\\{w_{2},w_{2}^{(1)},w_{2}^{(2)},\\cdots \\}$ be\nthe derived sequence of the second Stiefel Whitney class $w_{2}$. Applying\nthe operator $f$ gives rise to the sequence $\\{f(w_{2}),$ \nf(w_{2}^{(1)}),f(w_{2}^{(2)}),\\cdots \\}$ of integral cohomology classes on \nB_{SO(n)}$. By examining the $\\pi ^{\\ast }$ images of these two sequences in\nthe cohomology of $B_{Spin^{c}(n)}$ we single out a set of new generators of\nthe ring $H^{\\ast }(B_{Spin^{c}(n)})$ in the following result, where $x$\\\ndenotes the Euler class of the Hopf line bundle $\\lambda $\\ on $\\mathbb{C\nP^{\\infty }$.\n\n\\bigskip\n\n\\noindent \\textbf{Theorem C. 
}\\textsl{There exists a unique sequence }\n\\left\\{ q_{r}\\QTR{sl}{,\\ }r\\geq 0\\right\\} $ \\textsl{of} \\textsl{integral} \n\\textsl{cohomology classes on }$B_{Spin^{c}(n)}$\\textsl{, }$\\deg\nq_{r}=2^{r+1}$\\textsl{,} \\textsl{that satisfies the following system}\n\n\\begin{quote}\n\\textsl{i)} $q_{0}=\\iota ^{\\ast }(x)$\\textsl{,}\n\n\\textsl{ii)} $\\rho _{2}(q_{r})=\\pi ^{\\ast }(w_{2}^{(r)})$\\textsl{; }\n\n\\textsl{iii)} $2q_{r+1}+q_{r}^{2}=\\pi ^{\\ast }f(w_{2}^{(r)})$\\textsl{, }\nr\\geq 0$\\textsl{.}\n\\end{quote}\n\nFor an integer $n\\geq 7$ (see Remark 1.1) we set $h(n)=\\left[ \\frac{n-1}{2\n\\right] $, and let $\\theta _{n}\\in H^{\\ast }(B_{Spin^{c}(n)})$ be the Euler\nclass of the complex bundle $\\xi _{n}$ associated to the complex spin\npresentation $Spin^{c}(n)\\rightarrow U(2^{h(n)})$ (\\cite{ABS,HK}). Regarding\nthe cohomology $H^{\\ast }(B_{Spin^{c}(n)})$ as a module over its subring \n\\pi ^{\\ast }H^{\\ast }(B_{SO(n)})$, our main result presents the cohomology \nH^{\\ast }(B_{Spin^{c}(n)})$ by the unique sequence $\\left\\{ q_{r},r\\geq\n0\\right\\} $ obtained by Theorem C, together with the Euler class $\\theta\n_{n} $.\n\n\\bigskip\n\n\\noindent \\textbf{Theorem D. }\\textsl{The cohomology of }$B_{Spin^{c}(n)}\n\\textsl{\\ has the presentation}\n\n\\begin{enumerate}\n\\item[(2.6)] $H^{\\ast }(B_{Spin^{c}(n)})=\\pi ^{\\ast }H^{\\ast\n}(B_{SO(n)})\\otimes \\mathbb{Z}[q_{0},q_{1},\\cdots ,q_{h(n)-1},\\theta\n_{n}]\/R_{n}$,\n\\end{enumerate}\n\n\\noindent \\textsl{in which }$R_{n}$ \\textsl{denotes the ideal generated by\nthe following elements}\n\n\\begin{quote}\n\\textsl{i)} $2q_{r+1}+q_{r}^{2}-\\pi ^{\\ast }f(w_{2}^{(r)})$\\textsl{,\\quad }\n0\\leq r\\leq h(n)-2\\QTR{sl}{;}$\n\n\\textsl{ii)} $\\pi ^{\\ast }\\delta _{2}(z)\\cup q_{r}-\\pi ^{\\ast }\\delta\n_{2}(z\\cup w_{2}^{(r)}),$ $0\\leq r\\leq h(n)-1$\\textsl{;}\n\n\\textsl{iii)} $4(-1)^{h(n)}\\theta _{n}+q_{h(n)-1}^{2}-a_{n}$\\textsl{,}\n\\end{quote}\n\n\\noindent \\textsl{where} $z\\in H^{\\ast }(B_{SO(n)},\\mathbb{Z}_{2})$\\textsl{,\nand where }$a_{n}\\in \\pi ^{\\ast }H^{+}(B_{SO(n)})\\otimes \\mathbb{Z\n(q_{0},q_{1},\\cdots ,q_{h(n)-1})$ \\textsl{is an} \\textsl{element to be\nspecified in Lemma 5.3.}\n\n\\bigskip\n\nLeaving out $\\otimes $ for notational simplicity the relations i) and ii) of\nTheorem D are inherited from the properties iii) and ii) of Theorem C. They\nexpress the relationship between the two sequences $\\left\\{ q_{r},r\\geq\n0\\right\\} $ and $\\left\\{ \\pi ^{\\ast }p_{r},r\\geq 1\\right\\} $ of integral\ncohomology classes on $B_{Spin^{c}(n)}$, and deal with the product between\nthe free part and the torsion ideal $\\tau (B_{Spin^{c}(n)})$ of the ring \nH^{\\ast }(B_{Spin^{c}(n)})$, respectively.\n\nAn outline of the paper is as follows. Sections \\S 3 to \\S 5 are devoted to\nshow Theorems A--D. The calculation is extended in Section \\S 6 to obtain a\nsimilar presentation of the ring $H^{\\ast }(B_{Spin(n)})$ in Theorem D'. The\nremaining sections constitute applications of Theorem D'. We determine in \\S\n8 the ring of integral Weyl invariants of the group $Spin(n)$; introduce in \n\\S 9 the spin characteristic classes for the spin vector bundles. In spin\ngeometry the Spin characteristic classes can play roles that may not be\nreplaced by the regular characteristic classes. Section \\S 10 is devoted to\npresent such examples.\n\nEffective computability with characteristic classes is essential in geometry\nand invariant theory. 
To illustrate the algorithmic nature of the operators\nand results developed in this paper a number of examples are included. In\nparticular, in a concrete situation Theorems D and D' are directly\napplicable to present the rings $H^{\\ast }(B_{Spin^{c}(n)})$ and $H^{\\ast\n}(B_{Spin(n)})$ by a minimal system of explicit generators and relations,\nsee in Examples 5.4 and 6.5; Theorems 7.7 and 7.8 are ready to deduce the\nintegral Weyl invariants of the groups $Spin^{c}(n)$ and $Spin(n)$, which\nare shown in Examples 7.6 and 7.9.\n\n\\section{The cohomology operation $F=(\\protect\\gamma ,\\protect\\alpha )$}\n\nFor a $\\func{mod}2$ cohomology class $u\\in H^{2r}(X;\\mathbb{Z}_{2})$ of a CW\ncomplex $X$ the Pontryagin square $\\mathcal{B}(u)\\in H^{4r}(X;\\mathbb{Z\n_{4}) $ can be defined by the formula\n\n\\begin{quote}\n$\\mathcal{B}(u)\\equiv \\rho _{4}(\\widetilde{u}\\cup _{0}\\widetilde{u}+\\delta \n\\widetilde{u})\\cup _{1}\\widetilde{u})$ (see \\cite{BT}),\n\\end{quote}\n\n\\noindent where $\\widetilde{u}$ is an integral lift of $u$ in the cochain\ncomplex $C^{\\ast }(X;\\mathbb{Z})$ associated to $X$, and $\\cup _{i}$ denotes\nthe $i^{th}$ cup product on $C^{\\ast }(X;\\mathbb{Z})$. Based on this formula\na co--chain level calculation verifies the following universal relations\n\n\\begin{enumerate}\n\\item[(3.1)] $\\delta _{2}(u\\cup u)=2\\delta _{4}(\\mathcal{B}(u))$ in \nH^{4r+1}(X)$;\n\n\\item[(3.2)] $\\rho _{2}\\delta _{4}\\mathcal{B}(u)=Sq^{2r}Sq^{1}u+u\\cup\nSq^{1}u $ in $H^{4r+1}(X;\\mathbb{Z}_{2})$.\n\\end{enumerate}\n\n\\noindent \\textbf{Proof of Theorem A.} Let $X$ be a $\\delta _{2}$ formal\nspace. For each $u\\in H^{2r}(X;\\mathbb{Z}_{2})$ the property $\\delta\n_{2}(u\\cup u)=0$ implies that $\\delta _{4}(\\mathcal{B}(u))\\in \\func{Im\n\\delta _{2}$ by (3.1). In view of the isomorphism $\\delta _{2}:S_{2}^{\\ast\n}(X)\\cong \\func{Im}\\delta _{2}$ there exists a unique $u_{1}\\in\nS_{2}^{4r}(X) $ so that\n\n\\begin{enumerate}\n\\item[(3.3)] $\\delta _{2}(u_{1})=\\delta _{4}(\\mathcal{B}(u))$.\n\\end{enumerate}\n\n\\noindent We can now formulate the desired operation $F=(\\alpha ,\\gamma\n):H^{2r}(X;\\mathbb{Z}_{2})\\rightarrow H^{4r}(X;\\mathbb{Z}_{4})\\times\nS^{4r}(X;\\mathbb{Z}_{2})$ by setting\n\n\\begin{quote}\n$\\gamma (u):=u_{1}$ and $\\alpha (u):=\\mathcal{B}(u)-\\theta (\\gamma (u))$.\n\\end{quote}\n\nApplying $\\rho _{2}$ to both sides of (3.3) we get by (3.2) that\n\n\\begin{quote}\n$Sq^{1}\\gamma (u)=Sq^{2r}Sq^{1}u+u\\cup Sq^{1}u$,\n\\end{quote}\n\n\\noindent showing property iii) of Theorem A. From $\\delta _{4}\\circ \\theta\n=\\delta _{2}$ and (3.3) we obtain\n\n\\begin{quote}\n$\\delta _{4}\\alpha (u)=\\delta _{4}(\\mathcal{B}(u)-\\theta (\\gamma\n(u)))=\\delta _{4}(\\mathcal{B}(u))-\\delta _{2}(\\gamma (u))=0$,\n\\end{quote}\n\n\\noindent implying $\\alpha (u)\\in \\func{Im}\\rho _{4}$ (i.e. property i) of\nTheorem A).\n\nSummarizing, the pair $F=(\\gamma ,\\alpha )$ of operators fulfills the\nproperties i), ii) and iii) of Theorem A, whose uniqueness comes directly\nfrom its definition.$\\square $\n\n\\bigskip\n\n\\noindent \\textbf{Proof of Theorem B.} Let $f:H^{\\ast }(B_{SO(n)};\\mathbb{Z\n_{2})\\rightarrow H^{\\ast }(B_{SO(n)})$ be the map entailed in (2.4). 
It has been shown by Thomas \cite[Lemma (3.9)]{Th} that for any $u\in S_{2}^{\ast }(B_{SO(n)})$ there exists a unique element $v\in S_{2}^{\ast }(B_{SO(n)})$ so that

\begin{enumerate}
\item[(3.5)] $\mathcal{B}(u)=\rho _{4}(f(u))+\theta (v)$.
\end{enumerate}

\noindent It suffices for us to show that $v=\gamma (u)$. Since the space $B_{SO(n)}$ is $\delta _{2}$--formal, applying $\delta _{4}$ to both sides of (3.5) we get by (3.3) that

\begin{quote}
$\delta _{2}(\gamma (u))=\delta _{4}(\theta (v))=\delta _{2}(v)$ (since $\delta _{4}\circ \theta =\delta _{2}$).
\end{quote}

\noindent Since $\gamma (u),v\in S_{2}^{\ast }(B_{SO(n)})$ and $\delta _{2}$ injects on $S_{2}^{\ast }(B_{SO(n)})$, we obtain $v=\gamma (u)$.$\square $

\section{The proof of Theorem C}

Applying the Serre spectral sequence to the fibration

\begin{enumerate}
\item[(4.1)] $\mathbb{C}P^{\infty }\overset{i}{\rightarrow }B_{Spin^{c}(n)}\overset{\pi }{\rightarrow }B_{SO(n)}$ (see (2.5))
\end{enumerate}

\noindent the cohomologies of the total space $B_{Spin^{c}(n)}$ with field coefficients have been computed. We begin by recalling the relevant results due to Borel, Hirzebruch, Quillen, Harada and Kono. Let $\mathbb{Z}_{0}$ be the field of rationals and denote by $\rho _{0}$ the cohomology homomorphism induced by the inclusion $\mathbb{Z}\subset \mathbb{Z}_{0}$. Set $q_{0}:=\iota ^{\ast }(x)$, where $x$ is the Euler class of the Hopf line bundle $\lambda $ on $\mathbb{C}P^{\infty }$.

\bigskip

\noindent \textbf{Lemma 4.1.} \textsl{If either }$p=0$\textsl{\ or }$p\geq 3$\textsl{\ is a prime, then }$H^{\ast }(B_{Spin^{c}(n)};\mathbb{Z}_{p})$\textsl{\ is a free polynomial algebra on the generators}

\begin{enumerate}
\item[(4.2)] $\rho _{p}(q_{0}),\rho _{p}(\pi ^{\ast }p_{1}),\cdots ,\rho _{p}(\pi ^{\ast }p_{\left[ \frac{n-1}{2}\right] }),$ \textsl{and} $\rho _{p}(\pi ^{\ast }e_{n})$ \textsl{if} $n\equiv 0\func{mod}2$.
\end{enumerate}

\textsl{In addition, the map }$\rho :H^{\ast }(B_{Spin^{c}(n)})\rightarrow H^{\ast }(B_{Spin^{c}(n)};\mathbb{Z}_{0})\times H^{\ast }(B_{Spin^{c}(n)};\mathbb{Z}_{2})$ \textsl{given by} $\rho (z)=(\rho _{0}(z),\rho _{2}(z))$ \textsl{injects.}

\bigskip

\noindent \textbf{Proof.} Since the composition $\iota \circ i:\mathbb{C}P^{\infty }\rightarrow \mathbb{C}P^{\infty }$ (see (2.5) and (2.6)) is of degree $2$, the class $i^{\ast }(q_{0})=2x$ generates the algebra $H^{\ast }(\mathbb{C}P^{\infty };\mathbb{Z}_{p})=\mathbb{Z}_{p}[x]$ for all $p\neq 2$. The first assertion follows from the Leray--Hirsch property \cite[p.231]{Hus} of the fibration (4.1) with $\mathbb{Z}_{p}$ coefficients.

According to Borel and Hirzebruch \cite[30.6.]{BH}, if $X$ is a space with $2\tau (X)=0$, then the map $\rho :H^{\ast }(X)\rightarrow H^{\ast }(X;\mathbb{Z}_{0})\times H^{\ast }(X;\mathbb{Z}_{2})$ given by $\rho (z)=(\rho _{0}(z),\rho _{2}(z))$ injects. The second assertion follows from $2\tau (B_{Spin^{c}(n)})=0$, which is due to Harada and Kono \cite[Theorem 3.7]{HK}.$\square $

\bigskip

Turning to the algebra $H^{\ast }(B_{Spin^{c}(n)};\mathbb{Z}_{2})$, the transgression $\sigma $ in the fibration (4.1) clearly satisfies $\sigma (\rho _{2}(x))=w_{3}$.
Since $\\sigma $ commutes with the Steenrod squares\nthe standard relation $Sq^{2^{k}}x_{k}=x_{k+1}$ on $H^{\\ast }(\\mathbb{C\nP^{\\infty };\\mathbb{Z}_{2})$ implies that\n\n\\begin{enumerate}\n\\item[(4.3)] $\\sigma (x_{k+1})=Sq^{2^{k}}\\sigma (x_{k})$, $k\\geq 0$, where \nx_{k}:=(\\rho _{2}(x))^{2^{k-1}}$.\n\\end{enumerate}\n\n\\noindent For a subset $\\{a_{1},\\cdots ,a_{r}\\}$ of an algebra $A$ denote by \n$\\left\\langle a_{1},\\cdots ,a_{r}\\right\\rangle $ the ideal generated by \na_{1},\\cdots ,a_{r}$, and let $A\/\\left\\langle a_{1},\\cdots\n,a_{r}\\right\\rangle $ be the quotient algebra.\n\n\\bigskip\n\n\\noindent \\textbf{Lemma 4.2.} \\textsl{Let }$J=\\left\\langle \\sigma\n(x_{1}),\\cdots ,\\sigma (x_{h(n)})\\right\\rangle $\\textsl{, }$h(n)=\\left[ \n\\frac{n-1}{2}\\right] $\\textsl{, and let }$\\theta _{n}$\\textsl{\\ be the Euler\nclass of the presentation }$Spin^{c}(n)\\rightarrow U(2^{h(n)})$\\textsl{. The\n}\n\n\\begin{enumerate}\n\\item[(4.4)] $H^{\\ast }(B_{Spin^{c}(n)};\\mathbb{Z}_{2})=H^{\\ast }(B_{SO(n)}\n\\mathbb{Z}_{2})\/J\\otimes \\mathbb{Z}_{2}[\\theta _{n}]$\\textsl{.}\n\\end{enumerate}\n\n\\textsl{In particular, the torsion ideal }$\\tau (B_{Spin^{c}(n)})$\\textsl{\\\nof the ring }$H^{\\ast }(B_{Spin^{c}(n)})$\\textsl{\\ is}\n\n\\begin{enumerate}\n\\item[(4.5)] $\\tau (B_{Spin^{c}(n)})=\\pi ^{\\ast }\\tau (B_{SO(n)})\\otimes \n\\mathbb{Z}[\\theta _{n}]$.\n\\end{enumerate}\n\n\\noindent \\textbf{Proof.} Formula (4.4) goes to Harada and Kono \\cite\nTheorem 3.5]{HK} (see also Quillen \\cite[Theorem 6.5]{Q}). With $2\\tau\n(B_{SO(n)})=2\\tau (B_{Spin^{c}(n)})=0$ the map $\\pi $ induces the\ncommutative diagram\n\n\\begin{quote}\n\\begin{tabular}{llll}\n$0\\rightarrow $ & $\\tau (B_{SO(n)})$ & $\\overset{\\rho _{2}}{\\rightarrow }$ & \n$H^{\\ast }(B_{SO(n)};\\mathbb{Z}_{2})$ \\\\ \n& $\\pi ^{\\ast }\\downarrow $ & & $\\pi ^{\\ast }\\downarrow $ \\\\ \n$0\\rightarrow $ & $\\tau (B_{Spin^{c}(n)})$ & $\\overset{\\rho _{2}}\n\\rightarrow }$ & $H^{\\ast }(B_{Spin^{c}(n)};\\mathbb{Z}_{2})\n\\end{tabular}\n\\end{quote}\n\n\\noindent in which both $\\rho _{2}$ inject. Since $\\tau (B_{Spin^{c}(n)})$\nis an ideal the map\n\n\\begin{quote}\n$h:\\pi ^{\\ast }\\tau (B_{SO(n)})\\otimes \\mathbb{Z}[\\theta _{n}]\\rightarrow\n\\tau (B_{Spin^{c}(n)})$ by $h(\\pi ^{\\ast }x\\otimes \\theta _{n})=\\pi ^{\\ast\n}x\\cup \\theta _{n}$\n\\end{quote}\n\n\\noindent is well defined, and gives rise to the isomorphism (4.5).\n\nIndeed, by (4.4) the composition $\\rho _{2}\\circ h$ injects, hence $h$\ninjects, too. On the other hand for any $x\\in \\tau (B_{Spin^{c}(n)})$ there\nexists $y\\in H^{\\ast }(B_{Spin^{c}(n)};\\mathbb{Z}_{2})$ so that $\\delta\n_{2}(y)=x$. By formula (4.4) the map $h$ also surjects.$\\square $\n\n\\bigskip\n\nAs the space $B_{SO(n)}$ is $\\delta _{2}$--formal the derived sequence \n\\{w_{2},w_{2}^{(1)},\\cdots \\}$ of the Stiefel Whitney class $w_{2}$ is\ndefined, see Example 2.4. 
Its relationship with the sequence $\\{\\sigma\n(x_{1}),\\sigma (x_{2}),\\cdots \\}$ defined by (4.3) is stated in the\nfollowing result.\n\n\\bigskip\n\n\\noindent \\textbf{Lemma 4.3.} \\textsl{In }$H^{\\ast }(B_{SO(n)};\\mathbb{Z\n_{2})$\\textsl{\\ let }$J_{n,k}=$\\textsl{\\ }$\\left\\langle \\sigma\n(x_{1}),\\cdots ,\\sigma (x_{k})\\right\\rangle $\\textsl{, }$k\\geq 0$\\textsl{.\nThen}\n\n\\begin{enumerate}\n\\item[(4.6)] $Sq^{1}(w_{2}^{(k-1)})=\\sigma (x_{k})+\\beta _{k-1}$ \\textsl\nwith }$\\beta _{k-1}\\in J_{n,k-1}$\\textsl{, }$k\\geq 1$\\textsl{.}\n\\end{enumerate}\n\n\\noindent \\textbf{Proof.} If $k=1$ the formula (4.6) is verified by $\\sigma\n(x_{1})=w_{3}=Sq^{1}(w_{2})$. Assume next that it holds for some $k\\geq 1$.\nThen\n\n\\begin{quote}\n$Sq^{1}(w_{2}^{(k)})=Sq^{1}(\\gamma (w_{2}^{(k-1)}))$ (by $w_{2}^{(k)}=\\gamma\n(w_{2}^{(k-1)})$)\n\n$=Sq^{2^{k}}Sq^{1}(w_{2}^{(k-1)})+w_{2}^{(k-1)}\\cup Sq^{1}(w_{2}^{(k-1)})$\n(by iii) of Theorem A)\n\n$=Sq^{2^{k}}(\\sigma (x_{k})+\\beta _{k-1})+w_{2}^{(k-1)}\\cup (\\sigma\n(x_{k})+\\beta _{k-1})$ (by induction)\n\n$=\\sigma (x_{k+1})+Sq^{2^{k}}\\beta _{k-1}+w_{2}^{(k-1)}\\cup (\\sigma\n(x_{k})+\\beta _{k-1})$ (by (4.3)).\n\\end{quote}\n\n\\noindent The inductive procedure showing (4.6) is completed by taking\n\n\\begin{quote}\n$\\beta _{k}:=Sq^{2^{k}}\\beta _{k-1}+w_{2}^{(k-1)}\\cup (\\sigma (x_{k})+\\beta\n_{k-1})$.$\\square $\n\\end{quote}\n\nWe proceed to a constructive proof of Theorem C. Since $\\pi ^{\\ast }\\circ\n\\sigma =0$ we get by (4.5) that $Sq^{1}(\\pi ^{\\ast }(w_{2}^{(k)}))=0$. With\nthe space $B_{Spin^{c}(n)}$ being $\\delta _{2}$--formal we obtain further \n\\delta _{2}(\\pi ^{\\ast }(w_{2}^{(k)}))=0$, $k\\geq 0$. In view of the\nBockstein exact sequence\n\n\\begin{center}\n$\\cdots \\rightarrow H^{r}(B_{Spin^{c}(n)})\\overset{\\rho _{2}}{\\rightarrow \nH^{r}(B_{Spin^{c}(n)};\\mathbb{Z}_{2})\\overset{\\delta _{2}}{\\rightarrow \nH^{r+1}(B_{Spin^{c}(n)})\\rightarrow \\cdots $\n\\end{center}\n\n\\noindent this implies that the classes $\\pi ^{\\ast }(w_{2}^{(k)})$ admit\nintegral lifts\n\n\\begin{enumerate}\n\\item[(4.7)] $\\rho _{2}(q_{k}^{\\prime })=\\pi ^{\\ast }(w_{2}^{(k)})$ for some \n$q_{k}^{\\prime }\\in H^{2^{k+1}}(B_{Spin^{c}(n)})$, $k\\geq 0$.\n\\end{enumerate}\n\n\\noindent In particular,\n\n\\begin{quote}\n$\\mathcal{B}(\\pi ^{\\ast }(w_{2}^{(k)}))=\\rho _{4}(q_{k}^{\\prime }\\cup\nq_{k}^{\\prime })$, $\\theta (\\pi ^{\\ast }(w_{2}^{(k)}))=$ $2\\rho\n_{4}(q_{k}^{\\prime })$.\n\\end{quote}\n\n\\noindent Thus, applying $\\pi ^{\\ast }$ to $u=w_{2}^{(k)}$ the relation i)\nof Theorem B becomes\n\n\\begin{quote}\n$\\rho _{4}(q_{k}^{\\prime }\\cup q_{k}^{\\prime })=\\rho _{4}(\\pi ^{\\ast\n}f(w_{2}^{(k)}))+2\\rho _{4}(q_{k+1}^{\\prime })$ on $H^{\\ast\n}(B_{Spin^{c}(n)};\\mathbb{Z}_{4})$,\n\\end{quote}\n\n\\noindent implying that there exist integral class $v_{k+1}\\in H^{\\ast\n}(B_{Spin^{c}(n)})$ so that\n\n\\begin{enumerate}\n\\item[(4.8)] $2q_{k+1}^{\\prime }+q_{k}^{\\prime }\\cup q_{k}^{\\prime }=\\pi\n^{\\ast }f(w_{2}^{(k)})+4v_{k+1}$.\n\\end{enumerate}\n\n\\noindent \\textbf{Proof of Theorem C. 
}Since the class $q_{0}=\iota ^{\ast }(x)$ generates $H^{2}(B_{Spin^{c}(n)})=\mathbb{Z}$ with $\rho _{2}(q_{0})=\pi ^{\ast }(w_{2})$, we can take in (4.7) that $q_{0}^{\prime }=q_{0}$, and define in terms of (4.8) that $q_{1}:=q_{1}^{\prime }-2v_{1}$. Then the relations ii) and iii) of Theorem C for the case $r=1$ are verified by

\begin{quote}
$\rho _{2}(q_{1})=\rho _{2}(q_{1}^{\prime })=\pi ^{\ast }(w_{2}^{(1)})$ (by (4.7));

$2q_{1}+q_{0}\cup q_{0}=\pi ^{\ast }f(w_{2}^{(0)})$ (by (4.8)).
\end{quote}

Assume next that a sequence $q_{0},\cdots ,q_{r}$ of classes satisfying the properties i), ii) and iii) of Theorem C has been obtained for some $r\geq 1$. Take in (4.7) that $q_{r}^{\prime }=q_{r}$ and define in terms of (4.8) that $q_{r+1}:=q_{r+1}^{\prime }-2v_{r+1}$. Then

\begin{quote}
$\rho _{2}(q_{r+1})=\pi ^{\ast }(w_{2}^{(r+1)})$, $2q_{r+1}+q_{r}\cup q_{r}=\pi ^{\ast }f(w_{2}^{(r)})$.
\end{quote}

\noindent This completes the inductive construction of a sequence $\{q_{r},r\geq 0\}$ fulfilling the system i)--iii) of Theorem C.

To see the uniqueness of the sequence $\{q_{r},r\geq 0\}$ we make use of the injection $\rho $ in Lemma 4.1. Note that the properties i), ii) and iii) of Theorem C suffice to determine the $\rho $--image of $q_{r}$ as $\rho (q_{r})=(g_{r},\pi ^{\ast }w_{2}^{(r)})$, $r\geq 1$, where $g_{r}\in H^{\ast }(B_{Spin^{c}(n)};\mathbb{Z}_{0})$ is the unique polynomial (with rational coefficients) in the generators (4.2) defined recursively by the relation iii) of Theorem C as

\begin{quote}
$\rho _{0}(q_{1})=g_{1}:=\frac{1}{2}(\rho _{0}(\pi ^{\ast }p_{1})-\rho _{0}(q_{0})^{2})$,

$\rho _{0}(q_{2})=g_{2}:=\frac{1}{2}(\rho _{0}(\pi ^{\ast }p_{2})-g_{1}^{2})$, $\cdots $,

$\rho _{0}(q_{r})=g_{r}:=\frac{1}{2}(\rho _{0}(\pi ^{\ast }f(w_{2}^{(r-1)}))-g_{r-1}^{2})$, $r\geq 2$.
\end{quote}

\noindent The injectivity of $\rho $ implies that, if $\left\{ q_{r}^{\prime },r\geq 1\right\} $ is a second sequence satisfying the system i)--iii) of Theorem C, then $q_{r}^{\prime }=q_{r}$, as required.$\square $

\bigskip

We conclude this section with two applications of Theorem C. Firstly, let $H^{+}(B_{SO(n)})$ be the subring of $H^{\ast }(B_{SO(n)})$ consisting of the elements of positive degree. The induced action of the fiber inclusion $i$ on cohomology is determined in the following result.

\bigskip

\noindent \textbf{Lemma 4.4. }\textsl{The map }$i^{\ast }$\textsl{\ satisfies }$i^{\ast }\circ \pi ^{\ast }=0$\textsl{\ on }$H^{+}(B_{SO(n)})$\textsl{, and}

\begin{enumerate}
\item[(4.9)] $i^{\ast }(q_{r})=2(-1)^{r}x^{2^{r}}$, $r\geq 0$; $\quad i^{\ast }(\theta _{n})=x^{2^{h(n)}}$.
\end{enumerate}

\noindent \textbf{Proof.} As the composition $\pi \circ i$ is null--homotopic we get $i^{\ast }\circ \pi ^{\ast }=0$. Consequently, applying $i^{\ast }$ to the relation iii) of Theorem C yields

\begin{quote}
$2i^{\ast }(q_{r+1})+i^{\ast }(q_{r})^{2}=0$, $r\geq 0$.
\end{quote}

\noindent Substituting $i^{\ast }(q_{0})=2x$ we obtain the first relation in (4.9) by induction on $r$.
The second one is verified by the geometric fact \ni^{\\ast }\\xi _{n}=\\lambda \\oplus \\cdots \\oplus \\lambda $ ($2^{h(n)}$ copies)\n$\\square $\n\n\\bigskip\n\nRecall that the $\\func{mod}2$ Bockstein cohomology $H_{\\beta }^{\\ast }(X)$\nof a space $X$ is the kernel modulo the image of the operation $\\beta\n=Sq^{1} $ on $H^{\\ast }(X;\\mathbb{Z}_{2})$. Granted with Theorem C we can\nexpress the Bockstein $H_{\\beta }^{\\ast }(B_{Spin^{c}(n)})$ by the mod $2$\nreductions of explicit integral cohomology classes on $B_{Spin^{c}(n)}$. For\na set $\\{y_{1},\\cdots ,y_{r}\\}$ of graded elements let $\\Delta (y_{1},\\cdots\n,y_{r})$ be the graded free $\\mathbb{Z}$ module with the basis\n\n\\begin{quote}\n$\\{1,y_{i_{1}}y_{i_{2}}\\cdots y_{i_{k}}$, $1\\leq i_{1}<\\cdots k\n)\n\n\\begin{enumerate}\n\\item[(7.3)] $p_{r}=c_{r}^{2}-2c_{r-1}c_{r+1}+\\cdots\n+2(-1)^{r-1}c_{1}c_{2r-1}+2(-1)^{r}c_{2r}$ (\\cite[p.177]{MS}).\n\\end{enumerate}\n\n\\textbf{Convention c).} The notion $e_{2k}$ for the Euler class of the\ncanonical bundle on $B_{SO(2k)}$ is also applied to denote either\n\n\\begin{quote}\n$\\pi ^{\\ast }e_{2k}\\in H^{\\ast }(B_{Spin^{c}(2k)})$ or $\\psi ^{\\ast }\\pi\n^{\\ast }e_{2k}\\in H^{\\ast }(B_{Spin(2k)})$.\n\\end{quote}\n\n\\noindent These convention will not cause any confusion, because in each\ncircumstance the cohomologies or the homomorphisms involved will be clearly\nstated.\n\n\\bigskip\n\n\\textbf{7.2. The operators }$(\\psi ,\\delta )$\\textbf{\\ on the} \\textbf\ncohomology }$H^{\\ast }(B_{U^{c}(k)})$\\textbf{.} The group $U^{c}(k)$ has two\nobvious $1$--dimensional unitary representations\n\n\\begin{quote}\n$U(k)\\times _{\\mathbb{Z}_{2}}U(1)\\rightarrow U(k)\\overset{\\det }{\\rightarrow \n}U(1)$ and $U(k)\\times _{\\mathbb{Z}_{2}}U(1)\\rightarrow U(1)$\n\\end{quote}\n\n\\noindent whose Euler classes are $c_{1}$ and $B_{\\lambda ^{c}}^{\\ast\n}(q_{0})$, respectively. By the commutivity of the second diagram in (7.2)\nwe have\n\n\\begin{quote}\n$\\rho _{2}(B_{\\lambda ^{c}}^{\\ast }(q_{0})+c_{1})=B_{\\lambda ^{c}}^{\\ast\n}\\circ B_{b}^{\\ast }(w_{2})+B_{b^{\\prime }}^{\\ast }\\circ B_{\\lambda\n_{0}}^{\\ast }(w_{2})=0$,\n\\end{quote}\n\n\\noindent implying that the sum $B_{\\lambda ^{c}}^{\\ast }(q_{0})+c_{1}\\in\nH^{2}(B_{U^{c}(k)})$ is divisible by $2$. 
This brings us the integral class\n\n\\begin{quote}\n$y:=\\frac{1}{2}(B_{\\lambda ^{c}}^{\\ast }(q_{0})+c_{1})\\in\nH^{2}(B_{U^{c}(k)}) $\n\\end{quote}\n\n\\noindent by which following result becomes transparent by Lemma 7.1.\n\n\\bigskip\n\n\\noindent \\textbf{Lemma 7.2.} \\textsl{We have }$H^{\\ast }(B_{U^{c}(k)})\n\\mathbb{Z}[y,c_{1},\\cdots ,c_{k}]$\\textsl{\\ so that}\n\n\\begin{quote}\n\\textsl{i)} $B_{b^{\\prime }}^{\\ast }(c_{r})=c_{r},1\\leq r\\leq k$;\n\n\\textsl{ii) }$B_{a^{\\prime }}^{\\ast }(z)=c_{1},2c_{1}$\\textsl{\\ or }$c_{r}\n\\textsl{\\ for }$z=y,c_{1}$\\textsl{\\ or }$c_{r}$\\textsl{\\ with }$r\\geq 2\n\\textsl{;}\n\n\\textsl{iii) }$B_{\\lambda ^{c}}^{\\ast }(q_{0})=2y-c_{1}$\\textsl{.}$\\square $\n\\end{quote}\n\nFor each ordered sequence $\\lambda =(\\lambda _{1},\\cdots ,\\lambda _{k})$ of \nk$ non-negative integers define\n\n\\begin{quote}\n$c_{\\lambda }:=c_{1}^{\\lambda _{1}}\\cdots c_{k}^{\\lambda _{k}}\\in H^{\\ast\n}(B_{U^{c}(k)})$.\n\\end{quote}\n\n\\noindent Since all the monomials $y^{r}c_{\\lambda }$, $r\\geq 0$, form an\nadditive basis of the cohomology $H^{\\ast }(B_{U^{c}(k)})$ every element \nu\\in H^{\\ast }(B_{U^{c}(k)})$ has the unique expansion\n\n\\begin{quote}\n$u=\\underset{(r,\\lambda )}{\\Sigma }u_{(r,\\lambda )}\\cdot y^{r}c_{\\lambda }$, \n$u_{(r,\\lambda )}\\in \\mathbb{Z}$.\n\\end{quote}\n\n\\noindent We may then introduce the operator $\\psi $ on the ring $H^{\\ast\n}(B_{U^{c}(k)})$ by\n\n\\begin{quote}\n$\\psi (u):=\\underset{(r,\\lambda )}{\\Sigma }\\rho _{2}(u_{(r,\\lambda )})\\cdot\ny^{2r}p_{\\lambda },$ $u\\in H^{\\ast }(B_{U^{c}(k)})$,\n\\end{quote}\n\n\\noindent where $p_{\\lambda }=p_{1}^{\\lambda _{1}}\\cdots p_{k}^{\\lambda\n_{k}} $ with $p_{i}$ the polynomials in the Chern classes $c_{r}$ given by\n(7.3). Alternatively,\n\n\\bigskip\n\n\\noindent \\textbf{Corollary 7.3.} \\textsl{The operator }$\\psi $\\textsl{\\ is\ncharacterized uniquely by the following algorithmic properties, where }\nu,v\\in H^{\\ast }(B_{U^{c}(k)})$\\textsl{, }$b\\in \\mathbb{Z}$\\textsl{,}\n\n\\begin{quote}\n\\textsl{i) }$\\psi (u+b\\cdot y^{r}c_{\\lambda })=\\psi (u)+\\rho _{2}(b)\\cdot\ny^{2r}p_{\\lambda }$\\textsl{\\ if }$\\rho _{2}(u_{(r,\\lambda )})=0;$\n\n\\textsl{ii) }$\\psi (u\\cup v)=\\psi (u)\\cup \\psi (v)$\\textsl{.}$\\square $\n\\end{quote}\n\nIt follows from $p_{\\lambda }\\equiv c_{\\lambda }^{2}\\func{mod}2$ by (7.3)\nthat the operator $\\psi $ satisfies also the relation $\\psi (u)\\equiv u^{2\n\\func{mod}2$, which implies that there exists a unique operator $\\delta $ on\nthe ring $H^{\\ast }(B_{U^{c}(k)})$ that is related to $\\psi $ by the formula\n\n\\begin{quote}\n$\\psi (u)=u^{2}+2\\delta (u).$\n\\end{quote}\n\n\\noindent \\textbf{Definition 7.4. }For a polynomial $u\\in\nH^{2n}(B_{U^{c}(k)})$ the sequence $\\left\\{ u,\\delta (u),\\delta\n^{2}(u),\\cdots \\right\\} $ on $H^{\\ast }(B_{U^{c}(k)})$ defined inductively\nby $\\delta ^{r}(u)=\\delta (\\delta ^{r-1}(u)$ will be called \\textsl{the\nderived sequence} \\textsl{of} \\textsl{the initial polynomial} $u$.$\\square $\n\n\\bigskip\n\nSince the ring $H^{\\ast }(B_{U^{c}(k)})$ is torsion free we get\n\n\\bigskip\n\n\\noindent \\textbf{Corollary 7.5. 
}\textsl{For any }$u\in H^{\ast }(B_{U^{c}(k)})$\textsl{\ the derived sequence }$\left\{ u,\delta (u),\delta ^{2}(u),\cdots \right\} $\textsl{\ can be computed by }$\psi $ \textsl{via the following recurrence relations}

\begin{quote}
$\delta ^{0}(u)=u$\textsl{, }$2\delta ^{r+1}(u)+\delta ^{r}(u)^{2}=\psi (\delta ^{r}(u))$, $r\geq 0$.$\square $
\end{quote}

\bigskip

\noindent \textbf{Example 7.6. }Corollaries 7.3 and 7.5 indicate a direct and effective recurrence producing the sequence $\left\{ u,\delta (u),\delta ^{2}(u),\cdots \right\} $ from the initial term $u$. For example we take $u=2y-c_{1}\in H^{2}(B_{U^{c}(k)})$. Then

\begin{quote}
$\delta ^{1}(u)=-c_{2}+2yc_{1}-2y^{2}$;

$\delta ^{2}(u)=c_{4}-c_{1}c_{3}-2y^{2}c_{1}^{2}-2y^{4}+2yc_{1}c_{2}-2y^{2}c_{2}+4y^{3}c_{1}$;

$\delta ^{3}(u)=c_{8}-c_{1}c_{7}+c_{2}c_{6}-c_{3}c_{5}+(c_{1}^{2}-2c_{2})(-c_{2}c_{4}+c_{1}c_{5}-c_{6})$

$\qquad -c_{2}c_{3}^{2}-c_{1}c_{3}c_{4}-2(-y^{2}c_{1}^{2}-y^{4}+yc_{1}c_{2}-y^{2}c_{2}+2y^{3}c_{1})^{2}$

$\qquad -2(c_{4}-c_{1}c_{3})(-y^{2}c_{1}^{2}-y^{4}+yc_{1}c_{2}-y^{2}c_{2}+2y^{3}c_{1})$, $\cdots $.
\end{quote}

\noindent In addition, it can be shown that

\begin{enumerate}
\item[(7.4)] $\psi (\delta ^{r}(u))\in \left\langle p_{2},\cdots ,p_{k}\right\rangle $, $r>1$.$\square $
\end{enumerate}

\textbf{7.3. The ring map }$B_{\lambda ^{c}}^{\ast }:H^{\ast }(B_{Spin^{c}(2k)})\rightarrow H^{\ast }(B_{U^{c}(k)})$\textbf{.} By the conventions in \textbf{7.1} and with respect to the presentation

\begin{quote}
$H^{\ast }(B_{Spin^{c}(2k)})=\pi ^{\ast }H^{\ast }(B_{SO(2k)})\otimes \mathbb{Z}[q_{0},q_{1},\cdots ,q_{h(2k)-1},\theta _{2k}]/R_{2k}$
\end{quote}

\noindent given by Theorem D, partial information on the ring map $B_{\lambda ^{c}}^{\ast }$ is already known. Precisely, we have

\begin{quote}
$B_{\lambda ^{c}}^{\ast }(e_{2k})=c_{k}$, $\ B_{\lambda ^{c}}^{\ast }(p_{r})=p_{r}$, $B_{\lambda ^{c}}^{\ast }(\tau (B_{Spin^{c}(2k)}))=0$,
\end{quote}

\noindent where the third relation follows from the fact that the target ring $H^{\ast }(B_{U^{c}(k)})$ is torsion free. In addition, since $\theta _{n}\in H^{\ast }(B_{Spin^{c}(n)})$ is the Euler class of the complex spin presentation of the group $Spin^{c}(n)$, the polynomials $B_{\lambda ^{c}}^{\ast }(\theta _{n})$ can be computed in representation theory \cite{ABS}.
As examples we have\n\n\\begin{quote}\n$B_{\\lambda ^{c}}^{\\ast }(\\theta _{4})=y^{2}-yc_{1};$\n\n$B_{\\lambda ^{c}}^{\\ast }(\\theta\n_{6})=y^{4}-2y^{3}c_{1}+y^{2}(c_{2}+c_{1}^{2})-y(c_{1}c_{2}-c_{3})$;\n\n$B_{\\lambda ^{c}}^{\\ast }(\\theta\n_{8})=y^{8}-4y^{7}c_{1}+y^{6}(2c_{2}+6c_{1}^{2})-y^{5}(6c_{3}+4c_{1}^{3}+6c_{1}c_{2}) \n$\n\n$\\\n+y^{4}(c_{1}^{4}+c_{2}^{2}+6c_{1}^{2}c_{2}+c_{1}c_{3}-4c_{4})-y^{3}(2c_{1}c_{2}^{2}-8c_{1}c_{4}+2c_{1}^{2}c_{3}+2c_{1}^{3}c_{2}) \n$\n\n\\ \\ \n+y^{2}(c_{1}^{3}c_{3}-22c_{2}c_{4}+c_{1}c_{2}c_{3}-c_{3}^{2}-5c_{1}^{2}c_{4}+\\allowbreak c_{1}^{2}c_{2}^{2})-y(c_{1}^{2}c_{2}c_{3}-c_{1}^{3}c_{4}-c_{1}c_{3}^{2}) \n$.\n\\end{quote}\n\n\\noindent Summarizing, the determination of the ring map $B_{\\lambda\n^{c}}^{\\ast }$ is reduced to express the sequence $\\{B_{\\lambda ^{c}}^{\\ast\n}(q_{r}),0\\leq r\\leq h(n)-1\\}$ as explicit polynomials in $H^{\\ast\n}(B_{U^{c}(k)})$.\n\n\\bigskip\n\n\\noindent \\textbf{Theorem 7.7.} \\textsl{The sequence }$\\{B_{\\lambda\n^{c}}^{\\ast }(q_{r}),0\\leq r\\leq h(n)-1\\}$\\textsl{\\ is the first }$h(n)$ \n\\textsl{terms of the} \\textsl{derived sequence }$\\left\\{ u,\\delta (u),\\delta\n^{2}(u),\\cdots \\right\\} $ \\textsl{of} $u=c_{1}-2y$\\textsl{\\ (see in Example\n7.6).}\n\n\\bigskip\n\n\\noindent \\textbf{Proof.} The proof serves also the purpose to bring a\npassage from the operators $\\{f,\\gamma \\}$ on $H^{\\ast }(B_{SO(2k)};\\mathbb{\n}_{2})$ (Theorem B) to the ones $\\{\\psi ,\\delta \\}$ on $H^{\\ast\n}(B_{U^{c}(k)})$ via the map $B_{\\lambda ^{c}}^{\\ast }$. In view of the\npresentation\n\n\\begin{quote}\n$H^{\\ast }(B_{U^{c}(k)};\\mathbb{Z}_{2})=\\mathbb{Z}_{2}[\\rho _{2}(y),\\rho\n_{2}(c_{1}),\\cdots ,\\rho _{2}(c_{k})]$\n\\end{quote}\n\n\\noindent by Lemma 7.2 define the operator $R:H^{2k}(B_{U^{c}(k)};\\mathbb{Z\n_{2})\\rightarrow H^{4k}(B_{U^{c}(k)})$ by\n\n\\begin{enumerate}\n\\item[(7.5)] $R(u):=\\left\\{ \n\\begin{tabular}{l}\n$y^{2r}p_{\\lambda }$ if $u=\\rho _{2}(y^{r}c_{\\lambda })$ is a monomial; \\\\ \n$R(u_{1})+\\cdots +R(u_{m})$ if $u=$ $u_{1}+\\cdots +u_{m}\n\\end{tabular\n\\right. $\n\\end{enumerate}\n\n\\noindent where the $u_{i}$'s are distinct monomials in the $\\rho\n_{2}(y),\\rho _{2}(c_{1}),\\cdots ,\\rho _{2}(c_{k})$. Then, in addition to the\nobvious factorization\n\n\\begin{enumerate}\n\\item[(7.6)] $\\psi =R\\circ \\rho _{2}$,\n\\end{enumerate}\n\n\\noindent comparison with the formula (2.4) of the operator $f$ tells that\n\n\\begin{enumerate}\n\\item[(7.7)] $R\\circ B_{\\lambda ^{c}}^{\\ast }\\circ \\pi ^{\\ast }=B_{\\lambda\n^{c}}^{\\ast }\\circ \\pi ^{\\ast }\\circ f$ (where $\\pi ^{\\ast }=B_{b}^{\\ast }$).\n\\end{enumerate}\n\n\\noindent On the other hand, by the second diagram in (7.2), applying the\nring map $B_{\\lambda ^{c}}^{\\ast }$ to the relation ii) and iii) of Theorem\nC gives rise to\n\n\\begin{enumerate}\n\\item[(7.8)] $\\rho _{2}\\circ B_{\\lambda ^{c}}^{\\ast }(q_{r})=B_{\\lambda\n^{c}}^{\\ast }\\circ \\pi ^{\\ast }(w_{2}^{(r)})$ on $H^{\\ast }(B_{U^{c}(k)}\n\\mathbb{Z}_{2})$, and\n\n\\item[(7.9)] $2B_{\\lambda ^{c}}^{\\ast }(q_{r+1})+B_{\\lambda ^{c}}^{\\ast\n}(q_{r})^{2}=B_{\\lambda ^{c}}^{\\ast }\\circ \\pi ^{\\ast }f(w_{2}^{(r)})$, \nr\\geq 0$,\\textsl{\\ }on $H^{\\ast }(B_{U^{c}(k)})$,\n\\end{enumerate}\n\n\\noindent respectively. 
Setting $u_{r+1}:=B_{\lambda ^{c}}^{\ast }(q_{r})$, these imply that

\begin{quote}
$2u_{r+1}+u_{r}^{2}=B_{\lambda ^{c}}^{\ast }\circ \pi ^{\ast }\circ f(w_{2}^{(r)})$ (by (7.9))

$=R\circ B_{\lambda ^{c}}^{\ast }\circ \pi ^{\ast }(w_{2}^{(r)})$ (by (7.7))

$=R\circ \rho _{2}(u_{r})$ (by (7.8))

$=\psi (u_{r})$ (by (7.6)).
\end{quote}

\noindent With $u_{1}=2y-c_{1}$ by iii) of Lemma 7.2 we obtain the result by Corollary 7.5.$\square $

\bigskip

\textbf{7.4. The ring map }$B_{\lambda }^{\ast }:H^{\ast }(B_{Spin(2k)})\rightarrow H^{\ast }(B_{U(k)})$\textbf{. }Carrying on the discussion from Example 7.6, let $\left\{ u,\delta (u),\delta ^{2}(u),\cdots \right\} $ be the derived sequence of the polynomial $u=2y-c_{1}$. As elements in the polynomial ring $H^{\ast }(B_{U^{c}(k)})$ we can write for each $r\geq 1$ that

\begin{quote}
$\delta ^{r}(u)=u^{(r)}(y,c_{1},c_{2},\cdots ,c_{k})$.
\end{quote}

\noindent Applying the ring map $B_{a^{\prime }}^{\ast }$ to the polynomials $p_{r},\delta ^{r}(u),\psi (\delta ^{r}(u))\in H^{\ast }(B_{U^{c}(k)})$ we obtain by ii) of Lemma 7.2 the following polynomials in $H^{\ast }(B_{U(k)})$:

\begin{enumerate}
\item[(7.10)] $g_{r}:=B_{a^{\prime }}^{\ast }(p_{r})=p_{r}+2(-1)^{r-1}c_{1}c_{2r-1}$ (see (7.3) for $p_{r}$);

\item[(7.11)] $\alpha _{r}:=B_{a^{\prime }}^{\ast }(\delta ^{r}(u))=u^{(r)}(c_{1},2c_{1},c_{2},\cdots ,c_{k})$;

\item[(7.12)] $f_{r}:=B_{a^{\prime }}^{\ast }(\psi (\delta ^{r}(u)))=u^{(r)}(g_{1},2g_{1},g_{2},\cdots ,g_{k})$.
\end{enumerate}

\noindent On the other hand, according to Theorem D' and by the conventions in 7.1, the ring $H^{\ast }(B_{Spin(2k)})$ is generated multiplicatively by the integral classes

\begin{quote}
$\{p_{r},$ $1\leq r\leq k-1\}$; $\{\overline{q}_{r},1\leq r\leq h(n)-1\}$; $e_{2k}$, $\overline{\theta }_{n}$,
\end{quote}

\noindent together with the ideal $\tau (B_{Spin^{c}(n)})$. Thus, from Theorem 7.7, the relation $\overline{q}_{r}=\psi ^{\ast }(q_{r})$ given by Theorem C', and the commutativity of the first diagram in (7.2), we obtain the following result.

\bigskip

\noindent \textbf{Theorem 7.8.} \textsl{The map }$B_{\lambda }^{\ast }:H^{\ast }(B_{Spin(2k)})\rightarrow H^{\ast }(B_{U(k)})$\textsl{\ is determined by}

\begin{quote}
\textsl{i)} $B_{\lambda }^{\ast }(p_{r})=g_{r},$ $1\leq r\leq k-1$;

\textsl{ii)} $B_{\lambda }^{\ast }(\overline{q}_{r})=\alpha _{r},1\leq r\leq h(n)-1$,
\end{quote}

\noindent \textsl{together with }$B_{\lambda }^{\ast }(e_{2k})=c_{k}$ \textsl{and} $B_{\lambda }^{\ast }(\tau (B_{Spin^{c}(n)}))=0$\textsl{.}$\square $

\bigskip

\noindent \textbf{Example 7.9. }We shall show in Section \S 8 that the polynomials $\{\alpha _{r},1\leq r\leq h(n)-1\}$ (as well as $\{p_{r},1\leq r\leq k-1\}$) in the Chern classes are integral Weyl invariants of the group $Spin(n)$. We emphasize at this stage that these polynomials can be effectively produced by the simple algorithm indicated by Corollary 7.5, together with the formula (7.11).
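To make the first step of this algorithm explicit, here is a brief check using only the definition of $\psi $ in \textbf{7.2}, the expression $p_{1}=c_{1}^{2}-2c_{2}$ from (7.3), and the recurrence of Corollary 7.5: since $\rho _{2}(2)=0$ and $\rho _{2}(-1)=1$ one has $\psi (2y-c_{1})=p_{1}$, whence

\begin{quote}
$\delta ^{1}(u)=\frac{1}{2}(\psi (u)-u^{2})=\frac{1}{2}(c_{1}^{2}-2c_{2}-(2y-c_{1})^{2})=-c_{2}+2yc_{1}-2y^{2}$,
\end{quote}

\noindent which recovers the first term of the derived sequence in Example 7.6; the substitution $y\mapsto c_{1}$, $c_{1}\mapsto 2c_{1}$ prescribed by (7.11) then converts it into the invariant $\alpha _{1}$.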
As examples, combining\nresults in Example 7.6 with formula (7.11) we obtain that\n\n\\begin{quote}\n$\\alpha _{1}=-c_{2}+2c_{1}^{2}$;\n\n$\\alpha _{2}=c_{4}-2c_{1}c_{3}+2c_{1}^{2}c_{2}-2c_{1}^{4}$;\n\n$\\alpha\n_{3}=-c_{8}+2c_{1}c_{7}-c_{2}c_{6}+c_{3}c_{5}-(2c_{1}^{2}-c_{2})(c_{3}^{2}-2c_{2}c_{4}+4c_{1}c_{5}-2c_{6}) \n$\n\n$\\qquad\n-2c_{4}(c_{1}c_{3}-c_{1}^{2}c_{2}+c_{1}^{4})+2(c_{1}c_{3}-c_{1}^{2}c_{2}+c_{1}^{4})^{2},\\cdots \n${\\small .}$\\square $\n\\end{quote}\n\n\\bigskip\n\n\\textbf{7.5.} We conclude this section with the following result which has\nplayed a role in showing Theorem D'.\n\n\\bigskip\n\n\\noindent \\textbf{Lemma 7.10.} \\textsl{The constant }$\\kappa $\\textsl{\\ in\nthe formula (6.9) is }$2(-1)^{k(n)-1}$\\textsl{.}\n\n\\bigskip\n\n\\noindent \\textbf{Proof.} Let $D$ be the ideal on $H^{\\ast }(B_{U(k)})\n\\mathbb{Z}[c_{1},\\cdots ,c_{k}]$ generated by the $c_{r}$ with $r\\geq 2$,\nand consider the ring map $e:H^{\\ast }(B_{U(k)})\\rightarrow \\mathbb{Z}$\ndefined by\n\n\\begin{quote}\n$e(c_{1})=1$, $e(c_{r})=0$, $2\\leq r\\leq k$.\n\\end{quote}\n\n\\noindent That is, for every $u\\in H^{2k}(B_{U(k)})$,\n\n\\begin{quote}\ni) $u=e(u)\\cdot c_{1}^{k}+h(u)$ with $h(u)\\in D$;\n\nii) $e(u)=0$ if and only if $u\\in D$.\n\\end{quote}\n\n\\noindent In particular, applying $e$ to the relations\n\n\\begin{quote}\n$2\\alpha _{r+1}+\\alpha _{r}^{2}=f_{r}(g_{1},\\cdots ,g_{k-1})$ (by Theorem D'\nand (7.4))\n\\end{quote}\n\n\\noindent on $H^{\\ast }(B_{U(k)})$ we get from $e(\\alpha _{1})=2$ (by\nExample 7.9) that\n\n\\begin{enumerate}\n\\item[(7.13)] $e(\\alpha _{r})=2(-1)^{r-1}$.\n\\end{enumerate}\n\nOn the other hand, since $B_{\\lambda }^{\\ast }(\\overline{\\theta }_{n})$ is\nthe Euler class of the induced bundle $B_{\\lambda _{n}}^{\\ast }\\eta _{n}$ on \n$B_{U(k)}$ we have\n\n\\begin{quote}\n$e(B_{\\lambda }^{\\ast }(\\overline{\\theta }_{n}))=1$ (i.e. $B_{\\lambda\n}^{\\ast }(\\overline{\\theta }_{n})=c_{1}^{2^{k(n)}}+\\alpha $ for some $\\alpha\n\\in D$).\n\\end{quote}\n\n\\noindent Thus, applying the ring map $e\\circ B_{\\lambda }^{\\ast }$ to the\nequation (6.8) and noting that $B_{\\lambda }^{\\ast }(\\varphi (\\overline{b\n_{n}))\\in D$ (since $\\overline{b}_{n}\\in A_{1}$), we obtain \n2(-1)^{k(n)-1}=\\kappa $ as required.$\\square $\n\n\\bigskip\n\n\\noindent \\textbf{Remark 7.10. }For a $CW$ complex $X$ the maps $B_{\\lambda\n_{0}}$, $B_{\\lambda }$ and $B_{\\lambda ^{c}}$ induce, respectively, the\ncorrespondences between homotopy sets\n\n\\begin{quote}\n$B_{\\lambda _{0\\ast }}:[X,B_{U(k)}]\\rightarrow \\lbrack X,B_{SO(2k)}]$ by \nB_{\\lambda _{0}\\ast }[g]=[B_{\\lambda _{0}}\\circ g]$,\n\n$B_{\\lambda _{\\ast }}:[X,B_{U(k)}]\\rightarrow \\lbrack X,B_{Spin(2k)}]$ by \nB_{\\lambda _{\\ast }}[g]=[B_{\\lambda }\\circ g]$,\n\n$B_{\\lambda _{\\ast }^{c}}:[X,B_{U^{c}(k)}]\\rightarrow \\lbrack\nX,B_{Spin^{c}(2k)}]$ by $B_{\\lambda _{\\ast }^{c}}[g]=[B_{\\lambda ^{c}}\\circ\ng]$,\n\\end{quote}\n\n\\noindent in which $B_{\\lambda _{0\\ast }}$ is well known to be \\textsl{the\nreal reduction} on the complex bundles \\cite[p.155]{MS}. Likewise, the maps \nB_{\\lambda _{\\ast }}$ and $B_{\\lambda _{\\ast }^{c}}$ can be regarded as the \n\\textsl{spin} and the \\textsl{spin}$^{c}$\\textsl{\\ reduction} of complex\nbundles, respectively. 
In this connection, the formulae in Theorem 7.8 express the Spin characteristic classes (see \S 9) of the spin reduction of a complex vector bundle in terms of its Chern classes.

The map $B_{\lambda }$ fits also into the fibration

\begin{quote}
$Spin(2k)/U(k)\hookrightarrow B_{U(k)}\rightarrow B_{Spin(2k)}$,
\end{quote}

\noindent which is also of geometric significance \cite{D,D1}: the fiber manifold $Spin(2k)/U(k)$ serves as the Grassmannian of complex structures on the $2k$--dimensional Euclidean space $\mathbb{R}^{2k}$, and can be identified with the classifying space of the complex $k$--bundles whose real (resp. $Spin$) reductions are trivial.$\square $

\section{The Weyl invariants of the groups $Spin(n)$}

Let $G$ be a compact connected Lie group with a maximal torus $T$, and the Weyl group $W=N_{G}(T)/T$. The canonical $W$ action on $T$ induces an action on the integral cohomology $H^{\ast }(B_{T})$. Denote by $H^{\ast }(B_{T})^{W}$ the subring consisting of all the $W$ invariants, and let $B_{t}:B_{T}\rightarrow B_{G}$ be induced by the inclusion $t:T\rightarrow G$. A classical result of Borel \cite{B2} states that

\bigskip

\noindent \textbf{Lemma 8.1.} \textsl{The ring map }$B_{t}^{\ast }:H^{\ast }(B_{G})\rightarrow H^{\ast }(B_{T})$\textsl{\ annihilates the torsion ideal }$\tau (B_{G})$\textsl{, and induces an injection }$H^{\ast }(B_{G})/\tau (B_{G})\rightarrow H^{\ast }(B_{T})^{W}$\textsl{.}$\square $

\bigskip

The fundamental problem of the invariant theory of Weyl groups is to present the ring $H^{\ast }(B_{T})^{W}$ by explicit generators and relations \cite{Gu,We}. A closely related problem in topology is to determine the subring $\func{Im}B_{t}^{\ast }\subseteq H^{\ast }(B_{T})^{W}$. For the Weyl group $W$ of a semi--simple Lie group $G$ Chevalley has shown that

\begin{quote}
$H^{\ast }(B_{T})^{W}\otimes \mathbb{Z}_{0}=H^{\ast }(B_{G})\otimes \mathbb{Z}_{0}=\mathbb{Z}_{0}[P_{1},\cdots ,P_{n}]$, $n=\dim T$,
\end{quote}

\noindent where \textsl{the basic (rational) polynomial invariants }$\left\{ P_{1},\cdots ,P_{n}\in H^{\ast }(B_{T})^{W}\otimes \mathbb{Z}_{0}\right\} $ have been made explicit by Mehta \cite{Me}. In addition, for a prime $p$ methods to calculate the algebras $H^{\ast }(B_{T})^{W}\otimes \mathbb{Z}_{p}$ have been developed from the perspectives of algebraic topology \cite{KM,Sm}, combinatorics \cite{St}, and algorithms \cite{Bt}. However, apart from Borel's classical result \cite[\S 3. Examples]{Fe1}

\begin{quote}
$\func{Im}B_{t}^{\ast }=H^{\ast }(B_{T})^{W}=H^{\ast }(B_{G})$ for $G=SU(n)$ or $Sp(n)$,
\end{quote}

\noindent complete information on the ring $H^{\ast }(B_{T})^{W}$ is not known for most other semi--simple Lie groups. In particular, the Weyl group of $G=Spin(n)$ is of type $B_{k}$ ($n=2k+1$) or $D_{k}$ ($n=2k$), and partial information has been obtained by Borel, Feshbach, Totaro, Benson and Wood \cite{B2,BW,Fe1,T}.

For $G=Spin(n)$ our approach to $H^{\ast }(B_{T})^{W}$ begins with the subring $\func{Im}B_{t}^{\ast }$. For an integer $n>6$ (see Remark 1.1) we set $k=\left[ \frac{n}{2}\right] $, and let $h$ be the inclusion of the diagonal subgroup $T=U(1)\times \cdots \times U(1)$ ($k$ copies) into $U(k)$.
Then a convenient maximal torus on $Spin(n)$ is\n\n\\begin{quote}\n$t=\\lambda \\circ h:T\\rightarrow U(k)\\rightarrow Spin(2k)\\subseteq Spin(n)$,\n\\end{quote}\n\n\\noindent where $\\lambda $ is the inclusion given in table (7.1).\nFurthermore, with respect to the canonical presentation\n\n\\begin{enumerate}\n\\item[(8.1)] $H^{\\ast }(B_{T})=\\mathbb{Z}[x_{1},\\cdots ,x_{k}]$, $\\deg\nx_{i}=2$,\n\\end{enumerate}\n\n\\noindent the ring map $B_{h}^{\\ast }:$ $H^{\\ast }(B_{U(k)})\\rightarrow\nH^{\\ast }(B_{T})$ is given by\n\n\\begin{quote}\n$B_{h}^{\\ast }(c_{r})=e_{r}(x_{1},\\cdots ,x_{k})$, $1\\leq r\\leq k$,\n\\end{quote}\n\n\\noindent where $e_{r}$ is the $r^{th}$ elementary symmetric function in the \n$x_{i}$'s. It follows that $B_{h}^{\\ast }$ carries $H^{\\ast }(B_{U(k)})$\\\nisomorphically onto the subring $Sym[x_{1},\\cdots ,x_{k}]$\\ of symmetric\nfunctions, while $B_{\\lambda }^{\\ast }$\\ induces an injection from the\nquotient ring $H^{\\ast }(B_{Spin(n)})\/\\tau (B_{Spin(n)})$ into $H^{\\ast\n}(B_{U(k)})$. Thus, applying the ring map $B_{t}^{\\ast }$ to the formula\n(6.10) of the ring $H^{\\ast }(B_{Spin(n)})$ we obtain by Theorem 7.8 the\nfollowing characterization of the subring $\\func{Im}B_{t}^{\\ast }\\subseteq\nH^{\\ast }(B_{T})^{W}$.\n\n\\bigskip\n\n\\noindent \\textbf{Theorem 8.2.} \\textsl{The subring }$\\func{Im}B_{t}^{\\ast }\n\\textsl{\\ has the presentations:}\n\n\\begin{enumerate}\n\\item[(8.2)] $\\func{Im}B_{t}^{\\ast }=\\left\\{ \n\\begin{tabular}{l}\n$\\mathbb{Z}[g_{1},\\cdots ,g_{k-1},c_{k},\\alpha _{1},\\cdots ,\\alpha\n_{k(n)-1},B_{t}^{\\ast }(\\overline{\\theta }_{n})]\/D_{n}$ \\textsl{if} $n=2k$;\n\\\\ \n$\\mathbb{Z}[g_{1},\\cdots ,g_{k},\\alpha _{1},\\cdots ,\\alpha\n_{k(n)-1},B_{t}^{\\ast }(\\overline{\\theta }_{n})]\/D_{n}$ \\textsl{if} $n=2k+1$\n\\end{tabular\n\\right. $\n\\end{enumerate}\n\n\\noindent \\textsl{where }$D_{n}$\\textsl{\\ denotes the ideal generated by the\nfollowing relations}\n\n\\begin{quote}\n\\textsl{i)} $2\\alpha _{1}-g_{1}$\\textsl{,} $2\\alpha _{r+1}+\\alpha\n_{r}^{2}-f_{r}(g_{1},\\cdots ,g_{k})$\\textsl{,} $1\\leq r\\leq k(n)-2$\\textsl{,}\n\n\\textsl{ii)} $(-1)^{k(n)-1}\\cdot 4\\cdot B_{t}^{\\ast }(\\overline{\\theta \n_{n})+\\alpha _{k(n)-1}^{2}-\\beta _{n}$\\textsl{,}\n\\end{quote}\n\n\\noindent \\textsl{where }$\\beta _{n}:=B_{t}^{\\ast }(\\overline{b}_{n})\n\\textsl{,} \\textsl{and where the invariants }$g_{r}$\\textsl{,} $\\alpha _{r}$ \n\\textsl{and} $f_{r}(g_{1},\\cdots ,g_{k})$ \\textsl{are given by formulae\n(7.10), (7.11) and (7.12), respectively.}$\\square $\n\n\\bigskip\n\nIn term of (8.2) define $\\func{Im}\\overline{B}_{t}^{\\ast }\\subset $\\ $\\func\nIm}B_{t}^{\\ast }$ to be the subring generated by\n\n\\begin{quote}\n$g_{2},\\cdots ,g_{\\left[ \\frac{n-1}{2}\\right] },\\alpha _{1},\\cdots ,$ \n\\alpha _{k(n)-2}$,\\ and $c_{k}$\\ if $n=2k$.\n\\end{quote}\n\n\\noindent Then, for the degree reason, the relation $2\\alpha\n_{k(n)-1}+\\alpha _{k(n)-2}^{2}=f_{k(n)-2}(g_{1},\\cdots ,g_{k})$ on $\\func{Im\nB_{t}^{\\ast }$ implies that\n\n\\bigskip\n\n\\noindent \\textbf{Corollary 8.3. 
}\textsl{The quotient group }$\func{Im}B_{t}^{2^{k(n)}}/\func{Im}\overline{B}_{t}^{2^{k(n)}}$\textsl{\ is isomorphic to }$\mathbb{Z}_{2}$\textsl{\ with generator }$\alpha _{k(n)-1}$\textsl{.}$\square $

\bigskip

For $G=Spin(n)$ the extension problem from $\func{Im}B_{t}^{\ast }$ to the ring $H^{\ast }(B_{T})^{W}$ was raised by Borel \cite[1954]{B2}, studied by Feshbach \cite[1981]{Fe1}, and has been solved by Benson and Wood in the remarkable work \cite[1995]{BW}. Bringing the relevant constructions and calculations of \cite{BW} into our context gives rise to the following results.

\bigskip

\noindent \textbf{Theorem 8.4 (Benson and Wood).} \textsl{Assume that }$G=Spin(n)$ \textsl{with} $n>6$\textsl{.}

\textsl{i) If }$n\neq 3,4,5\func{mod}8$\textsl{\ then }$\func{Im}B_{t}^{\ast }=H^{\ast }(B_{T})^{W}$\textsl{.}

\textsl{ii) If} $n=3,4,5\func{mod}8$\textsl{, then }$H^{\ast }(B_{T})^{W}$ \textsl{is generated by its subring }$\func{Im}B_{t}^{\ast }$\textsl{, together with an additional invariant }$\omega _{k(n)-1}\in H^{2^{k(n)}}(B_{T})^{W}$\textsl{\ that is related to the known invariants }$B_{t}^{\ast }(\overline{\theta }_{n})$ \textsl{and} $\alpha _{k(n)-1}$ \textsl{by the relations}

\begin{quote}
\textsl{a) }$\omega _{k(n)-1}^{2}=B_{t}^{\ast }(\overline{\theta }_{n})$\textsl{;}

\textsl{b) }$2\omega _{k(n)-1}-\alpha _{k(n)-1}=l_{n}$\textsl{, }$l_{n}\in \func{Im}\overline{B}_{t}^{2^{k(n)}}$\textsl{.}
\end{quote}

\noindent \textbf{Proof. }With respect to $n=2k+1$ or $n=2k$ consider the sequence of cohomology classes introduced in \cite[\S 4]{BW}

\begin{quote}
$\left\{ \eta _{j}\in H^{\ast }(B_{T})\text{, }1\leq j\leq k\right\} $ or $\left\{ \mu _{j}\in H^{\ast }(B_{T})\text{, }1\leq j\leq k\right\} $.
\end{quote}

\noindent Then by \cite[Table 2]{BW} we have

\begin{enumerate}
\item[(8.3)] $B_{t}^{\ast }(\overline{\theta }_{n})=\eta _{k(n)}$ or $\mu _{k(n)}$ if $n\equiv 3,5\func{mod}8$ or $n\equiv 4\func{mod}8$.
\end{enumerate}

\noindent Let us put

\begin{quote}
$\omega _{k(n)-1}:=\eta _{k(n)-1}$ if $n\equiv 3,5\func{mod}8$, or $\mu _{k(n)-1}$ if $n\equiv 4\func{mod}8$.
\end{quote}

\noindent Then Benson and Wood \cite[Proposition 4.1]{BW} have shown that

\begin{enumerate}
\item[(8.4)] $\omega _{k(n)-1}\in H^{\ast }(B_{T})^{W}$ and $\omega _{k(n)-1}^{2}=B_{t}^{\ast }(\overline{\theta }_{n})$.
\end{enumerate}

\noindent Now, apart from the relation b), all the statements of the theorem are verified by comparing \cite[Theorem 7.1]{BW} with \cite[Theorem 10.2]{BW}.

Consider the sequence $\left\{ q_{r},r\geq 1\right\} $ on $H^{\ast }(B_{T})^{W}$ constructed in the proof of \cite[Proposition 3.3]{BW}.
By \n\\cite[Corollary 7.2]{BW}\n\n\\begin{enumerate}\n\\item[(8.5)] $2\\omega _{k(n)-1}-q_{k(n)-1}=l_{n}$ for some $l_{n}\\in \\func{I\n}\\overline{B}_{t}^{2^{k(n)}}$.\n\\end{enumerate}\n\n\\noindent On the other hand, by ii) and iii) of \\cite[Theorem 10.2]{BW} the\nclass $q_{k(n)-1}$ generates also the quotient group\n\n\\begin{quote}\n$\\func{Im}B_{t}^{2^{k(n)}}\/\\func{Im}\\overline{B}_{t}^{2^{k(n)}}=\\mathbb{Z\n_{2}$.\n\\end{quote}\n\n\\noindent Comparing this with Corollary 8.3 we can replace in (8.5) the\nclass $q_{k(n)-1}$ by our $\\alpha _{k(n)-1}$ to obtain the desired relation\nb).$\\square $\n\n\\bigskip\n\nAssume that $n$\\textsl{\\ }$=3,4,5\\func{mod}8$. The relations a) and b) of\nTheorem 8.4 imply that the generators $B_{t}^{\\ast }(\\overline{\\theta }_{n})\n\\textsl{\\ }and\\textsl{\\ }$\\alpha _{k(n)-1}$ of $\\func{Im}B_{t}^{\\ast }$ can\nbe expressed as polynomials in the elements\n\n\\begin{quote}\n$\\omega _{k(n)-1}$, $g_{2},\\cdots ,g_{\\left[ \\frac{n-1}{2}\\right] },\\alpha\n_{1},\\cdots ,\\alpha _{k(n)-2}$, and $c_{k}$\\ if $n=2k$.\n\\end{quote}\n\n\\noindent In addition, combining the relation b) with the relation on $\\func\nIm}B_{t}^{\\ast }$\n\n\\begin{quote}\n$2\\alpha _{k(n)-1}+\\alpha _{k(n)-2}^{2}=f_{k(n)-2}(g_{1},\\cdots ,g_{k})$ (by\nTheorem 8.2)\n\\end{quote}\n\n\\noindent one gets\n\n\\begin{quote}\n$4\\omega _{k(n)-1}-\\alpha _{k(n)-2}^{2}=\\varepsilon _{n}$ for some \n\\varepsilon _{n}\\in \\func{Im}\\overline{B}_{t}^{\\ast }$.\n\\end{quote}\n\n\\noindent Thus, putting Theorems 8.2 and Theorem 8.4 together we obtain that\n\n\\bigskip\n\n\\noindent \\textbf{Theorem 8.5.} \\textsl{Assume that }$G=Spin(n)$ \\textsl{wit\n} $n>6$\\textsl{\\ (see Remark 1.1).} \\textsl{Then }\n\n\\begin{enumerate}\n\\item[(8.6)] $H^{\\ast }(B_{T})^{W}=\\left\\{ \n\\begin{tabular}{l}\n$\\func{Im}B_{t}^{\\ast }\\text{ \\textsl{if} }n\\QTR{sl}{\\ }\\neq 3,4,5\\func{mod}\n\\text{;}$ \\\\ \n$\\func{Im}\\overline{B}_{t}^{\\ast }\\otimes \\mathbb{Z}[\\omega\n_{k(n)-1}]\/\\left\\langle h_{n}\\right\\rangle \\text{ \\textsl{if} }n\\QTR{sl}{\\ \n\\equiv 3,4,5\\func{mod}8\\text{,}\n\\end{tabular\n\\right. $\n\\end{enumerate}\n\n\\noindent \\textsl{where }$h_{n}=4\\cdot \\omega _{k(n)-1}-\\alpha\n_{k(n)-2}^{2}-\\varepsilon _{n}$\\textsl{\\ with }$\\varepsilon _{n}\\in \\func{Im\n\\overline{B}_{t}^{\\ast }.\\square $\n\n\\bigskip\n\n\\noindent \\textbf{Remarks 8.6. }Theorems 8.2 and 8.5 present both of the\nrings $\\func{Im}B_{t}^{\\ast }$ and $H^{\\ast }(B_{T})^{W}$ by the explicit\ngenerators (e.g. (7.10) and (7.11))\n\n\\begin{quote}\n$g_{1},\\cdots ,g_{k-1},c_{k},\\alpha _{1},\\cdots ,\\alpha\n_{k(n)-1},B_{t}^{\\ast }(\\overline{\\theta }_{n})$\n\\end{quote}\n\n\\noindent together with the invariant $\\omega _{k(n)-1}$ given by Benson and\nWood \\cite[\\S 4]{BW}.\n\nIn \\cite[Theorem 7.1]{BW} Benson and Wood obtained a presentation of the\nring $H^{\\ast }(B_{T})^{W}$ without specifying the relations among their\ngenerators. 
In our approach the recurrence relations

\begin{quote}
$2\alpha _{1}-g_{1}$, $2\alpha _{r+1}+\alpha _{r}^{2}-f_{r}(g_{1},\cdots ,g_{k}),$ $1\leq r\leq k(n)-2$,
\end{quote}

\noindent originate from property ii) of Theorem C', and they are useful for producing the sequence $\{\alpha _{1},\alpha _{2},\cdots \}$ of invariants from the initial one $\alpha _{1}=-c_{2}+2c_{1}^{2}$, see Examples 7.6 and 7.9.$\square $

\section{The Spin characteristic classes}

For the groups $SO=\cup _{n=2}^{\infty }SO(n)$ and $Spin=\cup _{n=2}^{\infty }Spin(n)$ in the stable range we have by formulae (2.3) and (6.10) that

\begin{quote}
$H^{\ast }(B_{SO})=\mathbb{Z}[p_{1},p_{2},\cdots ]\oplus \tau (B_{SO})$ with $2\cdot \tau (B_{SO})=0$,

$H^{\ast }(B_{Spin})=\overline{\pi }^{\ast }H^{\ast }(B_{SO})\otimes \mathbb{Z}[\overline{q}_{1},\overline{q}_{2},\cdots ]/K_{\infty }$,
\end{quote}

\noindent respectively, where the Euler classes $\overline{\pi }^{\ast }(e_{n})$, $\overline{\theta }_{n}$ disappear at $n=\infty $. In view of these formulae we introduce the sequence $\left\{ Q_{k},\text{ }k\geq 1\right\} $, $\deg Q_{k}=4k$, of integral cohomology classes on $B_{Spin}$ by setting

\begin{quote}
$Q_{k}:=\left\{ 
\begin{tabular}{l}
$\overline{\pi }^{\ast }p_{k}$ if $k>1$ is not a power of $2$; \\ 
$\overline{q}_{r}$ if $k=2^{r}$, $r\geq 0$.
\end{tabular}
\right. $
\end{quote}

\noindent Then the relation ii) of Theorem C' implies the formulae

\begin{quote}
$2Q_{1}=\overline{\pi }^{\ast }p_{1}$, $2Q_{2^{r}}+Q_{2^{r-1}}^{2}=\overline{\pi }^{\ast }f(w_{2}^{(r)})$,
\end{quote}

\noindent in which, by Example 2.4 and by the formula (2.4) of $f$,

\begin{quote}
$f(w_{2}^{(r)})=p_{2^{r}}+p_{1}p_{2^{r}-2}+\cdots +p_{2^{r-1}-2}p_{2^{r-1}+2}+$ higher terms.
\end{quote}

\noindent This implies that every $2$--power Pontryagin class $\overline{\pi }^{\ast }p_{2^{r}}$ can be expressed as a polynomial in the $Q_{k}$'s, plus some torsion element.
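For instance, a short check of the first cases: by Example 2.4 one has $w_{2}^{(1)}=w_{4}$ and $w_{2}^{(2)}=w_{8}+w_{2}w_{6}$, so that $f(w_{2}^{(1)})=p_{2}$ and $f(w_{2}^{(2)})=p_{4}+p_{1}p_{3}$ by the rules (2.4), and the relations above give

\begin{quote}
$\overline{\pi }^{\ast }p_{1}=2Q_{1}$, $\quad \overline{\pi }^{\ast }p_{2}=2Q_{2}+Q_{1}^{2}$, $\quad \overline{\pi }^{\ast }p_{4}=2Q_{4}+Q_{2}^{2}-2Q_{1}Q_{3}$,
\end{quote}

\noindent anticipating the transition formulae (9.4) below.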
Summarizing, taking $n=\infty $ in Theorem D' we can eliminate

\begin{quote}
$\overline{\pi }^{\ast }(e_{n})$, $\overline{\theta }_{n}$, $\overline{\pi }^{\ast }p_{2^{r}}$ and $2Q_{2^{r}}+Q_{2^{r-1}}^{2}=\overline{\pi }^{\ast }f(w_{2}^{(r)})$
\end{quote}

\noindent from the sets of generators and relations to obtain the following result from Theorem C', as well as the formula (6.6) of $\tau (B_{Spin(n)})$.

\bigskip

\noindent \textbf{Theorem 9.1.} \textsl{The integral cohomology of }$B_{Spin}$\textsl{\ has the presentation}

\begin{enumerate}
\item[(9.1)] $H^{\ast }(B_{Spin})=\mathbb{Z}[Q_{1},Q_{2},Q_{3},\cdots ]\oplus \overline{\pi }^{\ast }\tau (B_{SO})$\textsl{, }$\deg Q_{k}=4k$\textsl{,}
\end{enumerate}

\noindent \textsl{where the generators }$Q_{k}$\textsl{\ are characterized uniquely by the following properties:}

\textsl{i) if }$k>1$\textsl{\ is not a power of }$2$\textsl{, then }$Q_{k}=\overline{\pi }^{\ast }p_{k}$\textsl{;}

\textsl{ii) if }$k=2^{r}$\textsl{\ with }$r\geq 0$ \textsl{then}

\begin{enumerate}
\item[(9.2)] $\rho _{2}(Q_{k})=\overline{\pi }^{\ast }(w_{2}^{(k+1)})$\textsl{,}

\item[(9.3)] $2Q_{1}=\overline{\pi }^{\ast }(p_{1})$\textsl{,} $2Q_{2k}+Q_{k}^{2}=\overline{\pi }^{\ast }f(w_{2}^{(k+1)})$\textsl{, }$k\geq 1$\textsl{.}
\end{enumerate}

\noindent \textsl{In particular, the cup product between the free part and the torsion ideal of the ring }$H^{\ast }(B_{Spin})$ \textsl{is given by}

\begin{center}
$Q_{k}\cup \overline{\pi }^{\ast }(\delta _{2}(x))=\left\{ 
\begin{tabular}{l}
$\overline{\pi }^{\ast }(\delta _{2}(x\cup w_{2k}^{2}))$ \textsl{if }$k>1$\textsl{\ is not a power of} $2$\textsl{;} \\ 
$\overline{\pi }^{\ast }(\delta _{2}(x\cup w_{2}^{(r+1)}))$ \textsl{if} $k=2^{r}$.$\square $
\end{tabular}
\right. $
\end{center}

In the formula (9.1) the torsion ideal $\overline{\pi }^{\ast }\tau (B_{SO})$ is fashioned from $\tau (B_{SO})$, hence contributes nothing essentially new. It is the generators $\{Q_{k},k\geq 1\}$ of the free part, together with their uniqueness property, that are precisely what is required to define characteristic classes for Spin vector bundles.

Let $\xi $ be an oriented real bundle over a connected $CW$--complex $X$, induced by a map $f_{\xi }$ from $X$ to $B_{SO}$, and suppose that $w_{2}(\xi )=0$ (i.e. $\xi $ is Spin). Then $f_{\xi }$ can be factored into a composition

\begin{quote}
$X\overset{g_{\xi }}{\rightarrow }B_{Spin}\overset{\overline{\pi }}{\rightarrow }B_{SO}$,
\end{quote}

\noindent where the map $g_{\xi }$ is unique up to homotopy.

\bigskip

\noindent \textbf{Definition 9.2.} The cohomology class $q_{k}(\xi ):=g_{\xi }^{\ast }(Q_{k})\in H^{4k}(X)$, $k\geq 1$, (resp. the sum $q(\xi ):=1+q_{1}(\xi )+q_{2}(\xi )+\cdots \in H^{\ast }(X)$) is called the $k^{th}$ \textsl{Spin characteristic class} (resp.
\textsl{the total Spin characteristic class}) of the bundle $\xi $.$\square $

\bigskip

The Spin characteristic classes so defined possess the naturality property.

\bigskip

\noindent \textbf{Corollary 9.3.} \textsl{For any map }$g:Y\rightarrow X$\textsl{\ between CW--complexes one has}

\begin{quote}
$q_{k}(g^{\ast }\xi )=g^{\ast }q_{k}(\xi )$\textsl{, }$k\geq 1$\textsl{.}
\end{quote}

\noindent \textsl{In particular, the generator }$Q_{2^{r}}$\textsl{\ of }$H^{\ast }(B_{Spin})$ \textsl{restricts to the generator }$\overline{q}_{r}$\textsl{\ of }$H^{\ast }(B_{Spin(n)})$\textsl{\ (see (6.10)) via the inclusion }$B_{Spin(n)}\subset B_{Spin}$\textsl{.}$\square $

\bigskip

\noindent \textbf{Example 9.4. }The relations (9.2) and (9.3) have non--trivial implications. Let $\xi $ be a spin vector bundle over a space $X$, $\dim \xi =n$, and let

\begin{quote}
$w(\xi )=1+w_{4}(\xi )+\cdots +w_{n}(\xi )\in H^{\ast }(X;\mathbb{Z}_{2})$ or

$p(\xi )=1+p_{1}(\xi )+p_{2}(\xi )+\cdots +p_{\left[ \frac{n-1}{2}\right] }(\xi )\in H^{\ast }(X)$
\end{quote}

\noindent be its total Stiefel--Whitney or Pontryagin classes, respectively. Then (9.2) yields

\begin{quote}
$\rho _{2}(q_{1}(\xi ))=w_{4}(\xi )$,

$\rho _{2}(q_{2}(\xi ))=w_{8}(\xi )$,

$\rho _{2}(q_{4}(\xi ))=w_{16}(\xi )+w_{4}(\xi )w_{12}(\xi )+w_{6}(\xi )w_{10}(\xi )+w_{4}(\xi )w_{6}^{2}(\xi )$, $\cdots $
\end{quote}

\noindent by Example 2.4. Since these polynomials admit integral lifts, applying $Sq^{1}$ to both sides yields the following universal relations among the Stiefel--Whitney classes of a spin bundle $\xi $:

\begin{quote}
$w_{5}(\xi )=0$, $w_{9}(\xi )=0$,

$w_{17}(\xi )+w_{4}(\xi )w_{13}(\xi )+w_{7}(\xi )w_{10}(\xi )+w_{6}(\xi )w_{11}(\xi )=0$, $\cdots $.
\end{quote}

Ignoring the elements of order $2$, the relation (9.3) allows us to express the Pontryagin classes $p_{i}(\xi )$ as polynomials in the Spin classes $q_{i}(\xi )$, such as

\begin{enumerate}
\item[(9.4)] $p_{1}=2q_{1}$, $p_{2}=2q_{2}+q_{1}^{2}$, $p_{3}=q_{3}$, $p_{4}=2q_{4}+q_{2}^{2}-2q_{1}q_{3}$, $\cdots $,
\end{enumerate}

\noindent by Example 2.4 and formula (2.4). In Section 10 these transition functions will be applied to simplify various formulae of spin manifolds.$\square $

\bigskip

\noindent \textbf{Remark 9.5.} The spinors were first discovered by E. Cartan in 1913 in his investigations of the representation theory of topological groups, and have subsequently found significant and wide applications to geometry and mathematical physics. However, a precise definition of spin structure was possible only after the notion of fiber bundle was introduced. Notably, Haefliger (1956) proved that the second Stiefel--Whitney class $w_{2}(M)$ is the only obstruction to the existence of a spin structure on an oriented Riemannian manifold $M$. This was soon extended by Borel and Hirzebruch (1958) to the case of vector bundles over CW--complexes.

The idea of \textsl{Spin characteristics} was initiated by Thomas.
He \\cite\nTheorem (1.2)]{Th} described the integral cohomology $H^{\\ast }(B_{Spin})$\nusing a squence $\\{Q_{j}\\}$ of generators, but that is subject to two\nsequences $\\{\\Phi _{j}\\}$ and $\\{\\Psi _{j}\\}$ of indeterminacies, where he\nasked also for axioms with geometric significance by which the uniqueness of\nsuch a sequence $\\{Q_{j}\\}$ can be secured.\n\nGranted with the two sequences $\\{w_{2},w_{2}^{(1)},\\cdots \\}$ and \n\\{f(w_{2}),f(w_{2}^{(1)}),\\cdots \\}$ of cohomology classes depending on the\nonly obstruction $w_{2}$ to the existence of spin structure, Theorem C\n(resp. Theorem C') amounts to an axiomatic characterization of our\ngenerators $\\left\\{ q_{r},r\\geq 0\\right\\} $ on $H^{\\ast }(B_{Spin^{c}(n)})$\n(resp. $\\left\\{ \\overline{q}_{r},r\\geq 0\\right\\} $ on $H^{\\ast\n}(B_{Spin(n)}) $). Not surprisingly, the Spin characteristic classes so\nobtained are better adapted with topics of spin geometry, see in Section \\S\n10.$\\square $\n\n\\section{Applications to spin geometry}\n\nFor a smooth manifold $M$ its total Pontryagin class (resp. total Stiefel\nWhitney class) is defined to be that of the tangent bundle $TM$ of $M$, and\nis denoted by\n\n\\begin{quote}\n$p(M):=1+p_{1}+\\cdots +p_{k}$, $k=\\left[ \\frac{n}{4}\\right] $\n\n(resp. $w(M):=1+w_{1}+\\cdots +w_{n}$, $n=\\dim M$).\n\\end{quote}\n\n\\noindent Similarly, if $M$\\ is spin (i.e. $w_{2}(M)=0$), then its total\nSpin characteristic class is also defined, and will be written as\n\n\\begin{quote}\n$q(M):=1+q_{1}+\\cdots +q_{k}$, $k=\\left[ \\frac{\\dim M}{4}\\right] $.\n\\end{quote}\n\nIn the topological approach to spin geometry, the Spin characteristic\nclasses can play roles that may not be replaced by the regular\ncharacteristic classes. In this section we provide such initial evidences.\nSubject to the main theme of this paper the examples and calculations will\nbe restricted to relatively lower dimensional cases.\n\n\\bigskip\n\n\\textbf{10.1. The tangent invariants of Spin manifolds.} According to C.T.C.\nWall \\cite[Theorem 5]{Wa1} for each integer $b$ there exists a unique $6\n--dimensional smooth manifold $\\mathbb{C}P_{b}^{3}$ homotopy equivalent to\nthe $3$--dimensional complex projective space $\\mathbb{C}P^{3}$, whose first\nPontryagin class is $p_{1}=4(1+6b)x^{2}$, where $x$ denotes a generator of \nH^{2}(\\mathbb{C}P_{b}^{3})=\\mathbb{Z}$. Since the total Stiefel--Whitney\nclass of a manifold is a homotopy invariant we must have $w(\\mathbb{C\nP_{b}^{3})=w(\\mathbb{C}P^{3})=1$. In particular, the manifold $\\mathbb{C\nP_{b}^{3}$ is spin with\n\n\\begin{quote}\n$q_{1}(\\mathbb{C}P_{b}^{3})=\\frac{1}{2}p_{1}(\\mathbb{C\nP_{b}^{3})=2(1+6b)x^{2}$ (by (9.4)).\n\\end{quote}\n\nLet $\\pi :M_{b}^{7}\\rightarrow \\mathbb{C}P_{b}^{3}$ be the oriented circle\nbundle on $\\mathbb{C}P_{b}^{3}$ with Euler class $e=4(1+6b)x$. From the\nGysin sequence of $\\pi $ \\cite[p.157]{MS} one sees that the groups \nH^{2r}(M_{b}^{7})$ is cyclic of order $4(1+6b)$ with generators $\\pi ^{\\ast\n}(x^{r})$, where $r=1,2,3$. In view of the decomposition $TM_{b}^{7}=\\pi\n^{\\ast }T\\mathbb{C}P_{b}^{3}\\oplus \\varepsilon $ on the tangent bundle of \nM_{b}^{7}$ we get by the naturality of the characteristic classes that\n\n\\begin{quote}\n$w(M_{b}^{7})=1$,\\textsl{\\ }$p(M_{b}^{7})=1$,\\textsl{\\ }but \nq_{1}(M_{b}^{7})=2(1+6b)\\pi ^{\\ast }(x^{2})\\neq 0$,\n\\end{quote}\n\n\\noindent where $\\varepsilon $ denotes the $1$--dimensional trivial bundle\non $M_{b}^{7}$. 
This shows that:

\bigskip

\noindent \textbf{Proposition 10.1.} \textsl{The family }$\{M_{b}^{7},b\in \mathbb{Z}\}$ \textsl{of} $7$\textsl{--dimensional smooth Spin manifolds satisfies }$w(M_{b}^{7})=1$\textsl{, }$p(M_{b}^{7})=1$\textsl{, but }$q(M_{b}^{7})\neq 1$\textsl{.}$\square $

\bigskip

\textbf{10.2. The integral lifts of Wu--classes.} For a sufficiently large $n$ let $v_{r}\in H^{r}(B_{Spin(n)};\mathbb{Z}_{2})$ be the $r^{th}$ Wu--class of the canonical real $n$--bundle on $B_{Spin(n)}$. It is well known that $v_{r}=0$ unless $r\equiv 0\func{mod}4$, and that all the classes $v_{4k}$ admit integral lifts (see \cite{ABP} or \cite[Lemma E1]{HS}). In \cite{HS} Hopkins and Singer constructed a stable exponential characteristic class $v_{t}^{Spin}$ with values in the integral cohomology $H^{\ast }(B_{Spin(n)})$, whose mod $2$--reduction is the total Wu--class. Rationally, in terms of Pontryagin classes, the first four terms are

\begin{quote}
$v_{4}^{Spin}=-\frac{1}{2}p_{1}$,

$v_{8}^{Spin}=\frac{1}{2^{3}}(20p_{2}-9p_{1}^{2})$,

$v_{12}^{Spin}=-\frac{1}{2^{4}}(80p_{3}+60p_{1}p_{2}-17p_{1}^{3})$,

$v_{16}^{Spin}=\frac{1}{2^{7}}(2^{6}\cdot 29p_{4}-2^{4}\cdot 33p_{2}^{2}+2^{3}\cdot 147p_{1}^{2}p_{2}-277p_{1}^{4})$.
\end{quote}

\noindent Substituting the Spin characteristic classes for the Pontryagin classes using the transition (9.4) shows that

\bigskip

\noindent \textbf{Proposition 10.2.} \textsl{In terms of the Spin characteristic classes, the first four Wu--classes }$v_{4k}$, $k=1,2,3$ \textsl{or} $4,$\textsl{\ admit the integral lifts:}

\begin{quote}
$\widetilde{v}_{4}=q_{1}$\textsl{, }

$\widetilde{v}_{8}=q_{2}$\textsl{, }

$\widetilde{v}_{12}=q_{3}+q_{1}q_{2}+q_{1}^{3}$\textsl{,}

$\widetilde{v}_{16}=q_{4}+q_{1}q_{3}+q_{1}^{2}q_{2}$\textsl{.}$\square $
\end{quote}
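As a consistency check (not a proof, since torsion and classes divisible by $2$ are ignored), the lifts of Proposition 10.2 can be compared with the rational formulae for $v^{Spin}$ quoted above: substituting (9.4) and reducing the resulting integral coefficients mod $2$ recovers $\widetilde{v}_{4},\cdots ,\widetilde{v}_{16}$. The following sympy sketch performs this comparison.

\begin{verbatim}
import sympy as sp

q1, q2, q3, q4 = sp.symbols('q1 q2 q3 q4')
# Transition (9.4): Pontryagin classes in the Spin classes (torsion ignored).
p1, p2, p3, p4 = 2*q1, 2*q2 + q1**2, q3, 2*q4 + q2**2 - 2*q1*q3

v_spin = {                                   # rational formulae quoted above
    4:  -p1/2,
    8:  (20*p2 - 9*p1**2)/2**3,
    12: -(80*p3 + 60*p1*p2 - 17*p1**3)/2**4,
    16: (2**6*29*p4 - 2**4*33*p2**2 + 2**3*147*p1**2*p2 - 277*p1**4)/2**7,
}
lifts = {4: q1, 8: q2, 12: q3 + q1*q2 + q1**3, 16: q4 + q1*q3 + q1**2*q2}

def mod2(expr):
    """Reduce an integral polynomial in the q_i mod 2 (keep odd coefficients)."""
    poly = sp.Poly(sp.expand(expr), q1, q2, q3, q4)
    assert all(c.is_integer for c in poly.coeffs())
    return sum((int(c) % 2) * sp.Mul(*[g**e for g, e in zip(poly.gens, mono)])
               for c, mono in zip(poly.coeffs(), poly.monoms()))

for deg in (4, 8, 12, 16):
    assert sp.expand(mod2(v_spin[deg]) - mod2(lifts[deg])) == 0
print("Proposition 10.2 is consistent with (9.4) modulo 2")
\end{verbatim}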
For possible applications of these formulae in geometry we refer to Wilson \cite{Wi}, Landweber and Stong \cite{LS}, where the following subtle relations are found for the spin manifolds of dimension $8k+2$:

\begin{enumerate}
\item[(10.1)] $Sq^{3}v_{4k}=0$, $w_{4}w_{8k-2}=v_{4k}Sq^{2}v_{4k}$.
\end{enumerate}

\textbf{10.3. The Eells--Kuiper invariant.} For a closed, smooth, oriented $(4k-1)$ manifold $M$ furnished with a spin coboundary $W$ that satisfies the so--called $\mu $--conditions, Eells and Kuiper \cite{EK} introduced a differential invariant $\mu _{k}(M)$ of $M$ in terms of Pontryagin numbers and the signature $\sigma $ of the coboundary $W$. For the small values $k=2,3,4$ the formulae of $\mu _{k}(M)$ read

\begin{quote}
$\mu _{2}(M)\equiv \frac{(p_{1}^{2}-4\sigma )[W]}{2^{7}\cdot 7}\func{mod}1$

$\mu _{3}(M)\equiv \frac{(4p_{1}p_{2}-3p_{1}^{3}-24\sigma )[W]}{2^{11}\cdot 3\cdot 31}\func{mod}1$

$\mu _{4}(M)\equiv \frac{(12096p_{1}p_{3}+5040p_{2}^{2}-22680p_{1}^{2}p_{2}+9639p_{1}^{4}-18144\sigma )[W]}{2^{15}\cdot 3^{4}\cdot 5\cdot 7\cdot 127}\func{mod}1$.
\end{quote}

\noindent Since the coboundary $W$ is spin, we can use the transition (9.4) to replace the Pontryagin classes by the Spin characteristic classes and obtain the following simpler expressions.

\bigskip

\noindent \textbf{Proposition 10.3.} \textsl{In terms of the Spin characteristic classes, the Eells--Kuiper invariants }$\mu _{k}(M)$\textsl{,} $k=2,3$ \textsl{or} $4,$ \textsl{have the following expressions}

\begin{quote}
$\mu _{2}(M)\equiv \frac{(q_{1}^{2}-\sigma )[W]}{2^{5}\cdot 7}\func{mod}1$

$\mu _{3}(M)\equiv \frac{(2(q_{1}q_{2}-q_{1}^{3})-3\sigma )[W]}{2^{7}\cdot (2^{5}-1)\cdot 3}\func{mod}1$

$\mu _{4}(M)\equiv \frac{(6q_{1}q_{3}+5q_{2}^{2}-40q_{1}^{2}q_{2}+17q_{1}^{4}-45\sigma )[W]}{2^{9}(2^{7}-1)}\func{mod}1$.$\square $
\end{quote}

\textbf{10.4. The Rokhlin type formula.} For a $4m$ dimensional oriented manifold $M$ denote by $\sigma _{M}$ the signature of the intersection form $I_{M}:H^{2m}(M)\rightarrow \mathbb{Z}$, $I_{M}(x)=\left\langle x^{2},[M]\right\rangle $, on the middle dimensional integral cohomology.

By the year 1950 no example of a topological manifold that is not smoothable was known. Nevertheless, Whitehead \cite[1949]{Wh} had constructed for each integral unimodular symmetric matrix $A$ a simply--connected $4$ dimensional topological manifold $M$ whose intersection form $I_{M}$ is precisely given by $A$. In contrast, the following result of Rokhlin \cite{Ro,Fr} singles out a severe gap between the topological and smooth categories, detectable by the elementary and topological invariant $\sigma _{M}$.

\bigskip

\noindent \textbf{Theorem 10.4 (Rokhlin, 1952).} \textsl{The signature }$\sigma _{M}$ \textsl{of a }$4$\textsl{--dimensional smooth spin manifold }$M$ \textsl{must be divisible by }$16$\textsl{.}$\square $

\bigskip

To extend Rokhlin's result to the higher dimensional settings we resort to the $\widehat{A}$ genus $\alpha _{m}$ and the $L$ genus $\tau _{m}$ of $4m$ dimensional smooth oriented manifolds $M^{4m}$, which are certain polynomials in the Pontryagin classes $p_{1},\cdots ,p_{m}$ of $M$ with homogeneous degree $4m$:

\begin{quote}
$\alpha _{m}=a_{m}\cdot p_{m}+l_{m}(p_{1},\cdots ,p_{m-1})$ and

$\tau _{m}=b_{m}\cdot p_{m}+k_{m}(p_{1},\cdots ,p_{m-1})$ \cite[\S 19]{MS},
\end{quote}

\noindent where $a_{m}$ and $b_{m}$ are certain non--zero rationals, and where $l_{m}$ and $k_{m}$ are certain polynomials in $p_{1},\cdots ,p_{m-1}$ with rational coefficients.
These allow us to eliminate the top degree Pontryagin class $p_{m}$ to obtain the following expression of $\tau _{m}$ without involving $p_{m}$:

\begin{enumerate}
\item[(10.2)] $\tau _{m}=\frac{b_{m}}{a_{m}}(\alpha _{m}-l_{m}(p_{1},\cdots ,p_{m-1}))+k_{m}(p_{1},\cdots ,p_{m-1})$.
\end{enumerate}

\noindent For the geometric implications of the polynomials $\alpha _{m}$ and $\tau _{m}$ we recall the following classical results.

\bigskip

\noindent \textbf{Theorem 10.5 (Hirzebruch }\cite{BH}\textbf{).} \textsl{The }$L$\textsl{--genus }$\tau _{m}$\textsl{\ of }$M^{4m}$\textsl{\ equals the signature }$\sigma _{M}$ \textsl{of the intersection form on }$H^{2m}(M^{4m})$\textsl{.}$\square $

\bigskip

\noindent \textbf{Theorem 10.6 (Borel--Hirzebruch \cite{BH}).} \textsl{The }$\widehat{A}$ \textsl{genus} $\alpha _{m}$\textsl{\ of a spin manifold }$M^{4m}$ \textsl{is an integer, and is an even integer when }$m$\textsl{\ is odd.}$\square $

\bigskip

\noindent \textbf{Theorem 10.7 (Gromov, Lawson and Stolz \cite{L,S0}).} \textsl{If }$M^{4m}$\textsl{\ is a simply connected spin manifold with }$m>1$\textsl{, then }$M^{4m}$\textsl{\ admits a metric with positive scalar curvature if and only if }$\alpha _{m}=0$\textsl{.}$\square $

\bigskip

Precisely, for $1\leq m\leq 4$ the polynomials $\alpha _{m}$ and $\tau _{m}$ are, respectively,

\begin{quote}
$\alpha _{1}=-\frac{1}{24}p_{1}$;

$\alpha _{2}=\frac{1}{2^{7}\cdot 3^{2}\cdot 5}(-4p_{2}+7p_{1}^{2});$

$\alpha _{3}=\frac{1}{2^{10}\cdot 3^{3}\cdot 5\cdot 7}(-16p_{3}+44p_{2}p_{1}-31p_{1}^{3});$

$\alpha _{4}=\frac{1}{2^{15}\cdot 5^{2}\cdot 3^{4}\cdot 7}(-192p_{4}+512\cdot p_{1}p_{3}+208p_{2}^{2}-904p_{1}^{2}p_{2}+381p_{1}^{4}),$
\end{quote}

\noindent and

\begin{quote}
$\tau _{1}=\frac{1}{3}p_{1}$;

$\tau _{2}=\frac{1}{3^{2}\cdot 5}(7p_{2}-p_{1}^{2})$;

$\tau _{3}=\frac{1}{3^{3}\cdot 5\cdot 7}(62p_{3}-13p_{2}p_{1}+2p_{1}^{3})$;

$\tau _{4}=\frac{1}{3^{4}\cdot 5^{2}\cdot 7}(381p_{4}-71\cdot p_{1}p_{3}-19p_{2}^{2}+22p_{1}^{2}p_{2}-3p_{1}^{4})$.
\end{quote}

\noindent Assume now that our manifold $M$ is spin (i.e. $w_{2}(M)=0$). Then the formulae in (9.4) are applicable to replace the Pontryagin classes $p_{1},\cdots ,p_{m-1}$ in (10.2), yielding the following simpler formulae of the signature $\sigma _{M}=\tau _{m}$ in the Spin characteristic classes.
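This elimination can be carried out mechanically. The following sympy sketch (an illustration, not part of the proof) performs it for $m=2$ and $m=3$, using the polynomials $\alpha _{m}$, $\tau _{m}$ listed above together with the transition (9.4); the output agrees with the corresponding formulae of Theorem 10.8 below.

\begin{verbatim}
import sympy as sp

p1, p2, p3, q1, q2, a2, a3 = sp.symbols('p1 p2 p3 q1 q2 alpha2 alpha3')

# m = 2: alpha_2 = (-4 p_2 + 7 p_1^2)/(2^7 3^2 5), tau_2 = (7 p_2 - p_1^2)/(3^2 5)
p2_of_a2 = sp.solve(sp.Eq(a2, (-4*p2 + 7*p1**2)/5760), p2)[0]
sigma2 = sp.expand(((7*p2 - p1**2)/45).subs(p2, p2_of_a2).subs(p1, 2*q1))
print(sigma2)   # q1**2 - 224*alpha2, where 224 = 2^5 (2^3 - 1)

# m = 3: alpha_3 = (-16 p_3 + 44 p_2 p_1 - 31 p_1^3)/(2^10 3^3 5 7),
#        tau_3   = (62 p_3 - 13 p_2 p_1 + 2 p_1^3)/(3^3 5 7)
p3_of_a3 = sp.solve(sp.Eq(a3, (-16*p3 + 44*p2*p1 - 31*p1**3)/967680), p3)[0]
sigma3 = sp.expand(((62*p3 - 13*p2*p1 + 2*p1**3)/945)
                   .subs(p3, p3_of_a3)
                   .subs({p1: 2*q1, p2: 2*q2 + q1**2}))
print(sigma3)   # 2*q1*q2/3 - 2*q1**3/3 - 3968*alpha3, where 3968 = 2^7 (2^5 - 1)
\end{verbatim}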
\bigskip

\noindent \textbf{Theorem 10.8.} \textsl{In accordance with }$m=1,2,3$\textsl{\ and }$4$\textsl{,\ the signature }$\sigma _{M}$\textsl{\ of a smooth spin manifold }$M^{4m}$ \textsl{is given, respectively, by}

\begin{quote}
$\sigma _{M}=-2^{3}\cdot \alpha _{1}$;

$\sigma _{M}=q_{1}^{2}-2^{5}\cdot (2^{3}-1)\cdot \alpha _{2}$;

$\sigma _{M}=\frac{2}{3}(q_{1}q_{2}-q_{1}^{3})-2^{7}\cdot (2^{5}-1)\cdot \alpha _{3}$;

$\sigma _{M}=\frac{2}{3\cdot 5}q_{1}q_{3}+\frac{1}{3^{2}}q_{2}^{2}-\frac{2^{3}}{3^{2}}q_{1}^{2}q_{2}+\frac{17}{3^{2}\cdot 5}q_{1}^{4}-2^{9}(2^{7}-1)\cdot \alpha _{4}$,
\end{quote}

\noindent \textsl{where }$\alpha _{1}\equiv \alpha _{3}\equiv 0\func{mod}2$\textsl{.}$\square $

\bigskip

\noindent \textbf{Example 10.9.} For a smooth spin manifold $M$ one has by Theorems 10.5, 10.6 and 10.8 that

\begin{quote}
i) if $\dim M=4$, then $\sigma _{M}\equiv 0\func{mod}2^{4}$;

ii) if $\dim M=8$, then $\sigma _{M}\equiv q_{1}^{2}\func{mod}2^{5}\cdot (2^{3}-1)$;

iii) if $\dim M=12$, then $\sigma _{M}\equiv \frac{2}{3}(q_{1}q_{2}-q_{1}^{3})\func{mod}2^{8}\cdot (2^{5}-1)$,
\end{quote}

\noindent where assertion i) is identical to Theorem 10.4. For this reason we may call the formulae in Theorem 10.8 \textsl{the Rokhlin type formulae} of spin manifolds.

Since the string group $String(n)$ is the $3$--connected cover of $Spin(n)$, a spin manifold is \textsl{string} \cite{S} if and only if its first spin characteristic class $q_{1}$ vanishes. In this case the second spin characteristic class $q_{2}$ has been shown to be divisible by $3$ \cite{LD}. Thus, for a smooth string manifold $M$ we have by Theorem 10.8 the following Rokhlin type formulae:

\begin{quote}
a) if $\dim M=8$, then $\sigma _{M}\equiv 0\func{mod}2^{5}\cdot (2^{3}-1)$;

b) if $\dim M=12$, then $\sigma _{M}\equiv 0\func{mod}2^{8}\cdot (2^{5}-1)$;

c) if $\dim M=16$, then $\sigma _{M}\equiv (\frac{1}{3}q_{2})^{2}\func{mod}2^{9}\cdot (2^{7}-1)$.
\end{quote}

\noindent In addition, by Theorem 10.7, if $M$ is simply connected, then $M$ admits a metric with positive scalar curvature if and only if

\begin{quote}
$\sigma _{M}=0,0$ or $(\frac{1}{3}q_{2})^{2}$ in accordance with $\dim M=8,12$ or $16$.$\square $
\end{quote}

\textbf{10.5. The existence of smooth structures on triangulable manifolds.} Without involving the top degree Pontryagin class $p_{m}$, the Rokhlin type formulae in Theorem 10.8 are ready to be applied to the existence problem of smooth structures on certain $4m$ dimensional triangulable manifolds. To provide such examples in the case $m=2$ we need the following notation.

\bigskip

\noindent \textbf{Definition 10.10.} For a unimodular symmetric integral matrix $A=(a_{ij})_{n\times n}$ of rank $n$, and a sequence $b=(b_{1},\cdots ,b_{n})$ of integers with length $n$, the pair $(A,b)$ is called a \textsl{Wall pair} if the following congruences are satisfied:

\begin{enumerate}
\item[(10.3)] $a_{ii}\equiv b_{i}\func{mod}2$, $1\leq i\leq n$.$\square $
\end{enumerate}

\bigskip

Let $D^{8}$ be the unit disk in the $8$ dimensional Euclidean space $\mathbb{R}^{8}$. The following result is due to C.T.C. Wall \cite{Wa}.
\bigskip

\noindent \textbf{Theorem 10.11.} \textsl{For each Wall pair }$(A,b)$\textsl{\ with }$A=(a_{ij})_{n\times n}$ \textsl{and }$b=(b_{1},\cdots ,b_{n})$\textsl{, there exists a closed }$8$\textsl{\ dimensional topological manifold }$M$\textsl{\ that satisfies the following properties:}

\textsl{i) }$M$\textsl{\ admits a decomposition }$M=W\cup _{h}D^{8}$\textsl{, where }$W$ \textsl{is a }$3$--\textsl{connected smooth manifold with boundary }$\partial W$ \textsl{a homotopy }$7$\textsl{--sphere, and where }$h:\partial W\rightarrow \partial D^{8}$ \textsl{is a homeomorphism;}

\textsl{ii) there is a basis }$\left\{ x_{1},\cdots ,x_{n}\right\} $\textsl{\ of }$H^{4}(M)$\textsl{\ so that }$x_{i}\cup x_{j}=a_{ij}\cdot \omega _{M}$\textsl{, where }$\omega _{M}\in H^{8}(M)$ \textsl{is an orientation class of }$M$\textsl{;}

\textsl{iii) the first Spin characteristic class }$q_{1}$\textsl{\ of }$M$\textsl{\ is well defined (by i)), and is determined by }$b$\textsl{\ as}

\begin{quote}
$\qquad q_{1}=b_{1}x_{1}+\cdots +b_{n}x_{n}\in H^{4}(M)$\textsl{.}
\end{quote}

\textsl{Furthermore, if }$(A^{\prime },b^{\prime })$\textsl{\ is a second Wall pair, then the associated manifold }$M^{\prime }$\textsl{\ is combinatorially homeomorphic to }$M$\textsl{\ (in the sense of \cite{Wa}) if and only if there exists an integer matrix }$P=(p_{ij})_{n\times n}$\textsl{\ so that}

\begin{quote}
$P^{\tau }AP=A^{\prime }$ \textsl{and} $bP=b^{\prime }$\textsl{,}
\end{quote}

\noindent \textsl{where }$P^{\tau }$\textsl{\ denotes the transpose of the matrix }$P$\textsl{.}$\square $

\bigskip

\noindent \textbf{Remark 10.12.} In \cite{Wa} Wall classified the combinatorial homeomorphism types of all the $(n-1)$ connected $2n$ dimensional manifolds $M$ that are smooth off one point $o\in M$. The result in Theorem 10.11 corresponds to the case $n=4$.

It is known that for a $4$--dimensional real vector bundle $\xi $ on the $4$ dimensional sphere $S^{4}$ the difference $2e(\xi )-p_{1}(\xi )$ (resp. $e(\xi )-q_{1}(\xi )$) is divisible by $4$ (resp. by $2$), where $e(\xi )$ is the Euler class of $\xi $ \cite[Lemma 20.10]{MS}. In Theorem 10.11 the necessity of the Wall condition (10.3) is governed by the following geometric fact. According to Haefliger \cite{Ha}, for the manifold $W$ in i) of Theorem 10.11, there exist $n$ smooth embeddings

\begin{quote}
$\iota _{i}:S^{4}\rightarrow W$, $1\leq i\leq n$,
\end{quote}

\noindent so that the Kronecker duals of the cycle classes $\iota _{i\ast }[S^{4}]\in H_{4}(M)$ form the basis $\left\{ x_{1},\cdots ,x_{n}\right\} $ of $H^{4}(M)$.
Then the matrix $A$ is the intersection form on $H^{4}(M)$ corresponding to the basis, while the normal bundle $\gamma _{i}$ of the embedding $\iota _{i}$ is related to the pair $(A,b)$ by the relations

\begin{quote}
$(e(\gamma _{i}),q_{1}(\gamma _{i}))=(a_{ii}\cdot \omega ,b_{i}\cdot \omega )$, $1\leq i\leq n$,
\end{quote}

\noindent where $\omega $ is the orientation class on $S^{4}$ that corresponds to $x_{i}$ via $\iota _{i}$.

For an $8$--dimensional manifold $M$ associated to a Wall pair $(A,b)$, properties ii) and iii) of Theorem 10.11 imply, respectively, that

\begin{enumerate}
\item[(10.4)] $\sigma _{M}=sign(A)$ and $q_{1}(M)^{2}=bAb^{\tau }$,
\end{enumerate}

\noindent where $b^{\tau }$ denotes the transpose of the row vector $b$.$\square $

\bigskip

Concerning the manifold $M$ associated to a Wall pair $(A,b)$, a natural question is whether there exists a smooth structure that extends the given one on $W$. For the special case $A=(1)_{1\times 1}$ this question has been studied by Milnor \cite{M}, Eells and Kuiper \cite[\S 6]{EK} in their calculation of the group $\Theta _{7}$ of homotopy $7$ spheres. We extend their calculations in the following results.

\bigskip

\noindent \textbf{Theorem 10.13. }\textsl{Let }$M^{8}$\textsl{\ be the manifold associated to a Wall pair }$(A,b)$\textsl{. There exists a smooth structure on }$M^{8}$\textsl{\ extending the one on }$W$\textsl{\ if and only if}

\begin{enumerate}
\item[(10.5)] $sign(A)\equiv bAb^{\tau }\func{mod}2^{5}\cdot (2^{3}-1)$\textsl{.}
\end{enumerate}

\bigskip

\noindent \textbf{Proof.} The necessity of (10.5) comes from ii) of Example 10.9. The sufficiency is verified by computing with the Eells--Kuiper $\mu $ invariant \cite[formula (11)]{EK} of the boundary $\partial W$, which by Proposition 10.3 reads

\begin{quote}
$\mu (\partial W)\equiv \frac{4bAb^{\tau }-4sign(A)}{2^{7}\cdot (2^{3}-1)}\equiv \frac{bAb^{\tau }-sign(A)}{2^{5}\cdot (2^{3}-1)}\func{mod}1$.$\square $
\end{quote}
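The criterion (10.5) is elementary to evaluate. The following sketch (an illustration only; the sample pairs are chosen ad hoc, cf. Example 10.17 below) checks the Wall condition (10.3), computes $sign(A)$ and $bAb^{\tau }$, and tests the congruence (10.5) numerically.

\begin{verbatim}
import numpy as np

def is_wall_pair(A, b):
    """Parity condition (10.3): a_ii = b_i mod 2."""
    return all((A[i, i] - b[i]) % 2 == 0 for i in range(len(b)))

def signature(A):
    """Signature of the symmetric integral matrix A."""
    eig = np.linalg.eigvalsh(A)
    return int(np.sum(eig > 0) - np.sum(eig < 0))

def smoothable(A, b):
    """Criterion (10.5): sign(A) = b A b^t  mod 2^5 (2^3 - 1) = 224."""
    assert is_wall_pair(A, b)
    return (signature(A) - b @ A @ b) % 224 == 0

I2 = np.eye(2, dtype=int)
H  = np.array([[0, 1], [1, 0]])           # hyperbolic pair, cf. Example 10.17

print(smoothable(I2, np.array([1, 1])))   # True:  2 = 1^2 + 1^2
print(smoothable(I2, np.array([1, 3])))   # False: 2 - 10 = -8 is not 0 mod 224
print(smoothable(H,  np.array([2, 0])))   # True:  0 = 8*k1*k2 with k2 = 0
\end{verbatim}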
Theorem 10.13 has several direct, but notable, consequences. A theorem of Kervaire states that there exists a $10$ dimensional manifold which does not admit any smooth structure \cite{K}. Eells and Kuiper provided further examples which have the same cohomology ring as that of the projective plane \cite{EK1}. Theorem 10.13 implies that

\bigskip

\noindent \textbf{Corollary 10.14.} \textsl{If }$(A,b)$\textsl{\ is a Wall pair so that}

\begin{quote}
$sign(A)\neq bAb^{\tau }\func{mod}2^{5}\cdot (2^{3}-1)$\textsl{,}
\end{quote}

\noindent \textsl{then the corresponding manifold} $M$ \textsl{does not admit any smooth structure.}$\square $

\bigskip

Conversely, for those $M$ which do admit smooth structures, their total Spin characteristic class $q(M)$ can be determined completely.

\bigskip

\noindent \textbf{Corollary} \textbf{10.15.} \textsl{If the manifold} $M$ \textsl{associated to a Wall pair }$(A,b)$ \textsl{admits a smooth structure, then its total Spin characteristic class is}

\begin{enumerate}
\item[(10.6)] $q(M)=1+(b_{1}x_{1}+\cdots +b_{n}x_{n})+\frac{3(15\cdot sign(A)-bAb^{\tau })}{2\cdot (2^{3}-1)}\cdot \omega _{M}$\textsl{,}
\end{enumerate}

\noindent \textsl{where }$2\cdot (2^{3}-1)$\textsl{\ divides }$15\cdot sign(A)-bAb^{\tau }$\textsl{\ by ii) of Example 10.9.}

\bigskip

\noindent \textbf{Proof.} With $\tau _{2}=\sigma _{M}=sign(A)$ and $p_{1}^{2}=4bAb^{\tau }$ the formula $\tau _{2}=\frac{7p_{2}-p_{1}^{2}}{3^{2}\cdot 5}$ implies that

\begin{quote}
$p_{2}=\frac{45\cdot sign(A)+4bAb^{\tau }}{7}\cdot \omega _{M}$.
\end{quote}

\noindent From $p_{2}=2q_{2}+q_{1}^{2}$ by (9.4) we get $q_{2}=\frac{3(15\cdot sign(A)-bAb^{\tau })}{2\cdot (2^{3}-1)}\cdot \omega _{M}$.$\square $

\bigskip

In view of the formula of $q_{2}$ in (10.6), Theorem 10.7 implies that

\bigskip

\noindent \textbf{Corollary 10.16. }\textsl{For a manifold} $M$ \textsl{associated to a Wall pair }$(A,b)$\textsl{\ the following statements are equivalent:}

\begin{quote}
\textsl{i) }$M$\textsl{\ is smoothable and has a metric with positive scalar curvature;}

\textsl{ii)} $sign(A)=bAb^{\tau }$\textsl{;}

\textsl{iii)} $q_{2}=3sign(A)\cdot \omega _{M}$\textsl{.}$\square $
\end{quote}

\bigskip

\noindent \textbf{Example 10.17. }Corollary 10.16 reduces the problem of finding all the $3$--connected $8$--dimensional smooth manifolds that have a metric with positive scalar curvature to the arithmetic problem of finding those Wall pairs $(A,b)$ satisfying the quadratic equation

\begin{enumerate}
\item[(10.7)] $sign(A)=bAb^{\tau }$.
\end{enumerate}

Consider a Wall pair $(A,b)$ with $A$ the identity matrix $I_{n}$ of rank $n$, and with $b=(2k_{1}+1,\cdots ,2k_{n}+1)$, $k_{i}\in \mathbb{Z}$ (see (10.3)). The equation (10.7) is

\begin{quote}
$n=(2k_{1}+1)^{2}+\cdots +(2k_{n}+1)^{2}$.
\end{quote}

\noindent It implies that the corresponding $M$ is smooth and admits a metric with positive scalar curvature, if and only if $M$ is combinatorially homeomorphic \cite{Wa} to $\mathbb{H}P^{2}\#\cdots \#\mathbb{H}P^{2}$, the connected sum of $n$ copies of the projective plane $\mathbb{H}P^{2}$.

Consider next a Wall pair $(A,b)$ with $A=\left( 
\begin{array}{cc}
0 & 1 \\ 
1 & 0
\end{array}
\right) $ and $b=(2k_{1},2k_{2})$, $k_{i}\in \mathbb{Z}$ (see (10.3)). The equation (10.7) becomes $k_{1}\cdot k_{2}=0$. It implies that the corresponding manifold $M$ is smooth and admits a metric with positive scalar curvature, if and only if $M$ is combinatorially homeomorphic to the sphere bundle $S(\xi )$ of a $5$ dimensional Euclidean bundle $\xi $ over $S^{4}$.
Such manifolds $S(\xi )$ are classified by the homotopy group $\pi _{3}(SO(5))$.

Concerning this topic, the general cases will be studied in the sequel work \cite{DL}.$\square $

\bigskip

\textbf{Acknowledgement.} The author would like to thank Fei Han and Ruizhi Huang for discussions on the integral lifts of Wu--classes \cite{HS,DHH} (see Section 10.2), and Yang Su for informing him of the recent questions on MathOverflow \cite{Web,Web1} about the integral cohomology of the classifying space $B_{Spin(n)}$.