\\section{Introduction}\n\\label{sec:introduction}\n\nThe motivic nearby fiber and the motivic vanishing cycles were\nintroduced by J.~Denef and F.~Loeser (see\n\\cite{denef-loeaser-motivic-igusa-zeta, denef-loeser-motivic,\n denef-loeser-arc-spaces, looijenga-motivic-measures}). Let $V\n\\colon X \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}$ \nbe a morphism of ${\\mathsf{k}}$-varieties where ${\\mathsf{k}}$ is an algebraically\nclosed field of characteristic zero and $X$ is smooth over ${\\mathsf{k}}$\nand connected.\nThe motivic nearby fiber $\\psi_{V,a}$ and the motivic vanishing\ncycles $\\phi_{V,a}$ of $V$\nat a point $a \\in {\\mathsf{k}}={\\mathbb{A}}^1_{\\mathsf{k}}({\\mathsf{k}})$ are elements of\na localization $\\mathcal{M}_{|X_a|}^{\\hat{\\upmu}}$ of the\nequivariant Grothendieck ring $K_0({\\op{Var}}_{|X_a|}^{\\hat{\\upmu}})$ \nof varieties over the reduced fiber $|X_a|$ of $V$ over $a$.\nWe refer the reader to the main body of this article for precise\ndefinitions. 
We will often view $\\psi_{V,a}$ and $\\phi_{V,a}$ as\nelements of $\\mathcal{M}_{{\\mathsf{k}}}^{\\hat{\\upmu}}$ in this introduction.\n\nThe motivic nearby fiber is additive on the Grothendieck group\n$K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}})$ of varieties over ${\\mathbb{A}}^1_{\\mathsf{k}}$, as shown\nby F.~Bittner \n\\cite{bittner-motivic-nearby} \nand by G.~Guibert, F.~Loeser and M.~Merle \n\\cite[Thm.~3.9]{guibert-loeser-merle-convolution}.\nNamely, for any $a \\in {\\mathsf{k}}$,\nthere is a map\n\\begin{equation*}\n K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}) \\rightarrow \\mathcal{M}_{\\mathsf{k}}^{\\hat{\\upmu}} \n\\end{equation*}\nof $K_0({\\op{Var}}_{\\mathsf{k}})$-modules which maps the class of a proper\nmorphism $V \\colon X \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}$ with $X$ as above to \nthe motivic nearby fiber $\\psi_{V,a}$.\n \nThe motivic Thom-Sebastiani theorem\n\\cite{guibert-loeser-merle-convolution} \nis a\nlocal multiplicativity result for motivic vanishing cycles. Given\nanother morphism $W \\colon Y \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}$ as above define\n$V \\circledast W \\colon X \\times Y \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}$ by $(V\n\\circledast W)(x,y)=V(x)+W(y)$. 
Then the motivic\nThom-Sebastiani Theorem states that a certain convolution of the\nmotivic vanishing cycles $\\phi_{V,a}$ and $\\phi_{W,b}$ determines\nsome part of the motivic vanishing cycles\n$\\phi_{V \\circledast W,a+b}$ (see Theorem\n\\ref{t:thom-sebastiani-GLM-motivic-vanishing-cycles}).\n\nOur main result states that, after small adjustments (the\nmotivic vanishing cycles $\\phi_{V,a}$ we use differ by a factor\n$(-1)^{\\dim X}$ from the usual motivic vanishing cycles, see\nRemark~\\ref{rem:motivic-fibers-signs}), the motivic vanishing\ncycles are\nboth additive and multiplicative.\n\n\\begin{theorem}\n [{see Theorem~\\ref{t:total-motivic-vanishing-cycles-ring-homo}}]\n \\label{t:main-intro}\n There is a morphism\n \\begin{equation}\n \\label{eq:60}\n (K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}), \\star) \\rightarrow\n (\\mathcal{M}_{\\mathsf{k}}^{\\hat{\\upmu}},*) \n \\end{equation}\n of $K_0({\\op{Var}}_{\\mathsf{k}})$-algebras, called the \\textbf{motivic vanishing\n cycles measure},\n which is uniquely\n determined by the following \n property: it maps the class of each proper\n morphism $V \\colon X \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}$ from a smooth and connected\n ${\\mathsf{k}}$-variety $X$ to the sum \n $\\sum_{a \\in {\\mathsf{k}}} \\phi_{V,a}$ of its motivic vanishing cycles.\n\\end{theorem}\n\nThe motivic vanishing cycles measure is a motivic\nmeasure in the sense that it is a ring morphism from\nsome Grothendieck ring of\nvarieties to another ring.\nThe multiplication $*$ on the target of our measure is a convolution\nproduct whose definition is due to Looijenga and involves Fermat\nvarieties. 
The \nmultiplication $\\star$ on the source is given by\n$[X \\xra{V} {\\mathbb{A}}^1_{\\mathsf{k}}] \\star [Y \\xra{W} {\\mathbb{A}}^1_{\\mathsf{k}}] = \n[X \\times Y \\xra{V \\circledast W} {\\mathbb{A}}^1_{\\mathsf{k}}]$.\nApart from the additivity and local multiplicativity results\nmentioned \nabove, the main ingredient in the proof of\nTheorem~\\ref{t:main-intro} is a compactification construction\ndescribed in \n\\cite{valery-olaf-matfak-motmeas}. \nIn fact, \nwe prove a slightly stronger statement\nin Theorem~\\ref{t:total-motivic-vanishing-cycles-ring-homo}: the\nmotivic \nvanishing cycles \nmeasure \\eqref{eq:60}\ncomes from a\nmorphism $(K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}), \\star) \\rightarrow\n(\\tilde{\\mathcal{M}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\hat{\\upmu}},\\star)$ \nof $K_0({\\op{Var}}_{\\mathsf{k}})$-algebras. \nLet us mention that our sign adjustments are already\nnecessary for additivity (see\nRemark~\\ref{rem:justify-sign-vanishing}).\n\nIn the last part of this article we compare the motivic vanishing\ncycles measure \nwith a motivic measure of a completely different categorical\nnature (in case \n${\\mathsf{k}}=\\mathbb{C}$). Mapping a projective morphism $W \\colon X \\rightarrow {\\mathbb{A}}^1_\\mathbb{C}$\nfrom a smooth complex variety $X$ to its category of matrix\nfactorizations gives rise to\na ``matrix factorization'' motivic\nmeasure\n\\begin{equation*}\n \\mu \\colon \n (K_0({\\op{Var}}_{{\\mathbb{A}}^1_\\mathbb{C}}), \\star) \n \\rightarrow K_0(\\op{sat}_\\mathbb{C}^{{\\mathbb{Z}}_2}) \n\\end{equation*}\nas we explained in \n\\cite{valery-olaf-matfak-semi-orth-decomp,\n valery-olaf-matfak-motmeas}. 
The target of this ring morphism\nis the Grothendieck\nring of saturated differential $\\mathbb{Z}_2$-graded categories.\nHere is our comparison result.\n\n\\begin{theorem} \n [{see Theorem~\\ref{t:comparison}}]\n \\label{t:comparison-intro}\n We have the following commutative diagram of ring homomorphisms\n \\begin{equation*}\n \\xymatrix{\n {(K_0({\\op{Var}}_{{\\mathbb{A}}^1_\\mathbb{C}}), \\star)} \n \\ar[r]^-{\\mu} \\ar[d]\n & {K_0(\\op{sat}_{\\mathbb{C}}^{{\\mathbb{Z}}_2})} \\ar[d]^-{\\chi_{{\\operatorname{HP}}}} \\\\\n {(\\mathcal{M}_{\\mathbb{C}}^{\\hat{\\upmu}},*)} \\ar[r]^-{\\chi_{\\op{c}}} \n & {\\mathbb{Z}}\n }\n \\end{equation*}\n where the left vertical arrow is \n the motivic vanishing cycles measure \\eqref{eq:60}\n from Theorem~\\ref{t:main-intro},\n the lower horizontal arrow is induced by forgetting the group\n action and taking the Euler\n characteristic with compact support, and the \n right vertical arrow is induced by taking the Euler\n characteristic of periodic cyclic homology.\n\\end{theorem}\n\nThe main ingredients in the proof of this theorem are the\ncomparison between the periodic cyclic homology of the dg\ncategory of matrix factorizations of a given potential $V$ with\nthe vanishing cohomology of $V$ due to A.~Efimov\n\\cite{efimov-cyclic-homology-matrix-factorizations}, and the\ncomparison between the motivic and geometric vanishing cycles due\nto G.~Guibert, F.~Loeser and M.~Merle\n\\cite{guibert-loeser-merle-convolution}.\n \n\\subsection{Structure of the article}\n\\label{sec:structure-article}\n\n\\begin{itemize}\n\\item[\\S\\ref{sec:grothendieck-rings}]\n We remind the reader of various (equivariant) Grothendieck\n abelian groups of \n varieties and multiplications (or ``convolutions'') on them.\n We recall Looijenga's convolution\n product $*$ in section~\\ref{sec:convolution} and include a direct\n proof of associativity (see\n Proposition~\\ref{p:convolution-comm-ass-unit}); \n this reproves results of \n 
\\cite[5.1-5.5]{guibert-loeser-merle-convolution}.\n We also define a variant of Looijenga's convolution product for\n varieties over ${\\mathbb{A}}^1_{\\mathsf{k}}$ in section~\\ref{sec:conv-vari-over}.\n\\item[\\S\\ref{sec:motiv-vanish-fiber}]\n We recall the definition of the motivic nearby\n fiber $\\psi_{V,a}$ and the motivic vanishing\n cycles $\\phi_{V,a}$ and show that $\\phi_{V,a}$ lies in \n $\\mathcal{M}^{\\hat{\\upmu}}_{|{\\op{Sing}}(V) \\cap X_a|}$ (see\n Proposition~\\ref{p:motivic-vanishing-cycles-over-singular-part}). \n We also show an invariance property of $\\phi_{V,a}$ \n in Corollary~\\ref{c:mot-van-fiber-compactification}.\n\\item[\\S\\ref{sec:thom-sebastiani}]\n We state the motivic Thom-Sebastiani Theorem\n \\cite[Thm.~5.18]{guibert-loeser-merle-convolution}\n as\n Theorem~\\ref{t:thom-sebastiani-GLM-motivic-vanishing-cycles} and\n give some corollaries. In particular, we globalize the\n Thom-Sebastiani Theorem to \n Corollary~\\ref{c:global-thom-sebastiani}.\n\\item[\\S\\ref{sec:motiv-vanish-fiber-1}]\n A corollary of\n \\cite[Thm.~3.9]{guibert-loeser-merle-convolution} is given as\n Theorem~\\ref{t:GLM-measure}. We obtain additivity of the\n motivic vanishing cycles in \n Theorem~\\ref{t:total-vanishing-cycles-group-homo}.\n Then we deduce our main\n Theorem~\\ref{t:total-motivic-vanishing-cycles-ring-homo} \n using the previous Thom-Sebastiani results\n and\n the compactification result stated as\n Proposition~\\ref{p:compactification}.\n\\item [\\S\\ref{sec:comp-with-matr}]\n We remind the reader of the categorical motivic measure\n in \\cite{bondal-larsen-lunts-grothendieck-ring}\n and its relation to the matrix factorization measure.\n Then we prove Theorem~\\ref{t:comparison}. 
We finish by giving\n two examples and by drawing a diagram relating the motivic\n measures considered in this article.\n\\end{itemize}\n\n\\subsection{Acknowledgements}\n\\label{sec:acknowledgements}\n\nWe thank Daniel Bergh, Alexander Efimov, Annabelle Hartmann,\nFran\\-\\c{c}ois Loeser, and \nAnatoly Preygel for useful discussions, and the referee for\ncareful reading.\n\nThe first author was supported by \nNSA grant 43.294.03.\nThe second author was supported by a postdoctoral\nfellowship of the DFG\nand by \nSFB\/TR 45 of the DFG.\n\n\\subsection{Conventions}\n\\label{sec:conventions}\n\nWe fix an algebraically closed field ${\\mathsf{k}}$ of characteristic\nzero.\nBy a ${\\mathsf{k}}$-variety we mean a separated reduced scheme of finite\ntype over ${\\mathsf{k}}$.\nA morphism of ${\\mathsf{k}}$-varieties is a morphism of ${\\mathsf{k}}$-schemes.\nLet ${\\op{Var}}_{\\mathsf{k}}$ be the category of\n${\\mathsf{k}}$-varieties.\nWe write $\\times$ instead of $\\times_{\\operatorname{Spec} {\\mathsf{k}}}$. By our\nassumptions on ${\\mathsf{k}}$, the product of two ${\\mathsf{k}}$-varieties is again\nreduced and hence a ${\\mathsf{k}}$-variety.\nIf $X$ is a scheme we denote by $|X|$ the\ncorresponding reduced closed subscheme. \n\n\\section{Grothendieck rings of varieties}\n\\label{sec:grothendieck-rings}\n\n\\subsection{Grothendieck rings of varieties over a base variety}\n\\label{sec:groth-rings-vari}\n\n\nFix a ${\\mathsf{k}}$-variety $S$. By an $S$-variety we mean a morphism\n$X \\rightarrow S$ of ${\\mathsf{k}}$-varieties. 
Let ${\\op{Var}}_S$ be the category of\n$S$-varieties.\nThe Grothendieck group $K_0({\\op{Var}}_S)$ of $S$-varieties is the\nquotient of the free abelian group on \nisomorphism \nclasses $\\langle X \\rightarrow S\\rangle$ of $S$-varieties $X \\rightarrow S$ by the subgroup generated by the scissor\nexpressions $\\langle X \\rightarrow S \\rangle - \\langle (X - Y) \\rightarrow S\n\\rangle - \\langle Y \\rightarrow S\\rangle$ where $Y \\subset X$ is a\nclosed reduced subscheme.\nAny $S$-variety $X \\rightarrow S$ \ndefines an element $[X \\rightarrow S]$ of $K_0({\\op{Var}}_S)$. \n\nGiven $S$-varieties \n$X \\rightarrow S$ and $Y \\rightarrow S$, the composition\n$|X \\times_S Y| \\rightarrow X \\times_S Y \\rightarrow S$ is an $S$-variety; this\noperation turns $K_0({\\op{Var}}_S)$ into a commutative associative ring\nwith identity element\n$[S \\xra{\\operatorname{id}} S]$ (use \\cite[Prop.~4.34]{goertz-wedhorn-AGI} for\nassociativity). \n\nLet $\\mathcal{M}_S:= K_0({\\op{Var}}_S)[\\mathbb{L}_S^{-1}]$ be the ring obtained\nfrom \n$K_0({\\op{Var}}_S)$ by inverting $\\mathbb{L}_S=[{\\mathbb{A}}^1_S \\rightarrow S]$.\n\nWe usually write $K_0({\\op{Var}}_{\\mathsf{k}})$ instead of $K_0({\\op{Var}}_{\\operatorname{Spec}\n {\\mathsf{k}}})$, $\\mathbb{L}=\\mathbb{L}_{\\mathsf{k}}$ instead of $\\mathbb{L}_{\\operatorname{Spec} {\\mathsf{k}}}$, and $\\mathcal{M}_{\\mathsf{k}}$ instead of $\\mathcal{M}_{\\operatorname{Spec} {\\mathsf{k}}}$.\n\n\\begin{remark}\n \\label{rem:compare-Grothendieck-rings}\n Note that the Grothendieck ring $K_0({\\op{Var}}_S)$ defined here is\n canonically isomorphic to the Grothendieck ring defined in\n \\cite[3.1]{nicaise-sebag-grothendieck-ring-of-varieties}, by\n \\cite[3.2.2]{nicaise-sebag-grothendieck-ring-of-varieties}.\n\\end{remark}\n\n\\subsubsection{Pullback}\n\\label{sec:pullback}\n\nLet $f \\colon T \\rightarrow S$ be a morphism of ${\\mathsf{k}}$-varieties. 
Then\nthe functor ${\\op{Var}}_S \\rightarrow {\\op{Var}}_T$,\n$(X \\rightarrow S) \\mapsto (|T \\times_S X| \\rightarrow T \\times_S X \\rightarrow T)$,\ninduces a morphism\n\\begin{equation}\n \\label{eq:1}\n f^* \\colon K_0({\\op{Var}}_S) \\rightarrow K_0({\\op{Var}}_T)\n\\end{equation}\nof commutative unital rings\nwhich satisfies $f^*(\\mathbb{L}_S)=\\mathbb{L}_T$\nand hence \ninduces a morphism\n\\begin{equation}\n \\label{eq:2}\n f^* \\colon \\mathcal{M}_S \\rightarrow \\mathcal{M}_T\n\\end{equation}\nof rings. \nIf $g \\colon U \\rightarrow T$ is another morphism of\n${\\mathsf{k}}$-varieties, we have $g^* f^* =(fg)^*$, by \n\\cite[Prop.~4.34]{goertz-wedhorn-AGI}.\n\nIn particular, $K_0({\\op{Var}}_S)$ (resp.\\ $\\mathcal{M}_S$) becomes a\n$K_0({\\op{Var}}_{\\mathsf{k}})$-algebra (resp.\\ $\\mathcal{M}_{\\mathsf{k}}$-algebra), and \n\\eqref{eq:1} (resp.\\ \\eqref{eq:2}) is a morphism of\n$K_0({\\op{Var}}_{\\mathsf{k}})$-algebras (resp.\\ of $\\mathcal{M}_{\\mathsf{k}}$-algebras).\nNote that the obvious map defines a canonical\nisomorphism \n\\begin{equation*}\n \\mathcal{M}_{\\mathsf{k}} \\otimes_{K_0({\\op{Var}}_{\\mathsf{k}})} K_0({\\op{Var}}_S) \\xra{\\sim}\n \\mathcal{M}_S \n\\end{equation*}\nof $\\mathcal{M}_{\\mathsf{k}}$-algebras.\n\n\\subsubsection{Pushforward}\n\\label{sec:push-forward}\n\nLet $f \\colon T \\rightarrow S$ be a morphism of ${\\mathsf{k}}$-varieties.\nThe functor ${\\op{Var}}_T \\rightarrow {\\op{Var}}_S$, $(Y \\xra{y} T) \\mapsto (Y\n\\xra{fy} S)$, induces a morphism\n\\begin{equation*}\n f_! \\colon K_0({\\op{Var}}_T) \\rightarrow K_0({\\op{Var}}_S)\n\\end{equation*}\nof $K_0({\\op{Var}}_{\\mathsf{k}})$-modules. \nTensoring with $\\mathcal{M}_{\\mathsf{k}}$ yields a morphism\n\\begin{equation*}\n f_! 
\\colon \\mathcal{M}_T \\rightarrow \\mathcal{M}_S\n\\end{equation*}\nof $\\mathcal{M}_{\\mathsf{k}}$-modules which sends\n$[Y \\xra{y} T] \\cdot \\mathbb{L}_T^{-i}$ to\n$[Y \\xra{fy} S] \\cdot \\mathbb{L}_S^{-i}$.\n\n\\begin{remark}\n \\label{rem:compare-Grothendieck-rings-and-pullback}\n The canonical isomorphisms from \n Remark~\\ref{rem:compare-Grothendieck-rings}\n are compatible with pullback and pushforward, \n by \\cite[Prop.~4.34]{goertz-wedhorn-AGI}.\n\\end{remark}\n\n\\subsection{Grothendieck rings of equivariant varieties over a\n base variety}\n\\label{sec:groth-rings-equiv}\n\nFor $n \\in \\mathbb{N}_{>0}$ let $\\upmu_n=\\operatorname{Spec}({\\mathsf{k}}[x]\/(x^n-1))$ be the\ngroup ${\\mathsf{k}}$-variety of $n$-th roots of unity. Note that actions\nof $\\upmu_n$ on a ${\\mathsf{k}}$-variety $X$ correspond bijectively to group\nmorphisms $\\upmu_n({\\mathsf{k}}) \\rightarrow \\operatorname{Aut}_{{\\op{Var}}_{\\mathsf{k}}}(X)$. \n\nFix a ${\\mathsf{k}}$-variety $S$ and let $n \\in \\mathbb{N}_{>0}$. Recall that a\ngood $\\upmu_n$-action on a ${\\mathsf{k}}$-variety $X$ is a $\\upmu_n$-action\nsuch that each $\\upmu_n({\\mathsf{k}})$-orbit is contained in an affine\nopen subset of $X$. An $S$-variety with a good $\\upmu_n$-action is\nan $S$-variety $p \\colon X \\rightarrow S$ together with a good\n$\\upmu_n$-action on $X$. So $p$ is $\\upmu_n$-equivariant if we equip\n$S$ with the trivial $\\upmu_n$-action. We obtain the category\n${\\op{Var}}_S^{\\upmu_n}$ of $S$-varieties with good $\\upmu_n$-action.\n \nThe definition of the Grothendieck ring $K_0({\\op{Var}}_S^{\\upmu_n})$ of\n$S$-varieties with good $\\upmu_n$-action is evident from\n\\cite[2.2-2.5]{guibert-loeser-merle-convolution}; apart from the\nusual scissor relations there is another family of relations,\ncf.\\ \\cite[(2.2.1)]{guibert-loeser-merle-convolution}. 
Any\n$S$-variety $X \\rightarrow S$ with good $\\upmu_n$-action gives rise to an\nelement $[X \\rightarrow S]=[X]$ of $K_0({\\op{Var}}_S^{\\upmu_n})$. The\nproduct of $[X \\rightarrow S]$ and $[Y \\rightarrow S]$ is the element obtained\nfrom $|X \\times_S Y| \\rightarrow S$ with the obvious diagonal\n$\\upmu_n$-action. Define\n$\\mathbb{L}_S=\\mathbb{L}_{S, \\upmu_n}=[{\\mathbb{A}}^1_S \\rightarrow S] \\in K_0({\\op{Var}}_S^{\\upmu_n})$\nwhere $\\upmu_n$ acts trivially on ${\\mathbb{A}}^1_S$. Let\n$\\mathcal{M}_S^{\\upmu_n}:= K_0({\\op{Var}}_S^{\\upmu_n})[\\mathbb{L}_S^{-1}]$.\n\n\nWe write \n$K_0({\\op{Var}}_{\\mathsf{k}}^{\\upmu_n})$ and $\\mathcal{M}_{\\mathsf{k}}^{\\upmu_n}$ instead of \n$K_0({\\op{Var}}_{\\operatorname{Spec} {\\mathsf{k}}}^{\\upmu_n})$ and $\\mathcal{M}_{\\operatorname{Spec} {\\mathsf{k}}}^{\\upmu_n}$.\n\nIf $f \\colon T \\rightarrow S$ is a morphism of ${\\mathsf{k}}$-varieties we obtain\nas above a pullback morphism\n$f^* \\colon K_0({\\op{Var}}_S^{\\upmu_n}) \\rightarrow K_0({\\op{Var}}_T^{\\upmu_n})$\nof $K_0({\\op{Var}}_{\\mathsf{k}}^{\\upmu_n})$-algebras\nsatisfying $f^*(\\mathbb{L}_S)=\\mathbb{L}_T$ and an induced\npullback morphism\n$f^* \\colon \\mathcal{M}_S^{\\upmu_n} \\rightarrow \\mathcal{M}_T^{\\upmu_n}$\nof $\\mathcal{M}_{\\mathsf{k}}^{\\upmu_n}$-algebras.\nWe also have a pushforward morphism\n$f_! \\colon K_0({\\op{Var}}_T^{\\upmu_n}) \\rightarrow K_0({\\op{Var}}_S^{\\upmu_n})$\nof $K_0({\\op{Var}}_{\\mathsf{k}}^{\\upmu_n})$-modules, and a pushforward morphism\n$f_! \\colon \\mathcal{M}_T^{\\upmu_n} \\rightarrow \\mathcal{M}_S^{\\upmu_n}$\nof $\\mathcal{M}_{\\mathsf{k}}^{\\upmu_n}$-modules. 
\nFor $n=1$ we recover the notions from \n\\ref{sec:groth-rings-vari}.\n\nWhenever $n'$ is a multiple of $n$ there is a morphism\n$\\upmu_{n'} \\rightarrow \\upmu_n$, $\\lambda\n\\mapsto \\lambda^{n'\/n}$, of ${\\mathsf{k}}$-group varieties inducing\nmorphisms \n\\begin{align}\n \\label{eq:28}\n K_0({\\op{Var}}_S^{\\upmu_n}) & \\rightarrow K_0({\\op{Var}}_S^{\\upmu_{n'}}),\\\\\n \\label{eq:29}\n \\mathcal{M}_S^{\\upmu_n} & \\rightarrow \\mathcal{M}_S^{\\upmu_{n'}},\n\\end{align}\nof rings.\nThese\nmorphisms are compatible with pullback and pushforward morphisms.\n\nIn particular, $K_0({\\op{Var}}^{\\upmu_n}_S)$ (resp.\\\n$\\mathcal{M}^{\\upmu_n}_S$) becomes a $K_0({\\op{Var}}_{\\mathsf{k}})$-algebra\n(resp.\\ $\\mathcal{M}_{\\mathsf{k}}$-algebra)\nand the morphisms \n\\eqref{eq:28}\nand \\eqref{eq:29} are morphisms of algebras. We have a canonical\nisomorphism \n\\begin{equation}\n \\label{eq:15}\n \\mathcal{M}_{\\mathsf{k}} \\otimes_{K_0({\\op{Var}}_{\\mathsf{k}})} K_0({\\op{Var}}^{\\upmu_n}_S)\n \\xra{\\sim} \\mathcal{M}_S^{\\upmu_n} \n\\end{equation}\nof $\\mathcal{M}_{\\mathsf{k}}$-algebras\ngiven by $\\tfrac{r}{(\\mathbb{L}_{\\mathsf{k}})^n}\n\\otimes a \\mapsto \\tfrac{ra}{(\\mathbb{L}_S)^n}$. 
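The following elementary example, included here purely as an illustration, makes the transition map \eqref{eq:28} explicit in the simplest nontrivial case.

\begin{example}
  Let $S=\operatorname{Spec} {\mathsf{k}}$, $n=2$ and $n'=4$. Equip
  ${\mathbb{A}}^1_{\mathsf{k}}$ with the $\upmu_2$-action
  $z.\lambda = \lambda z$; this action is good since
  ${\mathbb{A}}^1_{\mathsf{k}}$ is affine. The map \eqref{eq:28} sends
  the class of this $\upmu_2$-variety to the class of
  ${\mathbb{A}}^1_{\mathsf{k}}$ with the good $\upmu_4$-action
  $z.\lambda = \lambda^2 z$ obtained by restriction along
  $\upmu_4 \rightarrow \upmu_2$, $\lambda \mapsto \lambda^2$. Since
  restriction preserves trivial actions, \eqref{eq:28} maps
  $\mathbb{L}_{S,\upmu_2}$ to $\mathbb{L}_{S,\upmu_4}$ and hence
  indeed induces the map \eqref{eq:29} on localizations.
\end{example}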
\n\nLet $\\hat{\\upmu}$ be the (inverse) limit of the\n$(\\upmu_n({\\mathsf{k}}))_{n \\in \\mathbb{N}_{>0}}$ with respect to the morphisms\n$\\upmu_{n'}({\\mathsf{k}}) \\rightarrow \\upmu_n({\\mathsf{k}})$,\n$\\lambda \\mapsto \\lambda^{n'\/n}$, whenever $n'$ is a multiple of\n$n$.\n\nAn $S$-variety with good $\\hat{\\upmu}$-action is by definition an\n$S$-variety $X \\rightarrow S$ together with a group morphism $\\hat{\\upmu} \\rightarrow\n\\operatorname{Aut}_{{\\op{Var}}_{\\mathsf{k}}}(X)$ that comes from a good $\\upmu_n$-action on $X$,\nfor some $n \\in \\mathbb{N}_{>0}$.\nAs in \\cite[2.2]{guibert-loeser-merle-convolution} we obtain \nthe category ${\\op{Var}}_S^{\\hat{\\upmu}}$ of $S$-varieties with good\n$\\hat{\\upmu}$-action.\nWe define $K_0({\\op{Var}}_S^{\\hat{\\upmu}})$ and\n$\\mathcal{M}_S^{\\hat{\\upmu}}$ in the obvious way so that we have\n\\begin{align*}\n \n K_0({\\op{Var}}_S^{\\hat{\\upmu}}) & = \\operatorname{colim}_n K_0({\\op{Var}}_S^{\\upmu_n}),\\\\\n \\mathcal{M}_S^{\\hat{\\upmu}} & = \\operatorname{colim}_n \\mathcal{M}_S^{\\upmu_n}.\n\\end{align*}\nThe Grothendieck ring \n$K_0({\\op{Var}}_S^{\\hat{\\upmu}})$\nis an $K_0({\\op{Var}}_{\\mathsf{k}})$-algebra (even a \n$K_0({\\op{Var}}_{\\mathsf{k}}^{\\hat{\\upmu}})$-algebra), and \n$\\mathcal{M}_S^{\\hat{\\upmu}}$ is a $\\mathcal{M}_{\\mathsf{k}}$-algebra\n(even a \n$\\mathcal{M}_{\\mathsf{k}}^{\\hat{\\upmu}}$-algebra). We have \n\\begin{equation*}\n \n \\mathcal{M}_{\\mathsf{k}} \\otimes_{K_0({\\op{Var}}_{\\mathsf{k}})} K_0({\\op{Var}}^{\\hat{\\upmu}}_S)\n \\cong \\mathcal{M}_S^{\\hat{\\upmu}} \n\\end{equation*}\ncanonically as rings.\nIf $f \\colon T \\rightarrow S$ is a morphism of ${\\mathsf{k}}$-varieties, we obtain\na pullback morphism\n$f^* \\colon K_0({\\op{Var}}_S^{\\hat{\\upmu}}) \\rightarrow K_0({\\op{Var}}_T^{\\hat{\\upmu}})$\nof $K_0({\\op{Var}}_{\\mathsf{k}})$-algebras and a pushforward morphism\n$f_! 
\\colon\n K_0({\\op{Var}}_T^{\\hat{\\upmu}})\n\\rightarrow\n K_0({\\op{Var}}_S^{\\hat{\\upmu}})$\nof $K_0({\\op{Var}}_{\\mathsf{k}})$-modules. The base changes of these morphisms along\nthe ring morphism $K_0({\\op{Var}}_{\\mathsf{k}}) \\rightarrow \\mathcal{M}_{\\mathsf{k}}$ are denoted\nby the same symbols.\n\nInstead of working with $\\upmu_n$ we could work more generally with\n$\\upmu_{n_1} \\times \\dots \\times \\upmu_{n_r}$ (for $r \\in \\mathbb{N}$ and $n_1,\n\\dots, n_r \\in \n\\mathbb{N}_{>0}$), and instead of $\\hat{\\upmu}$ we could work\nwith $\\hat{\\upmu}^r$ (for $r \\in \\mathbb{N}$). We extend our notation\naccordingly. \n\n\\begin{remark}\n \\label{rem:dictionary}\n There is an alternative description of \n $K_0({\\op{Var}}_S^{\\hat{\\upmu}^r})$ and\n $\\mathcal{M}_S^{\\hat{\\upmu}^r}$,\n see \n the dictionary in\n \\cite[2.3-2.6]{guibert-loeser-merle-convolution}. \n When referring to results of\n \\cite{guibert-loeser-merle-convolution} we will usually\n translate \n them using this dictionary.\n\\end{remark}\n\n\\begin{lemma}\n \\label{l:open-closed-decomposition}\n Let $S$ be a ${\\mathsf{k}}$-variety and $F \\subset S$ a closed reduced\n subscheme with open complement $U$.\n Let $i \\colon F \\rightarrow S$ and $j \\colon U \\rightarrow S$ denote the\n inclusions. 
Then \n \\begin{align*}\n (j^*, i^*) \\colon K_0({\\op{Var}}_S^{\\hat{\\upmu}}) \n & \\xra{\\sim}\n K_0({\\op{Var}}_U^{\\hat{\\upmu}})\n \\times\n K_0({\\op{Var}}_F^{\\hat{\\upmu}}),\\\\\n A & \\mapsto (j^*(A), i^*(A)),\n \\end{align*}\n is an isomorphism of $K_0({\\op{Var}}_{\\mathsf{k}})$-algebras, with\n inverse given by $(B,C) \\mapsto j_!(B) + i_!(C)$.\n Similarly, \n \\begin{equation*}\n (j^*, i^*) \\colon \\mathcal{M}_S^{\\hat{\\upmu}} \\xra{\\sim}\n \\mathcal{M}_U^{\\hat{\\upmu}}\n \\times \\mathcal{M}_F^{\\hat{\\upmu}}\n \\end{equation*}\n is an isomorphism of $\\mathcal{M}_{\\mathsf{k}}$-algebras.\n\\end{lemma}\n\n\\begin{proof}\n This is obvious from the definitions.\n\\end{proof}\n\n\\begin{remark}\n \\label{rem:standard-multiplication}\n Recall that $K_0({\\op{Var}}_S)$,\n $K_0({\\op{Var}}_S^{\\upmu_n})$ and\n $K_0({\\op{Var}}_S^{\\hat{\\upmu}})$ are $K_0({\\op{Var}}_{\\mathsf{k}})$-algebras\n whose multiplications are\n induced from the fiber product over\n $S$. In the rest of this article mainly the \n underlying $K_0({\\op{Var}}_{\\mathsf{k}})$-module structure on \n $K_0({\\op{Var}}_S)$,\n $K_0({\\op{Var}}_S^{\\upmu_n})$ and\n $K_0({\\op{Var}}_S^{\\hat{\\upmu}})$ will be important. 
\n Given $(T \\rightarrow \\operatorname{Spec} {\\mathsf{k}})$ in ${\\op{Var}}_{\\mathsf{k}}$ and $(Z \\rightarrow S)$ in ${\\op{Var}}_S$\n or ${\\op{Var}}_S^{\\upmu_n}$ or\n ${\\op{Var}}_S^{\\hat{\\upmu}}$ it is given by\n \\begin{equation*}\n [T \\rightarrow \\operatorname{Spec} {\\mathsf{k}}].[Z \\rightarrow S] = [T \\times Z \\rightarrow S].\n \\end{equation*}\n In fact, we will introduce other multiplications on \n the \n $K_0({\\op{Var}}_{\\mathsf{k}})$-modules\n $K_0({\\op{Var}}_S^{\\upmu_n})$ and\n $K_0({\\op{Var}}_S^{\\hat{\\upmu}})$\n turning them into \n $K_0({\\op{Var}}_{\\mathsf{k}})$-algebras.\n\\end{remark}\n\n\\subsection{Convolution}\n\\label{sec:convolution}\n\nAfter some preparations we define the convolution product $*$ on\n$K_0({\\op{Var}}_S^{\\upmu_n})$ (Definition~\\ref{d:convolution})\nand show that it turns\n$K_0({\\op{Var}}_S^{\\upmu_n})$ into a \n$K_0({\\op{Var}}_{\\mathsf{k}})$-algebra\n(Proposition~\\ref{p:convolution-comm-ass-unit}). \nThis is not a new result: see \n\\cite[5.1-5.5]{guibert-loeser-merle-convolution} and use the\ndictionary from Remark~\\ref{rem:dictionary}. \nNevertheless we liked the exercise of showing associativity \nwithout using this dictionary.\n\nLet $S$ be a ${\\mathsf{k}}$-variety and $n \\in \\mathbb{N}_{>0}$. Let\n$p \\colon Z \\rightarrow S$ be an object of ${\\op{Var}}^{\\upmu_n \\times \\upmu_n}_S$.\nWe assume that $\\upmu_n \\times \\upmu_n$ acts on $Z$ from the right.\nThe group $\\upmu_n \\times \\upmu_n$ acts on the ${\\mathsf{k}}$-variety\n$Z \\times {\\DG_\\op{m}} \\times {\\DG_\\op{m}}$ via\n$(z,x,y).(s,t):=(z.(s,t), s^{-1} x, t^{-1} y)$. 
The quotient with\nrespect to this action is the balanced\nproduct \n$Z \\times^{\\upmu_n \\times \\upmu_n} {\\DG_\\op{m}} \\times {\\DG_\\op{m}}$ which is again\na ${\\mathsf{k}}$-variety (use \\cite[Exp.~V.1]{SGA-1}).\nWe equip it with the\ndiagonal $\\upmu_n$-action given by \n$[z,x,y].t= [z, tx, ty]=[z.(t,t), x, y]$.\nWith the obvious morphism to $S$ induced by $p$ it is \nan object of\n${\\op{Var}}_S^{\\upmu_n}$.\nSimilarly, starting from the two closed $S$-subvarieties\nof $Z \\times {\\DG_\\op{m}} \\times {\\DG_\\op{m}}$ defined by the equations $x^n+y^n=1$ and\n$x^n+y^n=0$, we obtain the two objects\n$(Z \\times^{\\upmu_n \\times \\upmu_n} {\\DG_\\op{m}} \\times {\\DG_\\op{m}})|_{x^n+y^n=1}$\nand\n$(Z \\times^{\\upmu_n \\times \\upmu_n} {\\DG_\\op{m}} \\times {\\DG_\\op{m}})|_{x^n+y^n=0}$\nof ${\\op{Var}}_S^{\\upmu_n}$.\n\nGiven $(Z \\xra{p} S) \\in {\\op{Var}}_S^{\\upmu_n \\times \\upmu_n}$ as above define\n\\begin{align}\n \\label{eq:psi-two-mu-s}\n \\Psi(Z \\xra{p} S)\n := &\n -[(Z \\times^{\\upmu_n \\times \\upmu_n} \\underset{x}{{\\DG_\\op{m}}} \\times\n \\underset{y}{{\\DG_\\op{m}}})|_{x^n+y^n=1} \\xra{[z,x,y] \\mapsto p(z)}\n S]\\\\\n \\notag\n & +[(Z \\times^{\\upmu_n \\times \\upmu_n} \\underset{x}{{\\DG_\\op{m}}} \\times\n \\underset{y}{{\\DG_\\op{m}}})|_{x^n+y^n=0}\n \\xra{([z,x,y]) \\mapsto p(z)} S] \n \\in K_0({\\op{Var}}_S^{\\upmu_n}).\n\\end{align}\nHere the symbols $x$ and $y$ below ${\\DG_\\op{m}} \\times {\\DG_\\op{m}}$ indicate\nthat $(x,y)$ forms a system \nof coordinates \non ${\\DG_\\op{m}} \\times {\\DG_\\op{m}}$.\nSimilar notation will be\nused below without further explanations.\n\n\\begin{example}\n \\label{ex:trivial-action}\n Let $p \\colon Z \\rightarrow S$ be as above and assume that\n $\\upmu_n \\times \\upmu_n$ acts trivially on $Z$. 
Then\n $Z \\times^{\\upmu_n \\times \\upmu_n} {\\DG_\\op{m}} \\times {\\DG_\\op{m}} \\xra{\\sim} Z\n \\times {\\DG_\\op{m}} \\times {\\DG_\\op{m}}$,\n $[z,x,y] \\mapsto (z,x^n, y^n)$, is an isomorphism which is\n $\\upmu_n$-equivariant if we equip \n $Z \\times {\\DG_\\op{m}} \\times {\\DG_\\op{m}}$\n with the trivial $\\upmu_n$-action. This implies\n $\\Psi(Z \\xra{p} S) = [Z \\xra{p} S]$ in $K_0({\\op{Var}}_S^{\\upmu_n})$\n where $Z$ is considered as a $\\upmu_n$-variety over $S$ with\n trivial action. In particular, we obtain\n $\\Psi(S \\xra{\\operatorname{id}} S) = [S \\xra{\\operatorname{id}} S]$.\n \n \n\\end{example}\n\n\\begin{example}\n \\label{ex:p-product-over-k-trivial-on-one-factor}\n Assume that $p=p_1 \\times p_2 \\colon Z=Z_1 \\times Z_2 \\rightarrow S=S_1\n \\times S_2$ where $S_1$ and $S_2$ are ${\\mathsf{k}}$-varieties and \n $p_i \\colon Z_i \\rightarrow S_i$\n is an object of ${\\op{Var}}_{S_i}^{\\upmu_n}$, for $i=1,2$.\n Moreover assume that the action of $\\upmu_n$ on\n $Z_2$ is trivial. 
Then \n $Z_1 \\times Z_2 \\times^{\\upmu_n \\times \\upmu_n} {\\DG_\\op{m}} \\times\n {\\DG_\\op{m}} \\xra{\\sim} (Z_1 \\times^{\\upmu_n} {\\DG_\\op{m}}) \\times (Z_2 \\times {\\DG_\\op{m}})$,\n $[z_1,z_2,x,y] \\mapsto ([z_1,x], z_2, y^n)$, is an isomorphism\n over $S$, and we can simplify \n \\eqref{eq:psi-two-mu-s} to\n \\begin{align*}\n \\Psi(Z_1 \\times Z_2 \\xra{p_1 \\times p_2} S_1 \\times S_2)\n = \n &\n -[(Z_1 \\times^{\\upmu_n} \\underset{x}{{\\DG_\\op{m}}})|_{x^n\\not=1} \\times Z_2\n \\rightarrow S_1 \\times S_2]\\\\\n &\n +[(Z_1 \\times^{\\upmu_n} \\underset{x}{{\\DG_\\op{m}}}) \\times Z_2\n \\rightarrow S_1 \\times S_2]\\\\\n =\n & [(Z_1 \\times^{\\upmu_n} \\upmu_n) \\times Z_2\n \\rightarrow S_1 \\times S_2]\\\\\n =\n & [Z_1 \\times Z_2\n \\rightarrow S_1 \\times S_2].\n \\end{align*}\n This example will be useful later on.\n\\end{example}\n\nIn fact, $\\Psi$ induces a morphism\n\\begin{equation}\n \\label{eq:10}\n \\Psi \\colon K_0({\\op{Var}}_S^{\\upmu_n \\times \\upmu_n}) \\rightarrow K_0({\\op{Var}}_S^{\\upmu_n})\n\\end{equation}\nof $K_0({\\op{Var}}_{\\mathsf{k}})$-modules.\n\nOur next aim is to prove Proposition~\\ref{p:Psi-associative}\nwhich will later on imply associativity of the convolution\nproduct.\n\nLet $p \\colon Z \\rightarrow S$ be an object of ${\\op{Var}}^{\\upmu_n \\times \\upmu_n\n \\times \\upmu_n}_S$. 
Similarly as above we \ndefine\n\\begin{multline}\n \\label{eq:psi-123}\n \\Psi_{123}(Z \\xra{p} S) := \n -[(Z \\times^{\\upmu_n \\times \\upmu_n \\times\n \\upmu_n} \\underset{x_1}{{\\DG_\\op{m}}} \\times \\underset{x_2}{{\\DG_\\op{m}}}\n \\times \\underset{x_3}{{\\DG_\\op{m}}})|_{x_1^n+x_2^n+x_3^n=1}\n \\xra{[z,x_1,x_2,x_3] \\mapsto p(z)}\n S]\\\\\n +[(Z\n \\times^{\\upmu_n \\times \\upmu_n \\times\n \\upmu_n} \\underset{x_1}{{\\DG_\\op{m}}} \\times \\underset{x_2}{{\\DG_\\op{m}}}\n \\times \\underset{x_3}{{\\DG_\\op{m}}})|_{x_1^n+x_2^n+x_3^n=0}\n \\xra{([z,x_1,x_2,x_3]) \\mapsto p(z)} S] \\in K_0({\\op{Var}}_S^{\\upmu_n})\n\\end{multline}\nwhere the closed subvarieties of $Z \\times^{\\upmu_n \\times \\upmu_n \\times \\upmu_n}\n\\underset{x_1}{{\\DG_\\op{m}}} \\times \\underset{x_2}{{\\DG_\\op{m}}} \\times\n\\underset{x_3}{{\\DG_\\op{m}}}$ are equipped with the $\\upmu_n$-action\n$[z,x_1,x_2,x_3].t=[z,tx_1, tx_2, tx_3]=[z.(t,t,t), x_1, x_2,\nx_3]$.\nAgain we obtain a morphism\n\\begin{equation*}\n \\Psi_{123} \\colon K_0({\\op{Var}}_S^{\\upmu_n \\times \\upmu_n \\times \\upmu_n}) \\rightarrow\n K_0({\\op{Var}}_S^{\\upmu_n}) \n\\end{equation*}\nof $K_0({\\op{Var}}_{\\mathsf{k}})$-modules.\n\nSimilarly we associate to \n$(p \\colon Z \\rightarrow S) \\in {\\op{Var}}^{\\upmu_n \\times \\upmu_n\n \\times \\upmu_n}_S$ the element\n\\begin{align}\n \\label{eq:psi-13}\n \\Psi_{13}&(Z) := \n -[(Z \\times^{\\upmu_n \\times \\{1\\} \\times\n \\upmu_n} \\underset{x_1}{{\\DG_\\op{m}}} \\times \\{1\\}\n \\times \\underset{x_3}{{\\DG_\\op{m}}})|_{x_1^n+x_3^n=1}\n \\xra{[z,x_1,1,x_3] \\mapsto p(z)}\n S]\n \\\\\n \\notag \n & +[(Z\n \\times^{\\upmu_n \\times \\{1\\} \\times\n \\upmu_n} \\underset{x_1}{{\\DG_\\op{m}}} \\times \\{1\\}\n \\times \\underset{x_3}{{\\DG_\\op{m}}})|_{x_1^n+x_3^n=0}\n \\xra{([z,x_1,1,x_3]) \\mapsto p(z)} S] \n \\in K_0({\\op{Var}}_S^{\\upmu_n \\times \\upmu_n}).\n\\end{align}\nHere the $\\upmu_n \\times \\upmu_n$-action is\ngiven by the two commuting 
$\\upmu_n$-actions\n$[z,x_1,1,x_3].s=[z,sx_1, 1, sx_3]=[z.(s,1,s), x_1, 1, x_3]$\nand\n$[z,x_1,1,x_3].t=[z.(1,t,1),x_1, 1, x_3]$, i.\\,e.\\ we have\n$[z,x_1,1,x_3].(s,t) =[z.(s,t,s), x_1, 1, x_3]$.\nAs above we obtain a morphism\n\\begin{equation*}\n \\Psi_{13} \\colon K_0({\\op{Var}}_S^{\\upmu_n \\times \\upmu_n \\times \\upmu_n}) \\rightarrow\n K_0({\\op{Var}}_S^{\\upmu_n \\times \\upmu_n}) \n\\end{equation*}\nof $K_0({\\op{Var}}_{\\mathsf{k}})$-modules.\nSimilarly we define $\\Psi_{12}$ and $\\Psi_{23}$.\n\n\\begin{remark}\n \\label{rem:Psi-compatible-with-pushforward-pullback}\n If $f \\colon S \\rightarrow S'$ is a morphism of ${\\mathsf{k}}$-varieties, all maps\n $\\Psi$, \n $\\Psi_{123}$, \n $\\Psi_{12}$,\n $\\Psi_{13}$,\n $\\Psi_{23}$\n are compatible with $f_!$ and $f^*$; for example\n $\\Psi(f_!(Z))=f_!(\\Psi(Z))$ for $Z \\in\n K_0({\\op{Var}}_{S}^{\\upmu_n \\times \\upmu_n})$\n and \n $\\Psi(f^*(Z))=f^*(\\Psi(Z))$ for $Z \\in\n K_0({\\op{Var}}_{S'}^{\\upmu_n \\times \\upmu_n})$.\n For $f_!$ this is obvious. For $f^*$ one uses the fact that\n $Z \\times {\\DG_\\op{m}} \\times {\\DG_\\op{m}} \\rightarrow Z \\times^{\\upmu_n \\times\n \\upmu_n} {\\DG_\\op{m}} \\times {\\DG_\\op{m}}$ is a $(\\upmu_n \\times\n \\upmu_n)$-torsor and hence its pullback under the base change\n morphism $f$ is again such a torsor.\n\\end{remark}\n\n\\begin{proposition}\n [{\\cite[Prop.~5.5]{guibert-loeser-merle-convolution}}]\n \\label{p:Psi-associative}\n We have\n \\begin{equation*}\n \\Psi_{123}=\\Psi \\circ \\Psi_{13}\n =\\Psi \\circ \\Psi_{12}\n =\\Psi \\circ \\Psi_{23}\n \\end{equation*}\n as morphisms \n $K_0({\\op{Var}}_S^{\\upmu_n \\times \\upmu_n \\times \\upmu_n}) \\rightarrow\n K_0({\\op{Var}}_S^{\\upmu_n})$\n of $K_0({\\op{Var}}_{\\mathsf{k}})$-modules.\n\\end{proposition}\n\n\\begin{proof}\n Let $p \\colon Z \\rightarrow S$ be an object of\n ${\\op{Var}}^{\\upmu_n \\times \\upmu_n \\times \\upmu_n}_S$. 
It is enough to\n show that\n $\\Psi_{123}(Z) =\\Psi(\\Psi_{13}(Z)) =\\Psi(\\Psi_{12}(Z))\n =\\Psi(\\Psi_{23}(Z))$\n in $K_0({\\op{Var}}_S^{\\upmu_n})$. We only prove\n $\\Psi_{123}(Z) =\\Psi(\\Psi_{13}(Z))$ and leave the remaining\n cases to the reader.\n \n From \n \\eqref{eq:psi-two-mu-s}\n and \n \\eqref{eq:psi-13}\n we obtain\n \\begin{align}\n \\label{eq:psi-psi13}\n \\Psi(\\Psi_{13}(Z)) = \n & -\\Psi([(Z \\times^{\\upmu_n \\times \\{1\\}\n \\times \\upmu_n} \\underset{x_1}{{\\DG_\\op{m}}} \\times \\{1\\} \\times\n \\underset{x_3}{{\\DG_\\op{m}}})|_{x_1^n+x_3^n=1} \\rightarrow\n S])\\\\\n \\notag \n & +\\Psi([(Z \\times^{\\upmu_n \\times \\{1\\} \\times \\upmu_n}\n \\underset{x_1}{{\\DG_\\op{m}}} \\times \\{1\\} \\times\n \\underset{x_3}{{\\DG_\\op{m}}})|_{x_1^n+x_3^n=0} \\rightarrow S])\\\\\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \\notag = &\n \\sum_{\\delta, \\epsilon \\in \\{0,1\\}}\n (-1)^{\\delta+\\epsilon}\n [D_{\\delta, \\epsilon} \\rightarrow S]\n \\end{align}\n where \n \\begin{align*}\n \n D_{\\delta, \\epsilon} & :=\n ((Z \\times^{\\upmu_n \\times \\{1\\} \\times \\upmu_n}\n \\underset{x_1}{{\\DG_\\op{m}}} \\times \\{1\\} \\times\n \\underset{x_3}{{\\DG_\\op{m}}})|_{x_1^n+x_3^n=\\delta} \\times^{\\upmu_n \\times\n \\upmu_n} \\underset{y_1}{{\\DG_\\op{m}}} \\times\n \\underset{y_2}{{\\DG_\\op{m}}})|_{y_1^n+y_2^n=\\epsilon}\\\\\n \\notag\n & =\n (Z \\times^{\\upmu_n \\times \\upmu_n}\n \\underset{x_1}{{\\DG_\\op{m}}} \\times\n \\underset{x_3}{{\\DG_\\op{m}}} \\times^{\\upmu_n \\times\n \\upmu_n} \\underset{y_1}{{\\DG_\\op{m}}} \\times\n \\underset{y_2}{{\\DG_\\op{m}}})|_{x_1^n+x_3^n=\\delta, \\atop\n y_1^n+y_2^n=\\epsilon}\\\\\n \\notag\n & =\n \\frac{(Z \\times\n \\underset{x_1}{{\\DG_\\op{m}}} \\times\n \\underset{x_3}{{\\DG_\\op{m}}} \\times \\underset{y_1}{{\\DG_\\op{m}}} \\times\n \\underset{y_2}{{\\DG_\\op{m}}})|_{x_1^n+x_3^n=\\delta, \\atop\n y_1^n+y_2^n=\\epsilon}}\n {\\upmu_n \\times \\upmu_n \\times 
\\upmu_n \\times \\upmu_n}.\n \\end{align*}\n Here, by the definitions of the quotients in\n \\eqref{eq:psi-two-mu-s}\n and \\eqref{eq:psi-13},\n the quotient is formed with respect to the\n $(\\upmu_n)^{\\times 4}$-action \n \\begin{equation*}\n \n (z,x_1, x_3, y_1, y_2).(s,t,u,v)=\n (z.(su,v,tu),s^{-1}x_1, t^{-1}x_3, u^{-1}y_1, v^{-1}y_2). \n \\end{equation*}\n and $D_{\\delta,\\epsilon}$ is a $\\upmu_n$-variety with action\n \\begin{equation*}\n \n [z,x_1,x_3,y_1,y_2].m=[z,x_1,x_3,my_1,my_2]=\n [z.(m,m,m),x_1,x_3,y_1,y_2].\n \\end{equation*}\n\n The coordinate changes $a_1=x_1y_1$, $a_2=x_3y_1$, $b=y_1$,\n $a_3=y_2$ in $({\\DG_\\op{m}})^{\\times 4}$ and $s'=su$,\n $t'=tu$, $u=u$, $v=v$ in $(\\upmu_n)^{\\times 4}$ show that\n \\begin{equation*}\n \n D_{\\delta,\\epsilon}\n \\cong\n \\frac{(Z \\times\n \\underset{a_1}{{\\DG_\\op{m}}} \\times\n \\underset{a_2}{{\\DG_\\op{m}}} \\times \\underset{b}{{\\DG_\\op{m}}} \\times\n \\underset{a_3}{{\\DG_\\op{m}}})|_{a_1^n+a_2^n=\\delta b^n, \\atop\n b^n+a_3^n=\\epsilon}}\n {\\upmu_n \\times \\upmu_n \\times \\upmu_n \\times \\upmu_n}.\n \\end{equation*}\n where the quotient is formed with respect to the\n $(\\upmu_n)^{\\times 4}$-action \n \\begin{equation*}\n \n (z,a_1, a_2, b, a_3).(s',t',u,v)=\n (z.(s',v,t'),s'^{-1}a_1, t'^{-1}a_2, u^{-1}b, v^{-1}a_3) \n \\end{equation*}\n and the $\\upmu_n$-action on this quotient is given by \n \\begin{equation*}\n [z,a_1,a_2,b,a_3].m=[z,ma_1,ma_2,mb,ma_3]=\n [z.(m,m,m),a_1,a_2,b,a_3].\n \\end{equation*}\n The quotient of \n $\\underset{a_1}{{\\DG_\\op{m}}} \\times\n \\underset{a_2}{{\\DG_\\op{m}}} \\times \n \\underset{b}{{\\DG_\\op{m}}} \\times \n \\underset{a_3}{{\\DG_\\op{m}}}|_{a_1^n+a_2^n=\\delta b^n, \\atop\n b^n+a_3^n=\\epsilon}$\n under the obvious action of $\\{1\\} \\times \\{1\\} \\times \\upmu_n\n \\times \\{1\\}$ on the factor ${\\DG_\\op{m}}$ with coordinate $b$ is \n clearly isomorphic to \n \\begin{equation*}\n \n Q_{\\delta, 
\\epsilon}:=(\\underset{a_1}{{\\DG_\\op{m}}} \\times\n \\underset{a_2}{{\\DG_\\op{m}}} \\times \n \\underset{a_3}{{\\DG_\\op{m}}})|_{a_1^n+a_2^n=\\delta (\\epsilon -a_3^n),\n \\atop \n a_3^n\\not=\\epsilon}\n \\end{equation*}\n So we obtain\n \\begin{equation*}\n \n D_{\\delta, \\epsilon} \\cong Z \\times^{\\upmu_n \\times \\upmu_n \\times\n \\upmu_n} Q_{\\delta,\\epsilon} \n \\end{equation*}\n where the quotient is formed with respect\n to the $(\\upmu_n)^{\\times 3}$-action\n \\begin{equation*}\n \n (z,a_1, a_2, a_3).(s',t',v)=\n (z.(s',v,t'),s'^{-1}a_1, t'^{-1}a_2, v^{-1}a_3) \n \\end{equation*}\n and the $\\upmu_n$-action on this quotient is given by \n \\begin{equation*}\n [z,a_1,a_2,a_3].m=\n [z.(m,m,m),a_1,a_2,a_3]=\n [z,ma_1,ma_2,ma_3].\n \\end{equation*}\n Continuing the computation\n \\eqref{eq:psi-psi13} we obtain\n \\begin{align*}\n \n \\Psi(\\Psi_{13}(Z)) \n \n \n \n \n \n = &\n + [Z \\times^{\\upmu_n \\times \\upmu_n\n \\times \\upmu_n}\n (\\underset{a_1}{{\\DG_\\op{m}}} \\times\n \\underset{a_2}{{\\DG_\\op{m}}} \\times\n \\underset{a_3}{{\\DG_\\op{m}}})|_{a_1^n+a_2^n+a_3^n=1\n \\atop 1 \\not= a_3^n}\n \\rightarrow S]\\\\\n \\notag\n & -[Z \\times^{\\upmu_n \\times \\upmu_n\n \\times \\upmu_n}\n (\\underset{a_1}{{\\DG_\\op{m}}} \\times\n \\underset{a_2}{{\\DG_\\op{m}}} \\times\n \\underset{a_3}{{\\DG_\\op{m}}})|_{a_1^n+a_2^n+a_3^n=0}\n \\rightarrow S]\\\\\n \\notag\n & -[Z \\times^{\\upmu_n \\times \\upmu_n\n \\times \\upmu_n}\n (\\underset{a_1}{{\\DG_\\op{m}}} \\times\n \\underset{a_2}{{\\DG_\\op{m}}} \\times \\underset{a_3}{{\\DG_\\op{m}}})|_{a_1^n+a_2^n=0\n \\atop 1 \\not= a_3^n}\n \\rightarrow S]\\\\\n \\notag\n & + [Z \\times^{\\upmu_n \\times \\upmu_n\n \\times \\upmu_n}\n (\\underset{a_1}{{\\DG_\\op{m}}} \\times\n \\underset{a_2}{{\\DG_\\op{m}}} \\times \\underset{a_3}{{\\DG_\\op{m}}})|_{a_1^n+a_2^n=0}\n \\rightarrow S]\n \\end{align*}\n The last two summands simplify to \n \\begin{equation*}\n +[Z \\times^{\\upmu_n \\times \\upmu_n\n \\times 
\\upmu_n}\n (\\underset{a_1}{{\\DG_\\op{m}}} \\times\n \\underset{a_2}{{\\DG_\\op{m}}} \\times\n \\underset{a_3}{{\\DG_\\op{m}}})|_{a_1^n+a_2^n=0 \n \\atop 1=a_3^n} \n \\rightarrow S].\n \\end{equation*}\n The two conditions\n $a_1^n+a_2^n=0$ and $1=a_3^n$ are equivalent to\n the two conditions\n $a_1^n+a_2^n+a_3^n=1$ and $1=a_3^n$.\n Hence we can further simplify and obtain\n \\begin{align*}\n \\Psi(\\Psi_{13}(Z)) = &\n + [Z \\times^{\\upmu_n \\times \\upmu_n\n \\times \\upmu_n}\n (\\underset{a_1}{{\\DG_\\op{m}}} \\times\n \\underset{a_2}{{\\DG_\\op{m}}} \\times\n \\underset{a_3}{{\\DG_\\op{m}}})|_{a_1^n+a_2^n+a_3^n=1}\n \\rightarrow S]\\\\\n & -[Z \\times^{\\upmu_n \\times \\upmu_n\n \\times \\upmu_n}\n (\\underset{a_1}{{\\DG_\\op{m}}} \\times\n \\underset{a_2}{{\\DG_\\op{m}}} \\times\n \\underset{a_3}{{\\DG_\\op{m}}})|_{a_1^n+a_2^n+a_3^n=0}\n \\rightarrow S]\\\\\n = &\n \\Psi_{123}(Z).\n \\end{align*}\n Here the last equality holds by definition\n \\eqref{eq:psi-123}.\n\\end{proof}\n\n\\begin{definition}\n [{Convolution product}]\n \\label{d:convolution}\n \n The convolution product $*$ on $K_0({\\op{Var}}_S^{\\upmu_n})$ is\n defined as the $K_0({\\op{Var}}_{\\mathsf{k}})$-linear composition\n \\begin{equation}\n \\label{eq:25}\n * \\colon K_0({\\op{Var}}_S^{\\upmu_n}) \\otimes_{K_0({\\op{Var}}_{\\mathsf{k}})}\n K_0({\\op{Var}}_S^{\\upmu_n}) \n \\xra{\\times_S} K_0({\\op{Var}}_S^{\\upmu_n \\times \\upmu_n})\n \\xra{\\Psi} K_0({\\op{Var}}_S^{\\upmu_n})\n \\end{equation}\n where the first map $\\times_S$ is \n the $K_0({\\op{Var}}_{\\mathsf{k}})$-linear map\n induced by mapping a pair\n $(A,B)$ of \n $S$-varieties with good $\\upmu_n$-action to the class of the\n $S$-variety $|A \\times_S B|$ with good $(\\upmu_n \\times\n \\upmu_n)$-action. 
\n\\end{definition}\n\nMore explicitly, if $A \\rightarrow S$ and $B \\rightarrow S$ are\n$S$-varieties with good $\\upmu_n$-action, then\n\\begin{align}\n \\label{eq:convolution-explicit}\n [A \\rightarrow S] * [B \\rightarrow S] = \n & -[(|A \\times_S B| \\times^{\\upmu_n \\times \\upmu_n}\n \\underset{x}{{\\DG_\\op{m}}} \\times \n \\underset{y}{{\\DG_\\op{m}}})|_{x^n+y^n=1} \\rightarrow S]\\\\\n \\notag\n & +[(|A \\times_S B| \\times^{\\upmu_n \\times \\upmu_n}\n \\underset{x}{{\\DG_\\op{m}}} \\times \n \\underset{y}{{\\DG_\\op{m}}})|_{x^n+y^n=0}\n \\rightarrow S]\n \\\\\n \\notag\n =\n & -[|(A \\times_S B \\times^{\\upmu_n \\times \\upmu_n}\n \\underset{x}{{\\DG_\\op{m}}} \\times \n \\underset{y}{{\\DG_\\op{m}}})|_{x^n+y^n=1}| \\rightarrow S]\n \\\\\n \\notag\n & +[|(A \\times_S B \\times^{\\upmu_n \\times \\upmu_n}\n \\underset{x}{{\\DG_\\op{m}}} \\times \n \\underset{y}{{\\DG_\\op{m}}})|_{x^n+y^n=0}|\n \\rightarrow S]. \n\\end{align}\nThe second equality comes from the fact that taking the reduced\nsubscheme structure commutes with fiber \nproducts \n(\\cite[Prop.~4.34]{goertz-wedhorn-AGI})\nand with quotients under the action of a finite group.\n\n\\begin{remark}\n \\label{rem:action-on-B-trivial}\n Let $A \\rightarrow S$ and $B \\rightarrow S$ be\n $S$-varieties with good $\\upmu_n$-action, and assume that the\n $\\upmu_n$-action on $B$ is trivial.\n \n \n As in\n Example~\\ref{ex:p-product-over-k-trivial-on-one-factor}\n we deduce from \\eqref{eq:convolution-explicit} that\n \\begin{multline*}\n \n [A] * [B]\n = \n -[|((A \\times^{\\upmu_n}\n \\underset{x}{{\\DG_\\op{m}}}) \\times_S \n (B \\times \\underset{y'}{{\\DG_\\op{m}}}))|_{x^n+y'=1}|]\n +[|((A \\times^{\\upmu_n}\n \\underset{x}{{\\DG_\\op{m}}}) \\times_S \n (B \\times \\underset{y'}{{\\DG_\\op{m}}}))|_{x^n+y'=0}|]\n \\\\\n = \n -[|((A \\times^{\\upmu_n}\n \\underset{x}{{\\DG_\\op{m}}}) \\times_S B)|_{x^n\\not=1}|]\n +[|(A \\times^{\\upmu_n} {\\DG_\\op{m}}) \\times_S B|]\n \\\\\n = [|(A 
\\times^{\\upmu_n} \\upmu_n) \\times_S B|]\n = [|A \\times_S B|]= [A] [B].\n \\end{multline*}\n\\end{remark}\n\n\\begin{proposition}\n [{\\cite[Prop.~5.2]{guibert-loeser-merle-convolution}}]\n \\label{p:convolution-comm-ass-unit}\n Let $S$ be a ${\\mathsf{k}}$-variety and $n \\geq 1$.\n The convolution product $*$ turns $K_0({\\op{Var}}_S^{\\upmu_n})$ into\n an associative commutative unital $K_0({\\op{Var}}_{\\mathsf{k}})$-algebra. The\n identity element is \n the class of $(\\operatorname{id}_S \\colon S \\rightarrow S)$ where $\\upmu_n$ acts\n trivially on $S$. \n We denote this ring by $(K_0({\\op{Var}}_S^{\\upmu_n}),*)$.\n\\end{proposition}\n\n\\begin{proof}\n Clearly, the convolution product is commutative.\n Remark~\\ref{rem:action-on-B-trivial}\n shows that $[\\operatorname{id}_S \\colon S \\rightarrow S]$ is the identity with respect to\n the convolution \n product. \n Associativity follows from \n Proposition~\\ref{p:Psi-associative}:\n \\begin{multline*}\n ([A] * [B]) * [C] \n = \\Psi(\\Psi_{12}([|A \\times_S B \\times_S C|])) \n = \\Psi(\\Psi_{23}([|A \\times_S B \\times_S C|])) \n = [A] * ([B] * [C]).\n \\end{multline*}\n Here we again use that passing to the reduced subscheme structure\n commutes with fiber products and taking quotients under the\n action of a finite group.\n\\end{proof}\n\n\\begin{remark}\n \\label{rem:convolution-n=1}\n For $n=1$ the convolution product $*$ on $K_0({\\op{Var}}_S^{\\upmu_1})$\n coincides with the product on\n $K_0({\\op{Var}}_S)=K_0({\\op{Var}}_S^{\\upmu_1})$,\n so $K_0({\\op{Var}}_S)=(K_0({\\op{Var}}_S^{\\upmu_1}),*)$ as $K_0({\\op{Var}}_{\\mathsf{k}})$-algebras.\n This follows immediately from \n Remark~\\ref{rem:action-on-B-trivial}.\n\\end{remark}\n\nLet $(Z \\rightarrow S) \\in {\\op{Var}}_S^{\\upmu_n \\times \\upmu_n}$ and assume\nthat $n'=dn$ is a multiple of $n$. 
Then the morphism\n$Z \\times {\\DG_\\op{m}} \\times {\\DG_\\op{m}} \\rightarrow Z \\times {\\DG_\\op{m}} \\times {\\DG_\\op{m}}$,\n$(z,x,y) \\mapsto (z,x^d,y^d)$ defines an isomorphism\n\\begin{equation}\n \\label{eq:22}\n Z \\times^{\\upmu_{n'} \\times \\upmu_{n'}} {\\DG_\\op{m}} \\times {\\DG_\\op{m}} \\xra{\\sim} \n Z \\times^{\\upmu_n \\times \\upmu_n} {\\DG_\\op{m}} \\times {\\DG_\\op{m}}\n\\end{equation}\nin ${\\op{Var}}_S^{\\upmu_{n'}}$.\nThis implies that $\\Psi$ is compatible with the morphisms\n$K_0({\\op{Var}}_S^{\\upmu_n \\times \\upmu_n}) \\rightarrow K_0({\\op{Var}}_S^{\\upmu_{n'}\n \\times \\upmu_{n'}})$\nand \n$K_0({\\op{Var}}_S^{\\upmu_n}) \\rightarrow K_0({\\op{Var}}_S^{\\upmu_{n'}})$,\ncf.\\ \\eqref{eq:28}, and so is the \nfirst map in \\eqref{eq:25}.\nWe deduce that the obvious morphism\n\\begin{equation*}\n \n (K_0({\\op{Var}}_S^{\\upmu_n}),*) \\rightarrow (K_0({\\op{Var}}_S^{\\upmu_{n'}}),*)\n\\end{equation*}\nis a map of $K_0({\\op{Var}}_{\\mathsf{k}})$-algebras.\nHence convolution turns $K_0({\\op{Var}}_S^{\\hat{\\upmu}})$ \ninto an associative commutative unital \n$K_0({\\op{Var}}_{\\mathsf{k}})$-algebra; we denote this algebra by \n$(K_0({\\op{Var}}_S^{\\hat{\\upmu}}),*)$. \n \nIf $f \\colon T \\rightarrow S$ is a morphism of ${\\mathsf{k}}$-varieties, \nthe \npullback maps \n$f^* \\colon (K_0({\\op{Var}}_S^{\\upmu_n}),*) \\rightarrow\n(K_0({\\op{Var}}_T^{\\upmu_n}),*)$\nand\n$f^* \\colon (K_0({\\op{Var}}_S^{\\hat{\\upmu}}),*) \\rightarrow\n(K_0({\\op{Var}}_T^{\\hat{\\upmu}}),*)$\nare maps of $K_0({\\op{Var}}_{\\mathsf{k}})$-algebras (use\nRemark~\\ref{rem:Psi-compatible-with-pushforward-pullback}\nand that the first map in \\eqref{eq:25} is compatible\nwith pullbacks). 
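\n\nThe following special case of the isomorphism \\eqref{eq:22} is immediate from the definitions; we spell it out as a sanity check.\n\\begin{example}\n \\label{ex:mu-one-to-mu-d-compatibility}\n Take $n=1$ and $n'=d$ in \\eqref{eq:22}, so that the $(\\upmu_1 \\times \\upmu_1)$-action on $Z$ is trivial. Then \\eqref{eq:22} becomes the isomorphism\n \\begin{equation*}\n Z \\times^{\\upmu_d \\times \\upmu_d} {\\DG_\\op{m}} \\times {\\DG_\\op{m}}\n \\xra{\\sim}\n Z \\times {\\DG_\\op{m}} \\times {\\DG_\\op{m}},\n \\quad\n [z,x,y] \\mapsto (z,x^d,y^d).\n \\end{equation*}\n It is well defined because $x^d$ and $y^d$ are invariant under the $(\\upmu_d \\times \\upmu_d)$-action on the two factors ${\\DG_\\op{m}}$, and it is bijective because the fibers of $(x,y) \\mapsto (x^d,y^d)$ on ${\\DG_\\op{m}} \\times {\\DG_\\op{m}}$ are precisely the $(\\upmu_d \\times \\upmu_d)$-orbits.\n\\end{example}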
\n\n\nWe also want to define a convolution product on\n$\\mathcal{M}_S^{\\upmu_n}$ and $\\mathcal{M}_S^{\\hat{\\upmu}}$.\n\nConsider \nthe localization of \n$(K_0({\\op{Var}}_S^{\\upmu_n}),*)$ at the multiplicative set $\\{1,\n\\mathbb{L}_S, \\mathbb{L}_S * \\mathbb{L}_S, \\dots\\}$. The $m$-fold convolution\nproduct of $\\mathbb{L}_S=[{\\mathbb{A}}^1_S]$ with itself is $[{\\mathbb{A}}^m_S]$ \nand we have \n$[A]*[{\\mathbb{A}}^m_S]=[A][{\\mathbb{A}}^m_S]$ for $[A] \\in K_0({\\op{Var}}_S^{\\upmu_n})$,\nby\nRemark~\\ref{rem:action-on-B-trivial}. Hence the underlying\nabelian group of\nthis localization is canonically identified with the underlying\nabelian group of $\\mathcal{M}_S^{\\upmu_n}$. We can therefore\ndenote the above localization by $(\\mathcal{M}_S^{\\upmu_n},*)$.\n\nBecause the structure morphism $K_0({\\op{Var}}_{\\mathsf{k}}) \\rightarrow\n(K_0({\\op{Var}}_S^{\\upmu_n}),*)$ sends $\\mathbb{L}_{\\mathsf{k}}$ to $\\mathbb{L}_S$, we obtain\na canonical isomorphism\n\\begin{equation}\n \\label{eq:26}\n \\mathcal{M}_{\\mathsf{k}} \\otimes_{K_0({\\op{Var}}_{\\mathsf{k}})} (K_0({\\op{Var}}_S^{\\upmu_n}),*)\n \\xra{\\sim}\n (\\mathcal{M}_S^{\\upmu_n},*)\n\\end{equation}\nof $\\mathcal{M}_{\\mathsf{k}}$-algebras which we will often treat as an\nequality in the following. Its underlying morphism of \n$\\mathcal{M}_{\\mathsf{k}}$-modules coincides with \\eqref{eq:15}.\n\nSimilarly, we define the convolution product $*$ on\n$\\mathcal{M}_S^{\\hat{\\upmu}}$ and obtain the $\\mathcal{M}_{\\mathsf{k}}$-algebra\n$\\mathcal{M}_{\\mathsf{k}} \\otimes_{K_0({\\op{Var}}_{\\mathsf{k}})} (K_0({\\op{Var}}_S^{\\hat{\\upmu}}),*)=\n(\\mathcal{M}_S^{\\hat{\\upmu}},*)$. 
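\n\nThe following example makes the preceding identification of underlying abelian groups explicit.\n\\begin{example}\n \\label{ex:convolution-localization-L}\n For $[A] \\in K_0({\\op{Var}}_S^{\\upmu_n})$ and $m \\geq 1$, Remark~\\ref{rem:action-on-B-trivial} yields\n \\begin{equation*}\n [A] * \\underbrace{\\mathbb{L}_S * \\dots * \\mathbb{L}_S}_{m \\text{ factors}}\n = [A] * [{\\mathbb{A}}^m_S]\n = [A][{\\mathbb{A}}^m_S]\n = \\mathbb{L}_{\\mathsf{k}}^m \\cdot [A],\n \\end{equation*}\n so inverting $\\mathbb{L}_S$ with respect to the convolution product $*$ amounts to inverting multiplication by $\\mathbb{L}_{\\mathsf{k}}$ on the $K_0({\\op{Var}}_{\\mathsf{k}})$-module $K_0({\\op{Var}}_S^{\\upmu_n})$.\n\\end{example}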
\nThe map $\\Psi$ from \\eqref{eq:10} gives in the obvious way rise to a morphism\n\\begin{equation*}\n \n \\Psi \n \\colon \n \\mathcal{M}_S^{\\hat{\\upmu} \\times \\hat{\\upmu}} \n \\rightarrow \n \\mathcal{M}_S^{\\hat{\\upmu}}\n\\end{equation*}\nof $\\mathcal{M}_{\\mathsf{k}}$-modules; \nthe \nconvolution product $*$ on\n$\\mathcal{M}_S^{\\hat{\\upmu}}$ is then given as the composition\n\\begin{equation*}\n * \\colon \\mathcal{M}_S^{\\hat{\\upmu}} \\otimes_{\\mathcal{M}_{\\mathsf{k}}}\n \\mathcal{M}_S^{\\hat{\\upmu}}\n \\xra{\\times_S} \\mathcal{M}_S^{\\hat{\\upmu} \\times \\hat{\\upmu}}\n \\xra{\\Psi} \\mathcal{M}_S^{\\hat{\\upmu}}.\n\\end{equation*}\n\nGiven $f \\colon T \\rightarrow S$ as above, we obtain pullback maps\n$f^* \\colon (\\mathcal{M}_S^{\\upmu_n},*) \\rightarrow (\\mathcal{M}_T^{\\upmu_n},*)$\nand\n$f^* \\colon (\\mathcal{M}_S^{\\hat{\\upmu}},*) \\rightarrow\n(\\mathcal{M}_T^{\\hat{\\upmu}},*)$ of\n$\\mathcal{M}_{\\mathsf{k}}$-algebras. \nUnder the isomorphisms \\eqref{eq:26} they are just obtained by\nscalar extension along $K_0({\\op{Var}}_{\\mathsf{k}}) \\rightarrow \\mathcal{M}_{\\mathsf{k}}$ from\nthe previous pullback maps.\n\n\\subsection{Convolution of varieties over\n \\texorpdfstring{${\\mathbb{A}}^1_{\\mathsf{k}}$}{A1}}\n\\label{sec:conv-vari-over}\n\nWe now use that ${\\mathbb{A}}^1_{\\mathsf{k}}$ is a commutative \ngroup\n${\\mathsf{k}}$-variety.\nLet $\\operatorname{add} \\colon {\\mathbb{A}}^1_{\\mathsf{k}} \\times {\\mathbb{A}}^1_{\\mathsf{k}} \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}$, $(x,y)\n\\mapsto x+y$, be the \naddition morphism. 
Let $n \\geq 1$.\n\n\\begin{definition}\n [{Convolution over ${\\mathbb{A}}^1_{\\mathsf{k}}$}]\n \\label{d:convolution-over-A1}\n \n The convolution product $\\star$ on $K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\upmu_n})$ is\n defined as the $K_0({\\op{Var}}_{\\mathsf{k}})$-linear composition\n \\begin{equation}\n \\label{eq:convolution-A1}\n \\star \\colon K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\upmu_n}) \\otimes_{K_0({\\op{Var}}_{\\mathsf{k}})}\n K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\upmu_n}) \n \\xra{\\times}\n K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}} \\times {\\mathbb{A}}^1_{\\mathsf{k}}}^{\\upmu_n \\times \\upmu_n})\n \\xra{\\operatorname{add}_!}\n K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\upmu_n \\times \\upmu_n})\n \\xra{\\Psi} K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\upmu_n})\n \\end{equation}\n where the first map $\\times$ is \n the $K_0({\\op{Var}}_{\\mathsf{k}})$-linear map\n induced by mapping a pair\n $(A,B)$ of \n ${\\mathbb{A}}^1_{\\mathsf{k}}$-varieties with good $\\upmu_n$-action to the\n $S$-variety $A \\times B$ with good $(\\upmu_n \\times\n \\upmu_n)$-action. 
\n\\end{definition}\n\nBy Remark~\\ref{rem:Psi-compatible-with-pushforward-pullback} we\nhave\n\\begin{equation*}\n A \\star B\n =\\Psi(\\operatorname{add}_!(A \\times B)) \n =\\operatorname{add}_!(\\Psi(A \\times B))\n\\end{equation*}\nfor $A$, $B \\in \nK_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\upmu_n})$.\n\n\n\\begin{remark}\n \\label{rem:action-on-B-trivial-over-A1}\n Let $A \\xra{\\alpha} {\\mathbb{A}}^1_{\\mathsf{k}}$ and $B \\xra{\\beta} {\\mathbb{A}}^1_{\\mathsf{k}}$ be\n ${\\mathbb{A}}^1_{\\mathsf{k}}$-varieties with good $\\upmu_n$-action, and assume\n that the \n $\\upmu_n$-action on $B$ is trivial.\n Then \n Example~\\ref{ex:p-product-over-k-trivial-on-one-factor}\n \n \n implies that\n \\begin{equation*}\n \n [A \\xra{\\alpha} {\\mathbb{A}}^1_{\\mathsf{k}}] \\star [B \\xra{\\beta} {\\mathbb{A}}^1_{\\mathsf{k}}] =\n [A \\times B \\xra{\\alpha \\circledast \\beta} {\\mathbb{A}}^1_{\\mathsf{k}}]\n \\end{equation*}\n where $(\\alpha \\circledast \\beta)(a,b)=\\alpha(a)+\\beta(b)$;\n the $\\upmu_n$-action on $A \\times B$ is the obvious diagonal\n action $(a,b).t=(a.t,b.t)=(a.t,b)$.\n\\end{remark}\n\n\\begin{remark}\n \\label{rem:no-group-action}\n In the case $n=1$ the convolution product $\\star$ on\n $K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\upmu_1})= K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}})$\n satisfies\n \\begin{equation*}\n \n [A \\xra{\\alpha} {\\mathbb{A}}^1_{\\mathsf{k}}] \\star [B \\xra{\\beta} {\\mathbb{A}}^1_{\\mathsf{k}}] =\n [A \\times B \\xra{\\alpha \\circledast \\beta} {\\mathbb{A}}^1_{\\mathsf{k}}]\n \\end{equation*}\n where $A \\xra{\\alpha} {\\mathbb{A}}^1_{\\mathsf{k}}$ and $B \\xra{\\beta} {\\mathbb{A}}^1_{\\mathsf{k}}$ \n are \n ${\\mathbb{A}}^1_{\\mathsf{k}}$-varieties.\n This is a special case of\n Remark~\\ref{rem:action-on-B-trivial-over-A1}. 
\n\\end{remark}\n\n\\begin{proposition}\n \\label{p:A1-convolution-comm-ass-unit}\n The convolution product $\\star$ \n turns \n $K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\upmu_n})$\n \n into an associative commutative $K_0({\\op{Var}}_{\\mathsf{k}})$-algebra with\n identity element \n $[\\operatorname{Spec} {\\mathsf{k}} \\xra{0} {\\mathbb{A}}^1_{\\mathsf{k}}]$.\n \n We denote this ring by $(K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\upmu_n}), \\star)$.\n\\end{proposition}\n\n\\begin{proof}\n Commutativity follows from commutativity of ${\\mathbb{A}}^1_{\\mathsf{k}}$.\n That \n $[\\operatorname{Spec} {\\mathsf{k}} \\xra{0} {\\mathbb{A}}^1_{\\mathsf{k}}]$ is the identity element with respect\n to $\\star$ follows from\n Remark~\\ref{rem:action-on-B-trivial-over-A1}.\n Denote the morphism $({\\mathbb{A}}^1_{\\mathsf{k}})^{\\times 3} \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}$,\n $(x,y,z) \n \\mapsto x+y+z$ by $\\op{addd}$.\n Remark~\\ref{rem:Psi-compatible-with-pushforward-pullback} and\n Proposition~\\ref{p:Psi-associative}\n show that\n \\begin{align*}\n (A \\star B) \\star C \n & = \n \\operatorname{add}_!(\\Psi(\\operatorname{add}_!(\\Psi(A \\times B)) \\times C))\\\\\n & = \n \\operatorname{add}_!((\\operatorname{add} \\times \\operatorname{id})_!(\\Psi(\\Psi(A \\times B) \\times C)))\\\\\n & = \\op{addd}_!\n (\\Psi(\\Psi_{12}(A \\times B \\times C)))\\\\\n & = \n \\op{addd}_!(\\Psi_{123}(A \\times B \\times C)).\n \\end{align*}\n A similar computation shows that the last term equals $A\n \\star \n (B \\star C)$. This proves associativity.\n\\end{proof}\n\nMapping a ${\\mathsf{k}}$-variety $(A \\rightarrow \\operatorname{Spec} {\\mathsf{k}})$ to $(A \\xra{0}\n{\\mathbb{A}}^1_{\\mathsf{k}})$ induces a morphism of unital rings\n\\begin{equation}\n \\label{eq:46}\n K_0({\\op{Var}}_{\\mathsf{k}}) \\rightarrow (K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\upmu_n}), \\star)\n\\end{equation}\nas follows immediately from\nRemark~\\ref{rem:action-on-B-trivial-over-A1}. 
\nThis map is the structure map of the $K_0({\\op{Var}}_{\\mathsf{k}})$-algebra\n$(K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\upmu_n}), \\star)$.\nDenote the image of $\\mathbb{L}_{\\mathsf{k}}$ under this map by\n\\begin{equation}\n \\label{eq:def-L-A1-0}\n \\mathbb{L}_{({\\mathbb{A}}^1_{\\mathsf{k}},0)}:=[{\\mathbb{A}}^1_{\\mathsf{k}} \\xra{0} {\\mathbb{A}}^1_{\\mathsf{k}}] \\in\n K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\upmu_n}). \n\\end{equation}\n\nLet us denote the localization of \n$(K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\upmu_n}), \\star)$ with respect to the\nmultiplicative set $\\{1, \\mathbb{L}_{({\\mathbb{A}}^1_{\\mathsf{k}},0)}, \\mathbb{L}_{({\\mathbb{A}}^1_{\\mathsf{k}},0)}\n\\star \\mathbb{L}_{({\\mathbb{A}}^1_{\\mathsf{k}},0)}, \\dots\\}$ by\n$(\\tilde{\\mathcal{M}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\upmu_n}, \\star)$. Then there is a canonical isomorphism\n\\begin{equation*}\n \n \\mathcal{M}_{\\mathsf{k}} \\otimes_{K_0({\\op{Var}}_{\\mathsf{k}})}\n (K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\upmu_n}), \\star) \n \\xra{\\sim}\n (\\tilde{\\mathcal{M}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\upmu_n}, \\star)\n\\end{equation*}\nof $\\mathcal{M}_{\\mathsf{k}}$-algebras given by $\\tfrac{r}{(\\mathbb{L}_{\\mathsf{k}})^n}\n\\otimes a \\mapsto \\tfrac{ra}{(\\mathbb{L}_{({\\mathbb{A}}^1_{\\mathsf{k}},0)})^n}$. 
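\n\nThe elements of the above multiplicative set can be computed explicitly; the following is immediate from Remark~\\ref{rem:action-on-B-trivial-over-A1}.\n\\begin{example}\n \\label{ex:star-powers-of-L-A1-0}\n Since all $\\upmu_n$-actions involved are trivial, we have\n \\begin{equation*}\n \\mathbb{L}_{({\\mathbb{A}}^1_{\\mathsf{k}},0)} \\star \\mathbb{L}_{({\\mathbb{A}}^1_{\\mathsf{k}},0)}\n = [{\\mathbb{A}}^1_{\\mathsf{k}} \\times {\\mathbb{A}}^1_{\\mathsf{k}} \\xra{0 \\circledast 0} {\\mathbb{A}}^1_{\\mathsf{k}}]\n = [{\\mathbb{A}}^2_{\\mathsf{k}} \\xra{0} {\\mathbb{A}}^1_{\\mathsf{k}}],\n \\end{equation*}\n and inductively the $m$-fold $\\star$-power of $\\mathbb{L}_{({\\mathbb{A}}^1_{\\mathsf{k}},0)}$ is $[{\\mathbb{A}}^m_{\\mathsf{k}} \\xra{0} {\\mathbb{A}}^1_{\\mathsf{k}}]$, affine $m$-space mapped constantly to $0 \\in {\\mathbb{A}}^1_{\\mathsf{k}}$.\n\\end{example}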
\nIf we compare with \nthe isomorphism~\\eqref{eq:15} we see that\n$\\tfrac{b}{(\\mathbb{L}_{{\\mathbb{A}}^1_{\\mathsf{k}}})^n} \\mapsto\n\\tfrac{b}{(\\mathbb{L}_{({\\mathbb{A}}^1_{\\mathsf{k}},0)})^n}$\ndefines an isomorphism \n\\begin{equation}\n \\label{eq:71}\n \\mathcal{M}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\upmu_n}\n \\xra{\\sim}\n \\tilde{\\mathcal{M}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\upmu_n}\n\\end{equation}\nof $\\mathcal{M}_{\\mathsf{k}}$-modules.\n\nSimilarly as above (cf.\\ the reasoning around \\eqref{eq:22}),\nthe various $K_0({\\op{Var}}_{\\mathsf{k}})$-algebras\n$(K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\upmu_n}), \\star)$ for $n \\geq 1$ are\ncompatible. Hence we obtain the \n$K_0({\\op{Var}}_{\\mathsf{k}})$-algebras\n$(K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\hat{\\upmu}}), \\star)$\nand\n\\begin{equation*}\n \n \\mathcal{M}_{\\mathsf{k}} \\otimes_{K_0({\\op{Var}}_{\\mathsf{k}})}\n (K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\hat{\\upmu}}), \\star) \n \\xra{\\sim}\n (\\tilde{\\mathcal{M}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\hat{\\upmu}}, \\star)\n\\end{equation*}\nand an isomorphism\n\\begin{equation}\n \\label{eq:62}\n \\mathcal{M}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\hat{\\upmu}}\n \\xra{\\sim}\n \\tilde{\\mathcal{M}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\hat{\\upmu}}\n\\end{equation}\nof $\\mathcal{M}_{\\mathsf{k}}$-modules.\n\n\\begin{lemma}\n \\label{l:convolution-products-compatible}\n Let $\\epsilon \\colon {\\mathbb{A}}^1_{\\mathsf{k}} \\rightarrow \\operatorname{Spec} {\\mathsf{k}}$ be the structure\n morphism. \n Then mapping an object $(A \\xra{\\alpha} {\\mathbb{A}}^1_{\\mathsf{k}}) \\in\n {\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\upmu_n}$ to $(A \\xra{\\epsilon \\alpha} \\operatorname{Spec} {\\mathsf{k}})\n \\in {\\op{Var}}^{\\upmu_n}_{\\mathsf{k}}$ induces a morphism\n \\begin{equation}\n \\label{epsilon!-star-*}\n \\epsilon_! 
\\colon (\\tilde{\\mathcal{M}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\hat{\\upmu}}, \\star)\n \\rightarrow (\\mathcal{M}_k^{\\hat{\\upmu}}, *)\n \\end{equation}\n \n of $\\mathcal{M}_{\\mathsf{k}}$-algebras.\n\\end{lemma}\n\n\\begin{proof}\n Certainly we have a morphism\n \\begin{equation}\n \\label{eq:33}\n \\epsilon_! \\colon K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\upmu_n})\n \\rightarrow K_0({\\op{Var}}_k^{\\upmu_n}) \n \\end{equation}\n of $K_0({\\op{Var}}_{\\mathsf{k}})$-modules.\n If $Z$ is a ${\\mathsf{k}}$-variety we denote its structure morphism\n $Z \\rightarrow \\operatorname{Spec} {\\mathsf{k}}$ by $\\epsilon^Z$.\n Let $A, B \\in K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\upmu_n})$.\n Since $\\epsilon_!(A)$ and $\\epsilon_!(B)$ are in\n $K_0({\\op{Var}}_k^{\\upmu_n})$, \n $\\epsilon_!(A) * \\epsilon_!(B)$ is defined using the fiber\n product over ${\\mathsf{k}}$.\n Using Remark~\\ref{rem:Psi-compatible-with-pushforward-pullback}\n we obtain\n \\begin{multline*}\n \\epsilon_!(A \\star B) \n = \\epsilon_!(\\operatorname{add}_!(\\Psi(A \\times B)))\n = \\epsilon^{{\\mathbb{A}}^1_{\\mathsf{k}} \\times {\\mathbb{A}}^1_{\\mathsf{k}}}_!(\\Psi(A \\times B))\n = \\Psi(\\epsilon^{{\\mathbb{A}}^1_{\\mathsf{k}} \\times {\\mathbb{A}}^1_{\\mathsf{k}}}_!(A \\times B))\\\\\n = \\Psi(\\epsilon_!(A) \\times \\epsilon_!(B))\n = \\epsilon_!(A) * \\epsilon_!(B).\n \\end{multline*}\n Clearly, \\eqref{eq:33} maps $[\\operatorname{Spec} {\\mathsf{k}} \\xra{0} {\\mathbb{A}}^1_{\\mathsf{k}}]$ to\n $[\\operatorname{Spec} {\\mathsf{k}} \\rightarrow \\operatorname{Spec} {\\mathsf{k}}]$. Therefore it is a morphism of\n $K_0({\\op{Var}}_{\\mathsf{k}})$-algebras\n $\\epsilon_! \\colon (K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\upmu_n}), \\star)\n \\rightarrow (K_0({\\op{Var}}_k^{\\upmu_n}),*)$. \n We can pass to $\\hat{\\upmu}$. 
Then\n base change along $K_0({\\op{Var}}_{\\mathsf{k}}) \\rightarrow\n \\mathcal{M}_{\\mathsf{k}}$ (or noting that\n $\\mathbb{L}_{({\\mathbb{A}}^1_{\\mathsf{k}},0)}$ goes to $\\mathbb{L}_{\\operatorname{Spec} {\\mathsf{k}}}$) yields a\n morphism\n $\\epsilon_! \\colon (\\tilde{\\mathcal{M}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\upmu_n},\n \\star) \\rightarrow (\\mathcal{M}_k^{\\upmu_n}, *)$\n of $\\mathcal{M}_{\\mathsf{k}}$-algebras. The lemma follows.\n\\end{proof}\n\n\\section{Motivic vanishing cycles}\n\\label{sec:motiv-vanish-fiber}\n\nLet $X$ be a smooth connected (nonempty) ${\\mathsf{k}}$-variety and let \n$V \\colon X \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}$ be a morphism. \nGiven $a \\in {\\mathsf{k}} = {\\mathbb{A}}^1_{\\mathsf{k}}({\\mathsf{k}})$ we\ndenote by $X_a$ the scheme theoretic fiber of $V$ over $a$.\n\nWe quickly review the definition of the motivic vanishing\ncycles. For details we refer to \n\\cite[Sect.~3]{guibert-loeser-merle-convolution};\nnote however that we use slightly different signs, see Remark \n\\ref{rem:motivic-fibers-signs} below. \nFollowing Denef and Loeser, the \\define{motivic zeta function of\n $V$ at $a$} is a certain power series \n\\begin{equation*}\n Z_{V,a}(T) \\in \\mathcal{M}^{\\hat{\\upmu}}_{|X_a|}[[T]]\n\\end{equation*}\nwhose coefficients are defined using arc spaces, see\n\\cite[(3.2.2)]{guibert-loeser-merle-convolution}.\nIt is possible to evaluate $Z_{V,a}$ at $T=\\infty$. \nThis is clear if $V$ is constant because then $Z_{V,a}=0$. \nIf $V$ is not \nequal to $a$\nthere is a formula expressing $Z_{V,a}$ in terms of an embedded\nresolution of $|X_a| \\subset X$\nwhich makes it evident that the\nevaluation at $T=\\infty$ exists.\n \nThe \\define{motivic nearby fiber\n $\\psi_{V,a}$ of $V$ at $a$} is defined to be\nthe negative of this value at \ninfinity, i.\\,e.\\\n\\begin{equation*}\n \\psi_{V,a} := -Z_{V,a}(\\infty) \\in\n \\mathcal{M}^{\\hat{\\upmu}}_{|X_a|}. 
\n\\end{equation*}\nSee \\eqref{eq:formula-mot-nearby-fiber} below for a formula for\n$\\psi_{V,a}$ in terms of an embedded resolution.\nThe \\define{motivic vanishing cycles of $V$ at $a$} are defined by\n\\begin{equation}\n \\label{eq:defi-mot-van-fiber}\n \\phi_{V,a}:= [|X_a| \\xra{\\operatorname{id}} |X_a|] \n -\\psi_{V,a}\n \\in\n \\mathcal{M}^{\\hat{\\upmu}}_{|X_a|}. \n\\end{equation}\nHere $|X_a|$ is endowed with the trivial\n$\\hat{\\upmu}$-action.\n\n\\begin{remark}\n \\label{rem:constant-potential}\n If $V$ is constant we have $\\psi_{V,a}=0$. If $V$ is\n constant \n $\\not=a$ we have $\\phi_{V,a}=0$. If $V$ is constant $=a$ we\n have $X=|X_a|$ and $\\phi_{V,a}=[X \\xra{\\operatorname{id}} X]$.\n\\end{remark}\n\n\\begin{remark}\n \\label{rem:motivic-fibers-signs} \n Denef and Loeser choose different signs in the definition of\n the motivic vanishing cycles. In \n \\cite{guibert-loeser-merle-convolution}, the motivic nearby\n fiber (resp.\\ motivic vanishing cycles) \n of $V$ at $0$ is denoted $\\mathcal{S}_V$ (resp.\\\n $\\mathcal{S}_V^\\phi$). They are related to our definitions by\n \\begin{align*}\n \n \\psi_{V,a} & = \\mathcal{S}_{V-a},\\\\\n \n \\phi_{V,a} & = (-1)^{\\dim X} \\mathcal{S}_{V-a}^\\phi.\n \\end{align*}\n Our sign choice for the motivic vanishing cycles is justified in \n Remark~\\ref{rem:justify-sign-vanishing}.\n\\end{remark}\n\nLet ${\\op{Sing}}(V) \\subset X$ be the closed subscheme defined by the \nvanishing of the section $dV \\in \\Gamma(X, \\Omega^1_{X\/{\\mathsf{k}}})$\nof the cotangent bundle.\nThe closed points of ${\\op{Sing}}(V)$ are the critical\npoints of $V$. Let ${\\op{Crit}}(V) =V({\\op{Sing}}(V)({\\mathsf{k}})) \\subset\n{\\mathbb{A}}^1_{\\mathsf{k}}({\\mathsf{k}})={\\mathsf{k}}$ \nbe the set of critical values of $V$; it is finite by generic\nsmoothness on the target. 
Trivially we have ${\\op{Sing}}(V) \\cap\nX_a=\\emptyset$ if $a$ is not a critical value.\n\nIf $Z$ is a scheme locally of finite type over ${\\mathsf{k}}$ we denote\nits open\nsubscheme\nconsisting of\nregular points by $Z^{\\op{reg}}$. The closed subset $Z^{\\op{sing}} \\subset Z$ of\nsingular points has a unique structure of a reduced closed\nsubscheme of $Z,$ denoted by $|Z^{\\op{sing}}|$.\n\n\\begin{remark}\n \\label{rem:SingV-cap-Xa-versus-Xa-sing}\n If $V=a$ then ${\\op{Sing}}(V) \\cap X_a=X$ and\n $(X_a)^{\\op{sing}}=\\emptyset$. \n Otherwise\n \n the singular\n points of $X_a$ are precisely the elements of the\n scheme-theoretic intersection ${\\op{Sing}}(V) \\cap X_a,$ i.\\,e.\\ we\n have the equality\n \\begin{equation}\n \\label{eq:reduced-SingV-cap-Xa-equals-reduced-Xa-sing}\n |{\\op{Sing}}(V) \\cap X_a| = |(X_a)^{\\op{sing}}| \n \\end{equation}\n of ${\\mathsf{k}}$-varieties. This is\n trivial if $V$ is constant $\\not=a,$ and otherwise it follows\n by considering Jacobian matrices.\n \n\\end{remark}\n\nLet us prove that the motivic vanishing cycles $\\phi_{V,a}$ live \nover $|{\\op{Sing}}(V) \\cap X_a|$.\n\n\\begin{proposition}\n \\label{p:motivic-vanishing-cycles-over-singular-part}\n We have $\\phi_{V,a} \\in\n \\mathcal{M}^{\\hat{\\upmu}}_{|{\\op{Sing}}(V) \\cap X_a|}$ canonically.\n\\end{proposition}\n\nTherefore we will often view the motivic vanishing cycles\n$\\phi_{V,a}$ as an \nelement of $\\mathcal{M}^{\\hat{\\upmu}}_{|{\\op{Sing}}(V) \\cap X_a|}$ in\nthe following. \n\n\\begin{proof}\n If $V$ is constant this follows directly from\n Remarks~\\ref{rem:constant-potential}\n and \\ref{rem:SingV-cap-Xa-versus-Xa-sing}.\n \n So let us assume that $V$ is not constant. 
\n As in \\cite[3.3]{denef-loeser-arc-spaces}\n let\n $h \\colon R \\rightarrow X$ be an embedded resolution of $|X_a|$\n so that the ideal sheaf of $h^{-1}(|X_a|)$ is the ideal sheaf of\n a simple normal crossing divisor\n (cf.\\ \\cite[Thm.~3.26]{kollar-singularities} or\n \\cite[Thm.~2.2]{villamayor-on-constructive-desingularization} \n for existence).\n Let $E=h^{-1}(X_a)$ be the divisor\n on $R$ defined by $V \\circ h - a$.\n Let\n $(E_i)_{i \\in \\op{Irr}(|E|)}$ \n denote the irreducible components of $|E|$.\n Then $E=\\sum_{i \\in \\op{Irr}(|E|)} m_i E_i$ for unique\n $m_i \\in \\mathbb{N}_+$. Let $I \\subset \\op{Irr}(|E|)$ be given. Define\n $E_I:= \\cap_{i \\in I} E_i$ and\n $E_I^\\circ:= E_I \\setminus \\bigcup_{j \\in \\op{Irr}(|E|) \\setminus I}\n E_j$.\n Let $m_I$ be the greatest common divisor of the $m_i$ for\n $i \\in I$. Then Denef and Loeser define an unramified Galois\n cover $\\tildew{E}^\\circ_I \\rightarrow E^\\circ_I$ with Galois group\n $\\upmu_{m_I}$. They establish the formula\n \\begin{equation}\n \\label{eq:formula-mot-nearby-fiber}\n \\psi_{V,a}= \\mathcal{S}_{V-a} = \\sum_{\\emptyset \\not=I \\subset \\op{Irr}(|E|)}\n (1-\\mathbb{L})^{|I|-1}[\\tildew{E}^\\circ_I \\rightarrow |X_a|],\n \\end{equation}\n see \\cite[Sect. 3.3 and Def.~3.5.3]{denef-loeser-arc-spaces}.\n\n \n Note that $h$ induces an\n isomorphism $h^{-1}(U) \\xra{\\sim} U$ where\n $U:= X - |X_a|^{\\op{sing}}$, by \n part (ii) of\n \\cite[Thm.~2.2]{villamayor-on-constructive-desingularization}.\n We can also deduce this from principalization \n \\cite[Thm.~3.26]{kollar-singularities} as follows.\n Since $|X_a|$ has codimension one and $h$ is a\n composition of blow-ups in smooth centers of codimension two\n and higher, \n $h$ is an isomorphism over an open neighborhood of some regular\n point \n of $|X_a|$ if $|X_a|$ is non-empty. \n Since principalization \n is \n functorial with respect to \\'etale morphisms, $h$ is an\n isomorphism over all regular points of $|X_a|$. 
\n\n We obviously have open embeddings $(X_a)^{\\op{reg}} = |(X_a)^{\\op{reg}}| \\subset\n |X_a|^{\\op{reg}} \\subset |X_a|$ and hence a closed embedding\n $||X_a|^{\\op{sing}}| \\subset |(X_a)^{\\op{sing}}|$. \n Let $U':= X - (X_a)^{\\op{sing}} \\subset U$, so $h^{-1}(U') \\xra{\\sim} U'$\n is an isomorphism. Over $X_a \\cap U' =(X_a)^{\\op{reg}}$ we obtain the\n isomorphism \n \\begin{equation}\n \\label{eq:36}\n h \\colon E \\cap h^{-1}(U') \\xra{\\sim} (X_a)^{\\op{reg}},\n \\end{equation}\n so $E \\cap h^{-1}(U')$ is regular.\n\n If $|I|\\geq 2$, then every element $e \\in E_I$ lies in $|E|^{\\op{sing}}\n \\subset E^{\\op{sing}}$, so $e \\not\\in E \\cap h^{-1}(U')$ and hence\n $h(e) \\in (X_a)^{\\op{sing}}$. Therefore \n $\\tildew{E}^\\circ_I \\rightarrow |X_a|$ factors as \n $\\tildew{E}^\\circ_I \\rightarrow |(X_a)^{\\op{sing}}| \\subset |X_a|$.\n\n If $r \\colon (X_a)^{\\op{reg}} = |(X_a)^{\\op{reg}}| \\rightarrow |X_a|$ is the\n inclusion we hence obtain\n \\begin{equation*}\n \n r^*(\\psi_{V,a}) = \\sum_{i \\in \\op{Irr}(|E|)}\n [\\tildew{E}^\\circ_i|_{E_i^\\circ \\cap h^{-1}(U')} \\rightarrow (X_a)^{\\op{reg}}].\n \\end{equation*}\n If $E_i^\\circ \\cap h^{-1}(U')$ is nonempty then $m_i=1$ because\n $E \\cap h^{-1}(U')$ is reduced, so $\\tildew{E}^\\circ_i \\rightarrow\n E^\\circ_i$ is an isomorphism. 
Moreover, $E \\cap h^{-1}(U')$ is\n the disjoint union of the \n $E_i^\\circ \\cap h^{-1}(U')$, for $i \\in \\op{Irr}(|E|)$.\n These facts and the isomorphism \\eqref{eq:36} imply that\n \\begin{equation*}\n \n r^*(\\psi_{V,a}) \n = \n [(X_a)^{\\op{reg}} \\xra{\\operatorname{id}} (X_a)^{\\op{reg}}].\n \\end{equation*}\n Hence $r^*(\\phi_{V,a})=0$ by definition\n \\eqref{eq:defi-mot-van-fiber}.\n The decomposition\n $(X_a)^{\\op{reg}} \\subset X_a \\supset |(X_a)^{\\op{sing}}|$ of $X_a$ into an\n open and a closed reduced subscheme gives rise to a similar\n decomposition\n $(X_a)^{\\op{reg}}=|(X_a)^{\\op{reg}}| \\subset |X_a| \\supset |(X_a)^{\\op{sing}}|$\n of $|X_a|$. \n Hence Lemma~\\ref{l:open-closed-decomposition} shows that\n $\\phi_{V,a} \\in \\mathcal{M}_{|(X_a)^{\\op{sing}}|}^{\\hat{\\upmu}}$.\n Since $V$ is not constant, \n \\eqref{eq:reduced-SingV-cap-Xa-equals-reduced-Xa-sing} holds\n true. \n\\end{proof}\n\n\\begin{corollary}\n \\label{c:mot-van-fiber-zero-if-Xa-smooth}\n If $V$ is not constant and $X_a$ is smooth\n then $\\phi_{V,a}=0$. \n\\end{corollary}\n\n\\begin{proof}\n In this case we have \n $|{\\op{Sing}}(V) \\cap X_a| = |(X_a)^{\\op{sing}}|= \\emptyset$\n by \\eqref{eq:reduced-SingV-cap-Xa-equals-reduced-Xa-sing}.\n More directly, we can take $h=\\operatorname{id}$ as an embedded resolution\n in the above proof and obtain \n $\\psi_{V,a}= [|X_a| \\xra{\\operatorname{id}} |X_a|]$ from\n \\eqref{eq:formula-mot-nearby-fiber} and\n hence $\\phi_{V,a}=0$. \n\\end{proof}\n\n\\begin{corollary}\n \\label{c:mot-van-fiber-compactification}\n Assume that $X$ is a dense open subset of a smooth\n ${\\mathsf{k}}$-variety $X'$ and that $V \\colon X \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}$ extends\n to a morphism $V' \\colon X' \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}$ such that all\n critical points of $V'$ are contained in $X$, i.\\,e.\\\n ${\\op{Sing}}(V')={\\op{Sing}}(V') \\cap X={\\op{Sing}}(V)$. 
\n Then $\\phi_{V,a}=\\phi_{V',a}$.\n\\end{corollary}\n\n\\begin{proof}\n If $V$ is constant then $V'$ is constant and\n $X={\\op{Sing}}(V)={\\op{Sing}}(V')=X'$, so the claim is trivial.\n\n Assume that $V$ is not constant. Then we can assume that the \n embedded resolution $h \\colon R \\rightarrow X$ of $|X_a|$ from the\n proof of \n Proposition~\\ref{p:motivic-vanishing-cycles-over-singular-part}\n is the restriction to $X$ of a similar embedded resolution \n $h' \\colon R' \\rightarrow X'$ \n of $|X'_a|$.\n Let $s \\colon |(X_a)^{\\op{sing}}| \\rightarrow |X_a|$ \n and $s' \\colon |(X'_a)^{\\op{sing}}| \\rightarrow |X'_a|$ denote the closed\n embeddings. \n Then $\\phi_{V,a}=s_! s^*(\\phi_{V,a})$ \n by (the proof of)\n Proposition~\\ref{p:motivic-vanishing-cycles-over-singular-part}\n and\n \\begin{equation*}\n \n s^*\\phi_{V,a} = \n [|(X_a)^{\\op{sing}}| \\rightarrow |(X_a)^{\\op{sing}}|]\n -s^*(\\psi_{V,a})\n \\end{equation*}\n by definition \n \\eqref{eq:defi-mot-van-fiber}.\n Similarly, we have $\\phi_{V',a}=s'_! s'^*(\\phi_{V',a})$ and\n \\begin{equation*}\n \n s'^*\\phi_{V',a}= \n [|(X'_a)^{\\op{sing}}| \\rightarrow |(X'_a)^{\\op{sing}}|]\n -s'^*(\\psi_{V',a}).\n \\end{equation*}\n By assumption and Remark~\\ref{rem:SingV-cap-Xa-versus-Xa-sing}\n we have $|(X_a)^{\\op{sing}}|=|{\\op{Sing}}(V) \\cap X_a|=|{\\op{Sing}}(V') \\cap\n X'_a|=|(X'_a)^{\\op{sing}}|$. Therefore it is enough to show that\n $s^*(\\psi_{V,a})=s'^*(\\psi_{V',a})$. 
But both\n expressions have \n explicit formulas obtained from\n equation \\eqref{eq:formula-mot-nearby-fiber}; these expressions\n coincide since the Galois coverings \n $\\tildew{E}^\\circ_I \\rightarrow E^\\circ_I$ and \n $\\tildew{E}'^\\circ_{I'} \\rightarrow E'^\\circ_{I'}$ \n constructed for \n $h \\colon R \\rightarrow X$ \n and $h' \\colon R' \\rightarrow X'$ are compatible and give rise to\n isomorphic varieties \n with $\\hat{\\upmu}$-action\n over \n $|(X_a)^{\\op{sing}}|=|(X'_a)^{\\op{sing}}|$.\n\\end{proof}\n\n\\section{Motivic Thom-Sebastiani theorem}\n\\label{sec:thom-sebastiani}\n\nLet $V \\colon X \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}$ be as in the previous\nsection~\\ref{sec:motiv-vanish-fiber}. \nLet $Y$ be a smooth connected (nonempty) ${\\mathsf{k}}$-variety with a \nmorphism $W \\colon Y \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}$. \nWe define $V \\circledast W$ to be the composition\n\\begin{equation*}\n \n V \\circledast W \\colon X \\times Y \\xra{V \\times W} {\\mathbb{A}}^1_{\\mathsf{k}} \\times\n {\\mathbb{A}}^1_{\\mathsf{k}} \\xra{+} \n {\\mathbb{A}}^1_{\\mathsf{k}}. \n\\end{equation*}\n\n\\begin{theorem}\n [{Motivic Thom-Sebastiani Theorem,\n \\cite[Thm.~5.18]{guibert-loeser-merle-convolution}}] \n \\label{t:thom-sebastiani-GLM-motivic-vanishing-cycles}\n Consider morphisms $V \\colon X \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}$ and $W \\colon Y \\rightarrow\n {\\mathbb{A}}^1_{\\mathsf{k}}$ \n as above and let $a, b \\in {\\mathsf{k}}$. Let $i_{a,b}$ be the\n inclusion $|X_a| \\times \n |Y_b| \\rightarrow |(X \\times Y)_{a+b}|$. 
\n Then\n \\begin{equation}\n \\label{eq:thom-sebastiani-GLM-motivic-vanishing-cycles}\n i_{a,b}^*(\\phi_{V \\circledast W,a+b})\n =\n \\Psi(\\phi_{V,a} \\times \\phi_{W,b}) \n \\end{equation}\n in $\\mathcal{M}^{\\hat{\\upmu}}_{|X_a| \\times |Y_b|}$ where \n $\\phi_{V,a} \\times \\phi_{W,b}$ is the obvious element of\n $\\mathcal{M}^{\\hat{\\upmu}\n \\times \\hat{\\upmu}}_{|X_a| \\times |Y_b|}$.\n\\end{theorem}\n\n\\begin{proof}\n Using Remark~\\ref{rem:motivic-fibers-signs}\n this is precisely\n \\cite[Thm.~5.18]{guibert-loeser-merle-convolution}. \n\n We would like to emphasize that\n \\eqref{eq:thom-sebastiani-GLM-motivic-vanishing-cycles} also\n holds if $V$ or $W$ is constant. \n Assume first that both $V$ and $W$ are constant; if $V=a$ and\n $W=b$ then both sides of\n \\eqref{eq:thom-sebastiani-GLM-motivic-vanishing-cycles} are\n equal to $[X \\times Y \\xra{\\operatorname{id}} X \\times Y]$ (use\n Remark~\\ref{rem:constant-potential} and \n Example~\\ref{ex:p-product-over-k-trivial-on-one-factor}); otherwise\n both sides are zero.\n\n Now assume that $V$ is not constant but $W$ is. If $W\n \\not= b$ then again both sides of \n \\eqref{eq:thom-sebastiani-GLM-motivic-vanishing-cycles} are\n zero. If $W=b$ choose an embedded resolution of $|X_a| \\subset\n X$ as in the\n proof of \n Proposition~\\ref{p:motivic-vanishing-cycles-over-singular-part}\n and obtain an explicit expression for $\\phi_{V,a}$. 
If we take\n the product of \n this embedded resolution with $Y=Y_b$ we obtain an embedded\n resolution of $|(X \\times Y)_{a+b}|=|X_a| \\times Y \\subset X\n \\times Y$ and an explicit expression for $\\phi_{V \\circledast W,a+b}$.\n Now use again Remark~\\ref{rem:constant-potential} and \n Example~\\ref{ex:p-product-over-k-trivial-on-one-factor}.\n\\end{proof}\n\nWe want to globalize this theorem.\nSince the set ${\\op{Crit}}(V) \\subset {\\mathsf{k}}$ of\ncritical values of $V$ is finite, we have\n\\begin{equation}\n \\label{eq:63}\n |{\\op{Sing}}(V)| = \\coprod_{a \\in {\\op{Crit}}(V)} |{\\op{Sing}}(V) \\cap X_a|.\n\\end{equation}\nProposition~\\ref{p:motivic-vanishing-cycles-over-singular-part}\nshows that we can view \n$\\phi_{V,a}$ as an element of\n$\\mathcal{M}^{\\hat{\\upmu}}_{|{\\op{Sing}}(V)|}$. Define\n\\begin{equation*}\n \n \\tildew{\\phi}_V:= \\sum_{a \\in {\\op{Crit}}(V)} \\phi_{V,a}\n \\in \\mathcal{M}^{\\hat{\\upmu}}_{|{\\op{Sing}}(V)|}. \n\\end{equation*}\nOf course we could have equivalently taken the sum over all $a\n\\in {\\mathsf{k}}$,\nby Corollary~\\ref{c:mot-van-fiber-zero-if-Xa-smooth}\nand Remark~\\ref{rem:constant-potential}.\n\n\nWe obviously have\n\\begin{equation}\n \\label{eq:Sing-V*W}\n {\\op{Sing}}(V \\circledast W) = {\\op{Sing}}(V) \\times {\\op{Sing}}(W)\n\\end{equation}\nand hence\n${\\op{Crit}}(V \\circledast W)={\\op{Crit}}(V)+{\\op{Crit}}(W):=\\{a+b\\mid a \\in\n{\\op{Crit}}(V), b \\in {\\op{Crit}}(W)\\}$. \n\n\\begin{corollary}\n [{Global motivic Thom-Sebastiani}]\n \\label{c:global-thom-sebastiani}\n Let $V \\colon X \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}$ and $W \\colon Y \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}$\n be as above. 
Then\n \\begin{equation}\n \\label{eq:global-thom-sebastiani}\n \\tildew{\\phi}_{V \\circledast W}\n =\n \\Psi(\\tildew{\\phi}_V \\times \\tildew{\\phi}_W) \n \\end{equation}\n in $\\mathcal{M}^{\\hat{\\upmu}}_{|{\\op{Sing}}(V)| \\times |{\\op{Sing}}(W)|}$ where \n $\\tildew{\\phi}_V \\times \\tildew{\\phi}_W$ is the obvious element of\n $\\mathcal{M}^{\\hat{\\upmu}\n \\times \\hat{\\upmu}}_{|{\\op{Sing}}(V)| \\times |{\\op{Sing}}(W)|}$.\n\\end{corollary}\n\n\\begin{proof}\n Let $s_a \\colon |{\\op{Sing}}(V) \\cap X_a| \\rightarrow |{\\op{Sing}}(V)|$ be\n the closed embedding. Then obviously\n $s_a^*(\\tildew{\\phi})=\\phi_{V,a}$. \n Define $s'_b$ and $s''_c$ similarly for $W$ and $V \\circledast W$.\n From \\eqref{eq:63} and \\eqref{eq:Sing-V*W} we see that\n $|{\\op{Sing}}(V \\circledast W)|$ is the disjoint finite union of its\n closed \n subvarieties $|{\\op{Sing}}(V) \\cap X_a| \\times |{\\op{Sing}}(W) \\cap Y_b|$ where\n $(a,b) \\in {\\op{Crit}}(V) \\times {\\op{Crit}}(W)$.\n By Lemma~\\ref{l:open-closed-decomposition} it is therefore\n enough to show that both sides of\n \\eqref{eq:global-thom-sebastiani}\n coincide when restricted to each of these subvarieties.\n Consider the following commutative diagram\n \\begin{equation*}\n \n \\xymatrix{\n {|{\\op{Sing}}(V) \\cap X_a| \\times |{\\op{Sing}}(W) \\cap Y_b|}\n \\ar[d]^-{s_a \\times s'_b} \n \\ar[r]^-{\\iota_{a,b}}\n &\n {|{\\op{Sing}}(V \\circledast W) \\cap (X \\times\n Y)_{a+b}|}\n \\ar[d]^-{s''_{a+b}}\\\\\n {|{\\op{Sing}}(V)| \\times |{\\op{Sing}}(W)|}\n \\ar@{}[r]|-{=}\n & \n {|{\\op{Sing}}(V \\circledast W)|.}\n }\n \\end{equation*}\n If we apply $(s_a \\times s'_b)^*$ to both sides of\n \\eqref{eq:global-thom-sebastiani}\n we obtain on the left\n \\begin{equation*}\n \n \n \n \\iota_{a,b}^*((s''_{a+b})^*(\\tildew{\\phi}_{V \\circledast W}))\n = \\iota_{a,b}^*(\\phi_{V \\circledast W,a+b})\n \\end{equation*}\n and on the right\n \\begin{equation*}\n \n \n \n \\Psi((s_a \\times 
s'_b)^*(\\tildew{\\phi}_V \\times\n \\tildew{\\phi}_W)) \n =\n \\Psi(s_a^*(\\tildew{\\phi}_V) \\times (s'_b)^*(\\tildew{\\phi}_W)) \n =\n \\Psi(\\phi_{V,a} \\times \\phi_{W,b})\n \\end{equation*}\n where we use\n Remark~\\ref{rem:Psi-compatible-with-pushforward-pullback}. \n But \n $\\iota_{a,b}^*(\\phi_{V \\circledast W,a+b})\n = \\Psi(\\phi_{V,a} \\times \\phi_{W,b})$\n is just a reformulation of\n Theorem~\\ref{t:thom-sebastiani-GLM-motivic-vanishing-cycles}\n using \n Proposition~\\ref{p:motivic-vanishing-cycles-over-singular-part}\n and Remark~\\ref{rem:Psi-compatible-with-pushforward-pullback}. \n\\end{proof}\n\n\n\\begin{definition}\n \\label{d:total-motivic-vanishing-cycles}\n For $V \\colon X \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}$ as above we define \n \n \\begin{equation}\n \\label{eq:phi-V-DA1}\n (\\phi_V)_{{\\mathbb{A}}^1_{\\mathsf{k}}} := V_!(\\tildew{\\phi}_V)=\\sum_{a \\in {\\mathsf{k}}}\n V_!(\\phi_{V,a}) \n \\in \\tilde{\\mathcal{M}}^{\\hat{\\upmu}}_{{\\mathbb{A}}^1_{\\mathsf{k}}} \n \\end{equation}\n where we use the isomorphism \\eqref{eq:62}\n in order to change the target of\n $V_! \\colon \\mathcal{M}_{|{\\op{Sing}}(V)|}^{\\hat{\\upmu}} \\rightarrow \n \\mathcal{M}^{\\hat{\\upmu}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}$ to\n $\\tilde{\\mathcal{M}}^{\\hat{\\upmu}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}$.\n\\end{definition}\n\nRecall the convolution product\n\\eqref{eq:convolution-A1}. \n\n\\begin{corollary}\n \\label{c:thom-sebastiani-motivic-vanishing-cycles}\n Let $V \\colon X \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}$ and $W \\colon Y \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}$\n be as above. 
\n Then\n \\begin{equation*}\n \n (\\phi_{V \\circledast W})_{{\\mathbb{A}}^1_{\\mathsf{k}}}\n =\n (\\phi_{V})_{{\\mathbb{A}}^1_{\\mathsf{k}}}\n \\star\n (\\phi_{W})_{{\\mathbb{A}}^1_{\\mathsf{k}}}\n \\end{equation*}\n in\n $(\\tilde{\\mathcal{M}}^{\\hat{\\upmu}}_{{\\mathbb{A}}^1_{\\mathsf{k}}},\\star)$.\n\\end{corollary}\n\n\\begin{proof}\n Just apply $(V \\circledast W)_!=\\operatorname{add}_!(V \\times W)_!$ to \n \\eqref{eq:global-thom-sebastiani} and note that\n \\begin{multline*}\n \n \\operatorname{add}_!((V \\times W)_!(\\Psi(\\tildew{\\phi}_V \\times\n \\tildew{\\phi}_W)))\n =\\operatorname{add}_!(\\Psi((V \\times W)_!(\\tildew{\\phi}_V \\times\n \\tildew{\\phi}_W)))\\\\\n =\\operatorname{add}_!(\\Psi(V_!(\\tildew{\\phi}_V) \\times\n W_!(\\tildew{\\phi}_W)))\n =\n (\\phi_{V})_{{\\mathbb{A}}^1_{\\mathsf{k}}}\n \\star\n (\\phi_{W})_{{\\mathbb{A}}^1_{\\mathsf{k}}},\n \\end{multline*}\n using\n Remark~\\ref{rem:Psi-compatible-with-pushforward-pullback}. \n\\end{proof}\n\n\nDefine\n$\\ul{\\phi_V}$ as the image of\n$(\\phi_V)_{{\\mathbb{A}}^1_{\\mathsf{k}}}$ under the morphism $\\epsilon_!$ \nof rings from Lemma~\\ref{l:convolution-products-compatible}, i.\\,e.\\\n\\begin{equation}\n \\label{eq:phi-V-k}\n \\ul{\\phi_V} := \n \\epsilon_!((\\phi_V)_{{\\mathbb{A}}^1_{\\mathsf{k}}})\n =\n \\sum_{a \\in {\\mathsf{k}}} (\\epsilon_a)_!(\\phi_{V,a})\n \\in \\mathcal{M}^{\\hat{\\upmu}}_{{\\mathsf{k}}} \n\\end{equation}\nwhere $\\epsilon_a \\colon |{\\op{Sing}}(V) \\cap X_a| \\rightarrow \\operatorname{Spec} {\\mathsf{k}}$\ndenotes the structure morphism.\n\n\\begin{corollary}\n \\label{c:thom-sebastiani-motivic-vanishing-cycles-over-k}\n We have\n $\\ul{\\phi_{V \\circledast W}}\n =\n \\ul{\\phi_{V}}\n *\n \\ul{\\phi_{W}}$\n in\n $\\mathcal{M}^{\\hat{\\upmu}}_{{\\mathsf{k}}}$.\n\\end{corollary}\n\n\\begin{proof}\n This is obvious from\n Lemma~\\ref{l:convolution-products-compatible} \n and 
Corollary~\\ref{c:thom-sebastiani-motivic-vanishing-cycles}.\n\\end{proof}\n\n\\section{Motivic vanishing cycles measure}\n\\label{sec:motiv-vanish-fiber-1}\n\n\\subsection{Some reminders}\n\\label{sec:some-reminders}\n\nWe recall some facts from\n\\cite[3.7-3.9]{guibert-loeser-merle-convolution}.\nLet $X$ be a smooth connected ${\\mathsf{k}}$-variety and $V\n\\colon X \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}$ a morphism. Let $U \\subset X$ be a dense\nopen subvariety. Then Guibert, Loeser and Merle\ndefine \nin \\cite[Prop.~3.8]{guibert-loeser-merle-convolution}\nan element \n\\begin{equation*}\n \n \\mathcal{S}_{V, U, X} \\in\n \\mathcal{M}_{|X_0|}^{\\hat{\\upmu}} \n\\end{equation*}\n(they denote this element just\nby $\\mathcal{S}_{V,U}$).\n\n\\begin{remark}\n The remarks at the end of \n \\cite[3.8]{guibert-loeser-merle-convolution}\n (and Remarks~\\ref{rem:motivic-fibers-signs} and\n \\ref{rem:constant-potential})\n imply:\n if $U=X$ we have\n \\begin{equation}\n \\label{eq:43} \n \\mathcal{S}_{V, X, X}=\n \\mathcal{S}_V=\\psi_{V,0};\n \\end{equation}\n if $V=0$ we have\n \\begin{equation*}\n \n \\mathcal{S}_{V, U, X}=0=\\psi_{V,0}=\\mathcal{S}_{V}.\n \\end{equation*}\n\\end{remark}\n\n\\begin{theorem}\n [{\\cite[Thm.~3.9]{guibert-loeser-merle-convolution}}] \n \\label{t:GLM-measure}\n Let $\\alpha \\colon A \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}$ be an ${\\mathbb{A}}^1_{\\mathsf{k}}$-variety. 
\n Then there exists a unique $\\mathcal{M}_{\\mathsf{k}}$-linear map\n \\begin{equation*}\n \n \\mathcal{S}_\\alpha^{\\mathcal{M}_A} \\colon \\mathcal{M}_A \\rightarrow\n \\mathcal{M}^{\\hat{\\upmu}}_{|A_0|} \n \\end{equation*}\n such that for every proper morphism $V' \\colon Z \\rightarrow A$\n where $Z$ \n is a smooth and connected ${\\mathsf{k}}$-variety, and every dense open\n subvariety $U$ of $Z$ \n we have\n \\begin{equation*}\n \n \\mathcal{S}_\\alpha^{\\mathcal{M}_A}([U \\rightarrow A]) =\n V'_!(\\mathcal{S}_{\\alpha \\circ V', U, Z}).\n \\end{equation*}\n\\end{theorem}\n\nNote that given any morphism $V \\colon U \\rightarrow A$ where $U$ is a smooth\nconnected ${\\mathsf{k}}$-variety, there is a smooth connected\n${\\mathsf{k}}$-variety $Z$ containing $U$ as a dense open subscheme and a\nproper morphism $V' \\colon Z \\rightarrow A$ extending $V$ (use Nagata\ncompactification and resolve the singularities).\n\nIn particular, if $V$ is a proper morphism, then by definition\nof $\\mathcal{S}_\\alpha^{\\mathcal{M}_A}$ and using\n\\eqref{eq:43} we have\n\\begin{equation}\n \\label{eq:42}\n \\mathcal{S}_\\alpha^{\\mathcal{M}_A}([U \\xra{V} A]) \n = V_!(\\mathcal{S}_{\\alpha \\circ V, U, U})\n = V_!(\\mathcal{S}_{\\alpha \\circ V})\n = V_!(\\psi_{\\alpha \\circ V,0}).\n\\end{equation}\nIn particular, if $A$ is smooth and connected we obtain \n$\\mathcal{S}_\\alpha^{\\mathcal{M}_A}([A \\xra{\\operatorname{id}} A]) \n= \\mathcal{S}_{\\alpha}=\\psi_{\\alpha,0}$ which justifies the notation\n$\\mathcal{S}_\\alpha^{\\mathcal{M}_A}$.\n\nWe will apply Theorem~\\ref{t:GLM-measure} only in the case that\n$\\alpha$ is a translation ${\\mathbb{A}}^1_{\\mathsf{k}} \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}$, $x \\mapsto\nx-a$, for some $a \\in {\\mathsf{k}}$.\n\n\\subsection{Additivity of the motivic vanishing cycles}\n\\label{sec:addit-vanish-cycl}\n\nTheorem~\\ref{t:GLM-measure} has the following consequence.\n\n\n\\begin{theorem}\n 
\\label{t:total-vanishing-cycles-group-homo}\n There exists a unique $\\mathcal{M}_{\\mathsf{k}}$-linear map\n \\begin{equation*}\n \n \\Phi' \\colon \\mathcal{M}_{{\\mathbb{A}}^1_{\\mathsf{k}}} \\rightarrow\n \\tilde{\\mathcal{M}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\hat{\\upmu}} \n \\end{equation*}\n such that \n \\begin{equation*}\n \n \\Phi'([X \\xra{V} {\\mathbb{A}}^1_{\\mathsf{k}}]) = (\\phi_{V})_{{\\mathbb{A}}^1_{\\mathsf{k}}}\n \\end{equation*}\n for all proper morphisms $V \\colon X \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}$ of\n ${\\mathsf{k}}$-varieties \n where $X$ is smooth over ${\\mathsf{k}}$ and connected\n (for the definition of $(\\phi_{V})_{{\\mathbb{A}}^1_{\\mathsf{k}}}$ see\n Definition~\\ref{d:total-motivic-vanishing-cycles}). \n\\end{theorem}\n\n\\begin{proof}\n Uniqueness is clear \n since $K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}})$ is generated by the classes of\n proper morphisms $V \\colon X \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}$ of ${\\mathsf{k}}$-varieties\n with $X$ \n connected and smooth over ${\\mathsf{k}}$ \n (and relations given by the blowing-up relations), see\n \\cite[Thm.~5.1]{bittner-euler-characteristic}.\n\n Let $a \\in {\\mathsf{k}}$ and let\n $\\gamma_a \\colon |X_a| \\rightarrow \\{a\\}=\\operatorname{Spec} {\\mathsf{k}}$ be\n the obvious morphism. Apply Theorem~\\ref{t:GLM-measure} to the\n morphism $\\alpha \\colon {\\mathbb{A}}^1_{\\mathsf{k}} \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}$,\n $x \\mapsto x-a$. We obtain an $\\mathcal{M}_{\\mathsf{k}}$-linear map\n $\\mathcal{M}_{{\\mathbb{A}}^1_{\\mathsf{k}}} \\rightarrow \\mathcal{M}_{\\{a\\}}^{\\hat{\\upmu}}$\n that maps $[V \\colon X \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}]$ to\n $-(\\gamma_a)_!(\\psi_{V,a})$\n (use \\eqref{eq:42}; we add a global minus sign)\n whenever $V \\colon X \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}$\n is proper with $X$ connected and smooth over ${\\mathsf{k}}$. 
\n\n Obviously there\n is a unique \n $\\mathcal{M}_{\\mathsf{k}}$-linear map\n $\\mathcal{M}_{{\\mathbb{A}}^1_{\\mathsf{k}}} \\rightarrow\n \\mathcal{M}_{\\{a\\}}^{\\hat{\\upmu}}$\n mapping $[V \\colon X \\rightarrow\n {\\mathbb{A}}^1_{\\mathsf{k}}]$ to \n $[|X_a| \\rightarrow \\{a\\}]=(\\gamma_a)_!([|X_a| \\rightarrow |X_a|])$\n for any morphism $V \\colon X \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}$ of \n ${\\mathsf{k}}$-varieties.\n \n Let $\\Phi'_a \\colon \\mathcal{M}_{{\\mathbb{A}}^1_{\\mathsf{k}}} \\rightarrow\n \\mathcal{M}_{\\{a\\}}^{\\hat{\\upmu}}$ be the sum of these two\n maps. If\n $V \\colon X \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}$\n is proper with $X$ connected and smooth over ${\\mathsf{k}}$ we have \n $\\Phi'_a([V \\colon X \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}]) =(\\gamma_a)_!(\\phi_{V,a})$\n by the \n definition of the motivic vanishing cycles\n \\eqref{eq:defi-mot-van-fiber}. \n\n For any $a \\in {\\mathsf{k}},$ let $i_a \\colon \\{a\\} \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}$ be the\n inclusion. Observe that \n \\begin{equation*}\n \n \\sum_{a \\in {\\mathsf{k}}} (i_a)_!(\\Phi'_a) \\colon \n \\mathcal{M}_{{\\mathbb{A}}^1_{\\mathsf{k}}} \\rightarrow \\mathcal{M}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\hat{\\upmu}} \n \\end{equation*}\n is\n well defined since for any given $m \\in \\mathcal{M}_{{\\mathbb{A}}^1_{\\mathsf{k}}}$\n only finitely many $\\Phi'_a(m)$ are nonzero. \n The composition of this morphism with the isomorphism \\eqref{eq:62}\n has the required properties. This proves existence.\n\\end{proof}\n\n\\begin{remark}\n \\label{rem:Phi'-on-unity-and-A1-0-A1}\n If $Z$ is a smooth ${\\mathsf{k}}$-variety we have $\\Phi'([Z \\xra{0}\n {\\mathbb{A}}^1_{\\mathsf{k}}])= [Z \\xra{0} {\\mathbb{A}}^1_{\\mathsf{k}}]$. \n If $Z$ is proper over ${\\mathsf{k}}$ this follows from \n Remark~\\ref{rem:constant-potential}. 
Otherwise we can\n compactify $Z$ to a smooth proper ${\\mathsf{k}}$-variety $\\ol{Z}$ such\n that $\\ol{Z}-Z$ is a simple normal crossing divisor and then\n express the class of $Z$ in terms of $\\ol{Z}$ and the various\n smooth intersections of the involved smooth prime divisors.\n \n In particular we have\n $\\Phi'([\\operatorname{Spec} {\\mathsf{k}} \\xra{0} {\\mathbb{A}}^1_{\\mathsf{k}}])=[\\operatorname{Spec} {\\mathsf{k}} \\xra{0}\n {\\mathbb{A}}^1_{\\mathsf{k}}]$\n and \n $\\Phi'([{\\mathbb{A}}^1_{\\mathsf{k}} \\xra{0} {\\mathbb{A}}^1_{\\mathsf{k}}])=[{\\mathbb{A}}^1_{\\mathsf{k}} \\xra{0}\n {\\mathbb{A}}^1_{\\mathsf{k}}]$.\n\\end{remark}\n\n\\begin{remark}\n \\label{rem:justify-sign-vanishing}\n We \n keep our promise from\n Remark~\\ref{rem:motivic-fibers-signs}\n to justify our sign choice.\n We do this\n by showing that\n Theorem~\\ref{t:total-vanishing-cycles-group-homo} \n does not hold if the right hand side of\n \\eqref{eq:phi-V-DA1} \n is replaced by\n $\\sum_{a \\in {\\mathsf{k}}} V_!(\\mathcal{S}^\\phi_{V-a})$.\n Assume that there is a morphism $\\Xi \\colon\n \\mathcal{M}_{{\\mathbb{A}}^1_{\\mathsf{k}}} \\rightarrow\n \\mathcal{M}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\hat{\\upmu}}\n \\cong\n \\tilde{\\mathcal{M}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\hat{\\upmu}}$\n of abelian groups\n such that \n $\\Xi([X \\xra{V} {\\mathbb{A}}^1_{\\mathsf{k}}]) = \n \\sum_{a \\in {\\mathsf{k}}} V_!(\\mathcal{S}^\\phi_{V-a})$\n for all proper morphisms $V \\colon X \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}$ of\n ${\\mathsf{k}}$-varieties \n where $X$ is smooth over ${\\mathsf{k}}$ and connected.\n Remark~\\ref{rem:constant-potential} implies that\n $\\Xi([Z \\xra{0}\n {\\mathbb{A}}^1_{\\mathsf{k}}])= (-1)^{\\dim Z}[Z \\xra{0} {\\mathbb{A}}^1_{\\mathsf{k}}]$ for all smooth proper\n connected ${\\mathsf{k}}$-varieties $Z$.\n\n Let $X$ be a smooth proper connected 2-dimensional\n ${\\mathsf{k}}$-variety \n and $\\tildew{X}$ its blowup in a \n closed point $Y =\\{x\\} 
\\subset X$. Let $E$ be the exceptional\n divisor. We view $X, \\tildew{X}, Y, E$ as ${\\mathbb{A}}^1_{\\mathsf{k}}$-varieties\n via the zero morphism to ${\\mathbb{A}}^1_{\\mathsf{k}}$.\n In $K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}})$ we obviously have $[X]-[Y]=\n [\\tildew{X}] -[E]$. So if we apply $\\Xi$ we obtain\n $[X]-[Y]=[\\tildew{X}] +[E]$ since $E$ has odd dimension. \n We obtain $2[E]=0$ in \n $\\mathcal{M}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\hat{\\upmu}}$.\n Let us explain why this is a contradiction.\n Note that $E \\cong\n {\\mathbb{P}}^1_{\\mathsf{k}}$. Pulling back via the inclusion $\\operatorname{Spec} {\\mathsf{k}} \\xra{0}\n {\\mathbb{A}}^1_{\\mathsf{k}}$ and forgetting the group action shows that\n $2[{\\mathbb{P}}^1_{\\mathsf{k}}]=0$ in $\\mathcal{M}_{\\mathsf{k}}$. \n Taking the topological Euler characteristic with compact\n support (see \\cite[Example\n 4.3]{nicaise-sebag-grothendieck-ring-of-varieties}) yields the\n contradiction $4=0$ in $\\mathbb{Z}$.\n\\end{remark}\n\n\\begin{remark}\n \\label{rem:Phi'-on-smooth-proper-morphisms}\n If $V \\colon X \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}$ is a smooth and proper morphism\n then \n $\\Phi'([X \\xra{V} {\\mathbb{A}}^1_{\\mathsf{k}}])=0$. This follows from\n Corollary~\\ref{c:mot-van-fiber-zero-if-Xa-smooth};\n note that\n $X$ is smooth over ${\\mathsf{k}}$ and $V$ is not constant (if $X$ is\n nonempty). 
\n\\end{remark}\n\n\\begin{remark}\n \\label{rem:Phi'-of-L-A1}\n We claim that $\\Phi'(\\mathbb{L}_{{\\mathbb{A}}^1_{\\mathsf{k}}})=0$.\n Indeed, we have\n \\begin{equation*}\n \n \\mathbb{L}_{{\\mathbb{A}}^1_{\\mathsf{k}}}=\n [{\\mathbb{A}}^1_{{\\mathbb{A}}^1_{\\mathsf{k}}}]=[{\\mathbb{A}}^1_{\\mathsf{k}} \\times {\\mathbb{A}}^1_{\\mathsf{k}} \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}]\n =\n [{\\mathbb{A}}^1_{\\mathsf{k}} \\times {\\mathbb{P}}^1_{\\mathsf{k}} \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}]\n -\n [{\\mathbb{A}}^1_{\\mathsf{k}} \\times \\operatorname{Spec} {\\mathsf{k}} \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}]\n \\end{equation*}\n in $K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}})$. Now apply \n Remark~\\ref{rem:Phi'-on-smooth-proper-morphisms}.\n In fact, this argument \n together with the compactification argument from\n Remark~\\ref{rem:Phi'-on-unity-and-A1-0-A1}\n shows: if $Z$ is any smooth\n ${\\mathsf{k}}$-variety, then $\\Phi'$ maps the class of the projection\n ${\\mathbb{A}}^1_{\\mathsf{k}} \\times Z \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}$ to zero.\n\\end{remark}\n\n\\begin{remark}\n \\label{rem:Phi'-cannot-be-ring-morphism}\n Remark~\\ref{rem:Phi'-of-L-A1} shows that\n the morphism $\\Phi'$ from\n Theorem~\\ref{t:total-vanishing-cycles-group-homo}\n is not a morphism of rings if we consider the usual \n multiplication on\n $\\mathcal{M}_{{\\mathbb{A}}^1_{\\mathsf{k}}}$: it\n maps the invertible element \n $\\mathbb{L}_{{\\mathbb{A}}^1_{\\mathsf{k}}}$ to zero and hence would be the zero morphism\n (which it is not, by\n Remark~\\ref{rem:Phi'-on-unity-and-A1-0-A1}).\n Therefore it seems presently more appropriate to restrict\n $\\Phi'$ to \n $K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}})$. 
\n See however Remark~\\ref{rem:A1-0-A1-sent-to-invertible} below.\n\\end{remark}\n\n\\subsection{The motivic vanishing cycles measure}\n\\label{sec:motiv-vanish-cycl}\n\nWe define $\\Phi$ to be the $K_0({\\op{Var}}_{\\mathsf{k}})$-linear composition\n\\begin{equation}\n \\label{eq:52}\n \\Phi \\colon K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}) \\rightarrow \n \\mathcal{M}_{{\\mathbb{A}}^1_{\\mathsf{k}}} \\xra{\\Phi'}\n \\tilde{\\mathcal{M}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\hat{\\upmu}} \n\\end{equation}\nwhere the second map is the morphism $\\Phi'$ from\nTheorem~\\ref{t:total-vanishing-cycles-group-homo}.\n\nNow we can state our main theorem which says that $\\Phi$ is a\nring morphism if we equip source \n$K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}})= K_0({\\op{Var}}^{\\upmu_1}_{{\\mathbb{A}}^1_{\\mathsf{k}}})$ and\ntarget $\\tilde{\\mathcal{M}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\hat{\\upmu}}$ \nwith the convolution product $\\star$ \nfrom\nsection~\\ref{sec:conv-vari-over}, see in particular\nRemark~\\ref{rem:no-group-action} and Proposition~\\ref{p:A1-convolution-comm-ass-unit}.\n\n\\begin{theorem}\n \\label{t:total-motivic-vanishing-cycles-ring-homo}\n The map \\eqref{eq:52}\n from Theorem~\\ref{t:total-vanishing-cycles-group-homo}\n is a morphism \n \\begin{equation*}\n \\Phi \\colon (K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}), \\star) \\rightarrow\n (\\tilde{\\mathcal{M}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}^{\\hat{\\upmu}},\\star) \n \\end{equation*}\n of $K_0({\\op{Var}}_{\\mathsf{k}})$-algebras.\n By composing with\n \\eqref{epsilon!-star-*}\n we obtain a morphism\n \\begin{equation}\n \\label{eq:epsilon-Phi}\n \\epsilon_! \\circ \\Phi \\colon (K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}), \\star) \\rightarrow\n (\\mathcal{M}_{{\\mathsf{k}}}^{\\hat{\\upmu}},*) \n \\end{equation}\n of $K_0({\\op{Var}}_{\\mathsf{k}})$-algebras. 
We call these two morphisms\n \\textbf{motivic vanishing cycles measures}.\n\\end{theorem}\n\n\\begin{proof}\n The second claim is obvious from\n Lemma~\\ref{l:convolution-products-compatible}, so let us prove\n the first claim.\n Remark~\\ref{rem:Phi'-on-unity-and-A1-0-A1} shows that $\\Phi$ maps\n the identity element to the identity element and that $\\Phi$ is\n compatible with the algebra structure maps, cf.\\ \\eqref{eq:46}.\n\n We use that $K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}})$ is generated by the classes of\n projective morphisms $V \\colon X \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}$ with $X$\n a connected quasi-projective ${\\mathsf{k}}$-variety that is smooth over\n ${\\mathsf{k}}$ \n (and relations given by the blowing-up relations), see\n \\cite[Thm.~5.1]{bittner-euler-characteristic}.\n\n So let $X$ and $Y$ be connected quasi-projective ${\\mathsf{k}}$-varieties that\n are smooth over ${\\mathsf{k}}$ and let $V \\colon X \\rightarrow {\\mathbb{A}}^1_{\\mathsf{k}}$ and $W \\colon Y \\rightarrow\n {\\mathbb{A}}^1_{\\mathsf{k}}$ be projective morphisms.\n Then we know by\n Theorem~\\ref{t:total-vanishing-cycles-group-homo}\n and\n Corollary\n \\ref{c:thom-sebastiani-motivic-vanishing-cycles}\n that\n \\begin{equation*}\n \\Phi([X \\xra{V} {\\mathbb{A}}^1_{\\mathsf{k}}]) \\star\n \\Phi([Y \\xra{W} {\\mathbb{A}}^1_{\\mathsf{k}}])\n =\n (\\phi_{V})_{{\\mathbb{A}}^1_{\\mathsf{k}}}\n \\star\n (\\phi_{W})_{{\\mathbb{A}}^1_{\\mathsf{k}}}\n = (\\phi_{V \\circledast W})_{{\\mathbb{A}}^1_{\\mathsf{k}}}.\n \\end{equation*}\n Our aim is to show that\n \\begin{equation*}\n \n (\\phi_{V \\circledast W})_{{\\mathbb{A}}^1_{\\mathsf{k}}}= \n \\Phi([X \\times Y \\xra{V \\circledast W} {\\mathbb{A}}^1_{\\mathsf{k}}]).\n \\end{equation*}\n This is not obvious since $V \\circledast W \\colon X \\times Y \\rightarrow\n {\\mathbb{A}}^1_{\\mathsf{k}}$ is not \n proper in general. 
\n\n We apply Proposition~\ref{p:compactification} below and\n use notation from there.\n We obtain the equality\n \begin{multline*}\n [X\times Y \xra{V \circledast W} {\mathbb{A}}^1_{\mathsf{k}}]\n =\n [Z \xra{h} {\mathbb{A}}^1_{\mathsf{k}}]\n -\sum_{i}[D_i\xra{h_i} {\mathbb{A}}^1_{\mathsf{k}}]\n +\sum_{i<j}[D_{ij}\xra{h_{ij}} {\mathbb{A}}^1_{\mathsf{k}}]\n -\dots\n \end{multline*}\n in $K_0({\op{Var}}_{{\mathbb{A}}^1_{\mathsf{k}}})$, by inclusion-exclusion for the\n simple normal crossing boundary $Z \setminus X \times Y$.\n Combining this equality with the properties of $h$ listed in\n Proposition~\ref{p:compactification} yields the claim.\n\end{proof}\n\n\begin{proposition}\n \label{p:compactification}\n There exist an open immersion $X \times Y \subset Z$ into a smooth\n quasi-projective ${\mathsf{k}}$-variety $Z$ and a projective morphism\n $h \colon Z \rightarrow {\mathbb{A}}^1_{\mathsf{k}}$ with the following\n properties.\n \begin{enumerate}\n \item\n \label{enum:compactification-diagram}\n The diagram\n \begin{equation*}\n \xymatrix{\n {X \times Y} \ar@{}[r]|-{\subset}\n \ar[d]^-{V \circledast W} & {Z} \ar[d]^-h\\\n {{\mathbb{A}}^1_{\mathsf{k}}} \ar@{}[r]|-{=} & {{\mathbb{A}}^1_{\mathsf{k}}}\n }\n \end{equation*}\n commutes.\n \item\n \label{enum:compactification-critical-points}\n All critical points of $h$ are contained\n in $X\times Y,$ i.\,e.\ ${\op{Sing}}(V \circledast W)={\op{Sing}}(h)$.\n \item\n \label{enum:compactification-boundary-snc}\n We have $Z \setminus X \times Y = \bigcup_{i=1}^s D_i$ where\n the\n $D_i$ are pairwise distinct smooth\n prime divisors.\n More precisely, $Z \setminus X \times Y$ is the support\n of a simple normal crossing divisor.\n \item\n \label{enum:compactification-intersections-proper-and-smooth}\n For every $p$-tuple $(i_1, \dots, i_p)$ of indices\n (with $p \geq 1$) the morphism\n \begin{equation*}\n h_{i_1 \dots i_p} \colon D_{i_1 \dots i_p}:=D_{i_1} \cap \dots \cap\n D_{i_p} \rightarrow {\mathbb{A}}^1_{\mathsf{k}} \n \end{equation*}\n induced by $h$ is projective and smooth. In\n particular, all\n $D_{i_1 \dots i_p}$ are smooth quasi-projective ${\mathsf{k}}$-varieties.\n \end{enumerate}\n\end{proposition}\n\n\section{Comparison with the matrix factorization motivic\n measure} \n\label{sec:comp-with-matr}\n\n\nWe would like to place\nTheorem~\ref{t:total-motivic-vanishing-cycles-ring-homo} \nin a certain context and compare the motivic measure $\Phi$ \nor rather $\epsilon_! \circ \Phi$ with\nanother motivic measure of a different nature. 
\n\n\\subsection{Categorical motivic measures}\n\\label{sec:categorical-measures}\n\nFirst let us recall the {\\it categorical} measure\n\\begin{equation*}\n \\nu \\colon K_0({\\op{Var}}_{\\mathsf{k}})\\rightarrow K_0(\\op{sat}^\\mathbb{Z}_{\\mathsf{k}}) \n\\end{equation*}\nconstructed in \\cite{bondal-larsen-lunts-grothendieck-ring}.\nHere $K_0(\\op{sat}^\\mathbb{Z}_{\\mathsf{k}})$ is the free abelian group generated by\nquasi-equivalence classes of saturated differential\n$\\mathbb{Z}$-graded \n${\\mathsf{k}}$-categories with relations coming from semiorthogonal\ndecompositions into admissible subcategories on the level of\nhomotopy categories. The map $\\nu $ sends the class $[X]$ of a\nsmooth projective ${\\mathsf{k}}$-variety $X$ to the class\n$[{\\operatorname{D}}^{\\op{b}}({\\op{Coh}}(X))]$ \nof (a suitable\ndifferential $\\mathbb{Z}$-graded ${\\mathsf{k}}$-enhancement of) its derived\ncategory\n${\\operatorname{D}}^{\\op{b}}({\\op{Coh}}(X))$. The\ntensor product of differential $\\mathbb{Z}$-graded ${\\mathsf{k}}$-categories induces a ring structure on\n$K_0(\\op{sat}^\\mathbb{Z}_{\\mathsf{k}})$ and $\\nu $ is a ring homomorphism. In recent\npapers\n\\cite{valery-olaf-matfak-semi-orth-decomp,\n valery-olaf-matfak-motmeas} we have constructed a motivic\nmeasure\n\\begin{equation*}\n \\mu \\colon \n (K_0({\\op{Var}}_{{\\mathbb{A}}^1_{\\mathsf{k}}}), \\star) \n \n \\rightarrow K_0(\\op{sat}_{\\mathsf{k}}^{{\\mathbb{Z}}_2}) \n\\end{equation*}\nwhich is a relative analogue of the measure $\\nu $. Here\n$K_0(\\op{sat}_{\\mathsf{k}}^{{\\mathbb{Z}}_2})$ is defined in exactly the same way as\n$K_0(\\op{sat}^\\mathbb{Z}_{\\mathsf{k}})$ except that this time we consider saturated\ndifferential ${{\\mathbb{Z}}_2}$-graded ${\\mathsf{k}}$-categories. 
If\n$[X \xra{W} {{\mathbb{A}}}^1_{\mathsf{k}}]\in K_0({\op{Var}}_{{{\mathbb{A}}}^1_{\mathsf{k}}})$ where $X$ is\nsmooth over ${\mathsf{k}}$ and $W$ is proper, then $\mu([X\xra{W}{{\mathbb{A}}}^1_{\mathsf{k}}])$\nis defined as the class \n$[{\mathbf{MF}}(W)]$\nof (a suitable differential $\mathbb{Z}_2$-graded ${\mathsf{k}}$-enhancement of)\nthe category\n\begin{equation*}\n {\mathbf{MF}}(W):=\prod_{a\in {\mathsf{k}}}{\mathbf{MF}}(X,W-a)^\natural.\n\end{equation*}\nHere ${\mathbf{MF}}(X,W-a)^\natural$ is the Karoubi envelope of the\ncategory ${\mathbf{MF}}(X,W-a)$ of matrix factorizations of the potential\n$W-a$.\n\nThe measures $\nu$ and $\mu$ are related by the commutative\ndiagram of ring homomorphisms \n(as announced in the introduction of \cite{valery-olaf-matfak-semi-orth-decomp})\n\begin{equation}\n \label{eq:folding-diagram}\n \xymatrix{\n {K_0({\op{Var}}_{\mathsf{k}})} \ar[r]^-{\nu} \ar[d]\n & {K_0(\op{sat}_{\mathsf{k}}^{\mathbb{Z}})} \ar[d]\\\n {(K_0({\op{Var}}_{{\mathbb{A}}^1_{\mathsf{k}}}), \star)} \ar[r]^-{\mu} \n & {K_0(\op{sat}_{\mathsf{k}}^{\mathbb{Z}_2})}\n }\n\end{equation}\nwhere $K_0({\op{Var}}_{\mathsf{k}}) \rightarrow K_0({\op{Var}}_{{{\mathbb{A}}}^1_{\mathsf{k}}})$ is the ring\nhomomorphism \eqref{eq:46} (for $n=1$)\nand\n$K_0(\op{sat}^\mathbb{Z}_{\mathsf{k}}) \rightarrow K_0(\op{sat}_{\mathsf{k}}^{{\mathbb{Z}}_2})$ is induced from\nthe {\it folding} \n(see \cite{olaf-folding-derived-categories-in-prep})\nwhich assigns to a differential ${\mathbb{Z}}$-graded ${\mathsf{k}}$-category the\ncorresponding differential ${{\mathbb{Z}}}_2$-graded ${\mathsf{k}}$-category. 
\n\n\subsection{Comparing vanishing cycles and matrix factorization\n measures}\n\label{sec:comp-vanish-cycl}\n\n\n\nTo each saturated differential ${{\mathbb{Z}}}_2$-graded ${\mathsf{k}}$-category $A$\none assigns the finite dimensional ${{\mathbb{Z}}}_2$-graded vector\nspace\n\begin{equation*}\n {\operatorname{HP}}(A)={\operatorname{HP}}_0(A)\oplus {\operatorname{HP}}_1(A) \n\end{equation*}\nover the field ${\mathsf{k}}((u))$ of formal Laurent series, namely the periodic\ncyclic homology of $A$ (see \cite{keller-invariance}).\n\nPut\n$\chi_{{\operatorname{HP}}}(A):=\dim_{{\mathsf{k}}((u))}{\operatorname{HP}}_0(A)-\dim_{{\mathsf{k}}((u))}{\operatorname{HP}}_1(A)$. Since\n${\operatorname{HP}}$ is additive on semiorthogonal decompositions of\ntriangulated categories (see\n\cite{keller-cyclic-homology-exact-cat}) \nthis\nassignment descends to a group homomorphism\n\begin{equation*}\n \chi_{{\operatorname{HP}}}\colon K_0(\op{sat}_{\mathsf{k}}^{{\mathbb{Z}}_2})\rightarrow {\mathbb{Z}}.\n\end{equation*}\nBecause of the K\"unneth property for ${\operatorname{HP}}$ \n(see \cite{shklyarov-hodge-theoretic-property-kuenneth} and references therein) the map $\chi_{{\operatorname{HP}}}$ is in fact a ring homomorphism.\n\nOn the other hand, if ${\mathsf{k}}={\mathbb{C}}$ we have the usual ring\nhomomorphism (see \cite{looijenga-motivic-measures})\n\begin{equation}\n \label{eq:chi_c}\n \chi_{\op{c}}:=\n \sum (-1)^i\dim {\operatorname{H}}_{\op{c}}^i \ \colon \mathcal{M}_{\mathbb{C}}\rightarrow {\mathbb{Z}}.\n\end{equation}\nNotice that $\chi_{\op{c}}({\mathbb{L}})=1$, hence $\chi_{\op{c}}$ is indeed well-defined as a homomorphism from\n$\mathcal{M}_{\mathbb{C}}$.\n\nForgetting the action of $\hat{\upmu}$ obviously defines a \nmap\n\begin{equation}\n \label{eq:14}\n \mathcal{M}_\mathbb{C}^{\hat{\upmu}} \rightarrow \mathcal{M}_\mathbb{C}\n\end{equation}\nof $\mathcal{M}_\mathbb{C}$-modules.\nClearly, this map is a 
ring\nhomomorphism if we equip its source with the usual\nmultiplication.\nHowever, this is not true if we equip its source with the \nconvolution product $*$ as we will explain in\nLemma~\\ref{l:forgetful-convolution} below.\nNevertheless we have the following result.\n\n\\begin{proposition}\n \\label{p:chi_c-covolution}\n The composition of $\\chi_{\\op{c}}$ (see \\eqref{eq:chi_c}) with the map ``forget the\n \\mbox{$\\hat{\\upmu}$-action}''\n \\eqref{eq:14}\n defines a ring homomorphism\n \\begin{equation}\n \\label{eq:72}\n \\chi_{\\op{c}} \\colon (\\mathcal{M}_\\mathbb{C}^{\\hat{\\upmu}},*) \\rightarrow \\mathbb{Z}\n \\end{equation}\n which we denote again by $\\chi_{\\op{c}}$.\n\\end{proposition}\n\n\\begin{proof}\n Let $A$ and $B$ be complex varieties\n with\n a good $\\upmu_n$-action for some $n \\geq 1$. We need to show\n that $A \\times B$ and \n \\begin{equation*}\n \n [A]*[B] = \n [((A \\times^{\\upmu_n} \\underset{x}{{\\DG_\\op{m}}})\n \\times \n (B \\times^{\\upmu_n} \\underset{y}{{\\DG_\\op{m}}}))|_{x^n+y^n=0}]\n -[((A \\times^{\\upmu_n} \\underset{x}{{\\DG_\\op{m}}})\n \\times \n (B \\times^{\\upmu_n} \\underset{y}{{\\DG_\\op{m}}}))|_{x^n+y^n=1}]\n \\end{equation*}\n (see\n \\eqref{eq:convolution-explicit})\n \n have the same Euler characteristic\n with compact support.\n Since ${\\DG_\\op{m}} \\rightarrow {\\DG_\\op{m}}$, $x \\mapsto x^n$, is a $\\upmu_n$-torsor\n (in the \\'etale topology) we have a pullback square\n \\begin{equation*}\n \\xymatrix{\n {A \\times {\\DG_\\op{m}}} \n \\ar[rr]^-{(a,x) \\mapsto x}\n \\ar[d]\n &&\n {{\\DG_\\op{m}}} \n \\ar[d]^-{x \\mapsto x^n}\n \\\\\n {A \\times^{\\upmu_n} {\\DG_\\op{m}}} \n \\ar[rr]^-{[a,x] \\mapsto x^n}\n &&\n {{\\DG_\\op{m}}.}\n }\n \\end{equation*}\n Its lower horizontal morphism is an \\'etale-locally\n trivial fibration with fiber $A$. 
Therefore it is a locally\n trivial fibration if we pass to the analytic topologies.\n In this way we obtain a locally trivial fibration\n \begin{equation*}\n f \colon (A^{\op{an}} \times^{\upmu_n(\mathbb{C})} \mathbb{C}^\times)\n \times \n (B^{\op{an}} \times^{\upmu_n(\mathbb{C})} \mathbb{C}^\times)\n \xra{([a,x],[b,y]) \mapsto (x^n,y^n)} \mathbb{C}^\times \times \mathbb{C}^\times\n \end{equation*}\n with fiber $A^{\op{an}} \times B^{\op{an}}$. \n Consider the subsets $N:=\{x'+y'=0\} \cong \mathbb{C}^\times$ and \n $E:=\{x'+y'=1\} \cong \mathbb{C}^\times -\{1\}$ of the base\n of this\n fibration where\n $x'$ and $y'$ are the obvious coordinates.\n Then\n \begin{multline*}\n \chi_{\op{c}}([A]*[B]) =\n \chi_{\op{c}}(f^{-1}(N))-\chi_{\op{c}}(f^{-1}(E))\\\n =\n \chi_{\op{c}}(A^{\op{an}} \times B^{\op{an}})\n (\chi_{\op{c}}(N) - \chi_{\op{c}}(E))\n =\chi_{\op{c}}(A^{\op{an}} \times B^{\op{an}}).\n \end{multline*}\n This proves what we need.\n\end{proof}\n\nAlthough not strictly needed for our purposes, we would like to\ninclude the following result (which is also true for ${\mathsf{k}}$\ninstead of $\mathbb{C}$).\n\n\begin{lemma}\n \label{l:forgetful-convolution}\n The map ``forget the \mbox{$\hat{\upmu}$-action}''\n $f \colon \mathcal{M}_\mathbb{C}^{\hat{\upmu}} \rightarrow \mathcal{M}_\mathbb{C}$\n (see\n \eqref{eq:14}) does not define a\n ring homomorphism \n $(\mathcal{M}_\mathbb{C}^{\hat{\upmu}},*) \rightarrow \mathcal{M}_\mathbb{C}$.\n\end{lemma}\n\n\begin{proof}\n Let $M=\upmu_2 \in {\op{Var}}^{\upmu_2}_\mathbb{C}$ with the obvious action of\n $\upmu_2$.\n We claim that $f([M] * [M]) \not= f([M])f([M])$.\n\n We clearly have $f([M])f([M])=4 [\operatorname{Spec} \mathbb{C}]=4$.\n On the other hand, multiplication defines an isomorphism\n $M \times^{\upmu_2} {\DG_\op{m}} \xra{\sim} {\DG_\op{m}}$ and therefore\n 
\eqref{eq:convolution-explicit} yields\n \begin{equation*}\n [M] * [M] = \n [(\underset{x}{{\DG_\op{m}}} \times \n \underset{y}{{\DG_\op{m}}})|_{x^2+y^2=0}]\n -[(\underset{x}{{\DG_\op{m}}} \times \n \underset{y}{{\DG_\op{m}}})|_{x^2+y^2=1}].\n \end{equation*}\n The $\upmu_2$-action on ${\DG_\op{m}} \times {\DG_\op{m}}$ is the diagonal action.\n Instead of using the coordinates $(x,y)$ on ${\mathbb{A}}^2_\mathbb{C}$ let us\n use the coordinates $(a,b)$ where $a=x+iy$ and $b=x-iy$.\n \n Then $x^2+y^2=ab$ and the conditions $x \not= 0$ and $y\not=0$\n are equivalent to $a+b \not=0$ and $a-b \not=0$. \n Hence\n \begin{equation*}\n [M] * [M] \n = \n [\underset{(a,b)}{{\mathbb{A}}^2_\mathbb{C}}|_{ab=0,\; a \not=\pm b}]\n -[\underset{(a,b)}{{\mathbb{A}}^2_\mathbb{C}}|_{ab=1,\; a \not=\pm b}].\n \end{equation*}\n The $\upmu_2$-action on ${\mathbb{A}}^2_\mathbb{C}$ is again the diagonal\n action.\n The first summand is the coordinate cross without the origin\n and is equal to $2[{\DG_\op{m}}]$ with the obvious $\upmu_2$-action. \n To treat the second summand note that the map $({\DG_\op{m}} - \upmu_4)\n \rightarrow \n {{\mathbb{A}}^2_\mathbb{C}}|_{ab=1, a \not=\pm b}$, $a \mapsto (a, a^{-1})$\n defines a $\upmu_2$-equivariant isomorphism.\n Hence\n \begin{equation}\n \label{eq:mu2-convolution-squared}\n [M] * [M] = 2[{\DG_\op{m}}]-[{\DG_\op{m}}]+[\upmu_4]=[{\DG_\op{m}}]+2[\upmu_2]\n \end{equation}\n and $f([M] * [M])=[{\DG_\op{m}}]+4$.\n\n But the element $[{\DG_\op{m}}]=f([M] * [M])-f([M])f([M])$ is certainly\n not zero in $\mathcal{M}_\mathbb{C}$: taking the \n Hodge-Deligne polynomial defines a ring homomorphism\n $\mathcal{M}_\mathbb{C} \rightarrow \mathbb{Z}[u,v,u^{-1},v^{-1}]$ which sends\n $[{\DG_\op{m}}]$ to $uv-1$, cf.\ \cite[Example\n 4.6]{nicaise-sebag-grothendieck-ring-of-varieties}. 
\n\\end{proof}\n\n\\begin{theorem} \n \\label{t:comparison}\n We have the following commutative diagram of ring homomorphisms\n \\begin{equation*}\n \\xymatrix{\n {(K_0({\\op{Var}}_{{\\mathbb{A}}^1_\\mathbb{C}}), \\star)} \n \\ar[r]^-{\\mu} \\ar[d]_-{\\epsilon_!\\circ \\Phi}\n & {K_0(\\op{sat}_{\\mathbb{C}}^{{\\mathbb{Z}}_2})} \\ar[d]^-{\\chi_{{\\operatorname{HP}}}} \\\\\n {(\\mathcal{M}_{\\mathbb{C}}^{\\hat{\\upmu}},*)} \\ar[r]^-{\\chi_{\\op{c}}} \n & {\\mathbb{Z}}\n }\n \\end{equation*}\n where the left vertical arrow is \n the map \n \\eqref{eq:epsilon-Phi}\n from Theorem~\\ref{t:total-motivic-vanishing-cycles-ring-homo}\n and the lower horizontal map is \n the ring homomorphism \\eqref{eq:72}.\n\\end{theorem}\n\n\\begin{proof} \n The abelian group $K_0({\\op{Var}}_{{{\\mathbb{A}}}^1_{\\mathbb{C}}})$ is\n generated by classes \n $[X\\xra{W} {{\\mathbb{A}}}^1_{\\mathbb{C}}]$ where $X$ is smooth over ${\\mathbb{C}}$ and\n the map $W$ is projective\n (see \\cite{bittner-euler-characteristic}).\n So it suffices to prove commutativity on such generators.\n\n Fix a projective map $W \\colon X\\rightarrow {{\\mathbb{A}}}^1_{\\mathbb{C}}$ of a smooth\n ${\\mathbb{C}}$-variety $X$. Then by definition\n \\begin{equation*}\n \\mu (W)=\\sum_{a\\in {\\mathbb{C}}}[{\\mathbf{MF}}(X,W-a)^\\natural ]\n \\in\n K_0(\\op{sat}_{{\\mathbb{C}}}^{{\\mathbb{Z}}_2}) \n \\end{equation*}\n and\n \\begin{equation*}\n \\epsilon_!\\circ \\Phi (W)\n =\\sum_{a \\in {\\mathbb{C}}}(\\epsilon_a)_!\\phi_{W,a}\\in \\mathcal{M}_{{\\mathbb{C}}} \n \\end{equation*}\n with notation as in \\eqref{eq:phi-V-k}.\n So it suffices to prove that\n \\begin{equation*} \n \n \\chi_{{\\operatorname{HP}}}({\\mathbf{MF}}(X,W-a)^\\natural)=\\chi_{\\op{c}}((\\epsilon_a)_!\\phi_{W,a})\n \\end{equation*}\n for any given $a\\in {\\mathbb{C}}$. We may and will assume that $a=0$.\n\n Let $X^{{\\op{an}}}$ denote the space $X$ with the analytic\n topology. 
Recall the classical functors of nearby and vanishing\n cycles\n \\begin{equation*}\n \\psi^{{\\op{geom}}}_W,\\phi^{{\\op{geom}}}_W\\colon {\\operatorname{D}}^{\\op{b}}_{\\op{c}}(X^{{\\op{an}}})\\to\n {\\operatorname{D}}^{\\op{b}}_{\\op{c}}(X_0^{{\\op{an}}}) \n \\end{equation*}\n between the corresponding derived categories of constructible\n sheaves with complex coefficients.\n For $F\\in {\\operatorname{D}}^{\\op{b}}_{\\op{c}}(X^{{\\op{an}}})$ we have a distinguished triangle\n \\begin{equation}\n \\label{eq:extr}\n F\\vert_{X_0^{{\\op{an}}}}\\rightarrow \\psi^{{\\op{geom}}}_WF\\rightarrow \\phi^{{\\op{geom}}}_WF\\to\n F\\vert_{X_0^{{\\op{an}}}}[1] \n \\end{equation}\n in ${\\operatorname{D}}^{\\op{b}}_{\\op{c}}(X_0^{{\\op{an}}})$ (see\n \\cite[Exp.~XIII]{SGA7-II}).\n \n \n\n In particular for the constant sheaf ${\\mathbb{C}}_{X^{{\\op{an}}}}$ we have\n the complex $\\phi^{{\\op{geom}}}_W{\\mathbb{C}}_{X^{{\\op{an}}}}$ of sheaves on\n $X_0^{{\\op{an}}}$. Consider its hypercohomology with compact\n supports\n ${\\operatorname{H}}_{\\op{c}}^\\bullet (X_0^{{\\op{an}}}, \\phi^{{\\op{geom}}}_W{\\mathbb{C}}_{X^{{\\op{an}}}})$\n and its Euler characteristic\n $\\sum_i(-1)^i\\dim {\\operatorname{H}}^i_{\\op{c}}(X_0^{{\\op{an}}}, \\phi^{{\\op{geom}}}_W{\\mathbb{C}}_{X^{{\\op{an}}}})$.\n (Note that in our case we may as well consider the\n hypercohomology ${\\operatorname{H}}^\\bullet$ instead of ${\\operatorname{H}}^\\bullet_{\\op{c}}$,\n since $X_0^{\\op{an}}$ is compact.) It follows from\n \\cite[Thm.~1.1]{efimov-cyclic-homology-matrix-factorizations}\n that \n \\begin{equation*}\n \n \\chi_{{\\operatorname{HP}}}({\\mathbf{MF}}(X,W))\n =-\\sum_i(-1)^i\\dim {\\operatorname{H}}^i_{\\op{c}}(X_0^{{\\op{an}}},\\phi^{{\\op{geom}}}_W{\\mathbb{C}}_{X^{{\\op{an}}}}). 
\n \\end{equation*}\n By the localization theorem in cyclic homology it follows that the\n Karoubi closure ${\\mathbf{MF}}(X,W)^\\natural$ has the same cyclic\n homology as ${\\mathbf{MF}}(X,W)$, i.\\,e.\\\n $\\chi_{{\\operatorname{HP}}}({\\mathbf{MF}}(X,W))=\\chi_{{\\operatorname{HP}}}({\\mathbf{MF}}(X,W)^\\natural)$. \n \n \n \n \n \n \n \n Hence it remains to prove the equality\n \\begin{equation}\n \\label{eq:val4} \n \\chi_{\\op{c}}((\\epsilon_0)_!\\phi_{W,0})=-\\sum_i(-1)^i\\dim\n {\\operatorname{H}}^i_{\\op{c}}(X_0^{{\\op{an}}}, \\phi^{{\\op{geom}}}_W{\\mathbb{C}}_{X^{{\\op{an}}}}).\n \\end{equation}\n\n \\begin{lemma}\n \\label{l:SH}\n \\begin{enumerate}\n \\item \n \\label{enum:SH-exist}\n For every variety $Y$ there\n exists a unique \n group homomorphism\n \\begin{equation*}\n {\\operatorname{SH}}_Y\\colon K_0({\\op{Var}}_Y)\\rightarrow K_0({\\operatorname{D}}_{\\op{c}}^{\\op{b}}(Y^{{\\op{an}}})) \n \\end{equation*}\n such that ${\\operatorname{SH}}_Y([Z\\xra{f}Y])=[{{\\mathbf{R}}}f_!{\\mathbb{C}}_{Z^{{\\op{an}}}}]$.\n \\item \n \\label{enum:SH-push}\n Given a morphism of varieties $g\\colon Y\\rightarrow T$ the diagram\n \\begin{equation*}\n \\xymatrix{\n {K_0({\\op{Var}}_Y)} \\ar[r]^-{{\\operatorname{SH}}_Y} \\ar[d]_-{g_!}\n & {K_0({\\operatorname{D}}_{\\op{c}}^{\\op{b}}(Y^{{\\op{an}}}))} \\ar[d]^-{K_0({{\\mathbf{R}}}g_!)}\\\\\n {K_0({\\op{Var}}_T)} \\ar[r]^-{{\\operatorname{SH}}_T} \n & {K_0({\\operatorname{D}}_{\\op{c}}^{\\op{b}}(T^{{\\op{an}}}))}\n }\n \\end{equation*}\n commutes.\n \\item\n \\label{enum:SH-C}\n If $Y=\\operatorname{Spec} {\\mathbb{C}}$, then $K_0({\\operatorname{D}}^{\\op{b}}_{\\op{c}}((\\operatorname{Spec}\n {\\mathbb{C}})^{\\op{an}}))={\\mathbb{Z}}$ (by taking the alternating sum of the\n cohomologies) and ${\\operatorname{SH}}_{\\operatorname{Spec} {\\mathbb{C}}}([Z\\rightarrow \\operatorname{Spec}\n {\\mathbb{C}}])=\\chi_{\\op{c}}([Z])$. 
\n \end{enumerate}\n \end{lemma}\n\n \begin{proof} \n \ref{enum:SH-exist}\n For a variety $S$ and an open embedding $j\colon\n U\hookrightarrow S$ with \n complementary closed embedding $i\colon\n Z=S-U\hookrightarrow S$ recall the short exact sequence \n of sheaves\n \begin{equation*}\n \n 0\rightarrow j_!{\mathbb{C}}_{U^{{\op{an}}}}\rightarrow {\mathbb{C}}_{S^{{\op{an}}}}\rightarrow i_!{\mathbb{C}}_{Z^{{\op{an}}}}\rightarrow 0.\n \end{equation*}\n This implies that the map\n ${\operatorname{SH}}_Y([Z\xra{f}Y])=[{{\mathbf{R}}}f_!{\mathbb{C}}_{Z^{{\op{an}}}}]$ indeed\n descends to a homomorphism ${\operatorname{SH}}_Y\colon K_0({\op{Var}}_Y)\to\n K_0({\operatorname{D}}_{\op{c}}^{\op{b}}(Y^{{\op{an}}}))$. Uniqueness is obvious. \n\n \ref{enum:SH-push}\n Given a morphism $f\colon Z\rightarrow Y$ we have by definition\n \begin{equation*}\n K_0({{\mathbf{R}}}g_!)\cdot {\operatorname{SH}}_Y([Z\xra{f}Y])=\n K_0({{\mathbf{R}}}g_!)[{{\mathbf{R}}}f_!{\mathbb{C}}_{Z^{{\op{an}}}}]=[{{\mathbf{R}}}(gf)_!{\mathbb{C}}_{Z^{{\op{an}}}}] \n \end{equation*}\n and\n \begin{equation*}\n {\operatorname{SH}}_T \cdot g_!([Z\xra{f}Y])\n ={\operatorname{SH}}_T([Z\xra{gf}T])=[{{\mathbf{R}}}(gf)_!{\mathbb{C}}_{Z^{{\op{an}}}}].\n \end{equation*}\n\n \ref{enum:SH-C}\n This is clear.\n \end{proof}\n \n Now \cite[Prop.~3.17]{guibert-loeser-merle-convolution} implies\n the following equality in \n $K_0({\operatorname{D}}^{\op{b}}_{\op{c}}(X^{{\op{an}}}_0))$: \n \begin{equation*}\n \n {\operatorname{SH}}_{X_0}(\psi_{W,0})=[\psi_W^{{\op{geom}}}({\mathbb{C}}_{X^{{\op{an}}}})].\n \end{equation*}\n Applying part \ref{enum:SH-push}\n of Lemma~\ref{l:SH}\n to the map $\epsilon_0\colon\n X_0\rightarrow {\operatorname{Spec} {\mathbb{C}}}$ and\n using part \ref{enum:SH-C}\n we conclude that\n \begin{equation*}\n \n \chi_{\op{c}}((\epsilon_0)_!\psi_{W,0})=\sum_i(-1)^i\dim\n 
{\operatorname{H}}^i_{\op{c}}(X_0^{{\op{an}}},\psi_W^{{\op{geom}}}({\mathbb{C}}_{X^{{\op{an}}}})). \n \end{equation*}\n Notice that, on the one hand, by definition of $\phi_{W,0}$ we have\n \begin{equation*}\n \chi_{\op{c}}((\epsilon_0)_!\phi_{W,0})\n =\chi_{\op{c}}((\epsilon_0)_![X_0\xra{\operatorname{id}}\n X_0])-\chi_{\op{c}}((\epsilon_0)_!\psi_{W,0}) \n \end{equation*}\n and, on the other hand, by the distinguished triangle\n \eqref{eq:extr} we have \n \begin{multline*}\n \sum_i(-1)^i\dim {\operatorname{H}}^i_{\op{c}}(X_0^{{\op{an}}},\phi_W^{{\op{geom}}}({\mathbb{C}}_{X^{{\op{an}}}}))=\n \sum_i(-1)^i\dim\n {\operatorname{H}}^i_{\op{c}}(X_0^{{\op{an}}},\psi_W^{{\op{geom}}}({\mathbb{C}}_{X^{{\op{an}}}}))\n \\\n -\sum_i(-1)^i\dim {\operatorname{H}}^i_{\op{c}}(X_0^{{\op{an}}},{\mathbb{C}}_{X_0^{{\op{an}}}}).\n \end{multline*}\n It remains to notice that\n \begin{equation*}\n \chi_{\op{c}}((\epsilon_0)_![X_0\xra{\operatorname{id}} X_0])=\sum_i(-1)^i\dim\n {\operatorname{H}}^i_{\op{c}}(X_0^{{\op{an}}},{\mathbb{C}}_{X_0^{{\op{an}}}}).\n \end{equation*}\n This proves equality \eqref{eq:val4} and finishes the proof of the\n theorem. \n\end{proof}\n\nWe give two simple examples in which the equality \eqref{eq:val4} can\nbe verified directly. \n\n\begin{example} \n \label{expl:a^n}\n Let $X={{\mathbb{A}}}^1_\mathbb{C}$ and $W(a)=a^n$ for some $n\geq 1$.\n Then $\phi^{{\op{geom}}}_W{\mathbb{C}}_{X^{{\op{an}}}}={\mathbb{C}}^{\oplus n-1}_{(0)}$. \n Hence the right-hand side of \n equation \eqref{eq:val4} is equal to $-(n-1)$.\n\n On the other hand, in the notation of \n the proof of\n Proposition~\ref{p:motivic-vanishing-cycles-over-singular-part}\n (with the identity as embedded resolution) \n the divisor $E$ is $n\cdot (0)$ and\n hence its $\upmu_n$-Galois covering $\tilde{E}$ is\n isomorphic to $\upmu_n$. 
\n From \eqref{eq:formula-mot-nearby-fiber}\n we obtain\n \begin{equation*}\n \phi_{W,0}=[|X_0|]-\psi_{W,0}=[(0)]-[\upmu_n]. \n \end{equation*}\n Thus $\chi_{\op{c}}((\epsilon_0)_!\phi_{W,0})$ is also equal to $-(n-1)$.\n\end{example}\n\n\begin{example} \n \label{expl:ab}\n Let $X={{\mathbb{A}}}^2_\mathbb{C}$ and $W\colon X\rightarrow {{\mathbb{A}}}^1_\mathbb{C}$,\n $W(a,b)=ab$. (This map is not proper, but that should make no \n difference since the complex $\phi^{{\op{geom}}}_W{\mathbb{C}}_{X^{{\op{an}}}}$\n has compact support.) \n Then $\phi^{{\op{geom}}}_W{\mathbb{C}}_{X^{{\op{an}}}}={\mathbb{C}}_{(0,0)}[-1]$. Hence\n the right-hand side of \eqref{eq:val4} is equal to 1. \n\n On the other hand, in the notation of \n the proof of\n Proposition~\ref{p:motivic-vanishing-cycles-over-singular-part}\n the divisor $E$ is\n the coordinate cross (with components of multiplicity one) \n and so \eqref{eq:formula-mot-nearby-fiber}\n yields\n \begin{equation*}\n \phi_{W,0}=[X_0]-\psi_{W,0}=([{\DG_\op{m}}]+[{\DG_\op{m}}]+[\operatorname{pt}])-([{\DG_\op{m}}]+[{\DG_\op{m}}]-[{\DG_\op{m}}])={\mathbb{L}}. \n \end{equation*}\n Hence $\chi_{\op{c}}((\epsilon_0)_!\phi_{W,0})=1$.\n\n Here is another way to compute this example.\n Using coordinates $(s,t)$ on ${\mathbb{A}}^2_\mathbb{C}$ so that $a=s+it$ and\n $b=s-it$ we have $W(a,b)=ab=s^2+t^2= s^2 \circledast t^2$. 
\n Example~\ref{expl:a^n} shows that\n $\phi_{s^2,0}=[(0)]-[\upmu_2]$ and\n $\chi_{\op{c}}((\epsilon_0)_!\phi_{s^2,0})=\n -1$.\n We have $\epsilon_!\Phi(s^2)=(\epsilon_0)_!\phi_{s^2,0}$\n and\n \begin{equation*}\n \chi_{\op{c}}((\epsilon_0)_!\phi_{W,0})=\n \chi_{\op{c}}(\epsilon_!\Phi(ab))=\n \chi_{\op{c}}(\epsilon_!\Phi(s^2))\n \chi_{\op{c}}(\epsilon_!\Phi(t^2))\n =(-1)^2=1\n \end{equation*}\n using multiplicativity of our motivic measures.\n We can also use the motivic Thom-Sebastiani \n Theorem~\ref{t:thom-sebastiani-GLM-motivic-vanishing-cycles}\n and compute \n (using Remark~\ref{rem:action-on-B-trivial}\n and the computation leading to equation \n \eqref{eq:mu2-convolution-squared})\n \begin{multline*}\n \phi_{W,0}=\n \Psi(\phi_{s^2,0} \times \phi_{t^2,0})=\n \Psi([(0)] \times [(0)])\n -\n 2 \Psi([(0)] \times [\upmu_2])\n + \Psi([\upmu_2] \times [\upmu_2])\\\n = [(0)]-2 [\upmu_2]+([{\DG_\op{m}}] +2[\upmu_2]) =\mathbb{L}.\n \end{multline*}\n Here the $\upmu_2$-action on ${\DG_\op{m}}$ is a priori the obvious one\n but can then also be assumed to be trivial by the defining\n relations \n of the equivariant Grothendieck group.\n\end{example}\n\n\subsection{Summarizing diagram}\n\label{sec:summarizing-diagram}\n\nWe collect the motivic measures considered in this paper in the following\ncommutative diagram (in case ${\mathsf{k}}={\mathbb{C}}$; see\n\eqref{eq:folding-diagram} and Theorem~\ref{t:comparison}).\n\begin{equation*}\n \xymatrix{\n {K_0({\op{Var}}_\mathbb{C})} \ar[r]^-{\nu} \ar[d]\n & {K_0(\op{sat}_\mathbb{C}^{\mathbb{Z}})} \ar[d]\\\n {(K_0({\op{Var}}_{{\mathbb{A}}^1_\mathbb{C}}), \star)} \ar[r]^-{\mu} \ar[d]_{\epsilon_!\n \circ \Phi}\n & {K_0(\op{sat}_\mathbb{C}^{\mathbb{Z}_2})} \ar[d]^-{\chi_{\operatorname{HP}}}\n \\\n {(\mathcal{M}^{\hat{\upmu}}_\mathbb{C}, *)} \n \ar[r]^-{\chi_{\op{c}}}\n & {\mathbb{Z}}\n }\n\end{equation*}\nThe upper left vertical 
arrow and the vertical composition on the\nleft are the algebra structure maps. The composition from the top\nleft corner to the bottom right corner is induced by mapping a\ncomplex variety to its Euler characteristic with compact support.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction}\nRecent experimental efforts have been devoted to measurements of the weak mixing angle in electron-proton scattering \cite{Androic,Becker}. Precise measurements at low energy are of high interest due to the sensitivity of the running of the weak mixing angle in extensions of the Standard Model \cite{Davoudiasl}. In order to control the effects of proton structure in the scattering experiments, knowledge of nucleon form factors is required. Among others, the strange form factors $G_E^s(Q^2)$, $G_M^s(Q^2)$ and $G_A^s(Q^2)$ need to be known precisely, which is a challenging task in experiments \cite{Pate}. Here, we extract these form factors from a first-principles Lattice QCD study, based on our previous work \cite{Djukanovic}. Furthermore, we calculate the contributions of the $u$-, $d$- and $s$-quarks to the axial charge. These correspond to the contributions of the intrinsic spin of the respective quarks to the total spin of the nucleon \cite{Ji}. 
Following the same procedure, we also obtain the $u$-, $d$- and $s$-contributions to the tensor charge of the nucleon. \n\n\\section{Extracting Form Factors from Lattice QCD}\nTo extract nucleon form factors, we consider the following ratio of nucleon three- and two-point functions \\cite{Brandt}\n\\begin{align}\n\\begin{split}\nR_{J_\\mu}(\\vec{q};\\vec{p}^\\prime;\\Gamma_\\nu) &= \\frac{C_{3,J_\\mu}^N(\\vec{q},z_0;\\vec{p}^\\prime,y_0;\\Gamma_\\nu)}{C_2^N(\\vec{p}^\\prime,y_0)}\\sqrt{\\frac{C_2^N(\\vec{p}^\\prime,y_0)C_2^N(\\vec{p}^\\prime,z_0)C_2^N(\\vec{p}^\\prime\\text{-}\\vec{q},y_0\\text{-}z_0)}{C_2^N(\\vec{p}^\\prime\\text{-}\\vec{q},y_0)C_2^N(\\vec{p}^\\prime\\text{-}\\vec{q},z_0)C_2^N(\\vec{p}^\\prime,y_0\\text{-}z_0)}}\\\\[0.25cm]\t\n\t&\\stackrel{\\text{s.d.}}{=} M_{\\nu\\mu}^1(\\vec{q},\\vec{p}^\\prime) G_1(Q^2) + M_{\\nu\\mu}^2(\\vec{q},\\vec{p}^\\prime) G_2(Q^2)\\ ,\n\\end{split}\n\\label{eq:ratio}\n\\end{align}\nwhere the second line corresponds to the spectral decomposition at large Euclidean times. At each value of $Q^2$, the ratios belonging to different $\\mu$, $\\nu$ and momenta $\\vec{q}$, $\\vec{p}^\\prime$ can be grouped into a vector $\\bm{R}$. 
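As a concrete illustration, the ratio in \eqref{eq:ratio} can be assembled from correlator values as sketched below. This is a minimal numerical sketch: the single-exponential model for the two-point function, the toy dispersion relation, the function names and all numbers are hypothetical stand-ins, not part of the actual analysis code, which works with measured lattice correlators.

```python
import numpy as np

# Sketch: build the ratio R_{J_mu}(q, z0; p', y0) of the ratio formula above
# from three- and two-point correlators.  The two-point function here is a
# toy single-exponential model (hypothetical stand-in).

def two_point(p, t_max=16):
    """Toy C_2^N(p, t) for t = 1..t_max: a single decaying exponential."""
    t = np.arange(1, t_max + 1)
    E = 1.0 + 0.1 * np.dot(p, p)          # toy dispersion relation
    return np.exp(-E * t)

def ratio(C3, pprime, q, z0, y0):
    """R(q, z0; p', y0) for a single three-point value C3 = C_3(q, z0; p', y0)."""
    p_minus_q = tuple(a - b for a, b in zip(pprime, q))
    C2p, C2pq = two_point(pprime), two_point(p_minus_q)
    c = lambda C, t: C[t - 1]             # correlators are indexed from t = 1
    # square-root factor of the ratio formula
    root = np.sqrt(
        (c(C2p, y0) * c(C2p, z0) * c(C2pq, y0 - z0))
        / (c(C2pq, y0) * c(C2pq, z0) * c(C2p, y0 - z0))
    )
    return C3 / c(C2p, y0) * root

R = ratio(C3=0.5, pprime=(0, 0, 0), q=(1, 0, 0), z0=4, y0=8)
```

At large source-sink separations such a ratio approaches the linear combination of form factors given by the spectral decomposition in the second line of \eqref{eq:ratio}.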
Ordering the kinematic factors $M^1$ and $M^2$ accordingly into a matrix $M$, a (generally overdetermined) system of equations can be defined\n\begin{equation}\nM\ \bm{G} = \bm{R}\ \ ;\ \ M = \left( \begin{array}{cc}\nM^1_1 & M^2_1\\\n\vdots & \vdots\\\nM^1_N & M^2_N\\\n\end{array} \right)\ \ ,\ \ \bm{G} = \n\left(\n\begin{array}{c}\nG_1\\\nG_2\\\n\end{array}\n\right),\ \ \bm{R} = \n\left(\n\begin{array}{c}\nR_1\\\n\vdots\\\nR_N\\\n\end{array}\n\right)\ .\n\label{eq:system}\n\end{equation}\n\nIn the case of the disconnected contributions, we reduce the size of the system by first averaging equivalent three-point functions, where the momenta at the source and the sink are related by spatial symmetry \cite{Syritsyn}, followed by a construction of the ratios from the resulting averages. Concerning the connected contributions, we perform an average of equivalent ratios. In both cases we drop non-contributing ratios, for which both kinematic factors $M^1$ and $M^2$ are zero. Furthermore, we average the two-point functions over equivalent momentum classes. The system of equations can be solved for the form factors $G_1$ and $G_2$ by applying a two-parameter fit corresponding to minimizing the $\chi^2$-function given by $\chi^2 = (\bm{R}-M\bm{G})^T\ C^{-1}\ (\bm{R}-M\bm{G})$, where $C$ denotes the covariance matrix. Identifying $J_\mu$ with the vector current $V_\mu$ in \Eq{eq:ratio} yields the electromagnetic form factors $G_E$ and $G_M$, whereas the axial current $A_\mu$ leads to the axial vector form factors $G_A$ and $G_P$. We obtain the tensor charge $g_T$ at $Q^2=0$ from the tensor current $T_{\mu\nu}$, where only one independent ratio contributes.\n\n\section{Setup}\nThis work makes use of the CLS $N_f=2+1$ $\mathcal{O}(a)$-improved Wilson fermion ensembles \cite{Bruno} listed in Tab.~\ref{tab:ensembles}. 
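Returning briefly to the fit of \eqref{eq:system}: the $\chi^2$ minimization has the closed-form generalized least-squares solution $\bm{G} = (M^T C^{-1} M)^{-1} M^T C^{-1} \bm{R}$, sketched here on synthetic data. All arrays and numbers are hypothetical stand-ins; in the actual analysis $M$, $\bm{R}$ and $C$ come from the lattice ratios.

```python
import numpy as np

# Sketch: minimize chi^2 = (R - M G)^T C^{-1} (R - M G) for the
# overdetermined system M G = R via generalized least squares.
# Synthetic stand-ins for the lattice data:
rng = np.random.default_rng(1)
N = 12                                      # number of contributing ratios
M = rng.normal(size=(N, 2))                 # kinematic factors (M^1_i, M^2_i)
G_true = np.array([1.2, -0.4])              # toy form factors (G_1, G_2)
C = np.diag(rng.uniform(0.01, 0.05, N))     # toy covariance matrix
R = M @ G_true + rng.multivariate_normal(np.zeros(N), C)

Cinv = np.linalg.inv(C)
A = M.T @ Cinv @ M
G_fit = np.linalg.solve(A, M.T @ Cinv @ R)  # best-fit (G_1, G_2)
G_cov = np.linalg.inv(A)                    # covariance of the fit parameters

def chi2(G):
    r = R - M @ G
    return r @ Cinv @ r
```

In practice the covariance matrix is estimated from the data and the fit is repeated on resampled data (jackknife or bootstrap) to propagate the statistical errors to the form factors.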
The tree-level improved L\"uscher-Weisz gauge action is employed in the gauge sector. These ensembles obey open boundary conditions in time in order to prevent topological freezing \cite{Luscher}. The physical quark masses are approached along a trajectory where the trace of the quark mass matrix is kept (approximately) constant, $\mathrm{tr}\ M_q = \text{const}$. To set the scale, we rely on the gradient flow time $t_0$ determined in Ref. \cite{Bruno2}.\n\n\begin{table}[h]\n\begin{center}\n\begin{tabular}{l|ccccccc}\n\t&$\beta$\t&$a$ [fm] &$N_s^3\times N_t$\t&$m_\pi$ [MeV]\t&$m_K$ [MeV]\t&$N_{\text{cfg}}^{\text{dis}}$\t&$N_{\text{cfg}}^{\text{con}}$\\\n\hline\nH102\t&3.40\t&0.08636\t&$32^3\times 96$\t&352\t&438\t&997\t&997\\\nH105\t&3.40\t&0.08636 &$32^3\times 96$\t&278\t&460\t&1019\t&1019\\\nC101\t&3.40\t&0.08636\t&$48^3\times 96$\t&223\t&472\t&1000\t&2000\\\n\hline\nN401\t&3.46\t&0.07634\t&$48^3\times 128$\t&289\t&462\t&701\t&701\\\n\hline\nN203\t&3.55 &0.06426\t&$48^3\times 128$ &345\t&441\t&768\t&1536\\\nN200\t&3.55 &0.06426\t&$48^3\times 128$\t&283\t&463\t&852\t&852\\\nD200\t&3.55 &0.06426\t&$64^3\times 128$\t&200\t&480\t&234\t&936\\\n\hline\nN302\t&3.70\t&0.04981\t&$48^3\times 128$\t&354\t&458\t&1177\t&1177\\\n\end{tabular}\n\end{center}\n\caption{The ensembles used for this work. 
The last two columns give the number of configurations for the disconnected and connected diagrams, respectively.}\n\\label{tab:ensembles}\n\\end{table}\n\nFor the disconnected three-point function, it is straightforward to check that it factorizes into separate traces of the quark loop and the two-point function\n\\begin{equation}\nC_{3,J_\\mu}^{N,l\/s}(\\bm{q},z_0;\\bm{p}^\\prime,y_0;\\Gamma_\\nu) = \\left\\langle \\mathcal{L}_{J_\\mu}^{l\/s}(\\bm{q},z_0)\\cdot \\mathcal{C}_2^N(\\bm{p}^\\prime,y_0;\\Gamma_\\nu) \\right\\rangle_G\\ .\n\\end{equation}\n\nThe quark loops have been calculated with an improved stochastic estimator provided by hierarchical probing \\cite{Stathopoulos}. Here the sequence of noise vectors is replaced by a sequence of Hadamard vectors, which are element-wise multiplied by one noise vector. In total, we considered two noise vectors, each multiplied with a sequence of 512 Hadamard vectors on every configuration and for each quark flavor. For the two-point function we used the standard nucleon interpolator\n\\begin{equation}\nN_{\\alpha}(x) = \\epsilon_{abc}\\left( u_\\beta^a(x)\\ \\left(C\\gamma_5\\right)_{\\beta\\gamma}\\ d_\\gamma^b(x) \\right)\\ u_\\alpha^c(x)\\ .\n\\end{equation}\n\nTo increase the overlap with the ground state, Wuppertal smearing \\cite{Gusken} was used at the source and the sink. The statistical precision of the two-point function has been improved by employing the truncated-solver method \\cite{Bali,Shintani}. For each ensemble we performed 32 low-precision solves on seven timeslices equally distributed around the center of the lattice and separated by seven timeslices without sources. On each of these timeslices one low-precision solve was considered for the bias correction, except for ensemble H105, where we considered four low-precision solves on each timeslice. 
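The Hadamard-times-noise source construction described above can be sketched as follows (a simplified illustration; hierarchical probing additionally orders the Hadamard vectors so that nearby lattice sites are probed first, which is not modeled here):

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.kron(H, np.array([[1.0, 1.0], [1.0, -1.0]]))
    return H

n_probe = 512                                  # number of Hadamard vectors, as in the text
rng = np.random.default_rng(0)
noise = rng.choice([-1.0, 1.0], size=n_probe)  # one Z_2 noise vector (toy dimension)

# Each probing source is a Hadamard vector multiplied element-wise
# with the same noise vector.
H = hadamard(n_probe)
sources = H.T * noise                          # shape: (512, n_probe)
```

Because the Hadamard rows are mutually orthogonal and the noise entries square to one, the probing sources remain exactly orthogonal, which suppresses off-diagonal contamination in the loop estimate.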
For each current we considered all components combined with four choices of the projector $\\Gamma$: $\\Gamma_0 = \\frac{1}{2}(1+\\gamma_0)\\ ,\\ \\Gamma_k = \\frac{1}{2}(1+\\gamma_0)\\ \\mathrm{i}\\gamma_5\\gamma_k\\ ,\\ k\\in\\{1,2,3\\}$. Details on the two- and three-point functions for the connected contributions at $Q^2=0$ can be found in Ref. \\cite{Harris}; these details carry over to $Q^2\\neq 0$ as well. Here, we did not make use of the zeroth component of the axial vector current due to its large excited-state contamination. Furthermore, the electromagnetic form factors can be obtained directly, as shown in Ref. \\cite{Capitani}, without fitting the system in \\Eq{eq:system}. Note that for the connected three-point function the nucleon at the sink is always at rest, whereas in the disconnected contributions we consider up to two units of the squared lattice integer momentum. For the local axial vector and the conserved vector current we implemented the improved versions\n\\begin{equation}\nA_\\mu(\\vec{z},z_0)^{\\text{Imp.}}=A_\\mu(\\vec{z},z_0)+ac_A\\ \\partial_\\mu P(\\vec{z},z_0)\\ ,\\ V_\\mu(\\vec{z},z_0)^{\\text{Imp.}}=V_\\mu(\\vec{z},z_0)+ac_V\\ \\partial_\\nu T_{\\nu\\mu}(\\vec{z},z_0)\\ ,\n\\end{equation}\nwhere the improvement coefficients have been determined non-perturbatively in Refs.~\\cite{Bulava,Gerardin}. 
For the tensor charge we used the local tensor current.\n\n\\section{Renormalization}\nAs a starting point for the renormalization procedure, we consider the flavor-diagonal operators\n\\begin{equation}\nO^a_\\Gamma(x) = \\bar{\\psi}(x) \\Gamma \\lambda^a \\psi(x)\\ \\ ,\\ \\ \\psi = (u,d,s)^T\\ \\ ,\\ \\ a\\in\\{3,8,0\\}\\ ,\n\\end{equation}\n\\begin{equation}\n\\lambda^3 = \\dfrac{1}{\\sqrt{2}}\\text{diag}\\left(1,-1,0\\right)\\ ,\\ \\lambda^8 = \\dfrac{1}{\\sqrt{6}}\\text{diag}\\left(1,1,-2\\right)\\ ,\\ \\lambda^0 = \\dfrac{1}{\\sqrt{3}}\\text{diag}\\left(1,1,1\\right)\\ .\n\\end{equation}\n\nWe make use of $N_f=3$ $\\mathcal{O}(a)$-improved Wilson fermion ensembles, and hence, the renormalization matrix of the flavor-diagonal operators is given by\n\\begin{equation}\nZ_\\Gamma = \\text{diag}\\left(Z_\\Gamma^{33},Z_\\Gamma^{88},Z_\\Gamma^{00}\\right)\\ \\ ,\\ \\ Z_\\Gamma^{33} = Z_\\Gamma^{88}\\ \\ .\n\\end{equation}\n\nAll details on our renormalization procedure for the $Z_\\Gamma^{33}$ can be found in Ref. \\cite{Harris}. We follow the same procedure for $Z_\\Gamma^{00}$, which, however, necessitates the calculation of additional diagrams. The required quark loops have been estimated with hierarchical probing, using 512 Hadamard vectors on each configuration. Furthermore, in the case of the singlet axial operator, an anomalous dimension arises (given in Ref. \\cite{Larin}) and the conversion factors $Z^{\\overline{\\text{MS}}}_{\\text{RI}^\\prime\\text{-MOM}}$ for the singlet vector and axial vector operators are only known to one loop. 
Finally, we apply a basis transformation to the physical basis given by\n\\begin{equation}\n\\left(\\begin{array}{c}\nO_\\Gamma^{u-d}(x)_R\\\\\nO_\\Gamma^{u+d}(x)_R\\\\\nO_\\Gamma^{s}(x)_R\\\\\n\\end{array} \\right) = \\left(\\begin{array}{ccc} Z_\\Gamma^{u-d,u-d} &0 &0\\\\ 0 &Z_\\Gamma^{u+d,u+d} &Z_\\Gamma^{u+d,s}\\\\ 0 &Z_\\Gamma^{s,u+d} &Z_\\Gamma^{s,s} \\end{array}\\right)\\ \\left(\\begin{array}{c}\nO_\\Gamma^{u-d}(x)\\\\\nO_\\Gamma^{u+d}(x)\\\\\nO_\\Gamma^{s}(x)\\\\\n\\end{array} \\right)\\ .\n\\end{equation}\n\nThe final renormalization constants are taken in the $\\overline{\\text{MS}}$-scheme at $\\mu=2\\,\\text{GeV}$. Note that the renormalization constants at $\\beta=3.7$ have been obtained from a linear extrapolation. To account for the systematic uncertainty introduced through the extrapolation, the error of the renormalization constants at this $\\beta$ has been inflated by a factor of ten. The reliability of the extrapolation was checked for the isovector axial charge $g_A$ in Ref. \\cite{Harris}, where the results from our renormalization procedure were found to be consistent with those obtained through the Schr\\\"odinger functional.\n\n\n\\section{Results}\nIn \\Fig{fig:svecff}, we show the resulting strange electromagnetic form factors for three lattice spacings at a constant kaon mass of $m_K\\approx 460\\,\\text{MeV}$. Excited-state contamination has been handled with the summation method with $y_0\\in[0.5,1.3]\\,\\text{fm}$. The bands correspond to $z$-expansion fits \\cite{Hill} to fifth order using the two-kaon production threshold for the conformal map, where Gaussian priors of the form $\\tilde{a}_k = 0 \\pm 5\\max\\{|a_0|,|a_1|\\}\\ \\forall\\ k>1$ have been employed to stabilize the fits. 
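The conformal map entering the $z$-expansion fits just described, with the two-kaon production threshold $t_{\mathrm{cut}} = 4 m_K^2$, can be sketched as follows (illustrative values; the choice of mapping point $t_0 = 0$ is an assumption here, as the text does not specify it):

```python
import numpy as np

def z_of_Q2(Q2, t_cut):
    """Conformal map of the z-expansion (with mapping point t_0 = 0)."""
    a = np.sqrt(t_cut + Q2)
    b = np.sqrt(t_cut)
    return (a - b) / (a + b)

m_K = 0.46                      # GeV, approximate kaon mass of the shown ensembles
t_cut = 4.0 * m_K**2            # two-kaon production threshold in GeV^2
Q2 = np.linspace(0.0, 0.5, 6)   # GeV^2, up to the Q^2 cut
z = z_of_Q2(Q2, t_cut)
# The form factor is then modeled as G(Q^2) = sum_k a_k z^k up to fifth
# order, with Gaussian priors 0 +- 5*max(|a_0|, |a_1|) on the a_k for k > 1.
```

The map sends $Q^2=0$ to $z=0$ and compresses the whole fit region into $|z|<1$, which is what makes the truncated polynomial expansion well behaved.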
\n\n\\begin{figure}\n\\hspace{1cm}\n\\includegraphics[scale=0.39]{plots\/zexp_fit_GEs.pdf}\n\\hspace{1cm}\n\\includegraphics[scale=0.39]{plots\/zexp_fit_GMs.pdf}\n\\caption{The $Q^2$-dependence of the strange electromagnetic form factors at a kaon mass of $m_K\\approx 460\\,\\text{MeV}$.}\n\\label{fig:svecff}\n\\end{figure}\n\nFrom the $z$-expansion fits we can extract the strange magnetic moment $\\mu^s=G^s_M(0)$ and the strange electromagnetic charge radii $(r^2)_{E\/M}^s$, shown in \\Fig{fig:extsEMff}. Note that for our main result we impose a cut in $Q^2$ at $0.5\\,\\text{GeV}^2$. We extrapolate to the physical kaon mass by performing linear fits of the form \n\\begin{equation}\n(r^2_E)^s(m_K) = a_0 + a_1 \\log(m_K)\\ ,\\ \\mu^s(m_K) = a_2 + a_3 m_K\\ ,\\ (r^2_M)^s(m_K) = a_4 + a_5\/m_K\\ ,\n\\end{equation}\nderived from leading-order SU(3) HBChPT \\cite{Hemmert}, where we add the higher-order $a_4$-term as our data does not show the leading-order divergence expected by HBChPT. \n\n\\begin{figure}\n\\includegraphics[scale=0.325]{plots\/elec_charge_radius_phys.pdf}\n\\includegraphics[scale=0.325]{plots\/magn_mom_phys.pdf}\n\\includegraphics[scale=0.325]{plots\/magn_charge_radius_phys.pdf}\n\\caption{The kaon mass dependence of the strange magnetic moment and strange charge radii. The orange band is a linear extrapolation to the physical kaon mass (black data point).}\n\\label{fig:extsEMff}\n\\end{figure}\n\nIn order to estimate systematic errors we consider four variations: the inclusion of $\\mathcal{O}(a^2)$ lattice artifacts, doubling the prior width in the $z$-expansion fits, handling excited states with the plateau method at $\\sim 1\\,\\text{fm}$, and removing the cut in $Q^2$. 
The respective errors are taken from the differences from the main result and then added in quadrature, leading to our final results\n\\begin{equation}\n(r^2_E)^s = -0.0048(11)(23)\\,\\text{fm}^2\\ ,\\ \\mu^s = -0.020(4)(11)\\ ,\\ (r^2_M)^s = -0.011(5)(12)\\,\\text{fm}^2\\ .\n\\end{equation}\n\nWe follow the same procedure for the strange axial vector form factor, shown in \\Fig{fig:saxvecffagts}. Also shown is a linear extrapolation in $m_\\pi^2$ to the physical point for the strange contribution to the axial charge. The same sources of systematics are considered in this case, with the adjustment that $\\mathcal{O}(a)$ rather than $\\mathcal{O}(a^2)$ lattice artifacts are included, as the renormalization constants are not $\\mathcal{O}(a)$-improved. Our final result is given by $g_A^s = -0.044(4)(5)$. Furthermore, in \\Fig{fig:saxvecffagts}, we show the analogous linear extrapolation of the strange tensor charge, again obtained from the summation method. In the estimation of the systematic error, we consider the plateau method at $\\sim 1\\,\\text{fm}$ and the inclusion of $\\mathcal{O}(a)$ lattice artifacts in the extrapolation. 
The final result is $g_T^s = -0.0026(73)(424)$.\n\\begin{figure}\n\\includegraphics[scale=0.325]{plots\/zexp_fit_GAs.pdf}\n\\includegraphics[scale=0.325]{plots\/gas_phys.pdf}\n\\includegraphics[scale=0.325]{plots\/gts_phys.pdf}\n\\caption{The $Q^2$-dependence of the strange axial vector form factor for three lattice spacings at constant kaon mass of $m_K\\approx 460\\,\\text{MeV}$ (left), and the extrapolation of the strange axial charge (middle) and strange tensor charge (right) to the physical point (black data point).}\n\\label{fig:saxvecffagts}\n\\end{figure}\n\nWe make use of the following decomposition of the $u$- and $d$-charges\n\\begin{equation}\ng^u_{A\/T} = \\frac{1}{2}\\left( g_{A\/T}^{u+d-2s} + 2 g_{A\/T}^{s} + g_{A\/T}^{u-d} \\right)\\ ,\\ g^d_{A\/T} = \\frac{1}{2}\\left( g_{A\/T}^{u+d-2s} + 2 g_{A\/T}^{s} - g_{A\/T}^{u-d} \\right)\\ .\n\\label{eq:qcdec}\n\\end{equation}\n\nThe strange contributions have been shown above and our results for the isovector charges can be found in Ref. \\cite{Harris}. It is favorable to consider the $(u+d-2s)$-contribution, as it renormalizes like the isovector charges, so that no mixing occurs, and the disconnected contributions can be combined to $(l-s)$, which leads to a noise reduction. To handle the excited state contamination, we perform two-state fits simultaneously to all source-sink separations $y_0$ of the form\n\\begin{equation*}\ng_{A\/T}(z_0,y_0) = g_{A\/T} + A_{A\/T} \\left( e^{-\\Delta z_0} + e^{-\\Delta (y_0 - z_0)} \\right) + B_{A\/T} e^{-\\Delta y_0}\\ ,\n\\end{equation*}\nwhere the energy gap $\\Delta$ to the first excited state has been determined in a simultaneous two-state fit to all six isovector charges, as explained in detail in Ref. \\cite{Harris}. We show an example of this fit in \\Fig{fig:qcat}, where we also compare to the summation method. 
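The two-state ansatz above can be expressed as a simple model function of the insertion time $z_0$ and the source-sink separation $y_0$; a sketch illustrating its asymptotic behavior (all parameter values below are placeholders, not fit results):

```python
import numpy as np

def two_state(z0, y0, g, A, B, Delta):
    """Two-state model for an effective charge: asymptotic value g plus
    excited-state terms decaying with the energy gap Delta."""
    return (g
            + A * (np.exp(-Delta * z0) + np.exp(-Delta * (y0 - z0)))
            + B * np.exp(-Delta * y0))

# At the midpoint z0 = y0/2 and large y0, the excited-state terms die off
# exponentially and the plateau approaches the asymptotic charge g.
g_mid = two_state(z0=10.0, y0=20.0, g=-0.044, A=0.01, B=0.02, Delta=1.5)
```

Fitting this function simultaneously to all $y_0$, with $\Delta$ fixed from the joint fit to the six isovector charges, isolates the ground-state matrix element $g_{A/T}$.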
We then perform linear extrapolations in $m_\\pi^2$ to the physical point, also shown in \\Fig{fig:qcat}.\n\\begin{figure}\n\\includegraphics[scale=0.325]{plots\/twostatefit_N302.pdf}\n\\includegraphics[scale=0.325]{plots\/gaUpDm2s_phys.pdf}\n\\includegraphics[scale=0.325]{plots\/gtUpDm2s_phys.pdf}\n\\caption{Two-state fit to $g_A^{u+d-2s}$ at the smallest $y_0$ on ensemble N302 (left), where also the summation method result is shown (blue data point), and extrapolation of $g_A^{u+d-2s}$ (middle) and $g_T^{u+d-2s}$ (right) to the physical point (black data point).}\n\\label{fig:qcat}\n\\end{figure}\n\nTo assign a systematic error, we consider the differences from the extrapolation of the summation method results and from an extrapolation including $\\mathcal{O}(a)$ lattice artifacts. The final results are given by\n\\begin{equation}\ng_A^{u} = 0.84(3)(4)\\ ,\\ g_A^{d} = -0.40(3)(4)\\ ,\\ g_T^{u} = 0.77(4)(6)\\ ,\\ g_T^{d} = -0.19(4)(6)\\ .\n\\end{equation}\n\n\\paragraph{Acknowledgements:} This research is supported by the DFG through the SFB 1044, and under grant HI 2048\/1-1. Calculations for this project were partly performed on the HPC clusters ``Clover'' and ``HIMster II'' at the Helmholtz-Institut Mainz and ``Mogon II'' at JGU Mainz. Additional computer time has been allocated through projects HMZ21 and HMZ36 on the BlueGene supercomputer system ``JUQUEEN'' at NIC, J\\\"ulich. Our programmes use the QDP++ library \\cite{QDPpp} and the deflated SAP+GCR solver from the openQCD package \\cite{openQCD}, while the contractions have been explicitly checked using \\cite{QCT}. We are grateful to our colleagues in the CLS initiative for sharing ensembles. 
The ensembles for the calculation of the renormalization constants have been generated in a joint effort with the RQCD collaboration.\n\n\n\\section{Introduction}\n\\emph{Address Parsing} is the task of decomposing an address into the different components it is made of. This task is an essential part of many applications, such as geocoding and record linkage. Indeed, to find a particular location based on textual data, it is quite useful to detect the different parts of an address to make an informed decision. Similarly, comparing two addresses to decide whether two or more database entries refer to the same entity can prove to be quite difficult and prone to errors if based on methods such as edit distance algorithms given the various address writing standards.\n\nThere have been many efforts to solve the address parsing problem. From rule-based techniques \\cite{rule-based} to probabilistic approaches and neural network models \\cite{8615844}, a lot of progress has been made in reaching an accurate segmentation of addresses. These previous pieces of work did a remarkable job at finding solutions for the challenges related to the address parsing task. However, most of these approaches either do not take into account parsing addresses from different countries or do so but at the cost of a considerable amount of meta-data and elaborate data pre-processing pipelines \\cite{rnn-parsing, hmm-parsing, crf-parsing, feedforward-parsing}. \n\nOur work comes with three contributions. First, we propose an approach for multinational address parsing using a Recurrent Neural Network (RNN) architecture by building on, and improving, our previous work on the subject \\cite{9357170}. Secondly, we evaluate the degree to which the improved models trained on countries' addresses data can perform well at parsing addresses from other countries. 
Finally, we evaluate the performance of our models on incomplete addresses and propose a method to improve their accuracy.\nThe article outline is as follows: we present the related work in \\autoref{sec:relatedwork}, our proposed architectures and approaches in \\autoref{sec:architectures}, the datasets that we used for our experiments in \\autoref{sec:data}, our experimental settings in \\autoref{sec:exp}, our results on complete addresses in \\autoref{sec:completeresults} and on addresses with missing values in \\autoref{sec:missingresults}, and finally we conclude our article with an overview of our work and future work in \\autoref{sec:conclusion}.\n\n\\section{Related Work}\n\\label{sec:relatedwork}\n\nSince address parsing is a sequence tagging task, it has been tackled using probabilistic methods mainly based on Hidden Markov Models (HMM) and Conditional Random Fields (CRF) \\cite{hmm-parsing, crf-parsing, 8615844}. For instance, \\cite{hmm-parsing} proposed a large scale HMM-based parsing technique capable of segmenting a large number of addresses whilst being robust to possible irregularities in the input data. In addition, \\cite{crf-parsing} implemented a discriminative model using a linear-chain CRF coupled with a learned Stochastic Regular Grammar (SRG). This approach allowed the authors to better address the complexity of the features while capturing higher-level dependencies by applying the SRG on the CRF outputs as a score function, thus taking into account the possible lack of features for a particular token in a lexicon-based model. These probabilistic methods usually rely on structured data as well as some sort of prior knowledge of this data for feature extraction or in order to implement algorithms such as Viterbi \\cite{viterbi}, especially in the case of generative methods. \n\nIn recent years, new methods \\cite{rnn-parsing, 8615844} utilizing the power of neural networks have been proposed as solutions for the address parsing problem. 
Using a single-hidden-layer feed-forward model, \\cite{feedforward-parsing} achieved good performance. However, their approach relied on a pre-processing and post-processing pipeline to deal with the different structures of address writing and the possible prediction errors. For instance, the input data is normalized to reduce noise and standardize the many variations that can refer to the same word, such as \\emph{road} and \\emph{rd}. In addition, the model's predictions are put through a rule-based validation step to make sure that they fit known patterns. In contrast, \\cite{rnn-parsing} proposed a deep learning approach based on the use of RNNs and did not use any pre- or post-processing. Their experiments focused on comparing the performance of both unidirectional and bidirectional vanilla RNNs and Long Short-Term Memory (LSTM) models \\cite{LSTM}, as well as a Seq2Seq model. The models achieved high accuracy on test sets with the Seq2Seq leading the scoreboard on most of them with no particular pre-processing needed during the inference process.\n\nDespite reaching notable performances, the aforementioned approaches are limited to parsing addresses from a single country and would need to be adjusted to support a multinational scope of address parsing. To tackle this problem, Libpostal\\footnote{\\href{https:\/\/github.com\/openvenues\/libpostal}{\\url{https:\/\/github.com\/openvenues\/libpostal}}}, a library for international address parsing, has been proposed. This library uses a CRF-based model trained with an averaged Perceptron for scalability. The model was trained on data from each country in the world and was able to achieve a $99.45~\\%$ full parse accuracy\\footnote{The accuracy was computed considering the entire sequence and was not focused on individual tokens.}, thus defining a new state-of-the-art\\footnote{For a comparison of some of our models with Libpostal, visit our previous article \\cite{9357170}.}. 
However, this requires putting addresses through a heavy pre-processing pipeline before feeding them to the prediction model. \n\nTaking into account that some countries' addresses share the same language and format (i.e., the order of the address elements), \\cite{9357170} have proposed to leverage subword embeddings with a Seq2Seq architecture to achieve multinational address parsing. They achieved a $\\mathbf{99~\\%}$ parse accuracy on 20 countries. They also proposed to explore whether their trained model had learned transferable representations in a zero-shot evaluation procedure. Finally, they tested the parsing accuracy on 41 countries not seen during training and achieved at least $\\mathbf{80~\\%}$ parse accuracy for most of these countries.\n\nThis work aims to enhance the performance of a single model capable of parsing addresses from multiple countries, to explore the possibility of zero-shot transfer from some countries' addresses to others', and to evaluate, and improve, the performance on incomplete addresses. In this respect, attention mechanisms have become ubiquitous when it comes to improving the performance of natural language processing models. These mechanisms come in different formats \\cite{attn_nlp} and have been applied to various tasks such as Machine Translation \\cite{luong-etal-2015-effective}, Named Entity Recognition \\cite{attn_ner, attn_ner_2} and Sentiment Analysis \\cite{wang-etal-2016-attention}. Moreover, the Transformer architecture \\cite{NIPS2017_3f5ee243}, which is based on self-attention, has become the state-of-the-art in an overwhelming majority of natural language processing tasks. Zero-shot transfer has also been applied in similar settings to ours, particularly in cross-lingual transfer. 
For instance, \\cite{zero-shot-bert} propose a linear transformation which projects the contextual representations obtained by BERT \\cite{devlin-etal-2019-bert} for certain target languages into the space of a source language, on which the model was trained to achieve the task of dependency parsing. This strategy managed to improve the model's performance in zero-shot cross lingual dependency parsing. As for \\cite{DBLP:conf\/aaai\/JiZDZCL20}, they have used cross-lingual pre-training of a Transformer encoder in order to achieve zero-shot machine translation for source-target language pairs with scarce parallel training data. Another technique that has proven to be effective in the context of zero-shot transfer is domain adaptation. In this regard, \\cite{peng2018zeroshot} and \\cite{9447890} approach the problem of zero-shot domain adaptation, in which target domain data for a particular task is completely absent, by training a model on an irrelevant task for which data from both source and target domains is available, while also utilizing data from the source domain to learn the objective task. We leverage both attention mechanisms and domain adaptation in an attempt to build a model capable of performing well on the task of address parsing for training countries and transferring the obtained knowledge to other countries without further training.\n\n\n\\section{Architectures}\n\\label{sec:architectures}\nOur models' architectures are based on previous work conducted in \\cite{9357170}. They are composed of an embedding model and a tagging model. The base architectures are improved using an attention mechanism as well as domain adaptation. These improvements are discussed in Subsections \\ref{subsec:attention} and \\ref{subsec:adann} respectively. 
Subsection~\\ref{subsec:base} introduces the base architecture.\n\n\\subsection{Base Architecture}\n\\label{subsec:base}\n\n\\subsubsection{Embedding Model}\n\\label{subsec:embedding}\nAn embedding model is essential in deep learning architectures solving natural language processing problems since it converts textual data into vector representations which are then used as inputs to the downstream models. Since we are dealing with a multilingual dataset, the embedding model used must be able to handle multiple languages.\n\nFirstly, we use a fixed pre-trained monolingual fastText model \\cite{fasttext} (pre-trained on the French language). The French embeddings were chosen both because the French language shares Latin roots with many languages in our test set and because of the considerable size of the corpus on which these embeddings were trained. Despite its monolingual characteristic, this embedding model is able to generate vector representations for words outside its training vocabulary by utilizing subword information in the form of character n-grams. We refer to this embedding model as FastText.\n\nSecondly, we use an encoding of words using multilingual subword embeddings provided by MultiBPEmb \\cite{bpemb}. This model relies on subword units to produce vector representations, therefore producing one or more vectors for each word. Consequently, we merge the obtained embeddings into a single word embedding using a Bidirectional LSTM (Bi-LSTM) with a hidden state dimension of 300. We build the word embeddings by running the concatenated forward and backward hidden states corresponding to the last time step for each word decomposition through a fully connected layer whose number of neurons is equal to the dimension of the hidden states. This approach produces 300-dimensional word embeddings. 
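The subword-merging step just described can be sketched in PyTorch as follows (a minimal illustration with hypothetical tensor names; the actual implementation may differ in details such as batching and padding):

```python
import torch
import torch.nn as nn

emb_dim, hidden = 300, 300
bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
proj = nn.Linear(2 * hidden, hidden)  # fully connected layer down to 300 dims

# Toy batch: one word decomposed into 4 subword embeddings of dimension 300.
subword_embs = torch.randn(1, 4, emb_dim)
out, _ = bilstm(subword_embs)         # shape: (1, 4, 600)

# Concatenated forward and backward hidden states at the last time step,
# projected down to a single 300-dimensional word embedding.
last = out[:, -1, :]
word_emb = proj(last)                 # shape: (1, 300)
```

The projection keeps the merged representation at the same dimensionality as the FastText embeddings, so both embedding variants can feed the same downstream tagger.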
We refer to these embeddings as BPEmb.\n\n\\subsubsection{Tagging Model}\n\\label{subsec:tagging}\n\nThe downstream tagging model is a Seq2Seq model consisting of a one-layer unidirectional LSTM encoder and a one-layer unidirectional LSTM decoder followed by a fully-connected linear layer with a softmax activation. Both the encoder's and decoder's hidden states are of dimension $1024$. The embedded address sequence is fed to the encoder, which produces hidden states, the last of which is used as a context vector to initialize the decoder's hidden states. The decoder is then given a Beginning Of Sequence (BOS) token as input and, at each time step, the prediction from the last step is used as input. To better adapt the model to the task of sequence labeling and to facilitate the convergence process, we only require the decoder to produce a sequence with the same length as the input address. The decoder's outputs are forwarded to the linear layer, whose number of neurons is equal to the tag space dimensionality. The softmax activation function computes probabilities over the linear layer's outputs to predict the most likely token at each time step.\n\n\\subsection{Attention Architecture}\n\\label{subsec:attention}\nAttention mechanisms are neural network components that can produce a distribution describing the interdependence between a model's inputs and outputs (general attention) or amongst model inputs themselves (self-attention). These mechanisms are common in natural language processing encoder-decoder architectures such as neural machine translation models \\cite{nmt} since they have been shown to improve models' performance and help address some of the issues recurrent neural networks suffer from when it comes to dealing with long sequences.\n\nAttention mechanisms are also exploited for the interpretability of neural networks, where they are considered to provide insights about the impact of some neural network's inputs on its predictions. 
This use of attention mechanisms has been contested in \\cite{jain-wallace-2019-attention} because of a lack of consistency with feature importance measures, among other things. However, other work has suggested that attention mechanisms provide a certain degree of interpretability depending on the task at hand \\cite{wiegreffe2019attention, vashishth2019attention}. In this work, we focus on the performance enhancement of address tagging models using attention.\n\n\\subsubsection{Attention models}\nThe implementation of our attention models was inspired by \\cite{nmt}. The models' architecture remains similar to that of our base models, with some alterations to the decoding process. Indeed, instead of feeding the last predicted tag as an input to the decoder at the current time step $i$, we compute an input using the encoder's outputs $\\vec{O}$, the last decoder hidden state $h_{i-1}$, and the last predicted tag's representation $\\vec{t}$. \n\nWe start by computing attention weights as follows:\n\\begin{equation*}\n\\alpha_{i, j} = \\frac{\\exp(a_{i, j})}{\\sum_{k} \\exp(a_{i, k})}\\ ,\n\\end{equation*}\nwhere\n\\begin{equation*}\na_{i, j} = p \\times \\tanh(W_{h} h_{i-1} + W_{o} O_{j})\\ ,\n\\end{equation*}\nand $W_{h}$, $W_{o}$ and $p$ are learnable parameters.\n\nNext, we compute a context vector by weighting the encoder's outputs with the obtained attention weights:\n\\begin{equation*}\n\\vec{c_{i}} = \\sum_{k} \\alpha_{i, k} O_{k}\\ .\n\\end{equation*}\n\nFinally, we obtain the decoder's input by concatenating the last prediction $\\vec{t}$ with the context vector $\\vec{c_{i}}$.\nNote that the first decoder's input is computed using the last encoder's hidden state.\n\nWe use this approach with the two aforementioned embedding methods and name the obtained models \\fasttextatt{} and \\textbf{BPEmbAtt}{} respectively.\n\n\\subsection{Domain Adaptation}\n\\label{subsec:adann}\nDomain adaptation is a branch of transfer learning which aims at 
applying a model trained on data from a source domain to data from a target domain that differs somewhat from the source but still retains a certain degree of similarity. More specifically, it is a technique used when the input and output belong to the same space, but the probability distribution which associates them changes as we move from one domain to the other \\cite{da-tl}. Our objective is to generalize the performance of address parsing models to countries for which no data is used at training time. In order to extend our models to cope with this domain adaptation problem, we enhance our base approach using domain adversarial neural networks. \n\n\\subsubsection{Domain Adversarial Neural Networks}\nDomain adversarial neural networks \\cite{dann} achieve domain adaptation by appending a second parallel output layer to a neural network classifier, the purpose of which is to predict the domain of the network's input. Since task labels are not available for the target domain during training, two losses are computed when the input belongs to the source domain (i.e. the label classification loss and the domain classification loss), whilst only the domain classification loss is computed when the input belongs to the target domain. Moreover, the gradient associated with the domain classification layer is reversed during backpropagation. This aims to hinder the neural network's ability to differentiate between the source and target domains while still learning to perform well on classifying data from the source domain.\n\n\\subsubsection{Implementation}\nFirst of all, we modified our neural network architecture by adding a domain discriminator in the form of a fully connected layer with two output neurons which takes the context vector produced by the Seq2Seq encoder as input. This layer is preceded by a gradient reversal layer that reverses the computed gradient sign during backpropagation. 
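A minimal PyTorch sketch of such a gradient reversal layer (the scaling factor `lambd` is an illustrative assumption; the text does not specify one):

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and optionally scale) the gradient; no gradient for lambd.
        return -ctx.lambd * grad_output, None

x = torch.ones(3, requires_grad=True)
y = GradReverse.apply(x, 1.0).sum()
y.backward()
print(x.grad)  # tensor([-1., -1., -1.]): gradient reversed
```

Placed between the encoder's context vector and the domain discriminator, this layer lets the discriminator minimize its own loss while pushing the encoder toward domain-invariant representations.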
The domain discriminator is followed by a Softmax activation function, and its loss is computed using a Cross-Entropy loss function. Secondly, we used the ADANN \\cite{adann} training algorithm to train our model since it is designed to enable multi-domain adversarial training by considering, during the forward pass of each batch, one domain as a source domain and another random domain as a target domain, and so on for each of the available domains; our domains being the countries for which training data is available. We hope, by using this approach, to construct models that can learn to parse addresses from countries with different address formats without making a significant distinction between them. Therefore, these models should perform better on parsing addresses from countries not seen during their training.\n\nWe use this approach with the two aforementioned embedding methods and name the obtained models \\textbf{fastTextADANN}{} and \\textbf{BPEmbADANN}{} respectively.\n\n\\section{Data}\n\\label{sec:data}\n\\subsection{Complete Address Dataset}\n\nWe use the same dataset as \\cite{9357170}\\footnote{\\href{https:\/\/github.com\/GRAAL-Research\/deepparse-address-data}{\\url{https:\/\/github.com\/GRAAL-Research\/deepparse-address-data}}}, the Deepparse dataset, which is built on the open-source dataset on which Libpostal's models were trained.\nThe Libpostal dataset is based on three open-source address datasets: OpenStreetMap, Yahoo GeoPlanet and OpenAddresses. \nIt contains over 1 billion addresses from nearly 241 countries \\cite{libpostal}.\nUsing this dataset, \\cite{9357170} have selected 61 countries based on two criteria: the availability of at least 500 addresses and the availability of the address component types.\nThe address data of those 61 countries are in their official languages.\nFor example, Korean addresses are in Korean while Canada's addresses are in the two official languages, French and English. 
\nAmong the 61 selected countries, twenty countries are used for multinational training, and forty-one are used for zero-shot transfer evaluation. \nThe dataset uses the following eight tags: StreetNumber, StreetName, Unit, Municipality, Province, PostalCode, Orientation, and GeneralDelivery.\n\n\\subsection{Incomplete Address Dataset}\n\\label{subsec:iad}\nWe introduce a second dataset based on the Deepparse dataset, similar to the complete address dataset. \nIt is composed of the same twenty countries used for multinational training but with a sample size of \\num{50000} for training and \\num{25000} for holdout evaluation. \nThe dataset consists only of addresses where each one is missing at least one of the following four tags: StreetName, PostalCode, Municipality and Province. We consider an address incomplete if it is not composed of at least all of the four tags. \nFor example, the sequence of tags for the address ``221 B Baker Street'' is \\{StreetNumber, Unit, StreetName, StreetName\\}, and it is incomplete since the PostalCode and the Municipality tags are missing. We will refer to this dataset as the ``incomplete address'' dataset.\n\n\\section{Experiments}\n\\label{sec:exp}\nFor our experiments, as per \\cite{9357170}, we trained each of our four models (\\fasttextatt{}, \\textbf{fastTextADANN}{}, \\textbf{BPEmbAtt}{} and \\textbf{BPEmbADANN}{}) five times\\footnote{Using each of the following seeds $\\{5, 10, 15, 20, 25\\}$. When a model did not converge (a high loss value on train and validation), we retrained the model using a different seed (30).} for 200 epochs with a batch size of \\num{512} for the base approach and the attention models and \\num{256} for the ADANN one. Early stopping with a patience of fifteen epochs was also applied during training. We initialize the learning rate at 0.1 and use learning rate scheduling to lower it by a factor of 0.1 after ten epochs without loss reduction. 
Our loss function of choice is the Cross-Entropy loss due to its suitability for the softmax output. The optimization is done through Stochastic Gradient Descent.\n\nAs per \\cite{9357170}, we use teacher forcing \\cite{6795228} to speed up convergence. The architecture and the training of the models were implemented using PyTorch \\cite{pytorch} and Poutyne \\cite{poutyne}.\n\n\\subsection{Evaluation Procedure}\nWe train our four models on our multinational dataset, the differences between the models being (1) the word embedding method employed (fastText or BPEmb) and (2) the use of an attention mechanism or of domain adaptation learning. Each model has been trained five times, and we report the models' mean accuracy and standard deviation on the per-country zero-shot data. The accuracy for each sequence is computed as the proportion of the tags predicted correctly by the model. Thus, predicting all the tags of a sequence correctly yields a perfect accuracy, each tag prediction error lowers the sequence's accuracy, and the accuracy is null only when all the predicted tags are incorrect. These results will be discussed in \\autoref{sec:completeresults}.\n\n\\subsection{Incomplete Address Evaluation Procedure}\nSince addresses do not always include all the components, we also evaluate our four models on the incomplete address dataset introduced in Subsection~\\ref{subsec:iad}. We hypothesize that an incomplete address can confuse our models since we use a Seq2Seq architecture, and the compressed representation of an incomplete address will not be the same as that of the corresponding complete one. For example, the address ``221 B Baker Street London NW1 6XE'' is complete and is a typical way to write an address. However, many addresses are not written in such a form, such as the address ``221 B Baker Street'', which is the same as the previous one but without the city and the postal code. 
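The per-sequence accuracy and the completeness criterion used in these evaluation procedures can be sketched as follows (hypothetical helper functions, assuming tags are plain strings):

```python
# The four tags whose joint presence defines a "complete" address.
REQUIRED_TAGS = {"StreetName", "PostalCode", "Municipality", "Province"}

def sequence_accuracy(predicted, ground_truth):
    """Proportion of tags predicted correctly for one address sequence."""
    correct = sum(p == g for p, g in zip(predicted, ground_truth))
    return correct / len(ground_truth)

def is_incomplete(tag_sequence):
    """An address is incomplete when at least one of the four required
    tags never appears in its tag sequence."""
    return not REQUIRED_TAGS.issubset(tag_sequence)

# "221 B Baker Street" -> StreetNumber, Unit, StreetName, StreetName:
# incomplete, since PostalCode, Municipality and Province are all missing.
```

Under this metric, a sequence's accuracy is null only when every predicted tag is wrong, and perfect only when every tag is right.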
Such a difference can be more challenging for our models since the postal code is usually a good cue to tell the patterns apart. We will also evaluate the performance of two new models (\\textbf{fastTextADANNNoisy}{} and \\textbf{BPEmbADANNNoisy}{}), trained on both the complete and incomplete address datasets, to investigate whether the addition of incomplete addresses helps improve performance on that type of data. These results will be discussed in \\autoref{sec:missingresults}.\n\n\\section{Complete Addresses Results}\n\\label{sec:completeresults}\nIn this section, we present and discuss the results of our four trained models plus the two models of \\cite{9357170} (\\fasttext{} and \\textbf{BPEmb}{}). We first evaluate them on the holdout address dataset and then on the zero-shot address dataset.\n\n\\subsection{Multinational Evaluation}\n\\label{subsec:multieval}\n\\begin{table*}\n \\centering\n \\caption{Mean accuracy and standard deviation obtained with multinational models on the holdout dataset for training countries. 
\\textbf{bold} values indicate the best accuracy for each country.}\n \\label{table:multinational-holdout}\n \\resizebox{0.85\\textwidth}{!}{%\n \\begin{tabular}{lrrrrrr}\n \\toprule\n Country & FastText & BPEmb & FastTextAtt & BPEmbAtt & FastTextADANN & BPEmbADANN\\\\\n \\midrule\n United States & $99.61 \\pm 0.09$ & $99.67 \\pm 0.09$ & $\\mathbf{99.73 \\pm 0.02}$ & $99.65 \\pm 0.23$ & $99.70 \\pm 0.03$ & $99.68 \\pm 0.16$\\\\\n Brazil & $99.40 \\pm 0.10$ & $99.42 \\pm 0.15$ & $\\mathbf{99.58 \\pm 0.04}$ & $99.42 \\pm 0.39$ & $99.53 \\pm 0.04$ & $99.42 \\pm 0.34$\\\\\n South Korea & $99.96 \\pm 0.01$ & $\\mathbf{100.00 \\pm 0.00}$ & $99.98 \\pm 0.01$ & $\\mathbf{100.00 \\pm 0.00}$ & $99.98 \\pm 0.01$ & $\\mathbf{100.00 \\pm 0.00}$\\\\\n Australia & $99.68 \\pm 0.05$ & $\\mathbf{99.80 \\pm 0.05}$ & $99.77 \\pm 0.03$ & $99.78 \\pm 0.13$ & $99.76 \\pm 0.03$ & $99.77 \\pm 0.17$\\\\\n Mexico & $99.60 \\pm 0.06$ & $99.68 \\pm 0.06$ & $\\mathbf{99.71 \\pm 0.03}$ & $99.70 \\pm 0.14$ & $99.68 \\pm 0.02$ & $99.69 \\pm 0.12$\\\\\n Germany & $99.77 \\pm 0.04$ & $99.89 \\pm 0.03$ & $99.85 \\pm 0.02$ & $99.90 \\pm 0.08$ & $99.84 \\pm 0.01$ & $\\mathbf{99.91 \\pm 0.05}$\\\\\n Spain & $99.75 \\pm 0.05$ & $99.85 \\pm 0.04$ & $99.83 \\pm 0.02$ & $\\mathbf{99.86 \\pm 0.09}$ & $99.80 \\pm 0.02$ & $99.83 \\pm 0.11$\\\\\n Netherlands & $99.61 \\pm 0.07$ & $99.88 \\pm 0.03$ & $99.75 \\pm 0.03$ & $99.90 \\pm 0.07$ & $99.72 \\pm 0.02$ & $\\mathbf{99.91 \\pm 0.05}$\\\\\n Canada & $99.79 \\pm 0.05$ & $\\mathbf{99.87 \\pm 0.04}$ & $99.87 \\pm 0.02$ & $99.87 \\pm 0.10$ & $99.85 \\pm 0.01$ & $99.86 \\pm 0.10$\\\\\n Switzerland & $99.53 \\pm 0.09$ & $99.75 \\pm 0.08$ & $99.62 \\pm 0.05$ & $99.77 \\pm 0.14$ & $99.59 \\pm 0.05$ & $\\mathbf{99.82 \\pm 0.12}$\\\\\n Poland & $99.69 \\pm 0.07$ & $99.89 \\pm 0.04$ & $99.80 \\pm 0.02$ & $99.90 \\pm 0.07$ & $99.78 \\pm 0.02$ & $\\mathbf{99.92 \\pm 0.04}$\\\\\n Norway & $99.46 \\pm 0.06$ & $98.41 \\pm 0.63$ & $99.44 \\pm 0.11$ & $98.20 \\pm 1.13$ & 
$\\mathbf{99.53 \\pm 0.04}$ & $97.95 \\pm 0.44$\\\\\n Austria & $99.28 \\pm 0.03$ & $98.98 \\pm 0.22$ & $\\mathbf{99.38 \\pm 0.06}$ & $98.96 \\pm 0.37$ & $99.30 \\pm 0.07$ & $99.34 \\pm 0.32$\\\\\n Finland & $99.77 \\pm 0.03$ & $\\mathbf{99.87 \\pm 0.01}$ & $99.83 \\pm 0.02$ & $99.86 \\pm 0.01$ & $99.82 \\pm 0.01$ & $99.84 \\pm 0.01$\\\\\n Denmark & $99.71 \\pm 0.07$ & $99.90 \\pm 0.03$ & $99.82 \\pm 0.03$ & $99.91 \\pm 0.06$ & $99.80 \\pm 0.02$ & $\\mathbf{99.93 \\pm 0.05}$\\\\\n Czechia & $99.57 \\pm 0.09$ & $99.89 \\pm 0.04$ & $99.73 \\pm 0.02$ & $99.89 \\pm 0.10$ & $99.70 \\pm 0.02$ & $\\mathbf{99.90 \\pm 0.06}$\\\\\n Italy & $99.73 \\pm 0.05$ & $99.81 \\pm 0.05$ & $99.83 \\pm 0.02$ & $\\mathbf{99.83 \\pm 0.11}$ & $99.80 \\pm 0.02$ & $99.82 \\pm 0.11$\\\\\n France & $99.66 \\pm 0.08$ & $99.69 \\pm 0.11$ & $\\mathbf{99.79 \\pm 0.04}$ & $99.69 \\pm 0.22$ & $99.77 \\pm 0.03$ & $99.70 \\pm 0.17$\\\\\n UK & $99.61 \\pm 0.10$ & $99.74 \\pm 0.08$ & $\\mathbf{99.77 \\pm 0.05}$ & $99.72 \\pm 0.20$ & $99.73 \\pm 0.03$ & $99.72 \\pm 0.20$\\\\\n Russia & $99.03 \\pm 0.24$ & $\\mathbf{99.67 \\pm 0.11}$ & $99.40 \\pm 0.13$ & $99.54 \\pm 0.39$ & $99.23 \\pm 0.12$ & $99.59 \\pm 0.31$\\\\\n \\midrule\n Mean & $99.61 \\pm 0.20$ & $99.68 \\pm 0.36$ & $\\mathbf{99.72 \\pm 0.16}$ & $99.67 \\pm 0.40$ & $99.70 \\pm 0.18$ & $99.68 \\pm 0.43$\\\\\n \\bottomrule\n \\end{tabular}%\n }\n\\end{table*}\n\n\\autoref{table:multinational-holdout} presents all the models' mean accuracy and standard deviation on the holdout dataset for the training countries. First, we find that South Korea is the only country on which a perfect accuracy is achieved, always when using byte-pair embeddings (BPEmb) and almost always (four seeds out of five) when using fastText embeddings. Since South Korea is the only country in the training set using a different pattern, in which the province and municipality occur before the street name, it seems that our models might have memorized this particular pattern. 
To validate this intuition, we randomly reordered \\num{6000} South Korean addresses to follow either the first (red) or the second (brown) address pattern (equally divided between the two). After this reordering, we observe that the mean accuracy drops to $28.04~\\%$; for comparison, a random tag annotation yields a $12.29~\\%$ accuracy.\n\nIt is also interesting to notice that the models' accuracies are good when using fastText monolingual word embeddings, especially on South Korean addresses, despite the entirely different alphabet. These results illustrate that our models, regardless of the embedding model, learned the representation of an address sequence even if the word representations are not native to the language (French vs Korean). \n\nFinally, all our models achieve state-of-the-art performance on our holdout dataset while using less data than previous approaches (e.g. Libpostal) and neither pre- nor post-processing. However, at this point, it is difficult to conclude which of our models is the leading one. In the following subsection, we investigate the zero-shot performance of our models on countries not seen during training.\n\n\\subsection{Zero-Shot Evaluation}\nSince training a deep learning model to parse addresses from every country in the world would require a significant amount of data and resources, our ongoing work aims at achieving domain adaptation to be able to train on a reasonable amount of data and generalize to data from different sources. We begin by exploring how well our architecture can generalize in a zero-shot manner. To this end, we test each of our four models on address data from countries not seen during the training. The results are reported in \\autoref{table:zero-shot-results}.\n\n\\subsubsection{Attention Models}\nIn an attempt to improve our models' zero-shot performance, the base architecture was augmented with an attention mechanism as described in Subsection~\\ref{subsec:attention}. 
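As a rough illustration of what an attention step computes at decoding time (a generic pure-Python sketch of the weighting idea only, not the paper's exact mechanism):

```python
import math

def attention_weights(scores):
    """Softmax over alignment scores: turns the decoder's per-position
    scores into a distribution over the input address tokens."""
    m = max(scores)                       # subtract max for numerical stability
    exp = [math.exp(s - m) for s in scores]
    total = sum(exp)
    return [e / total for e in exp]

def context_vector(weights, encoder_states):
    """Weighted sum of encoder hidden states (here: plain lists of floats)."""
    dim = len(encoder_states[0])
    return [sum(w * h[d] for w, h in zip(weights, encoder_states))
            for d in range(dim)]
```

At each decoding step, the decoder can thus attend more strongly to the address components most relevant to the tag currently being predicted.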
The results of those two models (\\fasttextatt{} and \\textbf{BPEmbAtt}{}) are compared to their respective base approach equivalents in \\autoref{table:zero-shot-results}. For the \\fasttextatt{} model (left section of the table), we observe that the attention mechanism improves the performance by more than $1~\\%$ for 20 out of 41 countries, 10 of which improve by more than $2~\\%$. Another 10 out of 41 countries' accuracies were increased by less than $1~\\%$. For the remaining countries (11 out of 41), we observe that the accuracy is, for most of them, less than $0.5~\\%$ poorer than with the base approach. The attention mechanism raised two countries' accuracies above $90~\\%$ (Belgium and Lithuania) and one above $80~\\%$ (Philippines). \\fasttextatt{} achieves good results for $83~\\%$ of the countries (34), almost $52~\\%$ of which (21) are near state-of-the-art performance. \n\nIn contrast, the results for the \\textbf{BPEmbAtt}{} model (right section of the table) are not as good. We observe that results increased by more than $1~\\%$ for 14 out of 41 countries, 11 of which improved by more than $2~\\%$. Also, 9 out of 41 countries improved by less than $1~\\%$. However, for the other countries (20), we observe that the accuracy is less than $0.5~\\%$ poorer than with the base approach for half of them; for the other half, results are between $1~\\%$ and $2~\\%$ lower, which lowers the performance of two countries (India and Bangladesh) below $80~\\%$ and of one (Malaysia) below $90~\\%$. Hence, the use of an attention mechanism with byte-pair multilingual embeddings lowers performance overall, especially since only one country (Estonia) rises above $80~\\%$ compared to the base approach. \\textbf{BPEmbAtt}{} achieves good results for $78~\\%$ of the countries (32), almost $50~\\%$ of which (20) are near state-of-the-art performance. 
\n\nFinally, we observe that in some cases, the use of an attention mechanism can substantially improve performance, such as for Greece, where the increase is $4~\\%$ for \\fasttextatt{} and nearly $16~\\%$ for \\textbf{BPEmbAtt}{}. We also observe a smaller variance for both models using attention mechanisms, meaning that those models are more stable during training and converge to a better optimum. Overall, these results show that our attention-based architecture can generally yield better accuracies.\n\n\\begin{table*}\n \\centering\n \\caption{Mean accuracy (and standard deviation) per country for zero-shot transfer models - base approach versus attention models. \\textbf{bold} values indicate the best accuracy for each embeddings type.}\n \\label{table:zero-shot-results}\n \\resizebox{\\textwidth}{!}{%\n \\begin{tabular}{lrr|rrlrr|rr}\n \\toprule\n Country & FastText & FastTextAtt & BPEmb & BPEmbAtt & Country & FastText & FastTextAtt & BPEmb & BPEmbAtt\\\\\n \\midrule\n Belgium & $88.14 \\pm 1.04$ & $\\mathbf{90.01 \\pm 1.59}$ & $87.29 \\pm 1.40$ & $\\mathbf{89.35 \\pm 0.48}$ & Faroe Islands & $\\mathbf{74.14 \\pm 1.83}$ & $74.00 \\pm 1.00$ & $85.50 \\pm 0.11$ & $\\mathbf{85.89 \\pm 1.08}$\\\\\n Sweden & $81.59 \\pm 4.53$ & $\\mathbf{85.73 \\pm 2.30}$ & $\\mathbf{90.76 \\pm 3.03}$ & $90.64 \\pm 5.07$ & R\u00e9union & $96.80 \\pm 0.45$ & $\\mathbf{96.86 \\pm 1.27}$ & $93.67 \\pm 0.26$ & $\\mathbf{94.19 \\pm 0.27}$\\\\\n Argentina & $86.26 \\pm 0.47$ & $\\mathbf{87.76 \\pm 0.71}$ & $\\mathbf{88.04 \\pm 0.83}$ & $87.45 \\pm 2.57$ & Moldova & $90.18 \\pm 0.79$ & $\\mathbf{91.25 \\pm 0.84}$ & $86.89 \\pm 3.01$ & $\\mathbf{89.99 \\pm 2.22}$\\\\\n India & $69.09 \\pm 1.74$ & $\\mathbf{71.94 \\pm 2.06}$ & $\\mathbf{80.04 \\pm 3.24}$ & $79.73 \\pm 3.65$ & Indonesia & $64.31 \\pm 0.84$ & $\\mathbf{65.98 \\pm 0.97}$ & $70.28 \\pm 1.64$ & $\\mathbf{72.05 \\pm 2.81}$\\\\\n Romania & $94.49 \\pm 1.52$ & $\\mathbf{96.01 \\pm 0.73}$ & $91.65 \\pm 1.21$ & 
$\\mathbf{93.01 \\pm 0.84}$ & Bermuda & $92.31 \\pm 0.60$ & $\\mathbf{92.82 \\pm 0.68}$ & $93.70 \\pm 0.35$ & $\\mathbf{93.82 \\pm 0.08}$\\\\\n Slovakia & $82.10 \\pm 0.98$ & $\\mathbf{84.16 \\pm 2.60}$ & $90.31 \\pm 3.88$ & $\\mathbf{92.60 \\pm 4.64}$ & Malaysia & $\\mathbf{78.93 \\pm 3.78}$ & $75.79 \\pm 2.77$ & $\\mathbf{94.16 \\pm 0.49}$ & $89.06 \\pm 5.22$\\\\\n Hungary & $\\mathbf{48.92 \\pm 3.59}$ & $48.48 \\pm 4.55$ & $\\mathbf{25.51 \\pm 2.60}$ & $24.00 \\pm 1.09$ & South Africa & $\\mathbf{95.31 \\pm 1.68}$ & $94.82 \\pm 2.79$ & $\\mathbf{96.87 \\pm 0.96}$ & $96.83 \\pm 2.11$\\\\\n Japan & $41.41 \\pm 3.21$ & $\\mathbf{42.63 \\pm 4.13}$ & $35.33 \\pm 1.28$ & $\\mathbf{42.81 \\pm 5.73}$ & Latvia & $\\mathbf{93.66 \\pm 0.64}$ & $93.04 \\pm 0.85$ & $\\mathbf{74.78 \\pm 4.33}$ & $72.14 \\pm 2.00$\\\\\n Iceland & $\\mathbf{96.55 \\pm 1.20}$ & $96.51 \\pm 0.50$ & $\\mathbf{97.38 \\pm 1.18}$ & $96.77 \\pm 0.43$ & Kazakhstan & $86.33 \\pm 3.06$ & $\\mathbf{89.52 \\pm 4.82}$ & $\\mathbf{94.12 \\pm 1.94}$ & $92.60 \\pm 5.40$\\\\\n Venezuela & $94.87 \\pm 0.53$ & $\\mathbf{95.37 \\pm 0.45}$ & $93.05 \\pm 2.02$ & $\\mathbf{93.79 \\pm 3.07}$ & New Caledonia & $99.48 \\pm 0.15$ & $\\mathbf{99.52 \\pm 0.08}$ & $99.25 \\pm 0.19$ & $\\mathbf{99.38 \\pm 0.34}$\\\\\n Philippines & $77.76 \\pm 3.97$ & $\\mathbf{82.87 \\pm 5.17}$ & $\\mathbf{81.95 \\pm 8.07}$ & $81.30 \\pm 3.50$ & Estonia & $87.08 \\pm 1.89$ & $\\mathbf{89.29 \\pm 2.49}$ & $77.30 \\pm 1.22$ & $\\mathbf{80.24 \\pm 5.49}$\\\\\n Slovenia & $95.37 \\pm 0.23$ & $\\mathbf{95.74 \\pm 0.44}$ & $\\mathbf{97.47 \\pm 0.45}$ & $97.04 \\pm 0.38$ & Singapore & $\\mathbf{86.42 \\pm 2.36}$ & $86.40 \\pm 2.59$ & $86.87 \\pm 2.01$ & $\\mathbf{87.53 \\pm 1.30}$\\\\\n Ukraine & $92.99 \\pm 0.70$ & $\\mathbf{93.45 \\pm 0.76}$ & $92.60 \\pm 1.84$ & $\\mathbf{93.52 \\pm 1.81}$ & Bangladesh & $\\mathbf{78.61 \\pm 0.43}$ & $76.48 \\pm 1.06$ & $\\mathbf{82.45 \\pm 2.54}$ & $79.01 \\pm 2.18$\\\\\n Belarus & $91.08 \\pm 3.08$ & 
$\\mathbf{94.43 \\pm 3.59}$ & $\\mathbf{96.40 \\pm 1.76}$ & $94.44 \\pm 4.19$ & Paraguay & $96.01 \\pm 1.23$ & $\\mathbf{96.68 \\pm 0.32}$ & $\\mathbf{97.20 \\pm 0.35}$ & $96.25 \\pm 1.17$\\\\\n Serbia & $95.31 \\pm 0.48$ & $\\mathbf{95.68 \\pm 0.33}$ & $92.62 \\pm 3.83$ & $\\mathbf{93.09 \\pm 1.81}$ & Cyprus & $\\mathbf{97.67 \\pm 0.34}$ & $97.57 \\pm 0.31$ & $94.31 \\pm 7.21$ & $\\mathbf{98.32 \\pm 0.37}$\\\\\n Croatia & $94.59 \\pm 2.21$ & $\\mathbf{96.40 \\pm 0.57}$ & $88.04 \\pm 4.68$ & $\\mathbf{90.66 \\pm 3.76}$ & Bosnia & $84.04 \\pm 1.47$ & $\\mathbf{87.42 \\pm 1.95}$ & $84.46 \\pm 5.76$ & $\\mathbf{88.61 \\pm 5.06}$\\\\\n Greece & $81.98 \\pm 0.60$ & $\\mathbf{85.00 \\pm 1.61}$ & $40.97 \\pm 14.89$ & $\\mathbf{56.01 \\pm 10.98}$ & Ireland & $87.44 \\pm 0.69$ & $\\mathbf{87.69 \\pm 0.95}$ & $86.49 \\pm 1.31$ & $\\mathbf{87.56 \\pm 3.01}$\\\\\n New Zealand & $94.27 \\pm 1.50$ & $\\mathbf{95.91 \\pm 1.41}$ & $\\mathbf{99.44 \\pm 0.29}$ & $98.21 \\pm 1.37$ & Algeria & $85.37 \\pm 2.05$ & $\\mathbf{86.03 \\pm 1.55}$ & $84.65 \\pm 4.47$ & $\\mathbf{85.08 \\pm 2.50}$\\\\\n Portugal & $93.65 \\pm 0.46$ & $\\mathbf{94.54 \\pm 0.67}$ & $92.68 \\pm 1.46$ & $\\mathbf{93.33 \\pm 0.59}$ & Colombia & $87.81 \\pm 0.92$ & $\\mathbf{89.09 \\pm 0.64}$ & $\\mathbf{89.51 \\pm 0.88}$ & $88.32 \\pm 2.55$\\\\\n Bulgaria & $\\mathbf{91.03 \\pm 2.07}$ & $90.87 \\pm 2.63$ & $\\mathbf{93.47 \\pm 3.07}$ & $92.97 \\pm 3.66$ & Uzbekistan & $86.76 \\pm 1.13$ & $\\mathbf{87.36 \\pm 0.74}$ & $75.18 \\pm 1.92$ & $\\mathbf{77.52 \\pm 2.62}$\\\\\n Lithuania & $87.67 \\pm 3.05$ & $\\mathbf{90.88 \\pm 1.73}$ & $\\mathbf{76.41 \\pm 1.66}$ & $76.16 \\pm 1.54$ & && & &\\\\\n \\bottomrule\n \\end{tabular}%\n }\n\\end{table*}\n\n\\subsubsection{ADANN}\nIn a second attempt to improve our models' zero-shot performance, the base architecture was augmented with a domain adaptation approach as described in Subsection~\\ref{subsec:adann}. 
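The per-batch domain pairing at the heart of the ADANN training loop described earlier (each training country taken in turn as the source domain, paired with another randomly drawn country as the target) can be sketched as follows (a hypothetical helper, not the actual training code):

```python
import random

def adann_domain_pairs(domains, rng=None):
    """Yield (source, target) pairs for one pass over the data: each
    training domain (country) is taken in turn as the source and paired
    with another, randomly drawn, domain as the adversarial target."""
    rng = rng or random.Random()
    for source in domains:
        target = rng.choice([d for d in domains if d != source])
        yield source, target
```

On each such pair, the tagging loss is computed on the source batch while the domain discriminator's Cross-Entropy loss is used adversarially, pushing the encoder's representations to be domain-agnostic.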
The results of those two models (\\textbf{fastTextADANN}{} and \\textbf{BPEmbADANN}{}) are compared to their attention approach equivalents in \\autoref{table:zero-shot-results-adann}. For both the \\textbf{fastTextADANN}{} and \\textbf{BPEmbADANN}{} models (left and right sections of the table respectively), we observe similar results. \n\nFirst, the ADANN algorithm was able to improve the performance of a minority of the countries (4 and 8 for \\textbf{fastTextADANN}{} and \\textbf{BPEmbADANN}{} respectively) by more than $1~\\%$. A few (2 or 3) out of the 41 improve by more than $2~\\%$ for both models. Also, nearly a fourth of the 41 countries' accuracies increased by less than $1~\\%$. For the other countries (the majority), we observe that for half of them, the accuracy is less than $1~\\%$ poorer than with the attention approaches, while the other half is a couple of percent poorer. Sometimes the difference can be as much as $5~\\%$ (e.g. Sweden for \\textbf{fastTextADANN}{} or Estonia for \\textbf{BPEmbADANN}{}). Also, two countries see their accuracy drop for each model: below $90~\\%$ for \\textbf{fastTextADANN}{} (Lithuania and Moldova) and below $80~\\%$ for \\textbf{BPEmbADANN}{} (Philippines and Bosnia); Sweden's accuracy with \\textbf{BPEmbADANN}{} also drops below $90~\\%$. Thus, the use of a domain adaptation technique during training lowers the performance overall, especially since only one country for each model (Malaysia and India respectively) yields results above $80~\\%$ compared to the base approach, and one above $90~\\%$ (Kazakhstan and Malaysia respectively). Overall, though, the two models achieve good results for nearly $80~\\%$ of the countries, almost $50~\\%$ of which are near state-of-the-art performance. However, in many cases, these results are worse than those of the models using attention mechanisms.\n\nSecond, we observe that our model does not seem to have fit the training data as much as possible, as shown in \\autoref{table:multinational-holdout-attention}. 
This table presents the multinational models' mean accuracy (and standard deviation) on the holdout dataset for \\fasttextatt{} and \\textbf{fastTextADANN}{}. Nevertheless, we are surprised by our results since the ADANN algorithm is a transfer learning technique. An advantage of ADANN is that the network's weights have strong incentives to be subject-agnostic, meaning that the learned representation extracted from the network can be thought of as general features for the prediction layer \\cite{adann}. We argue that it is more challenging to train models using the ADANN algorithm since the time needed for one epoch is nearly 5 hours, meaning that the expected time to train for the 200 epochs is nearly 41 days per model (and we train five seeds), compared to a couple of days for the attention models. We therefore did not have much opportunity to fine-tune our models. We also hypothesize that the domain choice (i.e. the country) might be too granular since many countries have similar patterns or a similar language, making the task more difficult. A possible improvement could be to use richer definitions of the domain, such as the language and the address pattern type. For example, we could use a categorical variable representing the address pattern number and a categorical variable for the language. That way, we could help guide the training toward a better understanding of the context of an address, which is not necessarily the country but rather the language and the pattern. \n\nFinally, on average, performance is similar between the attention and ADANN approaches, but the FastText models perform slightly better: they are, on average, $2~\\%$ better than the BPEmb approaches. For example, \\textbf{fastTextADANN}{} yields on average an accuracy of $86.45 \\pm 12.13~\\%$ across the zero-shot countries, while \\textbf{BPEmbADANN}{} yields $85.32 \\pm 15.31~\\%$. Again, both BPEmb models have a higher variance than the FastText ones. 
These results show that BPEmb models' performance tends to be more variable and more sensitive to changes in seeds.\n\n\\begin{table*}\n \\centering\n \\caption{Mean accuracy (and standard deviation) per country for zero-shot transfer models - attention models versus ADANN models.}\n \\label{table:zero-shot-results-adann}\n \\resizebox{\\textwidth}{!}{%\n \\begin{tabular}{lrr|rrlrr|rr}\n \\toprule\n Country & FastTextAtt & FastTextADANN & BPEmbAtt & BPEmbADANN & Country & FastTextAtt & FastTextADANN & BPEmbAtt & BPEmbADANN\\\\\n \\midrule\n Belgium & $90.01 \\pm 1.59$ & $\\mathbf{90.90 \\pm 3.76}$ & $\\mathbf{89.35 \\pm 0.48}$ & $88.74 \\pm 0.59$ & Faroe Islands & $\\mathbf{74.00 \\pm 1.00}$ & $69.76 \\pm 2.08$ & $\\mathbf{85.89 \\pm 1.08}$ & $85.34 \\pm 0.20$\\\\\n Sweden & $\\mathbf{85.73 \\pm 2.30}$ & $80.99 \\pm 3.78$ & $\\mathbf{90.64 \\pm 5.07}$ & $89.65 \\pm 1.08$ & R\u00e9union & $\\mathbf{96.86 \\pm 1.27}$ & $96.48 \\pm 0.96$ & $\\mathbf{94.19 \\pm 0.27}$ & $93.99 \\pm 0.59$\\\\\n Argentina & $\\mathbf{87.76 \\pm 0.71}$ & $87.72 \\pm 1.33$ & $\\mathbf{87.45 \\pm 2.57}$ & $87.13 \\pm 2.65$ & Moldova & $\\mathbf{91.25 \\pm 0.84}$ & $89.87 \\pm 0.56$ & $\\mathbf{89.99 \\pm 2.22}$ & $88.67 \\pm 1.66$\\\\\n India & $\\mathbf{71.94 \\pm 2.06}$ & $70.30 \\pm 1.22$ & $79.73 \\pm 3.65$ & $\\mathbf{81.12 \\pm 2.97}$ & Indonesia & $\\mathbf{65.98 \\pm 0.97}$ & $65.42 \\pm 0.75$ & $\\mathbf{72.05 \\pm 2.81}$ & $71.71 \\pm 2.79$\\\\\n Romania & $96.01 \\pm 0.73$ & $\\mathbf{96.22 \\pm 1.11}$ & $\\mathbf{93.01 \\pm 0.84}$ & $92.58 \\pm 1.53$ & Bermuda & $\\mathbf{92.82 \\pm 0.68}$ & $92.02 \\pm 0.52$ & $93.82 \\pm 0.08$ & $\\mathbf{95.02 \\pm 0.58}$\\\\\n Slovakia & $\\mathbf{84.16 \\pm 2.60}$ & $81.93 \\pm 1.80$ & $\\mathbf{92.60 \\pm 4.64}$ & $91.14 \\pm 3.81$ & Malaysia & $75.79 \\pm 2.77$ & $\\mathbf{83.48 \\pm 3.42}$ & $89.06 \\pm 5.22$ & $\\mathbf{90.52 \\pm 4.27}$\\\\\n Hungary & $\\mathbf{48.48 \\pm 4.55}$ & $46.42 \\pm 5.62$ & $\\mathbf{24.00 \\pm 1.09}$ & 
$23.54 \\pm 1.69$ & South Africa & $\\mathbf{94.82 \\pm 2.79}$ & $93.09 \\pm 2.35$ & $\\mathbf{96.83 \\pm 2.11}$ & $96.02 \\pm 1.55$\\\\\n Japan & $42.63 \\pm 4.13$ & $\\mathbf{43.88 \\pm 1.26}$ & $\\mathbf{42.81 \\pm 5.73}$ & $38.64 \\pm 5.26$ & Latvia & $\\mathbf{93.04 \\pm 0.85}$ & $90.64 \\pm 1.26$ & $\\mathbf{72.14 \\pm 2.00}$ & $71.92 \\pm 2.60$\\\\\n Iceland & $\\mathbf{96.51 \\pm 0.50}$ & $96.18 \\pm 0.35$ & $96.77 \\pm 0.43$ & $\\mathbf{97.15 \\pm 0.64}$ & Kazakhstan & $89.52 \\pm 4.82$ & $\\mathbf{92.00 \\pm 1.36}$ & $92.60 \\pm 5.40$ & $\\mathbf{96.49 \\pm 1.94}$\\\\\n Venezuela & $\\mathbf{95.37 \\pm 0.45}$ & $95.02 \\pm 0.76$ & $93.79 \\pm 3.07$ & $\\mathbf{94.79 \\pm 0.57}$ & New Caledonia & $\\mathbf{99.52 \\pm 0.08}$ & $99.52 \\pm 0.11$ & $99.38 \\pm 0.34$ & $\\mathbf{99.40 \\pm 0.26}$\\\\\n Philippines & $82.87 \\pm 5.17$ & $\\mathbf{83.81 \\pm 4.95}$ & $\\mathbf{81.30 \\pm 3.50}$ & $74.94 \\pm 5.57$ & Estonia & $\\mathbf{89.29 \\pm 2.49}$ & $86.72 \\pm 4.00$ & $80.24 \\pm 5.49$ & $\\mathbf{85.84 \\pm 4.87}$\\\\\n Slovenia & $95.74 \\pm 0.44$ & $\\mathbf{95.75 \\pm 0.53}$ & $\\mathbf{97.04 \\pm 0.38}$ & $96.77 \\pm 0.57$ & Singapore & $\\mathbf{86.40 \\pm 2.59}$ & $84.96 \\pm 2.84$ & $\\mathbf{87.53 \\pm 1.30}$ & $84.70 \\pm 1.86$\\\\\n Ukraine & $93.45 \\pm 0.76$ & $\\mathbf{93.70 \\pm 0.28}$ & $93.52 \\pm 1.81$ & $\\mathbf{94.17 \\pm 2.30}$ & Bangladesh & $76.48 \\pm 1.06$ & $\\mathbf{76.80 \\pm 1.21}$ & $\\mathbf{79.01 \\pm 2.18}$ & $78.68 \\pm 3.96$\\\\\n Belarus & $94.43 \\pm 3.59$ & $\\mathbf{96.05 \\pm 0.57}$ & $94.44 \\pm 4.19$ & $\\mathbf{98.15 \\pm 0.97}$ & Paraguay & $96.68 \\pm 0.32$ & $\\mathbf{97.00 \\pm 1.08}$ & $96.25 \\pm 1.17$ & $\\mathbf{96.28 \\pm 0.96}$\\\\\n Serbia & $95.68 \\pm 0.33$ & $\\mathbf{95.75 \\pm 0.25}$ & $93.09 \\pm 1.81$ & $\\mathbf{93.45 \\pm 1.44}$ & Cyprus & $\\mathbf{97.57 \\pm 0.31}$ & $97.39 \\pm 0.38$ & $\\mathbf{98.32 \\pm 0.37}$ & $97.53 \\pm 1.49$\\\\\n Croatia & $\\mathbf{96.40 \\pm 0.57}$ & $94.98 \\pm 
2.01$ & $90.66 \\pm 3.76$ & $\\mathbf{91.37 \\pm 4.39}$ & Bosnia & $\\mathbf{87.42 \\pm 1.95}$ & $84.95 \\pm 3.47$ & $\\mathbf{88.61 \\pm 5.06}$ & $79.11 \\pm 3.07$\\\\\n Greece & $\\mathbf{85.00 \\pm 1.61}$ & $83.18 \\pm 3.31$ & $56.01 \\pm 10.98$ & $\\mathbf{57.47 \\pm 7.75}$ & Ireland & $\\mathbf{87.69 \\pm 0.95}$ & $87.44 \\pm 0.20$ & $87.56 \\pm 3.01$ & $\\mathbf{88.42 \\pm 0.97}$\\\\\n New Zealand & $\\mathbf{95.91 \\pm 1.41}$ & $93.60 \\pm 2.42$ & $98.21 \\pm 1.37$ & $\\mathbf{99.13 \\pm 0.37}$ & Algeria & $\\mathbf{86.03 \\pm 1.55}$ & $83.07 \\pm 3.85$ & $\\mathbf{85.08 \\pm 2.50}$ & $83.64 \\pm 2.60$\\\\\n Portugal & $\\mathbf{94.54 \\pm 0.67}$ & $94.53 \\pm 0.41$ & $\\mathbf{93.33 \\pm 0.59}$ & $91.07 \\pm 1.26$ & Colombia & $\\mathbf{89.09 \\pm 0.64}$ & $87.76 \\pm 1.30$ & $88.32 \\pm 2.55$ & $\\mathbf{88.99 \\pm 1.83}$\\\\\n Bulgaria & $\\mathbf{90.87 \\pm 2.63}$ & $90.40 \\pm 1.10$ & $92.97 \\pm 3.66$ & $\\mathbf{93.93 \\pm 2.59}$ & Uzbekistan & $\\mathbf{87.36 \\pm 0.74}$ & $86.23 \\pm 2.18$ & $\\mathbf{77.52 \\pm 2.62}$ & $73.83 \\pm 2.92$\\\\\n Lithuania & $\\mathbf{90.88 \\pm 1.73}$ & $88.58 \\pm 2.35$ & $76.16 \\pm 1.54$ & $\\mathbf{77.26 \\pm 2.54}$ & &&&\\\\\n \\bottomrule\n \\end{tabular}%\n }\n\\end{table*}\n\n\\begin{table}\n \\centering\n \\caption{Mean accuracy (and standard deviation) for multinational models on holdout dataset for training countries - \\fasttextatt{} versus \\textbf{fastTextADANN}{}.}\n \\label{table:multinational-holdout-attention}\n \\resizebox{0.49\\textwidth}{!}{%\n \\begin{tabular}{lrrlrr}\n \\toprule\n Country & FastTextAtt & FastTextADANN & Country & FastTextAtt & FastTextADANN\\\\\n \\midrule\n United States & $\\mathbf{99.73 \\pm 0.02}$ & $99.70 \\pm 0.03$ & Poland & $\\mathbf{99.80 \\pm 0.02}$ & $99.78 \\pm 0.02$\\\\\n Brazil & $\\mathbf{99.58 \\pm 0.04}$ & $99.53 \\pm 0.04$ & Norway & $99.44 \\pm 0.11$ & $\\mathbf{99.53 \\pm 0.04}$\\\\\n South Korea & $\\mathbf{99.98 \\pm 0.01}$ & $\\mathbf{99.98 \\pm 0.01}$ & 
Austria & $\\mathbf{99.38 \\pm 0.06}$ & $99.30 \\pm 0.07$\\\\\n Australia & $\\mathbf{99.77 \\pm 0.03}$ & $99.76 \\pm 0.03$ & Finland & $\\mathbf{99.83 \\pm 0.02}$ & $99.82 \\pm 0.01$\\\\\n Mexico & $\\mathbf{99.71 \\pm 0.03}$ & $99.68 \\pm 0.02$ & Denmark & $\\mathbf{99.82 \\pm 0.03}$ & $99.80 \\pm 0.02$\\\\\n Germany & $\\mathbf{99.85 \\pm 0.02}$ & $99.84 \\pm 0.01$ & Czechia & $\\mathbf{99.73 \\pm 0.02}$ & $99.70 \\pm 0.02$\\\\\n Spain & $\\mathbf{99.83 \\pm 0.02}$ & $99.80 \\pm 0.02$ & Italy & $\\mathbf{99.83 \\pm 0.02}$ & $99.80 \\pm 0.02$\\\\\n Netherlands & $\\mathbf{99.75 \\pm 0.03}$ & $99.72 \\pm 0.02$ & France & $\\mathbf{99.79 \\pm 0.04}$ & $99.77 \\pm 0.03$\\\\\n Canada & $\\mathbf{99.87 \\pm 0.02}$ & $99.85 \\pm 0.01$ & UK & $\\mathbf{99.77 \\pm 0.05}$ & $99.73 \\pm 0.03$\\\\\n Switzerland & $\\mathbf{99.62 \\pm 0.05}$ & $99.59 \\pm 0.05$ & Russia & $\\mathbf{99.40 \\pm 0.13}$ & $99.23 \\pm 0.12$\\\\\n \\bottomrule\n \\end{tabular}%\n }\n\\end{table}\n\n\\section{Missing Values Handling}\n\\label{sec:missingresults}\nIn this section, we aim to evaluate and improve the results of our four best models, \\fasttextatt{}, \\textbf{BPEmbAtt}{}, \\textbf{fastTextADANN}{} and \\textbf{BPEmbADANN}{}, on the incomplete address data. As presented in Subsection~\\ref{subsec:iad}, we introduced an incomplete address dataset in which addresses do not include all the components. \\autoref{table:incomplete-holdout} presents the results of the four models evaluated on the incomplete holdout dataset without any prior training on incomplete addresses. Since performances for all of the training countries are lower by $20~\\%$ to $40~\\%$ than the previous scores, we choose to evaluate our models only on the countries seen during training (holdout). The lowest accuracy is that of South Korea for both embedding approaches (around $45~\\%$), which will be discussed in more detail later. 
We also observe that the ADANN approach yields better results (12 out of 20 countries) and is better by less than $1~\\%$ on average (last row). Finally, we observe that the BPEmb approach still has a higher variance than the FastText one, as we observed some seeds converging to a suboptimal loss. Nevertheless, despite their poorer performance in the zero-shot evaluation, the ADANN approaches yield better results on incomplete addresses than the attention approaches. We hypothesize that the ADANN models have overfitted less on the particular address structures seen during training and have relied more on the general features of address structures and on the domain type (e.g. the country and language). Also, knowing the domain, and indirectly the address structure and language, makes it easier to parse an incomplete address.\n\n\\begin{table*}\n \\centering\n \\caption{Mean accuracy (and standard deviation) for multinational models without any prior training on the incomplete dataset.}\n \\label{table:incomplete-holdout}\n \\resizebox{0.6\\textwidth}{!}{%\n \\begin{tabular}{lrrrr}\n \\toprule\n Country & FastTextAtt & FastTextADANN & BPEmbAtt & BPEmbADANN\\\\\n \\midrule\n United States & $\\mathbf{68.59 \\pm 2.61}$ & $65.57 \\pm 0.80$ & $66.14 \\pm 3.88$ & $\\mathbf{67.94 \\pm 4.94}$\\\\\n Brazil & $57.00 \\pm 2.41$ & $\\mathbf{57.39 \\pm 3.63}$ & $\\mathbf{55.01 \\pm 3.90}$ & $51.10 \\pm 4.08$\\\\\n South Korea & $45.91 \\pm 4.22$ & $\\mathbf{46.28 \\pm 2.29}$ & $\\mathbf{49.84 \\pm 8.17}$ & $35.85 \\pm 5.32$\\\\\n Australia & $75.90 \\pm 1.28$ & $\\mathbf{76.02 \\pm 0.36}$ & $\\mathbf{77.18 \\pm 0.80}$ & $76.49 \\pm 1.56$\\\\\n Mexico & $\\mathbf{61.52 \\pm 2.42}$ & $59.88 \\pm 2.83$ & $\\mathbf{61.25 \\pm 1.42}$ & $58.93 \\pm 6.36$\\\\\n Germany & $48.39 \\pm 1.40$ & $\\mathbf{50.32 \\pm 6.96}$ & $\\mathbf{58.98 \\pm 0.86}$ & $58.62 \\pm 3.31$\\\\\n Spain & $70.74 \\pm 1.26$ & $\\mathbf{72.39 \\pm 1.13}$ & $79.51 \\pm 0.49$ & $\\mathbf{81.05 \\pm 1.17}$\\\\\n Netherlands & $59.72 
\\pm 7.54$ & $\\mathbf{63.74 \\pm 3.47}$ & $75.34 \\pm 1.32$ & $\\mathbf{77.39 \\pm 3.52}$\\\\\n Canada & $\\mathbf{72.77 \\pm 0.76}$ & $71.04 \\pm 1.98$ & $73.00 \\pm 2.51$ & $\\mathbf{81.29 \\pm 3.22}$\\\\\n Switzerland & $\\mathbf{64.13 \\pm 6.21}$ & $62.66 \\pm 6.31$ & $\\mathbf{73.08 \\pm 0.89}$ & $72.55 \\pm 3.22$\\\\\n Poland & $43.99 \\pm 3.35$ & $\\mathbf{47.04 \\pm 4.06}$ & $48.50 \\pm 1.08$ & $\\mathbf{54.47 \\pm 4.73}$\\\\\n Norway & $\\mathbf{64.69 \\pm 9.77}$ & $62.67 \\pm 8.64$ & $76.43 \\pm 1.62$ & $\\mathbf{78.28 \\pm 4.04}$\\\\\n Austria & $\\mathbf{69.33 \\pm 6.67}$ & $69.21 \\pm 5.21$ & $77.69 \\pm 1.08$ & $\\mathbf{78.56 \\pm 2.39}$\\\\\n Finland & $57.94 \\pm 8.91$ & $\\mathbf{60.83 \\pm 6.98}$ & $74.65 \\pm 1.36$ & $\\mathbf{76.31 \\pm 3.46}$\\\\\n Denmark & $\\mathbf{56.94 \\pm 3.24}$ & $55.54 \\pm 5.48$ & $\\mathbf{65.32 \\pm 2.40}$ & $63.73 \\pm 3.04$\\\\\n Czechia & $61.16 \\pm 3.21$ & $\\mathbf{64.40 \\pm 2.91}$ & $72.58 \\pm 1.72$ & $\\mathbf{74.79 \\pm 3.53}$\\\\\n Italy & $66.42 \\pm 2.49$ & $\\mathbf{71.16 \\pm 1.67}$ & $73.89 \\pm 0.77$ & $\\mathbf{76.33 \\pm 1.46}$\\\\\n France & $70.15 \\pm 0.88$ & $\\mathbf{73.14 \\pm 2.38}$ & $71.91 \\pm 3.46$ & $\\mathbf{77.45 \\pm 1.08}$\\\\\n UK & $53.63 \\pm 1.28$ & $\\mathbf{54.58 \\pm 1.24}$ & $51.27 \\pm 1.96$ & $\\mathbf{53.62 \\pm 3.30}$\\\\\n Russia & $\\mathbf{58.41 \\pm 1.12}$ & $57.94 \\pm 0.99$ & $58.80 \\pm 5.21$ & $\\mathbf{61.91 \\pm 2.86}$\\\\\\midrule\n Mean & $61.37 \\pm 8.65$ & $\\mathbf{62.09 \\pm 8.41}$ & $67.02 \\pm 9.91$ & $\\mathbf{67.83 \\pm 12.15}$ \\\\\n \\bottomrule\n \\end{tabular}%\n }\n\\end{table*}\n\nTo improve our models' performance on incomplete addresses, we trained two new models using both the complete and incomplete address datasets. This merged dataset consists of \\num{150000} addresses per country, and the models were trained using the same settings as described in \\autoref{sec:exp}. 
We chose to train only the two best models on incomplete addresses, \textbf{fastTextADANN}{} and \textbf{BPEmbADANN}{}. We refer to these new models as \textbf{fastTextADANNNoisy}{} and \textbf{BPEmbADANNNoisy}{}, the difference between the two being the embeddings. \autoref{table:incomplete-holdout-train-clean-noisy} presents the mean accuracy and standard deviation of these two models tested on the incomplete dataset. \n\nFirst, we observe that using incomplete addresses during training substantially improves the accuracy for all the countries. We also observe that \textbf{fastTextADANNNoisy}{} is the leading model across the board, with the best accuracy on all twenty countries and results nearly always above $98~\%$. In contrast, we observe that, again, despite using an embeddings layer to learn the representation of the byte-pair embeddings (BPEmb), \textbf{BPEmbADANNNoisy}{} shows poor results compared to \textbf{fastTextADANNNoisy}{}. Its results are, on average, nearly $6~\%$ worse and have more than double the variance, with accuracies as low as $84~\%$ (lower than some results observed in the zero-shot evaluation). Again, this suggests either that the trained embeddings layer is overfitting or that the byte-pair embeddings are not well suited for addresses (e.g. the embeddings of postal codes). It could also mean that the French fastText approach to constructing out-of-vocabulary embeddings generalizes better than the embeddings layer we retrained on top of the multilingual byte-pair embeddings.\n\nSecond, it is interesting to note that even with a relatively large number of incomplete addresses in the training dataset (\num{50000}), we did not achieve scores as good as those obtained on the complete dataset (\autoref{table:multinational-holdout}). Results average $98~\%$ and near $93~\%$ for \textbf{fastTextADANNNoisy}{} and \textbf{BPEmbADANNNoisy}{} respectively. 
Also, accuracies on the complete addresses holdout dataset remain nearly as good as those presented in \autoref{table:multinational-holdout}: for most of the training countries, they are only about $0.5~\%$ lower.\n\nFinally, we can see that the worst results for both models are for South Korea. These results contrast with the nearly perfect scores observed for all the models in \autoref{table:multinational-holdout}. This supports our hypothesis (Subsection~\ref{subsec:multieval}) that our model memorized the particular pattern of South Korean addresses during training on complete addresses. However, since we also trained using incomplete addresses, some incomplete addresses are now not so different from the other address patterns, which confuses our models. For example, if we remove the Province and Municipality tags from an address, it can match four of the five patterns, making parsing harder for our models. This shows that our models had overfitted in that case, and that adding noise to the training data helps reduce this overfitting, at the cost of some accuracy. We also observe that using a domain adversarial technique substantially improves performance for that specific case, where we observe the best improvement, with accuracy nearly doubling.\n\n\begin{table*}\n \centering\n \caption{Mean accuracy (and standard deviation) for multinational models trained on complete and incomplete datasets and evaluated on the incomplete dataset for training countries. 
\\textbf{bold} values indicate the best accuracy for each country.}\n \\label{table:incomplete-holdout-train-clean-noisy}\n \\resizebox{0.65\\textwidth}{!}{%\n \\begin{tabular}{lrrrr}\n \\toprule\n Country & FastTextADANN & FastTextADANNNoisy & BPEmbADANN & BPEmbADANNNoisy\\\\\n \\midrule\n United States & $65.57 \\pm 0.80$ & $\\mathbf{97.88 \\pm 2.04}$ & $67.94 \\pm 4.94$ & $92.12 \\pm 3.60$\\\\\n Brazil & $57.39 \\pm 3.63$ & $\\mathbf{98.39 \\pm 1.90}$ & $51.10 \\pm 4.08$ & $89.57 \\pm 6.46$\\\\\n South Korea & $46.28 \\pm 2.29$ & $\\mathbf{92.25 \\pm 3.72}$ & $35.85 \\pm 5.32$ & $84.58 \\pm 9.86$\\\\\n Australia & $76.02 \\pm 0.36$ & $\\mathbf{98.86 \\pm 1.46}$ & $76.49 \\pm 1.56$ & $93.32 \\pm 5.75$\\\\\n Mexico & $59.88 \\pm 2.83$ & $\\mathbf{97.56 \\pm 1.63}$ & $58.93 \\pm 6.36$ & $88.79 \\pm 5.12$\\\\\n Germany & $50.32 \\pm 6.96$ & $\\mathbf{99.20 \\pm 0.74}$ & $58.62 \\pm 3.31$ & $96.79 \\pm 2.05$\\\\\n Spain & $72.39 \\pm 1.13$ & $\\mathbf{98.24 \\pm 1.82}$ & $81.05 \\pm 1.17$ & $90.23 \\pm 7.02$\\\\\n Netherlands & $63.74 \\pm 3.47$ & $\\mathbf{98.65 \\pm 0.98}$ & $77.39 \\pm 3.52$ & $97.21 \\pm 1.77$\\\\\n Canada & $71.04 \\pm 1.98$ & $\\mathbf{97.88 \\pm 2.65}$ & $81.29 \\pm 3.22$ & $91.93 \\pm 3.74$\\\\\n Switzerland & $62.66 \\pm 6.31$ & $\\mathbf{99.03 \\pm 0.78}$ & $72.55 \\pm 3.22$ & $97.12 \\pm 1.52$\\\\\n Poland & $47.04 \\pm 4.06$ & $\\mathbf{99.28 \\pm 0.52}$ & $54.47 \\pm 4.73$ & $97.89 \\pm 1.80$\\\\\n Norway & $62.67 \\pm 8.64$ & $\\mathbf{99.35 \\pm 0.54}$ & $78.28 \\pm 4.04$ & $98.25 \\pm 1.06$\\\\\n Austria & $69.21 \\pm 5.21$ & $\\mathbf{99.10 \\pm 1.16}$ & $78.56 \\pm 2.39$ & $94.74 \\pm 2.88$\\\\\n Finland & $60.83 \\pm 6.98$ & $\\mathbf{98.97 \\pm 0.64}$ & $76.31 \\pm 3.46$ & $98.75 \\pm 0.57$\\\\\n Denmark & $55.54 \\pm 5.48$ & $\\mathbf{97.38 \\pm 2.56}$ & $63.73 \\pm 3.04$ & $92.15 \\pm 3.77$\\\\\n Czechia & $64.40 \\pm 2.91$ & $\\mathbf{98.14 \\pm 1.74}$ & $74.79 \\pm 3.53$ & $93.75 \\pm 3.79$\\\\\n Italy & $71.16 \\pm 1.67$ & 
$\mathbf{98.82 \pm 1.15}$ & $76.33 \pm 1.46$ & $94.49 \pm 3.72$\\\n France & $73.14 \pm 2.38$ & $\mathbf{98.98 \pm 1.32}$ & $77.45 \pm 1.08$ & $91.10 \pm 6.85$\\\n UK & $54.58 \pm 1.24$ & $\mathbf{96.01 \pm 4.75}$ & $53.62 \pm 3.30$ & $84.95 \pm 7.63$\\\n Russia & $57.94 \pm 0.99$ & $\mathbf{96.05 \pm 4.05}$ & $61.91 \pm 2.86$ & $86.71 \pm 5.71$\\\\midrule\n Mean & $62.09 \pm 8.41$ & $\mathbf{98.00 \pm 1.62}$ & $67.83 \pm 12.15$ & $92.72 \pm 4.22$ \\\n \bottomrule\n \end{tabular}%\n }\n\end{table*}\n\n\section{Conclusion}\n\label{sec:conclusion}\nWe consider that we have reached our first objective, which was to build a single model capable of learning to parse addresses of different formats and languages using a multinational dataset and subword embeddings. Indeed, all our approaches achieve accuracies around $99~\%$ on all twenty countries used for training. Our experiments with zero-shot transfer learning also yielded interesting results. First, our baseline approaches obtain good results, achieving near $50~\%$ of state-of-the-art performance. Second, using an attention mechanism helps to improve our results and could also provide insights into the address elements on which the model focuses to make a tag prediction; this analysis is left as future work. Third, our experiments indicate that using a domain adversarial training algorithm does not necessarily improve our results on countries not seen during training, but that it brings a significant improvement on incomplete addresses. Finally, we tested some of our models on incomplete addresses to evaluate their performance on such data, and showed that using some incomplete addresses during training substantially improves performance. These results provide insights into the direction that our future work should take. 
It would be interesting to explore how other subword embedding techniques, such as character-based ones \cite{kim2015characteraware}, would perform on the multinational address parsing task. Additional qualitative analysis of the results would also be required to investigate the models' typical errors further. \n\n\section*{Acknowledgment}\nThis research was supported by the Natural Sciences and Engineering Research Council of Canada (IRCPJ 529529-17) and a Canadian insurance company. We wish to thank the reviewers for their comments regarding our work and methodology.\n\n\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction}\n{\bf Goal}\nElliptical galaxies are presumed to be the product of galaxy merging, where different mass ratios\nresult in different classes of elliptical galaxies (disky, boxy\dots) \cite{naa99}. This\nseems consistent with the hierarchical scenario of galaxy formation\n\cite{pre74}, although it is clear that more standard collapse processes and secular \nevolution contribute to the shaping of galaxies.\nThere are many different and often complementary approaches to constrain these scenarios. \nThe one we adopted is to study the structures of nearby elliptical galaxies in detail, \nin order to probe the signatures of their evolution and to be able to trace their formation\nhistory. This fossil search requires data on the chemical and\ndynamical status of the stellar components and, if present, of the gaseous\nsystem, both being provided by spectroscopy. \\\n\n\hspace{-.5cm}{\bf 3D spectroscopy} Spectroscopy can indeed deliver information on the stars'\ndynamics and chemistry via the analysis of their absorption lines. Two-dimensional spatial\ncoverage provided by integral-field spectroscopy is critical to homogeneously sample\nan extended target, while more traditional long-slit spectroscopy generally restricts our view \nto a few a priori fixed axes. 
\nA qualitative assessment of the obtained maps is clearly not sufficient to fully\naddress the issues mentioned above. We need to go further by\nmodelling the observed galaxies using constraints retrieved from 3D spectroscopy, such as\nluminosity, stellar velocity distribution and stellar populations. As emphasized by \cite{bin05},\n3D spectroscopy seems to be essential to properly constrain the dynamics of\nthe galaxy under scrutiny.\\\n\n\hspace{-.5cm}{\bf Existing models}\nOne of the key goals of galaxy modelling is to retrieve the full distribution function (DF), \n{\it i.e.} the density of stars in phase-space.\nThere are already several known methods to address this problem,\neach of them having both advantages and drawbacks. We can distinguish different\nclasses of techniques~:\n\begin{itemize}\n\item DF-based method, where models are often restricted to simple geometries.\n\item Moment-based method, which consists in solving the Boltzmann and\nPoisson equations via a closed system of relations (the Jeans equations). The main issue here is that the final DF may not be positive everywhere.\n\item Orbit-based method, or Schwarzschild modelling \cite{sch79}. 
Libraries\n of orbits are built within a fixed potential, and each orbit is weighted\n so as to reproduce the observed galaxy (photometry and kinematics).\n\item Particle-based method, where the particles represent groups of stars.\n One has to guess the right initial conditions which will evolve in a configuration\n resembling the chosen galaxy.\n\end{itemize}\n\section{Method}\nThe method we wish to present here is a hybrid scheme between the Schwarzschild and N-body techniques.\nIt was first proposed and developed by \cite{sye96} and consists of an N-body realization where\nthe weight of each particle is gradually changed in order to fit the observables.\\\n\n\hspace{-.5cm}{\bf Previous applications}\nThis method was tested by \cite{sye96} on fake galaxy models, adjusting\nthe photometry alone. It has been recently applied to the Milky Way by \cite{bis04}, \nfitting the photometry of a frozen snapshot extracted from a full N-body\nsimulation\footnote{An extended independent implementation of this algorithm has been recently presented by de Lorenzi et al. 2007: see their paper for details}.\\\n\n\hspace{-.5cm}{\bf Algorithm}\nThe algorithm we use is illustrated in Fig.~\ref{jou:fig1}.\nParticles start from (N-body) initial conditions and are integrated along their orbits. \nObservables are used as reference input to the modelling, particles being projected \nso as to mimic these observables. A weight prescription is derived from the\ndifference between the model and the data, \nwhich is then applied to each particle accordingly. 
\nThis procedure (integration, projection, comparison, weight changing) is\nrepeated until the weights converge.\n\begin{figure}\n\centering\n\includegraphics[width=0.6\textwidth]{jourdeuilF1.eps}\n\caption{Algorithm of the Syer \& Tremaine method.}\n\label{jou:fig1}\n\end{figure}\n\\\n\n\hspace{-.5cm}{\bf Initial conditions}\nThe initial conditions have to properly sample the galaxy, {\it i.e.} in position\/velocity space or using integrals of motion. This is not an easy task for real galaxies, since we do not {\it a priori} know the internal structure, and hence the DF, of the galaxy we wish to model. We thus first\nbuild an initial mass distribution, by using the known photometry and assuming a\nconstant mass-to-light ratio. This is achieved here via the Multi Gaussian Expansion (MGE) \nproposed by \cite{mon92}, which allows us to decompose the mass distribution \ninto a sum of tri-dimensional Gaussians, and to derive the corresponding gravitational\npotential analytically \cite{ems94}. Jeans equations are then used to obtain a first guess of\nthe dynamics and set up initial conditions.\\\n\n\hspace{-.5cm}{\bf Integration}\nIn a first version of the code, we keep the potential steady. The code has been\ndeveloped in a very modular and flexible fashion, such as to allow the easy\nimplementation of a self-consistency module~: this will be described in \na forthcoming paper. As we need to evaluate the observables of all particles at the same time, \nwe chose a second-order synchronized leapfrog scheme, where positions and velocities can be evaluated\nsimultaneously. The leapfrog technique is perhaps not the best choice in terms of accuracy, but \nwas found sufficient for this first development step. Other schemes (e.g. Runge-Kutta) can be easily\nimplemented.\nAdaptive time-steps have been included, in order to optimally sample all \nthe orbits. 
An independent integration of each particle would result in a very\ninefficient algorithm. We thus chose to follow the algorithm described in \nthe N-body code GADGET \cite{spr01}, in which particles advance in bunches \nso that they are always distributed in a tight time range around the current time.\\\n\n\hspace{-.5cm}{\bf Observables}\nOur code has been designed around two new main items.\nFirst, we do not restrict ourselves to fitting the photometry~: the code also allows the fitting of the kinematics, via the use of Line of Sight Velocity\nDistributions (LOSVDs). Second, the code has been developed so as to permit the use of \n3D spectroscopic data, including spatial adaptive binning (Voronoi bins, see \cite{cap03}). We have thus used the data obtained with the SAURON spectrograph (WHT,\nCanary Islands) for its survey of early-type galaxies \cite{dez02,ems04}.\\\n\n\hspace{-.5cm}{\bf Prescription}\nThe heart of the code is the prescription developed by \cite{sye96}, which\ngoverns the weight change of each particle so that the cumulated\nobservables reproduce the observations~:\n\begin{equation}\n \frac{dw_i(t)}{dt} = -\epsilon w_i(t)\sum_{j=1}^J\frac{K_{ij}}{Z_j}\Delta_j\n\label{jou:eq1}\n\end{equation}\nwhere $i$ labels the particles, $j$ the observables, $w$ the weights, and\n$K_{ij}$ the contribution of particle $i$ to observable $j$; $\Delta_j\n\equiv y_j(t)\/Y_j -1$ is the relative difference between the model value $y_j(t)$, cumulated over all particles, and the observed value $Y_j$. The parameter $\epsilon$ sets the strength of the weight changes, and the factor $w_i(t)$\nprevents the weights from becoming negative, since the change vanishes as $w_i$ approaches $0$. 
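As a concrete illustration, the prescription of Eq.~(1) can be integrated with a simple explicit Euler step. The sketch below is not the actual code: the kernel $K_{ij}$, the mock data $Y_j$ and the choice $Z_j=Y_j$ are toy assumptions, made only to show the weights being driven towards the observations.

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_obs = 200, 5
K = rng.random((n_particles, n_obs))         # K_ij: contribution of particle i to observable j
w_true = rng.uniform(0.5, 1.5, n_particles)  # hidden weights generating the mock data
Y = K.T @ w_true                             # "observed" values Y_j
Z = Y.copy()                                 # normalisations Z_j (toy choice Z_j = Y_j)

w = np.ones(n_particles)                     # initial weights
eps, dt = 10.0, 0.1
for _ in range(4000):
    delta = (K.T @ w) / Y - 1.0              # Delta_j = y_j(t)/Y_j - 1
    w -= dt * eps * w * ((K / Z) @ delta)    # Euler step of the prescription

residual = np.max(np.abs((K.T @ w) / Y - 1.0))
print(f"max relative residual: {residual:.2e}")
```

Because the change of each weight is proportional to $w_i$ itself, the weights remain positive throughout the iteration, as noted above.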
\nThe input parameters are the following~: \n\begin{itemize} \item $N$ is the number of particles,\n\item $d_{time}$ is the time during which the observables are averaged,\n\item $\epsilon$ is the strength parameter mentioned in Eq.~\ref{jou:eq1},\n\item $\alpha$ is the smoothing parameter of the observables, used in the equation~:\n\begin{equation}\n \tilde\Delta_j(t) = \alpha \int_0^\infty\Delta_j(t-\tau)e^{-\alpha \tau}d\tau\n \end{equation}\nso that $\tilde\Delta_j(t)$ replaces $\Delta_j(t)$ in Eq.~\ref{jou:eq1}.\n\end{itemize}\nThe choice of these parameters is crucial, each galaxy requiring an\nappropriate set.\n\section{Results}\n\nA preliminary model has been obtained on the nearby disky galaxy\nNGC~3377. In Fig.~\ref{jou:fig2} we illustrate the fit of the photometry and LOSVDs \nobtained with the SAURON instrument on this galaxy, for a given mass\n(MGE) distribution. The maps are globally well reproduced ($I$, $\sigma$, $h_3$).\nSome remaining discrepancy in the velocity field is probably due to an observed asymmetry in\nthe kinematics of this galaxy (tilt between the photometric and kinematic\nminor-axis). The relatively high observed $h_4$ level is not reached by the\nmodel~: this may be partly related to a template mismatch effect which\nmostly affects the even moments of the LOSVD.\n\n\begin{figure}\n\centering\n\includegraphics[width=0.7\textwidth]{jourdeuilF2.eps}\n\caption{$I, V, \sigma, h_3, h_4$ from the observations (top) and the model (bottom) of NGC~3377}\n\label{jou:fig2}\n\end{figure}\n\section{Conclusion and perspectives}\nAlthough the results shown here are very preliminary,\nthey illustrate the possibility of fitting the kinematics,\nwhich is an important step in the development of such a code. \nIt has also been designed to adjust complex data such as that\nprovided by 3D spectrographs. 
A few improvements have been or are now being implemented~:\n\begin{itemize}\n\item Self-consistency~: it is technically easy to replace the current integration \nmodule with a self-consistent scheme. The full implications of a continuous weight changing\nremain, however, unclear and have to be examined.\n\item Addition of a central black hole~: this would allow the use of higher\nspatial resolution data of nearby galaxies.\n\item Stellar populations~: this new class of observables would allow us\nto link the dynamics and chemistry of the stellar component.\n\end{itemize}\n \bibliographystyle{abbrv}\n ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction}\n\nIn 1965 Buchberger invented Gr\\"obner basis theory, a collection of techniques enabling \ncomputation with ideals in commutative polynomial rings over fields.\nImplementations of Buchberger's algorithm are now provided by all major\ncomputer algebra systems; a good cross-section of the ways in which the \ntheory has developed may be found in \cite{RISC}. \nMora generalised Gr\\"obner basis theory to noncommutative polynomial rings\n(algebras) \cite{FMora}. 
\nIntroductions to these procedures may be found in \cite{Ufn,TMora2}.\nThis paper presents an extension of the noncommutative Gr\\"obner\nbasis procedures for polynomials to what we call {\em tagged polynomials}.\nThe intention is to describe methods of computation that may be applied\nto the problem of computing right (or left) ideals in finitely presented\n$K$-algebras.\n\\\n\nThe data defining the problem consists of the field $K$, a set of\nnoncommuting\nvariables $X$, a set of generators $P \subseteq K[X^\dagger]$ for a two-sided\nideal $\langle P \rangle$, defining an algebra \n$A=K[X^\dagger]\/ \langle P \rangle$ and a set of generators $Q'\n\subseteq A$\nfor a right ideal $\langle Q' \rangle^r$.\nWe expect elements of $A$ to be given in terms of $K[X^\dagger]$, so\n$Q'$ is\nspecified by a set $Q \subseteq K[X^\dagger]$.\nThe problem we address is that of computing the right $A$-ideal\ngenerated by $Q'$, written $\langle Q' \rangle^r$.\n\\\n\nOur solution lies in using the free right $K$-module $K[\dashv \!\nX^\dagger]$. \nHere $\dashv$ is just a symbol or tag and \n$K[\dashv\!X^\dagger]$ is bijective with $K[X^\dagger]$.\nWe call elements of $K[\dashv\!X^\dagger]$ \emph{tagged polynomials} \n(Definition 3.1) and\nwrite them $k_1\dashv\!m_1+ \cdots + k_n\dashv\!m_n$ where \n$k_1,\ldots,k_n \in K$ and $m_1,\ldots,m_n \in X^\dagger$. \nOrdinary polynomials $F_P$ defining the two-sided\nideal $\langle P \rangle$ which defines $A$ are combined with\ntagged polynomials $F_T$ defining the one-sided ideal. \nThe mixed set of polynomials $F:=(F_T,F_P)$ determines a\nreduction relation $\to_F$ (Definition 3.2) on the tagged polynomials\n$K[\dashv \! X^\dagger]$.\n\\\n\n\nThe value of this combination and use of tagging is in computation, as\nwill be shown in \nthe main result (Algorithm 4.9), which describes a variant of the Buchberger\nalgorithm. 
The initial mixed set of polynomials $F$ is extended with tagged \nand non-tagged polynomials until the relation $\to_F$ is complete on\n$K[\dashv\!X^\dagger]$. When the procedure terminates the usual\nnormal form arguments apply and reduction modulo $F$ can be used to\nsolve \nthe membership problem for the right ideal $\langle Q' \rangle^r$\nof the finitely presented algebra $A$.\n\\\n \nPrevious work \cite{Birgit,MRC} attempts the computation of one-sided ideals\nby using different definitions of \npurely one-sided reduction relations in particular algebras \n(e.g. $\mathbb{Q}[M]$ for a monoid $M$ presented by a complete rewrite system). \nThe main problem encountered is that of computing in a non-free algebra.\nWe avoid this by basing the computations specifying the algebra at\nthe same level (in a particular free right module) \nas those specifying the ideal, and by computing the two simultaneously. \nIn other words, the methods we describe \nprovide for local computations, concerning single ideals \n$\langle Q' \rangle^r$ without the requirement to compute \nthe global structure of the algebra $A$ or face the\ndifficulties of calculations with elements of $A$.\nThis idea follows the philosophy that \ncomputations take place in free objects.\n\n \n\section{Algebra Presentations and One-sided Ideals}\n\nIf $X$ is a set, then \n$X^\dagger$ is the {\em free semigroup} \nof all strings of elements of $X$, and\n$X^*$ is the {\em free monoid} of all strings \ntogether with the empty string, which acts as the identity $i\!d$ for\n$X^*$.\nA {\em semigroup presentation} is a pair $sgp \langle X | R \rangle$\nwhere\n$X$ is a set and $R \subseteq X^\dagger \times X^\dagger$. 
\nIt {\\em presents a semigroup} $S$ if $X$ is a set of generators of $S$\nand the natural morphism $\\theta:X^\\dagger \\to S$ induces an isomorphism\nfrom $X^\\dagger\\!\/=_R$ to $S$, where $=_R$ is the\ncongruence generated on $X^\\dagger$ by $R$.\nSimilarly, a {\\em monoid presentation} is a pair $mon \\langle X | R \\rangle$\nwhere $X$ is a set and $R \\subseteq X^* \\times X^*$. \nIt {\\em presents a monoid} $M$ if $X$ is a set of generators of $M$\nand the natural morphism $\\theta:X^* \\to M$ induces an isomorphism\nfrom $X^*\\!\/=_R$ to $M$, where $=_R$ is the\ncongruence generated on $X^*$ by $R$.\n\\\\\n\nLet $K$ be a field.\nThe { \\em free $K$-algebra} $K[S]$ on a semigroup $S$ \nconsists of all sums of $K$-multiples\nof elements of $S$ with the operations of \naddition and multiplication defined\nin the obvious way. In particular the elements of\n$K[X^\\dagger]$ are called {\\em polynomials} and written\n$k_1m_1+\\cdots+k_nm_n$ where $k_1,\\ldots,k_n \\in K$ and \n$m_1,\\ldots,m_n \\in X^\\dagger$.\\\\\n\n\nIf $P$ is a subset of an algebra $Z$ then the \n{\\em two-sided ideal} generated by $P$ in \n$Z$ is denoted $\\langle P \\rangle$. 
In the case $Z=K[X^\\dagger]$ this\nconsists of all sums of multiples of\nelements of $P$, i.e.\n$$\\langle P \\rangle := \\{ k_1u_1p_1v_1+ \\cdots + k_nu_np_nv_n \n \\ | \\ p_1, \\ldots,p_n \\in P, k_1,\\ldots,k_n \\in K, \n u_1,v_1,\\ldots,u_n,v_n \\in X^*\\}.\n$$\nGiven an ideal in an algebra the {\\em membership problem} is that of\ndetermining,\nfor a given element of the algebra, whether it is an element of the\nideal.\n\\\\\n\nA {\\em congruence} on an algebra $Z$ is an equivalence relation $\\sim$\non its elements such that if $p \\sim q$ then $p+u \\sim q+u$ and \n$upv \\sim uqv$ for all $u,v \\in Z$.\nGiven an algebra $Z$ and an ideal $\\langle P \\rangle$ \nideal membership defines a congruence on the algebra by\n$$\np \\sim q \\Leftrightarrow p-q \\in \\langle P \\rangle.\n$$\nThe {\\em quotient algebra} $Z\/\\langle P \\rangle$ is the algebra of\ncongruence \nclasses of $Z$ under $\\langle P \\rangle$.\nA $K$-\\emph{algebra presentation} is a pair $alg\\langle X | P \\rangle$, where\n$P \\subseteq K[X^\\dagger]$.\nA $K$-algebra $A$ is presented by $alg\\langle X|P \\rangle$ if \n$X$ is a set of generators of $A$ and the natural morphism\n$K[X^\\dagger] \\to A$ induces an isomorphism $K[X^\\dagger]\/\\langle P\n\\rangle \\to A$.\\\\\n\n\nNoncommutative Gr\\\"obner basis theory (as described in \\cite{TMora, TMora2})\nuses the notion of an ordering on $X^\\dagger$, thereby allowing the \nconcepts of {\\em leading monomial}, \n{\\em leading term} and {\\em leading coefficient} \n on the polynomials of $K[X^\\dagger]$. \nGiven any subset $P$ of $K[X^\\dagger]$ \nan ordering determines a Noetherian \nreduction relation $\\to_P$ on the\nelements of $K[X^\\dagger]$. 
\nThe reflexive, symmetric, transitive closure of this\nrelation is a congruence relation \ncoinciding with the ideal membership of $\\langle P \\rangle$.\n\\\\\n\n\nLet $A$ be the $K$-algebra presented by\n$alg\\langle X | P \\rangle$ and let $Q' \\subseteq A$.\nWe wish to consider the right ideal $\\langle Q'\\rangle^r $\ngenerated in $A$ by $Q'$,\ni.e.\n$$\n\\langle Q'\\rangle^r :=\n \\{q_1'a_1+ \\cdots +q_n'a_n \\ | \\ a_1, \\ldots, a_n \\in A,\n q_1', \\ldots, q_n' \\in Q'\\}.\n$$\n\nA {\\em right congruence} on an algebra $A$ is an equivalence relation\n$\\stackrel{r}{\\sim}$ such that for all $a,b,y \\in A$\n$$a \\stackrel{r}{\\sim} b \\Rightarrow \n a+y \\stackrel{r}{\\sim} b+y \\text{ and } \n ay \\stackrel{r}{\\sim} by.$$\n\n\nMembership of a right ideal $\\langle Q' \\rangle^r$ defines \na right congruence on $A$, \nby $a \\stackrel{r}{\\sim}_{Q'} b \\Leftrightarrow a-b \\in \\langle Q'\n\\rangle^r$.\nThe quotient $A\/\\langle Q' \\rangle^r$ is \nthe set of all the right congruence classes of $A$ under \n$\\stackrel{r}{\\sim}_{Q'}$ \nwhere classes are denoted $[a]_{Q'}$ for $a \\in A$.\nNote that for $a, b \\in A$,\n$[a+b]_{Q'} = [a]_{Q'} + [b]_{Q'}$ \nand\n$[a]_{Q'}[b]_{Q'} = [a]_{Q'}$.\\\\\n\n\n {\\em Buchberger's algorithm} \nis a critical pair completion procedure. \nThe algorithm begins with a set of polynomials \n$P$ of a free algebra. Set $F:=P$ and a search for \noverlapping leading terms \nwill find all critical terms of the \nreduction relation $\\to_F$. \nThis enables a test for local confluence. \nOverlaps which cannot be resolved result in \n{\\em S-polynomials} all of which are \nadded to $F$ at each stage\n(though some elimination is possible, \nfor efficiency). 
The algorithm\nterminates when all the overlaps of \n$F$ can be resolved, i.e.\n$\to_F$ is complete \n(Noetherian and confluent); when\nthis occurs $F$ is said to be a \n{\em Gr\\"obner basis} for \nthe ideal $\langle P \rangle$.\nObtaining a Gr\\"obner basis $F$ \nallows, in particular, the \nsolution of the membership problem \nby using $\to_F$ as a normal\nform function on $K[X^\dagger]$.\nThus, if $F$ is a Gr\\"obner basis for \nthe ideal $\langle P \rangle$ on $K[X^\dagger]$ \nand $p,q \in K[X^\dagger]$, then\n$$\n p \sim q \Leftrightarrow \n p-q \in \langle P \rangle \Leftrightarrow\np \stackrel{*}{\to}_{F} u \text{ and }\nq \stackrel{*}{\to}_{F} u \n \text{ for some } u \in K[X^\dagger].\n$$\n\n\nThe Noetherian property ensures that \nthe process of reduction terminates with an \nirreducible element; confluence ensures that \nany two elements of the same class\nreduce to the same form. In practice, each \npolynomial is reduced as far as \npossible using $\to_{F}$; the original \npolynomials are equivalent if and only \nif their irreducible forms are equal.\\\n\n\nIn the next sections we show how to apply Buchberger's algorithm to obtain --\nwhen possible -- a Gr\\"obner basis of (two types of) polynomials, which will\nenable the use of normal form arguments.\n\n\n\section{One-sided Noncommutative Gr\\"obner Basis Procedures}\n\nGiven a finitely presented $K$-algebra $A$ and a subset $Q'$ of $A$ we\nwish to compute the right ideal $\langle Q' \rangle^r$. \nThe meaning of `computing the ideal' in this context is that of solving \nthe ideal membership problem for $\langle Q' \rangle^r$ in $A$.\nThe $K$-algebra $A$ is presented by \n$alg\langle X | P \rangle$ and to obtain normal forms for $A$ we would\ntherefore apply Gr\\"obner basis procedures\nto $P$ in the free algebra $K[X^\dagger]$.\nSince we are interested in a one-sided ideal we introduce the \ntagging notation which will allow the combination of $P$ and $Q$. 
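Before introducing the tagged version, the plain reduction and normal-form computation recalled above can be sketched in a few lines; monomials are represented as strings over $X=\{x,y\}$, polynomials as term-to-coefficient dictionaries over $K=\mathbb{Q}$, and the degree-lexicographic ordering is assumed. The single rule $yx \to xy$ used below happens to be a Gr\"obner basis (it has no overlaps), so normal forms decide membership; this is an illustrative sketch, not the completion procedure itself.

```python
from fractions import Fraction

def leading(p):
    """Leading term under deglex: longer words first, then lexicographic."""
    return max(p, key=lambda m: (len(m), m))

def reduce_once(f, G):
    """One reduction step f -> f - (k/lc)*u*g*v, or None if f is irreducible."""
    for m, k in list(f.items()):
        for g in G:
            lt = leading(g)
            pos = m.find(lt)                    # a two-sided rule may match anywhere
            if pos >= 0:
                u, v = m[:pos], m[pos + len(lt):]
                c = Fraction(k) / g[lt]
                out = dict(f)
                for gm, gk in g.items():
                    key = u + gm + v
                    out[key] = out.get(key, Fraction(0)) - c * gk
                return {t: a for t, a in out.items() if a != 0}
    return None

def normal_form(f, G):
    while (step := reduce_once(f, G)) is not None:
        f = step
    return f

G = [{"yx": Fraction(1), "xy": Fraction(-1)}]   # P = {yx - xy}, the rule yx -> xy
p = {"yxyx": Fraction(1), "xy": Fraction(2)}    # yxyx + 2xy
print(normal_form(p, G))                        # normal form is x^2y^2 + 2xy
print(normal_form({"yx": Fraction(1), "xy": Fraction(-1)}, G) == {})  # yx - xy lies in <P>
```

The empty dictionary plays the role of the zero polynomial, so reduction to `{}` witnesses ideal membership, exactly as in the displayed equivalence.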
\n\n\n\\begin{Def}[Tagged polynomials]\nLet $K$ be a field, let $X$ be a set and let $\\dashv$ be a symbol.\nThen $\\dashv\\!X^\\dagger$ is the set of \\textbf{tagged terms}\n$\\dashv\\!m$\nwhere $m \\in X^\\dagger$ and $K[\\dashv \\! X^\\dagger]$ is the free right \n$K[X^\\dagger]$-module of \\textbf{tagged polynomials},\ni.e. elements $k_1 t_1 + \\cdots + k_n t_n$ for\n$k_1, \\ldots, k_n \\in K$, $t_1, \\ldots, t_n \\in \\dashv\\! X^\\dagger$.\n\\end{Def}\n\nLet $>$ be a {\\em semigroup ordering} on $X^\\dagger$, i.e.\n$>$ is an irreflexive, antisymmetric, transitive relation on $X^\\dagger$\nsuch\nthat if $m_1>m_2$ then $um_1v>um_2v$ for all $u,v \\in X^*$.\nFurther we require the {\\em well-ordering} property, that there is no\ninfinite sequence $m_1>m_2>m_3> \\cdots$.\\\\\n\nLet $p=k_1m_1 + \\cdots + k_nm_n \\in K[X^\\dagger]$. \nThe $k_im_i$ are referred to as the {\\em monomials} of the polynomial, where\n$m_i$ is the {\\em term} and $k_i$ the {\\em coefficient}.\nAssuming the well-ordering on $X^\\dagger$, \nthe leading monomial $\\mathtt{LM}(p)$ is defined to be the monomial with\nthe largest term. The leading term $\\mathtt{LT}(p)$ and\nleading\ncoefficient $\\mathtt{LC}(p)$ are the coefficient and term of\nthis monomial.\\\\ \n\nTo simplify the definitions throughout this paper \nwe will assume all polynomials to be {\\em monic},\ni.e. their leading coefficients are all 1.\nThere is no loss in doing this: $K$ is a field so \nthe polynomials $F$ may always be divided by their leading \ncoefficients and still generate the same ideal.\\\\\n\n\nThe well-ordering on $X^\\dagger$ induces a well-ordering on\n$\\dashv\\!X^\\dagger$\ndefined by $\\dashv\\!m_1>\\dashv\\!m_2 \\Leftrightarrow m_1>m_2$.\nThis gives corresponding notions of leading monomial, leading term \nand leading coefficient for the tagged polynomials. 
\nIn detail: if $p=k_1m_1+ \\cdots + k_nm_n$ where $k_1,\\ldots,k_n \\in K$ and\n$m_1,\\ldots,m_n\\in X^\\dagger$ is a polynomial with leading term\n$\\mathtt{LT}(p)=m_i$ then the tagged polynomial\n$\\dashv\\!p:=k_1\\dashv\\!m_1 + \\cdots + k_n \\dashv\\!m_n$ has a tagged leading \nterm $\\mathtt{LT}(\\dashv\\!p)=\\dashv\\!m_i$. \n\\\\\n\nWe will now introduce the definition of a reduction relation \non $K[\\dashv\\!X^\\dagger]$, defined by a mixed set of polynomials\n$F=(F_T,F_P)$ where $F_T$ is a set of tagged polynomials, elements of\nthe module, and $F_P$ is a set of polynomials, \nelements of the algebra acting on the right of the module.\nThe reduction relation $\\to_F$ combines the two relations so\nthat they are defined on the free right module of tagged polynomials. \n\n\n\n\n\\begin{Def}[Reduction of tagged polynomials]\nLet $F:=(F_T,F_P)$ where \n$F_T \\subseteq K[\\dashv\\! X^\\dagger]$ and \n$F_P \\subseteq K[X^\\dagger]$. \nDefine the reduction relation $\\to_F$ on\ntagged polynomials $f \\in K[\\dashv \\! X^\\dagger]$ by\n$$\nf \\to_{F} f - k u (f_i) v\n$$\nif $u \\mathtt{LT}(f_i) v$ occurs in $f$ with coefficient $k \\in K$\nfor some $u \\in \\dashv\\!X^* \\cup \\{i\\!d\\}$, $v \\in X^*$, $f_i \\in F$. \n\\end{Def}\n\nA one-step reduction like that of the definition may also be\nwritten $f \\to_{f_i} f - k u (f_i) v$. \nThis relation may be understood to be a rewrite\nsystem on the polynomials (similarly to observations made in \\cite{Birgit}\non Mora's definitions of reduction \\cite{FMora}). When a multiple of the \nleading term of $f_i$ for $f_i \\in F$ occurs in the polynomial that is \nto be reduced, the rest of $f_i$ is substituted for the leading monomial \nof $f_i$. 
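A minimal sketch of this mixed reduction represents a tagged monomial $\dashv m$ simply as the string of $m$, the tag being implicit at the left end: a tagged rule from $F_T$ may then only match a prefix (i.e. $u=i\!d$), while an ordinary rule from $F_P$ may match anywhere. The representation (term-to-coefficient dictionaries over $K=\mathbb{Q}$, deglex ordering) is an assumption made for illustration.

```python
from fractions import Fraction

def leading(p):
    return max(p, key=lambda m: (len(m), m))   # deglex leading term

def subtract(f, k, g, u, v):
    """Return f - (k/lc(g)) * u*g*v, dropping zero coefficients."""
    out, c = dict(f), Fraction(k) / g[leading(g)]
    for gm, gk in g.items():
        key = u + gm + v
        out[key] = out.get(key, Fraction(0)) - c * gk
    return {t: a for t, a in out.items() if a != 0}

def reduce_once(f, F_T, F_P):
    for m, k in list(f.items()):
        for g in F_T:                          # tagged rules: match a prefix only
            lt = leading(g)
            if m.startswith(lt):
                return subtract(f, k, g, "", m[len(lt):])
        for g in F_P:                          # ordinary rules: match anywhere
            lt = leading(g)
            pos = m.find(lt)
            if pos >= 0:
                return subtract(f, k, g, m[:pos], m[pos + len(lt):])
    return None

# f1 = ⊣xyx + ⊣yx + 2⊣y applied to f = 8⊣xyx²y³ + 5⊣y
f1 = {"xyx": Fraction(1), "yx": Fraction(1), "y": Fraction(2)}
f = {"xyxxyyy": Fraction(8), "y": Fraction(5)}
print(reduce_once(f, [f1], []))   # 5⊣y - 8⊣yx²y³ - 16⊣yxy³
```

One step of this sketch reproduces the reduction by $f_1$ computed by hand in the worked example that follows.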
\n\n\nRegarding $F$ as a rewrite system with two types of rules that may be\napplied to monomials of polynomials, we could say that\nthe non-tagged polynomials can be applied anywhere in a term, \nbut the tagged ones apply only at the tagged side of a term.\n\n\\begin{example}[Reduction]\n\\emph{For example let $F_T:=\\{f_1,f_2\\}$ where \n$f_1:=\\dashv\\!xyx+\\dashv\\!yx+2\\dashv\\!y$,\n$f_2:=\\dashv\\!yx^2+\\dashv\\!x^2$ and\n$F_P:=\\{f_3,f_4\\}$ where $f_3:=x^2y-3yx$, $f_4:=yx^3-2xy$. \nThen the tagged polynomial \n$f:=8\\dashv\\!xyx^2y^3+5\\dashv\\!y$ cannot be reduced by $f_2$ or $f_4$ \nbut can be reduced by $f_1$ to \n$f-8f_1xy^3=5\\dashv\\!y-8\\dashv\\!yx^2y^3-16\\dashv\\!yxy^3$ \nor by $f_3$ to\n$f-8\\dashv\\!xyf_3y^2=5\\dashv\\!y+24\\dashv\\!xy^2xy^2$.}\n\\end{example}\n\n\nThese results allow the combination of two-sided and one-sided\ncongruences,\nby the use of tagged polynomials. A mixed set of polynomials $F$\ndefines a reduction relation on the module of all tagged polynomials.\nThe reflexive, symmetric, transitive closure of $\\to_F$ will be \ndenoted $\\stackrel{*}{\\leftrightarrow}_F$. \nThe class of $f \\in K[\\dashv \\! X^\\dagger]$ under the equivalence\nrelation \n$\\stackrel{*}{\\leftrightarrow}_F$ will be denoted $[f]_F$.\n\n\n\n\n\\begin{thm}\nLet $A$ be a $K$-algebra finitely presented by $alg\\langle X | P\n\\rangle$ with quotient morphism $\\theta$.\nLet $Q \\subseteq K[X^\\dagger]$ and define $Q':=\\theta Q$.\nDefine $F:=(\\dashv\\!Q, P)$ where $\\dashv\\!Q:=\\{\\dashv\\!q:q\\in Q\\}$.\nThen there is a bijection of sets\n$$\n\\frac{ K[\\dashv \\! X^\\dagger] }{ \\stackrel{*}{\\leftrightarrow}_F }\n\\cong \\frac{A}{ \\langle Q' \\rangle^r }$$\n\\end{thm}\n\n\\begin{proof}\nThe quotient\nmorphism $\\theta:K[X^\\dagger] \\to A$, \ndefines a surjection\n$\\theta^\\dashv:K[\\dashv \\! X^\\dagger] \\to A$. \nThen $\\theta^\\dashv(\\dashv \\! 
Q)=Q'$.\\\\\n\nDefine $\\phi:K[\\dashv\\!X^\\dagger]\/\\!\\stackrel{*}{\\leftrightarrow}_F \\;\n \\to A\/\\langle Q' \\rangle^r$ by\n\\, $\\phi([f]_F) := [\\theta^\\dashv(f)]_{Q'}.$\n\nTo prove that $\\phi$ is well-defined we show that it preserves the right\ncongruence $\\stackrel{*}{\\leftrightarrow}_F$.\nWe assume all polynomials of $F$ are monic.\nLet $f \\in K[\\dashv\\!X^\\dagger]$\nand $f_i \\in F$ and suppose that $f \\to_F f - kuf_iv$ for some\n$k \\in K$, $u \\in \\dashv\\!X^* \\cup \\{i\\!d\\}$, $v \\in X^*$.\nBy definition\n$\\phi([ f-kuf_iv ]_F) = [ \\theta^\\dashv(f) - \\theta^\\dashv(kuf_iv)\n]_{Q'}$.\n\\\\\n\nNow either\\\\\n(i) $f_i \\in P \\subseteq K[X^\\dagger]$ and\n $\\theta^\\dashv(kuf_iv)=0$, since $kuf_iv \\in \\langle P \\rangle$,\\\\\nor else\\\\\n(ii) $f_i \\in \\dashv\\! Q \\subseteq K[\\dashv\\!X^\\dagger]$ and\n $u=i\\!d$, so $\\theta^\\dashv(kuf_iv)=k\\theta^\\dashv (f_iv)$.\\\\\nIn either case $\\theta^\\dashv(kuf_iv) \\in \\langle Q' \\rangle^r$, so\n$\\phi( [f ]_F ) = \\phi( [ f-kuf_iv ]_F )$,\ni.e. $\\phi$ preserves the relation $\\to_F$.\n\nFurthermore if $[f]_F=[g]_F$ for some $f,g\\in K[\\dashv\\!X^\\dagger]$, then\nfor all $v \\in X^*$,\n\\begin{align*}\n\\phi([fv]_F) &= [\\theta^\\dashv(fv)]_{Q'}\\\\\n &= [\\theta^\\dashv(f)]_{Q'} \\theta(v) \\\\\n &= [\\theta^\\dashv(g)]_{Q'} \\theta(v) \\\\\n &= [\\theta^\\dashv(gv)]_{Q'} \\\\\n &= \\phi([gv]_F).\n\\end{align*}\nTherefore $\\phi$ preserves the right congruence\n$\\stackrel{*}{\\leftrightarrow}_F$.\\\\\n\n\nWe now prove that $\\phi$ is surjective.\nLet $a \\in A$.\nThen there exists $f \\in K[\\dashv \\! X^\\dagger]$ such that\n$\\theta^\\dashv(f)=a$, because $\\theta^\\dashv$ is a surjection.\nThus for all $[a]_{Q'} \\in A\/\\langle Q' \\rangle^r$\nthere exists $[f]_F \n\\in K[\\dashv \\! 
X^\\dagger]\/\\stackrel{*}{\\leftrightarrow}_F$\nsuch that $\\phi([f]_F)=[\\theta^\\dashv f]_{Q'}=[a]_{Q'}$.\\\\\n\n\nFinally, we prove that $\\phi$ is injective.\nLet $f,g \\in K[\\dashv \\! X^\\dagger]$ be such that\n$\\phi[f]_F = \\phi[g]_F$.\nThen $[\\theta^\\dashv(f)]_{Q'} = [\\theta^\\dashv(g)]_{Q'}$.\nTherefore there exist $q_1',\\ldots,q_n' \\in Q'$ and $k_1,\\ldots,k_n \\in\nK$,\n$a_1,\\ldots,a_n \\in A$, such that\n$$\\theta^\\dashv(f)-\\theta^\\dashv(g)=k_1q_1'a_1+\\cdots + k_nq_n'a_n.$$\nFor $i=1,\\ldots,n$ there exist (expanding the sum if necessary) $q_i \\in Q$\nand $y_i \\in X^*$ such that $\\theta^\\dashv(\\dashv\\!q_iy_i)=q_i'a_i$.\nHence\n$$\\theta^\\dashv(f)-\\theta^\\dashv(g)=k_1\\theta^\\dashv(\\dashv\\!q_1y_1)+ \\cdots + k_n\\theta^\\dashv(\\dashv\\!q_ny_n).$$\nNow $\\theta^\\dashv$ preserves $+$ and therefore\n$\\theta^\\dashv( f - g - k_1\\dashv\\!q_1y_1 - \\cdots - k_n\\dashv\\!q_ny_n ) = 0$.\nBy the definition of $\\theta$, and hence of $\\theta^\\dashv$, it follows that\n$$f-g-k_1\\dashv\\!q_1y_1 - \\cdots - k_n\\dashv\\!q_ny_n = l_1u_1p_1v_1 + \\cdots +\nl_mu_mp_mv_m$$\nfor some $p_1,\\ldots,p_m \\in P$,\n$l_1,\\ldots,l_m\\in K$ and $u_1,\\ldots,u_m \\in \\dashv\\!X^* \\cup \\{i\\!d\\}$,\n$v_1,\\ldots,v_m \\in X^*$.\nTherefore $f \\stackrel{*}{\\leftrightarrow}_F g$, from the definition.\nTherefore $\\phi$ is a well-defined bijection of sets.\n\\end{proof}\n\n\\begin{cor}\nLet $S$ be a semigroup with presentation $sgp\\langle X | R\\rangle$.\nLet $P:=\\{l-r:(l,r)\\in R\\}$ and let $Q \\subseteq K[X^\\dagger]$;\nwrite $Q':=\\theta Q$, where $\\theta:K[X^\\dagger] \\to K[S]$ is the\nquotient morphism.\nDefine $F:=(\\dashv\\!Q, P)$ where $\\dashv\\!Q:=\\{\\dashv\\!q:q\\in Q\\}$.\nThen there is a bijection of sets\n$$\\frac{K[S]}{\\langle Q' \\rangle^r} \\cong\n \\frac{K[\\dashv\\!X^\\dagger]}{\\stackrel{*}{\\leftrightarrow}_F}$$\n\\end{cor}\n\nHere it is appropriate to observe the link to rewrite systems used in\nthe proof of this corollary: in particular, $alg\\langle X|P\\rangle$ is\na presentation of $K[S]$ \\cite{TMora,Birgit,paper3}. 
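As an illustration of how the bijection is used (a Python sketch in our own string encoding, not the authors' implementation): once a mixed set is complete, reducing a tagged polynomial to its normal form decides membership of the corresponding right ideal, with normal form $0$ meaning membership. The test data below use the semigroup $sgp\\langle x \\mid x^3=x^2\\rangle$ treated in the examples section: with respect to $\\{\\dashv\\!x,\\; x^3-x^2\\}$ every $\\dashv\\!x^k$ reduces to $0$, while $\\dashv\\!x$ is irreducible with respect to $\\{\\dashv\\!x^2,\\; x^3-x^2\\}$.

```python
# Illustrative sketch: terms are strings, "|" is the tag, and a
# polynomial is a dict {term: nonzero coefficient}.

def lt(p):
    # Leading term under the length-lexicographic ordering.
    return max(p, key=lambda t: (len(t), t))

def reduce_once(f, fi):
    # One step f -> f - k*u*fi*v (fi monic); None if fi does not apply.
    m = lt(fi)
    for t, k in f.items():
        if m.startswith("|"):
            if not t.startswith(m):
                continue
            u, v = "", t[len(m):]
        else:
            i = t.find(m, 1)
            if i < 1:
                continue
            u, v = t[:i], t[i + len(m):]
        g = dict(f)
        for s, c in fi.items():
            w = u + s + v
            g[w] = g.get(w, 0) - k * c
            if g[w] == 0:
                del g[w]
        return g
    return None

def normal_form(f, F):
    # Reduce f by the mixed set F until it is irreducible; this
    # terminates because the ordering is a well-ordering.
    reduced = True
    while reduced:
        reduced = False
        for fi in F:
            g = reduce_once(f, fi)
            if g is not None:
                f, reduced = g, True
                break
    return f

# Right ideals of x and of x^2 in sgp<x | x^3 = x^2>:
F_x = [{"|x": 1}, {"xxx": 1, "xx": -1}]
F_xx = [{"|xx": 1}, {"xxx": 1, "xx": -1}]
```

The normal form of $\\dashv\\!x^2$ with respect to `F_x` is the zero polynomial (the empty dictionary), so $x^2 \\in \\langle x\\rangle^r$; the tagged polynomial $\\dashv\\!x$ is its own normal form with respect to `F_xx`, matching the conclusion of the first example that $x$ and $x^2$ are not $R$-related.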
\nThis corollary (also see the next result) provides an alternative\napproach to that of Reinert and Zecker \\cite{MRC} for attempting the\ncomputation of ideals in $\\mathbb{Q}[M]$, where $M$ is a monoid.\nOur computations are based in $\\mathbb{Q}[X^*]$, where $X$ is a set of\ngenerators for $M$, whereas the computations of Reinert and Zecker are\nmade within $\\mathbb{Q}[M]$ itself (also using a presentation of $M$).\n\n\n\\begin{thm}\nLet $X$ be a set of generators for the terms of a $K$-algebra $A$ and let\n$P \\subseteq K[X^*]$ be such that the natural morphism $\\theta:K[X^*] \\to A$\ninduces an isomorphism $K[X^*]\/\\langle P \\rangle \\to A$.\nLet $Q \\subseteq K[X^*]$ and define $Q':=\\theta Q$.\nDefine $F:=(\\dashv\\!Q, P)$ where $\\dashv\\!Q:=\\{\\dashv\\!q:q\\in Q\\}$.\nThen there is a bijection of sets\n$$\n\\frac{ K[\\dashv \\! X^*] }{ \\stackrel{*}{\\leftrightarrow}_F }\n\\cong \\frac{A}{ \\langle Q' \\rangle^r }$$\n\\end{thm}\n\\begin{proof}\nDefine\n$\\phi:K[\\dashv\\!X^*]\/\\stackrel{*}{\\leftrightarrow}_F \\; \\to A\/ \\langle Q' \\rangle^r$\nby\n$\\phi([f]_F):=[\\theta^\\dashv(f)]_{Q'}$.\nThe verification that $\\phi$ is a well-defined bijection on the congruence\nclasses is similar to that detailed in the proof of Theorem 3.4.\n\\end{proof}\n\n\n\n\n\n\\section{The Noncommutative Buchberger Algorithm for One-sided Ideals}\n\nRecall that $F=(F_T,F_P)$ is a mixed set of polynomials\n$F_T \\subseteq K[\\dashv\\!X^\\dagger]$ and\n$F_P \\subseteq K[X^\\dagger]$.\nThe definition of reduction of a tagged polynomial\n$f$ requires that a tagged term $\\dashv \\! m$ of $f$ contains some\nmultiple of a leading term from the polynomials $f_i$ of $F$. 
\nThis definition of reduction will\nallow the application of the standard noncommutative\nBuchberger algorithm to $F$ to attempt to complete\n$\\to_F$.\n\n\\begin{Def}[Gr\\\"obner basis of mixed polynomials]\nA set $F=(F_T,F_P)$ where $F_T \\subseteq K[\\dashv\\!X^\\dagger]$ and\n$F_P \\subseteq K[X^\\dagger]$ is a \\textbf{Gr\\\"obner basis} on\n$K[\\dashv\\!X^\\dagger]$ with respect to $>$ if $\\to_F$ is complete.\n\\end{Def}\n\n\n\\begin{lem}[Noetherian property]\nLet $F=(F_T,F_P)$ where $F_T \\subseteq K[\\dashv \\! X^\\dagger]$ and\n$F_P \\subseteq K[X^\\dagger]$.\nLet $>$ be a semigroup well-ordering on $X^\\dagger$.\nThen the reduction relation $\\to_{F}$ is Noetherian on $K[\\dashv \\!\nX^\\dagger]$.\n\\end{lem}\n\n\\begin{proof}\nAccording to the definition, each reduction step replaces one monomial\nwith monomials which are strictly smaller with respect to $>$\n(since $>$ is a term order on $X^\\dagger$).\nThe existence of an infinite sequence of reductions\n$f_1 \\to_{F} f_2 \\to_{F} \\cdots$ of polynomials\n$f_1,f_2, \\ldots \\in K[\\dashv \\! X^\\dagger]$ would therefore imply the\nexistence of an infinite descending sequence of terms with respect to $>$,\ncontradicting the well-ordering property.\nTherefore $\\to_{F}$ is Noetherian.\n\\end{proof}\n\n\\begin{Def}[Matches and S-polynomials of tagged and non-tagged\npolynomials]\n\\mbox{ }\\\\\nLet $F=(F_T,F_P)$ where\n$F_T \\subseteq K[\\dashv\\!X^\\dagger]$ and\n$F_P \\subseteq K[X^\\dagger]$.\nA pair of polynomials $f_1,f_2 \\in F$\nhas a \\textbf{match} if their\nleading terms $m_1,m_2$ overlap. If a pair of polynomials has a match then\nan \\textbf{S-polynomial} is defined. 
There are five possible cases:\n\\begin{center}\n\\begin{tabular}{|l|ll|l|l|}\n\\hline\n & & match & S-polynomial & \\\\\n\\hline\nboth $f_1$ and $f_2$ in $F_T$ & (i) & $m_1 v = m_2$ & $f_1v-f_2$ & where $v\\in X^*$ \\\\\n\\hline\n$f_1$ in $F_T$ and $f_2$ in $F_P$ & (ii) & $m_1 v = u m_2$ & $f_1v-uf_2$ & \\\\\n& (iii) & $m_1 = u m_2 v$ & $f_1-uf_2v$ & where $u \\in \\dashv\\!X^*\\cup\\{i\\!d\\}, v \\in X^*$ \\\\\n\\hline\nboth $f_1$ and $f_2$ in $F_P$ & (iv) & $u m_1 = m_2 v$ & $uf_1-f_2v$ & \\\\\n& (v) & $m_1 = u m_2 v$ & $f_1-uf_2v$ & where $u,v \\in X^*$\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\nA match is said to \\textbf{resolve} if the resulting S-polynomial can be\nreduced to zero by $F$.\n\\end{Def}\n\n\n\\begin{rem}\n{\\em\nIf a match of any of the types above occurs between $f_1$ and $f_2$\nthen the match may be represented\n$u_1 m_1 v_1 =u_2 m_2 v_2$, where $u_1,u_2,v_1,v_2 \\in \\dashv\\!X^* \\cup X^*$.\nA match of $f_1$ and $f_2$ may occur when either, neither, or both of\n$f_1$ and $f_2$ are tagged.\nHowever, if one or both has a tag, the tag forms part of the match and\nthe resulting S-polynomial will be tagged.}\n\\end{rem}\n\nThe following lemma is proved in the same way as in the standard\ncommutative non-tagged situation as described in \\cite{TAT}.\n\n\\begin{lem}\n\\label{zero}\nLet $F=(F_T,F_P)$ where $F_T \\subseteq K[\\dashv \\! X^\\dagger]$\nand $F_P \\subseteq K[X^\\dagger]$.\nLet $g_1, g_2 \\in K[\\dashv\\!X^\\dagger]$ where $g_1-g_2 \\stackrel{*}{\\to}_{F} 0$. 
\nThen there exists a tagged polynomial $h \\in K[\\dashv\\!X^\\dagger]$ such\nthat \n$g_1 \\stackrel{*}{\\to}_{F} h$ and\n$g_2 \\stackrel{*}{\\to}_{F} h$.\n\\end{lem}\n\n\\begin{proof} \nThe {\\em length} of a reduction sequence is \ndefined to be the number of one-step reductions of which it is made up.\nThis proof is by induction on the length of the reduction sequence\n$g_1-g_2 \\stackrel{*}{\\to}_{F} 0$.\\\\\n\nFor the basis of induction \nsuppose the length of the reduction sequence is zero.\nThen $g_1-g_2=0$ so $g_1=g_2$.\\\\\n\nFor the induction step, assume that if $g_1'-g_2' \\stackrel{*}{\\to}_{F}\n0$\nis a reduction sequence of length $n$ then there exists \n$h \\in K[\\dashv\\!X^\\dagger]$ such that \n$g_1' \\stackrel{*}{\\to}_{F} h$ and\n$g_2' \\stackrel{*}{\\to}_{F} h$.\\\\\n\nSuppose $g_1-g_2 \\to_{f_i} g \\stackrel{*}{\\to}_{F} 0$ where\n$g \\stackrel{*}{\\to}_{F} 0$ is a reduction sequence of length $n$.\n\\\\\n\nLet $t \\in {\\dashv\\!X^\\dagger}$ be the tagged term in $g_1-g_2$ to\nwhich the reduction by $f_i$ is applied.\nLet $u \\in \\dashv\\!X^* \\cup \\{i\\!d\\}$, $v \\in X^*$ such that \n$t=u \\mathtt{LT}(f_i) v$, and let $k_1,k_2$ be the coefficients\nof $t$ in $g_1,g_2$ respectively.\nNow $k_1-k_2 \\not= 0$ since it is the coefficient of $t$ in $g_1-g_2$.\\\\\n\nDepending on whether $k_1$ and $k_2$ are zero or not we have the\nfollowing\nzero- or one-step reductions:\n$$g_1 \\stackrel{=}{\\to}_{f_i} g_1-k_1uf_iv, \\quad\ng_2 \\stackrel{=}{\\to}_{f_i} g_2-k_2uf_iv.$$\n\nSince $g=(g_1-k_1uf_iv)-(g_2-k_2uf_iv)$ and $g \\stackrel{*}{\\to} 0$ in\n$n$ \nsteps, by the induction hypothesis there exists $h \\in\nK[\\dashv\\!X^\\dagger]$\nsuch that $g_1 - k_1uf_iv \\stackrel{*}{\\to}_{F} h$ and \n$g_2 - k_2uf_iv \\stackrel{*}{\\to}_{F} h$. Hence $g_1 \\stackrel{*}{\\to}_F\nh$\nand $g_2 \\stackrel{*}{\\to}_F h$.\n\\end{proof}\n\n\\begin{thm}[Test for confluence]\nThe reduction relation $\\to_{F}$ generated by $F$ \nis complete on \n$K[\\dashv \\! 
X^\\dagger]$ if and only\nif all matches of $F$ resolve.\n\\end{thm}\n\n\\begin{proof}\nIn Lemma 4.2 we proved that $\\to_{F}$ is Noetherian and therefore,\nby Newman's Lemma for reduction relations on sets, we need only to prove\nthat\n$\\to_{F}$ is locally confluent.\\\\\n \n Let $f,g_1,g_2 \\in K[\\dashv\\!X^\\dagger]$ such that \n$f \\to_{F} g_1$ and \n$f \\to_{F} g_2$.\n Then \n$g_1=f-k_1u_1f_1v_1$ and\n$g_2=f-k_2u_2f_2v_2$ for some $f_1,f_2 \\in F$, $k_1,k_2 \\in K$,\n$u_1,u_2,v_1,v_2 \\in \\dashv\\!X^* \\cup X^*$.\n Let $m_1:=\\mathtt{LT}(f_1)$ and $m_2:=\\mathtt{LT}(f_2)$.\\\\\n \n If the reductions do not overlap on $f$, i.e. \n$u_1m_1v_1 \\not= u_2m_2v_2$ then it is immediate that \n$g_1 \\to_{F} h$ and \n$g_2 \\to_{F} h$ where \n$h = f - k_1u_1f_1v_1 - k_2u_2f_2v_2$.\\\\\n\nOtherwise $u_1m_1v_1=u_2m_2v_2$. \nIn this case $m_1$ and $m_2$ may or may not coincide.\\\\\n\nIf they do not coincide, i.e. if there exists $w \\in X^*$ such that\n$u_1m_1v_1=u_1m_1wm_2v_2$ or $u_1m_1v_1=u_2m_2wm_1v_1$ then again\n$g_1 \\stackrel{*}{\\to}_{F} h$ and \n$g_2 \\stackrel{*}{\\to}_{F} h$ where \n$h = f - u_1f_1wm_2v_2 - u_1m_1wf_2v_2$ or \n$h = f - u_2m_2wf_1v_1 - u_2f_2wm_1v_1$ respectively.\\\\\n\nIf the leading terms $m_1$ and $m_2$ do coincide then \n$u_1m_1v_1=u_2m_2v_2$ represents a multiple of a match between $f_1$ and\n$f_2$,\ni.e. there exist $u_1',u_2',v_1',v_2',w,z \\in \\dashv\\!X^* \\cup X^*$, \nsuch that $u_1=wu_1'$, $v_1=v_1'z$, $u_2=wu_2'$, $v_2=v_2'z$ and\n$u_1'm_1v_1'=u_2'm_2v_2'$ represents a match between $f_1$ and $f_2$.\nIn this case $u_1'f_1v_1'-u_2'f_2v_2' \\stackrel{*}{\\to}_{F} 0$ by\nassumption, and therefore \n$wu_1'f_1v_1'z-wu_2'f_2v_2'z \n = u_1m_1v_1-u_2m_2v_2 \\stackrel{*}{\\to}_{F} 0$. By Lemma 4.5 this\n implies that there exists $h \\in K[\\dashv \\! X^\\dagger]$ such that\n$g_1 \\stackrel{*}{\\to}_{F} h$ and \n$g_2 \\stackrel{*}{\\to}_{F} h$.\\\\\n \n\nThe converse of the above is easily checked. \nSuppose that $\\to_{F}$ is confluent. 
Then any\nS-polynomial arising from a match between polynomials is the result of\nreducing one term in two different ways, i.e.\n$f \\to_{F} g_1$ and\n$f \\to_{F} g_2$ for some $f,g_1,g_2\\in K[\\dashv\\!X^\\dagger]$.\nThe S-polynomial is\nequal to\n$g_1- g_2$. The relation $\\to_{F}$ is in particular locally confluent and so\nthere\nexists $h \\in K[\\dashv \\! X^\\dagger]$ such that\n$g_1 \\stackrel{*}{\\to}_{F} h$ and\n$g_2 \\stackrel{*}{\\to}_{F} h$. Therefore\n$g_1 - g_2 \\stackrel{*}{\\to} h - h = 0$ as required.\n\\end{proof}\n\n\nWe may now apply the noncommutative version of Buchberger's algorithm\n(as described in \\cite{TMora2}) to\nattempt to complete a mixed set of tagged and non-tagged polynomials.\nTo justify steps 4 and 5 of the algorithm we observe the following two\ntechnical lemmas.\n\n\\begin{lem}[Addition of S-polynomials]\nLet $F=(F_T,F_P)$. If $f$ is an S-polynomial resulting from a match of $F$,\nthen the congruences $\\stackrel{*}{\\leftrightarrow}_F$ and\n$\\stackrel{*}{\\leftrightarrow}_{F\\cup\\{f\\}}$ coincide.\n\\end{lem}\n\\begin{proof}\nThe result is proved by showing that, in each of the five cases, an\nS-polynomial $f$ resulting from a match of polynomials $f_1,f_2 \\in F$\ncan be written in the form $u_1f_1v_1-u_2f_2v_2$ and therefore\n$f \\stackrel{*}{\\leftrightarrow}_F 0$.\n\\end{proof}\n\n\\begin{lem}[Elimination of redundancies]\nLet $F=(F_T,F_P)$. 
If $f\\in F$ is such that $f \\stackrel{*}{\\to}_{F \\setminus\\{f\\}} 0$ then\nthe relations $\\stackrel{*}{\\to}_F$ and\n$\\stackrel{*}{\\to}_{F\\setminus\\{f\\}}$ coincide.\n\\end{lem}\n\\begin{proof}\nThe result is immediate, since whenever $g \\to_{\\{f\\}} h$ we have\n$g = h+ kufv \\stackrel{*}{\\to}_{F\\setminus\\{f\\}} h$ for some $k \\in K$ and\n$ufv \\in \\dashv\\!X^\\dagger$.\n\\end{proof}\n\n\\begin{alg}[Noncommutative Buchberger Algorithm with tags]\n\nGiven a set of tagged and non-tagged polynomials the algorithm\nattempts to complete the set with respect to a given ordering\nso that the reduction relation generated is complete.\n\\begin{enumerate}\n\\item\n(Input:) A mixed set of tagged and non-tagged polynomials\n$F=(F_T,F_P)$ where $F_T \\subseteq K[\\dashv\\!X^\\dagger]$ and\n$F_P \\subseteq K[X^\\dagger]$.\n\\item\n(Initialise:)\nPut $\\mathtt{OLD} := F$ and $\\mathtt{SPOL}:=\\emptyset$.\n\\item\n(Search for matches:)\nIf the leading terms of any of the polynomials overlap then calculate the\nresulting S-polynomial and attempt to reduce it using $\\to_F$.\nRecord all non-zero reduced S-polynomials in the list $\\mathtt{SPOL}$.\n\\item\n(Add unresolved S-polynomials:)\nWhen all matches have been considered define $\\mathtt{NEW:=OLD \\cup\nSPOL}$.\n\\item\n(Eliminate redundancies:)\nPass through $\\mathtt{NEW}$ removing each polynomial in turn and\nreducing it with respect to the other polynomials in $\\mathtt{NEW}$.\nIf a polynomial reduces to zero, delete it from\n$\\mathtt{NEW}$. Otherwise replace each with its reduced form.\n\\item\n(Loop:)\nWhilst $\\mathtt{OLD} \\not= \\mathtt{NEW}$ set $\\mathtt{OLD:=NEW}$,\n$\\mathtt{SPOL}:=\\emptyset$ and return to step 3.\n\\item\n(Output:)\nIf the loop terminates, a set $F:=\\mathtt{NEW}$ of polynomials such that\n$\\to_F$ is a complete reduction relation on $K[\\dashv\\!X^\\dagger]$.\n\\end{enumerate}\n\\end{alg}\n\n\\begin{rem}[Left Ideals]\n\\emph{Placing tags to the right of polynomials rather than the left,\ni.e. 
working in $K[X^\\dagger\\vdash]$ or $K[X^*\\vdash]$, we can compute left\nideals by similar arguments. The tags act as a block to multiplication:\nto calculate left ideals, one blocks the right multiplication with a tag on\nthe right; to calculate right ideals, one blocks the left multiplication\nwith a tag on the left. It is natural that two-sided ideals have no tags,\nsince both multiplications are defined.}\n\\end{rem}\n\n\n\\begin{rem}[Implementation]\n\\emph{The use of the free monoid $(X \\cup \\{ \\dashv \\})^*$ is possible\nin Definition 3.2, i.e. $f \\to_F f-kuf_iv$ if\n$u\\mathtt{LT}(f_i)v$ occurs in $f$ with coefficient $k$ for some\n$u,v \\in (X \\cup \\{ \\dashv \\})^*$.\nIf a match of any of the five types described above occurs between\n$f_1$ and $f_2$ then there exist\n$u_1,u_2,v_1,v_2 \\in (X \\cup \\{ \\dashv \\})^*$ such that\n$u_1 m_1 v_1 =u_2 m_2 v_2$ (the converse is not true).\nMeaningless monomials\nsuch as $\\dashv\\!xx\\!\\dashv\\dashv\\!x$ do not arise as a result of\nany procedure we describe, including Algorithm 4.9.\nTherefore there is no problem with the\ncomputations taking place inside the free $K$-algebra\n$K[(X \\cup\\{\\dashv\\})^*]$.\nThis is useful computationally as it allows us to use a standard\nnoncommutative Gr\\\"obner basis program as an implementation of the procedures.\nIn other words, this widens the scope of a noncommutative Gr\\\"obner basis\nprogram without modifying it: the program can now attempt to compute bases\nfor one-sided ideals in finitely presented algebras.}\n\\end{rem}\n\n\n\\section{Application to Green's Relations}\n\nThe standard way of expressing the structure of an (abstract) semigroup\nis in terms of Green's relations. The relations enable the expression\nof the local structure of the semigroup in terms of groups with certain\nactions on them. 
Eggbox diagrams depict the partitions of a semigroup into its\n$L$-classes, $R$-classes, $D$-classes and $H$-classes as defined by\nGreen's relations.\nWe can sometimes determine the classes by using Gr\\\"obner bases applied\ndirectly to the presentation. The\nexamples show that there is also the possibility of dealing with\ninfinite semigroups having infinitely many $H$-classes, $L$-classes or\n$R$-classes.\nFirst we recall some definitions \\cite{Howie}.\\\\\n\nA nonempty subset $A$ of a\nsemigroup $S$ is a {\\em right ideal} of $S$ if $AS \\subseteq A$, where\n$AS:=\\{as:a \\in A, s \\in S\\}$.\nIt is a {\\em left ideal} of $S$ if $SA \\subseteq A$.\nIf $x$ is an element of $S$ then the smallest right ideal of $S$\ncontaining $x$ is $xS \\cup \\{x\\}$; we denote this by $\\langle x\\rangle^r$\nand call it the {\\em right ideal generated by $x$}. Similarly the\n{\\em left ideal generated by $x$} is $Sx \\cup \\{x\\}$ and is denoted\n$\\langle x \\rangle^l$.\\\\\n\n\\textbf{Green's Relations}\\\\\nLet $S$ be a semigroup and let $s$ and $t$ be elements of $S$.\nWe say that $s$ and $t$ are {\\em L-related} if the left ideal\ngenerated by $s$ in $S$ is equal to the left ideal generated by $t$:\n$$s \\sim_L t \\Leftrightarrow \\langle s\\rangle^l =\\langle t\\rangle^l .$$\nSimilarly they are {\\em R-related} if the right ideals are the same:\n$$s \\sim_R t \\Leftrightarrow \\langle s\\rangle^r =\\langle t\\rangle^r.$$\n\nThe $L$-relation is a right congruence on $S$ and the $R$-relation is a\nleft congruence on $S$. 
(The right action of $S$ on itself is preserved by the mapping to the\n$L$-classes, so $[x]_{\\sim_L}\\cdot y:=[xy]_{\\sim_L}$ is well defined;\nsimilarly for the left action and $R$-classes.)\nThe elements $s$ and $t$ are said to be {\\em H-related} if they are\n\\emph{both} $L$-related \\emph{and} $R$-related, and are\n{\\em D-related} if there exists $u \\in S$ with $s$ $L$-related to $u$\nand $u$ $R$-related to $t$; the $D$-relation is the smallest equivalence\nrelation containing both the $L$-relation and the $R$-relation.\\\\\n\nTo determine whether $s$ and $t$ are $R$ (or $L$)-related\nwe can compute the appropriate Gr\\\"obner bases and compare them.\nFirst let $K$ be (any) field.\nLet $S$ have presentation $sgp\\langle X|Rel\\rangle$.\nLet $P$ be a Gr\\\"obner basis for $K[S]$ (so that $K[X^\\dagger]\/\\langle P\n\\rangle\\cong K[S]$).\nWe add the polynomial $\\dashv \\! s$ to the Gr\\\"obner basis for $K[S]$,\ncompute a Gr\\\"obner basis for the resulting mixed set, and check whether\nit is equal to the basis obtained from $\\dashv \\! t$ in the same way.\n\n\n\\section{Examples}\n\nThroughout the examples we will use the field $\\mathbb{Q}$ and the\nstandard length-lexicographical ordering $>$.\n\n\\begin{example}\n\\emph{The first example is a two element semigroup with presentation}\n$sgp \\langle x |x^3=x^2 \\rangle.$\\\\\n\n\\emph{The Gr\\\"obner basis for the right ideal $\\langle x\\rangle^r$ is\n$\\{\\dashv\\!x, x^3-x^2\\}$ and the Gr\\\"obner basis for $\\langle x^2 \\rangle^r$\nis $\\{\\dashv\\!x^2, x^3-x^2\\}$. The Gr\\\"obner bases are different and therefore\n$x$ and $x^2$ are not $R$-related.\nSimilarly, the Gr\\\"obner basis for the left ideal $\\langle x\\rangle^l$ is\n$\\{x\\!\\vdash, x^3-x^2\\}$ and the Gr\\\"obner basis for $\\langle x^2\\rangle^l$ is\n$\\{x^2\\!\\vdash, x^3-x^2\\}$ so the elements are not $L$-related. 
Therefore\nthis semigroup has two $H$-classes.} \n\\end{example}\n\n\n\\begin{example}\n\\emph{The following example is for the \nfinite monoid $Sym(2)$ with semigroup \npresentation}\n\n\\vspace{-.4cm}\n$$\nsgp\\langle e,s| e^2=e, s^3=s, s^2e=e, es^2=e, sese=ese, eses=ese\\rangle.\n$$ \n\\vspace{-.5cm}\n\n\\emph{The Gr\\\"obner basis equivalent to the rewrite system is}\n\n\\vspace{-.4cm}\n$$\nF:=\\{e^2-e, \\ s^3-s, \\ s^2e-e, \\ es^2-e, \\ eses-ese, \\ sese-ese\\}.\n$$\n\\vspace{-.5cm}\n\n\\emph{The elements are $\\{ e, s, es, se, s^2, ese, ses\\}$, \nwhere $s^2$ is the identity element.\nWe calculate Gr\\\"obner bases for \nthe right and left ideals for each of\nthe elements. The results are displayed \nin the table below. In detail, a Gr\\\"obner\nbasis for $\\langle ses \\rangle^r$ in $K[S]$ in \n$K[\\dashv\\! X^\\dagger]$ is calculated by \nadding $\\dashv\\! ses$ to the set of polynomials \n$F$.\nA match $s$ occurs on $\\dashv\\! sesse$ \nbetween $sse-e$ and $\\dashv\\! ses$. \nThis results in the S-polynomial\n$\\dashv \\! se(e)-(0)se$ \nwhich reduces to $\\dashv \\! se$. \nAnother match of $es$ occurs on $\\dashv \\! seses$ \nbetween $eses-ese$ and $\\dashv \\! ses$. \nThis results in the \nS-polynomial $\\dashv \\! s(ese)-(0)es$ \nwhich reduces to $\\dashv \\! ese$.\nAll further matches result in \nS-polynomials which reduce to zero. \nThe polynomials we add to $F$ \nto obtain a Gr\\\"obner basis are \n$\\{ \\dashv \\! se, \\dashv \\! ese \\}$ \n(note that $\\dashv \\! ses$ is a \nmultiple of $\\dashv \\! se$ so it is \nnot required in the Gr\\\"obner basis).\nThe table lists the polynomials which, \ntogether with $F$, will give the \nGr\\\"obner bases for the right and left \nideals generated by single elements.}\n\n\\begin{center}\n\\begin{tabular}{l|l|l}\nelement & right ideal & left ideal\\\\\n\\hline\n\n$e$ & $\\dashv\\! e $ & $e \\!\\vdash\n$\\\\\n$s$ & $\\dashv\\! e, \\dashv\\! s $ & $e \\!\\vdash, s \\!\\vdash\n$\\\\\n$es$ & $\\dashv\\! 
e $ & $es \\!\\vdash , ese \\!\\vdash\n$\\\\\n$se$ & $\\dashv\\! se, \\dashv\\! ese $ & $e \\!\\vdash\n$\\\\\n$ss$ & $\\dashv\\! e, \\dashv\\! s $ & $e \\!\\vdash, s \\!\\vdash\n$\\\\\n$ese$ & $\\dashv\\! ese $ & $ese \\!\\vdash\n$\\\\\n$ses$ & $\\dashv\\! se, \\dashv\\! ese $ & $es \\!\\vdash , ese \\!\\vdash\n$\n\\end{tabular}\n\\end{center}\n\n\\emph{Two elements whose right ideals are generated by the same\nGr\\\"obner basis have \nthe same right ideal (similarly left), and so\nit is immediately deducible that\\\\\nthe $R$-classes are $\\{s,s^2\\},\\{e,es\\},\\{se,ses\\}$ and $\\{ese\\}$, \nthe $L$-classes are $\\{s,s^2\\},\\{e,se\\},\\{es,ses\\}$ and $\\{ese\\}$,\nthe $H$-classes are $\\{s,s^2\\},\\{e\\},\\{se\\},\\{es\\},\\{ses\\}$ and\n$\\{ese\\}$ and\nthe $D$-classes are $\\{s,s^2\\},\\{e,es,se,ses\\}$ and $\\{ese\\}$.\\\\\nThe eggbox diagram is as follows\nwhere $L$-classes are columns, $R$-classes are rows, \n$D$-classes are diagonal boxes\nand $H$-classes are the small boxes:}\n\\\\\n\n\\setlength{\\unitlength}{0.7cm}\n\\begin{picture}(10,4)\n\\put(9,3){\\line(0,0){1}}\n\\put(10,1){\\line(0,0){3}}\n\\put(11,1){\\line(0,0){2}}\n\\put(12,0){\\line(0,0){3}}\n\\put(13,0){\\line(0,0){1}}\n\\put(9,4){\\line(1,0){1}}\n\\put(9,3){\\line(1,0){3}}\n\\put(10,2){\\line(1,0){2}}\n\\put(10,1){\\line(1,0){3}}\n\\put(12,0){\\line(1,0){1}}\n\\put(9.1,3.4){$s,\\!s^2$}\n\\put(10.3,1.4){$se$}\n\\put(10.4,2.4){$e$}\n\\put(11.2,1.4){$ses$}\n\\put(11.3,2.4){$es$}\n\\put(12.2,0.4){$ese$}\n\\end{picture}\n\n\n\\emph{This example could have been calculated by enumerating the\nelements of each of the fourteen ideals -- a time consuming procedure which\ncalculates details which we do not require.}\n\\end{example}\n\n\n\\begin{example}\n\\emph{The next example is the Bicyclic monoid which is infinite.\nWe use the semigroup presentation} \\;\n$sgp \\langle p,q, i | p i=p, q i= q, i p=p, iq=q, pq=i \\rangle.$\\\\\n\n\n\\emph{The equivalent Gr\\\"obner basis, defined on 
$K[\\{p,q,i\\}^\\dagger]$, is\n$\\{p i-p, q i- q, i p-p, iq-q, pq-i\\}$.\nWe begin the table as before:}\n\n\\begin{center}\n\\begin{tabular}{l|l|l}\nelement & right ideal & left ideal\\\\\n\\hline\n$i\\!d$ & $\\dashv\\! i$ & $i\\! \\vdash $\\\\\n$p$ & $\\dashv\\! i$ & $p\\! \\vdash $\\\\\n$q$ & $\\dashv\\! q$ & $q\\! \\vdash $\\\\\n$p^2$ & $\\dashv\\! i$ & $p^2\\! \\vdash $\\\\\n$qp$ & $\\dashv\\! q$ & $p\\! \\vdash $\\\\\n$q^2$ & $\\dashv\\! q^2$ & $i\\! \\vdash $\\\\\n$\\cdots$ & $\\cdots$ & $\\cdots$\\\\\n$q^np^m$ & $\\dashv\\! q^n$ & $p^m\\! \\vdash $\\\\\n\\end{tabular}\n\\end{center}\n\\emph{It can be seen that there are infinitely many $L$-classes\nand infinitely many $R$-classes.\nRepresentatives for the $L$-classes are the elements of $\\{p\\}^*$,\nsince $\\langle q^np^m\\! \\vdash \\rangle=\\langle p^m\\! \\vdash \\rangle$\n(using the S-polynomial resulting from the match\n$p^n(q^np^m)\\! \\vdash = (p^nq^n)p^m\\! \\vdash $, which reduces to\n$p^m\\! \\vdash $).\nSimilarly, the elements of $\\{q\\}^*$ are representatives for the\n$R$-classes, since $\\langle \\dashv\\!q^np^m \\rangle=\\langle \\dashv\\!q^n\n\\rangle$.\nAll elements are $D$-related and no two distinct elements are $H$-related.\nSo the eggbox diagram would be an infinitely large box of cells with one\nelement in each cell. This means that the monoid is} bisimple.\n\\end{example}\n\n\n\\begin{example}\n\\emph{Now consider the Polycyclic monoid $P_n$ which has monoid\npresentation}\n\n\\vspace{-.4cm}\n$$\nmon\\langle x_1,\\ldots,x_n,y_1,\\ldots,y_n,o, i\\!d\n\\; | \\; ox_i \\! = \\! x_io \\! = \\! oy_i \\! = \\! y_io \\! = \\! o, x_iy_i \\!\n= \\! i\\!d,\nx_iy_j \\! = \\! o \\text{\\emph{ for }} i,j \\! = \\! 1,\\ldots,n,\ni\\not \\! = \\! j\\rangle\n$$\n\\vspace{-.4cm}\n\n\\emph{and therefore the Gr\\\"obner basis for $K[P_n]$, where $K$ is a\nfield, is}\n\n\\vspace{-.2cm}\n$$\n\\{x_iy_i-i\\!d, x_iy_j-0 \\text{ for } i,j=1,\\ldots,n,\\, i \\not= j \\}.\n$$\n\\vspace{-.2cm}\n\n\\emph{Green's relations for the polycyclic monoids are naturally similar to\nthose for the Bicyclic monoid. The $L$-classes are represented by\nsequences of $x_i$'s and the $R$-classes are represented by sequences of\n$y_i$'s. To verify this, let\n$X=x_{i_1}\\cdots x_{i_k}$ be a\ngeneral word in the $x_i$'s, and let $Y=y_{j_1}\\cdots y_{j_l}$ be a\ngeneral word in the $y_j$'s.\nThen we can show that $YX \\sim_L X$.\nTo do this consider $\\langle YX\\! \\vdash \\rangle$ and $\\langle X\\!\n\\vdash \\rangle$.\nTo find a Gr\\\"obner basis for $\\langle YX\\! \\vdash \\rangle$ consider\nthe match\n$x_{j_l} \\cdots x_{j_1}y_{j_1} \\cdots y_{j_l}x_{i_1} \\cdots\nx_{i_k}\\! \\vdash $.\nThis results in the S-polynomial\n$(i\\!d)x_{i_1} \\cdots x_{i_k}\\! \\vdash - x_{j_l} \\cdots x_{j_1}(0)$\nwhich simplifies to $x_{i_1} \\cdots x_{i_k}\\! \\vdash = X\\! \\vdash $.\nThis is a\nGr\\\"obner basis for $\\langle YX\\! \\vdash \\rangle$, and so $\\langle YX\\!\n\\vdash \\rangle\n=\\langle\nX\\! \\vdash \\rangle$. Similarly $\\langle \\dashv \\! YX \\rangle= \\langle\n\\dashv \\! Y \\rangle$, so\n$YX \\sim_R Y$ for any $Y = y_{j_1}\\cdots y_{j_l}$, $X = x_{i_1}\\cdots\nx_{i_k}$.}\n\\\\\n\n\\emph{The eggbox diagram is drawn below. Here the $L$-classes are the\nrows and the $R$-classes the columns, $H$-classes are the cells, and\nthere is just one\n$D$-class other than the one containing the zero. 
This proves that the\npolycyclic monoids are bisimple.\nThe diagram is more conventional than the\nprevious one, as classes are listed but not individual elements, instead\nthe\nnumber of elements in each cell is indicated.}\n\n\n\\setlength{\\unitlength}{0.8cm}\n\\begin{picture}(16,12)\n\\put(5,10){\\line(0,0){1}}\n\\put(6,1){\\line(0,0){1}}\n\\put(6,4){\\line(0,0){2}}\n\\put(6,7){\\line(0,0){4}\n\\put(7,1){\\line(0,0){1}}\n\\put(7,4){\\line(0,0){2}}\n\\put(7,7){\\line(0,0){3}}\n\\put(8,1){\\line(0,0){1}}\n\\put(8,4){\\line(0,0){2}}\n\\put(8,7){\\line(0,0){3}}\n\\put(9,1){\\line(0,0){1}}\n\\put(9,4){\\line(0,0){2}}\n\\put(9,7){\\line(0,0){3}}\n\\put(10,1){\\line(0,0){1}}\n\\put(10,4){\\line(0,0){2}}\n\\put(10,7){\\line(0,0){3}}\n\\put(11,1){\\line(0,0){1}}\n\\put(11,4){\\line(0,0){2}}\n\\put(11,7){\\line(0,0){3}}\n\\put(12,1){\\line(0,0){1}}\n\\put(12,4){\\line(0,0){2}}\n\\put(12,7){\\line(0,0){3}}\n\\put(14,1){\\line(0,0){1}}\n\\put(14,4){\\line(0,0){2}}\n\\put(14,7){\\line(0,0){3}}\n\\put(15,1){\\line(0,0){1}}\n\\put(15,4){\\line(0,0){2}}\n\\put(15,7){\\line(0,0){3}}\n\\put(5,10){\\line(1,0){4}\n\\put(5,11){\\line(1,0){1}}\n\\put(6,9){\\line(1,0){3}}\n\\put(6,8){\\line(1,0){3}}\n\\put(6,7){\\line(1,0){3}}\n\\put(6,6){\\line(1,0){3}}\n\\put(6,5){\\line(1,0){3}}\n\\put(6,4){\\line(1,0){3}}\n\\put(6,2){\\line(1,0){3}}\n\\put(6,1){\\line(1,0){3}}\n\\put(10,9){\\line(1,0){2}}\n\\put(10,8){\\line(1,0){2}}\n\\put(10,7){\\line(1,0){2}}\n\\put(10,6){\\line(1,0){2}}\n\\put(10,5){\\line(1,0){2}}\n\\put(10,4){\\line(1,0){2}}\n\\put(10,2){\\line(1,0){2}}\n\\put(10,1){\\line(1,0){2}}\n\\put(10,10){\\line(1,0){2}}\n\\put(14,9){\\line(1,0){1}}\n\\put(14,8){\\line(1,0){1}}\n\\put(14,7){\\line(1,0){1}}\n\\put(14,6){\\line(1,0){1}}\n\\put(14,5){\\line(1,0){1}}\n\\put(14,4){\\line(1,0){1}}\n\\put(14,2){\\line(1,0){1}}\n\\put(14,1){\\line(1,0){1}}\n\\put(14,10){\\line(1,0){1}}\n\\put(4.2,1.4){$[X]$}\n\\put(4.2,4.4){$[x_1x_2]$}\n\\put(4.2,5.4){$[{x_1}^2]$}\n\\put(4.2,7.4){$[x_2]$}\n\\
put(4.2,8.4){$[x_1]$}\n\\put(4.2,9.4){$[i\\!d]$} \\put(4.2,10.4){$[0]$} \\put(5.2,11.4){$[0]$}\n\\put(6.2,11.4){$[i\\!d]$}\n\\put(7.2,11.4){$[y_1]$}\n\\put(8.2,11.4){$[y_2]$}\n\\put(10.2,11.4){$[{y_1}^2]$}\n\\put(11.2,11.4){$[y_1y_2]$}\n\\put(14.2,11.4){$[Y]$}\n\\put(6,0.3){\\line(0,1){0.4}}\n\\put(7,0.3){\\line(0,1){0.4}}\n\\put(8,0.3){\\line(0,1){0.4}}\n\\put(9,0.3){\\line(0,1){0.4}}\n\\put(10,0.3){\\line(0,1){0.4}}\n\\put(11,0.3){\\line(0,1){0.4}}\n\\put(12,0.3){\\line(0,1){0.4}}\n\\put(14,0.3){\\line(0,1){0.4}}\n\\put(15,0.3){\\line(0,1){0.4}}\n\\put(6,2.3){\\line(0,1){0.4}}\n\\put(7,2.3){\\line(0,1){0.4}}\n\\put(8,2.3){\\line(0,1){0.4}}\n\\put(9,2.3){\\line(0,1){0.4}}\n\\put(10,2.3){\\line(0,1){0.4}}\n\\put(11,2.3){\\line(0,1){0.4}}\n\\put(12,2.3){\\line(0,1){0.4}}\n\\put(14,2.3){\\line(0,1){0.4}}\n\\put(15,2.3){\\line(0,1){0.4}}\n\\put(6,3.3){\\line(0,1){0.4}}\n\\put(7,3.3){\\line(0,1){0.4}}\n\\put(8,3.3){\\line(0,1){0.4}}\n\\put(9,3.3){\\line(0,1){0.4}}\n\\put(10,3.3){\\line(0,1){0.4}}\n\\put(11,3.3){\\line(0,1){0.4}}\n\\put(12,3.3){\\line(0,1){0.4}}\n\\put(14,3.3){\\line(0,1){0.4}}\n\\put(15,3.3){\\line(0,1){0.4}}\n\\put(6,6.3){\\line(0,1){0.4}}\n\\put(7,6.3){\\line(0,1){0.4}}\n\\put(8,6.3){\\line(0,1){0.4}}\n\\put(9,6.3){\\line(0,1){0.4}}\n\\put(10,6.3){\\line(0,1){0.4}}\n\\put(11,6.3){\\line(0,1){0.4}}\n\\put(12,6.3){\\line(0,1){0.4}}\n\\put(14,6.3){\\line(0,1){0.4}}\n\\put(15,6.3){\\line(0,1){0.4}}\n\\put(9.3,1){\\line(1,0){0.4}}\n\\put(9.3,2){\\line(1,0){0.4}}\n\\put(9.3,4){\\line(1,0){0.4}}\n\\put(9.3,5){\\line(1,0){0.4}}\n\\put(9.3,6){\\line(1,0){0.4}}\n\\put(9.3,7){\\line(1,0){0.4}}\n\\put(9.3,8){\\line(1,0){0.4}}\n\\put(9.3,9){\\line(1,0){0.4}}\n\\put(9.3,10){\\line(1,0){0.4}}\n\\put(12.3,1){\\line(1,0){0.4}}\n\\put(12.3,2){\\line(1,0){0.4}}\n\\put(12.3,4){\\line(1,0){0.4}}\n\\put(12.3,5){\\line(1,0){0.4}}\n\\put(12.3,6){\\line(1,0){0.4}}\n\\put(12.3,7){\\line(1,0){0.4}}\n\\put(12.3,8){\\line(1,0){0.4}}\n\\put(12.3,9){\\line(1,0){0.4}}\n\\put(12.3,10)
{\\line(1,0){0.4}}\n\\put(13.3,1){\\line(1,0){0.4}}\n\\put(13.3,2){\\line(1,0){0.4}}\n\\put(13.3,4){\\line(1,0){0.4}}\n\\put(13.3,5){\\line(1,0){0.4}}\n\\put(13.3,6){\\line(1,0){0.4}}\n\\put(13.3,7){\\line(1,0){0.4}}\n\\put(13.3,8){\\line(1,0){0.4}}\n\\put(13.3,9){\\line(1,0){0.4}}\n\\put(13.3,10){\\line(1,0){0.4}}\n\\put(15.3,1){\\line(1,0){0.4}}\n\\put(15.3,2){\\line(1,0){0.4}}\n\\put(15.3,4){\\line(1,0){0.4}}\n\\put(15.3,5){\\line(1,0){0.4}}\n\\put(15.3,6){\\line(1,0){0.4}}\n\\put(15.3,7){\\line(1,0){0.4}}\n\\put(15.3,8){\\line(1,0){0.4}}\n\\put(15.3,9){\\line(1,0){0.4}}\n\\put(15.3,10){\\line(1,0){0.4}} \\put(5.4,10.4){1}\n\\put(6.4,9.4){1} \\put(6.4,8.4){1} \\put(6.4,7.4){1}\n\\put(6.4,5.4){1} \\put(6.4,4.4){1} \\put(6.4,1.4){1}\n\\put(7.4,9.4){1} \\put(7.4,8.4){1} \\put(7.4,7.4){1}\n\\put(7.4,5.4){1} \\put(7.4,4.4){1} \\put(7.4,1.4){1}\n\\put(8.4,9.4){1} \\put(8.4,8.4){1} \\put(8.4,7.4){1}\n\\put(8.4,5.4){1} \\put(8.4,4.4){1} \\put(8.4,1.4){1}\n\\put(10.4,9.4){1} \\put(10.4,8.4){1} \\put(10.4,7.4){1}\n\\put(10.4,5.4){1} \\put(10.4,4.4){1} \\put(10.4,1.4){1}\n\\put(11.4,9.4){1} \\put(11.4,8.4){1} \\put(11.4,7.4){1}\n\\put(11.4,5.4){1} \\put(11.4,4.4){1} \\put(11.4,1.4){1}\n\\put(14.4,9.4){1} \\put(14.4,8.4){1} \\put(14.4,7.4){1}\n\\put(14.4,5.4){1} \\put(14.4,4.4){1} \\put(14.4,1.4){1}\n\\end{picture}\n\\end{example}\n\n\nThese examples illustrate the fact that Buchberger's algorithm can be used to\ncompute Green's relations for (infinite) semigroups which have finite\ncomplete presentations. 
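To make the rewriting idea concrete, the following is a minimal sketch, not the paper's $\mathsf{GAP3}$ code: it reduces words of the polycyclic monoid $P_2$ to normal form using the complete rewriting system $x_iy_i \rightarrow 1$, $x_iy_j \rightarrow 0$ for $i \neq j$. The single-letter encoding and the helper name `normal_form` are assumptions of this sketch.

```python
# Illustrative sketch (not the paper's GAP3 implementation): words of the
# polycyclic monoid P_2, with x1, x2 encoded as 'a', 'b' and y1, y2 as
# 'A', 'B', are reduced by the complete rewriting system
#   x_i y_i -> 1 (empty word),   x_i y_j -> 0 for i != j.
RULES = {
    "aA": "",    # x1 y1 -> identity
    "bB": "",    # x2 y2 -> identity
    "aB": None,  # x1 y2 -> 0
    "bA": None,  # x2 y1 -> 0
}

def normal_form(word):
    """Apply rules until none matches; None encodes the zero element."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in RULES.items():
            i = word.find(lhs)
            if i >= 0:
                if rhs is None:
                    return None  # the word collapses to 0
                word = word[:i] + rhs + word[i + len(lhs):]
                changed = True
    # Irreducible words have all y-letters before all x-letters.
    return word
```

Since the rewriting system is complete, two words represent the same element exactly when they share a normal form, so equality in the (infinite) monoid is decided by this finite computation; the Green's classes in the diagram above can then be read off from the two halves of the normal form.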
\nPrevious methods for calculating minimal ideals from presentations of\nsemigroups were variations on the classical Todd-Coxeter enumeration\nprocedure \\cite{Campbell95}.\nThe present work is an alternative computational approach\nto that given in \\cite{Linton,Linton2}, which uses the transformation\nrepresentation of a semigroup rather than a presentation.\nAs with \\cite{Linton2}, the methods described in this paper provide for local computations,\nconcerning a single $R$-class, without computing the whole semigroup.\nThe one-sided Gr\\\"obner basis methods have the\nlimitation that a complete rewrite system with respect to the chosen\norder might not be found; on the other hand, they make it possible to calculate the\nstructure of infinite semigroups and do not require the determination of a\ntransformation representation for those semigroups which arise\nnaturally as presentations.\\\\\n\n\nThe calculations in the examples were carried out using a $\\mathsf{GAP3}$\nimplementation of the Gr\\\"obner basis procedures for polynomials in\nnoncommutative variables over $\\mathbb{Q}$, as described in \\cite{TMora}.\nFurther details of this program can be found in \\cite{Annethesis} or obtained from\nthe author by e-mail.\nOther implementations (e.g.\\ OPAL, Bergman) are more powerful; the key point\nof this paper is that such programs can be used\nfor a wider range of problems than has previously been recorded.\\\\\n\n\n\n{\\small\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}