diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzetua" "b/data_all_eng_slimpj/shuffled/split2/finalzzetua" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzetua" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nLet $\\overline{\\cal M}_{g,n}$ be the moduli space\nof genus $g$ stable curves---curves with only nodal singularities and finite automorphism group---with $n$ labeled points disjoint from nodes. Define $\\psi_i=c_1(L_i)\\in H^{2}(\\overline{\\mathcal{M}}_{g,n},\\mathbb{Q}) $ the first Chern class of the line bundle $L_i\\to\\overline{\\mathcal{M}}_{g,n}$ with fibre above $[(C,p_1,\\ldots,p_n)]$ given by $T_{p_i}^*C$. Consider the natural maps given by the forgetful map \n\\begin{equation}\\label{forgetful}\n\\overline{\\cal M}_{g,n+1}\\stackrel{\\pi}{\\longrightarrow}\\overline{\\cal M}_{g,n}\n\\end{equation}\nand the gluing maps \n\\begin{equation}\\label{gluing}\n\\overline{\\cal M}_{g-1,n+2}\\stackrel{\\phi_{\\text{irr}}}{\\longrightarrow}\\overline{\\cal M}_{g,n},\\qquad\\overline{\\cal M}_{h,|I|+1}\\times\\overline{\\cal M}_{g-h,|J|+1}\\stackrel{\\phi_{h,I}}{\\longrightarrow}\\overline{\\cal M}_{g,n},\\qquad I\\sqcup J=\\{1,...,n\\}.\n\\end{equation}\n\nIn this paper we construct cohomology classes $\\Theta_{g,n}\\in H^*(\\overline{\\cal M}_{g,n})$ for $g\\geq 0$, $n\\geq 0$ and $2g-2+n>0$ satisfying the following four properties:\n\\begin{enumerate}[(i)]\n\\setlength{\\itemindent}{20pt}\n\\item $\\Theta_{g,n}\\in H^*(\\overline{\\cal M}_{g,n})$ is of pure degree, \\label{pure}\n\\item $\\phi_{\\text{irr}}^*\\Theta_{g,n}=\\Theta_{g-1,n+2}$,\\quad $\\phi_{h,I}^*\\Theta_{g,n}=\\pi_1^*\\Theta_{h,|I|+1}\\cdot\\pi_2^* \\Theta_{g-h,|J|+1}$, \\label{glue}\n\\item $\\Theta_{g,n+1}=\\psi_{n+1}\\cdot\\pi^*\\Theta_{g,n}$, \\label{forget}\n\\item $\\Theta_{1,1}=3\\psi_1$. \\label{base}\n\\end{enumerate}\nThe properties \\eqref{pure}-\\eqref{base} uniquely define intersection numbers of the classes $\\Theta_{g,n}$ with the classes $\\psi_i$. It is not clear if they uniquely define the classes $\\Theta_{g,n}$ themselves.\n\\begin{remark} \\label{gluestab}\nOne can replace \\eqref{glue} by the equivalent property\n$$\\phi_{\\Gamma}^*\\Theta_{g,n}=\\Theta_{\\Gamma}.$$\nfor any stable graph $\\Gamma$ of genus $g$ and with $n$ external edges. Here\n$$\\phi_{\\Gamma}:\\overline{\\cal M}_{\\Gamma}=\\prod_{v\\in V(\\Gamma)}\\overline{\\cal M}_{g(v),n(v)}\\to\\overline{\\cal M}_{g,n},\\quad \\Theta_{\\Gamma}=\\prod_{v\\in V(\\Gamma)}\\pi_v^*\\Theta_{g(v),n(v)}\\in H^*(\\overline{\\cal M}_{\\Gamma})$$\nwhere $\\pi_v$ is projection onto the factor $\\overline{\\cal M}_{g(v),n(v)}$.\nThis generalises \\eqref{glue} from 1-edge stable graphs where $\\phi_{\\Gamma_{\\text{irr}}}=\\phi_{\\text{irr}}$ and $\\phi_{\\Gamma_{h,I}}=\\phi_{h,I}$. See Section~\\ref{pixrel} for more on stable graphs.\n\\end{remark}\n\\begin{remark}\nThe sequence of classes $\\Theta_{g,n}$ satisfies many properties of a cohomological field theory (CohFT). It is essentially a 1-dimensional CohFT with vanishing genus zero classes, not to be confused with the Hodge class which is trivial in genus zero but does not vanish there. The trivial cohomology class $1\\in H^0(\\overline{\\cal M}_{g,n})$, which is a trivial example of a CohFT, satisfies conditions \\eqref{pure} and \\eqref{glue}. In this case, the forgetful map property \\eqref{forget} is replaced by $\\Theta_{g,n+1}=\\pi^*\\Theta_{g,n}$ and the initial value property \\eqref{base} is replaced by $\\Theta_{1,1}=1$. 
\n\\end{remark}\n\\begin{remark}\nThe existence proof of $\\Theta_{g,n}$---see Section~\\ref{existence}---requires the initial value property \\eqref{base} given by $\\Theta_{1,1}=3\\psi_1$.\nThe existence of $\\Theta_{g,n}$ with \\eqref{base} replaced by $\\Theta_{1,1}=\\lambda\\psi_1$ for general $\\lambda\\in\\mathbb{C}$ is unknown. One can of course replace $\\Theta_{g,n}$ by $\\lambda^{2g-2+n}\\Theta_{g,n}$ but this would change property \\eqref{forget}.\n\\end{remark}\n\n\\begin{theorem} \\label{main}\nThere exists a class $\\Theta_{g,n}$ satisfying \\eqref{pure} - \\eqref{base} and furthermore any such class satisfies the following properties.\n\\begin{enumerate}[(I)]\n\\item \n$\\Theta_{g,n}\\in H^{4g-4+2n}(\\overline{\\cal M}_{g,n})$. \\label{degree}\n\\item $\\Theta_{0,n}=0$ for all $n$ and $\\phi_{\\Gamma}^*\\Theta_{g,n}=0$ for any $\\Gamma$ with a genus 0 vertex. \\label{genus0}\n\\item $\\Theta_{g,n}\\in H^*(\\overline{\\cal M}_{g,n})^{S_n}$, i.e. it is symmetric under the $S_n$ action. \\label{symmetric}\n\\item The intersection numbers\n$\\displaystyle \\int_{\\overline{\\cal M}_{g,n}}\\Theta_{g,n}\\prod_{i=1}^n\\psi_i^{m_i}\\prod_{j=1}^N\\kappa_{\\ell_j}$\nare uniquely determined. \\label{unique}\n\\item \n$\\displaystyle Z^{\\Theta}(\\hbar,t_0,t_1,...)=\\exp\\sum_{g,n,\\vec{k}}\\frac{\\hbar^{g-1}}{n!}\\int_{\\overline{\\cal M}_{g,n}}\\Theta_{g,n}\\cdot\\prod_{j=1}^n\\psi_j^{k_j}\\prod t_{k_j}$\nis a tau function of the KdV hierarchy. \\label{thetatau}\n\\end{enumerate}\n\\end{theorem}\n\nThe main content of Theorem~\\ref{main} is the existence of $\\Theta_{g,n}$ which is constructed via the push-forward of a class over the moduli space of spin curves in Section~\\ref{existence}, and the KdV property \\eqref{thetatau} proven in Section~\\ref{sec:kdv}. The non-constructive uniqueness result \\eqref{unique}---which relies on the existence of non-explicit tautological relations---follows from the more general property that the intersection numbers $\\int_{\\overline{\\cal M}_{g,n}}\\Theta_{g,n}\\prod_{i=1}^n\\psi_i^{m_i}$ are uniquely determined by {\\em any} initial value $\\int_{\\overline{\\cal M}_{1,1}}\\Theta_{1,1}\\in\\mathbb{C}$. For the initial value $\\int_{\\overline{\\cal M}_{1,1}}\\Theta_{1,1}=\\frac{1}{8}$, \\eqref{unique} is strengthened by \\eqref{thetatau} which allows one to recursively calculate all intersection numbers $\\int_{\\overline{\\cal M}_{g,n}}\\Theta_{g,n}\\prod_{i=1}^n\\psi_i^{m_i}$ via recursive relations coming out of the KdV hierarchy. The proofs of properties~\\eqref{degree} - \\eqref{unique} are straightforward and presented in Section~\\ref{sec:unique}. \n\nThe proof of \\eqref{thetatau} does not directly use the KdV hierarchy. Instead it identifies the proposed KdV tau function $Z^{\\Theta}$ with a known KdV tau function---the Brezin-Gross-Witten KdV tau function $Z^{\\text{BGW}}$ defined in \\cite{BGrExt,GWiPos}. This identification of $Z^{\\Theta}(\\hbar,t_0,t_1,...)$ with $Z^{\\text{BGW}}(\\hbar,t_0,t_1,...)$ is stated as Theorem~\\ref{tauf} in Section~\\ref{intkdv}. The proof of Theorem~\\ref{tauf} uses a set of tautological relations, known as Pixton's relations, obtained from the moduli space of 3-spin curves and proven in \\cite{PPZRel}. 
Just as tautological relations give topological recursion relations for Gromov-Witten invariants, the intersections of $\\Theta_{g,n}$ with Pixton's relations produce topological recursion relations satisfied by $\\Theta_{g,n}$ that are enough to uniquely determine all intersection numbers $\\int_{\\overline{\\cal M}_{g,n}}\\Theta_{g,n}\\prod_{i=1}^n\\psi_i^{m_i}\\prod_{j=1}^N\\kappa_{\\ell_j}$. The strategy of the proof of \\eqref{thetatau} is to show that the coefficients of $Z^{\\text{BGW}}(\\hbar,t_0,t_1,...)$ satisfy the same relations as the corresponding coefficients of $Z^{\\Theta}(\\hbar,t_0,t_1,...)$, given by $\\int_{\\overline{\\cal M}_{g,n}}\\Theta_{g,n}\\prod_{i=1}^n\\psi_i^{m_i}$. \\\\\n\n\n\n\n\\noindent {\\em Acknowledgements.} I would like to thank Dimitri Zvonkine for his ongoing interest in this work which benefited immensely from many conversations together. I would also like to thank Alessandro Chiodo, Oliver Leigh, Ran Tessler and Ravi Vakil for useful conversations, and the Institut Henri Poincar\\'e where part of this work was carried out.\n\n\n\n\\section{Existence} \\label{existence}\nThe existence of a cohomology class $\\Theta_{g,n}\\in H^*(\\overline{\\cal M}_{g,n})$ satisfying \\eqref{pure} - \\eqref{base} can be proven using the moduli space of stable twisted spin curves $\\overline{\\cal M}_{g,n}^{\\rm spin}$ which consists of pairs $(\\Sigma,\\theta)$ given by a twisted stable curve $\\Sigma$ equipped with an orbifold line bundle $\\theta$ together with an isomorphism $\\theta^{\\otimes 2}\\cong \\omega_{\\Sigma}^{\\text{log}}$. See precise definitions below. We first construct a cohomology class on $\\overline{\\cal M}_{g,n}^{\\rm spin}$ and then push it forward to a cohomology class on $\\overline{\\cal M}_{g,n}$.\n\n\n\nA stable twisted curve, with group $\\mathbb{Z}_2$, is a 1-dimensional orbifold, or stack, $\\mathcal{C}$ such that generic points of $\\mathcal{C}$ have trivial isotropy group and non-trivial orbifold points have isotropy group $\\mathbb{Z}_2$. A stable twisted curve is equipped with a map which forgets the orbifold structure $\\rho:\\mathcal{C}\\to C$ where $C$ is a stable curve known as the coarse curve of $\\mathcal{C}$. We say that $\\mathcal{C}$ is smooth if its coarse curve $C$ is smooth. Each nodal point of $\\mathcal{C}$ (corresponding to a nodal point of $C$) has non-trivial isotropy group and all other points of $\\mathcal{C}$ with non-trivial isotropy group are labeled points of $\\mathcal{C}$. \n\nA line bundle $L$ over $\\mathcal{C}$ is a locally equivariant bundle over the local charts, such that at each nodal point there is an equivariant isomorphism of fibres. Hence to each orbifold point $p$ there is associated a representation of $\\mathbb{Z}_2$ on $L|_p$ acting by multiplication by $\\exp(2\\pi i\\lambda_p)$ for $\\lambda_p=0$ or $\\frac12$. One says $L$ is {\\em banded} by $\\lambda_p$. The equivariant isomorphism at nodes guarantees that the representations agree on each local irreducible component at the node. \n\nThe sheaf of local sections $\\mathcal{O}_\\mathcal{C}(L)$ pushes forward to a sheaf $|L|:=\\rho_*\\mathcal{O}_\\mathcal{C}(L)$ on $C$ which can be identified with the local sections of $L$ invariant under the $\\mathbb{Z}_2$ action. Away from nodal points $|L|$ is locally free, hence a line bundle. The pull-back bundle satisfies $\\rho^*(|L|)=L\\otimes\\bigotimes_{i\\in I}\\mathcal{O}(-p_i)$ where $L$ is banded by the non-trivial representation precisely at $p_i$ for $i\\in I$. Hence $\\deg|L|=\\deg L-\\frac12 |I|$. 
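As a simple illustration of these conventions, consider an orbifold point $p\\in\\mathcal{C}$ with local coordinate $z$, so that $x=z^2$ on $C$, and the line bundle $L=\\mathcal{O}_\\mathcal{C}(p)$ of degree $\\frac12$. It is banded by $\\lambda_p=\\frac12$ since its local generator $\\frac{1}{z}$ satisfies $\\frac{1}{z}\\mapsto-\\frac{1}{z}$ under $z\\mapsto -z$, and a local section $\\frac{g(z)}{z}$ is invariant only if $g$ is odd, that is $g(z)=zh(z^2)$, so near $p$ the push-forward is $|L|\\cong\\mathcal{O}_C$ and indeed\n$$\\deg|L|=\\deg L-\\tfrac12|I|=\\tfrac12-\\tfrac12=0.$$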
At nodal points, the push-forward $|L|$ is locally free when $L$ is banded by the trivial representation, and $|L|$ is a torsion-free sheaf that is not locally free when $L$ is banded by the non-trivial representation. See \\cite{FJRQua} for a nice description of these ideas.\n\nThe canonical bundle $\\omega_\\mathcal{C}$ of $\\mathcal{C}$ is generated by $dz$ for any local coordinate $z$. At an orbifold point $x=z^2$ the canonical bundle $\\omega_\\mathcal{C}$ is generated by $dz$ hence it is banded by $\\frac12$ i.e. $dz\\mapsto-dz$ under $z\\mapsto -z$. Over the coarse curve $\\omega_C$ is generated by $dx=2zdz$. In other words $\\rho^*\\omega_C\\not\\cong\\omega_\\mathcal{C}$ however $\\omega_C\\cong\\rho_*\\omega_\\mathcal{C}$. Moreover, $\\deg\\omega_C=-\\chi=2g-2$ and \n$$\\deg\\omega_\\mathcal{C}=-\\chi^{\\text{orb}}=2g-2+\\frac12n.$$\nFor $\\omega_\\mathcal{C}^{\\text{log}}=\\omega_\\mathcal{C}(p_1,...,p_n)$, locally $\\frac{dx}{x}=2\\frac{dz}{z}$ so $\\rho^*\\omega_C^{\\text{log}}\\cong\\omega_\\mathcal{C}^{\\text{log}}$ and $\\deg\\omega_C^{\\text{log}}=2g-2+n=\\deg\\omega_\\mathcal{C}^{\\text{log}}$.\n\n\n\nFollowing \\cite{AJaMod}, define\n$$\\overline{\\cal M}_{g,n}^{\\rm spin}=\\{(\\mathcal{C},\\theta,p_1,...,p_n,\\phi)\\mid \\phi:\\theta^2\\stackrel{\\cong}{\\longrightarrow}\\omega_{\\mathcal{C}}^{\\text{log}}\\}.\n$$\nHere $\\omega_{\\mathcal{C}}^{\\text{log}}$ and $\\theta$ are line bundles over the stable twisted curve $\\mathcal{C}$ with labeled orbifold points $p_j$ and $\\deg\\theta=g-1+\\frac12n$. The relation $\\theta^2\\stackrel{\\cong}{\\longrightarrow}\\omega_{\\mathcal{C}}^{\\text{log}}$ is possible because the representation associated to $\\omega_{\\mathcal{C}}^{\\text{log}}$ at $p_i$ is trivial---$dz\/z\\stackrel{z\\mapsto-z}{\\longrightarrow}dz\/z$. We require the representations associated to $\\theta$ at each $p_i$ to be non-trivial, i.e $\\lambda_{p_i}=\\frac12$. At nodal points $p$, both types $\\lambda_p=0$ or $\\frac12$ can occur. The equivariant isomorphism of fibres over nodal points forces the balanced condition $\\lambda_{p_+}=\\lambda_{p_-}$ for $p_\\pm$ corresponding to $p$ on each irreducible component. Among the $2^{2g}$ different spin structures on a twisted curve $\\mathcal{C}$, some will have $\\lambda_p=0$ and some will have $\\lambda_p=\\frac12$. \n\nThe forgetful map\n$$f:\\overline{\\cal M}_{g,n+1}^{\\rm spin}\\to\\overline{\\cal M}_{g,n}^{\\rm spin}\n$$\nis defined via $f(\\mathcal{C},\\theta,p_1,...,p_{n+1},\\phi)=(\\rho(\\mathcal{C}),\\rho_*\\theta,p_1,...,p_n,\\rho_*\\phi)$ where $\\rho$ forgets the label and orbifold structure at $p_{n+1}$. As described above, the push-forward $\\rho_*\\theta$ consists of local sections invariant under the $\\mathbb{Z}_2$ action. Since the representation at $p_{n+1}$ is given by multiplication by $-1$, any invariant local section must vanish at $p_{n+1}$. In other words $\\rho_*\\theta=\\rho_*\\{\\theta(-p_{n+1})\\}$, $\\rho^*\\rho_*\\theta=\\{\\theta(-p_{n+1})\\}$ and $\\deg\\rho_*\\theta=\\deg\\theta-\\frac12$.\n\nTautological line bundles $L_{p_i}\\to\\overline{\\cal M}_{g,n}^{\\rm spin}$, $i=1,...,n$ are defined analogously to those defined over $\\overline{\\cal M}_{g,n}$. For a family $\\pi:\\mathcal{C}\\to S$ with sections $p_i:S\\to\\mathcal{C}$, $i=1,...,n$, they are defined by $L_{p_i}:=p_i^*(\\omega_{\\mathcal{C}\/S})$. \n\nWe can now define a vector bundle over $\\overline{\\cal M}_{g,n}^{\\rm spin}$ using the dual bundle $\\theta^{\\vee}$ on each stable twisted curve. 
Denote by $\\mathcal{E}$ the universal spin structure over $\\overline{\\cal M}_{g,n}^{\\rm spin}$. Given a map $S\\to\\overline{\\cal M}_{g,n}^{\\rm spin}$, $\\mathcal{E}$ pulls back to $\\theta$ giving a family $(\\mathcal{C},\\theta,p_1,...,p_n,\\phi)$ where $\\pi:\\mathcal{C}\\to S$ has stable twisted curve fibres, $p_i:S\\to\\mathcal{C}$ are sections with orbifold isotropy $\\mathbb{Z}_2$ and $\\phi:\\theta^2\\stackrel{\\cong}{\\longrightarrow}\\omega_{\\mathcal{C}\/S}^{\\text{log}}=\\omega_{\\mathcal{C}\/S}(p_1,..,p_n)$. Consider the push-forward sheaf $\\pi_*\\mathcal{E}^{\\vee}$ over $\\overline{\\cal M}_{g,n}^{\\rm spin}$. We have\n$$\\deg\\theta^{\\vee}=1-g-\\frac12n<0.\n$$\nFurthermore, for any irreducible component $\\mathcal{C}'\\stackrel{i}{\\to}\\mathcal{C}$, the pole structure on sections of the log canonical bundle at nodes yields $i^*\\omega_{\\mathcal{C}\/S}^{\\text{log}}=\\omega_{\\mathcal{C}'\/S}^{\\text{log}}$. Hence $\\phi':(\\theta|_{\\mathcal{C}'})^2\\stackrel{\\cong}{\\longrightarrow}\\omega_{\\mathcal{C}'\/S}^{\\text{log}}$, where $\\phi'=i^*\\circ\\phi|_{\\mathcal{C}'}$. Since the irreducible component $\\mathcal{C}'$ is stable its log canonical bundle has negative degree and\n$$\\deg\\theta^{\\vee}|_{\\mathcal{C}'}<0.\n$$\nNegative degree of $\\theta^{\\vee}$ restricted to any irreducible component implies that $R^0\\pi_*\\mathcal{E}^{\\vee}=0$ and the following definition makes sense.\n\\begin{definition} \\label{obsbun}\nDefine a bundle $E_{g,n}=R\\pi_*\\mathcal{E}^\\vee$ over $\\overline{\\cal M}_{g,n}^{\\rm spin}$ with fibre $H^1(\\theta^{\\vee})^{\\vee}$. \n\\end{definition} \nIt is a bundle of rank $2g-2+n$ by the following Riemann-Roch calculation. Orbifold Riemann-Roch takes into account the representation information in terms of its band $\\lambda_{p_i}$.\n$$h^0(\\theta^{\\vee})-h^1(\\theta^{\\vee})=1-g+\\deg \\theta^{\\vee}-\\sum_{i=1}^n\\lambda_{p_i}=1-g+1-g-\\frac12n-\\frac12n=2-2g-n.\n$$\nHere we used the requirement that $\\lambda_{p_i}=\\frac12$ for $i=1,...,n$. Since $\\deg\\theta^{\\vee}=1-g-\\frac12n<0$, and the restriction of $\\theta^{\\vee}$ to any irreducible component also has negative degree, we have $h^0(\\theta^{\\vee})=0$ and hence $h^1(\\theta^{\\vee})=2g-2+n$. Thus $H^1(\\theta^{\\vee})^{\\vee}$ gives fibres of a rank $2g-2+n$ vector bundle.\n\nThe analogue of the boundary maps $\\phi_{\\text{irr}}$ and $\\phi_{h,I}$ defined in \\eqref{gluing} are multivalued maps (binary relations) defined as follows. Consider a node $p\\in\\mathcal{C}$ for $(\\mathcal{C},\\theta,p_1,...,p_n,\\phi)\\in\\overline{\\cal M}_{g,n}^{\\rm spin}$. Denote the normalisation by $\\nu:\\tilde{\\mathcal{C}}\\to\\mathcal{C}$ with points $p_\\pm\\in\\tilde{\\mathcal{C}}$ that map to the node $p=\\nu(p_\\pm)$. When $\\tilde{\\mathcal{C}}$ is not connected, then since $\\theta$ must be banded by $\\lambda_p=0$ at an even number of orbifold points, $\\lambda_{p_i}=\\frac12$ forces $\\lambda_{p_\\pm}=\\frac12$. Hence it decomposes into two spin structures $\\theta_1$ and $\\theta_2$. Any two spin structures $\\theta_1$ and $\\theta_2$ can glue, but not uniquely, to give a spin structure on $\\mathcal{C}$. 
This gives rise to a multivalued map, as described in \\cite{FJRWit}, which uses the fibre product:\n$$\\begin{array}{ccc}\\left(\\overline{\\cal M}_{h,|I|+1}\\times\\overline{\\cal M}_{g-h,|J|+1}\\right)\\times_{\\overline{\\cal M}_{g,n}}\\overline{\\cal M}^{\\rm spin}_{g,n}&\\to&\\overline{\\cal M}^{\\rm spin}_{g,n}\\\\\n\\downarrow&&\\downarrow\\\\\n\\overline{\\cal M}_{h,|I|+1}\\times\\overline{\\cal M}_{g-h,|J|+1}&\\to&\\overline{\\cal M}_{g,n}\\end{array}\n$$\nand is given by\n$$\n\\begin{array}{c}\n\\left(\\overline{\\cal M}_{h,|I|+1}\\times\\overline{\\cal M}_{g-h,|J|+1}\\right)\n\\times_{\\overline{\\cal M}_{g,n}}\\overline{\\cal M}^{\\rm spin}_{g,n}\\\\\\hspace{1.5cm}\\swarrow\\hat{\\nu}\\hspace{3cm}\\phi_{h,I}\\searrow\\\\\n\\qquad\\overline{\\cal M}^{\\rm spin}_{h,|I|+1}\\times\\overline{\\cal M}^{\\rm spin}_{g-h,|J|+1}\\hspace{1cm}\\dashrightarrow\\hspace{1cm}\\overline{\\cal M}^{\\rm spin}_{g,n}\\end{array}\n$$\nwhere $I\\sqcup J=\\{1,...,n\\}$. The map $\\hat{\\nu}$ is given by the pull back of the spin structure obtained from $\\overline{\\cal M}^{\\rm spin}_{g,n}$ to the normalisation defined by the points of $\\overline{\\cal M}_{h,|I|+1}$ and $\\overline{\\cal M}_{g-h,|J|+1}$. The broken arrow $\\dashrightarrow$ represents the multiply-defined map $\\phi_{h,I}\\circ\\hat{\\nu}^{-1}$. \n\nWhen $\\tilde{\\mathcal{C}}$ is connected, a spin structure $\\theta$ on $\\mathcal{C}$ pulls back to a spin structure $\\tilde{\\theta}=\\nu^*\\theta$ on $\\tilde{\\mathcal{C}}$. As above, any spin structure $\\tilde{\\theta}$ glues non-uniquely, to give a spin structure on $\\mathcal{C}$, and defines a multiply-defined map which uses the fibre product:\n$$\\begin{array}{ccc}\\overline{\\cal M}_{g-1,n+2}\\times_{\\overline{\\cal M}_{g,n}}\\overline{\\cal M}^{\\rm spin}_{g,n}&\\to&\\overline{\\cal M}^{\\rm spin}_{g,n}\\\\\n\\downarrow&&\\downarrow\\\\\n\\overline{\\cal M}_{g-1,n+2}&\\to&\\overline{\\cal M}_{g,n}\\end{array}.\n$$\nThere are two cases corresponding to the following decomposition of the fibre product:\n$$\\overline{\\cal M}_{g-1,n+2}\\times_{\\overline{\\cal M}_{g,n}}\\overline{\\cal M}^{\\rm spin}_{g,n}=\\bigsqcup_{i\\in\\{0,1\\}}\\left(\\overline{\\cal M}_{g-1,n+2}\\times_{\\overline{\\cal M}_{g,n}}\\overline{\\cal M}^{\\rm spin}_{g,n}\\right)^{(i)}$$\nwhich depends on the behaviour of $\\theta$ at the nodal point $p_\\pm$. The superscript $\\cdot^{(0)}$, respectively $\\cdot^{(1)}$, denotes those $\\theta$ banded by $\\lambda_{p_\\pm}=\\frac12$, respectively $\\lambda_{p_\\pm}=0$.\n\nIn the first case, when $\\theta$ is banded by $\\lambda_{p_\\pm}=\\frac12$, we have: \n$$\n\\begin{array}{c}\\left(\\overline{\\cal M}_{g-1,n+2}\\times_{\\overline{\\cal M}_{g,n}}\\overline{\\cal M}^{\\rm spin}_{g,n}\\right)^{(0)}\\\\\\swarrow\\hat{\\nu}\\hspace{2cm}\\phi_{\\text{irr}}\\searrow\\\\\\overline{\\cal M}^{\\rm spin}_{g-1,n+2}\\hspace{1cm}\\dashrightarrow\\hspace{1cm}\\overline{\\cal M}^{\\rm spin}_{g,n}\\end{array}\n$$\nwhere $\\hat{\\nu}$ is given by pull back of the spin structure obtained from $\\overline{\\cal M}^{\\rm spin}_{g,n}$ to the normalisation defined by the point of $\\overline{\\cal M}_{g-1,n+2}$. 
\n\nIn the second case when $\\theta$ is banded by $\\lambda_{p_\\pm}=0$, we have:\n$$\n\\begin{array}{c}\\left(\\overline{\\cal M}_{g-1,n+2}\\times_{\\overline{\\cal M}_{g,n}}\\overline{\\cal M}^{\\rm spin}_{g,n}\\right)^{(1)}\\\\\\swarrow\\hat{\\nu}\\hspace{2cm}\\phi_{\\text{irr},2}\\searrow\\\\\\overline{\\cal M}^{\\rm spin}_{g-1,n,2}\\hspace{1cm}\\dashrightarrow\\hspace{1cm}\\overline{\\cal M}^{\\rm spin}_{g,n}\\end{array}\n$$\nwhere $\\overline{\\cal M}^{\\rm spin}_{g-1,n,2}$ consists of spin structures banded, as usual, by $\\lambda_{p_i}=\\frac12$ for $i=1,...,n$ and banded by $\\lambda_{p_i}=0$ for $i=n+1,n+2$. \n\nThe bundle $E_{g-1,n,2}\\to\\overline{\\cal M}^{\\rm spin}_{g-1,n,2}$ is still defined, with fibre $H^1(\\tilde{\\theta}^{\\vee})$, because $H^0(\\tilde{\\theta}^{\\vee})=0$ since the band $\\lambda_{p_{n+1}}=0=\\lambda_{p_{n+2}}$ does not affect $\\deg\\tilde{\\theta}^{\\vee}=1-(g-1)-\\frac12(n+2)<0$ and also negative degree on all irreducible components. It does affect the rank of the bundle. By Riemann-Roch $h^0(\\tilde{\\theta}^{\\vee})-h^1(\\tilde{\\theta}^{\\vee})=1-(g-1)+\\deg\\tilde{\\theta}^{\\vee}-\\frac12n=2-2g-n+1$ hence $\\dim H^1(\\tilde{\\theta}^{\\vee})=\\dim H^1(\\theta^{\\vee})-1$ when $\\tilde{\\theta}=\\nu^*\\theta$.\n\nThe bundle $E_{g,n}$ behaves naturally with respect to the boundary divisors.\n\\begin{lemma} \\label{pullback}\n$$\n\\phi_{\\text{irr}}^*E_{g,n}\\equiv \\hat\\nu^*E_{g-1,n+2},\\quad\\phi_{h,I}^*E_{g,n}\\cong \\hat\\nu^*\\left(\\pi_1^*E_{h,|I|+1}\\oplus\\pi_2^*E_{g-h,|J|+1}\\right)\n$$\nwhere $\\pi_i$ is projection from $\\overline{\\cal M}^{\\rm spin}_{h,|I|+1}\\times\\overline{\\cal M}^{\\rm spin}_{g-h,|J|+1}$ onto the $i$th factor, $i=1,2$. \n\\end{lemma}\n\\begin{proof}\nA spin structure $\\tilde{\\theta}$ on a connected normalisation $\\tilde{\\mathcal{C}}$ has $\\deg\\tilde{\\theta}^{\\vee}=1-(g-1)-\\frac12(n+2)<0$, and also negative degree on all irreducible components, hence $H^0(\\tilde{\\theta}^{\\vee})=0$.\nBy Riemann-Roch \n$$h^0(\\tilde{\\theta}^{\\vee})-h^1(\\tilde{\\theta}^{\\vee})=1-(g-1)+\\deg\\tilde{\\theta}^{\\vee}-\\frac12(n+2)=2-2g-n.$$ \nHence $\\dim H^1(\\tilde{\\theta}^{\\vee})=\\dim H^1(\\theta^{\\vee})$ and the natural map\n$$ 0\\to H^1(\\mathcal{C},\\theta^{\\vee})\\to H^1(\\tilde{\\mathcal{C}},\\tilde{\\theta}^{\\vee})\n$$\nis an isomorphism. In other words $\\phi_{\\text{irr}}^*E_{g,n}\\equiv \\nu^*E_{g-1,n+2}$.\n\nThe argument is analogous when $\\tilde{\\mathcal{C}}$ is not connected and $\\lambda_{p_\\pm}=\\frac12$. Again $\\deg\\theta_i^{\\vee}<0$, and it has negative degree on all irreducible components, hence $H^0(\\theta_i^{\\vee})=0$ for $i=1,2$. By Riemann-Roch $\\dim H^1(\\theta_1^{\\vee})+\\dim H^1(\\theta_2^{\\vee})=\\dim H^1(\\theta^{\\vee})$ so the natural map\n$$ 0\\to H^1(\\mathcal{C},\\theta^{\\vee})\\to H^1(\\tilde{\\mathcal{C}}_1,\\theta_1^{\\vee})\\oplus H^1(\\tilde{\\mathcal{C}}_2,\\theta_2^{\\vee})\n$$\nis an isomorphism. 
In other words $\\phi_{h,I}^*E_{g,n}\\cong \\hat\\nu^*\\left(\\pi_1^*E_{h,|I|+1}\\oplus\\pi_2^*E_{g-h,|J|+1}\\right)$.\n\\end{proof}\nThe following lemma describes the pull-back of $E_{g,n}$ to the fibre product $\\overline{\\cal M}_{g-1,n+2}\\times_{\\overline{\\cal M}_{g,n}}\\overline{\\cal M}^{\\rm spin}_{g,n}$ in terms of the bundle $E_{g-1,n,2}\\to\\overline{\\cal M}^{\\rm spin}_{g-1,n,2}$ defined above.\n\\begin{lemma} \\label{pullback1}\n$$0\\to \\hat\\nu^*E_{g-1,n,2}\\to \\phi_{\\text{irr},2}^*E_{g,n}\\to \\mathcal{O}_{\\overline{\\cal M}_{g-1,n+2}\\times_{\\overline{\\cal M}_{g,n}}\\overline{\\cal M}^{\\rm spin}_{g,n}} \\to 0.\n$$\n\\end{lemma}\n\\begin{proof}\nWhen the bundle $\\theta$ is banded by $\\lambda_{p_\\pm}=0$, the map between sheaves of local holomorphic sections\n$$\\mathcal{O}_C({\\theta},U)\\to\\mathcal{O}_{\\tilde{C}}({\\nu^*\\theta},\\nu^{-1}U)$$ \nis not surjective whenever $U\\ni p$. The image consists of local sections that agree, under an identification of fibres, at $p_+$ and $p_-$. Hence the dual bundle $\\theta^\\vee$ on $\\mathcal{C}$ is a quotient sheaf\n\\begin{equation} \\label{normcomp}\n0\\to I\\to\\nu^*\\theta^\\vee\\to\\theta^\\vee\\to 0\n\\end{equation} \nwhere $\\mathcal{O}_{\\tilde{C}}(I,U)$ is generated by the element of the dual that sends a local section $s\\in \\mathcal{O}_{\\tilde{C}}({\\nu^*\\theta},\\nu^{-1}U)$ to $s(p_+)-s(p_-)$. Note that evaluation $s(p_\\pm)$ only makes sense after a choice of trivialisation of $\\nu^*\\theta$ at $p_+$ and $p_-$, but the ideal $I$ is independent of this choice. The complex \\eqref{normcomp} splits as follows. We can choose a representative $\\phi$ upstairs of any element from the quotient space so that $\\phi(p_+)=0$, i.e. $\\mathcal{O}_C(\\theta^\\vee,U)$ corresponds to elements of $\\mathcal{O}_{\\tilde{C}}({\\nu^*\\theta^\\vee},\\nu^{-1}U)$ that vanish at $p_+$. This is achieved by adding the appropriate multiple of $s(p_+)-s(p_-)$ to a given $\\phi\\in\\mathcal{O}_{\\tilde{C}}({\\nu^*\\theta^\\vee},\\nu^{-1}U)$. (Note that $\\phi(p_-)$ is arbitrary. One could instead arrange $\\phi(p_-)=0$ with $\\phi(p_+)$ arbitrary.) In other words we can identify $\\theta^\\vee$ with $\\nu^*\\theta^\\vee(-p_+)$ in the complex:\n$$0\\to \\nu^*\\theta^\\vee(-p_+)\\to \\nu^*\\theta^\\vee\\to \\nu^*\\theta^\\vee|_{p_+}\\to 0.\n$$\nIn a family $\\pi:C\\to S$, $R^0\\pi_*(\\nu^*\\theta^\\vee)=0=R^0\\pi_*(\\nu^*\\theta^\\vee(-p_+))$ since $\\deg\\nu^*\\theta^\\vee<0$, and it has negative degree on all irreducible components. Also $R^1\\pi_*(\\nu^*\\theta^\\vee|_{p_+})=0$ since $p_+$ has relative dimension 0. Thus\n$$0\\to R^0\\pi_*(\\nu^*\\theta^\\vee|_{p_+})\\to R^1\\pi_*(\\nu^*\\theta^\\vee(-p_+))\\to R^1\\pi_*(\\nu^*\\theta^\\vee) \\to 0.\n$$\nFurthermore, we have $\\nu^*L|_{p_+}\\cong\\mathbb{C}$ canonically via evaluation, hence $R^0\\pi_*(\\nu^*L|_{p_+})\\cong\\mathcal{O}_S$. Since $\\hat\\nu^*E_{g-1,n,2}=R^1\\pi_*(\\nu^*\\theta^\\vee)$ and $\\phi_{\\text{irr},2}^*E_{g,n}=R^1\\pi_*(\\nu^*\\theta^\\vee(-p_+))$, take the dual of the sequence, and the result follows.\n\\end{proof}\n\\begin{remark}\nWhen $\\lambda_{p_\\pm}=\\frac12$ we have $\\lambda_{p_+}+\\lambda_{p_-}=1$. 
We see from above that $\\lambda_{p_\\pm}=0$ really wants one of $\\lambda_{p_\\pm}$ to be 1 to preserve $\\lambda_{p_+}+\\lambda_{p_-}=1$.\n\\end{remark}\n\n\n\n\n\n\n\\begin{definition} \\label{fundclass}\nFor $2g-2+n>0$ define the Euler class\n$$\\Omega_{g,n}:=c_{2g-2+n}(E_{g,n})\\in H^{4g-4+2n}(\\overline{\\cal M}_{g,n}^{\\rm spin},\\mathbb{Q}).\n$$\n\\end{definition} \nNote that $\\Omega_{0,n}=0$ for $n=3,4,...$ because rank$(E_{0,n})=n-2$ is greater than $\\dim\\overline{\\cal M}_{0,n}^{\\rm spin}=n-3$ so its top Chern class vanishes. Nevertheless, it would be interesting to know if the bundles $E_{0,n}$ carry non-trivial information.\n\nThe cohomology classes $\\Omega_{g,n}$ behave well with respect to inclusion of strata.\n\\begin{lemma} \\label{omeganat}\n$$\\phi_{\\text{irr}}^*\\Omega_{g,n}=\\hat\\nu^*\\Omega_{g-1,n+2},\\quad \\phi_{h,I}^*\\Omega_{g,n}=\\hat\\nu^*\\left(\\pi_1^*\\Omega_{h,|I|+1}\\cdot\\pi_2^*\\Omega_{g-h,|J|+1}\\right),\\quad\\phi_{\\text{irr},2}^*\\Omega_{g,n}=0.$$ \n\\end{lemma}\n \n\\begin{proof} \nThis is an immediate application of Lemma~\\ref{pullback}:\n$$\n\\phi_{\\text{irr}}^*E_{g,n}\\equiv \\hat\\nu^*E_{g-1,n+2},\\quad\\phi_{h,I}^*E_{g,n}\\cong \\hat\\nu^*\\left(\\pi_1^*E_{h,|I|+1}\\oplus\\pi_2^*E_{g-h,|J|+1}\\right)\n$$\nand the naturality of $c_{2g-2+n}=c_{\\rm top}$. We have \n$$\\phi_{\\text{irr}}^*c_{\\rm top}(E_{g,n})=\\hat\\nu^*c_{\\rm top}(E_{g-1,n+2}),\\quad\\phi_{h,I}^*c_{\\rm top}(E_{g,n})=\\hat\\nu^*\\left(\\pi_1^*c_{\\rm top}(E_{h,|I|+1})\\cdot\\pi_2^*c_{\\rm top}(E_{g-h,|J|+1})\\right).\n$$\n\nAlso, $\\phi_{\\text{irr},2}^*\\Omega_{g,n}=0$ follows immediately from the exact sequence of Lemma~\\ref{pullback1}\n$$0\\to E_{g-1,n,2}\\to \\phi_{\\text{irr},2}^*E_{g,n}\\to \\mathcal{O}_{\\overline{\\cal M}^{\\rm spin}_{g-1,n,2}} \\to 0\n$$\nwhich implies $\\phi_{\\text{irr},2}^*c_{2g-2+n}(E_{g,n})=c_{2g-3+n}(E_{g-1,n,2})\\cdot c_1(\\mathcal{O}_{\\overline{\\cal M}^{\\rm spin}_{g-1,n,2}})=0$.\n\\end{proof}\nA consequence of Lemma~\\ref{omeganat} is that $\\phi_{\\Gamma}^*\\Omega_{g,n}=0$ for any stable graph $\\Gamma$ (defined as for $\\overline{\\cal M}_{g,n}$ with extra data on edges) that contains a genus 0 vertex.\n\nConsider the map $\\pi:\\overline{\\cal M}_{g,n+1}^{\\rm spin}\\to\\overline{\\cal M}_{g,n}^{\\rm spin}$ that forgets the point $p_{n+1}$.\n\\begin{lemma} \\label{omegaforget}\n$$\\Omega_{g,n+1}=\\psi_{p_{n+1}}\\cdot \\pi^*\\Omega_{g,n}.$$ \n\\end{lemma}\n \n\\begin{proof} \nTensor the exact sequence of sheaves \n$$0\\to \\mathcal{O}(-p_{n+1})\\to\\mathcal{O}\\to\\mathcal{O} |_{p_{n+1}}\\to 0$$\nwith $\\omega_{\\mathcal{C}\/S}\\otimes\\theta$ over a family $\\pi:\\mathcal{C}\\to S$ where $S\\to\\overline{\\cal M}_{g,n+1}^{\\rm spin}$ to get:\n$$0\\to \\omega_{\\mathcal{C}\/S}\\otimes\\theta(-p_{n+1})\\to\\omega_{\\mathcal{C}\/S}\\otimes\\theta\\to\\omega_{\\mathcal{C}\/S}\\otimes\\theta |_{p_{n+1}}\\to 0$$\nhence\n$$0\\to R^0\\pi_*\\left(\\omega_{\\mathcal{C}\/S}\\otimes\\theta(-p_{n+1})\\right)\\to R^0\\pi_*\\left(\\omega_{\\mathcal{C}\/S}\\otimes\\theta\\right)\\to R^0\\pi_*\\left(\\omega_{\\mathcal{C}\/S}\\otimes\\theta |_{p_{n+1}}\\right)\\to R^1\\pi_*\\left(\\omega_{\\mathcal{C}\/S}\\otimes\\theta(-p_{n+1})\\right).$$\nWe have $R^1\\pi_*\\left(\\omega_{\\mathcal{C}\/S}\\otimes\\theta(-p_{n+1})\\right)=0$ (check) which leaves a short exact sequence. By Serre duality, the first two terms are\n$R^1\\pi_*\\left(\\theta^\\vee(p_{n+1})\\right)^\\vee\\to R^1\\pi_*\\left(\\theta^\\vee\\right)^\\vee$. 
\n\nRecall that the forgetful map $(\\mathcal{C},\\theta,p_1,...,p_{n+1},\\phi)\\mapsto(\\pi(\\mathcal{C}),\\pi_*\\theta,p_1,...,p_n,\\pi_*\\phi)$ pushes forward $\\theta$ via $\\pi$ which forgets the orbifold structure at $p_{n+1}$. As described earlier, $\\pi^*\\pi_*\\theta=\\{\\theta(-p_{n+1})\\}$ but $\\pi_*\\theta_{g,n+1}=\\theta_{g,n}$, where the ${g,n}$ subscript which is usually omitted is temporarily included for clarity. Hence $\\theta^\\vee(p_{n+1})=\\pi^*\\theta^\\vee$ and $R^1\\pi_*\\left(\\theta^\\vee(p_{n+1})\\right)=R^1\\pi_*\\left(\\pi^*\\theta^\\vee\\right)=\\pi^*R^1\\pi_*\\left(\\theta^\\vee\\right)$. Thus the first two terms of the short exact sequence become $\\pi^*E_{g,n}\\to E_{g,n+1}$.\n\nFor the third term of the short exact sequence, the residue map produces a canonical isomorphism $\\omega_{\\mathcal{C}\/S}^{\\text{log}}\\cong\\mathbb{C}$, as proven in \\cite{FJRWit}. Thus $\\omega_{\\mathcal{C}\/S}^{\\text{log}}|_{p_{n+1}}=\\mathcal{O}_{p_{n+1}}$, and hence also $\\theta |_{p_{n+1}}=\\mathcal{O}_{p_{n+1}}$. The triviality of $\\theta |_{p_{n+1}}$ implies $R^0\\pi_*\\left(\\omega_{\\mathcal{C}\/S}\\otimes\\theta |_{p_{n+1}}\\right)=R^0\\pi_*\\left(\\omega_{\\mathcal{C}\/S}|_{p_{n+1}}\\right)=L_{p_{n+1}}$. Hence the short exact sequence becomes:\n$$0\\to\\pi^*E_{g,n}\\to E_{g,n+1}\\to L_{p_{n+1}}\\to 0.\n$$\nSince $c_1(L_{p_{n+1}})=\\psi_{p_{n+1}}$, we have $c_{2g-2+n+1}(E_{g,n+1})=\\psi_{p_{n+1}}\\cdot \\pi^*c_{2g-2+n}(E_{g,n})$ as required. \n\\end{proof} \n\\begin{definition}\nFor $p:\\overline{\\cal M}_{g,n}^{\\rm spin}\\to\\overline{\\cal M}_{g,n}$ define\n$$\\Theta_{g,n}=2^np_*\\Omega_{g,n}\\in H^{4g-4+2n}(\\overline{\\cal M}_{g,n}).\n$$\n\\end{definition}\nLemma~\\ref{omegaforget} and the relation\n$$\\psi_{n+1}=\\frac12p^*\\psi_{n+1}$$\nproven in \\cite[Prop. 2.4.1]{FJRWit}, together with the factor of $2^n$ in the definition of $\\Omega_{g,n}$, immediately gives property \\eqref{forget} of $\\Theta_{g,n}$\n$$\\Theta_{g,n+1}=\\psi_{n+1}\\cdot\\pi^*\\Theta_{g,n}.$$ \nProperty \\eqref{base} of $\\Theta_{g,n}$ is given by the following calculation.\n\\begin{proposition} \\label{theta11}\n$\\Theta_{1,1}=3\\psi_1\\in H^2(\\overline{\\cal M}_{1,1})$.\n\\end{proposition}\n\\begin{proof}\nA one-pointed twisted elliptic curve $(\\mathcal{E},p)$ is a one-pointed elliptic curve $(E,p)$ such that $p$ has isotropy $\\mathbb{Z}_2$. The degree of the divisor $p$ in $\\mathcal{E}$ is $\\frac12$ and the degree of every other point in $\\mathcal{E}$ is 1. If $dz$ is a holomorphic differential on $E$ (where $E=\\mathbb{C}\/\\Lambda$ and $z$ is the identity function on the universal cover $\\mathbb{C}$) then locally near $p$ we have $z=t^2$ so $dz=2tdt$ vanishes at $p$. In particular, the canonical divisor $(\\omega_{\\mathcal{E}})=p$ has degree $\\frac12$ and $(\\omega^{\\text{log}}_{\\mathcal{E}})=(\\omega_{\\mathcal{E}}(p))=2p$ has degree 1. \n\nA spin structure on $\\mathcal{E}$ is a degree $\\frac12$ line bundle $\\mathcal{L}$ satisfying $\\mathcal{L}^2=\\omega^{\\text{log}}_{\\mathcal{E}}$. Line bundles on $\\mathcal{E}$ correspond to divisors on $\\mathcal{E}$ up to linear equivalence. Note that meromorphic functions on $\\mathcal{E}$ are exactly the meromorphic functions on $E$. The four spin structures on $\\mathcal{E}$ are given by the divisors $\\theta_0=p$ and $\\theta_i=q_i-p$, $i=1,2,3$, where $q_i$ is a non-trivial order 2 element in the group $E$ with identity $p$. Clearly $\\theta_0^2=2p=\\omega^{\\text{log}}_{\\mathcal{E}}$. 
For $i=1,2,3$, $\\theta_i^2=2q_i-2p\\sim 2p$ since there is a meromorphic function $\\wp(z)-\\wp(q_i)$ on $E$ with a double pole at $p$ and a double zero at $q_i$. Its divisor on $\\mathcal{E}$ is $2q_i-4p$, since $p$ has isotropy $\\mathbb{Z}_2$, hence $2q_i-2p\\sim 2p$.\n\nSince $H^2(\\overline{\\cal M}_{1,1})$ is generated by $\\psi_1$ it is enough to calculate $\\int_{\\overline{\\cal M}_{1,1}}\\Theta_{1,1}$. The Chern character of the push-forward bundle $E_{1,1}$ is calculated via the Grothendieck-Riemann-Roch theorem\n$$ \\text{Ch}(R\\pi_*\\mathcal{E}^\\vee)=\\pi_*(\\text{Ch}(\\mathcal{E}^\\vee)\\text{Td}(\\omega_\\pi^\\vee)).\n$$\nIn fact we need to use the orbifold Grothendieck-Riemann-Roch theorem \\cite{ToeThe}. The calculation we need is a variant of the calculation in \\cite[Theorem 6.3.3]{FJRWit} which applies to $\\mathcal{E}$ such that $\\mathcal{E}^2=\\omega_{\\mathcal{C}}^{\\text{log}}$ instead of $\\mathcal{E}^\\vee$. Importantly, this means that the Todd class has been worked out, and it remains to adjust the $\\text{Ch}(\\mathcal{E}^\\vee)$ term. We get\n$$\\int_{\\overline{\\cal M}_{1,1}}p_*c_1(E_{1,1})=2\\int_{\\overline{\\cal M}_{1,1}}\\left[\\frac{11}{24}\\kappa_1+\\frac{1}{24}\\psi_1+\\frac12 \\left(-\\frac{1}{24}+\\frac{1}{12}\\right)(i_{\\Gamma})_*(1)\\right]\n=2\\left(\\frac{11}{24^2}+\\frac{1}{24^2}+\\frac12\\cdot \\frac{1}{24}\\cdot\\frac12\\right)=\\frac{1}{16}$$\nwhich agrees with\n$$\\int_{\\overline{\\cal M}_{1,1}}\\frac32\\psi_1=\\frac32\\cdot\\frac{1}{24}=\\frac{1}{16}.\n$$\nHence $p_*c_1(E)=\\frac32\\psi_1$ and $\\Theta_{1,1}=2p_*c_1(E)=3\\psi_1$.\n\n\n\\end{proof}\n\\begin{proposition}\nThe pushforward $p_*2^n\\Omega_{g,n}=\\Theta_{g,n}\\in H^{4g-4+2n}(\\overline{\\cal M}_{g,n})$ satisfies property \\eqref{glue}.\n\\end{proposition}\n\\begin{proof}\nThe two properties \\eqref{glue} of $\\Theta_{g,n}$, follow from the analogous properties for $\\Omega_{g,n}$. 
This uses the relationship between compositions of pull-backs and push-forwards in the following diagrams:\n\\begin{center}\n\\begin{tikzpicture}[scale=0.5]\n\\draw (0,0) node {$\\overline{\\cal M}_{g-1,n+2}^{\\rm spin}$};\n\\draw [->, line width=1pt,dashed] (2,0)--(4,0);\n\\draw (3,.5) node {$\\phi_{\\text{irr}}\\circ\\hat{\\nu}^{-1}$};\n\\draw (3,-3.5) node {$\\phi_{\\text{irr}}$};\n\\draw (6,0) node {$\\overline{\\cal M}_{g,n}^{\\rm spin}$};\n\\draw [->, line width=1pt] (0,-1)--(0,-3);\n\\draw (-.5,-2) node {$p$};\n\\draw (0,-4) node {$\\overline{\\cal M}_{g-1,n+2}$};\n\\draw [->, line width=1pt] (2,-4)--(4,-4);\n\\draw (6,-4) node {$\\overline{\\cal M}_{g,n}$};\n\\draw [->, line width=1pt] (6,-1)--(6,-3);\n\\draw (5.5,-2) node {$p$};\n\\end{tikzpicture}\n\\hspace{3cm}\n\\begin{tikzpicture}[scale=0.5]\n\\draw (0,0) node {$\\overline{\\cal M}_{h,|I|+1}^{\\rm spin}\\times \\overline{\\cal M}_{g-h,|J|+1}^{\\rm spin}$};\n\\draw [->, line width=1pt,dashed] (4,0)--(6,0);\n\\draw (5,.5) node {$\\phi_{h,I}\\circ\\hat{\\nu}^{-1}$};\n\\draw (5,-3.5) node {$\\phi_{h,I}$};\n\\draw (8,0) node {$\\overline{\\cal M}_{g,n}^{\\rm spin}$};\n\\draw [->, line width=1pt] (0,-1)--(0,-3);\n\\draw (-.5,-2) node {$p$};\n\\draw (0,-4) node {$\\overline{\\cal M}_{h,|I|+1}\\times\\overline{\\cal M}_{g-h,|J|+1}$};\n\\draw [->, line width=1pt] (4,-4)--(6,-4);\n\\draw (8,-4) node {$\\overline{\\cal M}_{g,n}$};\n\\draw [->, line width=1pt] (8,-1)--(8,-3);\n\\draw (7.5,-2) node {$p$};\n\\end{tikzpicture}\n\\end{center}\nwhere the broken arrows signify multiply-defined maps which are defined above using fibre products.\n\nOn cohomology, we have $\\phi_{\\text{irr}}^*p_*=4p_*\\hat\\nu_*\\phi_{\\text{irr}}^*$ and $\\phi_{h,I}^*p_*=4p_*\\hat\\nu_*\\phi_{h,I}^*$ where the factor of 4 is due to the degree of $\\hat\\nu$ ramification of $p$---see (39) in \\cite{JKVMod}. Hence \n$$\\phi_{\\text{irr}}^*\\Theta_{g,n}=\\phi_{\\text{irr}}^*p_*2^n\\Omega_{g,n}=4p_*\\hat\\nu_*\\phi_{\\text{irr}}^*2^n\\Omega_{g,n}=p_*2^{n+2}\\Omega_{g-1,n+2}=\\Theta_{g-1,n+2}$$\nand similarly $\\phi_{h,I}^*\\Theta_{g,n}=\\pi_1^*\\Theta_{h,|I|+1}\\cdot\\pi_2^* \\Theta_{g-h,|J|+1}$ uses $4\\cdot2^n=2^{n+2}=2^{|I|+1+|J|+1}$.\n\\end{proof}\n\n\\begin{remark}\nThe construction of $\\Omega_{g,n}$ should also follow from the cosection construction in \\cite{CLLWit} using the moduli space of spin curves with fields \n$$\\overline{\\cal M}_{g,n}(\\mathbb{Z}_2)^p=\\{(C,\\theta,\\rho)\\mid (C,\\theta)\\in\\overline{\\cal M}_{g,n}^{\\rm spin},\\ \\rho\\in H^0(\\theta)\\}.$$\nA cosection of the pull-back of $E_{g,n}$ to $\\overline{\\cal M}_{g,n}(\\mathbb{Z}_2)^p$ is given by $\\rho^{-3}$ since it pairs well with $H^1(\\theta)$---we have $\\rho^{-3}\\in H^0((\\theta^\\vee)^3)$ while $H^1(\\theta)\\cong H^0(\\omega\\otimes\\theta^\\vee)^\\vee=H^0((\\theta^\\vee)^3)^\\vee$. Using the cosection $\\rho^{-3}$ a virtual fundamental class is constructed in \\cite{CLLWit} that likely gives rise to $\\Omega_{g,n}\\in H^{4g-4+2n}(\\overline{\\cal M}_{g,n}^{\\rm spin})$. 
The virtual fundamental class is constructed away from the zero set of $\\rho$.\n\\end{remark}\n\n\n\\section{Uniqueness} \\label{sec:unique}\nThe degree property \\eqref{degree} of Theorem~\\ref{main}, $\\Theta_{g,n}\\in H^{4g-4+2n}(\\overline{\\cal M}_{g,n})$, proven below, enables a reduction argument which leads to uniqueness of intersection numbers $\\displaystyle \\int_{\\overline{\\cal M}_{g,n}}\\Theta_{g,n}\\prod_{i=1}^n\\psi_i^{m_i}\\prod_{j=1}^N\\kappa_{\\ell_j}$, which is property \\eqref{unique} of Theorem~\\ref{main}. In this section we prove these two results together with properties \\eqref{genus0} and \\eqref{symmetric} which are immediate consequences.\nWe leave the proof of the final property \\eqref{thetatau} of Theorem~\\ref{main} until Section~\\ref{intkdv}.\n\nWe first prove the following lemma which will be needed later.\n\\begin{lemma} \\label{nonzero}\nProperties \\eqref{pure}-\\eqref{base} imply that $\\Theta_{g,n}\\neq 0$ for $g>0$.\n\\end{lemma}\n\\begin{proof}\nWe have $\\Theta_{1,1}=3\\psi_1\\neq 0$ by \\eqref{base} hence $\\Theta_{1,n}=3\\psi_1\\psi_2...\\psi_n$ by \\eqref{forget} together with \nthe equality $\\psi_n\\psi_i=\\psi_n\\pi^*\\psi_i$ for $i<n$, and $\\Theta_{1,n}\\neq 0$ since $\\int_{\\overline{\\cal M}_{1,n}}\\psi_1\\cdots\\psi_n=\\frac{(n-1)!}{24}\\neq 0$. For $g>1$ we argue by induction on $g$: let $\\Gamma$ be the stable graph consisting of a genus $g-1$ vertex attached by a single edge to a genus 1 vertex with $n$ labeled leaves (denoted {\\em ordinary} leaves in Section~\\ref{twloop}). Then by \\eqref{glue}\n$$\\phi_{\\Gamma}^*\\Theta_{g,n}=\\Theta_{g-1,1}\\otimes\\Theta_{1,n+1}\n$$\nwhich is non-zero since $\\Theta_{g-1,1}\\neq 0$ by the inductive hypothesis and $\\Theta_{1,n+1}\\neq 0$ by the calculation above.\n\\end{proof}\n\\begin{remark}\nWe can replace \\eqref{base} by any initial condition $\\Theta_{1,1}=\\lambda\\psi_1$ for $\\lambda\\neq 0$ and the proof of Lemma~\\ref{nonzero} still applies, if $\\Theta_{g,n}$ exists.\n\\end{remark}\n\n\\begin{proof}[Proof of \\eqref{degree}]\n\n\nWrite $d(g,n)=\\text{degree}(\\Theta_{g,n})$ which exists by \\eqref{pure}. The degree here is half the cohomological degree so $\\Theta_{g,n}\\in H^{2d(g,n)}(\\overline{\\cal M}_{g,n})$. Using \\eqref{glue}, $\\phi_{\\text{irr}}^*\\Theta_{g,n}=\\Theta_{g-1,n+2}$ implies that $d(g,n)=d(g-1,n+2)$ since $\\Theta_{g-1,n+2}\\neq 0$ by Lemma~\\ref{nonzero}. Hence \n$d(g,n)=f(2g-2+n)$ \nis a function of $2g-2+n$. Similarly, using \\eqref{glue}, $\\phi_{h,I}^*\\Theta_{g,n}=\\Theta_{h,|I|+1}\\otimes \\Theta_{g-h,|J|+1}$ implies that \n$f(a+b)=f(a)+f(b)=(a+b)f(1)$ since $\\Theta_{h,|I|+1}\\neq 0$ and $\\Theta_{g-h,|J|+1}\\neq 0$ again by Lemma~\\ref{nonzero}.\nHence $d(g,n)=(2g-2+n)k$ for an integer $k$. But $d(g,n)\\leq 3g-3+n$ implies $k\\leq 1$. When $k=0$, this gives $\\Theta_{g,n}\\in H^0(\\overline{\\cal M}_{g,n})$, and $\\Theta_{g,n}$ is a topological field theory. But $\\deg\\Theta_{g,n}=0$ contradicts \\eqref{base} hence $k=1$ and $\\deg\\Theta_{g,n}=2g-2+n$.\n\nThe proof above used only properties \\eqref{pure}, \\eqref{glue} and \\eqref{base}. Alternatively, one can use \\eqref{forget} in place of the second part of \\eqref{glue} given by $\\phi_{h,I}^*\\Theta_{g,n}=\\Theta_{h,|I|+1}\\otimes \\Theta_{g-h,|J|+1}$.\n\\end{proof}\n\\begin{proof}[Proof of \\eqref{genus0}]\nThis is an immediate consequence of \\eqref{degree} since $\\deg\\Theta_{0,n}=n-2>n-3=\\dim\\overline{\\cal M}_{0,n}$ hence $\\Theta_{0,n}=0$. For any stable graph $\\Gamma$ with a genus 0 vertex, Remark~\\ref{gluestab} gives $\\phi_{\\Gamma}^*\\Theta_{g,n}=\\Theta_{\\Gamma}$. 
Furthermore $0=\\Theta_{\\Gamma}=\\prod_{v\\in V(\\Gamma)}\\pi_v^*\\Theta_{g(v),n(v)}$ since the genus 0 vertex contributes a factor of 0 to the product.\n\\end{proof}\n\\begin{proof}[Proof of \\eqref{symmetric}]\nProperty \\eqref{forget} implies that $\\Theta_{g,n}=\\pi^*\\Theta_g\\cdot\\prod_{i=1}^n\\psi_i$ where $\\pi:\\overline{\\cal M}_{g,n}\\to\\overline{\\cal M}_{g}$ is the forgetful map. Since $\\pi^*\\omega\\in H^*(\\overline{\\cal M}_{g,n})^{S_n}$ for any $\\omega\\in H^*(\\overline{\\cal M}_{g})$, and clearly $\\prod_{i=1}^n\\psi_i\\in H^*(\\overline{\\cal M}_{g,n})^{S_n}$, we have $\\Theta_{g,n}\\in H^*(\\overline{\\cal M}_{g,n})^{S_n}$ as required.\n\\end{proof}\n\\begin{proof}[Proof of \\eqref{unique}]\nThe uniqueness result follows from the more general result given in the following proposition.\n\\end{proof}\n\\begin{proposition} \\label{th:unique}\nFor any $\\Theta_{g,n}$ satisfying \\eqref{pure} - \\eqref{forget} above, the intersection numbers\n\\begin{equation} \\label{corr}\n\\int_{\\overline{\\cal M}_{g,n}}\\Theta_{g,n}\\prod_{i=1}^n\\psi_i^{m_i}\\prod_{j=1}^N\\kappa_{\\ell_j}\n\\end{equation}\nare uniquely determined from the initial condition $\\Theta_{1,1}=\\lambda\\psi_1$ for $\\lambda\\in\\mathbb{C}$.\n\\end{proposition}\n\\begin{proof}\nFor $n>0$, we will push forward the integral \\eqref{corr} via the forgetful map $\\pi:\\overline{\\cal M}_{g,n}\\to\\overline{\\cal M}_{g,n-1}$ as follows. Consider first the case when there are no $\\kappa$ classes. The presence of $\\psi_n$ in $\\Theta_{g,n}=\\psi_n\\cdot\\pi^*\\Theta_{g,n-1}$ gives\n$$\\Theta_{g,n}\\psi_k=\\Theta_{g,n}\\pi^*\\psi_k,\\quad k<n$$\nand hence\n$$\\int_{\\overline{\\cal M}_{g,n}}\\Theta_{g,n}\\prod_{i=1}^n\\psi_i^{m_i}=\\int_{\\overline{\\cal M}_{g,n-1}}\\Theta_{g,n-1}\\prod_{i=1}^{n-1}\\psi_i^{m_i}\\cdot\\pi_*\\left(\\psi_n^{m_n+1}\\right)=\\int_{\\overline{\\cal M}_{g,n-1}}\\Theta_{g,n-1}\\prod_{i=1}^{n-1}\\psi_i^{m_i}\\cdot\\kappa_{m_n}\n$$\nwhich trades the point $p_n$ for a $\\kappa$ class. In the presence of $\\kappa$ classes the same push-forward applies, using $\\kappa_{\\ell}=\\pi^*\\kappa_{\\ell}+\\psi_n^{\\ell}$ on $\\overline{\\cal M}_{g,n}$, so all intersection numbers \\eqref{corr} are expressed in terms of intersection numbers of $\\Theta_{g}$ with $\\kappa$ classes over $\\overline{\\cal M}_{g}$ for $g\\geq 2$, and in terms of $\\int_{\\overline{\\cal M}_{1,1}}\\Theta_{1,1}$ when $g=1$. These are uniquely determined from the initial condition via tautological relations together with property \\eqref{glue}---see Example~\\ref{gen2rel} below for the genus two case and the discussion following Theorem~\\ref{main}.\n\\end{proof}\nDefine the partition function\n\\begin{equation} \\label{eq:tauf}\nZ^{\\Theta}(\\hbar,t_0,t_1,...)=\\exp\\sum_{g,n,\\vec{k}}\\frac{\\hbar^{g-1}}{n!}\\int_{\\overline{\\cal M}_{g,n}}\\Theta_{g,n}\\cdot\\prod_{j=1}^n\\psi_j^{k_j}\\prod t_{k_j}.\n\\end{equation}\n\\begin{proposition} \\label{th:dilaton}\nThe partition function $Z^{\\Theta}(\\hbar,t_0,t_1,...)$ satisfies\n$$\\int_{\\overline{\\cal M}_{g,n+1}}\\Theta_{g,n+1}\\cdot\\prod_{j=1}^n\\psi_j^{k_j}=(2g-2+n)\\int_{\\overline{\\cal M}_{g,n}}\\Theta_{g,n}\\cdot\\prod_{j=1}^n\\psi_j^{k_j}\n$$\nfor $2g-2+n>0$. Equivalently it satisfies the dilaton equation:\n\\begin{equation} \\label{dilaton}\n\\frac{\\partial}{\\partial t_0}Z(\\hbar,t_0,t_1,...)=\\sum_{i=0}^{\\infty}(2i+1)t_i \\frac{\\partial}{\\partial t_i}Z(\\hbar,t_0,t_1,...)\n\\end{equation}\n\\end{proposition}\n\\begin{proof}\nWe have\n\\begin{align*} \n\\int_{\\overline{\\cal M}_{g,n+1}}\\Theta_{g,n+1}\\cdot\\prod_{j=1}^n\\psi_j^{k_j}&=\\int_{\\overline{\\cal M}_{g,n+1}}\\pi^*\\Theta_{g,n}\\cdot\\psi_{n+1}\\cdot\\prod_{j=1}^n\\psi_j^{k_j}=\\int_{\\overline{\\cal M}_{g,n+1}}\\pi^*\\Theta_{g,n}\\cdot\\psi_{n+1}\\cdot\\prod_{j=1}^n\\pi^*\\psi_j^{k_j}\\\\\n&=\\int_{\\overline{\\cal M}_{g,n}}\\Theta_{g,n}\\cdot\\prod_{j=1}^n\\psi_j^{k_j}\\cdot\\pi_*\\psi_{n+1}=\n(2g-2+n)\\int_{\\overline{\\cal M}_{g,n}}\\Theta_{g,n}\\cdot\\prod_{j=1}^n\\psi_j^{k_j}.\\nonumber\n\\end{align*}\nwhere we have used $\\psi_{n+1}\\cdot\\psi_j=\\psi_{n+1}\\cdot\\pi^*\\psi_j$ for $j=1,...,n$ and $\\pi_*(\\pi^*\\omega\\cdot\\psi_{n+1})=\\omega\\cdot\\pi_*\\psi_{n+1}$, together with $\\pi_*\\psi_{n+1}=\\kappa_0=2g-2+n$. 
But this exactly agrees with the dilaton equation \\eqref{dilaton} via the correspondence \\eqref{eq:tauf}, since property \\eqref{degree} forces $\\sum_{j=1}^nk_j=g-1$, equivalently $2g-2+n=\\sum_{j=1}^n(2k_j+1)$, for any non-vanishing intersection number.\n\\end{proof}\nProposition~\\ref{th:dilaton} together with the initial condition $\\Theta_{1,1}=\\lambda\\psi_1$, or $\\int_{\\overline{\\cal M}_{1,1}}\\Theta_{1,1}=\\frac{\\lambda}{24}$, gives \n\\begin{equation} \\label{genus1}\n\\log Z^{\\Theta}=-\\frac{\\lambda}{24}\\log(1-t_0)+O(\\hbar).\n\\end{equation}\nThe following example demonstrates Proposition~\\ref{th:unique} with an explicit genus 2 relation.\n\\begin{example} \\label{gen2rel}\nA genus two Pixton relation first proven by Mumford \\cite[equation (8.5)]{MumTow}, relating $\\kappa_1$ and the divisors $\\cal M_{\\Gamma_1}\\cong{\\overline{\\cal M}_{1,1}}\\times{\\overline{\\cal M}_{1,1}}$ and $\\cal M_{\\Gamma_2}\\cong{\\overline{\\cal M}_{1,2}}$ in $\\overline{\\cal M}_{2}$, labeled by stable graphs $\\Gamma_i$, is given by\n$$\\kappa_1-\\frac{7}{5}[\\cal M_{\\Gamma_1}]-\\frac{1}{5}[\\cal M_{\\Gamma_2}]=0$$\nwhich induces the relation \n$$\\Theta_{2}\\cdot\\kappa_1-\\frac{7}{5}\\Theta_{2}\\cdot[\\cal M_{\\Gamma_1}]-\\frac{1}{5}\\Theta_{2}\\cdot[\\cal M_{\\Gamma_2}]=0.$$ \nProperty \\eqref{glue} of $\\Theta_{g,n}$ yields\n$$\\int_{\\overline{\\cal M}_{2}}\\Theta_{2}\\cdot[\\cal M_{\\Gamma_1}]=\\int_{\\cal M_{\\Gamma_1}}\\Theta_{2}=\\int_{\\overline{\\cal M}_{1,1}}\\Theta_{1,1}\\cdot\\int_{\\overline{\\cal M}_{1,1}}\\Theta_{1,1}\\cdot\\frac{1}{|\\text{Aut}(\\Gamma_1)|},\\quad\\int_{\\cal M_{\\Gamma_2}}\\Theta_{2}=\\int_{\\overline{\\cal M}_{1,2}}\\Theta_{1,2}\\cdot\\frac{1}{|\\text{Aut}(\\Gamma_2)|}\n$$\nhence the relation on the level of intersection numbers is given by\n\\begin{equation*} \n\\int_{\\overline{\\cal M}_{2}}\\Theta_{2}\\cdot\\kappa_1-\\frac{7}{5}\\cdot\\int_{\\overline{\\cal M}_{1,1}}\\Theta_{1,1}\\cdot\\int_{\\overline{\\cal M}_{1,1}}\\Theta_{1,1}\\cdot\\frac{1}{|\\text{Aut}(\\Gamma_1)|}-\\frac{1}{5}\\cdot\\int_{\\overline{\\cal M}_{1,2}}\\Theta_{1,2}\\cdot\\frac{1}{|\\text{Aut}(\\Gamma_2)|}=0.\n\\end{equation*}\nWe have $\\int_{\\overline{\\cal M}_{1,1}}\\Theta_{1,1}=\\frac{\\lambda}{24}=\\int_{\\overline{\\cal M}_{1,2}}\\Theta_{1,2}$ from \\eqref{genus1}, and $|\\text{Aut}(\\Gamma_1)|=2=|\\text{Aut}(\\Gamma_2)|$.\nHence\n\\begin{align*} \n\\int_{\\overline{\\cal M}_{2}}\\Theta_{2}\\cdot\\kappa_1 &=\\frac{7}{5}\\cdot\\int_{\\overline{\\cal M}_{1,1}}\\Theta_{1,1}\\cdot\\int_{\\overline{\\cal M}_{1,1}}\\Theta_{1,1}\\cdot\\frac{1}{|\\text{Aut}(\\Gamma_1)|}+\\frac{1}{5}\\cdot\\int_{\\overline{\\cal M}_{1,2}}\\Theta_{1,2}\\cdot\\frac{1}{|\\text{Aut}(\\Gamma_2)|}\\\\&=\\frac{7}{5}\\cdot\\left(\\frac{\\lambda}{24}\\right)^2\\cdot\\frac{1}{2}+\\frac{1}{5}\\cdot\\frac{\\lambda}{24}\\cdot\\frac{1}{2}=\\frac{7\\lambda^2+24\\lambda}{5760}. \\nonumber\n\\end{align*} \n\\end{example}\nFrom now on we will specialise to the case $\\lambda=3$ for which we have a proof of existence of $\\Theta_{g,n}$. From Example~\\ref{gen2rel}, we have\n$\\int_{\\overline{\\cal M}_{2,1}}\\Theta_{2,1}\\cdot\\psi_1=\\int_{\\overline{\\cal M}_{2,1}}\\pi^*\\Theta_2\\cdot\\psi_1^2= \\int_{\\overline{\\cal M}_{2}}\\Theta_{2}\\cdot\\kappa_1=\\frac{3}{128}$.\nAll genus 2 terms can be obtained from $\\int_{\\overline{\\cal M}_{2,1}}\\Theta_{2,1}\\cdot\\psi_1$ and \\eqref{dilaton}. 
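Explicitly, repeated application of Proposition~\\ref{th:dilaton} gives\n$$\\int_{\\overline{\\cal M}_{2,m+1}}\\Theta_{2,m+1}\\cdot\\psi_1=(m+2)(m+1)\\cdots 3\\cdot\\int_{\\overline{\\cal M}_{2,1}}\\Theta_{2,1}\\cdot\\psi_1=\\frac{(m+2)!}{2}\\cdot\\frac{3}{128}\n$$\nso the genus 2 part, i.e. the coefficient of $\\hbar$, of $\\log Z^{\\Theta}$ is $\\displaystyle\\sum_{m\\geq 0}\\frac{1}{m!}\\cdot\\frac{(m+2)!}{2}\\cdot\\frac{3}{128}t_0^mt_1=\\frac{3}{128}\\frac{t_1}{(1-t_0)^3}$, the factor $\\frac{1}{m!}$ coming from the $\\frac{1}{n!}$ in \\eqref{eq:tauf} and the $m+1$ orderings of the insertions. 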
Combining this with \\eqref{genus1} we have\n\\begin{equation} \\label{lowgtheta} \n\\log Z^{\\Theta}=-\\frac{1}{8}\\log(1-t_0)+\\hbar\\frac{3}{128}\\frac{t_1}{(1-t_0)^3}+O(\\hbar^2).\n\\end{equation} \n\n\\section{KdV tau functions} \\label{sec:kdv}\nIn this section we prove property~\\ref{thetatau} of Theorem~\\ref{main} which states that a generating function for the intersection numbers $\\int_{\\overline{\\cal M}_{g,n}}\\Theta_{g,n}\\prod_{i=1}^n\\psi_i^{m_i}$ is a tau function of the KdV hierarchy, analogous to the theorem conjectured by Witten and proven by Kontsevich for the intersection numbers $\\int_{\\overline{\\cal M}_{g,n}}\\prod_{i=1}^n\\psi_i^{m_i}$.\n\nA tau function $Z(t_0,t_1,...)$ of the KdV hierarchy (equivalently the KP hierarchy in odd times $p_{2k+1}=t_k\/(2k+1)!!$) gives rise to a solution $U=\\hbar\\frac{\\partial^2}{\\partial t_0^2}\\log Z$ of the KdV hierarchy \n\\begin{equation}\\label{kdv}\nU_{t_1}=UU_{t_0}+\\frac{\\hbar}{12}U_{t_0t_0t_0},\\quad U(t_0,0,0,...)=f(t_0).\n\\end{equation}\nThe first equation in the hierarchy is the KdV equation \\eqref{kdv}, and later equations $U_{t_k}=P_k(U,U_{t_0},U_{t_0t_0},...)$ for $k>1$ determine $U$ uniquely from $U(t_0,0,0,...)$. See \\cite{MJDSol} for the full definition.\n\nThe Brezin-Gross-Witten solution $U^{\\text{BGW}}=\\hbar\\frac{\\partial^2}{\\partial t_0^2}\\log Z^{\\text{BGW}}$ of the KdV hierarchy arises out of a unitary matrix model studied in \\cite{BGrExt,GWiPos}. It is defined by the initial condition\n$$\nU^{\\text{BGW}}(t_0,0,0,...)=\\frac{\\hbar}{8(1-t_0)^2}.\n$$\nThe low genus $g$ terms (= coefficient of $\\hbar^{g-1}$) of $\\log Z^{\\text{BGW}}$ are\n\\begin{align} \\label{lowg}\n\\log Z^{\\text{BGW}}&=-\\frac{1}{8}\\log(1-t_0)+\\hbar\\frac{3}{128}\\frac{t_1}{(1-t_0)^3}+\\hbar^2\\frac{15}{1024}\\frac{t_2}{(1-t_0)^5}+\\hbar^2\\frac{63}{1024}\\frac{t_1^2}{(1-t_0)^6}+O(\\hbar^3)\\\\\n&=(\\frac{1}{8}t_0+\\frac{1}{16}t_0^2+\\frac{1}{24}t_0^3+...)+\\hbar(\\frac{3}{128}t_1+\\frac{9}{128}t_0t_1+...)+\\hbar^2(\\frac{15}{1024}t_2+\\frac{63}{1024}t_1^2+...)+...\\nonumber\n\\end{align}\n\nThe tau function $Z^{\\text{BGW}}(\\hbar,t_0,t_1,...)$ shares many properties of the famous Kontsevich-Witten tau function $Z^{\\text{KW}}(\\hbar,t_0,t_1,...)$ introduced in \\cite{WitTwo}. The Kontsevich-Witten tau function $Z^{\\text{KW}}$ is defined by the initial condition $U^{\\text{KW}}(t_0,0,0,...)=t_0$ for $U^{\\text{KW}}=\\hbar\\frac{\\partial^2}{\\partial t_0^2}\\log Z^{\\text{KW}}$. The low genus terms of $\\log Z^{\\text{KW}}$ are \n$$\\log Z^{\\text{KW}}(\\hbar,t_0,t_1,...)=\\hbar^{-1}(\\frac{t_0^3}{3!}+\\frac{t_0^3t_1}{3!}+\\frac{t_0^4t_2}{4!}+...)+\\frac{t_1}{24}+...\n$$\n\\begin{theorem}[Witten-Kontsevich 1992 \\cite{KonInt,WitTwo}]\n$$Z^{\\text{KW}}(\\hbar,t_0,t_1,...)=\\exp\\sum_{g,n}\\hbar^{g-1}\\frac{1}{n!}\\sum_{\\vec{k}\\in\\mathbb{N}^n}\\int_{\\overline{\\cal M}_{g,n}}\\prod_{i=1}^n\\psi_i^{k_i}t_{k_i}\n$$\nis a tau function of the KdV hierarchy\n\\end{theorem}\nThe main aim of this section is to prove that $Z^{\\Theta}(\\hbar,t_0,t_1,...)=Z^{\\text{BGW}}(\\hbar,t_0,t_1,...)$ which implies property~\\ref{thetatau} of Theorem~\\ref{main}. Agreement of $Z^{\\Theta}(\\hbar,t_0,t_1,...)$ and $Z^{\\text{BGW}}(\\hbar,t_0,t_1,...)$ up to genus 2 is clear from \\eqref{lowgtheta} and \\eqref{lowg} and can be extended to genus 3 using Appendix~\\ref{sec:calc}. 
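(Note that the genus one term of \\eqref{lowg} alone accounts for the defining initial condition, since\n$$\\hbar\\frac{\\partial^2}{\\partial t_0^2}\\left(-\\frac{1}{8}\\log(1-t_0)\\right)=\\frac{\\hbar}{8(1-t_0)^2}=U^{\\text{BGW}}(t_0,0,0,...),\n$$\nwhile each higher genus term displayed in \\eqref{lowg} is at least linear in $t_1,t_2,...$ and so does not contribute at $t_i=0$ for $i\\geq 1$.) 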
Furthermore, $Z^{\\text{BGW}}(\\hbar,t_0,t_1,...)$ also satisfies the homogeneity property \\eqref{dilaton}---see \\cite{DNoTop}---which is apparent in the low genus terms given in \\eqref{lowg}. The homogeneity property satisfied by both $Z^{\\Theta}$ and $Z^{\\text{BGW}}$ reduces the proof that they are equal to checking only finitely many coefficients for each genus. \n\\begin{theorem} \\label{tauf}\n$$Z^{\\Theta}(\\hbar,t_0,t_1,...)=Z^{\\text{BGW}}(\\hbar,t_0,t_1,...).$$ \n\\end{theorem}\n{\\em Outline of the proof of Theorem~\\ref{tauf}.} We summarise here Sections~\\ref{twloop}-\\ref{sec:TR} which contain the proof of Theorem~\\ref{tauf}.\n\nThe equality of $Z^{\\Theta}$ and $Z^{\\text{BGW}}$ up to genus two used the relation between coefficients of $Z^{\\Theta}(\\hbar,t_0,t_1,...)$\n\\begin{equation} \\label{relg2}\n\\int_{\\overline{\\cal M}_{2,1}}\\Theta_{2,1}\\cdot\\psi_1-\\frac{7}{10}\\cdot\\int_{\\overline{\\cal M}_{1,1}}\\Theta_{1,1}\\cdot\\int_{\\overline{\\cal M}_{1,1}}\\Theta_{1,1}-\\frac{1}{10}\\cdot\\int_{\\overline{\\cal M}_{1,2}}\\Theta_{1,2}=0\n\\end{equation}\narising from the genus two Pixton relation. The main idea of the proof is to show that the coefficients of $Z^{\\text{BGW}}(\\hbar,t_0,t_1,...)$ also satisfy \\eqref{relg2}, and more generally a set of relations arising from Pixton relations. \n\nAssociated to the $A_2$ Frobenius manifold is a partition function $Z_{A_2}(\\hbar,\\{t^{\\alpha}_k\\})$, defined in Section~\\ref{contw}. Pixton's relations on the level of intersection numbers arise due to unexpected vanishing of some of the coefficients in $Z_{A_2}(\\hbar,\\{t^{\\alpha}_k\\})$. Briefly, the partition function $Z_{A_2}(\\hbar,\\{t^{\\alpha}_k\\})$ is constructed in two ways---out of intersections of cohomology classes in $H^*(\\overline{\\cal M}_{g,n})$ which form a CohFT and out of a graphical construction that associates coefficients of $Z^{\\text{KW}}(\\hbar,t_0,t_1,...)$ to vertices of the graphs, described in Sections~\\ref{twloop} and \\ref{sec:A2part}. The vanishing of coefficients in $Z_{A_2}(\\hbar,\\{t^{\\alpha}_k\\})$ is clear only from one of these constructions hence leads to relations among intersection numbers. There is a third construction of $Z_{A_2}(\\hbar,\\{t^{\\alpha}_k\\})$ via topological recursion, defined in Section~\\ref{sec:TR}, applied to a spectral curve naturally associated to the $A_2$ Frobenius manifold. This construction gives a second method to prove vanishing of certain coefficients of $Z_{A_2}(\\hbar,\\{t^{\\alpha}_k\\})$.\n\nThe partition function $Z^{\\Theta}_{A_2}(\\hbar,\\{t^{\\alpha}_k\\})$ is built out of the $A_2$ CohFT cohomology classes in $H^*(\\overline{\\cal M}_{g,n})$ times $\\Theta_{g,n}$. It stores relations between intersection numbers involving $\\Theta_{g,n}$ such as \\eqref{relg2}. So for example, the coefficient of $\\hbar t_1^1$\nin $\\log Z^{\\Theta}_{A_2}(\\hbar,\\{t^{\\alpha}_k\\})$ vanishes and is also the relation \\eqref{relg2}. We can construct $Z^{\\Theta}_{A_2}(\\hbar,\\{t^{\\alpha}_k\\})$ via the graphical construction that produces $Z_{A_2}(\\hbar,\\{t^{\\alpha}_k\\})$ although with vertex contributions coming from coefficients of $Z^{\\Theta}(\\hbar,t_0,t_1,...)$ in place of coefficients of $Z^{\\text{KW}}(\\hbar,t_0,t_1,...)$.\n\nThe idea is to produce a partition function $Z^{\\text{BGW}}_{A_2}(\\hbar,\\{t^{\\alpha}_k\\})$ using the graphical construction of $Z_{A_2}(\\hbar,\\{t^{\\alpha}_k\\})$ with $Z^{\\text{KW}}$ replaced by $Z^{\\text{BGW}}$. 
Vanishing of coefficients of $Z^{\\text{BGW}}_{A_2}(\\hbar,\\{t^{\\alpha}_k\\})$ are relations between coefficients of $Z^{\\text{BGW}}$ analogous to each of the relations satisfied by the coefficients of $Z^{\\Theta}_{A_2}(\\hbar,\\{t^{\\alpha}_k\\})$ such as \\eqref{relg2}. To prove vanishing of primary coefficients of $Z^{\\text{BGW}}_{A_2}(\\hbar,\\{t^{\\alpha}_k\\})$ for $n\\leq g-1$, which is enough to prove that the coefficients of $Z^{\\text{BGW}}$ satisfy the same relations as the coefficients of $Z^{\\Theta}$, we use the third construction of $Z_{A_2}(\\hbar,\\{t^{\\alpha}_k\\})$ via topological recursion. We construct a spectral curve to get $Z^{\\text{BGW}}_{A_2}(\\hbar,\\{t^{\\alpha}_k\\})$ and this construction allows us to prove vanishing of certain coefficients. To conclude, we have vanishing of certain coefficients of $Z^{\\Theta}_{A_2}(\\hbar,\\{t^{\\alpha}_k\\})$ due to a cohomological viewpoint, and we have vanishing of corresponding coefficients of $Z^{\\text{BGW}}_{A_2}(\\hbar,\\{t^{\\alpha}_k\\})$ due to the topological recursion viewpoint, and this shows that coefficients of $Z^{\\text{BGW}}$ and $Z^{\\Theta}$ satisfy the same relations.\n\n\n\n\n\n\\subsection{Twisted loop group action} \\label{twloop}\n\nGivental \\cite{GiventalMain} showed how to build partition functions of cohomological field theories which are sequences of cohomology classes in $H^*(\\overline{\\cal M}_{g,n})$---see Section~\\ref{sec:cohft}---out of the basic building block $Z^{\\text{KW}}(\\hbar,t_0,t_1,...)$. This construction can be immediately adapted to allow one to use in place of $Z^{\\text{KW}}(\\hbar,t_0,t_1,...)$ the building blocks $Z^{\\text{BGW}}(\\hbar,t_0,t_1,...)$ and $Z^{\\Theta}(\\hbar,t_0,t_1,...)$ (where the latter two will eventually be shown to coincide). Givental defined an action by elements of the twisted loop group and translations on the basic building block partition functions above. This action was interpreted as an action on sequences of cohomology classes in $H^*(\\overline{\\cal M}_{g,n})$ independently, by Katzarkov-Kontsevich-Pantev, Kazarian and Teleman---see \\cite{PPZRel,ShaBCOV}.\n\nConsider an element of the loop group $LGL(N,\\mathbb{C})$ given by a formal series\n$$R(z) = \\sum_{k=0}^\\infty R_k z^k$$\nwhere $R_k$ are $N\\times N$ matrices and $R_0=I$. We further require $R(z)$ to lie in the twisted loop group $L^{(2)}GL(N,\\mathbb{C})\\subset LGL(N,\\mathbb{C})$, the subgroup which is defined to consist of elements satisfying\n$$ R(z)R(-z)^T=I.\n$$\nDefine \n$$\\mathcal{E}(w,z)=\\frac{I-R^{-1}(z)R^{-1}(w)^T}{w+z}=\\sum_{i,j\\geq 0}\\mathcal{E}_{ij}w^iz^j$$\nwhich has the power series expansion on the right since the numerator $I-R^{-1}(z)R^{-1}(w)^T$ vanishes at $w=-z$ since $R^{-1}(z)$ is also an element of the twisted loop group.\n\nGivental's action is defined via weighted sums over graphs. Consider the following set of decorated graphs with vertices labeled by the set $\\{1,...,N\\}$.\n\\begin{definition}\nFor a graph $\\gamma$ denote by\n$$V(\\gamma),\\quad E(\\gamma),\\quad H(\\gamma),\\quad L(\\gamma)=L^*(\\gamma)\\sqcup L^\\bullet(\\gamma)$$\nits set of vertices, edges, half-edges and leaves. The disjoint splitting of $L(\\gamma)$ into ordinary leaves $L^*$ and dilaton leaves $L^\\bullet$ is part of the structure on $\\gamma$. 
The set of half-edges consists of leaves and oriented edges so there is an injective map $L(\\gamma)\\to H(\\gamma)$ and a multiply-defined map $E(\\gamma)\\to H(\\gamma)$ denoted by $E(\\gamma)\\ni e\\mapsto \\{e^+,e^-\\}\\subset H(\\gamma)$.\nThe map sending a half-edge to its vertex is given by $v:H(\\gamma)\\to V(\\gamma)$. Decorate $\\gamma$ by functions:\n\\begin{align*}\ng&:V(\\gamma)\\to\\mathbb{N}\\\\\n\\alpha&:V(\\gamma)\\to\\{1,...,N\\}\\\\\np&:L^*(\\gamma)\\stackrel{\\cong}{\\to}\\{1,2,...,n\\}\\\\\nk&:H(\\gamma)\\to\\mathbb{N}\n\\end{align*}\nsuch that $k|_{L^\\bullet(\\gamma)}>1$ and $n=|L^*(\\gamma)|$. We write $g_v=g(v)$, $\\alpha_v=\\alpha(v)$, $\\alpha_\\ell=\\alpha(v(\\ell))$, $p_\\ell=p(\\ell)$, $k_\\ell=k(\\ell)$.\nThe {\\em genus} of $\\gamma$ is $g(\\gamma)=\\displaystyle b_1(\\gamma)+\\hspace{-2mm}\\sum_{v\\in V(\\gamma)}\\hspace{-2mm}g(v)$. We say $\\gamma$ is {\\em stable} if any vertex labeled by $g=0$ is of valency $\\geq 3$ and there are no isolated vertices labeled by $g=1$. We write $n_v$ for the valency of the vertex $v$.\nDefine $\\Gamma_{g,n}$ to be the set of all stable, connected, genus $g$, decorated graphs with $n$ ordinary leaves.\n\\end{definition}\nGiven a sequence of classes $\\Omega=\\{\\Omega_{g,n}\\in H^*(\\overline{\\cal M}_{g,n})\\otimes (V^*)^{\\otimes n}\\mid g,n\\in\\mathbb{N},2g-2+n>0\\}$, following \\cite{PPZRel,ShaBCOV} define a new sequence of classes $R\\Omega=\\{(R\\Omega)_{g,n}\\}$ by a weighted sum over stable graphs, with weights defined as follows.\n\\begin{enumerate}[(i)]\n\\item {\\em Vertex weight:} $w(v)=\\Omega_{g(v),n_v}$ at each vertex $v$ \n\\item {\\em Leaf weight:} $w(\\ell)=R^{-1}(\\psi_{p(\\ell)})$ at each ordinary leaf $\\ell\\in L^*(\\gamma)$\n\\item {\\em Edge weight:} $w(e)=\\mathcal{E}(\\psi_e',\\psi_e'')$ at each edge $e$\n\\end{enumerate}\nThen\n$$(R\\Omega)_{g,n}=\\sum_{\\gamma\\in \\Gamma_{g,n}}\\frac{1}{|{\\rm Aut}(\\gamma)|}(p_{\\gamma})_*\\hspace{-6mm}\\prod_{\\begin{array}{c}v\\in V(\\gamma)\\\\\\ell\\in L^*(\\gamma)\\\\ e\\in E(\\gamma)\\end{array}} \\hspace{-4mm}w(v) w(\\ell)w(e)\n$$\nThis defines an action of the twisted loop group on sequences of cohomology classes. It is applied in \\cite{PPZRel,ShaBCOV} to cohomological field theories $\\Omega_{g,n}$ whereas a generalised notion of cohomological field theory is used here.\n \nDefine a translation action of $T(z)\\in zV[[z]]$ on the sequence $\\Omega_{g,n}$ as follows.\n\\begin{enumerate}[(iv)]\n\\item {\\em Dilaton leaf weight:} $w(\\ell)=T(\\psi_{p(\\ell)})$ at each dilaton leaf $\\ell\\in L^\\bullet$.\n\\end{enumerate}\nSo the translation action is given by\n\\begin{equation} \\label{transl}\n(T\\Omega)_{g,n}(v_1\\otimes...\\otimes v_n)=\\sum_{m\\geq 0}\\frac{1}{m!}p_*\\Omega_{g,n+m}(v_1\\otimes...\\otimes v_n\\otimes T(\\psi_{n+1})\\otimes...\\otimes T(\\psi_{n+m}))\n\\end{equation}\nwhere $p:\\overline{\\cal M}_{g,n+m}\\to\\overline{\\cal M}_{g,n}$ is the forgetful map.\nTo ensure the sum \\eqref{transl} is finite, one usually requires $T(z)\\in z^2V[[z]]$, so that $\\dim\\overline{\\cal M}_{g,n+m}=3g-3+n+m$ grows more slowly in $m$ than the degree $2m$ coming from $T$. 
Instead, we control growth of the degree of $\\Omega_{g,n}$ in $n$ to ensure $T(z)\\in zV[[z]]$ produces a finite sum.\n\nThe partition function of a sequence $\\Omega=\\{\\Omega_{g,n}\\in H^*(\\overline{\\cal M}_{g,n})\\otimes (V^*)^{\\otimes n}\\mid g,n\\in\\mathbb{N},2g-2+n>0\\}$ is defined by:\n$$\nZ_{\\Omega}(\\hbar,\\{t^{\\alpha}_k\\})=\\exp\\sum_{g,n,\\vec{k}}\\frac{\\hbar^{g-1}}{n!}\\int_{\\overline{\\cal M}_{g,n}}\\Omega_{g,n}(e_{\\alpha_1}\\otimes...\\otimes e_{\\alpha_n})\\cdot\\prod_{j=1}^n\\psi_j^{k_j}\\prod t^{\\alpha_j}_{k_j}\n$$\nwhere $\\{e_1,...,e_N\\}$ is a basis of $V$, $\\alpha\\in\\{1,...,N\\}$ and $k\\in\\mathbb{N}$. For $\\dim V=1$ and $\\Omega_{g,n}=1\\in H^*(\\overline{\\cal M}_{g,n})$, $Z_{\\Omega}(\\hbar,\\{t_k\\})=Z^{\\text{KW}}(\\hbar,\\{t_k\\})$. Similarly, for $\\Omega_{g,n}=\\Theta_{g,n}\\in H^*(\\overline{\\cal M}_{g,n})$, $Z_{\\Omega}(\\hbar,\\{t_k\\})=Z^{\\Theta}(\\hbar,\\{t_k\\})$.\n\nThe action on sequences of cohomology classes above immediately gives rise to an action on partition functions, which store correlators of the sequence of cohomology classes, known as the Givental action \\cite{DSSGiv,GivGro,ShaBCOV}. It gives a graphical construction of the partition functions $Z_{R\\Omega}$ and $Z_{T\\Omega}$ obtained from $Z_{\\Omega}$. \n\nThe graphical expansions can be conveniently expressed via the action of differential operators $\\hat{R}$ and $\\hat{T}$: $Z_{R\\Omega}=\\hat{R}Z_{\\Omega}$, $Z_{T\\Omega}=\\hat{T}Z_{\\Omega}$ as follows. Givental used $R(z)$ to produce a differential operator $\\hat{R}$, a so-called quantisation of $R(z)$, which acts on a product of tau-functions to produce a generating series for the correlators of the cohomological field theory. Put $R(z)=\\displaystyle\\exp(\\sum_{\\ell>0} r_\\ell z^\\ell)$.\n$$\n\\hat{R}=\\exp\\left\\{\\sum_{\\ell=1}^{\\infty}\\sum_{\\alpha,\\beta}\n\\left(\\sum_{k=0}^{\\infty}u^{k,\\beta}(r_\\ell)_\\beta^\\alpha\\frac{\\partial}{\\partial u^{k+\\ell,\\alpha}}+\\frac{\\hbar}{2}\\sum_{m=0}^{\\ell-1}(-1)^{m+1}(r_\\ell)^{\\alpha}_{\\beta}\\frac{\\partial^2}{\\partial u^{m,\\alpha}\\partial u^{\\ell-m-1,\\beta}}\\right)\n\\right\\}.\n$$\nThe action of the differential operator $\\hat{R}$ on $Z_{\\Omega}(\\hbar,\\{t^{\\alpha}_k\\})$ is equivalent to the weighted sum over graphs. The first term $u^{k,\\beta}(r_\\ell)_\\beta^\\alpha\\frac{\\partial}{\\partial u^{k+\\ell,\\alpha}}$ gives ordinary leaf contributions, and the second term $\\frac{\\hbar}{2}(-1)^{m+1}(r_\\ell)^{\\alpha}_{\\beta}\\frac{\\partial^2}{\\partial u^{m,\\alpha}\\partial u^{\\ell-m-1,\\beta}}$ gives edge contributions. \n\nAssociated to $T(z)=\\sum_{\\ell>0}v_\\ell z^\\ell\\in zV[[z]]$, with components $v_\\ell=\\sum_\\alpha v^{\\alpha}_\\ell e_\\alpha$,\nis the differential operator\n$$\\hat{T}=\\exp\\left\\{\\sum_{\\ell=1}^{\\infty}\\sum_{\\alpha}\nv^{\\alpha}_\\ell\\frac{\\partial}{\\partial u^{\\ell,\\alpha}}\\right\\}\n$$\nwhich is a translation operator, shifting $u^{\\ell,\\alpha}\\mapsto u^{\\ell,\\alpha}+v^{\\alpha}_\\ell$. It gives dilaton leaf contributions to the weighted sum of graphs. The differential operators $\\hat{R}$ and $\\hat{T}$ act on a product of functions $Z^{\\text{BGW}}(\\hbar,t_0,t_1,...)$, $Z^{\\text{KW}}(\\hbar,t_0,t_1,...)$ and $Z^{\\Theta}(\\hbar,t_0,t_1,...)$ where the TFT is encoded via rescaling of the variables. 
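\n\nTo illustrate why the edge weight $\\mathcal{E}(w,z)$ is a genuine power series, the following short {\\tt sympy} sketch (an editorial aside only, not part of the argument) works with the rank one toy example $R(z)=e^{az}$, which lies in the twisted loop group since $R(z)R(-z)=1$, checks that the numerator vanishes on $w=-z$, and reads off the first coefficients $\\mathcal{E}_{ij}$.\n\\begin{verbatim}\nimport sympy as sp\n\na, w, z, u = sp.symbols('a w z u')\n\n# Rank one (N=1) toy example: R(z) = exp(a z) satisfies R(z)R(-z) = 1.\nnumerator = 1 - sp.exp(-a*(w + z))          # 1 - R^{-1}(z) R^{-1}(w)\nassert numerator.subs(w, -z) == 0           # vanishes on w = -z\n\n# Hence E(w,z) = numerator/(w+z) is a power series; expand it in u = w + z.\nE = sp.series((1 - sp.exp(-a*u))/u, u, 0, 4).removeO()\nprint(sp.expand(E.subs(u, w + z)))\n# a - a**2*(w+z)/2 + a**3*(w+z)**2/6 - ..., so E_00 = a, E_10 = E_01 = -a**2/2.\n\\end{verbatim}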
\n\n\\subsubsection{Pixton's relations} \\label{pixrel}\nDual to any point $(C,p_1,...,p_n)\\in\\overline{\\cal M}_{g,n}$ is its stable graph $\\Gamma$ with vertices $V(\\Gamma)$ representing irreducible components of $C$, internal edges representing nodal singularities and a (labeled) external edge for each $p_i$. Each vertex is labeled by a genus $g(v)$ and has valency $n(v)$. The genus of a stable graph is $g(\\Gamma)=\\displaystyle h_1(\\Gamma)+\\hspace{-3mm}\\sum_{v\\in V(\\Gamma)}g(v)$. For a given stable graph $\\Gamma$ of genus $g$ and with $n$ external edges we have\n$$\\phi_{\\Gamma}:\\overline{\\cal M}_{\\Gamma}=\\prod_{v\\in V(\\Gamma)}\\overline{\\cal M}_{g(v),n(v)}\\to\\overline{\\cal M}_{g,n}.$$\n\nThe {\\em strata algebra} $S_{g,n}$ is a finite-dimensional vector space over $\\mathbb{Q}$ with basis given by isomorphism classes of pairs $(\\Gamma, \\omega)$, for $\\Gamma$ a stable graph of genus $g$ with $n$ external edges and $\\omega\\in H^*(\\overline{\\cal M}_{\\Gamma})$ a product of $\\kappa$ and $\\psi$ classes in each $\\overline{\\cal M}_{g(v),n(v)}$ for each vertex $v\\in V(\\Gamma)$. There is a natural map $q:S_{g,n}\\to H^*(\\overline{\\cal M}_{g,n})$ defined by the push-forward $q(\\Gamma, \\omega)=\\phi_{\\Gamma*}(\\omega)\\in H^*(\\overline{\\cal M}_{g,n})$. The map $q$ allows one to define a multiplication on $S_{g,n}$, essentially coming from intersection theory in $\\overline{\\cal M}_{g,n}$, which can be described purely graphically. The image $q(S_{g,n})\\subset H^*(\\overline{\\cal M}_{g,n})$ is the tautological ring $RH^*(\\overline{\\cal M}_{g,n})$ and an element of the kernel of $q$ is a tautological relation. See \\cite{PPZRel}, Section~0.3 for a good description of $S_{g,n}$. \n\nThe main result of \\cite{PPZRel} is the construction of elements $R^d_{g,A}\\in S_{g,n}$ for $A=(a_1,...,a_n)$, $a_i\\in\\{0,1\\}$ satisfying $q(R^d_{g,A})=0$, i.e. tautological relations in $H^{2d}(\\overline{\\cal M}_{g,n})$. The elements $R^d_{g,A}$ are constructed out of the elements $(R\\Omega)_{g,n}$ defined above. The element $R^1_{2}$, which gives a relation in $H^{2}(\\overline{\\cal M}_{2})$, is given in Example~\\ref{gen2rel}.\n\n\n\n\n\n\\subsection{Construction of elements of the twisted loop group} \\label{contw}\nConsider the linear system\n\\begin{equation} \\label{compat} \n\\left(\\frac{d}{dz}-U-\\frac{V}{z}\\right)Y=0\n\\end{equation}\nwhere $Y(z)\\in\\mathbb{C}^N$, $U=\\mathrm{diag}(u_1,...,u_N)$ for $u_i$ distinct and $V$ is skew symmetric. An asymptotic solution of \\eqref{compat} as $z\\to\\infty$\n$$Y=R(z^{-1})e^{zU},\\quad R(z)=I+R_1z+R_2z^2+...$$\ndefines a power series $R(z)$ with coefficients given by $N\\times N$ matrices which is easily shown to satisfy $R(z)R^T(-z)=I$, hence $R(z)\\in L^{(2)}GL(N,\\mathbb{C})$. Substitute $Y=R(z^{-1})e^{zU}$ into \\eqref{compat} and send $z\\mapsto z^{-1}$ to get\n$$\n0=\\left(\\frac{d}{dz}+\\frac{U}{z^2}+\\frac{V}{z}\\right)R(z)e^{U\/z}\n=\\left(\\frac{d}{dz}R(z)+\\frac{1}{z^2}[U,R(z)]+\\frac{1}{z}VR(z)\\right)e^{U\/z}\n$$\nor equivalently\n\\begin{equation} \\label{teleman}\n[R_{k+1},U]=(k+V)R_k,\\quad k=0,1,...\n\\end{equation}\nWe will describe two natural solutions of \\eqref{compat}: one using the data of a Frobenius manifold, and one using the data of a Riemann surface equipped with a meromorphic function. These will give two constructions of elements $R(z)$ of the twisted loop group. 
The construction of an element $R(z)$ in two ways using both of these viewpoints will be crucial to proving that $Z^{\\Theta}$ and $Z^{\\rm BGW}$ coincide.\n\n\n\nDubrovin \\cite{DubGeo} proved that there is a Frobenius manifold structure on the space of pairs $(U,V)$ where $V$ varies in the $u_i$ so that the monodromy of $Y$ around 0 remains fixed. Conversely, he associated a family of linear systems \\eqref{compat} depending on the canonical coordinates $(u_1,...,u_N)$ of any semisimple Frobenius manifold $M$ as follows. \n\nRecall that a Frobenius manifold is a complex manifold $M$ equipped with an associative product on its tangent bundle compatible with a flat metric---a nondegenerate symmetric bilinear form---on the manifold \\cite{DubGeo}. It is encoded by a single function $F(t_1,...,t_{N})$, known as the {\\em prepotential}, that satisfies a nonlinear partial differential equation known as the Witten-Dijkgraaf-Verlinde-Verlinde (WDVV) equation:\n$$F_{ijm}\\eta^{mn}F_{k\\ell n}=F_{i\\ell m}\\eta^{mn}F_{jkn},\\quad \\eta_{ij}=F_{0ij}\n$$\nwhere $\\eta^{ik}\\eta_{kj}=\\delta_{ij}$, $F_i=\\frac{\\partial}{\\partial t_i}F$ and $\\{t_1,...,t_{N}\\}$ are (flat) local coordinates of $M$ which correspond to the $k=0$ coordinates above via $t_{\\alpha}=t^{\\alpha}_0$, $\\alpha=1,...,N$. It comes equipped with the two vector fields $1\\!\\!1$, the unit for the product, and the Euler vector field $E$ which describes symmetries of the Frobenius manifold, neatly encoded by $E\\cdot F(t_1,...,t_{N})=c\\cdot F(t_1,...,t_{N})$ for some $c\\in\\mathbb{C}$. Multiplication by the Euler vector field $E$ produces an endomorphism $U$ with eigenvalues $\\{u_1,...,u_N\\}$ known as {\\em canonical coordinates} on $M$. They give rise to vector fields $\\partial\/\\partial u_i$ with respect to which the metric $\\eta$, product $\\raisebox{-0.5ex}{\\scalebox{1.8}{$\\cdot$}}$ and Euler vector field $E$ are diagonal\n$$\\frac{\\partial}{\\partial u_i}\\raisebox{-0.5ex}{\\scalebox{1.8}{$\\cdot$}}\\frac{\\partial}{\\partial u_j}=\\delta_{ij}\\frac{\\partial}{\\partial u_i},\\quad \\eta\\left(\\frac{\\partial}{\\partial u_i},\\frac{\\partial}{\\partial u_j}\\right)=\\delta_{ij}\\Delta_i,\\quad E=\\sum u_i\\frac{\\partial}{\\partial u_i}.$$\n\nThe differential equation \\eqref{compat} in $z$ is defined at any point of the Frobenius manifold using $U$, the endomorphism defined by multiplication by the Euler vector field $E$, and the endomorphism $V=[\\Gamma,U]$ where $\\Gamma_{ij}=\\frac{\\partial_{u_i}\\Delta_j}{2\\sqrt{\\Delta_i\\Delta_j}}$ for $i\\neq j$ are the so-called rotation coefficients of the metric $\\eta$ in the normalised canonical basis. Hence associated to each point of the Frobenius manifold is an element $R(z)=\\sum R_kz^k$ of the twisted loop group. From $(H=T_pM,\\raisebox{-0.5ex}{\\scalebox{1.8}{$\\cdot$}},E,V)$ the endomorphisms $R_k$ of $H$ are defined recursively from $R_1$ by $R_0=I$ and \\eqref{teleman}. 
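\n\nNote that \\eqref{teleman} only determines the off-diagonal part of each $R_{k+1}$, since $[R_{k+1},U]_{ij}=(u_j-u_i)(R_{k+1})_{ij}$; the diagonal part requires extra input such as the twisted loop group condition. The following {\\tt sympy} sketch (an editorial illustration; the numerical values of $U$, $V$, $R_1$, $R_2$ at $(u_1,u_2)=(2,-2)$ are those of Examples~\\ref{A2RT} and \\ref{specA2B} below) extracts the off-diagonal parts of $R_1$ and $R_2$ this way.\n\\begin{verbatim}\nimport sympy as sp\n\ni_ = sp.I\n# Values at (u_1,u_2) = (2,-2) of the A_2 example treated below:\nU  = sp.diag(2, -2)\nV  = sp.Rational(1, 6) * sp.Matrix([[0, i_], [-i_, 0]])\nR0 = sp.eye(2)\nR1 = sp.Rational(1, 144) * sp.Matrix([[-1, -6*i_], [-6*i_, 1]])\nR2 = sp.Rational(35, 41472) * sp.Matrix([[-1, 12*i_], [-12*i_, -1]])\n\ndef off_diag_step(Rk, k):\n    # [R_{k+1}, U]_{ij} = (u_j - u_i)(R_{k+1})_{ij}, so the off-diagonal part of\n    # R_{k+1} is ((k+V) R_k)_{ij} / (u_j - u_i); the diagonal needs extra input.\n    rhs = (k*sp.eye(2) + V) * Rk\n    out = sp.zeros(2, 2)\n    for i in range(2):\n        for j in range(2):\n            if i != j:\n                out[i, j] = rhs[i, j] / (U[j, j] - U[i, i])\n    return out\n\noff = lambda M: M - sp.diag(M[0, 0], M[1, 1])\nassert sp.simplify(off_diag_step(R0, 0) - off(R1)) == sp.zeros(2, 2)\nassert sp.simplify(off_diag_step(R1, 1) - off(R2)) == sp.zeros(2, 2)\n\\end{verbatim}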
\n\n\\begin{example} \nThe 2-dimensional versal deformation space of the $A_2$ singularity has a Frobenius manifold structure \\cite{DubGeo,SaiPer} defined by the prepotential \n$$F(t_1,t_2)=\\frac12 t_1^2t_2+\\frac{1}{72}t_2^4.\n$$\nThe prepotential determines the flat metric with respect to the basis $\\{\\frac{\\partial}{\\partial t_1},\\frac{\\partial}{\\partial t_2}\\}$ by $\\eta_{ij}=F_{1ij}$ hence\n$$\\eta=\\left(\\begin{array}{cc}0&1\\\\1&0\\end{array}\\right)\n$$\nand the (commutative) product by $\\eta(\\partial\/\\partial t_i\\raisebox{-0.5ex}{\\scalebox{1.8}{$\\cdot$}}\\partial\/\\partial t_j,\\partial\/\\partial t_k)=F_{ijk}$ hence\n$$\\frac{\\partial}{\\partial t_1}\\raisebox{-0.5ex}{\\scalebox{1.8}{$\\cdot$}}\\frac{\\partial}{\\partial t_1}=\\frac{\\partial}{\\partial t_1},\\quad \\frac{\\partial}{\\partial t_1}\\raisebox{-0.5ex}{\\scalebox{1.8}{$\\cdot$}}\\frac{\\partial}{\\partial t_2}=\\frac{\\partial}{\\partial t_2},\\quad \\frac{\\partial}{\\partial t_2}\\raisebox{-0.5ex}{\\scalebox{1.8}{$\\cdot$}}\\frac{\\partial}{\\partial t_2}=\\frac13 t_2\\frac{\\partial}{\\partial t_1}.\n$$\nThe unit and Euler vector fields are\n$$1\\!\\!1=\\frac{\\partial}{\\partial t_1},\\quad E=t_1\\frac{\\partial}{\\partial t_1}+\\frac23t_2\\frac{\\partial}{\\partial t_2}$$ \nand $E\\cdot F(t_1,t_2)=\\frac83 F(t_1,t_2)$.\nThe canonical coordinates are \n$$ u_1=t_1+\\frac{2}{3\\sqrt{3}}t_2^{3\/2},\\quad u_2=t_1-\\frac{2}{3\\sqrt{3}}t_2^{3\/2}.\n$$\nWith respect to the normalised canonical basis, the rotation coefficients $\\Gamma_{12}=\\dfrac{-i\\sqrt{3}}{8}t_2^{-3\/2}=\\Gamma_{21}$ give rise to\n$V=[\\Gamma,U]=\\frac{i\\sqrt{3}}{2}t_2^{-3\/2}\\left(\\begin{array}{cc}0&1\\\\-1&0\\end{array}\\right)\n$. In canonical coordinates we have\n\\begin{equation} \\label{A2V} U=\\left(\\begin{array}{cc}u_1&0\\\\0&u_2\\end{array}\\right),\\quad V=\\frac{2i}{3(u_1-u_2)}\\left(\\begin{array}{cc}0&1\\\\-1&0\\end{array}\\right).\n\\end{equation}\nThe metric $\\eta$ applied to the vector fields $\\partial\/\\partial u_i=\\frac12\\left(\\partial\/\\partial t_1-(-1)^i(3\/t_2)^{1\/2}\\partial\/\\partial t_2\\right)$ is $\\eta\\left(\\frac{\\partial}{\\partial u_i},\\frac{\\partial}{\\partial u_j}\\right)=\\delta_{ij}\\Delta_i$ where $\\Delta_1=\\frac{\\sqrt{3}}{2}t_2^{-1\/2}=-\\Delta_2$. \n\\end{example} \n\nWe consider three natural bases of the tangent space $H=T_pM$ at any point $p$ of a Frobenius manifold. These are the flat basis $\\{\\partial\/\\partial t_i\\}$, which gives a constant metric $\\eta$; the canonical basis $\\{\\partial\/\\partial u_i\\}$, which gives a trivial product $\\raisebox{-0.5ex}{\\scalebox{1.8}{$\\cdot$}}$; and the normalised canonical basis $\\{v_i\\}$, for $v_i=\\Delta_i^{-1\/2}\\partial\/\\partial u_i$, which gives a trivial metric $\\eta$. (A different choice of square root of $\\Delta_i$ would simply give a different choice of normalised canonical basis.) The transition matrix $\\Psi$ from flat coordinates to normalised canonical coordinates sends the metric $\\eta$ to the dot product, i.e. $\\Psi^T\\Psi=\\eta$. The TFT structure on $H$ induced from $\\eta$ and $\\raisebox{-0.5ex}{\\scalebox{1.8}{$\\cdot$}}$ is diagonal in the normalised canonical basis. It is given by\n$$\n\\Omega_{g,n}(v_i^{\\otimes n})=\\Delta_i^{1-g-\\frac12 n}\n$$\nand vanishes on mixed products of $v_i$ and $v_j$, $i\\neq j$. 
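\n\nThe data of the $A_2$ example above can be checked symbolically. The following {\\tt sympy} sketch (an editorial verification only) recovers the metric and product from the prepotential, checks the Euler homogeneity $E\\cdot F=\\frac83 F$, and confirms that the eigenvalues of multiplication by $E$ in the flat basis are the canonical coordinates $u_{1,2}=t_1\\pm\\frac{2}{3\\sqrt3}t_2^{3\/2}$.\n\\begin{verbatim}\nimport sympy as sp\n\nt1, t2 = sp.symbols('t1 t2', positive=True)\nt = [t1, t2]\nF = sp.Rational(1, 2)*t1**2*t2 + sp.Rational(1, 72)*t2**4\nc = lambda i, j, k: sp.diff(F, t[i], t[j], t[k])      # third derivatives F_{ijk}\n\n# Metric eta_{ij} = F_{1ij} and the product d/dt2 . d/dt2 = (1/3) t2 d/dt1.\neta = sp.Matrix(2, 2, lambda i, j: c(0, i, j))\nassert eta == sp.Matrix([[0, 1], [1, 0]])\nassert c(1, 1, 0) == 0 and sp.simplify(c(1, 1, 1) - t2/3) == 0\n\n# Euler homogeneity E.F = (8/3) F for E = t1 d/dt1 + (2/3) t2 d/dt2.\nEF = t1*sp.diff(F, t1) + sp.Rational(2, 3)*t2*sp.diff(F, t2)\nassert sp.simplify(EF - sp.Rational(8, 3)*F) == 0\n\n# Multiplication by E in the flat basis; its eigenvalues are the canonical coordinates.\nE = [t1, sp.Rational(2, 3)*t2]\netainv = eta.inv()\nmultE = sp.Matrix(2, 2, lambda a, b: sum(E[k]*sum(etainv[a, m]*c(k, b, m)\n                                                  for m in range(2)) for k in range(2)))\nu1 = t1 + sp.Rational(2, 3)*t2**sp.Rational(3, 2)/sp.sqrt(3)\nu2 = t1 - sp.Rational(2, 3)*t2**sp.Rational(3, 2)/sp.sqrt(3)\nassert all(any(sp.simplify(ev - ui) == 0 for ui in (u1, u2)) for ev in multE.eigenvals())\n\\end{verbatim}\n\n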
We find the normalised canonical basis most useful for comparisons with topological recursion---see Section~\\ref{sec:TR}\n\n\n\n\n\n\n\n\n\n\n\n\n \n\n\\iffalse\nThe coefficients $R_k$ are defined using $\\Psi$, the transition matrix from flat coordinates to normalised canonical coordinates, via the initial condition $R_0=I$ and the inductive equation\n\\begin{equation}\\label{eqdefR}\nd\\left( R(z) \\Psi \\right) = {\\left[R(z),dU\\right] \\over z} \\Psi\n\\end{equation}\nwhich uniquely determines $R(z)$ up to left multiplication by a diagonal matrix $D(z)$ independent of $u$ with $D(0)=I$.\nUsing $d\\Psi = \\left[\\Gamma,dU\\right] \\Psi$ one can write\n$$\nd\\left(R(z) \\Psi \\right) = d\\left[R(z)\\right] \\Psi + R(z) d \\Psi = \nd\\left[R(z)\\right] \\Psi + R(z) \\left[\\Gamma,dU\\right] \\Psi .\n$$\nTogether with equation \\eqref{eqdefR} and the invertibility of $\\Psi$, this gives\n\\begin{equation} \\label{eqdefR2}\ndR(z) = {\\left[R(z),dU\\right] \\over z} - R(z) \\left[\\Gamma,dU\\right].\n\\end{equation}\nThis re-expresses the equation for $R(z)$ in terms of the rotation coefficients, which uses less information than the full metric, encoded in $\\Psi$. Since $dU(1\\!\\!1) =I$, an immediate consequence of \\eqref{eqdefR2} is\n\\begin{equation} \\label{unR}\n1\\!\\!1\\cdot R(z)=0.\n\\end{equation}\n\n\n\n\n\nIf the theory is homogenous, then invariance under the action of the Euler field \n\\begin{equation} \\label{homR}\n(z\\partial_z+E)\\cdot R(z)=0\n\\end{equation}\nfixes the diagonal ambiguity in $R(z)$.\n\n\nThe first non-trivial term $R_1$ of $R(z)$ is given by the rotation coefficients\n\\begin{equation} \\label{R1=Gam}\nR_1=\\Gamma.\n\\end{equation}\nThis follows from comparing the constant (in $z$) term in \\eqref{eqdefR} which is\n$$d\\Psi=[R_1,dU]\\Psi\n$$\nto equation \\eqref{psipde} given by $d\\Psi = \\left[\\Gamma,dU\\right] \\Psi$. Since $dU$ is diagonal with distinct diagonal terms, we see that \\eqref{R1=Gam} holds for off-diagonal terms, and the ambiguity in the diagonal term for both is unimportant---it can be fixed in $R_1$ by \\eqref{eqdefR} together with $E\\cdot R_1=-R_1$.\n\\fi\n\n\n\\begin{example} \\label{A2RT}\nOn the $A_2$ Frobenius manifold restrict to the point $(u_1,u_2)=(2,-2)$, or equivalently $(t_1,t_2)=(0,3)$. Then $\\Delta_1=1\/2=-\\Delta_2$ determines the TFT. (In the flat basis the TFT structure at $(t_1,t_2)=(0,3)$ on $H$ is given by $\\Omega_{g,n}((\\partial\/\\partial t_1)^{\\otimes n_0}\\otimes (\\partial\/\\partial t_2)^{\\otimes n_1})=2^g\\cdot\\delta^{\\rm odd}_{g+n_1}$. )\nAt the point $(u_1,u_2)=(2,-2)$ the element $R(z)\\in L^{(2)}GL(2,\\mathbb{C})$ satisfying \\eqref{teleman} with $U$ and $V$ given by\n$$U=\\left(\\begin{array}{cc}2&0\\\\0&-2\\end{array}\\right),\\quad V=\\frac{1}{6}\\left(\\begin{array}{cc}0&i\\\\-i&0\\end{array}\\right)\n$$\nis\n$$ R(z)=\\sum_m\\frac{(6m)!}{(6m-1)(3m)!(2m)!}\\left(\\begin{array}{cc}-1&(-1)^m6mi\\\\-6mi&(-1)^{m-1}\\end{array}\\right)\\left(\\frac{z}{1728}\\right)^m.$$\nWe also have\n$$T_0(z) = 1\\!\\!1-R^{-1}(z)(1\\!\\!1),\\quad 1\\!\\!1=\\frac{1}{\\sqrt{2}}\\left(\\begin{array}{c}1\\\\i\\end{array}\\right)\n$$\n$$T(z) = zT_0(z).\n$$\n\\end{example}\n\n\\begin{remark}\nEquation~\\eqref{teleman} defines $R(z)$ as an endomorphism valued power series over an $N$-dimensional vector space $H$ equipped with a symmetric, bilinear, nondegenerate form $\\eta$. 
The matrix $R(z)$ here---using a basis for $H$ so that $\\eta$ is dot product---is related to the matrix $R(z)$ in \\cite{PPZRel} by conjugation by the transition matrix $\\Psi$ from flat coordinates to normalised canonical coordinates\n$$ R(z)=\\Psi\\sum_m\\frac{(6m)!}{(3m)!(2m)!}\\left(\\begin{array}{cc}\\frac{1+6m}{1-6m}&0\\\\0&1\\end{array}\\right)\\left(\\begin{array}{cc}0&1\\\\1&0\\end{array}\\right)^m\n\\left(\\frac{z}{1728}\\right)^m\\Psi^{-1},\\quad \\Psi=\\frac{1}{\\sqrt{2}}\\left(\\begin{array}{cc}1&1\\\\i&-i\\end{array}\\right).\n$$ \n\\end{remark}\n\n\\subsubsection{Spectral curves and the twisted loop group.} \\label{specLG2}\nAn element of the twisted loop group $R(z)\\in L^{(2)}GL(N,\\mathbb{C})$ can be naturally defined from a Riemann surface $\\Sigma$ equipped with a bidifferential $B(p_1,p_2)$ on $\\Sigma\\times\\Sigma$ and a meromorphic function $x:\\Sigma\\to\\mathbb{C}$, for $N=$ the number of zeros of $dx$. A basic example is the function $x=z^2$ on $\\Sigma=\\mathbb{C}$ which gives rise to the constant element $R(z)=1\\in GL(1,\\mathbb{C})$. More generally, any function $x$ that looks like this example locally---$x=s^2+c$ for $s$ a local coordinate around a zero of $dx$ and $c\\in\\mathbb{C}$---gives $R(z)=I+R_1z+...\\in L^{(2)}GL(N,\\mathbb{C})$ which is in some sense a deformation of $I\\in GL(N,\\mathbb{C})$, or $N$ copies of the basic example.\n\n\\begin{definition} \\label{bidiff} On any compact Riemann surface with a choice of $\\mathcal A$-cycles $(\\Sigma,\\{ {\\mathcal A}_i\\}_{i=1,...,g})$, define a {\\em fundamental normalised bidifferential of the second kind} $B(p,p')$ to be a symmetric tensor product of differentials on $\\Sigma\\times\\Sigma$, uniquely defined by the properties that it has a double pole on the diagonal of zero residue, double residue equal to $1$, no further singularities and normalised by $\\int_{p\\in{\\mathcal A}_i}B(p,p')=0$, $i=1,...,g$, \\cite{FayThe}. On a rational curve, which is sufficient for this paper, $B$ is the Cauchy kernel\n$$\nB(z_1,z_2)=\\frac{dz_1dz_2}{(z_1-z_2)^2}.$$ \n\\end{definition}\nThe bidifferential $B(p,p')$ acts as a kernel for producing meromorphic differentials on the Riemann surface $\\Sigma$ via $\\omega(p)=\\int_{\\Lambda}\\lambda(p')B(p,p')$ where $\\lambda$ is a function defined along the contour $\\Lambda\\subset\\Sigma$. Depending on the choice of $(\\Lambda,\\lambda)$, $\\omega$ can be a differential of the 1st kind (holomorphic), 2nd kind (zero residues) or 3rd kind (simple poles). A fundamental example is $B({\\cal P},p)$ which is a normalised (trivial $\\mathcal A$-periods) differential of the second kind holomorphic on $\\Sigma\\backslash{\\cal P}$ with a double pole at a simple zero ${\\cal P}$ of $dx$. 
The expression $B({\\cal P},p)$\nis an integral over a closed contour around ${\\cal P}$ defined as follows.\n\\begin{definition}\\label{evaluationform}\nFor a Riemann surface equipped with a meromorphic function $(\\Sigma,x)$ we define evaluation of any meromorphic differential $\\omega$ at a simple zero ${\\cal P}$ of $dx$ by\n$$\n\\omega({\\cal P}):=\\mathop{\\,\\rm Res\\,}_{p={\\cal P}}\\frac{\\omega(p)}{\\sqrt{2(x(p)-x({\\cal P}))}}\n$$\nwhere we choose a branch of $\\sqrt{x(p)-x({\\cal P})}$ once and for all at ${\\cal P}$ to remove the $\\pm1$ ambiguity.\n\\end{definition}\n\n\n\nShramchenko \\cite{ShrRie} constructed a solution of \\eqref{compat} with $V=[\\mathcal{B},U]$ for $\\mathcal{B}_{ij}=B({\\cal P}_i,{\\cal P}_j)$ (defined for $i\\neq j$) given by\n$$Y(z)^i_j= -\\frac{\\sqrt{z}}{\\sqrt{2\\pi}}\\int_{\\Gamma_j} B({\\cal P}_i,p)\\cdot e^{-x(p)\\over z}.$$\nThe proof in \\cite{ShrRie} is indirect, showing that $Y(z)^i_j$ satisfies an associated set of PDEs in $u_i$, and using the Rauch variational formula to calculate $\\partial_{u_k}B({\\cal P}_i,p)$. Rather than giving this proof, we will work directly with the associated element $R(z)$ of the twisted loop group.\n\\begin{definition} \\label{BtoR}\nDefine the asymptotic series for $z$ near 0 by\n\\begin{equation} \\label{eq:BtoR}\nR^{-1}(z)^i_j = -\\frac{\\sqrt{z}}{\\sqrt{2\\pi}}\\int_{\\Gamma_j} B({\\cal P}_i,p)\\cdot e^{u_j-x(p)\\over z}\n\\end{equation}\nwhere $\\Gamma_j$ is a path of steepest descent for $-x(p)\/z$ containing $u_j=x({\\cal P}_j)$. \n\\end{definition}\nNote that the asymptotic expansion of the contour integral \\eqref{eq:BtoR} for $z$ near $0$ depends only on the intersection of $\\Gamma_j$ with a neighbourhood of $p={\\cal P}_j$. When $i=j$, the integrand has zero residue at $p={\\cal P}_j$ so we deform $\\Gamma_j$ to go around ${\\cal P}_j$ to get a well-defined integral. Locally, this is the same as defining $\\int_\\mathbb{R} s^{-2}\\exp(-s^2)ds=-2\\sqrt{\\pi}$ by integrating the analytic function $z^{-2}\\exp(-z^2)$ along the real line in $\\mathbb{C}$ deformed to avoid 0.\n\\begin{lemma} [\\cite{ShrRie}] \\label{eynlem}\nThe asymptotic series $R(z)$ defined in \\eqref{eq:BtoR} satisfies the twisted loop group condition\n\\begin{equation}\\label{twloopgp}\nR(z) R^T(-z) = Id.\n\\end{equation}\n \\end{lemma}\n \\begin{proof}\nThe proof here is taken from \\cite{DNOPSPri}. We have\n \\begin{align} \\label{unberg}\n\\sum_{i=1}^N\\mathop{\\,\\rm Res\\,}_{q={\\cal P}_i}\\frac{B(p,q)B(p',q)}{dx(q)}&=-\\mathop{\\,\\rm Res\\,}_{q=p}\\frac{B(p,q)B(p',q)}{dx(q)}-\\mathop{\\,\\rm Res\\,}_{q=p'}\\frac{B(p,q)B(p',q)}{dx(q)}\\\\ \\nonumber\n &=-d_p\\left(\\frac{B(p,p')}{dx(p)}\\right)-d_{p'}\\left(\\frac{B(p,p')}{dx(p')}\\right)\n \\end{align}\nwhere the first equality uses the fact that the only poles of the integrand are $\\{p,p',{\\cal P}_i,i=1,...,N\\}$, and the second equality uses the Cauchy formula satisfied by the Bergman kernel. 
Define the Laplace transform of the Bergman kernel by\n$$\n\\check{B}^{i,j}(z_1,z_2) =\n\\frac{ e^{ \\frac{u_i }{ z_1}+ \\frac{u_j}{ z_2}}}{2 \\pi \\sqrt{z_1z_2}} \\int_{\\Gamma_i}\n\\int_{\\Gamma_j} B(p,p') e^{- \\frac{x(p) }{ z_1}- \\frac{x(p')}{ z_2}} .\n$$\nThe Laplace transform of the LHS of \\eqref{unberg} is\n \\begin{align*} \n \\frac{ e^{ \\frac{u_i }{ z_1}+ \\frac{u_j}{ z_2}}}{2 \\pi \\sqrt{z_1z_2}} \\int_{\\Gamma_i}\n\\int_{\\Gamma_j} e^{- \\frac{x(p) }{ z_1}- \\frac{x(p')}{ z_2}}\\sum_{k=1}^N\\mathop{\\,\\rm Res\\,}_{q={\\cal P}_k}\\frac{B(p,q)B(p',q)}{dx(q)}&\n=\\sum_{k=1}^N\\frac{ e^{ \\frac{u_i }{ z_1}+ \\frac{u_j}{ z_2}}}{2 \\pi \\sqrt{z_1z_2}} \\int_{\\Gamma_i}\ne^{- \\frac{x(p) }{ z_1}}B(p,{\\cal P}_k) \\int_{\\Gamma_j}e^{- \\frac{x(p')}{ z_2}}B(p',{\\cal P}_k)\\\\\n&=\\sum_{k=1}^N\\frac{\\left[R^{-1}(z_1)\\right]^k_i\\left[R^{-1}(z_2)\\right]_{j}^k}{ z_1z_2}.\n \\end{align*}\nSince the Laplace transform satisfies\n$\\displaystyle\\int_{\\Gamma_i} d\\left(\\frac{\\omega(p)}{dx(p)}\\right)e^{- \\frac{x(p) }{ z}}=\\frac{1}{z}\\int_{\\Gamma_i} \\omega(p)e^{- \\frac{x(p) }{ z}}\n$\nfor any differential $\\omega(p)$, by integration by parts,\nthen the Laplace transform of the RHS of \\eqref{unberg} is\n$$ -\\frac{ e^{ \\frac{u_i }{ z_1}+ \\frac{u_j}{ z_2}}}{2 \\pi \\sqrt{z_1z_2}} \\int_{\\Gamma_i}\n\\int_{\\Gamma_j} e^{- \\frac{x(p) }{ z_1}- \\frac{x(p')}{ z_2}}\n\\left\\{d_p\\left(\\frac{B(p,p')}{dx(p)}\\right)+d_{p'}\\left(\\frac{B(p,p')}{dx(p')}\\right)\\right\\}=-\\left(\\frac{1}{z_1}+\\frac{1}{z_2}\\right)\\check{B}^{i,j}(z_1,z_2).\n $$\nPutting the two sides together yields the following result due to Eynard \\cite{EynInv}\n\\begin{equation}\\label{shape2}\n\\check{B}^{i,j}(z_1,z_2) = - \\frac{\\sum_{k=1}^{N} \\left[R^{-1}(z_1)\\right]^k_i \\left[R^{-1}(z_2)\\right]^k_j }{ z_1+z_2}.\n\\end{equation}\nEquation \\eqref{twloopgp} is an immediate consequence of \\eqref{shape2} and the finiteness of $\\check{B}^{i,j}(z_1,z_2)$ at $z_2=-z_1$.\n\\end{proof}\n\\begin{example} \\label{specA2B}\nLet $\\Sigma\\cong\\mathbb{C}$ be a rational curve equipped with the meromorphic function $x=z^3-3z$ and bidifferential $B(z_1,z_2)=dz_1dz_2\/(z_1-z_2)^2$.\n\nChoose a local coordinate $t$ around $z=-1={\\cal P}_1$ so that $x(t)=\\frac12 t^2+2$. Then\n\\begin{align*}\nB({\\cal P}_1,t)&=\\frac{-i}{\\sqrt{6}}\\frac{dz}{(z+1)^2}=dt\\left(t^{-2}-\\frac{1}{144}+\\frac{35}{41472}t^2+...+{\\rm odd\\ terms}\\right)\\\\\nB({\\cal P}_2,t)&=\\frac{1}{\\sqrt{6}}\\frac{dz}{(z-1)^2}=dt\\left(-\\frac{i}{24}+\\frac{35i}{3456}t^2+...+{\\rm odd\\ terms}\\right)\n\\end{align*}\nChoose a local coordinate $s$ around $z=1={\\cal P}_2$ so that $x(s)=\\frac12 s^2-2$. 
Then\n\\begin{align*}\nB({\\cal P}_1,s)&=\\frac{-i}{\\sqrt{6}}\\frac{dz}{(z+1)^2}=ds\\left(-\\frac{i}{24}-\\frac{35i}{3456}s^2+...+{\\rm odd\\ terms}\\right)\\\\\nB({\\cal P}_2,s)&=\\frac{1}{\\sqrt{6}}\\frac{dz}{(z-1)^2}=ds\\left(s^{-2}+\\frac{1}{144}+\\frac{35}{41472}s^2+...+{\\rm odd\\ terms}\\right)\n\\end{align*}\n\nThe odd terms are annihilated by the Laplace transform, and we get\n\\begin{align*}R^{-1}(z)^1_1 &= -\\frac{\\sqrt{z}}{\\sqrt{2\\pi}}\\int_{\\Gamma_1} B({\\cal P}_1,t)\\cdot e^{-\\frac12t^2\\over z}=1+\\frac{1}{144}z-\\frac{35}{41472}z^2+...\\\\\nR^{-1}(z)^1_2 &= -\\frac{\\sqrt{z}}{\\sqrt{2\\pi}}\\int_{\\Gamma_1} B({\\cal P}_2,t)\\cdot e^{-\\frac12t^2\\over z}=\\frac{i}{24}z-\\frac{35i}{3456}z^2+...\\\\\nR^{-1}(z)^2_1 &= -\\frac{\\sqrt{z}}{\\sqrt{2\\pi}}\\int_{\\Gamma_2} B({\\cal P}_1,s)\\cdot e^{-\\frac12s^2\\over z}=\\frac{i}{24}z+\\frac{35i}{3456}z^2+...\\\\\nR^{-1}(z)^2_2 &= -\\frac{\\sqrt{z}}{\\sqrt{2\\pi}}\\int_{\\Gamma_2} B({\\cal P}_2,s)\\cdot e^{-\\frac12s^2\\over z}=1-\\frac{1}{144}z-\\frac{35}{41472}z^2+...\n\\end{align*}\nHence $R^{-1}(z)=I-R_1z+(R_1^2-R_2)z^2+...=I-R_1^Tz+R_2^Tz^2+...$ gives \n$$ R_1=\\frac{1}{144}\\left(\\begin{array}{cc}-1&-6i\\\\-6i&1\\end{array}\\right),\\quad\nR_2=\\frac{35}{41472}\\left(\\begin{array}{cc}-1&12i\\\\-12i&-1\\end{array}\\right)\n$$\nwhich agrees with Example~\\ref{A2RT} for the $A_2$ Frobenius manifold.\n\\end{example}\n\\subsection{Partition Functions from the $A_2$ singularity} \\label{sec:A2part}\nUsing the twisted loop group action and the translation action defined in the previous section, we construct in \\eqref{ZA2}, \\eqref{ZtA2} and \\eqref{ZBGWA2} below, three partition functions out of $Z^{\\text{BGW}}(\\hbar,t_0,t_1,...)$, $Z^{\\text{KW}}(\\hbar,t_0,t_1,...)$ and $Z^{\\Theta}(\\hbar,t_0,t_1,...)$, denoted by\n$$Z_{A_2}(\\hbar,\\{t^{\\alpha}_k\\}),\\quad Z^{\\Theta}_{A_2}(\\hbar,\\{t^{\\alpha}_k\\}),\\quad Z^{\\text{BGW}}_{A_2}(\\hbar,\\{t^{\\alpha}_k\\})$$\nwhere $\\alpha\\in\\{1,2\\}$, $k\\in\\mathbb{N}$, i.e. $\\{t^{\\alpha}_k\\}=\\{t^1_0,t^2_0,t^1_1,t^2_1,t^1_2,t^2_2,...\\}$. \n\nThe three partition functions above use the element $R(z)$ of the twisted loop group arising out of the $A_2$ Frobenius manifold. 
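\n\nAs a consistency check of the computation in Example~\\ref{specA2B}, the Gaussian integrals in \\eqref{eq:BtoR} can be evaluated term by term: assuming the standard Gaussian moments and the regularisation of the $t^{-2}$ term described after Definition~\\ref{BtoR}, a term $c\\,t^{2k}dt$ of $B({\\cal P}_i,t)$ contributes $-c\\,(2k-1)!!\\,z^{k+1}$ and the $t^{-2}dt$ term contributes $1$. The following {\\tt sympy} sketch (editorial only) applies this rule to the expansions above, recovers the displayed $R^{-1}(z)$, and checks the twisted loop group condition to order $z^2$.\n\\begin{verbatim}\nimport sympy as sp\n\ni_ = sp.I\nz = sp.symbols('z')\n\ndef laplace(c):\n    # Regularised t^(-2) dt term gives 1; c_{2k} t^(2k) dt gives -c_{2k} (2k-1)!! z^(k+1).\n    out = c.get(-2, 0)\n    for k in (0, 1):\n        out += -c.get(2*k, 0) * sp.factorial2(2*k - 1) * z**(k + 1)\n    return sp.expand(out)\n\n# Even coefficients of the local expansions of B(P_i, .) computed above.\nB11 = {-2: 1, 0: -sp.Rational(1, 144), 2: sp.Rational(35, 41472)}\nB12 = {0: -i_/24, 2: 35*i_/3456}\nB21 = {0: -i_/24, 2: -35*i_/3456}\nB22 = {-2: 1, 0: sp.Rational(1, 144), 2: sp.Rational(35, 41472)}\nRinv = sp.Matrix([[laplace(B11), laplace(B12)], [laplace(B21), laplace(B22)]])\n\n# Compare with I - R_1 z + (R_1^2 - R_2) z^2 and check R(z) R(-z)^T = I to order z^2.\nR1 = sp.Rational(1, 144) * sp.Matrix([[-1, -6*i_], [-6*i_, 1]])\nR2 = sp.Rational(35, 41472) * sp.Matrix([[-1, 12*i_], [-12*i_, -1]])\nassert (Rinv - (sp.eye(2) - R1*z + (R1*R1 - R2)*z**2)).applyfunc(sp.expand) == sp.zeros(2, 2)\n\nR = sp.eye(2) + R1*z + R2*z**2\nprod = (R * R.subs(z, -z).T).applyfunc(sp.expand)\ncoeff = lambda M, n: M.applyfunc(lambda e: e.coeff(z, n))\nassert coeff(prod, 0) == sp.eye(2)\nassert coeff(prod, 1) == sp.zeros(2, 2) and coeff(prod, 2) == sp.zeros(2, 2)\n\\end{verbatim}\n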
\nThe partition function $Z_{A_2}(\\hbar,\\{t^{\\alpha}_k\\})$ is defined via the graphical action defined in Section~\\ref{twloop} (and represented here via the equivalent action by differential operators) using $R(z)$ and $T(z)$ defined from the $A_2$ Frobenius manifold in Example~\\ref{A2RT}\n\\begin{align} \\label{ZA2}\nZ_{A_2}(\\hbar,\\{t^{\\alpha}_k\\})&:=\\hat{\\Psi}\\cdot\\hat{R}\\cdot \\hat{T}\\cdot Z^{\\text{KW}}(\\tfrac{1}{2}\\hbar,\\{\\frac{1}{\\sqrt{2}}v^{1}_k\\})Z^{\\text{KW}}(-\\tfrac{1}{2}\\hbar,\\{\\frac{i}{\\sqrt{2}}v^{2}_k\\})\\\\\n&=\\exp\\sum_{g,n,\\vec{k}}\\frac{\\hbar^{g-1}}{n!}\\int_{\\overline{\\cal M}_{g,n}}\\Omega^{A_2}_{g,n}(e_{\\alpha_1}\\otimes...\\otimes e_{\\alpha_n})\\cdot\\prod_{j=1}^n\\psi_j^{k_j}\\prod t^{\\alpha_j}_{k_j}\n\\nonumber\n\\end{align}\nwhere $\\hat{R}$ and $\\hat{T}$ are differential operators acting on copies of $Z^{\\text{KW}}$, which can be expressed as a sum over stable graphs, and $\\hat{\\Psi}$ is the linear change of coordinates $v^i_k=\\Psi^i_{\\alpha}t^{\\alpha}_k$ (also expressible as a differential operator).\nSimilarly, define\n\\begin{align} \\label{ZtA2}\nZ^{\\Theta}_{A_2}(\\hbar,\\{t^{\\alpha}_k\\})&:=\\hat{\\Psi}\\cdot\\hat{R}\\cdot \\hat{T}_0\\cdot Z^{\\Theta}(\\tfrac{1}{2}\\hbar,\\{\\frac{1}{\\sqrt{2}}v^{1}_k\\})Z^{\\Theta}(-\\tfrac{1}{2}\\hbar,\\{\\frac{i}{\\sqrt{2}}v^{2}_k\\})\\\\\n&=\\exp\\sum_{g,n,\\vec{k},\\vec{\\alpha}}\\frac{\\hbar^{g-1}}{n!}\\int_{\\overline{\\cal M}_{g,n}}\\Theta_{g,n}\\cdot\\Omega^{A_2}_{g,n}(e_{\\alpha_1},...,e_{\\alpha_n})\\cdot\\prod_{j=1}^n\\psi_j^{k_j}\\prod t_{k_j}^{\\alpha_j}.\n\\nonumber\n\\end{align}\nThe operators $\\hat{\\Psi}$ and $\\hat{R}$ in \\eqref{ZtA2} coincide with the operators in \\eqref{ZA2} associated to the $A_2$ singularity. The translation operator $\\hat{T}_0=z^{-1}\\hat{T}$ is related to the translation operator in \\eqref{ZA2} essentially by the shift $m_i+1\\to m_i$ we saw in \\eqref{thetakappa}. The graphical expression \\eqref{A2coh} for $\\Omega^{A_2}_{g,n}(e_{\\alpha_1},...,e_{\\alpha_n})$ immediately gives rise to a graphical expression for $\\Theta_{g,n}\\cdot\\Omega^{A_2}_{g,n}(e_{\\alpha_1},...,e_{\\alpha_n})$. This produces a graphical expression for $Z^{\\Theta}_{A_2}$ which is expressed in \\eqref{ZtA2} as a differential operator acting on two copies of $Z^{\\Theta}$ in place of $Z^{\\text{KW}}$. This is a sum over graphs $G'_{g,n}$ via the structure shown in \\eqref{thetarel} (which also applies when the expression is non-zero). \n\nFinally, define\n\\begin{equation} \\label{ZBGWA2}\nZ^{\\text{BGW}}_{A_2}\n:=\\hat{\\Psi}\\cdot\\hat{R}\\cdot \\hat{T}_0\\cdot Z^{\\text{BGW}}(\\tfrac{1}{2}\\hbar,\\{\\frac{1}{\\sqrt{2}}v^{1}_k\\})Z^{\\text{BGW}}(-\\tfrac{1}{2}\\hbar,\\{\\frac{i}{\\sqrt{2}}v^{2}_k\\}).\n\\end{equation}\nEquivalently, we have replaced each $\\int_{\\overline{\\cal M}_{g,n+N}}\\Theta_{g,n+N}\\prod_{i=1}^{n+N}\\psi_i^{k_i}$ in $Z^{\\Theta}_{A_2}$ with the corresponding coefficients from $F_g^{\\text{BGW}}$.\nTo prove Theorem~\\ref{tauf}, we will show that $Z^{\\text{BGW}}_{A_2}$ has many vanishing terms from which it will follow that $Z^{\\text{BGW}}_{A_2}=Z^{\\Theta}_{A_2}$ and $Z^{\\text{BGW}}=Z^{\\Theta}$.\n\n\n\n\nThe $A_2$ Frobenius manifold and its associated cohomological field theory $\\Omega^{A_2}_{g,n}\\in H^*(\\overline{\\cal M}_{g,n})\\otimes (V^*)^{\\otimes n}$ for $H=\\mathbb{C}^2$ is used to produce Pixton's relations among tautological cohomology classes over $\\overline{\\cal M}_{g,n}$. 
\nThe class $\\Omega^{A_2}_{g,n}\\in H^*(\\overline{\\cal M}_{g,n})\\otimes (V^*)^{\\otimes n}$ is defined by the Givental-Teleman theorem---see \\cite{DSSGiv, GivGro, TelStr}---via a sum over stable graphs:\n\\begin{equation} \\label{A2coh}\n\\Omega^{A_2}_{g,n}=\\sum_{\\Gamma\\in G_{g,n}}(\\phi_{\\Gamma})_*\\omega^R_{\\Gamma}\\in H^*(\\overline{\\cal M}_{g,n})\n\\end{equation}\nwhere $\\omega^R_{\\Gamma}$ is built out of contributions to edges and vertices of $\\Gamma$ using $R$, $\\psi$ and $\\kappa$ classes and a topological field theory at vertices that takes in vectors of $H$.\n\nThe key idea behind Pixton's relations among tautological classes \\cite{PPZRel} is a degree bound on the cohomological classes $\\deg\\Omega^{A_2}_{g,n}\\leq\\frac13(g-1+n )<3g-3+n$. The construction of $\\Omega^{A_2}_{g,n}$ using $R$ in \\eqref{A2coh} does not know about this degree bound and produces classes in the degrees where $\\Omega^{A_2}_{g,n}$ vanishes. This leads to sums of tautological classes representing the zero class, i.e. relations, expressed as sums over stable graphs $G_{g,n}$:\n\\begin{equation} \\label{pixtonrel} \n0=\\sum_{\\Gamma\\in G_{g,n}}(\\phi_{\\Gamma})_*\\omega^R_{\\Gamma}\\in H^*(\\overline{\\cal M}_{g,n})=\\sum_{\\Gamma\\in G'_{g,n}}(\\phi_{\\Gamma})_*\\tilde{\\omega}^R_{\\Gamma}\n\\end{equation}\nfor classes $\\omega^R_{\\Gamma},\\tilde{\\omega}^R_{\\Gamma}\\in H^*(\\overline{\\cal M}_{\\Gamma})$. Here we denote by $G'_{g,n}$ the set of all stable graphs of genus $g$ with $n$ labeled points and any number of extra leaves, known as {\\em dilaton leaves} at each vertex. So the set $G_{g,n}$ is finite whereas $G'_{g,n}$ is infinite. Nevertheless, all sums in \\eqref{pixtonrel} are finite. The classes $\\omega_{\\Gamma}$ appearing in \\eqref{pixtonrel} consist of products of $\\psi$ and $\\kappa$ classes associated to each vertex of $\\Gamma$. The classes $\\tilde{\\omega}^R_{\\Gamma}$ consist of products of only $\\psi$ classes, again associated to each vertex of $\\Gamma$.\n\nPixton's relations \\eqref{pixtonrel} induce relations between intersection numbers of $\\psi$ classes with $\\Theta_{g,n}$:\n\\begin{equation} \\label{thetarel}\n 0=\\Theta_{g,n}\\cdot\\sum_{\\Gamma\\in G_{g,n}}(\\phi_{\\Gamma})_*\\omega^R_{\\Gamma}\n=\\sum_{\\Gamma\\in G_{g,n}}(\\phi_{\\Gamma})_*\\Big(\\Theta_{\\Gamma}\\cdot\\omega^R_{\\Gamma}\\Big)\n=\\sum_{\\Gamma\\in G'_{g,n}}(\\phi_{\\Gamma})_*\\Big(\\Theta_{\\Gamma}\\cdot\\tilde{\\omega}^R_{\\Gamma}\\Big).\n\\end{equation}\nThe second equality uses $\\Theta_{g,n}\\cdot(\\phi_{\\Gamma})_*=(\\phi_{\\Gamma})_*\\Theta_{\\Gamma}$. The final equality uses Remark~\\ref{removekappa} to replace $\\kappa$ classes by $\\psi$ classes, such as in Example~\\ref{gen2rel} below where the $\\kappa_1$ term is replaced by $\\int_{\\overline{\\cal M}_{2,1}}\\Theta_{2,1}\\cdot\\psi_1=\\int_{\\overline{\\cal M}_{2}}\\Theta_{2}\\cdot\\kappa_1$. The classes $\\omega_{\\Gamma}$ in \\eqref{thetarel} are linear combinations of monomials $\\prod\\psi_i^{k_i}\\cdot R_{\\bf m}$, which form a basis for products of $\\psi$ and $\\kappa$ classes. 
The final equality in \\eqref{thetarel} is obtained by substituting (linear combinations of) the following expression into the sum over $G_{g,n}$\n$$\\Theta_{g,n}(\\phi_{\\Gamma})_*\\Big(\\prod\\psi_i^{k_i}\\cdot R_{\\bf m}\\Big)\n=(\\phi_{\\Gamma})_*\\Big(\\Theta_{\\Gamma}\\prod\\psi_i^{k_i}\\cdot R_{\\bf m}\\Big)\n=(\\phi_{\\Gamma})_*\\Big(\\prod\\psi_i^{k_i}\\cdot \\pi_*(\\Theta_{\\Gamma_{(N)}}\\psi_{n+1}^{m_1}...\\psi_{n+N}^{m_N})\\Big)\n$$\nwhere $\\Gamma_{(N)}$ is obtained from $\\Gamma$ by adding $N$ (dilaton) leaves to the vertex of $\\Gamma$ on which $\\prod\\psi_i^{k_i}\\cdot R_{\\bf m}$ is defined. The final term in \\eqref{thetarel} is a sum over graphs of intersection numbers of $\\Theta_{g,n}$ with $\\psi$ classes only. \n\n{\\em Primary invariants} of a partition function are those coefficients of $\\prod_{i=1}^nt^{\\alpha_i}_{k_i}$ with all $k_i=0$. They correspond to intersections in $\\overline{\\cal M}_{g,n}$ with no $\\psi$ classes. The primary invariants of $Z^{\\Theta}_{A_2}(\\hbar,\\{t^{\\alpha}_k\\})$ vanish for $n<2g-2$. This uses $\\deg\\Omega^{A_2}_{g,n}\\leq\\frac13(g-1+n)$ so $\\deg\\Omega^{A_2}_{g,n}\\cdot\\Theta_{g,n}\\leq\\frac13(g-1+n)+2g-2+n<3g-3+n$ when $n<2g-2$. These vanishing coefficients correspond to top intersections of $\\psi$ classes with the relations \\eqref{thetarel}. For $n\\leq g-1$ these relations are sufficient to uniquely determine the intersection numbers $\\int_{\\overline{\\cal M}_{g,n}}\\Theta_{g,n}\\prod_{i=1}^n\\psi_i^{m_i}$ via the recursive method in Section~\\ref{sec:unique}. For $n\\geq g$ the dilaton equation \\eqref{dilaton} determines the intersection numbers from the lower ones.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Topological recursion} \\label{sec:TR}\n\nTopological recursion is a procedure which takes as input a spectral curve, defined below, and produces a collection of symmetric tensor products of meromorphic 1-forms $\\omega_{g,n}$ on $C^n$. The correlators store enumerative information in different ways. Periods of the correlators store top intersection numbers of tautological classes in the moduli space of stable curves $\\overline{\\cal M}_{g,n}$ and local expansions of the correlators can serve as generating functions for enumerative problems. \n\nA spectral curve $S=(C,x,y,B)$ is a Riemann surface $C$ equipped with two meromorphic functions $x, y: C\\to \\mathbb{C}$ and a bidifferential $B(p_1,p_2)$ defined in \\eqref{bidiff}, which is the Cauchy kernel in this paper. Topological recursion, as developed by Chekhov, Eynard, Orantin \\cite{CEyHer,EOrInv}, is a procedure that produces from a spectral curve $S=(C,x,y,B)$ a symmetric tensor product of meromorphic 1-forms $\\omega_{g,n}$ on $C^n$ for integers $g\\geq 0$ and $n\\geq 1$, which we refer to as {\\em correlation differentials} or {\\em correlators}. The correlation differentials $\\omega_{g,n}$ are defined by\n\\[\n\\omega_{0,1}(p_1) = -y(p_1) \\, d x(p_1) \\qquad \\text{and} \\qquad \\omega_{0,2}(p_1, p_2) = B(p_1,p_2)\n\\]\nand for $2g-2+n>0$ they are defined recursively via the following equation.\n\\[\n\\omega_{g,n}(p_1, p_L) = \\sum_{d x(\\alpha) = 0} \\mathop{\\text{Res}}_{p=\\alpha} K(p_1, p) \\Bigg[ \\omega_{g-1,n+1}(p, \\hat{p}, p_L) + \\mathop{\\sum_{g_1+g_2=g}}_{I \\sqcup J = L}^\\circ \\omega_{g_1,|I|+1}(p, p_I) \\, \\omega_{g_2,|J|+1}(\\hat{p}, p_J) \\Bigg]\n\\]\nHere, we use the notation $L = \\{2, 3, \\ldots, n\\}$ and $p_I = \\{p_{i_1}, p_{i_2}, \\ldots, p_{i_k}\\}$ for $I = \\{i_1, i_2, \\ldots, i_k\\}$. 
The outer summation is over the zeroes $\\alpha$ of $dx$ and $p \\mapsto \\hat{p}$ is the involution defined locally near $\\alpha$ satisfying $x(\\hat{p}) = x(p)$ and $\\hat{p} \\neq p$. The symbol $\\circ$ over the inner summation means that we exclude any term that involves $\\omega_{0,1}$. Finally, the recursion kernel is given by\n\\[\nK(p_1,p) = \\frac{1}{2}\\frac{\\int_{\\hat{p}}^p \\omega_{0,2}(p_1, \\,\\cdot\\,)}{[y(p)-y(\\hat{p})] \\, d x(p)}.\n\\]\nwhich is well-defined in the vicinity of each zero of $dx$. It acts on differentials in $p$ and produces differentials in $p_1$ since the quotient of a differential in $p$ by the differential $dx(p)$ is a meromorphic function. For $2g-2+n>0$, each $\\omega_{g,n}$ is a symmetric tensor product of meromorphic 1-forms on $C^n$ with residueless poles at the zeros of $dx$ and holomorphic elsewhere. A zero $\\alpha$ of $dx$ is {\\em regular}, respectively irregular, if $y$ is regular, respectively has a simple pole, at $\\alpha$. The order of the pole in each variable of $\\omega_{g,n}$ at a regular, respectively irregular, zero of $dx$ is $6g-4+2n$, respectively $2g$. Define $\\Phi(p)$ up to an additive constant by $d\\Phi(p)=y(p)dx(p)$. For $2g-2+n>0$, the invariants satisfy the dilaton equation~\\cite{EOrInv}\n\\[\n\\sum_{\\alpha}\\mathop{\\,\\rm Res\\,}_{p=\\alpha}\\Phi(p)\\, \\omega_{g,n+1}(p,p_1, \\ldots ,p_n)=(2g-2+n) \\,\\omega_{g,n}(p_1, \\ldots, p_n),\n\\] \nwhere the sum is over the zeros $\\alpha$ of $dx$. This enables the definition of the so-called {\\em symplectic invariants}\n\\[ F_g=\\sum_{\\alpha}\\mathop{\\,\\rm Res\\,}_{p=\\alpha}\\Phi(p)\\omega_{g,1}(p).\\]\nThe correlators $\\omega_{g,n}$ are normalised differentials of the second kind in each variable---they have zero $\\mathcal{A}$-periods, and poles only at the zeros ${\\cal P}_i$ of $dx$ of zero residue. Their principle parts are skew-invariant under the local involution $p\\mapsto\\hat{p}$. A basis of such normalised differentials of the second kind is constructed from $x$ and $B$ in the following definition. \n\\begin{definition}\\label{auxdif}\nFor a Riemann surface $C$ equipped with a meromorphic function $x:C\\to\\mathbb{C}$ and bidifferential $B(p_1,p_2)$ define the auxiliary differentials on $C$ as follows. For each zero ${\\cal P}_i$ of $dx$, define\n\\begin{equation} \\label{Vdiff}\nV^i_0(p)=B({\\cal P}_i,p),\\quad V^i_{k+1}(p)=d\\left(\\frac{V^i_k(p)}{dx(p)}\\right),\\ i=1,...,N,\\quad k=0,1,2,...\n\\end{equation}\nwhere evaluation $B({\\cal P}_i,p)$ at ${\\cal P}_i$ is given in Definition~\\ref{evaluationform}.\n\\end{definition}\nThe correlators $\\omega_{g,n}$ are polynomials in the auxiliary differentials $V^i_k(p)$. To any spectral curve $S$, one can define a partition function $Z^S$ by assembling the polynomials built out of the correlators $\\omega_{g,n}$ \\cite{DOSSIde,EynInv,NScPol}.\n\\begin{definition} \\label{TRpart}\n$$Z^S(\\hbar,\\{u^{\\alpha}_k\\}):=\\left.\\exp\\sum_{g,n}\\frac{\\hbar^{g-1}}{n!}\\omega^S_{g,n}\\right|_{V^{\\alpha}_k(p_i)=u^{\\alpha}_k}.\n$$\n\\end{definition}\nAs usual define $F_g$ to be the contribution from $\\omega_{g,n}$:\n$$\\log Z^S(\\hbar,\\{u^{\\alpha}_k\\})=\\sum_{g\\geq 0}\\hbar^{g-1}F_g^S(\\{u^{\\alpha}_k\\}).$$\n\n\\subsubsection{From topological recursion to Givental reconstruction} \\label{sec:TR2Giv}\nThe relation between Givental's reconstruction of CohFTs defined in Section~\\ref{twloop} and topological recursion was proven in \\cite{DOSSIde}. The $A_2$ case is treated in \\cite{DNOPSDub}. 
Recall that the input data for Givental's reconstruction is an element $R(z)\\in L^{(2)}GL(N,\\mathbb{C})$ and $T(z)=z\\left(1\\!\\!1-R^{-1}(z)1\\!\\!1\\right)\\in z\\ \\mathbb{C}^N[[z]]$ defined by a vector $1\\!\\!1\\in\\mathbb{C}^N$. Its output is a CohFT or its partition function $Z(\\hbar,\\{t^{\\alpha}_k\\})$. The input data for topological recursion is a spectral curve $S=(C,x,y,B)$. Its output is the collection of correlators $\\omega_{g,n}$, which can be assembled into a partition function $Z^S(\\hbar,\\{t^{\\alpha}_k\\})$. This is summarised in the following diagram.\n\\begin{center}\n\\begin{tikzpicture}[scale=0.5]\n\\draw (1,4) node {$R(z)\\in L^{(2)}GL(N,\\mathbb{C})$};\n\\draw (1,3) node {$T(z)\\in z\\cdot \\mathbb{C}^N[[z]]$};\n\\draw [->, line width=1pt] (5,4)--(13,4);\n\\draw (9,5) node {Givental reconstruction};\n\\draw (15.5,4) node {$Z(\\hbar,\\{t^{\\alpha}_k\\})$};\n\\draw [<->, line width=1pt] (1,2.5)--(1,-1.5);\n\\draw [<->, line width=1pt] (15.5,2.5)--(15.5,-1.5);\n\\draw (1,-2.5) node {$S=(C,x,y,B)$};\n\\draw (15.5,-2.5) node {$Z^S(\\hbar,\\{t^{\\alpha}_k\\})$};\n\\draw [->, line width=1pt] (5,-2.5)--(13,-2.5);\n\\draw (9,-2) node {topological recursion};\n\\end{tikzpicture}\n\\end{center}\nThe left arrow in the diagram, i.e. a correspondence between the input data which produces the same output $Z(\\hbar,\\{t^{\\alpha}_k\\})=Z^S(\\hbar,\\{t^{\\alpha}_k\\})$, is the main result of \\cite{DOSSIde}. Given $R(z)$ and $T(z)$ arising from a CohFT, it was proven in \\cite{DOSSIde} that there exists a local spectral curve $S$, which is a collection of disk neighbourhoods of zeros of $dx$ on which $B$ and $y$ are defined locally, giving the partition function $Z$ of the CohFT. We will use the converse of this result, proven in \\cite{DNOPSPri}, which begins instead from $S$ and builds on the construction of \\cite{DOSSIde}.\n\nRecall from Section~\\ref{specLG2} the construction\n$$(C,x,B)\\mapsto R(z)\\in L^{(2)}GL(N,\\mathbb{C})$$\nof an element $R(z)$ from part of the data of a spectral curve $S$. We will now associate a (locally defined) function $y$ on $C$ to the unit vector $1\\!\\!1=\\{\\Delta_i^{1\/2}\\}\\in\\mathbb{C}^N$ in normalised canonical coordinates to get the full data of a spectral curve $S=(C,x,y,B)$.\nDefine $dy$ by\n\\begin{equation} \\label{ytoT}\n\\sum_{k=1}^N R^{-1}(z)^i_k \\cdot\\Delta_k^{1\/2} = \\frac{1}{\\sqrt{2\\pi z}}\\int_{\\Gamma_i}dy(p)\\cdot e^{(u_i-x(p)) \\over z}.\n\\end{equation}\nThis defines $y$ locally around each ${\\cal P}_i$. In fact, it only defines the part of $y$ skew-invariant under the local involution $p\\mapsto\\hat{p}$ defined by $x$, since the Laplace transform annihilates invariant parts, but topological recursion depends only on this skew-invariant part of $y$. The $z\\to0$ limit of \\eqref{ytoT} gives\n\\begin{equation} \\label{yunit}\ndy({\\cal P}_i)=\\Delta_i^{1\/2}\n\\end{equation}\nhence conversely $y$ determines the unit $1\\!\\!1=\\{\\Delta_i^{1\/2}\\}\\in\\mathbb{C}^N$ and $R^{-1}(z)1\\!\\!1$ which produces the translation. Equation \\eqref{yunit} is a strong condition on $y$ since it shows that $y(p)$ is determined by $dy({\\cal P}_i)$, $i=1,...,N$ and $B(p,p')$. 
If a globally defined differential $dy$ on a compact curve $C$ satisfies the condition\n\\begin{equation} \\label{DOSStest}\nd\\left(\\frac{dy}{dx}(p)\\right)=-\\sum_{i=1}^N \\mathop{\\,\\rm Res\\,}_{p'={\\cal P}_i}\\frac{dy}{dx}(p')B(p',p)\n\\end{equation}\nthen $dy$ satisfies \\eqref{ytoT} for $\\Delta_i^{1\/2}$ defined by \\eqref{yunit} and $R(z)$ defined by \\eqref{eq:BtoR}. This is immediate by taking the Laplace transform of \\eqref{DOSStest} and using the fact that $\\displaystyle\\mathop{\\,\\rm Res\\,}_{p'={\\cal P}_i}\\frac{dy}{dx}(p')B(p',p)=dy({\\cal P}_i)B({\\cal P}_i,p)$. Note that if the poles of $dy$ are dominated by the poles of $dx$, equivalently $dy\/dx$ has poles only at the zeros of $dx$, then \\eqref{DOSStest} is satisfied as a consequence of the Cauchy formula. This allows us to produce spectral curves $S=(C,x,y,B)$ that give rise to $R(z)$ and $1\\!\\!1$---see Example~\\ref{specA2rel} below.\n\nThe result of \\cite{DOSSIde} was generalised in \\cite{CNoTop} to show that the action of the differential operators $\\hat{\\Psi}$, $\\hat{R}$ and $\\hat{T}_0$ on copies of $Z^{\\text{BGW}}$\narises by applying topological recursion to an irregular spectral curve. Equivalently, periods of the correlators of an irregular spectral curve store linear combinations of coefficients of $\\log Z^{\\text{BGW}}$. The appearance of $Z^{\\text{BGW}}$ is due to its relationship with topological recursion applied to the curve $x=\\frac{1}{2}z^2$, $y=\\frac{1}{z}$ \\cite{DNoTop}.\n\n\\subsubsection{Examples}\nWe demonstrate topological recursion with four key examples of rational spectral curves equipped with the bidifferential $B(p_1,p_2)$ given by the Cauchy kernel. The spectral curves in Examples~\\ref{specBes} and \\ref{specAiry}, denoted $S_{\\text{Bes}}$ and $S_{\\text{Airy}}$, have partition functions $Z^{\\text{BGW}}$ and $Z^{\\text{KW}}$ respectively. \nAny spectral curve at regular, respectively irregular, zeros of $dx$ is locally isomorphic to $S_{\\text{Airy}}$, respectively $S_{\\text{Bes}}$. \nA consequence is that the tau functions $Z^{\\text{KW}}$ and $Z^{\\text{BGW}}$ are fundamental to the correlators produced from topological recursion. Moreover, the partition functions of general spectral curves are built out of $Z^{\\text{KW}}$ and $Z^{\\text{BGW}}$ via the Givental reconstruction described in Section~\\ref{twloop}, where $R$ and $T$ are obtained from the spectral curve as described in Section~\\ref{sec:TR2Giv}.\n\nExample~\\ref{specA2} with spectral curve $S_{A_2}$ has partition function $Z_{A_2}$ corresponding to the $A_2$ Frobenius manifold---defined in \\eqref{ZA2}. Example~\\ref{specA2rel} with spectral curve $S^{\\text{BGW}}_{A_2}$ has partition function $Z^{\\text{BGW}}_{A_2}$, which will be shown to have vanishing primary terms for $n\\leq g-1$; these correspond to relations among coefficients of $Z^{\\text{BGW}}$. In each example we use a global rational parameter $z$ for the curve $C\\cong\\mathbb{C}$. \n\n\\begin{example} \\label{specBes}\nTopological recursion applied to the Bessel curve\n$$\nS_{\\text{Bes}}=\\left(\\mathbb{C},x=\\frac{1}{2}z^2,\\ y=\\frac{1}{z},\\ B=\\frac{dzdz'}{(z-z')^2}\\right)\n$$\nproduces correlators\n$$\n\\omega_{g,n}^{\\text{Bes}}=\\sum_{\\vec{k}\\in\\mathbb{Z}_+^n}C_g(k_1,...,k_n)\\prod_{i=1}^n(2k_i+1)!!\\frac{dz_i}{z_i^{2k_i+2}}\n$$\nwhere $C_g(k_1,...,k_n)\\neq 0$ only for $\\sum_{i=1}^n k_i=g-1$. Define $\\xi_k(z)=(2k+1)!!z^{-(2k+2)}dz$. 
It is proven in \\cite{DNoTop} that\n$$Z^{\\text{BGW}}(\\hbar,t_0,t_1,...)=\\left.\\exp\\sum_{g,n}\\frac{\\hbar^{g-1}}{n!}\\omega_{g,n}^{\\text{Bes}}\\right|_{\\xi_k(z_i)=t_k}\\hspace{-8mm}=\\hspace{3mm}\\exp\\sum_{g,n,\\vec{k}}\\frac{\\hbar^{g-1}}{n!}C_g(k_1,...,k_n)\\prod t_{k_j}.$$\n\\end{example}\n\\begin{example} \\label{specAiry}\nTopological recursion applied to the Airy curve \n$$\nS_{\\text{Airy}}=\\left(\\mathbb{C},x=\\frac{1}{2}z^2,\\ y=z,\\ B=\\frac{dzdz'}{(z-z')^2}\\right)\n$$\nproduces correlators which are proven in \\cite{EOrTop} to store intersection numbers\n$$\\omega_{g,n}^{\\text{Airy}}=\\sum_{\\vec{k}\\in\\mathbb{Z}_+^n}\\int_{\\overline{\\cal M}_{g,n}}\\prod_{i=1}^n\\psi_i^{k_i}(2k_i+1)!!\\frac{dz_i}{z_i^{2k_i+2}}\n$$\nand the coefficient is non-zero only for $\\sum_{i=1}^n k_i=3g-3+n$. Hence\n$$Z^{\\text{KW}}(\\hbar,t_0,t_1,...)=\\left.\\exp\\sum_{g,n}\\frac{\\hbar^{g-1}}{n!}\\omega_{g,n}^{\\text{Airy}}\\right|_{\\xi_k(z_i)=t_k}\\hspace{-8mm}=\\hspace{3mm}\\exp\\sum_{g,n,\\vec{k}}\\frac{\\hbar^{g-1}}{n!}\\int_{\\overline{\\cal M}_{g,n}}\\prod_{i=1}^n\\psi_i^{k_i}\\prod t_{k_j}.$$\n\\end{example}\nFor the next two examples, define a collection of differentials $\\xi^{\\alpha}_k(z)$ on $\\mathbb{C}$ for $\\alpha\\in\\{1,2\\}$, $k\\in\\{0,1,2,...\\}$ by\n\\begin{equation} \\label{flatdiff}\n\\xi^{\\alpha}_0=\\frac{dz}{(1-z)^2}-(-1)^{\\alpha}\\frac{dz}{(1+z)^2},\\quad \\xi^\\alpha_{k+1}(p)=d\\left(\\frac{\\xi^\\alpha_k(p)}{dx(p)}\\right),\\ \\alpha=1,2,\\quad k=0,1,2,...\n\\end{equation}\nThese are linear combinations of the $V^i_k(p)$ defined in \\eqref{Vdiff} with $x=z^3-3z$. The $V^i_k(p)$ correspond to normalised canonical coordinates while the $\\xi^{\\alpha}_k(p)$ correspond to flat coordinates.\n\\begin{example} \\label{specA2}\nConsider the spectral curve \n$$\nS_{A_2}=\\left(\\mathbb{C},x=z^3-3z,\\ y=z\\sqrt{-3},\\ B=\\frac{dzdz'}{(z-z')^2}\\right).\n$$\nExample~\\ref{specA2B} shows that $(\\mathbb{C},x=z^3-3z,B)$ produces the $R(z)$ associated to the $A_2$ Frobenius manifold at the point $(u_1,u_2)=(2,-2)$ calculated in Example~\\ref{A2RT}. \n$$R^{-1}(z)=I- \\frac{1}{144}\\left(\\begin{array}{cc}-1&-6i\\\\-6i&1\\end{array}\\right)z+\\frac{35}{41472}\\left(\\begin{array}{cc}-1&-12i\\\\12i&-1\\end{array}\\right)z^2+...\n$$\nIt remains to show that $y$ gives rise to $1\\!\\!1$ and $R^{-1}(z)1\\!\\!1$. 
The local expansions of $dy=\\sqrt{-3} dz$ around $z=-1={\\cal P}_1$ and $z=1={\\cal P}_2$ in the respective local coordinates $t$ and $s$, such that $x(t)=\\frac12 t^2+2$ and $x(s)=\\frac12 s^2-2$ are:\n$$ dy=\\sqrt{-3} dz=\\left(\\frac{1}{\\sqrt{2} }-\\frac{5}{144\\sqrt{2}}t^2+\\frac{385}{124416\\sqrt{2}}t^4+...+{\\rm odd\\ terms}\\right)dt\n$$\n$$ dy=\\left(\\frac{i}{\\sqrt{2}}+\\frac{5i}{144\\sqrt{2}}s^2+\\frac{385i}{124416\\sqrt{2}}s^4+...+{\\rm odd\\ terms}\\right)ds\n$$\nhence the Laplace transforms are:\n$$\\frac{1}{\\sqrt{2\\pi z}}\\int_{\\Gamma_1}dy(p)\\cdot e^{(u_1-x(p)) \\over z}=\\frac{1}{\\sqrt{2}}-\\frac{5}{144\\sqrt{2}}z+\\frac{385}{41472\\sqrt{2}}z^2+...\n$$\n$$\\frac{1}{\\sqrt{2\\pi z}}\\int_{\\Gamma_2}dy(p)\\cdot e^{(u_2-x(p)) \\over z}=\\frac{i}{\\sqrt{2}}+\\frac{5i}{144\\sqrt{2}}z+\\frac{385i}{41472\\sqrt{2}}z^2+...\n$$\nwhich indeed gives \n$$\\left\\{\\frac{1}{\\sqrt{2\\pi z}}\\int_{\\Gamma_k}dy(p)\\cdot e^{(u_k-x(p)) \\over z}\\right\\}=R^{-1}(z)1\\!\\!1=\\frac{1}{\\sqrt{2}}\\left(\\begin{array}{c}1\\\\i\\end{array}\\right)+\\frac{5}{144\\sqrt{2}}\\left(\\begin{array}{c}-1\\\\i\\end{array}\\right)z+\\frac{385}{41472\\sqrt{2}}\\left(\\begin{array}{c}1\\\\i\\end{array}\\right)z^2+...\n$$\nNote that $dy({\\cal P}_1)=\\frac{1}{\\sqrt{2}}=\\Delta_1^{1\/2}$ and $dy({\\cal P}_2)=\\frac{i}{\\sqrt{2}}=\\Delta_2^{1\/2}$---the first terms in the expansions in $t$ and $s$ above---gives the unit $1\\!\\!1$ (and the TFT). These first terms are enough to produce the entire local expansions of $dy$, and hence prove that the series for $dy$ gives $R^{-1}(z)1\\!\\!1$. The poles of $dy$ are dominated by the poles of $dx$, i.e. $dy\/dx$ has poles only at the zeros ${\\cal P}_1$ and ${\\cal P}_2$ of $dx$, hence $dy$ satisfies \\eqref{DOSStest} so its Laplace transform satisfies \\eqref{ytoT} as required.\n\nApply topological recursion to $S_{A_2}$ to produce correlators $\\omega^{A_2}_{g,n}$. Then the partition function associated to $S_{A_2}$ coincides with the partition function of the $A_2$ Frobenius manifold:\n$$Z^{A_2}(\\hbar,\\{t^{\\alpha}_k\\})=\\left.\\exp\\sum_{g,n}\\frac{\\hbar^{g-1}}{n!}\\omega^{A_2}_{g,n}\\right|_{\\xi^{\\alpha}_k(z_i)=t^{\\alpha}_k}.\n$$ \nThis was first proven in \\cite{DNOPSDub} via the superpotential construction of Dubrovin \\cite{DubGeo}.\n\n\n\nThe coefficients of $\\log Z^{A_2}(\\hbar,\\{t^{\\alpha}_k\\})$ are obtained from $\\omega_{g,n}$ by\n\\begin{equation} \\label{insert}\n\\left.\\frac{\\partial^n}{\\partial t^{\\alpha_1}_{k_1}...\\partial t^{\\alpha_n}_{k_n}}F_g^{A_2}(\\{t^{\\alpha}_k\\})\\right|_{t^{\\alpha}_k=0}=\\mathop{\\,\\rm Res\\,}_{z_1=\\infty}...\\mathop{\\,\\rm Res\\,}_{z_n=\\infty}\\prod_{i=1}^n p_{\\alpha_i,k_i}(z_i)\\omega_{g,n}(z_1,...,z_n)\n\\end{equation}\nfor the polynomials in $z$ given by $p_{\\alpha,k}(z)=\\sqrt{-3}\\frac{(-1)^{\\alpha}}{\\alpha}z^{3k+\\alpha}+\\text{lower order terms}$. The lower order terms (and the top coefficient) will not be important here because we will only consider vanishing of \\eqref{insert} arising from high enough order vanishing of $\\omega_{g,n}(z_1,...,z_n)$ at $z_i=\\infty$ so that the integrand in \\eqref{insert} is holomorphic at $z_i=\\infty$. Equation~\\eqref{insert} is a special case of the more general phenomena, proven in \\cite{DNOPSPri}, that periods of $\\omega_{g,n}$ are dual to insertions of vectors in a CohFT. In this case it follows from the easily verified fact that the residues are dual to the differentials $\\xi^{\\alpha}_k$ defined in \\eqref{flatdiff}. 
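\n\nA quick symbolic cross-check (editorial only, using the matrices of Example~\\ref{specA2B}) confirms that the Laplace transforms of $dy$ computed above are indeed the two components of $R^{-1}(z)1\\!\\!1$:\n\\begin{verbatim}\nimport sympy as sp\n\ni_ = sp.I\nz = sp.symbols('z')\nsqrt2 = sp.sqrt(2)\n\nR1 = sp.Rational(1, 144) * sp.Matrix([[-1, -6*i_], [-6*i_, 1]])\nR2 = sp.Rational(35, 41472) * sp.Matrix([[-1, 12*i_], [-12*i_, -1]])\nRinv = sp.eye(2) - R1*z + (R1*R1 - R2)*z**2        # R^{-1}(z) up to order z^2\none = sp.Matrix([1, i_]) / sqrt2                   # the unit 11 in normalised canonical coordinates\n\n# Laplace transforms of dy computed above, up to order z^2.\nlap = sp.Matrix([1/sqrt2 - 5*z/(144*sqrt2) + 385*z**2/(41472*sqrt2),\n                 i_/sqrt2 + 5*i_*z/(144*sqrt2) + 385*i_*z**2/(41472*sqrt2)])\n\nassert (Rinv*one - lap).applyfunc(sp.simplify) == sp.zeros(2, 1)\n\\end{verbatim}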
\n\n\n\nRecall that vanishing of coefficients of $\\log Z^{A_2}(\\hbar,\\{t^{\\alpha}_k\\})$ correspond to relations among top intersection classes of tautological relations. The correlators $\\omega^{A_2}_{g,n}$ are holomorphic at $z=\\infty$ (their only poles occur at $z=\\pm1$) and in fact have high order vanishing there. The high order vanishing at $z=\\infty$ together with \\eqref{insert} proves vanishing of some coefficients of $\\log Z^{A_2}(\\hbar,\\{t^{\\alpha}_k\\})$. For example:\n$$\\omega^{A_2}_{2,1}(z) = \\frac{35}{243}\\frac{z(11z^4+14z^2+2)}{(z^2-1)^{10}}dz\\quad\\Rightarrow\\quad\\mathop{\\,\\rm Res\\,}_{z=\\infty}z^m\\omega^{A_2}_{2,1}(z)=0,\\quad m\\in\\{0,1,2,...,13\\}\n$$\nHence \\eqref{insert} vanishes for $k_1=0,1,2,3$ which gives relations between intersection numbers\n$$\\int_{\\overline{\\cal M}_{2,1}}R^{(m)}\\psi_1^{4-m}=0,\\quad m=1,2,3,4$$\nwhere $R^{(m)}$ is a relation between cohomology classes in $H^{2m}(\\overline{\\cal M}_{2,1})$ proven in \\cite{PPZRel}, such as $R^{(2)}=\\psi_1^2+$ boundary terms $=0$.\n\nNote that if one rescales $y\\mapsto\\lambda y$ then the correlators rescale by $\\omega^{A_2}_{g,n}\\mapsto\\lambda^{2-2g-n}\\omega^{A_2}_{g,n}$ so the change does not affect the vanishing terms. The coefficient $\\sqrt{-3}$ is chosen for convenience to get precise agreement with the associated topological field theory and Frobenius manifold.\n\\end{example}\nThe next example produces $Z^{\\text{BGW}}_{A_2}$, defined in \\eqref{ZBGWA2}, which we recall replaces factors of $Z^{\\text{KW}}$ with $Z^{\\text{BGW}}$ in the partition function $Z_{A_2}$ of the $A_2$ Frobenius manifold. \n\\begin{example} \\label{specA2rel}\nThe following spectral curve $S^{\\text{BGW}}_{A_2}$ shares $(C,x,B)$ with the spectral curve $S_{A_2}$, since this produces the correct operator $R(z)$ used in the construction of both $Z_{A_2}$ and $Z^{\\text{BGW}}_{A_2}$. We replace the function $y$ in $S_{A_2}$ by $dy\/dx$ which produces the required shift on the Laplace transform. \n$$\nS^{\\text{BGW}}_{A_2}=\\left(\\mathbb{C},x=z^3-3z,\\ y=\\frac{\\sqrt{-3}}{3z^2-3},\\ B=\\frac{dzdz'}{(z-z')^2}\\right).\n$$\nAs in the previous example, we have:\n$$R^{-1}(z)=I- \\frac{1}{144}\\left(\\begin{array}{cc}-1&-6i\\\\-6i&1\\end{array}\\right)z+\\frac{35}{41472}\\left(\\begin{array}{cc}-1&-12i\\\\12i&-1\\end{array}\\right)z^2+...\n$$\nThe local expansions of $dy=\\frac{2}{\\sqrt{-3}}\\frac{zdz}{(z^2-1)^2}$ around $z=-1={\\cal P}_1$ and $z=1={\\cal P}_2$ in the respective local coordinates $t$ and $s$, such that $x(t)=\\frac12 t^2+2$ and $x(s)=\\frac12 s^2-2$ are:\n$$ dy=\\frac{2}{\\sqrt{-3}}\\frac{zdz}{(z^2-1)^2}=\\left(-\\frac{1}{\\sqrt{2}}t^{-2}-\\frac{5}{144\\sqrt{2}}+\\frac{385}{41472\\sqrt{2}}t^2+...+{\\rm odd\\ terms}\\right)dt\n$$\n$$ dy=\\left(-\\frac{i}{\\sqrt{2}}s^{-2}+\\frac{5i}{144\\sqrt{2}}+\\frac{385i}{41472\\sqrt{2}}s^2+...+{\\rm odd\\ terms}\\right)ds\n$$\nand the Laplace transforms are a shift of the Laplace transforms in the previous example:\n$$\\frac{1}{\\sqrt{2\\pi z}}\\int_{\\Gamma_1}dy(p)\\cdot e^{(u_1-x(p)) \\over z}=\\frac{1}{\\sqrt{2}}z^{-1}-\\frac{5}{144\\sqrt{2}}+\\frac{385}{41472\\sqrt{2}}z+...\n$$\n$$\\frac{1}{\\sqrt{2\\pi z}}\\int_{\\Gamma_2}dy(p)\\cdot e^{(u_2-x(p)) \\over z}=\\frac{i}{\\sqrt{2}}z^{-1}+\\frac{5i}{144\\sqrt{2}}+\\frac{385i}{41472\\sqrt{2}}z+...\n$$\nApply topological recursion to $S^{\\text{BGW}}_{A_2}$ to produce correlators $\\omega_{g,n}$ and partition function $Z^{S^{\\text{BGW}}_{A_2}}$. 
It is proven in \\cite{CNoTop} that topological recursion applied to an irregular spectral curve, i.e. one for which $y$ has simple poles at the zeros of $dx$, produces a partition function $Z^S$ with translation term encoded in the Laplace transform of $dy$; in this case $T_0(z)=1\\!\\!1-R^{-1}(z)1\\!\\!1$. This is exactly the construction of $Z^{\\text{BGW}}_{A_2}$ defined in Section~\\ref{twloop}, hence we have:\n$$Z^{\\text{BGW}}_{A_2}(\\hbar,\\{t^{\\alpha}_k\\})=Z^{S^{\\text{BGW}}_{A_2}}(\\hbar,\\{t^{\\alpha}_k\\}).$$ \nLemma~\\ref{vanA2BGW} below proves that the sum of the orders of vanishing of $\\omega^{{\\text{BGW}},A_2}_{g,n}(z_1,...,z_n)$ at $z_i=\\infty$ is bounded below by $2g-2$. \nExplicitly in low genus, \n$$\\omega^{{\\text{BGW}},A_2}_{1,1}(z)=\\frac{z^2+1}{4\\sqrt{-3}(z^2-1)^2}dz,\\quad\\omega^{{\\text{BGW}},A_2}_{2,1}(z)=\\frac{-5z^2-1}{16\\sqrt{-3}(z-1)^4(z+1)^4}dz.\n$$\nWe find that $-\\displaystyle\\mathop{\\,\\rm Res\\,}_{z=\\infty}\\sqrt{-3}\\cdot z\\cdot\\omega^{{\\text{BGW}},A_2}_{1,1}(z)=\\tfrac{1}{4}$ agrees with the graphical expansion\n\\begin{center}\n\\begin{tikzpicture}\n\\node(0) at (1,0) {$e_0$} ;\n\\node(1) at (2,0)[shape=circle,draw] {1};\n\\path [-] (0) edge node {} (1);\n\\end{tikzpicture}\n\\end{center}\nwhich contributes $2^g\\cdot\\int_{\\overline{\\cal M}_{1,1}}\\Theta_{1,1}=2\\cdot\\frac{1}{8}=\\frac{1}{4}$.\n\n\nFrom the vanishing of $\\omega^{{\\text{BGW}},A_2}_{2,1}(z)$ at $z=\\infty$ it immediately follows that $\\displaystyle\\mathop{\\,\\rm Res\\,}_{z=\\infty}\\frac{\\sqrt{-3}}{2}z\\cdot\\omega^{{\\text{BGW}},A_2}_{2,1}(z)=0$, which signifies a relation between coefficients of $Z^{\\text{BGW}}(\\hbar,t_0,t_1,...)$. We will write the relations using $\\Theta_{g,n}$; however, the relations are between coefficients of $Z^{\\text{BGW}}(\\hbar,t_0,t_1,...)$, and what we are showing here is that these coefficients satisfy the same relations as intersection numbers involving $\\Theta_{g,n}$, or equivalently coefficients of $Z^{\\Theta}(\\hbar,t_0,t_1,...)$. 
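As a check of the claimed vanishing: from the explicit formula above, as $z\\to\\infty$ one has $\\omega^{{\\text{BGW}},A_2}_{2,1}(z)\\sim -\\tfrac{5}{16\\sqrt{-3}}z^{-6}dz$, so $z\\cdot\\omega^{{\\text{BGW}},A_2}_{2,1}(z)$ is holomorphic at $z=\\infty$, vanishing there to order 3 in the local coordinate $1\/z$, and its residue is indeed zero. 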
The graphical expansion of this is given by:\n\\begin{center}\n\\begin{tikzpicture}\n \n\\node(0) at (-5,0) {} ;\n\\node(1) at (-4,0)[shape=circle,draw] {2};\n\\path [-] (0) edge node {} (1);\n\\node(5) at (-2.5,0) {} ;\n\\node(6) at (-1.5,0)[shape=circle,draw] {2};\n\\node(7) at (-.5,.5) {};\n\\tikzset{decoration={snake,amplitude=.4mm,segment length=2mm,\n post length=0mm,pre length=0mm}}\n \\draw[decorate] (6) -- (7);\n\\node(2) at (6,0)(text) {};\n\\path [-] (5) edge node {} (6);\n\n\\node(3) at (1,0)[shape=circle,draw] {1};\n\\node(4) at (3,0)[shape=circle,draw] {1};\n\\path [-] (3) edge node [above] {} (4);\n\\node(10) at (0,0) {};\n\\path [-] (10) edge node {} (3);\n\\circledarrow{}{text}{.8cm};\n\\node(8) at (5.25,0)[shape=circle,draw] {1};\n\\node(11) at (4.25,0) {};\n\\path [-] (11) edge node {} (8);\n\n\\end{tikzpicture}\n\\end{center}\n(plus graphs containing genus 0 vertices on which $\\Theta_{2,1}$ vanishes)\t \nwhich contributes \n$$2^2\\cdot\\frac{60}{1728}\\cdot\\int_{\\overline{\\cal M}_{2,1}}\\Theta_{2,1}\\cdot\\psi_1+2^2\\cdot\\frac{-60}{1728}\\cdot\\int_{\\overline{\\cal M}_{2,1}}\\Theta_{2,1}\\cdot\\kappa_1+2^2\\cdot\\frac{84}{1728}\\cdot\\int_{\\overline{\\cal M}_{1,2}}\\Theta_{1,2}\\cdot\\int_{\\overline{\\cal M}_{1,1}}\\Theta_{1,1}+\\frac{2}{2}\\cdot\\frac{84-60}{1728}\\cdot\\int_{\\overline{\\cal M}_{1,3}}\\Theta_{1,3}\n$$\nwhich agrees with the expansion in weighted graphs of $\\displaystyle\\mathop{\\,\\rm Res\\,}_{z=\\infty}\\frac{\\sqrt{-3}}{2}z\\cdot\\omega_{2,1}(z)=0$ given by\n$$\\frac{5}{1536}-\\frac{15}{1536}+\\frac{7}{2304}+\\frac{1}{288}=0. \n$$\n\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\n\n\\end{example}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsubsection{Intersection numbers and the Brezin-Gross-Witten tau function.} \\label{intkdv}\n\nWe are finally in a position to prove that $Z^{\\text{BGW}}(\\hbar,t_0,t_1,...)$ is a generating function for the intersection numbers $\\int_{\\overline{\\cal M}_{g,n}}\\Theta_{g,n}\\prod_{i=1}^n\\psi_i^{k_i}$, or\n$$Z^{\\text{BGW}}(\\hbar,t_0,t_1,...)=Z^{\\Theta}(\\hbar,t_0,t_1,...)$$ \nstated as Theorem~\\ref{tauf} below. {\\em A priori} the coefficients of $Z^{\\text{BGW}}_{A_2}$ have nothing to do with integration of cohomology classes over $\\overline{\\cal M}_{g,n}$, nevertheless we will see that $Z^{\\text{BGW}}_{A_2}$ and $Z^{\\Theta}_{A_2}$ share the same relations which will be used to prove their coincidence. This relies on the spectral curve $x=z^3-3z$, $y=\\frac{1}{\\sqrt{-3}(z^2-1)}$ analysed in Example~\\ref{specA2rel}. \n\n\n\n\n\n\\begin{proof}[Proof of Theorem~\\ref{tauf}] The proof is a rather beautiful application of Pixton's relations proven by Pandharipande, Pixton and Zvonkine, \\cite{PPZRel}. We use Pixton's relations to induce relations among the intersection numbers $\\int_{\\overline{\\cal M}_{g,n}}\\Theta_{g,n}\\prod_{i=1}^n\\psi_i^{m_i}$. \nThe key idea of the proof of the theorem is to show that the coefficients of $Z^{\\text{BGW}}$ satisfy the same relations as those, such as \\eqref{relg2}, induced by Pixton's relations on the intersection numbers $\\int_{\\overline{\\cal M}_{g,n}}\\Theta_{g,n}\\prod_{i=1}^n\\psi_i^{m_i}$. 
The proof here brings together all of the results of Section~\\ref{sec:kdv}.\\\\\n\nRecall from Section~\\ref{sec:A2part} that the partition functions $Z^{\\Theta}_{A_2}(\\hbar,\\{t^{\\alpha}_k\\})$ and $Z^{\\text{BGW}}_{A_2}(\\hbar,\\{t^{\\alpha}_k\\})$ have the same structure since they are built out of the same element $R(z)$, arising from the $A_2$ Frobenius manifold, and the same translation $T_0(z)=1\\!\\!1-R^{-1}(z)1\\!\\!1$, but with {\\em a priori} different vertex contributions. Vanishing coefficients in both partition functions produce the same relations between coefficients of $Z^{\\Theta}(\\hbar,t_0,t_1,...)$, respectively $Z^{\\text{BGW}}(\\hbar,t_0,t_1,...)$. Enough of these relations will prove that $Z^{\\Theta}(\\hbar,t_0,t_1,...)=Z^{\\text{BGW}}(\\hbar,t_0,t_1,...)$. Vanishing of certain coefficients of $Z^{\\Theta}_{A_2}(\\hbar,\\{t^{\\alpha}_k\\})$ is a consequence of the cohomological viewpoint---it comes from $\\Theta_{g,n}R^{(g-1)}=0$ for Pixton relations $0=R^{(g-1)}\\in H^{2g-2}(\\overline{\\cal M}_{g,n})$. The vanishing of corresponding coefficients of $Z^{\\text{BGW}}_{A_2}(\\hbar,\\{t^{\\alpha}_k\\})$ uses topological recursion applied to the spectral curve $S^{\\text{BGW}}_{A_2}$, defined in Example~\\ref{specA2rel}, which produces the partition function $Z^{\\text{BGW}}_{A_2}(\\hbar,\\{t^{\\alpha}_k\\})$ out of correlators $\\omega^{{\\text{BGW}},A_2}_{g,n}(z_1,...,z_n)$. We know that $\\omega^{{\\text{BGW}},A_2}_{g,n}(z_1,...,z_n)$ is holomorphic at $z_i=\\infty$ for each $i=1,...,n$. The next lemma bounds its total order of vanishing there.\n\\begin{lemma} \\label{vanA2BGW}\n$$\\sum_{i=1}^n\\text{ord}_{z_i=\\infty\\ }\\omega^{{\\text{BGW}},A_2}_{g,n}(z_1,...,z_n)\\geq 2g-2\n$$\nwhere $\\text{ord}_{z=\\infty\\ }\\eta(z)$ is the order of vanishing of the differential $\\eta(z)$ at $z=\\infty$.\n\\end{lemma}\n\\begin{proof}\nWe can make the rational differential\n$$\\omega^{{\\text{BGW}},A_2}_{g,n}(z_1,...,z_n)=\\frac{p_{g,n}(z_1,....,z_n)}{\\prod_{i=1}^n(z_i^2-1)^{2g}}dz_1...dz_n\n$$\nhomogeneous by applying topological recursion to $x(z)=z^3-3Q^2z$ and $y=\\sqrt{-3}\/x'(z)$, which are homogeneous in $z$ and $Q$. Then $\\omega^{{\\text{BGW}},A_2}_{g,n}(Q,z_1,...,z_n)$ is homogeneous in $z$ and $Q$ of degree $2-2g-n$:\n$$\\omega^{{\\text{BGW}},A_2}_{g,n}(Q,z_1,...,z_n)=\\lambda^{2-2g-n}\\omega^{{\\text{BGW}},A_2}_{g,n}(\\lambda Q,\\lambda z_1,...,\\lambda z_n).\n$$\nThe degree of homogeneity uses the fact that $(z,Q)\\mapsto(\\lambda z,\\lambda Q)$ $\\Rightarrow$ $ydx\\mapsto\\lambda ydx$ $\\Rightarrow$ $\\omega_{g,n}\\mapsto\\lambda^{2-2g-n}\\omega_{g,n}$\nbecause $ydx$ appears in the kernel $K(p_1,p)$ with homogeneous degree $-1$, which easily leads to degree $2-2g-n$ for $\\omega_{g,n}$. The degree $2-2g-n$ homogeneity of\n$$\\omega^{{\\text{BGW}},A_2}_{g,n}(Q,z_1,...,z_n)=\\frac{p_{g,n}(Q,z_1,....,z_n)}{\\prod_{i=1}^n(z_i^2-Q^2)^{2g}}dz_1...dz_n\n$$\nimplies that $\\deg p_{g,n}(Q,z_1,....,z_n)=4gn-2n+2-2g$. But we also know that $\\omega^{{\\text{BGW}},A_2}_{g,n}(Q,z_1,...,z_n)$ is well-defined as $Q\\to 0$---the limit becomes $\\omega_{g,n}$ of the spectral curve $x(z)=z^3$ and $y=\\sqrt{-3}\/x'(z)$ using the topological recursion defined by Bouchard and Eynard \\cite{BEyThi}---so $\\deg p_{g,n}(z_1,....,z_n)\\leq 4gn-2n+2-2g$. 
Note that $dz_i$ is homogeneous of degree 1 but has a pole of order 2 at $z_i=\\infty$, hence\n$$\\sum_{i=1}^n\\text{ord}_{z_i=\\infty\\ }\\omega^{{\\text{BGW}},A_2}_{g,n}(z_1,...,z_n)=4gn-\\deg p_{g,n}(z_1,....,z_n)-2n\\geq 2g-2.\n$$\n\\end{proof}\nThe primary coefficients of $Z^{\\text{BGW}}_{A_2}(\\hbar,\\{t^{\\alpha}_k\\})$, those where $k=0$, correspond to $$ \\mathop{\\,\\rm Res\\,}_{z_1=\\infty}...\\mathop{\\,\\rm Res\\,}_{z_n=\\infty}\\prod_{i=1}^nz_i^{\\epsilon_i}\\omega^{{\\text{BGW}},A_2}_{g,n}(z_1,...,z_n)$$ for $\\epsilon_i=1$ or 2. Different choices of $\\epsilon_i$ give different relations (except half, which vanish for parity reasons). We have:\n\\begin{corollary}\n$$n< 2g-2\\Rightarrow \\mathop{\\,\\rm Res\\,}_{z_1=\\infty}...\\mathop{\\,\\rm Res\\,}_{z_n=\\infty}\\prod_{i=1}^nz_i^{\\epsilon_i}\\omega^{{\\text{BGW}},A_2}_{g,n}(z_1,...,z_n)=0$$\n\\end{corollary}\n\\begin{proof}\nSince $n< 2g-2$ and $\\sum_{i=1}^n\\text{ord}_{z_i=\\infty\\ }\\omega^{{\\text{BGW}},A_2}_{g,n}(z_1,...,z_n)\\geq 2g-2$ by Lemma~\\ref{vanA2BGW}, there exists an $i$ such that $\\text{ord}_{z_i=\\infty\\ }\\omega^{{\\text{BGW}},A_2}_{g,n}(z_1,...,z_n)\\geq 2$ (otherwise the sum would be at most $n<2g-2$). Hence $z_i^{\\epsilon_i}\\omega^{{\\text{BGW}},A_2}_{g,n}(z_1,...,z_n)$ is holomorphic at $z_i=\\infty$, so \n$$\\displaystyle\\mathop{\\,\\rm Res\\,}_{z_i=\\infty}z_i^{\\epsilon_i}\\omega^{{\\text{BGW}},A_2}_{g,n}(z_1,...,z_n)=0$$ \nand the multiple residue vanishes as required.\n\\end{proof}\nHence the primary coefficients of $Z^{\\text{BGW}}_{A_2}(\\hbar,\\{t^{\\alpha}_k\\})$ vanish for $n< 2g-2$, proving that the coefficients of $Z^{\\Theta}(\\hbar,t_0,t_1,...)$ and $Z^{\\text{BGW}}(\\hbar,t_0,t_1,...)$ for $n<2g-2$ satisfy the same relations. Pixton's relations give enough relations among the coefficients of $Z^{\\Theta}(\\hbar,t_0,t_1,...)$ to calculate them, hence the same relations also uniquely determine the coefficients of $Z^{\\text{BGW}}(\\hbar,t_0,t_1,...)$, and the coefficients of $Z^{\\Theta}(\\hbar,t_0,t_1,...)$ and $Z^{\\text{BGW}}(\\hbar,t_0,t_1,...)$ agree whenever $n< 2g-2$. The dilaton equation \\eqref{dilaton} then proves equality of coefficients for all $n$ giving $Z^{\\Theta}(\\hbar,t_0,t_1,...)=Z^{\\text{BGW}}(\\hbar,t_0,t_1,...)$.\n\n\n\n\\end{proof}\n\n\n\n\n\n \n\n\n\\section{Cohomological field theories} \\label{sec:cohft}\n\nThe class $\\Theta_{g,n}$ combines with known enumerative invariants, such as Gromov-Witten invariants, to give rise to new invariants. More generally, $\\Theta_{g,n}$ pairs with any cohomological field theory, a structure fundamentally tied to the moduli space of curves $\\overline{\\cal M}_{g,n}$; the resulting invariants retain many of the properties of the cohomological field theory and are, in particular, often calculable.\n\nA {\\em cohomological field theory} on a pair $(V,\\eta)$, where $V$ is a finite-dimensional complex vector space equipped with a metric $\\eta$, is a sequence of $S_n$-equivariant maps 
\n\\[ \\Omega_{g,n}:V^{\\otimes n}\\to H^*(\\overline{\\cal M}_{g,n})\\]\nthat satisfy compatibility conditions from inclusion of strata:\n$$\\phi_{\\text{irr}}:\\overline{\\cal M}_{g-1,n+2}\\to\\overline{\\cal M}_{g,n},\\quad \\phi_{h,I}:\\overline{\\cal M}_{h,|I|+1}\\times\\overline{\\cal M}_{g-h,|J|+1}\\to\\overline{\\cal M}_{g,n},\\quad I\\sqcup J=\\{1,...,n\\}$$\ngiven by\n\\begin{align}\n\\phi_{\\text{irr}}^*\\Omega_{g,n}(v_1\\otimes...\\otimes v_n)&=\\Omega_{g-1,n+2}(v_1\\otimes...\\otimes v_n\\otimes\\Delta) \n \\label{glue1}\n\\\\\n\\phi_{h,I}^*\\Omega_{g,n}(v_1\\otimes...\\otimes v_n)&=\\Omega_{h,|I|+1}\\otimes \\Omega_{g-h,|J|+1}\\big(\\bigotimes_{i\\in I}v_i\\otimes\\Delta\\otimes\\bigotimes_{j\\in J}v_j\\big) \\label{glue2}\n\\end{align}\nwhere $\\Delta\\in V\\otimes V$ is dual to the metric $\\eta\\in V^*\\otimes V^*$. When $n=0$, $\\Omega_g:=\\Omega_{g,0}\\in H^*(\\overline{\\cal M}_{g})$.\nThere exists a vector $1\\!\\!1\\in V$ compatible with the forgetful map $\\pi:\\overline{\\cal M}_{g,n+1}\\to\\overline{\\cal M}_{g,n}$ by\n\\begin{equation} \\label{cohforget}\n\\Omega_{g,n+1}(1\\!\\!1\\otimes v_1\\otimes...\\otimes v_n)=\\pi^*\\Omega_{g,n}(v_1\\otimes...\\otimes v_n)\n\\end{equation}\nfor $2g-2+n>0$, and\n$$\\Omega_{0,3}(1\\!\\!1\\otimes v_1\\otimes v_2)=\\eta(v_1,v_2).\n$$\n\nFor a one-dimensional CohFT, i.e. $\\dim V=1$, identify $\\Omega_{g,n}$ with the image $\\Omega_{g,n}(1\\!\\!1^{\\otimes n})$, so we write $\\Omega_{g,n}\\in H^*(\\overline{\\cal M}_{g,n})$. A trivial example of a CohFT is $\\Omega_{g,n}=1\\in H^0(\\overline{\\cal M}_{g,n})$ which is a topological field theory as we now describe.\n\nA two-dimensional topological field theory (TFT) is a vector space $V$ and a sequence of symmetric linear maps\n\\[ \\Omega^0_{g,n}:V^{\\otimes n}\\to \\mathbb{C}\\]\nfor integers $g\\geq 0$ and $n>0$ satisfying the following conditions. The map $\\Omega^0_{0,2}=\\eta$ defines a metric $\\eta$, and together with $\\Omega^0_{0,3}$ it defines a\nproduct $\\raisebox{-0.5ex}{\\scalebox{1.8}{$\\cdot$}}$ on $V$ via\n\\begin{equation} \\label{prod}\n\\eta(v_1\\raisebox{-0.5ex}{\\scalebox{1.8}{$\\cdot$}} v_2,v_3)=\\Omega^0_{0,3}(v_1,v_2,v_3)\n\\end{equation}\nwith identity $1\\!\\!1$ given by the dual of $\\Omega^0_{0,1}=1\\!\\!1^*=\\eta(1\\!\\!1,\\cdot)$. It satisfies \n$$\\Omega^0_{g,n+1}(1\\!\\!1\\otimes v_1\\otimes...\\otimes v_n)=\\Omega^0_{g,n}(v_1\\otimes...\\otimes v_n)$$ \nand the gluing conditions \n$$\n\\Omega^0_{g,n}(v_1\\otimes...\\otimes v_n)=\\Omega^0_{g-1,n+2}(v_1\\otimes...\\otimes v_n\\otimes\\Delta)=\\Omega^0_{g_1,|I|+1}\\otimes \\Omega^0_{g_2,|J|+1}\\big(\\bigotimes_{i\\in I}v_i\\otimes\\Delta\\otimes\\bigotimes_{j\\in J}v_j\\big)$$\nfor $g=g_1+g_2$ and $I\\sqcup J=\\{1,...,n\\}$.\n\nConsider the natural isomorphism $H^0({\\overline{\\cal M}_{g,n})}\\cong\\mathbb{C}$. The degree zero part of a CohFT $\\Omega_{g,n}$ is a TFT: \n$$\\Omega^0_{g,n}:V^{\\otimes n}\\stackrel{\\Omega_{g,n}}{\\to} H^*({\\overline{\\cal M}_{g,n}})\\to H^0({\\overline{\\cal M}_{g,n}}).\n$$\nWe often write $\\Omega_{0,3}=\\Omega^0_{0,3}$ interchangeably. 
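\n\nAs an illustration of the gluing conditions, we record a standard closed form which can be checked directly from them (it is not needed in what follows): if the product on $V$ defined by \\eqref{prod} admits a basis $\\{u_1,...,u_N\\}$ with $u_i\\cdot u_j=\\delta_{ij}u_i$ and $\\eta(u_i,u_j)=\\delta_{ij}\\eta_i$, then $\\Delta=\\sum_i u_i\\otimes u_i\/\\eta_i$ and\n$$\\Omega^0_{g,n}(u_{i_1}\\otimes...\\otimes u_{i_n})=\\begin{cases}\\eta_i^{1-g}&i_1=...=i_n=i,\\\\ 0&\\text{otherwise.}\\end{cases}$$\nFor example, $\\Omega^0_{1,1}(u_i)=\\Omega^0_{0,3}(u_i\\otimes\\Delta)=\\sum_j\\delta_{ij}\\eta_i\/\\eta_j=1$, and gluing one more handle gives $\\Omega^0_{2,1}(u_i)=\\sum_j\\delta_{ij}\/\\eta_j=\\eta_i^{-1}$.\n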
Associated to $\\Omega_{g,n}$ is the product \\eqref{prod} built from $\\eta$ and $\\Omega_{0,3}$.\n\n\n\nGiven a CohFT $\\Omega=\\{\\Omega_{g,n}\\}$ and a basis $\\{e_1,...,e_N\\}$ of $V$, the partition function of $\\Omega$ is defined, as in \\eqref{ZA2}, by\n\\begin{equation} \\label{partfun}\nZ_{\\Omega}(\\hbar,\\{t^{\\alpha}_k\\})=\\exp\\sum_{g,n,\\vec{k}}\\frac{\\hbar^{g-1}}{n!}\\int_{\\overline{\\cal M}_{g,n}}\\Omega_{g,n}(e_{\\alpha_1}\\otimes...\\otimes e_{\\alpha_n})\\cdot\\prod_{j=1}^n\\psi_j^{k_j}\\prod t^{\\alpha_j}_{k_j}\n\\end{equation}\nfor $\\alpha_i\\in\\{1,...,N\\}$ and $k_j\\in\\mathbb{N}$.\n\n \n\\begin{remark} \\label{theco}\nThe class $\\Theta_{g,n}$ defined in the introduction satisfies properties~\\eqref{glue1} and \\eqref{glue2} of a one-dimensional CohFT. In place of property~\\eqref{cohforget}, it satisfies $\\Theta_{g,n+1}(1\\!\\!1\\otimes v_1\\otimes...\\otimes v_n)=\\psi_{n+1}\\cdot\\pi^*\\Theta_{g,n}(v_1\\otimes...\\otimes v_n)$ and $\\Theta_{0,3}=0$.\n\\end{remark}\n\\begin{definition} \\label{cohftheta}\nFor any CohFT $\\Omega$ defined on $(V,\\eta)$ define $\\Omega^\\Theta=\\{\\Omega^\\Theta_{g,n}\\}$ to be the sequence of $S_n$-equivariant maps $\\Omega^\\Theta_{g,n}:V^{\\otimes n}\\to H^*(\\overline{\\cal M}_{g,n})$ given by $\\Omega^\\Theta_{g,n}(v_1\\otimes...\\otimes v_n)=\\Theta_{g,n}\\cdot\\Omega_{g,n}(v_1\\otimes...\\otimes v_n)$. \n\\end{definition}\nThis is essentially the tensor product construction for CohFTs, albeit involving $\\Theta_{g,n}$. The tensor product of CohFTs is obtained as above via the cup product on $H^*(\\overline{\\cal M}_{g,n})$, generalising Gromov-Witten invariants of product targets and the K\\\"unneth formula $H^*(X_1\\times X_2)\\cong H^*X_1\\otimes H^*X_2$. \n\nGeneralising Remark~\\ref{theco}, $\\Omega^\\Theta_{g,n}$ satisfies properties~\\eqref{glue1} and \\eqref{glue2} of a CohFT on $(V,\\eta)$. In place of property~\\eqref{cohforget}, it satisfies \n$$\\Omega^\\Theta_{g,n+1}(1\\!\\!1\\otimes v_1\\otimes...\\otimes v_n)=\\psi_{n+1}\\cdot\\pi^*\\Omega^\\Theta_{g,n}(v_1\\otimes...\\otimes v_n)$$ and $\\Omega^\\Theta_{0,3}=0$.\n\n The product defined in \\eqref{prod} is {\\em semisimple} if it is diagonal, $V\\cong\\mathbb{C}\\oplus\\mathbb{C}\\oplus...\\oplus\\mathbb{C}$, i.e. there is a canonical basis $\\{ u_1,...,u_N\\}\\subset V$ such that $u_i\\cdot u_j=\\delta_{ij}u_i$. The metric is then necessarily diagonal with respect to the same basis, $\\eta(u_i,u_j)=\\delta_{ij}\\eta_i$ for some $\\eta_i\\in\\mathbb{C} \\setminus \\{0\\}$, $i=1,...,N$. The Givental-Teleman theorem \\cite{GiventalMain,TelStr} states that the twisted loop group action defined in Section~\\ref{twloop} acts transitively on semisimple CohFTs. In particular, a semisimple homogeneous CohFT is uniquely determined by its underlying TFT. The tau function $Z^{\\text{BGW}}$ appears in a generalisation of Givental's decomposition of CohFTs \\cite{CNoTop} which, combined with the result $Z^{\\text{BGW}}=Z^{\\Theta}$, generalises Givental's action on CohFTs to allow one to replace the TFT term by the classes $\\Theta_{g,n}$. For example, in \\cite{DNoTopI} the enumeration of bipartite dessins d'enfant is shown to satisfy topological recursion for the curve $xy^2+xy+1=0$, which has both a regular and an irregular zero of $dx$. 
Underlying this example is a generalised CohFT built out of the classes $\\Theta_{g,n}\\in H^*(\\overline{\\cal M}_{g,n})$ combined with the class $1\\in H^*(\\overline{\\cal M}_{g,n})$ via Givental's action.\n\n\n\n\\subsection{Gromov-Witten invariants}\nLet $X$ be a projective algebraic variety and consider $(C,x_1,\\dots,x_n)$ a connected smooth curve of genus $g$ with $n$ distinct marked points. For $\\beta \\in H_2(X,\\mathbb{Z})$ the moduli space of stable maps $\\cal M^g_n(X,\\beta)$ is defined by:\n$$\\cal M_{g,n}(X,\\beta)=\\{(C,x_1,\\dots,x_n)\\stackrel{\\pi}{\\rightarrow} X\\mid \\pi_\\ast [C]=\\beta\\}\/\\sim$$\nwhere $\\pi$ is a morphism from a connected nodal curve $C$ containing distinct points $\\{x_1,\\dots,x_n\\}$ that avoid the nodes. Any genus zero irreducible component of $C$ with fewer than three distinguished points (nodal or marked), or genus one irreducible component of $C$ with no distinguished point, must not be collapsed to a point. We quotient by isomorphisms of the domain $C$ that fix each $x_i$. The moduli space of stable maps has irreducible components of different dimensions but it has a virtual class of dimension\n$$ \\dim[\\cal M_{g,n}(X,\\beta)]^{\\text{virt}}=(\\dim X-3)(1-g)+\\langle c_1(X),\\beta\\rangle +n.\n$$\nFor $i=1,\\dots,n$ there exist evaluation maps:\n\\begin{equation}\nev_i:\\cal M_{g,n}(X,\\beta)\\longrightarrow X, \\quad ev_i(\\pi)=\\pi(x_i)\n\\end{equation}\nand classes $\\gamma\\in H^*(X,\\mathbb{Z})$ pull back to classes in $H^*(\\cal M_{g,n}(X,\\beta),\\mathbb{Q})$\n\\begin{equation}\nev_i^\\ast:H^*(X,\\mathbb{Z})\\longrightarrow H^*(\\cal M_{g,n}(X,\\beta),\\mathbb{Q}).\n\\end{equation}\nThe forgetful map $p:\\cal M_{g,n}(X,\\beta)\\to \\overline{\\cal M}_{g,n}$ maps a stable map to its domain curve followed by contraction of unstable components. The push-forward map $p_*$ on cohomology defines a CohFT $\\Omega_X$ on the even part of the cohomology $V=H^{\\text{even}}(X;\\mathbb{C})$ (and a generalisation of a CohFT on $H^*(X;\\mathbb{C})$) equipped with the metric \n$$\\eta(\\alpha,\\beta)=\\int_X\\alpha\\wedge\\beta.$$\nWe have $(\\Omega_X)_{g,n}:H^{\\text{even}}(X)^{\\otimes n}\\to H^*(\\overline{\\cal M}_{g,n})$ defined by \n$$(\\Omega_X)_{g,n}(\\alpha_1,...\\alpha_n)=\\sum_{\\beta}p_*\\left(\\prod_{i=1}^nev_i^\\ast(\\alpha_i)\\cap[\\cal M_{g,n}(X,\\beta)]^{\\text{virt}}\\right)\\in H^*(\\overline{\\cal M}_{g,n}).$$\nNote that it is the dependence of $p=p(g,n,\\beta)$ on $\\beta$ (which is suppressed) that allows $(\\Omega_X)_{g,n}(\\alpha_1,...\\alpha_n)$ to be composed of different degree terms. The partition function of the CohFT $\\Omega_X$ with respect to a chosen basis $e_{\\alpha}$ of $H^{\\text{even}}(X;\\mathbb{C})$ is\n$$Z_{\\Omega_X}(\\hbar,\\{t^{\\alpha}_k\\})=\\exp\\sum_{\\Small{\\begin{array}{c}g,n,\\vec{k}\\\\ \\vec{\\alpha},\\beta\\end{array}}}\\frac{\\hbar^{g-1}}{n!}\\int_{\\overline{\\cal M}_{g,n}}p_*\\left(\\prod_{i=1}^nev_i^\\ast(e_{\\alpha_i})\\cap[\\cal M_{g,n}(X,\\beta)]^{\\text{virt}}\\right)\\cdot\\prod_{j=1}^n\\psi_j^{k_j}\\prod t^{\\alpha_j}_{k_j}.\n$$\nIt stores {\\em ancestor} invariants. 
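For example, when $X$ is a point the only class is $\\beta=0$, the moduli space of stable maps is $\\overline{\\cal M}_{g,n}$ itself with virtual class its fundamental class, $p$ is the identity, and $\\Omega_X$ reduces to the trivial CohFT $\\Omega_{g,n}=1$; the partition function then stores the intersection numbers $\\int_{\\overline{\\cal M}_{g,n}}\\prod_j\\psi_j^{k_j}$, i.e. it is the Kontsevich-Witten tau function $Z^{\\text{KW}}$ in these conventions. 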
Ancestor invariants are different from {\\em descendant} invariants, which use, in place of $\\psi_j=c_1(L_j)$, the classes $\\Psi_j=c_1(\\mathcal{L}_j)$ for line bundles $\\mathcal{L}_j\\to \\cal M_{g,n}(X,\\beta)$ defined as the cotangent bundle at the $j$th marked point.\n\nFollowing Definition~\\ref{cohftheta}, we define $\\Omega_X^{\\Theta}$ by\n$$(\\Omega_X^{\\Theta})_{g,n}(\\alpha_1,...\\alpha_n)=\\Theta_{g,n}\\cdot\\sum_{\\beta}p_*\\left(\\prod_{i=1}^nev_i^\\ast(\\alpha_i)\\right)\\in H^*(\\overline{\\cal M}_{g,n})$$\nand\n$$Z^{\\Theta}_{\\Omega_X}(\\hbar,\\{t^{\\alpha}_k\\})=\\exp\\sum_{\\Small{\\begin{array}{c}g,n,\\vec{k}\\\\ \\vec{\\alpha},\\beta\\end{array}}}\\frac{\\hbar^{g-1}}{n!}\\int_{\\overline{\\cal M}_{g,n}}\\Theta_{g,n}\\cdot p_*\\left(\\prod_{i=1}^nev_i^\\ast(e_{\\alpha_i})\\right)\\cdot\\prod_{j=1}^n\\psi_j^{k_j}\\prod t^{\\alpha_j}_{k_j}.\n$$\nThe invariants $\\Omega_X^{\\Theta}$ have not been defined directly from a moduli space $\\cal M_{g,n}^{\\Theta}(X,\\beta)$, say obtained by restricting the domain of a stable map to a $(g-1)$-dimensional subvariety $X_{g,n}\\subset\\overline{\\cal M}_{g,n}$ arising as the push-forward of the zero set of a section of the bundle $E_{g,n}$ defined in Definition~\\ref{obsbun}. Nevertheless, it is instructive for comparison purposes to write the virtual dimension of this yet-to-be-defined $\\cal M_{g,n}^{\\Theta}(X,\\beta)$. We have\n$$ \\dim[\\cal M_{g,n}^{\\Theta}(X,\\beta)]^{\\text{virt}}=(\\dim X-1)(1-g)+\\langle c_1(X),\\beta\\rangle.\n$$\nAgain we emphasise that the invariants of $X$ stored in $Z^{\\Theta}_{\\Omega_X}(\\hbar,\\{t^{\\alpha}_k\\})$ are rigorously defined, and the purpose of the dimension formula is comparison with usual Gromov-Witten invariants. We see that the virtual dimension is independent of $n$. Elliptic curves now take the place of Calabi-Yau 3-folds to give virtual dimension zero moduli spaces, independent of genus and degree. The invariants of a target curve $X$ are trivial when the genus of $X$ is greater than 1 and computable when $X=\\mathbb{P}^1$ \\cite{NorGro}, producing results analogous to the usual Gromov-Witten invariants in \\cite{NScGro}. For $c_1(X)=0$ and $\\dim X>1$, the invariants vanish for $g>1$, while for $g=1$ they seem to predict an invariant associated to maps of elliptic curves to $X$.\n\n \n\n\\subsubsection{Weil-Petersson volumes} \nA fundamental example of a 1-dimensional CohFT is given by \n$$\\Omega_{g,n}=\\exp(2\\pi^2\\kappa_1)\\in H^*(\\overline{\\cal M}_{g,n}).$$ \nIts partition function stores Weil-Petersson volumes \n$$V_{g,n}=\\frac{(2\\pi^2)^{3g-3+n}}{(3g-3+n)!}\\int_{\\overline{\\cal M}_{g,n}}\\kappa_1^{3g-3+n}$$\nand deformed Weil-Petersson volumes studied by Mirzakhani \\cite{MirSim}.\nWeil-Petersson volumes of the subvariety of $\\overline{\\cal M}_{g,n}$ dual to $\\Theta_{g,n}$ make sense even before we find such a subvariety. They are given by \n$$V^{\\Theta}_{g,n}=\\frac{(2\\pi^2)^{g-1}}{(g-1)!}\\int_{\\overline{\\cal M}_{g,n}}\\Theta_{g,n}\\cdot\\kappa_1^{g-1}$$ \nwhich are calculable since they are given by a translation of $Z^{\\text{BGW}}$. If we include $\\psi$ classes, we get polynomials $V^{\\Theta}_{g,n}(L_1,...,L_n)$, which give deformed volumes analogous to Mirzakhani's volumes. 
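For instance, the $g=1$ volumes involve no $\\kappa_1$ factor, so $V^{\\Theta}_{1,1}=\\int_{\\overline{\\cal M}_{1,1}}\\Theta_{1,1}=\\frac{1}{8}$, the value appearing in the graphical expansions earlier, while $V^{\\Theta}_{2,1}=2\\pi^2\\int_{\\overline{\\cal M}_{2,1}}\\Theta_{2,1}\\cdot\\kappa_1$ involves one of the intersection numbers appearing in the genus two relation earlier.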
\n\n\n\n\\subsubsection{ELSV formula}\nAnother example of a 1-dimensional CohFT is given by \n$$\\Omega_{g,n}=c(E^\\vee)=1-\\lambda_1+...+(-1)^{g}\\lambda_{g}\\in H^*(\\overline{\\cal M}_{g,n})$$ \nwhere $\\lambda_i=c_i(E)$ is the $i$th Chern class of the Hodge bundle $E\\to\\overline{\\cal M}_{g,n}$ defined to have fibres $H^0(\\omega_C)$ over a nodal curve $C$.\n\nHurwitz \\cite{HurRie} studied the problem of connected curves $\\Sigma$ of genus $g$ covering $\\mathbb{P}^1$, branched over $r+1$ fixed points $\\{p_1,p_2,...,p_r,p_{r+1}\\}$ with arbitrary profile $\\mu=(\\mu_1,...,\\mu_n)$ over $p_{r+1}$. Over the other $r$ branch points one specifies simple ramification, i.e. the partition $(2,1,1,....)$. The Riemann-Hurwitz formula determines the number $r$ of simple branch points via $2-2g-n=|\\mu|-r$. \n\\begin{definition} \nDefine the simple Hurwitz number $H_{g,\\mu}$ to be the weighted count of genus $g$ connected covers of $\\mathbb{P}^1$ with ramification $\\mu=(\\mu_1,...,\\mu_n)$ over $\\infty$ and simple ramification elsewhere.\nEach cover $\\pi$ is counted with weight $1\/|{\\rm Aut}(\\pi)|$.\n\\end{definition}\nCoefficients of the partition function of the CohFT $\\Omega_{g,n}=c(E^\\vee)$ appear naturally in the ELSV formula \\cite{ELSVHur} which relates the Hurwitz numbers $H_{g,\\mu}$ to the Hodge classes. The ELSV formula is:\n\\[ H_{g,\\mu}=\\frac{r(g,\\mu)!}{|{\\rm Aut\\ }\\mu|}\\prod_{i=1}^n\\frac{\\mu_i^{\\mu_i}}{\\mu_i!}\\int_{\\overline{\\cal M}_{g,n}}\\frac{1-\\lambda_1+...+(-1)^g\\lambda_g}{(1-\\mu_1\\psi_1)...(1-\\mu_n\\psi_n)}\\]\nwhere $\\mu=(\\mu_1,...,\\mu_n)$ and $r(g,\\mu)=2g-2+n+|\\mu|$.\n\nUsing $\\Omega^{\\Theta}_{g,n}=\\Theta\\cdot c(E^\\vee)$ we can define an analogue of the ELSV formula:\n$$H^{\\Theta}_{g,\\mu}=\\frac{(2g-2+n+|\\mu|)!}{|{\\rm Aut\\ }\\mu|}\\prod_{i=1}^n\\frac{\\mu_i^{\\mu_i}}{\\mu_i!}\\int_{\\overline{\\cal M}_{g,n}}\\Theta_{g,n}\\cdot\\frac{1-\\lambda_1+...+(-1)^{g-1}\\lambda_{g-1}}{(1-\\mu_1\\psi_1)...(1-\\mu_n\\psi_n)}.\n$$\nIt may be that $H^{\\Theta}_{g,\\mu}$ has an interpretation of enumerating simple Hurwitz covers. \nNote that it makes sense to set all $\\mu_i=0$, and in particular there are non-trivial primary invariants over $\\overline{\\cal M}_g$, unlike for simple Hurwitz numbers. \nAn example calculation:\n$$\\int_{\\overline{\\cal M}_2}\\Theta_2\\lambda_1=\\frac{1}{5}\\cdot\\frac{1}{8}\\cdot\\frac{1}{8}\\cdot\\frac{1}{2}+\\frac{1}{10}\\cdot\\frac{1}{8}\\cdot\\frac{1}{2}=\\frac{1}{128}\\quad\\quad\\Leftarrow\\quad\n\\lambda_1=\\frac{1}{10}(2\\delta_{1,1}+\\delta_{\\text{irr}}).\n$$\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nOur world is bursting with knowledge. Nearly every discipline has been subdivided into numerous sub-disciplines. By July 27 2017, there are 42.5 million entries on Wikipedia\\footnote{\\url{https:\/\/en.wikipedia.org\/wiki\/Wikipedia:Size_of_Wikipedia}}, which, undoubtedly, stands for only a small portion of human knowledge. People learn whenever and wherever possible. People's learning of knowledge is not confined to childhood or the classroom but takes place throughout life and in a range of situations; it can take the form of formal learning or informal learning \\cite{Paradise2009}, such as daily interactions with others and with the world around us. Lifelong learning is the ``ongoing, voluntary, and self-motivated\" pursuit of knowledge for either personal or professional reasons \\cite{Cliath2000}. 
According to Tough's study, almost 70\\% of learning projects are self-planned \\cite{Tough1979}. \n\nAs people learn throughout their lives, one significant issue is to evaluate how much knowledge an individual possesses at a particular time. E.g., suppose we have a large database that records all the entries of Wikipedia and a person's understanding degree of each entry (on a scale of 1 to 10). With this information, many new applications become practical. The following are some examples:\n\n\n\\subsubsection*{1. Determine a person's knowledge state}\nIf we set a threshold on the understanding grades, and assume the subject has understood a knowledge entry if its grade is larger than the threshold, then we can determine a person's knowledge state and knowledge composition at a particular time, as Figure 1 illustrates.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.8\\columnwidth]{knowledge-composition}\n\t\\caption{A person's knowledge composition}\n\\end{figure}\n\n\\subsubsection*{2. Discover a person's expertise and deficiencies}\t\nExpertise finding is critical for an organization or project, since the participation of experts plays an important role in its success. With the understanding evaluation database, it is convenient to discover a person's domain-level expertise and topic-level expertise. E.g., if a person wants to know what the Poincare Conjecture is, he does not need to ask a mathematician (a domain-level expert), who is not always available. Instead, he can ask one of his friends who is not a mathematician but has topic-level expertise in it. Besides expertise, we can also discover a person's deficiencies in a field, so that he can remedy them.\n\n\\subsubsection*{3. Make personalized recommendations}\nIf we know a person's understanding degree of each piece of knowledge, it is natural to make personalized recommendations to the subject based on this information. E.g., suppose we know a person has a good understanding of the topic of Deep Learning; then we can recommend the latest papers about Deep Learning to the subject, or recommend the person as a reviewer of a paper related to Deep Learning. In addition, we can recommend learning materials to the subject to help him practice meaningful learning, which will be discussed in detail in Section 4.\n\n\n\\subsection{Procedural and Conceptual Knowledge}\nStudies of knowledge indicate that knowledge may be classified into two major categories: procedural and conceptual knowledge \\cite{hiebert2013conceptual,mccormick1997conceptual}. Procedural knowledge is the knowledge exercised in the accomplishment of a task, and thus includes knowledge which cannot be easily articulated by the individual, since it is typically nonconscious (or tacit). It is commonly referred to as ``know-how\", such as knowing how to cook delicious food, how to fly an airplane, or how to play basketball. Conceptual knowledge is quite different from procedural knowledge. It involves understanding of the principles that govern a domain and of the interrelations between pieces of knowledge in a domain; it concerns understanding and interpreting concepts and the relations between concepts. It is commonly referred to as ``know-why\", such as knowing why something happens in a particular way. 
In this article, we only deal with evaluating a person's understanding of conceptual knowledge.\n\n\\subsection{The framework}\nAt present, assessment of one's understanding of conceptual knowledge is primarily through tests \\cite{pisa2000measuring,hunt2003concept,qian2004evaluation} or interviews, which have limitations such as low efficiency and limited coverage. E.g., they need other people's cooperation to accomplish the assessment, which is time-consuming; moreover, only a small portion of the topics of a domain is evaluated during an assessment, which cannot comprehensively reflect a person's knowledge state of the domain.\n\n\nWe propose a new method named Individual Conceptual Knowledge Evaluation Model (ICKEM) to evaluate one's understanding of a piece of conceptual knowledge quantitatively. It has the advantage of evaluating a person's understanding of conceptual knowledge independently, comprehensively, and automatically. It keeps track of one's daily learning activities (reading, listening, discussing, writing etc.), dividing them into a sequence of learning sessions, then analyzes the text content of each learning session to discover the involved knowledge topics and their shares in the session. Then a learning experience is inserted into each involved knowledge topic's learning history (ICKEM maintains a learning history for \\emph{each} knowledge topic). Therefore, after a period of time (e.g., several years or decades), a knowledge topic's learning history, which records each of the subject's learning experiences about the topic, is generated. Based on the learning history, the subject's familiarity degree to a knowledge topic is evaluated. Finally, it estimates one's understanding degree of a topic by comprehensively evaluating one's familiarity degrees to the topic itself and to the other topics that are essential to understanding it. Figure 2 shows the framework of ICKEM. Each hexagon of the diagram indicates a processing step; the following rectangle indicates the results of the process.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.9\\columnwidth]{2flowchart}\n\t\\caption{The framework of ICKEM}\n\\end{figure}\n\nThe remainder of this paper is organized as follows. Section 2 discusses how to calculate a person's familiarity degree to a knowledge topic. Section 3 introduces a method to estimate a person's understanding degree of a topic, by checking the familiarity degrees of the topic's understanding tree. It also presents an algorithm to generate a knowledge topic's understanding tree. In Section 4, we introduce an application of ICKEM, computer-aided incremental meaningful learning (CAIML), which helps an individual practice meaningful learning. Section 5 discusses related issues about evaluating one's understanding of conceptual knowledge. We cover related work in Section 6, before concluding in Section 7. \n\n\n\\section{Evaluate familiarity degree}\nThis section introduces the procedures that are devised to evaluate a person's familiarity degree to a knowledge topic. It starts by presenting a formal definition of knowledge and learning, then discusses how to divide a person's daily learning activities into a series of learning sessions, and how to analyze the text learning content to obtain a topic's share in a session. After these procedures, a knowledge topic's learning history can be generated. 
Finally, based on the learning history, the subject's familiarity degree to a topic is calculated.\n\n\\subsection{Definition of knowledge and learning}\nKnowledge is conventionally defined as beliefs that are true and justified. To be `true' means that a belief is in accord with the way in which objects, people, processes and events exist and behave in the real world. However, exactly what evidence is necessary and sufficient to allow a true belief to be `justified' has been a topic of discussion (largely among philosophers) for more than 2,000 years \\cite{hunt2003concept}. In ICKEM, a Knowledge Point is defined as a piece of conceptual knowledge that is explicitly defined and has been widely accepted, such as Bayes' theorem, Euler's formula, mass-energy equivalence, Maxwell's equations, gravitational waves, and the expectation-maximization algorithm.\n\nLearning is the process of acquiring, modifying, or reinforcing knowledge, behaviors, skills, values, or preferences in memory \\cite{terry2015learning, hunt2003concept, bransford1999people}. An individual's possession of knowledge is the product of all the experiences from the beginning of his\/her life to the moment at hand \\cite{hunt2003concept,benassi2014applying}. Learning produces changes in the organism, and the changes produced are relatively permanent \\cite{schacter2011psychology}.\n\n\\subsection{Discriminate learning sessions}\nIn ICKEM, a person's daily activities are classified into two categories: learning activities and non-learning activities. An activity is recognized as a learning activity if its content involves at least one Knowledge Point of a predefined set. In addition, the learning activities are divided into a sequence of learning sessions, since it is essential to know how many times, and for how long each time, the individual has learned a Knowledge Point. An individual can employ many methods to learn conceptual knowledge, such as reading, listening, discussing, and writing. Different strategies should be used to discriminate learning sessions for different learning methods. E.g., for learning by reading documents on a computer, Algorithm 1 is devised to discriminate learning sessions. For other learning methods, it is more complicated to divide learning sessions. However, there are already some attempts at detecting human daily activities \\cite{Ni2013,Piyathilaka2013,Sung2012}. Sung et al. devised an algorithm for recognizing human daily activities from RGB-D Images \\cite{Sung2012}. They tested their algorithm on detecting and recognizing twelve different activities (brushing teeth, cooking, working on a computer, talking on the phone, drinking water, talking on a chair etc.) performed by four people and achieved good performance.\n\nAlgorithm 1 works by periodically checking the occurrence of the following events: \n\\begin{enumerate}\n\t\\item A document is opened;\n\t\\item The foreground window has switched to an opened document from another application (APP), such as a game APP;\n\t\\item After the computer has been idle for a period of time, some mouse or keyboard input is detected, which indicates the individual has come back from other things. 
Meanwhile, the foreground window is an opened document;\n\t\\item A document is closed;\n\t\\item The foreground window has switched to another APP from a document;\n\t\\item The foreground window is a document, but the computer has idled for a certain period of time without any mouse or keyboard input detected; the individual is assumed to have left to do other things.\n\\end{enumerate}\n\nOccurrence of Event 1, 2, or 3 indicates a learning session has started; if Event 4, 5, or 6 occurred, a learning session is assumed to have terminated. The duration of a learning session equals the interval between its start and stop time. \n\n\\begin{algorithm}\n\t\\caption{An algorithm to discriminate one's learning sessions when reading.}\n\t\\label{alg1}\n\t\\begin{algorithmic}[1]\t\t\t\n\t\t\\WHILE{The Reader APP is running}\t\t\n\t\t\\IF{Event 4 \\textbf{OR} Event 5 \\textbf{OR} Event 6 occurred}\t\t\t\t\t\n\t\t\\STATE Record that a document's learning session has stopped;\n\t\t\\ELSIF{Event 1 \\textbf{OR} Event 2 \\textbf{OR} Event 3 occurred}\t\t\t\n\t\t\\STATE Record that a learning session about the current document has started;\n\t\t\\ELSIF{There is no APP and document switch}\t\t\t\n\t\t\\STATE Check and record if there is a Page switch;\n\t\t\\ENDIF\t\t\t\n\t\t\\STATE Keep silent for $ T $ seconds;\n\t\t\\ENDWHILE\t\t\n\t\\end{algorithmic}\t\n\\end{algorithm}\n\nFigure 3 shows some examples of discriminated learning sessions. Attribute ``\\textit{did}'' means document ID, which indexes a document uniquely. Attribute ``\\textit{actiontype}'' indicates the type of an action. ``\\textit{Doc Act}'' means a document has been activated. ``\\textit{Page Act}'' is defined similarly. ``\\textit{Doc DeAct}'' means a document has been deactivated; that is to say, a learning session has stopped. Attribute ``\\textit{page}'' indicates a page number. Attribute ``\\textit{duration}'' records how long a page has been activated, in seconds. If the interval between two learning sessions is less than a certain threshold, such as 30 minutes, and their learning material is the same (e.g., the same document), they are merged into one session. Therefore, ``Session 2'' and ``Session 3'' are merged into one session.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.8\\columnwidth]{4learningsession}\n\t\\caption{Some examples of discriminated learning sessions.}\n\\end{figure}\n\n\\subsection{Capture the text learning content}\n\nMost learning processes are associated with a piece of learning material. E.g., when reading a book, the book is the learning material; when attending a course or discussion, the course or discussion contents can be regarded as the learning material. Some learning materials are text or can be converted to text. E.g., when discussing a piece of knowledge with others, the discussion contents can be converted to text by exploiting Speech Recognition. Similarly, if one is reading a printed book, the contents of the book can be captured by wearable computers like Google Glass and then converted to text through Optical Character Recognition (OCR). If the book is electronic, no conversion is needed; the text can be extracted directly. Since Algorithm 1 records the exact set of pages a person has read during a learning session, only the text content of related pages is extracted.\n\n\\subsection{Calculate a Knowledge Point's share}\nA learning session may involve many topics, so it is necessary to know how much of a learning session involves each topic. 
To calculate a Knowledge Point's share, maybe the simplest way is to deem the text learning content as a bag of words, and calculate a Knowledge Point's share based on its Term Frequency (TF) or normalized TF. Just like Equation 1 and 2. $ N_{i} $ is term $ i $'s normalized TF. It is calculated with Equation 1, where $ T_{i} $ is term $ i $'s TF, $ Max(TF) $ is the maximum TF of the captured text content, $ \\alpha $ is a constant regulatory factor.\n\n\\begin{equation}\n\tN_{i} = \\alpha + (1 - \\alpha) * T_{i} \/ Max(TF)\n\\end{equation}\n\\begin{equation}\n\t\\xi_{i} = \\frac{N_{i}}{\\sum_{j=1}^mN_{j}}\n\\end{equation}\n\nAnother method of discovering a Knowledge Point's share is to analyze the text learning content with topic model. A topic model is a type of statistical model for discovering the abstract \"topics\" that occur in a collection of documents. Topic model is a frequently used text-mining tool for discovery of hidden semantic structures in a text body \\cite{blei2012probabilistic,steyvers2007}. An early topic model was described in \\cite{Papadimitriou1998}. Another one, called probabilistic latent semantic analysis (PLSA), was created by Hofmann in 1999 \\cite{hofmann1999probabilistic}. Latent Dirichlet allocation (LDA) is the most common topic model currently in use \\cite{blei2003latent}. It is a generalization of PLSA. It introduces sparse Dirichlet prior distributions over document-topic and topic-word distributions. Other topic models are generally extensions on LDA.\n\nThe inputs of a probabilistic topic model are a collection of $ N $ documents, a vocabulary set $ V $, and the number of topics $ k $. The outputs of a probabilistic topic model are the following:\n\n\\begin{itemize}\n\t\\item k topics, each is word distribution : $ \\{\\theta_{1},...,\\theta_{k}\\} $;\n\t\\item Coverage of topics in each document $ d_{i} $: $ \\{\\pi_{i1},...,\\pi_{ik}\\} $;\\\\\n\t$ \\pi_{ij} $ is the probability of document $ d_{i}$ covering topic $ \\theta_{j} $.\t\n\\end{itemize}\n\nThe subject's $ N $ learning sessions' text-contents can be deemed as a collection of $ N $ documents, then the document collection is analyzed with a topic model like LDA.\nBased on the outputs of topic model, Equation 3 is used to calculate a Knowledge Point's share in a learning session. Only the top $ m $ terms of each topic are considered. For document $ d $, $ \\varphi_{ij} $ is the share of term $ i $ which comes from topic $ j $, $ \\pi_{j} $ is topic $j $'s share of document $ d $, $ p(t_{i}|\\theta_{j}) $ is term $ i $'s share of topic $ j $. A Knowledge Point's share equals its term share. Therefore, with topic model, Knowledge Points involved in each learning session can be discovered; their shares can also be calculated.\n\n\\begin{equation}\n\\varphi_{ij} = \\frac{\\pi_{j}p(t_{i}|\\theta_{j})}{\\sum_{j=1}^k\\sum_{i=1}^m\\pi_{j}p(t_{i}|\\theta_{j})}\n\\end{equation}\n\n\n\\subsection{The subject's state during a session}\nThe subject's physical and psychological status may influence the effect of a learning session. E.g., if the subject is tired, or severely injured, the learning effect is usually worse than being in a normal state. Similarly, if the subject is in a state of severe depression, anxiety, or scatterbrained, the learning effect cannot be good either. One issue is how to collect values for these attributes. Fortunately, there are already systems for monitoring a person's physical and psychological status \\cite{Yang2005,Cheng2012,DeRemer2015,Kurniawan2013,Lin2014}. 
Another issue is studies need to be conducted to decide how these attributes would affect the learning effect. Preferable to have some equations to calculate a ``physical and psychological status factor\", describing how the learning effect is affected by the subject's physical and psychological state during a session.\n\nOn the other hand, if we just want a rough estimation of the subject's familiarity degree to a Knowledge Point, these physical and psychological status attributes may be ignored. Since we are observing the subject in a long time rang, typically several years or decades of years, we can assume the subject is in ``normal'' state most of the time, and ignore its fluctuation.\n\n\\subsection{Effectiveness of a learning method}\nDifferent learning methods may have different levels of effectiveness. E.g., learning by reading a book and learning by discussing with others, do these two methods have equivalent levels of memory retention after the learning experience? (supposing other conditions are equivalent, such as learning content, session duration, physical and psychological status etc.) These is no consensus about the effectiveness of different learning methods. The National Training Laboratories (NTL) Institute proposes a \\emph{learning pyramid} model, which maps a range of learning methods onto a triangular image in proportion to their effectiveness in promoting student retention of the material taught \\cite{sousa2001brain,magennis2005teaching}. However, the credibility and research base of the model have been questioned by other researchers \\cite{lalley2007learning,letrud2012rebuttal}. Further research is necessary to make a systematic comparison between the effectiveness of different learning methods. Preferable to have a ``learning method factor\" depicting the effectiveness of a method. Similarly, if we just want a rough estimation of the subject's familiarity degrees, we can omit the difference among different learning methods. As \\cite{lalley2007learning} concluded, there is no learning method being consistently superior to the others and all being effective in certain contexts. \\cite{lalley2007learning} also stressed the importance of reading: reading is not only an effective learning method, it is also the main foundation for becoming a ``life-long learner\".\n\n\\subsection{A Knowledge Point's learning history}\nWith the discriminated learning sessions, a Knowledge Point's share in each session, the subject's physical and psychological status during a session, and the ``learning method factor\", after a period of time, the subject's learning histories about each Knowledge Point can be generated. Figure 4 shows an exemplary learning history. It records a person's each learning experience about a Knowledge Point. ``LCT\" stands for ``learning cessation time\". It is used to calculate the interval between the learning time and the evaluation time, which is then used to estimate how much information has been lost due to memory decay. ``Duration\" is the length of a learning session. ``Proportion\" is the Knowledge Point's share during a learning session. ``PPS factor\" stands for the ``physical and psychological status factor\". It is a number between 0 and 1 that is calculated based on the subject's average physical and psychological status during a session. ``LM factor\" stands for the ``learning method factor\". 
It is also a number between 0 and 1 that is allocated to a learning method according to its effectiveness level.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=1.0\\columnwidth]{LHIS-vldb}\n\t\\caption{An exemplary learning history of a Knowledge Point}\n\\end{figure}\n\n\\subsection{Memory retention of a learning experience}\nPeople learn all the time; meanwhile, people forget all the time. Human memory declines over time. Interestingly, most researchers report there is no age difference for memory decay \\cite{rubin1996one}. To calculate the familiarity degree, we need to address how the effect of a learning experience decays over time. However, there is no consensus of how human memory decays. Psychologists have suggested many functions to describe the monotonic loss of information with time \\cite{rubin1996one}. But there is no unanimous agreement of which one is the best. The search for a general description of forgetting is one of the oldest unresolved problems in experimental psychology \\cite{averell2011form}. \n\nWe propose to use Ebbinghaus' forgetting curve equation \\cite{ebbinghaus1913memory} to describe the memory retention of a learning experience over time. Since it is the most well-known forgetting curve equation and its soundness has been proved by many studies \\cite{murre2015replication,finkenbinder1913curve,heller1991replikation}.\nEbbinghaus found Equation 4 can be used to describe the proportion of memory retention after a period of time\\footnote{It can be found at \\url{http:\/\/psychclassics.yorku.ca\/Ebbinghaus\/memory7.htm}}, where $ t $ is the time in minutes counting from one minute before the end of the learning, $ k $ and $ c $ are two constants that equal 1.84 and 1.25, respectively. Figure 5 shows the percentage of memory retention over time calculated by Equation 4. The Y axis is the percentage of memory retention; the X axis is the time-since-learning in minutes. It can be seen that memory retention declines drastically during the first 24 hours (1,440 minutes), then the speed tends to be steady.\n\n\\begin{equation}\nb(t) = k\/((\\log t)^c + k)\n\\end{equation}\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.8\\columnwidth]{3forgettingcurve}\n\t\\caption{The percentage of memory retention over time calculated by Equation 4}\n\\end{figure}\n\n\\subsection{Calculate the familiarity degree}\n\nBased on the learning history, Equation 5 is utilized to calculate the subject's Familiarity Measure to a Knowledge Point $ k_{i} $ at time $ t $. A Familiarity Measure is defined as a score that depicts a person's familiarity degree to a Knowledge Point. Its unit is defined as \\emph{gl}. The input is $ k_{i} $'s learning history--a sequence of $ m $ learning sessions (like Figure 4). 
$ d_{j} $ is session $ j $'s duration; $ \\xi_{ij} $ is Knowledge Point $ k_{i} $'s share in session $ j $; $ t_{j} $ is session $ j $'s ``learning cessation time\", $ b(t-t_{j}) $ calculates the percent of memory retention of session $ j $ at time $ t $ with Equation 4; $ F^{pps}_{j} $ is the ``physical and psychological status factor\" of session $ j $; $ F^{lm}_{j} $ is the ``learning method factor\" of session $ j $.\n\n\\begin{equation}\n\tF(k_{i},t) = \\sum_{j=1}^md_{j}*\\xi_{ij}*b(t-t_{j})*F^{pps}_{j}*F^{lm}_{j}\n\\end{equation}\n\nThe computation hypothesizes each learning experience about a Knowledge Point contributes some effect to the subject's current understanding of it, and the learning effect declines over time according to Ebbinghaus' forgetting curve. Other attributes (the subject's physical and psychological status, learning method) that may affect the learning effect are counted in as numeric factors. The Familiarity Measure of a Knowledge Point is calculated as the additive effects of all the learning experiences about it. If we omit the influence of subject physical and psychological status and the difference among learning methods, Equation 5 can be simplified to Equation 6.\n\n\\begin{equation}\n\tF(k_{i},t) = \\sum_{j=1}^md_{j}*\\xi_{ij}*b(t-t_{j})\n\\end{equation}\n\nTable 1 compares Familiarity Measures calculated by Equation 5 and Equation 6 at different times based on the learning history of Figure 4. The evaluation times are an hour, a day, a month, a year, and 10 years after last learning of the knowledge point.\n\n\\begin{table*}\n\t\\centering\n\t\\begin{tabular}{ c c c c c c }\n\t\t\\hline\n\t\t& 5\/17\/2016 16:25 & 5\/18\/2016 15:25 & 6\/17\/2016 15:25 &5\/17\/2017 15:25 & 5\/17\/2026 15:25\\\\ \\hline\n\t\tEquation 5 & 25\t&23.5&\t21.9&\t19.4&\t16.6\\\\ \\hline\t\t\n\t\tEquation 6 & 42.6&\t40.2&\t37.6\t&33.3\t&28.5\\\\ \\hline\n\t\\end{tabular}\n\t\\caption{Familiarity Measures calculated at different times}\n\\end{table*}\n\n\n\\section{Estimate understanding degree}\nUnderstanding is quite subtle to measure. We hypothesize that if a person is familiar with a Knowledge Point itself and all the background knowledge that is essential to understand it, he should have understood the Knowledge Point. Because Familiarity Measure depicts the cumulated effects of one's learning experiences about a topic, high levels of Familiarity Measures imply intensive learning activities about the suite of knowledge topics. Intensive learning activities usually result in a good understanding.\n\nThe background Knowledge Points that are essential to understand a Knowledge Point can be extracted by analyzing its definition. Table 2 lists eight reduced documents, each of them is a definition of a Knowledge Point in Probability Theory or Stochastic Process, the texts are quoted from Wikipedia and other websites. The third column of Table 2 lists the involved Knowledge Points in the documents, which are deemed as the background knowledge to understand the host Knowledge Point.\n\n\\begin{table*}\n\t\\centering\n\t\\begin{tabular}{|C{1cm}|L{12cm}|C{2.5cm}|}\n\t\t\\hline\n\t\tDoc & \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad Content & Knowledge Points \\\\ \\hline\n\t\tD1 & A Strictly Stationary Process (SSP) is a Stochastic Process (SP) whose Joint Probability Distribution (JPD) does not change when shifted in time. 
& SSP, JPD,\\qquad\\qquad Time, SP \\\\ \\hline\n\t\t\\qquad \\qquad \\qquad \\qquad\\qquad \\qquad D2 & A Stochastic Process (SP) is a Probability Model (PM) used to describe phenomena that evolve over time or space. In probability theory, a stochastic process is a Time Sequence (TS) representing the evolution of some system represented by a variable whose change is subject to a Random Variation (RaV). & SP, PM, TS, Time, Space, System,\\qquad \\qquad Variable, RaV \\\\ \\hline\n\t\t\\qquad \\qquad \\qquad\\qquad \\qquad \\qquad D3 & In the study of probability, given at least two Random Variables (RV) X, Y, ... that are defined on a Probability Space (PS), the Joint Probability Distribution (JPD) for X, Y, ... is a Probability Distribution (PD) that gives the probability that each of X, Y, ... falls in any particular range or discrete set of values specified for that variable. & JPD, RV,\\qquad\\qquad PS, PD,\\qquad \\qquad Variable, Probability \\\\ \\hline\n\t\t\\qquad \\qquad D4 & A Probability model (PM) is a mathematical representation of a random phenomenon. It is defined by its Sample Space (SS), events within the SS, and probabilities associated with each event. & PM, SS,\\qquad\\qquad Event, Probability \\\\ \\hline\n\t\t\n\t\tD5 & In probability and statistics, a Random variable (RV) is a variable quantity whose possible values depend, in some clearly-defined way, on a set of random events. & RV, Variable, Event \\\\ \\hline\n\t\t\\qquad \\qquad D6 & A Probability Space (PS) is a Mathematical Construct (MC) that models a real-world process consisting of states that occur randomly. It consists of three parts: a Sample Space (SS), a set of events, and the assignment of probabilities to the events. & PS, MC, SS, Probability, Event \\\\ \\hline\n\t\tD7 & A Probability Distribution (PD) is a table or an equation that links each outcome of a statistical experiment with its probability of occurrence. & PD,\\qquad\\qquad Probability \\\\ \\hline\n\t\tD8 & The Sample Space (SS) is the set of all possible outcomes of the samples. & SS, Sample \\\\ \t \n\t\t\\hline \n\t\\end{tabular}\n\t\\caption{A list of documents and their involved Knowledge Points}\n\\end{table*}\n\nAn Understanding Tree is a treelike data structure which compiles the background Knowledge Points that are essential to understand the root Knowledge Point. Figure 6 illustrates four Understanding Trees based on the definitions of Table 2. The nodes of the tree can be further interpreted by other Knowledge Points until they are Basic Knowledge Points (BKP). A BKP is a Knowledge Point that is simple enough so that it is not interpreted by other Knowledge Points. Figure 7 shows a fully extended Understanding Tree based on the definitions of Table 2. Figure 8 shows an exemplary Understanding Tree with all the redundant nodes eliminated. Each node is tagged with a Familiarity Measure calculated with Equation 5 or 6. The leaf nodes of Figure 7 and Figure 8 are BKPs. The height and number of nodes of an Understanding Tree characterize the complexity degree of it. 
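For instance, reading off document D1 in Table 2, the child nodes of SSP are SP, JPD, and Time; expanding the children that are not BKPs (SP via D2, JPD via D3, and so on) until only BKPs remain yields a fully extended tree such as the one in Figure 7, and eliminating the repeated nodes gives a tree such as the one in Figure 8. 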
An Understanding Tree can be used to evaluate a person's topic-level expertise, such as his or her understanding of a specific algorithm like the Quicksort algorithm.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.99\\columnwidth]{12example-trees}\n\t\\caption{Some examples of Understanding Trees}\n\\end{figure}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\includegraphics[width=1.3\\columnwidth]{9full}\n\t\\caption{A fully extended Understanding Tree}\n\\end{figure*}\n\n\\begin{figure*}\n\t\\centering\n\t\\includegraphics[width=1.3\\columnwidth]{10full-concise-scores}\n\t\\caption{A standard Understanding Tree tagged with Familiarity Measures}\n\\end{figure*}\n\n\\begin{figure*}\n\t\\centering\n\t\\includegraphics[width=1.3\\columnwidth]{10full-concise-percent}\n\t\\caption{Familiarity Measures transformed into percentages}\n\\end{figure*}\n\n\\subsection{Calculation of understanding degree}\nIf all the Familiarity Measures of an Understanding Tree exceed a threshold (such as 100), it is assumed that the person has understood the root Knowledge Point. The Knowledge Point is then classified as ``Understood''; otherwise, it is classified as ``Not Understood''. Owing to differences in people's intelligence and talent, different people may have different thresholds. \n\nIf a Familiarity Measure is less than the threshold, a percentage is calculated by dividing it by the threshold, indicating the subject's percent of familiarity with the node; if the Familiarity Measure is greater than the threshold, the percentage is set to 1, implying that the familiarity with this node is already sufficient for understanding the root; extra familiarity is good but not necessary. Equation 7 is used to calculate the percentage of familiarity, where $ F(k_{i},t) $ is the subject's Familiarity Measure for Knowledge Point $ k_{i} $ at time $ t $ and $ f_{T} $ is the threshold.\n\nIf a Knowledge Point is classified as ``Not Understood'', a percent of understanding is calculated with Equation 8, where $ PU(k_{r},t) $ is the subject's percent of understanding of the root Knowledge Point at time $ t $, $ PF(k_{r},t) $ is the percent of familiarity of the root, and $ \\frac{1}{m}\\sum_{j=1}^mPF(k_{j},t) $ is the average percent of familiarity of its descendants (not including the root). For example, Figure 9 is the Understanding Tree of Figure 8 with Familiarity Measures transformed into percentages (assuming $ f_{T} $ equals 100). Its $ PF(k_{r},t) $ equals 85\\% and $ \\frac{1}{m}\\sum_{j=1}^mPF(k_{j},t) $ equals 89\\%, so $ PU(k_{r},t) $ equals 76\\%, indicating that the subject's understanding of the root Knowledge Point is 76\\%. If $ PU(k_{r},t) $ is less than 100\\%, the Knowledge Point is classified as ``Not Understood\". This is a conservative strategy for estimating the subject's understanding: for a Knowledge Point to be classified as ``Understood\", the subject must be familiar with \\emph{every} node of its Understanding Tree. This strategy should guarantee good precision for the Knowledge Points that are classified as ``Understood\", and thus the quality of the estimated topic-level expertise of the subject. \n\n\n\\begin{equation} \n\tPF(k_{i},t)=\\left\\{\n\t\\begin{array}{cl}\n\t\t 1 & F(k_{i},t) \\geq f_{T} \\\\\n\t\t F(k_{i},t)\/f_{T} & F(k_{i},t) < f_{T}\n\t\\end{array}\n\t\\right.\n\\end{equation}\n\n\\begin{equation}\n\tPU(k_{r},t) = PF(k_{r},t)*\\frac{1}{m}\\sum_{j=1}^mPF(k_{j},t)\n\\end{equation}
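\n\nThe calculations in Equations 6--8 are straightforward to implement. The following Python sketch is purely illustrative: the exponential retention function, its decay constant, the threshold and the numerical inputs are assumptions made here for concreteness rather than part of the model specification (the actual retention curve is given by Equation 4). The last line reproduces the worked example above, where the root has a percent of familiarity of 85\\% and its descendants average 89\\%.\n\\begin{verbatim}\n# Illustrative sketch of Equations 6-8 (all values hypothetical).\nfrom math import exp\n\ndef familiarity(sessions, t):\n    # Equation 6: sum of duration * share * retention over all sessions.\n    # The retention b(t - tj) is taken as a simple exponential here;\n    # the paper's Equation 4 defines the actual Ebbinghaus form.\n    return sum(d * share * exp(-(t - tj) \/ 5.0)\n               for (d, share, tj) in sessions if t >= tj)\n\ndef percent_familiarity(F, f_T=100.0):\n    # Equation 7: cap the familiarity\/threshold ratio at 1.\n    return min(F \/ f_T, 1.0)\n\ndef percent_understanding(F_root, F_descendants, f_T=100.0):\n    # Equation 8: PU(root) = PF(root) * mean PF over the descendants.\n    pf = [percent_familiarity(F, f_T) for F in F_descendants]\n    return percent_familiarity(F_root, f_T) * sum(pf) \/ len(pf)\n\nprint(percent_understanding(85.0, [89.0] * 10))   # about 0.76, i.e. 76%\n\\end{verbatim}\nOnly the structure of the computation matters here; in ICKEM the familiarity values would be derived from the recorded learning history, and the threshold $ f_{T} $ would be calibrated for each person.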
\n\n\nIf $ PU(k_{r},t) $ equals 100\\%, the subject is assumed to have understood the root Knowledge Point; the average Familiarity Measure of the Understanding Tree divided by the threshold then characterizes the magnitude of understanding. Since people are usually very familiar with the BKPs, it may be preferable to exclude them or normalize their effects when computing the average Familiarity Measure. \n\n\\subsection{Construction of Understanding Tree}\nAn Understanding Tree can be constructed manually by a group of experts, or generated automatically by machines. Algorithm 2 shows the steps to generate an Understanding Tree automatically. Table 3 lists three definitions of the Central Limit Theorem (CLT)\\footnote{They can be found at \\url{http:\/\/www.math.uah.edu\/stat\/sample\/CLT.html}, \\url{http:\/\/sphweb.bumc.bu.edu\/otlt\/MPH-Modules\/BS\/BS704_Probability\/BS704_Probability12.html}, \\url{http:\/\/stattrek.com\/statistics\/dictionary.aspx}}; the third column lists the Knowledge Points involved in each definition. According to the rule mentioned in Algorithm 2, the child nodes of CLT are selected as \\emph{sample}, \\emph{distribution}, \\emph{mean}, \\emph{independent}, and \\emph{normal} (Knowledge Points that are not BKPs can be further extended). To be safe, the generated Understanding Tree can be inspected by human experts. An Understanding Tree is a static structure; once constructed, it is unlikely to change.\n\n\\begin{algorithm} \n\t\\caption{ An algorithm to construct an Understanding Tree} \n\t\\begin{algorithmic}[1]\n\t\t\\REQUIRE ~~\\\\\n\t\t\n\t\tThe set of all Knowledge Points, $ \\Omega $;\\\\\n\t\tThe set of all BKPs, $ B $;\\\\\n\t\tThe root Knowledge Point, $ R_k $;\n\t\t\n\t\t\\ENSURE ~~\\\\\n\t\t$ R_k $'s Understanding Tree;\n\t\t\\STATE Search definitions of $ R_k $ in a library of authoritative documents;\n\t\t\\STATE Discover the involved Knowledge Points for each definition;\n\t\t\\STATE Select Knowledge Points according to some rule, e.g., more than half of the definitions reference the Knowledge Point; \n\t\t\\STATE Recursively extend the selected non-BKP Knowledge Points;\n\t\t\\RETURN All the selected Knowledge Points;\t\t\n\t\\end{algorithmic}\n\\end{algorithm} \n\n\\begin{table*}\n\t\\centering\n\t\\begin{tabular}{|C{1cm}|L{12cm}|C{3.5cm}|}\n\t\t\\hline\n\t\t& \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad Content & Knowledge Points \\\\ \\hline\n\t\t1 & The Central Limit Theorem (CLT) states that the sampling distribution of the mean of any independent, random variable will be normal or nearly normal, if the sample size is large enough. & CLT, sample, distribution, mean, independent, random variable, normal, size \\\\ \\hline\n\t\t2 & The Central Limit Theorem (CLT) states that the distribution of the sum (or average) of a large number of independent, identically distributed variables will be approximately normal, regardless of the underlying distribution.
& CLT, distribution, sum, average, independent, variable, normal \\\\ \\hline\n\t\t3 & The Central Limit Theorem (CLT) states that if you have a population with mean $ \\mu $ and standard deviation $ \\sigma $ and take sufficiently large random samples from the population with replacement, then the distribution of the sample means will be approximately normally distributed. & CLT, population, standard deviation, random, replacement, distribution, sample, mean, normal \\\\ \n\t\t\\hline \n\t\\end{tabular}\n\t\\caption{Three definitions of the Central Limit Theorem (CLT)}\n\\end{table*}\n\n\\section{Facilitate meaningful learning}\nKnowing one's degree of understanding of knowledge helps one practice meaningful learning. Meaningful learning is the concept that learned knowledge is fully understood to the extent that it relates to other knowledge, which implies a comprehensive knowledge of the context of the facts learned \\cite{Mayer2002}. ICKEM helps people practice Computer-aided Incremental Meaningful Learning (CAIML). CAIML is defined as a strategy of letting the computer estimate a person's current knowledge states and then recommend the optimum learning material for the subject to accomplish meaningful learning. The optimum material introduces some new knowledge blended with old knowledge the subject already knows, and the old knowledge helps interpret the new.\n\nFor example, suppose a college student, who has comprehended the basic knowledge of Advanced Mathematics and Computer Science, wants to become an expert in Artificial Intelligence (AI) by teaching himself. A professor recommends him 1,000 documents related to AI, and asserts that if he can fully understand the contents of the documents, he will be an expert in AI. The question is: what is the best sequence for the student to learn the documents? Some documents are easy to understand and should be read in the beginning; some documents are intricate, and learning them first would be painful and frustrating, so they should be put at the end of the learning process. If a computer knows a person's knowledge states at any time, it is not difficult to make the learning plan and recommend the optimum document for current learning.\n\\subsection{An algorithm to facilitate CAIML}\nAlgorithm 3 is devised to facilitate CAIML. It recommends the optimum document for the subject to learn, by analyzing his current knowledge states. It searches for the document with the fewest Knowledge Points that are not understood by the subject, which implies it is the easiest document to understand at present. In such a document, the ``Understood\" Knowledge Points serve as interpretations of the ``Not Understood\" ones when learning the document; the algorithm therefore facilitates meaningful learning.
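\n\nAs an illustration, the selection rule just described (formalized as Algorithm 3 below) amounts to a single counting step once the Knowledge Points of each document have been classified. The following Python sketch is a minimal, hypothetical rendering of that step; the data structures stand in for the outputs of the preceding classification.\n\\begin{verbatim}\n# Sketch of the selection rule of Algorithm 3 (hypothetical inputs).\n# doc_kps: maps each document id to the set of Knowledge Points it involves.\n# understood: the set of Knowledge Points classified as 'Understood'.\n\ndef recommend(doc_kps, understood):\n    counts = {doc: len(kps - understood) for doc, kps in doc_kps.items()}\n    best = min(counts.values())\n    # Return every document tied for the fewest 'Not Understood' points.\n    return [doc for doc, c in counts.items() if c == best]\n\\end{verbatim}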
\n\n\\begin{algorithm}\n\t\\caption{ An algorithm to recommend the optimum document(s) for CAIML} \n\t\\begin{algorithmic}[1]\n\t\t\\REQUIRE ~~\\\\\n\t\t\n\t\tThe set of documents to be learned;\\\\\n\t\tThe set of all Knowledge Points, tagged with the subject's Familiarity Measures for them;\\\\\n\t\tThe set of all non-BKP Knowledge Points' Understanding Trees;\n\t\t\n\t\t\\ENSURE ~~\\\\\n\t\tThe optimum document(s) for current learning to practice meaningful learning;\n\t\t\\STATE Extract the involved Knowledge Points in each document and classify them as ``Understood\" or ``Not Understood\", according to the rule mentioned in Section 3.1;\n\t\t\\STATE Count the number of ``Not Understood\" Knowledge Points for each document;\n\t\t\\RETURN The document(s) with the fewest ``Not Understood\" Knowledge Points;\t\t\n\t\\end{algorithmic}\n\\end{algorithm}\n\nAn alternative method to recommend the optimum document is to estimate the subject's anticipated percentage of understanding of each document, then return the one closest to 100\\%. A person's understanding of a document is calculated with Equation 9, where the document involves $ n $ Knowledge Points, $ PU(d,t) $ is the percent of understanding of the document at time $ t $, $ \\xi_{i} $ is Knowledge Point $ k_{i} $'s share of the document (its calculation was discussed in Section 2.4), and $ PU(k_{i},t) $ is Knowledge Point $ k_{i} $'s percent of understanding at time $ t $. If a person has understood all the Knowledge Points the document contains, its $ PU(d,t) $ is 100\\%. It is possible that a person learns a ``100\\% understood\" document to strengthen his knowledge. As Equation 9 estimates a person's degree of understanding of a document, it can also be used to recommend appropriate reviewers for a paper.\n\n\\begin{equation}\n\tPU(d,t) = \\frac{\\sum_{i=1}^n\\xi_{i}*PU(k_{i},t)}{\\sum_{i=1}^n\\xi_{i}}\n\\end{equation}\n\n\nPeople's memories fade over time, and the Familiarity Measures decrease accordingly. It is therefore also possible that the computer recommends a document that had been tagged ``100\\% understood\", because its $ PU(d,t) $ has abated.\n\\subsection{An example of CAIML}\nHere is an example to illustrate the logic of CAIML. Suppose a person wants to fully understand the suite of documents listed in Table 2 by learning them. Figure 7 shows all the Knowledge Points involved in the documents, and the third column of Table 2 lists those referenced by each of them. The subject is assumed to have understood the BKPs, which are the leaf nodes of Figure 7, before the beginning of the learning (if not, he can learn them first). Table 4 shows the number of ``Not Understood\" Knowledge Points for each document, before the beginning of the learning and after each document is learned. For example, there are 8 ``Not Understood\" Knowledge Points for D1 before learning starts: SSP, SP, JPD, PM, SS, RV, PS, and PD. According to Algorithm 3, the subject should first learn one of D5, D7, or D8. After learning D5, the subject is assumed to have understood RV. Therefore, the number of ``Not Understood\" Knowledge Points for D1 becomes 7.
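\n\nIterating the selection rule of Algorithm 3 over the whole corpus produces learning sequences of the kind summarized in Table 4. The sketch below is again only illustrative: each document is reduced to the non-BKP Knowledge Points of its fully extended Understanding Tree as read from Figure 7, and learning a document is assumed to make its root Knowledge Point ``Understood\". Ties are broken arbitrarily, which is why several optimum sequences exist.\n\\begin{verbatim}\n# Greedy learning-order sketch for the Table 2 corpus (Section 4.2 example).\n# Non-BKP Knowledge Points per document, read off Figure 7 (BKPs omitted).\ndoc_kps = {\n    'D1': {'SSP', 'SP', 'JPD', 'PM', 'SS', 'RV', 'PS', 'PD'},\n    'D2': {'SP', 'PM', 'SS'},   'D3': {'JPD', 'RV', 'PS', 'PD', 'SS'},\n    'D4': {'PM', 'SS'},         'D5': {'RV'},\n    'D6': {'PS', 'SS'},         'D7': {'PD'},\n    'D8': {'SS'},\n}\ndoc_root = {'D1': 'SSP', 'D2': 'SP', 'D3': 'JPD', 'D4': 'PM',\n            'D5': 'RV', 'D6': 'PS', 'D7': 'PD', 'D8': 'SS'}\n\nunderstood, order = set(), []\nwhile len(order) < len(doc_kps):\n    remaining = {d: len(kps - understood)\n                 for d, kps in doc_kps.items() if d not in order}\n    best = min(remaining, key=remaining.get)   # ties broken arbitrarily\n    order.append(best)\n    understood.add(doc_root[best])   # learning a document is assumed to make\n                                     # its root Knowledge Point understood\nprint(order)   # ['D5', 'D7', 'D8', 'D4', 'D2', 'D6', 'D3', 'D1'] here;\n               # the sequence in Table 4 is an equally optimal tie-break.\n\\end{verbatim}\n\n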
The first column of Table 4 suggests one of the optimum learning sequences of the documents: D5, D8, D4, D2, D7, D6, D3, and D1.\n\n\n\\begin{table}\n\t\\centering\n\t\\begin{tabular}{ c c c c c c c c c }\n\t\t\\hline\n\t\t& D1 & D2 & D3 & D4 & D5 & D6 & D7 & D8\\\\ \\hline\n\t\tBefore starting & 8 & 3 & 5 & 2 & 1 & 2 & 1 & 1\\\\ \\hline\n\t\tD5 & 7 & 3 & 4 & 2 & 0 & 2 & 1 & 1\\\\ \\hline\n\t\tD8 & 6 & 2 & 3 & 1 & 0 & 1 & 1 & 0\\\\ \\hline\n\t\tD4 & 5 & 1 & 3 & 0 & 0 & 1 & 1 & 0\\\\ \\hline\n\t\tD2 & 4 & 0 & 3 & 0 & 0 & 1 & 1 & 0\\\\ \\hline\n\t\tD7 & 3 & 0 & 2 & 0 & 0 & 1 & 0 & 0\\\\ \\hline\n\t\tD6 & 2 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\\\ \\hline\n\t\tD3 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\\\ \\hline\n\t\tD1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\\\ \t \n\t\t\\hline \n\t\\end{tabular}\n\t\\caption{An example of CAIML: the number of ``Not Understood\" Knowledge Points for each document (columns), before learning starts and after each document (rows) is learned}\n\\end{table}\n\n\n\\section{Discussion}\nAssessing a person's understanding of conceptual knowledge is not easy; ICKEM is an experimental model aimed at accomplishing this, and a great deal of thought and research is required to realize its potential.\n\n\\subsection{Dealing with similar Knowledge Points}\nOne problem is how to deal with Knowledge Points that are different but have little distinction, such as ``random variation\" and ``random variable\". One solution is to homogenize them into the same Knowledge Point; another is to compensate one Knowledge Point's Familiarity Measure with the others' Familiarity Measures, because learning the others helps to understand it. The compensation can be calculated with Equation 10, where $ F_{k_{i}} $ is Knowledge Point $ k_{i} $'s Familiarity Measure and each of its siblings contributes $ 1\/c_{j} $ of its own Familiarity Measure to $ k_{i} $ ($ c_{j} $ is a coefficient to be determined). \n\n\\begin{equation}\n\tF_{k_{i}\\_new} = F_{k_{i}\\_old} + \\sum_{j=1}^m{\\frac{1}{c_{j}}}F_{k_{j}}\n\\end{equation}\n\n\\subsection{Trade-offs of evaluating one's understanding of conceptual knowledge}\nQuantitatively assessing one's understanding of conceptual knowledge seems to be a good thing, but it carries the risk of introducing some harmful effects. For example, if the Familiarity Measures and understanding degrees calculated are inaccurate, it may lead to wrong decisions. In addition, it cannot detect a person's talent and potential in a field. On the other hand, traditional exams and interviews have their own limitations. For example, they need other people's cooperation to accomplish the evaluation; they only assess one's knowledge in a particular field at a time, and the evaluation is not comprehensive: since only the questions being asked are assessed, not all of the topics in a field are evaluated. ICKEM assesses one's knowledge independently, comprehensively, and automatically. Therefore, the different methods of evaluating one's understanding of knowledge should be used cooperatively, complementing one another.\n\n\\subsection{Privacy issues}\nRecording one's learning history will inevitably intrude on his privacy. To protect privacy, the learning histories can be password protected or encrypted and stored in personal storage; they should not be revealed to other people. The only information that can be viewed by the outside world is the individual's knowledge measures for some Knowledge Points. The Knowledge Points that may involve privacy are separated from the others; every output of them should be authorized by the owner.
A person may choose to keep all of his knowledge measures private.\n\n\\subsection{The success criteria for ICKEM}\nFor ICKEM to be a successful model of evaluating a person's understanding of conceptual knowledge, it should have a good performance of discovering a person's domain-level expertise and topic-level expertise. To evaluate its performance, firstly, a group of subjects' learning histories over a long time (several years or decades of years) should be recorded.\n\nFor evaluating its ability of discovering domain-level expertise, each subject's domain-level expertise, such as Mathematics, Chemistry, Computer Science, Biology etc., is determined through traditional methods like tests and interviews. For each domain, a set of knowledge topics that are distinctive for the domain is selected. Their Familiarity Measures are calculated based on the subjects' learning histories. Then randomly select a number of knowledge topics from each domain and calculate the average Familiarity Measure of them. The domain which has the highest average Familiarity Measure is a subject's domain-level expertise. Finally, compare the domain-level expertise discovered by ICKEM with the expertise discovered by traditional methods. If they are equivalent, it proves ICKEM's capability of discovering domain-level expertise.\n\nFor evaluating ICKEM's ability of discovering topic-level expertise, randomly select a set of knowledge topics from each domain, then evaluate a subject's understanding degrees about them, classifying them as ``Understood'' and ``Not Understood'' according to the method mentioned in Section 3.1, then use traditional methods to evaluate a subject's understanding degrees about them, classifying them as ``Understood'' and ``Not Understood'' too. By comparing the results of both classification methods, we can obtain the precision and recall of ICKEM's capability of discovering topic-level expertise.\n\n\\section{Related work}\nMany research fields focus on the collection of personal information, such as lifelogging, expertise finding, and personal informatics. Bush envisioned the `memex' system, in which individuals could compress and store personally experienced information, such as books, records, and communications \\cite{bush1979we}. Inspired by `memex', Gemmell et al. developed a project called MyLifeBits to store all of a person's digital media, including documents, images, audio, and video \\cite{gemmell2002mylifebits}. In \\cite{Liu2017}, a person's reading history about an electronic document is used as attributes for re-finding the document. ICKEM is similar to `memex' and MyLifeBits in that it records an individual's digital history, although for a different purpose. `Memex' and MyLifeBits are mainly for re-finding or reviewing personal data; ICKEM is for quantitatively evaluating a person's knowledge.\n\nPersonal informatics is a class of tools that help people collect personally relevant information for the purpose of self-reflection and gaining self-knowledge \\cite{li2010stage,wolf2009know,yau2009self}. Various tools have been developed to help people collect and analyze different kinds of personal information, such as location \\cite{lindqvist2011m}, finances \\cite{kaye2014money}, food \\cite{cordeiro2015barriers}, weight \\cite{kay2013there,maitland2011designing}, and physical activity \\cite{fritz2014persuasive}. 
ICKEM facilitates a new type of personal informatics tool that helps people discover their expertise and deficiencies more accurately, by quantitatively assessing an individual's understanding of knowledge.\n\nExpertise is one's expert skill or knowledge in a particular field. Expertise finding is the use of tools for finding and assessing individual expertise \\cite{mcdonald1998just,mattox1999enterprise,vivacqua1999agents}. As an important link in knowledge sharing, expertise finding has been heavily studied in many research communities \\cite{ackerman2013sharing, Cheng2014,maybury2002awareness,tang2008arnetminer,Aslay2013,guy2013mining}. Many sources of data have been exploited to assess an individual's expertise, such as one's publications, documents, emails, web search behavior, other people's recommendations, social media, etc. ICKEM provides a new source of data for analyzing one's expertise--one's learning history about a topic--which is more comprehensive and straightforward than other data sources, because one's expertise is mainly obtained through learning (including ``Informal Learning\", which occurs through the experience of day-to-day situations, such as casual conversation, play, and exploration).\n\n\\section{Conclusion}\nPeople's pursuit of knowledge never stops. Most conceptual knowledge is transmitted through language; it is hard to imagine how a person could obtain conceptual knowledge without using language. A piece of written or spoken language can be converted into text. We propose a new method to estimate a person's understanding of a piece of conceptual knowledge, by analyzing the text content of all of one's learning experiences about a knowledge topic. The computation of the familiarity degree takes into account the total time the subject has devoted to a knowledge topic, the topic's share in each learning session, the subject's physical and psychological status during a session, the memory decay of each learning experience over time, and the difference among learning methods. To estimate a person's degree of understanding of a knowledge topic, it comprehensively evaluates one's familiarity with the topic itself and with the other topics that are essential to understanding it.\nQuantitatively evaluating a person's understanding of knowledge facilitates many applications, such as personalized recommendation, meaningful learning, and expertise and deficiency finding. With the prevalence of wearable computers like Google Glass and Apple Watch, and the maturing of technologies like Speech Recognition and Optical Character Recognition (OCR), it is practicable to analyze people's daily learning activities such as talking, listening, and reading. Therefore, ICKEM is technically feasible and beneficial.\n\n\\bibliographystyle{abbrv}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nOn November 12, 2014, the ESA\/Rosetta descent module Philae landed at the Abydos site on the surface of comet 67P\/Churyumov-Gerasimenko (hereafter 67P\/C-G). As part of the scientific payload aboard Philae, the Ptolemy mass spectrometer (Wright~et~al. 2007) performed the analysis of several samples from the surface at Agilkia (Wright~et~al. 2015) and atmosphere at Abydos (Morse~et~al. 2015). The main molecules detected by Ptolemy on the Abydos site were H$_2$O, CO and CO$_2$, with a measured CO\/CO$_2$ molar ratio of 0.07~$\\pm$~0.04.
Meanwhile, the CO\/CO$_2$ ratio has also been sampled in 67P\/C-G's coma between August and September 2014 by the ROSINA Double Focusing Mass Spectrometer (DFMS; Balsiger~et~al. 2007; H{\\\"a}ssig~et~al. 2013) aboard the Rosetta spacecraft. Strong variations of the CO and CO$_2$ production rates, dominated by the diurnal changes on the comet, have been measured by ROSINA, giving CO\/CO$_2$ ratios ranging between 0.50~$\\pm$~0.18 and 1.62~$\\pm$~1.34 over the August-September 2014 time period (H{\\\"a}ssig~et~al. 2015). Large fluctuations correlated with the sampled latitudes have also been observed and explained either by seasonal variations or by a compositional heterogeneity in the nucleus (H{\\\"a}ssig~et~al. 2015). Further investigation of the coma heterogeneity performed by Luspay-Kuti~et~al. (2015) in the southern hemisphere of 67P\/C-G at a later time period led to conclusions in favor of compositional heterogeneity. This latter hypothesis is also reinforced by the Ptolemy measurement of the CO\/CO$_2$ ratio at the Abydos site, which is found outside the range covered by the ROSINA measurements (Morse~et~al. 2015).\n\n\nHere, we aim at investigating the physico-chemical properties of the Abydos subsurface which can reproduce the CO\/CO$_2$ ratio observed by Ptolemy, assuming that the composition of the solid phase located beneath the landing site initially corresponds to the value in the coma. To investigate the possibility of a heterogeneous nucleus for 67P\/C-G, we have employed a comet nucleus model with i) an updated set of thermodynamic parameters relevant for this comet and ii) an appropriate parameterization of the illumination at the Abydos site. This allows us to mimic the thermal evolution of the subsurface of this location. By searching for the matching conditions between the properties of the Abydos subsurface and the Ptolemy data, we provide several constraints on the structural properties and composition of Philae's landing site in different cases.\n\n\n\\section{The comet nucleus model}\n\nThe one-dimensional comet nucleus model used in this work is described in Marboeuf~et~al. (2012). This model considers an initially homogeneous sphere composed of a predefined porous mixture of ices and dust in specified proportions. It describes heat transmission, gas diffusion, sublimation\/recondensation of volatiles within the nucleus, water ice phase transition, dust release, and dust mantle formation. The model takes into account different phase changes of water ice, within: amorphous ice, crystalline ice and clathrates. The use of a 1D model is a good approximation for the study of a specific point at the surface of the comet nucleus, here the Abydos landing site. However, since 67P\/C-G's shape is far from being a sphere (Sierks~et~al. 2015), we have parameterized the model in a way that correctly reproduces the illumination conditions at Abydos. This has been made possible via the use of the 3D shape model developed by Jorda~et~al. (2014) which gives the coordinates of Abydos on the surface of 67P\/C-G's nucleus, as well as the radius corresponding to the Abydos landing site and the normal to the surface at this specific location. The Abydos landing site is located just outside the Hatmethit depression, at the very edge of the area illuminated during the mapping and close observation phases of the Rosetta mission, roughly until December 2014. It is also at the edge of a relatively flat region of the small lobe illuminated throughout the perihelion passage of the comet. 
Geomorphologically, Abydos is interpreted as being a rough deposit region composed of meter-size boulders (Lucchetti~et~al. 2016). Other geometric parameters specific to 67P\/C-G, such as the obliquity and the argument of the subsolar meridian at perihelion, are calculated from the orientation of the spin axis computed during the shape reconstruction process (Jorda~et~al. 2014). Table~1 summarizes the main parameters used in this work. The porosity and dust\/ice ratio of the cometary material are set in the range of measurement of 80~$\\pm$~5~\\% (Kofman~et~al. 2015) and 4~$\\pm$~2 (Rotundi~et~al. 2014), respectively. These two parameters are linked through the density of the cometary material, and are set to be compatible with the preliminary value determined by Jorda~et~al. (2014) (510~$\\pm$~20~kg\/m$^3$). 67P\/C-G's thermal inertia is estimated to be in the 10--150~W~K$^{-1}$~m$^{-2}$~s$^{1\/2}$ range based on the measurement obtained by the Rosetta\/VIRTIS instrument (Leyrat~et~al. 2015). According to the same study, regions surrounding Abydos are characterized by a thermal inertia in the lower half of this range. We have therefore chosen a low thermal inertia close to 50~W~K$^{-1}$~m$^{-2}$~s$^{1\/2}$.\n\nIn addition to water ice and dust, the solid phase of our model includes CO and CO$_2$ volatiles. Although coma abundances do not necessarily reflect those in the nucleus, they constitute the most relevant constraint available on its composition. We thus considered the CO\/CO$_2$ ratio (1.62~$\\pm$~1.34) measured by ROSINA on August 7, 2014 at 18 hours, as representative of the bulk ice composition in the nucleus, and more specifically in the Abydos subsurface. This ratio is derived from the CO\/H$_2$O and CO$_2$\/H$_2$O ROSINA measurements performed at this date, which are equal to 0.13~$\\pm$~0.07 and 0.08~$\\pm$~0.05, respectively (H{\\\"a}ssig~et~al. 2015). We selected the date of August 7 because i) the corresponding ROSINA measurements were performed at high northern latitudes where the Abydos site is located, and ii) the large CO\/CO$_2$ range obtained at this moment covers all values measured by ROSINA at other dates (including the late value obtained by Le~Roy~et~al. (2015) on October 20 for the northern hemisphere, namely CO\/CO$_2$~=~1.08).\n\nThe three main phases of ices identified in the literature, namely crystalline ice, amorphous ice and clathrate phase, are considered in this work. Outgassing of volatiles in 67P\/C-G could then result from the sublimation of ices, amorphous-to-crystalline ice phase transition, or destabilization of clathrates in the crystalline ice, amorphous ice and clathrates cases, respectively. Because the properties of volatiles trapping in the nucleus matrix strongly depend on the considered icy phase, the following models have been considered:\n\n\\paragraph{Crystalline ice model} Water ice is fully crystalline, meaning that no volatile species are trapped in the water ice structure. Here, CO and CO$_2$ are condensed in the pores of the matrix made of water ice and dust;\n\n\\paragraph{Amorphous ice model} The matrix itself is made of amorphous water ice with a volatile trapping efficiency not exceeding $\\sim$10\\%. In this case, the cumulated mole fraction of volatiles is higher than this value, implying that an extra amount of volatiles is crystallized in the pores. 
With this in mind, we consider different distributions of CO and CO$_2$ in the both phases of this model;\n\n\\paragraph{Clathrate model} Water ice is exclusively used to form clathrates. Similarly to amorphous ice, clathrates have a maximum trapping capacity ($\\sim$17\\%). The extra amount of volatiles, if any, also crystallizes in the pores. In our case, however, CO is fully trapped in clathrates and escapes only when water ice sublimates. In contrast, we assume that solid CO$_2$ exists in the form of crystalline CO$_2$ in the pores of the nucleus because this molecule is expected to condense in this form in the protosolar nebula (Mousis~et~al. 2008).\n\n\\begin{deluxetable}{lcl}\n\\tablecaption{Modeling parameters for the nucleus}\n\\tablehead{\n\\colhead{Parameter} & \\colhead{Value} & \\colhead{Reference}\n}\n\\startdata\nRotation period\t(hr)\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t& 12.4\t\t\t\t& Mottola~et~al. (2014)\t\t\\\\\nObliquity ($\\degree$)\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t& 52.25\t\t\t\t&\t\t\t\t\t\t\t\\\\\nSubsolar meridian $\\Phi$ ($\\degree$) \\footnote{Argument of subsolar meridian at perihelion.}\t\t\t& -111\t\t\t\t&\t\t\t\t\t\t\t\\\\\nCo-latitude ($\\degree$) \\footnote{Angle between the normal to the surface and the equatorial plane.}\t& -21\t\t\t\t&\t\t\t\t\t\t\t\\\\\nInitial radius (km)\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t& 2.43\t\t\t\t&\t\t\t\t\t\t\t\\\\\nBolometric albedo (\\%)\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t& 1.5\t\t\t\t& Fornasier~et~al. (2015)\t\\\\\nDust\/ice mass ratio\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t& 4~$\\pm$~2\t\t\t& Rotundi~et~al. (2014)\t\t\\\\\nPorosity (\\%)\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t& 80~$\\pm$~5\t\t& Kofman~et~al. (2015)\t\t\\\\\nDensity (kg\/m$^3$)\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t& 510~$\\pm$~20\t\t& Jorda~et~al. (2014)\t\t\\\\\nI (W~K$^{-1}$~m$^{-2}$~s$^{1\/2}$) \\footnote{Thermal inertia.}\t\t\t\t\t\t\t\t\t\t\t& 50\t\t\t\t& Leyrat~et~al. (2015)\t\t\\\\\nCO\/CO$_2$ initial ratio\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t& 1.62~$\\pm$~1.34\t& H{\\\"a}ssig~et~al. (2015)\t\\\\\n\\enddata\n\\end{deluxetable}\n\n\n\\section{Thermal evolution of the subsurface at Abydos}\n\nOur results show that the illumination at the surface of the Abydos site is a critical parameter for the evolution of the nucleus, regardless the considered ice structure. Consequently, all three models described in Section~2 present the same behavior up to a given point. We first describe the characteristics displayed in common by the three models by presenting the thermal evolution of the crystalline ice model, before discussing the variations resulting from the different assumptions on the nature of ices. Figure~1 shows the time evolution of the nucleus stratigraphy, which corresponds to the structural differentiation occurring in the subsurface of the Abydos site. This differentiation results from the sublimation of the different ices. After each perihelion passage, the sublimation interfaces of CO and CO$_2$ reach deeper layers beneath the nucleus surface, with a progression of $\\sim$20 m per orbit. The CO sublimation interface always propagates deeper than its CO$_2$ counterpart because of the higher volatility of the former molecule. On the other hand, because surface ablation is significant, the progression of these interfaces is stopped by the propagation of the water sublimation front after perihelion. 
This allows the Abydos region to present a ``fresh'' surface after each perihelion.\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=9cm]{figure1.pdf}\n\\caption{Stratigraphy of the Abydos subsurface represented during 35 years of evolution ($\\sim$5 orbits of 67P\/C-G), showing the interfaces of sublimation of considered volatile species. The surface ablation occurs at each perihelion (represented by the vertical lines) and reaches all interfaces.}\n\\end{center}\n\\end{figure}\n\nAt the surface of the Abydos site, the outgassing rates of CO and CO$_2$ vary with the illumination conditions, reaching maxima at perihelion and minima at aphelion. Because the sublimation interface of CO$_2$ is closer to the surface, its production rate is more sensitive to the illumination conditions than that of CO. As a result, the outgassing rate of CO$_2$ presents important variations with illumination while that of CO is less affected. This difference strongly impacts the evolution of the CO\/CO$_2$ outgassing ratio at the surface of Abydos (see Figure~2). Close to perihelion, this ratio crosses the range of values measured by Ptolemy (0.07~$\\pm$~0.04) and reaches a minimum. Note that the CO\/CO$_2$ outgassing ratio presents spikes during a certain period after perihelion. These spikes appear when the CO and CO$_2$ interfaces of sublimation are dragged out to the surface by ablation and result from temperature variations induced by diurnal variations of the insolation.\n\nWe define $\\Delta t$ as the time difference existing between the Ptolemy CO\/CO$_2$ observations (here 0.07 measured on November 12, 2014) and the epoch at which our model reproduces these data (see Figure~2). In each case investigated, we vary the input parameters of the model to minimize the value of $\\Delta t$ (see Table~2 for details). We have also defined the quantity $f_t = \\Delta t \/ P_{orb}$, namely the fraction of 67P\/C-G's orbital period ($P_{orb} = 6.44$ yr) corresponding to $\\Delta t$. The results of our simulations are indicated below.\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=9cm]{figure2.pdf}\n\\caption{Evolution of the CO\/CO$_2$ outgassing ratio at Abydos in the case of the crystalline ice model, during one orbit. The green line and green area represent the Ptolemy central value and its range of uncertainty, respectively. The blue dots correspond to the measurement epoch (November 12, 2014). Vertical lines correspond to passages at perihelion. 
See text for a description of $\\Delta t$.}\n\\end{center}\n\\end{figure}\n\n\\begin{deluxetable}{lccc}\n\\tablecaption{Optimized set of parameters for the three models}\n\\tablehead{\n\\colhead{} & \\colhead{Crystalline} & \\colhead{Amorphous} & \\colhead{Clathrate}\\\\ \n\\colhead{} & \\colhead{ice model} & \\colhead{ice model} & \\colhead{model}\n}\n\\startdata\nParameters\t\t\t\t\t\t\t\t\t\t&\t\t\t\t\t&\t\t\t\t\t&\t\t\t\t\\\\\n\\hline\nDust\/ice mass ratio\t\t\t\t\t\t\t\t& 6\t\t\t\t\t& 4\t\t\t\t\t& 4\t\t\t\t\\\\\nPorosity (\\%)\t\t\t\t\t\t\t\t\t& 78\t\t\t\t& 76\t\t\t\t& 76\t\t\t\\\\\nDensity (kg\/m$^3$)\t\t\t\t\t\t\t\t& 516\t\t\t\t& 510\t\t\t\t& 519\t\t\t\\\\\nI (W K$^{-1}$ m$^{-2}$ s$^{1\/2}$) \\footnote{Thermal inertia resulting from the different water ice conductivities.}\t\t& $\\sim$60\t& 40--60\t& $\\sim$50\t\\\\\nCO\/CO$_2$ initial ratio\t\t\t\t\t\t\t& 0.46\t\t\t\t& 1.62\t\t\t\t& 0.46\t\t\t\\\\\nCO\/H$_2$O initial abundance\t\t\t\t\t\t& 6\\%\t\t\t\t& 13\\% \\footnote{2.8\\% trapped in amorphous ice, 10.2\\% condensed in the pores (Marboeuf~et~al. 2012).}\t& 6\\%\t\\\\\nCO$_2$\/H$_2$O initial abundance\t\t\t\t\t& 13\\%\t\t\t\t& 8\\% \\footnote{4.2\\% trapped in amorphous ice, 3.2\\% condensed in the pores (Marboeuf~et~al. 2012).}\t& 13\\%\t\\\\\n\\hline\nResults\t\t\t\t\t\t\t\t\t\t\t&\t\t\t\t\t&\t\t\t\t\t&\t\t\t\t\\\\\n\\hline\n$\\Delta t$ (day)\t\t\t\t\t\t\t\t& 52\t\t\t\t& 4\t\t\t\t\t& 34\t\t\t\\\\\n$f_t$ (\\%)\t\t\t\t\t\t\t\t\t\t& 2.2\t\t\t\t& 0.17\t\t\t\t& 1.4\t\t\t\\\\\n\\enddata\n\\end{deluxetable}\n\n\n\\paragraph{Crystalline ice model} In this case, we find that the initial CO\/CO$_2$ ratio and the dust\/ice ratio adopted in the nucleus have a strong influence on $\\Delta t$. This quantity is minimized i) when the adopted dust\/ice ratio becomes higher than those found by Rotundi~et~al. (2014) and ii) if the selected initial CO\/CO$_2$ ratio is lower than the nominal value found by ROSINA (H{\\\"a}ssig~et~al. 2015). Figure~3 shows the evolution of $\\Delta t$ as a function of the dust\/ice ratio for two different values of the initial CO\/CO$_2$ ratio, namely 1.62 (the central value) and 0.46 (close to the lower limit). These results confirm the aforementioned trend: with CO\/CO$_2$ = 0.46 and a dust\/ice ratio of 6 or higher, the Ptolemy's measurement epoch is matched with $\\Delta t$~=~52~$\\pm$~27 days or lower, which corresponds to $f_t$ lower than $\\sim$2\\%. These results can be explained by the thermal conductivity of crystalline water ice, which is in the 3--20~W~m$^{-1}$~K$^{-1}$ range (Klinger 1980), considering the temperatures in the comet. Because dust has a conductivity of 4~W~m$^{-1}$~K$^{-1}$ (Ellsworth~\\&~Schubert 1983), the global conductivity decreases with the increase of the dust\/ice ratio in the nucleus. Since heating in the nucleus is mostly provided by surface illumination, a low conductivity increases the temperature gradient between the upper and deeper layers (where the CO$_2$ and the CO sublimation interfaces are located, respectively). 
This gradient enhances more the sublimation rate of CO$_2$ ice than that of CO ice, leading to smaller CO\/CO$_2$ outgassing ratios at the surface and to a smaller $\\Delta t$ (see Figure~2).\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=9cm]{figure3.pdf}\n\\caption{Evolution of $\\Delta t$ and $f_t$ as a function of the dust\/ice mass ratio for two different values of the initial CO\/CO$_2$ ratio in the case of the crystalline ice model (see text).}\n\\end{center}\n\\end{figure}\n\n\\paragraph{Amorphous ice model} Here, values of $\\Delta t$ are significantly lower than those obtained with the crystalline ice model; a $\\Delta t$ of 4~$\\pm$~9 days is obtained using the central values listed in Table~2 for the initial CO\/CO$_2$ and dust\/ice ratios, leading to $f_t \\sim 0.2\\%$. Higher values of the CO\/CO$_2$ ratio never allow our model matching the Ptolemy data: the CO\/CO$_2$ outgassing ratio is increased sufficiently high so that even its minimum becomes higher than the range of measurements performed by Ptolemy. On the other hand, lower CO\/CO$_2$ ratios increase $\\Delta t$. Interestingly, the results of this model are poorly affected by the considered dust\/ice ratio. Within the 0.12--1.35~W~m$^{-1}$~K$^{-1}$ range (Klinger 1980), the conductivity of amorphous water ice lays under those of dust (4~W~m$^{-1}$~K$^{-1}$) and crystalline ices (3--20~W~m$^{-1}$~K$^{-1}$). Since amorphous ice dominates the volatile phase, the mean conductivity is never too far from to that of dust, irrespective of the dust\/ice ratio considered within the observed range.\n\n\\paragraph{Clathrate model} In this case, low CO\/CO$_2$ ratios (still within the range given in Table~1) are required to get values of $\\Delta t$ below 50 days (i.e., $f_t$ below 2\\%). Similarly to the case of the amorphous ice model, $\\Delta t$ is poorly sensitive to the variation of the dust\/ice ratio because the conductivity of clathrates (0.5~W~m$^{-1}$~K$^{-1}$; Krivchikov~et~al. 2005a, 2005b) is small compared to that of dust.\n\n\n\\section{Discussion}\n\nOur goal was to investigate the possibility of recovering the value of the CO\/CO$_2$ outgassing ratio measured by the Ptolemy instrument at the surface of Abydos. Interestingly, all considered models match the Ptolemy value with $f_t$ lower than $\\sim$2\\%, provided that an optimized set of parameters is adopted for the Abydos region. Despite the fact it is poorly sensitive to the adopted dust\/ice ratio, a nucleus model dominated by amorphous ice (and possibly including a smaller fraction of crystalline ices) gives the best results ($\\Delta t$~$\\leq$~4~days, i.e. $f_t$~$\\leq$~0.2\\%) for a primordial CO\/CO$_2$ ratio equal to the central value measured by the ROSINA instrument. On the other hand, the crystalline ice and clathrate models require a primordial CO\/CO$_2$ ratio close to the lower limit sampled by ROSINA to obtain values of $\\Delta t$ under 50 days (i.e. $f_t$ under 2\\%). We stress the fact that the CO\/CO$_2$ range of validity is used under the assumption that the ROSINA measurements correspond to the bulk nucleus abundances, which is not necessarily true. A second requirement to minimize the value of $\\Delta t$ in the crystalline ice model is the necessity to adopt a dust\/ice ratio at least equal to or higher than the upper limit determined by Rotundi~et~al. (2014) for 67P\/C-G. This is supported by the pictures taken at Abydos by the CIVA instrument (Bibring~et~al. 2007) aboard the Philae module. 
The very low reflectance of 3--5\\% of the Abydos region (Bibring~et~al. 2015) is in agreement with the OSIRIS and VIRTIS reflectance measurements in the visible (Sierks~et~al. 2015) and near-IR (Capaccioni~et~al. 2015), which are consistent with a low ice content in the upper surface layer (Capaccioni~et~al. 2015).\n\nSurface illumination can also greatly influence the CO\/CO$_2$ outgassing ratio on 67P\/C-G. To quantify this effect, we have simulated a point of 67P\/C-G's nucleus which is more illuminated in comparison to Abydos. We have performed a set of simulations at a co-latitude of -52.25$\\degree$, a point that receives permanent sunlight around perihelion. At the date when the CO\/CO$_2$ outgassing ratio at Abydos is equal to 0.07 (the central value of Ptolemy's range of measurement), we obtain a different value for the CO\/CO$_2$ outgassing ratio at this new location, irrespective of the adopted model. For the crystalline ice model, the outgassing ratio at the illuminated site reaches a value of 0.11, which is still within Ptolemy's range of measurement. This implies that a different illumination cannot explain a strong variation of the CO\/CO$_2$ outgassing ratio if the nucleus presents a homogeneous composition. In this case, the difference between the Ptolemy and ROSINA measurements is clearly due to a heterogeneity in the nucleus composition. On the other hand, the CO\/CO$_2$ outgassing ratio at the illuminated site is equal to 0.74 and 0.76 in the cases of the amorphous ice and clathrate models, respectively. These values are within ROSINA's range of measurements, implying that the difference of illumination is sufficient to explain the difference with the CO\/CO$_2$ ratio sampled at Abydos, assuming a homogeneous nucleus.\n\nIn summary, all possible water ice structures are able to reproduce the observations made by Ptolemy, assuming that the primordial CO\/CO$_2$ ratio is the one inferred by ROSINA. Each case requires a unique set of input parameters taken from the range of values inferred by Rosetta and which describes the structure and composition of the material. According to our simulations, a heterogeneity in the composition of 67P\/C-G's nucleus is possible only if the nucleus is composed of crystalline ices. However, if we consider different ice phases like amorphous ice or clathrates, the difference between the Ptolemy and ROSINA measurements could simply originate from the variation of illumination between different regions of the nucleus.\n\nIn the upcoming months, the Philae module could awaken and allow the Ptolemy mass spectrometer to perform additional measurements of the CO\/CO$_2$ ratio. By comparing these new values with the different CO\/CO$_2$ outgassing ratios predicted by our three models at the same date, we would be able to see which model is the most reliable, and thus to determine which water ice structure is dominant at the surface of 67P\/C-G's nucleus.\n\n\n\\acknowledgements\nO.M. acknowledges support from CNES. This work has been partly carried out thanks to the support of the A*MIDEX project (n\\textsuperscript{o} ANR-11-IDEX-0001-02) funded by the ``Investissements d'Avenir'' French Government program, managed by the French National Research Agency (ANR). Funding and operation of the Ptolemy instrument was provided by the Science and Technology Facilities Council (Consolidated Grant ST\/L000776\/1) and UK Space Agency (Post-launch support ST\/K001973\/1). A.L.-K. acknowledges support from the NASA Jet Propulsion Laboratory (subcontract No. 
1496541).\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{introduction}\n\nIn 1962, John Olson isolated a water-soluble bacteriochlorophyll\n({\\it BChl a}) protein from green sulfur bacteria \\cite{olson63}. In\n1975, Roger Fenna and Brian Matthews resolved the X-ray structure of\nthis protein (Fenna-Matthews-Olson (FMO)) from {\\it\nProsthecochloris aestuarii} and found that the protein consists of\nthree identical subunits related by $C_3$ symmetry, each containing\nseven {\\it BChl a} pigments \\cite{fenna75}. In photosynthetic\nmembranes of green sulfur bacteria, this protein channels the\nexcitations from the chlorosomes to the reaction center. Since it\nwas the first photosynthetic antenna complex of which the X-ray\nstructure became available, it triggered a wide variety of studies\nof spectroscopic and theoretical nature, and it therefore has become\none of the most widely studied and well-characterized\npigment-protein complexes.\n\nThe excitation-energy transfer can be elucidated with 2D\necho-spectroscopy using three laser pulses which hit the sample\nwithin several femtoseconds time spacings. The experimentally\nmeasured 2D echo-spectra of the complex show wave-like energy\ntransfer with oscillation periods roughly consistent with the\neigenenergy spacings of the excited system\\cite{engel07}. This has\nbrought a long-standing question again into the scientific focus\nthat whether nontrivial quantum coherence effects exist in natural\nbiological systems under physiological conditions\n\\cite{lee07,adolphs06}.\n\nMany studies have attempted to unravel the precise role of quantum\ncoherence in the EET of light-harvesting complexes\n\\cite{mohseni08,plenio08,caruso09,chin10,olaya08,ishizaki09,yang10,hoyer09,sarovar11,shi11}.\nThe interplay of coherent dynamics, which leads to a delocalization\nof an initial excitation arriving at the FMO complex from the\nantenna, and the coupling to a vibronic environment with slow and\nfast fluctuations, has led to studies of environmentally assisted\ntransport in the FMO. An interesting finding is that the\nenvironmental decoherence and noise play a crucial role in the\nexcitation energy transfer in the FMO\n\\cite{mohseni08,plenio08,caruso09,chin10,caruso101,shabani11,ghosh11,yi01}.\n\nIn these studies, the FMO complex is treated using the so-called\nFrenkel exciton Hamiltonian,\n\\begin{equation}\nH=\\sum_{j=1}^7 E_j|j\\rangle\\langle\nj|+\\sum_{i>j=1}^7J_{ij}(|i\\rangle\\langle j|+h.c.),\n\\end{equation}\nwhere $|j\\rangle$ represents the state with only the $j$-th site\nbeing excited and all other sites in their electronic ground state.\n$E_j$ is the on-site energy of site $j$, and $J_{ij}$ denotes the\ninter-site coupling between sites $i$ and $j$. The inter-site\ncouplings $J_{ij}$ given by $J_{ij}=\\int\nd\\vec{r}\\Phi_i(\\vec{r})\\rho_j(\\vec{r})$ represent the Coulomb\ncouplings between the transition densities of the BChls\n\\cite{adolphs08}, where $\\Phi_i(\\vec{r})$ is the electrostatic\npotential of the transition density of {\\it BChl i} and\n$\\rho_j(\\vec{r})$ is the transition density of {\\it BChl j}. It is\neasy to find that this calculation can only give real inter-site\ncouplings $J_{ij}.$ In fact, the couplings, which has been used in\nprevious studies of 2D echo-spectra and facilitate comparisons\nbetween different approaches, are all real.\n\nWe should emphasize that the inter-site couplings originate from\ndipole-dipole couplings between different chromophores. 
From the point of view of quantum mechanics, they are in general complex, even though the complex phases are difficult to calculate. In this paper, we shed light on the effects of complex inter-site couplings by addressing the following questions. First, we present a theoretical framework which elucidates how complex couplings could assist the excitation transfer. We optimize the relative phases in the couplings for maximal transfer efficiency and study the robustness of the excitation transfer against fluctuations of the on-site energies and inter-site couplings. Second, we present a study of the coherence in the initially excited states by showing the dependence of the transfer efficiency on the initial states.\n\nThis paper is organized as follows. In Sec. {\\rm II}, we introduce the model used to describe the FMO complex and the dynamics of the exciton transport. In Sec. {\\rm III}, we optimize the phases phenomenologically introduced in the couplings for maximal excitation energy transfer efficiency and explore how the efficiency depends on the initial state of the FMO. The fluctuations in the local on-site energies and in the inter-site couplings affect the transfer efficiency; this effect is studied in Sec. {\\rm IV}. Finally, we conclude in Sec. {\\rm V}.\n\n\\section{model}\nSince the FMO complex is arranged as a trimer with three different subunits interacting weakly with each other, we restrict our study to a single subunit, which contains eight bacteriochlorophyll molecules (BChls)\\cite{tronrud09, busch11}. The presence of the 8th BChl chromophore has only recently been suggested by crystallographic data\\cite{tronrud09}, and recent experimental data and theoretical studies indicate that the 8th BChl is the closest site to the baseplate and should be the point at which energy flows into the FMO complex\\cite{busch11,ritschel11}. However, the eighth BChl is loosely bound and usually detaches from the others when the system is isolated from its environment to perform experiments. Thus we do not take the eighth BChl into account, and the EET will be explored using the seven-site model\\cite{tronrud09}.\n\nWe describe the FMO complex by a generic one-dimensional Frenkel exciton model, consisting of a regular chain of 7 optically active sites, which are modeled as two-level systems with parallel transition dipoles. The corresponding Hamiltonian reads\n\\begin{equation}\nH=\\sum_{j=1}^7 E_j|j\\rangle\\langle j|+\\sum_{i>j=1}^7(J_{ij}|i\\rangle\\langle j|+J_{ij}^*|j\\rangle\\langle i|),\n\\end{equation}\nwhere $|j\\rangle$ represents the state where only the $j$-th site is excited and all other sites are in their electronic ground state. $E_j$ is the on-site energy of site $j$, and $J_{ij}$ denotes the excitonic coupling between sites $i$ and $j$, with $J_{ij}=J_{ji}^*$. The on-site energies and inter-site couplings from Ref.~\\cite{adolphs06} (in units of $\\mbox{cm}^{-1}$) are adopted to study the quantum dynamics of the FMO. As mentioned above, a phase $\\phi_{ij}$ is added to each inter-site coupling $J_{ij}$ stronger than 15 $\\mbox{cm}^{-1}$,\n\\begin{widetext}\n\\begin{equation}\nH \\!=\\!\\!
\\left(\\!\\!\\begin{array}{ccccccc}\n\\mathbf{215} & \\!\\mathbf{-104.1e^{i\\phi_{12}}} & 5.1 & -4.3 & 4.7 & \\mathbf{-15.1e^{i\\phi_{16}}} & -7.8 \\\\\n\\!\\mathbf{-104.1e^{-i\\phi_{12}}} & \\mathbf{220.0} &\\mathbf{ 32.6e^{i\\phi_{23}}} & 7.1 & 5.4 & 8.3 & 0.8 \\\\\n5.1 & \\mathbf{ 32.6e^{-i\\phi_{23}} }& 0.0 & \\mathbf{-46.8e^{i\\phi_{34}}} & 1.0 & -8.1 & 5.1 \\\\\n-4.3 & 7.1 &\\!\\mathbf{-46.8e^{-i\\phi_{34}}} & \\mathbf{125.0} &\\!\n\\mathbf{-70.7e^{i\\phi_{45}}} &\\! -14.7 &\n\\mathbf{ -61.5e^{i\\phi_{47}}}\\\\\n4.7 & 5.4 & 1.0 & \\!\\mathbf{-70.7e^{-i\\phi_{45}}} & \\mathbf{450.0} & \\mathbf{ 89.7e^{i\\phi_{56}}} & -2.5 \\\\\n\\mathbf{-15.1e^{-i\\phi_{16}}} & 8.3 & -8.1 & -14.7 & \\mathbf{89.7e^{-i\\phi_{56}}} & \\mathbf{330.0} & \\mathbf{ 32.7e^{i\\phi_{67}}} \\\\\n-7.8 & 0.8 & 5.1 & \\mathbf{-61.5e^{-i\\phi_{47}}} & -2.5 &\n\\mathbf{32.7e^{-i\\phi_{67}}} & \\mathbf{280.0}\n\\end{array}\\!\\!\n\\right). \\label{ha}\n\\end{equation}\n\\end{widetext}\nHere the zero energy has been shifted by 12230 $\\mbox{cm}^{-1}$ for\nall sites, corresponding to a wavelength of $\\sim 800 \\mbox{nm}$. We\nnote that in units of $\\hbar=1$, we have 1 ps$^{-1}$=5.3 cm$^{-1}$.\nThen by dividing $|J_{ij}|$ and $E_{j}$ by 5.3, all elements of the\nHamiltonian are rescaled by units of ps$^{-1}$. We can find from\nthe Hamiltonian $H$ that in the Fenna-Matthew-Olson complex (FMO),\nthere are two dominating exciton energy transfer (EET) pathways:\n$1\\rightarrow 2\\rightarrow 3$ and $6\\rightarrow (5,7)\\rightarrow 4\n\\rightarrow 3$. Although the nearest neighbor terms dominate the\nsite to site coupling, significant hopping matrix elements exist\nbetween more distant sites. This indicates that coherent transport\nitself may not explain why the excitation energy transfer is so\nefficient.\n\nWe phenomenologically introduce a decoherence scheme to describe the\nexcitation population transfer from site 3 to the reaction center\n(site 8). The master equation to describe the dynamics of the FMO\ncomplex is then given by,\n\\begin{eqnarray}\n\\frac{d\\rho}{dt} = -i[H,\\rho]+ {\\cal L}_{38}(\\rho)\\;,\n\\label{masterE}\n\\end{eqnarray}\nwhere $\\mathcal{L}_{38}(\\rho)=\\Gamma \\left[P_{83}\\rho(t)\nP_{38}-\\frac{1}{2}P_3\\rho(t)-\\frac{1}{2}\\rho(t)P_3\\right]$ with\n$P_{38}=|3\\rangle\\langle 8|=P_{83}^{\\dagger}$ characterizes the\nexcitation trapping at site 8 via site 3.\n\nWe shall use the population $P_8$ at time $T$ in the reaction center\ngiven by $P_8(T)=\\Tr(|8\\rangle\\langle 8|\\rho(T))$ to quantify the\nexcitation transfer efficiency. Clearly, the Liouvillian\n$L_{38}(\\rho)$ plays an essential role in the excitation transfer.\nWith this term, we will show in the next section that the complex\ninter-site coupling can enhance the excitation energy transfer.\n\n\n\\section{simulation results}\nIt has been shown that the Hamiltonian with real inter-site\ncouplings can not give high excitation energy transfer efficiency.\nIndeed, the previous numerical simulations shown that $\\Gamma$ can\nbe optimized to 87.14 with all $\\phi_{ij}=0$ to obtain the highest\nexcitation transfer efficiency $p_8=0.6781$ in this case\n\\cite{yi12}.\n\nWith complex inter-site couplings, the EET efficiency can reach\nalmost $90\\%$ by optimizing the phases added to $J_{ij}$. 
In units of $\\pi$, these phases are $\\phi_{12}=1.2566$, $\\phi_{16}=3.3510$, $\\phi_{23}=1.8850$, $\\phi_{34}=1.8850$, $\\phi_{45}=1.8850$, $\\phi_{47}=0.8378$, $\\phi_{56}=3.1416$, and $\\phi_{67}=0.3142$. With these phases, the population on each site as a function of time is plotted in Fig.~\\ref{fig1}. Two observations can be made from Fig.~\\ref{fig1}: (1) the population shows oscillatory behavior throughout the energy transfer process, which is different from the situation where decoherence dominates the mechanism of EET \\cite{yi12}; (2) the excitation on sites 1 and 2 dominates the population, while the population at site 3 is almost zero. The first observation can be understood as quantum coherence persisting during the whole EET, and the second observation suggests that sites 1 and 2 play an important role in the excitation energy transfer. The excitation on site 3 is trapped and transferred to the reaction center almost immediately, so as to obtain a high transfer efficiency, as Fig.~\\ref{fig1} shows.\n\\begin{figure}\n\\includegraphics*[width=0.8\\columnwidth, height=0.5\\columnwidth]{fmo_cf1.eps}\n\\includegraphics*[width=0.8\\columnwidth, height=0.5\\columnwidth]{fmo_cf6.eps} \\caption{Population on each site as a function of time. Site 8 represents the reaction center. The phases are optimized for the EET efficiency. The exciton is assumed to be initially on site 1. The upper panel shows the population without decoherence, while the bottom panel shows the population with $\\gamma_2=2.4$ and $\\gamma_{j\\neq 2}=0$; see Eq.~(\\ref{masterE1}). $\\Gamma=44$ is chosen throughout this paper.}\n\\label{fig1}\n\\end{figure}\n\nStudies of the spatial and temporal relaxation of the exciton show that sites 1 and 6 carry most of the initial population \\cite{adolphs06}. It is therefore interesting to study how the initial states affect the excitation transfer efficiency. In the following, we shall shed light on this issue. Two cases are considered. First, we calculate the excitation transfer efficiency with the exciton initially in a superposition of sites 1 and 6 and analyze how the transfer efficiency depends on the initial state. Second, we extend this study to initial states where sites 1 and 2 are initially excited. These analyses are performed by numerically calculating the transfer efficiency at time $T=5$ ps; selected results are shown in Fig.~\\ref{fig2} and Fig.~\\ref{fig3}.\n\n\\begin{figure}\n\\includegraphics*[width=0.8\\columnwidth, height=0.5\\columnwidth]{fmo_cf2.eps}\n\\includegraphics*[width=0.8\\columnwidth, height=0.5\\columnwidth]{fmo_cf3.eps}\\caption{The dependence of the transfer efficiency on the initial states. Top panel: the initial state is a superposition of $|1\\rangle$ and $|6\\rangle$, namely $|\\psi(t=0)\\rangle=\\cos\\theta|1\\rangle+\\sin\\theta\\exp(i\\phi)|6\\rangle.$ Lower panel: the initial state is $|\\psi(t=0)\\rangle=\\cos\\theta|1\\rangle+\\sin\\theta\\exp(i\\phi)|2\\rangle.$ The phases in $J_{ij}$ are optimized for maximal transfer efficiency, i.e., they take the same values as in Fig.~\\ref{fig1}.} \\label{fig2}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics*[width=0.8\\columnwidth, height=0.5\\columnwidth]{fmo_cf4.eps} \\caption{The transfer efficiency versus the initial state: the case of a classical mixture. The phases (couplings) are the same as in Fig.~\\ref{fig1}. The initial state is $p|1\\rangle\\langle 1|+(1-p)|6\\rangle\\langle 6|$ for the solid line, and $p|1\\rangle\\langle 1|+(1-p)|2\\rangle\\langle 2|$ for the dotted line.}\n\\label{fig3}\n\\end{figure}
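\n\nThe kind of population dynamics shown in Figs.~\\ref{fig1}--\\ref{fig3} can be reproduced with a short script. The following Python sketch is illustrative only and is not the code used for the figures: it assembles the Hamiltonian of Eq.~(\\ref{ha}) in units of ps$^{-1}$, treats the trapping term of Eq.~(\\ref{masterE}) as a Lindblad jump operator $\\sqrt{\\Gamma}\\,|8\\rangle\\langle 3|$, and integrates the master equation with a fixed-step fourth-order Runge--Kutta scheme. The phase values are inserted here as radians and the rates are assumed to be in ps$^{-1}$; both conventions should be adjusted if a different reading of the text is intended.\n\\begin{verbatim}\nimport numpy as np\n\nCM_TO_PS = 1.0 \/ 5.3      # 1 ps^-1 = 5.3 cm^-1 (hbar = 1)\n\n# Real parts of the site energies and couplings of Eq. (3), in cm^-1.\nH_cm = np.array([\n    [ 215.0, -104.1,   5.1,  -4.3,   4.7, -15.1,  -7.8],\n    [-104.1,  220.0,  32.6,   7.1,   5.4,   8.3,   0.8],\n    [   5.1,   32.6,   0.0, -46.8,   1.0,  -8.1,   5.1],\n    [  -4.3,    7.1, -46.8, 125.0, -70.7, -14.7, -61.5],\n    [   4.7,    5.4,   1.0, -70.7, 450.0,  89.7,  -2.5],\n    [ -15.1,    8.3,  -8.1, -14.7,  89.7, 330.0,  32.7],\n    [  -7.8,    0.8,   5.1, -61.5,  -2.5,  32.7, 280.0]])\n\n# Phases on the strong couplings (interpreted here as radians).\nphases = {(0, 1): 1.2566, (0, 5): 3.3510, (1, 2): 1.8850, (2, 3): 1.8850,\n          (3, 4): 1.8850, (3, 6): 0.8378, (4, 5): 3.1416, (5, 6): 0.3142}\n\nH = np.zeros((8, 8), dtype=complex)   # site 8 (index 7) is the reaction center\nH[:7, :7] = H_cm * CM_TO_PS\nfor (i, j), phi in phases.items():\n    H[i, j] = H[i, j] * np.exp(1j * phi)\n    H[j, i] = np.conj(H[i, j])\n\nGamma = 44.0                          # trapping rate (assumed in ps^-1)\ntrap = np.zeros((8, 8), dtype=complex)\ntrap[7, 2] = np.sqrt(Gamma)           # jump operator sqrt(Gamma)|8><3|\n\ndef rhs(rho, H, c_ops):\n    # Master equation (4): -i[H, rho] plus Lindblad dissipators.\n    out = -1j * (H @ rho - rho @ H)\n    for c in c_ops:\n        cd = c.conj().T\n        out += c @ rho @ cd - 0.5 * (cd @ c @ rho + rho @ cd @ c)\n    return out\n\ndef propagate(rho0, H, c_ops, T, dt=0.001):\n    rho = rho0.copy()\n    for _ in range(int(round(T \/ dt))):   # fixed-step RK4\n        k1 = rhs(rho, H, c_ops)\n        k2 = rhs(rho + 0.5 * dt * k1, H, c_ops)\n        k3 = rhs(rho + 0.5 * dt * k2, H, c_ops)\n        k4 = rhs(rho + dt * k3, H, c_ops)\n        rho = rho + (dt \/ 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)\n    return rho\n\nrho0 = np.zeros((8, 8), dtype=complex)\nrho0[0, 0] = 1.0                      # exciton initially on site 1\nrho_T = propagate(rho0, H, [trap], T=5.0)\nprint('P8(5 ps) =', rho_T[7, 7].real)\n\\end{verbatim}\nWith all decoherence rates set to zero this corresponds to the purely coherent case of the upper panel of Fig.~\\ref{fig1}; the numbers produced by the sketch depend on the assumed conventions and are not meant to be quantitative.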
The initial state is $p|1\\rangle\\langle\n1|+(1-p)|6\\rangle\\langle 6|$ for solid line, and it is $p\n|1\\rangle\\langle 1|+(1-p)|2\\rangle\\langle 2|$ for dotted line.}\n\\label{fig3}\n\\end{figure}\n\nIn Fig. \\ref{fig2} we present the transfer efficiency as a function\nof $\\theta$ and $\\phi$, which characterize the pure initial states\nof the FMO complex through\n$|\\psi(t=0)\\rangle=\\cos\\theta|1\\rangle+\\sin\\theta\\exp(i\n\\phi)|6\\rangle$ (upper panel) and\n$|\\psi(t=0)\\rangle=\\cos\\theta|1\\rangle+\\sin\\theta\\exp(i\n\\phi)|2\\rangle$ (lower panel). We find the FMO complex efficient to\ntransfer excitation initially at site 1 is not good at EET with site\n6 occupied. Namely, coherent superposition of sites 1 and 6\ndecreases the exciton transfer efficiency. For exciton initially in\na superposition of 1 and 6, the transfer efficiency is more\nsensitive to the population ratio (characterized by $\\theta$) but\nnot to the relative phase $\\phi$. For exciton initially excited on\nsite 1 and 2, both the population ratio and the relative phase\naffect the energy transfer, the transfer efficiency reaches its\nmaximum with $\\theta=\\frac 3 4\\pi$ and $\\phi=\\frac 1 2\\pi$ and it\narrives at its minimum with $\\theta=\\frac {\\pi}{4}$ and $\\phi=\\frac\n1 2\\pi$. This observation suggests that a properly superposition of\nsite 1 and site 2 can enhance the EET, although the increase of\nefficiency is small.\n\nFig. \\ref{fig3} shows the dependence of the transfer efficiency on\nthe mixing rate $p$, where the initial state is a classical mixing\nof site 1 and 6 (or 2). Obviously, the population mixing of site 1\nand site 6 does not favors the transfer efficiency. While the\nclassical mixing of site 1 and site 2 increase the transfer\nefficiency a little bit (not clear from the figure).\n\nIt is recently suggested that decoherence is an essential feature\nof EET in the FMO complex, where decoherence arises from\ninteractions with the protein cage, the reaction center and the\nsurrounding environment. Further examination shown that dephasing\ncan increase the efficiency of transport in the FMO, whereas other\ntypes of decoherence (for example dissipation) block the EET. These\neffects of decoherence can be understood as the\nfluctuation-induced-broadening of energy levels, which bridges the\non-site energy gap and changes equivalently the coupling between\nthe sites.\n\nWe now examine if the decoherence can further increase the EET\nefficiency with the complex inter-site couplings. The decoherence\ncan be described by an added Liouvillian term\n\\begin{equation}\n\\mathcal{L}(\\rho)=\\sum_{j=1}^7 \\gamma_j \\left[P_j \\rho(t)\nP_j-\\frac{1}{2}P_j\\rho(t)-\\frac{1}{2}\\rho(t)P_j\\right],\n\\label{masterE1}\n\\end{equation}\nto Eq. (\\ref{masterE}) with $P_j=|j\\rangle\\langle j|$,\n$j=1,2,3,...,7$. The decoherence may come from site-environment\ncouplings, where the environment models the thermal phonon bath and\nradiation fields.\n\\begin{figure}\n\\includegraphics*[width=0.8\\columnwidth,\nheight=0.5\\columnwidth]{fmo_cf5.eps} \\caption{EET efficiency as a\nfunction of $\\gamma_2$. The other decoherence rates $\\gamma_j$ are\nzero, the phases in the inter-site couplings are optimized for EET\nefficiency with $\\gamma_2=0.$} \\label{fig4}\n\\end{figure}\nOptimizing the decoherence rates $\\gamma_j, (j=1,2,...,7)$ with\n$\\Gamma=44$ for the EET efficiency, we find $\\gamma_2=2.4$ and\n$\\gamma_j=0, (j\\neq 2)$ yields a transfer efficiency $P_8=0.9344,$\nsee Fig. \\ref{fig4}. 
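For completeness, the dephasing term of Eq. (\\ref{masterE1}) simply adds to the right-hand side of Eq. (\\ref{masterE}); a schematic continuation of the sketch above (with $\\gamma_2=2.4$ and all other $\\gamma_j=0$, as in the optimized case) reads:
\\begin{verbatim}
gamma = np.zeros(7); gamma[1] = 2.4      # gamma_2 = 2.4 ps^-1, all other gamma_j = 0

def dephasing(rho):
    out = np.zeros_like(rho)
    for j in range(7):
        Pj = np.zeros((N, N)); Pj[j, j] = 1.0              # P_j = |j><j|
        out += gamma[j] * (Pj @ rho @ Pj - 0.5 * (Pj @ rho + rho @ Pj))
    return out
# dephasing(rho) is then added to the return value of rhs() defined above.
\\end{verbatim}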
Non-zero $\\gamma_j, (j=1,3,4,5,6,7)$ can\n increase the EET efficiency, but the amplitude of the\nincrease is not evident, for example, $\\gamma_1=2.4, \\gamma_3=2.0,\n\\gamma_6=1.8,$ the other $\\gamma_j=0$ and $\\Gamma=44$ yields\n$P_8=0.9374$.\n\n\\section{effect of fluctuations}\nThe site energies are among the most debated properties of the FMO\ncomplex. These values are needed for exciton calculations of the\nlinear spectra and simulations of dynamics. They depend on local\ninteractions between the {\\it BChl a} molecule and the protein\nenvelope and include electrostatic interactions and ligation. Since\nthe interactions are difficult to identify and even harder to\nquantify, the site energies are usually treated as independent\nparameters obtained from a simultaneous fit to several optical\nspectra. There are many approaches to obtain the site energies by\nfitting the spectra. One of the main differences between those\napproaches is whether they restrict the interactions to {\\it BChl a}\nmolecules within a subunits or whether they include interactions in\nthe whole trimer. Since the site energies and inter-site couplings\nmay be different and fluctuate due to the couplings to its\nsurroundings, it is interesting to ask how the fluctuations in the\nsite energies and couplings affect the transfer efficiency. In the\nfollowing, we will study this issue and show that the exciton\ntransfer in the FMO complex remains essentially unaffected in the\npresence of random variations in site\n energies and inter-site couplings. This strongly suggests that the\nexperimental results recorded for samples at low temperature would\nalso be observable at higher temperatures.\n\nTo be specific, we consider two types of fluctuations in the site\nenergies and inter-site couplings. In the first one, it has zero\nmean, while the another type of fluctuations has nonzero positive\nmean. For the fluctuations with zero mean, the Hamiltonian in Eq.\n(\\ref{ha}) takes the following changes, $H_{jj}\\rightarrow\nH_{jj}(1+r_1\\cdot(\\mbox {rand}(1)-0.5))$ and $H_{i\\neq j}\\rightarrow\nH_{i\\neq j}(e^{i\\phi_{ij}}+r_2\\cdot(\\mbox {rand}(1)-0.5)).$ For the\nfluctuations with nonzero positive mean, the Hamiltonian takes the\nfollowing changes, $H_{jj}\\rightarrow H_{jj}(1+r_1\\cdot \\mbox\n{rand}(1))$ and $H_{i\\neq j}\\rightarrow H_{i\\neq\nj}(e^{i\\phi_{ij}}+r_2\\cdot\\mbox {rand}(1)),$ where $\\mbox{rand(1)}$\ndenotes a random number between 0 and 1. So a 100\\% static disorder\nmay appear in the on-site energies and inter-site couplings.\n\n\\begin{figure}\n\\includegraphics*[width=0.8\\columnwidth,\nheight=0.5\\columnwidth]{fmo_cf7.eps} \\caption{Effect of fluctuations\nin the Hamiltonian on the transfer efficiency. The plot is for the\nfluctuations with zero mean. The resulting $P_8$ is an averaged\nresult over 20 independent runs. The fluctuations with non-zero mean\nhas a similar effect on the EET, which is not shown here.}\n\\label{fig5}\n\\end{figure}\n\nWith these arrangements, we numerically calculated the transfer\nefficiency and present the results in Fig. \\ref{fig5}. Each value of\ntransfer efficiency is a result averaged over 20 fluctuations. From\nthe numerical simulations, we notice that the results for\nfluctuations with non-zero mean are almost the same as those for\nzero-mean fluctuations, we hence present here only the results for\nzero-mean fluctuations. Two observations are obvious. 
(1) With\nzero-mean fluctuations in the site energies and couplings increase,\nthe transfer efficiency fluctuates greatly. (2) The efficiency\ndecreases with both $r_2$ and $r_1$. Moreover, the EET efficiency is\nsensitive to the fluctuations in the site energy more than that in\nthe inter-site couplings.\n\n\nThese feature can be interpreted as follows. Since the network\nwith Hamiltonian Eq.(\\ref{ha}) is optimal for EET efficiency, any\nchanges in the site-energy and inter-site couplings would diminish\nthe excitation energy transfer, leading to the decrease in the EET\nefficiency. This feature is different from the case that the\ndecoherence dominates the mechanism of EET. As Ref. \\cite{yi12}\nshown, the efficiency increases with $r_1$ but decreases with $r_2$.\nThe physics behind this difference is as follows. For the case that\nthe decoherence dominates the mechanism of EET, the energy gap\nbetween neighboring sites blocks the energy transfer, whereas the\ninter-site couplings (representing the overlap of different sites)\ntogether with the decoherence favor the transport. Whereas for the\npresent case the phases in the inter-site couplings need to {\\it\nmatch} the energy spacing between sites. As a result, any mismatches\nwould results in decrease in the EET efficiency.\n\n\\begin{figure}\n\\includegraphics*[width=0.8\\columnwidth,\nheight=0.5\\columnwidth]{fmo_cf10.eps}\n\\includegraphics*[width=0.8\\columnwidth,\nheight=0.5\\columnwidth]{fmo_cf9.eps} \\caption{Effect of fluctuations\nin the Hamiltonian on the transfer efficiency. The fluctuations are\nGaussian with mean $\\mu$ and the standard deviation $\\sigma$.\nUpper: The fluctuations happen only in the on-site energies; Bottom:\nThe fluctuations in both the on-site energies and couplings. The\nresulting fidelity is an averaged result over 15 independent runs.}\n\\label{fig6}\n\\end{figure}\n\nIt was suggested\\cite{adolphs06} that the fluctuations in the\non-site energies are Gaussian. To examine the effect of Gaussian\nfluctuations, we introduce a Gaussian function,\n$y(x|\\sigma,\\mu)=\\frac{1}{\\sigma\\sqrt{2\\pi}}e^{-\\frac{(x-\\mu)^2}{2\\sigma^2}},$\nwhere $\\mu$ is the mean, while $\\sigma$ denotes the standard\ndeviation. We assume that the fluctuations enter only into the\non-site energies, namely, $H_{jj}$ is replaced by\n$H_{jj}(1+y(x|\\sigma,\\mu))$, $j=1,2,...,7$. With these notations, we\nplot the effect of fluctuations on the transfer efficiency in Fig.\n\\ref{fig6}(upper panel). We find that the dependence of the\nefficiency on the variation is subtle: When the mean is zero, the\nefficiency reach its maximum at $\\sigma=0.5,$ while when the mean is\nlarge, small variation favors the EET efficiency. Moreover, we find\nthat the efficiency decreases as the mean increases. This again can\nbe understood as mismatch between the site energy and the\ninter-site couplings caused by the fluctuations. When these Gaussian\nfluctuations occur in both the on-site energies and the couplings,\nwe find from Fig.\\ref{fig6}(bottom panel) that the transfer\nefficiency decreases as the mean and variation increases.\n\nAbsorption spectroscopy and 2D echo-spectroscopy are used to study\ntime-resolved processes such as energy transfer or vibrational decay\nas well as to measure intermolecular couplings strengths. They\nprovide a tool that gives direct insight into the energy states of a\nquantum system. In turn, the eigenenergies are essential to\ncalculate the absorption spectroscopy and 2D echo-spectroscopy. 
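Since the Hamiltonian (\\ref{ha}) is Hermitian for any choice of the phases $\\phi_{ij}$, the exciton eigenenergies discussed below can be obtained by a standard diagonalization of its $7\\times 7$ single-exciton block, e.g.,
\\begin{verbatim}
# Exciton eigenenergies of the 7x7 single-exciton block (Hermitian for any phases).
E = np.sort(np.linalg.eigvalsh(H[:7, :7]))[::-1]    # descending order
\\end{verbatim}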
It\nis easy to find that the eigenenergies and energy gaps for the FMO\nwith complex inter-site couplings, given by (96.2851, 62.2958,\n61.4307, 48.2046, 22.6379, 18.9438, -4.1374), are almost the same\nas in the system with real inter-site couplings given by (96.8523,\n62.6424, 57.9484, 50.6357, 22.8218, 19.2387, -4.4789). The maximal\ndifference is at the 7th eigenenergy, it is about 8.25\\% to its\neigenenergy. This suggests that the absorption spectroscopy and 2D\necho-spectroscopy may not change much due to the introduced phases\nin the inter-site couplings.\n\n\\section{Conclusion}\nExcitation energy transfer (EET) has been an interesting subject\nfor decades not only for its phenomenal efficiency but also for its\nfundamental role in Nature. Recent studies show that the quantum\ncoherence itself cannot explain the very high excitation energy\ntransfer efficiency in the Fenna-Matthews-Olson Complex. In this\npaper, we show that this is not the case when the inter-site\ncouplings are complex. Based on the Frenkel exciton Hamiltonian and\nthe phenomenologically introduced decoherence, we have optimized the\nphases in the inter-site couplings for maximal energy transfer\nefficiency, which can reach about 89.72\\% at time 5ps without\ndecoherence and 93.44\\% with only dephasing at site 2. By\nconsidering different mixing of exciton on site 1 and site 6 (site\n2) as the initial states, we have examined the effect of initial\nstates on the energy transfer efficiency. The results suggest that\nany mixing of site 1 and site 6 decreases the energy transfer\nefficiency, whereas a coherent superposition (or classically mixing)\nof site 1 and site 2 does not change much the EET efficiency. The\nfluctuations in the site energies and inter-site couplings diminish\nthe EET due to the mismatch caused by the fluctuations.\n\nFinishing this work, we have noticed a closely related\npreprint\\cite{zhu12}. While they focus on bi-pathway EET in the FMO\n(i.e., neglecting the inter-site couplings weaker than 15 $cm^{-1}$)\nand a general representation for complex network, we are mainly\nconcerned about the phases added to the inter-site couplings,\ninitial excitations and fluctuations in the site energies and\ninter-site couplings. In this respect, both works are complementary\nto each other.\n\n\\ \\ \\\\\nStimulating discussions with Jiangbin Gong are acknowledged. This\nwork is supported by the NSF of China under Grants Nos 61078011 and\n11175032 as well as the National Research Foundation and Ministry of\nEducation, Singapore under academic research grant No. WBS:\nR-710-000-008-271.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nDuring the past few decades, deep learning has achieved remarkable success in a wide range of applications, including computer vision \\citep{krizhevsky2017imagenet,he2016deep,chen2020simple,he2020momentum,chen2020big}, neural language processing \\citep{kim2014convolutional,vaswani2017attention,lample2019cross}, autonomous vehicles \\citep{grigorescu2020survey,al2017deep,fujiyoshi2019deep}, and protein structure prediction \\citep{senior2020improved,jumper2021highly}. An important reason for this success is due to powerful computing resources, which allow deep neural networks to directly handle giant datasets and bypass complicated manual feature extraction, which causes potential loss of data information \\citep{szegedy2016rethinking,szegedy2017inception}. 
For example, the powerful large language model GPT-3 contains $175$ billion parameters and is trained on $45$ terabytes of text data with thousands of graphics processing units (GPUs) \\citep{brown2020language}. However, massive data are generated every day \\citep{sagiroglu2013big}, which pose a significant threat to training efficiency and data storage, and deep learning might reach a bottleneck due to the mismatch between the volume of data and computing resources \\citep{strubell2019energy}. Recently, many methods have been proposed to improve training efficiency in deep learning from several perspectives as follows:\n\\begin{itemize}\n \\item Quantization: This approach sacrifices data at the byte level during the training process for acceleration \\citep{gupta2015deep,micikevicius2018mixed,faraone2018syq,gong2019mixed}.\n \\item Model pruning: This approach removes trainable parameters that have little influence on the final performance \\citep{sun2017meprop,lym2019prunetrain,anwar2017structured,luo2017thinet}.\n \\item Optimization: This approach designs the training algorithms for fast convergence \\citep{zhang2016efficient} or less memory cost \\citep{bernstein2018signsgd,karimireddy2019error}.\n \\item Dataset reduction: this approach generates few representative data to constitute a new training set. Depending on whether the generated data are natural or synthetic, dataset reduction can be classified into coreset selection \\citep{mirzasoleiman2020coresets,Coleman2020Selection} and dataset distillation \\citep{wang2018dataset}.\n\\end{itemize}\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.9\\columnwidth]{figures\/dd.pdf}\n \\caption{An illustration of dataset distillation. The models trained on the large original dataset and small synthetic dataset demonstrate comparable performance on the test set.}\n \\label{fig:dd}\n\\end{figure*}\n\nThe above narrative shows that efficient deep learning is an extensive task and is beyond the scope of this paper; readers can refer to \\citep{sze2017efficient,menghani2021efficient} for more details. In this survey, we focus on dataset distillation to improve training efficiency by synthesizing training data. To better appreciate dataset distillation, we briefly introduce other cognate methods in dataset reduction. For a support vector machine (SVM), its hyperplane is solely determined by ``support vectors'', and removing all other points in the training dataset does not influence on the convergence result \\citep{cortes1995support}. Therefore, the selection of ``support vectors'' is a favorable method to reduce the training burden for SVMs \\citep{nalepa2019selecting}. When the scenario expands to deep learning, the problem of selecting ``support vectors'' generalizes to the well-known {\\it coreset selection}; this is the algorithm \nselects a few prototype examples from the original training dataset as the coreset, and then the model is solely trained on the small coreset to save training costs while avoiding large performance drops.\nHowever, elements in the coreset are unmodified and constrained by the original data, which considerably restricts the coreset's expressiveness, especially when the coreset budget is limited. 
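As a simple point of reference (our illustration, not a method from the cited works), the most basic coreset is a class-balanced random subset; the more sophisticated selection criteria of the coreset literature replace the random choice below with an informativeness score.
\\begin{verbatim}
import numpy as np

def random_coreset(labels, per_class, seed=0):
    """Class-balanced random coreset: the simplest selection baseline."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    picked = [rng.choice(np.where(labels == c)[0], per_class, replace=False)
              for c in np.unique(labels)]
    return np.concatenate(picked)   # indices of the selected (unmodified) examples
\\end{verbatim}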
Recently, a novel approach, {\\it dataset distillation} (DD) \\footnote{Dataset distillation is also referred to as dataset condensation in some studies.}, has attracted growing attention from the deep learning community and has been leveraged in many practical applications \\citep{masarczyk2020reducing,liu2022meta,chen2022bidirectional, chen2023bidirectional}. Different from coreset selection, dataset distillation removes the restriction of uneditable elements and carefully modifies a small number of examples to preserve more information, as shown in Figure \\ref{fig:dd_sample} for synthetic examples. By distilling the knowledge of the large original dataset into a small synthetic set, models trained on the distilled dataset can acquire a comparable generalization performance. A general illustration for dataset distillation is presented in Figure \\ref{fig:dd}.\n\n\n\nDue to the property of extremely high dimensions in the deep learning regime, the data information is hardly disentangled to specific concepts, and thus distilling numerous high-dimensional data into a few points is not a trivial task. Based on the objectives applied to mimic target data, dataset distillation methods can be grouped into meta-learning frameworks and data matching frameworks, and these techniques in each framework can be further classified in a much more detailed manner. In the meta-learning framework, the distilled data are considered hyperparameters and optimized in a nested loop fashion according to the distilled-data-trained model's risk {\\it w.r.t.} the target data \\citep{maclaurin2015gradient,wang2018dataset}. The data matching framework updates distilled data by imitating the influence of target data on model training from parameter or feature space \\citep{zhao2021dataset,cazenavette2022distillation,zhao2023distribution}. Figure \\ref{fig:tree diagram} presents these different categories of DD algorithms in a tree diagram.\n\n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.9\\columnwidth]{figures\/dd_survey_framework.pdf}\n \\caption{The schematic structure of dataset distillation and the relationship between the adjacent sections. The body of this survey mainly contains the fundamentals of dataset distillation, taxonomy of distillation schemes, types of factorized DD, distillation algorithms, performance comparison, applications, challenges, and future directions. Note that `Section' is abbreviated as `Sec.' 
in this figure.}\n \\label{fig:structure}\n\\end{figure}\n\n\\begin{figure}[t]\n \\centering\n \\scalebox{0.7}{\n \\begin{forest}\n [\\textbf{Dataset Distillation}\n [\\textbf{Meta-Learning Framework}\n [\\textbf{Backpropagation Through Time}\n [DD \\citep{wang2018dataset} \\\\ LD \\citep{bohdal2020flexible} \\\\ SLDD \\citep{sucholutsky2021soft} \\\\ \\blue{GTN} \\citep{such2020generative} \\\\ \\blue{RTP} \\citep{deng2022remember}]\n ]\n [\\textbf{Kernel Ridge Regression}\n [KIP \\citep{nguyen2020dataset,nguyen2021dataset} \\\\ FRePo \\citep{zhou2022dataset} \\\\ RFAD \\citep{loo2022efficient}]\n ]\n ]\n [\\textbf{Data Matching Framework}\n [\\textbf{Gradient Match}\n [DC \\citep{zhao2021dataset} \\\\ DCC \\citep{lee2022dataset} \\\\ \\blue{IDC} \\citep{kim2022dataset}]\n ]\n [\\textbf{Trajectory Match}\n [MTT \\citep{cazenavette2022distillation} \\\\ FTD \\citep{du2022minimizing} \\\\ TESLA \\citep{cui2022scaling} \\\\ \\blue{Haba} \\citep{liu2022dataset}]\n ]\n [\\textbf{Distribution Match}\n [DM \\citep{zhao2023distribution} \\\\ CAFE \\citep{wang2022cafe} \\\\ \\blue{IT-GAN} \\citep{zhao2022synthesizing} \\\\ \\blue{KFS} \\citep{lee2022kfs}]\n ]\n ]\n ]\n\\end{forest}}\n \\caption{Tree diagram for different categories of dataset distillation algorithms. The factorized DD methods are marked in blue.}\n \\label{fig:tree diagram}\n\\end{figure}\n\nApart from directly considering the synthetic examples as optimization objectives, in some studies, a proxy model is designed, which consists of latent codes and decoders to generate highly informative examples and to resort to learning the latent codes and decoders. For example, \\citet{such2020generative} employed a network to generate highly informative data from noise and optimized the network with the meta-learning framework. \\citet{zhao2022synthesizing} optimized the vectors and put them into the generator of a well-trained generative adversarial network (GAN) to produce the synthetic examples. Moreover, \\citet{kim2022dataset,deng2022remember,liu2022dataset,lee2022kfs} learned a couple of latent codes and decoders, and then synthetic data were generated according to the different combinations of latent codes and decodes. With this factorization of synthetic data, the compression ratio of DD can be further decreased, and the performance can also be improved due to the intraclass information extracted by latent codes.\n\n\n\nIn this paper, we present a comprehensive survey on dataset distillation, and the main objectives of this survey are as follows: (1) present a clear and systematic overview of dataset distillation; (2) review the recent progress and state-of-the-art algorithms and discuss various applications in dataset distillation; (3) give a comprehensive performance comparison {\\it w.r.t.} different dataset distillation algorithms; and (4) provide a detailed discussion on the limitation and promising directions of dataset distillation to benefit future studies. Recently, \\citet{larasati2022review} published a short review on dataset distillation. However, these authors only provided a general overview of DD and focus on the aspect of applications. Different from \\citet{larasati2022review}, our paper gives a systematic, comprehensive survey on dataset distillation from a wide aspect of distillation frameworks, algorithms, applications, limitations, and promising directions.\n\nThe rest of this paper is organized as follows. We first provide soem background with respect to DD in Section 2. 
DD methods under the meta-learning framework are described and comprehensively analyzed in Section 3, and a description of the data matching framework follows in Section 4. Section 5 discusses different types of factorized dataset distillation. Next, we report the performance of various distillation algorithms in Section 6. Applications with dataset distillation are shown in Section 7, and finally we discuss challenges and future directions in Section 8.\n\n\n\n\n\n\n\\section{Background}\nBefore introducing dataset distillation, we first define some notations used in this paper.\nFor a dataset $\\mathcal{T}=\\{(\\boldsymbol{x}_i, y_i)\\}_{i=1}^{m}$, $\\boldsymbol{x}_i\\in \\mathbb{R}^d$, $d$ is the dimension of the input data, and $y_i$ is the label. We assume that $(\\boldsymbol{x}_i,y_i)$ for $1\\leq i \\leq m$ are independent and identically distributed (i.i.d.) random variables drawn from the data generating distribution $\\mathcal{D}$. We employ $f_{\\boldsymbol{\\theta}}$ to denote the neural network parameterized by $\\boldsymbol{\\theta}$, and $f_{\\boldsymbol{\\theta}}(\\boldsymbol{x})$ is the prediction or output of $f_{\\boldsymbol{\\theta}}$ at $\\boldsymbol{x}$. Moreover, we also define the loss between the prediction and ground truth as $\\ell(f_{\\boldsymbol{\\theta}}(\\boldsymbol{x}),y)$, and the expected risk in terms of $\\boldsymbol{\\theta}$ is defined as\n\\begin{equation}\n \\mathcal{R}_{\\mathcal{D}}(\\boldsymbol{\\theta}) = \\mathbb{E}_{(\\boldsymbol{x},y)\\sim \\mathcal{D}}\\left[\\ell\\left(f_{\\boldsymbol{\\theta}}\\left(\\boldsymbol{x}\\right),y\\right) \\right].\n\\end{equation}\nSince the data generating distribution $\\mathcal{D}$ is unknown, evaluating the expected risk $\\mathcal{R}_{\\mathcal{D}}$ is impractical. Therefore, a practical way to estimate the expected risk is by the empirical risk $\\mathcal{R}_\\mathcal{T}$, which is defined as\n\\begin{equation}\n \\mathcal{R}_{\\mathcal{T}}(\\boldsymbol{\\theta}) = \\mathbb{E}_{(\\boldsymbol{x},y)\\sim \\mathcal{T}}\\left[\\ell\\left(f_{\\boldsymbol{\\theta}}\\left(\\boldsymbol{x}\\right),y\\right) \\right] = \\frac{1}{m}\\sum_{i=1}^{m} \\ell \\left(f_{\\boldsymbol{\\theta}}\\left(\\boldsymbol{x}_i\\right),y_i\\right).\n\\end{equation}\n\nFor the training algorithm $\\texttt{alg}$, $\\texttt{alg}(\\mathcal{T}, \\boldsymbol{\\theta}^{(0)})$ denote the learned parameters returned by empirical risk minimization (ERM) on the dataset $\\mathcal{T}$ with the initialized parameter $\\boldsymbol{\\theta}^{(0)}$:\n\\begin{equation}\n \\texttt{alg}(\\mathcal{T}, \\boldsymbol{\\theta}^{(0)}) = \\arg\\min_{\\boldsymbol{\\theta}} \\mathcal{R}_{\\mathcal{T}}(\\boldsymbol{\\theta}).\n\\end{equation}\nIn the deep learning paradigm, gradient descent is the dominant training algorithm to train a neural network by minimizing the empirical risk step by step. Specifically, the network's parameters are initialized with $\\boldsymbol{\\theta}^{(0)}$, and then the parameter is iteratively updated according to the gradient of empirical risk:\n\\begin{equation}\n\\label{eq:gradient descent}\n \\boldsymbol{\\theta}^{(k+1)} = \\boldsymbol{\\theta}^{(k)} - \\eta \\boldsymbol{g}_{\\mathcal{T}}^{(k)},\n\\end{equation}\nwhere $\\eta$ is the learning rate and $ \\boldsymbol{g}_{\\mathcal{T}}^{(k)}=\\nabla_{\\boldsymbol{\\theta}^{(k)}} \\mathcal{R}_{\\mathcal{T}}(\\boldsymbol{\\theta}^{(k)})$ is the gradient.\nWe omit $\\boldsymbol{\\theta}^{(0)}$ and use $\\texttt{alg}(\\mathcal{T})$ if there is no ambiguity for the sake of simplicity. 
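In code, the training algorithm $\\texttt{alg}(\\mathcal{T}, \\boldsymbol{\\theta}^{(0)})$ is nothing more than ordinary (stochastic) gradient descent on the empirical risk; a minimal PyTorch-style sketch of Eq. (\\ref{eq:gradient descent}) is given below, where mini-batches stand in for the full-batch gradient as is standard in practice.
\\begin{verbatim}
import torch

def alg(model, dataset, lr=0.01, steps=1000, batch_size=256):
    """Sketch of alg(T, theta^(0)): gradient descent on the empirical risk R_T."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=True)
    it = iter(loader)
    for _ in range(steps):
        try:
            x, y = next(it)
        except StopIteration:
            it = iter(loader); x, y = next(it)
        opt.zero_grad()
        loss_fn(model(x), y).backward()   # gradient g^(k) of R_T at theta^(k)
        opt.step()                        # theta^(k+1) = theta^(k) - eta * g^(k)
    return model                          # f_{alg(T)}
\\end{verbatim}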
Then, the model $f$ trained on the dataset $\\mathcal{T}$ can be denoted as $f_{\\texttt{alg}(\\mathcal{T})}$.\n\nBecause deep learning models are commonly extremely overparameterized, {\\it i.e.,} the number of model parameters overwhelms the number of training examples, the empirical risk readily reaches zero. In this case, the generalization error, which measures the difference between the expected risk and the empirical risk, can be solely equal to the expected risk, which is reflected by test loss or test error in practical pipelines.\n\n\n\n\n\\subsection{Formalizing Dataset Distillation}\n\nGiven a target dataset (source training dataset) $\\mathcal{T}=\\{(\\boldsymbol{x}_i,y_i)\\}_{i=1}^m$, the objective of dataset distillation is to extract the knowledge of $\\mathcal{T}$ into a small synthetic dataset $\\mathcal{S}=\\{(\\boldsymbol{s}_j,y_j)\\}_{j=1}^n$, where $n\\ll m$, and the model trained on the small distilled dataset $\\mathcal{S}$ can achieve a comparable generalization performance to the large original dataset $\\mathcal{T}$:\n\\begin{equation}\n \\mathbb{E}_{(\\boldsymbol{x},y)\\sim \\mathcal{D} \\atop \\boldsymbol{\\theta}^{(0)}\\sim \\mathbf{P}}\\left[\\ell\\left(f_{\\texttt{alg}(\\mathcal{T})}\\left(\\boldsymbol{x}\\right),y\\right) \\right] \\simeq \\mathbb{E}_{(\\boldsymbol{x},y)\\sim \\mathcal{D} \\atop \\boldsymbol{\\theta}^{(0)}\\sim \\mathbf{P} }\\left[\\ell\\left(f_{\\texttt{alg}(\\mathcal{S})}\\left(\\boldsymbol{x}\\right),y\\right) \\right].\n\\end{equation}\n\n\\iffalse\nIntuitively, if the parameters learned on the distilled dataset $\\mathcal{S}$ can also have a minimal loss on the target dataset $\\mathcal{T}$, the models trained on $\\mathcal{S}$ and $\\mathcal{T}$ are expected to possess a similar generalization performance. Therefore, \\citet{wang2018dataset} convert the objective of DD to minimize the term $\\mathcal{R}_{\\mathcal{T}}\\left(\\texttt{alg}\\left(\\mathcal{S}\\right)\\right)$, and the dataset distillation can be formulated to a bilevel optimization problem as below:\n\\begin{equation}\n\\mathcal{S}^\\ast = \\arg\\min_{\\mathcal{S}}\\mathcal{R}_{\\mathcal{T}}\\left({\\texttt{alg}(\\mathcal{S})}\\right) \\quad \\text{(outer loop)}\n\\end{equation}\nsubject to\n\\begin{equation}\n \\texttt{alg}(\\mathcal{S})=\\arg\\min _{\\boldsymbol{\\theta}} \\mathcal{R}_\\mathcal{S}\\left({\\boldsymbol{\\theta}}\\right) \\quad \\text{(inner loop)}\n\\end{equation}\nThe inner loop optimizes the model parameters based on the synthetic dataset and is often realized by gradient descent for neural networks or regression for kernel method. During the outer loop iteration, the synthetic set is updated by minimizing the model's risk in terms of the target dataset. With the nested loop, the synthetic dataset gradually converges to one of the optima.\n\\fi\n\n\n\nBecause the training algorithm $\\texttt{alg}\\left(\\mathcal{S},\\boldsymbol{\\theta}^{(0)}\\right)$ is determined by the training set $\\mathcal{S}$ and the initialized network parameter $\\boldsymbol{\\theta}^{(0)}$, many dataset distillation algorithms will take expectation on $\\mathcal{S}$ {\\it w.r.t.} $\\boldsymbol{\\theta}^{(0)}$ to improve the robustness of the distilled dataset $\\mathcal{S}$ to different parameter initialization, and the objective function has the form of $\\mathbb{E}_{ \\boldsymbol{\\theta}^{(0)}\\sim \\mathbf{P}}\\left[\\mathcal{L} \\left(\\mathcal{S} \\right ) \\right]$. 
In the following narrative, we omit this expectation {\\it w.r.t.} initialization $\\boldsymbol{\\theta}^{(0)}$ for the sake of simplicity.\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.9\\columnwidth]{figures\/synthetic_image_samples.pdf}\n \\caption{Example synthetic images distilled from CIFAR-10\/100 and Tiny ImageNet by matching the training trajectory \\citep{cazenavette2022distillation}.}\n \\label{fig:dd_sample}\n\\end{figure*}\n\n\\section{Meta-Learning Framework}\nBy assuming $\\mathcal{R}_{\\mathcal{D}}\\left({\\texttt{alg}(\\mathcal{S})}\\right) \\simeq \\mathcal{R}_{\\mathcal{T}}\\left({\\texttt{alg}(\\mathcal{S})}\\right)$, the target dataset $\\mathcal{T}$ is employed as the validation set in terms of the model ${\\texttt{alg}(\\mathcal{S})}$. In consequent, the objective of DD can be converted to minimize $\\mathcal{R}_{\\mathcal{T}}\\left({\\texttt{alg}(\\mathcal{S})}\\right)$ to improve the generalization performance of ${\\texttt{alg}(\\mathcal{S})}$ in terms of $\\mathcal{D}$. {To this end, dataset distillation falls into the meta-learning area: the hyperparameter $\\mathcal{S}$ is updated by the meta (or outer) algorithm, and the base (or inner) algorithm solves the conventional supervised learning problem {\\it w.r.t.} the synthetic dataset $\\mathcal{S}$} \\citep{hospedales2021meta}; and the dataset distillation task can be formulated to a bilevel optimization problem as follows:\n\\begin{equation}\n\\mathcal{S}^\\ast = \\arg\\min_{\\mathcal{S}}\\mathcal{R}_{\\mathcal{T}}\\left({\\texttt{alg}(\\mathcal{S})}\\right) \\quad \\text{(outer loop)}\n\\end{equation}\nsubject to\n\\begin{equation}\n\\label{eq: inner loop}\n \\texttt{alg}(\\mathcal{S})=\\arg\\min _{\\boldsymbol{\\theta}} \\mathcal{R}_\\mathcal{S}\\left({\\boldsymbol{\\theta}}\\right) \\quad \\text{(inner loop)}\n\\end{equation}\nThe inner loop optimizes the model parameters based on the synthetic dataset and is often realized by gradient descent for neural networks or regression for kernel method. During the outer loop iteration, the synthetic set is updated by minimizing the model's risk in terms of the target dataset. With the nested loop, the synthetic dataset gradually converges to one of the optima. \n\nAccording to the model and optimization methods used in the inner lop, the meta-learning framework of DD can be further classified into two sub-categories of backpropagation through time time (BPTT) approach and kernel ridge regression (KRR) approach.\n\n\\subsection{Backpropagation Through Time Approach}\n\nAs shown in the above formulation of bilevel optimization, the objective function of DD can be directly defined as the meta-loss of\n\\begin{equation}\n \\mathcal{L}(\\mathcal{S}) = \\mathcal{R}_{\\mathcal{T}}\\left(\\texttt{alg}\\left(\\mathcal{S}\\right)\\right).\n\\end{equation}\nThen the distilled dataset is updated by $\\mathcal{S} = \\mathcal{S} - \\alpha \\nabla_\\mathcal{S} \\mathcal{L}(\\mathcal{S})$ with the step size $\\alpha$. However, in the inner loop of Eq. \\ref{eq: inner loop}, the neural network is trained by iterative gradient descent (Eq. 
\\ref{eq:gradient descent}), which yields a series of intermediate parameter states of $\\{\\boldsymbol{\\mathcal{\\theta}}^{(0)}, \\boldsymbol{\\mathcal{\\theta}}^{(1)}, \\cdots, \\boldsymbol{\\mathcal{\\theta}}^{(T)} \\}$, backpropagation through time \\citep{werbos1990backpropagation} is required to recursively compute the meta-gradient $\\nabla_\\mathcal{S}\\mathcal{L}(\\mathcal{S})$: \n\\begin{equation}\n \\nabla_\\mathcal{S}\\mathcal{L}(\\mathcal{S}) = \\frac{\\partial \\mathcal{L}}{\\partial \\mathcal{S}} = \\frac{\\partial \\mathcal{L}}{\\partial \\boldsymbol{\\theta}^{(T)}}\\left( \\sum_{k=0}^{k=T} \\frac{\\partial \\boldsymbol{\\theta}^{(T)}}{\\partial \\boldsymbol{\\theta}^{(k)}} \\cdot \\frac{\\partial \\boldsymbol{\\theta}^{(k)}}{\\partial \\mathcal{S}} \\right), \\qquad \\frac{\\partial \\boldsymbol{\\theta}^{(T)}}{\\partial \\boldsymbol{\\theta}^{(k)}} = \\prod_{i=k+1}^{T} \\frac{\\partial \\boldsymbol{\\theta}^{(i)}}{\\partial \\boldsymbol{\\theta}^{(i-1)}}.\n\\end{equation}\n\nWe present the meta-gradient backpropagation of BPTT in Figure \\ref{figure:metaloss}. Due to the requirement of unrolling the recursive computation graph, BPTT is both computationally expensive and memory demanding, which also severely affects the final distillation performance. \n\n\nTo alleviate the inefficiency in unrolling the long parameter path of $\\{\\boldsymbol{\\mathcal{\\theta}}^{(0)}, \\cdots, \\boldsymbol{\\mathcal{\\theta}}^{(T)} \\}$, \\citet{wang2018dataset} adopted a single-step optimization {\\it w.r.t.} the model parameter from $\\boldsymbol{\\theta}^{(0)}$ to $\\boldsymbol{\\theta}^{(1)}$, and the meta-loss was computed based on $\\boldsymbol{\\theta}^{(1)}$ and the target dataset $\\mathcal{T}$: {\n\\begin{equation}\n \\boldsymbol{\\theta}^{(1)} = \\boldsymbol{\\theta}^{(0)} - \\eta \\nabla_{ \\boldsymbol{\\theta}^{(0)}}\\mathcal{R}_{\\mathcal{S}}(\\boldsymbol{\\theta}^{(0)}) \\quad \\text{and} \\quad \\mathcal{L} = \\mathcal{R}_{\\mathcal{T}}(\\boldsymbol{\\theta}^{(1)}).\n\\end{equation}\nTherefore, the distilled data $\\mathcal{S}$ and learning rate $\\eta$ can be efficiently updated via the short-range BPTT as follows:\n\\begin{equation}\n \\mathcal{S} = \\mathcal{S} - \\alpha_{\\boldsymbol{s}_i } \\nabla \\mathcal{L} \\quad \\text{and} \\quad \\eta = \\eta - \\alpha_\\eta \\nabla \\mathcal{L}.\n\\end{equation}}\nUnlike freezing distilled labels, \\citet{sucholutsky2021soft} extended the work of \\citet{wang2018dataset} by learning a soft-label in the synthetic dataset $\\mathcal{S}$; {\\it i.e.}, the label $y$ in the synthetic dataset is also trainable for better information compression, {\\it i.e.}, $y_i = y_i - \\alpha \\nabla_{y_i} \\mathcal{L}$ for $(\\boldsymbol{s}_i, y_i)\\in \\mathcal{S}$. Similarly, \\citet{bohdal2020flexible} also extended the standard example distillation to label distillation by solely optimizing the labels of synthetic datasets. Moreover, these researchers provided improvements on the efficiency of long inner loop optimization via (1) iteratively updating the model parameters $\\boldsymbol{\\theta}$ and the distilled labels $y$, {{\\it i.e.}, one outer step followed by only one inner step for faster convergence}; and (2) fixing the feature extractor of neural networks and solely updating the last linear layer with ridge regression to avoid second-order meta-gradient computation. 
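To make the single-inner-step scheme of \\citet{wang2018dataset} concrete, one outer iteration can be written in a few lines of PyTorch. The sketch below is our illustration only, with a linear classifier standing in for the network; it differentiates through one inner gradient step to update both the distilled examples and the learnable learning rate $\\eta$.
\\begin{verbatim}
import torch
import torch.nn.functional as F

d, C, n = 3072, 10, 100                          # input dim., classes, distilled set size
S_x = torch.randn(n, d, requires_grad=True)      # distilled examples (flattened images)
S_y = torch.arange(n) % C                        # fixed distilled labels
log_eta = torch.tensor(-2.0, requires_grad=True)     # learnable inner learning rate
meta_opt = torch.optim.Adam([S_x, log_eta], lr=0.01)

def outer_step(T_x, T_y):                        # one batch of target data
    theta0 = (0.01 * torch.randn(d, C)).requires_grad_()    # theta^(0) ~ P
    inner_loss = F.cross_entropy(S_x @ theta0, S_y)         # R_S(theta^(0))
    g, = torch.autograd.grad(inner_loss, theta0, create_graph=True)
    theta1 = theta0 - log_eta.exp() * g                     # single inner step
    meta_loss = F.cross_entropy(T_x @ theta1, T_y)          # meta-loss R_T(theta^(1))
    meta_opt.zero_grad(); meta_loss.backward(); meta_opt.step()   # update S and eta
    return meta_loss.item()
\\end{verbatim}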
\nAlthough the BPTT framework has been shown to underperform other algorithms, \\citet{deng2022remember} empirically demonstrated that adding momentum term and longer unrolled trajectory ($200$ steps) in the inner loop optimization can considerably enhance the distillation performance; and the inner loop of model training becomes\n\\begin{equation}\n \\boldsymbol{\\theta}^{(k+1)} = \\boldsymbol{\\theta}^{(k)} - \\eta \\boldsymbol{m}^{(k+1)},\n\\end{equation}\nwhere\n\\begin{equation}\n \\boldsymbol{m}^{(k+1)} = \\beta \\boldsymbol{m}^{(k)} + \\nabla_{\\boldsymbol{\\theta}^{(k)}} \\mathcal{R}_{\\mathcal{T}}(\\boldsymbol{\\theta}^{(k)}) \\quad \\text{s.t.} \\quad \\boldsymbol{m}^{(0)} = \\boldsymbol{0}.\n\\end{equation}\n\n\n\n\\begin{figure}[t]\n\\centering\n\n\\subfigure[]{\n\\begin{minipage}[b]{0.35\\textwidth}\n\\centering\n \t\t\\includegraphics[width=0.9\\columnwidth]{figures\/metaloss.pdf}\n \t\t\\end{minipage}\n\t\t\\label{figure:metaloss} \n \t}\n\\subfigure[]{\n\\begin{minipage}[b]{0.3\\textwidth}\n\\centering\n \t\t\\includegraphics[width=0.9\\columnwidth]{figures\/kernelloss.pdf}\n \t\t\\end{minipage}\n\t\t\\label{figure:kernelloss} \n \t}\n\\subfigure[]{\n\\begin{minipage}[b]{0.26\\textwidth}\n\\centering\n \t\t\\includegraphics[width=0.9\\columnwidth]{figures\/matchloss.pdf}\n \t\t\\end{minipage}\n\t\t\\label{figure:matchloss} \n \t}\n\\caption{ (a) Meta-gradient backpropagation in BPTT \\citep{zhou2022dataset}; (b) Meta-gradient backpropagation in kernel ridge regression; and (c) Meta-gradient backpropagation in gradient matching.}\n\\label{figure:high dimension}\n\\end{figure}\n\n\\subsection{Kernel Ridge Regression Approach}\nAlthough multistep gradient descent can gradually approach the optimal network parameters in terms of the synthetic dataset during the inner loop, this iterative algorithm makes the meta-gradient backpropagation highly inefficient, as shown in BPTT. \nConsidering the existence of closed-form solutions in the kernel regression regime, \\citet{nguyen2020dataset} replaced the neural network in the inner loop with a kernel model, which bypasses the recursive backpropagation of the meta-gradient. For the regression model $f(\\boldsymbol{x})=\\boldsymbol{w}^\\top\\psi(\\boldsymbol{x})$, where $\\psi(\\cdot)$ is a nonlinear mapping and the corresponding kernel is $K(\\boldsymbol{x},\\boldsymbol{x}^\\prime)=\\langle \\psi(\\boldsymbol{x}), \\psi(\\boldsymbol{x}^\\prime)\\rangle$, there exists a closed-form solution for $\\boldsymbol{w}$ when the regression model is trained on $\\mathcal{S}$ with kernel ridge regression (KRR):\n\\begin{equation}\n \\boldsymbol{w} = \\psi(X_{s})^\\top\\left(\\mathbf{K}_{X_s X_s}+\\lambda I\\right)^{-1}y_{s},\n\\end{equation}\nwhere $\\mathbf{K}_{X_s X_s} = [K(\\boldsymbol{s}_i,\\boldsymbol{s}_j)]_{ij}\\in \\mathbb{R}^{n\\times n}$ is called the {\\it kernel matrix} or {\\it Gram matrix} associated with $K$ and the dataset $\\mathcal{S}$, and $\\lambda > 0$ is a fixed regularization parameter \\citep{petersen2008matrix}. Therefore, the mean square error (MSE) of predicting $\\mathcal{T}$ with the model trained on $\\mathcal{S}$ is\n\\begin{equation}\n\\label{eq:mse}\n \\mathcal{L}(\\mathcal{S}) = \\frac{1}{2}\\left\\|y_t-\\mathbf{K}_{X_t X_s}\\left(\\mathbf{K}_{X_s X_s}+\\lambda I\\right)^{-1} y_s\\right\\|^2,\n\\end{equation}\nwhere $\\mathbf{K}_{X_t X_s}=[K(\\boldsymbol{x}_i,\\boldsymbol{s}_j)]_{ij}\\in \\mathbb{R}^{m\\times n}$. Then the distilled dataset is updated via the meta-gradient of the above loss. 
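For illustration, the KRR meta-loss of Eq. (\\ref{eq:mse}) can be implemented in a few lines. The sketch below uses a generic RBF kernel purely as a placeholder (KIP instead uses neural kernels, as discussed next), with one-hot label matrices $y_s$ and $y_t$.
\\begin{verbatim}
import torch

def rbf_kernel(A, B, gamma=1e-3):        # placeholder kernel; KIP uses neural kernels
    d2 = (A * A).sum(1, keepdim=True) - 2 * A @ B.T + (B * B).sum(1)
    return torch.exp(-gamma * d2)

def krr_meta_loss(S_x, S_y, T_x, T_y, lam=1e-6):
    """Eq. (mse): fit KRR on the distilled set S, measure the MSE on the target set T."""
    K_ss = rbf_kernel(S_x, S_x)
    K_ts = rbf_kernel(T_x, S_x)
    pred = K_ts @ torch.linalg.solve(K_ss + lam * torch.eye(S_x.shape[0]), S_y)
    return 0.5 * ((T_y - pred) ** 2).sum()

# With S_x.requires_grad_(True), krr_meta_loss(...).backward() yields the meta-gradient
# w.r.t. the distilled examples directly, without unrolling any inner training loop.
\\end{verbatim}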
Due to the closed-form solution in KRR, $\\boldsymbol{\\theta}$ does not require an iterative update and the backward pass of the gradient thus bypasses the recursive computation graph, as shown in Figure \\ref{figure:kernelloss}. \n\n\nIn the KRR regime, the synthetic dataset $\\mathcal{S}$ can be directly updated by backpropagating the meta-gradient through the kernel function. Although this formulation is solid, this algorithm is designed in the KRR scenario and only employs simple kernels, which causes performance drops when the distilled dataset is transferred to train neural networks. \\citet{jacot2018neural} proposed the neural tangent kernel (NTK) theory that proves the equivalence between training infinite-width neural networks and kernel regression. With this equivalence, \\citet{nguyen2021dataset} employed infinite-width networks as the kernel for dataset distillation, which narrows the gap between the scenarios of KRR and deep learning. \n\nHowever, every entry in the kernel matrix must be calculated separately via the kernel function, and thus computing the kernel matrix $\\mathbf{K}_{X_s X_t}$ has the time complexity of $\\mathcal{O}(|\\mathcal{T}||\\mathcal{S}|)$, which is severely inefficient for large-scale datasets with huge $|\\mathcal{T}|$. To tackle this problem, \\citet{loo2022efficient} replaced the NTK kernel with neural network Gaussian process (NNGP) kernel that only considers the training dynamic of the last-layer classifier for speed up. With this replacement, the random features $\\psi(\\boldsymbol{x})$ and $\\psi(\\boldsymbol{s})$ can be explicitly computed via multiple sampling from the Gaussian process, and thus the kernel matrix computation can be decomposed into random feature calculation and random feature matrix multiplication. Because matrix multiplication requires negligible amounts of time for small distilled datasets, the time complexity of kernel matrix computation degrades to $\\mathcal{O}(|\\mathcal{T}|+|\\mathcal{S}|)$. In addition, these authors demonstrated two issues of MSE loss (Eq. \\ref{eq:mse}) used in \\citep{nguyen2020dataset} and \\citep{nguyen2021dataset}: (1) over-influence on corrected data: the correctly classified examples in $\\mathcal{T}$ can induce larger loss than the incorrectly classified examples; and (2) unclear probabilistic interpretation for classification tasks. To overcome these issues, they propose to apply a cross-entropy (CE) loss with Platt scaling \\citep{platt1999probabilistic} to replace the MSE loss:\n\\begin{equation}\n \\mathcal{L}_{\\tau} = \\text{CE}(y_t, \\hat{y}_t \/ \\tau),\n\\end{equation}\nwhere $\\tau$ is a positive learned temperature scaling parameter, and $\\hat{y}_t = \\mathbf{K}_{X_t X_s}\\left(\\mathbf{K}_{X_s X_s}+\\lambda I\\right)^{-1} y_s$ is still calculated using the KRR formula.\n\nA similar efficient method was also proposed by \\citet{zhou2022dataset}, which also focused on solving the last-layer in neural networks with KRR. Specifically, the neural network $f_{\\boldsymbol{\\theta}} = g_{\\boldsymbol{\\theta}_2} \\circ h_{\\boldsymbol{\\theta}_1}$ can be decomposed with the feature extractor $h$ and the linear classifier $g$. 
Then these coworkers fixed the feature extractor $h$ and the linear classifier $g$ possesses a closed-form solution with KRR; and the the distilled data are accordingly optimized with the MSE loss:\n\\begin{equation}\n \\mathcal{L}(\\mathcal{S}) = \\frac{1}{2}\\left\\|y_t-\\mathbf{K}_{X_t X_s}^{\\boldsymbol{\\theta}_1^{(k)}}\\left(\\mathbf{K}_{X_s X_s}^{\\boldsymbol{\\theta}_1^{(k)}}+\\lambda I\\right)^{-1} y_s\\right\\|^2\n\\end{equation}\nand\n\\begin{equation}\n \\boldsymbol{\\theta}_1^{(k)} = \\boldsymbol{\\theta}_1^{(k-1)} - \\eta \\nabla_{ \\boldsymbol{\\theta}_1^{(k-1)}}\\mathcal{R}_{\\mathcal{S}}(\\boldsymbol{\\theta}_1^{(k-1)}),\n\\end{equation}\nwhere the kernel matrices $\\mathbf{K}_{X_t X_s}^{\\boldsymbol{\\theta}_1}$ and $\\mathbf{K}_{X_s X_s}^{\\boldsymbol{\\theta}_1}$ are induced by $h_{\\boldsymbol{\\theta}_1}(\\cdot)$. Notably, although the feature extractor $h_{\\boldsymbol{\\theta}_1}$ is continuously updated, the classifier $g_{\\boldsymbol{\\theta}_2}$ is directly solved by KRR, and thus the meta-gradient backpropagation side-steps the recursive computation graph.\n\n\n\\subsection{Discussion}\n\n\n\nFrom the loss surface perspective \\citep{li2018visualizing}, the meta-learning framework of minimizing $\\mathcal{R}_{\\mathcal{T}}\\left(\\texttt{alg}\\left(\\mathcal{S}\\right)\\right)$ can be considered to mimic the local minima of target data with the distilled data. However, the loss landscape {\\it w.r.t.} parameters is closely related to the network architecture, while only one type of small network is used in the BPTT approach. Consequently, there is a moderate performance drop when the distilled dataset is employed to train other complicated networks. Moreover, a long unrolled trajectory and second-order gradient computation are also two key challenges for BPTT approach, which hinder its efficiency. The KRR approach compensates for these shortcomings by replacing networks with the nonparametric kernel model which admits closed-form solution. Although KRR is nonparametric and does not involve neural networks during the distillation process, previous research has shown that the training dynamic of neural networks is equal to the kernel method when the width of networks becomes infinite \\citep{jacot2018neural,golikov2022neural,bietti2019inductive,bietti2019inductive}, which partially guarantees the feasibility of the kernel regression approach and explains its decent performance when transferred to wide neural networks.\n\n\n\n\n\n\n\n\n\n\\section{Data Matching Framework}\n\n\n\nAlthough it is not feasible to explicitly extract information from target data and then inject them into synthetic data, information distillation can be achieved by implicitly aligning the byproducts of target data and synthetic data from different aspects; and this byproduct matching allows synthetic data to imitate the influence of target data on model training. 
The objective function of data matching can be summarized as follows.\n\\begin{equation}\n \\mathcal{L}(\\mathcal{S}) = \\sum_{k=0}^{T} D\\left( \\phi(\\mathcal{S},\\boldsymbol{\\theta}^{(k)}), \\phi(\\mathcal{T},\\boldsymbol{\\theta}^{(k)}) \\right)\n\\end{equation}\nsubject to\n\\begin{equation}\n\\label{eq:dm_update}\n \\boldsymbol{\\theta}^{(k)} = \\boldsymbol{\\theta}^{(k-1)} - \\eta \\nabla_{\\boldsymbol{\\theta}^{(k-1)}} \\mathcal{R}_{\\mathcal{S}}\\left(\\boldsymbol{\\theta}^{(k-1)}\\right),\n\\end{equation}\nwhere $D(\\cdot,\\cdot)$ is a distance function, and $\\phi(\\cdot)$ maps the dataset $\\mathcal{S}$ or $\\mathcal{T}$ to other informative spaces, such as gradient, parameter, and feature spaces. In practical implementation, the full datasets of $\\mathcal{S}$ and $\\mathcal{T}$ are often replaced with random sampled batches of $\\mathcal{B}_\\mathcal{S}$ and $\\mathcal{B}_\\mathcal{T}$ for memory saving and faster convergence. \n\nCompared to the aforementioned meta-learning framework, the data matching loss not only focuses on the final parameter $\\texttt{alg}(\\mathcal{S})$ but also supervises the intermediate states, as shown in the sum operation $\\sum_{k=0}^{T}$. By this, the distilled data can better imitate the influence of target data on training networks at different training stages.\n\n\n\\iffalse\nTo circumvent the computation and memory inefficiency brought by the recursive computation graph, recent methods attempt to explore a substitute for the meta-loss $\\mathcal{R}_{\\mathcal{T}}\\left( \\texttt{alg}\\left(\\mathcal{S}\\right) \\right)$.\nWith the common sense that similar model parameters induce similar performance, the objective of DD is converted to match the model parameters trained on $\\mathcal{S}$ and $\\mathcal{T}$. As the final well-trained parameters can be regarded as an accumulation of parameter gradients, the matching objective can be further converted from parameters to their gradients induced by $\\mathcal{S}$ and $\\mathcal{T}$. With the parameter or gradient matching, $\\mathcal{S}$ can mimic the influence of $\\mathcal{T}$ on network parameters, and consequently helps learn a satisfying model for prediction. \n\nExcept for shortening the distance between parameters or gradients, other works also study DD from feature perspective \\citep{zhao2021distribution}: to represent $\\mathcal{T}$ with $\\mathcal{S}$, the synthetic points in $\\mathcal{S}$ should be as close to the examples in $\\mathcal{T}$ as possible in terms of feature space, and consequently distribution or feature matching are developed for this distillation. Compared to the above parameter or gradient matching, distribution matching can more fully grasp the characteristic of original data distribution with the synthetic dataset $\\mathcal{S}$.\n\n\n\nOverall, these methods achieve dataset distillation from $\\mathcal{T}$ to $\\mathcal{S}$ by matching their byproducts from different perspectives of parameter, gradient, and feature space, and the objective function can be summarized as follows. \n\n\n\n\n\n\n\n\n\n\n\\begin{equation}\n \\mathcal{L}(\\mathcal{S}) = \\sum_{k=0}^{T-1} D\\left( \\phi(\\mathcal{S},\\boldsymbol{\\theta}^{(k)}), \\phi(\\mathcal{T},\\boldsymbol{\\theta}^{(k)}) \\right),\n\\end{equation}\nwhere $D(\\cdot,\\cdot)$ is a distance function, and $\\phi(\\cdot)$ maps the dataset $\\mathcal{S}$ or $\\mathcal{T}$ to gradient, feature, or parameter space. 
For example, $\\phi(\\mathcal{S},\\boldsymbol{\\theta}^{(k)}) = \\boldsymbol{g}_{\\mathcal{S}}^{(k)}$ in the gradient match. For distribution matching, the feature extractor, usually a set of pre-trained networks, is employed to achieve this mapping to feature space.\n\nCompared to the meta-loss approach, the loss $\\mathcal{L}^{(k)}(\\mathcal{S}) = D\\left( \\phi(\\mathcal{S},\\boldsymbol{\\theta}^{(k)}), \\phi(\\mathcal{T},\\boldsymbol{\\theta}^{(k)}) \\right) $ is only related to the current parameter $\\boldsymbol{\\theta}^{(k)}$ and does not acquire previous states, which decomposes the recursive computation graph and the backward propagation of the meta-gradient $\\nabla_{\\mathcal{S}}\\mathcal{L}$ consequently becomes more efficient, as shown in Figure \\ref{figure:matchloss}.\n\\fi\n\n\\subsection{Gradient Matching Approach}\nTo achieve comparable generalization performance, a natural approach is to imitate the model parameters, {\\it i.e.}, matching the training trajectories introduced by $\\mathcal{S}$ and $\\mathcal{T}$. With a fixed parameter initialization, the training trajectory of $\\{\\boldsymbol{\\mathcal{\\theta}}^{(0)}, \\boldsymbol{\\mathcal{\\theta}}^{(1)}, \\cdots, \\boldsymbol{\\mathcal{\\theta}}^{(T)} \\}$ is equal to a series of gradients $\\{\\boldsymbol{g}^{(0)}, \\cdots, \\boldsymbol{g}^{(T)}\\}$. Therefore, matching the gradients induced by $\\mathcal{S}$ and $\\mathcal{T}$ is a convincing proxy to mimic the influence on model parameters \\citep{zhao2021dataset}, and the objective function is formulated as\n\\begin{equation}\n \\mathcal{L}(\\mathcal{S}) = \\sum_{k=0}^{T-1} D\\left( \\nabla_{\\boldsymbol{\\theta}^{(k)}} \\mathcal{R}_{\\mathcal{S}}\\left(\\boldsymbol{\\theta}^{(k)}\\right), \\nabla_{\\boldsymbol{\\theta}^{(k)}} \\mathcal{R}_{\\mathcal{T}}\\left(\\boldsymbol{\\theta}^{(k)}\\right) \\right).\n\\end{equation}\nThe difference $D$ between gradients is measured in the layer-wise aspect:\n\\begin{equation}\n D\\left( \\boldsymbol{g}_{\\mathcal{S}}, \\boldsymbol{g}_{\\mathcal{T}} \\right) = \\sum_{l=1}^L \\texttt{dis}\\left( \\boldsymbol{g}_{\\mathcal{S}}^{l}, \\boldsymbol{g}_{\\mathcal{T}}^{l} \\right),\n\\end{equation}\nwhere $\\boldsymbol{g}^{l}$ denotes the gradient of $i$-th layer, and $\\texttt{dis}$ is the sum of cosine distance as follows:\n\\begin{equation}\n\\label{eq:dis}\n \\texttt{dis}\\left(\\boldsymbol{A}, \\boldsymbol{B}\\right) = \\sum_{i=1}^{\\texttt{out}} \\left(1 - \\frac{\\boldsymbol{A}_i \\boldsymbol{B}_i}{ \\| \\boldsymbol{A}_i \\| \\| \\boldsymbol{B}_i \\|} \\right),\n\\end{equation}\nwhere $\\texttt{out}$ denotes the number of output channels for specific layer; and $\\boldsymbol{A}_i$ and $\\boldsymbol{B}_i$ are the flatten gradients in the $i$-th channel.\n\nTo improve the convergence speed in practical implementation, \\citet{zhao2021dataset} proposed to match gradient in class-wise as follows:\n\\begin{equation}\n\\label{eq:interclass_gm}\n \\mathcal{L}^{(k)}=\\sum_{c=1}^C D\\left(\\nabla_{\\boldsymbol{\\theta}^{(k)}} \\mathcal{R}\\left(\\mathcal{B}_\\mathcal{S}^c, \\boldsymbol{\\theta}^{(k)}\\right), \\nabla_{\\boldsymbol{\\theta}^{(k)}} \\mathcal{R}\\left(\\mathcal{B}_\\mathcal{T}^c, \\boldsymbol{\\theta}^{(k)}\\right) \\right),\n\\end{equation}\nwhere $c$ is the class index and $\\mathcal{B}_\\mathcal{S}^c$ and $\\mathcal{B}_\\mathcal{T}^c$ denotes the batch examples belong to the $c$-th class. 
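A schematic implementation of the class-wise gradient matching loss (Eqs. \\ref{eq:dis} and \\ref{eq:interclass_gm}) is given below; it is a simplified PyTorch sketch, in which \\texttt{create\\_graph=True} keeps the synthetic-data gradients differentiable so that the matching loss can be backpropagated to $\\mathcal{S}$.
\\begin{verbatim}
import torch
import torch.nn.functional as F

def layer_distance(gS, gT):            # Eq. (dis): per-output-channel cosine distance
    a = gS.flatten(1) if gS.dim() > 1 else gS.unsqueeze(0)
    b = gT.flatten(1) if gT.dim() > 1 else gT.unsqueeze(0)
    return (1 - F.cosine_similarity(a, b, dim=1)).sum()

def gradient_matching_loss(model, S_batches, T_batches):
    """Eq. (interclass_gm): match gradients class by class at the current parameters."""
    params = tuple(model.parameters())
    total = 0.0
    for (xs, ys), (xt, yt) in zip(S_batches, T_batches):        # one pair per class c
        gS = torch.autograd.grad(F.cross_entropy(model(xs), ys), params,
                                 create_graph=True)
        gT = torch.autograd.grad(F.cross_entropy(model(xt), yt), params)
        total = total + sum(layer_distance(a, b.detach()) for a, b in zip(gS, gT))
    return total    # backpropagates to the synthetic images xs only
\\end{verbatim}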
However, according to \\citet{lee2022dataset}, the class-wise gradient matching pays much attention to the class-common features and overlooks the class-discriminative features in the target dataset, so the distilled dataset $\\mathcal{S}$ does not possess enough class-discriminative information, especially when the target dataset is fine-grained, {\\it i.e.}, when class-common features are dominant. Based on this finding, the authors proposed an improved objective function of\n\\begin{equation}\n\\label{eq:intraclass_gm}\n \\mathcal{L}^{(k)}=D\\left(\\sum_{c=1}^C\\nabla_{\\boldsymbol{\\theta}^{(k)}} \\mathcal{R}\\left(\\mathcal{B}_\\mathcal{S}^c, \\boldsymbol{\\theta}^{(k)}\\right), \\sum_{c=1}^C \\nabla_{\\boldsymbol{\\theta}^{(k)}} \\mathcal{R}\\left(\\mathcal{B}_\\mathcal{T}^c, \\boldsymbol{\\theta}^{(k)}\\right) \\right)\n\\end{equation}\nto better capture contrastive signals between different classes through the sum of loss gradients over classes. A similar approach was proposed by \\citet{jiang2022delving}, who employed both intraclass and interclass gradient matching by combining Eq. \\ref{eq:interclass_gm} and Eq. \\ref{eq:intraclass_gm}. Moreover, they measured the difference between gradients by taking the magnitude into account rather than only the angle ({\\it i.e.}, the cosine distance), and the distance function of Eq. \\ref{eq:dis} is accordingly improved to\n\\begin{equation}\n \\texttt{dis}\\left(\\boldsymbol{A}, \\boldsymbol{B}\\right) = \\sum_{i=1}^{\\texttt{out}} \\left(1 - \\frac{\\boldsymbol{A}_i \\boldsymbol{B}_i}{ \\| \\boldsymbol{A}_i \\| \\| \\boldsymbol{B}_i \\|} + \\| \\boldsymbol{A}_i - \\boldsymbol{B}_i \\|\\right).\n\\end{equation}\nTo alleviate overfitting on the small dataset $\\mathcal{S}$, \\citet{kim2022dataset} proposed to perform the inner loop optimization on the target dataset $\\mathcal{T}$ instead of $\\mathcal{S}$, {\\it i.e.}, to replace the parameter update of Eq. \\ref{eq:dm_update} with\n\\begin{equation}\n \\boldsymbol{\\theta}^{(k+1)} = \\boldsymbol{\\theta}^{(k)} - \\eta \\nabla_{\\boldsymbol{\\theta}^{(k)}} \\mathcal{R}_\\mathcal{T}\\left(\\boldsymbol{\\theta}^{(k)}\\right).\n\\end{equation}\n\nAlthough data augmentation facilitates a large performance increase in conventional network training, applying augmentation to distilled datasets yields no improvement in the final test accuracy, because the characteristics of synthetic images differ from those of natural images and are not optimized under the supervision of the various transformations. To leverage data augmentation on synthetic datasets, \\citet{zhao2021dsa} designed differentiable Siamese augmentation (DSA), which homologously augments the distilled data and the target data during the distillation process as follows:\n\\begin{equation}\n \\mathcal{L} = D\\left(\\nabla_{\\boldsymbol{\\theta}} \\mathcal{R}\\left(\\mathcal{A}_{\\omega}\\left(\\mathcal{B}_\\mathcal{S} \\right), \\boldsymbol{\\theta}\\right), \\nabla_{\\boldsymbol{\\theta}} \\mathcal{R}\\left(\\mathcal{A}_{\\omega}\\left(\\mathcal{B}_\\mathcal{T} \\right), \\boldsymbol{\\theta}\\right) \\right),\n\\end{equation}\nwhere $\\mathcal{A}$ is a family of image transformations such as cropping and flipping that are parameterized with $\\omega^{\\mathcal{S}}$ and $\\omega^{\\mathcal{T}}$ for synthetic and target data, respectively. 
In DSA, the augmented form of distilled data has a consistent correspondence {\\it w.r.t.} the augmented form of the target data, {\\it i.e.}, $\\omega^{\\mathcal{S}} = \\omega^{\\mathcal{T}} = \\omega$; and $\\omega$ is randomly picked from $\\mathcal{A}$ at different iterations. Notably, the transformation $\\mathcal{A}$ requires to be differentiable {\\it w.r.t.} the synthetic dataset $\\mathcal{S}$ for backpropagation:\n\\begin{equation}\n\\frac{\\partial D(\\cdot)}{\\partial \\mathcal{S}}=\\frac{\\partial D(\\cdot)}{\\partial \\nabla_\\theta \\mathcal{L}(\\cdot)} \\frac{\\partial \\nabla_\\theta \\mathcal{L}(\\cdot)}{\\partial \\mathcal{A}(\\cdot)} \\frac{\\partial \\mathcal{A}(\\cdot)}{\\partial \\mathcal{S}}.\n\\end{equation}\n\nThrough setting $\\omega^{\\mathcal{S}} = \\omega^{\\mathcal{T}}$, DSA permits the knowledge transfer from the transformation of target images to the corresponding transformation of synthetic images. Consequently, the augmented synthetic images also possess meaningful characteristics of the natural images. Due to its superior compatibility, DSA has been widely used in many data matching methods. \n\n\\begin{figure}[t]\n\\centering\n\n\\subfigure[Trajectory matching]{\n\\begin{minipage}[b]{0.62\\textwidth}\n\\centering\n \t\t\\includegraphics[width=0.9\\columnwidth]{figures\/mtt_illustration.pdf}\n \t\t\\end{minipage}\n\t\t\\label{figure:mtt_illustration} \n \t}\n\\subfigure[Distribution matching]{\n\\begin{minipage}[b]{0.3\\textwidth}\n\\centering\n \t\t\\includegraphics[width=0.9\\columnwidth]{figures\/dm_illustration.pdf}\n \t\t\\end{minipage}\n\t\t\\label{figure:dm_illustration} \n \t}\n\n\\caption{(a) An illustration of trajectory matching \\citep{cazenavette2022distillation}; (b) Compared to gradient matching approach, distribution matching approach can more comprehensively cover the data distribution in feature space.}\n\\label{figure:match illustration}\n\\end{figure}\n\n\\subsection{Trajectory Matching Approach}\n\nUnlike circuitously matching the gradients, \\citet{cazenavette2022distillation} directly matched the long-range training trajectory between the target dataset and the synthetic dataset. Specifically, they train models on the target dataset $\\mathcal{T}$ and collect the expert training trajectory into a buffer in advance. Then the ingredients in the buffer are randomly selected to initialize the networks for training $\\mathcal{S}$. After collecting the trajectories of $\\mathcal{S}$, the synthetic dataset is updated by matching their parameters, as shown in Figure \\ref{figure:mtt_illustration}. The objective loss of trajectory matching is defined as \n\\begin{equation}\n\\label{eq:mtt_loss}\n \\mathcal{L}= \\frac{\\| \\boldsymbol{\\theta}^{(k+N)} - \\boldsymbol{\\theta}_{\\mathcal{T}}^{(k+M)} \\|_2 ^2}{ \\|\\boldsymbol{\\theta}_{\\mathcal{T}}^{(k)} - \\boldsymbol{\\theta}_{\\mathcal{T}}^{(k+M)} \\|_2 ^2},\n\\end{equation}\nwhere $\\boldsymbol{\\theta}_{\\mathcal{T}}$ denotes the target parameter by training the model on $\\mathcal{T}$, which is stored in the buffer, and $\\boldsymbol{\\theta}^{(k+N)}$ are the parameter obtained by training the model on $\\mathcal{S}$ for $N$ epochs with the initialization of $\\boldsymbol{\\theta}_{\\mathcal{T}}^{(k)}$. 
Although trajectory matching has demonstrated empirical success, \citet{du2022minimizing} argued that during the natural learning process $\boldsymbol{\theta}^{(k)}$ would be obtained by training on $\mathcal{S}$ for $k$ epochs, whereas in trajectory matching it is directly initialized with $\boldsymbol{\theta}_{\mathcal{T}}^{(k)}$; this mismatch causes the {\it accumulated trajectory error}, which measures the difference between the parameters on the real trajectories of $\mathcal{S}$ and $\mathcal{T}$. Because the accumulated trajectory error stems from the mismatch between $\boldsymbol{\theta}_{\mathcal{T}}^{(k)}$ and the real $\boldsymbol{\theta}^{(k)}$ obtained from natural training, the authors alleviate this error by adding random noise when initializing $\boldsymbol{\theta}^{(k)}$, which improves robustness {\it w.r.t.} the initialization $\boldsymbol{\theta}_{\mathcal{T}}^{(k)}$.

Compared to gradient matching, trajectory matching side-steps second-order gradient computation; unfortunately, unrolling $N$ SGD updates is required during meta-gradient backpropagation due to the presence of $\boldsymbol{\theta}^{(k+N)}$ in Eq. \ref{eq:mtt_loss}. The unrolled gradient computation significantly increases the memory burden and impedes scalability. By disentangling the meta-gradient {\it w.r.t.} the synthetic examples into two passes, \citet{cui2022scaling} greatly reduced the memory required by trajectory matching and successfully scaled the approach to the large ImageNet-1K dataset \citep{russakovsky2015imagenet}. Motivated by knowledge distillation \citep{gou2021knowledge}, they also proposed assigning soft labels to synthetic examples using pretrained models in the buffer; the soft labels help capture intraclass information and consequently improve distillation performance.

\subsection{Distribution Matching Approach}

\begin{wrapfigure}{r}{0.5\textwidth}
 \centering
 \includegraphics[width=0.45\columnwidth]{figures/dm.pdf}
 \caption{An illustration of distribution matching. $\psi$ is the feature extractor, and red dashed lines denote the backpropagation of gradients.}
 \label{fig:dm}
\end{wrapfigure}
Although the above parameter-wise matching shows satisfactory performance, \citet{zhao2023distribution} visualized the distilled data in two dimensions and revealed a large distribution discrepancy between the distilled data and the target data. In other words, the distilled dataset cannot comprehensively cover the data distribution in feature space, as shown in Figure \ref{figure:dm_illustration}. Based on this discovery, the authors proposed to match the synthetic and target data from the distribution perspective. Specifically, they employed a pretrained feature extractor $\psi_{\boldsymbol{v}}$ with parameters $\boldsymbol{v}$ to map the input space to the feature space. The synthetic data are optimized according to the objective function
\begin{equation}
\label{eq:dm}
 \mathcal{L}(\mathcal{S})=\sum_{c=1}^C \Big\| \frac{1}{|\mathcal{B}_\mathcal{S}^c|}\sum_{\boldsymbol{s}\in\mathcal{B}_\mathcal{S}^c}\psi_{\boldsymbol{v}}(\boldsymbol{s}) - \frac{1}{|\mathcal{B}_\mathcal{T}^c|}\sum_{\boldsymbol{x}\in\mathcal{B}_\mathcal{T}^c}\psi_{\boldsymbol{v}}(\boldsymbol{x}) \Big\|^2,
\end{equation}
where $c$ denotes the class index, so that the class-wise mean embeddings of the synthetic and target data are matched. An illustration of distribution matching is also presented in Figure \ref{fig:dm}. As shown in Eq. \ref{eq:dm}, distribution matching does not rely on matching the model parameters and drops the bilevel optimization, which lowers the memory requirement; however, it empirically underperforms the gradient and trajectory matching approaches described above.
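Assuming $\psi_{\boldsymbol{v}}$ is available as a feature extractor network, Eq. \ref{eq:dm} reduces to a few lines. The sketch below is illustrative and keeps only the essential structure (class-wise batches, mean embeddings, and a squared distance), with gradients flowing only through the synthetic branch.
\begin{verbatim}
import torch

def distribution_match_loss(feature_extractor, syn_batches, real_batches):
    # syn_batches / real_batches: dicts mapping class index -> image tensor.
    loss = 0.0
    for c in syn_batches:
        mu_syn = feature_extractor(syn_batches[c]).mean(dim=0)
        with torch.no_grad():  # no gradient is needed for the real branch
            mu_real = feature_extractor(real_batches[c]).mean(dim=0)
        loss = loss + ((mu_syn - mu_real) ** 2).sum()
    return loss
\end{verbatim}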
\citet{wang2022cafe} improved distribution matching in several aspects: (1) using features from multiple layers rather than only the last layer for matching, with the feature matching loss formulated as
\begin{equation}
 \mathcal{L}_{\text{f}} = \sum_{c=1}^C \sum_{l=1}^L \left\| f_{\boldsymbol{\theta}}^l (\mathcal{B}_\mathcal{S}^c) - f_{\boldsymbol{\theta}}^l (\mathcal{B}_\mathcal{T}^c) \right\|^2,
\end{equation}
where $f_{\boldsymbol{\theta}}^l$ denotes the $l$-th layer features and $L$ is the total number of network layers; (2) reintroducing the bilevel optimization that updates $\mathcal{S}$ under different model parameters by inserting the inner loop $\boldsymbol{\theta}^{(k+1)} = \boldsymbol{\theta}^{(k)} - \eta\nabla_{\boldsymbol{\theta}^{(k)}} \mathcal{R}_{\mathcal{S}}(\boldsymbol{\theta}^{(k)})$;
and (3) proposing a discrimination loss in the last-layer feature space to enlarge the class distinction of the synthetic data. Specifically, the synthetic feature center $\bar{f}^\mathcal{S}_{c,L}$ of each category $c$ is obtained by averaging the last-layer features of the batch $\mathcal{B}_\mathcal{S}^c$. The objective of the discrimination loss is to improve the ability of the feature centers $\bar{\boldsymbol{F}}_{L}^{\mathcal{S}} = [\bar{f}^\mathcal{S}_{1,L}, \bar{f}^\mathcal{S}_{2,L}, \cdots, \bar{f}^\mathcal{S}_{C,L}]$ to classify the real data features $\boldsymbol{F}_L^{\mathcal{T}}=[{f}^\mathcal{T}_{1,L}, {f}^\mathcal{T}_{2,L}, \cdots, {f}^\mathcal{T}_{N,L}]$. The logits are first calculated by
\begin{equation}
 \mathbf{O} = \boldsymbol{F}_{L}^{\mathcal{T}} \left(\bar{\boldsymbol{F}}_{L}^\mathcal{S}\right)^\top,
\end{equation}
where $\mathbf{O}\in \mathbb{R}^{N\times C}$ contains the logits of the $N=C\times |\mathcal{B}_\mathcal{T}^c|$ target data points. Then the probability $p_i$ assigned to the ground-truth label of the $i$-th point is taken from $\text{softmax}(\mathbf{O}_i)$, and the classification loss is
\begin{equation}
 \mathcal{L}_{\text{d}} = -\frac{1}{N} \sum_{i=1}^N \log p_i.
\end{equation}
Therefore, the total loss in \citep{wang2022cafe} for dataset distillation is $\mathcal{L}_{\text{total}} = \mathcal{L}_{\text{f}} + \beta \mathcal{L}_{\text{d}}$, where $\beta$ is a positive factor balancing the feature matching loss and the discrimination loss.

\subsection{Discussion}

The gradient matching approach can be considered short-range parameter matching, but its backpropagation requires second-order gradient computation. Although the trajectory matching approach makes up for this shortcoming, the long-range trajectory introduces a recursive computation graph during meta-gradient backpropagation. Different from matching in parameter space, the distribution matching approach uses the feature space as the matching proxy and also bypasses second-order gradient computation. Although distribution matching has advantages in scalability {\it w.r.t.} high-dimensional data, it empirically underperforms trajectory matching, which might be attributed to the fact that comprehensively covering the data distribution does not necessarily translate into strong distillation performance.
In detail, distribution matching achieves DD by mimicking features of the target data, so that the distilled data are evenly distributed in the feature space. However, not all features are equally important and distribution matching might waste the budget on imitating less informative features, thereby undermining the distillation performance. \n\n\n\n\n\n\n\\iffalse\n\\section{Distillation schemes}\n\\label{sec:distillation schemes}\n\nIn this section, we taxonomise the dataset distillation schemes according to their optimized objectives. Because dataset distillation can be formulated as a complicated bilevel problem and the closed-form solution is intractable, the distilled dataset is optimized by the nested outer loop and inner loop gradient descent: the inner loop, {\\it i.e.}, model training, optimizes the parameter $\\boldsymbol{\\theta}$ according to the distilled dataset, and the outer loop optimization is responsible for the synthetic dataset updating and is the core of dataset distillation algorithms. The outer loop optimization can be concisely summarized as three steps: (1) compute the objective function $\\mathcal{L}(\\mathcal{S})$ that is induced by the distilled dataset $\\mathcal{S}$ and can reflect the distillation performance; (2) get the gradient $\\nabla_{\\mathcal{S}}\\mathcal{L}$ {\\it w.r.t.} the distilled data; and (3) update the distilled dataset with the gradient. Therefore, the key to dataset distillation is the design of objective function $\\mathcal{L}(\\mathcal{S})$, which closely relates to the computation of gradients and determines the distillation performance and also efficiency. In this survey, we divide the objective function of dataset distillation into three categories: (1) meta-loss approach; (2) data matching approach; and (3) kernel regression approach.\n\\fi\n\n\n\n\n\n\n\\iffalse\n\\subsection{Meta-loss Approach}\n\nMotivated by meta-learning, which strives to obtain a set of satisfied hyperparameters for fast model training \\citep{maclaurin2015gradient,finn2017model}, most pioneered dataset distillation methods regard the distilled dataset as hyperparameters and directly define the objective function as the meta-loss of \n\\begin{equation}\n \\mathcal{L}(\\mathcal{S}) = \\mathcal{R}_{\\mathcal{T}}\\left(\\texttt{alg}\\left(\\mathcal{S}\\right)\\right).\n\\end{equation}\nThen the distilled dataset is updated by $\\mathcal{S} = \\mathcal{S} - \\alpha \\nabla_\\mathcal{S} \\mathcal{L}(\\mathcal{S})$ with the step size $\\alpha$. However, as the neural network is adopted in the distillation process, the training algorithm $\\texttt{alg}$ on the model parameter $\\boldsymbol{\\theta}$ is iterative (Eq. \\ref{eq:gradient descent}) in the inner loop. Hence, the computation of meta-gradient $\\nabla_\\mathcal{S}\\mathcal{L}(\\mathcal{S})$ requires unrolling the recursive computation graph, which retraces to the previous states of $\\{\\boldsymbol{\\mathcal{\\theta}}^{(0)}, \\boldsymbol{\\mathcal{\\theta}}^{(1)}, \\cdots, \\boldsymbol{\\mathcal{\\theta}}^{(T)} \\}$, as shown in Figure \\ref{figure:metaloss}. 
For this reason, albeit the meta-loss approach is brief and clear, this unroll is both computing and memory expensive, which also severely affects the final distillation performance.\n\n\n\n\\subsection{Data Matching Approach}\n\n\nTo circumvent the computation and memory inefficiency brought by the recursive computation graph, recent methods attempt to explore a substitute for the meta-loss $\\mathcal{R}_{\\mathcal{T}}\\left( \\texttt{alg}\\left(\\mathcal{S}\\right) \\right)$.\nWith the common sense that similar model parameters induce similar performance, the objective of DD is converted to match the model parameters trained on $\\mathcal{S}$ and $\\mathcal{T}$. As the final well-trained parameters can be regarded as an accumulation of parameter gradients, the matching objective can be further converted from parameters to their gradients induced by $\\mathcal{S}$ and $\\mathcal{T}$. With the parameter or gradient matching, $\\mathcal{S}$ can mimic the influence of $\\mathcal{T}$ on network parameters, and consequently helps learn a satisfying model for prediction. \n\nExcept for shortening the distance between parameters or gradients, other works also study DD from feature perspective \\citep{zhao2021distribution}: to represent $\\mathcal{T}$ with $\\mathcal{S}$, the synthetic points in $\\mathcal{S}$ should be as close to the examples in $\\mathcal{T}$ as possible in terms of feature space, and consequently distribution or feature matching are developed for this distillation. Compared to the above parameter or gradient matching, distribution matching can more fully grasp the characteristic of original data distribution with the synthetic dataset $\\mathcal{S}$.\n\n\n\nOverall, these methods achieve dataset distillation from $\\mathcal{T}$ to $\\mathcal{S}$ by matching their byproducts from different perspectives of parameter, gradient, and feature space, and the objective function can be summarized as follows. \n\n\n\n\n\n\n\n\n\\begin{equation}\n \\mathcal{L}(\\mathcal{S}) = \\sum_{k=0}^{T-1} D\\left( \\phi(\\mathcal{S},\\boldsymbol{\\theta}^{(k)}), \\phi(\\mathcal{T},\\boldsymbol{\\theta}^{(k)}) \\right),\n\\end{equation}\nwhere $D(\\cdot,\\cdot)$ is a distance function, and $\\phi(\\cdot)$ maps the dataset $\\mathcal{S}$ or $\\mathcal{T}$ to gradient, feature, or parameter space. For example, $\\phi(\\mathcal{S},\\boldsymbol{\\theta}^{(k)}) = \\boldsymbol{g}_{\\mathcal{S}}^{(k)}$ in the gradient matching. For distribution matching, the feature extractor, usually a set of pre-trained networks, is employed to achieve this mapping to feature space.\n\nCompared to the meta-loss approach, the loss $\\mathcal{L}^{(k)}(\\mathcal{S}) = D\\left( \\phi(\\mathcal{S},\\boldsymbol{\\theta}^{(k)}), \\phi(\\mathcal{T},\\boldsymbol{\\theta}^{(k)}) \\right) $ is only related to the current parameter $\\boldsymbol{\\theta}^{(k)}$ and does not acquire previous states, which decomposes the recursive computation graph and the backward propagation of the meta-gradient $\\nabla_{\\mathcal{S}}\\mathcal{L}$ consequently becomes more efficient, as shown in Figure \\ref{figure:matchloss}.\n\n\n\\subsection{Kernel Regression Approach}\n\nUnlike training neural networks where the optimum is intractable, some methods consider dataset distillation in the kernel regression regime that admits a closed-form solution. 
For the regression model $f(\\boldsymbol{x})=\\boldsymbol{w}^\\top\\psi(\\boldsymbol{x})$, where $\\psi(\\cdot)$ is a non-linear mapping and the corresponding kernel is $K(\\boldsymbol{x},\\boldsymbol{x}^\\prime)=\\langle \\psi(\\boldsymbol{x}), \\psi(\\boldsymbol{x}^\\prime)\\rangle$, there exists a closed-form solution for $\\boldsymbol{w}$ when the regression model is trained on $\\mathcal{S}$ with kernel ridge regression (KRR):\n\\begin{equation}\n \\boldsymbol{w} = \\psi(X_{s})^\\top\\left(\\mathbf{K}_{X_s X_s}+\\lambda I\\right)^{-1}y_{s},\n\\end{equation}\nwhere $\\mathbf{K}_{X_s X_s} = [K(\\boldsymbol{s}_i,\\boldsymbol{s}_j)]_{ij}\\in \\mathbb{R}^{n\\times n}$ is called the {\\it kernel matrix} or {\\it Gram matrix} associated to $K$ and the dataset $\\mathcal{S}$, and $\\lambda > 0$ is a fixed regularization parameter. Therefore, the mean square error (MSE) of predicting $\\mathcal{T}$ with the model trained on $\\mathcal{S}$ is\n\\begin{equation}\n \\mathcal{L}(\\mathcal{S}) = \\frac{1}{2}\\left\\|y_t-\\mathbf{K}_{X_t X_s}\\left(\\mathbf{K}_{X_s X_s}+\\lambda I\\right)^{-1} y_s\\right\\|^2,\n\\end{equation}\nwhere $\\mathbf{K}_{X_t X_s}=[K(\\boldsymbol{x}_i,\\boldsymbol{s}_j)]_{ij}\\in \\mathbb{R}^{m\\times n}$. Then the distilled dataset is updated via the gradient of the above loss. Due to the closed-form solution in KRR, $\\boldsymbol{\\theta}$ does not require an iterative update and the backward pass of gradient thus bypasses the recursive computation graph, as shown in Figure \\ref{figure:kernelloss}. \n\nIt is worth noting that the KRR is non-parametric and does not involve neural networks during the distillation process. However, previous research has shown that the training dynamic of neural networks is equal to the kernel method when the width of networks becomes infinite \\citep{jacot2018neural}, which partially guarantees the feasibility of the kernel regression approach and explains its decent performance when transferred to the neural networks.\n\n\n\\fi\n\n\\iffalse\n\\begin{table}[t]\n \n \\centering\n \\caption{A summary of dataset distillation algorithms}\n \\begin{tabular}{c|c}\n \\toprule\n Methods & Distillation schemes \\\\\n \\hline\n DD (\\citet{wang2018dataset}) & BPTT \\\\ \n LD (\\citet{bohdal2020flexible}) & BPTT \\\\\n SLDD (\\citet{sucholutsky2021soft}) & BPTT \\\\\n DC (\\citet{zhao2021dataset})& Gradient match \\\\\n \n DCC (\\citet{lee2022dataset}) & Gradient match \\\\\n MTT (\\citet{cazenavette2022distillation}) & Trajectory match \\\\\n FTD (\\citet{du2022minimizing})& Trajectory match \\\\\n TESLA (\\citet{cui2022scaling})& Trajectory match \\\\\n DM (\\citet{zhao2021distribution}) & Distribution match \\\\\n CAFE (\\citet{wang2022cafe})& Distribution match \\\\\n KIP (\\citet{nguyen2020dataset,nguyen2021dataset}) & Kernel ridge regression \\\\\n FRePo (\\citet{zhou2022dataset})& Kernel ridge regression \\\\\n RFAD (\\citet{loo2022efficient})& Kernel ridge regression \\\\\n GTN (\\citet{such2020generative}) & BPTT \\\\\n GAN (\\citet{zhao2022synthesizing}) & Distribution match \\\\\n IDC (\\citet{kim2022dataset}) & Gradient match \\\\\n \\citet{deng2022remember} & BPTT \\\\\n Haba (\\citet{liu2022dataset}) & Trajectory match \\\\\n KFS (\\citet{lee2022kfs}) & Distribution match \\\\\n \\bottomrule\n \\end{tabular}\n \\label{tab:my_label}\n\\end{table}\n\\fi\n\n\\iffalse\n\\begin{table}[t]\n \n \\centering\n \\caption{A summary of dataset distillation algorithms}\n \\begin{tabular}{c|c|c}\n \\toprule\n Methods & Distillation schemes & Objective 
function \\\\\n \\hline\n DD (\\citet{wang2018dataset}) & BPTT & $\\mathcal{R}_{\\mathcal{T}}\\left(\\texttt{alg}\\left(\\mathcal{S}\\right)\\right)$ \\\\ \n LD (\\citet{bohdal2020flexible}) & BPTT & $\\mathcal{R}_{\\mathcal{T}}\\left(\\texttt{alg}\\left(\\mathcal{S}\\right)\\right)$ \\\\\n SLDD (\\citet{sucholutsky2021soft}) & BPTT & $\\mathcal{R}_{\\mathcal{T}}\\left(\\texttt{alg}\\left(\\mathcal{S}\\right)\\right)$ \\\\\n DC (\\citet{zhao2021dataset})& Gradient match & $\\sum_{c=1}^C D\\left(\\nabla_{\\boldsymbol{\\theta}_t}\\mathcal{R}\\left(B_c^{\\mathcal{S}}, \\boldsymbol{\\theta}_t\\right), \\nabla_{\\boldsymbol{\\theta}_t}\\mathcal{R}\\left(B_c^{\\mathcal{T}}, \\boldsymbol{\\theta}_t\\right)\\right)$ \\\\\n \n DCC (\\citet{lee2022dataset}) & Gradient match & $ D\\left( \\sum_{c=1}^C \\nabla_{\\boldsymbol{\\theta}_t}\\mathcal{R}\\left(B_c^{\\mathcal{S}}, \\boldsymbol{\\theta}_t\\right), \\sum_{c=1}^C \\nabla_{\\boldsymbol{\\theta}_t}\\mathcal{R}\\left(B_c^{\\mathcal{T}}, \\boldsymbol{\\theta}_t\\right)\\right)$ \\\\\n MTT (\\citet{cazenavette2022distillation}) & Trajectory match & $\\| \\boldsymbol{\\theta}^{(k+N)} - \\boldsymbol{\\theta}_{\\mathcal{T}}^{(k+M)} \\|_2 ^2 \/ \\|\\boldsymbol{\\theta}_{\\mathcal{T}}^{(k)} - \\boldsymbol{\\theta}_{\\mathcal{T}}^{(k+M)} \\|_2 ^2$ \\\\\n \n \n DM (\\citet{zhao2021distribution}) & Distribution match & $\\sum_{c=1}^{C}\\left\\|\\frac{1}{\\left|B_c^{\\mathcal{T}}\\right|} \\psi_{\\boldsymbol{\\vartheta}}(B_c^{\\mathcal{S}})-\\frac{1}{\\left|B_c^{\\mathcal{S}}\\right|} \\psi_{\\boldsymbol{\\vartheta}}(B_c^{\\mathcal{T}})\\right\\|^2$ \\\\\n \n KIP (\\citet{nguyen2020dataset,nguyen2021dataset}) & Kernel regression & $\\frac{1}{2}\\left\\|y_t-\\mathbf{K}_{X_t X_s}\\left(\\mathbf{K}_{X_s X_s}+\\lambda I\\right)^{-1} y_s\\right\\|^2$ \\\\\n \n \n \n \n \n \n \n \n \\bottomrule\n \\end{tabular}\n \\label{tab:my_label}\n\\end{table}\n\\fi\n\n\\iffalse\n\\section{Distillation Algorithms}\n\nAlbeit the objective of DD is to match the performance induced by $\\mathcal{S}$ and $\\mathcal{T}$ and is easy to describe, the realization of DD is not trivial due to many ingredients like high-dimensional data, intractable solution of neural networks, {\\it etc}. \nIn the following parts, we will review recently proposed dataset distillation algorithms in detail according to the taxonomy in Section \\ref{sec:distillation schemes}. Because the training algorithm $\\texttt{alg}\\left(\\mathcal{S},\\boldsymbol{\\theta}^{(0)}\\right)$ is both determined by the training set $\\mathcal{S}$ and the initialized parameter $\\boldsymbol{\\theta}^{(0)}$, many dataset distillation algorithms will take expectation {\\it w.r.t.} $\\boldsymbol{\\theta}^{(0)}$ in order to improve the robustness of the distilled dataset $\\mathcal{S}$ to different parameter initialization. In the following narrative, we will omit this expectation {\\it w.r.t.} initialization for the sake of simplicity.\n\n \n\\subsection{Meta-loss Approach}\n\nFrom the above illustration in Section \\ref{sec:distillation schemes}, the most direct objective is to match the loss $\\mathcal{R}_{\\mathcal{T}}\\left(\\texttt{alg}\\left(\\mathcal{S}\\right)\\right)$ and $\\mathcal{R}_{\\mathcal{T}}\\left(\\texttt{alg}\\left(\\mathcal{T}\\right)\\right)$ {\\it w.r.t} the large target dataset $\\mathcal{T}$. 
As the empirical risk $\\mathcal{R}_{\\mathcal{T}}\\left(\\texttt{alg}\\left(\\mathcal{T}\\right)\\right)$ easily achieves zero due to the overparameterization, the objective is the same as minimizing the meta-loss $\\mathcal{R}_{\\mathcal{T}}\\left(\\texttt{alg}\\left(\\mathcal{S}\\right)\\right)$. \\citet{wang2018dataset} first realize this optimization via the hyperparameter optimization\\citep{maclaurin2015gradient}. To alleviate the inefficiency in unrolling the long parameter path of $\\{\\boldsymbol{\\mathcal{\\theta}}^{(0)}, \\boldsymbol{\\mathcal{\\theta}}^{(1)}, \\cdots, \\boldsymbol{\\mathcal{\\theta}}^{(T)} \\}$, they employ a single step optimization{\\it w.r.t.} the model parameter from $\\boldsymbol{\\theta}^{(0)}$ to $\\boldsymbol{\\theta}^{(1)}$, and the loss is computed based on $\\boldsymbol{\\theta}^{(1)}$ and the target dataset $\\mathcal{T}$. Besides, both the data $\\boldsymbol{s}\\in \\mathcal{S}$ and learning rate $\\eta$ are learnable in \\citet{wang2018dataset}. Unlike freezing labels, \\citet{sucholutsky2021soft} extend \\citet{wang2018dataset} by learning a soft-label in the synthetic dataset $\\mathcal{S}$, {\\it i.e.}, the label $y$ in the synthetic dataset is also trainable. Similarly, \\citep{bohdal2020flexible} also extend the standard example distillation to label distillation by solely optimizing the labels of synthetic datasets. Moreover, they provide improvements on the efficiency of long inner loop optimization via (1) introducing shorter but more frequent inner loop and (2) employing ridge regression to estimate the solution of classifiers to avoid second-order gradient computation. Albeit the BPTT framework has been shown to underperform other algorithms, \\citet{deng2022remember} empirically demonstrate that adding momentum term and longer trajectory in the inner loop optimization can considerably increase the distillation performance.\n\n\n\\iffalse\n\\begin{figure}[t]\n\\centering\n\n\\subfigure[]{\n\\begin{minipage}[b]{0.35\\textwidth}\n\\centering\n \t\t\\includegraphics[width=\\columnwidth]{figures\/standard_dd.png}\n \t\t\\end{minipage}\n\t\t\\label{figure:standard_dd} \n \t}\n\\subfigure[]{\n\\begin{minipage}[b]{0.26\\textwidth}\n\\centering\n \t\t\\includegraphics[width=\\columnwidth]{figures\/standard_dd.png}\n \t\t\\end{minipage}\n\t\t\\label{figure:xx1} \n \t}\n\\subfigure[]{\n\\begin{minipage}[b]{0.3\\textwidth}\n\\centering\n \t\t\\includegraphics[width=\\columnwidth]{figures\/standard_dd.png}\n \t\t\\end{minipage}\n\t\t\\label{figure:xx2} \n \t}\n\\caption{xxx.}\n\\label{figure:xx4}\n\\end{figure}\n\\fi\n\n \n\n\\subsection{Gradient Matching Approach}\nAs optimizing the meta-loss is not easy, some substitutes of objective have been proposed to distil the dataset. To achieve comparable generalization performance, the intuition is to employ the distilled dataset to imitate the effect on model parameters, {\\it i.e.}, matching the training trajectories introduced by $\\mathcal{S}$ and $\\mathcal{T}$. With a fixed parameter initialization, the training trajectory of $\\{\\boldsymbol{\\mathcal{\\theta}}^{(0)}, \\boldsymbol{\\mathcal{\\theta}}^{(1)}, \\cdots, \\boldsymbol{\\mathcal{\\theta}}^{(T)} \\}$ is equal to a series of gradients $\\{\\boldsymbol{g}^{(0)}, \\cdots, \\boldsymbol{g}^{(T)}\\}$. 
Therefore, matching the gradients induced by $\\mathcal{S}$ and $\\mathcal{T}$ is a convincing proxy to mimic the influence on model parameters \\citep{zhao2021dataset}, and the objective function can be formulated as\n\\begin{equation}\n \\mathcal{L}(\\mathcal{S}) = \\sum_{k=0}^{T-1} D\\left( \\boldsymbol{g}_{\\mathcal{S}}^{(k)}, \\boldsymbol{g}_{\\mathcal{T}}^{(k)} \\right),\n\\end{equation}\nwhere $\\boldsymbol{g}_{\\mathcal{S}}^{(k)}$ and $\\boldsymbol{g}_{\\mathcal{T}}^{(k)}$ denote the gradient {\\it w.r.t.} model parameters generated by $\\mathcal{S}$ and $\\mathcal{T}$ in the $k$-th training epoch, respectively. It is worth noting that these two gradients are induced on the same parameter $\\boldsymbol{\\theta}^{(k)}$ to be cohesive with the bilevel optimization. Concretely, the gradient matching is class-wise: $\\mathcal{L}^{(k)}=\\sum_{c=1}^C D\\left(\\nabla_{\\boldsymbol{\\theta}^{(k)}} \\mathcal{R}\\left(\\mathcal{S}_c, \\boldsymbol{\\theta}^{(k)}\\right), \\nabla_{\\boldsymbol{\\theta}^{(k)}} \\mathcal{R}\\left(\\mathcal{T}_c, \\boldsymbol{\\theta}^{(k)}\\right) \\right)$, where $c$ is the class index and $\\mathcal{T}_c$ denotes the examples belong to the $c$-th class. According to \\citet{lee2022dataset}, the class-wise gradient matching pays much attention to the class-common features and overlooks the class-discriminative features in the target dataset, and the distilled synthetic dataset $\\mathcal{S}$ does not possess enough class-discriminative information, especially when the target dataset is fine-grained, {\\it i.e.}, class-common features are the dominant. Based on this finding, they propose a improved objective function of $\\mathcal{L}^{(k)}=D\\left(\\sum_{c=1}^C\\nabla_{\\boldsymbol{\\theta}^{(k)}} \\mathcal{R}\\left(\\mathcal{S}_c, \\boldsymbol{\\theta}^{(k)}\\right), \\sum_{c=1}^C \\nabla_{\\boldsymbol{\\theta}^{(k)}} \\mathcal{R}\\left(\\mathcal{T}_c, \\boldsymbol{\\theta}^{(k)}\\right) \\right)$ to better capture constractive signals between different classes. A similar approach is proposed by \\citet{jiang2022delving}, which considers both intraclass and inter-class gradient matching. To alleviate easy overfitting on the small dataset $\\mathcal{S}$, \\citet{kim2022dataset} propose to do inner loop optimization on the target dataset $\\mathcal{T}$. In spite of data augmentation brings a large performance increase, conducting augmentation on distilled datasets has no improvement on the final test accuracy, because the synthetic images have different characteristics compared to natural images and also are not optimized under the supervision of various transformations. To leverage data augmentation on synthetic datasets. \\citet{zhao2021dsa} design data siamese augmentation (DSA) that homologously augments the distilled data and the target data during the distillation process. In DSA, the augmented form of distilled data has a consistent correspondence {\\it w.r.t.} the augmented form of the target data, which permits the knowledge transfer from the transformation of target images to the corresponding transformation of synthetic images. Consequently, the augmented synthetic images also possess meaningful characteristics of the natural images. Due to its superior compatibility, DSA has been widely equipped in many data matching methods. 
\n\n\\subsection{Trajectory matching approach}\n\nUnlike circuitously matching the gradients, \\citet{cazenavette2022distillation} directly matches the long-range training trajectory between the target dataset and the synthetic dataset. Concretely, they collect the typical training trajectory {\\it w.r.t.} the target dataset into the buffer in advance, and then ingredients in the buffer are randomly selected to initialize the networks for training $\\mathcal{S}$. After collecting the trajectory of $\\mathcal{S}$, the synthetic dataset is updated by matching their trajectory. The objective loss of matching trajectory is defined as $\\mathcal{L}=\\| \\boldsymbol{\\theta}^{(k+N)} - \\boldsymbol{\\theta}_{\\mathcal{T}}^{(k+M)} \\|_2 ^2 \/ \\|\\boldsymbol{\\theta}_{\\mathcal{T}}^{(k)} - \\boldsymbol{\\theta}_{\\mathcal{T}}^{(k+M)} \\|_2 ^2 $, where $\\boldsymbol{\\theta}_{\\mathcal{T}}$ denote the target parameter by training the model on $\\mathcal{T}$ and is stored in the buffer, and $\\boldsymbol{\\theta}^{k+N}$ are the parameter by training the model on $\\mathcal{S}$ for $N$ epochs with the initialization of $\\boldsymbol{\\theta}_{\\mathcal{T}}^{(k)}$. The denominator in the loss function is for normalization.\n\n\nAlbeit trajectory matching received empirical success, \\citet{du2022minimizing} propose that there exists an accumulated trajectory error when matching the trajectory due to the segmented alignment from $\\boldsymbol{\\theta}_{\\mathcal{T}}^{(k)}$ to $\\boldsymbol{\\theta}_{\\mathcal{T}}^{(k+M)}$, and they alleviate this by adding random noise when initialize the distilled network to improve robustness {\\it w.r.t.} the accumulated trajectory error.\n\nCompared to matching gradients, while trajectory matching side-steps second-order gradient computation, it, unfortunately, requires unrolling $N$ SGD updates during the meta-gradient backpropagation as the existence of $\\boldsymbol{\\theta}^{(k+N)}$. The unrolled gradient computation significantly increases the memory burden and impedes scalability. By disentangling the meta-gradient {\\it w.r.t.} synthetic examples into two passes, \\citet{cui2022scaling} greatly reduce the memory required by trajectory matching. Motivated by knowledge distillation \\citep{gou2021knowledge}, they propose to assign soft-label to synthetic examples with pre-trained models in the buffer, and the soft-label helps learn intraclass information and consequently improves distillation performance.\n\n\n\n\n\n\n\n\\subsection{Distribution Matching Approach}\nAlbeit the parameter-wise match shows a satisfying performance, \\citet{zhao2021distribution} visualise the distilled data in 2-dimension and reveal that there is a large distribution discrepancy between the distilled data and the target data. In other words, the distilled dataset can not comprehensively cover the data distribution. Based on this discovery, they propose to match the synthetic and target data from the distribution perspective for dataset distillation. Concretely, they employ the pre-trained feature extractor $\\psi_{\\boldsymbol{v}}$ with the parameter $\\boldsymbol{v}$ to achieve the mapping from input space to feature space. The synthetic data is optimized according to the objective function $\\mathcal{L}(\\mathcal{S})=\\sum_{c=1}^C \\|\\psi_{\\boldsymbol{v}}(\\mathcal{S}_c) - \\psi_{\\boldsymbol{v}}(\\mathcal{T}_c) \\|^2$, where $c$ denotes the class index. 
Though this distribution matching drops bilevel optimization for boosting, it empirically underperforms the above gradient and trajectory matching approaches. \\citet{wang2022cafe} improve the distribution alignment from several aspects: (1) using multiple-layer features other than only the last-layer outputs for matching; (2) proposing the discrimination loss to enlarge the class distinction of synthetic data; and (3) recalling the bilevel optimization that updates $\\mathcal{S}$ with different model parameters for better generalization. Besides, \\citet{lee2022kfs} analyze that the subsampling in distribution matching is biased, and they propose to use full batch training to mitigate this problem.\n\n\n\n\n\n\n\\subsection{Kernel Regression Approach}\nBecause the solutions of deep non-linear neural networks are commonly intractable, multi-step gradient descent is used to get the optimal parameters in terms of the synthetic dataset. Nevertheless, the iterative algorithm makes the loss backpropagation in DD very inefficient. Different from distilling datasets with deep networks, \\citet{nguyen2020dataset} considers the dataset distillation in the simple kernel regression regime that admits a closed-form solution. The regression model is defined as $f(\\boldsymbol{x}) = \\boldsymbol{w}^\\top \\psi(\\boldsymbol{x})$, where $\\psi(\\cdot)$ is a non-linear mapping, and the weight $\\boldsymbol{w}$ is learnable. Given the dataset $\\mathcal{S}$, the weight can be directly solved with $\\boldsymbol{w} = \\psi(X_{s})^\\top\\left(\\mathbf{K}_{X_s X_s}+\\lambda I\\right)^{-1}y_{s}$ \\citep{petersen2008matrix}, where $\\mathbf{K}_{X_s X_s} = [K(\\boldsymbol{s}_i,\\boldsymbol{s}_j)]_{ij}\\in \\mathbb{R}^{n\\times n}$ is the kernel matrix associated to the kernel $K(\\boldsymbol{x},\\boldsymbol{x}^\\prime)=\\langle \\psi(\\boldsymbol{x}), \\psi(\\boldsymbol{x}^\\prime)\\rangle$ and the dataset $\\mathcal{S}$. Therefore, the MSE of predicting the target dataset $\\mathcal{T}$ can be derived as $\\mathcal{L}(\\mathcal{S}) = \\frac{1}{2}\\left\\|y_t-\\mathbf{K}_{X_t X_s}\\left(\\mathbf{K}_{X_s X_s}+\\lambda I\\right)^{-1} y_s\\right\\|^2$. In the kernel ridge regression regime, the synthetic dataset $\\mathcal{S}$ can be directly updated by backpropagating the meta-gradient through the kernel function. Albeit this formulation is solid, this algorithm is designed in the KRR scenario and only employs simple kernels, which causes performance drops when the distilled dataset is transferred to train neural networks. \\citet{jacot2018neural} propose the neural tangent kernel (NTK) theory that proves the equivalence between training infinite-width neural networks and kernel regression. With this equivalence, \\citet{nguyen2021dataset} employ the infinite-width networks as the kernel for dataset distillation, which narrows the gap between the scenarios of KRR and deep learning. However, computing the kernel matrix $\\mathbf{K}_{X_s X_t}$ and its corresponding inverse is expensive and almost intractable for large target datasets due to the complexity of $\\mathcal{O}(|\\mathcal{T}||\\mathcal{S}|)$. \\citet{loo2022efficient} replace NTK kernel with neural network Gaussian process (NNGP) kernel that only considers the training of last-layer classifier for speed up. With random features in NNGP, the computation of kernel matrix can be decomposed, and the complexity of $\\mathbf{K}_{X_s X_t}$ becomes $\\mathcal{O}(|\\mathcal{T}|+|\\mathcal{S}|)$. 
A similar method is also proposed by \citet{zhou2022dataset}, which fixes the feature extractor in the neural networks to improve the efficiency of meta-gradient backpropagation.
\fi


\section{Factorized Dataset Distillation}

In representation learning, although image data live in an extremely high-dimensional space, they may lie on a low-dimensional manifold and depend on only a few features, and one can recover the source image from the low-dimensional features with a suitable decoder \citep{bengio2013representation,zhang2018network}. For this reason, it is plausible to learn synthetic datasets implicitly by optimizing their factorized features and the corresponding decoders, which is termed {\it factorized dataset distillation}, as shown in Figure \ref{figure:disentange}. To be consistent with the decoder terminology, we refer to these features as {\it codes}. By factorizing synthetic datasets into a combination of codes and decoders, the dataset can be compressed further, and the information redundancy among distilled images can also be reduced. According to the learnability of the code and the decoder, we classify factorized dataset distillation into three categories: code-based DD, decoder-based DD, and code-decoder DD.

\begin{figure*}[t]
\centering

\subfigure[Nonfactorized dataset distillation]{
\begin{minipage}[b]{0.4\textwidth}
\centering
\includegraphics[width=0.95\columnwidth]{figures/vanilla_dd.pdf}
\end{minipage}
\label{figure:vanilla_dd}
}
\subfigure[Factorized dataset distillation]{
\begin{minipage}[b]{0.56\textwidth}
\centering
\includegraphics[width=0.95\columnwidth]{figures/disentangled_dd.pdf}
\end{minipage}
\label{figure:disentangled_dd}
}
\caption{Schematic diagrams of nonfactorized dataset distillation and factorized dataset distillation.}
\label{figure:disentange}
\end{figure*}

Code-based DD aims to learn a series of low-dimensional codes that generate highly informative images through a fixed generator. In \citep{zhao2022synthesizing}, the latent vectors fed into a GAN generator are learned so that the generator produces informative images. Concretely, the authors invert real examples through the GAN generator and collect the corresponding latent codes. The latent vectors are then further optimized with the distribution matching algorithm. In this way, the optimized latent vectors induce more informative synthetic examples through the pretrained GAN generator. Beyond GANs, \citet{kim2022dataset} employed a deterministic multi-formation function $\texttt{multi-form}(\cdot)$ as the decoder to create synthetic data from fewer condensed data, {\it i.e.}, the synthetic data are generated by $\mathcal{S} = \texttt{multi-form} (\mathcal{C})$. The condensed data $\mathcal{C}$ are then optimized in an end-to-end fashion with the gradient matching approach.

Different from code-based DD, decoder-based DD solely learns decoders that are used to produce highly informative data. In \citep{such2020generative}, a generative teaching network (GTN), {\it i.e.}, a trainable network that generates synthetic images from random noise conditioned on given labels, was proposed; the meta-gradients are backpropagated via BPTT to update the GTN rather than the synthetic data.
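Independently of which matching objective is used, the factorized variants share the same computational pattern: a set of trainable codes is mapped through a decoder to the synthetic images, and the distillation loss is backpropagated into the codes, the decoder, or both. The following PyTorch-style sketch is a generic illustration under assumed shapes and a toy decoder, not the architecture of any specific method.
\begin{verbatim}
import math
import torch
import torch.nn as nn

class FactorizedSyntheticSet(nn.Module):
    # Codes plus decoder jointly parameterize the synthetic images.
    # Freezing the decoder recovers code-based DD; freezing the codes
    # (e.g., feeding fixed noise) recovers decoder-based DD.
    def __init__(self, num_codes, code_dim, img_shape=(3, 32, 32)):
        super().__init__()
        self.codes = nn.Parameter(torch.randn(num_codes, code_dim))
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 256), nn.ReLU(),
            nn.Linear(256, math.prod(img_shape)))
        self.img_shape = img_shape

    def forward(self):
        return self.decoder(self.codes).view(-1, *self.img_shape)

syn = FactorizedSyntheticSet(num_codes=100, code_dim=64)
opt = torch.optim.Adam(syn.parameters(), lr=1e-3)
# images = syn()                      # regenerate synthetic images each step
# loss = matching_loss(images, ...)   # any of the matching objectives above
# loss.backward(); opt.step()         # gradients reach both codes and decoder
\end{verbatim}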
Naturally, code-decoder DD combines code-based and decoder-based DD and allows both the codes and the decoders to be trained. \citet{deng2022remember} generated synthetic images via matrix multiplication between codes and decoders, which they call the {\it memory} and the {\it addressing functions}. Specifically, they use the memory $\mathcal{M} = \{\boldsymbol{b}_1, \cdots, \boldsymbol{b}_K\}$ to store bases $\boldsymbol{b}_i \in \mathbb{R}^{d}$ that have the same dimension as the target data, and $r$ addressing functions $\mathcal{A} = \{\boldsymbol{A}_1,\cdots,\boldsymbol{A}_r\}$ are used to recover the synthetic images according to a given one-hot label $\boldsymbol{y}$ as follows:
\begin{equation}
\label{eq:memory}
 \boldsymbol{s}_i^\top = \boldsymbol{y}^\top \boldsymbol{A}_i [\boldsymbol{b}_1, \cdots, \boldsymbol{b}_K] ^\top,
\end{equation}
where the one-hot label $\boldsymbol{y} \in \mathbb{R}^{C\times 1}$, $\boldsymbol{A}_i \in \mathbb{R}^{C\times K}$, and $C$ is the number of classes. By plugging a different addressing function $\boldsymbol{A}_i$ into Eq. \ref{eq:memory}, a total of $r$ synthetic images can be generated from the bases for each category. During the distillation process, both $\mathcal{M}$ and $\mathcal{A}$ are optimized within the meta-learning framework. Different from the matrix multiplication in \citep{deng2022remember}, \citet{liu2022dataset} employ {\it hallucinator} networks as decoders to generate synthetic data from {\it bases}. Specifically, a hallucinator network consists of three parts: an encoder $\texttt{enc}$, an affine transformation with trainable scale $\sigma$ and shift $\mu$, and a decoder $\texttt{dec}$. A basis $\boldsymbol{b}$ is passed through the hallucinator network to generate the corresponding synthetic image $\boldsymbol{s}$ as follows:
\begin{equation}
 \mathbf{f}_1 = \texttt{enc}(\boldsymbol{b}), \quad \mathbf{f}_2 = \sigma \times \mathbf{f}_1 + \mu, \quad \boldsymbol{s} = \texttt{dec}(\mathbf{f}_2),
\end{equation}
where the multiplication is element-wise. In addition, to enlarge the knowledge divergence encoded by different hallucinator networks for efficient compression, the authors proposed an adversarial contrastive constraint that maximizes the divergence between the features of the synthetic data generated by different hallucinators. A similar code-decoder factorization is also presented in \citet{lee2022kfs}, where an improved distribution matching objective is adopted to optimize the latent codes and decoders.

Through implicitly learning latent codes and decoders, factorized DD enjoys advantages such as more compact representations and representations shared across classes, which consequently improve dataset distillation performance. Notably, this code-decoder factorization is compatible with the aforementioned distillation approaches. Therefore, progress on synthetic data generation and on distillation frameworks can promote dataset distillation in parallel.


\section{Performance Comparison}
\label{sec:performance comparison}

\begin{table}[t]
 \centering
 \caption{Performance comparison of different dataset distillation methods on various datasets. The abbreviations GM, TM, and DM stand for gradient matching, trajectory matching, and distribution matching, respectively. 
Whole denotes the test accuracy in terms of the whole target dataset.}\n \\resizebox{\\linewidth}{!}{\n \\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c}\n \\toprule\n\n \\multirow{2}{*}{Methods} & \\multirow{2}{*}{Schemes} & \\multicolumn{3}{c|}{MNIST} & \\multicolumn{3}{c|}{FashionMNIST} & \\multicolumn{3}{c|}{SVHN} & \\multicolumn{3}{c|}{CIFAR-10} & \\multicolumn{3}{c|}{CIFAR-100} & \\multicolumn{3}{c}{Tiny ImageNet} \\\\\n \\cline{3-20}\n & & $1$ & $10$ & $50$ & $1$ & $10$ & $50$ & $1$ & $10$ & $50$ & $1$ & $10$ & $50$ & $1$ & $10$ & $50$ & $1$ & $10$ & $50$ \\\\\n\n \\hline\n Random & - & \\format{64.9}{3.5} & \\format{95.1}{0.9} & \\format{97.9}{0.2} & \\format{51.4}{3.8} & \\format{73.8}{0.7} & \\format{82.5}{0.7} & \\format{14.6}{1.6} & \\format{35.1}{4.1} & \\format{70.9}{0.9} & \\format{14.4}{2.0} & \\format{26.0}{1.2} & \\format{43.4}{1.0} & \\format{4.2}{0.3} & \\format{14.6}{0.5} & \\format{30.0}{0.4} & \\format{1.4}{0.1} & \\format{5.0}{0.2} & \\format{15.0}{0.4} \\\\\n\n Herding & - & \\format{89.2}{1.6} & \\format{93.7}{0.3} & \\format{94.8}{0.2} & \\format{67.0}{1.9} & \\format{71.1}{0.7} & \\format{71.9}{0.8} & \\format{20.9}{1.3} & \\format{50.5}{3.3} & \\format{72.6}{0.8} & \\format{21.5}{1.2} & \\format{31.6}{0.7} & \\format{40.4}{0.6} & \\format{8.4}{0.3} & \\format{17.3}{0.3} & \\format{33.7}{0.5} & \\format{2.8}{0.2} & \\format{6.3}{0.2} & \\format{16.7}{0.3} \\\\\n\n \n DD \\citep{wang2018dataset} & BPTT & - & \\format{79.5}{8.1} & - & - & - & - &- & - & - & \n - & \\format{36.8}{1.2} & - & - & - & - & \n - & - & - \\\\\n\n\n LD \\citep{bohdal2020flexible} & BPTT & \\format{60.9}{3.2} & \\format{87.3}{0.7} & \\format{93.3}{0.3} & \n - & - & - & - & - & - & \n \\format{25.7}{0.7} & \\format{38.3}{0.4} & \\format{42.5}{0.4} & \\format{11.5}{0.4} & - & - & \n - & - & - \\\\\n\n DC \\citep{zhao2021dataset} & GM & \\format{91.7}{0.5} & \\format{94.7}{0.2} & \\format{98.8}{0.2} & \n \\format{70.5}{0.6} & \\format{82.3}{0.4} & \\format{83.6}{0.4} & \\format{31.2}{1.4} & \\format{76.1}{0.6} & \\format{82.3}{0.3} & \\format{28.3}{0.5} & \\format{44.9}{0.5} & \\format{53.9}{0.5} & \\format{13.9}{0.4} & \\format{32.3}{0.3} & \\format{42.8}{0.4} & - & - & - \\\\\n\n DSA \\citep{zhao2021dsa} & GM & \\format{88.7}{0.6} & \\format{97.8}{0.1} & \\orformat{99.2}{0.1} & \n \\format{70.6}{0.6} & \\format{86.6}{0.3} & \\format{88.7}{0.2} & \\format{27.5}{1.4} & \\orformat{79.2}{0.5} & \\format{84.4}{0.4} & \\format{28.8}{0.7} & \\format{52.1}{0.5} & \\format{60.6}{0.5} & \\format{13.9}{0.4} & \\format{32.3}{0.3} & \\format{42.8}{0.4} & - & - & -\\\\\n\n DCC \\citep{lee2022dataset} & GM & - & - & - & \n - & - & - & \n \\format{47.5}{2.6} & \\orformat{80.5}{0.6} & \\orformat{87.2}{0.3} & \\format{34.0}{0.7} & \\format{54.5}{0.5} & \\format{64.2}{0.4} & \\format{14.6}{0.3} & \\format{33.5}{0.3} & \\format{39.3}{0.4} & - & - & - \\\\\n\n MTT \\citep{cazenavette2022distillation} & TM & \\format{91.4}{0.9} & \\format{97.3}{0.1} & \\format{98.5}{0.1} & \n \\format{75.1}{0.9} & \\orformat{87.2}{0.3} & \\format{88.3}{0.1} & - & - & - & \n \\format{46.3}{0.8} & \\format{65.3}{0.7} & \\format{71.6}{0.2} & \\format{24.3}{0.3} & \\format{40.1}{0.4} & \\format{47.7}{0.2} & \\format{8.8}{0.3} & \\format{23.2}{0.2} & \\orformat{28.0}{0.3} \\\\\n\n FTD \\citep{du2022minimizing} & TM & - & - & - & - & - & - & - & - & - & \n \\format{46.8}{0.3} & \\orformat{66.6}{0.3} & \\orformat{73.8}{0.2} & \\format{25.2}{0.2} & \\orformat{43.4}{0.3} & \\format{50.7}{0.3} & \\orformat{10.4}{0.3} & 
\\orformat{24.5}{0.2} & - \\\\\n\n TESLA \\citep{cui2022scaling} & TM & - & - & - & - & - & - & - & - & - & \n \\format{48.5}{0.8} & \\orformat{66.4}{0.8} & \\orformat{72.6}{0.7} & \\format{24.8}{0.4} & \\format{41.7}{0.3} & \\format{47.9}{0.3} & \\format{7.7}{0.2} & \\format{18.8}{1.3} & \\orformat{27.9}{1.2} \\\\\n\n DM \\citep{zhao2023distribution} & DM & \\format{89.2}{1.6} & \\format{97.3}{0.3} & \\format{94.8}{0.2} & \n - & - & - & - & - & - & \n \\format{26.0}{0.8} & \\format{48.9}{0.6} & \\format{63.0}{0.4} & \\format{11.4}{0.3} & \\format{29.7}{0.3} & \\format{43.6}{0.4} & \\format{3.9}{0.2} & \\format{12.9}{0.4} & \\format{24.1}{0.3} \\\\\n\n CAFE \\citep{wang2022cafe} & DM & \\format{90.8}{0.5} & \\format{97.5}{0.1} & \\format{98.9}{0.2} & \n \\format{73.7}{0.7} & \\format{83.0}{0.3} & \\format{88.2}{0.3} & \\format{42.9}{3.0} & \\format{77.9}{0.6} & \\format{82.3}{0.4} & \\format{31.6}{0.8} & \\format{50.9}{0.5} & \\format{62.3}{0.4} & \\format{14.0}{0.3} & \\format{31.5}{0.2} & \\format{42.9}{0.2} & - & - & - \\\\\n\n KIP \\citep{nguyen2020dataset,nguyen2021dataset} & KRR & \\format{90.1}{0.1} & \\format{97.5}{0.0} & \\format{98.3}{0.1} & \n \\format{73.5}{0.5} & \\format{86.8}{0.1} & \\format{88.0}{0.1} & \\orformat{57.3}{0.1} & \\format{75.0}{0.1} & \\orformat{85.0}{0.1} & \\orformat{49.9}{0.2} & \\format{62.7}{0.3} & \\format{68.6}{0.2} & \\format{15.7}{0.2} & \\format{28.3}{0.1} & - & \n - & - & - \\\\\n\n FRePo \\citep{zhou2022dataset} & KRR & \\orformat{93.0}{0.4} & \\orformat{98.6}{0.1} & \\orformat{99.2}{0.0} & \n \\orformat{75.6}{0.3} & \\format{86.2}{0.2} & \\orformat{89.6}{0.1} & - & - & - & \n \\format{46.8}{0.7} & \\format{65.5}{0.4} & \\format{71.7}{0.2} & \\orformat{28.7}{0.1} & \\orformat{42.5}{0.2} & \\format{44.3}{0.2} & \\orformat{15.4}{0.3} & \\orformat{25.4}{0.2} & - \\\\\n\n RFAD \\citep{loo2022efficient} & KRR & \\orformat{94.4}{1.5} & \\orformat{98.5}{0.1} & \\format{98.8}{0.1} & \n \\orformat{78.6}{1.3} & \\orformat{87.0}{0.5} & \\orformat{88.8}{0.4} & \\orformat{52.2}{2.2} & \\format{74.9}{0.4} & \\format{80.9}{0.3} & \\orformat{53.6}{1.2} & \\format{66.3}{0.5} & \\format{71.1}{0.4} & \\orformat{26.3}{1.1} & \\format{33.0}{0.3} & - &\n - & - & - \\\\\n\n IDC$^\\ast$ \\citep{kim2022dataset} & GM & - & - & - & - & - & - & \\format{68.1}{0.1} & \\format{87.3}{0.2} & \\format{90.2}{0.1} & \\format{50.0}{0.4} & \\format{67.5}{0.5} & \\format{74.5}{0.1} & - & \\format{44.8}{0.2} & - &\n - & - & -\\\\\n\n RTP$^\\ast$ \\citep{deng2022remember} & BPTT & \\blformat{98.7}{0.7} & \\blformat{99.3}{0.5} & \\blformat{99.4}{0.4} & \n \\blformat{88.5}{0.1} & \\blformat{90.0}{0.7} & \\blformat{91.2}{0.3} & \\blformat{87.3}{0.1} & \\format{89.1}{0.2} & \\format{89.5}{0.2} & \\blformat{66.4}{0.4} & \\format{71.2}{0.4} & \\format{73.6}{0.4} & \\format{34.0}{0.4} & \\format{42.9}{0.7} & - & \n \\format{16.0}{0.7} & - & - \\\\\n\n HaBa$^\\ast$ \\citep{liu2022dataset} & TM & - & - & - & - & - & - & \\format{69.8}{1.3} & \\format{83.2}{0.4} & \\format{88.3}{0.1} & \\format{48.3}{0.8} & \\format{69.9}{0.4} & \\format{74.0}{0.2} & \\format{33.4}{0.4} & \\format{40.2}{0.2} & \\blformat{47.0}{0.2} & - & - & - \\\\\n\n KFS$^\\ast$ \\citep{lee2022kfs} & DM & - & - & - & - & - & - & \n \\format{82.9}{0.4} & \\blformat{91.4}{0.2} & \\blformat{92.2}{0.1} & \\format{59.8}{0.5} & \\blformat{72.0}{0.3} & \\blformat{75.0}{0.2} & \\blformat{40.0}{0.5} & \\blformat{50.6}{0.2} & - & \n \\blformat{22.7}{0.3} & \\blformat{27.8}{0.2} & - \\\\\n \\cline{3-20}\n Whole & - & 
\multicolumn{3}{c|}{\format{99.6}{0.0}} & \multicolumn{3}{c|}{\format{93.5}{0.1}} & \multicolumn{3}{c|}{\format{95.4}{0.1}} & \multicolumn{3}{c|}{\format{84.8}{0.1}} & \multicolumn{3}{c|}{\format{56.2}{0.3}} & \multicolumn{3}{c}{\format{37.6}{0.4}}\\
 \bottomrule
 \end{tabular}
 }
 \label{tab:dd performance}
\end{table}

\begin{table}[t]
 \centering
 \caption{Performance comparison of different dataset distillation methods on ImageNet-1K.}
 \begin{tabular}{c|c|c|c|c}
 \toprule
 Methods & IPC=$1$ & IPC=$2$ & IPC=$10$ & IPC=$50$ \\
 \hline
 Random & \format{0.5}{0.1} & \format{0.9}{0.1} & \format{3.6}{0.1} & \format{15.3}{2.3} \\
 DM \citep{zhao2023distribution} & \format{1.5}{0.1} & \format{1.7}{0.1} & - & - \\
 FRePo \citep{zhou2022dataset} & \format{7.5}{0.3} & \format{9.7}{0.2} & - & - \\
 TESLA \citep{cui2022scaling} & \bformat{7.7}{0.2} & \bformat{10.5}{0.3} & \bformat{17.8}{1.3} & \bformat{27.9}{1.2} \\
 \cline{2-5}
 Whole & \multicolumn{4}{c}{\format{33.8}{0.3}} \\
 \bottomrule
 \end{tabular}
 \label{tab:dd performance imagenet}
\end{table}

To demonstrate the effectiveness of dataset distillation, we collect and summarize the classification performance of representative dataset distillation approaches on the following image datasets: MNIST \citep{lecun1998gradient}, FashionMNIST \citep{xiao2017/online}, SVHN \citep{netzer2011reading}, CIFAR-10/100 \citep{krizhevsky2009learning}, Tiny ImageNet \citep{le2015tiny}, and ImageNet-1K \citep{russakovsky2015imagenet}. The details of these datasets are presented below. Recently, \citet{cui2022dc} published a benchmark for dataset distillation; however, it covers only five DD methods, whereas this survey provides a comprehensive comparison of over $15$ existing dataset distillation methods.

MNIST is a grayscale dataset that consists of $60,000$ training images and $10,000$ test images from $10$ different classes; the size of each example image is $28\times 28$ pixels. FashionMNIST shares the same format as MNIST, with $60,000$ training images and $10,000$ test images of $28\times 28$ grayscale fashion items from $10$ classes. SVHN is a color dataset that consists of $73,257$ digits for training and $26,032$ digits for testing, and the example images in SVHN are $32\times 32$ RGB images. CIFAR-10/100 are composed of $50,000$ training images and $10,000$ test images from $10$ and $100$ different classes, respectively; the RGB images in CIFAR-10/100 are of size $32\times 32$ pixels. Tiny ImageNet consists of $100,000$ training and $10,000$ test $64\times 64$ RGB images from $200$ different classes. ImageNet-1K is a large image dataset that consists of over $1,000,000$ high-resolution RGB images from $1,000$ different classes.

An important factor affecting the test accuracy {\it w.r.t.} the distilled dataset is the {\it distillation budget}, which constrains the size of the distilled dataset via the number of images allocated per class (IPC). Usually, the distilled dataset is set to have the same number of classes as the target dataset. Therefore, for a target dataset with $100$ classes, setting the distillation budget to $\text{IPC}=10$ means that there are a total of $10\times 100 = 1,000$ images in the distilled dataset.

\begin{table*}[t]
 \centering
 \caption{Cross-architecture performance {\it w.r.t.} ResNet-18 (RN-18) and ResNet-152 (RN-152) on CIFAR-10. The results of DC, DSA, MTT, DM, and KIP are derived from \citet{cui2022dc}. 
The results of FTD and CAFE are collected from \\citet{du2022minimizing}; and the result of TESLA is derived from \\citet{cui2022scaling}.}\n \\resizebox{\\linewidth}{!}{\n \\begin{tabular}{c|ccc|ccc|ccc}\n \\toprule\n \\multirow{2}{*}{Methods} & \\multicolumn{3}{c|}{IPC=$1$} & \\multicolumn{3}{c|}{IPC=$10$} & \\multicolumn{3}{c}{IPC=$50$} \\\\ \n \\cline{2-10}\n & \\multicolumn{1}{c}{ConvNet} & \\multicolumn{1}{c}{RN-18} & \\multicolumn{1}{c|}{RN-152} & \\multicolumn{1}{c}{ConvNet} & \\multicolumn{1}{c}{RN-18} & \\multicolumn{1}{c|}{RN-152} & \\multicolumn{1}{c}{ConvNet} & \\multicolumn{1}{c}{RN-18} & \\multicolumn{1}{c}{RN-152} \\\\\n \\hline\n \n DC \\citep{zhao2021dataset} & \\format{28.3}{0.5} & \\format{25.6}{0.6} & \\orformat{15.3}{0.4} & \\format{44.9}{0.5} & \\format{42.1}{0.6} & \\format{16.1}{1.0}& \\format{53.9}{0.5}& \\format{45.9}{1.4}& \\format{19.7}{1.2}\\\\\n \n DSA \\citep{zhao2021dsa} & \\format{28.8}{0.5} & \\format{25.6}{0.6} & \\format{15.1}{0.7} & \\format{52.1}{0.5} & \\format{42.1}{0.6}& \\format{16.1}{1.0} & \\format{60.6}{0.5} & \\format{49.5}{0.7} & \\format{20.0}{1.2}\\\\\n\n MTT \\citep{cazenavette2022distillation} & \\format{46.3}{0.8}& \\orformat{34.2}{1.4} & \\format{13.4}{0.9} & \\format{65.3}{0.7} & \\format{38.8}{0.7} & \\format{15.9}{0.2}& \\format{71.6}{0.2}& \\format{60.0}{0.7}& \\format{20.9}{1.6}\\\\\n\n FTD \\citep{du2022minimizing} & -& -&- &-&-&-& \\orformat{73.8}{0.2}& \\orformat{65.7}{0.3}&-\\\\\n\n TESLA \\citep{cui2022scaling} & -& -& -& \\orformat{66.4}{0.8}& \\orformat{48.9}{2.2}&-&-&-&-\\\\\n\n DM \\citep{zhao2023distribution}& \\format{26.0}{0.8}& \\format{20.6}{0.5} & \\format{14.1}{0.6} & \\format{48.9}{0.6}& \\format{38.2}{1.1}& \\format{15.6}{1.5}& \\format{63.0}{0.4}& \\format{52.8}{0.4}& \\orformat{21.7}{1.3}\\\\\n\n CAFE \\citep{wang2022cafe}&- & -& -&-&-&-& \\format{55.5}{0.4}& \\format{25.3}{0.9}&-\\\\\n \n KIP \\citep{nguyen2021dataset} & \\orformat{49.9}{0.2} & \\format{27.6}{1.1} & \\format{14.2}{0.8} & \\format{62.7}{0.3}& \\format{45.2}{1.4} & \\orformat{16.6}{1.4}& \\format{68.6}{0.2}& \\format{60.0}{0.7}& \\format{20.9}{1.6}\\\\\n\n \\bottomrule\n \\end{tabular}}\n \\label{tab:cross architecture 1}\n\\end{table*}\n\n\\begin{table*}[t]\n \\centering\n \\caption{Cross-architecture performance {\\it w.r.t.} ResNet-10 (RN-10) and DenseNet-121 (DN-121) on CIFAR-10. 
The results are derived from \citet{lee2022kfs}.}
 \resizebox{\linewidth}{!}{
 \begin{tabular}{c|ccc|ccc|ccc}
 \toprule
 \multirow{2}{*}{Methods} & \multicolumn{3}{c|}{IPC=$1$} & \multicolumn{3}{c|}{IPC=$10$} & \multicolumn{3}{c}{IPC=$50$} \\
 \cline{2-10}
 & \multicolumn{1}{c}{ConvNet} & \multicolumn{1}{c}{RN-10} & \multicolumn{1}{c|}{DN-121} & \multicolumn{1}{c}{ConvNet} & \multicolumn{1}{c}{RN-10} & \multicolumn{1}{c|}{DN-121} & \multicolumn{1}{c}{ConvNet} & \multicolumn{1}{c}{RN-10} & \multicolumn{1}{c}{DN-121} \\
 \hline
 DSA \citep{zhao2021dsa} & \format{28.8}{0.7} & \format{25.1}{0.8} & \format{25.9}{1.8} & \format{52.1}{0.5} & \format{31.4}{0.9}& \format{32.9}{1.0} & \format{60.6}{0.5} & \format{49.0}{0.7} & \format{53.4}{0.8}\\
 DM \citep{zhao2023distribution}& \format{26.0}{0.8}& \format{13.7}{1.6} & \format{12.9}{1.8} & \format{48.9}{0.6}& \format{31.7}{1.1}& \format{32.2}{0.8}& \format{63.0}{0.4}& \format{49.1}{0.7}& \format{53.7}{0.7}\\
 IDC \citep{kim2022dataset}& \format{50.0}{0.4} & \format{41.9}{0.6}& \format{39.8}{1.2} & \format{67.5}{0.5}& \format{63.5}{0.1}& \format{61.6}{0.6}& \format{74.5}{0.1}& \format{72.4}{0.5}& \format{71.8}{0.6}\\
 KFS \citep{lee2022kfs} & \orformat{59.8}{0.5} & \orformat{47.0}{0.8} & \orformat{49.5}{1.3} & \orformat{72.0}{0.3}& \orformat{70.3}{0.3} & \orformat{71.4}{0.4}& \orformat{75.0}{0.2}& \orformat{75.1}{0.3}& \orformat{76.3}{0.4}\\
 \bottomrule
 \end{tabular}}
 \label{tab:cross architecture 2}
\end{table*}

\subsection{Standard Benchmark}
\label{sec:standard benchmark}

For a comprehensive comparison, we also present the test accuracy obtained with random selection, a coreset approach, and the original target dataset. For the coreset approach, the Herding algorithm \citep{rebuffi2017icarl} is employed in this survey due to its superior performance. Notably, for most parametric DD methods, ConvNet \citep{gidaris2018dynamic} is the default architecture used to distill the synthetic data, and all test accuracies of the distilled data are obtained by training the same ConvNet architecture for a fair comparison.
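For concreteness, the evaluation protocol behind the reported numbers can be sketched as follows; the optimizer, learning rate, and number of epochs are illustrative placeholders rather than the exact recipes used by the individual methods.
\begin{verbatim}
import torch

def evaluate_distilled(distilled_images, distilled_labels, test_loader,
                       make_convnet, epochs=300, lr=0.01):
    # Train a fresh ConvNet on the distilled set only (IPC x #classes
    # examples), then report accuracy on the real test split.
    net = make_convnet()
    opt = torch.optim.SGD(net.parameters(), lr=lr, momentum=0.9)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(net(distilled_images), distilled_labels).backward()
        opt.step()
    correct = total = 0
    with torch.no_grad():
        for x, y in test_loader:
            correct += (net(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / total
\end{verbatim}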
The empirical comparison results for MNIST, FashionMNIST, SVHN, CIFAR-10/100, and Tiny ImageNet are presented in Table \ref{tab:dd performance}. Although ImageNet-1K poses severe scalability issues and only a few methods have been applied to it, we present the corresponding DD results in Table \ref{tab:dd performance imagenet} for completeness. As shown in these tables, many methods are not evaluated on certain datasets, either because easy datasets are not informative or because large datasets pose scalability problems; we preserve these blanks in the tables for a fairer and more comprehensive comparison. To make the comparison clear, in Table \ref{tab:dd performance} we use orange numbers for the best two nonfactorized DD methods and blue numbers for the best factorized DD method for each dataset and IPC setting. In addition, we mark the factorized DD methods with $^\ast$. Based on the performance comparison in the tables, we make the following observations:
\begin{itemize}
 \item Dataset distillation can be realized on many datasets of various sizes.
 \item Dataset distillation methods outperform random selection and coreset selection by large margins.
 \item Among nonfactorized DD methods, the KRR and trajectory matching approaches achieve the strongest performance on the three larger datasets, CIFAR-10/100 and Tiny ImageNet.
 \item Factorized dataset distillation, which optimizes latent codes and decoders, can largely improve the test accuracy of the distilled data.
 \item KFS \citep{lee2022kfs} has the best overall performance across CIFAR-10/100 and Tiny ImageNet.
\end{itemize}


\subsection{Cross-Architecture Transferability}
\label{sec:cross architecture transferability}

The standard benchmark in Table \ref{tab:dd performance} only shows the performance of DD methods on the specific ConvNet, whereas the distilled dataset is usually also used to train other, unseen network architectures. Therefore, measuring DD performance on different architectures is also important for a more comprehensive evaluation. However, there is no standard set of architectures for evaluating the cross-architecture transferability of DD algorithms, and the choice of architectures varies widely in the DD literature.

In this survey, we collect the DD results on four popular architectures, ResNet-10/18/152 \citep{he2016deep} and DenseNet-121 \citep{huang2017densely}, to evaluate cross-architecture transferability, as shown in Tables \ref{tab:cross architecture 1} and \ref{tab:cross architecture 2}. By investigating these tables, we obtain the following observations:

\begin{itemize}
 \item There is a significant performance drop for nonfactorized DD methods when the distilled data are used to train unseen architectures (Table \ref{tab:cross architecture 1}).
 \item The distilled data do not always possess the best performance across different architectures, which underlines the importance of evaluation on multiple architectures (Table \ref{tab:cross architecture 1}).
 \item The factorized DD methods (IDC and KFS) possess better cross-architecture transferability, {\it i.e.}, they suffer a smaller accuracy drop when encountering unseen architectures (Table \ref{tab:cross architecture 2}).
\end{itemize}



\iffalse
\begin{table}
 
 \centering
 \caption{Performance comparison of different dataset distillation methods on MNIST.}
 \begin{tabular}{c|c|c|c|c}
 \toprule

 \multirow{2}{*}{Methods} & \multirow{2}{*}{Distillation schemes} & \multicolumn{3}{c}{Accuracy} \\
 \cline{3-5}
 & & IPC=$1$ & IPC=$10$ & IPC=$50$ \\

 \hline
 Random & - & $64.9\pm 3.5$ & $95.1\pm 0.9$ & $97.9\pm 0.2$ \\
 Herding & - & $89.2\pm 1.6$ & $93.7\pm 0.3$ & $94.8\pm 0.2$ \\
 DD (\citet{wang2018dataset}) & BPTT & - & $79.5 \pm 8.1$ & - \\
 LD (\citet{bohdal2020flexible}) & BPTT & $60.9\pm 3.2$ & $87.3\pm 0.7$ & $93.3\pm 0.3$ \\
 DC (\citet{zhao2021dataset})& Gradient match & $91.7\pm 0.5$ & $97.4\pm 0.2$ & $98.8\pm 0.2$ \\
 DSA (\citet{zhao2021dsa})& Gradient match & $88.7\pm 0.6$ & $97.8\pm 0.1$ & \bm{$99.2\pm 0.1$} \\
 DCC (\citet{lee2022dataset}) & Gradient match & - & - & - \\
 MTT (\citet{cazenavette2022distillation}) & Trajectory match & $91.4\pm 0.9$ & $97.3 \pm 0.1$ & $98.5\pm 0.1$ \\
 FTD (\citet{du2022minimizing})& Trajectory match & - & - & - 
\\section{Applications}\nDue to its superior performance in compressing massive datasets, dataset distillation has been widely employed in application domains where training efficiency and storage are limited, including continual learning and neural architecture search. Furthermore, due to the correspondence between examples and gradients, dataset distillation can also benefit privacy preservation, federated learning, and adversarial robustness. In this section, we briefly review these applications {\\it w.r.t.} dataset distillation.\n\n\\subsection{Continual Learning}\n\nDuring the training process, when there is a shift in the training data distribution, the model suddenly loses its ability to predict the previously seen data distribution. This phenomenon is referred to as {\\it catastrophic forgetting} and is common in deep learning. To overcome this problem, continual learning has been developed to incrementally learn new tasks while preserving performance on old tasks \\citep{rebuffi2017icarl,castro2018end}. A common method in continual learning is the replay-based strategy, which allows a limited memory to store a few training examples for rehearsal during subsequent training. The key to the replay-based strategy is therefore to select highly informative training examples to store. Benefiting from its ability to extract the essence of datasets, dataset distillation has been employed to compress data for memories with limited storage \\citep{zhao2021dataset,carta2022distilled}. Because the incoming data have a changing distribution, the elements in memory must be updated frequently, which places strict requirements on the efficiency of dataset distillation algorithms. To conveniently embed dataset distillation into the replay-based strategy, \\citet{wiewel2021condensed} and \\citet{sangermano2022sample} decomposed the generation of synthetic data into linear or nonlinear combinations, so that fewer parameters need to be optimized during dataset distillation. In addition to enhancing replay-based methods, dataset distillation can also learn a sequence of stable datasets, and a network trained on these stable datasets does not suffer from catastrophic forgetting \\citep{masarczyk2020reducing}.\n\n
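To illustrate how distilled data plug into the replay-based strategy, the sketch below keeps a small fixed-budget memory that is refreshed by re-distilling whenever a new task arrives; the distill_fn interface and the budget are illustrative assumptions, and any of the distillation algorithms surveyed above could play that role.\n\\begin{verbatim}\nimport torch\n\nclass DistilledReplayMemory:\n    # Fixed-budget replay memory that stores distilled examples instead of raw ones.\n    def __init__(self, distill_fn, images_per_class=10):\n        # distill_fn: any DD algorithm, (images, labels, ipc) -> (syn_x, syn_y)\n        self.distill_fn = distill_fn\n        self.ipc = images_per_class\n        self.syn_x, self.syn_y = None, None\n\n    def update(self, new_images, new_labels):\n        # Merge the incoming task data with the current distilled memory and\n        # re-distill, so the memory stays within budget as the distribution shifts.\n        if self.syn_x is not None:\n            new_images = torch.cat([new_images, self.syn_x])\n            new_labels = torch.cat([new_labels, self.syn_y])\n        self.syn_x, self.syn_y = self.distill_fn(new_images, new_labels, self.ipc)\n\n    def rehearsal_batch(self, batch_size=64):\n        # Sample a rehearsal batch to mix with the current task's mini-batches.\n        idx = torch.randperm(self.syn_x.size(0))[:batch_size]\n        return self.syn_x[idx], self.syn_y[idx]\n\\end{verbatim}\n\n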
\\subsection{Neural Architecture Search}\n\nFor a given dataset, neural architecture search (NAS) aims to find an optimal architecture among thousands of network candidates for better generalization. The NAS process usually involves training the network candidates on a small proxy of the original dataset to save training time, and a generalization ranking is then estimated from these trained candidates. It is therefore important to design the proxy dataset so that models trained on it reflect their true performance on the original data, while the size of the proxy dataset is kept small for the sake of efficiency. To construct proxy datasets, conventional methods such as random selection and greedy search have been developed that do not alter the original data \\citep{c2022speeding,li2020sgas}. \\citet{such2020generative} first proposed optimizing a highly informative synthetic dataset as the proxy for network candidate selection. Many subsequent works have considered NAS as an auxiliary task for testing the proposed dataset distillation algorithms \\citep{zhao2021dataset,zhao2021dsa,zhao2023distribution,du2022minimizing,cui2022dc}. Through simulation on the synthetic dataset, a fairly accurate generalization ranking can be obtained for selecting the optimal architecture while considerably reducing the training time.\n\n
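As a concrete illustration of this use case, the sketch below ranks a pool of candidate architectures by training each of them briefly on the distilled proxy set, reusing the evaluation helper from Section \\ref{sec:standard benchmark}; the candidate pool and the shortened schedule are illustrative assumptions rather than the protocol of any particular surveyed work.\n\\begin{verbatim}\ndef rank_architectures(candidates, syn_images, syn_labels, test_loader):\n    # candidates: dict mapping an architecture name to a constructor\n    # that returns a freshly initialized model.\n    proxy_scores = {}\n    for name, make_model in candidates.items():\n        # Short training run on the distilled proxy set only.\n        proxy_scores[name] = evaluate_distilled(syn_images, syn_labels, test_loader,\n                                                model_fn=make_model, epochs=50)\n    # The usual sanity check is the rank correlation (e.g. Spearman) between\n    # this proxy ranking and the accuracy obtained by full training on the\n    # original data.\n    return sorted(proxy_scores, key=proxy_scores.get, reverse=True)\n\\end{verbatim}\n\n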
\\subsection{Privacy Protection}\nAs overparameterized neural networks easily memorize all the training data, there is a risk of privacy leakage through inference attacks on well-trained networks \\citep{shokri2017membership,long2018understanding,salem2018ml,hu2022membership}. With a synthetic dataset, the network avoids explicitly training on the original dataset, which consequently helps protect data privacy \\citep{zhou2022dataset}. In addition, \\citet{chen2022privacy} employed gradient matching to distill private datasets by adding differential privacy noise to the gradients of the synthetic data. Theoretically, \\citet{dong2022privacy} and \\citet{zheng2023differentially} built a connection between dataset distillation and differential privacy, and proved the superiority of dataset distillation in privacy preservation over conventional private data generation methods. However, \\citet{carlini2022no} argued that there are flaws in both the experimental and theoretical evidence of \\citet{dong2022privacy}: (1) the membership inference baseline should include all training examples, whereas only $1\\%$ of the training points were used to match the synthetic sample size, which inflates the attack accuracy of the baseline; and (2) the privacy analysis rests on an assumption that the output of the learning algorithm follows an exponential distribution in terms of the loss, and this assumption already implies differential privacy.\n\n\\subsection{Federated Learning}\n\nFederated learning has received increasing attention in the past few years due to its advantages in distributed training and private data protection \\citep{yang2019federated}. The federated learning framework consists of multiple clients and one central server, and each client possesses exclusive data for training the corresponding local model \\citep{mcmahan2017communication}. In one round of federated learning, clients transmit the induced gradients or model parameters to the server after training with their exclusive data. The central server then aggregates the received gradients or parameters to update the global model and broadcasts the new parameters to the clients for the next round. In the federated learning scenario, the data distributed across clients are often non-i.i.d., which biases the minima of the local models and significantly hinders convergence. Consequently, the data heterogeneity imposes a remarkable communication burden between the server and the clients. \\citet{goetz2020federated} proposed distilling a small set of synthetic data from the original exclusive data by matching gradients, and then transmitting the synthetic data, instead of a large number of gradients, to the server for model updating. Other distillation approaches, such as BPTT \\citep{hu2022fedsynth,zhou2020distilled}, KRR \\citep{song2022federated}, and distribution matching \\citep{xiong2022feddm}, have also been employed to compress the exclusive data and alleviate the communication cost of each transmission round. However, \\citet{liu2022meta} discovered that synthetic data generated by dataset distillation are still heterogeneous. To address this problem, they proposed two strategies, dynamic weight assignment and meta knowledge sharing, applied during the distillation process, which significantly accelerate the convergence of federated learning. Apart from compressing the local data, \\citet{pi2022dynafed} also distilled data via trajectory matching on the server, which allows the synthetic data to capture global information. The distilled data can then be used to fine-tune the server model and speed up convergence.\n\n
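The communication pattern described above can be sketched as follows: each client distills its exclusive data and uploads the small synthetic set instead of gradients, and the server updates the global model on the pooled synthetic data. The distill_fn interface, the server update loop, and the hyperparameters are illustrative placeholders.\n\\begin{verbatim}\nimport torch\nimport torch.nn.functional as F\n\ndef federated_round_with_distillation(global_model, clients, distill_fn,\n                                      ipc=10, server_steps=50, lr=0.01):\n    # 1) Each client compresses its exclusive data into a few synthetic examples.\n    uploads = []\n    for client_images, client_labels in clients:\n        syn_x, syn_y = distill_fn(client_images, client_labels, ipc)\n        uploads.append((syn_x, syn_y))   # sent to the server instead of gradients\n\n    # 2) The server pools the uploaded synthetic sets and updates the global model.\n    pooled_x = torch.cat([x for x, _ in uploads])\n    pooled_y = torch.cat([y for _, y in uploads])\n    opt = torch.optim.SGD(global_model.parameters(), lr=lr, momentum=0.9)\n    global_model.train()\n    for _ in range(server_steps):\n        opt.zero_grad()\n        F.cross_entropy(global_model(pooled_x), pooled_y).backward()\n        opt.step()\n    # 3) The updated parameters are broadcast back to the clients for the next round.\n    return global_model.state_dict()\n\\end{verbatim}\n\n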
\\subsection{Adversarial Robustness}\nDeep networks under standard training are fragile with respect to adversarial attacks: adding imperceptible perturbations to the input can completely fool the network \\citep{goodfellow2014explaining}. Adversarial training has been widely used to address this problem by continuously feeding adversarial examples during the training process \\citep{madry2017towards}. However, the generation of adversarial examples requires multistep gradient ascent, which considerably undermines training efficiency. Recently, \\citet{tsilivis2022can} and \\citet{wu2022towards} employed dataset distillation to extract the information of adversarial examples and generate robust datasets. Standard network training on the distilled robust dataset is then sufficient to achieve satisfactory robustness to adversarial perturbations, which substantially saves computing resources compared to expensive adversarial training.\n\n
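For reference, the following is a minimal sketch of the multistep gradient ascent (a PGD-style attack) used to craft adversarial examples; this is the per-batch overhead that adversarial training pays and that standard training on a distilled robust dataset avoids. The perturbation budget and step size are illustrative values on the $[0,1]$ pixel scale.\n\\begin{verbatim}\nimport torch\nimport torch.nn.functional as F\n\ndef pgd_attack(model, x, y, eps=0.031, alpha=0.008, steps=10):\n    # Multistep gradient ascent on the loss, projected back into an\n    # epsilon-ball around the clean input after every step.\n    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()\n    for _ in range(steps):\n        x_adv.requires_grad_(True)\n        loss = F.cross_entropy(model(x_adv), y)\n        grad = torch.autograd.grad(loss, x_adv)[0]\n        x_adv = x_adv.detach() + alpha * grad.sign()\n        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)\n    return x_adv.detach()\n\\end{verbatim}\n\n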
\\subsection{Other Applications}\n\nApart from the aforementioned applications, we summarize further applications of dataset distillation as follows.\n\nBecause of their small size, synthetic datasets have also been applied in explainability algorithms \\citep{loo2022efficient}. In particular, with only a few synthetic examples, it is easy to measure how each synthetic example influences the prediction of a test example. If a test image and a training image both rely on the same synthetic images, then that training image greatly influences the prediction of the test image. In other words, the synthetic set becomes a bridge connecting the training and testing examples.\n\nExploiting its ability to capture the essence of datasets, \\citet{cazenavette2022textures} and \\citet{chen2022fashion} utilized dataset distillation for visual design. \\citet{cazenavette2022textures} generated representative textures by randomly cropping the synthetic images during the distillation process. In addition to extracting textures, \\citet{chen2022fashion} constrained the synthetic images to model outfit compatibility through dataset distillation.\n\nIn addition to the continuous image domain, dataset distillation has also been extended to discrete data such as graphs \\citep{jin2022graph,jin2022condensing,liu2022graph} and recommender systems \\citep{sachdeva2022data}. \\citet{jin2022graph} first formulated dataset distillation for graph data and successfully distilled a small, condensed graph via gradient matching. Due to the computational inefficiency of distilling pairwise relations between graph nodes, \\citet{jin2022condensing} turned to learning a probabilistic graph model, which allows differentiable optimization {\\it w.r.t.} the discrete graph; these authors also employed a one-step strategy for further distillation speedup. Moreover, \\citet{liu2022graph} accelerated graph distillation through distribution matching from the perspective of receptive fields. For recommender systems, \\citet{sachdeva2022data} distilled a massive dataset into a continuous prior to tackle the discrete data problem.\n\nDue to their small size and abstract visual information, distilled data can also be applied in medicine, especially in medical image sharing \\citep{li2023sharing}. Empirical studies on gastric X-ray images have shown the advantages of DD in medical image transmission and the anonymization of patient information \\citep{li2020soft,li2022compressed}. Through dataset distillation, hospitals can share their valuable medical data at lower cost and risk to collaboratively build powerful computer-aided diagnosis systems.\n\nFor backdoor attacks, \\citet{liu2023backdoor} considered injecting backdoor triggers into the small distilled dataset. Because the synthetic data are few, direct trigger insertion is less effective and also more perceptible. Therefore, they instead inserted the triggers into the target dataset during the dataset distillation procedure. In addition, they proposed iteratively optimizing the triggers during the distillation process to preserve the triggers' information for better backdoor attack performance.\n\n\\section{Challenges and Directions}\n\nDue to its strength in compressing datasets and speeding up training, dataset distillation has promising prospects in a wide range of areas. In the following, we discuss the existing challenges of dataset distillation. Furthermore, we investigate plausible directions and provide insights into dataset distillation to promote future studies.\n\n\\subsection{Challenges}\n\nDataset distillation has two main ingredients: (1) measuring the difference between the knowledge of the synthetic data and that of the real (original) data; and (2) updating the synthetic data to narrow this knowledge difference. Based on these ingredients, we discuss the challenges of dataset distillation from the perspectives of (1) the definition of the knowledge difference between the two types of datasets, (2) the form of synthetic datasets, (3) the evaluation of synthetic datasets, and (4) the theory behind dataset distillation.\n\nDataset distillation methods aim to transfer the knowledge of a target dataset into a small synthetic dataset such that training on the synthetic data yields generalization comparable to training on the original data distribution. Although the knowledge of a dataset is an abstract notion, many DD approaches indirectly measure the knowledge difference via various proxy losses. However, the existing definitions of the knowledge difference are not fully consistent with the goal of comparable generalization. For example, the meta-loss $\\mathcal{R}_{\\mathcal{T}}(\\texttt{alg}(\\mathcal{S}))$ in the meta-learning framework measures the knowledge difference by the error on a specific target dataset; through this, one can only match the training error, rather than the test error, {\\it w.r.t.} the target dataset, as formalized below. As for the data matching framework, it makes knowledge concrete through a series of parameters or features, which are matched for knowledge transmission. Therefore, a more consistent definition of the knowledge difference should be investigated to improve dataset distillation performance.\n\n
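To make this gap explicit, write $\\mathcal{T}$ for the target training set, $\\mathcal{D}$ for the underlying data distribution, $\\ell$ for the loss, and $f_{\\theta}$ for the network with parameters $\\theta$ (the latter three symbols are introduced here only for illustration). The meta-learning framework minimizes the empirical risk on $\\mathcal{T}$, whereas comparable generalization asks for a small population risk:\n$$\\min_{\\mathcal{S}}\\ \\mathcal{R}_{\\mathcal{T}}\\big(\\texttt{alg}(\\mathcal{S})\\big)=\\min_{\\mathcal{S}}\\ \\frac{1}{|\\mathcal{T}|}\\sum_{(x,y)\\in\\mathcal{T}}\\ell\\big(f_{\\texttt{alg}(\\mathcal{S})}(x),y\\big)\\qquad\\text{versus}\\qquad\\min_{\\mathcal{S}}\\ \\mathbb{E}_{(x,y)\\sim\\mathcal{D}}\\,\\ell\\big(f_{\\texttt{alg}(\\mathcal{S})}(x),y\\big),$$\nso a small meta-loss only guarantees agreement on the examples in $\\mathcal{T}$, not on unseen test data drawn from $\\mathcal{D}$.\n\n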
Although most dataset distillation methods directly update the synthetic images, optimizing factorized codes and decoders that cooperatively generate the synthetic data outperforms direct optimization by a large margin, as discussed in Section \\ref{sec:standard benchmark}. This indicates that the predefined form of the codes and decoders has a remarkable influence on distillation performance. However, many factorization schemes for synthetic data are based on intuition and lack rigorous analysis, with only vague justifications offered, such as decreasing redundant information across different classes or learning more compact representations. Hence, it is necessary to explore the influence of the predefined factorization on dataset distillation in order to learn more informative distilled datasets.\n\nIn practical dataset distillation, the optimization of the synthetic data is closely tied to the specific network architecture (or kernel, in the KRR approach), and there is naturally a performance drop when the learned synthetic dataset is applied to other, unseen architectures; see Section \\ref{sec:cross architecture transferability}. Therefore, it is one-sided to compare different DD methods on only one or a few architectures. To compare different DD methods in a more comprehensive scenario, it is essential to establish a reasonable baseline consisting of multiple handpicked network architectures of different complexities.\n\nAlthough various DD approaches keep emerging, few investigations discuss the theory behind dataset distillation. Nevertheless, developing such a theory is highly necessary and would considerably promote dataset distillation, both by directly improving performance and by suggesting the right directions of development. For example, a theory could help derive a better definition of dataset knowledge and consequently increase performance. In addition, a theoretical characterization of the relation between the distillation budget (the size of the synthetic dataset) and performance could provide an upper bound on the test error, which would offer a holistic understanding and prevent researchers from blindly tuning DD performance. Hence, a solid theory is indispensable for taking the development of DD to the next stage.\n\n\\subsection{Future Directions}\n\nAlthough existing DD methods have shown impressive results in compressing massive datasets, there are still many areas worth exploring to promote dataset distillation. In this section, we discuss some promising directions that shed light on future studies.\n\n\\textbf{Fine-tuning with synthetic data.} With the rapid development of foundation models \\citep{bommasani2021opportunities}, the mainstream deep learning pipeline is gradually shifting from learning from scratch to pretraining followed by transfer learning. Foundation models such as BERT \\citep{devlin2018bert} and GPT \\citep{radford2018improving,radford2019language,brown2020language} are trained on billions of data points to learn appropriate latent representations. With only fine-tuning on specific datasets, foundation models achieve extraordinary performance on various downstream tasks and outperform models learned from scratch by a large margin. To this end, conducting dataset distillation in the fine-tuning regime is a promising direction for cooperating with foundation models: with synthetic data, the fine-tuning process becomes faster while the downstream performance is preserved. Despite the similar goal, distilling datasets based on pretrained models might be more challenging than with randomly initialized models, because pretrained models are harder to collect in numbers for the repeated distillation that is usually needed to make synthetic datasets robust {\\it w.r.t.} model initialization. Apart from condensing datasets for the fine-tuning stage, it is also an interesting topic to distill datasets in the vanilla training stage so that models trained on the synthetic datasets are easier or faster to fine-tune on specific datasets.\n\n\\textbf{Initialization augmentation.} Benefiting from excellent efficiency and compatibility, data augmentation has become an essential strategy for enhancing a model's generalization {\\it w.r.t.} the data distribution \\citep{shorten2019survey,chawla2002smote,perez2017effectiveness}. From basic cropping and flipping \\citep{zagoruyko2016wide} to mixup \\citep{zhang2018mixup}, data augmentation efficiently constructs variants of training images according to prior rules to improve the model's robustness. In dataset distillation, synthetic datasets are likewise sensitive to the parameter initialization and network architecture adopted in the distillation process \\citep{wang2018dataset,zhao2021dataset}. Existing works alleviate this problem by naively repeating the distillation {\\it w.r.t.} different initializations and taking an expectation \\citep{zhou2022dataset}. However, randomly selecting initializations is not an elaborate strategy and yields limited improvement. \\citet{zhang2022accelerating} showed that employing early-stage models trained for a few epochs as the initialization achieves better distillation performance, and they further proposed weight perturbation methods to efficiently generate early-stage models for repeated distillation. Therefore, it is important for many dataset distillation algorithms to investigate how to design initialization augmentation so that synthetic datasets generalize better to different initializations and architectures.\n\n\\textbf{Preprocessing of target datasets.} To better extract knowledge from training data, preprocessing, such as normalization, is a common approach that reshapes the structure of the data through well-designed mappings \\citep{garcia2016big}. Among existing dataset distillation methods, only a few works discuss the influence of preprocessing on the final distillation performance \\citep{nguyen2020dataset}. \\citet{nguyen2020dataset,nguyen2021dataset} and \\citet{cazenavette2022distillation} preprocessed the target datasets with zero-phase component analysis (ZCA) whitening \\citep{kessy2018optimal} before distillation and achieved satisfactory empirical results. Through elaborate preprocessing, more explicit information can emerge, and the target datasets might become easier to distill. In addition, such preprocessing is compatible with existing dataset distillation algorithms and can thus be deployed without extra effort. For these reasons, it is worth investigating the preprocessing of target datasets for distillation acceleration and performance improvement.\n\n
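As a reference for this preprocessing step, the following is a small NumPy sketch of ZCA whitening on flattened images; the regularization constant is an illustrative choice, and in practice the whitening matrix computed on the target dataset is stored so that distilled images can be mapped back for visualization.\n\\begin{verbatim}\nimport numpy as np\n\ndef zca_whitening(images, eps=0.1):\n    # images: (n, d) array of flattened, float-valued images.\n    mean = images.mean(axis=0, keepdims=True)\n    x = images - mean\n    cov = x.T @ x / x.shape[0]                # (d, d) sample covariance\n    eigvals, eigvecs = np.linalg.eigh(cov)    # symmetric eigendecomposition\n    # W = U diag(1/sqrt(lambda + eps)) U^T whitens the data while staying\n    # as close as possible to the original pixel space.\n    whitener = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T\n    return x @ whitener, whitener, mean\n\\end{verbatim}\n\n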
\\section*{Acknowledgments}\n\nWe sincerely thank \\href{https:\/\/github.com\/Guang000\/Awesome-Dataset-Distillation}{Awesome-Dataset-Distillation} for its comprehensive and timely summary of DD publications. We also thank Can Chen and Guang Li for their kind feedback and suggestions.\n\n\\bibliographystyle{plainnat}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}