diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzpoz" "b/data_all_eng_slimpj/shuffled/split2/finalzpoz" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzpoz" @@ -0,0 +1,5 @@ +{"text":"\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section*{Acknowledgement}\nSupported by the National Key R\\&D Program of China under Grant 2019YFB1406500, National Natural Science Foundation of China (No.62025604, 62076213, 62006217). Open Project Program of State Key Laboratory of Virtual Reality Technology and Systems, Beihang University (No.VRLAB2021C06). The university development fund of the Chinese University of Hong Kong, Shenzhen under grant No. 01001810, and Tencent AI Lab Rhino-Bird Focused Research Program under grant No.JR202123. \n\\section{The Proposed Approach}\nWe propose a novel adversarial training framework by introducing the concept of ``learnable attack strategy\". \nWe first introduce the pipeline of our framework in Sec.~\\ref{sec:game} and then present our novel formulation of adversarial training in Sec.~\\ref{sec:minmax} and our proposed loss terms in Sec.~\\ref{sec:loss} followed by the proposed optimization algorithm in Sec.~\\ref{sec:optimize}. \n\\subsection{Pipeline of the Proposed Framework} \\label{sec:game}\nThe pipeline of our framework is shown in Fig.~\\ref{fig:santan1}. \nOur model is composed of a target network and a strategy network. \nThe former uses AEs for training to improve its robustness, whilst the latter generates attack strategies to create AEs to attack the target network. They are competitors. \n\n\\vspace{1mm}\n\\noindent \\textbf{Target Network.} \nThe target network is a convolutional network for image classification, denoted as $\\hat{y} = f_{\\mathbf{w}}(\\mathbf{x})$ where $\\hat{y}$ is the estimation of the label, $\\mathbf{x}$ is an image, and $\\mathbf{w}$ are the parameters of the network. \n\n\\vspace{1mm}\n\\noindent \\textbf{Strategy Network.} \nThe strategy network generates adversarial attack strategies to control the AE generation, which takes a sample as input and outputs a strategy. \nSince the strategy network updates gradually, it gives different strategies given the same sample as input according to the robustness of the target network at different training stages. \nThe architecture of the strategy network is illustrated in the \\textbf{supplementary material}. Given an image, the strategy network outputs an attack strategy, \\textit{i.e.,} the configuration of how to perform the adversarial attack. \nLet $\\mathbf{a} = \\{a_1, a_2, ..., a_M\\} \\in \\mathcal{A}$ denote a strategy of which each element refers to an attack parameter. \n$\\mathcal{A}$ denotes the value space of strategy. \nParameter $a_m \\in \\{1,2,...,K_m\\}$ has $K_m$ options, which is encoded by a one-hot vector. \nThe meaning of each option differs in different attack parameters. \nFor example, PGD attack~\\cite{madry2017towards} has three attack parameters, \\textit{i.e.,} the attack step size $\\alpha$, the attack iteration $I$, and the maximal perturbation strength $\\varepsilon$. \nEach parameter has $K_m$ optional values to select, \\textit{e.g.,} the options for $\\alpha$ could be $\\{0.1, 0.2, 0.3, ...\\}$ and the options for $I$ could be $\\{1, 10, 20, ...\\}$. \nA combination of the selected values for these attack parameters is an attack strategy. \nThe strategy is used to created AEs along with the target model. 
\nThe strategy network captures the conditional distribution of $\\mathbf{a}$ given $\\mathbf{x}$ and $\\boldsymbol{\\theta}$, $p(\\mathbf{a}|\\mathbf{x};\\boldsymbol{\\theta} )$, where $\\mathbf{x}$ is the input image and $\\boldsymbol{\\theta}$ denotes the strategy network parameters. \n\n\n\n\\vspace{1mm}\n\\noindent \\textbf{Adversarial Example Generator.} Given a clean image, the process of the generation of AEs can be defined as:\n\\begin{align}\n \\mathbf{x}_{adv}:= \\mathbf{x} + \\boldsymbol{\\delta} \\leftarrow g(\\mathbf{x}, \\mathbf{a}, \\mathbf{w}) , \\label{Eq:adv} \n\\end{align}\nwhere $\\mathbf{x}$ is a clean image, $\\mathbf{x}_{adv}$ is its corresponding AE, and $\\boldsymbol{\\delta}$ is the generated perturbation. $\\mathbf{a}$ is an attack strategy. $\\mathbf{w}$ represents the target network parameters, and $g(\\cdot)$ is the PGD attack. \nThe process is equivalent to solving the inner optimization problem of Eq.~(\\ref{Eq:sat}) given an attack strategy $\\mathbf{a}$, \\textit{i.e.,} finding the optimal perturbation to maximize the loss. \n\n\\subsection{Novel Formulation of Adversarial Training} \\label{sec:minmax}\nBy using Eq.~(\\ref{Eq:adv}) that represents the process of AE generation, the standard AT with a fixed attack strategy can be rewritten as: \n\\begin{align}\n \\min_{\\mathbf{w}} \\mathbb{E}_{(\\mathbf{x}, y) \\sim \\mathcal{D}}~ \\mathcal{L}(f_{\\mathbf{w}}(\\mathbf{x}_{adv}), y), \\label{eq:oldAT}\n\\end{align}\nwhere $\\mathbf{x}_{adv} = g(\\mathbf{x}, \\mathbf{a}, \\mathbf{w})$ and $\\mathbf{a}$ is the hand-crafted attack strategy. \n$\\mathcal{D}$ is the training set. \n$\\mathcal{L}$ is the cross-entropy loss function which is used to measure the difference between the predicted label of the AE $\\mathbf{x}_{adv}$ and the ground truth $y$. \n\nDifferently, instead of using a hand-crafted sample-agnostic strategy, we use a strategy network to produce automatically generated sample-dependent strategies, \\textit{i.e.,} $p(\\mathbf{a}|\\mathbf{x};\\boldsymbol{\\theta})$.\nOur novel formulation for AT can be defined as: \n\\begin{align}\n \\min_{\\mathbf{w}} \\mathbb{E}_{(\\mathbf{x}, y) \\sim \\mathcal{D}}\n [\\max_{\\boldsymbol{\\theta}}\n~\\mathbb{E}_{ \\mathbf{a} \\sim p(\\mathbf{a}| \\mathbf{x};\\boldsymbol{\\theta})}~ \\mathcal{L}(f_{\\mathbf{w}}(\\mathbf{x}_{adv}),y) ].\n\\label{eq:newAT}\n\\end{align}\nCompared to the standard AT, the most distinct difference lies in the generation of AEs, \\textit{i.e.,} $\\mathbf{x}_{adv}$ \\eqref{Eq:adv}. \nThe standard AT uses a hand-crafted sample-agnostic strategy $\\mathbf{a}$ to solve the inner optimization problem, while we use a strategy network to produce the sample-dependent strategy by $p(\\mathbf{a}|\\mathbf{x};\\boldsymbol{\\theta})$, \\textit{i.e.,} our strategy is learnable. \nOur AE generation involves the parameters $\\boldsymbol{\\theta}$ of the strategy network, which makes our loss a function of the parameters of both networks. \n\nComparing Eq.~(\\ref{eq:oldAT}) and Eq.~(\\ref{eq:newAT}), our formulation is a minimax problem and the inner optimization involves the parameters of the strategy network. \nFrom Eq.~(\\ref{eq:newAT}), it can be observed that the two networks compete with each other in minimizing or maximizing the same objective. 
\nThe target network learns to adjust its parameters to defend against AEs generated by the attack strategies, while the strategy network learns to improve attack strategies according to the given samples to attack the target network.\nAt the beginning of the training phase, the target network is vulnerable and can be fooled by even a weak attack. \nHence, the strategy network can easily generate effective attack strategies. The strategies could be diverse because both weak and strong attacks can succeed. \nAs the training process goes on, the target network becomes more robust. \nThe strategy network has to learn to generate attack strategies that create stronger AEs. \nTherefore, the gaming mechanism could boost the robustness of the target network gradually along with the improvement of the strategy network. \n\n\\subsection{The Proposed Loss Terms}\n\\label{sec:loss}\n\\vspace{1mm}\n\\noindent\\textbf{Loss of Evaluating Robustness.}\nTo guide the learning of the strategy network, we propose a new metric to evaluate an attack strategy by using the robustness of the one-step updated version of the target model.\nSpecifically, an attack strategy $\\mathbf{a}$ is first used to create an AE $\\mathbf{x}_{adv}$, which is then used to adjust the parameters of the target model $\\mathbf{w}$ for one step through first-order gradient descent. \nThe attack strategy is considered effective if the updated target model $\\hat{\\mathbf{w}}$ can correctly predict labels for AEs $\\mathbf{x}_{adv}^{\\hat{\\mathbf{a}}}$ that are generated by another attack strategy $\\hat{\\mathbf{a}}$, \\textit{e.g.,} PGD with the maximal perturbation strength of 8, iterative steps of $10$ and step size of $2$. \nThe loss function of evaluating robustness can be defined as: \n\\begin{align}\n \\mathcal{L}_{2}(\\boldsymbol{\\theta}) \n&=-\\mathcal{L}(f( \\mathbf{x}_{adv}^{\\hat{\\mathbf{a}}},\\hat{\\mathbf{w}} ), y), \\label{eq:loss_robust}\n\\end{align}\nwhere $\\hat{\\mathbf{w}} = \\mathbf{w} - \\lambda \\nabla_\\mathbf{w} \\mathcal{L}_1|_{\\mathbf{x}_{adv}}$ denotes the parameters of the updated target network and $\\lambda$ is the step size. \n$\\mathcal{L}_1$ refers to the loss in Eq.~(\\ref{eq:newAT}), \\textit{i.e.,} $\\mathcal{L}_1(\\mathbf{w},\\boldsymbol{\\theta}):= \\mathcal{L}(f(\\mathbf{x}_{adv}, \\mathbf{w}), y)$. \n$\\mathbf{x}_{adv}$\nis created by the attack strategy $\\mathbf{a}$, which is to be evaluated. \n$\\mathbf{x}_{adv}^{\\hat{\\mathbf{a}}} := g(\\mathbf{x}, \\hat{\\mathbf{a}}, \\hat{\\mathbf{w}}) $ is the AE created by another attack strategy $\\hat{\\mathbf{a}}$, which is used to evaluate the robustness of the updated model $\\hat{\\mathbf{w}}$. \nPlease note that $\\mathcal{L}_2$ is used to evaluate the attack strategy, and $\\mathbf{w}$ is treated as a variable here rather than parameters to optimize. \nHence, the value of $\\mathbf{w}$ is used in Eq.~(\\ref{eq:loss_robust}), but the gradient of $\\mathcal{L}_2$ will not be backpropagated to update $\\mathbf{w}$ through $\\hat{\\mathbf{w}}$. \nEq.~(\\ref{eq:loss_robust}) indicates that a larger $\\mathcal{L}_2$ means the updated target model is more robust, \\textit{i.e.,} a better attack strategy. \n\n\\vspace{1mm}\n\\noindent\\textbf{Loss of Predicting Clean Samples.}\nA good attack strategy should not only improve the robustness of the target model but also maintain the performance of predicting clean samples, \\textit{i.e.,} clean accuracy. 
\nTo further provide guidance for learning the strategy network, we also consider the performance of the one-step updated target model in predicting clean samples. \nThe loss of evaluating the attack strategy can be defined as: \n\\begin{align}\n \\mathcal{L}_3(\\boldsymbol{\\theta}) &= - \\mathcal{L}(f(\\mathbf{x}, \n \\hat{\\mathbf{w}}), y), \\label{eq:clean_acc}\n\\end{align}\nwhere $\\hat{\\mathbf{w}}$ is the same as that in Eq.~(\\ref{eq:loss_robust}), \\textit{i.e.,} the parameters of the one-step updated target model. \n$\\mathcal{L}_3$ is the function of $\\boldsymbol{\\theta}$ as the AE $\\mathbf{x}_{adv}$ involves computing $\\hat{\\mathbf{w}}$ and $\\mathbf{a}$ is the output of the strategy network. \nEq.~(\\ref{eq:clean_acc}) indicates that a larger $\\mathcal{L}_3$ means the updated target model has a lower loss in clean samples, \\textit{i.e.,} a better attack strategy. \n\n\\paragraph{Formal Formulation.} \nIncorporating the two proposed loss terms, our formulation for AT can be defined as: \n\\begin{equation}\n \\begin{aligned}\n & & &\\min_{\\mathbf{w}} \\mathbb{E}_{(\\mathbf{x}, y) \\sim \\mathcal{D}}\\left[\\max_{\\boldsymbol{\\theta}}~\\mathbb{E}_{ \\mathbf{a} \\sim p(\\mathbf{a}| \\mathbf{x};\\boldsymbol{\\theta})}~[\\mathcal{L}_1(\\mathbf{w},\\boldsymbol{\\theta})+\\right.\\\\ \n & & & \\left.\\textcolor{white}{\\min_{\\mathbf{w}} \\mathbb{E}_{(\\mathbf{x}, y) \\sim \\mathcal{D}}[\\max_{\\boldsymbol{\\theta}}EEEE}\\alpha \\mathcal{L}_2(\\boldsymbol{\\theta}) + \\beta \\mathcal{L}_3(\\boldsymbol{\\theta})]\\right],\n \\end{aligned}\n \\label{eq:newAT_pro}\n \\end{equation}\nwhere $\\mathcal{L}_1$ is a function of the parameters of both the target network and the strategy network while $\\mathcal{L}_2$ and $\\mathcal{L}_3$ involve the parameters of the strategy network. $\\alpha$ and $\\beta$ are the trade-off hyper-parameters of the two loss terms.\n\n\n\\subsection{Optimization} \\label{sec:optimize}\nWe propose an algorithm to alternatively optimize the parameters of the two networks. \nGiven $\\boldsymbol{\\theta}$, the subproblem of optimizing the target network can be defined as,\n\\begin{align}\n \\min_{\\mathbf{w}} \\mathbb{E}_{(\\mathbf{x}, y) \\sim \\mathcal{D}}\n\\mathbb{E}_{ \\mathbf{a} \\sim p(\\mathbf{a}| \\mathbf{x};\\boldsymbol{\\theta})}\n[\\mathcal{L}_1(\\mathbf{w},\\boldsymbol{\\theta})]. \\label{Eq:subprob1}\n\\end{align}\nGiven a clean image, the strategy network generates a strategy distribution $p(\\mathbf{a}| \\mathbf{x};\\boldsymbol{\\theta})$, and we randomly sample a strategy from the conditional distribution. \nThe sampled strategy is used to generate AEs. \nAfter collecting the AEs for a batch of samples, we can update the parameters of the target model through gradient descent, \\textit{i.e.,}\n\\begin{align}\n \\label{eq:update_w}\n \\mathbf{w}^{t+1}=\\mathbf{w}^{t}-\\eta_1 \\frac{1}{ N} \\sum_{n=1}^{N} \\nabla_{\\mathbf{w}}\n\\mathcal{L}\\left(f(\\mathbf{x}_{adv}^n, \\mathbf{w}^t), y_{n}\\right), \n\\end{align}\nwhere $N$ is the number of samples in a mini batch and $\\eta_1 $ is the learning rate. \n\nGiven $\\mathbf{w}$, the subproblem of optimizing the parameters of the strategy network can be written as,\n\\begin{align}\n \\max_{\\boldsymbol{\\theta}} J(\\boldsymbol{\\theta}) , \n\\end{align}\nwhere $J(\\boldsymbol{\\theta}) := \\mathbb{E}_{(\\mathbf{x}, y) \\sim \\mathcal{D}}\n~\\mathbb{E}_{ \\mathbf{a} \\sim p(\\mathbf{a}| \\mathbf{x};\\boldsymbol{\\theta})}~ [\\mathcal{L}_1 + \\alpha \\mathcal{L}_2 + \\beta \\mathcal{L}_3]$. 
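\nFor intuition, we provide a minimal sketch, under simplifying assumptions, of how a single sampled strategy can be scored by $\\mathcal{L}_1 + \\alpha \\mathcal{L}_2 + \\beta \\mathcal{L}_3$ using a one-step updated copy of the target model, following Eq.~(\\ref{eq:loss_robust}) and Eq.~(\\ref{eq:clean_acc}); the PGD routine implementing $g(\\cdot)$ is assumed to be available as a helper.\n\\begin{verbatim}\n# Illustrative sketch (not the full implementation): score one sampled\n# strategy with L1 + alpha*L2 + beta*L3. pgd_attack is an assumed helper\n# implementing g(x, a, w).\nimport copy\nimport torch\nimport torch.nn.functional as F\n\ndef score_strategy(model, x, y, strategy, ref_strategy, lam, alpha, beta):\n    # L1: loss of the current target model on AEs from the sampled strategy\n    x_adv = pgd_attack(model, x, y, strategy)\n    loss1 = F.cross_entropy(model(x_adv), y)\n\n    # one-step updated copy: w_hat = w - lam * grad_w L1 (w itself is not\n    # optimized through this computation)\n    model_hat = copy.deepcopy(model)\n    grads = torch.autograd.grad(F.cross_entropy(model_hat(x_adv), y),\n                                list(model_hat.parameters()))\n    with torch.no_grad():\n        for p, g in zip(model_hat.parameters(), grads):\n            p.sub_(lam * g)\n\n    # L2: robustness of the updated model against a fixed reference attack\n    x_adv_ref = pgd_attack(model_hat, x, y, ref_strategy)\n    loss2 = -F.cross_entropy(model_hat(x_adv_ref), y)\n\n    # L3: performance of the updated model on clean samples\n    loss3 = -F.cross_entropy(model_hat(x), y)\n    return loss1 + alpha * loss2 + beta * loss3\n\\end{verbatim}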
\nThe biggest challenge of this optimization problem is that the process of AE generation (see Eq.~(\\ref{Eq:adv})) is not differentiable, namely, the gradient cannot be backpropagated to the attack strategy through the AEs. \nMoreover, there are some non-differentiable operations (\\emph{e.g.} choosing the iteration times) related to the attack \\cite{peng2018jointly,ilyas2019adversarial}, which further hinders backpropagating the gradient to the strategy network. \n\nFollowing the REINFORCE algorithm \\cite{williams1992simple}, we can compute the derivative of the objective function $J(\\boldsymbol{\\theta})$ with respect to the parameters $\\boldsymbol{\\theta}$ as:\n\\begin{align}\n \\nabla_{\\boldsymbol{\\theta}} J(\\boldsymbol{\\theta})&= \\nabla_{\\boldsymbol{\\theta}} \\mathbb{E}_{(\\mathbf{x}, y) \\sim \\mathcal{D}}\n\\mathbb{E}_{ \\mathbf{a} \\sim p(\\mathbf{a}| \\mathbf{x};\\boldsymbol{\\theta})}\n\\left[ \\mathcal{L}_{0}\\right] \\\\\n& = \\mathbb{E}_{(\\mathbf{x}, y) \\sim \\mathcal{D}} \\int_{\\mathbf{a}} \\mathcal{L}_{0} \\cdot \\nabla_{\\boldsymbol{\\theta}} p(\\mathbf{a}| \\mathbf{x};\\boldsymbol{\\theta}) d\\mathbf{a} \\nonumber \\\\\n& = \\mathbb{E}_{(\\mathbf{x}, y) \\sim \\mathcal{D}} \\int_{\\mathbf{a}} \\mathcal{L}_{0} \\cdot \np(\\mathbf{a}| \\mathbf{x};\\boldsymbol{\\theta}) \\nabla_{\\boldsymbol{\\theta}} \\log p(\\mathbf{a}| \\mathbf{x};\\boldsymbol{\\theta}) d\\mathbf{a} \\nonumber \\\\\n& = \\mathbb{E}_{(\\mathbf{x}, y) \\sim \\mathcal{D}}\n\\mathbb{E}_{ \\mathbf{a} \\sim p(\\mathbf{a}| \\mathbf{x};\\boldsymbol{\\theta})} [ \\mathcal{L}_{0} \\cdot \\nabla_{\\boldsymbol{\\theta}} \\log p(\\mathbf{a}| \\mathbf{x};\\boldsymbol{\\theta}) ] \\nonumber ,\n\\end{align}\nwhere $ \\mathcal{L}_{0} = \\mathcal{L}_{1} + \\alpha \\mathcal{L}_{2} + \\beta \\mathcal{L}_{3}$. \nSimilar to solving Eq.~(\\ref{Eq:subprob1}), we sample attack strategies from the conditional distribution to generate AEs. \nThe gradient with respect to the parameters\ncan be approximately computed as: \n\\begin{align}\n \\nabla_{\\boldsymbol{\\theta}}J(\\boldsymbol{\\theta}) \\approx \\frac{1}{N} \\sum_{n=1}^{N} \\mathcal{L}_{0}(\\mathbf{x}^n;\\boldsymbol{\\theta}) \\cdot \\nabla_{\\boldsymbol{\\theta}} \\log p_{\\boldsymbol{\\theta}} (\\mathbf{a}^n|\\mathbf{x}^n).\n\\end{align}\nThen, the parameters of the strategy network can be updated through gradient ascent, \\textit{i.e.,} \n\\begin{align}\n \\label{eq:update_theta}\n \\boldsymbol{\\theta}^{t+1} = \\boldsymbol{\\theta}^t + \\eta_2 \\nabla_{\\boldsymbol{\\theta}}J(\\boldsymbol{\\theta}^t), \n\\end{align}\nwhere $\\eta_2$ is the learning rate. $\\boldsymbol{\\theta}$ and $\\mathbf{w}$ are updated iteratively. 
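\nAs a rough sketch of this update, assuming the per-sample values of $\\mathcal{L}_{0}$ and the log-probabilities of the sampled strategies have already been computed (as in the sketches above), the score-function estimator can be implemented as follows; the values of $\\mathcal{L}_{0}$ are treated as constants so that only $\\log p(\\mathbf{a}|\\mathbf{x};\\boldsymbol{\\theta})$ is differentiated, and the mini-batch mean plays the role of the $1\/N$ sum above.\n\\begin{verbatim}\n# Illustrative REINFORCE-style update of the strategy network.\n# scores:    per-sample values of L0, treated as constants\n# log_probs: per-sample log p(a|x; theta), differentiable w.r.t. theta\ndef update_strategy_net(optimizer, scores, log_probs):\n    surrogate = -(scores.detach() * log_probs).mean()  # ascent on J(theta)\n    optimizer.zero_grad()\n    surrogate.backward()\n    optimizer.step()\n\\end{verbatim}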
In practice, we update $\\boldsymbol{\\theta}$ once every $k$ updates of $\\mathbf{w}$.\n\n\\subsection{Convergence Analysis}\nBased on \\eqref{eq:update_w} and \\eqref{eq:update_theta}, we have the following convergence result of the proposed adversarial training algorithm.\n\\vspace{-1mm}\n\\begin{restatable}{theorem}{convergence}\n\\label{thm:1}\nSuppose that the objective function $\\mathcal{L}_0=\\mathcal{L}_1+\\alpha \\mathcal{L}_2+\\beta \\mathcal{L}_3$ in \\eqref{eq:newAT_pro} satisfies the gradient Lipschitz conditions \\textit{w.r.t.} $\\boldsymbol{\\theta}$ and $\\mathbf{w}$, and $\\mathcal{L}_0$ is $\\mu$-strongly concave in $\\boldsymbol{\\Theta}$, the feasible set of $\\boldsymbol{\\theta}$.\nIf $\\mathbf{\\hat{x}}_{adv}(\\mathbf{x},\\mathbf{w})$ is a $\\delta$-approximate solution under the $\\ell_{\\infty}$-ball constraint with radius $\\epsilon$, the variance of the stochastic gradient is bounded by a constant $\\sigma^2>0$, and we set the learning rate of $\\mathbf{w}$ as \n\\begin{equation}\n \\eta_1 = \\min\\left(\\frac{1}{L_0},\\ \\sqrt{\\frac{\\mathcal{L}_0(\\mathbf{w}^0)-\\underset{\\mathbf{w}}{\\min}\\ \\mathcal{L}_0(\\mathbf{w})}{\\sigma^2TL_0}}\\right),\n\\end{equation}\nwhere $L_0=L_{\\mathbf{w}\\boldsymbol{\\theta}}L_{\\boldsymbol{\\theta}\\mathbf{w}}\/\\mu+L_{\\mathbf{w}\\mathbf{w}}$ is the Lipschitz constant of $\\mathcal{L}_0$, it holds that\n\\begin{equation}\n \\frac{1}{T}\\sum_{t=0}^{T-1}\\mathbb{E}\\big[\\|\\nabla \\mathcal{L}_0(\\mathbf{w}^t)\\|^2_2\\big]\\leq4\\sigma\\sqrt{\\frac{\\Delta L_0}{T}}+\\frac{5\\delta L^2_{\\mathbf{w}\\boldsymbol{\\theta}}}{\\mu},\n\\end{equation}\nwhere $T$ is the maximum adversarial training epoch number and $\\Delta=\\mathcal{L}_0(\\mathbf{w}^0)-\\underset{\\mathbf{w}}{\\min}\\ \\mathcal{L}_0(\\mathbf{w})$.\n\\end{restatable}\nThe detailed proof is presented in the \\textbf{supplementary material}. By Theorem 1, if the inner maximization process can obtain a $\\delta$-approximation of $\\mathbf{x}^*_{\\text{adv}}$, the proposed method \\textbf{LAS-AT} can achieve a stationary point at a sub-linear rate with the precision $5\\delta L^2_{\\mathbf{w}\\boldsymbol{\\theta}}\/\\mu$. Moreover, if $5\\delta L^2_{\\mathbf{w}\\boldsymbol{\\theta}}\/\\mu$ is sufficiently small, our method can find the desired robust model $\\mathbf{w}^T$ with a good approximation of $\\mathbf{x}^*_{\\text{adv}}$. \n\n\n\n \n\n\\section{Conclusion and Discussion}\nWe propose a novel adversarial training framework by introducing the concept of ``learnable attack strategy\", which is composed of two competitors, \\textit{i.e.,} a target network and a strategy network. \nUnder the gaming mechanism, the strategy network learns to produce dynamic sample-dependent attack strategies according to the robustness of the target model for adversarial example generation, instead of using hand-crafted attack strategies. \nTo guide the learning of the strategy network, we also propose two loss terms that involve evaluating the robustness of the target network and predicting clean samples. \nExtensive experimental evaluations are performed on three benchmark databases to demonstrate the superiority of the proposed method.\n\n\n\n\n\\section{Experiments}\nTo evaluate the proposed method, we conduct experiments on three databases, \\textit{i.e.,} CIFAR10 \\cite{krizhevsky2009learning}, CIFAR100 \\cite{krizhevsky2009learning}, and Tiny ImageNet \\cite{deng2009ImageNet}. 
The details of these databases are presented in the \\textbf{supplementary material}.\n\n\\input{table\/tb_ablation_k}\n\n\\input{table\/tb_cifar10_v5}\n\\input{table\/tb_cifar100_V5}\n\\subsection{Settings}\n\\label{Settings}\n\\vspace{1mm}\n\\noindent\\textbf{Competitive Methods.}\nTo evaluate the proposed method effectiveness in improving the robustness of a target model, we combine it with several state-of-the-art adversarial training methods and illustrate its performance improvements. \nWe choose not only the most popular methods such as the early stopping PGD-AT \\cite{rice2020overfitting} and TRADES\n\\cite{zhang2019theoretically} as base models but also the recently proposed method AWP \\cite{wu2020adversarial}.\nThe combinations of our method and these models are referred to as LAS-PGD-AT, LAS-TRADES, and LAS-AWP, respectively. \nNote that we use the \\textbf{same training settings} as the base models \\cite{rice2020overfitting, zhang2019theoretically, wu2020adversarial} to train our proposed models, including data splits and training losses. Then we compare the proposed LAS-PGD-AT, LAS-TRADES, and LAS-AWP with the following baselines: (1) PGD-AT~\\cite{rice2020overfitting}, (2) TRADES ~\\cite{zhang2019theoretically}, (3) SAT~\\cite{sitawarin2021sat}, (4) MART~\\cite{wang2019improving}, (5) FAT~\\cite{zhang2020attacks}, (6) GAIRAT~\\cite{zhang2020geometry}, (7) AWP ~\\cite{wu2020adversarial} and (8) LBGAT~\\cite{cui2021learnable}.\nMoreover, we compare our method with CAT \\cite{cai2018curriculum}, DART \\cite{wang2019convergence} and FAT~\\cite{zhang2020attacks}\nThey use different attack strategies at different training stages to conduct AT. Besides, we also compare our method with other state-of-the-art hyper-parameter search methods \\cite{lin2019online, DBLP:conf\/iclr\/ZhangWZZ20} to evaluate our method.\n\n\n\n\n\\noindent\\textbf{Evaluation.}\\label{Evaluation}\nWe choose several adversarial attack methods to attack the trained models, including PGD~\\cite{madry2017towards}, C\\&W~\\cite{carlini2017towards}\nand AA \\cite{croce2020reliable} which consists of APGD-CE \\cite{croce2020reliable}, APGD-DLR \\cite{croce2020reliable}, FAB \\cite{croce2019minimally} and Square \\cite{andriushchenko2019square}. \nFollowing the default setting of AT, the max perturbation strength $\\epsilon$ is set to 8 for all attack methods under the $L_{\\infty}$. \nThe clean accuracy and robust accuracy are used as the evaluation metrics. \n\n\n\n\n\\noindent\\textbf{Implementation Details.}\nOn CIFAR-10 and CIFAR-100, we use ResNet18 \\cite{he2016deep} or WideResNet34-10(WRN34-10) \\cite{zagoruyko2016wide} as the target network. \nOn Tiny ImageNet, \nwe use PreActResNet18 \\cite{he2016identity} as the target model. \nFor all experiments, we train the target network for defense baselines, following their original papers.\nFor the training hyper-parameters of the target network of our method, we use the \\textbf{same setting} as the base models \\cite{rice2020overfitting, zhang2019theoretically, wu2020adversarial}.\nThe detailed settings are presented in the\n\\textbf{supplementary material}. \nFor the target network,\nwe adopt SGD momentum optimizer with a learning rate of 0.1, weight decay of $5 \\times 10^{-4}$.\nWe use ResNet18 as the backbone of the strategy network. \nFor the strategy network of our method, we adopt SGD momentum optimizer with a learning rate of $0.001$. The trade-off hyper-parameters $\\alpha$ and $\\beta$ are set to $2.0$ and $4.0$. 
\nThe range of the maximal perturbation strength is set from $3$ to $15$, the range of the attack step is set from $1$ to $6$, and the range of the attack iteration is set from $3$ to $15$. \n\\vspace{-1mm}\n\n\n\n\\subsection{Hyper-parameter Selection}\nThe hyper-parameter $k$ controls the alternating update of $\\mathbf{w}$ and $\\boldsymbol{\\theta}$. We update $\\boldsymbol{\\theta}$ once every $k$ updates of $\\mathbf{w}$. It \nnot only affects model robustness\nbut also affects model training efficiency. A hyper-parameter selection experiment with ResNet18 is conducted on CIFAR-10 to select the optimal hyper-parameter $k$. The results are shown in Table~\\ref{table:ablation_k}.\nThe training time of the proposed LAS-PGD-AT decreases along with the increase of parameter $k$. The smaller $k$ is, the more time is required to train the strategy network. When $k = 40$, the proposed LAS-PGD-AT achieves the best adversarial robustness.\nConsidering AT efficiency, we set $k$ to 40. The selection of hyper-parameters $\\alpha$ and $\\beta$ \n is presented in the \\textbf{supplementary material}.\n\n\\subsection{Comparisons with Other AT Methods}\nOur method is a plug-and-play component that can be combined with other AT methods to boost their robustness.\n\n\n\\input{table\/tb_tiny_v5}\n\\input{table\/tb_Comparison_AT_cifar100_v2}\n\\vspace{1mm}\n\\noindent \\textbf{Comparisons on CIFAR-10 and CIFAR-100.}\nThe results on CIFAR-10 and CIFAR-100 are shown in Table~\\ref{tb:cifar10} and Table~\\ref{tb:cifar100}. \nAnalyses are as follows. \nFirst, the three proposed models outperform their base models under most attack scenarios.\nIn many cases, our method not only improves the robustness but also improves the clean accuracy of the base models, though there is always a trade-off between accuracy and robustness. \nFor example, on CIFAR-10,\nwhen using WRN34-10 as the target network, our method improves the clean accuracy of the powerful AWP by about $2.2\\%$ and also improves the performance of AWP under PGD-10 attack and AA attack by about $2.1\\%$ and $1.62\\%$, respectively. Moreover, the proposed LAS-AWP achieves the best robustness performance under all attack scenarios.\nWe attribute the improvements to using automatically generated attack strategies instead of hand-crafted ones. Second, on CIFAR-100, the proposed LAS-AWP not only achieves the highest accuracy on clean images but also achieves the best robustness performance under all attack scenarios.\nIn detail, our LAS-AWP outperforms the original AWP by $4.5\\%$ and $1.9\\%$ in clean accuracy and accuracy under AA attack, respectively. Moreover, our LAS-AWP outperforms the powerful LBGAT under all attack scenarios. \n\\input{table\/tb_ablation_v5}\n\n\n\n\n\n\\input{table\/tb_CAT_DART}\n\n\\noindent \\textbf{Comparisons on Tiny ImageNet.}\nFollowing \\cite{lee2020adversarial}, we use PreActResNet18 \\cite{he2016identity} as the target model for evaluation on Tiny ImageNet. \nThe results are shown in Table~\\ref{tb:tiny}. \nAs Tiny ImageNet has more classes than CIFAR-10 and CIFAR-100, defending against AEs is more challenging. \nOur method improves both the clean accuracy and the adversarial robustness of the three base models. \n\n\\vspace{1mm}\n\\noindent \\textbf{Comparisons with state-of-the-art robust models.}\nAuto Attack (AA) is a reliable and strong attack method to evaluate model robustness. It consists of three white-box attacks and a black-box attack. The details are introduced in Sec.~\\ref{Related Work}. 
Under their leaderboard results~\\footnote{https:\/\/github.com\/fra31\/auto-attack}, on CIFAR-10, Gowal~\\emph{et al}.~\\cite{gowal2020uncovering} study the impact of hyper-parameters ( such as model weight averaging and model size ) on model robustness and adopt WideResNet70-16 (WRN-70-16) to conduct AT, which ranks the 1st under AA attack without additional real or synthetic data. We also adopt WRN-70-16 for our method. LAS-AWP can boost the model robustness and achieve higher robustness accuracy.\nOn CIFAR-100, Cui~\\emph{et al}. train WideResNet34-20 (WRN-34-20) for LBGAT and achieves state-of-the-art robustness without additional real or synthetic data.\nWe also adopt WRN-34-20 for our method. LAS-AWP can also achieve higher robustness accuracy. The result is shown Table~\\ref{tb:Comparison_AT_cifar10}.\n\n\n\\subsection{Ablation Study}\nIn our formulation in Eq.~(\\ref{eq:newAT_pro}), besides the loss $\\mathcal{L}_1$, we propose two additional loss terms to guide the learning of the strategy network, \\textit{i.e.,} the loss of evaluating robustness $\\mathcal{L}_2$ and the loss of predicting clean samples $\\mathcal{L}_3$. \nTo validate the effectiveness of each element in the objective function, we conduct ablation experiments with ResNet18 on CIFAR-10. \nWe train four LAS-PGD-AT models by using $\\mathcal{L}_1$, $\\mathcal{L}_1 \\& \\mathcal{L}_2$, $\\mathcal{L}_1 \\& \\mathcal{L}_3$, and $\\mathcal{L}_1 \\& \\mathcal{L}_2 \\& \\mathcal{L}_3$, respectively.\nThe trained models are attacked by a set of adversarial attack methods. \nThe results are shown in Table \\ref{table:table0}.\nThe classification accuracy is the evaluation metric. \n\\textit{Clean} represents using clean images for testing while other attack methods use AEs for testing.\n\nAnalyses are summarized as follows. \nFirst, when incorporating the loss $\\mathcal{L}_2$ only, the performance of robustness under all attacks improves while the clean accuracy slightly drops. \nWhen incorporating the loss $\\mathcal{L}_3$ only, the clean accuracy improves, but the performance of robustness under partial attacks slightly drops. \nThe results show that $\\mathcal{L}_2$ contributes more to improve the robustness and $\\mathcal{L}_3$ contributes more to improve the clean accuracy. \nSecond, using all losses achieves the best performance in robustness as well as the clean accuracy, which indicates that the two losses are compatible and combining them could remedy \nthe side effect of independent use. \n\n\n\n\n\n\\input{figtex\/fig_comparsion_HP}\n\\subsection{Performance Analysis}\n\n \\noindent \\textbf{Comparisons with hand-crafted attack strategy methods.} \nTo investigate the effectiveness of automatically generated attack strategies generated by our method, we compare our LAS-AT with some AT methods (CAT~\\cite{cai2018curriculum}, DART~\\cite{wang2019convergence} and FAT~\\cite{zhang2020attacks}) which adopt dynamic hand-crafted attack strategies for training. \nFor a fair comparison, we keep the training and evaluation setting as same as those used in FAT~\\cite{zhang2020attacks}. The details are presented in the \\textbf{supplementary material}. The result is shown in Table~\\ref{tb:CAT_DART}. Our method outperforms competing methods under all attacks. 
It indicates that compared with previous hand-crafted attack strategies, the proposed automatically generated attack strategies can achieve a greater robustness improvement.\n\n\n\n\\noindent \\textbf{Comparisons with hyper-parameter search methods.} \nWe compare the proposed method with other hyper-parameter search methods that include a classical hyper-parameter search method (random search) and two automatic hyper-parameter search methods (OHL~\\cite{lin2019online} and AdvHP~\\cite{DBLP:conf\/iclr\/ZhangWZZ20}). For a fair comparison, the same hyper-parameters and search range that are used in our method (see Sec.~\\ref{Settings}) are adopted for them. The detailed settings are presented in the \\textbf{supplementary material}. The result is shown in Fig.~\\ref{fig:comparsion_HP}. It can be observed that our method achieves the best robustness performance\nunder all attack scenarios. The attack strategies automatically generated by our method are more suitable for AT.\n\n\n\n\\noindent \\textbf{Adversarial Training from Easy to Difficult.}\nTo investigate how LAS-AT works, we analyze the distribution of the strategy network's attack strategies at different training stages.\nExperiments using ResNet18 with LAS-PGD-AT are performed on the CIFAR-10 database. \nThe range of the maximal perturbation strength is set from 3 to 15. \nThe distribution evolution of the maximal perturbation strength during adversarial training is illustrated in Fig.~\\ref{fig:discuss}. \n\nAt the beginning of AT, the distribution covers all the optional values of the maximal perturbation strength. \nEach value has a chance to be selected, which ensures the diversity of AEs. \nAs the training process goes on, the percentage of small perturbation strengths decreases. \nAt the late stages, the distribution of the maximal perturbation strength is occupied by several large values. \nThis phenomenon indicates that the strategy network gradually increases the percentage of large perturbation strengths to generate strong AEs because the robustness of the target network is gradually boosted by training with the generated AEs. \nTherefore, it can be observed that under the gaming mechanism, our method starts training with diverse AEs when the target network is vulnerable, and then learns with stronger AEs at the late stages when the robustness of the target network improves. \n CAT~\\cite{cai2018curriculum}, DART~\\cite{wang2019convergence} and FAT~\\cite{zhang2020attacks} adopt hand-crafted strategies to use weak AEs at early stages and then use strong AEs at late stages. \nUnlike them, under our framework, the strategy network automatically generates strategies that determine the difficulty of AEs, according to the robustness of the target network at different stages. \n\n\\input{figtex\/fig_perturbation}\n\n\n\\section{Introduction}\n\n\n\n\n\nAlthough deep neural networks (DNNs) have achieved great success in academia and industry, they could be easily fooled by adversarial examples (AEs) \\cite{szegedy2013intriguing,goodfellow2014explaining} generated via adding indistinguishable perturbations to benign images. 
Recently, many studies~\\cite{li2019nattack,dong2018boosting,wang2021enhancing,bai2021targeted,bai2020targeted,gu2021adversarial,jia2020adv} focus on generating AEs.\nIt has been proven that many real-world applications~\\cite{kurakin2016adversarial} of DNNs are vulnerable to AEs, such as image classification \\cite{goodfellow2014explaining,hirano2021universal}, object detection \\cite{wei2018transferable,jia2021effective}, \nneural machine translation \\cite{zou2019reinforced,huang2020reinforced}, \\emph{etc}. \nThe vulnerability of DNNs makes people pay attention to the safety of artificial intelligence and brings new challenges to the application of deep learning~\\cite{zheng2020pcal,yin2021adv,xu2020adversarial,gu2021effective,gu2021capsule}. Adversarial training (AT) \\cite{madry2017towards,wong2020fast,song2021regional,li2022semi} \nis considered as one of the most effective defense methods to \nimprove adversarial robustness by injecting AEs into the training procedure through a minimax formulation. \nUnder the minimax framework, the generation of AEs plays a key role in determining robustness. \n\\input{figtex\/Home}\n\nSeveral recent works improve the standard AT method from different perspectives. Although existing methods~\\cite{gowal2020uncovering,dai2021parameterizing,bai2021improving,cui2021learnable,rebuffi2021fixing,jia2021boosting} have made significant progress in improving robustness, they rarely explore the impact of attack strategy on adversarial training.\nFirst, as shown in Fig.~\\ref{fig:sat0}, most existing methods leverage a hand-crafted attack strategy to generate AEs by manually specifying the attack parameters, \\textit{e.g.,} PGD attack with the maximal perturbation of 8, iteration of 10, and step size of 2. \nA hand-crafted attack strategy lacks flexibility and might limit the generalization performance. \nSecond, most methods use only one attack strategy. Though some works~\\cite{cai2018curriculum,wang2019convergence,zhang2020attacks} have realized that exploiting different attack strategies at different training stages could improve robustness, \\textit{i.e.,} using weak attacks at the early stages and strong attacks at the late stages, they use manually designed metrics to evaluate the difficulty of AEs and still use one strategy at each stage. However, they need much domain expertise, and the robustness improvement is limited. They use sample-agnostic attack strategies that are hand-crafted and independent of any information of specific samples. There exist statistical differences among samples, and attack strategy should be designed according to the information of the specific sample, \\textit{i.e.,} sample-dependent.\n\n\\input{figtex\/fig_pipeline}\n\nTo alleviate these issues, we propose a novel adversarial training framework by introducing the concept of ``learnable attack strategy\", \\textit{dubbed LAS-AT}, which learns to automatically produce sample-dependent attack strategies for AE generation instead of using hand-crafted ones (see Fig.~\\ref{fig:las-at0}). \nOur framework consists of two networks, \\textit{i.e.,} a target network and a strategy network. \nThe former uses AEs for training to improve robustness, while the latter produces attack strategies to control the generation of AEs. \nThe two networks play a game where the target network learns to minimize the training loss of AEs while the strategy network learns to generate strategies to maximize the training loss. 
\nUnder such a gaming mechanism, at the early training stages, weak attacks can successfully attack the target network. \nAs the robustness improves, the strategy network learns to produce strategies to generate stronger attacks. \nUnlike~\\cite{cai2018curriculum} ~\\cite{wang2019convergence}, and ~\\cite{zhang2020attacks} that use designed metrics and hand-crafted attack strategies, we use the strategy network to automatically produce an attack strategy according to the given sample. \nAs the strategy network updates according to the robustness of the target model and the given sample, the strategy network figures out to produce different strategies accordingly at different stages, rather than setting up any manually designed metrics or strategies. We propose two loss terms to guide the learning of the strategy network.\nOne evaluates the robustness of the target model updated with the AEs generated by the strategy. \nThe other evaluates how well the updated target model performs on clean samples. \nOur main contributions are in three aspects:\n\\textbf{1)} We propose a novel adversarial training framework by introducing the concept of ``learnable attack strategy\", which learns to automatically produce sample-dependent attack strategies to generate AEs. \n Our framework can be combined with other state-of-the-art methods as a plug-and-play component.\n\\textbf{2)} We propose two loss terms to guide the learning of the strategy network, which involve explicitly evaluating the robustness of the target model and the accuracy of clean samples.\n\\textbf{3)} We conduct experiments and analyses on three databases to demonstrate the effectiveness of the proposed method. \n\n \n\n \n\n\\section{Related Work}\n\\paragraph{Adversarial Attack Methods.}\n\\label{Related Work}\nAs the vulnerability of deep learning models has been noticed~\\cite{szegedy2013intriguing}, many works studied the model's robustness and proposed a series of adversarial attack methods. \nFast Gradient Sign Method (FGSM) \\cite{goodfellow2014explaining} was a classic adversarial attack method, which made use of the gradient of the model to generate AEs. \nMadry \\emph{et al}.~\\cite{madry2017towards} proposed a multi-step version of FGSM, called Projected Gradient Descent (PGD). \nTo solve the problem of parameter selection in FGSM, Moosavi-Dezfooli \\emph{et al}.~\\cite{moosavi2016deepfool} proposed a simple but accurate method, called Deepfool, to attack deep neural networks. \nIt generated AEs by using an iterative linearization of the classification model. \nCarlini-Wagner \\emph{et al}.~\\cite{carlini2017towards} proposed several powerful attack methods that could be widely used to evaluate the robustness of deep learning models. \nMoreover, Croce \\emph{et al}.~\\cite{croce2020reliable} proposed two improved methods (APGD-CE, APGD-DLR) of the PGD-attack. \nThey did not need to choose a step size or alternate a loss function. \nAnd then they combined the proposed method with two complementary adversarial attack methods (FAB \\cite{croce2019minimally} and Square \\cite{andriushchenko2019square}) to evaluate the robustness, which was called AutoAttack (AA).\n\n\\vspace{-4mm}\n\\paragraph{Adversarial Training Defense Methods.}\nAdversarial training is an effective way to improve robustness by using AEs for training, such as ~\\cite{ kannan2018adversarial,roth2019adversarial, pang2020boosting, wang2020once, wu2020adversarial, wang2019improving,wang2021probabilistic}. 
\nThe standard adversarial training (AT) is formulated as\na minimax optimization problem in \\cite{madry2017towards}.\nThe objective function is defined as:\n\\begin{equation}\n\\min_{\\mathbf{w}} \\mathbb{E}_{(\\mathbf{x}, y) \\sim \\mathcal{D}}[\\max _{\\boldsymbol{\\delta} \\in \\Omega} \\mathcal{L}(f_{\\mathbf{w}}(\\mathbf{x} + \\boldsymbol{\\delta}), y)], \\label{Eq:sat}\n\\end{equation}\nwhere $\\mathcal{D}$ represents an underlying data distribution and $\\Omega$ represents the perturbation set. \n $\\mathbf{x}$ represents the example, $y$\nrepresents the corresponding label, and $\\boldsymbol{\\delta}$ represents the indistinguishable perturbation. \n$f_{\\mathbf{w}}(\\cdot)$ represents the target network and\n$\\mathcal{L}(f_{\\mathbf{w}}(\\mathbf{x}), y)$ represents the loss function of the target network.\nThe inner maximization problem of standard AT can be regarded as the attack strategy that guides the creation of AEs, which is the key to improving model robustness. A training strategy is designed accordingly, which significantly improves the network's robustness. \nMadry \\emph{et al}. proposed the \nseminal AT framework, PGD-AT~\\cite{\nmadry2017towards}, to improve the robustness.\nRice \\emph{et al}. \nproposed an early stopping version~\\cite{rice2020overfitting} of PGD-AT, which gained a significant improvement. Zhang \\emph{et al}.~\\cite{zhang2019theoretically} explored a trade-off between standard accuracy and adversarial robustness and proposed a defense method (TRADES) that can trade standard accuracy off against adversarial robustness. Wu \\emph{et al.}~\\cite{wu2020adversarial} investigated the weight loss landscape and proposed an effective Adversarial Weight Perturbation (AWP) method to improve the robustness.\nCui \\emph{et al}.~\\cite{cui2021learnable} proposed to adapt the logits of one model trained on clean data to guide adversarial training (LBGAT). These AT methods adopted a fixed attack strategy to conduct AT. Some AT methods exploited different attack\nstrategies at different training stages to improve robustness. In detail, \n Cai \\emph{et al}.~\\cite{cai2018curriculum}\nadopted curriculum adversarial training (CAT) to improve model robustness.\nWang \\emph{et al}.~\\cite{wang2019convergence} designed a criterion to measure the convergence quality and proposed dynamic adversarial training\n(DART) to improve the robustness of the target model. Zhang \\emph{et al.}~\\cite{zhang2020attacks} proposed to search for the least adversarial data for AT, which could be called friendly adversarial training (FAT). \n\n\\section*{Detailed Proof}\n\nFirst, we introduce some notation. 
Let $\\mathcal{L}_0:\\mathcal{X}\\times\\mathcal{Y}\\times\\mathcal{W}\\times\\boldsymbol{\\Theta}\\rightarrow\\mathbb{R}^+$ be the objective function in \\eqref{eq:newAT_pro} as\n\\begin{equation}\n \\mathcal{L}_0 = \\mathcal{L}_1+\\alpha \\mathcal{L}_2+\\beta \\mathcal{L}_3.\n\\end{equation}\nWe define $\\boldsymbol{x}^*_{\\text{adv}}(\\boldsymbol{x},\\mathbf{w})$ as the optimal adversarial example generated by the strategy network\n\\begin{equation}\n \\begin{aligned}\n & \\boldsymbol{x}^*_{\\text{adv}}(\\boldsymbol{x},\\mathbf{w}) &=&\\ \\ \\underset{\\boldsymbol{\\theta}}{\\arg\\max}\\ g(\\boldsymbol{x},\\boldsymbol{a}(\\boldsymbol{\\theta}),\\mathbf{w})\\\\\n & &=&\\ \\ \\underset{\\boldsymbol{\\theta}}{\\arg\\max}\\ \\mathbb{E}_{\\boldsymbol{a}\\sim p(\\boldsymbol{a}|\\boldsymbol{x},\\boldsymbol{\\theta})}[\\mathcal{L}_0],\n \\end{aligned}\n\\end{equation}\nand $\\boldsymbol{\\hat{x}}_{\\text{adv}}(\\boldsymbol{x},\\mathbf{w})$ is a $\\delta$-approximate solution to $\\boldsymbol{x}^*_{\\text{adv}}(\\boldsymbol{x},\\mathbf{w})$. In addition, the full gradient of $\\mathcal{L}_0$ w.r.t $\\mathbf{w}$ is \n\\begin{equation}\n \\label{eq:grad_1}\n \\begin{aligned}\n & \\nabla_{\\mathbf{w}} \\mathcal{L}_0(\\mathbf{w}) &=&\\ \\ \\frac{1}{N} \\sum_{i=n}^N \\nabla_{\\mathbf{w}} \\mathcal{L}^n_0\\\\\n & &=&\\ \\ \\frac{1}{N} \\sum_{n=1}^N \\nabla_{\\mathbf{w}} \\mathcal{L}_0(\\boldsymbol{x}^*_{\\text{adv}}(\\boldsymbol{x}_n,\\mathbf{w}),\\mathbf{w}),\n \\end{aligned}\n\\end{equation}\nwhere $\\boldsymbol{x}^*_{\\text{adv}}(\\boldsymbol{x}_n)$ is the optimal adversarial example for $\\boldsymbol{x}_n$. The stochastic gradient of $\\mathcal{L}_0$ w.r.t $\\mathbf{w}$ is \n\\begin{equation}\n \\label{eq:grad_2}\n \\begin{aligned}\n & \\nabla_{\\mathbf{w}}\\ell(\\mathbf{w}) &=&\\ \\ \\frac{1}{|\\mathcal{B}|}\\sum_{i=1}^{|\\mathcal{B}|} \\nabla_{\\mathbf{w}} \\mathcal{L}^i_0\\\\\n & &=&\\ \\ \\frac{1}{|\\mathcal{B}|} \\sum_{n=1}^N \\nabla_{\\mathbf{w}} \\mathcal{L}_0(\\boldsymbol{x}^*_{\\text{adv}}(\\boldsymbol{x}_i,\\mathbf{w}),\\mathbf{w}).\n \\end{aligned}\n\\end{equation}\nThen $\\nabla_{\\boldsymbol{\\theta}} \\mathcal{L}_0$ and $\\nabla_{\\boldsymbol{\\theta}}\\ell$ correspond to the full and stochastic gradients of $\\mathcal{L}_0$ w.r.t $\\boldsymbol{\\theta}$. 
Without lose of generality, we assume that\n\\begin{equation}\n \\mathbb{E}[\\nabla_{\\mathbf{w}} \\ell(\\mathbf{w})] = \\nabla_{\\mathbf{w}}\\mathcal{L}_0(\\mathbf{w}).\n\\end{equation}\nWe note the approximate stochastic gradient as $\\nabla_{\\mathbf{w}}\\hat{\\ell}$:\n\\begin{equation}\n \\label{eq:grad_3}\n \\begin{aligned}\n & \\nabla_{\\mathbf{w}}\\hat{\\ell}~(\\mathbf{w}) &=&\\ \\ \\frac{1}{|\\mathcal{B}|}\\sum_{i=1}^{|\\mathcal{B}|} \\nabla_{\\mathbf{w}} \\hat{\\mathcal{L}}^i_0\\\\\n & &=&\\ \\ \\frac{1}{|\\mathcal{B}|} \\sum_{n=1}^N \\nabla_{\\mathbf{w}} \\mathcal{L}_0(\\hat{\\boldsymbol{x}}_{\\text{adv}}(\\boldsymbol{x}_i,\\mathbf{w}),\\mathbf{w}).\n \\end{aligned} \n\\end{equation}\nMoreover, the adversarial example $\\boldsymbol{x}_{\\text{adv}}(\\boldsymbol{x},\\mathbf{w})$ can be identified by a parameter $\\boldsymbol{\\theta}$ of the strategy network and the gradients like \\eqref{eq:grad_1}, \\eqref{eq:grad_2}, \\eqref{eq:grad_3} would be\n\\begin{equation}\n \\begin{aligned}\n & \\nabla_{\\mathbf{w}} \\mathcal{L}_0(\\boldsymbol{\\theta},\\mathbf{w}) &:=&\\ \\ \\nabla_{\\mathbf{w}} \\mathcal{L}_0(\\mathbf{w})\\\\\n & \\nabla_{\\mathbf{w}}\\ell(\\boldsymbol{\\theta},\\mathbf{w}) &:=&\\ \\ \\nabla_{\\mathbf{w}}\\ell(\\mathbf{w})\\\\\n & \\nabla_{\\mathbf{w}}\\hat{\\ell}(\\boldsymbol{\\theta},\\mathbf{w}) &:=&\\ \\ \\nabla_{\\mathbf{w}}\\hat{\\ell}~(\\mathbf{w}).\n \\end{aligned}\n\\end{equation}\nThe corresponding gradients w.r.t $\\boldsymbol{\\theta}$ will be $\\nabla_{\\boldsymbol{\\theta}} \\mathcal{L}_0$, $\\nabla_{\\boldsymbol{\\theta}} \\ell$ and $\\nabla_{\\boldsymbol{\\theta}} \\hat{\\ell}$. As the $\\mathcal{L}_0$ in \\eqref{eq:newAT_pro} satisfies the Lipschitz gradient conditions, given $\\boldsymbol{x}_n\\in\\mathcal{X}$, it holds that\n\\begin{equation}\n \\label{eq:lipschitz_grad}\n \\begin{aligned}\n & &&\\ \\ \\underset{\\boldsymbol{\\theta}}{\\sup}\\ \\|\\nabla_{\\mathbf{w}} \\mathcal{L}^n_0(\\boldsymbol{\\theta},\\mathbf{w})-\\nabla_{\\mathbf{w}} \\mathcal{L}^n_0(\\boldsymbol{\\theta},\\mathbf{w}')\\|_2\\\\\n & &\\leq&\\ \\ L_{\\mathbf{w}\\mathbf{w}}\\|\\mathbf{w}-\\mathbf{w}'\\|_2\\\\[5pt]\n & & &\\ \\ \\underset{\\mathbf{w}}{\\sup}\\ \\|\\nabla_{\\mathbf{w}} \\mathcal{L}^n_0(\\boldsymbol{\\theta},\\mathbf{w})-\\nabla_{\\mathbf{w}} \\mathcal{L}^n_0(\\boldsymbol{\\theta}',\\mathbf{w})\\|_2\\\\\n & &\\leq&\\ \\ L_{\\mathbf{w}\\boldsymbol{\\theta}}\\|\\boldsymbol{\\theta}-\\boldsymbol{\\theta}'\\|_2\\\\[5pt]\n & & &\\ \\ \\underset{\\boldsymbol{\\theta}}{\\sup}\\ \\|\\nabla_{\\boldsymbol{\\theta}} \\mathcal{L}^n_0(\\boldsymbol{\\theta},\\mathbf{w})-\\nabla_{\\boldsymbol{\\theta}} \\mathcal{L}^n_0(\\boldsymbol{\\theta},\\mathbf{w}')\\|_2\\\\\n & &\\leq&\\ \\ L_{\\boldsymbol{\\theta}\\mathbf{w}}\\|\\mathbf{w}-\\mathbf{w}'\\|_2,\n \\end{aligned}\n\\end{equation}\nwhere $L_{\\mathbf{w}\\mathbf{w}}$, $L_{\\mathbf{w}\\boldsymbol{\\theta}}$ and $L_{\\boldsymbol{\\theta}\\mathbf{w}}$ are positive constants. 
Furthermore, by the strongly-concavity of $\\mathcal{L}_0$ and given $\\boldsymbol{x}_n\\in\\mathcal{X}$, we know that for any $\\boldsymbol{\\theta}_1$ and $\\boldsymbol{\\theta}_2\\in\\boldsymbol{\\Theta}$,\n\\begin{equation}\n \\begin{aligned}\n & &&\\ \\ \\mathcal{L}^n_0(\\boldsymbol{\\theta}_1,\\mathbf{w})-\\mathcal{L}^n_0( \\boldsymbol{\\theta}_2,\\mathbf{w})\\\\[5pt]\n & &\\leq&\\ \\ \\big\\langle\\nabla_{\\boldsymbol{\\theta}} \\mathcal{L}^n_0(\\boldsymbol{\\theta},\\mathbf{w}),\\boldsymbol{\\theta}_1-\\boldsymbol{\\theta}_2\\big\\rangle - \\frac{\\mu}{2}\\|\\boldsymbol{\\theta}_1-\\boldsymbol{\\theta}_2\\|_2^2.\n \\end{aligned}\n\\end{equation}\nAs the variance of the stochastic gradient is bounded by $\\sigma^2>0$, it means that\n\\begin{equation}\n \\mathbb{E}\\big[\\|\\nabla_{\\mathbf{w}} \\ell(\\mathbf{w})-\\nabla_{\\mathbf{w}} \\mathcal{L}_0(\\mathbf{w})\\|^2_2\\big]\\leq\\sigma^2.\n\\end{equation}\n\nTo prove the main result, we need the following two important lemmas.\n\\begin{lemma}\n\\label{lem:1}\nSuppose that $\\mathcal{L}_0$ in \\eqref{eq:newAT_pro} satisfies the Lipschitz gradient conditions as \\eqref{eq:lipschitz_grad} and $\\mathcal{L}_0$ is $\\mu$-strongly concave in $\\boldsymbol{\\Theta}$, we have $\\mathcal{L}_0$ is Lipschitz smooth with $L_0$ \n\\begin{equation}\n L_0=\\frac{L_{\\mathbf{w}\\boldsymbol{\\theta}}L_{\\boldsymbol{\\theta}\\mathbf{w}}}{\\mu}+L_{\\mathbf{w}\\mathbf{w}}.\n\\end{equation}\nIt holds that\n\\begin{equation}\n \\begin{aligned}\n & \\mathcal{L}_0(\\mathbf{w}_1)\\leq\\mathcal{L}_0(\\mathbf{w}_2)&+&\\ \\ \\ \\left\\langle\\nabla_{\\mathbf{w}} \\mathcal{L}_0(\\mathbf{w}_2),\\mathbf{w}_1-\\mathbf{w}_2\\right\\rangle\\\\\n & &+&\\ \\ \\ \\frac{L_0}{2}\\|\\mathbf{w}_1-\\mathbf{w}_2\\|^2_2,\n \\end{aligned}\n\\end{equation}\nand\n\\begin{equation}\n \\left\\|\\nabla_{\\mathbf{w}} \\mathcal{L}_0-\\nabla_{\\mathbf{w}} \\mathcal{L}_0(\\mathbf{w}_2)\\right\\|_2\\leq L_0\\|\\mathbf{w}_1-\\mathbf{w}_2\\|_2.\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nBy the strongly-concavity of $\\mathcal{L}_0$ and given $\\boldsymbol{x}_n\\in\\mathcal{X}$, for any $\\boldsymbol{\\theta}_1$, $\\boldsymbol{\\theta}_2$ and the corresponding $\\mathbf{w}_1$, $\\mathbf{w}_2$ , we have\n\\begin{equation}\n \\label{eq:30}\n \\begin{aligned}\n & &&\\ \\ \\mathcal{L}^n_0(\\boldsymbol{\\theta}_1,\\mathbf{w}_2)-\\mathcal{L}^n_0( \\boldsymbol{\\theta}_2,\\mathbf{w}_2)\\\\[5pt]\n & &\\leq&\\ \\ \\big\\langle\\nabla_{\\boldsymbol{\\theta}} \\mathcal{L}^n_0(\\boldsymbol{\\theta}_2,\\mathbf{w}_2),\\boldsymbol{\\theta}_1-\\boldsymbol{\\theta}_2\\big\\rangle - \\frac{\\mu}{2}\\|\\boldsymbol{\\theta}_1-\\boldsymbol{\\theta}_2\\|_2^2\\\\[3pt]\n & &\\leq&\\ \\ -\\frac{\\mu}{2}\\|\\boldsymbol{\\theta}_1-\\boldsymbol{\\theta}_2\\|_2^2.\n \\end{aligned} \n\\end{equation}\nThe second inequality is true as \n$$\n \\langle\\nabla_{\\boldsymbol{\\theta}} \\mathcal{L}^n_0(\\boldsymbol{\\theta}_2,\\mathbf{w}_2),\\boldsymbol{\\theta}_1-\\boldsymbol{\\theta}_2\\rangle\\leq 0.\n$$\nIn addition, we have\n\\begin{equation}\n \\label{eq:31}\n \\begin{aligned}\n & &&\\ \\ \\mathcal{L}^n_0(\\boldsymbol{\\theta}_2,\\mathbf{w}_2)-\\mathcal{L}^n_0( \\boldsymbol{\\theta}_1,\\mathbf{w}_2)\\\\[5pt]\n & &\\leq&\\ \\ \\big\\langle\\nabla_{\\boldsymbol{\\theta}} \\mathcal{L}^n_0(\\boldsymbol{\\theta}_1,\\mathbf{w}_2),\\boldsymbol{\\theta}_2-\\boldsymbol{\\theta}_1\\big\\rangle - \\frac{\\mu}{2}\\|\\boldsymbol{\\theta}_1-\\boldsymbol{\\theta}_2\\|_2^2\\\\[3pt]\n & &\\leq&\\ \\ 
-\\frac{\\mu}{2}\\|\\boldsymbol{\\theta}_1-\\boldsymbol{\\theta}_2\\|_2^2.\n \\end{aligned} \n\\end{equation}\nCombining \\eqref{eq:30} and \\eqref{eq:31}, we have\n\\begin{equation}\n \\label{eq:32}\n \\begin{aligned}\n & & &\\ \\ \\mu\\|\\boldsymbol{\\theta}_1-\\boldsymbol{\\theta}_2\\|^2_2\\\\[5pt]\n & &\\leq&\\ \\ \\langle\\nabla_{\\boldsymbol{\\theta}} \\mathcal{L}^n_0(\\boldsymbol{\\theta}_1,\\mathbf{w}_2),\\boldsymbol{\\theta}_2-\\boldsymbol{\\theta}_1\\rangle\\\\[5pt]\n & &\\leq&\\ \\ \\langle\\nabla_{\\boldsymbol{\\theta}} \\mathcal{L}^n_0(\\boldsymbol{\\theta}_1,\\mathbf{w}_2)-\\nabla_{\\boldsymbol{\\theta}} \\mathcal{L}^n_0(\\boldsymbol{\\theta}_1,\\mathbf{w}_1),\\boldsymbol{\\theta}_2-\\boldsymbol{\\theta}_1\\rangle\\\\[5pt]\n & &\\leq&\\ \\ \\|\\nabla_{\\boldsymbol{\\theta}} \\mathcal{L}^n_0(\\boldsymbol{\\theta}_1,\\mathbf{w}_2)-\\nabla_{\\boldsymbol{\\theta}} \\mathcal{L}^n_0(\\boldsymbol{\\theta}_1,\\mathbf{w}_1)\\|_2\\|\\boldsymbol{\\theta}_2-\\boldsymbol{\\theta}_1\\|_2\\\\[5pt]\n & &\\leq&\\ \\ L_{\\boldsymbol{\\theta}\\mathbf{w}}\\|\\mathbf{w}_2-\\mathbf{w}_1\\|_2\\|\\boldsymbol{\\theta}_2-\\boldsymbol{\\theta}_1\\|_2,\n \\end{aligned}\n\\end{equation}\nwhere the second inequality holds as\n$$\n \\langle\\nabla_{\\boldsymbol{\\theta}} \\mathcal{L}^n_0(\\boldsymbol{\\theta}_1,\\mathbf{w}_1),\\boldsymbol{\\theta}_2-\\boldsymbol{\\theta}_1\\rangle\\leq 0,\n$$\nthe third inequality follows from the Cauchy-Schwarz inequality, and the last one holds by the Lipschitz smoothness of the gradients of $\\mathcal{L}_0$ \\eqref{eq:lipschitz_grad}. \n\nFor any $n\\in[N]$, we have\n\\begin{equation}\n \\begin{aligned}\n & & &\\ \\ \\|\\nabla_{\\mathbf{w}} \\mathcal{L}^n_0(\\boldsymbol{\\theta}_1,\\mathbf{w}_1)-\\nabla_{\\mathbf{w}} \\mathcal{L}^n_0(\\boldsymbol{\\theta}_2,\\mathbf{w}_2)\\|_2\\\\[5pt]\n & &\\leq&\\ \\ \\|\\nabla_{\\mathbf{w}} \\mathcal{L}^n_0(\\boldsymbol{\\theta}_1,\\mathbf{w}_1)-\\nabla_{\\mathbf{w}} \\mathcal{L}^n_0(\\boldsymbol{\\theta}_2,\\mathbf{w}_1)\\|_2\\\\[5pt]\n & & &\\ \\ +\\|\\nabla_{\\mathbf{w}} \\mathcal{L}^n_0(\\boldsymbol{\\theta}_2,\\mathbf{w}_1)-\\nabla_{\\mathbf{w}} \\mathcal{L}^n_0(\\boldsymbol{\\theta}_2,\\mathbf{w}_2)\\|_2\\\\[5pt]\n & &\\leq&\\ \\ L_{\\mathbf{w}\\boldsymbol{\\theta}}\\|\\boldsymbol{\\theta}_1-\\boldsymbol{\\theta}_2\\|_2+L_{\\mathbf{w}\\mathbf{w}}\\|\\mathbf{w}_1-\\mathbf{w}_2\\|_2\\\\[5pt]\n & &=&\\ \\ \\left(\\frac{L_{\\mathbf{w}\\boldsymbol{\\theta}}L_{\\boldsymbol{\\theta}\\mathbf{w}}}{\\mu}+L_{\\mathbf{w}\\mathbf{w}}\\right)\\|\\mathbf{w}_1-\\mathbf{w}_2\\|_2,\n \\end{aligned}\n\\end{equation}\nwhere the first inequality follows from the triangle inequality, and the second inequality holds due to \\eqref{eq:32} and the Lipschitz smoothness of the gradients of $\\mathcal{L}_0$ \\eqref{eq:lipschitz_grad}. 
By the definition of $\\mathcal{L}$, it holds that\n\\begin{equation}\n \\begin{aligned}\n & & &\\ \\ \\|\\nabla_{\\mathbf{w}} \\mathcal{L}_0(\\mathbf{w}_1)-\\nabla_{\\mathbf{w}} \\mathcal{L}_0(\\mathbf{w}_2)\\|_2\\\\[5pt]\n & &=&\\ \\ \\left\\|\\frac{1}{N}\\sum_{n=1}^N \\left(\\nabla_{\\mathbf{w}} \\mathcal{L}^n_0(\\boldsymbol{\\theta}_1,\\mathbf{w}_1)-\\nabla_{\\mathbf{w}} \\mathcal{L}^n_0(\\boldsymbol{\\theta}_2,\\mathbf{w}_2)\\right)\\right\\|_2\\\\\n & &\\leq&\\ \\ \\frac{1}{N}\\sum_{n=1}^N\\|\\nabla_{\\mathbf{w}} \\mathcal{L}^n_0(\\boldsymbol{\\theta}_1,\\mathbf{w}_1)-\\nabla_{\\mathbf{w}} \\mathcal{L}^n_0(\\boldsymbol{\\theta}_2,\\mathbf{w}_2)\\|_2\\\\\n & &\\leq&\\ \\ \\left(\\frac{L_{\\mathbf{w}\\boldsymbol{\\theta}}L_{\\boldsymbol{\\theta}\\mathbf{w}}}{\\mu}+L_{\\mathbf{w}\\mathbf{w}}\\right)\\|\\mathbf{w}_1-\\mathbf{w}_2\\|_2.\n \\end{aligned}\n\\end{equation}\nWith the definition of the Lipschitz smoothness, we complete the proof.\n\\end{proof}\n\n\\begin{lemma}\n\\label{lem:2}\nSuppose that $\\mathcal{L}_0$ in \\eqref{eq:newAT_pro} satisfies the Lipschitz gradient conditions as \\eqref{eq:lipschitz_grad} and $\\mathcal{L}_0$ is $\\mu$-strongly concave in $\\boldsymbol{\\Theta}$, the approximate stochastic gradient $\\nabla_{\\mathbf{w}}\\hat{\\ell}(\\mathbf{w})$ \\eqref{eq:grad_3} satisfies\n\\begin{equation}\n \\|\\nabla_{\\mathbf{w}}\\hat{\\ell}~(\\mathbf{w})-\\nabla_{\\mathbf{w}}\\ell(\\mathbf{w})\\|_2\\leq L_{\\mathbf{w}\\boldsymbol{\\theta}}\\sqrt{\\frac{\\delta}{\\mu}},\n\\end{equation}\nwhere $\\boldsymbol{\\hat{x}}_{\\text{adv}}(\\boldsymbol{x},\\mathbf{w})$ is a $\\delta$-approximate solution to $\\boldsymbol{x}^*_{\\text{adv}}(\\boldsymbol{x},\\mathbf{w})$ with given $\\boldsymbol{x}\\in\\mathcal{X}$.\n\\end{lemma}\n\\begin{proof}\nBy the definitions of $\\nabla_{\\mathbf{w}}\\hat{\\ell}$ and $\\nabla_{\\mathbf{w}}\\ell$, we have\n\\begin{equation}\n \\label{eq:36}\n \\begin{aligned}\n & &&\\ \\ \\|\\nabla_{\\mathbf{w}}\\hat{\\ell}~(\\mathbf{w})-\\nabla_{\\mathbf{w}}\\ell(\\mathbf{w})\\|_2\\\\[5pt]\n & &=&\\ \\ \\left\\|\\frac{1}{|\\mathcal{B}|} \\sum_{i=1}^{|\\mathcal{B}|} (\\nabla_{\\mathbf{w}} \\mathcal{L}_0(\\boldsymbol{\\hat{x}}_{\\text{adv}}(\\boldsymbol{x}_i,\\mathbf{w}),\\mathbf{w})\\right.\\\\\n & & &\\ \\ \\left.\\textcolor{white}{\\frac{1}{|\\mathcal{B}|} \\sum_{i=1}^{|\\mathcal{B}|}}-\\nabla_{\\mathbf{w}}\\mathcal{L}_0(\\boldsymbol{x}^*_{\\text{adv}}(\\boldsymbol{x}_i,\\mathbf{w}),\\mathbf{w}))\\right\\|_2\\\\\n & &\\leq&\\ \\ \\frac{1}{|\\mathcal{B}|} \\sum_{n=1}^N \\left\\|\\nabla_{\\mathbf{w}} \\mathcal{L}_0(\\boldsymbol{\\hat{x}}_{\\text{adv}}(\\boldsymbol{x}_i,\\mathbf{w}),\\mathbf{w})\\right.\\\\\n & & &\\ \\ \\textcolor{white}{\\frac{1}{|\\mathcal{B}|} \\sum_{i=1}^{|\\mathcal{B}|}}\\left.-\\nabla_{\\mathbf{w}}\\mathcal{L}_0(\\boldsymbol{x}^*_{\\text{adv}}(\\boldsymbol{x}_i,\\mathbf{w}),\\mathbf{w})\\right\\|_2\\\\[5pt]\n & &\\leq&\\ \\ \\frac{1}{|\\mathcal{B}|} \\sum_{i=1}^{|\\mathcal{B}|} L_{\\mathbf{w}\\boldsymbol{\\theta}}\\|\\boldsymbol{\\hat{\\theta}}-\\boldsymbol{\\theta}^*\\|_2,\n \\end{aligned}\n\\end{equation}\nwhere the second inequality follows from the triangle inequality, the third inequality holds due to the gradient Lipschitz condition, and $\\boldsymbol{\\hat{\\theta}}$ is the parameter of strategy network corresponding to $\\boldsymbol{\\hat{x}}_{\\text{adv}}(\\boldsymbol{x}_i,\\mathbf{w})$, $\\boldsymbol{\\theta}^*$ is similar.\n\nSince $\\boldsymbol{\\hat{x}}_{\\text{adv}}(\\boldsymbol{x}_i,\\mathbf{w})$ is a 
$\\delta$-approximate adversarial example generated by the strategy network, we have\n\\begin{equation}\n \\label{eq:37}\n \\left\\langle\\boldsymbol{\\theta}^*-\\boldsymbol{\\hat{\\theta}}, \\nabla_{\\boldsymbol{\\theta}} \\mathcal{L}_0(\\boldsymbol{\\hat{\\theta}},\\mathbf{w})\\right\\rangle\\leq \\delta.\n\\end{equation}\nIn addition, it holds that\n\\begin{equation}\n \\label{eq:38}\n \\left\\langle\\boldsymbol{\\hat{\\theta}}-\\boldsymbol{\\theta}^*, \\nabla_{\\boldsymbol{\\theta}} \\mathcal{L}_0(\\boldsymbol{\\theta}^*,\\mathbf{w})\\right\\rangle\\leq 0.\n\\end{equation}\nPutting \\eqref{eq:37} and \\eqref{eq:38} together gives birth to\n\\begin{equation}\n \\label{eq:39}\n \\left\\langle\\boldsymbol{\\hat{\\theta}}-\\boldsymbol{\\theta}^*,\\nabla_{\\boldsymbol{\\theta}} \\mathcal{L}_0(\\boldsymbol{\\theta}^*,\\mathbf{w})-\\nabla_{\\boldsymbol{\\theta}} \\mathcal{L}_0(\\boldsymbol{\\hat{\\theta}},\\mathbf{w})\\right\\rangle\\leq \\delta.\n\\end{equation}\nMoreover, by the strongly concavity of $\\mathcal{L}_0$ and \\eqref{eq:32}, we have\n\\begin{equation}\n \\label{eq:40}\n \\begin{aligned}\n & & &\\ \\ \\mu\\|\\boldsymbol{\\theta}^*-\\boldsymbol{\\hat{\\theta}}\\|^2_2\\\\[2.5pt]\n & &\\leq&\\ \\ \\langle\\nabla_{\\boldsymbol{\\theta}}\\mathcal{L}^n_0(\\boldsymbol{\\theta}^*,\\mathbf{w})-\\nabla_{\\boldsymbol{\\theta}} \\mathcal{L}^n_0(\\boldsymbol{\\hat{\\theta}},\\mathbf{w}),\\boldsymbol{\\hat{\\theta}}-\\boldsymbol{\\theta}^*\\rangle\\\\[5pt]\n & &\\leq&\\ \\ \\delta.\n \\end{aligned}\n\\end{equation}\nConsequently, it immediately yields\n\\begin{equation}\n \\label{eq:41}\n \\|\\boldsymbol{\\theta}^*-\\boldsymbol{\\hat{\\theta}}\\|_2\\leq\\sqrt{\\frac{\\delta}{\\mu}}.\n\\end{equation}\nSubstituting \\eqref{eq:41} into \\eqref{eq:36}, we complete the proof.\n\\end{proof}\n\n\\convergence*\n\\begin{proof}\nBy Lemma \\ref{lem:1}, we have\n\\begin{equation*}\n \\begin{aligned}\n & \\mathcal{L}_0(\\mathbf{w}^{t+1})&\\leq&\\ \\ \\mathcal{L}_0(\\mathbf{w}^t)+\\frac{L_0}{2}\\|\\mathbf{w}^{t+1}-\\mathbf{w}^{t}\\|^2_2\\\\[5pt]\n & &+&\\ \\ \\left\\langle\\nabla_{\\mathbf{w}} \\mathcal{L}_0(\\mathbf{w}^t),\\mathbf{w}^{t+1}-\\mathbf{w}^{t}\\right\\rangle.\n \\end{aligned} \n\\end{equation*}\nDue to\n\\[\n \\mathbf{w}^{t+1}=\\mathbf{w}^{t}-\\eta_t\\nabla_{\\mathbf{w}}\\hat{\\ell}(\\mathbf{w}^{t}),\n\\]\nit holds that\n\\begin{equation}\n \\begin{aligned}\n & & &\\ \\ \\mathcal{L}_0(\\mathbf{w}^{t+1})\\\\[5pt]\n & &\\leq&\\ \\ \\mathcal{L}_0(\\mathbf{w}^t)-\\eta_t\\|\\nabla_{\\mathbf{w}}\\mathcal{L}_0(\\mathbf{w}^t)\\|_2^2+\\frac{L_0\\eta_t^2}{2}\\|\\nabla_{\\mathbf{w}}\\hat{\\ell}(\\mathbf{w}^{t})\\|_2^2\\\\[5pt]\n & & &\\ \\ +\\eta_t\\langle\\nabla_{\\mathbf{w}}\\mathcal{L}_0(\\mathbf{w}^t),\\nabla_{\\mathbf{w}}\\mathcal{L}_0(\\mathbf{w}^t)-\\nabla_{\\mathbf{w}}\\hat{\\ell}(\\mathbf{w}^{t})\\rangle\\\\[5pt]\n & &=&\\ \\ \\mathcal{L}_0(\\mathbf{w}^t)-\\eta_t\\left(1-\\frac{L_0\\eta_t}{2}\\right)\\|\\nabla_{\\mathbf{w}}\\mathcal{L}_0(\\mathbf{w}^t)\\|_2^2\\\\[5pt]\n & & &\\ \\ +\\eta_t\\left(1-\\frac{L_0\\eta_t}{2}\\right)\\langle\\nabla_{\\mathbf{w}}\\mathcal{L}_0(\\mathbf{w}^t),\\nabla_{\\mathbf{w}}\\mathcal{L}_0(\\mathbf{w}^t)-\\nabla_{\\mathbf{w}}\\hat{\\ell}(\\mathbf{w}^{t})\\rangle\\\\[5pt]\n & & &\\ \\ + \\frac{L_0\\eta_t^2}{2}\\|\\nabla_{\\mathbf{w}}\\hat{\\ell}(\\mathbf{w}^{t})-\\nabla_{\\mathbf{w}}\\mathcal{L}_0(\\mathbf{w}^t)\\|_2^2\\\\[5pt]\n & &=&\\ \\ 
\\mathcal{L}_0(\\mathbf{w}^t)-\\eta_t\\left(1-\\frac{L_0\\eta_t}{2}\\right)\\|\\nabla_{\\mathbf{w}}\\mathcal{L}_0(\\mathbf{w}^t)\\|_2^2\\\\[5pt]\n & & &\\ \\ +\\eta_t\\left(1-\\frac{L_0\\eta_t}{2}\\right)\\langle\\nabla_{\\mathbf{w}}\\mathcal{L}_0(\\mathbf{w}^t),\\nabla_{\\mathbf{w}}\\ell(\\mathbf{w}^{t})-\\nabla_{\\mathbf{w}}\\hat{\\ell}(\\mathbf{w}^{t})\\rangle\\\\[5pt]\n & & &\\ \\ +\\eta_t\\left(1-\\frac{L_0\\eta_t}{2}\\right)\\langle\\nabla_{\\mathbf{w}}\\mathcal{L}_0(\\mathbf{w}^t),\\nabla_{\\mathbf{w}}\\mathcal{L}_0(\\mathbf{w}^t)-\\nabla_{\\mathbf{w}}\\ell(\\mathbf{w}^{t})\\rangle\\\\[5pt]\n & & &\\ \\ +\\frac{L_0\\eta_t^2}{2}\\|\\nabla_{\\mathbf{w}}\\hat{\\ell}(\\mathbf{w}^{t})-\\nabla_{\\mathbf{w}}\\ell(\\mathbf{w}^{t})+\\nabla_{\\mathbf{w}}\\ell(\\mathbf{w}^{t})-\\nabla_{\\mathbf{w}}\\mathcal{L}_0(\\mathbf{w}^t)\\|_2^2\\\\\n & &\\leq&\\ \\ \\mathcal{L}_0(\\mathbf{w}^t)-\\frac{\\eta_t}{2}\\left(1-\\frac{L_0\\eta_t}{2}\\right)\\|\\nabla_{\\mathbf{w}}\\mathcal{L}_0(\\mathbf{w}^t)\\|_2^2\\\\[5pt]\n & & &\\ \\ +\\frac{\\eta_t}{2}\\left(1-\\frac{L_0\\eta_t}{2}\\right)\\|\\nabla_{\\mathbf{w}}\\ell(\\mathbf{w}^{t})-\\nabla_{\\mathbf{w}}\\hat{\\ell}(\\mathbf{w}^{t})\\|^2_2\\\\[5pt]\n & & &\\ \\ +\\eta_t\\left(1+\\frac{L_0\\eta_t}{2}\\right)\\langle\\nabla_{\\mathbf{w}}\\mathcal{L}_0(\\mathbf{w}^t),\\nabla_{\\mathbf{w}}\\mathcal{L}_0(\\mathbf{w}^t)-\\nabla_{\\mathbf{w}}\\ell(\\mathbf{w}^{t})\\rangle\\\\[5pt]\n & & &\\ \\ +L_0\\eta_t^2\\|\\nabla_{\\mathbf{w}}\\hat{\\ell}(\\mathbf{w}^{t})-\\nabla_{\\mathbf{w}}\\ell(\\mathbf{w}^{t})\\|_2^2\\\\[5pt]\n & & &\\ \\ +L_0\\eta_t^2\\|\\nabla_{\\mathbf{w}}\\ell(\\mathbf{w}^{t})-\\nabla_{\\mathbf{w}}\\mathcal{L}_0(\\mathbf{w}^t)\\|_2^2\\\\\n \\end{aligned}\n\\end{equation}\nTaking expectation on both sides of the above inequality conditioned on $\\mathbf{w}^t$, then we have\n\\begin{equation}\n \\begin{aligned}\n & & &\\ \\ \\mathbb{E}[\\mathcal{L}_0(\\mathbf{w}^{t+1})-\\mathcal{L}_0(\\mathbf{w}^{t})|\\mathbf{w}^t]\\\\[5pt]\n & &\\leq&\\ \\ -\\frac{\\eta_t}{2}\\left(1-\\frac{L_0\\eta_t}{2}\\right)\\|\\nabla_{\\mathbf{w}}\\mathcal{L}_0(\\mathbf{w}^t)\\|_2^2\\\\[5pt]\n & & &\\ \\ +\\frac{\\eta}{2}\\left(1+\\frac{3\\eta_tL_0}{2}\\right)\\frac{\\delta L^2_{\\mathbf{w}\\boldsymbol{\\theta}}}{\\mu}+L_0\\eta^2_t\\sigma^2.\n \\end{aligned}\n\\end{equation}\nThen we do the telescope sum over $t=0,\\dots,T-1$, we obtain\n\\begin{equation}\n \\begin{aligned}\n & & &\\ \\ \\sum_{t=0}^{T-1}\\frac{\\eta_t}{2}\\left(1-\\frac{L_0\\eta_t}{2}\\right)\\mathbb{E}[\\|\\mathcal{L}_0(\\mathbf{w}^{t})\\|^2_2]\\\\[5pt]\n & &\\leq&\\ \\ \\mathbb{E}[\\mathcal{L}_0(\\mathbf{w}^0)-\\mathcal{L}_0(\\mathbf{w}^T)]+L_0\\sum_{t=0}^{T-1}\\eta^2_t\\sigma^2\\\\[5pt]\n & & &\\ \\ + \\sum_{t=0}^{T-1}\\frac{\\eta}{2}\\left(1+\\frac{3\\eta_tL_0}{2}\\right)\\frac{\\delta L^2_{\\mathbf{w}\\boldsymbol{\\theta}}}{\\mu}.\n \\end{aligned}\n\\end{equation}\nChoosing $\\eta_t= \\eta_1$ as\n\\begin{equation}\n \\eta_1 = \\min\\left(\\frac{1}{L_0},\\ \\sqrt{\\frac{\\mathcal{L}_0(\\mathbf{w}^0)-\\underset{\\mathbf{w}}{\\min}\\ \\mathcal{L}_0(\\mathbf{w})}{\\sigma^2TL_0}}\\right),\n\\end{equation}\nit holds that\n\\begin{equation}\n \\frac{1}{T}\\sum_{t=0}^{T-1}\\mathbb{E}\\big[\\|\\nabla \\mathcal{L}_0(\\mathbf{w}^t)\\|^2_2\\big]\\leq4\\sigma\\sqrt{\\frac{\\Delta L_0}{T}}+\\frac{5\\delta L^2_{\\mathbf{w}\\boldsymbol{\\theta}}}{\\mu},\n\\end{equation}\nwhere $\\Delta=\\mathcal{L}_0(\\mathbf{w}^0)-\\underset{\\mathbf{w}}{\\min}\\ 
\\mathcal{L}_0(\\mathbf{w})$.\n\\end{proof}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Incompressible setting and results}\n\\label{sec:4.2.5}\nLet $B\\ss \\mathbb{R}^2$ be the unit ball. For $u_0\\in L^p(B,\\mathbb{R}^2)$ with $2\\le p<\\infty$ and $\\det\\ensuremath{\\nabla} u_0=1$ a.e. in $B$ we define\n\\[\n\\mathcal{A}_{u_0}^{p,c}:=\\{u\\in W^{1,p}(B,\\mathbb{R}^2): \\det\\ensuremath{\\nabla} u=1 \\;\\mb{a.e. in}\\;B ,\\;u-u_0\\in C_c^\\infty(B,\\mathbb{R}^2)\\}\n\\]\nand for every $u\\in \\mathcal{A}_{u_0}^{p,c}$ we define\n\\begin{equation}E(u)=\\int\\limits_B{f(x,\\ensuremath{\\nabla} u)\\;dx}\\label{eq:HFU.I.2.1},\\end{equation}\nwhere $f$ is given by\n\\[f(x,\\xi)=\\nu(x)|\\xi|^p \\]\nfor a.e. $x\\in B$ and $\\xi\\in \\mathbb{R}^{2\\ti2}.$ \nMoreover, $\\nu\\in L^\\infty(B)$ is supposed to satisfy $\\nu(x)\\ge0$ a.e.\\! in $B.$ Since $\\nu$ is allowed to take on the value $0,$ the integrand could indeed disappear for some $x \\in B.$ Additionally, $f(x,\\ensuremath{\\cdot})$ is convex in its 2nd variable for a.e. $x\\in B.$ All of the above, is making $E$ a version of the $p-$Dirichlet functional. \\vspace{0.5cm}\n\nAs usual, we are interested in the corresponding minimization problem\n\\begin{equation}\n\\inf\\limits_{u\\in\\mathcal{A}_{u_0}^{p,c}} E(u).\n\\label{eq:USPS:1.1}\n\\end{equation}\nNotice, that the missing uniformity might cause troubles guaranteeing the existence of a minimizer, however, this is without any consequence for this work.\nAssuming for now, that the situation is such that the minimum is indeed obtained and there are some corresponding minimizing maps or more generally corresponding stationary points of $E,$ which are defined as follows:\n\n\\begin{de}(Stationary point)\nWe say that $u$ is a stationary point of $E(\\cdot)$ if there exists a function $\\ensuremath{\\lambda}$, which we shall henceforth refer to as a pressure, belonging to $W^{1,1}(B)$ and such that\n\\begin{align}\\label{def:SP1} \\div( \\nabla_{\\xi}f(x,\\nabla u) + p \\ensuremath{\\lambda}(x) \\, \\textnormal{cof}\\; \\nabla u) = 0 \\quad \\mathrm{in} \\ \\mathcal{D}'(B).\n\\end{align}\n\\end{de}\n\n\n Now we can state the main result for the incompressible scenario. Recall, that for any vector $y=y_Re_R+y_\\th e_\\th\\in\\mathbb{R}^2$ we define its maximum norm via $|y|_{\\infty}:=\\max\\{y_R,y_\\th\\}.$\n\n\\begin{thm} [High frequency uniqueness] Let $2\\le p<\\infty,$ assume $u_0\\in L^p(B,\\mathbb{R}^2)$ to be the boundary conditions and let $u\\in \\mathcal{A}_{u_0}^{p,c}$ be a stationary point of $E,$ as given in \\eqref{def:SP1}. 
\nFurthermore, let $\\sigma(x):=\\sqrt{\\nu(x)|\\ensuremath{\\nabla} u(x)|^{p-2}}\\in L^{\\frac{4}{p-2}}(B)$ and assume that there exists $l\\in \\mathbb{N}$ s.t.\n\\begin{equation}|\\sigma,_\\th(x)|\\le l \\sigma(x) \\mb{for a.e.}\\;x\\in B\\label{eq:HFU.UC.G.01} \\end{equation}\nholds.\\\\\n\nThen the following statements are true:\n\ni) \\textbf{(purely high modes.)} Suppose the corresponding pressure $\\ensuremath{\\lambda}$ exists and satisfies\n\\begin{equation}|\\ensuremath{\\nabla} \\ensuremath{\\lambda}(x)R|_{\\infty}\\le \\frac{n}{\\sqrt{2}}\\nu(x)|\\ensuremath{\\nabla} u|^{p-2}\\mb{for a.e.}\\;x\\in B\\label{eq:HFU.IC.G.02}\\end{equation}\nfor some $n\\in\\mathbb{N}.$\\\\\n\nThen $u$ is a minimizer of $E$ in the subclass \n\\[\\mathcal{F}_{n_*}^{p,\\sigma,c}=\\left\\{v\\in \\mathcal{A}_{u_0}^{p,c}|\\eta=v-u\\in W_0^{1,p}(B,\\mathbb{R}^2) \\;\\mb{and}\\; \\sigma\\eta=\\sum\\limits_{j\\ge n}(\\sigma\\eta)^{(j)} \\right\\},\\]\nwhere $n_*:=n+l.$\nMoreover, if there exists a constant $\\sigma_0>0$ s.t. $\\sigma(x)\\ge\\sigma_0>0$ for any $x\\in B$ and inequality \\eqref{eq:HFU.IC.G.02} is strictly satisfied on a non-trivial set, then $u$ is the unique minimizer in $\\mathcal{F}_{n_*}^{p,\\sigma,c}$.\\\\\n\nii) \\textbf{($0-$mode and high modes.)} Suppose the corresponding pressure $\\ensuremath{\\lambda}$ exists and satisfies\n\\begin{equation}|\\ensuremath{\\nabla} \\ensuremath{\\lambda}(x)R|_{\\infty}\\le \\frac{\\sqrt{3}m\\nu(x)|\\ensuremath{\\nabla} u|^{p-2}}{2\\sqrt{2}}\\mb{for a.e.}\\;x\\in B\\label{eq:HFU.IC.G.03}\\end{equation}\nfor some $m\\in\\mathbb{N}.$\\\\\nThen $u$ is a minimizer of $E$ in the subclass \n\\[\\mathcal{F}_{0,m_*}^{p,\\sigma,c}=\\left\\{v\\in \\mathcal{A}_{u_0}^{p,c}|\\eta=v-u\\in W_0^{1,p}(B,\\mathbb{R}^2) \\;\\mb{and}\\; \\sigma\\eta=(\\sigma\\eta)^{(0)}+\\sum\\limits_{j\\ge m_*}(\\sigma\\eta)^{(j)} \\right\\},\\]\nwhere $m_*=m+l.$\nMoreover, if there exists a constant $\\sigma_0>0$ s.t. $\\sigma(x)\\ge\\sigma_0>0$ for any $x\\in B$ and inequality \\eqref{eq:HFU.IC.G.03} is is strictly satisfied on a non-trivial set, then $u$ is the unique minimizer in $\\mathcal{F}_{0,m_*}^{p,\\sigma,c}$.\n\\label{thm1.1}\n\\end{thm}\\vspace{0.5cm}\n\n\\section{Compressible setting and results}\n For $u_0\\in L^p(B,\\mathbb{R}^2)$ with $2\\le p\\le\\infty$ we define the set of admissible maps by\n\\[\n\\mathcal{A}_{u_0}^p=\\{u\\in W^{1,p}(B,\\mathbb{R}^2): \\; u-u_0\\in C_c^\\infty(B,\\mathbb{R}^2)\\}.\n\\]\nHere we consider energies given by\n\\begin{equation}I(u)=\\int\\limits_B{\\Phi(x,\\ensuremath{\\nabla} u)\\;dx},\\label{eq:HFU.UC.G.0}\\end{equation}\nwhere the integrand is of the form\n\\begin{equation}\\Phi(x,\\xi)=\\frac{\\nu(x)}{p}|\\xi|^p+\\Psi(x,\\xi,\\det\\xi),\\label{eq:HFU.UC.GA}\\end{equation}\nNow we want $(\\xi,d)\\mapsto \\Psi(x,\\xi,d)$ to be convex for a.e.\\! $x \\in B,$ making $\\Psi(x,\\ensuremath{\\cdot})$ a convex representative of a polyconvex function a.e. in $B$. \nMoreover, the function $\\nu\\in L^\\infty(B),$ satisfying $\\nu(x)\\ge0$ a.e.\\! in $B,$ is ought to be optimal, in the sense, that there can be no term of the form $a(x)|\\xi|^q$ for any $q\\ge p$ in $\\Psi.$ However, $\\Psi$ might be negative. Finally, we want $\\Phi$ to be of $p-$growth, supposing that there exits $C\\in L^\\infty(B)$ with $C(x)\\ge0$ a.e. in $B$ s.t.\n\\[0\\le \\Phi(x,\\xi)\\le \\frac{C(x)}{p}(1+|\\xi|^p)\\;\\mb{for all} \\;\\xi\\in \\mathbb{R}^{2\\ti2}\\;\\mb{and a.e.} \\; x\\in B. 
\\]\nAll of the above combined guarantees that $x\\mapsto \\Phi(x,\\ensuremath{\\nabla} u(x))\\in L^1(B)$ for any $u\\in W^{1,p}(B,\\mathbb{R}^2)$ and hence the corresponding energy is finite. \\vspace{0.5cm}\n\nThe corresponding minimization problem\n\\begin{equation}\n\\inf\\limits_{u\\in\\mathcal{A}_{u_0}^p} I(u)\n\\label{eq:USPS:1.1}\n\\end{equation}\nmight lack a minimizer in general. Assuming, that such a minimizer\/stationary point of $I$ exists, motivates the following definition: \n\\begin{de}(Stationary point)\nWe say that $u\\in\\mathcal{A}_{u_0}^p$ is a stationary point of $I(\\cdot)$ if $u$ satisfies the Euler-Lagrange equation given by\n\\begin{align}\\label{def:SP.2}\n\\div( \\nu(x)|\\ensuremath{\\nabla} u|^{p-2}\\ensuremath{\\nabla} u+\\ensuremath{\\partial}_\\xi\\Psi(x,\\ensuremath{\\nabla} u,d_{\\ensuremath{\\nabla} u})+\\ensuremath{\\partial}_d\\Psi(x,\\ensuremath{\\nabla} u,d_{\\ensuremath{\\nabla} u})\\textnormal{cof}\\;\\ensuremath{\\nabla} u) = 0\\; \\mathrm{in} \\;\\mathcal{D}'(B).\n\\end{align}\n\\end{de} \\vspace{-0.3cm}\nNow we can give an analogous result for these types of integrands.\n\n \n\\begin{thm} [High frequency uniqueness] Let $B\\ss \\mathbb{R}^2$ be the unit ball. Furthermore, let $2\\le p\\le\\infty,$ assume $u_0\\in L^p(B,\\mathbb{R}^2)$ to be the boundary conditions and let $u\\in \\mathcal{A}_{u_0}^p$ be a stationary point of $I,$ as given in \\eqref{def:SP.2}. Furthermore, let $\\sigma(x):=\\sqrt{\\nu(x)|\\ensuremath{\\nabla} u(x)|^{p-2}}\\in L^{\\frac{4}{p-2}}(B)$ and assume that there exists $l\\in \\mathbb{N}$ s.t.\n\\begin{equation}|\\sigma,_\\th(x)|\\le l \\sigma(x) \\mb{for a.e.}\\;x\\in B\\label{eq:HFU.UC.G.01} \\end{equation}\nholds.\\\\\n\nThen the following statements are true:\n\ni) \\textbf{(purely high modes.)} Assume there exists $n\\in\\mathbb{N}$ s.t.\n\\begin{equation}|\\ensuremath{\\nabla}_x \\ensuremath{\\partial}_d\\Psi(x,\\ensuremath{\\nabla} u(x),d_{\\ensuremath{\\nabla} u(x)})R|_{\\infty}\\le \\frac{n}{\\sqrt2}\\nu(x)|\\ensuremath{\\nabla} u|^{p-2}\\mb{for a.e.}\\;x\\in B.\\label{eq:HFU.UC.G.1} \\end{equation}\n\nThen $u$ is a minimizer of $E$ in the subclass \n\\[\\mathcal{F}_{n_*}^{p,\\sigma}=\\left\\{v\\in \\mathcal{A}_{u_0}^p|\\;\\eta=v-u\\in W_0^{1,p}(B,\\mathbb{R}^2) \\;\\mb{and}\\; \\sigma\\eta=\\sum\\limits_{j\\ge {n_*}}(\\sigma\\eta)^{(j)} \\right\\},\\]\nwhere $n_*:=n+l.$\nMoreover, if there exists a constant $\\sigma_0>0$ s.t. $\\sigma(x)\\ge\\sigma_0>0$ for any $x\\in B$ and inequality \\eqref{eq:HFU.UC.G.1} is strictly satisfied on a non-trivial set, then $u$ is the unique minimizer in $\\mathcal{F}_{n_*}^{p,\\sigma}$.\\\\\n\nii) \\textbf{($0-$mode and high modes.)} Assume there exists $m\\in\\mathbb{N}$ s.t.\n\\begin{equation}|\\ensuremath{\\nabla}_x \\ensuremath{\\partial}_d\\Psi(x,\\ensuremath{\\nabla} u(x),d_{\\ensuremath{\\nabla} u(x)})R|_{\\infty}\\le \\frac{\\sqrt{3}m\\nu(x)|\\ensuremath{\\nabla} u|^{p-2}}{2\\sqrt{2}}\\mb{for a.e.}\\;x\\in B.\\label{eq:HFU.UC.G.2}\\end{equation}\nThen $u$ is a minimizer of $E$ in the subclass \n\\[\\mathcal{F}_{0,m_*}^{p,\\sigma}=\\left\\{v\\in \\mathcal{A}_{u_0}^p|\\;\\eta=v-u\\in W_0^{1,p}(B,\\mathbb{R}^2) \\;\\mb{and}\\; \\sigma\\eta=(\\sigma\\eta)^{(0)}+\\sum\\limits_{j\\ge m_*}(\\sigma\\eta)^{(j)} \\right\\},\\]\nwhere $m_*:=m+l.$\nMoreover, if there exists a constant $\\sigma_0>0$ s.t. 
$\\sigma(x)\\ge\\sigma_0>0$ for any $x\\in B$ and inequality \\eqref{eq:HFU.UC.G.2} is strictly satisfied on a non-trivial set, then $u$ is the unique minimizer in $\\mathcal{F}_{0,m_*}^{p,\\sigma}$.\n\\label{thm:HFU.UC.G.1}\n\\end{thm}\n\n\nHere, we contribute to John Ball's objectives, as formulated in \\cite[\u00a7 2.6]{BallOP}, which tries to get an understanding of questions related to uniqueness in elastic situations. We recently started this discussion in \\cite{D1}, where we presented a uniqueness criteria, in the incompressible case\\footnote{Recall, that we call situations, where the energies remain finite either incompressible elastic, if the considered admissible maps must be measure-preserving, otherwise we call it compressible elastic. Moreover, we call a model (fully) non-linear elastic, if the considered integrand $f,$ ignoring any other dependencies, satisfies $f(\\xi)=+\\infty$ for any $\\xi\\in \\mathbb{R}^{n\\ti n}$ s.t. $\\det\\xi\\le 0$ and $f(\\xi)\\ensuremath{\\rightarrow}+\\infty$ if $\\det\\xi\\ensuremath{\\rightarrow} 0^+$ or $\\det\\xi\\ensuremath{\\rightarrow} +\\infty.$ Notice, that in the latter model some of the consider energies might be infinite.}, for quadratic uniformly convex integrands $f(x,\\xi)$ and subject to suitable boundary conditions. Here we go beyond the latter first by allowing the more general $p-$Dirichtlet integrands but also by discussing high-pressure situations, where uniqueness can only be guaranteed for a subclass of variations which consist of purely high-modes. We also provide the analogous results for polyconvex-type functionals, which are of $p-$growth, in the compressible case. The work, which is certainly most relevant to ours is the one by Sivaloganathan and Spector \\cite{SS18}. In particular, in \\cite[Theorem 4.2]{SS18} they consider the same functionals $I$, as given in (\\ref{eq:HFU.UC.G.0}-\\ref{eq:HFU.UC.GA}), however, on a suitable class of admissible maps satisfying the constraint $\\det\\ensuremath{\\nabla} u>0 \\;\\mb{a.e.}$ and subject to suitable boundary data. Assuming then that $u$ is a weak solution satisfying the corresponding equilibrium equation and the condition given by\n\n\\begin{equation}|\\ensuremath{\\partial}_d\\Psi(x,\\ensuremath{\\nabla} u,d_{\\ensuremath{\\nabla} u})R|\\le \\nu(x)|\\ensuremath{\\nabla} u|^{p-2}\\mb{for a.e.}\\;x\\in \\Omega.\n\\label{eq:Uni.SivSP.1}\n\\end{equation}\nThen $u$ must be a global minimizer of $I.$ Additionally, if the latter inequality is in some sense strictly satisfied then $u$ must be the unique one. It is crucial to realise, that the criterion given by Sivaloganathan and Spector differs from ours. Indeed, \\eqref{eq:Uni.SivSP.1} involves a 1st order derivative of $\\Psi$ while in \\eqref{eq:HFU.UC.G.2} a 2nd order derivative is used. \\\\\n \nA possible incomplete list of results regarding the compressible setting might look as follows: Initially, it is well known, that uniformly convex functionals possess unique global minimizers, see for instance \\cite[\u00a7 3.3]{Kl16}. Knorps and Stuart showed in \\cite{KS84}, that for a strongly quasiconvex integrand defined on a star-shaped domain and subject to linear boundary data $u_0=Ax,$ any $C^2-$stationary point needs to agree with $Ax$ everywhere. A generalisation can be found in \\cite{T03}. These results have been transferred to the incompressible by Shahrokhi-Dehkordi and Taheri \\cite{ST10} and the fully non-linear case by Bevan \\cite{B11}. 
In \\cite{C14}, Cordero presented a uniqueness result guaranteeing a unique minimizer for strongly quasiconvex $C^2-$integrands if the given boundary data is smooth and small enough.\n Zhang \\cite{Z91} discusses situations, both compressible and incompressible ones, where the considered energies are polyconvex, pure displacement boundary conditions are imposed, and the Jacobian is required to be strictly positive a.e. In this setting the corresponding minimizer must agree with the solution of the corresponding Euler-Lagrange equation, which is highly non-trivial due to the weak spaces involved and the lack of compactness of the constraint.\n By contrast, non-uniqueness for minimizers of strongly polyconvex functionals has been established by Spadaro in \\cite{S08}. However, these counterexamples rely heavily on the fact that the determinant can take on negative values, which is possible neither in the incompressible nor in the NLE-setting. John \\cite{J72} and Spector and Spector \\cite{SS19} obtain uniqueness of equilibrium solutions for small enough strains and under various boundary conditions. In sharp contrast, Post and Sivaloganathan \\cite{PS97} construct multiple equilibrium solutions in finite elasticity. \\\\\n\nA first treatment of uniqueness in incompressible elasticity can be found in \\cite[Section 6]{KS84}. Much research in the incompressible setting is concerned with the double covering problem, that is, minimizing the Dirichlet energy $\\mathbb{D}(\\xi)= |\\xi|^2\/2$ on the unit ball in $2d$ subject to the double covering boundary condition $u_2=(\\cos(2\\th),\\sin(2\\th)),$ first considered by Ball \\cite{B77}. Since then a lot of progress towards a solution has been made and partial results are available. For instance, Bevan \\cite{JB14} showed that $u_2$ is the unique global minimizer up to the first Fourier-mode. This paper is central for several reasons: firstly, it introduces the type of uniqueness argument we provide here and in \\cite{D1}; moreover, it also contains the concept of high-frequency uniqueness. In \\cite{BeDe21} Bevan and Deane obtained that $u_2$ is the unique global minimizer for either purely inner \nor purely outer variations, and local minimality is shown for a subclass of variations allowing a certain mixture of both types. In contrast, in \\cite{BeDe20}, equal energy stationary points of an inhomogeneous uniformly convex functional $(x,\\xi)\\mapsto f(x,\\xi)$ depending discontinuously on $x$ are constructed. It remains unknown for now whether these stationary points are actually global minimizers.\\\\\n\n\nIn the fully non-linear case Bevan and Yan \\cite{BY07} show that the famous BOP$-$map, constructed by Bauman, Owen and Phillips \\cite{BOP91MS}, is the unique global minimizer in a suitable sub-class of admissible maps. \\\\\n\nFinally, many papers address uniqueness questions in situations where the reference domain is an annulus, see for instance \\cite{J72,PS97,T09,MT17,MT19,BK19}. \\\\\n\n\\vspace{3mm}\n\n\\textbf{Plan of the paper:} After introducing the most important notation used in this paper, the proof of Theorem \\ref{thm1.1} will be given in \u00a7.\\ref{sec:3}, followed by some important remarks on the incompressible situation. 
\u00a7.\\ref{sec.4} then discusses the compressible case instead and, in particular, a proof of theorem \\ref{thm:HFU.UC.G.1}.\\\\\n\n\\textbf{Notation:} For any $2\\times2-$matrix $A$ we define its cofactor by\n\\begin{equation}\n\\textnormal{cof}\\; A=\\begin{pmatrix}a_{22} &-a_{21}\\\\-a_{12} & a_{11}\\end{pmatrix}.\n\\label{eq:1.3}\n\\end{equation}\nMoreover, we will make use of the shorthand $d_A:=\\det A.$\\\\\n\nThe Fourier-representation for any $\\eta\\in C^\\infty(B,\\mathbb{R}^2)$ (For members of Sobolev- spaces one might approximate) is given by\n\\begin{align*}\n\\eta(x)=\\sum\\limits_{j\\ge0}\\eta^{(j)}(x),\\;\\mb{where}\\;\\eta^{(0)}(x)=\\frac{1}{2}A_0(R), \\;A_0(R)=\\frac{1}{2\\pi}\\int\\limits_{0}^{2\\pi}{\\eta(R,\\th)\\;d\\th}\\end{align*} \nand for any $ j\\ge1$ we have\n\\begin{align*}\n \\eta^{(j)}(x)=A_j(R)\\cos(j\\th)+B_j(R)\\sin(j\\th),\n \\end{align*}\n where\n\\begin{align*}\nA_j(R)=\\frac{1}{2\\pi}\\int\\limits_{0}^{2\\pi}{\\eta(R,\\th)\\cos(j\\th)\\;d\\th}\\;\\mb{and}\\;\nB_j(R)=\\frac{1}{2\\pi}\\int\\limits_{0}^{2\\pi}{\\eta(R,\\th)\\sin(j\\th)\\;d\\th}.\n\\end{align*}\nFurther we use $\\tilde{\\eta}:=\\eta-\\eta^{(0)}.$\\\\\n\n\n\n\\section{The incompressible case}\nWe immediately start with the proof in the incompressible setting. The proof is obtained by comparing energies and gaining a lower bound by means of the ELE and a Poincar\u00e9-type inequality.\\\\\n\n\n\\label{sec:3}\n\\textbf{Proof of Theorem \\ref{thm1.1}:}\\\\\n\n\n\ni) Let $u\\in\\mathcal{A}_{u_0}^{p,c}$ be a stationary point of $E$ and let $v\\in\\mathcal{F}_{n_*}^{p,\\sigma,c}$ be arbitrary and set $\\eta:=v-u\\in W_0^{1,2}(B,\\mathbb{R}^2)$ and $ \\sigma\\eta=\\sum\\limits_{j\\ge {n_*}}(\\sigma\\eta)^{(j)}$ with $n_*=n+l$ assuming wlog. $\\eta\\in C_c^\\infty(B,\\mathbb{R}^2)$ and $\\sigma\\in C^\\infty(B).$\n\nWe start by the standard expansion\n\\begin{align}\nE(v)-E(u)=&\\int\\limits_B{\\nu(x)(|\\ensuremath{\\nabla} u+\\ensuremath{\\nabla} \\eta|^{p}-|\\ensuremath{\\nabla} u|^{p})\\;dx}&\\nonumber\\\\\n\\ge&\\frac{p}{2}\\int\\limits_B{\\nu(x)|\\ensuremath{\\nabla} u|^{p-2}|\\ensuremath{\\nabla}\\eta|^2+p\\nu(x)|\\ensuremath{\\nabla} u|^{p-2}\\ensuremath{\\nabla} u\\ensuremath{\\cdot}\\ensuremath{\\nabla}\\eta\\;dx},&\n\\label{eq:HFU.IC.G.04}\n\\end{align}\nwhere we used the following inequality\\footnote{see, \\cite[Prop A.1]{SS18}with $\\sigma=0.$}\\vspace{-5mm}\n\n\\begin{align}\\frac{1}{p}|b|^{p}\\ge\\frac{1}{p}|a|^{p}+|a|^{p-2}a(b-a)+\\frac{1}{2}|a|^{p-2}|b-a|^{2}.\\label{eq:3.1}\\end{align}\n\nHenceforth, we shall denote the rightmost term in \\eqref{eq:HFU.IC.G.04} by \\[H(u,\\eta):=\\int\\limits_B{p\\nu(x)|\\ensuremath{\\nabla} u|^{p-2}\\ensuremath{\\nabla} u\\ensuremath{\\cdot}\\ensuremath{\\nabla}\\eta\\;dx}.\\]\nRecall, the ELE \\eqref{def:SP1} is given by\n\\begin{align}\n\\int\\limits_B{p\\nu(x)|\\ensuremath{\\nabla} u|^{p-2}\\ensuremath{\\nabla} u\\ensuremath{\\cdot}\\ensuremath{\\nabla}\\eta\\;dx}=&-\\int\\limits_B{p\\ensuremath{\\lambda} \\textnormal{cof}\\;\\ensuremath{\\nabla} u\\ensuremath{\\cdot} \\ensuremath{\\nabla}\\eta\\;dx}\\;\\mb{for all}\\;\\eta\\in C_c^\\infty(B,\\mathbb{R}^2).\n\\label{eq:ELE.2}\n\\end{align}\nIn order for us, to control $H$ from below, we start by rewriting said term via the relation $\\det \\ensuremath{\\nabla} \\eta=-\\textnormal{cof}\\;\\ensuremath{\\nabla} u\\ensuremath{\\cdot}\\ensuremath{\\nabla}\\eta$ a.e. 
and lemma 2.1(v) of \\cite{D1} in the following way\n\\begin{align}\nH(u,\\eta)=&-\\frac{p}{2}\\int\\limits_B{R((\\textnormal{cof}\\;\\ensuremath{\\nabla}\\eta)\\ensuremath{\\nabla}\\ensuremath{\\lambda})\\ensuremath{\\cdot}\\eta\\;\\frac{dx}{R}}&\\label{eq:HFU.UC.G.21.a}\\\\\n=&-\\frac{p}{2}\\int\\limits_B(\\ensuremath{\\lambda},_RR e_R+ \\ensuremath{\\lambda},_\\th e_\\th)\\ensuremath{\\cdot}\\left[(\\tilde{\\eta}_1\\tilde{\\eta}_{2,\\th}-\\tilde{\\eta}_2\\tilde{\\eta}_{1,\\th})\\frac{e_R}{R}\\right.&\\nonumber\\\\\n&+\\left. (\\tilde{\\eta}_2\\eta_{1,R}-\\tilde{\\eta}_1\\eta_{2,R})e_\\th\\right]\\;\\frac{dx}{R}.&\\nonumber\n\\end{align}\nAn application of H\u00f6lder's inequality in $\\mathbb{R}^2$, that is, for any $y,z\\in \\mathbb{R}^2$ it holds $|y\\ensuremath{\\cdot} z|\\le|y|_{\\infty}|z|_{1},$ yields\n\\vspace{1mm}\n\\begin{small}\n\\begin{align}\nH(u,\\eta)\\ge&-\\frac{p}{2}\\int\\limits_B\\!|R\\ensuremath{\\nabla}\\ensuremath{\\lambda}(x)|_{\\infty}\\!\\ensuremath{\\cdot}\\!\\left[|\\eta_1\\eta_{2,\\th}-\\eta_2\\eta_{1,\\th}|\\frac{1}{R}+ |\\eta_2\\eta_{1,R}-\\eta_1\\eta_{2,R}|\\right]\\!\\frac{dx}{R}.&\n\\end{align}\n\\end{small}\nBy applying \\eqref{eq:HFU.UC.G.1} and Cauchy-Schwarz inequalities with weight $\\ensuremath{\\varepsilon}=\\frac{n}{\\sqrt{2}}$ we get \n\\begin{align*}\nH(u,\\eta)\\ge-\\frac{p}{2}\\frac{n}{\\sqrt{2}}&\\Big[\\frac{n}{\\sqrt{2}}\\|\\sigma\\eta_1\\|_{L^2(dx\/R^2)}^2+\\frac{n}{\\sqrt{2}}\\|\\sigma\\eta_2\\|_{L^2(dx\/R^2)}^2&\\\\\n&+\\frac{1}{\\sqrt{2}n}\\Big(\\|\\sigma\\eta_{1,\\th}\\|_{L^2(dx\/R^2)}^2+\\|\\sigma\\eta_{2,\\th}\\|_{L^2(dx\/R^2)}&\\\\\n&+\\|\\sigma\\eta_{1,R}\\|_{L^2(dx)}^2+\\|\\sigma\\eta_{2,R}\\|_{L^2(dx)}^2\\big)\\Big].&\n\\end{align*}\nUsing \\eqref{eq:HFU.FE.2} and collecting terms yields\n\\begin{align*}\nH(u,\\eta)\\ge&-\\frac{p}{2}\\left[\\|\\sigma\\eta,_\\th\\|_{L^2(B,\\mathbb{R}^2,\\frac{dx}{R^2})}^2+\\frac{1}{2}\\|\\sigma\\eta,_R\\|_{L^2(B,\\mathbb{R}^2,dx)}^2\\right]&\\nonumber\\\\\n\\ge&-\\frac{p}{2}\\int\\limits_B{\\nu(x)|\\ensuremath{\\nabla} u|^{p-2}|\\ensuremath{\\nabla}\\eta|^2\\;dx},&\\nonumber\n\\end{align*}\ncompleting the proof. For the last step we made used of the following version of the Fourier-estimate given by\n\\begin{equation} n^2\\int\\limits_B{\\sigma^2|\\eta|^2\\;\\frac{dx}{R^2}} \\le \\int\\limits_B{\\sigma^2|\\eta_{,\\th}|^2\\;dx}.\\label{eq:HFU.FE.2}\\end{equation}\nThis is indeed true, for this sake, first assume that $\\sigma\\in C^\\infty(B).$ \nWe will make use of a Poincare type inequality, first established in \\cite{JB14},\n\\begin{equation}\\int\\limits_B{R^{-2}|\\xi_{,\\th}|^2\\;dx}\\ge N^2\\int\\limits_B{R^{-2}|\\xi|^2\\;dx},\\label{eq:Uni.HFU.1}\\end{equation}\nwhich holds true for any $\\xi\\in C^\\infty(B,\\mathbb{R}^2)$ if $\\xi$ only consists of Fourier-modes $N$ or higher.\\\\\n\nThen an application of \\eqref{eq:Uni.HFU.1}, the product rule, Minkowski's inequality and \\eqref{eq:HFU.UC.G.01} yields \n\\begin{align} n_*\\|\\sigma\\eta\\|_{L^2\\left(\\frac{dx}{R^2}\\right)}\\le&\\|(\\sigma\\eta),_\\th\\|_{L^2\\left(\\frac{dx}{R^2}\\right)}&\\nonumber\\\\\n\\le& \\|\\sigma\\eta,_\\th\\|_{L^2\\left(\\frac{dx}{R^2}\\right)}+ \\|\\sigma,_\\th\\eta\\|_{L^2\\left(\\frac{dx}{R^2}\\right)}&\\nonumber\\\\\n\\le& \\|\\sigma\\ensuremath{\\nabla}\\eta\\|_{L^2(dx)}+l \\|\\sigma\\eta\\|_{L^2\\left(\\frac{dx}{R^2}\\right)}.&\n\\label{eq:HFU.FE.2a}\n\\end{align}\nAbsorbing the rightmost term of the latter expression into the LHS yields \\eqref{eq:HFU.FE.2}. 
Additionally, \\eqref{eq:HFU.FE.2a} justifies its own upgrade by remaining valid for any $\\sigma\\in L^2(B),$ satisfying \\eqref{eq:HFU.UC.G.01}.\\\\\n\nii) Let $u\\in\\mathcal{A}_{u_0}^{p,c}$ be a stationary point of $E$ and let $v\\in\\mathcal{F}_{0,m_*}^{p,\\sigma,c}$ be arbitrary and set $\\eta:=v-u\\in W_0^{1,2}(B,\\mathbb{R}^2)$ and $ \\sigma\\eta=(\\sigma\\eta)^{(0)}+\\sum\\limits_{j\\ge {m_*}}(\\sigma\\eta)^{(j)}$ with $m_*=m+l$ assuming wlog. $\\eta\\in C_c^\\infty(B,\\mathbb{R}^2)$ and $\\sigma\\in C^\\infty(B).$\n\nNotice, that\n\\begin{align}\nE(v)-E(u)\n\\ge&\\frac{p}{2}\\int\\limits_B{\\nu(x)|\\ensuremath{\\nabla} u|^{p-2}|\\ensuremath{\\nabla}\\eta|^2\\;dx}+H(u,\\eta),&\n\\label{eq:HFU.IC.G.05}\n\\end{align}\nremains valid with the same mixed term\n\\begin{equation}H(u,\\eta)=\\int\\limits_B{p\\nu(x)|\\ensuremath{\\nabla} u|^{p-2}\\ensuremath{\\nabla} u\\ensuremath{\\cdot}\\ensuremath{\\nabla}\\eta\\;dx}.\\label{eq:Uni.SPC.1}\\end{equation}\nFrom here on we have to argue more along the lines of the proof of \\cite[thm 1.2]{D1}. Again, by the ELE \\eqref{def:SP1}, the identity $\\det \\ensuremath{\\nabla} \\eta=-\\textnormal{cof}\\;\\ensuremath{\\nabla} u\\ensuremath{\\cdot}\\ensuremath{\\nabla}\\eta$ a.e. and by \\cite[lem 2.1.(vi)]{D1} we get\n\\[H(u,\\eta)=-p\\int\\limits_B{(\\textnormal{cof}\\;\\ensuremath{\\nabla}\\eta^{(0)}\\ensuremath{\\nabla}\\ensuremath{\\lambda}(x))\\ensuremath{\\cdot}\\tilde{\\eta}\\;dx}-p\\int\\limits_B{(\\textnormal{cof}\\;\\ensuremath{\\nabla}\\eta\\ensuremath{\\nabla}\\ensuremath{\\lambda}(x))\\ensuremath{\\cdot}\\tilde{\\eta}\\;dx}=:(I)+(II).\\]\nNow by noting that the $0-$mode is only a function of $R,$ we get\n\\begin{align*}\n(\\textnormal{cof}\\;\\ensuremath{\\nabla}\\eta^{(0)}\\ensuremath{\\nabla}\\ensuremath{\\lambda}(x))\\ensuremath{\\cdot}\\tilde{\\eta}=\\frac{\\ensuremath{\\lambda},_\\th}{R}(\\eta_{1,R}^{(0)}\\tilde{\\eta}_2-\\eta_{2,R}^{(0)}\\tilde{\\eta}_1).\n\\end{align*}\nInstead of just $\\ensuremath{\\lambda},_\\th$ on the right hand side of the latter equation we would like to have the full gradient of $\\ensuremath{\\lambda}.$ This can be achieved by using the basic relations $e_\\th\\ensuremath{\\cdot} e_\\th=1$ and $e_R\\ensuremath{\\cdot} e_\\th=0$ to obtain \n\\begin{align*}\n(\\textnormal{cof}\\;\\ensuremath{\\nabla}\\eta^{(0)}\\ensuremath{\\nabla}\\ensuremath{\\lambda}(x))\\ensuremath{\\cdot}\\tilde{\\eta}=(\\ensuremath{\\lambda},_RR e_R+\\ensuremath{\\lambda},_\\th e_\\th)\\ensuremath{\\cdot}(\\eta_{1,R}^{(0)}\\tilde{\\eta}_2-\\eta_{2,R}^{(0)}\\tilde{\\eta}_1)\\frac{e_\\th}{R}.\n\\end{align*}\nArguing similarly for (II), and a short computation shows\n\\begin{align}\nH(u,\\eta)=&-p\\int\\limits_B(\\ensuremath{\\lambda},_RR e_R+\\ensuremath{\\lambda},_\\th e_\\th)\\ensuremath{\\cdot}\\left[(\\tilde{\\eta}_1\\tilde{\\eta}_{2,\\th}-\\tilde{\\eta}_2\\tilde{\\eta}_{1,\\th})\\frac{e_R}{R}\\right.&\\nonumber\\\\\n&+\\left. 
(\\tilde{\\eta}_2(\\eta_{1,R}^{(0)}+\\eta_{1,R})-\\tilde{\\eta}_1(\\eta_{2,R}^{(0)}+\\eta_{2,R}))e_\\th\\right]\\;\\frac{dx}{R}.&\n\\label{eq:Uni.SPC.2}\\end{align}\nBy H\u00f6lder's inequality in $\\mathbb{R}^2$ we get\\begin{align*} \nH(u,\\eta)\\ge&-p\\int\\limits_B|\\ensuremath{\\nabla} \\ensuremath{\\lambda}(x)R|_\\infty \\left[\\left|\\tilde{\\eta}_1\\tilde{\\eta}_{2,\\th}-\\tilde{\\eta}_2\\tilde{\\eta}_{1,\\th}\\right|\\frac{1}{R}\\right.&\\\\\n&+\\left.\\left|\\tilde{\\eta}_2(\\eta_{1,R}^{(0)}+\\eta_{1,R})-\\tilde{\\eta}_1(\\eta_{2,R}^{(0)}+\\eta_{2,R})\\right|\\right]\\;\\frac{dx}{R}.&\n\\end{align*}\nBy $|\\ensuremath{\\nabla} \\ensuremath{\\lambda}(x)R|_\\infty\\le\\frac{\\sqrt{3}m\\sigma^2(x)}{2\\sqrt{2}}$ and a weighted Cauchy-Schwarz inequality, we see\n\\begin{align*}\nH(u,\\eta)\\!\\ge&-\\frac{\\sqrt{3}mp}{4\\sqrt{2}}\\!\\left[2a\\|\\sigma\\tilde{\\eta}_1\\|_{L^2\\left(\\frac{dx}{R^2}\\right)}^2\\!+2a\\|\\sigma\\tilde{\\eta}_2\\|_{L^2\\left(\\frac{dx}{R^2}\\right)}^2\\!+\\frac{1}{a}\\|\\sigma\\eta_{2,R}^{(0)}+\\sigma\\eta_{2,R}\\|_{L^2(dx)}^2\\right.&\\\\\n&\\left.+\\frac{1}{a}\\|\\sigma\\tilde{\\eta}_{2,\\th}\\|_{L^2\\left(\\frac{dx}{R^2}\\right)}^2+\\frac{1}{a}\\|\\sigma\\eta_{1,R}^{(0)}+\\sigma\\eta_{1,R}\\|_{L^2(dx)}^2+\\frac{1}{a}\\|\\sigma\\tilde{\\eta}_{1,\\th}\\|_{L^2\\left(\\frac{dx}{R^2}\\right)}^2\\right].&\n\\end{align*}\n\nApplying the Cauchy-Schwarz inequality, inequality \\eqref{eq:HFU.FE.2}, and combining some of the norms yields\n\\begin{align*}\nH(u,\\eta)\\ge-\\frac{\\sqrt{3}mp}{4\\sqrt{2}}&\\left[\\left(\\frac{2a}{m^2}+\\frac{1}{a}\\right)\\|\\sigma\\tilde{\\eta},_\\th\\|_{L^2\\left(\\frac{dx}{R^2}\\right)}^2+\\frac{2}{a}\\|\\sigma\\eta,_R^{(0)}\\|_{L^2(dx)}^2\\right.&\\\\\n&\\left.+\\frac{2}{a}\\|\\sigma\\eta,_R\\|_{L^2(dx)}^2\\right].&\n\\end{align*}\nBy using $\\tilde{\\eta},_\\th=\\eta,_\\th$ and $\\|\\sigma\\eta,_R^{(0)}\\|_{L^2(dx)}^2\\le\\|\\sigma\\eta,_R\\|_{L^2(dx)}^2$ we obtain\n\\begin{align*}\nH(u,\\eta)\\ge&-\\frac{\\sqrt{3}mp}{4\\sqrt{2}}\\left[\\left(\\frac{2a}{m^2}+\\frac{1}{a}\\right)\\|\\sigma\\eta,_\\th\\|_{L^2\\left(\\frac{dx}{R^2}\\right)}^2+\\frac{4}{a}\\|\\sigma\\eta,_R\\|_{L^2(dx)}^2\\right].&\n\\end{align*}\nChoosing $a=\\frac{\\sqrt{3}m}{\\sqrt{2}}$ and combining the norms even further yields\n\\begin{align*}\nH(u,\\eta)\\ge-\\frac{p}{2}\\int\\limits_B{\\nu(x)|\\ensuremath{\\nabla} u|^{p-2}|\\ensuremath{\\nabla}\\eta|^2\\;dx},\n\\end{align*}\n completing the 2nd part of the proof.\n\\qed\n\n\\begin{re} \n 1. Notice, that \\eqref{eq:HFU.UC.G.01} is especially satisfied if $p=2$ and $\\nu(x)=\\nu(R).$ So condition \\eqref{eq:HFU.UC.G.01}\ncan be thought of as a natural extension of this fact to the case, where $p$ might be arbitrary and $\\sigma(x)$ depends on $x$ instead of $R.$\\\\\n2. Despite the fact that the sets $F_{n_*}^{p,\\sigma,c}$ and $F_{0,m_*}^{p,\\sigma,c}$ depend on $\\sigma$ it remains true that if $n_*=0$ or $m_*\\in\\{0,1\\}$ one gets uniqueness in the full class $\\mathcal{A}^{p,c}.$ Indeed, there are two cases to consider. Firstly, let $n_*=0$ or $m_*\\le1$ s.t. $n=0$ or $m=0.$ Then $\\ensuremath{\\nabla}\\ensuremath{\\lambda}\\equiv0$ and by \\eqref{eq:HFU.IC.G.04} and \\eqref{eq:ELE.2} one obtains\n\\begin{align*}\nE(v)-E(u)\\ge\\int\\limits_B{\\frac{\\nu(x)}{2}|\\ensuremath{\\nabla} u|^{p-2}|\\ensuremath{\\nabla}\\eta|^2\\;dx},\\end{align*}\nimplying, that $u$ is a global minimizer in the full class $\\mathcal{A}^{p,c}.$ Assuming, additionally, that $\\sigma(x)\\ge\\sigma_0>0$ for a.e. 
$x\\in B$ then one can conclude that it needs to be the unique one. In the case when $m_*=1$ and $m=1,l=0,$ then $\\sigma_{,\\th}=0$ and \\eqref{eq:HFU.FE.2} can be applied with $m_*=m=1$ completing the argument. \n\\label{re:1}\n\\end{re}\\vspace{0.5cm}\n\n\\section{Compressible uniqueness criterion}\nHere we outline the proof for the analogous compressible results, where the proof strategy remains the same.\\\\\n\n\\label{sec.4}\n\\textbf{Proof of Theorem \\ref{thm:HFU.UC.G.1}:}\\\\\ni) Let $u\\in\\mathcal{A}_{u_0}^p$ be a stationary point of $I$ and let $v\\in\\mathcal{F}_{n_*}^{p,\\sigma}$ be arbitrary and set $\\eta:=v-u\\in W_0^{1,2}(B,\\mathbb{R}^2)$ and $ \\sigma\\eta=\\sum\\limits_{j\\ge {n_*}}(\\sigma\\eta)^{(j)}$ with $n_*=n+l$ assuming wlog. $\\eta\\in C_c^\\infty(B,\\mathbb{R}^2)$ and $\\sigma\\in C^\\infty(B).$ Again, by the standard expansion, the subdifferential inequality for $\\Psi$, and inequality \\eqref{eq:3.1} we obtain\n\\begin{align}\nI(v)-I(u)=&\\int\\limits_B{\\frac{\\nu(x)}{p}(|\\ensuremath{\\nabla} u+\\ensuremath{\\nabla} \\eta|^{p}-|\\ensuremath{\\nabla} u|^{p})}&\\nonumber\\\\\n&+{\\Psi(x,\\ensuremath{\\nabla} u+\\ensuremath{\\nabla} \\eta,\\det\\ensuremath{\\nabla} u+\\ensuremath{\\nabla} \\eta)-\\Psi(x,\\ensuremath{\\nabla} u,\\det\\ensuremath{\\nabla} u)\\;dx}&\\nonumber\\\\\n\\ge&\\int\\limits_B{\\frac{\\nu(x)}{2}|\\ensuremath{\\nabla} u|^{p-2}|\\ensuremath{\\nabla}\\eta|^2+\\nu(x)|\\ensuremath{\\nabla} u|^{p-2}\\ensuremath{\\nabla} u\\ensuremath{\\cdot}\\ensuremath{\\nabla}\\eta}&\\nonumber\\\\\n&+{\\ensuremath{\\partial}_\\xi\\Psi(x,\\ensuremath{\\nabla} u,\\det\\ensuremath{\\nabla} u)\\ensuremath{\\cdot}\\ensuremath{\\nabla}\\eta+\\ensuremath{\\partial}_d\\Psi(x,\\ensuremath{\\nabla} u,\\det\\ensuremath{\\nabla} u)\\textnormal{cof}\\;\\ensuremath{\\nabla} u\\ensuremath{\\cdot}\\ensuremath{\\nabla}\\eta\\;dx}.&\\nonumber\n\\end{align}\n\nThen by applying the ELE \\eqref{def:SP.2} we get\n\\begin{align}\nI(v)-I(u)\\ge&\\int\\limits_B{\\frac{\\nu(x)}{2}|\\ensuremath{\\nabla} u|^{p-2}|\\ensuremath{\\nabla}\\eta|^2+\\ensuremath{\\partial}_d\\Psi(x,\\ensuremath{\\nabla} u,\\det\\ensuremath{\\nabla} u)d_{\\ensuremath{\\nabla}\\eta}\\;dx}.&\n\\label{eq:HFU.UC.G.21}\n\\end{align}\nReplacing $\\ensuremath{\\lambda}$ by $\\ensuremath{\\partial}_d\\Psi$ in the argument given in the 1st part of the proof of theorem \\ref{thm:HFU.UC.G.1}, see \u00a7.\\ref{sec:3}, completes the argument.\\\\\n\nii) If $\\sigma\\eta$ can be such that the first Fourier-mode $(\\sigma\\eta)^{(0)}\\not=0$ then \\eqref{eq:HFU.UC.G.21} does still hold, but one needs to conclude similarly to the 2nd part of the proof of theorem \\ref{thm:HFU.UC.G.1}.\\qed \n\\begin{re}\n1. Arguing as in Remark \\ref{re:1} in \u00a7.\\ref{sec:3}, assuming, additionally, that $\\sigma(x)\\ge\\sigma_0>0$ for a.e. $x\\in B$ then it remains true that if $n_*=0$ or $m_*\\in\\{0,1\\}$ one gets uniqueness in the full class $\\mathcal{A}^{p}.$\\\\\n2. It seems reasonable to believe, that similar uniqueness criteria could be given in many other elastic scenarios if the considered situation is such, that the describing functional decomposes into two parts, the main one and the perturbative one acting like the pressure. 
If the perturbation then is assumed to be small in some sense there might be a chance of obtaining uniqueness.\n\\end{re}\n\\vspace{0.5cm}\n\n\\textbf{Data Declaration}\nData sharing not applicable to this article as no datasets were generated or analysed during the current study.\\\\\n\n\\textbf{Acknowledgments}\n\nThe author is in deep dept to Jonathan J. Bevan, Bin Cheng, and Ali Taheri for vital discussions and suggestions, the fantastic people with the Department of Mathematics at the University of Surrey and the Engineering \\& Physical Sciences Research Council (EPRSC), which generously funded this work.\\\\\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe human brain can self-organize and coordinate different cognitive functions to flexibly adapt to complex and changing environments. A major challenge for Artificial Intelligence and computational neuroscience is integrating multi-scale biological principles to build biologically plausible brain-inspired intelligent models. As the third generation of neural networks~\\cite{Maass1997Networks}, Spiking Neural Networks (SNNs) are more biologically realistic at multiple scales, more biologically interpretable, more energy-efficient, and naturally more suitable for modeling various cognitive functions of the brain and creating biologically plausible AI. \n\n\\begin{figure*}[!htbp]\n\t\\centering\n\t\\includegraphics[scale=0.3]{.\/fig\/braincog.jpg}\n\t\\caption{The architecture of Brain-inspired Cognitive Intelligence Engine (BrainCog).}\n\t\\label{braincog}\n\\end{figure*}\n\nExisting neural simulators attempted to simulate elaborate biological neuron models or build large-scale neural network simulations, neural dynamics models, deep SNNs. Neuron~\\cite{carnevale2006neuron} focuses on simulating elaborate biological neuron models. NEST~\\cite{gewaltig2007nest} implements large-scale neural network simulations. Brian\/Brian2~\\cite{stimberg2019brian, goodman2009brian} provides an efficient and convenient tool for modeling spiking neural networks. Shallow SNNs implemented by Brian2 can realize unsupervised visual classification~\\cite{diehl2015unsupervised}. Further, BindsNET~\\cite{ hazan2018bindsnet} builds SNNs with coordination of various neurons and connections and incorporates multiple biological learning rules for training SNNs. Such learning SNNs can perform machine learning tasks, including simple supervised, unsupervised, and reinforcement learning. However, supporting more complex tasks would be great challenge for these SNNs frameworks, and there is a large gap in performance compared with traditional deep neural networks (DNNs). Deep SNNs trained by surrogate gradient or converting well-trained DNNs to SNNs have achieved great progress in the fields of speech recognition~\\cite{dominguez2018deep, loiselle2005exploration}, computer vision~\\cite{kim2020spiking, wu2018spatio}, and reinforcement learning~\\cite{tan2021strategy}. Motivated by this, SpikingJelly~\\cite{SpikingJelly} develops a deep learning SNN framework (trained by surrogate gradient or converting well-trained DNNs to SNNs), which integrates deep convolutional SNNs and various deep reinforcement learning SNNs for multiple benchmark tasks. Platforms as SpikingJelly are relatively more inspired by the field of deep learning and focuse on improving the performance of different tasks. 
They currently lack in-depth inspiration from the information processing mechanisms of the brain and do not aim at, and hence fall short of, simulating large-scale functional brains.\n\nBrainPy~\\cite{Wang2021} excels at modeling, simulating, and analyzing the dynamics of brain-inspired neural networks from multiple perspectives, including neurons, synapses, and networks. While it focuses on computational neuroscience research, it does not consider the learning and optimization of deep SNNs or the implementation of brain-inspired functions. SPAUN~\\cite{eliasmith2012large}, a large-scale brain function model consisting of 2.5 million simulated neurons and implemented by Nengo~\\cite{bekolay2014nengo}, integrates multiple brain areas and realizes multiple brain cognitive functions, including image recognition, working memory, question answering, reinforcement learning, and fluid reasoning. However, it is not suitable for solving the challenging and complex AI tasks that can be handled by deep learning models. In summary, there is still a lack of open-source spiking neural network frameworks that incorporate the ability to simulate brain structures and cognitive functions at large scale while remaining effective for creating complex and efficient AI models at the same time.\n\nConsidering the various limitations of the existing frameworks mentioned above, in this paper we present the Brain-inspired Cognitive Intelligence Engine (BrainCog), a spiking neural network based open-source platform for both brain-inspired AI and brain simulation at multiple scales. As shown in Fig.~\\ref{braincog}, basic modules (such as different types of neuron models, learning rules, encoding strategies, etc.) are provided as building blocks to construct different brain areas and neural circuits and to implement brain-inspired cognitive functions. BrainCog is an easy-to-use framework in which different components can be flexibly replaced according to various purposes and needs. BrainCog tries to achieve the vision ``the structure and mechanism are inspired by the brain, and the cognitive behaviors are similar to humans'' for Brain-inspired AI~\\cite{2016Retrospect}. BrainCog is developed on top of a deep learning framework (currently PyTorch, while it is easy to migrate to other frameworks such as PaddlePaddle, TensorFlow, etc.). This choice greatly facilitates researchers in quickly familiarizing themselves with the platform and implementing their own algorithms.\n\n\\subsection{Brain-inspired AI}\nBrainCog is aimed at providing infrastructural support for Brain-inspired AI. Currently, it provides cognitive function components that can be classified into five categories: Perception and Learning, Decision Making, Motor Control, Knowledge Representation and Reasoning, and Social Cognition. These components collectively form neural circuits corresponding to 28 brain areas in the mammalian brain, as shown in Fig.~\\ref{born}. These brain-inspired AI models have been effectively validated on various supervised learning, unsupervised learning, and deep reinforcement learning tasks, as well as on several complex brain-inspired cognitive tasks.
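To make the building-block idea above more concrete, the sketch below shows how an input can be turned into spike trains (using a simple Bernoulli rate encoder, cf. the encoding strategies described later) and driven through a spiking module over several simulation steps. This is only an illustrative sketch written by us: the names \texttt{encode\_rate} and \texttt{run\_snn}, the stand-in module, and all constants are hypothetical and do not correspond to the actual BrainCog API.

\begin{verbatim}
import torch

def encode_rate(x, T):
    """Bernoulli rate coding: intensities in [0, 1] become T binary
    spike frames whose firing probabilities match the intensities."""
    return (torch.rand(T, *x.shape) < x).float()

def run_snn(net, x, T=8):
    """Drive a spiking module with rate-coded input for T steps and
    accumulate its output spikes as class scores."""
    frames = encode_rate(x, T)
    out = 0.0
    for t in range(T):
        out = out + net(frames[t])   # net: any module assembled from
                                     # neuron/connection building blocks
    return out / T

# usage with a simple stand-in module (a spiking block is used the same way)
scores = run_snn(torch.nn.Linear(784, 10), torch.rand(784))
\end{verbatim}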
\n\n\n\\begin{figure}[!htbp]\n\t\\centering\n\t\\includegraphics[scale=0.35]{.\/fig\/born.jpg}\n\t\\caption{Multiple cognitive functions integrated in BrainCog, their related brain areas and neural circuits.}\n\t\\label{born}\n\\end{figure}\n\n\\subsubsection{Perception and Learning} \nBrainCog provides a variety of supervised and unsupervised methods for training spiking neural networks, such as the biologically-plausible Spike-Timing-Dependent Plasticity (STDP)~\\cite{bi1998synaptic}, the backpropagation based on surrogate gradients~\\cite{wuSpatioTemporalBackpropagationTraining2018,zhengGoingDeeperDirectlyTrained2021,fangIncorporatingLearnableMembrane2021}, and the conversion-based algorithms~\\cite{li2021free, han2020deep, han2020rmp}. In addition to high performance in common perception and learning process, it also shows strong adaptability in small samples and noisy scenarios. BrainCog also provides a multi-sensory integration framework for human-like concept learning~\\cite{wywFramework}. Inspired by quantum information theory, BrainCog provides a quantum superposition spiking neural network, which encodes complement information to neuronal spike trains with different frequency and phase~\\cite{SUN2021102880}. \n\n\\subsubsection{Decision Making}\nFor decision making, BrainCog provides multi-brain areas coordinated decision-making spiking neural network~\\cite{zhao2018brain}. The biologically inspired decision-making model implemented by BrainCog achieves human-like learning ability on the Flappy Bird game and supports UAVs' online decision-making tasks. In addition, BrainCog combines SNNs with deep reinforcement learning and provides the brain-inspired spiking deep Q network~\\cite{sun2022solving}.\n\n\\subsubsection{Motor Control}\nEmbodied cognition is crucial to realizing biologically plausible AI. As part of the embodied cognition modules, and inspired by the motor control mechanism of the brain, BrainCog provides a multi-brain areas coordinated robot motion spiking neural networks model, which includes premotor cortex (PMC), supplementary motor areas (SMA), basal ganglia and cerebellum functions. With proper mapping, the spiking motor network outputs can be used to control various robots.\n\n\\subsubsection{Knowledge Representation and Reasoning} \nBrainCog incorporates multiple neuroplasticity and population coding mechanisms for knowledge representation and reasoning. The brain-inspired music memory and stylistic composition model implements the knowledge representation and memory of note sequences and can generate music according to different styles~\\cite{LQ2020,LQ2021}. Sequence Production Spiking Neural Network (SPSNN) achieves the memory of the symbol sequence and can reconstruct the symbol sequence in the light of different rules~\\cite{fang2021spsnn}. Commonsense Knowledge Representation Graph SNN (CKR-GSNN) realizes the representation of commonsense knowledge through incorporating multi-scale neural plasticity and population coding mechanism into a graph SNN model~\\cite{KRRfang2022}.\nCausal Reasoning Spiking Neural Network (CRSNN) encodes the causal graph into a spiking neural network and realizes deductive reasoning tasks accordingly~\\cite{fang2021crsnn}.\n\n\\subsubsection{Social Cognition} \nBrainCog provides a brain-inspired social cognition model with biological plausibility. 
This model gives the agent a preliminary ability to perceive and understand itself and others, and enables robots to pass the Multi-Robots Mirror Self-Recognition Test~\\cite{zeng2018toward} and the AI Safety Risks Experiment~\\cite{zhao2022brain}. The former is a classic experiment of self-perception in social cognition, and the latter is a variation and application of the theory of mind experiment in social cognition.\n\\subsection{Brain Simulation}\n\n\\subsubsection{Brain Cognitive Function Simulation}\nTo demonstrate the capability of BrainCog for cognitive function simulation, we provide \\emph{Drosophila} decision-making and prefrontal cortex working memory function simulations~\\cite{zhao2020neural,zhangqian2021comparison}. For \\emph{Drosophila} nonlinear and linear decision-making simulation, BrainCog verifies the winner-takes-all behaviors of the nonlinear dopaminergic neuron\u2011GABAergic neuron\u2011mushroom body (DA\u2011GABA\u2011MB) circuit under a dilemma and obtains conclusions consistent with \\emph{Drosophila} biological experiments~\\cite{zhao2020neural}. For the working memory performance of the prefrontal cortex network implemented by BrainCog, we discover that using human neurons to replace rodent neurons without changing the network structure can significantly improve the accuracy and completeness of an image memory task~\\cite{zhangqian2021comparison}; this implies that the evolution of brains concerns not only their structures but also single computational units such as neurons.\n\n\\subsubsection{Multi-scale Brain Structure Simulation} \nBrainCog provides simulations of brain structures at different scales, from microcircuits and cortical columns to whole-brain structure simulations. Anatomical and imaging multi-scale connectivity data are used to support whole-brain simulations of the mouse, macaque, and human brains at different scales.\n\n\n\n\\section{Basic Components}\nBrainCog provides essential and fundamental components to model biological and artificial intelligence. It includes various biological neuron models, learning rules, encoding strategies, and models of different brain areas. One can build brain-inspired SNN models through reusing and refining these building blocks. Expanding and refining the components and cognitive functions included in BrainCog is an ongoing effort. We believe this should be a continuous community effort, and we welcome researchers and practitioners to enrich and improve the work together in a complementary way.\n\n\\subsection{Neuron Models}\nBrainCog supports various models for spiking neurons, such as the following:\n\n(1) Integrate-and-Fire spiking neuron (IF)~\\cite{abbott1999lapicque}:\n\n\\begin{equation}\n\\frac{dV}{dt} = I\n\\end{equation}\n\n$I$ denotes the input current from the pre-synaptic neurons. Once the membrane potential reaches the threshold $V_{th}$, the neuron fires a spike~\\cite{abbott1999lapicque}.\n\n(2) Leaky Integrate-and-Fire spiking neuron (LIF)~\\cite{dayan2005theoretical}:\n\\begin{equation}\n \\tau\\frac{dV}{dt} = - V + I\n\\label{equa_lif1}\n\\end{equation}\n$\\tau=RC$ denotes the time constant, where $R$ and $C$ denote the membrane resistance and capacitance, respectively~\\cite{dayan2005theoretical}. 
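Before turning to the more elaborate neuron models below, the following minimal sketch illustrates how the LIF dynamics of Eq.~\ref{equa_lif1} are typically discretized when simulating SNNs on top of PyTorch. It uses a forward-Euler step with a hard reset; the function name, parameter names, and constants are our own example choices and are not part of the BrainCog API.

\begin{verbatim}
import torch

def lif_step(v, i_in, tau=2.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """One forward-Euler step of tau * dV/dt = -V + I with a hard reset."""
    v = v + (dt / tau) * (i_in - v)      # leaky integration of the input current
    spike = (v >= v_th).float()          # fire where the threshold is reached
    v = torch.where(spike.bool(), torch.full_like(v, v_reset), v)  # reset fired neurons
    return spike, v

# usage: simulate 100 LIF neurons for 8 time steps with random input currents
v = torch.zeros(100)
for t in range(8):
    spikes, v = lif_step(v, torch.rand(100))
\end{verbatim}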
\n\n(3) The adaptive Exponential Integrate-and-Fire spiking neuron (aEIF)~\\cite{Fourcaud2003How,brette2005adaptive}:\n\n\\begin{equation}\n\\left\\{\n\\begin{aligned}\nC \\frac{d V}{d t}&=-g_{L}(V-E_{L})+g_{L} \\exp(\\frac{V-V_{t h}}{\\Delta T})+I-w \\\\\n\\tau_{w} \\frac{dw}{dt}&=a(V-E_L)-w\n\\end{aligned}\n\\right.\n\\end{equation}\n\nwhere $g_L$ is the leak conductance, $E_L$ is the leak reversal potential, $V_r$ is the reset potential, $\\Delta T$ is the slope factor, $I$ is the background currents. $\\tau_w$ is the adaptation time constant. When the membrane potential is greater than the threshold $V_{th}$, $V = V_{r}$, and $w = w +b$. $a$ is the subthreshold adaptation, and $b$ is the spike-triggered adaptation~\\cite{Fourcaud2003How,brette2005adaptive}. \n\n(4) The Izhikevich spiking neuron~\\cite{izhikevich2003simple}:\n\\begin{equation}\n\\left\\{\n\\begin{aligned}\n\\frac{dV}{dt} &= 0.04V^2 + 5v + 140 - u + I \\\\\n\\frac{du}{dt} &= a(bv-u)\\\\\n\\end{aligned}\n\\right.\n\\end{equation}\nWhen the membrane potential is greater than the threshold:\n\\begin{equation}\n\\left\\{\n\\begin{aligned}\nV &= c \\\\\nu &= u + d\\\\\n\\end{aligned}\n\\right.\n\\end{equation}\n$u$ represents the membrane recovery variable, and $a, b, c, d$ are the dimensionless parameters~\\cite{izhikevich2003simple}. \n\n(5) The Hodgkin-Huxley spiking neuron (H-H)~\\cite{hodgkin1952quantitative}:\n\\begin{equation}\nI = C \\frac{dV}{dt} + \\overline{g}_K n^4 (V - V_K) + \\overline{g}_{Na} m^3 h (V - V_{Na}) + \\overline{g}_L (V - V_L)\n\\label{equa_hh}\n\\end{equation}\n\\begin{equation}\n\\left\\{\n\\begin{aligned}\n\\frac{dn}{dt} &= \\alpha_n(V)(1-n) - \\beta_n(V)n \\\\\n\\frac{dm}{dt} &= \\alpha_m(V)(1-m) - \\beta_m(V)m \\\\\n\\frac{dh}{dt} &= \\alpha_n(V)(1-h) - \\beta_n(V)h \\\\\n\\end{aligned}\n\\right.\n\\end{equation}\n\n\n$\\alpha_i$ and $\\beta_i$ are used to control the $i_{th}$ ion channel, $n$, $m$, $h$ are dimensionless probabilities between 0 and 1. $\\overline{g}_i$ is the maximal value of the conductance~\\cite{hodgkin1952quantitative}.\n\n\nThe H-H model shows elaborate modeling of biological neurons. In order to apply it more efficiently to AI tasks, BrainCog incorporates a simplified H-H model ($C=0.02\\mu F\/cm^2, V_r=0, V_{th}=60mV$), as illustrated in~\\cite{wang2016spiking}.\n\n\n\n\n\n\\subsection{Learning Rules}\nBrainCog provides various plasticity principles and rules to support biological plausible learning and inference, such as:\n\n(1) Hebbian learning theory~\\cite{amit1994correlations}:\n\\begin{equation}\n\\Delta w_{ij}^t=x_i^t x_j^t\n\\end{equation}\nwhere $w_{ij}^t$ means the $i$th synapse weight of $j$th neuron at the time $t$. $x_i^t$ is the input of $i$th synapse at time $t$. $x_j^t$ is the output of $j$th neuron at time $t$~\\cite{amit1994correlations}.\n\n(2) Spike-Timing-Dependent Plasticity (STDP)~\\cite{bi1998synaptic}:\n\n\\begin{equation}\n\t\\begin{split}\n\t\t&\\Delta w_{j}=\\sum\\limits_{f=1}^{N} \\sum\\limits_{n=1}^{N} W(t^{f}_{i}-t^{n}_{j})\\\\\n\t\t&W(\\Delta t)= \n\t\t\\left\\{\n\t\t\\begin{split}\n\t\tA^{+}e^{\\frac{-\\Delta t}{\\tau_{+}}}\\quad if\\; \\Delta t>0&\\\\ \n\t\t-A^{-}e^{\\frac{\\Delta t}{\\tau_{-}}}\\quad if\\;\\Delta t<0&\\\\ \n\t\t\\end{split}\n\t\t\\right.\n\t\\end{split}\n\t\\label{eq4}\n\\end{equation}\nwhere $\\Delta w_{j}$ is the modification of the synapse $j$, and $W(\\Delta t)$ is the STDP function. $t$ is the time of spike. $A^{+},A^{-}$ mean the modification degree of STDP. 
$\\tau_{+}$ and $\\tau_{-}$ denote the time constants~\\cite{bi1998synaptic}.\n\n(3) Bienenstock-Cooper-Munro theory (BCM)~\\cite{bienenstock1982theory}:\n\\begin{equation}\n\\Delta w=y\\left(y-\\theta_M \\right)x-\\epsilon w\n\\end{equation}\n\nwhere $x$ and $y$ denote the firing rates of pre-synaptic and post-synaptic neurons, respectively, and the threshold $\\theta_M$ is the average of the historical activity of the post-synaptic neuron~\\cite{bienenstock1982theory}.\n\n(4) Short-term Synaptic Plasticity (STP)~\\cite{maass2002synapses}: \n\nShort-term plasticity is used to model the change of synaptic efficacy over time.\n\n\\begin{equation}\n{a_k} = {u_k} \\cdot {R_k}\n\\end{equation}\n\\begin{equation}\n{u_{k+1}} = U + {u_{k}}(1 - U)\\exp \\left( { - {{\\rm{\\Delta }}t_{k}}\/{\\tau _{{\\rm{fac}}}}} \\right)\n\\end{equation}\n\\begin{equation}\n{R_{k+1}} = 1 + \\left( {{R_{k}} - {u_{k}}{R_{k}} - 1} \\right)\\exp \\left( { - {{\\rm{\\Delta }}t_{k}}\/{\\tau _{{\\rm{rec}}}}} \\right)\n\\end{equation}\n\n$a_k$ denotes the synaptic efficacy, and $U$ denotes the baseline fraction of synaptic resources utilized by a spike. $\\tau _{{\\rm{fac}}}$ and $\\tau _{{\\rm{rec}}}$ denote the time constants for recovery from facilitation and depression, respectively. The variable $R_k$ models the fraction of synaptic efficacy available for the $k$th spike, and $u_k R_k$ models the fraction of synaptic efficacy used by the $k$th spike~\\cite{maass2002synapses}.\n\n(5) Reward-modulated Spike-Timing-Dependent Plasticity (R-STDP)~\\cite{Eugene2007}: \n\nR-STDP uses a synaptic eligibility trace $e$ to store the temporary information of STDP. The eligibility trace accumulates the STDP update $\\Delta w_{STDP}$ and decays with a time constant $\\tau_e$~\\cite{Eugene2007}. \n\n\\begin{equation}\\label{rstdpe}\n\\Delta e=-\\frac{e}{\\tau _e}+\\Delta w_{STDP}\n\\end{equation}\n\nThen, the synaptic weights are updated when a delayed reward $r$ is received, as shown in Eq.~\\ref{rstdpw}~\\cite{Eugene2007}. \n\n\\begin{equation}\\label{rstdpw}\n\\Delta w=r*\\Delta e \n\\end{equation}\n\n\\subsection{Encoding Strategies}\nBrainCog supports a number of different encoding strategies to help encode the inputs to the spiking neural networks.\n\n(1) Rate Coding~\\cite{adrian1926impulses}:\n\nRate coding is mainly based on spike counting, ensuring that the number of spikes issued within a time window corresponds to the real value. The Poisson distribution describes the number of random events occurring per unit of time, which corresponds to the firing rate~\\cite{adrian1926impulses}. Setting $\\alpha \\sim \\mathbb{U}(0,1)$, the input $x$ can be encoded as\n\\begin{equation}\n\ts(t) = \\begin{cases}\n\t 1,\\quad if \\quad x > \\alpha\\\\\n\t 0, \\quad else \n\t\\end{cases}\n\\end{equation}\n\n(2) Phase Coding~\\cite{kim2018deep}:\n\nPhase coding can be used to encode an analog quantity changing over time. The value of the analog quantity within one period can be represented by a spike time, and the change of the analog quantity over the whole time course is represented by the spike train obtained by concatenating all periods.\nEach spike has a corresponding phase weighting under phase encoding, and generally, the pixel intensity is encoded as a 0\/1 input similar to binary encoding. $\\gg$ denotes the right-shift operation, and $K$ is the phase period~\\cite{kim2018deep}. The pixel $x$ is enlarged to $x^{\\prime}=x*(2^{K}-1)$ and shifted $k=K-1-(t \\mod K)$ bits to the right, where $\\mod$ is the remainder operation. If the lowest bit is one, $s$ will be one at time $t$. 
$\\&$ denotes the bit-wise AND operation.\n\\begin{equation}\n\ts(t) = \\begin{cases} 1, \\quad if \\quad (x^{\\prime} \\gg k) \\& 1 = 1\\\\\n\t0, \\quad else\n\t\\end{cases}\n\\end{equation}\n\n(3) Temporal Coding~\\cite{thorpe1996speed}:\n\nThe characteristic of the neuron spike is that its form is fixed, and there are only differences in quantity and timing. The most common implementation is to express information by the timing of individual spikes: the stronger the stimulus received, the earlier the spike is generated~\\cite{rueckauer2018conversion}. Let the total simulation time be $T$; then the input $x$ of the neuron can be encoded as a spike at time $t^s$:\n\\begin{equation}\n\tt^{s} = T - \\operatorname{round}(Tx)\n\\end{equation}\n\n(4) Population Coding~\\cite{bohte2002error}:\n\nPopulation coding helps to solve the problem of the ambiguity of the information carried by a single neuron. This ambiguity can be understood as follows: when the original information is input into a single neuron, the network can hardly distinguish overlapping or similar related information~\\cite{quian2009extracting,grun2010analysis}.\nThe intuitive idea of population coding is to give different neurons different sensitivities to different types of inputs, which is also reflected in biology. For example, rats' whiskers have different sensitivities to different directions~\\cite{quian2009extracting}. The inputs are transformed into a spike train with a period by population coding. A classical population coding method is the neural information coding method based on the Gaussian tuning curve, referred to in Eq.~\\ref{gaus}. This method is more suitable when the amount of data is small and the information is concentrated. A Gaussian neuron covers a range of analog quantities in the form of a Gaussian function~\\cite{bohte2002error}, \\cite{li2021online}. Suppose that $m$ ($m > 2$) neurons are used to encode a variable $x$ with a value range of $[I_{min}, I_{max}]$. $f(x)$ can be the firing time or the voltage.\n\n\\begin{equation}\n f(x) = e^{-\\frac{(x-\\mu)^{2}}{2\\sigma^{2}}}\n\\label{gaus}\n\\end{equation}\n\nThe corresponding mean and variance of the $i$th ($i=1,2,..., m$) neuron, with an adjustable parameter $\\beta$, are as follows:\n\n\\begin{equation}\n \\mu = I_{min} + \\frac{2i-3}{2}\\frac{I_{max}-I_{min}}{m-2}\n\\end{equation}\n\n\\begin{equation}\n \\sigma = \\frac{1}{\\beta}\\frac{I_{max}-I_{min}}{m-2}\n\\end{equation}\n\n\n\n\n\\subsection{Brain Areas Models}\n\nBrain-inspired models of several functional brain areas are constructed for BrainCog at different levels of abstraction.\n\n(1) Basal Ganglia (BG): \n\nThe basal ganglia facilitates desired action selection and inhibits competing behaviors (making winner-takes-all decisions)~\\cite{chakravarthy2010basal,redgrave1999basal}. It cooperates with the PFC and the thalamus to realize the decision-making process in the brain~\\cite{parent1995functional}. BrainCog models the basal ganglia brain area, including the excitatory and inhibitory connections among the striatum, globus pallidus interna (GPi), globus pallidus externa (GPe), and subthalamic nucleus (STN) of the basal ganglia~\\cite{lanciego2012functional}, as shown in the orange areas of Fig.~\\ref{dmsnn}. The BG brain area component adopts the \\emph{LIF} neuron model in BrainCog, as well as the \\emph{STDP} learning rule and \\emph{CustomLinear} to build internal connections of the BG. 
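\n\nTo make the composition of such a brain-area component more concrete, the following sketch wires up toy LIF populations for the striatum, GPe, GPi, and STN with signed connection matrices along the pathways described above. It is only an illustration of the idea: the class, the parameters, and the wiring details are simplified stand-ins and do not reproduce the actual BrainCog interfaces (\\emph{LIF}, \\emph{STDP}, \\emph{CustomLinear}).\n\n\\begin{verbatim}\nimport torch\n\nclass LIFGroup:\n    """Toy LIF population used only for illustration:\n    leaky integration, threshold, reset."""\n    def __init__(self, n, tau=20.0, v_th=1.0):\n        self.v = torch.zeros(n)\n        self.tau, self.v_th = tau, v_th\n\n    def step(self, current):\n        self.v = self.v + (-self.v + current) / self.tau\n        spikes = (self.v >= self.v_th).float()\n        self.v = self.v * (1.0 - spikes)  # reset the neurons that fired\n        return spikes\n\nn = 16\nareas = {name: LIFGroup(n) for name in ("striatum", "GPe", "GPi", "STN")}\n# Signed pathways sketched from the text: the striatum inhibits GPi and GPe,\n# GPe inhibits STN and GPi, and STN excites GPi.\npathways = {("striatum", "GPi"): -1.0, ("striatum", "GPe"): -1.0,\n            ("GPe", "STN"): -1.0, ("GPe", "GPi"): -1.0, ("STN", "GPi"): 1.0}\nweights = {k: 0.2 * torch.rand(n, n) for k in pathways}\n\ndef bg_step(cortical_drive, last_spikes):\n    # For simplicity, every nucleus receives the same external drive here.\n    currents = {name: cortical_drive.clone() for name in areas}\n    for (src, dst), sign in pathways.items():\n        currents[dst] += sign * weights[(src, dst)] @ last_spikes[src]\n    return {name: group.step(currents[name]) for name, group in areas.items()}\n\nspikes = {name: torch.zeros(n) for name in areas}\nfor t in range(100):\n    spikes = bg_step(torch.rand(n), spikes)\n\\end{verbatim}\n\n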
Then, the BG brain area component can be used to build brain-inspired decision-making SNNs (see section 3.2.1 for detail).\n\n(2) Prefrontal Cortex (PFC):\n\nPFC is of significant importance when human high-level cognitive behaviors happen. In BrainCog, many cognitive tasks based on SNN are inspired by the mechanisms of the PFC~\\cite{bechara1998dissociation}, such as decision-making, working memory~\\cite{rao1999isodirectional,d2000prefrontals,lara2015role}, knowledge representation~\\cite{wood2003human}, theory of mind and music processing~\\cite{frewen2015healing}. Different circuits are involved to complete these cognitive tasks. In BrainCog, the data-driven PFC column model contains 6 layers and 16 types of neurons. The distribution of neurons, membrane parameters and connections of different types of neurons are all derived from existing biological experimental data. The PFC brain area component mainly employs the \\emph{LIF} neuron model to simulate the neural dynamics. The \\emph{STDP} and \\emph{R-STDP} learning rules are utilized to compute the weights between different neural circuits. \n\n(3) Primary Auditory Cortex (PAC):\n\nThe primary auditory cortex is responsible for analyzing sound features, memory, and the extraction of inter-sound relationships~\\cite{Koelsch2012}. This area exhibits a topographical map, which means neurons respond to their preferred sounds. In BrainCog, neurons in this area are simulated by the \\emph{LIF} model and organized as minicolumns to represent different frequencies of sounds. To store the ordered note sequences, the excitatory and inhibitory connections are updated by \\emph{STDP} learning rule.\n\n\n\n(4) Inferior Parietal Lobule (IPL):\n\nThe function of IPL is to realize motor-visual associative learning~\\cite{macuga2011selective}. The IPL consists of two subareas: IPLM (motor perception neurons in IPL) and IPLV (visual perception neurons in IPL). The IPLM receives information generated by self-motion from ventral premotor cortex (vPMC), and the IPLV receives information detected by vision from superior temporal sulcus (STS). The motor-visual associative learning is established according to the STDP mechanism and the spiking time difference of neurons in IPLM and IPLV.\n\n\n\n\n(5) Hippocampus (HPC):\n\nThe hippocampus is part of the limbic system and plays an essential role in the learning and memory processes of the human brain. Epilepsy patients with bilateral hippocampus removed (e.g. the patient H.M.) have symptoms of anterograde amnesia. They are unable to form new long-term declarative memories~\\cite{milner1998}. This case study has proved that the hippocampus is in the key process of converting short-term memory to long-term memory and plays a vital role~\\cite{smith1981role}. \n\nFurthermore, through electrophysiological means, it was found that the hippocampal region is also crucial for forming new concepts. Specific neurons in the hippocampus only respond selectively to specific concepts, completing the specific encoding between different concepts. Moreover, it was through the study of the hippocampus that neuroscientists discovered the STDP learning rule~\\cite{dan2004spike}, which further demonstrated the high plasticity of the hippocampus.\n\n\n(6) Insula:\n\nThe function of the Insula is to realize self-representation~\\cite{craig2009you}, that is, when the agent detects that the movement in the field of vision is generated by itself, the Insula is activated. The Insula receives information from IPLV and STS. 
The IPLV outputs the visual feedback information predicted according to its motion, and the STS outputs the motion information detected by vision. If both are consistent, the Insula will be activated.\n\n(7) Thalamus (ThA):\n\nStudies have shown that the thalamus is composed of a series of nuclei connected to different brain parts and heavily contributes to many brain processes. In BrainCog, this area is discussed from both anatomic and cognitive perspectives. Understanding the anatomical structure of the thalamus can help researchers to comprehend the mechanisms of the thalamus. Based on the essential and detailed anatomic thalamocortical data~\\cite{Izhikevichlargescale}, BrainCog reconstructs the thalamic structure by involving five types of neurons (including excitatory and inhibitory neurons) to simulate the neuronal dynamics and building the complex synaptic architecture according to the anatomic results. Inspired by the structure and function of the thalamus, the brain-inspired decision-making model implemented by BrainCog takes into account the transfer function of the thalamus and cooperates with the PFC and basal ganglia to realize multi-brain areas coordinated decision-making model.\n\n(8) Ventral Visual Pathway:\n\nCognitive neuroscience research has shown that the brain can receive external input and quickly recognize objects due to the hierarchical information processing of the ventral visual pathway. The ventral pathway is mainly composed of V1, V2, V4, IT, and other brain areas, which mainly process information such as object shape and color~\\cite{ishai1999distributed,kobatake1994neuronal}. These visual areas form connections through forward, feedback, and self-layer projections. The interaction of different visual areas enables humans to recognize visual objects. The primary visual cortex V1 is selective for simple edge features. With the transmission of information, high-level brain areas combine with lower-level receptive fields to form more complex large receptive fields to recognize more complex objects~\\cite{hubel1962receptive}. Inspired by the structure and function of the ventral visual pathway, BrainCog builds a deep forward SNN with layer-wise information abstraction and a feedforward and feedback interaction deep SNN. The performance is verified on several visual classification tasks. \n\n(9) Motor Cortex:\n\nThe control of biological motor function involves the cooperation of many brain areas. The extra circuits consisting of the PMC, cerebellum, and BA6 motor cortex area are primarily associated with motor control elicited by external stimuli such as visual, auditory, and tactual inputs. Internal motor circuits, including the basal ganglia and SMA, predominate in self-guided, learned movements~\\cite{geldberg1985supplementary,mushiake1991neuronal,gerloff1997stimulation}. Population activity of motor cortical neurons encodes movement direction. Each neuron has its preferred direction. The more consistent the target movement direction is with its preferred direction, the stronger the neuron activities are~\\cite{georgopoulos1995motor,kakei1999muscle}. The cerebellum receives input from motor-related cortical areas such as PMC, SMA, and the prefrontal cortex, which are important for the completion of fine movements, maintaining balance and coordination of movements~\\cite{strick2009cerebellum}. 
Inspired by the organization of the brain's motor cortex, we use spiking neurons to construct a motion control model, and apply it to the iCub robot, which enables the robot to play the piano according to music pieces.\n\n\\section{Brain-inspired AI}\nComputational units (different neuron models, learning rules, encoding strategies, brain area models, etc.) at multiple scales, provided by BrainCog serve as a foundation to develop functional networks. To enable BrainCog for Brain-inspired AI, cognitive function centric networks need to be built and provided as reusable functional building blocks to create more complex brain-inspired AI. This section introduces various functional building blocks developed based on BrainCog.\n\n\\subsection{Perception and Learning}\nIn this subsection, we will introduce the supervised and unsupervised perception and learning spiking neural networks based on the fundamental components of BrainCog. Inspired by the global feedback connections, the neural dynamics of spiking neurons, and the biologically plausible STDP learning rule, we improve the performance of the spiking neural networks. We show great adaption in the small training sample scenario. In addition, our model has shown excellent robustness in noisy scenarios by taking inspiration from the multi-component spiking neuron and the quantum mechanics. The burst spiking mechanism is used to help our converted SNNs with higher performance and lower latency. Based on this engine, we also present a human-like concept learning framework to generate representations with five types of perceptual strength information.\n\n\\textbf{1. Learning Model}\n\n\\emph{(1) SNN with global feedback connections}\n\n\nThe spiking neural network transmits information in discrete spike sequences, which is consistent with the information processing in the human brain~\\cite{Maass1997Networks}. The training of spiking neural networks has been widely concerned by researchers. Most researchers take inspiration from the mechanism of synaptic learning and updating between neurons in the human brain, and propose biologically plausible learning rules, such as Hebbian theory~\\cite{amit1994correlations}, STDP~\\cite{bi1998synaptic}, and STP~\\cite{zucker2002short}, which have been adopted into the training of spiking neural networks~\\cite{tavanaei2016bio,tavanaei2017multi,diehl2015unsupervised,falez2019multi,zhang2018plasticity,zhang2018brain}. However, most SNNs are based on feedforward structures, while the importance of the brain-inspired structures has been ignored. The anatomical and physiological evidences show that in addition to feedforward connections, numerous feedback connections exist in the brain, especially among sensory areas~\\cite{felleman1991distributed,sporns2004small}. The feedback connections will carry out the predictions from the top layer to cooperate with the local plasticity rules to formulate the learning and inference in the brain. \n\nHere, we introduce the global feedback connections and the local differential learning rule~\\cite{zhao2020glsnn} in the training of SNNs. \n\\begin{figure}[!htbp]\n\t\\centering\n\t\\includegraphics[scale=0.6]{.\/fig\/zdc_pipeline.pdf}\n\t\\caption{The feedforward and feedback pathway in the SNN model. 
The global feedback pathway propagates the target of the hidden layer, modified from~\\cite{zhao2020glsnn}.}\n\t\\label{snn}\n\\end{figure}\n\nWe use the LIF spiking neuron model in the BrainCog to simulate the dynamical process of the membrane potential $V(t)$ as shown in Eq.~\\ref{equa_lif1}. We use the mean firing rates $S_{l}$ of each layer to denote the representation of the $l_{th}$ layer in the forward pathway, and the corresponding target is denoted as $\\hat{S}_{l}$. Here we use the mean squared loss (MSE) as the final loss function.\n\n$\\hat{S}_{L-1}$ denotes the target of the penultimate layer, and is calculated as shown in Eq.~\\ref{equal_per}, $W_{L-1}$ denotes the forward weight between the $(L-1)_{th}$ layer and the $L_{th}$. $\\eta_t$ represents the learning rate of the target~\\cite{zhao2020glsnn}.\n\\begin{equation}\n\t\\hat{S}_{L-1} = S_{L-1} - \\eta_t\\Delta S = S_{L-1} - \\eta_t W_{L-1}^T(S_{out} - S^T)\n\t\\label{equal_per}\n\\end{equation}\n\nThe target of the other hidden layer can be obtained through the feedback connections:\n\n\\begin{equation}\n\t\\hat{S}_{l} = S_{l} - G_{l}(S_{out} - S^T)\n\t\\label{equal_all}\n\\end{equation}\n\nBy combining the feedforward representation and feedback target, we compute the local MSE loss. We can compute the local update of the parameters with the surrogate gradient. We have conducted experiments on the MNIST and Fashion-MNIST datasets, and achieved 98.23\\% and 89.68\\% test accuracy with three hidden layers. Each hidden layer is set with 800 neurons. The details are shown in Fig.~\\ref{zdc_result}.\n\\begin{figure}[!htbp]\n\t\\centering\n\t\\includegraphics[scale=0.35]{.\/fig\/result_zdc.pdf}\n\t\\caption{The test accuracy on MNIST and Fashion-MNIST datasets of the SNNs with global feedback connections.}\n\t\\label{zdc_result}\n\\end{figure}\n\n\n\\emph{(2) Biological-BP SNN}\n\nThe backpropagation algorithm is an efficient optimization method that is widely used in deep neural networks and has promoted the great success of deep learning. However, due to the non-smoothness of neurons in SNNs, the backpropagation optimization is difficult to be applied to train SNNs directly. To solve the above problem, Bengio et al.~\\cite{bengioEstimatingPropagatingGradients2013} proposed four gradient approximation methods, including Straight-Through Estimator to enable the application of the backpropagation algorithm to neural networks containing nonsmooth neurons. Wu et al.~\\cite{wuSpatioTemporalBackpropagationTraining2018} proposed the Spatio-Temporal Backpropagation (STBP) algorithm to train SNNs by using a differentiable function to approximate spiking neurons through a surrogate gradient method. Based on the spatio-temporal dynamic of SNNs, they achieved the backpropagation of SNNs in both time and space dimensions. Zheng et al.~\\cite{zhengGoingDeeperDirectlyTrained2021} proposed threshold-dependent batch normalization (tdBN) to improve the performance of deep SNNs. Fang et al.~\\cite{fangIncorporatingLearnableMembrane2021} proposed a parametric LIF (PLIF) model to further improve the performance of SNNs by optimizing the time constants of neurons during the training process. \n\nMost of these BP-based SNNs often simply regard SNN as a substitute for RNN, but ignore the dynamic of spiking neurons. To solve the problem of non-differentiable neuronal models, Bohte et al.~\\cite{bohte2011error} approximated the backpropagation process of neuronal models using the surrogate gradient method. 
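\n\nTo make the surrogate gradient idea concrete, a minimal PyTorch-style spike function could look like the sketch below. The rectangular surrogate window and its width are illustrative choices and are not the particular surrogate functions used in the works cited above.\n\n\\begin{verbatim}\nimport torch\n\nclass SurrogateSpike(torch.autograd.Function):\n    """Heaviside spike in the forward pass,\n    rectangular surrogate derivative in the backward pass."""\n\n    @staticmethod\n    def forward(ctx, v_minus_th):\n        ctx.save_for_backward(v_minus_th)\n        # emit a spike when the membrane potential crosses the threshold\n        return (v_minus_th >= 0).float()\n\n    @staticmethod\n    def backward(ctx, grad_output):\n        (v_minus_th,) = ctx.saved_tensors\n        width = 0.5  # gradient is passed only near the threshold, |V - V_th| < width\n        surrogate = (v_minus_th.abs() < width).float() / (2 * width)\n        return grad_output * surrogate\n\nspike_fn = SurrogateSpike.apply\nv = torch.randn(8, requires_grad=True)  # membrane potential minus threshold\nspikes = spike_fn(v)                    # non-differentiable step in the forward pass\nspikes.sum().backward()                 # gradients still flow through the surrogate\n\\end{verbatim}\n\nThe width of the surrogate window controls how far from the threshold gradients are allowed to pass, trading gradient sparsity against approximation error.\n\n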
However, this method results in gradient leakage and does not allow for proper credit assignment in both temporal and spatial dimensions. Also, due to the reset operation after the spikes are emitted, the errors in the backward process can not propagate across spikes. To solve the above problem, we propose Backpropagation with biologically plausible spatio-temporal adjustment~\\cite{shen2022backpropagation}, as shown in Fig.~\\ref{bpsta}, which can correctly assign credit according to the contribution of the neuron to the membrane potential at each moment.\n\n\\begin{figure}[!htbp]\n\t\\centering\n\t\\includegraphics[scale=0.48]{.\/fig\/shen_bpsta.pdf}\n\t\\caption{The forward and backward process of biological BP-based SNNs for BrainCog.}\n\t\\label{bpsta}\n\\end{figure}\n\nBased on LIF spiking neuron, the direct input encoding strategy, the MSE loss function and the surrogate gradient function supplied in BrainCog, we propose a Biologically Plausible Spatio-Temporal Adjustment (BPSTA) to help BP algorithm with more reasonable error adjustment in the spatial temporal dimension~\\cite{shen2022backpropagation}. The algorithm realizes the reasonable adjustment of the gradient in the spatial dimension, avoids the unnecessary influence of the neurons that do not generate spikes on the weight update, and extracts more important features. By applying the temporal residual pathway, our algorithm helps the error to be transmitted across multiple spikes, and enhances the temporal dependency of the BP-based SNNs. Compared with SNNs and ANNs with the same structure that only used the BP algorithm, our model greatly improves the performance of SNNs on the DVS-CIFAR10 and DVS-Gesture datasets, while also greatly reducing the energy consumption and decay of SNNs, as shown in Tab.~\\ref{compare2}.\n\\begin{table}[!htbp]\n \\centering\n \\caption{The energy efficiency study. The former represents our method, the latter represents the baseline, adopted from~\\cite{shen2022backpropagation}.}\n \\begin{tabular}{cccc}\n \\toprule[2pt]\n Dataset & Accuracy & Firing-rate &EE = $\\frac{E_{ANN}}{E_{SNN}}$ \\\\\n \\midrule[2pt]\n MNIST & 99.58\\%\/99.42\\% & 0.082\/0.183 & 35.1x\/15.7x \n \\\\\n \n N-MNIST & 99.61\\%\/99.32\\% & 0.097\/0.176 & 29.6x\/16.3x \\\\\n \n CIFAR10 & 92.33\\%\/89.49\\% & 0.108\/0.214 & 26.6x\/13.4x \\\\ \n \n DVS-Gesture & 98.26\\%\/93.92\\% & 0.083\/0.165 & 34.6x\/17.4x \\\\ \n \n DVS-CIFAR10 & 77.76\\%\/71.40\\% & 0.097\/0.177 & 29.5x\/16.2x \\\\\n \\bottomrule[2pt]\n \\end{tabular}\n \\label{compare2}\n\\end{table} \n\n\\emph{(3) Unsupervised STDP-based SNN}\n \nUnsupervised learning is an important cognitive function of the brain. The brain can complete the task of object recognition by summarizing the characteristics and features of objects. Unsupervised learning does not require explicit labels, but extracts the features of samples adaptively in learning process. Modeling of this ability of the brain is critical. There are multiple learning rules in the brain to accomplish various learning tasks. STDP is a widespread rule of synaptic weight modification in the brain. It updates the synaptic weights according to the temporal relationship of the pre- and post-synaptic spikes. Compared with the backpropagation algorithm widely used in DNN with gradient calculation, STDP is more biological plausible. However, unlike the backpropagation algorithm that relies on a large number of sample labels, STDP is a local optimization algorithm. 
Due to the lack of global information, the ability of self-organization and coordination between neurons is insufficient. In the SNN model, this leads to disordered and uncertain spike discharges, and it is difficult for the neurons to reach a stable firing balance. To this end, we design an unsupervised STDP-based spiking neural network model based on BrainCog, and bring unsupervised learning to BrainCog as a functional module, as shown in Fig.~\\ref{frame}.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.48\\textwidth]{fig\/frame.pdf}\n\\caption{The framework of the unsupervised STDP-based spiking neural network model, which introduces the adaptive synaptic filter (ASF), the adaptive threshold balance (ATB), and the adaptive lateral inhibitory connection (ALIC) mechanisms to improve the information transmission and feature extraction of STDP-based SNNs. This figure is from \\cite{dong2022unsupervised}.}\n\\label{frame}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.49\\textwidth]{fig\/stp1.pdf}\n\\caption{(a) The adaptive synaptic filter and the adaptive threshold balance jointly regulate the firing balance of neurons. (b) The adaptive lateral inhibitory connection has different connections for different samples, preventing neurons from learning the same features. This figure is from~\\cite{dong2022unsupervised}.}\n\\label{stp}\n\\end{figure}\n\nTo solve the above problems, we introduce various adaptive mechanisms to improve the self-organization ability of the overall network. STP is another synaptic learning mechanism that exists in the brain. Inspired by STP, we design an adaptive synaptic filter (ASF) that integrates input currents through nonlinear units, and an adaptive threshold balance (ATB) that dynamically changes the threshold of each neuron to avoid excessively high or low firing rates. The combination of the two controls the firing balance of neurons, as Fig.~\\ref{stp} shows. We also address the problem of coordinating neurons within a single layer with an adaptive lateral inhibitory connection (ALIC), which has different connection structures for different input samples. Finally, in order to solve the problem of the low efficiency of STDP training, we design a sample temporal batch STDP, which combines information across time steps and samples to update the synaptic weights uniformly, as shown by the following formula.\\begin{equation}\n\t\\begin{split}\n\t\t&\\frac{dw_{j}^{(t)}}{dt}^{+}=\\sum \\limits_{m=0}^{N_{batch}}\\sum \\limits_{n=0}^{T_{batch}}\\sum\\limits_{f=1}^{N} W(t^{f,m}_{i}-t^{n,m}_{j})\\\\\n\t\\end{split}\n\t\\label{eq5}\n\\end{equation} where $W(\\cdot)$ is the STDP function, $N_{batch}$ is the batch size of the input, $T_{batch}$ is the number of time steps in a batch, and $N$ is the number of neurons. We verified our model on MNIST and Fashion-MNIST, achieving 97.9\\% and 87.0\\% accuracy, respectively. To the best of our knowledge, these are the state-of-the-art results for unsupervised SNNs based on STDP. \n\n\n\\textbf{2. Adaptability and Optimization}\n\n\\emph{(1) Quantum superposition inspired SNN}\n\n\nAt the microscopic scale, quantum mechanics dominates the behavior of objects, revealing the probabilistic and uncertain nature of the world. New technologies based on quantum theory, such as quantum computation and quantum communication, provide an alternative approach to information processing. 
Researches show that biological neurons spike at random and the brain can process information with huge parallel potential like quantum computing. \n\nInspired from this, we propose the Quantum Superposition Inspired Spiking Neural Network (QS-SNN)~\\cite{SUN2021102880}, complementing quantum image (CQIE) method to represent image in the form of quantum superposition state and then transform this state to spike trains with different phase. Spiking neural network with time differential convolution kernel (TCK) is used to do further classification shown in Fig.~\\ref{Fig:QSSNN}.\n\n\n\n\n\\begin{figure}[!htb]\n\t\\centering\n\t\\includegraphics[width=1.0\\linewidth]{fig\/QSSNN.pdf} \n\t\\caption{Quantum superposition inspired spiking neural network, adopted from \\cite{SUN2021102880}.}\n\t\\label{Fig:QSSNN}\n\\end{figure}\n\n\n\\begin{figure}[!htb]\n\t\\centering\n\t\\includegraphics[width=1.0\\linewidth]{fig\/mnist_shift_result.pdf} \n\t\\caption{Performance of QS-SNN on background MNIST inverse image, adopted from \\cite{SUN2021102880}.}\n\t\\label{Fig:mnist-res}\n\\end{figure}\n\nThe effort tries to incorporate the quantum superposition mechanism to SNNs as a new form of encoding strategy for BrainCog, and the model finally shows its capability on robustness for learning. The proposed QS-SNN model is tested on color inverted MNIST datasets. The background-inverted picture is encoded in the quantum superposition form as shown in Eq.~\\ref{QS-SNN} and~\\ref{QS-SNN_limitation}.\n\\begin{equation}\n\t\\mathinner{|I(\\theta)\\rangle}=\\frac{1}{2^n}\\sum\\limits_{i=0}^{2^{2n}-1}(cos(\\theta_{i})\\mathinner{|x_{i}\\rangle}+sin(\\theta_{i})\\mathinner{|\\bar{x}_{i}\\rangle})\\otimes\\mathinner{|i\\rangle}, \\\\\n\t\\label{QS-SNN}\n\\end{equation}\n\n\n\\begin{equation}\n\t\\theta_{i} \\in [0, \\frac{\\pi}{2}], i= 1, 2, 3, \\dots, 2^{2n}-1.\n\t\\label{QS-SNN_limitation}\n\\end{equation}\n\nSpike sequences of different frequencies and phases are generated from the picture information of the quantum superposition state. Furthermore we use two-compartment spiking neural networks to process these spike trains.\n\nWe compare the QS-SNN model with other convolutional models. The result in Fig.~\\ref{Fig:mnist-res} shows that our QS-SNN model overtakes other convolutional neural networks in recognizing background-inverted image tasks.\n\n\\emph{(2) Unsupervised SNN with adaptive learning rule and structure}\n\nBrain can accomplish specific tasks by adaptively learning to organize the features of a small number of samples. Few-shot learning is an important ability of the brain. In the above section, an unsupervised STDP-based spiking neural network is introduced~\\cite{dong2022unsupervised}. 
To better illustrate the power of our model with small training samples, we tested the model with small sample sets and found that it has a stronger small-sample processing ability than an ANN with a similar structure, as shown in Tab.~\\ref{small}~\\cite{dong2022unsupervised}.\n\n\\begin{table}[h]\n\t\\caption{The performance of the unsupervised SNN compared with an ANN on the MNIST dataset with different numbers of training samples~\\cite{dong2022unsupervised}.}\n\t\\centering\n\t\\resizebox{0.8\\linewidth}{!}{\n\t\t\\begin{tabular}{lrrrr}\n\t\t\t\\toprule \n\t\t\tsamples&200 &100&50&10 \\\\\n\t\t\t\\midrule\t\n\t\t\tANN&79.77\\%&71.40\\%&68.72\\%&47.12\\%\\\\\n\t\t\tOurs&81.45\\%&75.44\\%&72.88\\%&51.45\\%\\\\\n\t\t\t\\midrule\n\t\t\tImprovement&1.68\\%&4.04\\%&4.16\\%&4.33\\%\\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t}\n\t\\label{small}\n\t\n\\end{table}\n\n\n\n\n\\emph{(3) Efficient and Accurate Conversion of SNNs}\n\n\tSNNs have attracted attention due to their biological plausibility, fast inference, and low energy consumption. However, the training methods based on plasticity~\\cite{zeng2017improving} and surrogate gradient algorithms~\\cite{wu2018spatio} require much memory and perform worse than ANNs on large networks and complex datasets. For users of BrainCog, there is a clear need to use SNNs while keeping the benefits of ANNs. As an efficient alternative, the conversion method combines the characteristics of backpropagation training and low energy consumption and can achieve the same excellent performance as ANNs with lower power consumption~\\cite{li2021free, han2020deep, han2020rmp}. However, the converted SNNs typically suffer from severe performance degradation and time delays. \n\t\n\t\\begin{figure}[htb]\n \t\\centering\n \t\\includegraphics[scale=0.3]{fig\/error.png}\n \t\\caption{The conversion errors from IF neuron, time dimension, and MaxPooling, adopted from~\\cite{li2022efficient}.}\n \t\\label{error}\n \\end{figure}\n\n We attribute the performance loss to the IF neurons, the time dimension, and the MaxPooling layer~\\cite{li2022efficient}, as shown in Fig.~\\ref{error}. In an SNN, a neuron can send at most one spike in each time step, so the maximum firing rate of the neuron is 1. After normalizing the weights of the trained ANN, some activation values greater than 1 cannot be effectively represented by IF neurons. Therefore, the residual membrane potential in neurons will affect the performance of the conversion to some extent. In addition, since the IF neuron receives pre-synaptic spikes for information transmission, the synaptic current received by the neuron at each time step is unstable, but its sum can approximate the converted value as the simulation time increases. However, the total number of spikes emitted is determined by the time-varying extremum of the sum of the synaptic currents received. When the activation value corresponding to the IF neuron is negative and the total synaptic current received by the neuron in a short time exceeds the threshold, a neuron that should be resting will issue spikes, and the influence of these spikes on the later layers cannot be eliminated by increasing the simulation time. Finally, in the conversion of the MaxPooling layer, previous work has enabled spikes from the neuron with the maximum firing rate to pass through. 
However, due to the instability of synaptic current, neurons with the maximum firing rate are often not fixed, which makes the output of the converted MaxPooling layer usually larger.\n\n \\begin{figure}[!h]\n \t\\centering\n \t\\includegraphics[scale=0.5]{fig\/conver.pdf}\n \t\\caption{The conversion methods. (a) Burst spikes increase the upper limit of firing rates; (b) spike calibration corrects the effect of faulty spikes on conversion, and (c) LIPooling uses lateral inhibition mechanisms to achieve accurate conversion of the MaxPooling layers. Refined based on~\\cite{li2022efficient,li2022spike}.}\n \t\\label{conver}\n \\end{figure}\n\n To solve the problem of the residual membrane potential of neurons, we introduce the burst mechanism, as shown in Fig.~\\ref{conver} (a), which enables neurons to send more than one spike between two time steps, depending on the current membrane potential. Once some neurons have residual information remaining, they can send spikes between two time steps. In this way, the firing rate of SNN can be increased, and the membrane potential remaining in the neuron can be transmitted to the neuron of the next layer. \n \n For classification problems, SNN only needs to ensure that the index of the maximum output is correct, but for more demanding conversion tasks, such as object detection, the solution of SIN problem is worth exploring. Note that the spikes emitted by hidden layer neurons are unstable, but the mean of its inter-spike interval distribution is related to its corresponding activation value~\\cite{li2022spike}. We monitor each neuron's spiking time in the forward propagation process and update its average inter-spike interval, as shown in Fig.~\\ref{conver} (b). Under a certain time allowance, the neuron that does not emit spikes will be determined to be Inactivated Neuron. Then, the twin weights emit spikes to suppress the influence of historical errors and calibrate the influence of wrong spikes to a certain extent to ensure accurate conversion.\n\n\n Inspired by the lateral inhibition mechanism~\\cite{blakemore1970lateral}, we propose LIPooling for converting the maximum pooling layer, as shown in Fig.~\\ref{conver} (c). From the operation perspective, the inhibition of other neurons by the winner in LIPooling is -1, so the neuron with the largest firing rate at the current time step may not spike due to the inhibition of other neurons in history. From the output perspective, LIPooling sums up the output of all neurons during simulation. So the key is that LIPooling uses competition between neurons to get an accurate sum (equal to the actual maximum), instead of picking the winner.\n\n\\textbf{3. Multi-sensory Integration}\n\n\nOne can build SNN models through BrainCog to process different types of sensory inputs, while the human brain learns and makes decisions based on multi-sensory inputs. When information from various sensory inputs is combined, it can lead to increased perception, quicker response times, and better recognition. Hence, enabling BrainCog to process and integrate multi-sensory inputs are of vital importance.\n\nIn this section, we focus on concept learning with multi-sensory inputs. 
We present a multi-sensory concept learning framework based on BrainCog to generate integrated vectors with the multi-sensory representation of the concept.\n\nEmbodied theories, which emphasize that meaning is rooted in our sensory and experiential interactions with the environment, supports multi-sensory representations.\nBased on SNNs, we present a human-like framework to learn concepts which can generate integrated representations with five types of perceptual strength information~\\cite{wywFramework}.\nThe framework is developed with two distinct paradigms: Associate Merge (AM) and Independent Merge (IM), as Fig.~\\ref{MultisensoryIntegrationFramework} shows.\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[width=8cm]{.\/fig\/MultisensoryIntegrationBrainCogNEW.png}\\\\\n \\caption{The Framework of Concept Learning Based on SNNs with multi-sensory Inputs.}\n \\label{MultisensoryIntegrationFramework}\n\\end{figure}\n\nIM is based on the widely accepted cognitive psychology premise that each type of sense for the concept is independent before integration~\\cite{wywFramework}.\nAs the input to the model, we will employ five common perceptual strength: visual, auditory, haptic, olfactory and gustatory.\nDuring the data preparation step, we min-max normalize all kinds of perceptual strength of the concept in the multi-sensory dataset so that each value of the vector is in $[0, 1]$.\nWe regard them as stimuli to the presynaptic neurons.\nIt's a 2-layer SNN model, with 5 neurons in the first layer matching the concept's 5 kinds of perceptual strength, and 1 neuron in the second layer representing the neural state following multi-sensory integration.\nIn this paradigm, we use perceptual strength based presynaptic Poisson neurons and LIF or Izhikevich as the postsynaptic neural model.\n\nThe weights between the neurons are $W^i= \\frac{g_i}{\\Sigma_i^n g_i} $ where $g_i = \\frac{1}{\\sigma_i^2}$,$\\sigma_i^2$ is the variance of each kind of perceptual strength.\nWe convert the postsynaptic neuron's spiking train $S^{post}([0, T])$ in $[0, T]$ into integrated representations $B^{IM}([0,T])$ for the concept in this form:\n\\begin{equation}\n\\begin{aligned}\nB^{IM}([0,T]) &= [\\mathcal{T}(S^{post}((0, tol])), \\mathcal{T}(S^{post}((tol, 2*tol])), \\cdots , \\\\\n& \\mathcal{T}(S^{post}(((k-1)*tol, k*tol])) , \\cdots, \\\\\n& \\mathcal{T}(S^{post}((\\lfloor \\frac{T}{tol} \\rfloor * tol, T]))]\n\\end{aligned}\n\\end{equation}\nHere if the interval has any spikes, the bit is 1. Otherwise it is 0, according to the $\\mathcal{T} (interval)$ function.\n\nThe AM paradigm presupposes that each kind of modality is associated before integration~\\cite{wywFramework}. \nIt includes 5 neurons, matching the concept's 5 distinct modal information sources. \nThey are linked with each other and are not self-connected. \nThe input spike trains to the network are generated using a Poisson event-generation algorithm based on perceptual strength.\nFor each concept, we turn the spike trains of these neurons into the ultimate integrated representations.\n\n\n\nThe weight value is defined by the correlation between each two modalities, i.e. 
$W = Corr(i, j)$, where $i, j \\in [A, G, H, O, V]$.\nWe convert the spike trains $S^i([0, T])$ of all neurons into binary codes $B^i([0,T])$ and conjoin them as the ultimate vector $B([0,T])$ as follows:\n\\begin{equation}\n\\begin{aligned}\nB^i([0,T]) &= [\\mathcal{T}(S^i((0, tol])), \\mathcal{T}(S^i((tol, 2*tol])), \\cdots , \\\\\n& \\mathcal{T}(S^i(((k-1)*tol, k*tol])) , \\cdots, \\\\\n& \\mathcal{T}(S^i((\\lfloor \\frac{T}{tol} \\rfloor * tol, T]))]\n\\end{aligned}\n\\end{equation}\n\n\\begin{equation}\n\\begin{aligned}\nB^{AM}([0,T]) &= [B^A([0,T]) \\oplus B^H([0,T]) \\oplus B^G([0,T]) \\\\\n& \\oplus B^O([0,T]) \\oplus B^V([0,T])]\n\\end{aligned}\n\\end{equation}\n\n\nTo test our framework, we conducted experiments with three multi-sensory datasets (LC823~\\cite{wywLC423,wywLC400}, BBSR~\\cite{wywBBSR}, Lancaster40k~\\cite{wywLan40k}) for the IM and AM paradigms, respectively. \nWe used WordSim353~\\cite{wywWS353} and SCWS1994~\\cite{wywSCWS} as metrics~\\cite{wywFramework}. \nThe results show that the integrated representations produced by our framework are closer to human judgments than the original ones: the integrated representations performed better in 37 out of a total of 48 tests across the AM and IM paradigms~\\cite{wywFramework}.\nMeanwhile, to compare the two paradigms, we introduce concept feature norms datasets, which represent concepts with systematic and standardized feature descriptions. \nIn this study, we use the McRae~\\cite{wywMcRae} and CSLB~\\cite{wywCSLB} datasets as criteria.\nThe findings show that the IM paradigm performs better at multi-sensory integration for concepts with higher modality exclusivity, while the AM paradigm benefits concepts with a more uniform distribution of perceptual strength. \nFurthermore, we present perceptual strength-free metrics to demonstrate that both paradigms of our framework have excellent generality~\\cite{wywFramework}.\n\n\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[width=8cm]{.\/fig\/yuwei1.jpg}\\\\\n \\caption{The Correlation Results Between Modality Exclusivity and Average of 3 Neighbors' Rankings.}\n \\label{MEFS}\n\\end{figure}\n\n\n\n\n\n\\subsection{Decision Making}\nThis subsection introduces how BrainCog implements decision-making functions from the perspective of brain neural mechanism modeling and deep SNN-based reinforcement learning models. Using BrainCog, we build a multi-brain-area coordinated SNN model and a spiking deep Q-network to solve decision-making and control problems.\n\n\\textbf{1. Brain-inspired Decision-Making SNN}\n\n\nFor mammalian brain-inspired decision-making, we take inspiration from the PFC-BG-ThA-PMC neural circuit and build a brain-inspired decision-making spiking neural network (BDM-SNN) model~\\cite{zhao2018brain} with BrainCog, as shown in Fig.~\\ref{dmsnn}. BDM-SNN contains the excitatory and inhibitory connections within the basal ganglia nuclei and the direct, indirect, and hyperdirect pathways from the PFC to the BG~\\cite{frank2006anatomy,silkis2000cortico}. This BDM-SNN model incorporates biological neuron models (LIF and simplified H-H models), synaptic plasticity learning rules, and interactive connections among multiple brain areas developed by BrainCog. On this basis, we extend the dopamine (DA)-regulated BDM-SNN, which modulates synaptic learning for the PFC-to-striatal direct and indirect pathways via dopamine. 
Different from the DA regulation method in~\\cite{zhao2018brain} which uses multiplication to modulate the specified connections, we improve it by introducing R-STDP~\\cite{Eugene2007} (from Eq.~\\ref{rstdpe} and Eq.~\\ref{rstdpw}) to modulate the PFC-to-striatal connections. \n\n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[width=8cm]{.\/fig\/bdmsnn.jpg}\n\\end{center}\n\\caption{The architecture of DA-regulated BDM-SNN, refined based on~\\cite{zhao2018brain}.}\\label{dmsnn}\n\\end{figure}\n\n\nThe BDM-SNN model implemented by BrainCog could perform different tasks, such as the Flappy Bird game and has the ability to support UAV online decision-making. For the Flappy Bird game, our method achieves a performance level similar to humans, stably passing the pipeline on the first try. Fig.~\\ref{fb}a illustrates the changes in the mean cumulative rewards for LIF and simplified H-H neurons while playing the game. The simplified H-H neuron achieves similar performances to that of LIF neurons. BDM-SNN with different neurons can quickly learn the correct rules and keep obtaining rewards. We also analyze the role of different ion channels in the simplified H-H model. From Fig.~\\ref{fb}b, we find that sodium and potassium ion channels have opposite effects on the neuronal membrane potential. Removing sodium ion channels will make the membrane potential decay, while the membrane potential rises faster and fires earlier when removing potassium ion channels. These results indicate that sodium ion channels can help increase the membrane potential, and potassium ion channels have the opposite effect. Experimental results also indicate that BDM-SNN with simplified H-H model that removes sodium ion channels fails to learn the Flappy Bird game.\n\n\n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[width=8.8cm]{.\/fig\/hhfly.jpg}\n\\end{center}\n\\caption{(a) Experimental result of BrainCog based BDM-SNN on Flappy Bird. The y-axis is the mean of cumulative rewards. (b) Effects of different ion channels on membrane potential for simplified H-H model.}\\label{fb}\n\\end{figure}\n\nIn addition, for the UAV decision-making tasks in the real scene, our model could perform potential applications including flying over doors and windows and obstacle avoidance, which have been realized in~\\cite{zhao2018brain}. Users only need to divide the state space and action space according to different tasks, call the DA-regulated BDM-SNN decision-making model, and combine the UAV's action control instructions to complete the UAV's decision-making process.\n \n \nThis part of the work mainly draws on the neural structure and learning mechanism of brain decision-making and proposes the multi-brain areas coordinated decision-making spiking neural network constructed by BrainCog, and verifies the ability of reinforcement learning in different application scenarios. \n\n\n\\textbf{2. Spiking Deep Q Network with potential based layer normalization}\n\nDeep Q network is widely used for decision-making tasks, and it is required to have SNN based deep Q network for BrainCog so that it can be used for SNN based decision making. We propose potential-based layer normalization spiking deep Q network (PL-SDQN) model to combine SNN with deep reinforcement learning~\\cite{sun2022solving}. We use the LIF neuron model in BrainCog to simulate neurodynamics. Deep spiking neural networks are difficult to be applied to reinforcement learning tasks. 
On the one hand, this is due to the complexity of the reinforcement learning task itself; on the other hand, it is challenging to train spiking neural networks and to transmit spiking signal characteristics in deep layers. We find that the spiking deep Q network quickly dissipates the spiking signal in the convolutional layers. Inspired by how local environmental potentials influence brain neurons, we propose the potential-based layer normalization (pbLN) method. The postsynaptic potential $x_t$ of the convolution layers is normalized as \n\\begin{equation}\n\t\\hat{x_t} = \\frac{x_t-\\bar{x}_t}{\\sqrt{\\sigma_{x_t}+\\epsilon}}\n\t\\label{Eq:norm}\n\\end{equation}\t\n\n\\begin{equation}\n\t\\bar{x}_t = \\frac{1}{H} \\sum_{i=1}^Hx_{t, i}\n\\end{equation}\t\n\n\\begin{equation}\n\t\\sigma_{x_t} = \\frac{1}{H}\\sum_{i=1}^H(x_{t, i}-\\bar{x}_t)^2\n\\end{equation}\n\n\nWe construct the PL-SDQN model as shown in Fig.~\\ref{Fig:sdqn}. Atari game images are processed by the spiking convolutional network with the pbLN method and then input into a fully connected LIF neural network. The spiking output of PL-SDQN is weighted and summed to produce continuous state-action values. \n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=8cm]{fig\/SDQN_struct.pdf}\n\t\\caption{The framework of PL-SDQN. Refined from~\\cite{sun2022solving}.}\n\t\\label{Fig:sdqn}\n\\end{figure}\n\n\n\nWe compared our model with the original ANN-based DQN model, and the results are shown in Fig.~\\ref{Fig:game_res}. They show that our model achieves better performance than the vanilla DQN model.\n\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=8cm]{fig\/game_res_new_resize.pdf}\n\t\\caption{PL-SDQN performance on Atari games, adopted from~\\cite{sun2022solving}.}\n\t\\label{Fig:game_res}\n\\end{figure}\n\n\n\\subsection{Motor Control}\n\nNeuromorphic models for robot control can be more robust and energy efficient than conventional methods. Spiking neural networks have been used in robot control studies such as navigation~\\cite{wang2014mobile} and robot arm control~\\cite{Tieck2018}. Inspired by the brain motor circuit, we construct a multi-brain-area coordinated SNN robot motor control model to extend BrainCog to control various robots and to model embodied intelligence.\n\nWe construct the brain-inspired motor control model with the LIF neurons provided by BrainCog. The whole network architecture is shown in Fig.~\\ref{Fig:motor}. The high-level motion information is produced by the SMA and PMC modules. As discussed above, the function of the SMA is to process internal movement stimuli and to be responsible for the planning and abstraction of advanced actions. The SMA model contains LIF neurons and receives input signals. One part of the output spikes of the SMA module stimulates the PMC module, and the other part is received by the BG module. The output of the BG module is used as a supplementary signal for action planning and serves as the input to the PMC together with the SMA signal.\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=8cm]{fig\/motor.pdf}\n\t\\caption{A spiking neural network for motor control based on BrainCog.}\n\t\\label{Fig:motor}\n\\end{figure}\n\n\nIn order to expand the dimension of the neuronal direction representation, we use neuron population coding to output the high-level action abstraction. 
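\n\nAs a simple illustration of this kind of direction coding (not the exact scheme used in our motor control model), the sketch below decodes a movement direction from a population of cosine-tuned neurons with evenly spaced preferred directions.\n\n\\begin{verbatim}\nimport numpy as np\n\ndef population_decode(rates, preferred_dirs):\n    """Population vector: sum the preferred directions weighted by firing rates."""\n    vec = (rates[:, None] * preferred_dirs).sum(axis=0)\n    return np.arctan2(vec[1], vec[0])  # decoded movement direction in radians\n\nn = 32\nangles = np.linspace(0, 2 * np.pi, n, endpoint=False)\npreferred_dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)\n\ntarget = np.pi / 3  # intended movement direction\n# Cosine tuning: a neuron fires most when the target matches its preferred direction.\nrates = np.clip(np.cos(angles - target), 0.0, None)\nprint(population_decode(rates, preferred_dirs))  # approximately pi/3\n\\end{verbatim}\n\n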
Population-coded spiking neural network has been used for energy-efficient continuous control~\\cite{tang2021deep}, showing population coding can increase the ability of spiking neurons to represent precise continuous values. In our work, we are inspired by neural mechanisms of population encoding of motion directions in the brain and use LIF neuron groups to process the output spikes from PMC module. \n\nThe cerebellum plays an important role in motor coordination and fine regulation of movements. We built spiking neural network based cerebellum model to process the high-level motor control population embedding. The outputs of populations are fused to encode motor control information generated by high-level cortex area inputs to a three-layer cerebellum spiking neural network, including GCs, PCs and DCN modules. And the cerebellum takes pathway connections like DenseNet~\\cite{huang2017densely}. The DCN layer generates the final joint control outputs.\n\n\n\\subsection{Knowledge Representation and Reasoning}\nThis subsection shows how the BrainCog platform achieve the ability of knowledge representation and reasoning. Via neuroplasticity and population coding mechanisms, spiking neural networks acquire music and symbolic knowledge. Moreover, on this basis, cognitive tasks such as music generation, sequence production, deductive reasoning and inductive reasoning are realized.\n\n\n\\textbf{1. Music Memory and Stylistic Composition SNN}\n\n\nMusic is part of human nature. Listening to melodies involves sensory perception, personal memory, action, emotion, and even creative behaviors, etc~\\cite{Koelsch2012}. Music memory is a fundamental part of musical behaviors, and humans have strong abilities to store a sequence of notes in the brain. Learning and creating music are also essential processes. A musician engages his memory, emotion, musical knowledge and skills to write a beautiful melody. Actually, neuroscientists have found that many brain areas need to collaborate to complete the cognitive behaviors with music. Inspired by brain mechanisms, this paper focuses on the two key issues of music memory and composition, which are modeled by spiking neural networks based on the BrainCog platform.\n\n\\subsubsection{Musical Memory Spiking Neural Network} \\label{secA}\nA musical melody is composed of a sequence of notes. Pitch and duration are two essential attributes of a note. Scientists have found that the primary auditory cortex provides a tonotopic map to encode the pitches, which means that neurons in this region have their preferences of pitches~\\cite{Kalat2015}. Meanwhile, neural populations in the medial premotor cortex have the preferences of all the time intervals covered in hundreds of milliseconds~\\cite{merchant2013a}. Besides, researchers have emphasized the contribution of the hippocampus in sequence memory~\\cite{Fortin2002}. Inspired by these mechanisms, this work proposes a spiking neural model, which contains collaborated subnetworks to encode, store and retrieve the music melodies~\\cite{LQ2020}.\n\n\\emph{Encoding:} As is shown in Fig.~\\ref{01}, this work defines pitch subnetwork and duration subnetwork to encode pitches and durations of musical notes respectively. These two subnetworks are composed of numbers of minicolumns with different preferences. Synaptic connections with transmission delays exist between neurons from different layers. Besides, a cluster that represents the title of a musical melody is composed of numerous individual neurons. 
This cluster has the feedforward and feedback connections with pitch and duration subnetworks. Since the BrainCog platform supports various neural models, this work takes LIF model to simulate neural dynamics.\n\\begin{figure}[!htbp]\n \\centering\n \\includegraphics[scale=0.4]{.\/fig\/music1.png}\n \\caption{The architecture of the music memory model, refined based on~\\cite{LQ2020}.}\n \\label{01}\n\\end{figure}\n \n\\emph{Storing:} Based on the encoding process, as the notes input sequently, the neurons in pitch and duration subnetworks with different preferences respond to these sequential notes and fire orderly. Meanwhile, connections between these neurons are computed and updated by the STDP learning rule. It is important to indicate that the neurons and synapses are grown dynamically. Besides, synaptic connections between the title cluster and other two subnetworks are generated and updated by the STDP learning rule simultaneously. The details of note sequence memorizing can be found in our previous work~\\cite{LQ2020}.\n\n\\emph{Retrieving:} Given the title of a musical work, the ordered notes can be recalled accurately. Since the weights of connections are updated in storing process, neural activities in the title cluster lead to the excitations of neurons in pitch and duration subnetworks. Then, the notes are retrieved in order. We use a public corpus that contains 331 classical piano works~\\cite{Krueger2018} recorded by MIDI standard format to evaluate the model. The experiments have shown that our model can memorize and retrieve the melodies with an accuracy of 99\\%. The details of the experiments have been discussed in our previous work~\\cite{LQ2020}.\n\n\n\\subsubsection{Stylistic Composition Spiking Neural Network}\n\n\nHow to learn and make music are quite complex processes for humans. Scientists have found that the memory system and knowledge experience participate in human creative behaviors~\\cite{Dietrich2004}. Many brain areas like the prefrontal cortex are engaged in human creativity~\\cite{Jung2013}. However, the details of brain mechanisms are still unclear. Inspired by the current neuroscientific findings, the BrainCog introduces a spiking neural network for learning musical knowledge and creating melodies with different styles~\\cite{LQ2021}. \n\n\\begin{figure}[!htbp]\n \\centering\n \\includegraphics[scale=0.28]{.\/fig\/2.pdf}\n \\caption{Stylistic composition model inspired by brain mechanisms. Refined based on~\\cite{LQ2021}.}\n \\label{02}\n\\end{figure}\n\n\\emph{Musical Learning:} This work proposes a spiking neural model which is composed of a knowledge network and a sequence memory network. As is shown in Fig.~\\ref{02}, the knowledge network is designed as a hierarchical structure for encoding and learning musical knowledge. These layers store the genre (such as Baroque, Classical, and Romantic), the names of famous composers and the titles of musical pieces. Neurons in the upper layers project their synapses to the lower layers. The sequence memory network stores the ordered notes which have been discussed in section~\\ref{secA}. During the learning process, synaptic connections are also projected from the knowledge network to the sequence memory network. This work also takes LIF model which is supported by the BrainCog platform to simulate neural dynamics. Furthermore, all the connections are generated and updated dynamically by the STDP learning rule. 
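\n\nA minimal sketch of this kind of sequence storage and recall is given below: successive notes potentiate forward connections (pre-before-post, in the spirit of STDP), and the melody is replayed by following the strongest learned connection from each note. The melody, parameters, and readout are made up for illustration and are far simpler than the model described above.\n\n\\begin{verbatim}\nimport numpy as np\n\nnotes = ["C4", "E4", "G4", "C5"]  # an illustrative four-note melody\nidx = {note: i for i, note in enumerate(notes)}\nw = np.zeros((len(notes), len(notes)))\n\n# Storing: every ordered pair of successive notes strengthens a forward connection.\nfor pre, post in zip(notes[:-1], notes[1:]):\n    w[idx[pre], idx[post]] += 1.0\n\ndef recall(first_note, length):\n    """Retrieving: follow the strongest learned connection from note to note."""\n    melody = [first_note]\n    for _ in range(length - 1):\n        melody.append(notes[int(np.argmax(w[idx[melody[-1]]]))])\n    return melody\n\nprint(recall("C4", 4))  # ['C4', 'E4', 'G4', 'C5']\n\\end{verbatim}\n\n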
\n\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[scale=0.6]{.\/fig\/3.pdf}\n \\caption{A sample of a generated melody with Bach's characteristic.}\n \\label{03}\n\\end{figure}\n\n\n\\emph{Musical Composition:} Based on the learning process, genre-based and composer-based melody compositions are discussed in this paper. Given the beginning notes and the length of the melody to be generated, the genre-based composition can produce a single-part melody with a specific genre style. This task is achieved by the neural circuits of genre cluster and sequential memory system. Similarly, the composer-based composition can produce melodies with composers' characters. The composer cluster and sequential memory system circuits contribute to this process~\\cite{LQ2021}. We also use a classical piano dataset including 331 musical works recorded by MIDI format~\\cite{Krueger2018} to train the model. Fig.~\\ref{03} shows a sample of the generated melody with Bach's style. The details of stylistic composition can be referred to our previous work~\\cite{LQ2021}.\n\n\n\nA total of 41 human listeners are invited to evaluate the quality of the generated melodies, and they are divided into two groups, one of which has a musical background. Experiments have shown that the pieces produced by the model have strong characteristics of different styles and some of them sound nice.\n\n\\textbf{2. Brain-Inspired Sequence Production SNN}\n\nSequence production is an essential function for AI applications. Components in BrainCog enable the community to build SNN models to handle this task. In this paper, we introduce the brain-inspired symbol sequences production spiking neural network (SPSNN) model that has been incorporated in BrainCog~\\cite{fang2021spsnn}. SPSNN incorporates multiple neuroscience mechanisms including Population Coding~\\cite{xie2022geometry}, STDP~\\cite{dan2004spike}, Reward-Modulated STDP~\\cite{fremaux2016neuromodulated}, and Chunking Mechanism~\\cite{pammi2004chunking}, mostly covered and provided by BrainCog. After reinforcement learning, the network can complete the memory of different sequences and production sequences according to different rules.\n\nFor Population Coding, this model utilizes populations of neurons to represent different symbols. The whole neural loop of SPSNN is divided into Working Memory Circuit, Reinforcement Learning Circuit, and Motor Neurons~\\cite{fang2021spsnn}, shown in Fig.~\\ref{SPSNN}. The Working Memory Circuit is mainly responsible for completing the memory of the sequence. The Reinforcement Learning Circuit is responsible for acquiring different rules during the reinforcement learning process. The Motor Neurons can be regarded as the network's output. \n\nIn the working process of the model, the Working Memory Circuit and the Reinforcement Learning Circuit cooperate to complete the memory and production of different sequences~\\cite{fang2021spsnn}. It is worth mentioning that with the increase of background noise, the recall accuracy of symbols at different positions in a sequence gradually decreases, and the overall change trend follows the \"U-shaped accuracy\", which is consistent with experiments in psychology and neuroscience~\\cite{jiang2018production}. The results are highly consistent due to the superposition of primacy and recency effects. 
Our model provides a possible explanation for both effects from a computational perspective.\n\\begin{figure}[!htbp]\n\\centering\n\\includegraphics [width=8.9cm]{.\/fig\/spsnnloop}\n\\caption{The architecture of SPSNN, adopted from~\\cite{fang2021spsnn}.}\n\\label{SPSNN}\n\\end{figure}\n\n\n\\textbf{3. Commonsense Knowledge Representation Graph SNN}\n\n\nCommonsense knowledge representation and reasoning are important cornerstones on the way to realize human-level general AI~\\cite{minsky2007emotion}. In this module, we build Commonsense Knowledge Representation SNN(CKR-SNN) to explore whether SNN can complete these cognitive function.\n\n\n\nThe hippocampus plays a critical role in the formation of new knowledge memory~\\cite{schlichting2017hippocampus}. Inspired by the population coding mechanism found in hippocampus~\\cite{ramirez2013creating}, this module encodes the entities and relations of commonsense knowledge graph into different populations of neurons. Via spiking timing-dependent plasticity (STDP) learning principle, the synaptic connections between neuron populations are formed after guiding the sequential firings of corresponding neuron populations~\\cite{KRRfang2022}.\n\nAs Fig.~\\ref{GSNN} shows, neuron populations together constructed the giant graph spiking neural networks, which contain the commonsense knowledge. In this module, Commonsense Knowledge Representation SNN(CKR-SNN) represents a subset of Commonsense Knowledge Graph ConceptNet~\\cite{Conceptnet55}. After training, CKR-SNN can complete conceptual knowledge generation and other cognitive tasks~\\cite{KRRfang2022}. \n\n\n\n\\begin{figure}\n\\centering\n\\includegraphics [width=8.9cm]{.\/fig\/GSNN}\n\\caption{Graph Spiking Neural Networks for Commonsense Representation, adopted from~\\cite{KRRfang2022}.}\n\\label{GSNN}\n\\end{figure}\n\n\n\n\n\n\n\n\n\\textbf{4. Causal Reasoning SNN}\n\nIn BrainCog, we constructed causal reasoning SNN, as an instance to verify the feasibility of spiking neural networks to realize deductive and inductive reasoning. Specifically, Causal Reasoning Spiking Neural Network (CRSNN) module contains a brain-inspired causal reasoning spiking neural network model~\\cite{fang2021crsnn}.\n\n\nThis model explores how to encode a static causal graph into a spiking neural network and implement subsequent reasoning based on a spiking neural network. Inspired by the causal reasoning process of the human brain~\\cite{pearl2018book}, we try to explore how causal reasoning can be implemented based on spiking neural networks. The 3D model of CRSNN is shown in Fig.~\\ref{CRSNN}.\n\nInspired by neuroscience, the CRSNN module adopts the population coding mechanism and uses neuron populations to represent nodes and relationships in the causal graph. Each node indicates different events in the causal graph, as shown in Fig.~\\ref{CRSNN}. By giving current stimulation to different neuron populations in the spiking neural networks and combining the STDP learning rule~\\cite{dan2004spike}, CRSNN can encode the topology between different nodes in a causal graph into a spiking neural network. Furthermore, according to this network, CRSNN completes the subsequent deductive reasoning tasks.\n\nThen, by introducing an external evaluation function, we can grasp the specific reasoning path in the working process of the network according to the firing patterns of the model, which gives the CRSNN more interpretability compared to traditional ANN models~\\cite{fang2021crsnn}. 
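\n\nAs a rough illustration of this idea, the following sketch (plain Python\/NumPy under simplifying assumptions; it is not the CRSNN implementation in BrainCog, and the example graph, names, and parameters are hypothetical) encodes a small causal graph as excitatory projections between node populations and reads out a deductive chain by stimulating a cause population and following the propagated activity.\n\\begin{verbatim}\nimport numpy as np\n\n# Illustrative sketch only -- not the CRSNN\/BrainCog implementation.\nnodes = ['rain', 'wet_ground', 'slippery']\nedges = [('rain', 'wet_ground'), ('wet_ground', 'slippery')]\nn = len(nodes)\n\n# One neuron population per node (collapsed to a single rate unit here);\n# excitatory projections along causal edges, e.g. formed by STDP.\nW = np.zeros((n, n))\nfor pre, post in edges:\n    W[nodes.index(pre), nodes.index(post)] = 1.0\n\ndef deduce(start, steps=2):\n    '''Stimulate one population and follow the strongest projections.'''\n    rate = np.zeros(n)\n    rate[nodes.index(start)] = 1.0        # external current to 'start'\n    chain = [start]\n    for _ in range(steps):\n        rate = W.T @ rate                 # activity propagates along edges\n        if rate.max() <= 0:\n            break\n        chain.append(nodes[int(np.argmax(rate))])\n    return chain\n\nprint(deduce('rain'))   # ['rain', 'wet_ground', 'slippery']\n\\end{verbatim}\nIn CRSNN itself, these projections are formed by STDP from guided sequential firing of the corresponding populations, and the reasoning path is read out from the spiking patterns of the network rather than from an explicit weight matrix~\\cite{fang2021crsnn}.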
\n\n\n\n\\begin{figure}\n\\centering\n\\includegraphics [width=8.8cm]{.\/fig\/cg3d}\n\\caption{CRSNN 3D model, adapted from~\\cite{fang2021crsnn}.}\n\\label{CRSNN}\n\\end{figure}\n\n\n\n\n\n\\subsection{Social Cognition}\n\nThe nature and neural correlates of social cognition is an advanced topic in cognitive neuroscience. In the field of artificial intelligence and robotics, there are few in-depth studies that take the neural correlation and brain mechanisms of biological social cognition seriously. Although the scientific understanding of biological social cognition is still in a preliminary stage~\\cite{zeng2018toward}, we integrate the biological findings of social cognition into a model to construct a brain-inspired model for social cognition to extend the functions of BrainCog.\n\nUnderstanding ourselves and other people is a prerequisite for social cognition. \n\nAn individual's perception of the world is realized through his own body, and the importance of knowing oneself is the perception of self-body. Neuroscientific researches show that the inferior parietal lobule (IPL) is activated when the subjects see self-generated actions~\\cite{macuga2011selective} and their own faces~\\cite{sugiura2015neural}. Similar to the IPL, the Insula is activated in bodily ownership and self-recognition tasks~\\cite{craig2009you}. \n\nUnderstanding the mental states of others plays an important role for understanding other people. Theory of mind is an ability to distinguish between self and others and to infer others' mental states (such as desires, goals, beliefs, etc.) in the social context~\\cite{shamay-tsoory_dissociation_2007,sebastian_neural_2012,dennis2013cognitive}. This ability can help us reasonably infer other people's policies and goals. Inspired by this, we believe that applying theory of mind to the agent's decision-making process will improve the agent's inference of other agents, so as to take more reasonable actions. Neuroscientific researches~\\cite{abu-akel_neuroanatomical_2011,hartwright_multiple_2012,hartwright_special_2015, koster-hale_theory_2013} show that the brain areas related to theory of mind are mainly TPJ, part of PFC, ACC and IFG. The IPL contained in the TPJ is mainly used to represent self-relevant information, while the pSTS is used to represent information related to others. The insula, representing the abstract self, can be stimulated with self-related information~\\cite{zeng2018toward}. When theory of mind is going on, the IFG will suppress self-relevant information. Therefore, the TPJ will input other-relevant information into the PFC. The ACC evaluates the value of others' states, so as to help the PFC to infer others. The process of inferring others' goals or behaviors can be understood as simulating other people's decision-making~\\cite{suzuki_learning_2012}. 
Therefore, this process will be regulated by dopamine from the substantia nigra pars compacta\/ventral tegmental area (SNc\/VTA).\n\nWith the neuron model and STDP function provided by the BrainCog framework, a brain-inspired social cognition model is constructed, as shown in Fig.~\\ref{BISC}.\n\n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[width=8cm]{fig\/SC.png}\n\\end{center}\n\\caption{Brain-inspired social cognition model.}\n\\label{BISC}\n\\end{figure}\n\nThe brain-inspired social cognition model contains two pathways: the bodily self-perception pathway and the theory of mind pathway.\n\nThe bodily self-perception pathway (shown in Fig.~\\ref{Self}) consists of the inferior parietal lobule spiking neural network (IPL-SNN) and the Insula spiking neural network (Insula-SNN). The IPL-SNN realizes motor-visual associative learning. The Insula-SNN realizes the abstract representation of oneself: when the detected visual results of a movement match the expected results of the robot's own movement, the Insula is activated, and the robot considers that the moving part in the field of vision belongs to itself.\n\n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[width=8cm]{fig\/Self.png}\n\\end{center}\n\\caption{The architecture and pathway of bodily self-perception in the brain-inspired social cognition model.}\n\\label{Self}\n\\end{figure}\n\nThe architecture of the IPL-SNN and the process of motor-visual associative learning are shown in Fig.~\\ref{IPL}. The vPMC generates the robot's own motion angle information, and the STS outputs the motion angle information detected by vision. According to the STDP mechanism and the spike time difference of neurons in IPLM and IPLV, the motor-visual association is established.\n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[width=8cm]{fig\/IPL.png}\n\\end{center}\n\\caption{Motor-visual associative learning in IPL, adapted from~\\cite{zeng2018toward}.}\n\\label{IPL}\n\\end{figure}\n\nThe architecture of the Insula-SNN is shown in Fig.~\\ref{Insula}. The Insula receives angle information from IPLV and STS. After the motor-visual associative learning in IPL, IPLV outputs the visual feedback angle information predicted from the robot's own motion, and STS outputs the motion angle information detected by vision. If the two are consistent, the Insula is activated.\n\n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[width=8cm]{fig\/Insula.png}\n\\end{center}\n\\caption{The architecture of Insula-SNN.}\n\\label{Insula}\n\\end{figure}\n\nThe theory of mind pathway~\\cite{zhao2022brain} is mainly composed of four modules: the perspective taking module, the policy inference module, the action prediction module, and the state evaluation module (shown in Fig.~\\ref{ToM}).\n\n\\begin{figure}[!htbp]\n\\centering\n\\includegraphics[width=0.48\\textwidth]{.\/fig\/ToM1.jpg}\n\\caption{The architecture of theory of mind in the brain-inspired social cognition model, refined based on~\\cite{zhao2022brain}.}\n\\label{ToM}\n\\end{figure}\n\nThe perspective taking module (also called self-perspective inhibition~\\cite{zeng2018toward}) simulates the function of suppressing self-relevant information in the process of distinguishing self from others. The information related to the self can stimulate a representation of the abstract self. When we infer others, the information related to the self can be suppressed. We assume that the agent knows the environment. 
When the agent with theory of mind (ToM) infers the observation of others, it only needs to place its own observation at the position of others. A matrix is used to represent the observed environment, where 1 indicates that the area can be observed from the location and 0 indicates that it cannot. Another matrix is used to represent the positions of objects: a position occupied by an object is represented by 1, otherwise by 0. By taking the intersection of the two matrices, the estimation of others' states is obtained. Since the IFG helps the brain suppress self-relevant information in the process of ToM, the agent with ToM inhibits its own representation of states and further infers the behavior of others from this estimation of others' states. In summary, the input of this module is the observation vector and the matrix of the environment. The output is the observation vector from others' perspective.\n\nThe dorsolateral prefrontal cortex (DLPFC) has the function of storing working memory and predicting others' behaviors. The action prediction module is used to simulate the function of the DLPFC. Its input is the estimate of others' states output by the perspective taking module. The module is a single-layer spiking neural network with lateral inhibition in the output layer. The network is trained by R-STDP, and the source of reward is the difference between the predicted value and the real value: when the predicted value is consistent with the real value, the reward is positive; when it is not, the reward is negative.\n\nThe state evaluation module, composed of a single-layer spiking neural network, simulates the function of the ACC brain area. The input of the module is the predicted state, and the output is safe or unsafe.\n\nFinally, we conducted two experiments to test the brain-inspired social cognition model.\n\n\\textbf{1. Multi-Robots Mirror Self-Recognition Test }\n\nThe mirror test is the most representative test of social cognition. Only a few animals have passed the test, including chimpanzees~\\cite{RN826}, orangutans~\\cite{RN827}, bonobos~\\cite{RN828}, gorillas~\\cite{RN829,RN830}, Asiatic elephants~\\cite{RN831}, dolphins~\\cite{RN832}, orcas~\\cite{RN833}, and macaque monkeys~\\cite{RN835}. Based on the mirror test, we proposed the Multi-Robots Mirror Self-Recognition Test~\\cite{zeng2018toward}, in which three robots with identical appearance move their arms randomly in front of the mirror at the same time, and each robot needs to determine which mirror image belongs to it. The experiment includes a training stage and a test stage.\n\nThe training stage is shown in Fig.~\\ref{train}. Three blue robots with identical appearance move randomly in front of the mirror at the same time. Each robot establishes the motor-visual association according to the motion angle of its own arm and the angle detected by vision.\n\n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[width=8cm]{fig\/mirrortest_train.jpg}\n\\end{center}\n\\caption{Training stage in multi-robots mirror self-recognition test, adapted from~\\cite{zeng2018toward}.}\n\\label{train}\n\\end{figure}\n\nThe test stage is shown in Fig.~\\ref{test}. In the test stage, the robot can predict the visual feedback generated by its arm movement according to the training results; a minimal sketch of the subsequent matching step is given below.
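\n\nThe following sketch (plain Python\/NumPy under simplifying assumptions; it is not the BrainCog implementation, and the trajectories, noise level, and names are hypothetical) illustrates this matching step: the robot scores each detected mirror trajectory against the visual feedback predicted from its own motor commands and claims the best match as itself.\n\\begin{verbatim}\nimport numpy as np\n\n# Illustrative sketch only -- not the BrainCog API.\nrng = np.random.default_rng(0)\npredicted = rng.uniform(0, 90, size=20)   # predicted joint angles over time\nothers = [rng.uniform(0, 90, size=20) for _ in range(2)]\ndetected = [predicted + rng.normal(0, 2, 20)] + others  # candidate 0 is 'self'\n\ndef similarity(a, b):\n    return -np.mean(np.abs(a - b))        # higher means more similar\n\nscores = [similarity(predicted, traj) for traj in detected]\nself_index = int(np.argmax(scores))\nprint(self_index)   # 0: the trajectory that matches the prediction\n\\end{verbatim}\n\n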
By comparing the similarity between the predicted visual feedback and the detected visual results, the robot can identify which mirror image belongs to it. \n\n\n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[width=8cm]{fig\/mirrortest_test.jpg}\n\\end{center}\n\\caption{Test stage in multi-robots mirror self-recognition test, adapted from~\\cite{zeng2018toward}.}\n\\label{test}\n\\end{figure}\n\nIn the bodily self-perception pathway, the input is the angle of the robot's random motion and the angle detected by the robot's vision. After training and testing, the output is an image, which is the result of visual motion detection and the result of self motion prediction. The motion track corresponding to the red line in the visual motion detection result is generated by itself. The result is shown in Fig.~\\ref{result}.\n\n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[width=8cm]{fig\/mirrortest_result.png}\n\\end{center}\n\\caption{The result of IPL-SNN.}\n\\label{result}\n\\end{figure}\n\n\\textbf{2. AI Safety Risks Experiment }\n\nAI safety risks experiment is shown in Fig.~\\ref{fig_sim}. After observing the behavior of the other two agents, the green agent can infer the behaviors of others by utilizing its ToM ability when safety risks may arise due to environmental changes. The experiment was conducted in the environments with some simulated types of safety risks (e.g., the intersection will block the view of agents and may cause agents to collide in the crossing).\n\nThe experiment shows that the agent can infer others when they have different perspectives. In the first two environments, the agent observes the movement of others, and in the third environment, the agent predicts others' actions. In the experiment, we verify the effectiveness of the model by taking the rescue behavior as the standard when other agents might be in danger. The experimental results show that the agent with the ToM can predict the danger of the other agent in a slightly changed environment after watching other agents move in the previous environments.\n\n\\begin{figure}[!htbp]\n\\centering\n\\subfloat[Green agent observes others' behaviors (example 1)]{\\includegraphics[height=0.65in]{.\/fig\/Figure6A.jpg}%\n\\label{1}}\n\\hfil\n\\subfloat[Green agent observes others' behaviors (example 2)]{\\includegraphics[height=0.65in]{.\/fig\/Figure6B.jpg}%\n\t\\label{2}}\n\\hfil\n\\subfloat[Test example 1 (with ToM)]{\\includegraphics[height=0.65in]{.\/fig\/Figure7A.jpg}%\n\t\\label{3}}\n\\hfil\n\\subfloat[Test example 2 (without ToM)]{\\includegraphics[height=0.65in]{.\/fig\/Figure7B.jpg}%\n\t\\label{4}}\n\\caption{Comparison diagram of experimental results. (a) Example 1. The green agent observes others' behaviors. (b) Example 2. The green agent observes others' behaviors. (c) The green agent with ToM can help other agents avoid risks. (d) The green agent without ToM is unable to help other agents avoid risks. Similar results can be found in~\\cite{zhao2022brain}.}\n\\label{fig_sim}\n\\end{figure}\n\n\\section{Brain Simulation}\nBrain simulation includes two parts: brain cognitive function simulation and multi-scale brain structure simulation. We incorporate as much published anatomical data as possible to simulate cognitive functions such as decision-making and working memory. Anatomical and imaging multi-scale connectivity data is used to make whole-brain simulations from mouse, macaque to human more biologically plausible.\n\n\\subsection{Brain Cognitive Function Simulation}\n\n\\textbf{1. 
\\emph{Drosophila}-inspired Decision-Making SNN}\n\n\\emph{Drosophila} decision-making consists of value-based nonlinear decision and perception-based linear decision, where the nonlinear decision could help to amplify the subtle distinction between conflicting cues and make winner-takes-all choices~\\cite{tang2001choice}. In this paper, the BrainCog framework is used to build \\emph{Drosophila} nonlinear and linear decision-making pathways, as shown in Fig.~\\ref{pi}a-b. The entire model consists of a training phase and a testing phase, the same as in~\\cite{zhao2020neural}. In the training phase, a two-layer SNN with LIF neurons is trained by reward-modulated STDP, which combines local STDP synaptic plasticity with global dopamine regulation. The training phase learns the safe pattern (upright-green T) and the punished pattern (inverted-blue T)~\\cite{zhao2020neural}. Therefore, the green color and the upright T shape are associated with safety, while the blue color and the inverted T shape are associated with danger. \n\n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[width=8cm]{.\/fig\/3.jpg}\n\\end{center}\n\\caption{(a) Linear Pathway. (b) Nonlinear Pathway. (c) Experiments for training and choice phases. (d) Experimental results of linear and nonlinear networks under the dilemma. The X-axis refers to the color density, and the Y-axis represents the PI values. Refined based on~\\cite{zhao2020neural}.}\n\\label{pi}\n\\end{figure}\n\nThe two cues (color and shape) are recombined during the testing phase, requiring the linear and nonlinear pathways to make a choice between an inverted-green T and an upright-blue T, as shown in Fig.~\\ref{pi}c. The linear decision directly uses the knowledge acquired during the training phase to make decisions. The nonlinear network models the recurrent loop of the DA-GABA-MB circuit~\\cite{tang2001choice,zhang2007dopamine,zhou2019suppression}: KC activates the anterior paired lateral (APL) neurons, which in turn release the GABA transmitter to inhibit the activity of KC. KC also provides the mushroom body output neuron (MBON) with excitatory input in order to generate behavioral choices. When faced with conflicting cues, the level of DA increases rapidly and produces mutual inhibition with APL, thereby producing a disinhibitory effect on KC. The excitatory connection between DA and MBON also helps speed up decision-making.\n\nTo verify the consistency of the \\emph{Drosophila}-inspired decision-making SNN with the conclusions from neuroscience~\\cite{tang2001choice}, we record the behavioral choices of our model under different color intensities over a period of time. First, we run the network for 500 steps to count the time $t_1$ of selecting behavior 1 (avoiding) and the time $t_2$ of selecting behavior 2 (approaching). Then we calculate the preference index (PI) under different color intensities: $PI=\\frac{\\left| t_1-t_2 \\right|}{\\left| t_1+t_2 \\right|}$. From Fig.~\\ref{pi}d, we find that the nonlinear circuit could achieve a gain-gating effect to enhance the relatively salient cue and suppress the less salient one, thereby displaying the nonlinear sigmoid-shaped curve~\\cite{zhao2020neural}. However, the linear network could not amplify the difference between conflicting cues, thus making an ambiguous choice (linear-shaped curve)~\\cite{zhao2020neural}. 
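 As a worked example with hypothetical counts: if a network spends $t_1=450$ steps avoiding and $t_2=50$ steps approaching, then $PI=\\frac{\\left|450-50\\right|}{\\left|450+50\\right|}=0.8$, indicating a decisive winner-takes-all choice, whereas a nearly ambiguous split such as $t_1=275$ and $t_2=225$ yields only $PI=0.1$. 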
This work proves that drawing on the neural mechanism and structure of the nonlinear and linear decision-making of the \\emph{Drosophila} brain, the brain-inspired computational model implemented by BrainCog could obtain consistent conclusions with the~\\emph{Drosophila} biological experiment~\\cite{tang2001choice}.\n\n\\textbf{2. PFC Working Memory }\n\n\nUnderstanding the detailed differences between the brains of humans and other species on multiple scales will help illuminate what makes us unique as a species~\\cite{zhang2021comparison}. The neocortex is associated with many cognitive functions such as working memory, attention and decision making~\\cite{miller2000prefontral,nieder2003coding,\nbishop2004prefrontal,koechlin2003architecture,\nwood2003human}. Based on the human brain neuron database of the Allen Institute for Brain Science, the key membrane parameters of human neurons are extracted \\footnote{http:\/\/alleninstitute.github.io\/AllenSDK\/cell\\_types.html}. Different types of human brain neuron models and rodent neuron models are established based on adaptive Exponential Integrate-and-Fire (aEIF) model~\\cite{Fourcaud2003How,brette2005adaptive}, which is supported by BrainCog.\n\nWe refined the model of a single PFC proposed by Haas and colleagues \\footnote{http:\/\/senselab.med.yale.edu\/ModelDB\/}~\\cite{hass2016detailed}. Subsequently, a 6-layer PFC column model based on biometric parameters was established~\\cite{shapson2021connectomic}. The pyramidal cells and interneurons were proportionally distributed from the literature~\\cite{beaulieu1993numerical,defelipe2011evolution} and connected with different connection probabilities for different types of neurons based on previous studies~\\cite{gibson1999two,hass2016detailed,\ngao2003dopamine}. Firstly, the accuracy of information maintenance was tested on rodent PFC network model.\n\n\n\\begin{figure}[!htbp]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{.\/fig\/final.png}\n\\caption{Anatomical and network stimulation diagram. (a) The connection of a single PFC column. (b) The distribution proportion of different types of neurons in each column layer. (c) Network persistent activity performance. Refined based on~\\cite{zhangqian2021comparison}.}\n\\end{figure}\n\nKeeping the network structure and other parameters unchanged, only using human neurons to replace rodent neurons can significantly improve the accuracy and integrity of image output. From an evolutionary perspective, the lower membrane capacitance of human neurons facilitates firing. This change improves the efficiency of information transmission, which is consistent with the results of biological experiments~\\cite{eyal2016unique}. This data-driven PFC column model provides an effective simulation-validation platform to study other high-level cognitive functions~\\cite{2020Computational}.\n\n\\subsection{Multi-scale Brain Structure Simulation}\n\n\\textbf{1. Neural Circuit}\n\n\\subsubsection{Microcircuit}\nBrainCog implements a BDM-SNN model inspired by the decision-making neural circuit of PFC-BG-ThA-PMC in the mammalian brain (as shown in Fig.~\\ref{dm})~\\cite{zhao2018brain}. The BDM-SNN models the excitatory and inhibitory reciprocal connections between the basal ganglia nucleus~\\cite{lanciego2012functional}: (1) Excitatory connections: STN-Gpi, STN-Gpe. (2) Inhibitory connections: StrD1-Gpi, StrD2-Gpe, Gpe-Gpi, Gpe-STN. 
Direct pathway (PFC-StrD1), indirect pathway (PFC-StrD2), and hyperdirect pathway (PFC-STN) from PFC to BG are further constructed. The output from BG transmits an inhibitory connection to the thalamus and finally excites PMC~\\cite{parent1995functional}. In addition, excitatory connections are also formed between PFC and thalamus, and lateral inhibition exists in PMC. Such brain-inspired neural microcircuit, consisting of connections among different cortical and subcortical brain areas, and incorporating DA-regulated learning rules, enable human-like decision-making ability.\n\n\n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[width=8cm]{.\/fig\/bdm.jpg}\n\\end{center}\n\\caption{The microcircuit of PFC-BG-ThA-PMC. Refined based on~\\cite{zhao2018brain}.}\n\\label{dm}\n\\end{figure}\n\n\\subsubsection{Cortical Column}\n\nA mammalian thalamocortical column is constructed in BrainCog, which is based on detailed anatomical data~\\cite{Izhikevichlargescale}. This column is made up of a six-layered cortical structure consisting of eight types of excitatory and nine types of inhibitory neurons. Thalamic neurons cover two types of excitatory neurons, inhibitory neurons and GABAergic neurons in the reticular thalamic nucleus (RTN). Neurons are simulated by \\emph{Izhikevich} model, which BrainCog applies to exhibit their specific spiking patterns depending on their different neural morphologies. For example, excitatory neurons (pyramidal and spiny stellate cells) always exhibit RS (Regular Spiking) or Bursting modes, while inhibitory neurons (basket and non-basket interneuron) are of FS (Fasting Spiking) or LTS (Low-threshold Spiking) patterns. Each neuron has a number of dendritic branches to accommodate a large number of synapses. The synaptic distribution and the microcircuits are reconstructed in BrainCog based on the previous works~\\cite{Izhikevichlargescale, Binzegger2004A}. Fig.~\\ref{minicolumn}(a) describes the details of the minicolumn. The column contains 1,000 neurons and more than 4,200,000 synapses. To understand the network further, we stimulate the spiny stellate cells in layer 4 to observe the activities of the whole network, Fig.~\\ref{minicolumn}(b) shows the result of neural activities after the cells in layer 4 receive the external stimulation. \n\n\\begin{figure}[!htbp]\n\\centering\n\\includegraphics[width=0.48\\textwidth]{.\/fig\/minicolumn.png}\n\\caption{The thalamocortical column. (a) shows the structure of the column and (b) describes the running activities of the unfold column when the neurons in Layer 4 receive the external stimulus.}\n\\label{minicolumn}\n\\end{figure}\n\n\\textbf{2. Mouse Brain}\n\n\nThe BrainCog mouse brain simulator is a spiking neural network model covering 213 brain areas of the mouse brain, which are classified according to the Allen Mouse Brain Connectivity Atlas~\\cite{richardson2003subthreshold}~\\footnote{http:\/\/connectivity.brain-map.org}. Each neuron was modeled by conductance-based spiking neuron model and simulated with a resolution of $dt$= 1 ms. 
A total of six types of neurons are included in this model: excitatory neurons (E), interneuron-basket cells (I-BC), interneuron-Martinotti cells (I-MC), thalamocortical relay neurons (TC), thalamic interneurons (TI), and thalamic reticular neurons (TRN), indexed by\n$$\nj \\in \\{\\mathrm{E},\\ \\mathrm{I\\_BC},\\ \\mathrm{I\\_MC},\\ \\mathrm{TC},\\ \\mathrm{TI},\\ \\mathrm{TRN}\\}.\n$$\n\nWe use the aEIF neuron model following previous work~\\cite{jiang2015principles,izhikevich2008large,tchumatchenko2014oscillations}; the parameters used in this study are summarized in Tab.~\\ref{mouse1}.\n\n\\begin{table}[h]\n\t\\caption{Main parameters of the different types of neuron models.}\n\t\\centering\n\t\\resizebox{0.8\\linewidth}{!}{\n\t\t\\begin{tabular}{lllllll}\n\t\t\t\\toprule\n\t\t\tType & $V_{th,j}$ (mV) & $V_{r,j}$ (mV) & $\\tau_{v,j}$ & $\\tau_{w,j}$ & $\\alpha_{j}$ & $\\beta_{j}$ \\\\\n\t\t\t\\midrule\n\t\t\tE & -50 & -110 & 100 & - & 0 & 0 \\\\\n\t\t\t\\midrule\n\t\t\tI-BC & -44 & -110 & 100 & 20 & -2 & 4.5 \\\\\n\t\t\t\\midrule\n\t\t\tI-MC & -45 & -66 & 85 & 20 & -2 & 4.5 \\\\\n\t\t\t\\midrule\n\t\t\tTC & -50 & -60 & 200 & - & 0 & 0 \\\\\n\t\t\t\\midrule\n\t\t\tTI & -50 & -60 & 20 & 20 & -2 & 4.5 \\\\\n\t\t\t\\midrule\n\t\t\tTRN & -45 & -65 & 40 & 20 & -2 & 4.5 \\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t}\n\t\\label{mouse1}\n\\end{table}\n\nThe connections between brain areas are based on the quantitative anatomical dataset of the Allen Mouse Brain Connectivity Atlas. Methods for data generation have been previously described in~\\cite{oh2014mesoscale}. The proportions of the different types of neurons were adopted from previous studies~\\cite{izhikevich2008large, markram2004interneurons}.\n\nThe numbers of each type of neuron in the network are shown in Tab.~\\ref{mouse2}.\n\\begin{table}[h]\n\t\\caption{Number of different types of neurons in the BrainCog mouse brain simulator.}\n \\centering\n \\resizebox{0.8\\linewidth}{!}{\n\t\t\\begin{tabular}{lllllll}\n\t\t\t\\toprule \n Neuron Type & E & I\\_BC & I\\_MC & TC & TI & TRN \\\\ \n \t\\midrule\n Neuron Number & 56100 & 14960 & 7480 & 1300 & 260 & 520 \\\\ \n \t\\bottomrule\n \\end{tabular}\n }\n \\label{mouse2}\n\\end{table}\n\nThe spontaneous discharge of the model without external stimulation is shown in Fig.~\\ref{figmouse1}.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.48\\textwidth]{.\/fig\/rat.jpg}\n\\caption{Running of the BrainCog mouse brain simulator. Each bright point is a neuron spiking at time $t$, and its color indicates the brain area to which the neuron belongs.}\n\\label{figmouse1}\n\\end{figure}\nThis is an open platform: both the parameters of the neuron model and the numbers of the different types of neurons can be set flexibly.\n\n\\textbf{3. Macaque Brain}\n\nThe BrainCog macaque brain simulator is a large-scale spiking neural network model covering 383 brain areas~\\cite{Dharmendra2010}. We used the multi-scale connectome transformation method~\\cite{a2017Zhang} on the EGFP (enhanced green fluorescent protein) results~\\cite{Bakker2012, Chaudhuri2015ALC, Christine2010} to obtain the approximate number of cells per region and the approximate number of synaptic connections between two connected regions~\\cite{Liu2016}. The final macaque model includes 1.21 billion spiking neurons and 1.3 trillion synapses, which is 1\/5 of a real macaque brain. Specifically, the details of the brain micro-circuit are also considered in the simulation. 
The types of neurons in the micro-circuit include excitatory neurons (90\\% of the neurons in the simulation) and inhibitory neurons (10\\% of the neurons in the simulation)~\\cite{Davis1979}. The spiking neurons follow the Hodgkin--Huxley model, which is supported by BrainCog. The running demo of the model is shown in Fig.~\\ref{mac}(a). To use the macaque model in the platform, the number of neurons in each region, the connectome power between regions, and the proportion between excitatory and inhibitory neurons can be set flexibly.\n\n\\begin{figure}[!htbp]\n\\centering\n\\includegraphics[width=0.48\\textwidth]{.\/fig\/mac.jpg}\n\\caption{Running of the macaque brain (a) and the human brain (b) model. Each bright point is a neuron spiking at time $t$, and its color indicates the region to which the neuron belongs.}\n\\label{mac}\n\\end{figure}\n\n\\textbf{4. Human Brain}\n\nThe BrainCog human brain simulator is built with an approach similar to that of the BrainCog macaque brain. By using the EGFP results of the human Brainnetome atlas~\\cite{Fan2016, Klein2012}, the BrainCog human brain simulator consists of 246 brain areas. It should be noted that, since no directed human brain connectome was available at the release of this paper, the BrainCog human brain simulator keeps bidirectional connections among brain areas. The details of the micro-circuit, including the excitatory and inhibitory neurons, are also considered. The final model (Fig.~\\ref{mac}(b)) includes 0.86 billion spiking neurons and 2.5 trillion synapses, which is 1\/100 of a real human brain. To use this model in the platform, the number of neurons per region, the connectome power, and the proportion between excitatory and inhibitory neurons can be set flexibly.\nMoreover, all the simulations were performed on distributed memory clusters~\\cite{Liu2016} at the supercomputing center affiliated with the Institute of Automation, Chinese Academy of Sciences, Beijing, China. The cluster, named the fat cluster, is composed of 16 blade nodes and 2 ``fat'' computing nodes. In order to improve the network communication efficiency and the simulation performance, the most connected areas were simulated on the ``fat'' computing nodes to minimize inter-node communication, while the other areas were randomly distributed across the blade nodes. The simulation shows the ability of the framework to be deployed on supercomputers or large-scale computer clusters.\n\n\\section{BORN: A spiking neural network driven Artificial Intelligence Engine based on BrainCog}\n\n\\begin{figure*}[htb]\n\\centering\n\\includegraphics[width=1\\textwidth]{.\/fig\/vision.jpg}\n\\caption{The functional framework and vision of BORN.}\n\\label{bornvision}\n\\end{figure*}\n\nBrainCog is designed to be an open source platform that enables the community to build spiking neural network based brain-inspired AI models and brain simulators. Based on the essential components developed for BrainCog, one can develop domain-specific or general-purpose AI engines. To further demonstrate how BrainCog can support building a brain-inspired AI engine, here we introduce BORN, an ongoing SNN-driven brain-inspired AI engine ultimately designed for general purpose living AI. 
As shown in Fig.~\\ref{bornvision}, the high-level architecture of BORN is to integrate spatial and temporal plasticities to realize perception and learning, decision-making, motor control, working memory, long-term memory, attention and consciousness, emotion, knowledge representation and reasoning, social cognition and other brain cognitive functions. Spatial plasticity incorporates multi-scale neuroplasticity principles at micro, meso and macro scales. Temporal plasticity considers learning, developmental and evolutionary plasticity at different time scales. \n\n\nAs an essential component for BORN, we propose a developmental plasticity-inspired adaptive pruning (DPAP) model, that enables the complex deep SNNs and DNNs to gradually evolve into a brain-inspired efficient and compact structure, and eventually improves learning speed and accuracy in the extremely compressed networks~\\cite{Han2022Developmental}. The evolutionary process for the brain includes but not limited to searching for the proper connectome among different building blocks of the brain at multiple scales (e.g. neurons, microcircuits, brain areas). BioNAS for BORN uses brain-inspired neural architecture search to construct SNNs with diverse motifs in the brain and experimentally verify that SNNs with rich motif types perform better than plain feedforward SNNs~\\cite{shenBrainInspired2022}.\n\n\nHow the human brain selects and coordinates various learning methods to solve complex tasks is crucial for understanding human intelligence and inspiring future AI. BORN is dedicated to address critical research issues like this. The learning framework of BORN consists of multi-task continual learning, few-shot learning, multi-modal concept learning, online learning, lifelong learning, teaching-learning, and transfer learning, etc. \n\nTo demonstrate the ability and principles of BORN, we provide a relatively complex application on emotion dependent robotic music composition and playing. This application requires a humanoid robot perform music composition and playing depending on visual emotion recognition. The application requires BORN to provide cognitive functions such as visual emotion recognition, sequence learning and generation, knowledge representation and reasoning, and motor control, etc. This application of BORN starts with perception and learning, and ends with motor output. \n\nIt includes three modules implemented by BrainCog: the visual (emotion) recognition module, the emotion-dependent music composition module, and the robot music playing module. As shown in Fig.~\\ref{rmp}, the visual emotion recognition module enables robots to recognize the emotions expressed in images captured by the humanoid robot eyes. The emotion-dependent music composition module can generate music pieces according to various emotional inputs. When a picture is shown to the robot, the Visual Emotion Recognition network can firstly identify the emotions expressed in the picture, such as joy or sadness. The robot then selects or compose the music piece that best matches the emotions in the picture. And finally, with the help of the robot music playing module, the robot controls its arms and fingers in a series of movements, thus playing the music on the piano. 
Some details are introduced as follows:\n\n\\begin{figure*}[!htbp]\n\\centering\n\\includegraphics[width=1\\textwidth]{.\/fig\/rmp.jpg}\n\\caption{The procedure of multi-cognitive function coordinated emotion-dependent music composition and playing by a humanoid robot based on BORN.}\n\\label{rmp}\n\\end{figure*}\n\n\\emph{1) Visual Emotion Recognition: }\nFor emotion recognition, inspired by the ventral visual pathway, we construct a deep convolutional spiking neural network with the LIF neuron model and the surrogate gradient method provided by BrainCog. The structure of the network is set as 32C3-32C3-MP-32C3-32C3-300-7, where 32C3 denotes a convolutional layer with 32 output channels and a kernel size of 3, and MP denotes max pooling. The mean firing rate is used to make the final prediction. We use the Adam optimizer and the mean squared error loss. The initial learning rate is set to 0.001 and decays to 1\/10 of its previous value every 40 epochs, for a total of 100 epochs. We use the Emotion6 dataset~\\cite{peng2015mixed} to train and test our model. The Emotion6 dataset is composed of six emotions (anger, disgust, fear, joy, sadness, and surprise), and each type of emotion consists of 330 samples. On this basis, we extend the original Emotion6 dataset with the excitement emotion, collected online. 80\\% of the images are used as the training set, and the remaining 20\\% are used as the test set. \n\n\\emph{2) Emotion-dependent Music Composition: }\nListening to music can make us emotional; conversely, when people feel happy or sad, they often express their feelings through music. The amygdala plays a key role in human emotion. Inspired by this mechanism, we construct a simple spiking neural network to simulate this important area, representing different types of emotions and learning the relationships with other brain areas related to music. As shown in Fig.~\\ref{rmp}, the amygdala network is composed of several LIF neurons supported by BrainCog, and the connections from this cluster are projected to the musical sequence memory networks. \n\nDuring the learning process, the amygdala, the PFC, and the musical sequence memory networks cooperate with each other and form complex neural circuits. Here, connections are updated by the STDP learning rule. The dataset used here also contains 331 MIDI files of classical piano works~\\cite{Krueger2018}, and it is important to note that part of these musical works are labeled with different emotional categories (such as happy, depressed, passionate, and beautiful). \n\nIn the generation process, given the beginning notes and a specific emotion type, the model generates a series of notes and finally forms a melody with the particular emotion.\n\n\\emph{3) Robot Music-Playing: }\nThe humanoid robot iCub is used to validate the abilities of robotic music composition and playing depending on the result of visual emotion recognition. The iCub robot has a total of 53 degrees of freedom throughout the body. In the piano playing task, we used 6 degrees of freedom of the head, 3 degrees of freedom of the torso, and 16 degrees of freedom for each of the left and right arms (including the left and right hands). In the single-fingered playing mode, we mainly control the index fingers to press the keys; in the multi-fingered playing mode, we mainly control the thumbs, the index fingers, and the middle fingers to press the keys. 
During playing, the robot controls the movement of the hand in sequence according to the generated sequence of different musical notes, and presses the keys with corresponding fingers, thereby completing the performance. For each note to be played, the corresponding playing arm needs to complete the entire process of moving, waiting, pressing the key, holding, and releasing the key according to the beat. During the playing process, we also control the movements of the robot's head and the non-playing hand to match the performance.\n\nWe have constructed a multi-brain area coordinated robot motor control SNN model based on the brain motor control circuit. The SNN model is built with LIF neurons and implements SMA, PMC, basal ganglia and cerebellum functions. The music notes is first processed by SMA, PMC and basal ganglia networks to generate high-level target movement directions, and the output of PMC is encoded by population neurons to target movement directions. The fusion of population codings of movement directions is further processed by the cerebellum model for low level motor control. The cerebellum SNN module consists of Granular Cells (GCs), Purkinje Cells (PCs) and Deep Cerebellar Nuclei (DCN), which implements the three level residual learning in motor control. The DCN network generates the final joint control outputs for the robot arms to perform the playing movement.\n\n\n\n\\section{Conclusion}\nBrainCog aims to provide a community based open source platform for developing spiking neural network based AI models and cognitive brain simulators. It integrates multi-scale biological plausible computational units and plasticity principles. Different from existing platforms, BrainCog incorporates and provides task ready SNN models for AI, and supports brain function and structure simulations at multiple scales. With the basic and functional components provided in the current version of BrainCog, we have shown how a variety of models and applications can already be implemented for both brain-inspired AI and brain simulations. Based on BrainCog, we are also committed to building BORN into a powerful SNN-based AI engine that incorporates multi-scale plasticity principles to realize brain-inspired cognitive functions towards human level. Powered by 9 years development of BrainCog modules, components and applications, and inspired by biological mechanisms and natural evolution, continuous efforts on BORN will enable it to be a general purpose AI engine. We have already started the efforts to extend BrainCog and BORN to support high-level cognition such as theory of mind~\\cite{zhao2022brain}, consciousness~\\cite{zeng2018toward}, and morality~\\cite{zhao2022brain}, and it definitely takes the world to build true and general purpose AI for human and ecology good. Join us on this explorations to create the future for human-AI symbiotic society. \n\\section*{Acknowledgements}\nThis work is supported by the National Key Research and Development Program (Grant No. 2020AAA0104305), the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB32070100).\n\n\n\n\n\n\\bibliographystyle{IEEEtran}\n\n\\section{Introduction}\nThe human brain can self-organize and coordinate different cognitive functions to flexibly adapt to complex and changing environments. A major challenge for Artificial Intelligence and computational neuroscience is integrating multi-scale biological principles to build biologically plausible brain-inspired intelligent models. 
As the third generation of neural networks~\\cite{Maass1997Networks}, Spiking Neural Networks (SNNs) are more biologically realistic at multiple scales, more biologically interpretable, more energy-efficient, and naturally more suitable for modeling various cognitive functions of the brain and creating biologically plausible AI. \n\n\\begin{figure*}[!htbp]\n\t\\centering\n\t\\includegraphics[scale=0.3]{.\/fig\/braincog.jpg}\n\t\\caption{The architecture of Brain-inspired Cognitive Intelligence Engine (BrainCog).}\n\t\\label{braincog}\n\\end{figure*}\n\nExisting neural simulators attempt to simulate elaborate biological neuron models or to build large-scale neural network simulations, neural dynamics models, and deep SNNs. NEURON~\\cite{carnevale2006neuron} focuses on simulating elaborate biological neuron models. NEST~\\cite{gewaltig2007nest} implements large-scale neural network simulations. Brian\/Brian2~\\cite{stimberg2019brian, goodman2009brian} provides an efficient and convenient tool for modeling spiking neural networks. Shallow SNNs implemented by Brian2 can realize unsupervised visual classification~\\cite{diehl2015unsupervised}. Further, BindsNET~\\cite{hazan2018bindsnet} builds SNNs with coordination of various neurons and connections and incorporates multiple biological learning rules for training SNNs. Such learning SNNs can perform machine learning tasks, including simple supervised, unsupervised, and reinforcement learning. However, supporting more complex tasks would be a great challenge for these SNN frameworks, and there is a large gap in performance compared with traditional deep neural networks (DNNs). Deep SNNs trained by surrogate gradients or obtained by converting well-trained DNNs to SNNs have achieved great progress in the fields of speech recognition~\\cite{dominguez2018deep, loiselle2005exploration}, computer vision~\\cite{kim2020spiking, wu2018spatio}, and reinforcement learning~\\cite{tan2021strategy}. Motivated by this, SpikingJelly~\\cite{SpikingJelly} develops a deep learning SNN framework (trained by surrogate gradients or by converting well-trained DNNs to SNNs), which integrates deep convolutional SNNs and various deep reinforcement learning SNNs for multiple benchmark tasks. Platforms such as SpikingJelly are relatively more inspired by the field of deep learning and focus on improving the performance of different tasks. They currently lack in-depth inspiration from brain information processing mechanisms and do not aim at, and hence fall short of, simulating large-scale functional brains.\n\nBrainPy~\\cite{Wang2021} excels at modeling, simulating, and analyzing the dynamics of brain-inspired neural networks from multiple perspectives, including neurons, synapses, and networks. While it focuses on computational neuroscience research, it does not consider the learning and optimization of deep SNNs or the implementation of brain-inspired functions. SPAUN~\\cite{eliasmith2012large}, a large-scale brain function model consisting of 2.5 million simulated neurons and implemented by Nengo~\\cite{bekolay2014nengo}, integrates multiple brain areas and realizes multiple brain cognitive functions, including image recognition, working memory, question answering, reinforcement learning, and fluid reasoning. However, it is not suitable for solving challenging and complex AI tasks that can be handled by deep learning models. 
In summary, there is still a lack of open-source spiking neural network frameworks that could incorporate the ability of simulating brain structures, cognitive functions at large scale, while keep itself effective for creating complex and efficient AI models at the same time.\n\nConsidering the various limitations of existing frameworks mentioned above, in this paper, we present the Brain-inspired Cognitive Intelligence Engine (BrainCog), a spiking neural network based open source platform for both brain-inspired AI and brain simulation at multiple scales. As shown in Fig.~\\ref{braincog}, some basic modules (such as different types of neuron models, learning rules, encoding strategies, etc.) are provided as building blocks to construct different brain areas and neural circuits to implement brain-inspired cognitive functions. BrainCog is an easy-to-use framework that can flexibly replace different components according to various purposes and needs. BrainCog tries to achieve the vision ``the structure and mechanism are inspired by the brain, and the cognitive behaviors are similar to humans'' for Brain-inspired AI~\\cite{2016Retrospect}. BrainCog is developed based on the deep learning framework (currently it is based on PyTorch, while it is easy to migrate to other frameworks such as PaddlePaddle, TensorFlow, etc.). This approach aims at greatly facilitating researchers to quickly familiarize themselves with the platform and implement their own algorithms.\n\n\\subsection{Brain-inspired AI}\nBrainCog is aimed at providing infrastructural support for Brain-inspired AI. Currently, it provides cognitive functions components that can be classified into five categories: Perception and Learning, Decision Making, Motor Control, Knowledge Representation and Reasoning, and Social Cognition. These components collectively form neural circuits corresponding to 28 brain areas in the mammalian brains, as shown in Fig.~\\ref{born}. These brain-inspired AI models have been effectively validated on various supervised and unsupervised learning, deep reinforcement learning, and several complex brain-inspired cognitive tasks. \n\n\n\\begin{figure}[!htbp]\n\t\\centering\n\t\\includegraphics[scale=0.35]{.\/fig\/born.jpg}\n\t\\caption{Multiple cognitive functions integrated in BrainCog, their related brain areas and neural circuits.}\n\t\\label{born}\n\\end{figure}\n\n\\subsubsection{Perception and Learning} \nBrainCog provides a variety of supervised and unsupervised methods for training spiking neural networks, such as the biologically-plausible Spike-Timing-Dependent Plasticity (STDP)~\\cite{bi1998synaptic}, the backpropagation based on surrogate gradients~\\cite{wuSpatioTemporalBackpropagationTraining2018,zhengGoingDeeperDirectlyTrained2021,fangIncorporatingLearnableMembrane2021}, and the conversion-based algorithms~\\cite{li2021free, han2020deep, han2020rmp}. In addition to high performance in common perception and learning process, it also shows strong adaptability in small samples and noisy scenarios. BrainCog also provides a multi-sensory integration framework for human-like concept learning~\\cite{wywFramework}. Inspired by quantum information theory, BrainCog provides a quantum superposition spiking neural network, which encodes complement information to neuronal spike trains with different frequency and phase~\\cite{SUN2021102880}. \n\n\\subsubsection{Decision Making}\nFor decision making, BrainCog provides multi-brain areas coordinated decision-making spiking neural network~\\cite{zhao2018brain}. 
The biologically inspired decision-making model implemented by BrainCog achieves human-like learning ability on the Flappy Bird game and supports UAVs' online decision-making tasks. In addition, BrainCog combines SNNs with deep reinforcement learning and provides the brain-inspired spiking deep Q network~\\cite{sun2022solving}.\n\n\\subsubsection{Motor Control}\nEmbodied cognition is crucial to realizing biologically plausible AI. As part of the embodied cognition modules, and inspired by the motor control mechanism of the brain, BrainCog provides a multi-brain areas coordinated robot motion spiking neural networks model, which includes premotor cortex (PMC), supplementary motor areas (SMA), basal ganglia and cerebellum functions. With proper mapping, the spiking motor network outputs can be used to control various robots.\n\n\\subsubsection{Knowledge Representation and Reasoning} \nBrainCog incorporates multiple neuroplasticity and population coding mechanisms for knowledge representation and reasoning. The brain-inspired music memory and stylistic composition model implements the knowledge representation and memory of note sequences and can generate music according to different styles~\\cite{LQ2020,LQ2021}. Sequence Production Spiking Neural Network (SPSNN) achieves the memory of the symbol sequence and can reconstruct the symbol sequence in the light of different rules~\\cite{fang2021spsnn}. Commonsense Knowledge Representation Graph SNN (CKR-GSNN) realizes the representation of commonsense knowledge through incorporating multi-scale neural plasticity and population coding mechanism into a graph SNN model~\\cite{KRRfang2022}.\nCausal Reasoning Spiking Neural Network (CRSNN) encodes the causal graph into a spiking neural network and realizes deductive reasoning tasks accordingly~\\cite{fang2021crsnn}.\n\n\\subsubsection{Social Cognition} \nBrainCog provides a brain-inspired social cognition model with biological plausibility. This model gives the agent a preliminary ability to perceive and understand itself and others and can enable the robots pass the Multi-Robots Mirror Self-Recognition Test~\\cite{zeng2018toward} and the AI Safety Risks Experiment~\\cite{zhao2022brain}. The former is a classic experiment of self-perception in social cognition, and the latter is a variation and application of the theory of mind experiment in social cognition.\n\\subsection{Brain Simulation}\n\n\\subsubsection{Brain Cognitive Function Simulation}\nTo demonstrate the capability of BrainCog for cognitive function simulation, we provide \\emph{Drosophila} decision-making and prefrontal cortex working memory function simulation~\\cite{zhao2020neural,zhangqian2021comparison}. For \\emph{Drosophila} nonlinear and linear decision-making simulation, BrainCog verifies the winner-takes-all behaviors of the nonlinear dopaminergic neuron\u2011GABAergic neuron\u2011mushroom body (DA\u2011GABA\u2011MB) circuit under a dilemma and obtains consistent conclusions with \\emph{Drosophila} biological experiments~\\cite{zhao2020neural}. 
For the working memory performance of the prefrontal cortex network implemented by BrainCog, we discover that using human neurons to replace rodent neurons without changing the network structure can significantly improve the accuracy and completeness of an image memory task~\\cite{zhangqian2021comparison}, and this implies that the evolution of brains lies not only in their structures but also in single computational units such as neurons.\n\n\\subsubsection{Multi-scale Brain Structure Simulation} \nBrainCog provides simulations of brain structures at different scales, from microcircuits and cortical columns to whole-brain structure simulations. Anatomical and imaging multi-scale connectivity data are used to support whole-brain simulations from the mouse brain and macaque brain to the human brain at different scales.\n\n\\section{Basic Components}\nBrainCog provides essential and fundamental components to model biological and artificial intelligence. It includes various biological neuron models, learning rules, encoding strategies, and models of different brain areas. One can build brain-inspired SNN models by reusing and refining these building blocks. Expanding and refining the components and cognitive functions included in BrainCog is an ongoing effort. We believe this should be a continuous community effort, and we welcome researchers and practitioners to enrich and improve the work together in a complementary way.\n\n\\subsection{Neuron Models}\nBrainCog supports various models for spiking neurons, such as the following:\n\n(1) Integrate-and-Fire spiking neuron (IF)~\\cite{abbott1999lapicque}:\n\n\\begin{equation}\n\\frac{dV}{dt} = I\n\\end{equation}\n\n$I$ denotes the input current from the pre-synaptic neurons. Once the membrane potential reaches the threshold $V_{th}$, the neuron fires a spike~\\cite{abbott1999lapicque}.\n\n(2) Leaky Integrate-and-Fire spiking neuron (LIF)~\\cite{dayan2005theoretical}:\n\\begin{equation}\n \\tau\\frac{dV}{dt} = - V + I\n\\label{equa_lif1}\n\\end{equation}\n$\\tau=RC$ denotes the time constant, where $R$ and $C$ denote the membrane resistance and capacitance, respectively~\\cite{dayan2005theoretical}. \n\n(3) The adaptive Exponential Integrate-and-Fire spiking neuron (aEIF)~\\cite{Fourcaud2003How,brette2005adaptive}:\n\n\\begin{equation}\n\\left\\{\n\\begin{aligned}\nC \\frac{d V}{d t}&=-g_{L}(V-E_{L})+g_{L} \\Delta T \\exp(\\frac{V-V_{th}}{\\Delta T})+I-w \\\\\n\\tau_{w} \\frac{dw}{dt}&=a(V-E_L)-w\n\\end{aligned}\n\\right.\n\\end{equation}\n\nwhere $g_L$ is the leak conductance, $E_L$ is the leak reversal potential, $V_r$ is the reset potential, $\\Delta T$ is the slope factor, $I$ is the background current, and $\\tau_w$ is the adaptation time constant. When the membrane potential is greater than the threshold $V_{th}$, $V = V_{r}$ and $w = w + b$. $a$ is the subthreshold adaptation, and $b$ is the spike-triggered adaptation~\\cite{Fourcaud2003How,brette2005adaptive}. \n\n(4) The Izhikevich spiking neuron~\\cite{izhikevich2003simple}:\n\\begin{equation}\n\\left\\{\n\\begin{aligned}\n\\frac{dV}{dt} &= 0.04V^2 + 5V + 140 - u + I \\\\\n\\frac{du}{dt} &= a(bV-u)\\\\\n\\end{aligned}\n\\right.\n\\end{equation}\nWhen the membrane potential is greater than the threshold:\n\\begin{equation}\n\\left\\{\n\\begin{aligned}\nV &= c \\\\\nu &= u + d\\\\\n\\end{aligned}\n\\right.\n\\end{equation}\n$u$ represents the membrane recovery variable, and $a, b, c, d$ are the dimensionless parameters~\\cite{izhikevich2003simple}. 
\n\n(5) The Hodgkin-Huxley spiking neuron (H-H)~\\cite{hodgkin1952quantitative}:\n\\begin{equation}\nI = C \\frac{dV}{dt} + \\overline{g}_K n^4 (V - V_K) + \\overline{g}_{Na} m^3 h (V - V_{Na}) + \\overline{g}_L (V - V_L)\n\\label{equa_hh}\n\\end{equation}\n\\begin{equation}\n\\left\\{\n\\begin{aligned}\n\\frac{dn}{dt} &= \\alpha_n(V)(1-n) - \\beta_n(V)n \\\\\n\\frac{dm}{dt} &= \\alpha_m(V)(1-m) - \\beta_m(V)m \\\\\n\\frac{dh}{dt} &= \\alpha_h(V)(1-h) - \\beta_h(V)h \\\\\n\\end{aligned}\n\\right.\n\\end{equation}\n\n\n$\\alpha_i$ and $\\beta_i$ are the voltage-dependent rate functions of the gating variable $i$, and $n$, $m$, $h$ are dimensionless probabilities between 0 and 1. $\\overline{g}_i$ is the maximal value of the conductance~\\cite{hodgkin1952quantitative}.\n\n\nThe H-H model provides an elaborate description of biological neurons. In order to apply it more efficiently to AI tasks, BrainCog incorporates a simplified H-H model ($C=0.02\\mu F\/cm^2, V_r=0, V_{th}=60mV$), as illustrated in~\\cite{wang2016spiking}.\n\n\n\n\n\n\\subsection{Learning Rules}\nBrainCog provides various plasticity principles and rules to support biologically plausible learning and inference, such as:\n\n(1) Hebbian learning theory~\\cite{amit1994correlations}:\n\\begin{equation}\n\\Delta w_{ij}^t=x_i^t x_j^t\n\\end{equation}\nwhere $w_{ij}^t$ is the weight of the $i$th synapse of the $j$th neuron at time $t$, $x_i^t$ is the input of the $i$th synapse at time $t$, and $x_j^t$ is the output of the $j$th neuron at time $t$~\\cite{amit1994correlations}.\n\n(2) Spike-Timing-Dependent Plasticity (STDP)~\\cite{bi1998synaptic}:\n\n\\begin{equation}\n\t\\begin{split}\n\t\t&\\Delta w_{j}=\\sum\\limits_{f=1}^{N} \\sum\\limits_{n=1}^{N} W(t^{f}_{i}-t^{n}_{j})\\\\\n\t\t&W(\\Delta t)= \n\t\t\\left\\{\n\t\t\\begin{split}\n\t\tA^{+}e^{\\frac{-\\Delta t}{\\tau_{+}}}\\quad if\\; \\Delta t>0&\\\\ \n\t\t-A^{-}e^{\\frac{\\Delta t}{\\tau_{-}}}\\quad if\\;\\Delta t<0&\\\\ \n\t\t\\end{split}\n\t\t\\right.\n\t\\end{split}\n\t\\label{eq4}\n\\end{equation}\nwhere $\\Delta w_{j}$ is the modification of the synapse $j$, and $W(\\Delta t)$ is the STDP function. $t$ is the spike time. $A^{+}$ and $A^{-}$ denote the modification degrees of STDP, and $\\tau_{+}$ and $\\tau_{-}$ denote the time constants~\\cite{bi1998synaptic}.\n\n(3) Bienenstock-Cooper-Munros theory (BCM)~\\cite{bienenstock1982theory}:\n\\begin{equation}\n\\Delta w=y\\left(y-\\theta_M \\right)x-\\epsilon w\n\\end{equation}\n\nwhere $x$ and $y$ denote the firing rates of pre-synaptic and post-synaptic neurons, respectively, and the threshold $\\theta_M$ is the average of the historical activity of the post-synaptic neuron~\\cite{bienenstock1982theory}.\n\n(4) Short-term Synaptic Plasticity (STP)~\\cite{maass2002synapses}: \n\n\nShort-term plasticity is used to model the synaptic efficacy changes over time.\n\n\\begin{equation}\n{a_k} = {u_k} \\cdot {R_k}\n\\end{equation}\n\\begin{equation}\n{u_{k+1}} = U + {u_{k}}(1 - U)\\exp \\left( { - {{\\rm{\\Delta }}t_{k}}\/{\\tau _{{\\rm{fac}}}}} \\right)\n\\end{equation}\n\\begin{equation}\n{R_{k+1}} = 1 + \\left( {{R_{k}} - {u_{k}}{R_{k}} - 1} \\right)\\exp \\left( { - {{\\rm{\\Delta }}t_{k}}\/{\\tau _{{\\rm{rec}}}}} \\right)\n\\end{equation}\n\n$a_k$ denotes the synaptic weight for the $k$th spike, $U$ denotes the fraction of synaptic resources, and $\\tau _{{\\rm{fac}}}$ and $\\tau _{{\\rm{rec}}}$ denote the time constants for recovery from facilitation and depression, respectively. 
The variable $R_k$ models the fraction of synaptic efficacy available for the $k$th spike, and $u_k R_k$ models the fraction of synaptic efficacy actually used by the $k$th spike~\\cite{maass2002synapses}.\n\n\n(5) Reward-modulated Spike-Timing-Dependent Plasticity (R-STDP)~\\cite{Eugene2007}: \n\n\n\nR-STDP uses the synaptic eligibility trace $e$ to store temporary information of STDP. The eligibility trace accumulates the STDP modification $\\Delta w_{STDP}$ and decays with a time constant $\\tau_e$~\\cite{Eugene2007}. \n\n\\begin{equation}\\label{rstdpe}\n\\Delta e=-\\frac{e}{\\tau _e}+\\Delta w_{STDP}\n\\end{equation}\n\nThen, synaptic weights are updated when a delayed reward $r$ is received, as shown in Eq.~\\ref{rstdpw}~\\cite{Eugene2007}. \n\n\\begin{equation}\\label{rstdpw}\n\\Delta w=r*\\Delta e \n\\end{equation}\n\n\n\n\\subsection{Encoding Strategies}\nBrainCog supports a number of different encoding strategies to help encode the inputs to the spiking neural networks.\n\n\n(1) Rate Coding~\\cite{adrian1926impulses}:\n\nRate coding is mainly based on spike counting to ensure that the number of spikes issued in the time window corresponds to the real value. The Poisson distribution can describe the number of random events occurring per unit of time, which corresponds to the firing rate~\\cite{adrian1926impulses}. With $\\alpha \\sim \\mathbb{U}(0,1)$, the input can be encoded as\n\\begin{equation}\n\ts(t) = \\begin{cases}\n\t 1,\\quad if \\quad x > \\alpha\\\\\n\t 0, \\quad else \n\t\\end{cases}\n\\end{equation}\n\n(2) Phase Coding~\\cite{kim2018deep}:\n\nThe idea of phase coding can be used to encode an analog quantity changing with time. The value of the analog quantity in a period can be represented by a spike time, and the change of the analog quantity in the whole time process can be represented by the spike train obtained by connecting all the periods.\nEach spike has a corresponding phase weighting under phase encoding, and generally, the pixel intensity is encoded as a 0\/1 input similar to binary encoding. $\\gg$ denotes the shift operation to the right, and $K$ is the phase period~\\cite{kim2018deep}. Pixel $x$ is enlarged to $x^{\\prime}=x*(2^{K}-1)$ and shifted $k=K-1-(t \\mod K)$ to the right, where $\\mod$ is the remainder operation. If the lowest bit is one, $s$ will be one at time $t$. $\\&$ denotes the bit-wise AND operation.\n\\begin{equation}\n\ts(t) = \\begin{cases} 1, \\quad if \\quad (x^{\\prime} \\gg k) \\& 1 = 1\\\\\n\t0, \\quad else\n\t\\end{cases}\n\\end{equation}\n\n(3) Temporal Coding~\\cite{thorpe1996speed}:\n\nThe characteristic of neuron spikes is that their form is fixed, and there are only differences in quantity and time. The most common implementation is to express information in the timing of individual spikes: the stronger the stimulus received, the earlier the spike is generated~\\cite{rueckauer2018conversion}. Let the total simulation time be $T$; the input $x$ of the neuron can then be encoded as a spike at time $t^s$:\n\\begin{equation}\n\tt^{s} = T - \\operatorname{round}(Tx)\n\\end{equation}\n\n(4) Population Coding~\\cite{bohte2002error}:\n\nPopulation coding helps to solve the problem of the ambiguity of information carried by a single neuron. 
The ambiguity of the information carried can be understood as follows: when the original information is input into a single neuron, it is hard for the network to distinguish overlapping or similar related information~\\cite{quian2009extracting,grun2010analysis}.\nThe intuitive idea of population coding is to give different neurons different sensitivities to different types of inputs, which is also reflected in biology. For example, rats' whiskers have different sensitivities to different directions~\\cite{quian2009extracting}. Population coding transforms the inputs into spike trains over a time period. A classical population coding method is the neural information coding method based on the Gaussian tuning curve given in Eq.~\\ref{gaus}. This method is more suitable when the amount of data is small, and the information is concentrated. A Gaussian neuron covers a range of analog quantities in the form of a Gaussian function~\\cite{bohte2002error}, \\cite{li2021online}. Suppose that $m$ ($m > 2$) neurons are used to encode a variable $x$ with a value range of $[I_{min}, I_{max}]$. $f(x)$ can be the firing time or voltage.\n\n\\begin{equation}\n f(x) = e^{-\\frac{(x-\\mu)^{2}}{2\\sigma^{2}}}\n\\label{gaus}\n\\end{equation}\n\nThe corresponding mean and variance of the $i$th ($i=1,2,...,m$) neuron, with adjustable parameter $\\beta$, are as follows:\n\n\\begin{equation}\n \\mu = I_{min} + \\frac{2i-3}{2}\\frac{I_{max}-I_{min}}{m-2}\n\\end{equation}\n\n\\begin{equation}\n \\sigma = \\frac{1}{\\beta}\\frac{I_{max}-I_{min}}{m-2}\n\\end{equation}\n\n\n\n\n\\subsection{Brain Areas Models}\n\nBrain-inspired models of several functional brain areas are constructed for BrainCog at different levels of abstraction.\n\n(1) Basal Ganglia (BG): \n\nThe basal ganglia facilitates desired action selection and inhibits competing behaviors (making winner-takes-all decisions)~\\cite{chakravarthy2010basal,redgrave1999basal}. It cooperates with the PFC and the thalamus to realize the decision-making process in the brain~\\cite{parent1995functional}. BrainCog models the basal ganglia brain area, including excitatory and inhibitory connections among the striatum, globus pallidus interna (Gpi), globus pallidus externa (Gpe), and subthalamic nucleus (STN) of the basal ganglia~\\cite{lanciego2012functional}, as shown in the orange areas of Fig.~\\ref{dmsnn}. The BG brain area component adopts the \\emph{LIF} neuron model in BrainCog, as well as the \\emph{STDP} learning rule and \\emph{CustomLinear} to build the internal connections of the BG. Then, the BG brain area component can be used to build brain-inspired decision-making SNNs (see section 3.2.1 for details).\n\n(2) Prefrontal Cortex (PFC):\n\nThe PFC is of significant importance for high-level human cognitive behaviors. In BrainCog, many cognitive tasks based on SNN are inspired by the mechanisms of the PFC~\\cite{bechara1998dissociation}, such as decision-making, working memory~\\cite{rao1999isodirectional,d2000prefrontals,lara2015role}, knowledge representation~\\cite{wood2003human}, theory of mind and music processing~\\cite{frewen2015healing}. Different circuits are involved in completing these cognitive tasks. In BrainCog, the data-driven PFC column model contains 6 layers and 16 types of neurons. The distribution of neurons, membrane parameters and connections of different types of neurons are all derived from existing biological experimental data. The PFC brain area component mainly employs the \\emph{LIF} neuron model to simulate the neural dynamics. 
The \\emph{STDP} and \\emph{R-STDP} learning rules are utilized to compute the weights between different neural circuits. \n\n(3) Primary Auditory Cortex (PAC):\n\nThe primary auditory cortex is responsible for analyzing sound features, memory, and the extraction of inter-sound relationships~\\cite{Koelsch2012}. This area exhibits a topographical map, which means neurons respond to their preferred sounds. In BrainCog, neurons in this area are simulated by the \\emph{LIF} model and organized as minicolumns to represent different frequencies of sounds. To store the ordered note sequences, the excitatory and inhibitory connections are updated by \\emph{STDP} learning rule.\n\n\n\n(4) Inferior Parietal Lobule (IPL):\n\nThe function of IPL is to realize motor-visual associative learning~\\cite{macuga2011selective}. The IPL consists of two subareas: IPLM (motor perception neurons in IPL) and IPLV (visual perception neurons in IPL). The IPLM receives information generated by self-motion from ventral premotor cortex (vPMC), and the IPLV receives information detected by vision from superior temporal sulcus (STS). The motor-visual associative learning is established according to the STDP mechanism and the spiking time difference of neurons in IPLM and IPLV.\n\n\n\n\n(5) Hippocampus (HPC):\n\nThe hippocampus is part of the limbic system and plays an essential role in the learning and memory processes of the human brain. Epilepsy patients with bilateral hippocampus removed (e.g. the patient H.M.) have symptoms of anterograde amnesia. They are unable to form new long-term declarative memories~\\cite{milner1998}. This case study has proved that the hippocampus is in the key process of converting short-term memory to long-term memory and plays a vital role~\\cite{smith1981role}. \n\nFurthermore, through electrophysiological means, it was found that the hippocampal region is also crucial for forming new concepts. Specific neurons in the hippocampus only respond selectively to specific concepts, completing the specific encoding between different concepts. Moreover, it was through the study of the hippocampus that neuroscientists discovered the STDP learning rule~\\cite{dan2004spike}, which further demonstrated the high plasticity of the hippocampus.\n\n\n(6) Insula:\n\nThe function of the Insula is to realize self-representation~\\cite{craig2009you}, that is, when the agent detects that the movement in the field of vision is generated by itself, the Insula is activated. The Insula receives information from IPLV and STS. The IPLV outputs the visual feedback information predicted according to its motion, and the STS outputs the motion information detected by vision. If both are consistent, the Insula will be activated.\n\n(7) Thalamus (ThA):\n\nStudies have shown that the thalamus is composed of a series of nuclei connected to different brain parts and heavily contributes to many brain processes. In BrainCog, this area is discussed from both anatomic and cognitive perspectives. Understanding the anatomical structure of the thalamus can help researchers to comprehend the mechanisms of the thalamus. Based on the essential and detailed anatomic thalamocortical data~\\cite{Izhikevichlargescale}, BrainCog reconstructs the thalamic structure by involving five types of neurons (including excitatory and inhibitory neurons) to simulate the neuronal dynamics and building the complex synaptic architecture according to the anatomic results. 
Inspired by the structure and function of the thalamus, the brain-inspired decision-making model implemented by BrainCog takes into account the transfer function of the thalamus and cooperates with the PFC and basal ganglia to realize multi-brain areas coordinated decision-making model.\n\n(8) Ventral Visual Pathway:\n\nCognitive neuroscience research has shown that the brain can receive external input and quickly recognize objects due to the hierarchical information processing of the ventral visual pathway. The ventral pathway is mainly composed of V1, V2, V4, IT, and other brain areas, which mainly process information such as object shape and color~\\cite{ishai1999distributed,kobatake1994neuronal}. These visual areas form connections through forward, feedback, and self-layer projections. The interaction of different visual areas enables humans to recognize visual objects. The primary visual cortex V1 is selective for simple edge features. With the transmission of information, high-level brain areas combine with lower-level receptive fields to form more complex large receptive fields to recognize more complex objects~\\cite{hubel1962receptive}. Inspired by the structure and function of the ventral visual pathway, BrainCog builds a deep forward SNN with layer-wise information abstraction and a feedforward and feedback interaction deep SNN. The performance is verified on several visual classification tasks. \n\n(9) Motor Cortex:\n\nThe control of biological motor function involves the cooperation of many brain areas. The extra circuits consisting of the PMC, cerebellum, and BA6 motor cortex area are primarily associated with motor control elicited by external stimuli such as visual, auditory, and tactual inputs. Internal motor circuits, including the basal ganglia and SMA, predominate in self-guided, learned movements~\\cite{geldberg1985supplementary,mushiake1991neuronal,gerloff1997stimulation}. Population activity of motor cortical neurons encodes movement direction. Each neuron has its preferred direction. The more consistent the target movement direction is with its preferred direction, the stronger the neuron activities are~\\cite{georgopoulos1995motor,kakei1999muscle}. The cerebellum receives input from motor-related cortical areas such as PMC, SMA, and the prefrontal cortex, which are important for the completion of fine movements, maintaining balance and coordination of movements~\\cite{strick2009cerebellum}. Inspired by the organization of the brain's motor cortex, we use spiking neurons to construct a motion control model, and apply it to the iCub robot, which enables the robot to play the piano according to music pieces.\n\n\\section{Brain-inspired AI}\nComputational units (different neuron models, learning rules, encoding strategies, brain area models, etc.) at multiple scales, provided by BrainCog serve as a foundation to develop functional networks. To enable BrainCog for Brain-inspired AI, cognitive function centric networks need to be built and provided as reusable functional building blocks to create more complex brain-inspired AI. This section introduces various functional building blocks developed based on BrainCog.\n\n\\subsection{Perception and Learning}\nIn this subsection, we will introduce the supervised and unsupervised perception and learning spiking neural networks based on the fundamental components of BrainCog. 
Inspired by the global feedback connections, the neural dynamics of spiking neurons, and the biologically plausible STDP learning rule, we improve the performance of the spiking neural networks. We show great adaption in the small training sample scenario. In addition, our model has shown excellent robustness in noisy scenarios by taking inspiration from the multi-component spiking neuron and the quantum mechanics. The burst spiking mechanism is used to help our converted SNNs with higher performance and lower latency. Based on this engine, we also present a human-like concept learning framework to generate representations with five types of perceptual strength information.\n\n\\textbf{1. Learning Model}\n\n\\emph{(1) SNN with global feedback connections}\n\n\nThe spiking neural network transmits information in discrete spike sequences, which is consistent with the information processing in the human brain~\\cite{Maass1997Networks}. The training of spiking neural networks has been widely concerned by researchers. Most researchers take inspiration from the mechanism of synaptic learning and updating between neurons in the human brain, and propose biologically plausible learning rules, such as Hebbian theory~\\cite{amit1994correlations}, STDP~\\cite{bi1998synaptic}, and STP~\\cite{zucker2002short}, which have been adopted into the training of spiking neural networks~\\cite{tavanaei2016bio,tavanaei2017multi,diehl2015unsupervised,falez2019multi,zhang2018plasticity,zhang2018brain}. However, most SNNs are based on feedforward structures, while the importance of the brain-inspired structures has been ignored. The anatomical and physiological evidences show that in addition to feedforward connections, numerous feedback connections exist in the brain, especially among sensory areas~\\cite{felleman1991distributed,sporns2004small}. The feedback connections will carry out the predictions from the top layer to cooperate with the local plasticity rules to formulate the learning and inference in the brain. \n\nHere, we introduce the global feedback connections and the local differential learning rule~\\cite{zhao2020glsnn} in the training of SNNs. \n\\begin{figure}[!htbp]\n\t\\centering\n\t\\includegraphics[scale=0.6]{.\/fig\/zdc_pipeline.pdf}\n\t\\caption{The feedforward and feedback pathway in the SNN model. The global feedback pathway propagates the target of the hidden layer, modified from~\\cite{zhao2020glsnn}.}\n\t\\label{snn}\n\\end{figure}\n\nWe use the LIF spiking neuron model in the BrainCog to simulate the dynamical process of the membrane potential $V(t)$ as shown in Eq.~\\ref{equa_lif1}. We use the mean firing rates $S_{l}$ of each layer to denote the representation of the $l_{th}$ layer in the forward pathway, and the corresponding target is denoted as $\\hat{S}_{l}$. Here we use the mean squared loss (MSE) as the final loss function.\n\n$\\hat{S}_{L-1}$ denotes the target of the penultimate layer, and is calculated as shown in Eq.~\\ref{equal_per}, $W_{L-1}$ denotes the forward weight between the $(L-1)_{th}$ layer and the $L_{th}$. 
$\\eta_t$ represents the learning rate of the target~\\cite{zhao2020glsnn}.\n\\begin{equation}\n\t\\hat{S}_{L-1} = S_{L-1} - \\eta_t\\Delta S = S_{L-1} - \\eta_t W_{L-1}^T(S_{out} - S^T)\n\t\\label{equal_per}\n\\end{equation}\n\nThe target of the other hidden layer can be obtained through the feedback connections:\n\n\\begin{equation}\n\t\\hat{S}_{l} = S_{l} - G_{l}(S_{out} - S^T)\n\t\\label{equal_all}\n\\end{equation}\n\nBy combining the feedforward representation and feedback target, we compute the local MSE loss. We can compute the local update of the parameters with the surrogate gradient. We have conducted experiments on the MNIST and Fashion-MNIST datasets, and achieved 98.23\\% and 89.68\\% test accuracy with three hidden layers. Each hidden layer is set with 800 neurons. The details are shown in Fig.~\\ref{zdc_result}.\n\\begin{figure}[!htbp]\n\t\\centering\n\t\\includegraphics[scale=0.35]{.\/fig\/result_zdc.pdf}\n\t\\caption{The test accuracy on MNIST and Fashion-MNIST datasets of the SNNs with global feedback connections.}\n\t\\label{zdc_result}\n\\end{figure}\n\n\n\\emph{(2) Biological-BP SNN}\n\nThe backpropagation algorithm is an efficient optimization method that is widely used in deep neural networks and has promoted the great success of deep learning. However, due to the non-smoothness of neurons in SNNs, the backpropagation optimization is difficult to be applied to train SNNs directly. To solve the above problem, Bengio et al.~\\cite{bengioEstimatingPropagatingGradients2013} proposed four gradient approximation methods, including Straight-Through Estimator to enable the application of the backpropagation algorithm to neural networks containing nonsmooth neurons. Wu et al.~\\cite{wuSpatioTemporalBackpropagationTraining2018} proposed the Spatio-Temporal Backpropagation (STBP) algorithm to train SNNs by using a differentiable function to approximate spiking neurons through a surrogate gradient method. Based on the spatio-temporal dynamic of SNNs, they achieved the backpropagation of SNNs in both time and space dimensions. Zheng et al.~\\cite{zhengGoingDeeperDirectlyTrained2021} proposed threshold-dependent batch normalization (tdBN) to improve the performance of deep SNNs. Fang et al.~\\cite{fangIncorporatingLearnableMembrane2021} proposed a parametric LIF (PLIF) model to further improve the performance of SNNs by optimizing the time constants of neurons during the training process. \n\nMost of these BP-based SNNs often simply regard SNN as a substitute for RNN, but ignore the dynamic of spiking neurons. To solve the problem of non-differentiable neuronal models, Bohte et al.~\\cite{bohte2011error} approximated the backpropagation process of neuronal models using the surrogate gradient method. However, this method results in gradient leakage and does not allow for proper credit assignment in both temporal and spatial dimensions. Also, due to the reset operation after the spikes are emitted, the errors in the backward process can not propagate across spikes. 
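\n\nAs a concrete reference for the surrogate gradient approach discussed above, the following is a minimal sketch (illustrative PyTorch-style code with assumed threshold and window values, not BrainCog's actual implementation) of a spike activation whose backward pass replaces the non-differentiable Heaviside function with a rectangular surrogate:\n\\begin{verbatim}\nimport torch\n\nV_TH, WIDTH = 1.0, 0.5  # assumed firing threshold and surrogate window\n\nclass SurrogateSpike(torch.autograd.Function):\n    # forward: Heaviside step at V_TH; backward: rectangular surrogate\n    @staticmethod\n    def forward(ctx, v):\n        ctx.save_for_backward(v)\n        return (v >= V_TH).float()\n\n    @staticmethod\n    def backward(ctx, grad_output):\n        (v,) = ctx.saved_tensors\n        # gradients flow only in a window around the firing threshold\n        return grad_output * ((v - V_TH).abs() < WIDTH).float()\n\n# usage inside an SNN layer:\n# spikes = SurrogateSpike.apply(membrane_potential)\n\\end{verbatim}\n\n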
To solve the above problem, we propose Backpropagation with biologically plausible spatio-temporal adjustment~\\cite{shen2022backpropagation}, as shown in Fig.~\\ref{bpsta}, which can correctly assign credit according to the contribution of the neuron to the membrane potential at each moment.\n\n\\begin{figure}[!htbp]\n\t\\centering\n\t\\includegraphics[scale=0.48]{.\/fig\/shen_bpsta.pdf}\n\t\\caption{The forward and backward process of biological BP-based SNNs for BrainCog.}\n\t\\label{bpsta}\n\\end{figure}\n\nBased on LIF spiking neuron, the direct input encoding strategy, the MSE loss function and the surrogate gradient function supplied in BrainCog, we propose a Biologically Plausible Spatio-Temporal Adjustment (BPSTA) to help BP algorithm with more reasonable error adjustment in the spatial temporal dimension~\\cite{shen2022backpropagation}. The algorithm realizes the reasonable adjustment of the gradient in the spatial dimension, avoids the unnecessary influence of the neurons that do not generate spikes on the weight update, and extracts more important features. By applying the temporal residual pathway, our algorithm helps the error to be transmitted across multiple spikes, and enhances the temporal dependency of the BP-based SNNs. Compared with SNNs and ANNs with the same structure that only used the BP algorithm, our model greatly improves the performance of SNNs on the DVS-CIFAR10 and DVS-Gesture datasets, while also greatly reducing the energy consumption and decay of SNNs, as shown in Tab.~\\ref{compare2}.\n\\begin{table}[!htbp]\n \\centering\n \\caption{The energy efficiency study. The former represents our method, the latter represents the baseline, adopted from~\\cite{shen2022backpropagation}.}\n \\begin{tabular}{cccc}\n \\toprule[2pt]\n Dataset & Accuracy & Firing-rate &EE = $\\frac{E_{ANN}}{E_{SNN}}$ \\\\\n \\midrule[2pt]\n MNIST & 99.58\\%\/99.42\\% & 0.082\/0.183 & 35.1x\/15.7x \n \\\\\n \n N-MNIST & 99.61\\%\/99.32\\% & 0.097\/0.176 & 29.6x\/16.3x \\\\\n \n CIFAR10 & 92.33\\%\/89.49\\% & 0.108\/0.214 & 26.6x\/13.4x \\\\ \n \n DVS-Gesture & 98.26\\%\/93.92\\% & 0.083\/0.165 & 34.6x\/17.4x \\\\ \n \n DVS-CIFAR10 & 77.76\\%\/71.40\\% & 0.097\/0.177 & 29.5x\/16.2x \\\\\n \\bottomrule[2pt]\n \\end{tabular}\n \\label{compare2}\n\\end{table} \n\n\\emph{(3) Unsupervised STDP-based SNN}\n \nUnsupervised learning is an important cognitive function of the brain. The brain can complete the task of object recognition by summarizing the characteristics and features of objects. Unsupervised learning does not require explicit labels, but extracts the features of samples adaptively in learning process. Modeling of this ability of the brain is critical. There are multiple learning rules in the brain to accomplish various learning tasks. STDP is a widespread rule of synaptic weight modification in the brain. It updates the synaptic weights according to the temporal relationship of the pre- and post-synaptic spikes. Compared with the backpropagation algorithm widely used in DNN with gradient calculation, STDP is more biological plausible. However, unlike the backpropagation algorithm that relies on a large number of sample labels, STDP is a local optimization algorithm. Due to the lack of global information, the ability of self-organization and coordination between neurons is insufficient. In the SNN model, it will lead to disorder and uncertainty of spikes discharge, and it is difficult to achieve a stable release balance state of neurons. 
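\n\nAs background for the STDP-based mechanisms designed next, the following is a minimal trace-based STDP weight update (an illustrative sketch with assumed learning rates and time constants, not BrainCog's exact implementation):\n\\begin{verbatim}\nimport numpy as np\n\ndef stdp_update(w, pre_s, post_s, pre_tr, post_tr,\n                a_plus=0.01, a_minus=0.012, tau=20.0, dt=1.0):\n    # w: (n_post, n_pre) weights; *_s: 0\/1 spike vectors; *_tr: traces\n    pre_tr += dt * (-pre_tr \/ tau) + pre_s     # decaying pre-synaptic trace\n    post_tr += dt * (-post_tr \/ tau) + post_s  # decaying post-synaptic trace\n    # potentiation: post spike paired with recent pre activity\n    w += a_plus * np.outer(post_s, pre_tr)\n    # depression: pre spike paired with recent post activity\n    w -= a_minus * np.outer(post_tr, pre_s)\n    return w, pre_tr, post_tr\n\\end{verbatim}\nThe adaptive mechanisms described below regulate how such local updates interact across neurons and samples.\n\n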
To this end, we design an unsupervised STDP-based spiking neural network model based on BrainCog, and bring unsupervised learning to BrainCog as a functional module.\nAs shown in Fig.~\\ref{frame}.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.48\\textwidth]{fig\/frame.pdf}\n\\caption{The framework of the unsupervised STDP-based spiking neural network model, which introduces the adaptive synaptic filter (ASF), the adaptive threshold balance (ATB), and the adaptive lateral inhibitory connection (ALIC) mechanisms to improve the information transmission and feature extraction of STDP-based SNNs. This figure is from \\cite{dong2022unsupervised}.}\n\\label{frame}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.49\\textwidth]{fig\/stp1.pdf}\n\\caption{(a) The adaptive synaptic filter and the adaptive threshold balance jointly regulate the neuron spikes balance of neuron. (b) The adaptive lateral inhibitory connection has different connections for different samples, preventing neurons from learning the same features. This figure is from~\\cite{dong2022unsupervised}.}\n\\label{stp}\n\\end{figure}\n\nTo solve the above problems, we introduced various adaptive mechanisms to improve the self-organization ability of the overall network. STP is another synaptic learning mechanism that exists in the brain. Inspired by STP, we designed an adaptive synaptic filter (ASF) that integrates input currents through nonlinear units, and an adaptive threshold balance (ATB) that dynamically changes the threshold of each neuron to avoid excessively high or low firing rates. The combination of the two controls the firing balance of neurons. As the Fig.~\\ref{stp} shows. We also address the problem of coordinating neurons within a single layer with an adaptive lateral inhibitory connection (ALIC). The mechanism have different connection structures for different input samples. Finally, in order to solve the problem of low efficiency of STDP training, we designed a sample temporal batch STDP. It combines the information between temporal and samples to uniformly update the synaptic weights, as shown by the following formula.\\begin{equation}\n\t\\begin{split}\n\t\t&\\frac{dw_{j}^{(t)}}{dt}^{+}=\\sum \\limits_{m=0}^{N_{batch}}\\sum \\limits_{n=0}^{T_{batch}}\\sum\\limits_{f=1}^{N} W(t^{f,m}_{i}-t^{n,m}_{j})\\\\\n\t\\end{split}\n\t\\label{eq5}\n\\end{equation} where $W(x)$ is the function of STDP, $N_{batch}$ is the batchsize of the input, $T_{batch}$ is the batch of time step, N is the number of neurons. We verified our model on MNIST and Fashion-MNIST, achieving 97.9\\% and 87.0\\% accuracy, respectively. To the best of our knowledge, these are the state-of-the-art results for unsupervised SNNs based on STDP. \n\n\n\\textbf{2. Adaptability and Optimization}\n\n\\emph{(1) Quantum superposition inspired SNN}\n\n\nIn the microscopic size, quantum mechanics dominates the rules of operation of objects, which reveals the probabilistic and uncertainties of the world. New technologies based on quantum theory like quantum computation and quantum communication provide an alternative to information processing. Researches show that biological neurons spike at random and the brain can process information with huge parallel potential like quantum computing. 
\n\nInspired from this, we propose the Quantum Superposition Inspired Spiking Neural Network (QS-SNN)~\\cite{SUN2021102880}, complementing quantum image (CQIE) method to represent image in the form of quantum superposition state and then transform this state to spike trains with different phase. Spiking neural network with time differential convolution kernel (TCK) is used to do further classification shown in Fig.~\\ref{Fig:QSSNN}.\n\n\n\n\n\\begin{figure}[!htb]\n\t\\centering\n\t\\includegraphics[width=1.0\\linewidth]{fig\/QSSNN.pdf} \n\t\\caption{Quantum superposition inspired spiking neural network, adopted from \\cite{SUN2021102880}.}\n\t\\label{Fig:QSSNN}\n\\end{figure}\n\n\n\\begin{figure}[!htb]\n\t\\centering\n\t\\includegraphics[width=1.0\\linewidth]{fig\/mnist_shift_result.pdf} \n\t\\caption{Performance of QS-SNN on background MNIST inverse image, adopted from \\cite{SUN2021102880}.}\n\t\\label{Fig:mnist-res}\n\\end{figure}\n\nThe effort tries to incorporate the quantum superposition mechanism to SNNs as a new form of encoding strategy for BrainCog, and the model finally shows its capability on robustness for learning. The proposed QS-SNN model is tested on color inverted MNIST datasets. The background-inverted picture is encoded in the quantum superposition form as shown in Eq.~\\ref{QS-SNN} and~\\ref{QS-SNN_limitation}.\n\\begin{equation}\n\t\\mathinner{|I(\\theta)\\rangle}=\\frac{1}{2^n}\\sum\\limits_{i=0}^{2^{2n}-1}(cos(\\theta_{i})\\mathinner{|x_{i}\\rangle}+sin(\\theta_{i})\\mathinner{|\\bar{x}_{i}\\rangle})\\otimes\\mathinner{|i\\rangle}, \\\\\n\t\\label{QS-SNN}\n\\end{equation}\n\n\n\\begin{equation}\n\t\\theta_{i} \\in [0, \\frac{\\pi}{2}], i= 1, 2, 3, \\dots, 2^{2n}-1.\n\t\\label{QS-SNN_limitation}\n\\end{equation}\n\nSpike sequences of different frequencies and phases are generated from the picture information of the quantum superposition state. Furthermore we use two-compartment spiking neural networks to process these spike trains.\n\nWe compare the QS-SNN model with other convolutional models. The result in Fig.~\\ref{Fig:mnist-res} shows that our QS-SNN model overtakes other convolutional neural networks in recognizing background-inverted image tasks.\n\n\\emph{(2) Unsupervised SNN with adaptive learning rule and structure}\n\nBrain can accomplish specific tasks by adaptively learning to organize the features of a small number of samples. Few-shot learning is an important ability of the brain. In the above section, an unsupervised STDP-based spiking neural network is introduced~\\cite{dong2022unsupervised}. 
To better illustrate the power of our model on small-sample training, we tested the model with small numbers of training samples and found that it has a stronger small-sample processing ability than an ANN with a similar structure, as shown in Tab.~\\ref{small}~\\cite{dong2022unsupervised}.\n\n\\begin{table}[h]\n\t\\caption{The performance of the unsupervised SNN compared with an ANN on the MNIST dataset with different numbers of training samples~\\cite{dong2022unsupervised}.}\n\t\\centering\n\t\\resizebox{0.8\\linewidth}{!}{\n\t\t\\begin{tabular}{lrrrr}\n\t\t\t\\toprule \n\t\t\tSamples&200 &100&50&10 \\\\\n\t\t\t\\midrule\t\n\t\t\tANN&79.77\\%&71.40\\%&68.72\\%&47.12\\%\\\\\n\t\t\tOurs&81.45\\%&75.44\\%&72.88\\%&51.45\\%\\\\\n\t\t\t\\midrule\n\t\t\tImprovement&1.68\\%&4.04\\%&4.16\\%&4.33\\%\\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t}\n\t\\label{small}\n\t\n\\end{table}\n\n\n\n\n\\emph{(3) Efficient and Accurate Conversion of SNNs}\n\n\tSNNs have attracted attention due to their biological plausibility, fast inference, and low energy consumption. However, training methods based on plasticity~\\cite{zeng2017improving} and surrogate gradient algorithms~\\cite{wu2018spatio} require much memory and perform worse than ANNs on large networks and complex datasets. For users of BrainCog, there is a clear need to use SNNs while keeping the benefits of ANNs. As an efficient method, the conversion approach combines the advantages of backpropagation training and low energy consumption, and can achieve the same excellent performance as the ANN with lower power consumption~\\cite{li2021free, han2020deep, han2020rmp}. However, the converted SNNs typically suffer from severe performance degradation and time delays.\n\t\n\t\\begin{figure}[htb]\n \t\\centering\n \t\\includegraphics[scale=0.3]{fig\/error.png}\n \t\\caption{The conversion errors from IF neuron, time dimension, and MaxPooling, adopted from~\\cite{li2022efficient}.}\n \t\\label{error}\n \\end{figure}\n\n We divide the sources of performance loss into IF neurons, the time dimension, and the MaxPooling layer~\\cite{li2022efficient}, as shown in Fig.~\\ref{error}. In an SNN, a neuron can send at most one spike in each time step, so the maximum firing rate of the neuron is 1. After normalizing the weights of the trained ANN, some activation values greater than 1 cannot be effectively represented by IF neurons. Therefore, the residual membrane potential in neurons will affect the performance of the conversion to some extent. In addition, since the IF neuron receives pre-synaptic spikes for information transmission, the synaptic current received by the neuron at each time is unstable, but its sum can approximate the converted value as the simulation time increases. However, the total number of spikes is the time-varying extremum of the sum of the synaptic currents received. When the activation value corresponding to the IF neuron is negative and the total synaptic current received by the neuron in a short time exceeds the threshold, the neuron that should be resting will issue a spike, and the influence of this spike on the later layers cannot be eliminated by increasing the simulation time. Finally, in the conversion of the MaxPooling layer, previous work has enabled spikes from the neuron with the maximum firing rate to pass through. 
However, due to the instability of synaptic current, neurons with the maximum firing rate are often not fixed, which makes the output of the converted MaxPooling layer usually larger.\n\n \\begin{figure}[!h]\n \t\\centering\n \t\\includegraphics[scale=0.5]{fig\/conver.pdf}\n \t\\caption{The conversion methods. (a) Burst spikes increase the upper limit of firing rates; (b) spike calibration corrects the effect of faulty spikes on conversion, and (c) LIPooling uses lateral inhibition mechanisms to achieve accurate conversion of the MaxPooling layers. Refined based on~\\cite{li2022efficient,li2022spike}.}\n \t\\label{conver}\n \\end{figure}\n\n To solve the problem of the residual membrane potential of neurons, we introduce the burst mechanism, as shown in Fig.~\\ref{conver} (a), which enables neurons to send more than one spike between two time steps, depending on the current membrane potential. Once some neurons have residual information remaining, they can send spikes between two time steps. In this way, the firing rate of SNN can be increased, and the membrane potential remaining in the neuron can be transmitted to the neuron of the next layer. \n \n For classification problems, SNN only needs to ensure that the index of the maximum output is correct, but for more demanding conversion tasks, such as object detection, the solution of SIN problem is worth exploring. Note that the spikes emitted by hidden layer neurons are unstable, but the mean of its inter-spike interval distribution is related to its corresponding activation value~\\cite{li2022spike}. We monitor each neuron's spiking time in the forward propagation process and update its average inter-spike interval, as shown in Fig.~\\ref{conver} (b). Under a certain time allowance, the neuron that does not emit spikes will be determined to be Inactivated Neuron. Then, the twin weights emit spikes to suppress the influence of historical errors and calibrate the influence of wrong spikes to a certain extent to ensure accurate conversion.\n\n\n Inspired by the lateral inhibition mechanism~\\cite{blakemore1970lateral}, we propose LIPooling for converting the maximum pooling layer, as shown in Fig.~\\ref{conver} (c). From the operation perspective, the inhibition of other neurons by the winner in LIPooling is -1, so the neuron with the largest firing rate at the current time step may not spike due to the inhibition of other neurons in history. From the output perspective, LIPooling sums up the output of all neurons during simulation. So the key is that LIPooling uses competition between neurons to get an accurate sum (equal to the actual maximum), instead of picking the winner.\n\n\\textbf{3. Multi-sensory Integration}\n\n\nOne can build SNN models through BrainCog to process different types of sensory inputs, while the human brain learns and makes decisions based on multi-sensory inputs. When information from various sensory inputs is combined, it can lead to increased perception, quicker response times, and better recognition. Hence, enabling BrainCog to process and integrate multi-sensory inputs are of vital importance.\n\nIn this section, we focus on concept learning with multi-sensory inputs. 
We present a multi-sensory concept learning framework based on BrainCog to generate integrated vectors with the multi-sensory representation of the concept.\n\nEmbodied theories, which emphasize that meaning is rooted in our sensory and experiential interactions with the environment, supports multi-sensory representations.\nBased on SNNs, we present a human-like framework to learn concepts which can generate integrated representations with five types of perceptual strength information~\\cite{wywFramework}.\nThe framework is developed with two distinct paradigms: Associate Merge (AM) and Independent Merge (IM), as Fig.~\\ref{MultisensoryIntegrationFramework} shows.\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[width=8cm]{.\/fig\/MultisensoryIntegrationBrainCogNEW.png}\\\\\n \\caption{The Framework of Concept Learning Based on SNNs with multi-sensory Inputs.}\n \\label{MultisensoryIntegrationFramework}\n\\end{figure}\n\nIM is based on the widely accepted cognitive psychology premise that each type of sense for the concept is independent before integration~\\cite{wywFramework}.\nAs the input to the model, we will employ five common perceptual strength: visual, auditory, haptic, olfactory and gustatory.\nDuring the data preparation step, we min-max normalize all kinds of perceptual strength of the concept in the multi-sensory dataset so that each value of the vector is in $[0, 1]$.\nWe regard them as stimuli to the presynaptic neurons.\nIt's a 2-layer SNN model, with 5 neurons in the first layer matching the concept's 5 kinds of perceptual strength, and 1 neuron in the second layer representing the neural state following multi-sensory integration.\nIn this paradigm, we use perceptual strength based presynaptic Poisson neurons and LIF or Izhikevich as the postsynaptic neural model.\n\nThe weights between the neurons are $W^i= \\frac{g_i}{\\Sigma_i^n g_i} $ where $g_i = \\frac{1}{\\sigma_i^2}$,$\\sigma_i^2$ is the variance of each kind of perceptual strength.\nWe convert the postsynaptic neuron's spiking train $S^{post}([0, T])$ in $[0, T]$ into integrated representations $B^{IM}([0,T])$ for the concept in this form:\n\\begin{equation}\n\\begin{aligned}\nB^{IM}([0,T]) &= [\\mathcal{T}(S^{post}((0, tol])), \\mathcal{T}(S^{post}((tol, 2*tol])), \\cdots , \\\\\n& \\mathcal{T}(S^{post}(((k-1)*tol, k*tol])) , \\cdots, \\\\\n& \\mathcal{T}(S^{post}((\\lfloor \\frac{T}{tol} \\rfloor * tol, T]))]\n\\end{aligned}\n\\end{equation}\nHere if the interval has any spikes, the bit is 1. Otherwise it is 0, according to the $\\mathcal{T} (interval)$ function.\n\nThe AM paradigm presupposes that each kind of modality is associated before integration~\\cite{wywFramework}. \nIt includes 5 neurons, matching the concept's 5 distinct modal information sources. \nThey are linked with each other and are not self-connected. \nThe input spike trains to the network are generated using a Poisson event-generation algorithm based on perceptual strength.\nFor each concept, we turn the spike trains of these neurons into the ultimate integrated representations.\n\n\n\nThe weight value is defined by the correlation between each two modalities, i.e. 
$W = Corr(i, j)$, where $i, j \\in [A, G, H, O, V]$.\nWe convert the spike trains $S^i([0, T])$ of all neurons into binary codes $B^i([0,T])$ and conjoin them as the ultimate vector $B([0,T])$ as follows:\n\\begin{equation}\n\\begin{aligned}\nB^i([0,T]) &= [\\mathcal{T}(S^i((0, tol])), \\mathcal{T}(S^i((tol, 2*tol])), \\cdots , \\\\\n& \\mathcal{T}(S^i(((k-1)*tol, k*tol])) , \\cdots, \\\\\n& \\mathcal{T}(S^i((\\lfloor \\frac{T}{tol} \\rfloor * tol, T]))]\n\\end{aligned}\n\\end{equation}\n\n\\begin{equation}\n\\begin{aligned}\nB^{AM}([0,T]) &= [B^A([0,T]) \\oplus B^H([0,T]) \\oplus B^G([0,T]) \\\\\n& \\oplus B^O([0,T]) \\oplus B^V([0,T])]\n\\end{aligned}\n\\end{equation}\n\n\nTo test our framework, we conducted experiments with three multi-sensory datasets (LC823~\\cite{wywLC423,wywLC400}, BBSR~\\cite{wywBBSR}, Lancaster40k~\\cite{wywLan40k}) for the IM and AM paradigms, respectively. \nWe used WordSim353~\\cite{wywWS353} and SCWS1994~\\cite{wywSCWS} as metrics~\\cite{wywFramework}. \nThe results show that, based on our framework, the integrated representations are closer to human ratings than the original ones: across the AM and IM paradigms, the submodels outperformed the original representations in 37 out of 48 tests~\\cite{wywFramework}.\nMeanwhile, to compare the two paradigms, we introduce concept feature norm datasets, which represent concepts with systematic and standardized feature descriptions. \nIn this study, we use the datasets McRae~\\cite{wywMcRae} and CSLB~\\cite{wywCSLB} as criteria.\nThe findings show that the IM paradigm performs better at multi-sensory integration for concepts with higher modality exclusivity. \nThe AM paradigm is more beneficial for concepts with a uniform perceptual strength distribution. \nFurthermore, we present perceptual strength-free metrics to demonstrate that both paradigms of our framework have excellent generality~\\cite{wywFramework}.\n\n\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[width=8cm]{.\/fig\/yuwei1.jpg}\\\\\n \\caption{The Correlation Results Between Modality Exclusivity and Average of 3 Neighbors' Rankings.}\n \\label{MEFS}\n\\end{figure}\n\n\n\n\n\n\\subsection{Decision Making}\nThis subsection introduces how BrainCog implements decision-making functions from the perspective of brain neural mechanism modeling and deep SNN-based reinforcement learning models. Using BrainCog, we build a multi-brain-area coordinated SNN model and a spiking deep Q-network to solve decision-making and control problems.\n\n\\textbf{1. Brain-inspired Decision-Making SNN}\n\n\nFor mammalian brain-inspired decision-making, we take inspiration from the PFC-BG-ThA-PMC neural circuit and build a brain-inspired decision-making spiking neural network (BDM-SNN) model~\\cite{zhao2018brain} with BrainCog, as shown in Fig.~\\ref{dmsnn}. BDM-SNN contains the excitatory and inhibitory connections within the basal ganglia nuclei and the direct, indirect, and hyperdirect pathways from the PFC to the BG~\\cite{frank2006anatomy,silkis2000cortico}. This BDM-SNN model incorporates biological neuron models (LIF and simplified H-H models), synaptic plasticity learning rules, and interactive connections among multiple brain areas developed by BrainCog. On this basis, we extend the dopamine (DA)-regulated BDM-SNN, which modulates synaptic learning for PFC-to-striatal direct and indirect pathways via dopamine. 
Different from the DA regulation method in~\\cite{zhao2018brain} which uses multiplication to modulate the specified connections, we improve it by introducing R-STDP~\\cite{Eugene2007} (from Eq.~\\ref{rstdpe} and Eq.~\\ref{rstdpw}) to modulate the PFC-to-striatal connections. \n\n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[width=8cm]{.\/fig\/bdmsnn.jpg}\n\\end{center}\n\\caption{The architecture of DA-regulated BDM-SNN, refined based on~\\cite{zhao2018brain}.}\\label{dmsnn}\n\\end{figure}\n\n\nThe BDM-SNN model implemented by BrainCog could perform different tasks, such as the Flappy Bird game and has the ability to support UAV online decision-making. For the Flappy Bird game, our method achieves a performance level similar to humans, stably passing the pipeline on the first try. Fig.~\\ref{fb}a illustrates the changes in the mean cumulative rewards for LIF and simplified H-H neurons while playing the game. The simplified H-H neuron achieves similar performances to that of LIF neurons. BDM-SNN with different neurons can quickly learn the correct rules and keep obtaining rewards. We also analyze the role of different ion channels in the simplified H-H model. From Fig.~\\ref{fb}b, we find that sodium and potassium ion channels have opposite effects on the neuronal membrane potential. Removing sodium ion channels will make the membrane potential decay, while the membrane potential rises faster and fires earlier when removing potassium ion channels. These results indicate that sodium ion channels can help increase the membrane potential, and potassium ion channels have the opposite effect. Experimental results also indicate that BDM-SNN with simplified H-H model that removes sodium ion channels fails to learn the Flappy Bird game.\n\n\n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[width=8.8cm]{.\/fig\/hhfly.jpg}\n\\end{center}\n\\caption{(a) Experimental result of BrainCog based BDM-SNN on Flappy Bird. The y-axis is the mean of cumulative rewards. (b) Effects of different ion channels on membrane potential for simplified H-H model.}\\label{fb}\n\\end{figure}\n\nIn addition, for the UAV decision-making tasks in the real scene, our model could perform potential applications including flying over doors and windows and obstacle avoidance, which have been realized in~\\cite{zhao2018brain}. Users only need to divide the state space and action space according to different tasks, call the DA-regulated BDM-SNN decision-making model, and combine the UAV's action control instructions to complete the UAV's decision-making process.\n \n \nThis part of the work mainly draws on the neural structure and learning mechanism of brain decision-making and proposes the multi-brain areas coordinated decision-making spiking neural network constructed by BrainCog, and verifies the ability of reinforcement learning in different application scenarios. \n\n\n\\textbf{2. Spiking Deep Q Network with potential based layer normalization}\n\nDeep Q network is widely used for decision-making tasks, and it is required to have SNN based deep Q network for BrainCog so that it can be used for SNN based decision making. We propose potential-based layer normalization spiking deep Q network (PL-SDQN) model to combine SNN with deep reinforcement learning~\\cite{sun2022solving}. We use the LIF neuron model in BrainCog to simulate neurodynamics. Deep spiking neural networks are difficult to be applied to reinforcement learning tasks. 
On the one hand, this is due to the complexity of the reinforcement learning task itself; on the other hand, it is challenging to train spiking neural networks and to transmit spiking signal characteristics through deep layers. We find that the spiking deep Q network quickly dissipates the spiking signal in the convolutional layer. Inspired by how local environmental potentials influence brain neurons, we propose the potential-based layer normalization (pbLN) method. The postsynaptic potential $x_t$ of the convolution layers is normalized as \n\\begin{equation}\n\t\\hat{x_t} = \\frac{x_t-\\bar{x}_t}{\\sqrt{\\sigma_{x_t}+\\epsilon}}\n\t\\label{Eq:norm}\n\\end{equation}\t\n\n\\begin{equation}\n\t\\bar{x}_t = \\frac{1}{H} \\sum_{i=1}^Hx_{t, i}\n\\end{equation}\t\n\n\\begin{equation}\n\t\\sigma_{x_t} = \\frac{1}{H}\\sum_{i=1}^H(x_{t, i}-\\bar{x}_t)^2\n\\end{equation}\n\n\nWe construct the PL-SDQN model as shown in Fig.~\\ref{Fig:sdqn}. Atari game images are processed by the spiking convolutional network with the pbLN method and then fed into a fully connected LIF neural network. The spiking output of PL-SDQN is weighted and summed to obtain continuous state-action values. \n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=8cm]{fig\/SDQN_struct.pdf}\n\t\\caption{The framework of PL-SDQN. Refined from~\\cite{sun2022solving}.}\n\t\\label{Fig:sdqn}\n\\end{figure}\n\n\n\nWe compared our model with the original ANN-based DQN model, and the results are shown in Fig.~\\ref{Fig:game_res}. It shows that our model achieved better performance compared with the vanilla DQN model.\n\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=8cm]{fig\/game_res_new_resize.pdf}\n\t\\caption{PL-SDQN performance on Atari games, adopted from~\\cite{sun2022solving}.}\n\t\\label{Fig:game_res}\n\\end{figure}\n\n\n\\subsection{Motor Control}\n\nNeuromorphic models for robot control can achieve more robust and energy-efficient effects than conventional methods. Spiking neural networks have been used in robot control studies such as navigation~\\cite{wang2014mobile} and robot arm control~\\cite{Tieck2018}. Inspired by the brain's motor circuit, we construct a multi-brain-area coordinated SNN robot motor control model to extend BrainCog to control various robots and to model embodied intelligence.\n\nWe construct the brain-inspired motor control model with the LIF neurons provided by BrainCog. The whole network model architecture is shown in Fig.~\\ref{Fig:motor}. The high-level motion information is produced by the SMA and PMC modules. As discussed above, the function of the SMA is to process internal movement stimuli, and it is responsible for the planning and abstraction of advanced actions. The SMA model contains LIF neurons and receives input signals. One part of the output spikes of the SMA module stimulates the PMC module, and the other part is received by the BG module. The output of the BG module is used as a supplementary signal for action planning, and serves as the input to the PMC together with the SMA signal.\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=8cm]{fig\/motor.pdf}\n\t\\caption{A spiking neural network for motor control based on BrainCog.}\n\t\\label{Fig:motor}\n\\end{figure}\n\n\nIn order to expand the dimension of neuron direction representation, we use neuron population coding to output the high-level action abstraction. 
Population-coded spiking neural network has been used for energy-efficient continuous control~\\cite{tang2021deep}, showing population coding can increase the ability of spiking neurons to represent precise continuous values. In our work, we are inspired by neural mechanisms of population encoding of motion directions in the brain and use LIF neuron groups to process the output spikes from PMC module. \n\nThe cerebellum plays an important role in motor coordination and fine regulation of movements. We built spiking neural network based cerebellum model to process the high-level motor control population embedding. The outputs of populations are fused to encode motor control information generated by high-level cortex area inputs to a three-layer cerebellum spiking neural network, including GCs, PCs and DCN modules. And the cerebellum takes pathway connections like DenseNet~\\cite{huang2017densely}. The DCN layer generates the final joint control outputs.\n\n\n\\subsection{Knowledge Representation and Reasoning}\nThis subsection shows how the BrainCog platform achieve the ability of knowledge representation and reasoning. Via neuroplasticity and population coding mechanisms, spiking neural networks acquire music and symbolic knowledge. Moreover, on this basis, cognitive tasks such as music generation, sequence production, deductive reasoning and inductive reasoning are realized.\n\n\n\\textbf{1. Music Memory and Stylistic Composition SNN}\n\n\nMusic is part of human nature. Listening to melodies involves sensory perception, personal memory, action, emotion, and even creative behaviors, etc~\\cite{Koelsch2012}. Music memory is a fundamental part of musical behaviors, and humans have strong abilities to store a sequence of notes in the brain. Learning and creating music are also essential processes. A musician engages his memory, emotion, musical knowledge and skills to write a beautiful melody. Actually, neuroscientists have found that many brain areas need to collaborate to complete the cognitive behaviors with music. Inspired by brain mechanisms, this paper focuses on the two key issues of music memory and composition, which are modeled by spiking neural networks based on the BrainCog platform.\n\n\\subsubsection{Musical Memory Spiking Neural Network} \\label{secA}\nA musical melody is composed of a sequence of notes. Pitch and duration are two essential attributes of a note. Scientists have found that the primary auditory cortex provides a tonotopic map to encode the pitches, which means that neurons in this region have their preferences of pitches~\\cite{Kalat2015}. Meanwhile, neural populations in the medial premotor cortex have the preferences of all the time intervals covered in hundreds of milliseconds~\\cite{merchant2013a}. Besides, researchers have emphasized the contribution of the hippocampus in sequence memory~\\cite{Fortin2002}. Inspired by these mechanisms, this work proposes a spiking neural model, which contains collaborated subnetworks to encode, store and retrieve the music melodies~\\cite{LQ2020}.\n\n\\emph{Encoding:} As is shown in Fig.~\\ref{01}, this work defines pitch subnetwork and duration subnetwork to encode pitches and durations of musical notes respectively. These two subnetworks are composed of numbers of minicolumns with different preferences. Synaptic connections with transmission delays exist between neurons from different layers. Besides, a cluster that represents the title of a musical melody is composed of numerous individual neurons. 
This cluster has the feedforward and feedback connections with pitch and duration subnetworks. Since the BrainCog platform supports various neural models, this work takes LIF model to simulate neural dynamics.\n\\begin{figure}[!htbp]\n \\centering\n \\includegraphics[scale=0.4]{.\/fig\/music1.png}\n \\caption{The architecture of the music memory model, refined based on~\\cite{LQ2020}.}\n \\label{01}\n\\end{figure}\n \n\\emph{Storing:} Based on the encoding process, as the notes input sequently, the neurons in pitch and duration subnetworks with different preferences respond to these sequential notes and fire orderly. Meanwhile, connections between these neurons are computed and updated by the STDP learning rule. It is important to indicate that the neurons and synapses are grown dynamically. Besides, synaptic connections between the title cluster and other two subnetworks are generated and updated by the STDP learning rule simultaneously. The details of note sequence memorizing can be found in our previous work~\\cite{LQ2020}.\n\n\\emph{Retrieving:} Given the title of a musical work, the ordered notes can be recalled accurately. Since the weights of connections are updated in storing process, neural activities in the title cluster lead to the excitations of neurons in pitch and duration subnetworks. Then, the notes are retrieved in order. We use a public corpus that contains 331 classical piano works~\\cite{Krueger2018} recorded by MIDI standard format to evaluate the model. The experiments have shown that our model can memorize and retrieve the melodies with an accuracy of 99\\%. The details of the experiments have been discussed in our previous work~\\cite{LQ2020}.\n\n\n\\subsubsection{Stylistic Composition Spiking Neural Network}\n\n\nHow to learn and make music are quite complex processes for humans. Scientists have found that the memory system and knowledge experience participate in human creative behaviors~\\cite{Dietrich2004}. Many brain areas like the prefrontal cortex are engaged in human creativity~\\cite{Jung2013}. However, the details of brain mechanisms are still unclear. Inspired by the current neuroscientific findings, the BrainCog introduces a spiking neural network for learning musical knowledge and creating melodies with different styles~\\cite{LQ2021}. \n\n\\begin{figure}[!htbp]\n \\centering\n \\includegraphics[scale=0.28]{.\/fig\/2.pdf}\n \\caption{Stylistic composition model inspired by brain mechanisms. Refined based on~\\cite{LQ2021}.}\n \\label{02}\n\\end{figure}\n\n\\emph{Musical Learning:} This work proposes a spiking neural model which is composed of a knowledge network and a sequence memory network. As is shown in Fig.~\\ref{02}, the knowledge network is designed as a hierarchical structure for encoding and learning musical knowledge. These layers store the genre (such as Baroque, Classical, and Romantic), the names of famous composers and the titles of musical pieces. Neurons in the upper layers project their synapses to the lower layers. The sequence memory network stores the ordered notes which have been discussed in section~\\ref{secA}. During the learning process, synaptic connections are also projected from the knowledge network to the sequence memory network. This work also takes LIF model which is supported by the BrainCog platform to simulate neural dynamics. Furthermore, all the connections are generated and updated dynamically by the STDP learning rule. 
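\n\nTo give a flavor of this kind of sequence learning, the following is a highly simplified sketch (illustrative Python under strong simplifying assumptions, with one unit standing in for each note population; it is not the actual model):\n\\begin{verbatim}\nimport numpy as np\n\nn_notes = 128                      # toy note vocabulary\nW = np.zeros((n_notes, n_notes))   # connections between note populations\ntitle_to_first = {}                # connections from a title cluster\n\ndef learn_melody(title, notes, lr=0.1):\n    # STDP-like asymmetric strengthening from each note to its successor\n    title_to_first[title] = notes[0]\n    for a, b in zip(notes[:-1], notes[1:]):\n        W[a, b] += lr\n\ndef recall_melody(title, length):\n    # retrieve notes by following the strongest learned transitions\n    seq = [title_to_first[title]]\n    for _ in range(length - 1):\n        seq.append(int(np.argmax(W[seq[-1]])))\n    return seq\n\nlearn_melody('demo', [60, 62, 64, 65, 67])\nprint(recall_melody('demo', 5))    # [60, 62, 64, 65, 67]\n\\end{verbatim}\nThe actual model replaces these scalar transitions with spiking minicolumns, transmission delays, and STDP-driven synaptic growth.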
\n\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[scale=0.6]{.\/fig\/3.pdf}\n \\caption{A sample of a generated melody with Bach's characteristics.}\n \\label{03}\n\\end{figure}\n\n\n\\emph{Musical Composition:} Based on the learning process, both genre-based and composer-based melody composition are discussed in this paper. Given the beginning notes and the length of the melody to be generated, the genre-based composition can produce a single-part melody in a specific genre style. This task is achieved by the neural circuits of the genre cluster and the sequential memory system. Similarly, the composer-based composition can produce melodies with a composer's characteristics. The composer cluster and the sequential memory system circuits contribute to this process~\\cite{LQ2021}. We also use a classical piano dataset including 331 musical works recorded in MIDI format~\\cite{Krueger2018} to train the model. Fig.~\\ref{03} shows a sample of a melody generated in Bach's style. The details of stylistic composition can be found in our previous work~\\cite{LQ2021}.\n\n\n\nA total of 41 human listeners were invited to evaluate the quality of the generated melodies; they were divided into two groups, one of which had a musical background. Experiments have shown that the pieces produced by the model have strong characteristics of different styles and some of them sound pleasant.\n\n\\textbf{2. Brain-Inspired Sequence Production SNN}\n\nSequence production is an essential function for AI applications. Components in BrainCog enable the community to build SNN models to handle this task. In this paper, we introduce the brain-inspired symbol sequence production spiking neural network (SPSNN) model that has been incorporated into BrainCog~\\cite{fang2021spsnn}. SPSNN incorporates multiple neuroscience mechanisms including Population Coding~\\cite{xie2022geometry}, STDP~\\cite{dan2004spike}, Reward-Modulated STDP~\\cite{fremaux2016neuromodulated}, and the Chunking Mechanism~\\cite{pammi2004chunking}, most of which are covered and provided by BrainCog. After reinforcement learning, the network can memorize different sequences and produce sequences according to different rules.\n\nFor Population Coding, this model utilizes populations of neurons to represent different symbols. The whole neural loop of SPSNN is divided into the Working Memory Circuit, the Reinforcement Learning Circuit, and the Motor Neurons~\\cite{fang2021spsnn}, as shown in Fig.~\\ref{SPSNN}. The Working Memory Circuit is mainly responsible for memorizing the sequence. The Reinforcement Learning Circuit is responsible for acquiring different rules during the reinforcement learning process. The Motor Neurons can be regarded as the network's output. \n\nIn the working process of the model, the Working Memory Circuit and the Reinforcement Learning Circuit cooperate to complete the memory and production of different sequences~\\cite{fang2021spsnn}. It is worth mentioning that as background noise increases, the recall accuracy of symbols at different positions in a sequence gradually decreases, and the overall trend follows a \"U-shaped accuracy\" curve, which is consistent with experiments in psychology and neuroscience~\\cite{jiang2018production}. This consistency arises from the superposition of the primacy and recency effects. 
Our model provides a possible explanation for both effects from a computational perspective.\n\\begin{figure}[!htbp]\n\\centering\n\\includegraphics [width=8.9cm]{.\/fig\/spsnnloop}\n\\caption{The architecture of SPSNN, adopted from~\\cite{fang2021spsnn}.}\n\\label{SPSNN}\n\\end{figure}\n\n\n\\textbf{3. Commonsense Knowledge Representation Graph SNN}\n\n\nCommonsense knowledge representation and reasoning are important cornerstones on the way to realizing human-level general AI~\\cite{minsky2007emotion}. In this module, we build the Commonsense Knowledge Representation SNN (CKR-SNN) to explore whether SNNs can realize these cognitive functions.\n\n\n\nThe hippocampus plays a critical role in the formation of new knowledge memory~\\cite{schlichting2017hippocampus}. Inspired by the population coding mechanism found in the hippocampus~\\cite{ramirez2013creating}, this module encodes the entities and relations of a commonsense knowledge graph into different populations of neurons. Via the spike-timing-dependent plasticity (STDP) learning rule, the synaptic connections between neuron populations are formed by guiding the corresponding neuron populations to fire sequentially~\\cite{KRRfang2022}.\n\nAs Fig.~\\ref{GSNN} shows, the neuron populations together construct a giant graph spiking neural network, which contains the commonsense knowledge. In this module, CKR-SNN represents a subset of the commonsense knowledge graph ConceptNet~\\cite{Conceptnet55}. After training, CKR-SNN can complete conceptual knowledge generation and other cognitive tasks~\\cite{KRRfang2022}. \n\n\n\n\\begin{figure}\n\\centering\n\\includegraphics [width=8.9cm]{.\/fig\/GSNN}\n\\caption{Graph Spiking Neural Networks for Commonsense Representation, adopted from~\\cite{KRRfang2022}.}\n\\label{GSNN}\n\\end{figure}\n\n\n\n\n\n\n\n\n\\textbf{4. Causal Reasoning SNN}\n\nIn BrainCog, we constructed a causal reasoning SNN as an instance to verify the feasibility of realizing deductive and inductive reasoning with spiking neural networks. Specifically, the Causal Reasoning Spiking Neural Network (CRSNN) module contains a brain-inspired causal reasoning spiking neural network model~\\cite{fang2021crsnn}.\n\n\nInspired by the causal reasoning process of the human brain~\\cite{pearl2018book}, this model explores how to encode a static causal graph into a spiking neural network and implement subsequent reasoning on it. The 3D model of CRSNN is shown in Fig.~\\ref{CRSNN}.\n\nInspired by neuroscience, the CRSNN module adopts the population coding mechanism and uses neuron populations to represent the nodes and relationships in the causal graph. Each node indicates a different event in the causal graph, as shown in Fig.~\\ref{CRSNN}. By giving current stimulation to different neuron populations in the spiking neural network and combining this with the STDP learning rule~\\cite{dan2004spike}, CRSNN can encode the topology between different nodes of a causal graph into a spiking neural network. Furthermore, based on this network, CRSNN completes subsequent deductive reasoning tasks.\n\nThen, by introducing an external evaluation function, we can trace the specific reasoning path in the working process of the network according to the firing patterns of the model, which gives CRSNN more interpretability compared to traditional ANN models~\\cite{fang2021crsnn}. 
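\n\nTo illustrate the basic idea of representing a graph with neuron populations and reading reasoning chains out of the propagation of activity, the toy Python sketch below encodes a three-node causal chain as neuron populations and propagates activity from a stimulated cause node. It is only a schematic illustration under simplifying assumptions: the graph, population size and weight values are invented for the example, and the STDP dynamics used by CKR-SNN and CRSNN are collapsed into a single weight increment per causal edge.\n\\begin{verbatim}\nimport numpy as np\n\n# Toy causal chain: rain -> wet_ground -> slippery (illustrative only).\nnodes = ['rain', 'wet_ground', 'slippery']\nedges = [('rain', 'wet_ground'), ('wet_ground', 'slippery')]\npop_size = 10                      # neurons per concept population\nn = len(nodes) * pop_size\nidx = {name: np.arange(i * pop_size, (i + 1) * pop_size)\n       for i, name in enumerate(nodes)}\n\n# 'Training': firing the cause population just before the effect\n# population strengthens the corresponding synapses (stand-in for STDP).\nw = np.zeros((n, n))\nfor cause, effect in edges:\n    w[np.ix_(idx[cause], idx[effect])] += 0.2\n\n# 'Deductive reasoning': stimulate the 'rain' population and let the\n# activity propagate through the learned connections for a few steps.\nactivity = np.zeros(n)\nactivity[idx['rain']] = 1.0\nfor _ in range(3):\n    activity = np.clip(activity + activity @ w, 0.0, 1.0)\n\nfor name in nodes:\n    print(name, float(activity[idx[name]].mean()))\n\\end{verbatim}\nReading out which populations become active, and in which order, corresponds to tracing the reasoning path mentioned above.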
\n\n\n\n\\begin{figure}\n\\centering\n\\includegraphics [width=8.8cm]{.\/fig\/cg3d}\n\\caption{CRSNN 3D model, adapted from~\\cite{fang2021crsnn}.}\n\\label{CRSNN}\n\\end{figure}\n\n\n\n\n\n\\subsection{Social Cognition}\n\nThe nature and neural correlates of social cognition are an advanced topic in cognitive neuroscience. In the field of artificial intelligence and robotics, there are few in-depth studies that take the neural correlates and brain mechanisms of biological social cognition seriously. Although the scientific understanding of biological social cognition is still at a preliminary stage~\\cite{zeng2018toward}, we integrate the available biological findings into a brain-inspired model of social cognition to extend the functions of BrainCog.\n\nUnderstanding ourselves and other people is a prerequisite for social cognition. \n\nAn individual perceives the world through their own body, and the perception of one's own body is central to knowing oneself. Neuroscientific research shows that the inferior parietal lobule (IPL) is activated when subjects see self-generated actions~\\cite{macuga2011selective} and their own faces~\\cite{sugiura2015neural}. Similar to the IPL, the insula is activated in bodily ownership and self-recognition tasks~\\cite{craig2009you}. \n\nInferring the mental states of others plays an important role in understanding other people. Theory of mind is the ability to distinguish between self and others and to infer others' mental states (such as desires, goals and beliefs) in a social context~\\cite{shamay-tsoory_dissociation_2007,sebastian_neural_2012,dennis2013cognitive}. This ability can help us reasonably infer other people's policies and goals. Inspired by this, we believe that applying theory of mind to an agent's decision-making process will improve the agent's inferences about other agents, so that it can take more reasonable actions. Neuroscientific research~\\cite{abu-akel_neuroanatomical_2011,hartwright_multiple_2012,hartwright_special_2015, koster-hale_theory_2013} shows that the brain areas related to theory of mind are mainly the TPJ, part of the PFC, the ACC and the IFG. The IPL contained in the TPJ mainly represents self-relevant information, while the pSTS represents information related to others. The insula, representing the abstract self, can be stimulated by self-related information~\\cite{zeng2018toward}. When theory of mind is engaged, the IFG suppresses self-relevant information. Therefore, the TPJ inputs other-relevant information into the PFC. The ACC evaluates the value of others' states, so as to help the PFC infer others' intentions. The process of inferring others' goals or behaviors can be understood as simulating other people's decision-making~\\cite{suzuki_learning_2012}. 
Therefore, this process is regulated by dopamine from the substantia nigra pars compacta\/ventral tegmental area (SNc\/VTA).\n\n\n\n\n\n\n\nWith the neuron model and STDP function provided by the BrainCog framework, a brain-inspired social cognition model is constructed, as shown in Fig.~\\ref{BISC}.\n\n\n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[width=8cm]{fig\/SC.png}\n\\end{center}\n\\caption{Brain-inspired social cognition model.}\n\\label{BISC}\n\\end{figure}\n\n\n\n\nThe brain-inspired social cognition model contains two pathways: the bodily self-perception pathway and the theory of mind pathway.\n\nThe bodily self-perception pathway (shown in Fig.~\\ref{Self}) consists of the inferior parietal lobule spiking neural network (IPL-SNN) and the insula spiking neural network (Insula-SNN). The IPL-SNN realizes motor-visual associative learning. The Insula-SNN realizes the abstract representation of oneself: when the visually detected movement matches the expected result of the robot's own movement, the insula is activated, and the robot considers that the moving part in its field of vision belongs to itself.\n\n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[width=8cm]{fig\/Self.png}\n\\end{center}\n\\caption{The architecture and pathway of bodily self-perception in the brain-inspired social cognition model.}\n\\label{Self}\n\\end{figure}\n\nThe architecture of the IPL-SNN and the process of motor-visual associative learning are shown in Fig.~\\ref{IPL}. The vPMC generates the robot's own motion angle information, and the STS outputs the motion angle information detected by vision. According to the STDP mechanism and the spike time difference of neurons in IPLM and IPLV, the motor-visual association is established. \n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[width=8cm]{fig\/IPL.png}\n\\end{center}\n\\caption{Motor-visual associative learning in IPL, adapted from~\\cite{zeng2018toward}.}\n\\label{IPL}\n\\end{figure}\n\nThe architecture of the Insula-SNN is shown in Fig.~\\ref{Insula}. The insula receives angle information from IPLV and STS. After the motor-visual associative learning in IPL, IPLV outputs the visual feedback angle predicted from the robot's own motion, and STS outputs the motion angle detected by vision. If the two are consistent, the insula is activated.\n\n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[width=8cm]{fig\/Insula.png}\n\\end{center}\n\\caption{The architecture of Insula-SNN.}\n\\label{Insula}\n\\end{figure}\n\n\n\nThe theory of mind pathway~\\cite{zhao2022brain} is mainly composed of the perspective taking module, the policy inference module, the action prediction module, and the state evaluation module (shown in Fig.~\\ref{ToM}).\n\n\\begin{figure}[!htbp]\n\\centering\n\\includegraphics[width=0.48\\textwidth]{.\/fig\/ToM1.jpg}\n\\caption{The architecture of theory of mind in the brain-inspired social cognition model, refined based on~\\cite{zhao2022brain}.}\n\\label{ToM}\n\\end{figure}\n\nThe perspective taking (also called self-perspective inhibition~\\cite{zeng2018toward}) module simulates the function of suppressing self-relevant information in the process of distinguishing self from others. Information related to the self can stimulate a representation of the abstract self. When we infer others, this self-related information can be suppressed. We assume that the agent knows the environment. 
When an agent with theory of mind (ToM) infers the observation of others, it only needs to transfer its own observation to the position of others. A matrix is used to represent the observed environment, where 1 indicates that the area can be observed from the location and 0 indicates that it cannot. Another matrix is used to represent the position of objects: positions occupied by objects are represented by 1, and 0 otherwise. By taking the intersection of the two matrices, the estimation of others' states is obtained. Since the IFG helps the brain suppress self-relevant information in the process of ToM, an agent with ToM inhibits its own representation of states and further infers the behavior of others from these estimates of others' states. In summary, the input of this module is the observation vector and the matrix of the environment. The output is the observation vector from the others' perspective.\n\nThe dorsolateral prefrontal cortex (DLPFC) stores working memory and predicts others' behaviors. The action prediction module is used to simulate the function of the DLPFC. Its input is the estimate of others' states output by the perspective taking module. The module is a single-layer spiking neural network with lateral inhibition in the output layer. The network is trained by R-STDP. The reward is derived from the difference between the predicted value and the real value: when the predicted value is consistent with the real value, the reward is positive; otherwise, the reward is negative.\n\nThe state evaluation module, composed of a single-layer spiking neural network, simulates the function of the ACC brain area. The input of the module is the predicted state and the output is safe or unsafe.\n\nFinally, we conducted two experiments to test the brain-inspired social cognition model.\n\n\\textbf{1. Multi-Robots Mirror Self-Recognition Test }\n\nThe mirror test is the most representative test of social cognition. Only a few animals have passed the test, including chimpanzees~\\cite{RN826}, orangutans~\\cite{RN827}, bonobos~\\cite{RN828}, gorillas~\\cite{RN829,RN830}, Asian elephants~\\cite{RN831}, dolphins~\\cite{RN832}, orcas~\\cite{RN833}, and macaque monkeys~\\cite{RN835}. Based on the mirror test, we proposed the Multi-Robots Mirror Self-Recognition Test~\\cite{zeng2018toward}, in which three robots with identical appearance move their arms randomly in front of a mirror at the same time, and each robot needs to determine which mirror image belongs to it. The experiment includes a training stage and a test stage.\n\nThe training stage is shown in Fig.~\\ref{train}. Three blue robots with identical appearance move randomly in front of the mirror at the same time. Each robot establishes the motor-visual association according to the motion angle of its own arm and the angle detected by its vision.\n\n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[width=8cm]{fig\/mirrortest_train.jpg}\n\\end{center}\n\\caption{Training stage in multi-robots mirror self-recognition test, adapted from~\\cite{zeng2018toward}.}\n\\label{train}\n\\end{figure}\n\nThe test stage is shown in Fig.~\\ref{test}. In the test stage, the robot predicts the visual feedback generated by its arm movement according to the training results. 
By comparing the similarity between the predicted visual feedback and the detected visual results, the robot can identify which mirror image belongs to it. \n\n\n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[width=8cm]{fig\/mirrortest_test.jpg}\n\\end{center}\n\\caption{Test stage in multi-robots mirror self-recognition test, adapted from~\\cite{zeng2018toward}.}\n\\label{test}\n\\end{figure}\n\nIn the bodily self-perception pathway, the inputs are the angle of the robot's random motion and the angle detected by the robot's vision. After training and testing, the output is an image containing the result of visual motion detection and the result of self-motion prediction. The motion track corresponding to the red line in the visual motion detection result is generated by the robot itself. The result is shown in Fig.~\\ref{result}.\n\n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[width=8cm]{fig\/mirrortest_result.png}\n\\end{center}\n\\caption{The result of IPL-SNN.}\n\\label{result}\n\\end{figure}\n\n\\textbf{2. AI Safety Risks Experiment }\n\nThe AI safety risks experiment is shown in Fig.~\\ref{fig_sim}. After observing the behavior of the other two agents, the green agent can infer the behaviors of others by utilizing its ToM ability when safety risks may arise due to environmental changes. The experiment was conducted in environments with several simulated types of safety risks (e.g., an intersection that blocks the agents' view and may cause them to collide at the crossing).\n\nThe experiment shows that the agent can infer others even when they have different perspectives. In the first two environments, the agent observes the movement of others, and in the third environment, the agent predicts others' actions. We verify the effectiveness of the model by taking rescue behavior as the criterion when other agents might be in danger. The experimental results show that the agent with ToM can predict the danger faced by another agent in a slightly changed environment after watching other agents move in the previous environments.\n\n\\begin{figure}[!htbp]\n\\centering\n\\subfloat[Green agent observes others' behaviors (example 1)]{\\includegraphics[height=0.65in]{.\/fig\/Figure6A.jpg}%\n\\label{1}}\n\\hfil\n\\subfloat[Green agent observes others' behaviors (example 2)]{\\includegraphics[height=0.65in]{.\/fig\/Figure6B.jpg}%\n\t\\label{2}}\n\\hfil\n\\subfloat[Test example 1 (with ToM)]{\\includegraphics[height=0.65in]{.\/fig\/Figure7A.jpg}%\n\t\\label{3}}\n\\hfil\n\\subfloat[Test example 2 (without ToM)]{\\includegraphics[height=0.65in]{.\/fig\/Figure7B.jpg}%\n\t\\label{4}}\n\\caption{Comparison diagram of experimental results. (a) Example 1. The green agent observes others' behaviors. (b) Example 2. The green agent observes others' behaviors. (c) The green agent with ToM can help other agents avoid risks. (d) The green agent without ToM is unable to help other agents avoid risks. Similar results can be found in~\\cite{zhao2022brain}.}\n\\label{fig_sim}\n\\end{figure}\n\n\\section{Brain Simulation}\nBrain simulation includes two parts: brain cognitive function simulation and multi-scale brain structure simulation. We incorporate as much published anatomical data as possible to simulate cognitive functions such as decision-making and working memory. Anatomical and imaging multi-scale connectivity data are used to make whole-brain simulations from mouse and macaque to human more biologically plausible.\n\n\\subsection{Brain Cognitive Function Simulation}\n\n\\textbf{1. 
\\emph{Drosophila}-inspired Decision-Making SNN}\n\n\\emph{Drosophila} decision-making consists of value-based nonlinear decision-making and perception-based linear decision-making, where the nonlinear decision helps to amplify the subtle distinctions between conflicting cues and make winner-takes-all choices~\\cite{tang2001choice}. In this paper, the BrainCog framework is used to build the \\emph{Drosophila} nonlinear and linear decision-making pathways as shown in Fig.~\\ref{pi}a-b. The entire model consists of a training phase and a testing phase, the same as in~\\cite{zhao2020neural}. In the training phase, a two-layer SNN with LIF neurons is trained by reward-modulated STDP, which combines local STDP synaptic plasticity with global dopamine regulation. The training phase learns the safe pattern (upright green T) and the punished pattern (inverted blue T)~\\cite{zhao2020neural}. Thus the green color and upright T shape are associated with safety, while the blue color and inverted T shape are associated with danger. \n\n\n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[width=8cm]{.\/fig\/3.jpg}\n\\end{center}\n\\caption{(a) Linear Pathway. (b) Nonlinear Pathway. (c) Experiments for training and choice phases. (d) Experimental results of linear and nonlinear networks under the dilemma. The X-axis refers to the color density, and the Y-axis represents the PI values. Refined based on~\\cite{zhao2020neural}.}\n\\label{pi}\n\\end{figure}\n\nThe two cues (color and shape) are recombined during the testing phase, requiring the linear and nonlinear pathways to make a choice between an inverted green T and an upright blue T, as shown in Fig.~\\ref{pi}c. The linear pathway directly uses the knowledge acquired during the training phase to make decisions. The nonlinear network models the recurrent loop of the DA\u2011GABA\u2011MB circuit~\\cite{tang2001choice,zhang2007dopamine,zhou2019suppression}: KC activates the anterior paired lateral (APL) neurons, which in turn release the GABA transmitter to inhibit the activity of KC. KC also provides the mushroom body output neurons (MBON) with excitatory input in order to generate behavioral choices. When faced with conflicting cues, the level of DA increases rapidly and produces mutual inhibition with APL, thereby producing a disinhibitory effect on KC. The excitatory connection between DA and MBON also helps speed up decision-making.\n\nTo verify the consistency of the \\emph{Drosophila}-inspired decision-making SNN with the conclusions from neuroscience~\\cite{tang2001choice}, we record the behavioral choices of our model under different color intensities over a period of time. First, we run the network for 500 steps to count the time $t_1$ spent selecting behavior 1 (avoiding) and the time $t_2$ spent selecting behavior 2 (approaching). Then we calculate the preference index (PI) under different color intensities: $PI=\\frac{\\left | t_1-t_2 \\right | }{\\left | t_1+t_2 \\right | } $. From Fig.~\\ref{pi}d, we find that the nonlinear circuit achieves a gain-gating effect that enhances the relatively salient cue and suppresses the less salient cue, thereby displaying a nonlinear sigmoid-shaped curve~\\cite{zhao2020neural}. In contrast, the linear network cannot amplify the difference between conflicting cues, and thus makes an ambiguous choice (linear-shaped curve)~\\cite{zhao2020neural}. 
This work shows that, by drawing on the neural mechanisms and structures underlying nonlinear and linear decision-making in the \\emph{Drosophila} brain, the brain-inspired computational model implemented with BrainCog obtains conclusions consistent with the~\\emph{Drosophila} biological experiments~\\cite{tang2001choice}.\n\n\\textbf{2. PFC Working Memory }\n\n\nUnderstanding the detailed differences between the brains of humans and other species on multiple scales will help illuminate what makes us unique as a species~\\cite{zhang2021comparison}. The neocortex is associated with many cognitive functions such as working memory, attention and decision making~\\cite{miller2000prefontral,nieder2003coding,\nbishop2004prefrontal,koechlin2003architecture,\nwood2003human}. Based on the human brain neuron database of the Allen Institute for Brain Science, the key membrane parameters of human neurons were extracted \\footnote{http:\/\/alleninstitute.github.io\/AllenSDK\/cell\\_types.html}. Different types of human and rodent neuron models were established based on the adaptive Exponential Integrate-and-Fire (aEIF) model~\\cite{Fourcaud2003How,brette2005adaptive}, which is supported by BrainCog.\n\nWe refined the PFC model proposed by Hass and colleagues \\footnote{http:\/\/senselab.med.yale.edu\/ModelDB\/}~\\cite{hass2016detailed}. Subsequently, a 6-layer PFC column model based on biometric parameters was established~\\cite{shapson2021connectomic}. The pyramidal cells and interneurons were distributed in proportions taken from the literature~\\cite{beaulieu1993numerical,defelipe2011evolution} and connected with different connection probabilities for different types of neurons based on previous studies~\\cite{gibson1999two,hass2016detailed,\ngao2003dopamine}. First, the accuracy of information maintenance was tested on the rodent PFC network model.\n\n\n\\begin{figure}[!htbp]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{.\/fig\/final.png}\n\\caption{Anatomical and network simulation diagram. (a) The connections of a single PFC column. (b) The distribution proportion of different types of neurons in each column layer. (c) Network persistent activity performance. Refined based on~\\cite{zhangqian2021comparison}.}\n\\end{figure}\n\nKeeping the network structure and other parameters unchanged, replacing the rodent neurons with human neurons significantly improves the accuracy and integrity of the image output. From an evolutionary perspective, the lower membrane capacitance of human neurons facilitates firing. This change improves the efficiency of information transmission, which is consistent with the results of biological experiments~\\cite{eyal2016unique}. This data-driven PFC column model provides an effective simulation-validation platform for studying other high-level cognitive functions~\\cite{2020Computational}.\n\n\\subsection{Multi-scale Brain Structure Simulation}\n\n\\textbf{1. Neural Circuit}\n\n\\subsubsection{Microcircuit}\nBrainCog implements a BDM-SNN model inspired by the PFC-BG-ThA-PMC decision-making neural circuit in the mammalian brain (as shown in Fig.~\\ref{dm})~\\cite{zhao2018brain}. The BDM-SNN models the excitatory and inhibitory reciprocal connections between the basal ganglia nuclei~\\cite{lanciego2012functional}: (1) Excitatory connections: STN-Gpi, STN-Gpe. (2) Inhibitory connections: StrD1-Gpi, StrD2-Gpe, Gpe-Gpi, Gpe-STN. 
The direct pathway (PFC-StrD1), indirect pathway (PFC-StrD2), and hyperdirect pathway (PFC-STN) from PFC to BG are further constructed. The output from BG transmits an inhibitory connection to the thalamus and finally excites the PMC~\\cite{parent1995functional}. In addition, excitatory connections are formed between the PFC and the thalamus, and lateral inhibition exists in the PMC. Such a brain-inspired neural microcircuit, consisting of connections among different cortical and subcortical brain areas and incorporating DA-regulated learning rules, enables human-like decision-making ability.\n\n\n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[width=8cm]{.\/fig\/bdm.jpg}\n\\end{center}\n\\caption{The microcircuit of PFC-BG-ThA-PMC. Refined based on~\\cite{zhao2018brain}.}\n\\label{dm}\n\\end{figure}\n\n\\subsubsection{Cortical Column}\n\nA mammalian thalamocortical column is constructed in BrainCog based on detailed anatomical data~\\cite{Izhikevichlargescale}. This column is made up of a six-layered cortical structure consisting of eight types of excitatory and nine types of inhibitory neurons. The thalamic neurons cover two types of excitatory neurons, inhibitory interneurons, and GABAergic neurons in the reticular thalamic nucleus (RTN). Neurons are simulated by the \\emph{Izhikevich} model, which BrainCog applies to reproduce the specific spiking patterns associated with different neural morphologies. For example, excitatory neurons (pyramidal and spiny stellate cells) always exhibit RS (Regular Spiking) or Bursting modes, while inhibitory neurons (basket and non-basket interneurons) show FS (Fast Spiking) or LTS (Low-Threshold Spiking) patterns. Each neuron has a number of dendritic branches to accommodate a large number of synapses. The synaptic distribution and the microcircuits are reconstructed in BrainCog based on previous works~\\cite{Izhikevichlargescale, Binzegger2004A}. Fig.~\\ref{minicolumn}(a) describes the details of the minicolumn. The column contains 1,000 neurons and more than 4,200,000 synapses. To understand the network further, we stimulate the spiny stellate cells in layer 4 and observe the activity of the whole network; Fig.~\\ref{minicolumn}(b) shows the neural activity after the cells in layer 4 receive the external stimulation. \n\n\\begin{figure}[!htbp]\n\\centering\n\\includegraphics[width=0.48\\textwidth]{.\/fig\/minicolumn.png}\n\\caption{The thalamocortical column. (a) shows the structure of the column and (b) describes the running activities of the unfolded column when the neurons in Layer 4 receive the external stimulus.}\n\\label{minicolumn}\n\\end{figure}\n\n\\textbf{2. Mouse Brain}\n\n\nThe BrainCog mouse brain simulator is a spiking neural network model covering 213 brain areas of the mouse brain, which are classified according to the Allen Mouse Brain Connectivity Atlas~\\cite{richardson2003subthreshold}~\\footnote{http:\/\/connectivity.brain-map.org}. Each neuron is modeled by a conductance-based spiking neuron model and simulated with a resolution of $dt = 1$ ms. 
A total of 6 types of neurons are included in this model: excitatory neurons (E), interneuron-basket cells (I-BC), interneuron-Martinotti cells (I-MC), thalamocortical relay neurons (TC), thalamic interneurons (TI) and thalamic reticular neurons (TRN). In the following, the neuron type is indexed by\n\n$$\nj \\in \\{\\textrm{E},\\, \\textrm{I-BC},\\, \\textrm{I-MC},\\, \\textrm{TC},\\, \\textrm{TI},\\, \\textrm{TRN}\\}.\n$$\n\nWe use the aEIF neuron model following previous work~\\cite{jiang2015principles,izhikevich2008large,tchumatchenko2014oscillations}; the parameters used in this study are summarized in Tab.~\\ref{mouse1}.\n\n\\begin{table}[h]\n\t\\caption{Main parameters of the different types of neuron models.}\n\t\\centering\n\t\\resizebox{0.8\\linewidth}{!}{\n\t\t\\begin{tabular}{lllllll}\n\t\t\t\\toprule \n\t\t &$V_{th,j}(mV)$ &$ V_{r,j}(mV)$ & $\\tau_{v,j}$ &$ \\tau_{w,j}$ &$ \\alpha _{j}$ & $\\beta_{ j} $\\\\\n\t\t\t\\midrule\t\n\t\t E & -50 & -110 & 100 & - & 0 & 0 \\\\\n\t\t\t\n\t\t\t\\midrule\n\t\t I-BC & -44 & -110 & 100 & 20 & -2 & 4.5 \\\\\n\t\t \\midrule\n\t\t I-MC & -45 & -66 & 85 & 20 & -2 & 4.5 \\\\\n\t\t \n\t\t \\midrule\n\t\t TC & -50 & -60 & 200 & - & 0 & 0 \\\\\n\t\t \\midrule\n\t\t TI & -50 & -60 & 20 & 20 & -2 & 4.5 \\\\\n\t\t \\midrule\n\t\t TRN & -45 & -65 & 40 & 20 & -2 & 4.5 \\\\ \n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t}\n\t\\label{mouse1}\n\t\n\\end{table}\n\n\nThe connections between brain areas are based on the quantitative anatomical dataset of the Allen Mouse Brain Connectivity Atlas. The methods for data generation have been described previously in~\\cite{oh2014mesoscale}. The proportions of the different types of neurons were adopted from previous studies~\\cite{izhikevich2008large, markram2004interneurons}.\n\nThe number of each type of neuron in the network is shown in Tab.~\\ref{mouse2}.\n\\begin{table}[h]\n\t\\caption{Number of different types of neurons in the BrainCog mouse brain simulator.}\n \\centering\n \\resizebox{0.8\\linewidth}{!}{\n\t\t\\begin{tabular}{lllllll}\n\t\t\t\\toprule \n Neuron Type & E & I\\_BC & I\\_MC & TC & TI & TRN \\\\ \n \t\\midrule\n Neuron Number & 56100 & 14960 & 7480 & 1300 & 260 & 520 \\\\ \n \t\\bottomrule\n \\end{tabular}\n }\n \\label{mouse2}\n\\end{table}\n\nThe spontaneous discharge of the model without external stimulation is shown in Fig.~\\ref{figmouse1}.\n \n\\begin{figure}\n\\centering\n\\includegraphics[width=0.48\\textwidth]{.\/fig\/rat.jpg}\n\\caption{Running of the BrainCog mouse brain simulator. Each illuminated point is a neuron spiking at time $t$, and the point color indicates the brain area to which the neuron belongs.}\n\\label{figmouse1}\n\\end{figure}\nThis is an open platform, and both the parameters of the neuron model and the number of each type of neuron can be set flexibly.\n\n\\textbf{3. Macaque Brain}\n\nThe BrainCog macaque brain simulator is a large-scale spiking neural network model covering 383 brain areas~\\cite{Dharmendra2010}. We used the multi-scale connectome transformation method~\\cite{a2017Zhang} on the EGFP (enhanced green fluorescent protein) results~\\cite{Bakker2012, Chaudhuri2015ALC, Christine2010} to obtain the approximate number of cells per region and the approximate number of synaptic connections between two connected regions~\\cite{Liu2016}. The final macaque model includes 1.21 billion spiking neurons and 1.3 trillion synapses, about 1\/5 of a real macaque brain. \nSpecifically, the details of the brain micro-circuit are also considered in the simulation. 
The types of neurons in the micro-circuit include excitatory neurons (90\\% of the neurons in the simulation are of this type) and inhibitory neurons (10\\% of the neurons in the simulation are of this type)~\\cite{Davis1979}. The spiking neurons follow the Hodgkin\u2013Huxley model, which is supported by BrainCog. The running demo of the model is shown in Fig.~\\ref{mac}(a). To use the macaque model in the platform, the neuron number in each region, the connectome power between regions, and the proportion between excitatory and inhibitory neurons can be set flexibly.\n\n\\begin{figure}[!htbp]\n\\centering\n\\includegraphics[width=0.48\\textwidth]{.\/fig\/mac.jpg}\n\\caption{Running of the macaque brain (a) and the human brain (b) model. Each illuminated point is a neuron spiking at time $t$, and the point color indicates the region to which the neuron belongs.}\n\\label{mac}\n\\end{figure}\n\n\n\\textbf{4. Human Brain}\n\nThe BrainCog human brain simulator is built with an approach similar to that of the BrainCog macaque brain simulator. By using the EGFP results of the\nhuman Brainnetome Atlas~\\cite{Fan2016, Klein2012}, the BrainCog human brain simulator consists of 246 brain areas. It should be noted that, since no directed human brain connectome was available at the time this paper was released, the BrainCog human brain simulator keeps bidirectional connections among brain areas. The details of the micro-circuit, including the excitatory and inhibitory neurons, are also considered. The final model (Fig.~\\ref{mac}(b)) includes 0.86 billion spiking neurons and 2.5 trillion synapses, which is 1\/100 of a real human brain. To use this model in the platform, the neuron number per region, the connectome power, and the proportion between excitatory and inhibitory neurons can be set flexibly.\nMoreover, all the simulations were performed on distributed memory clusters~\\cite{Liu2016} at the supercomputing center affiliated with the Institute of Automation, Chinese Academy of Sciences, Beijing, China. The cluster, named the fat cluster, is composed of 16 blade nodes and 2 \"fat\" computing nodes. In order to improve the network communication efficiency and the simulation performance, the most connected areas were simulated in the \"fat\" computing nodes to minimize inter-node communications, while the other areas were randomly distributed in the blade nodes. The simulation shows the ability of the framework to be deployed on supercomputers or large-scale computer clusters.\n\n\\section{BORN: A spiking neural network driven Artificial Intelligence Engine based on BrainCog}\n\n\\begin{figure*}[htb]\n\\centering\n\\includegraphics[width=1\\textwidth]{.\/fig\/vision.jpg}\n\\caption{The functional framework and vision of BORN.}\n\\label{bornvision}\n\\end{figure*}\n\nBrainCog is designed to be an open source platform that enables the community to build spiking neural network based brain-inspired AI models and brain simulators. Based on the essential components developed for BrainCog, one can develop their own domain-specific or general-purpose AI engines. To further demonstrate how BrainCog can support building a brain-inspired AI engine, here we introduce BORN, an ongoing SNN-driven brain-inspired AI engine ultimately designed for general-purpose living AI. 
As shown in Fig.~\\ref{bornvision}, the high-level architecture of BORN integrates spatial and temporal plasticity to realize perception and learning, decision-making, motor control, working memory, long-term memory, attention and consciousness, emotion, knowledge representation and reasoning, social cognition and other brain cognitive functions. Spatial plasticity incorporates multi-scale neuroplasticity principles at the micro, meso and macro scales. Temporal plasticity considers learning, developmental and evolutionary plasticity at different time scales. \n\n\nAs an essential component of BORN, we propose a developmental plasticity-inspired adaptive pruning (DPAP) model that enables complex deep SNNs and DNNs to gradually evolve into brain-inspired, efficient and compact structures, and eventually improves learning speed and accuracy in the extremely compressed networks~\\cite{Han2022Developmental}. The evolutionary process for the brain includes, but is not limited to, searching for the proper connectome among different building blocks of the brain at multiple scales (e.g., neurons, microcircuits, brain areas). BioNAS for BORN uses brain-inspired neural architecture search to construct SNNs with diverse motifs found in the brain and experimentally verifies that SNNs with rich motif types perform better than plain feedforward SNNs~\\cite{shenBrainInspired2022}.\n\n\nHow the human brain selects and coordinates various learning methods to solve complex tasks is crucial for understanding human intelligence and inspiring future AI. BORN is dedicated to addressing critical research issues like this. The learning framework of BORN consists of multi-task continual learning, few-shot learning, multi-modal concept learning, online learning, lifelong learning, teaching-learning, transfer learning, etc. \n\nTo demonstrate the ability and principles of BORN, we provide a relatively complex application on emotion-dependent robotic music composition and playing. This application requires a humanoid robot to compose and play music depending on visual emotion recognition. It requires BORN to provide cognitive functions such as visual emotion recognition, sequence learning and generation, knowledge representation and reasoning, and motor control. This application of BORN starts with perception and learning, and ends with motor output. \n\nIt includes three modules implemented by BrainCog: the visual (emotion) recognition module, the emotion-dependent music composition module, and the robot music playing module. As shown in Fig.~\\ref{rmp}, the visual emotion recognition module enables the robot to recognize the emotions expressed in images captured by the humanoid robot's eyes. The emotion-dependent music composition module can generate music pieces according to various emotional inputs. When a picture is shown to the robot, the visual emotion recognition network first identifies the emotions expressed in the picture, such as joy or sadness. The robot then selects or composes the music piece that best matches the emotions in the picture. Finally, with the help of the robot music playing module, the robot controls its arms and fingers in a series of movements, thus playing the music on the piano. 
Some details are introduced as follows:\n\n\\begin{figure*}[!htbp]\n\\centering\n\\includegraphics[width=1\\textwidth]{.\/fig\/rmp.jpg}\n\\caption{The procedure of multi-cognitive-function coordinated, emotion-dependent music composition and playing by a humanoid robot based on BORN.}\n\\label{rmp}\n\\end{figure*}\n\n\n\n\\emph{1) Visual Emotion Recognition: }\nFor emotion recognition, inspired by the ventral visual pathway, we construct a deep convolutional spiking neural network with the LIF neuron model and the surrogate gradient provided by BrainCog. The structure of the network is 32C3-32C3-MP-32C3-32C3-300-7, where 32C3 denotes a convolutional layer with 32 output channels and a kernel size of 3, and MP denotes max pooling. The mean firing rate is used to make the final prediction. We use the Adam optimizer and the mean squared error loss. The initial learning rate is set to 0.001 and decays to 1\/10 of its previous value every 40 epochs, for a total of 100 epochs. We use the Emotion6 dataset~\\cite{peng2015mixed} to train and test our model. The Emotion6 dataset is composed of 6 emotion categories (anger, disgust, fear, joy, sadness and surprise), and each category consists of 330 samples. On this basis, we extend the original Emotion6 dataset with an additional exciting-emotion category collected online. 80\\% of the images are used as the training set, and the remaining 20\\% are used as the test set. \n\n\\emph{2) Emotion-dependent Music Composition: }\nListening to music can make us emotional, and when people feel happy or sad, they often express their feelings with music. The amygdala plays a key role in human emotion. Inspired by this mechanism, we constructed a simple spiking neural network to simulate this important area, representing different types of emotions and learning their relationships with other music-related brain areas. As shown in Fig.~\\ref{rmp}, the amygdala network is composed of several LIF neurons supported by BrainCog, and the connections from this cluster are projected to the musical sequence memory networks. \n\nDuring the learning process, the amygdala, the PFC, and the musical sequence memory networks cooperate with each other and form complex neural circuits. Here, connections are updated by the STDP learning rule. The dataset used here also contains the 331 MIDI files of classical piano works~\\cite{Krueger2018}, and it is important to note that a part of these musical works is labeled with different emotional categories (such as happy, depressed, passionate and beautiful). \n\nIn the generation process, given the beginning notes and a specific emotion type, the model generates a series of notes and finally forms a melody with the particular emotion.\n\n\\emph{3) Robot Music-Playing: }\nThe humanoid robot iCub is used to validate the abilities of robotic music composition and playing depending on the result of visual emotion recognition. The iCub robot has a total of 53 degrees of freedom throughout the body. In the piano playing task, we use 6 degrees of freedom of the head, 3 degrees of freedom of the torso, and 16 degrees of freedom for each of the left and right arms (including the left and right hands). By default, we mainly control the index fingers to press the keys; in the multi-fingered playing mode, we mainly control the thumbs, the index fingers, and the middle fingers to press the keys. 
During playing, the robot controls the movement of its hands in sequence according to the generated sequence of musical notes and presses the keys with the corresponding fingers, thereby completing the performance. For each note to be played, the corresponding playing arm needs to complete the entire process of moving, waiting, pressing the key, holding, and releasing the key according to the beat. During the playing process, we also control the movements of the robot's head and the non-playing hand to match the performance.\n\nWe have constructed a multi-brain-area coordinated robot motor control SNN model based on the brain's motor control circuit. The SNN model is built with LIF neurons and implements SMA, PMC, basal ganglia and cerebellum functions. The musical notes are first processed by the SMA, PMC and basal ganglia networks to generate high-level target movement directions, and the output of the PMC is encoded by neuron populations into target movement directions. The fused population coding of movement directions is further processed by the cerebellum model for low-level motor control. The cerebellum SNN module consists of Granular Cells (GCs), Purkinje Cells (PCs) and Deep Cerebellar Nuclei (DCN), which implement three-level residual learning in motor control. The DCN network generates the final joint control outputs for the robot arms to perform the playing movement.\n\n\n\n\\section{Conclusion}\nBrainCog aims to provide a community-based open source platform for developing spiking neural network based AI models and cognitive brain simulators. It integrates multi-scale biologically plausible computational units and plasticity principles. Different from existing platforms, BrainCog incorporates and provides task-ready SNN models for AI, and supports brain function and structure simulations at multiple scales. With the basic and functional components provided in the current version of BrainCog, we have shown how a variety of models and applications can already be implemented for both brain-inspired AI and brain simulation. Based on BrainCog, we are also committed to building BORN into a powerful SNN-based AI engine that incorporates multi-scale plasticity principles to realize brain-inspired cognitive functions towards the human level. Powered by 9 years of development of BrainCog modules, components and applications, and inspired by biological mechanisms and natural evolution, continuous efforts on BORN will enable it to become a general-purpose AI engine. We have already started efforts to extend BrainCog and BORN to support high-level cognition such as theory of mind~\\cite{zhao2022brain}, consciousness~\\cite{zeng2018toward}, and morality~\\cite{zhao2022brain}, and it will take a worldwide effort to build true, general-purpose AI for the good of humans and the ecology. Join us in this exploration to create the future of a human-AI symbiotic society. \n\\section*{Acknowledgements}\nThis work is supported by the National Key Research and Development Program (Grant No. 2020AAA0104305), the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. 
XDB32070100).\n\n\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:INTRO}\n\\subsection{LISA Pathfinder, the LISA Technology Package, and ST7-DRS}\n\nThe Space Technology 7 Disturbance Reduction System (ST7-DRS) is a NASA technology demonstration payload hosted on the European Space Agency (ESA) LISA Pathfinder (LPF) spacecraft, which launched from Kourou, French Guiana on December 3, 2015 and operated until July 17th, 2017, when it was decommissioned by ESA. The primary purpose of LPF was to validate key elements of the measurement concept for the Laser Interferometer Space Antenna (LISA), a planned space-based mission to observe gravitational waves in the millihertz band. Specifically, LPF demonstrated that the technique of drag-free control could be employed to place a test mass in near-perfect free-fall~\\cite{LPF_PRL_2016,LPF_PRL_2018}. LISA will use three drag-free satellites, configured as an equilateral triangle with $\\sim 2.5$ million km arms, to detect spacetime strains caused by passing gravitational waves~\\cite{LISA_PROPOSAL_2017}.\n\nThe basic components of a drag-free system are the reference test mass, which resides inside the spacecraft but makes no physical contact with it; a metrology system that measures the position and attitude of the test mass relative to the spacecraft as an inertial sensor; a control system that determines what forces and torques to apply to the spacecraft, and possibly the test mass; and an actuation system that can apply forces and torques to the spacecraft and possibly the test mass. In the case of LPF, European National Space Agencies provided the LISA Technology Package (LTP), which includes two test masses as part of the inertial sensor. Each test mass has its own independent six-degree-of-freedom electrostatic metrology and control system. LTP also includes an optical interferometer that measures the position and attitude of the test masses with respect to the spacecraft and each other much more precisely than the electrostatic system, but only along the axis that joins the two test masses as well as the tip and tilt angles orthogonal to that axis. Finally, the LTP includes systems to monitor and control the thermal, magnetic, and charge environment of the instrument. The ESA-provided spacecraft included its own set of drag-free control laws and its own cold-gas micropropulsion system. ESA's drag-free system was used for the majority of LPF's operations and achieved a striking level of performance, significantly exceeding the requirements set for LPF (which were deliberately relaxed from the LISA requirements) and meeting or exceeding the requirements for LISA itself~\\cite{LPF_PRL_2016,LPF_PRL_2018}. \n \nST7-DRS includes two main elements: an alternate set of drag-free control laws implemented on a separate computer, and an alternate micropropulsion system based on a novel colloidal microthruster technology~\\cite{Ziemer2006, Ziemer2008}. ST7-DRS provided the first demonstration of colloidal micropropulsion performance in space. During phases of the LPF mission where ST7-DRS operated, NASA's colloidal thrusters were used in place of ESA's cold-gas thrusters to move and orient the spacecraft, with the DRS control laws replacing the ESA control laws. 
For brief periods, NASA's colloid thrusters were also used as the actuators for ESA's drag-free system, replacing the cold gas thrusters, to demonstrate the performance and robustness of the drag-free control laws and colloid microthruster technology. \n \nDuring ST7 operations, the LTP payload played the same role as during the ESA-led parts of the mission: providing information on the positions and attitudes of the test masses and applying forces and torques to the test masses, as requested by the DRS controllers. In this paper, we present an overview of the ST7-DRS operations, the measured performance of the ST7-DRS systems, and the implications for LISA.\n \n\\subsection{History of ST7-DRS development and operations}\n \nInitiated in 2002 as part of NASA's New Millennium program, ST7-DRS includes four subsystems: (1) the Integrated Avionics Unit (IAU), a computer based on the RAD750 processor; (2) the Colloid Micro Newton Thrusters (CMNT), two clusters of four thrusters each; (3) the Dynamic Control Software (DCS), a software subsystem which implements the drag-free control algorithms; and (4) the Flight Software (FSW), a command and data handling software subsystem which processes commands and telemetry and hosts the DCS. The IAU was manufactured by Broadreach Engineering (Phoenix, AZ) and was delivered to NASA's Jet Propulsion Laboratory (JPL) for integrated testing in May 2006. The CMNTs were manufactured by Busek (Natick, MA), put through acceptance and thermal testing in late 2007, and delivered to JPL with fully loaded propellant tanks in 2008. The DCS software was written at NASA's Goddard Spaceflight Center (GSFC) and the FSW was written at JPL, with initial versions of both completed in March 2006. The DRS completed a Pre-Ship Acceptance Review with ESA in June 2008 and was placed in storage until its delivery to Astrium UK, in Stevenage, England, for Assembly, Integration, Verification and Test (AIVT) in July 2009.\n\nDue to the unexpectedly long duration between DRS delivery and LPF's launch, ST7 conducted shelf-life extension testing on the thruster propellant, materials, and microvalves in both 2010 and 2013, which qualified the system for launch in 2015 and serendipitously demonstrated a long storage lifetime (8 years on the ground and nearly 10 years total with on-orbit operations) that will be useful for LISA. During the storage period, the thrusters were left on the spacecraft, fully loaded with propellant, with removable protective covers on each thruster head to prevent debris from entering the electrodes. During this time, the spacecraft was kept in the integration and test facilities at Airbus Stevenage, UK, with dynamic and thermal environmental testing occurring at IABG in Ottobrunn, Germany. The thrusters were part of all spacecraft-level testing, with at least annual inspections in which the protective covers were removed; none of these inspections showed any signs of propellant leakage or damage to the thruster electrodes. The thrusters required no special handling or environmental control beyond the normal safeguards and environments used during typical spacecraft AIVT activities, and the protective covers were removed just before spacecraft encapsulation into the launch fairing.\n \nAfter launch, a composite of the LPF Science Module (SCM) and Propulsion Module (PRM) executed a series of orbit raising maneuvers culminating in a cruise to Earth-Sun L1. DRS, which was powered off during launch, was turned on for initial commissioning January 2 - 10, 2016. 
Because the PRM was still fastened to the SCM, the DRS did not control the spacecraft attitude in this commissioning, but the effects of the DRS thrusters were observed in the host spacecraft attitude control using on-board gyroscopes. LPF arrived on station at L1 on January 22, 2016, and the PRM was discarded, leaving the SCM. At this time, the ESA LISA Technology Package (LTP) was commissioned, and it began executing its primary mission on March 1st, 2016. A second commissioning of DRS was conducted June 27 - July 8, 2016, which included successful demonstrations of drag-free control. DRS operations were conducted over the next five months, including 13 different experiments, with occasional breaks for planned LTP station-keeping maneuvers or LTP experiments, as well as responses to a number of anomalies on both DRS and LPF hardware. The DRS anomalies are discussed in Section~\\ref{sec:ANOMALIES}. DRS completed its baseline mission on December 6, 2016. An extended mission to further characterize the thrusters and control system was requested and approved, and operated from Mar 17, 2017 to April 30, 2017. The DRS was decommissioned as part of the LPF decommissioning process on July 13, 2017. Key dates for DRS operations are summarized in Table \\ref{tab:dates}. The complete ST7-DRS data set, along with tools for accessing it, is archived at \\url{https:\/\/heasarc.gsfc.nasa.gov\/docs\/lpf\/}. \n \n \n\\begin{table}\n\\begin{tabular}{|p{0.9in}|p{0.6in}|p{0.9in}|p{0.6in}|}\n\\hline\nEvent & Date & Event & Date \\\\ \\hline\nLPF Launch & 03 Dec '15 & Thruster-4 Anomaly & 27 Oct '16 \\\\ \\hline\nTransfer Phase Commissioning (9 days) & 02 Jan '16 & Start: Hybrid Propulsion & 29 Nov '16 \\\\ \\hline\nArrival at L1 & 22 Jan '16 & End: Primary Mission & 06 Dec '16 \\\\ \\hline\nExperiment Phase Commissioning (10 days) & 27 Jun '16 & Start: Extended Mission & 20 Mar '17 \\\\ \\hline\nCluster-2 DCIU Anomaly & 09 Jul '16 & End: Extended Mission & 30 Apr '17 \\\\ \\hline\nStart: Primary Mission & 15 Aug '16 & Decommissioning Activities & 13 Jul '17 \\\\ \\hline\n\\end{tabular}\n\\caption{Key dates for DRS operations}\\label{tab:dates}\n\\end{table}\n\\normalsize\n\n\\subsection{DRS Components and Interfaces}\n\nFigure~\\ref{fig:block_diagram} shows a block diagram of the DRS hardware and its major functional interfaces to the LPF spacecraft and the LTP instrument. The DRS consists of three distinct hardware units: the IAU and two Colloidal Micronewton Thruster Assemblies (CMTAs) with four thrusters each. The IAU interfaces with the primary LPF computer, known as the On-Board Computer (OBC), and the OBC provides interfaces to the LTP instrument as well as other spacecraft systems such as the star tracker and communications systems. In drag-free operations, when the DRS is in control of the spacecraft attitude, the LTP provides measurements of the position and attitude of the two test masses, which are processed by the OBC and sent to the IAU along with spacecraft attitude measurements derived from the LPF star trackers. This information is processed by the Dynamic Control System (DCS) software running on the IAU, which determines the appropriate forces and torques to apply to the spacecraft and the test masses. Test mass force\/torque commands are sent by the IAU to the OBC, which relays them to the GRS front-end electronics within the LTP. Spacecraft force\/torque commands are decomposed into individual CMNT thrust commands, which are then sent to the CMNTs. 
\\footnote{After the anomaly experienced by CMTA2, the commands sent by the IAU to the CMTAs were actually low-level current and voltage commands that are functionally equivalent to thrust commands. See Sec.~\\ref{sec:ANOMALIES} on anomalies and recovery for more detail.}\n\nThe DRS is a single-string system, but with a redundant RS-422 communication interface between the IAU and the OBC, and redundant IAU DC\/DC power converters and thruster power switches.\nThe redundant power busses are cross-strapped to each thruster cluster, each of which is single string. The A-Side power bus was the primary bus used during the mission.\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{block_diagram.pdf}\n\\caption{Block diagram of the Disturbance Reduction System (DRS) and its interfaces with the LISA Pathfinder spacecraft and the LISA Technology Package instrument. Renderings of LPF and LTP courtesy of ESA\/Medialab.}\n\\label{fig:block_diagram}\n\\label{default}\n\\end{center}\n\\end{figure}\n\n\\section{The Colloidal Micro-Newton Thruster Assemblies}\n\\label{sec:CMNT}\n\\subsection{Components}\nColloid thrusters were selected for development by ST7-DRS because of their potential for extremely high-precision thrust, extremely low noise, and a larger specific impulse compared to cold-gas systems ($\\sim240\\,$s vs $\\sim70\\,$s). Colloid thrusters are a type of electrospray propulsion, which operate by applying a high electric potential difference to a charged liquid at the end of a hollow needle in such a way that a stream of tiny, charged droplets is emitted, generating thrust. An advantage of this system is that the liquid colloidal propellant can be handled with a compact and lightweight propellant management system and requires no pressure vessels or high temperatures. The requirement for high-voltage power supplies is a disadvantage. Colloid thrusters can be designed to operate in various thrust ranges according to the number of needles that are used in each thruster head. The ST7-DRS configuration, developed specifically for ST7's performance requirements by Busek, provides a thrust range from 5 to 30 $\\mu$N per thruster (larger thrusts are achievable in diagnostic mode).\n\nDRS includes two Colloidal Micro-Newton Thruster Assemblies (CMTAs), each of which includes: 4 thruster heads, 4 propellant feed systems, 4 Power Processing Units (PPUs), 1 cathode, and 1 Digital Control Interface Unit (DCIU)~\\cite{Ziemer2008}. Figure \\ref{fig:cmnts} shows a block diagram for one of the 4-thruster assemblies. Each thruster head includes a manifold that feeds nine emitters in parallel, a heater to control propellant temperature and physical properties, and electrodes that extract and accelerate the propellant as charged droplets. The thruster heads are fed by independent bellows and microvalves (the feed system). The propellant is the room-temperature ionic liquid 1-ethyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide (EMI-Im)\nand is stored in four electrically isolated, stainless steel bellows, which use compressed constant-force springs to supply the four microvalves with propellant at approximately 1 atm of pressure. The propellant flow rate is controlled by a piezo-actuated microvalve. The thruster heads and feed system voltages are independently controlled through the PPUs, which are controlled, in turn, by the DCIU. The DCIU has an on-board PROM (programmable read-only memory) that stores the thruster operating software and control algorithms.
The DCIU has power, command, and telemetry interfaces to the IAU. The CMTA mass is 14.8 kg with $\\sim0.5\\,$kg of propellant distributed into each of the 4 thruster bellows. The nominal power consumption of each CMTA is $\\sim17\\,$W. \n\nEach CMTA also includes one propellantless field emission cathode neutralizer, included to neutralize the emitted spray of charged droplets after they are accelerated, and so prevent spacecraft charging by the thrusters. The cathode neutralizers are fabricated from a carbon nanotube (CNT) base with an opposing gate electrode controlled by the DCIU. Each CNT is capable of producing 10 $\\mu$A to 1 mA using extraction voltages of 250 to 800 V. The neutralizer was tested during the extended mission and produced the desired current. As expected, the measured spacecraft charging with respect to the test masses \\cite{LPF_CHARGE_PRD_2018} indicated that the induced spacecraft charge rate was larger in magnitude than and opposite in sign from the effect of the CMNTs, meaning the neutralizer was not necessary for maintaining spacecraft charge control. \n \n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{cmnt-config.pdf}\n\\caption{CMNT propulsion system components and configuration. The carbon nanotube cathode is not shown.\n \\label{fig:cmnts} }\n\\label{default}\n\\end{center}\n\\end{figure}\n\n\\subsection{Thrust model}\n\nThe thrust (T) from each CMNT is approximated by~\\cite{ziemer2009performance, demmons2008st7}:\n\n\\begin{equation}\\label{eq:T=CIV}\nT = C_1\\, I_B^{1.5} \\, V_B^{0.5} \\, ,\n\\end{equation}\nwhere $I_B$ is the total beam current from the $9$ emitters, $V_B$ is the beam voltage, and $C_1$ is the thrust coefficient. $C_1$ depends mostly on physical properties of the propellant \n(viscosity, electrical conductivity, etc.) and also on the characteristics of the plume (beam divergence, charge-to-mass ratio distribution, etc.). In operation, the DCIU adjusts the beam voltage (2-10 kV) and propellant flow rate for each thruster head to achieve the desired thrusts. The mass flow rate is not measured directly; instead the beam current $I_B$ is measured and controlled by actuating a piezo microvalve. $I_B$ is controlled to better than 1 nA over the operating range of 2.25 to 5.4$\\,\\mu$A, corresponding to a thrust resolution of $\\le 0.01\\,\\mu$N. Independent, fine control of both the beam voltage and beam current allows for precise control of thrust to better than 0.1 $\\mu$N, with $<$0.1 $\\mu\\textrm{N}\/\\sqrt{\\textrm{Hz}}$ thrust noise.\n\nThe value $C_1$ is temperature-dependent; $C_1$ decreases with increasing temperature. Thrust-stand measurements on Engineering Model (EM) units validated this model to a precision of $\\sim\\,$2\\%, consistent with the calibration of the thrust stand\\cite{demmons2008st7}. The best-fit value for $C_1$ under nominal beam voltage and current and with a propellant temperature $T_{P} = 25^{\\circ}$C was $31.9\\,\\textrm{N}\\textrm{A}^{-3\/2}\\textrm{V}^{-1\/2}$. A substantial portion of the DRS operations was utilized to perform calibration experiments to validate this model in flight as well as to explore potential higher-fidelity models.
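As a concrete illustration of how Eq.~(\\ref{eq:T=CIV}) maps beam current and voltage to thrust, the short sketch below evaluates the model at a few representative operating points using the ground-calibrated coefficient quoted above; the specific current and voltage pairs are chosen for illustration only and do not correspond to particular flight set points.
\\begin{verbatim}
# Illustrative evaluation of the CMNT thrust model T = C1 * I_B^1.5 * V_B^0.5.
# C1 = 31.9 N A^-3/2 V^-1/2 is the ground-calibrated value at 25 C quoted in
# the text; the operating points below are hypothetical examples.
C1 = 31.9  # N A^-3/2 V^-1/2

def cmnt_thrust(i_beam_amps, v_beam_volts, c1=C1):
    """Thrust in newtons from beam current (A) and beam voltage (V)."""
    return c1 * i_beam_amps**1.5 * v_beam_volts**0.5

for i_uA, v_kV in [(2.25, 2.0), (4.0, 6.0), (5.4, 10.0)]:
    thrust_uN = cmnt_thrust(i_uA * 1e-6, v_kV * 1e3) * 1e6
    print(f"I_B = {i_uA:4.2f} uA, V_B = {v_kV:4.1f} kV -> T ~ {thrust_uN:4.1f} uN")
\\end{verbatim}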
These experiments and their results are discussed in section \\ref{sec:thustCal}.\n\n\\section{The Dynamic Control System (DCS)}\n\\label{sec:DCS}\nThe DRS control system maintains the attitude of the spacecraft, as well as the position of the test masses within their housings, both by moving the spacecraft via the CMNTs and by moving the test masses via electrostatic actuation. There were initially six DRS Mission Modes managed by the IAU: Standby, Attitude Control, Zero-G, Drag-Free Low Force, 18-DOF (Degree of Freedom) Transitional, and 18-DOF Mode. One additional mode, Zero-G Low Force, was added for the extended DRS mission to provide improved performance in the accelerometer mode, in which some of the most sensitive thruster experiments were performed. \n\nStandby mode is used when the IAU is powered on but no actuation commands are being generated by the control system, generally when the LTP controller is active. Attitude Control Mode is used for the transition from LTP control to DRS control. In this mode, the DRS nulls spacecraft attitude errors and their rates using the thrusters, while electrostatically suspending both test masses. In the Zero-G Mode, disturbance forces on the spacecraft, such as those from solar radiation pressure, are nulled out in a low-bandwidth loop that minimizes common-mode actuation on the test masses by applying forces on the spacecraft using the CMNTs. The Zero-G Low Force variant utilizes the same control scheme but with the GRS actuation set to its high resolution \/ low force authority setting. In the Drag-Free Low Force (DFLF) Mode, the spacecraft's position is controlled via the CMNTs to follow the reference test mass (RTM, configurable to be either of the two LTP test masses) in all translational axes. Hence, it is the lowest mode in which drag-free flight of a single test mass is achieved. The 18-DOF Transitional Mode is, as its name suggests, a transitional mode to get from DFLF to 18-DOF control of the spacecraft and the two test masses. In the 18-DOF Mode, the DRS uses the thrusters to force the spacecraft to follow the RTM, i.e., to maintain the nominal gap of the RTM with respect to its housing along all three axes. The DRS uses the torque from the CMNTs to control the spacecraft attitude, in the measurement band (1-30 mHz), so that it follows: (a) the non-reference test mass (NTM) in the transverse directions (normal to the LTP axis); and (b) the relative attitude of the RTM about the sensitive axis. The orientations of both test masses are controlled via electrostatic suspension below the measurement bandwidth (MBW). Further details on the spacecraft and test mass control design for each mode may be found in \\cite{Maghami2004,Hsu2004}. \n\nThe DRS baseline architecture made use of only the capacitive sensing measurements of the test mass positions and orientations, which was the configuration for both ST7 and LTP when the DRS design was consolidated. After successfully commissioning DRS in this configuration, the system was modified to use the higher-precision interferometric data from LTP for the degrees of freedom where it was available. \n\n\\section{Characterization of Thruster Performance}\n\\label{sec:CMNTP}\nIn-flight characterization of the CMNT technology was a major goal of the ST7 mission. This section summarizes the experiments conducted during DRS operations and the top-level results. The CMNT properties investigated during the flight campaign include thrust range, response time, calibration, and thrust noise.
In general, two sources of data were available for these investigations. The first was internal thruster telemetry such as beam currents, beam voltages, valve voltages, temperatures, etc. These quantities can be used to estimate the CMNT thrust using physics-based models which were validated during the CMNT development with thrust stand measurements. The second source of data, which is unique to this flight test, was the rest of the Pathfinder spacecraft, in particular the test masses and interferometer of the LTP. Measurements with LPF and LTP data allowed the CMNT thrust model to be independently validated and its exquisite sensitivity allowed thrust noise measurements in the LISA band at a level never achieved with ground-based thrust stands. \n\n\\subsection{Functional Tests: Range and Response Time}\n\\label{sec:CMNTfunc}\nAs described in Table \\ref{tab:dates}, the CMNTs were commissioned in two phases in 2016. The first phase, in January 2016, was conducted prior to separation of the propulsion module so that the CMNTs would be available to serve as a backup propulsion system for LPF, should the primary cold-gas system experience an anomaly after separation. Figure \\ref{fig:cmnt_resp} shows a full-range response test in which all 8 CMNTs are initially at their minimum thrust level of 5$\\,\\mu$N and then commanded to their maximum level of 30$\\,\\mu$N for 300$\\,$s before returning to 5$\\,\\mu$N. The thrust command (same for all thrusters) is shown in black and the predicted thrust based on CMNT telemetry and the ground-validated model is shown in colored lines. This initial test demonstrated that all eight CMNTs could operate in their requisite 5-30$\\,\\mu$N thrust range. With the propulsion module still attached to the spacecraft and the LTP instrument not yet commissioned, there was limited availability of precision data from the platform. However, telemetry from the propulsion module ACS thrusters and from the LPF star tracker showed force\/torque motions on the platform that were roughly consistent with thrust levels commanded to the CMNTs.\\\\\n\nBoth the thruster control law and the dynamics of the thruster head are expected to lead to a delayed response time, for which the design requirement was 10$\\,$s. As shown in Figure \\ref{fig:cmnt_resp}, 7 of the 8 CMNTs meet this requirement for the response time from 5$\\,\\mu$N to 30$\\,\\mu$N. The exception is CMNT\\#1, which has a response time of $\\sim 170\\,$s. As discussed in Appendix A.1, this delayed response is consistent with some obstruction in the CMNT\\#1 feed system. The response time for 30$\\,\\mu$N to 5$\\,\\mu$N was slightly longer for all CMNTs and significantly longer for thrusters 1,6, and 7. This is likely due to the response of the piezo microvalve, which is controlled by a PID loop to maintain the desired current level. The valve actuator is encased in a potting compound that provides electrical isolation but also adds some mechanical compliance to the valve. In addition, there is some variation in the pre-load and piezo response from valve to valve that results in different flow response as a function of valve voltage. An examination of the valve voltage during this experiment reveals that while the voltage for valves 1,6, and 7 dropped to zero after the transition from commands of 30$\\,\\mu$N to 5$\\,\\mu$N, the electrometer still measured a slowly-declining current after the valve voltage reached zero. 
This suggests that the piezo actuators in these valves were unable to close the valves to the desired position until the valve mechanically relaxed, after which time the piezo could begin to actuate again. \\\\\nFinally, CMNT\\#1 also shows an impulsive behavior known as `blipping', which is caused when the number of emitters actively flowing propellant in the CMNT head changes. In standard operations, all 9 emitters should be expelling propellant for the full range of thrusts. However, if one emitter has a significantly increased hydraulic resistance due to an obstruction, it can periodically stop and start, leading to an abrupt change in the thrust. While efforts were made during the mission to improve the performance of CMNT\\#1, neither the response time nor the rate of `blipping' significantly improved. This is further discussed in Appendix A.1. Note that there is also some evidence of `blipping' in CMNT\\#7 in Figure \\ref{fig:cmnt_resp}, although at a much lower rate than for CMNT\\#1 and also only at the minimum thrust level. The blipping behavior for CMNT\\#7 rapidly improved as commissioning proceeded and was not observed during the remainder of the mission, suggesting that the (presumably much smaller) obstruction that was responsible was cleared. \n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{CMNT_resp.pdf}\n\\caption{Full-range response test for all eight CMNTs conducted as part of initial thruster commissioning in January 2016 prior to separation of the LPF Propulsion Module. All 8 thrusters demonstrated the full thrust range, although CMNT\\#1 had an abnormally slow response time, perhaps due to some obstruction. In addition, CMNT\\#1 exhibited a `blipping' mode consistent with one of the nine emitter tips cycling between a spraying and non-spraying state. \\label{fig:cmnt_resp} }\n\\label{default}\n\\end{center}\n\\end{figure}\n\nAfter the propulsion module was successfully separated and the spacecraft was under control of the cold-gas micropropulsion, the CMNTs were placed in a safe mode for approximately 6 months of LTP operations. In July 2016, the second phase of CMNT commissioning was conducted to prepare for DRS operations. Figure \\ref{fig:cmnt_range} shows a thruster functional test in which each CMNT is successively ramped, in 5$\\,\\mu$N increments, from 5$\\,\\mu$N to 30$\\,\\mu$N and back. The blue line shows the thrust command and the red data show the estimated thrust based on the CMNT telemetry and the ground-validated model. Again, 7 of the 8 CMNTs perform as designed, but CMNT\\#1 exhibits both the episodic `blipping' and an overall slow response time. We note that here the 30$\\,\\mu$N limit on the maximum thrust was set by the flight software. In the extended mission, the diagnostic mode of the CMNTs was used to manually command the current and voltage to demonstrate an extended thrust range. CMNT\\#2 and CMNT\\#5 were stepped up to 40$\\,\\mu$N early in the extended mission and then CMNT\\#5 was stepped up to 50$\\,\\mu$N and CMNT\\#2 was stepped up to 60$\\,\\mu$N near the end of the extended mission. The CMNTs passed all of these extended-range tests without incident.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{CMNT_range.pdf}\n\\caption{Thruster actuation test conducted during DRS commissioning in July 2016. Each of the eight CMNTs was successively cycled from 5$\\,\\mu$N to 30$\\,\\mu$N and back in 5$\\,\\mu$N steps.
The achieved thrust estimated from the CMNT telemetry closely matches the thrust commands with the exception of CMNT\\#1, which exhibits both a slow response time and 'blipping' consistent with one of its nine emitters firing only intermittently. \\label{fig:cmnt_range} }\n\\label{default}\n\\end{center}\n\\end{figure}\n\n\n\\subsection{Thruster Calibration Measurements}\n\\label{sec:thustCal}\n\\vskip 0.2in\nAs mentioned above, the additional instrumentation on the Pathfinder spacecraft, in particular the LTP, allows use of the spacecraft as a low-noise thrust stand that can be used to calibrate the thrust model. This section describes the design, analysis, and results of thruster calibration experiments carried out using this method.\n\n\\subsubsection{Experiment design}\nAll the calibration experiments had the following general form. The thrust command to one of the eight CMNTs is modulated by some sinusoidal or square wave, which we refer to as the \\emph{injection waveform}, and each thruster's calibration constant is derived from the resulting modulations on i) the motion of the spacecraft , ii) the motion of the two TMs, and iii) the electrostatic forces on the TMs. The full set of injection waveforms that we used is summarized in Table \\ref{tab:T1}. In each experiment, we cycle through each of the thrusters one at a time.\n\nWe chose to perform the majority of the injections in accelerometer control mode, out of concern that drag-free control would lead to a complicated mixing of the injection signals across all thrusters simultaneously, as well as suppressing the modulation on the main thruster of interest. Operating in accelerometer mode is very close to the standard operating mode in the ground-based thrust stand measurements.\n\nThe amplitude and frequency of the injections were selected to balance the needs of the system (low disturbance, slew rate limits, and available experiment time) against the needs of the analysis, as characterized by the expected signal to noise ratio (SNR). Our baseline approach was to perform the experiment in the DRS's high-force actuation mode, since it provides higher control authority and therefore should permit larger-amplitude injections without losing stability. Later in the mission, a set of injections was designed for the DRS's low-force actuation mode, which had a better-characterized calibration of the applied test mass forces and torques than the high-force mode. \n\nEarly in DRS operations, each injection set was demonstrated on a subset of thrusters to assess the quality of the response and resulting analysis. Set 1, with the most gentle system response but longest duration, was used for initial checkout and sets 2 through 4 were used together to more rapidly characterize thruster performance. Set 5 utilized a square wave to measure the response at multiple Fourier frequencies and was utilized in some limited tests. In this paper, we present results from the `standard' suite of sets 2 through 4, which were used for the majority of the investigations in both baseline and extended DRS operations. 
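As a rough, purely illustrative companion to the analysis described in the next subsection, the sketch below recovers a single thruster's calibration gain from a simulated sinusoidal injection by least-squares demodulation at the injection frequency. The flight analysis jointly fits a gain and delay across several injection frequencies with a Markov-Chain Monte-Carlo method; the simplified single-frequency fit, the noise level, and the `true' gain used here are assumptions made only for this example.
\\begin{verbatim}
import numpy as np

# Hypothetical set-2-like injection: 3 uN sine at 23 mHz for 696 s, sampled at 1 Hz.
fs, dur = 1.0, 696.0
f_inj, amp = 0.023, 3e-6        # Hz, N
true_gain = 0.95                # assumed ratio of delivered to commanded thrust

t = np.arange(0.0, dur, 1.0 / fs)
command = amp * np.sin(2 * np.pi * f_inj * t)                    # commanded modulation [N]
measured = true_gain * command + 1e-7 * np.random.randn(t.size)  # spacecraft-derived force [N]

# Least-squares fit onto in-phase and quadrature templates at the injection frequency
templates = np.column_stack([np.sin(2 * np.pi * f_inj * t),
                             np.cos(2 * np.pi * f_inj * t)])
coeffs, *_ = np.linalg.lstsq(templates, measured, rcond=None)
print(f"estimated gain ~ {np.hypot(*coeffs) / amp:.3f} (true value {true_gain})")
\\end{verbatim}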
\n\n\\begin{table}\n\\caption{Waveforms of the thruster calibration experiments. $\\rho_{HF}$ and $\\rho_{LF}$ are the expected injection signal-to-noise ratios in the high-force and low-force actuation modes, respectively.\\label{tab:T1}}\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|c|c|}\n \\hline\nSet \\# & Waveform & Frequency & Amplitude & Duration & $\\rho_{HF}$ & $\\rho_{LF}$ \\\\ \\hline\n1 & sine & 23 mHz & 1 $\\mu$N & 5220 s & 200 & 500 \\\\ \\hline\n2 & sine & 23 mHz & 3 $\\mu$N & 696 s & 200 & 500 \\\\ \\hline\n3 & sine & 29 mHz & 5 $\\mu$N & 552 s & 200 & 800 \\\\ \\hline\n4 & sine & 40 mHz & 3 $\\mu$N & 600 s & 66 & 400 \\\\ \\hline\n5 & square & 23 mHz & 3 $\\mu$N & 696 s & $\\sim$200 & $\\sim$500 \\\\ \\hline\n \\end{tabular}\n\\end{center}\n\\label{default}\n\\end{table}\n\n\\subsubsection{Calibration results}\n\nThe basic analysis approach is to estimate the acceleration of the LISA Pathfinder spacecraft using the LTP data, and compare that with the thrust derived from the thruster's measured $V_B$ and $I_B$ and the thrust model, Eq.~(\\ref{eq:T=CIV}). As shown in Appendix B, using the average acceleration of the two test masses causes most of the rotating-frame effects to cancel out, making the results more robust against systematic errors.\n\nWe use a Markov-Chain Monte-Carlo method to estimate the maximum-likelihood gain and delay of each thruster, jointly fit across the three injection frequencies. Figure \\ref{fig:t1psd} shows an example fit for CMNT\\#5 for an injection with the TMs in the high-force mode. The injection signal is suppressed by a factor of roughly 20, although it is still visible in the residual. Figure \\ref{fig:EM8psd} shows a similar fit for CMNT\\#5 for an injection with the TMs in the low-force mode. Here the injection signal is suppressed by a factor of roughly 100, and no residual is visible.\n\n Table \\ref{tab:C1} presents the best-fit value for the $C_1$ coefficient in Eq.~(\\ref{eq:T=CIV}) for each thruster. These values are averaged over 4 measurements, except for CMNT\\#4, which was averaged over only 3 measurements due to the propellant bridge (see Appendix A.3 for a discussion of this anomaly). All of the results are in the range $29\\sim32\\,\\textrm{N}\\textrm{A}^{-3\/2}\\textrm{V}^{-1\/2}$, roughly consistent with the value of $31.9\\,\\textrm{N}\\textrm{A}^{-3\/2}\\textrm{V}^{-1\/2}$ derived from the ground tests \\cite{Ziemer2010}.\n\nThere is also a small but statistically-significant discrepancy between the experiments conducted in high-force and low-force modes, which suggests that there is some aspect of the calibration that has not been properly accounted for. Note that there is no low-force mode measurement for CMNT\\#4, as this experiment was implemented after it was disabled. We do not report the best-fit delays, since in addition to physical delays, they include relative delays between the DRS data packets containing the CMNT telemetry and the LTP data packets containing the LTP telemetry, and the latter are not physically relevant. \n \n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{t5a23residual.pdf}\n\\caption{Thrust spectrum as measured from LTP data (blue), thrust based on the thrust model and best-fit $C_1$ value (red), and the residual (difference) between those two (green), for a 23 mHz injection in \\emph{high-force} actuation mode in CMNT\\#5.
\\label{fig:t1psd} }\n\\label{default}\n\\end{center} \n\\end{figure}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{t5a23residualEM8.pdf}\n\\caption{Thrust spectrum as measured from LTP data (blue), thrust based on the thrust model and best-fit $C_1$ value (red), and the residual (difference) between those two (green), for a 23 mHz injection in \\emph{low-force} actuation mode in CMNT\\#5. \\label{fig:EM8psd} }\n\\label{default}\n\\end{center}\n\\end{figure}\n\n\\begin{table}\n\\caption{Summary of thruster calibration results. $C_1$ values are in units of [$ N A^{-3\/2} V^{-1\/2}$]; HF and LF denote the high-force and low-force actuation modes.\\label{tab:C1}}\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\nCMNT \\# & $C_1$ (HF) & error & $C_1$ (LF) & error \\\\\n\\hline\n1 & 31.52 & 0.15 & 31.91 & 0.10 \\\\ \\hline\n2 & 30.18 & 0.10 & 30.97 & 0.10 \\\\ \\hline\n3 & 28.78 & 0.09 & 32.12 & 0.10 \\\\ \\hline\n4 & 29.92 & 0.09 & - & - \\\\ \\hline\n5 & 29.96 & 0.10 & 31.49 & 0.10 \\\\ \\hline\n6 & 30.02 & 0.10 & 30.90 & 0.10 \\\\ \\hline\n7 & 29.71 & 0.09 & 30.37 & 0.10 \\\\ \\hline\n8 & 29.86 & 0.10 & 30.53 & 0.10 \\\\ \\hline \n\\hline\n\\end{tabular}\n\\end{center}\n\\label{default}\n\\end{table}\n\n\\subsubsection{Temperature Dependence}\n\\label{sec:CMNT_T}\n\nThe $C_1$ coefficient in the CMNT thrust model, Eq.~(\\ref{eq:T=CIV}), is expected to depend on the propellant temperature, which was set to $25\\,^{\\circ}$C during most of the mission. To validate this model, a campaign was undertaken to alter the temperature set point using the onboard heaters and measure the CMNT calibration using injection sets 2 through 4. Figure \\ref{fig:C1temp} shows the measured calibrations at temperatures of $15\\,^{\\circ}$C, $20\\,^{\\circ}$C, $25\\,^{\\circ}$C, and $30\\,^{\\circ}$C along with a linear fit for the temperature dependence. Similar data obtained on the ground with thrust-stand measurements are included for comparison. Note that due to the CMNT\\#4 propellant bridge, there are no values at $30\\,^{\\circ}$C for that thruster. Additionally, several other measurements were made in the initial calibration experiment at the nominal temperature of $25\\,^{\\circ}$C, which are included for all thrusters. \n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{C1vT.pdf}\n\\caption{ Comparison of measured and expected dependence of the thruster coefficient on temperature. Legend entries show the best-fit linear coefficient for each CMNT and the ground test in units of $N A^{-3\/2} V^{-1\/2}\\,^{\\circ}C^{-1}$. \\label{fig:C1temp} }\n\\label{default}\n\\end{center}\n\\end{figure}\n\n\\subsection{Thruster noise performance}\n\\label{sec:CMNTnoise}\nThe DRS had a Level 1 performance requirement to demonstrate a spacecraft propulsion system with noise less than 0.1$\\, \\mu\\textrm{N}\/\\sqrt{\\textrm{Hz}}$ over a frequency range of 1 mHz to 30 mHz. We use two different approaches to estimate the CMNT noise performance. The first method uses the CMNT flight data and the thrust model in Eq.~(\\ref{eq:T=CIV}), using the calibration results for the $C_1$ coefficients. To estimate the intrinsic thruster noise apart from the required spacecraft control, we subtract the thrust commands; the residual is the thrust error. This represents a lower limit on the thrust noise, since effects not captured by the measured $I_B$ and $V_B$ values could produce additional noise. Some such effects, such as CMNT shot noise, are known to be well below the measured noise floor, but there is also the possibility of unmodeled noise.
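A minimal sketch of this first approach is given below, assuming NumPy and SciPy and placeholder telemetry arrays; it models the delivered thrust from the telemetered beam current and voltage, subtracts the commanded thrust, and estimates the amplitude spectral density of the residual thrust error with Welch's method. The array names, segment length, and use of the ground value of $C_1$ are illustrative assumptions rather than the flight processing.
\\begin{verbatim}
import numpy as np
from scipy.signal import welch

C1 = 31.9   # N A^-3/2 V^-1/2; in practice the per-thruster calibrated value is used
FS = 10.0   # telemetry rate in Hz (the archived product is decimated to 1 Hz)

def thrust_error_asd(i_beam, v_beam, thrust_cmd, fs=FS, c1=C1):
    """i_beam [A], v_beam [V], thrust_cmd [N]: equal-length telemetry arrays.

    Returns (frequency [Hz], one-sided ASD of the thrust error [N/sqrt(Hz)]).
    """
    thrust_model = c1 * i_beam**1.5 * v_beam**0.5
    error = thrust_model - thrust_cmd
    freq, psd = welch(error, fs=fs, nperseg=min(error.size, 4096))
    return freq, np.sqrt(psd)
\\end{verbatim}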
\n\nOur second approach for estimating the CMNT noise is to use the Pathfinder spacecraft as a thrust stand, as was done for the thruster calibration measurements. In the measurement band, $1-30$\\,mHz, thrust noise is expected to dominate the total budget of force noises on the spacecraft. Measuring the acceleration noise of the spacecraft should therefore be an effective way to estimate the thruster noise, and formally provides an upper limit. Unfortunately, this approach is complicated by the fact that after the anomaly experienced by CMNT\\#4 (see Appendix A), some portion of the cold-gas micropropulsion system was required to be active whenever the CMNT system was active. The thruster noise measured during these periods includes contributions from the cold-gas system as well as the CMNTs.\n\n\n\\subsubsection{CMNT Noise from internal telemetry}\n\\label{sec:IVnoise}\n\nThe light blue trace in Figure \\ref{fig:CMNT5_alias} shows the measured amplitude spectral density of the thrust noise for CMNT\\#5, estimated from the CMNT flight telemetry. This data, sampled at 1 Hz, comes from an 8-hour period on 2017-04-24, while in the 18DOF controller configuration. Below 100$\\,$mHz, it is flat with a level of approximately 70$\\,\\textrm{nN}\/\\sqrt{\\textrm{Hz}}$. Note that this individual thrust noise is somewhat better than the requirement. Additionally, around 250$\\,$mHz, well above the measurement band, the thrust error for CMNT\\#5 exhibits a sharp spectral line feature. The light red trace in Figure \\ref{fig:CMNT5_alias} shows the thrust error spectrum from a 200s period of 10Hz-sampled data taken just after the data for the blue trace. In the red trace, the line is shifted to 750$\\,$mHz and the flat level at lower frequencies is reduced. This strongly suggests that the 250$\\,$mHz feature in the light-blue trace is actually a 750$\\,$mHz effect that is aliased down to 250$\\,$mHz in the 1Hz-sampled data. The CMNT telemetry, which is delivered to the IAU at 10Hz, is typically decimated to 1Hz without the use of an anti-aliasing filter. The rationale for this decision was that the CMNT thrust model depends non-linearly on the beam voltage and current, and any averaging or other filtering operation applied to the current and voltage would not give the correct result for the thrust model. In addition, neither the current nor the voltage telemetry was expected to have significant power above 0.5Hz. To confirm this interpretation, we fit a spectral model to the 10Hz data (dashed red line in Figure \\ref{fig:CMNT5_alias}) and compute the aliased version of that spectrum, shown by the dashed blue line in Figure \\ref{fig:CMNT5_alias}. Clearly, this reproduces the 250$\\,$mHz peak seen in the 1Hz data. Based on this analysis, we estimate that the intrinsic noise of CMNT\\#5 is closer to 40$\\,\\textrm{nN}\/\\sqrt{\\textrm{Hz}}$ in the absence of aliasing. It is important to note that aliasing only affects the \\emph{telemetered} values of beam current and voltage. The on-board processing is done at the full $10\\,$Hz rate.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{CMNT5_alias.pdf}\n\\caption{Measured thrust error (thrust command - modeled thrust) in CMNT\\#5 for an 8-hour period on 2017-04-24. The light blue trace is the full duration, sampled at 1Hz. The light red trace is for an adjacent 200s segment sampled at 10Hz.
The dashed red line is a fit to the 10Hz spectrum and the dashed blue line is a model of how that spectrum gets aliased by downsampling the data without using an anti-aliasing filter.}\n \\label{fig:CMNT5_alias} \n\\end{center}\n\\end{figure}\n\nThe source of the peak in the 10Hz data, which was not observed in ground testing, is suspected to be an oscillation in the thruster control loop which arose when the thrust control algorithm was moved from the DCIU to the IAU, following the DCIU anomaly described in Appendix A. This switch resulted in additional delays: first for the current and voltage commands to travel from the IAU to the DCIU, and then for the current and voltage telemetry to travel from the DCIU back to the IAU. The additional round-trip delay is expected to be 3 clock cycles, or $\\sim$300$\\,$ms. To test this hypothesis, a software simulation of the thrust controller and thruster response was performed using the flight thrust command data for CMNT\\#5 from the period in Figure \\ref{fig:CMNT5_alias}. The green trace in Figure \\ref{fig:CMNTsim} shows the simulated thrust error, sampled at 10Hz, for the nominal case of zero delay between the $I_B$ and $V_B$ commands and the corresponding response. This would be the case for the original flight configuration in which the thrust control algorithm was implemented on the DCIU. The light red trace shows the simulated thrust error, sampled at 10Hz, for the case where a 300$\\,$ms delay is introduced between the command and response of the beam current and voltage. This delay causes noise enhancement by the control system at $\\sim$700$\\,$mHz, where the reduced control-loop phase margin means that, by the time a commanded thrust change is enacted, it mildly increases the fluctuation the system was attempting to suppress. The blue and yellow traces show the with- and without-delay signals, respectively, downsampled to 1Hz without anti-aliasing filters, as was done for nominal DRS operations. In both cases, this elevates the noise to roughly 50$\\,\\textrm{nN}\/\\sqrt{\\textrm{Hz}}$ and, for the delay case, produces a sharp peak near 300$\\,$mHz. \n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{CMNT5_alias_sim.pdf}\n\\caption{Simulated thrust error (thrust command - modeled thrust) for CMNT\\#5 thrust commands using a software model of the thrust control algorithm. The green trace shows the expected thrust error in the baseline case of minimal delay between the commands and response of beam voltage and current. The light red trace shows the same signal when a 300ms delay is introduced in the beam voltage and current response. The blue and yellow traces show the with- and without-delay signals, respectively, downsampled to 1Hz without anti-aliasing filters.}\n \\label{fig:CMNTsim} \n\\end{center}\n\\end{figure}\n\nAn expanded 8-thruster version of this simulation, including modeling of the commanding loop delays and flight software, a physically motivated model of the bubble-noise on thruster 1, and the decimation scheme for creating the 1 Hz data product, provided an estimate of the expected platform noise. The data produced by this model matched the available mission flight data, both the 10 Hz data and the heavily aliased 1 Hz data. The 10 Hz data, provided over a sufficient duration, gave an estimate of the noise in the required frequency band without the aliasing effects.
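To make the aliasing mechanism discussed above concrete, the short sketch below (with an arbitrary tone amplitude and noise level) shows that a 750 mHz tone in 10 Hz-sampled data reappears at 250 mHz when the data are decimated to 1 Hz by simply keeping every tenth sample with no anti-aliasing filter.
\\begin{verbatim}
import numpy as np

fs_hi, dur = 10.0, 4000.0            # 10 Hz sampling, roughly an hour of data
t = np.arange(0.0, dur, 1.0 / fs_hi)
x = np.sin(2 * np.pi * 0.75 * t) + 0.1 * np.random.randn(t.size)

x_lo = x[::10]                       # naive decimation to 1 Hz (no filter)
freqs = np.fft.rfftfreq(x_lo.size, d=1.0)
spec = np.abs(np.fft.rfft(x_lo))
peak = freqs[np.argmax(spec[1:]) + 1]
print(f"strongest line in the 1 Hz data: ~{peak:.3f} Hz")  # ~0.25 Hz, not 0.75 Hz
\\end{verbatim}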
This system noise along a particular direction was computed using knowledge of the CMNT locations and orientations. Using the simulated noise floors from the 10 Hz data, $\\sim$40$\\,\\textrm{nN}\/\\sqrt{\\textrm{Hz}}$ for thrusters $2-8$ and 74$\\,\\textrm{nN}\/\\sqrt{\\textrm{Hz}}$ for thruster 1, the estimated noise floors along the spacecraft X, Y, and Z axes are $\\sim$70$\\,\\textrm{nN}\/\\sqrt{\\textrm{Hz}}$, $\\sim$87$\\,\\textrm{nN}\/\\sqrt{\\textrm{Hz}}$, and $\\sim$56$\\,\\textrm{nN}\/\\sqrt{\\textrm{Hz}}$, respectively.\n \n\\subsubsection{CMNT Noise Estimated from Spacecraft Response}\n\\label{sec:CMNTnoiseSC}\nThe procedure for estimating the thrust noise from the spacecraft response is very similar to that used for calibrating the thrusters, described in section \\ref{sec:thustCal}, with the exceptions that no modulations are applied to the thrusters and that here we also analyze the Y- and Z-degrees of freedom. This analysis can be applied to any segment of data where injections are not present. Here we present results from four segments that are listed in Table \\ref{tab:segments}. These segments were chosen to span various configurations of control systems and thrusters so as to better distinguish the contribution of the propulsion system to the overall spacecraft noise. Segment I represents the default configuration for LTP, with the ESA-provided DFACS in control of the spacecraft using the cold-gas thruster system. Segment II represents the design configuration for DRS, with the DCS controlling the spacecraft using all 8 CMNTs (and the ESA-provided cold gas thruster system on standby). Segment III is from a brief `joint operations' campaign in which the DFACS controlled the spacecraft with the CMNTs and the cold gas system was on standby. Finally, Segment IV represents the DRS configuration after the CMNT\\#4 anomaly, with the DCS controlling the spacecraft using 7 CMNTs, and with the cold-gas system partially enabled as an out-of-loop static `crutch'. \n\n\\begin{table}\n\\caption{Experiments used to assess thruster noise performance from the spacecraft response. For each experiment, a controller, either the ESA-provided DFACS or the NASA-provided DCS, controlled the spacecraft using a micropropulsion system, either the ESA-provided cold-gas (CGAS) or the NASA-provided CMNT. For segment IV, the DCS controlled the spacecraft with 7 CMNTs in-loop and the CGAS used as an out-of-loop `crutch'.\\label{tab:segments}}\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\nSegment & Start Date & Duration & Controller & Propulsion \\\\\n\\hline\nI & 2016-9-28 & 238 ks & DFACS & CGAS \\\\ \\hline\nII & 2016-10-04 & 111 ks & DCS & CMNT \\\\ \\hline\nIII & 2016-10-06 & 124 ks & DFACS & CMNT \\\\ \\hline\nIV & 2017-04-21 & 236 ks & DCS & CMNT \\\\ \n& & & & w\/ CGAS \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n \n Figures \\ref{fig:Fext_x}, \\ref{fig:Fext_y}, and \\ref{fig:Fext_z} show the estimated spacecraft force noise in the X-, Y-, and Z-directions for each of the four segments. The solid lines are amplitude spectral densities estimated using Welch's method of overlapped-averaged periodograms with 10, 4, 5, and 10 averages for segments I, II, III, and IV, respectively. For segment III, an impulse suspected to be from a micrometeoroid hit at 2016-10-07 9:51:12 UTC was excised from the data. The solid points represent logarithmically-binned estimates with 1-sigma error bars.
The dashed lines show a `requirement' based on an uncorrelated thrust noise of 0.1$\\,\\mu\\textrm{N}\/\\sqrt{\\textrm{Hz}}$ in each of the CMNTs, projected into the spacecraft body frame using the thruster orientations. This corresponds to 160$\\,\\textrm{nN}\/\\sqrt{\\textrm{Hz}}$, 190$\\,\\textrm{nN}\/\\sqrt{\\textrm{Hz}}$, and 140$\\,\\textrm{nN}\/\\sqrt{\\textrm{Hz}}$ for the X-, Y-, and Z-axes, respectively.\n \n \\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{SCForce_X.pdf}\n\\caption{X-component of force on the spacecraft estimated using measured test mass dynamics and spacecraft mass properties. The four traces correspond to the segments in Table~\\ref{tab:segments} and probe different thruster configurations. See text for discussion. }\n\\label{fig:Fext_x}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{SCForce_Y.pdf}\n\\caption{Y-component of force on the spacecraft estimated using measured test mass dynamics and spacecraft mass properties. The four traces correspond to the segments in Table~\\ref{tab:segments} and probe different thruster configurations. See text for discussion. }\n\\label{fig:Fext_y}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{SCForce_Z.pdf}\n\\caption{Z-component of force on the spacecraft estimated using measured test mass dynamics and spacecraft mass properties. The four traces correspond to the segments in Table~\\ref{tab:segments} and probe different thruster configurations. See text for discussion. }\n\\label{fig:Fext_z}\n\\end{center}\n\\end{figure}\n\n In general, the noise for the two CMNT-only configurations (II and III) is somewhat lower than for the two including cold gas (I and IV). In addition to the overall higher noise level, the two cold-gas segments exhibit a set of narrow-line features at $\\sim1.5\\,$mHz and harmonics thereof. Both of these effects are most pronounced in the Z-axis, possibly explained by a common-mode noise source in the cold gas system.\\footnote{Since all 6 cold gas thrusters thrust in the +Z (Sunward) direction with the same vector component, a common-mode noise will add coherently, whereas correlated noise in X and Y would largely cancel when all 6 thrusters are active. Note that for the case of the `crutch' mode using only 4 of the 6 thrusters, the cancellation in X and Y no longer occurs.} Somewhat surprisingly, the CMNT noise under DFACS control (III) appears to be slightly lower than that for DCS (II). Upon inspection of the telemetry, it was found that CMNT\\#1 was railed at the minimum thrust of 5$\\,\\mu$N due to an unoptimized thrust bias vector for this ad hoc experiment. This inadvertently reduced the rate of `blipping' in CMNT\\#1, leading to a reduction in the overall noise. \n \n \\subsubsection{CMNT Noise Summary}\n \nIn both CMNT-only cases, the measured noise floor along the x-direction seems to be in the $100\\sim 300\\,\\textrm{nN}\/\\sqrt{\\textrm{Hz}}$ range, which is significantly higher than the noise predicted by the thrust telemetry in the absence of aliasing. Possible explanations for this include additional noise in the thrust system beyond what is inferred from the current and voltage noise, additional noise on the spacecraft platform, or noise on the test mass which is used as a reference. \n\n \\begin{table}\n\\caption{Comparison of the measured spacecraft force noise with the combined estimate of measured and modeled effects in the $1\\sim30\\,$mHz band.
Details of the noise estimates can be found in Appendix C. \\label{tab:SCnoiseComp} }\n\\begin{center}\n\\begin{tabular}{|c|c|}\n\\hline\nEffect & Estimate [nN$\/\\sqrt{\\textrm{Hz}}$] \\\\\n\\hline\nMeasured Noise & 120 \\\\\n\\hline\nTotal Estimate & 70 \\\\\n\\hline\nUnmodeled noise & 97 \\\\\n\\hline\n\\multicolumn{2}{|c|}{\\emph{CMNT noises}} \\\\\n\\hline\nI\/V noise (CMNT\\#1) & 41\\\\\nI\/V noise (CMNT\\#2-7) & 56 \\\\\nshot noise & 0.16 \\\\\nflutter noise & 0.03 \\\\\n\\hline\n\\multicolumn{2}{|c|}{\\emph{S\/C noises}} \\\\\n\\hline\nSRP & 1.7 \\\\\nRadiometer Noise & 0.7 \\\\\nExt. B-fields & 0.01 \\\\\nMicrometeoroids & 0.5 \\\\\n\\hline\n\\multicolumn{2}{|c|}{\\emph{TM noises}} \\\\\n\\hline\nforce noise & $<$ 1 \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\vspace{0cm}\n\\end{table}%\n\nTable \\ref{tab:SCnoiseComp} summarizes measurements and estimates of some of these effects for the x-axis of the spacecraft in the $1\\sim30\\,$mHz band. Details on how some of these effects were estimated can be found in Appendix C. Measured Noise is the approximate white noise level equivalent to the red trace (segment II) in Figure \\ref{fig:Fext_x}. Total Estimate is an uncorrelated (quadrature) sum of the remaining entries in the table, which represents the total amount of noise accounted for in our model. This is dominated by the estimate of the modeled thrust noise in the absence of aliasing, derived from the simulations of the current and voltage noise and the resulting noise floor presented in section \\ref{sec:IVnoise} and Figure \\ref{fig:CMNTsim}. The table lists the contribution from CMNT\\#1, which has an elevated noise floor due to the blipping, as well as the sum of the rest of the thrusters. Unmodeled Noise is the size of the noise contribution, presumably uncorrelated with the modeled noise, that would need to be added in quadrature to the model to match our measurements ($\\sqrt{120^2-70^2}\\approx97\\,\\textrm{nN}\/\\sqrt{\\textrm{Hz}}$). This represents more than half of the measured noise, but at this time we are unable to account for the source of this effect. This suggests that designers of future low-disturbance platforms should take care when considering applications requiring force noise below $\\sim100\\,\\textrm{nN}\/\\sqrt{\\textrm{Hz}}$.\n\\\\\n\n\\section{DRS Performance}\n\\subsection{Operation of the Dynamic Control System}\nDRS operations were initialized with a handover sequence wherein control of the spacecraft and test masses was passed from the European DFACS control system to the DCS. After initial capture, the DCS executed sequences to transition through the various control modes described in section \\ref{sec:DCS} in order to bring the instrument to the desired state for conducting experiments. During DRS operations, this procedure occurred roughly once per week, after station-keeping maneuvers were performed under DFACS control. Figure~\\ref{fig:modeTransitions} shows the measured positions and attitudes as well as the commanded forces and torques of the test masses and spacecraft for a typical transition sequence from handover to the 18DOF science mode. Note that the spacecraft forces do not include the bias levels of the thrusters, which were adjusted to provide a net DC force of 24$\\,\\mu$N in the $+Z$ direction to compensate for solar radiation pressure. After an initial transient caused by the handover sequence, the Attitude Only controller works to stabilize the spacecraft angular error. Both test masses are commanded to follow the spacecraft by applying appropriate forces and torques.
In the Zero-G mode, forces are applied to the spacecraft to minimize the z-axis forces on the test masses, thereby providing active compensation of the solar radiation pressure. In the drag-free mode, the spacecraft is commanded to follow the RTM along the linear DoFs, leading to an increase in the applied forces on the spacecraft and a decrease in both the position error and the applied force on the RTM. In the 18DOF mode, torques are applied to the spacecraft to further reduce the forces on the NTM in the transverse directions. In this particular sequence, the NTM experienced an impulsive disturbance approximately 2.5 hours after the transition into 18DOF mode that caused an excursion of the NTM angles and x position. The DCS compensated for this disturbance by applying appropriate torques and forces to the NTM. The DCS successfully executed dozens of mode transition sequences over the course of the baseline and extended mission, providing a robust platform with which to conduct experiments characterizing the CMNTs and other aspects of the spacecraft.\n\n\\begin{figure*}\\label{transitions}\n\\centering\n\\begin{tabular}{ccc}\n\n\\subf{\\includegraphics[width=0.32\\textwidth]{modes_xRTM.pdf}}\n {RTM positions}\n&\n\\subf{\\includegraphics[width=0.32\\textwidth]{modes_xNTM.pdf}}\n {NTM positions}\n &\n\\\\\n\\subf{\\includegraphics[width=0.32\\textwidth]{modes_aRTM.pdf}}\n {RTM angles}\n&\n\\subf{\\includegraphics[width=0.32\\textwidth]{modes_aNTM.pdf}}\n {NTM angles}\n &\n \\subf{\\includegraphics[width=0.32\\textwidth]{modes_aSC.pdf}}\n {SC angles}\n\\\\\n\\subf{\\includegraphics[width=0.32\\textwidth]{modes_FRTM.pdf}}\n {RTM forces}\n&\n\\subf{\\includegraphics[width=0.32\\textwidth]{modes_FNTM.pdf}}\n {NTM forces}\n &\n \\subf{\\includegraphics[width=0.32\\textwidth]{modes_FSC.pdf}}\n {SC forces}\n\\\\\n\\subf{\\includegraphics[width=0.32\\textwidth]{modes_NRTM.pdf}}\n {RTM torques}\n&\n\\subf{\\includegraphics[width=0.32\\textwidth]{modes_NNTM.pdf}}\n {NTM torques}\n &\n \\subf{\\includegraphics[width=0.32\\textwidth]{modes_NSC.pdf}}\n {SC torques}\n\\\\\n\\end{tabular}\n\\caption{DCS behavior during a typical mode transition sequence from handover to the 18 degree-of-freedom science mode (18DOF). Plots show measured positions and angles of both the reference (RTM) and non-reference (NTM) test masses; angles of the spacecraft; and forces and torques applied to the RTM, NTM, and spacecraft. Note that for spacecraft forces, the thruster bias levels are set to provide a net force in the $+Z$ direction of 24$\\,\\mu$N. Time origin is 2016-10-02 14:00 UTC.}\n\\label{fig:modeTransitions} \n\\vspace{2cm}\n\\end{figure*}\n\n\n\\subsection{Position Accuracy}\nThe spacecraft position error, or the precision with which the spacecraft position is maintained relative to the RTM, is an important requirement for the DCS. The ST7-DRS Level I requirement was for a position error amplitude spectral density of $S_{SCx}^{1\/2}\\leq10\\,\\textrm{nm}\/\\sqrt{\\textrm{Hz}}$ in the band $1\\,\\textrm{mHz}\\leq f \\leq 30\\,\\textrm{mHz}$. Figure~\\ref{fig:position_error_f} shows the measured $S_{SCx}^{1\/2}$ for two different experiments: a 20.7 hr run in the drag-free low-force (DFLF) mode beginning on 2016-08-22 07:36 UTC and a 31.1 hr run in the 18 degree-of-freedom (18DOF) mode beginning on 2016-10-22 00:00 UTC. Both control modes comfortably meet the requirement over the measurement band.
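A back-of-the-envelope check, sketched below under the free-mass approximation and using only numbers quoted in the surrounding text, illustrates why the white thruster force noise discussed in the next paragraph produces the observed $f^{-2}$ position-error spectrum above the drag-free control bandwidth; it is a rough consistency check, not a fit to the flight data.
\\begin{verbatim}
import numpy as np

force_asd = 0.1e-6   # N/sqrt(Hz), approximate CMNT force noise level
mass = 422.0         # kg, spacecraft mass quoted in the text

def free_mass_position_asd(f_hz):
    """S_x^(1/2) = S_F^(1/2) / (M * (2*pi*f)^2) for an uncontrolled rigid body."""
    return force_asd / (mass * (2 * np.pi * f_hz) ** 2)

for f in (0.1, 0.3):   # frequencies above the ~100 mHz drag-free crossover
    print(f"f = {f*1e3:5.0f} mHz : ~{free_mass_position_asd(f)*1e9:5.2f} nm/sqrt(Hz)")
\\end{verbatim}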
At high frequencies, both traces show a spectrum that follows a roughly $f^{-2}$ power-law and has an amplitude that is consistent with a white force noise on the order of $\\sim0.1\\,\\mu\\textrm{N}\/\\sqrt{\\textrm{Hz}}$ and a spacecraft mass of 422$\\,$kg. As discussed in section \\ref{sec:CMNTnoise}, thruster noise on the order of $\\sim0.1\\,\\mu\\textrm{N}\/\\sqrt{\\textrm{Hz}}$ is expected to dominate the spacecraft force noise budget. At $\\sim100\\,$mHz, the drag-free controller begins compensating for this disturbance by applying force commands on the spacecraft, leading to a flattening of the position error spectrum. The DFLF controller has a slightly higher control bandwidth for the x-axis drag-free loop than the 18DOF controller, resulting in a slightly lower level of position noise within the control bandwidth. At lower frequencies, additional gain in the drag-free loop further suppresses disturbances from the thrusters.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{Spec_x1_zoom.pdf}\n\\caption{Amplitude spectral density of the measured RTM-spacecraft position along the x-direction for a 20.7 hr run in the drag-free low-force (DFLF, red) mode beginning on 2016-08-22 and a 31.1 hr run in the 18 degree-of-freedom (18DOF, blue) mode beginning on 2016-10-22. The solid trace shows a linearly-binned spectral density computed using Welch's method with a frequency resolution of 25$\\,\\mu$Hz while the solid points show a logarithmically-binned estimate with 1-sigma error bars. The black dashed line is the Level I position error requirement for ST7-DRS.}\n \\label{fig:position_error_f} \n\\label{default}\n\\end{center}\n\\end{figure}\n\nFigure \\ref{fig:position_error_cdf} plots the measured cumulative probability distribution function for the spacecraft position error along x for the same two runs as are plotted in Figure \\ref{fig:position_error_f}. The distributions are well-approximated by a Gaussian distribution and have 95\\% confidence intervals of (-1.0,+1.0)$\\,$nm for DFLF and (-2.1,+1.9)$\\,$nm for 18DOF.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{cdf_x1.pdf}\n\\caption{Cumulative probability distribution for the measured RTM-spacecraft position along the x-direction for the two runs in Figure \\ref{fig:position_error_f}. The 95\\% confidence intervals for the DFLF and 18DOF position errors are (-1.0,+1.0)$\\,$nm and (-2.1,+1.9)$\\,$nm respectively.}\n \\label{fig:position_error_cdf} \n\\label{default}\n\\end{center}\n\\end{figure}\n\n\\subsection{Differential Acceleration Measurements}\n\\label{sec:deltag}\n\\vskip 0.2in\nWhile the primary purpose of DRS operations was to validate the performance of both the drag-free control laws and the CMNT micropropulsion system, a small portion of the operations time, in both the prime and extended missions, was utilized to make differential acceleration measurements of the two test masses. This `$\\delta g$' measurement is the primary measurement reported by the LTP collaboration\\cite{LPF_PRL_2016, LPF_PRL_2018}. To a leading approximation, one would not expect a change in the differential acceleration noise when either the control laws or the micropropulsion system was changed. As is extensively discussed in the LTP collaboration publications, the performance of the drag-free system is primarily determined by the physics of the sensor assembly and interferometric readout.
For example, the minimum acceleration noise in the $1\\,\\textrm{mHz}\\,\\sim10\\,\\textrm{mHz}$ band is largely determined by the gas pressure around the test mass. The bulk of the LTP operations was composed of experiments to characterize and reduce these various couplings, leading to the improvement in performance from the initial\\cite{LPF_PRL_2016} to the final\\cite{LPF_PRL_2018} results. \n\nTo first order, a change in the control system does not affect the $\\delta g$ measurements because the analysis used to construct the $\\delta g$ results includes both the error signal (motion of the test mass) and the control signal (forces on the test mass). Second-order effects, such as larger sensitivities to calibration errors or actuation cross-talk, can be present. Indeed, early in the DRS operations, it was noticed that the measured $\\delta g$ for Fourier frequencies $\\mathrel{\\hbox{\\rlap{\\hbox{\\lower4pt\\hbox{$\\sim$}}}\\hbox{$>$}}} 30\\,\\textrm{mHz}$ was non-stationary and on average higher during DRS operations than during LTP operations. This was traced to the fact that the DRS suspension controller was initially tuned to be `softer' than the corresponding LTP controller, which resulted in larger RMS motion (but less actuation) of the suspended test mass. This larger motion caused an increase in the average sensing noise of the LTP interferometer, which had a known degradation in noise performance if the test masses were allowed to move appreciably from their nominal positions. Once this was understood, the DRS controllers were modified to be `stiffer' at high frequencies, thus reducing the motion of the suspended test mass and recovering the interferometric sensing noise performance observed in the LTP.\n\nOne also expects the $\\delta g$ results to depend only weakly on which micropropulsion system is used, since the differential nature of the measurement is specifically designed to reject disturbances on the spacecraft platform. The most direct coupling of micropropulsion noise is through the test mass `stiffness', the coupling between spacecraft position and force on the test mass, which is physically caused by all of the following: AC electric fields used to control the test masses, stray electrostatic and magnetic fields coupling to the test mass charge, and the gradient of the gravitational field due to the spacecraft itself. This coupling can be reduced by increasing the contribution from the actuation fields so that both test masses have the same stiffness, thus rejecting any coupling of spacecraft motion through the use of `matched' stiffness. In practice, the matched-stiffness configuration was not extensively used in either LTP or DRS operations because the intrinsic stiffness of both test mass systems was significantly lower than requirements (and the spacecraft motion due to micropropulsion noise was within requirements). Micropropulsion noise could also enter the $\\delta g$ measurement through more subtle effects such as rotating-frame effects. For example, the measured $\\delta g$ signal includes a centrifugal term that arises from the product of the low-frequency rotation of the spacecraft and the in-band attitude jitter of the spacecraft. The standard $\\delta g$ analysis uses a combination of star-tracker attitude data and test mass torque data to estimate this contribution and subtract it.
It is possible that increased angular jitter caused by a noisier micropropulsion system could result in a larger contribution that is more difficult to fully subtract.\n\nFigure \\ref{fig:delta-g-DRS} shows our estimate of the amplitude spectral density of $\\delta g$ for two DRS configurations as well as the `ultimate' LTP performance\\cite{LPF_PRL_2018} and the current estimated requirements for the LISA mission. These data were obtained using the same data analysis pipelines and tools \\cite{LTPDA} as used by the LTP collaboration in their major results papers\\cite{LPF_PRL_2016, LPF_PRL_2018}. For all three segments, the solid trace represents the amplitude spectral density obtained using Welch's method of overlapped averaged periodograms of length 40$\\,$ks, with a Blackman-Harris window applied. As was the case for the standard LTP analysis, the lowest reported frequency is the 4th bin (0.1$\\,$mHz). The solid points are logarithmically-binned estimates of the amplitude spectral density, including one-sigma error bars. For each time-series segment, the data is reduced in a series of steps that includes estimating the observed acceleration, correcting for the applied force on the non-reference test mass, correcting for stiffness as well as actuation and sensing cross talk, correcting for rotating-frame effects, and removing any impulsive `glitches'. Each of these steps requires both a model of the underlying contribution to $\\delta g$ as well as a set of parameters corresponding to the state of the instrument. Again, to the greatest extent possible, our analysis used an identical set of models and parameters as the corresponding LTP analysis. Similarly, glitches were identified and removed using the same procedure as for the final LTP results. \n\nThe first DRS segment in Figure \\ref{fig:delta-g-DRS} (red trace, `initial DRS', data from segment II in Table \\ref{tab:segments}) represents the nominal configuration with the DCS controlling the test masses via the LTP and the spacecraft via all eight CMNTs. Three glitches were identified in this segment, occurring at 2016-10-04 17:51 UTC, 2016-10-04 18:07 UTC, and 2016-10-05 05:50 UTC. Each glitch was fit and removed using a double-exponential, as described in the final LTP paper\\cite{LPF_PRL_2018}. The second DRS segment (blue trace, `optimized DRS', data from segment IV in Table \\ref{tab:segments}) represents the optimized DRS performance, obtained after the system had been tuned but also after CMNT \\#4 had failed. Again, after the failure of CMNT \\#4, the DCS was modified to control the spacecraft with seven CMNTs in closed-loop as well as an open-loop `crutch' provided by four of LPF's cold gas thrusters. Two glitches, at 2017-04-23 03:25 UTC and 2017-04-23 13:22 UTC, were identified and removed. The LTP segment (orange trace, `ultimate LTP') is the February 2017 segment plotted in Figure 1 of the final LTP $\\delta g$ paper\\cite{LPF_PRL_2018}.
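For readers who wish to reproduce the style of spectral estimate described above, the following minimal sketch (not the LTPDA pipeline) computes a Welch amplitude spectral density with 40 ks segments and a Blackman-Harris window, followed by a simple logarithmic rebinning; the sampling rate and the input $\\delta g$ time series are placeholders for the reduced flight data.
\\begin{verbatim}
import numpy as np
from scipy.signal import welch, get_window

def delta_g_asd(delta_g, fs, seg_s=40_000):
    """Welch ASD of a delta-g time series [m s^-2] sampled at fs [Hz]."""
    nperseg = int(seg_s * fs)
    window = get_window("blackmanharris", nperseg)
    freq, psd = welch(delta_g, fs=fs, window=window, nperseg=nperseg)
    return freq, np.sqrt(psd)

def log_bin(freq, asd, bins_per_decade=10):
    """Average an ASD into logarithmically spaced frequency bins."""
    freq, asd = freq[1:], asd[1:]                    # drop the DC bin
    n_bins = int(bins_per_decade * np.log10(freq[-1] / freq[0])) + 1
    edges = np.logspace(np.log10(freq[0]), np.log10(freq[-1]), n_bins)
    idx = np.digitize(freq, edges)
    fb = np.array([freq[idx == i].mean() for i in np.unique(idx)])
    ab = np.array([asd[idx == i].mean() for i in np.unique(idx)])
    return fb, ab
\\end{verbatim}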
\n\nThe LISA requirements are the single test-mass acceleration noise requirement from the 2017 LISA Mission Proposal\\cite{LISA_PROPOSAL_2017}, but here multiplied by $\\sqrt{2}$ to compensate for the fact that LPF makes a measurement of the differential noise between two test masses whereas the requirement is written for a single test mass.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{deltag_DRS.pdf}\n\\caption{Measured residual differential acceleration between Pathfinder's two test masses ($\\delta g$) for DRS operations and comparison with LTP configuration. The red trace (initial DRS) is for an all-DRS configuration from early in the mission when all eight CMNTs were operating and the cold gas micropropulsion system was on standby (segment II in Table \\ref{tab:segments}). The blue trace (optimized DRS) is for a configuration with the DCS controlling the spacecraft using 7 of 8 CMNTs while four of the cold gas microthrusters provided an open-loop static force (segment IV in Table \\ref{tab:segments}). The orange trace is the ultimate published LTP performance\\cite{LPF_PRL_2018}. For all three segments, the solid trace is a linearly-binned amplitude spectral density with a resolution of $250\\,\\mu$Hz while the solid markers represent logarithmically-binned estimates of the amplitude spectral density with one sigma error bars. The LISA requirements are the single test-mass acceleration requirement as expressed in the 2017 LISA Mission Proposal\\cite{LISA_PROPOSAL_2017} with a factor of $\\sqrt{2}$ applied to account for the fact that Pathfinder measures differential acceleration between two test masses. \\label{fig:delta-g-DRS} }\n\\end{center}\n\\end{figure}\n\nWhen comparing the three configurations in Fig.~\\ref{fig:delta-g-DRS}, it is useful to consider three different frequency regimes. At high frequencies ($f\\mathrel{\\hbox{\\rlap{\\hbox{\\lower4pt\\hbox{$\\sim$}}}\\hbox{$>$}}} 30\\,\\textrm{mHz}$), the three traces are quite similar, exhibiting an $f^2$ power-law behavior that is caused by white displacement noise in the LTP interferometric readout of the differential test mass position. The most notable differences are the presence of a line feature near 70$\\,$mHz in both the LTP and optimized DRS configurations as well as the fact that the initial DRS configuration is slightly lower than the other two. The elevated broad-band displacement noise is likely due to more significant misalignments of the test masses during the extended mission. During the baseline mission, an extensive campaign was carried out to identify misalignments and actively correct for them by modifying the static offsets of the test mass positions and attitudes. This procedure was shown to significantly reduce the cross-coupling term in the $\\delta g$ analysis (see discussion in the initial LTP paper\\cite{LPF_PRL_2016}). This adjustment was not repeated in the extended mission and it is likely that the offsets changed due to either deliberate changes in the spacecraft temperature or other effects such as creep and outgassing. The origin of the 70$\\,$mHz line feature is unknown, although the fact that it is not present in the initial DRS configuration, when the cold gas micropropulsion system was placed in standby mode, suggests that it may be related to the cold gas micropropulsion system in some way. 
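\nAs a worked illustration of the $f^2$ behavior noted above, a white displacement readout noise $\\tilde{x}(f)$ maps into an equivalent differential acceleration $(2\\pi f)^2\\,\\tilde{x}(f)$. The snippet below evaluates this at a few Fourier frequencies; the 35$\\,\\textrm{fm}\/\\sqrt{\\textrm{Hz}}$ readout level is an assumed placeholder for illustration, not the measured LPF figure.\n\\begin{verbatim}\n# Convert a white displacement readout noise into equivalent acceleration\n# noise: a(f) = (2*pi*f)^2 * x(f). The 35 fm\/sqrt(Hz) level is an assumption.\nimport numpy as np\n\nx_noise = 35e-15                 # displacement ASD [m \/ sqrt(Hz)] (assumed)\nf = np.array([0.01, 0.03, 0.1])  # Fourier frequencies [Hz]\na_noise = (2 * np.pi * f)**2 * x_noise\n\nfor fi, ai in zip(f, a_noise):\n    print(f'{fi*1e3:5.1f} mHz : {ai*1e15:6.2f} fm s^-2 \/ sqrt(Hz)')\n\\end{verbatim}\n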
\n\nIn the middle band ($1\\,\\textrm{mHz}\\mathrel{\\hbox{\\rlap{\\hbox{\\lower4pt\\hbox{$\\sim$}}}\\hbox{$<$}}} f \\mathrel{\\hbox{\\rlap{\\hbox{\\lower4pt\\hbox{$\\sim$}}}\\hbox{$<$}}} 10\\,\\textrm{mHz}$), the three traces show clear differences, with the LTP trace presenting a nearly flat, feature-free noise floor of approximately 1.8$\\,\\textrm{fm}\\,\\textrm{s}^{-2} \/ \\sqrt{\\textrm{Hz}}$. The initial DRS trace is roughly two times higher at 10$\\,$mHz and rises slowly towards lower frequencies. The optimized DRS trace has a lower broad-band noise floor than the initial DRS trace, around 3$\\,\\textrm{fm}\\,\\textrm{s}^{-2} \/ \\sqrt{\\textrm{Hz}}$. This is still roughly a factor of two higher than the ultimate LTP case, although with the much shorter segment (236 ks vs. 1.15 Ms), the statistics are not as good. As mentioned above, the limiting noise in this band is expected to be gas pressure in the test mass enclosures. After an initial steady decrease in pressure in response to the opening of the vent duct to space, the GRS pressure was primarily controlled by setting the temperature of the GRS housing. The difference in temperatures between the optimized DRS (12.8$\\,^{\\circ}$C) and the ultimate LTP (11.5$\\,^{\\circ}$C) segments is not large enough to account for the observed difference in noise, although the higher noise floor in the initial DRS run, which occurred at a higher temperature (23.5$^{\\circ}$C) and several months earlier in the mission, may well be due to increased pressure in the housings.\n\nAt the low end of the measured frequency band ($0.1\\,\\textrm{mHz}\\mathrel{\\hbox{\\rlap{\\hbox{\\lower4pt\\hbox{$\\sim$}}}\\hbox{$<$}}} f \\mathrel{\\hbox{\\rlap{\\hbox{\\lower4pt\\hbox{$\\sim$}}}\\hbox{$<$}}} 1\\,\\textrm{mHz}$), all three traces in Fig. \\ref{fig:delta-g-DRS} show a rise with a slope of roughly $f^{-1}$, with their relative amplitudes similar to those in the mid-band. At the lower end of the band, the optimized DRS noise is slightly higher than that of the ultimate LTP, although the statistics on the DRS measurement are poor. While the long-duration LTP results were able to demonstrate performance down to 20$\\,\\mu$Hz, a region with important astrophysical implications, the DRS data set is not long enough to make any measurements below 0.1$\\,$mHz. However, based on the available data, it would appear that a drag-free system employing CMNT micropropulsion could meet all of the LISA science requirements while providing significant savings in mass to the flight system.\n\n\n\\section{Summary, Conclusions and Future Work}\n\\vskip 0.2in\n The ST7-DRS successfully demonstrated NASA-developed drag-free control laws and colloid thrusters in space. Colloid thrusters were selected for the mission because of their potential to be used for future missions requiring low thrust noise ($\\le 0.1\\,\\mu\\textrm{N}\/\\sqrt{\\textrm{Hz}}$) and high precision ($\\le 0.1\\,\\mu$N steps) with a comparatively small amount of volume and mass used for propellant. The ST7-DRS controlled the attitude of the 424 kg ESA LISA Pathfinder spacecraft for 103.4 days ($\\sim$41 days of drag-free control) using less than 1 kg of propellant. The mission was also an example of adapting the operational characteristics of colloid thrusters to a drag-free control application. \n As a technology demonstration, ST7-DRS was a success, meeting its performance requirements. However, the anomalies experienced during the mission would not be acceptable had they occurred in the primary propulsion system of a long-duration mission. 
Also, the propellant volume must be increased for a several-year mission. In the decade since the CMNT thrusters were completed in 2008, significant progress has been made in adapting the technology to longer missions. The on-orbit performance observed on ST7 is informing this development, as well as the planning of the verification and validation testing needed to demonstrate that the improvements have been realized without introducing new problems.\nThe on-orbit data from ST7 is also being used to infer the performance of colloid thrusters in other applications. While ST7-DRS specifically implemented a drag-free control system, the data collected allows the design of other types of control systems. The performance demonstrated on LPF can enable many applications needing ultra-precise pointing, formation flying or dynamic stability, including separated-element interferometers, coronagraphs, very large aperture telescopes and fundamental physics experiments. In addition, the control modes and operational approach demonstrated on ST7 provide an example of how to hand over from a higher-noise control system to a very low noise system, and also demonstrate the response to impulse events (and anomalies) that can be used in planning for any of these applications. We look forward to the results of LISA and many other amazing science missions in the future. \n\n\n\\acknowledgments\n\n\nWe would like to acknowledge Dr. Landis Markley for insightful contributions to the design and development of the DRS and DCS. We acknowledge the significant contributions of Jeff D\\'Agostino and Kathie Blackman of the Hammers Co., who implemented and tested the DCS algorithms in the FSW, and who supplied the corresponding simulators to JPL and ESA. \nThe data was produced by the NASA Disturbance Reduction System payload, developed under the NASA New Millennium Program and hosted on the LISA Pathfinder mission, which was part of the space-science programme of the European Space Agency. The work of JPL authors was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under contract to the National Aeronautics and Space Administration. Dr. Slutsky was supported by NASA through the CRESST II cooperative agreement CA 80GSFC17M0002.\nThe French contribution to LISA Pathfinder has been supported by the CNES (Accord Specific de projet CNES 1316634\/CNRS 103747), the CNRS, the Observatoire de Paris and the University Paris-Diderot. E.~Plagnol and H.~Inchausp\\'{e} would also like to acknowledge the financial support of the UnivEarthS Labex program at Sorbonne Paris Cit\\'{e} (ANR-10-LABX-0023 and ANR-11-IDEX-0005-02).\nThe Albert-Einstein-Institut acknowledges the support of the German Space Agency, DLR, in the development and operations of LISA Pathfinder. The work is supported by the Federal Ministry for Economic Affairs and Energy based on a resolution of the German Bundestag (FKZ 50OQ0501 and FKZ 50OQ1601). \nThe Italian contribution to LISA Pathfinder has been supported by Agenzia Spaziale Italiana and Istituto Nazionale di Fisica Nucleare.\nThe Spanish contribution to LISA Pathfinder has been supported by contracts AYA2010-15709 (MICINN), ESP2013-47637-P, and ESP2015-67234-P (MINECO). M.~Nofrarias acknowledges support from Fundacion General CSIC (Programa ComFuturo). F.~Rivas acknowledges an FPI contract (MINECO).\nThe Swiss contribution to LISA Pathfinder was made possible by the support of the Swiss Space Office (SSO) via the PRODEX Programme of ESA. 
L.~Ferraioli is supported by the Swiss National Science Foundation.\nThe UK LISA Pathfinder groups wish to acknowledge support from the United Kingdom Space Agency (UKSA), the University of Glasgow, the University of Birmingham, Imperial College, and the Scottish Universities Physics Alliance (SUPA). \\copyright~2018. All rights reserved.\n\n\\section*{Appendix A: Anomalies experienced during ST7-DRS operations}\n\\label{sec:ANOMALIES}\n\nThe DRS experienced three significant anomalies during the mission that, while they affected operations, did not prevent the mission from meeting its objectives and performance requirements. First, CMNT\\#1 demonstrated a reduced maximum current and a slower response time compared to acceptance and thermal vacuum testing prior to launch. One of its nine emitters also blipped on and off at 0.2--0.6 Hz, depending on current level, throughout both the primary and extended missions, significantly increasing thrust noise. Second, the DCIU PROM on cluster 2 suffered a partial memory failure in the location of the thruster control algorithm just after the instrument commissioning was complete, preventing direct thrust commands from being acted on without resetting the DCIU. Third, near the end of the primary mission, CMNT\\#4 experienced a propellant bridge between the emitter and the extractor electrodes, effectively preventing further use of the thruster. This electrical short occurred after the mission performance requirements had been met (including 60 days of operation) and during experiments that were designed to prove performance under more stressful operating conditions. For all three anomalies, solutions or work-arounds were developed to enable operations and continue experiments up to the end of the extended mission. The next three subsections provide more detail on all three anomalies.\n\n\\subsection*{CMNT\\#1 Performance Reduction} \n\\label{sec:anomaly1}\nCMNT\\#1 in-flight performance was not consistent with its performance in pre-delivery acceptance and thermal vacuum (TVAC) ground tests, where it met all response time and thrust noise requirements. For example, in the TVAC tests, the response time required for CMNT\\#1 to increase thrust from 5 to 30 ${\\mu}N$ was 9.8 s, which was below the 100 s requirement. In flight, CMNT\\#1 took 2 days longer than the other thrusters to fill the propellant feed system and initially turn on, indicating a significantly reduced maximum propellant flow rate. After startup and bubble removal in a test during commissioning, it demonstrated a slower response time of about 170 s and a maximum thrust capability that was less than the other thrusters but still within requirements. This increased response time characteristic continued throughout the mission but would gradually improve and nearly return to meeting the 100 s response time requirement after the microvalve was left open for a number of days. Whenever CMNT\\#1 was deactivated and its microvalve closed, the response time would then increase again. Experiments during the extended mission showed that the response time increase was most closely related to how long CMNT\\#1's microvalve was closed. This indicated either a problem in the microvalve actuator or a variable, incomplete blockage in the feed system upstream of the microvalve or in its flow-limiting orifice just upstream of the microvalve seat. 
The net result was a requirement to ``prime'' CMNT\\#1 by running it in a diagnostic mode at constant current for 30 minutes prior to each start-up to improve its response time. The extra delay in CMNT\\#1's response also impacted thrust noise during periods of higher fluctuations in thrust commands (i.e., in 18-DOF mode). Fortunately, the failure of CMNT\\#1 to meet this response time requirement did not significantly impact DRS operations during mode transitions or science mode, nor prevent the DRS as a whole from meeting its Level-1 performance requirements.\n\nIn flight, CMNT\\#1 also experienced beam current spikes of 160--240 nA every 1.75--4 seconds, with the frequency and magnitude proportional to the current level, throughout the entire mission. The size and high-speed characteristics of the current spikes were consistent with a single emitter turning on and off with a $<10$\\% duty cycle, an effect colloquially referred to as `blipping'. From tests designed to allow counting of the number of active emitters on each thruster, and from direct measurements of thrust using the GRS, it was clear that CMNT\\#1 had one of its nine emitters mostly off. Analyses of these experiments and others suggest that the observed behavior was not due to a bubble but to a blockage or constriction in one emitter with a constant hydraulic impedance that impeded the propellant flow rate significantly compared to the other eight emitters. With a reduced flow rate in just one of the nine emitters, it was not possible to maintain a steady current. \n\nBoth of these performance reductions on CMNT\\#1 were observed to start at the same time, during startup, suggesting that they could be related and are both likely caused by an increase in hydraulic resistance; however, the location of the constrictions is not the same. It should be noted that during startup, a large amount of current ($>24 {\\mu}A$) was emitted for over 20 minutes, potentially indicating a bubble passing through the microvalve orifice, expanding by a factor of 10 in volume and pushing a significant amount of propellant out of one or more emitters. This long-duration, high-current event occurred outside of daily real-time observations, fortunately terminating on its own, but potentially damaging one of the emitters in the process. Follow-on ground tests have shown constrictive damage to emitters that experience abnormally high currents, with a build-up of decomposed propellant products, which could explain CMNT\\#1's blipping emitter. A bubble stuck in the microvalve orifice could also explain the reduced flow rate. The cause of both performance issues with CMNT\\#1 continues to be under investigation. Because the seven other thrusters and microvalves performed nearly the same in flight as on the ground, the CMNT\\#1 anomaly could be a yield issue that can be addressed by maturing the technology, improving the microvalve testing, screening, and filling processes, and\/or flying redundant valves, as is common practice for primary propulsion systems on large science missions.\n\n\\subsection*{DCIU PROM Corruption}\n\\label{sec:anomalyDCIU}\nThe DCIU on cluster 2 experienced an anomaly on July 8, 2016, immediately after a successful instrument commissioning, resulting in a processor reset and CMTA 2 becoming disabled whenever it received a thrust command. Testing revealed that, while the Thrust Command Mode no longer functioned properly, the Diagnostic Mode and other control routines still worked, allowing diagnosis and an eventual work-around. 
By design, the DCIU PROM and control software could not be updated on orbit, but fortunately the IAU flight software (FSW) was designed with a back-up thrust control algorithm using the DCIU's Diagnostic Mode in case there was a desire to modify it on orbit. In this case, the IAU calculated the beam, extractor, and microvalve voltage commands directly from the DCS thrust requests and sent those commands, instead of thrust commands, to the DCIU. The IAU FSW and operational sequences required updating and verification to make the embedded thrust control algorithm that enabled this DCIU ``Pass-through Mode'' work properly, and operations continued on August 8, 2016. \n\nThe suspected cause of this anomaly is a single radiation event that permanently damaged part of cluster 2's PROM. While the Pass-through Mode was a work-around that solved the immediate issue, it also introduced a command and telemetry delay between thrust command processing and beam current and voltage commanding, execution, sensing, and feedback that did not exist when the DCIU had its own internal control loop functioning properly. This added delay was 1--2 real-time intervals (RTIs), or 0.1--0.2 seconds, depending on the exact timing between telemetry and command packets passing back and forth between the IAU, DCIUs, and on-board computer. As discussed in Section~\\ref{sec:IVnoise}, this extra delay led to a limit-cycle oscillation in the thruster control loop. Fortunately, this limit-cycle oscillation was confined to frequencies above the target performance bandwidth and did not impact mission performance. To prevent this anomaly in future missions, a more robust, radiation-hardened, and re-writable EEPROM for the DCIU is recommended. \n \n\\subsection*{CMNT\\#4 Propellant Bridge}\n\\label{sec:anomaly4}\nNear the end of the primary mission and after all Level 1 mission requirements had been met, CMNT\\#4 developed an electrical short (an impedance of $200\\,M\\Omega$, which does constitute a ``short'' in the CMNT system) between the emitter and extractor electrodes due to a suspected propellant bridge, which rendered CMNT\\#4 effectively inoperable. The bridge occurred on Oct 27, 2016, after 1670 hours of operation on CMNT\\#4. While the exact location of the short on the emitters or extractor electrodes cannot be determined without access to the thruster, neither electrode was shorted to ground potential or to the accelerator electrode, indicating the short was not in the PPU. The variable nature of the short impedance also indicated that it was an electrically conductive bridge of partially-polymerized propellant formed between the emitter and the extractor, likely after the porous extractor became saturated with propellant from normal and off-nominal operation. These kinds of propellant bridges have been observed on the ground previously due to poorly aligned emitters, saturated porous extractors, or large amounts of excess propellant near the emitter tip prior to operation.\n\nAt the time the short occurred, CMNT\\#4 was undergoing an experiment to verify the thrust performance model over a wide range of beam voltage and temperature conditions. At the point of failure, the beam voltage was 4 kV and the thruster temperature was reduced to 20$^{\\circ}$C, compared to the nominal operating conditions of 6 kV and 25$^{\\circ}$C. Operating at a lower beam voltage widened the exhaust beam, increasing the flux of propellant at the edges of the beam onto the extractor electrode. 
Operating at reduced temperatures increased the propellant's viscosity and impeded absorption and capillary action in the extractor's pores, which are designed to soak up excess propellant. It is possible that operating in this off-nominal condition (which was still within the specified operational range) contributed to the failure.\n\nCareful analysis of the ground and in-flight data showed that CMNT\\#4 did experience significantly more operation time with bubble-driven flow during ground-based TVAC tests than the other thrusters, which could have increased the flux of propellant to the extractor, filling the pores. In addition, because of another LPF anomaly unrelated to the DRS that occurred approximately a week before the CMNT\\#4 short, all the thrusters were shut down abruptly, which could have allowed some excess propellant to escape out of CMNT\\#4's emitters during a thermal transient that caused the propellant to expand without voltage on the electrodes. The safer and normal version of the thruster shutdown procedure includes multiple steps to decrease the quantity of residual propellant in the emitters, reducing the risk of spraying during periods of prolonged shutdown. Finally, to preserve the stable thermal environment on the spacecraft, all CMNTs were left with PPUs enabled, default voltages on, and microvalves closed during standby mode and station-keeping maneuvers, which was not originally specified during operations or tested on the ground. While this kind of operation should not have caused any additional spray or flux to the extractor, on examining the data more carefully, a $\\sim0.05\\,{\\mu}A$ current was observed during most of this time in standby mode on CMNT\\#4, which was 10 times larger in terms of integrated current or total charge than for any other thruster during these same periods, indicating low-level spraying between the electrodes that could have led to premature saturation of the extractor.\n\nUnfortunately, the ST7 mission did not include a method of measuring the current to the extractor or accelerator electrodes, and since this was not measured except in early engineering-model ground testing, it is difficult to quantify how thruster lifetime was impacted for this specific case. Determining how to prevent current flux to the extractor during normal operation and spraying between electrodes with default voltages on during standby mode, as well as monitoring any current to the extractor and accelerator electrodes, will be critical to the further development of this thruster technology, especially for missions with long lifetime requirements like LISA. Implementing redundant thruster heads will also be important for providing the required lifetimes, as is common practice for the primary propulsion system on large science missions. \n\nAfter the CMNT\\#4 anomaly, and because the two CMTAs are located only along the spacecraft x-axis, having just seven operable thrusters gave insufficient rotational authority about the x-axis. At the end of the nominal mission, with ESA's assistance, a hybrid ``crutch'' mode was developed to continue DRS operations using four of the LTP cold gas (CGAS) thrusters to provide a constant thrust bias that replaced what CMNT\\#4 would have normally provided. The colloid thruster and cold gas thruster thrust bias levels, as well as new operational procedures and sequences, were developed and validated on ground-based testbeds for this new operating mode. 
Both 4- and 2-thruster CGAS configurations were demonstrated on the ST7 testbed; however, the 4-thruster configuration was preferred to reduce the required colloid thruster thrust bias levels. This hybrid operation was demonstrated, for all DCS modes, just before the final week of the primary mission,\nand it continued successfully through the extended mission. \n\n\\section*{Appendix B: Derivation of response to thrust injections}\n\\label{sec:mathAppendix}\nIn this section we present a more detailed derivation of the response of the Pathfinder spacecraft and LTP instrument to the thruster injections that were used to calibrate the CMNTs, as described in Section~\\ref{sec:thustCal}. \n\nLet $\\hat x_{bf}$, $\\hat y_{bf}$, $\\hat z_{bf}$ be unit vectors along the body-frame axes of the spacecraft; let $\\vec r_{SC}$ be the position of the spacecraft CoM in some inertial frame; and let $\\vec r_{TM1}$ and $\\vec r_{TM2}$ be the positions (CoMs) of the two TMs in the same inertial frame. The $x1$ value measured by LTP is $\\big(\\vec r_{TM1} - \\vec r_{SC}\\big)\\cdot \\hat x_{bf}$, and similarly for $x2$. \n\nApplying Newton's 2nd law to either $x1$ or $x2$, and restricting to the most-sensitive ($\\hat x_{bf}$) direction, gives\n\\begin{eqnarray}\n\\ddot x &\\equiv& \\frac{d^2}{dt^2} \\big[\\big(\\vec r_{TM} - \\vec r_{SC}\\big)\\cdot \\hat x_{bf}\\big] \\\\\n&=& \\big[\\big(\\ddot{\\vec r}_{TM} - \\ddot{\\vec r}_{SC}\\big)\\cdot \\hat{x}_{bf}\\big] + 2 \\big[\\big( \\,\\dot{\\vec r}_{TM} - \\dot{\\vec r}_{SC}\\big)\\cdot \\dot{\\hat x}_{bf}\\big] \\ \\ \\ \\ \\ \\ \\\\\n&+& \\big[\\big(\\vec r_{TM} - \\vec r_{SC}\\big)\\cdot \\ddot{\\hat x}_{bf}\\big] \\, \\label{basic} .\n\\end{eqnarray} \n\n\\noindent Re-arranging the above equation for $\\ddot x$ gives\n\\begin{eqnarray}\nF^x_{SC} &=& -M_{SC} \\ddot x + \\frac{M_{SC}}{M_{TM} }F^x_{TM} \\label{b1} \\\\\n&+& 2 M_{SC} \\big[\\big( \\,\\dot{\\vec r}_{TM} - \\dot{\\vec r}_{SC}\\big)\\cdot \\dot{\\hat x}_{bf}\\big] \\label{b2} \\\\\n&+& M_{SC} \\big[\\big(\\vec r_{TM} - \\vec r_{SC}\\big)\\cdot \\ddot{\\hat x}_{bf}\\big] \\, \\label{b3} \n\\end{eqnarray}\nwhere $F^x \\equiv \\vec F \\cdot \\hat x_{bf}$. The first term is Newton's 2nd law in the instrument frame, while the latter two terms account for the rotation of the frame. To evaluate the rotating-frame terms, we define \n\\begin{equation}\n\\dot{\\hat x}_{bf} = \\vec \\Omega \\times \\hat x_{bf} \\, ,\n\\end{equation}\nwhere $\\vec \\Omega$ is the spacecraft's instantaneous angular velocity, which also implies \n\\begin{eqnarray}\n\\ddot{\\hat x}_{bf} &=& \\dot{\\vec \\Omega} \\times \\hat x_{bf} + \\vec \\Omega \\times \\dot{\\hat x}_{bf} \\, , \\\\\n&=& \\dot{\\vec \\Omega} \\times \\hat x_{bf} + \\vec \\Omega \\times \\big(\\vec \\Omega \\times \\hat x_{bf} \\big) \\\\\n&=& \\dot{\\vec \\Omega} \\times \\hat x_{bf} + \\big(\\vec \\Omega \\cdot \\hat x_{bf}\\big) \\vec \\Omega \\, - \\, \\Omega^2 \\, \\hat x_{bf} \\label{c3} \\, .\n\\end{eqnarray}\n\nThe dynamical quantities can be separated into those dominated by the injection, including the TM actuation, and the rest, in order to estimate the sizes and timescales on which they are changing. If one restricts attention to the thruster force at the injection frequency, then line (\\ref{b2}) is\nnegligible compared to line (\\ref{b3}). Finally, $\\ddot{\\hat x}_{bf}$ in (\\ref{b3}) is well approximated by the \n$\\dot{\\vec \\Omega} \\times \\hat x_{bf} $ term in line (\\ref{c3}), and terms quadratic in $\\Omega$ are negligible. 
This implies the approximation\n\\begin{equation}\n\\dot{\\Omega}_i = (I^{-1})_{ij}N^j \\, .\n\\end{equation}\n\nIf we average the motion of $x1$ and $x2$, the largest rotational effects cancel. The only rotational effects that do not cancel are proportional to $\\Delta z$, defined as the z-displacement of both TMs from the spacecraft CoM, which is on the order of 5$\\,$cm. Averaged over the two TMs, and restricting analysis to the injection frequency, all the rotating-frame effects can be approximated by \n\\begin{equation}\n- M_{SC} (\\Delta z) \\dot{\\Omega}_y \\, \n\\end{equation}\nor\n\\begin{equation}\n- M_{SC} (\\Delta z)\\big[(I^{-1})_{yx}N^x + (I^{-1})_{yy}N^y + (I^{-1})_{yz}N^z \\big]\\, . \n\\end{equation}\nOf these, the middle term is by far the largest, but it is simple to carry along the off-diagonal terms.\n\n If $F_i$ is the amplitude of the thrust from thruster $i$, then there are matrices $T^{xi}$ and $K^{xi}$ such that the $x$-components of the force and torque from thruster $i$ are $T^{xi}F_i$ and $K^{xi}F_i$. These matrices can be derived from the thruster positions and orientations. Plugging these into Eqs.~(\\ref{b1})--(\\ref{b3}) and re-arranging terms, we arrive at\n\n\\begin{equation}\nF_{i|x} = \\frac{1}{2}\\big[ -M_{SC} (\\ddot x_1 + \\ddot x_2) + \\frac{M_{SC}}{M_{TM} }(F^x_{TM1} + F^x_{TM2})\\big] \/(1 + R^x_i)\\label{eq:Fresp}\n\\end{equation}\n\n\\noindent where the symbol $F_{i|x}$ denotes ``the force exerted by thruster $i$, as estimated from the $x$ equation of motion'', \nand where the rotational correction term $R^x_i$ is given by\n\\begin{equation}\nR^x_i = (\\Delta z) M_{SC} \\big[ (I^{-1})_{yx} K^{xi}+ (I^{-1})_{yy} K^{yi} + (I^{-1})_{yz} K^{zi} \\big] \/T^{xi} \\, .\\label{eq:Rcorrect}\n\\end{equation}\n\\\\\n\n\n\\section*{Appendix C: Estimates of Thruster and Platform Noise Contributions}\n\\label{sec:NoiseAppendix}\nThis appendix provides details of the estimates of contributions to the measured force noise on the spacecraft as listed in Table \\ref{tab:SCnoiseComp}. \n\n\\subsection*{CMNT Shot Noise}\nCMNT shot noise is an effect of the quantized nature of the electrospray thrust, which is composed of the momentum transfer from discrete droplets. Using a charge-to-mass ratio of 470 C$\/$kg, the current corresponds to a droplet rate of $3 \\times 10^{15}$ drops\/second, with an associated shot noise of 0.16 nN when projected into the $x$ direction. \n\\subsection*{CMNT Flutter Noise}\nFlutter noise refers to variations in the thrust direction, which induce a thrust-dependent thrust noise $S_{flutter} = T\\cdot S_{\\cos\\alpha}$, where $\\alpha$ is the deviation of the thrust vector from its nominal direction. Ground measurements using a 2-D electrometer array measured $S_\\alpha < 10^{-3}\\,\\textrm{rad}\/\\sqrt{\\textrm{Hz}}$ in the relevant band, which gives $S_{flutter}\\sim 0.03\\,\\textrm{nN}\/\\sqrt{\\textrm{Hz}}$ at the maximum thrust of 30$\\,\\mu$N.\n \n \\subsection*{Solar Force Noise}\n Solar radiation pressure (SRP) produces a force noise along the x-axis that is described by \n \\begin{equation}\n S_{SRP,x} = F_{SRP}\\cdot S_{SRP}\\cdot\\bar{H}+F_{SRP}\\cdot S_H,\\label{eq:SRPx}\n \\end{equation}\n where $F_{SRP}$ is the DC force on the spacecraft due to the SRP, $S_{SRP}$ is the spectral density of the fractional stability of the SRP, $\\bar{H}$ is the mean angle of the spacecraft about the y-axis, and $S_H$ is the spectral density of variations in this angle. The total DC force on the spacecraft in the anti-Sun direction ($-z$ axis) is measured at $\\sim24\\,\\mu$N. 
This includes contributions both from the direct solar radiation pressure and from the differential thermal radiation from the warm sunward side of the spacecraft (radiometer effect). A rough order-of-magnitude estimate is that the radiometer term is approximately 40\\% of the direct term. Hence $F_{SRP}\\sim17\\,\\mu$N. During DRS science operations, $\\bar{H}\\sim10^{-4}\\,$rad, and $S_H\\mathrel{\\hbox{\\rlap{\\hbox{\\lower4pt\\hbox{$\\sim$}}}\\hbox{$<$}}} 10^{-4}\\,\\textrm{rad}\/\\sqrt{\\textrm{Hz}}$. Measurements of variations in solar flux give estimates of $S_{SRP}\\mathrel{\\hbox{\\rlap{\\hbox{\\lower4pt\\hbox{$\\sim$}}}\\hbox{$<$}}} 10^{-3}\\,\/\\sqrt{\\textrm{Hz}}$. With these parameters, the second term in (\\ref{eq:SRPx}) dominates, with a contribution of $1.7\\,\\textrm{nN}\/\\sqrt{\\textrm{Hz}}$. \n \n \\subsection*{Radiometer Noise}\n The coupling of the radiometer noise, which is primarily along the z-axis, to motion in the x-axis is the same as for SRP. Using the same logic as presented for the SRP above, the DC force along z is estimated as $F_{rad}\\sim7\\,\\mu$N, and the noise along x due to angular jitter in $H$ is $0.7\\,\\textrm{nN}\/\\sqrt{\\textrm{Hz}}$. Note that because the coupling mechanism for SRP and radiometer noise is the same (spacecraft jitter in $H$), they will add \\emph{coherently}. This is taken into account for the noise summations in Table \\ref{tab:SCnoiseComp}.\n \n\\subsection*{Magnetic Field Noise}\n The size of the spacecraft is vastly smaller than the spatial length scale over which the interplanetary $\\vec B_{ip}$-field varies, so we treat $\\vec B_{ip}$ as spatially uniform over the spacecraft. The interaction of $\\vec B_{ip}$ with the current $\\vec j$ in the spacecraft can torque the spacecraft, but it produces no net force. Note that time variations in $\\vec B_{ip}$ will cause extra currents to flow in the spacecraft, which create a B-field $\\vec B_{sc}$ that partially counteracts the changes in $\\vec B_{ip}$ (this is magnetic shielding), and because magnetic permeability will vary across the spacecraft, the total field $\\vec B_{ip} + \\vec B_{sc}$ \\emph{can} have a significant spatial gradient. But this does not invalidate our earlier argument that there is no net force from the interaction of $\\vec B_{ip}$ and the total $\\vec j$.\n\nThe interaction of $\\vec B_{ip}$ with the net charge on the spacecraft does produce a net force, which we now estimate. The photo-electric effect ``kicks'' electrons off the surface of the solar panels, tending to charge the spacecraft positively, while the CMNTs (which generate thrust by accelerating positively charged droplets away from the spacecraft) tend to charge it negatively. When the CMNTs are providing the thrust, these two effects largely cancel, and this is the mechanism that keeps the net charge on the spacecraft small. The spacecraft potential adjusts until the net spacecraft charging rate averages over time to zero. The spacecraft potential in equilibrium is $\\sim 100\\,$V, or (in cgs units) $\\sim \\frac{1}{3}$ statvolt. From this, we can estimate the total charge on the spacecraft by $V \\sim Q\/R$. Using $R \\sim 100\\,$cm, we find $Q \\sim 33\\ $statcoulomb (or $\\approx 10^{-8}\\ $Coulomb). Using $\\vec F_{sc} = c^{-1} Q \\, \\vec v \\times \\vec B_{ip}$, $v\/c \\sim 10^{-4}$, and $B_{ip} \\approx 3 \\times 10^{-4} \\big(\\frac{f}{1\\,\\textrm{mHz}}\\big)^{-0.8}\\,\\textrm{gauss}\\,\\textrm{Hz}^{-1\/2}$, we arrive at $F \\sim 10^{-2}\\,\\textrm{nN}\\,\\textrm{Hz}^{-1\/2} \\big(\\frac{T}{10^7\\,\\textrm{s}}\\big) \\big(\\frac{f}{1\\,\\textrm{mHz}}\\big)^{-0.8}$ (where we have used the conversion $10^{-6}\\,$dyne $= 10^{-2}\\,$nN). 
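\nTo make the order-of-magnitude bookkeeping above easy to check, the sketch below repeats the estimate in SI units: the spacecraft charge follows from the $\\sim100\\,$V equilibrium potential and an assumed $\\sim1\\,$m effective radius, and the Lorentz force is evaluated at $f = 1\\,$mHz with the $(T\/10^7\\,\\textrm{s})$ and frequency factors set to unity. All numbers are the rough values quoted in the text, not precise mission parameters.\n\\begin{verbatim}\n# Order-of-magnitude check of the charge / interplanetary-field force estimate,\n# in SI units. All inputs are the rough values quoted in the text (assumptions).\nimport math\n\neps0 = 8.854e-12   # vacuum permittivity [F\/m]\nV = 100.0          # equilibrium spacecraft potential [V]\nR = 1.0            # assumed effective spacecraft radius [m]\nQ = 4 * math.pi * eps0 * R * V   # isolated-sphere charge, Q = 4*pi*eps0*R*V\n\nv = 3e4            # heliocentric velocity, v\/c ~ 1e-4  [m\/s]\nB = 3e-8           # interplanetary field ASD at 1 mHz: 3e-4 gauss  [T\/sqrt(Hz)]\n\nF = Q * v * B      # Lorentz force spectral density, F = Q*v*B  [N\/sqrt(Hz)]\nprint(f'Q ~ {Q:.1e} C,  F ~ {F*1e9:.1e} nN\/sqrt(Hz)')  # ~1e-8 C, ~1e-2 nN\n\\end{verbatim}\n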
\n\n\\subsection*{Micrometeoroid Impacts}\n\nThe LPF spacecraft occasionally encounters interplanetary dust particles, which impart an impulsive momentum to the spacecraft. The sizes of these particles generally follow a power-law distribution, with smaller particles being more numerous than larger ones. For large impacts, the events can be identified and either subtracted or excised from the data, as was discussed in Section~\\ref{sec:CMNTnoiseSC}. For smaller impacts, which are far more numerous, the impulsive momentum may not be recognized as a discrete signal and will instead average out to a force noise. Using a sample of 44 impact events that were identified in a search of $\\sim180$ days of LPF data, a power-law estimate of the collision rate $R$ per transferred momentum (in units of $\\mu\\textrm{N}\\,$s) was found to be\\footnote{A full analysis of these impacts is the subject of a forthcoming paper}: \n\\begin{equation}\nR = 1.5 \\times 10^{-6} \\, (\\bar p)^{-1.64} \\, s^{-1}\\, ,\n\\end{equation}\nwhere we have defined the dimensionless momentum transfer $\\bar p \\equiv p\/p_0$, with $p_0 = 1\\,\\mu\\textrm{N}\\,\\textrm{s}$. Assume that in each collision, the momentum is deposited uniformly over some short time $\\delta t$. (As long as $\\delta t$ is short compared to $33\\,$s [$= 1\/(30\\,\\textrm{mHz})$], we shall see that $\\delta t$ drops out of the expression for the force noise spectral density in the measurement band.) Since the collisions represent shot noise, the force-noise spectrum is some constant $S_0$ up to $f \\approx 1\/(2\\delta t)$ (and falls roughly as $f^{-2}$ at higher $f$), and $S_0$ times $(2\\delta t)^{-1}$ is the mean-square value of the force from collisions:\n\\begin{eqnarray}\nS_0 &=& (2 \\delta t) (p_0)^2 \\int_0^{\\bar p_t} R(\\bar p) \\big(\\frac{\\bar p}{\\delta t}\\big)^2 (\\delta t) \\ d \\bar p \\\\\n& = & 2.2 \\times 10^{-6} \\, \\bar p_t^{1.36} (\\mu N)^2\/Hz \\, ,\n\\end{eqnarray}\nwhere $\\bar p_t$ is some threshold value, above which collisions are individually identified and removed from the data.\nUsing $\\bar p_t \\approx 0.5$, and dividing $S^{1\/2}_{0}$ by $\\sqrt{3}$ to account for the fact that here we want only the x-component of the force noise, we arrive at\n\\begin{equation}\nS^{1\/2}_{0,x} \\approx 0.5\\,\\textrm{nN}\/\\sqrt{\\textrm{Hz}} \\, .\n\\end{equation}\n\n\\subsection*{Test Mass Force Noise}\nTest mass force noise can be estimated by looking at the measured differential acceleration between the two test masses. As shown in Section~\\ref{sec:deltag}, this is at the $\\sim 3\\,\\textrm{fm}\\,\\textrm{s}^{-2}\/\\sqrt{\\textrm{Hz}}$ level, which would represent an equivalent spacecraft noise of $\\sim 1\\,\\textrm{pN}\/\\sqrt{\\textrm{Hz}}$. A limitation of this estimate is that it is not sensitive to common-mode forces on the test masses, such as might be caused by time-varying magnetic fields. However, it seems unlikely that the couplings in each test mass would match sufficiently well to have a six-order-of-magnitude difference between the absolute and differential effects. Assuming a worst-case common-mode rejection ratio of $10^3$ gives an upper limit of 1$\\,\\textrm{nN}\/\\sqrt{\\textrm{Hz}}$. \n\n\\section{Introduction}\nSupernova SN1987A shows a ``pearl necklace'' and infrared emissions whose interpretation appears difficult (Bouchet et al. 2006, Lawrence et al. 2000, Thornhill \\& Ransom 2006). 
A source of these particularities may simply be spectroscopy.\n\nAssuming that the central object of SN 1987A is a neutron star heated by the accretion of a low-density hydrogen cloud, the density of emitted energy is high, so that the energy transfers are strong and non-linear. Such effects are usually observed with lasers.\n\nSection 2 recalls some of these effects, specifying the language and the notations.\n\nIn Section 3, the generation of the pearl necklace appears as a straightforward application of these effects.\n \n\\section{Some high energy optical effects.}\n\\subsection{Superradiant emissions.}\nPlanck used the notion of electromagnetic modes introduced in the nineteenth century to study musical instruments. For that application, the modes used were restricted to stationary systems, while the mathematical definition is wider: a mode is a ray in the real vector space of the solutions of a linear set of equations (that is, all solutions in a mode depend only on a single real parameter, the amplitude of the solution). Fortunately, the definition of the modes does not imply stationary and monochromatic conditions, because such conditions would require an infinite duration of the experiments.\n\nThe electromagnetic fields in vacuum obey the linear Maxwell equations up to very high frequencies. The trick of Schwarzschild and Fokker allows sources to be introduced by replacing them with their advanced fields.\n\nPlanck erred in computing the additive term in his equation giving the energy of an electromagnetic mode inside a blackbody at temperature $T$, which also defines the temperature of a mode. Nernst (1916) found the right value $h\\nu\/2$, so that the correct mean energy in a monochromatic mode of frequency $\\nu$ is $h\\nu(1\/(\\exp(h\\nu\/kT)-1)+1\/2)$.\n\nEinstein (1917) introduced stimulated emission but, in justifying spontaneous emission, did not remark that the electromagnetic field has a minimal mean value corresponding to $h\\nu\/2$ in a blackbody at 0 K; laser experiments show that spontaneous emission is a stimulated emission corresponding to the amplification of this minimal value.\n\nSuppose that only two molecular levels of energies $e_1$ and $e_2$ ($e_2>e_1$) are involved in a transition of frequency $\\nu = (e_2-e_1)\/h$. The molecular temperature $T_{12}$ relative to this transition is deduced from the molecular populations $n_1$ and $n_2$ in these levels by $n_1\/n_2 = \\exp(h\\nu\/kT_{12})$.\n\nSuppose that a long cell is filled with this gas; a low-temperature light beam entering the cell increases the entropy of the system by being amplified, that is, heated to reach temperature $T_{12}$. At the beginning, the amplification may be called spontaneous emission; then it becomes induced, the gain in energy of the beam being proportional to this energy, that is, exponential. But the available molecular energy is limited, so that the temperature $T_{12}$ is decreased, usually strongly. Thus, spontaneous emissions in other directions are strongly decreased. This decrease resulting from a large excitation may seem paradoxical.\n\nIf a long path in a strongly excited medium is real, it is a superradiant emission; if it is virtual, using mirrors, it is a laser emission.\n\n\\subsection{The \\textquotedblleft Impulsive Stimulated Raman Scattering (ISRS).\\textquotedblright }\n\nIt is usually assumed that there is no interaction between light beams refracted by a transparent medium. Experiments and a standard theory (Giordmaine et al. (1968), Yan et al. (1985), Weiner et al. (1990), Dougherty et al. 
(1992), Dhar et al. (1994), ...) show that this assumption is wrong if light and matter satisfy conditions set by Lamb (1971). The interaction, which increases the entropy of a set of refracted, ordinary time-incoherent beams through frequency shifts without blurring the images or the spectra, works well in atomic hydrogen in its first excited state. Planck's law and thermodynamics show that, usually, light is redshifted, and the energy it loses is transferred to the thermal background. There is no threshold on the energy of the beams, the ISRS becoming the Coherent Raman Effect on Incoherent Light (CREIL) at low intensities (Moret-Bailly 2005, 2006).\n\n\\section{Generation of the pearl necklace.}\n\\subsection{Absorption of light emitted by the neutron star.}\nAccreting hydrogen, the neutron star becomes extremely hot, at least at hot spots; thus it emits light mainly in the far ultraviolet.\n\nClose to the star, atoms are ionized; hydrogen and light-element impurities lose all their electrons, producing a transparent plasma. Some heavier atoms may absorb light, but they re-emit the energy in the UV, so that little energy is lost.\n\nAssuming a low density, recombination of protons and electrons requires a low temperature, say 20\\,000 K. Hydrogen is mainly produced in its ground state, but it absorbs the Lyman lines, so that many of its states may be populated.\n\nHydrogen absorbs frequencies lower than the Lyman limit only at its spectral lines. In the next subsection, we will see that superradiant emission decreases the population of the excited states, so that little 2P hydrogen able to shift the spectrum remains, until, paradoxically, the Lyman $\\alpha$ line is almost fully absorbed: when this happens, the superradiance decreases, so that more 2P hydrogen remains and shifts the light, and the energy at the 2P frequency and at the frequencies of the other absorption lines is renewed.\n\nFinally, the absorption at a Lyman line, which is strong and nearly complete, sweeps a wide band.\n\n\\medskip\nThe final result is an extremely strong absorption not only at the frequencies of the lines (possibly not Lyman lines), but over wide bands corresponding to a redshift which lasts as long as energy is shifted to the Lyman $\\alpha$ frequency. The energy lost by the redshifts blueshifts the thermal background, that is, heats it, simulating hot dust.\n\n\\subsection{Generation of a UV necklace.}\nThe previous subsection showed that the gas is strongly excited in a spherical shell centred on the neutron star by a wide-band absorption. Superradiant emissions may appear in directions for which the gas is thick, that is, tangentially to the spheres. This system is very similar to a laser pumped transversely by a light-emitting diode of the same frequency. Although the gas is excited to eigenstates, the light emitted by the shell of excited gas makes a continuous spectrum because the column density of 2S hydrogen crossed from the emission point to outside is variable, producing variable redshifts. The frequency shift of the emitted lines prevents an absorption at the exciting (Lyman $\\alpha$, ...) frequency.\n\nConsider two thin, close spherical shells emitting lines whose local difference of frequencies is of the order of the linewidth; each line induces the emission of the other, so that their modes are bound; this binding extends to the whole superradiant band, whose modes are, consequently, connected.\n\nThe strong absorptions and re-emissions in the shell of excited atomic hydrogen generate for each direction a bright ring in the UV. 
In this ring, the competition of the polychromatic modes selects a particular set of modes, producing bright spots in the UV. A nearly complete transfer of energy from the radial beams to the rings obeys thermodynamics because the solid angle under which a ring is seen is much larger than the solid angle under which the central engine could be seen.\n\n\\subsection{Generation of a visible necklace.}\nAround the shell of far-UV-absorbing hydrogen, the columns of UV light emitted towards the Earth strongly excite hydrogen and various atoms, so that, in the direction of the Earth, the emissions of these atoms are superradiant and collinear with the columns: the visible pearl necklace is generated.\n\n\\section{Conclusion}\nThe present paper neglects a large part of the complexity of the problem: the necklace is not a circle; can the other circles be generated by a similar process around two other stars? If so, why did they light up when the necklace appeared?\n\nHowever, although partial, the present theory gives results using few hypotheses and standard physics. The explanation of the pearl necklace of SN1987A is rough, but it requires very few astrophysical hypotheses, its originality being the use of optical and spectroscopic properties mainly developed and well verified in laser technology. The visible necklace should come with a remaining UV spectrum in which absorptions corresponding to the lines observed in the visible may appear. Among other verifications, this could provide a positive test of the present theory.\n\n\\section{Bibliography.}\n\nBouchet, P., E. Dwek, I. J. Danziger, R. G. Arendt, I. J. M. De Buizer, S. Park, N. B. Suntzeff, R. P. Kirshner \\& P. Challis, 2006, arXiv:astro-ph\/0601495\n\nDhar, L., J. A. Rogers \\& K. A. Nelson, 1994, {\\it Chem. Rev.}, {\\bf 94}, 157\n\nDougherty, T. P., G. P. Wiederrecht, K. A. Nelson, M. H. Garrett, H. P. Jenssen \\& C. Warde, 1992, {\\it Science}, {\\bf 258}, 770\n\nEinstein, A., 1917, {\\it Phys. Z.}, {\\bf 18}, 121\n\nGiordmaine, J. A., M. A. Duguay \\& J. W. Hansen, 1968, {\\it IEEE J. Quantum Electron.}, {\\bf 4}, 252\n\nLawrence, S. S., B. E. Sugerman, P. Bouchet, A. P. S. Crotts, R. Uglesich \\& S. Heathcote, 2000, arXiv:astro-ph\/0004191\n\nMoret-Bailly, J., 2005, arXiv:physics\/0507141\n\nMoret-Bailly, J., 2006, {\\it AIP Conference Proceedings}, {\\bf 822}, 226--238\n\nNernst, W., 1916, {\\it Verh. Deutsch. Phys. Ges.}, {\\bf 18}, 83\n\nThornhill, W. W. \\& C. J. Ransom, 2006, {\\it 2006 IEEE International Conference on Plasma Science}, 369\n\nWeiner, A. M., D. E. Leaird, G. P. Wiederrecht \\& K. A. Nelson, 1990, {\\it Science}, {\\bf 247}, 1317\n\nYan, Y.-X., E. B. Gamble Jr. \\& K. A. Nelson, 1985, {\\it J. Chem. Phys.}, {\\bf 83}, 5391\n\n\\end{document}