diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzkiwp" "b/data_all_eng_slimpj/shuffled/split2/finalzzkiwp" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzkiwp" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\label{sec:introduction}}\n\n\\IEEEPARstart{R}{ecommendation} systems as an important information filtering device can effectively reduce information overload and assist users easily finding their mostly interested items. A common approach~\\cite{su:et.al:2009:AI:survey,shi:et.al:2014:CSUR,zhang:et.al:2019:CSUR} to provide personalized recommendation is through analyzing user profile and his behavior to match his mostly interested items. However, user profile may not be available in many practical situations, for example, a user browsing items without logging. The task of session-based recommendation (SBR) is hence proposed for such situations, which has attracted widespread attentions recently~\\cite{wang:et.al:2019:arXiv:survey}. According to~\\cite{li:et.al:2017:CIKM,liu:et.al:2018:KDD,wu:et.al:2019:AAAI}, the SBR task can be defined as follows:\n\n\\par\nLet $V=\\{v_i|i=1,2,...,N\\}$ denote the set of all unique items, where $N$ is the total number of items. A session is a chronological sequence of clicked items for an anonymous user, which can be represented as $S=\\{v_1, v_2, ..., v_\\tau\\}$, where $v_i \\in V$ in the session is the $i$-th item clicked by the user and $\\tau$ is the session length. The SBR task is to predict the next item to be clicked for an ongoing session, i.e. $v_{\\tau+1}$ for session $S$. A SBR recommendation model predicts the preference scores for all candidate items in $V$, i.e. $\\hat{\\mathbf{y}}=\\{\\hat{y}_1,\\hat{y}_2,...,\\hat{y}_N\\}$, and constructs a top-$K$ recommendation list by selecting $K$ items with highest preference scores.\n\n\\par\nThe challenges of SBR lies in two aspects: (1) Sessions are anonymous and items are recorded with only their identifiers (ID), which makes it impossible to establish users' profiles nor to analyze items' features; (2) Sessions are often short, which makes it difficult to capture transitional logics in sessions, not even mention that items often have no attribution information. Early approaches have mainly relied on some simple popularity rules~\\cite{sarwar:et.al:2001:WWW} or matrix factorization~\\cite{rendle:et.al:2009:UAI} for predicting the next item, which have not well encoded items' representations, nor utilizing items' sequential relations in sessions~\\cite{wu:et.al:2019:AAAI}. Recently, for their powerful capabilities of representation learning, many neural networks have been designed and applied for the SBR task.\n\n\\par\nSome neural network-based methods~\\cite{li:et.al:2017:CIKM,liu:et.al:2018:KDD,wang:et.al:2019:SIGIR,luo:et.al:2020:IJCAI,hidasi:et.al:2016:ICLR,ren:et.al:2019:AAAI} have achieved significant improvements over traditional approaches. The core idea is to design a neural network model for encoding items' representations and learning the ongoing session representation, such that the next item is selected with its representation the most similar to that the session representation, e.g., using a cosine or dot product similarity function. Some of neural models~\\cite{wang:et.al:2019:SIGIR,luo:et.al:2020:IJCAI} have further considered the collaborative information between multiple sessions to enhance the session representation learning. 
However, they lack the exploration of potential relations between items across multiple sessions.\n\n\\par\nSome graph-based models propose to first construct an item graph for item embedding learning and session representation learning~\\cite{wu:et.al:2019:AAAI,xu:et.al:2019:IJCAI,xia:et.al:2021:AAAI,qiu:et.al:2019:CIKM,chen:et.al:2020:KDD,wang:et.al:2020:SIGIR,zheng:et.al:2020:ICDMW,yu:et.al:2020:SIGIR,pan:et.al:2020:CIKM}. As many item transitions are extracted from multiple sessions, it is expected that more useful transitional information can be encoded in items' embeddings via, say, a \\textit{graph neural network} (GNN). We endorse the effectiveness of item graph construction, as it has two advantages: one is to alleviate the issue of item sparsity in individual sessions; the other is to transform the problem of mining item inter-relations into exploring the structural characteristics of a graph. Some well-known GNNs, including the \\textit{graph convolution network} (GCN)~\\cite{kipf:et.al:2017:ICLR}, the \\textit{graph attention network} (GAT)~\\cite{velivckovic:et.al:2018:ICLR}, and LightGCN~\\cite{he:et.al:2020:SIGIR}, have been adopted for learning item embeddings, mainly from the viewpoint of item co-occurrences in the graph, by capturing node-centric local structural characteristics. Although these GNNs can encode graph structural characteristics, they may not be well suited to item embedding learning in the SBR task.\n\n\\par\nThe items contained in anonymous sessions are only equipped with their identifiers (IDs). On the one hand, items' embeddings are often randomly initialized. On the other hand, the objective of such ID-based embedding learning is to ensure a kind of \\textit{neighborhood affinity} by capturing items' co-occurrences as their topological relations on the item graph. We use the term neighborhood affinity for the condition that a node's embedding is closer to its neighbors' than to its non-neighbors' in the embedding space. Some GNNs, like the GCN and GAT, employ a kind of global transformation kernel to update items' initial embeddings. Although they can capture local structural characteristics, such a global transformation may not ensure neighborhood affinity in the original item embedding space, not to mention that it introduces additional trainable parameters for the embedding space transformation. LightGCN, though free of a global transformation, simply updates a node embedding by averaging its neighbors' embeddings, without including the node's own embedding in the update process.\n\n\\par\nIn this paper, we propose a new \\textit{graph spring network} (GSN) for learning item embeddings on an item graph. The basic idea of GSN is to ensure neighborhood affinity in the embedding space by aggregating the neighbors' information to a node through a series of currently optimal aggregation weights in an iterative way. Compared with the GCN and GAT, it does not employ a global transformation kernel, and hence involves no additional trainable parameters. Compared with LightGCN, its aggregation weight computation directly depends on item embeddings, rather than on the graph Laplacian matrix alone.\n\n\\par\nWe can stack multiple GSN layers to include high-order neighbors for learning item embeddings; however, this still may not be enough to encode possible relations between two item nodes far apart in the item graph. Indeed, the item graph only establishes edges between items that have co-appeared in sessions. 
It could be the case that some items have not co-appeared in sessions, yet they share some similar latent features, say for example categorical features. This resembles the case of two nodes far apart in the item graph that nonetheless contain similar latent features, which motivates us to further encode some global topological signals for each item. In this paper, we propose to first select some informative items (called anchors) and then encode potential relations to these anchors.\n\n\\par\nIn this paper, we propose a GSN-IAS model (\\underline{G}raph \\underline{S}pring \\underline{N}etwork and \\underline{I}nformative \\underline{A}nchor \\underline{S}election) for the SBR task. An item graph is first constructed for describing items' co-occurrences in all sessions. We design the GSN for item embedding learning based on the item graph, and analyze its convergence property. We propose a measure, called \\textit{item entropy}, to select informative anchors based only on their statistics in sessions. We next design an unsupervised learning mechanism to output items' encodings, which encode items' potential relations to the anchors. Based on item embeddings and item encodings, we employ a shared \\textit{gated recurrent unit} (GRU) network to learn two session representations and output two next-item predictions. Finally, we design an adaptive decision fusion strategy to fuse the two predictions into final preference scores for recommendation list construction. Extensive experiments on three public real-world datasets validate that our GSN-IAS outperforms the state-of-the-art methods.\n\\par\nOur main contributions can be summarized as follows:\n\\begin{itemize}\n\t\\item Propose a new graph neural network (GSN) for item embedding learning.\n\t\\item Provide a proof of the GSN convergence.\n\t\\item Define item entropy for informative anchor selection.\n\t\\item Propose an unsupervised method for anchor-based item encoding learning.\n\t\\item Propose a parameter-shared GRU for two session representations and two next-item predictions.\n \\item Propose an adaptive decision fusion to fuse the two predictions for the final recommendation.\n\\end{itemize}\n\\par\nThe rest of this paper is organized as follows: Section~\\ref{Sec:Retaled Work} reviews the related work. Section~\\ref{Sec:Method} presents our GSN-IAS model. Experiment settings are provided in Section~\\ref{Sec:Experiment Settings} and results are analyzed in Section~\\ref{Sec:Experiment Results}. Section~\\ref{Sec:Conlusion} concludes this paper.\n\n\\begin{figure*}[t]\n\t\\centering\n\t\\includegraphics[width=\\textwidth]{model.png}\n\t\\caption{The overall architecture of the proposed GSN-IAS model. (a) Item graph construction; (b) Graph spring network for item embedding; (c) Informative anchor selection for item encoding; (d) A shared GRU network for session representation learning; (e) Adaptive fusion of two predictions.}\n\t\\label{Fig:Model}\n\\end{figure*}\n\n\\section{Related Work}\n\\label{Sec:Retaled Work}\nWe review related work from three aspects: traditional methods, neural network methods and graph-based methods.\n\n\\subsection{Traditional Methods}\nSome general recommendation approaches~\\cite{sarwar:et.al:2001:WWW, rendle:et.al:2009:UAI, linden:et.al:2003:IC} can be directly applied to the SBR task. For example, the item-based method~\\cite{sarwar:et.al:2001:WWW} recommends items by computing item similarities based on co-occurrence relations. However, these methods neglect the sequential information in a session, i.e. 
the order of items. Some other studies~\\cite{shani:et.al:2005:JML, rendle:et.al:2010:WWW, zimdars:et.al:2001:UAI} employ Markov chains to model the sequential information of a session. For example, Rendle et al.~\\cite{rendle:et.al:2010:WWW} combine matrix factorization and a first-order Markov chain for next-item prediction. However, the independence assumption of Markov chain-based methods is rather strict, limiting their prediction accuracy~\\cite{wu:et.al:2019:AAAI}. Recently, some studies~\\cite{jannach:et.al:2017:Recsys,garg:et.al:2019:SIGIR} have realized the importance of items' co-occurrence relations and temporal properties in sessions, and they can achieve competitive performance by using the k-nearest-neighbor approach and temporal decay functions.\n\n\\subsection{Neural Network Methods}\nRecurrent neural networks (RNNs) are capable of sequential modeling, and many RNN-based models have been designed for the SBR task~\\cite{li:et.al:2017:CIKM, wang:et.al:2019:SIGIR, ren:et.al:2019:AAAI, tan:et.al:2016:DLRS, hidasi:et.al:2016:ICLR, quadrana:et.al:2017:Recsys, pan:et.al:2020:SIGIR:An, zhang:et.al:2020:Neurocomputing}. For example, Hidasi et al.~\\cite{hidasi:et.al:2016:ICLR} propose the GRU4Rec model, applying a GRU layer to encode sequential items. Li et al.~\\cite{li:et.al:2017:CIKM} propose the NARM model, boosting a GRU layer with attention to learn the session representation by discriminating GRU hidden states. Wang et al.~\\cite{wang:et.al:2019:SIGIR} propose the CSRM model to exploit collaborative information from multiple sessions. Ren et al.~\\cite{ren:et.al:2019:AAAI} propose the RepeatNet model, which incorporates a repeat-explore mechanism into a GRU network to automatically learn the switch probabilities between the repeat and explore modes.\n\n\\par\nBesides RNNs, some other neural networks have also been explored for the SBR task~\\cite{liu:et.al:2018:KDD,luo:et.al:2020:IJCAI, kang:et.al:2018:ICDM, yuan:et.al:2019:WSDM, song:et.al:2019:IJCAI, zhou:et.al:2019:WWW, pan:et.al:2020:SIGIR:Rethinking,yuan:et.al:2021:AAAI,cho:et.al:2021:SIGIR}. For example, Liu et al.~\\cite{liu:et.al:2018:KDD} propose the STAMP model, which employs an attention network to emphasize the importance of the last clicked item when learning the session representation. Kang et al.~\\cite{kang:et.al:2018:ICDM} propose the SASRec algorithm, stacking self-attention layers to capture latent relations between consecutive items in a session. Yuan et al.~\\cite{yuan:et.al:2019:WSDM} propose the NextItNet model, using a dilated convolutional network to capture dependencies between items. Song et al.~\\cite{song:et.al:2019:IJCAI} propose the ISLF model, which employs a recurrent variational auto-encoder (VAE) to take interest shift into account. Pan et al.~\\cite{pan:et.al:2020:SIGIR:Rethinking} apply a modified self-attention network to estimate items' importance in a session. Luo et al.~\\cite{luo:et.al:2020:IJCAI} integrate a collaborative self-attention network to learn the session representation and predict the intent of the current session by investigating neighborhood sessions. 
Yuan et al.~\\cite{yuan:et.al:2021:AAAI} propose a dual sparse attention network to identify possibly unrelated items in sessions.\n\n\\par\nThe aforementioned neural models mainly consider item transition or co-occurrence relations within a single session or several similar sessions, which limits their performance because they lack item embedding learning against the whole set of available items and sessions.\n\n\\subsection{Graph-based Methods}\nRecently, graph neural models have also been adopted in the SBR task~\\cite{wu:et.al:2019:AAAI, xu:et.al:2019:IJCAI, xia:et.al:2021:AAAI, qiu:et.al:2019:CIKM, chen:et.al:2020:KDD, wang:et.al:2020:SIGIR, zheng:et.al:2020:ICDMW, yu:et.al:2020:SIGIR, pan:et.al:2020:CIKM,xia:et.al:2021:CIKM,zhang:et.al:2021:InfoSci,li:et.al:2022:TKDE}. Wu et al.~\\cite{wu:et.al:2019:AAAI} propose the SR-GNN model, constructing a session graph and employing the GGNN model~\\cite{li:et.al:2015:arxiv} for item representation learning. Xu et al.~\\cite{xu:et.al:2019:IJCAI} combine a GNN with a multi-layer self-attention network. Qiu et al.~\\cite{qiu:et.al:2019:CIKM} propose the FGNN model, converting a session into a directed weighted graph and using a weighted attention graph neural layer for item embedding learning. Zheng et al.~\\cite{zheng:et.al:2020:ICDMW} propose the DGTN model, in which a session together with its similar sessions is first used to construct an item graph. Yu et al.~\\cite{yu:et.al:2020:SIGIR} propose the TAGNN model with a target attention mechanism to learn a target-aware session representation. Wang et al.~\\cite{wang:et.al:2020:SIGIR} propose the GCE-GNN model to learn two types of item embeddings from a session graph and a global item graph. Zhang et al.~\\cite{zhang:et.al:2021:InfoSci} propose a random walk approach to mine a kind of latent categorical information of items from an item graph. Xia et al.~\\cite{xia:et.al:2021:AAAI} learn the inter- and intra-session information from two types of hypergraphs. Li et al.~\\cite{li:et.al:2022:TKDE} propose a disentangled GNN method to cast item and session embeddings as disentangled representations of multiple factors.\n\n\\section{The Proposed GSN-IAS model}\\label{Sec:Method}\nFig.~\\ref{Fig:Model} presents the architecture of our GSN-IAS model, which consists of the following parts: (1) item graph construction; (2) graph spring network for item embedding; (3) informative anchor selection for item encoding; (4) session representation learning; and (5) prediction and fusion.\n\n\\subsection{Item Graph Construction} \\label{Sec:ItemGraph}\nTo deal with item sparsity in a single session, we first construct an \\textit{item graph} to capture potential item inter-relations from all sessions. In particular, we construct an undirected weighted item graph $\\mathcal{G}=\\{\\mathcal{V}, \\mathcal{E}\\}$, with $\\mathcal{V}$ the set of nodes (viz. items) and $\\mathcal{E}$ the set of edges, as follows: an edge $e_{ij}=(v_i, v_j, w_{i,j})$ in $\\mathcal{E}$ is established between an item $v_i$ and another item $v_j$ if $v_j$ is within a window of size $k$ centered at $v_i$ in any session, where the weight $w_{i,j}$ corresponds to the number of appearances of the relation $(v_i, v_j)$ in all sessions. Fig.~\\ref{Fig:Model} illustrates an item graph constructed from three sessions. Take item $v_3$ for example. 
For a window size $k=2$, the neighbors of item $v_3$ include $\\{v_1, v_2, v_4, v_5, v_6\\}$.\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=0.5\\textwidth]{GSN.png}\n\t\\caption{An example to illustrate the GSN core operations. (a) A node $v_i$ with three neighbors $v_1, v_2, v_3$. (b) The iteration process of GSN. (c) The lower part illustrates the input embeddings of the four nodes. After the aggregation weights are computed, the weighted neighbors' embeddings are shown next. The adjustments according to neighbors' embeddings are illustrated as dashed arrows, while the last part shows the updated embedding for node $v_i$ according to its own embedding and the weighted sum of its neighbors' embeddings. (d) The GSN operation is likened to a process in which multiple springs find a balance point via push and pull.}\n\t\\label{Fig:GSN}\n\\end{figure}\n\n\n\n\\begin{algorithm}[t]\n\t\\caption{The Graph Spring Network (core operation for one node $v_i$)}\n\t\\label{Alg:GSN}\n\t\\KwIn{$\\mathbf{h}_i \\in \\mathbb{R}^d$ (the embedding of node $v_i$), and $\\{\\mathbf{h}_j\\in \\mathbb{R}^d|v_j \\in \\mathcal{N}_i\\}$ (the embeddings of $v_i$'s neighbors)}\n\t\\KwOut{$\\mathbf{h}_i^{\\prime}\\in \\mathbb{R}^d$ (updated embedding of $v_i$)}\n\t$\\mathbf{h} \\leftarrow \\mathtt{UN}(\\mathbf{h}) = \\frac{\\mathbf{h}}{\\|\\mathbf{h}\\|}$, for $\\mathbf{h}_i$ and all $\\mathbf{h}_j$ \\\\\n\t$\\mathbf{c}_i^{(0)}$ $\\leftarrow$ $\\mathbf{h}_i$ \\\\\n\t\\For{$t=1, 2, ..., T$}{\n\t\t$w = 0$\\\\\n\t\t\\For{$v_j \\in \\mathcal{N}_i$}{\n\t\t\t$\\alpha_{ij} \\leftarrow \\mathbf{h}_j^{\\mathsf{T}} \\cdot \\mathbf{c}_i^{(t-1)}$ \/\/ Compute similarity \\\\\n\t\t\t$w \\leftarrow w +\\exp(\\alpha_{ij})$}\n\t\t\\For{$v_j \\in \\mathcal{N}_i$}{\n\t\t\t$\\alpha_{ij} \\leftarrow \\frac{\\exp(\\alpha_{ij})}{w}$ \/\/ Normalize to probability\n\t\t}\n\t\t$\\mathbf{c}_i^{(t)} \\leftarrow \\mathtt{UN}(\\mathbf{h}_i + \\sum_{v_j \\in \\mathcal{N}_i} \\alpha_{ij} \\mathbf{h}_j)$ \/\/ Update\n\t}\n\t\\KwResult{$\\mathbf{h}_i^{\\prime} \\leftarrow \\mathbf{c}_i^{(T)}$ }\n\\end{algorithm}\n\n\\subsection{Graph Spring Network for Item Embedding}\n\n\\subsubsection{Graph Spring Network}\nWe propose a \\textit{graph spring network} (GSN) to learn nodes' embeddings from a graph, which aims to find a \\textit{balance point} for each node in the embedding space, given its neighbors' embeddings, in an iterative way. We first introduce the operation of \\textit{unit normalization}, which regulates a node's embedding to a \\textit{unit vector}:\n\\begin{equation}\n\t\\mathtt{UN}(\\mathbf{h}) = \\frac{\\mathbf{h}}{\\|\\mathbf{h}\\|},\n\\end{equation}\nwhere $\\mathbf{h}$ denotes an item embedding and $\\|\\mathbf{h}\\|$ its vector norm (the 2-norm here).\n\n\\par\nFor an item node $v_i$ in the item graph $\\mathcal{G}$, let $\\mathcal{N}_i$ denote the set of its one-hop neighbors. Given its neighbors' embeddings $\\{\\mathbf{h}_j|v_j \\in \\mathcal{N}_i\\}$, the core operation of GSN in one iteration contains two main steps:\n\\begin{align}\n\t\\alpha_{ij}^{(t+1)} & = \\frac{\\exp(\\mathbf{h}_j^\\mathsf{T}\\cdot \\mathbf{c}_i^{(t)})}{\\sum_{v_{j^\\prime} \\in \\mathcal{N}_i} \\exp(\\mathbf{h}_{j^\\prime}^\\mathsf{T}\\cdot \\mathbf{c}_i^{(t)})}, \\label{Eq:GSNWeight} \\\\\n\t\\mathbf{c}_{i}^{(t+1)} & = \\mathtt{UN}( \\mathbf{h}_i + \\sum_{v_j \\in \\mathcal{N}_i} \\alpha_{ij}^{(t+1)} \\mathbf{h}_j ) \\label{Eq:GSNUpdate},\n\\end{align}\nwhere $\\mathbf{c}_i$ is an intermediate variable used for the iteration and $\\mathbf{c}_i^{(0)} = \\mathbf{h}_i$. 
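\n\\par\nTo make the core operation concrete, the following Python sketch (our own illustrative rendering with NumPy, not part of the model specification) implements Eqs.~\\eqref{Eq:GSNWeight} and \\eqref{Eq:GSNUpdate} for a single node:\n\\begin{verbatim}\nimport numpy as np\n\ndef unit_norm(h):\n    # UN(h) = h \/ ||h||_2\n    return h \/ np.linalg.norm(h)\n\ndef gsn_update(h_i, neighbors, T=4):\n    # h_i: (d,) embedding of v_i; neighbors: (k, d) embeddings of N_i\n    h_i = unit_norm(h_i)\n    neighbors = np.stack([unit_norm(h) for h in neighbors])\n    c_i = h_i  # c_i^(0) = h_i\n    for _ in range(T):\n        alpha = np.exp(neighbors @ c_i)      # similarities h_j^T c_i\n        alpha = alpha \/ alpha.sum()          # normalize to a distribution\n        c_i = unit_norm(h_i + alpha @ neighbors)  # balance-point update\n    return c_i\n\\end{verbatim}\n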
Note that the unit normalization obtains a unit vector without changing its direction. Algorithm~\\ref{Alg:GSN} presents the pseudo-code for the core operations of a one-layer GSN. Notice that, for the input embeddings of a node $v_i$ and its neighbors, a one-layer GSN iterates $T$ times to update the node $v_i$'s embedding.\n\n\\par\nFig.~\\ref{Fig:GSN} uses a toy example to illustrate the embedding update process for one node. The weight $\\alpha_{ij}$ measures the normalized similarity between node $v_i$ and its neighbor $v_j$ in the embedding space. The weighted vector $\\alpha_{ij}\\mathbf{h}_j$ can be regarded as a \\textit{virtual force} exerted by $v_j$ on $v_i$, akin to a spring between two nodes that pushes them together or pulls them apart; all the $\\alpha_{ij}\\mathbf{h}_j$s together with $\\mathbf{h}_i$ decide the new direction of the updated embedding for $v_i$. Iterating such core operations leads the embedding $\\mathbf{h}_i$ of node $v_i$ to gradually converge to some balance point in the embedding space, as if the virtual forces exerted by its neighbors reach a balance and no longer change its position.\n\n\\subsubsection{Convergence analysis}\nThe design philosophy of our GSN is to make the embeddings of neighboring nodes reflect the local structural relations between a node and its neighbors in the item graph. We investigate whether these local structural relations admit a balance point, i.e. whether GSN converges after a sufficient number of iterations. We next prove the convergence of GSN by relating the GSN operations to a von Mises-Fisher distribution~\\cite{ma:et.al:2019:ICML}. That is, for a node $v_i$, the iterative updating leads to the convergence of its normalized similarities $\\alpha_{ij}$ to its neighbors as well as of its embedding $\\mathbf{h}_i$.\n\n\n\\par\nThe probability density function of the von Mises-Fisher (vMF) distribution\\footnote{https:\/\/en.wikipedia.org\/wiki\/Von\\_Mises-Fisher\\_distribution} is given by:\n\\begin{equation}\n\tf(\\mathbf{x};\\bm{\\mu},\\kappa) = C(\\kappa)\\exp(\\kappa\\bm{\\mu}^\\mathsf{T}\\mathbf{x}) \\propto \\exp(\\kappa\\bm{\\mu}^\\mathsf{T}\\mathbf{x}),\n\\end{equation}\nwhere $\\bm{\\mu}$ and $\\kappa \\geq 0$ are called the mean direction and the concentration parameter, respectively, and $C(\\kappa)$ is a normalization constant. The larger the value of $\\kappa$, the higher the concentration of the distribution around the mean direction $\\bm{\\mu}$.\n\n\\par\nLet $\\mathbf{c}_i$ denote the parameter of a vMF distribution. We initialize the nodes' embeddings $\\mathbf{h}_i$ and $\\{\\mathbf{h}_j | v_j\\in \\mathcal{N}_i\\}$ by:\n\\begin{align}\n\t&\\mathbf{h}_i \\sim {\\rm vMF}(\\mathbf{c}_i, 1), \\\\\n\t&\\mathbf{h}_j \\sim {\\rm vMF}(\\mathbf{c}_i, \\alpha_{ij}),\n\\end{align}\nwhere $\\alpha_{ij}$ is the concentration of $\\mathbf{h}_j$ around $\\mathbf{c}_i$. With such initializations, the GSN operation for computing $\\mathbf{c}_i$ can be converted into estimating the parameter $\\mathbf{c}_i$ based on the noisy observations $\\mathbf{h}=\\{\\mathbf{h}_i\\} \\cup \\{\\mathbf{h}_j \\,|\\, v_j\\in\\mathcal{N}_i\\}$. The following theorem summarizes the property and convergence of the GSN core operation:\n\n\\par\n\\begin{theorem}\n\t\\label{Theo:ConGSN}\n\tThe computation process of the GSN core operation is equivalent to an expectation-maximization (EM) algorithm estimating the parameter of the vMF distribution by maximizing the likelihood probability $P(\\mathbf{h}_i|\\mathbf{c}_i)$. 
The convergence of the GSN core operation is equivalent to the convergence of the EM algorithm for estimating the parameter of the vMF distribution.\n\\end{theorem}\n\\begin{proof}\n\t\\label{Proof:ConGSN}\n\tSee Appendix~\\ref{Appendix:GSNProof}.\n\\end{proof}\n\n\\par\nTheorem~\\ref{Theo:ConGSN} indicates that the GSN iteration process can be recast as using the EM algorithm for parameter estimation. In Proof~\\ref{Proof:ConGSN}, we show that Eq.~\\eqref{Eq:GSNWeight} performs the E-step, and Eq.~\\eqref{Eq:GSNUpdate} performs the M-step. Our GSN is in fact iteratively performing the E-step and M-step to estimate the parameter $\\mathbf{c}_i$ based on the given observations $\\mathbf{h}=\\{\\mathbf{h}_i\\} \\cup \\{\\mathbf{h}_j \\,|\\, v_j\\in\\mathcal{N}_i\\}$. By the convergence of the EM algorithm, the GSN therefore converges. The convergence of the GSN core operation also reflects the difference between our GSN and some widely used GNNs, like LightGCN~\\cite{he:et.al:2020:SIGIR} and GAT~\\cite{velivckovic:et.al:2018:ICLR}. Given the neighbors' embeddings of a node $v_i$, our GSN iteratively finds locally optimal aggregation weights between $v_i$ and its neighbors, while most GNNs compute the aggregation weights only once, by an attention function or directly from the graph Laplacian matrix. More comparisons with other GNNs are provided in Appendix~\\ref{Appendix:GNNComparison}.\n\n\n\n\\subsubsection{Item Embedding}\nWe can also stack multiple layers of GSN to include high-order neighbors for learning a node embedding. Let $\\mathbf{h}_i^{(l)}\\ (l=1,...,L)$ denote the $l$-th layer embedding of $v_i$ learned by GSN, and $\\mathbf{H}^{(l)}$ denote the $l$-th layer embedding matrix of all nodes. For an item node $v_i$, to enjoy some potential relations with its higher-order neighbors, we compute its final item embedding $\\mathbf{h}_i^a$ by\n\\begin{equation}\n\t\\mathbf{h}_i^a = \\mathbf{h}_i^{(0)} + \\mathbf{h}_i^{(1)} + ... + \\mathbf{h}_i^{(L)}.\n\\end{equation}\nWe denote by $\\mathbf{H}^a \\in \\mathbb{R}^{N\\times d}$ the item embedding matrix, where the $i$-th row is the embedding of item $v_i$.\n\n\\subsection{Informative Anchor Selection for Item Encoding}\n\n\\subsubsection{Informative Anchor Selection}\nWe would like to select a few representative items, out of thousands, to describe potential categorical features. To this end, we propose a measure, called \\textbf{item entropy}, to evaluate the informativeness of an item. The item entropy $H(v_i)$ of $v_i$ is defined by\n\\begin{equation}\n\tH(v_i) = -\\sum_{j=1}^{\\lvert \\mathcal{S} \\rvert}P_{v_i, s_j} \\cdot \\log(P_{v_i, s_j})\n\\end{equation}\nwhere $\\lvert \\mathcal{S} \\rvert$ is the number of all available sessions. If an item $v_i$ is clicked in a session $s_j$, then $P_{v_i, s_j}$ is computed by\n\\begin{equation}\n\\begin{split}\n\tP_{v_i, s_j} = \\frac{\\text{number of clicks in session } s_j}{\\text{total number of clicks}} \\times \\\\\n\t\\frac{\\text{number of sessions including item } v_i}{\\text{total number of sessions}};\n\\end{split}\n\\end{equation}\notherwise, $P_{v_i, s_j} = 0$. According to this definition, an item appearing in more sessions has a higher item entropy, and a session including more clicks has more influence on item entropy. A large item entropy thus indicates that an item appears in many sessions and that these sessions include many clicks. In this regard, we select the $M$ items with the highest item entropy as informative items, called \\textbf{anchors}. 
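\n\\par\nAs an illustration, the following Python sketch computes item entropy from raw sessions and selects the top-$M$ anchors; the data layout (a list of sessions, each a list of item IDs) and all names are our own assumptions for exposition:\n\\begin{verbatim}\nimport math\nfrom collections import Counter\n\ndef select_anchors(sessions, M):\n    # sessions: list of sessions, each a list of item IDs (assumed layout)\n    n_sessions = len(sessions)\n    total_clicks = sum(len(s) for s in sessions)\n    # number of sessions in which each item appears\n    sess_count = Counter(v for s in sessions for v in set(s))\n    entropy = Counter()\n    for s in sessions:\n        for v in set(s):\n            # P_{v,s} as defined above; zero terms contribute nothing\n            p = (len(s) \/ total_clicks) * (sess_count[v] \/ n_sessions)\n            entropy[v] += -p * math.log(p)\n    # anchors: the M items with the highest item entropy\n    return [v for v, _ in entropy.most_common(M)]\n\\end{verbatim}\n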
Let $\\mathcal{A}$ denote the set of all selected anchors.\n\n\\subsubsection{Item Encoding}\nWe extract the corresponding rows from the item embedding matrix $\\mathbf{H}^a$ to construct an anchor embedding matrix $\\mathbf{A} \\in \\mathbb{R}^{M \\times d}$. Then we leverage a linear transformation to transfer the anchor embeddings from the item embedding space to another, so-called \\textit{anchor encoding space}:\n\\begin{equation}\n\t\\mathbf{C} = \\mathbf{W}_c \\mathbf{A} + \\mathbf{b}_c,\n\\end{equation}\nwhere $\\mathbf{C} \\in \\mathbb{R}^{M \\times d}$ is the anchor encoding matrix, and $\\mathbf{W}_c$ and $\\mathbf{b}_c$ are trainable linear transformation parameters.\n\n\\par\nWe propose an unsupervised strategy to learn a soft assignment, the item-anchor distribution, to describe the item-anchor relations. Given the embedding $\\mathbf{h}_i^a$ of item $v_i$, we use an encoder network to produce the logits of the probabilities, and then convert them into a distribution $\\mathbf{p}_i \\in \\mathbb{R}^M$ by a softmax layer as follows:\n\\begin{align}\n\t\\boldsymbol{\\beta}_i & = f_p(\\mathbf{h}_i^a), \\\\\n\tp_{i,j} & = \\frac{\\exp(\\beta_{i,j})}{\\sum_{j^\\prime=1}^{M}\\exp(\\beta_{i,j^\\prime})}, \\\\\n\t\\mathbf{p}_i & = [p_{i,1}, p_{i,2}, ..., p_{i,M}],\n\\end{align}\nwhere $f_p(\\cdot)$ is the encoder network mapping an item embedding $\\mathbf{h}_i^a \\in \\mathbb{R}^d$ to logits $\\boldsymbol{\\beta}_i \\in \\mathbb{R}^M$. Any $d\\rightarrow M$ encoder network can be used as $f_p$. In this paper, we employ a two-layer feed-forward neural network for this task:\n\\begin{equation}\n\tf_p(\\mathbf{h}) = \\mathbf{W}_p^{(2)^\\top} {\\rm LeakyReLU}(\\mathbf{W}_p^{(1)^\\top} \\mathbf{h} + \\mathbf{b}_p^{(1)}) + \\mathbf{b}_p^{(2)},\n\\end{equation}\nwhere $\\mathbf{W}_p^{(1)} \\in \\mathbb{R}^{d\\times d}, \\mathbf{b}_p^{(1)} \\in \\mathbb{R}^d$ and $\\mathbf{W}_p^{(2)} \\in \\mathbb{R}^{d\\times M}, \\mathbf{b}_p^{(2)} \\in \\mathbb{R}^M$ are trainable parameters. We use $\\mathbf{P} \\in \\mathbb{R}^{N \\times M}$ to denote the item-anchor distribution matrix.\n\n\\par\nWe assign the representative information of each anchor to items based on the item-anchor distribution to obtain the item encodings as follows:\n\\begin{equation}\n\t\\mathbf{H}^b = \\mathbf{P} \\mathbf{C},\n\\end{equation}\nwhere $\\mathbf{H}^b \\in \\mathbb{R}^{N\\times d}$ is the item encoding matrix, whose $i$-th row $\\mathbf{h}_i^b = \\mathbf{p}_i\\mathbf{C}$ is the item encoding of $v_i$. Notice that this is an end-to-end learning scheme, where the model learns how to assign the anchor distribution for an input item.\n\n\\subsection{Session Representation Learning}\nFor a session $S=\\{v_1, v_2, ..., v_{\\tau}\\}$, in order to capture its sequential information, we employ a \\textit{gated recurrent unit} (GRU) network to learn the session representation from the two perspectives of the item embeddings $S^a=\\{\\mathbf{h}_1^a, \\mathbf{h}_2^a, ..., \\mathbf{h}_{\\tau}^a\\}$ and the item encodings $S^b=\\{\\mathbf{h}_1^b, \\mathbf{h}_2^b, ..., \\mathbf{h}_{\\tau}^b\\}$:\n\\begin{align}\n\t(\\mathbf{s}_1^a, \\mathbf{s}_2^a, ..., \\mathbf{s}_{\\tau}^a) & = {\\rm GRU}(\\mathbf{h}_1^a, \\mathbf{h}_2^a, ..., \\mathbf{h}_{\\tau}^a), \\\\\n\t(\\mathbf{s}_1^b, \\mathbf{s}_2^b, ..., \\mathbf{s}_{\\tau}^b) & = {\\rm GRU}(\\mathbf{h}_1^b, \\mathbf{h}_2^b, ..., \\mathbf{h}_{\\tau}^b),\n\\end{align}\nwhere $\\mathbf{s}_{t}$ is the hidden state of the GRU at timestamp $t$. 
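\n\\par\nA minimal PyTorch-style sketch of this shared encoder is given below; the module name, batch-first layout, and tensor shapes are our own assumptions, not prescribed by the model:\n\\begin{verbatim}\nimport torch.nn as nn\n\nclass SharedSessionEncoder(nn.Module):\n    # One GRU encodes both the item-embedding and item-encoding sequences.\n    def __init__(self, d):\n        super().__init__()\n        self.gru = nn.GRU(input_size=d, hidden_size=d, batch_first=True)\n\n    def forward(self, seq_a, seq_b):\n        # seq_a, seq_b: (batch, tau, d) sequences of h^a and h^b\n        out_a, _ = self.gru(seq_a)\n        out_b, _ = self.gru(seq_b)\n        # last hidden states s_tau^a and s_tau^b as session representations\n        return out_a[:, -1, :], out_b[:, -1, :]\n\\end{verbatim}\n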
We select the last hidden states $\\mathbf{s}_{\\tau}^a$ and $\\mathbf{s}_{\\tau}^b$ as the two session representations. We note that a single GRU network is shared for learning both session representations, so that the GRU is jointly optimized for both item embeddings and item encodings.\n\n\\subsection{Prediction and Fusion}\nWe propose to use \\textit{decision fusion} to obtain the final score prediction. Note that $\\mathbf{s}_{\\tau}^a$ and $\\mathbf{s}_{\\tau}^b$ encode a session $S$ in two different feature spaces: the objective of $\\mathbf{H}^a$ is to encode local structural characteristics, while that of $\\mathbf{H}^b$ is to encode global topological information. A feature fusion might confuse the two design objectives, yet a decision fusion may be able to reconcile them.\n\n\\par\nWe first independently compute two intermediate predictions $\\mathbf{\\hat{y}}^a \\in \\mathbb{R}^N$ and $\\mathbf{\\hat{y}}^b \\in \\mathbb{R}^N$ based on $\\mathbf{s}_{\\tau}^a$ and $\\mathbf{s}_{\\tau}^b$ by\n\\begin{align}\n\t\\mathbf{\\hat{y}}^a &= {\\rm softmax}(\\mathbf{H}^a \\mathbf{s}_\\tau^a), \\\\\n\t\\mathbf{\\hat{y}}^b &= {\\rm softmax}(\\mathbf{H}^b \\mathbf{s}_\\tau^b).\n\\end{align}\nWe further employ two trainable weights to implement an \\textit{adaptive decision fusion} as follows:\n\\begin{equation}\n\t\\mathbf{\\hat{y}} = \\sigma(\\omega^a) \\cdot \\mathbf{\\hat{y}}^a + \\sigma(\\omega^b) \\cdot \\mathbf{\\hat{y}}^b,\n\\end{equation}\nwhere $\\mathbf{\\hat{y}} \\in \\mathbb{R}^N$ is the final score prediction for recommendation, $\\sigma(\\omega^a), \\sigma(\\omega^b)$ are the adaptive fusion weights, and $\\sigma(\\cdot)$ is the $\\mathtt{sigmoid}$ function. The top-$K$ highest-scored items are selected to construct the recommendation list.\n\n\\par\nWe leverage the cross-entropy loss to supervise the predictions $\\mathbf{\\hat{y}}^a, \\mathbf{\\hat{y}}^b$ and $\\mathbf{\\hat{y}}$ as follows:\n\\begin{align}\n\t\\mathcal{L}_a(\\mathbf{\\hat{y}}^a, \\mathbf{y}) &= -\\sum_{i=1}^{N} y_i \\log(\\hat{y}_i^a), \\\\\n\t\\mathcal{L}_b(\\mathbf{\\hat{y}}^b, \\mathbf{y}) &= -\\sum_{i=1}^{N} y_i \\log(\\hat{y}_i^b), \\\\\n\t\\mathcal{L}_c(\\mathbf{\\hat{y}}, \\mathbf{y}) &= -\\sum_{i=1}^{N} y_i \\log(\\hat{y}_i), \\\\\n\t\\mathcal{L} &= \\mathcal{L}_a + \\mathcal{L}_b + \\mathcal{L}_c,\n\\end{align}\nwhere $\\mathbf{y}$ is the ground truth, a one-hot indicator vector: $y_i=1$ if $v_i = v_{\\tau+1}$, and $y_i=0$ otherwise. Similarly, $\\hat{y}_i^a, \\hat{y}_i^b, \\hat{y}_i$ are the $i$-th scores in $\\mathbf{\\hat{y}}^a, \\mathbf{\\hat{y}}^b, \\mathbf{\\hat{y}}$, respectively. 
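\n\\par\nContinuing the PyTorch-style pseudocode from above (all names are again our own), the prediction, fusion, and loss computation for one session can be sketched as:\n\\begin{verbatim}\nimport torch\n\ndef fuse_and_score(H_a, H_b, s_a, s_b, w_a, w_b, target):\n    # H_a, H_b: (N, d) item embedding\/encoding matrices\n    # s_a, s_b: (d,) session representations; w_a, w_b: trainable scalars\n    y_a = torch.softmax(H_a @ s_a, dim=0)\n    y_b = torch.softmax(H_b @ s_b, dim=0)\n    y = torch.sigmoid(w_a) * y_a + torch.sigmoid(w_b) * y_b\n    # L = L_a + L_b + L_c for the one-hot target v_{tau+1}\n    loss = -(torch.log(y_a[target]) + torch.log(y_b[target])\n             + torch.log(y[target]))\n    return y, loss\n\\end{verbatim}\n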
Finally, we minimize the loss function $\\mathcal{L}$ to train our model.\n\n\n\\begin{table}[t]\n\t\\centering\n\t\\caption{Statistics of the constructed item graphs.}\n\t\\begin{tabular}{lccc}\n\t\t\\toprule\n\t\tDatasets & Yoochoose 1\/64 & RetailRocket & Diginetica \\\\\n\t\t\\midrule\n\t\tnumber of nodes $N$ & 17,376 & 36,968 & 43,097 \\\\\n\t\tnumber of edges $|E|$ & 227,205 & 542,655 & 782,655 \\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\t\\label{Tble:Graph}\n\\end{table}\n\n\\section{Experiment Settings}\n\\label{Sec:Experiment Settings}\n\\subsection{Datasets}\nWe conduct experiments on three real-world datasets: Yoochoose\\footnote{http:\/\/2015.recsyschallenge.com\/challenge.html}, RetailRocket\\footnote{https:\/\/www.kaggle.com\/retailrocket\/ecommerce-dataset} and Diginetica\\footnote{http:\/\/cikm2016.cs.iupui.edu\/cikm-cup}, which are commonly used in the SBR task~\\cite{li:et.al:2017:CIKM, liu:et.al:2018:KDD, wu:et.al:2019:AAAI, wang:et.al:2019:SIGIR, qiu:et.al:2019:CIKM, zhang:et.al:2020:Neurocomputing, yuan:et.al:2021:AAAI}. Yoochoose is from the RecSys Challenge 2015, containing six months of clicks from a European e-commerce website. RetailRocket is a Kaggle contest dataset published by an e-commerce company, containing six months of browsing activities. Diginetica comes from the CIKM Cup 2016, and only its transactional data are used.\n\n\\par\nTo make a fair comparison, our data preprocessing is the same as that of~\\cite{li:et.al:2017:CIKM, liu:et.al:2018:KDD,wu:et.al:2019:AAAI,wang:et.al:2019:SIGIR,yuan:et.al:2021:AAAI}. In particular, we filter out sessions of length one and items appearing fewer than five times. Furthermore, after constructing the item graph, we adopt the data augmentation approach of~\\cite{wu:et.al:2019:AAAI, qiu:et.al:2019:CIKM}. Table~\\ref{Tble:Dataset} summarizes the statistics of the three datasets.\n\\begin{table}[t]\n\t\\centering\n\t\\caption{Statistics of the three datasets.}\n\t\\begin{tabular}{lccc}\n\t\t\\toprule\n\t\tDatasets & Yoochoose 1\/64 & RetailRocket & Diginetica \\\\\n\t\t\\midrule\n\t\t\\# train sessions & 369,859 & 433,648 & 719,470 \\\\\n\t\t\\# test sessions & 55,696 & 15,132 & 60,858 \\\\\n\t\t\\# items & 17,376 & 36,968 & 43,097 \\\\\n\t\t\\# average length & 6.16 & 5.43 & 5.12 \\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\t\\label{Tble:Dataset}\n\\end{table}\n\n\\subsection{Parameter Setting}\nFollowing previous work~\\cite{li:et.al:2017:CIKM, liu:et.al:2018:KDD,wu:et.al:2019:AAAI,wang:et.al:2019:SIGIR}, we set the vector dimension $d=100$ and the mini-batch size to 100 for all three datasets. We adopt the Adam optimizer with an initial learning rate of 0.01, decayed by a factor of 0.1 after every 3 epochs. We set the L2 penalty to $10^{-5}$ to avoid overfitting. We set $k=3$ for the item graph construction, and the number of GSN iterations to $T=4$. 
The hyper-parameters for the number of anchors $M$ and the number of GSN layers $L$ will be discussed in Section~\\ref{Sec:Hyperparameter} on hyper-parameter analysis.\n\n\n\\begin{table*}[t]\n\t\\centering\n\t\\caption{Overall performance comparison.}\n\t\\begin{threeparttable}\n\t\t\\begin{tabular}{llcccccc}\n\t\t\t\\toprule\n\t\t\t~ & \\multirow{2}*{Methods} & \\multicolumn{2}{c}{Yoochoose 1\/64} & \\multicolumn{2}{c}{RetailRocket} & \\multicolumn{2}{c}{Diginetica} \\\\\n\t\t\t\\cmidrule(r){3-4} \\cmidrule(r){5-6} \\cmidrule(r){7-8}\n\t\t\t~ & ~ & HR@20(\\%) & MRR@20(\\%) & HR@20(\\%) & MRR@20(\\%) & HR@20(\\%) & MRR@20(\\%) \\\\\n\t\t\t\\cmidrule(r){1-8}\n\t\t\t\\multirow{5}*{Traditional Methods} & POP & 11.15 & 2.99 & 1.61 & 0.38 & 0.74 & 0.22 \\\\\n\t\t\t~ & S-POP & 39.62 & 18.99 & 39.53 & 27.17 & 21.60 & 13.51 \\\\\n\t\t\t~ & FPMC & 45.62 & 15.01 & 32.37 & 13.82 & 26.53 & 6.95 \\\\\n\t\t\t~ & SKNN & 63.77 & 25.22 & \\underline{54.28} & 24.46 & 48.06 & 16.95 \\\\\n\t\t\t~ & STAN & 69.45 & 28.74 & 53.48 & 26.81 & 49.93 & 17.59 \\\\\n\t\t\t\\cmidrule(r){1-8}\n\t\t\t\\multirow{4}*{Neural Methods} & NARM & 68.32 & 28.63 & 50.22 & 24.59 & 49.70 & 16.17 \\\\\n\t\t\t~ & STAMP & 68.74 & 29.67 & 50.96 & 25.17 & 45.64 & 14.32 \\\\\n\t\t\t~ & CSRM & 69.85 & 29.71 & - & - & 51.69 & 16.92 \\\\\n\t\t\t~ & CoSAN & 70.04 & 29.85 & 52.47 & 24.40 & 48.34 & 15.22 \\\\\n\t\t\t\\cmidrule(r){1-8}\n\t\t\t\\multirow{6}*{Graph-based Methods} & SR-GNN & 70.57 & 30.94 & 50.32 & 26.57 & 50.73 & 17.59 \\\\\n\t\t\t~ & GC-SAN & 70.66 & 30.04 & 51.18 & \\underline{27.40} & 50.84 & 17.79 \\\\\n\t\t\t~ & TAGNN & 71.02 & 31.12 & - & - & 51.31 & 18.03 \\\\\n\t\t\t~ & FLCSP & 71.58 & 31.31 & 52.63 & 25.85 & 51.68 & 17.27 \\\\\n\t\t\t~ & Disen-GNN & 71.46 & \\underline{31.36} & - & - & 53.79 & 18.99 \\\\\n\t\t\t~ & GCE-GNN & \\underline{72.18} & 30.84 & - & - & \\underline{54.22} & \\underline{19.04} \\\\\n\t\t\t~ & $S^2$-DHCN & 68.34 & 27.89 & 53.66 & 27.30 & 53.18 & 18.44 \\\\\n\t\t\t\\cmidrule(r){1-8}\n\t\t\t\\multirow{2}*{Proposed Methods} & GSN-IAS & \\textbf{72.34} & \\textbf{31.45} & \\textbf{57.13} & \\textbf{29.97} & \\textbf{55.65} & \\textbf{19.24} \\\\\n\t\t\t~ & Improv.(\\%) & 0.22 & 0.29 & 5.25 & 9.38 & 2.64 & 1.05 \\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\t\\begin{tablenotes}\n\t\t\t\\centering\n\t\t\t\\item[1] We run our model 3 times on each dataset and report the mean performance metrics; the variances across different runs are consistently smaller than 0.01\\%.\n\t\t\\end{tablenotes}\n\t\\end{threeparttable}\n\t\\label{Tble:Overall Performance}\n\\end{table*}\n\n\\subsection{Competitors}\nWe compare GSN-IAS with the following competitors, which are divided into three groups:\n\n\\par\n\\textbf{Traditional methods:} these apply simple recommendation rules, like item popularity, matrix factorization, Markov chains or k-nearest-neighbors.\n\n\\par\\noindent\n$\\cdot$ \\textsf{POP} and \\textsf{S-POP} recommend the most popular items in the training dataset and in the current session, respectively.\n\\par\\noindent\n$\\cdot$ \\textsf{FPMC}~\\cite{rendle:et.al:2010:WWW} combines matrix factorization and a Markov chain for next-item recommendation.\n\\par\\noindent\n$\\cdot$ \\textsf{SKNN}~\\cite{jannach:et.al:2017:Recsys} uses the k-nearest-neighbors (KNN) approach for session-based recommendation.\n\\par\\noindent\n$\\cdot$ \\textsf{STAN}~\\cite{garg:et.al:2019:SIGIR} extends the SKNN method to incorporate sequential and temporal information with a decay 
factor.\n\n\\par\n\\textbf{Neural Network Models:} these focus on designing neural networks to learn session representations.\n\\par\\noindent\n$\\cdot$ \\textsf{NARM}~\\cite{li:et.al:2017:CIKM} incorporates a GRU layer with an attention mechanism to learn a sequential session representation.\n\\par\\noindent\n$\\cdot$ \\textsf{STAMP}~\\cite{liu:et.al:2018:KDD} uses an attention mechanism to capture the general interests of a session and the current interests of the last click.\n\\par\\noindent\n$\\cdot$ \\textsf{CSRM}~\\cite{wang:et.al:2019:SIGIR} considers the collaborative neighborhood of the latest $m$ sessions for predicting the session intent.\n\\par\\noindent\n$\\cdot$ \\textsf{CoSAN}~\\cite{luo:et.al:2020:IJCAI} learns the session representation and predicts the session intent by investigating neighborhood sessions.\n\n\\par\n\\textbf{Graph-based Models:} these incorporate complex transition and co-occurrence relations into a graph to capture richer information.\n\\par\\noindent\n$\\cdot$ \\textsf{SR-GNN}~\\cite{wu:et.al:2019:AAAI} is the first work to apply a graph neural network to learn item embeddings.\n\\par\\noindent\n$\\cdot$ \\textsf{GC-SAN}~\\cite{xu:et.al:2019:IJCAI} combines a graph neural network with a multi-layer self-attention network.\n\\par\\noindent\n$\\cdot$ \\textsf{TAGNN}~\\cite{yu:et.al:2020:SIGIR} uses a target attention mechanism to learn the session representation.\n\\par\\noindent\n$\\cdot$ \\textsf{FLCSP}~\\cite{zhang:et.al:2021:InfoSci} makes a decision fusion of latent categorical prediction and sequential prediction.\n\\par\\noindent\n$\\cdot$ \\textsf{Disen-GNN}~\\cite{li:et.al:2022:TKDE} proposes a disentangled graph neural network to capture the session purpose with factor-level attention on each item.\n\\par\\noindent\n$\\cdot$ \\textsf{GCE-GNN}~\\cite{wang:et.al:2020:SIGIR} constructs a session graph and a global graph to learn two levels of item embeddings.\n\\par\\noindent\n$\\cdot$ \\textsf{$S^2$-DHCN}~\\cite{xia:et.al:2021:AAAI} learns the inter- and intra-session information from two types of hypergraphs.\n\n\n\n\\section{Experiment Results}\n\\label{Sec:Experiment Results}\nWe adopt two metrics commonly used in the SBR task, i.e., HR@20 and MRR@20. \\textbf{HR@20} (Hit Rate) focuses on whether the desired item appears in the recommendation list, without considering its rank in the list. \\textbf{MRR@20} (Mean Reciprocal Rank) is the average of the reciprocal ranks of the correctly recommended items in the recommendation list.\n\n\n\n\n\\subsection{Overall Comparison}\nTable~\\ref{Tble:Overall Performance} presents the overall performance comparison between our GSN-IAS and the competitors, where the best results in every column are boldfaced and the second-best results are underlined. It is observed that our GSN-IAS outperforms all the others, achieving the highest HR@20 and MRR@20 on all three datasets.\n\n\\par\nAmong the traditional methods, \\textsf{POP} and \\textsf{S-POP}, based on simple recommendation rules, are not competitive, since they lack the ability to capture the sequential dependencies of items in sessions. \\textsf{FPMC}, using a Markov chain to model the sequential relations, also performs poorly, since the strict independence assumption of the Markov chain is inconsistent with the real situation of the SBR task. 
\\textsf{SKNN} and \\textsf{STAN} achieve competitive performance compared with the neural and graph-based methods, and \\textsf{SKNN} even reaches the second-best HR@20 on the RetailRocket dataset. They both consider the influence of similar sessions, and \\textsf{STAN} designs two time decay functions to describe the recency of a past session and the chronological order of items in a session, respectively. These results imply that the collaborative information of similar sessions and the sequential relations within a session are important for the SBR task.\n\n\\par\nThe neural methods, i.e. \\textsf{NARM}, \\textsf{STAMP}, \\textsf{CSRM} and \\textsf{CoSAN}, generally achieve significant performance improvements over the traditional methods, which reflects the powerful representation ability of neural networks. However, these methods focus on learning a session representation, while neglecting to exploit potential relations between items from different sessions. Notice that, compared with \\textsf{NARM} and \\textsf{STAMP}, the \\textsf{CSRM} and \\textsf{CoSAN} models take into consideration the collaborative information of similar sessions to assist the representation learning of the current session, which yields better performance.\n\n\\par\nThe graph-based methods, i.e. \\textsf{SR-GNN}, \\textsf{GC-SAN}, \\textsf{TAGNN}, \\textsf{FLCSP}, \\textsf{Disen-GNN}, \\textsf{GCE-GNN} and \\textsf{$S^2$-DHCN}, achieve state-of-the-art performance, which indicates that modeling the potential relations between items by a graph structure is helpful for item and session representation learning. The first five construct a directed graph to capture item transition relations, while \\textsf{GCE-GNN} and \\textsf{$S^2$-DHCN} consider item co-occurrences via constructing an undirected graph and a hypergraph, respectively. \\textsf{GCE-GNN} obtains three second-best results, namely HR@20 on Yoochoose 1\/64 and HR@20 and MRR@20 on Diginetica, and \\textsf{$S^2$-DHCN} also achieves competitive performance on RetailRocket and Diginetica. These results suggest the importance of item co-occurrence relations for item embedding learning.\n\n\\par\nOur approach \\textsf{GSN-IAS} outperforms all the state-of-the-art algorithms on the three datasets. In particular, on the RetailRocket dataset, the improvements in HR@20 and MRR@20 are 5.25\\% and 9.38\\% over the second best. This suggests the effectiveness of using our GSN for item embedding learning and informative anchor selection for item encoding. 
Note that, except for \\textsf{GCE-GNN} on Diginetica, the other competitors achieve the second-best performance on only one of HR@20 or MRR@20, while our \\textsf{GSN-IAS} achieves the best on both metrics.\n\n\n\\begin{table}[t]\n\t\\centering\n\t\\caption{Ablation study results.}\n\t\\resizebox{0.5\\textwidth}{!}{\n\t\t\\begin{tabular}{lcccccc}\n\t\t\t\\toprule\n\t\t\tDataset & \\multicolumn{2}{c}{Yoochoose 1\/64} & \\multicolumn{2}{c}{RetailRocket} & \\multicolumn{2}{c}{Diginetica} \\\\\n\t\t\t\\cmidrule(r){1-7}\n\t\t\tMethod & HR@20 & MRR@20 & HR@20 & MRR@20 & HR@20 & MRR@20 \\\\\n\t\t\t\\cmidrule(r){1-7}\n\t\t\tGSN-Anchor & 70.77 & 29.61 & 53.18 & 27.16 & 53.64 & 18.23 \\\\\n\t\t\tGSN-Item & 72.16 & 31.33 & 56.87 & 29.70 & 55.50 & 19.20 \\\\\n\t\t\tGSN-IAS-AvgFuse & 72.27 & 31.35 & 56.67 & 29.66 & 55.42 & 19.13 \\\\\n\t\t\tGSN-IAS & \\textbf{72.34} & \\textbf{31.45} & \\textbf{57.13} & \\textbf{29.97} & \\textbf{55.65} & \\textbf{19.24} \\\\\n\t\t\t\\cmidrule(r){1-7}\n\t\t\t$(\\sigma(\\omega^a),\\sigma(\\omega^b))$ & \\multicolumn{2}{c}{(0.8834, 0.1125)} & \\multicolumn{2}{c}{(0.8421, 0.1517)} & \\multicolumn{2}{c}{(0.8062, 0.1850)} \\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t}\n\t\\label{Tble:AblationStudy}\n\\end{table}\n\n\\subsection{Ablation Study}\nWe conduct ablation experiments to examine the effectiveness of each component of \\textsf{GSN-IAS}. We develop the following three ablation algorithms:\n\\begin{itemize}\n\t\\item \\textsf{GSN-Anchor}: it only uses item encodings for session representation learning and prediction.\n\t\\item \\textsf{GSN-Item}: it only uses item embeddings for session representation learning and prediction.\n\t\\item \\textsf{GSN-IAS-AvgFuse}: it uses both item embeddings and item encodings for session representation learning, but uses average pooling for decision fusion.\n\\end{itemize}\n\n\\par\nTable~\\ref{Tble:AblationStudy} presents the results of the ablation study. We first observe that \\textsf{GSN-IAS} achieves better performance than all its ablation variants. Even the worst-performing \\textsf{GSN-Anchor} achieves the same level of performance as the state-of-the-art \\textsf{SR-GNN}, \\textsf{GC-SAN}, \\textsf{TAGNN} and \\textsf{FLCSP} methods, and \\textsf{GSN-Item} even outperforms all competitors. These observations reflect two points: one is that each component of \\textsf{GSN-IAS} contributes to more accurate prediction, where \\textsf{GSN-Item} plays the leading role and \\textsf{GSN-Anchor} an auxiliary one; the other is that our GSN is an effective graph neural network for learning item embeddings for the SBR task.\n\n\\par\nFrom Table~\\ref{Tble:AblationStudy}, we also observe that \\textsf{GSN-IAS} outperforms \\textsf{GSN-IAS-AvgFuse}, which indicates the effectiveness of our adaptive fusion for learning the fusion weights. The last line of Table~\\ref{Tble:AblationStudy} shows the fusion weights learned for each dataset, which indeed differ across datasets. Furthermore, it is not unexpected that $\\sigma(\\omega^a)>\\sigma(\\omega^b)$ on all datasets. 
This is consistent with our previous analysis that the item embedding learning is the dominant factor, while the informative anchor information complements it for the SBR task.\n\n\n\\begin{table}[t]\n\t\\centering\n\t\\caption{Performance comparison when using GCN, GAT, and LightGCN for item embedding learning.}\n\t\\resizebox{0.5\\textwidth}{!}{\n\t\t\\begin{tabular}{lcccccc}\n\t\t\t\\toprule\n\t\t\tDataset & \\multicolumn{2}{c}{Yoochoose 1\/64} & \\multicolumn{2}{c}{RetailRocket} & \\multicolumn{2}{c}{Diginetica} \\\\\n\t\t\t\\cmidrule(r){1-7}\n\t\t\tMethod & HR@20 & MRR@20 & HR@20 & MRR@20 & HR@20 & MRR@20 \\\\\n\t\t\t\\cmidrule(r){1-7}\n\t\t\tGCN-IAS & 71.86 & 30.42 & 56.73 & 27.47 & 50.10 & 14.88 \\\\\n\t\t\tGAT-IAS & 72.16 & 30.81 & 57.05 & 29.51 & 55.06 & 18.61 \\\\\n\t\t\tLightGCN-IAS & \\textbf{72.47} & \\underline{30.98} & \\textbf{57.18} & \\underline{29.55} & \\underline{55.64} & \\underline{19.20} \\\\\n\t\t\tGSN-IAS & \\underline{72.34} & \\textbf{31.45} & \\underline{57.13} & \\textbf{29.97} & \\textbf{55.65} & \\textbf{19.24} \\\\\n\t\t\t\\bottomrule\n\t\\end{tabular}\n}\n\t\\label{Tble:Comparison GNNs}\n\\end{table}\n\n\\begin{figure*}[t]\n\t\\centering\n\t\\includegraphics[width=\\textwidth, height=0.7\\textwidth]{scatter.png}\n\t\\caption{Visualization by t-SNE of the item embeddings and anchor-based item encodings. We conduct a KMeans (K=5) clustering, and present the embeddings clustered into one class with the same color.}\n\t\\label{Fig:Visualization}\n\\end{figure*}\n\n\\subsection{Comparison with other graph neural networks}\nWe conduct experiments using other well-known graph neural networks, including GCN~\\cite{kipf:et.al:2017:ICLR}, GAT~\\cite{velivckovic:et.al:2018:ICLR} and LightGCN~\\cite{he:et.al:2020:SIGIR}, in place of our GSN for item embedding learning. We fix the number of GNN layers $L=\\{2,2,5\\}$ and the number of anchors $M=\\{100, 500, 1000\\}$ for Yoochoose 1\/64, RetailRocket and Diginetica, respectively. These comparison algorithms are denoted as \\textsf{GCN-IAS}, \\textsf{GAT-IAS} and \\textsf{LightGCN-IAS}.\n\n\\par\nTable~\\ref{Tble:Comparison GNNs} presents the experimental results of using different GNNs. It is observed that \\textsf{GSN-IAS} and \\textsf{LightGCN-IAS} achieve comparable performance and are better than the other two, which validates our motivation of designing GSN without trainable parameters for ID-based item embedding learning in the SBR task. In addition, \\textsf{GAT-IAS} achieves better performance than \\textsf{GCN-IAS}, which implies that learning aggregation weights based on node embeddings is better than computing them from the graph adjacency matrix. Although the performance of our GSN is comparable to that of LightGCN, our GSN provides a new idea: a simpler, more convenient and effective way to learn node embeddings on a graph without any additional parameters.\n\n\n\\begin{figure*}[t]\n\t\\centering\n\t\\includegraphics[width=\\textwidth]{anchors.png}\n\t\\caption{The performance of GSN-Anchor when using different numbers of selected anchors.}\n\t\\label{Fig:NumAnchors}\n\\end{figure*}\n\n\\subsection{Visualization of item embedding}\nFig.~\\ref{Fig:Visualization} presents the visualization of the item embeddings and item encodings of our \\textsf{GSN-IAS} by the t-SNE~\\cite{van:et.al:2008:JMLR} algorithm. We observe that the results are consistent across the different datasets. The original embeddings (the 1st column) are randomly initialized without much differentiation. 
In contrast, after GSN learning for item embeddings and informative anchor selection for item encodings, the item embeddings (the 2nd column) and item encodings (the 4th column) exhibit an obvious clustering effect, that is, similar items are closer to each other in the latent embedding space. To present the results more intuitively, we employ the KMeans (K=5) algorithm to cluster the item embeddings (the 3rd column) and the item encodings (the 5th column), where the clustering effect becomes even more obvious. Such a clustering effect implies that both item embeddings and encodings can provide good item differentiation. The previous experimental results also reflect that such item differentiation helps to achieve more accurate recommendations.\n\n\n\\begin{table}[t]\n\t\\centering\n\t\\caption{Performance of using different numbers of GSN layers.}\n\t\\resizebox{0.5\\textwidth}{!}{\n\t\t\\begin{tabular}{lcccccc}\n\t\t\t\\toprule\n\t\t\tDataset & \\multicolumn{2}{c}{Yoochoose 1\/64} & \\multicolumn{2}{c}{RetailRocket} & \\multicolumn{2}{c}{Diginetica} \\\\\n\t\t\t\\cmidrule(r){1-7}\n\t\t\tMethod & HR@20 & MRR@20 & HR@20 & MRR@20 & HR@20 & MRR@20 \\\\\n\t\t\t\\cmidrule(r){1-7}\n\t\t\tGSN-IAS-1hop & 72.23 & 31.38 & 56.32 & 29.79 & 54.93 & 19.14 \\\\\n\t\t\tGSN-IAS-2hop & 72.34 & \\textbf{31.45} & 57.13 & \\textbf{29.97} & 55.21 & 19.22 \\\\\n\t\t\tGSN-IAS-3hop & 72.42 & 31.11 & 57.00 & 29.66 & 55.53 & 19.18 \\\\\n\t\t\tGSN-IAS-4hop & \\textbf{72.52} & 31.07 & 57.34 & 29.63 & 55.57 & 19.13 \\\\\n\t\t\tGSN-IAS-5hop & 72.46 & 31.03 & \\textbf{57.59} & 29.42 & \\textbf{55.65} & \\textbf{19.24} \\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t}\n\t\\label{Tble:NumLayers}\n\\end{table}\n\n\\subsection{Hyperparameter Analysis}\n\\label{Sec:Hyperparameter}\nWe finally examine the two hyperparameters of our \\textsf{GSN-IAS}: one is the number of anchors, i.e., $M$; the other is the number of stacked GSN layers, i.e., $L$.\n\n\\par\nFig.~\\ref{Fig:NumAnchors} presents the performance of GSN-Anchor when using different numbers of anchors. We observe that the trends of the HR@20 and MRR@20 curves on all three datasets are consistent: they first increase, then remain relatively stable, and even decrease slightly on RetailRocket as $M$ increases. These results indicate that too few anchors carry insufficient information; as the number of anchors increases, the information provided by the anchors tends to saturate; and too many anchors may introduce noise or redundant information that weakens item differentiation.\n\n\\par\nTable~\\ref{Tble:NumLayers} presents the performance of using different numbers of GSN layers. We first note that even \\textsf{GSN-IAS-1hop} still outperforms the state-of-the-art methods. With the increase of $L$, the HR@20 increases, but the growth rate slows down, and HR@20 even decreases on Yoochoose 1\/64. The MRR@20 increases first and then decreases on Yoochoose 1\/64 and RetailRocket. These results are easy to understand: with a larger $L$, higher-order information of each node in the item graph can be explored, but more high-order neighbors may blur a node's information. The choice of $L$ is also related to the dataset size. The Yoochoose 1\/64 dataset is smaller than RetailRocket and Diginetica, so a small $L$ suffices for Yoochoose 1\/64. 
The RetailRocket and Diginetica datasets contain about 2.1 times and 2.5 times more items than Yoochoose 1\/64, needing a larger $L$ to explore more higher-order neighbors.\n\n\n\\section{Conclusion}\\label{Sec:Conlusion}\nIn this paper, we have proposed a novel GSN-IAS model for the SBR task. Our model explores item co-occurrences to construct an item graph, on which we propose a new GSN neural network to optimize neighborhood affinity in ID-based item embedding learning. We have also designed an informative anchor selection strategy to select anchors and to encode the potential relations of all nodes to such anchors. Furthermore, we have employed a shared GRU to learn two session representations for two predictions, and have proposed an adaptive decision fusion mechanism to fuse the two predictions into the final recommendation list. Experiments on three public datasets have validated the superiority of our GSN-IAS model over the state-of-the-art algorithms.\n\n\\par\nIn this paper, we have especially discussed the characteristics of ID-based item embedding learning for the SBR task. We note that our GSN model may not only be suitable for such ID embedding learning in the SBR task, but may also find further applications in other scenarios, like collaborative filtering. Our future work will explore the GSN potentials in other tasks. Furthermore, we suggest further mining and encoding some latent transition knowledge, such as diverse transitional modes between different kinds of items, as another interesting direction for future work.\n\n\n\n\n\n\n\n\n\n\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\n\n\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\n\nG\\\"odel's \\emph{functional} or \\emph{Dialectica} interpretation was introduced in \\cite{Goedel(58)} as a reduction of first order arithmetic\nto the ``finitistic\" quantifier-free calculus of primitive recursive functionals (system $\\systemT$). Soon after G\\\"odel's paper appeared in print, Spector\n\\cite{Spector(62)} showed how G\\\"odel's interpretation of arithmetic could be extended to analysis by extending\nsystem $\\systemT$ with what he called \\emph{bar recursion}. By \\emph{analysis} we mean\nclassical arithmetic in all finite types extended with countable choice and dependent choice -- and hence comprehension.\n\nSpector's original work has given rise to several other bar recursive interpretations of analysis, whereby proof interpretations\nother than the Dialectica interpretation have been used. In such cases one was either able to continue using Spector's original form\nof bar recursion (e.g. \\cite{FE(2010),Kohlenbach(92)}) or some variant of bar recursion was proposed (e.g. \\cite{Berardi(98),BO(05)}).\n\nAs we have shown in \\cite{EO(2009),EO(2011A)}, there are close connections between the different forms of bar recursion and the calculation of optimal strategies in a general class of sequential games. This was achieved by showing that bar recursion corresponds to the iterated product of\nquantifiers and selection functions. Spector's original bar recursion can be shown to be equivalent to the iterated product of quantifiers, whereas\nthe restricted form needed to witness the Dialectica interpretation of $\\DNS$ is equivalent to the iterated product of selection functions \\cite{EO(2015A)}.\n\nThis analogy between computability and games is based on the modelling of players via quantifiers $\\K{R}{X} = (X \\to R) \\to R$. 
If $X$ is the set of moves available to a player, and $R$ is the set of possible outcomes, then mappings of type $X \\to R$ can be seen as describing the context a player lives in. Such contexts (a form of continuation) describe the final outcome for each of the possible choices of the player. Hence, to specify a player is to describe her preferred outcomes for each given game context. Similarly, a selection function $\\J{R}{X} = (X \\to R) \\to X$ also takes a game context as input, but determines the optimal move for any given game context. \n\nIn this paper we consider the iterated product of selection functions parametrised by an arbitrary strong monad $T X$, i.e. $\\JT{T}{R}{X} = (X \\to R) \\to TX$. Using the intuition that an element of a monad $T X$ provides ``information\" about concrete elements of $X$, and the correspondence with games, we can view such selection functions $\\JT{T}{R}{X}$ as specifying some information about the optimal move for any given game context. \n\nWe study the bar recursion that arises from the iterated product of such $T$-selection functions. Our first step is to show that $\\JT{T}{R}{X}$ is also a strong monad. Since any strong monad embeds into the continuation monad, it follows that we have an embedding of $\\JT{T}{R}{X}$ into $\\K{R}{X}$. We make use of this embedding to show that the iterated\nproduct of $T$-selection functions is in fact primitive recursively definable from the iterated product of quantifiers, and hence from Spector's original bar recursion.\n\nFinally, we consider the particular case when $T X$ is the finite power set monad $\\power{X}$. We prove several properties of the iterated product of\nselection functions $(X \\to R) \\to \\power{X}$, and show how it provides a witness for the \\emph{Herbrand functional interpretation} \\cite{BBS(2012)} of double-negation shift $\\DNS$\n\\[\n\\forall n^\\mathbb{N} \\neg \\neg A(n) \\to \\neg \\neg \\forall n^\\mathbb{N} A(n).\n\\]\n\n\\subsection{Heyting arithmetic in all finite types, and bar induction}\n\nWe work in the setting of Heyting arithmetic in all finite types, with full extensionality. This corresponds to the system $\\EHAomega$ of \\cite{Troelstra(73)}. When carrying out the verification of the Herbrand functional interpretation of $\\DNS$ we will make free use of classical logic, in order to simplify the verification of the bar-recursive construction, and hence we will be working in $\\EPAomega$. Although it is well-known that full extensionality is not normally interpreted by the functional interpretations, we are simply assuming full extensionality in the verification of our interpretation of $\\DNS$, which is obviously harmless.\n\nThe quantifier-free part of the theories $\\EHAomega$ and $\\EPAomega$ is normally referred to as G\\\"odel's system $\\systemT$. Although in $\\systemT$ one normally only assumes the natural numbers $\\mathbb{N}$ as basic types, and function space constructions $X \\to Y$ as the only type constructor, we will follow here the same formulation of $\\systemT$ as in \\cite{BBS(2012)} where one also assumes products $X \\times Y$, finite sequences $X^*$, and even finite power sets $\\power{X}$. We write $r \\preceq s$ to say that the finite sequence $r$ is a prefix of the finite sequence $s$. We assume that each type $X$ contains a `default' value ${\\bf 0} \\colon X$, so that we can define a canonical extension operation $(\\cdot)^+ \\colon X^* \\to X^\\mathbb{N}$ from finite to infinite sequences, by appending an infinite sequence of default values. For instance, for the natural numbers ${\\bf 0}^\\mathbb{N}$ could be the number zero, whereas for $\\power{X}$ we can take ${\\bf 0}^{\\power{X}} = \\emptyset$.\n\nOn top of $\\EHAomega$, in the proofs of Lemmas \\ref{bar-full} and \\ref{t-spector} we will make use of the following form of bar induction: \n\n\\begin{definition}[Bar induction] Let $P(s)$ be a universal formula, and $s \\colon X^*$. We say that bar induction holds for $P(s)$ if whenever\n\\begin{itemize}\n\t\\item $\\omega(s^+) < |s|$ implies $P(s)$, and\n\t\\item $\\omega(s^+) \\geq |s|$ and $\\forall x P(s * x)$ implies $P(s)$\n\\end{itemize}\nthen $P(\\langle \\, \\rangle)$. \n\\end{definition}\n\nThis form of bar induction implicitly assumes that the bar condition $\\omega(s^+) < |s|$ eventually holds. \nThis is indeed the case in all models of Spector's bar recursion \\cite{Bezem(85),Scarpellini(71)}. \\\\[2mm]\n{\\bf Notation}. In the paper we will use sub-scripts in four different ways, and hope their respective meanings will be clear from context:\n\\begin{itemize}\n\t\\item In the following section we use sub-scripts to denote the type of a functional. For instance, the identity function of type $X$ will be written as $\\operatorname{id}_X$.\n\t\\item If $\\alpha \\colon X \\to Y$ we can view $\\alpha$ as a family of elements of $Y$ indexed by $X$, i.e. $\\{ \\alpha_x \\}_{x \\colon X}$. When taking this view we might write $\\alpha_x$ instead of $\\alpha(x)$.\n\t\\item Bar recursive functionals have several parameters, normally $\\BR(\\omega)(s)(\\varepsilon)(q)$. In order to focus on the selection functions $\\varepsilon$ and the outcome function $q$ we shall rewrite this as $\\BR^\\omega_s(\\varepsilon)(q)$. This makes sense since $s$ is the `index' of the bar recursion whereas $\\omega$ is the stopping condition.\n\t\\item Finally, for $q \\colon X^* \\to R$ we write $q_s(t)$ for $q(s * t)$ when we wish to `partially evaluate' $q$ on $s$ to produce another function $q_s \\colon X^* \\to R$.\n\\end{itemize}\n\n\\subsection{Strong monads}\n\nIn this section we recall the basic notions about strong monads needed in this paper. Throughout the paper we work in G\\\"odel's system $\\systemT$. \nHence, $X, Y$ and $R$ should be viewed as finite types\\footnote{It will be clear, however, that what we describe would work more generally in any of the\nwell-known models of higher-order computability.}. \n\n\\begin{definition}[Strong monad] \\label{monad-laws} Let $T$ be a meta-level unary operation on simple types, that we will call a \\emph{type operator}. A type operator $T$ is called a \\emph{strong monad} if we have a family of closed terms\n\\eqleft{\n\\begin{array}{rl}\n\\eta_X & \\colon X \\to TX \\\\[2mm]\n(\\cdot)^\\dagger & \\colon (X \\to TY) \\to (TX \\to TY)\n\\end{array}\n}\nsatisfying (provably in $\\systemT$) the laws\n\\begin{itemize}\n\t\\item[$(i)$] $(\\eta_X)^\\dagger = \\operatorname{id}_{T X}$ \\\\[-3mm]\n\t\\item[$(ii)$] $g^\\dagger \\circ \\eta_Y = g$ \\\\[-3mm]\n\t\\item[$(iii)$] $(g^\\dagger \\circ f)^\\dagger = g^\\dagger \\circ f^\\dagger$\n\\end{itemize}\nwhere $g \\colon Y \\to T R$ and $f \\colon X \\to TY$. When several strong monads are involved we shall use the super-script in $\\eta^T$ so as to be clear which $\\eta$ is being used. \\\\[1mm]\nGiven $f \\colon X \\to Y$ we define $T f \\colon T X \\to T Y$ by $T f =\n(\\eta_Y \\circ f)^\\dagger$. 
The laws for the monad show that this\nconstruction makes $T$ into a functor, that is, $T \\operatorname{id}_X = {\\rm\n id}_{T X}$, and for $g : Y \\to Z$ we have $T(g \\circ f) = Tg \\circ\nTf$.\n\\end{definition}\n\nMonads have been extensively studied in category theory \\cite{kock:monoidal}, programming language semantics \\cite{Moggi(1991)}, and in the functional programming community \\cite{Wadler92}. In a monad one would normally have a non-uniform mapping from $f \\colon X \\to TY$ to $f^\\dagger \\colon TX \\to TY$. The term \\emph{strong} here refers to the assumption that we have a uniform map $(\\cdot)^\\dagger \\colon (X \\to TY) \\to (TX \\to TY)$.\n\n\\begin{definition}[$T$-algebra] \\label{algebra-def} Given a strong monad $T$, a type $R$ is called a \\emph{$T$-algebra} if we have a family of maps $(\\cdot)^* \\colon (X \\to R) \\to (T X \\to R)$ satisfying\n\\begin{itemize}\n\t\\item[$(i)$] $g^* \\circ \\eta_Y = g$ \\\\[-3mm]\n\t\\item[$(ii)$] $(g^* \\circ f)^* = g^* \\circ f^\\dagger$\n\\end{itemize}\nwhere $g \\colon Y \\to R$ and $f \\colon X \\to TY$. \n\\end{definition}\n\n\n\nThe reason we focus here on \\emph{strong} monads is that on such monads we can define a binary product operation as follows:\n\n\\begin{lemma} \\label{lemma-strong} For any strong monad $T$ we can define a product operation\n\\[ \\otimes \\;\\colon\\; TX \\times (X \\to TY) \\to T(X \\times Y) \\]\nas\n\\begin{equation} \\label{t-product}\na \\otimes f = (\\lambda x . (\\lambda y . \\eta_{X \\times Y}(x, y))^\\dagger(f x))^\\dagger(a)\n\\end{equation}\nsatisfying, for $q \\colon X \\times Y \\to T R$, \n\\[ q^\\dagger(a \\otimes f) = (\\lambda x . (q_x)^\\dagger(f x))^\\dagger(a), \\]\nwhere $q_x = \\lambda y . q(x,y)$. When $q \\colon X \\times Y \\to R$ and $R$ is a $T$-algebra it satisfies\n\\[ q^*(a \\otimes f) = (\\lambda x . (q_x)^*(f x))^*(a). \\]\n\\end{lemma}\n{\\bf Proof}. We calculate as follows:\n\\[\n\\begin{array}{lcl}\nq^\\dagger(a \\otimes f)\n\t& \\stackrel{(\\ref{t-product})}{=} & q^\\dagger((\\lambda x . (\\lambda y . \\eta_{X \\times Y}(x, y))^\\dagger(f x))^\\dagger(a)) \\\\[1mm]\n\t& \\stackrel{(\\circ)}{=} & (q^\\dagger \\circ (\\lambda x . (\\lambda y . \\eta_{X \\times Y}(x, y))^\\dagger(f x))^\\dagger)(a) \\\\[1mm]\n\t& \\stackrel{\\textup{D}\\ref{monad-laws}(iii)}{=} & (q^\\dagger \\circ (\\lambda x . (\\lambda y . \\eta_{X \\times Y}(x, y))^\\dagger(f x)))^\\dagger(a) \\\\[1mm]\n\t& \\stackrel{(\\circ)}{=} & (\\lambda x. q^\\dagger((\\lambda y . \\eta_{X \\times Y}(x, y))^\\dagger(f x)))^\\dagger(a) \\\\[1mm]\n\t& \\stackrel{(\\circ)}{=} & (\\lambda x. ((q^\\dagger \\circ (\\lambda y . \\eta_{X \\times Y}(x, y))^\\dagger)(f x)))^\\dagger(a) \\\\[1mm]\n\t& \\stackrel{\\textup{D}\\ref{monad-laws}(iii)}{=} & (\\lambda x. ((q^\\dagger \\circ (\\lambda y . \\eta_{X \\times Y}(x, y)))^\\dagger(f x)))^\\dagger(a) \\\\[1mm]\n\t& \\stackrel{(\\circ)}{=} & (\\lambda x. (( \\lambda y . q^\\dagger (\\eta_{X \\times Y}(x, y)) )^\\dagger(f x)))^\\dagger(a) \\\\[1mm]\n\t& \\stackrel{\\textup{D}\\ref{monad-laws}(ii)}{=} & (\\lambda x . 
(q_x)^\\dagger(f x))^\\dagger(a).\n\\end{array}\n\\]\nIn the case $q \\colon X \\times Y \\to R$ and $R$ is a $T$-algebra we use properties $(i)$ and $(ii)$ of Definition \\ref{algebra-def} instead.\n$\\hfill \\Box$\n\n\\section{$T$-Selection Functions}\n\nIn the following two sections we assume that $T$ is a strong monad, and that $R$ is a $T$-algebra.\n\n\\begin{definition}[$T$-selection functions] Let $\\JT{T}{R}{X} = (X \\to R) \\to T X$, where $R$ is a $T$-algebra. The elements of the type $\\JT{T}{R}{X}$ will be called \\emph{$T$-selection functions}. \n\\end{definition}\n\nUnder the assumptions that $T$ is a strong monad and $R$ a $T$-algebra, it follows that $\\JT{T}{R}{}$ is also a strong monad.\n\n\\begin{lemma} \\label{j-t-monad} $\\JT{T}{R}{}$ is a strong monad with operations:\n\\eqleft{\n\\begin{array}{cl}\n(i) & \\eta^{\\JT{T}{R}{}}_X(x) = \\lambda p . \\eta^T_X(x) \\\\[2mm]\n(ii) & \\delta^\\dagger(\\varepsilon) = \\lambda p . (b^\\delta_p)^\\dagger (a^{\\varepsilon,\\delta}_p), \\mbox{where $\\delta \\colon X \\to \\JT{T}{R}{Y}$ and $\\delta^\\dagger \\colon \\JT{T}{R}{X} \\to \\JT{T}{R}{Y}$}\n\\end{array}\n}\nwhere $b_p^\\delta(x) \\stackrel{T Y}{=} \\delta(x)(p)$ and $a^{\\varepsilon,\\delta}_p \\stackrel{T X}{=} \\varepsilon(p^* \\circ b_p^\\delta)$. \n\\end{lemma}\n{\\bf Proof}. It is easy to check conditions ($i$) and ($ii$). Define $\\Delta_x(p) = (b^\\delta_p)^\\dagger (a^{\\varepsilon_x,\\delta}_p)$ and $\\Gamma_\\nu(p) = (b^\\varepsilon_p)^\\dagger(a^{\\nu, \\varepsilon}_p)$. We outline property ($iii$):\n\\[\n\\begin{array}{lcl}\n(\\delta^\\dagger \\circ \\varepsilon)^\\dagger \n\t& = & (\\lambda x . \\delta^\\dagger(\\varepsilon_x))^\\dagger \\\\[0mm]\n\t& \\stackrel{(ii)}{=} & (\\lambda x . \\lambda p . (b^\\delta_p)^\\dagger (a^{\\varepsilon_x,\\delta}_p))^\\dagger \\\\[0mm]\n\t& \\stackrel{\\Delta\\;\\textup{def.}}{=} & (\\lambda x . \\Delta_x)^\\dagger \\\\[0mm]\n\t& \\stackrel{(ii)}{=} & \\lambda \\nu . \\lambda q . (b_q^{\\Delta})^\\dagger(a_q^{\\nu, \\Delta}) \\\\[1mm]\n\t& \\stackrel{b\\;\\textup{def.}}{=} & \\lambda \\nu . \\lambda q . (\\lambda x . \\Delta_x(q))^\\dagger(a_q^{\\nu, \\Delta}) \\\\[0mm]\n\t& \\stackrel{\\Delta\\;\\textup{def.}}{=} & \\lambda \\nu . \\lambda q . ((b^\\delta_q)^\\dagger \\circ (\\lambda x . a^{\\varepsilon_x,\\delta}_q))^\\dagger(a_q^{\\nu, \\Delta}) \\\\[0mm]\n\t& \\stackrel{\\textup{D}\\ref{monad-laws}(iii)}{=} & \\lambda \\nu . \\lambda q . ((b^\\delta_q)^\\dagger \\circ (\\lambda x . a^{\\varepsilon_x,\\delta}_q)^\\dagger)(a_q^{\\nu, \\Delta}) \\\\[1mm]\n\t& = & \\lambda \\nu . \\lambda q . (b^\\delta_q)^\\dagger ((\\lambda x . a^{\\varepsilon_x,\\delta}_q)^\\dagger(a_q^{\\nu, \\Delta})) \\\\[0mm]\n\t& \\stackrel{(*)}{=} & \\lambda \\nu . \\lambda q . (b_q^\\delta)^\\dagger (a^{\\Gamma_\\nu, \\delta}_q) \\\\[0mm]\n\t& \\stackrel{(ii)}{=} & \\lambda \\nu . \\delta^\\dagger(\\Gamma_\\nu) \\\\[0mm]\n\t& \\stackrel{\\Gamma\\;\\textup{def.}}{=} & \\lambda \\nu . \\delta^\\dagger(\\lambda p . (b^\\varepsilon_p)^\\dagger(a^{\\nu, \\varepsilon}_p)) \\\\[0mm]\n\t& \\stackrel{(ii)}{=} & \\lambda \\nu . \\delta^\\dagger(\\varepsilon^\\dagger(\\nu)) \\\\[1mm]\n\t& = & \\delta^\\dagger \\circ \\varepsilon^\\dagger.\n\\end{array}\n\\]\nIt remains to show that $(*) \\; a^{\\Gamma_\\nu, \\delta}_q = (\\lambda x . a^{\\varepsilon_x,\\delta}_q)^\\dagger(a_q^{\\nu, \\Delta})$. 
This can be shown as\n\\[\n\\begin{array}{lcl}\na^{\\Gamma_\\nu, \\delta}_q \n\t& \\stackrel{a\\;\\textup{def.}}{=} & \\Gamma_\\nu(q^* \\circ b_q^\\delta) \\\\[0mm]\n\t& \\stackrel{\\Gamma\\;\\textup{def.}}{=} & (\\lambda x . b^\\varepsilon_{q^* \\circ b_q^\\delta}(x))^\\dagger(a^{\\nu, \\varepsilon}_{q^* \\circ b_q^\\delta}) \\\\[0mm]\n\t& \\stackrel{b\\;\\textup{def.}}{=} & (\\lambda x . \\varepsilon_x(q^* \\circ b_q^\\delta))^\\dagger(a^{\\nu,\\varepsilon}_{q^* \\circ b_q^\\delta}) \\\\[0mm]\n\t& \\stackrel{(**)}{=} & (\\lambda x . \\varepsilon_x (q^* \\circ b_q^\\delta))^\\dagger(a_q^{\\nu, \\Delta}) \\\\[0mm]\n\t& \\stackrel{a\\;\\textup{def.}}{=} & (\\lambda x . a^{\\varepsilon_x,\\delta}_q)^\\dagger(a_q^{\\nu, \\Delta}),\n\\end{array}\n\\]\nwhere, finally, $(**) \\; a^{\\nu,\\varepsilon}_{q^* \\circ b_q^\\delta} = a_q^{\\nu, \\Delta}$ is shown as\n\\[\n\\begin{array}{lcl}\na^{\\nu,\\varepsilon}_{q^* \\circ b_q^\\delta} \n\t& \\stackrel{a\\;\\textup{def.}}{=} & \\nu((q^* \\circ b_q^\\delta)^* \\circ b_{q^* \\circ b_q^\\delta}^\\varepsilon) \\\\[0mm]\n\t& \\stackrel{\\textup{D}\\ref{algebra-def}(ii)}{=} & \\nu(q^* \\circ (b_q^\\delta)^\\dagger \\circ b_{q^* \\circ b_q^\\delta}^\\varepsilon) \\\\[0mm]\n\t& = & \\nu(\\lambda x . q^*((b_q^\\delta)^\\dagger(b_{q^* \\circ b_q^\\delta}^\\varepsilon(x)))) \\\\[0mm]\n\t& \\stackrel{b\\;\\textup{def.}}{=} & \\nu(\\lambda x . q^*((b^\\delta_q)^\\dagger (\\varepsilon_x(q^* \\circ b_q^\\delta)))) \\\\[0mm]\n\t& \\stackrel{a\\;\\textup{def.}}{=} & \\nu(\\lambda x . q^*((b^\\delta_q)^\\dagger (a^{\\varepsilon_x,\\delta}_q))) \\\\[0mm]\n\t& \\stackrel{\\Delta\\;\\textup{def.}}{=} & \\nu(\\lambda x . q^*(\\Delta_x(q))) \\\\[0mm]\n\t& \\stackrel{b\\;\\textup{def.}}{=} & \\nu(\\lambda x . q^*(b_q^\\Delta(x))) \\\\[0mm]\n\t& \\stackrel{a\\;\\textup{def.}}{=} & a_q^{\\nu, \\Delta}.\n\\end{array}\n\\]\n$\\hfill \\Box$\n\nIt follows that the product operation of the monad $\\JT{T}{R}{}$ can be explicitly described in terms of the product operation on $T$ as:\n\\begin{equation} \\label{j-t-monad-eq}\n(\\varepsilon \\otimes^{\\JT{T}{R}{}} \\delta)(q) = a \\otimes^T f \n\\end{equation}\nwhere $q \\colon X \\times Y \\to R$, $\\varepsilon \\colon (X \\to R) \\to T X$ and $\\delta \\colon X \\to (Y \\to R) \\to T Y$, and\n\\eqleft{\n\\begin{array}{lcl}\n\tf(x) & \\stackrel{TY}{=} & \\delta_x(q_x) \\\\[1mm]\n\ta & \\stackrel{TX}{=} & \\varepsilon(\\lambda x^X. (q_x)^*(f x)).\n\\end{array}\n}\n\nNote that $\\otimes$ on the right side of (\\ref{j-t-monad-eq}) denotes the product on the strong monad $T$ whereas $\\otimes$ on the left denotes the product of the strong monad $\\JT{T}{R}{}$. We will in general use the same notation $\\otimes$ for the product of any strong monad, as it will hopefully be clear from the context which monad we are referring to.\n\n\\begin{definition}[from $\\JT{T}{R}{}$ to $\\K{R}{}$] \\label{bar-def} Let $\\K{R}{X} = (X \\to R) \\to R$. Given a $T$-selection function $\\varepsilon \\colon \\JT{T}{R}{X}$ we can construct a quantifier $\\overline{\\varepsilon} \\colon \\K{R}{X}$ as\n\\[ \\overline{\\varepsilon}(p^{X \\to R}) \\;\\stackrel{R}{=}\\; p^*(\\varepsilon p). \\]\n\\end{definition}\n\nIt can be shown that the construction $\\varepsilon \\mapsto \\overline{\\varepsilon}$ is actually a monad morphism, from which the next lemma follows. Nevertheless, we shall prove the lemma directly. A particular instance of this lemma, when $T$ is the identity monad, was first proven in \\cite{EO(2009)}. 
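For instance, taking $T$ to be the list monad (finite lists standing in for finite sets) and $R$ a type of lists, the construction of Definition \\ref{bar-def} can be sketched in Haskell as follows (an illustration of ours, with hypothetical names; it is not part of the formal development):\n\\begin{verbatim}\ntype JT t r x = (x -> r) -> t x   -- T-selection functions\ntype K r x = (x -> r) -> r        -- quantifiers\n\n-- R = [r] is a T-algebra for T = [] with (.)* = concatMap\nstar :: (x -> [r]) -> ([x] -> [r])\nstar = concatMap\n\n-- the embedding of T-selection functions into quantifiers,\n-- i.e. epsBar eps p = p*(eps p)\nepsBar :: JT [] [r] x -> K [r] x\nepsBar eps p = star p (eps p)\n\\end{verbatim}\n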
It is important here that $R$ is a $T$-algebra.\n\n\\begin{lemma} \\label{bar-binary} Given $\\varepsilon \\colon \\JT{T}{R}{X}$ and $\\delta \\colon X \\to \\JT{T}{R}{Y}$ then\n\\[ \\overline{(\\varepsilon \\otimes^{\\JT{T}{R}{}} \\delta)} = \\overline{\\varepsilon} \\otimes^{\\K{R}{}} (\\lambda x . \\overline{\\delta_x}). \\]\n\\end{lemma}\n{\\bf Proof}. Define $f(x) = \\delta_x(q_x)$ and $p(x) = (q_x)^*(f x)$ and $a = \\varepsilon(p)$. We calculate as follows:\n\\eqleft{\n\\begin{array}{lcl}\n\\overline{(\\varepsilon \\otimes^{\\JT{T}{R}{}} \\delta)}(q)\n\t& \\stackrel{\\textup{D}\\ref{bar-def}}{=} & q^*((\\varepsilon \\otimes^{\\JT{T}{R}{}} \\delta)(q)) \\\\[0mm]\n\t& \\stackrel{(\\ref{j-t-monad-eq})}{=} & q^*(a \\otimes^T f) \\\\[1mm]\n\t& \\stackrel{\\textup{L}\\ref{lemma-strong}}{=} & (\\lambda x . (q_x)^*(f x))^*(a) \\\\[1mm]\n\t& \\stackrel{\\textup{Def($a$)}}{=} & (\\lambda x . (q_x)^*(f x))^*(\\varepsilon(p)) \\\\[1mm]\n\t& \\stackrel{\\textup{Def($p$)}}{=} & p^*(\\varepsilon(p)) \\\\[1mm]\n\t& \\stackrel{\\textup{D}\\ref{bar-def}}{=} & \\overline{\\varepsilon}(p) \\\\[1mm]\n\t& \\stackrel{\\textup{Def($p, f$)}}{=} & \\overline{\\varepsilon}(\\lambda x . (q_x)^*(\\delta_x(q_x))) \\\\[1mm]\n\t& \\stackrel{\\textup{D}\\ref{bar-def}}{=} & \\overline{\\varepsilon}(\\lambda x . \\overline{\\delta_x}(q_x)) \\\\[2mm]\n\t& = & (\\overline{\\varepsilon} \\otimes^{\\K{R}{}} (\\lambda x . \\overline{\\delta_x}))(q).\n\\end{array}\n}\nThe last equality in the chain above uses the definition of the product $\\otimes$ for the strong monad $\\K{R}{X}$. \n$\\hfill \\Box$\n\n\\section{Iterated Products and Bar Recursion}\n\nGiven any strong monad $M$ we can iterate its product operation $M X \\times (X \\to M Y) \\to M(X \\times Y)$ so as to obtain an operation\\footnote{A simpler instance of this operation without dependent types, namely $(M X)^\\mathbb{N} \\to M(X^\\mathbb{N})$, is actually a built-in function in standard implementations of the Haskell programming language called \\textsf{sequence :: Monad m =$>$ [m a] -$>$ m [a]}.} on infinite sequences $(X^* \\to M(X))^\\mathbb{N} \\to M(X^\\mathbb{N})$. Although this will not be a total operation in general, it is surprising that, as shown in \\cite{EO(2009)}, it defines a total operation when $M$ is the selection monad $M X = \\J{R}{X}$ and $R$ is a discrete type. \n\nIt is also possible to iterate the binary product of $M$ in a controlled way, by using an explicit termination function $\\omega \\colon X^\\mathbb{N} \\to \\mathbb{N}$ as\n\\[\n\\IEP{M}^\\omega_s(\\alpha) = \n\\left\\{\n\\begin{array}{ll}\n\\eta^M(\\langle \\, \\rangle) & {\\rm if} \\; \\omega(s^+) < |s| \\\\[2mm]\n\\alpha_s \\otimes^M (\\lambda x . \\IEP{M}^\\omega_{s * x}(\\alpha)) & {\\rm otherwise}\n\\end{array}\n\\right.\n\\]\nwhere $\\IEP{M}^\\omega_s$ is of type $(X^* \\to M(X)) \\to M(X^*)$. We use the acronym $\\IEP{M}$ for the ``explicitly controlled iterated product of the strong monad $M$''.\n\nThe explicitly controlled product of selection functions ${\\sf EPS}$ or quantifiers ${\\sf EPQ}$ (cf. \\cite{EO(2015A)}) are particular cases when $M X = \\J{R}{X}$ and $M X = \\K{R}{X}$, this time for an arbitrary $R$, i.e. ${\\sf EPS} = \\IEP{\\J{R}{}}$ and ${\\sf EPQ} = \\IEP{\\K{R}{}}$. In turn, these are primitively recursively equivalent to restricted Spector bar recursion and the general Spector bar recursion, respectively \\cite{EO(2015A)}.\n\nIn this section we consider another instance where $M X = \\JT{T}{R}{X}$, with $T$ being a strong monad, i.e. $\\IEP{\\JT{T}{R}{}}$ which we shall call $T\\textup{-}{\\sf EPS}$.\n\n\\begin{definition}[Iterated $\\JT{T}{R}{}$ product] \\label{fin-set-br} Let $\\varepsilon_s \\colon \\JT{T}{R}{X}$ and $s \\colon X^*$ and $\\omega \\colon X^\\mathbb{N} \\to \\mathbb{N}$. \nWe define $T\\textup{-}{\\sf EPS}^\\omega_s(\\varepsilon) \\colon \\JT{T}{R}{X^*}$ as $T\\textup{-}{\\sf EPS}^\\omega_s = \\IEP{\\JT{T}{R}{}}^\\omega_s$.\n\\end{definition}\n\nUnfolding the definition of the binary product, as in Lemma \\ref{j-t-monad}, and noticing that $\\eta^{\\JT{T}{R}{}}(\\langle \\, \\rangle) = \\lambda q . \\eta^T(\\langle \\, \\rangle)$, the equation above can also be written as\n\\begin{equation} \\label{monad-br}\nT\\textup{-}{\\sf EPS}^\\omega_s(\\varepsilon)(q) = \n\\left\\{\n\\begin{array}{ll}\n\\eta^T(\\langle \\, \\rangle) & {\\rm if} \\; \\omega(s^+) < |s| \\\\[2mm]\na \\otimes^T f & {\\rm otherwise}\n\\end{array}\n\\right.\n\\end{equation}\nwhere $a = \\varepsilon_s(\\lambda x . (q_x)^*(f x))$ and $f(x) = T\\textup{-}{\\sf EPS}^\\omega_{s * x}(\\varepsilon)(q_x)$. \n\nRecall that ${\\sf EPQ}$ is the explicitly controlled iterated product of quantifiers, i.e. ${\\sf EPQ} = \\IEP{\\K{R}{}}$. ${\\sf EPQ}$ satisfies the equation\n\\[\n{\\sf EPQ}^\\omega_s(\\phi) = \n\\left\\{\n\\begin{array}{ll}\n\\lambda q . q(\\langle \\, \\rangle) & {\\rm if} \\; \\omega(s^+) < |s| \\\\[2mm]\n\\phi_s \\otimes^{\\K{R}{}} \\lambda x . {\\sf EPQ}^\\omega_{s * x}(\\phi) & {\\rm otherwise}.\n\\end{array}\n\\right.\n\\]\nAgain, the definition of the binary product of quantifiers can be unfolded, leading to the equivalent equation\n\\begin{equation} \\label{epq-eq}\n{\\sf EPQ}^\\omega_s(\\phi)(q) = \n\\left\\{\n\\begin{array}{ll}\nq(\\langle \\, \\rangle) & {\\rm if} \\; \\omega(s^+) < |s| \\\\[2mm]\n\\phi_s(\\lambda x. {\\sf EPQ}^\\omega_{s * x}(\\phi)(q_x)) & {\\rm otherwise}\n\\end{array}\n\\right.\n\\end{equation}\nAs shown in \\cite{EO(2010A)}, ${\\sf EPQ}$ is equivalent over system $\\systemT$ to Spector's bar recursion. The following lemma follows by a simple iteration of Lemma \\ref{bar-binary}. \n\n\\begin{lemma} \\label{bar-full} $\\overline{T\\textup{-}{\\sf EPS}^\\omega_{\\langle \\, \\rangle}(\\varepsilon)} = {\\sf EPQ}^\\omega_{\\langle \\, \\rangle}(\\overline{\\varepsilon})$.\n\\end{lemma}\n{\\bf Proof}. The proof goes by bar induction on $s$ with the bar $\\omega(s^+) < |s|$. In case we have reached the bar, i.e. $\\omega(s^+) < |s|$, we have\n\\eqleft{\n\\begin{array}{lcl}\n{\\sf EPQ}^\\omega_s(\\overline{\\varepsilon})(q) \n\t& = & q(\\langle \\, \\rangle) \\\\[1mm]\n\t& \\stackrel{\\textup{D}\\ref{algebra-def}(i)}{=} & q^*(\\eta^T(\\langle \\, \\rangle)) \\\\[1mm]\n\t& \\stackrel{\\textup{D}\\ref{fin-set-br}}{=} & q^*(T\\textup{-}{\\sf EPS}^\\omega_s(\\varepsilon)(q)) \\\\[1mm]\n\t& \\stackrel{\\textup{D}\\ref{bar-def}}{=} & \\overline{T\\textup{-}{\\sf EPS}^\\omega_s(\\varepsilon)}(q).\n\\end{array}\n}\nBy the bar inductive assumption we have that $\\overline{T\\textup{-}{\\sf EPS}^\\omega_{s * x}(\\varepsilon)} = {\\sf EPQ}^\\omega_{s * x}(\\overline{\\varepsilon})$, for all $x$, and hence\n\\eqleft{\n\\begin{array}{lcl}\n{\\sf EPQ}^\\omega_s(\\overline{\\varepsilon})(q) \n\t& = & (\\overline{\\varepsilon} \\otimes^{\\K{R}{}} (\\lambda x . {\\sf EPQ}^\\omega_{s * x}(\\overline{\\varepsilon})))(q) \\\\[2mm]\n\t& \\stackrel{\\textup{(IH)}}{=} & (\\overline{\\varepsilon} \\otimes^{\\K{R}{}} (\\lambda x . \\overline{T\\textup{-}{\\sf EPS}^\\omega_{s * x}(\\varepsilon)}))(q) \\\\[2mm]\n\t& \\stackrel{\\textup{L}\\ref{bar-binary}}{=} & (\\overline{\\varepsilon \\otimes^{\\JT{T}{R}{}} (\\lambda x . T\\textup{-}{\\sf EPS}^\\omega_{s * x}(\\varepsilon))})(q) \\\\[2mm]\n\t& = & \\overline{T\\textup{-}{\\sf EPS}^\\omega_s(\\varepsilon)}(q), \\\\[2mm]\n\\end{array}\n}\nsince we can assume $\\omega(s^+) \\geq |s|$.\n$\\hfill \\Box$\n\nIt is well known that the product of selection functions of type $(X, R)$ can be simulated by a product where $R$ is restricted to $R = X^\\mathbb{N}$ and $q \\colon X^\\mathbb{N} \\to R$ is the identity function. In fact, one can think of Spector's restricted form of bar recursion \\cite{Spector(62)} as the iterated product of these restricted selection functions. In terms of games, it corresponds to taking the outcome of the game to be the sequence of moves played. The actual outcome of the game can be reconstructed from this sequence via the outcome function. The next lemma shows that this simulation of an arbitrary outcome type $R$ by taking the outcome to be the actual sequence of moves also works in this monadic setting.\n\n\\begin{lemma} \\label{t-spector} $T\\textup{-}{\\sf EPS}$ of type $(X, R)$ is definable from $T\\textup{-}{\\sf EPS}$ of type $(X, T X^\\mathbb{N})$.\n\\end{lemma}\n{\\bf Proof}. Let ${\\sf add}_s \\colon X^\\mathbb{N} \\to X^\\mathbb{N}$ and ${\\sf drop}_n \\colon X^\\mathbb{N} \\to X^\\mathbb{N}$ be the functions that, respectively, prepend the finite sequence $s$ to the beginning of an infinite list and drop the first $n$ elements of an infinite list. Clearly, ${\\sf drop}_{|s|} \\circ {\\sf add}_s$ is the identity, and hence, by functoriality, $T({\\sf drop}_{|s|}) \\circ T({\\sf add}_s)$ is the identity on $T(X^\\mathbb{N})$. Given $q \\colon X^\\mathbb{N} \\to R$ and $\\varepsilon_s \\colon \\JT{T}{R}{X}$ we define $\\varepsilon^q_s \\colon \\JT{T}{T X^\\mathbb{N}}{X}$ as\n\\[ \\varepsilon^q_s(p^{X \\to T X^\\mathbb{N}}) \\stackrel{T X}{=} \\varepsilon_s(\\lambda x . ((q_{s * x})^* \\circ T({\\sf drop}_{|s*x|}))(p x)). \\]\nNote that $T X^\\mathbb{N}$ is also a $T$-algebra with the map\n\\[ (\\cdot)^* \\colon (Y \\to T X^\\mathbb{N}) \\to (T Y \\to T X^\\mathbb{N}) \\]\nbeing simply the $(\\cdot)^\\dagger$ of the monad $T$. We claim that \\[ T\\textup{-}{\\sf EPS}^\\omega_{\\langle \\, \\rangle}(\\varepsilon)(q) = T\\textup{-}{\\sf EPS}^\\omega_{\\langle \\, \\rangle}(\\varepsilon^q)(\\eta^T). \\] Define\n\\eqleft{P(s) \\equiv T\\textup{-}{\\sf EPS}^\\omega_s(\\varepsilon)(q_s) = T\\textup{-}{\\sf EPS}^\\omega_s(\\varepsilon^q)(\\eta^T \\circ {\\sf add}_s)}\nand let us show $P(\\langle \\, \\rangle)$ by bar induction. Recall that $T({\\sf add}_s) = (\\eta^T \\circ {\\sf add}_s)^\\dagger$ by definition. In the base case, assuming $\\omega(s^+) < |s|$, we have\n\\eqleft{T\\textup{-}{\\sf EPS}^\\omega_s(\\varepsilon)(q_s) = \\eta^T(\\langle \\, \\rangle) = T\\textup{-}{\\sf EPS}^\\omega_s(\\varepsilon^q)(\\eta^T \\circ {\\sf add}_s).}\nFor the bar inductive step we assume $P(s * x)$ holds for all $x$ and must prove $P(s)$. We can also assume that $\\omega(s^+) \\geq |s|$. Let \n\\eqleft{\n\\begin{array}{rcl}\n\tf(x) & = & T\\textup{-}{\\sf EPS}^\\omega_{s * x}(\\varepsilon)(q_{s * x}) \\\\[2mm]\n\ta & = & \\varepsilon_s(\\lambda x . (q_{s * x})^*(f x)) \\\\[2mm]\n\t\\tilde{f}(x) & = & T\\textup{-}{\\sf EPS}^\\omega_{s * x}(\\varepsilon^q)(\\eta^T \\circ {\\sf add}_{s * x}) \\\\[2mm]\n\t\\tilde{a} & = & \\varepsilon_s^q(\\lambda x . T({\\sf add}_{s * x})(\\tilde{f} x)).\n\\end{array}\n}\nBy the bar inductive hypothesis we have $f = \\tilde{f}$ and hence\n\\eqleft{\n\\begin{array}{lcl}\n\\tilde{a}\n\t& = & \\varepsilon_s^q(\\lambda x . T({\\sf add}_{s * x})(\\tilde{f} x)) \\\\[1mm]\n\t& \\stackrel{\\textup{(IH)}}{=} & \\varepsilon_s^q(\\lambda x . T({\\sf add}_{s * x})(f x)) \\\\[1mm]\n\t& \\stackrel{(\\varepsilon^q\\,\\textup{def})}{=} & \\varepsilon_s(\\lambda x . ((q_{s * x})^* \\circ T({\\sf drop}_{|s * x|}))(T({\\sf add}_{s * x})(f x)))) \\\\[2mm]\n\t& = & \\varepsilon_s(\\lambda x . (q_{s*x})^*(fx)) \\\\[2mm]\n\t& = & a.\n\\end{array}\n}\nTherefore\n\\eqleft{\n\\begin{array}{lcl}\nT\\textup{-}{\\sf EPS}^\\omega_s(\\varepsilon)(q_s)\n\t& = & a \\otimes^T f \\\\[2mm]\n\t& = & \\tilde{a} \\otimes^T \\tilde{f} \\\\[2mm]\n\t& = & T\\textup{-}{\\sf EPS}^\\omega_s(\\varepsilon^q)(\\eta^T \\circ {\\sf add}_{s}).\n\\end{array}\n}\nIn the last step we have used that $(\\eta^T \\circ {\\sf add}_{s * x})^* = (\\eta^T \\circ {\\sf add}_{s * x})^\\dagger = T({\\sf add}_{s * x})$.\n$\\hfill \\Box$\n\nThe main result in this section is that Spector's original bar recursion already defines the explicitly controlled product of $T$-selection functions $T\\textup{-}{\\sf EPS}$. Spector proves this in \\cite{Spector(62)} for the case when $T$ is the identity monad. The following theorem shows that this in fact holds for any strong monad $T$. \n\n\\begin{theorem} \\label{thm-teps-from-epq} $T\\textup{-}{\\sf EPS}$ is definable from ${\\sf EPQ}$.\n\\end{theorem}\n{\\bf Proof}. We claim that $T\\textup{-}{\\sf EPS}^\\omega_{\\langle \\, \\rangle}(\\varepsilon)(q)$ can be defined as ${\\sf EPQ}^\\omega_{\\langle \\, \\rangle}(\\overline{\\varepsilon^q})(\\eta)$, where $\\varepsilon^q$ is as in the proof of the previous lemma. Indeed we have:\n\\[\n\\begin{array}{lcl}\n{\\sf EPQ}^\\omega_{\\langle \\, \\rangle}(\\overline{\\varepsilon^q})(\\eta)\n\t& \\stackrel{\\textup{L}\\ref{bar-full}}{=} & \\overline{T\\textup{-}{\\sf EPS}^\\omega_{\\langle \\, \\rangle}(\\varepsilon^q)}(\\eta) \\\\[1mm]\n\t& \\stackrel{\\textup{D}\\ref{bar-def}}{=} & \\eta^*(T\\textup{-}{\\sf EPS}^\\omega_{\\langle \\, \\rangle}(\\varepsilon^q)(\\eta)) \\\\[1mm]\n\t& \\stackrel{\\textup{L}\\ref{t-spector}}{=} & \\eta^*(T\\textup{-}{\\sf EPS}^\\omega_{\\langle \\, \\rangle}(\\varepsilon)(q)) \\\\[2mm]\n\t& = & \\eta^\\dagger(T\\textup{-}{\\sf EPS}^\\omega_{\\langle \\, \\rangle}(\\varepsilon)(q)) \\\\[1mm]\n\t& \\stackrel{\\textup{D}\\ref{monad-laws}(i)}{=} & T\\textup{-}{\\sf EPS}^\\omega_{\\langle \\, \\rangle}(\\varepsilon)(q).\n\\end{array}\n\\]\nWe used that the map $(\\cdot)^*$ for the algebra $T X^\\mathbb{N}$ is just the $(\\cdot)^\\dagger$ map for the monad $T$, as discussed in the proof of Lemma \\ref{t-spector}.\n$\\hfill \\Box$\n\n\\section{Finite Power Sets}\n\\label{sec-lemmas}\n\nFor the rest of the paper we will make essential use of the definitional extension of G\\\"odel's system $\\systemT$ with the finite power-set type $\\power{X}$. To simplify the exposition, let us also abbreviate $\\power{X \\to Y}$ as $X \\Rightarrow Y$, i.e. the type of finite sets of functions from $X$ to $Y$. We can think of the elements $f \\colon X \\Rightarrow \\power{Y}$ as functions by defining the following \\emph{set-application} \n\\[ \\monApp{f}{x^X} \\stackrel{\\power{Y}}{=} \\bigcup_{g \\in f} g x. \\]\nHence, if $f \\colon X \\Rightarrow \\power{Y}$ then $\\monApp{f}{\\cdot} \\colon X \\to \\power{Y}$. 
In particular, if $f \\colon (X \\Rightarrow (Y \\Rightarrow \\power{Z}))$ then $\\monApp{\\monApp{f}{x}}{y}$ stands for\n\\[ \\bigcup_{g \\in f} \\bigcup_{h \\in g x} h y \\]\nwhich we will abbreviate as $\\monAppD{f}{x}{y}$.\n\n\\begin{lemma} The finite power set type operator $\\power{\\cdot}$ is a strong monad with operations\n\\begin{itemize}\n\t\\item $\\eta(x) = \\{ x \\}$ \\\\[-2mm]\n\t\\item $f^\\dagger(S) = \\bigcup \\{ f(x) \\; \\colon \\; x \\in S \\}$, for $f \\colon X \\to \\power{Y}$.\n\\end{itemize}\nMoreover, its binary product\n\\[ \\otimes \\colon \\power{X} \\times (X \\to \\power{Y}) \\to \\power{X \\times Y} \\]\ncan be explicitly described as\n\\[ S \\otimes f = \\{ \\langle a, b \\rangle \\; \\colon \\; a \\in S \\wedge b \\in f(a) \\}. \\]\n\\end{lemma}\n\nFor the rest of the paper we shall assume that $R = \\power{R'}$, for some $R'$, so that $R$ is an algebra for $\\power{\\cdot}$ with $(\\cdot)^* = (\\cdot)^\\dagger$. We will also use $\\bigcup \\colon \\power{R} \\to R$, the usual union operation which satisfies $S_i \\subseteq \\bigcup \\{ S_i \\; \\colon \\; i \\in I \\}$ (we use this in Lemma \\ref{lemma-main-eq}). \n\n\\begin{definition}[Herbrand bar recursion] Let us write $\\hBR$ for the instance of $T\\textup{-}{\\sf EPS}$ where $T = \\power{\\cdot}$, i.e.\n\\[\n\\hBR^\\omega_s(\\varepsilon)(q) = \n\\left\\{\n\\begin{array}{ll}\n\\{\\langle \\, \\rangle\\} & {\\rm if} \\; \\omega(s^+) < |s| \\\\[2mm]\n\\{ a * r \\; \\colon \\; a \\in \\chi \\wedge r \\in \\hBR^\\omega_{s * a}(\\varepsilon)(q_a) \\} & {\\rm otherwise}\n\\end{array}\n\\right.\n\\]\nwhere $\\chi = \\varepsilon_s(\\lambda x . \\bigcup \\{ q_x(r) \\; \\colon \\; r \\in \\hBR^\\omega_{s * x}(\\varepsilon)(q_x) \\})$.\n\\end{definition}\n\nBy Theorem \\ref{thm-teps-from-epq} $\\hBR$ is definable in system $\\systemT$ from Spector's general form of bar recursion \\cite{Oliva(2012A)}. We now prove four lemmas about $\\hBR$, to be used in the interpretation of $\\DNS$ in the following section. For this section we will assume that $\\varepsilon$ and $\\omega$ are fixed functionals and hence, for the sake of readability, we shall omit these as parameters in $\\hBR_s^\\omega(\\varepsilon)(q)$.\n\n\\begin{lemma} \\label{lemma-basic} Let $t = \\hBR_{\\langle \\, \\rangle}(q)$ and $s \\in t$. For all $i \\leq |s|$ we have\n\\eqleft{s \\in \\{ \\langle s_0, \\ldots, s_{i-1} \\rangle * r \\; \\colon \\; r \\in \\hBR_{\\langle s_0, \\ldots, s_{i-1} \\rangle}(q_{\\langle s_0, \\ldots, s_{i-1} \\rangle}) \\}.}\nThe types are $t \\colon \\power{X^*}$ and $s \\colon X^*$.\n\\end{lemma}\n{\\bf Proof}. By induction on $i$. If $i = 0$ then $\\langle s_0, \\ldots, s_{i-1} \\rangle$ is the empty sequence and the result follows by the assumption that $s \\in t$. For the induction step assume that $i < |s|$ and that\n\\eqleft{s \\in \\{ \\langle s_0, \\ldots, s_{i-1} \\rangle * r \\; \\colon \\; r \\in \\hBR_{\\langle s_0, \\ldots, s_{i-1} \\rangle}(q_{\\langle s_0, \\ldots, s_{i-1} \\rangle}) \\}.}\nSince $i < |s|$ there must exist some $r \\in \\hBR_{\\langle s_0, \\ldots, s_{i-1} \\rangle}(q_{\\langle s_0, \\ldots, s_{i-1} \\rangle})$ of the form $s_i * r'$ so that\n\\begin{itemize}\n\t\\item[($i$)] $s = \\langle s_0, \\ldots, s_{i-1}, s_i \\rangle * r'$, and \\\\[-2mm]\n\t\\item[($ii$)] $s_i * r' \\in \\hBR_{\\langle s_0, \\ldots, s_{i-1} \\rangle}(q_{\\langle s_0, \\ldots, s_{i-1} \\rangle})$.\n\\end{itemize}\nIn particular, we cannot have $\\hBR_{\\langle s_0, \\ldots, s_{i-1} \\rangle}(q_{\\langle s_0, \\ldots, s_{i-1} \\rangle}) = \\{ \\langle \\, \\rangle \\}$, so it must be the case that $(*) \\; \\omega(\\langle s_0, \\ldots, s_{i-1} \\rangle^+) \\geq |\\langle s_0, \\ldots, s_{i-1} \\rangle |$. Hence\n\\[ \\hBR_{\\langle s_0, \\ldots, s_{i-1} \\rangle}(q_{\\langle s_0, \\ldots, s_{i-1} \\rangle}) = \\{ a * r \\; \\colon \\; a \\in \\chi \\wedge r \\in \\hBR_{\\langle s_0, \\ldots, s_{i-1}, a \\rangle}(q_{\\langle s_0, \\ldots, s_{i-1},a \\rangle}) \\} \\]\nwhere\n\\[ \\chi = \\varepsilon_{\\langle s_0, \\ldots, s_{i-1} \\rangle}( \\lambda y^{X} . \\bigcup \\{ q_{\\langle s_0, \\ldots, s_{i-1}, y \\rangle}(r) \\; \\colon \\; r \\in \\hBR_{\\langle s_0, \\ldots, s_{i-1}, y \\rangle}(q_{\\langle s_0, \\ldots, s_{i-1}, y \\rangle}) \\} ). \\]\nFrom ($ii$) it follows that $s_i \\in \\chi$ and\n\\begin{itemize}\n\t\\item[($iii$)] $r' \\in \\hBR_{\\langle s_0, \\ldots, s_{i-1}, s_i \\rangle}(q_{\\langle s_0, \\ldots, s_{i-1}, s_i \\rangle})$.\n\\end{itemize}\nFinally, from ($i$) and ($iii$) we have\n\\eqleft{s \\in \\{ \\langle s_0, \\ldots, s_{i-1}, s_i \\rangle * r' \\; \\colon \\; r' \\in \\hBR_{\\langle s_0, \\ldots, s_{i-1}, s_i \\rangle}(q_{\\langle s_0, \\ldots, s_{i-1}, s_i \\rangle}) \\}}\nwhich concludes the proof.\n$\\hfill \\Box$\n\n\nFor the following three lemmas let $t = \\hBR_{\\langle \\, \\rangle}(q)$, and assume $a_0, a_1, \\ldots$ is a sequence satisfying, for all $i$,\n\\eqleft{\na_i \\in \\varepsilon_{\\langle a_0, \\ldots, a_{i-1} \\rangle}(p_i)\n}\nwhere $p_i$ is defined as\n\\eqleft{\np_i(y) = \\bigcup \\{ q_{\\langle a_0, \\ldots, a_{i-1}, y \\rangle}(r) \\; \\colon \\; r \\in \\hBR_{\\langle a_0, \\ldots, a_{i-1}, y \\rangle}(q_{\\langle a_0, \\ldots, a_{i-1}, y \\rangle}) \\}.\n}\n\n\\begin{lemma} \\label{lemma-pre-eq-basic} If $\\omega(\\langle a_0, \\ldots, a_{i-1} \\rangle^+) \\geq i$, for all $i \\leq n$, then\n\\eqleft{\\langle a_0, \\ldots, a_{n-1} \\rangle * x * r \\in t,}\nfor all $x \\in \\varepsilon_{\\langle a_0, \\ldots, a_{n-1} \\rangle}(p_n)$ and $r \\in \\hBR_{\\langle a_0, \\ldots, a_{n-1}, x \\rangle}(q_{\\langle a_0, \\ldots, a_{n-1}, x \\rangle})$.\n\\end{lemma}\n{\\bf Proof}. We prove the lemma by induction on $n$. \\\\[1mm]\nFor $n = 0$ the assumption of the lemma always holds, while the conclusion follows by the definition of $\\hBR$ \n\\eqleft{t = \\hBR_{\\langle \\, \\rangle}(q) = \\{ a * r \\; \\colon \\; a \\in \\varepsilon_{\\langle \\, \\rangle}(p_0) \\wedge r \\in \\hBR_{a}(q_a) \\}}\nsince $\\omega(\\langle \\, \\rangle^+) \\geq 0$. For the induction step, assume that $\\omega(\\langle a_0, \\ldots, a_{i-1} \\rangle^+) \\geq i$, for all $i \\leq n+1$. In particular this holds for $i \\leq n$. 
Hence, by induction hypothesis we have\n\\begin{itemize}\n\t\\item[] $\\langle a_0, \\ldots, a_{n-1} \\rangle * x * r \\in t$, \\\\[-3mm]\n\t\\item[] \\quad for all $x \\in \\varepsilon_{\\langle a_0, \\ldots, a_{n-1} \\rangle}(p_n)$ and $r \\in \\hBR_{\\langle a_0, \\ldots, a_{n-1}, x \\rangle}(q_{\\langle a_0, \\ldots, a_{n-1}, x \\rangle})$\n\\end{itemize}\nand, since $a_n \\in \\varepsilon_{\\langle a_0, \\ldots, a_{n-1} \\rangle}(p_n)$,\n\\begin{itemize}\n\t\\item[($i$)] $\\langle a_0, \\ldots, a_n \\rangle * r \\in t$, for all $r \\in \\hBR_{\\langle a_0, \\ldots, a_n \\rangle}(q_{\\langle a_0, \\ldots, a_n \\rangle})$.\n\\end{itemize}\nNow fix a $y \\in \\varepsilon_{\\langle a_0, \\ldots, a_n \\rangle}(p_{n+1})$ and an $r' \\in \\hBR_{\\langle a_0, \\ldots, a_n, y \\rangle}(q_{\\langle a_0, \\ldots, a_n, y \\rangle})$. In order to show that $\\langle a_0, \\ldots, a_n \\rangle * y * r' \\in t$, by ($i$) it is enough to show that $y * r' \\in \\hBR_{\\langle a_0, \\ldots, a_n \\rangle}(q_{\\langle a_0, \\ldots, a_n \\rangle})$. But since $\\omega(\\langle a_0, \\ldots, a_n \\rangle^+) \\geq n+1$, this indeed follows by the definition of $\\hBR$, and the assumptions on $y$ and $r'$.\n$\\hfill \\Box$\n\n\n\\begin{lemma} \\label{lemma-main-eq-basic} Let $t$ and $a_i$'s be as above. Define $N = 1 + \\max \\{ |s| \\; \\colon \\; s \\in t \\}$. For some $n < N$ we have that\n\\begin{itemize}\n\t\\item[($a$)] $n$ is the least such that $\\omega(\\langle a_0, \\ldots, a_n \\rangle^+) < n + 1$, and \\\\[-2mm]\n\t\\item[($b$)] $\\langle a_0, \\ldots, a_n \\rangle \\in t$.\n\\end{itemize}\n\\end{lemma}\n{\\bf Proof}. Suppose that for all $n \\leq N$ we have $\\omega(\\langle a_0, \\ldots, a_{n-1} \\rangle^+) \\geq n$. By Lemma \\ref{lemma-pre-eq-basic} this would imply $\\langle a_0, \\ldots, a_{N-1} \\rangle * r \\in t$ for some non-empty finite sequence $r$, which is a contradiction by the definition of $N$. Therefore, let $n < N$ be the smallest such that $\\omega(\\langle a_0, \\ldots, a_n \\rangle^+) < n + 1$, so that for all $i \\leq n$ we have $\\omega(\\langle a_0, \\ldots, a_{i-1} \\rangle^+) \\geq i$. By Lemma \\ref{lemma-pre-eq-basic} again we have that $\\langle a_0, \\ldots, a_{n-1} \\rangle * a_n * r \\in t$ for all $r \\in \\hBR_{\\langle a_0, \\ldots, a_n \\rangle}(q_{\\langle a_0, \\ldots, a_n \\rangle})$. But since $\\omega(\\langle a_0, \\ldots, a_n \\rangle^+) < n + 1$ we have that $\\hBR_{\\langle a_0, \\ldots, a_n \\rangle}(q_{\\langle a_0, \\ldots, a_n \\rangle}) = \\{\\langle \\, \\rangle\\}$, implying $\\langle a_0, \\ldots, a_n \\rangle \\in t$.\n$\\hfill \\Box$\n\n\\begin{lemma} \\label{lemma-main-eq} Let $t, p_i, a_i$ be as above, and $n < N$ as in Lemma \\ref{lemma-main-eq-basic}. Let also $s = \\langle a_0, \\ldots, a_n \\rangle$. Then for all $i \\leq n$ \n\\begin{equation}\n\\begin{array}{lcl}\n\tq(s) & \\subseteq & p_i(a_i).\n\\end{array}\n\\end{equation}\n\\end{lemma}\n{\\bf Proof}. By Lemma \\ref{lemma-main-eq-basic} we have that $s \\in t$. 
Hence, by Lemma \\ref{lemma-basic}, for $i \\leq n$\n\\eqleft{s \\in \\{ \\langle a_0, \\ldots, a_{i-1}, a_i \\rangle * r \\; \\colon \\; r \\in \\hBR_{\\langle a_0, \\ldots, a_{i-1}, a_i \\rangle}(q_{\\langle a_0, \\ldots, a_{i-1}, a_i \\rangle}) \\}.}\nIt follows that\n\\eqleft{q(s) \\in \\{ q(\\langle a_0, \\ldots, a_{i-1}, a_i \\rangle * r) \\; \\colon \\; r \\in \\hBR_{\\langle a_0, \\ldots, a_{i-1}, a_i \\rangle}(q_{\\langle a_0, \\ldots, a_{i-1}, a_i \\rangle}) \\}.}\nHence\n\\eqleft{\n\\begin{array}{lcl}\nq(s)\n\t& \\subseteq & \\bigcup \\{ q_{\\langle a_0, \\ldots, a_{i-1}, a_i \\rangle}(r) \\; \\colon \\; r \\in \\hBR_{\\langle a_0, \\ldots, a_{i-1}, a_i \\rangle}(q_{\\langle a_0, \\ldots, a_{i-1}, a_i \\rangle}) \\} \\\\[2mm]\n\t& = & p_i(a_i)\n\\end{array}\n}\nwhich concludes the proof.\n$\\hfill \\Box$\n\n\\section{Application: Herbrand Interpretation of $\\DNS$}\n\\label{sec-herbrand}\n\nIn this final section we show how the product of $T$-selection functions, with $T$ being the finite power-set monad, witnesses the Herbrand functional interpretation of the double negation shift\n\\[ \\DNS \\quad \\colon \\quad \\forallSt n^\\mathbb{N} \\neg \\neg A(n) \\to \\neg \\neg \\forallSt n^\\mathbb{N} A(n) \\]\nwhere $\\forallSt x A(x)$ is the quantification over standard objects from \\cite{BBS(2012)}. Let us first briefly recall here the definition of the Herbrand functional interpretation from \\cite{BBS(2012)}. We shall only present the $\\{\\to, \\forallSt, \\bot\\}$-fragment as this is enough to carry out the interpretation of $\\DNS$. Negation $\\neg A$ is defined as $A \\to \\bot$. Although we will present an explicit definition for the witnesses of $\\DNS$, for simplicity, we will carry out the \\emph{verification of correctness} in a classical setting, reading the weak existential $\\neg \\forall x \\neg A$ as the strong one $\\exists x A$.\n\n\\begin{definition}[\\cite{BBS(2012)}] The Herbrand functional interpretation of a formula $A$ is defined by structural induction. Assume\\footnote{Here $a, b, c$ and $d$ are potentially tuples of variables, though for simplicity we will treat them as if they were single variables.} $\\herb{A} = \\existsSt a^{X} \\forallSt b^Y A_H(a, b)$ and $\\herb{B} = \\existsSt c^V \\forallSt d^W B_H(c, d)$. The only relevant cases for the interpretation of $\\DNS$ are:\n\\[\n\\begin{array}{lcl}\n\t\\herb{\\bot} & \\equiv & \\bot \\\\[2mm]\n\t\\herb{A \\to B} & \\equiv & \\existsSt f, g \\forallSt a^{X}, d^W (\\forall b \\!\\in\\! \\monAppD{g}{a}{d} \\, A_H(a, b) \\to B_H(\\monApp{f}{a}, d)) \\\\[2mm]\n\t\\herb{\\forallSt z^Z A} & \\equiv & \\existsSt h^{Z \\Rightarrow X} \\forallSt z, b A_H(\\monApp{h}{z}, b)\n\\end{array}\n\\]\nwhere in the clause for $A \\to B$ the types of $f$ and $g$ are\n\\[\n\\begin{array}{lcl}\nf & \\colon & X \\Rightarrow V \\\\[2mm]\ng & \\colon & X \\Rightarrow (W \\Rightarrow \\power{Y}).\n\\end{array}\n\\]\nFor all other cases, including the other base cases, see \\cite{BBS(2012)}.\n\\end{definition}\n\nLet us start by working out the Herbrand interpretation of negation $\\neg A$ and double-negation $\\neg \\neg A$. 
If $\\herb{A} = \\existsSt a^{X} \\forallSt b^R A_H(a, b)$ then\n\\eqleft{\\herb{\\neg A} \\equiv \\existsSt p^{X \\Rightarrow \\power{R}} \\forallSt a^{X} \\neg \\forall b \\in \\monApp{p}{a} A_H(a, b)}\nand hence\n\\eqleft{(\\neg\\neg A)^H \\equiv \\existsSt \\varepsilon \\forallSt p^{X \\Rightarrow \\power{R}} \\exists a^X \\!\\in \\monApp{\\varepsilon}{p} \\forall b \\in \\monApp{p}{a} A_H(a, b)}\nwhere $\\varepsilon \\colon (X \\Rightarrow \\power{R}) \\Rightarrow \\power{X}$. Assuming $A(n)$ has a Herbrand functional interpretation $\\existsSt a^X \\forallSt b^R A_H(n, a, b)$ then the interpretation of $\\forallSt n^\\mathbb{N} \\neg \\neg A(n)$ is\n\\begin{equation}\n\\existsSt \\delta \\forallSt n \\forallSt p^{X \\Rightarrow \\power{R}} \\exists a \\in \\monAppD{\\delta}{n}{p} \\forall b \\in \\monApp{p}{a} A_H(n, a, b).\n\\end{equation}\nThe interpretation of the conclusion of $\\DNS$, $\\neg \\neg \\forallSt n^\\mathbb{N} A(n)$, follows from\\footnote{Instead of producing a set of functions $\\beta \\colon \\mathbb{N} \\Rightarrow X$ we will actually produce a single function $\\beta \\colon \\mathbb{N} \\to X$. Note that $\\monApp{\\{p\\}}{a} = p(a)$.}\n\\begin{equation}\n\\existsSt \\alpha \\forallSt \\varphi, q \\exists \\beta \\in \\monAppD{\\alpha}{\\varphi}{q} \\forall n \\in \\monApp{\\varphi}{\\beta} \\forall b \\in \\monApp{q}{\\beta} A_H(n, \\beta(n), b)\n\\end{equation}\nwhere the types above are\n\\eqleft{\n\\begin{array}{lcl}\n\\delta \\colon \\mathbb{N} \\Rightarrow (X \\Rightarrow \\power{R}) \\Rightarrow \\power{X} & \\quad & p \\colon X \\Rightarrow \\power{R} \\\\[2mm]\nq \\colon (\\mathbb{N} \\to X) \\Rightarrow \\power{R} & & \\beta \\colon \\mathbb{N} \\to X \\\\[2mm]\n\\varphi \\colon (\\mathbb{N} \\to X) \\Rightarrow \\power{\\mathbb{N}} & & \\monAppD{\\alpha}{\\varphi}{q} \\colon \\power{\\mathbb{N} \\Rightarrow X}.\n\\end{array}\n}\nGiven $\\delta, \\varphi$ and $q$, we will calculate finite sets $\\alpha, N$ and $P$ and show that\n\\[\n\\begin{array}{l}\n\\forall n \\!\\in\\! N \\forall p \\!\\in\\! P \\exists a \\!\\in\\! \\monAppD{\\delta}{n}{p} \\forall b \\!\\in\\! \\monApp{p}{a} A_H(n, a, b) \\\\[2mm]\n\\quad \\quad \\to \\exists \\beta \\!\\in\\! \\alpha \\forall n \\!\\in\\! \\monApp{\\varphi}{\\beta} \\forall b \\!\\in\\! \\monApp{q}{\\beta} A_H(n, \\beta(n), b).\n\\end{array}\n\\]\nAlthough the Herbrand interpretation here would only actually ask us to produce finite sets of candidate ``constructions\" for $\\alpha, N$ and $P$, with a guarantee that one of them did the job, we show that in fact we can produce concrete finite sets $\\alpha, N$ and $P$. Given $\\delta, \\varphi$ and $q$ as above, let us define\n\\eqleft{ \n\\begin{array}{l}\n\\varepsilon_n \\colon (X \\to \\power{R}) \\to \\power{X} \\\\[2mm]\n\\hat{q} \\colon X^* \\to \\power{R} \\\\[2mm]\n\\omega \\colon (\\mathbb{N} \\to X) \\to \\mathbb{N}\n\\end{array}\n}\nas\n\\eqleft{\n\\begin{array}{lcl}\n\\varepsilon_n(p) & = & \\monAppD{\\delta}{n}{\\{p\\}} \\\\[2mm]\n\\hat{q}(s) & = & \\monApp{q}{s^+} \\\\[2mm]\n\\omega(\\beta) & = & \\max (\\monApp{\\varphi}{\\beta}).\n\\end{array}\n}\nWe will then apply $\\hBR$ to $\\varepsilon_n$, $\\hat{q}$ and $\\omega$.\n\n\\begin{theorem} \\label{thm-main} Define $t = \\hBR_{\\langle \\, \\rangle}^\\omega(\\varepsilon)(\\hat{q})$. 
We claim that\n\\eqleft{\n\\begin{array}{lcl}\n\t\\alpha & = & \\{ s^+ \\; \\colon \\; s \\in t \\} \\\\[2mm]\n\tP & = & \\{ p_r \\; \\colon \\; r \\preceq s \\wedge s \\in t \\} \\\\[2mm]\n\tN & = & 1 + \\max \\{ |s| \\; \\colon \\; s \\in t \\}\n\\end{array}\n}\nwhere $p_r(y) = \\bigcup \\{ \\hat{q}(r * y * r') \\; \\colon \\; r' \\in \\hBR(r * y) \\}$, witness the Herbrand interpretation of $\\DNS$, i.e.\n\\[\n\\begin{array}{l}\n\\forall n \\!\\leq\\! N \\forall p \\!\\in\\! P \\exists a \\!\\in\\! \\monAppD{\\delta}{n}{p} \\forall b \\!\\in\\! \\monApp{p}{a} A_H(n, a, b) \\\\[2mm]\n\\hspace{2cm} \\to \\exists \\beta \\!\\in\\! \\alpha \\forall i \\!\\in\\! \\monApp{\\varphi}{\\beta} \\forall b \\!\\in\\! \\monApp{q}{\\beta} A_H(i, \\beta(i), b)\n\\end{array}\n\\]\nviewing the number $N$ as the finite set $\\{0, 1, \\ldots, N\\}$.\n\\end{theorem}\n{\\bf Proof}. Assume\n\\begin{equation} \\label{assumption}\n\\forall n \\leq N \\forall p \\!\\in\\! P \\exists a \\!\\in\\! \\monAppD{\\delta}{n}{p} \\forall b \\!\\in\\! \\monApp{p}{a} A_H(n,a, b).\n\\end{equation}\nBy induction on $n$ it follows that: For all $n \\leq N$ there exists a sequence $\\langle a_0, \\ldots, a_n \\rangle$ such that either\n\\begin{itemize}\n\t\\item for some $i < n$, $\\omega(\\langle a_0, \\ldots, a_i \\rangle^+) < i + 1$, or\n\t\\item for all $i \\leq n$, \n\t%\n\t\\begin{equation} \\label{fin-seq}\n\ta_i \\!\\in\\! \\underbrace{\\monAppD{\\delta}{i}{\\{p_{\\langle a_0, \\ldots, a_{i-1} \\rangle}\\}}}_{\\varepsilon_i(p_{\\langle a_0, \\ldots, a_{i-1} \\rangle})} \\,\\wedge\\; \\forall b \\!\\in\\! \\underbrace{\\monApp{\\{p_{\\langle a_0, \\ldots, a_{i-1} \\rangle}\\}}{a_i}}_{p_{\\langle a_0, \\ldots, a_{i-1} \\rangle} (a_i)} A_H(i,a_i, b).\n\t\\end{equation}\n\t%\n\\end{itemize}\nWe have used Lemma \\ref{lemma-pre-eq-basic}, since under the assumption that $\\omega(\\langle a_0, \\ldots, a_{i-1} \\rangle^+) \\geq i$ for all $i \\leq n$ then $\\langle a_0, \\ldots, a_{i-1} \\rangle * r \\in t$, for some $r$, and hence $p_{\\langle a_0, \\ldots, a_{i-1} \\rangle} \\in P$. By Lemma \\ref{lemma-main-eq-basic} there exists a least $n < N$ such that $\\omega(\\langle a_0, \\ldots, a_n \\rangle^+) < n + 1$, so that (\\ref{fin-seq}) holds for all $i \\leq n$, and $\\langle a_0, \\ldots, a_n \\rangle \\in t$. Let $s = \\langle a_0, \\ldots, a_n \\rangle$ and $\\beta = s^+$ (so that $\\beta \\in \\alpha$). Note that\n\\eqleft{\\max(\\monApp{\\varphi}{s^+}) = \\omega(s^+) < |s|.} \nHence, $i < |s|$ for all $i \\in \\monApp{\\varphi}{s^+}$. By Lemma \\ref{lemma-main-eq}\n\\eqleft{\\monApp{q}{\\beta} = \\monApp{q}{s^+} = \\hat{q}(s) \\subseteq p_{\\langle a_0, \\ldots, a_{i-1} \\rangle}(a_i)$, for all $i \\in \\monApp{\\varphi}{s^+}.}\nBy (\\ref{fin-seq}) we can conclude that $\\forall i \\!\\in\\! \\monApp{\\varphi}{\\beta} \\forall b \\!\\in\\! \\monApp{q}{\\beta} A_H(i, \\beta(i), b)$.\n$\\hfill \\Box$\n\nA reader familiar with the bounded functional interpretation of $\\DNS$ (cf. \\cite{FE(2010)}) will have noticed several similarities with the Herbrand functional interpretation of $\\DNS$ presented here. The main difference, however, is that we have made no effort to formalise the \\emph{verification} of the interpretation in a constructive setting, choosing to view $\\neg \\forall x \\neg A$ as a strong existence $\\exists x A$. 
Although it is clear to us that such formalisation is possible, attempting to do so would complicate the verification and probably obfuscate the crucial steps of the bar recursive construction. We hope that by simplifying the ``logical component\" of the proof one can better appreciate its ``computational\" aspect and the use of the ``Herbrand\" bar recursion. The recent paper \\cite{Ferreira(2015A)} sheds some light at the relationship between the two interpretations. \n\n\\section{Conclusion}\n\nWe conclude by noticing that all lemmas of Section \\ref{sec-lemmas} were proven for the specific case of the finite power set monad only. It is reasonable to ask whether more general versions of such lemmas work already for the monadic bar recursion $T\\textup{-}{\\sf EPS}$. The main challenge as we see it is to find the appropriate abstraction to the notion of set containment and subset inclusion. Similarly, one might consider generalisations of the Herbrand functional interpretation whereby the finite power set monads is replaced by an arbitrary monad, with possibly some extra structure. \n\n\\bibliographystyle{plain}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nAt equilibrium, colloidal suspensions can organize into familiar thermodynamic phases (gas, liquid, crystal, glass,\\ldots), and our statistical mechanics toolkit has been quite helpful in understanding their physics \\cite{Anderson2002}. However, many of the suspensions we interact with in our everyday life are not at equilibrium, but flowing. The out-of-equilibrium nature of a flowing suspension leads to new and rich behavior such as complex rheology, and the emergence of new kinds of ordering and structure. Here we will discuss suspensions which are \\emph{externally driven} by an applied field. In these systems, an external field applies force and\/or torque to individual particles. Seemingly, this would lead to quite limited possibilities for self-organization --- but due to strong collective interactions between particles, complex structures and non-trivial behavior can emerge. \n\n\nThis review focuses on the behavior of driven colloidal suspensions, a subject that has received less attention than understanding the dynamics of individual driven\/active particles. There has been an explosion of experimental techniques to manufacture new kinds of colloids, and several recent reviews focus on understanding their propulsion mechanisms from a single particle level \\cite{Aubret2017, Xu2017b, Martinez2018, Zottl2016, Zhang2015}. \nHere, we will discuss systems in which particles are driven with an external field, in contrast to those driven by local input of chemical energy. For a thorough discussion of chemically-driven systems, we guide to reader to the following reviews \\cite{Aubret2017, Liu2017, Ebbens2010}. \n\n\n\nOur aim is to highlight recent work in the field; more complete reviews exist which discuss each of the driving strategies we mention in greater technical detail. We have organized this review by the type of external drive: magnetic, electric, and acoustic. For the sake of conciseness we do not cover collective motion in suspensions of optically driven colloids. 
We refer the reader to pioneering and more recent studies on hydrodynamic synchronization and clustering in optically driven systems \\cite{Lutz2004,Roichman2007,Kotar2010,Sokolov2011,Curran2012,Nagar2014,Okubo2015} and to recent reviews on the subject \\cite{Bowman2013,Sukhov2017,Polimeno2018,Martinez2018,Saito2018}. Here, we have focused on assembly and structures with an eye towards transport and microfluidic applications; we do not cover the large body of experiments focused on self-assembled static structures, and recommend the following reviews for more information \\cite{Yi2013,Yang2016,Manoharan2015, Vogel2015, Frenkel2014}.\n\n\n\nFor each of the driving strategies, we emphasize self-assembly or spatio-temporal organization which occurs due to collective effects. We underline the crucial role of long-ranged hydrodynamic interactions, which, together with externally induced interactions, lead to these strong collective effects. In such situations, theoretical calculations can only handle one or two particles at most. Numerical simulations thus represent the most appropriate tool to study many-body systems such as colloidal suspensions. We have devoted attention to highlighting those simulation techniques and their recent advances. This is a rapidly emerging field; it is now possible to simulate not just a handful of single particles, but high-density suspensions with $O(10^6)$ particles with a reasonable amount of computational power. Coupled with experiment, detailed simulations can help us to understand the complexity of structures which emerge from collective dynamics in driven suspensions. We conclude by highlighting promising future directions and open questions. \n\n\n\n\n\\section{Magnetically powered colloidal suspensions}\nExternal magnetic fields offer a promising avenue for controlling dense colloidal suspensions. Low powered magnetic fields are relatively easy to generate in the lab, and offer the possibility of three dimensional control of particle orientation. Unlike chemical or electrical driving mechanisms, magnetic driving has the advantage of being inherently bio-compatible, opening up exciting new avenues for application possibilities. We briefly outline magnetic driving below; for a more detailed description, we refer the reader to recent reviews focused on magnetic propulsion \\cite{Han2017, Martinez2018, Snezhko2016, Rikken2014}. \n\n\\subsection{Particle-based driving}\n\nThe most straightforward way to use an external magnetic field to drive a colloidal suspension is to use the field to directly manipulate individual colloidal particles. The field is used to apply a torque to individual particles, which can then be converted into translational motion using a variety of strategies.\n\nThe appropriate orientation of the magnetic field depends on the relative balance of viscous and inertial forces. In the overdamped limit, where inertia can be neglected (very small Reynolds number), the applied torque must result in a particle motion that breaks the time-reversal symmetry inherent in Stokes flow. One strategy for generating this motion is to mimic bacteria by creating an artificial flagella or cilia, which is then actuated in either a travelling-wave or helical motion. This non-reciprocal actuation breaks the time-reversal symmetry and generates propulsion \\cite{Martinez2018, Peyer2013}. 
\n\nAlternatively, one can use the magnetic field to rotate particles near or on a solid surface, so that either the surface forces or the particle-surface hydrodynamic coupling generate motion. If the axis of rotation is oriented parallel to a surface, rotation leads to linear translation of the particle in both cases \\cite{Dean1963,Goldman1967}. We note that both permanently magnetized as well as ``super-paramagnetic'' colloids can be manipulated in this way \\cite{Martinez2018}.\n\nAt small but finite Reynolds number, a \\emph{vertically oscillating} magnetic field can be used to generate particle rotation. Ferromagnetic colloids will rotate either clockwise or counterclockwise, following the applied field. If the particles are large enough, the lag generated by inertia can lead to spontaneous symmetry breaking, so that individual particles begin to continuously roll \\cite{Kokot2015}. While this symmetry breaking mechanism is reminiscent of Quincke rotation (see discussion in section \\ref{electric}), we note two important differences: rotation speed is independent of field strength, and the particle dynamics are intrinsically chaotic \\cite{Kaiser2017}. \n\n\\subsection{Substrate based-driving}\n\nAn alternative to particle-level driving is to instead use a patterned magnetic substrate. By applying a time-varying magnetic field to the substrate, one can alter its magnetic properties in a such a way as to drive adjacent particles. One strategy, employed by Yellen et al., is to immerse non-magnetic particles in a ferrofluid above the patterned substrate \\cite{Yellen2005, Zhu2011}. The nonmagnetic particles then experience a force governed by the amount of magnetic fluid they displace, and by using a time-or-spatially varying magnetic field to adjust the magnetization of the patterned surface particles can be transported at rather high velocities ($\\sim 70 \\mu$m\/s). While there are some limitations due to immersion in a ferrofluid, this system is promising as it allows the transport of a wide variety of particles.\n\nAn alternative strategy employs uniform ferromagnetic films with patterned magnetic domains. These magnetic domains can be organized into a variety of structures such as stripes or bubbles. These structures (Bloch walls) can then be manipulated with an external magnetic field; adjacent magnetic particles couple to the domain motion and translate \\cite{Tierno2009, Ehresmann2011, Gunnarsson2005}. An underlying striped lattice confines transport along a single dimension, while using a geometry with more symmetry such as a bubble lattice allows for guided transport along an arbitrary 2D path, see Figure \\ref{fig:Magnetic}a. Patterned magnetic substrates, while limited in their reconfigurability, are a promising system for microfludic transport, as they allow for particle transport at quite high velocities \\cite{Tierno2009}. \nRecent work with these substrates has demonstrated that, due underlying to lattice symmetries, certain transport paths are topologically protected \\cite{Loehr2017, Loehr2016}. This system has also been used to create a landscape with quenched disorder by permanently placing large obstacles at random positions on the substrate, and then studying their interaction with translating magnetic particles \\cite{Stoop2017}. 
We believe that interaction with non-trivial boundaries is an exciting direction of study, and is necessary to understand suspension transport in complex environments.\n\n\\subsection{Collective behavior: self-assembly via rotating fields}\n\nMagnetically actuated suspensions can self-assemble into a variety of transient and permanent structures. One of the most common means of magnetic driving is to apply a rotating field, with an additional degree of control offered by the orientation of the field relative to a nearby surface which influences the particles due to hydrodynamic coupling. As first shown by experiments with mm-scale spinning disks, the balance between magnetic and hydrodynamic interactions can lead to the self-assembly of organized lattices \\cite{Grzybowski2002}. By altering the relative strength of magnetic vs.\\ hydrodynamic interactions, the collective behavior of the suspension can be dramatically changed. We note that here we only consider bulk fluid suspensions. When a fluid-fluid interface is present, quite different dynamics control the collective behavior, which are discussed in depth in the recent review by Snezhko \\cite{Snezhko2016}. \n\n\nIn the limit of strong magnetic interactions, the inter-particle attraction due to the spinning magnetic dipoles can be used to self-assemble a variety of dynamic structures such as rotating crystals, chains, tubes, and ``colloidal wheels'' \\cite{Tasci2016, Maier2016b, Yan2015a, Yan2015b}. Colloidal wheels represent an exciting new development; they translate at quite high velocities, up to 50 $\\mu$m\/s, and have unique dynamics due to the fact that they assemble and roll with a slight tilt, see Figure \\ref{fig:Magnetic}b. Recent work has demonstrated that, unlike a bicycle wheel, this dynamic structure is chiral; its motion does not reverse upon reversal of the driving field. Thus, this system offers a minimal experimental model for studying the necessary ingredients for non-reciprocal motion \\cite{Maier2016b}.\n\n A particularly rich set of dynamics emerges in a system of asynchronously rotating ferromagnetic colloids. Here, large (50 micron) colloidal particles are driven by a uniform vertically oscillating field, but the rotation direction of individual particles is defined by spontaneous symmetry breaking, so that it can be altered due to collisions \\cite{Kokot2015}. Hydrodynamic interactions are very weak, and particles assemble into structures dictated by magnetic interactions and interparticle collisions. By sweeping the frequency of the applied field, the particles in this system assemble into several correlated phases, exhibiting both vortical motion and flocking, see Figure \\ref{fig:Magnetic}e \\cite{Kaiser2017}. This system holds great potential for studying flocking transitions in more complex systems, including living matter. Recent work by Kokot et al.\\ has demonstrated the existence of an active vortex phase \\emph{without} the need for geometric confinement, and explored its interaction with passive particles \\cite{Kokot2018}. This system shows promise for providing insight into how obstacles can be used to manipulate and guide self-assembled structures, and offers a new route to active particle transport. \n\n\nIn a system of synchronously rotating magnetic colloids, the structures which self-assemble are again governed by the balance between magnetic and hydrodynamic interactions; a rough estimate of this balance is sketched below. 
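\nA rough way to estimate which interaction dominates is to compare the maximal dipole--dipole force between two moments $m$ at separation $r$, $F_{m}\\sim 3\\mu_{0}m^{2}\/(4\\pi r^{4})$ (up to an order-one, configuration-dependent prefactor), with the Stokes drag felt by a particle advected by a neighbor's rotlet flow, $u\\sim\\omega a^{3}\/r^{2}$ for a sphere spun at rate $\\omega$. The Python sketch below makes this comparison with illustrative numbers; it is a scaling estimate only, not a quantitative phase boundary.\n\\begin{verbatim}\nimport numpy as np\n\nmu0 = 4e-7 * np.pi  # vacuum permeability [T m/A]\n\ndef dipole_force(m, r):\n    # Maximal dipole-dipole force; order-one prefactor omitted.\n    return 3 * mu0 * m**2 / (4 * np.pi * r**4)\n\ndef rotlet_drag_force(eta, a, omega, r):\n    # Stokes drag on a sphere advected by a neighbor's rotlet\n    # flow u ~ omega*a^3/r^2 (neighbor spun at rate omega).\n    return 6 * np.pi * eta * a * omega * a**3 / r**2\n\n# Illustrative values for micron-scale rotating magnetic colloids:\nm, a, eta = 1e-16, 1e-6, 1e-3\nomega, r = 2 * np.pi * 10, 3e-6\nprint(dipole_force(m, r) / rotlet_drag_force(eta, a, omega, r))\n\\end{verbatim}\nA ratio much larger (smaller) than one corresponds to the magnetically (hydrodynamically) dominated regimes described next.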
When magnetic interactions between the particles are strong (colloids with large magnetic moment), particles assemble into chains, crystalline structures, and networks \\cite{Martinez2015, Martinez2015b, Maier2016a}. These dense structures can be used to transport cargoes; although the particles are closely packed, they can still follow the applied rotating field, creating a pumping flow that can be used for advective transport, see Figure \\ref{fig:Magnetic}d. In the intermediate range, when magnetic interactions are comparable to hydrodynamic interactions, chain-like structures assemble, but are much more flexible. Due to the finite relaxation time of particle magnetization, these chains orient perpendicular rather than parallel to the applied field \\cite{Martinez2016a, Martinez2016b, Massana2017a, Massana2017b}. The hydrodynamic flow adjacent to these assembled chains can be used to shuttle cargo \\cite{Massana2017b}. Additionally, these chains can easily be made into rings, which can be assembled around a cargo particle, and then used to transport it, see Figure \\ref{fig:Magnetic}c \\cite{Martinez2016a, Yang2017b}. \n\nWhen magnetic interactions are very weak, so that hydrodynamic coupling is the dominant particle-particle interaction, very different structures emerge. In this case, although the external magnetic field provides a convenient way to rotate particles, interparticle magnetic interactions can be completely neglected as compared to viscous forces\/torques acting on the particle. In this weakly magnetic case, rotation adjacent to, but a finite distance above, a no-slip surface leads to very strong collective effects. (This is quite different from rotating a particle \\emph{at} a surface, which leads to translation akin to a wheel). Here, a uniform suspension of particles shows density waves which propagate freely throughout the system \\cite{Delmotte2017}. If a large enough density perturbation is introduced into the system, a whole cascade of instabilities is observed: first a shock is formed, and then this shock quickly destabilizes into fingers with dense tips \\cite{Delmotte2017b, Driscoll2017}. The particle-wall distance becomes the dominant lengthscale in the system, controlling both the width of the shock front as well as the wavelength of the instability. As the instability grows, the dense tips of these fingers can then break off, and continue to travel as persistent, dense clusters termed `critters', see Figure \\ref{fig:Magnetic}f. Critters are bound together only through hydrodynamic interactions; if they are indeed a stable state of the system, this demonstrates that stable clustering does not require attractive interactions, but can be created by hydrodynamic coupling alone. These kinds of hydrodynamic bound states have also recently been demonstrated with pairs and chains of elliptical particles \\cite{Martinez2018b}. Hydrodynamic bound states are an important area to explore further, and there are still fundamental questions about the limits of their stability. From an applications standpoint, critters offer exciting possibilities for encapsulation and transport of cargoes at the microscale.\n\n\n\n\\subsection{Collective behavior: bio-mimetic systems}\n\nThe collective behavior of most bio-mimetic systems has been studied considerably less than that of simpler colloidal particles, likely due to the difficulty of manufacturing magnetically controlled artificial flagella at large scales. 
A few studies have been done, which demonstrate that these more complex particle shapes display clustering instabilities reminiscent of those seen in magnetic sphere systems \\cite{Vach2017}. This is an exciting area for future work: as larger-scale fabrication of these artificial flagellated systems becomes possible, we expect they will help enlighten open questions on bacterial swarming and flocking. Particle shape has a strong effect on hydrodynamic interactions, so studying these questions with spherical colloids alone is insufficient. For a more thorough overview of recent advances in creating anisotropic magnetic colloids, see the recent review by Tierno \\cite{Tierno2014}.\n\nRecent advancements in microfabrication have made fabrication of cilia-like microstructures possible at a relatively large scale, allowing for the study of large-scale collective behavior. Large collections of cilia are known to beat synchronously; studying simplified models of cilia allows one both to understand the biological system and to explore coupled-oscillator dynamics \\cite{Bruot2016, DiLeonardo2012, Damet2012}. Self-assembled artificial cilia were first studied experimentally by Vilfan et al., who demonstrated the feasibility of artificial cilia as a platform for creating pumping flow \\cite{Vilfan2009}. Carpets of artificial cilia can also be assembled from individual magnetic rods \\cite{Coq2011}. The rods in this system are created using a soft-lithography templating technique to align paramagnetic beads. As the beads are fixed in place, the hydrodynamic interactions are those of many coupled flexible rods. Through modeling in addition to carefully controlled experiments, this work showed that the collective beating of cilia is controlled by large-scale hydrodynamic coupling of the entire carpet of artificial cilia. Additional studies have recently succeeded in creating magnetic cilia by robust and simple methods \\cite{Zhang2018, Hanasoge2018, Hhanasoge2018b}. We hope to see more activity in this area in the future, as it is highly relevant from both a biological point of view (many tissues contain cilia) and an applications point of view (it is an efficient system for microscale pumping). \n\n\\subsection{Outlook}\nMagnetically-driven colloidal suspensions are a powerful tool from an applications point of view: they are inherently bio-compatible and often easily re-configurable. As techniques to fabricate magnetic particles open up possibilities for particles with more complex shapes, we expect that this will continue to be a very active area of study. Additional possibilities are opened up by combining magnetic driving with other methods, such as electrical or acoustic driving.\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.95\\columnwidth]{figure_magneticv3.png} \\caption{Examples of individual and collective motion in magnetically driven colloidal suspensions. \\textbf{a}, Transport of a particle using a patterned magnetic substrate, adapted from \\cite{Tierno2014}. \\textbf{b}, Guided transport of a self-assembled colloidal wheel, adapted from \\cite{Maier2016b}. \\textbf{c}, A flexible magnetic chain that is externally manipulated to form a ring, adapted from \\cite{Martinez2016a}. \\textbf{d}, Transport of passive (non-magnetic) particles on a network of actively rotating (but not translating) 1.4 micron magnetic colloids, adapted from \\cite{Maier2016a}. 
\\textbf{e}, Asynchronously rotating colloids dynamically assemble into a rich variety of phases; shown from left to right are experimental images of the gas, flocking, and vortex phases. The lower images represent coarse-grained velocity measurements (scale bars are 1 mm), adapted from \\cite{Kaiser2017}. \\textbf{f}, Creation of hydrodynamically stabilized clusters (critters) from a hydrodynamic fingering instability in both simulation (left) and experiment (right). The particles are rotated near (but not at) a wall; the particle-wall distance sets the scale of the instability as well as the size of the critters (scale bar is 200 micron). } \n \\label{fig:Magnetic}\n\\end{figure}\n\n\n\\section{Electrically powered colloidal suspensions}\n\\label{electric}\n\nElectric driving in colloidal suspensions is more complex than magnetic driving: except at very high frequencies, the suspending fluid as well as the particles can react to the applied field. Electro-hydrodynamics is a rich subject, and it is not our intent to discuss it in detail. Rather, we will highlight recent work in this field that illustrates the variety of driving strategies, and discuss emergent behavior due to collective effects. \n\n\\subsection{Electric driving}\n\nMany colloidal suspensions are charge-stabilized, i.e.\\ the individual particles have a net charge. If a homogeneous, DC electric field is applied to a suspension, the charged particles will migrate; this is termed electrophoresis. Even uncharged dielectric particles can be polarized in an applied field; if a field gradient is present this can also lead to particle motion, termed dielectrophoresis. Colloids can also be manipulated by using an electric field to act on the (charged) suspending fluid. Even initially uncharged colloids placed in a uniform field will polarize, and this will result in the particle being surrounded by a cloud of screening ions, termed the electric double layer. The external electric field exerts a force on this double layer, resulting in fluid flow. This effect is termed `induced-charge electrophoresis' or ICEP, as the applied field is exerting a force on the charge which is induced by the field itself. ICEP generates a quadrupolar flow around a symmetric particle, which will not lead to any translational motion. However, if a particle has some shape or charge asymmetry these flows become asymmetric and can lead to translation, see Figure \\ref{fig:Electric}a. A simple example of this is the charge asymmetry present in a bi-material colloid, such as a Janus colloid. We have only briefly mentioned possible driving strategies, but we emphasize that understanding electrokinetic effects is a rich and active area of study. For a more in-depth discussion of electric driving of dielectric\/conducting particles, we direct the reader to the following reviews \\cite{Anderson1989, Squires2006, Bazant2010, Van2013}.\n\nA very different approach to electric driving is to take advantage of a rotational instability first observed by Quincke, who found that a dielectric colloidal particle in a strong electric field can begin to spontaneously rotate \\cite{Quincke1896}. This rotation occurs when there is a difference in the charge relaxation time between the colloid and the fluid. In a strong electric field, a dielectric particle will have an induced dipole moment. If the charge relaxation time of the colloid is larger than that of the fluid, this induced dipole will be aligned anti-parallel to the applied field. 
This configuration is unstable to infinitesimal perturbations; any small rotational displacement will produce a torque which amplifies the perturbation. This results in spontaneous rotation at a constant rate in a direction transverse to the applied field \\cite{Melcher1969}. When these rotating particles are adjacent to, or in contact with, a solid surface, they will translate, see Figure \\ref{fig:Electric}b. We note that the particles in a suspension of `Quincke rollers' may translate in different directions as the rotation axis is only confined to be parallel to the surface. However, collective effects can synchronize particle motion.\n\n\\subsection{Collective behavior}\n\nAs discussed above, there is a large variety of strategies for electrically driving suspensions. For example, Vissers et al.\\ demonstrated that even a simple system of oppositely charged colloids in a DC field can organize into parallel lanes due to collective effects \\cite{Vissers2011}. Though asymmetry is required for transport via ICEP flows, symmetric particles have been shown to organize into clusters, chains, and organized bands and vortices due to hydrodynamic interactions competing with other effects \\cite{Zhang2006, Hu1994, Yeh1997, Fraden1989, Perez2010}. However, fully leveraging ICEP effects for driving suspensions requires some particle asymmetry. While new techniques are expanding our ability to create controllable shape asymmetry, charge asymmetry is more commonly implemented. This is easily achieved by using Janus colloids, and they remain a popular choice due to their ease of fabrication \\cite{Zhang2017}. Janus colloids driven via ICEP translate at quite high velocities (tens of $\\mu$m\/s) \\cite{Gangwal2008}. Suspensions of Janus particles have been shown to assemble into a variety of structures, such as chiral clusters, flocks, and chains \\cite{Ma2015, Nishiguchi2018, Yan2016}. An exciting recent development in this field is the work of Yan et al., which demonstrates that, by sweeping the field frequency, one can tune the dipolar interactions between individual particles. By adjusting these electrostatic interactions, they achieved several varieties of ordered phases from one species of particle: chains, swarms, and clusters, see Figure \\ref{fig:Electric}d \\cite{Yan2016}. This is an interesting system, as in contrast with assembly in many other suspensions, the observed phases result from simple pairwise interactions, and not hydrodynamic flows. As techniques for fabricating Janus colloids become more sophisticated, new possibilities for driven assembly and transport are being realized. For example, by making a Janus particle that could be translated via ICEP and steered via a magnetic field, Demirörs et al.\\ recently demonstrated the potential for guided cargo transport \\cite{Demirors2018}. \n\nAlthough chemical heterogeneity has been better explored, shape asymmetry can also be used to leverage ICEP for particle translation. Ma et al.\\ demonstrated that shape asymmetry could lead to propulsion in experiments using asymmetric dimers. Additionally, they showed that these flows can be altered by changing the chemical properties of the dimers, i.e.\\ identically shaped dimers made from different materials can move in different directions \\cite{Ma2015b}. In a suspension of these dimers, the assembly behavior is controlled by the asymmetric ICEP flows. 
These flows can be finely tuned by altering particle charge or the conductive properties of the fluid, leading to the assembly of a variety of chiral and achiral clusters \\cite{Yang2017}, see Figure \\ref{fig:Electric}e. As fine control of shape and material properties of colloidal particles is becoming more and more technologically feasible, we hope to see more efforts in this area. Recent theoretical work by Brooks et al.\\ highlights the potential of using shape to create directed motion with ICEP flows. They present a systematic investigation of dynamics as a function of particle symmetry, demonstrating that essentially any dynamic function (translational\/rotational motion) can be achieved by using an appropriately shaped colloid \\cite{Brooks2018}. We hope this work inspires follow-up experimental work, creating driven suspensions with new properties. It will be especially exciting to see explorations of the collective behavior of these ``programmed'' colloids.\n\nThe collective dynamics of Quincke rollers display a rich phase behavior. Using lateral confinement, experiments have demonstrated that large-scale collective motion can emerge from a uniform population of Quincke rollers: polar bands, flocking, and vortex states were all observed. As both the hydrodynamic and electrostatic particle-particle interactions are reasonably well-understood in this system, an analytic model of this flocking transition was constructed in addition to the experimental observations \\cite{Bricard2013, Bricard2015}. Like the asynchronous magnetic system discussed in the previous section \\cite{Kokot2018}, confined Quincke rollers are a promising model system for understanding flocking transitions in more complex active matter systems, such as biological ones (bacteria, birds, fish). Recently, artificial flocks were used to explore the interaction of these dynamic clusters with a disordered landscape, see Figure \\ref{fig:Electric}c. Morin et al.\\ found that flocking was suppressed above a critical amount of disorder, suggesting a first-order phase transition that they argue should occur for all kinds of flocking \\cite{Morin2017}. Additionally, this group explored the response of colloidal flocks to an external field. They found that although individual particles behave analogously to spins in a magnetic field, collectively these flocks show a quite different and nonlinear response, with the ability to align and travel opposite the direction of applied flow \\cite{Morin2018}. Mapping out the response to an applied field is important to understanding the stability and dynamics of the structures and condensed phases which arise in driven suspensions, and we look forward to seeing more work along these lines in other systems. \n\n\\subsection{Outlook}\n\nIt is only within the last ten years or so that fabrication techniques have become advanced enough to design particles with chemical\/shape asymmetry so that ICEP flows can be used for propulsion. This is a very active area of development, and we expect to see new studies examining particles with complex asymmetry. Other means of electrical driving, such as Quincke rotation, can be used to create well-understood model systems for exploring flocking. Very different behaviors are observed when Quincke rollers are suspended in liquid crystals \\cite{Jakli2008} or near fluid surfaces \\cite{Ouriemi2014}. 
This highlights that in both this and other systems, more work is needed to explore the behavior of driven suspensions near fluid and elastic interfaces.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.95\\columnwidth]{figure_electricv3.png}\n \\caption{Examples of individual and collective motion in electrically driven colloidal suspensions. \\textbf{a}, Sketch of ICEP flow due to an applied field, $\\bf\\vec{E}$. The flow is quadrupolar around a symmetric particle (left). However, if charge or shape asymmetry is present, ICEP flows are no longer symmetric and lead to translation at a velocity $\\bf{\\vec{u}}$ (right). \\textbf{b}, Illustration of Quincke rotation. When the charge relaxation time is larger in the colloid than in the fluid, the colloid's polarization, $\\bf\\vec{P}$, will be antiparallel to the applied field, $\\bf\\vec{E}$ (left). This unstable configuration results in steady rotation at a frequency $\\omega$ (right). \\textbf{c}, A flock of Quincke rollers encountering a random lattice of obstacles, adapted from \\cite{Morin2017}. The upper panel shows a zoomed-out view of the colloidal flock (scale bar is 5 mm), and the lower panel shows a close-up of the flock moving past the obstacles; arrows indicate particles and dots indicate obstacles (scale bar is 100 $\\mu$m). \\textbf{d}, Illustration of different phases achieved by tuning the driving frequency in a system of Janus colloids (scale bar is 5 $\\mu$m in all panels except the bottom left, where it is 30 $\\mu$m), adapted from \\cite{Yan2016}. \\textbf{e}, Collective behavior of asymmetric particles (colloidal dimers), adapted from \\cite{Yang2017}. By modifying the shape asymmetry as well as the surface charge, a variety of behaviors is observed: gas, 3D clusters, and planar clusters; the bottom left panel shows a variety of cluster sizes. Large images are optical images (scale bars 5 $\\mu$m) and small insets are SEM images (scale bars 2 $\\mu$m). } \n \\label{fig:Electric}\n\\end{figure}\n\n\n\n\n\\section{Acoustically powered colloidal suspensions}\n\nAnother way to drive colloidal particles is to use acoustic fields \\cite{Lenshof2012,Rao2015,Connacher2018}\\footnote{We highly recommend the Acoustofluidics tutorial series of 23 papers in \\textit{Lab on a Chip}.}.\nHistorically, acoustic waves have been used in microfluidics for particle sorting via acoustic levitation. Less than a decade ago, it was shown that acoustic waves could also be used to propel asymmetric particles with a hydrodynamic effect called acoustic streaming \\cite{Wang2012,Nadal2014}, thus opening a new route to study active matter and collective motion.\n In this section we will briefly explain the levitation and propulsion mechanisms and then focus on the collective behaviour induced by acoustic forces and hydrodynamic interactions.\n\n\\subsection{Levitation mechanism}\n\nParticles are manipulated by acoustic fields as follows: a vertical standing wave in a reflective channel generates a force, called the primary radiative force, that drives particles to the pressure nodes or antinodes. \n To calculate the magnitude of the radiative force, we consider a one-dimensional standing wave in the $z$-direction (see Figure \\ref{fig:Acoustic}a), with wavenumber $k=2\\pi\/\\lambda$ and acoustic energy $E_{ac}\\sim p_0^2$, where $p_0$ is the wave amplitude. 
Then, the primary acoustic radiative force in the $z$-direction acting on a spherical particle with radius $a$ is given by \\cite{Lenshof2012,Rao2015}\n\\begin{equation}\n F^{\\mbox{rad}} = 4\\pi a^3 E_{ac} k \\sin(2kd)\\Phi.\n\\end{equation}\nHere $\\Phi$ is the acoustic contrast factor, and $d$ is the particle's distance to the nearest pressure node or antinode. $\\Phi$ compares the material properties (compressibility and density) of the particle to those of the suspending medium. When $\\Phi>0$, which is the case for most colloids, the particle is attracted to the pressure nodes in the acoustic standing wave field. When $\\Phi<0$ the particle is attracted to the antinodes. \n\n\\subsection{Acoustic streaming and individual behaviour in the levitation plane}\n\nWhen a suspension of spherical colloids is placed in a 1D acoustic standing wave along $z$, it will first move towards the pressure nodes, where the primary radiative force is zero, and then follow the acoustic energy distribution in the plane.\nWhen the particle density differs from that of the fluid, the particles oscillate along $z$ with the frequency of the acoustic wave, and the relative motion of the particle with respect to the fluid generates non-linear terms whose period-averaged value is non-zero and results in a steady flow (e.g.\\ the nonlinear term $\\cos(\\omega t)\\cos(\\omega t) = \\frac{1}{2}(1+\\cos(2\\omega t))$ has a non-zero period-averaged value of $\\frac{1}{2}$) \\cite{Sadhal2012,Sadhal2012b}. This phenomenon is known as acoustic streaming; Figure \\ref{fig:Acoustic}b illustrates the streaming flow around an oscillating sphere \\cite{Tatsuno1982}. \nThe net force due to the viscous stress generated by this steady flow on a particle cancels out to zero when the particle has fore-aft symmetry about the $z$-axis. However, when the particle does not have this symmetry, the steady flow generates a net hydrodynamic force (torque) that results in a translational (rotational) motion in the levitation plane (see Figure \\ref{fig:Acoustic}a).\nThe farther the particle density is from the fluid density, the more it will oscillate relative to the fluid, the stronger the steady flow, and thus, the faster it will translate \\cite{Wang2012,Nadal2014}.\nAn interesting aspect of streaming-induced propulsion is that the direction of translation is very sensitive to the degree of asymmetry of the particle, to its density distribution, and to the acoustic driving frequency. Recent experiments \\cite{Ahmed2016} and theory \\cite{Collis2017} show that a small change in the particle shape, driving frequency, or density distribution can reverse the direction of translation and drastically change the speed (see \\cite{Collis2017} for more detailed explanations on this complex dependence). 
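\n\nAs a concrete illustration of the primary radiative force introduced at the beginning of this section, the sketch below evaluates the contrast factor and the resulting force for a polystyrene bead in water. It uses the standard monopole--dipole expression for $\\Phi$ found in the acoustofluidics tutorials cited above; the material constants are typical textbook values and the wave parameters are illustrative.\n\\begin{verbatim}\nimport numpy as np\n\ndef contrast_factor(rho_p, rho_f, kappa_p, kappa_f):\n    # Standard monopole-dipole acoustic contrast factor.\n    rho_t = rho_p / rho_f      # density ratio\n    kap_t = kappa_p / kappa_f  # compressibility ratio\n    return (5*rho_t - 2) / (3*(2*rho_t + 1)) - kap_t / 3\n\ndef radiation_force(a, E_ac, k, d, Phi):\n    # Primary radiative force F = 4*pi*a^3*E_ac*k*sin(2kd)*Phi.\n    return 4 * np.pi * a**3 * E_ac * k * np.sin(2*k*d) * Phi\n\n# Polystyrene bead in water (typical textbook values):\nPhi = contrast_factor(rho_p=1050.0, rho_f=998.0,\n                      kappa_p=2.5e-10, kappa_f=4.5e-10)\nlam = 750e-6                     # wavelength at ~2 MHz in water [m]\nF = radiation_force(a=1e-6, E_ac=10.0, k=2*np.pi/lam, d=lam/8, Phi=Phi)\nprint(Phi, F)  # Phi > 0: the bead is driven to the pressure nodes\n\\end{verbatim}\n\nAcoustic levitation can also lead to the spinning of rods and spheres about an axis parallel to the levitation plane \\cite{Wang2012,Zhou2017}. 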
The physical mechanisms behind this spinning behavior are still debated: \nWang et al.\\ \\cite{Wang2012} claim that the spinning is related to the helical elastic surface waves (called Rayleigh waves) on the colloids generated by the acoustic field, while, according to Zhou et al.\\ \\cite{Zhou2017}, the spinning is due to acoustic waves with different phases traveling in the plane that generate a viscous torque on the particles.\nRods have also been observed to orbit in tight circles in the levitation plane \\cite{Wang2012,Zhou2017}.\nAccording to \\cite{Zhou2017}, these orbiting trajectories come from the torque that the steady acoustic streaming flow generates on the slightly bent shape of the particles about the rotation axis.\n\nThe spinning motion of colloids can be controlled by breaking their rotational symmetry. Sabrina et al.\\ engineered flat, twisted-star-shaped particles with fins (see Figure \\ref{fig:Acoustic}e) \\cite{Sabrina2018}. By varying the parametric configuration of the particle shape, they quantified the relationship between a particle's shape and its rotational motion. In particular, they varied the radius $a$, the degree of chiral asymmetry $c$, and the rotational order $n$ (i.e.\\ the number of fins) of the particles. They showed that the rate of rotation decreases with increasing radius, and changes sign with chirality, as expected from theoretical works on acoustic streaming. More surprisingly, they observed a non-monotonic dependence of the direction of rotation on $n$ (CW for $n=2,3,6$ and CCW for $n=4,5$). The reversal in the direction of rotation between $n = 3$ and $n = 4$ was predicted by their Boundary Integral numerical simulations. However, the change at $n=6$ is not consistent with their prediction. \nSeveral effects could explain such discrepancies: high-order inertial effects neglected in their simulations, and the high sensitivity of trajectories to particle geometry \\cite{Collis2017}. \n\n\n\n\n\\subsection{Collective behaviour}\nAt the collective level, suspensions of acoustically powered colloids exhibit interesting behaviours due to acoustic radiation, hydrodynamic interactions, or a competition between these two effects. Self-assembly of rods and spheres into large chains of particles has been experimentally observed \\cite{Wang2012,Zhou2017}. These chains are assumed to form due to secondary acoustic radiation \nforces (i.e.\\ Bjerknes forces) that are attractive in the levitation plane \\cite{Woodside1997,Groschl1998,Zhou2017}.\nWhen the particles are spinning, the chains form a large vortex that advects particles at large speeds due to hydrodynamic coupling \\cite{Wang2012,Zhou2017}.\nThe acoustic nodal structure in the levitation plane can also generate streak and ring-like structures made of thousands of colloids \\cite{Oberti2007,Wang2012}. When the particles are self-propelled, the chains, rings, and streaks translate as well \\cite{Wang2012,Zhou2017}.\nZhou et al.\\ showed that a suspension of microrods performing orbiting motion could be used to stir fluid at the microscale \\cite{Zhou2017}. Figure \\ref{fig:Acoustic}d shows the formation of spinning chains and nodal aggregates in suspensions of rods \\cite{Ahmed2016}.\n\nIn addition to secondary acoustic radiative forces, the hydrodynamic interactions due to acoustic streaming correlate particle motion. \\cite{Klotsa2007,Fabre2017} showed that the hydrodynamic forces due to acoustic streaming between two particles lead to an equilibrium configuration that depends on the frequency. 
\\cite{Voth2002} showed that these hydrodynamic effects result in various dynamical states and large-scale clustering in suspensions. Recent works have used the acoustic streaming flow of two unequal spheres connected by a spring to build an acoustically powered swimmer, whose direction of motion depends on the Reynolds number (see Figure \\ref{fig:Acoustic}c) \\cite{Klotsa2015,Jones2018}. \n\nIn a standing acoustic wave, co-rotating star-shaped particles repel each other through hydrodynamic repulsion due to acoustic streaming flows, which is in competition with the acoustic confining potentials that push particles into nodal regions. \nAs a result, co-rotating spinners form stationary assemblies with a characteristic inter-particle separation distance of approximately 3 particle diameters \\cite{Sabrina2018} (see Figure \\ref{fig:Acoustic}e).\n\n\nAcoustic fields have been coupled to other external fields to trigger self-assembly. \\cite{Ahmed2014} combined acoustic levitation with magnetic attraction in suspensions of tri-metallic, 2 $\\mu$m rods to form motile asters, whose velocity and direction of motion result from asymmetric acoustic streaming. They also showed that these clusters could be guided using an external magnetic field.\nInspired by the migration of leukocytes towards vascular walls and rolling motion before transmigration, \\cite{Ahmed2017} combined acoustic and magnetic fields to design bio-inspired magnetic rolling clusters, using suspensions of paramagnetic beads in a channel (see Figure \\ref{fig:Acoustic}f).\n Thanks to an oscillating magnetic field about an axis parallel to the wall, paramagnetic beads formed rotating compact aggregates made of tens of particles. By locating the pressure nodes of a standing acoustic wave outside the microchannel, the primary radiative force drove these clusters towards the boundaries. Once at the boundaries, the aggregates rolled and translated along the walls, just like leukocytes do. Li et al.\\\n developed hybrid magnetic nanoswimmers: the chirality of their helical structure allows them to self-propel in a given direction under a rotating magnetic field, and in the opposite direction due to acoustic streaming \\cite{Li2015}. By tuning the relative strength of magnetic and acoustic forcing, they controlled the collective motion of these particles: they observed a static swarm vortex when both the acoustic and magnetic fields are on, directional collective motion with just the magnetic field, and stable aggregation with the acoustic field only (see Figure \\ref{fig:Acoustic}g).\n \n\n\\subsection{Outlook}\nAcoustic waves are bio-compatible and present promising applications for particle sorting, targeted drug delivery and micro-surgery \\cite{Rao2015}. The complex interplay between acoustic forces and hydrodynamic interactions leads to emergent behavior in large suspensions. These collective effects could be used for other applications such as pumping and mixing in microfluidic environments.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.95\\columnwidth]{Schematic_acoustic_streaming_v3.png}\n \\caption{Examples of individual and collective motion in acoustically powered colloidal suspensions. \\textbf{a}, Schematic of acoustic levitation and shape-directed motion. 
\n \\textbf{b}, Streamlines and pressure contours of the acoustic streaming flow induced by an oscillating sphere (simulations), adapted from \\cite{Fabre2017}.\n \\textbf{c}, Left: Experimental image of a two-sphere swimmer propelled by acoustic streaming \\cite{Klotsa2015}. Right: The simulated time-averaged flow field and direction of motion of the swimmer \\cite{Jones2018}. \\textbf{d}, Experimental image illustrating the formation of spinning chains and aggregation in suspensions of rod-shaped acoustically-propelled colloids \\cite{Ahmed2016}. \\textbf{e}, Experimental image of three co-rotating star-shaped particles, illustrating their equilibrium configuration (scale bar is 15 microns) \\cite{Sabrina2018}. \\textbf{f}, The reversible assembly, migration, and rolling motion of magnetic clusters near boundaries. These clusters are manipulated with a combination of magnetic and acoustic fields \\cite{Ahmed2017}. \\textbf{g}, Left: The formation of a static swarm vortex when both magnetic and acoustic fields are on. Right: Directional motion that occurs with the magnetic field only \\cite{Li2015}. } \n \\label{fig:Acoustic}\n\\end{figure}\n\n\n\\section{Modeling the Brownian Dynamics of Colloidal Suspensions}\n\n\nA major parameter in the dynamics of colloidal suspensions is the Reynolds number, Re, which compares inertial convective effects to viscous dissipation. The Reynolds number is defined as follows: Re = $Ua\/\\nu$, where $U$ is the typical particle velocity, $a$ is the particle size and $\\nu$ is the kinematic viscosity of the fluid.\nIn most situations, driven colloidal particles live in the realm of low Reynolds number, Re $\\sim 10^{-5} - 10^{-2}$, where fluid inertia can be neglected and the fluid reaction to perturbations is instantaneous. \n\n\n\nOwing to their typical size and velocities, most colloids are subject to thermal fluctuations and live in a viscous fluid (Re $\\approx 0$), where the fluid momentum diffuses much faster than the particles. This means that the fluid velocity relaxes very quickly to a steady solution while the particles hardly move \\cite{Balboa2013}.\nAs a result, hydrodynamic interactions between particles are long-ranged (typically $\\sim 1\/r$--$1\/r^3$), and therefore particle trajectories are highly correlated over long distances compared to the particle size. Thus, the dynamics of driven colloidal suspensions are governed not only by the external drive and thermal fluctuations, but are also strongly influenced by collective hydrodynamic effects.\n\nNote that, in some cases, like acoustic streaming, the oscillatory motion of the particles is very fast and the Reynolds number becomes finite, typically Re $\\lesssim 1$. In this regime, fluid inertia must be accounted for to describe the physics of the system.\nBelow we provide a short overview of recently developed numerical methods that model driven colloidal suspensions at low Reynolds number. An extensive recent review for suspensions at finite Reynolds number is given by Maxey \\cite{Maxey2017}. \nHere, we only focus on methods that explicitly solve the \\textit{inertia-less} Stokes equations for the \\textit{incompressible} fluid phase. 
We refer the reader to specific reviews on alternative methods, such as Lattice-Boltzmann \\cite{Chen1998,Aidun2010}, or particle methods such as Smooth Particle Hydrodynamics (SPH) \\cite{Monaghan2005}, Smooth-Dissipative Particle Dynamics (S-DPD) \\cite{Groot1997,Espanol2003}, Multi-particle collision dynamics (MPCD) \\cite{Gompper2009} and Stochastic rotation dynamics (SRD) \\cite{Ihle2001}. \n\nSome of the phenomena presented in the sections above involve tens of thousands, even hundreds of thousands, of particles that interact through hydrodynamics and other interactions. \nAccounting for all these interactions together with thermal fluctuations in large-scale Brownian Dynamics simulations is a challenging task that requires efficient numerical methods.\nBelow we present the basic equations for the dynamics of colloidal suspensions and briefly enumerate the challenges together with the numerical methods developed to address them. This section does not constitute an exhaustive review of the literature, but rather highlights the major developments that brought significant improvements over the traditional Brownian Dynamics methods. \n\n\n\\subsection{Brownian Dynamics equations and numerical challenges}\n\nConsider a collection of $N$ colloidal particles in a viscous fluid with positions $\\bm{x}_i, i=1,..,N$ and orientations $\\bm{\\theta}_i, i=1,..,N$ given by quaternions. Denote by $\\bm{q}_i = [\\bm{x}_i, \\bm{\\theta}_i]$ the generalized position of particle $i$, and by $\\bm{Q} = \\lbrace\\bm{q}_i, i=1,..,N\\rbrace$ the set of position vectors.\n In this regime, the equations of motion for a collection of colloidal particles are given by the overdamped limit of the Langevin equations \\cite{Delong2015} \n\\begin{equation}\n \\frac{d\\bm{Q}}{dt} = \\bm{\\mathcal{M}}\\bm{\\mathcal{F}} + \\sqrt{2k_BT}\\bm{\\mathcal{M}}^{1\/2}\\bm{\\mathcal{W}} +\n k_BT\\partial_{\\bm{Q}}\\cdot\\bm{\\mathcal{M}}\n \\label{eq:overdamped}\n\\end{equation}\nwhere \n$\\bm{\\mathcal{M}}$ is the $6N\\times 6N$ mobility matrix that contains all hydrodynamic interactions between the particles, \n$\\bm{\\mathcal{F}}$ is the $6N\\times 1$ vector that collects the external generalized forces acting on all the particles,\n$\\bm{\\mathcal{M}}^{1\/2}$ is the square root (i.e.\\ the Cholesky factor) of the mobility matrix,\nand $\\bm{\\mathcal{W}}$ is a vector of white noise processes.\nNote that, due to the absence of inertia in the system, one only needs to solve the temporal evolution of the particle positions rather than the fast degrees of freedom such as velocities, which allows the study of larger time scales.\n\nThe first term in the RHS of Eq.\\ (\\ref{eq:overdamped}) corresponds to the deterministic velocities arising from external forcing. \nThe external forces and torques $\\bm{\\mathcal{F}}$ can be of various origins, such as gravitational, electro-magnetic (after integrating Maxwell's stress tensor over the particle's surface), and acoustic\\footnote{For the sake of conciseness, we only focus on hydrodynamic interactions and refer the reader to reviews and books on the modeling of electro-magnetic interactions \\cite{Ghaffari2015,Satoh2017} and acoustic waves \\cite{Laurell2014,Ley2017} in fluids and colloidal suspensions.}.\nHydrodynamic interactions between immersed particles (encoded in the mobility matrix) are obtained by solving Stokes equations with no-slip boundary conditions on particles' surfaces. 
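\n\nTo fix ideas before discussing how each term is evaluated in practice, the following minimal Python sketch implements the simplest, traditional realization of Eq.\\ (\\ref{eq:overdamped}) for translating spheres: the mobility is assembled from the analytical unbounded Rotne--Prager--Yamakawa (RPY) tensor, the Brownian velocities are generated with a dense Cholesky factorization (the $O(N^3)$ route discussed below), and the divergence term is dropped, since it vanishes for the unbounded RPY mobility. This is an illustration of the structure of the equations, not a scalable implementation.\n\\begin{verbatim}\nimport numpy as np\n\ndef rpy_mobility(x, a, eta):\n    # Unbounded Rotne-Prager-Yamakawa translational mobility,\n    # valid for non-overlapping spheres (|x_i - x_j| > 2a).\n    N = x.shape[0]\n    m0 = 1.0 / (6.0 * np.pi * eta * a)  # self-mobility\n    M = np.zeros((3 * N, 3 * N))\n    I = np.eye(3)\n    for i in range(N):\n        M[3*i:3*i+3, 3*i:3*i+3] = m0 * I\n        for j in range(i + 1, N):\n            r = x[i] - x[j]\n            d = np.linalg.norm(r)\n            rr = np.outer(r, r) / d**2\n            A = m0 * (3*a/(4*d)) * ((1 + 2*a**2/(3*d**2)) * I\n                                    + (1 - 2*a**2/d**2) * rr)\n            M[3*i:3*i+3, 3*j:3*j+3] = A\n            M[3*j:3*j+3, 3*i:3*i+3] = A\n    return M\n\ndef overdamped_step(x, F, a, eta, kT, dt, rng):\n    # One Euler-Maruyama step of the overdamped Langevin equation.\n    M = rpy_mobility(x, a, eta)\n    L = np.linalg.cholesky(M)  # O(N^3) Brownian noise factor\n    W = rng.standard_normal(3 * x.shape[0])\n    dQ = dt * (M @ F.ravel()) + np.sqrt(2.0 * kT * dt) * (L @ W)\n    return x + dQ.reshape(-1, 3)\n\\end{verbatim}\nReplacing the dense factorization and the pairwise mobility assembly with the linear-scaling approaches reviewed below is precisely what makes suspensions of $O(10^6)$ particles tractable.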
\nComputing the action of the mobility matrix between immersed colloids can be done either by using analytical results for simple geometries (unbounded \\cite{Rotne1969,Yamakawa1970}, single wall \\cite{Swan2007}, spherical container \\cite{Aponte2016}), or by using numerical methods, like the Immersed Boundary Method \\cite{Peskin2002,Mittal2005} or the Force Coupling Method \\cite{Maxey2001,Lomholt2003,Yeo2010b}, that explicitly solve Stokes equations for the fluid phase on a grid and approximate the no-slip boundary conditions on the particles' surfaces via interpolation operators.\n\nThe second term in the RHS of Eq.\\ (\\ref{eq:overdamped}) corresponds to the Brownian velocities. In traditional Brownian Dynamics, $\\bm{\\mathcal{M}}^{1\/2}$ is computed with a Cholesky factorization or using polynomial approximations \\cite{Fixman1986, Ando2012}, which yield computational costs of $O(N^3)$ and $O(N^2)$ respectively, making them poorly scalable to large collections of colloids. Alternatively, Landau and Lifshitz \\cite{Landau1959} proposed a method called Fluctuating Hydrodynamics (FH). FH consists of adding a fluctuating stress tensor into Stokes equations, which, after solving Stokes equations, allows one to obtain the particle Brownian velocities without having to compute the square root of the mobility matrix. The corresponding cost depends on the fluid solver and on the discretization of the particles. FH has been successfully implemented in various numerical methods for particulate flows with \\textit{linear scaling} ($O(N)$) such as the Immersed Boundary Method \\cite{Atzberger2007,Atzberger2007b,Atzberger2011,Delong2014,Usabiaga2014}, Distributed Lagrange Multipliers \\cite{Sharma2004}, Fluid Particle Dynamics \\cite{Tanaka2000}, Finite Element Method \\cite{Plunkett2014,DeCorato2016}, Force Coupling Method \\cite{Keaveny2014,Delmotte2015}, Boundary Element Method \\cite{Bao2017} or Particle Mesh Ewald Method \\cite{Fiore2017,Fiore2018}.\n\nThe third term in the RHS of Eq.\\ (\\ref{eq:overdamped}) involves the divergence of the mobility matrix with respect to the particles' positions and orientations. It corresponds to the drift term that appears when taking the overdamped limit of Langevin equations \\cite{Fixman1978,Grassia1995,Delong2015}. In traditional Brownian Dynamics this term is never directly computed but is approximated using a midpoint time integration scheme developed by Fixman \\cite{Fixman1978}. However, this scheme has a high computational cost because it requires solving a resistance problem, i.e.\\ inverting the $6N\\times6N$ mobility matrix, at each time step. \nTo overcome this problem, a new wave of algorithms has emerged in recent years, among which are the Random Finite Differences (RFD) scheme with various extensions \\cite{Delong2014,Sprinkle2017} and the Drifter-Corrector \\cite{Delmotte2015,Fiore2017}. The main advantage of these schemes is that they only incur one or two additional evaluations of the action of the mobility matrix per time step, which makes them competitive compared to more traditional tools.\n\n\n\\subsection{Outlook}\n\nChoosing between the aforementioned methods depends on the shape of the colloids, the geometry of the system considered, and the level of resolution desired. 
For instance, to simulate the microroller instability, we used the analytic far-field mobility matrix for particles near a no-slip boundary \\cite{Swan2007}, together with an iterative method to compute the Brownian velocities \\cite{Ando2012} and RFD \\cite{Delong2014}, and with an efficient parallelization on GPUs, allowing us to run twenty thousand time steps with more than thirty thousand particles in a few hours \\cite{Balboa2017,Balboa2017b}. \nThe design of efficient numerical methods for Brownian Dynamics is an active area of research. Fluctuating Hydrodynamics is being extended to more and more numerical methods and new time integration schemes are still under development.\n\n\n\n\\section{Perspectives}\n\nOne exciting avenue of recent work is the study of non-spherical\/asymmetric colloidal particles. Significant advances have been made in individual particle fabrication \\cite{Keaveny2013,Walker2015, Vach2015,Li2015, Klotsa2015,Gutman2016,Morozov2017,Sabrina2018,Brooks2018,Garcia2018}, and there is still much room to explore the influence of particle shape on collective dynamics.\nExploring shape asymmetry opens the possibility of engineering particles to achieve a given individual or collective behaviour, such as swarming, mixing, advection, or suspension rheology. Further possibilities include flow devices which separate\/sort particles by shape, shape-influenced flow trajectories, and new kinds of shape-determined dynamic structures.\n\nWe also look forward to seeing more work on collective motion in acoustically-powered suspensions; this is a rather young area, and we feel it raises fascinating questions. For example, an interesting property of the shape-directed microspinners \\cite{Sabrina2018} we discussed is that their direction of rotation depends on their chirality. This should allow for the study of suspensions of counter-rotating particles in a single standing acoustic wave, something which is not possible with other types of external driving. So far, suspensions of counter-rotating particles at low Reynolds number have only been studied with numerical simulations \\cite{Yeo2015,Yeo2016}. These simulations showed that suspensions of counter-rotating particles could phase separate and mix passive particles. An experimental realization of such suspensions would be a major breakthrough. Another exciting direction is suggested by recent studies showing that particle density distribution can affect the advection direction and speed of colloidal particles \\cite{Ahmed2016,Collis2017}. Collis et al.\\ showed that the heaviest parts of a particle have a larger relative motion with respect to the fluid, while the lightest ones move with the fluid \\cite{Collis2017}. Thus, creating colloidal particles with an uneven mass distribution could lead to more efficient self-propulsion and have additional effects on collective hydrodynamic interactions.\nOptimizing acoustically-driven suspensions will require the assistance of theory and numerical simulations, as in these systems inertia must be taken into account. 
Numerical tools have been developed at the individual or pair level \\cite{Klotsa2007,Muller2012,Hahn2015,Collis2017,Fabre2017,Sabrina2018,Jones2018}, but further development is necessary to simulate suspensions including the effects of acoustic streaming, hydrodynamic interactions, and shape asymmetry.\n\n\n\nAs shown in recent works \\cite{Martinez2015,Martinez2015b,Massana2017a,Driscoll2017,Delmotte2017}, collective effects in rotating suspensions near boundaries can be used for guided transport of passive particles. For instance, carpets of microrollers act as an active boundary layer that generates advective flows near boundaries, while fingers and critters can trap and advect particles at large speeds over long distances \\cite{Driscoll2017}. \nAll these modes of transport have been studied on flat clean surfaces. However, in many practical situations the surfaces may be dirty, rough, curved \\cite{Ahmed2017} or elastic (such as membranes). Additionally, the suspending fluid may contain obstacles or exhibit a non-Newtonian rheology. \nTherefore, in order to evaluate the feasibility and applicability of these modes of transport in real-world biological systems, further studies are necessary. It would be interesting to evaluate the effect of the competition between obstacles and hydrodynamic interactions on the stability of micro-carpets, on the wavelength of the fingering instability, and on the robustness of the spontaneous self-assembly into critters \\cite{Morin2018,Stoop2017}. It will also be interesting to quantify the effect of fluid visco-elasticity \\cite{Patteson2016}, composition (e.g.\\ liquid crystals \\cite{Lavrentovich2016,Straube2018}), and wall elasticity \\cite{Trouilloud2008,Dias2013,Ledesma2013,Boryshpolets2013,Daddi2016,Daddi2016b,Daddi2016c,Daddi2018,Daddi2018b,Rallabandi2018} on the collective motion of driven colloidal suspensions. \n\nLarge-scale Brownian Dynamics simulations will be necessary to assist experiments on these questions. \nAs shown in the previous section, the last decade has witnessed a surge of new methods to simulate driven colloidal suspensions at large scales, close to the scale of lab experiments. Their computational efficiency surpasses that of the traditional Brownian Dynamics methods that are still currently in use. We hope these promising methods will gain visibility in the near future.\n\n\\newpage\n\\pagenumbering{gobble}\n\\begin{enumerate}\n \\item outstanding interest \\cite{Sabrina2018}: Great example of shape engineering to achieve specific motion using acoustic streaming. We are looking forward to seeing the collective behaviour of rotors in large suspensions.\n \n \\item outstanding interest \\cite{Driscoll2017}: This work combines experiments, theory and simulations to study a new fingering hydrodynamic instability in a suspension of microrollers. The fingers pinch off to form self-sustained stable motile structures called ``critters\". They show for the first time that self-assembly of microrollers can be driven by hydrodynamic interactions and offer promising applications for particle transport and mixing in microfluidic systems.\n \n \\item outstanding interest \\cite{Kokot2018}: While vortex states have been seen in a multitude of active\/driven systems, this is one of the only systems in which an emergent vortex state appears without the need for geometric confinement. 
This offers new possibilities for studying transport by localized vortices as well as the pinning effects of obstacles.\n \n \\item outstanding interest \\cite{Stoop2017, Morin2017}: Experimental studies of driven suspensions interacting with disordered obstacles. Understanding the interaction of particles with complex boundaries is a crucial ingredient for mimicking real-world transport. \n \n \\item outstanding interest \\cite{Brooks2018}: Systematic theoretical study of shape effects in ICEP propulsion. Designing driven suspensions with custom behaviors offers the possibility of new functions for microfluidic devices, and the ability to mimic biological systems in new ways.\n \n \\item outstanding interest \\cite{Fiore2017}: Extension of particle mesh Ewald methods with fluctuations. In this paper, the authors set up a very efficient method, using Fluctuating Hydrodynamics, to simulate colloidal suspensions. The computational efficiency allows them to simulate $O(10^6)$ Brownian particles per second. A high-performance implementation of their code on Graphics Processing Units (GPUs) is available as a plugin for the software package HOOMD-blue (\\url{http:\/\/glotzerlab.engin.umich.edu\/hoomd-blue\/index.html}) at the following address: \\url{https:\/\/github.com\/stochasticHydroTools\/PSE\/}.\n \n \\item special interest \\cite{Collis2017}: Theory paper that quantifies the effect of particle shape, mass distribution and frequency parameter on the mechanisms of self-propulsion under acoustic streaming with a simple, but relevant, analytical model and numerical simulations.\n \n \\item special interest \\cite{Ahmed2017}: This paper nicely combines different mechanisms to mimic the motion of leukocytes in blood channels. Particle clusters are formed using paramagnetic beads in a rotating magnetic field, wall migration is achieved using acoustic forces, and rolling motion on the walls results from the hydrodynamic coupling between the walls and the rotating clusters. \n \n \\item special interest \\cite{Balboa2017}: They use recent numerical schemes for Brownian Dynamics to simulate the fingering instability in large microroller suspensions. This paper nicely illustrates the capacity offered by recent advances in Brownian Dynamics to simulate and quantitatively reproduce experiments at the lab scale. All the codes used in this paper are documented and freely available at \\url{https:\/\/github.com\/stochasticHydroTools\/}.\n\n \n \\item special interest \\cite{Li2015}: Hybrid magneto-acoustic microswimmers whose swimming direction can be reversed depending on which field is used. In addition to the promising applications they offer at the individual level, these swimmers exhibit interesting collective behaviour in the form of localized swarm vortices that can be used for pumping and fluid mixing. \n\n \n \\item special interest \\cite{Maier2016b}: Self-assembled colloidal wheels represent a unique and high-speed mode of transport. Additionally, due to their tilt even these simple structures are chiral (have a preferred direction of transport), and thus offer a minimal model system to study non-reciprocal motion.\n \n \\item special interest \\cite{Klotsa2015}: This paper introduces a smart utilization of acoustic streaming for self-propulsion. The combination of simulations and experiments gives a clear understanding of the mechanisms at play. 
It would be interesting to see how these swimmers interact collectively.\n \n \n \\item special interest \\cite{Morin2018}: Experimental and theoretical work studying the interaction of colloidal flocks with an external field. These kinds of studies are crucial to understanding the behavior of these emergent states. Additionally, this work demonstrates that these interactions can be used to design new device functionality such as a spontaneous oscillator.\n \n \\item special interest \\cite{Yang2017}: This work demonstrates the fine tunability of ICEP flows. By creating colloids with varying surface charge and modulating the conductance of the suspending fluid, a variety of particle clusters, including both chiral and achiral structures, were created. \n\n \n \n\\end{enumerate}\n\n\\newpage\n\n\\section{Acknowledgement}\nThis work was supported primarily by the Materials Research Science and Engineering Center (MRSEC) program of the National Science Foundation under Award Number DMR-1420073. Additional support was provided by the Division of Chemical, Bioengineering, Environmental and Transport Systems program of the National Science Foundation under award CBET-1706562.\n\n \\bibliographystyle{elsarticle-num-names}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n In a sensor network, many tiny computing nodes called sensors are scattered in an area for the purpose of sensing some data and transmitting the data to nearby \\emph{base stations} for further processing. The transmission between the sensors is done by short-range radio communications. The base station is assumed to be computationally well-equipped whereas the sensor nodes are resource-starved. Such networks are used in many applications including tracking of objects in an enemy's area for military purposes, distributed seismic measurements, pollution tracking, monitoring fire and nuclear power plants, tracking patients, engineering and medical explorations like wildlife monitoring, etc. Mostly for military purposes, data collected by sensor nodes need to be encrypted before transmitting to neighboring nodes and base stations. \\\\\n\n\n\\indent The following issues make secure communication in sensor networks different from that in usual (traditional) networks:\n\\begin{itemize}\n\\item \\emph{Limited resources in sensor nodes:} Each sensor node contains a primitive processor featuring very low computing speed and only a small amount of programmable memory. An example is the popular Atmel ATmega 128L processor.\n\\item \\emph{Limited life-time of sensor nodes:} Each sensor node is battery-powered and is expected to operate for only a few days. Therefore, once the deployed sensor nodes expire, it is necessary to add some fresh nodes for continuing the data collection operation. This is referred to as the \\emph{dynamic management of security objects (like keys)}.\n\\item \\emph{Limited communication abilities of sensor nodes:} Sensor nodes have the ability to communicate with each other and with the base stations by short-range wireless radio transmission at low bandwidth and over small communication ranges (a typical example is 30 meters (100 feet)).\n\\item \\emph{Lack of knowledge about deployment configuration:} In most cases, the post-deployment network configuration is not known a priori. As a result, it is unreasonable to use security algorithms that have strong dependence on locations of sensor nodes in a sensor network.\n\\item \\emph{Mobility of sensor nodes:} Sensor nodes may be mobile or static. 
If sensor nodes are mobile then they can change the network configuration at any time. \n\\item \\emph{Issue of node capture:} A part of the network may be captured by the adversary\/enemy. The resilience against node capture is measured by comparing the number of nodes captured with the fraction of total network communications that are exposed to the adversary, \\emph{not including} the communications in which the compromised nodes are directly involved.\n\\end{itemize}\n\n\\indent Thus, it is not feasible to use public-key cryptosystems in resource constrained sensor networks. Hence, only symmetric ciphers such as DES\/IDEA\/RC5~\\cite{bk:02,jr:04} are a viable option for encryption\/decryption of secret data. But setting up symmetric keys among communicating nodes is a challenging task in a sensor network. A survey on sensor networks can be found in~\\cite{jr:06,jr:09}. \\\\\n\n\\indent The topology of sensor networks changes due to the following three phases:\n\\begin{itemize}\n\\item \\textit{Pre-deployment and deployment phase:} Sensor nodes can be deployed from a truck or a plane in the sensor field.\n\\item \\textit{Post-deployment phase:} Topology can change after deployment because of irregularities in the sensor field like obstacles, or due to jamming, noise, available energy of the nodes, malfunctioning, etc., or due to the mobile sensor nodes in the network.\n\\item \\textit{Redeployment of additional nodes phase:} Additional sensor nodes can be redeployed at any time to replace faulty or compromised sensor nodes.\n\\end{itemize} \n\n\n\\indent A protocol that establishes cryptographically secure communication links among the sensor nodes is called the \\emph{bootstrapping protocol}. Several methods~\\cite{cf:01,cf:02,cf:04,cf:07} have already been proposed to solve the bootstrapping problem. All these techniques are based on random deployment models, that is, they do not use the pre-deployment knowledge of the deployed sensor nodes. Eschenauer and Gligor~\\cite{cf:01} proposed the basic random key predistribution scheme, called the EG scheme, in which each sensor is assigned a set of keys randomly selected from a big key pool. Chan et al.~\\cite{cf:02} proposed the $q$-composite key predistribution and the random pairwise keys schemes. For both the EG and the $q$-composite schemes, the compromise of even a small number of sensors may lead to the compromise of a large fraction of the pairwise keys shared between non-compromised sensors. The random pairwise keys predistribution is perfectly secure against node capture, but it has difficulty supporting large networks. Liu and Ning's polynomial-pool based key predistribution scheme~\\cite{cf:04} and the matrix-based key predistribution proposed by Du et al.~\\cite{cf:05} improve security considerably. Liu and Ning~\\cite{jr:10} proposed an extended version of the closest pairwise keys scheme~\\cite{cf:09} for static sensor networks. Their scheme is based on the pre-deployment locations of the deployed sensor nodes and a pseudo random function (PRF) proposed by Goldreich et al.~\\cite{jr:08}. There is no communication overhead for establishing direct pairwise keys between neighbor nodes, and the scheme is perfectly secure against node capture. \\\\\n\n\\indent The rest of the paper is organized as follows. Section 2 describes our proposed scheme, called the identity based key predistribution using a pseudo random function (IBPRF). 
In Section 3, we provide a theoretical analysis of this scheme. In Section 4, we discuss the security issues with respect to our scheme. In Section 5, we provide an improved version of our scheme for distributed sensor networks. In Section 6, we compare our scheme with the previous schemes~\\cite{cf:01,cf:02,cf:04} with respect to communication overhead, network connectivity, and resilience against node captures. Finally, Section 7 concludes the paper.\n\n\n\\section{Identity Based Key Pre-Distribution using a Pseudo Random Function (IBPRF)}\n\nThe bootstrapping protocol for the random key predistribution schemes~\\cite{cf:01,cf:02,cf:04} incurs considerable communication overhead for establishing direct pairwise keys between sensor nodes in a sensor network. Our goal is to design a protocol which substantially reduces the communication overhead for establishing direct pairwise keys between sensors during the direct key establishment phase of the bootstrapping. We propose a new scheme called the \\emph{identity based key predistribution using a pseudo random function} (IBPRF), which serves this purpose. \\\\\n\n\\indent IBPRF has the following interesting properties:\n\\begin{itemize}\n\\item There is no communication overhead during the direct key establishment phase for establishing direct pairwise keys between sensors.\n\\item There is no communication overhead during the addition of new sensor nodes.\n\\item When the sensor nodes are mobile, our scheme easily establishes direct pairwise keys, with some desired probability, between the mobile sensor nodes and their physical neighbors with which they do not currently share keys.\n\\item It works for any deployment topology.\n\\end{itemize}\n\n\\indent IBPRF is based on the following two ingredients:\n\\begin{itemize}\n\\item A pseudo random function (PRF) proposed by Goldreich et al. in 1986~\\cite{jr:08}.\n\\item A master key (MK) shared between each sensor node and the key setup server.\n\\end{itemize}\n\n\\indent The different phases of this scheme are as follows. \\\\\n\\subsection{Key Pre-Distribution}\nLet $N$ be a pool of the ids of $n$ sensor nodes in a sensor network. Assume that each sensor node $u$ is capable of holding a total of $m+1$ cryptographic keys in its key ring $K_{u}$. The key predistribution has the following steps:\n\\begin{itemize}\n\\item \\emph{Step-1:} For each sensor node $u$, the key setup server randomly generates a master key $MK_{u}$.\n\\item \\emph{Step-2:} For each sensor node $u$, the key setup server selects a set $S$ of $m$ ids of sensor nodes randomly from the pool $N$, which are considered as the probable physical neighbors' ids. Let $S$ = \\{$v_{1},v_{2}, \\ldots ,v_{m}$\\}. For each node id $v_{i}\\in S$ $(i=1,2, \\ldots ,m)$, the key setup server generates a symmetric key $SK_{u,v_{i}} = \\mathit{PRF}_{MK_{v_{i}}}(u)$ as the pairwise key shared between the nodes $u$ and $v_{i}$, where $MK_{v_{i}}$ is the master key for $v_{i}$ and $u$ is the id of the node $u$.\n\\end{itemize}\n\n\\indent For each $v_{i}\\!\\in\\! S$, the key-plus-id combination $(SK_{u,v_{i}},v_{i})$ is stored in $u$'s key ring $K_{u}$. We note that each node $v_{i}$ can easily compute the same key $SK_{u,v_{i}}$ with its master key and the id of node $u$. The sensor node $v$ is called a master sensor node of $u$ if the shared key between them is calculated by $SK_{u,v} = \\mathit{PRF}_{MK_{v}}(u)$. In other words, node $u$ is called a slave sensor node of $v$ if $v$ is a master sensor node of $u$.\n
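\n\\indent As a concrete illustration of the key pre-distribution phase, the following minimal Python sketch instantiates $SK_{u,v_{i}} = \\mathit{PRF}_{MK_{v_{i}}}(u)$. The PRF of Goldreich et al.~\\cite{jr:08} is abstract; HMAC-SHA256 is used below only as a hypothetical stand-in, and the pool size $n=1000$ and key ring size $m=100$ are illustrative values, not prescriptions of the scheme.\n\\begin{verbatim}\n# A minimal sketch of key pre-distribution, assuming HMAC-SHA256 as a\n# stand-in for the abstract PRF; all parameters are illustrative.\nimport hashlib, hmac, os, random\n\ndef prf(master_key: bytes, node_id: str) -> bytes:\n    # SK_{u,v} = PRF_{MK_v}(u): the PRF keyed by MK_v, applied to u's id\n    return hmac.new(master_key, node_id.encode(), hashlib.sha256).digest()\n\nnode_ids = [f'v{i}' for i in range(1000)]            # the id pool N, n = 1000\nmaster_keys = {v: os.urandom(16) for v in node_ids}  # MK_v, held by the server\n\ndef predistribute(u: str, m: int = 100) -> dict:\n    # load u's key ring with m key-plus-id combinations (SK_{u,v_i}, v_i)\n    S = random.sample([v for v in node_ids if v != u], m)\n    return {v: prf(master_keys[v], u) for v in S}\n\nkey_ring_u = predistribute('v7')\n# any master node v recomputes the same key from MK_v and u's id alone:\nv = next(iter(key_ring_u))\nassert key_ring_u[v] == prf(master_keys[v], 'v7')\n\\end{verbatim}\n\\noindent The final assertion is the property exploited below: a master node needs no stored state about its slaves beyond its own master key and the claimed id.\n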
\n\\subsection{Direct Key Establishment}\nAfter deployment of the sensor nodes in a deployment area (i.e., target field), the sensor nodes establish direct pairwise keys between them. The direct key establishment phase has the following steps:\n\\begin{itemize}\n\\item \\emph{Step-1:} Each sensor node first locates all its physical neighbors. Nodes $u$ and $v$ are called \\emph{physical neighbors} if they are within the communication range of one another. They are called \\emph{key neighbors} if they share a pairwise key. They are said to be \\emph{direct neighbors} if they are both physical as well as key neighbors. Now, after a sensor node $u$ identifies its physical neighbors, it can easily verify which of their ids exist in its key ring $K_{u}$. If $u$ finds that it has the predistributed pairwise key $SK_{u,v} = \\mathit{PRF}_{MK_{v}}(u)$ with node $v$, then it informs sensor $v$ that it has such a key. This notification is done by sending a short message containing the id of node $u$, indicating that $u$ holds such a key. We note that this message never contains the exact value of the key $SK_{u,v}$.\n\\item \\emph{Step-2:} Upon receiving such a message, node $v$ can easily calculate the shared pairwise key $SK_{u,v} = \\mathit{PRF}_{MK_{v}}(u)$ by using its own master key and the id of node $u$.\n\\end{itemize}\n\n\n\\noindent Thus, nodes $u$ and $v$ can establish a direct pairwise key shared between them very easily and use this key for their future communication.\n\n\\subsection{Path Key Establishment}\nThis is an optional stage which, if required, increases the connectivity of the network. After direct key establishment, if the connectivity is still poor, nodes $u$ and $v$, which are physical neighbors not sharing a pairwise key, can establish a direct key between them as follows (a small sketch of this relay follows the list).\n\\begin{itemize}\n\\item \\emph{Step-1:} $u$ first finds a path $\\langle u=u_{0},u_{1},u_{2}, \\ldots ,u_{h-1},u_{h}=v \\rangle$ such that each $(u_{i},u_{i+1})$ $(i=0,1,2, \\ldots ,h-1)$ is a secure link.\n\\item \\emph{Step-2:} $u$ generates a random number $k'$ as the shared pairwise key between $u$ and $v$, encrypts it using the shared key $SK_{u,u_{1}}$, and sends it to node $u_{1}$.\n\\item \\emph{Step-3:} $u_{1}$ retrieves $k'$ by decrypting the encrypted key using $SK_{u,u_{1}}$, encrypts it using the shared key $SK_{u_{1},u_{2}}$ between $u_{1}$ and $u_{2}$, and sends it to $u_{2}$.\n\\item \\emph{Step-4:} This process is continued until the key $k'$ reaches the desired destination node $v$.\n\\end{itemize}\n\\noindent As a result, nodes $u$ and $v$ use $k'$ as the direct pairwise key shared between them for future communication. Since this process involves more communication overhead to establish a pairwise key between nodes, in practice $h=2$ or $3$ is recommended. \n
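\n\\indent The hop-by-hop relay of $k'$ can be sketched as follows. A toy XOR cipher stands in for the symmetric cipher (the DES\/IDEA\/RC5 class mentioned in Section 1); it is for illustration only and offers no real security.\n\\begin{verbatim}\n# A sketch of path key establishment over a secure path, assuming a toy\n# XOR cipher in place of a real symmetric cipher (illustration only).\nimport os\n\ndef xor_cipher(key: bytes, msg: bytes) -> bytes:\n    # encryption and decryption coincide for a XOR cipher\n    return bytes(k ^ b for k, b in zip(key, msg))\n\ndef relay_path_key(link_keys: list) -> bytes:\n    # link_keys[i] is the pairwise key securing the link (u_i, u_{i+1})\n    k_prime = os.urandom(16)                    # Step-2: u generates k'\n    packet = xor_cipher(link_keys[0], k_prime)  # encrypted for the first hop\n    for prev, nxt in zip(link_keys, link_keys[1:]):\n        # Step-3: each intermediate node decrypts and re-encrypts k'\n        packet = xor_cipher(nxt, xor_cipher(prev, packet))\n    assert xor_cipher(link_keys[-1], packet) == k_prime  # Step-4: v gets k'\n    return k_prime\n\nrelay_path_key([os.urandom(16), os.urandom(16)])  # h = 2, as recommended\n\\end{verbatim}\n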
\n\\subsection{Mobility of Sensor Nodes}\nSuppose that a sensor node $u$ moves from one location to another. Due to the location update of $u$, the connectivity of $u$ with the new neighbors may also change. In the new location, assume that $u$ finds the ids of some new physical neighbors with which it does not currently share any keys. If $v$ is one such physical neighbor, $u$ informs $v$ that it has a pairwise key with $v$. This notification takes place by sending a request message to $v$ containing the id of sensor node $u$, excluding the exact value of the key. Upon receiving this message, $v$ can immediately compute the pairwise key shared between them by executing one efficient PRF operation, using the master key $MK_{v}$ for $v$ and the id of sensor node $u$. Thus, $u$ and $v$ use this key for their future communication. \\\\\n\n\\indent After performing this stage, if sensor node $u$ still finds poor connectivity, it may opt for at most 1-hop path key establishment, because path key establishment involves more communication overhead. Of course, we assume that mobility of sensor nodes is infrequent.\n\n\\subsection{Addition of Sensor Nodes}\nIn order to add a new sensor node $u$, the key setup server selects a set $S$ of $m$ ids of sensor nodes randomly from the pool $N$. The key setup server randomly generates a master key $MK_{u}$ for node $u$. For each sensor node id $v \\in S$, the key setup server takes the master key $MK_{v}$, computes the secret key $SK_{u,v} = \\mathit{PRF}_{MK_{v}}(u)$ as the shared pairwise key between nodes $u$ and $v$, and distributes the key-plus-id combination $(SK_{u,v},v)$ to $u$. After deployment of sensor node $u$, it establishes direct pairwise keys using the direct key establishment phase of IBPRF with the physical neighbors whose ids are in $u$'s key ring $K_{u}$. \\\\\n\n\\indent Now, if $u$ still finds poor connectivity after direct key establishment, it can perform the path key establishment stage with 2 or 3 hops.\n\n\n\\section{Analysis}\nIn this section, we compute the probability of establishing direct keys between two sensor nodes during direct key establishment, and the probability of establishing a pairwise key between two sensor nodes during path key establishment. We also analyze the storage overhead and the communication overhead required by our scheme.\n\n\\subsection{Probability of Establishing Direct Keys}\nLet $p$ be the probability that two physical neighbors can establish a direct pairwise key. For the derivation of $p$, we first observe that two physical neighbors $u$ and $v$ can establish a pairwise key only if the key ring $K_{u}$ of node $u$ contains the shared secret key $SK_{u,v} = \\mathit{PRF}_{MK_{v}}(u)$ and the id of node $v$, or the key ring $K_{v}$ of node $v$ contains the shared secret key $SK_{v,u} = \\mathit{PRF}_{MK_{u}}(v)$ and the id of node $u$, since either of nodes $u$ and $v$ can initiate the establishment of a pairwise key between them. \\\\\n\n\\indent We then have $p = 1 -$ (probability that $u$ and $v$ do not establish a pairwise key). The total number of ways to select $m$ ids from the pool $N$ of size $n$ is $ \\left( \\begin{array}{c} n \\\\ m \\end{array} \\right) $. For a fixed key ring $K_{u}$ of node $u$, the total number of ways to select $K_{v}$ of a node $v$ such that $K_{v}$ does not have the id of $u$ is $ \\left( \\begin{array}{c} n-1 \\\\ m \\end{array} \\right) $. Thus, we have\n\\begin{equation}\np = 1 - \\frac{\\left( \\begin{array}{c}\n n-1 \\\\ m\n \\end{array} \\right) }\n {\\left( \\begin{array}{c}\n n \\\\ m\n \\end{array} \\right) } \n= \\frac{m}{n}.\n\\end{equation}\n \n\\noindent We note that $p$ strictly depends on the network size $n$ and the key ring size $m$. \\\\\n
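\n\\indent This closed form, together with the 1-hop estimate $p_{s}$ derived in Section 3.2 below, is easy to check numerically. The following Python sketch does so with illustrative parameter values chosen to mirror Figures 1 and 2 (the network size $n=2000$ is an assumption, not a value fixed by the analysis).\n\\begin{verbatim}\n# A numerical check of p = m/n and of the 1-hop estimate of Section 3.2;\n# parameter values mirror Figures 1 and 2 and are illustrative.\nfrom math import comb\n\ndef p_direct(m: int, n: int) -> float:\n    # probability that two nodes establish a direct pairwise key\n    exact = 1 - comb(n - 1, m) / comb(n, m)\n    assert abs(exact - m / n) < 1e-12          # the closed form p = m/n\n    return m / n\n\ndef p_1hop(p: float, d: int) -> float:\n    # pairwise key established directly or via one intermediary\n    return 1 - (1 - p) * (1 - p ** 2) ** d\n\nfor m in (100, 150, 200):\n    print(f'n=2000, m={m}: p = {p_direct(m, 2000):.3f}')\nfor d in (20, 60, 100):\n    print(f'p=0.2, d={d}: p_s = {p_1hop(0.2, d):.3f}')\n\\end{verbatim}\n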
\n\\begin{figure}[h]\n\\centering\n\\includegraphics[scale=0.31,angle=270]{dir_con.ps}\n\\caption{The probability $p$ that two sensors establish a direct pairwise key vs. the network size $n$, with $m = 100,150,200$.} \\label{Figure: }\n\\end{figure}\n\\begin{figure}[h]\n\\centering\n\\includegraphics[scale=0.31,angle=270]{path_con.ps}\n\\caption{The probability $p_{s}$ of establishing a pairwise key vs. the probability $p$ that two sensor nodes establish a direct pairwise key, with $d = 20,60,100$.} \\label{Figure: } \n\\end{figure}\n\n\\noindent It is clear from Figure 1 that when the network size is small, our scheme provides better connectivity. Therefore, our scheme cannot support a large network. In Section 5, we propose an improved version of our scheme to support large networks.\n\n\\subsection{Probability of Establishing Keys using 1-hop Path Key Establishment}\nIf $d$ is the average number of neighbor nodes that each sensor node can contact, it follows from a similar analysis in~\\cite{cf:04} that the probability of two sensor nodes establishing a pairwise key (directly or indirectly) is \n\\begin{equation}\np_{s} = 1 - (1-p)(1-p^{2})^{d}.\n\\end{equation}\n\n\\indent The network connectivity probabilities for path key establishment with 1-hop are plotted in Figure 2. From this figure it is also clear that we are able to achieve better connectivity after executing this stage, even if the network is almost disconnected initially.\n\n\\subsection{Calculation of Storage Overhead}\nEach sensor node has to store a master key which is shared with the key setup server and $m$ predistributed key-plus-id combinations. Hence, our scheme requires a storage overhead of at most $m+1$ keys for each sensor node.\n\n\\subsection{Calculation of Communication Overhead}\nFor establishing a pairwise key between two sensor nodes $u$ and $v$, one of them, say $u$, initiates a request message to node $v$ indicating that its key ring contains the shared key between them. Then, after receiving such a request, node $v$ computes the shared key between $u$ and $v$ by performing only one efficient PRF operation. Hence, the communication overhead involves only one short message for informing the other node that it has a pairwise key, and the computational overhead is due to a single efficient PRF operation.\n\n\n\n\\section{Security Considerations}\nThe security of IBPRF depends on the following facts:\n\\begin{itemize}\n\\item The security of the PRF~\\cite{jr:08}.\n\\item A node's master key MK, which is shared with the key setup server.\n\\end{itemize}\n\n\\indent It is observed that if a node's master key is not disclosed, then no matter how many pairwise keys generated by this master key are disclosed, it is still computationally difficult for an adversary to recover the master key MK as well as the non-disclosed pairwise keys generated with different sensor node ids. Again, each pre-distributed pairwise key between two sensor nodes is generated randomly by using the PRF. Thus, no matter how many sensor nodes are compromised, the direct pairwise keys between non-compromised nodes are still secure. In other words, node compromise does not eventually lead to the compromise of the direct pairwise keys between the other non-compromised nodes. In this way, our scheme provides perfect security against node captures.\n\n\\section{The Improved Scheme}\nWe note that our basic scheme (IBPRF) provides better connectivity if the network size is small, whereas it provides perfect security against node captures. 
In fact, there is no communication overhead during the establishment of the direct pairwise keys between sensors, nor during the addition of nodes after their initial deployment. \\\\\n\n\\indent To support a large sensor network, we apply our basic scheme to distributed sensor networks. The deployment region is divided into $c$ sub-regions, called cells, such that each cell can communicate with the base stations comfortably. Let the $i$-th cell be denoted by $\\mathit{cell}_{i}$. Assume that each $\\mathit{cell}_{i}$ contains $n_{i}$ sensor nodes. In practical situations, it is not always possible to deploy each node to a pre-determined location in the deployment region. We further assume that the key setup server only knows the nodes belonging to a particular cell, which will be deployed in that region randomly. In practice, this assumption is appropriate. Under this configuration, we now apply our basic scheme to each cell as follows. \\\\\n\n\\indent Let $N_{i}$ be the pool of the ids of the $n_{i}$ sensor nodes in a cell $\\mathit{cell}_{i}$. Assume that each sensor node $u$ is capable of holding a total of $m+1$ cryptographic keys. In the key pre-distribution phase, for each node $u\\in \\mathit{cell}_{i}$, the key setup server randomly generates a master key $MK_{u}$. For each node $u\\in \\mathit{cell}_{i}$, the key setup server also selects a set $S$ of $m$ ids of sensor nodes randomly from the pool $N_{i}$. For each $v\\in S$, the key setup server generates a symmetric key $SK_{u,v} = \\mathit{PRF}_{MK_{v}}(u)$ as the pairwise key shared between nodes $u$ and $v$, where $MK_{v}$ is the master key for node $v$ and $u$ is the id for node $u$. The key-plus-id combination $(SK_{u,v},v)$ is stored in $u$'s key ring $K_{u}$. After deployment of the sensor nodes, they establish direct pairwise keys using the direct key establishment phase of our basic scheme (IBPRF). The other phases, like path key establishment, mobility of sensor nodes, and addition of sensor nodes, remain the same as in our basic scheme. \\\\\n\n\\indent Thus, sensor nodes in each cell establish pairwise keys between them and communicate with each other in that cell securely. For mobility of the sensor nodes, we restrict the sensor nodes to move within a particular cell only. \\\\\n\n\\indent Let $p_{i}$ denote the probability that two sensor nodes in the $i$-th cell $\\mathit{cell}_{i}$ can establish a direct pairwise key between them. Similar to the analysis in Section 3.1, we have\n\\begin{equation}\np_{i} = 1 - \\frac{\\left( \\begin{array}{c}\n n_{i}-1 \\\\ m\n \\end{array} \\right) }\n {\\left( \\begin{array}{c}\n n_{i} \\\\ m\n \\end{array} \\right) } \n= \\frac{m}{n_{i}}.\n\\end{equation}\n \n\\noindent The (average) probability that two sensor nodes in a network of size $n=\\sum_{i=1}^{c} n_{i}$ establish a direct pairwise key between them is given by\n\\begin{equation}\np = \\frac{\\sum_{i=1}^{c} p_{i}}{c}.\n\\end{equation}\n\n\\indent Hence, we are able to achieve better connectivity for the entire network by using our improved version and selecting an appropriate size for the cells. However, the communication overhead as well as the resilience against node captures remain the same as in our basic scheme (IBPRF). We note that this improved scheme may not always work for ad hoc mode sensor networks.\n
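\n\\indent A small numerical sketch shows why the partition into cells helps: the cell-level connectivity $p_{i}=m/n_{i}$ stays high even as the total network size grows. The cell sizes and key ring size below are illustrative, not values from the analysis.\n\\begin{verbatim}\n# A sketch of the cell-averaged connectivity of the improved scheme;\n# the cell sizes and key ring size are illustrative.\ndef avg_cell_connectivity(cell_sizes: list, m: int) -> float:\n    # p = (1/c) * sum_i p_i, with p_i = m / n_i per cell\n    return sum(min(1.0, m / n_i) for n_i in cell_sizes) / len(cell_sizes)\n\ncells = [1000] * 10                          # c = 10 cells, so n = 10000\nprint(avg_cell_connectivity(cells, m=200))   # 0.2\nprint(200 / 10000)                           # 0.02 for the basic scheme\n\\end{verbatim}\n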
\n\\section{Comparison with Previous Schemes}\nIn this section, we compare both our basic scheme (IBPRF) and the improved scheme with the EG~\\cite{cf:01}, the $q$-composite (qC)~\\cite{cf:02}, and the polynomial-pool based~\\cite{cf:04} schemes with respect to communication overhead, network connectivity, and resilience against node captures. \\\\\n\n\\noindent \\emph{(1) Communication overhead} \\\\\n\\indent For the EG and the $q$-composite schemes, when a node wishes to establish pairwise keys with its physical neighbor nodes, it needs to send a list of messages encrypted with keys in its key ring. \\\\\n\n\\indent In case of the polynomial-pool based scheme, a sensor node also needs to send a list of messages encrypted with potential pairwise keys based on its polynomial shares for establishing a direct pairwise key with a physical neighbor. \\\\\n\n\\indent Thus, the communication overhead is on the order of the key ring size for these schemes. But, for our schemes, the communication overhead is only due to one short message sent by a node to inform its physical neighbor that it has a pairwise key in its key ring, and a single efficient PRF operation for computing the shared key $SK$ by the physical neighbor. Hence, both our basic scheme (IBPRF) and the improved scheme have much less communication overhead than the EG, the $q$-composite, and the polynomial-pool based schemes. \\\\ \n\n\\noindent \\emph{(2) Resilience against node capture} \\\\ \n\\indent From the analysis of the EG scheme~\\cite{cf:01} and the $q$-composite scheme~\\cite{cf:02}, it follows that even if the number of nodes captured is small, these schemes may reveal a large fraction of the pairwise keys shared between non-compromised sensors. The analysis of the polynomial-pool based scheme~\\cite{cf:04} shows that this scheme is unconditionally secure and $t$-collusion resistant. Thus, it has better resilience against node captures than the EG and the $q$-composite schemes. However, both our basic scheme (IBPRF) and the improved scheme provide perfect security against node captures. \\\\\n\n\n\\noindent \\emph{(3) Network connectivity} \\\\\n\\indent For the EG scheme~\\cite{cf:01}, the probability of establishing a direct pairwise key between two sensor nodes is \n\\begin{equation}\np_{EG} = 1 - \\frac{\\left( \\begin{array}{c}\n M-m \\\\ m\n \\end{array} \\right) }\n {\\left( \\begin{array}{c}\n M \\\\ m\n \\end{array} \\right) } \n=1 - \\prod_{i=0}^{m-1} \\frac{M-m-i}{M-i}\n\\end{equation}\n \n\\noindent where $M$ and $m$ are the key pool size and key ring size of a sensor node. \\\\\n\n\\indent For the $q$-composite scheme~\\cite{cf:02}, the probability of establishing a direct pairwise key between two sensor nodes is\n\\begin{equation}\np_{qC} = 1 - \\sum_{i=0}^{q-1} p_{i}\n\\end{equation}\n\n\\noindent where $p_{i} = \\frac{{\\left( \\begin{array}{c}\n M \\\\ i\n \\end{array} \\right)}\n {\\left( \\begin{array}{c}\n M-i \\\\ 2(m-i)\n \\end{array} \\right)}\n {\\left( \\begin{array}{c}\n 2(m-i) \\\\ m-i\n \\end{array} \\right)} }\n {\\left( \\begin{array}{c}\n M \\\\ m\n \\end{array} \\right)^{2} } $, $M$ is the key pool size and $m$ is the key ring size of a sensor node. \\\\\n
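\n\\indent The two probabilities above can be evaluated directly and compared with IBPRF's $p=m/n$. The sketch below does so for one illustrative choice of $M$, $m$, $q$, and $n$; these values are assumptions, not parameters fixed by the schemes.\n\\begin{verbatim}\n# Evaluating the EG and q-composite connectivity formulas against\n# IBPRF's p = m/n; the parameters M, m, q, n are illustrative.\nfrom math import comb\n\ndef p_eg(M: int, m: int) -> float:\n    # EG scheme: at least one key shared between the two key rings\n    return 1 - comb(M - m, m) / comb(M, m)\n\ndef p_qc(M: int, m: int, q: int) -> float:\n    # q-composite scheme: at least q keys must be shared\n    def p_share(i):  # probability of sharing exactly i keys\n        return (comb(M, i) * comb(M - i, 2 * (m - i))\n                * comb(2 * (m - i), m - i)) / comb(M, m) ** 2\n    return 1 - sum(p_share(i) for i in range(q))\n\nM, m, q, n = 10000, 200, 2, 2000\nprint(f'EG:          {p_eg(M, m):.3f}')\nprint(f'q-composite: {p_qc(M, m, q):.3f}')\nprint(f'IBPRF:       {m / n:.3f}')\n\\end{verbatim}\n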
\\indent For the polynomial-pool based scheme~\\cite{cf:04}, the probability of establishing a direct pairwise key between two sensor nodes is\n\\begin{equation}\np_{poly-pool} = 1 - \\frac{\\left( \\begin{array}{c}\n s-s' \\\\ s'\n \\end{array} \\right) }\n {\\left( \\begin{array}{c}\n s \\\\ s'\n \\end{array} \\right) } \n=1 - \\prod_{i=0}^{s'-1} \\frac{s-s'-i}{s-i}\n\\end{equation}\n\\noindent where $s$ is the polynomial-pool size and $s'$ is the number of shares given to a sensor node. Thus, we see that the EG and the $q$-composite schemes depend on $M$ and $m$. The polynomial-pool based scheme depends on $s$ and $s'$, and its maximum supported network size is bounded by $\\frac{(t+1)s}{s'}$, where $t$ is the degree of the symmetric bivariate polynomial, whereas our scheme depends on the network size $n$ and the key ring size $m$.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[scale=0.33,angle=270]{comparison.ps}\n\\caption{The probability $p$ of establishing a common key vs. the maximum supported network size $n$ in order to be resilient against node compromise. Assume that each sensor node is capable of holding $200$ keys.} \\label{Figure: }\n\\end{figure}\n\n\\indent For the comparison of network connectivity, we only consider the polynomial-pool based scheme, because it is more resilient against node compromise than the EG scheme and the $q$-composite scheme. However, both the EG scheme and the $q$-composite scheme support networks of arbitrarily large sizes. The relationship between the probability of establishing direct keys and the maximum supported network size for the polynomial-pool based scheme and our basic scheme (IBPRF) is shown in Figure 3. We assume that each sensor is capable of storing $200$ keys in its key ring. From this figure, it is very clear that our scheme provides better connectivity than the polynomial-pool based scheme in order to be resilient against node compromise.\n\n\\section{Conclusion}\nOur basic scheme (IBPRF) is an alternative to the direct key establishment phase of the bootstrapping protocol. Both IBPRF and the improved scheme offer a better trade-off among communication overhead, network connectivity, and resilience against node captures compared to the EG, the $q$-composite, and the polynomial-pool based schemes. Both schemes can also be adapted to mobile sensor networks by initiating the direct key establishment phase, and one can achieve reasonable connectivity by applying them.\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nLet $M$ be a four dimensional spacetime with the metric\n\n\\begin{equation}\ng_{\\mu\\, \\nu}=f^{-1}\\,\\eta_{1\\, \\mu\\, \\nu}-u_{\\mu}\\,u_{\\nu} \\label{mt1}\n\\end{equation}\n\n\\noindent\nwhere $\\eta_{1\\, \\mu\\, \\nu}=$ \\,diag $(0,1,1,1)$ and $u_{\\mu}=\\sqrt{f}\\,\n\\delta_{\\mu}^{0}$.\nHere Latin letters represent the space indices and $\\delta_{ij}$ is the\nthree dimensional Kronecker delta. In this work we shall use the same\nconvention as in \\cite{hwk}. The only difference is that we use Greek\nletters for four dimensional indices. 
Here $M$ is static.\nThe inverse metric is given by\n\n\\begin{equation}\ng^{\\mu \\, \\nu}= f\\, \\eta_{2\\,}^{ \\mu \\, \\nu}-u^{\\mu}\\, u^{\\nu} \\label{mt2}\n\\end{equation}\n\n\\noindent\nwhere $\\eta_{2}^{\\, \\mu \\, \\nu}=$ \\,diag $(0,1,1,1)$,\n$u^{\\mu}=g^{\\mu \\, \\nu}\\,u_{\\nu}=-{1 \\over \\sqrt{f}}\\, \\delta^{\\mu}_{0}$.\nHere $u^{\\mu}$ is a time-like four vector, $u^{\\mu}\\,u_{\\mu}=-1$.\n\n\\noindent\nThe Maxwell antisymmetric tensor and the corresponding energy momentum\ntensor are respectively given by\n\n\\begin{eqnarray}\nF_{\\mu\\, \\nu}= \\nabla_{\\nu}\\,A_{\\mu}-\\nabla_{\\mu}\\,A_{\\nu}\\\\\nM_{\\mu \\, \\nu}={1 \\over 4 \\pi}\\,(F_{\\mu\\, \\alpha}\\, F_{\\nu}^{\\alpha}-\n{1 \\over 4}\\,F^2\\,g_{\\mu \\, \\nu})\n\\end{eqnarray}\n\n\\noindent\nwhere $F^2= F^{\\mu\\, \\nu}\\,F_{\\mu \\, \\nu}$. The current vector $j^{\\mu}$\nis defined as\n\n\\begin{equation}\n\\nabla_{\\nu}\\,F^{\\mu\\, \\nu}=4\\pi\\,j^{\\mu}\n\\end{equation}\n\n\\noindent\nThe Einstein field equations for a charged dust distribution are given by\n\n\\begin{equation}\nG_{\\mu\\, \\nu}=8\\, \\pi\\,T_{\\mu\\, \\nu}=8 \\pi\\,M_{\\mu \\, \\nu}+\n(8 \\, \\pi\\, \\rho)\\, u_{\\mu}\\,u_{\\nu}\n\\end{equation}\n\n\\noindent\nwhere $\\rho$ is the energy density of the charged dust distribution and the\nfour velocity of the dust is the same vector $u^{\\mu}$ appearing in the\nmetric tensor. Very recently \\cite{gur}, we investigated the above field equations\nwith the metric given in (\\ref{mt1}). We find that $j^{\\mu}=\\rho_{e}\\, u^{\\mu}$, where $\\rho_{e}$\nis the charge density of the dust distribution. Let $\\lambda$ be a real\nfunction depending on the space coordinates. It turns out that $A_{i}=0$ and\n\n\\begin{equation}\nf={1 \\over \\lambda^2}, ~~A_{0}={k \\over \\lambda}\n\\end{equation}\n\n\\noindent\nwhere $k=\\pm 1$. Then the field equations reduce simply to the following\nequations.\n\n\\begin{eqnarray}\n\\nabla^2\\, \\lambda +4\\, \\pi \\, \\rho\\, \\lambda^3=0 \\label{eq2} \\\\\n\\rho_{e}=k\\, \\rho\n\\end{eqnarray}\n\n\\noindent\nwhere $\\nabla^2$ denotes the three dimensional Laplace operator in Cartesian\nflat coordinates. These equations represent the Einstein and Maxwell\nequations respectively. In particular the first equation (\\ref{eq2})\nis a generalization of Poisson's potential equation in\nNewtonian gravity. When $\\rho$ vanishes, the space-time metric\ndescribes the Majumdar-Papapetrou space-times~\\cite{maj,pap,hah,ksm}.\nFor the case $\\rho \\ne 0$, the reduced form of the field equations (\\ref{eq2})\nwas given quite recently in \\cite{gur} (see also \\cite{das}).\n\n\\section{Charged dust clouds}\n\nIn the Newtonian approximation $\\lambda=1+V$,\nEq.(\\ref{eq2}) reduces to the Poisson equation, $\\nabla^2\\, V+4\\pi\\, \\rho=0$.\nHence for any physical mass density $\\rho$ of the dust distribution\nwe solve the equation (\\ref{eq2}) to find the function\n$\\lambda$. This determines the space-time metric completely.\nAs an example, for a constant mass density $\\rho=\\rho_{0} >0$ we find that\n\n\\begin{equation}\n\\lambda=\\displaystyle {a \\over 2\\, \\sqrt{\\pi\\, \\rho_{0}}}\\,cn (l_{i}\\, x^{i})\n\\end{equation}\n\n\\noindent\nHere $l_{i}$ is a constant three vector, $a^2=l_{i}\\,l^{i}$, and $cn$ is one of\nthe Jacobi elliptic functions, with modulus square equal to ${1 \\over 2}$.\nThis is a model universe which is filled by an (extreme) charged dust\nwith a constant mass density.\n
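\n\\noindent\nThis solution is easy to check numerically. For a plane-wave argument $u=l_{i}\\,x^{i}$ one has $\\nabla^2 \\lambda = a^2\\, d^2 \\lambda / du^2$, so the statement reduces to the elliptic-function identity $cn^{\\prime\\prime}=-cn^3$ at modulus square ${1 \\over 2}$, which then fixes the amplitude $a/(2\\sqrt{\\pi\\,\\rho_{0}})$ required by (\\ref{eq2}). The Python sketch below verifies the identity by finite differences; it assumes SciPy's convention that \\texttt{ellipj(u, m)} takes the parameter $m=$ modulus square.\n\\begin{verbatim}\n# Numerical check that cn(u) with modulus square 1/2 obeys cn'' = -cn^3,\n# the identity behind the constant-density solution; SciPy's ellipj\n# takes the parameter m = (modulus)^2.\nimport numpy as np\nfrom scipy.special import ellipj\n\nu = np.linspace(-1.0, 1.0, 2001)\nsn, cn, dn, ph = ellipj(u, 0.5)               # modulus square = 1/2\ncn_dd = np.gradient(np.gradient(cn, u), u)    # finite-difference cn''\ninner = slice(2, -2)                          # drop one-sided end points\nassert np.allclose(cn_dd[inner], -cn[inner] ** 3, atol=1e-4)\n\\end{verbatim}\n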
\n\\section{Interior solutions}\n\nIn an asymptotically flat space-time, the function $\\lambda$\nasymptotically obeys the boundary condition $\\lambda \\rightarrow\n\\lambda_{0}$ (a constant).\nIn this case we can establish the equality of mass and charge,\n$e=\\pm m_{0}$, where $m_{0}= \\int \\rho\\, \\sqrt{-g}\\,d^3 x$. For physical\nconsiderations, our extended MP space-times\nmay be divided into inner and outer regions.\nThe interior and outer regions are defined as the regions where\n$\\rho_{i} >0$ and $\\rho=0$ respectively. Here $i=1,2,\\ldots,N$, where\n$N$ represents the number of regions. The gravitational fields of\nthe outer regions are described by any solution of the Laplace\nequation $\\nabla^2\\, \\lambda=0$, for instance by the MP metrics.\nAs an example, the extreme Reissner-Nordstr{\\\" o}m (RN) metric\n(for $r >R_{0}$), $\\lambda = \\lambda_{0}+\\displaystyle\n{\\lambda_{1} \\over r}$, may be matched to a metric with\n\n\\begin{equation}\n\\lambda=a\\, \\displaystyle { \\sin (b\\,r) \\over r},\\,\\, r < R_{0}\n\\end{equation}\n\n\\noindent\ndescribing the gravitational field of an inner region filled by\na spherically symmetric charged dust distribution with\na mass density\n\n\\begin{equation}\n\\rho=\\displaystyle \\rho(0)\\,[{b\\, r \\over \\sin (b\\,r)}]^{2}\n\\label{rho}\n\\end{equation}\n\n\\noindent\nHere $\\rho(0)={1 \\over 4\\, \\pi\\, a^2}$, $r^2=x_{i}\\,x^{i}$, and\n$a$ and $b$ are constants to be determined\nin terms of the radius $R_{0}$ of the boundary and the total mass $m_{0}$\n(or in terms of $\\rho(0)$). The boundary conditions, when reduced to conditions on\nthe function $\\lambda$ on the surface $r=R_{0}$, require both\n$\\lambda_{out}=\\lambda_{in}$ and $\\lambda^{\\prime}_{out}=\n\\lambda^{\\prime}_{in}$, where prime denotes differentiation with respect\nto $r$. They lead to \\cite{note}\n\n\\begin{eqnarray}\nb\\,R_{0}&=&\\displaystyle \\sqrt{ {3m_{0} \\over R_{0}}},\\\\\n\\lambda_{0}&=&ab \\cos (bR_{0}),\\\\\n\\lambda_{1}&=&a\\,(\\sin (bR_{0}) -bR_{0}\\, \\cos (bR_{0}))\n\\end{eqnarray}\n
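\n\\noindent\nThese matching conditions are straightforward to verify. The short sketch below uses the illustrative values $m_{0}=1$, $R_{0}=10$ (geometrized units, chosen only for the check), fixes the scale $a$ through the identification $\\lambda_{1}=m_{0}$ made below, and confirms the continuity of $\\lambda$ and $\\lambda^{\\prime}$ at $r=R_{0}$.\n\\begin{verbatim}\n# Checking the matching of the interior solution and the extreme RN\n# exterior at r = R0; the values m0 = 1, R0 = 10 are illustrative.\nimport numpy as np\n\nm0, R0 = 1.0, 10.0\nb = np.sqrt(3 * m0 / R0) / R0                 # from b*R0 = (3*m0/R0)**0.5\na = m0 / (np.sin(b * R0) - b * R0 * np.cos(b * R0))  # from lambda_1 = m0\nlam0 = a * b * np.cos(b * R0)\n\nlam_in   = lambda r: a * np.sin(b * r) / r\nlam_out  = lambda r: lam0 + m0 / r\ndlam_in  = lambda r: a * (b * r * np.cos(b * r) - np.sin(b * r)) / r ** 2\ndlam_out = lambda r: -m0 / r ** 2\n\nassert np.isclose(lam_in(R0), lam_out(R0))    # lambda continuous at R0\nassert np.isclose(dlam_in(R0), dlam_out(R0))  # lambda' continuous at R0\n\\end{verbatim}\n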
\n\\noindent\nWhen the coordinates are transformed to Schwarzschild coordinates,\ni.e., $ \\lambda_{0}\\,r+\\lambda_{1} \\rightarrow r $, the line element\nbecomes\n\n\\begin{equation}\nds^2=\\displaystyle -{1 \\over \\lambda_{0}^2}(1- {\\lambda_{1} \\over r})^2\\, dt^2+\n{dr ^2 \\over( 1-{\\lambda_{1} \\over r})^2}+r^2\\,d \\Omega^2.\n\\end{equation}\n\n\\noindent\nHence $\\lambda_{1}$ is the mass in the Newtonian approximation, and thus\n$\\lambda_{1}=m_{0}$.\n\n\\noindent\nIn this way one may eliminate\nthe singularities of the outer solutions by matching them to an inner\nsolution with a physical mass density.\n\n\n\nFor the mass density\n$\\rho= {b^{2} \\over 4 \\pi\\,\\lambda^2}$ in general,\nwe can give the complete solution.\nHere $b$ is a nonzero constant which is related to $m_{0}$ by the relation\n$b^2\\,R_{0}^3=3 m_{0}$, and we find that\n\n\\begin{equation}\n\\lambda=\\displaystyle \\sum_{l,m}\\,a_{l,m}\\,\nj_{l}(b\\,r)\\,Y_{l,m}(\\theta, \\phi)\n\\end{equation}\n\n\\noindent\nwhere $j_{l}(b\\,r)$ are the spherical Bessel functions, which are\ngiven by\n\n\\begin{equation}\nj_{l}(x)=\\displaystyle (-x)^l\\,\\big ( {1 \\over x}\\, {d \\over dx} \\big )^l\\,\n\\big ( {\\sin x \\over x} \\big )\n\\end{equation}\n\n\\noindent\nand $Y_{l,m}$ are the spherical harmonics. The constants\n$a_{l,m}$ are determined when this solution is matched to an outer solution\nwith $\\nabla^2 \\lambda=0$. The interior solution given above for\nthe extreme RN metric with density (\\ref{rho}) corresponds to $l=0$.\n\n\\section{Point particle solutions}\n\nNewtonian gravitation is governed by a linear equation of Poisson\ntype. Gravitational fields of spherical objects\nin the exterior regions may also be identified as the gravitational\nfields of masses located at discrete points (point particles located\nat the centers of the spheres) in space $(R^3)$. The solution\nof the Poisson equation with $N$ point singularities may be given by\n\n\\begin{eqnarray}\n\\lambda= 1+\\displaystyle \\sum_{i=1}^{N} {m_{i} \\over r_{i}},\\label{pnt}\\\\\nr_{i}=[(x-x_{i})^2+(y-y_{i})^2+(z-z_{i})^2]^{1 \\over 2}\n\\end{eqnarray}\n\n\\noindent\nwhere $N$ point particles with masses $m_{i}$ are located at the points\n$(x^{i},y^{i},z^{i})$ with $i=1,2,\\ldots,N$. The same solution given above\nmay also describe the exterior solution of $N$ spherical objects\n(with nonempty interiors) with total masses $m_{i}$ and radii $R_{i}$.\nThe interior gravitational fields of such spherical objects can be\ndetermined when the mass densities $\\rho_{i}$ are given. The essential point\nhere is that the limit $R_{i} \\rightarrow 0$ is allowed. 
This means that\nthe dust distribution is replaced by a distribution concentrated at the\npoints $(x^{i},y^{i},z^{i})$.\nNamely, in this limit the mass densities behave as Dirac delta functions,\n\n$$ \\displaystyle \\rho \\rightarrow \\sum_{i=1}^{N} m_{i}\\, \\delta (x-x_{i})\\,\n\\delta (y-y_{i})\\, \\delta (z-z_{i}).$$\n\n\\noindent\nThis is consistent with the\nPoisson equation $\\nabla^2 \\lambda +4\\pi\\, \\rho\\,=0$, because\n${1 \\over r_{i}}$ in the solution (\\ref{pnt}) is the Green's function, i.e.,\n\n$$ \\displaystyle \\nabla^2 {1 \\over r_{i}}= -4\\pi\\, \\delta (x-x_{i})\\,\n\\delta (y-y_{i})\\, \\delta (z-z_{i}).$$\n\nSuch a limit, i.e., $R_{i} \\rightarrow 0$, is not consistent\nin our case, $\\nabla^2 \\lambda +4\\pi\\, \\rho\\, \\lambda^3=0$.\nThe potential equation is nonlinear, and in particular in this limit\nthe product of $\\rho$ and $\\lambda^3$ does not make sense. Hence we\nremark that the Majumdar-Papapetrou metrics should represent\nthe gravitational field of $N$ objects with nonempty interiors\n(not point-like objects).\n\n\n\\section{Thin shell solutions}\n\nIn the previous section we concluded that the dust distribution cannot\nbe concentrated at a point. We observe that\nthe potential equation (\\ref{eq2}) also does not admit dust distributions on\none dimensional (string-like) structures.\nThis is compatible with the results of Geroch-Traschen \\cite{ger}.\nOn the other hand, the mass distribution $\\rho$ can be defined on surfaces.\n\nLet $S$ be a regular surface in space ($R^3$) defined by\n$S=[(x,y,z) \\in R^3 ; F(x,y,z)=0]$, where $F$ is a differentiable\nfunction in $R^3$. When the dust distribution is concentrated on\n$S$, the mass density may be represented by the Dirac delta\nfunction\n\n\\begin{equation}\n\\rho (x,y,z)= \\rho_{0}(x,y,z)\\, \\delta (F(x,y,z))\n\\end{equation}\n\n\\noindent\nwhere $\\rho_{0}(x,y,z)$ is a function of $(x,y,z)$ which is defined on $S$.\nThe function $\\lambda$ satisfying the potential equation (\\ref{eq2})\ncompatible with such shell-like distributions may be given as\n\n\\begin{equation}\n\\lambda (x,y,z)=\\lambda_{0} (x,y,z)-\\lambda_{1} (x,y,z)\\, \\theta (F)\n\\end{equation}\n\n\n\\noindent\nwhere $\\lambda_{0}$ and $\\lambda_{1}$ are differentiable functions of $(x,y,z)$\nand $\\theta (F)$ is the Heaviside step function. With these assumptions\nwe obtain\n\n\\begin{eqnarray}\n\\rho_{0}(x,y,z)={1 \\over 4 \\pi}\\,{\\vec{\\nabla} \\lambda_{1} \\cdot \\vec\n{\\nabla} F \\over (\\lambda_{0} )^3} \\vert_{S}\\\\\n\\nabla^2 \\lambda_{0}= \\nabla^2 \\lambda_{1}=0\n\\end{eqnarray}\n\n\\noindent\nand in addition $\\lambda_{1}\\vert_{S}=0$. We have some examples:\n\n\\vspace{0.5cm}\n\n\\noindent\n{\\bf 1.}\\,\\, $S$ is the plane $z=0$. We have $\\rho (x,y,z)=\n \\rho_{0}(x,y)\\, \\delta (z)$. Then it follows that\n\n\\begin{eqnarray}\n\\lambda (x,y,z)&=&\\lambda_{0}(x,y,z)-\\lambda_{2} (x,y)\\,z\\, \\theta (z)\\\\\n\\rho_{0} (x,y,z)&=&{1 \\over 4 \\pi}\\,{ \\lambda_{2} \\over (\\lambda_{0} \\vert_{S})^3}\\\\\n\\nabla^2 \\lambda_{0}&=& \\nabla^2 \\lambda_{2}=0\n\\end{eqnarray}\n\n\\vspace{0.5cm}\n\n\\noindent\n{\\bf 2.}\\,\\, $S$ is the cylinder $F=r-a=0$. We have $\\rho (r,\\theta,z)=\n \\rho_{0}(\\theta,z)\\, \\delta (r-a)$. 
Then it follows that\n\n\\begin{eqnarray}\n\\lambda (r,\\theta,z)&=&\\lambda_{0}(r,\\theta,z)-\\lambda_{2}\n(\\theta,z)\\,\\ln (r\/a)\\, \\theta (r-a)\\\\\n\\rho_{0} (r,\\theta,z)&=&{1 \\over 4 \\pi a}\\,{ \\lambda_{2} \\over (\\lambda_{0} \\vert_{S})^3}\\\\\n\\nabla^2 \\lambda_{0}&=& \\nabla^2 \\lambda_{2}=0\n\\end{eqnarray}\n\n\\noindent\nHere we remark that the limit $a \\rightarrow 0$ does not exist.\nThis means that a mass distribution on the whole $z$-axis is not allowed.\n\n\\vspace{0.5cm}\n\n\\noindent\n{\\bf 3.}\\,\\, $S$ is the sphere $F=r-a=0$. We have $\\rho (r,\\theta,\\phi)=\n \\rho_{0}(\\theta,\\phi)\\, \\delta (r-a)$. Then it follows that\n\n\\begin{eqnarray}\n\\lambda (r,\\theta,\\phi)&=&\\lambda_{0}(r,\\theta,\\phi)-\\lambda_{2}\n(\\theta,\\phi)\\,({1 \\over a}-{1 \\over r})\\, \\theta (r-a)\\\\\n\\rho_{0} (r,\\theta, \\phi)&=&{1 \\over 4 \\pi a^2}\\,{ \\lambda_{2} \\over (\\lambda_{0}\n\\vert_{S})^3}\\\\\n\\nabla^2 \\lambda_{0}&=& \\nabla^2 \\lambda_{2}=0\n\\end{eqnarray}\n\n\\noindent\nWe note that the total mass is infinite on non-compact surfaces.\n\nFor the compact case we shall consider the sphere in more detail.\nIn this case we may have $\\lambda_{0}= \\mu\\,\\lambda_{2}+\\psi$ such that\n$\\nabla^2 \\psi =0$ and $\\psi (a, \\theta, \\phi)=0$. We shall assume $\\psi=0$\neverywhere; then\n\n\\begin{equation}\n\\rho_{0}= {1 \\over 4 \\pi a^2}\\, {1 \\over \\mu^3\\,\\lambda_{2}^2}\n\\end{equation}\n\n\\noindent\nHence the total mass $m_{0}$ on $S$ is given by\n$m_{0}=\\int \\sqrt{-g}\\, \\rho\\,d^3\\,x={1 \\over \\mu}$. Let\n$\\lambda^{out}$ and $\\lambda^{in}$ denote solutions\nof (\\ref{eq2}) corresponding to the exterior and inner regions respectively.\nThey are given by\n\n\n\\begin{eqnarray}\n\\lambda^{out} (r,\\theta,\\phi)&=&\\lambda(r>a, \\theta,\\phi)=1 -{m_{0} \\over a}+\n{m_{0} \\over r}\\, \\label{sl1}\\\\\n\\lambda^{in} (r,\\theta,\\phi)&=&\\lambda(r