\makeatletter \n\renewcommand\@biblabel[1]{#1.} \n\makeatother\n\n\n\begin{document}\n\n\title{Span Labeling Approach for Vietnamese and Chinese Word Segmentation}\n\titlerunning{Span Labeling Approach for Vietnamese and Chinese Word Segmentation}\n\n\author{Duc-Vu Nguyen\inst{1,3}\and Linh-Bao Vo\inst{2,3} \and Dang Van Thin\inst{1,3} \and Ngan Luu-Thuy Nguyen\inst{2,3}}\n\n\authorrunning{D.-V. Nguyen et al.}\n\n\institute{Multimedia Communications Laboratory, University of Information Technology,\\Ho Chi Minh City, Vietnam \\ \email{\{vund, thindv\}@uit.edu.vn}\\ \and University of Information Technology, Ho Chi Minh City, Vietnam\\ \email{18520503@gm.uit.edu.vn, ngannlt@uit.edu.vn} \and Vietnam National University, Ho Chi Minh City, Vietnam}\n\n\maketitle\n\n\begin{abstract}\nIn this paper, we propose a span labeling approach to model n-gram information for Vietnamese word segmentation, namely \textsc{SpanSeg}. We compare the span labeling approach with the conditional random field by using encoders with the same architecture. Since Vietnamese and Chinese have similar linguistic phenomena, we evaluate the proposed method on the Vietnamese treebank benchmark dataset and five Chinese benchmark datasets. Through our experimental results, the proposed approach \textsc{SpanSeg} achieves higher performance than the sequence tagging approach, with a state-of-the-art F-score of 98.31\% on the Vietnamese treebank benchmark, when both apply the contextual pre-trained language model XLM-RoBERTa and the predicted word boundary information. Besides, we perform fine-tuning experiments for the span labeling approach on the BERT and ZEN pre-trained language models for Chinese, obtaining fewer parameters, faster inference time, and competitive or higher F-scores than the previous state-of-the-art approach, word segmentation with word-hood memory networks, on five Chinese benchmarks.\n\keywords{Natural Language Processing \and Word Segmentation \and Vietnamese \and Chinese.}\n\end{abstract}\n\n\n\section{Introduction}\nWord segmentation is the first essential task for both Vietnamese and Chinese. The input of Vietnamese word segmentation (VWS) is a sequence of syllables delimited by spaces. In contrast, the input of Chinese word segmentation (CWS) is a sequence of characters without explicit delimiters. The role of a Vietnamese syllable is similar to that of a Chinese character. Although deep learning can deal with many natural language processing tasks without a word segmentation phase, research on word segmentation remains necessary from a linguistic perspective. Since Vietnamese and Chinese share similar linguistic phenomena, such as overlapping ambiguity in VWS \cite{hongphuong08} and in CWS \cite{sun-etal-1998-chinese}, research on VWS and CWS is a challenging problem.\n\nMany approaches to VWS have been proposed. For instance, in the early stage of VWS, \citet{diend1} treated VWS as a stochastic transduction problem, representing the input sentence as an unweighted finite-state acceptor. Subsequently, \citet{hongphuong08} proposed an ambiguity resolver using a bi-gram language model as a component of their model for VWS. 
After that, \citet{nguyenetal2006} used conditional random fields (CRFs) and support vector machines (SVMs) for VWS. Recently, \citet{phongnt1} utilized rules based on predicted word boundaries and a classifier threshold in the post-processing stage to control overlapping ambiguities in VWS. Besides, \citet{datnq1} proposed a method for automatically learning rules based on predicted word boundaries for VWS. Furthermore, \citet{nguyen-2019-neural} proposed a joint neural network model for Vietnamese word segmentation, part-of-speech tagging, and dependency parsing. Lastly, \citet{nguyen-etal-2019-ws-svm} proposed feature extraction techniques to deal with overlapping ambiguity and to capture words containing suffixes.\n\nFrom our observation, there is more research on CWS than on VWS. Previous studies \cite{tseng-etal-2005-conditional,song-etal-2006-chinese,chen-etal-2015-long,zhang-etal-2016-transition-based, ma-etal-2018-state, higashiyama-etal-2019-incorporating} treated CWS as a character-based sequence labeling task. Contextual feature extraction was proven helpful in CWS \cite{higashiyama-etal-2019-incorporating}, and neural networks were subsequently shown to be powerful for CWS \cite{chen-etal-2015-long, ma-etal-2018-state, higashiyama-etal-2019-incorporating}. Measuring the word-hood of n-grams is an effective method in both non-neural \cite{sun-etal-1998-chinese} and neural models \cite{tian-etal-2020-improving-chinese}. Besides, multi-criteria learning from many different datasets is a strong approach \cite{chen-etal-2017-adversarial, qiu-etal-2020-concise, ke-etal-2021-pre}. Remarkably, \citet{tian-etal-2020-improving-chinese} effectively incorporated n-gram word-hood information into a neural network model.\n\nWe make two observations. Firstly, most approaches to VWS and CWS treat word segmentation as a token-based sequence tagging problem, where a token is a syllable in VWS and a character in CWS. Secondly, the approaches shared by VWS and CWS leverage context to model n-gram information, such as measuring the word-hood of n-grams in CWS; previous CWS approaches incorporate this word-hood information as a module of their models. Our research therefore asks whether a simple model can simulate the word-hood measuring operation.\n\nFrom these observations and this hypothesis, we draw inspiration from span representations in constituency parsing \cite{stern-etal-2017-minimal} to propose our \textsc{SpanSeg} model for VWS and CWS. The main idea of our \textsc{SpanSeg} is to model all n-grams in the input sentence and score them; modeling an n-gram is equivalent to finding the probability that a span is a word. Via experimental results, the proposed approach \textsc{SpanSeg} achieves higher performance than the sequence tagging approach when both utilize the contextual pre-trained language model XLM-RoBERTa and predicted word boundary information on the Vietnamese treebank benchmark, with a state-of-the-art F-score of 98.31\%. 
Additionally, we perform fine-tuning experiments for the span labeling method on the BERT and ZEN pre-trained language models for Chinese, obtaining fewer parameters, faster inference time, and competitive or higher F-scores than the previous state-of-the-art approach, word segmentation with word-hood memory networks, on five Chinese benchmarks.\n\n\section{The Proposed Framework}\n\begin{figure}[!ht]\n\centering\n\includegraphics[width=\textwidth]{fig1.pdf}\n\caption{\label{figmodel}The architecture of \textsc{SpanSeg} for VWS. The input sentence is ``\vi{h\u1ecdc sinh h\u1ecdc sinh h\u1ecdc}'' (\textit{student learn biology}), including five syllables \{``\vi{h\u1ecdc}'', ``\vi{sinh}'', ``\vi{h\u1ecdc}'', ``\vi{sinh}'', and ``\vi{h\u1ecdc}''\}. The gold-standard segmentation for the input sentence is ``\vi{h\u1ecdc\_sinh h\u1ecdc sinh\_h\u1ecdc}'', including three words \{``\vi{h\u1ecdc\_sinh}'', ``\vi{h\u1ecdc}'', and ``\vi{sinh\_h\u1ecdc}''\}. The initial BIES (Begin, Inside, End, or Singleton) word boundary tags (differing from the gold-standard segmentation) were predicted by the off-the-shelf toolkit RDRsegmenter \cite{datnq1}.}\n\end{figure}\n\nDiffering from previous studies, we regard word segmentation as a span labeling task. The architecture of our proposed model, namely \textsc{SpanSeg}, is illustrated in Figure~\ref{figmodel}, where the general span labeling paradigm is at the top of the figure. To the best of our knowledge, this paper is the first to approach word segmentation as a span labeling task. Before presenting the details of \textsc{SpanSeg}, we first look at its problem representation. In Figure~\ref{figmodel}, we consider the input sentence in the form of index (integer type) and syllable (string type) pairs as an array \{$0$: ``\vi{h\u1ecdc}'', $1$: ``\vi{sinh}'', $2$: ``\vi{h\u1ecdc}'', $3$: ``\vi{sinh}'', $4$: ``\vi{h\u1ecdc}''\}. With this representation, the gold-standard segmentation ``\vi{h\u1ecdc\_sinh h\u1ecdc sinh\_h\u1ecdc}'' (\textit{student learn biology}) (including three words \{``\vi{h\u1ecdc\_sinh}'', ``\vi{h\u1ecdc}'', and ``\vi{sinh\_h\u1ecdc}''\}) is represented by the three spans $(0,2)$ (``\vi{h\u1ecdc\_sinh}''), $(2, 3)$ (``\vi{h\u1ecdc}''), and $(3, 5)$ (``\vi{sinh\_h\u1ecdc}''). By approaching word segmentation as a span labeling task, we have three positive samples (the three circles filled with gray color) for the input sentence in Figure~\ref{figmodel}, whereas the other circles filled with white color with a solid border are negative samples. Also, in Figure~\ref{figmodel}, we note that all circles with a dashed border (e.g., spans $(0,0)$, $(1,1)$, $\dots$, $(n,n)$, where $n$ is the length of the input sentence) are skipped in \textsc{SpanSeg} because they do not represent valid spans.\n\nIn the rest of this section, we firstly introduce the representation of word segmentation as a span labeling task (in subsection~\ref{subproblem}). Secondly, we introduce the proposed span post-processing algorithm for word segmentation (in subsection~\ref{spanpost}). The first and second subsections are the two key points of our research. Thirdly, we describe the span scoring module (in subsection~\ref{scoringmodule}). In the last two subsections, we provide the encoder architectures for VWS and CWS. We describe the model \textsc{SpanSeg} for VWS (in subsection~\ref{spansegvi}). 
Lastly, we describe \textsc{SpanSeg} for CWS (in subsection~\ref{spansegzh}).\n\n\subsection{Word segmentation as span labeling task for Vietnamese and Chinese}\n\label{subproblem}\nThe input sentence of the word segmentation task is a sequence of tokens $\mathcal{X} = x_1 x_2 \dots x_{n}$ with the length of $n$. The token $x_i$ is a syllable or a character for Vietnamese or Chinese, respectively. Given the input $\mathcal{X}$, the output of word segmentation is a sequence of words $\mathcal{W} = w_1 w_2 \dots w_{m}$ with the length of $m$, where $1 \leq m \leq n$. Each word $w_j$ is constituted by one token or several consecutive tokens. So, we use the sequence of tokens $x_i x_{i+1} \dots x_{i+k-1}$ to denote that the word $w_j$ is constituted by $k$ consecutive tokens beginning at token $x_i$, where $1 \leq k \leq n$ (concretely, $k = 1$ represents single words and $2 \leq k \leq n$ represents compound words for both Vietnamese and Chinese). Inspired by the work of \citet{stern-etal-2017-minimal} for constituency parsing, we use the span $(i - 1, i - 1 + k)$ to represent the word constituted by $k$ consecutive tokens $x_i x_{i+1} \dots x_{i+k-1}$ beginning at token $x_i$. Therefore, the goal of the span labeling task for both VWS and CWS is to find the list of spans $\hat{\mathcal{S}}$ such that every token $x_i$ is covered by a span and there is no overlapping between every two spans. Formally, the word segmentation model as a span labeling task for both VWS and CWS can be formalized as:\n\begin{IEEEeqnarray}{rCl}\n\hat{\mathcal{S}} = \textsc{SpanPostProcessing}(\hat{\mathcal{Y}})\n\end{IEEEeqnarray}\nwhere \textsc{SpanPostProcessing}($\cdot$) is simply an algorithm producing word segmentation boundaries that satisfy non-overlapping between every two spans. The $\hat{\mathcal{Y}}$ is the set of predicted spans defined as follows:\n\begin{IEEEeqnarray}{rCl}\n\hat{\mathcal{Y}} = \{(l, r) | 0 \leq l \leq n - 1 ~\text{and}~l < r \leq n~\text{and}~\textsc{Scorer}(\mathcal{X}, l, r) > 0.5\}\n\end{IEEEeqnarray}\nwhere $n$ is the length of the input sentence. The \textsc{Scorer}($\cdot$) is the scoring module for the span $(l, r)$ of sentence $\mathcal{X}$. The output of \textsc{Scorer}($\cdot$) has a value in the range of 0 to 1. In our research, we choose the sigmoid function as the activation function at the last layer of the \textsc{Scorer}($\cdot$) module. Hence, word segmentation as a span labeling task is a binary classification problem. We use the binary cross-entropy loss as the cost function as follows:\n\begin{align}\nJ(\theta) = -\frac{1}{|\mathcal{D}|}\sum_{\mathcal{X, S} \in \mathcal{D}} \biggl(\frac{1}{(n(n+1))\/2}&\sum_{l=0}^{n-1} \sum_{r=l+1}^{n} \left[(l,r) \in \mathcal{S}\right]\log\big(\textsc{Scorer}(\mathcal{X}, l, r)\big) \nonumber\\\n{}+{}&\left[(l,r) \notin \mathcal{S}\right]\log\big(1 - \textsc{Scorer}(\mathcal{X}, l, r)\big) \biggr)\n\end{align}\nwhere $\mathcal{D}$ is the training set and $|\mathcal{D}|$ is the size of the training set. For each pair ($\mathcal{X, S}$) in the training set $\mathcal{D}$, we compute the binary cross-entropy loss for all spans $(l, r)$, where $0 \leq l \leq n - 1 ~\text{and}~l < r \leq n$, and $n$ is the length of sentence $\mathcal{X}$. The term $\left[(l,r) \in \mathcal{S}\right]$ has the value of $1$ if span $(l, r)$ belongs to the list $\mathcal{S}$ of sentence $\mathcal{X}$ and $0$ otherwise. Similarly, the term $\left[(l,r) \notin \mathcal{S}\right]$ has the value of $1$ if span $(l, r)$ does not belong to the list $\mathcal{S}$ of sentence $\mathcal{X}$ and $0$ otherwise. Lastly, we note that during training and prediction we discard spans with a length greater than 7 for both Vietnamese and Chinese (7 is the maximum n-gram length in \cite{diao-etal-2020-zen} for Chinese, and we choose the same value for Vietnamese according to the statistics in the work of \cite{nguyen-etal-2019-ws-svm}). A minimal sketch of this training objective is given below.
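To make the training objective above concrete, we give a minimal PyTorch-style sketch of the span enumeration and binary cross-entropy computation. This is an illustration under our own naming (\texttt{span\_scores}, \texttt{gold\_spans}, \texttt{max\_span\_len}) rather than a description of the released implementation, and it assumes that a scoring module has already produced a probability for every candidate span.
\begin{verbatim}
import torch

def span_bce_loss(span_scores, gold_spans, n, max_span_len=7):
    # span_scores[l][r]: sigmoid output of Scorer(X, l, r) (a 0-dim tensor)
    # gold_spans: set of gold-standard spans (l, r) for the sentence
    # n: sentence length; spans longer than max_span_len are discarded
    losses = []
    for l in range(n):
        for r in range(l + 1, min(l + max_span_len, n) + 1):
            target = torch.tensor(1.0 if (l, r) in gold_spans else 0.0)
            losses.append(torch.nn.functional.binary_cross_entropy(
                span_scores[l][r], target))
    # averaging approximates the 1/(n(n+1)/2) normalization in J(theta)
    return torch.stack(losses).mean()
\end{verbatim}
For the running example in Figure~\ref{figmodel}, \texttt{gold\_spans} would be $\{(0, 2), (2, 3), (3, 5)\}$.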
\n\n\subsection{Post-Processing Algorithm for Predicted Spans}\n\label{spanpost}\n\nIn subsection~\ref{subproblem}, we presented word segmentation as a span labeling task for Vietnamese and Chinese. In this subsection, we present our proposed post-processing algorithm for the predicted spans. In the set of predicted spans $\hat{\mathcal{Y}}$, there may exist overlapping between pairs of spans. We deal with this overlapping ambiguity by choosing the span with the highest score and removing the rest. The overlapping ambiguity phenomenon occurs when our \textsc{SpanSeg} predicts compound words; it also occurs in other word segmenters for Vietnamese \cite{hongphuong08} and Chinese \cite{sun-etal-1998-chinese}.\n\nApart from overlapping ambiguity, our \textsc{SpanSeg} faces the problem of missing word boundaries, which can be caused by the originally predicted spans or can arise as a result of resolving overlapping ambiguity. To deal with this problem, we recover missing word boundaries based on the predicted spans $(i - 1, i - 1 + k)$ with $k = 1$, i.e., single words. To sum up, our proposed post-processing algorithm, namely \textsc{SpanPostProcessing}, deals with both overlapping ambiguity and missing spans among the predicted spans. 
The detail of our \\textsc{SpanPostProcessing} is presented in Algorithm~\\ref{alg}.\n\n\\begin{algorithm}[!ht]\n \\caption{\\textsc{SpanPostProcessing}}\\label{alg}\n \\begin{algorithmic}[1]\n \\Require\n \\Statex The input sentence $\\mathcal{X}$ with the length of $n$;\n \\Statex The scoring module \\textsc{Scorer}($\\cdot$) for any span $(l, r)$ in $\\mathcal{X}$, where $0 \\leq l \\leq n - 1 ~\\text{and}~l < r \\leq n$;\n \\Statex The set of predicted spans $\\hat{\\mathcal{Y}}$, sorted in ascending order.\n \\Ensure\n \\Statex The list of valid predicted spans $\\hat{\\mathcal{S}}$, satisfying non-overlapping between every two spans.\n \\Statex\n \\State $\\hat{\\mathcal{S}}_\\text{novlp} = [(0, 0)]$ \\Comment{The list of predicted spans without overlapping ambiguity.}\n \\State $\\hat{\\mathcal{S}} = []$ \\Comment{The final list of valid predicted spans.}\n \\For{$\\hat{y}$ \\textbf{in} $\\hat{\\mathcal{Y}}$} \\Comment{The $\\hat{y}[0]$ is the left boundary and $\\hat{y}[1]$ is the right boundary of each span $\\hat{y}$.}\n \\If{$\\hat{\\mathcal{S}}_\\text{novlp}[\\text{-}1][1] < \\hat{y}[0]$} \\Comment{Check for missing boundary.}\n \\State $\\hat{\\mathcal{S}}_\\text{novlp}$.\\textbf{append}$\\big((\\hat{\\mathcal{S}}_\\text{novlp}[\\text{-}1][1], \\hat{y}[0])\\big)$ \\Comment{Add the missing span to $\\hat{\\mathcal{S}}_\\text{novlp}$}\n \\EndIf\n \\If{$\\hat{\\mathcal{S}}_\\text{novlp}[\\text{-}1][0] \\leq \\hat{y}[0] < \\hat{\\mathcal{S}}_\\text{novlp}[\\text{-}1][1]$} \\Comment{Check for overlapping ambiguity.}\n \\If{$\\textsc{Scorer}(\\mathcal{X}, \\hat{\\mathcal{S}}_\\text{novlp}[\\text{-}1][0], \\hat{\\mathcal{S}}_\\text{novlp}[\\text{-}1][1]) < \\textsc{Scorer}(\\mathcal{X}, \\hat{y}[0], \\hat{y}[1])$}\n \\State $\\hat{\\mathcal{S}}_\\text{novlp}$.\\textbf{pop}() \\Comment{Remove the span causing overlapping with the lower score than $\\hat{y}$.}\n \\State $\\hat{\\mathcal{S}}_\\text{novlp}$.\\textbf{append}$\\big((\\hat{y}[0], \\hat{y}[1])\\big)$ \\Comment{Add the span $\\hat{y}$ to $\\hat{\\mathcal{S}}_\\text{novlp}$.}\n \\EndIf\n \\Else{}\n \\State $\\hat{\\mathcal{S}}_\\text{novlp}$.\\textbf{append}$\\big((\\hat{y}[0], \\hat{y}[1])\\big)$ \\Comment{Add the span $\\hat{y}$ to $\\hat{\\mathcal{S}}_\\text{novlp}$.}\n \\EndIf\n \\EndFor\n \\If{$\\hat{\\mathcal{S}}_\\text{novlp}[\\text{-}1][1] < n$} \\Comment{Check for missing boundary.} \n \\State $\\hat{\\mathcal{S}}_\\text{novlp}$.\\textbf{append}$\\big((\\hat{\\mathcal{S}}_\\text{novlp}[\\text{-}1][1], n)\\big)$ \\Comment{Add the missing span to $\\hat{\\mathcal{S}}_\\text{novlp}$}\n \\EndIf\n \\For{$i, \\hat{y}$ \\textbf{in} \\textbf{enumerate}($\\hat{\\mathcal{S}}_\\text{novlp}$)} \\Comment{The $\\hat{y}[0]$ is the left boundary and $\\hat{y}[1]$ is the right boundary of each span $\\hat{y}$, and $i$ is the index of $\\hat{y}$ in list $\\hat{\\mathcal{S}}_\\text{novlp}$.}\n \\If{$0 < i$ \\textbf{and} $\\hat{\\mathcal{S}}_\\text{novlp}[i-1][1] < \\hat{y}[0]$} \\Comment{Check for missing boundary.}\n \\State \\textit{missed\\_boundaries} $= \\big[\\hat{\\mathcal{S}}_\\text{novlp}[i-1][1]\\big]$\n \\For{\\textit{bound} \\textbf{in} \\textbf{range}$\\big(\\hat{\\mathcal{S}}_\\text{novlp}[i-1][1], \\hat{y}[0]\\big)$}\n \\If{$\\textsc{Scorer}(\\mathcal{X}, \\textit{bound}, \\textit{bound} + 1) > 0.5$}\\Comment{Check for single word.}\n \\State \\textit{missed\\_boundaries}.\\textbf{append}($\\textit{bound} + 1$)\n \\EndIf\n \\EndFor\n \\State \\textit{missed\\_boundaries}.\\textbf{append}($\\hat{y}[0]$)\n \\For{\\textit{j} 
\textbf{in} \textbf{range}$\big(\textbf{len}(\textit{missed\_boundaries}) - 1\big)$}\n \State $\hat{\mathcal{S}}$.\textbf{append}$\big((\textit{missed\_boundaries}[j], \textit{missed\_boundaries}[j+1])\big)$\Comment{Add the missing span to $\hat{\mathcal{S}}$.}\n \EndFor\n \EndIf\n \State $\hat{\mathcal{S}}$.\textbf{append}$\big((\hat{y}[0], \hat{y}[1])\big)$ \Comment{Add the non-overlapping span to $\hat{\mathcal{S}}$.}\n \EndFor\n \end{algorithmic}\n\end{algorithm}
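For concreteness, the following is a direct Python transcription of Algorithm~\ref{alg}. It is our illustrative rendering rather than the authors' released code: spans are \texttt{(left, right)} tuples, \texttt{scorer(l, r)} stands for \textsc{Scorer}($\mathcal{X}, l, r$), and, beyond the pseudocode, we guard against a duplicated endpoint and drop the $(0, 0)$ sentinel from the returned list.
\begin{verbatim}
def span_post_processing(n, scorer, predicted_spans):
    # predicted_spans: spans (l, r) with score > 0.5, sorted ascending
    s_novlp = [(0, 0)]   # spans without overlapping ambiguity (sentinel)
    s_final = []         # final list of valid predicted spans
    for y in predicted_spans:
        if s_novlp[-1][1] < y[0]:                 # missing boundary
            s_novlp.append((s_novlp[-1][1], y[0]))
        if s_novlp[-1][0] <= y[0] < s_novlp[-1][1]:   # overlapping ambiguity
            if scorer(*s_novlp[-1]) < scorer(*y):
                s_novlp.pop()                     # keep the higher-scoring span
                s_novlp.append(y)
        else:
            s_novlp.append(y)
    if s_novlp[-1][1] < n:                        # trailing missing boundary
        s_novlp.append((s_novlp[-1][1], n))
    for i, y in enumerate(s_novlp):
        if i > 0 and s_novlp[i - 1][1] < y[0]:    # fill any remaining gap
            bounds = [s_novlp[i - 1][1]]
            for b in range(s_novlp[i - 1][1], y[0]):
                if scorer(b, b + 1) > 0.5:        # single-word check
                    bounds.append(b + 1)
            if bounds[-1] != y[0]:                # guard: avoid duplicate endpoint
                bounds.append(y[0])
            for j in range(len(bounds) - 1):
                s_final.append((bounds[j], bounds[j + 1]))
        if y != (0, 0):                           # drop the (0, 0) sentinel
            s_final.append(y)
    return s_final
\end{verbatim}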
\n\n\n\subsection{Span Scoring Module}\n\label{scoringmodule}\nIn the two previous subsections~\ref{subproblem} and~\ref{spanpost}, we presented the two critical points of our research, in which we mentioned the \textsc{Scorer}($\cdot$) module many times. In this subsection, we present the \textsc{Scorer}($\cdot$) module, which is based on the well-known Biaffine module \cite{dozat2017deep}. While \citet{ijcai2020-560} experimented with the Biaffine module for constituency parsing, we use the Biaffine module for span labeling word segmentation. The Biaffine module was used in \cite{dozat2017deep} to capture the directed relation between two words in a sentence for dependency parsing. In the constituency parsing problem, \citet{ijcai2020-560} used the Biaffine module to find the representation of phrases. Our research uses the Biaffine module to model the representation of n-grams for the word segmentation task.\n\nAs we can see in Figure~\ref{figmodel}, each token $x_i$ in the input sentence has two context-aware representations, the left and right boundary representations, except for the begin (``<s>'') and end (``<\/s>'') tokens. In case we use the BiLSTM (Bidirectional Long Short-Term Memory) encoder, the left boundary representation of token $x_i$ is the concatenation of the hidden state forward vector $\textbf{f}_{i-1}$ and the hidden state backward vector $\textbf{b}_{i}$, and the right boundary representation of token $x_i$ is the concatenation of the hidden state forward vector $\textbf{f}_{i}$ and the hidden state backward vector $\textbf{b}_{i+1}$, following \citet{stern-etal-2017-minimal}. In case we use the BERT \cite{devlin-etal-2019-bert} or ZEN \cite{diao-etal-2020-zen} encoder, we chunk the last hidden state vector into two vectors with the same size, serving as the forward and backward vectors of the BiLSTM encoder. Whether we use the BiLSTM, BERT, or ZEN encoder, we always have the left and right boundary representations for each token $x_i$ in the input sentence. Therefore, in Figure~\ref{figmodel}, we see that the right boundary representation $\textbf{f}_{i}\oplus \textbf{b}_{i+1}$ of token $x_i$ is also the left boundary representation of token $x_{i+1}$. As in the work of \citet{ijcai2020-560}, we use two MLPs to differentiate the right boundary representation of token $x_i$ from the left boundary representation of token $x_{i+1}$. To sum up, we have the left $\textbf{r}_i^{\text{left}}$ and right $\textbf{r}_i^{\text{right}}$ boundary representations of token $x_{i}$ as follows:\n\begin{IEEEeqnarray}{rCl}\n\textbf{r}_i^{\text{left}} &=& \text{MLP}^{\text{left}}(\textbf{f}_{i-1}\oplus \textbf{b}_{i})\\\n\textbf{r}_i^{\text{right}} &=& \text{MLP}^{\text{right}}(\textbf{f}_{i}\oplus \textbf{b}_{i+1})\n\end{IEEEeqnarray}\nFinally, inspired by \citet{ijcai2020-560}, given the input sentence $\mathcal{X}$, the span scoring module \textsc{Scorer}($\cdot$) for span $(l, r)$ in our \textsc{SpanSeg} model is computed by using a biaffine operation over the left boundary representation of token $x_l$ and the right boundary representation of token $x_r$ as follows:\n\begin{IEEEeqnarray}{rCl}\n\textsc{Scorer}(\mathcal{X}, l, r) = \text{sigmoid}\bigg( \begin{bmatrix}\textbf{r}_l^{\text{left}} \\ 1\end{bmatrix}^{\text{T}}\textbf{W}\textbf{r}_r^{\text{right}}\bigg)\n\end{IEEEeqnarray}\nwhere $\textbf{W} \in \mathbb{R}^{(d+1)\times d}$ and $d$ is the output size of the MLPs. To sum up, \textsc{Scorer}($\mathcal{X}, l, r$) gives us a score predicting whether the span $(l, r)$ is a word; a compact implementation sketch follows.
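The span scoring module can be realized compactly in batched form. The following PyTorch sketch is our minimal rendering of the equations above; the MLP output size of 500 follows the model implementation details given later, while the choice of ReLU activations and Xavier initialization are our own assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

class BiaffineSpanScorer(nn.Module):
    # Scores every span (l, r) as sigmoid([r_l^left; 1]^T W r_r^right).
    def __init__(self, enc_dim, mlp_dim=500):
        super().__init__()
        self.mlp_left = nn.Sequential(nn.Linear(enc_dim, mlp_dim), nn.ReLU())
        self.mlp_right = nn.Sequential(nn.Linear(enc_dim, mlp_dim), nn.ReLU())
        # the extra row accounts for the appended 1 in the left representation
        self.W = nn.Parameter(torch.empty(mlp_dim + 1, mlp_dim))
        nn.init.xavier_uniform_(self.W)

    def forward(self, boundaries):
        # boundaries: (n + 1, enc_dim) fence representations, where row i
        # is the concatenation f_i and b_{i+1} between tokens x_i and x_{i+1}
        left = self.mlp_left(boundaries)          # (n + 1, mlp_dim)
        right = self.mlp_right(boundaries)        # (n + 1, mlp_dim)
        ones = torch.ones(left.size(0), 1, device=left.device)
        left = torch.cat([left, ones], dim=-1)    # (n + 1, mlp_dim + 1)
        # entry [l, r] is Scorer(X, l, r); only entries with l < r are used
        return torch.sigmoid(left @ self.W @ right.t())
\end{verbatim}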
\n\n\subsection{Encoder and input representation for VWS}\n\label{spansegvi}\n\begin{sloppypar}In the three previous subsections~\ref{subproblem}, \ref{spanpost}, and \ref{scoringmodule}, we described the three parts of the \textsc{SpanSeg} model shared between Vietnamese and Chinese. In this subsection, we present the encoder and the input representation for VWS. Firstly, the default configuration of \textsc{SpanSeg} for the input representation of token $x_i$ is composed as follows:\n\begin{align}\n\textbf{default\_embedding}_i = \big(&\textbf{static\_syl\_embedding}_{i} \nonumber\\\n& + \textbf{dynamic\_syl\_embedding}_{i}\big) \oplus \textbf{char\_embedding}_i\n\end{align}\nwhere the symbol $\oplus$ denotes the concatenation operation. The $\textbf{static\_syl\_embedding}_{i}$ is extracted from the pre-trained Vietnamese syllable embeddings with the dimension of 100 provided by \citet{nguyen-etal-2017-word}, so the dimension of the vector $\textbf{dynamic\_syl\_embedding}_{i}$ is also 100. We initialize $\textbf{dynamic\_syl\_embedding}_{i}$ randomly and update its value during training, whereas we do not update $\textbf{static\_syl\_embedding}_{i}$. Besides, we obtain the character embedding $\textbf{char\_embedding}_i$ by running a BiLSTM network over the sequence of characters in token $x_i$.\end{sloppypar}\n\nThe default configuration does not utilize the Vietnamese predicted word boundary information as many previous works on VWS did. Following the work of \citet{nguyen-2019-neural}, we additionally use a boundary BIES tag embedding for the input representation of token $x_i$. Therefore, the second configuration of \textsc{SpanSeg}, namely \textsc{SpanSeg} (TAG), is presented as follows:\n\begin{align}\n\textbf{default\_tag\_embedding}_i = \textbf{default\_embedding}_i \oplus \textbf{bies\_tag\_embedding}_i\n\end{align}\nwhere the value of $\textbf{bies\_tag\_embedding}_i$ (with the dimension of 100) is initialized randomly and updated during training, and the boundary BIES tags are predicted by the off-the-shelf toolkit RDRsegmenter \cite{datnq1}.\n\nRecently, many contextual pre-trained language models have been proposed, inspired by the work of \citet{devlin-etal-2019-bert}. Our research utilizes the contextual pre-trained multilingual language model XLM-RoBERTa (XLM-R) \cite{conneau-etal-2020-unsupervised} with the \textit{base} architecture for VWS since there is no contextual pre-trained monolingual language model for Vietnamese at this time. So, the third configuration of \textsc{SpanSeg}, namely \textsc{SpanSeg} (XLM-R), is presented as follows:\n\begin{align}\n\textbf{default\_xlmr\_embedding}_i = \textbf{default\_embedding}_i \oplus \textbf{xlmr\_embedding}_i\n\end{align}\nwhere $\textbf{xlmr\_embedding}_i$ is the vector projected from the hidden states of \textit{the last four layers} of the XLM-R model. The dimension of $\textbf{xlmr\_embedding}_i$ is 100. We do not update the parameters of the XLM-R model during the training process.\n\nLastly, we make the fourth configuration of \textsc{SpanSeg}, namely \textsc{SpanSeg} (TAG + XLM-R). This configuration aims to combine syllable, character, predicted word boundary, and contextual information for VWS.\n\begin{align}\n\textbf{default\_tag\_xlmr\_embedding}_i =~& \textbf{default\_embedding}_i \nonumber\\\n & \oplus \textbf{bies\_tag\_embedding}_i \oplus \textbf{xlmr\_embedding}_i\n\end{align}\n\nAfter we have the input representation for each token $x_i$ of the input sentence $\mathcal{X}$, we feed it into the BiLSTM network to obtain the forward $f_i$ and backward $b_i$ vectors, which are used in the \textsc{Scorer}($\cdot$) module in subsection~\ref{scoringmodule}.\n\n\n\subsection{Encoder and input representation for CWS}\n\label{spansegzh}\nTo make a fair comparison with the state-of-the-art model for CWS, we use the same encoders as the work of \citet{tian-etal-2020-improving-chinese}: BERT \cite{devlin-etal-2019-bert} and ZEN \cite{diao-etal-2020-zen} with the \textit{base} architecture. BERT and ZEN are two well-known encoders utilizing contextual information for Chinese language processing, in which the ZEN encoder additionally enhances character n-gram information. For each character $x_i$ in the input sentence $\mathcal{X}$, we chunk the hidden state vector of \textit{the last layer} of BERT or ZEN into two vectors with the same size, serving as the forward $f_i$ and backward $b_i$ vectors of the BiLSTM network. Finally, the forward $f_i$ and backward $b_i$ vectors are used in the \textsc{Scorer}($\cdot$) module in subsection~\ref{scoringmodule}. We update the parameters of BERT and ZEN during training, following the work of \citet{tian-etal-2020-improving-chinese}.\n\n\section{Experimental Settings}\n\subsection{Datasets}\n\nThe largest VWS benchmark dataset\footnote{The details of the VTB dataset are presented at \url{https:\/\/vlsp.org.vn\/vlsp2013\/eval\/ws-pos}.} is a part of the Vietnamese treebank (VTB) project \cite{nguyen-etal-2009-building}. We use the same split as the work of \citet{datnq1}. The summary of the VTB dataset for the word segmentation task is provided in Table~\ref{vtbstat}.\n\n\n\begin{table}[ht]\n\caption{\label{vtbstat}Statistics of the Vietnamese treebank dataset for word segmentation. We provide the numbers of sentences, characters, syllables, words, character types, syllable types, and word types. 
We also compute the out-of-vocabulary (OOV) rate as the percentage of unseen words in the development and test sets.}\n\centering\n\resizebox{0.535\textwidth}{!}{%\n\begin{tabular}{l|r|r|r}\n\hline\n\multirow{2}{*}{} & \multicolumn{3}{c}{\textbf{VTB}} \\ \cline{2-4} \n & \multicolumn{1}{c|}{Train} & \multicolumn{1}{c|}{Dev} & \multicolumn{1}{c}{Test} \\ \hline\n\# sentences & 74,889 & 500 & 2,120 \\ \hline\n\# characters & 6,779,116 & 55,476 & 307,932 \\ \hline\n\# syllables & 2,176,398 & 17,429 & 96,560 \\ \hline\n\# words & 1,722,271 & 13,165 & 66,346 \\ \hline\n\# character types & 155 & 117 & 121 \\ \hline\n\# syllable types & 17,840 & 1,785 & 2,025 \\ \hline\n\# word types & 41,355 & 2,227 & 3,730 \\ \hline\nOOV Rate (\%) & - & 2.2 & 1.6 \\ \hline\n\end{tabular}%\n}\n\end{table}\n\n\nFor evaluating our \textsc{SpanSeg} on CWS, we employ five benchmark datasets, including MSR, PKU, AS, and CityU (from the SIGHAN 2005 Bakeoff \cite{emerson-2005-second}) as well as CTB6 \cite{xue_xia_chiou_palmer_2005}. We convert traditional Chinese characters in AS and CityU into simplified ones following previous studies \cite{chen-etal-2015-long, qiu-etal-2020-concise, tian-etal-2020-improving-chinese}. We follow the official training\/test data split of MSR, PKU, AS, and CityU, in which we randomly extract 10\% of the training dataset for development, as in many previous works. For CTB6, we use the same split as the work of \citet{tian-etal-2020-improving-chinese}. For the pre-processing phase of all CWS datasets in our research, we inherit the process\footnote{\url{https:\/\/github.com\/SVAIGBA\/WMSeg}} of \citet{tian-etal-2020-improving-chinese}. The summary of the five Chinese benchmark datasets for the word segmentation task is presented in Table~\ref{zhstat}.
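The OOV rates reported in Tables~\ref{vtbstat} and~\ref{zhstat} can be reproduced with a few lines of code. The sketch below follows our reading of the captions, namely that the OOV rate is the percentage of word tokens in an evaluation set whose types never appear in the training set; the function name is ours, and a type-based variant would differ slightly.
\begin{verbatim}
def oov_rate(train_words, eval_words):
    # train_words, eval_words: lists of word tokens from segmented text
    train_vocab = set(train_words)
    unseen = sum(1 for w in eval_words if w not in train_vocab)
    return 100.0 * unseen / len(eval_words)
\end{verbatim}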
\n\n\begin{table}[H]\n\caption{\label{zhstat}Statistics of the five Chinese benchmark datasets for word segmentation. We provide the numbers of sentences, characters, words, character types, and word types. We also compute the out-of-vocabulary (OOV) rate as the percentage of unseen words in the test set.}\n\resizebox{\textwidth}{!}{%\n\begin{tabular}{l|r|r|r|r|r|r|r|r|r|r|r}\n\hline\n\multirow{2}{*}{} & \multicolumn{2}{c|}{\textbf{MSR}} & \multicolumn{2}{c|}{\textbf{PKU}} & \multicolumn{2}{c|}{\textbf{AS}} & \multicolumn{2}{c|}{\textbf{\textsc{CityU}}} & \multicolumn{3}{c}{\textbf{CTB6}} \\ \cline{2-12} \n & \multicolumn{1}{c|}{Train} & \multicolumn{1}{c|}{Test} & \multicolumn{1}{c|}{Train} & \multicolumn{1}{c|}{Test} & \multicolumn{1}{c|}{Train} & \multicolumn{1}{c|}{Test} & \multicolumn{1}{c|}{Train} & \multicolumn{1}{c|}{Test} & \multicolumn{1}{c|}{Train} & \multicolumn{1}{c|}{Dev} & \multicolumn{1}{c}{Test} \\ \hline\n\# sentences & 86,918 & 3,985 & 19,054 & 1,944 & 708,953 & 14,429 & 53,019 & 1,492 & 23,420 & 2,079 & 2,796 \\ \hline\n\# characters & 4,050,469 & 184,355 & 1,826,448 & 172,733 & 8,368,050 & 197,681 & 2,403,354 & 67,689 & 1,055,583 & 100,316 & 134,149 \\ \hline\n\# words & 2,368,391 & 106,873 & 1,109,947 & 104,372 & 5,449,581 & 122,610 & 1,455,630 & 40,936 & 641,368 & 59,955 & 81,578 \\ \hline\n\# character types & 5,140 & 2,838 & 4,675 & 2,918 & 5,948 & 3,578 & 4,806 & 2,642 & 4,243 & 2,648 & 2,917 \\ \hline\n\# word types & 88,104 & 12,923 & 55,303 & 13,148 & 140,009 & 18,757 & 68,928 & 8,989 & 42,246 & 9,811 & 12,278 \\ \hline\nOOV Rate (\%) & - & 2.7 & - & 5.8 & - & 4.3 & - & 7.2 & - & 5.4 & 5.6 \\ \hline\n\end{tabular}%\n}\n\end{table}\n\n\n\subsection{Model Implementation}\n\subsubsection{The detail of \textsc{SpanSeg} for Vietnamese}\nFor the encoder mentioned in subsection~\ref{spansegvi}, the number of BiLSTM layers is $3$, and the hidden size of the BiLSTM is 400. The size of the MLPs mentioned in subsection~\ref{scoringmodule} is 500. The dropout rate for the embeddings, BiLSTM, and MLPs is 0.33. We inherit these hyper-parameters from the work of \cite{dozat2017deep}. We trained all models for up to 100 epochs with an early stopping strategy with a patience of 20 epochs. We used the AdamW optimizer \cite{loshchilov2019decoupled} with the default configuration and a learning rate of $\text{10}^{\text{-3}}$. The batch size for training and evaluating is up to 5000.\n\n\subsubsection{The detail of \textsc{SpanSeg} for Chinese}\nFor the encoder mentioned in subsection~\ref{spansegzh}, we perform fine-tuning experiments based on the BERT \cite{devlin-etal-2019-bert} and ZEN \cite{diao-etal-2020-zen} encoders. The size of the MLPs mentioned in subsection~\ref{scoringmodule} is 500. The dropout rate for BERT and ZEN is 0.1. We trained all models for up to 30 epochs with an early stopping strategy with a patience of 5 epochs. We used the AdamW optimizer \cite{loshchilov2019decoupled} with the default configuration and a learning rate of $\text{10}^{\text{-5}}$. The batch size for training and evaluating is 16.\n\n\n\section{Results and Analysis}\n\n\n\subsection{Main Results}\n\nFor VWS, we also implement the BiLSTM-CRF model with the same backbone and hyper-parameters as our \textsc{SpanSeg}. The overall results are presented in Table~\ref{mainvtb}. On the default configuration, our \textsc{SpanSeg} gives a higher result than BiLSTM-CRF, with an F-score of 97.76\%. On the configuration with the pre-trained XLM-R, our \textsc{SpanSeg} (XLM-R) gives a higher result than BiLSTM-CRF (XLM-R), with an F-score of 97.95\%. 
On the configuration with the predicted boundary BIES tags from the off-the-shelf toolkit RDRsegmenter \cite{datnq1}, BiLSTM-CRF (TAG) gives a higher result than our \textsc{SpanSeg} (TAG), with an F-score of 98.10\%. Finally, on the configuration with a combination of all features, our \textsc{SpanSeg} (TAG+XLM-R) gives a higher result than BiLSTM-CRF (TAG+XLM-R), with an F-score of 98.31\%, which is also the state-of-the-art performance on VTB. We can see that contextual information is essential for \textsc{SpanSeg} since \textsc{SpanSeg} models the left and right boundaries of a word rather than the boundary between two consecutive tokens.\n\n\n\begin{table}[ht]\n\caption{\label{mainvtb}Performance (F-score) comparison between \textsc{SpanSeg} (with different configurations) and previous state-of-the-art models on the test set of the VTB dataset.}\n\centering\n\resizebox{0.65\textwidth}{!}{%\n\begin{tabular}{l|c|c|c|c}\n\hline\n\multirow{2}{*}{} & \multicolumn{4}{c}{\textbf{VTB}} \\ \cline{2-5} \n & P & R & F & $\text{R}_\text{OOV}$ \\ \hline\nvnTokenizer \cite{hongphuong08} & 96.98 & 97.69 & 97.33 & - \\ \hline\nJVnSegmenter-Maxent \cite{nguyenetal2006} & 96.60 & 97.40 & 97.00 & -\\ \hline\nJVnSegmenter-CRFs \cite{nguyenetal2006} & 96.63 & 97.49 & 97.06 & - \\ \hline\nDongDu \cite{luu2012} & 96.35 & 97.46 & 96.90 & - \\ \hline\nUETsegmenter \cite{phongnt1} & 97.51 & 98.23 & 97.87 & - \\ \hline\nRDRsegmenter \cite{datnq1} & 97.46 & 98.35 & 97.90 & - \\ \hline\nUITsegmenter \cite{nguyen-etal-2019-ws-svm} & 97.81 & \textbf{98.57} & 98.19 & - \\ \hline\nBiLSTM-CRF & 97.42 & 97.84 & 97.63 & 72.47 \\\n\textsc{SpanSeg} & 97.58 & 97.94 & 97.76 & \textbf{74.65} \\\nBiLSTM-CRF (XLM-R) & 97.69 & 97.99 & 97.84 & 72.66 \\\n\textsc{SpanSeg} (XLM-R) & 97.75 & 98.16 & 97.95 & 70.01 \\\nBiLSTM-CRF (TAG) & 97.91 & 98.28 & 98.10 & 69.16 \\\n\textsc{SpanSeg} (TAG) & 97.67 & 98.28 & 97.97 & 65.94 \\\nBiLSTM-CRF (TAG+XLM-R) & 97.94 & 98.44 & 98.19 & 68.87 \\\n\textsc{SpanSeg} (TAG+XLM-R) & \textbf{98.21} & 98.41 & \textbf{98.31} & 72.28 \\ \hline\n\end{tabular}%\n}\n\end{table}\n\nFor CWS, we present the performance of our \textsc{SpanSeg} in Table~\ref{zhmain}. We do not compare our method with previous studies approaching multi-criteria learning since their training data are simply different. Our research focuses on the comparison between our \textsc{SpanSeg} and sequence tagging approaches. Firstly, we can see that our \textsc{SpanSeg} (BERT) achieves higher results than the state-of-the-art method \textsc{WMSeg} (BERT-CRF) \cite{tian-etal-2020-improving-chinese} on four datasets, including MSR (98.31\%), PKU (96.56\%), AS (96.62\%), and CTB6 (97.26\%), except CityU (97.74\%). Our \textsc{SpanSeg} (ZEN) does not achieve performance as stable as that of \textsc{SpanSeg} (BERT). The potential reason is that both the ZEN \cite{diao-etal-2020-zen} encoder and our \textsc{SpanSeg} try to model n-grams of Chinese characters, causing inconsistency.\n\n\begin{table}[ht]\n\caption{\label{zhmain}Performance (F-score) comparison between \textsc{SpanSeg} (BERT and ZEN) and previous state-of-the-art models on the test sets of the five Chinese benchmark datasets. 
The symbol [$\\bigstar$] denotes the methods learning from data annotated through different segmentation criteria, which means that the labeled training data are different from the rest.}\n\\resizebox{\\textwidth}{!}{%\n\\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c}\n\\hline\n\\multirow{2}{*}{} & \\multicolumn{2}{c|}{\\textbf{MSR}} & \\multicolumn{2}{c|}{\\textbf{PKU}} & \\multicolumn{2}{c|}{\\textbf{AS}} & \\multicolumn{2}{c|}{\\textbf{\\textsc{CityU}}} & \\multicolumn{2}{c}{\\textbf{CTB6}} \\\\ \\cline{2-11} \n & F & $\\text{R}_\\text{OOV}$ & F & $\\text{R}_\\text{OOV}$ & F & $\\text{R}_\\text{OOV}$ & F & $\\text{R}_\\text{OOV}$ & F & $\\text{R}_\\text{OOV}$ \\\\ \\hline\n\\citet{chen-etal-2015-long} & 97.40 & - & 96.50 & - & - & - & - & - & 96.00 & - \\\\\n\\citet{xu-sun-2016-dependency} & 96.30 & - & 96.10 & - & - & - & - & - & 95.80 & - \\\\\n\\citet{zhang-etal-2016-transition-based} & 97.70 & - & 95.70 & - & - & - & - & - & 95.95 & - \\\\\n\\citet{chen-etal-2017-adversarial} [$\\bigstar$] & 96.04 & 71.60 & 94.32 & 72.64 & 94.75 & 75.34 & 95.55 & 81.40 & - & - \\\\\n\\citet{wang-xu-2017-convolutional} & 98.00 & - & 96.50 & - & - & - & - & - & - & - \\\\\n\\citet{zhou-etal-2017-word} & 97.80 & - & 96.00 & - & - & - & - & - & 96.20 & - \\\\\n\\citet{ma-etal-2018-state} & 98.10 & 80.00 & 96.10 & 78.80 & 96.20 & 70.70 & 97.20 & 87.50 & 96.70 & 85.40 \\\\\n\\citet{GongCGQ19} & 97.78 & 64.20 & 96.15 & 69.88 & 95.22 & 77.33 & 96.22 & 73.58 & - & - \\\\\n\\citet{higashiyama-etal-2019-incorporating} & 97.80 & - & - & - & - & - & - & - & 96.40 & - \\\\\n\\citet{qiu-etal-2020-concise} [$\\bigstar$] & 98.05 & 78.92 & 96.41 & 78.91 & 96.44 & 76.39 & 96.91 & 86.91 & - & - \\\\\\hline\n\\textsc{WMSeg (BERT-CRF)} \\cite{tian-etal-2020-improving-chinese} & 98.28 & \\textbf{86.67} & 96.51 & \\textbf{86.76} & 96.58 & 78.48 & 97.80 & 87.57 & 97.16 & 88.00 \\\\\n\\textsc{WMSeg (ZEN-CRF)} \\cite{tian-etal-2020-improving-chinese} & \\textbf{98.40} & 84.87 & 96.53 & 85.36 & \\textbf{96.62} & \\textbf{79.64} & 97.93 & \\textbf{90.15} & \\textbf{97.25} & \\textbf{88.46} \\\\ \\hline\n\\textsc{MetaSeg} \\cite{ke-etal-2021-pre} [$\\bigstar$] & 98.50 & - & 96.92 & - & 97.01 & - & 98.20 & - & 97.89 & - \\\\ \\hline\n\\textsc{SpanSeg (BERT)} & 98.31 & 85.32 & \\textbf{96.56} & 85.53 & \\textbf{96.62} & 79.36 & 97.74 & 87.45 & \\textbf{97.25} & 87.91 \\\\\n\\textsc{SpanSeg (ZEN)} & 98.35 & 85.66 & 96.35 & 83.66 & 96.52 & 78.43 & \\textbf{97.96} & 90.11 & 97.17 & 87.76 \\\\\n\\hline\n\\end{tabular}%\n}\n\\end{table}\n\n\\begin{table}[ht]\n\\caption{\\label{timetab}Statistics of model size (MB) and inference time (minute) of \\textsc{WMSeg} \\cite{tian-etal-2020-improving-chinese} and our \\textsc{SpanSeg} dealing with the training set of the AS dataset on Chinese. We use the same batch size as the work of \\citet{tian-etal-2020-improving-chinese}. 
Inference is performed using a Tesla P100-PCIE GPU with a memory size of 16,280 MiB via Google Colaboratory.}\n\centering\n\resizebox{0.65\textwidth}{!}{%\n\begin{tabular}{l|c|r|c|r}\n\hline\n\multicolumn{1}{r|}{\multirow{2}{*}{}} & \multicolumn{2}{c|}{\textbf{BERT Encoder}} & \multicolumn{2}{c}{\textbf{ZEN Encoder}} \\ \cline{2-5} \n\multicolumn{1}{r|}{} & \textbf{\textsc{WMSeg}} & \multicolumn{1}{c|}{\textbf{\textsc{SpanSeg}}} & \textbf{\textsc{WMSeg}} & \multicolumn{1}{c}{\textbf{\textsc{SpanSeg}}} \\ \hline\nSize (MB) & \multicolumn{1}{r|}{704} & 397 & \multicolumn{1}{r|}{1,150} & 872 \\ \hline\nInference Time (minutes) & \multicolumn{1}{r|}{28} & 15 & \multicolumn{1}{r|}{46} & 32 \\ \hline\n\end{tabular}%\n}\n\end{table}\n\nLastly, we test \textsc{WMSeg} and our \textsc{SpanSeg} on the largest Chinese benchmark dataset, AS, to discuss model size and inference time. The statistics are presented in Table~\ref{timetab}, showing that our \textsc{SpanSeg} has a smaller size and faster inference time than \textsc{WMSeg}. These statistics can be explained by the fact that \textsc{WMSeg} \cite{tian-etal-2020-improving-chinese} contains word-hood memory networks to encode both n-grams and the word-hood information, while our \textsc{SpanSeg} encodes n-gram information via span representations.\n\n\n\n\n\subsection{Analysis}\n\n\begin{table}[H]\n\caption{\label{viambi}Error statistics of the overlapping ambiguity problem involving three consecutive tokens on the VWS dataset. The symbols \cmark~and \xmark~denote predicting correctly and incorrectly, respectively.}\n\centering\n\resizebox{0.6\textwidth}{!}{%\n\begin{tabular}{c|c|r|r|r|r}\n\hline\n\multicolumn{1}{l|}{\multirow{2}{*}{\textbf{BiLSTM-CRF}}} & \multicolumn{1}{l|}{\multirow{2}{*}{\textbf{\textsc{SpanSeg}}}} & \multicolumn{4}{c}{\textbf{Configuration}} \\ \cline{3-6} \n\multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{c|}{\textbf{Default}} & \multicolumn{1}{c|}{\textbf{XLM-R}} & \multicolumn{1}{c|}{\textbf{TAG}} & \multicolumn{1}{c}{\textbf{TAG+XLM-R}} \\ \hline\n\xmark & \xmark & 15 & 5 & 19 & 7 \\ \hline\n\cmark & \xmark & \textbf{7} & \textbf{0} & 4 & 0 \\ \hline\n\xmark & \cmark & \textbf{7} & \textbf{0} & \textbf{18} & \textbf{1} \\ \hline\n\end{tabular}%\n}\n\end{table}\n\nTo explore how our \textsc{SpanSeg} learns to predict VWS and CWS, we collect statistics of the overlapping ambiguity problem involving three consecutive tokens. In the first case, given the gold-standard tags ``\textbf{\color{blue}\underline{B E} S}'', a prediction is incorrect if it outputs the tags ``\textbf{\color{red}S \underline{B E}}'' and correct if it outputs the tags ``\textbf{\color{blue}\underline{B E} S}''. In the second case, given the gold-standard tags ``\textbf{\color{blue}S \underline{B E}}'', a prediction is incorrect if it outputs the tags ``\textbf{\color{red}\underline{B E} S}'' and correct if it outputs the tags ``\textbf{\color{blue}S \underline{B E}}''. Notably, we do not count predictions that fall outside these two cases. We present the error statistics for Vietnamese in Table~\ref{viambi}. We can see that the contextual information from XLM-R helps both BiLSTM-CRF and our \textsc{SpanSeg} in reducing ambiguity. 
However, according to Table~\ref{mainvtb}, the predicted word boundary information helps both BiLSTM-CRF and our \textsc{SpanSeg} in increasing overall performance but causes the overlapping ambiguity problem. Our \textsc{SpanSeg} (TAG) resolves overlapping ambiguity better than BiLSTM-CRF (TAG) when utilizing predicted word boundary information. Lastly, we also provide error statistics for Chinese in Table~\ref{zhambi}. We can see that overlapping ambiguity is a crucial problem for both \textsc{WMSeg} \cite{tian-etal-2020-improving-chinese} and our \textsc{SpanSeg} on the MSR, PKU, and AS datasets.\n\n\begin{table}[H]\n\caption{\label{zhambi}Error statistics of the overlapping ambiguity problem involving three consecutive tokens on the five Chinese benchmark datasets. The symbols \cmark~and \xmark~denote predicting correctly and incorrectly, respectively.}\n\centering\n\resizebox{0.6\textwidth}{!}{%\n\begin{tabular}{c|c|r|r|r|r|r}\n\hline\n\textbf{\textsc{\textbf{WMSeg}}} \cite{tian-etal-2020-improving-chinese} & \textbf{\textsc{\textbf{SpanSeg}}} & \multicolumn{1}{c|}{\textbf{MSR}} & \multicolumn{1}{c|}{\textbf{PKU}} & \multicolumn{1}{c|}{\textbf{AS}} & \multicolumn{1}{c|}{\textbf{\textsc{CityU}}} & \multicolumn{1}{c}{\textbf{CTB6}} \\ \hline\n\xmark & \xmark & 14 & 13 & 12 & 2 & 3 \\ \hline\n\cmark & \xmark & \textbf{2} & \textbf{2} & 2 & \textbf{1} & \textbf{2} \\ \hline\n\xmark & \cmark & \textbf{2} & 1 & \textbf{5} & 0 & 0 \\ \hline\n\end{tabular}%\n}\n\end{table}\n\n\n\n\section{Conclusion}\nThis paper proposes a span labeling approach, namely \textsc{SpanSeg}, for VWS. Straightforwardly, our approach encodes n-gram information by using span representations. We evaluate our \textsc{SpanSeg} on the Vietnamese treebank dataset for the word segmentation task with the predicted word boundary information and the contextual pre-trained embeddings from the XLM-RoBERTa model. The experimental results on VWS show that our \textsc{SpanSeg} is better than BiLSTM-CRF when utilizing the predicted word boundary and contextual information, with a state-of-the-art F-score of 98.31\%. We also evaluate our \textsc{SpanSeg} on five Chinese benchmark datasets to verify our approach. Through the experimental results, our \textsc{SpanSeg} achieves competitive or higher F-scores, with fewer parameters and faster inference time, than the previous state-of-the-art method, \textsc{WMSeg}. Lastly, we also show that overlapping ambiguity is a complex problem for VWS and CWS. Via the error analysis on the Vietnamese treebank dataset, we found that utilizing the predicted word boundary information causes overlapping ambiguity; however, our \textsc{SpanSeg} handles this case better than BiLSTM-CRF. Finally, our \textsc{SpanSeg} will be made available to the open-source community for further research and development.\n\n\bibliographystyle{splncs04nat}\n\n\end{document}\n\n\section{Introduction}\n \label{sec:intro}\n \n With the discovery of 1I\/2017 U1 (`Oumuamua) we now have our first glimpse of an interstellar object \citep{meech2017}. With a velocity at infinity of around 26~km\/s, and an inclination of 122$^{\circ}$ that precludes a close encounter with one of the Solar system planets\footnote{Orbital parameters taken from the JPL small body database, \url{ssd.jpl.nasa.gov\/sbdb.cgi}}, 1I\/`Oumuamua is securely of interstellar origin. 
Furthermore, \\citet{mamajek2017} showed that the trajectory of 1I\/`Oumuamua prior to encountering the Solar system is not consistent with a recent ejection from a nearby star and that its velocity relative to the galactic background is close to the local standard of rest. This suggests that 1I\/`Oumuamua was ejected at low speed from its parent system and that it has been wandering interstellar space for a long time since.\n \n The existence of 1I\/`Oumuamua can also be used to place constraints on the mass of material typically ejected by planetary systems, \\citep[e.g.][]{laughlin2017, raymond2017}, albeit that with only a single object such estimates are subject to large uncertainties. In addition while the orbital characteristics of 1I\/`Oumuamua are consistent with our expectations for an interstellar object, its physical characteristics are rather more surprising, in particular its lack of observable activity \\citep[e.g.][]{jewitt2017} and highly elongated shape \\citep[e.g.][]{bolin2017, meech2017}.\n \n \\citet{cuk2017} recently noted that binary and multiple star systems could be a major source of interstellar bodies and suggested that 1I\/`Oumuamua might have originated as a fragment of a much larger body that was tidally disrupted during ejection from its parent system.\n \n In this work we quantitatively examine this scenario, showing that tidal disruptions are unlikely, but that tight binary systems can nevertheless eject an amount of rocky material comparable to the predominantly icy material thrown out by single and wide binary star systems. We begin in Section~\\ref{sec:comp} by summarising the current state of knowledge of the composition of 1I\/`Oumuamua, in particular examining whether it is a rocky or icy object since this has important implications for its origin. In Section~\\ref{sec:ejection} we outline our rationale for expecting that binary systems may be a major source of ejected material and then describe our methods in Section~\\ref{sec:method}. We present the results of our analysis in Section~\\ref{sec:results} and discuss their implications, before summarising in Section~\\ref{sec:conc}.\n \n \\section{The composition and nature of 1I\/`Oumuamua}\n \\label{sec:comp}\n \n Three clues to the composition of 1I\/`Oumuamua are its spectral reflectance profile, lack of a coma and elongated shape.\n A number of groups have obtained photometric colours across a variety of bands in the visible and near infrared \\citep[e.g.][]{bannister2017, bolin2017, jewitt2017, masiero2017, meech2017, ye2017}. These observations reveal an object with a relatively constant shallow red slope to its reflectance in the visible and near infrared. It is redder than some inner Solar system asteroid classes and does not appear to demonstrate the broad absorption longwards of 0.75~$\\mu$m seen in many asteroid classes. However it is also significantly less red than many Kuiper belt objects. Rather it seems fairly similar to D-type asteroids or the nuclei of long period comets.\n \n Despite its spectral similarity to volatile rich objects,\n 1I\/`Oumuamua has shown no detectable level of activity. \\citet{jewitt2017} place an upper limit on the rate of mass loss due to activity at $\\sim 2\\times 10^{-4}$~kg~s$^{-1}$, limiting the area of exposed water ice on the surface to $<$1~m$^2$. 
As they point out however, this does not preclude the existence of water ice within the interior of 1I\/`Oumuamua since regolith can be an extremely good insulator (with thermal diffusivities as low as $\\sim10^{-8}$~m$^2$~s$^{-1}$) and as such the thermal impulse during its transit of the inner Solar system may only have penetrated around 0.5-1~m below the surface at the present time. \n \n One might propose that since 1I\/`Oumuamua has spent a very long time outside the envelope of a stellar magnetosphere, exposed to the background galactic radiation environment, it might have developed a thick volatile depleted layer \\citep[e.g.][]{fitzsimmons2018}. We find this unlikely however, as the nuclei of long period comets spend the majority of their time in the outer reaches of the Oort cloud, experiencing the same interstellar radiation field that 1I\/`Oumuamua would have done, yet they become active within the inner solar system, an issue also acknowledged by \\citet{fitzsimmons2018}. Additionally, detailed calculations have shown that interstellar radiation is only capable of altering the upper few centimetres, even after over 10$^9$~years beyond the Solar heliopause, where exposure is highest to the most chemically significant radiation \\citep{cooper2003}. This penetration depth would be sufficient to change the colour of the surface, but probably not sufficient to generate an insulating layer that could keep volatiles from degassing during passage through the inner Solar system.\n \n As such we conclude that the lack of activity from 1I\/`Oumuamua is unlikely to be related to radiation exposure during its long sojourn in interstellar space. Instead we argue that the lack of activity is related to its origin: either it is a truly volatile poor body, or it underwent sufficient processing within its parent system to generate a thick ($\\ga$0.5~m) insulating crust. Both of these options imply that 1I\/`Oumuamua spent a significant period of time within the inner parts of its parent system prior to being ejected.\n \n \\section{The case for binary systems as a major source of ejected material}\n \\label{sec:ejection}\n \n If we consider a star with a companion and a population of small bodies, simple analysis shows that for ejection to dominate over accretion in encounters between the small bodies and the companion, the escape velocity of the companion should exceed the Keplerian orbital velocity at its semi-major axis \\citep[e.g.][]{wyatt2017}. This is independent of whether the companion is a planet or a star. In the Solar system Jupiter, Saturn, Uranus and Neptune all satisfy this criterion, however as discussed by \\citet{wyatt2017} the timescale for ejection is also important. For Neptune and Uranus, while ejection dominates over accretion, bodies can take 100s of millions of years to be ejected, long enough for galactic tides to perturb the bodies such that they enter the Oort cloud. Moreover, as we discussed in Section~\\ref{sec:comp} 1I\/`Oumuamua appears to be rocky or devolatilised, suggesting it was ejected from inside the ice line.\n \n For a solar-mass star, efficient ejection inside typical ice line distances of a few AU within the first few Myr of the life of the system requires that the companion has a mass greater than that of Saturn. However, radial velocity surveys show that the occurrence rate of giant planets at orbital periods of 100-400 days is low---approximately 3\\% \\citep{santerne2016} rising to around 10\\% for orbital periods of $<$10 years \\citep{mayor2011}. 
As such, we expect that at most 10\% of Sun-like single stars will host a planet capable of efficiently ejecting material interior to the ice line. \citet{laughlin2017} and \citet{raymond2017} thus argue that if 1I\/`Oumuamua is indeed rocky, then typical extrasolar asteroid belts must be unusually massive.\footnote{While material can still be ejected from our own inner Solar system as a result of the terrestrial planets passing material out to Jupiter, the timescales for doing so are longer ($\sim$10-100~Myr), which means that the total mass can be significantly depleted by collisional processing prior to ejection \citep{jackson2012, shannon2015}.} Similarly, recent results from micro-lensing surveys \citep[e.g.][]{suzuki2016, mroz2017} suggest that giant planets at larger separations are also not common.\n \n While giant planets are relatively uncommon, tight binary systems are abundant \citep{duchene2013}, and are extremely efficient at ejecting material \citep{smullen2016}. They may therefore represent a dominant source of interstellar small bodies. We now examine this hypothesis in detail.\n \n \section{Method}\n \label{sec:method}\n \n We want to be able to examine the ejection of small bodies from binary systems, and compare this with single stars, in a way that is consistent across spectral classes. As summarized by \citet{duchene2013}, the properties of binary systems, in particular their separation and their multiplicity frequency, are dependent on the mass of the primary, though they note that these dependences are subject to a fair degree of uncertainty and are the subject of active investigation. As such, we need to take into account the relative abundance of stars of different masses to be able to build a complete picture of the binary population. To do this we construct a population synthesis model as detailed below.\n \n Physically, our picture is one of planetesimals migrating inwards during the early phases of planet formation, in the presence of a protoplanetary disk. \cite{holman1999} showed that any material in circumbinary orbit migrating inward will become unstable on short timescales once it passes a stability boundary $a_{\rm c, out}$, for which they provide an empirical fit to results from N-body simulations (their equation 3). This critical distance is a function of the binary mass ratio and eccentricity and ranges from around 2 to 4 times the binary separation. We thus envisage planetesimals migrating in and then being ejected once they pass $a_{\rm c, out}$; an illustrative implementation of these stability boundaries is sketched below.
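To make the stability boundaries concrete, the following Python sketch implements the empirical fits of \citet{holman1999} used in this work (their equations 1 and 3). The coefficients below are the published fit values as we transcribe them and should be checked against the original paper before quantitative use; the fits were derived over a limited range of mass ratio and eccentricity.
\begin{verbatim}
def a_crit_outer(a_bin, mu, e):
    # Innermost stable circumbinary (P-type) orbit, Holman & Wiegert (1999)
    # eq. 3; a_bin: binary semi-major axis, mu = M2/(M1 + M2), e: eccentricity
    return a_bin * (1.60 + 5.10 * e - 2.22 * e**2 + 4.12 * mu
                    - 4.27 * e * mu - 5.09 * mu**2 + 4.61 * e**2 * mu**2)

def a_crit_inner(a_bin, mu, e):
    # Outermost stable orbit around the primary (S-type), their eq. 1
    return a_bin * (0.464 - 0.380 * mu - 0.631 * e
                    + 0.586 * mu * e + 0.150 * e**2 - 0.198 * mu * e**2)
\end{verbatim}
For example, for $\mu = 0.3$ and $e = 0.5$ the outer boundary evaluates to $a_{\rm c, out} \approx 3.8$ binary separations, consistent with the range of 2 to 4 quoted above.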
Chosen to match \\citet{duchene2013}}\n \\label{table:bindists}\n \\begin{tabular}{r|c|l}\n Mass ($M_{\\odot}$) & Binary fraction (\\%) & Separation distribution \\\\\n $0.1 \\leq M_1 < 0.6$ & 26 & log-normal, mean = 5.30~AU, $\\sigma_{\\log_{10} a}$ = 0.867\\\\\n $0.6 \\leq M_1 < 1.4$ & 44 & log-normal, mean = 44.7~AU, $\\sigma_{\\log_{10} a}$ = 1.53\\\\\n $1.4 \\leq M_1 < 6.5$ & 50 & log-normal, mean = 0.141~AU, $\\sigma_{\\log_{10} a}$ = 0.50, + log-normal, mean = 316~AU, $\\sigma_{\\log_{10} a}$ = 1.65\\\\\n $M_1 \\geq 6.5$ & 70 & log-normal, mean = 0.178~AU, $\\sigma_{\\log_{10} a}$ = 0.36, + log-uniform, min = 0.50~AU, max = 10$^6$\n \\end{tabular}\n \\end{table*}\n \n \n \\subsection{Population synthesis model}\n \\label{sec:method:popsynth}\n \n Our population synthesis model proceeds as follows for the construction of a single system:\n \n \\begin{enumerate}\n \\item Sample the system mass, $M_{\\rm sys}$, from the system initial mass function of \\citet{chabrier2003}.\n \\item Sample the mass ratio, $\\mu = M_2\/(M_1+M_2)$ uniformly in the range [0, 0.5], as suggested by \\citet{duchene2013}, and use this to calculate $M_1$ and $M_2$.\n \\item Determine which of the \\citet{duchene2013} mass bins $M_1$ falls into and use the appropriate multiplicity fraction to determine if the system is actually binary\\footnote{Hierarchical triples would act similarly to binaries in our models, so for simplicity we ignore multiple systems.}. If the system is not binary set $\\mu=0$, $M_1=M_{\\rm sys}$ and return to step (i) and begin again for the next system.\n \\item Sample the eccentricity uniformly in the range [0, 0.9]\\footnote{We cap the eccentricity at 0.9 to avoid computationally expensive N-body simulations.} and the binary separation from the distributions listed in Table~\\ref{table:bindists}, taken to approximately match \\citet{duchene2013}.\n \\item Determine the radii and luminosities of the stars by interpolating on the isochrones generated by MIST (the MESA Isochrones and Stellar Tracks, \\citealt{dotter2016, choi2016}) for a system age of 1~Myr.\\footnote{We take 1~Myr to be a representative age of a young system in which substantial disk migration may be ongoing.}\n \\end{enumerate}\n \n Our population synthesis model ensures that binary systems of different masses have the correct weighting and also simultaneously constructs a single star comparison population. Integrated over all stellar masses the model produces binary systems at a rate of 30\\% by number, while the higher multiplicity rates for more massive stars means that binaries constitute 41\\% of all mass.\n \n We assume that disk dynamics are broadly the same across all systems and that disk masses are a roughly constant fraction of the total stellar mass, such that the mass of material that is ejected by each binary is a constant fraction of the system mass. As such we will weight systems by their total mass for all of our analysis in the later sections.\n \n In examining our binary systems we should also take into account that binaries with very wide separations are unlikely to have substantial material in circumbinary orbit, but rather can be expected to behave like single stars. Exactly where to draw the boundary between a wide binary in which we expect the stars to have their own disks that are largely independent and close binaries in which we expect a dominant circumbinary disk is not clear. 
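\n \n The sampling procedure above is simple to prototype. The Python sketch below implements steps (i)--(iv) for a single system; the log-normal bulk with a Salpeter-like tail is a rough stand-in for the \\citet{chabrier2003} system initial mass function, and only the Sun-like separation branch of Table~\\ref{table:bindists} is shown, both simplifying assumptions made for brevity.\n \\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\n\ndef sample_system_mass():\n    # Rejection sampling of a rough stand-in for the Chabrier (2003)\n    # system IMF: log-normal bulk, Salpeter-like tail above 1 Msun.\n    while True:\n        logm = rng.uniform(-1.0, 1.8)  # 0.1 to ~63 Msun\n        m = 10.0 ** logm\n        if m <= 1.0:\n            p = np.exp(-(logm - np.log10(0.22)) ** 2 \/ (2 * 0.57 ** 2))\n        else:\n            p = 0.51 * m ** -1.35      # continuous match at 1 Msun\n        if rng.uniform() < p:\n            return m\n\ndef sample_system():\n    m_sys = sample_system_mass()                  # step (i)\n    mu = rng.uniform(0.0, 0.5)                    # step (ii)\n    m1 = (1.0 - mu) * m_sys\n    bins = [(0.6, 0.26), (1.4, 0.44), (6.5, 0.50), (1e99, 0.70)]\n    f_bin = next(f for upper, f in bins if m1 < upper)\n    if rng.uniform() > f_bin:                     # step (iii): single\n        return m_sys, 0.0, None, None\n    e = rng.uniform(0.0, 0.9)                     # step (iv)\n    a = 10.0 ** rng.normal(np.log10(44.7), 1.53)  # Sun-like branch only\n    return m_sys, mu, e, a\n\\end{verbatim}\n \n 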
Returning to the question of where to draw this boundary, we choose to set it as the point at which the outermost stable orbit around the larger star, $a_{\\rm c, in}$ (as determined by Eq.~1 of \\citealt{holman1999}), is greater than 10~AU. Our results are not sensitive to the precise choice of this cut-off, however, as discussed in Section~\\ref{sec:results}. Given these assumptions, we estimate a mass fraction of 26\\% in circumbinaries, with the remainder in singles and wide binaries.\n \n One could envisage there being some intermediate separation binaries in which a circumprimary disk extends to $a_{\\rm c, in}$ such that material in the outer regions of the disk can be ejected due to viscous spreading across $a_{\\rm c, in}$. The mass available for ejection in such a system is fairly limited, however, being sourced from the lower density outer regions of the disk, whereas inward drift can potentially make a large fraction of the mass of a circumbinary disk available for ejection. As such, we focus here on circumbinary systems.\n \n \\subsection{$N$-body simulations}\n \\label{sec:method:nbody}\n \n While an object that migrates inward past $a_{\\rm c, out}$ will become unstable on short timescales, this does not determine the fate of the object, which can either be ejected or accreted onto one of the stars. In addition, we want to examine the distribution of close encounter distances prior to ejection to assess the possible role of tidal disruptions as suggested by \\citet{cuk2017}. Since the unstable region inside $a_{\\rm c, out}$ also applies to gas, we expect that the disk will have a central cavity where the gas density is low. The fate of bodies migrating into this central unstable region can thus be determined using simple $N$-body integrations.\n \n We conduct a set of 2000 $N$-body simulations using the high-order adaptive-timestep integrator {\\sc \\tt IAS15}\\xspace in the {\\sc \\tt REBOUND}\\xspace integration package \\citep{ReinSpiegel2015,ReinLiu2012}. As described in Section~\\ref{sec:method:popsynth}, the mass ratio is uniformly distributed on~[0,~0.5] and the eccentricity is uniformly distributed on~[0,~0.9]. We then initialize particles on co-planar and initially circular orbits at orbital distances beyond the instability limit from \\citet{holman1999}. Finally, we apply an inward drag force that (in an orbit-averaged sense) exponentially decreases the semimajor axis on a timescale 1000 times longer than the orbital period using the \\texttt{modify\\_orbits\\_forces} routine in {\\sc \\tt REBOUNDx}\\xspace, and integrate for $10^4$ binary orbits. We track ejections by flagging particles that go beyond 100 binary orbit separations.\n \n Since Newtonian gravity is scale invariant, we express masses in terms of the primary mass and distances in units of the binary separation. Collisions introduce the scale of the stellar radii into the problem, so we first run a universal set of 2000 integrations of point particles, and dimensionalize after the fact, drawing stellar masses, radii, and binary separations as described above. We then look for collisions with the stars in this post-processing step.\n \n \\section{Results and discussion}\n \\label{sec:results}\n \n Comparing the distribution of collisional and ejection outcomes for our $N$-body simulations, we find that the fraction of particles ejected is near unity, reaching a minimum of $95\\%$ for O and B stars that have the largest collisional cross-sections. 
As such, it is a good approximation to assume that all material that migrates to within the critical semi-major axis will be ejected.\n \n We now generate a synthetic population of 10$^7$ systems using our population synthesis model. As noted in Section~\\ref{sec:method:popsynth}, we assume that the mass of material that is ejected by each binary is a constant fraction of the system mass. For now we remain agnostic as to what this ejection fraction is and label it as $M_{\\rm ej}^{\\rm bin}$. We discuss possible values for $M_{\\rm ej}^{\\rm bin}$ and their implications in Section~\\ref{sec:results:interstellarpop}.\n \n As we discussed above, we believe that 1I\/`Oumuamua is either rocky or has a substantial devolatilised crust, implying that it spent a considerable time inside the ice line in its parent system. Since bodies are rapidly ejected once they move within $a_{\\rm c, out}$, we are interested specifically in the subset of circumbinary systems where the ice line is outside $a_{\\rm c, out}$.\n \n We determine the ice line distance, $a_{\\rm ice}$, as the distance at which the radiative equilibrium temperature of a blackbody is 150~K, assuming the total luminosity of the two stars is located at the centre of mass of the system. While the presence of a massive protoplanetary disk can potentially have a large effect on the ice line location, through the optical depth preventing stellar radiation from reaching the interior and through viscous heating, we note that the region interior to $a_{\\rm c, out}$ is expected to have a low gas density such that these effects can be ignored there. As such, while the precise location of the ice line might vary when it lies outside $a_{\\rm c, out}$, the fraction of systems with $a_{\\rm ice} > a_{\\rm c, out}$ determined in this way should provide a reasonable estimate.\n \n Since we are allowing $a_{\\rm ice}$ to be close to $a_{\\rm c, out}$, it is worth considering how long a body might need to be inside $a_{\\rm ice}$ to develop a substantial devolatilised layer. For Solar system comets, mass loss rates are typically a few $10^{-7}$~kg~m$^{-2}$~s$^{-1}$ just inside the ice line \\citep{ahearn2012}, corresponding to an erosion rate of $\\sim$0.01~m~yr$^{-1}$. These mass loss rates rise very rapidly as orbital distance decreases, however, reaching $\\sim10^{-5}$-$10^{-4}$~kg~m$^{-2}$~s$^{-1}$ (1-10~m~yr$^{-1}$) by 1~AU, where the blackbody equilibrium temperature is roughly twice that at the ice line. As such we might expect to require a few centuries to perhaps a few millennia of continuous activity to generate a sufficiently thick devolatilised crust, which is very plausible for a 100~m cometary body migrating in via gas drag from close to $a_{\\rm c, out}$.\n \n \\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth]{cbaiacmwcomp3.png}\n \\caption{Histogram of $a_{\\rm ice}\/a_{\\rm c, out}$ for all close binaries, weighted by the mass of the system. The total distribution is shown in black, while the orange, green, blue and purple curves show the contributions from stars of different masses. The grey shading shows the effect of changing the wide binary cut-off to $a_{\\rm c, in} >$ 2.5, 5, 20, or 40~AU. All curves are normalised to the total mass of close binaries for a wide binary cut-off of $a_{\\rm c, in} >$ 10~AU. The dashed line indicates $a_{\\rm ice}\/a_{\\rm c, out}=1$.}\n \\label{fig:aiac}\n \\end{figure}\n \n In Fig.~\\ref{fig:aiac} we then show the distribution of $a_{\\rm ice}\/a_{\\rm c, out}$, weighted by the mass of the system. 
With our assumption that the mass of material ejected relative to the system mass is the same for all binaries, the fraction with $a_{\\rm ice}\/a_{\\rm c, out} > 1$ in this distribution then tells us the fraction of interstellar material ejected by binaries that will be rocky or devolatilised. We find that this fraction is 36\\%, such that the ratio of icy to rocky\/devolatilised material is roughly 2:1. We also show the contributions from stars of different masses in Fig.~\\ref{fig:aiac}. This shows that the population of icy interstellar material predominantly originates from low-mass stars, while the population of rocky\/devolatilised material is dominated by intermediate-mass stars. The fraction of systems with $a_{\\rm ice}\/a_{\\rm c, out} > 1$ is relatively insensitive to the choice we made in Section~\\ref{sec:method:popsynth} regarding where to place the wide binary cut-off, as shown by the grey shading in Fig.~\\ref{fig:aiac}.\n \n \\subsection{The population of interstellar bodies}\n \\label{sec:results:interstellarpop}\n \n The total population of interstellar bodies will be the combination of those ejected by binary systems and those ejected by single star systems. We previously defined the fractional mass ejected by binary systems as $M_{\\rm ej}^{\\rm bin}$. For single star systems (and wide binaries) we assume that ejection of significant masses of material is limited to those stars that host giant planets, and that those systems eject a fraction of their mass equal to $M_{\\rm ej}^{\\rm sin, gp}$. Averaged over all singles, the fractional mass ejected is then $f_{\\rm giant}M_{\\rm ej}^{\\rm sin, gp}$, where $f_{\\rm giant}$ is the fraction of singles that host giant planets. The total fractional mass averaged over all stars is then\n \\begin{equation}\n M_{\\rm interstellar} = f_{\\rm bin} M_{\\rm ej}^{\\rm bin} + (1 - f_{\\rm bin}) f_{\\rm giant}M_{\\rm ej}^{\\rm sin, gp},\n \\label{eq:minter1}\n \\end{equation}\n where $f_{\\rm bin}$ is the mass fraction of stars that are binaries. In Section~\\ref{sec:method:popsynth} we found that $f_{\\rm bin}$ = 26\\%, while in Section~\\ref{sec:ejection} we argued that $f_{\\rm giant}$ is no more than 10\\%. Using these values, Equation~\\ref{eq:minter1} becomes\n \\begin{equation}\n M_{\\rm interstellar} = 0.26 M_{\\rm ej}^{\\rm bin} + 0.074 M_{\\rm ej}^{\\rm sin, gp}.\n \\label{eq:minter2}\n \\end{equation}\n We are also interested in the fraction of all interstellar bodies that are rocky or devolatilised rather than icy, $R_{\\rm interstellar}$. We thus divide the ejected masses into rocky and icy components such that\n\\begin{equation}\n \\label{eq:minter3}\n \\begin{split}\n R_{\\rm interstellar} & = \\frac{M_{\\rm interstellar, rock}}{M_{\\rm interstellar}} \\\\\n & = \\frac{0.26 R_{\\rm bin} M_{\\rm ej}^{\\rm bin} + 0.074 R_{\\rm sin} M_{\\rm ej}^{\\rm sin, gp}}{0.26 M_{\\rm ej}^{\\rm bin} + 0.074 M_{\\rm ej}^{\\rm sin, gp}},\n \\end{split}\n \\end{equation}\n where $R_{\\rm bin}$ and $R_{\\rm sin}$ are the rocky mass fractions of the material ejected by binaries and single systems, respectively.\n \n The Nice model for the early evolution of the outer Solar system suggests that the Solar system began with an outer planetesimal disk of $\\sim$30~$M_{\\oplus}$, the majority of which was ejected \\citep[e.g.][]{gomes2005, levison2008}. In addition, perhaps 1~$M_{\\oplus}$ was ejected from the inner Solar system \\citep[e.g.][]{shannon2015}. 
If other systems that host giant planets behave in a similar way to the Solar system, then, taking these Solar system values as representative, we can expect that $M_{\\rm ej}^{\\rm sin, gp} \\sim 30 M_{\\oplus}\/M_{\\odot}$ and that $R_{\\rm sin} \\sim 0.033$.\n \n We must now estimate $M_{\\rm ej}^{\\rm bin}$. In doing so, we first note that from our $N$-body simulations we expect binaries to eject essentially all material that migrates to within $a_{\\rm c, out}$. Moreover, since the ejection mechanism only relies on the central binary itself, material can be ejected from very early in the life of the system, whereas in a single system ejection can only begin once a giant planet has formed. If most systems begin with disks close to the gravitational instability limit of $\\sim$0.1~$M_{\\rm sys}$ and a typical gas-to-dust ratio of around 100, this implies a total mass of solids of $\\sim$300~$M_{\\oplus}\/M_{\\odot}$. If we estimate that around 10\\% of this material migrates to within the critical stability radius, this implies that every binary system ejects as much material as a single star system that hosts giant planets. Using $M_{\\rm ej}^{\\rm bin} \\sim 30 M_{\\oplus}\/M_{\\odot}$ as our fiducial value, we then find that $M_{\\rm interstellar} \\sim 10 M_{\\oplus}\/M_{\\odot}$. Since we expect $R_{\\rm bin} = 0.36$, this leads to $R_{\\rm interstellar} \\sim 0.29$. With this estimate, more than three quarters of interstellar bodies originate from binary stars, while for rocky objects the fraction would be even higher.\n \n \\subsection{Tidal disruptions vs extreme heating}\n \\label{sec:results:disruption}\n \n \\citet{cuk2017} suggested that 1I\/`Oumuamua might have originated in the tidal disruption of a planet. We can test this hypothesis with our $N$-body simulations. For massive stars, the Roche limit (inside which tidal disruptions will occur) is smaller than the stellar radius. To provide the best-case scenario for tidal disruptions, we re-calculate the stellar collision outcomes using stellar radii for the Zero Age Main Sequence, when the radii are at a minimum. Treating a gravity-dominated planet as a strengthless body, we compute the Roche radius as $R_{\\rm Roche} = 1.26 (\\rho_*\/\\rho_{p})^{1\/3} R_*$, where we assume a lower bound for the density of the planet of $\\rho_{p}$ = 3000~kg~m$^{-3}$. We find that none of our 2000 simulations results in a close encounter within the Roche radius. Indeed, the majority of closest approach distances are far outside the Roche radius. This implies that the frequency of tidal disruptions is $\\lesssim 10^{-3}$ times the ejection rate. Note that there is no contribution from higher-mass stars, since their Roche radius always lies inside the star.\n \n \\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth]{danclosetemps2.png}\n \\caption{Mass weighted distribution of peak temperatures reached by particles ejected in our $N$-body simulations.}\n \\label{fig:peaktemps}\n \\end{figure}\n \n While we have shown that tidal disruptions are rare, material that is ejected can potentially be heated to high temperatures during a close approach. In Fig.~\\ref{fig:peaktemps} we show the mass weighted distribution of peak temperatures experienced by particles in our $N$-body simulations before they are ejected. Around 10\\% of bodies experience peak temperatures in excess of 1800~K, sufficient to melt rock. 
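\n \n To put these temperatures in context, the blackbody equilibrium scaling used for the ice line above, $T \\simeq 278\\,{\\rm K}\\,(L\/L_{\\odot})^{1\/4}(d\/{\\rm AU})^{-1\/2}$, can be inverted to give the approach distance required to reach a given peak temperature. A short Python sketch (zero albedo assumed; the luminosities are illustrative):\n \\begin{verbatim}\nimport math\n\nT_1AU = 278.0  # K, equilibrium temperature at 1 AU for L = Lsun\n\ndef approach_distance_au(t_peak, lum_lsun=1.0):\n    # Invert T = 278 K (L\/Lsun)^(1\/4) (d\/AU)^(-1\/2) for d.\n    return (T_1AU \/ t_peak) ** 2 * math.sqrt(lum_lsun)\n\nprint(approach_distance_au(150.0))         # ~3.4 AU: the 150 K ice line\nprint(approach_distance_au(1800.0))        # ~0.024 AU (~5 Rsun): rock melts\nprint(approach_distance_au(1800.0, 20.0))  # ~0.11 AU for a ~20 Lsun star\n\\end{verbatim}\n \n 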
These strongly heated bodies are drawn from systems independent of their ice line distances, though we note that systems with higher temperatures at the stability boundary $a_{\\rm c, out}$ are more likely to experience extreme heating. \n It would be interesting to consider in future work how extreme heating may modify the shape and surface layers of 1I\/`Oumuamua. We do note that the extreme elongation would be easier to maintain for an annealed, monolithic body than for a rubble pile. The fact that such heating predominantly occurs around high-mass stars also further motivates understanding the planet formation process in these extreme regimes.\n \n \n Since the submission of this work, the revised manuscript of \\citet{cuk2017} has pointed out that planets on circumprimary (S-type) orbits, rather than the circumbinary (P-type) orbits considered here, may be tidally disrupted more effectively. In that scenario, a dynamical instability of several planets around a single star would be required to scatter the doomed planet into a region of phase space where it can chaotically transfer between stars and suffer a close enough encounter for tidal disruption. It would be valuable to extend the analysis of \\citet{cuk2017} to quantify the fraction of instabilities that could plausibly deliver planets into this chaotic intermediate regime instead of directly ejecting them. \n\n Alternatively, planets in such an S-type configuration could have their eccentricities secularly driven to high enough values by the binary companion \\citep{naoz2012, petrovich2015} for tidal disruption, though this slow secular driving should be quenched by the tidal forces themselves before the planet is able to come close enough to the star for breakup \\citep[e.g.][]{wu2011, liu2014}.\n \n \\section{Conclusions}\n \\label{sec:conc}\n \n We have shown that the population of interstellar objects can be dominated by planetesimals ejected during planet formation in circumbinary systems. Even if a typical circumbinary only ejects as much material as the Solar system, we would still expect close binaries to be the source of more than three quarters of interstellar bodies due to the relatively low abundance of single star systems with giant planets like the Solar system. Whereas in the Solar system the ejected material is overwhelmingly icy, we expect that around 36\\% of binaries may predominantly eject material that is rocky or substantially devolatilised, leading to similar expectations for the abundance of rocky\/devolatilised bodies in the interstellar population.\n \n Within the close binary population the dominant sources of rocky\/devolatilised material are intermediate-mass stars, A-stars and late B-stars. As such, we suggest that the apparently rocky or devolatilised appearance of 1I\/`Oumuamua indicates that it likely originated in such an intermediate-mass binary system.\n \n We find that tidal disruptions during ejection from binary systems, as suggested by \\citet{cuk2017}, are rare. While close encounter distances that result in strong tidal effects are highly unusual, around 10\\% of bodies may experience extremely high peak temperatures during ejection. 
How these extreme temperatures could influence the shape and surface layers of a body like 1I\/`Oumuamua would be an interesting topic for future work. The fact that such heating predominantly occurs around high-mass stars further motivates understanding of the planet formation process in these extreme regimes.\n \n \n \\section*{Acknowledgements}\n\nAPJ thanks Andrew Shannon and Casey Lisse for helpful discussions.\nAPJ gratefully acknowledges funding through NASA grant NNX16AI31G (Stop hitting yourself). \nDT thanks Greg Laughlin for insightful discussions.\nHR gratefully acknowledges funding through NSERC Discovery Grant RGPIN-2014-04553.\nThe authors thank the referee for a rapid review and helpful comments that have improved the manuscript.\n\n{\\footnotesize\n\\bibliographystyle{mnras}\n\n\\section{Introduction}\n\n\\IEEEPARstart{H}{yperspectral} analysis has received increasing attention because of its high spectral resolution, which enables a variety of objects to be identified and classified \\cite{Plaza2009}. \\textit{Mixed pixels} caused by the presence of multiple objects within a single pixel adversely affect the performance of hyperspectral analysis~\\cite{Kesha2002}. To address this problem, a wide variety of spectral unmixing methods have been developed over the last decades~\\cite{Biouc2012,Dobig2014,Heyle2014,Uezat2017}. Spectral unmixing methods aim at decomposing a mixed spectrum into a collection of reference spectra (known as \\textit{endmembers}) characterizing the macroscopic materials present in the scene, and their respective proportions (known as \\textit{abundances}) in each image pixel~\\cite{Kesha2002}. Despite the large number of spectral unmixing methods developed, there are still major challenges in accurately estimating endmember signatures and abundances~\\cite{Biouc2012}. Among these challenges, endmember variability may lead to large errors in abundance estimates~\\cite{Zare2014}. It results from the fact that each endmember can rarely be represented by a unique spectral signature. Instead, each endmember is subject to so-called spectral variability, e.g., caused by variations in the acquisition process, the intensity of illumination or other physical characteristics of the materials~\\cite{Murph2012,Uezat2016a}. Taking this endmember variability into account during the spectral unmixing process is one of the keys to the successful application of spectral unmixing~\\cite{Somer2011}.\n\nThe methods that incorporate endmember variability can be categorized into two main approaches (see \\cite{Somer2011,Zare2014,Drume2016b} for recent overviews). The first approach relies on the definition of a set of multiple spectral signatures, referred to as endmember bundles, to characterize each endmember class. Endmember bundles can be collected from field campaigns or can be extracted from the data itself using endmember bundle extraction methods~\\cite{Bates2000,Somer2012,Uezat2016a}. The advantage of this approach is that the method can use \\textit{a priori} information representing endmember variability. Endmember bundles can be validated by experts in order to provide an accurate representation of endmember variability~\\cite{Bates2000}. Although traditional methods incorporating endmember bundles (e.g.~\\cite{Rober1998}) are known to be computationally expensive, more efficient methods have been recently developed and have shown great potential~\\cite{Vegan2014,Goena2013,Uezat2016b,Meyer2016}. 
However, as pointed out in~\\cite{Uezat2016}, it is unlikely that the endmember bundles completely represent the endmember variability present in an image. Such incomplete endmember bundles may lead to poor abundance estimates. The second approach uses physical or statistical descriptions of the endmember variability. More precisely, these methods describe the endmember variability using a statistical distribution~\\cite{Halim2015} or by incorporating additional variability terms in the mixing model~\\cite{Drume2016,Thouv2016,Hong2017}. The advantage of this approach results from the adaptive learning of the endmember variability. Indeed, state-of-the-art methods such as those recently introduced in \\cite{Drume2016,Thouv2016,Hong2017} enable endmember spectra to vary spatially from pixel to pixel in order to describe endmember variability. This is important since the endmember spectra to be used for the abundance estimation can differ between pixels. However, estimating endmember variability without \\textit{a priori} knowledge is a challenging task, especially when large amounts of endmember variability are present in an image. In addition, the statistical distribution or additional terms used in these methods may be overly simplified representations of endmember variability.\n\nBoth approaches have benefits and drawbacks. A natural question arises: is it possible to combine the strong advantages of both approaches to robustly represent endmember variability? This paper addresses this question and introduces a novel spectral unmixing method that bridges the gap between the two aforementioned approaches with the help of a double sparsity-based method inspired by \\cite{Rubin2010}. Specifically, the proposed method aims at adaptively recovering endmember spectra within each pixel to describe endmember variability while incorporating available \\textit{a priori} information. The proposed method is closely related to several existing methods. The main contributions of this paper are threefold: 1) to propose a novel spectral unmixing method that incorporates endmember bundles and generates adaptive endmember spectra within each pixel, 2) to give a systematic review of related work and show the relationship between the proposed method and existing methods, and 3) to provide a comparison between the proposed method and other sparsity-based methods.\n\nThis paper is organized as follows. Section \\ref{section2} describes related work and existing methods, while highlighting their inherent drawbacks. In Section \\ref{section3}, a novel mixing model that incorporates endmember variability is proposed and its relationships with existing methods are discussed. Section \\ref{sec:algo} introduces an associated unmixing algorithm designed to recover the endmember classes, adaptive bundles and abundances. Section \\ref{section4} and Section \\ref{section5} show experimental results obtained from simulated data and real hyperspectral images. Finally, conclusions are drawn in Section \\ref{section6}.\n\n\\section{Related works and issues raised by existing methods}\\label{section2}\n\\subsection{Conventional (variability-free) linear mixing model}\nLet $\\mathbf{y}_i \\in \\mathbb{R}^{L \\times 1}$ denote the $L$-dimensional spectrum measured at the $i$th pixel of a hyperspectral image. 
According to the linear mixing model (LMM), the observed spectrum of the $i$th pixel $\\mathbf{y}_i$ is approximated by a linear combination of endmember spectra weighted by their abundance fractions\n\\begin{eqnarray}\n\\label{eq:LMM}\n \\mathbf{y}_i=\\mathbf{M}\\mathbf{a}_i+\\mathbf{n}_i\n\\end{eqnarray}\nwhere $\\mathbf{M} \\in \\mathbb{R}^{L \\times K}$ is the matrix of the spectral signatures associated with the $K$ endmember classes, $\\mathbf{a}_i = \\left[a_{1i},\\ldots,a_{Ki}\\right]^T \\in \\mathbb{R}^{K \\times 1}$ gathers the abundance fractions of the pixel and $\\mathbf{n}_i \\in \\mathbb{R}^{L \\times 1}$ represents noise and modeling errors. The LMM is generally accompanied by the abundance non-negativity constraint (ANC) and the abundance sum-to-one constraint (ASC)\n\\begin{eqnarray}\n \\forall k, \\forall i, \\ a_{ki}\\geq 0,\\quad \\text{and} \\quad \\forall i,\\ \\sum_{k=1}^{K}a_{ki}=1\n\\end{eqnarray}\nwhere $a_{ki}$ is the abundance fraction of the $k$th class in the $i$th pixel. This model implicitly relies on two assumptions: \\emph{i}) each endmember class is described by a unique spectrum and \\emph{ii}) the endmember matrix $\\mathbf{M}$ is fixed and commonly used to unmix all the pixels of a given image. In other words, the LMM does not account for spectral variability. However, as discussed previously, this is likely unrealistic because spectral variability is naturally observed in hyperspectral images, e.g., because of variations in illumination or intrinsic physical characteristics of the materials \\cite{Uezat2016}.\n\n\\subsection{Linear mixing models incorporating endmember bundles}\nMultiple endmember spectral mixture analysis (MESMA) \\cite{Rober1998} allows the endmember spectrum representing each class to vary, as well as the number of endmember classes present within each pixel. Although MESMA has been widely used for a variety of applications \\cite{Rober1998,Denni2003}, it has several major limitations: \\emph{i}) it is highly computationally expensive because MESMA needs to test a large number of combinations of endmember spectra \\cite{Tits2013}, \\emph{ii}) MESMA tends to select an overestimated number of endmember classes because it uses the reconstruction error to select the appropriate combination of endmember spectra \\cite{Demar2012} and \\emph{iii}) the performance of MESMA may significantly decrease when the endmember spectra (or \\emph{bundles}) within each class do not completely represent the spectral variability \\cite{Uezat2016}.\n\nTo overcome these limitations, recent works have proposed a new class of methods that incorporate all endmember bundles defined as \\cite{Goena2013,Vegan2014}\n\\begin{eqnarray}\n\\label{eq:bundles}\n \\mathbf{E}=\\begin{bmatrix}\\ \\mathbf{E}_1 \\ | \\ \\mathbf{E}_2\\ |\\cdots|\\ \\mathbf{E}_K\\ \\end{bmatrix}\n\\end{eqnarray}\nwhere $\\mathbf{E}_k \\in \\mathbb{R}^{L \\times N_k}$ represents the set of endmember spectra (i.e., the bundle) characterizing the $k$th class, $N_k$ is the number of endmember spectra in the $k$th class and $N$ is the total number of endmember spectra of all classes, with $N= \\sum_{k=1}^K N_k$. 
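\nFor concreteness, a minimal numpy sketch of the model in \\eqref{eq:LMM} and of the stacked bundle matrix in \\eqref{eq:bundles} is given below; the dimensions and random signatures are purely illustrative.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\n\nL, K = 200, 3                        # spectral bands, endmember classes\nM = rng.random((L, K))               # one signature per class (LMM)\na = rng.dirichlet(np.ones(K))        # abundances: ANC and ASC hold\ny = M @ a + 0.001 * rng.standard_normal(L)   # noisy mixed pixel\n\n# Endmember bundles: N_k spectra per class, stacked column-wise.\nbundles = [rng.random((L, 30)) for _ in range(K)]\nE = np.hstack(bundles)               # E has shape (L, N) with N = 90\n\\end{verbatim}\n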
Generalizing LMM in \\eqref{eq:LMM}, these methods first model a given observed pixel spectrum with respect to all spectra in the endmember bundles and corresponding multiple abundances\n\\begin{eqnarray}\\label{eq:4}\n \\mathbf{y}_i=\\mathbf{E}\\mathbf{r}_i+\\mathbf{n}_i\n\\end{eqnarray}\nwhere $\\mathbf{r}_i \\in \\mathbb{R}^{N \\times 1}$ gathers the multiple abundance fractions corresponding to each spectrum of the endmember bundles $\\mathbf{E}$. As for LMM, the ASC or ANC can also be imposed on $\\mathbf{r}_i$. As a second step, the multiple abundance fractions $\\mathbf{r}_i$ are summed within each class to generate a single abundance fraction for each class\n\\begin{eqnarray}\\label{summed}\n \\mathbf{a}_i=\\mathbf{G}^T\\mathbf{r}_i\n\\end{eqnarray}\nwith\n\\begin{eqnarray}\n \\mathbf{G} = \\begin{bmatrix} \\mathbf{1}_{N_1} & \\mathbf{0}_{N_1} & \\dotsb & \\mathbf{0}_{N_1}\n \\\\ \\mathbf{0}_{N_2} & \\mathbf{1}_{N_2} & \\dotsb & \\mathbf{0}_{N_2}\n\\\\ \\vdots & \\vdots & \\ddots & \\vdots\n\\\\ \\mathbf{0}_{N_K} & \\mathbf{0}_{N_K} & \\dotsb & \\mathbf{1}_{N_K} \\end{bmatrix}\n\\end{eqnarray}\nwhere $\\mathbf{1}_{N_k} \\in \\mathbb{R}^{N_k \\times 1}$ is a column vector of ones and $\\mathbf{0}_{N_k} \\in \\mathbb{R}^{N_k \\times 1}$ is an $N_k$-dimensional vector whose components are zeros. While these two steps are conducted separately in \\cite{Goena2013,Vegan2014}, they can also be considered jointly within a multi-task Gaussian process framework \\cite{Uezat2016,Uezat2016b}. Even if these methods have been shown to be effective, a large number of endmember spectra within each class may be redundant. In such a case, following a model selection rationale, Veganzones \\emph{et al.} introduced a complementary sparsity regularization on the multiple abundance vectors \\cite{Vegan2014}\n\\begin{eqnarray}\n\\begin{aligned}\n\t& \\min\\limits_{\\mathbf{r}_i}\\frac{1}{2}\\Vert\\mathbf{E}\\mathbf{r}_i-\\mathbf{y}_i\\Vert^2_2+\\lambda_r\\Vert\\mathbf{r}_i\\Vert_1 \\\\\n & \\text{s.t. } \\forall i, \\quad \\mathbf{r}_{i} \\succeq 0\n\\end{aligned}\n\\end{eqnarray}\nwhere $\\succeq$ represents the element-wise comparison, $\\Vert\\cdot\\Vert_2$ is the $\\ell_2$-norm, and $\\Vert\\cdot\\Vert_1$ is the $\\ell_1$-norm, which is known to promote sparsity. Once the multiple abundance vector $\\mathbf{r}_i$ has been estimated, it is normalized in order to reduce the effects of multiplicative factors and satisfy the ASC. Following the same approach, further sparsity can be imposed using the $\\ell_p$-norm \\cite{Sigur2014} or reweighted $\\ell_1$ approaches \\cite{Zheng2016,He2017}. Overall, this sparsity property allows the selection of a smaller number of endmember spectra. However, it may not lead to the selection of a smaller number of endmember classes. Conversely, to promote sparsity on the number of endmember classes, one strategy consists in formulating the unmixing problem through a sparse group lasso \\cite{Iorda2011a}\n\\begin{eqnarray}\n\\begin{aligned}\n\t& \\min\\limits_{\\mathbf{r}_i}\\left\\lbrace \\frac{1}{2}\\Vert\\sum_{k=1}^{K}\\mathbf{E}_k(\\mathbf{g}_k\\odot\\mathbf{r}_i)-\\mathbf{y}_i\\Vert^2_2 \\right. \\\\\n & \\left. +\\lambda_g\\sum_{k=1}^{K}\\Vert\\mathbf{g}_k\\odot\\mathbf{r}_i\\Vert_2 + \\lambda_r\\Vert\\mathbf{r}_i\\Vert_1 \\right\\rbrace\\\\\n & \\text{s.t. } \\forall i, \\quad \\mathbf{r}_{i} \\succeq 0\n\\end{aligned}\n\\end{eqnarray}\nwhere $\\mathbf{g}_k$ is the $k$th column of $\\mathbf{G}$, $\\odot$ is the element-wise product and thus $\\mathbf{g}_k\\odot\\mathbf{r}_i$ extracts the elements in $\\mathbf{r}_i$ belonging to the $k$th class. This approach has the great advantage of promoting sparsity in both the number of endmember spectra and the number of endmember classes. Another strategy relies on the concept of ``social sparsity'' that can exploit the structure of endmember bundles more explicitly \\cite{Meyer2016}. The method assumes that $\\mathbf{r}_i$ can be partitioned into $K$ groups representing each endmember class, leading to the optimization problem\n\\begin{eqnarray}\n\\begin{aligned}\n\t& \\min\\limits_{\\mathbf{r}_i} \\left\\lbrace \\frac{1}{2}\\Vert\\mathbf{E}\\mathbf{r}_i-\\mathbf{y}_i\\Vert^2_2+\\lambda_r\\left(\\sum_{k=1}^{K}\\Vert \\mathbf{g}_k\\odot\\mathbf{r}_i\\Vert_p^q\\right)^{\\frac{1}{q}} \\right\\rbrace\\\\\n & \\text{s.t. } \\forall i,\\forall n, \\quad \\mathbf{r}_{i} \\succeq 0, \\quad \\sum_{n=1}^{N}r_{ni}=1\n\\end{aligned}\n\\end{eqnarray}\nwhere $\\Vert\\cdot\\Vert_p$ is the $\\ell_p$-norm and $r_{ni}$ is the $n$th multiple abundance fraction of the $i$th pixel. Finally, the abundances associated with each endmember class can be obtained by summing the multiple abundances within each class as in \\eqref{summed}. This method can be considered as a generalized model since, by adjusting the values of $(p,q)$, it boils down to the group lasso, the elitist lasso or the fractional case \\cite{Meyer2016}.\n\nHowever, all the aforementioned sparsity-based methods still suffer from the following limitations:\n\\begin{enumerate}\n \\item \\emph{Physically unrealistic abundance fractions}: They explicitly generate unrealistic multiple abundances corresponding to each spectrum in the endmember bundles.\n \\item \\emph{Lack of adaptability to describe endmember variability}: The ASC imposed on $\\mathbf{r}_i$ does not allow a consistent description of the endmembers within each pixel. In addition, it cannot capture the adaptive and hierarchical structure of endmember spectra for each pixel.\n\\end{enumerate}\nTo overcome these two shortcomings, the present paper capitalizes on this abundant literature to design a new multiple endmember mixing model introduced in the next section.\n\n\\section{Multiple endmember mixing models}\\label{section3}\n\\subsection{MEMM}\nThe proposed model relies on three main ingredients, namely endmember bundles, bundling coefficients and abundances. According to this model, each endmember bundle is mixed to provide a suitable and adaptive endmember spectrum used to unmix a given pixel. The proposed multiple endmember mixing model (MEMM) is defined as\n\\begin{eqnarray}\n\\label{eq:MEMM_model}\n \\mathbf{y}_i=\\mathbf{E}\\mathbf{B}_i\\mathbf{a}_i+\\mathbf{n}_i\n\\end{eqnarray}\nwhere $\\mathbf{B}_i \\in \\mathbb{R}^{N \\times K}$ gathers the so-called bundling coefficients of the $i$th pixel, which decompose the endmember signatures according to the endmember bundles for the considered pixel. 
To enforce the bundle structure, the bundling matrix $\\mathbf{B}_i$ associated with the $i$th pixel is defined as the following block-diagonal matrix\n\\begin{eqnarray}\n \\mathbf{B}_i = \\begin{bmatrix} \\mathbf{b}_{1i} & \\mathbf{0}_{N_1} & \\dotsb & \\mathbf{0}_{N_1}\n \\\\ \\mathbf{0}_{N_2} & \\mathbf{b}_{2i} & \\dotsb & \\mathbf{0}_{N_2}\n\\\\ \\vdots & \\vdots & \\ddots & \\vdots\n\\\\ \\mathbf{0}_{N_K} & \\mathbf{0}_{N_K} & \\dotsb & \\mathbf{b}_{Ki} \\end{bmatrix}\n\\end{eqnarray}\nwhere $\\mathbf{b}_{ki} \\in \\mathbb{R}^{N_k \\times 1}$ gathers the bundling coefficients for the $k$th class at the $i$th pixel. Each bundling coefficient must be nonnegative and the bundling vector $\\mathbf{b}_{ki}$ is expected to be sparse. Indeed, the multiple endmember spectra within each class are usually redundant, and a few endmember spectra within each class should be enough to unmix a pixel. This property can be induced by considering the following bundling constraints\n\\begin{equation}\n\\label{eq:B_constraints}\n \\forall i, \\ \\mathbf{B}_i \\succeq 0 \\quad \\text{and} \\quad \\Vert\\mathbf{B}_i\\Vert_0=\\sum_{k=1}^{K}\\Vert\\mathbf{b}_{ki}\\Vert_0 \\leq s\n\\end{equation}\nwhere $\\Vert\\cdot\\Vert_0$ is the $\\ell_0$-norm that counts the number of nonzero elements and $s$ is the maximum number of nonzero elements in $\\mathbf{B}_i$, i.e., the maximum number of endmember spectra to be used to describe the pixel. The abundance non-negativity constraint (ANC) and the abundance sum-to-one constraint (ASC) are usually imposed. In addition, in this work, complementary sparsity is imposed on each abundance vector, i.e.,\n\\begin{equation}\n\\label{eq:A_constraints_MEMM}\n \\forall k,\\forall i,\\ a_{ki}\\geq 0,\\quad \\text{and} \\quad \\forall i,\\ \\sum_{k=1}^{K}a_{ki}=1, \\ \\ \\Vert\\mathbf{a}_i\\Vert_0 \\leq v\n\\end{equation}\nwhere $v$ is the maximum number of endmember classes to be used to decompose the image pixel.\n\n\\subsection{MEMM$_s$}\\label{subsec:MEMMs}\nThe sparsity constraint \\eqref{eq:B_constraints} applied to $\\mathbf{B}$ can be slightly modified to obtain another meaningful set of constraints\n\\begin{equation}\n\\label{eq:B_constraints_MEMMs}\n \\forall i,\\ \\mathbf{B}_i \\succeq 0 \\quad \\text{and} \\quad \\forall k, \\forall i,\\ \\Vert\\mathbf{b}_{ki}\\Vert_0 \\leq 1.\n\\end{equation}\nThe resulting model, referred to as MEMM$_s$ in what follows, is designed to generate at most one scaled endmember spectrum for each class.\n\n\\subsection{Relationships between MEMM and existing models}\n\\subsubsection{MEMM$_s$ and MESMA}\nWhen the sparsity constraint on the abundances $\\Vert\\mathbf{a}_i\\Vert_0 \\leq v$ in \\eqref{eq:A_constraints_MEMM} is not considered, the optimization problem associated with the MEMM$_s$ model described in paragraph \\ref{subsec:MEMMs} is equivalent to MESMA and sparse MESMA \\cite{Chen2016}. Unlike MESMA, which considers the reconstruction error to determine the optimal combination of endmember classes within each pixel, MEMM$_s$ incorporates the sparsity constraint to select the optimal combination. 
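\nBefore examining further relationships, a minimal numpy\/scipy sketch of the MEMM forward model \\eqref{eq:MEMM_model} and of the block-diagonal structure of $\\mathbf{B}_i$ may be helpful; the sizes, random bundles and sparsity patterns are purely illustrative.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import block_diag\n\nrng = np.random.default_rng(0)\n\nL, K, N_k = 200, 3, 30\nbundles = [rng.random((L, N_k)) for _ in range(K)]\nE = np.hstack(bundles)                  # (L, N) with N = K * N_k\n\n# Sparse nonnegative bundling vectors b_ki: two active spectra per class.\nblocks = []\nfor _ in range(K):\n    b_k = np.zeros((N_k, 1))\n    idx = rng.choice(N_k, size=2, replace=False)\n    b_k[idx, 0] = rng.random(2)\n    blocks.append(b_k)\nB = block_diag(*blocks)                 # (N, K) block-diagonal matrix\n\na = rng.dirichlet(np.ones(K))           # abundances satisfy ANC and ASC\ny = E @ B @ a                           # noise-free MEMM pixel\nM_tilde = E @ B                         # pixel-dependent endmember matrix\n\\end{verbatim}\n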
Returning to the comparison with MESMA, the sparsity constraint prevents an unnecessarily large number of endmember classes from being selected for each pixel.\n\n\\subsubsection{MEMM and pixel-wise endmember variability models}\nBy denoting $\\mathbf{\\tilde{M}}_i=\\mathbf{E}\\mathbf{B}_i$ the equivalent endmember matrix associated with the $i$th pixel, MEMM models the observed pixel spectra as\n\\begin{eqnarray}\n\t\\mathbf{y}_i=\\mathbf{\\tilde{M}}_i\\mathbf{a}_i+\\mathbf{n}_i\n\\end{eqnarray}\nwhere $\\mathbf{\\tilde{M}}_i$ can be interpreted as a set of $K$ spatially varying endmember spectra. This approach has also been adopted in recent works to incorporate endmember variability as additive factors \\cite{Thouv2016}, multiplicative factors \\cite{Drume2016,Imbir2017} or a combination of additive and multiplicative factors \\cite{Hong2017}. In particular, when $N_1=\\cdots=N_K=1$ in \\eqref{eq:bundles}, the endmember bundles $\\mathbf{E}_1,\\ldots, \\mathbf{E}_K$ reduce to unique endmember spectra characterizing each class. The associated bundling coefficient matrix $\\mathbf{B} = \\mathrm{diag}[b_1,\\ldots,b_K]$ is diagonal, where each coefficient $b_k$ scales the corresponding endmember spectrum $\\mathbf{E}_k$ ($k=1,\\ldots,K$). Thus, MEMM generalizes the recently introduced extended linear mixing model \\cite{Drume2016}.\n\nHowever, MEMM is different from the aforementioned methods since it resorts to \\textit{a priori} information (i.e., endmember bundles) to model the endmember variability. More precisely, MEMM describes the admissible variability within an endmember class as the convex cone spanned by the corresponding bundles. As a consequence, \\emph{per se}, MEMM offers an adaptive description of the spectral variability even when pre-defined endmember bundles do not completely capture this variability within each class.\n\n\\subsubsection{MEMM and sparsity-based unmixing methods}\nBy setting $\\mathbf{r}_i=\\mathbf{B}_i\\mathbf{a}_i$, MEMM in \\eqref{eq:MEMM_model} can be rewritten as\n\\begin{eqnarray}\n\t\\mathbf{y}_i=\\mathbf{E}\\mathbf{r}_i+\\mathbf{n}_i\n\\end{eqnarray}\nsimilarly to the existing models discussed in Section \\ref{section2}. The main difference is that MEMM enables the multiple abundances $\\mathbf{r}_i$ to be decomposed into bundling coefficients $\\mathbf{B}_i$ within each class and abundances $\\mathbf{a}_i$, resulting in a bi-layer description of the abundances. This hierarchical decomposition has also been adopted by unmixing methods based on multilayer nonnegative matrix factorization (MLNMF) \\cite{Rajab2015}. However, each layer induced by MEMM (i.e., the bundling matrix and the abundance vector) has a clear and meaningful role. Moreover, when the existing methods impose the ASC on $\\mathbf{r}_i$, they assume that mixed spectra belong to the simplex spanned by the endmember bundles. However, this is a limiting assumption since observed spectra may lie outside the simplex, e.g., when affected by variations in illumination. Conversely, MEMM imposes the ASC only on the abundances $\\mathbf{a}_i$ of the endmember classes and enables the bundling coefficients $\\mathbf{B}_i$ to scale the endmember signatures, e.g., to capture variability induced by varying illumination (see experiments in Section \\ref{section5}). In addition, MEMM complements this bi-layer hierarchy with a twofold structured, physically-motivated sparsity imposed on the multiple abundance vector $\\mathbf{r}_i$, as illustrated in the sketch below. 
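\nThe following toy example (two classes with three spectra each; the numbers are arbitrary) makes this bi-layer reading of $\\mathbf{r}_i=\\mathbf{B}_i\\mathbf{a}_i$ explicit: the ASC holds at the class level, while the bundling coefficients remain free to scale the signatures.\n\\begin{verbatim}\nimport numpy as np\n\na = np.array([0.7, 0.3])          # class abundances: a.sum() == 1 (ASC)\nb1 = np.array([1.2, 0.0, 0.0])    # class 1: one scaled spectrum retained\nb2 = np.array([0.0, 0.4, 0.5])    # class 2: two spectra combined\nr = np.concatenate([b1 * a[0], b2 * a[1]])   # multiple abundances\n\nprint(a.sum())   # 1.0  -> ASC imposed on the class abundances only\nprint(r.sum())   # 1.11 -> r need not sum to one, leaving room for\n                 #         illumination-induced scaling\n\\end{verbatim}\n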
This bilevel sparsity has the advantage of reducing overfitting, and one may expect a significant improvement in the stability and interpretability of the abundance estimates.\n\nFinally, the intrinsic structure of MEMM is also similar to the recently developed methods based on robust constrained matrix factorization \\cite{Akhta2017} and kernel archetypoid analysis \\cite{Zhao2017}. These methods model a set $\\mathbf{Y} = \\left[\\mathbf{y}_1,\\ldots,\\mathbf{y}_P\\right] \\in \\mathbb{R}^{L \\times P}$ of $P$ pixel spectra as\n\\begin{eqnarray}\n\t\\mathbf{Y}=\\mathbf{Y}\\mathbf{C}\\mathbf{A}+\\mathbf{N}\n\\end{eqnarray}\nwhere $\\mathbf{C} \\in \\mathbb{R}^{P \\times K}$ is a matrix gathering a set of coefficients, $\\mathbf{A} = \\left[\\mathbf{a}_1,\\ldots,\\mathbf{a}_P\\right] \\in \\mathbb{R}^{K \\times P}$ is the abundance matrix and $\\mathbf{N} = \\left[\\mathbf{n}_1,\\ldots,\\mathbf{n}_P\\right] \\in \\mathbb{R}^{L \\times P}$ is the error and noise matrix. Sparsity (induced by the ASC or the use of the $\\ell_0$-pseudonorm) and nonnegativity constraints are imposed on each column of $\\mathbf{C}$, and $\\mathbf{Y}\\mathbf{C}$ can be interpreted as synthetic endmember spectra. These methods use a subset of the whole image pixels to generate synthetic endmember spectra that are fixed within each image. On the other hand, MEMM uses a subset of the endmember bundles to generate synthetic endmember spectra that may be different for each pixel.\n\n\\section{MEMM-based unmixing algorithm}\\label{sec:algo}\nUnmixing according to the proposed MEMM can be formulated as the minimization problem\n\\begin{eqnarray}\\label{eq_sep}\n\\begin{aligned}\n\t& \\min\\limits_{\\mathbf{B}_i,\\mathbf{a}_i}\\frac{1}{2}\\Vert\\mathbf{E}\\mathbf{B}_i\\mathbf{a}_i-\\mathbf{y}_i\\Vert^2_2 \\\\\n \\text{s.t.}\\ \\forall k,\\forall i, & \\ a_{ki} \\geq 0,\\quad \\sum_{k=1}^{K}a_{ki}=1,\\quad \\Vert\\mathbf{a}_i\\Vert_0 \\leq v, \\\\\n & \\mathbf{B}_i \\succeq 0,\\quad \\Vert\\mathbf{B}_i\\Vert_0 \\leq s.\n\\end{aligned}\n\\end{eqnarray}\nThis minimization problem is similar to the double sparsity-inducing method proposed in \\cite{Rubin2010}. Using an alternative formulation, it can be written as the following nonconvex minimization problem:\n\\begin{equation}\n\t\\min\\limits_{\\mathbf{B}_i,\\mathbf{a}_i} \\mathcal{J}\\left(\\mathbf{B}_i,\\mathbf{a}_i\\right)=\\left\\lbrace f(\\mathbf{B}_i,\\mathbf{a}_i) + h(\\mathbf{B}_i)+g(\\mathbf{a}_i) \\right\\rbrace\n\\end{equation}\nwith\n\\begin{eqnarray}\n f(\\mathbf{B}_i,\\mathbf{a}_i)&=\\frac{1}{2}\\Vert\\mathbf{E}\\mathbf{B}_i\\mathbf{a}_i-\\mathbf{y}_i\\Vert^2_2\\\\\n h(\\mathbf{B}_i)&=\\iota_{\\mathbb{R}_+}(\\mathbf{B}_i) + \\lambda_b\\Vert\\mathbf{B}_i\\Vert_0\\\\\n g(\\mathbf{a}_i)&=\\iota_{\\mathbb{S}}(\\mathbf{a}_i) + \\lambda_a\\Vert\\mathbf{a}_i\\Vert_0\n\\end{eqnarray}\nwhere $\\lambda_a$ and $\\lambda_b$ are parameters which control the balance between the data fitting term and the sparse regularizations, $\\iota_{\\mathcal{C}}(\\mathbf{x})$ is the indicator function on the set $\\mathcal{C}$ (i.e., $\\iota_{\\mathcal{C}}(\\mathbf{x})=0$ when $\\mathbf{x} \\in \\mathcal{C}$ whereas $\\iota_{\\mathcal{C}}(\\mathbf{x})=\\infty$ when $\\mathbf{x} \\notin \\mathcal{C}$), and $\\mathbb{S}$ is the simplex defined by the ASC and ANC. Solving this optimization problem is challenging since the regularization functions $h$ and $g$ are nonconvex and nonsmooth. 
However, it can be tackled thanks to the proximal alternating linearized minimization (PALM) \\cite{Bolte2014}. With guarantees to converge to a critical point, PALM iteratively updates the parameters $\\mathbf{a}_i$ and $\\mathbf{B}_i$ by alternately minimizing the objective function with respect to (w.r.t.) these parameters, i.e., by solving the following proximal problems\n\\begin{eqnarray}\n\\begin{aligned}\n\t\\mathbf{B}_i^{(t+1)} \\in & \\min\\limits_{\\mathbf{B}_i} \\left\\lbrace h(\\mathbf{B}_i) + \\langle \\mathbf{B}_i-\\mathbf{B}_i^{(t)},\\nabla_{\\mathbf{B}_i}f(\\mathbf{B}_i^{(t)},\\mathbf{a}_i^{(t)}) \\rangle \\right.\\\\\n &\\left. + \\frac{c_t}{2}\\Vert\\mathbf{B}_i-\\mathbf{B}_i^{(t)}\\Vert_2^2 \\right\\rbrace\\\\\n \\mathbf{a}_i^{(t+1)} \\in & \\min\\limits_{\\mathbf{a}_i} \\left\\lbrace g(\\mathbf{a}_i) + \\langle \\mathbf{a}_i-\\mathbf{a}_i^{(t)},\\nabla_{\\mathbf{a}_i}f(\\mathbf{B}_i^{(t+1)},\\mathbf{a}_i^{(t)}) \\rangle \\right. \\\\\n &\\left. + \\frac{d_t}{2}\\Vert\\mathbf{a}_i-\\mathbf{a}_i^{(t)}\\Vert_2^2 \\right\\rbrace\n\\end{aligned}\n\\end{eqnarray}\nThe pseudocode for MEMM is shown in Algorithm~\\ref{algorithm_memm} and these two steps are described in what follows.\n\n\\subsection{Optimization w.r.t. $\\mathbf{B}_i$}\nTo optimize only w.r.t. the nonzero (block-diagonal) entries of $\\mathbf{B}_i$, the objective function can be rewritten with the following decomposition\n\\begin{eqnarray}\n\\begin{aligned}\n\tf(\\mathbf{b}_i,\\mathbf{a}_i)& =\\frac{1}{2}\\Vert\\mathbf{U}_i\\mathbf{b}_i-\\mathbf{y}_i\\Vert^2_2\\\\\n h(\\mathbf{b}_i)& =\\iota_{\\mathbb{R}_+}(\\mathbf{b}_i) + \\lambda_b\\Vert\\mathbf{b}_i\\Vert_0\n\\end{aligned}\n\\end{eqnarray}\nwhere\n\\begin{eqnarray*}\n\\begin{aligned}\n\t\\mathbf{U}_i&=\\left[\\mathbf{E}_1 \\odot a_{1i}| \\cdots| \\mathbf{E}_K\\odot a_{Ki}\\right]\\\\\n \\mathbf{b}_i &= \\left[ \\mathbf{b}_{1i}^T, \\mathbf{b}_{2i}^T, \\cdots, \\mathbf{b}_{Ki}^T\\right]^T.\n\\end{aligned}\n\\end{eqnarray*}\nThis leads to the following updating rule\n\\begin{eqnarray*}\n\t\\min\\limits_{\\mathbf{b}_i} \\left\\lbrace h(\\mathbf{b}_i) + \\frac{c_{t}}{2}\\Vert\\mathbf{b}_i-(\\mathbf{b}_i^{(t)}-\\frac{1}{c_{t}}\\nabla_{\\mathbf{b}}f(\\mathbf{b}_i^{(t)},\\mathbf{a}_i^{(t)}))\\Vert_2^2 \\right\\rbrace\n\\end{eqnarray*}\nwhere $\\nabla_{\\mathbf{b}_i}f(\\mathbf{b}_i^{(t)},\\mathbf{a}_i^{(t)})=\\mathbf{U}_i^T\\left(\\mathbf{U}_i \\mathbf{b}_i - \\mathbf{y}_i\\right)$.\n\nUsing similar computations as in \\cite{Bolte2014}, this can be conducted as\n\\begin{eqnarray}\n\t\\begin{aligned}\n\t\\mathbf{b}_i^{(t+1)} & \\in \\text{prox}_{c_t\/\\lambda_b}^h(\\mathbf{b}_i^{(t)}-\\frac{1}{c_t}\\nabla_{\\mathbf{b}_i}f(\\mathbf{b}_i^{(t)},\\mathbf{a}_i^{(t)}))\n \\end{aligned}\n\\end{eqnarray}\nwhere $c_{t}=\\gamma_m\\Vert\\mathbf{U}_i^T\\mathbf{U}_i\\Vert_{\\mathrm{F}}$ sets the step size at each iteration. The proximal operator associated with $h$ can be computed using the approach of \\cite{Bolte2014}. Finally, the bundling matrix $\\mathbf{B}_i$ can be reconstructed as $\\mathbf{B}_i=\\text{blkdiag}(\\mathbf{b}_i)$ where $\\text{blkdiag}(\\cdot)$ generates the block diagonal matrix $\\mathbf{B}_i$ from the vector $\\mathbf{b}_i$.\\\\\n\n\\subsection{Optimization w.r.t. $\\mathbf{a}_i$}\nTo optimize w.r.t. $\\mathbf{a}_i$, the objective function can be rewritten using the decomposition\n\\begin{eqnarray}\n\t f(\\mathbf{B}_i,\\mathbf{a}_i)=&\\frac{1}{2}\\Vert\\mathbf{\\tilde{M}}_i\\mathbf{a}_i-\\mathbf{y}_i\\Vert^2_2\\\\\n g(\\mathbf{a}_i)=&\\iota_{\\mathbb{S}}(\\mathbf{a}_i) + \\lambda_a\\Vert\\mathbf{a}_i\\Vert_0\n\\end{eqnarray}\nwhere $\\mathbf{\\tilde{M}}_i=\\mathbf{E}\\mathbf{B}_i$. Thus, updating the abundance vector can be formulated as\n\\begin{eqnarray*}\n\t\\min\\limits_{\\mathbf{a}_i} \\left\\lbrace g(\\mathbf{a}_i) + \\frac{d_{t}}{2}\\left\\|\\mathbf{a}_i-\\left(\\mathbf{a}_i^{(t)}-\\frac{1}{d_{t}}\\nabla_{\\mathbf{a}_i}f(\\mathbf{B}_i^{(t+1)},\\mathbf{a}_i^{(t)})\\right)\\right\\|^2 \\right\\rbrace\n\\end{eqnarray*}\nwhere $\\nabla_{\\mathbf{a}_i}f(\\mathbf{B}_i^{(t+1)},\\mathbf{a}_i^{(t)})=\\mathbf{\\tilde{M}}_i^T\\left(\\mathbf{\\tilde{M}}_i \\mathbf{a}_i - \\mathbf{y}_i\\right)$. Using the proximal operator, this can be written as\n\\begin{eqnarray}\n\t\\begin{aligned}\n\t\\mathbf{a}_i^{(t+1)} & \\in \\text{prox}_{d_{t}\/\\lambda_a}^g\\left(\\mathbf{a}_i^{(t)}-\\frac{1}{d_{t}}\\nabla_{\\mathbf{a}_i}f(\\mathbf{B}_i^{(t+1)},\\mathbf{a}_i^{(t)})\\right)\n \\end{aligned}\n\\end{eqnarray}\nwhere $d_{t}=\\gamma_a\\Vert\\mathbf{\\tilde{M}}_i^T\\mathbf{\\tilde{M}}_i\\Vert_{\\mathrm{F}}$ sets the step size at each iteration. Moreover, the proximal mapping associated with $g$ can be computed using the method developed in \\cite{Anast2013}.\n\n\\begin{algorithm}\n\\caption{Algorithm for MEMM-based unmixing}\\label{algorithm_memm}\n\\begin{algorithmic}[1]\n\\State $\\mathbf{Input}: \\mathbf{y}_i,\\mathbf{E}$\n\\State \\textbf{Initialization}: $\\mathbf{a}_i^{(0)}$ and $\\mathbf{B}_i^{(0)}$.\n\\State Set $\\mathbf{r}_i^{(0)}$ using an unmixing method (e.g. FCLS).\n\\State $\\mathbf{a}_i^{(0)}=\\mathbf{G}^T\\mathbf{r}_i^{(0)}$\n\\State $\\forall k, \\ \\mathbf{b}_{ki}^{(0)}=(\\mathbf{g}_k\\odot\\mathbf{r}_i^{(0)}) \\oslash a_{ki}^{(0)}$\n\\State \\textbf{Main procedure}:\n\\While{ the stopping criterion is not satisfied}\n\\State $\\mathbf{b}_i^{(t+1)} \\leftarrow \\text{prox}_{c_t\/\\lambda_b}^h(\\mathbf{b}_i^{(t)}-\\frac{1}{c_{t}}\\nabla_{\\mathbf{b}_i}f(\\mathbf{b}_i^{(t)},\\mathbf{a}_i^{(t)}))$\n\\State $\\mathbf{B}_i^{(t+1)}=\\text{blkdiag}(\\mathbf{b}_i^{(t+1)})$\n\\State $\\mathbf{a}_i^{(t+1)} \\leftarrow \\text{prox}_{d_{t}\/\\lambda_a}^g(\\mathbf{a}_i^{(t)}-\\frac{1}{d_{t}}\\nabla_{\\mathbf{a}_i}f(\\mathbf{B}_i^{(t+1)},\\mathbf{a}_i^{(t)}))$\n\\EndWhile\n\\State $\\mathbf{Output}: \\mathbf{a}_i^{(t+1)},\\mathbf{B}_i^{(t+1)}$\n\\end{algorithmic}\n\\end{algorithm}\n\n\\subsection{Initialization and stopping rule}\nMEMM requires initial estimates $\\mathbf{a}_i^{(0)}$ and $\\mathbf{B}_i^{(0)}$ of the abundance vector and bundling matrix, respectively. To do so, first, MEMM estimates a multiple abundance vector $\\mathbf{r}_i^{(0)}$ using a state-of-the-art LMM-based unmixing method (e.g., FCLS, see line 3). Then an initial estimate of the single abundance vector $\\mathbf{a}_i^{(0)}$ is computed according to \\eqref{summed} (see line 4). Finally, the bundling matrix $\\mathbf{B}_i^{(0)}$ is initialized from the corresponding per-class scaling factors (see line 5, where $\\oslash$ stands for the element-wise division). Once initial estimates have been obtained, $\\mathbf{a}_i^{(t+1)}$ and $\\mathbf{b}_i^{(t+1)}$ are iteratively updated in lines 8--10. 
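\nA compact Python sketch of Algorithm~\\ref{algorithm_memm} is given below (with a fixed number of iterations for simplicity). The $\\mathbf{b}_i$-update implements the hard-thresholding proximal operator of $h$; the $\\mathbf{a}_i$-update uses a common heuristic, keeping the $v$ largest entries and projecting them onto the simplex, as a simplification made for brevity rather than the exact proximal mapping of $g$ from \\cite{Anast2013}.\n\\begin{verbatim}\nimport numpy as np\n\ndef project_simplex(x):\n    # Euclidean projection onto the probability simplex.\n    u = np.sort(x)[::-1]\n    css = np.cumsum(u)\n    idx = np.arange(1, x.size + 1)\n    rho = np.nonzero(u + (1.0 - css) \/ idx > 0)[0][-1]\n    theta = (1.0 - css[rho]) \/ (rho + 1.0)\n    return np.maximum(x + theta, 0.0)\n\ndef memm_palm(y, E, G, b, a, lam_b=1e-2, v=3, gamma=1.1, n_iter=500):\n    # G: (N, K) binary class-membership matrix; b: (N,); a: (K,).\n    for _ in range(n_iter):\n        # b-update: gradient step on f, prox of iota_{>=0} + lam_b l0.\n        U = E * (G @ a)                      # U_i = [E_1 a_1i|...|E_K a_Ki]\n        c = gamma * np.linalg.norm(U.T @ U)  # Frobenius norm sets the step\n        z = np.maximum(b - (U.T @ (U @ b - y)) \/ c, 0.0)\n        z[z * z < 2.0 * lam_b \/ c] = 0.0     # hard thresholding (l0 prox)\n        b = z\n        # a-update: gradient step on f, then sparse simplex projection.\n        M = (E * b) @ G                      # column k of E B_i is E_k b_ki\n        d = gamma * np.linalg.norm(M.T @ M)\n        w = a - (M.T @ (M @ a - y)) \/ d\n        keep = np.argsort(w)[::-1][:v]       # at most v active classes\n        a = np.zeros_like(a)\n        a[keep] = project_simplex(w[keep])\n    return b, a\n\\end{verbatim}\n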
The algorithm stops when the difference between the updated and previous values of the objective function $\\mathcal{J}(\\mathbf{B}_i,\\mathbf{a}_i)$ is smaller than a predetermined threshold.\n\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=\\linewidth]{endm_b_sim}\n \\caption{Synthetically generated endmember bundles.}\\label{fig:endm_b_sim}\n\\end{figure*}\n\n\\section{Experiments using simulated data}\\label{section4}\nFirst, the relevance of the proposed MEMM and its variant MEMM$_{s}$ was evaluated through experiments conducted on simulated datasets.\n\n\\subsection{Generation of bundles}\nFirst, $K=10$ spectra were selected from the USGS spectral library. The $10$ spectra were chosen so that the minimum angle between any two spectra was larger than $5^\\circ$. This pruning prevented the synthetic endmember bundles from overlapping each other. Second, endmember bundles $\\mathbf{E}_k$ ($k=1,\\ldots,K$) were designed by randomly generating $N_k=30$ endmember spectra for each endmember bundle using the approach proposed in \\cite{Thouv2016a}. These bundles, depicted in Fig. \\ref{fig:endm_b_sim}, were used for generating the following two simulated datasets.\n\n\\subsubsection{Simulated dataset 1 (SIM1)}\nThe first dataset, referred to as SIM1 in what follows, was generated following the MESMA model. A mixed spectrum was generated using the following five steps. First, the number of active endmember classes was randomly determined in the set $\\{1,\\ldots,5\\}$. Second, a random combination of endmember classes was selected. Third, one spectrum within each selected endmember class was randomly chosen. Fourth, the abundances of the selected endmember classes were randomly generated using a Dirichlet distribution to jointly ensure the ANC and ASC. Finally, a mixed spectrum was generated by a linear combination of the endmember spectra of the selected endmember classes and the randomly generated abundances. A set of $P=100$ mixed spectra was generated in this study. Different amounts of additive Gaussian noise with corresponding signal-to-noise ratios of $50$dB, $40$dB and $30$dB were added to the mixed spectra.\n\n\\subsubsection{Simulated dataset 2 (SIM2)}\nSIM2 was generated following the MEMM model. A mixed spectrum was generated similarly to SIM1, the main difference being the bundling coefficients of MEMM. In order to generate the bundling coefficients, the number of active spectra within each class was randomly chosen in the set $\\{1,\\ldots,5\\}$. The bundling coefficients of the randomly selected spectra were generated from a Dirichlet distribution. A mixed spectrum was then generated by a linear combination of the spectra, the bundling coefficients and the abundances. As for SIM1, $P=100$ mixed spectra were generated and different amounts of Gaussian noise were also added to SIM2.\n\n\n\\subsection{Compared methods}\nMEMM and MEMM$_s$ were compared with five other methods that incorporate endmember bundles and promote sparsity: fully constrained least squares (FCLS) \\cite{Biouc2010}, sparse unmixing by variable splitting and augmented Lagrangian (SUnSAL) \\cite{Iorda2011}, alternating angle minimization (AAM) \\cite{Heyle2016a} and methods based on the group lasso and the elitist lasso \\cite{Meyer2016}. Note that FCLS and SUnSAL can incorporate endmember bundles by considering the model in (\\ref{eq:4}). FCLS and SUnSAL were chosen in order to compare the proposed methods with the most widely used sparsity-promoting unmixing methods. 
AAM was selected for comparison because it is the latest variant of MESMA and is more computationally efficient than MESMA while achieving good performance. Group lasso and elitist lasso were also included for comparison because they can incorporate the structure of endmember bundles, as MEMM does. MEMM, MEMM$_s$ and the methods based on group lasso and elitist lasso require initial estimates of abundances. In order to compare the methods fairly, this study used the abundances estimated by FCLS as the initial estimates required by these methods. SUnSAL, the methods based on group lasso or elitist lasso and MEMM$_s$ require a parameter $\\lambda_r$ or $\\lambda_a$ controlling sparsity regularization. MEMM requires two parameters $\\lambda_b$ and $\\lambda_a$. In order to compare the methods fairly, these parameters were empirically determined in the set $\\{0.0001, 0.001, 0.01, 0.1, 1, 5\\}$ for each simulated dataset so that the selected values produced the highest SRE$_a$. Parameter sensitivity of the proposed method is reported in the supplementary document \\cite{Uezat2018sup}. Finally, computational times are also discussed.\n\n\\subsection{Performance criteria}\nThe main objective of this experiment was to assess the performance of the algorithms when selecting a combination of endmember classes and spectra, and when estimating the abundances corresponding to endmember classes and spectra. Three criteria were chosen for quantitative validation of the methods. To evaluate the quality of the reconstruction, one defines the signal-to-reconstruction error (SRE) per endmember class and per endmember spectrum as \\cite{Iorda2014a}\n\\begin{equation}\n\\begin{aligned}\n\\text{SRE}_a \\equiv \\mathbb{E}\\left[\\Vert \\mathbf{a} \\Vert_2^2 \\right]\/\\mathbb{E}\\left[\\Vert \\mathbf{a}-\\hat{\\mathbf{a}} \\Vert_2^2 \\right]\\\\\n\\text{SRE}_r \\equiv \\mathbb{E}\\left[\\Vert \\mathbf{r} \\Vert_2^2 \\right]\/\\mathbb{E}\\left[\\Vert \\mathbf{r}-\\hat{\\mathbf{r}} \\Vert_2^2 \\right]\\\\\n\\end{aligned}\n\\end{equation}\nwhere $\\mathbf{a}$ and $\\hat{\\mathbf{a}}$ are the actual and estimated abundance vectors of all pixels, and $\\mathbf{r}$ and $\\hat{\\mathbf{r}}$ are the actual and estimated multiple abundance vectors of all pixels.\n\nSecond, the number of nonzero abundances was used to evaluate the sparsity level per endmember class and per endmember spectrum recovered by the methods, i.e., \\cite{Iorda2014a,Chen2016}\n\\begin{equation}\n\\begin{aligned}\n\\text{SL}_a \\equiv \\frac{1}{P}\\sum_{i=1}^{P} \\Vert \\hat{\\mathbf{a}}_i \\Vert_0\\\\\n\\text{SL}_r \\equiv \\frac{1}{P}\\sum_{i=1}^{P} \\Vert \\hat{\\mathbf{r}}_i \\Vert_0.\n\\end{aligned}\n\\end{equation}\nAs in \\cite{Iorda2014a}, abundances smaller than $10^{-4}$ were considered as \\textit{zero abundances}.\n\nFinally, to validate the performance in selecting a relevant combination of endmember classes or spectra, one defines the distance between the actual and estimated supports (DIST) \\cite{Elad2010,Chen2016}\n\\begin{equation}\n\\begin{aligned}\n\\text{DIST}_a \\equiv \\frac{1}{P}\\sum_{i=1}^{P} \\frac{\\max\\left(\\vert \\mathcal{S}_{i}^a \\vert, \\vert \\hat{\\mathcal{S}}_{i}^a \\vert\\right) - \\vert \\mathcal{S}_{i}^a \\cap \\hat{\\mathcal{S}}_{i}^a\\vert}{\\max\\left(\\vert \\mathcal{S}_{i}^a \\vert, \\vert \\hat{\\mathcal{S}}_{i}^a \\vert\\right)}\\\\\n\\text{DIST}_r \\equiv \\frac{1}{P}\\sum_{i=1}^{P} \\frac{\\max\\left(\\vert \\mathcal{S}_{i}^r \\vert, \\vert \\hat{\\mathcal{S}}_{i}^r \\vert\\right) - \\vert \\mathcal{S}_{i}^r \\cap 
\\hat{\\mathcal{S}}_{i}^r\\vert}{\\max\\left(\\vert \\mathcal{S}_{i}^r \\vert, \\vert \\hat{\\mathcal{S}}_{i}^r \\vert\\right)}\\\\\n\\end{aligned}\n\\end{equation}\nwhere $\\mathcal{S}$ and $\\hat{\\mathcal{S}}$ are the true and estimated support sets (i.e., indexes of nonzero values), $\\vert \\mathcal{S} \\vert$ represents the total number of elements in the set $\\mathcal{S}$, and $\\cap$ stands for the intersection operator. The figures of merit DIST$_a$ and DIST$_r$ evaluate the distance between the two supports of endmember classes and the distance between the two supports of endmember spectra, respectively.\n\n\n\n\\begin{table}[h!]\n \\caption{SRE per endmember class ($\\text{SRE}_a$).}\\label{table:sre_c}\n\\resizebox{\\columnwidth}{!}{\n\\begin{tabular}{c||c||ccccccc}\t\t\t\t\t\t\t\t\t\n\\toprule\t\t\t\t\t\t\t\t\t\n\t&SNR\t&FCLS\t&AAM\t&SUnSAL\t&Group lasso\t&Elitist lasso\t&MEMM$_s$\t&MEMM\t\\\\\n\\midrule\t\t\t\t\t\t\t\t\t\n\\multirow{3}{*}{SIM1}\t&30dB\t&18.0507\t&18.7656\t&18.9285\t&\\textbf{19.4215}\t&19.2333\t&14.5739\t&18.6904\t\\\\\n\t&40dB\t&22.9074\t&\\textbf{25.8112}\t&23.0496\t&23.589\t&23.303\t&14.8229\t&23.3532\t\\\\\n\t&50dB\t&27.358\t&\\textbf{33.8279}\t&27.5013\t&27.4184\t&26.8979\t&14.1579\t&28.0017\t\\\\\n\\midrule\t\t\t\t\t\t\t\t\t\n\\multirow{3}{*}{SIM2}\t&30dB\t&16.1865\t&15.9501\t&16.5182\t&\\textbf{18.0736}\t&16.4315\t&12.1614\t&16.4785\t\\\\\n\t&40dB\t&21.6942\t&19.4906\t&21.6671\t&22.0263\t&21.3809\t&13.6682\t&\\textbf{22.1381}\t\\\\\n\t&50dB\t&25.6617\t&22.9952\t&25.5168\t&25.9215\t&24.7702\t&14.9405\t&\\textbf{26.5442}\t\\\\\n\\bottomrule\t\t\t\t\t\t\t\t\t\n\\end{tabular}\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\n}\n \\end{table}\n\n\\begin{table}[h!]\n \\caption{SRE per endmember spectrum ($\\text{SRE}_r$).}\\label{table:sre_s}\n\\resizebox{\\columnwidth}{!}{\n\\begin{tabular}{c||c||ccccccc}\t\t\t\t\t\t\t\t\t\n\\toprule\t\t\t\t\t\t\t\t\t\n\t&SNR\t&FCLS\t&AAM\t&SUnSAL\t&Group lasso\t&Elitist lasso\t&MEMM$_s$\t&MEMM\t\\\\\n\\midrule\t\t\t\t\t\t\t\t\t\n\\multirow{3}{*}{SIM1}\t&30dB\t&1.1752\t&\\textbf{2.2035}\t&1.7234\t&2.0398\t&1.5188\t&-0.4570\t&0.7692\t\\\\\n\t&40dB\t&2.8611\t&\\textbf{4.6569}\t&3.0262\t&3.0357\t&2.9957\t&0.5171\t&2.7653\t\\\\\n\t&50dB\t&3.6312\t&\\textbf{12.7796}\t&3.7692\t&3.7591\t&3.7576\t&0.1700\t&3.5721\t\\\\\n\\midrule\t\t\t\t\t\t\t\t\t\n\\multirow{3}{*}{SIM2}\t&30dB\t&1.0084\t&-0.2324\t&1.4405\t&\\textbf{1.8935}\t&1.3043\t&-2.0312\t&0.4313\t\\\\\n\t&40dB\t&2.3323\t&1.4226\t&2.4976\t&\\textbf{2.5741}\t&2.48\t&-2.1441\t&2.2462\t\\\\\n\t&50dB\t&3.2175\t&3.3104\t&3.3273\t&3.3266\t&\\textbf{3.3666}\t&-3.0151\t&3.2304\t\\\\\n\\bottomrule\t\t\t\t\t\t\t\t\t\n\\end{tabular}\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\n}\n \\end{table}\n\n\n\\subsection{Results}\nSRE per class was calculated for each method and reported in Table \\ref{table:sre_c}. For SIM1 with $50$dB, AAM performed best among all methods. The performance of AAM, however, was degraded as the SNR became lower. For data with $30$dB, the results derived from AAM were worse than those derived from SUnSAL, elitist lasso and group lasso. In addition, AAM performed poorly compared with the other methods in SIM2. This showed that the MESMA-based approach (AAM) was less effective when the given endmember bundles did not completely represent the endmember variability present in the data and the SNR of the data was low ($<40$dB). This finding was also observed in \\cite{Uezat2016}. MEMM produced better results for SIM2 with $50$dB than the other sparsity-based methods and produced comparable results with $40$dB and $30$dB. 
MEMM$_s$ performed poorly compared with all the other methods. SRE per spectrum was also calculated for each method and reported in Table \\ref{table:sre_s}. Compared with SRE per class, SRE per spectrum was very low for all methods. This showed that the exact recovery of the multiple abundances $\\mathbf{r}_i$ was challenging under conditions where a large number of endmember spectra were present within each class.\n\n\n\\begin{table}[h!]\n \\caption{Sparsity level per endmember class ($\\text{SL}_a$). Reference: $\\text{SL}_a=2.01$ in SIM1 and $\\text{SL}_a=2.27$ in SIM2.}\\label{table:nonneg_c}\n\\resizebox{\\columnwidth}{!}{\n\\begin{tabular}{c||c||ccccccc}\t\t\t\n\\toprule\t\t\t\t\t\t\t\t\t\n\t&SNR\t&FCLS\t&AAM\t&SUNSAL\t&Group lasso\t&Elitist lasso\t&MEMM$_s$\t&MEMM\t\\\\\n\\midrule\t\t\t\t\t\t\t\t\t\n\\multirow{3}{*}{SIM1}\t&30dB\t&4.8\t&4.25\t&3.32\t&4.59\t&6.83\t&3.05\t&\\textbf{1.84}\t\\\\\n\t&40dB\t&4.62\t&4.29\t&3.82\t&4.54\t&5.64\t&3.64\t&\\textbf{2.14}\t\\\\\n\t&50dB\t&4.34\t&4.05\t&4.24\t&4.58\t&5\t&3.55\t&\\textbf{1.99}\t\\\\\n\\midrule\t\t\t\t\t\t\t\t\t\n\\multirow{3}{*}{SIM2}\t&30dB\t&5.02\t&4.45\t&3.56\t&4.72\t&6.63\t&4.27\t&\\textbf{2.03}\t\\\\\n\t&40dB\t&5.21\t&4.59\t&4.16\t&5.02\t&5.6\t&4.25\t&\\textbf{2.5}\t\\\\\n\t&50dB\t&4.97\t&4.49\t&4.87\t&4.72\t&5.54\t&3.9\t&\\textbf{2.28}\t\\\\\n\\bottomrule\t\t\t\t\t\t\t\t\t\n\\end{tabular}\t\t\t\t\t\t\t\t\t\t\t\t\t\t\n}\n \\end{table}\n\n\\begin{table}[h!]\n \\caption{Sparsity level per endmember spectrum ($\\text{SL}_r$). Reference: $\\text{SL}_r=2.01$ in SIM1 and $\\text{SL}_r=2.39$ in SIM2.}\\label{table:nonneg_s}\n\\resizebox{\\columnwidth}{!}{\n\\begin{tabular}{c||c||ccccccc}\t\t\t\t\t\t\t\t\n\\toprule\t\t\t\t\t\t\t\t\t\n\t&SNR\t&FCLS\t&AAM\t&SUNSAL\t&Group lasso\t&Elitist lasso\t&MEMM$_s$\t&MEMM\t\\\\\n\\midrule\t\t\n\\multirow{3}{*}{SIM1}\t&30dB\t&10.83\t&4.25\t&10.52\t&43.12\t&11.28\t&\\textbf{3.04}\t&5.07\t\\\\\n\t&40dB\t&14.25\t&4.29\t&12.39\t&29.88\t&15.81\t&\\textbf{3.55}\t&8.72\t\\\\\n\t&50dB\t&18.66\t&4.05\t&18.07\t&23.57\t&21.31\t&\\textbf{3.45}\t&11.13\t\\\\\n\\midrule\t\t\t\t\t\t\t\t\t\n\\multirow{3}{*}{SIM2}\t&30dB\t&11.98\t&4.45\t&13.08\t&66.64\t&13.16\t&\\textbf{4.22}\t&5.26\t\\\\\n\t&40dB\t&16.79\t&4.59\t&15.6\t&45.99\t&19.67\t&\\textbf{4.15}\t&10.88\t\\\\\n\t&50dB\t&25.34\t&4.49\t&24.84\t&65.97\t&29.76\t&\\textbf{3.78}\t&19.38\t\\\\\n\\bottomrule\t\t\t\t\t\t\t\t\t\n\\end{tabular}\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\n}\n \\end{table}\n\nThe average numbers of nonzero abundances in $\\mathbf{a}_i$ and $\\mathbf{r}_i$ are shown in Table \\ref{table:nonneg_c} and Table \\ref{table:nonneg_s}, respectively. For the number of nonzero abundances per class, MEMM performed best among all methods for both SIM1 and SIM2. This showed that the sparse constraint used in MEMM successfully led to the selection of smaller numbers of endmember classes. For the number of nonzero abundances per spectrum, AAM and MEMM$_s$ produced smaller numbers of nonzero abundances than the other methods. 
This was because these methods selected at most one spectrum within each class and enforced greater sparsity when selecting endmember spectra.\n\n\n\\begin{table}[h!]\n \\caption{Distance between actual and estimated supports per endmember class ($\\text{DIST}_a$).}\\label{table:dist_c}\n\\resizebox{\\columnwidth}{!}{\n\\begin{tabular}{c||c||ccccccc}\t\t\t\t\t\t\t\t\t\n\\toprule\t\t\t\t\t\t\t\t\t\n\t&SNR\t&FCLS\t&AAM\t&SUNSAL\t&Group lasso\t&Elitist lasso\t&MEMM$_s$\t&MEMM\t\\\\\n\\midrule\t\t\t\t\t\t\t\t\t\n\\multirow{3}{*}{SIM1}\t&30dB\t&0.5893\t&0.5489\t&0.3939\t&0.5866\t&0.6967\t&0.3491\t&\\textbf{0.1195}\t \\\\\n\t&40dB\t&0.5689\t&0.5163\t&0.4616\t&0.5594\t&0.6368\t&0.4498\t&\\textbf{0.1265}\t\\\\\n\t&50dB\t&0.5137\t&0.4828\t&0.5035\t&0.5479\t&0.5795\t&0.4180\t&\\textbf{0.0758}\t\\\\\n\\midrule\t\t\t\t\t\t\t\t\t\n\\multirow{3}{*}{SIM2}\t&30dB\t&0.5725\t&0.5156\t&0.3775\t&0.5487\t&0.6538\t&0.5025\t&\\textbf{0.1843}\t \\\\\n\t&40dB\t&0.5789\t&0.4843\t&0.4521\t&0.5534\t&0.5952\t&0.4938\t&\\textbf{0.162}\t\\\\\n\t&50dB\t&0.5345\t&0.4696\t&0.5247\t&0.5185\t&0.5825\t&0.4179\t&\\textbf{0.1033}\t\\\\\n\\bottomrule\t\t\t\t\t\t\t\t\t\n\\end{tabular}\t\t\t\t\t\t\t\n}\n \\end{table}\n\n\\begin{table}[h!]\n \\caption{Distance between actual and estimated supports per endmember spectrum ($\\text{DIST}_r$).}\\label{table:dist_s}\n\\resizebox{\\columnwidth}{!}{\n\\begin{tabular}{c||c||ccccccc}\t\t\t\t\t\t\t\t\t\n\\toprule\t\t\t\t\t\t\t\t\t\n\t&SNR\t&FCLS\t&AAM\t&SUNSAL\t&Group lasso\t&Elitist lasso\t&MEMM$_s$\t&MEMM\t\\\\\n\\midrule\t\t\t\t\t\t\t\t\t\n\\multirow{3}{*}{SIM1}\t&30dB\t&0.9167\t&0.767\t&0.853\t&0.94\t&0.8963\t&\\textbf{0.7493}\t&0.8115\t \\\\\n\t&40dB\t&0.8914\t&\\textbf{0.6707}\t&0.8587\t&0.9137\t&0.8881\t&0.7351\t&0.774\t\\\\\n\t&50dB\t&0.8649\t&\\textbf{0.5622}\t&0.8578\t&0.8971\t&0.8911\t&0.6586\t&0.7431\t\\\\\n\\midrule\t\t\t\t\t\t\t\t\t\n\\multirow{3}{*}{SIM2}\t&30dB\t&0.9039\t&0.8426\t&0.8476\t&0.9332\t&0.8915\t&0.8729\t&\\textbf{0.8142}\t \\\\\n\t&40dB\t&0.8791\t&\\textbf{0.7672}\t&0.8525\t&0.9167\t&0.883\t&0.8175\t&0.7993\t\\\\\n\t&50dB\t&0.8738\t&\\textbf{0.6949}\t&0.8696\t&0.9172\t&0.8816\t&0.7277\t&0.8095\t\\\\\n\\bottomrule\t\t\t\t\t\t\t\t\t\n\\end{tabular}\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\n}\n \\end{table}\n\nThe errors in the support sets were shown in Table \\ref{table:dist_c} and Table \\ref{table:dist_s}. When estimating the support set per class, MEMM outperformed other methods for both SIM1 and SIM2. This showed that the double sparsity imposed by MEMM also led to the appropriate selection of the combination of endmember classes for each pixel. The performance of MEMM, however, was degraded when estimating the support set per spectrum. Other methods also performed poorly for the support set per spectrum. 
This shows that when multiple highly correlated endmember spectra are present within each class, the existing methods have difficulty in selecting an optimal combination of endmember spectra.\n\n\n\\begin{table}[h!]\n \\caption{Computational time for unmixing $P=100$ pixels using endmember bundles of $N=300$ endmember spectra.}\\label{table:time}\n\\resizebox{\\columnwidth}{!}{\n\\begin{tabular}{c|ccccccc}\t\t\t\t\t\t\t\t\n\\toprule\t\t\t\t\t\t\t\t\nData\t&FCLS\t&AAM\t&SUNSAL\t&Group lasso\t&Elitist lasso\t&MEMM$_s$\t&MEMM\t\\\\\n\\midrule\t\t\t\t\t\t\t\t\nSIM1\t&0.3006\t&517.7361\t&\\textbf{0.155}\t&0.9517\t&1.2694\t&0.4846\t&2.6625\t\\\\\nSIM2\t&\\textbf{0.1966}\t&549.1244\t&0.2616\t&0.8426\t&1.2122\t&2.2311\t&2.5705\t\\\\\n\\bottomrule\t\t\t\t\t\t\t\t\n\\end{tabular}\t\t\t\t\t\t\t\t\n}\n \\end{table}\n\nFinally, the computational times of all methods are shown in Table \\ref{table:time}. MEMM was more computationally expensive than FCLS, SUnSAL, group lasso and elitist lasso. The proposed methods, however, were computationally cheaper than AAM because they did not need to test a large number of combinations of endmember spectra.\n\n\\begin{figure}[h!]\n \\begin{subfigure}[b]{0.5\\columnwidth}\n \\centering\n \\includegraphics[width=.9\\linewidth]{rgb_1}\n \\caption{}\n \\label{fig:real_rgb_1}\n \\end{subfigure}%\n \\begin{subfigure}[b]{0.5\\columnwidth}\n \\centering\n \\includegraphics[width=.9\\linewidth]{rgb_2}\n \\caption{}\n \\label{fig:real_rbg_2}\n \\end{subfigure}\n \\caption{Real hyperspectral images: (a) MUESLI image and (b) AVIRIS image.}\\label{fig:rgb}\n\\end{figure}\n\n\\section{Experiments using real data}\\label{section5}\n\\subsection{Description of real hyperspectral images}\n\nThe methods were finally compared on two real hyperspectral images. The first hyperspectral image was acquired in June 2016, over the city of Saint-Andr{\\'e}, France, during the MUESLI airborne acquisition campaign. The image was composed of $415$ spectral bands. The spectral bands affected by noise (between $1.34-1.55\\mu$m and $1.80-1.98\\mu$m) were removed, leading to $L=345$ spectral bands. In the image scene, spatially discrete objects were present. In this study, each spatially discrete region was assumed to be composed of a single endmember class. Endmember bundles were extracted from each region. As a consequence, large amounts of endmember variability were expected to be present within each spatially discrete region associated with a particular class, and mixed pixels were expected to be located at the boundaries of these spatially discrete regions. Moreover, the scene of interest was composed of two flight lines under significantly different illumination conditions, as shown in Fig. \\ref{fig:real_rgb_1}. Thus, this image (referred to as the MUESLI image) was used to evaluate whether the methods could accurately estimate abundances when large amounts of endmember variability were present. From this image, $K=6$ endmember bundles composed of a total of $N=180$ spectral signatures representing spatially discrete objects were extracted using the n-Dimensional Visualizer provided by the ENVI software. These bundles are represented in Fig. \\ref{fig:endm_b_real1}(top). Some endmember classes present in the studied area were affected by different illumination conditions. Unlike for the simulated data, ground truth was not available. 
Estimated abundances were qualitatively validated by visual inspection of the abundance maps.\n\\begin{figure*}\n \\centering\n \\includegraphics[width=\\linewidth]{endm_b_real1}\n \\caption{First row: endmember bundles used for unmixing the MUESLI image. Second row: endmember bundles generated by MEMM.}\\label{fig:endm_b_real1}\n\\end{figure*}\n\nThe second image was acquired over Moffett Field, CA, USA, by the Airborne Visible\/Infrared Imaging Spectrometer (AVIRIS). The image, depicted in Fig. \\ref{fig:real_rbg_2}, initially comprised $224$ spectral bands. After the noisy spectral bands were removed, $L=178$ spectral bands remained. The area of interest, composed of a lake and a vegetated coastal area, was considered in many previous studies, e.g., to assess the performance of unmixing methods. Thus, this second image (referred to as the AVIRIS image) was used to test whether the proposed method could perform at least as well as the existing methods when analyzing this widely used image scene. As for the AVIRIS image, $K=3$ endmember bundles were extracted and qualitative validation was conducted because of the lack of ground truth. The extracted bundles are depicted in Fig. \\ref{fig:endm_b_real2}(left).\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=.7\\linewidth]{endm_b_real2}\n \\caption{First column: endmember bundles used for unmixing the AVIRIS image. Second column: endmember bundles generated by MEMM.}\\label{fig:endm_b_real2}\n\\end{figure}\n\n\\subsection{Results}\nFor both images, the proposed method was compared with FCLS, SUnSAL, AAM, and the methods based on group lasso and elitist lasso. The parameters ($\\lambda_r$, $\\lambda_b$ and $\\lambda_a$) required by the methods were empirically determined by qualitatively evaluating the abundances derived from different values of the parameters. Finally, the synthetic endmember bundles generated by MEMM were compared with the endmember bundles initially used for unmixing.\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=\\linewidth]{abun_all_real1}\n \\caption{MUESLI image: estimated abundance maps. From top to bottom: building, road, shrub, crop land~1, crop land~2 and grass.}\\label{fig:abun_all_real1}\n\\end{figure*}\n\nAbundance maps estimated by the $7$ methods on the MUESLI image are depicted in Fig. \\ref{fig:abun_all_real1}. These maps show that the abundances estimated by MEMM were more consistent at the boundary affected by the different illumination conditions. This showed that MEMM was more robust to the different illumination conditions than the competing methods. The abundances estimated by MEMM were also high for each endmember class and less noisy. This suggested that MEMM also promoted more sparsity than the other methods.\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=\\linewidth]{abun_all_real2}\n \\caption{AVIRIS image: abundance maps. From top to bottom: vegetation, soil and water.}\\label{fig:abun_all_real2}\n\\end{figure*}\n\nFig. \\ref{fig:abun_all_real2} shows the abundance maps estimated for the AVIRIS image. All methods except elitist lasso generated similar abundances. Abundances estimated by elitist lasso were different because it was designed to use a larger number of endmember classes for unmixing each pixel. MEMM produced similar abundances when compared with FCLS and group lasso. This showed that MEMM could perform at least as well as other sparsity-based methods to unmix this well-studied test site. MEMM$_s$, however, generated more noisy abundance maps. 
This showed that the initial estimates of the abundances used in MEMM$_s$ did not lead to an optimal combination of endmember spectra and to the optimal abundances.\n\nFinally, the endmember bundles recovered by MEMM were compared with the endmember bundles initially used to unmix both hyperspectral images, see Fig. \\ref{fig:endm_b_real1} and Fig. \\ref{fig:endm_b_real2}. In Fig. \\ref{fig:endm_b_real1}, the synthetic endmember bundles filled the gaps that were present in the original endmember bundles. Similarly, in Fig. \\ref{fig:endm_b_real2}, the extended endmember bundles showed more detailed spectral variability within each class in terms of both spectrum amplitudes and shapes. This enabled MEMM to generate adaptive endmember spectra within each pixel and to estimate more accurate abundances even when the initial endmember bundles did not completely represent the endmember variability.\n\n\\section{Conclusion}\\label{section6}\nThis paper proposed a multiple endmember mixing model that bridges the gap between endmember bundle-based methods and data-driven methods. MEMM proved superior to the existing methods in the following respects: \\emph{i}) it incorporated endmember bundles to generate adaptive endmember spectra for each pixel, \\emph{ii}) it had an explicit physical meaning and generated a hierarchical structure of endmember spectra, \\emph{iii}) it imposed double sparsity for the selection of both endmember classes and endmember spectra. MEMM was tested and compared to state-of-the-art methods using simulated data and real hyperspectral images. MEMM showed comparable results for estimating abundances, while it outperformed the other methods in terms of selecting a set of endmember classes within each pixel. This paper focused on sparsity constraints for both bundling coefficients and abundances. However, other constraints (e.g., spatial constraints) can be easily incorporated into the proposed unmixing framework. Future work will consist of reducing the computational complexity of the proposed method.\n\n\n\n\n\n\n\n\n\n\\section*{Acknowledgment}\nThe authors would like to thank Prof. Jose M. Bioucas-Dias, Dr. Rob Heylen and Dr. Travis R. 
Meyer for sharing the MATLAB codes of the SUnSAL, AAM and group\/elitist lasso unmixing algorithms.\n\n\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\n\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} 
\nBut Hypothetical Datalog is powerful enough even to apply the same technique to model assumptions in SQL queries, with the (non-standard) clause \\mytt{ASSUME}.\nThis clause enables both positive and negative assumptions on data, as shown in Section \\ref{assume}, which are useful for modelling ``what-if'' scenarios.\nFinally, we present the deductive system DES at work with examples of \\mytt{WITH} and \\mytt{ASSUME} queries in Section \\ref{examples}.\nOur approach is also useful for connecting with external relational database systems which cannot process these clauses.\nDES then behaves as a front-end capable of processing either novel or unsupported features in such systems.\n\n\\section{The SQL {\\tt WITH} Clause}\n\\label{with}\n\nTypically, complex SQL queries are broken down to apply the {\\em divide-and-conquer} principle as well as to enhance readability and maintainability.\nIntroducing intermediate views with {\\tt CREATE VIEW} statements is the order of the day, but this might be neither recommendable (it makes these views observable by other users) nor possible (only certain users with administration permissions are allowed to create views). \nThe {\\tt WITH} clause provides a form of encapsulation in SQL by locally defining those broken-down views, so that their scope is confined to the context of a given query.\nNext, the syntax of a query $Q$ including this clause is recalled:\n\n\\vspace{2mm}\n\\begin{tabular}{ll}\n\\mytt{WITH} &\\mytt{{\\em R}$_1$ AS {\\em SQL}$_1$,}\\\\\n &\\mytt{\\ldots,} \\\\\n &\\mytt{{\\em R}$_n$ AS {\\em SQL}$_n$}\\\\\n\\mytt{{\\em SQL}}\n\\end{tabular}\n\n\\noindent where each \\mytt{{\\em R}$_i$} is a temporary view name defined by the SQL statement \\mytt{{\\em SQL}$_i$}, which can be referenced only in \\mytt{{\\em SQL}}, the ultimate query that builds the outcome of the query $Q$.\nThe whole query $Q$ can be understood as a relation with name \\mytt{{\\em R}} defined with the DDL statement \\mytt{CREATE VIEW {\\em R} AS} $Q$. 
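\n\nFor instance, the following query (over the relation \\mytt{take(name, title)} used later in Section \\ref{examples}) locally defines a temporary view \\mytt{db\\_students} and then uses it in the outcome query:\n\n{\\fontsize{11}{11}\n\t\\begin{verbatim}\n\tWITH db_students(name) AS\n\t (SELECT name FROM take WHERE title = 'db')\n\tSELECT COUNT(*) FROM db_students;\n\t\\end{verbatim}\n}\n\n\\noindent Here, \\mytt{db\\_students} is visible only within this query; in contrast to a view created with \\mytt{CREATE VIEW}, it does not persist in the database schema.\n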
\n\nWith respect to the semantics of an SQL query, we recall and adapt the notation in \\cite{cgs12a-flops2012} which in turn is based on \\cite{Grefen1994}.\nA {\\em table instance} is a multiset of facts (following logic programming instead of relational databases).\nA {\\em database instance} $\\Delta$ of a database\nschema is a set of table instances, one for each defined table ({\\em extensional} relation) in the database.\nThe notation $\\Delta(T)$ represents the instance of a table $T$ in $\\Delta$.\nEach query or view ({\\em intensional} relation) $R$ is defined as a multiset of facts,\nand $\\Phi_R$ represents the ERA (Extended Relational Algebra) expression associated to an SQL query or view $R$, as explained in \\cite{Molina08}.\nAn intensional relation usually depends on previously defined relations, and sometimes it will be useful to write $\\Phi_R(R_1,\\dots,R_n)$ indicating that $R$ depends on $R_1, \\dots, R_n$.\nHere, we assume that each extensional relation in a database instance has attached type information for each one of its named arguments.\nAs well, each intensional relation argument receives its type via inferencing and arbitrary names if not provided in its definition.\nTables are denoted by their names, that is, $\\Phi_T = T$ if $T$ is a table.\n\t\\begin{definition} \\label{def:SQLERA} {\\em\n\tThe \\emph{computed answer} of an ERA expression $\\Phi_R$ with respect to some schema instance $\\Delta$ is denoted by $\\evalSQL{\\Phi_R}{\\Delta}$, where:\n\t\\begin{itemize}\n\t\t\\item If $R$ is an extensional relation, $\\evalSQL{\\Phi_R}{\\Delta} =\\, \\Delta(R)$.\n\t\t\\item If $R$ is an intensional relation and $R_1, \\dots, R_n$ the relations defined in $R$, then\n\t\t$\\evalSQL{\\Phi_R}{\\Delta} =\\, \\Phi_R(\\evalSQL{\\Phi_{R_1}}{\\Delta}, \\dots, \\evalSQL{\\Phi_{R_n}}{\\Delta})$. \\hfill $_\\square$\n\t\\end{itemize}\n}\n \\end{definition}\n\nQueries are executed by SQL systems. \nThe answer for a query $Q$ and a database instance $\\Delta$ in an implementation is represented by $\\ans_\\Delta$($Q$). The notation $\\ans_\\Delta$($R$) abbreviates $\\ans_\\Delta$(\\mytt{SELECT * FROM $R$}). 
\nIn particular, we assume the existence of \\emph{correct} SQL implementations.\n\n\t\\begin{definition} \\label{def:correctSQL} {\\em \n\tA \\emph{correct} SQL implementation verifies that $\\ans_\\Delta$($Q$) = $\\evalSQL{\\Phi_Q}{\\Delta}$ for every query $Q$.\\hfill $_\\square$\n}\n\t\\end{definition}\n\n\n\n\n\n\\section{Hypothetical Datalog}\n\\label{hypothetical-datalog}\n\nHypothetical Datalog is an extension of function-free Horn logic\n\\cite{bonner90adding}.\nFollowing \\cite{sae13c-ictai13}, the syntax of the logic is first order and includes a universe of constant symbols, a set of variables and a set of predicate symbols.\nFor concrete symbols, we write variables starting with either an upper-case letter or an underscore, and the rest of the symbols starting with lower-case.\nRemoving function symbols from the logic is a condition for the finiteness of answers, a natural requirement of relational database users.\nAs in Horn logic, a rule has the form $A \\leftarrow \\phi$, where $A$ is an atom and $\\phi$ is a conjunction of goals.\nSince we consider a hypothetical system,\na goal can also take the form $R_1 \\land \\ldots \\land R_n \\Rightarrow G$, a construction known as an {\\em embedded implication}.\nThe following definition captures the syntax of the language, where $vars(T)$ is the set of variables occurring in $T$:\n\n\\begin{definition} {\\em \n\t\\quad \\\\\n\n\t$R := A \\mid A \\leftarrow G_1 \\land \\ldots \\land G_n$ \\\\\n\n\t$G := A \\mid \\neg G \\mid R_1 \\land \\ldots \\land R_n \\Rightarrow G$\\\\\n\n\t\\noindent where $R$ and $R_i$ stand for rules, $G$ and $G_i$ for goals, $A$ for an atom (possibly containing variables and constants, but no compound terms), and $\\bigcup vars(R_i) \\cap vars(R) = \\emptyset$, and $vars(R_i)$ and $vars(G)$ are disjoint.\\hfill $_\\square$\n}\n\\end{definition}\n\nThese disjointness conditions ensure that assumed rules do not depend on actual substitutions along inference, i.e., assumed rules take the form they have in the program.\n\nSemantics is built with a stratified inference system which can be consulted in \\cite{sae13c-ictai13}.\nHere we recall the inference rule for the embedded implication:\\footnote{Each rule in this inference system is read as: If the formulas above the line can be inferred, then those below the line can also be inferred.}\nFor any goal $\\phi$ and database instance $\\Delta$:\n\n\\begin{center}\n\\medskip\n$\\infer[]{\\Delta \\vdash R_1 \\land \\ldots \\land R_n \\Rightarrow \\phi}{\\Delta \\cup \\{R_1, \\ldots, R_n\\} \\vdash \\phi}$\n\\medskip\n\\end{center}\n\nThis means that, for proving the conclusion $\\phi$, the rules $R_i$, together with the current database instance $\\Delta$, can be used in subsequent inference steps.\nThe {\\em unified stratified semantics} defined in \\cite{sae13c-ictai13} builds a set of axioms $\\mathcal{E}$ that provides a means to assign a meaning to a goal as: \n$solve(\\phi,\\mathcal{E}) = \\{\\Delta \\vdash id:\\psi \\in \\mathcal{E} ~ \\mbox{such that} ~ \\phi\\theta = \\psi\\}$, where $\\theta$ is a substitution, each axiom in $\\mathcal{E}$ is mapped to the database $\\Delta$ it was deduced for, and the inferred fact $\\psi$ is labelled with its data source (for supporting duplicates).\nWe use $\\Delta(\\mathcal{E})$ to denote the multiset of facts $\\psi$ so that $\\Delta \\vdash id:\\psi \\in \\mathcal{E}$ for any $id$.\nSo, this inference rule captures what SQL {\\tt WITH} statements need if translated to Hypothetical Datalog, because each $R_i$ can represent each 
temporary view definition, as will be shown in the next section.\n\n\n\\section{Translating SQL into Datalog}\n\\label{translating}\n\nWe consider standard SQL as found in many textbooks (e.g., \\cite{Silberschatz6th}), but also allowing \\verb|FROM|-less statements, i.e., providing a single-row output constructed with the comma-separated expressions after the \\verb|SELECT| keyword (Oracle, for instance, resorts to feeding the row from the \\verb|dual| table to express the same feature).\nHere, we define a function \\mytt{{\\em SQL\\_to\\_DL}} that takes a relation name and an SQL statement as input, and returns a multiset of Datalog rules defining a predicate with the same name and the same meaning as the SQL relation.\nThe following (incomplete) definition for this function includes only a couple of the basic cases; others can be easily derived from \\cite{Ullman88}.\nFrom here on, set-related operators and symbols refer to multisets, as SQL relations can contain duplicates.\n\n\t \n\\medskip\n\\noindent {\\em \\% Basic SELECT statement} \\\\\n\\noindent \\mytt{{\\em SQL\\_to\\_DL}}($r$, \\mytt{SELECT A$_1$,\\ldots,A$_n$ FROM $Rel$ WHERE $Cond$}) =\\\\\n\\indent \\{ $r(\\overline{X_i}) \\leftarrow DLRel(\\overline{X_i}), DLCond(\\overline{X_j}) ~ \\} \\bigcup RelRules \\bigcup CondRules$, \\\\\nwhere \\mytt{{\\em SQLREL\\_to\\_DL}}$(Rel)=(DLRel(\\overline{X_i}), RelRules)$, and \\\\\n\\indent \\mytt{{\\em SQLCOND\\_to\\_DL}}$(Cond)=(DLCond(\\overline{X_j}), CondRules)$\n\\medskip\n\n\\noindent {\\em \\% Duplicate-preserving union} \\\\\n\\noindent \\mytt{{\\em SQL\\_to\\_DL}}($r$, \\mytt{{\\em SQL}}$_1$ \\mytt{UNION ALL} \\mytt{{\\em SQL}}$_2$) = \\\\\n\\indent \n\\mytt{{\\em SQL\\_to\\_DL}}($r$, \\mytt{{\\em SQL}}$_1$) $\\bigcup$\n\\mytt{{\\em SQL\\_to\\_DL}}($r$, \\mytt{{\\em SQL}}$_2$) \n\\medskip\n\n\nHere, each \\mytt{A}$_i$ is an argument name present in the relation $Rel$ with corresponding logic variable $X_i$. \n$Rel$ is constructed with either a single defined relation (table or view), or a join of relations, or an SQL statement.\nFunction \\mytt{{\\em SQLREL\\_to\\_DL}} (resp., \\mytt{{\\em SQLCOND\\_to\\_DL}}) takes an SQL relation (resp., condition) and returns a goal and, possibly, additional rules which result from the translation.\nVariables $\\overline{X_j}$ result from the translation of the condition $Cond$ into the goal $DLCond$. 
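\n\nAs a simple illustration of these two cases, consider a hypothetical relation \\mytt{t(a)} and the statement \\mytt{SELECT a FROM t UNION ALL SELECT a FROM t} (with a trivially true, omitted condition). Anticipating the relation-name case given below, \\mytt{{\\em SQL\\_to\\_DL}}($r$, $\\cdot$) returns the multiset \\{ $r(X) \\leftarrow t(X)$, $r(X) \\leftarrow t(X)$ \\} containing two copies of the same rule, so that each tuple of \\mytt{t} contributes twice to $r$, mirroring the duplicate-preserving semantics of \\mytt{UNION ALL}.\n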
\nIn addition, some basic cases are presented next for these functions, where $GoalName$ is an arbitrary fresh goal name:\n\n\\medskip\n\\noindent {\\em \\% Extensional\/Intensional Relation Name} \\\\\n\\noindent \\mytt{{\\em SQLREL\\_to\\_DL}}$(RelName)=(RelName(\\overline{X_i}), \\{ \\})$ \\\\\n\\noindent where $\\overline{X_i}$ are the $n$ variables corresponding to the $n$-degree relation $RelName$.\n\\medskip\n\n\\noindent {\\em \\% SQL Statement} \\\\\n\\noindent \\mytt{{\\em SQLREL\\_to\\_DL}}$(SQL)=(GoalName(\\overline{X_i})$, \\mytt{{\\em SQL\\_to\\_DL}}$(GoalName, SQL)$)\\\\\n\\noindent where $\\overline{X_i}$ are the $n$ variables corresponding to the $n$-degree statement $SQL$.\n\\medskip\n\n\\noindent {\\em \\% NOT IN Condition} \\\\\n\\noindent \\mytt{{\\em SQLCOND\\_to\\_DL}}(\\mytt{A$_i$ NOT IN} $Rel)=(not ~ DLRel(\\overline{X_j}), RelRules)$\\\\\nwhere \\mytt{{\\em SQLREL\\_to\\_DL}}$(Rel)=(DLRel(\\overline{X_j}), RelRules)$, and $X_i \\in \\overline{X_j}$ is the variable corresponding to the argument \\mytt{A}$_i$.\n\\medskip\n\nCompleting this function by including the \\mytt{WITH} statement is straightforward because every temporary view can be represented by a predicate resulting from the translation of the temporary view definition into Datalog rules.\nAssuming such predicates in the antecedent of an embedded implication augments the (local, temporary) database used for interpreting the meaning of the translated SQL outcome query:\n\n\\medskip\n\\noindent\n\t{\\mytt{{\\em SQL\\_to\\_DL}}($r$, \\mytt{WITH $r_1$ AS {\\em SQL}$_1$, \\ldots, $r_n$ AS {\\em SQL}$_n$ {\\em SQL}}) =}\\\\\n\\indent\t\\{ $r(\\overline{X_i}) \\leftarrow$ \\\\\n\\indent\t\\hspace*{1.7mm}\n\t$\\land($\\mytt{{\\em SQL\\_to\\_DL}}($r_1$,\\mytt{{\\em SQL}}$_1$)) $\\land$ \\ldots \\\\\n\\indent\t\\hspace*{1.7mm}\n\t$\\land($\\mytt{{\\em SQL\\_to\\_DL}}($r_n$,\\mytt{{\\em SQL}}$_n$)) $\\Rightarrow$ \n\t$s(\\overline{X_i})$ \\} $\\bigcup$ \\texttt{{\\em SQL\\_to\\_DL}}($s$,\\texttt{{\\em SQL}}) \n\\medskip\n\n\\noindent where $\\land(Bag)$ denotes $B_1 \\land \\cdots \\land B_m$ ($B_i \\in Bag$).\n\nThe following theorem establishes the semantic equivalence of an SQL relation and its counterpart Datalog translation.\n\n\\begin{theorem} {\\em\n\tThe semantics of an SQL $n$-degree relation $r$ defined by the query $Q$ on a database instance $\\Delta$ coincides with the meaning of a goal $r(\\overline{X_i})$, $1 \\leq i \\leq n$, for $\\Delta' = \\Delta \\bigcup$ \\mytt{{\\em SQL\\_to\\_DL}}($r$,$Q$), that is:\n\t$\\ans_\\Delta(Q) = \\Delta(solve(r(\\overline{X_i}),\\mathcal{E}))$, where $\\mathcal{E}$ is the unified stratified semantics for $\\Delta'$.} \\hfill $_\\square$\n\\end{theorem}\n\n\\section{Beyond the WITH Clause: Expressing Assumptions}\n\\label{assume}\n\nAs a novel feature, hypothetical SQL queries (absent in the standard) were introduced (inspired by \\cite{nss-flops08}) in DES version 2.6 for solving ``what-if'' scenarios.\nThe syntax for such queries is:\n\n{\\fontsize{11}{11}\n\t\\vspace{3mm}\n\t\\noindent {\\tt ASSUME {\\em SQL}$_1$ IN {\\tt {\\em Rel}}$_1$, ..., {\\em SQL}$_n$ IN {\\tt {\\em Rel}}$_n$}\n\t{\\tt {\\em SQL};}\n\t\\vspace{3mm}\n}\n\n\\noindent which makes the result of {\\tt {\\em SQL}}$_i$ be {\\em assumed} in {\\tt {\\em Rel}}$_i$ when processing {\\tt {\\em SQL}}.\nThis means that the semantics of each {\\tt {\\em Rel}}$_i$ is either overloaded (if the relation already exists) or otherwise defined with the facts of {\\tt {\\em SQL}}$_i$.\nImplementing this resorted 
to defining each {\\tt {\\em Rel}}$_i$ {\\em globally}, which is not the expected behaviour, as its definition must be local to {\\tt {\\em SQL}}.\nRoughly, solving an \\mytt{ASSUME} query resorted to overloading the meaning of each {\\tt {\\em Rel}}$_i$ (by inserting the required facts) before computing {\\tt {\\em SQL}} and, after solving, to restoring it (by deleting the same facts).\nThis also precluded nested assumptions, and such statements were allowed only as top-level queries but not as part of query definitions.\nFor instance, if it were allowed, the following query would be incorrectly computed in that scenario:\n\n{\\fontsize{11}{11}\n\t\\begin{verbatim}\nASSUME SELECT 1 IN r(a), \n (ASSUME SELECT 2 IN r(a) SELECT * FROM r) IN s \nSELECT * FROM r,s;\n\t\\end{verbatim}\n}\n\n\\noindent because the meaning of \\mytt{r} in the context of \\mytt{SELECT * FROM r,s} would be overloaded with both \\mytt{\\{(1)\\}} and \\mytt{\\{(2)\\}}, instead of just with \\mytt{\\{(1)\\}}.\n\nApplying hypothetical reasoning in this case solves this issue, allowing us not only to use nested assumptions in both top-level queries and views, but also to take advantage of negative assumptions.\nA negative assumption allows facts to be {\\em removed} from the meaning of a relation, which broadens the applicability of queries in decision-support scenarios.\nTo specify negative assumptions, \\mytt{NOT IN} is used instead of just \\mytt{IN}.\nHypothetical Datalog in \\cite{Sae15c} introduces the notion of a {\\em restricted predicate} to handle negative assumptions in embedded implications. \nA restricted predicate includes at least one restricting rule whose head is an atom preceded by a minus sign.\nIts meaning is the set of facts deduced from regular rules minus the set of facts deduced from restricting rules.\nSo, a negative assumption is modelled with a restricting rule in the antecedent of an embedded implication, so that the translation from SQL to Hypothetical Datalog for \\mytt{ASSUME} statements becomes:\n\n\\medskip\n\\noindent\n\\begin{tabular}{m{0.1cm}m{0.1cm}l}\n\t\\multicolumn{3}{l}{\\mytt{{\\em SQL\\_to\\_DL}}($r$, \\mytt{ASSUME {\\em SQL}$_1$ [NOT] IN $r_1$,\\ldots,{\\em SQL}$_n$ [NOT] IN $r_n$} \\mytt{{\\em SQL}}) =}\\\\\n\t& \\{ & $r(\\overline{X_i}) \\leftarrow$ \\\\\n\t& & $\\land$(\\mytt{{\\em SQL\\_to\\_DL}}($r_1$, \\mytt{{\\em SQL}}$_1$)$[$\\mytt{[-]}$r_1$\/$r_1]$) $\\land$\n\t\\ldots \\\\\n\t& & $\\land$(\\mytt{{\\em SQL\\_to\\_DL}}($r_n$, \\mytt{{\\em SQL}}$_n$)$[$\\mytt{[-]}$r_n$\/$r_n]$) \\texttt{=>} \n\t$s(\\overline{X_i})$ \\} \\\\\n\t& & $\\cup$ $\\texttt{{\\em SQL\\_to\\_DL}}(s,\\texttt{{\\em SQL}})$\n\\end{tabular}\n\\medskip\n\n\\noindent where $A[B\/C]$ represents the application of the syntactic substitution of $C$ by $B$ in all the rule heads in $A$, and \\mytt{[{\\em T}]} indicates that \\mytt{{\\em T}} is optional, but it must occur if the corresponding $i$-th entry also occurs (i.e., if \\mytt{NOT} occurs in the assumption for $r_i$, then \\mytt{-} also occurs in the corresponding substitution).\n\n\\section{Playing with the System}\n\\label{examples}\n\nTranslating an SQL query to Datalog in a practical system involves more features than the ones briefly sketched above, which are out of the scope of this extended abstract.\nFor example, the {\\tt SELECT} list can include expressions and scalar SQL statements, nested statements can be correlated, aggregate functions and grouping can be used, and so on.\nAlso, from a performance point of view, a needed stage in the translation is 
folding\/unfolding of rules \\cite{prolog} to simplify the Datalog program resulting from the translation.\nThis is quite relevant because the meaning (either complete or restricted to a given call) of the involved relations must be deduced along query solving, which can significantly augment space and time requirements.\nNext, we introduce a couple of examples of this translation with the system DES \\cite{saenzDES}, which in particular supports such features and inputs from several query languages, including Datalog and SQL.\nHere, we resort to the actual textual syntax of Datalog rules in this system, which follows the syntax of Prolog.\n\n\\begin{wrapfigure}{r}{45mm}\n\t\\vspace{-35pt}\n\t\\begin{center}\n\t\t\\begin{tabular}{cc}\n\t\t\t\\begin{minipage}[c]{2cm}\n\t\t\t\t\\begin{tabular}{|c|}\n\t\t\t\t\t\\hline \n\t\t\t\t\t\\mytt{student}\\\\\n\t\t\t\t\t\\hline \n\t\t\t\t\t\\mytt{(adam)} \\\\ \n\t\t\t\t\t\\mytt{(bob)} \\\\ \n\t\t\t\t\t\\mytt{(pete)} \\\\ \n\t\t\t\t\t\\mytt{(scott)} \\\\ \n\t\t\t\t\t\\hline \n\t\t\t\t\\end{tabular}\n\t\t\t\\end{minipage}\n\t\t\t& \n\t\t\t\\begin{tabular}{|c|}\n\t\t\t\t\\hline \n\t\t\t\t\\mytt{take}\\\\\n\t\t\t\t\\hline \n\t\t\t\t\\mytt{(adam,db)} \\\\ \n\t\t\t\t\\mytt{(pete,db)} \\\\ \n\t\t\t\t\\mytt{(pete,lp)} \\\\ \n\t\t\t\t\\mytt{(scott,lp)} \\\\ \n\t\t\t\t\\hline \n\t\t\t\\end{tabular}\t\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\vspace{-25pt}\n\n\n\n\n\\end{wrapfigure}\nLet us consider a database containing the relations \\mytt{student(name)} and \\mytt{take(name, title)}.\nThe first one states the names of students, and the second one the course (\\mytt{title}) each student (\\mytt{name}) is enrolled in. \nTypes can be specified either with a Datalog assertion (as \\mytt{:-type(student(name:string))} for the first case) or a DDL SQL statement (as \\mytt{create table take(name string, title string)} for the second one, where a foreign key \\mytt{take.name} $\\rightarrow$ \\mytt{student.name} could be stated as well). \nWe consider the database instance depicted in the tables above.\n\n\nThe next SQL statement (looking for students that are not enrolled in any course)\nis translated as follows in a system session with DES 4.0:\n\n{\\fontsize{11}{11}\n\t\\begin{verbatim}\n\tDES> select * from student where name not in \n (select name from take)\n\tInfo: SQL statement compiled to:\n\t answer(A) :- student(A), not take(A,_B).\n\tanswer(student.name:string) ->\n\t{ answer(bob) }\n\tInfo: 1 tuple computed. \n\t\\end{verbatim}\n}\n\nThis example shows a few things.\nFirst, as a query $Q$ is allowed at the system prompt, the call to the translation function becomes \\mytt{{\\em SQL\\_to\\_DL}}(\\mytt{answer},$Q$), i.e., the outcome relation is automatically renamed to the reserved keyword \\mytt{answer}.\nSecond, the outcome schema \\mytt{answer(student.name:string)} shows that the single output argument comes from the argument \\mytt{name} of the relation \\mytt{student}, with type \\mytt{string}.\nThird, following the definition of the translation function, this query should be translated into:\n\n{\\fontsize{11}{11}\n\t\\begin{verbatim}\n\tanswer(A) :- student(A), goal1(A).\n\tgoal1(A) :- not take(A,_B).\n\t\\end{verbatim}\n}\n\nBut folding\/unfolding simplifies this into the single rule displayed in the system session.\nThese translations are displayed because this was enabled by issuing the command \\mytt{\/show\\_compilations on}. 
\nFinally, variables that are not relevant to a rule outcome are underscored (otherwise, they would be signalled as anonymous).\nThis is important in this case for identifying as safe the rule in which this underscored variable occurs.\nClassical safety \\cite{Ullman88} would tag the rule \\mytt{answer(A) :- student(A), not take(A,\\_B)} as unsafe, but an equivalent set of safe rules can be found: \\mytt{answer(A) :- student(A), not goal1(A)} and \\mytt{goal1(A) :- take(A,\\_B)}.\nUnderscored variables are a means to encapsulate this form of safety, which is identified by the system and processed correspondingly.\nIn general, there can be several rules for \\mytt{answer} (e.g., when a \\mytt{UNION} is involved) and others on which this predicate depends.\n\nAs an example of a \\mytt{WITH} query, the following statement defines the relation \\mytt{grad}, intended to retrieve the students eligible for graduation (those that took both \\mytt{db} and \\mytt{lp} in this tiny example):\n\n{\\fontsize{11}{11}\n\t\\begin{verbatim}\n\tDES> with grad(name) as \n\t (select student.name\n\t from student, take t1, take t2\n\t where student.name=t1.name\n\t and t1.name=t2.name \n\t and t1.title='db' and t2.title='lp') \n\t select * from grad;\n\tInfo: SQL statement compiled to:\n\t answer(A) :-\n\t (grad(B) :- student(B), take(B,db), take(B,lp))\n\t =>\n\t grad(A).\n\tanswer(grad.name:varchar(30)) ->\n\t{ answer(pete) }\n\tInfo: 1 tuple computed. \n\t\\end{verbatim}\n}\n\nAs an example of an \\mytt{ASSUME} query, we reuse the \\mytt{grad} definition above, assume that \\mytt{adam} is not an eligible student, and that \\mytt{adam} and \\mytt{scott} took \\mytt{lp} and \\mytt{db}, respectively:\n\n{\\fontsize{11}{11}\n\t\\begin{verbatim}\nDES> assume \n (select 'adam') not in student, \n (select 'adam','lp' union all select 'scott','db')\n in take, \n (select student.name from student, take t1, take t2\n where student.name=t1.name and t1.name=t2.name and\n t1.title='lp' and t2.title='db') in grad(name) \n select * from grad;\nInfo: SQL statement compiled to:\n answer(A) :-\n -student(adam) \/\\ take(adam,lp) \/\\ take(scott,db) \/\\\n (grad(B) :- student(B), take(B,lp), take(B,db))\n =>\n grad(A).\nanswer(grad.name:varchar(30)) ->\n{ answer(pete), answer(scott) }\nInfo: 2 tuples computed. \n\\end{verbatim}\n}\n\nHere, the assumption on \\mytt{student} is negative and is compiled to a restricting fact. 
\nThe second one is compiled to a couple of facts because of the union.\nThe last one is the same as in the previous example.\nThe SQL statement after the assumptions simply leads to the goal \\mytt{grad(A)}, for which, even though \\mytt{adam} took the courses required to graduate, he was removed as an eligible student and therefore from the answer.\n\nIf the extensional relations \\mytt{student} and \\mytt{take} are already defined in an external relational database (as, e.g., MySQL or PostgreSQL), they can be made available to DES via an ODBC connection (with the command \\mytt{\/open\\_db}), and queried as if they were local \\cite{Sae14a}.\nThis way, DES behaves as a front-end for both straight calls to native (i.e., supported by the external relational system) SQL queries and non-native queries (such as those including \\mytt{ASSUME}).\nFor non-native statements, prepending the command \\mytt{\/des} to the query makes DES handle such queries, which are unsupported in the external database.\nFor example:\n\n{\\fontsize{11}{11}\n\t\\begin{verbatim}\n\tDES> \/open_db postgresql \n\tDES> \/des assume ...\n\tDES> \/open_db mysql \n\tDES> \/des with ...\n\t\\end{verbatim}\n}\n \n\\noindent obtaining the same answers as before for the same queries (both omitted here in the ellipses).\nNote that, in particular, \\mytt{WITH} is unsupported in both MySQL and MS Access.\n\nEven though \\mytt{WITH} is supported in several relational database systems, such implementations are somewhat restricted because, referring to the syntax in Section \\ref{with}, \\mytt{{\\em SQL}} cannot contain a \\mytt{WITH} clause, whereas we do allow for it.\n\n\\section{Conclusions}\nThis work has presented a proposal to take advantage of intuitionistic logic programming to model both temporary definitions (with the \\mytt{WITH} clause) and assumptions (with the \\mytt{ASSUME} clause) in SQL.\nIts motivation lies in providing a clean semantics that makes assumptions behave as first-class citizens in the object language.\nThe deductive database system DES was used as a test bed to experiment with assumptions, translating SQL queries into Hypothetical Datalog.\nFurther, this system can be used as a front-end to relational systems lacking features such as the \\mytt{WITH} clause.\nThe most related work is \\cite{ANSS13d}, which includes assumptions in SQL with a tailored semantics, and generates SQL scripts implementing fixpoint computations.\nWith respect to the intuitionistic formal framework, our work is based on \\cite{Bonner89hypotheticaldatalog,Bonner90hypotheticaldatalog,bonner90adding,bonner89expressing} and adapted to assume rules and deal with duplicates in \\cite{sae13c-ictai13}.\nHowever, it is not powerful enough to include embedded universal quantifiers in premises as in \\cite{bonner89expressing}, which provides the ability to create new constant symbols hypothetically along inference.\nThough this is not directly applicable to the current work, it is indeed an interesting subject to explore by considering that domains can be finitely constrained in practical applications, as with foreign keys.\n\n\\section*{Acknowledgements}\nThanks to the anonymous referees for their suggestions to improve this work, which has been partially supported by the Spanish MINECO project CAVI-ART (TIN2013-44742-C4-3-R), Madrid regional project N-GREENS Software-CM (S2013\/ICE-2731) and UCM grant GR3\/14-910502.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nMean-field approximations of stochastic processes 
are crucial in many areas of science, where a large system is described by a stochastic process and its expected behaviour is approximated by a simpler mean-field model using a system of differential equations. The stochastic processes of particular interest to us are binary network processes, where each node of a large network can be in one of two states and the state of a node changes depending on the states of the neighbouring nodes. The theory of their mathematical modelling can be found in several books and review papers \\cite{Barratetal, Danon, Nekovee, Newmanetal}. The two well-known examples we analyze in Section 4 are epidemic spreading and rumour spreading.\n\n\nFor concreteness, we consider a continuous time Markov process, $X(t)$, with states $k\\in\\{0,1,\\ldots,N\\}$ and let $p_k(t)=P[X(t)=k]$ denote the discrete probability distribution of $X(t)$. The time evolution of $p_k(t)$ is described by a linear system of differential equations, called the master equations. There is a well-established theory for solving linear systems by expressing their solution as $\\exp(At)$, where $A$ is the transition rate matrix of the system. It is also well-known that the matrix exponential is hard to compute for large matrices, hence methods have been developed that exploit any special structure of the matrix. Recently, an extremely powerful method has been worked out for tridiagonal matrices by Smith and Shahrezaei \\cite{SmithShahrezaei}. Since one-step processes have tridiagonal transition matrices, the time dependence of the probabilities $p_k(t)$ can be computed very efficiently. Besides computational methods, there are theoretical approaches to approximate, estimate and qualitatively characterize the time dependence of the probability distribution $p_k$ and its moments (mainly its expected value). The typical approach is to introduce a low-dimensional system of non-linear ODEs called the mean-field equations. These are not only faster to solve computationally but also allow for analysis, giving a better qualitative understanding of the system.\n\nThe accuracy of mean-field approximations is not obvious. The problem of rigorously linking exact stochastic models to mean-field approximations goes back to the early work of Kurtz \\cite{Kurtz1970} (see \\cite{EthierKurtz} for a more recent reference). He studied density-dependent Markov processes and proved their stochastic convergence to the deterministic mean-field model. It can be shown that the difference between the solution of the mean-field equation and the expected value of the process is of order $1\/N$ as $N$ tends to infinity \\cite{BaKiSiSi}.\n\nHowever, it is natural to look for actual lower and upper bounds that can be used for finite $N$ (in contrast to the previous asymptotic results). It is known that in many cases, the mean-field model yields an upper bound on the expected value of the process. The first lower bound is due to Armbruster and Beck \\cite{ArBe}, where a system of two ODEs yields a lower bound on the expected value in the case of a susceptible-infected-susceptible (SIS) epidemic model on a complete graph. 
The aim of this paper is to extend these results to a wider class of Markov chains, including several which approximate network processes.\n\nWe develop bounds for \\emph{one-step processes}, also known as \\emph{birth-death processes}, that form an important class of continuous time Markov processes, for which the state space is $\\{ 0,1, \\ldots , N\\}$ and it is assumed that transition from state $k$ is possible only to states $k-1$ at rate $c_k$ and to state $k+1$ at rate $a_k$. The master equations are\n\\begin{equation}\n p_k'= a_{k-1}p_{k-1}-(a_{k}+c_{k})p_k+c_{k+1}p_{k+1},\\quad k=0,\\ldots, N , \\label{eqKolm}\n\\end{equation}\nwhere of course $a_{-1}=a_N=c_0=c_{N+1}=0$.\nThe bounds we develop are the solutions of simple ODEs and will hold for all $N$. We will illustrate with numerical examples that the upper and lower bounds are remarkably close to each other.\n\nIn Section 2 we set up the problem, give the mean-field equation and the approximating system, state the main result, and state the tools used in the proof. Section 3 proves the main result. In Section 4 we illustrate the performance of the bounds on two examples: an SIS epidemic with and without airborne infection, and a voter-like model. Section 5 concludes and discusses directions for future research.\n\n\n\n\\section{Formulation of the main result}\n\n\\subsection{Differential equations for the moments}\nWe first establish the differential equations for the moments of the process $X(t)$. We focus on the fraction $X\/N$ since we are interested in situations with large $N$ and density dependent coefficients. By definition, the $n$-th moment of $(X\/N)$ is\n\\begin{equation}\\label{ynODE}\ny_n(t) := \\mathbb{E}[(X(t)\/N)^n]=\\sum_{k=0}^N (k\/N)^n p_k(t)\\quad (n=0,1,\\dots).\n\\end{equation}\nOf course $y_0=1$.\nThe following equations for $y_n'$ can be derived using Lemma 2 in \\cite{BaKiSiSi}, using the Kolmogorov backward equations, or taking the time derivative of \\eqref{ynODE} and substituting \\eqref{eqKolm}.\n\\[\ny'_n = \\frac{1}{N^n} \\sum_{k=0} ^N \\left( a_k ( (k+1)^n-k^n) + c_k ( (k-1)^n-k^n) \\right) p_k .\n\\]\nWe are interested in the case when $a_k\/N$ and $c_k\/N$ are polynomials of $k\/N$. Then, $y'_n$ can be expressed in terms of the coefficients of these polynomials. In fact, the following result was proved in \\cite{BaKiSiSi}.\n\n\\begin{lemma}\\label{momlem}\nLet\n\\begin{equation}\\label{akck}\n\\frac{a_k}N=A(k\/N) \\quad\\text{and}\\quad\\frac{c_k}N=C(k\/N)\n\\end{equation}\nwith polynomials $A(x)=\\sum_{j=0}^mA_jx^j$ and $C(x)=\\sum_{j=0}^mC_jx^j$ such that $A(1)=0$ and $C(0)=0$.\nThen\n\\begin{align}\\label{ysystem1}\ny_1'&=\\sum_{j=0}^mD_jy_j,\\\\\\label{ysystem2}\ny_n'&=n\\sum_{j=0}^mD_jy_{n+j-1}+\\frac1NR_n\\quad (n=2,3,\\dots)\n\\end{align}\nwhere $D_j=A_j-C_j$, $0\\leq R_n\\leq \\frac{n(n-1)}2 c$, and $c=\\sum_{j=0}^m (|A_j|+|C_j|)$.\n\\end{lemma}\n\n\\subsection{Mean-field equation and an approximating system}\nOur mean-field equation\n\\begin{equation}\\label{meanf}\ny'=\\sum_{j=0}^mD_jy^j,\\quad y(0)=y_1(0),\n\\end{equation}\nis motivated by the approximations $\\frac1NR_n\\approx0$ and $y_n\\approx y_1^n$ for large $N$,\nwhere the second approximation essentially assumes $X\/N$ is deterministic. We are interested whether the solution $y$ of the mean-field equation converges uniformly on $[0,T]$ to $y_1$ as $N\\to\\infty$. 
Recently, the following special case was proved in an elementary way in \cite{ArBe}.
\begin{thm}\label{arbethm}
Let $m=2$, $D_0=0$, $D_1=\tau-\gamma$, and $D_2=-\tau$ with positive constants $\tau$ and $\gamma$.
If $y(0)=y_1(0)=u\in [0,1]$ is fixed, then for any fixed $T>0$,
\[y_1(t)\leq y(t)\quad \text{ for } t\in[0,T],\]
and
\[\lim_{N\to\infty}|y(t)-y_1(t)|=0\quad \text{uniformly in } t\in [0,T].\]
\end{thm}
In the theorem above, the constants $\tau$ and $\gamma$ correspond to the infection and recovery rates in an SIS model of disease spread (see Subsection \ref{sissub}). Our aim is to generalize Theorem \ref{arbethm} to a broader class of coefficients $D_j$ and to provide a lower bound for $y_1$. In the proof of our main result we will compare not only $y$ with $y_1$ but also $y^n$ with $y_n$. Thus, using \eqref{meanf},
\[(y^n)'=ny^{n-1}y'=n\sum_{j=0}^mD_jy^{n+j-1}=n\sum_{j=0}^mD_j(y^n)^{\frac{n+j-1}n}.\]
Hence, the powers of $y$ satisfy the initial value problem below:
\begin{align}\label{psystem1}
y'&=\sum_{j=0}^mD_jy^j,\quad y(0)=y_1(0),\\\label{psystem2}
(y^n)'&=n\sum_{j=0}^mD_j(y^n)^{\frac{n+j-1}n},\quad y^n(0)=y_1^n(0)\quad (n=2,3,\dots).
\end{align}
(The equation for $y'$ is separated from $(y^n)'$ for $n\geq 2$ because it will play a different role.)
This system, in combination with system \eqref{ysystem1}--\eqref{ysystem2} for $y_n'$, motivates the following initial value problem:
\begin{align}\label{zsystem1}
z_1'&=\sum_{j=0}^mD_jz_j,\quad z_1(0)=y_1(0),\\\label{zsystem2}
z_n'&=n\sum_{j=0}^mD_jz_n^{\frac{n+j-1}n}+\frac{n(n-1)}{2N}c,\quad z_n(0)=y_1^n(0)\quad (n=2,\dots,m),
\end{align}
where we let $z_0=1$.

\subsection{Main result}
We are now ready to state our main result.
\begin{thm}\label{main}
Assume that
\begin{equation}\label{sign}
D_0\geq0,\quad D_1\in\mathbb{R},\quad\text{and}\quad D_j\leq0 \text{ for } j\geq 2,
\end{equation}
and let $y_1(0)=u\in(0,1]$ be fixed.
Then for the solution $y$ of \eqref{meanf} and the solution $(z_1,\dots,z_m)$ of \eqref{zsystem1}--\eqref{zsystem2}, it holds that
\begin{align*}
z_1(t)\leq y_1(t)\leq y(t)& \text{ for }t\geq0,\\
z_1(t)^n \leq y_n(t)\leq z_n(t)& \text{ for }t\geq0,\ 2\leq n\leq m,
\end{align*}
and for every $T>0$ there exists a constant $C_T>0$ such that
\[
y(t)-z_1(t)\leq \frac{C_T}N \text{ in }[0,T].
\]
\end{thm}

The proof is based on some familiar inequalities, which we recall in the next subsection.

\subsection{Tools of the proof}

The following comparison results are standard in the theory of ODEs, see \cite{Hale}.

\begin{lemma}[Comparison]\label{complem}
Suppose that $f(t,x)$ is continuous in $x$ and that
\begin{itemize}
\item the initial value problem $x_2'(t)=f(t,x_2(t))$, $x_2(0)=x_0$ has a unique solution for $t\in[0,T]$;
\item $x_1'(t)\leq f(t,x_1(t))$ for $t\in[0,T]$; and $x_1(0)\leq x_0$.
\end{itemize}
Then $x_1(t)\leq x_2(t)$ for $t\in[0,T]$.
\end{lemma}

\begin{lemma}[Peano's inequality]
Suppose that $f_1,f_2\colon[0,T]\times[a,b]\to\mathbb{R}$ are Lipschitz continuous functions in their second variable with Lipschitz constant $L$ and $|f_1(t,x)-f_2(t,x)|\leq M$ in $[0,T]\times[a,b]$ with some constant $M$.
If
\begin{itemize}
\item $x_1'(t)=f_1(t,x_1(t)), x_2'(t)=f_2(t,x_2(t))$ for $t\in(0,T]$ and
\item $x_1(0)=x_2(0)$,
\end{itemize}
then
\[|x_1(t)-x_2(t)|\leq \frac{M}L\left(e^{Lt}-1\right) \quad (t\in[0,T]).\]
\end{lemma}

The classical Jensen's inequality and the definition of the expected value yield the probabilistic version of Jensen's inequality, see \cite{Ross}.

\begin{lemma}[Jensen's inequality]
If $X$ is a random variable and $\varphi\colon\mathbb{R}\to\mathbb{R}$ is a convex function, then
\[\varphi(\mathbb{E}[X])\leq\mathbb{E}[\varphi(X)].\]
For concave $\varphi$, the reverse inequality holds.
\end{lemma}




\section{Proof of the main result}


\begin{proof}[Proof of Theorem \ref{main}]
The proof is carried out in four steps. First, we derive bounds for $y_n$ and $z_n$ independent of $N$, which will guarantee uniform Lipschitz constants with respect to $N$ in steps three and four. Second, we show that $y_1\leq y$. In the third step, we show that $y_n \leq z_n$ for $n\geq 2$ and use Peano's inequality to deduce from equations \eqref{psystem2}, \eqref{ysystem2} and \eqref{zsystem2} that there is some constant $C(n,T)$ depending on $n$ and $T$ such that
\[|y^n(t)-z_n(t)|\leq \frac{C(n,T)}N \text{ for } t\in[0,T],\ n=2,\ldots,m.\]
Finally, with these estimates at hand, we obtain in the same manner $z_1\leq y_1$ and
\[|y(t)-z_1(t)|\leq \frac{C_T}N \text{ for } t\in[0,T],\]
with some suitable constant $C_T$.

\textbf{Step 1: A priori bounds independent of $N$.}
We first focus on a lower bound for $y_n$.
Since $0\leq X/N\leq 1$, the definition of $y_n$ implies $0\leq y_\ell\leq y_n$ for all $n\leq \ell$. Therefore,
\[y_1'\geq D_0+y_1\sum_{j=1}^mD_j\geq y_1 D,\quad y_1(0)=u,\]
where the first inequality uses $y_j\leq y_1$ and $D_j\leq0$ for $j\geq2$, the second uses $D_0\geq0$, and we define $D=\sum_{j=1}^mD_j$. Applying Lemma \ref{complem} we obtain
\[y_1(t)\geq ue^{tD}.\]
Now, due to Jensen's inequality,
\begin{equation}\label{Jensen}
y_1^n=\mathbb{E}[X/N]^n\leq \mathbb{E}[(X/N)^n]=y_n\text{ for } n\geq 1.
\end{equation}
Defining $\delta_1(T)=u^m\min\{1,e^{TmD}\}>0$, this leads to our lower bound,
\begin{equation}\label{also}
y_n(t) \geq y_1^n(t) \geq u^n e^{tnD}\geq \delta_1(T)\text{ for } 1\leq n\leq m,t\in[0,T].
\end{equation}

Next we focus on an upper bound for $z_n$ for $n\geq 2$. We bound the right-hand side of \eqref{zsystem2}, assuming $z_n\geq 1$ (values $z_n<1$ are covered by the maximum in the bound \eqref{zbound} below).
Then, using the sign condition \eqref{sign}, $z_n^{\frac{n-1}{n}}\leq z_n$, and $N\geq 1$,
\[z'_n \leq n(D_0+|D_1|+c n)z_n,\quad z_n(0)=u^n \text{ for } n\geq 2.\]
Applying Lemma \ref{complem} to this differential inequality, we obtain an upper bound,
\begin{equation}\label{zbound}
z_n(t) \leq u^n e^{n(D_0+|D_1|+cn)t}\leq \delta_2(T)\text{ for } 2\leq n\leq m,t\in[0,T],
\end{equation}
where $\delta_2(T)=\max\{1,u^2 e^{m(D_0+|D_1|+cm)T}\}$.

\textbf{Step 2: Comparison of $y$ and $y_1$.}
Substituting \eqref{Jensen} into \eqref{ysystem1}, with regard to the sign condition \eqref{sign}, yields
\[y_1'\leq \sum_{j=0}^m D_jy_1^j.\]
Applying Lemma \ref{complem} to the above inequality and to \eqref{meanf} yields for $t\in[0,T]$,
\begin{equation}\label{y1lessy}
y_1(t)\leq y(t).
\end{equation}

\textbf{Step 3: Comparison of $y_n$, $y^n$, and $z_n$.}
Applying Jensen's inequality again,
\begin{gather}\label{jen1}
y_{n+j-1}=\mathbb{E}[(X/N)^{n+j-1}]\geq \mathbb{E}[(X/N)^n]^{\frac{n+j-1}n}=y_n^{\frac{n+j-1}n}\text{ for } j\geq 1,\\
y_{n-1}=\mathbb{E}[(X/N)^{n-1}]\leq \mathbb{E}[(X/N)^n]^{\frac{n-1}n}=y_n^{\frac{n-1}{n}}.\label{jen2}
\end{gather}
We also have the trivial estimate
\begin{equation}\label{triv}
0\leq y_n\leq1.
\end{equation}

Substituting the estimates \eqref{jen1}, \eqref{jen2} and \eqref{triv} into the differential equation \eqref{ysystem2} of the $n$-th moment for $n\geq 2$, and taking care of the sign condition \eqref{sign}, we obtain
\begin{equation}\label{ynest}
y_n'\leq n\sum_{j=0}^mD_jy_n^{\frac{n+j-1}n}+\frac{n(n-1)}{2N}c,\quad y_n(0)=y_1^n(0)=u^n.
\end{equation}
Introducing the function $g_n(x)=n\sum_{j=0}^mD_jx^{\frac{n+j-1}n}+\frac{n(n-1)}{2N}c$, \eqref{ynest} can be written as $y_n'\leq g_n(y_n)$, and \eqref{zsystem2} has the form $z_n'=g_n(z_n)$.
Now, $g_n$ is Lipschitz continuous on compact subsets of $(0,\infty)$. Thus we can apply Lemma \ref{complem} as long as $z_n\geq\delta_1(T)$ holds, to obtain
\begin{equation}\label{ysubnzn}
y_n(t)\leq z_n(t) \text{ for } 2\leq n\leq m,t\in [0,T];
\end{equation}
indeed, \eqref{also} then ensures $z_n(t)\geq y_n(t)\geq \delta_1(T)$ for $t\leq T$ and $2\leq n\leq m$, so the comparison remains valid on the whole interval.

Using the same steps and \eqref{psystem2} we can also show that
\begin{equation}\label{ynzn}
y^n(t)\leq z_n(t) \text{ for } 2\leq n\leq m,t\in [0,T].
\end{equation}
Now,
\begin{equation}\label{zybound}
\delta_2(T)\geq z_n(t)\geq y^n(t) \geq y_1^n(t) \geq \delta_1(T)
\text{ for } 2\leq n\leq m,t\in [0,T],
\end{equation}
where the inequalities are from \eqref{zbound}, \eqref{ynzn}, \eqref{y1lessy}, and \eqref{also}, respectively.

This lets us use Peano's inequality to estimate the difference between the solution $y^n$ of \eqref{psystem2} and $z_n$ of \eqref{zsystem2}, since $z_n'=g_n(z_n)$, $(y^n)'=g_n(y^n)-\frac{n(n-1)}{2N}c$, and \eqref{zybound} ensures a Lipschitz constant independent of $N$. Therefore, there is some constant $C(n,T)$ depending on $n$ and $T$ such that
\begin{equation}\label{est}
|y^n(t)-z_n(t)|\leq \frac{C(n,T)}N \text{ for } 2\leq n\leq m,t\in[0,T].
\end{equation}

\textbf{Step 4: Comparison of $y_1$ and $z_1$.}
Now, with estimates \eqref{ysubnzn} and \eqref{est} in hand, we turn to equations \eqref{psystem1}, \eqref{ysystem1}, \eqref{zsystem1} to obtain estimates on their solutions analogously to the preceding part of the proof.
First, substituting the estimate \eqref{ysubnzn} into \eqref{ysystem1}, with regard to the sign condition \eqref{sign}, it follows that
\begin{equation}\label{y1est}
y_1'\geq D_0+D_1y_1+\sum_{j=2}^mD_jz_j,\quad y_1(0)=u.
\end{equation}
Considering the functions $z_n$ ($n=2,\dots,m$) as fixed, \eqref{y1est} and \eqref{zsystem1} have the form $y_1'(t)\geq g_1(t,y_1(t))$ and $z_1'(t)=g_1(t,z_1(t))$, where $g_1(t,x)=D_0+D_1x+\sum_{j=2}^mD_jz_j(t)$. Thus, Lemma \ref{complem} implies
\[y_1\geq z_1,\]
and by \eqref{also} this also yields the lower bound $y_n\geq y_1^n\geq z_1^n$ for the $n$-th moment.

Now we apply Peano's inequality to estimate the difference between the solution $y$ of \eqref{psystem1} and $z_1$ of \eqref{zsystem1}. Indeed, $y'(t)=h_1(t,y(t))$ and $z_1'(t)=g_1(t,z_1(t))$, where $h_1(t,x)=D_0+D_1x+\sum_{j=2}^mD_jy^j(t)$. Therefore,
\[h_1(t,x)-g_1(t,x)=\sum_{j=2}^m D_j(y^j(t)-z_j(t)).\]
Thus by \eqref{est},
\[| h_1(t,x)-g_1(t,x)|\leq \frac{m \cdot \max_{2\leq n\leq m} |D_n|C(n,T)}N \text{ for } t\in[0,T].\]
Then Peano's inequality implies that there is some constant $C_T$ depending on $m$ and $T$ such that
\[
|y(t)-z_1(t)|\leq \frac{C_T}N \text{ for } t\in[0,T].
\]
The proof is now complete.
\end{proof}







\section{Examples}

Here we illustrate the lower and upper bounds numerically on two network processes.

\subsection{SIS epidemic propagation with and without airborne infection}\label{sissub}

Consider SIS epidemic propagation on a regular random graph with $N$ nodes. We allow for multiple routes of infection, so that the infection spreads not only via the contact network but also due to external forcing, such as airborne infection. The state space of the corresponding one-step process is $\{ 0,1, \ldots , N\}$, where $k$ denotes the state with $k$ infected nodes. In fact, the position of the infected nodes also affects the spreading process, hence the true state space is larger, and therefore our model is only an approximation of the real infection propagation process. The validity of this approximation is discussed in detail in \cite{akckcikk}. We note that in the case of a complete graph the model is exact. Starting from state $k$ the system can move either to state $k+1$ or to $k-1$, since at a given instant only one node can change its state. When the system moves from state $k$ to $k+1$, a susceptible node becomes infected. The rate of external infection of a susceptible node is denoted by $\beta$. The rate of internal infection is proportional to the number of infected neighbours, which is $d\frac{k}{N}$ on average, where $d$ denotes the degree of each node in the network and $\frac{k}{N}$ is the proportion of the infected nodes. Since there are $N-k$ susceptible nodes and each of them has $d\frac{k}{N}$ infected neighbours on average, the total number of $SI$ edges is $d(N-k)\frac{k}{N}$. The rate of transition from state $k$ to state $k+1$ is then obtained by adding the rates of the two infection processes:
$$
a_k=\tau d(N-k)\frac{k}{N} +\beta (N-k) ,
$$
where $\tau$ is the infection rate. The rate of transition from state $k$ to $k-1$ is $c_k=\gamma k$, because any of the $k$ infected nodes can recover with recovery rate $\gamma$. Using these coefficients, $a_k$ and $c_k$, the spreading process can be described by equation \eqref{eqKolm}. The coefficients can be given in the form \eqref{akck} by choosing the functions $A(x)=\tau d x (1-x)+\beta (1-x) $ and $C(x)=\gamma x$.
The coefficients of these polynomials are $A_0=\beta $, $A_1=\tau d -\beta$, $A_2=-\tau d $, $C_0=0$, $C_1=\gamma$ and $C_2=0$. Thus the coefficients
$$
D_0=\beta, \quad D_1=\tau d -\beta - \gamma, \quad D_2= -\tau d
$$
satisfy the sign condition \eqref{sign}. According to \eqref{meanf}, the mean-field equation takes the form
\begin{equation}
y'=\beta+(\tau d -\beta - \gamma)y-\tau d y^2 \label{meanfSISa}
\end{equation}
subject to the initial condition $y(0)=i/N$, where $i$ is the number of initially infected nodes.
System \eqref{zsystem1}--\eqref{zsystem2} can be written as
\begin{align}\label{zsystem1SISa}
z_1'&=\beta+(\tau d -\beta - \gamma)z_1-\tau d z_2,\\\label{zsystem2SISa}
z_2'&=2\beta z_2^{1/2}+2(\tau d -\beta - \gamma)z_2 -2\tau d z_2^{3/2} + \frac{c}{N},
\end{align}
where $c=\beta+|\tau d-\beta|+\tau d +\gamma$. The initial condition is $z_1(0)=i/N$, $z_2(0)=(i/N)^2$. The mean-field equation \eqref{meanfSISa} and the system \eqref{zsystem1SISa}--\eqref{zsystem2SISa} can be easily solved with an ODE solver (a short solver sketch is given at the end of this subsection).

The solutions without airborne infection, $\beta=0$, are shown in Figure \ref{fig:SIS} for $N=10^6$ and $N=10^7$. It is important to note that for such large values of $N$ the master equation \eqref{eqKolm} cannot be solved numerically, but we know that the expected value $y_1(t)= \sum_{k=0}^N \frac{k}{N} p_k(t)$ is between the two curves given in the figure. We can also see that for small times the two bounds are nearly identical, i.e., we get the expected value with high accuracy. As time increases the bounds move apart; moreover, the length of the time interval on which the two bounds give the expected value accurately increases with $N$.

The solutions with airborne infection, $\beta>0$, are shown in Figure \ref{fig:SISair} for $N=100$, together with the expected value $y_1(t)= \sum_{k=0}^N \frac{k}{N} p_k(t)$ obtained by solving the master equation \eqref{eqKolm} for $p_k$. One can see that the expected value lies between the two bounds; in fact, it is hardly distinguishable from the solution of the mean-field equation, therefore the stationary part of the curves is enlarged in the inset. Note that the performance of the bounds is much better than in the case without airborne infection: here we get much closer bounds even for a small value of $N$.
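To make the solver remark concrete, here is a minimal sketch (our illustration, assuming NumPy and SciPy; the function name \texttt{solve\_bounds} and the initial fraction $u=i/N$ are ours, not from the paper). It treats the general quadratic case $m=2$, so the same function also covers the voter-like model of the next subsection:
\begin{verbatim}
# Sketch: integrate the mean-field equation and the bounding system
# (z_1, z_2) for the quadratic case m = 2; not the authors' code.
import numpy as np
from scipy.integrate import solve_ivp

def solve_bounds(D0, D1, D2, c, N, u, T):
    """Upper bound y and lower bound z_1 on [0, T], starting at u."""
    def rhs(t, v):
        y, z1, z2 = v
        return [D0 + D1 * y + D2 * y**2,            # mean field
                D0 + D1 * z1 + D2 * z2,             # z_1'
                2 * (D0 * np.sqrt(z2) + D1 * z2     # z_2'
                     + D2 * z2**1.5) + c / N]
    sol = solve_ivp(rhs, (0, T), [u, u, u**2], rtol=1e-8, atol=1e-10)
    return sol.t, sol.y[0], sol.y[1]                # t, y, z_1

# SIS without airborne infection, parameters of the SIS figure;
# the initial fraction u and the horizon T are assumed values.
tau, gamma, d, beta, N = 0.1, 1.0, 30, 0.0, 10**6
D0, D1, D2 = beta, tau * d - beta - gamma, -tau * d
c = beta + abs(tau * d - beta) + tau * d + gamma
t, y, z1 = solve_bounds(D0, D1, D2, c, N, u=0.05, T=10.0)
\end{verbatim}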
\subsection{A voter-like model}

Consider again a regular random network where each node can be in one of two states, 0 or 1, representing two opinions propagating along the edges of the network (see \cite{HolleyLiggett}). If a node is in state 0 and has $j$ neighbours in state 1, then its state will change to 1 with probability $j\tau \Delta t$ in a small time interval $\Delta t$. This describes a node switching to opinion 1. The opposite can also happen, that is, a node in state 1 becomes a node with opinion 0 with probability $j\gamma \Delta t$ in a small time interval $\Delta t$, if it has $j$ neighbours in state 0. The parameters $\tau$ and $\gamma$ characterize the strengths of the two opinions. Voter models are related to the famous Ising spin model in physics, where the atomic spin, $\pm1$, in a domain is affected by the spin in neighbouring domains. The state space of the corresponding one-step process is $\{ 0,1, \ldots , N\}$, where $k$ denotes the state in which there are $k$ nodes with opinion 1. Starting from state $k$ the system can move either to state $k+1$ or to $k-1$, since at a given instant only one node can change its opinion. When the system moves from state $k$ to $k+1$, a node with opinion 0 is ``invaded'' and becomes a node with opinion 1. The rate of this transition is proportional to the number of neighbours with opinion 1, which is $d\frac{k}{N}$ on average, where $d$ is the degree of each node in the network and $\frac{k}{N}$ is the proportion of the nodes with opinion 1. Since there are $N-k$ nodes with opinion 0 and each of them has $d\frac{k}{N}$ neighbours with the opposite opinion on average, the total number of edges connecting nodes with two different opinions is $d(N-k)\frac{k}{N}$. Hence the rate of transition from state $k$ to state $k+1$ is
$$
a_k=\tau d(N-k)\frac{k}{N}.
$$
Similar reasoning leads to the rate of transition from state $k$ to $k-1$ as
$$
c_k=\gamma dk\frac{N-k}{N}.
$$
Using these coefficients, $a_k$ and $c_k$, the spreading process can be described by equation \eqref{eqKolm}. The coefficients can be given in the form \eqref{akck} by choosing the functions $A(x)=\tau d x (1-x)$ and $C(x)=\gamma d x(1-x)$. The coefficients of these polynomials are $A_0=0$, $A_1=\tau d $, $A_2=-\tau d $, $C_0=0$, $C_1=\gamma d$ and $C_2=-\gamma d$. Thus the coefficients
$$
D_0=0, \quad D_1=\tau d - \gamma d, \quad D_2= \gamma d-\tau d
$$
satisfy the sign condition \eqref{sign} if $\gamma < \tau$. According to \eqref{meanf}, the mean-field equation takes the form
\begin{equation}
y'=(\tau d - \gamma d)(y- y^2) \label{meanfvot}
\end{equation}
subject to the initial condition $y(0)=i/N$, where $i$ is the number of nodes with opinion 1 at time 0.
System \eqref{zsystem1}--\eqref{zsystem2} can be written as
\begin{align}\label{zsystem1vot}
z_1'&=(\tau d - \gamma d)z_1-(\tau d - \gamma d) z_2,\\\label{zsystem2vot}
z_2'&=2(\tau d - \gamma d)z_2 -2(\tau d - \gamma d) z_2^{3/2} + \frac{c}{N},
\end{align}
where $c=2\tau d +2\gamma d $. The initial condition is $z_1(0)= i/N$, $z_2(0)= (i/N)^2$. The mean-field equation \eqref{meanfvot} and system \eqref{zsystem1vot}--\eqref{zsystem2vot} can again be easily solved with an ODE solver (a usage example follows below). The solutions are shown in Figure \ref{fig:vot} both for $D_2<0$ (left panel) and for $D_2>0$ (right panel). For $D_2<0$ the lower bound performs well only for large values of $N$. The master equation \eqref{eqKolm} cannot be solved numerically for such large values of $N$, hence the expected value $y_1$ is not shown in the left panel. For $D_2>0$, i.e., when $\gamma > \tau$, the roles of $y$ and $z_1$ are exchanged: the solution $y$ of the mean-field equation becomes the lower bound and $z_1$ becomes the upper bound. The function $y$ is hardly distinguishable from the expected value $y_1$; the inset shows that $y$ is indeed a lower bound.
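For completeness, the voter-like case can be obtained from the sketch of the previous subsection by swapping in the coefficients above (again our illustration; the value of $N$, the initial fraction $u$ and the horizon $T$ are assumptions):
\begin{verbatim}
# Reusing solve_bounds from the SIS sketch; voter-like coefficients.
# gamma, tau, d as in the left panel of the voter figure; u assumed.
tau, gamma, d, N = 0.2, 0.1, 10, 10**5
D0, D1, D2 = 0.0, (tau - gamma) * d, (gamma - tau) * d
c = 2 * tau * d + 2 * gamma * d
t, y, z1 = solve_bounds(D0, D1, D2, c, N, u=0.5, T=20.0)
\end{verbatim}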
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.4]{figSIS}
\caption{SIS epidemic without airborne infection. The solution $y$ of the mean-field equation \eqref{meanfSISa} (continuous curve) and the first coordinate $z_1$ of the solution of system \eqref{zsystem1SISa}--\eqref{zsystem2SISa} for $N=10^6$ (dashed curve) and for $N=10^7$ (dashed-dotted curve). The parameter values are $\gamma=1$, $\tau=0.1$, $d=30$, and $\beta=0$. }\label{fig:SIS}
\end{center}
\end{figure}


\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.4]{figSISair}
\caption{SIS epidemic with airborne infection. The solution $y$ of the mean-field equation \eqref{meanfSISa} (continuous curve), the first coordinate $z_1$ of the solution of system \eqref{zsystem1SISa}--\eqref{zsystem2SISa} (dashed curve) and the expected value $y_1$ obtained from the solution of the master equation (dashed-dotted curve). The stationary part of the curves is enlarged in the inset. The parameter values are $N=100$, $\gamma=1$, $\tau=0.05$, $d=20$ and $\beta=1$. }\label{fig:SISair}
\end{center}
\end{figure}

\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.4]{figvot_a}
\includegraphics[scale=0.4]{figvot_b}
\caption{Voter-like model. The solution $y$ of the mean-field equation \eqref{meanfvot} (continuous curve), the first coordinate $z_1$ of the solution of system \eqref{zsystem1vot}--\eqref{zsystem2vot} (dashed curve) and the expected value $y_1$ obtained from the solution of the master equation (dashed-dotted curve). Left panel: the case $D_2<0$; the values of $N$ are shown in the figure, and the other parameter values are $\gamma =0.1$, $\tau=0.2$, $d=10$. Right panel: the case $D_2>0$; the parameter values are $N=200$, $\gamma =0.2$, $\tau=0.1$, $d=10$. The stationary part of the curves is enlarged in the inset. }\label{fig:vot}
\end{center}
\end{figure}

\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.4]{figSIS_qCS}
\caption{Potential lower bounds for the SIS model without airborne infection. The solution $y$ of the mean-field equation \eqref{meanfSISa} (continuous curve), the first coordinate $z_1$ of the solution of system \eqref{zsystem1SISq}--\eqref{zsystem2SISq} (dashed curve) for different values of $q$ shown in the figure, $z_1$ of the solution of system \eqref{zsystem1SISCS}--\eqref{zsystem2SISCS} (dotted curve) and the expected value $y_1$ obtained from the solution of the master equation (dashed-dotted curve). The parameter values are $N=100$, $\gamma =1$, $\tau=0.1$, $d=30$, and $\beta=0$. }\label{fig:SISq}
\end{center}
\end{figure}


\section{Discussion}

We started from the master equation of a one-step process, assumed that the coefficients are density dependent and that the functions $A$ and $C$ are polynomials. Then, under a certain sign condition on the coefficients of the polynomials, we proved that the mean-field equation yields an upper bound on the expected value of the process. We constructed an auxiliary system for the artificially defined functions $z_j$ and proved that $z_1$ is a lower bound. We showed several examples where the upper and lower bounds are close to each other; hence the method can be used to approximate the expected value without solving the large system of master equations or resorting to simulation, which only gives probabilistic guarantees.

Two avenues for future research are relaxing the sign condition \eqref{sign} and improving the lower bound.
Following the argument in Step 2, it is easy to see that in the case $D_j \geq 0$ for $j\geq 2$ the mean-field equation yields a lower bound. Also, in the voter-like model a violation of the sign condition (i.e., when $\gamma$, the rate of switching to opinion 0, is greater than $\tau$, the rate of switching to opinion 1) can be dealt with by switching the labels of the opinions, or equivalently replacing $x$ by $1-x$.
Perhaps more general violations of the sign condition can be dealt with by considering the convexity or concavity of the entire polynomials $A(x)$ and $C(x)$ instead of the signs of their coefficients.

The second avenue for future research is improving the lower bound. Unlike the case of SIS disease propagation with airborne infection, $\beta>0$, where the upper and lower bounds are very close to each other even for small system sizes, say $N=100$, in the case of the regular SIS epidemic without airborne infection, $\beta=0$, and in the voter-like model the lower bound may be quite far from the expected value even for moderately large $N$, say $N=10^5$.

The reason that the lower bound veers off to 0 is that $z_2$ converges to a positive steady state and then the derivative of $z_1$ becomes negative after some time. This problem can be overcome by altering the differential equation of $z_1$: the term $D_2z_2$ could be changed to a term that also contains $z_1$, in order to prevent $z_1$ from becoming negative. The new term can be motivated by the fact that $z_j$ approximates the $j$-th moment $y_j$, and this function can in turn be approximated by $y^j$. This suggests that $z_2$ can be approximated by $z_1^2$, or in other words, that $z_1$ approximates $\sqrt{z_2}$. In order to tune the approximation we introduce an artificial parameter $q\in [0,2]$ and change the term $D_2z_2$ to $D_2z_2^{q/2} z_1^{2-q}$. For $q=0$ we obtain the mean-field upper bound, and for $q=2$ we get back the original lower bound. In the quadratic case the modified differential equations for the lower bound of the SIS epidemic \eqref{zsystem1SISa}--\eqref{zsystem2SISa} without airborne infection, $\beta=0$, are
\begin{align}\label{zsystem1SISq}
z_1'&=D_0 + D_1 z_1 + D_2z_2^{q/2} z_1^{2-q}, \\
\label{zsystem2SISq}
z_2'&=2D_0 z_2^{1/2} + 2D_1 z_2 +2 D_2 z_2^{3/2} +\frac{c}{N}.
\end{align}
In Figure \ref{fig:SISq} the solution of this system is shown for different values of $q$, together with the solution of the mean-field equation and the expected value. The curve for $q=0.5$ is close to the mean-field upper bound (i.e., $q=0$) and at least appears to be a valid lower bound for $y_1$.
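Numerically experimenting with $q$ requires only a small change to the earlier sketch (again our illustration, not the authors' code; the initial fraction $u$ and the horizon $T$ are assumptions):
\begin{verbatim}
# Sketch of the heuristic q-modified lower bound; q = 2 recovers the
# provable lower bound and q = 0 the mean-field equation.
import numpy as np
from scipy.integrate import solve_ivp

def solve_q_bound(D0, D1, D2, c, N, u, T, q):
    def rhs(t, v):
        z1, z2 = v
        return [D0 + D1 * z1 + D2 * z2**(q / 2) * z1**(2 - q),
                2 * (D0 * np.sqrt(z2) + D1 * z2
                     + D2 * z2**1.5) + c / N]
    sol = solve_ivp(rhs, (0, T), [u, u**2], rtol=1e-8, atol=1e-10)
    return sol.t, sol.y[0]

# Parameters of the q-experiment figure (beta = 0); u is assumed.
tau, gamma, d, N = 0.1, 1.0, 30, 100
D0, D1, D2 = 0.0, tau * d - gamma, -tau * d
c = 2 * tau * d + gamma            # = beta + |tau*d - beta| + tau*d + gamma
for q in (0.5, 1.0, 2.0):
    t, z1 = solve_q_bound(D0, D1, D2, c, N, u=0.05, T=15.0, q=q)
\end{verbatim}
The Cauchy--Schwarz variant discussed next can be solved in the same way by replacing the $z_2^{3/2}$ term with $z_2^2/z_1$.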
An alternative approach to an improved lower bound modifies the differential equation \eqref{zsystem2} for $z_2$ by replacing the $z_2^{3/2}$ term by $z_2^2/z_1$, resulting in the system
\begin{align}\label{zsystem1SISCS}
z_1'&=D_0 + D_1 z_1 + D_2z_2, \\
\label{zsystem2SISCS}
z_2'&=2D_0 z_2^{1/2} + 2D_1 z_2 +2 D_2 z_2^2/z_1 +\frac{c}{N}.
\end{align}
By Jensen's inequality, $y_2^{3/2}\leq y_3$. Thus, assuming $z_2\approx y_2$, the $z_2^{3/2}$ term can be interpreted as a lower bound for a $y_3$ term. The $z_2^2/z_1$ term is motivated as follows. The Cauchy--Schwarz inequality $y_2^2\leq y_3y_1$ leads to the lower bound $y_2^2/y_1\leq y_3$, which is tighter than $y_2^{3/2}$ because $y_2\geq y_1^2$ implies $y_2^2/y_1\geq y_2^{3/2}$. Assuming $z_1\approx y_1$ and $z_2\approx y_2$ then motivates using $z_2^2/z_1$ instead of $z_2^{3/2}$. From Figure \ref{fig:SISq} it appears to be a valid lower bound for $y_1$, and it is closer to the mean-field upper bound than the original lower bound.
Of course, these potential lower bounds \eqref{zsystem1SISq}--\eqref{zsystem2SISq} and \eqref{zsystem1SISCS}--\eqref{zsystem2SISCS} are only justified by a numerical experiment (Figure \ref{fig:SISq}) and some intuition. It is an open question whether they are provable lower bounds.