\section{Basic Terminology and Notation}
\label{notation}

We assume introductory knowledge of formal language and automata theory \cite{Hopcroft}, including the definitions of finite automata, context-free languages, Turing machines, etc. Next, some notation is given.
An {\em alphabet} is a finite set of symbols. A {\em word} $w$ over an alphabet $\Sigma$ is any finite sequence of symbols from $\Sigma$. Given an alphabet $\Sigma$, $\Sigma^*$ is the set of all words over $\Sigma$, including the empty word $\epsilon$, and $\Sigma^+$ is the set of non-empty words over $\Sigma$. A {\em language} $L$ is any subset of $\Sigma^*$. The {\em complement} of a language $L \subseteq \Sigma^*$ with respect to $\Sigma$ is $\overline{L} = \Sigma^* - L$. Given a word $w \in \Sigma^*$, $w^R$ is the reverse of $w$, $w[i]$ is the $i$'th character of $w$, and $|w|$ is the length of $w$.
Given an alphabet $\Sigma = \{a_1, \ldots, a_m\}$ and $a \in \Sigma$, $|w|_a$ is the number of $a$'s in $w$. The {\em Parikh map} of $w$ is $\psi(w) = (|w|_{a_1}, \ldots, |w|_{a_m})$, which is extended to the Parikh map of a language $L$, $\psi(L) = \{\psi(w) \mid w \in L\}$.
Also, $\alp(w) = \{ a \in \Sigma \mid |w|_a > 0\}$.
Given languages $L_1,L_2$, the {\em left quotient} of $L_2$ by $L_1$ is $L_1^{-1}L_2 = \{ y \mid xy \in L_2, x \in L_1\}$, and the {\em right quotient} of $L_1$ by $L_2$ is $L_1 L_2^{-1} = \{x \mid xy \in L_1, y \in L_2\}$.

For a language $L$, let $f_L(n)$ be the number of strings of length $n$ in $L$.
A language $L$ is called {\it counting-regular} if there exists a regular language $L'$ such that $f_L(n) = f_{L'}(n)$ for all integers $n \geq 0$.
Furthermore, $L$ is called {\it strongly counting-regular} if, for every regular language $L_1$, $L \cap L_1$ is counting-regular.
Let $k \ge 1$.
A language $L$ is {\it $k$-slender} if $f_L(n) \le k$ for all $n$, and $L$ is {\it thin} if it is $1$-slender. Furthermore, $L$ is {\it slender} if it is $k$-slender for some $k$.

A language $L\subseteq \Sigma^*$ is {\em bounded} if there exist (not necessarily distinct) words $w_1, \ldots, w_k \in \Sigma^+$ such that $L \subseteq w_1^* \cdots w_k^*$; it is {\em letter-bounded} if each of $w_1, \ldots, w_k$ is a letter.

Let $\mathbb{N}$ be the set of positive integers and $\mathbb{N}_0 = \mathbb{N} \cup \{0\}$.
A set $Q \subseteq \mathbb{N}_0^m$ is a {\em linear set} if there exist vectors $\vec{v_0}, \vec{v_1}, \ldots, \vec{v_n}$ such that $Q = \{\vec{v_0} + i_1 \vec{v_1} + \cdots + i_n \vec{v_n} \mid i_1, \ldots, i_n \in \mathbb{N}_0\}$. The vector $\vec{v_0}$ is called the {\em constant}, and $\vec{v_1}, \ldots, \vec{v_n}$ are called the {\em periods}.
We also say that $Q$ is the linear set generated by constant $\vec{v_0}$ and periods $\vec{v_1}, \ldots, \vec{v_n}$.
A linear set is called {\em simple} if its periods form a basis.
A {\em semilinear set} is a finite union of linear sets, and a semilinear set is {\em semi-simple} if it is a finite disjoint union of simple linear sets \cite{Flavio,Sakarovitch}.

A language $L \subseteq \Sigma^*$ is {\em semilinear} if $\psi(L)$ is a semilinear set.
Equivalently, a language $L$ is semilinear if and only if there is a regular language $L'$ with the same Parikh map (they have the same commutative closure) \cite{harrison1978}.
The {\em length set} of a language $L$ is the set $\{|w| \mid w \in L\}$.
A language $L$ is {\em length-semilinear} if its length set is a semilinear set; i.e., after mapping all letters of $L$ onto one letter, the Parikh map is semilinear, which is equivalent to the resulting unary language being regular.

A language $L \subseteq \Sigma^+$ is a {\em code} if $x_1 \cdots x_n = y_1 \cdots y_m$, with $x_i, y_j \in L$, implies $n=m$ and $x_i = y_i$ for each $i$, $1 \leq i \leq n$. Also, $L$ is a {\em prefix code} if $L \cap L \Sigma^+ = \emptyset$, and a {\em suffix code} if $L \cap \Sigma^+ L = \emptyset$.
See \cite{CodesHandbook} for background on coding theory.

A language family ${\cal L}$ is said to be {\em semilinear} if all $L \in {\cal L}$ are semilinear. A language family ${\cal L}$ is a {\em trio} if ${\cal L}$ is closed under inverse homomorphism, $\epsilon$-free homomorphism, and intersection with regular languages.
\nIn addition, ${\\cal L}$ is a full trio if it is a trio closed under homomorphism; and a full AFL is a full trio closed\nunder union, concatenation, and Kleene-*.\nMany well-known families form trios, such as each family of the Chomsky hierarchy \\cite{Hopcroft}.\nThe theory of these types of families is explored in \\cite{G75}.\nWhen discussing a language family that has certain properties, such as a semilinear trio, we say that the family has {\\em all properties effective} if all these properties provide effective constructions. For semilinearity, this means that \nthere is an effective construction to construct the constant and periods from each linear set making up the semilinear set.\n\n\n\nA pushdown automaton $M$ is $t$-reversal-bounded if $M$ makes at most\n$t$ changes between non-decreasing and non-increasing the size of its pushdown on every input, and it is reversal-bounded if it is $t$-reversal-bounded for some $t$. A pushdown automaton $M$ is unambiguous if, for all $w\\in \\Sigma^*$, there is at most one accepting computation of $w$ by $M$. More generally, $M$ is $k$-ambiguous if there are at most $k$ accepting computations of $w$. \n\nLet $\\DPDA$ ($\\PDA$) denote the class of deterministic (nondeterministic) pushdown automata (and languages). We also use $\\CFL = \\PDA$, the family of context-free languages.\n\nWe make use of one particular family of languages, which we will only describe intuitively (see \\cite{ibarra1978} for formal details). Consider a nondeterministic machine with a one-way input and $k$ pushdowns, where each pushdown only has a single symbol plus a bottom-of-stack marker. Essentially, each pushdown operates like a counter, where each counter contains some non-negative integer, and machines can add or subtract one, and test for emptiness or non-emptiness of each counter. \nWhen $k = 1$, we call these nondeterministic (and deterministic) one counter machines. 
Although such a machine with two unrestricted counters has the same power as a Turing machine \cite{Hopcroft}, if the counters are restricted, then the machine can have positive decidability properties. Let $\NCM(k,t)$ be the family of $k$-counter machines whose counters are $t$-reversal-bounded, and let $\DCM(k,t)$ be the deterministic subset of these machines. Also, let $\NCM$ be $\bigcup_{k,t\geq 1} \NCM(k,t)$ and $\DCM$ be $\bigcup_{k,t \geq 1} \DCM(k,t)$. The class of $\PDA$'s or $\DPDA$'s augmented with reversal-bounded counters is denoted by $\NPCM$ or $\DPCM$, respectively \cite{ibarra1978}.
It is known that $\NCM$ has a decidable emptiness problem, and $\DCM$ also has a decidable containment problem, with both being closed under intersection \cite{ibarra1978}. Furthermore, $\NCM$ and $\NPCM$ are semilinear trios.

\section{Counting-Regular Languages}
\label{sec:countingregular}

Obviously every regular language is counting-regular, and so are many non-regular languages. For example, $L_{Sq} = \{ w w \mid w \in \{a, b\}^*\}$ is counting-regular, since $L' = (a(a+b))^*$ has the same number of strings of length $n$ as $L_{Sq}$ for all $n$. It is a simple exercise to show that $L_{Sq}$ is in fact strongly counting-regular.
It is also easy to exhibit languages that are not counting-regular, e.g., $L_{bal} = \{ w \mid w$ is a balanced parentheses string$\}$ over the alphabet $\{ [, ]\}$. The reason is that there is no regular language $L$ such that the number of strings of length $2n$ in $L$ is the $n$'th Catalan number, as will be seen from the characterization theorem stated below.

Our goal in this section is to explore general families of languages that are counting-regular.
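The counts mentioned above can be checked by brute force. The following Python sketch (an illustration of ours; the function names are hypothetical) counts strings by length in $L_{Sq}$, in $L' = (a(a+b))^*$, and in the balanced-bracket language, whose even-length counts are the Catalan numbers:

```python
from itertools import product
import re

def count_sq(n):
    """f_{L_Sq}(n) by brute force, where L_Sq = { ww : w in {a,b}* }."""
    return sum(1 for t in product("ab", repeat=n)
               if n % 2 == 0 and t[: n // 2] == t[n // 2:])

def count_Lprime(n):
    """f_{L'}(n) by brute force, where L' = (a(a+b))^*."""
    return sum(1 for t in product("ab", repeat=n)
               if re.fullmatch(r"(a[ab])*", "".join(t)))

def count_balanced(n):
    """Number of balanced bracket strings of length n over { [ , ] }."""
    def balanced(t):
        depth = 0
        for c in t:
            depth += 1 if c == "[" else -1
            if depth < 0:   # a ']' with no matching '['
                return False
        return depth == 0
    return sum(1 for t in product("[]", repeat=n) if balanced(t))
```

For small $n$, `count_sq` and `count_Lprime` agree ($2^{n/2}$ at even lengths, $0$ at odd lengths), while `count_balanced` produces $1, 1, 2, 5, 14, \ldots$ at lengths $0, 2, 4, 6, 8$, the Catalan numbers.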
We briefly note the following:
\begin{theorem}
Any family ${\cal L}$ that contains some non-length-semilinear language $L$ is not counting-regular.
\end{theorem}
\begin{proof}
Given such an $L$, examine the set of all $n$ with $f_L(n) > 0$; since $L$ is not length-semilinear, this set is not semilinear. But every regular language $R$ is length-semilinear, so the set of all $n$ with $f_R(n) > 0$ must differ from it.
\qed \end{proof}
Thus, it is immediate that, e.g., any family that contains some non-semilinear unary language cannot be counting-regular. This includes such families as those accepted by checking stack automata \cite{CheckingStack}, and many others. Of interest is exactly which families of length-semilinear languages are counting-regular. We will investigate these questions here.

The following result is due to Berstel \cite{Berstel2}.

\begin{theorem}
\label{regularcharacterization}
Let $L$ be a regular language and let $f_L(n)$ denote the number of strings of length $n$. Then, one of the following holds:
\begin{enumerate}
\item[(i)] $f_L(n)$ is bounded by a constant $c$.

\item[(ii)] There is an integer $k > 0$ and a rational $c > 0$ such that $\limsup_{n \rightarrow \infty} \frac{f_L(n)}{n^k} = c$.

\item[(iii)] There exist an integer $k \geq 0$ and an algebraic number $\alpha$ such that \\
$\limsup_{n \rightarrow \infty} \frac{f_L(n)}{\alpha^n n^k} = c$ (where $c \neq 0$ is rational).
\end{enumerate}
\end{theorem}

We also need the following theorem due to Soittola \cite{Berstel}. We begin with the following definitions.
A sequence $s = \{s_n\}$, $n \geq 0$, is said to be the {\it merge} of the sequences $\{s^{(0)}\}, \ldots, \{s^{(p-1)}\}$, where $p$ is a positive integer, if $s^{(i)}_n = s_{i+np}$ for $0 \leq i \leq p-1$.
A sequence $\\{s_n\\}$ is said to be {\\it regular} if there exists a regular language $L$ such that $f_L(n)$ = $s_n$ for all $n$.\n\n\nNext we define a $\\mathbb{Z}$-rational sequence as follows: A sequence $\\{s_n\\}$, $n \\geq 0$, is $\\mathbb{Z}$-rational if there is a matrix $M$ of order $d \\times d$, a row vector $u$ of order $1 \\times d$, and a column vector $v$ of order $d \\times 1$ such that $s_n$ = $u \\ M^n v$. All the entries in $M$, $u$ and $v$ are over $\\mathbb{Z}$.\nA $\\mathbb{Z}$-rational sequence is said to have a {\\it dominating pole} if its generating function $s(z)$ = $\\sum_{n=0}^{\\infty} s_n z^n$ \ncan be written as a rational function $s(z)$ = $p(z)\/q(z)$ where $p$ and $q$ are relatively prime polynomials, and $q$ has a simple root $r$ such that \n$r' > r$ for any other root $r'$.\n\n\nSoittola's theorem \\cite{Berstel} can be stated as follows.\n\n\\begin{theorem}\n\\label{Soittola}\nA $\\mathbb{Z}$-rational sequence with non-negative terms is regular if and only if it is the merge of $\\mathbb{Z}$-rational sequences with a \ndominating pole.\n\\end{theorem}\n\nWe will also need the following theorem due to B\\'{e}al and Perrin \\cite{Beal}.\n\\begin{theorem}\n\\label{Beal}\nA sequence $s$ is the generating sequence of a regular language over a $k$-letter alphabet if and only if both the sequences $s$ = $\\{s_n\\}$, $n \\geq 0$ \nand $t$ = $\\{(k^n - s_n)\\}$, $n \\geq 0$ are regular.\n\\end{theorem}\n\nWe now show the main result of this section. This result can be viewed as a strengthening\n of a result of Baron and Kuich \\cite{Kuich} that $L(M)$ has a rational generating function if $M$ is an unambiguous finite-turn $\\PDA$. 
\n \n \\begin{comment}\n First a normal form is required.\nA $\\PDA$ is in {\\em normal form} if\n\\begin{enumerate}\n\\item the stack starts with $Z_0$, always pushes some letter, and accepts by final state and empty stack (ending at $Z_0$),\n\\item each transition either pushes one symbol onto the stack (a push move) or pops one off the stack (a pop move).\n\\end{enumerate}\nThus, there can be no moves applied that do not change the size of the stack.\n\\begin{lemma}\n\\label{normalform}\nLet $t \\geq 1$. Given a one-way unambiguous $t$-reversal-bounded $\\PDA$ $M$, there\nis a one-way unambiguous $t$-reversal-bounded $\\PDA$ $M'$ in normal form such that $L(M) = L(M')$.\n\\end{lemma}\n\\begin{proof}\nIt is clear that the first condition can be assumed. \nIt can also be assumed that each transition that replaces the topmost stack symbol with another symbol does not change the stack (i.e. it is a `stay' transition) as follows:\nafter a push, nondeterminism is used to guess (and verify) the final value of the topmost\nsymbol before either a pop or push occurs. After a pop, instead of changing the topmost symbol, it guesses whether the next\nchange is another pop or a push. If it is a pop, the simulated top symbol can be stored in the finite control. If it is a push, then the new machine pushes a new tagged symbol associated with the new top of the stack (which when popped, pops the untagged symbol beneath).\n\n\nGiven $M$ satisfying these conditions, construct $M'$ in normal form as follows: $M'$ simulates every push transition exactly as in $M$, but immediately after simulating a push transition (and at the beginning of the computation), it guesses whether or not the next change to the stack will be a push or a pop (at a reversal). If it guesses a push, then it simulates all stay transitions by pushing a new symbol $\\#_1$ for each such stay transition applied. 
Then, immediately before simulating the next push transition, it pushes arbitrarily many copies of the symbol $\#_2$ onto the stack (on $\epsilon$ transitions), then immediately continues the simulation of the next push transition. If it guesses that the next transition is a pop (i.e., a reversal occurs), then $M'$ guesses whether an even or odd number of stay transitions are to be applied. If it guesses even, it simulates some number of stay transitions while pushing $\#_1$, then nondeterministically switches to simulating while popping $\#_1$, then continues to simulate $M$'s pop moves. If it guesses odd, it applies one push on $\#_1$, then proceeds as with the even case.
In a similar fashion, $M'$ simulates each pop transition exactly as in $M$, and then simulates each stay transition but must pop a $\#_2$ for each such stay transition simulated. Before the next pop (or push, if there is a reversal) transition simulated, it pops all $\#_1$ symbols first (on $\epsilon$ input). It is clear that $L(M) = L(M')$, and that $M'$ is $t$-reversal-bounded.

For unambiguity, for each word $w \in L(M)$, there is exactly one accepting computation of $M$ on $w$. For each symbol $c$ pushed onto the pushdown, there is some unique number of stay transitions applied, say $l$, before another transition is simulated that changes the stack. Assume first that the next move that changes the stack is a push. Then later in the computation, before the next time this same symbol $c$ reaches the top of the stack again, there is some pop move followed by a sequence of, say, $r$ stay transitions with $c$ on top of the stack before another move is simulated that changes the stack. Since $l$ and $r$ are unique, when $M'$ simulates $M$, in any accepting computation, it must push exactly $l$ copies of $\#_1$ during the stay transitions, and then must push exactly $r$ copies of $\#_2$ on $\epsilon$ input since later in the computation $r$ is unique.
Similarly, after the corresponding pop transition, since $r$ is unique, it must pop $r$ copies of $\#_2$ during the stay transitions simulated, and it must then pop $l$ copies of $\#_1$ on $\epsilon$ input. Next, assume the next move that changes the stack is a pop (at a point of reversal from non-decreasing to non-increasing); then the number of stay transitions $l$ is unique, and if $l$ is even then only the even strategy above (pushing $l/2$ copies of $\#_1$ followed by popping them) leads to acceptance, and similarly with the odd case. Hence, there is only one accepting computation of $M'$ on $w$.
\qed
\end{proof}

This normal form is helpful in showing the following result:
\begin{theorem}
\label{main2}
Let $M$ be an unambiguous reversal-bounded $\PDA$ over a $k$-letter alphabet $\Sigma$.
Then, $L(M)$ is strongly counting-regular, where the regular language is over a $(k+1)$-letter alphabet.
\end{theorem}
\begin{proof}
It is already known that all languages in $\DCM(1,t)$ are counting-regular \cite{R}, but here the result is shown to work for reversal-bounded pushdown automata, and with nondeterminism as long as the $\PDA$'s are unambiguous.
Moreover, an important improvement we make here is the size of the alphabet associated with this construction. Specifically, the proof presented in \cite{R} used an alphabet of size $3k$ for the regular language $L'$ such that $f_{L'} = f_{L(M)}$. Here we show that there is a regular language $L'$ over an alphabet of size $k+1$ such that $f_{L'} = f_{L(M)}$.

Let $M$ be $t$-reversal-bounded, $t \geq 1$.
First we note that it is enough to show that $L(M)$ is counting-regular.
The stronger claim that $L(M)$ is strongly counting-regular can be seen as follows: for an arbitrary $\DFA$ $M_1$, using the standard proof of closure of $\PDA$'s under intersection with regular languages \cite{Hopcroft}, the $\PDA$ constructed accepting $L(M_1) \cap L(M)$ is unambiguous and $t$-reversal-bounded. Hence, proving counting-regularity of the language accepted by each $t$-reversal-bounded unambiguous $\PDA$ is enough.

We will present the proof in detail for the case $t = 1$ and, at the end, describe how to extend the result to general $t$.
We assume without loss of generality, by Lemma \ref{normalform}, that $M = (Q,\Sigma,\Gamma,\delta,q_0,Z_0,F)$ is in normal form.

In the following, we consider a specific input string $w = w_1 w_2 \cdots w_n \in L(M)$, where $w_i \in \Sigma$, $1 \leq i \leq n$, and the computation described below refers to the unique accepting computation of $M$ on $w \neq \epsilon$ (the empty string can be handled as a special case).
Consider the (unique) accepting computation of $M$ on $w$:
\begin{equation}
\label{acceptingcomp}
(q_0,x_0,\gamma_0) \vdash (q_1,x_1,\gamma_1) \vdash \cdots \vdash (q_m,x_m,\gamma_m),
\end{equation}
where $q_i \in Q$, $x_i \in \Sigma^*$, $\gamma_i \in \Gamma^*$, $0 \leq i \leq m$, $w = x_0$, $\gamma_0 = \gamma_m = Z_0$ (the bottom-of-stack symbol), $x_m = \epsilon$, and $q_m \in F$.
For each $i$, $0 \leq i \leq m$, let $c_i$ be the top of the pushdown $\gamma_i$.
Since every transition applied before the reversal pushes a character and every transition applied after the reversal pops a character, it is immediate that the first $m/2$ transitions applied must be push transitions, the last $m/2$ must be pop transitions, and for each $i$, $0 \leq i \leq m/2$, $c_i = c_{m-i}$.
Note that some of these moves could be $\epsilon$ moves (that do not depend on the input symbol scanned by the input head and do not advance the input head), and others
depend on the input symbol and result in advancing the input head on the tape.

Let $e$ be a new symbol ($e$ used in place of $\epsilon$), and let $\Sigma_e = \Sigma \cup \{e\}$. For $1 \leq i \leq m/2$, let $\sigma_i \in \Sigma_e$ be the letter read in Equation (\ref{acceptingcomp}) when the $i$'th symbol gets pushed on the stack, with an $e$ in place of every $\epsilon$ transition.
Similarly, for $1 \leq i \leq m/2$, let $\tau_i \in \Sigma_e$ be the letter read when the $i$'th symbol above $Z_0$ in the stack gets popped, with $e$ in place of $\epsilon$.
Let $z = \sigma_1 \sigma_2 \cdots \sigma_{m/2} \tau_{m/2} \tau_{m/2-1} \cdots \tau_1 \in \Sigma_e^*$, which is the word read in Equation (\ref{acceptingcomp}) with $e$ in place of $\epsilon$.

Let $\Sigma_1 = \{a^{+}, a^{-}, a^{+,\epsilon}, a^{-,\epsilon} \mid a \in \Sigma\}$.
For a pair $a,b \in \Sigma_e$, let
$$f(a,b) = \begin{cases}
a^{+} b^{-} & \mbox{if~} a,b \in \Sigma,\\
a^{+,\epsilon} & \mbox{if~} a \in \Sigma, b = e,\\
b^{-,\epsilon} & \mbox{if~} b \in \Sigma, a = e,\\
\epsilon & \mbox{if~} a = b = e.
\end{cases}$$
For $1 \leq i \leq m/2$, let $y_i = f(\sigma_i,\tau_i)$, and let $y = y_1 \cdots y_{m/2}$.

Hence, $y \in \Sigma_1^*$ satisfies the condition that $|y| = |w|$. Let $\code(w)$ denote the string $y$ as defined above. Define the language $L' = \{\code(w) \mid w \in L(M)\}$.

To complete the proof, we will show the following: (a) $L'$ is regular and (b) $f_{L(M)}(n) = f_{L'}(n)$ for all $n$.
To show (a), we create an $\NFA$ (with $\epsilon$ transitions and multiple initial states) $M'$ accepting $L'$.
Intuitively, $M'$ simulates $M$ on the $+$ and $+,\epsilon$ marked symbols, and in parallel, $M'$ simulates $M$ ``in reverse'' on the $-$ and $-,\epsilon$ marked symbols, also verifying that when there are consecutive symbols, one marked with $+$ and the next marked with $-$, they correspond to manipulating the same stack symbol; i.e., that the $i$'th symbol pushed matches the same symbol popped.
The state set of $M'$ is $Q' = (Q \times Q \times \Gamma) \cup (Q \times Q \times \Gamma \times \{-\})$. Here, the first and third components are the state and topmost stack symbol in the forward simulation, the second component is the state in the reverse simulation with the same topmost symbol as the third component, and the last optional component is a signal that the next transition must only apply to the reverse simulation.
The initial state set is $\{q_0\} \times F \times \{Z_0\}$, and the final state set is $F'= \{(q,q,c) \mid q \in Q, c \in \Gamma\}$.
The transition function $\delta'$ is constructed as follows:
\begin{enumerate}
\item For all pairs of transitions $(q',push(c)) \in \delta(q,a,d)$ and $(p',pop) \in \delta(p,b,c)$, where $p,q,p',q' \in Q$, $a,b \in \Sigma$, $c,d \in \Gamma$, add
$$(q',p',c,-) \in \delta'((q,p',d),a^+)\mbox{~and~} (q',p,c) \in \delta'((q',p',c,-), b^-).$$
\item For all pairs of transitions $(q',push(c)) \in \delta(q,a,d)$ and $(p',pop) \in \delta(p, \epsilon,c)$, where $p,q,p',q' \in Q$, $a \in \Sigma$, $c,d \in \Gamma$, add
$$(q',p,c) \in \delta'((q,p',d), a^{+,\epsilon}).$$
\item For all pairs of transitions $(q',push(c)) \in \delta(q,\epsilon,d)$ and $(p',pop) \in \delta(p,b,c)$, where $p,q,p',q' \in Q$, $b \in \Sigma$, $c,d \in \Gamma$, add
$$(q',p,c) \in \delta'((q,p',d), b^{-,\epsilon}).$$
\item For all pairs of transitions $(q',push(c)) \in \delta(q,\epsilon,d)$ and $(p',pop) \in \delta(p, \epsilon,c)$, where $p,q,p',q' \in Q$, $c,d \in \Gamma$, add
$$(q',p,c) \in \delta'((q,p',d),\epsilon).$$
\end{enumerate}

Given the accepting computation of $w$ in Equation (\ref{acceptingcomp}), $M'$ accepts $\code(w) = y = y_1 \cdots y_{m/2}$ as follows.
First, $M'$ starts in $(q_0,q_m, Z_0)$. We will show by induction that, for each $i$, $0 \leq i \leq m/2$,
$$(q_i, q_{m-i},c_i) \in \hat{\delta}((q_0,q_m,Z_0), y_1 \cdots y_i)$$
(where $i=0$ implies $y_1 \cdots y_i = \epsilon$). The base case is true since, when $i = 0$, $(q_0,q_m,Z_0) = (q_i, q_{m-i},c_i)$.
Assume it is true for $i$, $0 \leq i < m/2$. First assume that from $(q_i,x_i,\gamma_i)$ to $(q_{i+1},x_{i+1},\gamma_{i+1})$, a transition that reads $a\in \Sigma$ is applied, and from $(q_{m-i-1},x_{m-i-1},\gamma_{m-i-1})$ to $(q_{m-i},x_{m-i},\gamma_{m-i})$, a transition that reads $b \in \Sigma$ is applied.
Then $y_{i+1} = a^+ b^-$, and using the sequence of two transitions created in step 1, $(q_{i+1}, q_{m-i-1},c_{i+1}) \in \hat{\delta}((q_i,q_{m-i},c_i),y_{i+1})$. Similarly for the other three cases, and hence the induction follows. In particular,
$(q_{m/2},q_{m/2},c_{m/2}) \in \hat{\delta}((q_0,q_m,Z_0), \code(w)) \cap F'.$

Conversely, given an accepting computation of some word $y \in \Sigma_1^*$, the first component must change as in $M$ while pushing is occurring, and the second component must change as in $M$ in reverse from the final state on the same sequence of stack symbols. Thus, there exists an accepting computation of $h_+(y) h_-(y)^R$ in $M$, where $h_+$ is a homomorphism mapping each letter marked with $+$ or $+,\epsilon$ to the corresponding letter of $\Sigma$, and erasing all others, and $h_-$ is a homomorphism that takes each letter marked with $-$ or $-,\epsilon$ and maps it onto the corresponding letter of $\Sigma$, and erases all others.

Finally, we note that the mapping $w \rightarrow \code(w)$ is bijective. The mapping in the forward direction is well-defined since $M$ (an unambiguous $\PDA$) has a unique accepting sequence on any input $w \in L(M)$. Thus the coding follows a deterministic procedure, and $\code(w)$ is well-defined and unique for a given $w \in L(M)$. Conversely, given $y$ accepted by $M'$, it can be decoded into a unique string $w \in L(M)$ such that $y = \code(w)$ (using $h_+(y) h_-(y)^R$ as above).

Next, we provide a sketch of how the above proof can be generalized to unambiguous $\PDA$'s $M$ that reverse the stack a finite number of times. The coding is similar to the one described above. Suppose the number of reversals is $t$. We can align the input positions corresponding to the same symbol getting pushed and popped by dividing the input string into several segments where, in each segment, $M$ only pushes or only pops.
As an example (see Figure \ref{revboundedfigure}), suppose the $\PDA$ makes two reversals and suppose the height of the stack corresponding to the input position is as shown in the figure. In this case, the input $w$ is divided into $w_1 \cdots w_6$, and the segments aligned to create $\code(w)$ will be $(w_1, w_6^R)$, followed by $(w_2, w_3^R)$, followed by $(w_4, w_5^R)$. Thus, in this case, the states of $M'$ have four state components, where it starts by deriving $w_1$ and $w_6^R$ in the first and last components until it reaches line $l$ in the figure, where it continues to derive $w_2$ and $w_3^R$ using the first and second components until those reach the same state, then it derives $w_4$ and $w_5^R$ using the third and fourth components.
More generally, in order to globally check that the simulation is correct, the states associated with all the segments must be kept in a vector of states by the simulating $\NFA$ with $t+1$ components to carry out the simulation and check the `global' correctness of the simulation. The $\NFA$ will guess when each block starts and ends, keeps track of the states associated with the blocks in the global vector of states, and verifies the start and end states of the blocks.

\begin{figure}
\begin{center}
\includegraphics[width=3in]{graph.pdf}
\caption{A $2$-reversal-bounded $\PDA$ and its pattern of stack change.}
\label{revboundedfigure}
\end{center}
\end{figure}

We have shown that, for any unambiguous $\PDA$ $M$ that reverses the stack a finite number of times, there is a regular language $L(M_1)$ such that $f_{L(M_1)}(n) = f_{L(M)}(n)$ for all $n$. The size of the alphabet over which $L(M_1)$ is defined is $4s$, where $s = |\Sigma|$.

Surprisingly, we can reduce the size of the alphabet from $4s$ to $s+1$. More precisely, let $M$ be an unambiguous $\PDA$ that reverses the stack a finite number of times, and let $s$ be the size of the alphabet over which $M$ is defined.
We will show that there is a regular language $L_1$ over an alphabet of size $s+1$ such that $f_{L(M)}(n) = f_{L_1}(n)$ for all $n$. As we have shown above, there is a regular language $L(M_1)$ such that $f_{L(M_1)}(n) = f_{L(M)}(n)$ for all $n$. Thus, by Theorem \ref{Soittola}, the generating function $a(z)$ associated with the sequence $a_n = f_{L(M)}(n)$, $n \geq 0$, is a rational function $p(z)/q(z)$ that satisfies the conditions of Theorem \ref{Soittola}. Let $b_n$ be defined as $b_n = (s+1)^n - a_n$. The generating function $b(z)$ for the sequence $(b_n)$, $n \geq 0$, is
$$b(z) = \frac{1}{1-(s+1)z} - a(z) = \frac{q(z) - (1-(s+1)z)\,p(z)}{(1-(s+1)z)\,q(z)}.$$
This is a rational function with dominating pole $\frac{1}{s+1}$ (since $a_n \leq s^n$, the roots of $q$ have strictly larger modulus), and hence it satisfies Theorem \ref{Soittola}. Since both $(a_n)$ and $(b_n) = ((s+1)^n - a_n)$ are regular sequences, by Theorem \ref{Beal}, there is a regular language $L_1$ over an $(s+1)$-letter alphabet such that $f_{L_1}(n) = f_{L(M)}(n)$ for all $n$.
\qed \end{proof}

\end{comment}


Here, an $\NTM$ is considered to have a one-way read-only input tape, plus a two-way read/write worktape that uses the blank symbol $\blank$.
Such a machine is said to be {\em reversal-bounded} if there is a bound on the number of changes in direction between moving left and right (and vice versa) on the worktape.
An $\NTM$ is said to be in {\em normal form} if, whenever the worktape head moves left or right (or at the beginning of a computation) to a cell $c$, the next transition can change the worktape letter in cell $c$, but then the cell does not change again until after the worktape head moves.
This essentially means that if there is a sequence of `stay' transitions (that do not move the read/write head) followed by a transition that moves, then only the first such transition can change the tape contents.
\begin{lemma}
Given an unambiguous reversal-bounded $\NTM$ $M$, there exists an unambiguous reversal-bounded $\NTM$ $M'$ in normal form such that $L(M) = L(M')$.
\end{lemma}
\begin{proof}
Given $M$, an $\NTM$ $M'$ is constructed as follows:
after a transition that moves the read/write head (or at the first move of the computation), instead of simulating a `stay' transition directly, $M'$ guesses the final value to be written on the cell before the head moves, and writes it during the first `stay' transition.
$M'$ then continues to simulate the sequence of stay transitions without changing the value in the cell but remembering it in the finite control. Then, during the last transition of this sequence, the one that moves the tape head, $M'$ verifies that it guessed the final value correctly.
Certainly, $M'$ is reversal-bounded if and only if $M$ is reversal-bounded. Further, as $M$ is unambiguous, there is only one accepting computation for every word in $L(M)$. Therefore, in $M'$, only one value can be guessed on each sequence of `stay' transitions that leads to acceptance. Thus, $M'$ is unambiguous.
\n\\qed\\end{proof}\n\nIt will be shown that every such $\\NTM$ is counting regular.\n\\begin{theorem}\n\\label{main2}\nLet $M$ be an unambiguous reversal-bounded $\\NTM$ over a $k$ letter alphabet. Then $L(M)$ is strongly counting regular, where the regular language is over a $k+1$ letter alphabet. \n\\end{theorem}\n\\begin{proof}\nFirst we note that it is enough to show that $L(M)$ is counting regular, as the class of languages accepted by unambiguous reversal-bounded $\\NTM$s are closed under\nintersection with regular languages.\n\nLet $M= (Q,\\Sigma,\\Gamma,\\delta,q_0,F)$ be an unambiguous $\\NTM$ that is $t$-reversal-bounded such that\n$M$ is in normal form. Also, assume without loss of generality that $t$ is odd.\n\nIntuitively, the construction resembles the construction that the store languages (the language\nof all contents of the worktape that can appear in an accepting computation) of every such\n$\\NTM$ is a regular language \\cite{StoreLanguages}. \nA $2\\NFA \\ M'$ is constructed that has $t+1$ ``tracks'', and it uses the first track for simulating $M$\nbefore the first reversal, the second track for simulation of $M$ between the first and the second reversal, etc. Thus the input to $M'$ is the set of strings in which the first track contains an input string $x$ of $M$, and the other tracks are annotated with the contents of the read-write tape during moves of $M$ between successive reversals. Formal details are given below.\n\nA $2\\NFA$ $M'= (Q',\\Sigma',\\delta',q_0',F')$ is constructed as follows:\nLet $C = [(Q \\times (\\Sigma \\cup \\{\\epsilon\\})\\times Q \\times \\Gamma) \\cup \\{\\blank\\}]^{t+1}$\n(the $t+1$ tracks; each track is either a blank, or some tuple in $Q \\times (\\Sigma \\cup \\{\\epsilon\\}) \\times Q \\times \\Gamma$). 
Also, let $C_i$ have $t+1$ tracks where the $i$'th track, for $1 \\leq i \\leq t+1$, contains an element from $ Q \\times (\\Sigma \\cup \\{\\epsilon\\}) \\times Q \\times \\Gamma$, and\nall other tracks contain new symbol $\\#$.\nLet $\\Sigma' = C \\cup C_1 \\cup \\cdots \\cup C_{t+1}$.\nTo simulate moves between the $(i-1)$st reversal and the $i$'th reversal, $M'$ will\nexamine track $i$ of a letter of $C$ to simulate the first transition after the tape head\nmoves to a different cell (or at the first step of a computation), and track $i$ of letters\nof $C_i$ to simulate any stay transitions that occur before the tape head moves again, followed by the transition that moves.\n\nLet $X = C_{t+1}^* \\cdots C_4^* C_2^* C C_1^* C_3^* \\cdots C_t^*$. Let $h_i$ be a homomorphism that maps each string in $(\\Sigma')^*$ to the $i$'th track for symbols in $C \\cup C_i$, and erases all symbols of $C_j, j \\neq i$. Also, $\\bar{h_i}$ is a homomorphism that maps each string in $(\\Sigma')^*$\nto the $i$'th track if it is not $\\blank$, and $\\epsilon$ otherwise, for symbols in $C \\cup C_i$, and erases all symbols of $C_j$, $j \\neq i$.\nThen $M'$ does the following:\n\\begin{enumerate}\n\\item $M'$ verifies that the input $w$ is in $X^*$, and that no letter of $[\\blank]^{t+1}$ is used in $w$.\n\\item $M'$ verifies that for each $i$, $h_i(w) \\in \\blank^* (Q \\times (\\Sigma \\cup \\{\\epsilon\\}) \\times Q \\times \\Gamma)^* \\blank^*$, so blanks can only occur at the ends.\n\\item $M'$ verifies that $w$ represents an accepting computation of $M$ as follows: $M'$ goes to the first symbol of $C$ with a non-blank in the first track. Say \n$\\bar{h_1}(w) = (p_1,a_1,p_1',d_1) \\cdots (p_m,a_m,p_m',d_m), m \\geq 1$. For $j$ from $1$ to $m$,\nwe say that $j$ is from $C$ if $(p_j,a_j,p_j',d_j)$ is from a symbol of $C$ and not $C_1$, and we say $j$ is from $C_1$ otherwise. 
It verifies from \nleft-to-right on $w$ that for each $j$ from $1$ to $m$, there is a transition of $M$ that switches from $p_j$ to $p_j'$ while reading $a_j \\in \\Sigma \\cup \\{\\epsilon\\}$ as input on worktape letter $\\blank$ if $j$ is from $C$, and $d_{j-1}$ otherwise, replacing it with $d_j$, and that moves on the worktape in the direction required in this phase of the simulation; the verification for the remaining tracks is analogous.\n\\end{enumerate}\n\\qed\\end{proof}\n\nRecall that a language is in the class $\\RCM$ if it is of the form $\\mu(R \\cap [C])$, where $R$ is a regular language, $C$ a system of linear constraints, and $\\mu$ a length-preserving morphism that is injective on $R \\cap [C]$. The specific language we denote by $L_{\\RCM}$\n is defined as follows:\n$$L_{\\RCM} = \\{w \\in \\{a,b\\}^* \\ |\\ w[|w|_a] = b\\}.$$\n(From the definition, it is clear that $a^n$ is not in $L_{\\RCM}$. But the definition does not specify the status of the string $b^n$ since there is no position 0 in the string. We resolve this by explicitly declaring the string $b^n$ to be in $L_{\\RCM}$ for all $n$.) \n\nWe first observe that $L_{\\RCM}$ is in $\\NCM(1,1)$ and hence is context-free: We informally describe a nondeterministic 1-reversal-bounded 1-counter machine $M$ for $L_{\\RCM}$. Let $w$ = $w_1 \\cdots w_n \\in L_{\\RCM}$ and suppose $|w|_a$ = $i$ and\n$|w_1 w_2 \\cdots w_{i-1}|_a$ = $j$. Then, it is clear that $w_i$ = $b$, $j < i$, and $|w_{i+1} \\cdots w_n|_a$ = $i-j$. Now we describe the operation of $M$ on an input string $w$ (not necessarily in $L_{\\RCM}$) as follows. $M$ increments the counter for every $b$ read, up to and including a position $i$ that it nondeterministically guesses; it checks that the symbol at position $i$ is $b$, then switches to a decrementing phase in which it decrements the counter for each $a$ seen. When the input head reaches the end of the input, $M$ accepts if the counter is 0. If $w \\in L_{\\RCM}$, it is clear that when $M$ switches from the incrementing to the decrementing phase, the counter value is $i-j$, and hence when it finishes reading the input, the counter will become 0. The converse is also true.
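The defining condition of $L_{\\RCM}$ is easy to test directly, which can serve as a sanity check for the machine described above (a Python sketch; the function name is ours, and positions are 1-indexed as in the definition):

```python
def in_L_RCM(w: str) -> bool:
    """Membership in L_RCM = { w in {a,b}* : w[|w|_a] = b },
    with b^n (the case |w|_a = 0) declared to be in L_RCM."""
    k = w.count('a')        # k = |w|_a
    if k == 0:
        return True         # covers the empty string and b^n by convention
    return w[k - 1] == 'b'  # w[k] in the 1-indexed notation of the text
```

For instance, $aba$ is accepted (it has two $a$'s and its second symbol is $b$), while $a^n$ is always rejected.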
Thus it is clear that $L(M)$ = $L_{\\RCM}$.\n\nAn interesting aspect of the next result is that it applies the simulation technique of Theorem \\ref{main2} twice --- by first bijectively (and in a length-preserving way) mapping $L_{\\RCM}$ to a language accepted by a $\\DCM(1,1)$, then applying Theorem \\ref{main2} to map it to a regular language. \n\n\\begin{theorem}\n$L_{\\RCM}$ is strongly counting-regular.\n\\end{theorem}\n\\begin{proof} Consider $L'$ = $\\{ [a_1, b_1] [a_2, b_2] \\cdots [a_n, b_n]\\ |\\ a_1 a_2 \\cdots a_n \\in L_{\\RCM}$, and $b_i$ = $c$ for all $1 \\leq i \\leq k$, and $b_i$ = $d$ for all $k+1 \\leq i \\leq n$, where $k$ is the number of $a$'s in $a_1 a_2 \\cdots a_n\\}$. Thus, $L'$ is defined over $\\Sigma_1$ = $\\{[a,c], [b,c], [a,d],[b,d]\\}$. As an example, the string $[a, c][b,c][a, d]$ is in $L'$ since $aba \\in L_{\\RCM}$. It is easy to see that there is a bijective mapping between strings in $L_{\\RCM}$ and strings in $L'$. Given a string $w \\in L_{\\RCM}$, there is a unique string $w'$ in $L'$ whose upper track consists of $w$ and whose lower track consists of $c^k d^{n-k}$, where $|w|$ = $n$ and $|w|_a$ = $k$. Since $k$ and $n$ are uniquely specified for a given string $w$, the string $w'$ is uniquely defined for a given $w$. Conversely, the projection of a $w' \\in L'$ to its upper-track string uniquely defines a string in $L_{\\RCM}$. \n\nLet $R$ be a regular language over $\\{a, b\\}$. Define the language $R' = \\{[a_1, b_1] [a_2, b_2] \\cdots [a_n, b_n] \\in L'\\ |\\ a_1 \\cdots a_n \\in R\\}$. It is clear that there is a 1-1 correspondence between strings in $R'$ and strings in $L_{\\RCM} \\cap R$. Finally, we note that there is a $\\DCM(1,1)$ $M$ that accepts $R'$: $M$ simulates the $\\DFA$ for $R$ on the upper-track input and, at the same time, performs the following operation. For every input symbol $[b,c]$, the counter is incremented, and for every $[a,d]$ the counter is decremented.
It also remembers the previous symbol scanned, and checks that the symbol scanned immediately before the first symbol with second component $d$ is $[b,c]$. Finally, $M$ accepts if the counter is 0 when the end of the input is reached. Note that since all $d$'s in the second track occur after the $c$'s, the counter reverses at most once.\n\nSince all $\\DCM(1,1)$'s are $1$-reversal-bounded $\\DPDA$'s, then by Corollary \\ref{cortomain}, $R'$ is counting-regular. Hence, $L_{\\RCM} \\cap R$ is counting-regular.\n\\qed\n\\end{proof}\n\nIn this section, we have shown that the languages accepted by unambiguous nondeterministic Turing machines with a one-way read-only input tape and a reversal-bounded worktape are strongly counting-regular. We also showed that some natural extensions of this class fail to be counting-regular. We presented some relationships between counting-regular languages and the class $\\RCM$. However, our understanding of which languages in $\\DCM$, $\\NCM$, or $\\CFL$ are (strongly) counting-regular is quite limited at this time.\n\n\\section{Bounded Semilinear Trio Languages are Counting-Regular}\n\\label{bounded}\n\nIn this section, we will show that all bounded languages in any semilinear\ntrio are counting-regular.\n\n\nFirst, the following is known \\cite{CIAA2016} (it follows from results in \\cite{ibarra1978}, and from the fact that all bounded $\\NCM$\nlanguages are in $\\DCM$ \\cite{IbarraSeki}):\n\n\\begin{lemma}\nLet $u_1, \\ldots, u_k \\in \\Sigma^+$, and let $\\phi$ be the function from $\\mathbb{N}_0^k$ to $u_1^* \\cdots u_k^*$ which associates\nto every vector $(l_1, \\ldots, l_k)$ the word $\\phi((l_1, \\ldots, l_k)) = u_1^{l_1} \\cdots u_k^{l_k}$.\nThen the following are true:\n\\begin{itemize}\n\\item given a semilinear set $Q$, then $\\phi(Q) \\in \\DCM$,\n\\item given an $\\NCM$ (or $\\DCM$) language $L \\subseteq u_1^* \\cdots u_k^*$, then $IND(L) = \\{(l_1, \\ldots, l_k) \\mid \\phi((l_1, \\ldots, l_k)) \\in L\\}$ is a semilinear set.\n\\end{itemize} Moreover,
both are effective.\n\\label{phi}\n\\end{lemma}\n\n\nRecall that two languages $L_1$ and $L_2$ are called commutatively equivalent if there is a Parikh-map-preserving bijection between them. Therefore, a language $L$ being commutatively equivalent to some regular language is a stronger property than being counting-regular.\nIn a sequence of three papers \\cite{FlavioBoundedSemilinearpaper1,FlavioBoundedSemilinearpaper2,FlavioBoundedSemilinear}, it was shown that all bounded semilinear languages ---\nthat is, all bounded languages $L$ for which $IND(L)$ is a semilinear set --- are commutatively\nequivalent to some regular language, and are therefore counting-regular. Recently, it was shown that\nall bounded languages from any semilinear trio are in $\\DCM$ \\cite{CIAA2016} (and are therefore\nbounded semilinear by Lemma \\ref{phi}). This enables us to conclude that all bounded languages\nin any semilinear trio are commutatively equivalent to some regular language, and thus counting-regular.\nHowever, as the proof that all bounded semilinear languages are commutatively equivalent to some regular language is quite lengthy, we provide a simple alternate proof that all bounded languages in any semilinear trio (that is, all bounded semilinear languages) are counting-regular. The class $\\DCM$ plays a key role in this proof.\n\n\n\\begin{lemma}\nLet $u_1, \\ldots, u_k \\in \\Sigma^+$, let $L\\subseteq u_1^* \\cdots u_k^*$ be a bounded $\\DCM$ language, and let $\\phi$ be the function from Lemma \\ref{phi}.\nThere exists a semilinear set $B$ such that $\\phi(B) = L$ and $\\phi$ is injective on $B$. Also, the construction of $B$ is effective.\n\\label{injective}\n\\end{lemma}\n\\begin{proof}\nLet $A = \\{a_1, \\ldots, a_k\\}$, and consider the homomorphism $h$ that maps $a_i$ to $u_i$, for $1 \\leq i \\leq k$.
It is known that\nthere exists a regular subset $R$ of $a_1^* \\cdots a_k^*$ such that $h$ maps $R$ bijectively onto $u_1^* \\cdots u_k^*$ \n(the Cross-Section Theorem of Eilenberg \\cite{Flavio}).\nLet $L' = h^{-1}(L) \\cap R$. Then $L'$ is in $\\DCM$ since $\\DCM$ is closed under inverse homomorphism and intersection with regular languages \\cite{ibarra1978}. Hence, there is a semilinear set $B = IND(L')$ from Lemma \\ref{phi}(2).\nThen $\\phi(B) = L$ since, given $(l_1, \\ldots, l_k) \\in B$, then\n$u_1^{l_1} \\cdots u_k^{l_k} \\in L$, and given $w \\in L$, by the bijection $h$,\nthere exists a string $a_1^{l_1} \\cdots a_k^{l_k}$ of $R$ such that $h(a_1^{l_1} \\cdots a_k^{l_k}) = w$,\nand so $w \\in \\phi(B)$. Also, $\\phi$ is injective on $B$, as given two distinct elements\n$(l_1, \\ldots, l_k)$ and $(j_1, \\ldots, j_k)$ in $B$, both\n$a_1^{l_1} \\cdots a_k^{l_k}$ and $a_1^{j_1} \\cdots a_k^{j_k}$ are in $R$, which means that $h$ maps them onto\ndifferent words in $u_1^* \\cdots u_k^*$ since $h$ is a bijection on $R$.\n\\qed \\end{proof}\n\n\nThe proof of the next result uses techniques similar to the proof that all bounded context-free languages are counting-regular from \\cite{Flavio}. But because there are key\ndifferences, we include a full proof for completeness.\n\n\\begin{lemma}\nLet $L \\subseteq u_1^* \\cdots u_k^*$ be a bounded $\\DCM$ language\nfor given words $u_1, \\ldots, u_k$. Then there exists an effectively constructible bounded\nregular language $L'$ such that, for every $n \\geq 0$, $f_L(n) = f_{L'}(n)$.\n\\end{lemma}\n\\begin{proof}\n\nLet $\\phi$ be the function from $\\mathbb{N}_0^k$ to $u_1^* \\cdots u_k^*$ such that \n$\\phi((l_1, \\ldots, l_k)) = u_1^{l_1} \\cdots u_k^{l_k}$. By Lemma \\ref{phi}, there exists a semilinear\nset $B$ of $\\mathbb{N}_0^k$ such that $\\phi(B) = L$ \\cite{ibarra1978}. Let\n$B = B_1\\cup \\cdots \\cup B_m$, where $B_i$, $1 \\leq i \\leq m$, are linear sets.
\nLet $L_1 = \\phi(B_1) \\in \\DCM$ (by Lemma \\ref{phi}), \n$L_2 = \\phi(B_2) - L_1 \\in \\DCM$ (by Lemma \\ref{phi} and since $\\DCM$ is closed under intersection\nand complement \\cite{ibarra1978}), etc.\\ until $L_m = \\phi(B_m) - (L_1 \\cup \\cdots \\cup L_{m-1}) \\in \\DCM$ (inductively, by Lemma \\ref{phi}, and by closure of $\\DCM$ under intersection, complement, and union). Then $L_1 \\cup \\cdots \\cup L_m = L$, and also $L_1, \\ldots, L_m$\nare pairwise disjoint, and therefore\nby Lemma \\ref{injective}, there is a semilinear\nset $B_i'$ such that $\\phi(B_i') = L_i$, and $\\phi$ is injective on $B_i'$.\nIt is known that, given any set of constants and periods generating a semilinear set $Q$, \nthere is a procedure to effectively construct another set of constants and periods that forms a semi-simple set,\nalso generating $Q$ \\cite{Flavio,Sakarovitch} (this is phrased more generally in both works, to say that the rational sets of a commutative monoid\nare semi-simple; but in our special case it amounts to constructing a new set of constants and periods generating the same\nsemilinear set such that the linear sets are disjoint, and the periods generating each linear set form a basis). \nHence, each $B_i'$ must be semi-simple as well (generated by a possibly different set of constants\nand periods). Let $B' = B_1' \\cup \\cdots \\cup B_m'$. Since each word in $L$ is in exactly one of the languages $L_1, \\ldots, L_m$,\nit follows that for each $(l_1, \\ldots, l_k)$, $\\phi((l_1, \\ldots, l_k))$ is in at most one of the languages $L_1, \\ldots, L_m$.\nSince $\\phi$ is injective on each $B_i'$, it \ntherefore follows that $\\phi$ is injective on $B'$. Also, $\\phi(B') = L = \\phi(B)$.\n\nThe rest of the proof then continues just as the proof of Theorem 10 of \\cite{Flavio} (starting at its second paragraph), which we now describe.
That is, define an alphabet $A = \\{a_1, \\ldots, a_k\\}$, and let $\\psi$ be the Parikh map of $A^*$ to $\\mathbb{N}_0^k$.\nFor every linear set $B''$ making up any of the semilinear sets of some $B_i'$, let\n$B''$ have constant $b_0$ and \nperiods $b_1, \\ldots, b_t$. Define the regular language $R_{B''} = v_0 v_1^* \\cdots v_t^*$\nwhere $v_0, \\ldots, v_t$ are any fixed words of $A^*$ such that, for every $i$,\n$\\psi(v_i) = b_i$. Thus, $\\psi(R_{B''}) = B''$, for each $B''$. Let $R$ be the union of all $R_{B''}$ over all the linear\nsets making up $B_1', \\ldots, B_m'$. Certainly $R$ is a regular language.\n\nIt is required to show that $\\psi$ is injective on $R$. Indeed, let\n$x$ and $y$ be two distinct elements of $R$. If $x$ and $y$ are constructed from two different linear sets\nfrom distinct semilinear sets $B_i', B_j', i \\neq j$, then $\\psi(x) \\neq \\psi(y)$ since the semilinear\nsets $B_i'$ and $B_j'$ are disjoint. If $x$ and $y$ are constructed from two different linear sets\nmaking up the same semilinear set $B_i'$, then since $B_i'$ is semi-simple,\nthe linear sets must be disjoint, and hence $\\psi(x) \\neq \\psi(y)$.\nIf $x$ and $y$ are constructed from the same linear set, then $\\psi(x)\\neq \\psi(y)$ since the linear\nset must be simple, so its periods form a basis, and therefore there is only one\nlinear combination giving each element. Hence, $\\psi$ is injective on $R$.\n\nConsider the map such that, for every $i$, $1 \\leq i \\leq k$, $a_i$ maps to\n$a_i^{|u_i|}$, and extend this to a homomorphism $\\chi$ from $A^*$ to $A^*$.\nSince $\\chi(A)$ is a code, $\\chi$ is an injective homomorphism of $A^*$ to\nitself. Let $L' = \\chi(R)$. Then $L'$ is a regular language.\n\nNext, it will be shown that $f_L(n) = f_{L'}(n)$. Consider the relation\n$\\zeta = \\phi^{-1} \\psi^{-1} \\chi$.
Then, when restricting $\\zeta$ to \n$L$, this is a bijection between $L$ and $L'$, since $\\phi$ is a bijection\nfrom $B'$ to $L$, $\\psi$ is a bijection of $R$ to $B'$, and $\\chi$ is a bijection\nof $R$ to $L'$. It only remains to show that, for each $u \\in L$,\n\\begin{equation}\n|u| = |\\zeta(u)|,\n\\label{conc}\n\\end{equation}\nwhich therefore would imply $f_L(n) = f_{L'}(n)$.\nFor each $u \\in L$, then\n$u = u_1^{l_1} \\cdots u_k^{l_k} = \\phi((l_1, \\ldots, l_k)) = \\phi(\\psi(x))$,\nwhere $x$ is the string in $R$ with $\\psi(x) = (l_1, \\ldots, l_k)$.\nSince $|x| = \\sum_{1 \\leq i \\leq k}|x|_{a_i} = \\sum_{1 \\leq i \\leq k} l_i$,\nthen $$|\\chi(x)| = \\sum_{1 \\leq i \\leq k} |x|_{a_i} |\\chi(a_i)| = \\sum_{1 \\leq i \\leq k} l_i|\\chi(a_i)| = \\sum_{1 \\leq i \\leq k} l_i|u_i| = |u|.$$ Thus, (\\ref{conc}) holds, and\nthe lemma follows.\n\\qed \\end{proof}\nWe should note that if the bounded language $L \\subseteq a_1^* \\cdots a_k^*$, where\n$a_1, \\ldots, a_k$ are distinct symbols, then there is a simpler proof of the lemma\nabove: the Parikh map of $L$ is semilinear, and therefore a regular language $L'$\ncan be built with the same Parikh map, and then $f_L(n) = f_{L'}(n)$. \nBut when the bounded language $L$ is not of this form, this simpler proof does not work. \n\nOur next result is a generalization of the previous result.\n\n\\begin{theorem} \nLet $L \\subseteq u_1^* \\cdots u_k^*$, \nfor words $u_1, \\ldots, u_k$, where $L$ is in any \nsemilinear trio ${\\cal L}$. \nThere exists a bounded\nregular language $L'$ such that, for every $n \\geq 0$, $f_L(n) = f_{L'}(n)$.\nMoreover, $L$ is strongly counting-regular. Furthermore, if $u_1, \\ldots, u_k$ are given,\nand all closure properties are effective in ${\\cal L}$, then $L'$ is effectively constructible.\n\\label{cor9}\n\\end{theorem}\nThis follows from \\cite{CIAA2016} since it is known that every\nbounded language from any such semilinear trio where the closure properties are effective\ncan be effectively converted\ninto a $\\DCM$ language. Strong counting-regularity follows since intersecting\na bounded language in a trio with a regular language produces another bounded language\nthat is in ${\\cal L}$, since trios are closed under intersection with regular languages.\n\nAlso, since the family of regular languages is the smallest semilinear trio \\cite{G75}, it follows that the counting functions for the bounded languages in every semilinear trio are identical.\n\\begin{corollary}\nLet ${\\cal L}$ be any semilinear trio. The counting functions for the bounded\nlanguages in ${\\cal L}$ are identical to the counting functions for the bounded regular languages.\n\\end{corollary}\n\n\n\nThis works for many semilinear full trios.
We will briefly discuss some in the next example.\n\\begin{example}\n\\label{semilinearfulltrioexamples}\nThe families accepted\/generated by the following grammar\/machine models form semilinear full trios (the closure properties and semilinearity are effective):\n\\begin{enumerate}\n\\item the context-free languages, $\\CFL$s,\n\\item one-way nondeterministic reversal-bounded multicounter machines, $\\NCM$s \\cite{ibarra1978},\n\\item finite-index $\\ETOL$ systems ($\\ETOL$ systems where the number of active\nsymbols in each derivation is bounded by a constant) \\cite{RozenbergFiniteIndexETOL},\n\\item $k$-flip $\\NPDA$s ($\\NPDA$s with the ability to ``flip'' their pushdown up to $k$ times) \\cite{Holzer2003},\n\\item one-way reversal-bounded queue automata (queue automata with a bound on the number of switches between enqueueing and dequeueing) \\cite{Harju},\n\\item $\\NTM$s with a one-way read-only input tape and a finite-crossing worktape \\cite{Harju},\n\\item uncontrolled finite-index indexed grammars (a restricted version of indexed grammars, where\nevery accepting derivation has a bounded number of nonterminals) \\cite{LATA2017},\n\\item multi-push-down machines (machines with multiple pushdowns that can simultaneously push to all pushdowns, but can only\npop from the first non-empty pushdown) \\cite{multipushdown}.\n\\end{enumerate}\nMoreover, all of these machine models can be augmented with reversal-bounded counters, and\nthe resulting families are semilinear full trios \\cite{Harju,fullaflcounters}.\n\\end{example}\n\n\n\\begin{corollary}\nLet $L \\subseteq u_1^* \\cdots u_k^*$ be a bounded language\nfor given words $u_1, \\ldots, u_k$, such that $L$ is from any of the \nfamilies listed in Example \\ref{semilinearfulltrioexamples}.\nThen there exists an effectively constructible bounded\nregular language $L'$ such that, for every $n \\geq 0$, $f_L(n) = f_{L'}(n)$.\n\\end{corollary}\nNote that it is not assumed for these models
that the machines are unambiguous, as in\nTheorem \\ref{main2}.\n\n\nThe results in this section assumed that the words $u_1, \\ldots, u_k$ \nsuch that $L \\subseteq u_1^* \\cdots u_k^*$ are given. However, it is an open problem whether, given a language $L$ in an arbitrary semilinear trio ${\\cal L}$, it is possible to determine whether\n$L \\subseteq u_1^* \\cdots u_k^*$ for some words $u_1, \\ldots, u_k$.\n\n\\section{Closure Properties for Counting-Regular Languages}\n\\label{sec:closure}\n\nIn this section, we will address the closure properties of counting-regular languages, and also\nof counting-regular $\\CFL$'s. \n\nFirst, it is immediate that counting-regular languages are closed under reversal (and since the $\\CFL$s\nare closed under reversal, so are the counting-regular $\\CFL$s). Next, Kleene-$*$ will be addressed.\n\\begin{theorem}\n\\label{code}\nIf $L$ is counting-regular and $L$ is a code, then $L^*$ is counting-regular.\n\\end{theorem}\n\\begin{proof}\nSince $L$ is a code, for each word $w \\in L^*$, there is a unique decomposition\n$w = u_1 \\cdots u_k$, where each $u_i \\in L$. Since $L$ is counting-regular, there is\nsome regular language $R$ with the same counting function. From $R$, make\n$R'$ where the first letter of each word is tagged with a prime, and all other letters are unmarked.\nNow, $R'$ is a code because of the tagged letters, and $R'$ has the same counting function\nas $R$.\n\nMoreover, $(R')^*$ has the same counting function as $L^*$. Indeed, let $n \\geq 0$. Consider all sequences $u_1, \\ldots, u_k$ of words in $L$ such that $n = |u_1| + \\cdots + |u_k|$. Then for each\n$u_i$, $L$ has the same number of words of length $|u_i|$ as does $R'$.
Since $R'$ is a code,\nit follows that there are the same number of such sequences using elements from $R'$.\n\\qed \\end{proof}\n\n\nA similar relationship to codes exists for concatenation.\n\\begin{theorem}\n\\label{prefixcode}\nIf $L_1, L_2$ are counting-regular and either $L_1$ is a prefix code or $L_2$ is a suffix code, then $L_1 L_2$ is counting-regular.\n\\end{theorem}\n\\begin{proof}\nAssume first that $L_1$ is a prefix code, so that $L_1 \\cap L_1 \\Sigma^+ = \\emptyset$. Let $w = uv, u \\in L_1, v \\in L_2$. Then this decomposition is unique since $L_1$ is a prefix code. Let $R_1, R_2$ be regular\nlanguages with the same counting functions as $L_1, L_2$ respectively. Let $R_1'$ be obtained from $R_1$ by tagging the last letter with a prime. Then $R_1'$ is also a prefix code. Further,\nthe counting function for $R_1'R_2$ is equal to that of $L_1L_2$. \n\nThe case is similar for $L_2$ being a suffix code.\n\\qed \\end{proof}\n\n\n\\begin{corollary}\n\\label{corcode}\nIf $L_1, L_2 \\subseteq \\Sigma^*$ are counting-regular, and $\\$,\\#$ are new symbols, then $L_1^R,\nL_1 \\$L_2, \\$L_1 \\cup \\#L_2$, and $(L_1\\$)^*$ are counting-regular.\n\\end{corollary}\n\\begin{proof}\nThe first was discussed above. The second follows from Theorem \\ref{prefixcode} since\n$L_1 \\$$ is a prefix code. The fourth\nfollows from Theorem \\ref{code} since $L_1\\$$ is a code.
For the third, since $L_1,L_2$ are\ncounting-regular, this implies there exist regular languages $R_1, R_2$ with the same\ncounting functions, and $\\$L_1 \\cup \\#L_2$ has the same counting function as \n$\\$R_1 \\cup \\#R_2$.\n\\qed\n\\end{proof}\n\n\nThis means that even though, e.g., non-reversal-bounded $\\DPDA$s can accept non-counting-regular\nlanguages while reversal-bounded $\\DPDA$s cannot, a $\\DPDA$ that is reversal-bounded except that\nreading a $\\$$ causes a ``reset'' (the pushdown is emptied, and another reversal-bounded computation is then possible) would still accept only counting-regular languages.\nThis is also the case with, say, $\\DTM$s whose worktape is reversal-bounded, but where reading\na $\\$$ causes a reset, after which more reversal-bounded computation is again possible.\nThis is quite a general model for which this property holds.\n\nThe next questions addressed are whether these are true when removing the $\\$$ and $\\#$\n(or removing or weakening the coding properties).\n\n\n\\begin{comment}\nClearly counting-regular $\\CFL$'s are not closed under intersection since $\\{a^n b^n c^m \\mid n, m \\geq 1\\}$ and \n$\\{a^n b^m c^m \\mid n, m \\geq 1\\}$ are counting-regular $\\CFL$'s, but their intersection is not. (It is not a $\\CFL$ although it is counting-regular.) Similarly, counting-regular $\\CFL$'s are not closed under complement since $\\overline{L_{Sq}}$ (recall that $L_{Sq}$ = $\\{ w w \\mid w \\in \\{a, b\\}^*\\}$) is a counting-regular $\\CFL$ since it has the same counting function as the regular language \n$\\{a_1b_1 \\cdots a_n b_n \\mid a_i,b_i \\in \\Sigma, a_1 \\cdots a_n \\neq b_1 \\cdots b_n\\}$, but $L_{Sq}$ is not a $\\CFL$ and is therefore not a counting-regular $\\CFL$. These are trivial (non)-closure properties inherited from the super-class of the class of $\\CFL$'s. Similarly it follows trivially that counting-regular $\\CFL$'s are closed under reversal.
However, a more interesting fact is that counting-regular $\\CFL$'s are {\\it not} closed under union or intersection with regular sets, as we show next. \n\\end{comment}\n\n\\begin{theorem}\nThe counting-regular languages (and the counting-regular $\\CFL$'s) are not closed under union or intersection with regular languages.\n\\end{theorem}\n\\begin{proof} Recall the $\\DFA$ $M_2$ presented in Figure \\ref{fig3}. Since $L_{\\MAJ}$ is a counting-regular $\\CFL$, and $L_{\\MAJ} \\cap L(M_2)$ is not, the non-closure under intersection with regular sets follows.\n\nFor non-closure under union with regular sets, we show that $L_{\\MAJ} \\cup L(M_2)$ is not counting-regular by explicitly computing the generating function for this language, using the fact that there is a 1-1 mapping between strings of $L_{\\MAJ}$ and $\\overline{L_{\\MAJ}}$ and between strings of $L(M_2)$ and $\\overline{L(M_2)}$. Thus the generating functions of both $f_{L_{\\MAJ}}(n)$ and $f_{L(M_2)}(n)$ are $\\frac{1-z}{1-2z}$, from which it follows that the generating function for $L_{\\MAJ} \\cup L(M_2)$ is $\\frac{1-2z-z^2}{1-2z} - \\frac{z^2+z}{\\sqrt{1-4z^2}}$. Since this is not a rational function, the claim follows.\n\\qed \\end{proof}\nThus, Corollary \\ref{corcode} cannot be weakened to remove the marking from the marked union.\n\n\nIt is an open question as to whether Theorem \\ref{code} can be weakened to remove the code\nassumption, but we conjecture that it cannot.\nHowever, for concatenation, we are able to show the following:\n\\begin{theorem}\nThe counting-regular languages (and counting-regular $\\CFL$'s) are not closed under concatenation with regular languages.\n\\end{theorem}\n\\begin{proof}\nLet $S_1$ = $\\{ w \\ |\\ w = a^n b v a^n$ for some $v \\in \\{a, b\\}^*\\}$ and let \n$S$ = $\\{ w \\ |\\ w = a^n b v_1 a^n v_2$ for some $v_1, v_2 \\in \\{a, b\\}^*\\}$ (as in Theorem \\ref{counter-examples}).\nIt is easy to see that $S$ = $S_1 (a+b)^*$.
We already showed that $S$ is not counting-regular. It is easy to show that $S_1$ is a counting-regular $\\CFL$. In fact, we can explicitly exhibit $f_{S_1}(n)$ as follows: $f_{S_1}(0)$ = 0 and, for $n \\geq 1$, $f_{S_1}(n)$ = $\\sum_{i=0}^{\\lfloor (n-1)\/2 \\rfloor} 2^{n-2i-1}$. From this, it is easy to see that $f_{S_1}(n)$ = $f_L(n)$ for the regular language $L$ with regular expression $(aa)^*b(a+b)^*$. \n\\qed \\end{proof}\nHence, Corollary \\ref{corcode} and Theorem \\ref{prefixcode} cannot be weakened to remove the marking with marked concatenation or the coding properties. It is an open problem as to whether\nTheorem \\ref{prefixcode} is true when $L_1$ or $L_2$ are codes (the union of the set of\nsuffix codes and the set of prefix codes is a strict subset of the set of codes \\cite{CodesHandbook}).\n\n\n\n\\begin{theorem}\n\\label{rightquotient}\nThe counting-regular languages (and counting-regular $\\CFL$'s) are not closed under right quotient with a single symbol, nor under left quotient with a single symbol.\n\\end{theorem}\n\\begin{proof} (sketch) \nFirst, it will be shown for right quotient.\nThe language $L_{\\MAJ}$ used in Theorem \\ref{MAJ} is counting-regular. In fact, it has exactly $2^{n-1}$ strings of length $n$ for all $n \\geq 1$. We will outline an argument that $L$ = $L_{\\MAJ}\\{0\\}^{-1}$ is not counting-regular. The number of strings of length $n$ in $L$ is \n$2^{n-1} - {n \\choose {(n-1)\/2}}$ for odd $n$, and $2^{n-1} - {{n-2} \\choose {n\/2}}$ for even $n$. Using a technique similar to the proof of Theorem \\ref{MAJ}, we can show that the generating function for $L$ is not rational.\n\nNext, it will be shown for left quotient.\nSince $L_{\\MAJ}$ is counting-regular, and the reversal of every counting-regular language has the same counting function, then $L_{\\MAJ}^R$ is also counting-regular. \nBut, as in the proof for right quotient, $\\{0\\}^{-1}L_{\\MAJ}^R$ is not \ncounting-regular.
\n\\qed \\end{proof}\nThe language $L_{\\MAJ}$ used is a deterministic one-counter language with no reversal bound.\nThe following, however, provides a contrast, as it follows from closure properties of deterministic\nmachines.\nWhen the languages are accepted by reversal-bounded machines, we have: \n\\begin{theorem} \n\\begin{enumerate}\n\\item If $L$ is accepted by a reversal-bounded \n$\\DTM$, and $R$ is a regular language, then \n$LR^{-1}$ is counting-regular. \n\\item If $L$ is accepted by an unambiguous reversal-bounded $\\NTM$, and \n$x$ is a string, then $L \\{x\\}^{-1}$ is also counting-regular.\n\\end{enumerate}\n\\end{theorem}\n\\begin{proof} Part 1 follows from the fact that the languages accepted by \nreversal-bounded $\\DTM$'s are closed under \nright-quotient with regular languages \\cite{StoreLanguages}. \n\nFor Part 2, clearly, if $L$ is accepted by an unambiguous reversal-bounded \n $\\NTM$ $M$, we can construct \nan unambiguous reversal-bounded \n$\\NTM$ $M'$ accepting $L\\{x\\}^{-1}$. \n\\qed \\end{proof}\nThis result also holds for all machine models in Corollary \\ref{cortomain}\n(that is, all deterministic models listed there work with right quotient with\nregular languages \\cite{StoreLanguages}, and the unambiguous nondeterministic models there work\nwith right quotient with a word). It is an open question as to whether\nunambiguous nondeterministic $\\NTM$s with a reversal-bounded worktape\nare closed under right quotient with regular languages, which would allow\nPart 2 to be strengthened.\n\n\nPart 1 of the next theorem contrasts with Part 1 of the previous theorem. \n\\begin{theorem}\n\\begin{enumerate}\n\\item There is a counting-regular language $L$ accepted by a $\\DCM(1,1)$ and distinct symbols $\\$$ and $\\#$ such that $\\{\\$,\\#\\}^{-1}L$ is not counting-regular.
\n\\item If $L$ is accepted by a reversal-bounded $\\DPDA$ (resp., reversal-bounded \nunambiguous $\\NPDA$, reversal-bounded unambiguous $\\NTM$), \nand $x$ is a string, then $\\{x\\}^{-1} L$ is also counting-regular. \n\\end{enumerate}\n\\end{theorem}\n\\begin{proof}\nFor Part 1, let \n$L_1 = \\{x 1^n \\mid x \\in (a+b)^+, n = |x|_a \\}$ and\n$L_2 = \\{x 1^n \\mid x \\in (a+b)^+, n = |x|_b \\}$. \nThen $L_1$ and $L_2$ can each be accepted by a $\\DCM(1,1)$. Let $L = L_1 \\cup L_2$, which was shown\nin Theorem \\ref{counter-examples} not to be counting-regular.\nLet $L' = \\$L_1 \\cup \\#L_2$, which can also be accepted by a $\\DCM(1,1)$; hence it is counting-regular. However, $\\{\\$,\\#\\}^{-1}L' = L$ is not counting-regular. \nPart 2 is obvious.\n\\qed \\end{proof}\n\n\n\n\nIt may seem obvious that for any counting-regular language $L$, $\\overline{L}$ is counting-regular because of the following putative reasoning: if there is a regular language $L'$ whose counting function equals the counting function of $L$, the complement of $L'$ (which is regular) has the same counting function as that of $\\overline{L}$. The fallacy in this argument is as follows. Suppose that the size of the alphabet over which $L$ is defined is $k$. The size $k_1$ of the alphabet over which $L'$ is defined may be larger, i.e., $k_1 > k$. Thus, the complement of $L'$ has the counting function ${k_1}^n - f_{L'}(n)$, which need not be the same as the counting function $k^n-f_L(n)$ of $\\overline{L}$. \n\n\nIn fact, the following result shows that the exact opposite of the fallacy is actually true.\n\\begin{theorem}\nThere is a counting-regular language $L$ (that is in $\\P$, i.e., $L$ is deterministic polynomial-time computable) such that $\\overline{L}$ is not counting-regular.\n\\end{theorem}\n\\begin{proof} The proof relies on a result presented in B\\'{e}al and Perrin \\cite{Beal}.
B\\'{e}al and Perrin \\cite{Beal} provide an example of a sequence $\\{r_n\\}$, $n \\geq 0$ and an integer $k$ such that $\\{r_n\\}$ is not a counting function of any regular language, but $\\{k^n - r_n\\}$ is the counting function of a regular language. Specifically, it is shown in \\cite{Beal} that the sequence $r_n$ = $b^{2n} \\cos^2(n\\theta)$, with $cos\\ \\theta$ = $a \\over b$, where the integers $a$, $b$ are such that $b \\neq 2a$ and $0 < a < b$, and $k$ such that $b^2 < k$, satisfies the properties stated above.\n\nDefine a language $L$ over an alphabet of size $k$ as follows: arrange the strings of length $n$ over the alphabet $\\{0, 1, \\ldots , k-1\\}$ lexicographically. A string $w$ of length $n$ is defined to be in $L$ if and only if the rank of $w$ (in the lexicographic order) is greater than $r_n$. (Rank count starts at 1.) Clearly, the number of strings of length $n$ in $L$ is exactly $s_n$ = $k^n - r_n$. As shown in \\cite{Beal}, $L$ is counting-regular and $\\overline{L}$ is not counting-regular. \n\nFinally, we provide a sketch of the proof that there is a deterministic polynomial time algorithm for $L$. Given a string $w$ of length $n$, and an integer $T$ (in binary) where the number of bits in $T$ is $O(n)$, it is easy to see that there is a deterministic algorithm that determines in time polynomial in $n$ if the rank of $w$ is greater than $T$. (This algorithm simply converts $T$ from binary to base $k$ and compares the resulting string to $w$ lexicographically. Base conversion can be shown to have complexity no more than that of integer multiplication.)\nTo complete the algorithm, we need to show how to compute $r_n$ (in binary), given $n$, in time polynomial in $n$. Note that $r_n$ is given by $b^{2n} \\cos^2(n\\theta)$. Clearly, $b^{2n}$ can be computed in polynomial time by repeated multiplication by $b$ . 
We don't even need repeated squaring to achieve a polynomial bound since the time complexity is measured in terms of $n$, not $\log n$. Also, $\cos(n \arccos(a\/b))$ can be computed as follows: $\cos(n \arccos(a\/b))$ is the well-known Chebyshev polynomial $T_n(a\/b)$, which is explicitly given by the series \cite{Mason}:\n$$\sum_{m=0}^{\lfloor n\/2 \rfloor} {n \choose {2m}} \left ( a \over b \right )^{n-2m} \left( \left ( a \over b \right )^2 -1 \right)^m. $$\n\nSince each of the terms in the above series can be computed in time polynomial in $n$, and since there are $\lfloor {n \over 2} \rfloor + 1$ terms in the series, it is clear that $r_n$ can be computed in time polynomial in $n$. \n\qed \end{proof}\n\nIt is evident from Corollary \ref{cortomain} and closure of reversal-bounded $\DPDA$'s\nunder complement that counting-regular reversal-bounded $\DPDA$'s are closed under complement. For $\DPDA$'s generally,\nit is open whether counting-regular $\DPDA$'s are closed under complement.\nAt the end of the proof of Theorem \ref{main2}, we observed that the mapping that we used to map the strings from a language $L$ accepted by an unambiguous reversal-bounded $\NTM$ to a regular language increased the size of the alphabet from $k$ to $k+1$. 
If for every counting-regular $\\DPDA$, the size of the alphabet of the simulating $\\NFA$ is $k$, then it will follow that counting-regular $\\DCFL$'s are closed under complement.\n\nIn fact, we can show the following stronger claim.\n\n\\begin{theorem}\nThe following statements are equivalent:\n\\begin{enumerate}\n\\item For every counting-regular $\\DCFL$ over a $k$-letter alphabet, there is a regular language $L'$ over a $k$-letter alphabet such that $f_L(n)$ = $f_{L'}(n)$ for all $n$.\n\n\\item Counting-regular $\\DCFL$'s are closed under complement.\n\\end{enumerate}\n\\end{theorem}\n\\begin{proof} (1) $\\Rightarrow$ (2) is immediate from the above discussion.\n\nTo show that (2) $\\Rightarrow$ (1), let $L$ be a counting-regular $\\DCFL$ over a $k$-letter alphabet. This means $f_L(n)$ is a regular sequence. By (2), the complement of $L$ is counting-regular, so $k^n - f_L(n)$ is also a regular sequence. From Theorem \\ref{Beal}, it follows that there is a regular language $L'$ over a $k$-letter alphabet such that $f_L(n)$ = $f_{L'}(n)$ for all $n$. \n\\qed \\end{proof} \n\n\n\nOur conclusion is that the class of counting-regular languages (or counting-regular $\\CFL$s) are very fragile in that it is not closed under basic operations such as union or intersection with regular languages. We conjecture that the counting-regular $\\CFL$s are also not closed under Kleene star.\n\n\n\\section{Some Decision Problems Related to Counting-Regularity}\n\\label{sec:decidability}\n\nIn this section, some decision problems in regards to counting-regularity are addressed.\nIn particular, we will show the following:\n(1) It is undecidable,\ngiven a real-time $1$-reversal-bounded $2$-ambiguous $\\PDA$ $M$,\nwhether $L(M)$ (resp.\\ $\\overline{L(M)}$)\nis counting-regular;\n(2) It is undecidable, given a real-time $\\NCM(1,1)$\n $M$, whether $L(M)$ (resp.\\ $\\overline{L(M)}$)\nis counting-regular. 
\n\nWe begin with the following result:\n\n\begin{theorem}\n\label{undecide1}\nIt is undecidable, given two real-time $1$-reversal-bounded $\DPDA$'s $M_1$ and $M_2$ accepting strongly counting-regular languages, whether $L(M_1) \cap L(M_2)$ is counting-regular.\n\end{theorem}\n\begin{proof} \n\nLet $Z$ be a single-tape $\DTM$ working on an initially blank tape. We assume that if $Z$ halts on blank tape, it makes $2k$ steps for some $k \ge 2$.\nWe assume that $Z$ has a one-way infinite tape and does not write the blank symbol. Let\n$$L_1' = \{ \begin{array}[t]{l} ID_1 \# ID_3 \# \cdots \# ID_{2k-1} \$ ID_{2k}^R \# \cdots \# ID_4^R \# ID_2^R \mid k \ge 2, \mbox{~each~} ID_i\\\n\mbox{is a configuration of~} Z, ID_1 \mbox{~is the initial~} ID \mbox{~of~} Z \mbox{~on blank tape,} \\\nID_{2k} \mbox{~is a unique halting~} ID, ID_i \Rightarrow ID_{i+1} \mbox{~for~} i = 1, 3, \ldots, 2k-1 \},\end{array}$$\n$$L_2' = \begin{array}[t]{l} \{ ID_1 \# ID_3 \# \cdots \# ID_{2k-1} \$ ID_{2k}^R \# \cdots \# ID_4^R \# ID_2^R \mid k \geq 2, \mbox{~each~} ID_i\\\n\mbox{is a configuration of~} Z, ID_1 \mbox{~is the initial~} ID \mbox{~of~} Z \mbox{~on blank tape}, \\\nID_{2k} \mbox{~is a unique halting~} ID, ID_i \Rightarrow ID_{i+1} \mbox{~for~} i = 2, 4, \ldots, 2k-2 \},\end{array}$$\nwith $L_1',L_2' \subseteq \Sigma^*$.\nClearly, $L_1'$ and $L_2'$ can be accepted by real-time $1$-reversal-bounded $\DPDA$'s $M_1'$ and $M_2'$. Moreover, $L_1' \cap L_2'$ is empty or\na singleton (the latter if and only if $Z$ halts, which is undecidable).\nLet $a$, $b$, $1$ be three new symbols. 
Let\n\\begin{eqnarray*}\nL_1 &=& \\{ x w 1^n\\ |\\ x \\in (a+b)^+, \\ w \\in L(M_1'),\\ |x|_a = n \\},\\\\\nL_2 &=& \\{x w 1^n\\ |\\ x \\in (a+b)^+, \\ w \\in L(M_2'),\\ |x|_b = n \\}.\n\\end{eqnarray*}\nWe can construct a real-time $1$-reversal-bounded $\\DPDA$ $M_1$ accepting $L_1$\nas follows: $M_1$ reads $x$ and stores the number of $a$'s in the stack.\nThen it simulates the $1$-reversal-bounded $\\DPDA$ $M_1'$ on $w$, and\nfinally checks (by continuing to pop the stack) that the number\nof $a$'s stored in the stack is $n$. Similarly, we can construct a \nreal-time $1$-reversal-bounded $\\DPDA$ $M_2$ accepting $L_2$.\nBy Corollary \\ref{cortomain}, $L_1$ and $L_2$ are\nstrongly counting-regular.\n\nClearly, if $L(M_1') \\cap L(M_2')$ = $\\emptyset$,\nthen $L_1 \\cap L_2$ = $\\emptyset$,\nhence $L_1 \\cap L_2$ is counting-regular.\n\nOn the other hand, if $L(M_1') \\cap L(M_2')$ is not empty, the intersection\nis a singleton $w$ and thus $L_1 \\cap L_2$ is given by:\n$$L_1 \\cap L_2 = \\{ x w 1^n\\ | \\ x \\in (a+b)^+ ,\\ |x|_a = |x|_b = n\\}.$$\n\nAs in the proof of Theorem \\ref{counter-examples}(ii), we can show that\n$L_1 \\cap L_2$ is not counting-regular. 
In fact, if $|w|$ = $t$, then the \nnumber of strings of length $3n+t$ is ${{2n}\choose{n}}$ for each $n \ge 1$, from which we can explicitly construct the generating function $f(z)$ for $L_1 \cap L_2$ as:\n$$f(z) = \sum_{n \ge 1} {{2n}\choose{n}} z^{3n+t} = \left ( {1 \over {\sqrt{1-4z^3}}} - 1 \right ) z^t.$$\nClearly $f(z)$ is not rational (it has a branch point at $z = 4^{-1\/3}$) and the claim follows.\n\qed \end{proof}\n\nThis result is used within the next proof:\n\n\begin{theorem} \label{thm16}\nIt is undecidable, given a $\PDA$ $M$ that is real-time $2$-ambiguous and $1$-reversal-bounded,\nwhether $\overline{L(M)}$ is counting-regular.\nAlso, it is undecidable, given such a machine $M$,\nwhether $L(M)$ is counting-regular.\n\end{theorem}\n\begin{proof}\nConsider the languages $L_1$ and $L_2$ in the proof of Theorem \ref{undecide1}. Since $L_1$ and $L_2$\ncan be accepted by real-time $1$-reversal-bounded $\DPDA$'s,\n$\overline{L_1}$ and $\overline{L_2}$ \ncan also be accepted by real-time $1$-reversal-bounded $\DPDA$'s.\nHence $L = \overline{L_1} \cup \overline{L_2}$ \ncan be accepted by a real-time $1$-reversal-bounded $2$-ambiguous $\PDA$ $M$.\nIt follows that $\overline{L(M)}$ is\ncounting-regular if and only if\n$L_1 \cap L_2$ is counting-regular, which is undecidable\nfrom Theorem \ref{undecide1}.\n\nNow consider $L(M)$. If $Z$ does not halt on blank tape,\nthen $L(M) = \Sigma^*$, which is counting-regular.\nIf $Z$ halts on blank tape,\nthen we will show that $L(M)$ is not counting-regular as follows:\nthe generating function $g(z)$ of $L(M)$ is given by \n${1 \over {1-sz}} - f(z)$ where $f(z)$ is the generating function of $\overline{L(M)}$, and $s$ is the size of the alphabet over which $M$ is defined. 
If $L(M)$ is counting-regular, then $g(z)$ is rational, and then so is $f(z)$ = ${1 \over {1-sz}} - g(z)$, contradicting the fact that $f(z)$ is not rational.\nHence, $L(M)$ is counting-regular if and only if $Z$ does not halt.\nThis completes the proof. \n\qed \end{proof}\n\n\nNext, we will consider\n$\NCM(1,1)$ ($1$-reversal-bounded $1$-counter machines).\n\begin{theorem}\nIt is undecidable, given a real-time $\NCM$(1,1) $M$, whether $\overline{L(M)}$ is counting-regular. Also, it is undecidable, given such a machine $M$, whether $L(M)$ is counting-regular.\n\end{theorem}\n\begin{proof} \nAgain, we will use the undecidability of the halting problem for a $\DTM$ $Z$ \non an initially blank tape. As before, assume that $Z$ has a one-way infinite tape\nand does not write blank symbols. Represent each $ID$ (configuration) of $Z$ with \nblank symbols filled to its right, since for the languages we will define below, we\nrequire that all $ID_i$'s have the same length. So, e.g., $ID_1 = q_0 B \cdots B$ \n(where $B$ is the blank symbol). We also require that the halting $ID$ contain only non-blank symbols and be in state $f$, which is unique. Clearly, since \n$Z$ does not write any blank symbols, if $Z$ halts on the blank tape, the $ID$'s in\nthe halting sequence do not decrease in length. Assume that if $Z$ halts, \nit halts after $2k$ steps for some $k \ge 2$. 
\n\nLet\n$$L_1 = \\begin{array}[t]{l} \\{ ID_1 \\# ID_3 \\# \\cdots \\# ID_{2k-1} \\$ ID_{2k}^R \\# \\cdots \\# ID_4^R \\# ID_2^R \\mid k \\ge 2, \\mbox{~each~} ID_i\\\\\n\\mbox{is a configuration of~} Z, ID_1 \\mbox{~is initial~} ID \\mbox{~of~} Z \\mbox{~on blank tape,~} \nID_{2k} \\mbox{~is the}\\\\ \\mbox{halting~} ID, ID_i \\Rightarrow ID_{i+1} \\mbox{~for~} i = 1, 3, \\ldots, 2k-1, |ID_1| = \\cdots = |ID_{2k}|\\},\\end{array}$$\n$$L_2 = \\begin{array}[t]{l}\\{ ID_1 \\# ID_3 \\# \\cdots \\# ID_{2k-1} \\$ ID_{2k}^R \\# \\cdots \\# ID_4^R \\# ID_2^R \\mid k \\ge 2, \\mbox{~each~} ID_i \\mbox{~is} \\\\\n\\mbox{~a configuration of~} Z, ID_1 \\mbox{~is initial~} ID \\mbox{~of~} Z \\mbox{~on blank tape}, ID_{2k} \\mbox{~is the}\\\\\n\\mbox{halting~} ID, ID_i \\Rightarrow ID _{i+1} \\mbox{~for~} i = 2, 4, \\ldots, 2k-2, |ID_1| = \\cdots = |ID_{2k}|\\},\\end{array}$$\nwith $L_1,L_2 \\subseteq \\Sigma^*$.\nNote that $ID_{2k}$ must have all non-blanks in\nstate $f$.\n\nLet $a, b, 1$ be new symbols. We can construct a real-time\n$\\NCM$(1,1) $M_1$ which operates as follows. When given input string $z$,\n$M_1$ nondeterministically selects one of the following tasks to execute:\n\n\\begin{enumerate}\n\\item $M_1$ checks and accepts if $z$ is not a string of the form $x w 1^n \\in (a+b)^+ \\Sigma^+ 1^+$.\n(This does not require the use of the counter.)\n\n\\item $M_1$ checks and accepts if $z$ is of the form $x w 1^n \\in (a+b)^+ \\Sigma^+ 1^+$\nbut $|x|_a \\ne n$. (This requires only one counter reversal.)\n\n\\item $M_1$ checks that $z$ is of the form $x w 1^n \\in (a+b)^+ \\Sigma^+ 1^+$, but $w$ \nis not a string of the form in $L_1$. $M_1$ does not check the lengths of the\n$ID$'s and whether $ID_{i+1}$ is a successor of $ID_i$. 
(This does not require\na counter.)\n\n\\item $M_1$ checks that $z$ is of the form $x w 1^n \\in (a+b)^+ \\Sigma^+ 1^+$\nand $$w = ID_1 \\# ID_3 \\# \\ \\cdots \n\\# ID_{2k-1} \\$ ID_{2k}^R \\# \\ \\cdots \\ \\# ID_4^R \\# ID_2^R,$$ for\nsome $k \\ge 2$ but $|ID_i| \\ne |ID_j|$ for some $i \\ne j$, or $ID_1$ is not the initial $ID$,\nor $ID_{2k}$ is not the halting $ID$. (This requires one counter reversal.)\n\n\\item $M_1$ assumes that $z$ is of the form $x w 1^n \\in (a+b)^+ \\Sigma^+ 1^+$\nand $$w = ID_1 \\# ID_3 \\# \\cdots \n\\# ID_{2k-1} \\$ ID_{2k}^R \\# \\cdots \\# ID_4^R \\# ID_2^R,$$ for\nsome $k \\ge 2$ , $|ID_1| = \\cdots = |ID_{2k}|$, $ID_1$ is the initial $ID$, $ID_{2k}$\nis the halting $ID$, and accepts if $ID_{i+1}$ is not the successor of $ID_i$ for\nsome $i$ = $1, 3, \\ldots , 2k-1$. Since all the $ID$'s are assumed to have the same length,\n$M_1$ needs to only use a counter that reverses once to check one of the conditions.\n\\end{enumerate}\nSimilarly, we can construct a real-time $\\NCM$(1,1) $M_2$ as above using $L_2$.\nLet $M$ be a real-time $\\NCM(1,1)$ accepting $L(M_1) \\cup L(M_2)$ and consider $\\overline{L(M)}$.\n\nIf $Z$ does not halt on blank tape,\nthen $\\overline{L(M)} = \\overline{L(M_1) \\cup L(M_2)} = \\overline{L(M_1)}\n\\cap \\overline{L(M_2)} = \\emptyset$, which is counting-regular.\n\nIf $Z$ halts on blank tape, then\n$\\overline{L(M)}$ = $\\{ x w 1^n\\ |\\ x \\in (a+b)^+ ,\\ |x|_a = |x|_b = n\\}$ for some $w$\nwhich is not counting-regular (as shown in Theorem \\ref{undecide1}). \n\nNow consider $L(M)$.\nIf $Z$ does not halt on blank tape, then $L(M)$ = $\\Sigma^*$, hence is counting-regular. If $Z$ halts on blank tape, then\n$L(M)$ = $L(M_1) \\cup L(M_2)$, which we will show to be not counting-regular as follows: the generating function $g(z)$ of $L(M)$ is given by \n${1 \\over {1-sz}} - f(z)$ where $f(z)$ is the generating function of $\\overline{L(M)}$, and $s$ is the size of the alphabet over which $M$ is defined. 
If $L(M)$ is counting-regular, then $g(z)$ is rational, and then so is $f(z)$ = ${1 \over {1-sz}} - g(z)$, contradicting the fact that $f(z)$ is not rational. \n\qed \end{proof}\n\n\n\noindent\nNote that the machine $M$ constructed in the proof above\nis 1-reversal-bounded but not finitely-ambiguous.\nNext, the problem is shown to be undecidable for $2$-ambiguous machines, but without the reversal-bound.\n\n\begin{theorem} \label{thmXX}\nIt is undecidable, given a one-way 2-ambiguous nondeterministic one counter machine \n$M$,\nwhether $\overline{L(M)}$ is counting-regular.\nAlso, it is undecidable, given such a machine \n$M$,\nwhether $L(M)$ is counting-regular.\n\end{theorem}\n\begin{proof}\nIt is known that it is undecidable, given two deterministic\none counter machines (with no restriction on counter reversals) $M_1$ and\n$M_2$, whether $L(M_1) \cap L(M_2) = \emptyset$ (shown implicitly in \cite{undecidablehartmanis}). Moreover, if\nthe intersection is not empty, it is a singleton.\nLet $\Sigma$ be the input alphabet of $M_1$ and $M_2$. Let $a, b, 1$ be three new symbols. Let\n\begin{eqnarray*}\nL_1 &=& \{ w x 1^n\ |\ w \in L(M_1), \ x \in (a+b)^+, \ |x|_a = n \},\\\nL_2 &=& \{ w x 1^n\ |\ w \in L(M_2), \ x \in (a+b)^+, \ |x|_b = n \}.\n\end{eqnarray*}\nClearly, we can construct deterministic one counter machines\naccepting $L_1$ and $L_2$. Hence, \n$\overline{L_1}$ and $\overline{L_2}$ can be accepted by \ndeterministic one counter machines. It follows that \n$\overline{L_1} \cup \overline{L_2}$\ncan be accepted by a $2$-ambiguous nondeterministic one counter machine $M$. 
Then, as in the\nproof of Theorem \ref{thm16}, $\overline{L(M)}$ (resp., $L(M)$)\nis counting-regular if and only if $L(M_1) \cap L(M_2) = \emptyset$,\nwhich is undecidable.\n\qed\n\end{proof}\n\n\nIn view of the above theorems, it is an interesting\nopen question whether the undecidability holds for reversal-bounded\nfinitely-ambiguous $\NCM$ machines.\n\n\n\section{Slender Semilinear and Length-Semilinear Languages and Decidability Problems}\n\label{sec:slender1}\n\nA topic closely related to counting functions of formal languages\nis that of slenderness. \nDecidability and closure properties of slender context-free languages ($\CFL$s) have\nbeen investigated in \cite{Ilie,matrix,Salomaa,Ilie2,Honkala1998}.\nFor example, \cite{Ilie}\nshows that it is decidable whether a $\CFL$ is slender, and\nin \cite{matrix}, it is shown\nthat for a given $k \geq 1$, it is decidable whether a language generated by\na matrix grammar is $k$-slender (although here, the $k$ needs to be provided as input,\nin contrast to the $\CFL$ result).\n\n\nIn this section, we generalize these results to arbitrary language families that\nsatisfy certain closure properties. These generalizations then\nimply the known results for context-free languages and matrix languages,\nand other families where the problem was open.\n\n\nFirst, we discuss the bounded language case. \nThe constructive result of Theorem \ref{cor9} above, plus\nthe decidability of slenderness for regular languages, implies the following:\n\begin{corollary} \label{cor13}\nLet ${\cal L}$ be a semilinear trio (with all properties effective). \nThen, it is decidable, given $L$ bounded in ${\cal L}$ and words \n$u_1, \ldots, u_k$ such that $L \subseteq u_1^* \cdots u_k^*$,\nwhether $L$ is slender.\n\end{corollary}\nIn \cite{Honkala1998}, a similar result was shown: \nit is decidable whether or not a given bounded semilinear language $L$\nis slender. 
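To make the counting functions behind these slenderness notions concrete, the following Python sketch (a toy illustration with example languages chosen here for exposition, not taken from the cited constructions) computes $f_L(n)$ by brute force and contrasts a thin bounded language with a bounded language that is not slender:

```python
from itertools import product

def f(in_lang, sigma, n):
    """Brute-force counting function: number of words of length n over sigma in the language."""
    return sum(1 for w in product(sigma, repeat=n) if in_lang("".join(w)))

# L1 = { a^i b^i | i >= 1 } is thin (1-slender): at most one word per length.
def in_L1(w):
    i = len(w) // 2
    return len(w) >= 2 and w == "a" * i + "b" * i

# L2 = { a^i b^j | 1 <= i <= j } is bounded (a subset of a^* b^*) but not
# slender: f_L2(n) = floor(n/2), which is unbounded.
def in_L2(w):
    i = len(w) - len(w.lstrip("a"))  # length of the leading block of a's
    j = len(w) - i
    return w == "a" * i + "b" * j and 1 <= i <= j

print([f(in_L1, "ab", n) for n in range(2, 9)])  # -> [1, 0, 1, 0, 1, 0, 1]
print([f(in_L2, "ab", n) for n in range(2, 9)])  # -> [1, 1, 2, 2, 3, 3, 4]
```

No fixed $k$ bounds the second sequence, so the second language is not $k$-slender for any $k$; for bounded languages in a semilinear trio, Corollary \ref{cor13} states that this distinction is decidable.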
\n\nIn our definition of a semilinear family of languages ${\cal L}$, we only\nrequire that every language in ${\cal L}$ has a semilinear Parikh map.\nHowever, it is known that in every semilinear trio ${\cal L}$, all\nbounded languages are bounded semilinear \cite{CIAA2016}, and therefore\nthe result of \cite{Honkala1998} also implies Corollary \ref{cor13}. Conversely, all\nbounded semilinear languages are in the semilinear trio $\NCM$\n\cite{ibarra1978,CIAA2016}; hence, given any bounded semilinear \nlanguage (in any semilinear family,\nso long as we can construct the semilinear set), slenderness is decidable.\nThis method therefore also provides an alternative proof of the result\nin \cite{Honkala1998}.\n\n\nNext, we will examine the case where $L$ is not necessarily bounded.\nOne recent result is quite helpful in studying $k$-slender languages. In \cite{fullaflcounters}, the following was shown.\n\begin{theorem}\cite{fullaflcounters}\nLet ${\cal L}$ be any semilinear full trio where the semilinearity and intersection with regular languages properties are effective. Then the smallest full\nAFL containing intersections of languages in ${\cal L}$ with $\NCM$,\ndenoted by $\hat{{\cal F}}({\cal L} \wedge \NCM)$, is effectively semilinear.\nHence, the emptiness problem for \n$\hat{{\cal F}}({\cal L} \wedge \NCM)$ is decidable.\n\label{fullAFL}\n\end{theorem}\nWe make frequent use of this throughout the proofs of the next two sections.\n\nFirst, decidability of $k$-slenderness is addressed.\n\begin{theorem}\n\label{kslender}\nLet ${\cal L}$ be a full trio which is either:\n\begin{itemize}\n\item semilinear, or\n\item length-semilinear, closed under concatenation, and closed under intersection with $\NCM$,\n\end{itemize}\nwith all properties effective.\nIt is decidable, given $k$ and $L \in {\cal L}$, whether $L$ is a $k$-slender language.\n\end{theorem}\n\begin{proof} Let $L \subseteq \Sigma^*$, and let $\#$ be a new symbol. 
\nFirst construct an $\\NCM$ $M_1$\nwhich when given $x_1 \\# x_2\\# \\ldots \\# x_{k+1}$, $x_i \\in \\Sigma^*, 1 \\leq i \\leq k+1$,\naccepts if $|x_1| = \\cdots = |x_{k+1}|$, and $x_i\\neq x_j$ are different, for\nall $i \\neq j$. To do this, $M_1$ uses (many) counters to verify the lengths,\nand it uses counters to guess positions and verify \nthe discrepancies between $x_i$ and $x_j$, for each $i \\ne j$.\nLet ${\\cal L}' = \\hat{{\\cal F}}({\\cal L} \\wedge \\NCM)$ (or just ${\\cal L}$ in the second case), which is semilinear\nby Theorem \\ref{fullAFL}, (or length-semilinear in the second case, by assumption).\nConstruct $L_2 \\in {\\cal L}'$ \nwhich consists of all words of the form \n$x_1 \\# x_2\\# \\ldots \\# x_{k+1}$, where $x_i \\in L$ (every full AFL is closed\nunder concatenation). \nThen $L_3 = L(M_1) \\cap L_2 \\in {\\cal L}'$.\nClearly, $L$ is not $k$-slender if and only if $L_3$ is not empty, \nwhich is decidable by Theorem \\ref{fullAFL}.\n\\qed \\end{proof}\n\n\nThere are many known semilinear full trios listed in Example \\ref{semilinearfulltrioexamples}. Plus,\nit is known that languages generated by matrix grammars form a length-semilinear (but not semilinear, in general) full\ntrio closed under concatenation and intersection with $\\NCM$ (\\cite{matrix},\nwhere is it is shown that the languages are closed under intersection with\nBLIND multicounter languages, known to be equivalent to $\\NCM$ \\cite{G78}). Therefore, the\nresult is implied for matrix grammars as well, although this is already known.\n\\begin{corollary}\n\\label{allcorollaries}\nLet ${\\cal L}$ be any of the families listed in Example \\ref{semilinearfulltrioexamples}. 
\nThen, the problem, ``for $k \geq 1$ and $L \in {\cal L}$, is $L$ a $k$-slender language?''\nis decidable.\n\end{corollary}\n\n\nAll the machine models used in Example \ref{semilinearfulltrioexamples} have one-way inputs; with a two-way input, however,\nthe problem is more complicated.\nNow let $2\DCM(k)$ (resp., $2\NCM(k)$) be a two-way $\DFA$ (resp., two-way\n$\NFA$) with end-markers on both sides of the input, augmented with $k$ reversal-bounded counters.\n\begin{theorem} It is decidable, given $k$ and a $2\DCM(1)$ $M$, whether $M$ accepts \na $k$-slender language.\n\label{deckslender}\n\end{theorem}\n\begin{proof} We may assume that $M$ always halts \cite{IbarraJiang}. Given $M$,\nconstruct another $2\DCM(1)$ $M'$ with a $(k+1)$-track tape. First, for each\n$1 \le i < k+1$, $M'$ checks that the string in track $i$ is different from\nthe strings in tracks $i+1, \ldots, k+1$. Thus, $M'$ needs to make multiple\nsweeps of the $(k+1)$-track input.\n\nThen $M'$ checks that the string in each track is accepted. Clearly, $L(M)$ is \nnot $k$-slender if and only if $L(M')$ is not empty. The result follows, since\nemptiness is decidable for $2\DCM(1)$ \cite{IbarraJiang}.\n\qed \end{proof}\n\nThe above result does not generalize to $2\DCM(2)$:\n\n\begin{theorem} The following are true:\n\begin{enumerate} \n\item It is undecidable, given $k$ and a $2\DCM(2)$ $M$, whether $M$ accepts a\n$k$-slender language, even when $M$ accepts a letter-bounded language\nthat is a subset of $a_1^* \cdots a_r^*$ for given $a_1, \ldots, a_r$.\n\item It is undecidable, given $k$ and a $2\DCM(2)$ $M$, whether $M$ accepts a slender\nlanguage, even when $M$ accepts a letter-bounded language.\n\end{enumerate}\n\end{theorem}\n\begin{proof} It is known \cite{ibarra1978} that it is undecidable, given a $2\DCM(2)$\n$M$ accepting a language that is in $b_1^* \cdots b_r^*$ for given $b_1, \ldots, b_r$, \nwhether $L(M) = \emptyset$. 
Let $c$ and $d$ be new symbols.\nConstruct another $2\DCM(2)$ $M'$ which, when given\na string $w = b_1^{i_1} \cdots b_r^{i_r} c^i d^j$, simulates $M$ on \n$b_1^{i_1} \cdots b_r^{i_r}$, and when $M$ accepts, $M'$ accepts $w$.\nThen $L(M')$ is not $k$-slender for any given $k$ (resp., not slender) if and only if $L(M)$\nis not empty, which is undecidable.\n\qed \end{proof}\n\n\nWhether or not Theorem \ref{deckslender} holds for $2\NCM(1)$ $M$ is open.\nHowever, we can prove a weaker version using the fact that it is\ndecidable, given a $2\NCM(1)$ $M$ accepting a bounded language\nover $w_1^* \cdots w_r^*$ for given $w_1, \ldots, w_r$, whether\n$L(M) = \emptyset$ \cite{DangIbarra}.\n\n\begin{theorem} It is decidable, given $k$ and a $2\NCM(1)$ $M$ that accepts a \nlanguage over $w_1^* \cdots w_r^*$ for given $w_1, \ldots, w_r$, whether $M$ accepts\na $k$-slender language.\n\end{theorem}\n\begin{proof}\nWe construct from $M$ another $2\NCM(1)$ $M'$ which, when given \n$x_1 \# x_2\# \cdots \# x_{k+1}$, where each\n$x_i$ is in $w_1^* \cdots w_r^*$, first checks that all $x_i$'s \nare different. For each $i$, and $j = i+1, \ldots, k+1$, $M'$ guesses the position\nof a discrepancy between $x_i$ and $x_j$ and records this position in the counter\nso that it can check the discrepancy. (Note that only one reversal-bounded\ncounter is needed for this.) Then $M'$ checks that each $x_i$ is accepted. \nClearly, $L(M)$ is not $k$-slender if and only if $L(M')$ is not empty, which is \ndecidable, since the language accepted by $M'$ is bounded.\n\qed \end{proof}\n\nWe can also prove some closure properties. 
Here is an example:\n\n\begin{theorem}\nThe following are true:\n\begin{enumerate} \n\item Slender (resp., thin) $\NCM$ languages are closed under intersection.\n\item Slender (resp., thin) $2\DCM(1)$ languages (resp., $2\NCM(1)$ languages) are \nclosed under intersection.\n\end{enumerate}\n\end{theorem}\n\begin{proof}\nStraightforward, since the families of languages above are\nclosed under intersection.\n\qed \end{proof}\n\nDeciding if an $\NCM$ language (or anything more general than $\CFL$s) is $k$-slender for some $k$ (where $k$ is not part of the input) is open,\nalthough as we showed in Theorem \ref{kslender}, for a given $k$, we can \ndecide if an $\NCM$ or $\NPCM$ accepts a $k$-slender language. We conjecture\nthat every $k$-slender $\NPCM$ language is bounded. If \nthis can be proven, we will also need an algorithm to determine words $w_1, \ldots,\nw_r$ such that the language is a subset of $w_1^* \cdots w_r^*$, which \nwe also do not yet know how to do. \n\n\n\nLet $c \ge 1$. A $2\DCM(k)$ (resp., $2\NCM(k)$) $M$ is $c$-crossing if the number of times\nthe input head crosses the boundary of any two adjacent cells of the input is\nat most $c$. Then $M$ is finite-crossing if it is $c$-crossing for some $c$. It is known\nthat a finite-crossing $2\NCM(k)$ can be converted to an $\NCM(k')$ for some\n$k'$ \cite{Gurari1981220}. Hence every bounded language accepted by any\nfinite-crossing $2\NCM(k)$ is counting-regular. The next result shows that this is\nnot true if the two-way input is unrestricted:\n\begin{theorem} The letter-bounded language $L = \{a^i b^{ij} ~|~ i, j \ge 1\}$\nis accepted by a $2\DCM(1)$ whose counter makes only $1$ reversal, but $L$ is not\ncounting-regular.\n\end{theorem}\n\begin{proof}\nA $2\DCM(1)$ $M$ accepting $L$ operates as follows, given input $a^ib^k$ ($i, k \ge 1$):\n$M$ reads and stores $k$ in the counter. 
Then it makes multiple sweeps on $a^i$\nwhile decrementing the counter to check that $k$ is divisible by $i$.\n\nTo see that $L$ is not counting-regular, we note \nthat, for any $n \geq 2$, the number of strings of length $n$ in $L$ is equal to the number of \ndivisors of $n$ that are strictly smaller than $n$ (each string $a^i b^{ij}$ has length $i(j+1)$). In fact, let these divisors of $n$ be\n$d_1, d_2, \ldots, d_m$. Then, there are exactly $m$ strings of length $n$, namely: $a^{d_1} b^{(n\/d_1 - 1)d_1}, a^{d_2} b^{(n\/d_2 - 1)d_2}, \ldots , \\\na^{d_m} b^{(n\/d_m - 1)d_m}$. Conversely, for each string $w$ of length $n$ in $L$, there is a unique such divisor of $n$ (namely the number of $a$'s in $w$) associated with the string. Writing $d(n)$ for the number of divisors of $n$, this means the generating function of $L$ is $f(z)$ = $\sum_{n \geq 2} (d(n) - 1) z^n$. \n\nUsing the Lambert series identity $\sum_{n \geq 1} {z^n \over {1-z^n}}$ = $\sum_{n \geq 1} d(n) z^n$, it can be shown that $f(z)$ = $\sum_{n \geq 2} {z^n \over {1-z^n}}$.\nFrom this expression, it is clear that every solution to $z^n=1$ is a singularity of $f(z)$, and so each $n$'th root of unity is a singularity of $f(z)$\nfor each positive integer $n \geq 2$. Since a rational function can have only a finite number of poles, it follows that $f(z)$ is not\nrational and hence $L$ is not counting-regular.\n\qed \end{proof}\nHowever, it is known that unary languages accepted by $2\NCM(k)$s are\nregular \cite{IbarraJiang}; hence, such languages are counting-regular.\n\n\nThis is interesting, as $2\DCM(1)$ has a decidable emptiness and slenderness problem, yet\nthere is a letter-bounded language from the theorem above accepted by a $2\DCM(1)$\nthat is not counting-regular.\n\nThe following result provides an interesting contrast, as it\ninvolves a model with an undecidable membership\n(and emptiness) problem, but provides an example of a (non-recursively enumerable) \nlanguage that is slender, thin,\nbounded, and semilinear, but also counting-regular. 
\n\n\\begin{theorem}\nThere exists a language $L$ that is letter-bounded and semilinear and thin and counting regular but is not recursively enumerable. Moreover, we can effectively construct a \n$\\DFA$ $M$ such that\n$f_{L(M)}(n) = f_L(n)$. \n\n\\end{theorem}\n\\begin{proof}\nLet $L \\subseteq a^*$ be a unary language that is not recursively enumerable, which is known to exist \\cite{Minsky}. Assume without loss of generality that the empty word is not in $L$, \nbut the letter $a$ itself is in $L$.\n\nLet $L'$ be the language consisting of, for each $n \\geq 1$, the single word\nwith all $a$'s except for one $b$ in the position of the largest $m \\leq n$ such that\n$a^m \\in L$.\n\n\nThen $L'$ has one word of every length, and is therefore thin. Also, it has one word of every length and exactly one $b$ in every word, so it has the same Parikh map as $a^* b$, so is semilinear. Also, it is clearly bounded in $a^*b a^*$. It is also not recursively enumerable, otherwise if it were, then make a gsm \\cite{Hopcroft} that outputs $a$'s for every $a$ until a $b$, then it outputs $a$. Then, for every remaining $a$, it outputs the empty word. Applying this to $L'$ gives $L$ (the position of the $b$ lets the gsm recover the words of $L$). But the recursively enumerable languages are closed under gsm mappings, a contradiction.\n\\qed \\end{proof}\n\n\n\\section{Characterization of $k$-Slender Semilinear and Length-Semilinear Languages}\n\\label{sec:slender2}\n\n\nThis section discusses decidability properties (such as the problem of testing whether two\nlanguages are equal, or one language is contained in another) for $k$-slender languages in\narbitrary families of languages satisfying certain closure properties. \nIt is known that the equivalence problem for $\\NPCM$ languages \nthat are subsets of $w_1^* \\cdots w_r^*$ for given $w_1, \\ldots , w_r$ \nis decidable. 
However, as mentioned above, we do not yet know \nif $k$-slender languages are bounded and, even if they are, we do not\nyet know how to determine the associated words $w_1, \ldots, w_r$.\nHence, the definition and results below are of interest.\n\nThe following notion is useful for studying decidability properties\nof slender languages.\nLet $k \ge 1$ be given. A language $L$ is $k$-slender effective if we \ncan effectively construct a $\DFA$ over a unary alphabet $\{1\}$ with $k+1$\ndistinguished states $s_0, s_1, \ldots, s_k$ (other states can exist) \nwhich, when given an input\n$1^n$ where $n \ge 0$, halts in state $s_i$ if $f_L(n) = i$, where \n$0 \le i \le k$. (Hence, the $\DFA$ can determine the number of strings \nin $L$ of length $n$, for every $n$.)\n\nFor example, consider the language\n $$L = \{a^i b^i \mid i \ge 1\} \cup \{c^i a^i d^i \mid i \ge 1\}.$$\nThen,\n$$f_L(n) = \n\begin{cases} 0 & \mbox{if $n \le 1$, or $n$ is divisible by neither $2$ nor $3$,}\\\n 1 & \mbox{if $n \ge 2$ is divisible by $2$ but not by $3$,}\\\n 1 & \mbox{if $n \ge 3$ is divisible by $3$ but not by $2$,}\\\n 2 & \mbox{if $n \ge 6$ is divisible by both $2$ and $3$}.\n\end{cases}$$\nClearly, $L$ is $2$-slender effective.\n\n\nNext, we will discuss which $k$-slender languages are $k$-slender effective.\nFirst, we say that the length-semilinear property is effective if it is possible\nto effectively construct, for $L \in {\cal L}$, a $\DFA$ accepting $\{1^n \mid f_L(n) \geq 1 \}$.\n\begin{theorem}\n\label{finiteunion} Let ${\cal L}$ be an effective length-semilinear trio.\nA finite union of thin languages in ${\cal L}$ is $k$-slender effective.\n\end{theorem}\n\begin{proof}\nLet $L$ be the finite union of $L_1, \ldots, L_k \in {\cal L}$, each thin. Then construct\n$h(L_i)$, where $h$ maps $L_i$ onto the single letter $1$. Then $h(L_i) = \{1^n \mid f_{L_i}(n) = 1\}$. 
Then since ${\\cal L}$ is length-semilinear, each $h(L_i)$ is regular, and can be accepted by a $\\DFA$ $M_i$. Thus, make another $\\DFA$ $M'$ such that, on input $1^n$, $M'$ runs each $M_i$ in parallel on $1^n$, and then halts in distinguished state $s_j$ if exactly $j$ of the $\\DFA$s $M_1, \\ldots, M_k$ are accepting.\n\\qed \\end{proof}\n\n\nIt will be seen next that all $k$-slender languages in ``well-behaved''\nfamilies are $k$-slender effective. First, the following lemma is needed.\n\\begin{lemma} \n\\label{makeeffective}\nLet ${\\cal L}$ be a full trio which is either:\n\\begin{itemize}\n\\item semilinear, or\n\\item length-semilinear, closed under concatenation, and closed under intersection with $\\NCM$,\n\\end{itemize}\nwith all properties effective.\nLet $k \\ge 1$ and let $L \\in {\\cal L}$ be a $k$-slender\nlanguage such that $f_{L}(n)$ is equal to $0$ or $k$ for every $n$. \nThere\nis a $\\DFA$ that can determine $f_L(n)$. Hence $L$ is a $k$-slender\neffective language.\n\\end{lemma}\n\\begin{proof}\nLet ${\\cal L}' = \\hat{{\\cal F}}({\\cal L} \\wedge \\NCM)$ (or just ${\\cal L}$ in the second case), which is semilinear\nby Theorem \\ref{fullAFL} (or length-semilinear by assumption).\n \nConsider\n$L' = \\{ x_1 \\# \\cdots \\# x_k \\mid x_1, \\ldots, x_k \\in L\\} \\in {\\cal L}'$.\nCreate $L''$ by intersecting $L'$ with an $\\NCM$ language that enforces that all words\nof the form \n$ x_1 \\# \\cdots \\# x_k $ have\n$|x_1| = \\cdots = |x_k|$, and $x_i \\ne x_j$ for each $i \\ne j$.\nThus $L'' = \\{ x_1 \\# \\cdots \\# x_k \\mid x_1, \\ldots, x_k \\in L,\n|x_1| = \\cdots = |x_k|, x_i \\ne x_j \\mbox{~for each~} i \\ne j\\}$.\nHence, $L''\\in {\\cal L}'$. Let $L'''$ be the language obtained from $L''$ by\nthe homomorphism that projects every letter onto the single letter $1$. \nSince $L'''$ is length-semilinear, it can be accepted by a $\\DFA$. 
Moreover, the length $n$ of a word\n$x_1\\# \\cdots \\# x_k \\in L''$ can be transformed into $|x_1|$ via $\\frac{n-(k-1)}{k}$. Given this $\\DFA$,\nanother \n$\\DFA$ can be built that determines $f_L(n)$.\n\\qed \\end{proof}\n\nThen, the following is true.\n\\begin{theorem}\nLet ${\\cal L}$ be a full trio which is either:\n\\begin{itemize}\n\\item semilinear, or\n\\item length-semilinear, closed under concatenation, union, and intersection with $\\NCM$,\n\\end{itemize}\nwith all properties effective.\nIf $L \\in {\\cal L}$ is a $k$-slender language, then\n$L$ is $k$-slender effective.\n\\label{thm22}\n\\end{theorem}\n\\begin{proof} The case $k = 1$ is true by Theorem \\ref{finiteunion}. \n Assume by induction that\nthe theorem is true for $k \\geq 1$.\n\nNow consider an $L \\in {\\cal L}$ that is a $(k+1)$-slender language,\n$k \\ge 1$. Hence $f_{L}(n) \\leq k+1$ for each $n \\geq 0$. \n\nLet ${\\cal L}' = \\hat{{\\cal F}}({\\cal L} \\wedge \\NCM)$ (or just ${\\cal L}$ in the second case), which is semilinear\nby Theorem \\ref{fullAFL} (or length-semilinear by assumption).\nLet $A = \\{x_1 \\# \\cdots \\# x_{k+1} \\mid x_1, \\ldots,x_{k+1} \\in L\\} \\in {\\cal L}'.$\nThen intersect\n$A$ with an $\\NCM$ that enforces that all words\n$x_1 \\# \\cdots \\# x_{k+1}$ have\n$|x_1| = \\cdots = |x_{k+1} |$, and $x_i \\ne x_j$ for each $i \\ne j$.\nLet $A'$ be the resulting language.\nThen\n$A' = \\{x_1 \\# \\cdots \\# x_{k+1} \\mid x_1, \\ldots,x_{k+1}\\in L \\mbox{~such that~} \n |x_1| = \\cdots = |x_{k+1} |, x_i \\ne x_j \\mbox{~for each~} i \\ne j\\}.$\nBy Lemma \\ref{makeeffective}, a $\\DFA$ accepting $\\{1^n \\mid f_L(n) = k+1\\}$\ncan be effectively constructed. Thus, a $\\DFA$ accepting\n$\\{1^n \\mid f_L(n) \\neq k+1\\}$ can also be constructed. 
Furthermore,\n$B = \\{w \\mid w\\in L, f_L(|w|) \\neq k+1\\} \\in {\\cal L}$, since full trios are closed under intersection with regular languages.\nThen $B$ is $k$-slender and hence $k$-slender effective by the induction hypothesis.\nHence, combining the $\\DFA$s for $B$ and for $\\{1^n \\mid f_L(n) = k+1\\}$, $L$ is $(k+1)$-slender effective.\n\\qed \\end{proof}\n\n\n\n\n\nThe proof of Theorem \\ref{thm22} actually shows the following:\n\\begin{corollary} \nLet ${\\cal L}$ be a full trio closed under concatenation, union, and intersection with $\\NCM$, and length-semilinear,\nwith all properties effective.\nLet $k \\ge 1$. A language $L \\in {\\cal L}$ is a $k$-slender language \nif and only if $L = L_1 \\cup \\cdots \\cup L_k$, where for $1 \\le i \\le k$, $L_i$ is\nan $i$-slender effective language such that $f_{L_i}(n)$ is equal to $0$ or $i$ for each $n$.\n\\end{corollary}\n\n\nNext, decidability of containment is addressed.\n\\begin{theorem} \nLet ${\\cal L}$ be a full trio which is either:\n\\begin{itemize}\n\\item semilinear, or\n\\item length-semilinear, closed under concatenation, and closed under intersection with $\\NCM$,\n\\end{itemize}\nwith all properties effective.\nIt is decidable, given $L_1, L_2 \\in {\\cal L}$ with $L_2$\nbeing a $k$-slender language, whether \n$L_1 \\subseteq L_2$.\n\\label{containment}\n\\end{theorem}\n\\begin{proof} By Theorem \\ref{thm22}, $L_2$ is $k$-slender effective. Without loss of generality, assume that the input alphabet \nof both $L_1$ and $L_2$ is $\\Sigma$. Let $1, \\#$, and $\\$$ be new symbols. \nLet ${\\cal L}' = \\hat{{\\cal F}}({\\cal L} \\wedge \\NCM)$ (or just ${\\cal L}$ in the second case), which is semilinear\nby Theorem \\ref{fullAFL} (or length-semilinear by assumption).\nWe will construct a sequence of machines and languages below.\n\\begin{enumerate}\n\\item First, let $M_1'$ (resp.\\ $M_2'$) be the unary $\\DFA$ accepting all words $1^n$ where\na word of length $n$ is in $L_1$ (resp.\\ in $L_2$). Let $A_1 = L(M_1') - L(M_2')$. (This is\nempty if and only if all lengths of words in $L_1$ are lengths\nof words in $L_2$.) 
This language is regular.\n\n\\item Construct $A_2 \\in {\\cal L}'$ consisting of all words\n$w = 1^n \\$ x \\$ y_1\\# \\cdots \\# y_r\\$$, where \n$x \\in L_1$ and each $y_j \\in L_2$.\n\n\\item Construct an $\\NCM$ $A_3$ which, when given \n$w = 1^n\\$x\\$y_1\\# \\cdots \\# y_r\\$$, accepts $w$ if the following is true:\n\\begin{enumerate}\n\\item $r = f_{L_2}(n)$ (which can be tested since $L_2$ is $k$-slender effective). \n\\item $|x| = |y_1| = \\cdots = |y_r| = n$.\n\\item $y_i \\ne y_j$ for each $i \\ne j$.\n\\item $x \\ne y_i$ for each $i$.\n\\end{enumerate}\nNote that $A_3$ needs multiple reversal-bounded counters to carry out the four\ntasks in parallel.\n\\item Construct $A_4 = A_2 \\cap A_3 \\in {\\cal L}'$.\n\\item Finally, construct $A_5 = A_4 \\cup A_1 \\in {\\cal L}'$ (full trios are closed\nunder union with regular languages \\cite{G75}).\n\\end{enumerate}\n\nIt is easy to verify that $L_1 \\not\\subseteq L_2$ if and only \nif $A_5$ is nonempty, which can be checked since emptiness is decidable for ${\\cal L}'$.\n\\qed \\end{proof}\n\\begin{corollary}\nLet ${\\cal L}$ be a full trio which is either:\n\\begin{itemize}\n\\item semilinear, or\n\\item length-semilinear, closed under concatenation, and closed under intersection with $\\NCM$,\n\\end{itemize}\nwith all properties effective.\nIt is decidable, given $L_1,L_2 \\in {\\cal L}$ that are \n$k$-slender languages, whether $L_1 = L_2$.\n\\end{corollary}\n\n\nThere are many semilinear full trios in the literature for which the properties in this section hold. 
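Before moving on, the notion of $k$-slender effectiveness can be made concrete on the running example $L = \{a^i b^i \mid i \ge 1\} \cup \{c^i a^i d^i \mid i \ge 1\}$: the counting $\DFA$ reduces to the divisibility test given earlier. A minimal Python sketch (illustrative only; the brute-force enumeration stands in for the unary $\DFA$, which would simply cycle with period $\mathrm{lcm}(2,3)=6$):

```python
# Illustration: f_L(n) for L = {a^i b^i | i >= 1} ∪ {c^i a^i d^i | i >= 1}.
# Since f_L(n) depends only on n mod 6 (for n >= 6), L is 2-slender effective.

def f_L(n: int) -> int:
    """Number of words of length n in L, from the two length sets {2i} and {3i}."""
    count = 0
    if n >= 2 and n % 2 == 0:   # a^i b^i contributes one word of each even length
        count += 1
    if n >= 3 and n % 3 == 0:   # c^i a^i d^i contributes one word of each length 3i
        count += 1
    return count

# Cross-check against brute-force enumeration of L up to length N
N = 30
lengths = {}
for i in range(1, N // 2 + 1):
    w = "a" * i + "b" * i
    lengths[len(w)] = lengths.get(len(w), 0) + 1
for i in range(1, N // 3 + 1):
    w = "c" * i + "a" * i + "d" * i
    lengths[len(w)] = lengths.get(len(w), 0) + 1
assert all(f_L(n) == lengths.get(n, 0) for n in range(N + 1))
```

The case split of $f_L$ mirrors the distinguished states $s_0, s_1, s_2$ of the counting $\DFA$.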
\n\\begin{corollary}\nLet ${\\cal L}$ be any of the families from Example \\ref{semilinearfulltrioexamples}.\nThe following are decidable:\n\\begin{itemize}\n\\item For $L_1, L_2$ with $L_2$ being $k$-slender, is $L_1 \\subseteq L_2$?\n\\item For $L_1, L_2$ being $k$-slender languages, is $L_1 = L_2$?\n\\end{itemize}\n\\end{corollary}\n\n\n\n\nFurthermore, matrix grammars are an example of a length-semilinear \\cite{DassowHandbook} full trio closed under concatenation, union, and intersection with $\\NCM$ (although they can generate non-semilinear languages). We therefore\nget all these properties for matrix grammars as a consequence of these proofs. However, this result is already known \\cite{matrix}.\n\nUsing the ideas in the constructive proof of the theorem above,\nwe can also show:\n\\begin{theorem} Let ${\\cal L}$ be a length-semilinear full trio closed under union, concatenation, and intersection with $\\NCM$, with all properties effective.\nLet $L_1, L_2 \\in {\\cal L}$ with $L_2$ a $k$-slender language.\nThen $L_1-L_2 \\in {\\cal L}$. Hence, the complement of any $k$-slender language\nin ${\\cal L}$ is again in ${\\cal L}$.\n\\label{difference}\n\\end{theorem}\n\\begin{proof} Let $\\Sigma$ be (without loss of generality) the common alphabet of $L_1$ and\n$L_2$, and let $\\Sigma'$ be the set of the\nprimed versions of the symbols in $\\Sigma$. Let $\\#$ and $\\$$ be new symbols. \nConsider input \n\\begin{equation}\nw = x\\$y_1\\# \\cdots \\#y_r,\n\\label{w}\n\\end{equation} where $x$ is in $(\\Sigma')^*$ and $y_1, \\ldots, y_r$ \nare in $\\Sigma^*$, for some $0 \\leq r \\leq k$. \nBy Theorem \\ref{thm22}, $L_2$ is $k$-slender effective. 
\nLet $M'$ be the corresponding unary $\\DFA$, which determines $f_{L_2}(n)$ for every $n$.\nBuild an $\\NCM$ $M''$ that on input $w$, verifies:\n\\begin{enumerate}\n\\item $r = f_{L_2}(|x|)$ (using $M'$).\n\\item $|x|=|y_1| = \\cdots = |y_r|$.\n\\item $y_i \\ne y_j$ for each $i \\ne j$.\n\\item $h(x) \\ne y_i$ for each $i$, where $h(a') = a$ for each $a' \\in \\Sigma'$.\n\\end{enumerate}\n\nConsider $L''' \\in {\\cal L}$ \nconsisting of all words of the form of $w$ in Equation \\ref{w},\nwhere $h(x) \\in L_1$, and each $y_i \\in L_2$.\nThis is in ${\\cal L}$ since ${\\cal L}$ is closed under concatenation.\n\nNow define a homomorphism $h_1$ which maps $\\#$, $\\$$, and the symbols in $\\Sigma$ to\n$\\epsilon$, and maps each $a' \\in \\Sigma'$ to $a$. Clearly, $h_1(L''' \\cap L(M''))$ is \n$L_1 - L_2$, and it is in ${\\cal L}$.\n\\qed \\end{proof}\n\nThis holds not only for the matrix languages, but also for concatenation- and union-closed\nsemilinear full trios closed under intersection with $\\NCM$. Some examples are:\n\\begin{corollary}\nLet ${\\cal L}$ be any family of languages \naccepted by a machine model in Example \\ref{semilinearfulltrioexamples} \naugmented by reversal-bounded counters.\nGiven $L_1, L_2 \\in {\\cal L}$ with $L_2$ being $k$-slender, then $L_1 - L_2 \\in {\\cal L}$.\nFurthermore, the complement of any $k$-slender language in ${\\cal L}$ is again in ${\\cal L}$.\n\\end{corollary}\n\nNext, decidability of disjointness for $k$-slender languages will be addressed.\n\\begin{theorem}\nLet ${\\cal L}$ be a full trio which is either:\n\\begin{itemize}\n\\item semilinear, or\n\\item length-semilinear, closed under concatenation, union, and intersection with $\\NCM$,\n\\end{itemize}\nwith all properties effective.\nGiven $k$-slender languages $L_1,L_2 \\in {\\cal L}$,\nit is decidable whether $L_1 \\cap L_2 = \\emptyset$.\n\\end{theorem}\n\\begin{proof}\nLet ${\\cal L}' = \\hat{{\\cal F}}({\\cal L} \\wedge \\NCM)$ (or just ${\\cal L}$ in the second case), which is semilinear\nby 
Theorem \\ref{fullAFL} (or length-semilinear by assumption).\n\nNotice that $L_1 \\cap L_2 = (L_1 \\cup L_2) - ((L_1 - L_2) \\cup (L_2-L_1))$.\nBy Theorem \\ref{difference}, $L_1 - L_2 \\in {\\cal L}$ and $L_2 - L_1 \\in {\\cal L}$, and\nboth must be $k$-slender since $L_1$ and $L_2$ are both $k$-slender.\nCertainly $(L_1 - L_2) \\cup (L_2-L_1) \\in {\\cal L}$, and it is also $2k$-slender.\nAlso, $L_1 \\cup L_2 \\in {\\cal L}$.\nHence, by another application of Theorem \\ref{difference},\n$(L_1 \\cup L_2) - ((L_1 - L_2) \\cup (L_2-L_1))\\in {\\cal L}$. Since emptiness\nis decidable in ${\\cal L}$, the theorem follows.\n\\qed \\end{proof}\nThis again holds for all the families in Example \\ref{semilinearfulltrioexamples}, plus the languages generated by matrix grammars.\n\n\n\nAn interesting open question is whether every $k$-slender $\\NCM$ language (or $k$-slender language from other more\ngeneral families)\ncan be decomposed into a finite disjoint union of\nthin $\\NCM$ languages.\nAlthough we have not been able to show this, we can give a related result. \nTo recall, in \\cite{Harju}, the model $\\TCA$ is introduced, consisting of a nondeterministic Turing machine with a one-way read-only input tape, a finite-crossing read\/write tape, and reversal-bounded counters. It is shown that this model only accepts semilinear languages, and indeed, it is a full trio. Clearly, the model is closed under intersection with $\\NCM$ by adding more counters. Although we do not know whether it is possible to decompose $\\NCM$ slender languages into thin $\\NCM$ languages, we can decompose them into thin $\\TCA$ languages.\n\n\\begin{theorem}\nEvery $k$-slender $\\NCM$ language $L$ is a finite union of thin $\\TCA$ languages.\n\\end{theorem}\n\\begin{proof}\nLet $M$ be an $\\NCM$ accepting $L$. Since $L$ is $k$-slender, for each $n$, there are either exactly $k$ words of length $n$, or $k-1$ words of length $n$, and so on, down to $0$ words of length $n$. 
\nLet $A_k = \\{ x_1 \\# \\cdots \\# x_k \\mid x_1, \\ldots, x_k \\in L(M), |x_1| = \\cdots = |x_k|, x_1 < \\cdots < x_k\\}$, where $<$ denotes lexicographic order. A $\\TCA$ can accept $A_k$ by first writing $x_1$ on its work tape and comparing it with $x_2$. From that point on, it replaces $x_1$ on the tape with $x_2$. It then\nrepeats up to $x_k$.\n\nLet $G_i$ be a gsm that extracts the $i$'th ``component'' of $A_k$. Then $G_1(A_k), \\ldots, G_k(A_k)$ are all thin languages.\nAs they are thin, there is a $\\DFA$ $M_k$ accepting the lengths (in unary) of their words. \nNext, let $A_{k-1} = \\{ x_1 \\# \\cdots \\# x_{k-1} \\mid x_1, \\ldots, x_{k-1} \\in L(M), |x_1| = \\cdots = |x_{k-1}|, x_1 < \\cdots < x_{k-1}\\}$.\n\n\\begin{equation}\n\\begin{array}{ll}\n\\hat{Q}(\\boldsymbol{\\theta}^{(k-1)}, \\boldsymbol{\\theta}) & = \\left< \\log p(\\boldsymbol{x}_1 \\mid \\boldsymbol{\\theta}) + \\sum_{t=2}^L \\log p(\\boldsymbol{x}_t \\mid \\boldsymbol{x}_{t-1}, \\boldsymbol{\\theta}) \\right> \\nonumber \\\\\n& +\\log \\pi(\\boldsymbol{\\theta})\n\\end{array}\n\\end{equation}\nwhere the brackets $<.>$ imply expectation with respect to $\\hat{p}(\\boldsymbol{x}_{1:L} \\mid \\boldsymbol{\\theta}^{(k-1)}, \\boldsymbol{y}_{1:L})$ as in \\refeq{eq:batch}. In order to maximize $\\hat{Q}(\\boldsymbol{\\Theta}^{(1:k-1)}, \\boldsymbol{\\Theta})$ as in \\refeq{eq:mstepsa} one needs to solve the system of equations arising from $\\frac{\\partial \\hat{Q}(\\boldsymbol{\\theta}^{(1:k-1)}, \\boldsymbol{\\theta}) }{ \\partial \\boldsymbol{\\theta} }=\\boldsymbol{0}$.\nThese equations with respect to $\\boldsymbol{\\theta}$ are solved with fixed point iterations. 
They depend on the following $7$ sufficient statistics $\\boldsymbol{\\Phi}=\\{\\Phi_j\\}_{j=1}^7$:\n\\begin{equation}\n\\begin{array}{l}\n \\Phi_1=<\\boldsymbol{x}_1> \\\\\n\\Phi_2= < \\boldsymbol{x}_1 \\boldsymbol{x}^T_1 > \\\\\n\\Phi_3=\\left< \\sum_{t=2}^L\\boldsymbol{x}_{t-1} \\right> \\\\\n\\Phi_4= \\left< \\sum_{t=2}^L \\boldsymbol{x}_t-\\boldsymbol{x}_{t-1} \\right> \\\\\n\\Phi_5= \\left< \\sum_{t=2}^L \\boldsymbol{x}_{t-1} \\boldsymbol{x}_{t-1}^T \\right> \\\\\n\\Phi_6= \\left< \\sum_{t=2}^L (\\boldsymbol{x}_t-\\boldsymbol{x}_{t-1})\\boldsymbol{x}_{t-1}^T \\right> \\\\\n\\Phi_7= \\left< \\sum_{t=2}^L (\\boldsymbol{x}_t-\\boldsymbol{x}_{t-1})(\\boldsymbol{x}_t-\\boldsymbol{x}_{t-1})^T \\right> \n\\end{array}\n\\end{equation}\n\n\\subsection*{Sufficient statistics for parameters appearing in the likelihood}\n\nThe process is a bit more involved for the parameters appearing in the likelihood \\refeq{eq:like}, i.e.\\ the projection matrices $\\{ \\boldsymbol{P}^{(m)} \\}_{m=1}^M$ of dimension $d \\times K$ and the covariance $\\boldsymbol{\\Sigma}$, which is a $d \\times d$ (positive definite) matrix. In order to retain scalability in high-dimensional problems (i.e. 
$d>>1$) we assume a diagonal form $\\boldsymbol{\\Sigma}=diag(\\sigma_1^2, \\sigma_2^2, \\ldots, \\sigma_d^2)$, which implies learning $d$ parameters rather than $d(d+1)\/2$.\n\nDenoting now by $\\boldsymbol{\\theta}=(\\{ \\boldsymbol{P}^{(m)} \\}_{m=1}^M, \\{\\sigma_j^2\\}_{j=1}^d )$ the parameters and by $\\pi(\\boldsymbol{\\theta})$ their prior, according to Equations (\\ref{eq:sdlpost}) and (\\ref{eq:batch}) we have that:\n\\begin{equation}\n\\label{eq:likess}\n\\begin{array}{ll}\n\\hat{Q}(\\boldsymbol{\\theta}^{(k-1)}, \\boldsymbol{\\theta})& =\\left< \\sum_{t=1}^L - \\frac{1}{2} \\log \\mid \\boldsymbol{\\Sigma} \\mid -\\frac{1}{2} (\\boldsymbol{y}_t-\\boldsymbol{W}_t \\boldsymbol{X}_t )^T \\boldsymbol{\\Sigma}^{-1} (\\boldsymbol{y}_t-\\boldsymbol{W}_t \\boldsymbol{X}_t) \\right> \\\\\n& + \\log \\pi(\\boldsymbol{\\theta})\n\\end{array}\n\\end{equation}\nDifferentiation with respect to $ \\boldsymbol{P}^{(m)} $ reveals that the stationary point must satisfy:\n\\begin{equation}\n\\boldsymbol{A}^{(m)}=\\sum_{n=1}^M \\boldsymbol{P}^{(n)} \\boldsymbol{B}^{(n,m)}\n\\end{equation}\nwhere the sufficient statistics are:\n\\begin{equation}\n\\label{eq:ap1}\n\\underbrace{\\boldsymbol{A}^{(m)}}_{d \\times K}=\\left< \\sum_{t=1}^L z_{t,m} \\boldsymbol{y}_t (\\boldsymbol{x}_t^{(m)})^T \\right>, \\quad m=1,2,\\ldots, M\n\\end{equation}\nand:\n\\begin{equation}\n\\underbrace{ \\boldsymbol{B}^{(n,m)} }_{K \\times K} =\\left< \\sum_{t=1}^L z_{t,n} z_{t,m} \\boldsymbol{x}_t^{(n)} (\\boldsymbol{x}_t^{(m)})^T \\right>\n\\end{equation}\nIn the absence of a prior $\\pi(\\boldsymbol{\\theta})$ and if $\\boldsymbol{P}_j^{(m)}$ and $\\boldsymbol{A}^{(m)}_j$ represent the $j^{th}$ rows ($j=1,\\ldots,d$) of the matrices $\\boldsymbol{P}^{(m)}$ and $\\boldsymbol{A}^{(m)}$ respectively, then \\refeq{eq:ap1} implies:\n\\begin{equation}\n\\begin{array}{ll}\n\\underbrace{ \\left[ \\begin{array}{llll} \\boldsymbol{A}^{(1)}_j & \\boldsymbol{A}^{(2)}_j & \\ldots & \\boldsymbol{A}^{(M)}_j \\end{array} \\right] 
}_{\\boldsymbol{A}_j: (1 \\times K~M)}= &\n\\underbrace{ \\left[ \\begin{array}{llll} \\boldsymbol{P}^{(1)}_j & \\boldsymbol{P}^{(2)}_j & \\ldots & \\boldsymbol{P}^{(M)}_j \\end{array} \\right] }_{\\boldsymbol{P}_j: (1\\times K~M)} \\\\\n& \\underbrace{ \\left[ \\begin{array}{llll} \\boldsymbol{B}^{(1,1)} & \\boldsymbol{B}^{(1,2)} & \\ldots & \\boldsymbol{B}^{(1,M)} \\\\ \\boldsymbol{B}^{(2,1)} & \\boldsymbol{B}^{(2,2)} & \\ldots & \\boldsymbol{B}^{(2,M)} \\\\ \\ldots & \\ldots & \\ldots & \\ldots \\\\ \\boldsymbol{B}^{(M,1)} & \\boldsymbol{B}^{(M,2)} & \\ldots & \\boldsymbol{B}^{(M,M)} \\end{array} \\right] }_{\\boldsymbol{B}: ( MK\\times MK)}\n\\end{array}\n\\end{equation}\nThis leads to the following update equations for $\\boldsymbol{P}_j^{(m)}, ~\\forall j,m$:\n\\begin{equation}\n\\boldsymbol{P}_j=\\boldsymbol{A}_j \\boldsymbol{B}^{-1}\n\\label{eq:ap2}\n\\end{equation}\nNote that the matrix $\\boldsymbol{B}$ to be inverted is \\underline{ independent } of the dimension of the observables $d$ ($d>>1$), and the inversion needs to be carried out only once for all $j=1,\\ldots,d$. Hence the {\\em scaling of the update equations for $\\boldsymbol{P}^{(m)}$ is $O(d)$}, i.e.\\ linear with respect to the dimensionality of the original system.\n\n\nFurthermore, in the absence of a prior $\\pi(\\boldsymbol{\\theta})$, differentiation with respect to $\\sigma_j^{-2}$ ($j=1,\\ldots,d$) leads to the following update equation:\n\\begin{equation}\n\\begin{array}{ll}\nL~\\sigma_j^2 & = \\sum_{t=1}^L y_{t,j}^2-2 \\boldsymbol{A}_j ~\\boldsymbol{P}_j^T+\\boldsymbol{P}_j~\\boldsymbol{B} \\boldsymbol{P}_j^T\n\\end{array}\n\\end{equation}\n In summary, the sufficient statistics needed are the matrices $\\boldsymbol{A}^{(m)}$ and $\\boldsymbol{B}^{(n,m)}$ appearing in Equations (\\ref{eq:ap1}) and (\\ref{eq:ap2}).\n\nIn the numerical examples in this paper a diffuse Gaussian prior was used for $\\boldsymbol{P}^{(m)}$ with variance $100$ for each of the entries of the matrix. 
This leads to the addition of the term $1\/100$ in the diagonal elements of $\\boldsymbol{B}$ in \\refeq{eq:ap2}. No priors were used for $\\sigma_j^2$.\n\n\\subsection{From static-linear to dynamic-nonlinear dimensionality reduction}\n\\label{sec:genintro}\n\nThe inherent assumption of all multiscale analysis methods is the existence of a lower-dimensional parameterization of the original system with respect to which the dynamical evolution is more tractable at the scales of interest.\nIn some cases these slow variables can be identified a priori, and the problem reduces to finding the necessary closures that will give rise to a consistent dynamical model.\nIn general, however, one must identify the reduced space $\\mathcal{\\hat{Y}}$ as well as the dynamics within it.\n\nA prominent role in these efforts has been held by Principal Component Analysis (PCA)-based methods. With small differences, and depending on the community, other terms such as Proper Orthogonal Decomposition (POD), Karhunen-Lo\\`eve expansion (KL), or Empirical Orthogonal Functions (EOF) have also been used. PCA finds its roots in the early papers by Pearson \\cite{pea01lin} and Hotelling \\cite{hot33ana} and was originally developed as a {\\em static} dimensionality reduction technique. It is based\non projections on a reduced basis identified by the leading eigenvectors of the covariance matrix $\\boldsymbol{C}$. In the dynamic case, and in the absence of closed-form expressions for the actual covariance matrix, samples of the process $\\boldsymbol{y}_{t} \\in \\mathbb R^d$ at $N$ distinct time instants $t_i$ are used in order to obtain an estimate of the covariance matrix:\n\\begin{equation}\n\\label{eq:cov}\n \\boldsymbol{C} \\approx \\boldsymbol{C}_N=\\frac{1}{N-1} \\sum_{i=1}^N (\\boldsymbol{y}_{t_i} -\\boldsymbol{\\mu}) (\\boldsymbol{y}_{t_i}-\\boldsymbol{\\mu})^T\n\\end{equation}\nwhere $\\boldsymbol{\\mu}=\\frac{1}{N} \\sum_{i=1}^N \\boldsymbol{y}_{t_i} $ is the empirical mean. 
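As a quick illustration, the empirical estimate \refeq{eq:cov} and the extraction of the leading eigenvectors can be sketched as follows (a minimal sketch on synthetic data; all sizes and the data-generating scaling are arbitrary assumptions):

```python
import numpy as np

# Empirical mean and covariance estimate C_N from N snapshots y_{t_i} in R^d,
# as in Eq. (eq:cov), followed by the K leading eigenvectors for the projection.
rng = np.random.default_rng(0)
d, N, K = 5, 200, 2
Y = rng.standard_normal((N, d)) @ np.diag([3.0, 2.0, 0.5, 0.1, 0.1])  # rows are snapshots y_{t_i}

mu = Y.mean(axis=0)                        # empirical mean
C_N = (Y - mu).T @ (Y - mu) / (N - 1)      # unbiased covariance estimate

eigvals, eigvecs = np.linalg.eigh(C_N)     # eigenvalues in ascending order
V_K = eigvecs[:, ::-1][:, :K]              # d x K matrix of the K leading eigenvectors
Y_hat = (Y - mu) @ V_K                     # reduced coordinates, N x K
```

The eigen-decomposition of the $d \times d$ (or, via the snapshot method, $N \times N$) matrix is the step whose cost grows with $d$ and $N$, as discussed below.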
If there is a spectral gap after the first $K$ eigenvalues and $\\boldsymbol{V}_K$ is the $d\\times K$ matrix whose columns are the $K$ leading normalized eigenvectors of $\\boldsymbol{C}_N$, then the reduced-order model is defined with respect to $\\boldsymbol{\\hat{y}}_t=\\boldsymbol{V}_K^T \\boldsymbol{y}_t$. The reduced dynamics can be identified by a Galerkin projection (or a Petrov-Galerkin projection) of the original ODEs in \\refeq{eq:master}:\n\\begin{equation}\n\\label{eq:pca}\n\\frac {d \\boldsymbol{\\hat{y}}_t}{d t}=\\boldsymbol{V}^T_K \\boldsymbol{f}(\\boldsymbol{V}_K \\boldsymbol{\\hat{y}}_t)\n\\end{equation}\nHence the reduced space $\\mathcal{\\hat{Y}}$ is approximated by a hyperplane in $\\mathcal{Y}$ and the projection mapping $\\mathcal{P}$ is linear (Figure \\ref{fig1a}). While it can be readily shown that the projection adopted is optimal in the mean square sense for stationary Gaussian processes, it is generally not so in cases where non-Gaussian processes or other distortion metrics are examined. The application of PCA-based techniques to high-dimensional, multiscale dynamical systems poses several modeling limitations. Firstly, the reduced space $\\mathcal{\\hat{Y}}$ might not be sufficiently approximated by a hyperplane of dimension $K<<d$. Secondly, a significant computational effort is implied for high-dimensional problems ($d>>1$) and large datasets ($N>>1$), as the $K$ leading eigenvectors of large matrices (of dimension proportional to $d$ or $N$) need to be evaluated.\nThis effort must be repeated if more samples become available (i.e. $N$ increases) and an update of the reduced-order model is desirable. 
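The PCA-Galerkin reduction can be sketched in a few lines: the reduced right-hand side is $\boldsymbol{V}_K^T \boldsymbol{f}(\boldsymbol{V}_K \boldsymbol{\hat{y}})$, with $\boldsymbol{V}_K$ a $d \times K$ orthonormal basis (here the full model $\boldsymbol{f}$ is an assumed linear test system, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
d, K = 6, 2
A = -np.eye(d) + 0.1 * rng.standard_normal((d, d))   # hypothetical full-order dynamics f(y) = A y
V_K, _ = np.linalg.qr(rng.standard_normal((d, K)))   # orthonormal d x K reduced basis

def f(y):
    return A @ y

def f_reduced(y_hat):
    # Galerkin projection of the full right-hand side onto span(V_K)
    return V_K.T @ f(V_K @ y_hat)

# One explicit-Euler step of the reduced-order model
dt = 0.01
y_hat = V_K.T @ rng.standard_normal(d)               # project an initial condition
y_hat_next = y_hat + dt * f_reduced(y_hat)
```

For this linear test case the reduced operator is simply $\boldsymbol{V}_K^T \boldsymbol{A} \boldsymbol{V}_K$; for nonlinear $\boldsymbol{f}$ the full right-hand side must be evaluated at the reconstructed state.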
Recent efforts have concentrated on developing online versions \\cite{war06onl} that circumvent this problem.\n\n\n\n\\begin{figure}\n\\subfigure[PCA]{\n\\psfrag{x1}{\\tiny $y^{(1)}$}\n\\psfrag{x2}{\\tiny $y^{(2)}$}\n\\psfrag{px}{\\color{red} $\\boldsymbol{\\hat{y}=Py}$}\n\\label{fig1a}\n\\includegraphics[width=0.30\\textwidth,height=3.5cm]{FIGURES\/pod.eps}} \\hfill\n\\subfigure[Nonlinear PCA]{\n\\psfrag{x1}{\\tiny $y^{(1)}$}\n\\psfrag{x2}{\\tiny $y^{(2)}$}\n \\psfrag{px}{\\tiny \\color{red} PCA:$\\boldsymbol{\\hat{y}=Py}$}\n\\psfrag{npx}{\\color{green} $\\boldsymbol{\\hat{y}=P(y)}$}\n\\label{fig1b}\n\\includegraphics[width=0.30\\textwidth,height=3.5cm]{FIGURES\/npod.eps}} \\hfill\n\\subfigure[Mixture PCA (SLDS)]{\n\\psfrag{x1}{\\tiny $y^{(1)}$}\n\\psfrag{x2}{\\tiny $y^{(2)}$}\n\\psfrag{px1}{\\color{red} $\\boldsymbol{\\hat{y}=P^{(1)}y}$}\n\\psfrag{px2}{\\color{red} $\\boldsymbol{\\hat{y}=P^{(2)}y}$}\n\\psfrag{px3}{\\color{red} $\\boldsymbol{\\hat{y}=P^{(3)}y}$}\n\\label{fig1c}\n\\includegraphics[width=0.30\\textwidth,height=3.5cm]{FIGURES\/hmm_pca.eps}} \\\\\n\\caption{ \\em The phase space is assumed two-dimensional for illustration purposes, i.e. $\\boldsymbol{y}_t=(y^{(1)}_{t}, y^{(2)}_{t})$. Each black circle corresponds to a realization $\\boldsymbol{y}_{t_i}$. $\\mathcal{P}: \\mathcal{Y} \\to \\mathcal{\\hat{Y}} $ is the projection operator from the original high-dimensional space $\\mathcal{Y}$ to the reduced-space $\\mathcal{\\hat{Y}}$. }\n\\end{figure}\n\n\n\nThe obvious extension to the linear projections of PCA is offered by nonlinear dimensionality reduction techniques. These have been the subject of intense research in statistics and machine learning in recent years (\\cite{sch97ker,Roweis:2000,Tenenbaum:2000,Donoho:2003, Shi:2000,Bach:2006,AzranG06}) and fairly recently have found their way to computational physics and multiscale dynamical systems (e.g. 
\\cite{Coifman:2005,Lafon:2006,Nadler:2006,bas08non}).\nThey are generally based on calculating eigenvectors of an affinity matrix of a weighted graph. While they circumvent the limiting linearity assumption of standard PCA, they still assume that the underlying process is stationary (Figure \\ref{fig1b}). Even though the system's dynamics might be appropriately tracked on a lower-dimensional subspace for a certain time period, this subspace might not be invariant across the whole time range of\n interest. The identification of the dynamics on the reduced space $\\mathcal{\\hat{Y}}$ is not as straightforward as in standard PCA, and in most cases a deterministic or stochastic model is fit directly to the projected data points \\cite{Coifman:2008,Erban:2007,fra08hid}. More importantly, since the inverse mapping $\\mathcal{P}^{-1}$ from the manifold $\\mathcal{\\hat{Y}}$ to $\\mathcal{Y}$ is not available analytically, approximations have to be made in order to find pre-images in the data-space \\cite{DBLP:conf\/dagm\/BakirZT04,Erban:2007}. From a computational point of view, the cost of identifying the projection mapping is comparable to standard PCA, as an eigenvalue problem on an $N \\times N$ matrix has to be solved. Updating those eigenvalues and the nonlinear projection operator in cases where additional data become available implies a significant computational overhead, although recent efforts \\cite{sch07fas} attempt to overcome this limitation.\n\n\n\n\n\nA common characteristic of the aforementioned techniques is that even though the reduced coordinates are learned from {\\em a finite amount of simulation data}, there is no {\\em quantification of the uncertainty} associated with these inferences. This is a critical component not only in cases where multiple sets of reduced parameters and coarse-grained models are consistent with the data, but also for assessing errors associated with the analysis and prediction estimates. 
It is one of the main motivations for adopting a {\\em probabilistic approach} in this project. Statistical models can naturally deal with stochastic systems that frequently arise in many applications. Most importantly perhaps, even in cases where the fine-scale model is deterministic (e.g. \\refeq{eq:master}), a stochastic reduced model provides a better approximation that can simultaneously quantify the uncertainty arising from the information loss that takes place during the coarse-graining process \\cite{fat04com,kou07sto}. \n\n\n\n\n\n\n\n\n\n\n\n\nA more general perspective is offered by latent variable models, where the observed data (experimental or computationally generated) are augmented by a set of hidden variables \\cite{bis99lat}. {\\em In the case of high-dimensional, multiscale dynamical systems, the latent model corresponds to a reduced-order process that evolves at scales of practical relevance.} Complex distributions over the observables can be expressed in terms of simpler and tractable joint distributions over the expanded variable space. Furthermore, {\\em structural characteristics} of the original, high-dimensional process $\\boldsymbol{y}_t$ can be revealed by interpreting the latent variables as generators of the observables.\n\n\n\nIn that respect, a general setting is offered by Hidden Markov Models (HMM, \\cite{gha01int}) or, more generally, State-Space Models (SSM) \\cite{cap01ten,gha04uns,Horenko:2007}. 
These assume the existence of an {\\em unobserved (latent)} process $\\boldsymbol{\\hat{y}}_t \\in \\mathbb R^K$ described by a (stochastic) ODE:\n\n\\begin{equation}\n\\label{eq:ssm1}\n\\frac{d \\boldsymbol{\\hat{y}}_t}{dt}=\\boldsymbol{\\hat{f}}(\\boldsymbol{\\hat{y}}_t; \\boldsymbol{w}_t) \\quad (\\textrm{transition equation})\n\\end{equation}\nwhich gives rise to the observables $\\boldsymbol{y}_t \\in \\mathbb R^d$ as: \n\n\\begin{equation}\n\\label{eq:ssm2}\n\\boldsymbol{y}_t=\\boldsymbol{h}(\\boldsymbol{\\hat{y}}_t, \\boldsymbol{v}_t) \\quad (\\textrm{emission equation})\n\\end{equation}\nwhere $\\boldsymbol{w}_t$ and $\\boldsymbol{v}_t$ are unknown stochastic processes (to be inferred from data) and $\\boldsymbol{\\hat{f}}: \\mathbb R^K \\to \\mathbb R^K$, $\\boldsymbol{h}: \\mathbb R^K \\to \\mathbb R^d$ are unknown measurable functions. The transition equation defines a prior distribution on the coarse-grained dynamics, whereas the emission equation defines the mapping that connects the reduced-order representation with the observable dynamics. The object of Bayesian inference is to learn the unobserved (unknown) model parameters from the observed data. Hence the coarse-grained model and its relation to the observable dynamics are inferred from the data. \n\nThe form of Equations (\\ref{eq:ssm1}) and (\\ref{eq:ssm2}) affords general representations. Linear and nonlinear PCA models arise as special cases by appropriate selection of the functions and random processes appearing in the transition and emission equations. Note for example that the transition equation (\\refeq{eq:ssm1}) for $\\boldsymbol{\\hat{y}}_t$ in the case of the PCA-based models reviewed earlier is given by \\refeq{eq:pca}, and the {\\em emission equation} (\\refeq{eq:ssm2}) that relates latent and observed processes is linear, deterministic, and specified by the matrix of $K$ leading eigenvectors $\\boldsymbol{V}_K$. 
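A minimal discrete-time, linear-Gaussian instance of the transition/emission structure in Equations (\ref{eq:ssm1})-(\ref{eq:ssm2}) can be simulated as follows (all matrices and noise scales are hypothetical placeholders, not fitted quantities):

```python
import numpy as np

rng = np.random.default_rng(2)
K, d, L = 2, 4, 100                          # latent dim, observed dim, sequence length
F = np.array([[0.95, 0.1], [0.0, 0.9]])      # hypothetical latent transition matrix
H = rng.standard_normal((d, K))              # hypothetical emission matrix
q, r = 0.1, 0.05                             # process / observation noise scales

x = np.zeros(K)
xs, ys = [], []
for t in range(L):
    x = F @ x + q * rng.standard_normal(K)   # transition equation (discretized, noisy)
    y = H @ x + r * rng.standard_normal(d)   # emission equation
    xs.append(x)
    ys.append(y)

X, Y = np.array(xs), np.array(ys)            # latent path (L x K) and observations (L x d)
```

Inference in the SSM setting runs in the opposite direction: only $\boldsymbol{y}_{1:L}$ is observed, and the latent path together with the model parameters is learned from it.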
\n\n\n An extension to HMM is offered by switching-state models \\cite{har76bay,cha78sta,ham89new,shu91dyn}, which can be thought of as dynamical mixture models \\cite{DBLP:journals\/tsmc\/ChaerBG97,DBLP:journals\/neco\/GhahramaniH00}. The latent dynamics consist of a discrete process that takes $M$ values, each corresponding to a distinct dynamical behavior. This can be represented by an $M$-dimensional vector $\\boldsymbol{z}_t$ whose entries are zero except for a single entry $m$, which is equal to one and indicates the active mode\/cluster. Most commonly, the time-evolution of $\\boldsymbol{z}_t$ is modeled by a first-order stationary Markov process:\n\\begin{equation}\n\\label{eq:slds}\n\\boldsymbol{z}_{t+1}=\\boldsymbol{T} \\boldsymbol{z}_t\n\\end{equation}\nwhere $\\boldsymbol{T}=[T_{m,n}]$ is the transition matrix and $T_{m,n}=Pr[z_{m,t+1}=1 \\mid z_{n,t}=1]$.\n In addition to $\\boldsymbol{z}_t$, \n $M$ processes $\\boldsymbol{x}_t^{(m)} \\in \\mathbb R^K, ~m=1,\\ldots,M$ parameterize the reduced-order dynamics (see also discussion in section \\ref{sec:pmhmm}). Each is activated when $z_{m,t}=1$. In the linear version (Switching Linear Dynamic System, SLDS \\footnote{sometimes referred to as jump-linear or conditional Gaussian models}) and conditioned on $z_{m,t}=1$, the observables $\\boldsymbol{y}_t$ arise by a projection from the active $\\boldsymbol{x}_t^{(m)}$ as follows:\n\\begin{equation}\n\\label{eq:obs_slds}\n\\boldsymbol{y}_t= \\boldsymbol{P}^{(m)} \\boldsymbol{x}_{t}^{(m)}+\\boldsymbol{v}_t, \\quad \\boldsymbol{v}_t \\sim N(\\boldsymbol{0}, \\boldsymbol{\\Sigma})~(i.i.d)\n\\end{equation}\nwhere $\\boldsymbol{P}^{(m)}$ are $d\\times K$ matrices ($K<