diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzezpm" "b/data_all_eng_slimpj/shuffled/split2/finalzzezpm" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzezpm" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\n The performance of direct-sequence code-division multiple-access (DS-CDMA)\ntransmission over mobile fading channels depends strongly on the\nreliability of channel parameter and quality of synchronization for each user: state-of-the-art detection algorithms that\nexploit multiple-access-interference and inter-symbol-interference\nrequire very powerful estimation algorithms.\n\n\nSubstantial amount of relevant references appeared in the\nliterature on delay estimation. Namely, a new prospective is\npresented in \\cite{stuller97} for the maximum likelihood (ML) time-delay estimation. Code timing estimation in a near-far environment for DS-CDMA systems was introduced in \\cite{smith}.\n Joint symbol detection, time-delay and channel parameter estimation\nproblems for asynchronous DS-CDMA systems have been investigated in several previous works (e.g., \\cite{pelin,miller}). Most of these works either work on one signal at a time and treat the other signals as interference, or employ a training sequence to obtain a coarse\nestimate of the channel parameters which is consequently used to\ndetect data. It is clear that these approaches have disadvantages of\nhaving higher overhead and additional noise enhancement.\n\nSome other proposed approaches for joint blind multiuser detection and\nchannel estimation for DS-CDMA systems are subspace-based and\nlinear prediction-based methods. Subspace-based method usually\nrequire singular value decomposition or eigenvalue decomposition\nwhich is computationally costly and does not tolerate mismatched\nchannel parameters. Another drawback of this approach is that\naccurate rank determination may be difficult in a noisy\nenvironment \\cite{bensley,strom}. Moreover, it is not clear how these methods can be extended to include the estimation of the transmission delays jointly with the channel parameters.\n\nThe expectation maximization (EM) and space alternating EM (SAGE) algorithms are\nideally suited to these kind of problems as they are guaranteed to\nconverge in likelihood. Earlier work related with delay estimation based on the EM algorithm has appeared \\cite{Georg-90,Georg-91}.\nEfficient iterative receiver structures are presented in\n\\cite{kocian2003,kocian2007}, performing joint multiuser detection and\nchannel estimation for synchronous as well as asynchronous coded\nDS-CDMA systems operating over quasi-static flat Rayleigh fading\nchannels, under the assumption that the transmissions delays are\nknown. The Bayesian EM\/SAGE algorithm can be used for joint\n\\emph{soft}-data estimation and channel estimation but the\ncomputational complexity of the resulting receiver architecture is non-polynomial in the number of\nusers \\cite{Gallo-04}. To overcome this draw-back, Hu \\emph{et al.}\napplied the Variational Bayesian EM\/SAGE algorithms to joint\nestimation of the distributions for channel coefficients, noise\nvariance, and information symbols for synchronous DS-CDMA in\n\\cite{Hu-08}. Our work may be considered to be a twofold extension of the\nwork by Gallo \\emph{et al.} in \\cite{Gallo-04}:\nFirst, the proposed receiver performs joint channel coefficient and\ntransmission delay estimation within the SAGE framework. 
Secondly, the\nincorporation of the Monte-Carlo method in the SAGE framework makes it\npossible to compute \\emph{soft}-data estimates for all users at polynomial\ncomputational complexity as well. Here, an\nefficient Markov chain Monte Carlo (MCMC) technique \\cite{gelfand} called\n{\\em Gibbs sampling} is used to compute the {\\em a posteriori}\nprobabilities (APP) of the data symbols \\cite{doucet}. The APPs can be computed\nexactly with the MCMC algorithm, which is significantly less complex than\na standard hidden Markov model approach. \nThe resulting receiver architecture\nworks in principle fully blind and is guaranteed to converge. For\nuncoded transmission, a few\npilot bits must be inserted, though, to resolve the phase ambiguity problem.\n\n\nThe theoretical framework for joint transmission delay and\nchannel estimation as well as the data detection algorithms can easily\nbe extended to coded transmission. \n\n\n\n\\section{System Description}\n\\label{sec:system}\n\nWe consider an asynchronous single-rate DS-CDMA system with $K$\nactive users employing binary phase shift keying (BPSK) modulation and\nsharing the same propagation channel. The signal transmitted by each\nuser experiences flat Rayleigh fading, which is assumed to be\nconstant over the observation frame of $L$ data symbols. Each user employs\na random signature waveform for transmitting symbols of duration\n$T_{b}$, such that each symbol consists of $N_{c}$ chips with duration\n$T_{c}=T_{b}\/N_{c}$ where $N_{c}$ is an integer. The received signal\nis the noisy sum of all users' contributions, delayed by the propagation delays $\\tau_{k}\\in [0,T_{b}\/2)$, where the subscript $k$\ndenotes the label of the $k$th user. After down-converting the\nreceived signal to baseband and passing it through an\nintegrate-and-dump filter with integration time $T_{s}=T_{c}\/Q$, $Q\n\\in \\mathbb{Q}^+$, the $QN_c(L+1)-1$ samples taken over an observation frame of $L$ symbols are\nstacked into a signal column vector ${\\boldsymbol{r}} \\in \\mathbb{C}^{Q N_{c}\n(L+1)-1}$. Note that sampling is chip-synchronous without\nknowledge of the individual transmission delays. 
It can\ntherefore be expressed as\n\\begin{equation}\n\\boldsymbol{r} = \n\\boldsymbol{S}(\\boldsymbol{\\tau})\\boldsymbol{A}\\boldsymbol{d}+\\boldsymbol{w}.\n\\label{sys:received}\n\\end{equation}\n\nIn this expression the matrix\n$\\boldsymbol{S} (\\boldsymbol{\\tau})\n\\in \\mathbb{C}^{(Q N_{c} (L+1)-1) \\times LK}$ contains the signature\nsequences of all the users\n\\[\n\\boldsymbol{S}( \\boldsymbol \\tau)= \\left[ {\\begin{array}{*{20}c}\n \\boldsymbol{S}_{1} (\\tau_1) &\n \\boldsymbol{S}_{2}(\\tau_2) & \\cdots & \\boldsymbol{S}_{K} (\\tau_K) \\\\\n\\end{array}} \\right]\n\\]\nwhere $\\boldsymbol{S}_k (\\tau_k) \\in \\mathbb{C}^{(Q N_{c} (L+1)-1)\n\\times L}$ has the form\n\\[\n\\boldsymbol{S}_k( \\tau_k)=\n\\left[ {\\begin{array}{*{20}c}\n | & | & & | \\\\\n \\boldsymbol{S}_{k} (\\tau_k,0) &\n \\boldsymbol{S}_{k}(\\tau_k,1) & \\cdots & \\boldsymbol{S}_{k} (\\tau_k,L-1) \\\\\n | & | & & | \\\\\n\\end{array}} \\right]\n\\]\nand the spreading code vector $\\boldsymbol{S}_k(\\tau_k,\\ell)\n\\in \\mathbb{C}^{(Q N_{c}(L+1)-1) \\times 1}$ is given by\n\\[\n\\boldsymbol{S}_k (\\tau_k,\\ell)=\n\\left[ {\\begin{array}{c}\n \\boldsymbol{0}_{(Q N_{c}\\ell+\\tau_{k}\/T_{s}) \\times 1} \\\\\n | \\\\\n \\boldsymbol{s}_{k}(\\tau_k,\\ell) \\\\\n | \\\\\n \\boldsymbol{0} \\\\\n\\end{array}} \\right].\n\\]\n\n The vector $\\boldsymbol{s}_k (\\tau_k,\\ell)$ contains the\n spreading code of user~$k$ having support\n $[\\ell N_{c}T_{c}, (\\ell+1) N_{c}T_{c}]$ with energy\n $\\boldsymbol{s}_k^{\\dag}(\\tau_k,\\ell)\\boldsymbol{s}_k\n (\\tau_k,\\ell)=1$. \n Finally, $\\boldsymbol{0}_{M \\times 1}$ denotes the $M \\times 1$-dim. all-zero\n column vector.\n\n\nThe block diagonal channel matrix\n$\\boldsymbol{A}\\in \\mathbb{C}^{LK\\times\n LK}$ in (\\ref{sys:received}) is given by $\\boldsymbol{A} =\n\\mbox{diag}\\{\\boldsymbol{A}_1,\\cdots, \\boldsymbol{A}_K\\}$. The channel\nmatrix for user~$k$,\n$\\boldsymbol{A}_k \\in \\mathbb{C}^{L\\times L}$,\n is given by\n$\\boldsymbol{A}_k = {\\bf I}_{L} \\otimes a_{k}$ where ${\\bf I}_{L}$ is the $L$-dim. identity matrix, and the symbol $\\otimes$\ndenotes the Kronecker product. The $k$th user's channel coefficient\n$a_{k}$ is a circularly symmetric complex\nGaussian random variable with zero mean and variance $\\sigma_k^2$. The\n$k$th user's transmission delay is assumed to be uniformly distributed.\n\nThe symbol vector $\\boldsymbol{d} \\in \\mathbb{C}^{LK}$ takes the form $\\boldsymbol{d} =\\mbox{col}\\{{\\boldsymbol{d}_1,\\cdots,\\boldsymbol{d} _K}\\}$\nwhere the vector $\\boldsymbol{d}_k \\in\\mathbb{C}^{L} $ contains the\n$k$th user's symbols, i.e. $\\boldsymbol{d} _k =\n\\mbox{col}\\{d_{k}(0),\\cdots,d_{k}(L-1) \\}$\nwith $d_{k}(\\ell) \\in\n\\{-1,+1\\}$ denoting the symbol transmitted by the $k$th user during\nthe $\\ell$th signalling interval. 
Finally, the column vector\n$\\boldsymbol{w} \\in \\mathbb{C}^{QN_{c}(L+1)-1}$ contains complex,\ncircularly symmetric white Gaussian noise having covariance matrix\n$N_{0}{\\bf I}$.\nWe assume that the vectors\n$\\boldsymbol{a}\\triangleq \\mbox{col}\\{a_{1},a_{2},\\cdots,a_{K}\\}$,\n$\\boldsymbol{\\tau}\\triangleq\n\\mbox{col}\\{\\tau_{1},\\tau_{2},\\cdots,\\tau_{K}\\}$, ${\\boldsymbol{d}}$ and\n$\\boldsymbol{w}$ and their components are independent.\nThe receiver does not know the data sequences, the (complex) channel coefficients, or the transmission delays.\n\n\n\n\\section{Monte-Carlo SAGE Joint Parameter Estimation}\n\\label{sec:MCSAGE_appl}\n\n\\subsection{The SAGE Algorithm}\n\nIn previous applications, the SAGE algorithm \\cite{fessler} has been extensively used\nto iteratively approximate the ML\/MAP estimate of a parameter vector $\\boldsymbol{\\theta}$\nwith respect to the observed data ${\\boldsymbol{r}}$.\n To obtain a receiver architecture that iterates between soft-data and\n channel estimation, one might choose the parameter vector as\n ${\\boldsymbol{\\theta}}=\\left\\{\\mathfrak{R}(a_{1}),\\cdots,\n \\mathfrak{R}(a_{K}),\\mathfrak{I}(a_{1}),\\cdots,\\mathfrak{I}(a_{K}),\\tau_{1},\\cdots,\\tau_{K} \\right\\}$. The symbols $\\mathfrak{R}(\\cdot)$ and $\\mathfrak{I}(\\cdot)$ denote the real and imaginary parts of the complex argument, respectively. At iteration $i$,\nonly the parameter vector of user $k$, ${\\boldsymbol{\\theta}}_{k}$, is updated, while the\nparameter vectors of the other users ${{\\boldsymbol{\\theta}}}_{\\bar k}={\\boldsymbol{\\theta}}\n\\backslash {\\boldsymbol{\\theta}}_{k}$ are kept fixed. In the SAGE framework ${\\boldsymbol{r}}$ is\nreferred to as the \\emph{incomplete} data. The so-called {\\em admissible hidden} data\n${\\boldsymbol{\\chi}}_k$ with respect to ${\\boldsymbol{\\theta}}$ is selected to be\n${\\boldsymbol{\\chi}}_k=\\{{\\boldsymbol{r}},{\\boldsymbol{d}}\\}$. Notice that ${\\boldsymbol{\\chi}}_k$ can only be partially\nobserved. Applying the SAGE algorithm to MAP parameter estimation yields the expectation (E)-step\n\\begin{equation}\n\\label{alg:sage_estep}\nQ_k({\\boldsymbol{\\theta}}_k,{\\boldsymbol{\\theta}}^{[i]})=E_{{\\boldsymbol{d}}} \\left\\{ \\log\n p\\left({\\boldsymbol{r}},{\\boldsymbol{d}},{\\boldsymbol{a}}_k,\\boldsymbol \\tau_k,{{\\boldsymbol{a}}}_{\\bar k}^{[i]},{\\boldsymbol \\tau}_{\\bar k}^{[i]} \\right) \\mid {\\boldsymbol{r}},{\\boldsymbol{a}}^{[i]},\\boldsymbol \\tau^{[i]} \\right\\}.\n\\end{equation}\nThe maximization (M)-step computes the value of ${\\boldsymbol{\\theta}}_k$ that maximizes\n(\\ref{alg:sage_estep}) to obtain the update\n${\\boldsymbol{\\theta}}_k^{[i+1]}$. The objective function is non-decreasing at each\niteration.\n\n\\subsection{The Monte-Carlo SAGE algorithm}\n\nWe will see that direct computation of the expectation in\n(\\ref{alg:sage_estep}) requires a non-polynomial number of operations in the\nnumber of users $K$ and thus becomes prohibitive with increasing $K$. To make the computation of the expectation in\n(\\ref{alg:sage_estep}) feasible though, we propose to use the technique of Markov\nchain Monte Carlo (MCMC) to obtain the Monte-Carlo SAGE\nalgorithm. MCMC is a statistical technique that allows the generation of ergodic\npseudo-random samples ${\\boldsymbol{d}}^{[i,1]},\\ldots,{\\boldsymbol{d}}^{[i,N_t]}$ from the current\napproximation to the conditional pdf $p({\\boldsymbol{d}} | {\\boldsymbol{r}}, {\\boldsymbol{\\theta}}^{[i]})$. 
These samples are used to approximate the expectation in\n(\\ref{alg:sage_estep}) by the sample-mean. The Gibbs sampler and the\nMetropolis-Hastings algorithm are widely used MCMC algorithms. Here we\ndescribe only the Gibbs sampler \\cite{Borunjeny,doucet}, as it is the\nmost commonly used in applications. Having initialized\n${\\boldsymbol{d}}^{[0,0]}$ randomly, the Gibbs sampler iterates the following loop\nat SAGE iteration $i$:\n\\begin{itemize}\n\\item Draw sample ${\\boldsymbol{d}}_1^{[i,t]}$ from\n $p({\\boldsymbol{d}}_1|{\\boldsymbol{d}}_2^{[i,t-1]},\\ldots,{\\boldsymbol{d}}_K^{[i,t-1]}, {\\boldsymbol{r}}, {\\boldsymbol{\\theta}}^{[i]})$\\\\\n\\item Draw sample ${\\boldsymbol{d}}_2^{[i,t]}$ from\n $p({\\boldsymbol{d}}_2|{\\boldsymbol{d}}_1^{[i,t]},{\\boldsymbol{d}}_3^{[i,t-1]},\\ldots,{\\boldsymbol{d}}_K^{[i,t-1]}, {\\boldsymbol{r}},\n {\\boldsymbol{\\theta}}^{[i]})$\\\\\n\\vdots\n\\item Draw sample ${\\boldsymbol{d}}_K^{[i,t]}$ from\n $p({\\boldsymbol{d}}_K|{\\boldsymbol{d}}_1^{[i,t]},\\ldots,{\\boldsymbol{d}}_{K-1}^{[i,t]}, {\\boldsymbol{r}}, {\\boldsymbol{\\theta}}^{[i]})$\\\\\n\\end{itemize}\nFollowing this approach, we have\n\\begin{equation*}\n\\label{alg:mcmc_sage_estep}\nQ_k({\\boldsymbol{\\theta}}_k,{\\boldsymbol{\\theta}}^{[i]})=\\frac{1}{N_t} \\sum_{t=1}^{N_t} \\left\\{ \\log\n p\\left({\\boldsymbol{r}},{\\boldsymbol{d}}^{[i,t]},{\\boldsymbol{a}}_k,\\boldsymbol \\tau_k,{{\\boldsymbol{a}}}_{\\bar\n k}^{[i]},{\\boldsymbol \\tau}_{\\bar k}^{[i]} \\right) \\right\\}.\n\\end{equation*}\n\nNotice that with increasing $N_t$, the Monte-Carlo SAGE algorithm converges to the MAP\nsolution ${\\boldsymbol{\\theta}} = {\\boldsymbol{\\theta}}^\\star$ up to random fluctuations around ${\\boldsymbol{\\theta}}^\\star$ \\cite{Tanner-90}.
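\n\nFor concreteness, one such Gibbs sweep over the individual symbols can be sketched as follows. This is only an illustrative Python fragment, not the implementation used in this work (our computations were carried out with Maple and Perl); it uses the matrix ${\\boldsymbol{G}}\\triangleq\\boldsymbol{S}(\\boldsymbol \\tau){\\boldsymbol{A}}$ and the symbol-wise conditional probabilities whose logistic form is derived in (\\ref{MC:LLR_fin}) below.\n\\begin{verbatim}\nimport numpy as np\n\ndef gibbs_sweep(d, G, r, N0, rng):\n    # One sweep at SAGE iteration i: every BPSK symbol d_q is redrawn\n    # from its full conditional p(d_q | d_rest, r, theta_i).\n    for q in range(d.size):\n        # r - G_qbar d_qbar: remove the q-th symbol's contribution\n        residual = r - G @ d + G[:, q] * d[q]\n        lam = 4.0 \/ N0 * np.real(np.vdot(G[:, q], residual))\n        p_plus = 0.5 * (1.0 + np.tanh(0.5 * lam))  # stable logistic\n        d[q] = 1.0 if rng.random() < p_plus else -1.0\n    return d\n\\end{verbatim}\nRunning $N_t$ such sweeps yields the samples ${\\boldsymbol{d}}^{[i,1]},\\ldots,{\\boldsymbol{d}}^{[i,N_t]}$ entering the sample-mean above.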
\n\n\\subsection{Receiver design}\n\n\nThis subsection is devoted to the derivation of a receiver\narchitecture for joint estimation of parameters within the Monte-Carlo SAGE framework. Discarding terms\nindependent of ${\\boldsymbol{a}}$ and $\\boldsymbol \\tau$, we obtain\n\\begin{equation}\n\\log p({\\boldsymbol{r}},{\\boldsymbol{d}},{\\boldsymbol{a}},\\boldsymbol \\tau) =\n\\log p({\\boldsymbol{r}}|{\\boldsymbol{d}},{\\boldsymbol{a}},\\boldsymbol \\tau) +\n\\log p({\\boldsymbol{d}}) +\n\\log p({\\boldsymbol{a}}) + \\log p(\\boldsymbol \\tau).\n\\label{appl:likefkt}\n\\end{equation}\nFrom (\\ref{sys:received}), it follows that\n\\begin{equation}\n\\log p({\\boldsymbol{r}}|{\\boldsymbol{a}},\\boldsymbol \\tau,{\\boldsymbol{d}}) \\varpropto\n\\Re{\\{{\\boldsymbol{r}}^{\\dag}\\boldsymbol{S}\\boldsymbol{A}{\\boldsymbol{d}}}\\} -\\frac{1}{2}\n{\\boldsymbol{\\mu}}({\\boldsymbol{\\theta}},{\\boldsymbol{d}})^{\\dag}{\\boldsymbol{\\mu}}({\\boldsymbol{\\theta}},{\\boldsymbol{d}}),\n\\label{appl:likelihood}\n\\end{equation}\nwhere ${\\boldsymbol{\\mu}}({\\boldsymbol{\\theta}},{\\boldsymbol{d}}) \\triangleq\n\\sum_{k=1}^{K}\\sum_{\\ell=0}^{L-1}{\\boldsymbol{S}}_{k}(\\ell,\\tau_{k})a_{k}d_{k}(\\ell)$\nand $(.)^{\\dag}$ is the conjugate transpose of the argument.\n\\subsubsection{The E-step}\nSubstituting (\\ref{appl:likelihood}) into (\\ref{appl:likefkt}) yields,\nafter some algebraic manipulations, the E-step of the Monte-Carlo SAGE algorithm:\n\\begin{eqnarray}\n\\lefteqn{Q_{k}({\\boldsymbol{\\theta}}_{k}|{\\boldsymbol{\\theta}}^{[i]})=}\\nonumber\\\\\n&& \\hspace*{-5ex}\n \\frac{2}{N_{0}} \\sum_{\\ell=0}^{L-1} \\Re\\left\\{a^{*}_{k}\n\\Psi(\\ell,\\tau_{k})\\right\\}-\\frac{L}{N_{0}}|a_{k}|^{2} -\n\\frac{1}{\\sigma_k^2}|a_k|^2 \\label{appl:Q-fct}\n\\end{eqnarray}\nwith the branch definition\n\\[\n\\Psi(\\ell,\\tau_{k}) \\triangleq \\boldsymbol{S}^{\\dag}_{k}(\\ell,\\tau_{k})\\left(\\exd{k}{\\ell}{\\boldsymbol{r}} -\n \\mathcal{I}_k^{[i]}(\\ell)\\right)\n\\]\nand the interference term \n\\begin{eqnarray*}\n \\mathcal{I}_k^{[i]}(\\ell) &\\triangleq& \\sum_{k' \\neq\nk}a_{k'}^{[i]}\\bigg(\\boldsymbol{S}_{k'}(\\ell+1,\n\\tau^{[i]}_{k'})\\corrd{k}{\\ell}{k'}{\\ell+1}\\\\\n&&\\hspace*{1ex} +\n\\boldsymbol{S}_{k'}(\\ell,\\tau^{[i]}_{k'})\\corrd{k}{\\ell}{k'}{\\ell}\\\\\n&&\\hspace*{1ex}+\\boldsymbol{S}_{k'}\n(\\ell-1,\\tau^{[i]}_{k'})\n\\corrd{k}{\\ell}{k'}{\\ell-1}\\bigg).\n\\end{eqnarray*}\nMoreover,\n\\begin{equation}\n\\label{appl:softdata}\n \\exd{k}{\\ell}\\triangleq \\sum_{m\\in\n \\mathcal{S}}m P(d_{k}(\\ell)=m| {\\boldsymbol{r}}, \\boldsymbol \\tau^{[i]},{\\boldsymbol{a}}^{[i]})\n\\end{equation}\nand\n\\begin{eqnarray}\n\\label{appl:softcorr}\n\\lefteqn{\\corrd{k}{\\ell}{k'}{\\ell'} \\triangleq\n\\sum_{m\\in \\mathcal{S}}\\sum_{n\\in \\mathcal{S}}~m~n}\\nonumber \\\\&&\n\\!\\!\\times P(d_{k}(\\ell)=m,d_{k'}(\\ell')=n \\mid {\\boldsymbol{r}},\n \\boldsymbol \\tau^{[i]},{\\boldsymbol{a}}^{[i]}), \\mbox{ for } k'\\neq k,~\n\\end{eqnarray}\n where $\\mathcal{S}\\triangleq\\{-1,+1\\}$ is the signal constellation\n and the lag is within the range $\\ell' \\in \\{\\ell-1,\\ell,\\ell+1\\}$.\n\n\n\\subsubsection{The M-step}\n\nThe M-step of the SAGE algorithm is realized by first maximizing (\\ref{appl:Q-fct}) with respect to the transmission delay $\\tau_k$,\n\\begin{equation}\n\\label{appl:tau_update}\n\\tau^{[i+1]}_{k}=\\arg \\max_{\\tau_k} \\left|\\sum_{\\ell=0}^{L-1}\\Psi(\\ell,\\tau_k)\\right|.\n\\end{equation}\nThen, inserting (\\ref{appl:tau_update}) into\n(\\ref{appl:Q-fct}), taking the derivatives with respect to the $a_k$'s,\nsetting the results equal to zero, and solving yields\n\\[\na_k^{[i+1]} = 
\\frac{1}{L+N_0\/\\sigma_k^2}\\sum_{\\ell=0}^{L-1}\\Psi(\\ell,\\tau^{[i+1]}_{k}).\n\\]\n\n\\section{Monte-Carlo Implementation for the Computation of A Posteriori Probabilities}\n\n\\subsection{Computation of the soft-data symbols in (\\ref{appl:softdata})}\n\n\\label{sec:softdata}\n\nLet $\\overline{{\\boldsymbol{d}}_{k}(\\ell)}\\triangleq {\\boldsymbol{d}} \\backslash \\{d_{k}(\\ell)\\}$.\nFor notational simplicity we use $\\bar{{\\boldsymbol{d}}}\\triangleq\\overline{{\\boldsymbol{d}}_{k}(\\ell)}$ throughout this section. Then, the {\\em a\nposteriori} probability of $d_{k}(\\ell)$ in (\\ref{appl:softdata}) can be evaluated as\n\n\\setlength\\arraycolsep{1pt}\n\\begin{eqnarray}\n\\lefteqn{P(d_{k}(\\ell)=m \\mid {\\boldsymbol{r}},\n \\boldsymbol \\tau^{[i]},{\\boldsymbol{a}}^{[i]})}\\nonumber\\\\&=&\\sum _{\\bar{{\\boldsymbol{d}}}}P(d_{k}(\\ell)=m \\mid \\bar{{\\boldsymbol{d}}}, {\\boldsymbol{r}}, \\boldsymbol \\tau^{[i]},{\\boldsymbol{a}}^{[i]})~P( \\bar{{\\boldsymbol{d}}}|{\\boldsymbol{r}}, \\boldsymbol \\tau^{[i]},{\\boldsymbol{a}}^{[i]}) \\nonumber\\\\\n &\\approx&\\frac{1}{N_t}\\sum_{t=1}^{N_t}\n P(d_{k}(\\ell)=m| \\bar{{\\boldsymbol{d}}}^{[i,t]},{\\boldsymbol{r}}, \\boldsymbol \\tau^{[i]},{\\boldsymbol{a}}^{[i]}).\n\\label{MC:APP}\n\\end{eqnarray}\nTo compute $P(d_{k}(\\ell)=m| \\bar{{\\boldsymbol{d}}}^{[i,t]},{\\boldsymbol{r}},\n\\boldsymbol \\tau^{[i]},{\\boldsymbol{a}}^{[i]})$ for this Markov chain Rao-Blackwellization technique, we define\n \\begin{equation}\n\\label{MC:LLR}\n \\lambda^{[i,t]}\\triangleq\\ln \\frac{P\\left(d_{k}(\\ell)= +1|\\bar{{\\boldsymbol{d}}}^{[i,t]},{\\boldsymbol{r}},\n\\boldsymbol \\tau^{[i]},{\\boldsymbol{a}}^{[i]}\\right)}{ P\\left(d_{k}(\\ell)=\n-1|\\bar{{\\boldsymbol{d}}}^{[i,t]},{\\boldsymbol{r}}, \\boldsymbol \\tau^{[i]},{\\boldsymbol{a}}^{[i]}\\right)}.\n \\end{equation}\nFor uncoded transmission, the data symbols are i.i.d. and equally\nlikely. Therefore, it follows from (\\ref{MC:LLR}) that\n\\begin{equation}\n\\label{MC:LLR_exp}\n \\lambda^{[i,t]} = \\ln \\frac{P( {\\boldsymbol{r}} \\mid d_{k}(\\ell)= +1, \\bar{{\\boldsymbol{d}}}^{[i,t]},\n\\boldsymbol \\tau^{[i]},{\\boldsymbol{a}}^{[i]})}{ P( {\\boldsymbol{r}} \\mid d_{k}(\\ell)= -1,\n\\bar{{\\boldsymbol{d}}}^{[i,t]},\\boldsymbol \\tau^{[i]},{\\boldsymbol{a}}^{[i]})},\n\\end{equation}\nfrom which it can be easily seen that\n\\begin{equation*}\nP\\left(d_{k}(\\ell)= m\\mid \\bar{{\\boldsymbol{d}}}^{[i,t]},{\\boldsymbol{r}},\n\\boldsymbol \\tau^{[i]},{\\boldsymbol{a}}^{[i]}\\right)=\\frac{1}{1+\\exp\\left(-m\\lambda^{[i,t]}\\right)}.\n\\label{MC:logapp_lambda}\n\\end{equation*}\nFrom (\\ref{sys:received}), we have $p({\\boldsymbol{r}}|{\\boldsymbol{d}})\\thicksim \\exp(-\\frac{1}{N_{0}}|{\\boldsymbol{r}}-{\\boldsymbol{G}}\n{\\boldsymbol{d}}|^{2})$, with ${\\boldsymbol{G}}\\triangleq\\boldsymbol{S}(\\boldsymbol \\tau){\\boldsymbol{A}}$ and\n${\\boldsymbol{d}}=\\mbox{col}\\{{\\boldsymbol{d}}_{1},{\\boldsymbol{d}}_{2},\\cdots,{\\boldsymbol{d}}_{K}\\}$. After some\nalgebra (\\ref{MC:LLR_exp}) can be expressed as\n\\begin{equation}\n\\label{MC:LLR_fin}\n\\lambda^{[i,t]}=\\frac{4}{N_{0}}\n\\Re\\left\\{({\\boldsymbol{g}}^{[i]}_{q})^{\\dag}({\\boldsymbol{r}}-{{\\boldsymbol{G}}}^{[i]}_{\\bar q} \\mbox{ }\n{{\\boldsymbol{d}}}^{[i,t]}_{\\bar q})\\right\\},\n\\end{equation}\nwhere $q\\triangleq kL+\\ell$, and ${{\\boldsymbol{G}}}_{\\bar q}$ is ${\\boldsymbol{G}}$ with its $q$th column\n${\\boldsymbol{g}}_{q}$ removed. 
Similarly, ${{\\boldsymbol{d}}}_{\\bar q}$ denotes the\nvector ${\\boldsymbol{d}}$ with its $q$th component removed.\n\nIn summary, for each $k=1,2,\\cdots,K$ and $\\ell=0,1,\\cdots,L-1$, to\nestimate the {\\em a posteriori} probabilities\n$P(d_{k}(\\ell)|{\\boldsymbol{r}},\\boldsymbol \\tau^{[i]},{\\boldsymbol{a}}^{[i]})$ in (\\ref{MC:APP}), the Gibbs\nsampler runs over all symbols $N_{t}$ times to generate a collection\nof vectors $\\left\\{\\bar{{\\boldsymbol{d}}}^{[i,t]}\\triangleq\n\\bar{{\\boldsymbol{d}}}^{[i,t]}_{k}(\\ell)\\right\\}_{t=1}^{N_{t}}$ which are used\nin (\\ref{MC:LLR_fin}) to estimate the desired quantities.\n\n\n\\subsection{Computation of the soft-value for the product of two\n data symbols in (\\ref{appl:softcorr})}\n\nSimilarly, a number of random samples\n$\\overline{\\overline{{\\boldsymbol{d}}}}^{{[i,t]}}\\triangleq\n\\overline{\\overline{{\\boldsymbol{d}}_{k,k'}(\\ell')}}^{[i,t]},\nt=1,2,\\cdots, N_t, \\ell'\n\\in \\{\\ell-1,\\ell,\\ell+1\\}$ are drawn, using the Gibbs sampling technique,\nfrom the joint conditional posterior distribution\n$P(\\overline{\\overline{{\\boldsymbol{d}}}} \\mid {\\boldsymbol{r}}, \\boldsymbol \\tau^{[i]},{\\boldsymbol{a}}^{[i]}).$ Based on the samples\n$\\overline{\\overline{{\\boldsymbol{d}}}}^{[i,t]}$, $\\corrd{k}{\\ell}{k'}{\\ell'}$ in (\\ref{appl:softcorr}) can be evaluated by\n\\begin{eqnarray*}\n\\lefteqn{\\corrd{k}{\\ell}{k'}{\\ell'}\\approx\n(1\/N_{t})}\\nonumber\\\\&&\n\\times \\sum_{t=1}^{N_{t}}\n\\sum_{m,n\\in \\mathcal{S}}m n\nP\\left(d_{k}(\\ell)=m,d_{k'}(\\ell')=n \\mid \\overline{\\overline{{\\boldsymbol{d}}}}^{{[i,t]}}, {\\boldsymbol{r}}, \\boldsymbol \\tau^{[i]},\n{\\boldsymbol{a}}^{[i]}\\right).\n\\end{eqnarray*}\nWe need to evaluate the probability in the expression above. Following the same route as in the previous section, after some algebra it can be expressed as\n\\[\nP\\left(d_{k}(\\ell)= m, d_{k'}(\\ell')= n \\mid \\overline{\\overline{{\\boldsymbol{d}}}}^{[i,t]}, {\\boldsymbol{r}}, \\boldsymbol \\tau^{[i]},\n{\\boldsymbol{a}}^{[i]}\\right)=\\]\n\\begin{equation}\\hspace*{3cm} \\frac{1}{1+\\exp\\left(-\\zeta^{[i,t]}\\right)}\\cdot\n\\frac{1}{1+\\exp\\left(-\\lambda^{[i,t]}\\right)}.\n\\label{MC:LLR_corr}\n\\end{equation}\n\nThe quantities $\\zeta^{[i,t]}$ and $\\lambda^{[i,t]}$ in (\\ref{MC:LLR_corr}) are given by\n\\begin{eqnarray*}\n\\zeta^{[i,t]}&=&\\frac{4}{N_{0}}\n\\Re\\left\\{n({\\boldsymbol{g}}^{[i]}_{p})^{\\dag}({\\boldsymbol{r}}-{{\\boldsymbol{G}}}^{[i]}_{\\overline{p,q}}\n\\mbox{ }\n{{\\boldsymbol{d}}}^{[i,t]}_{\\overline{p,q}})-mn({\\boldsymbol{g}}^{[i]}_{p})^{\\dag}{\\boldsymbol{g}}^{[i]}_{q}\\right\\},\\\\\n\\lambda^{[i,t]}&=&\\frac{4}{N_{0}}\n\\Re\\left\\{m({\\boldsymbol{g}}^{[i]}_{q})^{\\dag}({\\boldsymbol{r}}-{{\\boldsymbol{G}}}^{[i]}_{\\bar q} \\mbox{ }\n{{\\boldsymbol{d}}}^{[i,t]}_{\\bar q})\\right\\},\n\\end{eqnarray*}\nwhere $p\\triangleq k'L+\\ell'$ and $q\\triangleq kL+\\ell$.\n${{\\boldsymbol{G}}}_{\\overline{p,q}}$ is ${\\boldsymbol{G}}$ with its $p$th and $q$th columns\n${\\boldsymbol{g}}_{p},{\\boldsymbol{g}}_{q}$ removed. Similarly, ${{\\boldsymbol{d}}}_{\\overline{p,q}}$\ndenotes the vector ${\\boldsymbol{d}}$ with its $p$th and $q$th components\nremoved.
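\n\nThe Rao-Blackwellized average in (\\ref{MC:APP}) admits a compact numerical sketch in the same illustrative Python style as before; again, this is only a sketch under our notation (${\\boldsymbol{G}}$, ${\\boldsymbol{r}}$, $N_0$ as above) and not the Maple\/Perl code actually used.\n\\begin{verbatim}\nimport numpy as np\n\ndef estimate_app(samples, G, r, N0, q):\n    # Estimate P(d_q = +1 | r, theta_i) by averaging the logistic\n    # probabilities of eq. (MC:LLR_fin) over the N_t Gibbs samples,\n    # cf. eq. (MC:APP).\n    acc = 0.0\n    for d in samples:              # each sample d is a +-1 vector\n        residual = r - G @ d + G[:, q] * d[q]\n        lam = 4.0 \/ N0 * np.real(np.vdot(G[:, q], residual))\n        acc += 0.5 * (1.0 + np.tanh(0.5 * lam))\n    return acc \/ len(samples)\n\\end{verbatim}\nThe soft-data symbols $\\exd{k}{\\ell}$ of (\\ref{appl:softdata}) then follow as $2P(d_{k}(\\ell)=+1\\mid\\cdot)-1$ with $q=kL+\\ell$.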
\n\n\n\n\\section{Performance analysis}\n\n\\subsection{Modified Cramer-Rao Bounds for the Estimated Parameters}\nWe now derive the modified Cramer-Rao lower bounds (MCRB) on the\nvariances of any unbiased estimates $\\widehat{{\\boldsymbol{\\theta}}}$ of the\nparameter vector ${\\boldsymbol{\\theta}}$. It is shown in \\cite{VanTrees} that for $\\theta_{p} \\in {\\boldsymbol{\\theta}}$, $\\mbox{var}(\\widehat{\\theta}_{p}-\\theta_{p})\\geq [{\\boldsymbol{I}}^{-1}({\\boldsymbol{\\theta}})]_{pp}$, where ${\\boldsymbol{I}}({\\boldsymbol{\\theta}})$ is the $3K\\times 3K$ Fisher information matrix whose $(p,q)$th component is defined by\n\\begin{equation*}\n[{\\boldsymbol{I}}({\\boldsymbol{\\theta}})]_{pq}\\triangleq - E_{{\\boldsymbol{r}},{\\boldsymbol{a}}} \\bigg\\{\\frac{\\partial^{2}\\ln p({\\boldsymbol{r}},{\\boldsymbol{a}} \\mid \\boldsymbol \\tau)}{\\partial \\theta_{p}\\partial \\theta_{q}} \\bigg\\}, \\mbox{ for }p,q=1,2,\\cdots,3K.\n\\end{equation*}\nFor the joint likelihood function in (\\ref{appl:likelihood}),\nit is shown in \\cite{kay} that the Fisher information matrix can be computed by\n\\begin{equation}\n[{\\boldsymbol{I}}({\\boldsymbol{\\theta}})]_{pq}=\n\\frac{2}{N_{0}}E_{{\\boldsymbol{d}}}\\bigg\\{E_{{\\boldsymbol{a}}|{\\boldsymbol{d}}}\\bigg\\{\\Re\\bigg[\n\\frac{\\partial {\\boldsymbol{\\mu}}^{\\dag}({\\boldsymbol{\\theta}},{\\boldsymbol{d}})}{\\partial\n \\theta_{p}}\\frac{\\partial {\\boldsymbol{\\mu}}({\\boldsymbol{\\theta}},{\\boldsymbol{d}})}{\\partial\n \\theta_{q}}\\bigg]\\bigg\\}\\bigg\\},\n\\label{CRB_Fisher}\n\\end{equation}\n$p,q=1,2,\\cdots, 3K.$\n\nWe take the partial derivatives in (\\ref{CRB_Fisher}) with respect to $\\theta_{p}$ and $\\theta_{q}$ and then the expectations with respect to the channel coefficients ${\\boldsymbol{a}}$ and the data ${\\boldsymbol{d}}$, for the different regions of $p$ and $q$ values. Under the assumption that the data sequences are independent and equally likely, and using the fact that ${\\boldsymbol{S}}^{\\dag}_{p}(\\tau_{p},\\ell){\\boldsymbol{S}}_{p}(\\tau_{p},\\ell)=1$ for $p=1,2,\\cdots, K$; $\\ell=0,1,\\cdots,L-1$, the Fisher information matrix becomes diagonal, and its $(p,p)$th component can be evaluated as\n\\begin{equation}\n[{\\boldsymbol{I}}({\\boldsymbol{\\theta}})]_{pp}=\\frac{2}{N_{0}}\\left\\{ \\begin{array}{ll}\n L; & p=1,\\cdots,K\\\\\n L; & p=K+1,\\cdots,2K\\\\\n \\sigma_{p-2K}^{2}\\sum_{\\ell=0}^{L-1} \\mid {\\boldsymbol{S}}'(\\ell) \\mid^{2}; & p=2K+1,\\cdots,3K,\\\\\n \\end{array}\n \\right.\n\\label{CRB_Fisher_eval}\n\\end{equation}\nwith the shorthand ${\\boldsymbol{S}}'(\\ell) \\triangleq\n\\frac{\\partial{\\boldsymbol{S}}_{k}(\\tau_{k},\\ell)}{\\partial\\tau_{k}} \\mid_{\\tau_{k}=\\widehat\\tau_{k}}$, $k=p-2K$.\nThe final result for the MCRBs on the estimates of the channel coefficients and the transmission delays is obtained by inverting the diagonal matrix ${\\boldsymbol{I}}({\\boldsymbol{\\theta}})$ in (\\ref{CRB_Fisher_eval}) as follows:\n\\begin{eqnarray}\n\\mbox{var}(\\widehat{a}_{k})&\\geq & N_0\/L, \\label{MCRB:a} \\\\\n\\mbox{ }\\mbox{var}(\\widehat{\\tau}_{k})&\\geq & 1\/(8 \\pi^{2} L\\overline{\\gamma_{k}}\\mbox{ }B^{2}_{s_{k}} ) \\label{MCRB:tau},\n\\end{eqnarray}\nfor $k = 1,2,\\dots,K$. The symbol $\\overline{\\gamma_{k}}\\triangleq \\sigma^{2}_{k}\/N_{0}$\nis the average SNR, $B_{s_{k}}$ is the Gabor bandwidth of the $k$th user's\nspreading code waveform $s_{k}(t)$, i.e., \n\\begin{equation*}\nB_{s_{k}}\\triangleq \\bigg(\\int_{-\\infty}^{+\\infty}f^{2} \\mid S_{k}(f)\\mid^{2} df\\bigg)^{1\/2},\n\\end{equation*}\nand $S_{k}(f)$ is the Fourier transform of $s_{k}(t), t\\in\n[0,T_{b}]$. Note that the Gabor bandwidth $B_{s_{k}}$ tends to infinity for rectangular-shaped (continuous-time)\nchip waveforms.
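\n\nAs a numerical illustration (again an illustrative Python sketch under our notation, not part of the computations reported below), the bound (\\ref{MCRB:tau}) can be evaluated for any sampled, unit-energy signature waveform: for band-limited chip shapes the Gabor bandwidth, and hence the bound, is finite, whereas for rectangular chips it grows with the simulated bandwidth, in agreement with the remark above.\n\\begin{verbatim}\nimport numpy as np\n\ndef mcrb_tau(s, Ts, L, snr_avg):\n    # Gabor bandwidth of a sampled unit-energy waveform s(t) with\n    # sample spacing Ts, followed by the delay MCRB of eq. (MCRB:tau).\n    S = np.fft.fft(s) * Ts              # discrete approximation of S(f)\n    f = np.fft.fftfreq(s.size, d=Ts)\n    df = f[1] - f[0]\n    energy = np.sum(np.abs(S)**2) * df  # ~1 for a unit-energy waveform\n    B2 = np.sum(f**2 * np.abs(S)**2) * df \/ energy\n    return 1.0 \/ (8.0 * np.pi**2 * L * snr_avg * B2)\n\\end{verbatim}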
\n\n\n\n\\subsection{Numerical Examples}\n\nTo assess the performance of the proposed (non-linear) Monte-Carlo SAGE scheme, an asynchronous uncoded DS-CDMA system with $K = 5$ users, rectangular chip waveforms with processing gain $N_c = 8$, and $L = 80$ transmitted symbols per block is considered. The receiver processes $Q = 12$ samples per chip. For each data block, Gibbs sampling is performed over $50$ iterations. \nA few pilot symbols, say $L_p=4$, are embedded in each block to overcome the phase ambiguity problem. Each user's strongest path from the MMSE\nestimate of ${\\boldsymbol{a}}$, given the $K~Q~(L_p+1)-1$ samples of ${\\boldsymbol{r}}$ and the pilot symbols, yields the initial estimates ${\\boldsymbol{a}}^{[0]}$ and $\\boldsymbol \\tau^{[0]}$. The MMSE estimate of $\\boldsymbol{d}$, given\n${\\boldsymbol{r}}$ and weighted by ${\\boldsymbol{a}}^{[0]}$, yields the initial symbol estimate\n${\\boldsymbol{d}}^{[0]}$. We refer to this method as MMSE-separate estimation (MMSE-SE). \nFor comparison purposes, the SAGE scheme for joint data detection and channel estimation in\n\\cite{kocian2007} for known transmission delays and hard-decision\ndecoding is also considered subsequently. We refer to this scheme as\n\"SAGE-JDE, $\\boldsymbol \\tau$ known\".\\\\\n\n\\begin{figure}\n\\begin{center}\n\\begin{psfrags}\n\\psfrag{0.10}[][][0.8]{$0.10$}\n\\psfrag{0.15}[][][0.8]{$0.15$}\n\\psfrag{0.20}[][][0.8]{$0.20$}\n\\psfrag{0.25}[][][0.8]{$0.25$}\n\\psfrag{0.30}[][][0.8]{$0.30$}\n\\psfrag{0.35}[][][0.8]{$0.35$}\n\\psfrag{0.40}[][][0.8]{$0.40$}\n\\psfrag{0.45}[][][0.8]{$0.45$}\n\\psfrag{0.50}[][][0.8]{$0.50$}\n\\psfrag{delay}[][][0.8]{$\\tau\/T_b$}\n\\psfrag{10}[][][0.8]{$10$}\n\\psfrag{-1}[][][0.6]{$-1$}\n\\psfrag{-2}[][][0.6]{$-2$}\n\\psfrag{-3}[][][0.6]{$-3$}\n\\psfrag{user1}[l][l][0.8]{MCMC-SAGE, user 1}\n\\psfrag{user3}[l][l][0.8]{MCMC-SAGE, user 3}\n\\psfrag{MCRLB(1)xxxxxxxxxxxx}[l][l][0.8]{MCRB (\\ref{MCRB:a}), user 1}\n\\psfrag{MCRLB(3)}[l][l][0.8]{MCRB (\\ref{MCRB:a}), user 3}\n\\psfrag{MSE(a)}[][][0.8]{$\\mbox{var}(\\widehat{a}_k)$}\n\\includegraphics[width=10cm]{MSEa33_MCMC.eps}\n\\end{psfrags}\n\\end{center}\n\\caption{$\\mbox{var}(\\widehat{a}_k)$ of the MCMC-SAGE in a near-far scenario. \\label{fig:MMSE_a}}\n\\end{figure}\n\nTo study the behavior of the proposed MCMC-SAGE scheme, we consider communication over AWGN (not known to the receiver). The individual powers are given by\n\\begin{equation*}\n\\begin{array}{lll}\n\\sigma_1^2 = -4~\\mathrm{dB}, & \\hspace{2ex} \\sigma_2^2 = -2~\\mathrm{dB}, & \\hspace{2ex} \\sigma_3^2 = 0~\\mathrm{dB}, \\\\\n\\sigma_4^2 = +2~\\mathrm{dB}, & \\hspace{2ex} \\sigma_5^2 = +4~\\mathrm{dB}. & \\\\ \n\\end{array}\n\\end{equation*}\nFig.~\\ref{fig:MMSE_a} shows the mean-square-error (MSE) of the channel estimates $\\widehat{\\boldsymbol a}_1$ (weakest user) and $\\widehat{\\boldsymbol a}_3$ (normal user) as a function of the normalized transmission delays $\\boldsymbol \\tau\/T_b$, which are uniformly distributed on the interval between zero and the value on the abscissa. It can be seen that the MCMC-SAGE performs close to the MCRB over the entire range of $\\boldsymbol \\tau$. 
Although not shown in the plot, convergence is achieved after around 25 iterations, i.e., every user's parameter vector is updated five times.\n\n\\begin{figure}\n\\begin{center}\n\\begin{psfrags}\n\\psfrag{0.10}[][][0.8]{$0.10$}\n\\psfrag{0.15}[][][0.8]{$0.15$}\n\\psfrag{0.20}[][][0.8]{$0.20$}\n\\psfrag{0.25}[][][0.8]{$0.25$}\n\\psfrag{0.30}[][][0.8]{$0.30$}\n\\psfrag{0.35}[][][0.8]{$0.35$}\n\\psfrag{0.40}[][][0.8]{$0.40$}\n\\psfrag{0.45}[][][0.8]{$0.45$}\n\\psfrag{0.50}[][][0.8]{$0.50$}\n\\psfrag{delay}[][][0.8]{$\\tau\/T_b$}\n\\psfrag{10}[][][0.8]{$10$}\n\\psfrag{-4}[][][0.6]{$-4$}\n\\psfrag{-5}[][][0.6]{$-5$}\n\\psfrag{-6}[][][0.6]{$-6$}\n\\psfrag{-7}[][][0.6]{$-7$}\n\\psfrag{user1xxxxxxxxxxxxx}[l][l][0.8]{MCMC-SAGE, user 1}\n\\psfrag{user3}[l][l][0.8]{MCMC-SAGE, user 3}\n\\psfrag{MSE(tau)}[][][0.8]{$\\mbox{var}(\\widehat{\\tau}_k)$}\n\\includegraphics[width=10cm]{MSEtau33_MCMC.eps}\n\\end{psfrags}\n\\end{center}\n\\caption{$\\mbox{var}(\\widehat{\\tau}_k)$ of the MCMC-SAGE in a near-far scenario. \\label{fig:MMSE_tau}}\n\\end{figure}\n\nFig.~\\ref{fig:MMSE_tau} depicts the MSE of the delay estimates $\\widehat{\\boldsymbol \\tau}_1$ and $\\widehat{\\boldsymbol \\tau}_3$. Notice that the MCRB for $\\boldsymbol \\tau$ tends to zero for time-continuous signature waveforms. It can be seen that user~3 does not encounter delay estimation errors for small transmission delays, i.e., $\\tau\/T_b \\leq 0.2$. This effect can be partially explained by the large number of samples per chip, i.e., $Q=12$. For larger transmission delays, though, $\\mbox{var}(\\widehat{\\tau}_3)$ remains finite because of the increasing residual interference in the receiver.\n\n\\begin{figure}\n\\begin{center}\n\\begin{psfrags}\n\\psfrag{2}[][][0.8]{$2$}\n\\psfrag{4}[][][0.8]{$4$}\n\\psfrag{6}[][][0.8]{$6$}\n\\psfrag{8}[][][0.8]{$8$}\n\\psfrag{10}[][][0.8]{$10$}\n\\psfrag{12}[][][0.8]{$12$}\n\\psfrag{14}[][][0.8]{$14$}\n\\psfrag{16}[][][0.8]{$16$}\n\\psfrag{effective SNR}[][][0.8]{$\\frac{L-L_p}{L}\\bar{\\gamma}_1=\\frac{L-L_p}{L}\\bar{\\gamma}_2=\\ldots=\\frac{L-L_p}{L}\\bar{\\gamma}_K$ [dB]}\n\\psfrag{10}[][][0.8]{$10$}\n\\psfrag{0}[][][0.6]{$0$}\n\\psfrag{-1}[][][0.6]{$-1$}\n\\psfrag{-2}[][][0.6]{$-2$}\n\\psfrag{-3}[][][0.6]{$-3$}\n\\psfrag{-4}[][][0.6]{$-4$}\n\\psfrag{BER(user1)}[][][0.6]{${\\mathrm{BER}_1}$}\n\\psfrag{BER(user3)}[][][0.6]{${\\mathrm{BER}_3}$}\n\\psfrag{SU}[l][l][0.7]{SU, known channel}\n\\psfrag{SAGE(kc)}[l][l][0.7]{MCMC-SAGE, $\\boldsymbol \\tau$ known}\n\\psfrag{SAGE}[l][l][0.7]{MCMC-SAGE, $\\hat{\\boldsymbol \\tau}$}\n\\psfrag{MMSE-SDEXXXXXX}[l][l][0.7]{MMSE-SE}\n\\includegraphics[width=10cm]{BER33_MCMC.eps}\n\\end{psfrags}\n\\end{center}\n\\caption{BER-performance in a near-far scenario. \\label{fig:BER}}\n\\end{figure}\n\nThe bit-error-rate ($\\overline{\\mathrm{BER}}$) of the proposed\nreceiver is plotted in Fig.~\\ref{fig:BER} versus the \\emph{effective}\nSNR $\\frac{L-L_p}{L}\\bar{\\gamma}_k$, $\\bar{\\gamma}_k \\triangleq\n\\sigma_k^2\/N_0$, $k=1,\\ldots,K$. The transmission delays are uniformly distributed on $[0,T_b\/2)$. It can be seen that the MMSE-SE scheme cannot handle delay estimation errors at all due to high\ncorrelations between the users' signature sequences. The proposed\nMCMC-SAGE scheme and the \"SAGE-JDE, $\\boldsymbol \\tau$ known\" scheme perform similarly. The weakest user 1 performs close to the single-user (SU) bound. 
The normal user 3 exhibits a multiuser efficiency loss of roughly 1~dB over the entire range of SNR values.\n\n\\section{Conclusions}\nA computationally efficient estimation algorithm has been proposed\nfor estimating the transmission delays and the channel coefficients\njointly in a non-data-aided fashion via the SAGE algorithm. The {\\em a\nposteriori} probabilities needed to implement the SAGE algorithm\nhave been computed by means of the Gibbs sampling technique. Exact\nanalytical expressions have been obtained for the estimates of the\ntransmission delays and channel coefficients. At each iteration the\nlikelihood function is non-decreasing.\n\n\n\\bibliographystyle{IEEEbib}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction and Motivation}\n\nDESIDERATA: ``To translate the information contained in protein\ndatabases in terms of random variables in order to model a dynamics\nof folding and unfolding of proteins''.\n\nThe information on the planetary motion has been annotated in Astronomical\nalmanacs (Ephemerides) along the centuries and can be derived and analyzed\nby Classical Dynamics and Deterministic Chaos as well as confirmed and\ncorrected by General Relativity. The information which has accumulated\nin Biological almanacs (Protein databases) over the last decades is\nstill waiting for its first description to be done by a successful\ntheory of folding and unfolding of proteins. We think that this study\nshould be started from the evolution of protein families and their\nassociation into Family clans as a necessary step of their development.\n\nThe first fundamental idea to be developed here is that proteins do not\nevolve independently. We introduce several arguments and we have\ndone many calculations to span the bridge over the facts about\nprotein evolution in order to emphasize the existence of a protein\nfamily formation process (PFFP), a successful pattern recognition method\nand a coarse-grained protein dynamics driven by optimal control\ntheory \\cite{mondaini1,mondaini2,mondaini3,mondaini4,mondaini5,mondaini6}.\nProteins, or their ``intelligent'' parts,\nprotein domains, evolve together as a family of protein domains. We then\nrealize that the exclusion of the evolution of an ``orphan'' protein is\nguaranteed by the probabilistic approach to be introduced in the present\ncontribution. We think that the elucidation of the nature of intermediate\nstages of the folding\/unfolding dynamics, in order to circumvent the\nLevinthal ``paradox'' \\cite{levinthal,karplus}, as well as the determination\nof initial conditions, should be found from a detailed study of this\nPFFP. A byproduct of this approach is the possibility of testing the\nhypothesis of agglutination of protein families into Clans by rigorous\nstatistical methods like ANOVA \\cite{deGroot,mondaini4}.\n\nWe take many examples of Entropy Measures as the generalized\nfunctions of random variables in our modelling. These are the\nprobabilities of occurrence of amino acids in rectangular arrays\nwhich are the representatives of families of protein domains.\nIn section 2, the sample space which is adequate for this\nstatistical approach is described in detail. We start from the\ndefinition of probabilities of occurrence and the restrictions\nimposed on the number of feasible families by the structure of\nthis sample space. 
Section 3 introduces the set of Sharma-Mittal\nEntropy Measures \\cite{sharma,mondaini4} to be adopted as the\nfunctions of probabilities of occurrence in the statistical\nanalysis to be developed. The Mutual Information measures\nassociated with the Sharma-Mittal set, as well as the normalized\nJaccard distance measures, are also introduced in this section.\nIn section 4, we present a naive sketch of how to assess a Protein\ndatabase, to set the stage for the more efficient approach of the\nfollowing sections. In section 5, we point out the inconvenience\nof the Maple computing system for the statistical calculations to\nbe done, by displaying tables with the CPU and real times\nnecessary to perform all the necessary calculations. We have also\nprovided in this section some adaptations of our methods in order\nto be used with the Perl computing system and we compare the new\ntimes of calculation with those obtained by using Maple at the beginning\nof the section. We also include some comments on the use of Perl,\nespecially on its awkwardness in calculating with input data given\nas arrays and the way of circumventing this. However, we also stress\nthat despite the fact that joint probabilities and their powers can\nusually be calculated, the output will come randomly distributed and the\nCPU and real times will increase too much to favour\nthe calculation of the entropy measures. This is due to the\nintrinsic ``Hash'' structure \\cite{cormen} of the Perl computing system.\nWe then introduce a modified array structure in order to calculate with\nPerl.\n\n\\section{The Sample Space for a Statistical Treatment}\nWe consider a rectangular array of \\textbf{\\emph{m}} rows (protein domains)\nand \\textbf{\\emph{n}} columns (amino acids). These arrays are organized\nfrom the protein database whose domains are classified into families and\nclans by the professional expertise of senior biologists \\cite{finn1,finn2}.\n\nThe random variables are the probabilities of occurrence of amino\nacids, $p_j(a)$, $j=1,2,\\ldots,n$, $a=A,C,D,\\ldots,W,Y$ (one-letter\ncode for amino acids), given by\n\\begin{equation}\n p_j(a) \\equiv \\frac{n_j(a)}{m} \\label{eq:eq1}\n\\end{equation}\n\n\\noindent where $n_j(a)$ is the number of occurrences of the amino acid\n\\textbf{\\emph{a}} in the $j$-th column. Eq.(\\ref{eq:eq1}) can also\nbe interpreted as giving the components of $n$ vectors of 20 components\neach\n\\begin{equation}\n \\begin{pmatrix}\n p_1(A) \\\\ \\relax\n \\vdots \\\\ \\relax\n p_1(Y)\n \\end{pmatrix}\n \\begin{pmatrix}\n p_2(A) \\\\ \\relax\n \\vdots \\\\ \\relax\n p_2(Y)\n \\end{pmatrix}\n \\ldots\n \\begin{pmatrix}\n p_n(A) \\\\ \\relax\n \\vdots \\\\ \\relax\n p_n(Y)\n \\end{pmatrix} \\label{eq:eq2}\n\\end{equation}\n\n\\noindent and we have\n\\begin{equation}\n \\sum_a n_j(a)=m\\,, \\,\\, \\forall j \\, \\Rightarrow \\,\n \\sum_a p_j(a)=1\\,, \\,\\, \\forall j \\label{eq:eq3}\n\\end{equation}\n\nAnalogously, we could also introduce the joint probabilities\nof occurrence of a pair of amino acids \\textbf{\\emph{a}}, \\textbf{\\emph{b}}\nin columns \\textbf{\\emph{j}}, \\textbf{\\emph{k}}, respectively $P_{jk}(a,b)$,\nas the random variables. 
These are given by\n\\begin{equation}\n P_{jk}(a,b) = \\frac{n_{jk}(a,b)}{m} \\label{eq:eq4}\n\\end{equation}\n\n\\noindent where $n_{jk}(a,b)$ is the number of occurrences of the pair\nof amino acids \\textbf{\\emph{a}}, \\textbf{\\emph{b}} in columns\n\\textbf{\\emph{j}}, \\textbf{\\emph{k}}, respectively.\n\nA convenient interpretation of these joint probabilities could be\nthe elements of $\\frac{n(n-1)}{2}$, square matrices of $20 \\times 20$\nelements, to be written as\n\\begin{equation}\n P_{jk} =\n \\begin{pmatrix}\n P_{jk}(A,A) & \\ldots & P_{jk}(A,Y) \\\\ \\relax\n \\vdots & \\ddots & \\vdots \\\\ \\relax\n P_{jk}(Y,A) & \\ldots & P_{jk}(Y,Y)\n \\end{pmatrix} \\label{eq:eq5}\n\\end{equation}\n\n\\noindent where $j=1,2,\\ldots,(n-1)$; $k=j+1,\\ldots,n$.\n\nWe can also write,\n\\begin{equation}\n P_{jk}(a,b)=P_{jk}(a|b)p_k(b) \\,, \\label{eq:eq6}\n\\end{equation}\nThis equation can be also taken as another definition of joint\nprobability. $\\!P\\!_{jk}(\\!a|b)$ is the Conditional probability of\noccurrence of the amino acid \\textbf{\\emph{a}} in column\n\\textbf{\\emph{j}} if the amino acid \\textbf{\\emph{b}} is already\nfound in column \\textbf{\\emph{k}}. We then have,\n\\begin{equation}\n \\sum_a P_{jk}(a|b) = 1 \\label{eq:eq7}\n\\end{equation}\nFrom eqs.(\\ref{eq:eq6}), (\\ref{eq:eq7}), we have:\n\\begin{equation}\n \\sum_a P_{jk}(a,b) = p_k(b) \\label{eq:eq8}\n\\end{equation}\nand from eq.(\\ref{eq:eq8}),\n\\begin{equation}\n \\sum_a\\sum_b P_{jk}(a,b) = 1 \\label{eq:eq9}\n\\end{equation}\n\n\\noindent which is an identity since $P_{jk}(a,b)$ is also a probability.\n\nEqs.(\\ref{eq:eq8}) and (\\ref{eq:eq9}) can be also derived from\n\\begin{equation}\n \\sum_a n_{jk}(a,b) = n_k(b) \\,;\\quad \\sum_a\\sum_b n_{jk}(a,b)\n = m \\label{eq:eq10}\n\\end{equation}\n\n\\noindent and the definitions, eqs.(\\ref{eq:eq1}), (\\ref{eq:eq4}).\n\nWe now have from Bayes' law:\n\\begin{equation}\n P_{jk}(a|b)p_k(b) = P_{kj}(b|a)p_j(a) \\label{eq:eq10a}\n\\end{equation}\nand from eq.(\\ref{eq:eq10a}), the property of symmetry,\n\\begin{equation}\n P_{jk}(a,b) = P_{kj}(b,a) \\label{eq:eq10b}\n\\end{equation}\n\nThe matrices $P_{jk}$ can be organized in a triangular array:\n\\begin{equation}\n P =\n \\begin{matrix}\n {} & P_{12} & P_{13} & P_{14} & \\ldots & P_{1\\,n-2} & P_{1\\,n-1}\n & P_{1\\,n} \\\\ \\relax\n {} & {} & P_{23} & P_{24} & \\ldots & P_{2\\,n-2} & P_{2\\,n-1}\n & P_{2\\,n} \\\\ \\relax\n {} & {} & {} & P_{34} & \\ldots & P_{3\\,n-2} & P_{3\\,n-1}\n & P_{3\\,n} \\\\ \\relax\n {} & {} & {} & {} & \\ddots & \\vdots & \\vdots & \\vdots \\\\ \\relax\n {} & {} & {} & {} & {} & P_{n-3\\,n-2} & P_{n-3\\,n-1}\n & P_{n-3\\,n} \\\\ \\relax\n {} & {} & {} & {} & {} & {} & P_{n-2\\,n-1} & P_{n-2\\,n}\n \\\\ \\relax\n {} & {} & {} & {} & {} & {} & {} & P_{n-1\\,n}\n \\end{matrix} \\label{eq:eq11}\n\\end{equation}\nThe number of matrices until the $P_{jk}$-th one is given by\n\\begin{equation}\n C_{jk} = j(n-1) - \\frac{j(j-1)}{2} - (n-k) \\label{eq:eq12}\n\\end{equation}\n\n\\noindent These numbers can be also arranged as a triangular array:\n\\begin{equation}\n \\resizebox{.99\\hsize}{!}{$C =\n \\begin{matrix}\n 1 & 2 & 3 & 4 & 5 & 6 & \\ldots & (n-3) & (n-2) & (n-1) \\\\ \\relax\n {} & {} & {} & {} & {} & {} & {} & {} & {} & {} \\\\ \\relax\n {} & n & (n+1) & (n+2) & (n+3) & (n+4) & \\ldots & (2n-5) & (2n-4) &\n (2n-3) \\\\ \\relax\n {} & {} & {} & {} & {} & {} & {} & {} & {} & {} \\\\ \\relax\n {} & {} & (2n-2) & (2n-1) & 2n & (2n+1) & \\ldots & (3n-8) & (3n-7) &\n (3n-6) \\\\ \\relax\n 
{} & {} & {} & {} & {} & {} & {} & {} & {} & {} \\\\ \\relax\n {} & {} & {} & (3n-5) & (3n-4) & (3n-3) & \\ldots & (4n-12) & (4n-11) &\n (4n-10) \\\\ \\relax\n {} & {} & {} & {} & {} & {} & {} & {} & {} & {} \\\\ \\relax\n {} & {} & {} & {} & (4n-9) & (4n-8) & \\ldots & (5n-17) & (5n-16) & (5n-15)\n \\\\ \\relax\n {} & {} & {} & {} & {} & {} & {} & {} & {} & {} \\\\ \\relax\n {} & {} & {} & {} & {} & (5n-14) & \\ldots & (6n-23) & (6n-22) & (6n-21)\n \\\\ \\relax\n {} & {} & {} & {} & {} & {} & {} & {} & {} & {} \\\\ \\relax\n {} & {} & {} & {} & {} & {} & \\ddots & \\vdots & \\vdots & \\vdots \\\\ \\relax\n {} & {} & {} & {} & {} & {} & {} & {} & {} & {} \\\\ \\relax\n {} & {} & {} & {} & {} & {} & {} & \\frac{1}{2}(n^2-n-10) & \\frac{1}{2}\n (n^2-n-8) & \\frac{1}{2}(n+2)(n-3) \\\\ \\relax\n {} & {} & {} & {} & {} & {} & {} & {} & {} & {} \\\\ \\relax\n {} & {} & {} & {} & {} & {} & {} & {} & \\frac{1}{2}(n^2-n-4) &\n \\frac{1}{2}(n+1)(n-2) \\\\ \\relax\n {} & {} & {} & {} & {} & {} & {} & {} & {} & {} \\\\ \\relax\n {} & {} & {} & {} & {} & {} & {} & {} & {} & \\frac{1}{2}\\,n(n-1)\n \\end{matrix}\n $} \\label{eq:eq13}\n\\end{equation}\n\nEq.(\\ref{eq:eq11}) should be used for the construction of a computational\ncode to perform all necessary calculations. We postpone to other publication\nthe presentation of some interesting results on the analysis of eq.(\\ref{eq:eq13}).\n\nThe calculation of the matrix elements $P_{jk}(a,b)$ from a rectangular array\n$m \\times n$ of amino acids is done by the ``concatenation'' process\nwhich is easily implemented on computational codes. We choose a pair of\ncolumns $j=\\bar{\\jmath}$, $k=\\bar{k}$ from the strings, $a=A$, $C$,\n$\\ldots$, $W$, $Y$, $b=A$, $C$, $\\ldots$, $W$, $Y$ and we look for the\noccurrence of the combinations $ab=AA$, $AC$, $\\ldots$, $AW$, $AY$, $CA$,\n$CC$, $\\ldots$, $CW$, $CY$, $\\ldots$, $WA$, $WC$, $\\ldots$, $WW$, $WY$,\n$\\ldots$, $YA$, $YC$, $\\ldots$, $YW$, $YY$. We then calculate their numbers\nof occurrences $n_{\\bar{\\jmath}\\bar{k}}(A,A)$,\n$n_{\\bar{\\jmath}\\bar{k}}(A,C)$, $\\ldots$, $n_{\\bar{\\jmath}\\bar{k}}(Y,W)$,\n$n_{\\bar{\\jmath}\\bar{k}}(Y,Y)$ and the corresponding probabilities\n$P_{\\bar{\\jmath}\\bar{k}}(A,A)$, $P_{\\bar{\\jmath}\\bar{k}}(A,C)$, $\\ldots$,\\\\\n$P_{\\bar{\\jmath}\\bar{k}}(Y,W)$, $P_{\\bar{\\jmath}\\bar{k}}(Y,Y)$ from\neq.(\\ref{eq:eq4}). We do the same for the other $\\frac{n^2-n-2}{2}$ pairs of\ncolumns.\n\nAs an example, let us suppose that we have the $3 \\times 4$ array:\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=0.25\\linewidth]{figure1}\n \\caption{\\small An example of a $3 \\times 4$ array with amino acids\n A, C, D. \\label{fig1}}\n\\end{figure}\n\nLet us choose the pair of columns 1,2. We look for the occurrence of the\ncombinations $AA$, $AC$, $AD$, $CA$, $CC$, $CD$, $DA$, $DC$, $DD$ on the\npair of columns 1,2 of the array above and we found $n_{12}(A,C) = 1$,\n$n_{12}(C,A) = 1$, $n_{12}(D,A) = 1$. The others $n_{12}(a,b) = 0$. 
From\neq.(\\ref{eq:eq4}) we can write for the matrices $P_{jk}$ of eq.(\\ref{eq:eq5}):\n\\begin{align}\n & P_{12} =\n \\begin{pmatrix}\n 0 & 1\/3 & 0 \\\\ \\relax\n 1\/3 & 0 & 0 \\\\ \\relax\n 1\/3 & 0 & 0\n \\end{pmatrix}\n \\!;\n P_{13} =\n \\begin{pmatrix}\n 1\/3 & 0 & 0 \\\\ \\relax\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 1\/3 & 0\n \\end{pmatrix}\n \\!;\n P_{14} =\n \\begin{pmatrix}\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 1\/3 & 0\n \\end{pmatrix}\n \\nonumber \\\\\n & P_{23} =\n \\begin{pmatrix}\n 0 & 1\/3 & 1\/3 \\\\ \\relax\n 1\/3 & 0 & 0 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix}\n \\!;\n P_{24} =\n \\begin{pmatrix}\n 0 & 1\/3 & 1\/3 \\\\ \\relax\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix}\n \\!;\n P_{34} =\n \\begin{pmatrix}\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 1\/3 & 0 \\\\ \\relax\n 0 & 0 & 1\/3\n \\end{pmatrix} \\label{eq:eq14}\n\\end{align}\nThe Maple computing system ``recognizes'' the matricial structure through\nits Linear Algebra package. The Perl computing system ``operates'' only\nwith ``strings''. The results above are easily obtained in Maple, but in\nPerl we have to find alternative ways of calculating the joint probabilities.\nThe first method is to calculate the probabilities per row of the\n$3 \\times 4$ array. We have for the first row:\n\\begin{align}\n & \\Pi_{12}^{(1)} =\n \\begin{pmatrix}\n 0 & 1\/3 & 0 \\\\ \\relax\n 0 & 0 & 0 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix}\n \\!; \\,\\,\n \\Pi_{13}^{(1)} =\n \\begin{pmatrix}\n 1\/3 & 0 & 0 \\\\ \\relax\n 0 & 0 & 0 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix}\n \\!; \\,\\,\n \\Pi_{14}^{(1)} =\n \\begin{pmatrix}\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 0 & 0 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix}\n \\nonumber \\\\\n & \\Pi_{23}^{(1)} =\n \\begin{pmatrix}\n 0 & 0 & 0 \\\\ \\relax\n 1\/3 & 0 & 0 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix}\n \\!; \\,\\,\n \\Pi_{24}^{(1)} =\n \\begin{pmatrix}\n 0 & 0 & 0 \\\\ \\relax\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix}\n \\!; \\,\\,\n \\Pi_{34}^{(1)} =\n \\begin{pmatrix}\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 0 & 0 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix} \\label{eq:eq15}\n\\end{align}\nFor the second row:\n\\begin{align}\n & \\Pi_{12}^{(2)} =\n \\begin{pmatrix}\n 0 & 0 & 0 \\\\ \\relax\n 1\/3 & 0 & 0 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix}\n \\!; \\,\\,\n \\Pi_{13}^{(2)} =\n \\begin{pmatrix}\n 0 & 0 & 0 \\\\ \\relax\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix}\n \\!; \\,\\,\n \\Pi_{14}^{(2)} =\n \\begin{pmatrix}\n 0 & 0 & 0 \\\\ \\relax\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix}\n \\nonumber \\\\\n & \\Pi_{23}^{(2)} =\n \\begin{pmatrix}\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 0 & 0 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix}\n \\!; \\,\\,\n \\Pi_{24}^{(2)} =\n \\begin{pmatrix}\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 0 & 0 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix}\n \\!; \\,\\,\n \\Pi_{34}^{(2)} =\n \\begin{pmatrix}\n 0 & 0 & 0 \\\\ \\relax\n 0 & 0 & 0 \\\\ \\relax\n 0 & 0 & 1\/3\n \\end{pmatrix} \\label{eq:eq16}\n\\end{align}\nFor the third row:\n\\begin{align}\n & \\Pi_{12}^{(3)} =\n \\begin{pmatrix}\n 0 & 0 & 0 \\\\ \\relax\n 0 & 0 & 0 \\\\ \\relax\n 1\/3 & 0 & 0\n \\end{pmatrix}\n \\!; \\,\\,\n \\Pi_{13}^{(3)} =\n \\begin{pmatrix}\n 0 & 0 & 0 \\\\ \\relax\n 0 & 0 & 0 \\\\ \\relax\n 0 & 1\/3 & 0\n \\end{pmatrix}\n \\!; \\,\\,\n \\Pi_{14}^{(3)} =\n \\begin{pmatrix}\n 0 & 0 & 0 \\\\ \\relax\n 0 & 0 & 0 \\\\ \\relax\n 0 & 1\/3 & 0\n \\end{pmatrix}\n \\nonumber \\\\\n & \\Pi_{23}^{(3)} =\n \\begin{pmatrix}\n 0 & 1\/3 & 0 \\\\ \\relax\n 0 & 
0 & 0 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix}\n \\!; \\,\\,\n \\Pi_{24}^{(3)} =\n \\begin{pmatrix}\n 0 & 1\/3 & 0 \\\\ \\relax\n 0 & 0 & 0 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix}\n \\!; \\,\\,\n \\Pi_{34}^{(3)} =\n \\begin{pmatrix}\n 0 & 0 & 0 \\\\ \\relax\n 0 & 1\/3 & 0 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix} \\label{eq:eq17}\n\\end{align}\n\nWe stress that Perl does not recognize these matrix structures. This is just\nour arrangement in order to make a comparison with the Maple calculations. However,\nPerl ``knows'' how to sum the calculations done per row to obtain:\n\\begin{equation}\n \\Pi_{12}^{(1)}+\\Pi_{12}^{(2)}+\\Pi_{12}^{(3)} =\n \\begin{pmatrix}\n 0 & 1\/3 & 0 \\\\ \\relax\n 1\/3 & 0 & 0 \\\\ \\relax\n 1\/3 & 0 & 0\n \\end{pmatrix}\n \\equiv P_{12} \\label{eq:eq18}\n\\end{equation}\n\\begin{equation}\n \\Pi_{13}^{(1)}+\\Pi_{13}^{(2)}+\\Pi_{13}^{(3)} =\n \\begin{pmatrix}\n 1\/3 & 0 & 0 \\\\ \\relax\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 1\/3 & 0\n \\end{pmatrix}\n \\equiv P_{13} \\label{eq:eq19}\n\\end{equation}\n\\begin{equation}\n \\Pi_{14}^{(1)}+\\Pi_{14}^{(2)}+\\Pi_{14}^{(3)} =\n \\begin{pmatrix}\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 1\/3 & 0\n \\end{pmatrix}\n \\equiv P_{14} \\label{eq:eq20}\n\\end{equation}\n\\begin{equation}\n \\Pi_{23}^{(1)}+\\Pi_{23}^{(2)}+\\Pi_{23}^{(3)} =\n \\begin{pmatrix}\n 0 & 1\/3 & 1\/3 \\\\ \\relax\n 1\/3 & 0 & 0 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix}\n \\equiv P_{23} \\label{eq:eq21}\n\\end{equation}\n\\begin{equation}\n \\Pi_{24}^{(1)}+\\Pi_{24}^{(2)}+\\Pi_{24}^{(3)} =\n \\begin{pmatrix}\n 0 & 1\/3 & 1\/3 \\\\ \\relax\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix}\n \\equiv P_{24} \\label{eq:eq22}\n\\end{equation}\n\\begin{equation}\n \\Pi_{34}^{(1)}+\\Pi_{34}^{(2)}+\\Pi_{34}^{(3)} =\n \\begin{pmatrix}\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 1\/3 & 0 \\\\ \\relax\n 0 & 0 & 1\/3\n \\end{pmatrix}\n \\equiv P_{34} \\label{eq:eq23}\n\\end{equation}\n\nWe are then able to translate the Perl output into ``matrix language''.\nHowever, this output does not come as an ordered set of joint\nprobabilities, as we have done by arranging the output in the form of the\nmatrices $\\Pi_{jk}^{(l)}$, $j=1,2,3$, $k=2,3,4$, $l=1,2,3$. In order to\ncalculate functions of the probabilities, such as the entropy measures, it will\ntake too much time for the Perl computing system to collect the necessary\nprobability values. This is due to the ``Hash'' structure of Perl as\ncompared to the usual ``array'' structure of Maple. A new form of\narranging the strings to favour an a priori ordering will circumvent this\ninconvenience of the ``hash'' structure. 
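\n\nThe concatenation process itself can be illustrated with a short Python sketch (Python is used here only for illustration; our actual calculations were performed with Maple and Perl). The sketch reproduces, for the $3 \\times 4$ array of Fig.~\\ref{fig1}, the nonzero entries of the matrices $P_{jk}$ in (\\ref{eq:eq14}); note that the Python dictionary used below is itself a hash table, so its entries come in no prescribed order, which is precisely the inconvenience just discussed.\n\\begin{verbatim}\nfrom itertools import combinations\n\n# The 3 x 4 array of Fig. 1, one string per row (protein domain).\nrows = ['ACAD', 'CADD', 'DACC']\nm, n = len(rows), len(rows[0])\n\nP = {}                       # P[(j, k)][(a, b)] = P_jk(a, b)\nfor j, k in combinations(range(n), 2):\n    counts = {}\n    for row in rows:         # concatenate columns j and k\n        pair = (row[j], row[k])\n        counts[pair] = counts.get(pair, 0) + 1\n    P[(j, k)] = {pair: c \/ m for pair, c in counts.items()}\n\n# P[(0, 1)] holds P_12(A,C) = P_12(C,A) = P_12(D,A) = 1\/3\n\\end{verbatim}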
Let us then write the following\nextended string associated with the $m \\times n$ rectangular array:\n\\begin{equation*}\n \\Big(\\underbrace{(\\overset{1}{A}\\overset{2}{C}\\overset{3}{D})}_{1}\n \\underbrace{(\\overset{1}{C}\\overset{2}{A}\\overset{3}{A})}_{2}\n \\underbrace{(\\overset{1}{A}\\overset{2}{D}\\overset{3}{C})}_{3}\n \\underbrace{(\\overset{1}{D}\\overset{2}{D}\\overset{3}{C})}_{4}\\Big)\n\\end{equation*}\nWe then get\n\\begin{align}\n &P_{12}(A,C) = 1\/3,\\quad P_{12}(C,A) = 1\/3,\\quad P_{12}(D,A) = 1\/3 \\nonumber \\\\\n &P_{13}(A,A) = 1\/3,\\quad P_{13}(C,D) = 1\/3,\\quad P_{13}(D,C) = 1\/3 \\nonumber \\\\\n &P_{14}(A,D) = 1\/3,\\quad P_{14}(C,D) = 1\/3,\\quad P_{14}(D,C) = 1\/3 \\nonumber \\\\\n &P_{23}(C,A) = 1\/3,\\quad P_{23}(A,D) = 1\/3,\\quad P_{23}(A,C) = 1\/3 \\nonumber \\\\\n &P_{24}(C,D) = 1\/3,\\quad P_{24}(A,D) = 1\/3,\\quad P_{24}(A,C) = 1\/3 \\nonumber \\\\\n &P_{34}(A,D) = 1\/3,\\quad P_{34}(D,D) = 1\/3,\\quad P_{34}(C,C) = 1\/3 \\label{eq:eq24}\n\\end{align}\nAll the other joint probabilities $P_{jk}(a,b)$ are equal to zero.\n\nThis is a feasible treatment for the ``hash'' structure. In the example\nsolved above the probabilities will come already ordered in triads. This\nwill save time in the calculations with the Perl system.\n\nIt should be stressed that the Perl computing system does not recognize any\nformal relations of Linear Algebra. However, it does quite well if these\nrelations are converted into products and sums. In order to give an example\nof working with the Perl system, we take a calculation with the usual Shannon\nEntropy measure. The calculation of the Entropy for the pair of\ncolumns $j$, $k$ is done by\n\\begin{equation}\n S_{jk} = -\\sum_a\\sum_b P_{jk}(a,b)\\log P_{jk}(a,b) =\n -\\mathrm{Tr}\\Big(P_{jk}(\\log P_{jk})^{\\mathrm{T}}\\Big) \\label{eq:eq25}\n\\end{equation}\n\n\\noindent where $P_{jk}$ is the matrix given in eq.(\\ref{eq:eq5}) and\n$\\mathrm{Tr},\\mathrm{T}$ stand for the operations of taking the trace and transposing\na matrix, respectively. The matrix $(\\log P_{jk})^{\\mathrm{T}}$ is given by\n\\begin{equation*}\n (\\log P_{jk})^{\\mathrm{T}} =\n \\begin{pmatrix}\n \\log P_{jk}(A,A) & \\ldots & \\log P_{jk}(Y,A) \\\\ \\relax\n \\vdots & {} & \\vdots \\\\ \\relax\n \\log P_{jk}(A,Y) & \\ldots & \\log P_{jk}(Y,Y)\n \\end{pmatrix}\n\\end{equation*}\n\n\\noindent We also include, for later reference, the matrix\n\\begin{equation*}\n p_j(p_k)^{\\mathrm{T}} =\n \\begin{pmatrix}\n p_j(A)p_k(A) & \\ldots & p_j(A)p_k(Y) \\\\ \\relax\n \\vdots & {} & \\vdots \\\\ \\relax\n p_j(Y)p_k(A) & \\ldots & p_j(Y)p_k(Y)\n \\end{pmatrix}\n\\end{equation*}\n\nSince, from eqs.(\\ref{eq:eq18})--(\\ref{eq:eq23}), we have\n\\begin{equation}\n P_{jk} = \\sum_{l=1}^m \\Pi_{jk}^{(l)} \\label{eq:eq26}\n\\end{equation}\nwe can write:\n\\begin{equation}\n S_{jk} = -\\mathrm{Tr}\\left(\\left(\\sum_{l=1}^m \\Pi_{jk}^{(l)}\\right)\n (\\log P_{jk})^{\\mathrm{T}}\\right) = -\\sum_{l=1}^m \\mathrm{Tr}\n \\Big(\\Pi_{jk}^{(l)}(\\log P_{jk})^{\\mathrm{T}}\\Big) \\label{eq:eq27}\n\\end{equation}\nThere is no problem in calculating this with Perl if we prepare eq.(\\ref{eq:eq26})\nby previously expressing all the products and sums to be done. The real\nproblem with Perl calculations is the arrangement of the output of the values\n$P_{jk}(a,b)$, due to the ``hash'' structure, as has been stressed above.
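\n\nIn the same illustrative Python style as before (a sketch only, not our Perl code), the Shannon measure of eq.(\\ref{eq:eq25}) can be evaluated by summing over the stored nonzero pairs alone, since $x\\log x \\to 0$ as $x \\to 0$:\n\\begin{verbatim}\nimport math\n\ndef shannon_entropy(P_jk):\n    # S_jk of eq. (25); P_jk maps pairs (a, b) to P_jk(a, b),\n    # and only the nonzero entries contribute to the sum.\n    return -sum(p * math.log(p) for p in P_jk.values() if p > 0)\n\n# For the pair of columns (1, 2) of Fig. 1 (three entries of 1\/3):\n# S_12 = log 3 = 1.0986...\n\\end{verbatim}\nSkipping the vanishing entries of each $20 \\times 20$ matrix (at most $m$ of its $400$ entries are nonzero) is exactly the saving that the ordered arrangement makes automatic.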
\\section{Entropy Measures. The Sharma-Mittal set and the associated Jaccard\nEntropy measure}\nWe start this section with the definition of the two-parameter Sharma-Mittal\nentropies \\cite{sharma,mondaini1,mondaini2}\n\\begin{align}\n (SM)_{jk}(r,s) &= -\\frac{1}{1-r}\\left(1-\\left(\\sum_a\\sum_b\\big(P_{jk}(a,b)\n \\big)^s\\right)^{\\frac{1-r}{1-s}}\\right) \\label{eq:eq28} \\\\\n (SM)_{j}(r,s) &= -\\frac{1}{1-r}\\left(1-\\left(\\sum_a \\big(p_{j}(a)\n \\big)^s\\right)^{\\frac{1-r}{1-s}}\\right) \\label{eq:eq29}\n\\end{align}\n\n\\noindent where $p_j(a)$ and $P_{jk}(a,b)$ are the simple and joint probabilities\nof occurrence of amino acids as defined in eqs.(\\ref{eq:eq1}) and (\\ref{eq:eq4}),\nrespectively, and \\textbf{\\emph{r}}, \\textbf{\\emph{s}} are non-dimensional parameters.\n\nWe can associate with the entropy measures above their corresponding\none-parameter forms, given by the limits:\n\\begin{align}\n H_{jk}(s) &= \\lim_{r \\to s} (SM)_{jk}(r,s) = -\\frac{1}{1-s}\n \\left(1-\\sum_a\\sum_b\\big(P_{jk}(a,b)\\big)^s\\right) \\label{eq:eq30} \\\\\n H_{j}(s) &= \\lim_{r \\to s} (SM)_{j}(r,s) = -\\frac{1}{1-s}\n \\left(1-\\sum_a\\big(p_{j}(a)\\big)^s\\right) \\label{eq:eq31}\n\\end{align}\n\n\\noindent These are the Havrda-Charvat Entropy measures, and they will be\nespecially emphasized in the present work. Alternative proposals for\nsingle-parameter entropies are given below.\n\n\\noindent The R\\'enyi Entropy measures:\n\\begin{align}\n R_{jk}(s) &= \\lim_{r \\to 1} (SM)_{jk}(r,s) = \\frac{1}{1-s}\n \\log\\left(\\sum_a\\sum_b\\big(P_{jk}(a,b)\\big)^s\\right) \\label{eq:eq32} \\\\\n R_{j}(s) &= \\lim_{r \\to 1} (SM)_{j}(r,s) = \\frac{1}{1-s}\n \\log\\left(\\sum_a\\big(p_{j}(a)\\big)^s\\right) \\label{eq:eq33}\n\\end{align}\nThe Landsberg-Vedral Entropy measures:\n\\begin{align}\n L_{jk}(s) =\\! \\lim_{\\quad r \\to 2-s} (SM)_{jk}(r,s) &= \\frac{1}{1-s}\n \\left(1-\\Big(\\sum_a\\sum_b\\big(P_{jk}(a,b)\\big)^s\\Big)^{-1}\\right)\n \\label{eq:eq34} \\\\\n &= \\frac{H_{jk}(s)}{\\sum\\limits_a\\sum\\limits_b\\big(P_{jk}(a,b)\\big)^s}\n \\nonumber \\\\\n L_{j}(s) =\\! \\lim_{\\quad r \\to 2-s} (SM)_{j}(r,s) &= \\frac{1}{1-s}\n \\left(1-\\Big(\\sum_a\\big(p_{j}(a)\\big)^s\\Big)^{-1}\\right) \\label{eq:eq35} \\\\\n &= \\frac{H_j(s)}{\\sum\\limits_a\\big(p_j(a)\\big)^s} \\nonumber\n\\end{align}\nAll these Entropy measures reduce to the parameter-free Shannon entropy\nin the limit $s \\to 1$:\n\\begin{align}\n \\lim_{s \\to 1} H_{jk}(s) &= \\lim_{s \\to 1} R_{jk}(s) = \\lim_{s \\to 1}\n L_{jk}(s) = S_{jk} \\label{eq:eq36} \\\\\n \\lim_{s \\to 1} H_j(s) &= \\lim_{s \\to 1} R_j(s) = \\lim_{s \\to 1} L_j(s)\n = S_j \\label{eq:eq37}\n\\end{align}\nwhere\n\\begin{align}\n S_{jk} &= -\\sum_a\\sum_b P_{jk}(a,b)\\log P_{jk}(a,b) \\tag{\\ref{eq:eq25}} \\\\\n S_j &= -\\sum_a p_j(a)\\log p_j(a) \\label{eq:eq38}\n\\end{align}\nare the Shannon entropy measures \\cite{mondaini6}.\n\nWe now introduce a convenient version of a Mutual Information measure:\n\\begin{equation}\n M_{jk}(r,s) = \\frac{1}{1-r}\\left(1-\\left(\\frac{\\sum\\limits_a\\sum\\limits_b\n \\big(P_{jk}(a,b)\\big)^s}{\\sum\\limits_a\\sum\\limits_b\\big(p_j(a)p_k(b)\n \\big)^s}\\right)^{\\frac{1-r}{1-s}}\\right) \\label{eq:eq39}\n\\end{equation}\n\n\\noindent We can see that $M_{jk}(r,0) = 0$ and that if $\\exists\\, \\bar{\\jmath}, \\bar{k}$\nsuch that $P_{\\bar{\\jmath}\\bar{k}}(a,b) = p_{\\bar{\\jmath}}(a)p_{\\bar{k}}(b)$,\nthen $M_{\\bar{\\jmath}\\bar{k}}(r,s) = 0$.
We also have\n\\begin{equation}\n M_{jk}(1,s) = \\lim_{r \\to 1} M_{jk}(r,s) = -\\frac{1}{1-s}\\log\\left(\n \\frac{\\sum\\limits_a\\sum\\limits_b\\big(P_{jk}(a,b)\\big)^s}{\\sum\\limits_a\n \\sum\\limits_b\\big(p_j(a)p_k(b)\\big)^s}\\right) \\label{eq:eq40}\n\\end{equation}\nand, in the limit $s \\to 1$ (by l'H\\^opital's rule),\n\\begin{align}\n M_{jk} = \\lim_{s \\to 1} M_{jk}(1,s) =& \\sum_a\\sum_b P_{jk}(a,b)\\log\n P_{jk}(a,b) \\nonumber \\\\\n &- \\sum_a\\sum_b p_j(a)p_k(b)\\log\\big(p_j(a)p_k(b)\\big) \\label{eq:eq41}\n\\end{align}\nFrom the identities\n\\begin{align*}\n &\\sum_a p_j(a) = 1 \\,,\\, \\forall j \\,; \\quad \\sum_b p_k(b) = 1 \\,,\\,\n \\forall k \\\\\n &\\sum_a P_{jk}(a,b) = p_k(b) \\,,\\, \\forall j \\,; \\quad \\sum_b P_{jk}(a,b)\n = p_j(a) \\,,\\, \\forall k\n\\end{align*}\nobtained from eqs.(\\ref{eq:eq3}), (\\ref{eq:eq4}), (\\ref{eq:eq6}), (\\ref{eq:eq7}),\nwe can also write, instead of eq.(\\ref{eq:eq41}):\n\\begin{equation}\n M_{jk} = \\sum_a\\sum_b P_{jk}(a,b)\\log P_{jk}(a,b) - \\sum_a\\sum_b P_{jk}(a,b)\n \\log\\big(p_j(a)p_k(b)\\big) \\label{eq:eq42}\n\\end{equation}\nIt should be stressed that we are not assuming $P_{jk}(a,b) \\equiv\np_j(a)p_k(b)$ above. This equality is assumed to be valid only for\n$j = \\bar{\\jmath}$, $k = \\bar{k}$.\n\nEq.(\\ref{eq:eq41}) or (\\ref{eq:eq42}) can also be written as:\n\\begin{equation}\n M_{jk} = -S_{jk} + S_j + S_k \\label{eq:eq43}\n\\end{equation}\n\n\\noindent where $S_{jk}$ and $S_j$, $S_k$ are the Shannon entropy measures for\njoint and single probabilities, respectively, eqs.(\\ref{eq:eq25}), (\\ref{eq:eq38}).\n\nAs an additional topic, we emphasize that the Mutual Information measure can\nalso be derived from the Kullback-Leibler divergence \\cite{mondaini6}, which is\nwritten as\n\\begin{equation}\n (KL)_{jk}(b) = \\sum_a P_{jk}(a|b)\\log\\left(\\frac{P_{jk}(a|b)}{p_j(a)}\n \\right) \\label{eq:eq44}\n\\end{equation}\n\n\\noindent where $P_{jk}(a|b)$ is the Conditional probability, eq.(\\ref{eq:eq8}).\nWe then have\n\\begin{equation}\n (KL)_{jk}(b) = \\sum_a \\frac{P_{jk}(a,b)}{p_k(b)}\\log\\left(\\frac{P_{jk}\n (a,b)}{p_j(a)p_k(b)}\\right) \\label{eq:eq45}\n\\end{equation}\n\n\\noindent and the $M_{jk}$ mutual information measure will be given by\n\\begin{equation}\n M_{jk} = \\sum_b p_k(b)(KL)_{jk}(b) = \\sum_a\\sum_b P_{jk}(a,b)\\log\\left(\n \\frac{P_{jk}(a,b)}{p_j(a)p_k(b)}\\right) \\label{eq:eq46}\n\\end{equation}\nwhich is the same as eq.(\\ref{eq:eq42}), q.e.d.\n\nAs the last topic of this section, we now introduce the concept of\nInformation Distance, and we then derive the Jaccard Entropy measure as\nan obvious consequence.
Let us write:\n\\begin{equation}\n d_{jk}(r,s) = H_{jk}(r,s) - M_{jk}(r,s) \\label{eq:eq47}\n\\end{equation}\nwhere $H_{jk}(r,s)$ stands for the two-parameter Sharma-Mittal entropy\n$(SM)_{jk}(r,s)$ of eq.(\\ref{eq:eq28}). Since we are working with Entropy\nmeasures, we have to satisfy the non-negativeness criteria:\n\\begin{equation}\n H_{jk}(r,s) \\geq 0 \\,;\\quad M_{jk}(r,s) \\geq 0 \\,;\\quad H_{jk}(r,s)\n -M_{jk}(r,s) \\geq 0 \\label{eq:eq48}\n\\end{equation}\nThis means that, in order to satisfy the inequalities (\\ref{eq:eq48}),\nrestrictions on the $r$, $s$ parameters should be discovered and taken into\naccount in the description of protein databases by Entropy measures like\n$H_{jk}(r,s)$.\n\nFrom inequalities (\\ref{eq:eq48}), we can write\n\\begin{equation}\n 0 \\leq d_{jk}(r,s) = H_{jk}(r,s) - M_{jk}(r,s) \\leq H_{jk}(r,s)\n \\label{eq:eq49}\n\\end{equation}\nand\n\\begin{equation}\n 0 \\leq J_{jk}(r,s) \\leq 1 \\label{eq:eq50}\n\\end{equation}\nwhere\n\\begin{equation}\n J_{jk}(r,s) = 1-\\frac{M_{jk}(r,s)}{H_{jk}(r,s)} \\label{eq:eq51}\n\\end{equation}\n\n\\noindent is the normalized Jaccard Entropy measure, as obtained from the\nnormalized Information Distance. We give below the results of checking\nthe inequalities (\\ref{eq:eq48}) for some families of the Pfam database. We\nshall take the limit $r \\to s$ and work with the corresponding\none-parameter Entropy measures: $H_j(s)$, $H_{jk}(s)$, $M_{jk}(s)$,\n$J_{jk}(s)$. We then have to check:\n\\begin{align*}\n H_{jk}(s) \\geq 0 \\,&,\\quad M_{jk}(s) \\geq 0 \\,, \\quad H_{jk}(s)-M_{jk}(s)\n \\geq 0 \\,, \\\\\n &0 \\leq J_{jk}(s) = 1 - \\frac{M_{jk}(s)}{H_{jk}(s)} \\leq 1\n\\end{align*}\n\n\\begin{table}[H]\n \\begin{center}\n \\caption{\\small Number of negative $H_{jk}(s)$, $M_{jk}(s)$\n and $d_{jk}(s)$ values found for the protein family PF06850. \\label{tab1}}\n \\begin{tabular}{|c|c|c|c|}\n \\hline\n $\\mathbf{s}$ & $\\mathbf{H_{jk}(s)}$ & $\\mathbf{M_{jk}(s)}$ &\n $\\mathbf{d_{jk}(s)}$ \\\\\n \\hline\n $0.1$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $0.3$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $0.5$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $0.7$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $0.9$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $1.0$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $1.2$ & $0$ & $718$ & $16$ \\\\\n \\hline\n $1.5$ & $0$ & $1708$ & $38$ \\\\\n \\hline\n $1.7$ & $0$ & $2351$ & $61$ \\\\\n \\hline\n $1.9$ & $0$ & $2898$ & $192$ \\\\\n \\hline\n $2.0$ & $0$ & $3139$ & $309$ \\\\\n \\hline\n \\end{tabular}\n \\end{center}\n\\end{table}\n\n\\noindent The $s$-values corresponding to negative $M_{jk}(s)$ values do not\nlead to a useful characterization of the Jaccard Entropy measure, according\nto the inequality in eq.(\\ref{eq:eq49}), which is violated in this case;\nthese $s$-values will not be taken into consideration. Other studies of the\nEntropy values, especially those of the behaviour of the association of\nentropies, will give additional restrictions on the feasible $s$-range. The\nscope of the present work does not allow an intensive study of these\ntechniques of entropy association \\cite{mondaini2}; they will appear in\na forthcoming contribution.\n
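The check above, for one pair of columns and one $s$-value, can be sketched in\nPerl as follows (a minimal sketch with our own naming conventions; this is not\nthe code used to produce the tables), using the $3 \\times 4$ example array\nintroduced earlier:\n\\begin{verbatim}\n#!\/usr\/bin\/perl\nuse strict;\nuse warnings;\n\n# One pair of columns (j,k): joint probabilities and marginals,\n# with s fixed; H from eq.(30), M from eq.(39) with r -> s,\n# J from eq.(51).\nmy $s  = 0.5;\nmy %P  = ( 'A:C' => 1\/3, 'C:A' => 1\/3, 'D:A' => 1\/3 );   # P_jk(a,b)\nmy %pj = ( 'A' => 1\/3, 'C' => 1\/3, 'D' => 1\/3 );         # p_j(a)\nmy %pk = ( 'A' => 2\/3, 'C' => 1\/3 );                     # p_k(b)\n\nmy ($sumP, $sumpp) = (0, 0);\n$sumP += $_ ** $s for values %P;\nfor my $a (keys %pj) {\n    for my $b (keys %pk) {\n        $sumpp += ($pj{$a} * $pk{$b}) ** $s;\n    }\n}\nmy $H = -(1 - $sumP) \/ (1 - $s);\nmy $M = (1 - $sumP \/ $sumpp) \/ (1 - $s);\nmy $J = 1 - $M \/ $H;\nprintf "H = %.4f  M = %.4f  J = %.4f\\n", $H, $M, $J;\n\\end{verbatim}\nHere all three non-negativeness criteria hold and $0 \\leq J_{jk}(s) \\leq 1$,\nas expected for $s \\leq 1$.\n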
\\begin{table}[!ht]\n \\begin{center}\n \\caption{\\small Number of negative $H_{jk}(s)$, $M_{jk}(s)$\n and $d_{jk}(s)$ values found for the protein family PF00135. \\label{tab2}}\n \\begin{tabular}{|c|c|c|c|}\n \\hline\n $\\mathbf{s}$ & $\\mathbf{H_{jk}(s)}$ & $\\mathbf{M_{jk}(s)}$ &\n $\\mathbf{d_{jk}(s)}$ \\\\\n \\hline\n $0.1$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $0.3$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $0.5$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $0.7$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $0.9$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $1.0$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $1.2$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $1.5$ & $0$ & $0$ & $467$ \\\\\n \\hline\n $1.7$ & $0$ & $0$ & $14509$ \\\\\n \\hline\n $1.9$ & $0$ & $0$ & $19026$ \\\\\n \\hline\n $2.0$ & $0$ & $0$ & $19451$ \\\\\n \\hline\n \\end{tabular}\n \\end{center}\n\\end{table}\n\\begin{table}[!hbt]\n \\begin{center}\n \\caption{\\small Number of negative $H_{jk}(s)$, $M_{jk}(s)$\n and $d_{jk}(s)$ values found for the protein family PF00005. \\label{tab3}}\n \\begin{tabular}{|c|c|c|c|}\n \\hline\n $\\mathbf{s}$ & $\\mathbf{H_{jk}(s)}$ & $\\mathbf{M_{jk}(s)}$ &\n $\\mathbf{d_{jk}(s)}$ \\\\\n \\hline\n $0.1$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $0.3$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $0.5$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $0.7$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $0.9$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $1.0$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $1.2$ & $0$ & $8$ & $5$ \\\\\n \\hline\n $1.5$ & $0$ & $33$ & $4741$ \\\\\n \\hline\n $1.7$ & $0$ & $55$ & $9679$ \\\\\n \\hline\n $1.9$ & $0$ & $65$ & $12442$ \\\\\n \\hline\n $2.0$ & $0$ & $69$ & $13203$ \\\\\n \\hline\n \\end{tabular}\n \\end{center}\n\\end{table}\n\nThe results in the previous three tables clarify the idea of restricting the\n$s$-values of entropy measures in order to obtain a sound classification of\nfamilies and clans in the Pfam database.\nWe now announce that the non-negativeness of the values of\n$H_{jk}(s)$, $M_{jk}(s)$ and $d_{jk}(s)$ is actually guaranteed if\nwe restrict ourselves to $s \\leq 1$, for all 1069 families, which are\nclassified into 68 clans and have already been characterized in section 2.\nIn figures \\ref{fig2}, \\ref{fig3}, \\ref{fig4}, we present the histograms of the\nJaccard Entropy measures for some values $s \\leq 1$ of the\n$s$-parameter.\n\\begin{figure}[p]\n \\centering\n \\includegraphics[width=1\\linewidth]{figure2}\n \\caption{\\small Histograms of Jaccard Entropy for family PF06850.\n \\label{fig2}}\n\\end{figure}\n\\begin{figure}[p]\n \\centering\n \\includegraphics[width=1\\linewidth]{figure3}\n \\caption{\\small Histograms of Jaccard Entropy for family PF00135.\n \\label{fig3}}\n\\end{figure}\n\\begin{figure}[p]\n \\centering\n \\includegraphics[width=1\\linewidth]{figure4}\n \\caption{\\small Histograms of Jaccard Entropy for family PF00005.\n \\label{fig4}}\n\\end{figure}\n\n\\newpage\n\nWe also present the curves corresponding to the Average Jaccard Entropy\nmeasure for 09 families; under the restriction $s \\leq 1$ this is a\nwell-posed measure, given by\n\\begin{equation*}\n J(s,f) = \\frac{2}{n(n-1)}\\sum_j\\sum_k J_{jk}(s,f)\n\\end{equation*}\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=1\\linewidth]{figure5}\n \\caption{\\small Curves of the Average Jaccard Entropy measures for families\n PF00005, PF05673, PF13481, PF00135, PF06850, PF11339, PF02388, PF09924,\n PF13718. 
\\label{fig5}}\n\\end{figure}\n\n\\section{A First Assessment of Protein Databases with Entropy Measures}\nAs a motivation for the future research to be developed in sections 6 and 7, we\nnow introduce a first application of the formulae derived in the previous\nsections, in terms of a naive analysis of averages and standard deviations of\nEntropy measure distributions. This will also be the first attempt at\nclassifying the distribution of amino acids in a generic protein database.\nA robust approach to this research topic will be introduced and intensively\nanalyzed in sections 6 and 7, with the introduction of ANOVA statistics and the\ncorresponding Hypothesis testing.\n\nWe then consider a Clan with \\textbf{\\emph{F}} families. The Havrda-Charvat\nentropy measure associated with a pair of columns of the representative\n$m \\times n$ array of each family, with a specified value of the\n\\textbf{\\emph{s}} parameter, is given by\n\\begin{equation}\n H_{jk}(s;f) = -\\frac{1}{1-s} \\left(1-\\sum_a\\sum_b\\big(P_{jk}(a,b;f)\n \\big)^s\\right) \\label{eq:eq52}\n\\end{equation}\n\n\\noindent We can then define an average of these entropy measures for\neach family by\n\\begin{equation}\n \\langle H(s;f) \\rangle = \\frac{2}{n(n-1)} \\sum_j\\sum_k H_{jk}(s;f)\n \\label{eq:eq53}\n\\end{equation}\n\n\\noindent We also consider the average value of the averages over the\nset of \\emph{F} families:\n\\begin{equation}\n \\langle H(s) \\rangle_F = \\frac{1}{F} \\sum_{f=1}^F \\langle H(s;f) \\rangle\n \\label{eq:eq54}\n\\end{equation}\n\n\\noindent The Standard deviation of the Entropy measures $H_{jk}(s;f)$\nwith relation to the average given in eq.(\\ref{eq:eq53}) can be written\nas:\n\\begin{equation}\n \\sigma(s;f) = \\left(\\frac{1}{\\frac{n(n-1)}{2}-1}\\sum_j\\sum_k\\big(\n H_{jk}(s;f)-\\langle H(s;f) \\rangle\\big)^2\\right)^{1\/2} \\label{eq:eq55}\n\\end{equation}\n\n\\noindent and finally, the Standard deviation of the average\n$\\langle H(s;f)\\rangle$ with respect to the average\n$\\langle H(s)\\rangle_F$:\n\\begin{equation}\n \\sigma_F(s) = \\left(\\frac{1}{F-1} \\sum_{f=1}^F \\big(\\langle H(s;f)\\rangle -\n \\langle H(s)\\rangle_F\\big)^2\\right)^{1\/2} \\label{eq:eq56}\n\\end{equation}\n\nWe present in figs.\\ref{fig6}, \\ref{fig7} below the diagrams corresponding to\nformulae (\\ref{eq:eq53}) and (\\ref{eq:eq55}). We should stress that only\nClans with a minimum of five families are considered.\n\\begin{figure}[p]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{figure6}\n \\caption{\\small The Average values of the Havrda-Charvat Entropy measures\n for the families of a selected set of Clans and eleven values of the\n $s$-parameter, eq.(\\ref{eq:eq53}). \\label{fig6}}\n\\end{figure}\n\n\n\\begin{figure}[p]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figure7}\n \\caption{\\small The standard deviation of the Havrda-Charvat Entropy\n measures with relation to the averages of these entropies for each family\n and eleven values of the $s$-parameter, eq.(\\ref{eq:eq55}). \\label{fig7}}\n\\end{figure}\n\n\\newpage\n\nWe now present the values of $\\langle H(s)\\rangle_F$ and $\\sigma_F(s)$ for\na selected number of Clans and eleven values of the $s$-parameter, according\nto eqs.(\\ref{eq:eq54}) and (\\ref{eq:eq56}).\n\\begin{table}[H]\n \\begin{center}\n \\caption{\\small The average values and the standard deviation of the Average\n Havrda-Charvat Entropy measures for eleven values of the $s$-parameter and\n a selected set of 8 Clans. 
\\label{tab4}}\n \\begingroup\n \\everymath{\\scriptstyle}\n \\footnotesize\n \\begin{tabular}{|c|c|c|c|c|c|c|c|c|}\n \\hline\n \\multicolumn{9}{|c|}{Clans from Pfam 27.0 --- Havrda-Charvat\n Entropies} \\\\\n \\hline\n Clan number & $s$ & $\\langle H(s)\\rangle_F$ & $\\sigma(s)_F$ & {} &\n Clan number & $s$ & $\\langle H(s)\\rangle_F$ & $\\sigma(s)_F$ \\\\\n \\cline{1-4} \\cline{6-9}\n {} & $0.1$ & $44.212$ & $6.396$ & {} &\n {} & $0.1$ & $43.001$ & $4.667$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.3$ & $23.375$ & $3.057$ & {} &\n {} & $0.3$ & $22.851$ & $2.295$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.5$ & $12.992$ & $1.492$ & {} &\n {} & $0.5$ & $12.765$ & $1.161$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.7$ & $7.645$ & $0.746$ & {} &\n {} & $0.7$ & $7.546$ & $0.605$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.9$ & $4.784$ & $0.384$ & {} &\n {} & $0.9$ & $4.741$ & $0.326$ \\\\\n \\cline{2-4} \\cline{7-9}\n CL0020 & $1.0$ & $3.875$ & $0.278$ & {} &\n CL0123 & $1.0$ & $3.846$ & $0.242$ \\\\\n \\cline{2-4} \\cline{7-9}\n (38 families) & $1.2$ & $2.661$ & $0.150$ & {} &\n (06 families) & $1.2$ & $2.648$ & $0.137$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.5$ & $1.680$ & $0.063$ & {} &\n {} & $1.5$ & $1.676$ & $0.062$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.7$ & $1.311$ & $0.038$ & {} &\n {} & $1.7$ & $1.309$ & $0.038$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.9$ & $1.062$ & $0.023$ & {} &\n {} & $1.9$ & $1.061$ & $0.024$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $2.0$ & $0.967$ & $0.018$ & {} &\n {} & $2.0$ & $0.966$ & $0.019$ \\\\\n %\n \\cline{1-4} \\cline{6-9}\n {} & $0.1$ & $39.235$ & $10.175$ & {} &\n {} & $0.1$ & $46.084$ & $6.790$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.3$ & $20.908$ & $4.984$ & {} &\n {} & $0.3$ & $24.260$ & $3.224$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.5$ & $11.733$ & $2.510$ & {} &\n {} & $0.5$ & $13.417$ & $1.561$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.7$ & $6.982$ & $1.305$ & {} &\n {} & $0.7$ & $7.853$ & $0.772$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.9$ & $4.422$ & $0.704$ & {} &\n {} & $0.9$ & $4.886$ & $0.392$ \\\\\n \\cline{2-4} \\cline{7-9}\n CL0023 & $1.0$ & $3.604$ & $0.525$ & {} &\n CL0186 & $1.0$ & $3.949$ & $0.282$ \\\\\n \\cline{2-4} \\cline{7-9}\n (119 families) & $1.2$ & $2.504$ & $0.302$ & {} &\n (29 families) & $1.2$ & $2.701$ & $0.149$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.5$ & $1.606$ & $0.144$ & {} &\n {} & $1.5$ & $1.696$ & $0.060$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.7$ & $1.263$ & $0.093$ & {} &\n {} & $1.7$ & $1.320$ & $0.035$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.9$ & $1.030$ & $0.063$ & {} &\n {} & $1.9$ & $1.067$ & $0.020$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $2.0$ & $0.940$ & $0.053$ & {} &\n {} & $2.0$ & $0.970$ & $0.016$ \\\\\n \\cline{1-4} \\cline{6-9}\n %\n {} & $0.1$ & $44.906$ & $7.996$ & {} &\n {} & $0.1$ & $42.862$ & $4.566$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.3$ & $23.671$ & $3.906$ & {} &\n {} & $0.3$ & $22.791$ & $2.190$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.5$ & $13.114$ & $1.961$ & {} &\n {} & $0.5$ & $12.741$ & $1.072$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.7$ & $7.692$ & $1.015$ & {} &\n {} & $0.7$ & $7.538$ & $0.537$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.9$ & $4.799$ & $0.545$ & {} &\n {} & $0.9$ & $4.739$ & $0.275$ \\\\\n \\cline{2-4} \\cline{7-9}\n CL0028 & $1.0$ & $3.882$ & $0.406$ & {} &\n CL0192 & $1.0$ & $3.847$ & $0.199$ \\\\\n \\cline{2-4} \\cline{7-9}\n (41 families) & $1.2$ & $2.660$ & $0.232$ & {} &\n (26 families) & $1.2$ & $2.650$ & $0.107$ \\\\\n 
\\cline{2-4} \\cline{7-9}\n {} & $1.5$ & $1.676$ & $0.109$ & {} &\n {} & $1.5$ & $1.678$ & $0.044$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.7$ & $1.307$ & $0.070$ & {} &\n {} & $1.7$ & $1.311$ & $0.026$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.9$ & $1.058$ & $0.047$ & {} &\n {} & $1.9$ & $1.062$ & $0.015$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $2.0$ & $0.963$ & $0.039$ & {} &\n {} & $2.0$ & $0.967$ & $0.012$ \\\\\n %\n \\cline{1-4} \\cline{6-9}\n {} & $0.1$ & $42.312$ & $8.023$ & {} &\n {} & $0.1$ & $43.251$ & $7.469$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.3$ & $22.454$ & $3.857$ & {} &\n {} & $0.3$ & $22.905$ & $3.564$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.5$ & $12.534$ & $1.896$ & {} &\n {} & $0.5$ & $12.757$ & $1.734$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.7$ & $7.411$ & $0.956$ & {} &\n {} & $0.7$ & $7.524$ & $0.863$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.9$ & $4.660$ & $0.496$ & {} &\n {} & $0.9$ & $4.719$ & $0.440$ \\\\\n \\cline{2-4} \\cline{7-9}\n CL0063 & $1.0$ & $3.784$ & $0.362$ & {} &\n CL0236 & $1.0$ & $3.828$ & $0.317$ \\\\\n \\cline{2-4} \\cline{7-9}\n (92 families) & $1.2$ & $2.611$ & $0.198$ & {} &\n (21 families) & $1.2$ & $2.636$ & $0.169$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.5$ & $1.658$ & $0.086$ & {} &\n {} & $1.5$ & $1.669$ & $0.069$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.7$ & $1.297$ & $0.051$ & {} &\n {} & $1.7$ & $1.304$ & $0.040$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.9$ & $1.053$ & $0.032$ & {} &\n {} & $1.9$ & $1.058$ & $0.024$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $2.0$ & $0.960$ & $0.026$ & {} &\n {} & $2.0$ & $0.964$ & $0.019$ \\\\\n \\cline{1-4} \\cline{6-9}\n \\end{tabular}\n \\endgroup\n \\end{center}\n\\end{table}\n\n\nIn figs.\\ref{fig8}a and \\ref{fig8}b below, we present the graphs\ncorresponding to Table \\ref{tab4}. These results already point to the need\nfor a more elaborate formulation of the problem.\n\n\\begin{figure}[!hbt]\n \\centering\n \\includegraphics[width=1\\linewidth]{figure8}\n \\caption{\\small (a) The average values of Havrda-Charvat Entropies for a\n set of 08 Clans. (b) The Standard Deviation of the averages for\n Havrda-Charvat Entropies for a set of 08 Clans.\\label{fig8}}\n\\end{figure}\n\nWe now proceed to analyze a (naive) proposal for testing the robustness of\nthe Clan concept. We will check whether Pseudo-Clans which have the same number\nof families (a minimum of 05 families) as the corresponding Clans have\nessentially different values of $\\langle H(s)\\rangle_F$ and $\\sigma_F(s)$.\nThe families to be associated with a Pseudo-Clan are obtained by random draws\nfrom the set of 1069 families, with withdrawal of the families already drawn\n(i.e., sampling without replacement).\nIn Table \\ref{tab5} below we present the values $\\langle H(s)\\rangle_F$ and\n$\\sigma_F(s)$ for the Pseudo-Clans obtained by the procedure described above.\n\nFigures \\ref{fig9}a and \\ref{fig9}b correspond to the comparison of the data of\nTable \\ref{tab4} (Clans) with those of Table \\ref{tab5} (Pseudo-Clans). Clans are\nin red, Pseudo-Clans in blue.\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=1\\linewidth]{figure9}\n \\caption{\\small (a) The comparison of Clans and Pseudo-Clans average values.\n (b) The comparison of Clans and Pseudo-Clans standard deviation values.\n \\label{fig9}}\n\\end{figure}\n\n\\begin{table}[!ht]\n \\begin{center}\n \\caption{\\small The average values and the standard deviation of the\n Average Havrda-Charvat Entropy measures for eleven values of the\n $s$-parameter and a selected set of 8 Pseudo-Clans. 
\\label{tab5}}\n \\begingroup\n \\everymath{\\scriptstyle}\n \\footnotesize\n \\begin{tabular}{|c|c|c|c|c|c|c|c|c|}\n \\hline\n \\multicolumn{9}{|c|}{Pseudo-Clans \/ Pfam 27.0 --- Havrda-Charvat\n Entropies} \\\\\n \\hline\n Pseudo-Clan & \\multicolumn{1}{c|}{\\multirow{2}{*}{$s$}} &\n \\multicolumn{1}{c|}{\\multirow{2}{*}{$\\langle H(s)\\rangle_F$}} &\n \\multicolumn{1}{c|}{\\multirow{2}{*}{$\\sigma(s)_F$}} &\n \\multicolumn{1}{c|}{} & Pseudo-Clan &\n \\multicolumn{1}{c|}{\\multirow{2}{*}{$s$}} &\n \\multicolumn{1}{c|}{\\multirow{2}{*}{$\\langle H(s)\\rangle_F$}} &\n \\multicolumn{1}{c|}{\\multirow{2}{*}{$\\sigma(s)_F$}} \\\\\n number & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} &\n \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} & number &\n \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} &\n \\multicolumn{1}{c|}{} \\\\\n \\cline{1-4} \\cline{6-9}\n {} & $0.1$ & $42.381$ & $8.224$ & {} &\n {} & $0.1$ & $42.042$ & $4.484$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.3$ & $22.480$ & $3.979$ & {} &\n {} & $0.3$ & $22.359$ & $2.175$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.5$ & $12.542$ & $1.972$ & {} &\n {} & $0.5$ & $12.508$ & $1.082$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.7$ & $7.410$ & $1.005$ & {} &\n {} & $0.7$ & $7.409$ & $0.555$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.9$ & $4.657$ & $0.528$ & {} &\n {} & $0.9$ & $4.667$ & $0.293$ \\\\\n \\cline{2-4} \\cline{7-9}\n PCL0020 & $1.0$ & $3.781$ & $0.389$ & {} &\n PCL0123 & $1.0$ & $3.791$ & $0.216$ \\\\\n \\cline{2-4} \\cline{7-9}\n (38 families) & $1.2$ & $2.607$ & $0.216$ & {} &\n (06 families) & $1.2$ & $2.618$ & $0.120$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.5$ & $1.655$ & $0.096$ & {} &\n {} & $1.5$ & $1.663$ & $0.053$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.7$ & $1.295$ & $0.059$ & {} &\n {} & $1.7$ & $1.301$ & $0.032$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.9$ & $1.051$ & $0.037$ & {} &\n {} & $1.9$ & $1.056$ & $0.020$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $2.0$ & $0.958$ & $0.030$ & {} &\n {} & $2.0$ & $0.962$ & $0.016$ \\\\\n %\n \\cline{1-4} \\cline{6-9}\n {} & $0.1$ & $41.023$ & $9.007$ & {} &\n {} & $0.1$ & $40.888$ & $7.719$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.3$ & $21.825$ & $4.360$ & {} &\n {} & $0.3$ & $21.790$ & $3.768$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.5$ & $12.219$ & $2.163$ & {} &\n {} & $0.5$ & $12.220$ & $1.891$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.7$ & $7.247$ & $1.104$ & {} &\n {} & $0.7$ & $7.258$ & $0.981$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.9$ & $4.573$ & $0.582$ & {} &\n {} & $0.9$ & $4.585$ & $0.529$ \\\\\n \\cline{2-4} \\cline{7-9}\n PCL0023 & $1.0$ & $3.714$ & $0.427$ & {} &\n PCL0186 & $1.0$ & $3.730$ & $0.394$ \\\\\n \\cline{2-4} \\cline{7-9}\n (119 families) & $1.2$ & $2.573$ & $0.240$ & {} &\n (29 families) & $1.2$ & $2.582$ & $0.227$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.5$ & $1.640$ & $0.109$ & {} &\n {} & $1.5$ & $1.646$ & $0.109$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.7$ & $1.286$ & $0.068$ & {} &\n {} & $1.7$ & $1.290$ & $0.071$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.9$ & $1.046$ & $0.044$ & {} &\n {} & $1.9$ & $1.048$ & $0.049$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $2.0$ & $0.954$ & $0.037$ & {} &\n {} & $2.0$ & $0.956$ & $0.041$ \\\\\n \\cline{1-4} \\cline{6-9}\n %\n {} & $0.1$ & $42.716$ & $7.435$ & {} &\n {} & $0.1$ & $42.756$ & $9.361$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.3$ & $23.658$ & $3.573$ & {} &\n {} & $0.3$ & $22.666$ & $4.535$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.5$ & $12.641$ & $1.756$ & {} &\n {} & $0.5$ 
& $12.635$ & $2.253$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.7$ & $7.468$ & $0.886$ & {} &\n {} & $0.7$ & $7.458$ & $1.151$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.9$ & $4.692$ & $0.460$ & {} &\n {} & $0.9$ & $4.681$ & $0.608$ \\\\\n \\cline{2-4} \\cline{7-9}\n PCL0028 & $1.0$ & $3.808$ & $0.335$ & {} &\n PCL0192 & $1.0$ & $3.797$ & $0.448$ \\\\\n \\cline{2-4} \\cline{7-9}\n (41 families) & $1.2$ & $2.625$ & $0.183$ & {} &\n (26 families) & $1.2$ & $2.615$ & $0.250$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.5$ & $1.664$ & $0.079$ & {} &\n {} & $1.5$ & $1.657$ & $0.113$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.7$ & $1.302$ & $0.048$ & {} &\n {} & $1.7$ & $1.296$ & $0.070$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.9$ & $1.056$ & $0.030$ & {} &\n {} & $1.9$ & $1.051$ & $0.046$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $2.0$ & $0.962$ & $0.024$ & {} &\n {} & $2.0$ & $0.958$ & $0.037$ \\\\\n %\n \\cline{1-4} \\cline{6-9}\n {} & $0.1$ & $41.870$ & $8.134$ & {} &\n {} & $0.1$ & $41.551$ & $7.476$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.3$ & $22.252$ & $3.914$ & {} &\n {} & $0.3$ & $22.090$ & $3.591$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.5$ & $12.440$ & $1.929$ & {} &\n {} & $0.5$ & $12.357$ & $1.763$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.7$ & $7.366$ & $0.977$ & {} &\n {} & $0.7$ & $7.322$ & $0.887$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.9$ & $4.638$ & $0.510$ & {} &\n {} & $0.9$ & $4.614$ & $0.459$ \\\\\n \\cline{2-4} \\cline{7-9}\n PCL0063 & $1.0$ & $3.771$ & $0.375$ & {} &\n PCL0236 & $1.0$ & $3.752$ & $0.334$ \\\\\n \\cline{2-4} \\cline{7-9}\n (92 families) & $1.2$ & $2.603$ & $0.207$ & {} &\n (21 families) & $1.2$ & $2.595$ & $0.181$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.5$ & $1.654$ & $0.092$ & {} &\n {} & $1.5$ & $1.652$ & $0.077$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.7$ & $1.295$ & $0.057$ & {} &\n {} & $1.7$ & $1.294$ & $0.046$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.9$ & $1.052$ & $0.036$ & {} &\n {} & $1.9$ & $1.051$ & $0.028$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $2.0$ & $0.959$ & $0.030$ & {} &\n {} & $2.0$ & $0.959$ & $0.022$ \\\\\n \\cline{1-4} \\cline{6-9}\n \\end{tabular}\n \\endgroup\n \\end{center}\n\\end{table}\n\nFrom the figures and tables above we can see that the region $s \\leq 1$ leads\nto a better characterization of the Entropy measure distributions on protein\ndatabases.\n\nFor completeness, we list some useful formulae, obtained from\neqs.(\\ref{eq:eq53}), (\\ref{eq:eq54}), (\\ref{eq:eq56}), which help to predict\nthe profile of the curves above:\n\\begin{equation}\n \\langle H(s;f) \\rangle = -\\frac{1}{1-s} \\left(1-\\frac{2}{n(n-1)}\n \\sum_{a,b,j,k} e^{-s \\lvert \\log P_{jk}(a,b;f)\\rvert}\\right) \\label{eq:eq57}\n\\end{equation}\n\\begin{equation}\n \\langle H(s) \\rangle_F = -\\frac{1}{1-s} \\left(1-\\frac{2}{Fn(n-1)}\n \\sum_{f,a,b,j,k} e^{-s \\lvert \\log P_{jk}(a,b;f)\\rvert} \\right)\n \\label{eq:eq58}\n\\end{equation}\n\\begin{equation}\n \\sigma_F(s) = \\frac{2}{\\lvert 1-s \\rvert\\, n(n-1)}\\left(\\frac{1}{F-1}\n \\sum_{f=1}^F\\left(\\sum_{a,b,j,k} e^{-s \\lvert \\log P_{jk}(a,b;f)\\rvert}\n -\\frac{1}{F}\\sum_{f',a,b,j,k} e^{-s \\lvert \\log P_{jk}(a,b;f')\\rvert}\n \\right)^2\\right)^{1\/2}\n \\label{eq:eq59}\n\\end{equation}\n\n\\section{The treatment of data with the Maple Computing system and its\ninadequacy for calculating Joint probabilities of occurrence. Alternative\nsystems.}\nIn this section, we specifically study the performance of the Maple system and\nof an alternative computing system, the Perl system, in calculating\nthe simple and joint probabilities of occurrence of amino acids.
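Before proceeding, we give a minimal Perl sketch of the counting step behind\nthese joint probabilities (the variable names and the input layout are our own\nassumptions; this is not the code benchmarked below): each pair of columns $j$,\n$k$ is scanned over the $m$ rows of the array, according to eq.(\\ref{eq:eq4}).\n\\begin{verbatim}\n#!\/usr\/bin\/perl\nuse strict;\nuse warnings;\n\n# Rows of the m x n array, one string of amino acid letters per\n# row (the 3 x 4 example array used earlier in this work).\nmy @rows = qw(ACAD CADD DACC);\nmy $m = scalar @rows;\nmy $n = length $rows[0];\nmy %P;    # $P{"j:k"}{"a:b"} accumulates P_jk(a,b)\nfor my $j (0 .. $n - 2) {\n    for my $k ($j + 1 .. $n - 1) {\n        for my $row (@rows) {\n            my ($a, $b) = (substr($row, $j, 1), substr($row, $k, 1));\n            $P{ ($j + 1) . ':' . ($k + 1) }{"$a:$b"} += 1 \/ $m;\n        }\n    }\n}\nprintf "P_12(A,C) = %.4f\\n", $P{'1:2'}{'A:C'};    # 1\/3 here\n\\end{verbatim}\n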
We also use\nthese two systems for calculating 19 $s$-power values of these probabilities.\nWe now select the family PF06850 in order to get an idea of the CPU and real\ntimes which are necessary for calculating the probabilities and their powers\nfor the set of 1069 families. We start the calculation by adopting the Maple\nsystem, version 18. There are some comments to be made on the construction of\na computational code for calculating joint probabilities; these will be made\nin detail later in this section. Table \\ref{tab6} below reports the times for\ncalculating the $200 \\times 20 = 4 \\times 10^3$ simple and the $200\n\\times \\frac{200-1}{2} \\times 20 \\times 20 = 7.96 \\times 10^6$ joint\nprobability values, respectively, for the PF06850 Pfam family.\n\n\\begin{table}[!ht]\n \\begin{center}\n \\caption{\\small CPU and real times for the calculation of the simple and\n joint probabilities of occurrence associated with the protein family\n PF06850. \\label{tab6}}\n \\begin{tabular}{|c|c|c|}\n \\hline\n Maple System, version 18 & $t_{CPU}$ (sec) & $t_R$ (sec) \\\\\n \\hline\n Simple probabilities & $0.527$ & $0.530$ \\\\\n \\hline\n Joint probabilities & $5073.049$ & $4650.697$ \\\\\n \\hline\n \\end{tabular}\n \\end{center}\n\\end{table}\n\nAfter calculating all values of the probabilities $p_j(a)$ and $P_{jk}(a,b)$,\nwe can proceed to evaluate the powers $\\big(p_j(a)\\big)^s$ and\n$\\big(P_{jk}(a,b)\\big)^s$ for 19 $s$-values. Our aim will be to use these\nvalues for calculating the Entropy measures according to eqs.(\\ref{eq:eq30}),\n(\\ref{eq:eq31}). It should be noticed that the values of $p_j(a)$ and\n$P_{jk}(a,b)$ have to be calculated only once, by using the specific\ncomputational code already referred to in this work. Nevertheless, the use of\nthe code for calculating the joint probabilities associated with the 1069\nprotein families is the hardest of all the calculations to be undertaken, and\nit takes a long time. These probabilities, once calculated, should be grouped\nin sets of 400 values, each set corresponding to a pair of columns $j$, $k$\namong the $\\frac{n(n-1)}{2}$ feasible ones, and then used for calculating the\nentropy value associated with this pair of columns. A naive implementation\nwould instead, for a given $s$-value, calculate the entropy of the first pair,\n$H_{1\\,2}$, as a function of the 400 variables $\\big(P_{12}(a,b)\\big)^s$, and\nthen recompute all $\\frac{n(n-1)}{2} \\times (20)^2$ joint probability values\nagain in order to extract the set of values of the next pair and its\ncorresponding entropy value. This pitfall seems to stem from an unawareness of\nthe concept of a function of several variables, unfortunately. After\ncircumventing these mistakes, we succeed at keeping all calculated values of\nthe probabilities, and we then proceed to the calculation of the powers $\\big(\np_j(a)\\big)^s$, $\\big(P_{jk}(a,b)\\big)^s$ of these values and the\ncorresponding entropy measures. In tables \\ref{tab7}, \\ref{tab8}, \\ref{tab9},\n\\ref{tab10} below, we report all these calculations for 19 values of the\n$s$-parameter.\n\n\\begin{table}[!ht]\n \\begin{center}\n \\caption{\\small CPU and real times for the calculation of 19 $s$-powers\n of simple probabilities of occurrence associated with the protein family\n PF06850. 
\\label{tab7}}\n \\begin{tabular}{|c|c|c|}\n \\hline\n \\multicolumn{3}{|c|}{Maple System, version 18,} \\\\\n \\multicolumn{3}{|c|}{$s$-powers of probability $\\big(p_j(a)\\big)^s$}\\\\\n \\hline\n $s$ & $t_{CPU}$ (sec) & $t_R$ (sec) \\\\\n \\hline\n $0.1$ & $0.263$ & $0.358$ \\\\\n \\hline\n $0.2$ & $0.137$ & $0.145$ \\\\\n \\hline\n $0.3$ & $0.268$ & $0.277$ \\\\\n \\hline\n $0.4$ & $0.139$ & $0.153$ \\\\\n \\hline\n $0.5$ & $0.240$ & $0.219$ \\\\\n \\hline\n $0.6$ & $0.144$ & $0.157$ \\\\\n \\hline\n $0.7$ & $0.276$ & $0.254$ \\\\\n \\hline\n $0.8$ & $0.144$ & $0.157$ \\\\\n \\hline\n $0.9$ & $0.264$ & $0.235$ \\\\\n \\hline\n $1.0$ & $0.088$\/$0.151$ & $0.095$\/$0.307$ \\\\\n \\hline\n $2.0$ & $0.153$ & $0.095$ \\\\\n \\hline\n $3.0$ & $0.128$ & $0.131$ \\\\\n \\hline\n $4.0$ & $0.148$ & $0.141$ \\\\\n \\hline\n $5.0$ & $0.096$ & $0.144$ \\\\\n \\hline\n $6.0$ & $0.148$ & $0.167$ \\\\\n \\hline\n $7.0$ & $0.148$ & $0.155$ \\\\\n \\hline\n $8.0$ & $0.181$ & $0.094$ \\\\\n \\hline\n $9.0$ & $0.104$ & $0.092$ \\\\\n \\hline\n $10.0$ & $0.104$ & $0.100$ \\\\\n \\hline\n Total & $3.173$ & $3.164$ \\\\\n \\hline\n \\end{tabular}\n \\end{center}\n\\end{table}\n\n\\begin{table}[!ht]\n \\begin{center}\n \\caption{\\small CPU and real times for the calculation of 19 $s$-powers\n of joint probabilities of occurrence associated with the protein family\n PF06850. \\label{tab8}}\n \\begin{tabular}{|c|c|c|}\n \\hline\n \\multicolumn{3}{|c|}{Maple System, version 18,} \\\\\n \\multicolumn{3}{|c|}{$s$-powers of probability $\\big(P_{jk}(a,b)\\big)^s$}\n \\\\\n \\hline\n $s$ & $t_{CPU}$ (sec) & $t_R$ (sec) \\\\\n \\hline\n $0.1$ & $390.432$ & $206.646$ \\\\\n \\hline\n $0.2$ & $382.887$ & $202.282$ \\\\\n \\hline\n $0.3$ & $401.269$ & $210.791$ \\\\\n \\hline\n $0.4$ & $416.168$ & $216.993$ \\\\\n \\hline\n $0.5$ & $427.572$ & $221.541$ \\\\\n \\hline\n $0.6$ & $430.604$ & $223.227$ \\\\\n \\hline\n $0.7$ & $421.904$ & $218.484$ \\\\\n \\hline\n $0.8$ & $434.888$ & $224.267$ \\\\\n \\hline\n $0.9$ & $431.948$ & $223.023$ \\\\\n \\hline\n $1.0$ & $442.933$\/$482.612$ & $224.731$\/$259.301$ \\\\\n \\hline\n $2.0$ & $176.212$ & $147.455$ \\\\\n \\hline\n $3.0$ & $234.100$ & $174.853$ \\\\\n \\hline\n $4.0$ & $289.184$ & $181.552$ \\\\\n \\hline\n $5.0$ & $327.740$ & $178.117$ \\\\\n \\hline\n $6.0$ & $334.800$ & $194.691$ \\\\\n \\hline\n $7.0$ & $349.064$ & $195.258$ \\\\\n \\hline\n $8.0$ & $361.304$ & $195.437$ \\\\\n \\hline\n $9.0$ & $386.217$ & $197.150$ \\\\\n \\hline\n $10.0$ & $397.276$ & $197.868$ \\\\\n \\hline\n Total & $7036.502$ & $3834.366$ \\\\\n \\hline\n \\end{tabular}\n \\end{center}\n\\end{table}\n\nThe last row in tables \\ref{tab7}, \\ref{tab8}, includes the times necessary\nfor calculating the probabilities of table \\ref{tab6}.\n\nWe are then able to proceed to the calculation of the corresponding\nHavrda-Charvat entropy measures, $H_j(s)$, $H_{jk}(s)$: The results for 19\n$s$-values are given in tables \\ref{tab9}, \\ref{tab10} below.\n\n\\begin{table}[!ht]\n \\begin{center}\n \\caption{\\small CPU and real times for the calculation of the Entropy measures\n $H_j(s)$ for the protein family PF06850. 
\\label{tab9}}\n \\begin{tabular}{|c|c|c|}\n \\hline\n \\multicolumn{3}{|c|}{Maple System, version 18,} \\\\\n \\multicolumn{3}{|c|}{Entropy Measures $H_j(s)$}\\\\\n \\hline\n $s$ & $t_{CPU}$ (sec) & $t_R$ (sec) \\\\\n \\hline\n $0.1$ & $0.148$ & $0.261$ \\\\\n \\hline\n $0.2$ & $0.084$ & $0.153$ \\\\\n \\hline\n $0.3$ & $0.120$ & $0.189$ \\\\\n \\hline\n $0.4$ & $0.124$ & $0.198$ \\\\\n \\hline\n $0.5$ & $0.160$ & $0.299$ \\\\\n \\hline\n $0.6$ & $0.092$ & $0.139$ \\\\\n \\hline\n $0.7$ & $0.159$ & $0.199$ \\\\\n \\hline\n $0.8$ & $0.137$ & $0.175$ \\\\\n \\hline\n $0.9$ & $0.120$ & $0.166$ \\\\\n \\hline\n $1.0$ & $0.192$\/$0.175$ & $0.339$\/$0.159$ \\\\\n \\hline\n $2.0$ & $0.144$ & $0.099$ \\\\\n \\hline\n $3.0$ & $0.147$ & $0.105$ \\\\\n \\hline\n $4.0$ & $0.084$ & $0.101$ \\\\\n \\hline\n $5.0$ & $0.136$ & $0.070$ \\\\\n \\hline\n $6.0$ & $0.096$ & $0.119$ \\\\\n \\hline\n $7.0$ & $0.115$ & $0.078$ \\\\\n \\hline\n $8.0$ & $0.120$ & $0.109$ \\\\\n \\hline\n $9.0$ & $0.133$ & $0.080$ \\\\\n \\hline\n $10.0$ & $0.132$ & $0.133$ \\\\\n \\hline\n Total & $2.443$ & $3.012$ \\\\\n \\hline\n \\end{tabular}\n \\end{center}\n\\end{table}\n\n\\begin{table}[!hb]\n \\begin{center}\n \\caption{\\small CPU and real times for the calculation of the Entropy measures\n $H_{jk}(s)$ for the protein family PF06850. \\label{tab10}}\n \\begin{tabular}{|c|c|c|}\n \\hline\n \\multicolumn{3}{|c|}{Maple System, version 18,} \\\\\n \\multicolumn{3}{|c|}{Entropy Measures $H_{jk}(s)$}\\\\\n \\hline\n $s$ & $t_{CPU}$ (sec) & $t_R$ (sec) \\\\\n \\hline\n $0.1$ & $156.332$ & $133.242$ \\\\\n \\hline\n $0.2$ & $160.797$ & $136.706$ \\\\\n \\hline\n $0.3$ & $169.024$ & $140.960$ \\\\\n \\hline\n $0.4$ & $176.824$ & $147.853$ \\\\\n \\hline\n $0.5$ & $184.120$ & $150.163$ \\\\\n \\hline\n $0.6$ & $190.304$ & $154.058$ \\\\\n \\hline\n $0.7$ & $196.633$ & $157.750$ \\\\\n \\hline\n $0.8$ & $205.940$ & $164.101$ \\\\\n \\hline\n $0.9$ & $215.559$ & $169.549$ \\\\\n \\hline\n $1.0$ & $253.648$\/$124.501$ & $204.634$\/$148.993$ \\\\\n \\hline\n $2.0$ & $141.148$ & $184.030$ \\\\\n \\hline\n $3.0$ & $158.536$ & $167.173$ \\\\\n \\hline\n $4.0$ & $173.136$ & $181.282$ \\\\\n \\hline\n $5.0$ & $197.680$ & $238.723$ \\\\\n \\hline\n $6.0$ & $215.000$ & $111.476$ \\\\\n \\hline\n $7.0$ & $145.257$ & $115.221$ \\\\\n \\hline\n $8.0$ & $156.848$ & $122.957$ \\\\\n \\hline\n $9.0$ & $157.300$ & $126.233$ \\\\\n \\hline\n $10.0$ & $166.399$ & $135.080$ \\\\\n \\hline\n Total & $3420.485$ & $2941.791$ \\\\\n \\hline\n \\end{tabular}\n \\end{center}\n\\end{table}\n\nThe total time for calculating all the Havrda-Charvat Entropy measure\ncontent of probabilities of occurrence of amino acids on a\nspecific family is given in table \\ref{tab11} below.\n\n\\begin{table}[!hb]\n \\begin{center}\n \\caption{\\small Total CPU and real times for calculating the Entropy measure\n content of a family PF06850 and approximations for Grand\n Total of all sample space. 
\\label{tab11}}\n \\begingroup\n \\everymath{\\scriptstyle}\n \\small\n \\begin{tabular}{|c|c|c|}\n \\hline\n Maple System, & Entropy Measures &\n Entropy Measures \\\\\n version 18 & $H\\!_j(s)$ --- 19 $s$-values & $H\\!_{jk}(s)$ ---\n 19 $s$-values \\\\\n \\hline\n Total CPU time & $0.527+3.173+2.443$\n & $5,073.049+7,036.502+$ \\\\\n (family PF06850) & $=6.143$ sec & $3,420.485=15,530.036$ sec \\\\\n \\hline\n Grand Total CPU time & $6,566.867$ sec\n & $16,601,608.484$ sec \\\\\n (1069 families) & $=1.824$ hs & $=192.148$ days \\\\\n \\hline\n Total Real time & $0.530+3.164+3.012$\n & $4,650.697+3,834.366+$ \\\\\n (family PF06850) & $=6.706$ sec & $2,941.791=11,426.854$ sec \\\\\n \\hline\n Grand Total Real time & $7,168.714$ sec\n & $12,215,306.926$ sec \\\\\n (1069 families) & $=1.991$ hs & $=141.381$ days \\\\\n \\hline\n \\end{tabular}\n \\endgroup\n \\end{center}\n\\end{table}\n\nFrom inspection of table \\ref{tab11}, we realize that the Total CPU\nand real times for calculating the Havrda-Charvat entropies $H_j(s)$ of the\nsimple probabilities of occurrence $p_j(a)$ are obtained by summing up the\ntotal time results from tables \\ref{tab6}, \\ref{tab7} and \\ref{tab9}. For the\nHavrda-Charvat entropies $H_{jk}(s)$, we have to sum up the total times in\ntables \\ref{tab6}, \\ref{tab8} and \\ref{tab10}. We take for granted that the\ntimes for calculating the Entropy measure content will not differ too much\nfrom family to family; the results in the 3rd and 5th rows of table\n\\ref{tab11} are then obtained by multiplying by 1069 --- the number of\nfamilies in the sample space.\n\nThe results of table \\ref{tab11} suggest the inadequacy of the Maple computing\nsystem for analyzing the Entropy measure content of an example of protein\ndatabase. We have restricted ourselves to operating with usual operating\nsystems, Linux or OSX, on laptops. We have also worked with the alternative\nPerl computing system. The Maple computing system has an ``array'' structure\nwhich is very effective for calculations requiring a knowledge of mathematical\nmethods. On the contrary, the alternative Perl computing system has a\n``hash'' structure, as was emphasized in section 2, and it performs elementary\noperations with very large numbers very well. The comparison is essentially\nthat of a senior scholar, largely conversant with a wide range of mathematical\nmethods, versus a ``genius'' brought to fame by the media, who is only able to\nmultiply numbers of many digits in a very fast way.\n\nIn order to specify the possible computational configurations on the usual\ndesktops and laptops, we list below those which have been used in the present\nwork. The computing systems were Maple ($M$) and Perl ($P$); the operating\nsystems, Linux ($L$) and Mac OSX ($O$); and the structures: the Array I\n($A_I$), Array II ($A_{I\\!I}$) and Hash ($H$) (page 8, section 2). The\navailable computational configurations to undertake the task of assessment of\nprotein databases with Entropy measures can be listed as:\n\\begin{enumerate}\n \\item $MLA_I$ --- Maple, Linux, Array I\n \\item $POH$ --- Perl, OSX, Hash\n \\item $PLA_{I\\!I}$ --- Perl, Linux, Array II\n \\item $POA_{I\\!I}$ --- Perl, OSX, Array II\n\\end{enumerate}\n\nThe following table displays a comparison of the CPU and real times for\nthe calculation of 19 $s$-values of the joint probabilities\n$\\big(P_{jk}(a,b)\\big)^s$ for the protein family PF06850 by the four\nconfigurations listed above. 
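The CPU times $t_{CPU}$ and the real (wall-clock) times $t_R$ reported in these\ntables can be collected with Perl's core facilities; a minimal sketch (with our\nown naming; this is not the actual measurement script of this work) is:\n\\begin{verbatim}\n#!\/usr\/bin\/perl\nuse strict;\nuse warnings;\nuse Time::HiRes qw(gettimeofday tv_interval);\n\n# Real time from Time::HiRes; CPU time from the built-in times()\n# function (user + system seconds of the current process).\nmy $t0 = [gettimeofday];\nmy ($u0, $s0) = (times)[0, 1];\n\n# ... the s-power loop over the stored probability values goes here ...\nmy $x = 0;\n$x += rand() ** 0.5 for 1 .. 1_000_000;    # placeholder workload\n\nmy ($u1, $s1) = (times)[0, 1];\nprintf "t_CPU = %.3f sec, t_R = %.3f sec\\n",\n       ($u1 - $u0) + ($s1 - $s0), tv_interval($t0);\n\\end{verbatim}\n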
It should be stressed that we are here comparing the times\nfor calculating the $s$-power with the values of the probabilities\nthemselves previously calculated and kept on a file.\n\n\\begin{sidewaystable}[p]\n \\begin{center}\n \\caption{\\small A comparison of calculation times (CPU and real) for 19\n $s$-powers of joint probabilities only of PF06850 protein family, using\n the four configurations $MLA_I$, $POA_{I\\!I}$, $PLA_{I\\!I}$, $POH$. \\label{tab12}}\n \\begingroup\n \\everymath{\\scriptstyle}\n \\small\n \\begin{tabular}{|c|c|c|c|c|c|c|c|c|}\n \\hline\n \\multicolumn{1}{|c|}{\\multirow{2}{*}{$\\mathbf{s}$}} &\n \\multicolumn{4}{c|}{$\\mathbf{t_{CPU}}$ (sec)} &\n \\multicolumn{4}{c|}{$\\mathbf{t_R}$ (sec)} \\\\\n \\cline{2-9}\n \\multicolumn{1}{|c|}{} & $MLA_I$ & $POA_{I\\!I}$ & $PLA_{I\\!I}$ & $POH$ &\n $MLA_I$ & $POA_{I\\!I}$ & $PLA_{I\\!I}$ & $POH$ \\\\\n \\hline\n $0.1$ & $390.432$ & $33.478$ & $\\mathbf{41.633}$ & $24.849$ & $206.646$ &\n $88.527$ & $83.139$ & $26.862$ \\\\\n \\hline\n $0.2$ & $382.887$ & $31.711$ & $26.483$ & $25.093$ & $202.282$ & $80.624$\n & $53.384$ & $26.904$ \\\\\n \\hline\n $0.3$ & $401.269$ & $31.726$ & $25.286$ & $24.751$ & $210.791$ & $80.519$\n & $50.689$ & $27.305$ \\\\\n \\hline\n $0.4$ & $416.168$ & $31.448$ & $26.138$ & $26.345$ & $216.993$ & $79.032$\n & $51.409$ & $27.687$ \\\\\n \\hline\n $0.5$ & $427.572$ & $32.860$ & $\\mathbf{25.822}$ & $26.726$ & $221.541$\n & $93.048$ & $51.334$ & $27.528$ \\\\\n \\hline\n $0.6$ & $430.604$ & $33.444$ & $27.013$ & $25.021$ & $223.227$ & $102.317$\n & $52.889$ & $25.408$ \\\\\n \\hline\n $0.7$ & $421.904$ & $31.053$ & $25.814$ & $25.011$ & $218.484$ & $79.255$\n & $51.466$ & $26.414$ \\\\\n \\hline\n $0.8$ & $434.888$ & $31.526$ & $26.725$ & $25.183$ & $224.267$ & $80.469$\n & $53.388$ & $25.668$ \\\\\n \\hline\n $0.9$ & $431.948$ & $31.482$ & $26.895$ & $25.409$ & $223.023$ & $80.002$\n & $53.579$ & $25.536$ \\\\\n \\hline\n $1.0$ & $442.933$ & $32.056$ & $25.990$ & $25.096$ & $224.731$ & $80.917$\n & $51.687$ & $25.640$ \\\\\n \\hline\n $2.0$ & $176.212$ & $32.638$ & $27.089$ & $26.012$ & $147.454$ & $80.751$\n & $54.384$ & $26.960$ \\\\\n \\hline\n $3.0$ & $234.100$ & $31.892$ & $25.766$ & $24.498$ & $174.853$ & $85.853$\n & $51.843$ & $24.717$ \\\\\n \\hline\n $4.0$ & $284.184$ & $31.662$ & $26.515$ & $25.251$ & $181.552$ & $91.837$\n & $52.636$ & $25.718$ \\\\\n \\hline\n $5.0$ & $327.740$ & $32.295$ & $27.516$ & $24.925$ & $178.117$ & $87.486$\n & $54.908$ & $25.814$ \\\\\n \\hline\n $6.0$ & $334.800$ & $32.674$ & $28.126$ & $25.440$ & $194.691$ & $86.569$\n & $54.611$ & $25.847$ \\\\\n \\hline\n $7.0$ & $349.064$ & $31.674$ & $23.908$ & $26.389$ & $195.258$ & $86.215$\n & $49.262$ & $27.745$ \\\\\n \\hline\n $8.0$ & $361.304$ & $33.105$ & $26.020$ & $25.106$ & $195.437$ & $116.601$\n & $51.889$ & $26.735$ \\\\\n \\hline\n $9.0$ & $386.217$ & $31.881$ & $26.208$ & $24.783$ & $197.150$ & $81.372$\n & $53.114$ & $25.155$ \\\\\n \\hline\n $10.0$ & $397.276$ & $32.269$ & $26.125$ & $24.979$ & $197.868$ & $87.963$\n & $52.541$ & $26.504$ \\\\\n \\hline \\hline\n Total & $7036.502$ & $611.374$ & $515.072$ & $480.867$ & $3834.365$\n & $1649.357$ & $1028.655$ & $500.147$ \\\\\n \\hline \\hline\n Total & $7,522,020.640$ & $653,588.806$ & $550,611.968$ & $514,046.813$ &\n $4,098,936.190$ & $1,763,162.630$ & $1,099,632.200$ & $534,157.143$ \\\\\n (1069) families & $=87.067$ days & $=7.565$ days & $=6.373$ days & $=5.950$ days &\n $=47.441$ days & $=20.407$ days & $=12.727$ days & $=6.188$ days \\\\\n 
\\hline\n \\end{tabular}\n \\endgroup\n \\end{center}\n\\end{sidewaystable}\n\nWe can check that the times in table \\ref{tab12} seem to be generically\nordered as\n\\begin{equation}\n t_{MLA_I} > t_{POA_{I\\!I}} > t_{PLA_{I\\!I}} > t_{POH} \\label{eq:eq60}\n\\end{equation}\nFrom this ordering of computing times, we conclude that the inconvenience of\nusing the ``hash'' structure, which has been emphasized in section 2, was not\ncircumvented by working with a modified array structure ($A_{I\\!I}$ instead of\n$H$), at least on the Mac Pro machine used in these calculations. We also do\nnot know whether this machine was running under an ``overload'' of programs\nfrom the assumed part-time job of the experimenter (maybe a 99.99\\% time\njob!). Anyhow, the usual hash structure of the Perl computing system has\ndelayed the calculation of the Entropy measures, and even the help of the\nmodified $A_{I\\!I}$ structure does not succeed at improving the computation\nwhen operated on an OSX operating system.\n\nOn the other hand, the configuration $MLA_I$ could be chosen for parallelizing\nthe adopted code in work to be done with supercomputer facilities.\nIf we try to avoid this kind of computational facility, in the belief that the\nproblem of classifying the distribution of amino acids of a protein database\nin terms of Entropy measures could be treated with less powerful but very\nobjective ``weapons'', we should look instead for very fast laptop machines,\nworking with a Linux operating system, a Perl computing system and\na modified array structure. This means that it would be worthwhile to\ncontinue the present work with the $PLA_{I\\!I}$ configuration. This is\nnow in progress and will be published elsewhere.\n\nWe summarize the conclusions commented on above in tables\n\\ref{tab13}--\\ref{tab16} below, for the calculation of the CPU and Real times\nof 19 $s$-powers of joint probabilities $\\big(P_{jk}(a,b)\\big)^s$ and the\ncorresponding values of Havrda-Charvat entropy measures. The times necessary\nfor calculating the joint probabilities themselves have not been taken into\nconsideration. It would be very useful to compare the results of table\n\\ref{tab10} with those in tables \\ref{tab14}, \\ref{tab16}, and those of table\n\\ref{tab12} with tables \\ref{tab13}, \\ref{tab15}, as well.\n\nAs a last remark of this section, we take into consideration\nthe restriction $s \\leq 1$ for working with Jaccard Entropy\nmeasures, and we calculate the total CPU and real times for the set\nof $s$-values: $s =$ $0.1$, $0.2$, $0.3$, $0.4$, $0.5$, $0.6$, $0.7$,\n$0.8$, $0.9$, $1.0$. The results are presented in table \\ref{tab17}\nbelow for the $PLA_{I\\!I}$ configuration and the calculation of the\nHavrda-Charvat entropies.\n\n\\begin{sidewaystable}[p]\n \\begin{center}\n \\caption{\\small Calculation of CPU and Real times of 19 $s$-powers of joint\n probabilities $\\big(P_{jk}(a,b)\\big)^s$ for 06 families from 03\n Clans with the $POA_{I\\!I}$ configuration. 
\\label{tab13}}\n \\begingroup\n \\everymath{\\scriptstyle}\n \\footnotesize\n \\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}\n \\hline\n {} & \\multicolumn{4}{c|}{CL0028} & \\multicolumn{4}{c|}{CL0023}\n & \\multicolumn{4}{c|}{CL0257} \\\\\n \\cline{2-13}\n s & \\multicolumn{2}{c|}{PF06850} & \\multicolumn{2}{c|}{PF00135}\n & \\multicolumn{2}{c|}{PF00005} & \\multicolumn{2}{c|}{PF13481}\n & \\multicolumn{2}{c|}{PF02388} & \\multicolumn{2}{c|}{PF09924} \\\\\n \\cline{2-13}\n {} & $t_{CPU}$ (sec) & $t_R$ (sec) & $t_{CPU}$ (sec) & $t_R$ (sec)\n & $t_{CPU}$ (sec) & $t_R$ (sec) & $t_{CPU}$ (sec) & $t_R$ (sec)\n & $t_{CPU}$ (sec) & $t_R$ (sec) & $t_{CPU}$ (sec) & $t_R$ (sec) \\\\\n \\hline\n $0.1$ & $33.478$ & $88.527$ & $46.747$ & $130.014$ & $41.475$ & $169.679$\n & $44.187$ & $171.636$ & $44.716$ & $242.376$ & $46.177$ & $288.259$ \\\\\n \\hline\n $0.2$ & $31.711$ & $80.624$ & $42.157$ & $90.687$ & $36.893$ & $78.893$\n & $38.567$ & $88.754$ & $37.088$ & $88.091$ & $40.709$ & $148.371$ \\\\\n \\hline\n $0.3$ & $31.726$ & $80.519$ & $39.957$ & $82.240$ & $36.929$ & $97.270$\n & $38.800$ & $81.613$ & $37.350$ & $88.664$ & $38.986$ & $119.781$ \\\\\n \\hline\n $0.4$ & $31.448$ & $79.032$ & $41.737$ & $87.934$ & $36.252$ & $80.428$\n & $38.500$ & $78.757$ & $35.592$ & $80.337$ & $36.773$ & $87.217$ \\\\\n \\hline\n $0.5$ & $32.860$ & $93.048$ & $41.130$ & $89.997$ & $38.203$ & $93.595$\n & $37.618$ & $80.179$ & $35.117$ & $78.476$ & $35.534$ & $82.528$ \\\\\n \\hline\n $0.6$ & $33.944$ & $102.317$ & $41.417$ & $120.420$ & $37.143$ & $81.289$\n & $38.387$ & $81.667$ & $45.900$ & $501.222$ & $35.830$ & $83.439$ \\\\\n \\hline\n $0.7$ & $31.053$ & $79.255$ & $40.422$ & $78.531$ & $36.386$ & $84.421$\n & $43.216$ & $562.659$ & $43.452$ & $259.861$ & $35.261$ & $72.605$ \\\\\n \\hline\n $0.8$ & $31.526$ & $80.469$ & $41.556$ & $118.519$ & $36.955$ & $79.543$\n & $41.862$ & $148.028$ & $35.047$ & $78.469$ & $35.718$ & $85.142$ \\\\\n \\hline\n $0.9$ & $31.482$ & $80.002$ & $41.386$ & $81.747$ & $36.811$ & $81.025$\n & $35.518$ & $78.547$ & $35.540$ & $74.570$ & $40.534$ & $204.673$ \\\\\n \\hline\n $1.0$ & $32.056$ & $80.917$ & $40.724$ & $80.958$ & $37.234$ & $80.587$\n & $36.610$ & $87.848$ & $36.353$ & $78.767$ & $39.095$ & $125.292$ \\\\\n \\hline\n $2.0$ & $32.638$ & $80.751$ & $40.701$ & $79.667$ & $38.452$ & $79.336$\n & $36.916$ & $111.199$ & $36.658$ & $81.415$ & $40.415$ & $182.552$ \\\\\n \\hline\n $3.0$ & $31.892$ & $85.853$ & $40.822$ & $79.293$ & $38.223$ & $80.688$\n & $37.356$ & $84.312$ & $36.658$ & $81.415$ & $39.774$ & $157.090$ \\\\\n \\hline\n $4.0$ & $31.662$ & $91.837$ & $41.208$ & $79.825$ & $37.936$ & $98.905$\n & $37.000$ & $86.906$ & $36.012$ & $77.741$ & $38.794$ & $140.857$ \\\\\n \\hline\n $5.0$ & $32.295$ & $87.486$ & $41.059$ & $82.191$ & $38.293$ & $83.289$\n & $36.245$ & $80.873$ & $35.308$ & $77.225$ & $39.927$ & $147.368$ \\\\\n \\hline\n $6.0$ & $32.674$ & $86.569$ & $41.215$ & $86.311$ & $37.601$ & $82.375$\n & $36.395$ & $97.307$ & $35.748$ & $76.573$ & $39.327$ & $154.829$ \\\\\n \\hline\n $7.0$ & $31.674$ & $86.215$ & $41.290$ & $88.714$ & $38.341$ & $80.991$\n & $36.724$ & $95.148$ & $36.194$ & $75.341$ & $39.826$ & $153.204$ \\\\\n \\hline\n $8.0$ & $33.105$ & $116.601$ & $40.984$ & $83.157$ & $37.756$ & $81.073$\n & $40.524$ & $105.466$ & $36.716$ & $82.921$ & $41.056$ & $175.943$ \\\\\n \\hline\n $9.0$ & $31.881$ & $81.372$ & $41.469$ & $113.561$ & $38.748$ & $82.499$\n & $37.874$ & $94.981$ & $40.341$ & $121.311$ & $39.847$ & $144.595$ \\\\\n \\hline\n 
$10.0$ & $32.269$ & $87.963$ & $41.466$ & $89.366$ & $37.807$ & $81.043$\n & $37.134$ & $88.005$ & $40.053$ & $136.243$ & $40.333$ & $171.395$ \\\\\n \\hline \\hline\n Total & $611.374$ & $1649.357$ & $787.447$ & $1743.132$ & $717.438$\n & $1676.929$ & $729.433$ & $2303.885$ & $719.843$ & $2381.018$ & $743.916$\n & $2725.140$ \\\\\n \\hline\n \\end{tabular}\n \\endgroup\n \\end{center}\n\\end{sidewaystable}\n\n\\begin{sidewaystable}[p]\n \\begin{center}\n \\caption{\\small Calculation of CPU and Real times of Havrda-Charvat Entropy\n measures for 06 families from 03 Clans with the $POA_{I\\!I}$\n configuration. \\label{tab14}}\n \\begingroup\n \\everymath{\\scriptstyle}\n \\footnotesize\n \\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}\n \\hline\n {} & \\multicolumn{4}{c|}{CL0028} & \\multicolumn{4}{c|}{CL0023}\n & \\multicolumn{4}{c|}{CL0257} \\\\\n \\cline{2-13}\n s & \\multicolumn{2}{c|}{PF06850} & \\multicolumn{2}{c|}{PF00135}\n & \\multicolumn{2}{c|}{PF00005} & \\multicolumn{2}{c|}{PF13481}\n & \\multicolumn{2}{c|}{PF02388} & \\multicolumn{2}{c|}{PF09924} \\\\\n \\cline{2-13}\n {} & $t_{CPU}$ (sec) & $t_R$ (sec) & $t_{CPU}$ (sec) & $t_R$ (sec)\n & $t_{CPU}$ (sec) & $t_R$ (sec) & $t_{CPU}$ (sec) & $t_R$ (sec)\n & $t_{CPU}$ (sec) & $t_R$ (sec) & $t_{CPU}$ (sec) & $t_R$ (sec) \\\\\n \\hline\n $0.1$ & $19.451$ & $26.196$ & $22.620$ & $56.287$ & $25.411$ & $121.508$\n & $18.968$ & $32.706$ & $23.380$ & $80.826$ & $22.502$ & $77.826$ \\\\\n \\hline\n $0.2$ & $19.245$ & $24.901$ & $22.851$ & $69.992$ & $23.362$ & $65.508$\n & $18.810$ & $26.835$ & $23.366$ & $87.098$ & $23.330$ & $60.684$ \\\\\n \\hline\n $0.3$ & $19.582$ & $26.001$ & $23.963$ & $60.960$ & $22.598$ & $72.363$\n & $18.602$ & $26.028$ & $20.012$ & $43.734$ & $21.929$ & $48.947$ \\\\\n \\hline\n $0.4$ & $20.194$ & $31.418$ & $22.211$ & $54.562$ & $23.448$ & $63.369$\n & $19.176$ & $30.482$ & $21.218$ & $67.817$ & $22.182$ & $45.480$ \\\\\n \\hline\n $0.5$ & $20.162$ & $31.653$ & $23.163$ & $67.391$ & $22.153$ & $55.173$\n & $19.281$ & $34.495$ & $21.037$ & $55.237$ & $23.200$ & $62.384$ \\\\\n \\hline\n $0.6$ & $20.941$ & $34.979$ & $23.961$ & $62.158$ & $22.342$ & $57.081$\n & $20.794$ & $44.558$ & $21.087$ & $54.900$ & $22.477$ & $62.421$ \\\\\n \\hline\n $0.7$ & $20.805$ & $34.057$ & $23.679$ & $82.669$ & $19.587$ & $35.750$\n & $19.982$ & $41.314$ & $20.375$ & $32.699$ & $22.398$ & $66.438$ \\\\\n \\hline\n $0.8$ & $20.900$ & $34.146$ & $22.787$ & $60.924$ & $19.081$ & $34.116$\n & $18.890$ & $28.054$ & $20.167$ & $32.685$ & $23.056$ & $61.376$ \\\\\n \\hline\n $0.9$ & $20.909$ & $34.568$ & $22.808$ & $54.890$ & $19.030$ & $33.258$\n & $18.894$ & $29.712$ & $19.422$ & $26.934$ & $23.543$ & $65.099$ \\\\\n \\hline\n $1.0$ & $21.353$ & $35.024$ & $22.860$ & $51.308$ & $19.789$ & $32.218$\n & $19.869$ & $37.922$ & $21.076$ & $35.774$ & $22.528$ & $52.209$ \\\\\n \\hline\n $2.0$ & $20.505$ & $32.920$ & $21.085$ & $46.921$ & $19.672$ & $33.778$\n & $19.083$ & $62.081$ & $22.530$ & $82.336$ & $21.783$ & $51.656$ \\\\\n \\hline\n $3.0$ & $23.923$ & $58.898$ & $22.020$ & $47.298$ & $18.788$ & $31.926$\n & $19.097$ & $35.399$ & $24.210$ & $83.371$ & $21.636$ & $69.905$ \\\\\n \\hline\n $4.0$ & $24.622$ & $68.692$ & $21.954$ & $50.909$ & $18.556$ & $26.512$\n & $19.208$ & $32.014$ & $24.300$ & $112.613$ & $21.458$ & $49.931$ \\\\\n \\hline\n $5.0$ & $24.221$ & $60.107$ & $22.131$ & $58.975$ & $19.424$ & $33.185$\n & $18.231$ & $25.898$ & $24.199$ & $95.807$ & $22.011$ & $55.109$ \\\\\n \\hline\n $6.0$ & $24.475$ & $64.843$ & $22.911$ & 
$72.004$ & $20.582$ & $38.006$\n & $18.330$ & $25.959$ & $22.741$ & $61.516$ & $21.857$ & $67.976$ \\\\\n \\hline\n $7.0$ & $24.593$ & $70.336$ & $22.779$ & $51.309$ & $20.564$ & $38.390$\n & $19.583$ & $30.719$ & $23.222$ & $88.997$ & $23.018$ & $85.054$ \\\\\n \\hline\n $8.0$ & $25.613$ & $83.423$ & $20.058$ & $34.861$ & $20.460$ & $35.589$\n & $19.977$ & $37.130$ & $23.825$ & $110.933$ & $24.474$ & $90.322$ \\\\\n \\hline\n $9.0$ & $24.139$ & $73.873$ & $21.418$ & $44.709$ & $19.274$ & $32.129$\n & $18.871$ & $29.089$ & $24.207$ & $106.471$ & $23.349$ & $74.724$ \\\\\n \\hline\n $10.0$ & $25.785$ & $101.370$ & $22.183$ & $46.878$ & $23.625$ & $87.442$\n & $22.652$ & $64.895$ & $23.779$ & $86.225$ & $21.802$ & $57.444$ \\\\\n \\hline \\hline\n Total & $421.418$ & $927.405$ & $427.442$ & $1075.005$ & $397.746$\n & $927.301$ & $368.298$ & $675.290$ & $424.153$ & $1345.973$ & $428.533$\n & $1204.985$ \\\\\n \\hline\n \\end{tabular}\n \\endgroup\n \\end{center}\n\\end{sidewaystable}\n \n\\begin{sidewaystable}[p]\n \\begin{center}\n \\caption{\\small Calculation of CPU and Real times of 19 $s$-powers of joint\n probabilities $\\big(P_{jk}(a,b)\\big)^s$ measures for 06 families from 03\n Clans with the $PLA_{I\\!I}$ configuration. \\label{tab15}}\n \\begingroup\n \\everymath{\\scriptstyle}\n \\footnotesize\n \\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}\n \\hline\n {} & \\multicolumn{4}{c|}{CL0028} & \\multicolumn{4}{c|}{CL0023}\n & \\multicolumn{4}{c|}{CL0257} \\\\\n \\cline{2-13}\n s & \\multicolumn{2}{c|}{PF06850} & \\multicolumn{2}{c|}{PF00135}\n & \\multicolumn{2}{c|}{PF00005} & \\multicolumn{2}{c|}{PF13481}\n & \\multicolumn{2}{c|}{PF02388} & \\multicolumn{2}{c|}{PF09924} \\\\\n \\cline{2-13}\n {} & $t_{CPU}$ (sec) & $t_R$ (sec) & $t_{CPU}$ (sec) & $t_R$ (sec)\n & $t_{CPU}$ (sec) & $t_R$ (sec) & $t_{CPU}$ (sec) & $t_R$ (sec)\n & $t_{CPU}$ (sec) & $t_R$ (sec) & $t_{CPU}$ (sec) & $t_R$ (sec) \\\\\n \\hline\n $0.1$ & $41.633$ & $83.139$ & $43.135$ & $83.901$ & $44.003$ & $84.655$\n & $26.613$ & $52.365$ & $27.452$ & $53.350$ & $43.132$ & $83.249$ \\\\\n \\hline\n $0.2$ & $26.483$ & $53.384$ & $26.373$ & $50.442$ & $26.610$ & $52.904$\n & $26.599$ & $51.822$ & $43.199$ & $83.497$ & $25.762$ & $50.203$ \\\\\n \\hline\n $0.3$ & $25.286$ & $50.689$ & $26.644$ & $50.992$ & $29.965$ & $53.250$\n & $24.946$ & $48.846$ & $25.498$ & $50.303$ & $25.904$ & $50.737$ \\\\\n \\hline\n $0.4$ & $26.138$ & $51.409$ & $25.255$ & $48.576$ & $26.696$ & $52.482$\n & $27.769$ & $53.837$ & $25.753$ & $51.013$ & $25.684$ & $50.371$ \\\\\n \\hline\n $0.5$ & $25.822$ & $51.837$ & $26.041$ & $50.492$ & $26.648$ & $52.171$\n & $25.026$ & $48.927$ & $25.727$ & $49.887$ & $25.513$ & $48.793$ \\\\\n \\hline\n $0.6$ & $27.013$ & $52.889$ & $25.778$ & $50.324$ & $24.912$ & $49.340$\n & $26.062$ & $51.312$ & $26.066$ & $51.417$ & $25.435$ & $49.686$ \\\\\n \\hline\n $0.7$ & $25.814$ & $51.466$ & $25.496$ & $49.954$ & $25.284$ & $50.268$\n & $23.698$ & $48.134$ & $25.723$ & $50.760$ & $25.822$ & $51.122$ \\\\\n \\hline\n $0.8$ & $26.725$ & $53.388$ & $26.254$ & $50.865$ & $25.456$ & $50.870$\n & $25.816$ & $51.196$ & $26.883$ & $52.511$ & $29.837$ & $57.367$ \\\\\n \\hline\n $0.9$ & $26.895$ & $53.579$ & $23.740$ & $47.296$ & $26.237$ & $51.725$\n & $28.441$ & $54.743$ & $25.237$ & $50.532$ & $27.951$ & $54.289$ \\\\\n \\hline\n $1.0$ & $25.990$ & $51.687$ & $27.225$ & $54.384$ & $26.014$ & $51.969$\n & $27.148$ & $52.701$ & $23.672$ & $48.547$ & $26.314$ & $51.932$ \\\\\n \\hline\n $2.0$ & $27.089$ & $54.384$ & $26.237$ & 
$50.335$ & $27.756$ & $54.761$\n & $26.424$ & $52.503$ & $24.811$ & $49.859$ & $25.846$ & $51.125$ \\\\\n \\hline\n $3.0$ & $25.766$ & $51.843$ & $27.342$ & $52.508$ & $25.135$ & $50.954$\n & $27.753$ & $53.650$ & $25.072$ & $49.142$ & $25.727$ & $50.990$ \\\\\n \\hline\n $4.0$ & $26.515$ & $52.636$ & $25.716$ & $50.531$ & $25.106$ & $50.449$\n & $24.871$ & $50.279$ & $25.674$ & $51.536$ & $26.897$ & $52.227$ \\\\\n \\hline\n $5.0$ & $27.516$ & $54.908$ & $26.002$ & $50.390$ & $24.666$ & $49.463$\n & $23.973$ & $47.554$ & $25.675$ & $51.032$ & $27.810$ & $53.820$ \\\\\n \\hline\n $6.0$ & $28.126$ & $54.611$ & $27.441$ & $53.032$ & $26.014$ & $52.209$\n & $25.676$ & $50.426$ & $26.235$ & $51.375$ & $26.180$ & $51.346$ \\\\\n \\hline\n $7.0$ & $23.908$ & $49.262$ & $26.812$ & $51.556$ & $24.696$ & $49.783$\n & $25.519$ & $50.741$ & $25.863$ & $50.995$ & $25.593$ & $50.611$ \\\\\n \\hline\n $8.0$ & $26.020$ & $51.889$ & $25.431$ & $49.135$ & $26.757$ & $52.518$\n & $26.369$ & $49.668$ & $26.401$ & $52.682$ & $25.084$ & $50.596$ \\\\\n \\hline\n $9.0$ & $26.208$ & $53.114$ & $25.856$ & $50.090$ & $25.803$ & $51.253$\n & $24.628$ & $47.880$ & $25.379$ & $51.014$ & $27.049$ & $52.891$ \\\\\n \\hline\n $10.0$ & $26.125$ & $52.541$ & $24.963$ & $47.402$ & $24.369$ & $48.065$\n & $42.270$ & $83.281$ & $24.878$ & $49.926$ & $24.563$ & $48.616$ \\\\\n \\hline \\hline\n Total & $515.072$ & $1028.655$ & $511.741$ & $992.205$ & $509.127$\n & $1009.089$ & $509.601$ & $999.865$ & $505.198$ & $999.378$ & $516.103$\n & $1009.971$ \\\\\n \\hline\n \\end{tabular}\n \\endgroup\n \\end{center}\n\\end{sidewaystable}\n\n\\begin{sidewaystable}[p]\n \\begin{center}\n \\caption{\\small Calculation of CPU and Real times of Havrda-Charvat Entropy\n measures for 06 families from 03 Clans with the $PLA_{I\\!I}$\n configuration. 
\\label{tab16}}\n \\begingroup\n \\everymath{\\scriptstyle}\n \\footnotesize\n \\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}\n \\hline\n {} & \\multicolumn{4}{c|}{CL0028} & \\multicolumn{4}{c|}{CL0023}\n & \\multicolumn{4}{c|}{CL0257} \\\\\n \\cline{2-13}\n s & \\multicolumn{2}{c|}{PF06850} & \\multicolumn{2}{c|}{PF00135}\n & \\multicolumn{2}{c|}{PF00005} & \\multicolumn{2}{c|}{PF13481}\n & \\multicolumn{2}{c|}{PF02388} & \\multicolumn{2}{c|}{PF09924} \\\\\n \\cline{2-13}\n {} & $t_{CPU}$ (sec) & $t_R$ (sec) & $t_{CPU}$ (sec) & $t_R$ (sec)\n & $t_{CPU}$ (sec) & $t_R$ (sec) & $t_{CPU}$ (sec) & $t_R$ (sec)\n & $t_{CPU}$ (sec) & $t_R$ (sec) & $t_{CPU}$ (sec) & $t_R$ (sec) \\\\\n \\hline\n $0.1$ & $21.219$ & $26.895$ & $18.698$ & $23.442$ & $19.383$ & $24.675$\n & $21.351$ & $26.966$ & $19.059$ & $24.560$ & $21.451$ & $26.829$ \\\\\n \\hline\n $0.2$ & $22.604$ & $28.014$ & $21.109$ & $26.536$ & $20.336$ & $25.665$\n & $19.555$ & $24.562$ & $19.370$ & $24.681$ & $22.504$ & $27.855$ \\\\\n \\hline\n $0.3$ & $19.056$ & $24.130$ & $20.692$ & $25.663$ & $21.934$ & $27.466$\n & $21.084$ & $26.236$ & $19.317$ & $24.934$ & $19.681$ & $24.732$ \\\\\n \\hline\n $0.4$ & $21.552$ & $27.442$ & $19.981$ & $25.021$ & $18.794$ & $24.142$\n & $21.065$ & $26.485$ & $20.878$ & $26.291$ & $21.947$ & $26.979$ \\\\\n \\hline\n $0.5$ & $31.952$ & $88.406$ & $21.161$ & $26.374$ & $19.123$ & $24.309$\n & $22.534$ & $27.851$ & $19.729$ & $25.011$ & $21.888$ & $27.168$ \\\\\n \\hline\n $0.6$ & $20.195$ & $25.362$ & $21.006$ & $26.371$ & $21.426$ & $30.303$\n & $20.640$ & $26.050$ & $19.424$ & $24.279$ & $20.750$ & $26.175$ \\\\\n \\hline\n $0.7$ & $19.168$ & $23.924$ & $20.989$ & $26.482$ & $21.440$ & $27.129$\n & $19.370$ & $24.287$ & $19.274$ & $24.779$ & $21.545$ & $26.830$ \\\\\n \\hline\n $0.8$ & $21.982$ & $27.362$ & $19.760$ & $24.681$ & $20.662$ & $25.948$\n & $21.324$ & $26.648$ & $21.685$ & $26.964$ & $21.644$ & $27.160$ \\\\\n \\hline\n $0.9$ & $21.356$ & $27.235$ & $21.251$ & $26.550$ & $19.518$ & $24.880$\n & $21.276$ & $26.736$ & $21.283$ & $26.981$ & $19.217$ & $23.910$ \\\\\n \\hline\n $1.0$ & $20.206$ & $25.608$ & $20.914$ & $26.173$ & $20.726$ & $25.877$\n & $21.430$ & $26.625$ & $20.102$ & $25.198$ & $19.810$ & $25.348$ \\\\\n \\hline\n $2.0$ & $19.435$ & $24.882$ & $20.660$ & $25.675$ & $20.798$ & $26.326$\n & $20.301$ & $25.481$ & $19.588$ & $24.837$ & $20.446$ & $25.755$ \\\\\n \\hline\n $3.0$ & $20.549$ & $25.646$ & $19.988$ & $25.479$ & $20.629$ & $25.516$\n & $19.914$ & $25.230$ & $20.438$ & $25.648$ & $19.393$ & $24.815$ \\\\\n \\hline\n $4.0$ & $19.528$ & $24.693$ & $20.649$ & $25.610$ & $19.428$ & $24.637$\n & $20.505$ & $26.001$ & $20.868$ & $26.009$ & $20.060$ & $25.643$ \\\\\n \\hline\n $5.0$ & $19.698$ & $24.824$ & $21.076$ & $26.468$ & $19.865$ & $24.753$\n & $19.120$ & $24.267$ & $19.466$ & $24.423$ & $21.760$ & $27.244$ \\\\\n \\hline\n $6.0$ & $20.809$ & $26.319$ & $20.352$ & $25.914$ & $19.507$ & $24.362$\n & $20.110$ & $25.587$ & $20.708$ & $25.961$ & $21.240$ & $26.665$ \\\\\n \\hline\n $7.0$ & $20.287$ & $25.951$ & $21.073$ & $26.459$ & $19.482$ & $24.861$\n & $18.272$ & $23.535$ & $19.015$ & $24.460$ & $20.646$ & $25.964$ \\\\\n \\hline\n $8.0$ & $21.427$ & $26.885$ & $19.494$ & $24.421$ & $20.107$ & $25.228$\n & $20.645$ & $26.095$ & $21.039$ & $26.222$ & $20.014$ & $25.155$ \\\\\n \\hline\n $9.0$ & $21.623$ & $27.335$ & $21.554$ & $26.517$ & $19.239$ & $24.529$\n & $20.629$ & $25.929$ & $21.035$ & $26.450$ & $21.086$ & $26.526$ \\\\\n \\hline\n $10.0$ & $20.815$ & $26.379$ & 
$21.127$ & $26.630$ & $19.927$ & $25.340$\n & $19.781$ & $25.063$ & $21.438$ & $27.019$ & $17.286$ & $22.390$ \\\\\n \\hline \\hline\n Total & $403.461$ & $557.294$ & $391.524$ & $490.466$ & $382.324$\n & $485.946$ & $388.906$ & $489.634$ & $383.716$ & $484.707$ & $392.368$\n & $493.143$ \\\\\n \\hline\n \\end{tabular}\n \\endgroup\n \\end{center}\n\\end{sidewaystable}\n\n\\begin{sidewaystable}[p]\n \\begin{center}\n \\caption{\\small Calculation of CPU and Real times of Havrda-Charvat\n Entropy measures for 06 families from 03 Clans with parameters\n $0 < s \\leq 1$ and the $PLA_{I\\!I}$ configuration. \\label{tab17}}\n \\begingroup\n \\everymath{\\scriptstyle}\n \\footnotesize\n \\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}\n \\hline\n {} & \\multicolumn{4}{c|}{CL0028} & \\multicolumn{4}{c|}{CL0023}\n & \\multicolumn{4}{c|}{CL0257} \\\\\n \\cline{2-13}\n s & \\multicolumn{2}{c|}{PF06850} & \\multicolumn{2}{c|}{PF00135}\n & \\multicolumn{2}{c|}{PF00005} & \\multicolumn{2}{c|}{PF13481}\n & \\multicolumn{2}{c|}{PF02388} & \\multicolumn{2}{c|}{PF09924} \\\\\n \\cline{2-13}\n {} & $t_{CPU}$ (sec) & $t_R$ (sec) & $t_{CPU}$ (sec) & $t_R$ (sec)\n & $t_{CPU}$ (sec) & $t_R$ (sec) & $t_{CPU}$ (sec) & $t_R$ (sec)\n & $t_{CPU}$ (sec) & $t_R$ (sec) & $t_{CPU}$ (sec) & $t_R$ (sec) \\\\\n \\hline\n $0.1$ & $21.219$ & $26.895$ & $18.698$ & $23.442$ & $19.383$ & $24.675$\n & $21.351$ & $26.966$ & $19.059$ & $24.560$ & $21.451$ & $26.829$ \\\\\n \\hline\n $0.2$ & $22.604$ & $28.014$ & $21.109$ & $26.536$ & $20.336$ & $25.665$\n & $19.555$ & $24.562$ & $19.370$ & $24.681$ & $22.504$ & $27.855$ \\\\\n \\hline\n $0.3$ & $19.056$ & $24.130$ & $20.692$ & $25.663$ & $21.934$ & $27.466$\n & $21.084$ & $26.236$ & $19.317$ & $24.934$ & $19.681$ & $24.732$ \\\\\n \\hline\n $0.4$ & $21.552$ & $27.442$ & $19.981$ & $25.021$ & $18.794$ & $24.142$\n & $21.065$ & $26.485$ & $20.878$ & $26.291$ & $21.947$ & $26.979$ \\\\\n \\hline\n $0.5$ & $31.952$ & $88.406$ & $21.161$ & $26.374$ & $19.123$ & $24.309$\n & $22.534$ & $27.851$ & $19.729$ & $25.011$ & $21.888$ & $27.168$ \\\\\n \\hline\n $0.6$ & $20.195$ & $25.362$ & $21.006$ & $26.371$ & $21.426$ & $30.303$\n & $20.640$ & $26.050$ & $19.424$ & $24.279$ & $20.750$ & $26.175$ \\\\\n \\hline\n $0.7$ & $19.168$ & $23.924$ & $20.989$ & $26.482$ & $21.440$ & $27.129$\n & $19.370$ & $24.287$ & $19.274$ & $24.779$ & $21.545$ & $26.830$ \\\\\n \\hline\n $0.8$ & $21.982$ & $27.362$ & $19.760$ & $24.681$ & $20.662$ & $25.948$\n & $21.324$ & $26.648$ & $21.685$ & $26.964$ & $21.644$ & $27.160$ \\\\\n \\hline\n $0.9$ & $21.356$ & $27.235$ & $21.251$ & $26.550$ & $19.518$ & $24.880$\n & $21.276$ & $26.736$ & $21.283$ & $26.981$ & $19.217$ & $23.910$ \\\\\n \\hline\n $1.0$ & $20.206$ & $25.608$ & $20.914$ & $26.173$ & $20.726$ & $25.877$\n & $21.430$ & $26.625$ & $20.102$ & $25.198$ & $19.810$ & $25.348$ \\\\\n \\hline \\hline\n Total & $219.290$ & $324.380$ & $205.561$ & $257.293$ & $203.342$\n & $260.394$ & $209.629$ & $262.446$ & $200.121$ & $253.678$ & $210.437$\n & $262.986$ \\\\\n \\hline \\hline\n Grand & \\multicolumn{1}{c|}{\\multirow{2}{*}{$1284.343$}}\n & \\multicolumn{1}{c|}{\\multirow{2}{*}{$1895.873$}}\n & \\multicolumn{1}{c|}{\\multirow{2}{*}{$1199.098$}}\n & \\multicolumn{1}{c|}{\\multirow{2}{*}{$1727.300$}}\n & \\multicolumn{1}{c|}{\\multirow{2}{*}{$1131.878$}}\n & \\multicolumn{1}{c|}{\\multirow{2}{*}{$1693.572$}}\n & \\multicolumn{1}{c|}{\\multirow{2}{*}{$1066.375$}}\n & \\multicolumn{1}{c|}{\\multirow{2}{*}{$1511.927$}}\n & 
\\multicolumn{1}{c|}{\\multirow{2}{*}{$1124.465$}}\n & \\multicolumn{1}{c|}{\\multirow{2}{*}{$1681.087$}}\n & \\multicolumn{1}{c|}{\\multirow{2}{*}{$1211.808$}}\n & \\multicolumn{1}{c|}{\\multirow{2}{*}{$1772.106$}} \\\\\n Total & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{}\n & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{}\n & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{}\n & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{}\n & \\multicolumn{1}{c|}{} \\\\\n \\hline\n \\end{tabular}\n \\endgroup\n \\end{center}\n\\end{sidewaystable}\n\n\\newpage\n\nThe corresponding times for calculating the joint probabilities and the\n$s$-powers of these have been added up in order to report the results for\nthe Havrda-Charvat entropies in the Grand Total row. We consider these\ntimes to be quite affordable indeed.\n\n\\begin{table}[H]\n \\begin{center}\n \\caption{\\small Total CPU and real times for calculating the Entropy measure\n content of family PF06850, and approximations for the Grand\n Total of the whole sample space. \\label{tab18}}\n \\begin{tabular}{|c|c|c|}\n \\hline\n \\multicolumn{1}{|c|}{\\multirow{2}{*}{$POA_{I\\!I}$}} & Entropy Measures &\n Entropy Measures \\\\\n {} & $H\\!_j(s)$ --- 19 $s$-values & $H\\!_{jk}(s)$ ---\n 19 $s$-values \\\\\n \\hline\n Total CPU time & $0.358+2.562+0.292$\n & $550.129+611.374$ \\\\\n (family PF06850) & $=3.212$ sec & $+421.418=1,582.921$ sec \\\\\n \\hline\n Grand Total CPU time & $3,433.628$ sec\n & $1,692,142.549$ sec \\\\\n (1069 families) & $=0.954$ h & $=19.585$ days \\\\\n \\hline\n Total Real time & $1.062+9.300+0.332$\n & $593.848+1,649.357+$ \\\\\n (family PF06850) & $=10.694$ sec & $927.405=3,170.610$ sec \\\\\n \\hline\n Grand Total Real time & $11,431.886$ sec\n & $3,389,382.090$ sec \\\\\n (1069 families) & $=3.175$ h & $=39.229$ days \\\\\n \\hline\n \\end{tabular}\n \\end{center}\n\\end{table}\n\n\\begin{table}[H]\n \\begin{center}\n \\caption{\\small Total CPU and real times for calculating the Entropy measure\n content of family PF06850, and approximations for the Grand\n Total of the whole sample space. \\label{tab19}}\n \\begin{tabular}{|c|c|c|}\n \\hline\n \\multicolumn{1}{|c|}{\\multirow{2}{*}{$PLA_{I\\!I}$}} & Entropy Measures &\n Entropy Measures \\\\\n {} & $H\\!_j(s)$ --- 19 $s$-values & $H\\!_{jk}(s)$ ---\n 19 $s$-values \\\\\n \\hline\n Total CPU time & $0.291+1.801+0.640$\n & $787.254+515.072$ \\\\\n (family PF06850) & $=2.732$ sec & $+403.461=1,705.787$ sec \\\\\n \\hline\n Grand Total CPU time & $2,920.508$ sec\n & $1,823,486.303$ sec \\\\\n (1069 families) & $=0.811$ h & $=21.105$ days \\\\\n \\hline\n Total Real time & $0.642+7.261+1.291$\n & $1,068.026+1,028.655$ \\\\\n (family PF06850) & $=9.194$ sec & $+557.294=2,603.975$ sec \\\\\n \\hline\n Grand Total Real time & $9,828.386$ sec\n & $2,783,649.275$ sec \\\\\n (1069 families) & $=2.730$ h & $=32.218$ days \\\\\n \\hline\n \\end{tabular}\n \\end{center}\n\\end{table}\n\n\n\\section{Concluding Remarks and Suggestions for Future Work}\nThe treatment of the probability distributions of occurrence in protein\ndatabases is a twofold procedure. We intend to find a way of characterizing\nthe protein database by values of Entropy Measures, in order to center a\nsound discussion on the maximization of a convenient average Entropy\nMeasure representing the entire protein database.
We also intend to\nderive a partition function, from which a thermodynamical theory\nassociated with the temporal evolution of the database can be constructed.\nIf the corresponding evolution of the protein families is assumed to be\nregistered in the subsequent versions of the database (Table \\ref{tab17}),\nwe will then be able to describe the sought thermodynamical evolution from\nthis theory, as well as to obtain from it a convenient description of all\nthe intermediate Levinthal stages which seem to be necessary for describing\nthe folding\/unfolding dynamical process.\n\nIn summary, this approach starts from a thermodynamical theory of the\nevolution of protein databases, formulated via Entropy measures, and\nproceeds to the construction of a successful dynamical theory of protein\nfamilies. In other words, from the thermodynamics of the evolution of a\nprotein database, we will derive a statistical mechanics that gives us\nphysical insight into the construction of a successful dynamics of protein\nfamilies.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{intro}\n\n\nSemiflexible polymer is a term coined to describe a variety of physical systems that involve linear molecules. For instance, understanding the behavior of such polymers serves as the basis for understanding phenomena encountered in the polymer industry, in biotechnology, and in molecular processes in living cells \\cite{Fal-Sakaue2007}. Indeed, biopolymer functionality is ruled by conformation, which in turn is considerably modified in the geometrically confined or crowded environment inside the cell \\cite{Fal-Koster2008, Fal-Reisner2005,Fal-Cifra2010, Fal-Benkova2017}. Beyond the most prominent examples, namely DNA compaction in the nucleus and viral DNA packed in capsids \\cite{Fal-Cifra2010, Fal-Locker2006}, there are also the outstanding examples of DNA transcription and replication, processes governed by the binding of specific proteins. These mechanisms are strongly connected to the polymer configuration \\cite{Fal-Gowers2003,Fal-Broek2008,Fal-Ostermeir2010}. Furthermore, a wide range of biophysical processes derives from DNA constrained to a ring enclosure and to more general topologies \\cite{Fal-Witz2011}. \n\nOn the one hand, motivated by the packaging and coiling problems mentioned above, Mondescu and Muthukumar (MM) studied in \\cite{PDoi-Mondescu1998} the conformational states of an ideal Gaussian polymer \\cite{PDoi-Doi1988book} wrapping different curved surfaces, presenting theoretical predictions for the mean-square end-to-end distance. Later on, Spakowitz and Wang (SW) in \\cite{PSaito-Spakowitz2003} studied the conformational states of an ideal semiflexible polymer confined to a spherical surface, based on the continuous Worm-Like Chain Model (WLC) \\cite{PSaito-Saito1967}. Unlike the conformational states of the Gaussian polymer, SW found the existence of a shape transition from an ordered to a disordered phase, where the polymer roughly looks like cooked spaghetti or a random walk, respectively. Moreover, in the appropriate limit, the behavior of the semiflexible polymer reduces to that of the Gaussian polymer in the spherical case. Subsequently, the MM and SW results were confirmed through computer simulations, which established the validity regimes of each theory.
Additionally, as a consequence of the excluded volume effect, a helical state was found in \\cite{Fal-Cerda2005,Fal-Slosar2006} and a Tennis ball-like state in \\cite{Psim-Zhang2011}; these states are absent in both the MM and SW theories. The problem becomes richer when short-range and long-range electrostatic interactions are considered, since they can induce a Tennis ball-like state, which is predominant over the helical conformation in the case of long-range electrostatic interactions \\cite{PSim-Angelescu2008}. A transition similar to that reported by SW is also observed, with slight corrections in the case of short-range interactions and more pronounced ones for long-range interactions. In the extreme limit of zero temperature, the conformational states are expected to be in the ordered phase. Indeed, by a variational principle consistent with the WLC model one can obtain an abundance of conformational states, including those observed in the simulations mentioned above \\cite{Psim-Guven2012,Psim-Lin2007}. On the other hand, confinement can induce a transition from a circular polymer to a figure-eight shape \\cite{Fal-Ostermeir2010}; even when the conformation of the polymer is considered as a self-avoiding random walk, properties similar to those of a critical phenomenon are found \\cite{Fal-Berger2006,Fal-Sakaue2006,Fal-Sakaue2007,Fal-Usatenko2017}. Confinement can also occur in a three-dimensional volume enclosed by rigid or soft surfaces \\cite{Fal-Brochard2005, Fal-Cacciuto2006, Fal-Chen2007, Fal-Koster2008, PSaito-Morrison2009, Psim-Gao2014}, and in a crowded environment, for instance one modeled by a nanopost array \\cite{Fal-Benkova2017}. \n\nFurthermore, two-dimensional confinement in closed flat spaces can result in an order-disorder shape transition \\cite{Polcon-Liu2008} similar to that of SW. One advantage of confinement in the flat two-dimensional case is that it can be compared with experiment \\cite{Fal-Moukhtar2007,Fal-Witz2011, Fal-Japaridze2017, Fal-Frielinghaus2013}. However, as far as we know, there is no systematic study in the literature of the SW transition in a flat bounded region, even for the ideal semiflexible chain. In this work, we present a theoretical and numerical analysis of the conformational states of an ideal semiflexible polymer in a compact two-dimensional flat space. First, we deduce the Hermans-Ullman (HU) equation \\cite{PSaito-Hermans1952} under the supposition that the conformational states correspond to stochastic realizations of paths defined by the Frenet equations, together with the assumption that the stochastic ``curvature'' satisfies a fluctuation theorem given by a white noise distribution. This latter hypothesis is consistent with the continuous version of the WLC model, as we will see below. Using the HU equation, we shall perform a multipolar decomposition of the probability density function $P({\\bf R}, \\theta, {\\bf R}^{\\prime}, \\theta^{\\prime}, L)$ that gives the probability of finding a polymer of length $L$ with ends at ${\\bf R}$ and ${\\bf R}^{\\prime}$ and directions $\\theta$ and $\\theta^{\\prime}$, respectively.
This decomposition allows us to find a hierarchy of equations associated with the multipoles of $P({\\bf R}, \\theta, {\\bf R}^{\\prime}, \\theta^{\\prime}, L)$, namely, the positional density distribution $\\rho({\\bf R}, {\\bf R}^{\\prime}, L)$, the dipolar distribution $\\mathbb{P}^{i}\\left({\\bf R}, {\\bf R}^{\\prime}, L\\right)$, the quadrupolar two-rank tensor distribution $\\mathbb{Q}_{ij}({\\bf R}, {\\bf R}^{\\prime}, L)$, and so on. We shall show, for instance, that the positional density and the quadrupolar distribution are exactly related through the modified Telegrapher's Equation (MTE), \n\\begin{eqnarray}\n\\frac{\\partial^2\\rho}{\\partial s^2}+\\frac{1}{2\\ell_{p}}\\frac{\\partial\\rho}{\\partial s}=\\frac{1}{2}\\nabla^2\\rho+\\partial_{i}\\partial_{j}\\mathbb{Q}^{ij}.\n\\end{eqnarray}\nIn particular, using this equation and the traceless condition of $\\mathbb{Q}_{ij}$, we verify the well-known exact result of Kratky-Porod for a semiflexible polymer in a two-dimensional space \\cite{PSaito-Kratky1949}. Besides, as a consequence of the exponential decay of $\\mathbb{Q}_{ij}$, we will identify a regime where the quadrupolar distribution can be neglected in the MTE. In addition, we shall explore the conformational states of a semiflexible chain enclosed by a bounded compact two-dimensional domain through the mean-square end-to-end distance. In particular, for a square domain we will show the existence of an order-disorder shape transition of the same nature as the one found by SW \\cite{PSaito-Spakowitz2003}. Furthermore, we will develop a Monte Carlo algorithm for use in computer simulations in order to study the conformational states enclosed in a compact domain. Particularly, the algorithm shall be suited to the square domain, which will additionally allow us to confirm the shape transition and validate the theoretical predictions. \n\n\nThis paper is organized as follows. In Sec. II, we present the stochastic version of the Frenet equations, whose Fokker-Planck formalism gives us a derivation of the Hermans-Ullman (HU) equation. In addition, we discuss a multipolar decomposition for the HU equation. In Sec. III, we provide an application of the methods developed in Sec. II in order to study semiflexible polymer conformations enclosed in a compact domain; particularly, we focus on a square box domain. In Sec. IV, we present a Monte Carlo algorithm to study the conformational states of a semiflexible polymer enclosed in a compact domain. In Sec. V, we give the main results for a square box domain and provide a comparison with the theoretical predictions. In the final Sec. VI, we give our concluding remarks and perspectives on this work.\n\n\n\\section{Preliminary notation and semiflexible polymers }\\label{theory}\n\nLet us consider a polymer on a two-dimensional Euclidean space as a plane curve $\\gamma$, ${\\bf R}: I\\subset \\mathbb{R}\\to\\mathbb{R}^2$, parametrized by arc length $s$. For each point $s\\in I$, a Frenet dihedral can be defined whose vector basis corresponds to the set\n $\\{{\\bf T}(s), {\\bf N}(s)\\}$, consisting of the tangent vector ${\\bf T}(s)={\\bf R}^{\\prime}(s)\\equiv d{\\bf R}\/ds$ and the normal vector ${\\bf N}(s)=\\boldsymbol{\\epsilon}{\\bf T}(s)$, where $\\boldsymbol{\\epsilon}$ is a rotation by an angle of $\\pi\/2$. Note that the components of the rotation correspond to the Levi-Civita antisymmetric tensor in two dimensions.
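Explicitly, in Cartesian components,\n\\begin{eqnarray}\n\\boldsymbol{\\epsilon}=\\left(\\begin{array}{cc} 0 & -1\\\\ 1 & 0\\end{array}\\right),\\qquad {\\bf N}(s)=\\boldsymbol{\\epsilon}{\\bf T}(s)=\\left(-T_{2}(s), T_{1}(s)\\right),\\nonumber\n\\end{eqnarray}\nso that $\\epsilon_{12}=-\\epsilon_{21}=1$ and $\\epsilon_{11}=\\epsilon_{22}=0$.\n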
Both are unit vectors ($\\left|{\\bf T}(s)\\right|=\\left|{\\bf N}(s)\\right|=1$) and, by construction, orthogonal to each other. It is well known that along the points of the curve these vectors satisfy the Frenet equations, ${\\bf T}^{\\prime}(s)=\\kappa(s){\\bf N}(s)$ and ${\\bf N}^{\\prime}(s)=-\\kappa(s){\\bf T}(s)$, where $\\kappa(s)$ is the curvature of the curve \\cite{Fal-Montiel2009}. \n \nIn the absence of thermal fluctuations, the conformations of the polymer are studied through different curve configurations determined by variational principles. For instance, one of the most successful models to describe configurations of a semiflexible polymer is\n\\begin{eqnarray}\nH[{\\bf R}]=\\frac{\\alpha}{2}\\int ds ~\\kappa^2\\left(s\\right) , \n\\label{funcional}\n\\end{eqnarray}\nwhere $H[{\\bf R}]$ is the bending energy, and $\\alpha$ the bending rigidity modulus. This energy functional (\\ref{funcional}) corresponds to the continuous form of the worm-like chain model (WLC) \\cite{PSaito-Saito1967}. In a rather different context, a classical problem, originally proposed by D. Bernoulli and later studied by L. Euler between the XVIII and XIX centuries, consists of finding the family of curves $\\{\\gamma\\}$ of fixed length that minimizes the functional \\eqref{funcional}. The solution to this problem is composed of those curves whose curvature satisfies the differential equation $\\kappa^{\\prime\\prime}+\\frac{1}{2}\\kappa^{3}-\\frac{\\lambda}{\\alpha}\\kappa=0$, where $\\lambda$ is a Lagrange multiplier introduced to constrain the curve length \\cite{Polcla-Miura2017}. This problem has been generalized to the study of elastic curves in manifolds \\cite{Polcla-Singer2008, Psim-Guven2012, Polcla-Manning1987}, which is nowadays relevant to understanding the problem of DNA packaging and the winding of DNA around histone octamers \\cite{Fal-Hardin2012}. \n\n\nIn what follows, we shall develop an unusual approach that incorporates the thermal fluctuations in the study of semiflexible polymers described by the bending energy \\eqref{funcional}. \n\n\\subsection{Stochastic Frenet Equations Approach}\nIn this section, we propose an approach to study the conformational states of a semiflexible polymer immersed in a thermal reservoir and confined to a two-dimensional Euclidean space. We start by postulating that each conformational realization of any polymer on the plane is described by a stochastic path satisfying the stochastic Frenet equations defined by\n\\begin{subequations}\n\\label{ecsestom1}\n\\begin{align}\n\\label{ecsesto0}\n\\frac{d}{ds}{\\bf R}\\left(s\\right)=&{\\bf T}(s),\\\\\n\\frac{d}{ds}{\\bf T}\\left(s\\right)=&{\\kappa}(s){\\boldsymbol{\\epsilon}}{\\bf T}(s),\n\\label{ecsesto}\n\\end{align}\n\\end{subequations}\nwhere ${\\bf R}(s)$, ${\\bf T}(s)$ and $\\kappa(s)$ are now random variables. According to this postulate, it can be shown that $\\left|{\\bf T}(s)\\right|$ is a constant, which can be fixed to unity; indeed, Eq.~\\eqref{ecsesto} implies $\\frac{d}{ds}\\left|{\\bf T}\\right|^{2}=2\\kappa\\, {\\bf T}\\cdot\\boldsymbol{\\epsilon}{\\bf T}=0$, since $\\boldsymbol{\\epsilon}{\\bf T}$ is orthogonal to ${\\bf T}$.
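\n\nBefore proceeding, we illustrate how Eqs.~\\eqref{ecsestom1} can be integrated numerically. The following minimal sketch (in Python; names, step size and parameter values are illustrative choices, not part of the original formulation) exploits $\\left|{\\bf T}\\right|=1$ to update the direction of ${\\bf T}$ by the rotation angle $\\kappa\\, ds$ at each step, sampling the stochastic curvature as Gaussian white noise of intensity $1\/\\ell_{p}$, in accordance with the fluctuation relation postulated in the next paragraph.\n\\begin{verbatim}\nimport numpy as np\n\ndef stochastic_frenet_path(L=100.0, ell_p=5.0, ds=0.01, seed=0):\n    # one realization of the stochastic Frenet equations:\n    # dR\/ds = T and dT\/ds = kappa eps T\n    rng = np.random.default_rng(seed)\n    n = int(L \/ ds)\n    R = np.zeros((n + 1, 2))\n    theta = 0.0  # direction of T, i.e. T = (cos theta, sin theta)\n    for i in range(n):\n        # white-noise curvature: <kappa(s) kappa(s')> = delta(s - s')\/ell_p\n        kappa = rng.normal(0.0, np.sqrt(1.0 \/ (ell_p * ds)))\n        theta += kappa * ds  # rotating T by kappa*ds preserves |T| = 1\n        R[i + 1] = R[i] + ds * np.array([np.cos(theta), np.sin(theta)])\n    return R\n\\end{verbatim}\nAveraging $({\\bf R}(L)-{\\bf R}(0))^{2}$ over many such realizations provides a direct check of the analytical results derived below.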
\n\nIn addition, we postulate that $\\kappa(s)$ for semiflexible polymers is a random variable, named here the stochastic curvature, distributed according to the following probability density\n\\begin{eqnarray}\n\\mathcal{P}[\\kappa]\\mathcal{D}\\kappa:=\\frac{1}{Z}\\exp\\left[-\\beta H\\right]\\mathcal{D}\\kappa, \n\\label{dft}\n\\end{eqnarray}\nwhere $H$ is given by Eq.~\\eqref{funcional}, $Z$ is an appropriate normalization constant, $\\mathcal{D}\\kappa$ is a functional measure, and $\\beta=1\/k_{B}T$ is the inverse of the thermal energy $k_{B}T$, with $k_{B}$ and $T$ being the Boltzmann constant and the absolute temperature, respectively. It is also convenient to introduce the persistence length $\\ell_{p}=\\beta\\alpha$. Note that, due to the Gaussian structure of the probability density \\eqref{dft}, the stochastic curvature satisfies the following fluctuation theorem \\cite{Fal-Zinn1996}\n\\begin{subequations}\n\\label{flucm1}\n\\begin{align}\n\\label{fluc0}\n\\left<\\kappa(s)\\kappa(s^{\\prime})\\right>=&\\frac{1}{\\ell_{p}}\\delta(s-s^{\\prime}),\\\\\n\\left<\\kappa(s)\\right>=&0. \n\\label{fluc}\n\\end{align}\n\\end{subequations}\n \nSince the polymer is confined to a plane and ${\\bf T}(s)$ is a unit vector, it may be written as ${\\bf T}(s)=(\\cos\\theta(s), \\sin\\theta(s))$, where $\\theta(s)$ is another random variable. In this way, the stochastic equations \\eqref{ecsestom1} can be rewritten in the following manner\n\\begin{subequations}\n\\label{stochasticecs}\n\\begin{align}\n\\label{stochasticecs0}\n\\frac{d}{ds}{\\bf R}\\left(s\\right)=&\\left(\\cos\\theta(s),\\sin\\theta(s)\\right),\\\\\n\\frac{d}{ds}\\theta\\left(s\\right)=&\\kappa(s).\n\\label{stochasticecs1}\n\\end{align}\n\\end{subequations}\nThe most important feature of these equations is their analogy with the Langevin equations for an active particle in the overdamped limit, where the noise is introduced through the stochastic curvature $\\kappa(s)$ \\cite{Fal-Sevilla2014}. Moreover, these equations can be studied through traditional numerical methods, for example, using standard routines implemented in Brownian dynamics \\cite{Fal-Ermak1978}, as in the sketch shown above. Here, from an analytical viewpoint, we find it more convenient to use a Fokker-Planck formalism in order to extract information from the above stochastic equations (\\ref{stochasticecs}). \n\n\\subsection{From Frenet Stochastic Equations to the Hermans-Ullman Equation}\n\nIn this section, we present the Fokker-Planck formalism corresponding to the stochastic equations \\eqref{stochasticecs}. This description consists of determining the equation that governs the probability density function defined by\n\\begin{eqnarray}\nP(\\left.{\\bf R}, \\theta\\right|{\\bf R}^{\\prime}, \\theta^{\\prime}; s)=\\left<\\delta({\\bf R}-{\\bf R}(s))\\delta(\\theta-\\theta(s))\\right>, \n\\end{eqnarray}\nwhere ${\\bf R}$ and ${\\bf R}^{\\prime}$ are the ending positions of the polymer, and the angles $\\theta$ and $\\theta^{\\prime}$ are their corresponding directions, respectively. The parameter $s$ is the polymer length. \n\nApplying the standard procedure described in Refs.
\\cite{Fal-Zinn1996, Fal-Gardiner1986} on the stochastic equations \\eqref{stochasticecs}, we obtain the corresponding Fokker-Planck equation\n\\begin{eqnarray}\n\\frac{\\partial P}{\\partial s}+\\nabla\\cdot\\left({\\bf t}\\left(\\theta\\right)P\\right)=\\frac{1}{2\\ell_{p}}\\frac{\\partial^2P}{\\partial\\theta^2},\n\\label{F-P}\n\\end{eqnarray}\nwhere ${\\bf t}(\\theta)=\\left(\\cos\\theta,\\sin\\theta\\right)$ and $\\nabla$ is the gradient operator with respect to ${\\bf R}$. Let us look carefully at the last equation. Surprisingly, this equation is exactly the equation found by J. J. Hermans and R. Ullman in 1952 \\cite{PSaito-Hermans1952}. They derived it supposing that the conformation of a polymer is determined by Markovian walks, taking the mean values of $\\theta$ and $\\theta^2$ as phenomenological parameters, based on the X-ray scattering experiments performed by Kratky and Porod \\cite{PSaito-Kratky1949}. For this reason, from now on, we name \\eqref{F-P} the Hermans-Ullman (HU) equation. It must be mentioned that H. E. Daniels found an equivalent equation a few months before Hermans and Ullman \\cite{PSaito-Daniels1952}. A review of the methods used to obtain the HU equation can be found in Refs. \\cite{Wlc-Yamakawa2016, Polcon-Chen2016}. For instance, taking into account \\eqref{funcional}, the HU equation can be derived through the Green function formalism \\cite{Polcon-Chen2016}. In contrast, in the present work, we have deduced the Hermans-Ullman equation from two postulates, namely, I) the conformation of the semiflexible polymer satisfies the stochastic Frenet equations \\eqref{ecsestom1}, and II) the stochastic curvature is distributed according to \\eqref{dft}, which is consistent with the worm-like chain model (\\ref{funcional}). As far as we know, this procedure has not been reported in the literature. \n \n To end this section, let us remark, as pointed out in \\cite{Wlc-Yamakawa2016, Polcon-Chen2016}, that\n \\begin{eqnarray}\n \\int d^{2}{\\bf R}d^{2}{\\bf R}^{\\prime}~P(\\left.{\\bf R}, \\theta\\right|{\\bf R}^{\\prime}, \\theta_{0}; s)\\propto\\mathbb{Z}\\left(\\theta,\\theta_{0}, s\\right),\n \\end{eqnarray} \n where $\\mathbb{Z}\\left(\\theta,\\theta_{0}, s\\right)$ is the marginal probability density function (see Appendix \\ref{A}), which establishes a bridge to the formalism of N. Sait$\\hat{\\rm o}$ et al. \\cite{PSaito-Saito1967} for the semiflexible polymer in a thermal bath.\n\n\\subsection{Multipolar decomposition for the Hermans-Ullman Equation} \\label{sect3}\nIt is necessary to emphasize that the HU equation naturally arises in the description of the motion of an active particle. Thus, being careful with the right interpretation, the methods developed in Refs. \\cite{Fal-Sevilla2014, Fal-Castro-Villarreal2018} to solve Eq.~\\eqref{F-P} can be applied in this context.
Particularly, we use the multipolar expansion approach to solve Eq.~\\eqref{F-P}, which in the orthonormal Cartesian basis $\\{1, 2t_{i}, 4(t_{i}t_{j}-\\frac{1}{2}\\delta_{ij}), \\cdots\\}$ takes the following form \\cite{ Fal-Castro-Villarreal2018}\n\\begin{eqnarray}\nP({\\bf R},\\theta, s)&=&\\rho({\\bf R}, s)+2\\mathbb{P}_{i}({\\bf R}, s) {t_{i}}\\nonumber\\\\&+&4\\mathbb{Q}_{ij}\\left({\\bf R},s\\right)\\left(t_{i}t_{j}-\\frac{1}{2}\\delta_{ij}\\right)\\nonumber\\\\\n&+&8\\mathbb{R}_{ijk}\\left({\\bf R}, s\\right)\\left(t_{i}t_{j}t_{k}-\\frac{1}{4}\\delta_{\\left(ij\\right.}t_{\\left.k\\right)}\\right)+\\cdots,\\nonumber\\\\\n\\end{eqnarray}\nwhere we have adopted the Einstein summation convention, and the symbol $(ijk)$ means symmetrization on the indices $i,j,k$. The coefficients of the series are multipolar tensors given by\n \\begin{eqnarray}\n\\rho({\\bf R}, s)&=&\\int_{0}^{2\\pi} \\frac{d\\theta}{2\\pi}~P({\\bf R},\\theta, s)\\nonumber,\\\\ \n\\mathbb{P}_{i}({\\bf R}, s)&=&\\int_{0}^{2\\pi} \\frac{d\\theta}{2\\pi}~{t}_{i}~ P({\\bf R}, \\theta, s),\\nonumber\\\\ \n\\mathbb{Q}_{ij}({\\bf R}, s)&=&\\int_{0}^{2\\pi} \\frac{d\\theta}{2\\pi}~\\left(t_{i}t_{j}-\\frac{1}{2}\\delta_{ij}\\right)P({\\bf R}, \\theta, s), \\nonumber\\\\\n\\mathbb{R}_{ijk}({\\bf R}, s)&=&\\int_{0}^{2\\pi} \\frac{d\\theta}{2\\pi}~\\left(t_{i}t_{j}t_{k}-\\frac{1}{4}\\delta_{\\left(ij\\right.}t_{\\left.k\\right)}\\right)P({\\bf R}, \\theta, s), \\nonumber\\\\\n&\\vdots&.\n\\end{eqnarray}\nFor notational simplicity, in the latter coefficients we have suppressed the $\\theta$ dependence of the vector ${\\bf t}$, as well as the dependence on ${\\bf R}^{\\prime}$ and $\\theta^{\\prime}$. The physical meaning of these tensors is as follows: $\\rho({\\bf R}, s)$ is the probability density function (PDF) of finding configurations with ends at ${\\bf R}$ and ${\\bf R}^{\\prime}$, $\\mathbb{P}({\\bf R}, s)$ \nis the local average of the polymer conformational direction, $\\mathbb{Q}_{ij}\\left({\\bf R}, s\\right)$ \nis the correlation between the components $i$ and $j$ of the polymer direction ${\\bf t}$, etc.\n\nFrom the Hermans-Ullman Eq.~\\eqref{F-P}, it is possible to determine a hierarchy of equations for the multipolar tensors, which has already been shown for active particles in Refs. \\cite{Fal-Sevilla2014, Fal-Castro-Villarreal2018}.\nThe same hierarchy of equations can also be found in the semiflexible polymer context. Integrating over the angle $\\theta$ in Eq.~\\eqref{F-P}, we obtain the following continuity-type equation\n\\begin{eqnarray}\n\\frac{\\partial \\rho({\\bf R}, s)}{\\partial s}=-\\partial_{i}\\mathbb{P}^{i}\\left({\\bf R}, s\\right). \n\\label{eq1}\n\\end{eqnarray}\nThe related equation for $\\mathbb{P}_{i}\\left({\\bf R},s\\right)$ is obtained by multiplying Eq.~\\eqref{F-P} by ${\\bf t}(\\theta)$ and using the definition of the tensor $\\mathbb{Q}_{ij}({\\bf R}, s)$.
Thus, we find\n\\begin{eqnarray}\n\\frac{\\partial \\mathbb{P}_{i}({\\bf R}, s)}{\\partial s}=-\\frac{1}{2\\ell_{p}}\\mathbb{P}_{i}({\\bf R}, s)-\\frac{1}{2}\\partial_{i}\\rho({\\bf R},s)-\\partial^{j}\\mathbb{Q}_{ij}({\\bf R}, s).\\nonumber\\\\\n\\label{eq2}\n\\end{eqnarray}\nIn the same way, we obtain the equation for $\\mathbb{Q}_{ij}({\\bf R}, s)$,\n\\begin{eqnarray}\n\\frac{\\partial\\mathbb{Q}_{ij}({\\bf R}, s)}{\\partial s}=-\\frac{2}{\\ell_{p}}\\mathbb{Q}_{ij}({\\bf R}, s)-\\frac{1}{4}\\mathbb{T}_{ij}({\\bf R}, s)-\\partial^{k}\\mathbb{R}_{ijk}({\\bf R}, s), \\nonumber\\\\\n\\end{eqnarray}\nwhere $\\mathbb{T}_{ij}$ denotes the second-rank tensor $-\\delta_{ij}\\partial_{k}\\mathbb{P}^{k}+\\left(\\partial^{i}\\mathbb{P}^{j}+\\partial^{j}\\mathbb{P}^{i}\\right)$. Similarly, the equations for the rest of the tensorial fields can be computed recursively for consecutive ranks. Taking a combination of \\eqref{eq1} and \\eqref{eq2}, we observe that the PDF $\\rho({\\bf R}, s)$ and the two-rank tensor $\\mathbb{Q}_{ij}({\\bf R}, s)$ are involved in one equation given by\n\\begin{eqnarray}\n\\frac{\\partial^2\\rho({\\bf R}, s)}{\\partial s^2}+\\frac{1}{2\\ell_{p}}\\frac{\\partial\\rho({\\bf R}, s)}{\\partial s}=\\frac{1}{2}\\nabla^2\\rho({\\bf R}, s)+\\partial_{i}\\partial_{j}\\mathbb{Q}^{ij}({\\bf R}, s).\\nonumber\\\\\n\\label{eq3}\n\\end{eqnarray}\n\nIt is noteworthy that Eq. (\\ref{eq3}) is a modified version of the Telegrapher's equation \\cite{Fal-Masoliver1989}, the difference being the term $\\partial_{i}\\partial_{j}\\mathbb{Q}^{ij}({\\bf R}, s)$. In the following, we use Eq. (\\ref{eq3}) for the case of a semiflexible polymer in the open Euclidean plane as a test case, which allows us to verify the famous experimental result of Kratky and Porod \\cite{PSaito-Kratky1949}.\n\n\\subsubsection{Example: Testing the Kratky-Porod result.}\\label{sectII}\nIn this section, we study the case of a semiflexible polymer on the Euclidean plane. In order to reproduce the well-known result of Kratky-Porod \\cite{PSaito-Kratky1949}, we apply the multipolar series method shown in the previous section to compute the mean square end-to-end distance.\nThe end-to-end distance is defined as $\\delta{\\bf R}:={\\bf R}-{\\bf R}^{\\prime}$; thus, the mean square end-to-end distance is given by \n\\begin{equation}\n \\left<\\delta{\\bf R}^2\\right>\\equiv\\int_{\\mathbb{R}^{2}\\times\\mathbb{R}^{2}}\\rho({\\bf R}|{\\bf R}^{\\prime}; s)\\delta{\\bf R}^2~d^{2}{\\bf R}~d^2{\\bf R}^{\\prime}.\n \\label{MS}\n \\end{equation}\nTo compute this quantity, we use Eq.~\\eqref{eq3} to show that the l.h.s. of (\\ref{MS}) satisfies \n {\\small\\begin{eqnarray}\n\\frac{\\partial^2\\left<\\delta{\\bf R}^2\\right>}{\\partial s^2}+\\frac{1}{2\\ell_{p}}\\frac{\\partial\\left<\\delta{\\bf R}^2\\right>}{\\partial s}&=&\\int d^{2}{\\bf R}~d^2{\\bf R}^{\\prime}\\left(\\delta{\\bf R}\\right)^2\\times\\nonumber\\\\&&\\left[\\frac{1}{2}\\nabla^2\\rho({\\bf R}, s)+\\partial_{i}\\partial_{j}\\mathbb{Q}_{ij}({\\bf R}, s)\\right].\\nonumber\\\\\n\\label{eq4}\n\\end{eqnarray}}\nIntegrating by parts on the r.h.s.
of \\eqref{eq4} with respect to ${\\bf R}$, using that $\\nabla^2\\delta{\\bf R}^2=4$ and the traceless condition $\\delta_{ij}\\mathbb{Q}^{ij}=0$, we find that $\\left<\\delta{\\bf R}^2\\right>$ satisfies the differential equation\n\n\\begin{eqnarray}\n\\frac{\\partial^2\\left<\\delta{\\bf R}^2\\right>}{\\partial s^2}+\\frac{1}{2\\ell_{p}}\\frac{\\partial\\left<\\delta{\\bf R}^2\\right>}{\\partial s}=2.\n\\label{eq5}\n\\end{eqnarray}\nNow, we solve this differential equation with the initial conditions $\\left<\\delta{\\bf R}^2\\right>=0$ and $\\frac{d}{ds}\\left<\\delta{\\bf R}^2\\right>=0$ at $s=0$. The final polymer length is denoted by $L$. \n\nIn this way, we find that the mean square end-to-end distance is given by\n\\begin{eqnarray}\n\\left<\\delta{\\bf R}^2\\right>=4\\ell_{p}L-8\\ell_{p}^2\\left(1-\\exp\\left(-\\frac{L}{2\\ell_{p}}\\right)\\right),\n\\label{planoabierto}\n\\end{eqnarray}\nwhich is the standard Kratky-Porod result for semiflexible polymers confined to a plane \n\\cite{PSaito-Kratky1949, PSaito-Hermans1952}. The last result has two well-known asymptotic limits, namely, \n\\begin{eqnarray}\n\\left<\\delta{\\bf R}^2\\right>\\simeq\\left\\{\\begin{array}{cc}\n 4\\ell_{p}L, & {\\rm if }~L\\gg \\ell_{p},\\\\\n &\\label{asymptotics}\\\\ \n L^2, & {\\rm if}~L\\ll\\ell_{p}.\\\\\n \\end{array}\\right.\n\\end{eqnarray}\n\nIn the first case, the polymer conformations are equivalent to Brownian trajectories; the polymer is then called a Gaussian polymer \\cite{PDoi-Doi1988book}. In the second case, the polymer takes only one configuration: it goes in a straight line, which is known as the ballistic limit. We remark that the result in Eq.~\\eqref{planoabierto} is usually obtained by using different analytical approaches (for example, see Appendix \\ref{A} and Refs. \\cite{MGD-Kamien2002, PSaito-Saito1967}).\n \n In the next section, we address the study of a polymer confined to a flat compact domain within the approach developed above.\n\n\\section{Semiflexible polymer in a compact plane domain }\n\n\\subsection{General expressions for a semiflexible polymer in an arbitrary compact domain}\\label{sectTE}\nIn this section, we apply the hierarchy of equations developed in section \\ref{sect3} in order to determine the conformational states of a semiflexible polymer confined to a flat compact domain $\\mathcal{D}$. Commonly, it is necessary to truncate the hierarchy of equations at some rank. For instance, at first order, if we consider $\\mathbb{P}_{i}({\\bf R}, s)$ as a constant vector field, then (\\ref{eq1}) implies that $\\rho({\\bf R}, s)$ is uniformly distributed, which is clearly not an accurate description, since it would imply that the mean square end-to-end distance is constant for all values of the polymer length $s$. An improved approximation consists of truncating at the second hierarchy rank, which corresponds to assuming that $\\mathbb{Q}_{ij}({\\bf R}, s)$ is uniformly distributed. Indeed, the truncation approximation becomes better the larger the polymer length is since, as pointed out in \\cite{Fal-Castro-Villarreal2018}, from Eqs. (\\ref{eq1}) and (\\ref{eq2}) one can conclude that the tensors $\\mathbb{P}_{i}({\\bf R}, s)$ and $\\mathbb{Q}_{ij}({\\bf R}, s)$ damp out as $e^{-L\/(2\\ell_{p})}$ and $e^{-2 L\/\\ell_{p}}$, respectively. From these expressions, clearly $\\mathbb{Q}_{ij}({\\bf R}, s)$ damps out more strongly than $\\mathbb{P}_{i}({\\bf R}, s)$ for larger polymer length.
In the polymer context, it means that the tangent directions of the polymer are uniformly correlated.\n\n In the following, let us define a characteristic length $a$ associated with the size of the compact domain $\\mathcal{D}$; thus, if we scale the polymer length $s$ with $a$, one can consider $2a\/\\ell_{p}$ as a dimensionless attenuation coefficient associated with the damping of $\\mathbb{Q}_{ij}({\\bf R}, s)$. Thus, as long as we consider cases where $2a\/\\ell_{p}$ is well above unity, we may neglect the contribution of $\\mathbb{Q}_{ij}({\\bf R}, s)$. Here, we are going to consider this latter case; therefore, according to (\\ref{eq3}), the Telegrapher's equation is taken as the governing equation of the PDF $\\rho(\\left.{\\bf R}\\right|{\\bf R}^{\\prime}, s)$, that is, \n\\begin{eqnarray}\n\\frac{\\partial^2\\rho({\\bf R}, s)}{\\partial s^2}+\\frac{1}{2\\ell_{p}}\\frac{\\partial\\rho({\\bf R}, s)}{\\partial s}=\\frac{1}{2}\\nabla^2\\rho({\\bf R}, s),\n\\label{eq6}\n\\end{eqnarray}\nwith the initial conditions \n\\begin{subequations}\n\\label{IC}\n\\begin{align}\n\\label{ICa}\n\\lim_{s\\to 0}\\rho(\\left.{\\bf R}\\right|{\\bf R}^{\\prime}, s)=&\\delta^{\\left(2\\right)}({\\bf R}-{\\bf R}^{\\prime}),\\\\\n\\lim_{s\\to 0}\\frac{\\partial \\rho(\\left.{\\bf R}\\right|{\\bf R}^{\\prime}, s)}{\\partial s}=&0.\n\\label{ICb}\n\\end{align}\n\\end{subequations}\nThese conditions have the following physical meaning. Clearly, Eq.~\\eqref{ICa} means that the polymer ends coincide when the polymer length is zero, whereas Eq.~\\eqref{ICb} means that the polymer length does not change spontaneously. Since the polymer is confined to a compact domain, we also impose a Neumann boundary condition\n\\begin{eqnarray}\n\\left.\\nabla \\rho\\left(\\left.{\\bf R}\\right|{\\bf R}^{\\prime}, s\\right)\\right|_{{\\bf R}, {\\bf R}^{\\prime}\\in\\partial \\mathcal{D}}&=&0, ~~~~\\forall s, \n\\label{BC}\n\\end{eqnarray}\nwhere $\\partial \\mathcal{D}$ is the boundary of $\\mathcal{D}$. \nThis boundary condition means that the polymer does not cross the wall enclosing the domain. \n\nTo solve the differential equation \\eqref{eq6}, we use the standard method of separation of variables \\cite{Fal-Feshbach1953}. This method requires solving the so-called Neumann eigenvalue problem, which consists of finding all possible real values $\\lambda$ for which there exists a non-trivial solution $\\psi\\in C^{2}(\\mathcal{D})$ that satisfies the eigenvalue equation $-\\nabla^2\\psi=\\lambda\\psi$ and the Neumann boundary condition \\eqref{BC}. In this case, the set of eigenvalues is a sequence $\\lambda_{{\\bf k}}$ with ${\\bf k}$ in a countable set $I$, and each associated eigenspace is finite dimensional. These eigenspaces are orthogonal to each other in the space of square-integrable functions $L^2(\\mathcal{D})$ \\cite{Fal-Chavel1984, Fal-Feshbach1953}. That is, the sequence $\\lambda_{\\bf k}$ is associated with a set of eigenfunctions $\\{ \\psi_{\\bf k}({\\bf R})\\}$ that satisfy the orthonormality relation\n\\begin{eqnarray}\n\\int_{\\mathcal{D}}\\psi_{\\bf k}\\left({\\bf R}\\right)\\psi_{\\bf k^{\\prime}}\\left({\\bf R}\\right)d^{2}{\\bf R}=\\delta_{{\\bf k}, {\\bf k}^{\\prime}}. 
\\label{ortogonal}\n\\end{eqnarray}\n\n Next, we expand the probability density function $\\rho(\\left.{\\bf R}\\right|{\\bf R}^{\\prime}, s)$ in a linear combination of those eigenfunctions $\\{ \\psi_{\\bf k}({\\bf R})\\}$, that is, a spectral decomposition $\\rho(\\left.{\\bf R}\\right|{\\bf R}^{\\prime}; s)=\\sum_{{\\bf k}}g_{\\bf k}(s)\\psi_{\\bf k}({\\bf R})\\psi_{\\bf k}({\\bf R}^{\\prime})$. Substituting this series in the Telegrapher's equation \\eqref{eq6}, we find that the functions $g_{\\bf k}(s)$ satisfy the following ordinary differential equation\n\\begin{eqnarray}\n\\frac{d^2g_{\\bf k}(s)}{ds^2}+\\frac{1}{2\\ell_{p}}\\frac{dg_{\\bf k}(s)}{ds}+\\frac{1}{2}\\lambda_{\\bf k}g_{\\bf k}(s)=0, \n\\label{eq8}\n\\end{eqnarray}\nwhere the initial conditions, (\\ref{IC}), imply $g_{\\bf k}(0)=1$ and $dg_{\\bf k}(0)\/ds=0$. Since the characteristic roots of \\eqref{eq8} are $r_{\\pm}=-\\frac{1}{4\\ell_{p}}\\left(1\\mp\\sqrt{1-8\\ell_{p}^{2}\\lambda_{\\bf k}}\\right)$, the solution is given by\n\\begin{eqnarray}\ng_{\\bf k}(s)=G\\left(\\frac{s}{4\\ell_{p}}, 8\\ell_{p}^2\\lambda_{\\bf k}\\right),\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n{\\scriptsize G(v, w)=e^{-v}\\left[\\cosh\\left(v\\sqrt{1-w}\\right)+ \\frac{\\sinh\\left(v\\sqrt{1-w}\\right)}{\\sqrt{1-w}}\\right]}.\\nonumber\\\\\n\\label{Gfunction}\n\\end{eqnarray}\n\nFinally, using the above information, the probability density function is given by \n\\begin{eqnarray}\n\\rho(\\left.{\\bf R}\\right|{\\bf R}^{\\prime}, s)=\\frac{1}{A\\left(\\mathcal{D}\\right)}\\sum_{{\\bf k}\\in I}G\\left(\\frac{s}{4\\ell_{p}}, 8\\ell_{p}^2\\lambda_{\\bf k}\\right)\\psi_{\\bf k}({\\bf R})\\psi_{\\bf k}({\\bf R}^{\\prime}), \\nonumber\\\\\\label{pdf}\n\\end{eqnarray}\nwhere $A(\\mathcal{D})$ is the area of the domain $\\mathcal{D}$, which is needed in order to have a normalized probability density function in the space $\\mathcal{D}\\times{\\mathcal{D}}$. Then, $\\rho(\\left.{\\bf R}\\right|{\\bf R}^{\\prime}, s)d^{2}{\\bf R}d^{2}{\\bf R}^{\\prime}$ is the probability of having a polymer in a conformational state with polymer length $s$ and ends at ${\\bf R}$ and ${\\bf R}^{\\prime}$. Additionally, using the expression (\\ref{pdf}), the mean square end-to-end distance can be computed in the standard fashion by\n\\begin{eqnarray}\n\\left<\\delta{\\bf R}^2\\right>_{\\mathcal{D}}=\\sum_{{\\bf k}\\in I}a_{\\bf k}~G\\left(\\frac{s}{4\\ell_{p}}, 8\\ell_{p}^2\\lambda_{\\bf k}\\right), \n\\label{gen-sol}\n\\end{eqnarray}\nwhere the coefficients $a_{\\bf k}$ are obtained by \n\\begin{eqnarray}\na_{\\bf k}=\\int_{\\mathcal{D}\\times{\\mathcal{D}}}\\left({\\bf R}-{\\bf R}^{\\prime}\\right)^2 \\psi_{\\bf k}({\\bf R})\\psi_{\\bf k}({\\bf R}^{\\prime})~d^{2}{\\bf R}~d^{2}{\\bf R}^{\\prime}.\n\\label{coef}\n\\end{eqnarray}\n\nIn the following, we shall discuss the specific case when the polymer is enclosed in a square box.\n\n\\subsection{Example: semiflexible polymer in a square domain}\\label{thdomain}\n\nIn this section, we study the case when the semiflexible polymer is enclosed in a square box $\\mathcal{D}=\\left[0,a\\right]\\times\\left[0, a\\right]$. For this domain, it is well known that the eigenfunctions of the Laplacian operator $\\nabla^2$ correspond to a combination of products of trigonometric functions \\cite{Fal-Chavel1984}.
That is, for each pair of non-negative integer numbers $(n, m)$ and positions ${\\bf R}=(x,y)\\in\\mathcal{D}$, it is not difficult to show that the eigenfunctions of the Laplacian operator consistent with (\\ref{BC}) are \n\\begin{eqnarray}\n\\psi_{\\bf k}\\left({\\bf R}\\right)=\\frac{2}{a}\\cos\\left(\\frac{\\pi n}{a}x\\right)\\cos\\left(\\frac{\\pi m}{a}y\\right).\\nonumber\n\\end{eqnarray}\nThese functions constitute a complete orthonormal basis that satisfies \\eqref{ortogonal}. The corresponding eigenvalues are $\\lambda_{\\bf k}={\\bf k}^2$, with ${\\bf k}=\\left(\\frac{\\pi n}{a}, \\frac{\\pi m}{a}\\right)$. \n\nNow, we proceed to determine the coefficients $a_{\\bf k}$ using Eq.~\\eqref{coef} in order to give an expression for the mean square end-to-end distance. By a straightforward calculation, the coefficients $a_{\\bf k}$ are given explicitly by \n\\begin{eqnarray}\na_{\\bf k}=\\left\\{\\begin{array}{cc}\n\\frac{1}{3}a^2, & {\\bf k}=0,\\\\\n-\\frac{4a^2}{\\pi^4}\\left(\\frac{\\left(1-(-1)^{n}\\right)}{n^4}\\delta_{m,0}+ \\frac{\\left(1-(-1)^{m}\\right)}{m^4}\\delta_{n,0}\\right), & {\\bf k}\\neq 0.\n\\end{array}\\right.\\nonumber\\\\\n\\end{eqnarray}\nUpon substituting the latter coefficients in the general expression \\eqref{gen-sol}, we find that\n\\begin{eqnarray}\n\\frac{\\left<\\delta{\\bf R}^2\\right>_{\\mathcal{D}}}{a^2}=\\frac{1}{3}-\\sum_{n\\in 2\\mathbb{N}+1 }\\frac{32}{\\pi^4 n^4}G\\left(\\frac{L}{4\\ell_{p}}, 8\\pi^2\\left(\\frac{\\ell_{p}}{a}\\right)^2n^2\\right),\\nonumber\\\\\n\\label{sol}\n\\end{eqnarray}\nwhere $2\\mathbb{N}+1$ is the set of odd natural numbers. Since the function $G(v,w)$ satisfies $G(v, w)\\leq 1$ for all positive real numbers $v$ and $w$, the series in Eq.~\\eqref{sol} is convergent for all values of $L\/\\ell_{p}$ and $\\ell_{p}\/a$. Considering this last property, it is possible to prove the following assertions.\n\n\\begin{prop} \\label{result1}\nLet $L\/\\ell_{p}$ be any positive non-zero real number; then the mean square end-to-end distance \\eqref{sol} obeys \n\\begin{eqnarray}\n \\lim_{\\ell_{p}\/a\\to 0}\\frac{\\left<\\delta{\\bf R}^2\\right>_{\\mathcal{D}}}{\\ell^2_{p}}=\\frac{4L}{\\ell_{p}}-8\\left(1-\\exp\\left(-\\frac{L}{2\\ell_{p}}\\right)\\right). \\nonumber\n\\end{eqnarray}\n\\end{prop}\n\n\\begin{prop}\\label{result2} Let $L\/\\ell_{p}$ and $\\ell_{p}\/a$ be any positive non-zero real numbers and $c=2\/3-64\/\\pi^4$; then the mean square end-to-end distance \\eqref{sol} obeys\n\\begin{eqnarray}\n 0\\leq \\frac{\\left<\\delta{\\bf R}^2\\right>_{\\mathcal{D}}}{a^2}\\leq \\frac{2}{3},\\nonumber\\end{eqnarray}\nand\n\\begin{eqnarray}\n 0\\leq \\frac{\\left<\\delta{\\bf R}^2\\right>_{\\mathcal{D}}}{a^2}-\\left(\\frac{1}{3}-\\frac{1}{3}G\\left(\\frac{L}{4\\ell_{p}}, 8\\pi^2\\frac{\\ell^2_{p}}{a^2}\\right)\\right)\\leq c.\\nonumber\n\\end{eqnarray}\n\n\\end{prop}\n\nClaim {\\bf \\ref{result1}} recovers the Kratky-Porod result for the mean square end-to-end distance (see Eq.~\\eqref{planoabierto}). Claim {\\bf\\ref{result2}} means that the mean-square end-to-end distance is bounded from below by $0$ and from above by $\\frac{2}{3}a^2$.
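\n\nWe note in passing that Eq.~\\eqref{sol} is straightforward to evaluate numerically. The following sketch (in Python; the truncation order and parameter values are arbitrary illustrative choices, not part of the analytical development) evaluates the truncated series using the function $G$ of Eq.~\\eqref{Gfunction}; for $w>1$ the square roots become imaginary, so the evaluation is carried out with complex arithmetic and the real part is taken.\n\\begin{verbatim}\nimport numpy as np\n\ndef G(v, w):\n    # G(v, w) of the spectral solution; valid for w < 1 and w > 1\n    # (w = 1 requires the limit sinh(v q)\/q -> v)\n    q = np.sqrt(complex(1.0 - w))\n    return (np.exp(-v) * (np.cosh(v * q) + np.sinh(v * q) \/ q)).real\n\ndef msd_square_box(L, ell_p, a=1.0, n_max=201):\n    # truncated series for <delta R^2>_D \/ a^2 in a square box of side a\n    s = 1.0 \/ 3.0\n    for n in range(1, n_max + 1, 2):  # odd n only\n        s -= 32.0 \/ (np.pi**4 * n**4) * G(L \/ (4.0 * ell_p),\n                                          8.0 * np.pi**2 * (ell_p \/ a)**2 * n**2)\n    return s\n\nprint(msd_square_box(L=50.0, ell_p=0.125))  # saturates near 1\/3\n\\end{verbatim}\nConsistently with the bounds stated in Claim~{\\bf\\ref{result2}}, the evaluated series vanishes at $L=0$ and saturates at $1\/3$ for long chains.\n\n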
In addition, this second claim provides an approximation formula for $\\left<\\delta{\\bf R}^2\\right>_{\\mathcal{D}}$; that is, for all values of $L\/\\ell_{p}$ and $\\ell_{p}\/a$ such that the condition\n\\begin{eqnarray}\n1-G\\left(\\frac{L}{4\\ell_{p}}, 8\\pi^2\\frac{\\ell^2_{p}}{a^2}\\right)\\gg 3c\\label{cond}\n\\end{eqnarray}\nholds, one has the following approximation\n{\\small\\begin{eqnarray}\n\\frac{\\left<\\delta{\\bf R}^2\\right>_{\\mathcal{D}}}{a^2}&\\simeq&\\frac{1}{3}-\\frac{1}{3}\\exp\\left(-\\frac{L}{4\\ell_{p}}\\right)\\nonumber\\\\\n&\\times&\\left\\{ \\cosh\\left[\\frac{L}{4\\ell_{p}}\\left(1-8\\pi^2\\frac{\\ell^2_{p}}{a^2}\\right)^{\\frac{1}{2}}\\right]\\right.\\nonumber\\\\&+&\\left.\\left(1-8\\pi^2\\frac{\\ell^2_{p}}{a^2}\\right)^{-\\frac{1}{2}}\\sinh\\left[\\frac{L}{4\\ell_{p}}\\left(1-8\\pi^2\\frac{\\ell^2_{p}}{a^2}\\right)^{\\frac{1}{2}}\\right]\\right\\}.\\nonumber\\\\ \n\\label{approx2}\n\\end{eqnarray}}\nLet us point out that this approximation is valid provided that the condition (\\ref{cond}) holds, that is, whenever the value of $c$ can be neglected (see Appendix \\ref{coefficients} for the proofs of Claims 1 and 2). \n\nNext, let us remark that for any fixed value of $a$, the r.h.s. of (\\ref{approx2}), as a function of $L$, shows the existence of a critical persistence length, $\\ell^{*}_{p}\\equiv a\/(\\pi\\sqrt{8})$, such that for all values $\\ell_{p}>\\ell^{*}_{p}$ it exhibits an oscillating behavior, whereas for $\\ell_{p}<\\ell^{*}_{p}$ it is monotonically increasing. In addition, for each value of $\\ell_{p}$ the function (\\ref{approx2}) converges to $1\/3$ as long as $L\\gg a$. The critical persistence length, therefore, distinguishes two conformational behaviors of the semiflexible polymer enclosed in the square box. \nIn Fig. \\ref{MSD-ex}, the mean-square end-to-end distance, Eq. (\\ref{sol}) and the r.h.s. of (\\ref{approx2}), is shown for the ratios $\\ell_{p}\/a=1\/32, 1\/16,1\/8, 1\/4,1\/2, 1$, where both conformational states can be appreciated. Furthermore, one of the most intriguing features of the above approximation (\\ref{approx2}) is its structural similarity to the corresponding result for a polymer wrapping a spherical surface. Indeed, let us remark that \\eqref{approx2} has the same mathematical structure as the mean square end-to-end distance found by Spakowitz and Wang in \\cite{PSaito-Spakowitz2003}, exhibiting both conformational states. \n\nIn the next section, we address the study of the semiflexible polymer through a Monte Carlo algorithm in order to corroborate the results found here.\n\n\n\n\n\\section{Monte Carlo-Metropolis algorithm for semiflexible polymers }\\label{mc}\n\nHere, we develop a Monte Carlo algorithm to be used in computer simulations in order to study the conformational states of a semiflexible polymer enclosed in a compact domain. Particularly, the algorithm shall be suited to the square domain, which will additionally allow us to validate the analytical approximations shown in the previous section.\n \nAs we have emphasized above, the worm-like chain model is the suitable framework to describe the spatial distribution of semiflexible polymers, which are modeled as $n$ beads consecutively connected by $n-1$ rigid bonds, called Kuhn's segments \\cite{Fal-Yamakawa1971}. Each bead works like a pivot, allowing us to define an angle $\\theta_i$ between two consecutive bonds, where $i$ is the label of the $i$-th bead.
This model requires a potential energy description where all possible contributions due to bead-bead, bead-bond and bond-bond interactions are taken into account. In a general setting, energies of bond-stretching, elastic bond angle, electrostatic interaction, torsional potential, etc., should be considered, as in Refs.~\\cite{Wlc-Qiu2016, Psim-Chirico1994, Psim-Allison1989,Psim-DeVries2005,Psim-Jian1997,Psim-Jian1998}. However, here we are interested solely in the study of the possible spatial configurations of a single semiflexible polymer enclosed in a compact domain $\\mathcal{D}$, such as the one shown in Fig. \\ref{frame}. Thus, in our case, we take into account only two energetic contributions, namely, the elastic bond angle and the wall-polymer interaction. The first contribution, the elastic bond-angle energy, is given by\n\\begin{equation}\nE_b=\\frac{g}{2}\\sum_i \\theta_i^2,\n\\end{equation}\nwhere $\\theta_i$ is the angle between two consecutive bonds, and $g=\\alpha\/l_{0}$, where we recall that $\\alpha$ is the bending rigidity, and $l_{0}$ is the Kuhn length. In addition, we must consider the wall-polymer interaction given by\n\\begin{eqnarray}\n E_{w}=\n \\begin{cases}\n 0, & \\text{if all beads are in $\\mathcal{D}$}, \\\\\n \\infty, & \\text{if there are beads outside of $\\mathcal{D}$}.\n \\end{cases}\n\\end{eqnarray}\n\nIn the algorithm, the acceptance criteria for changes of the polymer spatial configuration take the form of a Gaussian distribution function. In this context, we generate random chains enclosed in $\\mathcal{D}$, consisting of $N$ bonds of constant Kuhn length, by implementing a growth algorithm. Our computational realization consists of bead generation according to the following conditions:\n{\\it starting bead}, {\\it beads far from walls}, {\\it beads near to walls}, and {\\it selection problem},\nwhich are explained in the following subsections. \n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[scale=.45]{contornob}\n\\caption{A generic compact domain $\\mathcal{D}$ is shown, where the boundary wall $\\partial\\mathcal{D}$ is represented by the continuous black line. The condition defining the beads near the wall is represented by the blue filled region of width $l_s$. }\n\\label{frame}\n\\end{figure}\n\n\n\\subsection*{Starting bead}\n\nThis condition describes the process of initial bead generation and the acceptance criterion for the second bead. Let us assume, without loss of generality, that the origin of $\\mathbb{R}^2$ belongs to $\\mathcal{D}$. We choose the initial bead at $\\mathbf{x}_0$ as a uniformly distributed random point in the region $\\mathcal{D}$. We define the auxiliary vector $\\mathbf{R}_{l_0}=(l_0,0)$, which is parallel to the horizontal axis; it will allow us to determine whether the next bead is inside $\\mathcal{D}$. Also, we consider the angle $\\theta_0$ formed between the horizontal axis and the first bond, which is drawn from a uniform distribution on the interval $[0,2\\pi]$. 
Now, we compute the following vector\n\\begin{equation}\n\\mathbf{R}'=\\mathcal{R}\n\\left(\\theta_0\\right){\\mathbf{R}_{l_0}}^T,\n\\end{equation}\nwhere $\\mathcal{R}\n\\left(\\theta_0\\right)$\ndenotes the two-dimensional rotation matrix by an angle $\\theta_0$, defined as\n\\begin{equation}\n\\mathcal{R}\\left(\\theta_0\\right)=\n\\begin{bmatrix}\n\\cos \\theta_0 & -\\sin \\theta_0 \\\\\n\\sin \\theta_0 & \\cos \\theta_0 \\\\\n\\end{bmatrix},\n\\end{equation}\nand the superscript $T$ denotes the transposition of $\\mathbf{R}_{l_0}$. The resultant vector $\\mathbf{x}_0+\\mathbf{R}'^T$ will be the position of the second bead only if it is inside $\\mathcal{D}$. If this is the case, the new bead is denoted by $\\mathbf{x}_1$ and the vector is updated as $\\mathbf{R}_{l_0}=\\mathbf{R}'^T$. Otherwise, we repeat the process until we find an angle for which the second bead is enclosed in the domain.\n\n\\subsection*{Beads far from walls}\\label{bffw}\n\nThis condition describes the method of subsequent bead generation. We say that a bead is far from walls if the perpendicular distance between the boundary $\\partial\\mathcal{D}$ and the bead is greater than a particular distance $l_{s}$. In this case, if the $(k-1)$-th bead satisfies this condition, we generate the subsequent bead by taking a random angle $\\theta_{k}$ distributed according to a Gaussian density function\n $\\mathcal{N}(0,l_{0}\/\\ell_{p})$, where we recall that $\\ell_{p}$ is the polymer persistence length. As in the previous condition, we compute the vector $\\mathbf{R}'=\\mathcal{R}\\left(\\theta_{k}\\right){\\mathbf{R}_{l_0}}^T$ corresponding to the $k$-th rotation of $\\mathbf{R}_{l_0}$, and the new bead is accepted under the same condition as before, namely that it lies inside $\\mathcal{D}$. Once a chain is generated, the mean-square end-to-end distance is computed as $\\left<\\delta{\\bf R}^2\\right>_{\\mathcal{D}}=\\langle (\\mathbf{x}(L)-\\mathbf{x}_0)^2 \\rangle$. In Sec.~\\ref{results} we shall analyze the results obtained for $\\left<{\\delta \\bf R}^2\\right>_{\\mathcal{D}}$ using this algorithm as a function of the persistence length and the polymer length when $\\mathcal{D}$ is a square box.\n\n\\subsection*{Selection problem}\n\nThe selection problem consists of choosing an adequate value of $l_{s}$. This value should be suitable to avoid over- or under-bending of the polymer. For instance, if $\\ell_p$ is comparable with the size of $\\mathcal{D}$, and $l_s$ is not appropriate to promote the chain bending when the polymer is near the boundary $\\partial\\mathcal{D}$, the generation of beads outside of the domain becomes likely. As a consequence, the polymer will exhibit bends with large angles where the chain meets the boundary. The selection problem is resolved using the dimensional analysis of $\\left<\\delta{\\bf R}^2\\right>_{\\mathcal{D}}$. Indeed, observe that in the continuous limit of the chain, the mean-square end-to-end distance can be written as $\\left<\\delta{\\bf R}^2\\right>_{\\mathcal{D}}=a^2 g(\\ell_{p}\/a, L\/a)$, where $g(\\ell_{p}\/a, L\/a)$ is a dimensionless function. Then, we choose $l_{s}$ such that the mean-square end-to-end distance computed with the simulation data depends only on the combinations $\\ell_{p}\/a$ and $L\/a$. In other words, we calculate $k$ profiles of $\\left<\\delta{\\bf R}^2\\right>_{\\mathcal{D}}\/a^2$ for $k$ pairs $(\\ell^{1}_{p}, a^{1}),(\\ell^{2}_{p}, a^{2}), \\cdots, (\\ell^{k}_{p}, a^{k})$, with $\\ell^{i}_{p}\/a^{i}$ fixed for all $i=1,\\cdots, k$, and then choose $l_{s}$ such that all these profiles collapse onto a single curve. 
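The complete generation procedure is summarized in the following minimal sketch of ours. It assumes the square box $\\mathcal{D}=[0,a]^2$ and, for brevity, treats every bead by plain rejection against the walls; the special $l_s$-dependent bending rule for beads near the walls is not reproduced here. The width of the Gaussian is read from $\\mathcal{N}(0,l_{0}\/\\ell_{p})$ with $l_{0}\/\\ell_{p}$ interpreted as the variance, and all function names are ours. Note that adding the pivot angle to the current bond direction is equivalent to applying the rotation matrix $\\mathcal{R}(\\theta_k)$ to the bond vector.\n\\begin{verbatim}\n# Minimal sketch (ours) of the growth algorithm for a square box of side a,\n# bond (Kuhn) length l0 and persistence length lp.  Near-wall beads are\n# handled by plain rejection; the l_s bending rule is omitted for brevity.\nimport math, random\n\ndef grow_chain(n_bonds, a, l0, lp, max_tries=10000):\n    x = [(random.uniform(0.0, a), random.uniform(0.0, a))]  # starting bead\n    phi = 0.0\n    sigma = math.sqrt(l0 / lp)       # N(0, l0/lp) read as variance l0/lp\n    for k in range(n_bonds):\n        for _ in range(max_tries):\n            if k == 0:               # first bond: uniform angle in [0, 2 pi)\n                cand = random.uniform(0.0, 2.0 * math.pi)\n            else:                    # pivot angle theta_k ~ N(0, l0/lp)\n                cand = phi + random.gauss(0.0, sigma)\n            px = x[-1][0] + l0 * math.cos(cand)\n            py = x[-1][1] + l0 * math.sin(cand)\n            if 0.0 <= px <= a and 0.0 <= py <= a:   # E_w = 0 inside the box\n                x.append((px, py))\n                phi = cand\n                break\n        else:\n            return None              # bead could not be placed; discard chain\n    return x\n\ndef msd(chains):\n    # <dR^2>_D = <(x(L) - x_0)^2> averaged over independent chains\n    v = [(c[-1][0] - c[0][0])**2 + (c[-1][1] - c[0][1])**2 for c in chains]\n    return sum(v) / len(v)\n\nsample = [c for c in (grow_chain(100, 10.0, 0.1, 1.0)\n                      for _ in range(2000)) if c is not None]\nprint(msd(sample))\n\\end{verbatim}\nFor $L\\ll a$ the output of such a sketch can be compared directly with the Kratky-Porod formula \\eqref{planoabierto}, while the profile-collapse test described above fixes $l_s$ in a full implementation.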
\n\n\\section{Semiflexible polymers enclosed in a square box: simulation vs analytical results}\\label{results}\nIn this section, we implement the algorithm explained in the preceding section for the particular case of a polymer enclosed in a square box of side $a$. In this case, let us first note that, for beads near the corners, the conditions that promote the chain bending must be checked for both adjacent walls simultaneously. \nNext, in the simulation we set our unit of length to $d=10^{2}~l_{0}$. We now present the selection of $l_{s}$ according to the last part of the general algorithm: for the fixed ratios $\\ell_{p}\/a=1\/50, 1\/32, 1\/16,1\/8, 1\/4,1\/2, 1$, we study three profiles corresponding to the values $a\/d=5,10,15$, respectively. Table \\ref{tab1} shows the selected values of $l_{s}$ for which the three profiles collapse onto a single curve. \n\n\\begin{table}[h!]\n\\caption{\\label{tab1}Values of $l_s$ used in simulations for different values of the persistence length $\\ell_p$.}\n\\begin{tabular}{|c| c|}\n\\hline\n$\\ell_p\/a$ & $ l_s\/a$\\\\\n\\hline\n1 & 0.085\\\\\n1\/2 & 0.065\\\\\n1\/4 & 0.050\\\\\n1\/8 & 0.040\\\\\n1\/16 & 0.010\\\\\n1\/32 & 0.005\\\\\n$\\leq$1\/50 & 0 \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\nThe results shown in this section were computed as averages over $10^6$ spatial configurations of confined polymers, which were obtained using the algorithm described in the previous section. In particular, we study two regimes defined by comparing the box side $a$ with the polymer length $L$. The first one, polymers in weak confinement, for which $L\/a\\leq 1$, is discussed in subsection~\\ref{noconfinado}. The second one, polymers in strong confinement, corresponding to polymer lengths larger than the box side, is discussed in subsection~\\ref{confinado}.\n\n\n\n\n\\subsection{Polymer in weak confinement}\\label{noconfinado}\n\nIn this regime, once the selection problem for $l_{s}$ has been solved, we present the simulation results for polymers enclosed in a square box of side $a=10~d$ for different values of the persistence length. In Fig.~\\ref{ex-short}, examples of semiflexible polymers in weak confinement are shown. Notice that for very short persistence lengths, $\\ell_p\\simeq l_0$, the chain looks like a curly string resembling a random walk, whereas as $\\ell_p$ increases, the polymer adopts uncoiled configurations.\n\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[scale=1.]{ex-shortb}\n\\caption{Examples of semiflexible polymers, with length $L=a$, in the weak confinement regime for several values of the persistence length. Solid black lines represent the walls of the box.}\n\\label{ex-short}\n\\end{figure} \n\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[scale=1]{test-MSD-short2b}\n\\caption{Mean square end-to-end distance for confined polymers (points) with several persistence lengths. 
The deviation from the predictions for a polymer in an infinite plane (solid lines), given by Eq.~\\eqref{planoabierto}, arises because, on average, the chain meets the walls at lengths $L\\simeq a$.}\n\\label{fig-MSDs}\n\\end{figure}\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[scale=1.]{MSD-short-weakb}\n\\caption{Mean square end-to-end distance for polymers in the Gaussian chain limit generated by the algorithm described in Sec.~\\ref{mc}.}\n\\label{gchain}\n\\end{figure}\n\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[scale=1.]{MSD-Wangb}\n\\caption{Universal behavior of the mean square end-to-end distance of Fig.~\\ref{fig-MSDs} under the scaling $\\langle {\\delta \\bf R}^2 \\rangle\/ l_p^2$ vs $L\/l_p$ (solid lines). Dashed lines are references for the ballistic and diffusive regimes for polymers in weak confinement.}\n\\label{fig-MSDs-Wang}\n\\end{figure}\n\n\n\n\n\nThe mean-square end-to-end distance is shown in Fig. \\ref{fig-MSDs}; it increases as $\\ell_p$ increases, allowing the polymer to explore more area as a function of the total polymer length. Also, notice that for small polymer lengths, the mean square end-to-end distance is in excellent agreement with the results for semiflexible polymers in an infinite plane given by Eq.~\\eqref{planoabierto}. Conversely, for polymer lengths around $L\\simeq a$ we observe a slight deviation between the mean square end-to-end distance and the infinite-plane solution (\\ref{planoabierto}), because of the finite size of the box. Notwithstanding, for small persistence lengths ($\\ell_p\\simeq 10^{-3}a$), the mean square end-to-end distance is well fitted by Eq.~\\eqref{planoabierto}. In this case, the polymer does not seem to be affected by the walls, since the area explored by the chain is, on average, too small for the chain to meet the walls.\n\nFurthermore, when the mean square end-to-end distance and the polymer length are scaled by $l_p^2$ and $l_p$, respectively, the data shown in Fig.~\\ref{fig-MSDs} and Fig.~\\ref{gchain} collapse onto the single plot shown in Fig.~\\ref{fig-MSDs-Wang}, evidencing a ballistic behavior for small values of $L\/\\ell_p$ followed by a ``diffusive'' regime for large values of $L\/\\ell_{p}$. These results correspond to the well-known asymptotic limits of the Kratky-Porod result (\\ref{asymptotics}). In addition, it is noteworthy that these asymptotic limits are also reported in \\cite{PSaito-Spakowitz2003} for a semiflexible polymer wrapping a spherical surface in the corresponding plane limit. \n\n\\subsection{Polymer in strong confinement}\\label{confinado}\n\nIn this section, we discuss the case of a polymer enclosed in a square box when its length is large enough for the chain to touch the walls and to interact with them several times. We perform simulations in order to generate polymers of lengths up to $L\/a=10$ (chains of $10^4$ beads) for persistence lengths $\\ell_{p}\/a=1\/32, 1\/16,1\/8, 1\/4,1\/2, 1$ and for the values of the box side $a\/d=5,10,$ and $15$, respectively. These simulations use the values of $l_s$ shown in Table~\\ref{tab1}. Examples of these polymers are shown in Fig. \\ref{evo-ex} and in Fig.~\\ref{MSD-ex}. \n\\begin{figure}[h!]\n\\centering\n\\includegraphics[scale=1]{test10b}\n\\caption{Semiflexible polymer realizations for the persistence length $l_p=a$. Each column shows a polymer of a particular length; from left to right, $L\/a=2$, $4$, $6$, $8$, and $10$, respectively. 
Black filled circles indicate the ends of the polymer. Note how the polymer rolls up around the square box as the polymer length becomes larger. }\n\\label{evo-ex}\n\\end{figure}\n\\begin{figure*}\n\\centering\n\\includegraphics[scale=.68]{MSD-ex4b}\n\\caption{\\small Examples of polymers, in the first and third columns, are shown as solid green lines, where the initial and final beads are represented by black filled circles, and solid black lines represent the walls of the box. The figure also shows, in the second and fourth columns, the mean-square end-to-end distance for polymers in strong confinement as a function of the polymer length: the bold black line represents the superposition of the theoretical predictions (Eqs. (\\ref{sol}) and (\\ref{approx2})) with the simulation results, shown for different box sides, $a\/d=5$ (blue triangles), $a\/d=10$ (red squares) and $a\/d=15$ (green circles). }\n\\label{MSD-ex}\n\\end{figure*}\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[scale=1]{msd-Wang-strong22b}\n\\caption{\\small Mean square end-to-end distance for polymers in strong confinement as a function of the persistence length. Note that $\\langle \\delta \\mathbf{R} ^2 \\rangle$ shows an oscillating behavior for values of the persistence length satisfying the relation $\\ell_p\/a>1\/8$, which is the same signature as for the mean square end-to-end distance of polymers confined to a spherical surface. Dashed and dotted lines have been plotted as references for the ballistic and diffusive behaviors, respectively.}\n\\label{MSD-Wang-strong}\n\\end{figure}\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[scale=1]{convb}\n\\caption{\\small Convergence of the mean square end-to-end distance to $\\langle \\delta \\mathbf{R}^2\\rangle_{\\mathcal{D}}=a^2\/3$ for very large polymers in strong confinement, as a function of the ratio $a\/l_p$.}\n\\label{MSD-conv}\n\\end{figure}\n\n\nIn Fig.~\\ref{MSD-ex}, we also report the mean square end-to-end distance scaled by $a^2$ as a function of $L\/a$ for the different box sides ($a\/d=5$, $10$, and $15$). In addition, in Fig. \\ref{MSD-Wang-strong} we report as well the mean square end-to-end distance scaled by $\\ell_{p}^2$ as a function of $L\/\\ell_{p}$. By simple inspection, in both cases an oscillating behavior is exhibited for values $\\ell_p\/a>\\frac{1}{8}$, whereas a monotonically increasing behavior becomes evident for persistence lengths such that $\\ell_{p}\/a<\\frac{1}{8}$. The number of oscillations grows as $\\ell_{p}\/a$ increases beyond $1\/8$. In addition, for values of $\\ell_{p}\/a$ less than $1\/8$, the behavior of the mean-square end-to-end distance corresponds to that of a Gaussian polymer enclosed in a box, and the corresponding conformational realization of the polymer looks like a confined random walk. As was mentioned before, this transition between the oscillating and monotonic behaviors of the conformation of the semiflexible polymer is very similar to that described by Spakowitz and Wang in Ref.~\\cite{PSaito-Spakowitz2003} for a semiflexible polymer confined to a spherical surface. Moreover, as noted in Fig. \\ref{MSD-Wang-strong}, in the regime of small polymer lengths the mean square end-to-end distance shows a ballistic behavior, followed by a brief interval with a ``diffusive'' behavior. Furthermore, an interesting observation is that the mean square end-to-end distance exhibits an asymptotic plateau behavior for large values of $L\/l_p$ as a function of $a\/l_p$. 
In Fig.~\\ref{MSD-conv} we show the value of the mean square end-to-end distance for the polymer length $L=10~a$ on a logarithmic scale, as a function of $a\/l_p$ on a binary logarithmic scale. Under these conditions, the plateau behavior is well fitted by a linear function, where the slope, $m$, of the line satisfies $m\/\\log2=1.97\\sim 2$, whereas the intercept takes the value $b=-0.473\\pm0.006$. This last fact leads us to a universal scaling law of the mean square end-to-end distance with respect to the box side for very large polymers:\n\\begin{equation}\n\\frac{\\langle \\delta \\mathbf{R}(L=10a)^2\\rangle}{a^2}=10^b\\sim 0.336\\pm 0.004,\n\\label{eq-msd-a}\n\\end{equation}\nwhere the error has been computed by propagating the error of the linear fit of the data in Fig.~\\ref{MSD-conv}. This result expresses the universal convergence of the ratio $\\langle \\delta \\mathbf{R}^2\\rangle_{\\mathcal{D}}\/a^2$ to $1\/3$ for very large polymers, which becomes independent of the box side when the quotient $l_p\/a$ is kept fixed. This can be understood if we consider all the available space in the box to be occupied, so that the beads are uniformly distributed. Indeed, through the definition (\\ref{MS}) with $\\rho(\\left. {\\bf R}\\right|{\\bf R}^{\\prime}; L\\to\\infty)=1\/a^4$, each Cartesian component contributes $\\langle (x-x^{\\prime})^2\\rangle=a^2\/6$ for two independent points uniformly distributed on $[0,a]$, so that $\\left<\\delta{\\bf R}^2\\right>_{\\mathcal{D}}=2\\times a^2\/6=a^2\/3$, the desired result. Finally, it is noticeable that, by simple inspection of the five polymer realizations shown in Fig. \\ref{evo-ex} for $\\ell_{p}\/a=1$, all of them display the same conspicuous relation between the period of oscillation and the number of turns that the polymer performs. \n\nAll these features of the behavior of the mean-square end-to-end distance are reproduced by the theoretical predictions (\\ref{sol}) and (\\ref{approx2}). In particular, it is significant to note that the critical persistence length found in our earlier discussion (see section (\\ref{thdomain})) satisfies $\\ell^{*}_{p}\/a=1\/(\\pi\\sqrt{8})\\approx 1\/8$. We also note that for $\\ell_{p}\/a=1$ there is a slight discrepancy between the simulation results and the theoretical prediction (\\ref{sol}), appearing at the three local minima shown in Fig. \\ref{MSD-ex}. This is due to the fact that for values of $\\ell_{p}\/a$ near $2$ there is a breakdown of the Telegrapher approximation that we performed in section \\ref{sectTE}. In other words, the small disagreement appears for $\\ell_{p}\/a\\approx 1$ because the role of the tensor $\\mathbb{Q}_{ij}$ becomes important and cannot be neglected in Eq. (\\ref{eq3}). \n\n\n\n\n\n\\section{Concluding remarks and perspectives}\\label{conclusions}\n\nIn this paper, we have analyzed the conformational states of a semiflexible polymer enclosed in a compact domain. The approach followed rests on two postulates, namely, that the conformation of a semiflexible polymer satisfies the Frenet stochastic equations \\eqref{ecsestom1}, and that the stochastic curvature is distributed according to \\eqref{dft}, which is consistent with the worm-like chain model. In addition, it turned out that the Fokker-Planck equation, corresponding to the stochastic Frenet equations, is exactly the same as the Hermans-Ullman equation (see Eq. (\\ref{F-P})) \\cite{PSaito-Hermans1952}. 
Furthermore, taking advantage of the analogy between the Hermans-Ullman equation and the Fokker-Planck equation for a free active particle motion \\cite{Fal-Castro-Villarreal2018}, we establish a multipolar decomposition for the probability density function, $P(\\left.{\\bf R}, \\theta\\right|{\\bf R}^{\\prime}, \\theta^{\\prime}; L)$, that describes the manner in which a polymer of length $L$ is distributed in the domain with endpoints ${\\bf R}$ and ${\\bf R}^{\\prime}$ and associated directions $\\theta$ and $\\theta^{\\prime}$, respectively. Exploiting this analogy, we provide an approximation for the positional distribution $\\rho({\\bf R}, {\\bf R}^{\\prime}, L)$ through the telegrapher equation, which for a compact domain is a good approximation as long as $2a\/\\ell_{p}>1$, where $a$ is a characteristic length of the compact domain. In particular, we derive results for a semiflexible polymer enclosed in a square box domain, for which we give a mathematical formula for the {\\it mean-square end-to-end distance}. \n\nFurthermore, we have developed a Monte Carlo-Metropolis algorithm to study the conformational states of a semiflexible polymer enclosed in a compact domain. In particular, for the square box domain, we compare the results of the simulation with the theoretical predictions, finding an excellent agreement. We have considered two situations, namely, a {\\it polymer in weak confinement} and a {\\it polymer in strong confinement}, corresponding to polymer lengths smaller and larger than the box side, respectively. In the weak confinement case, we reproduce the two-dimensional solution of a free chain, i.e., the Kratky-Porod result for polymers confined in two dimensions. In the strong confinement case, we showed the existence of a critical persistence length $\\ell^{*}_{p}=a\/(\\pi\\sqrt{8})\\simeq a\/8$ such that for all values $\\ell_{p}>\\ell^{*}_{p}$ the mean-square end-to-end distance exhibits an oscillating behavior, whereas for $\\ell_{p}<\\ell^{*}_{p}$ it is monotonically increasing. In addition, for each value of $\\ell_{p}$ the scaled mean-square end-to-end distance converges to $1\/3$ as long as $L\\gg a$. The critical persistence length, thus, distinguishes two conformational behaviors of the semiflexible polymer enclosed in the square box. As was mentioned above, this result is of the same type as the one found by Spakowitz and Wang in \\cite{PSaito-Spakowitz2003} for a semiflexible polymer wrapping a spherical surface. As a consequence of this resemblance, one can conclude that the shape transition from oscillating to monotonic conformational states provides evidence of a universal signature for a semiflexible polymer enclosed in a compact space. \n\nOur approach can be extended in various directions. For instance, the whole formulation can easily be extended to semiflexible polymers in three dimensions. Although we must then consider a stochastic version of the Frenet-Serret equations, in this case one would still obtain the three-dimensional version of the Hermans-Ullman equation, because the worm-like chain model involves just the curvature. In addition, the approach developed here can also be extended to the case where the semiflexible polymer wraps a curved surface. \n\n\\section*{Acknowledgement}\nP.C.V. acknowledges financial support by CONACyT Grant No. 237425 and PROFOCIE-UNACH 2017. J.E.R. acknowledges financial support from VIEP-BUAP (grant no. VIEP2017-123). 
The computer simulations were performed at the LARCAD-UNACH.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nSeveral experimental collaborations at $ep$ and $p\\overline{p}$ colliders\npresented data on the differential cross section $d^2\\sigma\/dy\\,dp_T$ for the\ninclusive production of $D^0$, $D^+$, and $D_s^+$ mesons, $\\Lambda_c^+$\nbaryons, and their charge-conjugate counterparts.\nAt DESY HERA, such data were collected by the ZEUS Collaboration\n\\cite{zeus,pad} in low-$Q^2$ $ep$ collisions, equivalent to photoproduction,\nand by the H1 Collaboration \\cite{h1} in deep-inelastic $ep$ scattering.\nAt the Fermilab Tevatron, such data were taken by the CDFII Collaboration\n\\cite{cdf} in $p\\overline{p}$ collisions.\n\nOn the theoretical side, fragmentation functions (FF's) for the transitions\n$c,b\\to X_c$, where $X_c$ denotes a generic charmed hadron, are needed as\nnonperturbative inputs for the calculation of all the cross sections mentioned\nabove.\nSuch FF's are preferably constructed by using precise information from\n$e^+e^-\\to X_c+X$ via $e^+e^-$ annihilation at the $Z$-boson resonance, where \n$X$ denotes the residual hadronic system.\nIn this process, two mechanisms contribute with similar rates:\n(i) $Z\\to c\\overline{c}$ decay followed by $c\\to X_c$ (or\n$\\overline{c}\\to X_c$) fragmentation; and\n(ii) $Z\\to b\\overline{b}$ decay followed by $b\\to X_b$ (or\n$\\overline{b}\\to X_b$) fragmentation and weak $X_b\\to X_c+X$ decay of the\nbottom-flavored hadron $X_b$.\nThe latter two-step process is usually treated as a one-step fragmentation\nprocess $b\\to X_c$.\n\nUsing ALEPH \\cite{aleph} and OPAL \\cite{opal} data on inclusive $D^{*+}$\nproduction at the $Z$-boson resonance, we determined separate FF's for\n$c\\to D^{*+}$ and $b\\to D^{*+}$ in collaboration with Binnewies \\cite{bkk}.\nIt is the purpose of this work to extract nonperturbative FF's for\n$c,b\\to D^0,D^+,D_s^+,\\Lambda_c^+$ from the respective data samples collected\nby the OPAL Collaboration at LEP1 \\cite{opal1} using the same theoretical\nframework as in Ref.~\\cite{bkk}.\n\nThe work in Ref.~\\cite{bkk} is based on the QCD-improved parton model\nimplemented in the modified minimal-subtraction ($\\overline{\\mathrm{MS}}$)\nrenormalization and factorization scheme in its pure form with $n_f=5$\nmassless quark flavors, which is also known as the massless scheme\n\\cite{spira} or the zero-mass variable-flavor-number scheme.\nIn this scheme, the masses $m_c$ and $m_b$ of the charm and bottom quarks are\nneglected, except in the initial conditions of their FF's.\nThis is a reasonable approximation for center-of-mass (c.m.) 
energies\n$\\sqrt s\\gg m_c,m_b$ in $e^+e^-$ annihilation or transverse momenta\n$p_T\\gg m_c,m_b$ in $ep$ and $p\\overline{p}$ scattering, if the respective\nFF's are used as inputs for the calculation of the cross sections for these\nreactions.\nHence, we describe the $c,b\\to X_c$ transitions by nonperturbative FF's, as is\nusually done for the fragmentation of the up, down, and strange quarks into\nlight hadrons.\n\nThe outline of this paper is as follows.\nIn Sec.~\\ref{sec:two}, we briefly recall the theoretical framework underlying\nthe extraction of FF's from the $e^+e^-$ data, which has already been\nintroduced in Refs.~\\cite{bkk,bkk1}.\nIn Sec.~\\ref{sec:three}, we present the $D^0$, $D^+$, $D_s^+$, and\n$\\Lambda_c^+$ FF's we obtained by fitting the respective LEP1 data samples\nfrom OPAL \\cite{opal1} at leading order (LO) and next-to-leading order (NLO)\nin the massless scheme and discuss their properties.\nIn Sec.~\\ref{sec:four}, we present predictions for the inclusive production of\nthese $X_c$ hadrons in nonresonant $e^+e^-$ annihilation at lower c.m.\\\nenergies and compare them with data from other experiments.\nOur conclusions are summarized in Sec.~\\ref{sec:five}.\n\n\\section{Theoretical Framework}\n\\label{sec:two}\n\nOur procedure to construct LO and NLO sets of $D$ FF's has already been\ndescribed in Refs.~\\cite{bkk,bkk1}.\nAs experimental input, we use the LEP1 data from OPAL \\cite{opal1}.\n\nIn $e^+e^-$ annihilation at the $Z$-boson resonance, $X_c$ hadrons are\nproduced either directly through the hadronization of charm quarks produced \nby $Z\\to c\\overline{c}$ or via the weak decays of $X_b$ hadrons from\n$Z\\to b\\overline{b}$.\nIn order to disentangle these two production modes, the authors of\nRef.~\\cite{opal1} utilized the apparent decay length distributions and energy\nspectra of the $X_c$ hadrons.\nBecause of the relatively long $X_b$-hadron lifetimes and the hard $b\\to X_b$\nfragmentation, $X_c$ hadrons originating from $X_b$-hadron decays have\nsignificantly longer apparent decay lengths than those from primary production.\nIn addition, the energy spectrum of $X_c$ hadrons originating from\n$X_b$-hadron decays is much softer than that due to primary charm production. 
\n\nThe experimental cross sections \\cite{opal1} were presented as distributions\ndifferential in $x=2E(X_c)\/\\sqrt s$, where $E(X_c)$ is the measured energy of\nthe $X_c$-hadron candidate, and normalized to the total number of hadronic\n$Z$-boson decays.\nBesides the total $X_c$ yield, which receives contributions from $Z\\to c\\bar c$\nand $Z\\to b\\bar b$ decays as well as from light-quark and gluon fragmentation,\nthe OPAL Collaboration separately specified results for $X_c$ hadrons from\ntagged $Z\\to b\\bar b$ events.\nAs already mentioned above, the contribution due to charm-quark fragmentation\nis peaked at large $x$, whereas the one due to bottom-quark fragmentation has\nits maximum at small $x$.\n\nFor the fits, we use the $x$ bins in the interval $[0.15,1.0]$ and integrate\nthe theoretical cross sections over the bin widths used in the experimental\nanalysis.\nFor each of the four charmed-hadron species considered here,\n$X_c=D^0,D^+,D_s^+,\\Lambda_c^+$, we sum over the two charge-conjugate states as\nwas done in Ref.~\\cite{opal1}.\nAs a consequence, there is no difference between the FF's of a given quark\nand its antiquark.\nAs in Refs.~\\cite{bkk,bkk1}, we take the starting scales for the $X_c$ FF's of\nthe gluon and the $u$, $d$, $s$, and $c$ quarks and antiquarks to be\n$\\mu_0=2m_c$, while we take $\\mu_0=2m_b$ for the FF's of the bottom quark and\nantiquark.\nThe FF's of the gluon and the first three flavors are assumed to be zero at\ntheir starting scale.\nAt larger scales $\\mu$, these FF's are generated through the usual\nDokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) \\cite{dglap} evolution at LO\nor NLO.\nThe FF's of the first three quarks and antiquarks coincide with each other at\nall scales $\\mu$.\n\nWe employ two different forms for the parameterizations of the charm- and\nbottom-quark FF's at their respective starting scales.\nIn the case of charm, we use the distribution of Peterson {\\it et al.}\\\n\\cite{pet},\n\\begin{equation}\nD_c(x,\\mu_0^2)=N\\frac{x(1-x)^2}{[(1-x)^2+\\epsilon x]^2}.\n\\label{eq:peterson}\n\\end{equation}\nIn the case of bottom, we adopt the ansatz\n\\begin{equation}\nD_b(x,\\mu_0^2)=Nx^{\\alpha}(1-x)^{\\beta},\n\\label{eq:standard}\n\\end{equation}\nwhich is frequently used for the FF's of light hadrons.\nEquation~(\\ref{eq:peterson}) is particularly suitable for FF's that peak at\nlarge values of $x$, as is typically the case for $c\\to X_c$ transitions.\nSince the $b\\to X_c$ FF is a convolution of the $b\\to X_b$ fragmentation and\nthe subsequent $X_b\\to X_c+X$ decay, it has its maximum at small $x$ values.\nTherefore, Eq.~(\\ref{eq:peterson}) is less suitable in this case.\nWe apply Eqs.~(\\ref{eq:peterson}) and (\\ref{eq:standard}) for the FF's of all \nfour $X_c$-hadron species considered here.\n\nThe calculation of the cross section $(1\/\\sigma_{\\rm tot})d\\sigma\/dx$ for\n$e^+e^-\\to\\gamma\/Z\\to X_c+X$ is performed as described in Ref.~\\cite{bkk}, in\nthe pure $\\overline{\\mathrm{MS}}$ subtraction scheme, {\\it i.e.}, without the\nsubtraction terms $d_{Qa}(x)$ specified in Eq.~(2) of Ref.~\\cite{kks}.\nAll relevant formulas and references may be found in Ref.~\\cite{bkk1}.\nAs for the asymptotic scale parameter for five active quark flavors, we adopt\nthe LO (NLO) value $\\Lambda_{\\overline{\\rm MS}}^{(5)}=108$~MeV (227~MeV) from\nour study of inclusive charged-pion and -kaon production \\cite{bkk2}.\nThe particular choice of $\\Lambda_{\\overline{\\rm MS}}^{(5)}$ is not essential,\nsince other values can easily 
be accommodated by slight shifts of the other fit\nparameters.\nAs in Refs.~\\cite{bkk,bkk1}, we take the charm- and bottom-quark masses to be \n$m_c=1.5$~GeV and $m_b=5$~GeV, respectively.\n\n\\boldmath\n\\section{Determination of the $D^0$, $D^+$, $D_s^+$, and $\\Lambda_c^+$ FF's}\n\\label{sec:three}\n\\unboldmath\n\nThe OPAL Collaboration \\cite{opal1} presented $x$ distributions for their full\n$D^0$, $D^+$, $D_s^+$, and $\\Lambda_c^+$ samples and for their $Z\\to b\\bar b$\nsubsamples.\nWe received these data in numerical form via private communication\n\\cite{martin}.\nThey are displayed in Figs.~4 (for the $D^0$ and $D^+$ mesons) and 5 (for the\n$D_s^+$ meson and the $\\Lambda_c^+$ baryon) of Ref.~\\cite{opal1} in the form\n$(1\/N_{\\rm had})dN\/dx$, where $N$ is the number of $X_c$-hadron candidates\nreconstructed through appropriate decay chains.\nIn order to convert this into the cross sections\n$(1\/\\sigma_{\\rm tot})d\\sigma\/dx$, we need to divide by the branching \nfractions of the decays that were used in Ref.~\\cite{opal1} for the\nreconstruction of the various $X_c$ hadrons, namely,\n\\begin{eqnarray}\nB(D^0\\to K^-\\pi^+)&=&(3.84\\pm0.13)\\%,\\nonumber\\\\\nB(D^+\\to K^-\\pi^+\\pi^+)&=&(9.1\\pm0.6)\\%,\\nonumber\\\\\nB\\left(D_s^+\\to \\phi\\pi^+\\right)&=&(3.5\\pm0.4)\\%,\\nonumber\\\\\nB\\left(\\Lambda_c^+\\to pK^-\\pi^+\\right)&=&(4.4\\pm0.6)\\%,\n\\end{eqnarray}\nrespectively.\nThe experimental errors on these branching fractions are not included in our\nanalysis.\n\nThe values of $N$ and $\\epsilon$ in Eq.~(\\ref{eq:peterson}) and of $N$,\n$\\alpha$, and $\\beta$ in Eq.~(\\ref{eq:standard}), which result from our LO and\nNLO fits to the OPAL data, are collected in Table~\\ref{tab:par}.\nFrom there, we observe that the parameters $\\alpha$ and $\\beta$, which\ncharacterize the shape of the bottom FF, take very similar values for the\nvarious $X_c$ hadrons, which are also similar to those for the $D^{*+}$\nmeson listed in Table~I of Ref.~\\cite{bkk}.\nOn the other hand, the values of the $\\epsilon$ parameter, which determines\nthe shape of the charm FF, significantly differ from particle species to\nparticle species.\nIn the $D^{*+}$ case \\cite{bkk}, our LO (NLO) fits to ALEPH \\cite{aleph} and\nOPAL \\cite{opal} data, which required separate analyses, yielded\n$\\epsilon=0.144$ (0.185) and 0.0851 (0.116), respectively.\nWe observe that, for each of the $X_c$-hadron species considered, the LO\nresults for $\\epsilon$ are considerably smaller than the NLO ones.\nFurthermore, we notice a tendency for the value of $\\epsilon$ to decrease as\nthe mass ($m_{X_c}$) of the $X_c$ hadron increases. 
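\nAs an illustration of the two parameterizations, the following snippet of ours evaluates Eqs.~(\\ref{eq:peterson}) and (\\ref{eq:standard}) with the NLO $D^0$ parameters of Table~\\ref{tab:par} and integrates them over $0.1\\leq x\\leq 1$. It is only a sketch of the starting distributions at $\\mu_0$: the DGLAP evolution to higher scales is ignored, and all function names are ours.\n\\begin{verbatim}\n# Illustrative sketch (ours): starting-scale FF shapes for c,b -> D0 at NLO,\n# using the parameters of Table 1.  DGLAP evolution is NOT included.\n\ndef D_c(x, N=1.16, eps=0.203):              # Peterson et al. form, c -> D0\n    return N * x * (1.0 - x)**2 / ((1.0 - x)**2 + eps * x)**2\n\ndef D_b(x, N=97.5, alpha=1.71, beta=5.88):  # power ansatz, b -> D0\n    return N * x**alpha * (1.0 - x)**beta\n\ndef integral(f, lo=0.1, hi=1.0, n=100000):  # simple midpoint rule\n    h = (hi - lo) / n\n    return h * sum(f(lo + (i + 0.5) * h) for i in range(n))\n\nxs = [i / 1000.0 for i in range(1, 1000)]\nprint('c -> D0 peaks near x =', max(xs, key=D_c))   # hard spectrum\nprint('b -> D0 peaks near x =', max(xs, key=D_b))   # soft spectrum\nprint('integrals over [0.1, 1]:', integral(D_c), integral(D_b))\n\\end{verbatim}\nThe hard charm spectrum and the soft bottom spectrum anticipate the behavior of the fitted cross sections discussed below.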
\n\n\\begin{table}\n\\begin{center}\n\\caption{Fit parameters of the charm- and bottom-quark FF's for the various\n$X_c$ hadrons at LO and NLO.\nThe corresponding starting scales are $\\mu_0=2m_c=3$~GeV and \n$\\mu_0=2m_b=10$~GeV, respectively.\nAll other FF's are taken to be zero at $\\mu_0=2m_c$.}\n\\label{tab:par}\n\\begin{tabular}{ccccccc}\n\\hline\\hline\n$X_c$ & Order & $Q$ & $N$ & $\\alpha$ & $\\beta$ & $\\epsilon$ \\\\\n\\hline\n$D^0$ & LO & $c$ & 0.998 & -- & -- & 0.163 \\\\\n & & $b$ & 71.8 & 1.65 & 5.19 & -- \\\\\n & NLO & $c$ & 1.16 & -- & -- & 0.203 \\\\\n & & $b$ & 97.5 & 1.71 & 5.88 & -- \\\\\n$D^+$ & LO & $c$ & 0.340 & -- & -- & 0.148 \\\\\n & & $b$ & 48.5 & 2.16 & 5.38 & -- \\\\\n & NLO & $c$ & 0.398 & -- & -- & 0.187 \\\\\n & & $b$ & 64.9 & 2.20 & 6.04 & -- \\\\\n$D_s^+$ & LO & $c$ & 0.0704 & -- & -- & 0.0578 \\\\\n & & $b$ & 40.0 & 2.05 & 4.93 & -- \\\\\n & NLO & $c$ & 0.0888 & -- & -- & 0.0854 \\\\\n & & $b$ & 21.8 & 1.64 & 4.71 & -- \\\\\n$\\Lambda_c^+$ & LO & $c$ & 0.0118 & -- & -- & 0.0115 \\\\\n & & $b$ & 44.1 & 1.97 & 6.33 & -- \\\\\n & NLO & $c$ & 0.0175 & -- & -- & 0.0218 \\\\\n & & $b$ & 27.3 & 1.66 & 6.24 & -- \\\\\n\\hline\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\nIn Table~\\ref{tab:chi}, we list three values of $\\chi^2$ per degree of freedom\n($\\chi_{\\rm DF}^2$) for each of the fits from Table~\\ref{tab:par}:\none for the $Z\\to b\\overline{b}$ subsample, one for the total sample (sum of\ntagged-$c\\overline{c}$, tagged-$b\\overline{b}$, and gluon-splitting events),\nand an average one evaluated by taking into account the $Z\\to b\\overline{b}$\nsubsample and the total sample.\nThe actual $\\chi_{\\rm DF}^2$ values are rather small.\nThis is due to the sizeable errors and the rather limited number of data\npoints, especially for the $D_s^+$ and $\\Lambda_c^+$ data.\nIn each case, the $Z\\to b\\overline{b}$ subsample is somewhat less well\ndescribed than the total sample.\nThe NLO fits yield smaller $\\chi_{\\rm DF}^2$ values than the LO ones, except\nfor the $\\Lambda_c^+$ case.\n\n\\begin{table}\n\\begin{center}\n\\caption{$\\chi^2$ per degree of freedom achieved in the LO and NLO fits to the\nOPAL \\cite{opal1} data on the various $D$ hadrons.\nIn each case, $\\chi_{\\rm DF}^2$ is calculated for the $Z\\to b\\overline{b}$\nsample ($b$), the full sample (All), and the combination of both (Average).}\n\\label{tab:chi}\n\\begin{tabular}{ccccc}\n\\hline\\hline\n$X_c$ & Order & $b$ & All & Average \\\\\n\\hline\n$D^0$ & LO & 1.16 & 0.688 & 0.924 \\\\\n & NLO & 0.988 & 0.669 & 0.829 \\\\\n$D^+$ & LO & 0.787 & 0.540 & 0.663 \\\\\n & NLO & 0.703 & 0.464 & 0.584 \\\\\n$D_s^+$ & LO & 0.434 & 0.111 & 0.273 \\\\\n & NLO & 0.348 & 0.108 & 0.228 \\\\\n$\\Lambda_c^+$ & LO & 1.05 & 0.106 & 0.577 \\\\\n & NLO & 1.05 & 0.118 & 0.582 \\\\\n\\hline\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\nThe normalized differential cross sections $(1\/\\sigma_{\\rm tot})d\\sigma\/dx$\nfor $D^0$, $D^+$, $D_s^+$, and $\\Lambda_c^+$ hadrons (circles), extracted from\nRef.~\\cite{opal1} as explained above, are compared with our LO (upmost dashed\nlines) and NLO (upmost solid lines) fits in Figs.~\\ref{fig:xs}(a)--(d),\nrespectively.\nThe same is also done for the $Z\\to b\\overline{b}$ subsamples (squares).\nIn addition, our LO and NLO fit results for the $Z\\to c\\overline{c}$\ncontributions are shown.\nIn each case, the $X_c$ hadron and its charge-conjugate partner are summed\nover.\nFrom Figs.~\\ref{fig:xs}(a)--(d), we observe that the LO and NLO results are\nvery 
similar, except for very small values of $x$.\nThis is also true for the distributions at the starting scales, as may be seen\nby comparing the corresponding LO and NLO parameters in Table~\\ref{tab:par}.\nThe branching of the LO and NLO results at small values of $x$ indicates that,\nin this region, the perturbative treatment ceases to be valid. \nThis is related to the phase-space boundary for the production of $X_c$ hadrons\nat $x_{\\rm min}=2m_{X_c}\/\\sqrt s$.\nThese values are somewhat larger than the $x$ values where our NLO results turn\nnegative.\nSince our massless-quark approach is not expected to be valid in regions of\nphase space where finite-$m_{X_c}$ effects are important, our results should\nonly be considered meaningful for $x\\agt x_{\\rm cut}=0.1$, say.\nWe also encountered a similar small-$x$ behavior for the $D^{*+}$ FF's in\nRefs.~\\cite{bkk,bkk1}.\n\nAs mentioned above, we take the FF's of the partons\n$g,u,\\overline{u},d,\\overline{d},s,\\overline{s}$ to vanish at their\nstarting scale $\\mu_0=2m_c$.\nHowever, these FF's are generated via the DGLAP evolution to the high scale\n$\\mu=\\sqrt s$.\nThus, apart from the FF's of the heavy quarks $c,\\overline{c},b,\\overline{b}$,\nthese radiatively generated FF's also contribute to the cross section.\nAll these contributions are properly included in the total result for\n$(1\/\\sigma_{\\rm tot})d\\sigma\/dx$ shown in Figs.~\\ref{fig:xs}(a)--(d).\nAt LEP1 energies, the contribution from the first three quark flavors is still\nnegligible; it is concentrated at small values of $x$ and only amounts to a\nfew percent of the integrated cross section.\nHowever, the contribution from the gluon FF, which appears at NLO in \nconnection with $q\\overline{q}g$ final states, is numerically significant.\nAs in our previous works \\cite{bkk,bkk1}, motivated by the decomposition of\n$(1\/\\sigma_{\\rm tot})d\\sigma\/dx$ in terms of parton-level cross sections, we\ndistributed this contribution over the $Z\\to c\\bar c$ and $Z\\to b\\bar b$\nchannels in the ratio $e_c^2:e_b^2$, where $e_q$ is the effective electroweak\ncoupling of the quark $q$ to the $Z$ boson and the photon including propagator\nadjustments.\nThis procedure should approximately produce the quantities that are compared\nwith the OPAL data \\cite{opal1}.\n\nAs in Refs.~\\cite{bkk,bkk1}, we study the branching fractions for the\ntransitions\\break $c,b\\to D^0,D^+,D_s^+,\\Lambda_c^+$, defined by\n\\begin{equation}\nB_Q(\\mu)=\\int_{x_{\\rm cut}}^1dx\\,D_Q(x,\\mu^2),\n\\label{eq:br}\n\\end{equation}\nwhere $Q=c,b$, $D_Q$ are the appropriate FF's, and $x_{\\rm cut}=0.1$.\nThis allows us to test the consistency of our fits with information presented\nin the experimental paper \\cite{opal1} that was used for our fits.\nThe contribution from the omitted region $0<x<x_{\\rm cut}$, where our massless-quark approach is not meaningful, is thus excluded by construction.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"When $\\alpha = \\beta$, our result that $f(n, P, d) = \\Theta(n^{d - 1})$ for every $d$-dimensional tuple permutation matrix $P$, on one hand, generalizes Geneson's result \\cite{G}\nfrom $d=2$ to $d \\geq 2$. On the other hand, even for $d=2$, our ideas improve some key calculations in Geneson's paper \\cite{G}.\nThese improvements are vital in our derivation of a new upper bound on the limit superior of the sequence $\\{ {f(n,P,d) \\over n^{d-1}}\\}$ that we discuss below. 
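\nFor concreteness, the extremal function just discussed can be computed directly from its definition in tiny cases. The following brute-force sketch of ours is restricted to $d=2$ and very small $n$, since the search over all zero-one matrices is exponential; it uses the standard containment notion employed throughout this paper, namely that $A$ contains $P$ if some submatrix of $A$, obtained by selecting rows and columns in order, has a $1$-entry wherever $P$ does.\n\\begin{verbatim}\n# Brute-force illustration (ours), d = 2 only: f(n, P, 2) is the maximum\n# number of ones in an n x n 0-1 matrix avoiding the pattern P.\nfrom itertools import combinations, product\n\ndef contains(A, P):\n    n, m = len(A), len(A[0])\n    k, l = len(P), len(P[0])\n    for rows in combinations(range(n), k):\n        for cols in combinations(range(m), l):\n            if all(A[r][c] >= P[i][j]\n                   for i, r in enumerate(rows)\n                   for j, c in enumerate(cols)):\n                return True\n    return False\n\ndef f(n, P):\n    best = 0\n    for bits in product((0, 1), repeat=n * n):\n        A = [list(bits[i * n:(i + 1) * n]) for i in range(n)]\n        if not contains(A, P):\n            best = max(best, sum(bits))\n    return best\n\nI2 = [[1, 0], [0, 1]]               # 2 x 2 identity permutation matrix\nprint([f(n, I2) for n in (2, 3)])   # prints [3, 5], i.e., 2n - 1\n\\end{verbatim}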
\n\nThe importance of our result $f(n, P, d) = \\Theta(n^{d - 1})$ for every $d$-dimensional tuple permutation matrix $P$ lies in the fact that, in view of Proposition \\ref{Easy}, $\\Theta(n^{d - 1})$ is the lowest possible order for the extremal function of any nontrivial $d$-dimensional matrix. \n\nIn the second direction, we study the limit inferior and limit superior of the sequence $\\{ {f(n,P,d) \\over n^{d-1}} \\}$ where $P$ satisfies $f(n, P, d) = \\Theta(n^{d - 1})$. These are the multidimensional analogues of the F\\\"uredi-Hajnal limit. We show that the limit inferior is at least $d(k-1)$ for $k \\times \\cdots \\times k$ permutation matrices, \ngeneralizing Cibulka's result \\cite{C} from $d=2$ to all $d \\ge 2$. \n\nWe observe that $f(n, P, d)$ is super-homogeneous in higher dimensions, i.e., $f(sn, P, d) \\geq K s^{d-1} f(n, P, d)$ for some positive constant $K$.\nThis super-homogeneity is key to our proof that \nthe limit inferior of $\\{ {f(n,P,d) \\over n^{d-1}} \\}$ has a lower bound $2^{\\Omega(k^{1 \/ d})}$ for a family of $k \\times \\cdots \\times k$ permutation matrices, generalizing Fox's result \\cite{Fox2} from $d=2$ to $d \\geq 2$.\n\nFinally, we show that the limit superior of the sequence $\\{ {f(n,P,d) \\over n^{d-1}} \\}$ is bounded above by $ 2^{O(k)}$ for all $k \\times \\cdots \\times k$ permutation matrices $P$.\nThis is a substantial improvement of Klazar and Marcus upper bound $2^{O(k \\log k)}$ for $d > 2$ in \npaper \\cite{KM} and it also generalizes Fox's bound $2^{O(k)}$ on the F\\\"uredi-Hajnal limit in two dimensions \\cite{Fox}. We further show that this upper bound $2^{O(k)}$ is also true for every tuple permutation matrix $P$, which is a new result even for $d=2$. We are able to extend the new upper bound from permutation matrices to tuple permutation matrices mainly because of our improvement of Geneson's approach as mentioned above.\n\nThe rest of the paper is organized as follows. In Section 2, we study $f(n,P,d)$ when $P$ is a block permutation matrix but not a tuple permutation matrix. The more difficult case when $P$ is a tuple permutation matrix is analyzed in Section \\ref{tuple}. \nIn Section 4, we study the limit inferior and limit superior of the sequence $\\{ {f(n,P,d) \\over n^{d-1}}\\}$ for permutation and tuple permutation matrices $P$. We conclude the paper and discuss our future directions in Section \\ref{conclusion}.\n\n\n\n\n\n\n\\section{Block permutation matrices}\n\\label{Block}\n\\bigskip\n\nIn this section, we study the extremal function of a variant of $d$-dimensional permutation matrices. \nWe are interested in the forbidden matrices which can be written as the Kronecker product of a $d$-dimensional permutation \nmatrix and a $d$-dimensional matrix of $1$-entries only.\n\nLet $R^{k_1,...,k_d}$ be the $d$-dimensional $k_1 \\times \\cdots \\times k_d$ matrix of all ones. We \nstudy lower and upper bounds on the extremal function of block permutation matrix $P \\otimes R^{k_1,...,k_d}$, \nwhere $P$ is a $d$-dimensional permutation matrix.\n\nWe first study the extremal function of $R^{k_1, \\ldots , k_d}$. We use the probabilistic method to obtain a lower \nbound on $f(n,R^{k_1,...,k_d},d)$. 
When $d=2$, this lower bound is classical \\cite{ES}.\n\\begin{theo}\n\\label{lowerbound}\nIf $k_1 \\cdot k_2 \\cdots k_d > 1$, then\n$f(n,R^{k_1, \\ldots , k_d},d) = \\Omega \\left( n^{d- \\beta(k_1, k_2, \\ldots , k_d)} \\right)$, where $\\beta = {k_1+ \\cdots +k_d - d \\over k_1 \\cdot k_2 \\cdots k_d-1}$.\n\\end{theo}\n\\begin{proof} Let each entry of a $d$-dimensional $n \\times \\cdots \\times n$ zero-one matrix \n$A$ be chosen to be $1$ with probability $p=n^{- \\beta(k_1, \\ldots , k_d)}$ and $0$ with probability $1 - p$. \nThe expected number of $1$-entries in $A$ is $pn^d$. There are ${n \\choose k_1} \\cdot {n \\choose k_2} \\cdots {n \\choose k_d}$ \npossible copies of $R^{k_1, \\ldots , k_d}$ in matrix $A$, and each has a probability of $p^{k_1 \\cdot k_2 \\cdots k_d}$ of occurring.\nThe expected number of copies of $R^{k_1, \\ldots , k_d}$ in $A$ is\n$$ {n \\choose k_1} \\cdot {n \\choose k_2} \\cdots {n \\choose k_d}p^{k_1 \\cdot k_2 \\cdots k_d} \\le Cn^{k_1+ \\cdots + k_d}p^{k_1 \\cdot k_2 \\cdots k_d} \\ ,$$\nwhere, since at least one of $k_1$, $\\ldots$ , $k_d$ is greater than one, $C$ is a positive constant less than 1. \n\nLet $A'$ be the matrix formed by changing a single $1$-entry in each copy of $R^{k_1, \\ldots , k_d}$ in \n$A$ to a 0-entry. Then $A'$ avoids $R^{k_1,...,k_d}$ and the expected number of $1$-entries in $A'$ is at least\n$pn^d- Cn^{k_1+k_2+ \\cdots + k_d}p^{k_1 \\cdot k_2 \\cdots k_d}=(1 - C) \\ n^{d-\\beta(k_1, k_2, \\ldots, k_d) }$.\n As a consequence, there exists some matrix $A'$ that avoids $R^{k_1, \\ldots , k_d}$ \nand has at least this many $1$-entries.\n\\end{proof}\n\nWe now obtain an upper bound on the extremal function of $R^{k_1, \\ldots , k_d}$. When $d=2$, this upper bound is due to K\\H{o}v\\'{a}ri, S\\'{o}s, and Tur\\'{a}n \\cite{KST}.\n\\begin{theo}\n\\label{upperbound}\n$f(n,R^{k_1, \\ldots , k_d},d)=O(n^{d- \\alpha(k_1, \\ldots , k_d)})$, where $\\alpha = {\\max({k_1, \\ldots , k_d}) \\over k_1 \\cdot k_2 \\cdots k_d }$.\n\\end{theo}\n\\begin{proof}\nWe prove the theorem by induction on $d$. The base case of $d=1$ is trivial. Assuming that \n$f(n, R^{k_1, \\ldots, k_{d-1}},d-1)=O(n^{d-1- \\alpha(k_1, \\ldots, k_{d-1})})$ \nfor some $d \\geq 2$, we show that $f(n,R^{k_1, \\ldots , k_d},d)=O(n^{d- \\alpha(k_1, \\ldots , k_d)})$.\n\nThroughout the proof, we let $A = (a_{i_1, \\ldots , i_{d}})$ be a $d$-dimensional $n \\times \\cdots \\times n$ matrix \nthat avoids $R^{k_1, \\ldots , k_d}$ and has the maximum number, $f(n,R^{k_1, \\ldots , k_d},d)$, of ones. We need the following lemma on the number \nof $d$-rows that have $1$-entries in each of $k_d$ predetermined $d$-cross sections.\n\n\\begin{lem}\n\\label{d-row}\nFor any set of $k_d$ $d$-cross sections of $A$, there are $O\\left(n^{d-1-\\alpha(k_1, \\ldots , k_{d-1})} \\right)$ $d$-rows in $A$ which contain a 1-entry in each of these $d$-cross sections.\n\n\\end{lem}\n\\begin{proof}\nLet the $d^{\\text{th}}$ coordinates of these $d$-cross sections be $\\ell_1, \\ldots , \\ell_{k_d}$. Define a ($d-1$)-dimensional\n$n\\times \\cdots \\times n$ matrix $B=(b_{i_1, \\ldots , i_{d-1}})$ such that\n$b_{i_1, \\ldots , i_{d-1}}=1$ if $a_{ i_1, \\ldots , i_{d-1},\\ell_1}= \\cdots =a_{i_1, \\ldots , i_{d-1}, \\ell_{k_d}}=1$ \nand $b_{i_1, \\ldots , i_{d-1}}=0$ otherwise.\n\nWe claim that matrix $B$ must avoid $R^{k_1, \\ldots , k_{d-1}}$. Suppose to the contrary that \n$B$ contains $R^{k_1, \\ldots , k_{d-1}}$. 
Let $e_1, \\ldots , e_{k_1 \\cdot k_2 \\cdots k_{d-1}}$ be all the $1$-entries \nin $B$ that represent $R^{k_1,...,k_{d-1}}$. By the construction of $B$,\nthere are $k_d$ nonzero entries with coordinates $(x_1, \\ldots , x_{d-1},\\ell_1), \\ldots , (x_1, \\ldots , x_{d-1}, \\ell_{k_d})$ in $A$ corresponding to\neach $e_i$ with coordinates $(x_1, \\ldots , x_{d-1})$ in $B$. All these $k_1 \\cdot k_2 \\cdots k_d$ nonzero entries \nform a copy of $ R^{k_1, \\ldots , k_{d}}$ in $A$, a contradiction. Thus $B$ must avoid \n$R^{k_1, \\ldots , k_{d-1}}$ and, by our inductive assumption, $B$ must have \n$O(n^{d-1-\\alpha(k_1, \\ldots , k_{d-1})})$ ones. The result follows.\n\\end{proof}\n\nSuppose all the $d$-rows of $A$ have $r_1, \\ldots , r_{n^{d-1}}$ non-zero entries, respectively. \nCounting the total number of sets of $k_d$ nonzero entries in the same $d$-row in two different ways yields\n\\begin{equation}\n\\label{two}\n\\sum_{i=1}^{n^{d-1}}{r_i \\choose k_d} = {n \\choose k_d}O\\left( n^{d-1-\\alpha(k_1, \\ldots , k_{d-1})} \\right) \\ ,\n\\end{equation}\nwhere we use Lemma \\ref{d-row} to obtain the right hand side.\n\nMatrix $A$ avoids $R^{k_1, \\ldots, k_d}$ and has the largest possible number of $1$-entries, so $r_i \\geq k_d - 1$ for $1 \\leq i \\leq n^{d-1}$. \nSince ${r \\choose k}$ is a convex function of $r$ for $r \\geq k - 1$, we apply Jensen's inequality to obtain\n\\begin{eqnarray*}\n\\sum_{i=1}^{n^{d-1}}{r_i \\choose k_d} \\ge n^{d-1} {{1\\over n^{d-1}}\\sum_{i=1}^{n^{d-1}} r_i \\choose k_d} \n= n^{d-1}{{1\\over n^{d-1}}f(n,R^{k_1, \\ldots , k_d},d) \\choose k_d} \\ ,\n\\end{eqnarray*}\nwhere, in the equality, we use the assumption that $A$ has $f(n,R^{k_1, \\ldots , k_d}, d)$ total $1$-entries.\nSubstituting this into equation (\\ref{two}) yields\n$$n^{d-1}{{1\\over n^{d-1}}f(n,R^{k_1, \\ldots , k_d},d) \\choose k_d} = {n \\choose k_d}O\\left(n^{d-1-\\alpha(k_1,\\ldots , k_{d-1})} \\right) \\ ,$$\n\\noindent\nwhich together with ${n \\choose k}=\\Theta(n^k)$ gives\n$$n^{d-1} \\left({1\\over n^{d-1}}f(n,R^{k_1, \\ldots , k_d},d)\\right)^{k_d} = O\\left( n^{ k_d}\\cdot n^{d-1-\\alpha(k_1,\\ldots , k_{d-1})} \\right) \\ . $$ \nThis implies\n$$f\\left( n,R^{k_1, \\ldots , k_d},d \\right) = O \\left( n^{d-{\\alpha(k_1, \\ldots , k_{d-1}) \\over k_d }} \\right) \\ . $$\nSimilarly, we have\n$$f(n,R^{k_1, \\ldots , k_d},d) = O\\left(n^{d-{\\alpha(k_2, \\ldots , k_d) \\over k_1}}\\right) \\ .$$ \nNote that $\\max\\left({\\alpha(k_2, \\ldots , k_d) \\over k_1}, {\\alpha(k_1, \\ldots , k_{d-1}) \\over k_d}\\right)=\\alpha(k_1, \\ldots , k_d)$. \nThus taking the smaller of the two upper bounds gives\n$$f(n,R^{k_1, \\ldots , k_d},d) = O\\left(n^{d-\\alpha(k_1, \\ldots , k_d)}\\right) $$ \nwhich completes the inductive step, and thus Theorem \\ref{upperbound} is proved.\n\\end{proof}\n\nWe make the following observation on $\\alpha(k_1, \\ldots , k_d)$ and $\\beta(k_1, \\ldots , k_d)$.\n\\begin{pro}\n\\label{alpha}\n Suppose $d > 1$ and let $k_1, \\ldots , k_d$ be positive integers such that $k_1 \\cdot k_2 \\cdots k_d > 1$. If only one of $k_1, \\ldots , k_d$ is greater than 1, then\n$\\alpha(k_1, \\ldots , k_d) = \\beta(k_1, \\ldots , k_d) = 1 $.\nOtherwise,\n$0 < \\alpha(k_1, \\ldots , k_d) < \\beta(k_1, \\ldots , k_d) < 1 $.\n\\end{pro}\n\nWe omit the proof since it is straightforward. 
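As a quick check of Proposition \\ref{alpha} (our worked example, complementing the omitted proof), consider the cases\n\\begin{eqnarray}\n(k_1,k_2)=(2,3):&\\quad& \\alpha={3\\over 6}={1\\over 2}, \\qquad \\beta={2+3-2\\over 6-1}={3\\over 5}, \\qquad 0<\\alpha<\\beta<1;\\nonumber\\\\\n(k_1,k_2,k_3)=(k,1,1):&\\quad& \\alpha={k\\over k}=1, \\qquad \\beta={k+1+1-3\\over k-1}=1.\\nonumber\n\\end{eqnarray}\nThe first case is typical of genuine block patterns, while the second corresponds exactly to the tuple permutation matrices treated in the next section.\n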
Proposition \\ref{alpha} implies that the lower bound of Theorem \\ref{lowerbound} and the upper bound of Theorem \\ref{upperbound} are significant improvements of the bounds in Proposition \\ref{Easy}.\n\nWe now study the extremal function of the Kronecker product $P \\otimes R^{k_1, \\ldots , k_d}$, where $P$ is a $d$-dimensional permutation matrix. We show that the extremal functions of $P \\otimes R^{k_1, \\ldots , k_d}$ and $R^{k_1, \\ldots , k_d}$ share the same lower and upper bounds.\n\n\\noindent\n\\begin{theo}\n\\label{super}\nIf $P$ is a $d$-dimensional permutation matrix and at least two of $k_1, \\ldots , k_d$ are greater than 1, then there exist constants $C_1$ and $C_2$ such that for all $n$,\n\\begin{equation}\n\\label{block}\nC_1 n^{d - \\beta(k_1, \\ldots , k_d)} \\leq f(n,P\\otimes R^{k_1, \\ldots , k_d}, d) \\leq C_2 n^{d - \\alpha(k_1, \\ldots , k_d)}\n\\end{equation}\n\\end{theo}\n\\noindent\n\\begin{proof} We first have \n\\begin{equation}\n\\label{R}\nf(n, R^{k_1, \\ldots , k_d}, d) \\leq f(n,P\\otimes R^{k_1, \\ldots , k_d}, d).\n\\end{equation}\n This follows from the fact that any matrix that avoids $R^{k_1, \\ldots , k_d}$ must also avoid\n$P\\otimes R^{k_1, \\ldots , k_d}$. The left inequality of (\\ref{block}) is then the result of (\\ref{R}) and Theorem \\ref{lowerbound}.\n\nTo prove the right inequality of (\\ref{block}), we follow Hesterberg's idea for the $2$-dimensional case \\cite{H} to \nestimate $f(n,P\\otimes R^{k_1, \\ldots , k_d}, d)$ first for $n = c^m$, where $m$ is an arbitrary positive integer and $c$ \nis a positive integer to be determined, and then for all other positive integers $n$. \n\nWe make use of the upper bound in Theorem \\ref{upperbound}\n\\begin{equation}\n\\label{Kconstant}\nf(n, R^{k_1, \\ldots , k_d}, d) \\leq g(n) \\ , \n\\end{equation}\nwhere $g(n) = K n^{d - \\alpha(k_1, \\ldots , k_d)}$ for some positive constant $K$, and claim that\n\\begin{equation}\n\\label{cm}\n f(c^m,P\\otimes R^{k_1, \\ldots , k_d}, d) \\leq 2 c^d g(c^m).\n\\end{equation}\n\nWe justify the claim by induction. The base case of $m=0$ is trivially true. Suppose that \n\\begin{equation}\n\\label{indu}\nf(n,P\\otimes R^{k_1, \\ldots , k_d},d) \\le 2c^d g(n) \n\\end{equation}\nfor $n = c^m$. We show that \n$f(cn,P\\otimes R^{k_1, \\ldots , k_d},d)\\le 2c^d g(cn)$.\n\nLet $A$ be a $d$-dimensional $cn\\times \\cdots \\times cn$ matrix avoiding $P\\otimes R^{k_1, \\ldots , k_d}$ with $f(cn, P \\otimes R^{k_1, \\ldots , k_d}, d)$ total $1$-entries. We divide $A = (a_{i_1, \\ldots , i_d})$ into $c^d$ \ndisjoint submatrices of size $n \\times \\cdots \\times n$. We label these submatrices by $S(i_1, \\ldots , i_d) = (s_{j_1, \\ldots , j_d})$, where \n$$s_{j_1, \\ldots , j_d} =a_{j_1+n(i_1-1), \\ldots , j_d+n(i_d-1)} \\ .$$\nThese are called $S$ submatrices throughout the paper. \n\nLet $C$ be the $d$-dimensional $c \\times \\cdots \\times c$ matrix such that $c_{i_1, \\ldots , i_d}=1$ \nif submatrix $S(i_1, \\ldots , i_d)$ of $A$ contains $R^{k_1, \\ldots , k_d}$ and that $c_{i_1, \\ldots , i_d}=0$ otherwise. \nSince any two $1$-entries of the permutation matrix $P$ differ in all coordinates, $C$ must avoid $P$ or else $A$ contains $P\\otimes R^{k_1, \\ldots , k_d}$.\n\nWe can classify all the $S$ submatrices of $A$ into two classes.\n\n\\medskip\n\n\\noindent\n{\\bf Case 1: $S$ contains $R^{k_1, \\ldots , k_d}$} \n\nSince $C$ avoids $P$, there are at most $f(c,P,d)$ such $S$ submatrices. 
Clearly each $S$ \nsubmatrix must avoid $P \\otimes R^{k_1, \\ldots , k_d}$, so it has at most $f(n,P\\otimes R^{k_1, \\ldots , k_d},d)$ \n$1$-entries. There are at most $f(c,P,d)f(n,P\\otimes R^{k_1, \\ldots , k_d},d)$ $1$-entries from $S$ submatrices of this type.\n\n\\medskip\n\n\\noindent\n{\\bf Case 2: $S$ avoids $R^{k_1, \\ldots , k_d}$} \n\nThere are at most $c^d$ such submatrices in total. Each has at most $f(n,R^{k_1, \\ldots , k_d},d)$ $1$-entries. \nThere are at most $c^df(n,R^{k_1, \\ldots , k_d},d)$ $1$-entries from $S$ submatrices of the second type.\n\n\\medskip\n\n\\noindent\nSumming the numbers of $1$-entries in both cases gives \n\\begin{equation}\nf(cn,P\\otimes R^{k_1,...,k_d},d)\\le f(c,P,d)f(n,P\\otimes R^{k_1, \\ldots , k_d},d)+c^df(n,R^{k_1, \\ldots , k_d},d) . \\nonumber\n\\end{equation}\nOn the right hand side of the inequality, $f(n,P\\otimes R^{k_1, \\ldots , k_d},d)$ has an upper bound $2c^d g(n)$ \nbecause of the inductive assumption (\\ref{indu}), and $f(n,R^{k_1, \\ldots , k_d},d)$ has an upper bound $g(n)$ by (\\ref{Kconstant}). Since $f(c,P,d) = O(c^{d-1})$ \nfor any permutation matrix $P$ \\cite{KM}, \nthere exists a constant $L$ such that $f(c,P,d) \\le Lc^{d-1}$. Because at least two of $k_1, k_2, \\ldots , k_d$ are greater than $1$, it follows from Proposition \\ref{alpha}\nthat $\\alpha < 1$. Hence, the integer $c$ can be chosen \nso large that $2 L c^{\\alpha - 1} \\leq 1$. Therefore,\n\\begin{eqnarray*}\nf(cn,P\\otimes R^{k_1,...,k_d},d)\n\\le (L c^{d-1})(2c^d g(n))+c^dg(n)\n\\le [2L c^{\\alpha(k_1, \\ldots, k_d) - 1}]c^{d}g(cn)+c^dg(cn)\n\\le2c^dg(cn) \\ ,\n\\end{eqnarray*}\nwhere we use $g(n) = K n^{d - \\alpha}$ in the second inequality. This completes our induction and hence proves equation (\\ref{cm}).\n\nFinally, we estimate $f(n,P\\otimes R^{k_1, \\ldots , k_d}, d)$ for all positive integers $n$.\n\\begin{eqnarray*}\nf(n, P \\otimes R^{k_1,...,k_d}, d) &=& f(c^{\\log_c n},P \\otimes R^{k_1, \\ldots , k_d},d) \\\\\n&\\le& f(c^{\\lceil\\log_c n \\rceil},P \\otimes R^{k_1, \\ldots , k_d},d) \\\\\n& \\le & 2c^dg(c^{\\lceil\\log_c n \\rceil}) \\\\\n&\\le& 2c^dg(c^{\\log_c n +1}) \\\\\n&=& 2c^dg(cn) \\\\\n&\\le& 2c^d c^dg(n),\n\\end{eqnarray*}\nwhere $\\lceil\\log_c n \\rceil$ is the smallest integer $\\ge \\log_c n$, \nand we use (\\ref{cm}) in the second inequality and $g(n) = K n^{d - \\alpha}$ in the last inequality.\nThis proves the right inequality of (\\ref{block}).\n\nThe proof of Theorem \\ref{super} is completed.\n\\end{proof}\n\nWe conclude this section with an observation. If only one of $k_1, \\ldots , k_d$ is greater than one, the matrix $P \\otimes R^{k_1, \\ldots, k_d}$\n is a tuple permutation matrix. By Proposition \\ref{alpha}, $\\alpha(k_1, \\ldots , k_d) = 1$. The proof of Theorem \\ref{super} fails in this case, but it can be \nmodified to show that $f(n, P \\otimes R^{k_1, \\ldots , k_d}, d) = O(n^{d - 1 + \\epsilon})$, where $\\epsilon$ is an arbitrarily small positive number. To see this, we can replace $g(n)$ of (\\ref{Kconstant}) by $g(n) = K n^{d - 1 + \\epsilon}$ and choose $c$ so large that $2 L c^{- \\epsilon} \\leq 1$. In the next section, we improve this result and show that $f(n, P \\otimes R^{k_1, \\ldots , k_d}, d) = O(n^{d - 1})$. The method is quite different from that of this section.\n\n\n\n\n\n\n\n\n\n\\section{Tuple permutation matrices}\n\\label{tuple}\nIn this section, we study the extremal function of an arbitrary tuple permutation matrix. 
As previously mentioned, a tuple permutation matrix is the Kronecker product of a $d$-dimensional permutation matrix and $R^{k_1, \\ldots , k_d}$, where only one of $k_1, \\ldots, k_d$ is larger than unity. We improve Geneson's ideas for the $d=2$ case \\cite{G} and obtain a tight bound on the extremal function for $d \\geq 2$.\n\nSuppose $P$ is a permutation matrix. We call a matrix $P \\otimes R^{k_1, \\ldots , k_d}$ a $j$-tuple permutation matrix generated by $P$ if one of $k_1, \\ldots , k_d$ is equal to $j$ and the rest are unity. In particular, a $j$-tuple permutation matrix is called a double permutation matrix if $j=2$.\n\nLet\n\\begin{displaymath}\nF(n,j,k,d)=\\max_{M} f(n,M,d) \\ ,\n\\end{displaymath}\nwhere $M$ ranges through all $d$-dimensional $j$-tuple permutation matrices generated by $d$-dimensional $k \\times \\cdots \\times k$ permutation matrices.\n\n\\begin{theo}\n\\label{main}\nFor all $j \\ge 2$,\n$F(n,j,k,d)=\\Theta(n^{d-1})$.\n\\end{theo}\n\n\\noindent\nThe proof of this theorem is based on a series of lemmas.\n\nSince $F(n,j,k,d)$ has $n^{d-1}$ as a lower bound in view of Proposition \\ref{Easy}, it suffices to prove that it has upper bound $O(n^{d-1})$.\n\nWe first observe that $F(n,j,k,d)$ and $F(n,2,k,d)$ are bounded by each other.\n\\begin{lem}\n\\label{2j}\n$F(n, 2, k, d) \\leq F(n, j, k, d) \\leq (j-1) F(n, 2, k, d) ~~~~~ \\mbox{for $j > 2$} \\ .$\n\\end{lem}\n\\begin{proof}\nIt suffices to show that\n\\begin{equation}\n\\label{2j'}\nf(n, P, d) \\leq f(n, P', d) \\leq (j-1) f(n, P, d) ,\n\\end{equation}\nwhere $P$ is a double permutation $2k \\times k \\times \\cdots \\times k$ matrix, $P'$ is a $j$-tuple permutation $jk \\times k \\times \\cdots \\times k$ matrix, and both $P$ and $P'$ are generated from the same arbitrary permutation matrix of size $k \\times \\cdots \\times k$.\n\nThe left inequality of (\\ref{2j'}) follows from the fact that a $d$-dimensional $n \\times \\cdots \\times n$ matrix that avoids $P$ must also avoid $P'$.\n\nTo prove the right inequality, we suppose $A$ is a $d$-dimensional $n \\times \\cdots \\times n$ matrix that avoids $P'$ and has $f(n, P', d)$ nonzero entries. In each $1$-row of $A$, we list all the $1$-entries $e_1, e_2, \\ldots$ in the order of increasing first coordinates and then change all the $1$-entries in this $1$-row except $e_1, e_{j}, e_{2j-1}, \\ldots$ to $0$-entries. In this way, we obtain a new matrix $A'$, which avoids $P$ since $A$ avoids $P'$: two surviving $1$-entries that are consecutive in a $1$-row of $A'$ span $j$ consecutive listed $1$-entries of $A$, so a copy of $P$ in $A'$ would yield a copy of $P'$ in $A$. This together with $|A| \\leq (j-1)|A'|$, where $|M|$ denotes the number of $1$-entries in $M$, justifies the right inequality of (\\ref{2j'}).\n\\end{proof}\n\nIn view of Lemma \\ref{2j}, it suffices to study the upper bound on $f(n, P, d)$, where $P$ is a $d$-dimensional double permutation matrix of size $2k \\times k \\times \\cdots \\times k$.\n\nSuppose $A$ is an arbitrary $d$-dimensional $kn \\times \\cdots \\times kn$ matrix that avoids $P$.
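Pattern containment and avoidance of this kind can be tested directly on small instances. The following sketch (our own illustration, not part of the argument; exponential time, so suitable for tiny inputs only) checks whether a $d$-dimensional zero-one array $A$ contains a pattern $P$ in the sense used throughout this paper.\n\\begin{verbatim}\nimport itertools\nimport numpy as np\n\ndef contains(A, P):\n    # A contains P if we can pick P.shape[i] indices along each axis i\n    # of A so that the induced submatrix has a 1 wherever P does.\n    axes = [itertools.combinations(range(A.shape[i]), P.shape[i])\n            for i in range(A.ndim)]\n    return any(np.all(A[np.ix_(*idx)] >= P)\n               for idx in itertools.product(*axes))\n\\end{verbatim}\nWith $P$ and $B$ as in the previous sketch, \\texttt{contains(B, P)} returns \\texttt{True}, while \\texttt{contains(P, B)} returns \\texttt{False}.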
As in Section 2, we study the $S$ submatrices of $A$, which are constructed by dividing $A$ into $n^d$ disjoint submatrices of size $k \\times \\cdots \\times k$ and labeling these submatrices as $S(i_1, \\ldots , i_d)$.\n\nThe contraction matrix of $A$ is defined to be the $d$-dimensional $n \\times \\cdots \\times n$ matrix $C = \\left(c_{i_1,i_2, \\ldots , i_d}\\right)$ such that $c_{i_1,i_2, \\ldots ,i_d}=1$ if $S(i_1, i_2, \\ldots , i_d)$ is a nonzero matrix and $c_{i_1,i_2, \\ldots , i_d}=0$ if $S(i_1, i_2, \\ldots ,i_d)$ is a zero matrix.\n\nWe now construct a $d$-dimensional $n \\times \\cdots \\times n$ zero-one matrix $Q=(q_{i_1, \\ldots , i_d})$. Each entry $q_{i_1, \\ldots , i_d}$ is defined based on the $S$ submatrices of $A$.\n\\begin{enumerate}\n\\item $q_{i_1, \\ldots , i_d}=0$ if $S(i_1, \\ldots , i_d)$ is a zero matrix.\n\n\\item $q_{i_1, \\ldots , i_d}=1$ if $S(i_1, i_2, \\ldots , i_d)$ is a nonzero matrix and $S(1,i_2, \\ldots , i_d)$, $\\ldots$, $S(i_1-1,i_2, \\ldots , i_d)$ are all zero matrices.\n\n\\item If $S(i_1, \\ldots , i_d)$ is a nonzero matrix and neither of the previous two cases applies, let $x$ be the largest integer less than $i_1$ for which $q_{x,i_2, \\ldots , i_d}=1$. Then define $q_{i_1, i_2, \\ldots , i_d}=1$ if the augmented matrix formed by submatrices $S(x, i_2, \\ldots , i_d)$, $\\ldots$ , $S(i_1, i_2, \\ldots , i_d)$ contains at least two $1$-entries in the same $1$-row, and $q_{i_1, \\ldots , i_d}= 0$ otherwise.\n\n\\end{enumerate}\n\n\\noindent\n\\begin{lem}\n\\label{Q}\n$Q$ avoids $P$.\n\\end{lem}\n\\begin{proof}\nSuppose to the contrary that $Q$ contains $P$. Suppose the $1$-entries $e_1$, $e_2$, $\\ldots$ , $e_{2k}$, where $e_{2i-1}$ and $e_{2i}$ are in the same $1$-row, form a copy of $P$ in $Q$. Denote $e_{2i-1}= q_{x_1, x_2, \\ldots , x_d}$ and $e_{2i}= q_{x_1',x_2, \\ldots, x_d}$, where $x_1 < x_1'$. Then, by the definition of matrix $Q$, the augmented matrix formed by $S(x_1, x_2, \\ldots , x_d), \\ldots , S(x_1', x_2, \\ldots , x_d)$ contains two $1$-entries, denoted by $f_{2i-1}$ and $f_{2i}$, in the same $1$-row of $A$. The $1$-entries $f_1, \\ldots , f_{2k}$ form a copy of $P$ in $A$, a contradiction.\n\\end{proof}\n\nWe now study those $S$ submatrices of $A$ which contain two nonzero entries in the same $1$-row. The next lemma is the key difference between our approach and Geneson's approach \\cite{G} even for $d=2$.\n\\begin{lem}\n\\label{wide}\n$A$ has at most $F(n, 1, k, d)$ total $S$ submatrices with two nonzero entries in the same $1$-row.\n\\end{lem}\n\\begin{proof} We assume to the contrary that $A$ has more than $F(n,1,k,d)$ such $S$ submatrices. Let $A'$ be formed by changing all $1$-entries in all other $S$ submatrices to $0$-entries in $A$. Suppose that the double permutation matrix $P$ is generated from the permutation matrix $P'$ and that $C'$ is the contraction matrix of $A'$. Matrix $C'$ has more than $F(n,1,k,d) \\ge f(n,P',d)$ $1$-entries, so it must contain $P'$. Denote by $e_1, \\ldots , e_k$ the $1$-entries in $C'$ forming a copy of $P'$. Then each of $S(e_1), \\ldots, S(e_k)$ is an $S$ submatrix of $A'$ that has at least two nonzero entries in the same $1$-row. All of these pairs of nonzero entries in $S(e_1), \\ldots , S(e_k)$ form a copy of $P$ in $A'$. Hence, $A'$ contains $P$ and so does $A$, a contradiction.\n\\end{proof}\n\nFor each 1-entry $q_{i_1, i_2, \\ldots , i_d}=1$ of $Q$, we define a chunk $C^*(i_1, i_2, \\ldots , i_d)$, which is an augmented matrix formed by consecutive $S$ submatrices, as follows \\cite{G}.
\n\n\\begin{enumerate}\n\n\\item If $q_{i_1, i_2, \\ldots , i_d}=1$ and $i_1'$ is the smallest integer greater than $i_1$ such that $q_{i_1', i_2, \\ldots , i_d}=1$, then the chunk $C^*(i_1, i_2, \\ldots , i_d)$ is defined to be the augmented matrix formed by $S(i_1, i_2, \\ldots , i_d)$, $\\ldots$ , $S(i_1' - 1, i_2, \\ldots , i_d)$.\n\n\\item If $q_{i_1, i_2, \\ldots , i_d}=1$ and there is no $i_1'>i_1$ such that $q_{i_1', i_2, \\ldots , i_d}=1$, then $C^*(i_1, i_2, \\ldots , i_d)$ is the augmented matrix formed by $S(i_1, i_2, \\ldots , i_d)$, $\\ldots$ , $S(n, i_2, \\ldots , i_d)$.\n\n\\end{enumerate}\n\nWe call a chunk {\\it $j$-tall}, where $j=2,3, \\ldots , d$, if each of its $j$-cross sections contains at least one 1-entry. The $(d-1)$-dimensional matrix $M' = (m'_{i_1, \\ldots, i_{j-1}, i_{j+1}, \\ldots, i_d})$ is called the $j$-remainder of a $d$-dimensional matrix $M = (m_{i_1, \\ldots , i_d})$ if $m'_{i_1, \\ldots , i_{j-1},i_{j+1}, \\ldots , i_d}$ is defined to be $1$ when there exists $i_j$ such that $m_{i_1, \\ldots , i_d}=1$ and to be $0$ otherwise.\n\n\\begin{lem}\n\\label{tall}\nFor each $j= 2, 3, \\ldots, d$ and each $m=1, \\ldots , n$, $A$ has at most $F(n,1+k^{d-2},k,d-1)$ total $j$-tall chunks of the form $C^*(i_1, \\ldots , i_{j-1},m,i_{j+1}, \\ldots , i_d)$.\n\\end{lem}\n\\begin{proof}\nAssume to the contrary that $A$ has $r$ chunks $C^*_1, C^*_2, \\ldots , C^*_r$, where $r>F(n,1+k^{d-2},k,d-1)$, of the form $C^*(i_1, \\ldots , i_{j-1},m,i_{j+1}, \\ldots , i_d)$ that have $1$-entries in all their $j$-cross sections. Let $S_1, S_2, \\ldots, S_r$ be the starting $S$ submatrices of the chunks $C^*_1, C^*_2, \\ldots , C^*_r$, respectively. Let $A'$ be the matrix formed by changing all $1$-entries of $A$ that do not lie in the chunks $C^*_1, \\ldots , C^*_r$ to $0$-entries. We further change all the $1$-entries of $A'$ that do not sit in $S_1, \\ldots, S_r$ to $0$-entries and denote the resulting matrix by $A''$. Denote by $C$ the contraction matrix of the $j$-remainder of $A''$. Then $C$ is a $(d-1)$-dimensional $n \\times \\cdots \\times n$ matrix with $r > F(n,1+k^{d-2},k,d-1)$ ones, so it contains every $(1+k^{d-2})$-tuple $(d-1)$-dimensional permutation matrix generated by a $k \\times \\cdots \\times k$ permutation matrix.\n\nWe now pick a $(d-1)$-dimensional $(1 + k^{d-2})$-tuple permutation matrix. Since $P$ is a $d$-dimensional double permutation matrix of size $2k \\times k \\times \\cdots \\times k$ and $j \\neq 1$, the $j$-remainder of $P$ is a $(d-1)$-dimensional double permutation matrix of size $2k \\times k \\times \\cdots \\times k$. We denote by $P'$ the $(d-1)$-dimensional $(1 + k^{d-2})$-tuple permutation matrix of size $(1+k^{d-2})k \\times k \\times \\cdots \\times k$ such that $P'$ and the $j$-remainder of $P$ are generated from the same $(d-1)$-dimensional permutation matrix.\n\nFor each pair of ones in a row of $P$ with coordinates $(x_1,x_2, \\ldots , x_d)$ and $(x_1 + 1,x_2, \\ldots , x_d)$, $P'$ has corresponding $(1 + k^{d-2})$ ones with coordinates $(\\tilde{x}_1, x_2, \\ldots, x_{j-1}, x_{j+1}, \\ldots , x_d)$, $(\\tilde{x}_1 + 1, x_2, \\ldots , x_{j-1}, x_{j+1}, \\ldots , x_d)$, $\\ldots$ , $(\\tilde{x}_1 + k^{d-2}, x_2, \\ldots , x_{j-1}, x_{j+1}, \\ldots, x_d)$ in a single $1$-row.
Since $C$ contains $P'$, this set of $(1 + k^{d-2})$ ones is represented by $1$-entries with coordinates $(t_1(\\lambda), t_2, \\ldots , t_{j-1}, t_{j+1}, \\ldots , t_d)$, where $\\lambda = 1, 2, \\ldots , 1 + k^{d-2}$, in the same $1$-row of $C$.\n\nLet $S(t_1(\\lambda), t_2, \\ldots , t_{j-1}, m, t_{j+1}, \\ldots , t_d)$, $1 \\leq \\lambda \\leq 1 + k^{d-2}$, be the corresponding $S$ submatrices of $A'$. By the construction of $A'$, $A''$ and $C$, these $S$ submatrices are the starting $S$ submatrices of some of the chunks $C^*_1, \\ldots , C^*_r$. Each of these $(1 + k^{d-2})$ chunks has $1$-entries in every $j$-cross section; in particular each chunk has a nonzero entry with the same $j^{\\text{th}}$ coordinate $(m-1)k+x_j$. There are at least $1+k^{d-2}$ nonzero entries with this given $j^{\\text{th}}$ coordinate in these chunks, but there are only $k^{d-2}$ $1$-rows in a $j$-cross section of these chunks. By the pigeonhole principle, there exists a pair of $1$-entries in the same $1$-row of $A'$.\n\nHence, for each pair of ones in the same $1$-row of $P$, we have a corresponding pair of ones in the same $1$-row of $A'$. Since two $1$-entries of $P$ not in the same $1$-row differ in all their coordinates, $A'$ contains $P$, and so does $A$; a contradiction.\n\\end{proof}\n\nWe can now derive a recursive inequality on $F(n, j, k, d)$, the resolution of which gives an upper bound on $F(n,j,k,d)$.\n\n\\begin{lem}\n\\label{Ine}\nLet $d$, $k$, $n$ be positive integers where $d \\ge 2$. Then\n\\begin{eqnarray}\nF(kn,2,k,d) &\\le& (d-1)nk^{d-1}F(n, 1+k^{d-2},k,d-1) + k^dF(n,1,k,d) +(k-1)^{d-1}F(n,2,k,d) .\n\\label{IN}\n\\end{eqnarray}\n\\end{lem}\n\\begin{proof} We count the maximum number of $1$-entries in $A$ by counting the number of ones in three types of chunks of $A$.\n\n\\medskip\n\n\\noindent\n{\\bf Case 1: chunk has two $1$-entries in the same $1$-row}\n\nIn view of the definitions of matrix $Q$ and a chunk, such a chunk has only one nonzero $S$ submatrix, so it has at most $k^d$ nonzero entries. By Lemma \\ref{wide}, there are at most $F(n,1,k,d)$ such $S$ submatrices. Chunks of this type contain at most $k^d F(n, 1, k, d)$ nonzero entries.\n\n\\medskip\n\n\\noindent\n{\\bf Case 2: chunk is $j$-tall for some $j=2, 3, \\ldots, d$ and has no two $1$-entries in the same $1$-row}\n\nThere are $d-1$ choices of $j$, namely $j=2, 3, \\ldots , d$. For each $j$, the integer $m$ of Lemma \\ref{tall} can be $1, \\ldots , n$. A $j$-tall chunk with no two $1$-entries in the same $1$-row has at most $k^{d-1}$ $1$-entries. For each pair of $j$ and $m$, there are at most $F(n, 1 + k^{d-2}, k, d-1)$ such chunks in view of Lemma \\ref{tall}. In total, chunks of this type contain at most $(d-1)nk^{d-1}F(n, 1+k^{d-2},k,d-1)$ nonzero entries.\n\n\\medskip\n\n\\noindent\n{\\bf Case 3: chunk is not $j$-tall for any $j=2, 3, \\ldots , d$ and has no two $1$-entries in the same $1$-row}\n\nSuch a chunk has at most $(k-1)^{d-1}$ ones. By the definition of a chunk, the number of chunks is equal to the number of nonzero entries in matrix $Q$, which, by Lemma \\ref{Q}, has at most $F(n,2,k,d)$ nonzero entries. There are at most $(k-1)^{d-1} F(n,2,k,d)$ ones in chunks of this type.\n\nSumming all cases proves Lemma \\ref{Ine}.\n\\end{proof}
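The bookkeeping behind this census (the partition into $S$ submatrices and the matrix $Q$) is entirely mechanical. The following sketch (ours, assuming a dense numpy array; the function name is for illustration only) computes $Q$ exactly as defined earlier in this section.\n\\begin{verbatim}\nimport numpy as np\nfrom itertools import product\n\ndef q_matrix(A, k):\n    # A: d-dimensional 0-1 array of shape (k*n, ..., k*n)\n    d, n = A.ndim, A.shape[0] // k\n    blocks = A.reshape(sum(((n, k) for _ in range(d)), ()))\n    blocks = blocks.transpose(*range(0, 2 * d, 2), *range(1, 2 * d, 2))\n    Q = np.zeros((n,) * d, dtype=int)\n    for idx in product(range(n), repeat=d - 1):   # fixed (i_2, ..., i_d)\n        x = None                                  # last index with q = 1\n        for i1 in range(n):\n            S = blocks[(i1,) + idx]\n            if not S.any():\n                continue                          # rule 1\n            if x is None:\n                Q[(i1,) + idx], x = 1, i1         # rule 2\n            else:\n                aug = blocks[(slice(x, i1 + 1),) + idx]\n                # rule 3: two 1-entries in one 1-row of the augmented matrix\n                if (aug.sum(axis=(0, 1)) >= 2).any():\n                    Q[(i1,) + idx], x = 1, i1\n    return Q\n\\end{verbatim}\nHere \\texttt{blocks[(i1,) + idx]} is the $S$ submatrix $S(i_1, \\ldots , i_d)$, and summing over the first two axes of \\texttt{aug} counts, for every $1$-row, the $1$-entries of the augmented matrix lying in that $1$-row.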
We are now ready to finish the proof of Theorem \\ref{main}.\n\n\\begin{proof}[Proof of Theorem \\ref{main}]\nWe proceed by induction on $d$. The base case of $d=1$ is trivial. We then make the inductive assumption that\n\\begin{equation}\nF(n,j,k,d-1)=O(n^{d-2}) ~~ \\mbox{for some $d \\geq 2$} \\label{inductive}\n\\end{equation}\nand prove that $F(n,j,k,d) = O(n^{d-1})$.\n\nWe first use Lemma \\ref{Ine} to show that\n\\begin{equation}\nF(n,2,k,d) \\leq k(c+dk)n^{d-1} , \\label{j=2}\n\\end{equation}\nwhere $c$ is a positive constant to be determined.\n\nWe simplify inequality (\\ref{IN}) of Lemma \\ref{Ine}. Inductive assumption (\\ref{inductive}) implies that $F(n, 1+ k^{d-2},k,d-1)=O(n^{d-2})$. We also have $F(n,1,k,d)=O(n^{d-1})$, which was proven by Marcus and Tardos \\cite{MT} for $d=2$ and by Klazar and Marcus \\cite{KM} for $d > 2$. Hence, we can choose a sufficiently large constant $c$ such that the sum of the first two terms on the right hand side of (\\ref{IN}) is bounded by $c n^{d-1}$. Therefore,\n\\begin{equation}\nF(kn,2,k,d) \\le (k-1)^{d-1}F(n,2,k,d)+cn^{d-1} ~~~~~ \\mbox{for all $n$.} \\label{sn}\n\\end{equation}\n\nWe now use strong induction on $n$ to prove inequality (\\ref{j=2}). The base case of $n \\leq k$ is trivial. Assuming that (\\ref{j=2}) is true for all $n < m$, we show that (\\ref{j=2}) also holds for $n =m$.\n\nLet $N$ be the maximum integer that is less than $m$ and divisible by $k$. A $d$-dimensional $m \\times \\cdots \\times m$ zero-one matrix has at most $m^d-N^d \\le m^d-(m-k)^d \\le dkm^{d-1}$ more entries than a $d$-dimensional $N \\times \\cdots \\times N$ matrix. Thus we have $F(m,2,k,d)\\le F(N,2,k,d)+dkm^{d-1}$. This together with (\\ref{sn}) gives\n\\begin{eqnarray*}\nF(m, 2, k, d) &\\le& (k-1)^{d-1}F\\left({N \\over k}, 2,k,d\\right)+c\\left({N \\over k}\\right)^{d-1}+dkm^{d-1} \\\\\n&\\le& (k-1)^{d-1}k(c+dk) \\left({N \\over k}\\right)^{d-1} + c\\left({N \\over k}\\right)^{d-1}+dkm^{d-1} \\\\\n&\\le& (k-1)(c+dk)N^{d-1}+(c+dk)m^{d-1} \\\\\n&\\le& k(c+dk)m^{d-1} \\ ,\n\\end{eqnarray*}\nwhere we use the strong inductive assumption in the second inequality. Hence, inequality (\\ref{j=2}) holds for $n=m$. The strong induction shows that (\\ref{j=2}) is true for all positive integers $n$.\n\nHaving verified inequality (\\ref{j=2}), we complete the induction on $d$ by showing that $F(n,j,k,d)=O(n^{d-1})$; this follows from inequality (\\ref{j=2}) and Lemma \\ref{2j}.\n\nSince $F(n,j,k,d) = \\Omega(n^{d-1})$ in view of Proposition \\ref{Easy}, this together with $F(n,j,k,d)=O(n^{d-1})$ completes the proof of Theorem \\ref{main}.\n\\end{proof}\n\nWe conclude this section with a remark. In the paragraph between inequalities (\\ref{j=2}) and (\\ref{sn}), we use Klazar and Marcus' result \\cite{KM} $F(n, 1, k, d) = O(n^{d-1})$ to choose the constant $c$ in (\\ref{j=2}). In fact, Klazar and Marcus gave a more refined upper bound ${F(n,1,k, d) \\over n^{d-1}} = 2^{O(k \\log k)}$. This allows us to improve the inductive assumption (\\ref{inductive}) to ${F(n,j,k,d-1) \\over n^{d-2}} = 2^{O( k \\log k)}$ and choose $c = 2^{O(k \\log k)}$. In this way, we are able to prove ${F(n,j,k,d) \\over n^{d-1}} = 2^{O(k \\log k)}$.\n\nIn the next section, we improve Klazar and Marcus' upper bound from $2^{O(k \\log k)}$ to $2^{O(k)}$. As a consequence, $c = 2^{O(k)}$ and hence ${F(n,j,k,d) \\over n^{d-1}} = 2^{O(k)}$.
Lemma \\ref{wide} is crucial in making the extension from ${F(n,1,k,d) \\over n^{d-1}} = 2^{O(k)}$ to ${F(n,j,k,d) \\over n^{d-1}} = 2^{O(k)}$ possible.\n\n\n\n\n\n\n\n \n\n\n\n\n\n\n\n\n\n\n\\section{Limit inferior and limit superior }\n\\label{constant}\nIn this section, we consider matrices $P$ such that $f(n, P, d) = \\Theta(n^{d-1})$. This tight bound implies that $\\{{ f(n, P, d) \\over n^{d - 1}} \\}$ is a bounded sequence. We are interested in the limits of this sequence.\n\nWhen $d=2$, Pach and Tardos showed that $f(n, P, 2)$ is super-additive \\cite{PT}.\nBy Fekete's Lemma on super-additive sequences \\cite{Fekete}, the sequence $\\{ {f(n, P, 2) \\over n} \\}$ is convergent. The limit is known as the F\\\"uredi-Hajnal limit.\n\nWhen $d > 2$, it is still an open problem to prove the convergence of the sequence $\\{ {f(n, P, d) \\over n^{d -1 }} \\}$. Instead, we consider the limit inferior and limit superior of the sequence and define\n\\begin{displaymath}\n\\label{ISd}\nI(P, d) = \\liminf_{n \\rightarrow \\infty} { f(n, P, d) \\over n^{d - 1}} \\ , ~~~~~ S(P, d) = \\limsup_{n \\rightarrow \\infty} { f(n, P, d) \\over n^{d - 1}} \\ .\n\\end{displaymath}\nWe derive lower bounds on $I(P,d)$ and an upper bound on $S(P,d)$. These bounds are written in terms of the size of $P$.\n\nThe main ideas in this section are Fox's interval minor containment \\cite{Fox} and our observation that the extremal function is super-homogeneous in higher dimensions.\n\n\n\\subsection{An improved upper bound}\n\nKlazar and Marcus \\cite{KM} showed that $S(P, d) =2^{O(k \\log k)}$ \nfor $k \\times \\cdots \\times k$ permutation matrices $P$. In this subsection, we extend Fox's ideas for the $d=2$ case \\cite{Fox} to improve this upper bound to $2^{O(k)}$ for $d \\geq 2$. We then show that the new upper bound also holds for tuple permutation matrices, which is a new result even for $d=2$.\n\n\n\\begin{theo} \n\\label{IS}\nIf $P$ is a $d$-dimensional $k \\times \\cdots \\times k$ permutation matrix or a tuple permutation matrix generated by such a permutation matrix, then\n$S(P,d) = 2^{O(k)}$.\n\\end{theo}\n\nThe proof uses the notion of cross section contraction and interval minor containment \\cite{Fox}. Contracting several consecutive $\\ell$-cross sections of a $d$-dimensional matrix means that we replace these $\\ell$-cross sections by a single $\\ell$-cross section, placing a one in an entry of the new cross section if at least one of the corresponding entries in the original $\\ell$-cross sections is a $1$-entry and otherwise placing a zero in that entry of the new cross section. The contraction matrix, as defined in Section 3, of an $sn \\times \\cdots \\times sn$ matrix $A$ can be obtained by contracting every $s$ consecutive $\\ell$-cross sections of $A$ uniformly for $1 \\leq \\ell \\leq d$.\n\nWe say that $A$ contains $B$ as an interval minor if we can use repeated cross section contraction to transform $A$ into a matrix which contains $B$. Matrix $A$ avoids $B$ as an interval minor if $A$ does not contain $B$ as an interval minor. 
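In computational terms, contraction is an entrywise OR over blocks of consecutive cross sections. The following minimal sketch (ours; dense numpy arrays and a uniform block size $s$) implements the uniform contraction just described.\n\\begin{verbatim}\nimport numpy as np\n\ndef contract_axis(A, axis, s):\n    # replace each run of s consecutive cross sections along `axis`\n    # by their entrywise OR; A.shape[axis] must be divisible by s\n    shape = list(A.shape)\n    shape[axis:axis + 1] = [shape[axis] // s, s]\n    return A.reshape(shape).any(axis=axis + 1).astype(int)\n\ndef contraction_matrix(A, s):\n    # contract every s consecutive l-cross sections uniformly, l = 1,...,d\n    for axis in range(A.ndim):\n        A = contract_axis(A, axis, s)\n    return A\n\\end{verbatim}\nRepeated application of \\texttt{contract\\_axis} with varying (not necessarily uniform) runs realizes the general contractions appearing in the definition of interval minor containment.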
\n\nEquivalently, a $k_1 \\times k_2 \\times \\cdots \\times k_d$ matrix $B = (b_{i_1, i_2, \\ldots , i_d} )$ is an interval minor of a matrix $A$ if\n\\begin{itemize}\n\n\\item for each $i=1, \\ldots , d$, there are $k_i$ disjoint intervals $W_{i, 1} < \\cdots < W_{i, k_i}$, which are sets of consecutive positive integers,\n\n\\item and if $b_{i_1, \\ldots , i_d} = 1$ then the submatrix $W_{1, i_1} \\times \\cdots \\times W_{d, i_d}$ of $A$ contains a $1$-entry.\n\n\\end{itemize}\n\nThe containment in previous sections is generally stronger than containment as an interval minor. Indeed, if $A$ contains $B$, then $A$ contains $B$ as an interval minor. However, since a permutation matrix has only one $1$-entry in every cross section, containment of a permutation matrix $P$ is equivalent to containment of $P$ as an interval minor.\n\nAnalogous to $f(n, P, d)$, we define $m(n, P, d)$ to be the maximum number of $1$-entries in a $d$-dimensional $n \\times \\cdots \\times n$ zero-one matrix that avoids $P$ as an interval minor.\n\nWe observe that\n\\begin{equation}\n\\label{fm}\nf(n, P, d) \\leq m(n, R^{k, \\ldots, k}, d)\n\\end{equation}\nfor every $k \\times \\cdots \\times k$ permutation matrix $P$. This follows from the fact that containment of $R^{k, \\ldots, k}$ as an interval minor implies containment of $P$. Hence, we seek an upper bound on $m(n, R^{k, \\ldots, k}, d)$. We denote by $f_{k_1, \\ldots , k_d}(n,t,s, d)$ the maximum number of $1$-rows that have at least $s$ nonzero entries in a $d$-dimensional $t \\times n \\times\\cdots \\times n$ matrix that avoids $R^{k_1, \\ldots , k_d}$ as an interval minor.\n\n\\begin{lem}\n\\label{mblock}\nFor all positive integers $s$ and $t$,\n\\begin{equation}\nm(tn,R^{k, \\ldots, k},d) \\le s^d m(n,R^{k, \\ldots, k},d)+dn t^df_{k, \\ldots, k}(n,t,s, d) .\n\\end{equation}\n\\end{lem}\n\\begin{proof}\nLet $A$ be a $d$-dimensional $tn \\times \\cdots \\times tn$ matrix that avoids $R^{k, \\ldots , k}$ as an interval minor and has $m(tn,R^{k, \\ldots , k},d)$ $1$-entries. Partition $A$ uniformly into $S$ submatrices of size $t \\times \\cdots \\times t$. Let $C$ be the contraction matrix of $A$ as defined in Section 3.\n\nWe do casework based on whether an $S$ submatrix of $A$ has $s$ nonzero $\\ell$-cross sections for some $\\ell$.\n\nWe first count the number of $1$-entries from the $S$ submatrices which do not have $s$ nonzero $\\ell$-cross sections for any $\\ell$. The contraction matrix $C$ has at most $m(n,R^{k, \\ldots, k},d)$ $1$-entries for, otherwise, $C$ contains $R^{k, \\ldots, k}$ as an interval minor, and thus $A$ contains $R^{k, \\ldots, k}$ as an interval minor as well, a contradiction. Hence, $A$ has at most $m(n,R^{k, \\ldots, k},d)$ nonzero $S$ submatrices. In an $S$ submatrix of the present kind, the $1$-entries are confined to at most $s-1$ cross sections in each of the $d$ directions, so each such submatrix contains at most $(s-1)^d \\le s^d$ $1$-entries. The $S$ submatrices of this kind therefore contain at most $s^d m(n,R^{k, \\ldots, k},d)$ $1$-entries in total.\n\nWe next count the number of $1$-entries from the $S$ submatrices which have at least $s$ nonzero $\\ell$-cross sections for some $\\ell$. Fix $\\ell$ and one of the $n$ layers of $S$ submatrices along the $\\ell$-th direction. Contracting every $t$ consecutive cross sections of this layer in each direction other than $\\ell$ yields, after permuting coordinates, a $t \\times n \\times \\cdots \\times n$ matrix that still avoids $R^{k, \\ldots, k}$ as an interval minor, and every $S$ submatrix of the layer with at least $s$ nonzero $\\ell$-cross sections becomes a $1$-row with at least $s$ $1$-entries. Hence each layer contains at most $f_{k, \\ldots, k}(n,t,s, d)$ such $S$ submatrices. Each $S$ submatrix has at most $t^d$ entries, so summing over the $d$ choices of $\\ell$ and the $n$ layers shows that the $S$ submatrices of this kind contain at most $d n t^d f_{k, \\ldots, k}(n,t,s, d)$ $1$-entries. Adding the two counts proves Lemma \\ref{mblock}.\n\\end{proof}\n\nIt remains to bound $f_{k, \\ldots, k}(n,t,s,d)$. We use the following recursion, whose proof is a verbatim extension of Fox's two-dimensional argument \\cite{Fox} and is therefore omitted.\n\n\\begin{lem}\n\\label{f}\nFor all $\\ell \\ge 2$,\n\\begin{equation}\nf_{\\ell,k, \\ldots, k}(n,2t,s, d) \\le 2f_{\\ell,k, \\ldots, k}(n,t,s, d)+2f_{\\ell-1,k, \\ldots, k}(n,t,s/2, d) .\n\\end{equation}\n\\end{lem}\n\nResolving this recursion gives the required bound.\n\n\\begin{lem}\n\\label{closed}\nLet $\\ell \\geq 1$, and let $s$ and $t$ be powers of $2$ satisfying $2^{\\ell-1} \\le s \\le t$. Then\n\\begin{equation}\nf_{\\ell,k, \\ldots, k}(n,t,s, d) \\leq {2^{\\ell-1}t^2 \\over s} \\ m(n, R^{k, \\ldots, k}, d - 1) .\n\\end{equation}\n\\end{lem}\n\\begin{proof}\nWe induct on $\\ell$. For the base case of $\\ell = 1$, since $s \\le t \\le t^2$, it suffices to show that $f_{1,k, \\ldots, k}(n,t,s, d) \\le m(n,R^{k, \\ldots, k},d-1)$. Assume to the contrary that $f_{1,k, \\ldots, k}(n,t,s, d) > m(n,R^{k, \\ldots, k},d-1)$. Then there is a $t \\times n \\times \\cdots \\times n$ matrix $A$ which avoids $R^{1,k,\\ldots,k}$ as an interval minor and has more than $m(n,R^{k, \\ldots, k},d-1)$ $1$-rows with at least $s$ $1$-entries in each $1$-row. Let $B$ be the $1 \\times n \\times \\cdots \\times n$ matrix obtained from $A$ by contracting all the $1$-cross sections. Then $B$, which can be viewed as a $(d-1)$-dimensional matrix, has over $m(n,R^{k, \\ldots, k},d-1)$ $1$-entries and thus contains the $(d-1)$-dimensional matrix $R^{k, \\ldots, k}$ as an interval minor. Consequently, $A$ contains the $d$-dimensional $R^{1, k, \\ldots, k}$ as an interval minor, a contradiction.
Therefore, $$f_{1,k, \\ldots, k}(n,t,s, d) \\le m(n,R^{k, \\ldots, k},d-1) \\leq {2^{1-1}t^2 \\over s} \\ m(n, R^{k, \\ldots, k}, d - 1) \\ , $$\nwhich proves the base case.\n\nAssuming that for all $s$ and $t$ that are powers of $2$ satisfying $2^{\\ell-2} \\le s \\le t$ we have\n\\begin{equation}\n\\label{f_i}\nf_{\\ell-1,k, \\ldots, k}(n,t,s, d) \\leq {2^{\\ell-2}t^2 \\over s}\\ m(n, R^{k, \\ldots, k}, d - 1)\n\\end{equation}\nfor some $\\ell \\geq 2$, we need to show that\n\\begin{equation}\n\\label{f_}\nf_{\\ell ,k, \\ldots, k}(n,t,s, d) \\leq {2^{\\ell -1}t^2 \\over s} \\ m(n, R^{k, \\ldots, k}, d - 1)\n\\end{equation}\nfor all $s$ and $t$ that are powers of $2$ satisfying $2^{\\ell -1} \\le s \\le t$.\n\nWe use another induction on $t$ to show that (\\ref{f_}) is true for all $t \\geq s$ that are powers of $2$. The base case of $t=s$ is trivial. If $f_{\\ell,k, \\ldots, k}(n,t,s, d) \\leq {2^{\\ell-1}t^2 \\over s} m(n, R^{k, \\ldots, k}, d - 1)$ for some $t \\geq s$ that is a power of $2$, we prove the same inequality for $2t$. By Lemma \\ref{f}, we have\n\\begin{eqnarray*}\nf_{\\ell,k, \\ldots, k}(n,2t,s, d) &\\le& 2f_{\\ell,k, \\ldots, k}(n,t,s, d)+2f_{\\ell-1,k, \\ldots, k}(n,t,s/2, d) \\\\\n&\\le & 2{2^{\\ell-1}t^2 \\over s} m(n, R^{k, \\ldots, k}, d - 1) +2{2^{\\ell-2}t^2 \\over s/2} m(n, R^{k, \\ldots, k}, d - 1) \\\\\n&=& {2^{\\ell-1}(2t)^2 \\over s} m(n, R^{k, \\ldots, k}, d - 1) \\ ,\n\\end{eqnarray*}\nwhere we use the two inductive assumptions in the second inequality. Thus our induction on $t$ is complete and (\\ref{f_}) is proved. As a result, our induction on $\\ell$ is also complete.\n\\end{proof}\n\nWe are now ready to prove Theorem \\ref{IS}.\n\\begin{proof}[Proof of Theorem \\ref{IS}]\nWe first bound the right hand side of inequality (\\ref{fm}). We claim that\n\\begin{equation}\n\\label{mO}\n{m(n,R^{k, \\ldots, k},d) \\over n^{d -1}} =2^{O(k)} \\ .\n\\end{equation}\n\nThe base case of $d=1$ is trivial. Assuming that (\\ref{mO}) is true for $(d-1)$, we combine Lemmas \\ref{mblock} and \\ref{closed} to get\n$$m(tn,R^{k, \\ldots , k},d) \\le s^dm(n,R^{k, \\ldots, k}, d)+dt^d{2^{k-1}t^2 \\over s}2^{O(k)}n^{d-1} .$$\nChoosing $t=2^{dk}$ and $s=2^{k-1}$ yields\n$$m(2^{dk}n,R^{k, \\ldots , k},d) \\le 2^{(k-1)d}m(n,R^{k, \\ldots, k},d)+d 2^{kd(d + 2)} 2^{O(k)}n^{d-1} .$$\nIn particular, if $n$ is a positive integer power of $2^{dk}$, iterating this inequality yields\n\\begin{eqnarray*}\n\\lefteqn{m((2^{dk})^L, R^{k, \\ldots, k}, d)} \\\\\n& \\leq& 2^{(k-1)d} m((2^{dk})^{L - 1}, R^{k, \\ldots, k}, d) + d 2^{kd(d+2)} 2^{O(k)} (2^{dk})^{(L-1)(d-1)} \\\\\n&\\leq & 2^{2(k-1)d} m((2^{dk})^{L - 2}, R^{k, \\ldots, k}, d) + \\ d 2^{kd(d+2)} 2^{O(k)} \\left( 1 + {1 \\over 2^{d(dk - 2k +1)}} \\right) (2^{dk})^{(L-1)(d-1)} \\\\\n&\\leq& 2^{L(k-1)d} m(1, R^{k, \\ldots, k}, d) + \\ d 2^{kd(d+2)} 2^{O(k)} \\left( 1 + {1 \\over 2^{d(dk - 2k +1)}} + {1 \\over 2^{2d(dk - 2k +1)}} + \\cdots \\right) (2^{dk})^{(L-1)(d-1)} \\\\\n&=& 2^{O(k)} (2^{dk})^{(L-1)(d-1)} \\ .\n\\end{eqnarray*}\nHence, if $(2^{dk})^{L-1} \\leq n < (2^{dk})^L$, then\n$$m(n, R^{k, \\ldots, k}, d) \\leq m((2^{dk})^L, R^{k, \\ldots, k}, d) = 2^{O(k)} (2^{dk})^{(L-1)(d-1)} \\leq 2^{O(k)} n^{d-1} \\ .$$\nThis completes the induction on $d$, and hence (\\ref{mO}) is proved.\n\nIt follows from (\\ref{fm}) and (\\ref{mO}) that Theorem \\ref{IS} is true for every permutation matrix $P$. By the remark at the end of Section 3, this result can be extended to tuple permutation matrices.
The proof of Theorem \\ref{IS} is completed.\n\\end{proof}\n\n\\subsection{Lower bounds and super-homogeneity}\n\nWe first use Cibulka's method in \\cite{C} to show that $I(P, d) \\geq d(k-1)$ for all permutation matrices of size $k \\times \\cdots \\times k$ and extend this lower bound to tuple permutation matrices.\n\\begin{theo}\n\\label{llb}\nIf $P$ is a $d$-dimensional $k \\times \\cdots \\times k$ permutation matrix or a tuple permutation matrix generated by such a permutation matrix, then $I(P, d) \\geq d(k-1)$. Furthermore, if $P$ is the identity matrix, then $I(P, d) = S(P, d) = d(k-1)$.\n\\end{theo}\n\\begin{proof} We first show that, for all $n\\ge k-1$, we have\n\\begin{equation}\n\\label{lb}\nf(n, P, d) \\geq n^d - (n - k +1)^d\n\\end{equation}\nfor every permutation matrix $P$. Pick one nonzero entry $p_{i_1, \\ldots , i_d}=1$ of $P$. Construct a $d$-dimensional $n \\times \\cdots \\times n$ matrix $A$ with entries such that $a_{j_1, \\ldots , j_d}=0$ if $i_l \\le j_l \\le n-k+i_l$ for all $1 \\leq l \\leq d$ and $a_{j_1, \\ldots , j_d}=1$ otherwise. We first show that $A$ avoids $P$. Suppose to the contrary that $A$ contains $P$. Let the special nonzero entry $p_{i_1, \\ldots, i_d}=1$ of $P$ be represented by entry $a_{y_1, \\ldots, y_d}$ of $A$. By the construction of $A$, we must have either $y_l\\le i_l-1$ or $y_l \\ge n-k+i_l+1$ for some $1 \\le l \\le d$. If $y_l \\le i_l -1$, since $A$ contains $P$, $A$ has $i_l -1$ other nonzero entries whose $l^{\\text{th}}$ coordinates are smaller than $y_l \\leq i_l -1$ to represent $1$-entries of $P$, an impossibility. If $y_l \\geq n - k + i_l +1$, a similar argument leads to another impossibility. Counting the number of $1$-entries in $A$ proves (\\ref{lb}).\n\nWe next show that\n\\begin{equation}\n\\label{ub}\nf(n,P,d) \\le n^d-(n-k+1)^d\n\\end{equation}\nwhen $P$ is the identity matrix, i.e., $p_{i_1, \\ldots, i_d}$ is one on the main diagonal $i_1=\\cdots=i_d$ and zero otherwise. If $A$ is a matrix that avoids $P$, each diagonal of $A$, which is parallel to the main diagonal, has at most $k-1$ nonzero entries. Summing over the maximum numbers of $1$-entries in all diagonals proves (\\ref{ub}).\n\nThe second part of Theorem \\ref{llb} follows immediately from (\\ref{lb}) and (\\ref{ub}), since $n^d - (n-k+1)^d = d(k-1)n^{d-1} + O(n^{d-2})$. The first part is obvious for a permutation matrix $P$ because of (\\ref{lb}). The first part is also true for a tuple permutation matrix $P'$ since $f(n, P, d) \\leq f(n, P', d)$ if $P'$ is generated by a permutation matrix $P$.\n\\end{proof}\n\nThe lower bound given in Theorem \\ref{llb} is linear in $k$. One may ask how large a lower bound on $I(P,d)$ can be for some $P$. In the rest of this section, we extend Fox's idea for the $d=2$ case \\cite{Fox, Fox2} to show that a lower bound can be as large as an exponential function in $k$ in multiple dimensions. The crucial part in our approach is our observation that $f(n,P,d)$ is super-homogeneous.\n\n\\begin{theo}\n\\label{lower}\nFor each large $k$, there exists a family of $d$-dimensional $k\\times \\cdots \\times k$ permutation matrices $P$ such that $I(P,d)=2^{\\Omega(k^{1 / d})}$.\n\\end{theo}\nThe proof uses the super-homogeneity of extremal functions. In dimension two, the extremal function was shown to be super-additive \\cite{PT}, i.e., $f(m+ n, P, 2) \\geq f(m, P, 2) + f(n, P, 2)$. This was the key in showing the convergence of the sequence $\\{ {f(n,P, 2) \\over n} \\}$ for those matrices $P$ whose extremal functions are $\\Theta(n)$. The limit is the well-known F\\\"uredi-Hajnal limit \\cite{FH}.
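Both Theorem \\ref{llb} and super-additivity can be checked numerically on very small cases. In the sketch below (ours; exhaustive search, so only $n \\le 4$ is feasible), \\texttt{f\\_brute} computes $f(n,P,2)$, recovering the value $n^2-(n-1)^2 = 2n-1$ for the $2 \\times 2$ identity matrix and verifying one instance of $f(m,P,2)+f(n,P,2) \\le f(m+n,P,2)$.\n\\begin{verbatim}\nimport itertools\nimport numpy as np\n\ndef contains(A, P):   # brute-force pattern containment, as before\n    axes = [itertools.combinations(range(A.shape[i]), P.shape[i])\n            for i in range(A.ndim)]\n    return any(np.all(A[np.ix_(*idx)] >= P)\n               for idx in itertools.product(*axes))\n\ndef f_brute(n, P):    # f(n, P, 2) by exhaustive search over 0-1 matrices\n    best = 0\n    for bits in itertools.product((0, 1), repeat=n * n):\n        A = np.array(bits).reshape(n, n)\n        if A.sum() > best and not contains(A, P):\n            best = int(A.sum())\n    return best\n\nI2 = np.eye(2, dtype=int)                      # 2 x 2 identity pattern\nvals = {n: f_brute(n, I2) for n in (1, 2, 3, 4)}\nprint(vals)                                    # {1: 1, 2: 3, 3: 5, 4: 7}\nassert vals[2] + vals[2] <= vals[4]            # super-additivity\n\\end{verbatim}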
We note that the super-additivity of $f(n, P, 2)$ implies super-homogeneity, i.e., $f(s n, P, 2) \\geq s f(n, P, 2)$ for every positive integer $s$. In higher dimensions, we show that $f(n, P, d)$ is super-homogeneous of a higher degree.\n\nA corner entry of a $k_1 \\times \\cdots \\times k_d$ matrix $P = (p_{i_1, \\ldots, i_d})$ is defined to be an entry $p_{i_1, \\ldots, i_d}$ located at a corner of $P$, i.e., where $i_{\\tau}=1$ or $k_{\\tau}$ for $1 \\leq \\tau \\leq d$.\n\n\\begin{lem}\n\\label{homo}\nIf $P$ is a $d$-dimensional matrix with a corner $1$-entry, then $f(sn,P,d) \\ge {s^{d-1} \\over (d-1)!}f(n,P,d)$.\n\\end{lem}\n\\begin{proof}\nWithout loss of generality, we assume that $p_{1, \\ldots, 1}=1$ is the corner $1$-entry in $P$. Let $M$ be an $s \\times \\cdots\\times s$ matrix with $1$-entries at the coordinates $(i_1, \\ldots, i_d)$ where $i_1+ \\cdots+i_d=s+d-1$ and $0$-entries everywhere else, so $M$ has ${s+d-2 \\choose d-1} \\ge {s^{d-1} \\over (d-1)!}$ $1$-entries. Let $N$ be an $n \\times \\cdots \\times n$ matrix that avoids $P$ and has $f(n,P,d)$ $1$-entries. It then suffices to prove that $M \\otimes N$ avoids $P$.\n\nAssume for contradiction that the Kronecker product $M \\otimes N$ contains $P$. Pick an arbitrary $1$-entry $p^*$ in $P$ other than $p_{1, \\ldots, 1}$. Suppose that $p_{1, \\ldots, 1}$ and $p^*$ are represented by $e_1$ and $e_2$ in $M \\otimes N$, respectively. We consider the $n \\times \\cdots \\times n$ $S$ submatrices of $M \\otimes N$. We may assume that $e_1$ and $e_2$ are in the $S$ submatrices $S(i_1, \\ldots, i_d)$ and $S(j_1, \\ldots, j_d)$, respectively. Note that $i_1+ \\cdots +i_d = j_1+ \\cdots + j_d$. Since the coordinates of $p^*$ are at least as large as those of $p_{1, \\ldots, 1}$ in every direction, the coordinates of $e_2$ in $M \\otimes N$ are at least as large as those of $e_1$, and hence $i_{\\tau} \\leq j_{\\tau}$ for $\\tau = 1, 2, \\ldots, d$. It then follows from $i_1+ \\cdots +i_d = j_1+ \\cdots + j_d$ that $i_{\\tau} = j_{\\tau}$ for $\\tau= 1, 2, \\ldots, d$, i.e., the two entries $e_1$ and $e_2$ must be in the same $S$ submatrix in $M \\otimes N$. Since $p^*$ is an arbitrary $1$-entry other than $p_{1, \\ldots , 1}$ in $P$, the $S$ submatrix contains $P$. But this is a contradiction since each nonzero $S$ submatrix in $M \\otimes N$ is an exact copy of $N$, which avoids $P$. Thus $M \\otimes N$ avoids $P$.\n\\end{proof}\n\nJust as super-additivity leads to the F\\\"uredi-Hajnal limit in dimension two, super-homogeneity also produces an interesting result on limits.\n\\begin{lem}\n\\label{converge}\nIf $P$ is a $d$-dimensional matrix which contains a corner $1$-entry, then for any positive integer $m$,\n$$I(P,d) \\ge {1 \\over (d-1)!} \\ {f(m,P,d) \\over m^{d-1}}.$$\n\\end{lem}\n\\begin{proof}\nFor each fixed positive integer $m$, we write $n$ as $n=sm+r$, where $0 \\le r < m$. By Lemma \\ref{homo} and the monotonicity of $f(n,P,d)$ in $n$,\n$$f(n,P,d) \\ge f(sm,P,d) \\ge {s^{d-1} \\over (d-1)!} f(m,P,d) \\ ,$$\nso that\n$${f(n,P,d) \\over n^{d-1}} \\ge {1 \\over (d-1)!} \\left({sm \\over sm+r}\\right)^{d-1} {f(m,P,d) \\over m^{d-1}} \\ .$$\nSince ${sm \\over sm+r} \\rightarrow 1$ as $n \\rightarrow \\infty$, taking the limit inferior of both sides proves Lemma \\ref{converge}.\n\\end{proof}\n\nThe second ingredient in the proof of Theorem \\ref{lower} is a probabilistic construction, which extends Fox's construction for the $d=2$ case \\cite{Fox}.\n\\begin{lem}\n\\label{exists}\nFor every sufficiently large integer $\\ell$, there exists a positive integer $N = 2^{\\Theta(\\ell)}$ such that $m(N, R^{\\ell, \\ldots , \\ell}, d) = \\Omega(N^{d - 1/2})$.\n\\end{lem}\n\\begin{proof}\nLet $q = 20^{d-1}\\ell^{1-d}$, and let $N$ be the largest integer satisfying $(2N)^d \\le 2^{20^{d-1}\\ell/2}(2^{3\\ell-1}e^{\\ell})^{-d}$; then $N = 2^{\\Theta(\\ell)}$ and, for $\\ell$ large, $q \\ge N^{-1/2}$. Let $A$ be a random $d$-dimensional $N \\times \\cdots \\times N$ zero-one matrix whose entries are $1$ independently with probability $q$. Let $X$ be the event that $A$ contains $R^{\\ell, \\ldots , \\ell}$ as an interval minor, and let $Y$ be the complementary event. As in \\cite{Fox}, for $x_1, \\ldots , x_d \\ge \\ell$ we denote by $r(x_1, \\ldots , x_d)$ an upper bound on the probability that $A$ contains $R^{\\ell, \\ldots , \\ell}$ as an interval minor via a minimal witness whose intervals span a total of $x_i$ indices in the $i$th direction, so that $\\mathbb{P}(X) \\le \\sum_{x_1, \\ldots , x_d \\ge \\ell} r(x_1, \\ldots , x_d)$. A direct calculation as in \\cite{Fox} shows that $r(x_1+1, x_2, \\ldots , x_d) \\le r(x_1, x_2, \\ldots , x_d)/2$, where we use $(1-q)^{\\ell^{d-1}} \\le e^{-q\\ell^{d-1}} = e^{-20^{d-1}} > 0$ and $(1 + 1/x_1)^{x_1} \\leq e$ for $x_1 \\geq 1$.
We also note that $$r(\\ell, \\ldots , \\ell) = (1-q)^{\\ell^d}\\left[{2\\ell-1\\choose \\ell-1}2^{\\ell}{\\ell^{\\ell} \\over \\ell!} \\right]^d \\le e^{-q\\ell^d}(2^{2\\ell-1}2^{\\ell}e^{\\ell})^d \\le 2^{-20^{d-1}\\ell/2 } (2^{3\\ell - 1}e^{\\ell})^d \\le (2N)^{-d}, $$\nwhere we use Stirling's inequality, ${2\\ell-1 \\choose \\ell-1}\\le 2^{2\\ell-1}$, $q\\ell^d = 20^{d-1}\\ell$, and the choice of $N$. We now use the symmetry of $r(x_1, \\ldots, x_d)$ to obtain\n$$ \\mathbb{P}(X) \\le \\sum_{x_1, \\ldots, x_d\\ge \\ell} r(x_1,\\ldots, x_d) \\le \\left(\\sum_{i=0}^{\\infty}(1/2)^i \\right)^d r(\\ell,\\ldots,\\ell) \\le 2^d (2N)^{-d} = N^{-d}. $$\n\nWe now estimate the conditional expectation $\\mathbb{E}( \\xi | Y)$, where $\\xi = |A|$. Since $\\mathbb{E}(\\xi) = qN^d \\ge N^{d-1/2}$ and $\\mathbb{E}(\\xi | X) \\mathbb{P}(X) \\le N^d N^{-d} = 1$, we have $\\mathbb{E}(\\xi | Y)\\mathbb{P}(Y)=\\mathbb{E}(\\xi)-\\mathbb{E}(\\xi | X) \\mathbb{P}(X) \\ge N^{d-1/2} - 1$, so $\\mathbb{E}(\\xi | Y)=\\Omega(N^{d-1/2})$. Thus, there exists an $A$ that avoids $R^{\\ell, \\ldots, \\ell}$ as an interval minor and has $\\Omega(N^{d-1/2})$ $1$-entries. This proves Lemma \\ref{exists}.\n\\end{proof}\n\nWe are now ready to prove Theorem \\ref{lower}.\n\\begin{proof} [Proof of Theorem \\ref{lower}]\nLet $\\ell=\\lfloor k^{1/d} \\rfloor$ be the largest integer less than or equal to $k^{1/d}$. There is a family of $d$-dimensional permutation matrices of size $\\ell^d \\times \\cdots \\times \\ell^d$ that contain $R^{\\ell, \\ldots , \\ell}$ as an interval minor and have at least one corner $1$-entry. To see this, many such permutation matrices can be constructed so that they have exactly one $1$-entry in each of their $S$ submatrices of size $\\ell^{d-1} \\times \\cdots \\times \\ell^{d-1}$, including a corner $1$-entry.\n\nSince $k \\geq \\ell^d$, there is a family of permutation matrices $P$ of size $k \\times \\cdots \\times k$ that contain $R^{\\ell, \\ldots , \\ell}$ as an interval minor and have at least one corner $1$-entry. Each $P$ has a corner $1$-entry, so we can apply Lemma \\ref{converge} to obtain\n\\begin{equation}\n\\label{key}\nI(P,d) \\ge {1 \\over (d-1)!} \\ {f(N,P,d) \\over N^{d-1}} ,\n\\end{equation}\nwhere $N$ can be chosen to be the positive integer given in Lemma \\ref{exists}.\n\nMatrix $P$ contains $R^{\\ell,\\ldots,\\ell}$ as an interval minor, so $f(N,P,d)\\ge m(N,R^{\\ell, \\ldots, \\ell},d)$, which along with (\\ref{key}) and Lemma \\ref{exists} yields\n$$I(P,d) \\ge {1 \\over (d-1)!} \\ {m(N, R^{\\ell, \\ldots, \\ell}, d) \\over N^{d - 1}} = \\Omega(N^{1/2}) = 2^{\\Omega(\\ell)} = 2^{\\Omega(k^{1 / d})}.$$\nThis completes the proof of Theorem \\ref{lower}.\n\\end{proof}\n\n\\section{Conclusions and future directions}\n\\label{conclusion}\nWe obtained non-trivial lower and upper bounds on $f(n,P,d)$ for large $n$ for block permutation matrices $P$. In particular, we established the tight bound $\\Theta(n^{d-1})$ on $f(n,P,d)$ for every $d$-dimensional tuple permutation matrix $P$. We improved the previous upper bound on the limit superior of the sequence $\\{ {f(n,P,d) \\over n^{d-1}}\\}$ for all permutation and tuple permutation matrices. We used the super-homogeneity of the extremal function to show that the limit inferior is exponential in $k$ for a family of $k \\times \\cdots \\times k$ permutation matrices. Our results substantially advance the extremal theory of matrices.
We believe that super-homogeneity is fundamental to pattern avoidance in multidimensional matrices.\n\nOne possible direction for future research would be to strengthen the super-homogeneity as expressed in Lemma \\ref{homo} to $f(sn, P, d) \\geq s^{d-1} f(n, P, d)$. We have successfully tested this super-homogeneity on the identity matrix and the matrices whose $1$-entries are on rectilinear paths. If this super-homogeneity is true for permutation matrices $P$, we can then use a Fekete-like lemma to show the convergence of the sequence $\\{ {f(n,P,d) \\over n^{d-1}} \\}$.\n\nAnother possible direction would be to extend Theorem \\ref{lower} from a family of permutation matrices to almost all permutation matrices. We think this becomes possible if the corner $1$-entry condition is removed in Lemmas \\ref{homo} and \\ref{converge}.\n\n\\section{Acknowledgments}\n\nWe would like to thank Professor Jacob Fox for valuable discussions about this research. The first author was supported by the NSF graduate fellowship under grant number 1122374. The research of the second author was supported in part by the Department of Mathematics, MIT through PRIMES-USA 2014. It was also supported in part by the Center for Excellence in Education and the Department of Defense through RSI 2014.\n\n\\bibliographystyle{amsplain}