diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzjqmz" "b/data_all_eng_slimpj/shuffled/split2/finalzzjqmz" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzjqmz" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\\begin{note}\n\t\\begin{itemize}\n\t\t\\item Let $\\Z$ and $\\P$ be the set of integers and prime numbers, respectively.\n\t\t\\item Let $\\N$ be $\\{1,2,3,\\cdots\\}$, and its elements are called natural numbers.\n\t\t\\item For real $x$, $\\floor{x}$ means to the unique integer such that $\\floor{x}\\leq x<\\floor{x}+1$; called the integer part of $x$.\n\t\t\\item Unless stated otherwise, the greatest common divisor of two integers $m$ and $n$ is denoted by $(m,n)$.\n\t\\end{itemize}\n\\end{note}\n\nIt is well known that arithmetical progressions are related to prime numbers. For example, the Dirichlet prime number theorem in $1837$ \\cite{Dirichlet} and the Green-Tao theorem in $2004$ \\cite{Green-Tao}. However, the following conjecture of Dickson has not been well studied.\n\n\\begin{cjt}[Dickson \\cite{Dickson}]\\label{cjt:Dickson}\n\tLet $k$ be a natural number. Suppose that $a_i$ and $b_i$ are integers with $b_i\\geq1$ and $(a_i,b_i)=1$ for all $i=1,\\ldots,k$. Then, there are infinitely many integers $n$ such that $a_1+b_1n,\\cdots,a_k+b_kn$ are simultaneously primes.\n\\end{cjt}\n\nIf $k=1$, then this is nothing but the Dirichlet prime number theorem. However, all the other cases are unsolved. In $1962$, Bateman and Horn generalized this conjecture to irreducible polynomials.\n\n\\begin{cjt}[Bateman-Horn \\cite{Bateman-Horn}]\\label{cjt:Bateman-Horn}\n\tSuppose that $f_i(x)$ is an irreducible polynomial whose leading coefficient is a positive integer. Let $N$ be a natural number, and let $d_i$ be the degree of $f_i(x)$. Then, the number of $03$, and we get $l_1(p)2$, then $(n+1)2^{k-1}-1\\pmod p$ has period $\\textup{ord}\\pas{p;2}$ with respect to $k$. Letting $v(p)=\\min\\{k,\\textup{ord}\\pas{p;2}\\}$, this implies that $w(p)$ is equal to the number $w^\\prime(p)$ of solutions of\n\\[\nn\\pas{2n+1}\\cdots \\pas{(n+1)2^{v(p)-1}-1}\\equiv0\\pmod p.\n\\]\nWe observe that each of $n,2n+1,\\cdots,(n+1)2^{v(p)-1}-1$ has one solution in $\\{0,\\cdots,p-1\\}$. That is, $w^\\prime(p)\\geq v(p)$. Assume that there exist $1\\leq a\\leq b\\leq v(p)$ and $n\\in\\{0,\\cdots,p-1\\}$ satisfy\n\\[\n\\pas{n+1}2^{a-1}\\equiv1\\pmod p \\quad\\textup{ and } \\quad \\pas{n+1}2^{b-1}\\equiv1\\pmod p.\n\\]\nThen, since $n+1$ and $p$ are relatively prime, $2^{a-1}\\equiv2^{b-1}\\pmod p$ and it implies $a=b$. We have $w(p)=w^\\prime(p)=v(p)$. The same argument can be applied to the second Cunningham chains. From this argument and Conjecture.\\ref{cjt:Bateman-Horn}, we can expect the following.\n\n\\begin{cjt}[Caldwell \\cite{Caldwell}]\\label{cjt:Caldwell}\n\tFix a natural number $k$. Then\n\t\\[\n\t\\sum_{\\substack{p\\leq N\\\\ l(p)\\geq k}}1\\sim B_k\\int_2^N\\frac{dx}{\\pas{\\log x}\\pas{\\log 2x}\\cdots\\pas{\\log2^{k-1}x}}\\sim B_k\\frac{N}{\\log^kN}\n\t\\]\n\twhere the partial sum on the left hand side runs through all prime numbers $p\\leq N$ satisfying $l(p)\\geq k$. Here,\n\t\\[\n\tB_k:=\\prod_{p\\in\\P}\\frac{1-w(p)\/p}{\\pas{1-p^{-1}}^k}=2^{k-1}\\prod_{p>2}\\frac{1-\\min\\Pas{k,\\textup{ord}\\pas{p;2}}\/p}{\\pas{1-p^{-1}}^k}.\n\t\\]\n\\end{cjt}\n\nIf we examine the size of $B_k$, we can get some upper bound of $l(p)$.\n\n\\begin{prop}\\label{prop:order of Bk}\n\tLet $\\gamma$ be the Euler constant $\\simeq0.57722$. 
Then for $k\\geq2$,\n\t\\[\n\t\\log\\log k+\\gamma-2+O\\pas{\\frac{1}{\\log k}}\\leq \\frac{\\log B_k}{k}\\leq \\log k+\\gamma+\\log\\log2-1+O\\pas{\\frac{\\log k}{k}}.\n\t\\]\n\\end{prop}\n\\begin{proof}\n\tThe following facts in \\cite{Rosser-Schoenfeld} will be used in the proof without mentioning explicitly:\n\t\\begin{itemize}\n\t\t\\item[$(a)$] $\\displaystyle \\log\\log x+B-\\frac{1}{2\\log^2x}<\\sum_{p\\leq x}\\frac{1}{p}<\\log\\log x +B+\\frac{1}{\\log^2x},$\n\t\t\\item[$(b)$] $\\displaystyle \\prod_{p\\leq x}\\pas{1-\\frac{1}{p}}=O\\pas{\\frac{1}{\\log x}},$\n\t\t\\item[$(c)$] $\\displaystyle \\prod_{p\\leq x}\\pas{1-\\frac{1}{p}}^{-1}=e^\\gamma\\log x+O\\pas{\\frac{1}{\\log x}},$\n\t\t\\item[$(d)$] $\\displaystyle \\sum_{p\\leq x}\\log p<\\pas{1+\\frac{1}{2\\log x}}x,$\n\t\\end{itemize}\n\twhere $x>2$ and $B=\\gamma-\\sum_{p\\in\\P}\\sum_{k=2}^\\infty k^{-1}p^{-k}$. For $k\\geq2$ and a sufficiently large real number $x$, we define\n\t\\[\n\tB_k(x) := 2^{k-1}\\prod_{2\\frac{k-1}{k}\\log2++\\log\\frac{e^\\gamma}{2}+\\log\\log x+O\\pas{\\frac{1}{\\log^2x}}+\\frac{\\log2}{k}-1-\\frac{1}{2\\log k}\\\\\n\t\t&\\quad-\\log\\pas{\\frac{\\log x}{\\log k}}-\\frac{1}{\\log^2x}-\\frac{1}{2\\log^2k}-1+O\\pas{\\frac{\\log k}{k}}\\\\\n\t\t&=\\log\\log k+\\gamma-2-\\frac{1}{2\\log k}-\\frac{1}{2\\log^2k}+O\\pas{\\frac{1}{\\log^2x}}+O\\pas{\\frac{\\log k}{k}}\n\t\\end{align*}\n\tand we have\n\t\\begin{align*}\n\t\t\\frac{1}{k}\\log B_k\\geq \\log\\log k+\\gamma-2-\\frac{1}{2\\log k}-\\frac{1}{2\\log^2k}+O\\pas{\\frac{\\log k}{k}}\n\t\\end{align*}\n\tas $x\\to\\infty$. We next consider the upper bound of $B_k(x)$ in $k$. Write\n\t\\begin{align*}\n\t\tB_k(x)\n\t\t&=2^{k-1}\\pas{\\prod_{22^k$. If $p>2^{j-1}$ for some $j\\in\\N$, then $\\textup{ord}\\pas{p;2}\\geq j$. Thus, putting $y=\\log k\/\\log2$, we obtain\n\t\\begin{align*}\n\t\t\\log\\Pi_3\n\t\t&\\leq\\log\\prod_{y0$ which satisfies $F(N_n)\\leq1-C\/k(N_n)$. Thus\n\\begin{align*}\n\t\\limsup_{n\\to\\infty}B_{k(N_n)}\\frac{N_n}{\\pas{\\log N_n}^{k(N_n)}}\n\t&\\leq \\lim_{n\\to\\infty}\\pas{1-\\frac{C}{k(N_n)}}^{k(N_n)}\\\\\n\t&=\\begin{cases}\n\t\te^{-C}\t\t&\\text{if}\t\\quad\t\\lim_{n\\to\\infty}k(N_n)=+\\infty,\\\\\n\t\t\\max_{n\\in\\N}\\pas{1-\\frac{C}{k(N_n)}}^{k(N_n)}\t\t&\\text{if}\t\\quad\t\\lim_{n\\to\\infty}k(N_n)<+\\infty\n\t\\end{cases}\\\\\n\t&<1.\n\\end{align*}\nHowever, unless the order with respect to $k$ of the terms that vanish by approximation of Conjecture.\\ref{cjt:Caldwell} is small, this result contradicts Conjecture.\\ref{cjt:Caldwell} and the maximality of $k(N)$. Therefore we may expect that $\\lim_{N\\to\\infty}(1-F(N))k(N)=0$. In particular, since $\\lim_{N\\to\\infty}F(N)=1$, we have $F(N)<2$ for sufficiently large $N$. Thus\n\\[\n\\frac{\\log N}{k(N)}+\\log f(k(N))-\\log\\log N<\\log2.\n\\]\nFrom Proposition.\\ref{prop:order of Bk} and $l(2)\\geq3$, we have\n\\[\n\\log f(k(N))>\\log\\log k(N)-2>\\log\\log3-2.\n\\]\nTherefore, we obtain\n\\[\nk(N)\\geq\\pas{1+o(1)}\\frac{\\log N}{\\log\\log N}.\n\\]\nFrom this, we may conjecture the following:\n\n\\begin{cjt}\\label{cjt:omega order of Cunningham chains}\n\t\\[\n\tl(p)=\\Omega\\pas{\\frac{\\log p}{\\log\\log p}}\\quad \\textup{ on }\\P.\n\t\\]\n\\end{cjt}\n\nIn \\cite{Augustin} it is reported that, $p_1:=2759832934171386593519$ is the first term of the longest first Cunningham chain in the data up to $2020$. Its length is $17$. And $p_2=42008163485623434922152331$ is the first term of the longest second whose length is $19$. 
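Such record chains can, in principle, be checked directly by repeated primality testing. The following sketch is purely illustrative (the helper name is hypothetical, and \\texttt{sympy}'s \\texttt{isprime} is assumed to be available; its probabilistic test suffices for a quick check at this size, though not for a proof of primality):

\\begin{verbatim}
from sympy import isprime

def chain_length(p, kind=+1):
    # Length of the Cunningham chain of the first (+1) or second (-1)
    # kind starting at p, i.e. how many of p, 2p+kind, 2(2p+kind)+kind, ...
    # are prime in succession.
    length, q = 0, p
    while isprime(q):
        length += 1
        q = 2 * q + kind
    return length

p1 = 2759832934171386593519
p2 = 42008163485623434922152331
print(chain_length(p1, +1))   # should reproduce the reported length 17
print(chain_length(p2, -1))   # should reproduce the reported length 19
\\end{verbatim}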
Thus\n\\begin{align*}\n\tl_1(p_1)=17, \\;\\frac{\\log p_1}{\\log\\log p_1}\\simeq12.661,\\\\\n\tl_{-1}(p_2)=19, \\;\\frac{\\log p_2}{\\log\\log p_2}\\simeq14.470.\n\\end{align*}\nIn this way, a better estimation of $B_k$ implies a better bound of the length of Cunningham chains. However, it is still unknown whether $\\limsup_{p\\to\\infty}l(p)\/p=0$ or not.\n\nIn this paper, by using a generalized Fibonacci sequence, we get $l(p)\\ll\\log p$ under a certain condition $(Theorem.\\ref{thm:related with CC})$. It seems that this sufficient condition is plausible by numerical test. The condition can be extended from prime numbers to natural numbers $(Corollary.\\ref{crl:related with CC})$. This implies that the problem of upper estimation of $l(p)$ is reduced to that on natural numbers. One of the benefit of this is that we can use methods of number theory to solve it.\n\n\\textbf{Acknowledgement.} I wish to express my gratitude to Professor Kohji Matsumoto and Dr. Sh\\={o}ta Inoue for their advice towards the present research. I am also thankful to Mr. Yusei Ishida for assisting in writing computer programs. And I thank Dr. Kenta Endo, Dr. Kota Saito, Mr. Yuichiro Toma, Mr. Hideki Hunakura and Mr. Tomohiro Fukada for many useful discussions.\n\n\\section{$\\mathcal{F}_\\alpha$ numbers}\n\\begin{dfn}\n\tLet $\\alpha\\geq1$ be a natural number. A sequence $\\mathcal{F}_\\alpha$:\n\t\\[\n\tF_0=0, \\;F_1=1, \\;F_{n+1}=\\alpha F_n+F_{n-1}\\quad \\pas{n\\in\\Z}\n\t\\]\n\tis called the generalized Fibonacci sequence, and its elements are called $\\mathcal{F}_\\alpha$ numbers.\n\tFor example, enumerating $F_{-5}$ through $F_5$ we have\n\t\\[\n\t\\cdots,\\alpha^4+3\\alpha^2+1,-\\alpha^3-2\\alpha,\\alpha^2+1,-\\alpha,1,0,1,\\alpha,\\alpha^2+1,\\alpha^3+2\\alpha,\\alpha^4+3\\alpha^2+1,\\cdots.\n\t\\]\n\\end{dfn}\n\nThe following facts on $\\mathcal{F}_\\alpha$ numbers are well known. For example, in \\cite{Koshy}, T. Koshy showed the case $\\alpha=1$ of $(a)$ in Section 5.2 pp.88-90, $(b)$ in Section 5 p.82, $(c)$ in Section 5.6 p.103, $(d)$ in Section 20.1 p.397, $(e)$ in Section 10.1 pp.171-173, and $(f)$ in Section 10.1 pp.173-174. We can similarly show the case $\\alpha>1$.\n\n\\begin{fact}\\label{fact:fact of Fa numbers}\n\tLet $m,n$ be integers, and put $\\varphi_\\alpha=(\\alpha+\\sqrt{\\alpha^2+4})\/2$. We have:\n\t\\begin{itemize}\n\t\t\\item[$(a)$] $\\displaystyle F_n=\\frac{1}{\\sqrt{\\alpha^2+4}}\\pas{\\varphi_\\alpha^n-\\pas{-\\varphi_\\alpha}^{-n}},$\n\t\t\\item[$(b)$] $\\displaystyle \\sum_{i=1}^{n}F_i=\\pas{1+\\frac{1}{\\alpha}}F_n+\\frac{F_{n-1}-1}{\\alpha}<\\pas{1+\\frac{2}{\\alpha}}F_n\\quad \\pas{n\\geq1},$\n\t\t\\item[$(c)$] $\\displaystyle F_{-n}=(-1)^{n+1}F_n,$\n\t\t\\item[$(d)$] $\\displaystyle F_{m+n}=F_{m+1}F_n+F_mF_{n-1},$\n\t\t\\item[$(e)$] $\\displaystyle m\\mid n \\iff F_m\\mid F_n,$\n\t\t\\item[$(f)$] $\\displaystyle (F_m,F_n)=F_{(m,n)}.$\n\t\\end{itemize}\n\\end{fact}\n\nIf $\\alpha=1$, then $m$ should not be $2$. Indeed, $F_2=1, F_3=2$ thus $F_2\\mid F_3$, but $2\\nmid3$. We next define the divisor function on $\\mathcal{F}_\\alpha$.\n\n\\begin{dfn}\n\tIn this article, a natural number $d$ is called a $\\mathcal{F}_\\alpha$ divisor of $n$ if $d\\in\\mathcal{F}_\\alpha$ and $d\\mid n$. A map ${}_{\\Falpha}\\sigma:\\N\\to \\C$ is defined by\n\t\\[\n\t\\FDF{n}:= \\sum_{\\substack{d\\mid n \\\\ 01$.\n\\end{dfn}\n\nIn Section.$3$, we will investigate the relationship between the iteration of ${}_{\\Falpha}\\sigma$ and Cunningham chains.\n\n\\begin{ex}\n\tSuppose $\\alpha=3$. 
Since\n\t\\[\n\t\\mathcal{F}_\\alpha: \\cdots, 0, 1, 3, 10, 33, 109, 360, \\cdots,\n\t\\]\n\twe get\n\t\\begin{align*}\n\t\t\\FDF{2}&=1, \\;\\FDF{3}=4, \\;\\FDF{4}=1, \\\\\n\t\t\\FDFk{3}{109}&=\\FDFk{2}{110}=\\FDF{11}=1.\n\t\\end{align*}\n\\end{ex}\n\nWe consider the Dirichlet series associated with ${}_{\\Falpha}\\sigma$. Put $\\zeta_\\alpha(s)=\\sum_{00$. Suppose $f(n)$ is $n$ if $n\\in\\mathcal{F}_\\alpha$, and is $0$ otherwise. Then\n\\begin{align}\\label{eq:Dirichlet product}\n\t\\zeta(s)\\zeta_\\alpha(s-1)=\\pas{\\sum_{m=1}^\\infty\\frac{1}{m^s}}\\pas{\\sum_{01$. As we can see from this expression, the research of ${}_{\\Falpha}\\sigma$ will be useful for the study of $\\zeta_\\alpha$. In particular, $\\zeta_1$ is called the Fibonacci zeta function by Egami\\cite{Egami} and Navas\\cite{Navas}, and it is a meromorphic function on $\\C$. It is the famous unsolved problem whether $\\zeta_1(1)$ is transcendental or not.\n\nHereafter, we suppose that $\\alpha\\geq3$ unless explicitly stated otherwise. Let $\\ind{n}$ be the index of the maximal $\\mathcal{F}_\\alpha$ number $\\leq n$ for $n\\in\\N$, that is, if we take $k$ satisfying $F_k\\leq n1$. Let $i_0$ be the maximal $i$ with $F_i\\mid n$. Then $i_0>1$ and $n$ has at least two $\\mathcal{F}_\\alpha$ divisors $F_{i_0}$ and $F_1$. Thus we estimate that\n\t\\begin{align*}\n\t\tF_{i_0}2$. We have\n\t\\[\n\t\\pas{F_p,\\frac{F_{2m}}{F_m}}\\leq\\pas{F_p,F_{2m}}=F_{\\pas{p,2m}}=F_1=1\n\t\\]\n\tthat is $(F_p, F_{2m}\/F_m)=1$. Here, let $D_\\alpha(n)$ be the set of all $\\mathcal{F}_\\alpha$ divisors of $n$. We find that\n\t\\[\n\tD_\\alpha(F_p)\\cap D_\\alpha\\pas{\\frac{F_{2m}}{F_m}}=\\Pas{1}.\n\t\\]\n\tAnd in general,\n\t\\[\n\tD_\\alpha\\pas{F_p\\frac{F_{2m}}{F_m}}\\supset D_\\alpha(F_p)\\cup D_\\alpha\\pas{\\frac{F_{2m}}{F_m}}.\n\t\\]\n\tWe will show the inclusion relation of the reverse direction. Take an arbitrary $F_s$ in $D_\\alpha(F_p\\cdot F_{2m}\/F_m)$. Then there exists $a$ and $b$ satisfying $a\\mid F_p, b\\mid F_{2m}\/F_m$ and $ab=F_s$. Since $(b,F_p)=1$,\n\t\\[\n\ta=\\pas{a,F_p}=\\pas{a,F_p}\\pas{b,F_p}=\\pas{ab,aF_p,bF_p,F_p^2}=\\pas{ab,\\pas{a,b,F_p}F_p}=\\pas{ab,F_p}=F_{\\pas{s,p}}=1\\textup{ or }F_p.\n\t\\]\n\tIn the case $a=1$, $F_s=b\\in D_\\alpha(F_{2m}\/F_m)$ holds. Thus we have $ab\\in\\FDF{F_{2m}\/F_m}$. Suppose that $a=F_p$. Then $k:=s\/p$ is a natural number from Fact.\\ref{fact:fact of Fa numbers} $(e)$, and $F_{kp}\/F_p\\mid F_{2m}\/F_m$ holds. If $mF_{2m}\/F_m$ for $k\\geq2$. We consider the case $p3$, and we get $k=1$. In other words, $b=1$ in the case $a=F_p$, and then $ab\\in D_\\alpha(F_p)$. From these result, we have\n\t\\begin{align*}\n\t\tD_\\alpha(F_p)\\cap D_\\alpha\\pas{\\frac{F_{2m}}{F_m}}=\\Pas{1} \\text{ and } D_\\alpha\\pas{F_p\\frac{F_{2m}}{F_m}}=D_\\alpha(F_p)\\cup D_\\alpha\\pas{\\frac{F_{2m}}{F_m}},\n\t\\end{align*}\n\tand hence\n\t\\begin{align*}\n\t\t\\FDF{F_{m+p}+F_{m-p}}\n\t\t=\\sum_{d\\in D_\\alpha\\pas{F_p\\frac{F_{2m}}{F_m}}}d\n\t\t=\\pas{\\sum_{d\\in D_\\alpha\\pas{F_p}}+\\sum_{d\\in D_\\alpha\\pas{\\frac{F_{2m}}{F_m}}}}d-1\n\t\t=\\FDF{F_p}+\\FDF{\\frac{F_{2m}}{F_m}}-1.\n\t\\end{align*}\n\tApply Lemma.\\ref{lem:divisors of F(m+1)+F(m-1)} to complete the proof.\n\\end{proof}\n\nFrom this theorem, we find the following corollary. 
That is a relationship between the iteration of ${}_{\\Falpha}\\sigma$ and Cunningham chains.\n\n\\begin{crl}\\label{crl:divisors of F(2p+1)+1}\n\tFor every odd prime $p$,\n\t\\[\n\t\\FDF{F_{2p\\pm1}+1}=F_p+1.\n\t\\]\n\tIn particular, if $2p\\pm1$ is also prime, then\n\t\\begin{align}\\label{crl:divisors of F(2p+1)+1-1}\n\t\t\\FDFk{2}{F_{2p\\pm1}}=\\FDF{F_p}.\n\t\\end{align}\n\tFurther, by iterating this argument, we obtain\n\t\\[\n\tl_{\\pm1}(p)-1=\\ord{F_{(p\\pm1)2^{l_{\\pm1}(p)-1}\\mp1}}-\\ord{F_p}.\n\t\\]\n\\end{crl}\n\\begin{proof}\n\tIt follows $m=p\\pm1$ in Theorem.\\ref{thm:divisors of F(m+1)+F(m-1)}. Further\n\t\\[\n\t\\ord{F_{(p\\pm1)2^{l_{\\pm1}(p)-1}\\mp1}}=1+\\ord{F_{(p\\pm1)2^{l_{\\pm1}(p)-2}\\mp1}}=\\cdots=l_{\\pm1}(p)-1+\\ord{F_p}.\n\t\\]\n\\end{proof}\n\nIn Section.$4$, we will show the converse of $(\\ref{crl:divisors of F(2p+1)+1-1})$.\n\n\n\\section{The upper bound of $\\ord{n}$}\nIn this section, our aim is to prove Theorem.\\ref{thm:converse of cor.3.4} and Theorem.\\ref{thm:related with CC}. Those are important results that suggest the relationship between ${}_{\\Falpha}\\sigma$ and Cunningham chains. In the process of the proof, we will use the following theorem which is called the generalized Zeckendorf`s theorem by Hoggatt \\cite{Hoggatt} and Keller \\cite{Keller}.\n\n\\begin{dfn}[Zeckendorf-Hoggatt-Keller]\n\tEvery natural number $n$ has the unique representation:\n\t\\[\n\tn=\\sum_{i=1}^ra_iF_{c_i}\n\t\\]\n\twhere $r$ is a natural number and sequences $\\{a_i\\}_{i=1}^r, \\{c_i\\}_{i=1}^r\\subset\\N$ satisfy the following conditions.\n\t\\begin{enumerate}\n\t\t\\item[(i)] $\\displaystyle 0i\\geq0$,\n\t\\begin{align}\n\t\tF_{k+i}&\\equiv \\pas{-1}^{i+1}F_{k-i} \\pmod{F_k}, \\label{lem:fractional part of Falpha-1}\\\\\n\t\tF_{2k+i}&\\equiv \\pas{-1}^kF_i \\pmod{F_k}, \\label{lem:fractional part of Falpha-2}\\\\\n\t\tF_{3k+i}&\\equiv \\pas{-1}^{k+i+1}F_{k-i} \\pmod{F_k}, \\label{lem:fractional part of Falpha-3}\\\\\n\t\tF_{4k+i}&\\equiv F_i \\pmod{F_k}. \\label{lem:fractional part of Falpha-4}\n\t\\end{align}\n\tMore generally, for every two non-negative integers $a,b$ with $a\\equiv b\\pmod4$,\n\t\\begin{align}\\label{lem:fractional part of Falpha-5}\n\t\tF_{ak+i}\\equiv F_{bk+i} \\pmod{F_k}.\n\t\\end{align}\n\\end{lem}\n\\begin{proof}\n\tFix a natural number $k$. First, we prove $(\\ref{lem:fractional part of Falpha-1})$. From Fact.\\ref{fact:fact of Fa numbers} $(d)$,\n\t\\[\n\tF_{k+i}=F_kF_{i+1}+F_{k-1}F_i\\equiv F_{k-1}F_i \\pmod{F_k}.\n\t\\]\n\tThus $(\\ref{lem:fractional part of Falpha-1})$ holds for $i=0,1$. Suppose that $(\\ref{lem:fractional part of Falpha-1})$ holds for all natural numbers less than $i\\geq2$. The right hand side is\n\t\\begin{align*}\n\t\t\\pas{-1}^{i+1}F_{k-i}=\\pas{-1}^{i+1}\\pas{F_{k-i+2}-\\alpha F_{k-i+1}}=\\pas{-1}^{i-1}F_{k-i+2}+\\alpha \\pas{-1}^iF_{k-i+1}.\n\t\\end{align*}\n\tUsing the assumptions of induction, we have\n\t\\[\n\t\\pas{-1}^{i-1}F_{k-i+2}+\\alpha \\pas{-1}^iF_{k-i+1}\\equiv F_{k+i-2}+\\alpha F_{k+i-1}=F_{k+i} \\pmod{F_k}.\n\t\\]\n\tThe proof of $(\\ref{lem:fractional part of Falpha-2})$ runs as\n\t\\begin{align*}\n\t\tF_{2k+i}=F_{2k}F_{i+1}+F_{2k-1}F_i\\equiv F_{2k-1}F_i\\equiv \\pas{-1}^kF_i \\pmod{F_k}\n\t\\end{align*}\n\tfrom the case of $i=k-1$ of $(\\ref{lem:fractional part of Falpha-1})$. 
Similarly, $(\\ref{lem:fractional part of Falpha-3})$ and $(\\ref{lem:fractional part of Falpha-4})$ is obtained from\n\t\\begin{align*}\n\t\tF_{3k+i}\n\t\t&=F_{3k}F_{i+1}+F_{3k-1}F_i\\equiv F_{2k+(k-1)}F_i\\equiv \\pas{-1}^kF_{k-1}F_i\\\\\n\t\t&\\equiv\\pas{-1}^k\\pas{F_kF_{i+1}+F_{k-1}F_i}=\\pas{-1}^kF_{k+i}\\equiv \\pas{-1}^{k+i+1}F_{k-i} \\pmod{F_k},\\\\\n\t\tF_{4k+i}\n\t\t&\\equiv F_{4k-1}F_i\\equiv\\pas{-1}^{k+(k-1)+1}F_1F_i=F_i \\pmod{F_k}.\n\t\\end{align*}\n\tNext, we observe that $(\\ref{lem:fractional part of Falpha-5})$ holds for every non-negative $a,b$ with $a\\equiv b\\pmod4$ if and only if for every non-negative $a$,\n\t\\begin{align}\\label{lem:fractional part of Falpha-6}\n\t\tF_{ak+i}\\equiv F_{\\delta k+i} \\pmod{F_k}\n\t\\end{align}\n\tholds where $a\\equiv\\delta\\pmod4$ with $\\delta\\in\\Pas{0,1,2,3}$. We prove $(\\ref{lem:fractional part of Falpha-6})$ by using induction with respect to $a$. The case $a=0,1,2,$ and $3$ are trivial since $a=\\delta$. And $(\\ref{lem:fractional part of Falpha-4})$ is nothing but the case of $a=4$. Suppose that $(\\ref{lem:fractional part of Falpha-6})$ holds also all natural numbers less than $a\\geq5$, and we take $\\delta^\\prime\\in\\{1,2,3,4\\}$ satisfying $\\delta^\\prime\\equiv a\\pmod4$. Then\n\t\\begin{align*}\n\t\tF_{ak+i}=F_{ak}F_{i+1}+F_{ak-1}F_i\\equiv F_{ak-1}F_i=F_{(a-1)k+(k-1)}F_i.\n\t\\end{align*}\n\tUsing the assumption of induction, we have\n\t\\begin{align*}\n\t\t\\equiv F_{(\\delta^\\prime-1)k+(k-1)}F_i=F_{\\delta^\\prime-1}F_i\\equiv F_{\\delta^\\prime k}F_{i+1}+F_{\\delta^\\prime k-1}F_i=F_{\\delta^\\prime k+i} \\pmod{F_k}.\n\t\\end{align*}\n\\end{proof}\n\n\n\\begin{thm}\\label{thm:best estimation of FDF(n)}\n\tSuppose $a\\in\\FDF{\\N}$, and put $k=\\ind{a}$. If $F_i\\mid a$, then $i\\leq(k+1)\/2$, that is\n\t\\[\n\t\\ind{\\FDF{a}}\\leq\\frac{k+1}{2}.\n\t\\]\n\\end{thm}\n\\begin{proof}\n\tLet $i_0$ be the maximal $i$ satisfying $F_i\\mid a$. The case $k=1$ is $i_0=1$ since $a=1$, and hence the claim holds. Assume that $i_0>(k+1)\/2$ for $k\\geq2$. In particular, $i_0\\geq2$. Then $a>1$. Thus, there exist natural numbers $c_1\\ldots,c_n$ such that\n\t\\begin{align*}\n\t\ta=F_{i_0+c_1}+F_{i_0+c_2}+\\cdots+F_{i_0+c_m}+F_{c_{m+1}}+\\cdots+F_{c_n},\\\\\n\t\tk=i_0+c_1>i_0+c_2>\\cdots>i_0+c_m>c_{m+1}>\\cdots>c_n=1.\n\t\\end{align*}\n\tNote that $c_1\\leq i_0-2$. Since $F_{i_0}\\mid a$,\n\t\\begin{align*}\n\t\ta=\\sum_{j=1}^mF_{i_0+c_j}+\\sum_{j=m+1}^nF_{c_j}\\equiv 0 \\pmod{F_k}\n\t\\end{align*}\n\tFrom Lemma.\\ref{lem:fractional part of Falpha} $(\\ref{lem:fractional part of Falpha-1})$,\n\t\\begin{align*}\n\t\t\\sum_{j=1}^mF_{i_0+c_j}\\equiv \\sum_{j=1}^m\\pas{-1}^{c_j+1}F_{i_0-c_j} \\pmod{F_k}\n\t\\end{align*}\n\tIt is enough to consider only the fractional part, and hence we suppose $c_{m+1}\\exp\\pas{\\frac{4}{\\log\\varphi_\\alpha-4}\\log\\pas{\\sqrt{\\alpha^2+4}+\\frac{1}{\\varphi_\\alpha}}},\n\t\\]\n\tthen we get\n\t\\begin{align}\\label{thm:estimation of ord(n)-1}\n\t\t\\ord{n}<\\frac{1}{\\log 2}\\log\\log n.\n\t\\end{align}\n\tMoreover,\n\t\\begin{align*}\n\t\t\\ord{n}\n\t\t\\begin{cases}\n\t\t\t=0 & (n=1),\\\\\n\t\t\t=1 & (2\\leq n\\leq7),\\\\\n\t\t\t<\\frac{1}{\\log2}\\log\\log n & (n\\geq8)\n\t\t\\end{cases}\n\t\\end{align*}\n\tholds at least in the case $\\alpha\\geq2981$.\n\\end{thm}\n\\begin{proof}\n\tWe see that $\\ord{1}=0$ by definition. Hereafter let $n\\geq2$ and $k=\\ind{n}\\geq1$. 
Since $F_k\\leq n$, we have\n\t\\[\n\tk\\leq\\flog{\\varphi_\\alpha}\\log\\pas{n\\sqrt{\\alpha^2+4}+\\pas{-\\varphi_\\alpha}^{-k}}\n\t\\]\n\tby the argument in the proof of Theorem.\\ref{thm:there exists a k s.t. FDFk(n)=1}. From Theorem.\\ref{thm:estimatation of ord(n) with ind(n)}, we estimate that\n\t\\begin{align}\\label{thm:estimation of ord(n)-2}\n\t\t\\ord{n}\n\t\t&\\leq\\flog{2}\\pas{\\log\\log\\pas{n\\sqrt{\\alpha^2+4}+\\pas{-\\varphi_\\alpha}^{-k}}-\\log\\log\\varphi_\\alpha}+2\\notag\\\\\n\t\t&<\\flog{2}\\log\\log n+\\flog{2}\\pas{\\log\\pas{1+\\flog{n}\\log\\pas{\\sqrt{\\alpha^2+4}+\\frac{1}{\\varphi_\\alpha}}}-\\log\\log\\varphi_\\alpha}+2.\n\t\\end{align}\n\tHere, we define\n\t\\begin{align*}\n\t\tf_\\alpha(x)&:=\\log\\pas{1+\\flog{x}\\log\\pas{\\sqrt{\\alpha^2+4}+\\frac{1}{\\varphi_\\alpha}}}-\\log\\log\\varphi_\\alpha,\\\\\n\t\tg_\\alpha(x)&:=\\exp\\pas{f_\\alpha(x)}=\\flog{\\varphi_\\alpha}\\pas{1+\\flog{x}\\log\\pas{\\sqrt{\\alpha^2+4}+\\frac{1}{\\varphi_\\alpha}}}\n\t\\end{align*}\n\twith $x\\geq2$. For real $y>(\\log\\varphi_\\alpha)^{-1}$, we have\n\t\\[\n\tx>\\exp\\pas{\\frac{1}{y\\log\\varphi_\\alpha-1}\\log\\pas{\\sqrt{\\alpha^2+4}+\\frac{1}{\\varphi_\\alpha}}}.\n\t\\]\n\tDenote by $A_\\alpha(y)$ the right-hand side of this inequality. Then $A_\\alpha(y)$ is decreasing with respect to $\\alpha$. (We will prove this in Remark.\\ref{rem:estimation of ord(n)}.) Since $\\varphi_3=(3+\\sqrt{13})\/2>3.3$, we have\n\t\\[\n\t\\log A_\\alpha(2)\\leq\\log A_3(2)=\\frac{1}{2\\log\\varphi_3-1}\\log\\pas{\\sqrt{13}+\\varphi_3^{-1}}<1\n\t\\]\n\twith $\\alpha\\geq3$, that is $A_\\alpha(2)<3$. Therefore, we have $g_\\alpha(x)<2$ for $\\alpha\\geq3$ and $x\\geq3$. Thus for every $\\alpha\\geq3$,\n\t\\[\n\t\\ord{n}<\\flog{2}\\log\\log n+\\flog{2}\\log g_\\alpha(n)+2<\\flog{2}\\log\\log n+3\n\t\\]\n\twith $n\\geq3$. In addition, this also holds the case $n=2$ since $\\ord{2}=1$ and $\\log\\log2\/\\log2+3\\simeq2.47$.\n\n\tLet us next prove $(\\ref{thm:estimation of ord(n)-1})$. It is enough to consider the domain of $\\alpha\\geq3$ and $a\\geq2$ which satisfies $f_\\alpha(x)\/\\log2+2<0$. Since this condition can be transformed into $g_\\alpha(s)<1\/4$ it is enough to assume the condition $x>A_\\alpha(1\/4)$ for all sufficient large $\\alpha$. Then $\\log\\varphi_\\alpha>4$, that is $\\alpha\\geq55$. Thus for every $\\alpha\\geq55$, we have\n\t\\[\n\t\\ord{n}<\\flog{2}\\log\\log n\n\t\\]\n\twith $x>A_\\alpha(1\/4)$. Since $A_\\alpha(y)$ is decreasing in $\\alpha$, $\\alpha>A_\\alpha(1\/4)$ holds for all sufficiently large $\\alpha\\geq55$. The lower bound is $\\alpha\\geq2981$ from computer calculations, and hence for every $\\alpha\\geq2981$ we get\n\t\\[\n\t\\ord{n}<\\flog{2}\\log\\log n\n\t\\]\n\twith $n\\geq\\alpha$. $\\ord{n}=1$ if $2\\leq n<\\alpha$, and $\\log\\log n\/\\log2>1$ for $n\\geq8$. This implies\n\t\\begin{align*}\n\t\t\\ord{n}\n\t\t\\begin{cases}\n\t\t\t=1 & \\pas{2\\leq n\\leq 7},\\\\\n\t\t\t<\\flog{2}\\log\\log n & \\pas{n\\geq8}.\n\t\t\\end{cases}\n\t\\end{align*}\n\\end{proof}\n\n\\begin{rem}\\label{rem:estimation of ord(n)}\n\tWe show that $A_\\alpha(y)$ is decreasing monotonically in $\\alpha$. By definition,\n\t\\[\n\t\\log A_\\alpha(y)=\\frac{\\log\\sqrt{\\alpha^2+4}}{y\\log\\varphi_\\alpha-1}+\\frac{1}{y\\log\\varphi_\\alpha-1}\\log\\pas{1+\\frac{1}{\\varphi_\\alpha\\sqrt{\\alpha^2+4}}}=:B(\\alpha)+C(\\alpha)\n\t\\]\n\tsay, with $y\\log\\varphi_\\alpha>1$. It is clear that $C(\\alpha)$ is decreasing, and hence it is sufficient to discuss on $B(\\alpha)$. 
For real $\\alpha$ with $y\\log\\varphi_\\alpha>1$,\n\t\\begin{align*}\n\t\t\\frac{d}{d\\alpha}B(\\alpha)=\\frac{1}{\\pas{y\\log\\varphi_\\alpha-1}^2}\\pas{\\frac{2y\\alpha}{\\alpha^2+4}\\log\\varphi_\\alpha-\\frac{2\\alpha}{\\alpha^2+4}-\\frac{y\\varphi_\\alpha^\\prime}{\\varphi_\\alpha}\\log\\pas{\\alpha^2+4}}.\n\t\\end{align*}\n\tSince $\\varphi_\\alpha<\\sqrt{\\alpha^2+4}$, we estimate that the right-hand side of the above is\n\t\\begin{align*}\n\t\t&<\\frac{1}{\\pas{y\\log\\varphi_\\alpha-1}^2}\\pas{\\frac{2y\\alpha}{\\alpha^2+4}\\log\\sqrt{\\alpha^2+4}-\\frac{y\\varphi_\\alpha^\\prime}{\\varphi_\\alpha}\\log\\pas{\\alpha^2+4}}\\\\\n\t\t&=\\frac{y\\log\\pas{\\alpha^2+4}}{\\pas{y\\log\\varphi_\\alpha-1}^2}\\pas{\\frac{\\alpha}{\\alpha^2+4}-\\frac{\\varphi_\\alpha^\\prime}{\\varphi_\\alpha}}\\\\\n\t\t&=\\frac{y\\log\\pas{\\alpha^2+4}}{\\pas{y\\log\\varphi_\\alpha-1}^2\\pas{\\alpha^2+4}\\pas{\\alpha+\\sqrt{\\alpha^2+4}}}\\pas{\\alpha\\pas{\\alpha+\\sqrt{\\alpha^2+4}}-\\pas{1+\\frac{\\alpha}{\\sqrt{\\alpha^2+4}}}\\pas{\\alpha^2+4}}\\\\\n\t\t&=-\\frac{4y\\log\\pas{\\alpha^2+4}}{\\pas{y\\log\\varphi_\\alpha-1}^2\\pas{\\alpha^2+4}\\pas{\\alpha+\\sqrt{\\alpha^2+4}}}<0.\n\t\\end{align*}\n\\end{rem}\n\nLet us consider the case $\\alpha=55$.\n\n\\begin{rem}\n\tSince $\\log A_{55}\\pas{1\/4}\\simeq 2091.79$,\n\t\\[\n\t\\textup{ord}_{55}\\pas{n}<\\frac{1}{\\log2}\\log\\log n\n\t\\]\n\tholds at least for $n>e^{2092}$.\n\\end{rem}\n\nIn addition, we obtain the following corollary from Theorem.\\ref{thm:estimation of ord(n)}.\n\n\\begin{crl}\n\tFor every $\\alpha\\geq3$,\n\t\\[\n\t\\limsup_{n\\to\\infty}\\frac{\\ord{n}}{\\log\\log n}\\leq \\frac{1}{\\log2}.\n\t\\]\n\\end{crl}\n\n\\begin{crl}\n\t\\begin{align*}\n\t\t\\lim_{\\alpha\\to\\infty}\\liminf_{n\\to\\infty}\\pas{\\flog{2}\\log\\log n-\\ord{n}}=+\\infty.\n\t\\end{align*}\n\\end{crl}\n\\begin{proof}\n\tFrom $(\\ref{thm:estimation of ord(n)-2})$, we have\n\t\\[\n\t\\liminf_{n\\to\\infty}\\pas{\\flog{2}\\log\\log n-\\ord{n}}\\geq\\flog{2}\\log\\log\\varphi_\\alpha-2.\n\t\\]\n\\end{proof}\n\nFrom Theorem.\\ref{thm:estimatation of ord(n) with ind(n)}, we have $\\ord{F_p}\\leq \\log p\/\\log2 +2$ for prime $p$. In fact, there is quite a big difference between them, which can be observed by numerical tests $(\\textup{FIGURE}\\ref{FIGURE 1},\\textup{FIGURE}\\ref{FIGURE 2})$. Thus, the author believes that the following theorem will be useful in the future.\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=0.6\\columnwidth]{FIGURE1.png}\n\t\\caption{$\\textup{ord}_3\\pas{F_n}$ and $\\log n\/\\log2+2\\;(n\\leq80000)$}\n\t\\label{FIGURE 1}\n\\end{figure}\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=0.6\\columnwidth]{FIGURE2.png}\n\t\\caption{$\\textup{ord}_3\\pas{F_n}\/\\log (n+1)$ and $1\/\\log2\\;(n\\leq80000)$}\n\t\\label{FIGURE 2}\n\\end{figure}\n\n\\newpage\n\n\\begin{thm}\\label{thm:related with CC}\n\tSuppose $\\alpha\\geq3$, and put $C_\\alpha:=\\limsup_{p\\to\\infty}\\frac{\\ord{F_p}}{\\log p}$. If $C_\\alpha<1\/\\log2$ for some $\\alpha$, then\n\t\\[\n\t\\limsup_{p\\to\\infty}\\frac{l(p)}{\\log p}\\leq\\frac{C_\\alpha}{1-C_\\alpha\\log2}.\n\t\\]\n\\end{thm}\n\\begin{proof}\n\tSuppose that $C_\\alpha<1\/\\log2$. For every $0<\\varepsilon<1\/\\log2-C_\\alpha$,\n\t\\begin{align}\\label{thm:related with CC-1}\n\t\t\\ord{F_p}<\\pas{C_\\alpha+\\varepsilon}\\log p\n\t\\end{align}\n\twith sufficiently large $p$. 
And\n\t\\[\n\t\\log\\pas{\\pas{p\\pm1}2^{l_{\\pm1}(p)-1}\\mp1}=\\pas{l_{\\pm1}(p)-1}\\log2+\\log p+O\\pas{\\frac{1}{p}}.\n\t\\]\n\tWe replace $p$ by $\\pas{p\\pm1}2^{l_{\\pm1}(p)-1}\\mp1$ in $(\\ref{thm:related with CC-1})$. Then\n\t\\[\n\tl(p)-1+\\ord{F_p}<\\pas{C_\\alpha+\\varepsilon}\\pas{\\pas{l_{\\pm1}(p)-1}\\log2+\\log p+O\\pas{\\frac{1}{p}}}\n\t\\]\n\tfrom Corollary.\\ref{crl:divisors of F(2p+1)+1}, and hence we have\n\t\\[\n\tl(p)<1+\\frac{C_\\alpha+\\varepsilon}{1-(C_\\alpha+\\varepsilon)\\log2}\\log p+O\\pas{\\frac{1}{p}}.\n\t\\]\n\tDivide the both sides by $\\log p$, and take limit superior with respect to $p$. Then we find that\n\t\\[\n\t\\limsup_{p\\to\\infty}\\frac{l(p)}{\\log p}\\leq\\frac{C_\\alpha+\\varepsilon}{1-\\pas{C_\\alpha+\\varepsilon}\\log2}.\n\t\\]\n\\end{proof}\n\nThe sufficient condition of Theorem.\\ref{thm:related with CC}, written in terms of prime numbers, can be replaced by the condition written in terms of natural numbers.\n\n\\begin{crl}\\label{crl:related with CC}\n\tSuppose that $\\alpha\\geq3$, and put $D_\\alpha:=\\limsup_{n\\to\\infty}\\frac{\\ord{n}}{\\log\\log n}$. If $D_\\alpha<1\/\\log2$ for some $\\alpha$, then\n\t\\[\n\t\\limsup_{p\\to\\infty}\\frac{l(p)}{\\log p}\\leq\\frac{D_\\alpha}{1-D_\\alpha\\log2}.\n\t\\]\n\\end{crl}\n\\begin{proof}\n\tFor all natural $n$,\n\t\\[\n\t\\log F_n=n\\log\\varphi_\\alpha+\\log\\pas{\\frac{1-\\pas{-\\varphi_\\alpha^2}^{-n}}{\\sqrt{\\alpha^2+4}}}\n\t\\]\n\tfrom Fact.\\ref{fact:fact of Fa numbers} $(a)$. In particular, $\\log\\log F_n\\sim \\log n$. Then, for every $\\varepsilon>0$, we have $\\log p\/\\log\\log F_p>1-\\varepsilon$ with any sufficiently large $p$. Thus we estimate that\n\t\\[\n\tD_\\alpha\\geq\\limsup_{p\\to\\infty}\\frac{\\ord{F_p}}{\\log\\log F_p}=\\limsup_{p\\to\\infty}\\frac{\\ord{F_p}}{\\log p}\\cdot\\frac{\\log p}{\\log\\log F_p}\\geq\\pas{1-\\varepsilon}\\limsup_{p\\to\\infty}\\frac{\\ord{F_p}}{\\log p}.\n\t\\]\n\tNow, since $\\varepsilon>0$ is arbitrary, we get\n\t\\[\n\t\\limsup_{p\\to\\infty}\\frac{\\ord{F_p}}{\\log p}\\leq D_\\alpha.\n\t\\]\n\tHere we suppose $D_\\alpha<1\/\\log2$ and apply Theorem.\\ref{thm:related with CC}, then the proof is complete.\n\\end{proof}\n\nThe advantage of this corollary is that the problem of upper estimation of $l(p)$ is reduced to the situation that we can use number theoretic methods which cannot be applied to prime numbers.\n\n\n\\section{Remaining Problems}\nIn this paper, an experimentally reliable sufficient condition for $l(p)\\ll\\log p$ was obtained using elementary methods that do not involve differentiation and integration. If we could successfully use analytical methods, perhaps we would obtain better estimation. For example, it is describable to find some analogy of $(\\ref{eq:Dirichlet product})$ with respect to ${}_{\\Falpha}\\sigma^2,{}_{\\Falpha}\\sigma^3,\\cdots$, or some non-trivial order of $\\sum\\ord{n}$. However, the difficulty lies in the fact that $\\ord{n}$ is defined by the iterations of ${}_{\\Falpha}\\sigma$. The iterations of the divisor function $\\sigma(n)$ and the Euler function $\\varphi(n)$ have been considered in \\cite{Erdos},\\cite{Erdos-Granville-Pomerance-Spiro},\\cite{Erdos-Subbarao} and so on; however those researches seem to be possible because $\\sigma,\\varphi$ are number theoretically easier to treat. 
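For readers who wish to experiment with this iteration numerically, a minimal sketch is given below. The function names are illustrative, and the divisor sum is implemented in the reading of the definition of ${}_{\\Falpha}\\sigma$ that matches the $\\alpha=3$ example in Section.$2$ (the sum runs over all positive $\\mathcal{F}_\\alpha$ divisors of $n$, including $n$ itself):

\\begin{verbatim}
def f_alpha_numbers(alpha, limit):
    # Positive F_alpha numbers up to limit: F_1 = 1, F_2 = alpha, ...
    fs, a, b = [], 0, 1
    while b <= limit:
        fs.append(b)
        a, b = b, alpha * b + a
    return fs

def f_sigma(n, alpha):
    # Sum of the divisors of n that are F_alpha numbers.
    fa = set(f_alpha_numbers(alpha, n))
    return sum(d for d in range(1, n + 1) if n % d == 0 and d in fa)

def ord_alpha(n, alpha):
    # Number of iterations of f_sigma needed to reach 1.
    k = 0
    while n != 1:
        n, k = f_sigma(n, alpha), k + 1
    return k

print(f_sigma(3, 3), f_sigma(4, 3))   # 4, 1  (as in the Section 2 example)
print(ord_alpha(109, 3))              # 3, via 109 -> 110 -> 11 -> 1
\\end{verbatim}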
Even though ${}_{\\Falpha}\\sigma$ is not multiplicative, $\\mathcal{F}_\\alpha$ numbers has some nice properties related to multiplication, such as Fact.\\ref{fact:fact of Fa numbers} $(e),(f)$.\nIn the future, it is expected that such properties will be used well to obtain further results on the function $\\textup{ord}_\\alpha$.\n\nFinally, we list the problems to be solved.\n\\begin{problem}\n\t\\[\n\t\\limsup_{n\\to\\infty}\\frac{\\ord{n}}{\\log\\log n}\\overset{?}{<}\\flog{2}\n\t\\]\n\\end{problem}\nIt is shown in Theorem.\\ref{crl:related with CC} that $l(p)\\ll\\log p$ holds if this inequality is true. Here we recall Conjecture.\\ref{cjt:omega order of Cunningham chains}, which is not so far from the above inequality.\n\\begin{problem}\n\tIs there a sequence that is different from $\\mathcal{F}_\\alpha$ with ``similar properties\"?\n\\end{problem}\nThe term ``similar properties\" means those that are related to a chain of prime numbers.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn pseudoscalar meson photoproduction, the reaction is completely\ndescribed by four amplitudes that are functions of hadronic mass $W$\nand center of mass scattering angle $\\theta_{CM}$ (or, equivalently\n$s$ and $t$). If one were able to extract these amplitudes (allowing\nof course for an overall phase) at $\\left\\{ W,\\cos\\theta_{CM} \\right\\} $ or $\\left\\{ s,t\\right\\} $ points,\nthere is nothing else one could measure that would alter how one could\ninterpret the physics of the reaction. \n\nThis observation is especially important in the study of the spectrum\nof baryon resonances. Despite several decades of investigation, it\nis still not clear whether certain states that are predicted by quark\nmodels exist or not. The signatures of any hitherto undiscovered states\nmust be very subtle, to the extent that they are not readily apparent\nfrom cross section measurements alone. If one could unpick the reaction\namplitudes from suitable observables, that would constitute the most\ncomprehensive test for models. In the case of establishing $s$-channel resonances, extraction of the four amplitudes may not even be enough. Partial-wave analyses will be required, and these can lead to finite ambiguities that require additional information to resolve. In any event, a potential new physical effect would have\nto manifest itself clearly, or be declared unproven.\n\nIn order to extract the amplitudes, it is necessary to measure several\npolarization observables. In addition to the cross section, there\nare three single-spin observables %\n\\footnote{This departure from the usual conventions is to avoid confusion between\n$\\Sigma$-particle and $\\Sigma$-beam asymmetry, as well as $P$-photon\npolarization and $P$-recoil.%\n}: $B$ (photon beam asymmetry), $R$ (recoil polarization) and $T$\n(target polarization), which can be labelled as ${\\cal S}$-type measurements.\nThere are also four beam-recoil (${\\cal BR}$-type), four beam-target\n(${\\cal BT}$-type) and four recoil-target (${\\cal RT}$-type) observables.\nAll these observables are bilinear combinations of the four reaction\namplitudes, and are not independent. 
In principle, therefore, it is\nnot necessary to measure all of them to be able to infer the amplitudes.\nAs we have now entered an era in which single- and double-polarization\nmeasurements can be made, there exists a real opportunity for progress\nin understanding pseudoscalar meson photoproduction reactions, and\nfor potential discovery of new states.\n\nThe problem of finding a minimum set of measurements that allows the\nunambiguous extraction of amplitudes was addressed by Barker, Donnachie\nand Storrow \\cite{Barker1975}). They found that, in addition to the\nsingle polarization set, five more double polarization observables\nwere needed to remove all ambiguities in the quadrants for each relative\nphase angle. More recently, Chiang and Tabakin \\cite{Chiang:1996em}\ncarried out a detailed analysis of the algebra of observables using\nFierz identities, and showed that the selection of just four suitably\nchosen double polarization observables was sufficient to remove the\nambiguities. Such sets have been designated as {}``complete'' sets.\n\nThe Fierz identity analysis led to a large number of identities among observables. Work by Artru et al.~\\cite{Artru:2006xf,Artru20091} extended this by using positivity constraints to derive many \\emph{inequalities}. This means that the measurement of a subset of observables places limits on the possible values of the undetermined observables, so the inequalities provide useful guides to whether the values of experimental data are physical.\n\nLabelling sets of observables as {}``complete'', implies\nsomehow that one has reached an ultimate state of knowledge. However,\nthe reality is that all experimental measurements of observables carry\nwith them a finite uncertainty, so the concept of completeness is\nnot well defined. One might be tempted to regard this as an experimental\nfailing, but in practice any experiment has to be performed within\nconstraints of time and technological feasibility; the experiment\nwith zero uncertainty can only be accomplished in an infinite time.\nThe alternative is to embrace experimental uncertainty and include\nit in the interpretation of results. \n\n\n\nThe problem of uncertainty due to noise in communication channels\nled Shannon to develop the foundations of information theory \\cite{Shannon1948}.\nIn that seminal work, the concept of entropy was used as a means of\nquantifying an amount of information. One can also apply this to measurements.\nTo introduce the idea with a concrete example, suppose one measured\na quantity $X$ and obtained a measured value $x$ with an uncertainty\n$\\delta_{x}$. The reporting of this measurement would usually be\nin the form $x\\pm\\delta_{x}$, but this is really shorthand for a\nGaussian probability density function (PDF) $p\\left(x\\right)$. The\nentropy is then\\begin{equation}\nH=-\\int p\\left(x\\right)\\log p\\left(x\\right)dx,\\label{eq:entropy}\\end{equation}\nwhich for a Gaussian PDF is $H=\\log\\sqrt{2\\pi e}+\\log\\delta_{x}$.\nIf a more accurate measurement were to be made, resulting in a reduced\nuncertainty $\\delta_{x}^{\\prime}$, the gain in information can be\nquantified as\\[\nI=H-H^{\\prime}=\\log\\left(\\frac{\\delta_{x}}{\\delta_{x}^{\\prime}}\\right).\\]\n\n\nBy extending this idea to the uncertainty in the reaction amplitudes,\nit is possible to quantify how much information is gained following\nthe measurement of one or more observables. 
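As a short numerical illustration of this idea (base-$2$ logarithms are used so that the gain is expressed in bits, the convention adopted later in this article; the function names are purely illustrative):

\\begin{verbatim}
import math

def gaussian_entropy_bits(delta):
    # Differential entropy, in bits, of a Gaussian of standard deviation delta.
    return 0.5 * math.log2(2 * math.pi * math.e) + math.log2(delta)

def info_gain_bits(delta_old, delta_new):
    # I = H - H' = log2(delta_old / delta_new)
    return gaussian_entropy_bits(delta_old) - gaussian_entropy_bits(delta_new)

# Halving the uncertainty of a single measured quantity gains exactly one bit:
print(info_gain_bits(0.10, 0.05))   # 1.0
\\end{verbatim}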
This article represents\na preliminary study of information entropy as applied to pseudoscalar\nmeson photoproduction. Section \\ref{sec:Measuring-Information} develops\nthe idea encapsulated by Eq. (\\ref{eq:entropy}) for the reaction\namplitudes, and introduces a means of calculating it. In section \\ref{sec:Results}\nexamples of hypothetical measurements are given, which show how the\nmagnitudes and relative phases of the amplitudes can be determined.\nIn addition to this, section \\ref{sec:Comparison-of-Models} briefly considers\nhow the information content of measured data can be used as a guide to estimating whether the measurement could in principle reduce uncertainty in derived physical quantities.\n\n\n\\section{\\label{sec:Measuring-Information}Measuring Information}\n\n\n\\subsection{Reduced Amplitudes}\n\nA full analysis of reactions will involve measurements over all scattering\nangles and cover the mass range of interest. To develop the concept\nof information content, however, we restrict ourselves to considering\none region (or {}``bin'') in $\\left\\{ W,\\theta_{CM}\\right\\} $ space.\nThe ideas can be straightforwardly extended to include many regions,\nsince entropies are additive. The issue of whether different experiments\n(measuring different observables) have covered the same $\\left\\{ W,\\theta_{CM}\\right\\} $\nspace has been avoided.\n\nThe choice of basis for amplitudes is arbitrary; information content\nis derived from the measured observables, so it cannot depend on the\nchoice. In this work, the transversity basis is used, where the spin\nof the target nucleon and recoiling baryon is projected onto the normal\nto the scattering plane, and the linear polarization of the photon\nis either normal or parallel to the scattering plane. \n\nIt is assumed that differential cross section measurements have been\nperformed to a level of accuracy of, say, a few percent, so that further\nmeasurement would be unlikely significantly to improve knowledge of\nthe amplitudes. The information gain to be studied here is solely\ndue to an increased accuracy in the knowledge of the polarization\nobservables. Since all these observables are asymmetries, no generality\nis lost if we rescale the amplitudes $b_{i}\\rightarrow a_{i}$ such\nthat\\[\na_{i}=\\frac{b_{i}}{\\sqrt{\\left|b_{1}\\right|^{2}+\\left|b_{2}\\right|^{2}+\\left|b_{3}\\right|^{2}+\\left|b_{4}\\right|^{2}}},\\]\nso that the cross section provides an overall scale factor. Applying\nthis rescaling, we have\\begin{equation}\n\\left|a_{1}\\right|^{2}+\\left|a_{2}\\right|^{2}+\\left|a_{3}\\right|^{2}+\\left|a_{4}\\right|^{2}=1.\\label{eq:7-sphere}\\end{equation}\n Since these reduced amplitudes $a_{i}$ are complex, this represents\nthe equation of a unit 7-sphere, i.e. the eight numbers that are the\nreal and imaginary parts are constrained to be on the surface of a\nunit hypersphere in 8 dimensions (a unit 8-ball). \n\nThe definitions of the observables in terms of the reduced amplitudes\nare given in appendix \\ref{sec:Definitions-of-Observables}. One side-effect\nof choosing transversity amplitudes is that measurement of the ${\\cal S}$-type\nobservables leads to the extraction of the magnitudes, leaving just\nthe relative phases to be determined. There is often a tacit assumption\nthat it is easier to perform single-spin asymmetry measurements. 
For\nthat reason many analyses \\cite{Barker1975,Chiang:1996em} start from\na point where values of the ${\\cal S}$-type observables have been\ndetermined.\n\n\n\\subsection{Entropy}\n\nThe entropy associated with the state of knowledge of the amplitudes\nis an multidimensional extension of Eq. (\\ref{eq:entropy}):\\begin{equation}\nH=-\\int p\\left(\\left\\{ x_{i}\\right\\} \\right)\\log p\\left(\\left\\{ x_{i}\\right\\} \\right)d\\left\\{ x_{i}\\right\\} ,\\label{eq:nd-entropy}\\end{equation}\nwhere $\\left\\{ x_{i}\\right\\} $ represents the values of the real\nand imaginary parts of the amplitudes. Before the measurement of any\npolarization observable, there is no knowledge of $\\left\\{ x_{i}\\right\\} $,\nother than the constraint imposed by Eq. (\\ref{eq:7-sphere}). To\nencode this as a PDF, we can spread the probability uniformly over\nthe surface area of the unit 7-sphere to give\\[\np\\left(\\left\\{ x_{i}\\right\\} \\right)=\\frac{3}{\\pi^{4}},\\]\nwhich results in a pleasingly simple entropy of \\begin{equation}\nH_{7-sphere}=-\\int\\frac{3}{\\pi^{4}}\\log\\left(\\frac{3}{\\pi^{4}}\\right)d\\left\\{ x_{i}\\right\\} =4\\log\\pi-\\log3.\\label{eq:7-sphere-1}\\end{equation}\n\n\nThe act of measurement can be viewed as a compression of this {}``uniform''\nPDF into as small a region of $\\left\\{ x_{i}\\right\\} $ space as possible.\nAs a rough example, consider a set of measurements that results in\na multi-dimensional Gaussian PDF in amplitude space. The entropy of\nan $n$-dimensional Gaussian is \\cite{Shannon1948}\\begin{equation}\nH_{g}=\\frac{n}{2}\\log\\left(2\\pi e\\right)+\\frac{1}{2}\\log\\left(\\left|c_{ij}\\right|\\right),\\label{eq:entropy-nd-gaussian}\\end{equation}\nwhere $\\left|c_{ij}\\right|$ is the determinant of the covariance\nmatrix. While the four complex amplitudes have eight numbers in total,\nrepresenting real and imaginary parts, all observable quantities are\ninvariant to the choice on an overall phase angle, so the effective\nnumber of numbers to extract is seven. In this case, a 7-dimensional\nGaussian is used to estimate information gain. The projection of the\nGaussian onto the 7-sphere will induce off-diagonal correlations in\n$c_{ij}$, but for simplicity we ignore any correlations and take\nthe standard deviation in each of the $\\left\\{ x_{i}\\right\\} $ to\nbe the same ($\\sigma$, say). The resulting approximate expression\nis\\begin{equation}\nH_{measured}=\\frac{7}{2}\\log\\left(2\\pi e\\right)+7\\log\\sigma.\\label{eq:entropy-7D-gaussian}\\end{equation}\n The gain in information is the difference between this and the initial\nuniform PDF over the 7-sphere:\\begin{equation}\nI=H_{7-sphere}-H_{measured}=4\\log\\pi-\\log3-\\frac{7}{2}\\log\\left(2\\pi e\\right)-7\\log\\sigma.\\label{eq:info-gain}\\end{equation}\nA plot of this quantity as function the size of standard deviation\nis shown in Fig. (\\ref{fig:Rough-guide-to}). The choice of logarithm\nbase is arbitrary, but for this work we select it to be 2. This means\nthat the unit of information is the {}``bit'' (i.e. knowing whether\na quantity is 1 or 0). 
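In these units, Eq. (\\ref{eq:info-gain}) can be evaluated directly; a minimal numerical sketch (keeping the simplifying assumption of seven independent Gaussian parts of common width $\\sigma$) is:

\\begin{verbatim}
import math

def info_gain_bits(sigma):
    # Eq. (info-gain): uniform 7-sphere entropy minus the 7-D Gaussian entropy.
    log2 = math.log2
    h_sphere = 4 * log2(math.pi) - log2(3)
    h_measured = 3.5 * log2(2 * math.pi * math.e) + 7 * log2(sigma)
    return h_sphere - h_measured

print(round(info_gain_bits(0.05), 1))   # about 21 bits at sigma = 0.05
\\end{verbatim}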
This unit system is convenient for considering\nquantities related to polarization; determining whether an asymmetry\nis positive or negative is equivalent to a gain of one bit of information,\nwhereas determining a phase angle quadrant is a gain of two bits.\n\n\\begin{figure}\n\\includegraphics[width=0.8\\textwidth]{InfoPlot}\\caption{\\label{fig:Rough-guide-to}(Color online) Rough guide to information gain as a function\nof the standard deviation $\\sigma$ in the real and imaginary parts\nof the amplitudes. }\n\n\n\n\\end{figure}\n\n\nFrom Fig. (\\ref{fig:Rough-guide-to}) it can be seen that if one wants\nto have a measured accuracy of the amplitudes to a value $\\sigma=0.05$,\nthe gain in information is roughly 21 bits (see dashed vertical line\non graph). Attempting to achieve much better accuracy than this from\nexperiments is not likely to be practical, so we should therefore\nregard the 21-bit information gain as a target figure to aim for,\nif we want to be able to say that we have extracted amplitudes. Furthermore,\nif two models differ by only a few percent in the values of their\namplitudes, it is not reasonable to expect that comparison with data\nwould ever lead to being able to differentiate them. \n\n\n\\subsection{Numerical calculation of entropies. }\n\nWhile the calculation sketched out above is a useful rough guide,\nwhen an actual set of observables have been measured, Eq. (\\ref{eq:nd-entropy})\nwill need to be evaluated numerically. The number of dimensions in\nthis system indicates the use of Monte Carlo techniques, and a simple\nimplementation of this is as follows.\n\nSample points are generated randomly in amplitude space with uniform\ndensity on the surface of the unit 7-sphere. The number of points\n$N_{0}$ needs to be sufficiently large to minimize Monte Carlo sampling\nuncertainty. For each point, the observables are evaluated according\nto the algebra of table \\ref{tab:Definition-of-observable} in the appendix. The use of random values of amplitudes was described in \\cite{Artru20091} in order to establish,\nfor combinations of observables, the limits of regions in observable\nspace that are allowed by postivity constraints, and using this a a guide for deriving inequalities. The present work goes further by not only taking into account these positivity constraints, but also estimating the PDFs of the combinations of observables. One can then simulate\nthe process of measuring an observable by weighting all the points\nby another PDF representing the measured observable. \n\nIn practice, the PDF of an asymmetry is likely to be something like\na beta distribution (or a Gaussian approximation thereof). For illustrative\npurposes, however, we can use a simple top-hat function, which for\na single observable is equivalent to reducing the range of values\nfrom $\\left[-1,1\\right]$ to $\\left[r-\\delta,r+\\delta\\right]$, where\n$r$ is the measured result with some uncertainty $\\pm\\delta$. If\nthe uniform probability density on a multi-dimensional surface $S$\nis $p\\left(x_{i}\\right)d\\left\\{ x_{i}\\right\\} =d\\left\\{ x_{i}\\right\\} \/S$.\nThe entropy of a uniform distribution in a volume $S$ is then\\[\nH=-\\int\\frac{1}{S}\\log\\left(\\frac{1}{S}\\right)d\\left\\{ x_{i}\\right\\} =\\log S,\\]\nas illustrated by the value for the 7-sphere in Eq. (\\ref{eq:7-sphere-1}). 
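The scheme just outlined can be realised in a few lines; the sketch below is illustrative only. Uniform points on the unit 7-sphere are generated here by normalizing Gaussian coordinates (one standard method, not necessarily the one used for the figures), and the asymmetry is a schematic two-plus/two-minus combination of the $\\left|a_{i}\\right|^{2}$, standing in for the appendix definitions of table \\ref{tab:Definition-of-observable}.

\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)      # arbitrary seed
N0 = 1_000_000

# Uniform points on the unit 7-sphere: normalise 8 Gaussian coordinates,
# then pair them into the four complex reduced amplitudes a_1..a_4.
x = rng.normal(size=(N0, 8))
x /= np.linalg.norm(x, axis=1, keepdims=True)
a = x[:, 0::2] + 1j * x[:, 1::2]
m2 = np.abs(a) ** 2

# Schematic single-spin asymmetry (illustrative sign convention only).
obs = m2[:, 0] + m2[:, 1] - m2[:, 2] - m2[:, 3]

# Simulated top-hat measurement, obs = -0.4 +/- 0.1.
N1 = int(np.sum(np.abs(obs - (-0.4)) <= 0.1))
print(N1, np.log2(N0 / N1))   # roughly an eighth survive, i.e. about 3 bits
\\end{verbatim}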
\n\nIf the surface is reduced by a cut, say from $S_{0}$ to $S_{1}$,\nthe probability density will be uniform in $S_{1}$ and zero otherwise,\nso the gain in information is simply the log of the ratio of the two\nsurface areas:\\[\nI=\\log S_{0}-\\log S_{1}\\]\n\n\nWhen cuts representing the measurement of a combination of observables\nare imposed, the number of remaining points $N_{1}$ is an estimate\nof the remaining volume, so\\[\nI=\\log N_{0}-\\log N_{1}.\\]\n So in order to gain the 21 bits of information, the surface area\nin amplitude space (and hence the number of points) needs to be reduced\nby a factor of $2^{21}\\approx10^{6}$.\n\nThis is best illustrated with a simple example, such as the measurement\nof one polarization observable, recoil polarization, say. Figure \\ref{fig:Distribution-of-values}\nshows in the light shade the distribution of $10^{6}$ points when\nsampling is done uniformly in amplitude space. The dark shaded region\nshows 126045 points selected when a simulated measurement of $R=-0.4\\pm0.1$\nis selected. The result is an information gain of $6\\log_{2}10-\\log_{2}126045=2.988\\pm0.003$\nbits, where the uncertainty is an estimate of the Monte Carlo error.\nSo we can expect that a measurement of one polarization observable\nto an accuracy of $\\pm$10\\% will give us about 3 bits of information.\n\n\\begin{figure}\n\n\n\\includegraphics[width=0.8\\columnwidth]{RecoilPlot}\\caption{\\label{fig:Distribution-of-values}(Color online) Distribution of values of recoil\npolarization from the uniform PDF in amplitude space. Shaded region\nrepresent the possible values remaining after a {}``measurement''. }\n\n\\end{figure}\n\n\nNote that the {}``uncut'' or prior distribution is quadratic in\nshape, not only for recoil polarization, but for all observables.\nThis is a consequence of the observables being bilinear combinations\nof the amplitudes. \n\n\n\\section{\\label{sec:Results}Simulating Combinations of Measurements}\n\n\n\\subsection{Measuring all ${\\cal S}$-type observables}\n\nFor the extraction of amplitudes, it is usually assumed that the ${\\cal S}$-type\nobservables ($B$, $R$ and $T$) have to be measured. Let us examine\nhow much information one gains by making such measurements. \n\nAs shown in \\cite{Artru:2006xf,Artru20091}, the constraints among\nobservables \\begin{equation}\n\\left|T-R\\right|\\leq1-B;\\quad\\left|T+R\\right|\\leq1+B\\label{eq:BRT-constraints}\\end{equation}\ncarve out a tetrahedron inside a cube $\\left[-1,+1\\right]^{3}$ in\n$BRT$-space. To approximate a measurement of $B$, $R$ and $T$,\nwe define a spherical region, of radius $r$, i.e.\\[\n\\left(B-x\\right)^{2}+\\left(R-y\\right)^{2}+\\left(T-z\\right)^{2}\\leq r^{2},\\]\nwhere $(x,y,z)$ are the coordinates of the sphere centre. This spherical\ncut can be moved to various points within the tetrahedron, and the\neffect on the distributions of magnitudes and phases studied. \n\nA typical example is depicted in Fig. \\ref{fig:Bottom-left-panel}.\nThe bottom left panel shows a projection of the $BRT$ distributions,\nwhich highlights the tetrahedral region. Recall that the points in\nthe light shaded region have been initially sampled over amplitude\nspace, so this represents a projection into $BRT$-space, and affirms\nthe constraints defined by Eq. (\\ref{eq:BRT-constraints}). 
The points\nin the dark sphere are those selected by the choice of cut region.\nThe radius of the spherical cut is 0.1, which is equivalent in information\ngain to a measured accuracy in each observable of better that $\\pm0.05$\n(see later). It is unlikely, when statistical and systematic uncertainties\nare taken into account, that experiments will be able to determine\nobservables to much greater accuracy than this.\n\nIn the example of Fig. \\ref{fig:Bottom-left-panel}, the spherical\ncut is just touching the midpoint of one of the tetrahedron faces.\nThe top row shows the magnitudes of the amplitudes, and it is clear\nthat values for each one can now be estimated. Note, however, that\nthere is much greater uncertainty in $\\left|a_{2}\\right|$ than in\nthe other ones. \n\nThe relative phase angles are displayed in the remaining panels. While\nonly three relative angles are independent, all six possibilities\nare shown. This is because, for situations in which the magnitudes\nof two amplitudes $a_{i}$ and $a_{j}$ are almost equal (as in this\ncase), very small uncertainties in the relative phase of the two amplitudes\nwith respect to a third ($\\theta_{ik}$ and $\\theta_{jk}$) could\nlead to very large uncertainties in their relative phase $\\theta_{ij}$.\nIt is to be expected that there should be no relative phase information\nfor transversity amplitudes if only ${\\cal S}$-type measurements\nare made, and this is apparent from the distributions in Fig. \\ref{fig:Bottom-left-panel}.\nThe observed increase towards $\\theta_{ij}=0^{\\circ}$ is due to the\nfact that the relative angles are formed from the difference of two\nuniform random variables.\n\n\\begin{figure}\n\n\n\\includegraphics[width=0.8\\textwidth]{TetraMidPlane}\\caption{\\label{fig:Bottom-left-panel}(Color online) Light shade - uniform sample of amplitude space; dark shade - region surviving cut. Panel (a) is the projection of BRT tetrahedron, (b)-(e) show the magnitudes of the amplitudes and the other panels are the distributions of relative phase angles (in degrees).}\n\n\n\n\\end{figure}\n\n\nBy examining the variations in the distributions of magnitudes and\nphases for different positions in the $BRT$ tetrahedron, one can\ndeduce some general heuristics governing the relation between what\nwe shall call a $BRT$ measurement and the magnitudes $\\left|a_{i}\\right|$.\nThese are listed in table \\ref{tab:Guide-to-relative}.\n\n\\begin{table}\n\\begin{tabular}{l|l}\nPosition in $BRT$ tetrahedron & Magnitude information\\tabularnewline\n\\hline\nCenter & All magnitudes equal\\tabularnewline\nMid-point of face & One magnitude small, others large and equal\\tabularnewline\nMid-point of edge & Two magnitudes small and equal, other two large and equal\\tabularnewline\nVertex & One magnitude large, others small and equal\\tabularnewline\n\\end{tabular}\n\n\\caption{\\label{tab:Guide-to-relative}Guide to relative size of magnitudes\nfor various positions within the $BRT$ tetrahedron}\n\n\\end{table}\n\n\nReturning to the information gain from a $BRT$ measurement, if we\nassume that the sampled points in amplitude space project into a uniformly\ndense $BRT$ tetrahedron, the entropy before a measurement is\\[\nH_{tetra}=\\log8-\\log3,\\]\ni.e. the volume is a third of the cube $\\left[-1,+1\\right]^{3}$.\nA 3D gaussian, with symmetric widths $\\sigma$ has entropy\\begin{equation}\nH_{3DGaussian}=\\frac{3}{2}\\log\\left(2\\pi e\\right)+3\\log\\sigma,\\label{eq:3D-Gaussian}\\end{equation}\nfrom Eq. (\\ref{eq:entropy-nd-gaussian}). 
To establish an equivalent\nspherical cut, one can use the entropy of a sphere of radius $r$\n(inside tetrahedron),\\begin{equation}\nH_{sphere}=\\log\\left(\\frac{4}{3}\\pi r^{3}\\right),\\label{eq:sphere}\\end{equation}\nand equate Eq. (\\ref{eq:3D-Gaussian}) and Eq. (\\ref{eq:sphere})\nto establish a relationship between $r$ and $\\sigma$:\\[\n\\log r-\\log\\sigma=\\frac{1}{2}\\log\\left(2\\pi e\\right)-\\frac{1}{3}\\log\\left(\\frac{4}{3}\\pi\\right),\\]\nfrom which we have $r\\approx2.564\\sigma$, so a spherical cut of radius\n0.1 is equivalent to Gaussian errors on $B$, $R$ and $T$ with $\\sigma=0.039$.\nUsing these figures, the predicted information gain is 9.31 bits for\nany position of the spherical cut within the tetrahedron. For the\ncase depicted in Fig. \\ref{fig:Bottom-left-panel}, the estimated\ninformation gain is 9.28 $\\pm$ 0.02. \n\nIt can be readily demonstrated that the estimates of information gain\nare equal to the predicted value of 9.31 (to within sampling errors)\nfor all the classes of position listed in table \\ref{tab:Guide-to-relative},\nhence verifying the assumption that the $BRT$ tetrahedron is uniformly\nsampled. So for real experiments, knowing the \\emph{uncertainties}\nof the measured values of $B$, $R$ and $T$ allows a calculation\nof information gain, irrespective of the \\emph{size} of the measured\nvalues.\n\n\n\\subsection{Towards Extraction of Amplitudes}\n\nCompared to the original guideline of 21 bits, we can see that the\nmeasurement of just ${\\cal S}$-type observables leaves a lot of information\nto be gained. With just under 10 bits, the magnitudes can be determined\nto roughly 10\\% accuracy, but to determine relative phases to better\nthan, say, a 16th ($=2^{-4}$) of the full angular range will require\nan additional 4 bits for each one. Adding this information together\nbrings us to 22 bits, one greater than the original estimate. One\nmight imagine that measurement of an additional four double polarization\nobservables would now be sufficient, given that individual measurements\ncan gain about 3 bits (see section \\ref{sec:Measuring-Information}).\nHowever, the complicated relations among observables now conspire\nagainst this. \n\nChiang and Tabakin \\cite{Chiang:1996em} systematically listed the\npossible combinations of observables that would lead to a {}``complete''\nset; there are a large number of them. They took one example set which\nshowed a counter-example to the claim in \\cite{Barker1975} that completeness\ncould only be attained if five observables are measured, of which\nfour should not be from the same ${\\cal BR}$-, ${\\cal BT}$- or ${\\cal RT}$-set.\nIn that example, $F$, $G$ and $L_{x}$ were taken to be measured,\nand whereas Ref. \\cite{Barker1975} claimed that $E$ and $L_{z}$\nwere needed, Ref. \\cite{Chiang:1996em} asserted that only $T_{x}$\nwas necessary.\n\nUsing the scheme already outlined, we may examine what happens when\nsimulated measurements are made of the same sets of observables. The\n$BRT$ measurements are all assumed to have been made, but to study\nwhether the results for information gain depend on the measure values,\nfour possible cases of position in the measured $BRT$ tetrahedron\nhave been used: center, mid-face, mid-edge and vertex. 
They give a\nrepresentative sample of all possible cases, and due to the tetrahedral\nsymmetry only one mid-face, mid-edge and vertex needs to be considered.\nFor each of the four cases, $10^{5}$ events were generated within\nthe defined spherical sub-region of the $BRT$ tetrahedron. These\nwere selected by rejection from an intial uniform sample over amplitude\nspace (the 7-sphere). In order to simulate possible measurements of\n$F$, $G$ and $L_{x}$, a $\\pm$0.1 cut on the generated points around\na central value of each observable was imposed. The central values\nare shown for each case in table \\ref{tab:Results-of-simulated}.\nRelatively large values were chosen for clarity of illustration, and\nnote that the same values of $F$, $G$ and $L_{x}$ could not be\nused for each case because of the interdependency of these observables\nwith the chosen $BRT$ values.\n\nFor each set of $BRT$ values, three cases were studied for combinations\nof further measurements: $T_{x}$ only (choice of Ref. \\cite{Chiang:1996em}),\n$E$ and $L_{z}$ (choice of Ref. \\cite{Barker1975}) and $T_{x}$,\n$E$ and $L_{z}$. Again, a $\\pm$0.1 cut on the generated points\naround a central value of each observable is applied. The results\nfor information gain are shown in the penultimate column of table\n\\ref{tab:Results-of-simulated}.\n\n\\begin{table}\n\n\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|}\n\\hline \n$BRT$ position & $F$ & $G$ & $L_{x}$ & $E$ & $L_{z}$ & $T_{x}$ & Information (bits) & Ambiguity?\\tabularnewline\n\\hline\n & 0.4 & 0.4 & 0.3 & - & - & 0.7 & 11.4 $\\pm$ 0.2 & Y\\tabularnewline\nCenter & 0.4 & 0.4 & 0.3 & 0.3 & 0.3 & - & 12.7 $\\pm$ 0.3 & N\\tabularnewline\n & 0.4 & 0.4 & 0.3 & 0.3 & 0.3 & 0.7 & 13.2 $\\pm$ 0.3 & N\\tabularnewline\n\\hline \n & 0.4 & -0.4 & 0.4 & - & - & 0.4 & 11.1 $\\pm$ 0.1 & Y\\tabularnewline\nMid-Face & 0.4 & -0.4 & 0.4 & 0.7 & -0.7 & - & 12.0 $\\pm$ 0.2 & N\\tabularnewline\n & 0.4 & -0.4 & 0.4 & 0.7 & -0.7 & 0.4 & 12.7 $\\pm$ 0.3 & N\\tabularnewline\n\\hline \n & 0.4 & 0.4 & 0.4 & - & - & -0.7 & 12.4 $\\pm$ 0.2 & N\\tabularnewline\nMid-Edge & 0.4 & 0.4 & 0.4 & 0.2 & -0.7 & - & 13.6 $\\pm$ 0.4 & N\\tabularnewline\n & 0.4 & 0.4 & 0.4 & 0.2 & -0.7 & -0.7 & 13.6 $\\pm$ 0.4 & N\\tabularnewline\n\\hline \n & 0.4 & 0.4 & 0.4 & - & - & 0.3 & 8.8 $\\pm$ 0.1 & Y\\tabularnewline\nVertex & 0.4 & 0.4 & 0.4 & 0.3 & 0.2 & - & 11.1 $\\pm$ 0.1 & N{*}\\tabularnewline\n & 0.4 & 0.4 & 0.4 & 0.3 & 0.2 & 0.3 & 11.5 $\\pm$ 0.2 & N{*}\\tabularnewline\n\\hline\n\\end{tabular}\\caption{\\label{tab:Results-of-simulated}Results of simulated measurements\nfor different combinations of observables. The values of each observable\nare all defined with a $\\pm$0.1 cut.}\n\n\\end{table}\n\n\nSeveral points are apparent from the results displayed. It is clear\nthat the more measurements that are made, the more information that\nis gained. It is also clear that the information gain is dependent\non the assumed measured $BRT$ values. Recall that the information\ngain obtained when measuring \\emph{only} $BRT$ values was independent\nof position in the $BRT$ tetrahedron; the difference is again due\nto the interdependency among observables. When the information gain\nis greater that 13, the number of points surviving the cuts is 10\nor less, so the estimates are of limited accuracy. \n\nAll the cases of combinations of observables that are listed in table\n\\ref{tab:Results-of-simulated} have previously been proved to result\nin mathematically complete sets. 
With the introduction of simulated\nexperimental uncertainty, however, this can no longer be taken to\nbe adequate. The last column of the table (headed {}``Ambiguity?'')\nindicates whether there are identifiable, unambiguous values of both\nmagnitudes and relative phases of the amplitudes. The mid-face case,\nwhere $T_{x}$ only has been measured in addition to the common set\nof observables, is illustrated in Fig. \\ref{fig:Example-of-residual}.\nDespite the few surviving points, it is fairly clear that there are\nno three relative phase angles that have a single cluster of points,\nand so an unambiguous extraction of amplitudes would not be possible.\n\nFor the cases listed in table \\ref{tab:Results-of-simulated} with\nN{*} for ambiguity, this indicates that while there is just one identifiable\ncluster of points in the distributions of relative phases, the spread\nin possible points is greater than 10\\% of the full angular range;\ni.e. there may be no quadrant ambiguity, but there remains a considerable\nuncertainty. \n\nIt appears, from this very small sample of possible outcomes, that\nfor measurement of double polarization observables an information\ngain of about 12 bits is required. Combining this number with that\nfrom the measurement of $BRT$ ($\\sim$10 bits), this leads us to\na crude, but very helpful conclusion: only when the total information\ngain from polarization observables is greater than about 21 bits should\nit be possible to extract amplitudes from experimental measurements.\nThis condition will apply irrespective of which particular set of\nobservables have been measured, since information gain is simply a\nmeasure how of much one has compressed the original uniform PDF in\namplitude space. This number is also in line with the crude calculation\ngiven in Eq. (\\ref{eq:info-gain}), where the real and imaginary parts\nof the amplitudes were assumed to be extracted to an accuracy of 0.05.\n\n\\begin{figure}\n\n\n\\includegraphics[width=0.8\\textwidth]{FGLx}\\caption{\\label{fig:Example-of-residual}\n(Color online) Example of residual ambiguity in relative phases after measurement of set $F$, $G$, $L_{x}$ and $T_{x}$. Light shade - uniform sample of amplitude space; Dark shade - region within BRT tetrahedron; Light shade - points surviving cuts in $F$, $G$, $L_{x}$ and $T_{x}$. Labels (a)-(k) are as in Fig. \\ref{fig:Bottom-left-panel}.}\n\n\n\n\\end{figure}\n\n\nThe scheme outlined above uses {}``cuts'' in the space of possible\nobservables to simulate the act of measurement, and the reduced observable\nspace is projected back into amplitude space to calculate the associated\nentropy. This is a crude, but effective, means of relating the observable-space\nPDF to the amplitude-space PDF. To apply the idea of information gain\nto the results of actual experiments, this scheme will have to be\nmodified. When the measurement of a set of observables is made, the\nresult will be an approximately multi-dimensional Gaussian PDF over\nthe range of those observables. A PDF in amplitude space can be constructed\nby sampling uniformly over all amplitude space, calculating the value\nof the observables for each sample point then weighting them with\nthe values of the experimentally determined observable PDF. The resulting\namplitude PDF can be made arbitrarily accurate, depending on the number\nof sample points, but for calculating information gains of 21 bits,\n${\\cal O}\\left(10^{7}\\right)$ points may be needed. 
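To make the cut-and-count estimate concrete, the sketch below (Python with NumPy; purely illustrative) samples the $BRT$ tetrahedron uniformly, in line with the assumption verified earlier, rather than the full amplitude space, applies a hard spherical cut of radius 0.1 and converts the surviving fraction of the prior into an information gain. The orientation of the tetrahedron and the position of the cut are arbitrary choices made for the illustration; the Gaussian-weighted scheme just described would replace the hard cut by weights on the sampled points.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Any tetrahedron of volume 8/3 will do for the volume bookkeeping; here we use
# the one inscribed in the cube [-1,1]^3 with vertices (1,1,1), (1,-1,-1),
# (-1,1,-1) and (-1,-1,1).
def in_tetra(p):
    x, y, z = p[:, 0], p[:, 1], p[:, 2]
    return ((x + y + z >= -1) & (x - y - z >= -1)
            & (-x + y - z >= -1) & (-x - y + z >= -1))

cube = rng.uniform(-1.0, 1.0, size=(2_000_000, 3))
tetra = cube[in_tetra(cube)]            # uniform sample of the BRT tetrahedron

# Hard spherical cut of radius 0.1 around an assumed measured (B,R,T) point,
# here the centre of the tetrahedron.
r = 0.1
survive = np.linalg.norm(tetra, axis=1) <= r

gain_bits = -np.log2(survive.mean())    # -log2 of the surviving prior fraction
print(gain_bits)                        # ~9.3 bits, cf. the predicted 9.31
\\end{verbatim}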
\n\nOne final comment related to practical experiments is in order. It\nis clear that for extraction of amplitudes, it is essential to be\nable to polarize photon beams and targets, and to detect recoil polarization.\nGiven that all three components of the reaction require this technological\neffort, the most obvious strategy is to worry less about which combination\nof observables to measure, and more about trying to measure as many\nas possible, with as great an accuracy as possible. The theoretical\nwork in Refs. \\cite{Barker1975,Chiang:1996em} is, however, still\na useful guide to selecting the combinations of observables that will\nmost efficiently lead to an information gain of 21 bits. The information\nmeasure (\\ref{eq:nd-entropy}) can be used in the design of experiments\nto provide an estimate of the degree of accuracy (and hence the integrated\nluminosity) required for amplitude extraction. \n\n\n\\section{\\label{sec:Comparison-of-Models}Comparison of Models with Data}\n\nHaving established how to estimate the quantity of information contained\nin measured data, can the measured data be used to extract information\nabout the parameters of an individual model?\n\n\nAn individual model will depend on some input parameters, $\\xi$,\nsay (e.g. coupling constants). Quite often, the comparison of model\ncalculations to data is used to extract {}``best fit'' values for\nthe input parameters, $\\xi^{\\star}$. We can use the information gain\nfor measured data to tell whether a fit to the new data will yield\nan improved knowledge of input parameters, compared to prior information.\nPrior to a fit procedure, knowledge about the possible values $\\xi$\nwill be encoded in a PDF $p\\left(\\xi,M\\right)$, where $M$ is there\nas a reminder that this quantity depends on a model. The amplitude\nPDF of the model, given a specified set of input parameters is $p\\left(x_{i}\\mid\\xi,M\\right)$,\nwhere as before $x_{i}$ represents the real and imaginary parts of\nthe amplitudes. The total prior PDF of the model is an integral over\nthe range of input parameters\\[\np\\left(x_{i}\\mid M\\right)=\\int p\\left(x_{i}\\mid\\xi,M\\right)p\\left(\\xi,M\\right)d\\xi,\\]\nso a model entropy $H_{model}$ can be evaluated by plugging the model\nprior PDF into Eq. (\\ref{eq:nd-entropy}). Then only if $H_{measured}\\ll H_{model}$\nis there likely to be a significant improvement in the knowledge of\nthe input parameters when the model is fitted to the data.\n\n\n\\section{Conclusions}\n\nIn this article a measure of information content, based on Shannon\nentropy, was introduced and has been applied to measurement of polarization\nobservables in pseudo-scalar meson photoproduction. Using the uncertainties\nin the measurements, the information entropy of the four amplitudes\ncan be calculated. It is assumed that a suitably accurate determination\nof the cross-section has been made, which gives an overall scale factor\nto the amplitudes. 
\n\nAn important finding is that, when allowing for a realistic but small\nmeasurement uncertainty, measuring only a mathematically complete\nset of observables is not enough to guarantee the extraction of amplitudes.\nInstead, a rule of thumb, based on quantifying information gain of\nabout 21 bits for each point in $\\left\\{ W,\\theta_{CM}\\right\\} $,\nis likely to be a more robust guide.\n\nAn extension of the work presented here is likely to be applicable\nto other reactions in which information content could determine whether\nmeasurements will be adequate to extract physically meaningful results.\nExamples such as the extraction of generalized parton distributions\nfrom DVCS-like asymmetries, or inferring the details of the nucleon-nucleon\ninteraction from the database of scattering observables may be fruitful\nareas of investigation.\n\\begin{acknowledgments}\nThis work was supported by the United Kingdom's Science and Technology\nFacilities Council.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \nWe study varieties of complete flags in nilpotent representations associated to an oriented cycle. In this situation the filtrations by radicals and socles play a special role (for an indecomposable module they are the functorial filtrations for a string module). Restriction to the one-dimensional subspace and relative position with respect to these flags gives us (recursively) a cell decomposition into affine cells. This method goes back to a work of Spaltenstein \\cite{Sp}, compare section 3. \nFor the loop (or Jordan) quiver with the zero endomorphism we get the Schubert cell decomposition for $\\mathbf{Gl}_n$ and for more general representations we reobtain the cell decomposition for Springer fibres in type $A$ studied by L. Fresse in \\cite{Fr}. \nAn efficient way to parametrize the cells is book-keeping of the relative position in terms of multi-tableaux (see section 4) and this gives a combinatorial tool to describe the Betti numbers of these quiver flag varieties. \n\nWe also observe that the torus action, which is given by scalar multiplication on each indecomposable summand, operates on the quiver flag variety with finitely many fixed points and one-dimensional orbits. In fact, every cell is a limit cell of a unique torus fixed point in it, and the cell decomposition is an instance of a Bialynicki-Birula decomposition (of a non-smooth variety, see e.g. \\cite{Car}). The theory developed by Goresky, Kottwitz and MacPherson (see \\cite{GKM}) gives a description of the torus equivariant cohomology of these varieties. \n\nAs an application, the Borel-Moore homology groups appear as modules for quiver Hecke algebras of nilpotent representations of the oriented cycle; \nsee \\cite{Ka3} for the definition in a more restrictive situation and a general study of their properties (Kato's situation does not apply because it requires all multiplicities $L_{\\lambda}$ to be non-zero).\nQuiver Hecke algebras are known to be graded Morita equivalent to (positively graded) standardly stratified algebras in the sense of Mazorchuk \\cite{Ma} and under this equivalence Kato's standard modules correspond to (Mazorchuk's) two types of standard modules. \n\n\n \n\n\\section{Nilpotent representations of the oriented cycle and quiver flag varieties}\n\nLet $K$ be an algebraically closed field. 
Let $A=KQ$ where $Q$ is the oriented cycle with vertices $Q_0=\\{1,\\ldots , n\\}$ identified with their residue classes in $\\mathbb{Z}\/n\\mathbb{Z}$ and $Q_1=\\{a_i\\colon i\\to i+1 \\mid i\\in \\mathbb{Z}\/n\\mathbb{Z}\\}$. An $A$-module $M$ is given by vector spaces $V_1, \\ldots , V_n$ and linear maps $f_i\\colon V_i\\to V_{i+1}$. Its $Q_0$-graded dimension is given by $\\Dim M=(\\dim V_1, \\ldots , \\dim V_n)$. We will only look at finite-dimensional nilpotent representations, that means $\\dim V_i<\\infty $ and \n$f_{i+n-1} \\circ \\cdots \\circ f_{i+1}\\circ f_i\\colon V_i\\to V_i$ is nilpotent for every $i\\in Q_0$. \n\n\\paragraph{Indecomposable nilpotent $A$-modules} For $1\\leq i\\leq n$ we write $S_i=E_i[1]$ for the simple left $A$-module supported in the vertex $i$. For $j\\in Q_0, \\ell\\in \\mathbb{N} $ we write $E_{j}[\\ell]$ for the unique indecomposable $A$-module with socle $S_j$ and $K$-vector space dimension $\\ell $.\nWe have $\\soc E_{j}[\\ell ] = S_j\\subset E_{j}[\\ell ]$ the inculsion of the socle, $\\rad E_{j}[\\ell ]= E_{j}[\\ell-1] \\subset E_{j}[\\ell ]$ (with $E_j[0]:=0$) the inclusion of the radical and $E_j[\\ell] \\surj \\Top E_j[\\ell] = S_{j-\\ell +1}$ the quotient map to the top. \n\n\n\\begin{defi}\nLet $M$ be a nilpotent $A$-module of dimension $\\Dim M =:\\underline{d} \\in \\mathbb{N}_0^{Q_0}$ and \n$\\underline{\\mathbf{d}} := (0=\\underline{d}^0 , \\underline{d}^1 ,\\ldots , \\underline{d}^{r}=\\underline{d})$ with $\\underline{d}^k \\in \\mathbb{N}_0^{Q_0}$ be a sequence We define \n\\[\n{\\rm Fl}\\binom{M}{\\underline{\\mathbf{d}}} := \\{ U=(U^0\\subset \\cdots \\subset U^{r}=M) \\mid U^k \\; A \n\\text{-submodule}, \\; \\Dim U^k =\\underline{d}^k \\}\n\\]\n\nThis defines a projective $K$-variety, we call it the \\textbf{quiver flag variety} for $(M, \\underline{\\mathbf{d}} )$. We will always assume that the flags are complete i.e. $\\left| \\underline{d}^{t+1}\\right|- \\left|\\underline{d}^t \\right| =1 , \\; 1\\leq t \\leq r -1$ (where $\\left| v\\right| =\\sum_{i=1}^nv_i$ for $v\\in \\mathbb{N}_0^{Q_0}$), with one exception: If $\\underline{\\mathbf{d}}=(0, \\underline{e} , \\underline{d})$ we denote the quiver flag variety by ${\\rm Gr}\\binom{M}{\\ee}$. \n\\end{defi} \n\n\nFor $d\\in \\mathbb{N}$ we denote $\\mathrm{Fl} (d)$ the variety of complete flags in $K^d$ and $\\mathrm{Fl}(0):=pt$. \n\\paragraph{Relative position}\nLet $\\mathrm{Fl} (\\underline{\\mathbf{d}} ):=\\prod_{i\\in Q_0} \\mathrm{Fl} (\\underline{d}_i), \\mathrm{Fl} (\\underline{\\mathbf{e}} ) :=\\prod_{i\\in Q_0} \\mathrm{Fl} (\\underline{e}_i )$ with $\\underline{d}_i=(0=d^0_i \\leq d_i^1 \\leq \\cdots \\leq d_i^{r}), \\underline{e}_i=(0=e^0_i \\leq e_i^1 \\leq \\cdots \\leq e_i^{\\mu})$, $d_i^{r}=e_i^{\\mu}$ for all $i\\in Q_0$. The relative position map \n$\\rm rp \\colon \\mathrm{Fl}(\\underline{\\mathbf{d}} )\\times \\mathrm{Fl} (\\underline{\\mathbf{e}} ) \\to \\prod_{i\\in Q_0} \\Mat \\bigl((r +1)\\times (\\mu +1) , \\mathbb{N}_0 \\bigr)$ is defined as \n$\n\\rm rp (U_i^{\\bullet}, W_j^{\\bullet})_{i,j\\in Q_0} := \\bigl((\\dim U_i^k \\cap W_i^\\ell)_{k,\\ell}\\bigr)_{i\\in Q_0}. \n$ \nNow fix $W\\in \\mathrm{Fl} (\\underline{\\mathbf{e}} )$ and given $w\\in \\prod_{i\\in Q_0} \\Mat \\bigl((r +1)\\times (\\mu +1) , \\mathbb{N}_0 \\bigr)$ we set \n\\[ \\mathrm{Fl} (\\underline{\\mathbf{d}} )_{w} :=\\{ U\\in \\mathrm{Fl} (\\underline{\\mathbf{d}} )\\mid \\rm rp (U,W)=w\\}. 
\\]\nFor our fixed representation $M$ we will assume from now on $V_i=K^{d_i}$, $i\\in Q_0$ and for the quiver flag variety from before, we set \n\\[ \n{\\rm Fl}\\binom{M}{\\underline{\\mathbf{d}}}_{w} := {\\rm Fl}\\binom{M}{\\underline{\\mathbf{d}}} \\cap \\mathrm{Fl} (\\underline{\\mathbf{d}} )_{w} \n\\]\nThis gives the \\textbf{stratification by relative position (with the fixed flag $W$) }in ${\\rm Fl}\\binom{M}{\\underline{\\mathbf{d}}}$. \n\n\n\\section{Spaltenstein's fibration and the cell decomposition}\n\nLet $M$ be an $A$-module.\nIf $L$ is $1$-dimensional $Q_0$-graded subvector space of $M$, the inclusion \n$j\\colon L\\inj M$ is $A$-linear if and only if $L\\subset \\soc (M)$. Thus for a dimension vector $\\underline{e}$ with $\\underline{e} =e_i$ for some $i\\in Q_0$, we have an isomorphism ${\\rm Gr} \\binom{M}{\\underline{e}} \\cong {\\rm Gr} \\binom{s_i }{1 }=\\mathbb{P}^{s_i -1}$ where $\\underline{s}:= \\Dim \\soc (M)$. \n\nWe need the following preparation. \nLet $i\\in Q_0$ and denote by $M_{(i)}$ the maximal subrepresentation of a representation $M$ such that $\\soc (M_{(i)})$ is a direct sum of copies of $S_i$. We get $M=\\bigoplus_{i\\in Q_0}M_{(i)}$ and we can see $\\Aut (M_{(i)})\\subset \\Aut (M)$ as a subgroup. \nWe denote by $F=F_M$ the underlying $Q_0$-graded flag of the following flag of submodules\n$ 0\\subset \\soc (M) \\cap \\rad^m (M) \\subset \\cdots \\subset \\soc (M)\\cap \\rad^2 (M) \\subset \\soc (M) \\cap \\rad (M) \\subset \\soc (M) $. \n\nNow fix $i\\in Q_0$ and let $s_i:= \\dim \\soc(M_{(i)})$. We can consider $F_i$ as a flag in the vector space $\\soc M_{(i)}$. We denote by $P=P_i \\subset \\mathbf{Gl}_{s_i}$ the stabilizer of this flag. \nThe restriction to the socle gives a group homomorphism \n\\[ \n\\varphi\\colon \\Aut(M_{(i)} ) \\to \\mathbf{Gl}_{s_i}.\n\\]\nWe fix a vector space basis for $M_{(i)}$ which is compatible with the Krull-Schmidt decomposition and a complete flag refining $F_i$ such that its stabilizer $B\\subset P$ is a lower triangular standard Borel. \n\n\n\\begin{lemma} The image of $\\varphi $ is $P$ and \nthere is a group morphism $\\theta \\colon P\\to \\Aut (M_{(i)}) $ such that $\\varphi\\circ \\theta =\\id_P$. \n\\end{lemma}\n\n\\begin{proof} Clearly the image is contained in $P$ since any $A$-linear map maps radicals to radicals. We look at the $K$-algebra homomorphism $\\Phi$ given by taking socle \nwhose restriction to the units gives the map \n$\\varphi$. \n\\[ \\Phi \\colon \\End_A (M_{(i)}) \\to \\mathfrak{p}:=\\{ f\\in \\End_K (\\soc M_{(i)})\\mid f (F_i^k)\\subset F_i^k, k\\leq m\\} .\\]\nIf we fix a vector space basis for $M$ compatible with the direct sum decomposition, we can define a $K$-algebra homomorphism $\\Theta \\colon \\mathfrak{p}\\to \\End_A(M_{(i)})$ such that \n$\\Phi\\circ \\Theta =\\id_{\\mathfrak{p}}$ as follows. \nAny coordinate $(s,t)$ in $\\End_K(\\soc M_{(i)})$ corresponds (by our choice of a basis) to a pair of socles of direct summands $E_i[\\ell]$ and $E_i[k]$ of $M_{(i)}$. If $k<\\ell$, then there is no nonzero $A$-linear map $E_i[\\ell]\\to E_i[k]$, we set $\\theta_{s,t} =0$. If $k\\geq \\ell $ we fix the (unique) inclusion $\\theta_{s,t}\\colon E_i[\\ell] \\subset E_i[k]$ of the submodule. Then, $\\Theta ((a_{s,t})) := \\sum_{s,t} a_{s,t}\\theta_{s,t}$ defines the desired map. 
\n\\end{proof}\nWe call two points $U^\\bullet, W^\\bullet \\in {\\rm Fl}\\binom{M}{\\underline{\\mathbf{d}}}$ equivalent if \nthe quotients of $M\/U^k$ and $M\/V^k$ are isomorphic $A$-modules for $1\\leq k\\leq r$. There are only finitely many equivalence classes, we denote by $sp \\colon {\\rm Fl}\\binom{M}{\\underline{\\mathbf{d}}} \\to \\mathcal{S}, U^\\bullet \\mapsto ([M\/U^k])_{1\\leq k\\leq r}$ the map to the equivalence class. \n\nNow we can prove the following analogue of a result of Spaltenstein \\cite[Lemma, p. 453]{Sp}.\n\n\\begin{theorem}[Spaltenstein's fibration] Let $Q$ be the oriented cycle with $n$ vertices, and $\\underline{\\mathbf{d}}=(\\underline{d}^0,\\underline{d}^1,\\ldots , \\underline{d}^{r}=\\underline{d})$ a complete dimension filtration. Let $M$ be a $\\underline{d}$-dimensional nilpotent $A$-module, for $w\\in \\prod_{i\\in Q_0} \\Mat (1 \\times (s_i +1), \\mathbb{N}_0)$ we write $()_w$ for the relative position with respect to a complete flag refining $\\soc(M) \\cap \\rad^{\\bullet}(M)$. \nThen, there is an isomorphism of algebraic varieties \n\\[ \n f\\colon p^{-1} ({\\rm Gr} \\binom{M}{\\underline{d}^{1}}_{w}) \\to\n{\\rm Gr} \\binom{M}{\\underline{d}^{1}}_{w} \\times {\\rm Fl}\\binom{N}{\\underline{\\mathbf{e}}} \n\\] \nwith $\\underline{\\mathbf{e}}:= (\\underline{d}^1-\\underline{d}^1,\\underline{d}^2-\\underline{d}^1,\\ldots , \\underline{d}^{r }-\\underline{d}^1)$, $p\\colon {\\rm Fl}\\binom{M}{\\underline{\\mathbf{d}}} \\to {{\\rm Gr} \\binom{M}{\\underline{d}^{1}}}$ is the map forgetting all but the first subspace, and $N = M\/U_0$ with $(U_0\\subset M) \\in {{\\rm Gr} \\binom{M}{\\underline{d}^{1}}}_w $ arbitrary, such that the following \ndiagram is commutative \n\\[ \n\\xymatrix{ &{\\rm Gr} \\binom{M}{\\underline{d}^{1}}_{w} & \\\\ \np^{-1} ({\\rm Gr} \\binom{M}{\\underline{d}^{1}}_{w})\\ar[ur]^{p} \\ar[rr]^f \\ar[dr]_{sp} &&{\\rm Gr} \\binom{M}{\\underline{d}^{1}}_{w} \\times {\\rm Fl}\\binom{N}{\\underline{\\mathbf{e}}} \\ar[ul]_{pr_1}\\ar[dl]^{([N], sp)\\circ pr_2} \\\\ &\\mathcal{S} .&\n}\n\\]\n\\end{theorem}\n \n\\begin{proof}\nLet $L\\in {\\rm Gr} \\binom{M}{\\underline{d}^{1}}_w=\\{[0:\\cdots :0:1:x_1:\\ldots :x_s]\\in \\mathbb{P}^{s_j -1}\\mid x_i \\in K, 1\\leq i\\leq s\\}$. Observe, that by definition $B$ operates transitive on ${\\rm Gr} \\binom{M}{\\underline{d}^{1}}_w$ since it is a $B$-orbit. \nUsing the previous lemma we can find an algebraic map $\\phi \\colon \n{\\rm Gr} \\binom{M}{\\underline{d}^{1}}_w \\to \\Aut (M), \\; L\\mapsto \\phi_L$ with $\\phi_L (L)=U_0$. More precisely, if $L$ corresponds to the column $(0,\\ldots ,0, 1, x_1, \\ldots , x_s)^t$ and $U_0$ to $(0,\\ldots ,0 ,1, 0,\\ldots ,0)^t$, then we \ntake the image of \n\\[\n\\begin{tikzpicture}[baseline=(current bounding box.center)]\n\\matrix (m) [matrix of math nodes,nodes in empty cells,right delimiter={]},left delimiter={[} ]{\n1 & & & & & \\\\\n & & & & & \\\\\n & & & & & \\\\\n & & 1 & & & \\\\\n & & -x_1 & & & \\\\\n & & & & & \\\\\n & & -x_s & & & 1 \\\\ \n} ;\n\\draw[dotted] (m-1-1)-- (m-4-3);\n\\draw[dotted] (m-4-3)-- (m-7-6);\n\\draw[dotted] (m-5-3)-- (m-7-3);\n\\end{tikzpicture}\n\\]\nunder $B\\subset P\\subset \\Aut(M_{(i)} \\subset \\Aut (M)$. 
\nWe define \n\\[\n\\begin{aligned}\nf\\colon p^{-1} ({\\rm Gr} \\binom{M}{\\underline{d}^{1}})_{w} &\\to {\\rm Gr} \\binom{M}{\\underline{d}^{1}}_{w} \\times {\\rm Fl}\\binom{N}{\\underline{\\mathbf{e}}} \\\\\nU=(U^1\\subset \\cdots \\subset U^{r}=M) &\\mapsto (U^1, \\phi_{U^1}(U)\/U_0 ) .\n\\end{aligned}\n\\]\nThis is a morphism of algebraic varieties. To find the inverse, we consider $\\pi\\colon M\\to M\/U_0 =N$ the canonical projection and define \n\\[\n\\begin{aligned}\n{\\rm Gr} &\\binom{M}{\\underline{d}^{1}}_{w} \\times {\\rm Fl}\\binom{N}{\\underline{\\mathbf{e}}} \\to p^{-1} ({\\rm Gr} \\binom{M}{\\underline{d}^{1}}_{w})\\\\\n&\\left( L, V=(V^1\\subset \\cdots \\subset V^{r -1}=N)\\right) \\\\\n&\\quad \\mapsto (L \\subset \\phi_L^{-1}\\pi^{-1}(V^1)\\subset \\phi_L^{-1}\\pi^{-1}V^2 \\subset \\cdots \n\\subset \\phi_L^{-1}\\pi^{-1}V^{r -1 }=M).\n\\end{aligned}\n\\]\n\\end{proof}\n\n\\begin{defi} Let $X$ be a scheme. An \\text{affine cell decomposition} is a filtration \n\\[X=X_m \\supset X_{m-1}\\supset \\cdots \\supset X_0\\supset X_{-1}=\\emptyset \\]\nby closed subschemes, with each $X_{i}\\setminus X_{i-1}$ is a disjoint union of finitely many schemes $U_{ij}$ isomorphic to affine spaces $\\mathbb{A}^{n_{ij}}$. \n\\end{defi}\n\n\\begin{coro} \nLet ${\\rm Fl}\\binom{M}{\\underline{\\mathbf{d}}}$ be a complete quiver flag variety for the oriented circle and $M$ be a nilpotent representation. Then, it has an affine cell decomposition. If $K=\\mathbb{C}$ or $\\overline{\\mathbb{Q}_{\\ell}}$ then it is pure. \n\\end{coro}\n\n\nNote that there is a closed embedding by forgetting the $A$-module structure \n$\\kappa \\colon {\\rm Fl}\\binom{M}{\\underline{\\mathbf{d}}} \\to \\mathrm{Fl} (\\underline{\\mathbf{d}} ) =:\\mathcal{F}$. Forgetting all other then the first subspace gives a commutative square \n\\[ \n\\xymatrix{ \\mathcal{F} \\ar[r]^q & \\mathbb{P}^{d_i -1} \\\\\n{\\rm Fl}\\binom{M}{\\underline{\\mathbf{d}}} \\ar[r]^p\\ar[u]^{\\kappa} & \\mathbb{P}^{s_i-1}\\ar[u]^{\\rho} \n}\n\\] \nwhere the vertical maps are closed immersions. By choosing appropiate cells in $\\mathcal{F}$, there is an affine cell decomposition in $\\mathcal{F}$ such that \nthe intersection with ${\\rm Fl}\\binom{M}{\\underline{\\mathbf{d}}}$ is a union of cells in $\\mathcal{F}$. This implies that the open complement of ${\\rm Fl}\\binom{M}{\\underline{\\mathbf{d}}}$ in $\\mathcal{F}$ also has an affine cell decomposition, this is the main ingredient to prove the following theorem, for $n=1$ compare \\cite[section 4.4, 4.5]{DP}. \n\n\n\\begin{satz}\nLet be $K=\\mathbb{C}$. Let $Q$ be the oriented cycle, $M$ be a nilpotent representation and $\\underline{d}$ a complete dimension filtration. \nThe pullback along $\\kappa \\colon {\\rm Fl}\\binom{M}{\\underline{\\mathbf{d}}} \\to \\mathcal{F}$ induces a surjective ring homomorphism \non singular cohomology \n\\[ \n\\kappa^* \\colon H^*(\\mathcal{F}) \\to H^*({\\rm Fl}\\binom{M}{\\underline{\\mathbf{d}}}). \n\\]\n\\end{satz}\n\n\\paragraph{proof:} \nBy the universal coefficient theorem for projective varieties, it suffices to show that $\\kappa_* \\colon H_*^{BM}({\\rm Fl}\\binom{M}{\\underline{\\mathbf{d}}}) \\to H_*^{BM}(\\mathcal{F} )$ is injective. Let $U$ be the open complement of ${\\rm Fl}\\binom{M}{\\underline{\\mathbf{d}}}$ in $\\mathcal{F}$. 
By the construction of the Spaltenstein fibration, we get that $\\mathcal{F}$ has a cell decomposition continuing the one of ${\\rm Fl}\\binom{M}{\\underline{\\mathbf{d}}}$, therefore $U$ also has also a cell decomposition. Since we have odd-degree vanishing also for $U$ the localization sequence gives a short exact sequence \n\\[\n0\\to H_*^{BM} ({\\rm Fl}\\binom{M}{\\underline{\\mathbf{d}}} ) \\to H_*^{BM} (\\mathcal{F} ) \\to H_*^{BM} (U) \\to 0 \n\\]\n\\hfill $\\Box $\n\n\n\\section{Parametrizing cells by multi-tableaux} \\label{PS}\n\n\nTo the nilpotent indecomposable $A$-module $E_i[\\ell]$ we associate a row of $\\ell $ boxes, indexing the columns by the elements $i-\\ell +1,i-\\ell, \\ldots , i$ in $\\mathbb{Z}\/n$ from left to right. \nTo any nilpotent $A$-module we associate a multipartition $Y=(Y_j)_{j\\in \\mathbb{Z}\/n}$ by taking the young diagram $Y_j$ corresponding to all indecomposable summands which have top supported on $j\\in \\mathbb{Z}\/n$. For example for $n=3$ and \\[M=(E_3[3]^2\\oplus E_2[2]) \\oplus (E_2[4] \\oplus E_3[2]) \\] we visualize the multipartition $Y_M= ((3,3,2), (4,2), \\emptyset )$ as follows, see the figure on the left. \n\\[ \n\\begin{array}{c}\n\\makebox[9pt][c]{\\text{\\tiny{1}}}\\makebox[9pt][c]{\\text{\\tiny{2}}}\\makebox[9pt][c]{\\text{\\tiny{3}}}\\makebox[9pt][c]{\\text{\\tiny{1}}}\\makebox[9pt][c]{\\text{\\tiny{2}}}\\\\\n\\ytableausetup{smalltableaux}\n\\begin{ytableau}\n{} & & & \\none & \\none \\\\\n& & & \\none & \\none \\\\\n& &\\none &\\none & \\none \\\\\n\\none & & & & \\\\\n\\none & & & \\none & \\none \n\\end{ytableau}\n\\end{array}\n\\quad \\quad \\quad \n\\begin{array}{c}\n\\makebox[9pt][c]{\\text{\\tiny{1}}}\\makebox[9pt][c]{\\text{\\tiny{2}}}\\makebox[9pt][c]{\\text{\\tiny{3}}}\\makebox[9pt][c]{\\text{\\tiny{1}}}\\makebox[9pt][c]{\\text{\\tiny{2}}}\\\\\n\\ytableausetup{smalltableaux}\n\\begin{ytableau}\n{} & & *(gray) & \\none & \\none \\\\\n& & *(gray) & \\none & \\none \\\\\n& &\\none &\\none & \\none \\\\\n\\none & & & & \\\\\n\\none & & *(gray) & \\none & \\none \n\\end{ytableau}\n\\end{array}\n\\quad \\quad \\quad \n\\begin{array}{c}\n\\makebox[9pt][c]{\\text{\\tiny{1}}}\\makebox[9pt][c]{\\text{\\tiny{2}}}\\makebox[9pt][c]{\\text{\\tiny{3}}}\\makebox[9pt][c]{\\text{\\tiny{1}}}\\makebox[9pt][c]{\\text{\\tiny{2}}}\\\\\n\\ytableausetup{smalltableaux}\n\\begin{ytableau}\n{} & & & \\none & \\none \\\\\n& & & \\none & \\none \\\\\n& *(gray) &\\none &\\none & \\none \\\\\n\\none & & & & *(gray) \\\\\n\\none & & & \\none & \\none \n\\end{ytableau}\n\\end{array}\n\\]\nIn the middle and the right hand side figure we shaded the socle at $3$ and the socle at $2$ respectively, the socle at $1$ is zero. \nFrom now on we choose a basis of $M$ such that each box corresponds to a basis vector and we order the basis vectors starting at the first row from left to right and then the second row from left to right and so on. \nWhen we include a line into the vector space given by the socle at $3$, we find a first \ndirect summand (with respect to the fixed basis from before) where the inclusion is nonzero, we call the corresponding box the \\textbf{pivot box}. Looking at lines with the same pivot box defines a Schubert cell, \ne.g. 
we indicate the Schubert cells as follows, the pivot box gets a $1$, the number of stars indicates the dimension of the cell\n\\[ \n\\begin{array}{c}\n\\makebox[9pt][c]{\\text{\\tiny{1}}}\\makebox[9pt][c]{\\text{\\tiny{2}}}\\makebox[9pt][c]{\\text{\\tiny{3}}}\\makebox[9pt][c]{\\text{\\tiny{1}}}\\makebox[9pt][c]{\\text{\\tiny{2}}}\\\\\n\\ytableausetup{smalltableaux}\n\\begin{ytableau}\n{} & & 1 & \\none & \\none \\\\\n& & \\star & \\none & \\none \\\\\n& &\\none &\\none & \\none \\\\\n\\none & & & & \\\\\n\\none & & \\star & \\none & \\none \n\\end{ytableau}\n\\end{array}\n\\quad \n\\begin{array}{c}\n\\makebox[9pt][c]{\\text{\\tiny{1}}}\\makebox[9pt][c]{\\text{\\tiny{2}}}\\makebox[9pt][c]{\\text{\\tiny{3}}}\\makebox[9pt][c]{\\text{\\tiny{1}}}\\makebox[9pt][c]{\\text{\\tiny{2}}}\\\\\n\\ytableausetup{smalltableaux}\n\\begin{ytableau}\n{} & & 0 & \\none & \\none \\\\\n& & 1 & \\none & \\none \\\\\n& &\\none &\\none & \\none \\\\\n\\none & & & & \\\\\n\\none & & \\star & \\none & \\none \n\\end{ytableau}\n\\end{array}\n\\quad \n\\begin{array}{c}\n\\makebox[9pt][c]{\\text{\\tiny{1}}}\\makebox[9pt][c]{\\text{\\tiny{2}}}\\makebox[9pt][c]{\\text{\\tiny{3}}}\\makebox[9pt][c]{\\text{\\tiny{1}}}\\makebox[9pt][c]{\\text{\\tiny{2}}}\\\\\n\\ytableausetup{smalltableaux}\n\\begin{ytableau}\n{} & & 0 & \\none & \\none \\\\\n& & 0 & \\none & \\none \\\\\n& &\\none &\\none & \\none \\\\\n\\none & & & & \\\\\n\\none & & 1 & \\none & \\none \n\\end{ytableau}\n\\end{array}\n\\]\nLet us look at the first cell. The quotients $M\/S$ with $S\\subset \\soc (M_{(3)})$ can be visualized by the left figure below.\n\\[ \n\\begin{array}{c}\n\\makebox[9pt][c]{\\text{\\tiny{1}}}\\makebox[9pt][c]{\\text{\\tiny{2}}}\\makebox[9pt][c]{\\text{\\tiny{3}}}\\makebox[9pt][c]{\\text{\\tiny{1}}}\\makebox[9pt][c]{\\text{\\tiny{2}}}\\\\\n\\ytableausetup{smalltableaux}\n\\begin{ytableau}\n{} & &\\none & \\none & \\none \\\\\n& & & \\none & \\none \\\\\n& &\\none &\\none & \\none \\\\\n\\none & & & & \\\\\n\\none & & & \\none & \\none \n\\end{ytableau}\n\\end{array}\n\\quad \\quad \\quad \\quad \\quad \\quad \n \\begin{array}{c}\n\\makebox[9pt][c]{\\text{\\tiny{1}}}\\makebox[9pt][c]{\\text{\\tiny{2}}}\\makebox[9pt][c]{\\text{\\tiny{3}}}\\makebox[9pt][c]{\\text{\\tiny{1}}}\\makebox[9pt][c]{\\text{\\tiny{2}}}\\\\\n\\ytableausetup{smalltableaux}\n\\begin{ytableau}\n{} & 0& \\none & \\none & \\none \\\\\n& & & \\none & \\none \\\\\n& 1 &\\none &\\none & \\none \\\\\n\\none & & & & \\star \\\\\n\\none & & & \\none & \\none \n\\end{ytableau}\n\\end{array}\n\\]\nThe right hand side figure above shows a cell in the socle at $2$ with pivot box in row $3$. \nIn the figure above on the left each box corresponds to the residue class of a basis vector of $M$ corresponding to a box at the same place. This gives an ordered basis of $M\/S$.\nTo parametrize the cells in the complete flag variety, we need to parametrize the cells in the socles of the successive quotients. Now, to parameterize the cells in ${\\rm Fl}\\binom{M}{\\underline{\\mathbf{d}}}$ with $\\underline{\\mathbf{d}}$ complete, we put an $r=\\dim M$ into an end box in the column specified by $\\underline{d}^1$. Then, we put an $r-1$ into an end or not filled box in the column specified by $\\underline{d}^2-\\underline{d}^1$, etc. We obtain a filling of $Y_M$ with numbers \n$1,\\ldots , r$ which is increasing from left to right in the rows, we call such a filling a \\textbf{row multi-tableau} of dimension $\\underline{\\mathbf{d}} $. 
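For instance, one admissible row multi-tableau on $Y_M$ for the running example (so $r=\\dim_K M=14$), corresponding to one particular choice of complete dimension filtration $\\underline{\\mathbf{d}}$, is
\\[ 
\\begin{array}{c}
\\makebox[9pt][c]{\\text{\\tiny{1}}}\\makebox[9pt][c]{\\text{\\tiny{2}}}\\makebox[9pt][c]{\\text{\\tiny{3}}}\\makebox[9pt][c]{\\text{\\tiny{1}}}\\makebox[9pt][c]{\\text{\\tiny{2}}}\\\\
\\ytableausetup{smalltableaux}
\\begin{ytableau}
1 & 6 & 11 & \\none & \\none \\\\
2 & 7 & 12 & \\none & \\none \\\\
3 & 8 &\\none &\\none & \\none \\\\
\\none & 4 & 9 & 10 & 13 \\\\
\\none & 5 & 14 & \\none & \\none 
\\end{ytableau}
\\end{array}
\\]
The entries increase from left to right along every row, and the filtration can be read off again: the box containing $k$ lies in the column labelled by the vertex at which $\\underline{d}^{r-k+1}-\\underline{d}^{r-k}$ is concentrated.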
Let $\\tau $ be a row multi-tableau, then we write $C_{\\tau} \\subset {\\rm Fl}\\binom{M}{\\underline{\\mathbf{d}}}$ for the corresponding cell. We can recover a dimension filtration $\\underline{\\mathbf{d}}=: \\Dim \\tau$ and an isomorphism class of a module $Y_{\\tau}$ from $\\tau$ by counting boxes in the columns and by looking at the shape of $\\tau$. \nFor $k\\in \\{1, \\ldots , r\\}$ we write $c_k\\in \\mathbb{Z}\/n$ for the column containing \n$k$ and $r_k\\in \\mathbb{N}$ for the row containing $k$. We define \n\\[ d_{\\tau }(k) := \\# \\{ s\\in \\{1, \\ldots , k-1\\}\\mid c_s=c_k, r_s>r_k, r_s\\neq r_t, \\; s 0$ with respect to \\eqref{eq:case2} and \\eqref{eq:integralF}:\n\n\\begin{equation}\\label{cl}\n\tp_\\ell = \\dfrac{1}{\\pi} \\int\\limits_{0}^{2\\pi} \\phi(\\cos\\theta)\\cos(\\ell\\theta)\\,\\mathrm{d}\\theta.\n\\end{equation}\n\nFor $\\theta \\in [0,2\\pi]$, $\\cos\\theta$ lies in interval $[-1,1]$, so the image of $\\phi(\\cos\\theta)$ also lies in the interval $[-1,1]$. Thus, we find:\n\n\\begin{equation}\n\t|p_0| \\leq \\dfrac{1}{2\\pi}\\int\\limits_{0}^{2\\pi} |\\phi(\\cos\\theta)|\\,\\mathrm{d}\\theta \\leq \\dfrac{1}{2\\pi}\\int\\limits_{0}^{2\\pi} \\,\\mathrm{d}\\theta = 1.\n\\end{equation}\n\nand\n\n\\begin{equation}\n|p_\\ell| \\leq \\dfrac{1}{\\pi}\\int\\limits_{0}^{2\\pi} |\\phi(\\cos\\theta)||\\cos(\\ell\\theta)|\\,\\mathrm{d}\\theta \\leq \\dfrac{1}{\\pi}\\int\\limits_{0}^{2\\pi}|\\cos(\\ell\\theta)| \\,\\mathrm{d}\\theta.\n\\end{equation}\n\nTo calculate this last integral, we use the periodicity of the function $\\cos(\\ell\\theta)$. This function has a period of $2\\pi \/ \\ell$, so goes $\\ell$ times up and down on the interval $[0,2\\pi]$. So, after taking the absolute value of this function, we find $2\\ell$ times the integral over the positive part of a period, for example, the interval $[-\\pi\/2\\ell, \\pi\/2\\ell]$:\n\n\\begin{equation}\n\\begin{aligned}\n \\dfrac{1}{\\pi}\\int\\limits_{0}^{2\\pi}|\\cos(\\ell\\theta)| \\,\\mathrm{d}\\theta = \\dfrac{2\\ell}{\\pi} \\int\\limits_{-\\pi\/2\\ell}^{\\pi\/2\\ell} \\cos(\\ell\\theta)\\,\\mathrm{d}\\theta = \\dfrac{4}{\\pi}.\n\\end{aligned}\n\\end{equation}\n\nThus, the following bounds were obtained:\n\n\\begin{equation} \\label{boundsDS}\n |p_0| \\leq 1 \\hspace{0.5cm} \\text{ and } \\hspace{0.5cm} |p_\\ell| \\leq \\dfrac{4}{\\pi}, \\hspace{0.3cm} \\ell =1,\\dots,n.\n\\end{equation}\n\nThese constraints on the design space are exploited in the subsequent optimization.\n\n\n\\section{Results}\n\\subsection{Motion Profile Optimization}\nIn order to assess the performance of the proposed method, a set of optimizations has been performed on the industrial pick-and-place unit depicted in Fig. \\ref{fig:Experimental_Setup}. The mechanism is required to move between its start position $\\theta_A$ of $0^\\circ$ and end position $\\theta_B$ of $173.6^\\circ$ and has a motion time $\\Delta t$ of $73.5ms$. As for the constraint, two different cases are considered, namely\n\n\\begin{itemize}\n \\item \\textit{Jerk Free (JF)}: Only the boundary constraints of \\eqref{eq:constraints} are taken into account. The corresponding rescaled Chebyshev position profile $\\phi(x)$ of degree $n$ is hereafter referred to as \\textit{cheb\"n\"}. A 5th-degree polynomial, hereafter indicated as \\textit{poly5}, is taken as the reference motion profile for comparison purposes. 
This is the smallest degree polynomial that satisfies the constraints.\n \n \\item \\textit{Jerk Zero (J0)}: In addition to the constraints of a jerk-free optimization, a zero-jerk constraint at the start and endpoint \\eqref{eq:constraint_jerk} is added to the motion profile definition. The resulting $n$-th degree position profile $\\phi(x)$ is referred to as \\textit{cheb\"n\"J0}. The reference motion profile is in this case a 7th-degree polynomial, hereafter referred to as \\textit{poly7J0}.\n\\end{itemize}\n\nFor every case, the resulting optimization problem is solved in a MATLAB environment for degrees $n= 7, 9, 11,$ and $13$. The results are presented in Fig. \\ref{fig:Optimization_Results} and Tables \\ref{tab:results_jerkfree} \\& \\ref{tab:results_jerkzero}, where for every motion profile the corresponding RMS torque $\\tau_{rms}$ and solve time $t_{sol}$ are displayed. Savings up to $54.4\\%$ are obtained in under $0.77$ s. The results clearly converge towards a minimal value for increasing degree $n$. In general, the motion profiles which include the jerk constraint \\eqref{eq:constraint_jerk} have slightly larger $\\tau_{rms}$ values, which is to be expected since this extra constraint limits the acceleration near the endpoints, precisely where high accelerations are desirable because the inertia is low.\n\nIn Table \\ref{tab:results_jerkfree}, the $\\tau_{rms}$ values of a conventional trapezoidal 1\/3 motion profile are presented as well, which accelerates during 1\/3rd of the time, moves at a constant speed during 1\/3rd, and decelerates during the last 1\/3rd \\cite{Park1996}. What is interesting in this table is that the torque demand can already be significantly reduced by selecting an adequate default motion law, although the greatest savings are only realized after optimization.\n\nIt is worth noting that for the jerk-free motion profiles, the same solution was found by both the genetic algorithm and the gradient-based solver. However, the calculation times with the GA are considerably higher. When including the jerk constraint, the GA comes close but does not completely reveal the full optimization potential. Therefore, for the present study, gradient-based optimization algorithms are preferable. Since the GA did not obtain a better solution for any motion profile in the bounded search space, we can expect that the results obtained with the gradient-based method are globally optimal solutions.\n\nAlthough only the forward motion is considered here, similar results can be obtained for the return motion by simply changing the position constraints.\n\n\n\\begin{table} \n\\caption{Results of the motion profile optimization (Jerk Free).}\n\\label{tab:results_jerkfree}\n\\centering\n\\begin{tabular}{lcccc}\n & \\multicolumn{2}{c}{\\textbf{Gradient-Based }} & \\multicolumn{2}{c}{\\textbf{Genetic Algorithm}} \\\\\n\\textbf{JF} & \\textbf{$\\tau_{rms}\\,[Nm]$} & \\textbf{$t_{sol}\\, [s]$} & \\textbf{$\\tau_{rms}\\,[Nm]$} & \\textbf{$t_{sol}\\,[s]$} \\\\ \n\\hline\\hline\npoly5 (ref.) 
& \\begin{tabular}[c]{@{}c@{}}22.48\\end{tabular} & - & \\begin{tabular}[c]{@{}c@{}}22.48\\end{tabular} & - \\\\ \n\\cmidrule(lr){1-5}\ntrap & \\begin{tabular}[c]{@{}c@{}}17.16 \\\\-23.7\\%\\end{tabular} & - & \\begin{tabular}[c]{@{}c@{}}17.16\\\\-23.7\\%\\end{tabular} & - \\\\ \n\\cmidrule(lr){1-5}\ncheb7 & \\begin{tabular}[c]{@{}c@{}}13.78 \\\\-38.7\\%\\end{tabular} & 0.21 & \\begin{tabular}[c]{@{}c@{}}13.78\\\\-38.7\\%\\end{tabular} & 3.28 \\\\ \n\\cmidrule(lr){1-5}\ncheb9 & \\begin{tabular}[c]{@{}c@{}}12.47 \\\\-44.5\\%\\end{tabular} & 0.32 & \\begin{tabular}[c]{@{}c@{}}12.47\\\\-44.5\\%\\end{tabular} & 40.33 \\\\ \n\\cmidrule(lr){1-5}\ncheb11 & \\begin{tabular}[c]{@{}c@{}}12.33 \\\\-45.2\\%\\end{tabular} & 0.51 & \\begin{tabular}[c]{@{}c@{}}12.33\\\\-45.2\\%\\end{tabular} & 67.05 \\\\ \n\\cmidrule(lr){1-5}\ncheb13 & \\begin{tabular}[c]{@{}c@{}}12.29 \\\\-45.4\\%\\end{tabular} & 1.06 & \\begin{tabular}[c]{@{}c@{}}12.29\\\\-45.4\\%\\end{tabular} & 142.34 \\\\\n\\cmidrule(lr){1-5}\n\\end{tabular}\n\\end{table}\n\n\\begin{table} \n\\caption{Results of the motion profile optimization (Jerk 0).}\n\\label{tab:results_jerkzero}\n\\centering\n\\begin{tabular}{lcccc}\n & \\multicolumn{2}{c}{\\textbf{Gradient-Based }} & \\multicolumn{2}{c}{\\textbf{Genetic Algorithm}} \\\\\n\\textbf{J0} & \\textbf{$\\tau_{rms}\\,[Nm]$} & \\textbf{$t_{sol}\\, [s]$} & \\textbf{\\textbf{$\\tau_{rms}\\,[Nm]$}} & \\textbf{$t_{sol}\\,[s]$} \\\\ \n\\hline\\hline\npoly7J0 (ref.) & \\begin{tabular}[c]{@{}c@{}}28.44\\end{tabular} & - & \\begin{tabular}[c]{@{}c@{}}28.44\\end{tabular} & - \\\\ \n\\cmidrule(r){1-5}\ncheb9J0 & \\begin{tabular}[c]{@{}c@{}}16.12 \\\\-43.3\\%\\end{tabular} & 0.27 & \\begin{tabular}[c]{@{}c@{}}16.12\\\\-43.3\\%\\end{tabular} & 6.15 \\\\ \n\\cmidrule(r){1-5}\ncheb11J0 & \\begin{tabular}[c]{@{}c@{}}13.61 \\\\-52.2\\%\\end{tabular} & 0.38 & \\begin{tabular}[c]{@{}c@{}}14.11\\\\-50.4\\%\\end{tabular} & 175.23 \\\\ \n\\cmidrule(lr){1-5}\ncheb13J0 & \\begin{tabular}[c]{@{}c@{}}12.98\\\\-54.4\\%\\end{tabular} & 0.77 & \\begin{tabular}[c]{@{}c@{}}13.15\\\\-53.8\\%\\end{tabular} & 195.02 \\\\\n\\cmidrule(lr){1-5}\n\\end{tabular}\n\\end{table}\n\n\\begin{figure}[thpb]\n \\centering\t\n \\includegraphics[width=\\columnwidth]{Images\/Optimization_Results.pdf}\n \\caption{Results of the motion profile optimization for different degrees $n$.}\n \\label{fig:Optimization_Results}\n\\end{figure}\n\n\n\\subsection{Measurements}\nThe theoretical results are validated against experimental measurements on the pick-and-place unit (Fig. \\ref{fig:Experimental_Setup}). The setup comprises a Beckhoff CX5140 PLC, a Beckhoff AX5901 motor drive, and a Beckhoff AM3064 PMSM, which is connected to the shaft of the mechanism. In order to measure the input electrical energy, a Tektron PA4000 power analyzer is used to analyze the power supply (Fig. \\ref{fig:Schematic_Experimental_Setup}).\n\n\\begin{figure}[thpb]\n \\centering\t\n \\includegraphics[width=\\columnwidth]{Images\/Experimental_Setup.pdf}\n \\caption{Schematic overview of the experimental setup.}\n \\label{fig:Schematic_Experimental_Setup}\n\\end{figure}\n\nThe theoretical savings potential of the motion profile optimization is only fulfilled when the motor is capable of following the optimized position setpoint. Therefore, a performant motion controller needs to be designed in order to keep the tracking error as low as possible. 
Here, similar to \\cite{VanOosterwyck2019}, a cascade controller with torque and speed feedforward is employed as it has proven to be successful for high dynamic systems. The look-up table for the feedforward torque is determined using the torque equation \\eqref{eq:torque_equation_rescaled}.\n\n\\begin{figure}[thpb]\n \\centering\t\n \\includegraphics[width=\\columnwidth]{Images\/Motion_Controller.pdf}\n \\caption{Schematic overview of the cascade motion controller with feedforward \\cite{VanOosterwyck2019}.}\n \\label{fig:Motion_Controllers}\n\\end{figure}\n\nIn Tables \\ref{tab:measurement_jerkfree} and \\ref{tab:measurement_jerkzero}, the results of both the measured RMS torque $\\tau_{rms}$ and measured input electrical energy $E$ for different motion profiles are presented. As expected from the simulations, the lowest absolute energy consumption is obtained when using jerk-free motion profiles. When the jerk constraint is active, a decrease of 62.9\\% in energy consumption can be achieved by optimizing the motion profile, while a relative saving of 52.5\\% is possible if no extra constraint on the jerk is imposed.\n\nThe measured $\\tau_{rms;meas}$ and calculated RMS motor torque $\\tau_{rms}$ show a very high similarity, which confirms that the present system model is valid.\n\n\\begin{table}\n\\caption{Experimental results with energy measurement (Jerk Free).}\n\\label{tab:measurement_jerkfree}\n\\centering\n\\begin{tabular}{cccc}\n\\textbf{JF} & \\textbf{$\\tau_{rms}\\,[Nm]$} & \\textbf{$\\tau_{rms;meas}\\,[Nm]$} & \\textbf{$E_{meas} \\, [Wh] $} \\\\ \n\\hline\\hline\npoly5 & 22.48 & 19.59 & 312.2 \\\\ \n\\cmidrule(r){1-4}\ntrap & \\begin{tabular}[c]{@{}c@{}}17.16\\\\-23.7\\%\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}15.88\\\\-18.98\\%\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}215.1\\\\-31.1\\%\\end{tabular} \\\\ \n\\cmidrule(r){1-4}\ncheb7 & \\begin{tabular}[c]{@{}c@{}}13.78\\\\-38.7\\%\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}13.40\\\\-31.6\\%\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}181.7\\\\-41.8\\%\\end{tabular} \\\\ \n\\cmidrule(lr){1-4}\ncheb9 & \\begin{tabular}[c]{@{}c@{}}12.47\\\\-44.5\\%\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}12.07\\\\-38.4\\%\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}152.3\\\\-51.2\\%\\end{tabular} \\\\ \n\\cmidrule(lr){1-4}\ncheb11 & \\begin{tabular}[c]{@{}c@{}}12.33\\\\-45.2\\%\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}11.93\\\\-39.1\\%\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}150.1\\\\-51.9\\%\\end{tabular} \\\\ \n\\cmidrule(lr){1-4}\ncheb13 & \\begin{tabular}[c]{@{}c@{}}12.29\\\\-45.4\\%\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}11.83\\\\-39.6\\%\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}148.2\\\\-52.5\\%\\end{tabular} \\\\\n\\cmidrule(lr){1-4}\n\\end{tabular}\n\\end{table}\n\n\n\\begin{table}\n\\caption{Experimental results with energy measurement (Jerk Zero).}\n\\label{tab:measurement_jerkzero}\n\\centering\n\\begin{tabular}{cccc}\n\\textbf{J0} & \\textbf{$\\tau_{rms}\\,[Nm]$} & \\textbf{$\\tau_{rms;meas}\\,[Nm]$} & \\textbf{$E_{meas} \\, [Wh] $} \\\\ \n\\hline\\hline\npoly7J0 & 28.44 & 25.30 & 458.5 \\\\ \n\\cmidrule(lr){1-4}\ncheb9J0 & \\begin{tabular}[c]{@{}c@{}}16.12\\\\-43.3\\%\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}15.81\\\\-37.5\\%\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}222.9\\\\-51.4\\%\\end{tabular} \\\\ \n\\cmidrule(lr){1-4}\ncheb11J0 & \\begin{tabular}[c]{@{}c@{}}13.61\\\\-52.2\\%\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}13.08\\\\-48.3\\%\\end{tabular} & 
\\begin{tabular}[c]{@{}c@{}}170.3\\\\-62.9\\%\\end{tabular} \\\\ \n\\cmidrule(lr){1-4}\ncheb13J0 & \\begin{tabular}[c]{@{}c@{}}12.98\\\\-54.4\\%\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}12.72\\\\-49.7\\%\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}170.8\\\\-62.7\\%\\end{tabular} \\\\\n\\cmidrule(lr){1-4}\n\\end{tabular}\n\\end{table}\n\\section{Conclusion}\nThis study proposes a novel approach for motion profile optimization of PTP motions with Chebyshev polynomials. At first, system properties have been extracted from both CAD motion simulations and measurements to obtain an accurate virtual twin of the system. A Chebyshev motion profile with scaling laws is presented. Especially novel in this paper is the derivation of the boundary conditions of this profile which enables to define bounds for the design variables. The latter allows to use an optimizer that is designed to obtain globally optimal solutions, i.e. Genetic Algorithm. In addition, the solutions are validated with fast gradient-based optimization algorithms. Finally, experimental optimization results have been considered to verify the feasibility of the proposed solutions.\n\nThe numerical results, achieved on an exemplary model, clearly show that large $\\tau_{rms}$ savings of up to 53.8\\% can be achieved. In addition, it is shown that by employing Chebyshev polynomials for the motion profile, a fast gradient-based optimization can be effectively employed with solve times under $0.8$s. At last, the validation measurements show that similar savings are obtained on the real machine with a maximum energy reduction of $62.9 \\%$.\n\nDue to the straightforward implementation of both the optimization itself and integration of the resulting motion profiles in the motor drive, the proposed method can be easily adopted in any existing configuration where the CAD is data available. Therefore, the proposed method is expected to have a beneficial impact on the energy usage of the envisaged PTP applications.\n\n\n\n\n\\section{Optimization Approach}\n\\subsection{Motion Profile Definition \\& Rescaling}\nIn this paper, a Chebyshev polynomial $\\sum_{i=0}^{n} p_iT_i(x)$ is used to define the position profile $\\theta(t)$, where $t \\in [t_A,t_B]$, in between the start- ($\\theta(t_A) = \\theta_A$) and endpoint ($\\theta(t_B) = \\theta_B$) of the motion task. The sequence of orthogonal Chebyshev polynomials $T_k(x) = T_k(\\cos(\\vartheta))$, defined on the interval $x \\in [-1,1]$, is obtained from the recurrence relation: \n\n\\begin{equation} \\label{eq:def_cheb}\n\\begin{aligned}\nT_0(x) &= 1, \\quad T_1(x) = x, \\\\\nT_{k+1}(x) &= 2xT_k(x)-T_{k-1}(x),\n\\end{aligned}\n\\end{equation}\n\nAlternatively, the polynomials can be derived from the trigonometric definition, which gives exactly the same results:\n\n\\begin{equation} \\label{eq:def_cheb_tri}\nT_k (x) = T_k(\\cos(\\vartheta)) = \\cos(k\\vartheta).\n\\end{equation}\n\nTo use $T_n(x)$ as a representation for the position profile, a linear transformation from $t$ into the range $[-1,1]$ of $x$ is required \\cite{Thompson2013}:\n\n\\begin{equation} \\label{eq:rescale_tx}\n\\begin{aligned}\nt&=\\frac{1}{2}(t_B-t_A)x+\\frac{1}{2}(t_B+t_A) =a x+ b,\n\\end{aligned}\n\\end{equation}\n\nwhere scale factors $a$ and $b$ are defined for the purpose of the following paragraphs. In addition, the position $\\theta \\in [\\theta_A, \\theta_B]$ is also rescaled to the interval $\\phi \\in [-1,1]$, which makes it possible to obtain strict bounds on the design space in \\eqref{boundsDS}. 
Thus, the rescaled motion profile description $\\phi(x)$ of degree $n$ with optimizable coefficients $\\mathbf{p} = [p_0, p_1, \\ldots ,p_n]^T$ is obtained.\n\n\\begin{equation} \\label{eq:position_function_cheb}\n\\phi(x)=\\sum_{i=0}^{n} p_iT_i(x), \\quad x\\in [-1,1].\n\\end{equation}\n\nThe output of the motion simulations in the previous section deliver $n_s$ samples of inertia $\\mathbf{J}= [J_1, \\ldots ,J_{n_s}]^T$, load torque $\\boldsymbol{\\uptau}_l= [\\tau_{l,1}, \\ldots ,\\tau_{l,n_s}]^T$ and corresponding angle query points $\\boldsymbol{\\uptheta} = [\\theta_1, \\ldots ,\\theta_{n_s}]^T$. Due to the position rescaling of the motion profile $\\phi(x)$, the angle query points $\\boldsymbol{\\uptheta}$ have to be rescaled accordingly:\n\n\\begin{equation} \\label{eq:rescale_prop}\n\\begin{aligned}\n \\boldsymbol{\\upphi} &= \\frac{2}{(\\theta_B -\\theta_A)} \\, \\boldsymbol{\\uptheta} - \\frac{(\\theta_B +\\theta_A)}{(\\theta_B -\\theta_A)} = c \\, \\boldsymbol{\\uptheta} + d.\n \\end{aligned}\n\\end{equation}\n\nMoreover, as the property description is now defined on the rescaled interval $\\phi \\in [-1,1]$, the following relationship holds with regard to the derivative properties such inertia variation $\\frac{\\mathrm{d}J(\\phi)}{\\mathrm{d}\\phi}$:\n\n\\begin{equation} \\label{eq:rescale_pder}\n\\frac{\\mathrm{d}J(\\phi)}{\\mathrm{d}\\phi} = \\frac{1}{2}(\\theta_B-\\theta_A) \\frac{\\mathrm{d}J(\\theta)}{\\mathrm{d}\\theta} = e \\, \\frac{\\mathrm{d}J(\\theta)}{\\mathrm{d}\\theta}.\n\\end{equation}\n\nWhen using the rescaled position profile $\\phi(x)$, it is important to rescale the torque equation \\eqref{eq:torque_equation} as well. Otherwise, the resulting values of the torque profile $\\tau(x)$ are distorted which results in different objective values (i.e. $\\tau_{rms}$) and solutions. To preserve the motor torque's absolute values, the following rescaled torque equation is introduced:\n\n\\begin{equation} \\label{eq:torque_equation_rescaled}\n\\tau_m(x) = \\tau_l(\\phi) + \\frac{1}{2}\\frac{\\mathrm{d}J(\\phi)}{\\mathrm{d}\\phi}\\frac{1}{e}\\left(\\frac{\\dot{\\phi}}{a.c}\\right)^2 + J(\\theta)\\frac{\\ddot{\\phi}}{a^2.c} + \\mu_k\\frac{\\dot{\\phi}}{a.c}.\n\\end{equation}\n\nAn overview of the position and torque rescalings is presented in Fig. \\ref{fig:rescaling}. The new system equation \\eqref{eq:torque_equation_rescaled} ensures the system dynamics are equally scaled and the minima are not altered.\n\n\\begin{figure}[thpb]\n \\centering\t\n \\includegraphics[width=\\columnwidth]{Images\/Rescaling2.pdf}\n \\caption{Original $\\theta(t)$ and rescaled position profiles $\\theta(x)$, $\\phi(x)$ with their corresponding torque equations.}\n \\label{fig:rescaling}\n\\end{figure}\n\nFor what concerns the constraints, the rest-to-rest motion requires zero speed $\\dot{\\phi}$ and acceleration $\\ddot{\\phi}$ in the start and endpoint:\n\n\\begin{equation} \n\\begin{array}{ccccc}\n\\phi(-1)=-1 & , & \\dot{\\phi}(-1)=0 & , & \\ddot{\\phi}(-1)=0,\\\\\n\\phi(1)=1 & , & \\dot{\\phi}(1)=0 & , & \\ddot{\\phi}(1)=0.\n\\end{array}{}\n\\label{eq:constraints}\n\\end{equation}\n\n\nReferring to \\eqref{eq:position_function_cheb}, and by incorporating the motion profile constraints \\eqref{eq:constraints}, the lower degree coefficients $[p_0,... , p_5]^T$ can be written as a function of the remaining coefficients $[p_6,... , p_n]^T$, such that $n-5$ degrees of freedom (DOF) are kept available for the optimization algorithm \\cite{Hsu2014}. 
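The bookkeeping behind this elimination is easy to prototype. The sketch below (Python with NumPy's Chebyshev utilities, which evaluate series in the basis defined by the recurrence \\eqref{eq:def_cheb}; it is an illustration and not the MATLAB implementation used for the reported results) evaluates $\\phi$, $\\dot{\\phi}$ and $\\ddot{\\phi}$ at $x=\\pm1$ for each basis polynomial and solves the resulting $6\\times6$ linear system for $[p_0,\\ldots ,p_5]^T$.
\\begin{verbatim}
import numpy as np
from numpy.polynomial import chebyshev as C

def complete_coefficients(p_free, n):
    """Return [p_0, ..., p_n]: the six boundary constraints fix p_0..p_5,
    the remaining entries are the free design variables p_free = [p_6, ..., p_n]."""
    def bvals(c):
        # phi, phi' and phi'' of the Chebyshev series c, evaluated at x = -1 and x = +1
        out = []
        for m in range(3):
            d = C.chebder(c, m) if m else np.asarray(c, dtype=float)
            out += [C.chebval(-1.0, d), C.chebval(1.0, d)]
        return np.array(out)

    target = np.array([-1.0, 1.0, 0.0, 0.0, 0.0, 0.0])  # phi(-1)=-1, phi(1)=+1, derivatives 0
    p = np.zeros(n + 1)
    p[6:] = p_free                                       # contribution of the free part
    A = np.column_stack([bvals(np.eye(n + 1)[i]) for i in range(6)])
    p[:6] = np.linalg.solve(A, target - bvals(p))
    return p

# Zero free coefficients reproduce the constrained quintic (the poly5 reference):
print(np.round(complete_coefficients(np.zeros(2), n=7), 4))
\\end{verbatim}
With the free coefficients initialized at zero (the initialization discussed below), the routine returns the Chebyshev coefficients of the constrained quintic, i.e. the \\textit{poly5} reference profile in the rescaled coordinates.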
Thus, the energy optimal motion profile problem is formulated as the following minimization problem with design variable vector $\\mathbf{o}=[p_6,... , p_n]^T$:\n\n\\begin{equation}\n\\begin{aligned}\n& \\underset{\\mathbf{o} \\, \\in \\, \\mathbb{R}^{n-5}}{\\text{minimize}}\n& & \\tau_{rms} = \\sqrt{\\frac{1}{2}\\int_{-1}^{1} {\\tau_m(\\phi(x,\\mathbf{o}))}^2 \\, \\mathrm{d}x} .\n\\end{aligned}\n\\end{equation}\n\n\nIn some applications, an additional constraint of zero jerk in the begin and endpoint can be imposed to limit the vibrations:\n\n\\begin{equation} \\label{eq:constraint_jerk}\n \\dddot{\\phi}(-1)=0 \\quad;\\quad \\dddot{\\phi}(1)=0.\n\\end{equation}\n\nBecause of these two extra equations, the DOF is reduced to $n-7$ and the design variable vector can be expressed as $\\mathbf{o}=[p_8,... , p_n]^T$.\n\n\\subsection{Initialization \\& Design Space}\nIn this paper, the resulting optimization problem is solved with both a fast \\textit{gradient-based} solver, the BFGS (Broyden\u2013Fletcher\u2013Goldfarb\u2013Shanno) quasi-Newton method \\cite{Nocedal2006}, and a global \\textit{heuristic} solver, the genetic algorithm \\cite{Holland1992}.\n\nFor gradient-based optimization, a starting point needs to be defined. The use of the Chebyshev basis $T_i(x)$ in representation \\eqref{eq:position_function_cheb} allows initializing the optimization parameter vector at zero since the coefficients in a convergent Chebyshev series development of the motion profile function $\\phi(x)$ would converge to zero \\cite{Majidian2017}. Here, we can safely assume some similar behavior for the coefficients $p_i$ in \\eqref{eq:position_function_cheb}.\n\nFor what concerns the genetic algorithm, a similar approach is used for the initialization of the population. However, because a GA often samples a wide part of the design space \\cite{Wenzhong2005}, it is beneficial to determine the exact bounds on the design vector $\\mathbf{o}$. By doing so, the solver can cover a large part of the design space and reveal the global optimal solution. In the following paragraphs, thanks to the rescaled Chebyshev motion profile $\\phi(x)$, strict bounds on the design vector $\\mathbf{o}$ can be derived.\n\nTo define these bounds, we take a look at the projection of the position profile $\\phi(x)$ onto the orthogonal Chebyshev polynomial basis $T_l(x)$. Given that $x =\\cos(\\theta)$, we introduce the inner product $F$:\n\n\\begin{equation} \\label{eq:integralF}\n \\begin{aligned}\n F = \\langle\\phi(x),T_l(x)\\rangle &= \\int\\limits_{-1}^{1} \\frac{\\phi(x)T_l(x)}{\\sqrt{1-x^2}}\\,\\mathrm{d}x\\\\\n &= \\int\\limits_{0}^{2\\pi} \\phi(\\cos\\theta)T_l(\\cos\\theta)\\,\\mathrm{d}\\theta.\n \\end{aligned}\n\\end{equation}\n\nThen, by taking into account the position function definition \\eqref{eq:position_function_cheb}, we find the following result:\n\n\\begin{equation} \n \\begin{aligned}\n F &= \\int\\limits_{0}^{2\\pi} \\left(\\sum\\limits_{k=0}^n p_k T_k(\\cos\\theta)\\right)T_l(\\cos\\theta)\\,\\mathrm{d}\\theta \\\\\n\t&= \\sum\\limits_{k=0}^n p_k \\int\\limits_{0}^{2\\pi} T_k(\\cos\\theta) T_l(\\cos\\theta) \\, \\mathrm{d}\\theta.\n \\end{aligned}\n\\end{equation}\n\nHere, the integral $I= \\int\\limits_{0}^{2\\pi} T_k(\\cos\\theta) T_l(\\cos\\theta) \\, \\mathrm{d}\\theta$ can be further simplified by using the Chebyshev polynomial orthogonality properties, which are rederived here for the sake of readability. Because of Eq. 
\\eqref{eq:def_cheb_tri} and by using the inverse Simpson rule of trigonometry, the integral $I$ can be written as:\n\n\\begin{equation}\n\\begin{aligned}\n\tI &= \\int\\limits_{0}^{2\\pi} \\cos(k\\theta) \\cos(\\ell\\theta) \\, \\mathrm{d}\\theta \\\\\n\t&= \\frac{1}{2} \\int\\limits_{0}^{2\\pi} \\cos\\big((k+\\ell)\\theta\\big)\\,\\mathrm{d}\\theta\n\t+ \\frac{1}{2} \\int\\limits_{0}^{2\\pi} \\cos\\big((k-\\ell)\\theta\\big)\\,\\mathrm{d}\\theta.\n\\end{aligned}\n\\end{equation}\n\nThis integral can be split into three cases:\n\\begin{enumerate}\n \\item \\underline{$k =\\ell = 0$} \\\\\n \n \\begin{equation}\\label{eq:case1}\n I =\n 2\\pi,\n \\end{equation}\n \n \\item \\underline{$k =\\ell \\neq 0$} \\\\\n \n \\begin{equation}\\label{eq:case2}\n \\begin{aligned}\n I\n\n\t= \\pi,\n\t\\end{aligned}\n \\end{equation}\n \n \\item \\underline{$k \\neq \\ell$} \\\\\n \n \\begin{equation}\\label{eq:case3}\n \\begin{aligned}\n I\n = 0.\n \\end{aligned}\n \\end{equation}\n\\end{enumerate}\n \n\nThus, by taking into account \\eqref{eq:case3}, only the term for which $k=l$ remains in the summation $F$:\n\n\\begin{equation}\n \\begin{aligned}\n F = p_\\ell \\int\\limits_{0}^{2\\pi} \\cos^2 (\\ell\\theta) \\, \\mathrm{d}\\theta.\n \\end{aligned}\n\\end{equation}\n\nThis can be split into two cases. For $\\ell = 0$ and by making use of \\eqref{eq:case1} and \\eqref{eq:integralF} we find:\n\\begin{equation}\\label{c0}\n\tp_0 = \\dfrac{1}{2\\pi} \\int\\limits_{0}^{2\\pi} \\phi(\\cos\\theta)\\,\\mathrm{d}\\theta,\n\\end{equation}\n\nand for $\\ell > 0$, by making use of \\eqref{eq:case2} and \\eqref{eq:integralF}:\n\n\\begin{equation}\\label{cl}\n\tp_\\ell = \\dfrac{1}{\\pi} \\int\\limits_{0}^{2\\pi} \\phi(\\cos\\theta)\\cos(\\ell\\theta)\\,\\mathrm{d}\\theta.\n\\end{equation}\n\nFor $\\theta \\in [0,2\\pi]$, $\\cos\\theta$ lies in interval $[-1,1]$. Because of the position rescalings of the motion profile $\\phi(x)$, the image $\\phi(\\cos\\theta)$ also lies in the interval $[-1,1]$. Thus, we find:\n\n\\begin{equation}\n\t|p_0| \\leq \\dfrac{1}{2\\pi}\\int\\limits_{0}^{2\\pi} |\\phi(\\cos\\theta)|\\,\\mathrm{d}\\theta \\leq \\dfrac{1}{2\\pi}\\int\\limits_{0}^{2\\pi} \\,\\mathrm{d}\\theta = 1.\n\\end{equation}\n\nand\n\n\\begin{equation}\n|p_\\ell| \\leq \\dfrac{1}{\\pi}\\int\\limits_{0}^{2\\pi} |\\phi(\\cos\\theta)||\\cos(\\ell\\theta)|\\,\\mathrm{d}\\theta \\leq \\dfrac{1}{\\pi}\\int\\limits_{0}^{2\\pi}|\\cos(\\ell\\theta)| \\,\\mathrm{d}\\theta.\n\\end{equation}\n\nTo calculate this last integral, we use the periodicity of the function $\\cos(\\ell\\theta)$. This function has a period of $2\\pi \/ \\ell$, so goes $\\ell$ times up and down on the interval $[0,2\\pi]$. 
So, after taking the absolute value of this function, we find $2\\ell$ times the integral over the positive part of a period, for example, the interval $[-\\pi\/2\\ell, \\pi\/2\\ell]$:\n\n\\begin{equation}\n\\begin{aligned}\n \\dfrac{1}{\\pi}\\int\\limits_{0}^{2\\pi}|\\cos(\\ell\\theta)| \\,\\mathrm{d}\\theta = \\dfrac{2\\ell}{\\pi} \\int\\limits_{-\\pi\/2\\ell}^{\\pi\/2\\ell} \\cos(\\ell\\theta)\\,\\mathrm{d}\\theta = \\dfrac{4}{\\pi}.\n\\end{aligned}\n\\end{equation}\n\nThus, the following bounds for the coefficients $p_i$ are obtained:\n\n\\begin{equation}\\label{boundsDS}\n |p_0| \\leq 1 \\hspace{0.5cm} \\text{ and } \\hspace{0.5cm} |p_\\ell| \\leq \\dfrac{4}{\\pi}, \\hspace{0.3cm} \\ell =1,\\dots,n.\n\\end{equation}\n\nThese constraints on the design space simplify the subsequent optimization.\n\n\n\\section{Introduction}\n\\input{Sections\/01_Introduction}\n\\input{Sections\/02_SystemModelling}\n\\input{Sections\/03_Identification}\n\\input{Sections\/04_Optimization}\n\\input{Sections\/05_Results}\n\\input{Sections\/06_Conclusion}\n\n\n\\section*{Acknowledgements}\nResearch funded by a PhD grant of the Research Foundation Flanders (FWO) [1S88120N].\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{intro}\n\nComplex networks have attracted much attention over the last decades.\nThey provide a natural setting to describe many phenomena\nin nature and society~\\cite{abrmp,doro1,doro2,blmch,cup}.\nOne of the salient features of most networks,\neither natural and artificial, is their scalefreeness.\nThis term refers to the broad degree distribution exhibited by these networks.\nThe probability that a node\nhas degree $k$ (i.e., is connected to exactly $k$ other nodes)\nis commonly observed to fall off as a power law:\n\\begin{equation}\nf_k\\sim k^{-\\gamma}.\n\\label{f}\n\\end{equation}\nThis power-law behavior,\nwhich holds in the limit of an infinitely large network,\nwill be referred to hereafter as `stationary'.\nThe exponent usually obeys $\\gamma>2$,\nso that the mean degree of the infinite network is finite.\nGrowing networks with a preferential attachment rule,\nsuch as the well-known Barab\\'asi-Albert (BA) model~\\cite{ba1,ba2},\nhave received a considerable interest,\nas they provide a natural explanation for the observed scalefreeness.\nThe observation that preferential attachment generates\na power-law degree distribution actually dates back\nto much earlier works~\\cite{simon,price}.\n\nScalefree networks, being chiefly characterized by the exponent $\\gamma$\nof their degree distribution,\nare therefore somewhat similar to equilibrium systems\nat their critical point.\nAs a consequence,\nfinite-size (i.e., finite-time) effects can be expected to yield\nimportant corrections to the asymptotic or stationary form~(\\ref{f})\nof the degree distribution.\nThese effects are one of the possible causes of the cutoff phenomenon\nwhich is often observed in the degree distribution of real networks~\\cite{bpv}.\nMore precisely,\nthe largest degree $k_\\star(n)$ of a scalefree network at time $n$\ncan be estimated by means of the following argument of extreme value statistics:\nit is such that the stationary probability\nof having $k\\ge k_\\star(n)$ is of order $1\/n$.\nThe largest degree thus grows as a power law~\\cite{bpv,dms1}:\n\\begin{equation}\nk_\\star(n)\\sim n^\\nu,\\quad\\nu=\\frac{1}{\\gamma-1}.\n\\label{kstar}\n\\end{equation}\nThis growth law is always subextensive,\nbecause one has $\\gamma>2$, so that $\\nu<1$.\nThe cases 
$2<\\gamma<3$ (i.e., $1\/2<\\nu<1$)\nand $\\gamma>3$ (i.e., $0<\\nu<1\/2$)\nhowever correspond to qualitative differences,\nespecially in the topology\nand in the various dimensions of the networks~\\cite{bck}.\n\nThe goal of this article is to provide a systematic analysis\nof the degree statistics\nof growing network models at a large but finite time $n$.\nBoth the age-resolved distribution $f_k(n,i)$\nof the degree of node $i$ at a later time $n$\nand the distribution $f_k(n)$ of an unspecified node at time $n$\nwill be considered throughout.\nSeveral works have already been devoted to this problem,\nboth for growing networks\nwith preferential attachment~\\cite{dms1,dms2,krl,kr1,kr2,ws,mj}\nand for related models of random graphs and other structures~\\cite{kk,cb}.\nThe present work aims at being systematic in the following three respects:\n\n\\noindent $\\bullet$ {\\it Models.}\nThis work is focussed onto growing network models where\na new node enters at each time step,\nso that nodes can be labeled by their birth date~$n$,\ni.e., the time they enter the network.\nNode $n$ attaches to a single earlier node ($i=1,\\dots,n-1$)\nwith probability $p_{n,i}$.\nThe attachment probabilities and the initial configuration\nentirely define the model.\nThe network thus obtained has the topology of a tree.\nThe degrees $k_i(n)$ of the nodes at time $n$ obey the sum rule\n\\begin{equation}\n\\sum_{i=1}^n k_i(n)=2L(n),\n\\label{sumr}\n\\end{equation}\nwhere $L(n)$ is the number of links of the network at time $n$.\n\nWe will successively consider the following models:\n\n\\noindent -- {\\it Uniform attachment} (UA) (Section~2).\nThe attachment probability is independent of the node,\ni.e., uniform over the network.\nThis model is not scalefree.\nIts analysis serves as a warming up for that of the subsequent models.\n\n\\noindent -- {\\it Barab\\'asi-Albert} (BA) {\\it model} (Section~3).\nThe attachment probability is proportional\nto the degree $k_i(n)$ of the earlier node.\nThis well-known model~\\cite{ba1,ba2} is scalefree,\nwith exponents $\\gamma=3$ and $\\nu=1\/2$.\n\n\\noindent -- {\\it General preferential attachment} (GPA) (Section~4).\nThe attachment probability is proportional\nto the sum $k_i(n)+c$ of the degree of the earlier node\nand of an additive constant $c>-1$.\nThis parameter,\nrepresenting the initial attractiveness of a node~\\cite{dms1},\nis relevant as it yields the continuously varying exponents\n$\\gamma=c+3$ and $\\nu=1\/(c+2)$.\nThe BA and UA model are respectively recovered when $c=0$ and $c\\to\\infty$.\n\n\\noindent $\\bullet$ {\\it Regimes.}\nFor each model, the following three regimes will be considered:\n\n\\noindent -- {\\it Stationary regime} ($k\\ll k_\\star(n)$).\nThe degree distribution is essentially given by its stationary form~(\\ref{f}),\nto be henceforth denoted by $f_{k,{\\mathrm{stat}}}$,\nin order to emphasize its belonging to the stationary regime.\n\n\\noindent -- {\\it Finite-size scaling regime} ($k\\sim k_\\star(n)$).\nIn the scalefree cases, the degree distribution obeys\na multiplicative finite-size scaling law of the form\n\\begin{equation}\nf_k(n)\\approx f_{k,{\\mathrm{stat}}}\\,\\Phi\\!\\left(\\frac{k}{k_\\star(n)}\\right).\n\\label{fssdef}\n\\end{equation}\n\n\\noindent -- {\\it Large-deviation regime} ($k_\\star(n)\\ll k\\sim n$).\nThe degree distribution is usually exponentially small in $n$.\n\n\\begin{table}\n\\caption{Various characteristics of the network for both initial conditions.\nThe listed results hold irrespective of the 
attachment rule.}\n\\label{tabledef}\n\\begin{tabular}{l|l|l}\n\\hline\\noalign{\\smallskip}\nInitial condition & Case~A & Case~B\\\\\n\\noalign{\\smallskip}\\hline\\noalign{\\smallskip}\nTopology & tree & rooted tree\\\\\nNumber of links at time $n$ & $L^{(\\mathrm{A})}(n)=n-1$ & $L^{(\\mathrm{B})}(n)=n-1\/2$\\\\\nMean degree at time $n$ & $\\mean{k^{(\\mathrm{A})}(n)}=2-2\/n$ & $\\mean{k^{(\\mathrm{B})}(n)}=2-1\/n$\\\\\n\\noalign{\\smallskip}\\hline\\noalign{\\smallskip}\nDegrees at time $1$ & $k^{(\\mathrm{A})}_1(1)=0$ & $k^{(\\mathrm{B})}_1(1)=1$\\\\\nand generating polynomials& $F^{(\\mathrm{A})}_1(x)=1$ & $F^{(\\mathrm{B})}_1(x)=x$\\\\\n\\noalign{\\smallskip}\\hline\\noalign{\\smallskip}\nDegrees at time $2$ & $k^{(\\mathrm{A})}_1(2)=k^{(\\mathrm{A})}_2(2)=1$ & $k^{(\\mathrm{B})}_1(2)=2$, $k^{(\\mathrm{B})}_2(2)=1$\\\\\nand generating polynomials& $F^{(\\mathrm{A})}_2(x)=x$ & $F^{(\\mathrm{B})}_2(x)={\\textstyle\\frac{1}{2}}x(x+1)$\\\\\n\\noalign{\\smallskip}\\hline\n\\end{tabular}\n\\end{table}\n\n\\noindent $\\bullet$ {\\it Initial conditions.}\nWe will consider the following two initial conditions:\n\n\\noindent -- {\\it Case~A.}\nThe first node appears at time $n=1$ with degree $k_1(1)=0$.\nThis prescription is natural because the first node initially has no connection.\nAll subsequent nodes appear with degree $k_n(n)=1$.\nIn particular, at time $n=2$ the second node connects to the first one,\nso that $k_1(2)=k_2(2)=1$.\nThe configuration thus obtained is the dimer configuration\nused e.g.~in~\\cite{kr1,kr2}.\nAt time $n$, the network has $L(n)=n-1$ links.\nIt has the topology of a tree.\n\n\\noindent -- {\\it Case~B.}\nThe first node now appears at time $n=1$ with degree $k_1(1)=1$.\nThis formally amounts to saying that this node is connected to a root,\nwhich does not belong to the network.\nIt is natural to associate half a link to this fictitious connection.\nAt time $n=2$ the second node connects to the first one,\nso that $k_1(2)=2$ and $k_2(2)=1$.\nAt time $n$, the network has $L(n)=n-1\/2$ links.\nIt has the topology of a rooted tree.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=.8\\linewidth]{dessin_CI2.eps}\n\\caption{\\label{figdef}\nFirst three steps of the construction of the network (upper panel)\nand corresponding interacting particle representation (lower panel)\nfor both initial conditions.}\n\\end{center}\n\\end{figure}\n\nTable~\\ref{tabledef} summarizes various characteristics of the network\nfor both initial conditions,\nwhereas Figure~\\ref{figdef} illustrates the first three steps of the network\nconstruction.\nThe upper panel shows the networks with their nodes and links.\nThe lower panel shows the corresponding representation\nas an interacting particle system,\nwhere each node is viewed as a site occupied by\na number of particles equal to its degree.\nThe total number of particles in the system is therefore $2L(n)$.\nThe information about the topology of the network,\nand especially about the genealogy of the nodes,\nis lost in the interacting particle representation,\nbut this information will not be used in the present study\nwhich is focussed on the statistics of degrees.\n\n\\section{The uniform attachment (UA) model}\n\\label{UA}\n\nThe uniform attachment (UA) model is the simplest of all:\nthe attachment probability is chosen to be uniform over all existing nodes.\nThis section is devoted to an analytical study of the distribution\nof the degree of a fixed node and of an unspecified node,\nexactly taking into account fluctuations, 
finite-time effects,\nand the influence of the initial condition.\n\n\\subsection{Degree statistics of a fixed node}\n\nWe start with the study of the distribution\nof the degree $k_i(n)$ of node $i$ at time $n$.\nThe node appearing at time $n\\ge2$ links to any of the $n-1$ earlier nodes\n($i=1,\\dots,n-1$) with uniform probability\n\\begin{equation}\np_{n,i}=\\frac{1}{n-1}.\n\\end{equation}\n\nIf we define the degree increment of node $i$ at a later time $j>i$ as\n\\begin{equation}\nI_i(j)=k_i(j)-k_i(j-1)=\\left\\{\\begin{array}{l l}1 &\\mathrm{with\\ probability\\\n}p_{j,i},\\\\0 &\\mathrm{else},\\end{array}\\right\n\\label{idef}\n\\end{equation}\nthe degree $k_i(n)$ of node $i$ at a later time $n$ is given by\n\\begin{equation}\nk_i(n)=k_i(i)+\\sum_{j=i+1}^n I_i(j),\n\\label{kinsum}\n\\end{equation}\nwith $k_i(i)=1$, except for $i=1$ in Case~A, where $k_1(1)=0$\n(see Table~\\ref{tabledef}).\n\nThe mean degree $\\mean{k_i(n)}$ therefore reads ($i\\ge2$)\n\\begin{equation}\n\\mean{k_i(n)}=1+\\sum_{j=i+1}^n\\frac{1}{j-1}\n=H_{n-1}-H_{i-1}+1\\approx\\ln\\frac{n}{i}+1,\n\\label{kave}\n\\end{equation}\nwhere the harmonic numbers $H_n$ are defined in~(\\ref{hardef}).\n\nThe distribution $f_k(n,i)=\\mathop{\\rm Prob}\\nolimits\\{k_i(n)=k\\}$\ncan be encoded in the generating polynomial\n\\begin{equation}\nF_{n,i}(x)=\\bigmean{x^{k_i(n)}}=\\sum_{k=1}^{n}f_k(n,i)x^k.\n\\end{equation}\nAs a consequence of~(\\ref{kinsum}), we have\n\\begin{equation}\nF_{n,i}(x)=x^{k_i(i)}\\prod_{j=i+1}^n\\bigmean{x^{I_i(j)}},\n\\label{fniprod}\n\\end{equation}\nwhere the characteristic function of the degree increment $I_i(j)$\nassumes the simple form\n\\begin{equation}\n\\bigmean{x^{I_i(j)}}=1+(x-1)p_{j,i}=\\frac{x+j-2}{j-1},\n\\end{equation}\nirrespective of $i$.\nWe thus get ($i\\ge2$)\n\\begin{equation}\nF_{n,i}(x)=\\frac{x(i-1)!\\Gamma(x+n-1)}{(n-1)!\\Gamma(x+i-1)},\n\\label{fnires}\n\\end{equation}\nwhereas only $F_{n,1}(x)$ depends on the initial condition according to\n\\begin{equation}\nF_{n,1}^{(\\mathrm{A})}(x)=\\frac{\\Gamma(x+n-1)}{(n-1)!\\Gamma(x)},\\quad\nF_{n,1}^{(\\mathrm{B})}(x)=\\frac{x\\Gamma(x+n-1)}{(n-1)!\\Gamma(x)}.\n\\end{equation}\nThroughout the following, the superscripts ${(\\mathrm{A})}$ and ${(\\mathrm{B})}$ mark a result\nwhich holds for a prescribed initial condition (Case~A or Case~B).\n\nThe product form~(\\ref{fniprod}) implies that the generating polynomials\nof node $i$ at times $n$ and $n+1$ obey the recursion\n\\begin{equation}\nF_{n+1,i}(x)=\\bigmean{x^{I_i(n+1)}}F_{n,i}(x)=\\frac{x+n-1}{n}\\,F_{n,i}(x).\n\\label{fniprec}\n\\end{equation}\nThe probabilities $f_k(n,i)$ therefore obey the recursion\n\\begin{equation}\nf_k(n+1,i)=\\frac{1}{n}\\,f_{k-1}(n,i)+\\left(1-\\frac{1}{n}\\right)f_k(n,i),\n\\label{fkdif}\n\\end{equation}\nwith initial conditions given in Table~\\ref{tabledef}, i.e.,\n\\begin{equation}\nf_k(i,i)=\\delta_{k,1}\\quad(i\\ge2),\\quad\nf_k^{(\\mathrm{A})}(1,1)=\\delta_{k,0},\\quad f_k^{(\\mathrm{B})}(1,1)=\\delta_{k,1}.\n\\label{f2init}\n\\end{equation}\nThe master equations~(\\ref{fkdif}) can be directly written down\nby means of a simple reasoning.\nThey provide an alternative way of describing\nthe evolution of the degree distribution of individual nodes.\n\nThe degree distribution encoded in~(\\ref{fnires})\nhas the following characteristics.\nThe degree of node $i$ at time $n$ ranges from the\nminimal value 1 to the maximal value $n+1-i$.\nThese extremal values occur with probabilities\n\\begin{equation}\nf_1(n,i)=\\frac{i-1}{n-1},\\quad 
f_{n+1-i}(n,i)=\\frac{(i-1)!}{(n-1)!}.\n\\end{equation}\nThe mean and the variance of the degree can be obtained\nby expanding the result~(\\ref{fnires}) around $x=1$, using\n\\begin{equation}\n\\mean{x^K}=1+(x-1)\\mean{K}+\\frac{1}{2}(x-1)^2\n{\\hskip -9pt}\\underbrace{\\mean{K^2-K}}_{\\mathop{\\rm var}\\nolimits{K}+\\mean{K}^2-\\mean{K}}+\\cdots,\n\\label{devt}\n\\end{equation}\nwhere $K$ is any random variable taking positive integer values.\nWe thus get\n\\begin{equation}\n\\matrix{\n\\mean{k_i(n)}=H_{n-1}-H_{i-1}+1,\\hfill\\cr\\cr\n\\mathop{\\rm var}\\nolimits{k_i(n)}=H_{n-1}-H^{(2)}_{n-1}-H_{i-1}+H^{(2)}_{i-1},\\hfill\n}\n\\label{kmom}\n\\end{equation}\nwhere the harmonic numbers $H_n$ and $H^{(2)}_n$ are defined in~(\\ref{hardef}).\nThe above results hold irrespective of the initial condition.\nThe first one coincides with~(\\ref{kave}).\n\nIn the scaling regime where both times $i$ and $n$ are large and comparable,\nintroducing the time ratio\n\\begin{equation}\nz=\\frac{n}{i}\\ge1,\n\\label{zdef}\n\\end{equation}\nthe expressions~(\\ref{kmom}) yield\n\\begin{equation}\n\\mean{k_i(n)}\\approx\\ln z+1,\\quad\\mathop{\\rm var}\\nolimits{k_i(n)}\\approx\\ln z.\n\\label{kinsca}\n\\end{equation}\n\nIn deriving the above results,\nwe have used the asymptotic behavior\nof the digamma function $\\Psi(x)=\\Gamma'(x)\/\\Gamma(x)$\nand of the trigamma function~$\\Psi'(x)$ as $x\\to\\infty$:\n\\begin{equation}\n\\Psi(x)=\\ln x-\\frac{1}{2x}+\\cdots,\\quad\n\\Psi'(x)=\\frac{1}{x}+\\frac{1}{2x^2}+\\cdots,\n\\end{equation}\nas well as their values at integers:\n\\begin{equation}\n\\Psi(n)=H_{n-1}-{\\gamma_{\\scriptscriptstyle{\\rm E}}},\\quad\n\\Psi'(n)=\\frac{\\pi^2}{6}-H^{(2)}_{n-1},\n\\end{equation}\nwhere\n\\begin{equation}\nH_n=\\sum_{i=1}^n\\frac{1}{i},\\quad\nH^{(2)}_n=\\sum_{i=1}^n\\frac{1}{i^2}\n\\label{hardef}\n\\end{equation}\nare the harmonic numbers of the first and second kind,\nand ${\\gamma_{\\scriptscriptstyle{\\rm E}}}$ is Euler's constant.\n\nThe entire degree distribution can be characterized in the scaling regime.\nEquation~(\\ref{fnires}) indeed yields\n\\begin{equation}\nF_{n,i}(x)\\approx x\\,{\\rm e}^{(x-1)\\ln z},\n\\end{equation}\nirrespective of the initial condition.\nWe recognize the generating function of a Poissonian distribution\nwith parameter $\\lambda=\\ln z$, up to a shift by one unit.\nWe thus obtain~\\cite{kr1,krlead}\n\\begin{equation}\nf_k(n,i)\\approx\\frac{(\\ln z)^{k-1}}{z\\,(k-1)!}.\n\\label{ufz}\n\\end{equation}\n\n\\subsection{Degree statistics of the whole network}\n\nWe now turn to the degree distribution of the whole network at time $n$,\n$f_k(n)=\\mathop{\\rm Prob}\\nolimits\\{k(n)=k\\}$,\nwhere $k(n)$ stands for the degree of an unspecified node.\nWe have\n\\begin{equation}\nf_k(n)=\\frac{1}{n}\\sum_{i=1}^n f_k(n,i).\n\\end{equation}\nThe corresponding generating polynomials,\n\\begin{equation}\nF_n(x)=\\bigmean{x^{k(n)}}=\\sum_{k=1}^nf_k(n)x^k\n=\\frac{1}{n}\\sum_{i=1}^n F_{n,i}(x),\n\\end{equation}\nobey the recursion\n\\begin{equation}\n(n+1)F_{n+1}(x)=(x+n-1)F_n(x)+x,\n\\label{fnrec}\n\\end{equation}\nor equivalently\n\\begin{equation}\n(n+1)f_k(n+1)=f_{k-1}(n)+(n-1)f_k(n)+\\delta_{k,1},\n\\label{fknrec}\n\\end{equation}\nwith initial conditions given in Table~\\ref{tabledef}, i.e.,\n\\begin{equation}\nf_k^{(\\mathrm{A})}(1)=\\delta_{k,0},\\quad\nf_k^{(\\mathrm{B})}(1)=\\delta_{k,1}.\n\\label{f1init}\n\\end{equation}\n\nThe recursion~(\\ref{fnrec}) has a non-polynomial solution, independent of 
$n$,\n\\begin{equation}\nF_{\\mathrm{stat}}(x)=\\frac{x}{2-x},\n\\end{equation}\ndescribing the stationary\ndegree distribution on an infinitely large network:\n\\begin{equation}\nf_{k,{\\mathrm{stat}}}=\\frac{1}{2^k}\\quad(k\\ge1).\n\\label{fstat}\n\\end{equation}\nThe solution of~(\\ref{fnrec}) reads\n\\begin{equation}\n\\matrix{\nF_n^{(\\mathrm{A})}(x)\n=\\frad{x}{2-x}+\\frad{2(1-x)}{2-x}\\,\\frad{\\Gamma(x+n-1)}{n!\\Gamma(x)},\\cr\\cr\nF_n^{(\\mathrm{B})}(x)\n=\\frad{x}{2-x}+\\frad{x(1-x)}{2-x}\\,\\frad{\\Gamma(x+n-1)}{n!\\Gamma(x)}.}\n\\label{fnres}\n\\end{equation}\n\nThe polynomials $F_n^{(\\mathrm{A})}(x)$ and $F_n^{(\\mathrm{B})}(x)$\nhave respective degrees $n-1$ and~$n$.\nThe first of them which are not listed in Table~\\ref{tabledef} read\n\\begin{equation}\n\\matrix{\nF_3^{(\\mathrm{A})}(x)=\\frac{1}{3}\\,x(x+2),\\hfill&\nF_3^{(\\mathrm{B})}(x)=\\frac{1}{6}\\,x(x^2+2x+3),\\hfill\\cr\\cr\nF_4^{(\\mathrm{A})}(x)=\\frac{1}{12}\\,x(x^2+4x+7),\\hfill&\nF_4^{(\\mathrm{B})}(x)=\\frac{1}{24}\\,x(x+3)(x^2+x+4).\\hfill}\n\\end{equation}\nThe degree $k(n)$ at time $n$ ranges from the\nminimal value 1 to the maximal value $n-1$ (Case~A) or $n$ (Case~B).\nThese extremal values occur with the following probabilities $(n\\ge2)$\n\\begin{equation}\nf_1^{(\\mathrm{A})}(n)=\\frac{1}{2}+\\frac{1}{n(n-1)},\\quad\nf_1^{(\\mathrm{B})}(n)=\\frac{1}{2},\\quad\nf_{n-1}^{(\\mathrm{A})}(n)=\\frac{2}{n!},\\quad\nf_n^{(\\mathrm{B})}(n)=\\frac{1}{n!}.\n\\label{ffac}\n\\end{equation}\n\nWe now turn to the finite-size scaling behavior of the degree distribution\nwhen both $k$ and $n$ are large.\nAs anticipated in the Introduction,\nit is to be expected that the probabilities $f_k(n)$ are close to their limits\n$(f_k(n)\\approx f_{k,{\\mathrm{stat}}})$ for $n$ large at fixed degree $k$,\nand more generally in the stationary regime\nwhere~$k$ is much smaller than some characteristic\ncrossover degree $k_\\star(n)$.\nConversely, the probabilities $f_k(n)$ are expected to be negligible\n$(f_k(n)\\ll f_{k,{\\mathrm{stat}}})$ for $k$ large enough at fixed time $n$,\nand more generally in the large-deviation regime where $k_\\star(n)\\ll k\\sim n$.\nThe crossover scale $k_\\star(n)$ can be estimated as\n$k_\\star(n)\\approx\\mean{k_1(n)}$ (see~(\\ref{kave})).\nNodes with highest degrees are indeed typically expected to be the oldest ones.\nAn alternative route consists in using the argument\nof extreme value statistics alluded to in the Introduction:\nthe largest degree $k_\\star$ at time $n$\nis such that the stationary probability of having $k\\ge k_\\star$\nis of order $1\/n$.\nBoth approaches consistently yield\n\\begin{equation}\nk_\\star(n)\\sim\\ln n.\n\\end{equation}\nFinite-size effects are best revealed by considering the ratios\n\\begin{equation}\nR_k(n)=\\frac{f_k(n)}{f_{k,{\\mathrm{stat}}}}=2^k f_k(n).\n\\label{rdef}\n\\end{equation}\nThese ratios are expected to fall off to zero\nfor $k$ of the order of $k_\\star(n)\\sim\\ln n$.\nFigure~\\ref{auscaling} shows a plot of the ratios $R_k(n)$ against $k\/\\ln n$,\nfor times $n=10^3$ and $n=10^6$ in Case~A.\nNumerically exact values of the $f_k(n)$ are obtained\nby iterating~(\\ref{fknrec}).\nA steeper and steeper crossover is clearly observed.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[angle=90,width=.7\\linewidth]{aus.eps}\n\\caption{\\label{auscaling}\nPlot of the ratios $R_k(n)$ against $k\/\\ln n$ (see~(\\ref{rdef})),\nfor the UA model with initial condition~A,\nat times $n=10^3$ (empty symbols) and $n=10^6$ (full 
symbols).}\n\\end{center}\n\\end{figure}\n\nIn order to get some quantitative information on the observed crossover,\nit is advantageous to introduce the differences\n$d_k(n)=R_{k-1}(n)-R_k(n)$ for $k\\ge2$,\ncompleted by $d_1(n)=1-R_1(n)$, i.e., $R_0(n)=1$.\nAlthough the $d_k(n)$ are not positive, most of them are,\nand they sum up to unity, so that it is tempting to think of them\nas a narrow probability distribution living in the crossover region.\nThe generating function of the $d_k(n)$ reads\n\\begin{equation}\nD_n(x)=\\sum_{k\\ge1}d_k(n)x^k=(x-1)F_n(2x)+x.\n\\label{phires}\n\\end{equation}\nThe above picture suggests to define the crossover scale as the first moment\n\\begin{equation}\nk_\\star=\\mu(n)=\\sum_{k\\ge1}kd_k(n)=D_n'(1),\n\\end{equation}\nand the squared width of the crossover front as the variance\n\\begin{equation}\n\\sigma^2(n)=\\sum_{k\\ge1}k^2d_k(n)-\\mu(n)^2=D_n''(1)+\\mu(n)-\\mu(n)^2.\n\\end{equation}\nEquations~(\\ref{fnres}),~(\\ref{phires}) yield\n\\begin{equation}\n\\mu^{(\\mathrm{A})}(n)=2H_n\\approx2(\\ln n+{\\gamma_{\\scriptscriptstyle{\\rm E}}}),\\quad\n\\mu^{(\\mathrm{B})}(n)=2H_n+1\\approx2(\\ln n+{\\gamma_{\\scriptscriptstyle{\\rm E}}})+1,\n\\end{equation}\nand\n\\begin{equation}\n\\sigma^2(n)=2H_n-4H^{(2)}_n\\approx2(\\ln n+{\\gamma_{\\scriptscriptstyle{\\rm E}}}-\\pi^2\/3),\n\\end{equation}\nthe latter result being independent of the initial condition.\n\nThe crossover scale is thus $k_\\star\\approx2\\ln n$,\nwhereas the width of the crossover front grows as\n$\\sigma(n)\\approx(2\\ln n)^{1\/2}$.\nThese predictions are in agreement with the observations which can be made\non Figure~\\ref{auscaling},\nnamely that the crossover takes place around $k\/\\ln n=2$,\nand that it becomes steeper at larger times,\nas its relative width falls off, albeit very slowly, as $(\\ln n)^{-1\/2}$.\n\nAnother illustration of finite-size effects is provided\nby the complex zeros of the polynomials $F_n(x)$.\nThe location of these zeros indeed shows\nhow fast the degree distribution of finite networks,\nencoded in the polynomials $F_n(x)$, converges to the stationary distribution,\nencoded in the function $F_{\\mathrm{stat}}(x)$.\nFor $n\\ge2$, $F_n^{(\\mathrm{A})}(x)$ and $F_n^{(\\mathrm{B})}(x)$ have one trivial zero at $x=0$,\nand respectively $n-2$ and $n-1$ non-trivial ones.\nThe explicit expressions~(\\ref{fnres}) allow one to find\nthe asymptotic locus of the zeros as follows.\nThe most rapidly varying part of these results is the rightmost ratio,\nso that the zeros are asymptotically located on the curve with equation\n$\\abs{\\Gamma(x+n-1)\/(n!\\Gamma(x))}=1$.\nSetting\n\\begin{equation}\nx=n\\xi,\n\\label{xidef}\n\\end{equation}\nand using Stirling's formula, we can recast the above estimate as\n\\begin{equation}\n\\Re{\\left[(1+\\xi)\\ln(1+\\xi)-\\xi\\ln\\xi\\right]}=0.\n\\label{lens}\n\\end{equation}\nThe non-trivial zeros of the polynomials $F_n(x)$\nare thus predicted to escape to infinity linearly with time $n$.\nOnce rescaled by $n$ according to~(\\ref{xidef}),\nthey accumulate onto a well-defined limiting curve in the complex $\\xi$-plane.\nThis curve, with equation~(\\ref{lens}),\nhas the shape of a lens connecting the points~$-1$ and $0$.\nFigure~\\ref{auzeros} illustrates this result with data at time $n=50$\nfor both initial conditions.\nThe polynomials $F_n(x)$ converge to the stationary function $F_{\\mathrm{stat}}(x)$\nwhenever the complex ratio $\\xi=x\/n$ lies within the lens.\nOtherwise they diverge exponentially with 
$n$.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[angle=90,width=.7\\linewidth]{auz.eps}\n\\caption{\\label{auzeros}\nPlot of the non-trivial zeros of the polynomials $F_n(x)$ for the UA model,\nin the complex plane of the rescaled variable $\\xi=x\/n$.\nSymbols: zeros for $n=50$ in Case~A (empty symbols) and Case~B (full symbols).\nLine: limiting curve with equation~(\\ref{lens}).}\n\\end{center}\n\\end{figure}\n\nA related issue concerns the behavior of the probability $f_k(n)$\nof having a very large degree, of order $k\\sim n$,\nmuch larger than $k_\\star(n)\\sim\\ln n$.\nConsidering Case~A for definiteness,\nthe expression~(\\ref{fnres}) leads to the exact contour-integral representation\n\\begin{equation}\nf_k^{(\\mathrm{A})}(n)=\\oint\\frac{\\d x}{2\\pi\\i\\,x^{k+1}}\n\\left(\\frac{x}{2-x}+\\frac{2(1-x)}{2-x}\\,\\frac{\\Gamma(x+n-1)}{n!\\Gamma(x)}\n\\right).\n\\end{equation}\nThe presence of gamma functions suggests to look for a saddle point $x_{\\mathrm{s}}$\nproportional to $n$.\nSetting $\\zeta=k\/n$, we indeed find $x_{\\mathrm{s}}=n\/v$, where $\\zeta$ and\n$v$ are related through\n\\begin{equation}\n\\zeta=\\frac{\\ln(v+1)}{v}.\n\\end{equation}\nWe thus obtain the following large-deviation estimate\n\\begin{equation}\nf_k(n)\\sim\\exp\\Bigl(-n\\bigl(\\zeta\\ln n+S(\\zeta)\\bigr)\\Bigr),\n\\label{uent}\n\\end{equation}\nwhere the exponent has a usual contribution in $n$\nand a less usual one in $n\\ln n$.\nThe term linear in $n$ involves a large-deviation function $S(\\zeta)$,\nwhich is obtained in the following form, parametrized by $v$:\n\\begin{equation}\nS(\\zeta)=\\frac{1}{v}\\bigl(v\\ln v-\\ln v\\ln(v+1)-(v+1)\\ln(v+1)\\bigr).\n\\end{equation}\nThis function decreases from $S(0)=0$ to $S(1)=-1$.\nThe resulting behavior at $\\zeta=1$, i.e., $\\exp(-n(\\ln n-1))$,\nis in agreement with the inverse factorial expressions~(\\ref{ffac}).\n\n\\section{Linear preferential attachment: the Barab\\'asi-Albert (BA) model}\n\\label{BA}\n\nThe Barab\\'asi-Albert (BA) model is the simplest of the models\nwith preferential attachment:\neach new node connects to earlier nodes with a probability proportional\nto their degrees.\nThe probability that node $n$ connects to an earlier node $i$ thus reads\n\\begin{equation}\np_{n,i}=\\frac{k_i(n-1)}{Z(n-1)},\n\\end{equation}\nwhere $k_i(n-1)$ is the degree of node $i$ at time $n-1$,\ni.e., before node $n$ enters the network.\nThe partition function in the denominator,\n\\begin{equation}\nZ(n)=\\sum_{i=1}^nk_i(n)=2L(n)\n\\end{equation}\n(see~(\\ref{sumr})), ensures that the attachment probabilities add up to unity.\n\nIn the following we analyze the BA model\nalong the lines of the previous section,\nkeeping consistent notations as much as possible.\nThe dependence of the attachment probability $p_{n,i}$\non the degree $k_i(n-1)$\nhowever makes the problem more difficult than\nthe previous one of a uniform attachment.\n\n\\subsection{Degree statistics of a fixed node}\n\nLet us again begin with the\ndistribution $f_k(n,i)=\\mathop{\\rm Prob}\\nolimits\\{k_i(n)=k\\}$\nof the degree of node $i$ at time $n$.\n\nA first estimate of the degree $k_i(n)$ is provided by the following\nrecursion relation for the mean degree $\\mean{k_i(n)}$,\nwhich is a consequence of~(\\ref{idef}):\n\\begin{equation}\n\\mean{k_i(n)}=\\mean{k_i(n-1)}+\\mean{p_{n,i}}\n=\\left(1+\\frac{1}{Z(n-1)}\\right)\\mean{k_i(n-1)}.\n\\label{aveprod}\n\\end{equation}\nIn the scaling regime where both $i$ and $n$ are large,\nusing the expressions of the partition function 
given in Table~\\ref{tabledef},\ni.e.,\n\\begin{equation}\nZ^{(\\mathrm{A})}(n)=2n-2,\\quad Z^{(\\mathrm{B})}(n)=2n-1,\n\\label{zres}\n\\end{equation}\nthe above relation becomes the differential equation\n\\begin{equation}\n\\frac{\\partial\\mean{k_i(n)}}{\\partial n}\\approx\\frac{\\mean{k_i(n)}}{2n},\n\\end{equation}\nwhich yields\n\\begin{equation}\n\\mean{k_i(n)}\\approx\\left(\\frac{n}{i}\\right)^{1\/2}.\n\\label{bave}\n\\end{equation}\n\nThe generating polynomials $F_{n,i}(x)$ and $F_{n+1,i}(x)$\nwhich encode the distribution of the degree of node $i$\nat successive times $n$ and $n+1$ obey the recursion formula:\n\\begin{eqnarray}\nF_{n+1,i}(x)\n&=&\\bigmean{x^{k_i(n+1)}}=\\bigmean{x^{I_i(n+1)}x^{k_i(n)}}\\nonumber\\\\\n&=&\\bigmean{(1+(x-1)p_{n+1,i})x^{k_i(n)}}\\nonumber\\\\\n&=&\\bigmean{\\left(1+\\frac{x-1}{Z(n)}\\,k_i(n)\\right)x^{k_i(n)}},\n\\end{eqnarray}\ni.e.,\n\\begin{equation}\nF_{n+1,i}(x)=F_{n,i}(x)+\\frac{x(x-1)}{Z(n)}\\frac{\\d F_{n,i}(x)}{\\d x},\n\\label{fnirec}\n\\end{equation}\nwhere $Z(n)$ is given by~(\\ref{zres}).\nThe probabilities $f_k(n,i)$ themselves therefore obey the recursion\n\\begin{equation}\nf_k(n+1,i)\n=\\frac{k-1}{Z(n)}f_{k-1}(n,i)+\\left(1-\\frac{k}{Z(n)}\\right)f_k(n,i),\n\\end{equation}\nwith initial conditions~(\\ref{f2init}).\nThe initial condition for Case~A should be taken at time $n=2$,\nin order to avoid indeterminate expressions, as $Z^{(\\mathrm{A})}(1)=0$.\n\nIn order to solve the recursion~(\\ref{fnirec}),\nwe perform the rational change of variable from $x$ to $u$ such that\n\\begin{equation}\nu=\\frac{x}{1-x},\\quad x=\\frac{u}{u+1},\\quad\nx(x-1)\\frac{\\d}{\\d x}=-u\\frac{\\d}{\\d u}.\n\\label{xtou}\n\\end{equation}\nIntroducing the notation $\\widehat F_{n,i}(u)=F_{n,i}(x)$,\nthe recursion~(\\ref{fnirec}) reads\n\\begin{equation}\n\\widehat F_{n+1,i}(u)=\\widehat F_{n,i}(u)-\\frac{u}{Z(n)}\\frac{\\d\\widehat F_{n,i}(u)}{\\d u}.\n\\label{wfnirec}\n\\end{equation}\nIt is then advantageous to introduce the Mellin transform $M_{n,i}(s)$\nof $\\widehat F_{n,i}(u)$, defined as\n\\begin{equation}\nM_{n,i}(s)=\\int_0^\\infty\\widehat F_{n,i}(u)\\,u^{-s-1}\\,\\d u.\n\\end{equation}\nThe inverse transform reads\n\\begin{equation}\n\\widehat F_{n,i}(u)=\\int_{\\mathrm{C}}\\frac{\\d s}{2\\pi\\i}\\,M_{n,i}(s)\\,u^s,\n\\label{invmel}\n\\end{equation}\nwhere C is a vertical contour in the complex $s$-plane\nwhose position will be defined in a while.\nThe virtue of the Mellin transformation\nis that the recursion~(\\ref{wfnirec}) simplifies to\n\\begin{equation}\nM_{n+1,i}(s)=\\left(1-\\frac{s}{Z(n)}\\right)M_{n,i}(s),\n\\label{mfnirec}\n\\end{equation}\nwith initial condition $M_{i,i}(s)=X_0(s)$ for $i\\ge2$, with\n\\begin{equation}\nX_0(s)=\\int_0^\\infty x(u)\\,u^{-s-1}\\,\\d u=\\int_0^1 x^{-s}(1-x)^{s-1}\\,\\d x\n=\\frac{\\pi}{\\sin\\pi s}\n\\label{xdef}\n\\end{equation}\nfor $0<\\Re s<1$.\nHereafter the contour C is assumed to be in that strip.\nWe thus get $(i\\ge2)$\n\\begin{equation}\n\\matrix{\nM_{n,i}^{(\\mathrm{A})}(s)=\\frad{\\Gamma\\!\\left(n-\\frac{s}{2}-1\\right)\\Gamma(i-1)}\n{\\Gamma\\!\\left(i-\\frac{s}{2}-1\\right)\\Gamma(n-1)}\\,X_0(s),\\hfill\\cr\\cr\nM_{n,i}^{(\\mathrm{B})}(s)=\\frad{\\Gamma\\!\\left(n-\\frac{s}{2}-\\frac{1}{2}\\right)\n\\,\\Gamma\\!\\left(i-\\frac{1}{2}\\right)}\n{\\Gamma\\!\\left(i-\\frac{s}{2}-\\frac{1}{2}\\right)\n\\,\\Gamma\\!\\left(n-\\frac{1}{2}\\right)}\\,X_0(s).\\hfill\n}\n\\label{bfnires}\n\\end{equation}\nThese product formulas in the Mellin variable $s$\nare reminiscent of~(\\ref{fnires}).\n\nThe 
mean and the variance of the degree of node $i$ at time~$n$\ncan be extracted from these results as follows.\nThe identity~(\\ref{devt}) yields\n\\begin{equation}\n\\widehat F_{n,i}(u)=1-\\frac{\\mean{k_i(n)}}{u}\n+\\frac{\\mean{k_i(n)^2}+\\mean{k_i(n)}}{2u^2}+\\cdots\n\\end{equation}\nas $u\\to+\\infty$.\nFurthermore the coefficients of $1\/u$ and $1\/u^2$\nare respectively the residues of $M_{n,i}(s)$ at $s=-1$ and $s=-2$.\nWe thus obtain\n\\begin{equation}\n\\matrix{\n\\mean{k^{(\\mathrm{A})}_i(n)}=\\frad{\\Gamma\\!\\left(n-\\frac{1}{2}\\right)\\Gamma(i-1)}\n{\\Gamma\\!\\left(i-\\frac{1}{2}\\right)\\Gamma(n-1)},\\quad\n\\mean{k^{(\\mathrm{B})}_i(n)}=\\frad{\\Gamma(n)\\,\\Gamma\\!\\left(i-\\frac{1}{2}\\right)}\n{\\Gamma(i)\\,\\Gamma\\!\\left(n-\\frac{1}{2}\\right)}\n}\n\\end{equation}\nand\n\\begin{equation}\n\\matrix{\n\\mathop{\\rm var}\\nolimits{k^{(\\mathrm{A})}_i(n)}=2\\,\\frad{n-1}{i-1}-\\mean{k^{(\\mathrm{A})}_i(n)}^2-\\mean{k^{(\\mathrm{A})}_i(n)},\\hfill\n\\cr\\cr\n\\mathop{\\rm var}\\nolimits{k^{(\\mathrm{B})}_i(n)}=2\\,\\frad{2n-1}{2i-1}-\\mean{k^{(\\mathrm{B})}_i(n)}^2-\\mean{k^{(\\mathrm{B})}_i(n)}.\\hfill\n}\n\\end{equation}\n\nIn the scaling regime where both times $i$ and $n$ are large and comparable,\nintroducing the time ratio $z=n\/i$ (see~(\\ref{zdef})),\nthe above results yield\n\\begin{equation}\n\\mean{k_i(n)}\\approx z^{1\/2},\\quad\\mathop{\\rm var}\\nolimits{k_i(n)}\\approx z^{1\/2}(z^{1\/2}-1),\n\\label{bkin}\n\\end{equation}\nirrespective of the initial condition.\nThe mean degree is in agreement with the estimate~(\\ref{bave}).\nThe entire degree distribution can actually be derived in the scaling regime.\nEquation~(\\ref{bfnires}) indeed yields\n\\begin{equation}\nM_{n,i}(s)\\approx z^{-s\/2}\\,\\frac{\\pi}{\\sin\\pi s}.\n\\end{equation}\nWe thus obtain\n\\begin{equation}\nF_{n,i}(x)\\approx\\frac{x}{x+z^{1\/2}(1-x)}\n\\end{equation}\nand finally\n\\begin{equation}\nf_k(n,i)\\approx z^{-1\/2}\\bigl(1-z^{-1\/2}\\bigr)^{k-1}.\n\\label{bfz}\n\\end{equation}\nThe degree distribution is therefore found to be asymptotically geometric,\nirrespective of the initial condition~\\cite{kr1,krlead}.\n\n\\subsection{Degree statistics of the whole network}\n\nWe now turn to the degree distribution $f_k(n)=\\mathop{\\rm Prob}\\nolimits\\{k(n)=k\\}$,\nwhere $k(n)$ stands for the degree of an unspecified node.\n\nThe generating polynomials $F_n(x)$ obey the recursion\n\\begin{equation}\n(n+1)F_{n+1}(x)=nF_n(x)+n\\frac{x(x-1)}{Z(n)}\\frac{\\d F_n(x)}{\\d x}+x,\n\\label{bfnrec}\n\\end{equation}\nwhere $Z(n)$ is again given by~(\\ref{zres}), and\nwith initial conditions given in Table~\\ref{tabledef}.\nThe probabilities $f_k(n)$ themselves obey the recursion\n\\begin{equation}\n(n+1)f_k(n+1)\n=\\frac{k-1}{Z(n)}nf_{k-1}(n)+\\left(1-\\frac{k}{Z(n)}\\right)nf_k(n)+\\delta_{k,1}.\n\\label{bfknrec}\n\\end{equation}\n\nThe first generating polynomials which depend on the attachment rule read\n\\begin{equation}\n\\matrix{\nF_3^{(\\mathrm{A})}(x)=\\frac{1}{3}\\,x(x+2),\\hfill&\nF_3^{(\\mathrm{B})}(x)=\\frac{1}{9}\\,x(2x^2+2x+5),\\hfill\\cr\\cr\nF_4^{(\\mathrm{A})}(x)=\\frac{1}{8}\\,x(x^2+2x+5),\\hfill&\nF_4^{(\\mathrm{B})}(x)=\\frac{1}{60}\\,x(6x^3+8x^2+11x+35).\\hfill}\n\\end{equation}\n\nThe stationary degree distribution $f_{k,{\\mathrm{stat}}}$\ncan be determined as the solution of~(\\ref{bfknrec})\nwhich becomes independent of $n$ for large $n$.\nWe thus 
get\n\\begin{equation}\n(k+2)f_{k,{\\mathrm{stat}}}=(k-1)f_{k-1,{\\mathrm{stat}}}+2\\delta_{k,1},\n\\end{equation}\nhence~\\cite{dms1,krl,kr1}\n\\begin{equation}\nf_{k,{\\mathrm{stat}}}=\\frac{4}{k(k+1)(k+2)}.\n\\label{bfstat}\n\\end{equation}\nAn alternative approach consists in looking for the\nasymptotic generating function $F_{\\mathrm{stat}}(x)$\nas the solution of~(\\ref{bfnrec}) which becomes independent of $n$\nfor large~$n$.\nWe thus obtain the differential equation\n\\begin{equation}\nx(1-x)F'_{\\mathrm{stat}}(x)+2F_{\\mathrm{stat}}(x)=2x,\n\\end{equation}\nwhich has for solution\n\\begin{equation}\nF_{\\mathrm{stat}}(x)=3-\\frac{2}{x}-\\frac{2(1-x)^2}{x^2}\\ln(1-x).\n\\label{fst}\n\\end{equation}\nExpanding this result as a power series in $x$\nallows one to recover~(\\ref{bfstat}).\n\nThe recursion~(\\ref{bfnrec}) for the generating polynomials $F_n(x)$\ncan be solved along the lines of the above solution\nof the recursion~(\\ref{fnirec}).\nThe Mellin transforms $M_n(s)$ of the functions $\\widehat F_n(u)=F_n(x)$\nobey the recursion\n\\begin{equation}\n(n+1)M_{n+1}(s)=\\left(1-\\frac{s}{Z(n)}\\right)nM_n(s)+X_0(s),\n\\label{mfnrec}\n\\end{equation}\nwith initial condition $M_2^{(\\mathrm{A})}(s)=M_1^{(\\mathrm{B})}(s)=X_0(s)$.\nEquation~(\\ref{mfnrec}) has a special solution\n\\begin{equation}\nM_n(s)=\\frac{Z(n)X_0(s)}{(s+2)n},\n\\label{mspec}\n\\end{equation}\nwhereas the general solution of the homogeneous equation\nshares the $n$-de\\-pen\\-den\\-ce of the expressions~(\\ref{bfnires}).\nWe thus obtain\n\\begin{equation}\n\\matrix{\nM_n^{(\\mathrm{A})}(s)=\\frad{2X_0(s)}{(s+2)n}\n\\left(n-1+(s+1)\\frad{\\Gamma\\!\\left(n-\\frac{s}{2}-1\\right)}\n{\\Gamma\\!\\left(1-\\frac{s}{2}\\right)\\,\\Gamma\\!\\left(n-1\\right)}\\right),\\hfill\\cr\\cr\nM_n^{(\\mathrm{B})}(s)=\\frad{X_0(s)}{(s+2)n}\\left(2n-1\n+(s+1)\\frad{\\sqrt{\\pi}\\,\\Gamma\\!\\left(n-\\frac{s}{2}-\\frac{1}{2}\\right)}\n{\\Gamma\\!\\left(\\frac{1}{2}-\\frac{s}{2}\\right)\\,\\Gamma\\!\\left(n-\\frac{1}{2}\\right)}\n\\right).\\hfill\n}\n\\label{bfnres}\n\\end{equation}\nThe common stationary limit of both expressions,\n\\begin{equation}\nM_{\\mathrm{stat}}(s)=\\frac{2X_0(s)}{s+2},\n\\end{equation}\nis proportional to the special solution~(\\ref{mspec}).\nRecalling~(\\ref{xdef}), the inverse Mellin transform of the above result,\n\\begin{equation}\n\\widehat F_{\\mathrm{stat}}(u)=1-\\frac{2}{u}+\\frac{2}{u^2}\\ln(u+1),\n\\end{equation}\nis equivalent to~(\\ref{fst}).\n\nThe results~(\\ref{bfnres}) allow one to investigate,\nat least in principle, every feature of the degree distribution $f_k(n)$.\nLet us take the example of the probability $f_1(n)$\nfor a node to have degree one.\nThe inverse formula~(\\ref{invmel}) shows that this probability\nis equal to minus the residue of $M_n(s)$ at $s=1$.\nThe nature of the subleading corrections to the stationary value $f_{1,{\\mathrm{stat}}}=2\/3$\ndepends on the initial condition.\nFor Case~A we obtain ($n\\ge2$)\n\\begin{equation}\nf_1^{(\\mathrm{A})}(n)=\\frad{2(n-1)}{3n}\n+\\frad{4\\,\\Gamma\\!\\left(n-\\frac{3}{2}\\right)}{3\\sqrt{\\pi}\\,n\\Gamma(n-1)}\n=\\frad{2}{3}-\\frad{2}{3n}+\\frad{4}{3\\sqrt{\\pi}\\,n^{3\/2}}+\\cdots\n\\label{af1}\n\\end{equation}\nMore generally, all the probabilities $f_k(n)$ exhibit\na singular correction in $n^{-3\/2}$.\nCase~B has the remarkable property that all the probabilities $f_k(n)$\nare rational functions of time $n$.\nTheir expansion at large $n$ therefore only involves integer powers of $1\/n$.\nWe have 
e.g.\n\\begin{equation}\n\\matrix{\nf_1^{(\\mathrm{B})}(n)=\\frad{2n-1}{3n}=\\frad{2}{3}-\\frad{1}{3n},\\hfill\\cr\\cr\nf_2^{(\\mathrm{B})}(n)=\\frad{n^2-2n+3}{3n(2n-3)}\n=\\frad{1}{6}-\\frad{1}{12n}+\\frad{3}{8n^2}+\\cdots\\hfill\n}\n\\label{bf1}\n\\end{equation}\n\nWe now turn to the finite-size scaling behavior\nof the degree distribution when both $k$ and $n$ are large.\nThe crossover scale $k_\\star(n)$ can again be estimated\neither using~(\\ref{bave}) or by the argument of extreme value statistics.\nBoth approaches consistently yield\n\\begin{equation}\nk_\\star(n)\\sim n^{1\/2}.\n\\end{equation}\nWe will now show that the degree distribution\nobeys the multiplicative finite-size scaling law\n\\begin{equation}\nf_k(n)\\approx f_{k,{\\mathrm{stat}}}\\,\\Phi(y),\\quad y=\\frac{k}{n^{1\/2}},\n\\label{fss}\n\\end{equation}\nwhere the scaling function $\\Phi(y)$ is non-universal,\nin the sense that it depends on the initial condition~\\cite{kr2,ws}.\nThe proof of the scaling behavior~(\\ref{fss})\nand the determination of the scaling functions $\\Phi^{(\\mathrm{A})}(y)$\nand $\\Phi^{(\\mathrm{B})}(y)$ go as follows.\nLet us start with Case~A.\nThe second term of the expression~(\\ref{bfnres}) for $M_n^{(\\mathrm{A})}(s)$ scales as\na power law for large $n$:\n\\begin{equation}\nM_{n,{\\mathrm{scal}}}^{(\\mathrm{A})}(s)\\approx\\frac{2(s+1)X_0(s)}\n{(s+2)\\,\\Gamma\\!\\left(1-\\frac{s}{2}\\right)}\\,n^{-s\/2-1}.\n\\end{equation}\nThe inverse Mellin transform of the latter formula,\n\\begin{equation}\n\\widehat F_{n,{\\mathrm{scal}}}^{(\\mathrm{A})}(u)\\approx\\frac{1}{n}\\int_{\\mathrm{C}}\\frac{\\d s}{2\\pi\\i}\n\\,\\frac{2(s+1)X_0(s)}{(s+2)\\,\\Gamma\\!\\left(1-\\frac{s}{2}\\right)}\n\\left(u\/n^{1\/2}\\right)^s,\n\\end{equation}\ndescribes the scaling behavior of $\\widehat F_n^{(\\mathrm{A})}(u)$\nin the regime where $u$ and $n$ are simultaneously large,\nwith $u\/n^{1\/2}$ fixed.\nFinally, by inserting the above scaling estimate\ninto the contour-integral representation\n\\begin{equation}\nf_k^{(\\mathrm{A})}(n)=\\oint\\frac{\\d x}{2\\pi\\i}\\,\\frac{F_n^{(\\mathrm{A})}(x)}{x^{k+1}}\n=\\oint\\frac{\\d u}{2\\pi\\i}\\,\\frac{\\widehat F_n^{(\\mathrm{A})}(u)(u+1)^{k-1}}{u^{k+1}},\n\\label{bcontour}\n\\end{equation}\npermuting the order of integrals, opening up the $u$-contour and using\n\\begin{equation}\n\\int_C\\frac{\\d u}{2\\pi\\i}\\,\\frac{(u+1)^{k-1}}{u^{k-s+1}}\n=\\frac{\\Gamma(k)}{\\Gamma(s)\\Gamma(k-s+1)},\n\\end{equation}\nwe obtain after some algebra the scaling form~(\\ref{fss}), with\n\\begin{equation}\n\\Phi^{(\\mathrm{A})}(y)=1+\\frac{2}{\\sqrt{\\pi}}\n\\int_{\\mathrm{C}}\\frac{\\d s}{2\\pi\\i}\\,\\frac{s+1}{s+2}\n\\,\\Gamma\\!\\left(\\frac{1-s}{2}\\right)\\left(\\frac{y}{2}\\right)^{s+2}.\n\\end{equation}\nCase~B can be dealt with along the same lines.\nWe thus get the similar expression\n\\begin{equation}\n\\Phi^{(\\mathrm{B})}(y)=1+\\int_{\\mathrm{C}}\\frac{\\d s}{2\\pi\\i}\\,\\frac{s+1}{s+2}\n\\,\\Gamma\\!\\left(1-\\frac{s}{2}\\right)\\left(\\frac{y}{2}\\right)^{s+2}.\n\\end{equation}\nThe above expressions can be evaluated by closing the contour to the right\nand summing the residues at the poles of the gamma functions.\nWe thus 
get\n\\begin{equation}\n\\matrix{\n\\Phi^{(\\mathrm{A})}(y)=1+\\frad{8}{\\sqrt{\\pi}}\n\\displaystyle\\sum_{m\\ge0}\\frad{(-1)^m(m+1)}{(2m+3)m!}\\left(\\frac{y}{2}\\right)^{2m+3},\n\\hfill\\cr\\cr\n\\Phi^{(\\mathrm{B})}(y)=1+\n\\displaystyle\\sum_{m\\ge0}\\frad{(-1)^m(m+1)(2m+3)}{(m+2)!}\\left(\\frac{y}{2}\\right)^{2m+4},\n\\hfill\n}\n\\end{equation}\ni.e., finally\n\\begin{equation}\n\\matrix{\n\\Phi^{(\\mathrm{A})}(y)=\\mathop{\\rm erfc}\\nolimits\\left(\\frad{y}{2}\\right)\n+\\frad{y}{\\sqrt{\\pi}}\\left(1+\\frad{y^2}{2}\\right){\\rm e}^{-y^2\/4},\\hfill\\cr\\cr\n\\Phi^{(\\mathrm{B})}(y)=\\left(1+\\frad{y^2}{4}+\\frad{y^4}{8}\\right){\\rm e}^{-y^2\/4},\\hfill\n}\n\\label{bfss}\n\\end{equation}\nwhere erfc denotes the complementary error function.\nThe above expression for $\\Phi^{(\\mathrm{A})}$ can be found in~\\cite{kr2,ws},\nwhereas that for $\\Phi^{(\\mathrm{B})}$ has been shown in~\\cite{dms2} to hold for a slightly\ndifferent attachment rule and initial condition.\n\nFigure~\\ref{bscaling} shows a plot of the ratios $f_k(n)\/f_{k,{\\mathrm{stat}}}$,\nagainst the scaling variable $y=k\/n^{1\/2}$,\nat time $n=10^3$ for both initial conditions.\nExact values for the $f_k(n)$ are obtained by iterating~(\\ref{bfknrec}).\nThe data are well described by the predicted finite-size scaling functions\n$\\Phi^{(\\mathrm{A})}(y)$ and $\\Phi^{(\\mathrm{B})}(y)$, shown as full lines.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[angle=90,width=.7\\linewidth]{bas.eps}\n\\caption{\\label{bscaling}\nPlot of the ratios $f_k(n)\/f_{k,{\\mathrm{stat}}}$\nagainst the scaling variable $y=k\/n^{1\/2}$,\nfor the BA model at time $n=10^3$ (symbols) for both initial conditions.\nFull lines: asymptotic scaling functions $\\Phi^{(\\mathrm{A})}(y)$ and $\\Phi^{(\\mathrm{B})}(y)$.}\n\\end{center}\n\\end{figure}\n\nBoth scaling functions share similar qualitative features.\nThey start from the value 1 at $y=0$,\nincrease to a maximum, which is reached for $y=2$ in Case~A\nand for $y=\\sqrt{6}$ in Case~B, and fall off as $\\exp(-y^2\/4)$.\nThey however differ at the quantitative level,\nboth at small and large values of $y$:\n\\begin{equation}\n\\matrix{\n\\Phi^{(\\mathrm{A})}(y)=1+\\frad{y^3}{3\\sqrt{\\pi}}+\\cdots,\\hfill&\n\\Phi^{(\\mathrm{B})}(y)=1+\\frad{3y^4}{32}+\\cdots,\\hfill\\cr\\cr\n\\Phi^{(\\mathrm{A})}(y)\\approx\\frad{y^3}{2\\sqrt{\\pi}}\\,{\\rm e}^{-y^2\/4},\\hfill&\n\\Phi^{(\\mathrm{B})}(y)\\approx\\frad{y^4}{8}\\,{\\rm e}^{-y^2\/4}.\\hfill\n}\n\\label{phiasy}\n\\end{equation}\nApart from the additive constant 1,\nthe scaling functions $\\Phi^{(\\mathrm{A})}$ and $\\Phi^{(\\mathrm{B})}$ are respectively\nan odd and an even function of $y$.\nThis is the transcription in the finite-size scaling regime\nof the phenomenon underlined when discussing~(\\ref{af1}) and~(\\ref{bf1}).\nIn particular, the first correction term at small $y$ is in $y^3$ for $\\Phi^{(\\mathrm{A})}$,\nand in $y^4$ for $\\Phi^{(\\mathrm{B})}$.\n\nLet us again close up with the location\nof the complex zeros of the polynomials $F_n(x)$.\nConsidering Case~A for definiteness,\nthe result~(\\ref{bfnres}) can be recast as the exact formula\n\\begin{equation}\n\\widehat F_n^{(\\mathrm{A})}(u)-\\frac{n-1}{n}\\,\\widehat F_{\\mathrm{stat}}(u)\n=\\frac{1}{n}\\int_{\\mathrm{C}}\\frac{\\d s}{\\i}\\,\\frac{s+1}{s+2}\n\\,\\frac{\\Gamma\\!\\left(n-\\frac{s}{2}-1\\right)}\n{\\Gamma\\!\\left(1-\\frac{s}{2}\\right)\\Gamma(n-1)}\\,\\frac{u^s}{\\sin\\pi s}.\n\\label{bfnc}\n\\end{equation}\nThe growth of this expression with $n$\nfor a 
fixed value of the complex variable $u$\ncan be investigated by means of the saddle-point approximation.\nThe presence of gamma functions again suggest to look for a saddle point\n$s_{\\mathrm{s}}$ proportional to~$n$.\nSkipping details, let us mention that we find $s_{\\mathrm{s}}\\approx2n\/(1-u^2)$,\nso that the right-hand side of~(\\ref{bfnc}) can be estimated as\n\\begin{equation}\n\\widehat F_{n,{\\mathrm{sing}}}(u)\\sim\\left(1-\\frac{1}{u^2}\\right)^{-n},\n\\label{bexpo}\n\\end{equation}\nwith exponential accuracy.\nThe asymptotic locus of the complex zeros is then naturally given\nby the condition that the above estimate neither falls off\nnor grows exponentially.\nWe thus obtain $\\abs{1-1\/u^2}=1$.\nThe relevant part of this locus\ncan be parametrized by an angle $0\\le\\theta\\le2\\pi$~as\n\\begin{equation}\nu=\\left(1-{\\rm e}^{-\\i\\theta}\\right)^{-1\/2},\\quad\nx=\\frac{1}{1-\\left(1-{\\rm e}^{-\\i\\theta}\\right)^{1\/2}}.\n\\label{bacurve}\n\\end{equation}\nThis closed curve in the $x$-plane has a cusp at the point $x=1$,\ncorresponding to the scaling regime, with a right opening angle.\nWe have indeed $x-1\\approx({\\rm e}^{\\i\\pi\/2}\\theta)^{1\/2}$ as $\\theta\\to0$.\nFigure~\\ref{bzeros} illustrates this result with data at time $n=50$\nfor both initial conditions.\nThe polynomials $F_n(x)$ converge to the stationary series $F_{\\mathrm{stat}}(x)$\nwhenever the complex variable $x$ lies\nwithin the closed curve shown on the figure.\nOtherwise they diverge exponentially with $n$.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[angle=90,width=.6\\linewidth]{baz.eps}\n\\caption{\\label{bzeros}\nPlot of the non-trivial zeros of the polynomials $F_n(x)$\nfor the BA model in the complex $x$-plane.\nSymbols: zeros for $n=50$ in Case~A (empty symbols) and Case~B (full symbols).\nLine: limiting curve with equation~(\\ref{bacurve}).}\n\\end{center}\n\\end{figure}\n\nThe exponential estimate~(\\ref{bexpo}) has another virtue.\nBy inserting it\ninto the contour-integral representation~(\\ref{bcontour}), we obtain\n\\begin{equation}\nf_k(n)\\sim\\oint\\frac{\\d u}{2\\pi\\i}\\left(\\frac{u+1}{u}\\right)^k\n\\left(1-\\frac{1}{u^2}\\right)^{-n}.\n\\label{fksaddle}\n\\end{equation}\nThis integral can in turn be investigated\nby means of the saddle-point approximation.\nThe result is the following large-deviation estimate\n\\begin{equation}\nf_k(n)\\sim\\exp(-n\\,S(\\zeta)),\n\\label{bent}\n\\end{equation}\nwhere $\\zeta=k\/n$, and where the large-deviation function $S(\\zeta)$ reads\n\\begin{equation}\nS(\\zeta)=(1-\\zeta)\\ln(1-\\zeta)-(2-\\zeta)\\ln\\frac{2-\\zeta}{2}.\n\\end{equation}\nThe formula~(\\ref{bent}) describes, with exponential accuracy,\nthe degree distribution in the whole large-deviation regime\nwhere $k$ and $n$ are comparable.\nThe quadratic growth $S(\\zeta)\\approx\\zeta^2\/4$\nat small $\\zeta$ matches the fall-off of the finite-size scaling functions\n$\\Phi^{(\\mathrm{A})}(y)\\sim\\Phi^{(\\mathrm{B})}(y)\\sim\\exp(-y^2\/4)$ (see~(\\ref{phiasy})).\nThe maximal value $S(1)=\\ln 2$ describes the fall-off $f_k(n)\\sim2^{-n}$\nof the probability of having\na degree $k$ equal to its maximal value ($k=n$ or $k=n-1$).\n\n\\section{The general preferential attachment (GPA) model}\n\nWe now consider the general preferential attachment (GPA) rule,\nwhere the attachment probability to a node is proportional\nto the sum $k_i(n)+c$ of the degree of the earlier node\nand of an additive constant $c$,\nrepresenting the initial attractiveness of the 
node~\\cite{dms1}.\nThis attachment rule interpolates between the uniform attachment rule,\nwhich is recovered in the $c\\to\\infty$ limit,\nand the BA model, which corresponds to $c=0$.\nIt can actually be continued on the other side of the BA model,\nas $c$ can be chosen in the range $-1<c<0$.\n\nThe recursion~(\\ref{gfnrec}) for the generating polynomials $F_n(x)$\ncan again be exactly solved for a finite time $n$.\nThe Mellin transforms $M_n(s)$ of the functions\n$\\widehat F_n(u)=(1-x)^c F_n(x)$ obey\n\\begin{equation}\n(n+1)M_{n+1}(s)=\\left(1-\\frac{s+c}{Z(n)}\\right)nM_n(s)+X_c(s),\n\\label{gmfnrec}\n\\end{equation}\nwith initial condition $M_2^{(\\mathrm{A})}(s)=M_1^{(\\mathrm{B})}(s)=X_c(s)$.\nEquation~(\\ref{gmfnrec}) has a special solution\n\\begin{equation}\nM_n(s)=\\frac{Z(n)X_c(s)}{(s+2c+2)n},\n\\label{gmspec}\n\\end{equation}\nwhereas the general solution of the homogeneous equation\nshares the $n$-de\\-pen\\-den\\-ce of the expressions~(\\ref{gfnires}).\nWe thus get\n\\begin{equation}\n\\matrix{\nM_n^{(\\mathrm{A})}(s)=\\frad{X_c(s)}{(s+2c+2)n}\\hfill\\cr\n{\\hskip 36pt}\\times\n\\left((c+2)n-2+2(s+c+1)\\frad{\\Gamma\\!\\left(n-\\frac{s+c+2}{c+2}\\right)\n\\,\\Gamma\\!\\left(\\frac{2c+2}{c+2}\\right)}\n{\\Gamma\\!\\left(1-\\frac{s}{c+2}\\right)\\,\\Gamma\\!\\left(n-\\frac{2}{c+2}\\right)}\\right)\n,\\hfill\\cr\\cr\nM_n^{(\\mathrm{B})}(s)=\\frad{X_c(s)}{(s+2c+2)n}\\hfill\\cr\n{\\hskip 36pt}\\times\n\\left((c+2)n-1+(s+c+1)\\frad{\\Gamma\\!\\left(n-\\frac{s+c+1}{c+2}\\right)\n\\,\\Gamma\\!\\left(\\frac{c+1}{c+2}\\right)}\n{\\Gamma\\!\\left(\\frac{1-s}{c+2}\\right)\\,\\Gamma\\!\\left(n-\\frac{1}{c+2}\\right)}\\right)\n.\\hfill\n}\n\\label{gfnres}\n\\end{equation}\n\nIn order to illustrate these general results,\nlet us again consider the probability $f_1(n)$ for a node to have degree one.\nThis probability is minus the residue of $M_n(s)$ at $s=1$.\nFor Case~A we obtain ($n\\ge2$)\n\\begin{eqnarray}\nf_1^{(\\mathrm{A})}(n)&=&\\frad{1}{(2c+3)n}\n\\left((c+2)n-2+2(c+2)\\frad{\\Gamma\\!\\left(n-\\frac{c+3}{c+2}\\right)\n\\,\\Gamma\\!\\left(\\frac{2c+2}{c+2}\\right)}\n{\\Gamma\\!\\left(\\frac{c+1}{c+2}\\right)\\,\\Gamma\\!\\left(n-\\frac{2}{c+2}\\right)}\\right)\n\\nonumber\\\\\n&=&\\frac{1}{2c+3}\n\\left(c+2-\\frac{2}{n}+\\frac{2(c+2)\\,\\Gamma\\!\\left(\\frac{2c+2}{c+2}\\right)}\n{\\Gamma\\!\\left(\\frac{c+1}{c+2}\\right)}\n\\,n^{-\\frat{2c+3}{c+2}}+\\cdots\\right),\n\\end{eqnarray}\nwhereas for Case~B we obtain ($n\\ge2$)\n\\begin{equation}\nf_1^{(\\mathrm{B})}(n)=\\frac{(c+2)n-1}{(2c+3)n}.\n\\end{equation}\nThis rational expression for $f_1^{(\\mathrm{B})}(n)$ is however an exception.\nThe probabilities $f_k(n)$ indeed generically have\na singular correction in $n^{-(2c+3)\/(c+2)}$\nfor both initial conditions,\nwhereas only $f_1^{(\\mathrm{B})}(n)$ and $f_2^{(\\mathrm{B})}(n)$ are rational functions of time $n$.\n\nWe now turn to the finite-size scaling behavior\nof the degree distribution when both $k$ and $n$ are large.\nThe crossover scale $k_\\star(n)$ can again be estimated\neither using~(\\ref{gave}) or by the argument of extreme value statistics.\nBoth approaches consistently yield\n\\begin{equation}\nk_\\star(n)\\sim n^{1\/(c+2)}.\n\\end{equation}\nThe degree distribution\nobeys a finite-size scaling law of the form\n\\begin{equation}\nf_k(n)\\approx f_{k,{\\mathrm{stat}}}\\,\\Phi(y),\\quad y=\\frac{k}{n^{1\/(c+2)}},\n\\end{equation}\nwhere the scaling function $\\Phi(y)$ again depends\non the initial condition~\\cite{kr2,ws}.
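\n\nAs an illustration of this scaling law, the following minimal Monte Carlo sketch (with illustrative parameters; it is not the procedure used to produce the figures of this section) grows networks with the GPA rule, attaching each new node to an earlier node $i$ with probability proportional to $k_i+c$ and starting from the Case~B initial condition; the empirical degree distribution and the largest degree can then be compared with $f_{k,{\\mathrm{stat}}}$, with the crossover scale $k_\\star(n)\\sim n^{1\/(c+2)}$ and with the scaling variable $y=k\/n^{1\/(c+2)}$.\n\n\\begin{verbatim}\n# Minimal Monte Carlo sketch of the GPA rule (small sizes, for illustration)\nimport numpy as np\n\nrng = np.random.default_rng(1)\n\ndef grow_gpa(n, c):\n    # grow one network up to time n; Case B: the first node has degree 1\n    k = np.zeros(n)\n    k[0] = 1.0\n    for t in range(1, n):\n        w = k[:t] + c                      # attachment weights k_i + c\n        i = rng.choice(t, p=w \/ w.sum())   # the entering node picks an earlier node\n        k[i] += 1.0\n        k[t] = 1.0                         # the entering node has degree 1\n    return k\n\nc, n, runs = 1.0, 1000, 50                 # c = 1, i.e. gamma = 4, nu = 1\/3\nsamples = np.array([grow_gpa(n, c) for _ in range(runs)])\nnu = 1.0 \/ (c + 2.0)\n\nprint('mean largest degree \/ n^nu :', samples.max(axis=1).mean() \/ n**nu)\nks, counts = np.unique(samples.astype(int), return_counts=True)\nf_k = counts \/ samples.size                # empirical degree distribution f_k(n)\ny = ks \/ n**nu                             # finite-size scaling variable\n\\end{verbatim}\n\nThe quadratic cost in $n$ of this naive implementation is irrelevant at these small sizes; the sketch is only meant to visualize how the cutoff of $f_k(n)$ tracks $k_\\star(n)$.\n\nThe determination of the scaling 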
functions $\\Phi^{(\\mathrm{A})}(y)$\nand $\\Phi^{(\\mathrm{B})}(y)$ closely follows the steps of Section 3.2.\nWe thus obtain\n\\begin{equation}\n\\matrix{\n\\Phi^{(\\mathrm{A})}(y)=1+\\frad{2\\,\\Gamma\\!\\left(\\frac{2c+2}{c+2}\\right)}{(c+2)\\Gamma(2c+3)}\n\\int_{\\mathrm{C}}\\frad{\\d s}{2\\pi\\i}\\,\\frad{s+c+1}{s+2c+2}\n\\frad{\\Gamma(1-s)}{\\Gamma\\!\\left(1-\\frac{s}{c+2}\\right)}\\,y^{s+2c+2},\\hfill\\cr\\cr\n\\Phi^{(\\mathrm{B})}(y)=1+\\frad{\\Gamma\\!\\left(\\frac{c+1}{c+2}\\right)}{(c+2)\\Gamma(2c+3)}\n\\int_{\\mathrm{C}}\\frad{\\d s}{2\\pi\\i}\\,\\frad{s+c+1}{s+2c+2}\n\\frad{\\Gamma(1-s)}{\\Gamma\\!\\left(\\frac{1-s}{c+2}\\right)}\\,y^{s+2c+2}.\\hfill\n}\n\\label{gfss}\n\\end{equation}\nBy closing the contours to the right,\nwe can derive the following convergent series:\n\\begin{equation}\n\\matrix{\n\\Phi^{(\\mathrm{A})}(y)=1+\\frad{2\\,\\Gamma\\!\\left(\\frac{2c+2}{c+2}\\right)}{(c+2)\\Gamma(2c+3)}\ny^{2c+3}\\displaystyle\\sum_{m\\ge0}\\frad{(m+c+2)(-y)^m}\n{(m+2c+3)m!\\,\\Gamma\\!\\left(1-\\frac{m+1}{c+2}\\right)},\\hfill\\cr\\cr\n\\Phi^{(\\mathrm{B})}(y)=1+\\frad{\\Gamma\\!\\left(\\frac{c+1}{c+2}\\right)}{(c+2)\\Gamma(2c+3)}\ny^{2c+3}\\displaystyle\\sum_{m\\ge1}\\frad{(m+c+2)(-y)^m}\n{(m+2c+3)m!\\,\\Gamma\\!\\left(-\\frac{m}{c+2}\\right)}.\\hfill\n}\n\\label{gfsss}\n\\end{equation}\nThe above expression for $\\Phi^{(\\mathrm{A})}$ can be found in~\\cite{ws},\nalbeit not in a fully explicit form.\nIt is also worth mentioning that the finite-size scaling function derived\nin~\\cite{cb} for asymmetric growing networks is different from the above one\nfor generic values of the exponent $\\nu=1\/(c+2)$,\nalthough it coincides for $\\nu=1\/2$ with our result~(\\ref{bfss}) for $\\Phi^{(\\mathrm{A})}$.\n\nThe expressions~(\\ref{gfsss}) suggest that the derivatives\n${\\Phi^{(\\mathrm{A})}}'(y)$ and ${\\Phi^{(\\mathrm{B})}}'(y)$ are somewhat simpler\nthan the functions themselves.\nThe factor $(m+2c+3)$ is indeed chased away from the denominators\nunder differentiation.\nThe resulting series can be resummed by means of the identities\n\\begin{equation}\n\\matrix{\n\\displaystyle\\sum_{m\\ge0}\\frad{(-y)^m}{m!\\,\\Gamma\\!\\left(1-\\frac{m+1}{c+2}\\right)}\n=(c+2)\\int_{\\mathrm{C}}\\frad{\\d z}{2\\pi\\i}\\,{\\rm e}^{-yz+z^{c+2}},\\hfill\\cr\\cr\n\\displaystyle\\sum_{m\\ge1}\\frad{(-y)^m}{m!\\,\\Gamma\\!\\left(-\\frac{m}{c+2}\\right)}\n=y\\int_{\\mathrm{C}}\\frad{\\d z}{2\\pi\\i}\\,{\\rm e}^{-yz+z^{c+2}},\\hfill\n}\n\\end{equation}\nwhich are known e.g.~in the theory of L\\'evy stable laws.\nWe are thus left with the following alternative contour-integral expressions\nfor the derivatives:\n\\begin{equation}\n\\matrix{\n{\\Phi^{(\\mathrm{A})}}'(y)=\\frad{2\\,\\Gamma\\!\\left(\\frac{2c+2}{c+2}\\right)}{\\Gamma(2c+3)}\ny^{2c+2}\\int_{\\mathrm{C}}\\frad{\\d z}{2\\pi\\i}(c+2-yz)\\,{\\rm e}^{-yz+z^{c+2}},\\hfill\\cr\\cr\n{\\Phi^{(\\mathrm{B})}}'(y)=\\frad{\\Gamma\\!\\left(\\frac{c+1}{c+2}\\right)}{(c+2)\\Gamma(2c+3)}\ny^{2c+3}\\int_{\\mathrm{C}}\\frad{\\d z}{2\\pi\\i}(c+3-yz)\\,{\\rm e}^{-yz+z^{c+2}}.\\hfill\n}\n\\label{phiint}\n\\end{equation}\n\nBoth scaling functions start increasing from the value 1\naccording to the power laws\n\\begin{equation}\n\\matrix{\n\\Phi^{(\\mathrm{A})}(y)=1+\\frad{2\\,\\Gamma\\!\\left(\\frac{2c+2}{c+2}\\right)}\n{\\Gamma(2c+4)\\,\\Gamma\\!\\left(\\frac{c+1}{c+2}\\right)}\\,y^{2c+3}+\\cdots,\\hfill\\cr\\cr\n\\Phi^{(\\mathrm{B})}(y)=1+\\frad{(c+3)}{2(c+2)^3\\Gamma(2c+3)}\\,y^{2c+4}+\\cdots,\\hfill\n}\n\\end{equation}\ngo through a maximum, and fall off 
superexponentially as\n\\begin{equation}\n\\Phi^{(\\mathrm{A})}(y)\\approx\n2(c+2)C\\Gamma\\!\\left(\\frat{2c+2}{c+2}\\right)\\Psi(y),\\quad\n\\Phi^{(\\mathrm{B})}(y)\\approx\nC\\Gamma\\!\\left(\\frat{c+1}{c+2}\\right)y\\,\\Psi(y),\n\\label{phist}\n\\end{equation}\nwith\n\\begin{equation}\n\\Psi(y)=y^{2c+3-\\!\\frat{c}{2(c+1)}}\n\\,\\exp\\left(-(c+1)\\left(\\frac{y}{c+2}\\right)^{\\frat{c+2}{c+1}}\\right)\n\\label{psist}\n\\end{equation}\nand\n\\begin{equation}\nC=\\left[(2\\pi(c+1))^{1\/2}(c+2)^{\\frat{2c+3}{2(c+1)}}\\Gamma(2c+3)\\right]^{-1}.\n\\end{equation}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[angle=90,width=.7\\linewidth]{gs.eps}\n\\caption{\\label{gscaling}\nPlot of the scaling functions\n$\\Phi^{(\\mathrm{A})}(y)$ (full lines) and $\\Phi^{(\\mathrm{B})}(y)$ (dashed lines) against $y$,\nfor (1) $c=-1\/2$, i.e., $\\nu=2\/3$;\n(2) $c=0$, i.e., $\\nu=1\/2$ (the BA model);\nand (3) $c=1$, i.e., $\\nu=1\/3$.}\n\\end{center}\n\\end{figure}\n\nFigure~\\ref{gscaling} shows a plot of the scaling functions\n$\\Phi^{(\\mathrm{A})}(y)$ and $\\Phi^{(\\mathrm{B})}(y)$\nfor (1) $c=-1\/2$, i.e., $\\nu=2\/3$;\n(2) $c=0$, i.e., $\\nu=1\/2$ (the BA model);\nand (3) $c=1$, i.e., $\\nu=1\/3$.\nThe figure demonstrates that the scaling functions present\na high and narrow maximum for the smaller values of $c$,\nand a direct crossover from 1 to 0 for the larger values of $c$.\nThese observations can be made quantitative by means of the pseudo-moments\n\\begin{equation}\n\\mu_p=-\\int_0^\\infty\\Phi'(y) y^p\\,\\d y=p\\int_0^\\infty\\Phi(y) y^{p-1}\\,\\d y.\n\\end{equation}\nThe integral formulas~(\\ref{phiint}) allow one\nto evaluate these quantities explicitly:\n\\begin{equation}\n\\matrix{\n\\mu_p^{(\\mathrm{A})}=\\frad{(p+c+1)\\,\\Gamma\\!\\left(\\frac{3c+4}{c+2}\\right)\\Gamma(p+2c+3)}\n{(c+1)\\,\\Gamma\\!\\left(\\frac{p+3c+4}{c+2}\\right)\\Gamma(2c+3)},\\hfill\\cr\\cr\n\\mu_p^{(\\mathrm{B})}=\\frad{\\Gamma\\!\\left(\\frac{c+1}{c+2}\\right)\\Gamma(p+2c+3)}\n{\\Gamma\\!\\left(\\frac{p+c+1}{c+2}\\right)\\Gamma(2c+3)}.\\hfill\n}\n\\end{equation}\n\n\\noindent $\\bullet$\nFor large values of $c$ (i.e., $c\\to\\infty$),\nthe model is close to the UA model.\nThe analysis of the scaling functions\nwill follow that of the ratios $R_k(n)$ in the UA model,\nperformed in Section~2.2.\nThe crossover value of $y$,\nat which the functions exhibit a relatively sharp crossover from 1 to 0,\ncan be estimated as~$\\mu_1$, i.e.,\n\\begin{equation}\n\\mu_1^{(\\mathrm{A})}=2c+2{\\gamma_{\\scriptscriptstyle{\\rm E}}}+2+\\cdots,\\quad\\mu_1^{(\\mathrm{B})}=2c+2{\\gamma_{\\scriptscriptstyle{\\rm E}}}+3+\\cdots,\n\\end{equation}\nwhich grows as $2c$, irrespective of the initial condition.\nSimilarly, the squared width of the crossover region can be estimated as the\npseudo-variance $\\sigma^2=\\mu_2-\\mu_1^2$, i.e.,\n\\begin{equation}\n\\sigma^{2{(\\mathrm{A})}}=2c+4{\\gamma_{\\scriptscriptstyle{\\rm E}}}+2-2\\pi^2\/3+\\cdots,\\quad\n\\sigma^{2{(\\mathrm{B})}}=2c+4{\\gamma_{\\scriptscriptstyle{\\rm E}}}+3-2\\pi^2\/3+\\cdots,\n\\end{equation}\nwhich also grows as $2c$, irrespective of the initial condition.\n\n\\noindent $\\bullet$\nFor small values of $c$ (i.e., $c\\to-1$),\nthe scaling functions exhibit a high and narrow peak around $y=1$.\nThe position of the peak can be estimated as $\\mean{y}=\\mu_2\/(2\\mu_1)$,\ni.e., setting $c=-1+\\varepsilon$,\n\\begin{equation}\n\\mean{y}^{(\\mathrm{A})}=1+(3\/2-{\\gamma_{\\scriptscriptstyle{\\rm 
E}}})\\varepsilon+\\cdots,\\quad\n\\mean{y}^{(\\mathrm{B})}=1+(2-{\\gamma_{\\scriptscriptstyle{\\rm E}}})\\varepsilon+\\cdots,\n\\end{equation}\nwhereas the squared width of the peak can be estimated as\n$\\mathop{\\rm var}\\nolimits{y}=(4\\mu_1\\mu_3-3\\mu_2^2)\/(12\\mu_1^2)$, i.e.,\n\\begin{equation}\n\\matrix{\n\\mathop{\\rm var}\\nolimits{y}^{(\\mathrm{A})}=\\frad{5}{6}\\,\\varepsilon+\\frad{19-10{\\gamma_{\\scriptscriptstyle{\\rm E}}}-\\pi^2}{6}\\,\\varepsilon^2+\\cdots,\\hfill\\cr\n\\mathop{\\rm var}\\nolimits{y}^{(\\mathrm{B})}=\\frad{2}{3}\\,\\varepsilon+\\frad{22-8{\\gamma_{\\scriptscriptstyle{\\rm E}}}-\\pi^2}{6}\\,\\varepsilon^2+\\cdots,\\hfill}\n\\end{equation}\nand finally the area under the peak scales as $\\mu_1$, i.e.,\n\\begin{equation}\n\\mu_1^{(\\mathrm{A})}=\\frac{1}{\\varepsilon}+2-{\\gamma_{\\scriptscriptstyle{\\rm E}}}+\\cdots,\\quad\n\\mu_1^{(\\mathrm{B})}=\\frac{1}{\\varepsilon}+3-{\\gamma_{\\scriptscriptstyle{\\rm E}}}+\\cdots.\n\\end{equation}\nWe are thus left with the picture of a narrow peak around $y=1$,\nwhose width shrinks as $\\varepsilon^{1\/2}$ and whose height grows as $\\varepsilon^{-3\/2}$.\n\nLet us close this section with the location\nof the complex zeros of the polynomials $F_n(x)$.\nThe derivation of the estimate~(\\ref{bexpo}) can be generalized\nto the present situation for arbitrary values of $c$.\nWe are thus left with\n\\begin{equation}\n\\widehat F_{n,{\\mathrm{sing}}}(u)\\sim\\left(1-\\frac{1}{(-u)^{c+2}}\\right)^{-n},\n\\label{gexpo}\n\\end{equation}\nagain with exponential accuracy.\nThe asymptotic locus of the complex zeros is\ntherefore given by the condition $\\abs{1-1\/(-u)^{c+2}}=1$.\nThe relevant part of this locus\ncan be parametrized by an angle $0\\le\\theta\\le2\\pi$~as\n\\begin{equation}\nu=\\left(1-{\\rm e}^{-\\i\\theta}\\right)^{-1\/(c+2)},\\quad\nx=\\frac{1}{1-\\left(1-{\\rm e}^{-\\i\\theta}\\right)^{1\/(c+2)}}.\n\\label{gcurve}\n\\end{equation}\nThis closed curve in the $x$-plane has a cusp at the point $x=1$,\ncorresponding to the scaling regime, with an opening angle equal to $\\pi\/(c+2)$.\nWe have indeed $x-1\\approx({\\rm e}^{\\i\\pi\/2}\\theta)^{1\/(c+2)}$ as $\\theta\\to0$.\nFigure~\\ref{gzeros} illustrates this result with data at time $n=50$\nfor three values of $c$ and both initial conditions.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[angle=90,width=.6\\linewidth]{gz.eps}\n\\caption{\\label{gzeros}\nPlot of the non-trivial zeros of the polynomials $F_n(x)$\nin the complex $x$-plane.\nSymbols: zeros for $n=50$ in Case~A (empty symbols) and Case~B (full symbols).\nLines: limiting curves with equation~(\\ref{gcurve}).\nFrom the inside to the outside: $c=0$ (the BA model,\nalready shown in Figure~\\ref{bzeros}), $c=1$ and $c=2$.}\n\\end{center}\n\\end{figure}\n\nThe exponential estimate~(\\ref{gexpo})\ncan again be recast into a large-deviation estimate\nfor the probabilities $f_k(n)$ in the regime $k\\sim n$, of the form\n\\begin{equation}\nf_k(n)\\sim\\exp(-n\\,S(\\zeta)),\n\\label{gent}\n\\end{equation}\nwith $\\zeta=k\/n$.\nThe large-deviation function $S(\\zeta)$ is obtained in parametric form:\n\\begin{equation}\n\\matrix{\n\\zeta=\\frad{(c+2)(v-1)}{v^{c+2}-1},\\hfill\\cr\\cr\nS=\\ln(v^{c+2}-1)\n-\\frad{c+2}{v^{c+2}-1}\\left((v-1)\\ln(v-1)+(v^{c+1}-1)v\\ln v\\right),\n}\n\\end{equation}\nwhere the parameter $v$ runs over the range $1<v<\\infty$.\n
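\nThis parametric representation is easily explored numerically. The short Python sketch below is again a mere illustration; the value $c=1$ and the grid of $v$ values are arbitrary choices.\n\\begin{verbatim}\nimport numpy as np\n\ndef large_deviation_curve(c, v):\n    # Parametric form of the large-deviation function: zeta and S are\n    # both expressed in terms of the parameter v > 1.\n    v = np.asarray(v, dtype=float)\n    zeta = (c + 2)*(v - 1)/(v**(c + 2) - 1)\n    S = (np.log(v**(c + 2) - 1)\n         - (c + 2)/(v**(c + 2) - 1)*((v - 1)*np.log(v - 1) + (v**(c + 1) - 1)*v*np.log(v)))\n    return zeta, S\n\nc = 1.0                                 # illustrative value\nzeta, S = large_deviation_curve(c, np.geomspace(1.001, 100.0, 12))\nfor z, s in zip(zeta, S):\n    print(round(z, 4), round(s, 4))     # S decreases towards 0 as zeta -> 0 (large v)\n\\end{verbatim}\nThe output provides a quick check that $S(\\zeta)$ vanishes as $\\zeta\\to0$, as it should for typical degrees.\n\n\\section{Discussion}\n\nIn this work we have investigated finite-size effects in the degree statistics of growing networks built according to the GPA rule, which depends on a parameter $c>-1$,\nrepresenting the initial attractiveness of a node.\nThe UA and BA models are recovered as two special cases,\nrespectively corresponding to $c\\to\\infty$ and 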
$c=0$.\nThe model is scalefree for any finite value of $c$,\nwith the continuously varying exponents $\\gamma=c+3$ and $\\nu=1\/(c+2)$.\nThe continuous dependence of exponents on the parameter $c$,\nand the dependence of finite-size scaling functions\non the initial condition (Case~A or Case~B in the present study),\nare two illustrations of the lack of universality\nwhich characterizes the scaling behavior of growing networks.\n\nThe GPA rule is actually the most general one for which\nthe partition function $Z(n)$ (see~(\\ref{zp})) is deterministic,\ni.e., independent of the history of the network.\nWhenever the attachment probability has a non-linear dependence on\nthe degree $k$,\nthe partition function becomes a history-dependent fluctuating quantity,\nso that the analysis of size effects becomes far more difficult.\nThe general case of an arbitrary attachment rule,\ngrowing either less or more rapidly than linearly with the degree,\nhas been considered in several works~\\cite{krl,kr1,mj}.\nWhenever the degree dependence of the attachment rule is asymptotically linear,\nthe resulting network is generically scalefree.\nThe determination of the degree exponent $\\gamma$\nis however a highly non-trivial\ntask in general (see~\\cite{krl,kr1} for an explicit example).\n\nThe present study has underlined the key r\\^ole\nplayed by the typical value $k_\\star(n)$ of the largest degree\nin a finite network at time $n$.\nIn the UA model, $k_\\star(n)$ grows logarithmically with time $n$.\nThe situation is more interesting in the scalefree case, i.e., for $c$ finite.\nThe largest degree $k_\\star(n)$ grows as a subextensive power law\nwith exponent $\\nu$,\nand demarcates three regimes in the size-degree plane,\nwhere finite-size (i.e., finite-time) effects\non the degree distribution $f_k(n)$ have different forms.\n\n\\noindent --\nIn the stationary regime ($k\\ll k_\\star(n)$),\nthe degree distribution is very close to the stationary one, $f_{k,{\\mathrm{stat}}}$.\n\n\\noindent --\nIn the finite-size scaling regime ($k\\sim k_\\star(n)$),\nthe degree distribution obeys\na multiplicative finite-size scaling law.\nAs already noticed in several earlier works,\nthe finite-size scaling function $\\Phi$ depends\non the initial condition imposed on the network.\nThis lack of universality holds for all finite values of\nthe parameter~$c$.\nAnother feature of the finite-size scaling function is that\nit increases from its initial value $\\Phi(0)=1$, reaches a maximum,\nand stays above unity for a range of values of its argument\n$y=k\/k_\\star(n)$, before it eventually falls off to zero.\nThis non-monotonic overshooting behavior is however not mandatory.\nIn this respect it is worth recalling\nthe example of the so-called zeta urn model~\\cite{dgc,zeta1,zetarev}.\nThis mean-field interacting particle system with multiple occupancies\npossesses a continuous condensation transition at a finite critical density.\nIts behavior right at the critical density bears\na strong similarity to the present problem,\nincluding a power-law stationary distribution\nwith a continuously varying exponent, and finite-time scaling.\nThe same results have been shown\nto apply to the dynamics of condensation\nin the zero-range process (ZRP)~\\cite{fsszrp}.\nIn the critical zeta urn and ZRP models, the finite-size scaling function\nis a monotonically decreasing function, so that $\\Phi(y)<1$ for all $y$.\nThis does not contradict the conservation of probability:\nthe excess probability is carried by 
smaller values of~$k$,\npertaining to the stationary regime.\n\n\\noindent --\nIn the large-deviation regime ($k_\\star(n)\\ll k\\sim n$),\nthe degree distribution falls off exponentially in $n$.\nAt variance with the finite-size scaling law,\nthe corresponding large-deviation function\nis independent of the initial condition.\nThe analysis of this regime has been shown to be closely related\nto the locus of the complex zeros of the generating polynomials $F_n(x)$,\nwhich have played a central r\\^ole throughout this work.\n\nTo close, it is to be hoped that some of the concepts and methods\nused in the present work will shed new light\neither on other observables in the network models considered here,\nsuch as the statistics of leaders and lead changes~\\cite{krlead},\nor on the degree statistics in more complex network models,\nsuch as the Bianconi-Barab\\'asi (BB) model~\\cite{bb1,bb2},\nwhere attachment rules involve the competing effects\nof dynamical variables (the node degrees)\nand quenched disordered ones (the node fitnesses).\nDepending on the a priori distribution of the random fitnesses,\nthe BB model may possess a low-temperature condensed phase.\nSome features of the dynamics of the condensed phase have been\ninvestigated recently,\nboth at zero temperature~\\cite{usrecords},\nwhere the model is intimately related to the statistics of records,\nand at finite temperature~\\cite{fb}.\n