diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzohbm" "b/data_all_eng_slimpj/shuffled/split2/finalzzohbm" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzohbm" @@ -0,0 +1,5 @@ +{"text":"\\section{Re-investigating Intuitive\nInterpretability}\\label{re-investigating-intuitive-interpretability}\n\nPhilip Agre has argued that ``technology at present is covert\nphilosophy'' (Agre, 1997). While the scope of this claim is certainly\ndebatable, in the quest for interpretable machine learning models,\ncertain philosophical issues are evident, above all the fact that\ninterpretability itself is an intuitive notion. As Wolfgang Iser\nremarks: ``For a long time, interpretation was taken for an activity\nthat did not seem to require analysis of its own procedures. There was a\ntacit assumption that it came naturally, not least because human beings\nlive by constantly interpreting.'' (Iser, 2000)\n\nNevertheless, Kim and Doshi-Velez (2017) and many others have shown that\ninterpretability can be transformed into a more rigorous notion. While\nmost investigations into the interpretability of machine learning models\nthus focus on the further development of this rigorous notion of\ninterpretability, I suggest that a re-investigation of the intuitive\nnotion of interpretability can help to better understand the limits of\ninterpretability in general.\n\nWhen we talk about intuitive interpretability in the context of machine\nlearning, we assume a Cartesian concept of intuition that posits\nintuitive concepts as rational concepts, and thus intuition as an\nadequate measure of reality. The statement ``I know it when I see it'',\nwhich is often employed to illustrate this concept, indicates the\ndependency of such intuition on visualization. To intuitively understand\na machine learning model, we need to visualize it, make it accessible to\nthe senses. This process, however, is not as straightforward as it\nseems. I argue that specifically for machine learning models,\nvisualization -- and thus intuitive interpretation -- necessarily\nimplies two levels of pre-interpretation.\n\n\\section{Intuitive Interpretability Depends on Dimensionality\nReduction}\\label{intuitive-interpretability-depends-on-dimensionality-reduction}\n\nMachine learning models operate in high-dimensional vector spaces.\nHigh-dimensional vector spaces are geometrically counter-intuitive.\nWhile low-dimensional vector spaces can always be intuitively correlated\nwith our physical reality, with the existence of objects in space and\ntime, high-dimensional vector spaces have no intuitive equivalent in the\nreal world. Beyond this general inaccessibility, however,\nhigh-dimensional vector spaces also specifically impede interpretation,\nas distances between data points have a tendency to lose their\n\\emph{meaning} (Beyer et al., 1999) -- this is commonly known as the\n``curse of dimensionality''. Making a high-dimensional vector space\nintuitively interpretable thus requires its mathematical\npre-interpretation, its representation in human terms, i.e.~usually in\nno more than three dimensions.\n\nWhile there is certainly a quantifiable limit to the damage\ndimensionality reduction can inflict (Johnson and Lindenstrauss, 1984)\nit is nevertheless important to acknowledge the reason for the\ninevitability of this mathematical pre-interpretation. 
Internal states\nof machine learning models are non-concepts, concepts that have no\nintuitive equivalent in the real world and that can only be represented\n\\emph{in terms of what they are not}. This notion of the non-concept\nwill guide our further investigation of intuitive interpretability.\n\n\\section{Intuitive Interpretability Depends on\nRegularization}\\label{intuitive-interpretability-depends-on-regularization}\n\nArtificial neural networks trained on image data are notoriously opaque.\nParticularly for deep convolutional neural networks (Krizhevsky et al.,\n2012), it is very hard to infer from the training dataset and the final\nweights of the fully trained neural network how exactly the network\nmakes its decisions. Many different approaches to this problem have been\nsuggested, most prominently two types of feature visualization:\nactivation maximization (Zeiler and Fergus, 2014, Simonyan et al.\n(2014), Mahendran and Vedaldi (2015), Mahendran and Vedaldi (2016),\nNguyen, Dosovitskiy, et al. (2016), Nguyen, Yosinski, et al. (2016)) and\nsaliency maps, a technique also called attribution (Olah et al., 2017,\nSimonyan et al. (2014), Zeiler and Fergus (2014)). We will focus here on\nactivation maximization.\n\nNaive optimization of an image to maximally activate a specific\n``neuron'', i.e.~the part of an artificial neural network that encodes a\nspecific feature, often results in noise and ``nonsensical high-frequency\npatterns'' (Olah et al., 2017) -- patterns that are without meaning and\nare thus, again, inaccessible to an intuitive interpretation. The\nregularization of this optimization is thus another pre-interpretation\nthat is necessary to establish intuitive interpretability. The goal of\nactivation maximization is then to generate ``natural'' pre-images\n(Mahendran and Vedaldi, 2015, Mahendran and Vedaldi (2016)) -- images\nthat are visual representations of intermediate stages in the neural\nnetwork, expressed \\emph{in terms of} a set of natural images. This\nregularization is achieved by introducing natural image priors into the\nobjective function.\n\n\\section{Non-Concepts and the Place of Semantic\nInformation}\\label{non-concepts-and-the-place-of-semantic-information}\n\nThe natural pre-images of activation maximization usually consist of an\narbitrary ``mix'' of different representations. This mix of\nrepresentations can either be a set of images that each show a different\naspect of the activation maximization, or a ``blend'' of different\nimages, i.e.~a single image that maximizes different aspects of the\nneuron. Most recently, such ``multifaceted'' representations have been\nimproved significantly (Nguyen, Dosovitskiy, et al., 2016, Nguyen,\nYosinski, et al. (2016)) through the automatic generation of natural\nimage priors with the help of an additional, generative adversarial\nneural network. Other approaches have used techniques from style\ntransfer to likewise increase the ``diversity'' (Olah et al., 2017) of\nthe visualization.\n\nHowever, as Olah et al. (2017) observe, many of the resulting images are\n``strange mixtures of ideas,'' suggesting that single neurons are not\nnecessarily ``the right semantic units for understanding neural nets.'' In\nfact, as Szegedy et al. (2013) showed, looking for meaningful features\ndoes not necessarily lead to more meaningful visualizations than looking\nfor any features, i.e.~for arbitrary activation maximizations. 
This is\nalso the reason for the effectiveness of many adversarial strategies (Su\net al., 2017, Papernot et al. (2017), Kurakin et al. (2016), Goodfellow\net al. (2014)).\n\nIn other words, not only is the representation of non-concepts mediated\ntwice, by means of dimensionality reduction and regularization, it is\nalso questionable whether non-concepts can be approximated at all in human\nterms. Szegedy et al. (2013) suggest that the entire space of\nactivations, rather than the individual units, contains most of the\nsemantic information.\\footnote{While we will not develop this idea\n further within this limited context, it is worth noting that the\n findings in Szegedy et al. (2013) in relation to the notion of\n non-concepts mirror long-standing discussions in the humanities on\n meaning and interpretation, with the most prominent concept being\n Jacques Derrida's notion of diff\u00e9rance (Derrida, 1982).} As also\npointed out in Szegedy et al. (2013), a ``similar but even stronger\nconclusion'' was reached for word embedding models, and in fact the\nconcept of a distributed semantic structure becomes even more obvious\nwhen we look at the text and not the image domain.\n\nWord embedding models (Mikolov et al., 2013) employ shallow artificial\nneural networks to construct high-dimensional vector spaces that\nreflect not only syntactic but also semantic properties of the source\ncorpus. Most prominently, word embedding models are able to solve\nanalogy queries, like ``what is to woman what king is to man''. This is\nachieved by not extending, but reducing the dimensionality of the vector\nspace in relation to the number of n-grams in the source corpus.\nAccordingly, no vector represents just a single n-gram. Instead, the\ntotality of vectors represents the totality of the semantic structure of\nthe source corpus. This distributed semantic structure, however, has\npeculiar consequences. The solution to an analogy query is given by the\nmodel not as a definite answer, but as a hierarchy of answers. Why?\nSimply because there are no ``intermediate'' words. If the best possible\nanalogy is a (new) data point right in between two (existing) data\npoints representing n-grams in the source corpus vocabulary, the best\npossible solution to the analogy query is neither of them, but it still\ncan only be described \\emph{in terms of} them. Even if the input\nvocabulary consisted of all words in the English language, the solution\nto the analogy query could still be a data point that is ``in between\neverything'' but has no equivalent in the real world -- a non-concept.\nEvery computational solution to an analogy task is thus, ironically,\nitself an analogy.\n\n\\section{Non-Concepts as a Critical Technical Practice: Revealing Human\nBias}\\label{non-concepts-as-a-critical-technical-practice-revealing-human-bias}\n\nA demonstration of this dilemma of non-concepts, and an example of a\ncritical technical practice based on it, is ``Image Synthesis from\nYahoo's open\\_nsfw'' (Goh, 2016), a project by Gabriel Goh. Using the\ntechnique developed in (Nguyen, Dosovitskiy, et al., 2016), Goh produces\nimages that maximally activate certain neurons of a classifier network\ncalled ``open\\_nsfw'', which was created by Yahoo to distinguish\nworkplace-safe (``SFW'') from ``not-safe-for-work'' (``NSFW'') imagery:\na literal mathematical model of ``I know it when I see it''. 
By\ngenerating sets of images ranging from most to least pornographic, Goh\nproduces some interesting insights into Yahoo's specific interpretation\nof ``nsfw'', and the essence of the concept of pornography. Most\ninteresting, however, are the ``least pornographic'' images. What really\n\\emph{is} the ``opposite'' of pornography? Fully clothed people?\nNon-pornography, again, is a non-concept which has never been defined\nbut through its negation.\n\nExcept in this particular case it hasn't. Goh notes that the least\npornographic images ``all have a distinct pastoral quality -- depictions\nof hills, streams and generally pleasant scenery'' and concludes that,\nmost likely, this is the result of providing negative examples during\ntraining. Apparently, the non-concept of non-pornography was made into\na positive concept -- pastoral landscapes -- to improve the training of\nthe model. This, of course, becomes particularly problematic if, as is\nthe case with open\\_nsfw, a fully trained model is provided without\naccess to the training data. While the GitHub page for open\\_nsfw\nacknowledges that the ``definition of NSFW is subjective and\ncontextual'', what is at stake here is exactly the opposite: the fact\nthat ``SFW'' is subjective and contextual, and that, regardless, a very\nspecific notion of ``SFW'' was built into the model. More generally\nspeaking, the approximation of non-concepts with positive concepts\nnecessarily introduces a significant human -- aesthetic -- bias into the\nequation.\n\n\\section{Conclusion}\\label{conclusion}\n\nGoh's project serves to show the non-conceptual structure of machine\nlearning models and the problems this structure creates for intuitive\ninterpretability. While one possible way to address these problems (if\nintuitive interpretability is needed) is to avoid strategies that\npresent singular images as representations of internal model states\naltogether, and instead switch to multitudes of images, there is no\ngeneral solution to the problem of finding human-readable\nrepresentations of non-concepts. Research in interpretability thus has\nto take the non-conceptual structure of machine learning models into\naccount. To make the notion of interpretability more rigorous we have to\nfirst identify where it might still be impaired by intuitive\nconsiderations: we have to consider it precisely in terms of what it is\nnot.\n\n\\section*{References}\n\n\\small\n\nAgre, P.E., 1997. Computation and Human Experience. Cambridge University\nPress.\n\nBeyer, K., Goldstein, J., Ramakrishnan, R., Shaft, U., 1999. When is\n``nearest neighbor'' meaningful?, in: International Conference on\nDatabase Theory. Springer, pp. 217--235.\n\nDerrida, J., 1982. Diff\u00e9rance, in: Margins of Philosophy. University of\nChicago Press.\n\nGoh, G., 2016. Image synthesis from Yahoo's open\\_nsfw. Blog: Gabriel Goh.\n\nGoodfellow, I.J., Shlens, J., Szegedy, C., 2014. Explaining and\nharnessing adversarial examples. arXiv preprint arXiv:1412.6572.\n\nIser, W., 2000. The Range of Interpretation. Columbia University Press,\nNew York, NY.\n\nJohnson, W.B., Lindenstrauss, J., 1984. Extensions of Lipschitz mappings\ninto a Hilbert space. Contemporary Mathematics 26, 1.\n\nKim, B., Doshi-Velez, F., 2017. Towards a rigorous science of\ninterpretable machine learning. arXiv preprint arXiv:1702.08608.\n\nKrizhevsky, A., Sutskever, I., Hinton, G.E., 2012. Imagenet\nclassification with deep convolutional neural networks, in: Advances in\nNeural Information Processing Systems. pp. 
1097--1105.\n\nKurakin, A., Goodfellow, I., Bengio, S., 2016. Adversarial examples in\nthe physical world. arXiv preprint arXiv:1607.02533.\n\nMahendran, A., Vedaldi, A., 2016. Visualizing deep convolutional neural\nnetworks using natural pre-images. International Journal of Computer\nVision 120, 233--255.\n\nMahendran, A., Vedaldi, A., 2015. Understanding deep image\nrepresentations by inverting them, in: Proceedings of the IEEE\nConference on Computer Vision and Pattern Recognition. pp. 5188--5196.\n\nMikolov, T., Sutskever, I., Chen, K., Corrado, G.S., Dean, J., 2013.\nDistributed representations of words and phrases and their\ncompositionality, in: Advances in Neural Information Processing Systems.\npp. 3111--3119.\n\nMordvintsev, A., Olah, C., Mike, T., 2015. Inceptionism: Going deeper\ninto neural networks. Google Research Blog.\n\nNguyen, A., Dosovitskiy, A., Yosinski, J., Brox, T., Clune, J., 2016.\nSynthesizing the preferred inputs for neurons in neural networks via\ndeep generator networks, in: Advances in Neural Information Processing\nSystems. pp. 3387--3395.\n\nNguyen, A., Yosinski, J., Clune, J., 2016. Multifaceted feature\nvisualization: Uncovering the different types of features learned by\neach neuron in deep neural networks. arXiv preprint arXiv:1602.03616.\n\nOlah, C., Mordvintsev, A., Schubert, L., 2017. Feature visualization.\nDistill.\n\nPapernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z.B., Swami,\nA., 2017. Practical black-box attacks against machine learning, in:\nProceedings of the 2017 ACM Asia Conference on Computer and\nCommunications Security. ACM, pp. 506--519.\n\nSimonyan, K., Vedaldi, A., Zisserman, A., 2014. Deep inside\nconvolutional networks: Visualising image classification models and\nsaliency maps. arXiv preprint arXiv:1312.6034.\n\nSu, J., Vargas, D.V., Kouichi, S., 2017. One pixel attack for fooling\ndeep neural networks. arXiv preprint arXiv:1710.08864.\n\nSzegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D.,\nGoodfellow, I., Fergus, R., 2013. Intriguing properties of neural\nnetworks. arXiv preprint arXiv:1312.6199.\n\nZeiler, M.D., Fergus, R., 2014. Visualizing and understanding\nconvolutional networks, in: European Conference on Computer Vision.\nSpringer, pp. 818--833.\n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{Introduction}\nRecent interest in the analytical properties of \nFeynman diagrams has been motivated by processes at the LHC. \nThe required precision demands the evaluation of a\nhuge number of diagrams having many scales to a high order, \nso that a new branch of mathematics emerges, which we may call \n{\\it the Mathematical Structure of Feynman Diagrams}\n~\\cite{hwa,pham}, which includes elements of algebraic geometry, algebraic \ntopology, the analytical theory of differential equations, multiple \nhypergeometric functions, \nelements of number theory, modular functions and elliptic curves, \nmultidimensional residues, and graph theory. \nThis mathematical structure has been extensively developed, studied and \napplied. For a more detailed discussion of the oldest results and their relation \nto modern techniques, see\nRefs.\\ \\cite{golubeva}\\ and \\cite{Kalmykov:2008}). \nOne of these approaches is based on the treatment of Feynman \ndiagrams in terms of multiple hypergeometric \nfunctions~\\cite{kershaw}. 
For example, in the series of \npapers~\\cite{kreimer1,kreimer2,kreimer3}, the one-loop diagrams \nhave been associated with the $R$-function (a particular case \nof the $F_D$-function~\\cite{bateman,slater,srivastava}).\n\n\n\\subsection{Mellin-Barnes representation, \n\tasymptotic expansion, NDIM}\n\\label{Mellin-Barnes representation}\n\nA universal technique based on the Mellin-Barnes representation of Feynman \ndiagrams has been applied to one-loop diagrams in \nRef.\\ \\cite{boos-davydychev,davydychev:1991}\nand to two-loop propagator diagrams in \nRef.\\ \\cite{broadhurst,berends,bauberger,davydychev-grozin,weinzierl:2003}.\\footnote{Several \n\tprograms are available for the automatic generation of the \n\tMellin-Barnes representation of Feynman \n\tdiagrams~\\cite{ambre,smirnov-smirnov,prausa}.}\nThe multiple Mellin-Barnes representation for a Feynman diagram \nin covariant gauge can be written in the form\n\\begin{eqnarray}\n\t\\Phi({\\bf A},\\vec{B};{\\bf C}, \\vec{D};\\vec{z}) \n\t& = & \n\t\\int_{-i \\infty}^{+i \\infty}\n\t\\phi(\\vec{t}) \n\td\\vec{t} \n\n\n\t= \n\t\\int_{-i \\infty}^{+i \\infty}\n\t\\prod_{a,b,c,r}\n\t\\frac{\\Gamma(\\sum_{i=1}^m A_{ai}t_i \\!+\\! B_a)}{\\Gamma(\\sum_{j=1}^r \n\tC_{bj}t_j \\!+\\! D_{b})}\n\tdt_c z_k^{\\sum_l \\alpha_{kl} t_l}\n\t\\;,\n\t\\nonumber \\\\ \n\t\\label{MB}\n\\end{eqnarray}\nwhere $z_k$ are ratios of Mandelstam variables and \n$A,B,C,D$ are matrices and vectors depending linearly on the \ndimension of space-time $n$ and powers of the propagators. \nClosing the contour of integration on the right\n(on the left), this integral can be presented \naround zero values of $\\vec{z}$ in the form\n\\begin{eqnarray}\n\t&& \n\t\t\\Phi({\\bf A},\\vec{B};{\\bf C}, \\vec{D};\\vec{z}) \n\t= \n\t\\sum_{\\vec{\\alpha}} f_{\\vec{\\alpha}}\n\tH({\\bf A},\\vec{B};{\\bf C}, \\vec{D};\\vec{z}) \n\t\\vec{z}^{\\;\\vec{\\alpha}} \n\t\\;, \n\t\\label{representation}\n\\end{eqnarray}\t\nwhere the coefficients \n$\nf_{\\vec{\\alpha}}\n$\nare ratios of $\\Gamma$-functions and the functions $H$ are \nHorn-type hypergeometric functions~\\cite{horn} \n(see Section~\\ref{Horn-Functions} for details).\nThe analytic continuation of the hypergeometric functions \n$H(\\vec{z})$ into another region of the variables $\\vec{z}$\ncan be constructed via the integral representation \n(when available)~\\cite{wu,mano}, \n$H(\\vec{z}) \\to H(1-\\vec{z})$. 
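\nThe classical one-variable prototype of such a continuation is the Gauss \nconnection formula (quoted here only as an illustration, \nfor non-integer values of $c-a-b$):\n\\begin{eqnarray}\n{}_2F_1(a,b;c;z) \n& = & \n\\frac{\\Gamma(c)\\Gamma(c-a-b)}{\\Gamma(c-a)\\Gamma(c-b)} \n{}_2F_1(a,b;a+b-c+1;1-z)\n\\nonumber \\\\ && \n+ (1-z)^{c-a-b} \n\\frac{\\Gamma(c)\\Gamma(a+b-c)}{\\Gamma(a)\\Gamma(b)} \n{}_2F_1(c-a,c-b;c-a-b+1;1-z) \\;.\n\\nonumber \n\\end{eqnarray}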
\nHowever, for more complicated cases of Horn-type hypergeometric\nfunctions, this type of analytic continuation is still under \nconstruction~\\cite{friot1,friot2}.\n\nA major set of mathematical results (see, for example,~\\cite{bezrodnykh1,bezrodnykh2,bezrodnykh3}) is devoted to the \nconstruction of the analytic continuation of a series around ${z_j}=0$ \nto a series of the form\n$\\frac{z_A}{z_B}$: $H(\\vec{z}) \\to H(\\frac{z_A}{z_B})$, \nwhere the main physical application is the construction of an expansion about \nLandau singularities $L(\\vec{z})$: $H(\\vec{z}) \\to \nH(L(\\vec{z}))$.\nFor example, the singular locus $L$ of the Appell function $F_4(z_1,z_2)$ \nis $L = \\{ (z_1,z_2) \\in \\mathbb{C}^2 |z_1 z_2 R(z_1,z_2) =0 \\} \\cup L_\\infty$\nwhere $R(z_1,z_2) =(1\\!-\\!z_1\\!-\\!z_2)^2\\!-\\!4 z_1z_2$,\nand the physically interesting case of an expansion around the singularities \ncorresponds to an analytical continuation \n$F_4(z_1,z_2) \n \\to \n F_4\\left(\\frac{R(z_1,z_2)}{z_1}, \\frac{R(z_1,z_2)}{z_2} \\right)$.\n\nA similar problem, the construction of convergent series \nof multiple Mellin-Barnes integrals in different regions of parameters, \nhas been analyzed in detail for \nthe case of two variables~\\cite{tsikh,passare,friot}. \nHowever, to our knowledge, there are no systematic analyses of the \nrelation between these series and the singularities of multiple \nMellin-Barnes integrals. \n\nIt was understood long ago that there is a one-to-one correspondence \nbetween the construction of convergent series from Mellin-Barnes \nintegrals and the asymptotic expansions; see\nRef.\\ \\cite{MB:asymptotic} for example. The available software, \n{\\em e.g.} Ref.\\ \\cite{czakon}, allows the construction of\nthe analytical continuation of a Mellin-Barnes integral in \nthe limit when some of the variables $z$ goto to $0$, or $\\infty$. \nThese are quite useful in the evaluation of Feynman diagrams, but do not \nsolve our problem. The current status of the asymptotic expansions is discussed \nin Ref.\\ ~\\cite{asymptotic1,asymptotic2}. \n\nAnother technique for obtaining a hypergeometric representation \nis the so-called ``Negative Dimensional Integration Method'' (NDIM) \n~\\cite{ndim1,ndim2,ndim3,ndim4,ndim5,ndim6}. \nHowever, it is easy to show~\\cite{ndim} that all available results \nfollow directly from the Mellin-Barnes integrals~\\cite{boos-davydychev}.\n\nFor some Feynman diagrams, the hypergeometric representation follows from \na direct integration of the parametric representation, see \nRef.\\ \\cite{somogy,grozin-kotikov,\nbritto:2015,ablinger:2015,feng1,feng2,yang1,feng3,grozin}.\n\nWe also mention that the ``Symmetries of Feynman Integrals''\nmethod~\\cite{kol1,kol2,kol3} can also be used to obtain the \nhypergeometric representation for some types of diagrams. \n\n\\subsection{About GKZ and Feynman Diagrams}\n\\label{GKZ}\nThere are a number of different though entirely equivalent ways to describe \nhypergeometric functions:\n\\begin{itemize}\n\t\\item\n\tas a multiple series;\n\t\\item \n\tas a solution of a system of differential equations \n(hypergeometric D-module);\n\t\\item \n\tas an integral of the Euler type;\n\t\\item\n\tas a Mellin-Barnes integral. 
\n\\end{itemize}\nIn a series of papers, Gel'fand, Graev, Kapranov and Zelevinsky \n~\\cite{Gelfand1,Gelfand2,Gelfand3} \n(to mention only a few of their papers \ndevoted to the systematic development of this approach) \nhave developed a uniform approach to the description of hypergeometric \nfunctions\\footnote{\n\tA detailed discussion of $A$-functions and their properties \n\tis beyond our current consideration. There are many interesting papers \n\ton that subject, including (to mention only a few) \n\tRefs.\\ \\cite{algorithm2,beukers2,review}.}.\nThe formal solution of the $A$-system is a\nso-called multiple $\\Gamma$-series having the following form:\n$$\n\\sum_{(l_1, \\cdots, l_N) \\in \\mathbf{L}}\n\\frac{ z_1^{l_1+\\gamma_1} \\cdots z_N^{l_N+\\gamma_N} }\n {\\Gamma(l_1+\\gamma_1+1) \\cdots \\Gamma(l_N+\\gamma_N+1) \n} \n\\;, \n$$\nwhere $\\Gamma$ is the Euler $\\Gamma$-function \nand the lattice $ \\mathbf{L}$ has rank $d$. \nWhen this formal series has a non-zero radius of convergence, \nit coincides (up to a factor) \nwith a Horn-type hypergeometric series~\\cite{Gelfand3}\n(see Section~\\ref{Horn-Functions}). \nAny Horn-type hypergeometric function can be written in the form \nof a $\\Gamma$-series by applying the reflection formula \n$\\Gamma(a+n) = (-1)^n \\frac{\\Gamma(a) \\Gamma(1-a)}{\\Gamma(1-a-n)}$. \nMany examples of such a conversion -- \nall Horn-hypergeometric functions of two variables -- \nhave been considered in Ref.\\ \\cite{bod}. \n\nThe Mellin-Barnes representation was beyond Gelfand's consideration. \nIt was worked out later by Fritz Beukers~\\cite{beukers1};\nsee also the recent paper~\\cite{matsubara}. \nBeukers analyzed the Mellin-Barnes integral\n$$\n\\int \n\\Pi_{i=1}^N \n\\Gamma(-\\gamma_i - \\vec{b}_i \\vec{s})\nv_i^{\\gamma_i+ \\vec{b}_i \\vec{s}} ds \\;, \n$$\nand pointed out that,\nunder the assumption that the Mellin-Barnes integral \nconverges absolutely, it satisfies the set\nof $A$-hypergeometric equations. \nThe domains of convergence \nfor the $A$-hypergeometric series and the associated Mellin-Barnes integrals\nhave been discussed recently in Ref.\\ \\cite{nilsson}. \n\nFollowing Beukers' results, we conclude that any \nFeynman diagram with a generic set of parameters (to guarantee convergence, we \nshould treat the powers of propagators as non-integer parameters) \ncan be treated as an $A$-function. \nHowever, our analysis has shown that, typically, a real \nFeynman diagram corresponds to an $A$-function with reducible monodromy. \n\nLet us explain our point of view. \nBy studying Feynman diagrams having a one-fold Mellin-Barnes \nrepresentation~\\cite{bkk2009}, we have found that certain Feynman \ndiagrams \n($E^q_{1220}, B^2_{1220}, V^q_{1220}, J^q_{1220}$ in the notation of \nRef.\\ \\cite{bkk2009}) with powers of the propagators equal to one\n(the so-called master-integrals) have the following hypergeometric \nstructure (we drop the normalization constant for simplicity): \n\\begin{eqnarray}\n\\Phi(n,\\vec{1};z)\n= \n{}_3F_2(a_1,a_2,a_3;b_1,b_2;z) \n+ \nz^\\sigma {}_4F_3(1,c_1,c_2,c_3;p_1,p_2,p_3;z) \\;, \n\\label{example}\n\\end{eqnarray}\nwhere the dimension $n$ of space-time \\cite{dimreg} is not an integer\nand the differences between any two parameters of the hypergeometric \nfunctions are also not integers. \nThe holonomic rank of the hypergeometric function \n${}_pF_{p-1}$ is equal to $p$, \nso that the Feynman diagram is a linear combination of two \nseries having different holonomic rank. 
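\nRecall that ${}_pF_{p-1}(\\vec{a};\\vec{b};z)$ is annihilated by a differential \noperator of order $p$,\n\\begin{equation}\n\\left[\n\\theta \\prod_{j=1}^{p-1} \\left( \\theta + b_j - 1 \\right)\n- z \\prod_{i=1}^{p} \\left( \\theta + a_i \\right)\n\\right] {}_pF_{p-1}(\\vec{a};\\vec{b};z) = 0 \\;,\n\\qquad \n\\theta = z \\frac{d}{dz} \\;,\n\\end{equation}\nwhich is the origin of this counting. 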
\nWhat could we say about the holonomic rank of a Feynman diagram $\\Phi$? \nTo answer that question, let us find the differential equation for \nthe Feynman diagram $\\Phi(n,\\vec{1};z)$ starting from the\nrepresentation Eq.\\ (\\ref{example}). This can be done within the\nHolonomic Function Approach~\\cite{zeilberger}, \nwith the help of the program\ndeveloped by Frederic Chyzak \\cite{mgfun}\n(a MAPLE package)\nor the one by Christoph Koutschan\\footnote{\n\tSee Christoph's paper in the present volume.} \\cite{HolonomicFunctions} (a MATHEMATICA package).\nWe used a private realization of this approach based on ideas from the \nGr\\\"obner basis technique. Finally, we obtained the result that \nthe Feynman diagram $\\Phi$ satisfies a homogeneous differential equation \nof the hypergeometric type of order $4$\nwith a left-factorizable differential operator of order $1$: \n\\begin{equation}\n(\\theta+A) \\left[\n(\\theta+B_1) (\\theta+B_2) (\\theta+B_3)\n+ z \\theta (\\theta+C_1) (\\theta+C_2) \n\\right] \\Phi(n,\\vec{1};z) = 0 \\;, \n\\label{MB:F}\n\\end{equation}\nwhere none of the $B_j$ and $C_a$ are integers\nand $\\theta = z \\frac{d}{dz}$. \n\nIt follows from this differential equation that\nthe holonomic rank of the Feynman diagram $\\Phi$ is equal to $4$, \nand the factorization means that the space of solutions\nsplits into a direct sum of two spaces of dimension \none and three: $\\Phi_{\\mbox{dim}} = 4 = 1 \\oplus 3$. \nAs follows from~\\cite{beukers3}, the monodromy representation \nof Eq.\\ (\\ref{MB:F})\nis reducible and there is a one-dimensional invariant subspace.\nConsequently, \nthere are three non-trivial solutions (master-integrals) \nand the one-dimensional invariant subspace corresponds \nto an integral having a Puiseux-type solution \n(expressible in terms of $\\Gamma$-functions). \n\nWe pointed out in Ref.\\ \\cite{bkk2009} that \na Feynman diagram can be classified by the dimension \nof its irreducible representation.\nThis can be evaluated by the construction of differential equations \nor by using the dimension of the irreducible representation of the \nhypergeometric functions entering the r.h.s.\\ of \nEq.\\ (\\ref{representation}). \nIndeed, in the example considered in Eq.\\ (\\ref{example}), the \ndimension of the irreducible representation of \n${}_4F_3(1,\\vec{c};\\vec{p};z)$\nis equal to $3$~\\cite{beukers3}, \nso that \nthe dimension of the irreducible \nspace of the Feynman diagram $\\Phi$ is equal to $3$, \nsee Eq.\\ (\\ref{MB:F}),\nand $\\Phi$ is expressible via \na sum of series (see Eq.~(\\ref{example})) each having an irreducible representation of dimension $3$.\n\nThe results of the analysis performed in Ref.\\ \\cite{bkk2009} \nare summarized in the following proposition: \\\\[1em]\n{\\bf Proposition}:\\ {\\em A Feynman diagram can be \ntreated as a linear combination of Horn-type hypergeometric series \nwhere each term has equal {\\bf irreducible} holonomic rank.}\\\\[1em]\n\nExamining this new ``quantum number,''\nthe irreducible holonomic rank, \nwe discover, and can rigorously prove, \nan extra relation between master-integrals~\\cite{KK:extra}.\nIn many other examples we found complete agreement between the\nresults of differential reduction \nand the results of a reduction based on the IBP \nrelations~\\cite{IBP1,IBP2}.\n\nThe Feynman diagram $J$\nconsidered in Ref.\\ \\cite{KK:extra}\nsatisfies the differential equation\n\\begin{eqnarray}\n\t&& \n\t\\left( \\theta \\!-\\! \\frac{n}{2} \\!+\\! I_1 \\right)\n\t\\left( \\theta \\!-\\! n \\!+\\! I_2 \\right)\n\t\\left[ \n\t\\theta \n\t\\left( \\theta \\!-\\! n \\!+\\! \\frac{1}{2} \\!+\\! I_3 \\right)\n\t+z\n\t\\left( \\theta \\!-\\! \\frac{3n}{2} \\!+\\! I_4 \\right)\n\t\\right] J = 0 \\;,\n\t\\label{J} \n\\end{eqnarray}\nwhere $I_1,I_2,I_3,I_4$ are integers, $n$ is the dimension of space-time \nand $\\theta = z \\frac{d}{dz}$.\nThe dimension of $J$ is $4$ and there are two one-dimensional \ninvariant subspaces, corresponding to two first-order differential \noperators:\n$\nJ_{\\mbox{dim}} = 4 = 1 \\oplus 1 \\oplus 2 \\;.\n$\nIndeed, after integrating twice, we obtained\n$$\n\\left[ \n\\theta \n\\left( \\theta \\!-\\! n \\!+\\! \\frac{1}{2} \\!+\\! I_3 \\right)\n+z\n\\left( \\theta \\!-\\! \\frac{3n}{2} \\!+\\! I_4 \\right)\n\\right] J = C_1 z^{n\/2-I_1} + C_2 z^{n-I_2} \\;.\n$$\nSurprisingly, this simple relation had not been reproduced \n(as of the end of $2016$)\nby any of the powerful programs for the reduction of Feynman diagrams\n(see the discussion in Chapter 6 of Ref.\\ \\cite{KK:sunset}).\n\tIn ~\\cite{SS}, it was shown that the extra relation \n\t~\\cite{KK:extra} could be deduced from a diagram of more general \n\ttopology by exploring a new relation\n\tderived by taking the derivative with respect to the mass, \n\twith a subsequent reduction with the help of the IBP relations.\n\tHowever, it was not shown that the derivative with respect to the \n\tmass can be deduced from derivatives with respect to momenta, so \n\tthat the result of Ref.\\ \\cite{SS} can be considered as an alternative proof \n\tthat, in the massive case, there may exist an extra relation between \n\tdiagrams that does not follow from classical IBP relations.\n\n\nFinally, we have obtained a very simple result~\\cite{KK:MB}: \nEq.\\ (\\ref{MB:F}) follows directly from the\nMellin-Barnes representation for a Feynman diagram.\n(See Section~\\ref{Horn-Functions} and Eq.\\ (\\ref{MB:DE}) for details.)\nBased on this observation and on the results of our analysis performed \nin Ref.\\ \\cite{bkk2009}, and extending the idea of \nthe algorithm of Ref.\\ \\cite{algorithm1}, \nwe have constructed a simple and fast algorithm for the algebraic reduction \nof any Feynman diagram having a one-fold Mellin-Barnes integral representation \nto a set of master-integrals without using the IBP relations. \nIn particular, our approach and our program cover some types of \nFeynman diagrams with arbitrary powers of propagators considered in \nRefs.\\ \\cite{mizera1,mizera2,mizera3}.\n\nIn a similar manner, one can consider the multiple Mellin-Barnes \nrepresentation of a Feynman diagram~\\cite{KK:sunset,hyperdire}. \nIn contrast to the one-variable case, the factorization of the\npartial differential operator is much more complicated.\nThe dimension of the Pfaff system~\\footnote{Rigorously speaking, \n\tthis system of equations is correct when there is a\n\tcontour in $\\mathbf{C}^n$ that is not changed \n\tunder translations by an arbitrary unit vector, see \n\tRef.\\ \\cite{Sadykov}, \n\tso that we treat the powers of the propagators as parameters.} \nrelated to the multiple Mellin-Barnes \nintegral can be evaluated with the help of a prolongation procedure\n(see the discussion in Section~\\ref{Horn-Functions}). 
However, \nin this case, there may exist a Puiseux type solution even for a generic \nset of parameters (see for example Section~\\ref{FT:section}).\n\n\nExploring the idea~\\footnote{The monodromy group is \n\tthe group of linear transformations of solutions \n\tof a system of hypergeometric differential equations under \n\trotations around its singular locus.\n\tIn the case when the monodromy is reducible, there is a \n\tfinite-dimensional subspace of \n\tholomorphic solutions of the hypergeometric system on which \n\tthe monodromy acts trivially.} \npresented in Ref.\\ \\cite{beukers3}, one possibility is to \nconstruct an explicit solution of the invariant subspace \n(see~\\cite{KK:sunset} and Section~\\ref{FT:section})\nand find the dimension of the irreducible representation. \nOur results presented in ~\\cite{KK:sunset} were confirmed by another technique \nin Ref.~\\cite{bbkp}. \n\nLet us illustrate the notion of irreducible holonomic rank \n(or an irreducible representation) in an application to Feynman diagrams.\nAs follows from our analysis of sunset diagrams~\\cite{KK:sunset}, \nthe dimension of the\nirreducible representation of two-loop sunset with three different masses \nis equal to $4$. There is only one hypergeometric function of three \nvariables having holonomic rank $4$, the $F_D$ function. \nThen we expect that there is a linear combination of \nfour two-loop sunsets and the product of one-loop tadpoles that are\nexpressible in terms of a linear combination of the $F_D$ functions. \n\nAnother approach to the construction of a GKZ representation \nof Feynman Diagrams\nwas done recently in the series of papers in Refs.\\ \\cite{GKZ1,GKZ2,GKZ3}.\nBased on the observation made in Ref.\\ \\cite{Nilsson-Passere}\nabout the direct relation between $A$-functions and \nMellin transforms of rational functions, and exploring the \nLee-Pomeransky representation~\\cite{Lee-Pomeransky}, \nthe authors studied a different aspect of the GKZ representation \nmainly considering the examples of massless or one-loop diagrams. \nTwo non-trivial examples \nhave been presented in Ref.~\\cite{GKZ2}: the two-loop sunset \nwith two different masses and one zero \nmass, which corresponds to a linear combination of two Appell functions \n$F_4$ (see Eq.\\ (3.11) in~\\cite{JK})~\\footnote{It is interesting to note, \n\tthat on-mass shell $z=1$, this diagram has two Puiseux type solutions\n\tthat do not have analytical continuations.}\nand a two-loop propagator with three different masses related \nto the functions $F_C$ of three variables~\\cite{berends}. \n\nA different idea on how to apply the GKZ technique to the analysis of Feynman Diagrams has been\npresented in \\cite{vanhove} and has received further development in\n\\cite{klemm1,klemm2}. \n\n\\subsection{One-Loop Feynman Diagrams}\n\\label{one-loop}\nLet us give special attention to one-loop Feynman diagrams. \nIn this case, two elegant approaches \nhave been developed~\\cite{DD,FJT} that allow us to obtain \ncompact hypergeometric representations for the master-integrals. \nThe authors of the first paper~\\cite{DD} explored \nthe internal symmetries of the Feynman parametric representation to get a one-fold integral \nrepresentation for one-loop Feynman diagrams (see also ~\\cite{bloch-kreimer,n-gon}). 
\nThe second approach~\\cite{FJT} \nis based on the solution of difference equations \nwith respect to the dimension of space-time~\\cite{tarasov:d}\nfor the one-loop integrals.\nIn spite of different ideas on the analysis of Feynman diagrams,\nboth approaches, ~\\cite{DD} and ~\\cite{FJT},\nproduce the same results for one-loop propagator and vertex diagrams ~\\cite{one-loop:vertex1,one-loop:vertex2,one-loop:vertex3}. \nHowever, beyond these examples, the situation is less complete: it was \nshown in Ref.\\ \\cite{FJT} that the off-shell one-loop massive box is expressible in terms of a linear combination of \n$F_S$ Horn-type hypergeometric functions of three variables \n(see also discussions\nin Refs.\\ \\cite{bkm2013,riemann1,riemann2,phan}), \nor in terms of $F_N$ Horn-type hypergeometric functions of three variables ~\\cite{davydychev:box} \n(see Section ~\\ref{FN:section}).\n\nRecently, it was observed \n~\\cite{yangian1,yangian2,yangian3}\nthat massive conformal Feynman diagrams \nare invariant under a Yangian \nsymmetry, which allows one to obtain the hypergeometric representation \nfor conformal Feynman diagrams.\n \n\\subsection{Construction of the $\\ep$-expansion}\nFor physical applications, the construction of the analytical coefficients\nof the Laurent expansions of hypergeometric functions around particular values of parameters (integer, half-integer, rational) is necessary. \nSince the analytic continuation of hypergeometric functions \nis still an unsolved problem, the results are written \nin some region of variables in each order of the $\\ep$-expansion \nin terms of special functions like classical or multiple polylogarithms \n~\\cite{lewin,harmonic,2dim,BBBL,mpl1,mpl2,mpl3},\nand then these functions are analytically continued to another region. For this reason, the analytical properties of special functions \nwere\n analyzed in detail~\\cite{mpl:properties1,mpl:properties2,mpl:properties3,\n\t mpl:properties4,mpl:properties5,panzer2015,PolyLog}.\nAlso, tools for the numerical evaluation of the corresponding functions are important ingredients \n~\\cite{gr1,gr2,vollinga,logsine,maitre1,bonciani2011,maitre2,chaplin,\n\t li22,maple,harmonic8,handyG,duhr-tancredi,walden}. \n\nEach of the hypergeometric function representations (series, integral, Mellin-Barnes, differential equation) can be used for\nthe construction of the $\\ep$-expansion,\nand each of them has some technical advantages or disadvantages\nin comparison with the other ones.\nThe pioneering $\\ep$-expansion of the hypergeometric function \n${}_pF_{p-1}$ around $z=\\pm 1$ was done by David Broadhurst \n~\\cite{david1,david2}. \nThe expansion was based on the analysis of multiple series \nand it was interesting from a mathematical point of view~\\cite{david3}\nas well as for its application to quantum field theory~\\cite{david4}.\nThe integral representation was mainly developed by Andrei Davydychev \nand Bas Tausk~\\cite{dt1,dt2}, so that, finally, the all-order \n$\\ep$-expansion for the Gauss hypergeometric functions around \na rational parameter, a case\nthat covers an important class of diagrams, \nhas been constructed~\\cite{davydychev} in terms of \ngeneralized log-sine~\\cite{lewin} functions or in terms of Nielsen polylogarithms \\cite{DK1,DK2}. 
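\nAn elementary example of such an all-order expansion \n(quoted here purely for illustration) is\n\\begin{equation}\n{}_2F_1(1,\\ep;1+\\ep;z) \n= \\sum_{n=0}^{\\infty} \\frac{\\ep}{n+\\ep}\\, z^n \n= 1 - \\sum_{j=1}^{\\infty} (-\\ep)^j \\, \\mathrm{Li}_j(z) \\;,\n\\end{equation}\nso that each order of the $\\ep$-expansion is a classical polylogarithm of \nuniform weight. A minimal numerical cross-check of this particular relation \n(a sketch assuming only the standard mpmath Python library, unrelated to the \npackages discussed below) reads:\n\\begin{verbatim}\n# compare 2F1(1,eps;1+eps;z) with its expansion truncated at eps^5\nfrom mpmath import mp, hyp2f1, polylog\nmp.dps = 30\neps, z = mp.mpf('0.01'), mp.mpf('0.3')\nexact = hyp2f1(1, eps, 1 + eps, z)\napprox = 1 - sum((-eps)**j * polylog(j, z) for j in range(1, 6))\nprint(exact - approx)   # residual is of order eps^6\n\\end{verbatim}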
\n\nThe integral representation was also the starting point for the \nconstruction of the\n$\\ep$-expansion of the ${}_pF_{p-1}$ hypergeometric functions~\\cite{huber-maitre1,huber-maitre2}, \nand also of the $F_1$ \\cite{ndim4} and $F_D$ functions \naround integer values of parameters \n~\\cite{bogner-brown1,bogner-brown2,panzer2014,bogner-mpl}.\n\nPurely numerical approaches~\\cite{NumExp1,pentagon4,NumExp2}\ncan be applied for arbitrary values of the parameters. \nHowever, this technique typically does not produce a stable numerical\nresult in regions around singularities of the hypergeometric functions. \n\n\nA universal technique which does not \ndepend on the order of the differential equation \nis based on the algebra of multiple sums~\\cite{nested1,nested2,nested3}.\nFor the hypergeometric functions\\footnote{It was shown \n\tin ~\\cite{smirnov,tausk} that multiple Mellin-Barnes integrals \n\trelated to Feynman diagrams could be evaluated analytically\/numerically \n\tat each order in $\\ep$ via multiple sums, without requiring a\n\tclosed expression in terms of Horn-type hypergeometric functions.}\nfor which the nested-sum algorithms~\\cite{nested1} are applicable, \nthe results \nof the $\\ep$-expansion are automatically obtained in terms of multiple \npolylogarithms.\n\nThe nested-sum algorithms~\\cite{nested1} have been implemented in a few\npackages~\\cite{nested1a,nested1b} and allow for the construction of the\n$\\ep$-expansion of hypergeometric functions ${}_pF_{p-1}$ and \nAppell functions $F_1$ and $F_2$ around integer values of parameters\\footnote{\n\tSee also Refs.\\ \\cite{series1,series2} for an alternative realization.}. \nHowever, the nested-sum approach fails for \nthe $\\ep$-expansion of hypergeometric functions \naround rational values of parameters, and it is not applicable to \nsome specific classes of hypergeometric functions (for example, the $F_4$ \nfunction, see~\\cite{pentagon1}).\n\nIn the series of papers~\\cite{DK3,kalmykov2004}, the generating function \ntechnique~\\cite{generating1,generating2} has been developed \nfor the analytical evaluation of multiple sums. \nIndeed, the series generated by the $\\ep$-expansion of hypergeometric \nfunctions has the form \n$\n\\sum_k c(k) z^k\\;, \n$\nwhere the coefficients $c(k)$ include only products of the harmonic sums, \n$\\Pi_{a,b} S_a(k-1) S_b(2k-1)$, with $S_a(k) = \\sum_{j=1}^k\n\\frac{1}{j^a}$. The harmonic sums satisfy the recurrence relations\n$$\nS_a(k) \n= S_a(k-1) \n+ \\frac{1}{k^a}\n\\quad, \\qquad\nS_a(2k+1) \n= S_a(2k-1) \n+ \\frac{1}{(2k+1)^a}\n+ \\frac{1}{(2k)^a}\\;,\n$$ \nso that the coefficients $c(k)$ satisfy \na first-order difference equation\\footnote{In general, it could be \n\ta more generic recurrence, $\\sum_{j=0}^{m} p_{j}(k)\\, c(k+j) = r(k)$.}: \n$$\nP(k+1) c(k+1) = Q(k) c(k) + R(k) \\;, \n$$ where $P$ and $Q$ are polynomial functions \nthat can be defined from the original series.\nThis equation can be converted into \na first-order differential equation for the generating function\n$F(z) = \\sum_k c(k) z^k$, \n$$\n\\frac{1}{z} P \\left(z \\frac{d}{dz} \\right) F(z)\n- P(1)\\, c(1)\\, z = Q \\left(z \\frac{d}{dz} \\right) F(z)\n+ \\sum_{k=1} R(k) z^k \\;.\n$$\nOne of the \nremarkable properties of this technique is that the non-homogeneous part\nof the differential equation, generated by the\nfunction $R(k)$, has a depth one unit lower than the \noriginal sums, so that, step by step, all sums can be evaluated analytically. 
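\nAs an elementary illustration (not taken from the cited papers), consider \n$c(k) = S_1(k)/k$. The recurrence $S_1(k+1)=S_1(k)+\\frac{1}{k+1}$ gives \n$(k+1)\\, c(k+1) = k\\, c(k) + \\frac{1}{k+1}$, which for the generating \nfunction translates into $(1-z)\\, \\theta F(z) = -\\ln(1-z)$ with \n$\\theta = z \\frac{d}{dz}$, so that\n\\begin{equation}\nF(z) = \\sum_{k=1}^{\\infty} \\frac{S_1(k)}{k} z^k \n= \\int_0^z \\frac{-\\ln(1-t)}{t(1-t)}\\, dt \n= \\mathrm{Li}_2(z) + \\frac{1}{2} \\ln^2(1-z) \\;.\n\\end{equation}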
\nBased on this technique, all series arising from \nthe $\\ep$-expansion of hypergeometric functions around \nhalf-integer values of parameters have been evaluated~\\cite{DK3} \nup to weight $4$.\nThe limits considered were mainly motivated by physical reasons (at $O$(NNLO) only \nfunctions of weight $4$ are generated) and, in this limit, it was necessary to introduce only one new function~\\cite{harmonic}, $H_{-1,0,0,1}(z)$.\nThese results~\\cite{DK3} allow us to construct the $\\ep$-expansion of \nthe hypergeometric functions ${}_pF_{p-1}$\naround half-integer values of parameters, see~\\cite{JKV2003,MKL:Gauss}.\n\nOther results and theorems relevant for the evaluation of \nFeynman diagrams, related to the appearance of a factor \n$1\/\\sqrt{3}$ in the $\\ep$-expansion of some diagrams~\\cite{FK1999} \nexpressible in terms of hypergeometric functions,\n\\footnote{Recent results on the analytical evaluation of inverse binomial \n\tsums for particular values of the arguments \n\thave been presented in~\\cite{binsum1,binsum2,binsum3}.}\nwere derived in Refs.\\ \\cite{DK2,KWY2007,KK2010}\\footnote{The appearance of \n\t$1\/\\sqrt{3}$ in RG functions in seven loops was quite \n\tintriguing~\\cite{schnetz1,panzer2015}.}.\n\nLet us consider typical problems arising in this program. We follow \nour analysis presented in Ref.\\ \\cite{BKK2012}; see also the closely related \ndiscussion in Ref.\\ \\cite{abs2018}. \nFirst, the construction of the difference equation for the coefficients \n$c(k)$ is not an easy task~\\cite{schneider1,schneider2,schneider3}. \nIn the second step, the differential operator(s) coming from the difference \nequation,\n$P \\left(z \\frac{d}{dz} \\right) - z Q \\left(z \\frac{d}{dz} \\right)$,\nshould be factorized into a product of differential operators of the first \norder, \n$$\nP \\left(z \\frac{d}{dz} \\right) - z Q \\left(z \\frac{d}{dz} \\right)\n= \n\\Pi_{k=1} \n\\left[ p_{k}(z) \\frac{d}{dz} - q_{k}(z) \\right]\\;,\n$$\nwhere $ p_{k}(z)$ and $q_{k}(z)$ are rational functions. Unfortunately, the \nfactorization of differential operators into irreducible factors is not \nunique~\\cite{landau}: \n$$\n\\left(\n\\frac{d^2}{dx^2} \\!-\\! \\frac{2}{x} \\frac{d}{dx} \\!+\\! \\frac{2}{x^2}\n\\right)\n= \n\\left(\n\\frac{d}{dx} \\!-\\! \\frac{1}{x}\n\\right)\n\\left(\n\\frac{d}{dx} \\!-\\! \\frac{1}{x}\n\\right)\n= \n\\left(\n\\frac{d}{dx} \\!-\\! \\frac{1}{x (1+ax)}\n\\right)\n\\left(\n\\frac{d}{dx} \\!-\\! \\frac{(1+2ax)}{x(1+ax)}\n\\right) \\;, \n$$\nwhere $a$ is a constant. \n\nHowever, the \nfollowing theorem is valid (see~\\cite{schwarz}):\nany two decompositions of a linear differential operator $L^{(p)}$\ninto a product (composition) of irreducible linear differential \noperators, \n$$\nL^{(p)} = \nL_1^{(a_1)} \nL_2^{(a_2)} \n\\cdots \nL_m^{(a_m)} \n= \nP_1^{(r_1)} \nP_2^{(r_2)} \n\\cdots \nP_k^{(r_k)} \\;,\n$$\nhave equal numbers of components, $m=k$, \nand the factors $L_j$ and $P_a$ \nhave the same orders as differential operators,\n$L_a=P_j$ (up to commutation). In the application to the $\\ep$-expansion \nof hypergeometric functions this problem has been discussed in \nRef.\\ \\cite{yost2011}.\n\n\nAfter factorization, an iterated integral over \nrational functions is generated (which is not uniquely defined, as seen in \nthe previous example) that, in general, is not \nexpressible in terms of hyperlogarithms. \nIndeed, the solution of the differential equation\n$$\n\t\\left[ R_1(z) \\frac{d}{dz} \\!+\\! 
Q_1(z) \\right] \\left[ R_2(z) \\frac{d}{dz} \\!+\\! Q_2(z) \\right] h(z) = F(z) \\;.\n\t\\label{de}\n$$\nhas the form\n$$\n\th(z) = \n\t\\int^{z} \\frac{dt_3}{R_2(t_3)} \\exp\\left[ -\\int_0^{t_3}\\frac{Q_2(t_4)}{R_2(t_4)} dt_4 \\right] \n\t\\int^{t_3} \\frac{dt_1}{R_1(t_1)} \\exp\\left[ -\\int_0^{t_1}\\frac{Q_1(t_2)}{R_1(t_2)} dt_2 \\right] F(t_1) \\;. \n$$\nFrom this solution it follows~\\cite{KK2010B}\n\tthat the following conditions are enough to \n\tconvert the iterated integral into hyperlogarithms:\n\tthere are new variables $\\xi$ and $x$ such that \n\t\\begin{eqnarray}\n\t\t&& \n\t\t\\int^z \\frac{Q_i(t)}{R_i(t)} dt = \\ln \\frac{M_i(\\xi)}{N_i(\\xi)} \n\t\t\\Rightarrow\n\t\t\\left. \\frac{dt}{R_i(t)} \\right|_{t = t(\\xi)} \n\t\t\\frac{N_i(\\xi)}{M_i(\\xi)} = dx \\frac{K_i(x)}{L_i(x)} \\;, \n\t\t\\nonumber \n\t\\end{eqnarray}\n\twhere $M_i,N_i,K_i,L_i$ are polynomial functions.\n\nThe last problem is related to the Abel-Ruffini theorem: \na polynomial can be factorized into a product of linear factors over its roots, \nbut there are no solutions in radicals for \npolynomial equations of degree five or more. \nThis problem was solved very elegantly by the introduction \nof cyclotomic polylogarithms~\\cite{nested3}, with integration over \nirreducible cyclotomic polynomials $\\Phi_n(x)$.\nThe first two irreducible polynomials (see Eqs.\\ $(3.3)-(3.14)$ in~\\cite{nested3}) \nare $\\Phi_7$ and $\\Phi_9$\n(both of degree $6$). \nTwo other polynomials, of degree $4$, \n$\\Phi_5$ and $\\Phi_{10}$: \n$(x^4 \\pm x^3 + x^2 \\pm x +1)$, have \nnon-trivial primitive roots. But, up to now, none of these polynomials has been \ngenerated by Feynman diagrams. Surprisingly, \nby increasing the number of loops or the number of scales, \nother mathematical structures are generated~\\cite{bs1,bs2}.\nDetailed analyses of the properties of the new functions have been presented in \nRefs.\\ \\cite{abs2013,abs2014} and automated by Jakob \nAblinger~\\cite{ablinger1,ablinger2,ablinger3}. \nThe problem of integration over algebraic functions \n(typically square roots of polynomials) \nwas solved by the introduction of a new type of functions~\\cite{Bonciani}, \nintermediate between multiple and elliptic polylogarithms.\n\nThe series representation is not very efficient for the construction \nof the $\\ep$-expansion, since the number of series increases with the \norder of the $\\ep$-expansion, as does the complexity of the individual \nsums.\nLet us recall that the Laurent expansion of a hypergeometric function \ncontains a linear combination of multiple sums. \nFrom this point of view, the construction of the analytical coefficients of\nthe $\\varepsilon$-expansion of a hypergeometric function\ncan be carried out independently\nof existing analytical results for each individual multiple sum. \nThe ``internal'' symmetry of a Horn-type hypergeometric function is \nuniquely defined by the corresponding system of differential equations. 
\nWhile exploring this idea, a new algorithm was presented in \nRefs.\\ \\cite{kwy2006,kwy2007}, based on factorization, looking for a \nlinear parametrization and direct iterative solution \nof the differential equation for a hypergeometric function.\nThis approach allows the construction of the analytical coefficients of the \n$\\varepsilon$-expansion of a hypergeometric \nfunction, as well as obtaining analytical expressions \nfor a large class of multiple series \nwithout referring to the algebra of nested sums.\n\nBased on this approach, the all-order $\\varepsilon$-expansion of the Gauss \nhypergeometric function around half-integer and rational values of parameters\nhas been constructed~\\cite{kwy2006,kk2008}, so that the first $20$ coefficients \naround half-integer values of parameters, the $12$ coefficients \nfor $q=4$ and $10$ coefficients for $q=6$\nhave been generated already in 2012~\\footnote{The results \n\thave been written in \n\tterms of hyperlogarithms of primitive $q$-roots of unity.}. \nAnother record is the generation of $24$ coefficients for the\nClausen hypergeometric function ${}_3F_2$ around integer values of \nparameters, relevant for the analysis~\\footnote{The results of \n\t~\\cite{david5} were relevant for the reduction of multiple zeta values\n\tto the minimal basis.} \nperformed in ~\\cite{boels}.\nTo our knowledge, at the present moment, this remains the fastest and most \nuniversal algorithm. \n\n\nMoreover, it was shown in Refs.\\ \\cite{kwy2006,kwy2007}, \nthat when the coefficients of the $\\varepsilon$-expansion of a hypergeometric \nfunction are expressible in terms of multiple polylogarithms, there is a set \nof parameters (not uniquely defined) such that, \nat each order of $\\varepsilon$, the coefficients of the $\\varepsilon$-expansion \ninclude multiple polylogarithms of a single uniform weight. \nA few years later, this property was established \nnot only for hypergeometric functions, but for Feynman Diagrams\n~\\cite{Henn:1}.\n\nA multivariable generalization~\\cite{BKK2012} \nof the algorithm of Refs.\\ \\cite{kwy2006,kwy2007} has been described.\nThe main difference with respect to the case of one variable is the \nconstruction of a system of differential equations of triangular form\n to avoid the appearance of elliptic functions.\nAs a demonstration of the validity of the algorithm, the first few coefficients \nof the $\\varepsilon$-expansion of the Appell hypergeometric functions \n$F_1,F_2,F_3$ and $F_D$ around integer values \nof parameters have been evaluated analytically~\\cite{bkm2013}. \n\nThe $\\varepsilon$-expansion of the hypergeometric \nfunctions $F_3$ and $F_D$ \nare not covered by the nested sums technique or its generalization. \nThe differential equation\ntechnique can be applied to the construction of analytic coefficients of \nthe $\\varepsilon$-expansions of hypergeometric functions of several variables \n(which is equivalent to the multiple series of several variables)\naround any rational values of parameters via direct solution of the \nlinear systems of differential equations. 
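\nThe mechanism is easiest to see in the one-variable case (a schematic \nillustration only; the algorithms of the cited papers are more general): \nfor ${}_2F_1(a\\ep, b\\ep; 1+c\\ep; z)$ the hypergeometric equation can be \nwritten as\n\\begin{equation}\n(1-z)\\, \\theta^2 F \n+ \\ep \\left[ c - z(a+b) \\right] \\theta F \n- \\ep^2\\, a b\\, z\\, F = 0 \\;, \n\\qquad \\theta = z \\frac{d}{dz} \\;,\n\\end{equation}\nso that, writing $F = \\sum_{k \\geq 0} \\ep^k w_k(z)$ with $w_0=1$, each \ncoefficient satisfies\n$\n(1-z)\\, \\theta^2 w_k = - \\left[ c - z(a+b) \\right] \\theta w_{k-1} \n+ a b\\, z\\, w_{k-2} \n$\nand is obtained by two integrations with the kernels $dz/z$ and $dz/(1-z)$, \ni.e.\\ in terms of harmonic polylogarithms of uniform weight $k$ \n(for example, $w_2 = a b \\, \\mathrm{Li}_2(z)$).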
\n\nThe differential equation approach~\\cite{kwy2006,kwy2007}\nallows us to analyze arbitrary sets of parameters simultaneously\nand to construct the solution in terms of iterated integrals, \nbut for any hypergeometric function\nthe Pfaff system of differential equations should be constructed.\nThat was the motivation for creation of the package(s) (the HYPERDIRE \nproject) ~\\cite{hyperdire} for the manipulation of the parameters of \nHorn-type hypergeometric functions of several variables. \nFor illustration, we describe in detail how it works in the application \nto the $F_3$ hypergeometric function in Section~\\ref{F3:general}.\n\nRecently, a new technique~\\cite{abreu1}\nfor the construction of the $\\ep$-expansion \nof Feynman diagrams~\\cite{abreu2}\nas well as for hypergeometric functions has been presented~\\cite{abreu3}.\nIt is based on the construction of a coaction~\\footnote{An interesting \n\tconstruction of the coaction for the Feynman graph has been presented \n\trecently in Ref.\\ \\cite{Kreimer2020}.} \nof certain hypergeometric functions. The structures of the $\\ep$-expansion of \nthe Appell hypergeometric functions $F_1,F_2,F_3$ and $F_4$ as well as $F_D$\n(for the last function $F_D$ see also the discussion in Ref.\\ \\cite{brown-dupont}) \naround integer values of parameters \nare in agreement with our analysis and partial results presented in\nRefs.\\ \\cite{BKK2012} and ~\\cite{bkm2013}. However, the \nstructure of the $\\ep$-expansion around rational values of parameters \nhas not been discussed in~\\cite{abreu3}, nor in ~\\cite{brown-dupont}.\n\n\\section{Horn-type hypergeometric functions}\n\\label{Horn-Functions}\n\\subsection{Definition and system of differential equations}\n\\label{Horn:defintion}\nThe study of solutions of linear partial \ndifferential equations (PDEs) of several variables in terms of multiple series,\n{\\em i.e.} a multi-variable generalization of the Gauss hypergeometric \nfunction~\\cite{Gauss}, began long ago~\\cite{Lauricella}.\n\nFollowing the Horn definition~\\cite{horn}, a multiple series is called a \n``Horn-type hypergeometric function,'' if, about the point $\\vec{z}=\\vec{0}$, \nthere is a series representation\n\\begin{equation}\nH(\\vec{z}) = \\sum_{\\vec{m}} C(\\vec{m}) \\vec{z}^{\\vec{m}},\n\\label{def}\n\\end{equation}\nwhere \n$\n\\vec{z}^{\\vec{m}} = z_1^{m_1} \\cdots z_r^{m_r}\n$\nfor any integer multi-index\n$\\vec{m} = (m_1, \\cdots, m_r)$, \nand the ratio of two coefficients can be represented as a ratio of\ntwo polynomials:\n\\begin{eqnarray}\n\\frac{C(\\vec{m}+\\vec{e}_j)}{C(\\vec{m})} \n= \\frac{P_j(\\vec{m})}{Q_j(\\vec{m})} \n\\;,\n\\label{horn}\n\\end{eqnarray}\nwhere\n$\n\\vec{e}_j\n$\ndenotes the unit vector with unity in its $j^{\\rm th}$ entry, \n$ \n\\vec{e}_j = (0,\\cdots,0,1,0,\\cdots,0). 
\n$\nThe coefficients $C(\\vec{m})$ of such a series can be expressed as \nproducts or ratios of Gamma-functions (up to some factors \nirrelevant for our consideration)~\\cite{Ore:Sato:1,Ore:Sato:2}:\n\\begin{eqnarray} \n\tC(\\vec{m})=\n\\frac{\n\t\\prod\\limits_{j=1}^p\n\t\\Gamma\\left(\\sum_{a=1}^r \\mu_{ja}m_a+\\gamma_j \\right)}\n{\n\t\\prod\\limits_{k=1}^q\n\t\\Gamma\\left( \\sum_{b=1}^r \\nu_{kb}m_b+\\sigma_k \\right)\n} \\;,\n\\label{ore}\n\\end{eqnarray}\nwhere\n$ \n\\mu_{ja}, \\nu_{kb}, \\sigma_j,\\gamma_j \\in \\mathbb{Z}\n$\nand $m_a$ are elements of ${\\vec{m}}$.\n\nThe Horn-type hypergeometric function, Eq.~(\\ref{horn}), \nsatisfies the following system of differential equations:\n\\begin{equation}\n0 =\nL_j (\\vec{z})\nH(\\vec{z})\n=\n\\left[\nQ_j\\left(\n\\sum_{k=1}^r z_k\\frac{\\partial}{\\partial z_k}\n\\right)\n\\frac{1}{z_j}\n-\nP_j\\left(\n\\sum_{k=1}^r z_k\\frac{\\partial}{\\partial z_k}\n\\right)\n\\right]\nH(\\vec{z}) \\;,\n\\label{diff}\n\\end{equation}\nwhere $j=1, \\ldots, r$.\nIndeed, \n\\begin{eqnarray}\n&& \nQ_j\\left(\n\\sum_{k=1}^r z_k\\frac{\\partial}{\\partial z_k}\n\\right)\n\\frac{1}{z_j} \\sum_{\\vec{m}} C(\\vec{m}) \\vec{z}^{~\\vec{m}}\n= \\sum_{\\vec{m}} Q_j(\\vec{m}) C(\\vec{m}\\!+\\!\\vec{e}_j) \\vec{z}^{~\\vec{m}}\n\\nonumber \\\\ && \n= \n\\sum_{\\vec{m}} P_j(\\vec{m}) C(\\vec{m}) \\vec{z}^{~\\vec{m}}\n=\nP_j\\left(\n\\sum_{k=1}^r z_k\\frac{\\partial}{\\partial z_k}\n\\right)\n\\sum_{\\vec{m}} C(\\vec{m}) \\vec{z}^{~\\vec{m}} \\;.\n\\nonumber \n\\end{eqnarray}\nThe degrees of the polynomials $P_i$ and $Q_i$ \nare $p_i$ and $q_i$, respectively.\nThe largest of these, $r=\\max \\{p_i,q_j \\}$, is called the order of \nthe hypergeometric series. \nTo close the system of differential equations, the \n{\\it prolongation procedure} \nshould be applied: by\napplying the differential operator $\\partial_i$ to $L_j$\nwe can convert the system of linear PDEs with polynomial \ncoefficients into Pfaff form (for simplicity, we assume that system is closed): \n\n\\begin{eqnarray}\nL_j H(\\vec{z}) = 0 \n& \\Rightarrow & \n\\Biggl\\{\nd \\omega_i(\\vec{z}) = \\Omega_{ij}^k(\\vec{z}) \\omega_j(\\vec{z}) dz_k \\;,\n\\quad \nd \\left[ d \\omega_i(\\vec{z}) \\right] = 0 \n\\Biggr\\} \\;.\n\\label{pfaff}\n\\end{eqnarray}\n\nInstead of a series representation, one can \nuse a Mellin-Barnes integral representation \n(see the discussion in ~\\cite{KK:MB}). Indeed, \nthe multiple Mellin-Barnes representation for a Feynman diagram \ncould be written in the form in Eq.\\ (\\ref{MB}).\nLet us define the polynomials $P_i$ and $Q_i$ as \n\\begin{equation}\n\\frac{P_i(\\vec{t})}{Q_i(\\vec{t})} = \\frac{\\phi(\\vec{t}+e_i)}{\\phi(\\vec{t})} \\;.\n\\end{equation}\nThe integral (\\ref{MB}) then satisfies the system of linear differential \nequations (\\ref{diff})\n\\begin{eqnarray}\n\\left. Q_i(\\vec{t}) \\right|_{t_j \\to \\theta_j }\\frac{1}{z_i} \\Phi({\\bf A},\n\t\\vec{B};{\\bf C}, \\vec{D};\\vec{z}) \n\t= \\left. P_i(\\vec{t}) \\right|_{t_j \\to \\theta_j } \\Phi({\\bf A},\n\t\\vec{B};{\\bf C}, \\vec{D};\\vec{z}) \\;,\n\\label{MB:DE}\n\\end{eqnarray}\nwhere $\\theta_i = z_i\\frac{d}{dz_i}$.\nSystems of equations such as Eq. 
(\\ref{MB:DE}) are left ideals in \nthe Weyl algebra of linear differential operators with polynomial coefficients.\n\n\\subsection{Contiguous relations}\n\\label{Horn:contiguous}\nAny Horn-type hypergeometric function is a function of two types of variables, \n{\\it continuous} variables, $z_1,z_2, \\cdots, z_r$ and \n{\\it discrete} variables:\n$\\{ J_a \\}:= \\{\\gamma_k,\\sigma_r \\}$, \nwhere the latter can change by integer numbers \nand are often referred to as the {\\it parameters} of the hypergeometric \nfunction. \n\nFor any Horn-hypergeometric function, \nthere are linear differential operators \nchanging the value of the discrete variables by one unit. \nIndeed, let us consider a multiple series defined by \nEq.~(\\ref{def}).\n\nTwo hypergeometric functions $H$\nwith sets of parameters shifted by unity,\n$H(\\vec{\\gamma}+\\vec{e_c};\\vec{\\sigma};\\vec{z})$ and\n$H(\\vec{\\gamma};\\vec{\\sigma};\\vec{z})$,\nare related by a linear differential operator:\n\\begin{eqnarray}\nH(\\vec{\\gamma}+\\vec{e_c};\\vec{\\sigma};\\vec{z})\n& = &\n\\left ( \\sum_{a=1}^r \\mu_{ca} \nz_a \\frac{\\partial}{\\partial z_a}+\\gamma_c \\right)\nH(\\vec{\\gamma};\\vec{\\sigma};\\vec{z})\n\\;.\n\\label{do1}\n\\end{eqnarray}\nSimilar relations also exist for the lower parameters:\n\\begin{eqnarray}\nH(\\vec{\\gamma};\\vec{\\sigma}-\\vec{e}_c;\\vec{z})\n& = &\n\\left(\n\\sum_{b=1}^r\n\\nu_{cb} z_b \\frac{\\partial}{\\partial z_b} \\!+\\! \\sigma_c \\!-\\! 1\n\\right)\nH(\\vec{\\gamma};\\vec{\\sigma};\\vec{z})\n\\;.\n\\label{do2}\n\\end{eqnarray}\nLet us rewrite these relations in a symbolic form: \n\\begin{eqnarray}\nR_{K}(\\vec{z})\\frac{\\partial }{\\partial \\vec{z}_K} H(\\vec{J};\\vec{z}) \n= H(\\vec{J} \\pm e_K; \\vec{z}) \\;,\n\\label{direct}\n\\end{eqnarray}\nwhere $R_{K}(\\vec{z})$ are polynomial (rational) functions.\n\nIn Refs.\\ \\cite{algorithm1,algorithm2} it was shown that there is an \nalgorithmic construction of inverse linear differential operators: \n\\begin{eqnarray}\nB_{L,N}(\\vec{z})\\frac{\\partial^L }{\\partial \\vec{z}_N} \n\\left( R_{K}(\\vec{z})\\frac{\\partial }{\\partial \\vec{z}_K} \\right) \nH(\\vec{J};\\vec{z}) \n\\equiv \nB_{L,N}(\\vec{z})\\frac{\\partial^L }{\\partial \\vec{z}_N} H(\\vec{J} \\pm e_K; \\vec{z}) \n= \nH(\\vec{J};\\vec{z}) \n\\;.\n\\label{inverse}\n\\end{eqnarray}\nApplying the direct or inverse differential operators\nto the hypergeometric function\nthe values of the parameters can be changed by an arbitrary integer: \n\\begin{equation}\nS(\\vec{z}) H(\\vec{J}+\\vec{m}; \\vec{z})\n= \n\\sum_{j=0}^r S_j(\\vec{z}) \n\\frac{\\partial^j}{\\partial \\vec{z}} H(\\vec{J}; \\vec{z}) \\;,\n\\label{reduction}\n\\end{equation}\nwhere $\\vec{m}$ is a set of integers, $S$ and $S_j$ are polynomials \nand $r$ is the holonomic rank (the number of linearly independent solutions) \nof the system of differential equations Eq.~(\\ref{diff}).\nAt the end of the reduction, the differential operators acting on the \nfunction $H$ can be replaced by a linear combination of the function \nevaluated with shifted parameters. \n\nWe note that special considerations are necessary when the system of \ndifferential operators, Eq.~(\\ref{diff}), has a\nPuiseux-type solution (see Section \\ref{FT:section}). 
\nIn this case, the prolongation procedure gives \nrise to the Pfaffian form, but\nthis set of differential equations is not enough to construct \nthe inverse operators~\\cite{BK}, so that new \ndifferential equations should be introduced.\nIn the application to the Feynman diagrams, this problem \nis closely related to obtaining new relations between \nmaster integrals, see Ref.~\\cite{KK:extra} for details.\nIn Section~\\ref{FT:section} we present an example of a\nsecond-order Horn-type hypergeometric equation \nin three variables having a Puiseux-type solution. \n\nAnother approach to the reduction of hypergeometric functions \nis based on the explicit algebraic solution of the contiguous relations, \nsee the discussion in Ref.\\ \\cite{Schlosser}. \nThis technique is applicable in many particular cases, including \n${}_2F_1, {}_3F_2$, and the Appell functions $F_1,~F_2,~F_3,~F_4$ \n(see the references in Ref.\\ \\cite{acat}), and there is \na general expectation that the contiguous relations could be solved algebraically for any Horn-type hypergeometric function. However, to our knowledge, \nnobody has analyzed the algebraic reduction in the application to \ngeneral hypergeometric functions having a Puiseux-type solution. \n\nThe multiple Mellin-Barnes integral\n$\\Phi$ defined by Eq.~(\\ref{MB}) satisfies similar \ndifferential contiguous relations: \n\\begin{eqnarray}\n\\Phi({\\bf A}, \\vec{B} \\!+\\! e_a; {\\bf C}, \\vec{D};\\vec{z}) \n& = & \n\\left(\\sum_{i=1}^m A_{ai} \\theta_i \\!+\\! B_a \\right) \n\\Phi({\\bf A},\\vec{B}; {\\bf C}, \\vec{D};\\vec{z}) \\;,\n\\nonumber \\\\ \n\\quad \n\\Phi({\\bf A}, \\vec{B}; {\\bf C}, \\vec{D} \\!-\\! e_b;\\vec{z}) \n& = & \n\\left(\\sum_{j=1}^r C_{bj} \\theta_j \\!+\\! D_{b} \\right)\n\\Phi({\\bf A},\\vec{B}; {\\bf C}, \\vec{D};\\vec{z}) \n\\;,\n\\label{step-up-down}\n\\end{eqnarray}\nso that \nthe original diagram may be explicitly reduced to a set of basis functions\nwithout examining the IBP relations~\\cite{IBP1,IBP2}.\nA non-trivial example of this type of reduction beyond IBP relations \nhas been presented in Ref.\\ \\cite{KK:extra} \n(see also the discussion in Chapter 6 of Ref.\\ \\cite{KK:sunset}).\n\n\n\n\n\\section{Examples}\n\\label{Example}\n\\subsection{Holonomic rank \\& Puiseux-type solution} \nIn addition to the examples presented previously in our series of \npublications, we present here a few new examples.\n\n\\subsubsection{Evaluation of holonomic rank: the hypergeometric \n\tfunction $F_N$}\n\\label{FN:section}\nThe Lauricella-Saran hypergeometric function of three variables $F_N$ \n is defined about the point $z_1=z_2=z_3=0$ by\n\\begin{eqnarray}\n&& \nF_N(a_1,a_2,a_3;b_1,b_2; c_1,c_2; z_1,z_2,z_3)\n\\nonumber \\\\ && \n= \\sum_{m_1,m_2,m_3=0}^\\infty\n\\left[ \n\\prod_{j=1}^3 (a_j)_{m_j} \\frac{z_j^{m_j}}{m_j!}\n\\right] \n\\frac{(b_1)_{m_1+m_3} (b_2)_{m_2}} \n{(c_1)_{m_1} (c_2)_{m_2+m_3}} \n\\;. \n\\label{FN:series}\n\\end{eqnarray}\nThis function is related to one-loop box diagrams in an \narbitrary dimension considered by Andrei Davydychev~\\cite{davydychev:box}. \n\nApplying the general algorithm~\\cite{bkm2013}, \nthe following result is easily derived: \n\\begin{theorem}\n\tFor generic values of the parameters, \n\tthe holonomic rank of the function $F_N$ \n\tis equal to $8$.
\n\\end{theorem}\nIn this way, for generic values of the parameters, \nthe result of the differential reduction, Eq.\\ (\\ref{reduction}), \nhas the following form: \n$$\nS(\\vec{z}) F_N(\\vec{J}+\\vec{m}; \\vec{z})\n= \n\\left[ S_0\n+ S_i \\sum_{j=1}^3 \\theta_j \n+ \\sum_{\\substack{i,j=1 \\\\ i2$, a solution exists only in terms \nof elliptic functions. However, it may happen that \nsuch a parametrization exists for $q=2$, but we are not able to find it.\n\nThis problem is closely related to \nthe solution of a functional equation. \nFor example, for the equation \n$$\nf^n + g^n = 1\\;, \n$$\nthe solutions can be characterized as follows \n\\cite{gross,baker}:\n\\begin{itemize}\n\\item\nFor $n=2$, all solutions are of the form \n$$\nf = \\frac{2 \\beta(z)}{ 1+\\beta^2(z)} \\;, \n\\quad \ng = \\frac{1-\\beta^2(z)}{1+\\beta^2(z)} \\;,\n$$\nwhere $\\beta(z)$ is an arbitrary function. \n\\item\nFor $n=3$, one solution is given by\n\\begin{equation}\nf = \\frac{1}{2 \\wp}\n\\left(\n1 + \\frac{1}{\\sqrt{3}} \\wp'\n\\right)\\;, \n\\quad \ng = \\frac{1}{2 \\wp}\n\\left(\n1 - \\frac{1}{\\sqrt{3}} \\wp'\n\\right)\\;, \n\\end{equation}\nwhere $\\wp$ is the Weierstrass $\\wp$-function satisfying \n$\n\\left( \\wp' \\right)^2 = 4 \\wp^3 - 1 \\;. \n$\nFor $n=3$, the original equation is of genus $1$, so that \nthe uniformization theorem ensures the existence of an elliptic solution. \n\\end{itemize}\n\n\nOne of the most natural sets of variables for\nthe set of parameters under consideration is the following: \n$\nP_1 = \\xi_1 \\;, P_2 = \\xi_2 \\;, \n$\nso that \n\\begin{equation}\nH = \\frac{1}{\\xi_1^p \\xi_2^p} \n\\left(\\xi_1^q \\xi_2^q - \\xi_1^q - \\xi_2^q \\right)^\\frac{p}{q} \\;. \n\\label{surface}\n\\end{equation}\n\n\n\\subsubsection{The rational parametrization: set 2}\n\\label{F3:set2}\nIn a similar manner, let us analyze another set of parameters: \n$$\nF_3\\left(-\\frac{p}{q} + a_1 \\varepsilon, \n a_2 \\varepsilon, \n b_1 \\varepsilon, \n -\\frac{p}{q}+b_2\\varepsilon, \n 1-\\frac{p}{q}+c\\varepsilon; x,y\n\\right) \\;.\n$$\nIn this case, \n$\ns_1 = s_2 = -p \\;, \n$\nand the functions $h_i$ have the form\n$\nh_1(x) = x^\\frac{p}{q} \\;, \n\\quad \nh_2(y) = y^\\frac{p}{q} \\;. \n$\nApplying the same trick with the introduction of new \nfunctions $P_1$ and $P_2$, we would find that the existence of a rational parametrization corresponds in the present case to the \nvalidity of Eq.\\ (\\ref{rational-parametrization}). \nIn particular, by introducing new variables \n$\nx^\\frac{1}{q} = \\xi_1, y^\\frac{1}{q} = \\xi_2 \\;, \n$ \nwe obtain\n$\nH = \n\\left(\\xi_1^q \\xi_2^q - \\xi_1^q - \\xi_2^q \\right)^\\frac{p}{q} \\;. \n$\n\\subsubsection{The rational parametrization: set 3}\n\\label{F3:set3}\nLet us analyze the following set of parameters: \n$\np = 0 \\;, s_1, s_2 \\neq 0 \\;, \n$\ncorresponding to \n$$\nF_3\\left( \\frac{p_1}{q} + a_1 \\varepsilon, \n \\frac{p_2}{q} + b_1 \\varepsilon, \n b_2 \\varepsilon, \n 1+c\\varepsilon; x,y\n\\right) \\;.\n$$\nIn this case, \n$$\nh_1(x) = \\left(1-x \\right)^\\frac{p_1}{q} \\;, \n\\quad \nh_2(y) = \\left(1-y \\right)^\\frac{p_2}{q} \\;, \n\\quad \nH(x,y) = \\left(\n\\frac{x^{s_2} y^{s_1}}{(xy-x-y)^{s_1+s_2}} \n\\right)^\\frac{1}{q} \\;. \n$$\nFor simplicity, we set $s_1=-s_2=s$, and put \n$1-x = P_1^q$ and $1-y= P_2^q$, so that \n$H = \\left(\\frac{1-P_2^q}{1-P_1^q} \\right)^{\\frac{s}{q}} \\equiv P_3$. In particular, for $P_1 = \\xi_1, P_2 = \\xi_2$, \n$H = \\left(\\frac{1-\\xi_2^q}{1-\\xi_1^q} \\right)^{\\frac{s}{q}}$.
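\n\nAs a quick consistency check of the simplification claimed for {\\bf set 3}, the following SymPy snippet (a minimal sketch, not part of the original derivation; the variable names and the illustrative integer values $q=2$, $s=1$ are ours) substitutes $1-x = \\xi_1^q$, $1-y = \\xi_2^q$ into $H(x,y)$ with $s_1=-s_2=s$ and confirms that it reduces to $\\left(\\frac{1-\\xi_2^q}{1-\\xi_1^q}\\right)^{s\/q}$:\n\\begin{verbatim}\nimport sympy as sp\n\nxi1, xi2 = sp.symbols('xi1 xi2', positive=True)\nq, s = 2, 1                    # illustrative values only\ns1, s2 = s, -s                 # the choice s1 = -s2 = s made above\nx = 1 - xi1**q                 # substitution 1 - x = xi1^q\ny = 1 - xi2**q                 # substitution 1 - y = xi2^q\n\n# H(x,y) of set 3; the (xy-x-y) factor drops out since s1 + s2 = 0\nH = (x**s2 * y**s1 * (x*y - x - y)**(-(s1 + s2)))**sp.Rational(1, q)\ntarget = ((1 - xi2**q) * (1 - xi1**q)**(-1))**sp.Rational(s, q)\nprint(sp.simplify(H - target))  # expected output: 0\n\\end{verbatim}\nThe same check goes through for other integer choices of $s$ and $q$.\n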
\n\n\\subsubsection{The rational parametrization: set 4}\n\\label{F3:set4}\nLet us analyze the following set of parameters: \n$\ns_1 = -p \\;, s_2 = 0 \\;, \n$\nso that hypergeometric function is \n$$\nF_3\\left(-\\frac{p_1}{q} + a_1 \\varepsilon, \n a_2 \\varepsilon, \nb_1 \\varepsilon, \nb_2 \\varepsilon, 1-\\frac{p}{q}+c\\varepsilon; x,y\n\\right) \\;.\n$$\nIn this case, \n\\begin{eqnarray}\nh_1(x) & = & x^\\frac{p}{q} \\;, \n\\quad \nh_2(y) = \\left(\\frac{y}{y-1} \\right)^\\frac{p}{q} \\;, \n\\quad \nH(x,y) = x^\\frac{p}{q} \\equiv h(x) \\;.\n\\end{eqnarray}\nLet us take a new set of variables $(x,y) \\to (\\xi_1,\\xi_2)$: \n\\begin{eqnarray}\n\\xi_1 & = & x^\\frac{1}{q} \\;, \n\\quad \n\\xi_2 = \\left(\\frac{y}{y-1}\\right)^\\frac{1}{q} \\;, \n\\end{eqnarray}\nIn terms of a new variables we have: \n\\begin{eqnarray}\nH(x,y) & \\equiv & h_1(x) = \\xi_1^p \\;, \n\\quad \nh_2(y) = \\xi_2^p \\;, \n\\quad \nx+y-xy = \n\\frac{\\xi_1^q-\\xi_2^q}{1-\\xi_2^q} \\;. \n\\end{eqnarray}\nThus, a rational parametrization exists. \n\n\\subsubsection{The rational parametrization: set 5}\n\\label{F3:set5}\nConsider the set of parameters defined by \n$\ns_2 = p = 0 \\;, \ns_1 \\neq 0,\n$\nthat corresponds to the case\n$\nF_3\\left(-\\frac{p_1}{q} + a_1 \\varepsilon, \n a_2 \\varepsilon, \nb_1 \\varepsilon, \nb_2 \\varepsilon, \n1+c \\varepsilon; x,y\n\\right) \\;.\n$\nIn this case, \n\\begin{eqnarray}\nh_1(x) = \\left(1-x\\right)^{\\frac{p_1}{q}} \\;, \n\\quad \nh_2(y) = 1\\;, \n\\quad \nH(x,y) = \\left( \\frac{y}{xy-x-y} \\right)^\\frac{p_1}{q} \\;, \n\\end{eqnarray}\nLet us suggest that \n$$\n1-x = P^q \\;, \n\\quad \nH(x,y) = \\frac{y}{x+y-xy} = Q^q \\;,\n$$\nwhere $P$ and $Q$ are rational functions. \nThen, \n$\ny = \\frac{(1-P^q)Q^q}{1-P^q Q^q} \\;.\n$\nAfter the redefinition,\n$\nP Q = R \\; \n$, we get\n$ \ny = \\frac{(1-P^q)}{(1-R^q)} \\frac{R^q}{P^q} \\;. \n$\nThe simplest version of $P$ and $R$ are polynomials: \n$\nP = \\xi_1 \\;, \n$\nand \n$ \nR = \\xi_2 \\;. \n$ \nIn this parametrization, \n$$\nh(x) = \\xi_1^{p_1} \\;, \n\\quad \nH(x,y) = \\left( \\frac{\\xi_2}{\\xi_1} \\right)^{p_1}\\;, \n\\quad \ny = \\frac{1-\\xi_1^q}{1-\\xi_2^q} \\left( \\frac{\\xi_2}{\\xi_1}\\right)^q \\;. \n$$\nIn this case, the rational parametrization exists. \n\n\\subsubsection{Explicit construction of expansion: integer values of parameters}\nLet us consider the construction of the $\\ep$-expansion around \ninteger values of parameters. \nIf we put \n$$\n\\omega_0 = F_3(a_1 \\ep, b_1 \\ep, a_2 \\ep, b_2 \\ep, 1+c\\ep;x,y), \n$$\nthen the system of differential equations can be presented \nin the form \n\\begin{eqnarray}\n\\frac{\\partial}{\\partial x} \\omega_1 & = & \n\\left[ \n\\frac{1}{x-1}\n- \\frac{1}{x} \n\\right]\n\\omega_3 \n- \\left[ \\frac{c}{x} + \\frac{(a_1+b_1-c)}{x-1} \\right] \\ep \\omega_1 \n- a_1 b_1 \\frac{1}{x-1} \\ep^2 \\omega_0 \n\\;, \n\\\\ \n\\frac{\\partial}{\\partial y} \\omega_2 & = & \n\\left[ \n\\frac{1}{y-1}\n- \\frac{1}{y} \n\\right] \\omega_3 \n- \\left[ \\frac{c}{y} + \\frac{(a_2+b_2-c)}{y-1} \\right] \\ep \\omega_2 \n- a_2 b_2 \\frac{1}{y-1} \\ep^2 \\omega_0 \n\\;, \n\\\\\n\\frac{\\partial}{\\partial x} \\omega_3 \n& = & \n\\left[ \n\\frac{(a_2\\!+\\!b_2\\!-\\!c) }{x}\n\\!-\\! \n\\frac{(a_1 \\!+\\! b_1 \\!+\\! a_2\\!+\\!b_2\\!-\\!c)}{x+\\frac{y}{1-y}} \n\\right] \\ep \\omega_3 \n\\nonumber \\\\ && \n- \n\\frac{a_1 b_1}{x+\\frac{y}{1-y}} \\ep^2 \\omega_2 \n\\!+\\! 
\n\\left[ \n\\frac{1}{x} \n- \n\\frac{1}{x+\\frac{y}{1-y}}\n\\right]\na_2 b_2 \\ep^2 \\omega_1 \n\\;, \n\\\\ \n\\frac{\\partial}{\\partial y} \\omega_3 \n& = & \n\\left[ \n\\frac{(a_1\\!+\\!b_1\\!-\\!c) }{y}\n\\!-\\! \n\\frac{(a_1 \\!+\\! b_1 \\!+\\! a_2\\!+\\!b_2\\!-\\!c)}{y+\\frac{x}{1-x}} \n\\right] \\ep \\omega_3 \n\\nonumber \\\\ && \n- \n\\frac{a_2 b_2}{y+\\frac{x}{1-x}} \\ep^2 \\omega_1 \n\\!+\\! \n\\left[ \n\\frac{1}{y} \n- \n\\frac{1}{y+\\frac{x}{1-x}}\n\\right]\na_1 b_1 \\ep^2 \\omega_2 \n\\;. \n\\end{eqnarray}\nThis system can be straightforwardly integrated in terms of multiple polylogarithms, defined via a one-fold \niterated integral $G$, where \n\\begin{eqnarray}\nG(z;a_k,\\vec{a}) & = & \n\\int_0^{z} \\frac{dt}{t-a_k}\nG(t;\\vec{a}) \\;.\n\\label{G}\n\\end{eqnarray}\n \nIn addition, the $\\ep$-expansion of a Gauss hypergeometric function around integer values of parameters is needed. \nIt has the following form (see Eq.~(34) in ~\\cite{MKL:Gauss}):\n\\begin{eqnarray}\n&& \n{}_2F_1(a\\ep,b\\ep;1\\!+\\!c\\ep;z) \n= \n1 \\!+\\! \na b \\ep^2 \\Li{2}{z}\n\\nonumber \\\\ && \\hspace{5mm}\n+ \na b \\ep^3 \n\\left[(a \\!+\\! b \\!-\\! c) \\Snp{1,2}{z} \\!-\\! c \\Li{3}{z} \\right] \n+ O(\\ep^4)\n\\;.\n\\nonumber \\\\ \n\\end{eqnarray}\n\nThe first iteration gives rise to\n\\begin{eqnarray}\n\\omega_0^{(0)} & = & 1 \\;, \n\\quad \n\\omega_1^{(0)} = \\omega_2^{(0)} = \\omega_3^{(0)} = 0 \\;, \n\\quad \n\\omega_0^{(1)} = \\omega_1^{(1)} = \\omega_2^{(1)} = \\omega_3^{(1)} = 0 \\;. \n\\nonumber \n\\end{eqnarray}\n\nThe results of the second iteration are the following: \n\\begin{eqnarray}\n\\omega_3^{(2)} & = & 0 \\;, \n\\quad \n\\omega_1^{(2)} = -a_1 b_1 \\ln(1-x) \\;, \n\\quad \n\\omega_2^{(2)} = - a_2 b_2 \\ln(1-y) \\;, \n\\nonumber \\\\ \n\\omega_0^{(2)} & = & a_1 b_1 \\Li{2}{x} + a_2 b_2 \\Li{2}{y} \\;, \n\\end{eqnarray}\nwhere the classical polylogarithms $\\Li{n}{z}$ \nare defined as \n\\begin{equation}\n\\Li{1}{z} = - \\ln(1-z) \\; ,\n\\quad \n\\Li{n+1}{z} = \\int_0^z \\frac{dt}{t} \\Li{n}{t}, \\quad n\\ge 1.\n\\label{polylogarithm}\n\\end{equation}\n\nAfter the third iteration, we have\n\\begin{eqnarray}\n\\omega_{3}^{(3)} & = & 0\n\\\\ \n\\omega_1^{(3)} & = & \n\\frac{1}{2} a_1 b_1 (a_1+b_1-c) \\ln^2 (1-x) \n- a_1 b_1 c \\Li{2}{x} \n\\;, \n\\\\ \n\\omega_2^{(3)} & = & \n\\frac{1}{2} a_2 b_2 (a_2+b_2-c) \\ln^2 (1-y) \n- a_2 b_2 c \\Li{2}{y} \n\\;, \n\\\\\n\\omega_0^{(3)} & = & \n- a_1 b_1 c \\Li{3}{x} \n- a_2 b_2 c \\Li{3}{y} \n\\nonumber \\\\ && \n+ a_1 b_1 (a_1+b_1-c) \\Snp{1,2}{x}\n+ a_2 b_2 (a_2+b_2-c) \\Snp{1,2}{y}\n\\;,\n\\end{eqnarray}\nwhere $\\Snp{a,b}{z}$ are the Nielsen polylogarithms: \n\\begin{equation}\nz \\frac{d}{d z} \\Snp{a,b}{z} = \\Snp{a-1,b}{z} \\;, \n\\quad \n\\Snp{1,b}{z} = \\frac{(-1)^b}{b!} \\int_0^1 \\frac{dx}{x} \\ln^b(1-zx) \\;. \n\\label{nielsen}\n\\end{equation}\n\nThe result of the next iteration, $\\omega_3^{(4)}(x,y)$, \ncan be expressed in several equivalent forms: \n\\begin{eqnarray}\n\\frac{\\omega_3^{(4)}(x,y)}{a_1 a_2 b_1 b_2} \n& = & \n\\ln(1-y) G_1\\left(x; -\\frac{y}{1-y} \\right)\n- G_2(x;1)\n+ G_{1,1}\\left(x; - \\frac{y}{1-y},1 \\right) \n\\nonumber \\;, \n\\\\ \n\\frac{\\omega_3^{(4)}(x,y)}{a_1 a_2 b_1 b_2 } \n& = & \n\\ln(1-x) G_1\\left(y; -\\frac{x}{1-x} \\right)\n- G_2(y;1)\n+ G_{1,1}\\left(y; - \\frac{x}{1-x},1 \\right)\n\\;. 
\n\\nonumber \n\\end{eqnarray}\nKeeping in mind that \n\\begin{eqnarray}\n&& \nG_{1,1}\\left(x; - \\frac{y}{1-y},1 \\right)\n= \n\\int_0^x \\frac{dt}{t+\\frac{y}{1-y}} \\ln (1-t) \n\\\\ && \n= \n- \\ln (1-y) \\ln (x+y-xy) \n+ \\ln(1-y) \\ln y\n- \\Li{2}{x+y-xy} \n+ \\Li{2}{y} \n\\nonumber \\;,\n\\end{eqnarray}\nthe result can be written in a very simple form,\n\\begin{eqnarray}\n\\frac{\\omega_3^{(4)}(x,y)}{a_1 a_2 b_1 b_2} \n= \\Li{2}{x} + \\Li{2}{y} - \\Li{2}{x+y-xy} \\;. \n\\end{eqnarray}\n\nTaking into account that \n$\n\\frac{\\omega_3^{(4)}(x,y)}{a_1 a_2 b_1 b_2}\n= \\frac{1}{2} xy F_3(1,1,1,1;3;x,y)\\;, \n$\nwe obtain the well-known result~\\cite{sanchis-lozano} \n$$\n\\frac{1}{2} xy F_3(1,1,1,1;3;x,y) = \\Li{2}{x} + \\Li{2}{y} - \\Li{2}{x+y-xy} \\;.\n$$\nThere is also another form ~\\cite{brychkov} for this integral,\n\\begin{eqnarray}\n\\frac{1}{2} xy F_3(1,1,1,1;3;x,y) & = & \n\\Li{2}{\\frac{x}{x+y-xy}} \n- \n\\Li{2}{\\frac{x-xy}{x+y-xy}} \n\\nonumber \\\\ && \n- \\ln (1-y) \\ln \\left(\\frac{y}{x+y-xy} \\right) \\;.\n\\nonumber \n\\end{eqnarray}\nOne form can be converted to the other using the well-known \ndilogarithm identity \n$$\n\\Li{2}{\\frac{1}{z}}\n= \n-\\Li{2}{z} \n- \\frac{1}{2} \\ln^2 (-z)\n-\\zeta_2 \\;,\n$$\ntogether with the attendant functional relations~\\cite{lewin,Li2identities}. \n\nThe following expressions result from direct iterations in terms of $G$-functions: \n\\begin{eqnarray}\n&&\n\\frac{\\omega_1^{(4)}(x,y)}{a_1 b_1} = \na_2 b_2 \n\\Biggl\\{ \n\\ln(1-y) \n\\left[\nG_{1,1}\\left(x; 1, -\\frac{y}{1-y} \\right)\n- \nG_{2}\\left(x; -\\frac{y}{1-y} \\right)\n\\right]\n\\nonumber \\\\ && \n- G_{1,2}(x;1,1) + G_{3}(x;1)\n+ G_{1,1,1}\\left(x; 1, - \\frac{y}{1-y},1 \\right) \n- G_{2,1}\\left(x; - \\frac{y}{1-y},1 \\right) \n\\Biggr\\} \n\\nonumber \\\\ && \n+ a_2 b_2 G_1(x;1) G_2(y;1) \n- c \\Delta_1 G_{2,1}(x;1,1)\n- \\Delta_1^2 G_{1,1,1}(x;1,1,1)\n- c^2 G_{3}(x;1)\n\\nonumber \\\\ && \n+ (a_1 b_1 \\!-\\! c \\Delta_1) G_{1,2}(x;1,1) \n\\;, \n\\end{eqnarray}\nwhere \n$$\n\\Delta_j = a_j + b_j - c \\;.\n$$\nThe $G$-functions can be converted into classical or \nNielsen polylogarithms with the help of the following relations: \n\\begin{subequations}\n\\begin{eqnarray}\n&& \nG_{1,1}\n\\left( \nx; 1, - \\frac{y}{1-y}\n\\right)\n= \nG_1(x; 1)\nG_1\\left( \nx; - \\frac{y}{1-y}\n\\right)\n- \nG_{1,1}\\left( \nx; - \\frac{y}{1-y}, 1\n\\right) \\;,\n\\\\ && \nG_{1}\n\\left( \nx; 1\n\\right)\nG_{1,1}\n\\left( \nx; - \\frac{y}{1-y}, 1\n\\right)\n= \nG_{1,1,1}\n\\left( \nx; 1, - \\frac{y}{1-y}, 1\n\\right)\n\\\\ && \\hspace{40mm}\n+ \n2G_{1,1,1}\n\\left( \nx; - \\frac{y}{1-y}, 1, 1\n\\right) \n\\;, \n\\nonumber \\\\ && \nG_{1,1,1}\n\\left( \nx; - \\frac{y}{1-y}, 1, 1\n\\right) \n= \n\\Snp{1,2}{x+y-xy}- \\Snp{1,2}{y}\n\\\\ && \n+ \\frac{1}{2} \\ln^2 (1-y)\n\\left[ \n\\ln(x+y-xy) \n\\!-\\! \n\\ln y \n\\right]\n+ \\ln(1-y) \n\\left[ \n\\Li{2}{x+y-xy} \\!-\\! 
\\Li{2}{y} \n\\right]\n\\;, \n\\nonumber \\\\ &&\nG_{2,1}\\left(x; -\\frac{y}{1-y} ,1 \\right)\n+ \\ln(1-y) G_{2}\\left(x; -\\frac{y}{1-y} \\right)\n= \nG_{1,2}\\left(1; 1-\\frac{1}{y}, \\frac{1}{x} \\right)\n+ G_{3}\\left(x; 1 \\right)\n\\nonumber \\\\ && \n= \\int_0^x \\frac{du}{u}\n\\left[ \n\\Li{2}{y} - \\Li{2}{u+y-uy} \n\\right] \\;.\n\\end{eqnarray}\n\\end{subequations}\nIn a similar manner, \n\\begin{eqnarray}\n&&\n\\frac{\\omega_2^{(4)}(x,y)}{a_2 b_2} = \na_1 b_1 \n\\Biggl\\{ \n\\ln(1-x) \n\\left[\nG_{1,1}\\left(y; 1, -\\frac{x}{1-x} \\right)\n- \nG_{2}\\left(y; -\\frac{x}{1-x} \\right)\n\\right]\n\\nonumber \\\\ && \n- G_{1,2}(y;1,1) + G_{3}(y;1)\n+ G_{1,1,1}\\left(y; 1, - \\frac{x}{1-x},1 \\right) \n- G_{2,1}\\left(y; - \\frac{x}{1-x},1 \\right) \n\\Biggr\\} \n\\nonumber \\\\ && \n- c \\Delta_2 G_{2,1}(y;1,1)\n- \\Delta_2^2 G_{1,1,1}(y;1,1,1)\n- c^2 G_{3}(y;1)\n- c \\Delta_2 G_{1,2}(y;1,1)\n\\nonumber \\\\ && \n+ a_1 b_1 G_1(y;1) G_2(x;1) \n+ a_2 b_2 G_{1,2}(y;1,1) \n\\;, \n\\end{eqnarray}\nand the last term, expressible in terms of functions of order $3$, is $\\omega_3^{(5)}$. \n\n\n\n\\subsubsection{Construction of $\\ep$-expansion via \n\t integral representation}\n\nAn alternative approach to construction of the \nhigher order $\\ep$-expansion of generalized hypergeometric \nfunctions is based on their \nintegral representation. \nWe collect here some representations\nfor the Appell hypergeometric function $F_3$\nextracted from Refs.\\ \\cite{slater} and ~\\cite{srivastava}.\n\nFor our purposes, the most useful expression is the following:\n(see Eq.\\ (16) in Section 9.4. of ~\\cite{srivastava}), \n~\\cite{sanchis-lozano,brychkov}:\n\\begin{eqnarray}\n&& \n\\frac{\\Gamma(c_1) \\Gamma(c_2)}{\\Gamma(c_1+c_2)}\nF_3(a_1,b_1,a_2,b_2,c_1+c_2;x,y)\n\\nonumber \\\\ && \n = \\int_0^1 du u^{c_1-1} (1-u)^{c_2-1}\n {}_2F_1(a_1,b_1;c_1;ux)\n {}_2F_1(a_2,b_2;c_2;(1-u)y) \\;.\n \\label{F3:1}\n\\end{eqnarray}\nIndeed, expanding one of the hypergeometric functions as a power series leads to \n$$\n\\int_0^1 u^{c_1-1} (1-u)^{j+c_2-1}\n{}_2F_1(a_1,b_1;c_1;ux)\n\\sum_{j=0}^\\infty\n\\frac{(a_2)_j (b_2)_j y^j}{(c_2)_j j!}\n \\;. 
\n$$\nThe order of summation and integration can be interchanged \nin the domain of convergence of the series, and after integration we obtain Eq.\\ (\\ref{F3:1}).\n\n\nThe two-fold integral representation~\\cite{slater},\n\\begin{eqnarray}\n&& \n\\frac{\\Gamma(b_1) \\Gamma(b_2)\\Gamma(c-b_1-b_2)}{\\Gamma(c)}\nF_3(a_1,a_2;b_1,b_2;c;x,y)\n= \n\\\\ && \n\\int \\int_{0 \\leq u, 0 \\leq v, u+v \\leq 1} du dv \nu^{b_1-1} v^{b_2-1}\n(1-u-v)^{c-b_1-b_2-1}\n(1-ux)^{-a_1}\n(1-vy)^{-a_2} \\;,\n\\nonumber \n\\label{F3:slater} \n\\end{eqnarray}\ncan be reduced to the following integral: \n\\begin{eqnarray}\n&& \n\\frac{\\Gamma(b_1) \\Gamma(b_2)\\Gamma(c-b_1-b_2)}{\\Gamma(c)}\nF_3(a_1,a_2;b_1,b_2;c;x,y)\n\\label{F3:2}\n\\\\ && \n= \n\\int_0^1 du \\int_0^{1-u} dv \nu^{b_1-1} v^{b_2-1}\n(1-u-v)^{c-b_1-b_2-1}\n(1-ux)^{-a_1}\n(1-vy)^{-a_2}\n\\nonumber \\\\ && \n= \n\\frac{\\Gamma(b_2) \\Gamma(c-b_1-b_2)}{\\Gamma(c-b_1)}\n\\nonumber \\\\ && \\hspace{5mm}\n\\times \n\\int_0^1 du u^{b_1-1} (1-ux)^{-a_1} (1-u)^{c-b_1-1}\n~_2F_1\n\\left( \\begin{array}{c|}\na_2, b_2 \\\\\nc-b_1\n\\end{array}~ (1-u)y \\right) \\;.\n\\nonumber \n\\end{eqnarray}\n\nFor the particular values of parameters $(c_1=a_1,c_2=a_2)$, the \nintegral Eq.\\ (\\ref{F3:1}) can be reduced to the Appell function $F_1$: \n\\begin{eqnarray}\nF_3(a_1, a_2, b_1, b_2; a_1+a_2; x,y)\n& = & \n\\frac{1}{(1-y)^{b_2}}\nF_1\\left( a_1, b_1, b_2, a_1+a_2; x, -\\frac{y}{1-y} \\right) \\;.\n\\nonumber \\\\ \n& = & \n\\frac{1}{(1-x)^{b_1}}\nF_1\\left( a_2, b_1, b_2, a_1+a_2; -\\frac{x}{1-x}, y \\right) \\nonumber \\;.\n\\end{eqnarray}\n\nUsing the one-fold integral representation for the Appell function $F_1$, \nit is possible to prove the following relations: \n\\begin{eqnarray}\n&& \nF_3(a,c-a,b,c-b,c,x,y)\n= \n(1-y)^{c+a-b}\n~_2F_1\n\\left( \\begin{array}{c|}\na, b \\\\\nc\n\\end{array}~ x+y-xy \\right) \\;.\n\\end{eqnarray}\n\nUsing Eqs.\\ (\\ref{F3:1}) and (\\ref{F3:2}), the one-fold \nintegral representation can be written for the coefficients of \nthe $\\ep$-expansion of the hypergeometric function $F_3$\nvia the $\\ep$-expansion of the Gauss hypergeometric function, constructed in Refs.\\ \\cite{kwy2006}, ~\\cite{kk2008}.\n\nThe coefficients of the $\\ep$-expansion of the Gauss hypergeometric function can be expressed in terms \nof multiple polylogarithms of a $q$-root of unity with arguments \n$\\left(\\frac{z}{z-1}\\right)^\\frac{1}{q}$,\n$z^\\frac{1}{q}$ or $(1-z)^\\frac{1}{q}$ (see also ~\\cite{nested2}), so that the problem of finding a rational parametrization reduces to the problem of finding a rational parametrization of the integral kernel of Eqs.\\ (\\ref{F3:1}) and (\\ref{F3:2}) in terms of variables generated by the $\\ep$-expansion of the Gauss hypergeometric function. \n\nThe construction of the higher-order \n$\\ep$-expansion of the Gauss hypergeometric function \naround rational values of parameters ~\\cite{kwy2006,kk2008},\nplays an crucial role in construction of the higher-order \n$\\ep$-expansion of many (but not all)\nHorn-type hypergeometric functions. \n\n\n\\subsubsection{Relationship to Feynman Diagrams}\nLet us consider the one-loop pentagon with vanishing external\nlegs.\n The higher-order $\\ep$-expansion for this diagram \nhas been constructed~\\cite{pentagon1} in terms of iterated one-fold integrals over algebraic functions. 
\nIn Ref.\\ \\cite{pentagon2}, the hypergeometric representation \nfor the one-loop \npentagon with vanishing external momenta has been constructed as a sum of \nAppell hypergeometric functions $F_3$.\n\nIn ~\\cite{BKK2012}, where our differential equation approach is presented, it was pointed out that \nthe one-loop pentagon can be expressed in terms of multiple \npolylogarithms. Ref.\\ \\cite{pentagon4} verified the numerical agreement between the results of Refs.\\ \\cite{pentagon1} and \n\\cite{pentagon2}, and Ref.\\ \\cite{pentagon5} constructed \nthe iterative solution of the differential equation \\cite{kotikov}.\n\nLet us recall the results of Ref.\\ \\cite{pentagon2}.\nThe one-loop massless pentagon is expressible in terms of the Appell function $F_3$\nwith the following set of parameters: \n\\begin{eqnarray}\n\\Phi_5^{(d)} \\sim F_3\\left( 1,1,\\frac{7-d}{2},1,\\frac{10-d}{2};x,y \\right) \\;, \n\\label{pentagon1}\n\\end{eqnarray}\nwhere $d$ is dimension of space-time. \nAnother representation presented in \n\\cite{pentagon2} has the structure\n\\begin{eqnarray}\t\nH_5^{(d)}\n\\sim \nF_3\\left( \\frac{1}{2},1,1,\\frac{d-2}{2},\\frac{d+1}{2};\\frac{y}{x+y-xy}, \\frac{1}{x} \\right) \n\\;.\n\\label{pentagon2}\n\\end{eqnarray}\t\n\nLet us consider the case of $d=4-2\\ep$. \nThe first representation, Eq.\\ (\\ref{pentagon1}), is \n$$\n\\Phi_5^{(4-2\\ep)} \\sim F_3\\left( 1,1,\\frac{3}{2}-\\ep,1,4-\\ep;x,y \\right) \\;. \n$$\nThis case, \n$$\n\\{\np_1 = p_2 =r_2 = p = 0 \\}\\;, \n\\{r_1 = 1 \\;, q=2 \\}\n\\Longrightarrow\ns_1 \\neq 0; \\quad s_2 = 0 \\;, \\quad p = 0 \\;,\n$$\ncorresponds to our {\\bf set 5}, so that \nthe {\\bf $\\ep$-expansion is expressible in terms of multiple polylogarithms}, defined by Eq.~(\\ref{G}). \n\nFor the other representation, Eq.~(\\ref{pentagon2}), \n$$\t\nH_5^{(4-2\\ep)} \n\\sim\n\tF_3\\left( \\frac{1}{2},1,1,\\ep,\\frac{5}{2}-\\ep;\\frac{y}{x+y-xy}, \\frac{1}{x} \\right) \\;, \n$$\nso that it is reducible to the following set of parameters: \n$$\n\\{p_2 = r_1 = r_2 = 0 \\}\\;, \n\\quad \n\\{p_1 = 1\\;, q=2, \\; p=1 \\}\\;\n\\Longrightarrow\n\ts_1 = 1; \\quad s_2 = 0 \\;, \\quad p = -1 \\;.\n$$\t\nThis is our {\\bf set 4}, so that the\n{\\bf $\\ep$-expansion is expressible in terms of multiple polylogarithms}, \ndefined by Eq.~(\\ref{G}). \n\n\nThe $\\ep$-expansion of the one-loop pentagon\n about $d=3-2\\ep$ could be treated in a similar manner. 
In this case, \nthe first representation corresponds to {\\bf set 1} \n$$\n\t\\Phi^{(3-2\\ep)} \\sim F_3\\left( 1,1,2+\\ep,1,\\frac{7}{2}+\\ep;x,y \\right) \n\\Longrightarrow\n\t\tp_1 = p_2 =r_1 = r_2 = 0, \\quad \n\t\t\tq=2 \\;, p=1 \n$$ \nand there is no rational parametrization, so that the result of the {\\bf $\\ep$-expansion is expressible in terms of a one-fold \niterated integral over algebraic functions}.\n\nThe other representation, Eq.\\ (\\ref{pentagon2}), also \ncannot be expressed in terms of multiple polylogarithms:\n$$\nH_5^{(3-2\\ep)} \t\\sim \n\tF_3\\left( \\frac{1}{2},1,1,\\frac{1}{2} - \\ep,2-\\ep;\\frac{y}{x+y-xy}, \\frac{1}{x} \\right) \n\t\\Longrightarrow\n\t\\begin{array}{l}\n\tp_2 = r_1 = p = 0 \\;, \\\\\n\tq=2 \\; \\quad p_1 = r_2 = 1\\;, \\\\\n\ts_1 = 1; \\quad s_2 = 1 \\;.\n\t\\end{array}\n$$\nThis corresponds to {\\bf set 3}, and the \n{\\bf $\\ep$-expansion is expressible in terms of one-fold iterated integral over algebraic functions}.\n\nIn this way, the question\nof the all-order $\\ep$-expansion of a one-loop Feynman diagram in terms of multiple polylogarithms is reduced to the question of the existence of \na rational parametrization for the (ratio) of singularities. \n\n{\\bf Remark}:\nThe dependence of the coefficients of the $\\ep$-expansion (multiple polylogarithms or elliptic function)\non the dimension of space-time is not new.\nIn particular, it is well known that the two-loop sunset \nin $3-2\\ep$ dimension is expressible in terms of polylogarithms~\\cite{rajantie,lee}\nand demands introduction of new functions in $4-2\\ep$ dimension \n\\cite{Laporta,tarasov:sunset,Bloch,Adams,Bloch:2016,Duhr,Bogner}. \n\n\n\n\\section{Conclusion}\nThe deep relationship between Feynman diagrams and hypergeometric functions has been reviewed, and we\nhave tried to enumerate all approaches and recent results on that subject. Special attention was devoted to the discussion of \ndifferent algorithms for constructing the analytical \ncoefficients of the $\\ep$-expansion of multiple hypergeometric functions.\nWe have restricted ourselves to\nmultiple polylogarithms and functions related to \nintegration over rational functions \n(the next step after multiple polylogarithms). \nThe values of parameters related to elliptic polylogarithms \nwas beyond our consideration. \n\nWe have presented our technique for the construction of \ncoefficients of the higher order $\\ep$-expansion of multiple Horn-type hypergeometric functions, developed by the authors~\\footnote{Unfortunately, the further \n\tprolongation of this \n\tproject has not been supported by DFG, \n\tso that many interesting results remain unpublished.} \nduring the period 2006 -- 2013.\nOne of the main results of interest \nwas the observation~\\cite{DK1,DK2,DK3,kwy2006,kwy2007,kk2008,bkm2013} \nthat for each Horn-type hypergeometric function, a \nset of parameters can be found so that the coefficients of the \n$\\ep$-expansion include only functions of weight one\n(so-called ``pure functions,'' in a modern terminology). \nAs was understood in 2013 by Johannes Henn \n ~\\cite{Henn:1,Henn:2}, \nthis property is valid not only for \nhypergeometric functions but also for generic Feynman diagrams. 
\n\nOur approach is based on the systematic analysis of the\nsystem of hypergeometric differential equations (linear differential operators of hypergeometric type with polynomial coefficients) \nand does not demand the\nexistence of an integral representation, \nwhich is presently unknown for a large class\nof multiple Horn-type hypergeometric functions (they could be deduced, \nbut are not presently available in the mathematical literature). \n\nMore concretely, our approach relies on the factorization of the system of \ndifferential equations into a product of differential operators,\ntogether with finding a rational parametrization and constructing\niterative solutions. \nTo construct such a system, an auxiliary manipulation of the \nparameters (shifts by integer values) is required,\nwhich can be done with the help of the HYPERDIRE set of \nprograms~\\cite{hyperdire}.\nThis technique\nis applicable not only to \nhypergeometric functions defined by series but also to multiple \nMellin-Barnes integrals~\\cite{KK:MB} -- one of the representations of Feynman diagrams in a covariant gauge.\nWe expect that the present technique is directly applicable (with some \ntechnical modifications) to the construction of the $\\ep$-expansion \nof hypergeometric functions \nbeyond multiple polylogarithms, especially since the two-loop sunset \nhas a simple hypergeometric representation~\\cite{tarasov:sunset}. \n\nTwo aspects of our consideration have not \nbeen solved algorithmically:\n (a) the factorization of linear partial differential operators into irreducible factors\nis not unique, as has been illustrated by Landau~\\cite{landau} \n(see also Ref.\\ \\cite{schwarz}):\n$\n(\\partial_x+1) (\\partial_x+1) (\\partial_x+x\\partial_y)\n= \n(\\partial^2_x+x \\partial_{xy} \n+ \\partial_x+(2+x) \\partial_y) (\\partial_x+1) \\; ; \n$\n(b) the choice of parametrization is still an open problem, although there is substantial progress in this direction~\\cite{root1,root2}.\n\nOur example has shown that such a parametrization is\ndefined by the locus of singularities of a system of differential \nequations, so that the problem of finding a rational parametrization \nis reduced to the parametrization of solutions of the\nDiophantine equation for the singular locus of a Feynman diagram and\/or hypergeometric function.\nIt is well known that in the case of a positive solution of this problem (which has no complete algorithmic solution), \nthe corresponding system of partial differential equations \nin several variables takes its simplest structure.\nAt the same time, there is a relationship \nbetween the type of solution of the Diophantine equation \nfor the singular locus \nand the structure of the coefficients of the $\\varepsilon$-expansion: \na linear solution allows us to write the results of the\n$\\varepsilon$-expansion \nin terms of multiple polylogarithms. \nAn algebraic solution gives rise to functions different from multiple polylogarithms and elliptic functions, {\\em etc}. \nIt is natural to expect that, in the case of an elliptic solution of the Diophantine equation\nfor the singular locus, the results for the $\\varepsilon$-expansion are related to \nthe elliptic generalization of multiple polylogarithms.\n\nAnother quite interesting and still algorithmically open problem is the transformation of multiple Horn-type hypergeometric functions with reducible monodromy to \nhypergeometric functions with irreducible monodromy.
\nIn the application to Feynman diagrams, such transformations correspond to functional relations, studied recently by Oleg Tarasov \n~\\cite{Tarasov:functional1,Tarasov:functional2,Tarasov:functional3}, \nand by Andrei Davydychev~\\cite{davydychev:box}. \n\n\nFurther analysis of the symmetries of the hypergeometric differential equations related to the Mellin-Barnes representation \nof Feynman diagrams (for simplicity we will call it the hypergeometric representation) \nhas revealed their deep connection to the holonomic properties of Feynman diagrams. \nIn particular, a simple and fast algorithm was constructed for the \nreduction of any Feynman diagram \nhaving a one-fold Mellin-Barnes integral representation to a set of master integrals~\\cite{KK:MB}. \n\nThe importance of considering the\ndimension of the irreducible representation instead of generic \nholonomic rank has been pointed out in the application to Feynman \ndiagrams~\\cite{bkk2009}. In the framework of this approach, \nthe set of irreducible, non-trivial master-integrals \ncorresponds to the set of irreducible (with respect to analytical continuation of the variables, masses,\nand external momenta) solutions of the corresponding system of hypergeometric differential equations, \nwhereas diagrams expressible in terms of Gamma-functions correspond \nto Puiseux-type solutions (monomials with respect to Mandelstam variables) of the original system of hypergeometric equations. \n\n\n\\begin{acknowledgement}\n MYK is very indebted to Johannes Bl\\\"umlein, Carsten Schneider \n\tand Peter Marquard for the\n\tinvitation and for creating such a stimulating atmosphere\n\tin the workshop ``Anti-Differentiation and the\n\tCalculation of Feynman Amplitudes'' at DESY Zeuthen.\n\tBFLW thanks Carsten Schneider for kind the hospitality at RISC.\n MYK thanks the Hamburg University for hospitality while \n\tworking on the manuscript. SAY acknowledges support from The Citadel Foundation.\n\\end{acknowledgement}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{The status quo}\n\nIn the system of peer review that is currently used in the sciences,\nan editor invites one or more referees to review an article submitted\nto a scientific journal. Based on the referees' recommendations, the\neditor will accept the article, demand modifications, or reject it.\nReferee reports are generally made available to the article's author\nin anonymized form only and are not otherwise published. (Some journals\nalso anonymize the article to be refereed, even though ascertaining\nthe true author of a submission is usually a simple matter of using\nan internet search engine.) \n\nThe system as described is completely opaque to outside observers.\nNeither the quality and timeliness of reviews, nor the standards of\na journal\\textquoteright{}s editors, nor the extent of modifications\nmade after initial review, nor the number of times an article has\nbeen rejected by other journals are publicly available. \n\nOther than professional integrity, referees have little legitimate\nincentive to produce timely, fair and high-quality reviews. Since\nthe reviews are not published, referees are not accountable for their\nwork and cannot use it to bolster a case for professional advancement\nor to improve their general standing in the academic community. 
Probably\nthe biggest (and most problematic) incentive for referees is the accumulation\nof editor goodwill, to be expended during future article submissions.\nIt is also conceivable that some referees reject articles whose authors\nthey dislike or whose approach or results interfere with their own\nresearch agenda. Finally, editors may circumvent the peer review process\naltogether in order to promote their own or their associates' work.\nSeveral case reports of dysfunction and breakdown of the peer review\nprocess in the mathematical and physical sciences have recently appeared\nin the literature. \\cite{Baez 2006,Schiermeier 2008,Trabino 2009}.\n\nAuthors, editors and referees are not paid for their work in this\npublication process. Nevertheless, publishers often charge exorbitant\namounts for the resulting product, journals which have typically ended\nup being hidden away in university libraries, inaccessible to the\npublic who funded the research in the first place. Independent workers\nas well as researchers in poor countries thus have often been cut\nout of the research loop entirely.\n\nThe need for a system of open electronic publishing of scientific\narticles has long been recognized (see e.g. \\cite{Odlyzko 1995}).\nSeveral electronic journals have now been created. Some of these charge\nreaders for access, others are free to read but charge authors for\npublication, and still others are free for all parties involved. Perhaps\nthe biggest success of the Open Access movement was a 2007 U.S. law\nrequiring all NIH-supported research to be submitted to an openly\naccessible archive one year after publication. \\cite{Weiss 2007} \n\nInternet-based alternatives to the prevalent peer review and publishing\nprocess have been discussed in \\cite{Harnard 2000} and \\cite{Nielsen 2008}.\nA trial in open peer review at the journal \\emph{Nature} in 2006 generated\nwidespread debate of the concept \\cite{Nature 2006-1}; the final\nreport concluded that, while the general concept was received enthusiastically,\nparticipation in and satisfaction with their particular model of open\ncommentary were disappointing. \\cite{Nature 2006-2}\n\n\n\\section{ArXiv.org}\n\nThe website \\noun{arXiv.org} (formerly \\noun{xxx.lanl.gov}) is an\nelectronic archive of freely accessible research preprints. \\cite{Ginsparg 1997}\nIt was started by physicist Paul Ginsparg in August 1991 and has since\nbecome an indispensable tool for researchers in physics, mathematics\nand, increasingly, computer science and quantitative biology. Authors\nsubmit their articles to the archive prior to peer review and official\npublication by a scientific journal; the preprints are posted on the\nwebsite in perpetuity after superficial moderator review. To participate,\nauthors need an affiliation with a recognized academic institution\nor an endorsement by an established author. Interested parties can\nsign up for regular e-mail announcements containing the abstracts\nof new preprints in their chosen fields.\n\nOnce a manuscript has been peer reviewed and accepted for publication,\nauthors should ideally post an updated version to the archive. 
Not\nall authors remember to do this, and some journals explicitly prohibit\nthe practice, claiming a copyright on the final result of peer review\n\\footnote{See for instance Elsevier's policy on electronic preprints at \\url{http:\/\/www.elsevier.com\/wps\/find\/authorshome.authors\/preprints}\n(accessed 26 December 2008\n}\n\nConsequently, \\noun{arXiv.org} in its present incarnation and similar\npreprint archives in other fields do not serve as authoritative Open\nAccess repositories of peer reviewed research.\n\n\n\\section{A proposed solution}\n\nTo address the problems outlined in section 1, I propose the following\nextension to the \\noun{arXiv.org} preprint archive. A new class of\nusers is created, the {}``editors''. Each editor works for an electronic\njournal. Authors, after having uploaded a preprint to the archive,\nmay elect to submit their article for review and official publication\nin one of these electronic journals. An editor of that journal then\ndecides whether the article is appropriate for the journal in terms\nof scope and quality. If it is not, this decision is publicly attached\nto the article and the process ends; if it is, the editor invites\none or more referees to write public reviews, to be attached to the\narticle. The article author may subsequently post a public rebuttal\nto the reviews. Based on the referee reports and rebuttals, the editor\ndecides whether to accept, demand changes to, or reject the article.\nThe original article, reviews, rebuttal and publication decision are\npublished in perpetuity. If accepted, the author posts a final version\nof the article to arXiv.org; as a peer reviewed and officially published\narticle, it is visibly set apart from mere preprints and added to\nthe electronic journal's collection of published articles. Rejected\narticles may be submitted to another electronic journal.\n\nReviews should be signed with the referee's full name and affiliation.\nThis maximizes transparency and allows referees to receive academic\ncredit for their work. However, some reviewers might be reluctant\nto participate in such a system, for instance because they hesitate\nto openly reject the work of friends or influential researchers, or\nbecause they do not want to call attention to their ignorance of some\nof the issues discussed in the reviewed article. Thus it is probably\nnecessary to offer referees the option to publish their reviews under\na pseudonym. Over time, such a pseudonym might naturally develop a\nreputation as a solid reviewer, completely divorced from the writer's\nreal-world identity. Using a straightforward cryptographic scheme,\na referee could prove to selected others that he or she owns a certain\npseudonym; in this way even pseudonymous referees could receive academic\ncredit for their work at the time of tenure or promotion decisions.\n\nSome electronic journals may wish to develop a process for attaching\nnotes to published articles, for instance to point out prior work,\nmistakes or scientific misconduct discovered after publication. It\nwill also be desirable to attach a moderated discussion forum to each\narticle, as a natural gathering place of interested researchers. The\nquality of these forums would serve as a criterion to differentiate\nelectronic journals from each other. The pseudonyms used for refereeing\ncould also be used to sign forum contributions.\n\nOne may hope that the proposed system will engender several desirable\nconsequences. 
The act of refereeing will rise in prestige in accordance\nwith its importance for the scientific process. The quality of referee\nreports will improve. Outside evaluations and comparisons of the standards\nand practices of different electronic journals will become possible.\nThe process becomes completely transparent and its results are made\nfreely available. \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\n\n\nRadio astronomy aims to study radio emissions from the sky, in order to detect, identify new objects and observe known structures at higher resolution, in a specific electromagnetic spectrum \\cite{thompson2008interferometry}. This fundamental thematic shines a new light on our universe, revealing more about its nature and history. \nIn order to carry out particularly sensitive observations in a large range of the spectrum, and to handle significant cosmological issues, largely distributed sensor arrays \nare currently being built or planned, such as the low frequency array (LOFAR) \\cite{van2013lofar} and the square kilometre array (SKA) \\cite{dewdney2009square}. They will notably be composed of a large number of relatively low-cost small antennas with wide field of view, resulting in a large collecting area and high resolution imaging. Nevertheless, to meet the theoretical optimal performances of such next generation radio interferometers, a plethora of signal processing challenges must be overcome, among them, calibration, data reduction and image synthesis \\cite{wijnholds2008fundamental,rau2009advances,deVos2009lofar, van2013signal}.\nThese aspects are intertwined and must be dealt with to take advantage of the new advanced radio interferometers. As an example, lack of calibration has dramatic effects in the image reconstruction by causing severe distortions. In this paper, we focus on calibration, which involves the estimation of all unknown perturbation effects and represents a cornerstone of the imaging step \\cite{mitchell2008real,wijnholds2014signal,wijnholds2010calibration}. \n\n\nArray calibration aspects have been tackled for a few decades in the array processing community leading to a variety of calibration algorithms \\cite{fuhrmann1994estimation,ng1996sensor,ng2009practical}. Such algorithms can be classified into two different approaches depending on the presence \\cite{lo1987eigenstructure,ng1992array,ng1995active}, or the absence \\cite{rockah1987array,weiss1989array,friedlander1991direction,wylie1994joint,qiong2003overview,flanagan2001array}, of one or more cooperative sources, named calibrator sources. In the radio astronomy context, calibration is commonly treated using the first approach as we have access to prior knowledge thanks to tables describing accuratly the position and flux of the brightest sources \\cite{baars1977absolute}.\n\n\n\n\nFollowing this methodology, the majority of proposed calibration schemes in radio interferometry are least squares-based approaches. The state-of-the-art consists in the so-called alternating least squares approach \\cite{boonstra2003gain, wijnholds2009multisource,wijnholds2010fish,salvini2014fast}, which leads to statistically efficient algorithm under a Gaussian model, since the least squares estimator is equivalent to the maximum likelihood (ML) estimator in this case. 
On the other hand, expectation maximization (EM) \\cite{dempster1977maximum, moon1996expectation,mclachlan2007algorithm} and EM-based algorithms, such as the space alternating generalized expectation maximization algorithm \\cite{fessler1994space}, have been proposed in order to enhance the convergence rate of the least squares-based calibration algorithms \\cite{yatawatta2009radio}. Nevertheless, the major drawback of these schemes is the Gaussianity assumption which is not realistic in the radio astronomy context. Specifically, the presence of outliers has multiple causes, among which i) the radio frequency interferers, which corrupt the observations and are not always perfectly filtered in practice \\cite{raza2002spatial,van2005performance}, ii) the presence of unknown weak sources in the background \\cite{yatawatta2014robust}, iii) the presence of some punctual events such as interference due to the Sun or due to strong sources in the sidelobes which can also randomly create outliers \\cite{boonstra2005radio}. To the best of our knowledge, the proposed scheme in \\cite{yatawatta2014robust}, represents the only alternative to the existing calibration algorithms based on a Gaussian noise model. \n\n\n\n\nIn \\cite{yatawatta2014robust}, theoretical and experimental analyses have been conducted in order to demonstrate that the effect of outliers in the radio astronomy context can indeed be modeled by a non-Gaussian heavy-tailed distributed noise process. Nevertheless, the algorithm presented in \\cite{yatawatta2014robust} has its own limits, since the noise is specifically modeled as a Student's t with independent identically distributed entries. To improve the robustness of the calibration, we propose, in this paper, a new scheme based on a broader class of distributions gathered under the so-called spherically invariant random noise modeling \\cite{jay2002detection,yao2003spherically}, which includes the Student's t distribution. A spherically invariant random vector (SIRV) is described as the product of a positive random variable, named texture, and the so-called speckle component which is Gaussian, resulting in a two-scale compound Gaussian distribution \\cite{wang2006maximum}.\nThe flexibility of the SIRV modeling allows to consider non-Gaussian heavy-tailed distributed noise in the presence of outliers, but also to adaptively consider Gaussian noise in the extreme case when there are no outliers. Under the SIRV model, we estimate the unknown parameters iteratively based on a relaxed ML estimator, leading to closed-form expressions for the noise parameters while a block coordinate descent (BCD) algorithm \\cite{friedman2007pathwise,hong2015unified} is designed to obtain the estimates of parameters of interest efficiently and at a low cost. \n\n\n\nFinally, it is worth mentioning that the parametric model used in this paper to describe the perturbation effects is based on the so-called Jones matrices \\cite{hamaker1996understanding,smirnov2011revisiting}. Such formalism describes in a flexible way the conversion of the incident electric field into voltages. Indeed, along its propagation path, the signal is affected by various effects and transformations which correspond to matrix multiplications in the mathematical Jones framework. Multiple distortion effects caused by the environment and\/or the instruments can be easily incorporated into the model using an adequate parametrization of the Jones matrices. 
Such effects can represent, for example, the ionospheric phase delay resulting in angular shifts, the atmospheric distortions, the typical phase delay due to geometric pathlength difference, the voltage primary beam, the cross-leakage or also the electronic gains \\cite{noordam1996measurement,noordam2010meqtrees}. For the above reasons and due to its flexibility \\cite{thompson2008interferometry,yatawatta2009radio,noordam1996measurement,hamaker1996understanding,smirnov2011revisiting}, we adopt this parametric model. We make a distinction between the non-structured and the structured cases: in the first one, one total Jones matrix stands for all the effects along the full signal path while in the second case, we regard each physical effect separately thanks to individual Jones terms in a cumulative product. Thus, different corruptions are described by different kinds of Jones matrices.\n We emphasize that the proposed algorithm, entitled relaxed concentrated ML estimator, is a generic algorithm as it is based on a non-structured Jones matrices formulation as a first step. However, it can be adapted to various regimes \n describing distinct calibration scenarios in which an array can operate \\cite{lonsdale2004calibration}. In this paper, we consider the specific example of the direction dependent distortion regime with a compact set of antennas, which we refer to as the 3DC regime.\nThe array is therefore considered as a closely packed group of antennas but the array elements have a wide field of view. This is particularly well-adapted for calibration of compact arrays, typically a LOFAR station.\n \n The rest of the paper is organized as follows: in Section \\ref{data}, we present the data model in the context of radio astronomy, first with non-structured Jones matrices and thereafter, we study an example of structured Jones matrices for the 3DC calibration regime. In Section \\ref{robust_estim}, we give an overview of the proposed robust ML estimator, based on spherically invariant random process (SIRP) noise modeling. An efficient estimation procedure of the distortions introduced on each signal propagation path is derived in Section \\ref{non_structure}. Then, the algorithm is adapted to the case of structured Jones matrices in Section \\ref{structure} for the 3DC calibration regime. Finally, we provide numerical simulations in Section \\ref{simus} to assess the robustness of the approach and draw our conclusions in Section \\ref{ccl}.\n\n\n\n\n\n\nIn this paper, we use the following notation: symbols $\\left( \\cdot \\right) ^{T}$, $\\left( \\cdot \\right)^{\\ast }$, $\\left( \\cdot \\right) ^{H}$ denote, respectively, the transpose, the complex conjugate and the Hermitian transpose. The Kronecker product is represented by $\\otimes$, $\\mathrm{E}\\{\\cdot\\}$ denotes the expectation operator, $\\mathrm{bdiag}\\{\\cdot\\}$ is the block-diagonal operator, whereas $\\mathrm{diag}\\{\\cdot\\}$ converts a vector into a diagonal matrix. The trace and determinant operators are, respectively, referred by $\\mathrm{tr}\\left\\{ \\cdot \\right\\} $ and $ |\\cdot|$. The symbol $\\mathbf{I}_{B}$ represents the $B \\times B$ identity matrix, $\\mathrm{vec}(\\cdot)$ stacks the columns of a matrix on top of one another, $||\\cdot||_F$ is the Frobenius norm, while $||\\cdot||_2$ denotes the $l_2$ norm. 
Finally, $\\Re\\left\\{ \\cdot \\right\\} $ represents the real part \nand we note $j$ the complex number whose square equals $-1$.\n\n\n\n\\section{Data model}\n\\label{data}\n\n\\subsection{Case of non-structured Jones matrices}\n \nLet us consider $M$ antennas with known locations that receive $D$ signals emitted by calibrator sources. Each antenna is dual polarized and composed of two receptors, in order to provide sensitivity to the two orthogonal\npolarization directions $(x,y)$ of the incident electromagnetic plane wave. Consequently, the relation between the \\textit{i}-th source emission and the\nmeasured voltage at the \\textit{p}-th antenna is given by \\cite{hamaker1996understanding,KerThesis2012,smirnov2011revisiting}\n\\begin{equation}\n\\mathbf{v}_{i,p}(\\boldsymbol{\\theta}) = \\mathbf{J}_{i,p}(\\boldsymbol{\\theta}) \\mathbf{s}_{i}\n\\end{equation}\nwhere $\\mathbf{s}_{i} = [\n s_{i_{x}},s_{i_{y}}]^{T}$ is the incoming signal, $ \\mathbf{v}_{i,p}(\\boldsymbol{\\theta}) = [\n v_{i,p_{x}}(\\boldsymbol{\\theta}),\nv_{i,p_{y}}(\\boldsymbol{\\theta})]^{T}$ is the generated voltage with one output for each polarization direction and $\\mathbf{J}_{i,p}(\\boldsymbol{\\theta})$ denotes the so-called $2 \\times 2$ Jones matrix, parametrized by the unknown vector of interest $\\boldsymbol{\\theta}$. The Jones matrix models the array response and all the perturbations introduced along the path from the \\textit{i}-th source to the \\textit{p}-th sensor. Since each propagation path is particular,\nwe can associate a different Jones matrix with each source-antenna pair $(i,p)$, leading to a total number of $D M$ Jones matrices.\nIn this section, we consider the non-structured case where no specific perturbation model is used to describe the physical mechanism behind each perturbation effect and the unknown elements correspond to the entries of all Jones matrices \\cite{yatawatta2009radio,nunhokeeghost2015} (a structured example is given for 3DC calibration regime, in Section \\ref{specific_regime}).\n\n\nFor each antenna pair, we compute the correlation of the output signals, resulting in the typical observations recorded by a radio interferometer.\nThe correlation between voltages is given, in the case of noise free measurements, for the $(p,q)$ antenna pair, by\n\\begin{align}\n\\label{ME}\n\\nonumber\n\\mathbf{V}_{pq}(\\boldsymbol{\\theta}) & =\\mathrm{E}\\left\\{\\left(\\sum_{i=1}^{D}\\mathbf{v}_{i,p}(\\boldsymbol{\\theta})\\right)\\left(\\sum_{i=1}^{D}\\mathbf{v}_{i,q}^{H}(\\boldsymbol{\\theta})\\right)\\right\\} \\\\ &=\n\\sum_{i=1}^{D}\\mathbf{J}_{i,p}(\\boldsymbol{\\theta})\\mathbf{C}_{i}\n\\mathbf{J}_{i,q}^{H}(\\boldsymbol{\\theta}) \\ \\\n\\text{for} \\ \\ p p, \\ \\ q \\in \\{1, \\ldots, M\\}$ and $\\{\\mathbf{u}_{i,qp}\\} \\ \\text{for} \\ \\ qp}}^{M} \\Big(\\mathbf{w}_{i,pq} - \\mathbf{u}_{i,pq}(\\boldsymbol{\\theta}_{i,p})\\Big)^H\n(\\beta_i \\tau_{pq} \\boldsymbol{\\Omega})^{-1} \n\\Big(\\mathbf{w}_{i,pq} - \\mathbf{u}_{i,pq}(\\boldsymbol{\\theta}_{i,p})\\Big)+ \\\\ &\n\\nonumber\n\\sum_{\\substack {q=1 \\\\ q0$\n \n\\end{enumerate}\nThe idea is that if we are provided a good algorithm for finding a partial coloring, we can repeatedly apply this algorithm on the variables not yet \"colored\" by this partial coloring, while holding the colored ones constant. 
This process eventually converges to a full coloring: each round colors a constant fraction $c$ of the remaining points, so the number of uncolored points shrinks geometrically (by a factor of $1-c$ per round), the per-round discrepancy contributions then shrink roughly like $\sqrt{1-c}$ per round, and the total discrepancy can be bounded by $O(\sqrt{n\log(m/n)})$.

\section{Achieving a Partial Coloring}

The first and most important step is actually achieving a good partial coloring. We start with a convenient construction: we let $v_1,\ldots,v_m \in \mathbb{R}^n$ be the indicator vectors of our subsets $S_1,\ldots,S_m$ respectively. Then the discrepancy of a coloring $\chi$ with respect to our collection $S$ can be written very simply as
$$\mathsf{disc}(\chi) = \max_{i \in [m]} |\langle \chi, v_i \rangle|,$$
and $\mathsf{disc}(S)$ is the minimum of this quantity over all colorings $\chi$.
\begin{theorem}[Main Partial Coloring Result]\label{Coloring}
Let $v_1,\ldots,v_m \in \mathbb{R}^n$ be vectors, let $x_0 \in [-1,1]^n$ be a ``starting point'', let $c_1,\ldots,c_m \geq 0$ be thresholds such that $\sum_{j=1}^m\exp(-c_j^2/16) \leq n/16$, and let $\delta>0$ be an approximation parameter. Then there exists an efficient randomized algorithm which, with probability $\geq 0.1$, finds $x \in [-1,1]^n$ such that
\begin{enumerate}
 \item discrepancy constraints: \quad \quad $|\langle x-x_0,v_j\rangle| \leq c_j \|v_j\|_2$;
 \item variable constraints: \quad \quad $|x_i| \geq 1-\delta$ for at least $n/2$ indices $i \in [n]$.
\end{enumerate}
Moreover, the algorithm runs in time $O((m+n)^3\delta^{-2}\log (nm/\delta))$.
\end{theorem}

The reason for the constraint on the $c_j$'s will become apparent later; for now we note that the smaller the $c_j$ are, the stronger the theorem is. In other words, we want them to be small, but they cannot be too small, otherwise the theorem no longer holds, hence the constraint. For instance, if all thresholds equal a common value $c$, the condition reads $m\exp(-c^2/16) \leq n/16$, i.e. $c \geq 4\sqrt{\log(16m/n)}$. We also note that we can increase the probability of success by simply running the algorithm multiple times.

\begin{center}
 4.1. The Algorithm.
\end{center}
We begin with a general idea, before going into the details of the algorithm. We also assume, without changing the problem, that the $v_i$'s have all been normalized (we can simply adjust our $c_j$'s to account for this): $\|v_i\|_2 = 1$ for all $i$. Consider the following polytope, which describes the legal values that $x\in \mathbb{R}^n$ can take on:
$$\mathcal{P} = \{x\in \mathbb{R}^n: |x_i|\leq 1 \ \forall i \in [n], \ |\langle x-x_0,v_j\rangle| \leq c_j \ \forall j \in [m]\}.$$

The above theorem then says that we can find an $x\in \mathcal{P}$ such that at least $n/2$ of the variable constraints are satisfied with (virtually) no slack, and it works with good probability as long as $\sum_j \exp(-c_j^2/16) \ll n$. The idea is to take very small, discrete Gaussian steps (a discretized Brownian motion) starting from $x_0$. Intuitively, we want to use these steps to find an $x$ that is as far away from the starting point $x_0$ as possible, as this implies that more of the variable constraints are satisfied with no slack.

We are now ready to present the constructive algorithm that serves as a proof of Theorem \ref{Coloring}. Let $\gamma >0$ be a small step size such that $\delta=O(\gamma \sqrt{\log(nm/\gamma)})$. The correctness of the algorithm is not affected by the choice of $\gamma$, only the runtime. Further let $T=K_1/\gamma^2$, where $K_1 = 16/3$, and assume $\delta < 0.1$.
The algorithm then produces $X_0 = x_0, X_1,\ldots,X_T \in \mathbb{R}^n$ according to the following procedure.\\
\begin{algorithm}[H]
\SetAlgoLined

 \For{$t = 1,\ldots,T$}{
 $\cdot$ Let $C_t^{var} \gets C_t^{var}(X_{t-1}) = \{i \in [n]: |(X_{t-1})_i| \geq 1-\delta\}$ be the set of variable constraints ``nearly hit'' on the previous iteration\;
 $\cdot$ Let $C_t^{\mathsf{disc}} \gets C_t^{\mathsf{disc}}(X_{t-1}) = \{j \in [m]: |\langle X_{t-1}-x_0,v_j\rangle| \geq c_j-\delta\}$ be the set of discrepancy constraints ``nearly hit'' on the previous iteration\;
 $\cdot$ Let $\mathcal{V}_t \gets \mathcal{V}(X_{t-1}) = \{u \in \mathbb{R}^n: u_i=0 \ \forall i \in C_t^{var}, \quad \langle u, v_j\rangle = 0 \ \forall j \in C_t^{\mathsf{disc}}\}$ be the subspace orthogonal to the ``nearly hit'' variable and discrepancy constraints\;
 $\cdot$ Set $X_t \gets X_{t-1} + \gamma U_t$, where $U_t \sim \mathcal{N}(\mathcal{V}_t)$
 }
 \caption{The Brownian Motion process for Theorem \ref{Coloring}}
\end{algorithm}

When we write $U \sim \mathcal{N}(\mathcal{V}_t)$, we mean the standard multi-dimensional Gaussian distribution supported on $\mathcal{V}_t$: $U = G_1 u_1 + \cdots + G_d u_d$, where $\{u_1,\ldots,u_d\}$ is an orthonormal basis of $\mathcal{V}_t$ and $G_1, \ldots,G_d \sim \mathcal{N}(0,1)$ are independent.

\newpage

\begin{center}
 4.2. Analysis Outline.
\end{center}

We seek to prove the following:
\begin{lemma}\label{coloring-analysis}
Theorem \ref{Coloring} holds for $X_T$ in the above algorithm, and with probability at least $0.1$ we have $X_0,\ldots,X_T \in \mathcal{P}$.
\end{lemma}

We begin with a useful claim regarding the behavior of the random walk.

\begin{claim}\label{Ortho}
For $t=1,\ldots,T-1$ we have that $C_t^{var} \subseteq C_{t+1}^{var}$ and similarly $C_t^{\mathsf{disc}} \subseteq C_{t+1}^{\mathsf{disc}}$. This further implies that $\dim(\mathcal{V}_t) \geq \dim(\mathcal{V}_{t+1})$.
\end{claim}
\begin{proof}
Intuitively, we take Gaussian steps orthogonal to the constraints that have already been nearly hit, so no element of $C_t^{var}$ or $C_t^{\mathsf{disc}}$ can ever be removed. Formally, let $i \in C_t^{var}$. Then $U_t \in \mathcal{V}_t$, which implies $(U_t)_i = 0$. Hence $(X_t)_i = (X_{t-1})_i$ and $i \in C_{t+1}^{var}$, as desired. The argument for the discrepancy constraints is analogous.
\end{proof}

Now, we can begin to look at the results of the algorithm.

First, we can prove that with good probability our Brownian motion does not leave the polytope. The ``nearly hit'' constraints serve this purpose: we select the step size $\gamma$ small enough that whenever an iterate approaches a constraint, it is more likely to fall into the $\delta$-band of that constraint than it is to leave the polytope. Once it falls into this band, Claim \ref{Ortho} implies that this constraint can never be violated afterwards. This can be shown formally using Gaussian tail bounds.

Next, we argue that with high probability the algorithm nearly hits many variable constraints but few discrepancy constraints. Using our bound on the discrepancy thresholds as well as Gaussian tail bounds, we can show that the number of discrepancy constraints that are easy to hit is small, and that it is unlikely that many of the others are ever hit. With this in mind, at any time $t$ there are two scenarios for $C_t^{var}$: if it is large, then we are done; if it is small, then our Brownian motion is less constrained, so we expect to take steps of larger magnitude. Thus, we argue that by time $T$ it is likely that we ``nearly hit'' many variable constraints.
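Before turning to the runtime, the following is a minimal NumPy sketch of the walk described above. It is meant purely as an illustration and is not the implementation analyzed in \cite{Lovett}: the function and variable names, the SVD-based recomputation of an orthonormal basis of $\mathcal{V}_t$ at every step, and the step-size heuristic are ours, and the success check and restarts are omitted.

\begin{verbatim}
import numpy as np

def partial_coloring_walk(v, x0, c, delta=0.05, gamma=None, rng=None):
    """One run of the Brownian-motion walk.

    v  : (m, n) array whose rows are the set vectors, normalized to unit l2 norm
    x0 : starting point in [-1, 1]^n
    c  : (m,) array of discrepancy thresholds c_j
    Returns the final iterate X_T (no success check is performed).
    """
    rng = np.random.default_rng() if rng is None else rng
    m, n = v.shape
    if gamma is None:
        # heuristic step size with delta ~ gamma * sqrt(log(nm/gamma))
        gamma = delta / np.sqrt(4.0 * np.log(n * m / delta))
    T = int(np.ceil((16.0 / 3.0) / gamma**2))        # T = K_1 / gamma^2, K_1 = 16/3
    x = np.array(x0, dtype=float)
    eye = np.eye(n)
    for _ in range(T):                               # roughly 10^5 small SVD steps
        var_hit = np.abs(x) >= 1 - delta                    # C_t^{var}
        disc_hit = np.abs(v @ (x - x0)) >= c - delta        # C_t^{disc}
        active = np.vstack([eye[var_hit], v[disc_hit]])     # spans the complement of V_t
        if active.shape[0] == 0:
            basis = eye                                     # V_t = R^n
        else:
            _, s, vt = np.linalg.svd(active, full_matrices=True)
            rank = int(np.sum(s > 1e-10))
            if rank >= n:                                   # V_t is trivial: stop early
                break
            basis = vt[rank:]                               # orthonormal basis of V_t
        u = basis.T @ rng.standard_normal(basis.shape[0])   # U_t ~ N(V_t)
        x = x + gamma * u
    return x

# Tiny demo on a random set system, using the uniform threshold c_j = 4*sqrt(log(16m/n)).
rng = np.random.default_rng(0)
n, m, delta = 32, 64, 0.05
V = rng.integers(0, 2, size=(m, n)).astype(float)
V /= np.maximum(np.linalg.norm(V, axis=1, keepdims=True), 1e-12)
c = np.full(m, 4.0 * np.sqrt(np.log(16.0 * m / n)))
x = partial_coloring_walk(V, np.zeros(n), c, delta=delta, rng=rng)
print("nearly colored:", int(np.sum(np.abs(x) >= 1 - delta)), "of", n)
\end{verbatim}

The demo simply reports how many coordinates end up within $\delta$ of $\pm 1$; the uniform threshold is the one from the worked example after Theorem \ref{Coloring}, so it satisfies $\sum_j \exp(-c_j^2/16) \leq n/16$.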
Finally, we look at the computational complexity of the algorithm, which is claimed to be $O((n+m)^3\delta^{-2}\log(nm/\delta))$. The paper does not provide a full justification of this runtime, and we believe the stated condition on the step size to be slightly inaccurate.

Computing $C_t^{var}$ and $C_t^{\mathsf{disc}}$ given $X_{t-1}$ takes $O(nm)$ time, since computing $C_t^{\mathsf{disc}}$ requires the computation of $m$ dot products in $\mathbb{R}^n$. We can sample from $\mathcal{N}(\mathcal{V}_t)$ by constructing an orthonormal basis for $\mathcal{V}_t$: we build an orthogonal basis from the active constraint vectors and use the basis completion theorem to extend it, the remaining vectors spanning $\mathcal{V}_t$. Finding such a basis from $n+m$ constraint vectors requires Gaussian elimination, so it takes $O((n+m)^3)$ time.

Now, we have to repeat this for $T$ rounds, so the runtime should be expressible as $O((n+m)^3 T)$. Note that $T = O(1/\gamma^2)$, so the runtime described by \cite{Lovett} holds in the case where
\begin{align*}
 \frac{1}{\gamma^2} &= O(\delta^{-2}\log(nm/\delta)) \\
 \frac{1}{\gamma} &= O\left(\frac{1}{\delta}\sqrt{\log(nm/\delta)}\right) \\
 \delta &= O(\gamma\sqrt{\log(nm/\delta)}).
\end{align*}
However, the step size is selected under the condition $\delta = O(\gamma\sqrt{\log(nm/\gamma)})$, i.e. with $\gamma$ rather than $\delta$ inside the logarithm. Of course, this ends up being a small distinction when $nm \gg \delta$, but it is still worth noting.

A full proof is provided in the appendix, in Section \ref{full-proof}.


\section{The Discrepancy Minimizer}
For the purposes of brevity, we only provide a proof of Theorem \ref{result}.

\begin{proof}[Proof (Theorem \ref{result})] To find our full coloring, we simply apply Theorem \ref{Coloring} repeatedly. For $m$ sets on a universe of size $n$, we select $\delta = 1/(8 \log m)$ and, in each round, thresholds $c_1 = \cdots = c_m = 8\sqrt{\log(m/n')}$, where $n'$ is the number of still-uncolored points in that round (so $n' = n$ initially), and we denote by $v_1, \ldots, v_m$ the indicator vectors of the sets. (These thresholds satisfy the condition of Theorem \ref{Coloring} whenever $m \geq 3n'$, e.g. whenever $m \geq 3n$, since then $\sum_{j}\exp(-c_j^2/16) = m(n'/m)^4 = n'(n'/m)^3 \leq n'/27 \leq n'/16$.) We use the partial coloring algorithm starting from $x^{(0)} = 0^n$ to find some vector $x^{(1)}$ with $|\langle v_j, x^{(1)} \rangle| \leq 8\sqrt{n\log(m/n)}$ for all $j$ and with more than half of the points within the ``nearly hit'' bound. By Theorem \ref{Coloring}, this succeeds with probability at least $0.1$, which we can boost by repeating as needed.

Applying this iteratively to the coordinates that have not yet been colored (holding the colored ones fixed), we find that within $t = O(\log n)$ iterations every value of the resulting vector $x := x^{(t)}$ is within $\delta$ of an assignment. Let $x^{(i)}$ denote the vector after $i$ iterations and let $n_i$ denote the number of still-uncolored coordinates at that point, so that $n_i \leq n/2^i$. For any $j \in [m]$ we then have
\begin{align*}
 |\langle v_j, x \rangle| &\leq \sum_{i=1}^t |\langle v_j, x^{(i)} - x^{(i-1)} \rangle| \\
 &\leq \sum_{i=1}^{t} 8\sqrt{n_{i-1}\,\log(m/n_{i-1})} \\
 &\leq 8\sqrt{n} \sum_{i=1}^{\infty} \sqrt{\frac{i + \log(m/n)}{2^{i-1}}} \\
 &\leq C\sqrt{n\log(m/n)}
\end{align*}
for some constant $C$, where the second inequality uses that the restriction of $v_j$ to the coordinates uncolored at round $i$ has norm at most $\sqrt{n_{i-1}}$ (and the colored coordinates do not change).

We then round this candidate solution to an actual coloring. Knowing that each variable is within $\delta$ of either $1$ or $-1$, we set each variable to the value it is closer to with probability $(1+|x_i|)/2$, which means that $\mathbb{E}[\chi_i] = x_i$.
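This rounding step is easy to state in code. The following is a compact sketch, included only as an illustration (the helper name is ours), together with an empirical check that indeed $\mathbb{E}[\chi_i] = x_i$.

\begin{verbatim}
import numpy as np

def round_coloring(x, rng=None):
    """Round a near-integral vector x (|x_i| >= 1 - delta) to chi in {-1, +1}^n.

    Each coordinate is set to the sign it is closer to with probability
    (1 + |x_i|) / 2, so that E[chi_i] = x_i.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    keep_sign = rng.random(x.shape) < (1 + np.abs(x)) / 2   # nearer sign w.p. (1+|x_i|)/2
    signs = np.where(x >= 0, 1.0, -1.0)
    return np.where(keep_sign, signs, -signs)

# Empirical check of E[chi_i] = x_i on a few coordinates:
rng = np.random.default_rng(1)
x = np.array([0.97, -0.99, 0.95])
samples = np.stack([round_coloring(x, rng) for _ in range(100_000)])
print(samples.mean(axis=0))   # should be close to x
\end{verbatim}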
Denoting $Y := \chi - x$, the discrepancy of any set $j$ satisfies
\[ |\langle \chi, v_j \rangle| \leq |\langle x, v_j \rangle| + |\langle Y, v_j \rangle|\]
by the triangle inequality. What is left, then, is to find an upper bound for $|\langle Y, v_j \rangle|$. Noting that $|Y_i| \leq 2$, $\mathbb{E}[Y_i]=0$, $\sigma^2(Y_i) \leq \delta$ (which the paper claims, but which we are only able to show with $2\delta$ in place of $\delta$), $||v_j||_2 \leq \sqrt{n}$ and $||v_j||_{\infty} \leq 1$, a Chernoff bound gives
\begin{align*}
 \Pr[|\langle Y, v_j \rangle| > 2 \sqrt{2\log m} \sqrt{n\delta}] &\leq 2\exp(-2\log m) \\
 &\leq 2/m^2 \\
 &\leq 1/(2m)
\end{align*}
for $m \geq 4$. Note that $\delta = 1/(8 \log m)$, so $2 \sqrt{2\log m} \sqrt{n\delta} = \sqrt{n}$, which means that, by a union bound over all $j$, we have $\Pr[\exists j: |\langle Y, v_j \rangle| > \sqrt{n}] < 1/2$. Therefore, with probability at least $1/2$, $\mathsf{disc}(\chi) \leq C\sqrt{n\log(m/n)} + \sqrt{n} < K\sqrt{n\log(m/n)}$, as desired.
\end{proof}

\newpage

\section*{References}
\printbibliography[heading = none]

\newpage

\section{Appendix}

\subsection{A proof of the Beck-Fiala Theorem}\label{Beck-Fiala}
We present a proof of the Beck-Fiala theorem using only arguments from linear algebra.

\begin{proof} \cite{Chazelle}
We start by initializing $\chi(i) = 0$ for all $i \in [n]$, and we call all of these variables \textit{undecided}. We call a set \textit{stable} if it contains at most $t$ undecided elements. Since every element lies in at most $t$ sets, a counting argument shows that fewer than $n$ sets contain strictly more than $t$ elements at the start (and all elements are undecided upon initialization). If we impose the constraint that the sum of the undecided variables in each unstable set must remain zero, we get a system of fewer than $n$ linear equations in $n$ unknowns. This system therefore has at least one nontrivial solution, which changes only undecided variables and keeps the discrepancy of every unstable set equal to zero. We can move along this direction until at least one of the undecided variables reaches $\pm 1$. That variable is then decided, and we have a partial coloring. We now have at most $n-1$ undecided variables, each lying in $(-1, 1)$. By the same counting argument, the number of unstable sets is strictly smaller than the number of undecided variables, so we can repeat the procedure and find another nontrivial solution to our system of equations. We continue in this manner until all sets are stable. Note that until a set is declared stable, its discrepancy is $0$. When it is declared stable, it has at most $t$ undecided variables, all of which lie in $(-1,1)$, so the process of deciding those variables changes the discrepancy of the set by strictly less than $2t$. Since the final discrepancy is an integer, we get the result.
\end{proof}
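This argument translates directly into a polynomial-time procedure. Below is a minimal NumPy sketch of it, given only as an illustration: the function name, the SVD-based kernel computation and the tie-breaking choices are ours, and it assumes a $0/1$ incidence matrix whose columns each contain at most $t$ ones.

\begin{verbatim}
import numpy as np

def beck_fiala_coloring(A, t, tol=1e-9):
    """Constructive Beck-Fiala sketch.

    A : (m, n) 0/1 incidence matrix in which every column has at most t ones
        (every element belongs to at most t sets).
    Returns chi in {-1, +1}^n with |(A @ chi)_j| < 2t for every set j.
    """
    m, n = A.shape
    x = np.zeros(n)                       # fractional coloring, kept in [-1, 1]^n
    floating = np.ones(n, dtype=bool)     # "undecided" variables
    while floating.any():
        # sets with more than t undecided elements are "unstable"
        unstable = A[:, floating].sum(axis=1) > t
        if not unstable.any():
            # all sets stable: each has <= t undecided elements, so deciding
            # them arbitrarily changes each set's sum by less than 2t
            x[floating] = 1.0
            break
        # constraints: the move must not change the sum of the undecided
        # variables of any unstable set; there are fewer such constraints
        # than undecided variables, so a nontrivial kernel direction exists
        C = A[unstable][:, floating].astype(float)
        _, _, vt = np.linalg.svd(C, full_matrices=True)
        y = vt[-1]                        # unit vector in the kernel of C
        # move along y until some undecided variable first reaches +1 or -1
        xf = x[floating]
        with np.errstate(divide="ignore"):
            steps = np.concatenate([(1.0 - xf) / y, (-1.0 - xf) / y])
        alpha = steps[steps > tol].min()
        x[floating] = np.clip(xf + alpha * y, -1.0, 1.0)
        floating = np.abs(x) < 1.0 - tol
    return np.where(x >= 0, 1, -1)

# Tiny example: each element is placed in at most t random sets.
rng = np.random.default_rng(0)
n, m, t = 30, 20, 4
A = np.zeros((m, n))
for i in range(n):
    A[rng.choice(m, size=rng.integers(1, t + 1), replace=False), i] = 1.0
chi = beck_fiala_coloring(A, t)
print("max set discrepancy:", int(np.abs(A @ chi).max()), "(bound 2t - 1 =", 2 * t - 1, ")")
\end{verbatim}

In each round the direction $y$ lies in the kernel of the unstable-set constraints, so the sum of every unstable set is preserved and at least one undecided variable reaches $\pm 1$, exactly as in the proof above.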
\subsection{A Full Proof of Lemma \ref{coloring-analysis}} \label{full-proof}
We have already argued about the runtime of the algorithm. Here, we must show that the solution is unlikely to leave the polytope, that few discrepancy constraints are nearly hit, and that many variable constraints are nearly hit.

\begin{claim}\label{Polytope}
For $\gamma \leq \delta / \sqrt{c\log (mn/\gamma)}$ and $c$ a sufficiently large constant, with probability at least $1-1/(mn)^{c-2}$ we have that $X_0,\ldots,X_T \in \mathcal{P}$.
\end{claim}

To prove the above claim, we will need a Gaussian tail bound:
\begin{claim} \label{tailbound}
For any $\lambda >0$, $\Pr(|G| \geq \lambda ) \leq 2\exp(-\lambda^2/2)$, where $G \sim \mathcal{N}(0,1)$.
\end{claim}

\begin{proof}
We have that
$$\Pr(|G|>\lambda) = 2\Pr(G > \lambda) = 2 \int_{\lambda}^{\infty} \frac{1}{\sqrt{2\pi}}\exp(-t^2/2)\,dt \leq 2 \int_{\lambda}^{\infty} \frac{t}{\lambda}\frac{1}{\sqrt{2\pi}}\exp(-t^2/2)\,dt = \frac{2\exp(-\lambda^2/2)}{\sqrt{2\pi}\lambda}.$$
For $\lambda \geq 1/\sqrt{2\pi}$ we have $\frac{2\exp(-\lambda^2/2)}{\sqrt{2\pi}\lambda} \leq 2\exp(-\lambda^2/2)$, as desired. For $\lambda \leq 1/\sqrt{2\pi}$, we have $2\exp(-\lambda^2/2) > 1$, so the bound is trivial.
\end{proof}

\begin{proof}[Proof of Claim \ref{Polytope}]
Clearly $X_0 = x_0 \in \mathcal{P}$. Let $E_t$ denote the event that $X_t$ is the first element of the sequence not in $\mathcal{P}$, i.e. $E_t := \{X_t \not\in \mathcal{P},\ X_0,\ldots,X_{t-1} \in \mathcal{P}\}$. These events are disjoint, so
$$\Pr(X_0,\ldots,X_T \in \mathcal{P}) = 1-\sum_{t=1}^T \Pr(E_t).$$

The next step of the proof is to bound $\Pr(E_t)$. For $E_t$ to happen, either a variable constraint or a discrepancy constraint must be violated at step $t$. Let us first look at the variable constraint case: say $(X_t)_i>1$ (the case $(X_t)_i < -1$ is symmetric). Since $X_{t-1} \in \mathcal{P}$, we have $(X_{t-1})_i \leq 1$. Moreover, if $(X_{t-1})_i \geq 1-\delta$, then $i \in C_t^{var}$, so $(X_{t})_i = (X_{t-1})_i \leq 1$ and no violation occurs. Thus, for the constraint to be violated we must have $(X_{t-1})_i < 1-\delta$, and since $X_t = X_{t-1} + \gamma U_t$, this forces $|(U_t)_i| \geq \delta/\gamma$.

Now, let us see what must happen for $X_t$ to violate a discrepancy constraint. Define $W := \{e_1, \ldots, e_n, v_1,\ldots,v_m\}$. By the same reasoning as above, applied to the constraint $|\langle X_t-x_0,v_j\rangle| \leq c_j$ and its $\delta$-band $C_t^{\mathsf{disc}}$, if $E_t$ holds then $|\langle X_t - X_{t-1} ,w\rangle| \geq \delta$ for some $w \in W$: one can take $w = e_i$ for a violated variable constraint and $w = v_j$ for a violated discrepancy constraint. This is equivalent to $|\langle U_t, w \rangle| \geq \delta/\gamma$ for that same $w$. It therefore suffices to bound $\Pr[|\langle U_t, w \rangle| \geq \delta/\gamma]$ for a fixed $w \in W$ and to apply a union bound over $W$. In order to bound this probability, we need the following claim:

\begin{claim} \label{subspace}
Let $V \subseteq \mathbb{R}^n$ be a subspace and let $G \sim \mathcal{N}(V)$.
Then for all $u \in \mathbb{R}^n$, $\langle G, u \rangle \sim \mathcal{N}(0, \sigma^2)$ with $\sigma^2 \leq \|u\|_2^2$.
\end{claim}

\begin{proof}
We have $G = G_1v_1 + \cdots + G_dv_d$, where $\{v_1,\ldots,v_d\}$ is an arbitrary orthonormal basis for $V$ and $G_1, \ldots, G_d$ are independent standard normals. Then $\langle G, u \rangle = \sum_{i=1}^d \langle v_i, u \rangle G_i$, which is a Gaussian random variable with mean zero and variance $\sum_{i=1}^d \langle v_i, u \rangle^2$. This quantity is simply $\|\mathsf{Proj}_V u\|_2^2$, the squared norm of the projection of $u$ onto $V$, so it is at most $\|u\|_2^2$, and we are done.
\end{proof}
Every $w \in W$ has $\|w\|_2 \leq 1$ (the $e_i$ by definition and the $v_j$ by our normalization), so by the above claim $\langle U_t, w\rangle$ is Gaussian with mean $0$ and variance at most $1$. Then, by Claim \ref{tailbound},
$$\Pr[|\langle U_t, w \rangle| \geq \delta/\gamma] \leq 2\exp(-(\delta/\gamma)^2/2).$$
By our choice of parameters, $\delta/\gamma \geq \sqrt{c\log (nm/\gamma)}$ and $T = O(1/\gamma^2)$. Therefore, by a union bound,
$$\Pr[\exists\, t \leq T: X_t \not\in \mathcal{P}] = \sum_{t=1}^T \Pr[E_t] \leq \sum_{t=1}^T \sum_{w \in W} \Pr[|\langle U_t, w \rangle| \geq \delta/\gamma] \leq T (n+m)\cdot 2\exp\!\left(-\frac{c}{2}\log (nm/\gamma)\right) = 2T(n+m)\left (\frac{\gamma}{nm}\right )^{c/2}.$$
Since $T = K_1/\gamma^2$ and $\gamma < 1$, this is at most $2K_1(n+m)\,\gamma^{c/2-2}/(nm)^{c/2} \leq 1/(nm)^{c/2-2}$ for a sufficiently large constant $c$. This proves the claim, with $c/2$ playing the role of the constant in the exponent; since $c$ is an arbitrary sufficiently large constant, this is only a renaming.
\end{proof}
We are now well on our way to proving Lemma \ref{coloring-analysis}. The intuition behind the remaining steps is as follows. We will use the constraint on the discrepancy thresholds $c_j$, $j \in [m]$, to argue first that $\mathbb{E}[|C_T^{\mathsf{disc}}|] \ll n$. This is useful because it means that $\dim (\mathcal{V}_{t})$ stays large, which in turn means that $\mathbb{E}[\|X_t\|_2^2]$ increases appreciably at each timestep. At any given timestep, either $|C_t^{var}|$ is large and we are done, or $|C_t^{var}|$ is small, in which case $\dim (\mathcal{V}_{t})$ is again large and we keep taking large steps. We also note that, in order to prove the lemma, it suffices to show that $\mathbb{E}[|C_T^{var}|] \geq (1/2+\Omega(1))\,n$: combined with the fact that $|C_T^{var}|$ is upper bounded by $n$, this yields $\Pr[|C_T^{var}| \geq n/2] = \Omega(1)$.\\
We first show that $\mathbb{E}[|C_T^{\mathsf{disc}}|]$ is small; that is, on average very few discrepancy constraints are ever nearly hit.

\begin{claim}
$\mathbb{E}[|C_T^{\mathsf{disc}}|] < n/4$
\end{claim}
\begin{proof}
Let $J := \{j: c_j \leq 10\delta \}$. To bound the size of $J$, note that from our constraint on the thresholds,
$$n/16 \geq \sum_{j \in J} \exp (-c_j^2/16) \geq |J| \cdot \exp(-100\delta^2/16) \geq |J| \cdot \exp(-1/16) > 9|J|/10,$$
since $\delta <0.1$. Hence $|J| \leq 1.2n/16 < 2n/16$. Now consider $j \not\in J$. If $j \in C_T^{\mathsf{disc}}$, then $|\langle X_T-x_0, v_j\rangle| \geq c_j - \delta \geq 0.9c_j$. We want to bound the probability that this occurs. By the update formula, $X_T = x_0 + \gamma(U_1+\cdots+U_T)$. Defining $Y_i := \langle U_i, v_j \rangle$, we then have, for $j \not\in J$,
$$\Pr[j \in C_T^{\mathsf{disc}}] \leq \Pr[|Y_1 + \cdots + Y_T| \geq 0.9c_j/\gamma].$$
We will also need the following lemma:
\begin{lemma}[\cite{Bansal}]
Let $X_1,\ldots,X_T$ be random variables, and let $Y_1,\ldots,Y_T$ be random variables where each $Y_i$ is a function of $X_i$. Suppose that for all $1 \leq i \leq T$, $Y_i \mid X_1, \ldots, X_{i-1}$ is Gaussian with mean zero and variance at most one. Then for any $\lambda >0$:
$$\Pr[|Y_1 + \cdots+ Y_T| \geq \lambda \sqrt{T}] \leq 2 \exp(-\lambda^2/2).$$
\end{lemma}
The proof of the above lemma is a generalization of the proof of Claim \ref{tailbound}, and is omitted. We note that $Y_i = \langle U_i, v_j \rangle \sim \mathcal{N}(0, \sigma^2)$ with $\sigma^2 \leq \|v_j\|_2^2 = 1$, by our normalization of the $v_i$ at the beginning of the section. We can therefore apply the above lemma to our $Y_i$'s (each $Y_i$ is a function of $U_i$, and $Y_i \mid U_1,\ldots,U_{i-1}$ is Gaussian with mean zero and variance at most one) with $\lambda\sqrt{T} = 0.9c_j/\gamma$ to get
$$\Pr[j \in C_T^{\mathsf{disc}}] \leq 2\exp\!\big(-(0.9c_j)^2/(2\gamma^2T)\big),$$
which, since $T = K_1/\gamma^2$ and $(0.9)^2/(2K_1) = 2.43/32 > 1/16$, is
$$ = 2\exp\!\big(-(0.9c_j)^2/(2K_1)\big) \leq 2\exp(-c_j^2/16).$$
We therefore have that
$$\mathbb{E}[|C_T^{\mathsf{disc}}|] \leq |J| + \sum_{j \not\in J} \Pr[j \in C_T^{\mathsf{disc}}] < 2n/16+2n/16 = n/4,$$
where we have bounded every term with $j \in J$ by $1$ in the worst case and used the constraint $\sum_{j=1}^m\exp(-c_j^2/16) \leq n/16$ for the remaining terms.
\end{proof}

\begin{claim} \label{bound-x-t}
$\mathbb{E}[\|X_T\|_2^2] \leq n$
\end{claim}
\begin{proof}
It suffices to show that $\mathbb{E}[(X_T)_i^2] \leq 1$ for all $i \in [n]$, since $\mathbb{E}[\|X_T\|_2^2] = \sum_i \mathbb{E}[(X_T)_i^2]$. Conditioning on the first round (if any) at which the $i$-th variable constraint is nearly hit, we have
$$\mathbb{E}[(X_T)_i^2] = \mathbb{E}[(X_T)_i^2\mid i \not \in C_T^{var}]\Pr[i \not\in C_T^{var}] + \sum _{t=1}^T \mathbb{E}[(X_T)_i^2\mid i \in C_t^{var} \setminus C_{t-1}^{var}]\Pr[i \in C_t^{var} \setminus C_{t-1}^{var}].$$
For the first term, $i \not\in C_T^{var}$ means $|(X_{T-1})_i| < 1-\delta$, and the same computation as below shows $\mathbb{E}[(X_T)_i^2\mid i \not\in C_T^{var}] \leq 1$. For the remaining terms, if $i \in C_t^{var} \setminus C_{t-1}^{var}$ then the $i$-th coordinate is frozen from step $t$ onwards, so $(X_T)_i = (X_{t-1})_i = (X_{t-2})_i + \gamma (U_{t-1})_i$ with $|(X_{t-2})_i| < 1-\delta$ (for $t=1$ we simply have $(X_T)_i = (x_0)_i$ and the bound is immediate). Hence
$$\mathbb{E}[(X_T)_i^2\mid i \in C_t^{var} \setminus C_{t-1}^{var}] \leq \mathbb{E}[(1-\delta+\gamma(U_{t-1})_i)^2\mid i \in C_t^{var} \setminus C_{t-1}^{var}] = (1-\delta)^2 + 2(1-\delta)\gamma\,\mathbb{E}[(U_{t-1})_i] + \gamma^2\, \mathbb{E}[(U_{t-1})_i^2].$$
Here $\mathbb{E}[(U_{t-1})_i] = 0$ and, by Claim \ref{subspace} applied with $u = e_i$, $\mathbb{E}[(U_{t-1})_i^2] \leq 1$, so the above is at most
$$(1-\delta)^2 + \gamma^2 \leq 1-\delta + \gamma < 1,$$
by our choice of $\gamma$ (which satisfies $\gamma < \delta$).
\end{proof}


\begin{claim}\label{var-lb}
$\mathbb{E}[|C^{var}_T|] \geq 0.56n$.
\end{claim}
\begin{proof}
We use the growth of $\|X_t\|_2^2$ together with the bound on the number of nearly-hit discrepancy constraints to show that many variable constraints are nearly hit. Note that
\begin{align*}
 \mathbb{E}[\|X_t\|^2_2] &= \mathbb{E}[\|X_{t-1} + \gamma U_t\|_2^2] \\
 &= \mathbb{E}[\|X_{t-1}\|^2_2] + 2\gamma\,\mathbb{E}[\langle X_{t-1}, U_t\rangle] + \gamma^2\,\mathbb{E}[\|U_t\|_2^2] \\
 &= \mathbb{E}[\|X_{t-1}\|^2_2] + \gamma^2\,\mathbb{E}[\dim(\mathcal{V}_t)],
\end{align*}
where the cross term vanishes because, conditioned on $X_{t-1}$ (and hence on $\mathcal{V}_t$), $U_t$ has mean zero, and where $\mathbb{E}[\|U_t\|_2^2 \mid X_{t-1}] = \dim(\mathcal{V}_t)$ by the definition of $\mathcal{N}(\mathcal{V}_t)$.
Then, by Claim \ref{bound-x-t} and a telescoping sum (dropping the nonnegative $\|x_0\|_2^2$ term), we have
\begin{align*}
 n &\geq \mathbb{E}[\|X_T\|_2^2] \geq \gamma^2 \sum_{t=1}^T \mathbb{E}[\dim(\mathcal{V}_t)] \\
 &\geq \gamma^2 \, T\,\mathbb{E}[\dim(\mathcal{V}_T)] \\
 &\geq K_1 \,\mathbb{E}\big[n - |C_T^{var}| - |C_T^{\mathsf{disc}}|\big],
\end{align*}
where the second line uses the monotonicity of $\dim(\mathcal{V}_t)$ from Claim \ref{Ortho}, and the last line uses $\gamma^2 T = K_1$ together with $\dim(\mathcal{V}_T) \geq n - |C_T^{var}| - |C_T^{\mathsf{disc}}|$. Rearranging and using the previous claim,
\begin{align*}
 \mathbb{E}[|C_T^{var}|] &\geq n (1 - 1/K_1) - \mathbb{E}[|C_T^{\mathsf{disc}}|] \\
 &\geq n (1 - 3/16 - 1/4) = 0.5625n.
\end{align*}
\end{proof}

Now we can fully prove Lemma \ref{coloring-analysis}. By Claim \ref{var-lb}, the nonnegative random variable $n - |C_T^{var}|$ has expectation at most $0.4375n$, so Markov's inequality gives
$$\Pr[|C_T^{var}| < n/2] \leq \Pr[n - |C_T^{var}| > n/2] \leq \frac{0.4375n}{n/2} = 0.875,$$
hence $\Pr[|C_T^{var}| \geq n/2] \geq 0.125$. Combining this with Claim \ref{Polytope} tells us that with probability at least $0.125 - 1/\mathrm{poly}(m,n)$, which is at least $0.1$ once $mn$ exceeds a constant, the walk stays inside $\mathcal{P}$ and nearly hits at least $n/2$ variable constraints, i.e. we achieve the partial coloring, and we are done.

\end{document}