diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzbbqv" "b/data_all_eng_slimpj/shuffled/split2/finalzzbbqv" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzbbqv" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\subsection{NGC 2419}\nNGC 2419 is a stellar aggregate with a number of puzzling characteristics and its nature and origin are yet unclear. \nWith a half-light radius $r_h$ of 21.4 pc it is the fifth-most extended object listed in the 2010-version of the Harris (1996) catalogue, while \nit is also one of the most luminous Globular Clusters (GCs) in the Milky Way (MW; Fig.~1). At a Galactocentric distance of 90 kpc it \nresides in the outermost halo. All these traits have fueled discussions of whether it contains any dark matter or could be affected by non-Newtonian\ndynamics (Baumgardt et al. 2005; Conroy et al. 2011; Ibata et al. 2012). For instance, Ibata et al. (2012) argue that its kinematics is incompatible with a dark matter content in excess of some 6\\% of its total mass. \nOverall, these morphological and dynamical considerations beg the question to what extent NGC~2419 has evolved in isolation and \nwhether it could be associated with a once-accreted, larger system like a dwarf (spheroidal) galaxy.\n\\begin{figure}[t!]\n\\resizebox{\\hsize}{!}{\n\\includegraphics[clip=true]{Koch_f1.eps}\n}\n\\caption{\\footnotesize\nMagnitude-half light radius plot for GCs (black dots), luminous dSphs (blue squares) and ultrafaint MW satellites (red circles). NGC~2419 is labeled -- what is this object?}\n\\vspace{-0.5cm}\n\\end{figure}\n\n\nAlso chemically, NGC 2419 has much to offer: Cohen \\& Kirby (2012) and Mucciarelli et al. (2012) identified a population of stars (ca. 30\\% by number) with remarkably low Mg- and high K-abundances, \nwhich could be the result of ``extreme nucleosynthesis'' (Ventura et al. 2012). The question of an abundance-spread has been addressed by several authors using high-resolution spectroscopy and \nlow-resolution measurements of the calcium triplet. However, the large abundance variation of \nthe electron-donor Mg will upset the commonly used stellar model atmospheres so that any claimed spread in iron-, Ca-, and thus overall metallicity needs to be considered with caution. \nHowever, settling exactly this aspect is of prime importance, since any significant spread in heavy elements is a trademark signature of an object with a likely extragalactic origin \n(e.g., Fig.~1 in Koch et al. 2012). \n\nThe color-magnitude diagrams (CMDs) of Di Criscienzo et al. (2011) show a hint of a color-spread towards the subgiant branch and the presence of a \nhot, faint Horizontal Branch (HB), \nconsistent with a second generation of stars with a strongly increased He-content. Thus, also NGC~2419 does appear to show signs of multiple stellar populations,\nin line with the majority of the MW GC system. \n\\subsection{Str\\\"omgren photometry}\nWhile broad-band filter combinations have succeeded in unveiling multiple stellar populations in sufficiently deep data sets and more massive systems (e.g., Piotto et al. 2007), additional \nobservations in intermediate-band \nStr\\\"omgren filters\nare desirable for a number of reasons:\n\\begin{enumerate}\n\\item[\\em i)] The $c_1 = (u-v)- (v-b)$ index in combination with a color such as $v-y$ is a powerful {\\em dwarf\/giant separator} and can efficiently remove any foreground contamination (e.g., Faria et al. 2007). 
\nAt $b=25\\degr$ this can be expected to be less of a problem in NGC~2419, but see, e.g., Ad\\'en et al. (2009) for an impressive demonstration of such a CMD cleaning. \nOur first assessment of the $c_1$-$(b-y)$ plane indicates that the foreground contamination is indeed minimal on the upper RGB (see also Fig.~2). \n\\item[\\em ii)] The index $m_1 = (v-b)- (b-y)$ is a good proxy for stellar {\\em metallicity} and calibrations have been devised by several authors (e.g., Hilker 2000; Calamida et al. 2007; \nAd\\'en et al. 2009). \n\\item[\\em iii)] {\\em Multiple populations} in terms of split red giant branches (RGBs), multiple subgiant branches, and main sequence turnoffs are well separated in CMDs that use combinations \nof Str\\\"omgren filters, e.g., $\\delta_4 = c_1 + m_1$ (Carretta et al. 2011), where optical CMDs based on broad-band filters still show unimodal, ``simple stellar populations''. \n\\item[\\em iv)] This is immediately interlinked with the {\\em chemical abundance variations} in the light chemical elements (e.g., Anthony-Twarog et al. 1995) that accompany the multiple populations, \nmost prominently driven by N-variations. Accordingly, Yong et al. (2008) confirmed linear correlations of $c_y = c_1 - (b-y)$ with the [N\/Fe] ratio.\n\\end{enumerate}\n\\section{Data and analysis}\nWe obtained imaging in all relevant Str\\\"omgren filters ($u$,$b$,$v$,$y$) using the Wide Field Camera (WFC) at the 2.5-m Isaac Newton Telescope (INT) at La Palma, Spain. \nIts large field of view ($33\\arcmin\\times33\\arcmin$) allows us to trace the large extent of NGC~2419 out to several times its tidal radius ($r_t\\sim7.5\\arcmin$). \n\nInstrumental magnitudes were obtained via PSF-fitting using the \\textsc{Daophot\/Allframe} software packages \n(Stetson 1987).\nThe instrumental magnitudes were transformed to the standard Str\\\"omgren system using ample observations of standard stars \n(Schuster \\& Nissen 1988).\nWe set up transformation equations similar to those given by Grundahl, Stetson \\& Andersen (2002).\n\\section{Preliminary results: CMDs and [M\/H]}\nFig.~2 shows two CMDs of NGC~2419, where we restrict our analysis to the bona-fide region between 1 and 3 half-light radii to avoid potentially crowded, inner regions, yet \nminimizing the field star contamination of the outer parts. \nFor the present analysis, we adopted a constant reddening of E($B-V$)$= 0.061$\\,mag\\footnote{Obtained from \\url{http:\/\/irsa.ipac.caltech.edu\/applications\/DUST}}, \nand its respective transformations to the Str\\\"omgren system (Calamida et al. 2009).\n\\begin{figure*}[t!]\n\\resizebox{\\hsize}{!}{\n\\includegraphics[width=0.02\\hsize]{Koch_f2.eps}\n}\n\\caption{\\footnotesize\nCMDs of NGC 2419 for two possible Str\\\"omgren-filter combinations. Shown are stars between $r_h < r < 3\\,r_h$; no other selection criteria have been applied. \nStars falling within the lines shown right were used to construct the metallicity distribution in Fig.~3.}\n\\end{figure*}\n\nWhile we do not resolve the main sequence turnoff, our photometry reaches about 1 mag below the HB at $y_{\\rm HB}$$\\equiv$$V_{\\rm HB}$$\\sim$$20.5$ mag. \nAll regions of the CMD are well reproduced, showing \na clear RGB, hints of an AGB and bright AGB (which stand out more clearly in other color indices; Frank et al. in prep.), and a prominent HB. Hess diagrams also highlight the presence of a clear \nRGB bump at $y_0\\sim20.3$ mag. 
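\nAs a simple illustration of how the index definitions of Sect.~1.2 and the adopted reddening enter this analysis, the short Python sketch below computes dereddened $(b-y)_0$, $m_0$, $c_0$ and $c_y$ indices for one star and applies a generic linear metallicity relation; the reddening coefficients and calibration constants are placeholders for illustration only and are {\\em not} the actual Calamida et al. (2009) or Ad\\'en et al. (2009) values.\n\\begin{verbatim}\n# Observed Stroemgren magnitudes of one RGB candidate (placeholder values).\nu, v, b, y = 17.90, 16.74, 16.18, 15.72\n\nebv  = 0.061                 # adopted E(B-V)\ne_by = 0.74 * ebv            # assumed E(b-y) ~  0.74 E(B-V)\ne_m1 = -0.33 * e_by          # assumed E(m1)  ~ -0.33 E(b-y)\ne_c1 = 0.20 * e_by           # assumed E(c1)  ~  0.20 E(b-y)\n\nby0 = (b - y) - e_by\nm0  = ((v - b) - (b - y)) - e_m1   # metallicity-sensitive index\nc0  = ((u - v) - (v - b)) - e_c1   # gravity (dwarf/giant) indicator\ncy0 = c0 - by0                     # N-sensitive index (Yong et al. 2008)\n\n# Placeholder linear calibration [M/H] = a*m0 + b*(b-y)0 + c (illustrative only).\na_c, b_c, c_c = 14.0, -2.0, -2.8\nprint(by0, m0, c0, cy0, a_c * m0 + b_c * by0 + c_c)\n\\end{verbatim}\nIn the analysis itself the Ad\\'en et al. (2009) calibration shown in Fig.~3 is used instead of this placeholder relation. 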
\nMoreover, the extreme, hotter HB stands out in the bluer u-band (left panel), confirming the presence of this He-rich, secondary population (di Crisicienzo et al. 2011). \n\nTo obtain a first impression of the metallicity distribution function (MDF) of NGC~2419 (Fig.~3, right panel), we convert our Str\\\"omgren photometry to metallicities, [M\/H], through the \ncalibration by Ad\\'en et al. (2009). \nThis was carried out for stars on the RGB (see ridge lines in Fig.~2, right panel, and Fig.~3, left). As a result, we find \na mean [M\/H] of $-2$ dex. This is in good agreement with the values listed in the Harris catalogue and the high-resolution data of Mucciarelli et al. (2012) and Cohen \\& Kirby (2012) of [Fe\/H] = $-2.15$ dex. \n\\begin{figure*}[t!]\n\\resizebox{\\hsize}{!}{\n\\includegraphics[clip=true,width=0.55\\hsize]{Koch_f3a.eps}\n\\includegraphics[clip=true,width=0.53\\hsize]{Koch_f3b.eps}\n}\n\\caption{\\footnotesize\nPreliminary metallicity calibration in the $m_1$-vs-$(b-y)$ plane (left panel). Dashed lines indicate iso-metallicity curves based on the calibration of Ad\\'en et al. (2009) for [M\/H] = $-2.5$ up to +0.5 dex in steps of 0.5 (bottom to top). \nBlack dots are those within the RGB-ridge lines of Fig.~2, used to infer the MDF (light gray) in the right panel. The dark gray MDF in this plot uses a stricter RGB criterion of $(v-y)_0 > 1.4$.}\n\\label{eta}\n\\end{figure*}\n\n\nThe MDF also indicates the presence of a broad metallicity spread, where we find a nominal 1$\\sigma$-spread of 0.5 dex, but this is probably still dominated by \nremaining foreground contaminants and photometric errors. While we cannot exclude the presence of an abundance spread in NGC 2419 from the \npresent data, it is very likely much smaller than the one suggested by Fig.~3.\n\\section{Discussion} \nAlthough our CMD does not allow us to clearly isolate any multiple stellar populations at this stage of our analysis, we nevertheless \nfind strong reason to believe in their presence in NGC~2419, bolstered by recent optical images (di Criscienzo et al. 2011). \nThese authors detected a color spread at the base of the RGB and an extreme, hot HB, indicative of an increased He-abundance of a populous second stellar generation. \nThis HB population is also visible in our intermediate-band CMDs. \n\nAlthough our first analysis suggests a broad metallicity spread in NGC~2419, this is probably not significant and further CMD filtering is necessary. \nHowever, our derived mean metallicity is in line with the results from high-resolution spectroscopy, which indicates that our Str\\\"omgren photometry \nis well calibrated. \n\\begin{acknowledgements}\nAK, MF, and NK gratefully acknowledge the Deutsche Forschungsgemeinschaft for funding from Emmy-Noether grant Ko 4161\/1. 
\nThis research has made use of the NASA\/ IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of \nTechnology, under contract with the National Aeronautics and Space Administration.\n\\end{acknowledgements}\n\\bibliographystyle{aa}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{%\n \\@startsection\n {section}%\n {1}%\n {\\z@}%\n {0.8cm \\@plus1ex \\@minus .2ex}%\n {0.5cm}%\n {%\n \\normalfont\\small\\bfseries\n \\centering\n }%\n}%\n\\def\\@hangfrom@section#1#2#3{\\@hangfrom{#1#2}\\MakeTextUppercase{#3}}%\n\\def\\subsection{%\n \\@startsection\n {subsection}%\n {2}%\n {\\z@}%\n {.8cm \\@plus1ex \\@minus .2ex}%\n {.5cm}%\n {%\n \\normalfont\\small\\bfseries\n \\centering\n }%\n}%\n\\def\\subsubsection{%\n \\@startsection\n {subsubsection}%\n {3}%\n {\\z@}%\n {.8cm \\@plus1ex \\@minus .2ex}%\n {.5cm}%\n {%\n \\normalfont\\small\\itshape\n \\centering\n }%\n}%\n\\def\\paragraph{%\n \\@startsection\n {paragraph}%\n {4}%\n {\\parindent}%\n {\\z@}%\n {-1em}%\n {\\normalfont\\normalsize\\itshape}%\n}%\n\\def\\subparagraph{%\n \\@startsection\n {subparagraph}%\n {5}%\n {\\parindent}%\n {3.25ex \\@plus1ex \\@minus .2ex}%\n {-1em}%\n {\\normalfont\\normalsize\\bfseries}%\n}%\n\\def\\section@preprintsty{%\n \\@startsection\n {section}%\n {1}%\n {\\z@}%\n {0.8cm \\@plus1ex \\@minus .2ex}%\n {0.5cm}%\n {%\n \\normalfont\\small\\bfseries\n }%\n}%\n\\def\\subsection@preprintsty{%\n \\@startsection\n {subsection}%\n {2}%\n {\\z@}%\n {.8cm \\@plus1ex \\@minus .2ex}%\n {.5cm}%\n {%\n \\normalfont\\small\\bfseries\n }%\n}%\n\\def\\subsubsection@preprintsty{%\n \\@startsection\n {subsubsection}%\n {3}%\n {\\z@}%\n {.8cm \\@plus1ex \\@minus .2ex}%\n {.5cm}%\n {%\n \\normalfont\\small\\itshape\n }%\n}%\n \\@ifxundefined\\frontmatter@footnote@produce{%\n \\let\\frontmatter@footnote@produce\\frontmatter@footnote@produce@endnote\n }{}%\n\\def\\@pnumwidth{1.55em}\n\\def\\@tocrmarg {2.55em}\n\\def\\@dotsep{4.5pt}\n\\setcounter{tocdepth}{3}\n\\def\\tableofcontents{%\n \\addtocontents{toc}{\\string\\tocdepth@munge}%\n \\print@toc{toc}%\n \\addtocontents{toc}{\\string\\tocdepth@restore}%\n}%\n\\def\\tocdepth@munge{%\n \\let\\l@section@saved\\l@section\n \\let\\l@section\\@gobble@tw@\n}%\n\\def\\@gobble@tw@#1#2{}%\n\\def\\tocdepth@restore{%\n \\let\\l@section\\l@section@saved\n}%\n\\def\\l@part#1#2{\\addpenalty{\\@secpenalty}%\n \\begingroup\n \\set@tocdim@pagenum{#2}%\n \\parindent \\z@\n \\rightskip\\tocleft@pagenum plus 1fil\\relax\n \\skip@\\parfillskip\\parfillskip\\z@\n \\addvspace{2.25em plus\\p@}%\n \\large \\bf %\n \\leavevmode\\ignorespaces#1\\unskip\\nobreak\\hskip\\skip@\n \\hb@xt@\\rightskip{\\hfil\\unhbox\\z@}\\hskip-\\rightskip\\hskip\\z@skip\n \\par\n \\nobreak %\n \\endgroup\n}%\n\\def\\tocleft@{\\z@}%\n\\def\\tocdim@min{5\\p@}%\n\\def\\l@section{%\n \\l@@sections{}{section\n}%\n\\def\\l@f@section{%\n \\addpenalty{\\@secpenalty}%\n \\addvspace{1.0em plus\\p@}%\n \\bf\n}%\n\\def\\l@subsection{%\n \\l@@sections{section}{subsection\n}%\n\\def\\l@subsubsection{%\n \\l@@sections{subsection}{subsubsection\n}%\n\\def\\l@paragraph#1#2{}%\n\\def\\l@subparagraph#1#2{}%\n\\let\\toc@pre\\toc@pre@auto\n\\let\\toc@post\\toc@post@auto\n\\def\\listoffigures{\\print@toc{lof}}%\n\\def\\l@figure{\\@dottedtocline{1}{1.5em}{2.3em}}\n\\def\\listoftables{\\print@toc{lot}}%\n\\let\\l@table\\l@figure\n\\appdef\\class@documenthook{%\n \\@ifxundefined\\raggedcolumn@sw{\\@booleantrue\\raggedcolumn@sw}{}%\n 
\\raggedcolumn@sw{\\raggedbottom}{\\flushbottom}%\n}%\n\\def\\tableft@skip@float{\\z@ plus\\hsize}%\n\\def\\tabmid@skip@float{\\@flushglue}%\n\\def\\tabright@skip@float{\\z@ plus\\hsize}%\n\\def\\array@row@pre@float{\\hline\\hline\\noalign{\\vskip\\doublerulesep}}%\n\\def\\array@row@pst@float{\\noalign{\\vskip\\doublerulesep}\\hline\\hline}%\n\\def\\@makefntext#1{%\n \\def\\baselinestretch{1}%\n \\reset@font\n \\footnotesize\n \\leftskip1em\n \\parindent1em\n \\noindent\\nobreak\\hskip-\\leftskip\n \\hb@xt@\\leftskip{%\n \\Hy@raisedlink{\\hyper@anchorstart{footnote@\\the\\c@footnote}\\hyper@anchorend}%\n \\hss\\@makefnmark\\\n }%\n #1%\n \\par\n}%\n\\prepdef\n\\section{Introduction}\n\\label{Sect.I}\n\nHeavy-ion collisions are excellent factory for producing \nboth elementary and composed particles as well as for studying their \nproperties and production mechanism. Since many years \nefforts of theorists and experimentalists were focused on the investigation \nof time-space evolution of the quark-gluon plasma (QGP) and production of\ndifferent species of particles, primarily hadrons \n(pions, kaons, nucleons, hyperons, etc.) emitted in the collision. \nAt high energies, the velocities of such beam nuclei are close to light \nvelocity thus they are often called ultra-relativistic \nvelocities (URV). \n\nCentral collisions are the most interesting in the context of\nthe QGP studies. Plasma is, of course, also produced in more peripheral \ncollisions.\nIn peripheral collisions, the so-called spectators are relatively large \nand have large moving charge. It was realized relatively late that \nthis charge generates strong quickly changing electromagnetic fields \nthat can influence the trajectories and some observables \nfor charged particles.\n\nSuch effects were investigated in previous studies of one of the present\nauthors \\cite{Rybicki:2006qm,Rybicki:2013qla}.\nOn one hand side the EM effects strongly modify the Feynman $x_F$ spectra\nof low-$p_T$ pions, creating a dip for $\\pi^+$ and\nan enhancement for $\\pi^-$ at $x_F \\approx \\frac{m_{\\pi}}{m_N}$.\nIn \\cite{Rybicki:2006qm} a formalism of charged meson evolution\nin the EM field (electric and magnetic) of fast moving nuclei was\ndeveloped. Later on the spectacular effects were confronted with \nthe SPS data \\cite{Rybicki:2009zz} confirming the theoretical predictions.\nThe investigation was done for $^{208}$Pb+$^{208}$Pb at 158~GeV\/nucleon \nenergy ($\\sqrt{{s}_{NN}} =$ 17.3~GeV) at \nCERN Super Proton Synchrotron (SPS) \\cite{Schlagheck:1999aq}.\nIn \\cite{Rybicki:2013qla} the influence of the EM fields\non azimuthal flow parameters ($v_n$) was studied and confronted in\n\\cite{Rybicki:2014rna} with the RHIC data. It was found that the EM field\nleads to a split of the directed flow for opposite e.g.\ncharges. In the initial calculation, a simple model of single initial\ncreation point of pions was assumed for simplicity.\nMore recently, such a calculation was further developed by taking \ninto account also the time-space evolution of the fireball, treated \nas a set of firestreaks \\cite{Ozvenchuk:2019cve}. \nThe distortions of the $\\pi^+$ and $\\pi^-$ distributions allow to\ndiscuss the electromagnetic effects of the spectator and charged pions \nin URV collisions of heavy ions and nicely explain the experimental data. 
\nThis study was done for non-central collisions, where the object remaining \nafter the collision, called the 'spectator', loses only a small fraction \nof the nucleons of the original beam\/target nucleus.\n \nThe ultra-peripheral heavy-ion collisions (e.g.~$^{208}$Pb+$^{208}$Pb) at \nultra-relativistic energies ($\\sqrt{s_{NN}}\\ge 5~$GeV) \\cite{\nKlusek-Gawenda:2016suk}\nallow particles to be produced in a broad region of impact parameter\nspace, even far from the ``colliding'' nuclei.\nThe nuclei passing near each other at ultrarelativistic energies \nare a source of virtual photons that can collide, producing\ne.g. a pair of leptons.\nIn current experiments (RHIC, LHC), the luminosity is large enough to observe\ne.g. the $AA \\rightarrow AA\\rho_0$, $AA \\rightarrow AAe^+e^-$ and \n$AA \\rightarrow AA\\mu^+\\mu^-$ processes. \nOne of the most interesting phenomena is multiple interactions \n\\cite{Klusek-Gawenda:2016suk,vanHameren:2017krz},\nwhich may lead to the production of more than one lepton pair.\n\nThe studies of the creation of positron-electron pairs started in the early \n1930s with the prediction of the positron by Dirac \\cite{Dirac:1934} and the work\nof G. Breit and J.A. Wheeler \\cite{Breit:1934zz}, who calculated \nthe cross section for the production of such pairs in the collision of two light quanta. \nIt was E.J. Williams who realized \\cite{Williams:1935} that the \nproduction of $e^+ e^-$ pairs is enhanced in the vicinity of the atomic\nnucleus. An overview of the theoretical investigations of \n$e^+ e^-$ pair creation in a historical context was presented \nby e.g. J.H. Hubbell \\cite{Hubbel:2006}, \nand a detailed discussion of this process in physics and\nastrophysics was given by R. Ruffini et al. \\cite{Ruffini:2009hg}.\n\nThe early analyses were done in momentum space and therefore\ndid not include all details in the impact parameter space.\nAn example of a calculation where such details are taken into\naccount can be found e.g. in \n\\cite{Klusek-Gawenda:2010vqb,Klusek-Gawenda:2016suk}.\nIn an ultra-peripheral collision (UPC), the nuclei do not collide\nand, in principle, do not lose any nucleons.\nHowever, the electromagnetic interaction induced by the fast moving\nnuclei may cause excitation of the nuclei and subsequent emission \nof different particles, in particular neutrons \\cite{Klusek-Gawenda:2013ema},\nwhich can be measured both at RHIC and at the LHC.\nMoreover, the UPC are responsible not only \nfor Coulomb excitation of the spectator but also for \nmultiple scattering and the production of more than one dielectron\npair \\cite{vanHameren:2017krz}. \nWith the large transverse momentum cuts typical at RHIC and the LHC, the effect\nis not dramatic.\n\nCan the strong EM fields generated at high energies modify \nthe electron\/positron distributions?\nNo visible effect was observed for electrons with $p_T >$ 1~GeV,\nas discussed in \\cite{vanHameren:2017krz}, where the ALICE distributions\nwere confronted with the $b$-space equivalent photon approximation (EPA)\nmodel.\nHowever, the electromagnetic effect is expected rather at \nlow transverse momenta.\nTo our knowledge, this topic has not been discussed \nin the literature.\nAs the spectators, which in ultra-peripheral collisions are almost \nidentical to the colliding nuclei, are charged,\nthey can interact electromagnetically with electrons and positrons, as \nwas the case for pions. \nEffects similar to those observed for pions may therefore also be expected for charged \nleptons. 
The motion of particles in an EM field depends not only on their \ncharge but also on their mass. Thus the distortions of the $e^+\/e^-$\ndistributions should be different from those of the $\\pi^+\/\\pi^-$ distributions. \nAlso the mechanism of production is completely different. \nIn contrast to pion production, where the emission site is well localized,\nthe electron-positron pairs produced by photon-photon fusion can \nbe created in a broad region of configuration space around the ``collision''\npoint, i.e. the point of closest approach of the nuclei. \nA pedagogical illustration of the impact parameter dependence can \nbe found e.g. in \\cite{Klusek-Gawenda:2010vqb}.\n\nOn the one hand, the previous works \\cite{Ozvenchuk:2019cve} and references \ntherein dealt with the electromagnetic effects caused by \nthe emission of pions from the fireball region. \nOn the other hand, the model considered in \\cite{Klusek-Gawenda:2010vqb,Klusek-Gawenda:2018zfz,Klusek-Gawenda:2020eja} can correctly estimate \nthe localization in the impact parameter space.\nThe present study is focused on the electromagnetic \ninteraction of electrons\/positrons with the highly positively charged nuclei.\n\nOur approach consists of two steps. First, the $e^+ e^-$ distributions \nare calculated within the EPA in terms of initial distributions at a given \npoint in space and at a given initial rapidity and transverse momentum.\nSecondly, the space-time evolution of the leptons in the electromagnetic fields\nof the fast moving nuclei with URV is performed by solving the relativistic \nequation of motion \\cite{Rybicki:2011zz}.\n\nIn Section \\ref{Sect.II} the details of the calculation of \nthe differential cross section of $e^+ e^-$ production will be\npresented.\nWe will not discuss in detail the equation of motion, which was presented\ne.g. in \\cite{Rybicki:2011zz}. \nThe results of the evolution of electrons\/positrons in\nthe EM field of the ``colliding'' nuclei are presented in Section \\ref{Sect.III}. \n\\section{Lepton pair production, equivalent photon approximation}\n\\label{Sect.II}\n\nThe particles originating from photon-photon collisions can be created\nin the full space around the excited nuclei, thus first of all the geometry of \nthe reaction should be defined. \n\nIn the present study, the ultra-peripheral collisions (UPC) are\ninvestigated in the reaction plane ($b_x,b_y$), which is perpendicular \nto the beam axis taken as the $z$-direction.\n\nThe collision point ($b_x=0,b_y=0$) is the time-independent center of mass (CM)\nof the reaction, as shown in Fig.~\\ref{fig01}. The impact parameter\nis fixed to approximately twice the radius of each (identical) Pb nucleus, b=(13.95~fm, 14.05~fm). \nFor comparison we will also show results for b=(49.95~fm, 50.05~fm).\n\nFour characteristic points ($\\pm$15~fm, 0), (0, $\\pm$15~fm), which are discussed later, are also \nmarked in the figure. 
In the present paper \nwe shall show results for these initial emission points for \nillustrating the effect of evolution of electrons\/positrons in \nthe EM field of nuclei.\n\n\\begin{figure}[!hbt]\n \\begin{center}\n \\includegraphics[scale=0.30]{geometra01.pdf}\n \\caption{ The impact parameter space and the\n five selected points ($b_x,b_y$):\n (0, 0), ($\\pm$15~fm,0), (0,$\\pm$15~fm), for which \n the distribution in rapidity and transverse momentum will be\n compared latter on in the text, shown in the CM rest frame.\n \n }\n \\label{fig01}\n \\end{center}\n\n\\end{figure}\n\nUsually the exclusive dilepton production was estimated by using \nthe monopole charge form factor which allows to reproduce correctly \nthe total cross section. \nThe differential cross sections are more sensitive to\ndetails, thus realistic charge form factor (Fourier transform of the\ncharge distribution) has to be employed.\n\\footnote{Double scattering production of positron-electron pairs using \nthe realistic charge form factor has been discussed \nin \\cite{Klusek-Gawenda:2016suk} and \\cite{vanHameren:2017krz}.} \nThe total cross section for the considered process\n($A A \\to A A e^+ e^-$) can be written as:\n\n\\begin{eqnarray}\n&& \\sigma_{A_1A_2\\rightarrow A_1A_2e^+e^-} (\\sqrt{s_{A_1A_2}})=\t\\nonumber \\\\\n &=&\\int \\frac{d\\sigma_{\\gamma\\gamma \\rightarrow e^+e^-}(W_{\\gamma\\gamma}) }{d\\cos{\\theta}} \nN(\\omega_1,b_1) N(\\omega_2,b_2) S^2_{abs}(b) \\nonumber\\\\\n&\\times& 2\\pi bdbd\\overline{b_x} d \\overline{b_y}\n\\frac{W_{\\gamma\\gamma}}{2} d W_{\\gamma\\gamma}d Y_{e^+e^-} \\ d\\cos{\\theta},\n\\label{EPA}\n\\end{eqnarray}\nwhere $N(\\omega_i,b_i)$ are photon fluxes, $W_{\\gamma\\gamma}=M_{e^+e^-}$\n is invariant mass and $Y_{e^+e^-}= (y_{e^+} + y_{e^-})\/2$ is rapidity of \nthe outgoing system and $\\theta$ is the scattering angle in the $\\gamma\\gamma\\rightarrow e^+e^-$ \ncenter-of mass system. 
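\nTo make the ingredients of Eq.~(\\ref{EPA}) more concrete, the short Python sketch below evaluates a schematic version of the integrand at a single phase-space point, using the point-like photon flux (no realistic charge form factor) and a sharp-cutoff absorption factor; it is an illustration only and not the calculation actually performed in this work.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.special import k0, k1\n\nALPHA, HBARC = 1.0 / 137.036, 0.1973   # fine-structure constant, GeV*fm\nZ, RA = 82, 7.1                        # Pb charge and assumed radius (fm)\nSQRTS, MN = 17.3, 0.938\nGAMMA = SQRTS / (2.0 * MN)             # Lorentz factor of each beam in the CM frame\n\ndef photon_flux(omega, b):\n    """Point-like photon flux N(omega, b) in 1/(GeV fm^2)."""\n    x = omega * b / (GAMMA * HBARC)\n    return (Z**2 * ALPHA / np.pi**2 * x**2 / (omega * b**2)\n            * (k1(x)**2 + k0(x)**2 / GAMMA**2))\n\n# One phase-space point: pair mass W, pair rapidity Y, emission point (bx, by),\n# nuclei located at (+b/2, 0) and (-b/2, 0) in the transverse plane.\nW, Y, b, bx, by = 0.05, 0.0, 14.0, 15.0, 0.0\nomega1, omega2 = 0.5 * W * np.exp(Y), 0.5 * W * np.exp(-Y)\nb1 = np.hypot(bx - 0.5 * b, by)        # distance from nucleus 1\nb2 = np.hypot(bx + 0.5 * b, by)        # distance from nucleus 2\nS2 = 1.0 if b > 2.0 * RA else 0.0      # crude survival (absorption) factor\nprint(omega1, omega2, photon_flux(omega1, b1) * photon_flux(omega2, b2) * S2)\n\\end{verbatim}\n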
The gap survival factor $S^2_{abs}$ assures that \nonly ultra-peripheral reactions are considered.\n\\begin{figure}[!hbt]\n \\begin{center}\n \\includegraphics[scale=0.5]{paw_ypt_b14.png}\n \\caption{The map of differential cross section in rapidity of\n electron or positron and lepton transverse momentum \n for ($b_x,b_y$) = (0, 0) which will be called CM point for brevity.}\n\n \\label{fig02a}\n \\end{center}\n\\end{figure}\n\nThe\n$\\overline{b_x}=(b_{1x}+b_{2x})\/2$ and\n$\\overline{b_y}=(b_{1y}+b_{2y})\/2$\nquantities are particularly useful for our purposes.\nWe define $\\vec{{\\bar b}} = (\\vec{b}_1 + \\vec{b}_2)\/2$ which is\n(initial) position\nof electron\/positron in the impact parameter space.\nThis will be useful when considering motion of electron\/positron\nin the EM field of nuclei.\nThe energies of photons are included by the relation: \n$\\omega_{1,2}=W_{\\gamma\\gamma}\/2 \\exp(\\pm Y_{e^+e^-})$.\nIn the following for brevity we shall use $b_x, b_y$ instead of \n$\\overline{b_x}, \\overline{b_y}$.\nThen $(b_x, b_y)$ is the position in the impact parameter plane,\nwhere the electron and positron are created.{\\footnote{Expression (\\ref{EPA}) allows to estimate not only \nthe lepton pair production but also a production of any other \nparticle pair \\cite{Klusek-Gawenda:2010vqb}.}}\nThe differential (in rapidity and transverse momentum) cross section \ncould be obtained in each emission point \nin the impact parameter space ($b_x,b_y$).\n\n\\begin{figure}[!hbt]\n \\begin{center}\n \\includegraphics[scale=0.60]{rap_e_xy_weight_17GeV.pdf}\n \\includegraphics[scale=0.60]{pt_e_xy_weight_17GeV.pdf}\n \\caption{(Color on-line) The differential cross section for various emission points\n of electrons\/positrons produced in the $^{208}$Pb+$^{208}$Pb\n reaction at 158~GeV\/nucleon energy ($\\sqrt{{s}_{NN}} =$ 17.3~GeV)\n at impact parameter 14$\\pm$0.05~fm. The cross section \n for selected points ($b_x,b_y$): (0, 0), ($\\pm$15~fm, 0), (0, $\\pm$15~fm) and (40~fm, 0)\n are integrated over $p_T$ (a) and rapidity (b), respectively.}\n \\label{fig02b}\n \\end{center}\n\\end{figure}\n\n\nThe calculations will be done assuming the collision of\n$^{208}$Pb+$^{208}$Pb \nat 158~GeV\/nucleon energy ($\\sqrt{{s}_{NN}} =$ 17.3~GeV) corresponding to\nthe CERN SPS and $\\sqrt{{s}_{NN}} =$ 200~GeV of \nthe STAR RHIC at impact parameter 14$\\pm$0.05~fm which is \napproximately twice the radius of the lead nucleus. \nThis is minimal configuration assuring ultra-peripheral collisions.\n\nFigure \\ref{fig02a} illustrates the differential cross section on \nthe plane of rapidity ($y$) vs. transverse momentum ($p_T$). \nRather broad range of rapidity (-5, 5) is chosen, but the distribution in $p_T$ will be limited\nto (0, 0.1~GeV) as the cross section drops at $p_T$ = 0.1~GeV \nalready a few orders of magnitude. The electromagnetic effects may \nbe substantial only in the region of the small transverse momenta.\n\nThus for our exploratory study here we have limited the range for rapidity \nto (-5, 5) and for transverse momentum to $p_T$=(0, 0.1~GeV).\nThe integrated distribution can be seen in Fig.~\\ref{fig02b}(a) and\n(b). 
There we compare the distributions obtained for \ndifferent emission points ($b_x,b_y$): (0, 0), ($\\pm$15~fm, 0), \n(0, $\\pm$15~fm) as shown in Fig.~\\ref{fig01}.\nThe behavior of the differential cross section is very similar in each \n($b_x,b_y$) point but it differs in normalization as it is shown \nin Fig.~\\ref{fig02b}.\n\n\\begin{figure}[!hbt]\n\\includegraphics[scale=0.4]{dsig_dbm.eps}\n\\caption{\nDistribution of the cross section in the impact parameter $b$\nfor different energies:\n$\\sqrt{s_{NN}}$ = 17.3, 50 and 200~GeV (from bottom to top).\n}\n\\label{fig:dsig_dbm}\n\\end{figure}\n\nIn Fig.~\\ref{fig:dsig_dbm} we show a distribution of the cross section\nin impact parameter $b$ for different collision energies\n$\\sqrt{s_{NN}}$ = 17.3, 50, 200~GeV.\nIn this calculation we have taken $p_T >$ 0~GeV (the cross section\nstrongly depends on the lowest value of lepton transverse momentum $p_T$).\nIn general, the larger collision energy the broader the range of impact\nparameter. However, the cross section for $b \\approx R_{A_1} + R_{A_2}$\nis almost the same. Only taking into account limitation, e.g. on the momentum transfer, makes the difference in the cross section significant even at $b=14$~fm.\n\n\\begin{figure}[!hbt]\n \\begin{center}\n \\includegraphics[scale=0.450]{paw_baxbay_b14_check.eps}\\\\\n \\vspace{-8cm}\\hspace{-5.0cm}{\\bf(a)}\\\\\n \\vspace{7cm}\n \\includegraphics[scale=0.450]{paw_baxbay_b50_check.eps}\\\\\n \\vspace{-8cm}\\hspace{-5.0cm}{\\bf(b)}\\\\\n \\vspace{7cm}\n \\caption{ Two-dimensional cross section\n as a function of $b_x$ and $b_y$ for\n two values of impact parameter: \n (a) b=14$\\pm$0.05~fm and (b) b=50$\\pm$0.05~fm.}\n \\label{fig02d}\n \\end{center}\n\\end{figure}\n\n\nThe emission point of the electrons\/positrons does not change the\nbehavior (shape) of the cross section on the ($y,p_T$) plane but \nchanges the absolute value of the cross section.\nAs it is visible in Fig.~\\ref{fig02b} (a) and (b) the biggest\ncross section is obtained for the CM emission point. The production of \n$e^+,e^-$ at ($b_x,b_y$) = (40~fm, 0) i.e. far from the CM point, \nis hindered by three orders of magnitude. \nMoreover, the production at $b_x$=$\\pm$15~fm and $b_y$=0 is more\npreferable than the production at $b_x$=0 and $b_y$=$\\pm$15~fm \nwhat is fully understandable taking into account the geometry of \nthe system (see Fig.~\\ref{fig01}). \nAs the system taken here into consideration is fully symmetric\n($A_1 = A_2, Z_1 = Z_2$), \nthus corresponding results are symmetric under the following\nreplacements: $b_x \\to -b_x$ or $b_y \\to -b_y$.\n\n\nFigure~\\ref{fig02d} compares the integrated cross section on reaction \nplane ($b_x,b_y$) for two impact parameters: (a) b=14$\\pm$0.05~fm \n(when nuclei are close to each other) and (b) b=50$\\pm$0.05~fm (when nuclei are well separated). \nThe landscape reflects the position of the nuclei in the moment of\nthe closest approach.\nSimilar plots have also been done for higher $\\sqrt{s_{NN}}$ but \nthe shape is almost unchanged, only the cross section value is\ndifferent. \nThis figure illustrates the influence of the geometry of the reaction. \nRegardless of the impact parameter (b), distance between colliding nuclei, the cross-section has\na maximum at $b_x=0$. 
The change of $b$ is correlated with the shift \nin a peak at $b_x$.\n\n\\begin{figure}[!hbt]\n \\begin{center}\n \\includegraphics[scale=0.60]{rap_e_xy_weight.pdf}\n \\includegraphics[scale=0.60]{pt_e_xy_weight.pdf}\n \\caption{(Color on-line) The differential cross section for various emission points\n of electrons in the $^{208}$Pb+$^{208}$Pb reaction \n at $\\sqrt{{s}_{NN}} =$ 17.3, 50, 200~GeV at impact parameter 14$\\pm$0.05~fm.}\n \\label{fig02c}\n \\end{center}\n\\end{figure}\n\n\nThe calculations confirm that the shape of the electron\/positron \ndistribution, shown in Fig.~\\ref{fig02c} does not depend on \nthe energy of the colliding nuclei. There are visible small differences \nin the magnitude of cross sections \nfor $\\sqrt{{s}_{NN}} =$ 17.3 and 200~GeV\nat least in the selected limited $p_T$=(0, 0.1)~GeV range. \nDependence on the rapidity is even weaker as the differences are visible\nonly for $|y|>$3.\n\nThese cross sections are used as weights in calculation of electromagnetic\neffects between electrons\/positrons and the fast moving nuclei. The\ncorresponding matrix has following dimensions: $b_{x,y}$=(-50~fm, 50~fm) - \n99$\\times$99 points in the reaction plane and 100$\\times$15 in the ($y,p_T$) space.\n\n\\begin{figure}[!hbt]\n\t\\begin{center}\n\t\t\\includegraphics[scale=0.4]{dsig_dxipt.eps}\n\t\t\\caption{\n\t\t\tDistribution of the cross section in $\\log_{10}p_T$\n\t\t\tfor different energies:\n\t\t\t$\\sqrt{s_{NN}}$ = 17.3, 50 and 200~GeV (from bottom to top).\n\t\t}\n\t\t\\label{fig:dsig_dxipt}\n\t\\end{center}\n\\end{figure}\n\nRather small transverse momenta enter such calculation.\nTo illustrate this in Fig.~\\ref{fig:dsig_dxipt} we show a distribution\nin $log_{10}(p_T)$. As seen from the figure the cross section is\nintegrable and we have no problem with this with our Monte Carlo\nroutine \\cite{Lepage:1977sw}.\n\n\n\\section{Electromagnetic interaction effects} \\label{Sect.III}\n\nThe spectator system are modeled as two uniform spheres in \ntheir respective rest frames that change into disks in the \noverall center-of-mass collision frame.\nThe total charge of nuclei is 82 consistent with UPC. \nThe lepton emission region is reduced to a single point and the time of emission is a free parameter. 
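\nA minimal sketch of this field model is given below (in Python): the electric field of a uniformly charged sphere is evaluated in the rest frame of the spectator and then Lorentz-transformed to the collision CM frame, where the longitudinal contraction produces the disk-like field pattern mentioned above. The helper names, the assumed Pb radius of 7.1~fm and the single-point evaluation are our illustrative choices and not the original implementation of Ref.~\\cite{Rybicki:2006qm}.\n\\begin{verbatim}\nimport numpy as np\n\nALPHA, HBARC = 1.0 / 137.036, 0.1973      # GeV*fm\nZ, R_PB = 82, 7.1                         # spectator charge, assumed Pb radius (fm)\nSQRTS, MN = 17.3, 0.938\nGAMMA = SQRTS / (2.0 * MN)\nBETA = np.sqrt(1.0 - 1.0 / GAMMA**2)\nB_IMPACT = 14.0                           # fm\n\ndef sphere_eE_rest(Rvec):\n    """Electric force e*E (GeV/fm) of a uniformly charged sphere at rest,\n    at displacement Rvec (fm) from its centre."""\n    r = np.linalg.norm(Rvec)\n    if r < 1e-6:\n        return np.zeros(3)\n    if r >= R_PB:\n        return Z * ALPHA * HBARC * Rvec / r**3\n    return Z * ALPHA * HBARC * Rvec / R_PB**3     # linear rise inside the sphere\n\ndef spectator_fields(r, t, sgn):\n    """CM-frame fields (eE, eB) of the spectator moving with velocity sgn*BETA along z,\n    at lepton position r (fm) and time t (fm/c)."""\n    dx, dy = r[0] - sgn * 0.5 * B_IMPACT, r[1]\n    dz_rest = GAMMA * (r[2] - sgn * BETA * t)     # longitudinal coordinate in the rest frame\n    E_rest = sphere_eE_rest(np.array([dx, dy, dz_rest]))\n    eE = np.array([GAMMA * E_rest[0], GAMMA * E_rest[1], E_rest[2]])\n    eB = np.cross(np.array([0.0, 0.0, sgn * BETA]), eE)\n    return eE, eB\n\nprint(spectator_fields(np.array([0.0, 0.0, 0.0]), 0.0, +1.0))\n\\end{verbatim}\nThe Lorentz force entering the equation of motion discussed below is then $q(\\vec{E}+\\vec{v}\\times\\vec{B})$ with these fields.\n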
\n\\begin{figure*}[!hbt]\n\\includegraphics[width=5.8cm,height=5.4cm]{ym_pt_000_17GeV_b14fm.png}\n\\includegraphics[width=5.8cm,height=5.4cm]{ym_pt_0150_17GeV_b14fm.png}\n\\includegraphics[width=5.8cm,height=5.4cm]{ym_pt_1500_17GeV_b14fm.png}\\\\\n\\vspace{-5.3cm}\\hspace{0.cm} {\\bf (a) ($b_x$=0, $b_y$=0) \\hspace{2.cm} (b) ($b_x$=0, $b_y$=15~fm) \\hspace{2.5cm} (c) ($b_x$=15~fm, $b_y$=0)} \\\\\n\\vspace{4.55cm}\n\\includegraphics[width=5.8cm,height=5.4cm]{ym_pt_full_17GeV_b14fm.png}\n\\includegraphics[width=5.8cm,height=5.4cm]{ym_pt_0m150_17GeV_b14fm.png}\n\\includegraphics[width=5.8cm,height=5.4cm]{ym_pt_m1500_17GeV_b14fm.png}\\\\\n\\vspace{-5.3cm}\\hspace{0.cm} {\\bf (d) full ($b_x$, $b_y$) space \\hspace{2.cm} (e) ($b_x$=0, $b_y$=-15~fm) \\hspace{2.cm} (f) ($b_x$=-15~fm, $b_y$=0)} \\\\\n\\vspace{5.cm}\n\\caption{(Color on-line) Rapidity vs $p_T$ distributions for final (subjected to EM\n effects) electrons for different emission points: \n(a) (0, 0) and (d) full xy plane; (b)\n(0, 15~fm); (c) (15~fm, 0); (e) (0, -15~fm) and (f) (-15~fm, 0).\nThese results are for $\\sqrt{s_{NN}}$ = 17.3~GeV.}\\label{fig03}\n\\includegraphics[width=5.8cm,height=5.4cm]{yp_pt_000_17GeV_b14fm.png}\n\\includegraphics[width=5.8cm,height=5.4cm]{yp_pt_0150_17GeV_b14fm.png}\n\\includegraphics[width=5.8cm,height=5.4cm]{yp_pt_1500_17GeV_b14fm.png}\\\\\n\\vspace{-5.3cm}\\hspace{0.cm} {\\bf (a) ($b_x$=0, $b_y$=0) \\hspace{3.cm} (b) ($b_x$=0,$b_y$=15~fm) \\hspace{2.5cm} (c) ($b_x$=15~fm, $b_y$=0)} \\\\\n\\vspace{4.55cm}\n\\includegraphics[width=5.8cm,height=5.4cm]{yp_pt_full_17GeV_b14fm.png}\n\\includegraphics[width=5.8cm,height=5.4cm]{yp_pt_0m150_17GeV_b14fm.png}\n\\includegraphics[width=5.8cm,height=5.4cm]{yp_pt_m1500_17GeV_b14fm.png}\\\\\n\\vspace{-5.3cm}\\hspace{0.6cm} {\\bf (d) full ($b_x$, $b_y$) space \\hspace{3.0cm} (e) ($b_x$=0, $b_y$=-15~fm) \\hspace{2.6cm} (f) ($b_x$=-15~fm, $b_y$=0)} \\hspace{0cm}\\\\\n\\vspace{5.cm}\n\\caption{(Color on-line) Rapidity vs $p_T$ distributions for final (subjected to EM \neffects) positrons for different emission points: \n(a) (0, 0) and (d) full xy plane; (b) (0, 15~fm); (c) (15~fm, 0);\n(e) (0, -15~fm) and (f) (-15~fm, 0).\nThese results are for $\\sqrt{s_{NN}}$ = 17.3~GeV.}\\label{fig04}\n\\end{figure*}\n\n\n\\begin{figure*}[!hbt]\n\\includegraphics[width=8.6cm,height=6.cm]{yf_m_p_0_0_0_17GeV.png}\n\\hspace{0.cm}\n\\includegraphics[width=8.6cm,height=6.cm]{yf_m_p_0_0_0_17GeV_b50fm.png}\\\\\n\\vspace{-1.5cm}\\hspace{0.6cm} {\\bf (a) $\\sqrt{s_{NN}}$= 17.3~GeV \\hspace{5.2cm} (b) $\\sqrt{s_{NN}}$= 17.3~GeV\\\\\n\\vspace{-1.5cm}\\hspace{0.cm} b=14~fm ($b_x$=0,$b_y$=0) \\hspace{6.3cm} b=50~fm}\\\\\n\\vspace{1.7cm}\n\\includegraphics[width=8.6cm,height=6.cm]{yf_m_p_0_0_0_50GeV.png}\n\\includegraphics[width=8.6cm,height=6.cm]{yf_m_p_0_0_0_200GeV.png}\\\\\n\\vspace{-1.5cm}\\hspace{0.2cm}{\\bf(c)\\hspace{0.5cm} $\\sqrt{s_{NN}}$= 50~GeV \\hspace{4.2cm} (d)\\hspace{1.cm} $\\sqrt{s_{NN}}$= 200~GeV\\\\\n\\vspace{-1.5cm}\\hspace{0.2cm} b=14~fm \\hspace{7.3cm} b=14~fm}\\\\\n\\vspace{2.3cm}\n\\caption{(Color on-line) Reduced rapidity distributions for final electrons (blue) and\n positrons (red) for fixed $b$ and ($b_x$=0,$b_y$=0) plane of\nemission points at three collision energies: $\\sqrt{s_{NN}}$=17.3, 50 and 200~GeV.\n}\n \\label{fig05a}\n\\end{figure*}\n\n\nIn this work we assume there is no delay time between collisions of\nnuclei and the start of the EM interactions. 
\nThe $z$-dependence of the first occurrence of the $e^+ e^-$ pair is\nbeyond the EPA and is currently not known.\nIn our opinion production of $e^+ e^-$ happens when the moving\ncones, fronts of the EM fields, cross each other. This happens\nfor $z \\approx$ 0. \nIn the following we assume $z$ = 0 for simplicity.\n\\footnote{Any other distribution could be taken.}\n\n\\begin{figure*}[!hbt]\n\\includegraphics[width=8.6cm,height=6.cm]{yf_m_p_full_17GeV.png}\n\\includegraphics[width=8.6cm,height=6.cm]{yf_m_p_full_17GeV_b50fm.png}\\\\\n\\vspace{-5.5cm}\\hspace{0cm}{\\bf (a)\\hspace{1.cm} $\\sqrt{s_{NN}}$= 17.3~GeV \\hspace{4.cm} (b)\\hspace{1.cm} $\\sqrt{s_{NN}}$= 17.3~GeV\\\\\n\\vspace{2.5cm}\\hspace{-8cm} full ($b_x$,$b_y$) space \\hspace{12.5cm} \\\\\n\\vspace{0.5cm}\\hspace{0cm} b=14~fm \\hspace{7.5cm} b=50~fm}\\\\\n\\vspace{0.9cm}\n\\includegraphics[width=8.6cm,height=6.cm]{yf_m_p_full_50GeV.png}\n\\includegraphics[width=8.6cm,height=6.cm]{yf_m_p_full_200GeV.png}\\\\\n\\vspace{-5.5cm}{\\bf (c)\\hspace{0.8cm} $\\sqrt{s_{NN}}$= 50~GeV \\hspace{4.0cm} (d)\\hspace{1.cm} $\\sqrt{s_{NN}}$= 200~GeV\\\\\n\\vspace{3.5cm}\\hspace{0cm} b=14~fm \\hspace{7.5cm} b=14~fm}\\\\\n\\vspace{1.3cm}\n\\caption{(Color on-line) Reduced rapidity distributions for final electrons (blue) \nand positrons (red) for fixed $b$ and full ($b_x$,$b_y$) plane of\nemission points at three collision energies: $\\sqrt{s_{NN}}$=17.3, 50 and 200~GeV.}\n \\label{fig05b}\n\\end{figure*}\n\n\nThe trajectories of $e^{\\pm}$ in the field of moving nuclei are obtained\nby solving the equation of motion numerically for electrons\/positrons:\n\\begin{equation}\n\\frac{d \\vec{p}_{e^{\\pm}}}{d t} =\n\\vec{F}_{1,e^{\\pm}}(\\vec{r}_1,t) + \\vec{F}_{2,e^{\\pm}}(\\vec{r}_2,t) \\; .\n\\label{equation_of_motion}\n\\end{equation}\n\n\nThe total interaction is a superposition of interactions with both\nnuclei which positions depend on time.\nWe solve the motion of electron\/positron in the overall center \nof mass system, i.e. both position and time are given in this frame.\nIn this frame we have to deal with both electric and magnetic force\n\\cite{Rybicki:2006qm}.\nBecause nuclei are very heavy compared to electrons\/positron\ntheir motion is completely independent and is practically not distorted\nby the EM interaction.\nWe take:\n\\begin{eqnarray}\n\\vec{r}_1(t) = + {\\hat z} c t + \\vec{b}\/2 \\; , \\nonumber \\\\\n\\vec{r}_2(t) = - {\\hat z} c t - \\vec{b}\/2 \\; ,\n\\end{eqnarray} \ni.e. assume that the nuclei move along straight trajectories\nindependent of the motion of electron\/positron.\nThe step of integration depends on energy and must be carefully adjusted.\n\nThe rapidity vs $p_T$ distributions of initial leptons are \nobtained by randomly choosing the position on the two-dimensional space. \nThe path of particle in electromagnetic field generated by nuclei\nare traced up to 10 000~fm away from the original interaction point. \nThe Monte Carlo method is used to randomize the \ninitial rapidity and $p_T$ from uniform distribution. \nThe initial rapidity and $p_T$ of electrons\/positrons are randomly\nchosen in the range: $y$=(-5,5) and $p_T$=(0, 0.1)~GeV as fixed\nin the previous section. The ($y,p_T$) distributions for final\n(subjected to the EM evolution) leptons\nare presented in Fig.~\\ref{fig03} for electrons and in Fig.~\\ref{fig04} \nfor positrons. 
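\nA self-contained Python sketch of the evolution of a single lepton is shown below; here the spectators are simplified to point charges (the full calculation uses extended charge distributions), and the fixed Euler step, the shortened tracking range and all numerical values are illustrative choices only.\n\\begin{verbatim}\nimport numpy as np\n\nALPHA, HBARC = 1.0 / 137.036, 0.1973   # GeV*fm\nZ, ME = 82, 0.000511                   # spectator charge, electron mass (GeV)\nSQRTS, MN, B_IMPACT = 17.3, 0.938, 14.0\nGAMMA = SQRTS / (2.0 * MN)\nBETA = np.sqrt(1.0 - 1.0 / GAMMA**2)\n\ndef lorentz_force(q_sign, r, p, t):\n    """dp/dt (GeV per fm/c) for a lepton of charge q_sign*e at position r (fm), time t (fm/c)."""\n    v = p / np.sqrt(ME**2 + p.dot(p))              # relativistic velocity (c = 1)\n    f = np.zeros(3)\n    for sgn in (+1.0, -1.0):                       # the two spectators\n        R = r - np.array([sgn * 0.5 * B_IMPACT, 0.0, sgn * BETA * t])\n        eE = (Z * ALPHA * HBARC * GAMMA * R\n              / (R[0]**2 + R[1]**2 + (GAMMA * R[2])**2) ** 1.5)   # boosted Coulomb field\n        eB = np.cross(np.array([0.0, 0.0, sgn * BETA]), eE)\n        f += q_sign * (eE + np.cross(v, eB))\n    return f\n\ndef propagate(q_sign, y0, pt0, phi0, b_emis, dt=0.02, r_max=1.0e4):\n    """Euler integration of the equation of motion from z = 0, t = 0 out to r_max (fm)."""\n    mt = np.sqrt(ME**2 + pt0**2)\n    p = np.array([pt0 * np.cos(phi0), pt0 * np.sin(phi0), mt * np.sinh(y0)])\n    r, t = np.array([b_emis[0], b_emis[1], 0.0]), 0.0\n    while np.linalg.norm(r) < r_max:\n        p = p + dt * lorentz_force(q_sign, r, p, t)\n        r = r + dt * p / np.sqrt(ME**2 + p.dot(p))\n        t += dt\n    return p\n\n# Example: a positron (+1) emitted at the CM point with y = 2 and pT = 0.01 GeV; the range\n# is shortened with respect to the 10^4 fm of the full calculation to keep the example fast.\npf = propagate(+1.0, 2.0, 0.01, 0.0, (0.0, 0.0), r_max=2000.0)\nef = np.sqrt(ME**2 + pf.dot(pf))\nprint(0.5 * np.log((ef + pf[2]) / (ef - pf[2])), np.hypot(pf[0], pf[1]))\n\\end{verbatim}\n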
These distributions were obtained by analyzing \nthe EM evolution event-by-event.\nThe number of events taken here is $n_{event}$ = 10$^7$ \nfor each two-dimensional plot.\n\nFor electron production an enhancement and for positron a loss with\nrespect to the neighborhood or\/and flat initial distribution\nis observed for $y \\approx \\pm$ 3. This corresponds to the beam rapidity\nat $\\sqrt{s_{NN}}$= 17.3~GeV energy.\n\nThese two sets of two-dimensional plots illustrate \neffect of the EM interaction between $e^+$ or $e^-$ and the moving nuclei. \nThe motion of particles in the EM field of nuclei changes \nthe initial conditions and the final ($y,p_T$) are slightly different.\nFig.~\\ref{fig03} shows the behavior of electrons and \nFig.~\\ref{fig04} of positrons at CM (a) and \ndifferent impact parameter points: (panels c,f) ($\\pm$15~fm,0) and \n(panels b,e) (0,$\\pm$15~fm) marked in Fig.~\\ref{fig01}. \nWe observe that the maximal number of electrons is located \nwhere the cross section for positrons has minimum. The Coulomb effects are well visible as \na missing areas for positrons for $p_T<0.02$~GeV for particles\nemitted from the CM point. The emission of leptons from $b_y$=$\\pm$15~fm gives\nlower effect. The asymmetry in emission is well visible for\n$b_x$=$\\pm$15~fm, \nwhere a larger empty space is for positive rapidity when $b_x$=15~fm and \nfor negative rapidity when $b_x$=-15~fm. \n\nAlthough the EM effects are noticeable for \nelectrons\/positrons in different impact parameter points, \nthe integration over full reaction plane washes out almost totally the effect.\n\nFor comparison the results of integration over full space \n$b_x$=(-50~fm, 50~fm) and $b_y$=(-50~fm, 50~fm) are shown \nin panels (d) of Figs.~\\ref{fig03} and \\ref{fig04}. \nThese results are independent of the source of leptons, thus it could \nbe treated as a general trend and an indication for which \nrapidity-transverse momentum ranges one could observe effects of the EM\ninteraction between leptons and nuclei.\n\\begin{figure*}\n\\includegraphics[width=8.6cm,height=6.cm]{pTf_m_p_full_17GeVa.png}\\hspace{-0.cm}\n\\includegraphics[width=8.6cm,height=6.cm]{pTf_m_p_full_17GeV_b50fma.png}\\\\\n\\vspace{-5.0cm}\\hspace{-2cm}{\\bf (a) $\\sqrt{s_{NN}}$= 17.3~GeV \\hspace{4.9cm} (b) $\\sqrt{s_{NN}}$= 17.3GeV\\\\\n\\vspace{1.5cm}\\hspace{-1.0cm} b=14~fm \\hspace{6.5cm} b=50~fm}\\\\\n\\vspace{2.4cm}\n\\includegraphics[width=8.6cm,height=6.cm]{pTf_m_p_full_50GeVa.png}\\hspace{-0.cm}\n\\includegraphics[width=8.6cm,height=6.cm]{pTf_m_p_full_200GeVa.png}\\\\\n\\vspace{-5.4cm}\\hspace{-2cm}{\\bf(c) $\\sqrt{s_{NN}}$= 50~GeV \\hspace{4.9cm} (d) $\\sqrt{s_{NN}}$= 200~GeV\\\\\n\\vspace{1.5cm}\\hspace{-1.0cm} b=14~fm \\hspace{6.5cm} b=14~fm}\\\\\n\\vspace{3.5cm}\n\\caption{(Color on-line) Transverse momentum distributions for final electrons (blue) \nand positrons (red) for fixed $b$ and full ($b_x$, $b_y$) plane of\nemission points at three collision energies: $\\sqrt{s_{NN}}$=17.3, 50 and 200~GeV.}\n \\label{fig05c}\n\\end{figure*}\n\nThe distribution in reduced rapidity of final leptons are shown \nin Fig.~\\ref{fig05a} and \\ref{fig05b}. 
The reduced rapidity\n(dimensionless quantity) is the rapidity $y$ normalized to the beam rapidity $y_{beam}$\n(different for various collision energies)\n\\begin{equation}\ny_{red} = y \/ y_{beam} \\; ,\n\\end{equation}\nwhere\n\\begin{equation}\ny_{beam} = \\pm ln \\left( \\frac{\\sqrt{s_{NN}}}{m_p} \\right)\n\\end{equation}\nand $m_p$ is the proton mass.\nThe results shown above were obtained, somewhat arbitrarily, with\nuniform distribution in $(y,p_T)$.\nThis leads to the observation of peaks or dips at beam rapidities.\nNo such peaks appear for $\\sqrt{s}$ = 200~GeV as here the chosen range\nof rapidity (-5,5) is not sufficient.\nWhether such effects survive when weighting with the b-space EPA\ncross section will be discussed below.\n\nFig.~\\ref{fig05a} is focused on emission from the center of mass point \nand Fig.~\\ref{fig05b} is obtained when integrating over full ($b_x,b_y$) plane. \nThe main differences between electron (blue lines) and \npositron (red lines) distributions are not only at midrapidities \nbut also around the beam rapidity. The effect is more visible \nfor the CM emission point but it is slightly smoothed out when \nthe full $(b_x, b_y)$ plane is taken into consideration. \nMoreover increasing the impact parameter (panels (a) and (b)) \ndiminishes the difference between rapidity distributions of final \nelectrons and positrons. The beam energy is another crucial parameter. \nThe collision with $\\sqrt{s_{NN}} > 100$~GeV (panel (d)) does not \nallow for sizeable effects of electromagnetic interaction between \nleptons and nuclei, at least at midrapidities.\n\nThe discussion of the EM interaction between $e^+e^-$ and nuclei has \nto be completed by combining with the cross section of lepton production\nas obtained within EPA.\nTaking into account the leptons coming from photon-photon fusion \nthe distributions from Fig.~\\ref{fig03} and \\ref{fig04} are multiplied \nby differential cross section obtained with Eq.(~\\ref{EPA}).\n \nThe details of the method are presented in Ref.~\\cite{Rybicki:2006qm} \nand adapted here from pion emission to electron\/positron emission.\n\\begin{figure*}[!hbt]\n \\begin{center}\n \\includegraphics[scale=0.60]{y_p_m_000_EM_pure_weighta.pdf}\n \\includegraphics[scale=0.60]{y_p_m_full_EM_pure_weighta.pdf}\n \\caption{(Color on-line)\n The electron and positron emission cross section normalized to 100\\% \n in the $^{208}$Pb+$^{208}$Pb reaction at 158~GeV\/nucleon energy \n($\\sqrt{{s}_{NN}} =$ 17.3~GeV) at impact parameter 14$\\pm$0.05~fm\nassuming the $p_T$=(0, 0.1)~GeV produced in the center of mass (0, 0)\npoint (a) and when integrating over full reaction space (b).\nShown are original EPA distributions (dotted line) and results when\nincluding evolution in the EM field of nuclei for positrons (solid line)\nand for electrons (dashed line).\n}\n \\label{fig07}\n \\end{center}\n\\end{figure*}\nIn Fig.~\\ref{fig05c} we show the influence of EM interaction on $p_T$ distributions.\nHere we integrate over rapidity and ($b_x,b_y$). One can observe that the EM effects lead to a diffusion of transverse momenta \n(see the diffused edge at $p_T=0.1$~GeV, marked by green vertical line). No spectacular effect is observed when changing the \nimpact parameter or beam energy.\n\nFig.~\\ref{fig07} shows a comparison of rapidity distribution of \nfinal electrons and positrons, assuming the particles are emitted \nfrom (a) the center of mass (0, 0) point and (b) when integrating over \n$b_x,b_y$. 
\nThe comparison is done between EPA distribution relevant for initial stage (red, dotted line) with the final stage, resulting from the EM interaction of charged leptons with positively charged nuclei. \nIf leptons are produced in the CM point, the electron\ndistributions are almost unchanged but positron distributions \nare squeezed to $|y| < 2$. \nIf the cross section is integrated over full ($b_x,b_y$) parameter space, \nthe positron distribution is still steeper than that for electrons \nbut mainly for $|y| <$ 2. \n\n\n\nEven when the leptons produced in the full ($b_x,b_y$) plane are considered, the\n$e^+$ and $e^-$ distributions are different from the initial ones. The electrons under the EM interactions are focused at midrapidities.\n\n\n\\begin{figure*}[!hbt]\n\\includegraphics[width=7.8cm,height=6.cm]{bx_ym_17GeV_14fm.png}\n\\includegraphics[width=7.8cm,height=6.cm]{bx_yp_17GeV_14fm.png}\\\\\n\\vspace{-6.0cm}{\\bf\\hspace{-5cm} (a) \\hspace{7.0cm} (b)}\\\\\n\\vspace{5.5cm}\n\\includegraphics[width=7.8cm,height=6.cm]{by_ym_17GeV_14fm.png}\n\\includegraphics[width=7.8cm,height=6.cm]{by_yp_17GeV_14fm.png}\\\\\n\\vspace{-6.0cm}{\\bf\\hspace{-5cm} (c) \\hspace{7.0cm} (d)}\\\\\n\\vspace{5.5cm}\n\\caption{ Distribution of electrons ((a), (c)) and positrons ((b), (d)) for $\\sqrt{s_{NN}}$= 17.3\n ~GeV at b=14~fm\n integrated over ($b_x,b_y$)=(-50~fm, 50~fm),\n $p_T^{ini}$=(0, 0.1~GeV)}\n \\label{fig08}\n\\end{figure*}\n\\begin{figure*}\n\\includegraphics[width=7.8cm,height=6.cm]{bx_ym_17GeV_50fm.png}\n\\includegraphics[width=7.8cm,height=6.cm]{bx_yp_17GeV_50fm.png}\\\\\n\\vspace{-6.0cm}{\\bf\\hspace{-5cm} (a) \\hspace{7.0cm} (b)}\\\\\n\\vspace{5.5cm}\n\\includegraphics[width=7.8cm,height=6.cm]{by_ym_17GeV_50fm.png}\n\\includegraphics[width=7.8cm,height=6.cm]{by_yp_17GeV_50fm.png}\\\\\n\\vspace{-6.0cm}{\\bf\\hspace{-5cm} (c) \\hspace{7.0cm} (d)}\\\\\n\\vspace{5.5cm}\n\\caption{ Distribution of electrons ((a), (c)) and positrons ((b), (d)) for $\\sqrt{s_{NN}}$= 17.3\n ~GeV at b=50~fm \n integrated over ($b_x,b_y$)=(-100~fm, 100~fm), \n $p_T^{ini}$=(0, 0.1~GeV).}\n \\label{fig09}\n\\end{figure*}\n\\begin{figure*}[!hbt]\n\\includegraphics[width=7.8cm,height=6.cm]{bx_ym_200GeV_14fm.png}\n\\includegraphics[width=7.8cm,height=6.cm]{bx_yp_200GeV_14fm.png}\\\\\n\\vspace{-6.0cm}{\\bf\\hspace{-5cm} (a) \\hspace{7.0cm} (b)}\\\\\n\\vspace{5.5cm}\n\\includegraphics[width=7.8cm,height=6.cm]{by_ym_200GeV_14fm.png}\n\\includegraphics[width=7.8cm,height=6.cm]{by_yp_200GeV_14fm.png}\\\\\n\\vspace{-6.0cm}{\\bf\\hspace{-5cm} (c) \\hspace{7.0cm} (d)}\\\\\n\\vspace{5.5cm}\n\\caption{ Distribution of electrons ((a), (c)) and positrons ((b), (d)) for $\\sqrt{s_{NN}}$= 200\n ~GeV at b=14~fm \nintegrated over ($b_x,b_y$)=(-50~fm, 50~fm) for $p_T^{ini}$=(0, 0.1~GeV)}\n \\label{fig11}\n\\end{figure*}\n\nThe dependence on the position of emission and final rapidity allows to \nunderstand how the geometry influences the electromagnetic interaction between \nleptons and nuclei. Figures \\ref{fig08},\\ref{fig09} \nand \\ref{fig11} present the cross section distribution in \n$b_x$ (top rows) and rapidity for electrons (a) and positrons (b) and (bottom rows)\n$b_y$ and rapidity for electrons (c) and positrons (d).\nFigs.~\\ref{fig08} and \\ref{fig09} are for $\\sqrt{s_{NN}}$= 17.3~GeV but with \nthe impact parameter 14$\\pm$0.05 and 50$\\pm$0.05~fm.\nFig.~\\ref{fig11} is for $\\sqrt{s_{NN}}$= 200~GeV and\nb=14~fm. \nThese plots allow to investigate the anisotropy caused by the\ninteraction between leptons and nuclei. 
It is more visible for larger \nimpact parameter when the spectators are well separated (Fig.~\\ref{fig09}). \n\\begin{figure*}\n\\includegraphics[width=7.8cm,height=6.cm]{y_m_full_x_weight_17_50_200.pdf}\n\\includegraphics[width=7.8cm,height=6.cm]{y_p_full_x_weight_17_50_200.pdf}\n\\caption{Rapidity distribution of electrons for $\\sqrt{s_{NN}}$= 17.3~GeV (b=14~fm,\n 50~fm) and 50~GeV and 200~GeV with b=14$\\pm$0.05~fm (only) integrated over \n($b_x,b_y$)=(-50~fm, 50~fm), $p_T^{ini}$=(0, 0.1~GeV).}\n \\label{fig13}\n\\end{figure*}\n\n\\begin{figure}\n\\includegraphics[width=7.8cm,height=6.cm]{y_pm_full_x_weight_17_50_200.pdf}\n\\caption{The ratio of rapidity distributions of positrons and electrons for \n $\\sqrt{s_{NN}}$= 17.3~GeV and fixed b=14~fm and 50~fm \n and $\\sqrt{s_{NN}}$ = 50~GeV and 200~GeV with fixed b=14~fm \n integrated over ($b_x,b_y$)=(-50~fm, 50~fm) and transverse momenta in the interval \n $p_T^{ini}$=(0, 0.1~GeV).}\n \\label{fig14}\n\\end{figure}\n\nDistributions for fixed impact parameter and different beam energies \nreflect the behavior seen in Fig.~\\ref{fig05b}. For collision with \n$\\sqrt{s_{NN}}$= 200~GeV (Fig.~\\ref{fig11})\nthe electrons and positrons almost do not feel the presence of the EM\nfields of the nuclei. \n\nIntegrating over full ($b_x,b_y$) plane one obtains \nthe rapidity distribution of final leptons\nshown in Fig.~\\ref{fig13} (a) separately for electrons and (b) positrons, \nfor two impact parameters: 14~fm (full lines) and 50~fm (dashed lines). \nElectrons have a somewhat wider distribution than positrons and \nthis is independent of the impact parameter.\nWhile positron rapidity distributions only weakly depend on collision energy it is not the case for electrons, where sizeable differences can be observed.\n\nThe ratio of distributions for positrons and electrons (Fig.~\\ref{fig14}) reflects the behavior seen in Figs.~\\ref{fig08} and \\ref{fig09}. \nThis plot shows a combined effect of EPA cross section and \nthe EM interactions of leptons with nuclei. Thus despite the production \ncross section is lower for larger impact parameter, the discussed\nphenomenon should be visible for larger rapidities. \nThe ratio quickly changes with energy and tends to 1 for larger energies (see dotted line for $\\sqrt{{s}_{NN}}$ = 200~GeV).\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Conclusions}\n\nIn the last 15 years the electromagnetic effects due to a large (moving)\ncharge of the spectators on charged pion momentum distributions were \nobserved both in theoretical calculations and experimentally \nin peripheral heavy ion collisions at SPS and RHIC energies.\nInteresting and sometimes spectacular effects were identified.\n\nIn the present paper we have discussed whether such effects could also\nbe observed for the distributions of electrons\/positrons produced\nvia photon-photon fusion in heavy ion UPC.\nThe corresponding cross section can be rather reliably calculated\nand turned out to be large, especially for low transverse\nmomentum electrons\/positrons. 
The impact parameter equivalent photon\napproximation is well suited for investigating the electromagnetic\neffects.\nOn the experimental side, only rather large transverse momentum\nelectrons\/positrons could be measured so far at RHIC and the LHC, \ntypically larger than 0.5~GeV.\n\nWe have organized calculations that include the EM effects using EPA distributions as input.\nFirst, multidifferential (in momenta and impact parameter) distributions \nfor the diphoton production of the $e^+ e^-$ pairs are prepared in \nthe impact parameter equivalent photon approximation. \nSuch distributions are next used to calculate the propagation of \nthe electrons\/positrons in the strong EM fields generated by \nthe quickly moving nuclei.\nThe propagation has been done by numerically solving the relativistic \nequation of motion. Strong EM effects have been observed only at \nvery small transverse momenta of the electron\/positron. Therefore, \nto accelerate the calculations, we have limited ourselves to small initial \ntransverse momenta $p_T <$ 0.1~GeV.\n\n\nThe shape of the differential cross section in rapidity and transverse\nmomentum does not depend on the energy of the process but rather \non the emission point in the impact parameter plane ($b_x,b_y$). \nWe have investigated the effects for different initial conditions, \ni.e. different emission positions in the impact parameter space.\n\nThe leptons interact electromagnetically with the charged nuclei, which \nchanges their trajectories. The biggest effect has been identified for \nthe CM emission point. \nHowever, the integration over the full ($b_x,b_y$) plane washes out this effect to a large extent. \nThe range of $p_T$=(0, 0.02~GeV) has turned out to be the most suitable \nfor investigating the influence of the EM effects on leptons originating from various \npoints of the ($b_x,b_y$) plane.\n\nMoreover, the impact parameter influences not only the value of \nthe cross section but also the shapes of the distributions of the final leptons. \n\nThe $AA \\rightarrow AAe^+e^-$ process creates leptons in a broad\nrange of rapidities. We have found that only at small transverse momenta of the\nelectrons\/positrons can one observe sizeable EM effects.\n \nThe performed calculations allow us to conclude that the maximal \nbeam energy for Pb+Pb collisions at which the EM effects between the leptons\nand the nuclei are evident at midrapidities is probably \n$\\sqrt{{s}_{NN}}$=100~GeV.\nObservation of the effect at higher energies may therefore be \nrather difficult, if not impossible.\nThe effect survives even at high energies, but close to beam rapidity. \nHowever, this region of the phase space is usually not instrumented\nand does not allow electron\/positron measurement.\n\nSo far the effect of the EM interaction was studied for fixed values of \nthe impact parameter (mainly for $b$ = 14~fm). However, the impact parameter\ncannot be measured.\nThe integration over the impact parameter is rather difficult and\ngoes beyond the scope of the present paper.\nSuch an integration will be studied elsewhere.\n\nOn the experimental side, a good measurement of electrons and positrons\nat low transverse momenta ($p_T <$ 0.1~GeV) is necessary to see \nthe effect.\nIn principle, the measurement of the $e^+ \/ e^-$ ratio as a function\nof lepton rapidity and transverse momentum would be useful in \nthis context.\nTo our knowledge, there are definite plans only for\nhigh energies (the ALICE-3 project) where, however, the EM effect should\nbe very small (high collision energy). 
At CERN SPS energies the effect\nis rather large but at very small transverse momenta.\nRHIC would probably be a good place to observe\nthe effect of the discussed here EM interactions but this would\nrequire a modification of the present apparatus.\n\n{\\bf Acknowledgement}\n\nA.S. is indebted to Andrzej Rybicki for past collaboration on\nelectromagnetic effects in heavy ion collisions.\nThis work is partially supported by\nthe Polish National Science Centre under Grant\nNo. 2018\/31\/B\/ST2\/03537\nand by the Center for Innovation and Transfer of Natural Sciences \nand Engineering Knowledge in Rzesz\\'ow (Poland).\n\n\n\n\\bibliographystyle{spphys} \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Appendix}\n\\label{sec:appendix}\n\\begin{proof}[Proof of Lemma \\ref{lem:wtforg}]\nTo construct the weight function $w$, we associate a sequence $P(u,v)$ of edges of $G'$ with each edge $(u,v)$ of $G$. Assume that $T'$ is a rooted tree, and root is the highest node in the tree. The heights of all the other nodes are one less than that of their parent. As we mentioned that $T'$ is a tree decomposition of $G$. For an edge $(u,v)$, we know that there are unique highest bags $B_1$ and $B_2$ that contain vertices $u$ and $v$, respectively.\n\\begin{itemize}\n\\item If $B_1 = B_2$ then $P(u,v) = (u_{B_1} , v_{B_2})$.\n\n\\item If $B_1$ is an ancestor of $B_2$ then \\\\$P(u,v) = (u_{B_1}, u_{parent(\\cdots (parent(B_2)))}), \\ldots ,(u_{parent(B_2)}, u_{B_2}),(u_{B_2},v_{B_2})$.\n\n\\item If $B_1$ is a descendant of $B_2$ then \\\\ $P(u,v) = (u_{B_1}, v_{B_1}), (v_{B_1}, v_{parent(B_1)}), \\ldots , (v_{parent(\\cdots (parent(B_1)))},v_{B_2})$.\n\\end{itemize}\n\nThe weight function $w$ for the graph $G$ is defined as follows:\n\\begin{eqnarray*}\nw(u,v) = \\sum_{e\\in P(u,v)} w'(e)\n\\end{eqnarray*}\n\n\nFor a simple cycle $C = e_1,e_2, \\ldots ,e_j$ in $G$, we define $P(C) = P(e_1),P(e_2), \\ldots ,P(e_j)$. Note that $P(C)$ is a closed walk in $G'$. Let $E'_d(C)$ be the subset of edges of $G'$ such for all edges $e \\in E'_d(C)$ both $e$ and $e^{r}$ appear in $P(C)$, where $e^r$ denotes the edge obtained by reversing the direction of $e$. We prove that if we remove the edges of $E'_d(C)$ from $P(C)$ then the remaining edges $P(C) - E'_d(C)$ form a simple cycle in $G'$. \n\n\\begin{claim}\n\\label{clm:simp}\nEdges in the set $P(C)- E'_d(C)$ form a simple cycle in $G'$. \n\\end{claim}\n\n\\begin{proof}\nNote that the lemma follows trivially if $P(C)$ is a simple cycle. Therefore, assume that $P(C)$ is not a simple cycle. We start traversing the walk $P(C)$ starting from the edges of the sequence $P(e_1)$. Let $P(e_k)$ be the first place where a vertex in the walk $ P(e_1) P(e_2).....P(e_k)$ repeats, i.e., edges in the sequence $P(e_1) P(e_2).....P(e_{k-1})$ form a simple path, but after adding the edges of $P(e_k)$ some vertices are visited twice in the walk $P(e_1) P(e_2).....P(e_{k-1})P(e_k)$ for some $k \\leq j$. This implies that some vertices are visited twice in the sequence $P(e_{k-1})P(e_k)$. Let $e_{k-1}=(u,v)$ and $e_k=(v,x)$. This implies that some copies of the vertex $v$ appear twice in the sequence $P(u,v)P(v,x)$. Let $B_1$ and $B_2$ be the highest bags such that $B_1$ contains the copies of vertices $u$ and $v$, and $B_2$ contains the copies of $v$ and $x$. Let bag $B$ be the lowest common ancestor of $B_1$ and $B_2$. We know that $B$ must contain a copy of the vertex $v$, i.e., $v_B$. Let $B'$ be the highest bag containing a copy of vertex $v$, i.e., $v_{B'}$. 
First, consider the case when neither $B_1$ is an ancestor of $B_2$ nor vice versa; the other cases can be handled similarly. In that case the sequence $P(u,v) = u_{B_1}v_{B_1} v_{parent(B_1)} \\ldots v_{B} \\ldots v_{B'}$ and $P(v,x) = v_{B'} \\ldots v_{B} \\ldots v_{parent(B_2)} v_{B_2}x_{B_2}$. Note that in $P(u,v)$ a path goes from $v_B$ to $v_{B'}$ and the same path appears in reverse order from $v_{B'}$ to $v_{B}$ in the sequence $P(v,x)$. Therefore, if we remove these two paths from $P(u,v)$ and $P(v,x)$, the remaining subsequence of the sequence $P(u,v)P(v,x)$ will be a simple path, i.e., no vertex will appear twice, since $B$ is the lowest common ancestor of $B_1$ and $B_2$. Now repeat this procedure for $P(e_{k+1}), P(e_{k+2}), \\ldots$ up to $P(e_{j})$. In the end, we will obtain a simple cycle.\n\\end{proof}\n\n\nSince we assumed that the weight function $w'$ is skew-symmetric, we know that $w'(e) = -w'(e^{r})$ for all $e \\in G'$. This implies that $w'(E'_d(C)) = 0$. Therefore $w(C) = w'(P(C)) = w'(P(C) - E'_d(C))$. From Claim \\ref{clm:simp} we know that the edges in the set $P(C) - E'_d(C)$ form a simple cycle, and we assumed that $w'$ gives nonzero circulation to every simple cycle; therefore, $w'(P(C) - E'_d(C)) \\neq 0$. This implies that $w(C) \\neq 0$. This finishes the proof of Lemma \\ref{lem:wtforg}.\n\\end{proof}\n\n\\begin{proof}[Proof of Claim \\ref{clm:connect}]\nNote that if we treat each vertex in the bags of $T'$ distinctly, then there is a one-to-one correspondence between vertices of $G'$ and vertices in the bags of $T'$. Therefore, in $T'$ a vertex of $G'$ is identified with its corresponding vertex. Note that all the bags which contain vertices of the cycle $C$ form a connected component in $T'$. We will now prove that if a bag $B$ contains some vertices of $C$, then either $B$ has some edges of $C$ associated with it or no bag in the subtree rooted at $B$ has any edge of $C$ associated with it. From this, we can conclude that the bags which have some edges of $C$ associated with them form a connected component in $T'$.\n\nAssume that $B$ is a bag which contains a vertex of $C$ but no edge of $C$ is associated with it. This implies that $C$ never enters any of the children of $B$: if it entered some child $B'$ of $B$ through some vertex $v_{B'}$ of $B'$, then there would be an edge $(v_{B},v_{B'})$ of $C$ associated with the bag $B$, which is a contradiction. Therefore the subtree rooted at $B$ does not have any edge of the cycle $C$ associated with it. This finishes the proof.\n\\end{proof}\n\\section{Weight function}\n\\label{sec:wtfn}\nIn order to construct the desired weight function for a given graph $G_0 \\in \\langle \\mathcal{G}_{P, W}\\rangle_3$, we modify the component tree $T_0$ of $G_0$ such that it has the following properties.\n\\begin{itemize}\n\\item No two separating sets share a common vertex.\n\\item A separating set is shared by at most two components.\n\\item Any virtual triangle, i.e., a triangle consisting of virtual edges, in a planar component is always a face.\n\\end{itemize}\n\nLet $T$ be this modified component tree, and let $G$ be the graph represented by $T$. We show that if we have a weight function that gives nonzero circulation to every cycle in $G$, then we can obtain a weight function that gives nonzero circulation to all the cycles in $G_0$.\nArora et al. \\cite{AGGT16} showed how a component tree satisfying these properties can be obtained for $K_{3,3}$-free and $K_5$-free graphs. 
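\n\nBefore describing the modification, we remark that the pull-back of weights used in the proof of Lemma \\ref{lem:wtforg} is entirely constructive. The following schematic Python rendering (the dictionary-based representation of the rooted tree $T'$ and of the weights $w'$ is our own choice, made purely for illustration) accumulates $w(u,v)=\\sum_{e\\in P(u,v)} w'(e)$ along the path $P(u,v)$ between the highest bags of $u$ and $v$.\n\\begin{verbatim}\n# Schematic sketch of the pull-back w(u,v) = sum of w'(e) over P(u,v).\n# highest_bag[x] : the unique highest bag containing vertex x\n# parent, depth  : the rooted tree T'\n# w_prime[(a,b)] : skew-symmetric weight of the directed edge a->b in G',\n#                  where the copy of vertex x in bag B is written (x, B).\n# For an edge (u,v) the two highest bags lie on a common root-to-leaf\n# path, as in the construction of Lemma wtforg.\ndef pull_back_weight(u, v, highest_bag, parent, depth, w_prime):\n    B1, B2 = highest_bag[u], highest_bag[v]\n    total = 0\n    if depth[B1] <= depth[B2]:\n        # B1 is an ancestor of B2 (or B1 == B2): copies of u descend\n        # from B1 to B2 and the last edge crosses over to v inside B2\n        B = B2\n        while B != B1:\n            total += w_prime[((u, parent[B]), (u, B))]\n            B = parent[B]\n        total += w_prime[((u, B2), (v, B2))]\n    else:\n        # B1 is a descendant of B2: cross over to v inside B1, then\n        # copies of v ascend from B1 to B2\n        total += w_prime[((u, B1), (v, B1))]\n        B = B1\n        while B != B2:\n            total += w_prime[((v, B), (v, parent[B]))]\n            B = parent[B]\n    return total\n\\end{verbatim}\n\n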
We give a similar construction below and show that we can modify the components of $T_0$ such that $T$ satisfies the above properties (see Section \\ref{sec:modt}). Note that if the graphs inside two nodes of $T_0$ share a separating set $\\tau$ and they both are constant tree-width graphs, then we can take the clique-sum of these two graphs on the vertices of $\\tau$, and the resulting graph will also be a constant tree-width graph. Therefore, we can assume that if two components share a separating set, then either both of them are planar, or one of them is planar and the other is of constant tree-width.\n\n\\subsection{Modifying the Component Tree}\n\\label{sec:modt}\nIn this section, we show that how we obtain the component tree $T$ from $T_0$ so that it satisfies the above three properties.\n\\subparagraph{(i) No two separating sets share a common vertex:}\nFor a node $D$ in $T_0$, let $G_D$ be the graph inside node $D$. Assume that $G_D$ contains a vertex $v$ which is shared by separating sets $\\tau_1,\\tau_2, \\ldots ,\\tau_k$, where $k>1$, present in $G_D$. We replace the vertex $v$ with a gadget $\\gamma$ defined as follows: $\\gamma $ is a star graph such that $v$ is the center node and $v_1,,v_2, \\ldots ,v_k$ are the leaf nodes of $\\gamma$. The edges which were incident on $v$ and had their other endpoints in $\\tau_i$, will now incident on $v_i$ for all $i \\in[k]$. All the other edges which were incident on $v$ will continue to be incident on $v$. We do this for each vertex which is shared by more than one separating set in $G_{D}$. Let $G_{D'}$ be the graph obtained after replacing each such vertex with gadget $\\gamma$. It is easy to see that if $G_D$ was a planar component, then $G_{D'}$ will also be a planar component. We show that the same holds for constant tree-width components as well.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=1]{figures\/singlecmg}\n\\caption{(Left)A separating set $\\{x_1,x_2\\}$ is shared by components $D_1,D_2$ and $D_3$. (Right) Replace them by adding the gadget $\\beta$ and connect $D_1,D_2$ and $D_3$ to $\\beta$.}\n\\label{fig:sepset}\n\\end{center}\n\\end{figure}\n\n\\begin{claim}\nIf $G_D$ is a constant treewidth graph, then $G_{D'}$ will also be of constant treewidth.\n\\end{claim}\n\\begin{proof}\nLet $T_D$ be a tree decomposition of $G_D$ such that each bag of $T_D$ is of constant size, i.e., contains some constant number of vertices. Let $v$ be a vertex shared by $k$ separating sets $\\{x_i,y_i,v\\}$, for all $i \\in [k]$ in $G_D$. Let $B_1,B_2, \\ldots B_k$ be the bags in $T_D$ that contain separating sets $\\{x_1,y_1,v\\}, \\{x_2,y_2,v\\} , \\ldots ,\\{x_k,y_k,v\\}$ respectively (note that one bag may contain many separating sets). Now we obtain a tree decomposition $T_{D'}$ of the graph $G_{D'}$ using $T_D$ as follows: add the vertices $v_i$ in the bag $B_i$, for all $i \\in [k]$. Repeat this for each vertex $v$ in $G_D$, which is shared by more than one separating set to obtain $T_{D'}$. Note that in each bag of $T_D$ we add at most one new vertex with respect to each separating set contained in the bag in order to obtain $T_{D'}$. Since each bag in $T_D$ can contain vertices of only constant many separating sets, size of each bag remain constant in $T_{D'}$. 
Also, $T_{D'}$ is a tree decomposition of $G_{D'}$.\n\\end{proof}\n\n\n\\subparagraph{(ii) A separating set is shared by at most two components:} Assume that a separating set of size $t$, $\\tau =\\{x_i\\}_{i \\leq t}$ is shared by $k$ components $D_1,D_2, \\ldots D_k$, for $k>2$, in $T_0$. Let $\\beta$ be a gadget defined as follows: the gadget consists of $t$ star graphs $\\{\\gamma_i\\}_{i \\leq t}$ such that $x_i$ is the center node of $\\gamma_i$ and each $\\gamma_i$ has $k$ leaf nodes $\\{x_i^1,x_i^2, \\ldots x_i^k\\}$. There are virtual cliques present among the vertices $\\{x_i^j\\}_{i \\leq t}$ for all $j \\in [k]$ and among $\\{x_i\\}_{i \\leq t}$ (see Figure \\ref{fig:sepset}). If there is an edge present between any pair of vertices in the set $\\{x_i\\}_{i \\leq t}$ in the original graph, then we add a real edge between respective vertices in $\\beta$. $\\beta$ shares the separating set $\\{x_i^j\\}_{i \\leq t}$ with the component $D_j$ for all $j \\in [k]$.\n\nNote that in this construction, we create new components ($\\beta$) while all the other components in the component tree remain unchanged. Notice that the tree-width of $\\beta$ is constant (at most $5$ to be precise). We can define a tree decomposition of $\\beta$ of tree-width $5$ as follows: $B_0,B_1',B_2', \\ldots ,B_k'$ be the bags in the tree decomposition such that $B_0 = \\{x_1,x_2,x_3\\}$, $B_i'= \\{x_1,x_2,x_3,x^i_1,x^i_2,x^i_3\\}$ and there is an edge from $B_0$ to $B_i'$ for all $i \\in [k]$.\n\n\\subparagraph{(iii) Any virtual triangle, i.e., the triangle consists of virtual edges, in a planar component is always a face:} $3$-cliques in a $3$-clique sum of a planar and a bounded tree-width component is always a face in the planar component. This is because suppose there is a planar component $G_i$ in which the $3$-clique on $u,v,w$ occurs but does not form a face. Then the triangle $u,v,w$ is a separating set in $G_i$, which separates the vertices in its interior $V_1$ from the vertices in its exterior $V_2$. Notice that neither of $V_1,V_2$ is empty by assumption since $u,v,w$ is not a face. However, then we can decompose $G_i$ further.\n\n\\subsection{Preserving nonzero circulation:}\n\\label{subsec:presnzc}\nWe can show that if we replace a vertex with the gadget $\\gamma$, then the nonzero-circulation in the graph remains preserved: let $G_1(V_1,E_1)$ be a graph such that a vertex $v$ in $G_1$ is replaced with the gadget $\\gamma$ (star graph). Let this new graph be $G_2(V_2,E_2)$. We show that if we have a skew-symmetric weight function $w_2$ that gives nonzero circulation to every cycle in $G_2(V_2,\\dvec{E}_2)$, then we can obtain a skew-symmetric weight function $w_1$ that gives nonzero circulation to every cycle in $G_1(V_1,\\dvec{E}_2)$ as follow. Let $u_1,u_2, \\ldots, u_k$ be the neighbors of $v$ in $G_1$. For the sake of simplicity, assume that $v$ is replaced with $\\gamma$ such that $\\gamma$ has only two leaves $v_1$ and $v_2$ and $v$ is the center of $\\gamma$. Now assume that $u_1,u_2, \\ldots ,u_j$ become neighbors of $v_1$ and, $u_{j+1},u_{j+2}, \\ldots , u_k$ become neighbors of $v_2$ in $G_2$, for some $j max(2^{m+2},7)$, where $m$ is the maximum number of edges associated with any constant size component.\n\n\\subparagraph{ If $G'_{B_i}$ is a planar component:}\n$w_1$ for such components is same as the weight function defined in \\cite{BTV09} for planar graphs. 
We know that given a planar graph $G$, its planar embedding can be computed in logspace \\cite{AllenderMahajan04}.\n\n\\begin{theorem}[\\cite{BTV09}]\nGiven a planar embedding of a graph $H$, there exists a logspace computable function $w$ such that for every cycle $C$ of $H$, circulation of the cycle $w(C) \\neq 0$.\n\\end{theorem}\nThe above weight function gives nonzero circulation to every cycle that is completely contained in a planar component.\n\nThe weight function $w_2$ for planar components is defined as follows. $w_2$ assigns weights to only those faces of the component, which are adjacent to some separating set. For a subtree of $T_s$ of $A(T')$, let $l(T_s)$ and $r(T_s)$ denote the number of leaf nodes in $T_s$ and root node of $T_s$, respectively. For a bag $B_i$, $h(B_i)$ denotes the height of the bag in $A(T')$. If $B_i$ is the only bag in the subtree rooted at $B_i$, then each face in $G'_{B_i}$ is assigned weight zero. Otherwise, let $\\tau$ be a separating set where some subtree $T_i$ is attached to $B_i$. The faces adjacent to $\\tau$ in $G'_{B_i}$ are assigned weight $2\\times K^{h(r(T_i))} \\times l(T_i)$. If a face is adjacent to more than one separating set, then the weight assigned to the face is the sum of the weights due to each separating set. The weight of a face is defined as the sum of the weights of the edges of the face in clockwise order. If we have a skew-symmetric weight function, then the weight of the clockwise cycle will be the sum of the weights of the faces inside the cycle \\cite{BTV09}. Therefore assigning positive weights to every face inside a cycle will ensure that the circulation of the cycle is nonzero. Given weights on the faces of a graph, we can obtain weights for the edges so that the sum of the weights of the edges of a face remains the same as the weight of the face assigned earlier \\cite{Kor09}.\n\n\n\\subparagraph{If $G'_{B_i}$ is a constant size component:} For this type of component, we need only one weight function. Thus we set $w_2$ to be zero for all the edges in $G'_{B_i}$ and $w_1$ is defined as follows. Let $e_1,e_2, \\ldots ,e_k$ be the edges in the component $Q_i$, for some $k \\leq m$. Edge $e_j$ is assigned weight $2^i \\times K^{h(r(T_i))-1} \\times l(T_{i})$ (for some arbitrarily fixed orientation), Where $T_{i}$ is the subtree of $A(T')$ rooted at $B_i$. Note that for any subset of edges of $G'_{B_i}$, the sum of the weight of the edges in that subset is nonzero with respect to $w_1$.\n\n\nThe final weight function is $w' = \\langle w_1 + w_2 \\rangle.$ Since the maximum height of a bag in $A(T')$ is $O(\\log n)$, the weight of an edge is at most $O( n^c)$, for some constant $c>0$.\n\n\\begin{lemma}\n\\label{lem:maxval}\nFor a cycle $C$ in $G'$ sum of the weights of the edges of $C$ associated with the bags in a subtree $T_i$ of $A(T')$ is $< K^{h(r(T_i))} \\times l(T_i) $.\n\\end{lemma}\n\n\\begin{proof}\nLet $w(C_{T_i})$ denotes the sum of the weight of the edges of a cycle $C$ associated with the bags in $T_i$. We prove the Lemma by induction on the height of the root of the subtrees of $A(T')$. 
Note that the Lemma holds trivially for the base case when the height of the root of a subtree is $1$.\n\n\\textit{Induction hypothesis}: Assume that it holds for all the subtrees such that the height of their root is $ 2^{m+2}] \\\\\nw(C_{T_i}) &<& K^{h(r(T_i))} \\times l(T_i)\n\\end{eqnarray*}\n\n\\item When $G'_{r(T_i)}$ is a planar graph: let $\\tau_1,\\tau_2, \\ldots ,\\tau_k$ be the separating sets present in $G'_{r(T_i)}$ such that the subtree $T_i^j$ is attached to $r(T_i)$ at $\\tau_j$, for all $j \\in [k]$. A separating set can be present in at most 3 faces. Thus it can contribute $2 \\times 3 \\times K^{h(r(T_i^j))} \\times l(T_i^j)$ to the circulation of the cycle $C$. Therefore,\n\\begin{eqnarray*}\nw(C_{T_i}) &\\leq & \\sum_{j=1}^k 6 \\times K^{h(r(T_i^j))} \\times l(T_i^j) + \\sum_{j=1}^k K^{h(r(T_i^j))} \\times l(T_i^j) \\\\\nw(C_{T_i}) &\\leq& 7 \\times K^{h(r(T_i))-1} \\times l(T_i)\\hspace{5cm} [K > 7] \\\\\nw(C_{T_i}) &<& K^{h(r(T_i))} \\times l(T_i)\n\\end{eqnarray*}\n\\end{itemize}\n\\end{proof}\n\n\n\\begin{lemma}\n\\label{lem:domwt}\nFor a cycle, $C$ in $G$ let $B_i$ be the unique highest bag in $A(T')$ that have some edges of $C$ associated with it. Then the sum of the weights of the edges of $C$ associated with $B_i$ will be more than that of the rest of the edges of $C$ associated with the other bags.\n\\end{lemma}\n\n\\begin{proof}\nLet $T_i$ be the subtree of $A(T')$ rooted at $B_i$. We know that sum of the weights of the edges of $C$ associated $B_i$ is $\\geq 2 \\times K^{h(r(T_i))-1} \\times l(T_i)$. Let $T_i^1, T_i^2 , \\ldots ,T_i^k$ be the subtree of $T_i$ rooted at children of $B_i$. By Lemma \\ref{lem:maxval}, we know that the sum of the weight of the edges of $C$ associated with the bags in these subtrees is $< \\sum_{j=1}^k K^{h(r(T_i^j))} \\times l(T_i^j) = K^{h(r(T_i))-1} \\times l(T_i)$. Therefore, the lemma follows.\n\\end{proof}\n\n\\begin{lemma}\n\\label{lem:nonzerowt}\nCirculation of a simple cycle $C$ in the graph $G'$ is nonzero with respect to $w'$.\n\\end{lemma}\n\n\\begin{proof}\nIf $C$ is contained within a component, i.e., its edges are associated with a single bag $B_i$, then we know that $w_1$ assigns nonzero circulation to $C$. Suppose the edges of $C$ are associated with more than one bag in $T'$. By Claim \\ref{clm:connect}, we know that these bags form a connected component. By the (ii) property of $A(T')$, we know that there is a unique highest bag $B_i$ in $A(T')$ which have edges of $C$ associated with it. Therefore from Lemma \\ref{lem:domwt} we know that the circulation of $C$ will be nonzero.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{thm:main}]\nProof of Theorem \\ref{thm:main} follows from Lemma \\ref{lem:wtforg} and \\ref{lem:nonzerowt}.\n\\end{proof}\n\n\\section{Conclusion}\n\\label{sec:concl}\nWe have given a construction of a nonzero circulation weight function for the class of graphs that can be expressed as 3-clique-sums of planar and constant treewidth graphs. However, it seems that our technique can be extended to the class of graphs that can be expressed as 3-clique-sums of constant genus and constant treewidth graphs. Further extending our results to larger graph classes would require fundamentally new techniques. This is so because the most significant bottleneck in parallelizing matching algorithms for larger graph classes such as apex minor free graphs or $H$-minor free graphs for a finite $H$ is the absence of a parallel algorithm for the structural decomposition of such families. 
Thus we would need to\nrevisit the Robertson-Seymour graph minor theory to parallelize it. This paper thus serves the dual purpose of delineating the boundaries of the known regions of parallel (bipartite) matching and reachability and as an invitation to the vast unknown of parallelizing the Robertson-Seymour structure theorems.\n\\section{Introduction}\nDirected graph reachability and perfect matching are two fundamental problems in computer science. The history of the two problems has been inextricably linked together from the inception of computer science (and before!) \\cite{FordF56}. The problems and their variants, such as shortest path \\cite{Dijkstra59} and maximum matching \\cite{Edmonds65} have classically been studied in the sequential model of computation. Since the 1980s, considerable efforts have been spent trying to find parallel algorithms for matching problems spurred on by the connection to reachability which is, of course, parallelizable.\nThe effort succeeded only in part with the discovery of randomized parallel algorithms \\cite{KUW85,MulmuleyVV87}. While we know that the reachability problem is complete for the complexity class $\\NL$, precise characterization has proved to be elusive for matching problems. The 1990s saw attempts in this\ndirection when surprisingly ``small'' upper bounds were proved \\cite{ARZ99}\nfor the perfect matching problem, although in the non-uniform setting.\nAt roughly the same time, parallel algorithms for various\nversions of the matching problem for restricted graph classes like planar\n\\cite{MN95} and bounded genus \\cite{MV00} graphs were\ndiscovered. The last two decades have seen efforts towards pinning down the\nexact\nparallel complexity of reachability and matching related problems in restricted\ngraph classes \\cite{BTV09,KV10,DKR10,DKTV11,AGGT16,KT16,GST19,GST20}.\nMost of these papers are based on the method of constructing \\emph{nonzero circulations}.\n\nThe circulation of a simple cycle is the sum of its edge-weights in a\nfixed orientation (see Section~\\ref{sec:prelims} for the definition)\nand\nwe wish to assign polynomially bounded weights to the edges of a graph, such\nthat every simple cycle has a nonzero circulation.\nAssigning such weights \\emph{isolates} a reachability witness or a matching witness in the graph \\cite{TV12}. Constructing\npolynomially bounded isolating weight function in parallel for general graphs has been\nelusive so far.\nThe last five years\nhave seen rapid progress in the realm of matching problems, starting with\n\\cite{FGT} which showed that the method of nonzero circulations could\nbe extended from topologically restricted (bipartite) graphs to general\n(bipartite) graphs. A subsequent result extended this to all graphs\n\\cite{ST17}. More recently, the endeavour to parallelize planar\nperfect matching has borne fruit \\cite{Sankowski18,AV20} and has been followed up by further exciting work \\cite{AV2}. \n\nWe know that polynomially bounded weight functions that give nonzero circulation to every cycle can be constructed in logspace for planar graphs, bounded genus graphs and bounded treewidth graphs \\cite{BTV09,DKTV11,DKMTVZ20} . Planar graphs are both $K_{3,3}$-free and $K_5$-free graphs. Such a weight function is also known to be constructable in logspace for $K_{3,3}$-free graphs and $K_5$-free graphs, individually \\cite{AGGT16}. A natural question arises if we can construct such a weight function for $H$-minor-free graphs for any arbitrary graph $H$. 
A major hurdle in this direction is the absence of a space-efficient (Logspace) or parallel algorithm (\\NC) for finding a structural decomposition of $H$-minor free graphs. However, such a decomposition is known when $H$ is a single crossing graph. This induces us to solve the problem for single crossing minor-free (SCM-free) graphs. An SCM-free graph can be decomposed into planar and bounded treewidth graphs. Moreover, $K_{3,3}$ and $K_5$ are single crossing graphs. Hence our result can also be seen as a generalization of the previous results on these classes. There have also been important follow-up works on parallel algorithms for SCM-free graphs \\cite{EV21}. SCM-free graphs have been studied in several algorithmic works (for example \\cite{STW16,CE13,DHNRT04}).\n\n\n\\subsection{Our Result}\nIn this paper, we show that results for previously studied graph classes\n(planar, constant tree-width and $H$-minor free for $H \\in \\{K_{3,3},K_5\\}$)\ncan be extended and unified to yield similar results for SCM-free graphs.\n\\begin{theorem}\n\\label{thm:main}\nThere is a logspace algorithm for computing polynomially-bounded, skew-symmetric nonzero circulation weight function in SCM-free graphs.\n\\end{theorem}\n\nAn efficient solution to the circulation problem for a class of graphs yields better complexity bounds for determining reachability in the directed version of that class and constructing minimum weight maximum-matching in the bipartite version of that class. Theorem~\\ref{thm:main}\nwith the results of \\cite{DKKM18,RA00}, yields the following:\n\\begin{corollary}\\label{cor:stat}\nFor SCM-free graphs, reachability is in $\\UL\\cap\\coUL$ and minimum weight bipartite maximum matching is in $\\SPL$.\n\\end{corollary}\n\nAlso using the result of \\cite{TW10}, we obtain that the \\textit{Shortest path} problem in SCM-free graphs can be solved in $\\UL\\cap \\coUL$. \n\\subparagraph*{Overview of Our Techniques and Comparison With Previous Results:} We know that for planar graphs and constant treewidth graphs nonzero circulation weights can be constructed in logspace \\cite{BTV09,DKMTVZ20}. We combine these weight functions using the techniques from Arora et al. \\cite{AGGT16}, Datta et al. \\cite{DKKM18} and, Datta et al. \\cite{DKMTVZ20} together with some modifications to obtain the desired weight function. In \\cite{AGGT16}, the authors decompose the given input graph $G$ ($K_{3,3}$-free or $K_5$-free) and obtain a component tree that contains planar and constant size components. They modify the components of the component tree so that they satisfy few properties which they use for constructing nonzero circulation weights (these properties are mentioned at the beginning of Section \\ref{sec:wtfn}). The new graph represented by these modified components preserves the perfect matchings of $G$. Then, they construct a \\emph{working-tree} of height $O(\\log n)$ corresponding to this component tree and use it to assign nonzero circulation weights to the edges of this new graph. The value of the weights assigned to the edges of the new graph is exponential in the height of the working tree.\n\nWhile $K_{3,3}$-free and $K_5$-free graphs can be decomposed into planar and constant size components, an SCM-free graph can be decomposed into planar and constant treewidth components. Thus the component tree of the SCM-free graph would have several non-planar constant treewidth components. 
While we can construct a working tree of height $O(\\log n)$, this tree would contain constant-treewidth components and hence make it difficult to find nonzero circulation weights. A na\\\"ive idea would be to replace each constant treewidth component with its tree decomposition in the working tree. However, the resultant tree would have the height $O(\\log^2 n)$. Thus the weight function obtained in this way is of $O(\\log^2 n)$-bit. We circumvent this problem as follows: we obtain a component tree $T$ of the given SCM-free graph $G$ and modify its components to satisfy the same property as \\cite{AGGT16} (however, we use different gadgets for modification). Now we replace each bounded treewidth component with its tree decomposition in $T$. Using this new component tree, say $T'$, we define another graph $G'$. We use the technique from \\cite{DKKM18} to show that if we can construct the nonzero circulation for $G'$, then we can \\textit{pull back} nonzero circulation for $G$. Few points to note here: (i) pull back technique works because of the new gadget that we use to modify the components in $T$, (ii) since ultimately we can obtain nonzero circulation for $G$, it allows us to compute maximum matching in $G$ in $\\SPL$, which is not the case in \\cite{AGGT16}.\n \n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Organization of the Paper}\nAfter introducing the definitions and preliminaries in Section~\\ref{sec:prelims}, in Section~\\ref{sec:wtfn} we discuss the weight function that achieves non-zero circulation in single-crossing minor free graphs and its application to maximum matching in Section~\\ref{sec:staticmatch}. Finally, we conclude with Section~\\ref{sec:concl}.\n\n\n\n\\section{Dynamic isolation from static non-zero circulation}\\label{sec:iso}\nIn previous sections, we have shown that we can compute a non-zero\ncirculation (for Matchings) in a bipartite single crossing minor free graph \nin $\\L$. This allows us to compute if the graph has a perfect matching \nin $\\SPL \\subseteq \\NC$.\n\nIn the dynamic setting, we allow the graph to evolve by insertion and \ndeletion of a small number of edges at every time step under the promise \nthat the graph stays $H$-minor free for a fixed single crossing graph $H$. \nWe would like to maintain if the graph has a perfect matching in $\\DynFO$.\nOne plausible approach would be to maintain the non-zero circulation as\ndescribed in Section~\\ref{sec:wtfn}.\nHowever, this seems non-trivial as we would need to maintain the decomposition\nof the given graph into planar and constant tree-width parts in $\\DynFO$. Even\nif we could somehow do that, the weight of many edges may change due to \neven a single edge change. Thus many more than polylog many entries of the\nadjacency and Tutte matrices may change. This would preclude the use of ``small\nchange'' techniques like the Sherman-Morrison-Woodbury formula.\nThis induces us to side-step the update of the non-zero\ncirculation by the method of Section~\\ref{sec:wtfn}.\n\nInstead we use a result from \\cite{FGT} to convert the given static circulation\nto dynamic isolating weights. Notice that \\cite{FGT} yields a black box recipe\nto produce isolating weights of quasipolynomial magnitude in the following way.\nGiven a bipartite graph $G$, they first consider a non-zero circulation\nof exponential magnitude viz. $w_0(e) : e \\mapsto 2^e$. Next, they consider a \nlist of $\\ell = O(\\log{n})$ primes $\\vec{p} = \\left$ \nwhich yield a weight function $w_{\\vec{p}}(e)$. 
This is defined\nby considering the $\\ell$ weight functions \n$w_0\\bmod{p_i}$ for $i \\in \\{1,\\ldots,\\ell\\}$\nand concatenating them after introducing a \\emph{shift} in the bit\npositions, i.e.,\n$w_{\\vec{p}}(e) = \\left\\langle w_0 \\bmod{p_1}, w_0 \\bmod{p_2}, \\ldots, w_0 \\bmod{p_\\ell} \\right\\rangle(e)$.\nThe shift is chosen so that there is no\noverflow from the $i$-th field to the $(i-1)$-st for any \n$i \\in \\{2,\\ldots,\\ell\\}$. \n\nSuppose we start with a graph with static weights ensuring non-zero \ncirculation. In a step, some edges are \ninserted or deleted.\nThe graph after deletion is a subgraph of the original graph; hence the\nnon-zero circulation remains non-zero after a deletion\\footnote{If we merely had isolating\nweights this would not necessarily preserve isolation.}, but we have to do more\nin the case of insertions. We aim to give the newly inserted edges FGT-weights\n (from \\cite{FGT})\nin the higher order bits while giving weight $0$ to all the original edges in \n$G$ again in the higher order bits. Thus the weight of all perfect matchings\nthat survive the deletions in a step remains unchanged. Moreover, if none such\nsurvive but new perfect matchings are introduced (due to insertion of edges),\nthe lightest of them is determined solely by the weights of the newly\nintroduced edges. In this case, our modification of the existential proof \nfrom \\cite{FGT} ensures that the minimum weight perfect matching is unique.\n\nNotice that the FGT-recipe applied to a graph with polylog ($N = \\log^{O(1)}{n}$)\nedges yields\nquasipolylogarithmically ($N^{\\log^{O(1)}{N}}$) \nlarge weights, which are therefore still subpolynomial ($2^{(\\log{\\log{n}})^{O(1)}} = 2^{o(\\log{n})} = n^{o(1)}$).\nThus the weights remain polynomial when shifted to accommodate the old \nweights. Further, the number of primes is polyloglog ($\\log^{O(1)}{N} = (\\log{\\log{n}})^{O(1)}$) and so sublogarithmic ($= \\log^{o(1)}{n}$);\nthus the number of possible different weights is subpolynomial, and hence our \nalgorithm can be derandomized.\n\nBefore getting into technical details, we point out that in \\cite{DKMTVZ20} a\nsimilar scheme is used for reachability and bears the same relation to\n\\cite{KT16} as this section does to \\cite{FGT}. We have the following lemma, which we prove in Appendix~\\ref{subsec:isoApp}:\n\\begin{lemma} \\label{lem:combFGT}\nLet $G$ be a bipartite graph with a non-zero circulation\n$w$. Suppose $N = \\log^{O(1)}{n}$ edges are inserted into $G$ to yield $G^{new}$.\nThen we can compute polynomially many weight functions in $\\FOar$ that\nhave $O(\\log{n})$ bit weights, and at least one of them,\n$w^{new}$, is isolating. Further, the weights of the original edges remain\nunchanged under $w^{new}$.\n\\end{lemma}\n\n\n\\section{The details from Section~\\ref{sec:iso}}\\label{subsec:isoApp}\nWe use the same general strategy as in \\cite{DKMTVZ20} and divide the edges into\n\\emph{real} and \\emph{fictitious}, where the former represent the newly inserted\nedges and the latter the original undeleted edges\\footnote{We use the terms\nold $\\leftrightarrow$ fictitious and new $\\leftrightarrow$ real interchangeably\nin this section.}.\n\nLet $\\mathcal{C}$ be a set of cycles containing both real and fictitious edges that \noccur in any PM.\nLet $w$ be a weight \nfunction on the edges that gives non-zero weight only to the real edges.\nDefine $c_w(\\mathcal{C})$ to be the set of all circulations in the cycles of \n$\\mathcal{C}$. 
Here the circulation is the absolute value of the alternating sum\nof weights, $c_w(C) = |w(f_1) - w(f_2) + w(f_3) - \\ldots|$, where \n$C = f_1,f_2,f_3,\\ldots$ is the sequence of edges in the cycle. \n\nWe say that a weight function\nthat gives non-zero weights to the real edges \\emph{real isolates} $\\mathcal{M}$ \nfor a set system $\\mathcal{M}$ if the minimum weight set in $\\mathcal{M}$ is unique. In our context $\\mathcal{M}$ will refer to the set of perfect\/maximum\n matchings.\n\nNext we follow the proof idea of \\cite{FGT} but focus on assigning weights\nto the real edges, which are, say, $N$ in number.\n We do this in $\\log{N}$ stages, starting with a graph $G_0 = G$\nand ending with the acyclic graph $G_\\ell$, where $\\ell = \\log{N}$. The inductive\nassumption is that:\n\\begin{invariant}\\label{inv:analogFGT}\n For $i \\geq 1$, $G_i$ contains no cycles with at most $2^{i+1}$ real edges. \n\\end{invariant}\n\n\nNotice that the induction starts at $i > 0$.\n\nWe first show how to construct $G_{i+1}$ from $G_i$ such that if $G_i$ satisfies\nthe inductive invariant~\\ref{inv:analogFGT} then so does $G_{i+1}$.\nLet $i > 1$; then in the $i$-th stage, let $\\mathcal{C}_{i+1}$ be the set of cycles that contain \nat most $2^{i+2}$ real edges. For each such cycle \n$C = f_0,f_1,\\ldots$ containing $k \\leq 2^{i+2}$ real edges (with $f_0$ being the\nleast numbered real edge in the cycle), edge-partition it into $4$ consecutive \npaths $P_j(C)$ for $j \\in \\{0,1,2,3\\}$ such that the first $3$ paths contain exactly \n$\\lfloor\\frac{k}{4}\\rfloor$ real edges and the last path contains the rest. In\naddition, ensure that the first edge in each path is a real edge. Let the first\nedges of the $4$ paths be, respectively, $f_0 = f'_0, f'_1, f'_2, f'_3$. We \nhave the following claim, which shows that the associated \n$4$-tuples $\\left\\langle f'_0, f'_1, f'_2, f'_3 \\right\\rangle$\nuniquely characterise cycles in $\\mathcal{C}_{i+1}$.\n\\begin{claim}\nThere is at most one cycle in $\\mathcal{C}_{i+1}$ that has a given $4$-tuple \n$\\left\\langle f'_0, f'_1, f'_2, f'_3 \\right\\rangle$ associated with it.\n\\end{claim}\n\\begin{proof}\nSuppose two distinct cycles $C,C' \\in \\mathcal{C}_{i+1}$ have\nthe same tuple $\\left\\langle f'_0, f'_1, f'_2, f'_3 \\right\\rangle$ \nassociated with them. Then for at least one $j \\in \\{0,1,2,3\\}$,\n$P_j(C) \\neq P_j(C')$. Then $P_j(C) \\cup P_j(C')$\nis a closed walk in $G_i$ containing at most \n$2\\times \\lceil\\frac{2^{i+2}}{4}\\rceil = 2^{i+1}$ many real edges,\ncontradicting the assumption on $G_i$.\n\\end{proof}\n\nThis claim shows that there are at most $N^4$ elements in $\\mathcal{C}_i$.\nNext consider the following lemma from \\cite{FKS}:\n\\begin{lemma}[\\cite{FKS}]\nFor every constant $c>0$ there is a constant $c_0>0$ such that for\nevery set $S$ of $m$ bit integers with $|S| \\leq m^c$,\nthe following holds: There is a $ c_0 \\log{m}$ bit prime\nnumber $p$ such that for any $x,y \\in S$ it holds that if $x \\neq y$ then \n$x \\not\\equiv y \\bmod{p}$.\n\\end{lemma}\nWe apply it to the set $c_{w_0}(\\mathcal{C}_i) = \\{c_{w_0}(C) : C \\in \\mathcal{C}_i\\}$. \nHere, the weight\nfunction $w_0$ assigns weights $w_0(e_j) = 2^j$ to the real edges, which are\n$e_1,e_2,\\ldots,e_N$ in an arbitrary but fixed order.\nNotice that from the above claim, the size of this set is \n$|c_{w_0}(\\mathcal{C}_i)| \\leq N^4$.\nMoreover, $w_0(e_j)$ is $j$ bits long, hence $c_{w_0}(C)$ for any cycle\n$C \\in \\mathcal{C}_i$ that has less than $2^{i+2}$ real edges \nis at most $i+j+2 < 4N$ bits long. Thus, we obtain a prime $p_{i+1}$ \nof length at most $c_0\\log{4N}$ by picking $c = 4$. 
We define \n$w_{i+1}(e_j) = w_0(e_j) \\bmod{p_{i+1}}$.\n\nNow consider the following crucial lemma from \\cite{FGT}:\n\\begin{lemma}[\\cite{FGT}]\\label{lem:crucialFGT}\nLet $G = (V,E)$ be a bipartite graph with weight function $w$. \nLet $C$ be a cycle in $G$ such that $c_w(C) \\neq 0$. \nLet $E_1$ be the union of all minimum weight perfect matchings in $G$.\nThen the graph $G_1(V,E_1)$ does not contain the cycle $C$.\nMoreover all the perfect matchings in $G_1$ have the same weight.\n\\end{lemma}\nLet $B$ be a large enough constant (though bounded by a polynomial in $N$) \nto be specified later.\nWe shift the original accumulated weight function\n$W_i$ and add the new weight function $w_{i+1}$ to obtain:\n$W_{i+1}(e) = W_{i}(e)B + w_{i+1}(e)$. \nApply $W_{i+1}$ on the graph $G_i$ to obtain the graph $G_{i+1}$.\n Inductively suppose we have\n the invariant~\\ref{inv:analogFGT}\n that the graph $G_i$ did not have any cycles containing at least \n$2^{i+1}$ real edges. This property is preserved when we take all the\nperfect matchings in $G_i$ and apply $W_{i+1}$ yielding $G_{i+1}$. Moreover \nfrom Lemma~\\ref{lem:crucialFGT} and the construction of $w_{i+1}$ the cycles of\n$\\mathcal{C}_i$ disappear from $G_{i+1}$ restoring the invariant. \n\nNotice that it suffices to take $B$ greater than the number of real edges\ntimes the maximum of $w_i(e)$ over $i,e$.\nShowing that $G_1$ contains no cycle of length at most $4$ mimics the above\nmore general proof and we skip it here.\nWe can now complete the proof of Lemma~\\ref{lem:combFGT}:\n\\begin{lemma*} (Lemma~\\ref{lem:combFGT} restated)\nLet $G$ be a bipartite graph with a non-zero circulation\n$w^{old}$. Suppose $N = \\log^{O(1)}{n}$ edges are inserted into $G$ to yield $G^{new}$\nthen we can compute polynomially many weight functions in $\\FOar$ that\nhave $O(\\log{n})$ bit weights, and at least one of them,\n$w^{new}$ is isolating. Further the weights of the original edges remains\nunchanged under $w^{new}$.\n\\end{lemma*}\n\n\\begin{proof}(of Lemma~\\ref{lem:combFGT})\nFrom the invariant above $G_\\ell$ does not contain\nany cycles. From the construction of $G_\\ell$ if $G$ has a perfect matching then\nso does $G_\\ell$ and hence it is a perfect matching. Notice that $W_\\ell$\nis obtained from $p_1,\\ldots,p_\\ell$ that include $O((\\log{\\log{n}})^2) = o(\\log{n})$ \nmany bits. Thus there are (sub)polynomially many such weighting functions $W_\\ell$, depending on the primes $\\vec{p}$.\nLet $w = B\\cdot W_\\ell + w^{old}$ where we recall that $W_\\ell(e)$ is non-zero only\nfor the new (real) edges and $w^{old}$ is non-zero only for the old (fictitious)\nedges. Thus, any perfect matching that consists of only old edges is lighter\nthan any perfect matching containing at least one new edge. Moreover if \nthe real edges in two matchings differ then from the construction of \n$W_\\ell$ (for some choice of $\\vec{p}$) both matchings cannot be lightest \nas $W_\\ell$ real isolates a matching. Thus the only remaining case is\nthat we have two distinct lightest perfect matchings which differ only in the\nold edges. But the symmetric difference of any two such perfect matchings\nis a collection of cycles consisting of old edges. But each cycle has a\nnon-zero circulation in the old graph and so we can obtain a matching\nof even lesser weight by replacing the edges of one of the matchings in one\ncycle by the edges of the other one. This contradicts that both matchings were\nof least weight. 
This completes the proof.\n\\end{proof}\n\n\n\\subsection{Language Definitions}\\label{subsec:langDefs}\n\\begin{definition}\n\\begin{itemize}\n\\item $\\PM$ is a set of independent edges covering all the vertices of a graph. \n\\item $\\BPM$ is a $\\PM$ in a bipartite graph.\n\\item $\\PMD$ Given an undirected graph $G(V,E)$ determine if there exists a \nperfect matching in $G$.\n\\item $\\PMS$ Construct a perfect matching for a given graph $G$ if one exists.\n\\item $\\BMWPMS$ In an edge weighted bipartite graph construct a $\\PM$ of least weight.\n\\item $\\MCM$ is a set of largest size consisting of independent edges.\n\\item $\\BMCM$ is an $\\MCM$ in a bipartite graph.\n\\item $\\MWMCM$ is an $\\MCM$ of least weight in an edge weighted graph.\n\\item $\\BMWMCM$ is an $\\MWMCM$ in a weighted bipartite graph.\n\\item $\\MCMSz$ determine the size of an $\\MCM$.\n\\item $\\BMCMSz$ the $\\MCMSz$ problem in bipartite graphs.\n\\item $\\BMWMCMS$ Construct a $\\BMWMCM$. \n\\item $\\Reach$ Given a directed graph $G(V,E)$ and $s,t \\in V$ does there exist a directed path from $s$ to $t$ in $G$.\n\\item $\\Dist$ Given a directed graph $G(V,E)$, polynomially bounded edge weights and $s,t \\in V$ find the weight of a least weight path from $s$ to $t$.\n\\item $\\Rank$ Given an $m \\times n$ matrix $A$ with integer entries find \nthe rank of $A$ over $\\mathbb{Q}$.\n\\end{itemize}\n\\end{definition}\n\n\n\\section{Preliminaries and Notations}\n\\label{sec:prelims}\n\\subparagraph*{Tree decomposition:} Tree decomposition is a well-studied concept in graph theory. Tree decomposition of a graph, in some sense, reveals the information of how much tree-like the graph is. We use the following definition of tree decomposition.\n\n\\begin{definition}\n\\label{def:treedec}\nLet $G(V,E)$ be a graph and $\\tilde{T}$ be a tree, where nodes of the $\\tilde{T}$ are $\\{B_1, \\ldots ,B_k \\mid B_i \\subseteq V\\}$ (called bags). $T$ is called a tree decomposition of $G$ if the following three properties are satisfied:\n\\begin{itemize}\n\n\\item $B_1 \\cup \\ldots \\cup B_k = V$,\n\\item for every edge $(u,v) \\in E$, there exists a bag $B_i$ which contains both the vertices $u$ and $v$,\n\\item for a vertex $v \\in V$, the bags which contain the vertex $v$ form a connected component in $\\tilde{T}$.\n\\end{itemize}\n\n\\end{definition}\n\nThe width of a tree decomposition is defined as one less than the size of the largest bag. The treewidth of a graph $G$ is the minimum width among all possible tree decompositions of $G$. Given a constant treewidth graph $G$, we can find its tree decomposition $\\tilde{T}$ in logspace such that $\\tilde{T}$ has a constant width \\cite{EJT10}.\n\n\\begin{lemma}[\\cite{EJT10}]\n\\label{lem:boundtw}\nFor every constant $c$, there is a logspace algorithm that takes a graph as input and outputs its tree decomposition of treewidth at most $c$, if such a decomposition exists.\n\\end{lemma}\n\n\\begin{definition}\n\\label{def:cls}\nLet $G_1$ and $G_2$ be two graphs containing cliques of equal size. Then the \\emph{clique-sum} of $G_1$ and $G_2$ is formed from their disjoint union by identifying pairs of vertices in these two cliques to form a single shared clique, and then possibly deleting some of the clique edges.\n\\end{definition}\nFor a constant $k$, a $k$-clique-sum is a clique-sum in which both cliques have at most $k$ vertices. One may also form clique-sums of more than two graphs by repeated application of the two-graph clique-sum operation. 
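\n\nFor concreteness, the two-graph operation can be carried out mechanically as in the following Python sketch; the set-based graph representation, the renaming scheme for non-clique vertices, and the choice of which clique edges to delete are illustrative assumptions of the sketch rather than part of the definition.\n\\begin{verbatim}\n# Illustrative sketch of a clique-sum of two graphs given as adjacency\n# dicts {vertex: set of neighbours}.  'identify' maps the clique\n# vertices of G2 onto the corresponding clique vertices of G1.\ndef clique_sum(G1, G2, identify, delete_clique_edges=()):\n    def rename(x):\n        return identify.get(x, ('g2', x))\n    G = {v: set(N) for v, N in G1.items()}\n    for v, N in G2.items():\n        G.setdefault(rename(v), set()).update(rename(u) for u in N)\n    for v in list(G):                      # keep adjacency symmetric\n        for u in G[v]:\n            G.setdefault(u, set()).add(v)\n    for a, b in delete_clique_edges:       # optionally drop clique edges\n        G[a].discard(b)\n        G[b].discard(a)\n    return G\n\n# 2-clique-sum of two triangles glued along the shared edge {0, 1}\nT1 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}\nT2 = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b'}}\nprint(clique_sum(T1, T2, identify={'a': 0, 'b': 1}))\n\\end{verbatim}\n\n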
For a constant $W$, we use the notation $ \\langle \\mathcal{G}_{P, W}\\rangle_k$ to denote the class of graphs that can be obtained by taking repetitive $k$-clique-sum of planar graphs and graphs of treewidth at most $W$. In this paper, we construct a polynomially bounded skew-symmetric weight function that gives nonzero circulation to all the cycles in a graph $G \\in \\langle \\mathcal{G}_{P, W}\\rangle_3$. Note that if a weight function gives nonzero circulations to all the cycles in the biconnected components of $G$, it will give nonzero circulation to all the cycles in $G$ because no simple cycle can be a part of two different biconnected components of $G$. We can find all the biconnected components of $G$ in logspace by finding all the articulation points. Therefore, without loss of generality, assume that $G$ is biconnected.\n\n\nThe \\emph{crossing number} of a graph $G$ is the lowest number of edge crossings of a plane drawing of $G$. A \\emph{single-crossing} graph is a graph whose crossing number is at most 1. SCM-free graphs are graphs that do not contain $H$ as a minor, where $H$ is a fixed single crossing graph. Robertson and Seymour have given the following characterization of SCM-free graphs.\n\n\\begin{theorem}[\\cite{RS91}]\n\\label{thm:rs}\nFor any single-crossing graph $H$, there is an integer\n$c_H \\geq 4$ (depending only on $H$) such that every graph with no minor isomorphic to $H$ can\nbe obtained as $3$-clique-sum of planar graphs and graphs of treewidth at most $c_H$.\n\\end{theorem}\n\n\\subparagraph*{Component Tree:} In order to construct the desired weight function for a graph $G \\in \\langle \\mathcal{G}_{P, W}\\rangle_3$, we decompose $G$ into smaller graphs and obtain a component tree of $G$ defined as follows: we first find $3$-connected and $4$-connected components of $G$ such that each of these components is either planar or of constant treewidth. We know that these components can be obtained in logspace \\cite{TW09}. Since $G$ can be formed by taking repetitive $3$-clique-sum of these components, the set of vertices involved in a clique-sum is called a separating set. Using these components and separating sets, we define a component tree of $G$. A component tree $T$ of $G$ is a tree such that each node of $T$ contains a $3$-connected or $4$-connected component of $G$, i.e., each node contains either a planar or constant treewidth subgraph of $G$. There is an edge between two nodes of $T$ if the corresponding components are involved in a clique-sum operation. If two nodes are involved in a clique-sum operation, then copies of all the vertices of the clique are present in both components. It is easy to see that $T$ will always be a tree. Within a component, there are two types of edges present, \\textit{real} and \\textit{virtual edges}. Real edges are those edges that are present in $G$. Let $\\{a,b,c\\}$(or $\\{a,b\\}$) be a separating triplet(or pair) shared by two nodes of $T$, then there is a clique $\\{a,b,c\\}$ (or $\\{a,b\\}$) of virtual edges present in both the components. Suppose there is an edge present in $G$ between any pair of vertices of a separating set. In that case, there is a real edge present between that pair of vertices parallel to the virtual edge, in exactly one of the components which share that separating set.\n\n\\subparagraph*{Weight function and circulation:} Let $G(V, E)$ be an undirected graph with vertex set $V$ and edge set $E$. By $\\dvec{E}$, we denote the set of bidirected edges corresponding to $E$. 
Similarly, by $G(V, \\dvec{E})$, we denote the graph corresponding to $G(V, E)$ where each of its edges is replaced by a corresponding bidirected edge. A weight function $w : \\dvec{E} \\rightarrow \\mathbb{Z}$ is called skew-symmetric if for all $e\\in \\dvec{E}$, $w(e) = -w(e^r)$ (where $e^r$ represents the edge with its direction reversed). We know that if $w$ gives nonzero circulation to every cycle that consists of edges of $\\dvec{E}$, then it isolates a directed path between each pair of vertices in $G(V,\\dvec{E})$. Also, if $G$ is a bipartite graph, then the weight function $w$ can be used to construct a weight function $w^{\\textrm{{\\tiny{und}}}} : E \\rightarrow \\mathbb{Z}$ that isolates a perfect matching in $G$ \\cite{TV12}.\n\n\nA convention is to represent by $\\left\\langle w_1, w_2, \\ldots, w_k \\right\\rangle$ the weight function that on edge $e$ takes the weight $\\sum_{i=1}^k{w_i(e)B^{k-i}}$, where $w_1,\\ldots,w_k$ are weight functions such that $\\max_{i=1}^k{(nw_i(e))} \\leq B$.\n\n\\subparagraph*{Complexity Classes:} The complexity classes \\Log\\ and \\NL\\ are the classes of languages accepted by deterministic and non-deterministic logspace Turing machines, respectively. $\\UL$ is the class of languages that can be accepted by an $\\NL$ machine that has at most one accepting path on each input, and hence $\\UL \\subseteq \\NL$. $\\SPL$ is the class of languages whose characteristic function can be written as a logspace computable integer determinant. \n\n\n\\section{Maximum Matching}\\label{sec:staticmatch}\nIn this section, we consider the complexity of the maximum matching problem in single crossing minor free graphs. Recently, Datta et al.~\\cite{DKKM18} have shown that bipartite maximum matching can be solved in \\SPL\\ in planar, bounded genus, $K_{3,3}$-free and $K_{5}$-free graphs.\n\nTheir techniques can be extended to any graph class where nonzero circulation weights can be assigned in logspace. For constructing a maximum matching in $K_{3,3}$-free and $K_{5}$-free bipartite graphs, they use the logspace algorithm of~\\cite{AGGT16} as a black box. Since, by Theorem~\\ref{thm:main}, nonzero circulation weights can be computed for the more general class of single crossing minor free graphs, we obtain the bipartite maximum matching result of Corollary~\\ref{cor:stat}.\n\nIn recent related work, Eppstein and Vazirani~\\cite{EV21} have shown an $\\NC$ algorithm for the case when the graph is not necessarily bipartite. However, the result holds only for constructing perfect matchings. In non-bipartite graphs, there is no known parallel (e.g., \\NC) or space-efficient algorithm for deterministically constructing a maximum matching, even in the case of planar graphs~\\cite{Sankowski18, DKKM18}. Datta et al.~\\cite{DKKM18} give an approach to design a \\emph{pseudo-deterministic} \\NC\\ algorithm for this problem. Pseudo-deterministic algorithms are probabilistic algorithms for search problems that produce a unique output for each given input with high probability. That is, they return the same output for all but a few of the possible random choices. We call an algorithm pseudo-deterministic $\\NC$ if it runs in $\\RNC$ and is pseudo-deterministic.\n\nUsing the Gallai-Edmonds decomposition theorem, \\cite{DKKM18} shows that the search version of the maximum matching problem reduces to determining the size of the maximum matching in the presence of algorithms to (a) find a perfect matching and to (b) solve the bipartite version of the maximum matching, all in the same class of graphs. 
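\n\nThe decomposition step behind this reduction can be made concrete. Given only an oracle for the size of a maximum matching, the Gallai-Edmonds partition $(D,A,C)$ of the vertex set is identified as in the schematic sketch below (a sequential, purely illustrative rendering; the oracle interface is our own assumption), and the matching itself is then assembled from the partition using the perfect matching and bipartite maximum matching subroutines.\n\\begin{verbatim}\n# Illustrative sketch: the Gallai-Edmonds partition (D, A, C) obtained\n# from a maximum-matching-size oracle.  nu(S) returns the size of a\n# maximum matching of the subgraph induced on the vertex set S.\ndef gallai_edmonds_partition(vertices, neighbours, nu):\n    full = frozenset(vertices)\n    base = nu(full)\n    # v is missed by some maximum matching iff nu(G - v) == nu(G)\n    D = {v for v in vertices if nu(full - {v}) == base}\n    A = {u for v in D for u in neighbours[v]} - D\n    C = full - D - A\n    return D, A, C\n\\end{verbatim}\n\n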
This reduction implies a pseudo-deterministic $\\NC$ algorithm as we only need to use randomization for determining the size of the matching, which always returns the same result. For single crossing minor free graphs, using the $\\NC$ algorithm of \\cite{EV21} for finding a perfect matching and our $\\SPL$ algorithm for finding a maximum matching in bipartite graphs, we have the following result:\n\\begin{theorem}\nMaximum matching in single-crossing minor free graphs (not necessarily bipartite) is in pseudo-deterministic \\NC.\n\\end{theorem}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:intro}\nSolar coronal jets are collimated plasma ejections that occur in the solar corona and they offer ways for plasma and particles to enter interplanetary space. They are transient (tens of minutes) but ubiquitous, with a typical height of $\\sim5\\times10^4$ km and a typical width of $\\sim8\\times10^3$ km \\citep[e.g.][]{2007PASJ...59S.771S}. X-ray emissions from coronal jets were first observed by the Soft X-ray Telescope (SXT) onboard \\textit{Yohkoh} in the early 1990s \\citep{1992PASJ...44L.173S}. Since then, coronal jets have been studied in various aspects including morphology, dynamics, driving mechanisms, and more \\citep[see, e.g.,][]{2016SSRv..201....1R, 2021RSPSA.47700217S}. Jets or jet-like events have also been observed in other wavelengths such as extreme ultraviolet (EUV), ultraviolet (UV), and H alpha (those studied in H alpha are historically known as ``surges'') \\citep[e.g.][]{1997Natur.386..811I, 1999ApJ...513L..75C,2009SoPh..259...87N}. These wavebands cover a wide range of plasma temperatures (from chromospheric to coronal), and one important feature of jets is the presence of both hot and cool components in many events \\citep{2010ApJ...720..757M, 2013ApJ...769..134M}.\n\nCurrent models generally suggest that jets are formed by magnetic reconnection between open and closed magnetic field lines; however, the detailed triggering process for such magnetic reconnection is still not fully clear. In the emerging flux model, jets are generated through interchange reconnection when field lines of the newly emerging magnetic flux reach those of the pre-existing open field \\citep{1992PASJ...44L.173S}. As shown in the 2D simulation by \\citet{1995Natur.375...42Y, 1996PASJ...48..353Y}, this model could successfully produce a hot jet and an adjacent cool jet (or surge) simultaneously. The embedded-bipole model developed by \\citet[][etc.]{2015A&A...573A.130P, 2016A&A...596A..36P} considers a 3D fan-spine topology where magnetic reconnection occurs around the 3D null point. In their simulation, straight jets are generated through slow reconnection at the current sheet and driven by magnetic tension, while helical jets are generated through explosive magnetic reconnection triggered by a kink-like instability and driven by a rapid untwisting process of magnetic field lines. Recently, a few studies have reported small-scale filament structures (known as ``minifilaments'') at the base of some coronal jets, leading to the minifilament eruption model \\citep{2015Natur.523..437S, 2016ApJ...821..100S, 2016ApJ...832L...7P, 2017ApJ...844..131P, 2018ApJ...853..189P, 2017Natur.544..452W, 2018ApJ...852...98W, 2018ApJ...859....3M, 2019ApJ...882...16M}. This model suggests that jets are generated through miniature filament eruptions similar to those that drive larger eruptive events such as coronal mass ejections (CMEs). 
In addition to the external\/interchange magnetic reconnection, this process also involves internal magnetic reconnection inside the filament-carrying field, and the jet bright point (JBP, which corresponds to the solar flare arcade in the larger-scale case) appears underneath the erupting minifilament. Many recent observations have shown that the triggers for these minifilaments eruptions are usually magnetic flux cancellation \\citep{2011ApJ...738L..20H, 2012A&A...548A..62H, 2014ApJ...783...11A, 2016ApJ...832L...7P, 2017ApJ...844..131P, 2018ApJ...853..189P, 2019ApJ...882...16M, 2021ApJ...909..133M}.\n\nHXR observations can also provide helpful insights into jet formation mechanisms by constraining energetic electron populations within coronal jets. \\citet{2011ApJ...742...82K} investigated HXR emissions for 16 flare-related energetic electron events and found that 7 of them showed three distinct HXR footpoints, which was consistent with the interchange reconnection geometry. (In the remaining events, the fact that they showed less then three sources was likely due to instrument limitations.) Also in that study, EUV jets were found in all 6 events that had EUV data coverage. HXR bremsstrahlung emissions could also directly come from coronal jets if there are energetic electrons, but those extended sources are usually much fainter than the footpoint sources and only a few studies \\citep{2009A&A...508.1443B, 2012ApJ...754....9G} have reported such observations. More recently, \\citet{2018ApJ...867...84G} combined HXR observations with microwave emission, EUV emission, and magnetogram data, performing 3D modeling of electron distributions for a flare-related jet. They obtained direct constraints on energetic electron populations within that event. \\citet{2020ApJ...889..183M} carried out a statistical study of 33 flare-related coronal jets using HXR and EUV data, and they observed non-thermal emissions from energetic electrons in 8 of these events. They also studied the relation between jets and the associated flares but found no clear correlations between jet and flare properties. \n\nIn most of the previous studies of coronal jets, hot plasma and HXR emissions were found near the base of the jet (the location of the primary reconnection site) \\citep[e.g.][]{2011ApJ...742...82K, 2016A&A...589A..79M, 2020ApJ...889..183M}. However, for two coronal jets on November 13, 2014, HXR thermal emissions were observed near the far end of the jet spire (hereafter the ``top''). In fact, in the second event which had full HXR coverage, HXR emissions were observed at three different locations: the base of the jet, the top of the jet, and a location to the north of the jet. Here we present a multi-wavelength analysis of these two jets using data from the Atmospheric Imaging Assembly (AIA) onboard the \\textit{Solar Dynamic Observatory} (\\textit{SDO}), the \\textit{Reuven Ramaty High Energy Solar Spectroscopic Imager} (\\textit{RHESSI}), the X-ray Telescope (XRT) onboard \\textit{Hinode}, and the \\textit{Interface Region Imaging Spectrograph} (\\textit{IRIS}). We found that all these different HXR sources showed evidence of mildly accelerated electrons, and particle acceleration also happened near the jet top in addition to the site at the jet base. To our knowledge, this is the most thorough HXR study to date of particle acceleration in coronal jets.\n\nThe paper is structured as follows: In section \\ref{sec:data}, we describe the observations from each instrument. 
In Section \\ref{sec:analysis}, we show results from differential emission measure (DEM) analysis, imaging spectroscopy, and velocity estimation. In Section \\ref{sec:discussion}, we calculate the energy budget for one of the jets, discuss the interpretations of observational results, and compare them with jet models. Finally, in Section \\ref{sec:summ}, we summarize the key findings of this work. \n\n\n\n\\section{Observations} \\label{sec:data}\nOn November 13, 2014, more than ten recurrent jets were ejected from NOAA Active Region 12209 near the eastern solar limb at different times throughout the day. While most (if not all) of the jets can be identified in one or more AIA channels, only two events, SOL2014-11-13T17:20 and SOL2014-11-13T20:47, were simultaneously observed by AIA and {\\textit{RHESSI}}. We select these two flare-related jets for this study, and we add supporting observations from XRT and {\\textit{IRIS}}. The associated flares are GOES class C1.5-1.7 without background subtraction (see top row of Figure \\ref{fig:t_profile}), or B2.4-3.7 with background subtraction.\n\n\n\\begin{figure}\n\t\\includegraphics[width=0.5\\textwidth]{fig_t_profile1.pdf}\n\t\\includegraphics[width=0.5\\textwidth]{fig_t_profile2.pdf}\n\t\\caption{Time profiles of the $\\sim$17:20 jet (left) and the $\\sim$20:50 jet (right) on November 13, 2014. Top row: GOES light curves in the 1-8 {\\AA} channel. Second row: GOES light curves in the 0.5-4 {\\AA} channel. Third row: {\\textit{RHESSI}} emission in 3-6 keV (black) and 6-12 keV (red), using detectors 3, 6, 8, and 9. The first event only has partial coverage from {\\textit{RHESSI}} due to spacecraft night. Fourth row: Examples of AIA EUV emissions from the jet base\/top. Blue lines show light curves of a 3\"$\\times$3\" box at the base of each jet in the 304 {\\AA} channel, and red lines show light curves of a 3\"$\\times$3\" box at the top of each jet in the 131 {\\AA} channel (boxes are not shown). Bottom row: XRT measurements of selected regions (3\"$\\times$3\", not shown) at the jet base (blue) and the jet top (red) in the thin-Be filter. Both events only have partial coverage from XRT.}\n\t\\label{fig:t_profile}\t\n\\end{figure}\n\n\n\\subsection{AIA data}\n\nThe AIA instrument provides full-disk solar images in ten EUV\/UV\/visible-light channels with a spatial resolution of 1.5 arcsec \\citep{2012SoPh..275...17L}. In this work we use data from the seven EUV channels of AIA: 94 {\\AA}, 131 {\\AA}, 171 {\\AA}, 193 {\\AA}, 211 {\\AA}, 304 {\\AA}, and 335 {\\AA}, which have a cadence of 12 seconds and cover plasma temperatures from $\\sim$0.05 MK up to $\\sim$20 MK.\n\nFigure \\ref{fig:aia_im} shows AIA images of the two jets in the 131 {\\AA} and 304 {\\AA} channels at selected times. At the beginning of each event, a minifilament (indicated by yellow arrows) was identified in multiple AIA channels at the base of the jet. After the minifilament eruption, a JBP (indicated by white arrows) appeared underneath the prior minifilament location.\n\nInterestingly, both events showed slightly different jet evolution in cool and hot AIA channels. In cooler channels including 171 {\\AA}, 193 {\\AA}, 211 {\\AA}, 304 {\\AA}, and 335 {\\AA}, the first jet started at $\\sim$17:15 UT, reached its maximum extent at $\\sim$17:23 UT, and lasted about 20 minutes. 
However, in hot channels that are sensitive to $\\gtrsim$10 MK plasma (94 {\\AA} and 131 {\\AA}, particularly), the jet reached its maximum height within five minutes after the same starting time; then it slightly expanded transversely and gradually faded away in a much longer time. (The 193 {\\AA} channel in principle could also measure hot plasma \\citep{2012SoPh..275...17L}, but its response was dominated by temperatures below 10 MK and thus looked like a cool channel.) Similar behavior was observed in the later jet, which started at $\\sim$20:40 UT and reached its maximum extent at $\\sim$20:47 UT in the hotter 94 {\\AA} and 131 {\\AA} channels or at $\\sim$20:50 UT in the rest of the channels. The jet had already disappeared in those cooler channels before 20:58 UT, but it was visible in 94 {\\AA} and 131 {\\AA} for more than an hour. \n\n\\begin{figure*}\n\t\\centering\n\t\\includegraphics[width=0.98\\textwidth]{fig_aia.pdf}\n\t\\caption{AIA 131 {\\AA} and 304 {\\AA} images of the two jets at selected times. The top two rows show the evolution of the earlier jet and the bottom two rows show the evolution of the later jet. The yellow arrows point to the minifilament while the blue arrows point to the JBP in each event. Both jets reached their maximum extents at earlier times and lasted longer in the hotter 131 {\\AA} channel (sensitive to both $\\sim$0.4 MK and $\\sim$10 MK temperatures) compared to the cool 304 {\\AA} channel (sensitive to chromospheric temperatures around 0.05 MK).}\n\t\\label{fig:aia_im}\t\n\\end{figure*}\n\n\n\n\\subsection{RHESSI data} \\label{sec:rhessi}\n\n{\\textit{RHESSI}} was a solar-dedicated HXR observatory launched in 2002 and decommissioned in 2018. It consisted of nine rotating modulation collimators, each placed in front of a cooled germanium detector, and used indirect Fourier imaging techniques. {\\textit{RHESSI}} measured both images and spectra over the full sun in the energy range of 3 keV - 17 MeV and had good spatial and energy resolutions especially for lower energies (2.3 arcsec and $\\sim$1 keV, respectively) \\citep{2002SoPh..210....3L}. \n\n{\\textit{RHESSI}} was in eclipse during 16:46-17:23 UT, so it didn't capture the entire first jet; but {\\textit{RHESSI}} did have full coverage for the later jet. Figure \\ref{fig:rhessi_im} shows {\\textit{RHESSI}} images in 3-12 keV using detectors 3, 6, 8, and 9. All images were produced using the CLEAN algorithm in the HESSI IDL software package. In both events, HXR emissions were observed near the top of the jet. Furthermore, time slices of the later jet show that there were actually three HXR sources in that event. The first HXR source appeared at the base of the jet a few minutes after the jet's starting time and peaked at around 20:46 UT. The location of this source is consistent with the erupting minifilament site where magnetic reconnection took place. Meanwhile, starting from $\\sim$20:46 UT, the second HXR source appeared near the top of the jet and it became dominant during 20:48-20:51 UT. Finally, after the source at the jet top had faded away, another HXR source was observed to the north of the jet which reached its maximum intensity at $\\sim$20:53 UT.\n\n\\begin{figure*}\n\t\\centering\n\t\\includegraphics[width=0.99\\textwidth]{fig_rhessi.pdf}\n\t\\caption{{\\textit{RHESSI}} contours in 3-12 keV overlaid on the AIA 94 {\\AA} images. Panel (a) shows an image of the earlier event and panels (b)-(j) show time slices of the later event. 
HXR emissions were observed near the top of the jet in both events. The later event showed three different HXR sources: one at the base of the jet at $\\sim$20:46 UT (panel c), one near the top of the jet at $\\sim$20:50 UT (panel g), and one to the north of the jet at $\\sim$20:53 UT (panel j).}\n\t\\label{fig:rhessi_im}\t\n\\end{figure*}\n\n\n\\subsection{XRT data} \\label{sec:xrt}\nXRT provides additional coverage of high-temperature plasma beyond AIA hot channels and {\\textit{RHESSI}}, though only data in the thin-Be filter for part of each jet are available. This filter is sensitive to plasma temperatures around 10 MK, and shows very similar jet behavior as the AIA 94 {\\AA} and 131 {\\AA} filters. Here we include these data as supplementary observations.\n\nThin-Be filter data were available for the first 15 minutes of the earlier jet (before 17:30 UT) with a cadence of a half minute. The jet started with a very fast flow at $\\sim$17:15 UT, and reached its maximum extent in just a few minutes. Then after $\\sim$17:20 UT, it grew slightly wider and remained visible towards the end of the observation time (Figure \\ref{fig:xrt_iris}). As for the later jet, XRT missed most of its erupting process since no data were available between 20:45 UT and 21:09 UT. But after 21:09 UT, the jet was still visible in the thin-Be filter until it finally faded away at $\\sim$22:00 UT (not shown).\n\n\n\\subsection{IRIS data} \n{\\textit{IRIS}} has full temporal coverage and partial spatial coverage of the earlier jet in its 1330 {\\AA} slit-jaw images. This channel is sensitive to temperatures around 0.02 MK with a spatial resolution of 0.33 arcsec and a cadence of 10 seconds, thus it helps to investigate the dynamics of plasma at chromospheric temperatures. For the earlier event, the jet was at the corner of the field of view and most part of the jet body (but no jet base or top) was captured in these images (Figure \\ref{fig:xrt_iris}). Jet evolution in this channel was similar to that in the AIA 304 {\\AA} channel, and we used these data (in addition to the AIA data) to estimate jet velocities (see Section \\ref{velocities}). However, this channel had much less coverage of the later jet and it was not considered for that event. \n\n\n\\begin{figure*}\n\t\\centering\n\t\\includegraphics[width=0.8\\textwidth]{fig_xrt_iris.pdf}\n\t\\caption{XRT and {\\textit{IRIS}} images of the two jets at selected times. Panels (a)-(d): XRT Be-thin images of the earlier event. Jet evolution is similar to that in AIA hot filters (94 {\\AA} and 131 {\\AA}). Panels (e)-(g): XRT Be-thin images of the later event. No data was available between 20:44 and 21:09 UT. Panel (h): An {\\textit{IRIS}} SJI 1330{\\AA} image of the earlier event. The jet was located at the corner of the {\\textit{IRIS}} field of view. }\n\t\\label{fig:xrt_iris}\t\n\\end{figure*}\n\n\n\\section{Data analysis} \\label{sec:analysis}\n\n\\subsection{Differential emission measure (DEM) analysis} \\label{sec:DEM}\nWe carried out a differential emission measure (DEM) analysis to investigate the temperature profile for these two events. A DEM describes a plasma distribution with respect to temperature along the line of sight, and it is directly related to the observed flux $F$ for a particular instrument via\n\\begin{equation}\n\tF=\\int{R(T) \\cdot DEM(T) \\, dT},\n\\end{equation} \nwhere $R$ is the temperature response of that instrument. 
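\nAs a concrete illustration of this forward model, the integral can be discretized on a temperature grid and evaluated channel by channel. The short sketch below is purely illustrative: the temperature grid, the response function, and the trial DEM are synthetic placeholders rather than the ones used in this work.\n\\begin{verbatim}\nimport numpy as np\n\n# synthetic temperature grid, instrument response R(T) and trial DEM(T)\nlogT = np.linspace(5.5, 7.5, 41)              # log10 of temperature [K]\nT = 10.0**logT\nR = np.exp(-(logT - 7.0)**2 * 8.0)            # toy response peaking near 10 MK\ndem = 1e21 * np.exp(-(logT - 6.3)**2 * 4.0)   # toy DEM [cm^-5 K^-1]\n\n# F = integral of R(T) * DEM(T) dT, evaluated with the trapezoidal rule\nintegrand = R * dem\nflux = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(T))\nprint('predicted flux (arbitrary units):', flux)\n\\end{verbatim}\nIn practice this forward calculation is repeated for every channel (the AIA filters and, when included, the {\\textit{RHESSI}} energy bands and the XRT filter), and the inversion searches for a DEM that reproduces all of the measured fluxes within their uncertainties.\n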
In this analysis, we used the regularization method developed by \\citet{2012A&A...539A.146H, 2013A&A...553A..10H} for DEM inversion. We considered two different data selections: (a) AIA bandpass filter data only, where we used data from six AIA bandpass filters that are sensitive to coronal temperatures: 94 {\\AA}, 131 {\\AA}, 171 {\\AA}, 193 {\\AA}, 211 {\\AA}, and 335 {\\AA}; and (b) a combination of multi-instrument data, where we used the same set of AIA data, together with HXR measurements in 4-5, 5-6, and 6-7 keV bands from {\\textit{RHESSI}}, and thin-Be filter data from XRT if available. The {\\textit{RHESSI}} 4-5 keV and 5-6 keV energy bands were selected because they measure plasma temperature via the bremsstrahlung continuum, and the 6-7 keV energy band was particularly important as it includes the 6.7 keV Fe line complex. The uncertainties for AIA data were estimated via the SSWIDL procedure ``\\texttt{aia\\textunderscore bp\\textunderscore estimate\\textunderscore error.pro}'', added in quadrature with a systematic error of 10\\%. The uncertainties for {\\textit{RHESSI}} and XRT data were both estimated as 20\\%.\n\nThe temperature responses for AIA and XRT filters were generated through SSWIDL routines ``\\texttt{aia\\textunderscore get\\textunderscore response.pro}'' and ``\\texttt{make\\textunderscore xrt\\textunderscore temp\\textunderscore resp.pro}'', respectively. To obtain the temperature responses for {\\textit{RHESSI}} in different energy bands, we first calculated the isothermal HXR spectra as a function of energy for multiple temperatures ranging from 3 MK to 30 MK, using the SSWIDL routine ``\\texttt{f\\textunderscore vth.pro}''. Thus for each energy band, we obtained a series of photon fluxes at different temperatures, which would correspond to the temperature response (in photon space) for that energy band after applying proper normalization. (The {\\textit{RHESSI}} instrument response was already taken into account when producing HXR images, thus it did not need to be included in the temperature response.) In above calculation, coronal abundances were adopted.\n\nWe calculated the DEMs for four regions where HXR emissions were observed, around the times when each HXR source reached its maximum intensity: the top of the earlier jet at 17:24 UT, the base of the later jet at 20:46 UT, the top of the later jet at 20:50 UT, and the loop to the north of the later jet at 20:53 UT. Each region was selected based on contours in AIA 131 {\\AA} images and the observed intensities were averaged over the whole region. We first obtained the DEM results using AIA data only (black lines in Figure \\ref{fig:dem_ave}), as well as the corresponding residuals in data space (asterisks in Figure \\ref{fig:dem_ave}). All these AIA-alone DEMs indicate the existence of multi-thermal plasma, each with a high-temperature component peaking around 10 MK. However, although these AIA-alone DEMs had good enough predictions in the AIA channels that were used for the DEM inversion, they failed to predict the HXR measurements well. 
As shown by the blue asterisks in the residual plots, the photon fluxes predicted by the AIA-alone DEMs in the {\\textit{RHESSI}} 4-5 keV and 5-6 keV energy bins were always lower than the actual measurements, and the line emissions in the 6-7 keV energy bin were very prominent compared to the bremsstrahlung continuum.\n\nTo have a better understanding of the level of agreement between AIA and {\\textit{RHESSI}}, we carried out a more complete quantitative comparison of HXR fluxes between these two instruments. In this exercise, we predicted the HXR spectrum in the 3-15 keV energy range for the top region of the earlier jet using the AIA-alone DEM and compared it with the spectrum directly measured by {\\textit{RHESSI}}. The predicted HXR spectrum was calculated according to Eq.(1), with $R$ being the {\\textit{RHESSI}} temperature response. The results are shown in the left panel of Figure \\ref{fig:comp}, where the AIA-predicted HXR fluxes were consistently lower than the {\\textit{RHESSI}} fluxes, indicating a possible cross-calibration factor between AIA and {\\textit{RHESSI}}. Again, the AIA-alone DEM predicted much stronger line emissions in the 6-7 keV energy bin over the continuum as compared to the actual {\\textit{RHESSI}} observation. These disagreements could be due to instrumental effects that are not well understood, such as the change in {\\textit{RHESSI}} blanketing over time, or to possible ``non-standard'' elemental abundances in these events that we are unable to characterize.\n\nBecause of the discrepancies mentioned above, incorporating {\\textit{RHESSI}} data into this DEM analysis is challenging. To obtain a DEM solution that could successfully predict both the HXR continuum and the line feature at the same time, we found that a cross-calibration factor between AIA and {\\textit{RHESSI}} was required and the initial DEM guess must be very carefully chosen. We had the best chance of success when using a ``modified'' AIA-alone DEM as the initial guess, where we substituted the high-temperature component of each AIA-alone DEM with a Gaussian distribution that peaked around the temperature given by {\\textit{RHESSI}} spectroscopy (more details of spectroscopy will be discussed in Section \\ref{sec:imag_spec}). The height, width, and exact peak location of that Gaussian distribution were tested with a series of values, and we selected the most robust ones. In this part of the analysis, we scaled down the photon fluxes from {\\textit{RHESSI}} by a factor of 3.5, but this cross-calibration factor could be in the range of 3-5 and was not well constrained. Incidentally, this factor is similar to the AIA-{\\textit{RHESSI}} discrepancy found by \\citet{2013ApJ...779..107B}. In addition, in some literature a factor of 2-3 was suggested for cross-calibration between AIA and XRT \\citep[e.g.][]{2015ApJ...806..232S, 2017ApJ...844..132W}. We found that multiplying the XRT response by a factor of 2 would result in a better agreement between the predicted and measured Be-thin filter data; thus that factor was also included here.\n\nThe joint DEMs (red lines) and the data-space residuals (red triangles) are also plotted in Figure \\ref{fig:dem_ave}, along with the AIA-alone DEM results. These joint DEMs are the only set of solutions we found that fit both the line emission and the bremsstrahlung continuum well. 
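\nThe construction of this ``modified'' initial guess can be sketched as follows; the grid, amplitudes, and widths below are placeholders chosen for illustration, not the values adopted in this work.\n\\begin{verbatim}\nimport numpy as np\n\n# AIA-alone DEM split into a cool part (kept) and a hot part (replaced)\nlogT = np.linspace(5.5, 7.5, 41)\ncool = 1e21 * np.exp(-(logT - 6.3)**2 * 4.0)\n\n# new hot component: a Gaussian peaked near the RHESSI-derived temperature\nlogT_fix = np.log10(10.5e6)     # peak location [log10 K], from spectroscopy\nsigma = 0.10                    # assumed width in log10(T)\namp = 3e20                      # assumed height, tuned for robustness\nhot_new = amp * np.exp(-0.5 * ((logT - logT_fix) * sigma**-1)**2)\n\ndem_init = cool + hot_new       # initial guess for the regularized inversion\n\nxcal = 3.5                      # RHESSI photon fluxes are scaled down by xcal\n\\end{verbatim}\nStarting the regularized inversion from such a guess, with the cross-calibration factor applied to the {\\textit{RHESSI}} fluxes, yields the joint DEMs discussed here.\n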
For all the selected regions, the joint DEMs have very similar cool components as the AIA-alone DEMs, but the hot components of the joint DEMs tend to be more isothermal and slightly cooler. Particularly, HXR constraints significantly reduced the amount of plasma above $\\sim$15 MK (otherwise the predicted line emission was always too prominent). However, previous studies have seen larger discrepancies between bandpass filter DEMs and the ones that included HXR constraints. For example, in the DEM analysis for a quiescent active region presented by \\citet{2009ApJ...704..863S}, a high-temperature component that peaked around $10^{7.4}$K was found when only using data from XRT filters, but the DEM for that component was reduced by more than one order of magnitude when combining observations from both XRT and {\\textit{RHESSI}}. Compared to that study, the AIA-alone DEMs here are not too far from the joint DEMs that incorporated HXR data.\n\nWe further compared the HXR fluxes predicted by the joint DEM with {\\textit{RHESSI}} measurements (with a cross-calibration factor applied) for the top region of the earlier jet, as shown in the right panel of Figure \\ref{fig:comp}. As expected, the two HXR spectra had a much better agreement at lower energies than the spectra predicted from AIA-only DEMs did, including both the overall continuum and the line feature. Besides, for higher energies around 10 keV, {\\textit{RHESSI}} measurements had systematically higher emissions, suggesting a possible non-thermal component for this source. This is consistent with our later findings through spectral analysis (Section \\ref{sec:imag_spec}). As a side note, later spectral analysis suggests that for some of the HXR sources non-thermal emissions might dominate in the 6-7 keV (and maybe 5-6 keV) energy bin(s). In this scenario, the fluxes from those thermal sources would be even lower and the joint-DEMs shown here provide upper limits for the possible amount of hot plasma. \n\nAs the last part of the DEM analysis, we examined the temporal evolution of the DEM maps in 11-14 MK (i.e. the hot component) for each event (Figures \\ref{fig:em1} and \\ref{fig:em2}). Because of the missing {\\textit{RHESSI}} and XRT data, and the fact that the AIA-alone DEMs in this temperature range qualitatively agree with the joint DEMs, these DEM maps were all generated using AIA data only. In both events, hot plasma first appeared at the base of the jet (which was the same location from where the minifilament erupted and magnetic reconnection occurred); however, as the hot plasma at the jet base gradually cooled down, more and more hot plasma was observed near the top of the jet and that location was mostly stationary. These DEM maps show consistent results with the location and temporal evolution of the {\\textit{RHESSI}} HXR sources.\n\n\\begin{figure}\n\t\\plotone{fig_dem_comp.pdf}\n\t\\caption{DEMs and data-space residuals (defined as (model-data)\/error) for four different selected regions: the top of the earlier jet at 17:24 UT (top left), the base of the later jet at 20:47 UT (top right), the top of the later jet at 20:49 UT (bottom left), and a loop to the north of the later jet at 20:53 UT (bottom right). The black lines show results for the DEM inversion that used AIA data only, and the red lines show results for the DEM inversion that used multi-instrument data from AIA, {\\textit{RHESSI}}, and XRT if available. 
The AIA-alone DEMs and the joint DEMs agree qualitatively, but the joint DEMs require a more isothermal and slightly cooler high-temperature component for each source. In the residual plots, asterisks stand for results from the AIA-alone DEMs (among which black asterisks show the residuals in AIA channels that were used for the DEM inversion while blue asterisks show the residuals in {\\textit{RHESSI}} energy bands and possibly XRT Be-thin filter predicted by this DEM), and red triangles stand for results from the joint DEMs. The HXR fluxes predicted by the joint DEMs have a much better agreement with the actual data.}\n\t\\label{fig:dem_ave}\t\n\\end{figure}\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.8\\textwidth]{fig_hxr_comp.pdf}\n\t\\caption{\\textit{Left}: HXR spectrum for the source at the top of the earlier jet deduced from the AIA-alone DEM (black), compared to {\\textit{RHESSI}} measurements (red). \\textit{Right}:\u00a0HXR spectrum for the same source deduced from the joint DEM (black), compared to {\\textit{RHESSI}} measurements with a cross-calibration factor applied (red).\u00a0With a cross-calibration factor of 3.5, the HXR spectrum from the joint DEM could successfully predict the bremsstrahlung continuum and the line feature at 6.7 keV simultaneously.}\n\t\\label{fig:comp}\t\n\\end{figure}\n\n\\begin{figure}\n\t\\plotone{fig_emmaps1.pdf}\n\t\\caption{Temporal evolution of the DEM maps in 11-14 MK for the earlier jet. Hot plasma appeared at the base of the jet during the first few minutes, but starting from $\\sim$17:20 more and more hot plasma was observed near the top of the jet and the top source became dominant at $\\sim$17:22. Color scale is in units of cm$^{-5}$ K$^{-1}$.}\n\t\\label{fig:em1}\t\n\\end{figure}\n\n\\begin{figure}\n\t\\plotone{fig_emmaps2.pdf}\n\t\\caption{Temporal evolution of the DEM maps in 11-14 MK for the later jet. Similar to the earlier event, hot plasma first appeared at the base of the jet, but was also observed near the top of the jet starting from $\\sim$20:45 and the top source became dominant at $\\sim$20:50. Color scale is in units of cm$^{-5}$ K$^{-1}$.}\n\t\\label{fig:em2}\t\n\\end{figure}\n\n\n\\subsection{Jet velocities} \\label{velocities}\n\nIdentifying different velocities associated with the jet will be helpful to differentiate possible mechanisms behind those jets. A common method for velocity estimation is making time-distance plots \\citep[e.g.][]{2016A&A...589A..79M, 2020ApJ...889..183M}. Such plots are usually produced by putting together time slices of the intensity profile along the direction of the jet, in which case the jet velocities (in the plane of the sky) are the slopes. To take into account everything within the width of the jets, here we selected a rectangular region around each jet and we summed the intensities across the width of this region. \n\nFigure \\ref{fig:td1} shows the time-distance plots for the earlier jet using seven EUV filters of AIA and the slit-jaw 1330{\\AA} filter of {\\textit{IRIS}}. Interestingly, the chromospheric filters (AIA 304 {\\AA} and {\\textit{IRIS}} 1330 {\\AA}) are the ones where the velocities are most clearly identified and show very consistent results. 
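\nA minimal sketch of how such a time-distance map, and the plane-of-sky speed from its slopes, can be obtained is given below; the data cube and the picked points are placeholders rather than actual measurements.\n\\begin{verbatim}\nimport numpy as np\n\n# 'cube' stands for a stack of co-aligned cutout images around the jet,\n# with axes (time, distance along the jet, distance across the jet)\nn_t, n_along, n_across = 100, 200, 40\ncube = np.random.rand(n_t, n_along, n_across)   # placeholder data\n\ntd = cube.sum(axis=2)   # sum across the jet width -> (time, distance) map\n\n# plane-of-sky speed from the slope of a bright feature in the map:\n# pick (time, distance) pairs along the feature and fit a straight line\nt_s = np.array([0.0, 120.0, 240.0])     # seconds, placeholder\nd_km = np.array([0.0, 2.5e4, 5.0e4])    # kilometres, placeholder\nspeed = np.polyfit(t_s, d_km, 1)[0]\nprint('plane-of-sky speed [km per second]:', speed)\n\\end{verbatim}\n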
The 304 {\\AA} filter shows multiple upward velocities ranging from 104 km\/s to 226 km\/s, while the 1330 {\\AA} filter shows upward velocities ranging from 83 km\/s to 404 km\/s (the uncertainties for those velocities are on the order of 10-20$\\%$ considering the pixel size and the temporal cadence of the images). Also, both filters clearly indicate that some plasma returned to the solar surface (possibly) along the same trajectory as the original jet, with downflow velocities of $\\sim$110 km\/s. For the rest of the AIA filters, similar upward and downward velocities as mentioned above can partly be seen in the 171 {\\AA} filter (sensitive to $\\sim$0.6 MK plasma), but could barely be seen in other ones. However, there were also some really fast outflows at the beginning of this jet in the 131{\\AA} filter (sensitive to both $\\sim$0.4 MK and $\\sim$10 MK plasma), which has a velocity of $\\sim$700 km\/s.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.75\\textwidth]{fig_td_jet1_paper.pdf}\n\t\\caption{Time-distance plots of the earlier jet in seven AIA EUV filters and the {\\textit{IRIS}} 1330 {\\AA} filter. Slopes for velocity calculation are shown as dashed lines. The chromospheric filters (AIA 304 {\\AA} and {\\textit{IRIS}} 1330 {\\AA}) show various upward velocities around 200 km\/s and downward velocities around 110 km\/s. The AIA 131 {\\AA} filter (sensitive to both $\\sim$0.4 MK and $\\sim$10 MK) shows a faster upward velocity of 694 km\/s at the beginning of the jet. In the rest of the AIA filters, velocities are less apparent. (The black line at $\\sim$17:21 in the 171 {\\AA} plot is due to some instrument issue.)}\n\t\\label{fig:td1}\t\n\\end{figure}\n\nThe time-distance plots for the later jet describe a slightly different picture (Figure \\ref{fig:td2}). The 304 {\\AA} filter again shows multiple upward velocities ranging from 192 km\/s to 251 km\/s, and downward velocities around 130 km\/s (the uncertainty is on the order of 10-20\\%). However, these main upward velocities\u00a0can be clearly identified in all seven AIA filters, including both cool and hot ones. Besides, in the 131 {\\AA} filter, a faster outflow at the beginning of the jet is still identifiable but harder to see compared to the earlier event, and the velocity for this outflow is 377 km\/s. These velocities will be compared to other studies and to models in Section \\ref{sec:v_discuss}.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.75\\textwidth]{fig_td_jet2_paper.pdf}\n\t\\caption{Time-distance plots of the later jet in seven AIA EUV filters. Slopes for velocity calculation are shown as dashed lines. Unlike the earlier event, the main upward velocities can be clearly identified in all filters. The AIA 131 {\\AA} filter still shows a faster upward velocity of 377 km\/s at the beginning of the jet, but this velocity is less apparent.}\n\t\\label{fig:td2}\t\n\\end{figure}\n\n\n\\subsection{Imaging spectroscopy} \\label{sec:imag_spec}\n\nTo study the accelerated electron populations in these events, we performed imaging spectroscopy for the four HXR sources observed by {\\textit{RHESSI}}. For each source, a one-minute time interval during which the source reached its maximum HXR intensity was first selected by eye based on the {\\textit{RHESSI}} images. These images were produced using the CLEAN algorithm and detectors 3, 6, 8, and 9. Then we chose a circular region which contained that source and obtained the spectrum for the selected region. 
Finally, we carried out spectral fitting using the OSPEX software package in the energy range of 3-15 keV. \n\nAs mentioned in Section \\ref{sec:DEM}, the comparison of the joint DEM and {\\textit{RHESSI}} measurements suggests a non-thermal component in the HXR spectrum. To further confirm this, we first fitted the spectra with an isothermal model (not shown), but the models always overpredicted the fluxes at the 6.7 keV line complex and had systematically low emissions at energies above 10 keV for all the sources. This indicated that there should be another component in the spectra, either due to a second thermal distribution or a non-thermal distribution. However, the results of fitting for a double thermal model (not shown) had unphysical fit parameters for one of the thermal components, and it again overpredicted the line emission, making this scenario unlikely. Therefore, we confirmed that there should be non-thermal emissions in these events. We then added a thick-target non-thermal component to the fitting (the justification for the thick-target regime will be discussed in Section 4.2), and could obtain good fits across the entire observed energy range. (We used the temperatures from those fits as a reference when generating the initial DEM guess for the joint DEM inversion.) However, due to the limited number of energy bins and the number of free parameters, some of the fit parameters were not well constrained and had uncertainties over 100$\\%$. To further reduce the uncertainties, we performed another fitting with a fixed temperature, which was chosen to be the average temperature of the hot component derived from the joint DEMs. The resulting spectra are shown in Figure \\ref{fig:spec} and the parameters are reported in Table \\ref{tab:spec_fit}. Interestingly, the non-thermal electron power laws in all sources have similar spectral indices around 10 and low energy cutoffs around 9 keV. While these non-thermal power laws are steeper than in most flares, the parameters are consistent with the range found for microflares by \\citet{2008ApJ...677..704H}.\n\n\\begin{figure}\n\t\\plotone{fig_spec_paper.pdf}\n\t\\caption{{\\textit{RHESSI}} spectra for the four HXR sources observed in these two events, each during a one-minute interval when the source approximately reached its maximum HXR intensity. All spectra can be fitted well with an isothermal (blue) plus thick-target (red) model. 
Note that in these fits, the isothermal temperatures were fixed to be the average temperatures of the hot components derived from the joint DEMs.}\n\t\\label{fig:spec}\t\n\\end{figure}\n\n\\begin{deluxetable*}{lcccccc}\n\t\\tablenum{1}\n\t\\tablecaption{Fit parameters for the four {\\textit{RHESSI}} HXR sources assuming an isothermal plus thick-target model \\label{tab:spec_fit}}\n\t\\tablehead{\n\t\t\\colhead{} & \\colhead{Time} & \n\t\t\\colhead{Emission measure} &\n\t\t\\colhead{Temperature} & \\colhead{spectral index} &\n\t\t\\colhead{low energy cutoff} \\\\\n\t\t\\colhead{} & \\colhead{} &\n\t\t\\colhead{($10^{46} \\mathrm{cm}^{-3}$)} &\n\t\t\\colhead{(MK, fixed)} & \\colhead{} & \\colhead{(keV)} \n\t}\n\t\\startdata\n\tJet 1 top source & 17:23:30-17:24:30 & 1.8 $\\pm$ 0.5 & 11.1 & 11.4 $\\pm$ 3.6 & 9.3 $\\pm$ 1.6 \\\\\n\tJet 2 base source & 20:46:00-20:47:00 & 4.7 $\\pm$ 1.0 & 9.9 & 9.1 $\\pm$ 0.9 & 8.5 $\\pm$ 0.9 \\\\\n\tJet 2 top source & 20:50:00-20:51:00 & 4.1 $\\pm$ 0.5 & 10.5 & 10.3 $\\pm$ 1.4 & 9.4 $\\pm$ 1.2 \\\\\n\tJet 2 northern source & 20:53:00-20:54:00 & 3.9 $\\pm$ 1.3 & 9.6 & 8.5 $\\pm$ 0.8 & 8.7 $\\pm$ 1.0 \n\t\\enddata\n\\end{deluxetable*}\n\n\n\n\\section{Discussion} \\label{sec:discussion}\n\\subsection{Jet velocities and driving mechanisms} \\label{sec:v_discuss}\nIn these two events, we observed two types of upward velocities in jets. One type falls within the range of 80-400 km\/s, most clearly seen in the AIA 304 {\\AA} and {\\textit{IRIS}} 1330 {\\AA} filters (sensitive to chromospheric temperatures) and possibly visible in other filters. This type of velocity is consistent with many previous studies of coronal jets \\citep[e.g.][]{1996PASJ...48..123S, 2007PASJ...59S.771S, 2016A&A...589A..79M, 2016ApJ...832L...7P, 2020ApJ...889..183M} where the jet velocities usually range from a few tens of km\/s to $\\sim$500 km\/s, with an average around 200 km\/s. The other velocities, $\\sim$700 km\/s and $\\sim$400 km\/s respectively for the two jets, could only be identified in the 131 {\\AA} filter (sensitive to $\\sim$0.4 MK and hot temperatures $\\sim$10 MK) at the beginning of each event (though harder for the later jet). The velocity of $\\sim$400 km\/s is still within the common range for coronal jets, but it is faster than the other velocities observed in that jet. The velocity of $\\sim$700 km\/s seems to be faster than most of the observed jets. However, such velocities are not rare and have been reported in a few coronal jet observations by XRT \\citep{2007PASJ...59S.771S, 2007Sci...318.1580C}.\n\nOne possible acceleration mechanism for coronal jets is chromospheric evaporation, which is also the responsible mechanism for some plasma flows in solar flares. In this process, the energy released from magnetic reconnection is deposited in the chromosphere, compresses and heats the plasma there, and produces a pressure-driven evaporation outflow on the order of sound speed. In fact, \\citet{1984ApJ...281L..79F} derived a theoretical upper limit for the velocity of this evaporation outflow to be 2.35 times the local sound speed, where the sound speed $c_s$ can be calculated as: $c_{s}=147\\sqrt{\\frac{T}{\\mathrm{1MK}}}$ km\/s assuming an isothermal model \\citep[e.g.][]{2004psci.book.....A}. In the 304 {\\AA} filter (characteristic temperature $10^{4.7}$ K), this upper limit corresponds to a very low speed of 77 km\/s, indicating that the cool plasma is unlikely driven by chromospheric evaporation. 
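\nFor reference, this value follows directly from the two relations above (a simple order-of-magnitude check using the characteristic temperature of the 304 {\\AA} channel):\n\\[\n2.35\\,c_{s} \\;=\\; 2.35\\times147\\,\\sqrt{10^{4.7-6}}\\ \\mathrm{km\\,s^{-1}} \\;\\approx\\; 2.35\\times33\\ \\mathrm{km\\,s^{-1}} \\;\\approx\\; 77\\ \\mathrm{km\\,s^{-1}}.\n\\]\n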
Also, common velocities reported by observations of chromospheric evaporation usually fall within the range of tens of km\/s up to 400 km\/s \\citep[e.g.][]{2013ApJ...767...55D, 2015ApJ...811..139T, 2015ApJ...805..167S}, thus chromospheric evaporation seems unable to explain the very fast flow of the earlier jet observed in the hot 131 {\\AA} filter. Furthermore, it is expected that the velocity would increase with the temperature if a jet is generated by chromospheric evaporation \\citep{2012ApJ...759...15M}, but here we have seen very consistent velocities in all seven AIA filters that are sensitive to different temperatures in the later event. For all these reasons, if both jets are driven by the same mechanism, that mechanism is likely \\textit{not} chromospheric evaporation but magnetic tension instead. However, it is not clear why the earlier jet shows more complicated and various velocities (even in a single channel) if both jets are driven similarly.\n\n\n\\subsection{Particle acceleration locations} \nIn Section \\ref{sec:imag_spec}, we fitted the {\\textit{RHESSI}} spectra of the four HXR sources with an isothermal plus thick-target model. Here we first justify that the thick-target regime is a reasonable approximation.\n\nThe column depth (defined as $N_s=\\int ndz$ where $n$ is the plasma density) to fully stop an electron of energy $E$ (in units of keV) can be calculated as: $N_s=1.5\\times10^{17}\\mathrm{cm^{-2}} E^2$ \\citep[e.g.][]{krucker2008hard}. Based on this formula, Figure \\ref{fig:stopping_d} plots the relation between the stopping distance and the plasma density for a given electron energy. Under the thick-target regime, according to Table \\ref{tab:spec_fit}, the average electron energy for the source at the top of the earlier jet is $\\sim$10 keV and the density there is $6\\times10^{9}$ $\\mathrm{cm}^{-3}$ (derived from the joint DEM), which corresponds to an average distance of 36 arcsec that the electrons can travel before being fully stopped by the ambient plasma. Similar average electron energies around 10 keV are found for the other three HXR sources in the later event, and the densities of those sources are $(0.8-1.9)\\times10^{10}$ $\\mathrm{cm}^{-3}$, resulting in stopping distances of 10-28 arcsec. Moreover, in Section \\ref{sec:DEM}, we report a possible cross-calibration factor around 3.5 between AIA and {\\textit{RHESSI}}. That factor is not included in the above calculation; however, if the cross-calibration factor is included, all the densities above would be multiplied by $\\sqrt{3.5}$, corresponding to even shorter stopping distances of 5-20 arcsec. In general, these stopping distances are comparable to the size of the HXR sources, meaning that accelerated electrons deposit a considerable portion of their energies into each source. Furthermore, if this were instead a thin-target regime, the spectral indices would be slightly smaller but the average electron energies would still be $\\sim$10 keV. This would result in very similar stopping distances that are comparable to source sizes, which is not consistent with the thin-target assumption. 
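\nAs a quick check of the numbers quoted above, for a 10 keV electron and $n=6\\times10^{9}$ $\\mathrm{cm}^{-3}$ (and taking $1''\\simeq730$ km as an approximate conversion),\n\\[\nN_s = 1.5\\times10^{17}\\times10^{2}\\ \\mathrm{cm^{-2}} = 1.5\\times10^{19}\\ \\mathrm{cm^{-2}}, \\qquad d \\simeq \\frac{N_s}{n} = 2.5\\times10^{9}\\ \\mathrm{cm} \\approx 2.5\\times10^{4}\\ \\mathrm{km},\n\\]\nor roughly 35 arcsec, in line with the value given above.\n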
Therefore, we conclude that the HXR sources observed in these events can be approximated as thick targets, and mildly accelerated electrons are found at all these locations.\n\nSince the three HXR sources observed in the later event share similar electron distributions, a natural question arises: were the HXR emissions in the later event produced by the same population of accelerated electrons that traveled to different locations, or were they produced by different groups of accelerated electrons individually? In the standard jet models, reconnection happens near the base of the jet, which would require electrons to be accelerated near the base and travel upwards along the magnetic field lines to produce the HXR source at the jet top. However, from the DEM analysis, we find that the densities in the body of the jet (where little HXR emission was observed) are $\\sim1\\times10^{10}$ $\\mathrm{cm}^{-3}$, so the stopping distance along the jet body is still 20-30 arcsec (for both jets). This is a few times smaller than the distance from the jet base to the jet top; therefore the HXR source at the top of each jet was produced by electrons that were accelerated very close to this source, rather than electrons that traveled far from the primary reconnection site at the jet base. This finding is in line with a similar one made for the powerful X8.3 class flare on September 10, 2017, obtained with an entirely different methodology that employs microwave imaging spectroscopy \\citep{fleishman2022solar}. \n\nAnother possible explanation for the sources at the jet top could be that the jets were actually ejected along large closed loops perpendicular to the plane of the sky rather than the so-called ``open'' field lines. Then the top of the jet is in fact the apex of the loop, which would have higher emissions purely because of the line-of-sight effect. However, even in this scenario the stopping distance along the jet body would remain the same, thus the conclusion of an additional particle acceleration site near the jet top (or loop apex) still holds regardless of jet geometry.\n\nThe HXR source to the north of the later jet appeared last among the three HXR sources, but still during a time when the jet was visible in EUV filters. It is also likely related to the jet because the formation of the jet would change the magnetic configuration of the active region, but neither the electron path nor the density along the path is clear if the energetic electrons traveled from the jet base to the northern location. The typical coronal density for an active region is about $10^{9}$ $\\mathrm{cm^{-3}}$ \\citep[e.g.][]{1961ApJ...133..983N}, corresponding to a stopping distance of 200 arcsec for electrons of 10 keV. Thus, in general situations energetic electrons could travel a considerable distance in the corona, but it is also possible that the density in this active region is larger than that typical value. However, due to lack of data, we could not determine which was the case for this source.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.75\\textwidth]{fig_stopping_d.pdf}\n\t\\caption{The relation between collisional stopping distances and ambient plasma densities for electrons of certain energies. Red plus signs mark the values for the densities (without a cross-calibration factor) and average energies for the four observed HXR sources, while brown triangles mark the values with a cross-calibration factor applied. 
The stopping distances for these sources are less than a few tens of arcsecs, but for lower densities and\/or higher electron energies, accelerated electrons could travel an appreciable distance in the corona. }\n\t\\label{fig:stopping_d}\t\n\\end{figure}\n\n\n\\subsection{Energy budget}\nInvestigating the partition of different energy components can help to understand the energy release process in these events. Such calculations have been done in the past for a number of flares and CMEs \\citep[e.g.][]{2012ApJ...759...71E, 2015ApJ...802...53A, 2016A&A...588A.116W}, but only for a few jets so far \\citep{2013ApJ...776...16P}. Here we present our estimates of various energy components for the later event, including kinetic energy, gravitational energy, thermal energy, and the energy in non-thermal electrons. We calculated the maximum amount of energy that could be converted into each of the forms above.\n\nThe jet's major eruption started from 20:46 UT, which was visible in all AIA filters and had a speed of $\\sim$260 km\/s (Figure \\ref{fig:td2}). The density of this plasma was derived from its DEM, which is $1.3\\times10^{10}$ $\\mathrm{cm}^{-3}$. Assuming the jet body that contained this group of plasma to be a cylinder, the peak kinetic energy of the jet is $5\\times10^{26}$ erg.\n\nThe maximum height of this jet is $\\sim$80 arcsec; however, as the height increases the amount of plasma that traveled there decreases, and it's not clear what fraction of plasma finally reached the maximum height. Therefore, instead of calculating the maximum gravitational energy of the jet, we set an upper limit of $2\\times10^{26}$ erg, which is the gravitational energy if all the plasma of the major eruption reached a height of 80 arcsec. This upper limit is smaller than the kinetic energy of the jet, meaning the eruption is not ballistic. \n\nThe thermal energy, $E_{th} = 3k_BTnV$, is dominated by contributions from HXR sources. Using the joint DEMs, the peak thermal energy for each HXR source is about $5\\times10^{27}$ erg. This value is consistent with the flare thermal energies found for other jets in \\citet{2020ApJ...889..183M}.\n\nThe energy in non-thermal electrons can be simply estimated as $E_{nonth} = N_eE_{e,ave}$ where $N_e$ is the total number of accelerated electrons and $E_{e,ave}$ is the average electron energy. Adopting the thick-target approximation and using the parameters from Table \\ref{tab:spec_fit}, the non-thermal energy for each HXR source is about $(6-11)\\times10^{29}$ erg. However, this value here is calculated based on {\\textit{RHESSI}} measurements. If we apply the cross-calibration factor between AIA and {\\textit{RHESSI}} to match the calculations of other energy forms, the non-thermal energy for each HXR source becomes $(3-6)\\times10^{29}$ erg.\n\nThe energies of HXR sources in this event can be compared to those in previous studies of flare energetics. \\citet{2012ApJ...759...71E} studied 38 eruptive events (all except one were M or X class flares and most flares were accompanied by a CME), and they found that the flare thermal energies were always smaller than the energies in accelerated particles. Similar results were found in a later study by \\citet{2016A&A...588A.116W}, where the median ratio of thermal energies to non-thermal energies in electrons for 24 (C-to-X class) flares was 0.3. 
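\nFor completeness, the order-of-magnitude estimates above can be reproduced with a short script; the geometric quantities and the electron number below are assumed placeholders rather than measured values, so only the orders of magnitude are meaningful.\n\\begin{verbatim}\nimport numpy as np\n\nk_B = 1.38e-16      # erg per K\nm_p = 1.67e-24      # g\nkeV = 1.602e-9      # erg\n\nn = 1.3e10          # cm^-3, density of the erupting plasma (from the DEM)\nv = 2.6e7           # cm per s, plane-of-sky speed of about 260 km per second\nT = 1.0e7           # K, temperature of the hot HXR-emitting plasma\n\n# assumed cylindrical jet body (placeholder radius and length)\nr_cm, L_cm = 1.1e8, 2.2e9\nV_jet = np.pi * r_cm**2 * L_cm\nE_kin = 0.5 * n * m_p * V_jet * v**2     # kinetic energy [erg]\n\nV_src = (5.0e8)**3                       # assumed HXR source volume [cm^3]\nE_th = 3.0 * k_B * T * n * V_src         # thermal energy, 3 k_B T n V [erg]\n\nN_e = 5.0e37                             # placeholder number of electrons\nE_nth = N_e * 10.0 * keV                 # non-thermal energy for ~10 keV electrons\n\nprint(E_kin, E_th, E_nth)\n\\end{verbatim}\n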
In a subclass of ``cold'' flares \\citep{2018ApJ...856..111L}, the thermal energy is equal (within the uncertainties) to the non-thermal energy deposition \\citep{2016ApJ...822...71F, 2020ApJ...890...75M, 2021ApJ...913...97F}. (Theoretically, the thermal energy cannot be less than the non-thermal energy as the latter one decays into the thermal one.) For our jet event that contains low-C class flares, the thermal energies are more than one order of magnitude smaller than the non-thermal energies. Therefore, the conclusion that the non-thermal energy is always larger than or at least equal to the thermal energy is likely consistent across a wide range of flare classes and regardless of whether the flare is associated with a jet\/CME or not. \n \nHowever, the energy partition between the jet and the associated flares is different from the energy partition between a CME and a flare. For this event, the kinetic\/gravitational energy of the jet is more than one order of magnitude smaller than the energy of the flares (thermal\/non-thermal), while in \\citet{2012ApJ...759...71E} the total energy of the CME is usually significantly larger. (The kinetic energy in confined flares is much smaller \\citep{2021ApJ...913...97F}, though.) This variety could be explained in the minifilament eruption scenario that jets and CMEs are still both parts of the same eruptive events but the energy partition changes with scale, or this could also indicate that there are fundamental differences between jets and CMEs. To further answer this question, future studies with more samples of flare-related jets are needed.\n\nLastly, it should also be noted that there are still other forms of energy that were not considered in the calculations above, such as magnetic energy, wave energy, etc. These energies could also be important components of the event energy budget, but are hard to evaluate here due to limited data.\n\n\n\\subsection{Comparison to the current jet models} \\label{comp_models}\nConsidering the locations of hot plasma as well as the HXR sources, these two jets are interesting examples to be compared with current jet models. On the one hand, the source at the base of the jet is consistent with what is expected from jet models. During the minifilament eruption at the jet base, magnetic reconnection happens close to the bottom of the corona, heating the plasma there directly and generating accelerated electrons near the reconnection site. The downward-traveling energetic electrons radiate bremsstrahlung emissions as they collide with the dense chromosphere, producing a HXR source and\/or further heating the ambient plasma at the base of the jet. On the other hand, processes after a jet's eruption are generally not considered by those models; thus the hot plasma and the HXR source at the top of the jet are not expected. Our observations have shown that additional particle acceleration could happen at other locations besides the jet base. In other words, there could be multiple reconnection and energy release sites in a single jet event. Also, despite the significantly different particle acceleration sites (and even two separate events), the non-thermal electrons share very similar energy distributions. The spectral indices around 10 and the low energy cutoffs around 9 keV suggest that jet reconnection typically produces only mild particle acceleration. 
These low energy cutoffs are similar to those of the cold flares \\citep[e.g.,][]{2020ApJ...890...75M}, while the spectra are much softer in the case of the jets.\n\nAnother interesting point about these events is the relation between hot and cool material. For both jets, the cool ejections observed in the 304 {\\AA} filter were adjacent to the hot ejections observed in the 94 {\\AA} and 131 {\\AA} filters. While past simulations have successfully produced a hot jet and a cool jet (or surge) in a single event, it is generally expected that hot and cool jets are driven through different mechanisms. For example, in the simulation by \\citet{1996PASJ...48..353Y}, the hot jet was accelerated by the pressure gradient while the cool surge was accelerated by magnetic tension. Similarly in an observational study by \\citet{2012ApJ...759...15M}, the hot component (at coronal temperatures) was generated by chromospheric evaporation while the cool component (at chromospheric temperatures) was accelerated by magnetic force. However, though the observation of the earlier jet doesn't conflict with this picture, the later jet had consistent velocities in hot and cool filters, indicating that some of the hot components might be driven by a very similar process as the cool components in that event. Therefore, at least in some cases the hot and cool components must be more closely related, and a jet model should be able to explain this kind of observation as well as those similar to \\citet{2012ApJ...759...15M}.\n\n\n\n\\section{Summary} \\label{sec:summ}\nIn this paper, we present a multi-wavelength analysis of two active region jets that were associated with low C-class flares on November 13, 2014. Key aspects of this study include:\n\n\\begin{enumerate}\n\t\\item In both events, hot ($\\gtrsim$10MK) plasma not only appeared near the base of the jet (which is the location of the primary reconnection site) at the beginning, but also appeared near the top of the jet after a few minutes. \n\t\\item Four {\\textit{RHESSI}} HXR sources were observed: one (at the jet top) in the first event and three (at the jet base, jet top, and a location to the north of the jet) in the later event. All those sources showed evidence of mildly accelerated electrons which had spectral indices around 10 and extended to low energies around 9 keV. \n\t\\item Various jet velocities were identified through time-distance plots, including major upward velocities of $\\sim$250 km\/s and downward velocities of $\\sim$100 km\/s. Fast outflows of $\\sim$700 km\/s or $\\sim$400 km\/s were observed only in the hot AIA 131 {\\AA} filter at the beginning of each jet.\u00a0These velocities indicate that the jets were likely driven by magnetic force.\n\t\\item The HXR source and hot plasma at the base of the jet were expected from current models. However, the HXR sources at the top of the jet were produced by energetic electrons that were accelerated very close to the top location, rather than electrons that were accelerated near the jet base but traveled to the top. This means that there was more than one reconnection and particle acceleration site in each event.\t\n\\end{enumerate}\t\n\nCoronal jets are an important form of solar activity that involves particle acceleration, and they share similarities with larger eruptive events such as CMEs. HXRs can provide important constraints on hot plasma within a coronal jet, as well as unique diagnostics of energetic electron populations. 
To obtain the best constraints for jet models, observations should take advantage of state-of-the-art instruments in different wavebands, but only a few studies have included HXR observations to date. In future work, we would like to extend the method described in this paper to other coronal jets. Those jets could come from the jet database that will be generated by the citizen science project Solar Jet Hunter \\footnote{https:\/\/www.zooniverse.org\/projects\/sophiemu\/solar-jet-hunter} (which was launched through the Zooniverse platform in December 2021). We expect studies with more jet samples to further advance our understanding of particle acceleration in jets.\n\nFurthermore, as shown in this study, HXR sources that are associated with jets could be found in the corona, and they could be faint in some events, thus not identified by current instruments. One solution is to develop direct focusing instruments, such as that demonstrated by the Focusing Optics X-ray Solar Imager (\\textit{FOXSI}) sounding rocket experiment, which will provide better sensitivity and dynamic range for future HXR observations. \n\n\\acknowledgments\nThis work is supported by NASA Heliophysics Guest Investigator grant 80NSSC20K0718. Y.Z. is also supported by the NASA FINESST program 80NSSC21K1387. N.K.P. acknowledges support from NASA's {\\textit{SDO}}\/AIA and HGI grant. We thank Samaiyah Farid for helpful discussions. We are also grateful to the {\\textit{SDO}}\/AIA, {\\textit{RHESSI}}, {\\textit{Hinode}}\/XRT, and {\\textit{IRIS}} teams for their open data policy. {\\textit{Hinode}} is a Japanese mission developed and launched by ISAS\/JAXA, with NAOJ as domestic partner and NASA and STFC (UK) as international partners. It is operated by these agencies in co-operation with ESA and the NSC (Norway). {\\textit{IRIS}} is a NASA small explorer mission developed and operated by LMSAL with mission operations executed at NASA Ames Research Center and major contributions to downlink communications funded by ESA and the Norwegian Space Centre.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe discrete logarithm problem (DLP) was first proposed as a hard\nproblem in cryptography in the seminal article of Diffie and\nHellman~\\cite{DiHe76}. Since then, together with factorization, it has\nbecome one of the two major pillars of public key cryptography. As a\nconsequence, the problem of computing discrete logarithms has\nattracted a lot of attention. From an exponential algorithm in $1976$,\nthe fastest DLP algorithms have been greatly improved during the past\n$35$ years. A first major progress was the realization that the DLP in\nfinite fields can be solved in subexponential time, i.e. $L(1\/2)$\nwhere $L_N(\\alpha)=\\exp\\left(O((\\log N)^\\alpha(\\log\\log\n N)^{1-\\alpha})\\right)$. The next step further reduced this to a\nheuristic $L(1\/3)$ running time in the full range of finite fields,\nfrom fixed characteristic finite fields to prime\nfields~\\cite{Adl79,Cop84,Gor93,Adl94,JoLe06,JLVS07}.\n\nRecently, practical and theoretical advances have been\nmade~\\cite{Jo13faster,GGMZ13,Joux13} with an\nemphasis on small to medium characteristic finite fields and composite\ndegree extensions. The most general and efficient\nalgorithm~\\cite{Joux13} gives a complexity of $L(1\/4+o(1))$ when the\ncharacteristic is smaller than the square root of the extension\ndegree. 
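\nTo give a feeling for how much these successive improvements matter, the following toy computation evaluates $L_N(1\/2)$, $L_N(1\/3)$ and $L_N(1\/4)$ for a few bit-sizes, with every hidden constant set to $1$; the absolute values are therefore meaningless and only the relative growth is of interest.\n\\begin{verbatim}\nimport math\n\ndef L(bits, alpha):\n    # L_N(alpha) with the hidden constant set to 1\n    lnN = bits * math.log(2.0)\n    return math.exp(lnN**alpha * math.log(lnN)**(1.0 - alpha))\n\nfor bits in (512, 1024, 4096):\n    print(bits, L(bits, 0.5), L(bits, 3.0**-1), L(bits, 0.25))\n\\end{verbatim}\n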
Among the ingredients of this approach, we find the use of a\nvery particular representation of the finite field; the use of the\nso-called {\\em systematic equation}\\footnote{While the terminology is\nsimilar, no parallel is to be made with the systematic equations as\ndefined in early works related to the computation discrete logarithms in\n${\\mathbb F}_{2^n}$, as~\\cite{BlFuMuVa84}.}; and the use of algebraic\nresolution of bilinear polynomial systems in the individual logarithm\nphase.\n\nIn this work, we present a new discrete logarithm algorithm, in\nthe same vein as in~\\cite{Joux13} that uses an asymptotically more\nefficient descent approach. The main result gives a {\\it\n quasi-polynomial} heuristic complexity for the DLP in finite fields\nof small characteristic. By quasi-polynomial, we mean a complexity of\ntype $n^{O(\\log n)}$ where $n$ is the bit-size of the cardinality of\nthe finite field. Such a complexity is smaller than any\n$L(\\epsilon)$ for $\\epsilon>0$. It remains super-polynomial\nin the size of the input, but offers a major asymptotic improvement\ncompared to $L(1\/4+o(1))$.\n\nThe key features of our algorithm are the following.\n\\begin{itemize}\n\\item We keep the field representation and the systematic equations of~\\cite{Joux13}.\n\\item The algorithmic building blocks are elementary. In particular,\n we avoid the use of Gr\u00f6bner basis algorithms.\n\\item The complexity result relies on three key heuristics:\nthe existence of a polynomial representation of the appropriate\nform; the fact that the smoothness probabilities of some non-uniformly\ndistributed\npolynomials are similar to the probabilities for uniformly random\npolynomials of the same degree; and the linear independence of some\nfinite field elements related to the action of $\\PGL_2({\\mathbb F}_{q})$.\n\\end{itemize}\n\nThe heuristics are very close to the ones used in~\\cite{Joux13}. In\naddition to the arguments in favor of these heuristics already given\nin~\\cite{Joux13}, we performed some experiments to validate them on\npractical instances. \\medskip\n\nAlthough we insist on the case of finite fields of small\ncharacteristic, where quasi-polynomial complexity is obtained, our new\nalgorithm improves the complexity of discrete logarithm computations in a\nmuch larger range of finite fields.\n\nMore precisely, in finite fields of the form ${\\mathbb F}_{q^k}$, where $q$\ngrows as $L_{q^k}(\\alpha)$, the complexity becomes\n$L_{q^k}(\\alpha+o(1))$. As a consequence, our algorithm is\nasymptotically faster than the Function Field Sieve algorithm in\nalmost all the range previously covered by this algorithm. Whenever \n$\\alpha<1\/3$, our new algorithm offers the smallest complexity. For\nthe limiting case $L(1\/3,c)$, the Function Field Sieve remains more\nefficient for small values of $c$, and the Number Field Sieve is better\nfor large values of $c$ (see~\\cite{JLVS07}).\n\\bigskip\n\nThis article is organized as follows. In Section~\\ref{sec:main}, we state\nthe main result, and discuss how it can be used to design a complete\ndiscrete logarithm algorithm. In Section~\\ref{sec:csq}, we analyze how\nthis result can be interpreted for various types of finite fields,\nincluding the important case of fields of small characteristic.\nSection~\\ref{sec:descent-one-step} is devoted to the description of our\nnew algorithm. 
It relies on heuristics that are discussed in\nSection~\\ref{sec:heur}, from a theoretical and a practical point of view.\nBefore getting to the conclusion, in Section~\\ref{sec:improvement}, we\npropose a few variants of the algorithm.\n\n\\section{Main result}\n\\label{sec:main}\n\nWe start by describing the setting in which our algorithm applies. It is\nbasically the same as in~\\cite{Joux13}: we need a large enough subfield,\nand we assume that a sparse representation can be found. This is\nformalized in the following definition.\n\n\\begin{definition}\n A finite field $K$ admits a {\\em sparse medium subfield representation} if\n \\begin{itemize}\n \\item it has a subfield of $q^2$ elements for a prime power $q$,\n\t\ti.e. $K$ is isomorphic to ${\\mathbb F}_{q^{2k}}$ with $k\\geq1$;\n \\item there exist two polynomials $h_0$ and $h_1$ over\n ${\\mathbb F}_{q^2}$ of small degree, such that $h_1X^q-h_0$ has a\n degree $k$ irreducible factor.\n \\end{itemize}\n\\end{definition}\n\nIn what follows, we will assume that all the fields under consideration\nadmit a sparse medium subfield representation. Furthermore, we assume that\nthe degrees of the polynomials $h_0$ and $h_1$ are uniformly bounded by a\nconstant $\\delta$. Later, we will provide heuristic arguments for the\nfact that any finite field of the form ${\\mathbb F}_{q^{2k}}$ with $k \\le q+2$\nadmits a sparse medium subfield representation with polynomials $h_0$ and\n$h_1$ of degree at most 2. But in fact, for our result to hold, allowing\nthe degrees of $h_0$ and $h_1$ to be bounded by any constant $\\delta$\nindependent of $q$ and $k$ or even allowing $\\delta$ to grow\nslower than $O(\\log q)$ would be sufficient.\n\nIn a field in sparse medium subfield representation, elements will\nalways be represented as polynomials of degree less than $k$ with\ncoefficients in ${\\mathbb F}_{q^2}$. When we talk about the discrete logarithm of\nsuch an element, we implicitly assume that a basis for this discrete\nlogarithm has been chosen, and that we work in a subgroup whose order has\nno small irreducible factor (we refer to the Pohlig-Hellman\nalgorithm~\\cite{PoHe78} to limit ourselves to this case).\n\n\\begin{prop}\\label{prop:onestep}\n Let $K={\\mathbb F}_{q^{2k}}$ be a finite field that admits a sparse medium subfield\n representation.\n Under the heuristics explained below, there exists an algorithm whose\n complexity is polynomial in $q$ and $k$ and which can be used for the\n following two tasks. \n\n \\begin{enumerate}\n \\item \n Given an element of $K$ represented by a polynomial\n $P\\in{\\mathbb F}_{q^2}[X]$ with $2\\leq \\deg P\\leq k-1$,\n the algorithm returns an expression of\n $\\log P(X)$ as a linear combination of at most $O(kq^2)$\n logarithms $\\log P_i(X)$ with $\\deg P_i \\leq \\lceil\n \\frac12 \\deg P\\rceil$ and of $\\log h_1(X)$.\n\n \\item\n The algorithm returns the logarithm of $h_1(X)$ and \n the logarithms of all the elements of $K$\n of the form $X+a$, for $a$ in ${\\mathbb F}_{q^2}$. \n \\end{enumerate}\n\\end{prop}\n\nBefore the presentation of the algorithm, which is made in Section~\\ref{sec:descent-one-step}, we explain how to use it as a building block for a complete discrete logarithm algorithm.\n\nLet $P(X)$ be an element of $K$ for which we want to compute the discrete\nlogarithm. Here $P$ is a polynomial of degree at most $k-1$ and with\ncoefficients in ${\\mathbb F}_{q^2}$. We start by applying the algorithm of Proposition~\\ref{prop:onestep} to $P$. 
We obtain a relation of the form
$$ \log P = e_0 \log h_1 + \sum e_i \log P_i,$$
where the sum has at most $\kappa q^2 k$ terms for a constant $\kappa$, and the $P_i$'s have degree at most $\lceil \frac12 \deg P\rceil$. Then, we recursively apply the algorithm to the $P_i$'s, thus creating a descent procedure where, at each step, a given element $P$ is expressed as a product of elements whose degree is at most half the degree of $P$ (rounded up); the arity of the descent tree is in $O(q^2 k)$.

At the end of the process, the logarithm of $P$ is expressed as a linear combination of the logarithms of $h_1$ and of the linear polynomials, for which the logarithms are computed with the algorithm of Proposition~\ref{prop:onestep} in its second form.

We are left with the complexity analysis of the descent process. Each internal node of the descent tree corresponds to one application of the algorithm of Proposition~\ref{prop:onestep}, so each internal node has a cost bounded by a polynomial in $q$ and~$k$. The total cost of the descent is therefore bounded by the number of nodes in the descent tree times a polynomial in $q$ and $k$. The depth of the descent tree is in $O(\log k)$. The number of nodes of the tree is then at most, up to a constant factor, its arity raised to the power of its depth, which is $(q^2 k)^{O(\log k)}$. Since any polynomial in $q$ and $k$ is absorbed in the $O()$ notation in the exponent, we obtain the following result.

\begin{theo}\label{thm}
  Let $K={\mathbb F}_{q^{2k}}$ be a finite field that admits a sparse medium subfield representation. Assuming the same heuristics as in Proposition~\ref{prop:onestep}, any discrete logarithm in $K$ can be computed in a time bounded by
  $$ \max(q,k)^{O(\log k)}.$$
\end{theo}


\section{Consequences for various ranges of parameters}
\label{sec:csq}

We now discuss the implications of Theorem~\ref{thm} depending on the properties of the finite field ${\mathbb F}_Q$ where we want to compute discrete logarithms in the first place. The complexities will be expressed in terms of $\log Q$, which is the size of the input.

Three cases are considered. In the first one, the finite field admits a sparse medium subfield representation where $q$ and $k$ are almost equal. This is the optimal case. Then we consider the case where the finite field has small (maybe constant) characteristic. And finally, we consider the case where the characteristic is getting larger, so that the only available subfield is a bit too large for the algorithm to have an optimal complexity.

In the following, we always assume that for any field of the form ${\mathbb F}_{q^{2k}}$, we can find a sparse medium subfield representation.

\subsection{Case where the field is ${\mathbb F}_{q^{2k}}$, with $q\approx k$}

The finite fields ${\mathbb F}_Q = {\mathbb F}_{q^{2k}}$ for which $q$ and $k$ are almost equal are tailored for our algorithm. In that case, the complexity of Theorem~\ref{thm} becomes $q^{O(\log q)}$. Since $Q \approx q^{2q}$, we have $q=(\log Q)^{O(1)}$.
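
To give a feel for the size of the descent tree in this balanced case, the following minimal Python sketch crudely bounds the number of nodes when every node of degree $D$ has at most $q^2D$ children of degree $\lceil D/2\rceil$. It is purely illustrative: the hidden constant $\kappa$ is dropped, and the choice $q=k=64$ is arbitrary rather than taken from any computation reported here.
\begin{verbatim}
from math import ceil, log2

def tree_nodes(q, D):
    # Upper bound on the number of nodes of a descent tree rooted at a
    # polynomial of degree D, assuming at most q^2 * D children per node,
    # each of degree ceil(D/2); the recursion stops at linear polynomials.
    if D <= 1:
        return 1
    return 1 + (q * q * D) * tree_nodes(q, (D + 1) // 2)

q = k = 64                       # illustrative balanced instance q ~ k
n = tree_nodes(q, k - 1)         # the target polynomial has degree < k
print("depth ~", ceil(log2(k - 1)), " log2(#nodes) ~", round(log2(n), 1))
print("log2((q^2 k)^(log2 k)) ~", round(log2(q * q * k) * log2(k), 1))
\end{verbatim}
Both printed quantities are of the same order, in line with the $(q^2k)^{O(\log k)}$ estimate above.
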
Rewritten in terms of $Q$, the bound $q^{O(\log q)}$ takes the form $2^{O\left((\log \log Q)^2\right)}$, which is sometimes called quasi-polynomial in complexity theory.

\begin{cor}\label{cor1}
  For finite fields of cardinality $Q = q^{2k}$ with $q+O(1)\geq k$ and $q=(\log Q)^{O(1)}$, there exists a heuristic algorithm for computing discrete logarithms in quasi-polynomial time
  $$ 2^{O\left((\log \log Q)^2\right)}.$$
\end{cor}

We mention a few cases which are almost directly covered by Corollary~\ref{cor1}. First, we consider the case where $Q=p^n$ with $p$ a prime bounded by $(\log Q)^{O(1)}$, and yet large enough so that $n \le (p+\delta)$. In this case ${\mathbb F}_Q$, or possibly ${\mathbb F}_{Q^2}$ if $n$ is odd, can be represented in such a way that Corollary~\ref{cor1} applies.

Much the same can be said in the case where $n$ is composite and factors nicely, so that ${\mathbb F}_Q$ admits a large enough subfield ${\mathbb F}_q$ with $q=p^m$. This can be used to solve certain discrete logarithms in, say, ${\mathbb F}_{2^n}$ for adequately chosen $n$ (much like the record computations of~\cite{record1778,record1971,record4080,record6120,record6168}).

\subsection{Case where the characteristic is polynomial in the input size}

Now let ${\mathbb F}_Q$ be a finite field whose characteristic $p$ is bounded by $(\log Q)^{O(1)}$, and let $n=\log Q / \log p$, so that $Q = p^n$. While we have seen that Corollary~\ref{cor1} can be used to treat some cases, its applicability might be hindered by the absence of an appropriately sized subfield: $p$ might be as small as $2$, and $n$ might not factor adequately. In those cases, we use the same strategy as in~\cite{Joux13} and embed the discrete logarithm problem in ${\mathbb F}_Q$ into a discrete logarithm problem in a larger field.

Let $k$ be $n$ if $n$ is odd and $n/2$ if $n$ is even. Then, we set $q = p^{\lceil \log_p k \rceil}$, and we work in the field ${\mathbb F}_{q^{2k}}$. By construction this field contains ${\mathbb F}_Q$ (because $p|q$ and $n|2k$) and it is in the range of applicability of Theorem~\ref{thm}. Therefore, one can solve a discrete logarithm problem in ${\mathbb F}_Q$ in time $\max(q, k)^{O(\log k)}$. Rewriting this complexity in terms of $Q$, we get $\log_p(Q)^{O(\log\log Q)}$, and we finally obtain a complexity result similar to that of the previous case. Of course, since we had to embed into a larger field, the constant hidden in the $O()$ is larger than for Corollary~\ref{cor1}.

\begin{cor}\label{cor2}
  For finite fields of cardinality $Q$ and characteristic bounded by $(\log Q)^{O(1)}$, there exists a heuristic algorithm for computing discrete logarithms in quasi-polynomial time
  $$ 2^{O\left((\log \log Q)^2\right)}.$$
\end{cor}

We emphasize that the case of ${\mathbb F}_{2^n}$ for a prime $n$ corresponds to this case. A direct consequence of Corollary~\ref{cor2} is that discrete logarithms in ${\mathbb F}_{2^n}$ can be computed in quasi-polynomial time $2^{O((\log n)^2)}$.

\subsection{Case where $q = L_{q^{2k}}(\alpha)$}
If the characteristic of the base field is not so small compared to the extension degree, the complexity of our algorithm does not keep its nice quasi-polynomial form.
However, in almost the whole range of applicability of the Function Field Sieve algorithm, our algorithm is asymptotically better than FFS.

We consider here finite fields that can be put into the form ${\mathbb F}_Q = {\mathbb F}_{q^{2k}}$, where $q$ grows not faster than an expression of the form $L_Q(\alpha)$. In the following, we assume that there is equality, which is of course the worst case. The condition can then be rewritten as $\log q = O((\log Q)^\alpha(\log\log Q)^{1-\alpha})$ and therefore $k = \log Q / \log q = O((\log Q / \log\log Q)^{1-\alpha})$. In particular we have $k\leq q+\delta$, so that Theorem~\ref{thm} can be applied and gives a complexity of $q^{O(\log k)}$. This yields the following result.

\begin{cor}\label{cor3}
  For finite fields of the form ${\mathbb F}_Q = {\mathbb F}_{q^{2k}}$ where $q$ is bounded by $L_Q(\alpha)$, there exists a heuristic algorithm for computing discrete logarithms in subexponential time
  $$ L_Q(\alpha)^{O(\log \log Q)}.$$
\end{cor}

This complexity is smaller than $L_Q(\alpha')$ for any $\alpha' > \alpha$. Hence, for any $\alpha<1/3$, our algorithm is faster than the best previously known algorithm, namely FFS and its variants.


\section{Main algorithm: proof of Proposition~\ref{prop:onestep}}
\label{sec:descent-one-step}

The algorithm is essentially the same for proving the two points of Proposition~\ref{prop:onestep}. The strategy is to find relations between the given polynomial $P(X)$ and its translates by a constant in ${\mathbb F}_{q^2}$. Let $D$ be the degree of $P(X)$, which we assume to be at least 1 and at most $k-1$.

The key to finding relations is the {\em systematic equation}:
\begin{equation}\label{eq:frobenius}
  X^q-X=\prod_{a\in {\mathbb F}_q}(X-a)\text.
\end{equation}

It is convenient to view Equation~\eqref{eq:frobenius} as involving the projective line $\ensuremath{\mathbb{P}}^1({\mathbb F}_q)$. Let $\ensuremath{\mathcal S}=\{(\alpha,\beta)\}$ be a set of representatives of the $q+1$ points $(\alpha:\beta)\in\ensuremath{\mathbb{P}}^1({\mathbb F}_q)$, chosen adequately so that the following equality holds.
\begin{equation}
  \label{eq:frobenius-proj}
  X^qY-XY^q=\prod_{(\alpha,\beta)\in\ensuremath{\mathcal S}}(\beta X-\alpha Y)\text.
\end{equation}

To make translates of $P(X)$ appear, we consider the action of {\em homographies}. Any matrix $m = \begin{pmatrix}a & b\\ c& d\end{pmatrix}$ acts on $P(X)$ with the following formula:
$$m\cdot P = \frac{aP+b}{cP+d}.$$
In the following, this action yields nothing of interest if the matrix $m$ has entries defined over ${\mathbb F}_q$ (such an $m$ merely permutes the factors of Equation~\eqref{eq:frobenius-proj}), or if $m$ is non-invertible. Finally, it is clear that multiplying all the entries of $m$ by a non-zero constant does not change its action on $P(X)$. Therefore the matrices of the homographies that we consider are going to be taken in the following set of cosets:
$$ \ensuremath{\mathcal{P}}_q = \PGL_2({\mathbb F}_{q^2}) / \PGL_2({\mathbb F}_q).$$
(Note that in general $\PGL_2({\mathbb F}_q)$ is not a normal subgroup of $\PGL_2({\mathbb F}_{q^2})$, so that $\ensuremath{\mathcal{P}}_q$ is not a quotient group.)
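
As a quick sanity check, both the factorization in Equation~\eqref{eq:frobenius} and the size $q^3+q$ of $\ensuremath{\mathcal{P}}_q$ (which follows from $\#\PGL_2({\mathbb F}_Q)=Q^3-Q$) are easy to verify numerically. The following minimal Python sketch is purely illustrative; it relies on \textsc{SymPy} and, for simplicity, takes $q$ to be a small prime.
\begin{verbatim}
from sympy import symbols, Poly

q = 7                     # a small prime, for illustration only
x = symbols('x')

# Systematic equation: X^q - X = prod_{a in F_q} (X - a), over F_q.
rhs = Poly(1, x, modulus=q)
for a in range(q):
    rhs *= Poly(x - a, x, modulus=q)
assert Poly(x**q - x, x, modulus=q) == rhs

# Number of cosets in P_q = PGL_2(F_{q^2}) / PGL_2(F_q),
# using #PGL_2(F_Q) = Q^3 - Q.
pgl2_order = lambda Q: Q**3 - Q
assert pgl2_order(q**2) // pgl2_order(q) == q**3 + q
print("systematic equation checked;  #P_q =", q**3 + q)
\end{verbatim}
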
To each element $m = \begin{pmatrix}a & b\\ c& d\end{pmatrix}\in \ensuremath{\mathcal{P}}_q$, we associate the equation~\eqref{eq:Em} obtained by substituting $aP+b$ and $cP+d$ in place of $X$ and $Y$ in Equation~\eqref{eq:frobenius-proj}.
\begin{align*}
  \tag{$E_m$}\label{eq:Em}
(aP+b)^q(cP+d) - (aP+b)(cP+d)^q & =
  \prod_{(\alpha,\beta)\in\ensuremath{\mathcal S}} \bigl(\beta(aP+b) - \alpha(cP+d)\bigr) \\
  & =\prod_{(\alpha,\beta)\in\ensuremath{\mathcal S}}
  \bigl((-c\alpha + a\beta) P - (d\alpha - b\beta)\bigr) \\
  & =\lambda\prod_{(\alpha,\beta)\in\ensuremath{\mathcal S}}
  \Bigl(P - \mathop{\raise-.0125ex\hbox{x}}\bigl(m^{-1} \cdot (\alpha:\beta)\bigr)\Bigr)\text.
\end{align*}
This sequence of formulae calls for a short comment because of an abuse of notation in the last expression. First, $\lambda$ is the constant in ${\mathbb F}_{q^2}$ which makes the leading terms of the two sides match. Then, the term $P-\mathop{\raise-.0125ex\hbox{x}}(m^{-1} \cdot (\alpha:\beta))$ denotes $P-u$ when $m^{-1} \cdot (\alpha:\beta)=(u:1)$ (whence we have $u=\frac{d\alpha - b\beta}{-c\alpha + a\beta}$), or $1$ if $m^{-1} \cdot (\alpha:\beta)=\infty$. The latter may occur: when $a/c$ lies in ${\mathbb F}_q$ (or when $c=0$), the expression $-c\alpha + a\beta$ vanishes for one point $(\alpha:\beta)\in\ensuremath{\mathbb{P}}^1({\mathbb F}_{q})$, so that one of the factors of the product contains no term in $P(X)$.

Hence the right-hand side of Equation~\eqref{eq:Em} is, up to a multiplicative constant, a product of $q+1$ or $q$ translates of the target $P(X)$ by elements of ${\mathbb F}_{q^2}$. The equation obtained is actually related to the set of points $m^{-1}\cdot\ensuremath{\mathbb{P}}^1({\mathbb F}_q)\subset \ensuremath{\mathbb{P}}^1({\mathbb F}_{q^2})$.
\medskip

The polynomial on the left-hand side of~\eqref{eq:Em} can be rewritten as an equivalent expression of smaller degree. For this, we use the special form of the defining polynomial: in $K$ we have $X^q \equiv \frac{h_0(X)}{h_1(X)}$. Let us denote by $\tilde{a}$ the element $a^q$ when $a$ is any element of ${\mathbb F}_{q^2}$. Furthermore, we write $\tilde{P}(X)$ for the polynomial $P(X)$ with all its coefficients raised to the power $q$. The left-hand side of~\eqref{eq:Em} is
$$(\tilde{a}\tilde{P}(X^q)+\tilde{b})(cP(X)+d)
- (aP(X) + b)(\tilde{c}\tilde{P}(X^q)+\tilde{d}),$$
and using the defining equation for the field $K$, it is congruent to
$$
\ensuremath{\mathcal{L}}_m \mathrel{:=} \left(\tilde{a}\tilde{P}\left(\frac{h_0(X)}{h_1(X)}\right)+\tilde{b}\right)(cP(X)+d)
- (aP(X) + b)\left(\tilde{c}\tilde{P}\left(\frac{h_0(X)}{h_1(X)}\right)+\tilde{d}\right).
$$
The denominator of $\ensuremath{\mathcal{L}}_m$ is a power of~$h_1$ and its numerator has degree at most $(1+\delta) D$, where $\delta=\max(\deg h_0,\deg h_1)$. We say that $m\in\ensuremath{\mathcal{P}}_q$ yields a relation if the numerator of $\ensuremath{\mathcal{L}}_m$ is $\lceil D/2 \rceil$-smooth.

To any $m\in\ensuremath{\mathcal{P}}_q$, we associate a row vector $v(m)$ of dimension $q^2+1$ in the following way. Coordinates are indexed by $\mu\in\ensuremath{\mathbb{P}}^1({\mathbb F}_{q^2})$, and the value associated to $\mu$ is $1$ or $0$ depending on whether $P-\mathop{\raise-.0125ex\hbox{x}}(\mu)$ appears in the right-hand side of Equation~\eqref{eq:Em}.
Note that exactly $q+1$ coordinates are $1$ for each $m$. Equivalently, we may write
\begin{equation}\label{eq:v(m)}
v(m)_{\mu\in\ensuremath{\mathbb{P}}^1({\mathbb F}_{q^2})}=\left\{
  \begin{array}{l}
  1\text{ if }\mu=m^{-1}\cdot(\alpha:\beta) \text{ with } (\alpha:\beta)\in\ensuremath{\mathbb{P}}^1({\mathbb F}_q),\\
  0\text{ otherwise}.
  \end{array}
\right.
\end{equation}

We associate to the polynomial $P$ a matrix $H(P)$ whose rows are the vectors $v(m)$ for which $m$ yields a relation, taking at most one matrix $m$ in each coset of $\ensuremath{\mathcal{P}}_q$. The validity of Proposition~\ref{prop:onestep} crucially relies on the following heuristic.

\begin{heuristic}\label{heu:fullrank}
  For any $P(X)$, the rows $v(m)$, for the cosets $m\in\ensuremath{\mathcal{P}}_q$ that yield a relation, form a matrix of full rank $q^2+1$.
\end{heuristic}

As we will note in Section~\ref{sec:heur}, the matrix $H(P)$ is heuristically expected to have $\Theta(q^3)$ rows, where the implicit constant depends on $\delta$. This means that for our decomposition procedure to work, we rely on the fact that $q$ is large enough (otherwise $H(P)$ may have fewer than $q^2+1$ rows, which precludes the possibility that it has rank $q^2+1$).
\medskip

The first point of Proposition~\ref{prop:onestep}, where we descend a polynomial $P(X)$ of degree $D\geq 2$, follows by linear algebra on this matrix. Since we assume that the matrix has full rank, the vector $(\ldots,0,1,0,\ldots)$ with $1$ corresponding to $P(X)$ can be written as a linear combination of the rows. Applying this linear combination to the corresponding equations~\eqref{eq:Em}, we write $\log P(X)$ as a linear combination of the $\log P_i$, where the $P_i(X)$ are the irreducible factors occurring in the left-hand sides of these equations. Since there are $O(q^2)$ columns, the elimination process involves at most $O(q^2)$ rows, and since each row corresponds to an equation~\eqref{eq:Em}, each of them contributes at most $\deg \ensuremath{\mathcal{L}}_m\leq (1+\delta)D$ polynomials on the left-hand side\footnote{This estimate of the number of irreducible factors is a pessimistic upper bound. In practice, one expects to have only $O(\log D)$ factors on average. Since the crude estimate does not change the overall complexity, we keep it that way to avoid adding another heuristic.}. In total, the logarithm of the polynomial $P$ is expressed as a linear combination of the logarithms of at most $O(q^2D)$ polynomials of degree at most $\lceil D/2\rceil$. The logarithm of $h_1(X)$ is also involved, since a power of $h_1$ appears as the denominator of $\ensuremath{\mathcal{L}}_m$. We have not written explicitly the constant in ${\mathbb F}_{q^2}^*$ that takes care of the leading coefficients. Since discrete logarithms in ${\mathbb F}_{q^2}^*$ can certainly be computed in polynomial time in $q$, this is not a problem.

Since the order of $\PGL_2({\mathbb F}_{q^i})$ is $q^{3i}-q^i$, the set of cosets $\ensuremath{\mathcal{P}}_q$ has $q^3+q$ elements. For each $m \in\ensuremath{\mathcal{P}}_q$, testing whether~\eqref{eq:Em} yields a relation amounts to some polynomial manipulations and a smoothness test. All of these can be done in time polynomial in $q$ and in the degree of $P(X)$, which is bounded by $k$.
Finally, the linear algebra step can be done in $O(q^{2\omega})$ operations using asymptotically fast matrix multiplication algorithms, or alternatively in $O(q^5)$ operations using sparse matrix techniques. Indeed, we have $q+1$ non-zero entries per row and a size of $q^2+1$. Therefore, the overall cost is polynomial in $q$ and $k$, as claimed.
\medskip

For the second part of Proposition~\ref{prop:onestep}, we replace $P$ by $X$ during the construction of the matrix. In that case, both sides of the equations~\eqref{eq:Em} involve only linear polynomials. Hence we obtain a linear system whose unknowns are $\log (X+a)$ with $a\in{\mathbb F}_{q^2}$. Since Heuristic~\ref{heu:fullrank} would give us only the full rank of the system corresponding to the right-hand sides of the equations~\eqref{eq:Em}, we have to rely on a specific heuristic for this step:
\begin{heuristic}\label{heu:linfullrank}
  The linear system constructed from all the equations~\eqref{eq:Em} for $P(X)=X$ has full rank.
\end{heuristic}
Assuming that this heuristic holds, we can solve the linear system and obtain the discrete logarithms of the linear polynomials and of $h_1(X)$.

\section{Supporting the heuristic argument in the proof}
\label{sec:heur}

For Heuristic~\ref{heu:fullrank}, we propose two approaches to support it. Both allow us to gain some confidence in its validity, but of course neither removes the heuristic nature of the statement.

For the first line of justification, we denote by $\ensuremath{\mathcal{H}}$ the matrix of all the $\#\ensuremath{\mathcal{P}}_q=q^3+q$ vectors $v(m)$ defined as in Equation~\eqref{eq:v(m)}. Associated to a polynomial~$P$, Section~\ref{sec:descent-one-step} defines the matrix $H(P)$ formed of the rows $v(m)$ such that the numerator of $\ensuremath{\mathcal{L}}_m$ is smooth. We will give heuristic arguments that $H(P)$ has $\Theta(q^3)$ rows, and then prove that $\ensuremath{\mathcal{H}}$ has rank $q^2+1$, which of course does not prove that its submatrix $H(P)$ has full rank.

In order to estimate the number of rows of $H(P)$, we assume that the numerator of $\ensuremath{\mathcal{L}}_m$ is $\lceil \frac{D}{2}\rceil$-smooth with the same probability as a uniformly random polynomial of the same degree. In this paragraph, we assume that the degrees of $h_0$ and $h_1$ are bounded by $2$, merely to avoid awkward notations; the result holds for any constant bound $\delta$. The degree of the numerator of $\ensuremath{\mathcal{L}}_m$ is then bounded by $3D$, so we have to estimate the probability that a polynomial in ${\mathbb F}_{q^2}[X]$ of degree $3D$ is $\lceil \frac{D}{2}\rceil$-smooth. For any prime power $q$ and integers $1\leq m\leq n$, we denote by $N_q(n,m)$ the number of $m$-smooth monic polynomials of degree $n$. Using analytic methods, Panario et al. gave a precise estimate of this quantity (Theorem~$1$ of~\cite{FGP98}):
\begin{equation}\label{eq:Flajolet}
	N_{q}(n,m)=q^n \rho\left(\frac{n}{m}\right)\left(1+O\left(\frac{\log n}{m}\right)\right),
\end{equation}
where $\rho$ is Dickman's function, defined as the unique continuous function such that $\rho(u)=1$ on $[0,1]$ and $u\rho'(u)=-\rho(u-1)$ for $u>1$. We stress that the constant $\kappa$ hidden in the $O()$ notation is independent of $q$. In our case, we are interested in the value of $N_{q^2}(3D, \lceil \frac{D}{2}\rceil)$; the relevant values of $\rho$ are easy to evaluate numerically, as sketched below.
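
The following short Python sketch is only an illustration of how such values can be obtained: it integrates the delay differential equation above with a simple fixed-step trapezoidal scheme of our own (a few digits of accuracy, which is all we need here) and recovers the orders of magnitude used below, namely $\rho(6)\approx 2\cdot 10^{-5}$ and, for the instantiation $\delta=1$ of the formula $\rho(2\delta+2)$, $\rho(4)\approx 5\cdot 10^{-3}$.
\begin{verbatim}
# Dickman's rho via the delay ODE  u * rho'(u) = -rho(u-1),  rho = 1 on [0,1].
# Fixed-step trapezoidal integration on a grid; a few digits of accuracy.

def dickman_rho(u_max, h=1e-4):
    steps_per_unit = int(round(1.0 / h))
    n = int(round(u_max / h))
    rho = [1.0] * (n + 1)                          # rho(u) = 1 for 0 <= u <= 1
    for i in range(steps_per_unit, n):
        u = i * h
        f0 = rho[i - steps_per_unit] / u               # rho(u - 1) / u
        f1 = rho[i + 1 - steps_per_unit] / (u + h)     # rho(u + h - 1) / (u + h)
        rho[i + 1] = rho[i] - 0.5 * h * (f0 + f1)
    return rho[n]

print(dickman_rho(6.0))   # ~1.96e-5 : degree 3D, smoothness bound ceil(D/2)
print(dickman_rho(4.0))   # ~4.9e-3  : the delta = 1 case, rho(2*delta + 2)
\end{verbatim}
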
Let us call $D_0$ the least integer such that, for all $D>D_0$, the quantity $1-\kappa\left(\frac{\log (3D)}{\lceil D/2\rceil}\right)$ is at least $1/2$. For $D>D_0$, we will use the formula~\eqref{eq:Flajolet}; and for $D\le D_0$, we will use the crude estimate $N_q(n,m) \ge N_q(n,1) \ge q^n/n!$. Hence the smoothness probability of the numerator of $\ensuremath{\mathcal{L}}_m$ is at least $\min\left(\frac{1}{2}\rho(6),1/(3D_0)!\right)$.

More generally, if $\deg h_0$ and $\deg h_1$ are bounded by a constant $\delta$, then the smoothness probability is at least $\rho(2\delta+2)$ times an absolute constant. Since we have $q^3+q$ candidates and a constant probability of success, $H(P)$ has $\Theta(q^3)$ rows.

Now, unless some theoretical obstruction occurs, we expect a matrix over ${\mathbb F}_\ell$ with more rows than columns to have full rank with probability at least $1-\frac{1}{\ell}$. The matrix $\ensuremath{\mathcal{H}}$ is however peculiar, and does enjoy regularity properties which are worth noticing. For instance, we have the following proposition.
\begin{prop}
  \label{prop:bigmat-fullrank}
Let $\ell$ be a prime not dividing $q^3-q$. Then the matrix $\ensuremath{\mathcal{H}}$ over ${\mathbb F}_\ell$ has full rank $q^2+1$.
\end{prop}
\begin{proof}
  We may obtain this result in two ways. First, \ensuremath{\mathcal{H}}\ is the incidence matrix of a $3\text{-}(q^2+1,q+1,1)$ combinatorial design called an \emph{inversive plane} (see e.g.~\cite[Theorem 9.27]{Stinson03}). As such we obtain the identity $$\ensuremath{\mathcal{H}}^T\ensuremath{\mathcal{H}}=(q+1)(J_{q^2+1}-(1-q)I_{q^2+1})$$ (see~\cite[Theorem 1.13 and Corollary 9.6]{Stinson03}), where $J_n$ is the $n\times n$ matrix with all entries equal to one, and $I_n$ is the $n\times n$ identity matrix. This readily gives the result: the right-hand side has determinant $q\,(q+1)^{q^2+2}(q-1)^{q^2}$, which is non-zero modulo $\ell$ as soon as $\ell$ does not divide $q^3-q$, so that $\ensuremath{\mathcal{H}}$ has full column rank over ${\mathbb F}_\ell$.

  We also provide an elementary proof of the proposition. We have a bijection between rows of $\ensuremath{\mathcal{H}}$ and the different possible image sets of the projective line $\ensuremath{\mathbb{P}}^1({\mathbb F}_q)$ within $\ensuremath{\mathbb{P}}^1({\mathbb F}_{q^2})$, under injections of the form $(\alpha:\beta)\mapsto m^{-1}\cdot(\alpha:\beta)$. All these $q^3+q$ image sets have size $q+1$, and by symmetry all points of $\ensuremath{\mathbb{P}}^1({\mathbb F}_{q^2})$ are reached equally often. Therefore, the sum of all rows of $\ensuremath{\mathcal{H}}$ is the vector whose coordinates are all equal to $\frac1{1+q^2}(q^3+q)(q+1)=q^2+q$.

  Let us now consider the sum of the rows in $\ensuremath{\mathcal{H}}$ whose first coordinate is $1$ (as we have just shown, we have $q^2+q$ such rows). Those correspond to image sets of $\ensuremath{\mathbb{P}}^1({\mathbb F}_q)$ which contain one particular point, say $(0:1)$. The value of the sum for any other coordinate indexed by, e.g., $Q\in\ensuremath{\mathbb{P}}^1({\mathbb F}_{q^2})$ is the number of image sets $m^{-1}\cdot\ensuremath{\mathbb{P}}^1({\mathbb F}_q)$ which contain both $(0:1)$ and $Q$, which we prove is equal to $q+1$ as follows. Without loss of generality, we may assume $Q=\infty=(1:0)$. We need to count the relevant homographies $m^{-1}\in\PGL_2({\mathbb F}_{q^2})$, modulo $\PGL_2({\mathbb F}_q)$-equivalence $m\equiv hm$.
By $\PGL_2({\mathbb F}_q)$-equivalence, we may without loss of generality assume that $m^{-1}$ fixes $(0:1)$ and $(1:0)$. Letting $m^{-1}= \begin{pmatrix}a&b\\c&d\end{pmatrix}$, we obtain $(b:d)=(0:1)$ and $(a:c)=(1:0)$, whence $b=c=0$, and both $a,d\not=0$. We may normalize to $d=1$, and notice that multiplication of $a$ by a scalar in ${\mathbb F}_q^*$ is absorbed in $\PGL_2({\mathbb F}_q)$-equivalence. Therefore the number of suitable $m$ is $\#{{\mathbb F}_{q^2}^*}/{{\mathbb F}_q^*}=q+1$.

  These two facts show that the row span of $\ensuremath{\mathcal{H}}$ contains the vectors $(q^2+q, \ldots, q^2+q)$ and $(q^2+q, q+1, \ldots, q+1)$. The vector $(q^3-q,0,\ldots,0)$ is obtained as a linear combination of these two vectors, which suffices to prove that $\ensuremath{\mathcal{H}}$ has full rank, since the same reasoning holds for any coordinate.
\end{proof}


Proposition~\ref{prop:bigmat-fullrank}, while encouraging, is clearly not sufficient. We are, at the moment, unable to provide a proof of a more useful statement. On the experimental side, it is reasonably easy to sample arbitrary subsets of the rows of $\ensuremath{\mathcal{H}}$ and check for their rank. To this end, we propose the following experiment. We have considered small values of $q$ in the range $[16,\ldots,64]$, and made~50 random picks of subsets $S_i\subset\ensuremath{\mathcal{P}}_q$, all of size exactly $q^2+1$. For each we considered the matrix of the corresponding linear system, which is made of selected rows of the matrix \ensuremath{\mathcal{H}}, and computed its determinant $\delta_i$. For all values of $q$ considered, we have observed the following facts.
\begin{itemize}
  \item First, all square matrices considered had full rank over \ensuremath{\mathbb{Z}}. Furthermore, their determinants had no common factor apart possibly from those appearing in the factorization of $q^3-q$ as predicted by Proposition~\ref{prop:bigmat-fullrank}. In fact, experimentally it seems that only the factors of $q+1$ are causing problems.
  \item We also explored the possibility that modulo some primes, the determinant could vanish with non-negligible probability. We thus computed the pairwise GCD of all~50 determinants computed, for each $q$.
Again, the only prime factors appearing in the GCDs were either originating from the factorization of $q^3-q$, or sporadically from the birthday paradox.
\end{itemize}
\begin{table}
\begin{center}
  \begin{minipage}[t]{0.5\textwidth}
\begin{tabular}{c|c|l|l}
  $q$ & \#trials & in $\gcd(\{\delta_i\})$ & in $\gcd(\delta_i, \delta_j)$\\
  \hline
16 & 50 & 17 & 691\\
17 & 50 & 2, 3 & 431, 691\\
19 & 50 & 2, 5 & none above $q^2$\\
23 & 50 & 2, 3 & none above $q^2$\\
25 & 50 & 2, 13 & none above $q^2$\\
27 & 50 & 2, 7 & 1327\\
29 & 50 & 2, 3, 5 & none above $q^2$\\
31 & 50 & 2 & 1303, 3209\\
32 & 50 & 3, 11 & none above $q^2$\\
\end{tabular}
  \end{minipage}%
  \begin{minipage}[t]{0.5\textwidth}
\begin{tabular}{c|c|l|l}
  $q$ & \#trials & in $\gcd(\{\delta_i\})$ & in $\gcd(\delta_i, \delta_j)$\\
  \hline
37 & 50 & 2, 19 & 2879\\
41 & 50 & 2, 3, 7 & none above $q^2$\\
43 & 50 & 2, 11 & none above $q^2$\\
47 & 50 & 2, 3 & none above $q^2$\\
49 & 50 & 2, 5 & none above $q^2$\\
53 & 50 & 2, 3 & none above $q^2$\\
59 & 50 & 2, 3, 5 & none above $q^2$\\
61 & 50 & 2, 31 & none above $q^2$\\
64 & 50 & 5, 13 & none above $q^2$\\
\end{tabular}
  \end{minipage}%

\caption{\label{tab:experiment1}Prime factors appearing in the determinants of random square submatrices of \ensuremath{\mathcal{H}}\ (for one given set of random trials)}
\end{center}
\end{table}
These results are summarized in Table~\ref{tab:experiment1}, where the last column omits small prime factors below $q^2$. Of course, we remark that considering square submatrices is a more demanding check than what Heuristic~\ref{heu:fullrank} suggests, since our algorithm only needs a slightly larger matrix of size $\Theta(q^3)\times(q^2+1)$ to have full rank.
\medskip

A second line of justification is more direct and natural, as it is possible to implement the algorithm outlined in Section~\ref{sec:descent-one-step}, and verify that it does provide the desired result. A \textsc{Magma} implementation validates this claim, and has been used to implement descent steps for an example field of degree~$53$ over ${\mathbb F}_{53^2}$.
As an example step in this context, we applied our algorithm to a polynomial of degree~10, attempting to reduce it to polynomials of degree~6 or less. Among the 148,930 elements of $\ensuremath{\mathcal{P}}_q$, it sufficed to consider only 71,944 matrices $m$, of which about 3.9\% led to relations, the minimum sufficient number of relations being $q^2+1=2810$ (as more than half of the elements of $\ensuremath{\mathcal{P}}_q$ had not even been examined at this point, it is clear that getting more relations was easy---we did not have to). As the defining polynomial for the finite field considered was constructed with $\delta=\deg h_{0,1}=1$, all left-hand sides involved had degree 20. The polynomials appearing in their factorizations had the following degrees (the numbers in brackets give the number of distinct polynomials found for each degree): 1(2098), 2(2652), 3(2552), 4(2463), 5(2546), 6(2683). Of course, this tiny example uses no optimization, and is only intended to check the validity of Proposition~\ref{prop:onestep}.

\bigskip

As for Heuristic~\ref{heu:linfullrank}, it is already present in~\cite{Joux13} and~\cite{GGMZ13}, so this is not a new heuristic. Just like Heuristic~\ref{heu:fullrank}, it is based on the fact that the probability that a left-hand side is $1$-smooth and yields a relation is constant. Therefore, we have a system with $\Theta(q^3)$ relations between $O(q^2)$ indeterminates, and it seems reasonable to expect that it has full rank. On the other hand, there is not as much algebraic structure in this linear system as in Heuristic~\ref{heu:fullrank}, so that we see no way to support this heuristic apart from testing it on several inputs. This was already done (including for record computations) in~\cite{Joux13} and~\cite{GGMZ13}, so we do not elaborate on our own experiments, which again confirm that Heuristic~\ref{heu:linfullrank} seems to be valid except for tiny values of $q$.

\paragraph{An obstruction to the heuristics.}

As noted by Cheng, Wan and Zhuang~\cite{traps13}, the irreducible factors of $h_1X^q-h_0$ other than the degree $k$ factor that is used to define ${\mathbb F}_{q^{2k}}$ are problematic. Let $P$ be such a problematic polynomial. The fact that it divides the defining equation implies that it also divides the quantity $\ensuremath{\mathcal{L}}_m$ that is involved when trying to build a relation that relates $P$ to other polynomials. Therefore the first part of Proposition~\ref{prop:onestep} cannot hold for this $P$. Similarly, if $P$ is linear, its presence will prevent the second part of Proposition~\ref{prop:onestep} from holding, since the logarithm of $P$ cannot be found with the technique of Section~\ref{sec:descent-one-step}. We present here a technique to deal with the problematic polynomials. (The authors of~\cite{traps13} proposed another solution to keep the quasi-polynomial nature of the algorithm.)

\begin{prop}\label{solvetrap}
For each problematic polynomial $P$ of degree $D$, we can find a linear relation between $\log P$, $\log h_1$ and $O(D)$ logarithms of polynomials of degree at most $(\delta-1)D$ which are not problematic.
\end{prop}

\begin{proof}
Let $P$ be an irreducible factor of $h_1X^q-h_0$ of degree $D$.
Let us consider $P^q$; by reducing modulo $h_1X^q-h_0$ and clearing denominators, there exists a polynomial $A(X)$ such that
\begin{equation}\label{eq:freerel}
h_1^D P^q = h_1^D\tilde{P}\left(\frac{h_0}{h_1}\right) +(h_1X^q-h_0)A(X).
\end{equation}
Since $P$ divides two of the terms of this equality, it must also divide the third one, namely the polynomial $\ensuremath{\mathcal R} = h_1^D\tilde{P}\left(h_0/h_1\right)$. Let $v_P\ge 1$ be the valuation of $P$ in $\ensuremath{\mathcal R}$. In the finite field ${\mathbb F}_{q^{2k}}$ we obtain the following equality between logarithms:
\begin{equation*}
	(q-v_P)\log P = -D\log h_1 +\sum_i e_i \log Q_i,
\end{equation*}
where the $Q_i$ are the irreducible factors of $\ensuremath{\mathcal R}$ other than $P$ and $e_i$ their valuation in~$\ensuremath{\mathcal R}$. A polynomial $Q_i$ cannot be problematic: otherwise, it would divide the right-hand side of Equation~\eqref{eq:freerel} and therefore also the left-hand side, which is impossible. The $Q_i$ are at most $\deg\ensuremath{\mathcal R}=O(D)$ in number, and they have degree at most $\deg\ensuremath{\mathcal R}-v_PD\leq(\delta-1)D$. Finally, since $v_P\leq \frac{\deg \ensuremath{\mathcal R}}{\deg P}\leq \delta<q$, the coefficient $q-v_P$ is a non-zero integer smaller than $q$; it is therefore invertible modulo the order of the subgroup in which we work, which has no small prime factor, and the relation above can be rewritten in the announced form.
\end{proof}

When $\delta=2$, the $Q_i$ have degree at most $D$, so for each problematic polynomial of degree $D>1$ it will be possible to rewrite its logarithm in terms of logarithms of non-problematic polynomials of at most the same degree, which can be descended in the usual way. Similarly, each problematic polynomial of degree 1 can have its logarithm rewritten in terms of the logarithms of other non-problematic linear polynomials. Adding these relations to the ones obtained in Section~\ref{sec:descent-one-step}, we expect to have a full-rank linear system.

If $\delta>2$, we need to rely on an additional heuristic. Indeed, when descending those $Q_i$ whose degree is potentially larger than $D$, we could hit again the problematic polynomial we started with, and it could be that the coefficient in front of $\log P$ in the resulting system vanishes. More generally, taking into account all the problematic polynomials, if applying Proposition~\ref{solvetrap} to them yields polynomials $Q_i$ of higher degrees, descending those could create loops, so that the logarithms of some of the problematic polynomials could not be computed. We expect this event to be very unlikely. Since in all our experiments it was always possible to obtain $\delta=2$, we did not investigate further.

\paragraph{Finding appropriate $h_0$ and $h_1$.}

One key fact about the algorithm is the existence of two polynomials $h_0$ and $h_1$ in ${\mathbb F}_{q^2}[X]$ such that $h_1(X)X^q-h_0(X)$ has an irreducible factor of degree $k$. A partial solution is due to Joux~\cite{Joux13}, who showed how to construct such polynomials when $k\in\{q-1,q,q+1\}$. No such deterministic construction is known in the general case, but experiments show that one can apparently choose $h_0$ and $h_1$ of degree at most $2$. We performed an experiment for every odd prime power $q$ in $[3,\ldots,1000]$ and every $k\leq q$ and found that we could select $a\in{\mathbb F}_{q^2}$ such that $X^q+X^2+a$ has an irreducible factor of degree $k$.
Finally, note that this experimental finding is similar to a commonly made heuristic in discrete logarithm algorithms: for fixed $f\in{\mathbb F}_{q^2}[X,Y]$ and random $g\in{\mathbb F}_{q^2}[X,Y]$, the polynomial $\text{Res}_Y(f,g)$ behaves as a random polynomial of the same degree with respect to the degrees of its irreducible factors.


\section{Some directions of improvement}
\label{sec:improvement}
The algorithm can be modified in several ways. On the one hand, one can obtain a better complexity if one proves a stronger result on the smoothness probability. On the other hand, without changing the complexity, one can obtain a version which should behave better in practice.

\subsection{Complexity improvement}
Heuristic~\ref{heu:fullrank} asserts that a rectangular matrix with $\Theta(q)$ times more rows than columns has full rank. It seems reasonable to expect that only a constant times more rows than columns would be enough to obtain the full-rank property (as is suggested by the experiments proposed in Section~\ref{sec:heur}). This means that we expect to have a lot of freedom in selecting the best relations, in the sense that their left-hand sides split into irreducible factors of degrees as small as possible.

On average, we expect to be able to try $\Theta(q)$ relations for each row of the matrix. So, assuming that the numerators of $\ensuremath{\mathcal{L}}_m$ behave like random polynomials of similar degrees, we have to evaluate the expected smoothness that we can hope for after trying $\Theta(q)$ polynomials of degree $(1+\delta)D$ over ${\mathbb F}_{q^2}$. Set $u=\log q / \log\log q$, so that $u^u\approx q$. Since $\rho(u)$ is roughly $u^{-u}$, a polynomial of degree $(1+\delta)D$ is $O(D/u)$-smooth with probability about $1/q$, so that among $\Theta(q)$ trials we expect at least one success. According to~\cite{FGP98}, it is then possible to replace $\lceil D/2\rceil$ in Proposition~\ref{prop:onestep} by the value $O(D\log\log q/\log q)$.

Then, the discussion leading to Theorem~\ref{thm} can be changed to take this faster descent into account. We keep the same estimate for the arity of each node in the tree, but the depth is now only in $O(\log k / \log\log q)$. Since this depth ends up in the exponent, the resulting complexity in Theorem~\ref{thm} becomes
$$ \max(q, k)^{O(\log k / \log\log q)}.$$

\subsection{Practical improvements}
Because of the arity of the descent tree, the breadth eventually exceeds the number of polynomials below some degree bound. It makes no sense, therefore, to use the descent procedure beyond this point, as the recovery of the discrete logarithms of all these polynomials is better achieved as a pre-computation. Note that this corresponds to the computations of the $L(1/4+\epsilon)$ algorithm, which starts by pre-computing the logarithms of polynomials up to degree $2$. In our case, we could in principle go up to degree $O(\log q)$ without changing the complexity.
\medskip

We propose another practical improvement for the case where we would like to spend more time descending a given polynomial $P$ in order to improve the quality of the descent tree rooted at $P$. The set of polynomials appearing in the right-hand side of Equation~\eqref{eq:Em} in Section~\ref{sec:descent-one-step} is $\{P-\lambda\}$, because in the factorization of $X^q-X$ we substitute $m\cdot P$ for $X$, for homographies~$m$. In fact, we may apply $m$ to $(P:P_1)$ for any polynomial $P_1$ whose degree does not exceed that of $P$.
In the right-hand sides, we will then have only factors of the form $P - \lambda P_1$ for $\lambda$ in ${\mathbb F}_{q^2}$. On the left-hand sides, we have polynomials of the same degree as before, so that the smoothness probability is expected to be the same. Nevertheless, it is possible to test several polynomials $P_1$, and to select the one that leads to the best tree.

This strategy can also be useful in the following context (which will not occur for large enough $q$): it can happen that for some triples $(q,D,D')$ one has $N_{q^2}(3D,D')/(q^2)^{3D}\approx 1/q$. In this case we have no certainty that we can descend a degree-$D$ polynomial to degree $D'$, but we can hope that at least one of the $P_1$ allows us to descend.

Finally, if one decides to use several auxiliary polynomials $P_1$ to descend a polynomial $P$, it might be interesting to take a set of polynomials $P_1$ with an arithmetic structure, so that the smoothness tests on the left-hand sides can benefit from a sieving technique.

\section{Conclusion}
The algorithm presented in this article achieves a significant improvement of the asymptotic complexity of discrete logarithm computation in finite fields, in almost the whole range of parameters where the Function Field Sieve was previously the most competitive algorithm. Compared to existing approaches, and in particular to the line of recent works~\cite{Jo13faster,GGMZ13}, the practical relevance of our algorithm is not clear, and will be explored in further work.

We note that the analysis of the algorithm presented here is heuristic, as discussed in Section~\ref{sec:heur}. Some of the heuristics we stated, related to the properties of the matrices $H(P)$ extracted from the matrix $\ensuremath{\mathcal{H}}$, seem accessible to more solid justification. It seems plausible that the validity of the algorithm could be made to rely solely on the heuristic validity of the smoothness estimates.

The crossing point between the $L(1/4)$ algorithm and our quasi-polynomial one is not determined yet. One of the key factors which hinders the practical efficiency of this algorithm is the $O(q^2D)$ arity of the descent tree, compared to the $O(q)$ arity achieved by techniques based on Gr\"obner bases~\cite{Jo13faster} at the expense of an $L(1/4+\epsilon)$ complexity. Adj et al.~\cite{AMOR13} proposed to mix the two algorithms and deduced that the new descent technique must be used for cryptographic sizes. Indeed, by estimating the time required to compute discrete logarithms in ${\mathbb F}_{3^{6\cdot 509}}$, they showed the weakness of some pairing-based cryptosystems.

\ifanon
\else
\section*{Acknowledgements}
The authors would like to thank Daniel J. Bernstein for his comments on an earlier version of this work, and for pointing out to us the possible use of asymptotically fast linear algebra for solving the linear systems encountered.

\fi

\bibliographystyle{splncs03}