\\section{Introduction}\n\nIn the last few years, ITU\/MPEG video coding standards---HEVC\n\\citep{Sullivan:2012:OHE:2709080.2709221} and VVC \\citep{VVC_Ref}---have been\nchallenged by learning-based codecs. The learned image coding framework\nintroduced by \\citet{DBLP:conf\/iclr\/BalleLS17,DBLP:conf\/iclr\/BalleMSHJ18} eases\nthe design process and improves the performance by jointly optimizing all steps\n(encoder, decoder, entropy coding) given a rate-distortion objective. The best\nlearned coding system \\citep{cheng2020learned} exhibits performance on par with\nthe image coding configuration of VVC. In video coding, temporal redundancies\nare removed through motion compensation. Motion information between\nframes is transmitted and used to interpolate reference frames to obtain a\ntemporal prediction. Then, only the residue (prediction error) is sent, reducing\nthe rate. Frames coded using references are called \\textit{inter} frames, while others\nare called \\textit{intra} frames.\n\nAlthough most learning-based video coding systems follow the framework of\nBall\\'{e} et al., the end-to-end character of the training is often overlooked.\nThe coders introduced by \\citet{DBLP:conf\/cvpr\/LuO0ZCG19} or\n\\citet{DBLP:journals\/corr\/abs-1912-06348} rely on a dedicated pre-training to\nachieve efficient motion compensation. Dedicated training requires proxy\nmetrics not necessarily in line with the real rate-distortion objective, leading\nto suboptimal systems. Due to the presence of both intra and inter frames,\nlearned video coding methods transmit two kinds of signals: image-domain signals\nfor intra frames and residual-domain signals for inter frames.
Therefore, most works\n\\citep{Agustsson_2020_CVPR} adopt a \\textit{two-coder} approach, with separate\ncoders for intra and inter frames, resulting in heavier and less factorizable\nsystems.\n\n\nThis paper addresses these shortcomings by introducing a novel framework for\nend-to-end learned video coding, based on a single coder for both intra and\ninter frames. Pursuing the work of \\citet{LaduneMMSP20}, the coding scheme is\ndecomposed into two sub-networks: MOFNet and CodecNet. MOFNet conveys motion\ninformation and a coding mode, which arbitrates between transmission with\nCodecNet and copying of the temporal prediction. MOFNet and CodecNet use conditional\ncoding to leverage information from the previously coded frames while being\nresilient to their absence. This makes it possible to process intra and inter frames with\nthe same coder. The system is trained as a whole with no pre-training or\ndedicated loss term for any of the components. It is shown that the system is\nflexible enough to be competitive with HEVC under three coding configurations.\n\n\\vspace{-0.03cm}\n\n\\section{Proposed system}\n\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.6\\linewidth]{.\/arxiv_figure\/RA_4-crop.pdf}\n \\caption{Random Access configuration; the GOP size is set to 4 to keep the diagram concise.}\n \\label{fig:gopstruct}\n\\end{figure}\n\n\nLet $\\left\\{\\mathbf{x}_i, i \\in \\mathbb{N}\\right\\}$ be a video sequence, each\nframe $\\mathbf{x}_i$ being a vector of $C$ color channels\\footnote{Videos are in\nYUV 420. For convenience, a bilinear upsampling is used to obtain YUV 444 data.}\nof height $H$ and width $W$. Video codecs usually process Groups Of Pictures\n(GOP) of size $N$, with a regular frame organization. Inside a GOP, all frames\nare inter-coded and rely on already sent frames called references: B-frames use\ntwo references while P-frames use a single one.
The first frame of the GOP\nrelies either on a preceding GOP or on an intra-frame (I-frame) denoted as\n$\\mathbf{x}_0$. This work primarily targets the \\textit{Random Access}\nconfiguration (Fig. \\ref{fig:gopstruct}), because it features I, P and B-frames.\nHere, we consider the rate-distortion trade-off, weighted by $\\lambda$, of a\n\\textit{single} GOP plus an initial I-frame $\\mathbf{x}_0$:\n\\begin{equation}\n \\mathcal{L}_\\lambda = \\sum_{t=0}^{N} \\mathrm{D}(\\hat{\\mathbf{x}}_t, \\mathbf{x}_t) +\n \\lambda \\mathrm{R}(\\hat{\\mathbf{x}}_t), \\text{ with }\\mathrm{D} \\text{ the MSE and } \\mathrm{R} \\text{ the rate.}\n \\label{eq:loss}\n\\end{equation}\n\n\n\\subsection{B-frame Coding}\n\nThe proposed architecture processes the entire GOP (I, P and B-frames) using a\nsingle neural-based coder. B-frame coding is detailed here. Thanks to\nconditional coding, I and P-frames are processed by simply bypassing some steps\nof the B-frame coding process as explained in Section\n\\ref{subsec:conditionalcoding}.\n\nLet $\\mathbf{x}_t$ be the current B-frame and $(\\hat{\\mathbf{x}}_{p},\\hat{\\mathbf{x}}_{f})$ two\nreference frames. Figure \\ref{fig:overalldiagram} depicts the coding process of\n$\\mathbf{x}_t$. First, $(\\mathbf{x}_t,\\hat{\\mathbf{x}}_{p},\\hat{\\mathbf{x}}_{f})$ are fed to MOFNet\nwhich computes and conveys---at a rate $R_m$---two optical flows $(\\mathbf{v}_{p},\n\\mathbf{v}_{f})$, a pixel-wise prediction weighting $\\boldsymbol{\\beta}$ and a pixel-wise\ncoding mode selection $\\boldsymbol{\\alpha}$. The optical flow $\\mathbf{v}_{p}$ (respectively\n$\\mathbf{v}_{f}$) represents a 2D pixel-wise motion from $\\mathbf{x}_t$ to\n$\\hat{\\mathbf{x}}_{p}$ (resp. $\\hat{\\mathbf{x}}_{f}$). It is used to interpolate the reference through a\nbilinear warping $w$.
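The paper only states that $w$ is a bilinear warping; the helper below is a rough, unoptimized sketch of what such an operator computes, for a single-channel frame stored as nested lists, with zero padding outside the frame borders (the padding policy is an assumption, not something specified in the text):

```python
import math

def bilinear_warp(ref, flow):
    """Warp a single-channel reference frame `ref` (H x W nested lists)
    with a dense motion field `flow` (H x W pairs (dy, dx))."""
    H, W = len(ref), len(ref[0])

    def sample(y, x):
        # Zero outside the frame (border handling is an assumption here).
        return ref[y][x] if 0 <= y < H and 0 <= x < W else 0.0

    out = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            dy, dx = flow[i][j]
            y, x = i + dy, j + dx
            y0, x0 = math.floor(y), math.floor(x)
            wy, wx = y - y0, x - x0
            # Weighted average of the 4 neighbors of the motion-shifted point.
            out[i][j] = ((1 - wy) * (1 - wx) * sample(y0, x0)
                         + (1 - wy) * wx * sample(y0, x0 + 1)
                         + wy * (1 - wx) * sample(y0 + 1, x0)
                         + wy * wx * sample(y0 + 1, x0 + 1))
    return out
```

A zero flow returns the reference unchanged, while a half-pixel flow averages neighboring samples, which is the interpolation behavior the text relies on.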
The pixel-wise weighting $\\boldsymbol{\\beta}$ is applied to obtain the\nbi-directional weighted prediction $\\tilde{\\mathbf{x}}_t$:\n\\begin{equation}\n \\tilde{\\mathbf{x}}_t = \\boldsymbol{\\beta} \\odot w(\\hat{\\mathbf{x}}_{p}; \\mathbf{v}_{p}) + (1 - \\boldsymbol{\\beta}) \\odot w(\\hat{\\mathbf{x}}_{f}; \\mathbf{v}_{f}),\n \\left\\{ \\begin{array}{l}\n \\odot \\text{ is a pixel-wise multiplication,} \\\\\n \\mathbf{v}_{p} \\text{ and } \\mathbf{v}_{f} \\in \\mathbb{R}^{2 \\times H \\times W},\\ \\boldsymbol{\\beta} \\in \\left[0, 1\\right]^{H \\times W}\n \\end{array}\n \\right.\n \\label{eq:pred}\n\\end{equation}\nThe coding mode selection $\\boldsymbol{\\alpha} \\in \\left[0, 1\\right]^{H \\times W}$ arbitrates\nbetween transmission of $\\mathbf{x}_t$ using CodecNet and \\textit{Skip mode},\na direct copy of $\\tilde{\\mathbf{x}}_t$. CodecNet sends areas of $\\mathbf{x}_t$ selected by\n$\\boldsymbol{\\alpha}$, using information from $\\tilde{\\mathbf{x}}_t$ to reduce its rate $R_c$. The total\nrate required for $\\mathbf{x}_t$ is $R = R_m + R_c$ and the decoded frame\n$\\hat{\\mathbf{x}}_t$ is the sum of both contributions: $\\hat{\\mathbf{x}}_t = \\underbrace{(1 -\n\\boldsymbol{\\alpha}) \\odot \\tilde{\\mathbf{x}}_t}_{\\text{Skip}} + \\underbrace{c(\\boldsymbol{\\alpha} \\odot \\mathbf{x}_t,\n\\boldsymbol{\\alpha} \\odot \\tilde{\\mathbf{x}}_t)}_{\\text{CodecNet}}$.\n\n\n\n\\subsection{Conditional Coding}\n\\label{subsec:conditionalcoding}\n\nConditional coding \\citep{LaduneMMSP20} makes it possible to exploit decoder-side information\nmore efficiently than residual coding. Its architecture is similar to an\nauto-encoder \\citep{DBLP:conf\/iclr\/BalleMSHJ18}, with one additional\n\\textit{shortcut} transform (Fig. \\ref{fig:overalldiagram}).
It\ncan be understood through the description of its three transforms.\\\\\n\\textbf{Shortcut transform} $g^\\prime_a$ (\\textit{Decoder})---Its role is to extract information\nfrom the reference frames available at the decoder (\\textit{i.e.} at no rate). The\ninformation is computed as latents $\\mathbf{y}^\\prime$.\\\\\n\\textbf{Analysis transform} $g_a$ (\\textit{Encoder})---It estimates and conveys\nthe information not available at the decoder, \\textit{i.e.} the unpredictable\npart. The information is computed as latents $\\hat{\\mathbf{y}}$.\\\\\n\\textbf{Synthesis transform} $g_s$ (\\textit{Decoder})---Latents from the analysis and shortcut\ntransforms are concatenated and synthesized to obtain the desired output.\n\nUnlike residual coding, conditional coding leverages decoder-side information in\nthe latent domain. As noted by \\citet{DBLP:conf\/iccv\/DjelouahCSS19}, this makes\nthe system more resilient to the absence of information at the decoder\n(\\textit{i.e.} for I-frames). Thus, MOFNet and CodecNet implement\nconditional coding to be able to process I, P and B-frames as well as to lower\ntheir rate. I and P-frames are compressed using the B-frame coding scheme, with\nthe same parameters, and ignore the unavailable elements.\\\\\n\\textbf{I-frame}---Motion compensation is not available. As such,\nMOFNet is ignored, $\\boldsymbol{\\alpha}$ is set to 1 and CodecNet conveys the whole frame, with its shortcut latents\n$\\mathbf{y}^\\prime_c$ set to $0$.\\\\\n\\textbf{P-frame}---Bi-directional motion compensation is not available. $\\boldsymbol{\\beta}$\nis set to 1 to only rely on the prediction from $\\hat{\\mathbf{x}}_{p}$. MOFNet shortcut\nlatents $\\mathbf{y}^\\prime_m$ are set to $0$.\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=\\linewidth]{.\/arxiv_figure\/GOP_Skip_Code_v3-crop.pdf}\n \\caption{Diagram of the system. A detailed version can be found in appendix\n \\ref{app:detailarchitecture}.
Arithmetic coding uses hyperpriors\n \\citep{DBLP:conf\/iclr\/BalleMSHJ18} omitted for clarity. Attention modules\n are implemented as proposed by \\citet{cheng2020learned} and $f = 128$. There\n are 20 million learnable parameters $\\left\\{\\boldsymbol{\\phi},\\boldsymbol{\\theta}\\right\\}$.}\n \\label{fig:overalldiagram}\n\\end{figure}\n\n\\section{Training}\n\nThe training aims at learning to code I, P and B-frames. As such, it considers\nthe smallest coding configuration featuring all three frame types: a GOP of size\n2 plus the preceding I-frame. Each training iteration consists of coding the\nthree frames, followed by a single back-propagation to minimize the\nrate-distortion cost of \\eqref{eq:loss}. Unlike previous works, the entire\nlearning process is achieved through this rate-distortion loss. No element of\nthe system requires pre-training or a dedicated loss term. Moreover, coding the\nentire GOP in the forward pass enables the system to model the dependencies\nbetween coded frames, leading to better coding performance.\n\nThe training set is made of 400~000 video crops of size $256 \\times 256$, with\nvarious resolutions (from 540p to 4K) and framerates (from 24 to 120 fps). The\noriginal videos are from several datasets: KonViD-1k \\citep{hosu2017konstanz},\nCLIC20 P-frame and Youtube-NT \\citep{yang2020hierarchical}. The batch size is 4\nand the learning rate is set to $10^{-4}$, decreased to $10^{-5}$ during the\nlast epochs. Rate-distortion curves are obtained by training systems for\ndifferent $\\lambda$.\n\n\n\\section{Visual Illustrations}\n\\label{sec:visualization}\n\nThis section shows the different quantities at stake when coding a B-frame\n$\\mathbf{x}_t$ (Fig. \\ref{subfig:code}). First, MOFNet outputs two optical\nflows $(\\mathbf{v}_{p},\\mathbf{v}_{f})$ (Fig. \\ref{subfig:flow}), the prediction\nweighting $\\boldsymbol{\\beta}$ (Fig. \\ref{subfig:beta}) and the coding mode selection\n$\\boldsymbol{\\alpha}$.
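These outputs combine into the decoded frame as described in Section 2.1. A toy scalar sketch of the per-pixel arithmetic (one pixel only; the names are hypothetical, the warped inputs are assumed precomputed, and `codecnet_out` stands in for the decoded CodecNet contribution $c(\cdot)$):

```python
def decode_pixel(warped_p, warped_f, beta, alpha, codecnet_out):
    """Per-pixel combination: bi-directional prediction (eq. 2), then the
    alpha-weighted mixture of Skip mode and the CodecNet contribution."""
    x_tilde = beta * warped_p + (1.0 - beta) * warped_f   # prediction
    x_hat = (1.0 - alpha) * x_tilde + codecnet_out        # Skip + CodecNet
    return x_tilde, x_hat

# With alpha = 0 and no CodecNet output, the pixel is pure Skip mode:
# the temporal prediction is copied as-is.
pred, dec = decode_pixel(10.0, 20.0, 0.5, 0.0, 0.0)
```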
The temporal prediction is then computed following \\eqref{eq:pred}.\nMost of the time, $\\boldsymbol{\\beta} \\simeq 0.5$, mitigating the noise from both bilinear\nwarpings. When the background is disoccluded by a moving object (\\textit{e.g.}\nthe woman), $\\boldsymbol{\\beta}$ equals $0$ on one side of the object and $1$ on the other\nside. This makes it possible to retrieve the background from where it is available. The\ncompetition between Skip mode and CodecNet is weighted by $\\boldsymbol{\\alpha}$. Here, most\nof $\\hat{\\mathbf{x}}_t$ comes from the Skip mode\\footnote{Video frames are in\nYUV format. Thus zeroed areas appear green.} (Fig. \\ref{subfig:skipmode}).\nHowever, the less predictable parts, \\textit{e.g.} the woman, are sent by\nCodecNet.\n\nTo illustrate conditional coding, $\\mathbf{v}_{f}$ is computed by the MOFNet\nsynthesis transform using only the shortcut latents $\\mathbf{y}^\\prime_m$ (Fig.\n\\ref{subfig:flowshortcut}), the transmitted ones $\\hat{\\mathbf{y}}_m$ (Fig.\n\\ref{subfig:flowsent}) or both (Fig. \\ref{subfig:flow}). The shortcut transform\ncaptures the nature of the motion in $\\mathbf{y}^\\prime_m$, which makes it possible to\nsynthesize most of $\\mathbf{v}_{f}$ without any transmission involved. In contrast,\n$\\hat{\\mathbf{y}}_m$ consists of a refinement of the flow magnitude.
The\nrate of $\\hat{\\mathbf{y}}_m$ is reduced by using a low spatial resolution,\nunlike $\\mathbf{y}^\\prime_m$, which retains full spatial accuracy.\n\n\\newcommand{\\imagepath}{.\/arxiv_figure\/}\n\\begin{figure}[htb]\n \\centering\n\\begin{subfigure}[t]{0.24\\textwidth}\n \\includegraphics[width=\\linewidth]{\\imagepath\/\/frame_1.png}\n \\caption{Frame to code $\\mathbf{x}_t$}\n \\label{subfig:code}\n\\end{subfigure}\\hfil\n\\begin{subfigure}[t]{0.265\\textwidth}\n \\includegraphics[width=\\linewidth]{\\imagepath\/\/ModeNet_beta.png}\n \\caption{Prediction weighting $\\boldsymbol{\\beta}$}\n \\label{subfig:beta}\n \\end{subfigure}\\hfil\n \\begin{subfigure}[t]{0.24\\textwidth}\n \\includegraphics[width=\\linewidth]{\\imagepath\/\/png_copy_part.png}\n \\caption{Skip mode $(1-\\boldsymbol{\\alpha}) \\odot \\tilde{\\mathbf{x}}_t$}\n \\label{subfig:skipmode}\n \\end{subfigure}\n\n \n \\vspace{-0.1cm}\n \\begin{subfigure}[t]{0.23\\textwidth}\n \\includegraphics[width=\\linewidth]{\\imagepath\/\/v_next_all_optical_flow.png}\n \\caption{Optical flow $\\mathbf{v}_{f}$}\n \n \\label{subfig:flow}\n \\end{subfigure}\\hfil\n \\begin{subfigure}[t]{0.23\\textwidth}\n \\includegraphics[width=\\linewidth]{\\imagepath\/\/v_next_shortcut_optical_flow.png}\n \\caption{$\\mathbf{v}_{f}$ from $g_s(\\mathbf{y}^\\prime_m; \\boldsymbol{\\theta}_m)$}\n \\label{subfig:flowshortcut}\n \\end{subfigure}\\hfil\n \\begin{subfigure}[t]{0.23\\textwidth}\n \\includegraphics[width=\\linewidth]{\\imagepath\/\/v_next_sent_optical_flow.png}\n \\caption{$\\mathbf{v}_{f}$ from $g_s(\\hat{\\mathbf{y}}_m; \\boldsymbol{\\theta}_m)$}\n \\label{subfig:flowsent}\n \\end{subfigure}\n\\caption{B-frame coding from the \\textit{BQMall} sequence featuring\nmoving people on a static background. The PSNR of this crop is $31.57$ dB; the MOFNet\nrate is $322$ bits and the CodecNet rate is $2~240$ bits.
The second\nrow shows $\\mathbf{v}_{f}$ computed by the MOFNet synthesis transform from both latents\n$\\texttt{cat}(\\hat{\\mathbf{y}}_m,\\mathbf{y}^\\prime_m)$, from the shortcut latents\n$\\mathbf{y}^\\prime_m$ and from the transmitted latents $\\hat{\\mathbf{y}}_m$.}\n\\end{figure}\n\n\\section{Rate-Distortion Results}\n\nThe proposed system is assessed against \\texttt{x265}\\footnote{Preset medium,\nthe exact command line can be found in appendix \\ref{subsec:seqbyseqbd}.}, an\nimplementation of HEVC. The quality is measured with the PSNR, and the BD-rate\n\\citep{Bjontegaard} indicates the rate difference between two coders at the same\ndistortion. The test sequences are from the HEVC Common Test Conditions\n\\citep{HEVC_CTC}. The flexibility of the system is tested under three coding\nconfigurations: All Intra (AI) \\textit{i.e.} coding only the first I-frame,\nLow-delay P (LDP) \\textit{i.e.} coding one I-frame plus 8 P-frames and Random\nAccess (RA) \\textit{i.e.} coding one I-frame plus a GOP of size 8.
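For reference, the BD-rate summarizes the average rate difference between two rate-distortion curves over their common quality range. The sketch below is a simplified variant using piecewise-linear interpolation of log-rate versus PSNR (the original metric of \citet{Bjontegaard} uses a cubic polynomial fit), so its output is only indicative:

```python
import math

def bd_rate(rates_a, psnr_a, rates_b, psnr_b, steps=1000):
    """Average relative rate difference of coder B vs. coder A (in %),
    integrated over the common PSNR range (piecewise-linear variant)."""
    la = [math.log(r) for r in rates_a]
    lb = [math.log(r) for r in rates_b]

    def interp(xs, ys, x):
        # xs assumed strictly increasing; linear interpolation.
        for k in range(len(xs) - 1):
            if xs[k] <= x <= xs[k + 1]:
                t = (x - xs[k]) / (xs[k + 1] - xs[k])
                return ys[k] + t * (ys[k + 1] - ys[k])
        raise ValueError("x outside the curve range")

    lo = max(min(psnr_a), min(psnr_b))   # common quality range
    hi = min(max(psnr_a), max(psnr_b))
    acc = 0.0
    for s in range(steps + 1):
        q = lo + (hi - lo) * s / steps
        acc += interp(psnr_b, lb, q) - interp(psnr_a, la, q)
    avg_log_ratio = acc / (steps + 1)
    return (math.exp(avg_log_ratio) - 1.0) * 100.0
```

A coder spending uniformly 10\% less rate at every quality point yields a BD-rate of $-10\%$, matching the sign convention of the table below.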
BD-rates of\nthe proposed coder against HEVC are presented in Table \\ref{table:bdrate}.\n\n\\begin{table}[h]\n \\centering\n \\caption{BD-rate of the proposed coder against HEVC.\n Negative results indicate that the proposed coder requires less rate than HEVC for equivalent quality.\n }\n \\begin{tabular}{l||rrrrr|r}\n \\multirow{2}{*}{Coding configuration} & \\multicolumn{5}{c|}{Class (Resolution)} & \\multirow{2}{*}{Average}\\\\\n & A (1600p) & B (1080p) & C (480p) & D (240p) & E (720p) & \\\\\n \\hline \n All Intra (AI) & $\\mathbf{-11.3}$\\% & $\\mathbf{-9.6}$\\% & $\\mathbf{-14.8}$\\% & $\\mathbf{-45.6}$\\% & $\\mathbf{-25.8}$\\% & $\\mathbf{-21.4}$\\% \\\\\n Low-delay P (LDP) & $\\mathbf{-4.7}$\\% & $29.1$\\% & $14.3$\\% & $\\mathbf{-9.5}$\\% & $10.0$\\% & $7.8$\\% \\\\\n Random Access (RA) & $5.3$\\% & $29.9$\\% & $7.0$\\% & $\\mathbf{-27.2}$\\% & $\\mathbf{-18.7}$\\% & $\\mathbf{-0.7}$\\% \\\\\n \\end{tabular}\n\\label{table:bdrate}\n\\end{table}\n\nThe proposed system outperforms HEVC in the AI configuration, proving that it\nproperly handles I-frames. It is on par with HEVC for RA coding and slightly\nworse than HEVC for LDP coding. This shows that the same coder is also able to\nefficiently code P and B-frames, without affecting the I-frame performance. To\nthe best of our knowledge, this is the first system to achieve compelling\nperformance under different coding configurations with a single end-to-end\nlearned coder for all three frame types.\n\n\\section{Conclusion}\n\nThis paper proposes a new framework for end-to-end video coding. It is based on\nMOFNet and CodecNet, which use conditional coding to leverage the information\npresent at the decoder. Thanks to conditional coding, all frame types (I, P\n\\& B) are processed using the same coder with the same parameters, offering great\nflexibility in the coding configuration. The entire training process is\nperformed through the minimization of a single rate-distortion cost.
Its flexibility is\nillustrated under three coding configurations: All Intra, Low-delay P and Random\nAccess, where the system achieves performance competitive with HEVC.\\\\\nThe main focus of this work is not the internal design of the network architectures\n(MOFNet and CodecNet). Future work will investigate more advanced architectures,\nfrom the optical flow estimation or the learned image coding literature, which\nshould bring performance gains.\n\n\\newpage\n\\bibliographystyle{iclr2021_conference}\n\n\\section{Details and basic definitions}\n\n\n\nIn the present work, we use the {\\em $O^*$ notation}: we write $f(X)=O^*(h(X))$ as $X\\to a$ to indicate that $|f(X)|\\leq h(X)$ in a neighborhood of $a$, where, in the absence of precision, $a$ corresponds to $\\infty$.\nWe also consider the {\\em Euler $\\varphi_s$ and Kappa $\\kappa_s$} functions: for any complex number $s$, we define $\\varphi_s:\\mathbb{Z}_{>0}\\to\\mathbb{C}$ as $q\\mapsto q^s\\prod_{p|q}\\left(1-\\frac{1}{p^s}\\right)$ and $\\kappa_s:\\mathbb{Z}_{>0}\\to\\mathbb{C}$ as $q\\mapsto q^s\\prod_{p|q}\\left(1+\\frac{1}{p^s}\\right)$. \n\n{\\em Computational details.} Every constant in this article has been estimated using interval arithmetic. \nEarly numerical analysis was carried out using the ARB implementation, under the SageMath commands RBF and RIF, implemented in Python. We decided, however, to use Platt's implementation in C\\texttt{++}, used for example in \\cite{Pla16}, as it provides double-precision results and runs faster than ARB. \n\nThroughout our calculations, we have set a precision order equal to $6\\cdot 10^9$ and run a .cpp script compiled with C\\texttt{++}.
We have also written a .ipynb notebook (run with SageMath) to verify some of our results.\n\n\\section{Introduction}\\label{Int}\n\nThe convolution method terminology was made popular by Ramar\\'e in 1995, particularly in \\cite[Lemma 3.2]{RA95}, where it appears in a more hidden form than the one we present in this article. It is a technique, already present in \\cite{Mot78} and \\cite{W1927}, among many other places, that relies upon a convolution identity and helps in obtaining explicit estimations of averages of arithmetic functions, under some conditions. It is particularly meaningful when these arithmetic functions are supported on the square-free numbers, having a sufficiently regular behavior on all large prime numbers. \n\nWhile the convolution method provides the main term of an asymptotic expansion for the average of an arithmetic function with ease, it is at the remainder term where it shows its true potential, as it succeeds in giving a good, explicit estimation of the error term: if the average is performed for the range $(0,X]$, where $X>0$, then the convolution method gives explicit error term estimations of magnitude $X^{-\\delta}$ when $\\delta$ belongs to a maximal real open and positive interval $I$.
Since improving error term magnitudes is usually a subject of interest in the explicit theory of numbers, it is natural to ask whether or not one can provide, necessarily by a different method, an error term of critical order $\\delta_0$ so that the overall estimation is qualitatively improved, thus going to the edge of the method of convolution.\n\nWe first present, in \\S\\ref{dirichlet}, a special form of the convolution method involving sufficiently regular square-free supported functions, as shown in Theorem \\ref{general}. As it relies upon some complex analytic facts, this method is related to a typical complex analytic approach for estimating the asymptotic expansion for the average of an arithmetic function by means of residue theory. \n\nOur main result, presented in \\S\\ref{improvement}, does not rely on complex analysis. In \\S\\ref{achieving}, we see how the use of some very particular estimations given in \\S\\ref{particular} constitutes the main ingredient for obtaining reasonable explicit estimations of critical exponent in almost all cases where the convolution method may be applied, under some conditions. Indeed, since our technique also relies upon the convergence of infinite products, some extra conditions on the regularity of the arithmetic function that is being averaged are needed, as Theorem \\ref{general++} shows, and therefore there is a small range of functions that are not considered in our improvements, namely when the values of $\\alpha$ and $\\beta$ defined in Theorem \\ref{general++} have a difference of absolute value smaller than or equal to $\\frac{1}{2}$. However, as most of the applications we mention throughout this article do not involve that missing case, we claim that every one of them is improved up to its critical exponent.\n\nPrevious work towards obtaining error terms of critical exponent can be found, on some particular averages, in \\cite{Bu14} and \\cite{W1927}.
In \\cite{RA13} and \\cite{RA19}, the critical exponent is obtained by a completely different approach, using some results known as \\emph{the covering remainder lemma} and \\emph{the unbalanced Dirichlet hyperbola formula} as well as strong explicit bounds on some summatory functions involving the M\\\"obius function, which, unlike the functions in our case of study, does oscillate. Furthermore, it is important to point out that whereas a path similar to that of \\cite{RA13} or \\cite{RA19} could have been followed, these results consider specific properties of the functions that are being averaged and they are thus not easy to generalize to a broader class of functions. This is the reason why \\cite[Thm. 1.2]{RA13} improves on the classic convolution method result presented in Corollary \\ref{corollary} $\\mathbf{(a)}$ but still requires the convolution method to estimate related averages of less simple arithmetic functions; for example, with the result we present in Theorem \\ref{general++}, one can now immediately derive stronger estimations for \\cite[Lemmas 7.1, 7.2, 7.6, 7.7, 7.8, 7.9]{RA13} that may lead to further improvements on the cited article of Ramar\\'e--Akhilesh. In that respect, our result might help as a reference for further improvements in many places where the convolution method is employed; it reads as follows. \n\n\\begin{theorem*} \n Let $X>0$ be a real number and $q$ a positive integer. Consider a multiplicative function $f:\\mathbb{Z}^+\\to\\mathbb{C}$ such that for every prime number $p$ satisfying $(p,q)=1$, we have $f(p)=\\frac{1}{p^{\\alpha}}+O\\left(\\frac{1}{p^{\\beta}}\\right)$, where $\\alpha$, $\\beta$ are real numbers satisfying $\\beta>\\alpha$, $\\beta-\\alpha>\\frac{1}{2}$.
Then there exists a constant $\\mathrm{W}_{\\alpha}^q>0$ such that\n \\begin{align*}\n \\sum_{\\substack{\\ell\\leq X\\\\ (\\ell,q)=1}}\\mu^2(\\ell)f({\\ell})=F_\\alpha^{q}(X)+\\begin{cases}\nO^*\\left(\\mathrm{W}_\\alpha^q\\ X^{\\frac{1}{2}-\\alpha}\\right),\\quad&\\text{ if }\\alpha\\neq\\frac{1}{2},\\\\\nO^*\\left(\\mathrm{W}_\\alpha^q\\ \\log(X)\\right),\\quad&\\text{ if }\\alpha=\\frac{1}{2},\\\\\n\\end{cases}\n \\end{align*} \n where \n \\begin{align*} \n F_\\alpha^q(X)&=\\frac{M_{\\alpha}^{q}\\zeta(\\alpha)\\varphi_\\alpha(q)}{q^\\alpha}-\\frac{N_{\\alpha}^q\\varphi(q)}{(\\alpha-1)q}\\frac{1}{X^{\\alpha-1}},\\quad&&\\text{if\\ }\\alpha>\\frac{1}{2},\\ \\alpha\\neq 1,\\\\\n F_1^q(X)&=\\frac{M_{1}^{q}\\varphi(q)}{q}\\left(\\log\\left(X\\right)+T_f^q+\\gamma+\\sum_{p|q}\\frac{\\log(p)}{p-1}\\right),\\\\\n&\\phantom{xxxxxxxxx}T_{f}^{q}=\\sum_{p\\nmid q}\\frac{\\log(p)(1-(p-2)f(p))}{(f(p)+1)(p-1)},\\\\\nF_\\alpha^q(X)&=\\frac{M_{\\alpha}^{q}\\varphi(q)}{(1-\\alpha)q}X^{1-\\alpha},\\quad&&\\text{if\\ }\\alpha\\leq\\frac{1}{2},\n \\end{align*}\nand, \n\\begin{align*}\n M_{\\alpha}^{q}&=\\begin{cases}\n\\prod_{p\\nmid q}\\left(1-\\frac{1-f(p)p^\\alpha+f(p)}{p^{\\alpha}} \\right),\\quad&\\text{ if }\\alpha>\\frac{1}{2},\\\\\nN_{\\alpha}^{q},\\quad&\\text{ if }\\alpha\\leq\\frac{1}{2},\n\\end{cases}\\\\\nN_{\\alpha}^{q}&=\\prod_{p\\nmid q}\\left(1-\\frac{p^{1-\\alpha}-f(p)p+f(p)}{p^{2-\\alpha}} \\right).\n\\end{align*}\n \\end{theorem*}\n\nAs an application of the above theorem, we deduce how the improvement on the convolution method produces better savings on the error term constant of $\\sum_{\\substack{\\ell\\leq X\\\\(\\ell,q)=1}}\\frac{\\mu^2(\\ell)}{\\varphi(\\ell)}, X>0, q\\in\\mathbb{Z}_{>0}$ than the one in \\cite[Thm. 1.1]{RA13} , when prime coprimality conditions are introduced. This situation is examined in \\S\\ref{Cop}, and we have for instance the improvement on the constant \\sage{Trunc(5.9*21\/25,3)}, given in \\cite[Thm. 
1.1]{RA13}, by $\\sage{Upper(CONSTANT_RAM*Prod_Ram_Upper*CONSTANT2\/CONSTANT_RAM,digits)}$, according to the following result.\n\n\\begin{lemma*}\nLet $X>0$. Then\n\\begin{equation*}\n\\sum_{\\substack{\\ell\\leq X\\\\(\\ell,2)=1}}\\frac{\\mu^2(\\ell)}{\\varphi(\\ell)}=\\frac{1}{2}\\left(\\log\\left(X\\right)+\\mathfrak{a}_2\\right)+O^*\\left(\\frac{\\sage{Upper(CONSTANT_RAM*Prod_Ram_Upper*CONSTANT2\/CONSTANT_RAM,digits)}}{\\sqrt{X}}\\right),\n\\end{equation*}\nwhere $\\mathfrak{a}_2=\\sage{Trunc(C_sum1+log(2)\/2,digits)}\\ldots $. \n\\end{lemma*}\n\n\n\n\n\\section{A special version of the method of convolution}\\label{dirichlet}\n\n\n \nIn the convolution method, it is crucial to preserve regularity conditions, that is, conditions that do not impose specific ranges other than the variable itself being a positive integer, under, perhaps, some coprimality restrictions. \n\nTo give an example, when one carries out a summation on a variable $e\\in\\mathbb{Z}_{>0}$ such that $e\\leq \\frac{X}{d}$ for a certain real number $X>0$ and a positive integer $d$, it is often implicitly assumed that $\\frac{X}{d}\\geq 1$, so that the set $\\{e\\in\\mathbb{Z}_{>0}, \\ e\\leq\\frac{X}{d}\\}$ is not empty. If $d$ is itself a variable, that means that we have the range condition $\\{d\\leq X\\}$ on the variable $d$. Hence, if we are able to estimate asymptotically a summation on the variable $e\\in\\mathbb{Z}_{>0}$ such that $e\\leq \\frac{X}{d}$, regardless of whether or not an \\emph{empty sum} (a sum with an empty condition) is performed, then the range condition on the variable $d$ will be absent.\n\n\n\n\\subsection{Regularity conditions: estimating empty summations}\n\n\\begin{lemma}\\label{recip} \nLet $\\alpha\\in \\mathbb{R}^+\\setminus \\{1\\}$ and $X>0$.
Then\n\\begin{equation*}\n\\sum_{n\\leq X}\\frac{1}{n^\\alpha}=\\zeta(\\alpha)-\\frac{1}{(\\alpha-1)X^{\\alpha-1}}+O^*\\left(\\frac{1}{X^{\\alpha}}\\right).\n\\end{equation*}\n\\end{lemma} \n\n\\begin{proof}\n By definition of $\\zeta(s)$ for $\\Re(s)>1$, and by analytic continuation\n for all $s\\ne 1$ with $\\Re(s)>0$,\n \\begin{equation}\\label{EulerMac}\n \\zeta(s) - \\frac{1}{(s-1) X^{s-1}} - \\sum_{n\\leq X} \\frac{1}{n^s} =\n \\sum_{n=1}^\\infty \n \\left(\\int_{n-1}^{n} \\frac{dx}{(X+x)^s} - \\frac{1}{(\\lfloor X\\rfloor+n)^s}\\right).\\end{equation}\n Set $s=\\alpha$; clearly $(\\lfloor X\\rfloor+n)^{-\\alpha} \\geq (X+n)^{-\\alpha}$ and by convexity of\n $t\\mapsto \\frac{1}{t^\\alpha}$,\n $\\int_{n-1}^{n} \\frac{dx}{(X+x)^\\alpha} \\leq \\frac{1}{2} \\left(\n \\frac{1}{(X+n-1)^\\alpha} + \\frac{1}{(X+n)^\\alpha}\\right)$.\n Hence, the right hand side of \\eqref{EulerMac} is at most \n\\begin{equation*}\\sum_{n=1}^\\infty \\frac{1}{2} \\left(\n \\frac{1}{(X+n-1)^\\alpha} - \\frac{1}{(X+n)^\\alpha}\\right) \n \\leq \\frac{1}{2X^\\alpha}.\n\\end{equation*}\nOn the other hand, by the mean value theorem, for any $n\\in\\mathbb{Z}_{>0}$, there exists $r\\in[n-1,n]$ such that\n$\\int_{n-1}^{n}\\frac{dx}{(X+x)^{\\alpha}}-\\frac{1}{(\\lfloor X\\rfloor+n)^{\\alpha}}=\\frac{1}{(X+r)^{\\alpha}}-\\frac{1}{(\\lfloor X\\rfloor+n)^{\\alpha}}$. Thus, by the monotonicity of $t\\mapsto\\frac{1}{t^{\\alpha}}$ and the fact that $X+r$ and $\\lfloor X\\rfloor+n$ are both contained in $[X+n-1,X+n]$, we have that \nthe right hand side of \\eqref{EulerMac} is at least\n\\begin{equation*}\n\\sum_{n=1}^{\\infty}\\left(\\frac{1}{(X+n)^{\\alpha}}-\\frac{1}{(X+n-1)^{\\alpha}}\\right)=-\\frac{1}{X^{\\alpha}}.\n\\end{equation*}\n\\end{proof}\n\nThe following lemma estimates asymptotically some sums even when they have an empty condition.\n\n \\begin{lemma}\\label{SumEstimations} \n Let $X>0$ and $\\alpha>0$. 
If $0<\\delta\\leq 1$, we have\n \\begin{align}\\label{harmonic}\n \\sum_{n\\leq X}\\frac{1}{n}&=\\log(X)+\\gamma+O^*\\left(\\frac{\\Delta_{1}^{\\delta}}{X^\\delta}\\right);\n \\end{align}\n if $\\max\\{0,\\alpha-1\\}<\\delta\\leq\\alpha$ and $\\alpha\\neq 1$, we have\n \\begin{equation}\\label{generalz}\n \\sum_{n\\leq X}\\frac{1}{n^\\alpha}=\\zeta(\\alpha)-\\frac{1}{(\\alpha-1)X^{\\alpha-1}}+O^*\\left(\\frac{\\Delta_{\\alpha}^{\\delta}}{X^\\delta}\\right),\n \\end{equation} \nwhere $\\Delta_{1}^{\\delta}=\\max\\left\\{\\gamma,\\frac{1}{\\delta e^{\\gamma\\delta+1}}\\right\\}$\nand, for $\\alpha\\neq 1$,\n \\begin{align*}\n \\Delta_{\\alpha}^{\\delta}&=\n \\begin{cases}\n \\max\\left\\{1,\\left(\\frac{1}{\\delta^{\\delta}}\\left(\\frac{(\\delta-\\alpha+1)}{|\\zeta(\\alpha)(\\alpha-1)|}\\right)^{\\delta-\\alpha+1}\\right)^{\\frac{1}{\\alpha-1}},\\zeta(\\alpha)-\\frac{1}{\\alpha-1}\\right\\},&\\quad\\text{ if }\\delta\\neq\\alpha,\\\\\n 1,&\\quad\\text{ if }\\delta=\\alpha.\n \\end{cases}\n \\end{align*}\n \\end{lemma}\n \\begin{proof} By \\cite[Lemma 2.1]{RA13} and Lemma \\ref{recip}, for $X>0$, we have\n \\begin{align}\n \\sum_{n\\leq X}\\frac{1}{n}&=\\log(X)+\\gamma+O^*\\left(\\frac{\\gamma}{X}\\right),\\label{Ha}\\\\\n \\sum_{n\\leq X}\\frac{1}{n^\\alpha}&=\\zeta(\\alpha)-\\frac{1}{(\\alpha-1)X^{\\alpha-1}}+O^*\\left(\\frac{1}{X^\\alpha}\\right),\\quad\\text{if\\ }\\alpha> 0\\text{\\ and\\ }\\alpha\\neq 1,\\label{Ge}\n \\end{align} \n respectively. \n Thus, if $X\\geq 1$, the result holds trivially as $\\delta'\\mapsto X^{\\delta'}$ is increasing and $\\delta\\leq\\alpha$. Otherwise, when $0<X<1$, we write $Y=\\frac{1}{X}>1$ and observe first that the function $f:Y\\geq 1\\mapsto\\frac{\\log(Y)-\\gamma}{Y^\\delta}$ has a single critical point at $y_0=e^{\\frac{1}{\\delta}+\\gamma}>1$ taking the value $f(y_0)=\\frac{1}{\\delta e^{\\gamma\\delta+1}}>0$.
As $f(1)=-\\gamma$ and $\\lim_{Y\\to\\infty}f(Y)=0$, $f$ is increasing in $[1,y_0]$ and decreasing in $[y_0,\\infty)$, and hence $\\sup_{\\{Y>1\\}}|f(Y)|=\\max\\left\\{\\gamma,\\frac{1}{\\delta e^{\\gamma\\delta+1}}\\right\\}$. \n\n Secondly, by \\cite[Cor. 1.14]{MV07}, we have that $\\zeta(\\alpha)>\\frac{1}{\\alpha-1}$ and $\\zeta(\\alpha)(\\alpha-1)>0$ for all $\\alpha\\geq 0$ and $\\alpha\\neq 1$. Moreover, the function $g:Y>0\\mapsto\\frac{1}{Y^\\delta}\\left(\\zeta(\\alpha)-\\frac{Y^{\\alpha-1}}{\\alpha-1}\\right)$ has a critical point $y_0$ satisfying $y_0^{\\alpha-1}=\\frac{\\zeta(\\alpha)(\\alpha-1)\\delta}{\\delta-\\alpha+1}>0$, since $\\delta>\\alpha-1$ and $\\delta>0$ and in this case, we have that $\\lim_{Y\\to\\infty}g(Y)=0$ and, thus, $|g|$ is decreasing in $[y_0,\\infty)$. We conclude then that $\\max_{[y_0,\\infty)}|g(Y)|$ $=|g(y_0)|$, where \n \\begin{equation*}\n |g(y_0)|=\\left(\\frac{1}{\\delta^{\\delta}}\\left(\\frac{(\\delta-\\alpha+1)}{|\\zeta(\\alpha)(\\alpha-1)|}\\right)^{\\delta-\\alpha+1}\\right)^{\\frac{1}{\\alpha-1}}.\n \\end{equation*} \n If $y_0\\leq 1$, then $|g(1)|=g(1)\\leq|g(y_0)|$ and $\\sup_{\\{Y>1\\}}|g(Y)|=g(1)$; otherwise, if $y_0>1$, as $g$ is also monotonic between $1$ and $y_0$, we derive that $\\sup_{\\{Y>1\\}}|g(Y)|=\\max\\{g(1),|g(y_0)|\\}$, which gives us the desired result.\n \\end{proof}\n\nIt is important to point out that in case that $\\alpha>1$, it would have been possible to give an error term expression even if $\\delta=\\alpha-1>0$, whereas, if $\\delta<\\alpha-1$, then $|g|$ would have been unbounded in $[1,\\infty)$.\n\nOn the other hand, as pointed out at the beginning of \\S\\ref{dirichlet}, it is essential to have an estimation of the above summations when they have actually an empty condition, that is when $X\\in(0,1)$. 
Indeed, this will provide regularity for some summation conditions in the proof of Theorem \ref{general} that would otherwise force some variables to be at least $1$ and some sums to be non-empty. It should be expected, though, that imposing such regularity conditions, or rather asking for estimations of sums up to a variable $X$ with merely $X>0$, will slightly worsen the constants in the involved error terms; for instance, when $\alpha=1$ and we restrict to the range $X\geq 1$, the value of $\gamma=\sage{Trunc(gamma,5)}\ldots$ given in \eqref{Ha} can be improved to $2(\log(2)+\gamma-1)=\sage{Trunc(2*(log(2)+euler_gamma-1),5)}\ldots$ (refer to \cite[Lemma 2.1]{RA13}). \n\n\n\subsection{The convolution method} \label{TCM}\n\n\n\nThe following theorem will help us to state Corollary \ref{corollary}. Although inspired by \cite[Lemma 3.2]{RA95}, it is presented in a much more general framework, in an attempt to understand and deduce with ease the order of averages of sufficiently regular square-free supported arithmetic functions. By sufficiently regular, we mean an arithmetic function whose values at all sufficiently large prime numbers share a specific constant dominant term. As it turns out, it is precisely the regularity of an arithmetic function that helps to derive the asymptotic expansion of its average under the method of convolution.\n \begin{theorem}\label{general} \n Let $q$ be a positive integer and let $X$, $\alpha$, $\beta$ be real numbers such that $X>0$, $\beta>1$ and $\beta>\alpha>\frac{1}{2}$. Consider a multiplicative function $f:\mathbb{Z}^+\to\mathbb{C}$ such that \n $f(p)=\frac{1}{p^{\alpha}}+O\left(\frac{1}{p^{\beta}}\right)$, for every sufficiently large prime number $p$ coprime to $q$. 
Then for any real number $\\delta>0$ such that $\\max\\{0,\\alpha-1\\}<\\delta<\\min\\{\\beta-1,\\alpha-\\frac{1}{2}\\}$ we have the estimation \n \\begin{equation*}\n \\sum_{\\substack{\\ell\\leq X\\\\ (\\ell,q)=1}}\\mu^2(\\ell)f({\\ell})=F_\\alpha^{q}(X)+O^*\\left(\\Delta_{\\alpha}^{\\delta}\\frac{\\kappa_{\\alpha-\\delta}(q)}{q^{\\alpha-\\delta}}\\cdot\\frac{\\overline{H}_{f}^{\\phantom{.}q}(-\\delta)}{X^\\delta}\\right),\n \\end{equation*} \n where, if $\\alpha\\neq 1$,\n \\begin{align*} \n F_\\alpha^q(X)&=\\frac{H_{f}^{q}(0)\\zeta(\\alpha)\\varphi_\\alpha(q)}{q^\\alpha}-\\frac{H_{f}^{q}(1-\\alpha)\\varphi(q)}{(\\alpha-1)q}\\frac{1}{X^{\\alpha-1}},\n\\end{align*}\nand, if $f(p)= -1$ for some prime number $p$, $F_1^q(X)=-\\sum_{d}\\frac{h_{f}^{q}(d)\\log(d)}{d^\\alpha}$, whereas, if $f(p)\\neq -1$ for any prime number $p$,\n\\begin{align*}\n F_1^q(X)&=\\frac{H_{f}^{q}(0)\\varphi(q)}{q}\\left(\\log\\left(X\\right)+T_f^q+\\gamma+\\sum_{p|q}\\frac{\\log(p)}{p-1}\\right),\\\\\nT_{f}^{q}&=\\sum_{p\\nmid q}\\frac{\\log(p)(1-(p-2)f(p))}{(f(p)+1)(p-1)}.\n \\end{align*}\n Here, $\\Delta_{\\alpha}^{\\delta}$ is defined as in Lemma \\ref{SumEstimations} and $H_{f}^{q}:\\{s\\in\\mathbb{C},\\ \\Re(s)>\\frac{1}{2}-\\alpha\\}\\to\\mathbb{C}$ is an analytic function satisfying \n \\begin{align*}\n H_{f}^{q}(s)&=\\prod_{p\\nmid q}\\left(1-\\frac{1-f(p)p^\\alpha}{p^{s+\\alpha}}-\\frac{f(p)}{p^{2s+\\alpha}} \\right)=\\sum_{\\substack{d\\\\(d,q)=1}}\\frac{h_{f}^{q}(d)}{d^{s+\\alpha}},\\\\\n \\overline{H}_{f}^{\\phantom{.}q}(s)&=\\prod_{p\\nmid q}\\left(1+\\frac{|1-f(p)p^\\alpha|}{p^{\\Re(s)+\\alpha}}+\\frac{|f(p)|}{p^{2\\Re(s)+\\alpha}} \\right)=\\sum_{\\substack{d\\\\(d,q)=1}}\\frac{|h_{f}^{q}(d)|}{d^{\\Re(s)+\\alpha}}.\n \\end{align*} \n \\end{theorem}\n\n \\begin{proof} \n By the asymptotic condition on $f$ in the statement, the Dirichlet series $D_{f}^{q}$ associated with $\\ell\\mapsto\\mu^2(\\ell)f({\\ell})\\mathds{1}_q(\\ell)$, where $\\mathds{1}_q$ is defined as the 
multiplicative function $\\ell\\mapsto\\mathds{1}_{\\{(\\ell,q)=1\\}}(\\ell)$, converges absolutely for any $s\\in\\mathbb{C}$ such that $\\Re(s)>1-\\alpha$. Thus, in the set $\\{s\\in\\mathbb{C}, \\ \\Re(s)>1- \\alpha\\}$, the equality \n \\begin{align}\n D_{f}^{q}(s)=\\sum_{\\substack{\\ell\\\\(\\ell,q)=1}}\\frac{\\mu^2(\\ell)f(\\ell)}{\\ell^{s}}=\\prod_{p\\nmid q}\\left(1+\\frac{f(p)}{p^s}\\right)\n \\end{align}\n holds and the function $s\\mapsto\\zeta(s+\\alpha)$ can be expressed by an Euler product. For any $s$ such that $\\Re(s)>1-\\alpha$, we have then\n \\begin{align}\\label{D}\n &\\frac{D_{f}^{q}(s)}{\\zeta(s+\\alpha)}=\\prod_{p\\nmid q}\\left(1+\\frac{f(p)}{p^s}\\right)\\left(1-\\frac{1}{p^{s+\\alpha}}\\right)\\cdot\\prod_{p|q}\\left(1-\\frac{1}{p^{s+\\alpha}}\\right)\\nonumber\\\\\n &\\phantom{xxxx}=\\frac{\\varphi_{s+\\alpha}(q)}{q^{s+\\alpha}}\\cdot\\prod_{p\\nmid q}\\left(1-\\frac{1-f(p)p^\\alpha}{p^{s+\\alpha}}-\\frac{f(p)}{p^{2s+\\alpha}}\\right)\\ \\ =\\ \\frac{\\varphi_{s+\\alpha}(q)}{q^{s+\\alpha}}\\cdot H_{f}^{q}(s).\\nonumber\n \\end{align}\n Also, we have that $\\frac{1-f(p)p^{\\alpha}}{p^{s+\\alpha}}=O\\left(\\frac{1}{p^{\\Re(s)+\\beta}}\\right)$ and $\\frac{f(p)}{p^{2s+\\alpha}}=O\\left(\\frac{1}{p^{2\\Re(s)+2\\alpha}}\\right)$. Since $\\beta>\\alpha$, we have that $H$ can be extended analytically from $\\{s\\in\\mathbb{C},\\ \\Re(s)>1-\\alpha\\}$ onto $\\{s\\in\\mathbb{C},\\ \\Re(s)>\\max\\{1-\\beta,\\frac{1}{2}-\\alpha\\}\\}$. 
Further, as $0>1-\\beta$ and $0>\\frac{1}{2}-\\alpha$, $H_{f}^{q}(0)$ exists and, if $f(p)\\neq -1$ for any prime number $p$, it is different from $0$, since each factor defining it can be expressed as $(1+f(p))\\left(1-\\frac{1}{p^\\alpha}\\right)$ and $\\alpha\\neq 0$.\n\n Now, the formal equality $D_{f}^{q}(s)=H_{f}^{q}(s)\\cdot\\prod_{p\\nmid q}\\left(1+\\frac{1}{p^{s+\\alpha}}+\\frac{1}{p^{2(s+\\alpha)}}+\\ldots\\right)$ hides the convolution product\n \\begin{equation}\\label{identity1}\n \\ell^\\alpha \\mu^2(\\ell)f(\\ell)\\mathds{1}_{(\\ell,q)=1}(\\ell)=(h_{f}^{q}\\star\\mathds{1}_q)\\ (\\ell)=\\sum_{\\substack{d|\\ell}}h_{f}^{q}(d)\\mathds{1}_q\\left(\\frac{\\ell}{d}\\right), \n \\end{equation}\n where $h$ is a multiplicative function defined on the prime numbers as\n \\begin{align}\n &h_{f}^{q}(p)=(f(p)p^\\alpha-1)\\cdot\\mathds{1}_q(p),\\qquad h_{f}^{q}(p^2) = -f(p)p^\\alpha\\cdot \\mathds{1}_q(p),\\label{hfq}\\\\\n &\\phantom{xxxxxxxxxxxxxx} h_{f}^{q}(p^k) = 0, \\quad k>2.\\nonumber\n \\end{align}\n Therefore, from \\eqref{identity1} we conclude that\n \\begin{align}\\label{DirichletIdentity}\n \\sum_{\\substack{\\ell\\leq X\\\\ (\\ell,q)=1}}\\mu^2(\\ell)f({\\ell})=\\sum_{\\substack{\\ell\\leq X}}\\frac{(h_{f}^{q}\\star\\mathds{1}_q)\\ (\\ell)}{\\ell^\\alpha}=\\sum_{\\substack{d}}\\frac{h_{f}^{q}(d)}{d^\\alpha} \\sum_{\\substack{e\\leq\\frac{X}{d}\\\\(e,q)=1}} \\frac{1}{e^\\alpha}\\phantom{xxxxxxxxxxxxx}&\\nonumber\\\\\n=\\sum_{\\substack{d}}\\frac{h_{f}^{q}(d)}{d^\\alpha}\\sum_{\\substack{e\\leq\\frac{X}{d}}}\\frac{1}{e^\\alpha}\\sum_{d'|e,d'|q}\\mu(d')=\\sum_{\\substack{d}}\\frac{h_{f}^{q}(d)}{d^\\alpha}\\sum_{d'|q}\\frac{\\mu(d')}{d'^\\alpha}\\sum_{\\substack{e\\leq\\frac{X}{dd'}}}\\frac{1}{e^\\alpha}&,\n \\end{align}\n where there is no upper bound conditions on the variables $d$ and $d'$ present in the outer sums above, their being encoded by the innermost sum of \\eqref{DirichletIdentity}, which, in order to continue our analysis, we must estimate 
regardless of whether or not it is empty: Lemma \ref{SumEstimations} allows us to handle this situation. \n\nHence, as $\max\{0,\alpha-1\}<\delta<\min\{\beta-1,\alpha-\frac{1}{2}\}<\alpha$, we derive that the second sum in \eqref{DirichletIdentity} can be expressed as\n \begin{align}\label{Sum:alpha neq 1}\n \sum_{d'|q}\frac{\mu(d')}{d'^\alpha}\sum_{\substack{e\leq\frac{X}{dd'}}}\frac{1}{e^\alpha}=\sum_{d'|q}\frac{\mu(d')}{d'^\alpha}\left(\zeta(\alpha)-\frac{(dd')^{\alpha-1}} {(\alpha-1)X^{\alpha-1}}+O^*\left(\Delta_{\alpha}^{\delta}\frac{(dd')^\delta}{ X^{\delta}}\right)\right)&\nonumber\\\n =\ \frac{\zeta(\alpha)\varphi_\alpha(q)}{q^\alpha}-\frac{\varphi(q)}{(\alpha-1)q}\cdot\frac{d^{\alpha-1}}{X^{\alpha-1}}+O^*\left(\Delta_{\alpha}^{\delta}\frac{\kappa_{\alpha-\delta}(q)}{q^{\alpha-\delta}}\cdot\frac{d^\delta}{X^\delta}\right)&,\n \end{align}\n if $\alpha\neq 1$, or as\n \begin{align}\label{Sum:alpha=1}\n \sum_{d'|q}\frac{\mu(d')}{d'^\alpha}\sum_{\substack{e\leq\frac{X}{dd'}}}\frac{1}{e^\alpha}=\sum_{d'|q}\frac{\mu(d')}{d'^\alpha}\left(\log\left(\frac{X}{dd'}\right)+ \gamma+O^*\left(\frac{\Delta_1^{\delta}(dd')^\delta}{X^\delta}\right)\right)\phantom{xxxxxx}& \nonumber\\\n =\frac{\varphi_\alpha(q)}{q^\alpha}\left(\log\left(\frac{X}{d}\right)+\gamma\right)-\sum_{d'|q}\frac{\mu(d')\log(d')}{d'^\alpha}+O^*\left(\frac{\Delta_1^{\delta}\kappa_{\alpha-\delta}(q)}{q^{\alpha-\delta}}\cdot\frac{d^\delta} {X^\delta}\right)&\nonumber\\\n =\frac{\varphi_\alpha(q)}{q^\alpha}\left(\log\left(\frac{X}{d}\right)+\gamma+\sum_{p|q}\frac{\log(p)}{p^\alpha-1}\right)+O^*\left(\frac{\Delta_1^{\delta}\kappa_{\alpha-\delta}(q)}{q^{\alpha-\delta}}\cdot\frac{d^\delta}{X^\delta} \right),&\n \end{align}\n if $\alpha=1$, where we have used that\n 
\\begin{align}-\\sum_{d'|q}\\frac{\\mu(d')\\log(d')}{d'^\\alpha}=\\left(\\frac{\\varphi_{s+\\alpha}(q)}{q^{s+\\alpha}}\\right)'_{s=0}=\\frac{\\varphi_\\alpha(q)}{q^\\alpha}\\sum_{p|q} \\frac{\\log(p)}{p^\\alpha-1}.\\label{derivated}\n \\end{align}\n On the other hand, observe that $H_{f}^{q}(1-\\alpha)$ and $\\overline{H}_{f}^{\\phantom{.}q}(-\\delta)$ are well-defined, as $\\min\\{1-\\alpha,-\\delta\\}>\\max\\{1-\\beta,\\frac{1}{2}-\\alpha\\}$. Therefore, from \\eqref{DirichletIdentity}, the sum $\\sum_{\\substack{\\ell\\leq X\\\\ (\\ell,q)=1}}\\mu^2(\\ell)f({\\ell})$ can be estimated either as\n \\begin{align}\\label{DirichletIdentity:alpha=1} \n \\sum_{\\substack{d}}\\frac{h_{f}^{q}(d)}{d^\\alpha}\\left(\\frac{\\zeta(\\alpha)\\varphi_\\alpha(q)}{q^\\alpha}-\\frac{\\varphi(q)}{(\\alpha-1)q}\\cdot\\frac{d^{\\alpha-1}}{X^{\\alpha-1}} +O^*\\left(\\frac{\\Delta_{\\alpha}^{\\delta}\\kappa_{\\alpha-\\delta}(q)}{q^{\\alpha-\\delta}} \\cdot\\frac{d^\\delta}{X^\\delta}\\right)\\right)&\\\\\n =H_{f}^{q}(0)\\ \\frac{\\zeta(\\alpha)\\varphi_\\alpha(q)}{q^\\alpha}-\\frac{\\varphi(q)}{(\\alpha-1)q}\\cdot\\frac{H_{f}^{q}(1-\\alpha)}{X^{\\alpha-1}}+O^*\\left(\\frac{\\Delta_{\\alpha}^{\\delta}\\kappa_{\\alpha-\\delta}(q)}{q^{\\alpha-\\delta}}\\cdot\\frac{\\overline{H}_{f}^{\\phantom{.}q}(-\\delta)}{X^\\delta} \\right)\\nonumber&,\n \\end{align}\n if $\\alpha\\neq 1$, by using \\eqref{Sum:alpha neq 1}, or \n \\begin{align}\\label{DirichletIdentity:alphaneq1}\n \\sum_{\\substack{d}}\\frac{h_{f}^{q}(d)}{d^\\alpha}\\left(\\frac{\\varphi_\\alpha(q)}{q^\\alpha}\\left(\\log\\left(\\frac{X}{d}\\right)+\\gamma+\\sum_{p|q}\\frac{\\log(p)}{p^\\alpha-1}\\right)+O^*\\left(\\frac{\\Delta_{1}^{\\delta}\\kappa_{\\alpha-\\delta}(q)}{q^{\\alpha-\\delta}}\\cdot\\frac{d^\\delta}{X^\\delta}\\right)\\right)&\\\\\n = H_{f}^{q}(0)\\ 
\\frac{\\varphi_\\alpha(q)}{q^\\alpha}\\left(\\log\\left(X\\right)+\\gamma+\\sum_{p|q}\\frac{\\log(p)}{p^\\alpha-1}\\right)+H_{f}^{q}\\phantom{}'(0)\\phantom{xx}\\nonumber&\\\\\n + O^*\\left(\\frac{\\Delta_{1}^{\\delta}\\kappa_{\\alpha-\\delta}(q)}{q^{\\alpha-\\delta}} \\cdot\\frac{\\overline{H}_{f}^{\\phantom{.}q}(-\\delta)}{X^\\delta}\\right),\\phantom{xx}\\nonumber&\n \\end{align}\n if $\\alpha=1$, by using \\eqref{Sum:alpha=1} and that $-\\sum_{d}\\frac{h_{f}^{q}(d)\\log(d)}{d^\\alpha}=H_{f}^{q}\\phantom{}'(0)$. The result is thus obtained by noticing that if $H_{f}^{q}(0)\\neq 0$, then $\\frac{H_{f}^{q}\\phantom{}'(0)}{H_{f}^{q}(0)}$ equals\n \\begin{align*}\n \\left(\\prod_{p\\nmid q}\\left(1-\\frac{1-f(p)p^\\alpha}{p^{s+\\alpha}}-\\frac{f(p)}{p^{2s+\\alpha}}\\right)\\right)'_{s=0}=\\sum_{p\\nmid q}\\frac{\\log(p)(1-f(p)p^\\alpha+2f(p))}{(f(p)+1)(p^\\alpha-1)}.\n \\end{align*} \n \\end{proof}\n\n \n \\begin{corollary}\\label{corollary}\n\n Let $X>0$ and $q\\in\\mathbb{Z}_{>0}$. The following estimations hold\n \\begin{align} \n \\mathbf{(a)}\\sum_{\\substack{\\ell\\leq X\\\\ (\\ell,q)=1}}\\frac{\\mu^2(\\ell)}{\\varphi(\\ell)}&=\\frac{\\varphi(q)}{q}\\left(\\log\\left(X\\right)+\\mathfrak{a}_q\\right)+O^*\\left(\\frac{ \\sage{Error_sum1} \\cdot\\mathpzc{A}_q}{X^{\\sage{delta}}}\\right)\\label{sum1:eq}, \\\\ \n \\mathbf{(b)} \\sum_{\\substack{\\ell\\leq X\\\\ (\\ell,q)=1}}\\frac{\\mu^2(\\ell)}{\\ell}&=\\frac{6}{\\pi^2}\\frac{q}{\\kappa(q)}\\left(\\log\\left(X\\right)+\\mathfrak{b}_q\\right)+O^*\\left(\\frac{\\sage{Upper(Error_sum2,digits)} \\cdot\\mathpzc{B}_q}{X^{\\sage{delta}}}\\right),\\label{sum2:eq}\n \\end{align} \n where\n \\begin{align*}\n \\mathpzc{A}_q= \\prod_{p|q}\\left(1+\\frac{p-p^{\\sage{delta}}-2}{(p-1)p^{\\sage{1-delta}}+p^{\\sage{delta}}+1}\\right),\\mathpzc{B}_q =\\prod_{p|q}\\left(1+\\frac{p^{\\sage{1-delta}}-1}{p^{\\sage{2-2*delta}}+1}\\right),\n \\end{align*} \nand\n \\begin{align*} \n 
\\mathfrak{a}_q&=\\sum_{p}\\frac{\\log(p)}{p(p-1)}+\\gamma+\\sum_{p|q}\\frac{\\log(p)}{p},\\sum_{p}\\frac{\\log(p)}{p(p-1)}+\\gamma= \\sage{C_sum1} \\ldots,\\\\\n\\mathfrak{b}_q &=\\sum_{p}\\frac{2\\log(p)}{p^2-1}+\\gamma+\\sum_{p|q}\\frac{\\log(p)}{p+1}, \n\\sum_{p}\\frac{2\\log(p)}{p^2-1}+\\gamma=\\sage{C_sum2}\\ldots .\n\\end{align*} \n \\end{corollary}\n\n \\begin{proof}\n For the case $\\mathbf{(a)}$ (respectively $\\mathbf{(b)}$), apply Theorem \\ref{general} with $f(p)=\\frac{1}{\\varphi(p)}=\\frac{1}{p-1}$ (respectively $f(p)=\\frac{1}{p}$), $\\alpha=1$, $\\beta=2$ and $0\\leq\\delta=\\sage{delta}<\\frac{1}{2}$. \n\nThe infinite products that participate in the main and error terms as well as the infinite summation that participates in the main term can be estimated by using a rigorous implementation of interval arithmetic, and some techniques for accelerating convergence.\n \\end{proof}\n\n\n\\noindent\\textbf{Remarks.} Conditions $\\alpha>\\frac{1}{2}$ and $\\beta>1$ in Theorem \\ref{general} are necessary to ensure the existence of $H_{f}^{q}(0)$. Nonetheless, we can derive an analogous result for any multiplicative arithmetic function $f$ satisfying the conditions $f(p)=\\frac{1}{p^{\\alpha}}+O\\left(\\frac{1}{p^{\\beta}}\\right)$, for every sufficiently large prime number $p$ coprime to $q$, where $\\alpha\\leq\\frac{1}{2}$ and $\\beta>\\alpha$ by using of Theorem \\ref{general} and summation by parts. 
In this instance, no secondary term appears, and the error term magnitude is $O\left(X^{1-\alpha-\delta}\right)$ for any $0<\delta<\min\{\beta-\alpha,\frac{1}{2}\}$.\n\nWith Theorem \ref{general} at our disposal, the asymptotic estimation of averages $\sum_{\substack{\ell\leq X\\ (\ell,q)=1}}\mu^2(\ell)f({\ell})$ satisfying the conditions of that theorem becomes an automated, but not uninteresting, task that involves each time a choice of parameters: a value for $\delta$ and a precision value in order to obtain a rigorous estimation of some infinite products.\n\nIn general, we are free to choose the error term parameter $\delta$ described in \S\ref{dirichlet}, but some choices are not optimal. For instance, if $\alpha=1$, then, in terms of Theorem \ref{general} and Lemma \ref{SumEstimations}, $\Delta_1^\delta\to\infty$ as $\delta\to 0^{+}$. Since $\overline{H}_f^{\phantom{.}q}(-\delta)$ converges, the expression $\Delta_1^\delta\overline{H}_f^{\phantom{.}q}(-\delta)$ tends to $\infty$ as well, thus not providing an acceptable numerical value. On the other hand, when $\delta\to\frac{1}{2}^-$, the infinite product given by $\overline{H}_f^{\phantom{.}q}(-\delta)$ tends to $\infty$, whereas $\Delta_1^\delta\to\Delta_1^{\frac{1}{2}}$, which is bounded, so that the expression $\Delta_1^\delta\overline{H}_f^{\phantom{.}q}(-\delta)$ again becomes too big to be practical. One therefore looks for a value of $\delta$ not too close to the boundaries of $(0,\frac{1}{2})$; in almost all cases it seems acceptable to set $\delta=\frac{1}{3}$. \n\nA natural question is whether or not we can improve, necessarily with a different method, on the error term estimation given in Theorem \ref{general}, reaching the exponent $\delta=\min\{\beta-1,\alpha-\frac{1}{2}\}$. 
\nWhen $\\beta-\\alpha>\\frac{1}{2}$, then $\\delta=\\alpha-\\frac{1}{2}$ and the answer to that question is given in \\S\\ref{improvement}: it is positive and constitutes our main result. We provide in addition explicit estimations for those \\textit{ critical exponents }.\n\nOut of the results above, the sum \\eqref{sum1:eq} is classical and it has been thoroughly studied by Ramar\\'e and Akhilesh in \\cite{RA13}, by Ramar\\'e in \\cite[Thm. 3.1]{RA19}, \\cite[Lemma 3.4]{RA95} and given in our simpler form by Helfgott in \\cite[\\S 6.1.1]{Hel19}. \n\n\n\n\\section{Improvements on the convolution method}\\label{improvement}\n\n\n \nDuring the proof of Theorem \\ref{general}, it was crucial to have an empty sum estimation for the inner sum given in \\eqref{DirichletIdentity} so that, thanks to the regularity on the variable $d$ we find convergent main and error term coefficients, as shown in \\eqref{DirichletIdentity:alpha=1} and \\eqref{DirichletIdentity:alphaneq1}. \n\nThis general idea misses the fact that the function $h_f^q$ defined in \\eqref{hfq} vanishes on all non cube-free numbers, and that the particular function $h_f^q:p,(p,q)=1\\mapsto\\frac{1}{p^\\alpha}$, with $\\alpha>\\frac{1}{2}$, satisfies $h_f^q(p)=0$. 
Moreover, the fact that this particular function is supported only on the squares of the prime numbers will allow us to achieve the critical exponent $\delta=\frac{1}{2}$ if $\alpha=1$, or $\delta=\alpha-\frac{1}{2}$ if $\alpha\neq 1$ and $\alpha>\frac{1}{2}$, when $f$ is an arithmetic function satisfying the conditions of Theorem \ref{general} with $\beta-\alpha>\frac{1}{2}$.\n\n\n\n\subsection{A particular case} \label{particular}\n\n\n\nLet us see how we can improve the estimation $\mathbf{(b)}$ given in Corollary \ref{corollary}.\n\begin{lemma}\label{sum2:critic:1} \nLet $X>0$.\nThen\n\begin{align}\label{sum2:v1} \n\sum_{\ell\leq X}\frac{\mu^2(\ell)}{\ell}&=\frac{6}{\pi^2}\left(\log(X)+\mathfrak{b}_1\right)+O^*\left(\frac{\sage{Upper(CONSTANT1,digits)}}{\sqrt{X}}\right),\\\n\label{sum2:v2}\sum_{\substack{\ell\leq X\\(\ell,2)=1}}\frac{\mu^2(\ell)}{\ell}&=\frac{4}{\pi^2}\left(\log(X)+\mathfrak{b}_2\right)+O^*\left(\frac{\sage{Upper(CONSTANT_ALT,digits)}}{\sqrt{X}}\right),\n\end{align} \nwhere $\mathfrak{b}_1=\gamma+\sum_{p}\frac{2\log(p)}{p^2-1}=\sage{C_sum2}\ldots$, $\mathfrak{b}_2=\mathfrak{b}_1+\frac{\log(2)}{3}=\sage{C_sum2_22}\ldots$. 
\n\nIf we restrict ourselves to the range $X\geq 1$,\nthen $\sage{Upper(CONSTANT1,digits)}$ may be replaced by $\sage{Upper(Ram_new_cst,2)}$ and $\sage{Upper(CONSTANT_ALT,digits)}$ may be replaced by $\sage{Upper(max(C2_v2_1,C2_v2_2),digits)}$.\n\end{lemma} \n\n\begin{proof} Equation \eqref{sum2:eq} gives the main term of \eqref{sum2:v2} and, from that, we can conclude by summation by parts that, for all $X\geq 1$, $\sum_{\substack{\ell\leq X\\(\ell,2)=1}}\frac{\mu^2(\ell)}{\ell}$ equals\n\begin{align}\label{eqq:v=2}\n\frac{4(\log(X)+\mathfrak{b}_2)}{\pi^2}+\left(\sum_{\substack{\ell\leq X\\(\ell,2)=1}}\mu^2(\ell)-\frac{4}{\pi^2}X\right)\frac{1}{X}-\int_X^{\infty}\left(\sum_{\substack{\ell\leq t\\(\ell,2)=1}}\mu^2(\ell)-\frac{4}{\pi^2}t\right)\frac{dt}{t^2}.\n\end{align}\nMoreover, by \cite[Lemma 5.2]{Hel19}, we have \n\begin{align}\n\sup_{\{X\geq 1573\}}\frac{1}{\sqrt{X}}\left|\sum_{\substack{\ell\leq X\\(\ell,2)=1}}\mu^2(\ell)-\frac{4}{\pi^2}X\right|&\leq\frac{9}{70}\label{bound:1},\n\end{align}\nso that, by \eqref{eqq:v=2}, \n\begin{align*}\n\sum_{\substack{\ell\leq X\\(\ell,2)=1}}\frac{\mu^2(\ell)}{\ell}=\frac{4}{\pi^2}(\log(X)+\mathfrak{b}_2)+O^*\left(\frac{27}{70}\frac{1}{\sqrt{X}}\right),\quad\text{ if }X\geq 1573,\n\end{align*}\nwhere $\frac{27}{70}=\sage{Trunc(C2_v2_1,digits)}\ldots$. We further verify by interval arithmetic that\n\begin{align}\label{intt:v=2}\n\sup_{\{1\leq X\leq 1573\}}\sqrt{X}\left|\sum_{\substack{\ell\leq X\\(\ell,2)=1}}\frac{\mu^2(\ell)}{\ell}-\frac{4}{\pi^2}(\log(X)+\mathfrak{b}_2)\right|\leq\sage{Upper(C2_v2_2,digits)},\n\end{align} \nthe above upper bound being almost achieved when $X\to 3^-$. On the other hand, \cite[Cor. 
1.2]{RA19} tells us that\n\begin{align}\n\sup_{\{X\geq 1\}}\sqrt{X}\left|\sum_{\ell\leq X}\frac{\mu^2(\ell)}{\ell}-\frac{6}{\pi^2}\left(\log(X)+\mathfrak{b}_1\right)\right|&\leq\sage{Upper(Ram_new_cst,2)}\label{sum2:bound:X>=1}.\n\end{align}\nHence, by using \eqref{bound:1}, \eqref{intt:v=2} and \eqref{sum2:bound:X>=1}, when $v\in\{1,2\}$, we have the bounds\n\begin{align}\label{BB}\n\sup_{\{X\geq 1\}}\sqrt{X}\left|\sum_{\substack{\ell\leq X\\(\ell,v)=1}}\frac{\mu^2(\ell)}{\ell}-\frac{v}{\kappa(v)}\frac{6}{\pi^2}(\log(X)+\mathfrak{b}_v)\right|\leq\n\begin{cases}\n\sage{Upper(Ram_new_cst,2)}&,\quad\text{ if }v=1,\\\n\sage{Upper(max(C2_v2_1,C2_v2_2),digits)} &,\quad\text{ if }v=2.\n\end{cases}&\n\end{align} \nIn order to derive the result, it is sufficient to obtain bounds for \eqref{BB} when $X\in(0,1)$, in which case the above summation vanishes. By defining $Y=\frac{1}{X}>1$ and $t_v:Y\mapsto\frac{6v(\log(Y)-\mathfrak{b}_v)}{\kappa(v)\pi^2\sqrt{Y}}$, we need to find $\sup_{\{Y>1\}}|t_v(Y)|$. By calculus, the function $t_v$ has a critical point at $y_0=e^{2+\mathfrak{b}_v}$, with value $t_v(y_0)=\frac{12v}{\kappa(v)\pi^2e^{1+\frac{\mathfrak{b}_v}{2}}}$, and it is monotonic in $[1,y_0]$ and in $[y_0,\infty)$. As $\lim_{Y\to \infty}t_v(Y)=0$ and $t_v(y_0)>0$, we conclude that $t_v$ is decreasing in $[y_0,\infty)$. Similarly, as $t_v(1)=-\frac{6v\mathfrak{b}_v}{\kappa(v)\pi^2}<0$, $t_v$ is increasing in $[1,y_0]$. Therefore \n\begin{align}\label{BB(0,1)}\n\sup_{\{0<X<1\}}\sqrt{X}\left|\sum_{\substack{\ell\leq X\\(\ell,v)=1}}\frac{\mu^2(\ell)}{\ell}-\frac{v}{\kappa(v)}\frac{6}{\pi^2}(\log(X)+\mathfrak{b}_v)\right|=\sup_{\{Y>1\}}|t_v(Y)|=\max\left\{\frac{6v\mathfrak{b}_v}{\kappa(v)\pi^2},\frac{12v}{\kappa(v)\pi^2e^{1+\frac{\mathfrak{b}_v}{2}}}\right\},\n\end{align}\nand the result follows by combining \eqref{BB} and \eqref{BB(0,1)}.\n\end{proof}\n\n\begin{lemma}\label{sum2:critic} \nLet $X>0$ and $\alpha>\frac{1}{2}$. 
If $\\alpha\\neq 1$, then\n\\begin{align*}\n\\sum_{\\ell\\leq X}\\frac{\\mu^2(\\ell)}{\\ell^\\alpha}&=\\frac{\\zeta(\\alpha)}{\\zeta(2\\alpha)}-\\frac{6}{(\\alpha-1)\\pi^2}\\frac{1}{X^{\\alpha-1}}+O^*\\left(\\frac{\\mathrm{E}_\\alpha^{(1)}}{X^{\\alpha-\\frac{1}{2}}}\\right),\\\\\n\\sum_{\\substack{\\ell\\leq X\\\\(\\ell,2)=1}}\\frac{\\mu^2(\\ell)}{\\ell^\\alpha}&=\\frac{2^\\alpha}{(2^{\\alpha}+1)}\\frac{\\zeta(\\alpha)}{\\zeta(2\\alpha)}-\\frac{4}{(\\alpha-1)\\pi^2}\\frac{1}{X^{\\alpha-1}}+O^*\\left(\\frac{\\sqrt{2}}{\\varphi_{\\frac{1}{2}}(2)}\\frac{\\mathrm{E}_\\alpha^{(2)}}{X^{\\alpha-\\frac{1}{2}}}\\right),\n\\end{align*}\nwhere, for $v\\in\\{1,2\\}$, we have\n\\begin{align*}\n\\mathrm{E}_\\alpha^{(v)}=\\max\\left\\{\\mathrm{D}_v\\left(1+\\frac{|\\alpha-1|}{\\alpha-\\frac{1}{2}}\\right),\\frac{\\varphi_{\\frac{1}{2}}(v)}{\\sqrt{v}}\\left|\\frac{v^\\alpha}{\\kappa_\\alpha(v)}\\frac{\\zeta(\\alpha)}{\\zeta(2\\alpha)}-\\frac{v}{\\kappa(v)}\\frac{6}{(\\alpha-1)\\pi^2}\\right|,\\phantom{xxx}\\right.&\\\\\n\\left.\\frac{\\varphi_{\\frac{1}{2}}(v)}{\\sqrt{v}}\\frac{|\\alpha-1|}{\\alpha-\\frac{1}{2}}\\left(\\frac{3\\kappa_\\alpha(v)\\zeta(2\\alpha)}{\\left(\\alpha-\\frac{1}{2}\\right)v^{\\alpha-1}\\kappa(v)\\pi^2|\\zeta(\\alpha)(\\alpha-1)|}\\right)^{\\frac{2}{\\alpha-1}}\\right\\}\\phantom{x}&\n\\end{align*}\nand $\\mathrm{D}_1=\\sage{Upper(Ram_new_cst,2)}$, $\\mathrm{D}_2=\\sage{Upper((1-1\/sqrt(2))*max(C2_v2_1,C2_v2_2),digits)}.$\n\nIf $X\\geq 1$, and $\\alpha\\neq 1$ then we can replace $\\mathrm{E}_\\alpha^{(v)}$ by $\\mathrm{D}_v\\left(1+\\frac{|\\alpha-1|}{\\alpha-\\frac{1}{2}}\\right)$. 
\n\\end{lemma}\n\n\\begin{proof} \nIf $X\\geq 1$, by summation by parts, we can write $\\sum_{\\substack{\\ell\\leq X\\\\(\\ell,v)=1}}\\frac{\\mu^2(\\ell)}{\\ell^\\alpha}$ as\n\\begin{align}\n&\\left(\\sum_{\\substack{\\ell\\leq X\\\\(\\ell,v)=1}}\\frac{\\mu^2(\\ell)}{\\ell}-\\frac{v}{\\kappa(v)}\\frac{6\\left(\\log(X)+\\mathfrak{b}_v\\right)}{\\pi^2}\\right)\\frac{1}{X^{\\alpha-1}}-\\frac{v}{\\kappa(v)}\\frac{6}{(\\alpha-1)\\pi^2}\\frac{1}{X^{\\alpha-1}}+\\nonumber\\\\\n&\\frac{v}{\\kappa(v)}\\frac{6(\\mathfrak{b}_v(\\alpha-1)+1)}{\\pi^2(\\alpha-1)}+(\\alpha-1)\\int_1^X\\left(\\sum_{\\substack{\\ell\\leq t\\\\(\\ell,v)=1}}\\frac{\\mu^2(\\ell)}{\\ell}-\\frac{v}{\\kappa(v)}\\frac{6\\left(\\log(t)+\\mathfrak{b}_v\\right)}{\\pi^2}\\right)\\frac{dt}{t^{\\alpha}}.\\nonumber\\\\\\label{step:alpha}\n\\end{align}\nBy Theorem \\ref{general}, when $\\alpha>\\frac{1}{2}$, the main term in the asymptotic expression of the above summation is $\\frac{v^\\alpha}{\\kappa_\\alpha(v)}\\frac{\\zeta(\\alpha)}{\\zeta(2\\alpha)}-\\frac{v}{\\kappa(v)}\\frac{6}{\n(\\alpha-1)\\pi^2}\\frac{1}{X^{\\alpha-1}}\n$. By using Lemma \\ref{sum2:critic:1} and by making $X\\to\\infty$, we conclude from \\eqref{step:alpha} that $\\frac{v}{\\kappa(v)}\\frac{6(\\mathfrak{b}(\\alpha-1)+1)}{\\pi^2(\\alpha-1)}+(\\alpha-1)\\int_1^\\infty\\left(\\sum_{\\substack{\\ell\\leq t\\\\(\\ell,v)=1}}\\frac{\\mu^2(\\ell)}{\\ell}-\\frac{6}{\\pi^2}\\left(\\log(t)+\\mathfrak{b}_v\\right)\\right)\\frac{dt}{t^{\\alpha}}=\\frac{v^\\alpha}{\\kappa_\\alpha(v)}\\frac{\\zeta(\\alpha)}{\\zeta(2\\alpha)}$. 
Further, by equation \\eqref{BB}, we conclude that, for all $X\\geq 1$, $\\sum_{\\substack{\\ell\\leq X\\\\(\\ell,v)=1}}\\frac{\\mu^2(\\ell)}{\\ell^\\alpha}$ is equal to\n\\begin{align*}\n\\frac{v^\\alpha}{\\kappa_\\alpha(v)}\\frac{\\zeta(\\alpha)}{\\zeta(2\\alpha)}-\\frac{v}{\\kappa(v)}\\frac{6}{(\\alpha-1)\\pi^2}\\frac{1}{X^{\\alpha-1}}+O^*\\left(\\frac{\\sqrt{v}\\ \\mathrm{D}_v}{\\varphi_{\\frac{1}{2}}(v)}\\left(1+\\frac{|\\alpha-1|}{\\alpha-\\frac{1}{2}}\\right)\\frac{1}{X^{\\alpha-\\frac{1}{2}}}\\right), \n\\end{align*}\nwhere $\\mathrm{D}_1=\\sage{Upper(Ram_new_cst,2)}$ and $\\frac{\\varphi_{\\frac{1}{2}}(2)}{\\sqrt{2}}\\sage{Upper(max(C2_v2_1,C2_v2_2),digits)}\\leq\\mathrm{D}_2=\\sage{Upper((1-1\/sqrt(2))*max(C2_v2_1,C2_v2_2),digits)}$. \n\nSuppose now that $X\\in(0,1)$. Define $g:X>0\\mapsto\\frac{v^{\\alpha-1}\\kappa(v)\\pi^2\\zeta(\\alpha)(\\alpha-1)}{6\\kappa_\\alpha(v)\\zeta(2\\alpha)}X^{\\alpha-\\frac{1}{2}}$ $-\\sqrt{X}$. We have by \\cite[Cor. 1.14]{MV07} that $1<\\zeta(\\alpha)(\\alpha-1)<\\alpha$. If $\\alpha>1$, we derive that $\\frac{\\zeta(\\alpha)(\\alpha-1)}{\\zeta(2\\alpha)}>\\frac{1}{\\zeta(2)}$. As $\\frac{v^{\\alpha-1}\\kappa(v)}{\\kappa_\\alpha(v)}=\\frac{1+\\frac{1}{v}}{1+\\frac{1}{v^\\alpha}}>1$ we conclude that $g(1)>0$ and $g$ has a critical point $x_0$ satisfying $01$, then $\\sup_{\\{00$. Therefore, if $\\frac{1}{2}<\\alpha<1$, then $\\sup_{\\{0\\frac{1}{2}$, \n$\\alpha\\neq 1$, to the case $\\alpha=1$ by defining $\\mathrm{E}_1^{(1)}=\\sage{Upper(CONSTANT1,digits)}$ and, upon observing that $\\frac{\\varphi_{\\frac{1}{2}}(2)}{\\sqrt{2}}\\sage{Upper(4*C_sum2_22\/pi^2,digits)}\\leq\\sage{Upper((1-1\/sqrt(2)) * 4*C_sum2_22\/pi^2,digits)}$, defining $\\mathrm{E}_1^{(2)}=\\sage{Upper(CONSTANT2,digits)}$.\n\\end{remark}\n\n\\begin{lemma}\\label{seekfor} Let $X>0$ and $q\\in\\mathbb{Z}_{>0}$. 
Then $\\sum_{\\substack{\\ell\\leq X\\\\(\\ell,q)}}\\frac{\\mu^2(\\ell)}{\\ell}$ equals\n\\begin{align*}\n\\frac{q}{\\kappa(q)}\\frac{6}{\\pi^2}\\left(\\log(X)+\\mathfrak{b}_q\\right)+O^*\\left(\\frac{\\sqrt{q}}{\\varphi_{\\frac{1}{2}}(q)}\\frac{\\mathrm{E}_1^{(1)}\\prod_{2|q}\\frac{\\mathrm{E}_1^{(2)}}{\\mathrm{E}_1^{(1)}}}{\\sqrt{X}}\\right), \n\\end{align*}\nwhere $\\mathfrak{b}_q$ is defined in Lemma \\ref{corollary} and, if $\\alpha>\\frac{1}{2}$, $\\alpha\\neq 1$, $\\sum_{\\substack{\\ell\\leq X\\\\(\\ell,q)=1}}\\frac{\\mu^2(\\ell)}{\\ell^\\alpha}$ equals\n\\begin{align*}\n\\frac{q^\\alpha}{\\kappa_{\\alpha}(q)}\\frac{\\zeta(\\alpha)}{\\zeta(2\\alpha)}-\\frac{q}{\\kappa(q)}\\frac{6}{(\\alpha-1)\\pi^2}\\frac{1}{X^{\\alpha-1}}+O^*\\left(\\frac{\\sqrt{q}}{\\varphi_{\\frac{1}{2}}(q)}\\frac{\\mathrm{E}_\\alpha^{(1)}\\prod_{2|q}\\frac{\\mathrm{E}_\\alpha^{(2)}}{\\mathrm{E}_\\alpha^{(1)}}}{X^{\\alpha-\\frac{1}{2}}}\\right),\n\\end{align*}\n where $\\mathrm{E}_\\alpha^{(v)}, v\\in\\{1,2\\}$, is defined as in Lemma \\ref{sum2:critic}.\n\\end{lemma}\n\\begin{proof} Proceed as in \\cite[Lemma 2.17]{MV07}. Define $\\mathcal{D}_{r}=\\{p\\text{ prime }, p|d\\implies p|r\\}\\subset\\mathbb{Z}_{\\geq 0}$. Consider $v\\in\\{1,2\\}$ and write $q=v^kr, k\\in\\mathbb{Z}_{>0}$, with $(v,r)=1$ (where, if $v=1$, then $k=0$). 
Then for all $s\\in\\mathbb{C}$ such that $\\Re(s)>1-\\alpha$, we have the identity \n\\begin{align*}\n\\sum_{\\substack{\\ell\\\\(\\ell,q)=1}}\\frac{\\mu^2(\\ell)}{\\ell^{s+\\alpha}}=\\prod_{p|r}\\left(1+\\frac{1}{p^{s+\\alpha}}\\right)^{-1}\\cdot\\sum_{\\substack{\\ell\\\\(\\ell,v)=1}}\\frac{\\mu^2(\\ell)}{\\ell^{s+\\alpha}}=\\sum_{\\substack{d\\\\d\\in\\mathcal{D}_r}}\\frac{\\lambda(d)}{d^{s+\\alpha}}\\cdot\\sum_{\\substack{e\\\\(e,v)=1}}\\frac{\\mu^2(e)}{e^{s+\\alpha}},\n\\end{align*}\nwhere $\\lambda$ corresponds to the Liouville function: the completely multiplicative function taking the value $-1$ at every prime number.\nHence\n\\begin{equation}\\label{nice}\n\\sum_{\\substack{\\ell\\leq X\\\\(\\ell,q)=1}}\\frac{\\mu^2(\\ell)}{\\ell^\\alpha}=\\sum_{\\substack{d\\\\d\\in\\mathcal{D}_r}}\\frac{\\lambda(d)}{d^\\alpha}\\sum_{\\substack{e\\leq\\frac{X}{d}\\\\(e,v)=1}}\\frac{\\mu^2(e)}{e^\\alpha},\n\\end{equation}\nwhich, as in Lemma \\ref{SumEstimations}, does not require the condition $\\{d\\leq X\\}$. 
We are thus considering an infinite range of values of $d$ in the above outer sum, which can be estimated as long as the inner sum is expressed asymptotically with an error term valid even when its condition is empty, and the series of error terms formed by the outer sum converges.\n\nIf $\alpha=1$, by using Lemma \ref{sum2:critic:1} in \eqref{nice}, we derive the same main term as the one given in Corollary \ref{corollary} $\mathbf{(b)}$, but an error term of smaller magnitude, since $\sum_{\substack{\ell\leq X\\(\ell,q)=1}}\frac{\mu^2(\ell)}{\ell}$ can be written as \n\begin{align*}\n&\sum_{\substack{d\\d\in\mathcal{D}_r}}\frac{\lambda(d)}{d}\left(\frac{6}{\pi^2}\frac{v}{\kappa(v)}\left(\log\left(\frac{X}{d}\right)+\mathfrak{b}_v\right)+O^*\left(\frac{\sqrt{v}}{\varphi_{\frac{1}{2}}(v)}\frac{\mathrm{E}_{1} ^{(v)}\sqrt{d}}{\sqrt{X}}\right)\right)\\\n&=\frac{vr}{\kappa(vr)}\frac{6}{\pi^2}\left(\log(X)+\mathfrak{b}_v\right)-\frac{v}{\kappa(v)}\frac{6}{\pi^2}\sum_{\substack{d\\d\in\mathcal{D}_r}}\frac{\lambda(d)\log(d)}{d}\\\n&\phantom{xxxxxxxxxxxxxxxxxxll}+O^*\left(\frac{\sqrt{v}}{\varphi_{\frac{1}{2}}(v)}\sum_{\substack{d\\d\in\mathcal{D}_r}}\frac{\mathrm{E}_{1} ^{(v)}}{\sqrt{d}}\cdot\frac{1}{\sqrt{X}}\right)\\\n&=\frac{q}{\kappa(q)}\frac{6}{\pi^2}\left(\log(X)+\mathfrak{b}_q\right)+O^*\left(\frac{\sqrt{q}}{\varphi_{\frac{1}{2}}(q)}\frac{\mathrm{E}_1^{(1)}\prod_{2|q}\frac{\mathrm{E}_1^{(2)}}{\mathrm{E}_1^{(1)}}}{\sqrt{X}}\right),\n\end{align*}\nwhere we have used 
that\n\\begin{align*}\n\\sum_{\\substack{d\\\\d\\in\\mathcal{D}_r}}\\frac{-\\lambda(d)\\log(d)}{d}=\\frac{r}{\\kappa(r)}\\left(\\sum_{\\substack{d\\\\d\\in\\mathcal{D}_r}}\\frac{\\lambda(d)}{d^s}\\right)^{-1}_{s=1}\\cdot\\left(\\sum_{\\substack{d\\\\d\\in\\mathcal{D}_r}}\\frac{\\lambda(d)}{d^s}\\right)'_{s=1}\\phantom{xxxxxxx}&\\\\\n=\\frac{r}{\\kappa(r)}\\sum_{p|r}\\left[\\left(\\left(1+\\frac{1}{p^s}\\right)^{-1}\\right)'\\left(1+\\frac{1}{p^s}\\right)\\right]_{s=1}=\\frac{r}{\\kappa(r)}\\sum_{p|r}\\frac{\\log(p)}{p+1},&\n\\end{align*}\nand that $\\frac{vr}{\\kappa(vr)}=\\frac{q}{\\kappa(q)}$, $\\frac{\\sqrt{vr}}{\\varphi_{\\frac{1}{2}}(vr)}=\\frac{\\sqrt{q}}{\\varphi_{\\frac{1}{2}}(q)}$, $\\sum_{p|v}\\frac{\\log(p)}{p+1}+\\sum_{p|r}\\frac{\\log(p)}{p+1}=\\sum_{p|q}\\frac{\\log(p)}{p+1}$.\n\nFinally, if $\\alpha\\neq 1$, then by using Lemma \\ref{sum2:critic} in \\eqref{nice} and by noticing that $\\frac{(vr)^\\alpha}{\\kappa_\\alpha(vr)}=\\frac{q^\\alpha}{\\kappa_\\alpha(q)}$, we derive that $\\sum_{\\substack{\\ell\\leq X\\\\(\\ell,q)=1}}\\frac{\\mu^2(\\ell)}{\\ell^\\alpha}$ can be expressed as \n\\begin{align*}\n&\\sum_{\\substack{d\\\\d\\in\\mathcal{D}_r}}\\frac{\\lambda(d)}{d^\\alpha}\\left(\\frac{v^\\alpha}{\\kappa_\\alpha(v)}\\frac{\\zeta(\\alpha)}{\\zeta(2\\alpha)}-\\frac{v}{\\kappa(v)}\\frac{6}{(\\alpha-1)\\pi^2}\\frac{d^{\\alpha-1}}{X^{\\alpha-1}}+O^*\\left(\\frac{\\sqrt{v}}{\\varphi_{\\frac{1}{2}}(v)}\\frac{\\mathrm{E}_\\alpha^{(v)} d^{\\alpha-\\frac{1}{2}}}{X^{\\alpha-\\frac{1}{2}}}\\right)\\right)\\\\\n&=\\frac{q^\\alpha}{\\kappa_{\\alpha}(q)}\\frac{\\zeta(\\alpha)}{\\zeta(2\\alpha)}-\\frac{q}{\\kappa(q)}\\frac{6}{(\\alpha-1)\\pi^2}\\frac{1}{X^{\\alpha-1}}+O^*\\left(\\frac{\\sqrt{q}}{\\varphi_{\\frac{1}{2}}(q)}\\frac{\\mathrm{E}_\\alpha^{(1)}\\prod_{2|q}\\frac{\\mathrm{E}_\\alpha^{(2)}}{\\mathrm{E}_\\alpha^{(1)}}}{X^{\\alpha-\\frac{1}{2}}}\\right),\n\\end{align*}\nwhich, again, has the expected main term according to Theorem \\ref{general} but an 
error term of lower magnitude.\n\\end{proof}\n\nLet us recall that the requirement of the empty sum estimation, as in Lemma \\ref{SumEstimations}, slightly worsens the error term constants with respect to the ones under the condition $X\\geq 1$, say, as shown in Lemmas \\ref{sum2:critic:1} and \\ref{sum2:critic}, but we gain regularity in our expressions in the variable $d$. It is precisely that regularity that allows us to derive the coprimality restriction products in a simpler manner: for example, we derive immediately that $\\sum_{\\substack{d\\\\d\\in\\mathcal{D}_r}}\\frac{\\lambda(d)}{d}=\\frac{r}{\\kappa(r)}$, whereas the condition $\\frac{X}{d}\\geq 1$ would have forced us to analyze $\\sum_{\\substack{d\\leq X\\\\d\\in\\mathcal{D}_r}}\\frac{\\lambda(d)}{d}$ or, rather, $\\sum_{\\substack{d>X\\\\d\\in\\mathcal{D}_r}}\\frac{\\lambda(d)}{d}$. This last observation is key for the work carried out in \\cite{RA13} and \\cite{RA19}. \n\n\\begin{corollary} Let $X>0$. Then\n\\begin{align*}\n\\sum_{\\substack{\\ell> X\\\\(\\ell,q)=1}}\\frac{\\mu^2(\\ell)}{\\ell^2}=\\frac{q}{\\kappa(q)}\\frac{6}{\\pi^2}\\frac{1}{X}+O^*\\left(\\frac{\\sqrt{q}}{\\varphi_{\\frac{1}{2}}(q)}\\frac{\\sage{Upper(E(2,1),digits)}}{X^{\\frac{3}{2}}}\\right),&\\quad\\text{ if }2\\nmid q,\\\\\n\\phantom{\\sum_{\\substack{\\ell> X\\\\(\\ell,q)=1}}\\frac{\\mu^2(\\ell)}{\\ell^2}}=\\frac{q}{\\kappa(q)}\\frac{6}{\\pi^2}\\frac{1}{X}+O^*\\left(\\frac{\\sqrt{q}}{\\varphi_{\\frac{1}{2}}(q)}\\frac{\\sage{Upper(E(2,2),digits)}}{X^{\\frac{3}{2}}}\\right),&\\quad\\text{ if }2|q.\n\\end{align*} \n\\begin{proof} By applying Lemma \\ref{seekfor} with $\\alpha=2$, we have\n\\begin{align*}\n\\sum_{\\substack{\\ell\\leq 
X\\\\(\\ell,q)=1}}\\frac{\\mu^2(\\ell)}{\\ell^2}&=\\frac{q^2}{\\kappa_{2}(q)}\\frac{\\zeta(2)}{\\zeta(4)}-\\frac{q}{\\kappa(q)}\\frac{6}{\\pi^2}\\frac{1}{X}+O^*\\left(\\frac{\\sqrt{q}}{\\varphi_{\\frac{1}{2}}(q)}\\frac{\\mathrm{E}_2^{(1)}\\prod_{2|q}\\frac{\\mathrm{E}_2^{(2)}}{\\mathrm{E}_2^{(1)}}}{X^{\\frac{3}{2}}}\\right),\n\\end{align*}\nwhere, for $v\\in\\{1,2\\}$, $\\mathrm{E}_2^{(v)}$ is defined as\n\\begin{align*}\n\\max\\left\\{\\frac{5\\ \\mathrm{D}_v}{3},\\frac{\\varphi_{\\frac{1}{2}}(v)}{\\sqrt{v}}\\left|\\frac{v^2}{\\kappa_2(v)}\\frac{\\zeta(2)}{\\zeta(4)}-\\frac{v}{\\kappa(v)}\\frac{6}{\\pi^2}\\right|,\\frac{\\varphi_{\\frac{1}{2}}(v)}{\\sqrt{v}}\\frac{2}{3}\\left(\\frac{2\\kappa_2(v)\\zeta(4)}{v\\kappa(v)\\pi^2\\zeta(2)}\\right)^{2}\\right\\}&\\\\\n\\leq\\begin{cases}\n\\sage{Upper(E(2,1),digits)},&\\quad\\text{ if }v=1,\\\\\n\\sage{Upper(E(2,2),digits)},&\\quad\\text{ if }v=2.\n\\end{cases}&\n\\end{align*}\nWe obtain the result by observing that\n\\begin{equation*}\n\\sum_{\\substack{\\ell> X\\\\(\\ell,q)=1}}\\frac{\\mu^2(\\ell)}{\\ell^2}=\\frac{q^2}{\\kappa_{2}(q)}\\frac{\\zeta(2)}{\\zeta(4)}\n-\\sum_{\\substack{\\ell\\leq X\\\\(\\ell,q)=1}}\\frac{\\mu^2(\\ell)}{\\ell^2}.\n\\end{equation*} \n\\end{proof} \n\\end{corollary}\n\n\n\n\\subsection{Achieving the critical exponent}\\label{achieving} \n\n\n\nWe present a new method to achieve the critical exponent for estimations of averages of the form studied in Theorem \\ref{general} provided that the difference between $\\beta$ and $\\alpha$ defined therein is strictly bigger than $\\frac{1}{2}$: in this case, we go to the edge of the special form of the convolution method given in \\S\\ref{TCM}; moreover, no extra conditions on $\\beta$ are needed but $\\beta-\\alpha>\\frac{1}{2}$. 
Nonetheless, if $\\beta-\\alpha\\leq\\frac{1}{2}$, then we should still refer to Theorem \\ref{general} and its choice of parameter (or indirectly to it, as shown by summation by parts in Theorem \\ref{general++} $\\mathbf{(B)}$, $\\mathbf{(C)}$). \n\n\\begin{theorem}\\label{general++} \n Let $X>0$ be a real number and $q$ a positive integer. Consider a multiplicative function $f:\\mathbb{Z}^+\\to\\mathbb{C}$ such that for every prime number $p$ satisfying $(p,q)=1$, we have $f(p)=\\frac{1}{p^{\\alpha}}+O\\left(\\frac{1}{p^{\\beta}}\\right)$, where $\\alpha$, $\\beta$ are real numbers satisfying $\\beta>\\alpha$, $\\beta-\\alpha>\\frac{1}{2}$. We have the following.\n\n \\noindent $\\mathbf{(A)}$ If $\\alpha>\\frac{1}{2}$ then \n \\begin{align*}\n \\sum_{\\substack{\\ell\\leq X\\\\ (\\ell,q)=1}}\\mu^2(\\ell)f({\\ell})=F_\\alpha^{q}(X)+O^*\\left(\\mathrm{p}_\\alpha(q)\\cdot\\frac{\\mathrm{w}_\\alpha^q\\ \\mathrm{P}_\\alpha}{X^{\\alpha-\\frac{1}{2}}}\\right),\n \\end{align*} \n where $F_\\alpha^q(X)$ is defined as in Theorem \\ref{general}, and, if $2|q$, $\\mathrm{w}_{\\alpha}^q=\\mathrm{E}_\\alpha^{(2)}$, whereas, if $2\\nmid q$, \n\\begin{align*}\n\\mathrm{w}_\\alpha^q&=\\left(\\frac{\\sqrt{2}-1}{\\sqrt{2}-1+|2^\\alpha f(2)-1|}\\right)\\left(\\mathrm{E}_\\alpha^{(1)}+\\frac{|2^\\alpha f(2)-1|\\ \\mathrm{E}_\\alpha^{(2)}}{\\varphi_{\\frac{1}{2}}(2)}\\right).\n\\end{align*}\nHere $\\mathrm{E}_\\alpha^{(v)}, v\\in\\{1,2\\}$ is defined in Lemma \\ref{sum2:critic} and Remark \\ref{newdef}, and we have \n\\begin{align*}\n\\mathrm{p}_\\alpha(q)=\\prod_{p|q}\\left(1+\\frac{1-|f(p)p^\\alpha-1|}{\\sqrt{p}-1+|f(p)p^{\\alpha}-1|}\\right),\\quad\\mathrm{P}_\\alpha=\\prod_{p}\\left(1+\\frac{|f(p)p^\\alpha-1|}{\\sqrt{p}-1}\\right),\n \\end{align*} \nfor all $\\alpha$, where $\\mathrm{P}_\\alpha$ is a convergent infinite product. 
\n\n \\noindent $\\mathbf{(B)}$ If $\\alpha<\\frac{1}{2}$ then $ \\sum_{\\substack{\\ell\\leq X\\\\ (\\ell,q)=1}}\\mu^2(\\ell)f({\\ell})$ can be expressed as\n \\begin{equation*}\n\\frac{H_{f'}^q(0)\\varphi(q)}{(1-\\alpha)q}X^{1-\\alpha}+O^*\\left(\\mathrm{p}_{\\alpha}(q)\\cdot\\left(1+\\frac{2-2\\alpha}{1-2\\alpha}\\right)\\mathrm{w'}_{\\alpha}^q\\mathrm{P}_{\\alpha} \\ X^{\\frac{1}{2}-\\alpha}\\right),\n \\end{equation*} \nwhere $\\mathrm{p}_{\\alpha}(q)$ and $\\mathrm{P}_{\\alpha}$ are as in $\\mathbf{(A)}$ and for $\\alpha\\leq\\frac{1}{2}$,\n \\begin{align*}\nH_{f'}^q(0)&=\\prod_{p\\nmid q}\\left(1-\\frac{p^{1-\\alpha}-f(p)p+f(p)}{p^{2-\\alpha}} \\right),\\\\\n\\mathrm{w'}_{\\alpha}^q&= \n\\begin{cases}\\mathrm{E}_1^{(2)}=\\sage{Upper(CONSTANT2,digits)},&\\quad\\text{ if }2|q,\\\\\n\\left(\\frac{\\sqrt{2}-1}{\\sqrt{2}-1+|2^\\alpha f(2)-1|}\\right)\\left(\\mathrm{E}_1^{(1)}+\\frac{|2^\\alpha f(2)-1|\\ \\mathrm{E}_1^{(2)}}{\\varphi_{\\frac{1}{2}}(2)}\\right),&\\quad\\text{ if }2\\nmid q.\\\\\n\\end{cases}\n\\end{align*} \n\n \\noindent $\\mathbf{(C)}$ If $\\alpha=\\frac{1}{2}$ then $\\sum_{\\substack{\\ell\\leq X\\\\ (\\ell,q)=1}}\\mu^2(\\ell)f({\\ell})$ can be written as\n \\begin{equation*}\n\\frac{H_{f'}^q(0)\\varphi(q)}{(1-\\alpha)q}X^{1-\\alpha}+O^*\\left(\\mathrm{C}+\\mathrm{p}_{\\alpha}(q)\\mathrm{w'}_{\\alpha}^q\\mathrm{P}_{\\alpha} \\ \\left(1+\\frac{1}{2}\\log(X)\\right)\\right),\n \\end{equation*} \nwhere $\\mathrm{p}_{\\alpha}(q)$ and $\\mathrm{P}_{\\alpha}$ are as in $\\mathbf{(A)}$, $H_{f'}^q(0)$ and $\\mathrm{w'}_{\\alpha}^q$ are as in $\\mathbf{(B)}$ and\n\\begin{align*}\n\\mathrm{C}&=\\left|\\frac{H_{f'}^{q}(0)\\varphi(q)}{q}\\left(\\sum_{p\\nmid q}\\frac{\\log(p)(\\sqrt{p}-(p-2)f(p))}{(f(p)+\\sqrt{p})(p-1)}+\\gamma+\\sum_{p|q}\\frac{\\log(p)}{p-1}-2\\right)\\right|.\n\\end{align*}\n \\end{theorem}\n\n\\begin{proof} Let us derive $\\mathbf{(A)}$. Consider the arithmetic function $i_f$ defined on each prime as $p\\mapsto f(p)p^\\alpha-1$. 
Observe that \n\\begin{align}\n \\sum_{\\substack{\\ell\\leq X\\\\ (\\ell,q)=1}}\\mu^2(\\ell)f({\\ell})=\\sum_{\\substack{\\ell\\leq X\\\\ (\\ell,q)=1}}\\frac{\\mu^2(\\ell)}{\\ell^\\alpha}\\cdot f(\\ell)\\ell^{\\alpha}=\\sum_{\\substack{\\ell\\leq X\\\\ (\\ell,q)=1}}\\frac{\\mu^2(\\ell)}{\\ell^\\alpha}\\cdot \\prod_{p|\\ell}(1+i_f(p))&\\nonumber\\\\\n=\\sum_{\\substack{\\ell\\leq X\\\\ (\\ell,q)=1}}\\frac{\\mu^2(\\ell)}{\\ell^\\alpha}\\sum_{d|\\ell}\\mu^2(d)i_f(d)=\\sum_{\\substack{d\\\\(d,q)=1}}\\frac{\\mu^2(d)i_f(d)}{d^{\\alpha}}\\sum_{\\substack{e\\leq\\frac{X}{d}\\\\(e,qd)=1}}\\frac{\\mu^2(e)}{e^\\alpha}&\\label{followup},\n \\end{align} \nwhere we have not imposed upper bound conditions on the variable $d$. \n\nIn order to continue our estimation, we must be able to estimate the innermost summation in \\eqref{followup} regardless of whether its summation condition is empty, in such a way that the resulting remainder terms converge once the outermost summation is carried out. As $\\alpha>\\frac{1}{2}$, this situation can be treated with the help of Lemma \\ref{seekfor}; we distinguish two cases.\n\n\\noindent $\\mathbf{i)}$ $2|q$. Then continuing from \\eqref{followup}, along with the ideas of the proof of Theorem \\ref{general} and Lemma \\ref{seekfor}, it is not difficult to see, as expected, that for all $\\alpha>\\frac{1}{2}$, the main term of $\\sum_{\\substack{\\ell\\leq X\\\\ (\\ell,q)=1}}\\mu^2(\\ell)f({\\ell})$ is $F_\\alpha^q(X)$. 
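The rearrangement \eqref{followup} is itself an exact identity for every $X$, since the inner sum is empty once $d>X$; it can be sanity-checked with rational arithmetic. The sketch below uses the hypothetical example $f(p)=\frac{1}{p-1}$ with $\alpha=1$ (so that $i_f(p)=f(p)p-1=\frac{1}{p-1}$) and small values of $q$.

```python
from fractions import Fraction
from math import gcd

def squarefree(n):
    k = 2
    while k * k <= n:
        if n % (k * k) == 0:
            return False
        k += 1
    return True

def prime_factors(n):
    ps, k = [], 2
    while k * k <= n:
        if n % k == 0:
            ps.append(k)
            while n % k == 0:
                n //= k
        k += 1
    if n > 1:
        ps.append(n)
    return ps

def f(l):
    # multiplicative with f(p) = 1/(p-1); for squarefree l this is 1/phi(l)
    out = Fraction(1)
    for p in prime_factors(l):
        out *= Fraction(1, p - 1)
    return out

def i_f(d):
    # i_f(p) = f(p) p^alpha - 1 = 1/(p-1) for alpha = 1, extended multiplicatively
    out = Fraction(1)
    for p in prime_factors(d):
        out *= Fraction(1, p - 1)
    return out

def lhs(X, q):
    return sum(f(l) for l in range(1, X + 1)
               if squarefree(l) and gcd(l, q) == 1)

def rhs(X, q):
    # the outer sum over d is infinite, but the inner sum over e <= X/d
    # is empty once d > X, so truncating at d = X is exact
    total = Fraction(0)
    for d in range(1, X + 1):
        if squarefree(d) and gcd(d, q) == 1:
            inner = sum(Fraction(1, e) for e in range(1, X // d + 1)
                        if squarefree(e) and gcd(e, q * d) == 1)
            total += i_f(d) / d * inner
    return total

assert lhs(40, 5) == rhs(40, 5)
assert lhs(40, 6) == rhs(40, 6)
```

This confirms the convolution rewriting before any analytic estimate is applied to the inner sum.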
As for the error term, it corresponds to\n\\begin{align*}\\sum_{\\substack{d\\\\(d,q)=1}}\\frac{\\mu^2(d)|i_f(d)|}{d^\\alpha}O^*\\left(\\frac{\\sqrt{qd}}{\\varphi_{\\frac{1}{2}}(qd)}\\frac{\\mathrm{E}_\\alpha^{(2)}\\ d^{\\alpha-\\frac{1}{2}}}{X^{\\alpha-\\frac{1}{2}}}\\right)\\phantom{xxxxxxxxx}&\\\\\n=O^*\\left(\\frac{\\sqrt{q}}{\\varphi_{\\frac{1}{2}}(q)}\\prod_{p\\nmid q}\\left(1+\\frac{|i_f(p)|}{\\sqrt{p}-1}\\right)\\cdot\\frac{\\mathrm{E}_\\alpha^{(2)}}{X^{\\alpha-\\frac{1}{2}}}\\right),&\n\\end{align*}\nwhere, for any $\\alpha>\\frac{1}{2}$, $\\frac{\\sqrt{q}}{\\varphi_{\\frac{1}{2}}(q)}\\prod_{p\\nmid q}\\left(1+\\frac{|i_f(p)|}{\\sqrt{p}-1}\\right)$ may be expressed as\n\\begin{align*}\n\\frac{\\sqrt{q}}{\\varphi_{\\frac{1}{2}}(q)}\\prod_{p|q}\\left(1+\\frac{|f(p)p^\\alpha-1|}{\\sqrt{p}-1}\\right)^{-1}\\cdot\\mathrm{P}_\\alpha=\\mathrm{p}_\\alpha(q)\\cdot\\mathrm{P}_\\alpha,\n\\end{align*}\nwhere $\\mathrm{p}_\\alpha(q)$ and $\\mathrm{P}_\\alpha$ are defined in the statement. Observe that $\\mathrm{P}_\\alpha$ converges, as $\\frac{|i_f(p)|}{\\sqrt{p}-1}=\\frac{|f(p)p^\\alpha-1|}{\\sqrt{p}-1}=O\\left(\\frac{1}{p^{\\beta-\\alpha+\\frac{1}{2}}}\\right)$ and $\\beta-\\alpha+\\frac{1}{2}>1$.\n\n\\noindent $\\mathbf{ii)}$ $2\\nmid q$. Then we can write \\eqref{followup} as\n\\begin{align*}\n &\\sum_{\\substack{d\\\\(d,2q)=1}}\\frac{\\mu^2(d)i_f(d)}{d^{\\alpha}}\\sum_{\\substack{e\\leq\\frac{X}{d}\\\\(e,qd)=1}}\\frac{\\mu^2(e)}{e^\\alpha}+\\frac{i_f(2)}{2^\\alpha}\\sum_{\\substack{d\\\\(d,2q)=1}}\\frac{\\mu^2(d)i_f(d)}{d^{\\alpha}}\\sum_{\\substack{e\\leq\\frac{X}{2d}\\\\(e,2qd)=1}}\\frac{\\mu^2(e)}{e^\\alpha}\\\\\n&=S_\\alpha^q(X)+\\frac{i_f(2)}{2^\\alpha}T_\\alpha^q(X).\n\\end{align*}\nAgain, it is not difficult to see that, for any $\\alpha>\\frac{1}{2}$, the main term of $S_\\alpha^q(X)+\\frac{i_f(2)}{2^\\alpha}T_\\alpha^q(X)$ is $F_\\alpha^q(X)$, defined in Theorem \\ref{general}. 
As for the error term of $S_\\alpha^q(X)+\\frac{i_f(2)}{2^\\alpha}T_\\alpha^q(X)$, it can be expressed as\n\\begin{align*}\n &\\sum_{\\substack{d\\\\(d,2q)=1}}\\frac{\\mu^2(d)|i_f(d)|}{d^{\\alpha}}O^*\\left(\\frac{\\sqrt{qd}}{\\varphi_{\\frac{1}{2}}(qd)}\\frac{\\mathrm{E}_{\\alpha}^{(1)}\\ d^{\\alpha-\\frac{1}{2}}}{X^{\\alpha-\\frac{1}{2}}}\\right)\\\\\n&\\phantom{xxxx}+\\frac{|i_f(2)|}{2^{\\alpha}}\\sum_{\\substack{d\\\\(d,2q)=1}}\\frac{\\mu^2(d)|i_f(d)|}{d^{\\alpha}}O^*\\left(\\frac{\\sqrt{2qd}}{\\varphi_{\\frac{1}{2}}(2qd)}\\frac{\\mathrm{E}_{\\alpha}^{(2)}\\ (2d)^{\\alpha-\\frac{1}{2}}}{X^{\\alpha-\\frac{1}{2}}}\\right)=\\\\\n&O^*\\left(\\frac{\\sqrt{q}}{\\varphi_{\\frac{1}{2}}(q)}\\prod_{p\\nmid 2q}\\left(1+\\frac{|i_f(p)|}{\\sqrt{p}-1}\\right)\\left(\\mathrm{E}_\\alpha^{(1)}+\\frac{|i_f(2)|\\ \\mathrm{E}_\\alpha^{(2)}}{\\varphi_{\\frac{1}{2}}(2)}\\right)\\cdot\\frac{1}{X^{\\alpha-\\frac{1}{2}}}\\right)=\\\\\n&O^*\\left(\\mathrm{p}_\\alpha(q)\\left(\\frac{\\sqrt{2}-1}{\\sqrt{2}-1+|2^\\alpha f(2)-1|}\\right)\\left(\\mathrm{E}_\\alpha^{(1)}+\\frac{|2^\\alpha f(2)-1|\\ \\mathrm{E}_\\alpha^{(2)}}{\\varphi_{\\frac{1}{2}}(2)}\\right)\\cdot\\frac{\\mathrm{P}_\\alpha}{X^{\\alpha-\\frac{1}{2}}}\\right),\n\\end{align*}\nwhence the first case.\n\nCondition $\\alpha>\\frac{1}{2}$ in the case $\\mathbf{(A)}$ is necessary, as we have used Lemma \\ref{seekfor}. Nonetheless, we can readily derive an analogous result for the cases $\\mathbf{(B)}$ and $\\mathbf{(C)}$. Indeed, we can write $f(p)=p^{1-\\alpha}f'(p)$, where $A(t)=\\sum_{\\substack{\\ell\\leq t\\\\(\\ell,q)=1}}\\mu^2(\\ell)f'(\\ell)$ can be estimated by the case $\\mathbf{(A)}$ with $\\alpha'=1$, $\\beta'=1-\\alpha+\\beta$. 
We can then estimate $\\sum_{\\substack{\\ell\\leq X\\\\(\\ell,q)=1}}\\mu^2(\\ell)f(\\ell)=\\sum_{\\substack{\\ell\\leq X\\\\(\\ell,q)=1}}\\mu^2(\\ell)f'(\\ell)\\ell^{1-\\alpha}$ by means of a summation by parts, obtaining the result.\n\\end{proof}\n\nNote that the error term improvement from Theorem \\ref{general}, when $\\alpha=\\frac{1}{2}$ and under the conditions of Theorem \\ref{general++}, is of logarithmic nature with respect to $O(X^{\\frac{1}{2}-\\delta})$ for any $\\delta\\in(0,\\frac{1}{2})$.\n\nConcerning the error term in Theorem \\ref{general++}, in some particular cases one can do much better in terms of error constants. For instance, it is known, by \\cite[Lemmas 5.1-5.2]{Hel19},\nthat if $f(p)=1$ and $v\\in\\{1,2\\}$, then for any $X>0$ we have\n\\begin{equation}\\label{squarefree} \n\\sum_{\\substack{\\ell\\leq X\\\\(\\ell,v)=1}}\\mu^2(\\ell)=\\frac{6}{\\pi^2}\\frac{v}{\\kappa(v)}X+O^*(\\mathrm{H}_v\\sqrt{X}),\n\\end{equation}\nwhere\n\\begin{equation}\\mathrm{H}_v=\n\\begin{cases}\n\\sqrt{3}\\left(1-\\frac{6}{\\pi^2}\\right)&\\quad\\text{ if }v=1,\\label{HH2}\\\\\n1-\\frac{4}{\\pi^2}&\\quad\\text{ if }v=2,\n\\end{cases}\n\\end{equation}\nwhereas Theorem \\ref{general++} provides only an explicit error term of the form $O^*\\left(\\frac{\\sqrt{q}}{\\varphi_{\\frac{1}{2}}(q)}\\cdot\\sage{Upper(cst_crucial*3,digits)}\\sqrt{X}\\right)$.\n\n\n\n\\subsection{Consequences}\\label{Cop} \n\n\n\n\\begin{lemma}\\label{consequences}\nLet $X>0$. Then the sum $\\sum_{\\substack{\\ell\\leq X\\\\(\\ell,q)=1}}\\frac{\\mu^2(\\ell)}{\\varphi(\\ell)}$ may be estimated as\n\\begin{equation}\\label{summi} \n\\frac{\\varphi(q)}{q}\\left(\\log\\left(X\\right)+\\mathfrak{a}_q\\right)+O^*\\left(\\prod_{p|q}\\left(1+\\frac{p-2}{p^{\\frac{3}{2}}-p-\\sqrt{p}+2}\\right)\\cdot\\frac{\\sage{Upper(CONSTANT_RAM*Prod_Ram_Upper,digits)}\\prod_{2|q}\\sage{Upper(CONSTANT2\/CONSTANT_RAM,digits)}}{\\sqrt{X}}\\right),\n\\end{equation}\nwhere $\\mathfrak{a}_q$ is defined in Corollary 
\\ref{corollary}.\n\\end{lemma}\n\\begin{proof} We already know the main term of the asymptotic expression of the above sum, thanks to Corollary \\ref{corollary} $\\mathbf{(a)}$; obtaining it again from Theorem \\ref{general++} is an exercise. On the other hand, by Theorem \\ref{general++} with $f(p)=\\frac{1}{p-1}$, $\\alpha=1$, $\\beta=2$, its error term can be expressed as $O^*\\left(\\mathrm{p}(q)\\cdot\\frac{\\mathrm{w}^q\\ \\mathrm{P}}{\\sqrt{X}}\\right)$, where\n\\begin{align*}\n\\mathrm{p}(q)&=\\prod_{p|q}\\left(1+\\frac{p-2}{p^{\\frac{3}{2}}-p-\\sqrt{p}+2}\\right),\\\\\n \\mathrm{P}&=\\prod_{p}\\left(1+\\frac{1}{(p-1)(\\sqrt{p}-1)}\\right)\\in[\\sage{Trunc(Prod_Ram_Lower,dlong)},\\sage{Trunc(Prod_Ram_Upper,dlong)}],\\\\\n\\mathrm{w}^q&=\\begin{cases}\n\\sage{Trunc(CONSTANT2,digits)},&\\text{ if }2|q,\\\\\n\\left(1-\\frac{1}{\\sqrt{2}}\\right)\\left(\\mathrm{E}_1^{(1)}+\\frac{\\mathrm{E}_1^{(2)}}{\\varphi_{\\frac{1}{2}}(2)}\\right)=\\sage{Trunc(CONSTANT_RAM,digits)}\\ldots,&\\text{ if }2\\nmid q\n\\end{cases}\\leq\\sage{Upper(CONSTANT_RAM,digits)}\\prod_{2|q}\\sage{Upper(CONSTANT2\/CONSTANT_RAM,digits)},\n\\end{align*} \nand where $\\mathrm{E}_1^{(v)}$, $v\\in\\{1,2\\},$ is defined in \\S\\ref{particular}.\n\\end{proof}\n\nWhen there are no coprimality conditions, we have obtained an error constant equal to $\\sage{Upper(CONSTANT_RAM*Prod_Ram_Upper,digits)}$, which holds under the condition $X>0$. Ramar\\'e and Akhilesh in \\cite[Thm. 1.2]{RA13} have given the constant $3.95$ under the condition $X\\geq 1$, later improved by Ramar\\'e himself in \\cite{RA19} to $2.44$ under the condition $X>1$. From these last two bounds, it is not difficult to extend the range of estimation to $X>0$, as we have done for example throughout Lemma \\ref{SumEstimations}, and these bounds continue to be better than the value $\\sage{Upper(CONSTANT_RAM*Prod_Ram_Upper,digits)}$.\n\nNonetheless, the above lemma improves considerably upon \\cite[Thm. 
1.1]{RA13} when coprimality conditions given by $q\\geq 2$ are involved. For example, we have\n\\begin{align}\\label{values}\n\\sage{Upper(crux(v0)*Prod_Ram_Upper,digits)}\\cdot\\mathrm{p}(\\sage{v0})\\leq\\sage{Upper(pp(v0)*crux(v0)*Prod_Ram_Upper,digits)}&\\leq\\sage{Lower(5.9*ramp(v0),digits)}\\leq 5.9\\cdot j(\\sage{v0}),\\nonumber\\\\\n\\sage{Upper(crux(v1)*Prod_Ram_Upper,digits)}\\cdot\\mathrm{p}(\\sage{v1})\\leq\\sage{Upper(pp(v1)*crux(v1)*Prod_Ram_Upper,digits)}&\\leq\\sage{Lower(5.9*ramp(v1),digits)}\\leq 5.9\\cdot j(\\sage{v1}),\\nonumber\\\\\n\\sage{Upper(crux(v2)*Prod_Ram_Upper,digits)}\\cdot\\mathrm{p}(\\sage{v2})\\leq\\sage{Upper(pp(v2)*crux(v2)*Prod_Ram_Upper,digits)}&\\leq\\sage{Lower(5.9*ramp(v2),digits)}\\leq 5.9\\cdot j(\\sage{v2}),\\\\\n\\sage{Upper(crux(v1_2)*Prod_Ram_Upper,digits)}\\cdot\\mathrm{p}(\\sage{v1_2})\\leq\\sage{Upper(pp(v1_2)*crux(v1_2)*Prod_Ram_Upper,digits)}&\\leq\\sage{Lower(5.9*ramp(v1_2),digits)}\\leq 5.9\\cdot j(\\sage{v1_2}),\\nonumber\\\\\n\\sage{Upper(crux(v3_2)*Prod_Ram_Upper,digits)}\\cdot\\mathrm{p}(\\sage{v3_2})\\leq\\sage{Upper(pp(v3_2)*crux(v3_2)*Prod_Ram_Upper,digits)}&\\leq\\sage{Lower(5.9*ramp(v3_2),digits)}\\leq 5.9\\cdot j(\\sage{v3_2}),\\nonumber\\\\\n\\sage{Upper(crux(v7_2)*Prod_Ram_Upper,digits)}\\cdot\\mathrm{p}(\\sage{v7_2})\\leq\\sage{Upper(pp(v7_2)*crux(v7_2)*Prod_Ram_Upper,digits)}&\\leq\\sage{Lower(5.9*ramp(v7_2),digits)}\\leq 5.9\\cdot j(\\sage{v7_2}),\\nonumber\n\\end{align}\nwhere $j$ is the error term arithmetic function defined in \\cite[Thm. 1.1]{RA13} as $2\\mapsto\\frac{21}{25}$ and $p\\geq 3\\mapsto 1+\\frac{p-2}{p^{\\frac{3}{2}}-\\sqrt{p}+1}$. Furthermore, the estimation given in Lemma \\ref{consequences} is better than the one in \\cite[Thm. 1.1]{RA13} for all $q=p$ prime. 
Indeed, we observe in \\eqref{values} that it is better when $p\\in\\{2,3,5\\}$; now, since\n\\begin{align*}\n\\frac{p-2}{p^{\\frac{3}{2}}-p-\\sqrt{p}+2}&<\\frac{1}{\\sqrt{p}}&&\\text{ for all }p\\geq 3,\\\\\n\\frac{p-2}{p^{\\frac{3}{2}}-\\sqrt{p}+1}&>\\frac{1}{2\\sqrt{p}}&&\\text{ for all }p\\geq 5,\n\\end{align*}\nwe have, for all $p\\geq 3$, that\n\\begin{align*}\n\\sage{Upper(crux(v1)*Prod_Ram_Upper,digits)}\\cdot\\mathrm{p}(p)\\leq\\sage{Upper(crux(v1)*Prod_Ram_Upper,digits)}\\cdot\\left(1+\\frac{1}{\\sqrt{p}}\\right)\\leq5.9\\cdot\\left(1+\\frac{1}{2\\sqrt{p}}\\right)\\leq 5.9\\cdot j(p),\n\\end{align*}\nwhence the conclusion.\n\n\nAs a final remark, observe that the main contribution to the product $\\mathrm{P}$ given in Lemma \\ref{consequences} is precisely when $p=2$. This is the reason why, in the present work, we have distinguished whether $q$ is odd or even. Further, as the second main contribution to the product $\\mathrm{P}$ is given by its factor at $p=3$ (the subsequent factors when $p>3$ being rather small, as $\\frac{1}{\\sqrt{p}-1}<1$), the interested reader may study the behavior of the error term bounds given in Theorem \\ref{general++}, and therefore the error term in Lemma \\ref{consequences}, by distinguishing whether or not $(6,q)=1$: this procedure will require an extension of Lemma \\ref{sum2:critic:1} to the cases $(3,q)=1$ and, by using the inclusion-exclusion principle, to the case $(6,q)=1$; afterwards, the analysis will continue exactly as in the current version of Theorem \\ref{general++}. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\n\n\\section{Introduction} \\label{sec:introduction}\nMechanical search -- extracting a desired object from a heap of objects -- is a fundamental task for robots in unstructured e-commerce warehouse environments or for robots in home settings. It remains challenging due to uncertainty in perception and actuation as well as lack of models for occluded objects in the heap. 
\n\n\n\nData-driven methods are promising for grasping unknown objects in clutter and bin picking~\\cite{mahler2019learning,pinto2016supersizing,kalashnikov2018qt,morrison2018closing,gualtieri2016high}, and can reliably plan grasps on the most accessible object without semantic knowledge of the target object. Some reinforcement learning~\\cite{yang2019deep,jang2017end} or hierarchical~\\cite{danielczuk2019mechanical} mechanical search policies use semantics, but have so far been limited to specific objects or heuristic policies. \n\nIn this paper, we draw on recent work on shape completion to reason about occluded objects~\\cite{varley2017shape,price2019inferring} and work on predicting multiple pose hypotheses~\\cite{manhardt2018explaining,rupprecht2017learning}. X-Ray combines occlusion inference and hypothesis predictions to estimate an occupancy distribution for the bounding box most similar to the target object, capturing its likely poses -- translations and rotations in the image plane. X-Ray can efficiently extract the target object from a heap where it is fully occluded or partially occluded (Figure~\\ref{fig:splash}).\n\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width=\\linewidth]{splash-v4.png}\n\\caption{Mechanical search with a fully occluded target object (top row) and a partially occluded target object (bottom row). We predict the target object occupancy distribution, which depends on the target object's visibility and the heap (second column). Each pixel value in the distribution image corresponds to the likelihood of that pixel containing part of the target object. X-Ray plans a grasp on the object that minimizes the estimated support of the resulting occupancy distribution to minimize the number of actions to extract the target object. 
We show two nearly-identical heaps; in the fully occluded case, X-Ray grasps the mustard bottle whereas in the partially occluded case, the policy grasps the face lotion (third column), resulting in the respective next states (fourth column).}\n\\label{fig:splash}\n\\vspace{-8pt}\n\\end{figure}\n\n\nThis paper provides four contributions:\n\\begin{enumerate}\n\\item X-Ray (maXimize Reduction in support Area of occupancY distribution): a mechanical search policy that minimizes support of learned occupancy distributions.\n\\item An algorithm for estimating target object occupancy distributions using a set of neural networks trained on a dataset of synthetic images that transfers seamlessly to real images.\n\\item A synthetic dataset generation method and 100,000 RGBD images of heaps labeled with occupancy distributions for a single partially or fully occluded target object, constructed for transfer to real images.\n\\item Experiments comparing the mechanical search policy against two baselines in 1,000 simulated and 20 physical heaps that suggest the policy can reduce the median number of actions needed to extract the target object by 20\\% with a simulated success rate of 87\\% and physical success rate of 100\\%.\n\\end{enumerate}\n\n\\section{Related Work} \\label{sec:relwork}\n\n\\subsection{Pose Hypothesis Prediction}\nThere is a substantial amount of related work in computer vision on 3D and 6D pose prediction of both known and unknown objects in RGB, depth, and RGBD images~\\cite{kehl2017ssd,xiang2017posecnn,li2018deepim,hinterstoisser2012model}. Many of these papers assume that the target objects are either fully visible or have minor occlusions. In addition, many assume that there is no ambiguity in object pose due to self-occlusion or rotational symmetry of the object, as these factors can significantly decrease performance for neural network-based approaches~\\cite{corona2018pose}. 
Recent work has attempted to address the pose ambiguity that results from object geometry or occlusions by restricting the range of rotations~\\cite{rad2017bb8} or by predicting multiple hypotheses for each detected object~\\cite{rupprecht2017learning, manhardt2018explaining}. \\citet{rupprecht2017learning} find that refining multiple pose hypotheses to a 6D prediction outperforms single hypothesis predictions on a variety of vision tasks, such as human pose estimation, object classification, and frame prediction. \\citet{manhardt2018explaining} note that directly regressing to a rotation for objects with rotational symmetries can result in an averaging effect where the predicted pose does not match any of the possible poses; thus, they predict multiple pose hypotheses for objects with pose ambiguities to better predict the underlying pose and show Bingham distributions of the predicted hypotheses. However, only minor occlusions are considered, and since ground truth pose distributions are not available for these images and objects, comparisons for continuous distributions can only be made qualitatively. Predicting multiple hypotheses or a distribution to model ambiguity has also been applied to gaze prediction from facial images~\\cite{prokudin2018deep}, segmentation~\\cite{kohl2018probabilistic}, and monocular depth prediction~\\cite{yang2019inferring}. 
In contrast to these works, we learn occupancy distributions in a supervised manner.\n\n\\subsection{Object Search}\nThere has been a diverse set of approaches to grasping in cluttered environments, including methods that use geometric knowledge of the objects in the environment to perform wrench-based grasp metric calculations, nearest-neighbor lookup in a precomputed database, or template matching~\\cite{berenson2008grasp,moll2017randomized,mahler2016dex}, as well as methods using only raw sensor data~\\cite{katz2014perceiving,saxena2008learning}, commonly leveraging convolutional neural networks~\\cite{kalashnikov2018qt,jang2017end,lenz2015deep}. While multi-step bin-picking techniques have been studied, they do not take a specific target object into account~\\cite{mahler2017learning}.\n\n\\citet{kostrikov2016end} learn a critic-only reinforcement learning policy to push blocks in a simulated environment to uncover an occluded MNIST block. \\citet{zeng2018learning} train joint deep fully-convolutional neural networks to predict both pushing and grasping affordances from heightmaps of a scene containing multicolored blocks, then show that the resulting policy (VPG) can separate and grasp novel objects in cluttered heaps. The policy can be efficiently trained on both simulated and physical systems, and can quickly learn elegant pushes to expand the set of available grasps in the scene. \\citet{yang2019deep} train similar grasping and pushing networks as well as separate explorer and coordinator networks to address the exploration\/exploitation tradeoff for uncovering a target object. Their policy learns to push through heaps of objects to find the target and then coordinate grasping and pushing actions to extract it, outperforming a target-centered VPG baseline in success rate and number of actions. 
Both approaches can generalize to objects outside the training distribution, although they are evaluated on a limited set of novel objects, and Yang \\textit{et al.} separate the cases where the target object is partially occluded and fully occluded. Additionally, we focus only on grasping actions, as some mechanical search environments may be constrained or objects may be fragile.\n\nRecently, several approaches to the mechanical search problem have been proposed, both in tabletop and bin picking environments. \\citet{price2019inferring} propose a shape completion approach that predicts occlusion regions for objects to guide exploration in a tabletop scene, while \\citet{xiao2019online} implement a particle filter approach and POMDP solver to attempt to track all visible and occluded objects in the scene. However, 75\\% of the objects in Price \\textit{et al.}'s evaluation scenes are seen in training and Xiao \\textit{et al.}'s method requires models of each of the objects in the scene. We benchmark our policy on a variety of non-rigid, non-convex household objects not seen in training and require no object models. In previous work, \\citet{danielczuk2019mechanical} proposed a general mechanical search problem formulation and introduced a two-stage perception and search policy pipeline. In contrast, we introduce a novel perception network and policy based on minimizing support of occupancy distributions that outperforms the methods introduced in~\\cite{danielczuk2019mechanical}.\n\\section{Problem Statement} \\label{sec:problem}\nWe consider an instance of the mechanical search problem where a robot must extract a known target object from a heap of unknown objects by iteratively grasping to remove non-target objects. 
The objective is to extract the target object using the fewest grasps.\n\n\\subsection{Assumptions}\n\\begin{itemize}\n \\item One known target object, fully or partially occluded by unknown objects in a heap on a planar workspace.\n \\item A robot with a gripper and an overhead RGBD sensor with known camera intrinsics and pose relative to the robot.\n \\item A maximum of one object is grasped per timestep.\n \\item A target object detector that can return a binary mask of visible target object pixels when queried.\n\\end{itemize}\n\n\\begin{figure*}[th!]\n\\vspace{1.5mm}\n\\includegraphics[width=\\textwidth]{dataset_generation-v2.png}\n\\caption{Training dataset generation for learning the occupancy distribution function. Each dataset image is generated by sampling $N = 14$ object models from a dataset of 1296 CAD models. The target object (colored red) is dropped, followed by the $N$ other objects (colored gray), into a planar workspace using dynamic simulation. Camera intrinsics and pose are sampled from uniform distributions centered around their nominal values, and an RGBD image of the scene is rendered. The augmented depth image (top right), consisting of a binary target object modal mask and a two-channel depth image, is the only input used for training, allowing seamless transfer from simulation to real images. 
The ground truth target object distribution is generated by summing all shifted amodal target object masks whose modal masks correspond with the target object modal mask.}\n\\label{fig:datagen}\n\\vspace{-8pt}\n\\end{figure*}\n\n\\subsection{Definitions} \\label{subsec:defs}\nWe define the problem as a partially-observable Markov decision process (POMDP) with the 7-tuple $(S, A, T, R, \\Omega, O, \\gamma)$ and a maximum horizon $H$:\n\\begin{itemize}\n \\item \\textbf{States} $(S)$: A state $\\mathbf{s}_k$ at timestep $k$ consists of the robot, a static overhead RGBD camera, and a static bin containing $N+1$ objects, target object $\\mathcal{O}_t$ and distractor objects $\\lbrace \\mathcal{O}_{1,k}, \\mathcal{O}_{2,k}, \\ldots, \\mathcal{O}_{N,k}\\rbrace$. No prior information is known about the $N$ distractor objects. \n \\item \\textbf{Actions} $(A)$: A grasp action $\\mathbf{a}_k$ at timestep $k$ executed by the robot's gripper.\n \\item \\textbf{Transitions} $(T)$: In simulation, the transition model $T(\\mathbf{s}_{k+1} \\ | \\ \\mathbf{a}_k, \\mathbf{s}_k)$ is equivalent to that used by Mahler et al.~\\cite{mahler2017learning} and uses pybullet~\\cite{coumans2017bullet} for dynamics. 
On the physical system, next states are determined by executing the action on a physical robot and waiting until objects come to rest.\n \\item \\textbf{Rewards} $(R)$: The reward $r_k = R(\\mathbf{s}_k, \\mathbf{a}_k, \\mathbf{s}_{k+1}) \\in \\lbrace 0, 1 \\rbrace$ is 1 if the target object is successfully grasped and lifted from the bin, otherwise the reward is 0.\n \\item \\textbf{Observations} $(\\Omega)$: An observation $\\mathbf{y}_k \\in \\mathbb{R}_+^{h \\times w \\times 4}$ at timestep $k$ consists of an RGBD image with width $w$ and height $h$ taken by the overhead camera.\n \\item \\textbf{Observation Model} $(O)$: A deterministic observation model $O(\\mathbf{y}_k \\ | \\ \\mathbf{s}_k)$ is defined by known camera intrinsics and extrinsics.\n \\item \\textbf{Discount Factor} $(\\gamma)$: To encourage efficient extraction of the target object, $0 < \\gamma < 1$.\n\\end{itemize}\n\nWe also define the following terms:\n\\begin{itemize}\n \\item Modal Segmentation Mask $(\\mathcal{M}_{m,i})$: the region(s) of pixels in an image corresponding to object $\\mathcal{O}_i$ which are visible~\\cite{kanizsa1979organization}.\n \\item Amodal Segmentation Mask $(\\mathcal{M}_{a,i})$: the region(s) of pixels in an image corresponding to object $\\mathcal{O}_i$ which are visible or invisible (occluded by other objects in the image)~\\cite{kanizsa1979organization}.\n \\item The oriented minimum bounding box is the 3D box with the minimum volume that encloses the object, subject to no orientation constraints. 
We use this box to determine scale and aspect ratio for a target object.\n \\item The \\textit{occupancy distribution} $\\rho \\in \\mathcal{P}$ is the unnormalized distribution describing the likelihood that a given pixel in the observation image contains some part of the target object's amodal segmentation mask.\n\\end{itemize}\n\n\\subsection{Objective} \\label{subsec:objective}\nGiven this problem definition and assumptions, the objective is to find a policy $\\pi_\\theta^*$ with parameters $\\theta$ that maximizes the expected discounted sum of rewards:\n\\begin{align*}\n \\theta^* = \\arg \\max_\\theta \\ \\mathbb{E}_{p(\\tau | \\theta)} \\left[\\sum_{k=0}^{H-1} \\gamma^k R(\\mathbf{s}_k, \\pi_\\theta(\\mathbf{y}_k), \\mathbf{s}_{k+1}) \\right]\n\\end{align*}\nwhere $p(\\tau \\ | \\ \\theta) = \\mathbb{P}(s_0) \\Pi_{k=0}^{H-1} T(\\mathbf{s}_{k+1} \\ | \\ \\pi_\\theta(\\mathbf{y}_k), \\mathbf{s}_k) O(\\mathbf{y}_k \\ | \\ \\mathbf{s}_k)$ is the distribution of state trajectories $\\tau$ induced by a policy $\\pi_\\theta$~\\cite{mahler2017learning}. Maximizing this objective corresponds to removing the target object in the fewest number of actions.\n\n\\subsection{Surrogate Reward} \\label{subsec:surrogate-reward}\nBecause the reward defined in Section~\\ref{subsec:defs} is sparse and the transition function relies on complex inter-object and grasp contact dynamics, it is difficult to directly optimize for $\\pi_\\theta$. 
Thus, we instead introduce a dense surrogate reward $\\Tilde{R}$ describing the reduction of the support of the target object's occupancy distribution:\n\\begin{align*}\n \\Tilde{R}(\\mathbf{y}_k, \\mathbf{y}_{k+1}) = |\\textrm{supp}(f_\\rho(\\mathbf{y}_{k}))| - |\\textrm{supp}(f_\\rho(\\mathbf{y}_{k+1}))|,\n\\end{align*}\nwhere $f_\\rho : \\Omega \\longrightarrow \\mathcal{P}$ is a function that takes an observation $\\mathbf{y}_k$ and produces the corresponding occupancy distribution $\\rho_k$ for a given bounding box, and $\\textrm{supp}(\\rho) = \\lbrace (i, j) \\in \\lbrace 0, \\ldots, h-1 \\rbrace \\times \\lbrace 0, \\ldots, w-1 \\rbrace \\ | \\ \\rho(i,j) \\neq 0 \\rbrace$ is the \\textit{support} of the occupancy distribution. Then, $|\\textrm{supp}(\\rho)|$ is the number of nonzero pixels in $\\rho$. Section~\\ref{sec:perception} discusses a data-driven approximation for the function $f_\\rho$, while Section~\\ref{sec:xray-policy} discusses a greedy policy using the learned $f_\\rho$ and $\\Tilde{R}$.\n\\section{Learning Occupancy Distributions} \\label{sec:perception}\nWe describe a method for estimating the function $f_\\rho$ via a deep neural network. Each pixel in the occupancy distribution $\\rho \\in [0, 1]^{h \\times w}$ has a value representing the likelihood of it containing part of the target object's amodal segmentation mask, or the likelihood that some part of the object, in some planar translation or rotation, would occupy that pixel without any occlusions from other objects. We train this pixelwise distribution network on a dataset of augmented depth images and ground-truth occupancy distributions.\n\n\\begin{table*}[th!]\n\t\\centering\n\t\\vspace{1.5mm}\n\t\\begin{tabu} to \\textwidth {X[2c]X[c]X[c]X[c]X[c]X[c]X[c]X[c]X[c]} \\toprule\n\t\t & \\multicolumn{2}{c}{\\textbf{Test}} & \\multicolumn{2}{c}{\\textbf{Lid}} & \\multicolumn{2}{c}{\\textbf{Domino}} & \\multicolumn{2}{c}{\\textbf{Flute}} \\\\\n\t\t\\textbf{Aspect Ratio} & Bal. Acc. & IoU & Bal. 
Acc. & IoU & Bal. Acc. & IoU & Bal. Acc. & IoU \\\\\\midrule\n\t\t1:1 & $98\\%$ & $0.91$ & $\\bm{93\\%}$ & $\\bm{0.70}$ & $92\\%$ & $0.74$ & $71\\%$ & $0.30$\\\\\n\t\t2:1 & $97\\%$ & $0.90$ & $79\\%$ & $0.44$ & $\\bm{96\\%}$ & $0.81$ & $84\\%$ & $0.44$\\\\\n\t\t5:1 & $97\\%$ & $0.90$ & $66\\%$ & $0.23$ & $96\\%$ & $\\bm{0.83}$ & $\\bm{86\\%}$ & $\\bm{0.49}$\\\\\n\t\t10:1 & $97\\%$ & $0.87$ & $84\\%$ & $0.49$ & $82\\%$ & $0.58$ & $82\\%$ & $0.41$ \\\\\\bottomrule\n\t\\end{tabu}\n\t\\caption{Balanced accuracy (Bal. Acc.) and Intersection over Union (IoU) metrics for networks trained on various aspect ratio target boxes. The first column is the respective set of 2,000 test images for the network's training dataset. The other columns show how the networks can generalize to unseen objects outside the training distribution. Each dataset contains 1,000 test images for the lid, domino, and flute objects, respectively. These objects are shown in Figure~\\ref{fig:perceptionbenchmark} and have approximate aspect ratios of 1:1, 2:1, and 5:1, respectively. Each network performs very well when estimating distributions for its training target object and makes reasonable predictions for target objects with similar bounding box aspect ratios, even for novel target objects at different scales and in the presence of new occluding objects. However, a network trained on a small aspect ratio does not generalize well to higher aspect ratio objects, as it tends to overestimate the occupancy distribution.}\n\t\\label{tab:perceptionbenchmark}\n\t\\vspace{-6pt}\n\\end{table*}\n\n\\subsection{Dataset Generation} We generate a dataset of 10,000 synthetic augmented depth images labeled with target object occupancy distributions for a rectangular box target object. 
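Concretely, the support-based surrogate reward $\Tilde{R}$ of Section~\ref{subsec:surrogate-reward} reduces to counting nonzero pixels of the predicted distribution. A minimal sketch (the toy arrays and function names are ours, not the paper's implementation):

```python
import numpy as np

def support_size(rho, eps=0.0):
    """|supp(rho)|: number of pixels where the occupancy distribution is nonzero.
    eps > 0 can be used to ignore numerically tiny values."""
    return int(np.count_nonzero(rho > eps))

def surrogate_reward(rho_k, rho_k1):
    """R~(y_k, y_{k+1}) = |supp(f_rho(y_k))| - |supp(f_rho(y_{k+1}))|."""
    return support_size(rho_k) - support_size(rho_k1)

# toy example: removing an occluder shrinks the support, giving positive reward
rho_before = np.zeros((48, 64)); rho_before[10:30, 10:40] = 0.5
rho_after  = np.zeros((48, 64)); rho_after[12:20, 15:25] = 1.0
print(surrogate_reward(rho_before, rho_after))  # positive: support shrank
```

A fully visible target collapses the support to its modal mask, so the reward naturally saturates once the object is uncovered.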
We choose 10 box targets of various dimensions ranging from $3\\,\\mathrm{cm} \\times 3\\,\\mathrm{cm} \\times 5\\,\\mathrm{mm}$ to $9.5\\,\\mathrm{cm} \\times 0.95\\,\\mathrm{cm} \\times 5\\,\\mathrm{mm}$ (aspect ratios varying from 1:1 to 10:1) with equal volume and generate a dataset for each, resulting in a total of 100,000 dataset images. We choose a relatively small thickness for the target so that it is more likely to be occluded in heaps of objects, as it tends to lie flat on the workspace. We sample a state $\\mathbf{s}_0$ by uniformly sampling a set of $N$ 3D CAD models as well as a heap center and 2D offsets for each object from a 2D truncated Gaussian. First, $\\mathcal{O}_t$ is dropped from a fixed height above the workspace, then the other $N$ objects are dropped one by one from a fixed height, and dynamic simulation is run until all objects come to rest (all velocities are zero). Any objects that fall outside of the workspace are removed. $N$ is drawn from a Poisson distribution ($\\lambda = 12$) truncated such that $N \\in [10, 15]$. The 3D CAD models are drawn from a dataset of 1296 models available on Thingiverse, including ``packaged'' models, where the original model has been augmented with a rectangular backing, as in~\\cite{mahler2019learning}. The camera position is drawn from a uniform distribution over a viewsphere, and camera intrinsics are sampled uniformly from a range around their nominal values. For the nominal values, we use the Photoneo PhoXi S datasheet intrinsics and a camera pose pointing straight down at the heap from a height of $0.8\\,\\mathrm{m}$. An RGBD image is rendered, and augmented depth images are created by concatenating a binary modal mask of the target object with the depth image. Note that if the target object is not visible, the image is equivalent to a two-channel depth image, as the first channel is all zeros. 
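The augmented depth input can be sketched as follows; duplicating the depth map into two channels alongside the binary mask is our assumption, based on the "two-channel depth image" wording of the Figure~\ref{fig:datagen} caption:

```python
import numpy as np

def augmented_depth(depth, target_modal_mask):
    """Build the network input: a binary target modal mask concatenated with a
    two-channel depth image (the exact channel layout is an assumption made for
    illustration). If the target is fully occluded, the mask channel is all
    zeros and the input degenerates to a two-channel depth image, as noted in
    the text."""
    mask = (target_modal_mask > 0).astype(np.float32)
    return np.dstack([mask, depth, depth]).astype(np.float32)

depth = np.random.uniform(0.6, 0.8, size=(48, 64)).astype(np.float32)
fully_occluded = np.zeros((48, 64))            # detector found no target pixels
x = augmented_depth(depth, fully_occluded)
print(x.shape, x[..., 0].max())  # (48, 64, 3) 0.0
```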
We find that training on these images, as opposed to training on RGBD images directly, allows for seamless transfer between simulated and real images.\n\nTo generate the ground-truth occupancy distribution, we find the set of translations and rotations in the image plane for the target object such that an image rendered from the same camera pose with all other objects in the scene in the same respective poses will yield the same target object modal segmentation mask. Thus, when the object is fully visible, the distribution's support collapses to the pixels of the target object modal segmentation mask. However, when the object is partially or fully occluded, multiple target object translations or rotations may result in the same image, and the distribution will spread to reflect where the target could hypothetically be hiding. In practice, we generate this distribution by discretizing the set of possible translations into a $64 \\times 48$ grid (every 8 pixels in the image) and rotations into 16 bins, then shifting and rotating a target-only depth image to each point on the grid, offsetting by the depth of the bottom of the workspace at that point. By comparing the depths for the set of these shifted and rotated depth images to the original depth image, we can determine the modal segmentation mask for the target object as if it were at each location. Any location for which there is intersection-over-union (IoU) greater than 0.9 (or, in cases where the target object has a blank modal mask due to full occlusion, any location for which the modal mask is also blank) is considered to result in the same image. Then, the amodal target object masks from all locations resulting in the same image are summed and the resulting normalized single-channel image is the ground truth occupancy distribution. A visualization of this process is shown in Figure~\\ref{fig:datagen}. 
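A much-simplified, translation-only sketch of this labeling step on binary masks (the real pipeline also discretizes 16 rotations, compares rendered depths, and uses an IoU $>$ 0.9 consistency test; all names and the toy scene are ours):

```python
import numpy as np

def shift(mask, dy, dx):
    """Shift a binary mask by (dy, dx) pixels, zero-filling at the borders."""
    h, w = mask.shape
    out = np.zeros_like(mask)
    y0, y1 = max(0, -dy), min(h, h - dy)
    x0, x1 = max(0, -dx), min(w, w - dx)
    out[y0 + dy:y1 + dy, x0 + dx:x1 + dx] = mask[y0:y1, x0:x1]
    return out

def ground_truth_distribution(target_mask, occluders, observed_modal, stride=8):
    """Translation-only toy labeling: a shifted target is 'consistent' with the
    observation if its visible part (pixels not covered by occluders) equals
    the observed modal mask; the amodal masks of all consistent shifts are
    summed and normalized."""
    h, w = observed_modal.shape
    acc = np.zeros((h, w), dtype=np.float64)
    for dy in range(-h + 1, h, stride):
        for dx in range(-w + 1, w, stride):
            amodal = shift(target_mask, dy, dx)
            modal = amodal & ~occluders          # what would be visible here
            if amodal.any() and np.array_equal(modal, observed_modal):
                acc += amodal
    return acc / acc.max() if acc.any() else acc

# fully occluded target: every shift that hides the target under the occluder
# is consistent with an all-zero observed modal mask
target = np.zeros((48, 64), dtype=bool); target[0:8, 0:8] = True
occluders = np.zeros((48, 64), dtype=bool); occluders[8:40, 8:56] = True
rho = ground_truth_distribution(target, occluders, np.zeros((48, 64), dtype=bool))
```

When the target is fully visible (occluders empty, observed modal mask equal to the target), only the zero shift is consistent, so the distribution collapses to the target mask, matching the behavior described above.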
Dataset generation for 10,000 images took about 5 hours on an Ubuntu 16.04 machine with a 12-core 3.7 GHz i7-8700k processor.\n\n\\subsection{Occupancy Distribution Model} We split each dataset of 10,000 images image-wise and object-wise into training and test sets (8,000 training images and 2,000 test images, where objects are also split such that training objects only appear in training images and test objects only appear in test images). We train a fully-convolutional network with a ResNet-50 backbone~\\cite{long2015fully} using a pixelwise mean-squared-error loss for 40 epochs with a learning rate of $10^{-5}$, momentum of 0.99, and weight decay of 0.0005. The input images were preprocessed by subtracting the mean pixel values calculated over the dataset and transposing to BGR. Training took approximately 2.5 hours on an NVIDIA V100 GPU and a single forward pass took 6 ms on average as compared to 1.5 s for generating the ground-truth distribution.\n\n\\subsection{Simulation Experiments for Occupancy Distributions}\nWe benchmark the trained model on the full set of 2,000 test images as well as on 1,000 images with three other simulated target objects shown in Figure~\\ref{fig:perceptionbenchmark} (a lid, a domino, and a flute) to test generalization to object shapes, aspect ratios, and scales not seen during training. We chose these target objects due to their diversity in scale and object aspect ratio (e.g., the flute is longer, thinner, and deeper, while the lid is nearly square and flat). We report two metrics: balanced accuracy, the mean of pixelwise accuracies on positive and negative pixel labels, and intersection-over-union (IoU), the number of pixels positive in both the ground truth and predicted distributions divided by the number of pixels positive in either distribution. We consider true positives as the ground truth pixel having normalized value greater than 0.1 and the predicted value being within 0.2 of the ground truth value. 
Similarly, we consider true negatives as the ground truth pixel having normalized value less than 0.1 and the predicted value being within 0.2 of the ground truth value. Results are shown in Table~\\ref{tab:perceptionbenchmark}. \n\n\\textbf{Target Object Scale.} For objects of different scale than the training target object, we scale the input image by a factor equal to the difference in scale between the box target object and the other target object, feed it through the network, and then rescale the output distribution. We find that this scaling dramatically improves performance with minimal preprocessing of the input image; for example, when testing on the lid object, which is about twice as large as the training box object, we increase balanced accuracy and IoU from $63.0\\%$ and $0.186$ to $93.1\\%$ and $0.697$, respectively.\n\n\\begin{figure}[t!]\n \\centering\n \\vspace{1.5mm}\n \\includegraphics[width=0.75\\linewidth]{aspect_ratios.png}\n \\caption{The ground truth occupancy distributions for a target object of various aspect ratios for the same heap image.}\n \\label{fig:aspectratios}\n \\vspace{-8pt}\n\\end{figure}\n\n\\textbf{Target Aspect Ratios.} We found that, while our network performed well on objects with similar aspect ratios, longer and thinner objects with higher aspect ratios resulted in the model overestimating the support of the distribution. This effect can be seen in Figure~\\ref{fig:aspectratios}, which shows ground truth occupancy distributions for target objects of different aspect ratios in the same heap image. Table~\\ref{tab:perceptionbenchmark} suggests that the trained networks can accurately predict occupancy distributions for target objects that have similar aspect ratios to the training boxes, but do not perform as well when tasked with predicting a distribution for objects with dramatically different aspect ratios. 
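The balanced-accuracy and IoU metrics reported in Table~\ref{tab:perceptionbenchmark} follow the thresholds given above (ground-truth positives above 0.1, predictions correct when within 0.2 of the ground truth); thresholding the \emph{predicted} distribution at 0.1 for the IoU is our assumption. A sketch:

```python
import numpy as np

def balanced_accuracy(gt, pred, pos_thresh=0.1, tol=0.2):
    """Mean of the accuracies on positive (gt > 0.1) and negative pixels;
    a pixel is 'correct' when the prediction is within 0.2 of the ground truth."""
    pos, correct = gt > pos_thresh, np.abs(pred - gt) < tol
    acc_pos = correct[pos].mean() if pos.any() else 1.0
    acc_neg = correct[~pos].mean() if (~pos).any() else 1.0
    return 0.5 * (acc_pos + acc_neg)

def iou(gt, pred, pos_thresh=0.1):
    """Intersection over union of the thresholded positive-pixel sets
    (applying the same 0.1 threshold to the prediction is our assumption)."""
    g, p = gt > pos_thresh, pred > pos_thresh
    union = np.logical_or(g, p).sum()
    return np.logical_and(g, p).sum() / union if union else 1.0

# toy case: prediction overestimates the support by a factor of two
gt = np.zeros((48, 64)); gt[10:20, 10:20] = 1.0
pred = np.zeros((48, 64)); pred[10:20, 10:30] = 0.9
print(balanced_accuracy(gt, pred), iou(gt, pred))
```

Note how overestimating the support hurts IoU much more than balanced accuracy, mirroring the large IoU drops for mismatched aspect ratios in Table~\ref{tab:perceptionbenchmark}.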
In particular, the network trained with a 1:1 box target object tends to overestimate the support for target objects with high aspect ratios, leading to a drop in metrics. This effect is especially visible along corners of occluding objects, where more rotations of a low aspect ratio object are possible, while only one or two rotations of a high aspect ratio object are possible.\n\n\\begin{figure}\n \\centering\n \\vspace{1.5mm}\n \\includegraphics[width=\\linewidth]{benchmark_perception-v3.png}\n \\caption{Example predicted target object occupancy distributions for three target objects, a lid, domino, and flute, unseen during training (far left). Warmer colors indicate a higher likelihood of that pixel containing part of the target object's amodal mask. The network is able to accurately predict a distribution across many objects, a collapsed distribution when the object is partially visible, and multimodal distributions when there are gaps between objects (top three rows). The final row shows a failure mode where the network spuriously predicts an extra mode for the distribution when the target object is partially occluded.}\n \\label{fig:perceptionbenchmark}\n \\vspace{-8pt}\n\\end{figure}\n\n\\begin{figure*}[th!]\n\\centering\n\\vspace{1.5mm}\n\\includegraphics[width=\\textwidth]{policy.png}\n\\caption{The perception stage takes as input an RGBD image of the scene and outputs an occupancy distribution prediction using a network based on the target object bounding box dimensions and the created augmented depth image. The perception stage also produces a set of segmentation masks. 
The X-Ray mechanical search policy then finds the mask that has the most overlap with the occupancy distribution (colored yellow in the grasp scores image) and plans a grasp on that mask.}\n\\label{fig:policy}\n\\vspace{-8pt}\n\\end{figure*}\n\nFigure~\\ref{fig:perceptionbenchmark} shows occupancy distribution predictions with ground truth distributions for the three unseen objects using the network trained on the closest aspect ratio target object and scaled appropriately. Results suggest that the network is able to accurately predict diverse distributions when occluding objects not seen in training are present. Figure~\\ref{fig:perceptionbenchmark} suggests not only that the network can predict the correct distribution spanning multiple occluding objects in unimodal and multimodal cases when the target object is fully occluded, but also that it can correctly collapse the distribution to a small area around the visible part of the target object when it is only partially occluded.\n\n\\section{X-Ray: Mechanical Search Policy} \\label{sec:xray-policy}\n\nUsing the learned occupancy distribution function $f_\\rho$, we propose X-Ray, a mechanical search policy that optimizes for the objective and surrogate reward $\\Tilde{R}$ defined in Section~\\ref{sec:problem}. We create both simulated and physical object heaps and generate overhead camera images using an observation model based on the Photoneo PhoXi S depth camera. The heap RGBD image and target object are inputs to the perception system, which uses the network trained on the most similar bounding box to the target object to predict an occupancy distribution for the target. The policy takes the predicted distribution and a set of modal segmentation masks for the scene and computes a grasping action that would maximally reduce the support of the subsequent distribution. 
Specifically, the policy takes an element-wise product of each segmentation mask with the predicted occupancy distribution and sums over all entries in the resulting image, leading to a score for each of the segmentation masks. The policy then plans a grasp on the object mask with the highest score and executes it, as shown in Figure~\\ref{fig:policy}. \n\n\\subsection{Simulation Experiments with X-Ray}\nWe first evaluate the mechanical search policy with simulated heaps of novel objects. To further test the ability of the learned network to generalize to unseen occluding objects, we use a set of objects unseen in training and validation: 46 YCB objects~\\cite{calli2015benchmarking} and 13 ``packaged'' YCB objects (augmented in the same way as described in Section~\\ref{sec:perception}). Initial states were generated as explained in Section~\\ref{sec:perception}, first dropping the target object, followed by the other $N$ objects. We use $N=14$ so each heap initially contained 15 total objects, a heap size similar to or larger than those in previous bin-picking work~\\cite{mahler2017learning,morrison2018closing}. As the focus of this work was not instance segmentation or target detection, we use ground truth segmentation masks and target binary masks in simulation, although we note that any class-agnostic instance segmentation network~\\cite{kuo2019shapemask,danielczuk2019segmenting} or object detection network~\\cite{zhao2019object} can be substituted. For each grasp, either a parallel jaw or suction cup grasp, we use wrench space analysis to determine whether it would result in the object being lifted from the workspace under quasi-static conditions~\\cite{prattichizzo2008grasping,mahler2016dex, mahler2017dex}. If the grasp is collision-free and the object can be lifted, the object is lifted until the remaining objects come to rest using dynamic simulation implemented in pybullet, resulting in the next state. 
Otherwise the state remains unchanged.\n\n\\begin{table}\n\t\\centering\n\t\\begin{tabu} to \\linewidth {XX[2c]X[c]X[c]X[c]}\n\t\\toprule\n\t\t\\textbf{Policy} & \\textbf{Success Rate} & \\multicolumn{3}{c}{\\textbf{Number of Actions Quartiles}} \\\\\\midrule\n\t\tRandom & $42\\%$ & $4$ & $7$ & $9$ \\\\\n\t\tLargest & $67\\%$ & $4$ & $\\mathbf{5}$ & $7$ \\\\\n\t\tX-Ray & $\\bm{82\\%}$ & $\\mathbf{3}$ & $\\mathbf{5}$ & $\\mathbf{6}$ \\\\\\bottomrule\n\t\\end{tabu}\n\t\\caption{Evaluation metrics for each policy over 1,000 simulated rollouts. The lower quartiles, medians, and upper quartiles for number of actions are reported for successful rollouts. X-Ray extracts the target at a higher success rate with significantly fewer actions.}\n\t\\label{tab:simresults}\n\t\\vspace{-6pt}\n\\end{table}\n\nIn addition to the policy proposed here, we evaluate two previously proposed baseline policies, \\textbf{Random} and \\textbf{Largest}~\\cite{danielczuk2019mechanical}. The \\textbf{Random} policy first attempts to grasp the target object and, if no grasps are available on the target object, grasps an object chosen uniformly at random from the bin. The \\textbf{Largest} policy first attempts to grasp the target object and, if no grasps are available on the target object, iteratively attempts to grasp the objects in the bin in decreasing order of their modal segmentation mask size.\n\nEach policy was rolled out on 1,000 total heaps until either the target object was grasped (successful rollout) or the horizon $H=10$ was reached (failed rollout). We benchmark each policy using two metrics: success rate of the policy and mean number of actions taken to extract the target object in successful rollouts. 
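For reference, the mask-scoring rule of X-Ray (Section~\ref{sec:xray-policy}) is a one-line reduction; the sketch below (toy arrays ours) shows why a solitary object far from the predicted distribution receives a score of zero and is ignored:

```python
import numpy as np

def xray_scores(rho, masks):
    """Score each modal segmentation mask by the total occupancy-distribution
    mass it covers (element-wise product, then sum over all pixels)."""
    return [float((rho * m).sum()) for m in masks]

def select_grasp_mask(rho, masks):
    """Pick the mask whose removal is expected to maximally shrink the support
    of the next distribution: the highest-scoring mask."""
    scores = xray_scores(rho, masks)
    return int(np.argmax(scores)), scores

rho = np.zeros((48, 64)); rho[10:30, 10:40] = 1.0
mask_a = np.zeros((48, 64)); mask_a[10:30, 10:25] = 1.0  # overlaps the distribution
mask_b = np.zeros((48, 64)); mask_b[35:45, 45:60] = 1.0  # solitary object, ignored
best, scores = select_grasp_mask(rho, [mask_a, mask_b])
print(best, scores)  # mask_a wins
```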
Table~\\ref{tab:simresults} and Figure~\\ref{fig:simresults} show these metrics and the distribution of successful rollouts over the number of actions taken to extract the target object, respectively.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.95\\linewidth]{sim_results_max_10.png}\n\\caption{Histogram of the number of actions taken to extract the target object over the 1,000 simulated rollouts for the three policies tested. The median number of actions for each policy is shown by the corresponding vertical line.}\n\\label{fig:simresults}\n\\vspace{-8pt}\n\\end{figure}\n\nWhile the Random and Largest policies occasionally are able to quickly extract the target object, X-Ray consistently extracts the target in fewer actions and succeeds in 15\\% more heaps than the best-performing baseline. Largest is a reasonable heuristic for these heaps, as shown in~\\cite{danielczuk2019mechanical}, as large objects typically have a greater chance of occluding the target, but X-Ray combines this intuition with superior performance when the object is partially occluded. X-Ray outperforms the Largest policy on heaps where the target object is partially occluded by a thin or small object (such as a fork or dice) at some point during the rollout. In these scenarios, a robust grasp is often not available on the target object, and while X-Ray can correctly identify that the occluding object should be removed, the Largest policy will often grasp a larger object further from the target object. In scenarios where there are many large objects, but some are lying to the side, X-Ray will typically grasp objects that are in the more cluttered area of the bin, since they are more likely to reveal the target object. 
This behavior is a function of weighting the object area by the predicted distribution, which encourages the policy to ignore solitary objects.\n\n\\subsection{Physical Experiments with X-Ray}\nWe also evaluate X-Ray with heaps of novel household objects on a physical ABB YuMi robot with a suction cup and parallel jaw gripper, using two target objects. Some examples of the objects used can be seen in Figures~\\ref{fig:splash} and~\\ref{fig:policy}. Initial states were generated by placing the target object on the workspace, filling a bin with the $N$ other objects, and then dumping the bin on top of the target object. In these heaps, $N=24$ was used so that each heap initially contained 25 total objects. We chose 25 total objects because it has been commonly used in cluttered bin-picking environments~\\cite{mahler2019learning} and objects tend to disperse further on the physical setup. For segmentation masks, we used the class-agnostic instance segmentation network from~\\cite{danielczuk2019segmenting}, and for grasp quality analysis, we used FC-GQCNN~\\cite{satish2019policy}. To generate binary target masks, we use HSV color segmentation from OpenCV and use red target objects. While we make this assumption for simplicity, we note that we could substitute this process with a target object segmentation method that uses visual features, semantics and shape, such as the one described in~\\cite{danielczuk2019segmenting}.\n\nWe perform 20 rollouts for each of the three policies. Each policy was rolled out until either the target object was grasped (successful rollout) or the horizon $H=10$ was reached (failed rollout). 
We report the same metrics as in the simulated experiments in Table~\\ref{tab:physicalresults}.\n\nWe find that X-Ray outperforms both baselines, extracting the target object in a median 5 actions over the 20 rollouts as compared to 6 actions for the Largest and Random policies, while succeeding in extracting the target object within 10 actions in each case. These results suggest that X-Ray not only can extract the target more efficiently than the baseline policies, but also has lower variance. The Largest policy performed comparatively worse with more objects in the heap than in simulation, as it relies heavily on accurate segmentation masks: when objects are densely clustered together, segmentation masks are often merged, leading to grasps on smaller objects that do not uncover the target. In this case, or in the case of spurious segmentation masks that do not cover objects, X-Ray reduces the reliance on accurate segmentation masks, as the occupancy distribution and segmentation are combined to create a score for each mask. This property of X-Ray causes it to compare favorably to a policy that directly scores segmentation masks based on their relationship to the target object geometry. 
X-Ray also reduces reliance on the target object binary mask being accurate; if the detector cannot see enough of the target object to generate a detection even when it is partially visible, X-Ray will continue to try and uncover it according to the fully occluded occupancy distribution until more of the target is revealed.\n\n\\begin{table}\n\\vspace{2mm}\n\t\\centering\n\t\\begin{tabu} to \\linewidth {XX[2c]X[c]X[c]X[c]} \\toprule\n\t\t\\textbf{Policy} &\\textbf{Success Rate} & \\multicolumn{3}{c}{\\textbf{Number of Actions Quartiles}} \\\\\\midrule\n\t\tRandom & $85\\%$ & $\\mathbf{4}$ & $6$ & $7$ \\\\\n\t\tLargest & $85\\%$ & $4$ & $6$ & $7$ \\\\\n\t\tX-Ray & $\\bm{100\\%}$ & $\\mathbf{4}$ & $\\mathbf{5}$ & $\\mathbf{5.25}$ \\\\\\bottomrule\n\t\\end{tabu}\n\t\\caption{Evaluation metrics for each policy over 20 physical rollouts. The lower quartiles, medians, and upper quartiles for the number of actions are reported across successful rollouts. X-Ray extracts the target with significantly fewer actions, always extracting it within 10 actions.}\n\t\\label{tab:physicalresults}\n\t\\vspace{-6pt}\n\\end{table}\n\n\n\\section{Discussion and Future Work} \\label{sec:discussion}\nWe present X-Ray, a mechanical search algorithm that minimizes support of a learned occupancy distribution. We showed that a model trained only on a synthetic dataset of augmented depth images labeled with ground truth distributions learns to accurately predict occupancy distributions for target objects unseen in training. We benchmark X-Ray in both simulated and physical experiments, showing that it can efficiently extract the target object from challenging heaps containing 15-25 objects that fully occlude the target object in 82\\% - 100\\% of heaps using a median of just 5 actions.\n\nIn future work, we will address some of the failure modes of the system, especially for objects that are significantly non-planar. 
Currently, the assumption that the object is flat can result in incorrect occupancy distributions for taller objects. Additionally, we will look to add memory to the policy so that if objects shift into previously free space, the distribution will not cover that area, and explore reinforcement learning policies based on a reward of target object visibility.\n\n\\section*{Acknowledgments}\n\\footnotesize\nThis research was performed at the AUTOLAB at UC Berkeley in affiliation with the Berkeley AI Research (BAIR) Lab. The authors were supported in part by donations from Google. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. 1752814. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Sponsors. We thank our colleagues and collaborators who provided helpful feedback, code, and suggestions, especially Julian Ibarz, Brijen Thananjeyan, Andrew Li, Andrew Lee, Andrey Kurenkov, Roberto Mart\\'in Mart\\'in, Animesh Garg, Matt Matl, and Ashwin Balakrishna.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:Intro}\nAtomic nuclei in condensed phases behave, in many cases, as quantum objects. For instance, Nuclear Quantum Effects are responsible for the {\\em heat capacity problem}, i.e., the deviation from the classical Dulong and Petit law for the heat capacity of solids at low temperatures. The solution of this issue eventually led to the development of the harmonic theory of solids, an accurate quantum theory that lets us to compute their thermal properties at temperatures lower than the Debye temperature, and can be corrected to account for anharmonic effects~\\cite{BornHuang,AshcroftMermin}. 
By reducing the description of an insulating solid to a set of independent harmonic oscillators, the {\\em phonons}, weakly interacting through anharmonic couplings, this theory also provides a framework for the computation of transport properties, in particular of heat conductivity. In contrast to the very high accuracy that can be achieved for thermal properties, however, the computation of transport properties is considerably more delicate and often requires ad hoc approximations for the lifetime of phonons, which is limited by phonon-phonon scattering processes and the presence of defects. \n\nThe general framework of the harmonic theory of solids, originally developed for crystals, can be adapted to {\\em disordered solids}. This comes at the expense of employing a numerical approach to characterize the harmonic eigenmodes, which replace phonons and are no longer determined by symmetries. Again, this procedure can be efficiently employed to determine thermal properties, while its application to transport is much more limited. Very often these properties are indeed calculated via classical statistical mechanics approaches (based on classical Molecular Dynamics simulations), whose results are then empirically corrected to account for quantum effects (see, among others,~\\cite{mizuno2016relation}). We also note that, in systems (ordered or disordered) involving light nuclei (e.g., hydrogen in solid ice), the large wavelength associated with light atoms makes the harmonic approximation itself inappropriate. Therefore, an exact calculation should in general be considered even for thermal properties, or for the determination of phase boundaries~\\cite{Bronstein2016}.\n\nThe harmonic theory of crystalline solids undoubtedly constitutes a remarkable achievement, as many results can be obtained based on an almost fully analytical approach. 
However, the above limitations in computing transport properties or in applying the theory to disordered structures point to the necessity of numerical approaches. It would therefore be highly desirable to develop a numerical methodology that could fully take into account the quantum nature of atomic nuclei, allowing us to determine without approximations both thermal and transport properties of any insulating solid. \n\nWhen interested in thermal properties, an exact numerical method that encompasses all quantum aspects and is valid at any temperature, independently of the strength of anharmonic effects, involves the path integral representation of the partition function~\\cite{Barker1979,Chandler1981,Herman1982,Pollock1984}. In the absence of exchange effects (a reasonable hypothesis in most common solids), the determination of thermodynamic properties at the inverse temperature $\\beta =(k_B T)^{-1}$ involves the sampling of an equivalent system where each quantum particle is replaced by a discretized \"path\" consisting of $M$ \"imaginary time slices\". The method becomes exact in the limit of large $M$, and the sampling of $N$ quantum degrees of freedom at temperature $T$ turns out to be equivalent to that of $N\\times M$ classical degrees of freedom at temperature $M\\times T$. This sampling can be achieved efficiently using Monte Carlo or Molecular Dynamics methods, leading to the PIMC and PIMD methods, respectively.\n\nComputation of transport properties is more problematic. 
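Before turning to transport, the thermodynamic sampling just described can be illustrated concretely. The following sketch (purely illustrative, not the code used in this work) assumes natural units $\hbar=m=1$ and samples the closed path of a single particle in a harmonic well with a Metropolis scheme based on the primitive action, comparing the resulting $\langle x^2\rangle$ with the exact quantum value; all parameter choices are assumptions for the example.

```python
import numpy as np

# Minimal PIMC sketch (hbar = m = 1): one particle in a harmonic well,
# closed path of M imaginary-time slices, primitive-approximation action.
rng = np.random.default_rng(0)
beta, M, omega = 2.0, 32, 1.0
dtau = beta / M

def U(x):                               # external potential; swap in any U(x)
    return 0.5 * omega**2 * x**2

def local_action(x, i, xi):
    """Primitive-action terms that involve slice i (cyclic path)."""
    xm, xp = x[(i - 1) % M], x[(i + 1) % M]
    return ((xi - xm)**2 + (xp - xi)**2) / (2.0 * dtau) + dtau * U(xi)

x = np.zeros(M)
samples = []
for sweep in range(15000):
    # single-slice Metropolis moves
    for i in range(M):
        xi_new = x[i] + 0.5 * rng.uniform(-1.0, 1.0)
        dS = local_action(x, i, xi_new) - local_action(x, i, x[i])
        if dS <= 0 or rng.uniform() < np.exp(-dS):
            x[i] = xi_new
    # rigid displacement of the whole path (decorrelates the centroid)
    d = 0.5 * rng.uniform(-1.0, 1.0)
    dS = dtau * (U(x + d).sum() - U(x).sum())
    if dS <= 0 or rng.uniform() < np.exp(-dS):
        x = x + d
    if sweep >= 3000:                   # discard burn-in
        samples.append(np.mean(x**2))

x2_pimc = np.mean(samples)
x2_exact = 0.5 / (omega * np.tanh(0.5 * beta * omega))  # exact quantum <x^2>
print(x2_pimc, x2_exact)                # agree within a few percent
```

Increasing $M$ reduces the Trotter (discretization) bias, while longer runs reduce the statistical error; the two effects are exactly those analyzed below for the harmonic oscillator.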
The standard Green and Kubo statistical mechanics approach to transport coefficients~\\cite{Green1952,Kubo1957,Luttinger1964} obtains the heat conductivity tensor $\\kappa$ in a system of volume $V$ at temperature $T$ from a time correlation function of the energy current operator $\\bf{J}$ as,\n\\begin{equation}\n\\kappa_{\\alpha \\beta } = \\frac{1}{Vk_BT^2}\\int_0^\\infty dt \\langle J_\\alpha(t)J_\\beta(0)\\rangle.\n\\label{eq:k_ab}\n\\end{equation}\nUnfortunately, the path integral method directly provides static (time-independent) quantities only. A possible solution to this problem has been identified long ago~\\cite{Thirumalai1983}, by noting that the PIMC approach can instead supply the analytical continuation of the correlation functions on the {\\em imaginary time} axis, simply by computing the correlation between two imaginary time slices along the path. The power spectrum, $S_{AB}(\\omega)$, of a real time correlator, $C_{AB}$, between two operators $A$ and $B$ can then be obtained in an apparently straightforward manner by using the identity,\n\\begin{equation}\nC_{AB}(i\\tau) = \\int_0^\\infty d\\omega \\left[S_{AB}(\\omega) e^{-\\hbar\\omega\\tau} + S_{BA}(\\omega)e^{-\\hbar\\omega(\\beta-\\tau)}\\right].\n\\label{eq:inversion}\n\\end{equation}\n\nWhile Eq.~(\\ref{eq:inversion}) in principle allows one to obtain $S$ based on the data for $C(i\\tau)$, with $\\tau$ in $[0,\\beta]$, it is well known that the inversion problem is ill-posed, in the sense that determining $S$ with high precision is an extremely difficult task, even if $C$ is known with excellent accuracy. For this reason, the approach pioneered by a few groups in the eighties within the framework of path integral calculations did not spread widely. 
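The severity of this ill-posedness is easy to exhibit numerically. Discretizing the kernel of Eq.~(\ref{eq:inversion}) on illustrative $\tau$ and $\omega$ grids (a sketch under assumed grid choices, with $\hbar=1$) shows that its singular values decay almost exponentially, so that tiny noise on $C$ is enormously amplified when solving for $S$:

```python
import numpy as np

# Discretized kernel of the inversion problem: C = K S on illustrative
# tau and omega grids (hbar = 1).  Its near-exponential singular-value
# decay is the ill-posedness discussed in the text.
beta, n_tau, n_omega = 5.0, 20, 20
tau = np.linspace(0.0, beta, n_tau, endpoint=False)
omega = np.linspace(0.25, 5.0, n_omega)

K = np.exp(-np.outer(tau, omega)) + np.exp(-np.outer(beta - tau, omega))

s = np.linalg.svd(K, compute_uv=False)
cond = s[0] / max(s[-1], np.finfo(float).tiny)
print(f"singular values span {s[0]:.1e} .. {s[-1]:.1e}, condition ~ {cond:.1e}")
```

With these (assumed) grids the condition number already exceeds the inverse of double precision, so a naive least-squares inversion would fit mostly noise; this motivates the regularized, statistical approach of the next section.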
Many recent results obtained in various fields~\\cite{PhysRevB.57.10287,Bertaina2017,LEVY2017149,PhysRevB.95.014102,PhysRevB.98.134509}, however, indicate that present computing capabilities should by now allow us to carry out this program satisfactorily, by addressing the two major (and related) difficulties: {\\em i)} to obtain with high accuracy the imaginary time correlation, in particular for current operators which suffer from the well-known issue of diverging variance~\\cite{Herman1982} in the limit of large $M$; and {\\em ii)} to solve the ill-posed problem of obtaining the frequency spectrum from the imaginary time correlation functions. \n\nHere we address these two issues based on numerical and analytical calculations of very simple examples, namely a single harmonic oscillator or an ensemble of oscillators with a continuum distribution of frequencies. The interest of this choice is twofold. First, due to its simplicity, we can obtain exact analytical expressions for most quantities of interest, including all time dependent correlations and exact expressions for the discretized path integrals. The availability of these expressions enables a precise control of the different sources of error, which can be either of statistical origin or associated with the discretization itself. Second, the harmonic oscillator is at the heart of the harmonic theory of solids, the natural starting point for any calculation of transport in insulating solids. Completely controlling this case is, therefore, crucial for any serious step forward in this direction. \n\nThe manuscript is organized as follows: in Sect.~\\ref{sec:PIMC} we introduce the general formalism of the path integral and imaginary time correlations, while in Sect.~\\ref{sec:inversion} we present the procedure that we have developed to cope with the inversion problem. 
In Sect.~\\ref{sec:estimators} we next describe a new approach that circumvents the issue of the diverging variance for current-current correlators. Finally, in Sects.~\\ref{sec:case1} and~\\ref{sec:case2} we illustrate the application of these methods to a single harmonic oscillator, followed by the case of a collection of oscillators with a continuum distribution of frequencies, mimicking the density of states of a crystalline solid. In Sect.~\\ref{sec:conclusion} we draw our conclusions.\n\\section{\\label{sec:PIMC}The path integral formalism for time correlations}\n\nThe path integral Monte Carlo method provides a numerically exact route to the evaluation of thermodynamic properties of quantum systems at finite temperature, $T$. If we consider, for simplicity, a system described by a single degree of freedom $X$ of mass $m$, with Hamiltonian $ \\hat{H} = \\hat{P}^2\/2m + U(\\hat{X})$, the average value of an observable $\\hat{A}$ is \n\\begin{equation}\n\\langle\\hat{A} \\rangle =\\frac{1}{Z(\\beta)} \\text{Tr} [\\hat{A}\\, e^{-\\beta \\hat{H} } ],\n\\end{equation}\nwhere $Z(\\beta)=\\text{Tr}[e^{-\\beta\\hat{H}}]$. In the PIMC approach, the trace is evaluated by expressing the density operator as $e^{-\\beta\\hat{H}}= (e^{-\\beta\\hat{H}\/M})^M$. In the position representation $\\vert X\\rangle$, and using the notation $\\rho(X,Y,\\tau) = \\langle X \\vert e^{-\\tau\\hat{H}} \\vert Y \\rangle $, we can write\n\\begin{multline}\n\\langle \\hat{A} \\rangle = \\frac{1}{Z(\\beta)}\n\\int dX_0\\ldots dX_M \\\\\n\\langle X_0 \\vert \\hat{A} \\vert X_1 \\rangle \\rho(X_1,X_2,\\beta\/M)\\ldots\\rho(X_M,X_0,\\beta\/M).\n\\label{eq:average1}\n\\end{multline}\nIf an expression for $\\rho(X,Y,\\tau)$ is known, the observable can be evaluated by sampling the \"path\" $\\{X_0\\ldots X_M\\}$ with a statistical weight proportional to $\\rho(X_1,X_2,\\beta\/M)\\ldots\\rho(X_M,X_0,\\beta\/M)$. 
As the matrix element $\\langle X_0 \\vert \\hat{A} \\vert X_1 \\rangle$ of a {\\em local} operator $\\hat{A}$ involves in general a term $\\delta(X_0-X_1)$, the sampling is actually performed over a closed path of $M$ points. In the following we will repeatedly consider the \"primitive\" approximation, based on the factorisation of the kinetic and potential parts of the density operator and valid in the limit of small $\\tau$~\\cite{Chandler1981},\n\\begin{equation}\n\\rho_p(X,Y,\\tau) \\simeq \\sqrt{\\frac{m}{2\\pi\\hbar^2\\tau}}\\exp\\left\\{-m\\frac{ (X-Y)^2}{2\\hbar^2\\tau} -\\frac{\\tau}{2}\\left[U(X)+U(Y)\\right]\\right\\}.\n\\label{eq:primitive-approx}\n\\end{equation}\nThis simplified expression can be replaced by a more accurate one if needed, and if the exact value of $\\rho$ is known, as is the case for the harmonic oscillator, the latter can be used to sample the path more efficiently~\\cite{feynman1998statistical}.\n\nHere, we are interested in equilibrium time correlation functions that determine the linear response properties of the system. A time correlation involving the observables $A$ at time $t$ and $B$ at time $t=0$ is the equilibrium average of the product of the operators $\\hat{A}(t)= e^{itH\/\\hbar}\\hat{A} e^{-itH\/\\hbar}$, and $\\hat{B}(0)=\\hat{B}$, which we can write as,\n\\begin{equation}\nC_{AB}(t\/\\hbar) = \\langle\\hat{A}(t)\\hat{B}(0)\\rangle = \\frac{1}{Z(\\beta)}\\text{Tr}[\\hat{A}(t)\\hat{B}(0)e^{-\\beta\\hat{H}}].\n\\end{equation}\nObviously, the splitting method could be applied to the operators $\\exp(it\\hat{H}\/\\hbar)$. Unfortunately, the statistical weight associated with the resulting path is imaginary, and therefore it is not suitable for usual sampling methods. 
If, however, the real time $t$ is replaced by an imaginary time $t=i\\tau \\hbar$, we can write,\n\\begin{multline}\nC_{AB}(i\\tau) = \\frac{1}{Z(\\beta)}\\text{Tr} [\\hat{A}e^{-\\tau\\hat{H}}\\hat{B}e^{-(\\beta-\\tau)\\hat{H}}] \\\\\n=\\frac{1}{Z(\\beta)} \\int dX dX'dY dY' \\\\\n\\langle X \\vert \\hat{A} \\vert X' \\rangle\n\\rho(X',Y,\\tau) \\langle Y \\vert \\hat{B} \\vert Y' \\rangle\n\\rho(Y',X,\\beta -\\tau),\n\\end{multline}\nwhich is defined for $0 \\le \\tau \\le \\beta$, and satisfies $ C_{AB}(i\\tau)= C_{BA}(i(\\beta-\\tau))$. \n\nPartitioning again the interval $[0,\\beta]$ into $M$ slices of width $\\Delta\\tau= \\beta\/M$, the correlation function can be sampled for discrete values of $\\tau$ of the form $\\tau_k=k\\Delta\\tau$, with $k=0\\ldots M-1$, at a computational cost that is similar to that needed to calculate the thermodynamic observables of Eq.~(\\ref{eq:average1}), obtaining\n\\begin{multline}\n C_{AB}(i\\tau_k)= \n \\frac{1}{Z(\\beta)}\n \\int dXdY dX_1... dX_M \\langle X \\vert \\hat{A} \\vert X_1 \\rangle \\rho(X_1,X_2,\\Delta \\tau)...\\\\ \\rho(X_{k-1},X_k,\\Delta\\tau)\n \\langle X_k \\vert \\hat{B} \\vert Y \\rangle\n \\rho(Y,X_{k+1},\\Delta\\tau)...\\rho(X_M,X,\\Delta\\tau).\n\\end{multline}\nAs in Eq.~(\\ref{eq:average1}), here the sampling must be performed over the $\\{X_1\\ldots X_M\\}$ coordinates of the path, the $X$ and $Y$ variables being eliminated by the $\\delta$-functions contained in the matrix elements of $\\hat{A}$ and $\\hat{B}$.\n\n\n\\section{A statistical approach to the inversion problem}\n\\label{sec:inversion}\n\n\nOnce the imaginary time correlations, denoted by $C(\\tau)$ from now on, have been obtained for a set of $M$ discrete values $\\{\\tau_0...\\tau_{M-1}\\}$ in the interval $[0,\\beta]$, the real time correlation functions relevant to describe the system's physical response can, in principle, be obtained by inverting Eq.~(\\ref{eq:inversion}). 
This is common to many studies of quantum systems, and generally described as the \"analytical continuation\" procedure. It is, however, ill-posed, in the sense that if the spectrum $S(\\omega)$~\\footnote{In this paragraph we drop the $AB$ subscripts in Eq.~(\\ref{eq:inversion})} is described by a set of parameters (such as the values of $S$ on a discrete $\\omega$-grid, or the coefficients of an expansion in terms of some basis set), and the $C(\\tau_k)$ are affected by statistical errors, a very large number of solutions for $S$ compatible with the original data will be found.\n\nThis topic is the subject of a vast literature, and it is fair to conclude that no single method emerges as a preferred solution. Generally speaking, most current solutions employ some particular version of a \"maximum entropy\" approach~\\cite{JARRELL1996,Boninsegni1996}. The spectral function, $S_{ME}$, is therefore obtained as an average over the possible $S(\\omega)$'s (defined by some finite set of parameters), weighted by the probability that they are the exact model given the data set $(\\textbf{C},\\sigma^2)$,\n\\begin{equation}\nS(\\omega)_{ME} = \\int \\mathcal{D}S \\ p(S|\\textbf{C},\\sigma^2)S(\\omega).\n\\label{eq:sme1}\n\\end{equation}\nHere $\\mathcal{D}S$ indicates the phase space element associated with the parametrization of $S(\\omega)$, $\\textbf{C} = (C(\\tau_1), C(\\tau_2), \\dots, C(\\tau_M))^{\\dagger} \\equiv (C_1, C_2, \\dots C_M)^{\\dagger}$ is a column vector that contains the data points, and $\\sigma^2$ describes the statistical uncertainty of these data in the form of a covariance matrix. 
By using the Bayes formula,\n\\begin{equation}\np(S|\\textbf{C},\\sigma^2) = \\frac{p(\\textbf{C},\\sigma^2|S)}{p(\\textbf{C}, \\sigma^2)}p(S),\n\\end{equation}\nand making the assumption of Gaussian statistics for the likelihood, we can write,\n\\begin{equation}\np(\\textbf{C},\\sigma^2|S) \\propto e^{-\\frac{1}{2}(\\textbf{C} - \\textbf{C}[S])^{\\dagger}(\\sigma^2)^{-1} (\\textbf{C} - \\textbf{C}[S])} = e^{-\\frac{1}{2}\\chi^2[S]}\\label{likelihood},\n\\end{equation}\nwhich we can interpret as the definition of $\\chi^2[S]$. Here $\\textbf{C}[S]$ is the expression of the vector $\\textbf{C}$, obtained by inserting a known spectrum $S$ into the r.h.s. of Eq.~(\\ref{eq:inversion}) and computing the resulting $M$ correlation values. In the case of a spectrum defined by the amplitudes $A(\\omega_p)$ for a set of $N_\\omega$ discrete frequencies on a regular grid, using Eq.~(\\ref{eq:inversion}) we obtain, \n\\begin{equation}\n\\tilde{C}[S](\\tau_\\alpha) = \\sum_{p=1}^{N_\\omega} A(\\omega_p) \\left( e^{-\\omega_p \\tau_\\alpha} + e^{-(\\beta - \\tau_\\alpha)\\omega_p}\\right) \\label{eq::correlation fit}.\n\\end{equation}\n\nIn traditional maximum entropy methods, Eq.~(\\ref{eq:sme1}) is solved at the saddle point level, by minimizing the functional $\\mathcal{F}=\\frac{1}{2}\\chi^2[S] - H[S]$. Here, $H[S]$ is an entropic functional, which assigns a penalty to irregular solutions that would lead to an overfitting of the statistical errors contained in the data. For a positive spectrum, $H[S]$ is usually chosen as the associated Shannon entropy, with a coefficient controlling the strength of the regularisation. \n\nIn this work we employ the so-called \"stochastic analytical inference\" or \"stochastic maximum entropy\"~\\cite{Fuchs2010} method, where Eq.~(\\ref{eq:sme1}) is sampled by Monte-Carlo methods over $\\mathcal{D}S$, which can be constrained to positive values of $S$ through the prior probability $p(S)$. 
The term $\\frac{1}{2}\\chi^2[S]$ can hence be considered as an effective energy functional, and the method can be refined by introducing an additional parameter in the form of an effective inverse temperature $\\Theta$ as,\n\\begin{equation}\nS(\\omega,\\Theta)_{ME}= {Z(\\Theta)^{-1}}\\int \\mathcal{D}S \\ S(\\omega) e^{-\\frac{1}{2}\\Theta \\chi^2[S]}.\n\\label{eq:sme2}\n\\end{equation}\nHere the normalisation $Z(\\Theta)= \\exp{\\{-\\Theta F(\\Theta)\\}}$ is an effective partition function. Note that the traditional maximum entropy approach corresponds to a mean field version of Eq.~(\\ref{eq:sme2}), where one uses as an estimate of the spectrum the minimum of the mean field free energy $F_{MF}(\\Theta)= \\frac{1}{2}\\chi^2[S]- \\Theta^{-1} H[S] $. In view of the following analysis, we make the simplifying assumption of uncorrelated data points, so that the covariance matrix is diagonal. As a result, we can write the energy functional $\\chi^2[S]$ in the form, \n\\begin{equation}\n\\chi^2 = \\sum_{\\alpha=0}^{M-1}\\frac{[C(\\tau_\\alpha) - \\tilde{C}[S](\\tau_\\alpha)]^2}{\\sigma^2(\\tau_\\alpha)},\n\\label{eq:chi2}\n\\end{equation}\nwith $\\sigma^2(\\tau_\\alpha)$ the statistical uncertainty on the data point $\\alpha$. Several arguments~\\cite{Fuchs2010} have been invoked for fixing $\\Theta=1$. In contrast, in~\\cite{Fuchs2010} it has been proposed to pick for $\\Theta$ the value $\\Theta^*$ that maximises $Z(\\Theta)$, which is argued to also maximise the posterior probability $P(\\Theta | \\textbf{C})$. This possibility, which corresponds to a balance between energy and entropy dominated solutions, requires however a full free energy calculation. \n\nAt variance with these proposals, we optimise the value of $\\Theta$ by employing the following procedure. An initial data set, $C(\\tau_\\alpha)$, is generated with known statistical uncertainty $\\sigma^2(\\tau_\\alpha)$ by using, for instance, a path integral simulation of the considered model. 
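The sampling of Eq.~(\ref{eq:sme2}) can be sketched in a few lines. The fragment below is a purely illustrative toy version (not the implementation used in this work): synthetic data are generated from an assumed single-delta spectrum at $\omega=1$, the positive amplitudes $A(\omega_p)$ are moved one at a time by Metropolis steps weighted by $e^{-\Theta\chi^2/2}$, and a validation error in the spirit of the procedure described below is evaluated at the end; all grids, the noise level, $\Theta$ and the move size are assumptions.

```python
import numpy as np

# Toy stochastic maximum entropy: Metropolis sampling of a positive,
# discretized spectrum A(omega_p) with statistical weight exp(-Theta chi^2 / 2).
rng = np.random.default_rng(1)
beta, M, n_omega, theta, sigma = 5.0, 32, 16, 1.0, 1e-3
tau = np.arange(M) * beta / M
omega = np.linspace(0.25, 4.0, n_omega)
K = np.exp(-np.outer(tau, omega)) + np.exp(-np.outer(beta - tau, omega))

C_exact = np.exp(-tau) + np.exp(-(beta - tau))   # delta spectrum at omega = 1
C_data = C_exact + rng.normal(0.0, sigma, M)     # noisy "measured" data

def chi2(A):
    r = (C_data - K @ A) / sigma
    return r @ r

A = np.full(n_omega, 0.1)                        # initial positive spectrum
c2 = chi2_init = chi2(A)
A_mean, kept = np.zeros(n_omega), 0
for it in range(20000):
    p = rng.integers(n_omega)                    # move one amplitude, keep A >= 0
    trial = A.copy()
    trial[p] = max(0.0, trial[p] + rng.normal(0.0, 0.02))
    dc = chi2(trial) - c2
    if dc <= 0 or rng.uniform() < np.exp(-0.5 * theta * dc):
        A, c2 = trial, c2 + dc
    if it >= 10000:                              # accumulate the average spectrum
        A_mean += A
        kept += 1
A_mean /= kept

# Validation error: fresh noisy data sets around the (here known) exact
# correlation, compared with the correlation of the average spectrum.
C_vals = C_exact + rng.normal(0.0, sigma, (100, M))
chi2_val = np.mean(np.sum((K @ A_mean - C_vals) ** 2, axis=1))
print(chi2(A_mean), chi2_init, chi2_val)
```

Repeating the last step for several values of $\Theta$ (and several $N_\omega$) and comparing the resulting validation errors is the essence of the selection procedure described next.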
In cases where $C(\\tau)$ is known analytically, synthetic data could also be generated starting from the exact solution, and introducing a controlled uncertainty. Starting from these data, the spectrum $S_{ME}(\\Theta)$, described by $P$ degrees of freedom $A(\\omega_p)$, is obtained through a Monte-Carlo sampling of Eq.~(\\ref{eq:sme2}) for a given value of $\\Theta$. Note that a well converged Monte-Carlo average will lead to a spectrum $S_{ME}(\\Theta)$ with an associated $\\chi^2\\sim \\mathcal{O} (M\\epsilon)$, where $\\epsilon$ is a residual error, while the average $\\langle \\chi^2 \\rangle \\sim\\mathcal{O}(M\\epsilon+P\/\\Theta)$. We denote by $\\bar{C}_\\Theta(\\tau_\\alpha)$ the correlation function associated with this average spectrum.\n\nIn order to determine the optimal choice of $\\Theta$, thereby discriminating among different models for $S(\\omega)$ (e.g., different finite discretizations on an $\\omega$-grid), we combine the maximum entropy approach with a validation procedure borrowed from statistical learning theory~\\cite{MEHTA20191}. We therefore generate $P'$ new sets of validation data, $C_{\\mathrm{val}, i}(\\tau_\\alpha)$ ($i=1,\\ldots, P'$), by using the same technique (though not necessarily with the same accuracy) that we use to produce the original data set, and determine the associated,\n\\begin{equation}\n\\chi^2_{\\mathrm{val}} = \\frac{1}{P'}\\sum_{i=1}^{P'} \\sum_{\\alpha=0}^{M-1}[\\bar{C}_\\Theta(\\tau_\\alpha) - C_{\\mathrm{val},i}(\\tau_\\alpha)]^2 .\n\\label{eq::chi2 validation}\n\\end{equation}\nWe can show that this can be interpreted as a measure of the difference between the estimate $\\bar{C}_\\Theta(\\tau_\\alpha)$ and the exact correlation function, denoted by ${C}_{\\mathrm {exact}}(\\tau_\\alpha)$. 
Indeed, by writing \n\\begin{equation}\n \\chi^2_{\\mathrm{val}}= \\frac{1}{P'} \\sum_{i=1}^{P'} \\sum_{\\alpha=0}^{M-1} [\\bar{C}_\\Theta(\\tau_\\alpha) - {C}_{\\mathrm{exact}}(\\tau_\\alpha) + {C}_{\\mathrm{exact}}(\\tau_\\alpha) - C_{\\mathrm{val},i}(\\tau_\\alpha)]^2 ,\n\\end{equation}\nin the limit of large $P'$ and assuming that the average over the validation data returns the exact correlation function, we obtain\n\\begin{equation}\n \\chi^2_{\\mathrm{val}}= \\sum_{\\alpha=0}^{M-1} [\\bar{C}_\\Theta(\\tau_\\alpha) - {C}_{\\mathrm{exact}}(\\tau_\\alpha)]^2 + \\sum_{\\alpha=0}^{M-1} \\sigma^2_{\\mathrm{val}}(\\tau_\\alpha).\n\\end{equation}\nHere, the first term is the distance of the estimate to the exact data, while the second is the variance of the validation data, which is independent of $\\Theta$. The choice of $\\Theta$ will therefore ultimately be dictated by the behaviour of the first term.\n\\section{\\label{sec:estimators}Improved estimators for current correlations}\nThe computation of transport coefficients typically involves correlation functions of the momentum operator, a prototypical one being $C_{pp}(\\tau) = \\langle p(\\tau) p(0) \\rangle $. In the path integral approach and within the primitive approximation of Eq.~(\\ref{eq:primitive-approx}), the momentum operator is expressed as a difference of coordinates, so that the correlation function for $\\tau \\ne 0$ takes the form $C_{pp}(\\tau_k) = -\\frac{1}{\\Delta\\tau^2}\\langle (x_{k+1}-x_k)(x_1-x_0)\\rangle$, where $x_k \\equiv x(\\tau_k)$, and $\\tau_k = k\\Delta\\tau \\equiv k\\frac{\\beta}{M}$ is the discretized imaginary time. The MC evaluation of $C_{pp}(\\tau_k)$ is hampered by the fact that, when $\\Delta\\tau$ gets small, relative fluctuations in $(x_{k+1}-x_k)$ become large and the variance of the measured observable grows rapidly (in fact it diverges for $\\Delta\\tau\\rightarrow0$). 
As the uncertainty $\\delta_{MC}$ of the MC estimate of an observable $A$ is related to its variance $\\sigma_A^2$ by $\\delta_{MC} \\propto \\sigma_A\/\\sqrt{\\tau_{sim}}$, one is therefore forced to increase the simulation time, $\\tau_{sim}$, in order to achieve a given precision. \n\nThis problem was identified early in the development of PIMC, when trying to estimate the atomic kinetic energy, which is $\\propto C_{pp}(\\tau=0)$. A solution was proposed in~\\cite{Herman1982}: instead of directly using the above expression for $C_{pp}(\\tau_k)$, the integrals entering the correlation function can be rearranged to obtain a new estimator for $C_{pp}(\\tau_k)$, with identical average but smaller variance. The new expression, known in the case of the kinetic energy as the \"virial estimator\", does not depend explicitly on $\\Delta\\tau$, and therefore does not suffer from the diverging variance associated with the \"naive\" estimator. \n\nWe now show that the strategy used to obtain the virial estimator can be generalized to any correlation function involving the momentum operator~\\cite{PhysRevLett.111.050406}. Specifically, we consider correlation functions of the general form involved in the calculation of transport coefficients, e.~g., $C_{pF}(\\tau) = \\langle ( \\hat{p}(\\tau)\\hat{F}(\\tau))_s (\\hat{p}(0)\\hat{F}(0))_s \\rangle$. Here $\\hat{F}(\\tau)$ is a shorthand notation for a generic local function $F(\\hat{X}(\\tau))$, which in the case of heat transport would be related to the potential energy. The subscript $s$ indicates that the operator product, which represents an observable quantity, is by convention made Hermitian by symmetrizing the operator, as $( \\hat{p}\\hat{F})_s = \\frac{1}{2}(\\hat{p}\\hat{F}+\\hat{F}\\hat{p})$. 
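The divergence of the variance of the naive estimator can be checked directly for the harmonic oscillator (an illustrative sketch with $\hbar=m=1$ and assumed parameters, not the production code of this work): the primitive-approximation weight is Gaussian, so independent closed paths can be drawn exactly from its covariance matrix, and the per-sample spread of the naive $C_{pp}(\tau_k)$ estimator can be monitored as $M$ grows at fixed $\beta$.

```python
import numpy as np

# Variance blow-up of the naive momentum-correlation estimator (hbar = m = 1).
# The primitive weight exp(-X^T K X / 2) is Gaussian for a harmonic oscillator,
# so closed paths are sampled independently from the covariance K^{-1}.
rng = np.random.default_rng(2)
beta, omega = 2.0, 1.0

def naive_stats(M, n=4000):
    dtau = beta / M
    idx = np.arange(M)
    K = np.zeros((M, M))
    K[idx, idx] = 2.0 / dtau + dtau * omega**2
    K[idx, (idx + 1) % M] = -1.0 / dtau        # cyclic tridiagonal stiffness
    K[idx, (idx - 1) % M] = -1.0 / dtau
    X = rng.multivariate_normal(np.zeros(M), np.linalg.inv(K), size=n)
    k = M // 4                                  # fixed physical time tau = beta/4
    est = -(X[:, k + 1] - X[:, k]) * (X[:, 1] - X[:, 0]) / dtau**2
    return est.mean(), est.var()

variances = {}
for M in (16, 32, 64, 128):
    mean, var = naive_stats(M)
    variances[M] = var
    print(M, mean, var)     # the mean stays finite, the variance grows ~ 1/dtau^2
```

Doubling $M$ roughly quadruples the per-sample variance, so at fixed precision the required simulation time grows accordingly, which is exactly the problem the improved estimators below remove.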
\n\nWithin the primitive approximation and following this definition one obtains,\n\\begin{multline}\nC_{pF}(\\tau_k) = - \\frac{1}{\\Delta\\tau^2}m^2 \\langle (x_{k+1} - x_{k}) F(x_{k}) (x_{1} - x_{0}) F(x_0) \\rangle \\\\\n+ \\frac{1}{2\\Delta\\tau}m \\langle (x_{k+1} - x_{k}) F(x_{k}) F'(x_0)\\rangle \\\\\n+ \\frac{1}{2\\Delta\\tau}m \\langle (x_{1} - x_{0}) F(x_{0}) F'(x_{k})\\rangle\n- \\frac{1}{4} \\langle F'(x_k) F'(x_0) \\rangle. \n\\label{eq::pF_correlation}\n\\end{multline}\nThis expression is valid for $k\\ge 1$, while the case $k=0$ must be treated separately, along similar lines. \n\nThe MC calculation of Eq.~(\\ref{eq::pF_correlation}) suffers from the same numerical problem as the momentum correlations, the variance of the terms with inverse powers of $\\Delta\\tau$ diverging as $\\Delta\\tau$ approaches zero. In order to improve the estimator, we have generalized the procedure originally used for the kinetic energy calculations ($C_{pp}(0)$), obtaining a new estimator with reduced variance for general correlation functions. We start from the first term in Eq.~(\\ref{eq::pF_correlation}), which has the strongest dependence on $\\Delta \\tau$, and can be expressed as,\n\\begin{multline}\n\\frac{1}{\\Delta\\tau^2}\\langle F(x_k)(x_{k+1}-x_k)F(x_0)(x_1-x_0)\\rangle =\\\\=\\frac{1}{\\hbar^2\\Delta\\tau^2 Z} \\int dx_0 \\int dx_1 \\dots \\int dx_M F(x_k) (x_{k+1}-x_k) F(x_0) (x_1-x_0)\\\\ \\rho_0(x_1-x_0; \\Delta\\tau)\\dots \\rho_0(x_M- x_{M-1}; \\Delta\\tau) \\exp\\left[-\\Delta\\tau \\sum_{j=0}^{M-1} V(x_j)\\right].\n\\end{multline}\nWe now transform the set of coordinates $\\{x_0, x_i\\}$ to $\\{x_0, y_i\\}$, such that $y_i = x_{i+1}-x_i$. 
The constraint $x_{M} \\equiv x_0$ is accounted for by introducing a term $\\delta\\left(\\sum_{i=0}^{M-1} y_i\\right)$, leading to\n\\begin{multline}\n\\frac{1}{\\Delta\\tau^2}\\langle F(x_k)(x_{k+1}-x_k)F(x_0)(x_1-x_0)\\rangle =\\\\=\\frac{1}{\\Delta\\tau^2 Z} \\int dx_0 \\int dy_0 \\dots \\int dy_{M-1} \\delta\\left(\\sum_{i=0}^{M-1} y_i\\right) F\\left(\\sum_{i=0}^{k-1}y_i +x_0\\right) \\\\ y_k F(x_0)y_0 \\rho_0(y_0; \\Delta \\tau)\\dots \\rho_0(y_{M-1};\\Delta \\tau) \\exp[-\\Delta\\tau W],\n\\end{multline}\nwith\n\\begin{equation}\n W = \\sum _{j=0}^{M-1} V\\left(\\sum_{i = 0}^j y_i + x_0\\right).\n\\end{equation}\nBy using the identity\n\\begin{equation}\n\\frac{m}{\\hbar^2\\Delta\\tau}y_k \\rho_0(y_k;\\Delta\\tau)= -\\partial_{y_k} \\rho_0(y_k;\\Delta\\tau),\n\\end{equation}\nwe can integrate by parts with respect to $y_k$. Our next step is based on the observation that the derivative of the $\\delta$ function with respect to $y_k$ can be distributed over all coordinates, i.e., $\\partial_{y_k}\\delta\\left(\\sum y_j\\right) = \\frac{1}{M} \\sum_i \\partial_{y_i}\\delta\\left(\\sum y_j\\right)$. A second integration by parts over each of the $y_i$ variables eventually leads to\n\\begin{multline}\n\\frac{1}{\\Delta\\tau^2}\\langle F(x_k)(x_{k+1}-x_k)F(x_0)(x_1-x_0)\\rangle\n = \\\\ \\left\\langle F(x_k)(x_1-x_0)F(x_0)\\left[\\frac{1}{M}\\sum_{j=1}^{M-1} j \n V'(x_j)- \n \\sum_{j=k+1}^{M-1} V'(x_j)\\right]\n \\right\\rangle-\\\\- \\frac{k}{(\\Delta\\tau M)}\\langle F'(x_k)(x_1-x_0)F(x_0)\\rangle\n-\\frac{1}{(\\Delta\\tau M)}\\langle F(x_k)F(x_0)\\rangle. \\label{eq::virial_expression}\n\\end{multline}\nFor the special case $F(x) \\equiv 1$, we can show that Eq.~(\\ref{eq::virial_expression}) reduces to a virial-like formula for the momentum correlations $C_{pp}(\\tau_k)=\\langle x_k V'(x_0)\\rangle$ (see App.~\\ref{sec:appendixA}). 
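This special case is easily verified numerically. In the sketch below (illustrative parameters, $\hbar=m=1$, not the production code of this work), independent discretized harmonic-oscillator paths are drawn exactly from the Gaussian primitive weight, and the naive difference estimator is compared with the virial-like form $x_k V'(x_0)$: the two averages agree, while the variances differ by orders of magnitude.

```python
import numpy as np

# Check of the F = 1 special case (hbar = m = 1): naive vs virial-like
# estimator for C_pp(tau_k) under the Gaussian primitive weight.
rng = np.random.default_rng(3)
beta, omega, M, n = 2.0, 1.0, 32, 60000
dtau = beta / M

idx = np.arange(M)
K = np.zeros((M, M))
K[idx, idx] = 2.0 / dtau + dtau * omega**2     # cyclic tridiagonal stiffness
K[idx, (idx + 1) % M] = -1.0 / dtau
K[idx, (idx - 1) % M] = -1.0 / dtau
X = rng.multivariate_normal(np.zeros(M), np.linalg.inv(K), size=n)

k = M // 4
naive = -(X[:, k + 1] - X[:, k]) * (X[:, 1] - X[:, 0]) / dtau**2
virial = omega**2 * X[:, k] * X[:, 0]          # x_k V'(x_0) for V = w^2 x^2 / 2

print("means    :", naive.mean(), virial.mean())
print("variances:", naive.var(), virial.var())
```

The identical averages reflect the exact rearrangement above, while the variance reduction is precisely the gain quantified in the following sections.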
Repeating the procedure for the terms linear in $\\frac{1}{\\Delta\\tau}$, such as the second term in Eq.~(\\ref{eq::virial_expression}), we can write the correlation in a form that manifestly does not depend on $\\Delta \\tau$ (recall that $M\\Delta \\tau =\\beta$ is a constant). The calculations, together with the expressions appropriate for the special case $k=0$, are sketched in App.~\\ref{sec:appendixA}.\n\nIn contrast with the initial expression Eq.~(\\ref{eq::pF_correlation}), all terms are now well-defined as $\\Delta\\tau \\rightarrow 0$. We note, however, that the number of terms involved in the first part of Eq.~(\\ref{eq::virial_expression}) increases linearly with $M=\\beta\/\\Delta \\tau$, so that the gain following our manipulation is not immediately obvious. The argument that Eq.~(\\ref{eq::virial_expression}) indeed leads to a variance reduction is the following: if all the $M$ contributions to the first term were independent, its variance would scale as $\\Delta \\tau\\times M$, where the factor $\\Delta \\tau$ comes from the variance of $(x_1-x_0)$, and the factor $M$ accounts for the $M$ contributions in the sum. As the segments in the path are correlated, even if this estimate is only approximate, it still indicates that the variance remains finite even for $\\Delta \\tau \\rightarrow 0$. We explicitly verify the variance reduction numerically for the harmonic oscillator in the following section.\n\nWe conclude this section by stressing that the above derivation to improve generic estimators involving momentum operators is by no means limited to the harmonic oscillator, but remains valid in general, in particular for the case of interacting particles and also beyond the use of the primitive approximation in the path integral.\n\\section{\\label{sec:case1}Case study I: the single harmonic oscillator}\n\\subsection{Computing correlation functions}\n\\label{sec:computing}\nWe now apply the methods described above to our test cases. 
We start by considering the canonical example of a single quantum harmonic oscillator of frequency $\\omega_0$ in one dimension, with potential energy $V=\\frac{1}{2} m\\omega_0^2 X^2$, and focus on the time correlation function of an operator with the structure of an energy current, e.~g., $C_{pV}(\\tau) = \\langle (p(\\tau)V(\\tau))_s (p(0)V(0))_s \\rangle$. The PIMC approach within the primitive approximation allows us to extract the values of the imaginary time correlation function $C_{pV}(\\tau_k)$, at $M$ discrete time values, $\\tau_k= k\\beta\/M$, with $k=0,\\ldots,M-1$. Two main sources of inaccuracy are associated with this procedure: a systematic error, associated with the use of the primitive approximation for the density matrix, and the statistical uncertainty due to finite sampling. In the following we show how to fully control these issues.\n\nFor a harmonic oscillator, the systematic deviation due to the discretization of the imaginary time $\\Delta\\tau=\\beta\/M$ can be assessed directly, by comparing the result expected from the PIMC approach (which in this case can be obtained exactly) with the analytical expression for the correlation function $C_{pV}(\\tau)$, which corresponds to the continuous limit $M\\rightarrow \\infty$. By applying the canonical formalism for the harmonic oscillator, we indeed obtain,\n\\begin{multline}\nC^{\\text{exact}}_{pV}(\\tau) =\\left( \\frac{m\\hbar^3\\omega_0^3}{256}\\right) \\frac{1}{\\sinh^3(\\beta \\omega_0\/2)}\\times \\\\\n\\left[12\\cosh\\left(\\frac{3\\beta\\omega_0}{2}-3\\omega_0 \\tau\\right)\\right. 
\\\n\\left.+2\\left(4e^{-\\beta\\omega_0}+e^{-2\\beta\\omega_0}+1\\right)e^{\\beta\\omega_0} \\cosh\\left(\\frac{\\beta\\omega_0}{2} -\\omega_0 \\tau\\right) \\right].\n\\label{eq::exact pv correlation}\n\\end{multline}\nIn order to calculate the exact expression of the correlation function within the primitive approximation of the discretized path integral, we first note that all the integrals involved in the calculation are Gaussian. By using the discretized representation for the momentum operator, one writes $C_{pV}(\\tau)$ as a thermodynamic average of products of the variables $x$. Wick's theorem allows us to recast such correlations $\\langle x_1 \\dots x_{2n}\\rangle$ into products of pair correlation functions $\\langle x_ix_j\\rangle$, which are easily accessible as $\\langle x_ix_j\\rangle = (\\mathbf{A}^{-1})_{ij}$, where $\\mathbf{A}$ is the symmetric $M \\times M$ matrix defining the Gaussian weight of the discretized path, i.e., $\\langle x_ix_j\\rangle = Z^{-1}\\int dX\\, x_ix_j\\,\\text{e}^{-\\frac{1}{2}X^T\\mathbf{A}X}$. We can therefore use numerical methods to calculate the matrix elements, as discussed in App.~\\ref{sec:appendixB}. The relative difference between the two calculations is illustrated in Fig.~\\ref{fig::discretization_error_pV}.\nWe observe that, for a sufficiently small value of $\\beta\/M$, the deviation is virtually unaffected by a change of $\\beta$.\n\\begin{figure}[t]\n\\center{\\includegraphics[width=1. \\linewidth]{fig01.pdf}}\n\\caption{\nRelative discretization error, $1-C_{pV}^{\\mathrm{PI}}(\\tau)\/ C_{pV}^{\\mathrm{exact}}(\\tau)$, between the path integral, $C_{pV}^{\\mathrm{PI}}(\\tau)$, and the exact results, $C_{pV}^{\\mathrm{exact}}(\\tau)$, for the energy current correlation function, as a function of $\\beta\/M$. 
We show the data corresponding to the imaginary times $\\tau=0$ and $\\tau=\\beta\/2$, and indicate with symbols and solid lines the results for $\\beta=3$ and $10$, respectively.\n}\n\\label{fig::discretization_error_pV}\n\\end{figure}\n\\begin{figure}[b]\n\\center{\\includegraphics[width=1. \\linewidth]{fig02.pdf}}\n\\caption{\nDifference between the exact correlation function $C_{pV}^{\\mathrm{exact}}(\\tau)$ and the values obtained by Monte Carlo sampling, $C_{pV}^{\\mathrm{MC}}(\\tau)$, of a path with $M=100$ time slices, for $\\beta=1$, illustrating the variance reduction obtained by the improved\nestimator discussed in Sect.~\\ref{sec:estimators}. We show the primitive estimator with line-points and the improved estimator with a continuous line, both computed from the same\nMonte Carlo data.\n}\n\\label{fig::pv_correlation_beta1}\n\\end{figure}\n\\begin{figure}[t]\n\\center{\\includegraphics[width=1. \\linewidth]{fig03.pdf}}\n\\caption{\nReconstruction of the spectral function associated with $C_{pV}(\\tau)$ at $\\beta=10$, corresponding to the indicated values for the number of delta functions in the model, $N_\\omega$, and effective temperature $\\Theta=1$. The areas of the filled rectangles indicate the weights of the two delta-functions of the exact spectrum centered at $\\omega_0 $ and $3\\omega_0$, corresponding to the $\\Delta\\omega=1$ discretization.\n}\n\\label{fig::sp function b10 discretization}\n\\end{figure}\n\\begin{figure}[b]\n\\center{\\includegraphics[width=1. \\linewidth]{fig04.pdf}}\n\\caption{\nReconstructed spectra for the energy current correlation function $C_{pV}(\\tau)$ at $\\beta=10$, with $N_\\omega=25$ and at the indicated values of $\\Theta$. 
The filled rectangles are centered at the positions of the two delta-functions of the exact spectrum, with an area corresponding to their respective weights.} \n\\label{fig::sp function b10 theta}\n\\end{figure}\n\nIn addition to this quantitative estimate, it is important to note that, for this system, the discretization preserves the qualitative shape of the correlation functions. One can show (see App.~\\ref{sec:appendixC}) that the calculation using a finite but large $M$ corresponds to the exact calculation ($M\\to\\infty$) for slightly shifted oscillator strength and inverse temperature. The Trotter error therefore only introduces small quantitative deviations in the spectral density, but does not give rise to spurious qualitative features such as a broadening of the spectral lines.\n\nWe next focus on the second source of error affecting the PIMC calculation, limited sampling. Indeed, error bars corresponding to average values are obtained by estimating the variance of the observable, which decreases as $\\tau_\\text{sim}^{-1\/2}$, with $\\tau_\\text{sim}$ the simulation time. For a given simulation time, the quality of the result therefore crucially depends on the variance of the estimator. We illustrate this point in Fig.~\\ref{fig::pv_correlation_beta1}, by comparing calculations for the energy current correlation function, $C_{pV}$, using the naive estimator, Eq.~(\\ref{eq::pF_correlation}), and the improved version of Eq.~(\\ref{eq::virial_expression}). The data of Fig.~\\ref{fig::pv_correlation_beta1} clearly show that the virial estimator leads to a spectacular improvement compared to the naive one, with a statistical error that is now comparable to the systematic one resulting from the discretization. \n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=1. 
\\linewidth]{fig05.pdf}\n\\caption{\n$N_\\omega$-dependence of the $\\chi^2_{\\mathrm{val}}$ extracted from the validation step of the reconstructed spectral functions for $C_{pV}(\\tau)$, at $\\beta=10$ and with $\\Theta=1$. Squares and triangles correspond to shifted grids: for $N_\\omega=5$, the red square shows the shift $\\delta\\omega=0.25$ and the green square $\\delta\\omega=0.5$; for $N_\\omega=10$, the red triangle shows $\\delta\\omega=0.1$ and the green triangle $\\delta\\omega=0.25$.\n}\\label{fig::chi valid b10 discretization}\n\\end{figure}\n\\begin{figure}[b]\n\\center{\\includegraphics[width=1. \\linewidth]{fig06.pdf}}\n\\caption{ \nMain panel: Comparison of the $\\chi^2_{\\mathrm{val}}$ obtained from our validation for various values of $\\Theta$ and $N_\\omega$. The area of the circles is proportional to the corresponding value of $ \\chi^2_{\\mathrm{val}}$. Inset: $\\chi^2_{\\mathrm{val}}$ as a function of $\\Theta$, at the indicated values of $N_\\omega$.\n}\n\\label{chi2 valid b10 table}\n\\end{figure}\n\\begin{figure}[t]\n\\center{\\includegraphics[width=1. \\linewidth]{fig07.pdf}}\n\\caption{\nSpectral reconstruction for $C_{pV}(\\tau)$ at $\\beta=3$, obtained at the indicated values of the discretization, $N_\\omega$, for a fixed $\\Theta=1$. The filled rectangles are centered at the positions of the two delta-functions of the exact spectrum for $\\Delta\\omega=1$, with an area corresponding to their respective weights.\n}\n\\label{fig::sp function b3 discretization}\n\\end{figure}\n\\begin{figure}[b]\n\\center{\\includegraphics[width=1. \\linewidth]{fig08.pdf}}\n\\caption{\nSpectral reconstructions from $C_{pV}(\\tau)$ at $\\beta=3$ for $N_\\omega=5$ using different values of $\\Theta$.
The filled rectangles are centered at the positions of the two delta-functions of the exact spectrum, with an area corresponding to their respective weights.\n}\n\\label{fig::sp function b3 theta}\n\\end{figure}\n\\begin{figure}[t]\t\\center{\\includegraphics[width=1. \\linewidth]{fig09.pdf}}\n\\caption{\nMain panel: $\\chi^2_{\\mathrm{val}}$ from the validation procedure at the corresponding values $\\Theta$ and $N_\\omega$. The area of the circles is proportional to the value of $ \\chi^2_{\\mathrm{val}}$. Inset: $\\chi^2_{\\mathrm{val}}$ as a function of the effective temperature $\\Theta$, at the indicated values of $N_\\omega$.\n}\n\\label{chi2 valid b3 table}\n\\end{figure}\n\\subsection{The inversion problem}\nWe now use the reconstruction procedure outlined in Sect.~\\ref{sec:inversion} to extract the frequency spectrum for the correlation functions obtained in Sect.~\\ref{sec:computing}. In order to perform a reconstruction, one needs both to define the set of parameters that expresses the spectral density in Eq.~(\\ref{eq::correlation fit}) and in the integration measure of Eq.~(\\ref{eq:sme2}), and to choose the effective inverse temperature $\\Theta$. In the following, we use a discretized model of the spectral density, which is described as a sum of $N_\\omega$ delta-functions in the $\\omega$-space, see Eq.~(\\ref{eq:sme1}). Specifically, we consider a regular grid of $\\omega$-values defined on the interval $[0, 5]$, with a fixed spacing between points, $\\Delta\\omega=5\/N_\\omega$. In addition, we will consider the possibility of a global shift of the grid by $\\delta\\omega < \\Delta \\omega$. Unless specified otherwise, $\\delta\\omega=0$, and we fix the origin of the grid at $\\omega=0$. \n\nThe exact expression for the time correlation function, Eq.~(\\ref{eq::exact pv correlation}), implies that $C_{pV}(\\tau)$ decays exponentially with $\\tau$ in the interval $[0,\\beta\/2]$, with a decay rate $\\mathcal{O}(\\omega_0)$.
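The discretized grid model just described can be sketched in a few lines (a hypothetical Python illustration; the symmetric-exponential kernel below is an assumption standing in for the exact kernel implied by Eq.~(\ref{eq::correlation fit}), chosen only because it has the correct $\tau \leftrightarrow \beta-\tau$ symmetry):

```python
import math

def omega_grid(n_omega, omega_max=5.0, delta=0.0):
    """Regular grid on [0, omega_max] with spacing omega_max/n_omega,
    optionally shifted globally by delta < spacing (illustrative
    convention for the grid origin)."""
    spacing = omega_max / n_omega
    return [i * spacing + delta for i in range(n_omega)]

def model_correlation(amplitudes, omegas, tau, beta):
    """C(tau) as a sum of delta-function contributions A_i * K(tau, w_i).
    The kernel K(tau, w) = exp(-w*tau) + exp(-w*(beta - tau)) is an
    assumed stand-in, symmetric about tau = beta/2."""
    return sum(a * (math.exp(-w * tau) + math.exp(-w * (beta - tau)))
               for a, w in zip(amplitudes, omegas))
```

Fitting the amplitudes $A(\omega_i)$ of such a model to the simulated $C_{pV}(\tau_k)$ is then the inversion task discussed below.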
Larger values of $\\beta$ therefore lead to a larger amplitude in the decay, with the consequence that the contribution of different frequencies can be more easily resolved for larger $\\beta$'s. In short, a correlation function of the form $[\\exp(-\\omega_0\\tau) +\\exp(-3\\omega_0\\tau)]$ will be hard to distinguish from $2\\exp(-2\\omega_0\\tau)$ if data are only available in the interval $ [0,1\/\\omega_0]$. Resolving the two frequencies $\\omega_0$ and $3\\omega_0$ is therefore essentially impossible if $\\beta\/2 <1$.\n\nIn order to illustrate this point, we calculate and analyze the spectral function for the energy current correlation functions at the two inverse temperatures $\\beta=3$ and $10$, with an imaginary time discretization $\\Delta\\tau = 0.1$. With this value of $\\Delta\\tau$, the systematic discretization error is smaller than the statistical error for our simulation time, so it can be safely neglected. The main constraint for the reconstruction comes from the imaginary time interval $[0, 1\/\\omega_0]$. The relative error of the MC data corresponding to these values of $\\tau$ is $\\mathcal{O}(10^{-2})$. For larger $\\tau$, the statistical error becomes comparable to the data themselves, since $C_{pV}(\\tau)$ approaches 0 as $\\tau \\rightarrow\\beta\/2$.\n\nWe start by considering the case $\\beta=10$. First, we evaluate the effect of the grid size, $N_\\omega$, on the reconstruction. In Fig.~\\ref{fig::sp function b10 discretization} we show the spectra obtained for various values of $N_\\omega$, keeping a fixed $\\Theta=1$. As mentioned above, there is no {\\em a-priori} argument guiding the most appropriate parametrization of the spectrum. In the following we analyze the accuracy of the spectral reconstruction by comparing the values of $\\chi^2_{\\mathrm{val}}$ defined in Eq.~(\\ref{eq::chi2 validation}), using an independent test data set.
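The two-frequency degeneracy invoked above can be checked directly (a hypothetical numeric illustration with $\omega_0=1$):

```python
import math

# Two spectra that the data window [0, 1/omega_0] can barely distinguish:
# exp(-omega0*tau) + exp(-3*omega0*tau)  versus  2*exp(-2*omega0*tau).
omega0 = 1.0
taus = [k / 100.0 for k in range(101)]            # tau in [0, 1/omega0]
f_two = [math.exp(-omega0 * t) + math.exp(-3.0 * omega0 * t) for t in taus]
f_one = [2.0 * math.exp(-2.0 * omega0 * t) for t in taus]

# Maximum deviation, measured relative to the tau = 0 value:
max_dev = max(abs(a - b) for a, b in zip(f_two, f_one)) / f_two[0]
```

On this window the two shapes differ by at most about $7\%$ of the initial value, so data with few-percent error bars cannot separate them; resolving $\omega_0$ from $3\omega_0$ requires the longer window available at larger $\beta$.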
This is obtained within an additional MC simulation of the correlation function, with the same parameters as the original one. We also consider a data set of the same size, $P'$, as the one that was used to produce $C_{pV}(\\tau_k)$.\n\\begin{figure}[t]\n\\center{\\includegraphics[width=1. \\linewidth]{fig10.pdf}}\n\\caption{\nSpectral reconstruction of $C_{pV}^{\\text{cont}} (\\tau)$ for the continuous distribution of oscillator frequencies, at the indicated values of the discretization $N_\\omega$, at fixed $\\Theta=1$. The shaded area indicates the exact spectral function.\n}\n\\label{fig::contin spectrum discr}\n\\end{figure}\n\\begin{figure}[b]\n\\center{\\includegraphics[width=1. \\linewidth]{fig11.pdf}}\n\\caption{\nSpectral reconstruction of $C_{pV}^{\\text{cont}} (\\tau)$ for the continuous distribution of oscillators, for $N_\\omega=10$ and $\\Theta=1$ and $10$, respectively. Here we compare the results pertaining to a grid shifted by $\\delta\\omega= 0.25$ to those with $\\delta\\omega=0$, the usual (not shifted) case. The shaded area indicates the exact spectral function.\n}\n\\label{fig::contin spectrum shift theta=1 and theta=10}\n\\end{figure}\n\nIn Fig.~\\ref{fig::chi valid b10 discretization} we show $\\chi^2_{\\mathrm{val}}$ as a function of the number of grid points. Clearly, increasing the number of coefficients $A(\\omega_i)$ of Eq.~(\\ref{eq:sme1}) does not lead to a better spectral reconstruction. In contrast, by introducing more degrees of freedom, one increases the entropy, and the spectral weight is smeared out excessively. In Fig.~\\ref{fig::chi valid b10 discretization} we also show the effect on $\\chi^2_{\\mathrm{val}}$ of a shift $\\delta \\omega$. As expected, shifting the nodes away from $\\omega_1=\\omega_0$ and $\\omega_2=3\\omega_0$, which are the only frequencies present in the exact spectrum determined by Eq.~(\\ref{eq::exact pv correlation}), deteriorates the accuracy of the spectrum obtained through the validation step. 
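The logic of the validation step can be illustrated with a self-contained toy example (everything below is a hypothetical stand-in, including the deterministic pseudo-noise, which replaces a random-number generator only so that the example is reproducible): a one-parameter decay model is fitted to noisy training data and compared, on an independent data set, with a model that interpolates the training data exactly.

```python
import math

taus = [k / 25.0 for k in range(1, 51)]
truth = [math.exp(-t) for t in taus]
sigma = 0.02

# Deterministic stand-ins for two independent noise realizations:
train = [y + sigma * math.sin(13.0 * k) for k, y in enumerate(truth, 1)]
valid = [y + sigma * math.cos(13.0 * k) for k, y in enumerate(truth, 1)]

# One-parameter model a*exp(-tau): closed-form least-squares amplitude.
g = truth
a = sum(y * gi for y, gi in zip(train, g)) / sum(gi * gi for gi in g)
fit = [a * gi for gi in g]

def chi2(data, model):
    return sum((d - m) ** 2 for d, m in zip(data, model)) / sigma ** 2

# An "overfitted" model that interpolates the training data exactly has
# zero training chi^2, but validates worse than the smooth fit:
chi2_overfit = chi2(valid, train)
chi2_simple = chi2(valid, fit)
```

The over-parametrized model wins on the training set by construction, yet loses on the validation set, which is the effect $\chi^2_{\mathrm{val}}$ is designed to detect.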
\n\nThe second parameter determining the quality of the statistical maximum entropy reconstruction is the effective temperature, $\\Theta$. In Fig.~\\ref{fig::sp function b10 theta} we show the behaviour of the spectral function for a chosen $\\omega$-grid at the indicated values of $\\Theta$. As expected from Eq.~(\\ref{eq:sme1}), by increasing $\\Theta$ the result approaches the most probable configuration that describes the correlation function $C_{pV}(\\tau)$, reducing entropic effects. In Fig.~\\ref{chi2 valid b10 table} we combine the above results for different pairs of parameters ($\\Theta$, $N_\\omega$), and plot the corresponding $\\chi^2_{\\mathrm{val}}$. Our validation procedure therefore strongly points to using models with a smaller number of delta functions combined with values $\\Theta \\gg 1$ for the spectral reconstruction. Based on the comparison with the exact spectrum, this choice is also clearly the one that leads to the description of the spectrum in closest agreement with the exact prediction. We conclude that the use of $\\chi^2_{\\mathrm{val}}$ indeed seems to provide an unbiased estimate of the quality of the reconstruction.\n\nWe now consider the spectral reconstruction for $C_{pV}(\\tau)$ at $\\beta=3$, again clarifying the influence of $\\Theta$ and of the lattice discretization $N_\\omega$. In Figs.~\\ref{fig::sp function b3 discretization} and~\\ref{fig::sp function b3 theta} we show selected examples of the resulting spectra. In contrast to the case $\\beta=10$, we now observe in general a much stronger broadening of the peaks, which prevents us from resolving the two-peak structure for $\\Theta=1$, even for sparse $\\omega$-grids. However, when combining sparse grids with sufficiently large $\\Theta$ in the inversion, one improves towards the correct two-peak structure, as can be seen in Fig.~\\ref{fig::sp function b3 theta}.
The data shown in Fig.~\\ref{chi2 valid b3 table} also indicate that this choice indeed corresponds to the lowest values of $\\chi^2_{\\mathrm{val}}$, confirming the validity of this indicator.\n\n\\begin{figure}[t]\n\\center{\\includegraphics[width=1. \\linewidth]{fig12.pdf}}\n\\caption{\nMain panel: $\\chi^2_{\\mathrm{val}}$ from the validation procedure at the corresponding values $\\Theta$ and $N_\\omega$. The area of the circles is proportional to the value of $ \\chi^2_{\\mathrm{val}}$. Blue circles correspond to the results for an $\\omega$-grid shifted by $\\delta\\omega=\\Delta\\omega \/ 2$. Inset: $\\chi^2_{\\mathrm{val}}$ as a function of the effective temperature $\\Theta$, at the indicated values of $N_\\omega$.\n}\n\\label{fig::cont spectrum valid}\n\\end{figure}\n\n\\section{\\label{sec:case2}Case study II: continuum distribution of oscillators}\nWe now move to our second test model, and study the potential energy current correlation function of a system containing a large number of independent, non-interacting harmonic oscillators. Considering the $C_{pV}$ of Eq.~(\\ref{eq::exact pv correlation}) as a function of $\\omega_0$, the correlation function for an ensemble of oscillators with a continuum of frequencies can be written as\n\\begin{equation}\nC_{pV}^{\\text{cont}} (\\tau) = \\int_0^{\\omega_{cut}} d\\omega_0\\; C^{\\text{exact}}_{pV}(\\tau;\\omega_0) g(\\omega_0).\n\\label{eq::exact continous pv correlation}\n\\end{equation}\nThe form of the density of states, $g(\\omega_0)$, and the value of the frequency cutoff, $\\omega_{cut}$, are arbitrary. In the following we consider a Debye-like form, $g(\\omega_0)\\propto\\omega_0^2$, with $\\omega_{cut}=1$, and fix $\\beta=10$. With this choice, the exact spectrum for the energy current correlation is a superposition of two functions with a compact support, assuming nonzero values in the ranges $[0,\\omega_{cut}]$ and $[0,3\\omega_{cut}]$, respectively.
As a result, it will display two sharp discontinuities, at $\\omega_{cut}$ and $3\\omega_{cut}$, respectively. \nContrary to the single oscillator case, here we do not generate the data by Monte Carlo simulation, but we rather employ the exact analytical expression, subsequently adding Gaussian random noise with a standard deviation proportional to the data themselves, $\\sigma_k = 10^{-2}\\times C_{pV}^{\\text{cont}} (\\tau_k)$. This standard deviation is also used as the uncertainty to compute the $\\chi^2$ of Eq.~(\\ref{eq:chi2}).\n\nBy following the same workflow discussed above for the single oscillator, we reconstruct the spectral densities for different values of $\\Theta$ and number of delta functions in the model, $N_\\omega$. In Fig.~\\ref{fig::contin spectrum discr}, we show the influence of the discretization $N_\\omega$ by fixing the canonical value $\\Theta=1$. Following the same procedure as above, we calculate again $\\chi^2_{\\mathrm{val}}$ for the validation set by generating test correlation functions from the exact result of Eq.~(\\ref{eq::exact continous pv correlation}), with the same variance $\\sigma_k$. The values of $\\chi^2_{\\mathrm{val}}$, shown in Fig.~\\ref{fig::cont spectrum valid}, indicate again a more statistically sound reconstruction corresponding to sparse grids. Unfortunately, none of the curves of Fig.~\\ref{fig::contin spectrum discr} convincingly captures the sharp edges of the exact spectral density, which rather resemble two symmetrically broadened peaks. Considering shifted grids (Fig.~\\ref{fig::contin spectrum shift theta=1 and theta=10}), in contrast, results in more asymmetric features, as also quantitatively supported by the validation procedure, clearly improving the reconstruction towards the exact spectrum.
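A minimal numeric sketch of Eq.~(\ref{eq::exact continous pv correlation}) with the Debye-like density of states is given below (hypothetical illustration: the single-oscillator factor is replaced by a simplified symmetric-exponential stand-in, not the full exact expression of Eq.~(\ref{eq::exact pv correlation})):

```python
import math

beta, omega_cut = 10.0, 1.0

def dos(w):
    """Debye-like density of states g(w) ~ w^2, normalized on [0, omega_cut]."""
    return 3.0 * w * w / omega_cut ** 3

def c_single(tau, w):
    # Simplified stand-in for C_pV^exact(tau; omega_0), keeping the
    # tau <-> beta - tau symmetry of the exact cosh structure.
    return math.exp(-w * tau) + math.exp(-w * (beta - tau))

def c_cont(tau, n=1000):
    """Composite trapezoidal rule for the frequency integral."""
    h = omega_cut / n
    s = 0.5 * (dos(0.0) * c_single(tau, 0.0)
               + dos(omega_cut) * c_single(tau, omega_cut))
    s += sum(dos(i * h) * c_single(tau, i * h) for i in range(1, n))
    return s * h
```

Gaussian noise with $\sigma_k = 10^{-2}\, C(\tau_k)$ would then be added to each point to emulate the synthetic data set described above.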
Note, however, that employing sparse $\\omega$-grids considerably limits frequency resolution, so that the reconstruction in the case of the continuous spectrum with its sharp discontinuities remains quite difficult.\n\n\\section{\\label{sec:conclusion}Conclusion and outlook}\nIn this paper, we have examined the reconstruction of spectral functions for transport coefficients, starting from imaginary time correlation functions obtained by path integral Monte Carlo simulations. In particular, we have described a general strategy for constructing improved estimators with reduced statistical variance for imaginary time correlation functions involving current or momentum operators. We have next introduced an inversion procedure based on a stochastic maximum entropy method, a Bayesian approach commonly used for such problems. The outcome of these procedures is in general strongly dependent on the involved parameters, as we have illustrated in the case of the harmonic oscillator spectra employing different values for the effective inverse temperature, $\\Theta$, as well as different choices for the grid discretization, $N_\\omega$, or offset, $\\delta \\omega$. Despite their apparent simplicity, the oscillator models studied here provide\nchallenging benchmarks for the spectral reconstruction due to the sharp undamped delta-functions they contain.\n\n\nPure Bayesian approaches suggest eliminating the parameter dependence by using a flat prior with the most general and flexible model for the spectral density, e.g., a large value for $N_\\omega$, together with $\\Theta=1$ to encompass all possible solutions consistent with the data. In contrast, in our case studies we have shown that the spectra corresponding to these standard choices suffer severely from the usual problems of all maximum entropy reconstructions: broadening or merging of peaks, smoothing out any sharp features in the underlying exact spectrum.
\n\n\nIndeed, in practice, path integral Monte Carlo data are strongly correlated in imaginary time, undermining a true justification of the Bayesian choice $\\Theta=1$. Different values of $\\Theta$ may therefore be considered to approximate efficiently the true, unknown likelihood function. On the other hand, the use of flexible models for the spectral function, containing a large number of parameters, possibly introduces a large amount of entropy into the Bayesian inversion, such that different parametrizations (linear or logarithmic grids in regions where spectral densities are flat, for instance) in general strongly modify the results. The representation of a model must therefore be considered itself as a \"parameter\", making a \"parameter-free\" Bayesian inversion illusory, in our view.\n\nIn this paper we have addressed exactly the above difficulties, and developed a validation procedure to quantitatively control any parameter dependence of the Bayesian inversion. Our proposal is based on the quantity $\\chi^2_{\\mathrm{val}}$ constructed from independent data not involved in the maximum entropy inversion, which provides an efficient and readily applicable method to select the optimal choice of parameters, corresponding to the lowest value of $\\chi^2_{\\mathrm{val}}$.\n\nWe have shown explicitly that the new validation step clearly identifies a discrete set of two delta functions in the case study of the single harmonic oscillator, and provides unambiguous indications towards the correct asymmetric sharp edges in the case of an underlying continuous frequency spectrum. Also, in both cases, our validation procedure eventually selects models containing just a limited number of parameters, which intrinsically limits the resolution of the reconstruction.
Overall, combining in a consistent workflow Bayesian inversion together with an efficient validation procedure able to select model parameters and effective temperature dependence, indeed seems to offer promising perspectives for capturing qualitative and quantitative features in spectral reconstruction.\n\nWe conclude by noting that the Green-Kubo method, combined with the harmonic theory of solids and a numerical perturbative treatment of anharmonic effects, has recently proven to be remarkably effective for the determination of heat conductivity at low temperature in systems such as amorphous silicon~\\cite{Isaeva2019,Simoncelli2019}. Our hope is to extend those works to arbitrary temperatures and stronger anharmonic effects, on one hand employing path integrals to relax the assumptions underlying the perturbative treatment of anharmonicity, and on the other hand using the strategies for the spectral reconstruction developed in the present paper.\n\\acknowledgements\n{This work has been supported by the project Heatflow (ANR-18-CE30-0019-01) funded by the french \"Agence Nationale de la Recherche\".}\n\\section*{Data Availability Statement}\nThe data that support the findings of this study are available from the corresponding author upon reasonable request.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nStarting with a question of Erd\\H{o}s and Rothschild~\\cite{erdos}, there has been substantial interest in the problem of characterizing graphs that admit the largest number of $r$-edge-colorings avoiding a fixed pattern of a graph $F$, where the number $r$ of colors and the pattern of $F$ are given. To be precise, fix an integer $r\\geq 2$ and a graph $F$. We say that $P$ is an \\emph{$r$-pattern} of $F$ if it is a partition of the edge set of $F$ into at most $r$ classes. 
An $r$-edge-coloring (or $r$-coloring, for short) of a graph $G$ is said to be \\emph{$P$-free} if $G$ does not contain a copy of $F$ such that the partition of the edge set induced by the coloring is isomorphic to $P$. We write $c_{r,P}(G)$ for the number of $P$-free $r$-colorings of a graph $G$ and we define \n$$c_{r,P}(n)=\\max\\{c_{r,P}(G)\\colon |V(G)|=n\\}.$$\nAn $n$-vertex graph $G$ such that $c_{r,P}(G)=c_{r,P}(n)$ is said to be \\emph{$(r,P)$-extremal}. \n\nWe focus on the case when $F$ is the triangle $K_3$. There are three possible patterns: the monochromatic pattern $K_3^M$, the rainbow pattern $K_3^R$, and the pattern $K_3^{(2)}$ with two classes, one containing two edges and one containing a single edge, depicted in the figure below. \n\n\n\\begin{figure}[H]\n\n\t\t\t\\begin{tikzpicture}[line cap=round,line join=round]\n\t\\begin{axis}[\nwidth=0.9\\textwidth,\nheight=0.25\\textwidth,\nxmin=-1, xmax=9,\nymin=0, ymax=0.5,\nminor tick num=3,\ngrid=none,\naxis lines=none]\n\n\n\\addplot[mark=*,mark size=3,mark options={draw=black,fill=black},line width=1pt,blue,samples=1] coordinates {(2,0)(1,0.5)(0,0)};\n\n\\addplot[mark=*,mark size=3,mark options={draw=black,fill=black},line width=2pt,blue,samples=1] coordinates {(0,0)(2,0)};\n\n\\addplot[mark=*,mark size=3,mark options={draw=black,fill=black},line width=1pt,blue,samples=1] coordinates {(5,0)(4,0.5)};\n\n\\addplot[mark=*,mark size=3,mark options={draw=black,fill=black},line width=2pt,green,samples=1] coordinates {(3,0)(5,0)};\n\n\\addplot[mark=*,mark size=3,mark options={draw=black,fill=black},line width=1pt,red,samples=1] coordinates {(3,0)(4,0.5)};\n\n\\addplot[mark=*,mark size=3,mark options={draw=black,fill=black},line width=2pt,blue,samples=1] coordinates {(6,0)(8,0)};\n\n\\addplot[mark=*,mark size=3,mark options={draw=black,fill=black},line width=1pt,blue,samples=1] coordinates {(8,0)(7,0.5)};\n\n\\addplot[mark=*,mark size=3,mark options={draw=black,fill=black},line width=1pt,red,samples=1] 
coordinates {(6,0)(7,0.5)};\n\n\\end{axis}\n\\end{tikzpicture} \\\\\n\n\n\t\n\t\\caption{The patterns $K_3^M$, $K_3^R$, and $K_3^{(2)}$, respectively.}\n\t\\label{pmpa}\n\\end{figure}\n\nRegarding the monochromatic pattern $K_3^M$, and assuming that $n$ is sufficiently large, it is known that the balanced, complete bipartite Tur\\'an graph $T_2(n)$ is the unique $(r,K_3^M)$-extremal graph if $r\\in \\{2,3\\}$~\\cite{alon,yuster}, and that $T_2(n)$ is not $(r,K_3^M)$-extremal for $r\\geq 4$. Extremal configurations are known for $r=4$~\\cite{yilma}, for $r\\in \\{5,6\\}$~\\cite{botler} and for $r=7$~\\cite{PS2022}. We should note that, for $r=5$, this problem admits several non-isomorphic configurations that achieve the extremal value in an asymptotic sense. The exact extremal graphs are not known for all sufficiently large $n$. Moreover, even though the extremal configurations are not known for larger values of $r$, structural properties of these configurations have been studied in~\\cite{PS2021,PS2022}. \n\nRegarding the rainbow pattern $K_3^R$, again assuming that $n$ is sufficiently large, it is known that the complete graph $K_n$ is $(r,K_3^R)$-extremal for $r\\in \\{2,3\\}$~\\cite{baloghli}, and that the Tur\\'{a}n graph $T_2(n)$ is $(r,K_3^R)$-extremal for $r \\geq 4$~\\cite{baloghli,rainbow_triangle}. We also refer the reader to~\\cite{BBH20} for related results.\n\nThe first results about the pattern $K_3^{(2)}$ have been obtained by two of the current authors~\\cite{2coloredlagos}, who showed that $T_2(n)$ is $(r,K_3^{(2)})$-extremal for $2 \\leq r \\leq 12$ and sufficiently large $n$. They also observed that this conclusion cannot be extended to $r \\geq 27$, as the following example illustrates. Let $n$ be a positive integer (for the sake of the argument, assume that it is divisible by $4$), and consider the complete, balanced $4$-partite Tur\\'an graph $T_4(n)$ with vertex partition $V_1\\cup\\cdots\\cup V_4$, where each class has size $n\/4$.
Let $C$ be a set of 27 colors and partition $C$ as $C_1\\cup C_2\\cup C_3$, where $|C_i|=9$ for $1\\leq i\\leq 3$. Consider the colorings of $T_4(n)$ that assign colors in $C_1$ to edges between $V_1$ and $V_2$ and to edges between $V_3$ and $V_4$; colors in $C_2$ to edges between $V_1$ and $V_3$ and to edges between $V_2$ and $V_4$; colors in $C_3$ to edges between $V_1$ and $V_4$ and to edges between $V_2$ and $V_3$. Note that no copy of $K_3^{(2)}$ may appear in such a coloring (indeed, all triangles are rainbow), and that the number of such colorings is equal to\n$$9^{\\frac{6n^2}{16}}=27^{\\frac{n^2}{4}}=c_{27,K_3^{(2)}}(T_2(n)).$$\nMoreover, other colorings of $T_4(n)$ may be produced, for instance by choosing a different partition $C_1\\cup C_2\\cup C_3$. This construction shows that $T_2(n)$ is not $(r,K_3^{(2)})$-extremal for any $r\\geq 27$.\n\nIn this paper, we prove that the result in~\\cite{2coloredlagos} may be extended all the way to $r=26$, as we now state.\n\\begin{theorem}\\label{theorema:main}\nGiven a number $r$ of colors satisfying $2 \\leq r \\leq 26$, there exists $n_0$ such that, for every $n \\geq n_0$ and every $n$-vertex graph $G$, we have\n\\begin{equation}\\label{eq_main}\nc_{r,K_3^{(2)}}(G) \\leq r^{\\ex(n,K_3)}.\n\\end{equation}\nMoreover, equality holds in~\\eqref{eq_main} for $n \\geq n_0$ if and only if $G$ is isomorphic to the bipartite Tur\\'{a}n graph $T_2(n)$.\n\\end{theorem}\n\nThe work in \\cite[Lemma~4.4]{eurocomb15} implies that to prove Theorem~\\ref{theorema:main}, it suffices to show a related stability result establishing that any $n$-vertex graph with a `large' number of $r$-colorings must be `almost bipartite'. For a formal statement of this stability result, given a graph $G=(V,E)$ and a set $W \\subseteq V$, we write $e_G(W)$ for the number $|E(G[W])|$ of edges in the subgraph of $G$ induced by $W$. \n\\begin{lemma} \\label{lemma:main_result}\nLet $2 \\leq r \\leq 26$.
For any fixed $\\delta > 0$, there exists $n_0$ such that the following holds for all $n\\geq n_0$. If $G=(V,E)$ is an $n$-vertex graph such that\n$$ c_{r,K_3^{(2)}}(G) \\geq r^{\\ex(n, K_3)},$$ \nthen there is a partition $V = W_1 \\cup W_2$ of its vertex set such that $e_G(W_1) + e_G(W_2) \\leq \\delta n^2$.\n\\end{lemma} \n\nThe proof of our results combines the regularity method used in \\cite{alon} and~\\cite{eurocomb15} with linear programming. In some previous applications of this method, the general bounds provided by linear programming were not strong enough to extend the conclusion of Lemma~\\ref{lemma:main_result} to the entire range of $r$. In this paper we use an inductive component in the proof, which allows us to better exploit local restrictions and to extend the result in~\\cite{2coloredlagos} to all values of $r$ for which it was conjectured to hold. The remainder of the paper is organized as follows. In the next section, we introduce the basic preliminary results and notation required. In Section~\\ref{sec:main_proof}, we prove Lemma~\\ref{lemma:main_result}.\n\n\\section{Notation and auxiliary tools}\n\nIn this section, we introduce some notation and auxiliary tools that will be useful for our purposes.\n\n\\subsection{Stability for Erd\\H{o}s-Rothschild type problems} As mentioned in the introduction, results in~\\cite{eurocomb15} ensure that if we prove the stability result stated in Lemma~\\ref{lemma:main_result}, we immediately obtain our main result Theorem~\\ref{theorema:main}. To describe why this is the case, we start with a definition.\n\n\\begin{definition}\\label{colored_stability}\nLet $F$ be a graph with chromatic number $\\chi(F)=k \\geq 3$ and let $P$ be a pattern of $F$. We say that the pair $(F,P)$ satisfies the Color Stability Property for a positive integer $r$ if, for every $\\delta>0$, there exists $n_0$ with the following property. 
If $n>n_0$ and $G$ is an $n$-vertex graph such that $c_{r,P}(G) \\geq r^{\\ex(n,F)}$, then there exists a partition $V(G)=V_1 \\cup \\cdots \\cup V_{k-1}$ such that $\\sum_{i=1}^{k-1} e(V_i) < \\delta n^2$.\n\\end{definition}\nThe authors of~\\cite{eurocomb15} have proved that, under the technical conditions below, if we show that a pair $(F,P)$ satisfies the Color Stability Property for a positive integer $r$, then we may immediately conclude that the Tur\\'{a}n graph $T_{k-1}(n)$ is the unique $(r,P)$-extremal graph for sufficiently large $n$. In the next statement, a pattern $P$ of $K_{k}$ is \\emph{locally rainbow} if there is a vertex that is incident with edges in $k-1$ different classes of $P$. For $K_3$, the patterns $K_3^R$ and $K_3^{(2)}$ are locally rainbow patterns, but $K_3^M$ is not.\n\\begin{theorem}\\cite[Lemma 4.4]{eurocomb15} \\label{exact} \nLet $k \\geq 3$ and let $P$ be a locally rainbow pattern of $K_{k}$ such that $(K_{k},P)$ satisfies the Color Stability Property of Definition~\\ref{colored_stability} for a positive integer \n\\begin{equation*}\nr \\geq\n\\begin{cases}\n3 & \\textrm{ if $k=3$}\\\\\n\\lceil ek \\rceil & \\textrm{ if $k\\geq 4$},\n\\end{cases}\n\\end{equation*}\nwhere $e$ denotes Euler's number. Then there is $n_0$ such that every graph of order $n > n_0^2$ has at most $r^{\\ex(n,K_{k})}$ distinct $(K_{k},P)$-free $r$-edge colorings. Moreover, the only graph on $n$ vertices for which the number of such colorings is $r^{\\ex(n,K_{k})}$ is the Tur\\'{a}n graph $T_{k-1}(n)$.\n\\end{theorem}\nThis justifies why we only need to prove Lemma~\\ref{lemma:main_result} to derive our exact result. \n\n\\subsection{Regularity and embeddings} Our strategy relies on a colored version of the celebrated Szemer\\'{e}di Regularity Lemma. To state it, we need some terminology. Let $G = (V,E)$ be a graph, and let $A$ and $B$ be two subsets of $V(G)$.
If $A$ and $B$ are non-empty, define the edge density between $A$ and $B$ by\n$$d_G(A,B) = \\frac{e_G(A,B)}{|A||B|},$$\nwhere $e_G(A,B)$ is the number of edges with one vertex in $A$ and the other in $B$.\nFor $\\eps > 0$ the pair $(A,B)$ is called \\emph{$\\eps$-regular} if, for all subsets $X \\subseteq A$ and $Y \\subseteq B$ satisfying $|X| \\geq \\eps|A|$ and $|Y| \\geq \\eps|B|$, we have\n$$|d_G(X,Y) - d_G(A,B)| < \\eps.$$\nAn \\emph{equitable partition} of a set $V$ is a partition of $V$ into pairwise disjoint classes $V_1,\\ldots,V_m$ satisfying $\\arrowvert |V_i| - |V_j| \\arrowvert \\leq 1$ for all pairs $i,j$. An equitable partition of the vertex set $V$ of $G$ into classes $V_1,\\ldots,V_m$ is called \\emph{$\\eps$-regular} if at most $\\eps \\binom{m}{2}$ of the pairs $(V_i, V_j)$ are not $\\eps$-regular.\n\nWe are now ready to state a colored version of the Regularity Lemma, whose proof may be found in \\cite{kosi}. For a positive integer $r$, we use the standard notation $[r] = \\{1, \\ldots , r\\}$.\n\\begin{lemma} \\label{lemma:regularity}\nFor every $\\eps > 0$ and every positive integer $r$, there exists $M = M(\\eps,r)$ such that the following holds. If the edges of any graph $G$ of order $n > M$ are $r$-colored $E(G) = E_1 \\cup \\cdots \\cup E_r$, then there is a partition of the vertex set $V(G) = V_1 \\cup \\cdots \\cup V_m$, with $1\/\\eps \\leq m \\leq M$, which is $\\eps$-regular simultaneously with respect to the graphs $G_i = (V,E_i)$ for all $i \\in [r]$.\n\\end{lemma}\nA partition $V_1 \\cup \\cdots \\cup V_m$ of $V(G)$ as in Lemma \\ref{lemma:regularity} will be called a \\textit{multicolored $\\eps$-regular partition}. 
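For intuition, the $\eps$-regularity definition above can be verified by brute force on toy examples (a hypothetical sketch; the check is exponential in the part sizes, so it is only usable on tiny sets, and the adjacency map layout is our own convention):

```python
import math
from itertools import chain, combinations

def density(adj, A, B):
    """Edge density d(A, B) = e(A, B) / (|A||B|)."""
    edges = sum(1 for a in A for b in B if b in adj.get(a, ()))
    return edges / (len(A) * len(B))

def subsets_of_size_at_least(S, k):
    return chain.from_iterable(combinations(S, m)
                               for m in range(max(k, 1), len(S) + 1))

def is_eps_regular(adj, A, B, eps):
    """Check |d(X, Y) - d(A, B)| < eps for all X, Y with
    |X| >= eps|A| and |Y| >= eps|B| (brute force)."""
    d = density(adj, A, B)
    ka, kb = math.ceil(eps * len(A)), math.ceil(eps * len(B))
    return all(abs(density(adj, X, Y) - d) < eps
               for X in subsets_of_size_at_least(A, ka)
               for Y in subsets_of_size_at_least(B, kb))

A, B = (0, 1, 2), (3, 4, 5)
complete = {a: {3, 4, 5} for a in A}   # constant density: regular for any eps
star = {0: {3, 4, 5}}                  # all edges at one vertex: irregular
```

The complete bipartite pair passes the check for every $\eps$, while concentrating all edges at a single vertex already violates the definition for moderate $\eps$.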
For $\\eta>0$, we may define a \\textit{multicolored cluster graph} $\\mathcal{H}(\\eta)$ associated with this partition, where the vertex set is $[m]$ and $e = \\{i,j\\}$ is an edge of $\\mathcal{H}(\\eta)$ if $\\{V_i,V_j\\}$ is an $\\eps$-regular pair in $G_c = (V,E_c)$ \\textit{for every} color $c \\in [r]$ and the edge density $d_{G_c}(V_i,V_j)$ is at least $\\eta$ for some color $c \\in [r]$. Each edge $e$ in $\\mathcal{H}(\\eta)$ is assigned the list $L_{e}$ containing all colors for which its edge density is at least $\\eta$, so that $|L_e| \\geq 1$ for every edge in the multicolored cluster graph $\\mathcal{H}(\\eta)$. Given an (edge)-coloring $\\widehat{F}$ of a graph $F$, we say that a multicolored cluster graph $\\mathcal{H}$ contains $\\widehat{F} $ if $\\mathcal{H}$ contains a copy of $F$ for which the color of each edge of $\\widehat{F} $ is contained in the list of the corresponding edge in $\\mathcal{H}$. More generally, if $F$ is a graph with color pattern $P$, we say that $\\mathcal{H}$ contains $(F, P)$ if it contains some coloring $\\widehat{F}$ of $F$ with pattern $P$.\n\nIn connection with this definition, we shall use the following standard embedding result, which is a special case of~\\cite[Lemma~2.4]{eurocomb15}.\n\\begin{lemma} \\label{lemma:colored_subgraph} \nFor every $\\eta > 0$ and all positive integers $k$ and $r$, there exist $\\eps = \\eps (r, \\eta, k) > 0$ and a positive integer $n_0(r, \\eta, k)$ with the following property. Suppose that $G$ is an $r$-colored graph on $n > n_0$ vertices with a multicolored $\\eps$-regular partition $V = V_1 \\cup \\cdots \\cup V_m$ which defines the multicolored cluster graph $\\mathcal{H} = \\mathcal{H}(\\eta)$. Let $F$ be a fixed $k$-vertex graph with a prescribed color pattern $P$ on $t \\leq r$ classes.
If $\\mathcal{H}$ contains $(F, P)$, then the graph $G$ also contains $(F, P)$.\n\\end{lemma}\n\n\\subsection{Stability}\nAnother basic tool in our paper is a stability result for graphs.\nWe shall use the following theorem by F\\\"uredi \\cite{fu15}.\n\\begin{theorem} \\label{theorem:stability_furedi} \nLet $G = (V,E)$ be a $K_{k}$-free graph on $n$ vertices. If $|E| = \\ex(n, K_{k}) - t$, then there exists a partition $V= V_1 \\cup \\cdots \\cup V_{k-1}$ with $\\sum_{i = 1}^{k-1} e(V_i) \\leq t$. \n \\end{theorem}\n \nWe also use the following version of a simple lemma due to Alon and Yuster \\cite{AY}. For completeness, we include its proof, which relies on the well known fact that any graph on $m\\geq 1$ edges contains a bipartite subgraph with more than $m\/2$ edges.\n\\begin{lemma} \\label{lemma:AY} \nLet $0 < t < n^2\/16$ and let $G$ be a $K_3$-free graph with $n$ vertices and with $\\ex(n,K_{3})-t$ edges. If we produce a new graph $G'$ by adding at least $5t$ new edges to the graph $G$, then $G'$ contains a copy of $K_{3}$ with exactly one new edge.\n\\end{lemma}\n\\begin{proof}\nLet $G=(V,E)$ be a $K_3$-free graph with $n$ vertices and with $\\ex(n,K_{3})-t$ edges and let $F$ be a set of at least $5t$ new edges. Let $E' \\subset E$ be a set with maximum cardinality such that $(V,E')$ is bipartite, let $V=V_1 \\cup V_2$ be a bipartition witnessing this, and let $E''=E \\setminus E'$. By Theorem~\\ref{theorem:stability_furedi} and the maximality of $E'$, we have $|E''| \\leq t$. \n\nThe number of edges of $F$ with one end in $V_1$ and the other in $V_2$ is at most \n$$|V_1||V_2|-|E'| \\leq \\ex(n,K_3)-|E'| \\leq |E|+t-|E'| = t+|E''|.$$\nSo, if $F'$ denotes the subset of $F$ containing all edges with both ends in $V_1$ or with both ends in $V_2$, we have\n$$|F'| \\geq 5t - (t+|E''|)=4t-|E''|.$$ \nTo conclude our proof, consider a maximum subset $F'' \\subset F'\\cup E''$ such that $G''=(V,F'')$ is bipartite.
It is well known that \n$$|F''|>\\frac{|F'\\cup E''|}{2}\\geq \\frac{(4t-|E''|)+|E''|}{2}=2t.$$ \n\nTo conclude the proof, consider the graph $G^\\ast=(V,E' \\cup F'')$. Observe that the edges of $G^\\ast$ with one end in $V_1$ and the other in $V_2$ are in $E'$, and the others are in $F''$. By the above considerations, \n$$|E'|+|F''| > \\ex(n,K_{3})-(t+|E''|)+2t \\geq \\ex(n,K_{3}),$$ \nso that $G^\\ast$ contains a triangle by Tur\\'{a}n's Theorem. We claim that this triangle contains exactly one edge in $F''$. It is obvious that it must contain at least one edge in $F''$ and that it cannot contain an edge with both ends in $V_1$ and another with both ends in $V_2$. Moreover, if the triangle contained two edges with all ends in one of the sides of the bipartition, say in $V_1$, the third edge of the triangle would also connect vertices in $V_1$, so that the three edges would lie in $F''$, contradicting the fact that $G''=(V,F'')$ is bipartite. Finally, since $G$ is $K_3$-free, the unique edge of the triangle lying in $F''$ cannot belong to $E''$, as otherwise all three edges of the triangle would lie in $E$; hence this edge lies in $F' \\subseteq F$ and is the only new edge of the triangle.\n\\end{proof}\n\n\n\\section{Proof of Lemma~\\ref{lemma:main_result}}\\label{sec:main_proof}\n\nIn this section, we will prove Lemma~\\ref{lemma:main_result}. To this end, fix $r \\in \\{2,\\ldots,26\\}$ and let $\\delta>0$. With foresight, we consider auxiliary constants $\\alpha$, $\\xi$ and $\\eta > 0$ such that\n\\begin{center}\n\\begin{equation}\\label{eq:quantification}\n\\alpha=\\frac{1}{1000}, \\ \\ \\xi < \\dfrac{\\delta}{22}, \\ \\ \\xi > 10^4\\cdot H((r+1)\\eta) + (10^4+1)\\cdot (r+1)\\cdot \\eta \\ \\ \\text{ and } \\ \\ \\eta < \\dfrac{\\delta}{2r},\n\\end{equation}\n\\end{center}\nwhere $H \\colon [0,1] \\to [0,1]$ is the \\emph{entropy function} given by $H(0) = H(1) = 0$ and by $H(x) = -x \\log_2 x - (1-x) \\log_2(1-x)$ for $x \\in (0,1)$. \n\nLet $\\varepsilon = \\varepsilon(r,\\eta, 3) > 0$ and $n_0 = n_0(r,\\eta,3)$ satisfy the assumptions of Lemma~\\ref{lemma:colored_subgraph}, and assume without loss of generality that $\\varepsilon < \\min\\{\\eta\/2,1\/n_0\\}$.
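The constraints in~\\eqref{eq:quantification} are simultaneously satisfiable: since $H(x) \\to 0$ as $x \\to 0$, choosing $\\eta$ small enough leaves room for $\\xi$ below $\\delta\/22$. The following is a quick numerical sketch; the values $\\delta = 0.1$, $r = 26$, $\\eta = 10^{-10}$ and $\\xi = 2 \\cdot 10^{-3}$ are hypothetical, chosen only for illustration.

```python
from math import log2

def H(x):
    # binary entropy function, with H(0) = H(1) = 0
    return 0.0 if x in (0, 1) else -x * log2(x) - (1 - x) * log2(1 - x)

delta, r = 0.1, 26   # hypothetical values, for illustration only
eta = 1e-10          # satisfies eta < delta / (2r)
xi = 2e-3            # candidate value of xi

# lower bound on xi required by (eq:quantification)
lower = 1e4 * H((r + 1) * eta) + (1e4 + 1) * (r + 1) * eta

assert eta < delta / (2 * r)
assert lower < xi < delta / 22
print("constants satisfying (eq:quantification) exist")
```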
Fix $M = M(r,\\varepsilon)$ given by Lemma~\\ref{lemma:regularity}.\n\nGiven an $n$-vertex graph $G$ such that $n \\geq n_0$, let $\\mathcal{C}=\\mathcal{C}(G)$ be the set of all $K_3^{(2)}$-free $r$-colorings of $G$. By Lemma \\ref{lemma:regularity} and the discussion following it, each coloring $\\Phi \\in \\mathcal{C}$ is associated with a multicolored $\\eps$-regular partition $V= V_1 \\cup \\cdots \\cup V_m$, where $1 \/ \\eps \\leq m \\leq M$. This partition is in turn associated with a multicolored cluster graph $\\mathcal{H}=\\mathcal{H}(\\eta)$. Our choice of parameters implies that $\\mathcal{H}$ must be $K_3^{(2)}$-free, otherwise the coloring of $G$ leading to it would contain a copy of $(K_3,\\leq 2)$ by Lemma~\\ref{lemma:colored_subgraph}.\n\nTowards an upper bound on the size of $\\mathcal{C}$, we determine an upper bound on the number of colorings that give rise to a fixed partition $ V_1 \\cup \\cdots \\cup V_m$ and to a fixed multicolored cluster graph $\\mathcal{H}$. We first consider the edges of $G$ whose colors are not captured by the lists $L_e$ associated with edges $e\\in E(\\mathcal{H})$. Lemma~\\ref{lemma:regularity} ensures that, for each color in $[r]$, there are at most $\\eps \\binom{m}{2}$ irregular pairs with respect to the partition $V = V_1 \\cup \\dots \\cup V_m$, hence at most \n\\begin{align}\nr \\cdot \\eps \\cdot \\binom{m}{2} \\cdot \\left(\\frac{n}{m}\\right) ^ 2 \\leq r \\eps \\cdot n ^ 2 \\leq \\frac{r \\eta}{2} \\cdot n ^ 2 \\label{eq:irregular}\n\\end{align}\nedges of $G$ are contained in an irregular pair with respect to one of the colors. Moreover, there are at most\n\\begin{align}\nm \\cdot \\left(\\frac{n}{m}\\right) ^ 2 = \\frac{n ^ 2}{m} \\leq \\eps n ^ 2 \\leq \\frac{\\eta}{2} \\cdot n ^ 2 \\label{eq:inside}\n\\end{align}\nedges with both ends in the same class $V_i$.
Finally, we consider edges $f$ whose endpoints are in distinct classes $V_i$ and $V_j$ and such that the edge density of the edges with the color of $f$ is less than $\\eta$ with respect to this pair. The number of edges of this type is at most\n\\begin{align}\nr \\cdot \\eta \\cdot \\binom{m}{2} \\cdot \\left(\\frac{n}{m}\\right) ^ 2 \\leq \\frac{r \\eta}{2} \\cdot n ^ 2. \\label{eq:density}\n\\end{align}\nUsing \\eqref{eq:irregular}, \\eqref{eq:inside} and \\eqref{eq:density} gives at most $(r+1) \\eta n^2$ edges of these three types. \n\nClearly, the remaining edges of $G$ have endpoints in pairs that are regular for every color and must be assigned a color that is dense with respect to the pair containing their endpoints, i.e., their color must lie in the list of the corresponding edge of $\\mathcal{H}$. This means that the number of elements of $\\mathcal{C}$ that can be associated with a given multicolored partition $V_1 \\cup \\cdots \\cup V_m$ and a given $m$-vertex multicolored cluster graph $\\mathcal{H}$ is bounded above by \n\\begin{align} \\label{eq:multicolor_bound}\n \\binom{n^2}{(r+1) \\eta n^2} \\cdot r^{(r+1) \\eta n^2} \\cdot \\left( \\prod_{j=1}^{r} j^{e_j(\\mathcal{H})} \\right)^{\\left( \\frac{n}{m} \\right)^2},\n\\end{align}\nwhere $e_j(\\mathcal{H})$ denotes the number of edges of $\\mathcal{H}$ whose lists have size equal to $j$.\nHere, we assume that $m$ divides $n$ to avoid dealing with lower order terms that can be absorbed into the error term $r^{(r+1) \\eta n^2}$.\nThere are at most $M^n$ partitions of $V$ into $m \\leq M$ classes and at most $2^{r\\binom{m}{2}}$ multicolored cluster graphs with vertex set $[m]$.
Moreover, it is well-known that the entropy function satisfies\n$$\\binom{n^2}{(r+1) \\eta n^2} \\leq 2^{H((r+1)\\eta)\\, n^2}.$$\n\nThus, summing the upper bound~\\eqref{eq:multicolor_bound} over all partitions and all corresponding multicolored cluster graphs, the number of $K_3^{(2)}$-free edge colorings of $G$ is at most\n\\begin{align}\nM^n \\cdot 2^{r M^2\/2} \\cdot 2 ^ {H((r+1) \\eta) n^2} \\cdot r^{(r+1) \\eta n^2} \\cdot \\max_{\\mathcal{H}} \\left( \\prod_{j=1}^{r} j ^ {\\frac{e_j(\\mathcal{H})}{|V(\\mathcal{H})|^2}} \\right)^{n^2}. \\label{eq:coloring_of_g}\n\\end{align}\n\nOur aim is to find an upper bound on \\eqref{eq:coloring_of_g}. The term $j=1$ in the product in \\eqref{eq:coloring_of_g} does not affect the result. So, we define $\\mathcal{S}=\\mathcal{S}(G)$ to be the set of all subgraphs of multicolored cluster graphs $\\mathcal{H}$ of $G$ such that all edges are associated with lists of size at least two. Abusing the terminology, we also call the subgraph given by edges whose lists have size at least 2 the multicolored cluster graph associated with a coloring of $G$. Note that $\\mathcal{H} \\in \\mathcal{S}$ is $K_3^{(2)}$-free if and only if all lists associated with edges of a triangle are mutually disjoint. Given $\\mathcal{H} \\in \\mathcal{S}$, we let \n\\begin{equation}\\label{def_cH}\nc(\\mathcal{H})=\\prod_{e\\in E(\\mathcal{H})}|L_e|^{\\frac{1}{|V(\\mathcal{H})|^2}}.\n\\end{equation}\nWe wish to find $\\max_{\\mathcal{H} \\in \\mathcal{S} } c(\\mathcal{H})$ to bound~\\eqref{eq:coloring_of_g}.\n\nAs discussed in~\\cite{2coloredlagos}, this is easy to do for the case of $r\\leq 12$ colors (for completeness, we present the proof for $r\\leq 12$ as an appendix).
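The binomial--entropy estimate used above, $\\binom{N}{k} \\leq 2^{H(k\/N)N}$, can be checked exhaustively for small parameters. In the sketch below, $N$ plays the role of $n^2$ and $k$ that of $(r+1)\\eta n^2$; the cutoff $N \\leq 60$ is an arbitrary illustrative choice.

```python
from math import comb, log2

def H(x):
    # binary entropy function, with H(0) = H(1) = 0
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

# Exhaustive check of binom(N, k) <= 2^(H(k/N) * N) for all 0 <= k <= N <= 60.
ok = all(comb(N, k) <= 2 ** (H(k / N) * N)
         for N in range(1, 61) for k in range(N + 1))
print(ok)  # True
```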
In the remainder of the proof, we focus on the remaining values of $r$ and consider the functions\n\\begin{eqnarray*}\nr_0&=&r_0(r)=\n\\begin{cases} \n6& \\textrm{ if } r=13\\\\\n\\lfloor r-2\\sqrt{r}\\rfloor& \\textrm{ if } 14\\leq r \\leq 26.\n\\end{cases} \\\\%\\label{def:r0}\\\\\nr_1&=&r_1(r)=r_0+1. \n\\end{eqnarray*}\nFor $r\\geq 13$, the crucial property in the definition of $r_1$ is that \n\\begin{equation}\\label{eq:Ar}\nA(r)=\\left\\lfloor (r-r_1)\/2 \\right\\rfloor \\cdot \\left\\lceil (r-r_1)\/2\\right\\rceil < r.\n\\end{equation}\nNote that $A(r)\\leq (r-r_1)^2\/4$ and, as both factors of $A(r)$ are integers, \\eqref{eq:Ar} yields $A(r) \\leq r-1$.\nThe validity of~\\eqref{eq:Ar} may be verified directly for $r=13$; if $r \\geq 14$, we have $r-r_1=r-\\lfloor r-2\\sqrt{r}\\rfloor-1 < 2\\sqrt{r}$, so that $A(r) \\leq (r-r_1)^2\/4 < r$.\n\nLet $\\mathcal{H}^{\\prime}$ be the subgraph of $\\mathcal{H}$ given by the edges whose lists have size at least $r_1$. The key step in the proof is the following claim.\n\\begin{claim}\\label{claim13}\nIf $G$ is an $n$-vertex graph with at least $r^{\\ex(n,K_3)}$ distinct $K_3^{(2)}$-free $r$-colorings, then every multicolored cluster graph $\\mathcal{H}$ associated with one of these colorings satisfies $e(\\mathcal{H}^{\\prime}) \\geq \\ex(m, K_3) - \\xi m^2$, where $m=|V(\\mathcal{H})|$.\n\\end{claim}\nWe defer the proof of Claim~\\ref{claim13} and first show how it implies Lemma~\\ref{lemma:main_result}. Observe that $\\mathcal{H}^{\\prime}$ is $K_3$-free: if $f_1$, $f_2$ and $f_3$ were the edges of a triangle in $\\mathcal{H}^{\\prime}$, the inequality\n$$|L_{f_1}|+|L_{f_2}|+|L_{f_3}| \\geq 3r_1 \\geq 2r_1+1>r$$ \nimplies that two of the lists have non-empty intersection, leading to a $K_3^{(2)}$ in $G$ by Lemma~\\ref{lemma:colored_subgraph}. \n\nBy Theorem~\\ref{theorem:stability_furedi}, there is a partition $U_1 \\cup U_{2} = [m]$ with\n\\begin{align*}\ne_{\\mathcal{H}^{\\prime}}(U_1) + \\ e_{\\mathcal{H}^{\\prime}}(U_{2}) \\leq \\xi m^2,\n\\end{align*}\nwhere $e_{\\mathcal{H}^{\\prime}}(U_i)$ is the number of edges of $\\mathcal{H}^{\\prime}$ with both endpoints in $U_i$. The bipartite subgraph $\\widehat{\\mathcal{H}}$, obtained from $\\mathcal{H}^{\\prime}$ by removing all edges with both endpoints in the same class, satisfies\n\\begin{align*}\ne(\\widehat{\\mathcal{H}}) \\geq (\\ex(m, K_3) - \\xi m ^ 2) - \\xi m^2= \\ex(m, K_3) - 2 \\xi m^2.\n\\end{align*}\n\nWe claim that, even if we add arbitrary edges with lists of size $1$ to $\\mathcal{H}$ (while preserving its property of being $K_3^{(2)}$-free), $e_1(\\mathcal{H}) + \\cdots + e_{r_0}(\\mathcal{H}) \\leq 10\\xi m^2$.
Otherwise, by our choice of $\\xi$ in~\\eqref{eq:quantification}, Lemma~\\ref{lemma:AY} can be applied and the graph obtained by adding the edges in $E_1 \\cup \\cdots \\cup E_{r_0}$ to $\\widehat{\\mathcal{H}}$ would contain a $K_3$ such that exactly one of the edges, say $f_1$, is in some set $U_i$. Let $f_2, f_{3}$ be the other edges of the copy of $K_3$, which lie in $E_{r_1} \\cup \\cdots \\cup E_{r}$. By construction, we have \n$$|L_{f_1}|+|L_{f_2}|+|L_{f_3}| \\geq 1+2r_1>r,$$\na contradiction.\n\nAs a consequence, the number of edges of $\\mathcal{H}$ with both ends in the same set $U_i$ is at most $11 \\xi m^2$. Let $W_i = \\cup_{j \\in U_i} V_j$ for $i \\in \\{1, 2\\}$. Then, by our choice of $\\eta$ and $\\xi$ in~\\eqref{eq:quantification}, we have\n\\begin{align*}\ne_G(W_1) + e_G(W_{2}) \\leq r \\eta n^2 + (n\/m)^2 \\cdot (e_\\mathcal{H}(U_1) + e_\\mathcal{H}(U_{2})) < \\delta n^2,\n\\end{align*} \nas required. This proves Lemma~\\ref{lemma:main_result}.\n\nWe now move to the actual proof of Claim~\\ref{claim13}. Given a cluster graph $\\mathcal{H}$, let $E_b(\\mathcal{H})$ be the set of all edges whose color lists have sizes between $r_1$ and $r$. We refer to them as the \\emph{blue} edges of $\\mathcal{H}$. Let $E_g(\\mathcal{H})$ be the set of all edges whose color lists have sizes between $2$ and $r_0$, the \\emph{green} edges of $\\mathcal{H}$. The main ingredient in the proof of Claim~\\ref{claim13} is the following auxiliary lemma.\n\\begin{lemma}\\label{lemma:claim13}\nLet $r$ be an integer such that $13\\leq r \\leq 26$ and let $\\mathcal{H}$ be a $(K_3,\\leq 2)$-free multicolored cluster graph for which all edges are green.
Then, for all $0 < \\alpha \\leq \\frac{1}{1000}$,\n$$c(\\mathcal{H})\\leq r^{\\frac14-\\alpha}.$$\n\\end{lemma}\n\nMoreover, we have $q > 0$ and $q \\leq \\sqrt{\\xi}m$, so that, by~\\eqref{eq_number} and our restriction on the number of blue edges, i.e., $k_1+\\sum_{j=1}^{k_1} n_1(e_{j}) \\leq \\ex(m, K_3) - \\xi m ^ 2$, we have\n\\begin{eqnarray*}\n\\sum_{j=1}^{k_1} n_2(e_{j})&=& k_1m-k_1^2-\\left(k_1+\\sum_{j=1}^{k_1} n_1(e_{j})\\right)\\\\\n&\\geq& q(2\\sqrt{\\xi}m-q).\n\\end{eqnarray*}\nEquation \\eqref{eq:26colors1} is at most\n\\begin{eqnarray}\\label{eq:26colors2} \n&& \\left(\\frac{B(r)}{r}\\right)^{q(2\\sqrt{\\xi}m-q)} \\cdot r^{\\frac{m^2}{4}-\\alpha(2\\sqrt{\\xi}m-2q)^2}.\n\\end{eqnarray}\nIf $q \\leq 3\\sqrt{\\xi} m\/4$, equation~\\eqref{eq:26colors2} is at most\n$$ r^{\\frac{m^2}{4}-\\alpha(2\\sqrt{\\xi}m-2q)^2} \\leq r^{\\frac{m^2}{4}- \\frac{\\alpha\\xi m^2}{4}} \\leq r^{\\frac{m^2}{4}-\\frac{1}{10^4}\\xi m^2}$$\nfor $\\alpha \\geq 1\/10^3$.\n\nIf $q \\geq 3\\sqrt{\\xi} m\/4$, equation~\\eqref{eq:26colors2} is at most\n$$\\left(\\frac{B(r)}{r}\\right)^{\\frac{15\\xi m^2}{16}} \\cdot r^{\\frac{m^2}{4}} \\stackrel{\\eqref{bound:B}}{\\leq} r^{\\frac{m^2}{4}-\\frac{1}{10^4}\\xi m^2},$$\nas for $2 \\leq r \\leq 26$\n$$\n\\left(\\frac{B(r)}{r}\\right)^{\\frac{15}{16}} \\cdot r^{\\frac{1}{10^4}} \\leq \\left(\\frac{r-1}{r}\\right)^{\\frac{15}{16}} \\cdot r^{\\frac{1}{10^4}} \\leq e^{-\\frac{15}{16r} + \\frac{1}{10^4} \\ln r } \\leq 1.\n$$\n\n\nCombining the above cases, and using the upper bound~\\eqref{eq:coloring_of_g}, we conclude that the number of $K_3^{(2)}$-free colorings of the graph $G$ satisfies\n\\begin{eqnarray*}\n|\\mathcal{C}_{r,(K_3,\\leq 2)}(G)| &\\leq & M^n \\cdot 2^{(H((r+1) \\eta)) n^2 + r M^2 \/ 2} \\cdot r^{(r+1) \\eta n^2} \\cdot\\left( r^{\\frac{m^2}{4}-\\frac{1}{10^4}\\xi m^2}\\right)^{\\left( \\frac{n}{m} \\right) ^ 2} \\\\\n &\\stackrel{n \\gg 1}{\\ll}& r^{\\ex(n, K_3)}, \\label{eq:result014}\n\\end{eqnarray*}\nas $\\xi > (10^4+1) \\cdot (r+1) \\cdot \\eta + 10^4
\\cdot H((r+1)\\eta)$, which is a contradiction to the hypothesis that $ |\\mathcal{C}_{r,(K_3,\\leq 2)}(G)| \\geq r^{\\ex(n,K_3)}$ and proves Claim~\\ref{claim13}. \n\nTo conclude the proof of Claim~\\ref{claim13} (and thus of Lemma~\\ref{lemma:main_result}), we still need to prove Lemma~\\ref{lemma:claim13}. To this end, fix an integer $r$ such that $13 \\leq r \\leq 26$, and let $\\mathcal{H}$ be a $K^{(2)}_3$-free multicolored cluster graph for which all edges are green. Recall that we are assuming that no edges of $\\mathcal{H}$ have lists of size less than two. \n\n\\subsection{Proof of Lemma~\\ref{lemma:claim13}}\n\nThe proof of Lemma~\\ref{lemma:claim13} will be by induction, and we shall split the set $\\mathcal{S}$ of $K_3^{(2)}$-free cluster graphs for which all edges are green into two classes. One such class, called $\\mathcal{S}_1$, contains all $K_4$-free $\\mathcal{H}\\in \\mathcal{S}$ such that there is no copy of $K_3$ whose three edges have color lists of size at least four; the other class is $\\mathcal{S} \\setminus \\mathcal{S}_1$.\n\\begin{lemma}\\label{lemma26:s1<27}\nGiven $\\mathcal{H} \\in \\mathcal{S}_1$ with $m$ vertices, the following holds for $13 \\leq r \\leq 26$ and $0 < \\alpha \\leq \\frac{1}{1000}$:\n\\begin{eqnarray*}\nc(\\mathcal{H}) \\leq r^{\\frac{1}{4} - \\alpha}. \n\\end{eqnarray*}\n\\end{lemma}\n\n\\begin{proof}\nFix $\\mathcal{H} \\in \\mathcal{S}_1$ with $m$ vertices. If $m=1$, there is nothing to prove. If $m=2$, then $c(\\mathcal{H})^4 \\leq r_0$. For $r=13$, we have $r_0 = 6 \\leq 13^{1-4\\alpha}$ for $\\alpha \\leq 0.0753 \\leq (\\log{13}-\\log{6})\/(4\\log{13})$. For $r\\geq 14$, we have \n$$r_0 \\leq r-2\\sqrt{r} = r\\left(1-\\frac{2}{\\sqrt{r}} \\right) \\leq r e^{-2\/\\sqrt{r}} \\leq r^{1-4\\alpha}$$ for \n$\\alpha \\leq 0.0300 <\\frac{1}{2\\sqrt{26} \\ln 26} \\leq \\frac{1}{2\\sqrt{r} \\ln r}$. In both cases, $c(\\mathcal{H}) \\leq r_0^{1\/4} \\leq r^{\\frac14-\\alpha}$.\n \n Next assume $m\\geq 3$. By the definition of $\\mathcal{S}_1$, the set $E_4\\cup \\cdots \\cup E_{r_0}$ cannot induce a copy of $K_3$.
Thus, by Tur\\'{a}n's Theorem, we have\n\\begin{equation*}\n|E_4| +\\cdots + |E_{r_0}| \\leq \\ex(m, K_3) \\leq \\frac{1}{4} m^2.\n\\end{equation*}\n\nAgain by the definition of $\\mathcal{S}_1$, the set $E_2\\cup \\cdots \\cup E_{r_0}$ cannot contain a copy of $K_4$, leading to\n\\begin{equation*}\n|E_2| +\\cdots + |E_{r_0}| \\leq \\ex(m, K_4) \\leq \\frac{1}{3} m^2.\n\\end{equation*}\n\nIf we set $x_i=|E_i|\/m^2$ for $i \\in \\{2,\\ldots,r_0\\}$, by~\\eqref{def_cH}, we have\n$$c(\\mathcal{H}) = \\prod_{j=2}^{r_0} j^{x_j},$$ so that the logarithm $\\ln c(\\mathcal{H})$ is bounded above by the solution to the linear program\n\\begin{eqnarray*}\n&\\max& \\sum_{j=2}^{r_0} x_j \\ln j\\\\\n&\\textrm{s.t.}& \\sum_{j=4}^{r_0} x_j\\leq \\frac{1}{4},~~\n\\sum_{j=2}^{r_0} x_j \\leq \\frac{1}{3}\\\\\n&&x_j \\geq 0,~j\\in \\{2,\\ldots,r_0\\}.\n\\end{eqnarray*}\nSolving this linear program gives the optimal solution $x_{r_0}=1\/4$, $x_3=1\/12$ and $x_j=0$ for the remaining values of $j$. For $14\\leq r\\leq 26$, we have\n\\begin{eqnarray}\n&& r_0^{\\frac14} \\cdot 3^{\\frac{1}{12}} \\leq \\left( r - 2 \\sqrt{r}\\right)^{\\frac{1}{4}}\\cdot 3^{\\frac{1}{12}} \\leq r^{\\frac{1}{4} - \\alpha}\\nonumber \\\\\n&\\Longrightarrow& r^{\\alpha} \\cdot \\left( 1 - \\frac{2}{\\sqrt{r}} \\right)^{\\frac{1}{4}} \\cdot 3^{\\frac{1}{12}} \\leq 26^{\\alpha} \\cdot \\left( 1 - \\frac{2}{\\sqrt{26}} \\right)^{\\frac{1}{4}} \\cdot 3^{\\frac{1}{12}} \\leq 1. \\label{eq_final}\n\\end{eqnarray}\nFor $r=13$, we get\n\\begin{equation}\\label{eq_final2}\n13^{\\alpha} \\cdot \\left( \\frac{6}{13} \\right)^{\\frac{1}{4}} \\cdot 3^{\\frac{1}{12}} \\leq 1.\n\\end{equation}\n Equations~\\eqref{eq_final} and~\\eqref{eq_final2} hold for $0 < \\alpha \\leq 0.0101$. This leads to $c(\\mathcal{H}) \\leq r^{1\/4 -\\alpha}$.\n\\end{proof}\n\nTo complete the proof of Claim~\\ref{claim13}, we consider the cluster graphs that are not considered in Lemma~\\ref{lemma26:s1<27}. 
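The bound obtained from the optimal LP solution, $r_0^{1\/4} \\cdot 3^{1\/12} \\leq r^{\\frac14-\\alpha}$, underlying \\eqref{eq_final} and \\eqref{eq_final2}, can be confirmed numerically for every admissible $r$. The following sketch is illustrative only; it checks the inequality at $\\alpha = 0.0101$.

```python
from math import floor, sqrt

def r0(r):
    # the function r_0(r) defined earlier (r = 13 is a special case)
    return 6 if r == 13 else floor(r - 2 * sqrt(r))

alpha = 0.0101
for r in range(13, 27):
    # value of the LP optimum: x_{r_0} = 1/4 and x_3 = 1/12
    lp_value = r0(r) ** 0.25 * 3 ** (1 / 12)
    assert lp_value <= r ** (0.25 - alpha), r
print("LP bound verified for all 13 <= r <= 26 at alpha = 0.0101")
```

The tightest case is $r = 25$, where $r_0 = 15$; even there the inequality holds with a visible margin.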
\nIn our arguments, we use the following optimization problem for given positive integers $p \\geq 2$ and $L$:\n\\begin{eqnarray}\\label{maxlemma}\n&\\max& \\prod_{j=1}^c x_j \\\\\n&\\textrm{s.t.}& c, x_1,\\dots , x_c\\in {\\mathbb N} = \\{1,2,\\ldots \\} \\nonumber \\\\\n&& x_1+\\cdots+x_c \\leq p\\nonumber\\\\\n&& c \\leq L.\\nonumber\n\\end{eqnarray}\n\n\\begin{definition}\\label{def:cs}\nGiven positive integers $k \\geq 2$ and $r \\geq 2$, let\n\\begin{itemize}\n\\item[(i)] $c_k(r)$ be the maximum of the optimization problem~\\eqref{maxlemma} with $p=r\\cdot \\lfloor k\/2 \\rfloor$ and $L=\\binom{k}{2}$; \n\n\\item[(ii)] $c_k^*(r)$ be the maximum of the optimization problem~\\eqref{maxlemma} with $p=r$ and $L=k$. \n\\end{itemize}\n\\end{definition}\n\nWe shall use the following three straightforward lemmas.\n \\begin{lemma}\\label{gen_claim}\nLet $k \\geq 3$, let $\\mathcal{H}$ be a $K_3^{(2)}$-free multicolored cluster graph such that $|L_e| \\geq 2$ for all $e \\in E(\\mathcal{H})$, and assume that $A \\subset V(\\mathcal{H})$ is such that $\\mathcal{H}[A]$ is isomorphic to $K_k$. For a vertex $v \\in V(\\mathcal{H})$ let $E'(v)=\\{\\{v,x\\} \\in E(\\mathcal{H}) \\colon x \\in A\\}$. For any $v \\in V(\\mathcal{H}) \\setminus A$, it holds that\n \\begin{equation}\\label{gen_UB}\n \\prod_{e \\in E'(v)} |L_e| \\leq c^*_k(r) \\leq \\overline{c}_k(r)=\\max\\left\\{\\left(\\frac{r}{j}\\right)^{j} \\colon j \\in \\{1,\\ldots,k\\}\\right\\}.\n \\end{equation}\n\\end{lemma}\n\n\\begin{proof}\nFor each edge $e \\in E'(v)$, set $x_e=|L_e|$. Because $A$ induces a clique and $\\mathcal{H}$ is $K_3^{(2)}$-free, the lists associated with edges between $v$ and $A$ are mutually disjoint, so that $\\sum_{e \\in E'(v)} x_e \\leq r$. Let $j\\leq k$ be the number of edges between $v$ and $A$. 
It is clear that \n$$\\prod_{e \\in E'(v)} |L_e| \\leq \\max\\left\\{\\prod_{i=1}^j a_i \\colon 1 \\leq j \\leq k, a_1,\\ldots,a_j>0, a_1+\\cdots+a_j \\leq r\\right\\}=c_k^*(r).$$\nThe result follows because for $a_1+\\cdots+a_j \\leq r$ we have\n$$\\prod_{i=1}^j a_i\\leq \\left(\\frac{r}{j}\\right)^j.$$\n\\end{proof}\n\n\\begin{lemma}\\label{lemma:UC}\nLet $r$ and $k \\geq 3$ be positive integers. For $j \\geq 1$, consider a partition $E(K_{k}) = E_1 \\cup \\cdots \\cup E_j$ of the edge set of the complete graph $K_k$ and integers $1 \\leq s_1, \\ldots , s_{j} \\leq r$ such that\n $$r \\left\\lfloor \\frac{k}{2} \\right\\rfloor < \\sum_{i=1}^j |E_i| s_i.$$ \nThen, for any assignment of color lists in $[r]$ to the edges of $K_k$ such that, for each $i$, all edges $e \\in E_i$ have list size at least $s_i$, there exists a copy of $K_3$ for which two of the lists have non-empty intersection. \n \\end{lemma}\n \n\\begin{proof}\nAssume that there is an assignment of lists as in the statement such that, for all copies of $K_3$ in $K_k$, the lists associated with any two of its edges are disjoint. This means that, for every color $c$, the edges whose lists contain $c$ form a matching in $K_k$. Since a maximum matching in $K_k$ has size $\\lfloor k\/2 \\rfloor$, we must have\n$$ r \\left\\lfloor \\frac{k}{2} \\right\\rfloor \\geq \\sum_{e \\in E(K_k)} |L_e| \\geq \\sum_{i=1}^j |E_i| s_i,$$\ncontradicting our assumption about $r$ and $k$.\n \\end{proof}\n\n\\begin{lemma}\\label{lemma(ck)}\nLet $r\\geq 2$ and $k\\geq 3$ be integers. Let $\\mathcal{H}$ be a $K_3^{(2)}$-free multicolored cluster graph whose underlying graph is $K_k$ and whose edge lists are contained in $[r]$ and have size at least two. Then \n$$\\prod_{e \\in E(\\mathcal{H})} |L_e| \\leq \\tilde{c}_k(r)=\\left(\\frac{r}{\\binom{k}{2}}\\left\\lfloor \\frac k2 \\right\\rfloor \\right)^{\\binom{k}{2}}.$$\n\\end{lemma}\n\n\\begin{proof}\nGiven an edge $e \\in E(\\mathcal{H})$, let $x_e=|L_e|$.
Let $E_i$ denote the set of edges of $\\mathcal{H}$ whose lists have size $i$. By Lemma~\\ref{lemma:UC}, $\\sum_{e \\in E(\\mathcal{H})} x_e = \\sum_{i=2}^r i \\cdot |E_i| \\leq r \\lfloor k\/2 \\rfloor$, since $\\mathcal{H}$ is $K_3^{(2)}$-free.\n\nIn particular, the vector $(x_e)_{e \\in E(\\mathcal{H})}$ is a feasible solution to the optimization problem~\\eqref{maxlemma} with $p=r \\lfloor k\/2 \\rfloor$ and $L=\\binom{k}{2}$. For the inequality, observe that, by the AM--GM inequality, any choice of $\\binom{k}{2}$ positive real numbers with $a_1+ \\cdots + a_{\\binom{k}{2}} \\leq r\\lfloor k\/2\\rfloor$ satisfies \n$$\\prod_{i=1}^{\\binom{k}{2}} a_i \\leq \\left(\\frac{r}{\\binom{k}{2}}\\left\\lfloor \\frac k2 \\right\\rfloor \\right)^{\\binom{k}{2}}.$$\nThis concludes the proof.\n\\end{proof}\n\nWe are now ready to prove the desired result.\n\\begin{lemma}\\label{lemma26:s2<27}\nFix an integer $r$ such that $13 \\leq r \\leq 26$. Given $\\mathcal{H} \\in \\mathcal{S} \\setminus \\mathcal{S}_1$ and $0 < \\alpha\\leq \\frac{1}{1000}$, we have\n\\begin{eqnarray*}\\label{eq:lemma26}\nc(\\mathcal{H}) \\leq r^{\\frac14 - \\alpha}.\n\\end{eqnarray*}\n\\end{lemma}\n\n\\begin{proof}\nLet $r \\in \\{13,\\ldots,26\\}$. For a contradiction, assume that the result is false and choose a counterexample $\\mathcal{H} \\in \\mathcal{S} \\setminus \\mathcal{S}_1$ with the minimum number of vertices. Recall that the edges of $\\mathcal{H}$ have lists with sizes between $2$ and $r_0$. Let $m$ be the number of vertices of $\\mathcal{H}$. We first show that $\\mathcal{H}$ is not isomorphic to a clique $K_m$ such that $3 \\leq m \\leq 6$.\n\nFor $\\mathcal{H}$ isomorphic to $K_3$, Lemma~\\ref{lemma(ck)} tells us that $c(\\mathcal{H})^{9} \\leq \\tilde{c}_3(r) = (r\/3)^3$.
Lemma~\\ref{lemma26:s2<27} holds in this case because $c(\\mathcal{H}) \\leq {((r\/3)^3)}^{\\frac{1}{9}} \\leq r^{\\frac{1}{4} - \\alpha}$ for $r^{\\frac{1}{12}+\\alpha} \\leq 26^{\\frac{1}{12}+\\alpha} \\leq 3^{\\frac13}$, which in turn holds for\n$$\\alpha \\leq 0.02906< \\frac{4\\ln{3}-\\ln{26}}{12 \\ln 26} .$$\n\nFor $\\mathcal{H}$ isomorphic to $K_4$, Lemma~\\ref{lemma(ck)} gives $c(\\mathcal{H})^{16} \\leq \\tilde{c}_4(r) =(r\/3)^6$. Therefore, $c(\\mathcal{H}) \\leq {((r\/3)^6)}^{\\frac{1}{16}} \\leq r^{\\frac{1}{4} - \\alpha}$, where the second inequality holds for \n$$\n\\alpha \\leq 0.00144< \\frac{3\\ln{3}-\\ln{26}}{8 \\ln 26}.\n$$\n\nIf $\\mathcal{H}$ is isomorphic to $K_5$, Lemma~\\ref{lemma(ck)} gives $c(\\mathcal{H})^{25} \\leq \\tilde{c}_5(r) = (r\/5)^{10}$. Therefore, $c(\\mathcal{H}) \\leq ((r\/5)^{10})^{\\frac{1}{25}} ={(r\/5)^{2\/5}} < r^{\\frac{1}{4} - \\alpha}$ if\n$$\n\\alpha \\leq 0.04759 < \\frac{8 \\ln 5 - 3 \\ln 26}{20 \\ln 26}.\n$$\n\nFinally, if $\\mathcal{H}$ is isomorphic to $K_6$, by Lemma~\\ref{lemma(ck)} the product of the sizes of the color lists of $\\mathcal{H}$ is at most $\\tilde{c}_6(r)$. We get \n\\begin{eqnarray*}\nc(\\mathcal{H}) &\\leq& (\\tilde{c}_6(r))^{\\frac{1}{36}} = \\left( \\frac{r}{5} \\right)^{\\frac{15}{36}} \n\\leq r^{\\frac{1}{4} - \\alpha},\n\\end{eqnarray*}\nwhich holds for \n$$ \\alpha \\leq 0.03915< \\frac{15 \\ln 5 - 6 \\ln 26}{36 \\ln 26}.\n$$\n\nHaving established that $\\mathcal{H}$ is not isomorphic to a clique on $3\\leq m \\leq 6$ vertices, let $\\omega(\\mathcal{H})\\geq 3$ denote the size of a maximum clique in $\\mathcal{H}$. If $\\omega(\\mathcal{H})\\geq 6$, then the fact that $\\mathcal{H}$ is not isomorphic to $K_6$ implies that $m>6$. Fix $k=6$ and choose a set $A$ of vertices such that $A$ induces a copy of $K_6$ in $\\mathcal{H}$. Otherwise, let $k=\\omega(\\mathcal{H})$ and fix a set $A$ of vertices of size $k$ that induces a copy of $K_k$ in $\\mathcal{H}$. By the above, we know that $m>k$ in this case.
Given a vertex $v \\in V(\\mathcal{H})\\setminus A$, let $c_v$ be the product of the sizes of the lists on edges connecting $v$ to $A$. Clearly,\n\\begin{equation}\\label{eq_UB}\nc(\\mathcal{H})^{m^2} = c(\\mathcal{H}[A])^{k^2} \\cdot \\left(\\prod_{v \\in V(\\mathcal{H})\\setminus A} c_v \\right) \\cdot c(\\mathcal{H}[V(\\mathcal{H})\\setminus A])^{(m-k)^2}.\n\\end{equation}\nWe know that $c(\\mathcal{H}[A]),c(\\mathcal{H}[V(\\mathcal{H})\\setminus A]) \\leq r^{\\frac{1}{4} - \\alpha} $ by the minimality of $\\mathcal{H}$. \n\nIf $k=6$, we have $c_v \\leq \\overline{c}_{6}(r)$ by Lemma~\\ref{gen_claim}. Then~\\eqref{eq_UB} leads to\n\\begin{eqnarray*}\nc(\\mathcal{H})^{m^2} &\\leq & r^{(\\frac{1}{4}- \\alpha)6^2} \\cdot (\\overline{c}_{6}(r))^{m-6} \\cdot r^{(\\frac{1}{4}- \\alpha)(m-6)^2}\\nonumber \\\\\n&=& \\left(\\frac{\\overline{c}_6(r)}{r^{3 - 12\\alpha}}\\right)^{m-6} \\cdot r^{(\\frac{1}{4} - \\alpha)m^2}.\n\\end{eqnarray*}\nWe conclude that $c(\\mathcal{H})^{m^2} \\leq r^{(\\frac{1}{4} - \\alpha)m^2}$ because $\\overline{c}_6(r)\/r^{3-12\\alpha} <1$ for $13 \\leq r \\leq 26$. Here, it suffices to verify that the quantity $\\overline{c}_6(r)$ defined in~\\eqref{gen_UB} satisfies $\\overline{c}_6(r) < r^{3-12\\alpha}$ for all $0 < \\alpha \\leq \\frac{1}{1000}$.\n\nIf $k < 6$, the same decomposition~\\eqref{eq_UB} leads to\n\\begin{equation}\\label{UB_otherk}\nc(\\mathcal{H})^{m^2} \\leq \\left(\\frac{\\overline{c}_k(r)}{r^{(\\frac{1}{2} - 2\\alpha)k}}\\right)^{m-k} \\cdot r^{(\\frac{1}{4} - \\alpha)m^2}.\n\\end{equation}\nIf $4 \\leq k \\leq 5$, a finer analysis of the lists on the edges between $v$ and $A$ is required; it shows that there is $\\alpha > 0$ for which~(ii) holds.\n\nIf $k<4$, we know, by the hypothesis that $\\mathcal{H} \\notin \\mathcal{S}_1$, that $\\mathcal{H}$ is $K_4$-free, but contains a copy of $K_3$ whose edges have lists of size at least four. We fix such a $3$-vertex set $A \\subset V(\\mathcal{H})$ that induces a copy of $K_3$ whose edges have lists of size at least four. If $v \\in V(\\mathcal{H}) \\setminus A$, then $v$ has at most two neighbors in $A$. If $v \\in V(\\mathcal{H}) \\setminus A$ has at most one neighbor in $A$, then $c_v \\leq r_0$. If $v \\in V(\\mathcal{H}) \\setminus A$ has exactly two neighbors in $A$, say $v_1$ and $v_2$, then these edges form a triangle with an edge in the copy of $K_3$.
Since the list of the edge $\\{v_1, v_2 \\}$ has size at least four, the product $c_v$ of the sizes of the lists associated with the two edges between $A$ and $v$ is at most $(r-4)^2\/4=s^*(r)$. We observe that $r_0 < s^*(r)$ for all $r\\geq 13$. With this, the inequality~\\eqref{UB_otherk} may be sharpened as\n$$c(\\mathcal{H})^{m^2} \\leq \\left(\\frac{(r-4)^2}{4r^{\\frac{3}{2} - 6\\alpha}}\\right)^{m-k} \\cdot r^{(\\frac{1}{4} - \\alpha)m^2}.$$ \nNote that $(r-4)^2 \\leq 4 \\cdot r^{\\frac{3}{2} - 6 \\alpha}$ is equivalent to \n$$r^{\\frac{1}{2}+6\\alpha}\\left(1-\\frac{4}{r}\\right)^2\\leq 4.$$ For $\\alpha>0$, the left-hand side is increasing as a function of $r$, so this holds for $13 \\leq r \\leq 26$ because $$26^{\\frac{1}{2}+6\\alpha}\\left(1-\\frac{4}{26}\\right)^2 \\leq 4$$ \nholds for $\\alpha \\leq 0.0046$. This concludes the induction step and proves Lemma~\\ref{lemma26:s2<27}.\n\\end{proof}\n\n\\section{Final remarks and open problems}\n\nThe objective of this paper was to characterize the values of $r \\geq 2$ for which the bipartite Tur\\'{a}n graph $T_2(n)$ is the unique $(r,K_3^{(2)})$-extremal graph for all sufficiently large $n$. With Theorem~\\ref{theorema:main}, we established that this holds precisely for $2\\leq r\\leq 26$. In this section, our aim is to put this result in a more general perspective and to discuss a few open problems.\n\nLet $P$ be a pattern of a complete graph $K_k$. The following facts are known to hold (see~\\cite{eurocomb15}):\n\\begin{itemize}\n\n\\item[(a)] If $P=K_k^R$, the rainbow pattern of $K_k$, then there exists $r_0$ such that the following holds for all $r\\geq r_0$. There exists $n_0$ such that, for all $n \\geq n_0$, the unique $n$-vertex $(r,P)$-extremal graph is the Tur\\'an graph $T_{k-1}(n)$.\n\n\\item[(b)] If $P\\neq K_k^R$, then there exists $r_1$ such that the following holds for all $r\\geq r_1$. 
There exists $n_0$ such that, for all $n \\geq n_0$, the Tur\\'{a}n graph $T_{k-1}(n)$ is \\emph{not} $(r,P)$-extremal.\n\\end{itemize}\nThis raises natural questions.\n\\begin{prob}\nGiven $k\\geq 3$, let $r_0(k)$ be the least value of $r_0$ such that (a) holds. Determine $r_0(k)$.\n\\end{prob}\nIt is known that $r_0(3)=4$~\\cite{baloghli}. For $k\\geq 4$, upper and lower bounds on $r_0(k)$ have been provided in~\\cite{multipattern,eurocomb15}, but we believe that the upper bounds in these papers are much larger than the actual value of this parameter. \n\n\\begin{prob}\nGiven $k\\geq 3$, characterize the $n$-vertex $(r,K_k^R)$-extremal graphs for $r<r_0(k)$. \n\\end{prob}\n\\begin{prob}\\label{prob4}\nGiven $k\\geq 3$ and a pattern $P\\neq K_k^R$ of $K_k$, let $r_1(P)$ be the least value of $r_1$ such that (b) holds. Characterize the $n$-vertex $(r,P)$-extremal graphs for $r>r_1(P)$. \n\\end{prob}\nFor $P=K_3^{(2)}$ and $r=27$, we believe that $T_4(n)$ is the unique $(r,P)$-extremal graph for all sufficiently large $n$, but we do not have a proof of this. Moreover, the work in~\\cite{BHS17} implies that, except for the patterns $K_k^M$ and $K_3^{(2)}$, the $(r,P)$-extremal graphs mentioned in Problem~\\ref{prob4} must be complete multipartite graphs. However, it is not known whether the partition must always be equitable. Recent work by Botler et al.~\\cite{botler} and by Pikhurko and Staden~\\cite{PS2022} shows that, for some monochromatic patterns of complete graphs, unbalanced complete multipartite graphs can be very close to extremal, and may even be extremal in some cases. \n\n