diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzlvob" "b/data_all_eng_slimpj/shuffled/split2/finalzzlvob" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzlvob" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{sec:introduction}\nThe dynamics of wall-bounded turbulent flows are linked closely to processes that dominate the flow close to the wall. A prominent feature of such flows is the presence of slow moving wavy `streaks' of fluid, which intermittently and abruptly lift-up away from the wall, and eject slow moving fluid towards the faster core~\\citep{Kline1967,Offen1975}. These `bursts' of slow moving streaks have been identified in a number of experiments using hydrogen bubble visualization~\\citep{Kline1967}, dye visualization~\\citep{Kim1971}, and observation of neutrally buoyant colloidal particles~\\citep{Corino1969}. The ejections are usually followed by `sweeps' of faster moving fluid towards the wall~\\citep{Corino1969}, completing the cycle of momentum exchange between the low speed near-wall layers and the high speed core. Several studies note that these intermittent bursts are important sources for the generation and dissipation of turbulent kinetic energy within boundary layers, control of transport phenomena, and are also responsible for the majority of turbulent drag acting on the wall~\\citep{Kline1967, Corino1969, Kim1971, Wallace1972, Lumley1998, Jimenez2012}. \n\nAlthough the existence of intermittent bursts in wall-bounded turbulence is widely accepted, there has been some ambivalence regarding their role in near-wall dynamics. \\citet{Robinson1991}, \\citet{Moin1998}, and \\citet{Schoppa2002} have suggested that bursts may not play as crucial a role in turbulence generation as previously thought. The main argument in favour of this viewpoint is that the intermittent events observed by \\citet{Kline1967} were caused by the passage of streamwise vortices over static measurement locations. However, certain studies have remarked that these strong intermittent events are not merely artefacts of vortices passing by, but should instead be viewed as intrinsic components of the near-wall dynamics~\\citep{Jimenez2012,Jimenez2013}. \\citet{Lumley1998} considered bursts to be integral to the formation and evolution of coherent structures, and proposed that the inhibition of bursts should be a crucial element of potential control strategies. \\citet{Jimenez2013} notes that frictional drag on the wall increases abruptly and substantially during bursting events. Several studies have proposed that coherent hairpin vortices may be consequences of instabilities and ejections associated with low-speed streaks~\\citep{LozanoDuran2014,Hack2018}. \\citet{Schlatter2014} suggested that hairpin vortices may be artefacts of the relatively moderate Reynolds numbers that prior DNSs had been restricted to owing to computational limitations. It is evident that there have been differences of opinion regarding the exact nature of near-wall dynamics, which highlights the need for novel analytical tools that can help interpret nonlinear turbulent flow data more effectively.\n\nWhile we have witnessed steady progress in both experimental diagnostics and simulation capabilities since some of the seminal studies discussed above, a comprehensive understanding of fundamental processes in near-wall turbulence, and more importantly, effective means of influencing them are still being sought~\\citep{Jimenez2018}. 
Disentangling the non-linear spatial and temporal correlations inherent in turbulent flows has proved to be the principal obstacle, and has been particularly challenging for reduced order modelling approaches such as Principal Component Analysis (also known as Proper Orthogonal Decomposition - POD), and Dynamic Mode Decomposition (DMD)~\\citep{Schmid2010}. Recently, novel techniques that have undergone rapid development owing to advances by the machine learning and computer vision community, have seen increased adoption for prediction and analysis tasks in fluid mechanics. Very early uses of Artificial Neural Networks (ANNs) for this purpose include studies by~\\citet{Fan1993}, and~\\citet{Lee1997}. \\citet{Milano2002} compared the prediction and reconstruction capabilities of nonlinear autoencoders to those of Principal Component Analysis in a turbulent channel flow simulation. \\citet{Hack2016} used ANNs to predict the transition to turbulence in a spatially developing boundary layer, by identifying near-wall streaks that were most likely to breakdown and induce the formation of turbulent spots. \\citet{Maulik2017} trained a single layer feedforward ANN to deconvolve low-pass filtered turbulent datasets, in order to reconstruct the subfilter length scales. \\citet{Fukami2019} and~\\citet{Liu2020} have also explored deconvolution to reconstruct subfilter scales, albeit using Convolutional Neural Networks (CNNs)~\\citep{Fukushima1980}, which preserve spatial correlations inherent in the data. CNNs have proved to be effective for predicting both steady~\\citep{Guo2016,Sekar2019} and unsteady~\\citep{Lee2019} laminar flows around bluff bodies, airfoils, and cylinders. CNNs have also been used in low Reynolds number flows to predict unsteady force coefficients for bluff bodies~\\citep{Miyanawala2018}, pressure distribution on a cylinder~\\citep{Ye2020}, and drag for arbitrary 2D shapes in laminar flows~\\citep{Viquerat2019}.\n\nGiven the integral role of bursting events in the turbulence generation cycle, and the innate ability of Neural Networks to identify nonlinear correlations, we train a 3D CNN to predict the intensity of strong and intermittent ejection events that occur in the near-wall region. This is done by first `labelling' 3D velocity fields extracted from a turbulent channel flow DNS with their corresponding ejection intensities, and then using the velocity fields as input, and ejection intensities (labels) as output for training. Once the CNN is able to correctly predict ejection intensities for out-of-sample velocity data, we visualize localized regions of the flow that the trained CNN focuses on in order to make accurate predictions. This allows us to look beyond the black-box nature of the neural network, to reveal physical processes that such networks are capable of identifying in extremely complex flow fields. Details regarding the numerical methods and training procedure for the CNN are provided in \\S\\ref{sec:methods}. Results demonstrating the identification capabilities of the CNN are presented in \\S\\ref{sec:results}, followed by concluding remarks in \\S\\ref{sec:conclusion}.\n\n\\section{Methods}\\label{sec:methods}\n\\subsection{Direct Numerical Simulation}\nThe data used for training the CNN was generated using a DNS of a periodic turbulent channel flow. The simulation is based on the incompressible Navier-Stokes equations, which are solved using a high order conservative finite difference scheme~\\citep{Desjardins2008}. 
The flow is driven by imposing a pressure gradient in the streamwise direction, which changes in time to maintain a constant mass flow rate. The simulation domain and its dimensions are shown in Figure~\\ref{fig:diagram}.\n\\begin{figure}\n\\centering\n\\includegraphics[width=1\\linewidth]{domain-para6.pdf}\n\\caption{A snapshot of the flow field from a turbulent channel flow simulation at $Re_\\tau = 300$. The horizontal plane shows an isocontour of the horizontal velocity component $u$, coloured using the vertical velocity $v$. Low-speed streaks manifest as sinuous ridges, and bright spots mark regions where the flow is being ejected away from the wall. The pink blobs denote high intensity ejection parcels where positive fluctuations for $v$ exceed 2 standard deviations, i.e., $v > \\mean{v} + 2\\sigma_v$. The grid cell sizes were kept uniform in the streamwise and spanwise directions ($\\Delta x=\\Delta z = 3.5\\delta^+$), whereas the cell heights were stretched from the wall to the channel center in a sinusoidal manner ($0.03\\delta^+ \\leq \\Delta y \\leq 2.4\\delta^+$). The white box in the bottom left corner depicts MFU-sized sections that the snapshots were divided into for training the CNN.}\n\\label{fig:diagram}\n\\end{figure}\nThe channel uses periodic boundaries in the streamwise and spanwise directions, and the no-slip boundary condition at the top and bottom walls. The friction Reynolds number for the data used for training the CNN is approximately $Re_\\tau = u_{\\tau} (L_y\/2)\/\\nu = 300$. Here, $u_\\tau = \\sqrt{\\tau\/\\rho}$ is the friction velocity, $\\tau = \\mu \\partial u\/\\partial y$ is the surface shear stress, $\\nu = \\mu\/\\rho$ is the kinematic viscosity, and $\\rho$ is the fluid density. The mean velocity and rms velocity profiles for two distinct simulations at $Re_\\tau = 300$ and $670$ are shown in Figure~\\ref{fig:loglaw}.\n\\begin{figure}\n\\centering\n\\subfloat[\\label{sfig:loglaw1}]{%\n\\includegraphics[width=0.5\\linewidth]{loglaws.pdf}\n}\n\\subfloat[\\label{sfig:loglaw2}]{%\n\\includegraphics[width=0.49\\linewidth]{plot_urms.pdf}\n}\n\\caption{\\protect\\subref{sfig:loglaw1} Mean horizontal velocity profile shown in wall units for $Re_\\tau=300$ (blue) and $Re_\\tau=670$ (red). \\protect\\subref{sfig:loglaw2} The corresponding rms velocity profiles shown in wall units. The symbols correspond to data from \\citet{Moser1999} for $Re_\\tau=395$ and $590$.}\n\\label{fig:loglaw}\n\\end{figure}\nOnce the flow is statistically stationary, several snapshots are recorded at intervals of approximately $40t^+$, which allows the individual snapshots to be temporally decorrelated. Here, $t^+ = \\delta^+\/u_\\tau$ is the viscous time scale and $\\delta^+ = \\nu\/u_{\\tau}$ is the viscous length scale. Each full-channel snapshot is divided up into Minimal Flow Unit-sized sections~\\citep{Jimenez1991}, as depicted by the white box in Figure~\\ref{fig:diagram}. Similarly, MFU-sized samples are extracted from the upper wall after flipping the wall-normal and spanwise velocities appropriately, so as to maintain the same orientation as the lower wall. This procedure yields 450 three-dimensional sections (velocity samples) per wall for each snapshot, and a total of 10,800 velocity samples from 12 independent full-channel snapshots.\n\n\\subsection{Labelling the burst intensity in 3D velocity samples}\n\nThe quadrant method introduced by \\citet{Wallace1972} has been used widely to classify bursts and sweeps ($u'<0, \\ v'>0$ for bursts, and $u'>0, \\ v'<0$ for sweeps). 
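Stated as code, the quadrant criteria are nothing more than a pointwise sign test on the fluctuation fields. The following sketch is our own illustration (the array names u_prime and v_prime are assumptions, not quantities defined in this paper) of how the Q2 (ejection) and Q4 (sweep) masks would be computed with NumPy:
\\begin{verbatim}
import numpy as np

def quadrant_masks(u_prime, v_prime):
    # Pointwise quadrant classification from the fluctuation signs:
    # Q2 (ejection): u' < 0 and v' > 0;  Q4 (sweep): u' > 0 and v' < 0.
    q2 = (u_prime < 0.0) & (v_prime > 0.0)
    q4 = (u_prime > 0.0) & (v_prime < 0.0)
    return q2, q4
\\end{verbatim}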
However, \\citet{Luchik1987} note that this technique experiences difficulties with detecting entire burst or ejection events. Moreover, the quadrant criteria do not require intense bursting activity, as they are merely associated with fluctuation signs with respect to the mean values. In the present work, we associate bursts with strong intermittent events as described by \\citet{Kline1967}, and consider ejections to be associated with large deviations in the vertical velocity. To determine the intensity of these ejection events, we compute the percentage of cells where positive fluctuations in $v$ exceed 2 standard deviations, i.e., $v > \\mean{v} + 2\\sigma_v$. This provides a useful indication of activity within each velocity sample, without having to rely on adjustable parameters. Each velocity sample is then interpolated onto a grid of size $64\\times40\\times64$ with uniformly spaced cells in the wall-normal direction, and reduced to half-precision floating point numbers to conserve memory during training.\n\n\\subsection{Training procedure and saliency maps}\n\\label{subsec:training}\nAfter labelling, the 10,800 velocity samples are split randomly into $85\\%$ training, $7.5\\%$ validation, and $7.5\\%$ test sets. The training samples are fed in batches of five to the CNN as input, along with the corresponding labels as output (Figure~\\ref{fig:arch}). We note that only the vertical velocity component $v$ is used for training, since it is most closely related to ejection events.\n\\begin{figure}\n\\centering\n\\includegraphics[width=1\\textwidth]{architecture5-color.pdf}\n\\caption{The Convolutional Neural Network takes a 3D velocity field (only the $v$ component) as input, and predicts the ejection intensity as output. The architecture consists of 4 convolution + pooling layers, which learn to identify and extract the most important flow features from the data. The 3D data is then flattened out, followed by two fully-connected layers terminating in the output node marked `prediction'. The number of distinct filtering kernels used at each convolution layer are shown as $\\times 32$, $\\times 64, \\cdots$, and the layer sizes are shown as $(64,40,64)$, and so on. Altogether, there are 2.2 million unknown parameters (weights and biases) that must be learned during training.}\n\\label{fig:arch}\n\\end{figure}\nThe max-pooling layers downsample the data by retaining a single cell out of every $2\\times2\\times2$ block of cells. This reduces the dimensionality of the data by one eighth after every pooling operation. The function of the convolution and pooling layers is to extract 3D features from the flow, whereas the fully connected layers towards the end learn to associate the assortment of feature maps with the appropriate ejection intensity value. The training was implemented using Keras and TensorFlow, which are open-source machine learning libraries. The loss-function was defined as the percentage error between the predicted value and the actual label for each sample, and the weights were updated using the Adam optimizer to minimize this loss. 
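To make the labelling and training setup described above concrete, the following sketch is our own reconstruction in Keras and TensorFlow. The widths of the two fully-connected layers, the filter counts of the last two convolution layers, and the placement of the dropout layer are assumptions on our part (the paper specifies only $\\times 32$, $\\times 64, \\cdots$ and a dropout rate of 0.5); the kernel size, pooling size, initializations, activation, optimizer and percentage-error loss follow Table~\\ref{tab:kd}.
\\begin{verbatim}
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def ejection_intensity(v):
    # Label: percentage of cells where v exceeds its mean by two standard deviations.
    threshold = v.mean() + 2.0 * v.std()
    return 100.0 * np.count_nonzero(v > threshold) / v.size

def build_cnn(input_shape=(64, 40, 64, 1), filters=(32, 64, 128, 256)):
    # Four Conv3D + MaxPooling3D blocks, then two dense layers and one output node.
    inputs = tf.keras.Input(shape=input_shape)
    x = inputs
    for n in filters:                       # 128 and 256 are assumed filter counts
        x = layers.Conv3D(n, (3, 3, 3), padding='same', activation='relu',
                          kernel_initializer='he_uniform',
                          bias_initializer='zeros')(x)
        x = layers.MaxPooling3D(pool_size=(2, 2, 2))(x)
    x = layers.Flatten()(x)
    x = layers.Dropout(0.5)(x)
    x = layers.Dense(128, activation='relu')(x)   # dense widths are assumptions
    x = layers.Dense(16, activation='relu')(x)
    outputs = layers.Dense(1, name='prediction')(x)
    model = tf.keras.Model(inputs, outputs)
    # Percentage-error loss minimized with Adam (learning rate 1e-4), as in the paper.
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss='mean_absolute_percentage_error')
    return model
\\end{verbatim}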
The network architecture and training procedure were optimized through a series of hyperparameter sweeps, and the optimal combination that yields the highest accuracy is shown in Table~\\ref{tab:kd}.\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{ccc}\n \\emph{Architecture and Training} & \\emph{Parameters} \\\\[3pt]\n Kernel Size & 3x3x3 \\\\\n Pooling Size & 2x2x2 \\\\\n Weight Initialization & he uniform \\\\\n Bias Initialization & Zeros \\\\\n Loss Function & Percent Error \\\\\n Optimization & Adam \\\\\n\\end{tabular}\n\\quad\n\\vline\n\\begin{tabular}{ccc}\n \\emph{Hyperparameters} & \\emph{Value} \\\\[3pt]\n Batch Size & 5 \\\\\n Epochs & 57 \\\\\n Dropout & 0.5 \\\\\n Learning Rate & 0.0001 \\\\\n Decay & 0.0001 \\\\\n Activation Function & ReLU \\\\\n\\end{tabular}\n\\caption{Parameters and hyperparameters related to the architecture and training.}\n\\label{tab:kd}\n\\end{center}\n\\end{table}\n\nOnce the network weights have been trained, there are several methods that can be used to understand what the network has `learned' to be important. We may plot the filter kernels, the feature maps (the output at each layer), or saliency maps as described by \\citet{Simonyan2013}. Saliency maps provide a visual representation of sensitivity analysis, and are generated by perturbing each point of the input data and measuring the resulting change in the output. An improved version which adjusts data points at convolutional layers throughout the architecture, but with respect to the final convolutional layer's feature maps, was developed by \\citet{Selvaraju2016}. This technique is referred to as Gradient-weighted Class Activation Mapping (Grad-CAM), and is used in the present work to identify salient regions in the near-wall flow.\n\n\\section{Results}\n\\label{sec:results}\nAfter successful training, the CNN was used for predicting ejection intensities for velocity samples extracted from a time-decorrelated snapshot. The results shown in Figure~\\ref{fig:preds} indicate that the CNN's predictions match the ground truth very well, regardless of whether the samples contain high intensity ejections or minimal activity.\n\\newsavebox{\\measurebox}\n\\begin{figure}\n\\sbox{\\measurebox}{%\n \\begin{minipage}[b]{0.4\\textwidth}\n {\\includegraphics[width=\\textwidth]{mini1.png}%\n}\n\\vfill\n\\subfloat\n []\n {\\label{fig:predictA}\\includegraphics[width=\\textwidth]{mini2.png}}\n\\end{minipage}\n\n \\begin{minipage}[b]{.6\\textwidth\n \\subfloat\n []\n {\\label{fig:predictB}\\includegraphics[width=\\textwidth]{prediction017-fin.pdf}}\n \\end{minipage}}\n\\usebox{\\measurebox}\\qquad\n\\caption{\\protect\\subref{fig:predictA} Two test samples that were not seen by the CNN during training or validation. The actual labels for the two datasets are $0.284\\% $ (top) and $4.751\\%$ (bottom), whereas the values predicted by the CNN are $0.282\\%$ and $4.749\\%$, respectively. \\protect\\subref{fig:predictB} Comparison of the labels (blue dots) and the predicted ejection intensities (red) for an out-of-sample snapshot which is time-decorrelated from the training dataset. The mean absolute percentage error in the predicted values is $9.7\\%$.}\n\\label{fig:preds}\n\\end{figure}\nWe examine the crucial flow features that the CNN has learned to focus on, by highlighting the salient regions using the Grad-CAM technique discussed in \\S\\ref{subsec:training}. 
For an intuitive explanation of the Grad-CAM technique, Figures~\\ref{fig:gradcamA} and~\\ref{fig:gradcamB} show how an image-classification network focuses on a dog's floppy ears, its eyes and the collar in order to make its determination that the picture is that of a dog. \n\\begin{figure}\n\\centering\n\\begin{minipage}[b]{0.38\\textwidth}\n\\centering\n\\subfloat\n[]\n{\\label{fig:gradcamA}\\includegraphics[width=0.4\\textwidth]{thor6reshape.png}}\n\\quad\n\\subfloat\n[]\n{\\label{fig:gradcamB}\\includegraphics[width=0.4\\textwidth]{gradcam-jet_trim.jpg}}\n\\end{minipage\n\\usebox{\\measurebox\n\\begin{minipage}[b]{0.58\\textwidth}\n\\centering\n\\subfloat\n []\n {\\label{fig:gradcamC}\\includegraphics[width=0.5\\textwidth]{fig3snap.png}}\n\\subfloat\n []\n {\\label{fig:gradcamD}\\includegraphics[width=0.5\\textwidth]{fig3grad.png}} \n\\end{minipage}\n\\caption{\\protect\\subref{fig:gradcamA} Input image and \\protect\\subref{fig:gradcamB} the corresponding Grad-CAM output from a CNN trained to discern between cats and dogs. The red and yellow areas depict the salient regions which most influence the CNN's prediction, namely, the ears, the eyes and the collar. \\protect\\subref{fig:gradcamC} Post-processed image for an input velocity sample, and \\protect\\subref{fig:gradcamD} the corresponding Grad-CAM resulting from the trained 3D CNN. The golden structures in \\protect\\subref{fig:gradcamD} indicate localized regions of the flow that are most influential for making the correct prediction. These salient regions correlate very well with the high-intensity ejection parcels and the bursting streak.}\n\\label{fig:grads}\n\\end{figure}\nSimilarly, figures~\\ref{fig:gradcamC} and~\\ref{fig:gradcamD} show the post-processed visualization of a velocity sample, and the corresponding Grad-CAM output when it is processed by the trained CNN. The pink fluid parcels in Figure~\\ref{fig:gradcamC} indicate regions of high ejection intensity, similar to Figure~\\ref{fig:diagram}. We also observe a bursting streak towards the back of the image, denoted by a brightly coloured ridgeline in the horizontal plane. From the Grad-CAM image in Figure~\\ref{fig:gradcamD}, we note that the CNN focuses on both the ejection parcels as well as the bursting streak, as is evident from the golden structures occupying the same spatial regions as the pink parcels, as well as engulfing the bursting streak in the back. This is a notable result, especially since the CNN was provided with no a priori knowledge of the flow patterns that it should focus on; rather, this ability was gained by the CNN through training on velocity samples that were assigned a single metric, i.e., the ejection intensity.\n\nWe now examine the ability of the CNN to track salient regions as the flow evolves in time. Figure~\\ref{fig:series} shows successive snapshots taken at a particular spatial location, with post-processed input velocity data overlayed with the Grad-CAM output. 
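For readers wishing to reproduce such overlays, the Grad-CAM computation for a regression network reduces to a few lines of TensorFlow. The sketch below is our own paraphrase of the procedure of \\citet{Selvaraju2016} applied to a 3D single-output model; the argument conv_layer_name (the name of the final convolution layer) is an assumption about how the model above would be queried, not something specified in the paper.
\\begin{verbatim}
import tensorflow as tf

def grad_cam_3d(model, sample, conv_layer_name):
    # Gradient-weighted Class Activation Map for a 3D regression CNN.
    # sample: one input volume, e.g. of shape (64, 40, 64, 1).
    conv_layer = model.get_layer(conv_layer_name)
    grad_model = tf.keras.Model(model.inputs, [conv_layer.output, model.output])
    with tf.GradientTape() as tape:
        fmaps, prediction = grad_model(sample[None, ...])
        target = prediction[:, 0]           # the single ejection-intensity output
    grads = tape.gradient(target, fmaps)
    # Channel weights: gradients averaged over the three spatial dimensions.
    weights = tf.reduce_mean(grads, axis=(1, 2, 3))
    cam = tf.nn.relu(tf.einsum('bxyzc,bc->bxyz', fmaps, weights))[0]
    # Normalize to [0, 1]; upsampling to the input grid is left to the caller.
    return cam / (tf.reduce_max(cam) + 1e-12)
\\end{verbatim}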
\n\\begin{figure\n\\centering\n\\subfloat[\\label{sfig:series1}]{%\n\\label{fig:seriesA}\\includegraphics[width=0.4\\linewidth]{series2Ann1.pdf\n}\n\\subfloat[\\label{sfig:series2}]{%\n\\label{fig:seriesB}\\includegraphics[width=0.4\\linewidth]{series2Ann2.pdf}\n}\n\\\\ \\centering\n\\subfloat[\\label{sfig:series3}]{%\n\\label{fig:seriesC} \\includegraphics[width=0.4\\linewidth]{series2Ann3.pdf}\n}\n\\subfloat[\\label{sfig:series4}]{%\n\\label{fig:seriesD}\\includegraphics[width=0.4\\linewidth]{series2Ann4.pdf}}\n\\caption{Four successive time instances showing an overlay of the Grad-CAM output over the corresponding flow field (animation available in Supplementary Movie 1). In \\protect\\subref{fig:seriesA} the CNN focuses its attention on ejection parcels that are already well formed, as well as on the streak that is undergoing bursting in the lower right corner. \\protect\\subref{fig:seriesB} As a new ejection parcel enters the field of view from the left, the CNN includes it as part of the salient regions. \\protect\\subref{fig:seriesC}, \\protect\\subref{fig:seriesD} As the bursting streak and ejection parcels move out of the field of view, the CNN switches its attention to the strong ejection parcel developing on the left.}\n\\label{fig:series}\n\\end{figure}\nAt $t_0$, the salient regions identify three distinct ejection packets, as well two bursting streaks towards the left and right edges. One viscous time unit later (i.e., at $t_0+t^+$), the CNN considers the larger ejection parcel entering the field of view from the left to be more important to its prediction, and focuses less on the parcel that has started dissipating near the lower right edge. At this instant, the ejection parcel towards the back and the bursting streak at the right edge are still influential in the CNN's prediction. At $t_0+2 t^+$ and $t_0+3 t^+$, the large ejection parcel that has entered the field of view is considered to be most significant for predicting the ejection intensity.\n\nTo determine how well the CNN trained at $Re_\\tau=300$ generalizes to different flow conditions, we test its prediction ability at a higher $Re_\\tau = 670$ in Figure~\\ref{fig:highRE}. The dimensions of the new velocity samples were identical to the $Re_\\tau=300$ samples in wall units, and the velocity was rescaled by multiplying with $u_{\\tau300}\/u_{\\tau670}$. \n\\begin{figure}\n\\centering\n\\subfloat[\\label{fig:predRe}]{%\n\\includegraphics[width=0.5\\linewidth]{predictionReshape1.pdf}\n}\n\\subfloat[\\label{fig:gradRe}]{%\n\\includegraphics[width=0.4\\linewidth]{re670_gradcam_hires.png}\n}\n\\caption{\\protect\\subref{fig:predRe} Prediction for $Re_{\\tau}= 670$ data using a CNN trained on the $Re_{\\tau}=300$ database. \\protect\\subref{fig:gradRe} The salient regions for the high $Re_{\\tau}=670$ samples.}\n\\label{fig:highRE}\n\\end{figure}\nDespite a notable difference in the Reynolds number, the network is able to make predictions with a mean absolute percentage error of less than $24\\%$. Moreover, the CNN is still able to discern the most relevant ejection parcels and bursting streaks. This highlights the capability of CNNs to reveal important physical processes that persist across diverse flow conditions.\n\n\\section{Conclusion}\n\\label{sec:conclusion}\nIn this work, we have trained a three dimensional Convolutional Neural Network (CNN) to predict the intensity of strong intermittent ejection events that occur in the near-wall layer of a turbulent channel flow simulation. 
The CNN is able to accurately predict ejection intensities in velocity samples taken from a snapshot that was not part of the training dataset, and was sufficiently removed so as to be temporally decorrelated. To understand which part of the data most influences the network's ability to make accurate predictions, we visualize salient regions in the flow where the CNN focuses its attention, using the Gradient-weighted Class Activation Mapping (Grad-CAM) technique. We observe that the resulting salient regions correlate well with high intensity ejection parcels as well as with low-speed streaks undergoing bursting. This indicates that the CNN is able to reveal dynamically crucial regions within the turbulent flow field, without a-priori knowledge of the intrinsic dynamics. Finally, we demonstrate that the CNN trained on data at $Re_\\tau=300$ is able to predict ejection intensities for samples at $Re_\\tau=670$. This suggests that the trained CNN is generalizable in its prediction ability, especially with regard to physical processes that persist across varying flow conditions. The results indicate that Convolutional Neural Networks, which were originally developed for image recognition and classification, have immense potential for uncovering non-linear correlations and spatial features in turbulent flow fields.\n\n\\section*{Acknowledgements}\nThis work was supported by the Department of Ocean and Mechanical Engineering at the Florida Atlantic University, as part of the start-up package of Siddhartha Verma. The authors thank Prof. Petros Koumoutsakos for helpful discussions, and for providing access to computational resources. Computational resources were provided by the Swiss National Supercomputing Centre (CSCS) under project ID s929, and by the National Science Foundation under grant CNS-1828181.\n\n\\bibliographystyle{jfm}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nLet $\\mathcal{A}$ be an associative algebra over a field $F$, and let $f\\in F\\langle X\\rangle$ be a multilinear polynomial from the free associative algebra $F\\langle X\\rangle$. Lvov posed the question to determine whether the image of a multilinear $f$ when evaluated on $\\mathcal{A}=M_n(F)$, is always a vector subspace of $M_n(F)$, see \\cite[Problem 1.93]{dnestr}. The original question is attributed to Kaplansky and asks the determination of the image of a polynomial $f$ on $\\mathcal{A}$. It is well known that the above question is equivalent to that of determining whether the image of a multilinear $f$ on $M_n(F)$ is 0, the scalar matrices, $sl_n(F)$ or $M_n(F)$. Clearly the first possibility corresponds to $f$ being a polynomial identity on $M_n(F)$, and the second gives the central polynomials. \n\nRecall here that the description of all polynomial identities on $M_n(F)$ is known only for $n\\le 2$, see \\cite{razmal, dral} for the case when $F$ is of characteristic 0, and \\cite{pkm2} for the case of $F$ infinite of characteristic $p>2$. The same holds for the central polynomials \\cite{okhitin, jcpk}. The theorem of Amitsur and Levitzki gives the least degree polynomial identity for $M_n(F)$, the standard polynomial $s_{2n}$, see \\cite{am_lev}. Recall also that one of the major breakthroughs in PI theory was achieved by Formanek and by Razmyslov \\cite{for, razm} who proved the existence of nontrivial (that is not identities) central polynomials for the matrix algebras. 
As for $sl_n(F)$, a theorem of Shoda \\cite{shoda} gives that every $n\\times n$ matrix over a field of zero characteristic is the commutator of two matrices; later on Albert and Muckenhoupt \\cite{albert} generalized this to arbitrary fields. Hence all four conjectured possibilities for the image of a multilinear polynomial on $M_n(F)$ can be achieved. \n\nThe study of images of polynomials on the full matrix algebra is of considerable interest for obvious reasons, among these the relation to polynomial identities. In \\cite{belov1} the authors settled the conjecture due to Lvov in the case of $2\\times 2$ matrices over a quadratically closed field $F$ (that is if $f$ is a given polynomial in several variables then every polynomial in one variable of degree $\\le 2\\deg f$ over $F$ has a root in $F$). They proved that for every multilinear polynomial $f$ and for every field $F$ that is quadratically closed with respect to $f$, the image of $f$ on $M_2(F)$ is 0, $F$, $sl_2(F)$ or $M_2(F)$. It should be noted that the authors in \\cite{belov1} proved a stronger result. Namely they considered a so-called semi-homogeneous polynomial $f$: a polynomial in $m$ variables $x_1$, \\dots, $x_m$ of weights $d_1$, \\dots, $d_m$ respectively such that every monomial of $f$ is of (weighted) degree $d$ for a fixed $d$. They proved that the image of such a polynomial on $M_2(F)$ must be 0, $F$, $sl_2(F)$, the set of all non-nilpotent traceless matrices, or a dense subset of $M_2(F)$. Here the density is according to the Zariski topology. Later on in \\cite{malev} the author gave the solution to the problem for $2\\times 2$ matrices for the case when $F$ is the field of the real numbers. In the case of $3\\times 3$ matrices the known results can be found in \\cite{belov2}. The images of polynomials on $n\\times n$ matrices for $n>3$ are hard to describe, and there are only partial results, see for example \\cite{belov3}. Hence in the case of $n\\times n$ matrices one is led to study images of polynomials of low degree. Interesting results in this direction are due to \\v Spenko \\cite{spenko}, she proved the conjecture raised by Lvov in case $F$ is an algebraically closed field of characteristic 0, and $f$ is a multilinear Lie polynomial of degree at most 4. Further advances in the field were made in \\cite{br_kl1, br_kl2, br}. In \\cite{br} the author proved that if $A$ is an algebra over an infinite field $F$ and $A=[A,A]$ then the image of an arbitrary polynomial which is neither an identity nor a central polynomial, equals $A$. Recently Malev \\cite{malev_quat} described completely the images of multilinear polynomials on the real quaternion algebra; he also described the images of semi-homogeneous polynomials on the same algebra.\n\nIf the base field $F$ is finite, a theorem of Chuang \\cite{chuang} states that the image of a polynomial without constant term can be every subset of $M_n(F)$ that contains $0$ and is closed under conjugation by invertible matrices. In the same paper it was also shown that such a statement fails when $F$ is infinite. \n\nTherefore it seems likely it should be very difficult to describe satisfactory the images of multilinear polynomials on $M_n(F)$. That is why people started studying images of polynomials on ``easier\" algebras, and also on algebras with an additional structure. The upper triangular matrix algebras are quite important in Linear Algebra because of their applications to different branches of Mathematics and Physics. 
They are also very important in PI theory: they describe, in a sense, the subvarieties of the variety of algebras generated by $M_2(F)$ in characteristic 0. Block-triangular matrices appear in the description of the so-called minimal varieties of algebras. The images of polynomials on the upper triangular matrices have been studied rather extensively. In \\cite{wang} the author described the images of multilinear polynomials on $UT_2(F)$, the $2\\times 2$ upper triangular matrices over a field $F$. The images of multilinear polynomials of degree up to 4 on $UT_n=UT_n(F)$ for every $n$ were classified in \\cite{mello_fag}, and in \\cite{Fag} the first named author of the present paper described the images of arbitrary multilinear polynomials on the strictly upper triangular matrices. It turned out that if $f$ is a multilinear polynomial of degree $m$ then its image on the strictly upper triangular matrix algebra $J$ is either 0 or $J^{m}$. The following conjecture was raised in \\cite{mello_fag}: Is the image of a multilinear polynomial on $UT_n(F)$ always a vector subspace of $UT_n(F)$? This conjecture was solved independently in \\cite{LWa}, for infinite fields (or finite fields with sufficiently many elements), and in \\cite{GMe}. Further results concerning images of polynomials on the upper triangular matrix algebra can be found in \\cite{tcm, zw, wzl}.\n\nGradings on algebras appeared long ago; the polynomial ring in one or several variables is naturally graded by the infinite cyclic group $\\mathbb{Z}$ by the degree. Gradings on algebras by finite groups are important in Linear Algebra and also in Theoretical Physics: the Grassmann (or exterior) algebra is naturally graded by the cyclic group of order 2, $\\mathbb{Z}_2$. In fact the Grassmann algebra is the most well-known example of a superalgebra. It should be noted that while in the associative case, the term ``superalgebra\" is synonymous to ``$\\textbf{Z}_2$-graded algebra\", if one considers nonassociative algebras these notions are very different: a Lie or a Jordan superalgebra seldom is a Lie or a Jordan algebra. We are not going to discuss further such topics because these are not relevant for our exposition.\n\nIn \\cite{wall}, Wall classified the finite dimensional $\\mathbb{Z}_2$-graded algebras that are graded simple. Later on the description of all gradings on matrix algebras was obtained as well as on simple Lie and Jordan algebras. We refer the readers to the monograph \\cite{ek} for the state-of-art and for further references. In PI theory gradings appeared in the works of Kemer, see \\cite{kemer}, and constituted one of the main tools in the classification of the ideals of identities of associative algebras, which in turn led him to the positive solution of the long-standing Specht Problem. It turns out that the graded identities are easier to describe than the ordinary ones; still they provide a lot of information on the latter. It is somewhat surprising that the images of polynomials have not been studied extensively in the graded setting. \nIn \\cite{Kul}, Kulyamin described the images of graded polynomials on matrix algebras over the group algebra of a finite group over a finite field. \n\nThe upper triangular matrix algebra admits various gradings, these were shown to be isomorphic to elementary ones, see \\cite{VZa}. A grading on a subalgebra $A$ of $M_n(F)$ is elementary if all matrix units $e_{ij}\\in A$ are homogeneous in the grading. 
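A minimal example, included here only to illustrate the definition: take $G=\\mathbb{Z}_2$ and declare the matrix units of $UT_2$ homogeneous with $\\deg e_{11}=\\deg e_{22}=\\overline{0}$ and $\\deg e_{12}=\\overline{1}$. This gives the elementary grading
\\[
UT_2=(UT_2)_{\\overline{0}}\\oplus (UT_2)_{\\overline{1}},\\qquad (UT_2)_{\\overline{0}}=span\\{e_{11},e_{22}\\},\\quad (UT_2)_{\\overline{1}}=span\\{e_{12}\\},
\\]
which is compatible with the product since $e_{11}e_{12}=e_{12}$, $e_{12}e_{22}=e_{12}$ and $e_{12}e_{12}=0$.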
All elementary gradings on $UT_n$ were classified in \\cite{VKV}; in the same paper the authors described the graded identities for all these gradings. In this paper we fix an arbitrary field $F$ and the upper triangular matrix algebra $UT_n$. \n\nIn Section \\ref{sect3} we prove that for an arbitrary group grading on $UT_n$, $n>1$, there are no nontrivial graded central polynomials. (Hence the image of a graded polynomial on $UT_n$ cannot be equal to the scalar matrices whenever $n>1$.) In Section \\ref{sect4} we consider a specific grading on $UT_n$, and describe all possible images of multilinear graded polynomials for that grading. It turns out that the images are always homogeneous vector subspaces. We impose a mild restriction on the cardinality of the base field. As a by-product of the proof we obtain a precise description of the graded identities for this grading.\n\nIn Section \\ref{sect5} we give a sufficient condition for the traceless matrices to be contained in the image of a multilinear graded polynomial. Once again we require a mild condition on the cardinality of the field. Section \\ref{sect6} studies the graded algebras $UT_2$ and $UT_3$. We prove that the image of a multilinear graded polynomial on $UT_2$, for every group grading, is a homogeneous subspace. In the case of $UT_3$, the image of such a polynomial is also a homogeneous subspace provided that the grading is nontrivial. In the case of the trivial grading, if the field contains at least 3 elements then the image of every multilinear polynomial is a vector subspace. Then we proceed with the Jordan algebra structure $UJ_n$ obtained from $UT_n$ by the Jordan (symmetric) product $a\\circ b= ab+ba$ provided that the characteristic of the base field is different from 2. The description of all group gradings on $UJ_n$ is more complicated than that on $UT_n$, see \\cite{KYa}, there appear gradings that are not isomorphic to elementary ones. The gradings on $UJ_2$ were described in \\cite{KMa}. We consider each one of these gradings, and prove that the image of a multilinear graded polynomial is always a homogeneous subspace. No restrictions on the base field are imposed (apart from the characteristic being different from 2). An analogous result is deduced for the Lie algebra $UT_2^{(-)}$ obtained from $UT_n$ by substituting the associative product by the Lie bracket $[a,b]=ab-ba$. Finally we consider $UJ_3$ equipped with the natural $\\mathbb{Z}_3$-grading: $\\deg e_{ij} = j-i\\pmod{3}$ for every $i\\le j$, assuming the base field infinite and of characteristic different from 2. We prove that the image of a multilinear graded polynomial is always a homogeneous subspace. \n\nWe hope that this paper will initiate a more detailed study of images of polynomials on algebras with additional structures. \n\n\n\n\\section{Preliminaries}\n\nUnless otherwise stated, we denote by $F$ an arbitrary field and $\\mathcal{A}$ an associative algebra over $F$. Given a group $G$, a $G$-grading on $\\mathcal{A}$ is a decomposition of $\\mathcal{A}$ in a direct sum of subspaces $\\mathcal{A}=\\bigoplus_{g\\in G}\\mathcal{A}_{g}$ such that $\\mathcal{A}_{g}\\mathcal{A}_{h}\\subset \\mathcal{A}_{gh}$, for all $g$, $h \\in G$. We define the support of a $G$-grading on $\\mathcal{A}$ as the subset $supp(\\mathcal{A})=\\{g\\in G\\mid \\mathcal{A}_{g}\\neq0\\}$. A subspace $U$ of $\\mathcal{A}$ is called homogeneous if $U=\\bigoplus_{g\\in G}(U\\cap \\mathcal{A}_{g})$. 
A graded homomorphism between two graded algebras $\\mathcal{A}=\\bigoplus_{g\\in G}\\mathcal{A}_{g}$ and $\\mathcal{B}=\\bigoplus_{g\\in G}\\mathcal{B}_{g}$ is defined as an algebra homomorphism $\\varphi\\colon \\mathcal{A}\\rightarrow \\mathcal{B}$ such that $\\varphi(\\mathcal{A}_{g})\\subset \\mathcal{B}_{g}$ for every $g\\in G$. We denote by $F\\langle X \\rangle^{gr}$ the free $G$-graded associative algebra generated by a set of noncommuting variables $X=\\{x_{i}^{(g)}\\mid i\\in\\mathbb{N},g\\in G\\}$. We also denote the neutral (that is of degree $1\\in G$) variables by $y$ and call them even variables, and the non neutral ones by $z$ and we call them odd variables. We draw the reader's attention that odd variables may have different degrees in the $G$-grading. \n\nWe define the image of a graded polynomial on an algebra as in \\cite{Kul}.\n\n\\begin{defi}\nLet $f\\in F\\langle X \\rangle^{gr}$ be a $G$-graded polynomial. The image of $f$ on the $G$-graded algebra $\\mathcal{A}$ is the set\n\\[\nIm(f)=\\{a\\in\\mathcal{A}\\mid a=\\varphi(f) \\ \\mbox{for some graded homomorphism} \\ \\varphi\\colon F\\langle X \\rangle^{gr}\\rightarrow \\mathcal{A}\\}\n\\]\n\\end{defi}\n\nEquivalently, if $f(x_{1}^{(g_{1})},\\dots,x_{n}^{(g_{n})})\\in F\\langle X \\rangle^{gr}$, then the image of $f$ on the algebra $\\mathcal{A}$ is the set $Im(f)=\\{f(a_{1}^{(g_{1})},\\dots,a_{n}^{(g_{n})})\\mid a_{i}^{(g_{i})}\\in \\mathcal{A}_{g_{i}}\\}$. We will also denote the image of $f$ on $\\mathcal{A}$ by $f(\\mathcal{A})$.\n\nWe now recall some basic properties of images of graded polynomials on algebras that will be used throughout the paper. \n\n\\begin{prp}\\label{basicprop}\nLet $f\\in F\\langle X \\rangle^{gr}$ be a polynomial and $\\mathcal{A}$ a $G$-graded algebra. \n\\begin{enumerate}\n \\item Let $U$ be one-dimensional subspace of $\\mathcal{A}$ such that $Im(f)\\subset U$ and assume that $\\lambda Im(f)\\subset Im(f)$ for every $\\lambda \\in F$. Then either $Im(f)=\\{0\\}$ or $Im(f)=U$;\n \\item If $1\\in \\mathcal{A}$ and $f\\in F\\langle X \\rangle^{gr}$ is a multilinear polynomial in neutral variables such that the sum of its coefficients is nonzero, then $Im(f)=\\mathcal{A}_{1}$;\n \\item $Im(f)$ is invariant under graded endomorphisms of $F\\langle X \\rangle^{gr}$;\n \\item If $supp(\\mathcal{A})$ is abelian and $f\\in F\\langle X \\rangle^{gr}$ is multilinear, then $Im(f)$ is a homogeneous subset.\n \\end{enumerate}\n\\end{prp}\n\n\\begin{proof}\nThe proofs of the first and third items are straightforward. For the second item it is enough to recall that if $\\mathcal{A}$ is a graded algebra with $1$, then $1\\in\\mathcal{A}_{1}$. Hence, given $a\\in\\mathcal{A}_{1}$ we have $a=f(\\alpha^{-1} a,1,\\dots,1)$, where $\\alpha\\neq0$ is the sum of the coefficients of $f$, and then $Im(f)=\\mathcal{A}_{1}$. For the last item, let $g_{1}$, \\dots, $g_{m}$ be the homogeneous degree of the variables that occur in $f$. If some $g_{i}\\notin supp(\\mathcal{A})$, then $Im(f)=\\{0\\}$ is a homogeneous subspace. Otherwise, since $supp(\\mathcal{A})$ is abelian, we have that each monomial of $f$ is of homogeneous degree $g_{1}\\cdots g_{m}$, and hence the same holds for $f$. \n\\end{proof}\n\nWe say that a nonzero polynomial $f\\in F\\langle X \\rangle^{gr}$ is a graded polynomial identity for a $G$-graded algebra $\\mathcal{A}$ if its image on $\\mathcal{A}$ is zero. The set of all graded polynomial identities of $\\mathcal{A}$ will be denoted by $Id^{gr}(\\mathcal{A})$. 
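To illustrate the definition (this example is ours, not taken from the references), consider $UT_2$ with the elementary $\\mathbb{Z}_2$-grading in which $e_{12}$ spans the odd component. Since the product of any two odd elements is zero, the multilinear polynomial $z_1z_2$ in two odd variables is a graded polynomial identity, while a single odd variable has nonzero image:
\\[
z_1z_2\\in Id^{gr}(UT_2),\\qquad Im(z)=(UT_2)_{\\overline{1}}=span\\{e_{12}\\}\\neq 0.
\\]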
It is easy to check that $Id^{gr}(\\mathcal{A})$ is actually an ideal of $F\\langle X \\rangle^{gr}$ invariant under graded endomorphisms of $F\\langle X \\rangle^{gr}$. It is called the $T_{G}$-ideal of $\\mathcal{A}$. Given a nonempty subset $S$ of $F\\langle X \\rangle^{gr}$, we denote by $\\langle S \\rangle^{T_{G}}$ the $T_{G}$-ideal generated by $S$, that is the least $T_{G}$-ideal that contains the set $S$. The linearisation process also holds for graded polynomials, and as in the ordinary case we have the following statement.\n\n\\begin{prp}\\label{multiidentity}\nIf $\\mathcal{A}$ satisfies a graded polynomial identity, then $\\mathcal{A}$ also satisfies a multilinear one. Moreover, if $char(F)=0$, then $Id^{gr}(\\mathcal{A})$ is generated by its multilinear polynomials.\n\\end{prp}\n\n\nLet now $\\mathcal{A}=UT_{n}$ be the algebra of $n\\times n$ upper triangular matrices over the field $F$. A $G$-grading on $\\mathcal{A}$ is said to be elementary if all elementary matrices are homogeneous in this grading, or equivalently, if there exists an $n$-tuple $(g_{1},\\dots,g_{n})\\in G^{n}$ such that $\\deg(e_{ij})=g_{i}^{-1}g_{j}$. A theorem of Valenti and Zaicev states that every grading on $UT_{n}$ is essentially elementary.\n\n\\begin{thm}[\\cite{VZa}]\\label{gradingsupper}\nLet $G$ be a group and let $F$ be a field. Assume that $UT_{n}=\\mathcal{A}=\\bigoplus_{g\\in G} \\mathcal{A}_{g}$ is $G$-graded. Then $\\mathcal{A}$ is $G$-graded isomorphic to $UT_{n}$ endowed with some elementary $G$-grading.\n\\end{thm}\n\nBy Proposition 1.6 from \\cite{VKV} we have that an elementary grading on $UT_{n}$ is completely determined by the sequence $(\\deg(e_{12}),\\deg(e_{23}),\\ldots,\\deg(e_{n-1,n}))\\in G^{n-1}$.\n\n\nWe recall now some recent results about the description of images of multilinear polynomials on the algebra of upper triangular matrices. We start with the definition of the so-called commutator degree of an associative polynomial.\n\n\\begin{defi}\nLet $f\\in F\\langle X \\rangle$ be a polynomial. We say that $f$ has commutator degree $r$ if \n\\[\nf\\in\\langle [x_{1},x_{2}]\\cdots [x_{2r-1},x_{2r}]\\rangle^{T} \\ \\mbox{and} \\ f\\notin\\langle [x_{1},x_{2}]\\cdots [x_{2r+1},x_{2r+2}]\\rangle^{T}.\n\\]\n\\end{defi}\nIn \\cite{GMe}, Gargate and de Mello used the above definition to give a complete description of images of multilinear polynomials on $UT_{n}$ over infinite fields. Denoting by $J$ the Jacobson radical of $UT_{n}$ and $J^{0}=UT_{n}$, they proved the following theorem.\n\n\\begin{thm}\\label{tGargateThiago}\nLet $F$ be an infinite field and let $f\\in F\\langle X \\rangle$ be a multilinear polynomial. Then $Im(f)$ on $UT_{n}$ is $J^{r}$ if and only if $f$ has commutator degree $r$.\n\\end{thm}\n\nOne of the main steps in the proof of Theorem \\ref{tGargateThiago} was the characterization the polynomials of commutator degree $r$ in terms of their coefficients. An instance of such characterization has already been known, see \\cite{GMe} Lemma 3.3(2). \n\n\\begin{lem}[\\cite{GMe}]\\label{sumofcoeffi}\nLet $F$ be an arbitrary field and let $f\\in F\\langle X \\rangle$ be a multilinear polynomial. 
Then $f\\in \\langle [x_{1},x_{2}]\\rangle^{T}$ if and only if the sum of its coefficients is zero.\n\\end{lem}\n\nIt is worth mentioning that the above theorem has been extended for a larger class of fields by Luo and Wang in \\cite{LWa}.\n\n\\begin{thm}[\\cite{LWa}]\\label{TLuoWang}\nLet $n\\geq 2 $ be an integer, let $F$ be a field with at least $n(n-1)\/2$ elements and let $f\\in F\\langle X \\rangle$ be a multilinear polynomial. If $f$ has commutator degree $r$, then $Im(f)$ on $UT_{n}$ is $J^{r}$.\n\\end{thm}\n\nIn the next corollary we denote by $UT_{n}^{(-)}$ the Lie algebra defined on $UT_{n}$ by means of the Lie bracket $[a,b]=ab-ba$.\n\n\\begin{cor}\nLet $F$ be a field with at least $n(n-1)\/2$ elements and let $f\\in L(X)$ be a multilinear Lie polynomial. Then $Im(f)$ on $UT_{n}^{(-)}$ is $J^{r}$, for some $0\\leq r \\leq n$.\n\\end{cor}\n\n\\begin{proof}\nWe use the Poincar\u00e9-Birkhoff-Witt Theorem (and more precisely the Witt Theorem) to consider the free Lie algebra $L(X)$ as the subalgebra of $F\\langle X \\rangle^{(-)}$ generated by the set $X$. Since $F\\langle X\\rangle$ is the universal enveloping algebra of $L(X)$, given a multilinear Lie polynomial $f\\in L(X)$ there exists an associative polynomial $\\tilde{f}\\in F\\langle X \\rangle$ such that $Im(f)$ on $UT_{n}^{(-)}$ is equal to $Im(\\tilde{f})$ on $UT_{n}$. Now it is enough to apply Theorem \\ref{TLuoWang}.\n\\end{proof}\n\n\nLet $UT_{n}(d_{1},\\ldots,d_{k})$ be the upper block-triangular matrix algebra, that is, the subalgebra of $M_{n}(F)$ consisting of all block-triangular matrices of the form\n\\[\n\\begin{pmatrix}\n A_{1}& & * \\\\\n & \\ddots & \\\\\n 0 & & A_{k} \n\\end{pmatrix}\n\\]\nwhere $n=d_{1}+\\cdots+d_{k}$ and $A_{i}$ is a $d_{i}\\times d_{i}$ matrix. We will denote by $T$ the subalgebra of $UT_{n}(d_{1},\\ldots,d_{k})$ which consists of only triangular blocks of sizes $d_{i}$ on the main diagonal and zero elsewhere. That is, \n\\[\nT=\\begin{pmatrix}\nUT_{d_{1}} & &0 \\\\\n & \\ddots & \\\\\n 0& & UT_{d_{k}}\n\\end{pmatrix}\n\\]\nAs a consequence of the above theorem we obtain the following lemma.\n\n\\begin{lem}\\label{lblock}\nLet $F$ be a field with at least $n(n-1)\/2$ elements and let $f\\in F\\langle X \\rangle$ be a multilinear polynomial of commutator degree $r$. Then the image $Im(f)$ on $T$ is $J^{r}$, where $J=Jac(T)$ is the Jacobson radical of $T$.\n\\end{lem}\n\n\\begin{proof}\nWe note that $T\\cong UT_{d_{1}}\\times\\cdots\\times UT_{d_{k}}$. Hence, by \\cite[Proposition 5.60]{Bre},\n\\[\nJ=\\begin{pmatrix}\n J_{d_{1}} & & \\\\\n & \\ddots & \\\\\n & & J_{d_{k}}\n\\end{pmatrix}\n\\]\nwhere $J_{d_{i}}=Jac(UT_{d_{i}})$. Therefore by Theorem \\ref{TLuoWang}, we have\n\\[\nf(T)=\\begin{pmatrix}\n f(UT_{d_{1}}) & & \\\\\n &\\ddots &\\\\\n & & f(UT_{d_{k}})\n\\end{pmatrix} =\\begin{pmatrix}\n J_{d_{1}}^{r} & & \\\\\n &\\ddots &\\\\\n & & J_{d_{k}}^{r}\n\\end{pmatrix}=J^{r}.\n\\qedhere\n\\]\n\\end{proof}\n\nThroughout this paper we use the letters $w_{i}$ and $w_{i}^{(j)}$ to denote commuting variables. We recall the following well known result about commutative polynomials.\n\n\\begin{lem}\\label{lcomutpoly}\nLet $F$ be an infinite field and let $f_{1}(w_{1},\\dots,w_{m})$, \\dots, $f_{n}(w_{1},\\dots,w_{m})$ be commutative polynomials. 
Then there exist $a_{1}$, \\dots, $a_{m}\\in F$ such that \n\\[\nf_{1}(a_{1},\\dots,a_{m})\\neq0,\\quad \\dots, \\quad f_{n}(a_{1},\\dots,a_{m})\\neq0.\n\\]\n\\end{lem}\n\nA similar result also holds for finite fields, as long as some boundedness on the degrees of the variables is given (see \\cite[Proposition 4.2.3]{Dre}).\n\n\\begin{lem}\\label{lcomutpolyfinite}\nLet $F$ be a finite field with $n$ elements and let $f=f(w_{1},\\dots,w_{m})$ be a nonzero polynomial. If $\\deg_{w_{i}}(f)\\leq n-1$ for every $i=1$, \\dots, $n$, then there exist $a_{1}$, \\dots, $a_{m}\\in F$ such that $f(a_{1},\\dots,a_{m})\\neq0$.\n\\end{lem}\n\n\\begin{cor}\\label{ccomutpolyfinite}\nLet $F$ be a finite field with $n$ elements and let $f_{1}(w_{1}\\dots,w_{m})$, \\dots, $f_{n-1}(w_{1},\\dots,w_{m})$ be nonzero polynomials in commuting variables. If $\\deg_{w_{i}}(f_{j})\\leq 1$ for all $i$ and $j$, then there exist $a_{1}$, \\dots, $a_{m}\\in F$ such that\n\\[\nf_{1}(a_{1},\\dots,a_{m})\\neq0,\\quad \\dots,\\quad f_{n-1}(a_{1},\\dots,a_{m})\\neq0.\n\\]\n\\end{cor}\n\n\n\\section{Graded central polynomials for $UT_{n}$}\n\\label{sect3}\n\nOur goal in this section is to prove the non existence of graded central polynomials for the graded algebra of upper triangular matrices with entries in an arbitrary field. It is well known that the algebra of upper block triangular matrices has no central polynomials, see \\cite[Lemma 1]{gz_ijm}.\n\nWe will denote by $Z(\\mathcal{A})$ the centre of the algebra $\\mathcal{A}$. \n\n\\begin{defi}\nLet $f\\in F\\langle X \\rangle^{gr}$. We say that $f$ is a graded central polynomial for the algebra $\\mathcal{A}$ if $Im(f)\\subset Z(\\mathcal{A})$ and $f\\notin Id^{gr}(\\mathcal{A})$.\n\\end{defi}\n\nWe recall the following fact from \\cite[Lemma 1.4]{VKV}.\n\n\\begin{lem}\nLet $UT_{n}$ be endowed with some elementary grading. Then the subspace of all diagonal matrices is homogeneous of neutral degree.\n\\end{lem}\n\n\\begin{thm}\nLet $UT_{n}=\\mathcal{A}=\\bigoplus_{g\\in G}\\mathcal{A}_{g}$ be a $G$-grading on the algebra of upper triangular matrices over an arbitrary field. If $n>1$ then there exist no graded central polynomials for $\\mathcal{A}$.\n\\end{thm}\n\n\\begin{proof}\nBy Theorem \\ref{gradingsupper} we have that $\\mathcal{A}$ is graded isomorphic to some elementary grading on $UT_{n}$. Hence we may reduce our problem to elementary gradings. Now we assume that $f\\in F\\langle X \\rangle^{gr}$ is a polynomial with zero constant term, such that $Im(f)$ on $\\mathcal{A}$ is contained in $F=Z(\\mathcal{A})$. We write $f$ as $f=f_{1}+f_{2}$ where $f_{1}$ contains neutral variables only and $f_{2}$ has at least one non neutral variable in each of its monomials. Consider $a_{1}$, \\dots, $a_{m}\\in\\mathcal{A}_{1}$, and $b_{1}$, \\dots, $b_{l}$ non neutral variables (of homogeneous degree $\\ne 1$) that occur in $f$. Hence $f(a_{1},\\dots,a_{m},b_{1},\\dots,b_{l})=f_{1}(\\overline{a}_{1},\\dots,\\overline{a}_{m})+j_{1}+j_{2}$ where $j_{1}$, $j_{2}\\in J$, the Jacobson radical of $\\mathcal{A}$, and $\\overline{a}_{i}$ is the diagonal part of $a_{i}$. Since $Im(f)\\subset F$, then $j_{1}+j_{2}=0$ and hence $Im(f)=Im(f_{1})$, where the image of $f_{1}$ is taken on diagonal matrices only. Now, note that if $\\lambda_{1}$, \\dots, $\\lambda_{m}\\in F$ are arbitrary, then \n\\[\nf_{1}(\\lambda_{1}e_{11},\\dots,\\lambda_{m}e_{11})=f_{1}(\\lambda_{1},\\dots,\\lambda_{m})e_{11}.\n\\]\nSince $Im(f_{1})\\subset F$, we must have $f_{1}(\\lambda_{1},\\dots,\\lambda_{m})=0$. 
Hence, for diagonal matrices $D_{i}=\\displaystyle\\sum_{k=1}^{n}\\lambda_{k}^{(i)}e_{kk}$ we have \n\\[\nf_{1}(D_{1},\\dots,D_{m})=\\sum_{k=1}^{n}f_{1}(\\lambda_{1}^{(k)},\\dots,\\lambda_{k}^{(m)})e_{kk}=0,\n\\]\nand thus $Im(f)=\\{0\\}$. We conclude the non existence of graded central polynomials for $UT_{n}$.\n\\end{proof}\n\n\\section{Certain ${\\ensuremath{\\mathbb{Z}}}_{q}$-gradings on $UT_{n}$}\n\\label{sect4}\n\nThroughout this section we denote $UT_{n}=\\mathcal{A}$, endowed with the elementary $\\mathbb{Z}_{q}$-grading given by the following sequence in $\\mathbb{Z}_{q}^{n}$\n\\[\n(\\overline{0},\\overline{1},\\dots,\\overline{q-2},\\underbrace{\\overline{q-1},\\overline{q-1},\\dots,\\overline{q-1}}_{\\text{$n-q+1$ times}}). \\]\nGiven $q\\leq n$ an integer, we study the images of multilinear graded polynomials on $\\mathcal{A}$.\n\nOne can see that for $q=n$ we have the natural ${\\ensuremath{\\mathbb{Z}}}_{n}-$grading on $UT_{n}$ given by $\\deg e_{ij}=j-i\\pmod{n}$ for every $i\\le j$.\n\nWe note that the neutral component of $UT_{n}$ is given by a block triangular matrix with $q-1$ triangular blocks of size one each and a triangular block of size $n-q+1$ in the bottom right corner\n\\[\n\\mathcal{A}_{0}=\\begin{pmatrix}\n * & & & 0 \\\\\n & \\ddots & & \\\\\n & & * & \\\\\n 0& & & UT_{n-q+1}\n\\end{pmatrix}\n\\]\nFor $l\\in\\{1,\\dots,q-1\\}$ we have that the homogeneous component of degree $\\overline{l}$ is given by\n\\[\n\\mathcal{A}_{\\overline{l}}=span\\{e_{i,i+l}, e_{q-l,j} \\mid i=1,\\dots,q-l, j=q+1,\\dots,n\\}.\n\\]\nFor $1\\leq r \\leq n-q$ we also define the following homogeneous subspaces of $A_{\\overline{l}}$\n\\[\n\\mathcal{B}_{\\overline{l},r}=span\\{e_{q-l,j} \\mid j=q+r,\\ldots,n\\}.\n\\]\nAn immediate computation shows that the following are graded identities for $\\mathcal{A}$\n\\begin{align}\n&[y_{1},y_{2}]z\\equiv 0 \\label{identity1} \\\\\n&z_{1}z_{2}\\equiv 0 \\label{identity2} \\\\\n&{[y_{1},y_{2}]}\\cdots [y_{2(n-q+1)-1},y_{2(n-q+1)}]\\equiv 0 \\label{identity3}\n\\end{align}\nwhere the variables $y_i$ are neutral ones, $z$, $z_{1}$, $z_{2}$ are non neutral variables and $\\deg(z_{1})+\\deg(z_{2})=\\overline{0}$. A complete description of the graded polynomial identities for elementary gradings on $UT_{n}$ was given in \\cite{VKV} for infinite fields and in \\cite{GRi} for finite fields.\n\nWe state several lemmas concerning the description of some graded polynomials on $\\mathcal{A}$. In the upcoming lemmas, unless otherwise stated, we assume that the field $F$ has at least $n(n-1)\/2$ elements and $f\\in F\\langle X \\rangle^{gr}$ is a multilinear polynomial.\n\n\\begin{lem}\\label{l1Zq}\nIf $f=f(y_{1},\\dots,y_{m})$, then $Im(f)$ on $\\mathcal{A}$ is a homogeneous vector subspace.\n\\end{lem}\n\n\\begin{proof}\nIt is enough to apply Lemma \\ref{lblock}.\n\\end{proof}\n\n\nIn the next two lemmas we will assume that $f=f(z_{1},\\dots,z_{l},y_{l+1},\\dots,y_{m})$ where $\\deg(z_{i})=\\overline{1}$, $1\\leq i \\leq l$. It is obvious that in this case one must have $Im(f)$ on $\\mathcal{A}$ as a subset of $\\mathcal{A}_{\\overline{l}}$. Modulo the identity $(1)$ we rewrite the polynomial $f$ as\n\\[\nf=\\sum_{\\bm{i_{1}},\\dots, \\bm{i_{l}}}y_{\\bm{i_{1}}}z_{1}y_{\\bm{i_{2}}}z_{2}\\cdots y_{\\bm{i_{l}}}z_{l}g_{\\bm{i_{1}},\\dots,\\bm{i_{l}}}+h\n\\]\nwhere $y_{\\bm{i_{j}}}=y_{i_{j_{1}}}\\cdots y_{i_{j_{k_{j}}}}$ is such that $i_{j_{1}}<\\cdots k$ and $e_{k}=\\alpha_{i_{1},\\dots,i_{k-1}}w_{1}^{(i_{1})}\\cdots w_{1}^{(i_{k-1})}$. 
Then we take $w_{1}^{(i_{1})}=\\cdots=w_{1}^{(i_{k-1})}=1$ and we conclude that $\\alpha_{i_{1},\\dots,i_{k-1}}=0$. Hence $p_{1}=0$, which is a contradiction. Analogous claim holds for $p_{2}$. Therefore it is enough to use the variables $w_{1}^{(m)}$ and $w_{2}^{(m)}$ to realize any matrix in $\\mathcal{A}_{g_{1}}$ in the image of $f$. \n\\end{proof}\n\n\\begin{lem}\nLet $UT_{3}$ be endowed with the grading (I)(d). Then $Im(f)$ on $UT_{3}$ is a homogeneous subspace.\n\\end{lem}\n\n\\begin{proof}\nWe denote $g_{1}=g$ and note that $\\mathcal{A}_{1}=span\\{e_{11},e_{22},e_{33},e_{13}\\}$ and $\\mathcal{A}_{g}=span\\{e_{12},e_{23}\\}$. Then $\\mathcal{A}_{g}^{2}\\subset span\\{e_{13}\\}$ and $\\mathcal{A}$ satisfies the identities $z[y_{1},y_{2}]\\equiv 0$ and $[y_{1},y_{2}]z\\equiv 0$. The case when $f$ has one variable of homogeneous degree $g$ and $m-1$ neutral variables can be treated as in the previous lemma. The remaining cases are considered as above. \n\\end{proof}\n\nHence we have the following theorem.\n\n\\begin{thm}\nLet $F$ be an arbitrary field, let $UT_{3}=\\mathcal{A}=\\bigoplus_{g\\in G}A_{g}$ be some non trivial grading on $\\mathcal{A}$, and let $f\\in F\\langle X \\rangle^{gr}$ be a multilinear graded polynomial. Then $Im(f)$ on $\\mathcal{A}$ is a homogeneous subspace of $\\mathcal{A}$. If $|F|\\geq 3$ and $\\mathcal{A}$ is equipped with the trivial grading, then the image is also a subspace. \n\\end{thm}\n\n\\begin{proof}\nThe proof is clear from the previous lemmas and Proposition \\ref{lowprop}.\n\\end{proof}\n\n\\subsection{The graded Jordan algebra $UJ_{2}$}\n\nThroughout this subsection we assume that $F$ is a field of characteristic different from $2$ and we denote by $UJ_{n}$ the Jordan algebra of the upper triangular matrices with product $a\\circ b=ab+ba$. Unlike the associative setting, gradings on $UJ_{n}$ are not only elementary ones. Actually, a second kind of gradings also occurs on $UJ_{n}$, the so-called mirror type gradings, and we define these below. First of all let us introduce the following notation. 
\n\nLet $i$, $m$ be non negative integers and set \n\\[\nE_{i:m}^{+}=e_{i,i+m}+e_{n-i-m+1,n-i+1} \\ \\mbox{and} \\ E_{i:m}^{-}=e_{i,i+m}-e_{n-i-m+1,n-i+1}.\n\\]\n\\begin{defi}\nA $G$-grading on $UJ_{n}$ is called of mirror type if the matrices $E_{i:m}^{+}$ and $E_{i:m}^{-}$ are homogeneous, and $\\deg(E_{i:m}^{+})\\neq \\deg(E_{i:m}^{-})$.\n\\end{defi}\n\nWe recall the following theorem from \\cite{KYa}.\n\n\\begin{thm}[\\cite{KYa}]\nThe $G$-gradings on the Jordan algebra $UJ_{n}$ are, up to a graded isomorphism, elementary or of mirror type.\n\\end{thm}\n\n\nIn particular we have the following classification of the gradings on $UJ_{2}$.\n\n\n\\begin{prp}\nUp to a graded isomorphism, the gradings on $UJ_{2}$ are given by $UJ_{2}=\\mathcal{A}=\\bigoplus_{g\\in G}\\mathcal{A}_{g}$ where\n\n\\begin{itemize}\n\\item[(I)] elementary ones\n\\begin{itemize}\n\\item[(a)] trivial grading;\n\\item[(b)] $\\mathcal{A}_{1}=\\begin{pmatrix}\n a & 0 \\\\\n & b\n\\end{pmatrix}$, $\\mathcal{A}_{g}=\\begin{pmatrix}\n 0 & c \\\\\n & 0\n\\end{pmatrix}$\n\\end{itemize}\n\\item[(II)] mirror type ones;\n\\begin{itemize}\n\\item[(a)] $\\mathcal{A}_{1}=\\begin{pmatrix}\n a & 0 \\\\\n & a\n\\end{pmatrix}$, $\\mathcal{A}_{g}=\\begin{pmatrix}\n b & c \\\\\n & -b\n\\end{pmatrix}$\n\\item[(b)] $\\mathcal{A}_{1}=\\begin{pmatrix}\n a & b \\\\\n & a\n\\end{pmatrix}$, $\\mathcal{A}_{g}=\\begin{pmatrix}\n c & 0 \\\\\n & -c\n\\end{pmatrix}$\n\\item[(c)] $\\mathcal{A}_{1}=\\begin{pmatrix}\n a & 0 \\\\\n & a\n\\end{pmatrix}$, $\\mathcal{A}_{g}=\\begin{pmatrix}\n b & 0 \\\\\n & -b\n\\end{pmatrix}$, $\\mathcal{A}_{h}=\\begin{pmatrix}\n 0 & c \\\\\n & 0\n\\end{pmatrix}$\n\\end{itemize}\n\\end{itemize}\nwhere $g$, $h\\in G$ are elements of order $2$.\n\\end{prp}\n\n\n\nIn \\cite{KYa} it was also proved that the support of a grading on $UJ_{n}$ is always abelian (see \\cite{KYa} Theorem 24). Hence by Proposition \\ref{basicprop} (4) we have that $Im(f)$ on $UJ_{n}$ is a homogeneous subset for any multilinear graded polynomial $f\\in J(X)$.\n\nNext we analyse the images of a multilinear graded Jordan polynomial $f$ on the gradings considered above.\n\n\\begin{lem}\\label{l1jordan}\nLet $UJ_{2}$ be endowed with the grading (I)(b). Then $Im(f)$ on $UJ_{2}$ is a homogeneous subspace.\n\\end{lem}\n\n\\begin{proof}\nWe start with a multilinear polynomial $f$ in $m$ neutral variables. We evaluate each variable $y_{i}$ to an arbitrary diagonal matrix $D_{i}$. Therefore each monomial $\\mathbf{m}$ in $f$ is evaluated to $2^{m-1}\\beta D_{1}\\cdots D_{m}$, where $\\beta\\in F$ is the coefficient of $\\mathbf{m}$. Hence \n\\[\nf(D_{1},\\dots,D_{m})=2^{m-1}\\alpha D_{1}\\cdots D_{m}\n\\]\nwhere $\\alpha\\in F$ is the sum of all coefficients of $f$. In case $\\alpha=0$, then $f=0$ is a graded polynomial identity for $UJ_{2}$, otherwise we can take $D_{2}=\\cdots=D_{m}=I_{2}$ and use $D_{1}$ in order to obtain every diagonal matrix in the image of $f$.\n\nSince $UJ_{2}$ satisfies the graded identity $z_{1}\\circ z_{2}=0$ such that $\\deg(z_{1})=\\deg(z_{2})=g$, then we only need to analyse the case where $f$ is a multilinear polynomial in $m-1$ neutral variables and one of homogeneous degree $g$. Obviously we must have $Im(f)\\subset \\mathcal{A}_{g}$ and this homogeneous component is one-dimensional, then we are done.\n\\end{proof}\n\nFor the grading (II)(a) we recall a lemma from \\cite{GSa} applied to multilinear polynomials. 
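Before stating that lemma, it is worth checking directly (our own computation) that the components of grading (II)(a) are compatible with the Jordan product. For two homogeneous elements of degree $g$ we have
\\[
\\begin{pmatrix}
 b_{1} & c_{1} \\\\
 & -b_{1}
\\end{pmatrix}\\circ\\begin{pmatrix}
 b_{2} & c_{2} \\\\
 & -b_{2}
\\end{pmatrix}=2b_{1}b_{2}\\begin{pmatrix}
 1 & 0 \\\\
 & 1
\\end{pmatrix}\\in\\mathcal{A}_{1},
\\]
so $\\mathcal{A}_{g}\\circ\\mathcal{A}_{g}\\subseteq\\mathcal{A}_{1}$, in accordance with $g$ having order $2$, while $\\mathcal{A}_{1}\\circ\\mathcal{A}_{g}\\subseteq\\mathcal{A}_{g}$ is clear because $\\mathcal{A}_{1}$ consists of scalar matrices.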
In order to make the notation more compact we omit the symbol $\\circ$ for the Jordan product, and we write $ab$ instead of $a\\circ b$. If no brackets are given in a product, we assume these left-normed, that is $abc=(ab)c$.\n\n\\begin{lem}[\\cite{GSa}]\\label{l1DimasSalomao}\nLet $UJ_{2}$ be endowed with the grading (II)(a) and let $f\\in J(X)_{g}$ be a multilinear $\\mathbb{Z}_{2}$-graded polynomial. Then, modulo the graded identities of $UJ_{2}$, we can write $f$ as a linear combination of monomials of the type\n\\[\ny_{1}\\cdots y_{l}z_{i_{0}}(z_{i_{1}}z_{i_{2}})\\cdots (z_{i_{2m-1}}z_{i_{2m}}), 1<\\cdots0.\n\\]\n\\end{lem}\n\n\\begin{lem}\\label{l2jordan}\nLet $UJ_{2}$ be endowed with the grading (II)(a). Then $Im(f)$ on $UJ_{2}$ is a homogeneous subspace.\n\\end{lem}\n\n\\begin{proof}\nSince $\\dim\\mathcal{A}_{1}=1$ it follows that if the image of a multilinear polynomial on $UJ_{2}$ is contained in $\\mathcal{A}_{1}$ then it must be either $\\{0\\}$ or $\\mathcal{A}_{1}$.\n\nNow we consider a multilinear polynomial $f$ in homogeneous variables of degree $1$ and $g$ such that $\\deg f=g$. Let $\\mathbf{m}=y_{1}\\cdots y_{l}z_{i_{0}}(z_{i_{1}}z_{i_{2}})\\cdots (z_{i_{2m-1}}z_{i_{2m}})$ be a monomial as in Lemma \\ref{l1DimasSalomao}. We note that the main diagonal of a matrix in $m(UJ_{2})$ is such that the entry $(k,k)$ is given by $(-1)^{k+1}2^{m+l+1}a$ where $a$ is the product of the entries at position $(1,1)$ of all matrices $y$ and $z$. Hence every matrix in $Im(f)$ is of the form\n\\[\n\\begin{pmatrix}\n 2^{m+l+1}\\alpha\\cdot a & * \\\\\n & -2^{m+l+1}\\alpha\\cdot a\n\\end{pmatrix}\n\\]\nwhere $\\alpha$ is the sum of all coefficients of $f$.\n\nIn case $\\alpha=0$, then $Im(f)\\subset span\\{e_{12}\\}$ and then the image is completely determined. \n\nWe consider now $\\alpha\\neq0$. Without loss of generality, we assume that the nonzero scalar occurs in the monomial $y_{1}\\cdots y_{l}z_{0}(z_{1}z_{2})\\cdots (z_{2m-1}z_{2m})$. Then we make the following evaluation: $y_{1}=\\cdots=y_{l}=I_{2}$, $z_{0}=w_{1}(e_{11}-e_{22})+w_{2}e_{12}$ and $z_{i}=e_{11}-e_{22}$ for every $i=1$, \\dots, $2m$, where $w_{1}$, $w_{2}$ are commutative variables. Therefore\n\\[\nf(y_{1},\\dots,y_{l},z_{0},\\dots,z_{2m})=\\begin{pmatrix}\n 2^{m+l+1}\\alpha w_{1} & 2^{m+l+1}w_{2} \\\\\n & -2^{m+l+1}\\alpha w_{1}\n\\end{pmatrix}.\n\\]\nSince $char(F)\\neq 2$ and $\\alpha\\neq0$, it follows that $Im(f)=\\mathcal{A}_{g}$. \n\\end{proof}\n\nNow we consider the grading (II)(b) and we recall another lemma from \\cite{GSa}.\n\n\\begin{lem}[\\cite{GSa}]\\label{l2GSa}\nLet $f\\in J(X)_{1}$ be a multilinear polynomial. 
Then, modulo the graded identities of $UJ_{2}$, $f$ can be written as a linear combination of monomials of the form\n\\begin{enumerate}\n \\item $(y_{i_{1}}\\cdots y_{i_{r}})(z_{j_{1}}\\cdots z_{j_{l}})$;\n \\item $(((y_{i}z_{j_{1}})z_{j_{2}})y_{i_{1}}\\cdots y_{i_{r}})z_{j_{3}}\\cdots z_{j_{l}}$,\n\\end{enumerate}\nwhere $l\\geq 0$ is even, $r\\geq 0$, $i_{1}<\\cdots 0$.\nThe hypothesis is that there exists a graph on vertex set $[n] = \\set{1,\\ldots,n}$ such that\n\\begin{enumerate}\n\\item[\\rm (i)] non-neighbors in the graph have, in a certain sense, limited dependence, and\n\\item[\\rm(ii)] the probabilities of the events must satisfy a certain upper bound.\n\\end{enumerate}\nIn the original formulation of the LLL \\nolinebreak\\latexcite{ErdosLovasz}, \ncondition (i) is that each event must be independent from its non-neigbors,\nand condition (ii) is that each event must have probability at most $1\/4d$, where $d$ \nis the maximum degree in the graph.\n\nOver the years, new formulations of condition (i) were discovered,\nof which a very general one is stated below as inequality \\eqref{eq:Dep}.\nInstead of requiring independence between non-neighbors,\nit allows arbitrary dependencies, as long as one can establish a useful upper bound on the\nprobability of $E_i$ conditioned on any set of its non-neighboring events not occurring.\nWe believe this condition first appeared in a paper by Albert, Frieze and Reed \\nolinebreak\\latexcite{Albert},\nand is sometimes referred to as the ``lopsided\" version of the LLL.\n(This is more general than the condition used by Erdos and Spencer~\\nolinebreak\\latexcite{ErdosSpencer}.)\n\nSeveral new formulations of condition (ii) have been proposed over the years,\nnotably by Spencer \\nolinebreak\\latexcite{Spencer75,Spencer77} and by Shearer \\nolinebreak\\latexcite{Shearer}.\nShearer's condition is actually optimal, assuming that the graph is undirected.\nUnfortunately Shearer's condition is difficult to use in applications, so\nresearchers have also studied weaker conditions that are easier to use.\nOne of the most useful of those is the ``cluster expansion'' condition, due to Bissacot\net~al.~\\nolinebreak\\latexcite{Bissacot}.\n\nIn this note, we present short, self-contained proofs of the LLL \nin which condition (i) is formalized using \\eqref{eq:Dep}, as in Albert et~al.~\\nolinebreak\\latexcite{Albert},\nand condition (ii) is formalized using either Shearer's condition \\nolinebreak\\latexcite{Shearer}\nor the cluster expansion condition \\nolinebreak\\latexcite{Bissacot}.\n\\Section{ShearerShort} gives a short proof for Shearer's condition.\nOur proof follows the line of Shearer's original argument,\nalthough we believe our exposition is simpler and more direct.\n\\Section{cluster} gives a short proof for the cluster expansion condition.\nWhereas Bissacot et al.\\ used analytic methods inspired \nby statistical physics, we found a short combinatorial inductive argument. 
\nThis combinatorial proof originally appeared in Section 5.7 of \\nolinebreak\\latexcite{HV-arxiv}, \nbut since that may be somewhat difficult to find, we reproduce it here.\nTo conclude, we show that the cluster expansion condition implies the\nnear-optimal $p \\leq \\frac{1}{ed}$ condition for the symmetric LLL.\n\n\n\n\n\\section{Shearer's Lemma}\n\\SectionName{ShearerShort}\n\nThe following result is the ``lopsided Shearer's Lemma\", a generalization of the LLL\ncombining conditions from Albert et~al.~\\nolinebreak\\latexcite{Albert} and Shearer \\nolinebreak\\latexcite{Shearer}.\nThis formulation also appears in \\nolinebreak\\latexcite{Knuth}.\nLet $\\Gamma(i)$ denote the neighbors of vertex $i$ and let $\\Gamma^+(i) = \\Gamma(i) \\cup \\set{i}$.\nLet $\\operatorname{Ind}=\\operatorname{Ind}(G)$ denote the collection of all independent sets in the graph $G$.\n\n\\begin{lemma}[lopsided Shearer's Lemma]\n\\LemmaName{extShearer}\nSuppose that $G$ is a graph and $E_1,\\ldots,E_n$ events such that\n\\begin{equation}\n\\EquationName{Dep}\n\\Pr[E_i \\mid {\\textstyle \\bigcap}_{j \\in J} \\overline{E_j}] ~\\leq~ p_i\n\\qquad\\forall i \\in [n] ,\\, J \\subseteq [n] \\setminus \\Gamma^+(i).\n\\end{equation}\nFor each $S \\subseteq [n]$, define\n$$\\breve{q}_S ~=~ \\breve{q}_S(p) ~=~ \\sumstack{I \\subseteq S \\\\ I \\in \\operatorname{Ind}(G)} (-1)^{|I|} \\prod_{i \\in I} p_i.$$\nIf $\\breve{q}_S \\geq 0$ for all $S \\subseteq [n]$, then for each $A \\subseteq [n]$, we have\n$$ \\Pr[{\\textstyle \\bigcap}_{j \\in A} \\overline{E_j}] \\geq \\breve{q}_A.$$\n\\end{lemma}\n\nWe present an inductive proof of this lemma. First, we state the following recursive identity for $\\breve{q}_A$.\n\n\\begin{claim}[The ``fundamental identity'' for $\\breve{q}$.]\n\\ClaimName{fundamental-q}\nFor any $a \\in A$, we have\n$$\\breve{q}_A ~=~ \\breve{q}_{A \\setminus \\set{a}} \\,-\\, p_a \\cdot \\breve{q}_{A \\setminus \\Gamma^+(a)}.$$\n\\end{claim}\n\n\\begin{proof}\nEvery independent set $I \\subseteq A$ either contains $a$ or does not. In addition, if $a \\in I$\nthen $I$ is independent iff $I \\setminus \\{a\\}$ is an independent subset of $A \\setminus \\Gamma^+(a)$. Thus the terms in $\\breve{q}_A$ correspond one-to-one to terms on the right-hand side.\n\\end{proof}\n\nNext. define $\\breve{P}_A = \\Pr[\\bigcap_{i \\in A} \\overline{E_i}]$.\nThe following claim analogous to \\Claim{fundamental-q} is the key inequality in the original proof\nof the LLL \\nolinebreak\\latexcite{ErdosLovasz,Spencer75,AlonSpencer} although typical expositions do not\ncall attention to it.\n\n\\begin{claim}[The ``fundamental inequality\" for $\\breve{P}$]\n\\ClaimName{fundamentalP}\nAssume that \\eqref{eq:Dep} holds.\nThen for each $a \\in A$,\n$$\\breve{P}_A \\geq \\breve{P}_{A - a} - p_a \\breve{P}_{A \\setminus \\Gamma^+(a)}.$$\n\\end{claim}\n\n\\begin{proof}\nThe claim is derived as follows.\n$$\n\\breve{P}_A \n ~=~ \\breve{P}_{A-a} - \\Pr\\Bigg[ E_a \\cap \\bigcap_{i \\in A-a} \\overline{E_i} \\Bigg]\n ~\\geq~ \\breve{P}_{A-a} - \\Pr\\Bigg[ E_a \\cap \\bigcap_{i \\in A \\setminus \\Gamma^+(a)} \\overline{E_i} \\Bigg]\n ~\\geq~ \\breve{P}_{A-a} - p_a \\breve{P}_{A \\setminus \\Gamma^+(a)}\n$$\nThe first inequality is trivial (by monotonicity of measure with respect to taking subsets) and the\nsecond inequality is our assumption with $J = A \\setminus \\Gamma^+(a)$.\n\\end{proof}\n\nGiven these two claims, Shearer's Lemma follows by induction. 
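Before turning to the formal induction, the identity of \\Claim{fundamental-q} is easy to check numerically; the following sketch (illustrative only, with an arbitrary small graph and arbitrary values $p_i$) verifies it by brute force.
\\begin{verbatim}
import itertools, random
from math import prod

V = [0, 1, 2, 3]                       # path graph 0-1-2-3
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
p = {i: random.uniform(0.0, 0.3) for i in V}

def independent(I):
    return all(v not in adj[u] for u, v in itertools.combinations(I, 2))

def q_breve(S):
    # sum over independent I within S of (-1)^|I| * prod_{i in I} p_i
    return sum((-1) ** r * prod(p[i] for i in I)
               for r in range(len(S) + 1)
               for I in itertools.combinations(sorted(S), r)
               if independent(I))

# check: q_A = q_{A-a} - p_a * q_{A minus Gamma^+(a)} for every A and a in A
for r in range(1, len(V) + 1):
    for A in map(set, itertools.combinations(V, r)):
        for a in A:
            rhs = q_breve(A - {a}) - p[a] * q_breve(A - ({a} | adj[a]))
            assert abs(q_breve(A) - rhs) < 1e-12
print("fundamental identity verified on the path graph")
\\end{verbatim}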
\\vspace{3pt}\n\n\\begin{proofof}{\\Lemma{extShearer}}\nWe claim by induction on $|A|$ that for all $a \\in A$, \n\\begin{equation}\n\\EquationName{ShearerInduction}\n\\frac{\\breve{P}_{A}}{\\breve{P}_{A-a}} \\geq \\frac{\\breve{q}_{A}}{\\breve{q}_{A-a}}.\n\\end{equation}\nThe base case, $A = \\{a\\}$, holds because $\\breve{P}_{\\{a\\}} = \\breve{q}_{\\{a\\}} = 1 - p_a$\nand $\\breve{P}_\\emptyset = \\breve{q}_\\emptyset = 1$.\n\nFor $|A|>1$, the inductive hypothesis applied successively to the elements of $A \\cap \\Gamma(a)$ yields\n$$ \\frac{\\breve{P}_{A-a}}{\\breve{P}_{A \\setminus \\Gamma^+(a)}} \\geq\n\\frac{\\breve{q}_{A-a}}{\\breve{q}_{A \\setminus \\Gamma^+(a)}}.$$\nUsing this inequality, \\Claim{fundamental-q} and \\Claim{fundamentalP}, we obtain\n$$ \\frac{\\breve{P}_{A}}{\\breve{P}_{A-a}} \\geq 1 - p_a \\frac{\\breve{P}_{A \\setminus\n\\Gamma^+(a)}}{\\breve{P}_{A-a}} \\geq 1 - p_a \\frac{\\breve{q}_{A \\setminus \\Gamma^+(a)}}{\\breve{q}_{A-a}} \n= \\frac{\\breve{q}_A}{\\breve{q}_{A-a}}.$$\nThis proves the inductive claim.\n\nCombining \\eqref{eq:ShearerInduction} with the fact $\\breve{P}_\\emptyset = \\breve{q}_\\emptyset = 1$\nshows that $\\breve{P}_A \\geq \\breve{q}_A$ for all $A$.\n\\end{proofof}\n\n\\paragraph{Comparison to Shearer's original lemma.}\nShearer's lemma was originally stated as follows \\nolinebreak\\latexcite{Shearer}.\n\n\\begin{lemma}\n\\LemmaName{origShearer}\nSuppose that\n\\begin{equation}\n\\EquationName{Dep2}\n\\Pr[E_i \\mid {\\textstyle \\bigcap}_{j \\in J} \\overline{E_j}] ~=~ \\Pr[E_i]\n\\qquad\\forall i \\in [n] ,\\, J \\subseteq [n] \\setminus \\Gamma^+(i).\n\\end{equation}\nLet $p_i = \\Pr[E_i]$ and for each $S \\subseteq [n]$, define\n$$q_S = \\sumstack{I \\in \\operatorname{Ind}(G) \\\\ S \\subseteq I} (-1)^{|I \\setminus S|} \\prod_{i \\in I} p_i.$$\nIf $q_S \\geq 0$ for all $S \\subseteq [n]$, then\n$$ \\Pr[{\\textstyle \\bigcap}_{i=1}^{n} \\overline{E_i}] \\geq q_\\emptyset.$$\n\\end{lemma}\n\nThere are two differences between \\Lemma{extShearer} and \\Lemma{origShearer}.\nRegarding condition (i), \\Lemma{extShearer} uses \\eqref{eq:Dep}\nwhereas \\Lemma{origShearer} uses \\eqref{eq:Dep2};\nas discussed above, the former condition is more general.\nThe other main difference is the use of coefficients $q_S$ in \\Lemma{origShearer} as opposed to $\\breve{q}_S$ in \\Lemma{extShearer}. \nThe condition $q_S \\geq 0 \\:\\forall S$ turns out to be equivalent to $\\breve{q}_S \\geq 0 \\:\\forall S$,\nso \\Lemma{extShearer} and \\Lemma{origShearer} have equivalent formulations of condition (ii) (see \\nolinebreak\\latexcite{HV-arxiv} for more details).\nWe chose to state \\Lemma{extShearer} using $\\breve{q}_S$ because those are the coefficients that\nnaturally arise in the proof.\n\nShearer gives an interpretation of these coefficients as follows:\nthere is a unique probability space called the ``tight instance\" that minimizes the\nprobability of $\\Pr[ {\\textstyle \\bigcap}_i \\overline{E_i} ]$.\nIn that probability space, $q_S$ is exactly the probability that the events $\\setst{ E_i }{ i \\in S }$\noccur and the events $\\setst{ E_j }{ j \\notin S }$ do not occur.\nIn contrast, the coefficient $\\breve{q}_S$ is the probability that the events $\\setst{ E_i }{ i \\in S }$\ndo not occur. 
\nIn general, the coefficients are related by the identity $\\breve{q}_S = \\sum_{T \\subseteq [n] \\setminus\nS} q_T$, which can be proved by inclusion-exclusion.\nThe conclusion of \\Lemma{extShearer} is that $\\Pr[\\bigcap_{i=1}^{n} \\overline{E_i}] \\geq \\breve{q}_{[n]}$ and it is easy to see that $\\breve{q}_{[n]} = q_\\emptyset$. \nHence we recover \\Lemma{origShearer} from \\Lemma{extShearer}.\nThe tight instance also shows that the conclusion of Shearer's lemma is tight. \n\n\n\n\n\n\n\n\n\\section{Cluster Expansion}\n\\label{sec:cluster}\n\nNext we turn to a variant of the LLL that is stronger than the early formulations\n\\nolinebreak\\latexcite{ErdosLovasz,Spencer75,Spencer77} but weaker than Shearer's Lemma.\nThis lemma has been referred to as the {\\em cluster expansion} variant of the LLL; it was proved by Bissacot et al.~\\nolinebreak\\latexcite{Bissacot} using analytic techniques inspired by statistical physics. Although it is subsumed by Shearer's Lemma, it is typically easier to use in applications and\nprovides stronger quantitative results than the original LLL.\n\nFor variables $y_1,\\ldots,y_n$, we define\n$$\nY_S ~=~ \\sumstack{I \\in \\operatorname{Ind} \\\\ I \\subseteq S} y^I,\n$$\nwhere $y^I$ denotes $\\prod_{i \\in I} y_i$.\nThis is similar to the quantity $\\breve{q}_S$, but without the alternating sign.\n\n\\begin{lemma}[the cluster expansion lemma]\n\\LemmaName{cluster}\nSuppose that\n\\eqref{eq:Dep} holds \nand there exist $y_1,\\ldots,y_n>0$ such that for each $i \\in [n]$,\n\\begin{equation}\n\\EquationName{CLL}\np_i ~\\leq~ \\frac{y_i}{Y_{\\Gamma^+(i)}}.\n\\end{equation}\nThen\n$$ \\Pr[{\\textstyle \\bigcap}_{i=1}^{n} \\overline{E_i}] ~\\geq~ \\frac{1}{Y_{[n]}} ~>~ 0.$$\n\\end{lemma}\n\nHere we present an inductive combinatorial proof of \\Lemma{cluster}. \nFirst, some preliminary facts.\n\n\\begin{claim}[The ``Fundamental Identity'' for $Y$]\n\\ClaimName{fundamentalY}\nFor any $a \\in A$, we have\n$$ Y_{A} = Y_{A - a} + y_a Y_{A \\setminus \\Gamma^+(a)}.$$\n\\end{claim}\n\n\\begin{proof}\nThis follows from \\Claim{fundamental-q} since we can write $Y_S = \\breve{q}_S(-y)$.\nOr directly, every summand $y^J$ on the left-hand side either appears in $Y_{A - a}$\nif $a \\not\\in J$, or as a summand in $y_a Y_{A \\setminus \\Gamma^+(a)}$ if $a \\in J$.\n\\end{proof}\n\n\\begin{claim}[Log-subadditivity of $Y$]\n\\ClaimName{submult}\nIf $A, B$ are disjoint then $Y_{A \\union B} \\leq Y_A \\cdot Y_B$.\n\\end{claim}\n\\begin{proof}\nEvery summand $y^J$ of $Y_{A \\cup B}$ appears in the expansion of the product\n$$\nY_A \\cdot Y_B = \n \\sumstack{J' \\subseteq A \\\\ J' \\in \\operatorname{Ind}} \n \\sumstack{J'' \\subseteq B \\\\ J'' \\in \\operatorname{Ind}} y^{J'} y^{J''}\n$$\nby taking $J' = J \\intersect A$ and $J'' = J \\intersect B$.\nAll other terms on the right-hand side are non-negative.\n\\end{proof}\n\nThe following is the key inductive inequality, analogous to \\Equation{ShearerInduction}\nin the proof of Shearer's Lemma. 
Note that here the induction runs\nin the opposite direction for the $Y$ coefficients, which are indexed by complementary sets;\nthe reason for this lies in the fundamental identity for $Y$ (\\Claim{fundamentalY}) which\nhas the opposite sign compared to \\Claim{fundamental-q}.\nFor a set $S \\subseteq [n]$, we will use the notation $S^c = [n] \\setminus S$.\n\n\\begin{lemma}\n\\LemmaName{cluster-induction}\nSuppose that $p$ satisfies \\eqref{eq:CLL}.\nThen for every $a \\in S \\subseteq [n]$, $\\breve{P}_S > 0$ and\n$$\n\\frac{\\breve{P}_{S}}{\\breve{P}_{S-a}} ~\\geq~ \\frac{Y_{S^c}}{Y_{(S-a)^c}}.\n$$\n\\end{lemma}\n\n\\begin{proof}\nFirst, note that \\eqref{eq:CLL} implies that $p_i < 1$ for all $i$.\nWe proceed by induction on $|S|$. The base case is $S = \\{a\\}$. In that case we have\n$ \\frac{\\breve{P}_{\\{a\\}}}{\\breve{P}_\\emptyset} = \\Pr[\\overline{E_a}] \\geq 1 - p_a > 0$.\nOn the other hand, by the two claims above and \\eqref{eq:CLL}, we have \n$$ Y_{[n]} ~=~ Y_{[n]-a} + y_a Y_{[n] \\setminus \\Gamma^+(a)}\n ~\\geq~ Y_{[n] - a} + p_a Y_{\\Gamma^+(a)} Y_{[n] \\setminus \\Gamma^+(a)}\n ~\\geq~ Y_{[n] - a} + p_a Y_{[n]}.$$\nTherefore, $\\frac{Y_{[n]-a}}{Y_{[n]}} \\leq 1 - p_a$ which proves the base case.\n\nWe prove the inductive step by similar manipulations. Let $a \\in S$.\nWe can assume that $\\breve{P}_{S-a} > 0$ by the inductive hypothesis.\nBy \\Claim{fundamentalP}, we have\n$$ \\frac{\\breve{P}_S}{\\breve{P}_{S-a}} ~\\ge~ \n1 - p_a \\frac{\\breve{P}_{S \\setminus \\Gamma^+(a)}}{\\breve{P}_{S-a}}.$$\nThe inductive hypothesis applied repeatedly to the elements of $S \\cap \\Gamma(a)$ yields\n\\begin{equation*}\n1 - p_a \\frac{\\breve{P}_{S \\setminus \\Gamma^+(a)}}{\\breve{P}_{S-a}}\n ~\\geq~ 1 - p_a \\frac{Y_{(S \\setminus \\Gamma^+(a))^c}}{Y_{(S-a)^c}}\n ~=~ 1 - p_a \\frac{Y_{S^c \\cup \\Gamma^+(a)}}{Y_{S^c + a}}. \n\\end{equation*}\nBy the two claims above and \\eqref{eq:CLL}, we have\n$$ Y_{S^c+a} ~=~ Y_{S^c} + y_a Y_{S^c \\setminus \\Gamma^+(a)}\n ~\\geq~ Y_{S^c} + p_a Y_{\\Gamma^+(a)} Y_{S^c \\setminus \\Gamma^+(a)}\n ~\\geq~ Y_{S^c} + p_a Y_{S^c \\cup \\Gamma^+(a)}.$$\nWe conclude that\n$$ \\frac{\\breve{P}_{S}}{\\breve{P}_{S-a}}\n ~\\geq~ 1 - p_a \\frac{Y_{S^c \\cup \\Gamma^+(a)}}{Y_{S^c+a}}\n ~\\geq~ 1 - \\frac{Y_{S^c+a} - Y_{S^c}}{Y_{S^c+a}} = \\frac{Y_{S^c}}{Y_{(S-a)^c}} $$\nwhich also implies $\\breve{P}_S > 0$.\n\\end{proof}\n\nNow we can complete the proof of \\Lemma{cluster}.\n\n\\begin{proofof}{\\Lemma{cluster}}\nBy \\Lemma{cluster-induction}, we have $\\frac{\\breve{P}_{S}}{\\breve{P}_{S-a}} ~\\geq~ \\frac{Y_{S^c}}{Y_{(S-a)^c}}$ for all $a \\in S$.\nHence,\n$$ \\Pr[\\bigcap_{i=1}^{n} \\overline{E_i}] ~=~ \\breve{P}_{[n]}\n ~=~ \\prod_{i=1}^{n} \\frac{\\breve{P}_{[i]}}{\\breve{P}_{[i-1]}}\n ~\\geq~ \\prod_{i=1}^n \\frac{Y_{[i]^c}}{Y_{[i-1]^c}}\n ~=~ \\frac{1}{Y_{[n]}}.\n$$\n\\end{proofof}\n\nBy a similar proof, it can be proved that $\\frac{\\breve{q}_{S}}{\\breve{q}_{S-a}} ~\\geq~ \\frac{Y_{S^c}}{Y_{(S-a)^c}}$ for all $a \\in S$,\nwhich relates the cluster expansion lemma to Shearer's Lemma. 
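This last inequality can also be checked numerically on small instances; the sketch below (illustrative only; the graph and the weights $y_i$ are arbitrary, and $p_i$ is set to the extreme value $y_i\/Y_{\\Gamma^+(i)}$ allowed by \\eqref{eq:CLL}) verifies it by brute force.
\\begin{verbatim}
import itertools
from math import prod

V = [0, 1, 2, 3]                       # path graph 0-1-2-3
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
y = {0: 0.7, 1: 0.4, 2: 1.1, 3: 0.5}   # arbitrary positive weights

def independent(I):
    return all(v not in adj[u] for u, v in itertools.combinations(I, 2))

def poly(S, w, sign=1):
    # Y_S for sign=+1, q_breve_S for sign=-1 (with weights w)
    return sum(sign ** r * prod(w[i] for i in I)
               for r in range(len(S) + 1)
               for I in itertools.combinations(sorted(S), r)
               if independent(I))

# the extreme choice allowed by the cluster expansion condition
p = {i: y[i] / poly({i} | adj[i], y) for i in V}

Vs = set(V)
for r in range(1, len(V) + 1):
    for S in map(set, itertools.combinations(V, r)):
        for a in S:
            lhs = poly(S, p, -1) / poly(S - {a}, p, -1)      # q_S / q_{S-a}
            rhs = poly(Vs - S, y) / poly(Vs - (S - {a}), y)  # Y_{S^c} / Y_{(S-a)^c}
            assert lhs >= rhs - 1e-12
print("q_S / q_{S-a} >= Y_{S^c} / Y_{(S-a)^c} on this example")
\\end{verbatim}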
We refer the reader to \\nolinebreak\\latexcite{HV-arxiv}.\n\n\n\\section{The Symmetric LLL}\n\nThe ``symmetric'' LLL does not use a different upper bound $p_i$\nfor each event $E_i$, and instead assigns $p_i = p$ for all $i$.\nThe question then becomes, given a dependency graph,\nwhat is the maximum $p$ such that the conclusion of the LLL holds?\nAs mentioned above, Erd\\H{o}s and Lov\\'{a}sz \\nolinebreak\\latexcite{ErdosLovasz} showed that one may take $p=1\/4d$\nif the graph has maximum degree $d$.\nSpencer \\nolinebreak\\latexcite{Spencer77} showed the improved result $p = d^d\/(d+1)^{d+1} > \\frac{1}{e(d+1)}$,\nand Shearer \\nolinebreak\\latexcite{Shearer} improved that to the value\n$p = (d-1)^{d-1}\/d^d > \\frac{1}{ed}$, which is optimal as $n \\rightarrow \\infty$.\n\nWe now show that the cluster expansion lemma (\\Lemma{cluster}) gives a short proof\nof the $\\frac{1}{ed}$ bound, which is just slightly suboptimal.\nAlternative proofs of the $(d-1)^{d-1}\/d^d$ and $\\frac{1}{ed}$ bounds may be found in\nKnuth's exercises 323 and 325 \\nolinebreak\\latexcite{Knuth}.\n\n\\begin{lemma}[Near-optimal symmetric LLL]\nSuppose that $G$ has maximum degree $d \\geq 2$ and let\n$p = \\max_{i \\in [n]} \\, \\max_{J \\subseteq [n] \\setminus \\Gamma^+(i)} \\,\n\\Pr[E_i \\mid {\\textstyle \\bigcap}_{j \\in J} \\overline{E_j}]$.\nIf $$p \\leq \\frac{1}{ed}$$\nthen $$\\Pr[ {\\textstyle \\bigcap}_i \\overline{E_i} ] > 0.$$\n\\end{lemma}\n\\begin{proof}\nWe set $p_i=p$ and $y_i=y=\\frac{1}{d-1}$ for all $i$,\nthen apply \\Lemma{cluster}.\nTo do so, we must check that \\eqref{eq:CLL} is satisfied.\nNote that $Y_{\\Gamma^+(i)} = y + Y_{\\Gamma(i)} \\leq y + (1+y)^d$.\nThen\n$$\n\\frac{y_i}{Y_{\\Gamma^+(i)}}\n \\geq \\frac{y}{y+(1+y)^d}\n = \\frac{1}{1+\\frac{d^d}{(d-1)^{d-1}}}.\n$$\nThe claim is that this is at least $\\frac{1}{ed}$.\nBy simple manipulations, this claim is equivalent to\n\\begin{equation}\n\\EquationName{SymmIneq}\ne ~\\geq~ \\frac{1}{d} + \\Big( \\frac{d}{d-1} \\Big)^{d-1},\n\\end{equation}\nwhich we prove by a short calculus argument.\nFirst we derive the bound\n\\begin{align*}\n\\ln \\Big( \\frac{d}{d-1} \\Big)^{d-1}\n\\,=\\, \\!- (d-1) \\ln \\Big( 1 - \\frac{1}{d} \\Big)\n\\,=\\, (d-1) \\sum_{k=1}^{\\infty} \\frac{1}{k d^k}\n\\,=\\, 1 - \\sum_{k=1}^{\\infty} \\Big(\\frac{1}{k}-\\frac{1}{k+1}\\Big) \\frac{1}{d^k}\n\\,<\\, 1 - \\frac{1}{2d}.\n\\end{align*}\nFrom here, using the elementary bound $e^{-x} \\leq 1 - x + \\frac{x^2}{2}$ for $x \\geq 0$, we obtain\n$$\n\\Big( \\frac{d}{d-1} \\Big)^{d-1}\n~<~ \\exp\\Big( 1 - \\frac{1}{2d} \\Big)\n~\\leq~ e \\cdot \\Big( 1 - \\frac{1}{2d} + \\frac{1}{8d^2} \\Big)\n~\\leq~ e - \\frac{1}{d},\n$$\nwhere the last inequality is equivalent to $\\frac{e}{2} - \\frac{e}{8d} \\geq 1$ and thus holds for all $d \\geq 2$. This establishes \\eqref{eq:SymmIneq}.\n\\end{proof}\n\n\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nDecentralization is back in the spotlight. While peer-to-peer (P2P) systems were popular in the 2000s, they subsequently lost their appeal to centralized social networks and streaming services. The tide has recently turned for two reasons. The first is the fear over excessive centralization of user data, sparked by pivotal incidents, such as the 2013 leaking of the PRISM surveillance program by Edward Snowden \\cite{greenwald2013nsa_prism} and \nthe 2016 Cambridge Analytica scandal \\cite{cadwalladr2017great_cambridgeanalytica}. 
The second is the increasing mainstream appeal of cryptocurrencies and blockchain-based applications \\cite{chen2018survey_blockchainapps}.\n\nAs a result, influential web technologists have called for a decentralized web with actions like the Decentralized Web Summits by the Internet Archives\\footnote{\\url{https:\/\/www.decentralizedweb.net\/}} and the Solid project by Tim Berners-Lee\n\\cite{mansour2016demonstration_solid}. From their part, policy-makers have answered this call, with the European Commission supporting decentralized technologies in its flagship Next Generation Internet initiative\\footnote{\\url{https:\/\/digital-strategy.ec.europa.eu\/en\/policies\/next-generation-internet-initiative}}. \n\nCurrently, the most popular decentralized technologies are distributed hash tables (DHTs) and blockchain. Of these, DHTs come from the previous wave of P2P research and enable document retrieval via unique textual identifiers, with strong guarantees on retrieval delay. For example, the Kademlia DHT powers the Interplanetary File System (IPFS), a decentralized file storage solution that aspires to become a pillar of the decentralized Web\\footnote{\\url{https:\/\/ipfs.io\/}}.\nOn the other hand, blockchain allows decentralized nodes to maintain common states via consensus mechanisms like proof-of-work and broadcasting. Instead of data, blockchain is typically used to broadcast monetary transactions and reward nodes for executing decentralized operations. For example, Filecoin builds on IPFS and uses blockchain to reward nodes for offering file storage. \n\nBut how can one \\textit{find} documents in decentralized systems? DHTs require previous knowledge of document identifiers, which must be acquired externally. Alternatively, they can implement distributed inverted indexes by storing relevant document identifiers for search keywords \\cite{reynolds2003efficient_dhtsearchengine}, as the YaCy search engine does\\footnote{\\url{https:\/\/yacy.net\/}}.\n\nHowever, this practice carries fundamental bandwidth and storage constraints\\cite{li2003feasibility_dhtwebindexing} and exact keyword matching is dated compared to the semantic awareness of modern search engines. On the other hand, unstructured search techniques, such as flooding, random walks, index sharing, and query caching\\cite{khatibi2021resource_unstructuredsurvey} often suffer from high communication overhead and unpredictable delays. Finally, blockchain has been used to reward nodes for executing indexing and retrieval operations in decentralized search engines, such as Presearch\\footnote{\\url{https:\/\/presearch.org\/}}, but broadcasting indexes to all nodes is prohibitive in terms of bandwidth and storage.\n\nWhile research on decentralized search has stagnated on the above bottlenecks, centralized search engines have evolved to better understand query semantics. This evolution has been driven by advancements in \\textit{embeddings}, latent representations of text and other types of content \\cite{lin2021pretrained_transformersforretrieval}. Retrieval with embeddings often follows a vector space model, which extracts vector representations for documents and queries and compares their relevance with a simple similarity metric, such as the dot product or cosine similarity. This way, the retrieval can be cast as a \\textit{nearest-neighbor} problem, which tries to find the nearest documents to a query according to the selected similarity metric. 
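As a toy illustration of this nearest-neighbor view, top-$k$ retrieval under the dot product takes only a few lines; the snippet below assumes NumPy and uses random vectors as stand-ins for the output of a real encoder.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
D = rng.normal(size=(10_000, 300))               # document embeddings, one per row
D /= np.linalg.norm(D, axis=1, keepdims=True)    # unit-normalise
q = rng.normal(size=300)                         # query embedding
q /= np.linalg.norm(q)

k = 5
scores = D @ q                                   # dot product == cosine on unit vectors
topk = np.argpartition(-scores, k)[:k]           # k best candidates, unordered
topk = topk[np.argsort(-scores[topk])]           # sort them by decreasing relevance
print(topk, scores[topk])
\\end{verbatim}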
In contrast to term-frequency vectors, embeddings are lower-dimensional and enable semantic rather than exact term matching, giving rise to \\textit{dense retrieval}.\n\nHere, we argue that decentralized search can benefit from modern techniques employed by centralized search engines. To this end, we revisit the decentralized search problem from an embedding-based standpoint. We further employ a graph signal processing technique to implement similarity-based P2P query routing. We propose composing node embeddings from local node documents and diffusing them through P2P networks with decentralized implementations of graph filters, such as Personalized PageRank (PPR). We then use the diffused embeddings to guide decentralized search towards nodes with relevant documents. We experiment with a simulation of a real-world P2P network and investigate how our solution scales with the number of documents in the network. Our approach successfully locates relevant documents in nearby nodes but accuracy sharply declines as the number of documents increases, highlighting the need for further research.\n\n\\section{Background and Related Work}\nThis section explores related work on decentralized search (Subsection~\\ref{sec:decentralized search}) and then presents dense information retrieval (Subsection~\\ref{sec:dense retrieval}) and graph signal processing (Subsection~\\ref{sec:gsp}) background to contextualize later analysis.\n\n\\label{sec:background}\n\\subsection{Decentralized search} \\label{sec:decentralized search}\nDecentralized search received attention in the early 2000s for P2P file sharing systems, such as Gnutella and Freenet \\cite{aberer2002overview_overviewgnutellaetal}. Gnutella introduced \\textit{flooding}, the simplest technique for search, which forwards search queries to all nodes within a specified number of hops. As P2P platforms grew in size, flooding was soon found to not scale in terms of bandwidth consumption \\cite{ritter2001gnutella_gnutellascalability}, giving rise to alternatives, such as random walks, index sharing, and super-peer architectures \\cite{lua2005survey_earlysurveyonp2p}. Of these, \\textit{informed} methods exploit hints about possible document locations and outperform \\textit{blind} methods, like flooding and random walks in terms of delay and communication cost. This comes at the expense of costly state maintenance at nodes \\cite{tsoumakos2006analysis_blindinformeddistinction}. \n\\par\nInformed search methods rely on query routing and can be further categorized into \\textit{document-} and \\textit{query-oriented} ones \\cite{arour2015learning_querycontentoriented}. In document-oriented methods, P2P nodes exchange information about their stored documents \\cite{crespo2002routing_firstdocumentrouting, kumar2005efficient_bloomfilters}. As the storage cost increases with the number of documents in the network, the advertisement radius is limited and summarization is employed to compress the advertisements, for instance with Bloom filters \\cite{kumar2005efficient_bloomfilters}. Both techniques introduce routing errors. In query-oriented methods, nodes store information of passing queries and their results \\cite{kalogeraki2002local_firstqueryrouting, li2006improve_queryroutingwithrl} and, when a new query arrives, it is forwarded to the most successful route travelled by similar past queries. These methods are attractive because they avoid storing information about unpopular documents. 
On the other hand, they are blind to unseen queries, especially at the beginning of the network's operation when no information is available (cold-start problem).\n\nWhile informed search identifies the locations of relevant documents through routing, DHTs decouple these two operations with a clever application of hashing \\cite{lua2005survey_earlysurveyonp2p}. In particular, DHT nodes agree to store documents whose hash values are the closest to their own address, according to a distance function. As a result, when nodes search for a document, they can resolve its location and reach it through routing. For efficiency, most DHT systems, such as Chord, Pastry, and Kademlia, structure P2P networks so that all locations are reachable within a maximum number of hops \\cite{lua2005survey_earlysurveyonp2p}, although this structuring is not strictly required\\footnote{Efficient addressing can be enforced on networks with arbitrary structure, for example with greedy embeddings \\cite{hofer2013greedy_greedyembeddings}.}.\n\nThe theoretical properties and practicality of DHTs have made them attractive for modern decentralized systems, such as IPFS, but they are best suited for key-based retrieval. For other types of search, such as range and nearest neighbor queries, adaptations or other distributed data structures are needed, such as skip-lists and skip-graphs \\cite{reynolds2003efficient_dhtsearchengine, bongers2015survey_multidimensionalrangequeries, gao2007efficient_dhtssimilaritysearch}. These solutions carry their own limitations, including security concerns and poor load balancing of traffic.\n\n\\subsection{Dense retrieval}\\label{sec:dense retrieval}\nInformation retrieval is often based on vector space models that represent documents and queries as vectors and estimate document relevance to queries via a similarity metric. Text vector representations are traditionally derived from bag-of-words models based on word frequencies, predominantly the TF-IDF and BM25 models\\cite{manning2010introduction}. Those yield high-dimensional sparse vectors that can be efficiently stored in inverted index tables but do not capture the underlying semantics, such as implied contexts, synonyms, or word co-usage patterns. To address this issue, research has moved towards lower-dimensional dense representations, which encode latent semantics and enable soft matches. Dense retrieval has recently demonstrated definite improvement over sparse retrieval (represented by the BM25 model) \\cite{lin2019neural_neuralhype}, owing to the successful transfer of deep learning advances \\cite{lin2021neural_neuralhyperecant}. \n\nKey steps in this process have been the development of efficient vector representations for words with the Word2Vec and Glove frameworks \\cite{pennington2014glove_glove}, which were later extended to sentences. While sentence embeddings are less understood, they were shown to capture linguistic information \\cite{conneau2018you_sentenceembeddings} and are useful to retrieval \\cite{yang2019simple_sbertforadhoc}. \n\nCurrently, the state of the art for dense retrieval focuses on pre-trained transformer models, commonly based on BERT \\cite{devlin2018bert}, \nwhich are subsequently fine-tuned on downstream retrieval tasks \\cite{lin2021pretrained_transformersforretrieval}. There are two extreme approaches in using BERT for retrieval, \\textit{cross-encoders} and \\textit{bi-encoders}. 
Cross-encoders consider all interactions among query and document words, which yields the best accuracy but with high processing and energy costs. For instance, cross-encoders need to process all documents and queries at query-time, which incurs unreasonable delays. In contrast, bi-encoders conform to the vector space model in that documents and queries are transformed separately to vectors and interact via simple operations, such as the dot product or cosine similarity. While bi-encoders are less accurate than cross-encoders, they outperform BM25, enable proactive document indexing, and their inference is quick and cheap with approximate nearest-neighbor algorithms \\cite{aumuller2020ann}. Therefore, the vector space model and nearest-neighbor algorithms remain relevant for modern search applications.\n\n\\subsection{Graph signal processing}\\label{sec:gsp}\nGraph signal processing is a recently popularized field that generalizes traditional signal processing principles to graphs \\cite{ortega2018graph_gspsurvey,huang2018graph}. With this approach, graph signals are defined as collections of node values, e.g., scalars, vectors, and graph filters study their propagation through graphs. In particular, a graph convolution operation is defined, which performs one-hop propagation of node values through matrix multiplication, and graph filters are defined by weighted aggregation of multihop propagations. Popular graph filters, such as PPR and heat kernels perform the equivalent of low-pass filtering by placing higher importance to node values that are propagated fewer hops away.\n\\par\nWhen node values are vectors, graph filters operate independently on each vector dimension. This type of propagation is useful by itself for downstream predictive tasks, such as prediction propagation in graph neural networks \\cite{klicpera2018predict,dong2021equivalence}. In this work, we consider low-pass graph filters as a type of smoothing that concentrates around a small area around nodes. This area can be tuned by a single parameter of the PPR filter.\n\n\\section{Problem Setting}\n\\label{sec:problem_seting}\nThis section first presents dense retrieval operations, as they would be applied by modern centralized search engines (Subsection \\ref{subsec:centralized_setting}), and then re-formulates them in a decentralized setup (Subsection \\ref{subsec:decentralized_setting}).\n\n\\subsection{Centralized Setting}\n\\label{subsec:centralized_setting}\n\nIn the centralized setting, we consider search engines that are responsible for answering queries over collections of stored documents $\\mathcal{D}$. When engines receive queries $q$, they compute relevance scores $s(d,q)$ for all documents $d \\in \\mathcal{D}$. They then estimate the top-$k$ most relevant documents per\n\\begin{equation}\n \\underset{d \\in \\mathcal{D}}{\\text{arg top-}k} ~ s(d, q).\n\\end{equation}\n\nIn this paper, we consider the bi-encoder model of dense retrieval, which splits the score computation in two parts: i) an \\textit{encoding} part that transforms queries $q$ and documents $d$ to $\\nu$-dimensional embedding vectors $\\mathbf{e}_q, \\mathbf{e}_d$ respectively ($\\mathbf{e}_q, \\mathbf{e}_d \\in \\mathbb{R}^\\nu$), and ii) a \\textit{comparison} part that derives the score $s$ from the embeddings. 
This is formalized as \n\\begin{equation}\n\\label{candidate_document}\n s = \\phi(\\mathbf{e}_q, \\mathbf{e}_d) = \\phi\\left(\\eta_q (q), \\eta_d (d)\\right)\n\\end{equation}\nwhere $\\eta_q, \\eta_d$ are encoding functions for queries and documents respectively, and $\\phi$ is a comparison mechanism \\cite{deepretrievalframework}. The above formulation is attractive because it contains the computational complexity to the encoding function $\\eta$, which can be pre-computed during indexing. In contrast, the comparison function $\\phi$ is executed at query time and is therefore chosen to be computationally lightweight; usually, the dot product or cosine similarity is chosen\\footnote{These are equivalent when the embeddings are L2-normalized.}. These choices cast the retrieval as a $k$ nearest-neighbor problem, which can be computed efficiently with popular approximation algorithms, e.g., based on locality sensitive hashing or hierarchical navigable small world graphs \\cite{aumuller2020ann}. \n\n\\subsection{Decentralized Setting}\n\\label{subsec:decentralized_setting}\n\nTo move to the decentralized setting, we consider a P2P network whose nodes maintain their own private document collections. The network is modeled as an \\textit{undirected} graph $\\mathcal{G}=(\\mathcal{V}, \\mathcal{E})$, where $\\mathcal{V}$ is the set of nodes and $\\mathcal{E}\\subseteq \\mathcal{V}\\times \\mathcal{V}$ their communication edges, while $\\mathcal{D}_u\\subseteq\\mathcal{D}$ represents the local documents of node $u$.\n\\par\nWhen nodes initiate queries, they first execute the retrieval operations of subsection \\ref{subsec:centralized_setting} over their local document collections, and then forward queries to their one-hop neighbors to retrieve more results. Farther nodes can be contacted by relaying the queries along nodes. Since contacting all nodes would induce non-scalable communication costs and delays, we allow the search to fail to find relevant documents, even if these could have been retrieved by centralized search engines. The goal of our analysis is to make clever forwarding decisions to achieve high search hit accuracy of relevant documents.\n\n\\section{Diffusion-based decentralized search}\n\\label{sec:proposal}\nOur decentralized scheme for search is a document-oriented solution where nodes maintain a summary of documents available from their neighbors. These summaries take the form of \\textit{node embedding} vectors, denoted by $\\mathbf{e}_u$, which are composed from the embeddings of both local and nearby documents. To generate the node embeddings, when new nodes enter the network or update their document collections, they compute \\textit{personalization vectors}, denoted by $\\mathbf{e}_u^{(0)}$, which characterize their local document collections (Subsection \\ref{subsec:personalization}). Subsequently, the nodes diffuse their personalization vectors to the network with an iterative and asynchronous diffusion algorithm based on PPR (Subsection \\ref{subsec:diffusion}). This algorithm converges to the node embedding vectors and also keeps track the embeddings of the one-hop neighbors for each node. 
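A minimal sketch of the state kept by each node under this scheme is the following (names and types are illustrative rather than an implementation; the personalization vector is the sum of the local document embeddings, as defined in Subsection \\ref{subsec:personalization}).
\\begin{verbatim}
from dataclasses import dataclass, field
import numpy as np

@dataclass
class PeerState:
    doc_embeddings: np.ndarray        # one row per local document
    neighbor_ids: list                # one-hop neighbors in the P2P graph
    e0: np.ndarray = None             # personalization vector
    e: np.ndarray = None              # diffused node embedding
    neighbor_e: dict = field(default_factory=dict)  # latest embeddings of neighbors

    def refresh(self):
        # recompute the personalization vector after the local collection changes;
        # the PPR diffusion then updates e and the copies held by the neighbors
        self.e0 = self.doc_embeddings.sum(axis=0)
\\end{verbatim}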
At query-time, the nodes can use their stored neighbor embeddings to forward queries towards promising next hops (Subsection \\ref{subsec:forwarding}).\n\n\\subsection{Node personalization}\n\\label{subsec:personalization}\n Ideally, for each node $u$, we would like to estimate the maximum score of all neighbors $v$, as in \\eqref{candidate_document}, without knowing their documents $\\mathcal{D}_v$. A simple way is to represent each node with the personalization vector $\\mathbf{e}_u^{(0)}$ that is the sum of the node's document embeddings. This has the attractive property that, due to the linearity of the interaction function, the dot product of the query with the neighbor embedding yields the total relevance of the neighbor's documents:\n\\begin{equation}\n \\mathbf{e}_q \\cdot \\mathbf{e}_v^{(0)} = \\mathbf{e}_q \\cdot \\sum_{d \\in \\mathcal{D}_v} \\mathbf{e}_d= \\sum_{d \\in \\mathcal{D}_v} \\mathbf{e}_q \\cdot \\mathbf{e}_d.\n\\end{equation}\nThis approach tends to score higher nodes with a larger number of documents. This is desirable in general although it runs the risk of prioritizing nodes with many irrelevant documents over nodes with a few but relevant documents. \n\n\\subsection{Diffusion of embeddings}\n\\label{subsec:diffusion}\nAfter computing their personalization vectors, the nodes transmit them to their neighbors. Instead of traditional $n$-hop advertising, we consider a diffusion scheme based on graph signal processing. A typical diffusion has the form:\n\\begin{equation}\n\\label{diffusion}\n\\mathbf{E} = \\mathbf{H} \\mathbf{E}^{(0)} ~ \\Rightarrow ~\n \\mathbf{e}_u = \\sum_{v \\in V} h_{u v} \\mathbf{e}_v^{(0)}\n\\end{equation}\nwhere $\\mathbf{E}^{(0)}$, $\\mathbf{E}$ are the initial and diffused embeddings in matrix form, $\\mathbf{H}$ is the weight matrix or impulse response of diffusion, whose elements $h_{u v}$ represent the impact of node $v$ to $u$. While the diffusion weights $\\mathbf{H}$ could be learned with a machine learning algorithm, the complexity of learning would scale with $\\mathcal{O}(N^2)$, which would be intractable for large graphs. Therefore, we have chosen the PPR algorithm for calculating the weights, which is a popular approach in the literature \\cite{klicpera2018predict}, and can be implemented in a decentralized and asynchronous way \\cite{krasanakis2021p2pgnn_asynchronousppr}, which is a highly desirable feature. \n\nIn PPR, we associate $h_{u v}$ with the probability to reach $v$ via a random walk that starts from $u$. If the random walk were allowed to progress, as in the traditional PageRank, it would forget its origin $u$ and converge to a probability characterizing only $v$. To avoid this, in PPR, we force the walker to teleport back to node $u$ with probability $a$. 
Thus, $h_{u v}$ is associated with the probability to reach node $v$ from $u$ with a short walk of average length $1\/a$.\n\nFormally, denoting by $\\boldsymbol{\\pi} [v]$ the probability of arriving at node $v$, and by $\\boldsymbol{\\delta}_u[v]$ the one-hot vector at node $u$, i.e., $\\boldsymbol{\\delta}_u[u]=1$ and $\\boldsymbol{\\delta}_u[v]=0$ for $v \\neq u$, we have\n\\begin{equation}\n\\label{ppr_recursive}\n \\boldsymbol{\\pi} [v] = (1-a) \\mathbf{A} \\boldsymbol{\\pi} [v] + a \\boldsymbol{\\delta}_u[v] \\Rightarrow \\boldsymbol{\\pi} [v] = a (\\mathbf{I}-(1-a)\\mathbf{A})^{-1} \\boldsymbol{\\delta}_u[v]\n\\end{equation}\nwhere $\\mathbf{I}$ is the identity matrix and $\\mathbf{A}$ the transition matrix of the Markov chain, based on a suitable normalization of the adjacency matrix of $\\mathcal{G}$ or external weights. Considering the definition of $\\boldsymbol{\\delta}_u[v]$, it is clear that the columns of $a (\\mathbf{I}-(1-a)\\mathbf{A})^{-1}$ correspond to the desired probabilities for different origins $u$. The diffused embeddings of \\eqref{diffusion} are thus given by:\n\\begin{equation}\n\\label{ppr_solution}\n\\mathbf{E} = a (\\mathbf{I}-(1-a)\\mathbf{A})^{-1} \\mathbf{E}^{(0)}\n\\end{equation}\nWhile the embeddings are propagated to the whole graph, the \\textit{effective} range of the diffusion is tuned by the parameter $a$. \n\nFor the decentralized and asynchronous implementation, we first express \\eqref{ppr_solution} iteratively as:\n\\begin{equation}\n\\label{ppr_iterative}\n \\mathbf{E}^{(t)} = (1-a) \\mathbf{A} \\mathbf{E}^{(t-1)} + a \\mathbf{E}^{(0)},\n\\end{equation}\nwhich converges to \\eqref{ppr_solution} but is synchronous. Subsequently, we make the iteration \\textit{asynchronous} by letting node pairs exchange and update embeddings. As proven in \\cite{krasanakis2021p2pgnn_asynchronousppr}, if the update intervals are not arbitrarily long, the embeddings converge to \\eqref{ppr_recursive} in distribution, which is a good approximation of the centralized scheme.\n\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=2.5in]{node.pdf}\n\\caption{Node operations when a query is received.}\n\\label{img:node_operations}\n\\end{figure}\n\n\\subsection{Forwarding operations}\n\\label{subsec:forwarding}\nNode embeddings are used at query-time to guide search towards promising nodes, essentially performing a biased random walk. Queries keep track of the $k$ most relevant documents they have encountered along with their relevance score\\footnote{If documents are too large, the message can track the IP addresses of the source nodes or content identifiers if available, e.g., IPFS content IDs.}. Since visiting all nodes in the network is impractical, we impose a maximum number of hops with a time-to-live (TTL) field in the query message, which helps prevent queries from circulating in the network indefinitely. Due to the TTL limitation, we prioritize unvisited nodes for forwarding. To this end, the nodes keep track of the neighbors from which they have received and to which they have sent messages. We purposefully reject the alternative (and slightly more efficient) solution of recording the visited nodes in the query message in order to protect the privacy of node connections. In our solution, nodes relay the queries recursively, i.e., from node to node, and when their TTL expires, a response message is returned to the querying nodes via backtracking.\n\nFig. \\ref{img:node_operations} illustrates the node operations when a new query arrives. 
As described in subsection \\ref{subsec:decentralized_setting}, nodes first evaluate the query on their local documents according to the retrieval operations of subsection \\ref{subsec:centralized_setting}. Afterwards, they decrement the TTL field of the query message by 1 and check if the message is still alive. If the TTL has expired, the nodes discard the query and send a query response message to the reverse path, otherwise, they commence the forwarding procedure: nodes first determine a set of candidate next hops from their neighbors, which excludes previously visited nodes remembered by the nodes\\footnote{If no neighbors remain after this step, nodes consider all their neighbors as candidates as we do not want to waste opportunities for forwarding considering the TTL limitation.}. Nodes then match via dot product the embeddings of the candidate next hops with the query embedding, and select a few neighbors with the highest score. When a single neighbor is selected, the outcome is a simple random walk, otherwise, multiple walks are executed in parallel.\n\n\\section{Experimental Evaluation}\\label{sec:experiments}\n\\label{sec:setup}\nWe evaluate a retrieval operation in a social P2P network based on two datasets: a social network graph and a corpus of pre-trained embeddings (Subsection~\\ref{sec:datasets}). Through simulation (Subsection~\\ref{sec:simulation}), we investigate the scalability of our scheme with the number of stored documents in the network, $M$, in terms of the hit accuracy (Subsection~\\ref{sec:acc}) and the average number of hops of successful queries (Subsection~\\ref{sec:hop}).\n\n\\subsection{Datasets}\n\\label{sec:datasets}\nExperiments are conducted on the Facebook social circles graph\\cite{leskovec2012learning_fbdataset} hosted by the SNAP project\\footnote{\\url{http:\/\/snap.stanford.edu}}. This is an undirected graph of 4,039 Facebook users (nodes) and their 88,234 friend relations (edges). We consider this graph representative of P2P networks built on top of social relations, which are expected to resemble friend relations of centralized social networks.\n\nDocuments and queries are represented using 300-d word embeddings, trained by the Glove model on Wikipedia articles \\cite{pennington2014glove_glove} and distributed by the GenSim library\\footnote{\\url{https:\/\/radimrehurek.com\/gensim\/}}. While Glove embeddings are not ideal for retrieval, they are good predictors of similarity with the cosine similarity metric. As mentioned in Section \\ref{sec:setup}, the nearest-neighbor search mechanism is independent from the embedding method, which allows us to study search in isolation. In fact, queries and documents can refer to any type of content, even multimedia, provided relevance is a linear function of their embeddings.\n\n\\begin{figure}[!t]\n \\centering\n\\fbox{\\parbox{0.8\\linewidth}{\\begin{algorithmic}[1]\n \\STATE Generate documents and queries from Glove\n \\STATE Distribute $N$ documents uniformly over $\\mathcal{G}$\n \\STATE Compute node embeddings\n \\REPEAT\n \\STATE Diffuse node embeddings asynchronously\n \\UNTIL embeddings converge\n \\STATE Distribute queries\n \\REPEAT\n \\STATE Forward queries\n \\UNTIL all queries expire\n\\end{algorithmic}}}\n\\caption{Pseudo-code for the simulation of the decentralized search setting.}\n\\label{fig:pseudocode}\n\\end{figure}\n\n\\subsection{Simulation setup}\n\\label{sec:simulation}\nFig. \\ref{fig:pseudocode} presents our simulation in pseudo-code. 
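For concreteness, the main steps of this pseudo-code can be sketched as follows. The snippet is a simplified, synchronous stand-in rather than the simulator used in the experiments: it assumes NumPy and NetworkX, uses random embeddings and an Erd\\H{o}s--R\\'enyi graph in place of the Glove and Facebook data, replaces the asynchronous diffusion by the synchronous iteration \\eqref{ppr_iterative} with one common normalization choice for $\\mathbf{A}$, and forwards each query along a single greedy walk.
\\begin{verbatim}
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
G = nx.erdos_renyi_graph(200, 0.05, seed=1)
N, dim, alpha, ttl = G.number_of_nodes(), 32, 0.5, 50

docs = rng.normal(size=(500, dim))                 # stand-ins for document embeddings
owner = rng.integers(0, N, size=len(docs))         # uniform placement over nodes

E0 = np.zeros((N, dim))                            # personalization vectors
for d, u in enumerate(owner):
    E0[u] += docs[d]

A = nx.to_numpy_array(G)                           # row-stochastic transition matrix
A = A / np.maximum(A.sum(axis=1, keepdims=True), 1)
E = E0.copy()
for _ in range(200):                               # synchronous PPR iteration
    E = (1 - alpha) * (A @ E) + alpha * E0

def search(q, start):
    best, best_score, visited, u = None, -np.inf, {start}, start
    for _ in range(ttl):
        for d in np.flatnonzero(owner == u):       # evaluate the query locally
            s = docs[d] @ q
            if s > best_score:
                best, best_score = d, s
        cand = [v for v in G.neighbors(u) if v not in visited] or list(G.neighbors(u))
        if not cand:
            break
        u = max(cand, key=lambda v: E[v] @ q)      # forward to the best-matching neighbor
        visited.add(u)
    return best, best_score

print(search(rng.normal(size=dim), start=0))
\\end{verbatim}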
We first generate queries and documents from the Glove dataset using 1000 random words as queries and their nearest neighbors as gold documents, provided that their cosine similarity is over 0.6 and the two sets do not overlap. The remaining words are treated as a pool of irrelevant documents. We further distribute the documents over the graph's nodes uniformly and compute the node embeddings. This is followed by a warm up period, in which we diffuse the node embeddings over the network with the asynchronous PPR algorithm. The algorithm runs until the embeddings converge.\n\nWe then proceed with evaluating the top-$1$ document retrieval performance over sampled queries, whose number depends on the simulation scenario. In each iteration, queries are distributed over the network and are forwarded independently. For simplicity, each query performs a simple random walk, which is the most challenging case and can be easily extended to parallel walks. In the future, we plan to investigate parallel walks more thoroughly along with time-evolving conditions and the top-$k$ performance. More realistic document distributions are also worthwhile; in fact, they are expected to aid diffusion, since they naturally exhibit spatial correlation.\n\n\\subsection{Hit Accuracy}\n\\label{sec:acc}\nIn this series of experiments, we evaluate the accuracy of our algorithm over the number of stored documents in the network, $M$, and the teleport probability of PPR, $\\alpha$, which determines the average diffusion radius. For $M$, we select $10$, $100$, $1000$, and $10000$ documents to investigate 4 orders of magnitude. In each iteration, we store one gold and $M$-1 irrelevant documents in the network, and sample multiple querying nodes, one from each radius away from the location of the gold document. At the end of simulation, the accuracy is computed as the percentage of queries that retrieved the gold document within a TTL of 50 hops. The simulation is repeated for three different values of $\\alpha$, $0.1$, $0.5$, and $0.9$, as examples of heavy, moderate, and light diffusion respectively. The results are depicted in Fig. \\ref{fig:acc_analysis}.\n\nFigs. \\ref{fig:10docs} and \\ref{fig:100docs} show that our algorithm excels at finding documents within 2 hops away, provided that there are few documents in the network. In contrast, the accuracy starts to decline at 3 hops and deteriorates significantly farther away. Surprisingly, heavy diffusion does not aid accuracy, as more documents are discovered when the teleport probability is 0.9. The results change radically with more stored documents. In Figs. \\ref{fig:1000docs} and \\ref{fig:10000docs}, we see that the accuracy remains high mainly for documents in neighboring nodes and the impact of $\\alpha$ is more varied. In this case, heavier diffusion is better at small distances although $a=0.9$ appears beneficial at 3 and 4 hops when the stored documents are 1000. With 10000 documents, the performance deteriorates considerably.\n\nThe above show that the PPR diffusion is useful for local neighborhood search but its accuracy declines with the number of stored documents. This is attributed to the loss of information for individual documents when many embeddings are summed, either during summarization or diffusion. The behavior with $\\alpha$ can also be explained by the following trade-off: heavy diffusion (low $\\alpha$) announces documents within a wider range but adds more noise due to the summation of the embeddings. 
In contrast, light diffusion (high $\\alpha$) adds less noise but may fail to notify nearby nodes. Considering this trade-off, when few documents are stored in the network (Figs. \\ref{fig:10docs} and \\ref{fig:100docs}), it is preferable to leave fewer and cleaner hints as the random walk will eventually find the correct document. In contrast, with more documents (Figs. \\ref{fig:1000docs} and \\ref{fig:10000docs}), there is already noise in the network and light diffusion may hinder the random walk from finding documents even 1 hop away. \n\n\\subsection{Hop Count Analysis}\n\\label{sec:hop}\nIn this experiment, we compute the average hop count for successful queries until the gold document is found. As in Section \\ref{sec:acc}, the queries are considered successful when they retrieve the correct document within 50 hops. We note that, since the queries do not know when they find the gold document and must complete their TTL, the average hop count does not indicate bandwidth consumption but can guide the choice of TTL. For the setup, we execute 500 iterations in each of which we distribute 10 queries uniformly in the network, for a total of 5000 samples. We also choose the value 0.5 for the teleport probability $\\alpha$, scale the number of documents from 10 to 10000, and randomize the document distribution at each iteration, as in the accuracy experiment. Our results are summarized in Table \\ref{hop_analysis}.\n\nTable \\ref{hop_analysis} shows that fewer queries are successful as the number of stored documents increases, consistent with the accuracy results of Section \\ref{sec:acc}. Furthermore, with more documents, longer walks are required as both the median and the mean hops to reach the gold documents increase. The discrepancy between the median and the mean hops implies a skewed distribution, i.e., a few walks succeed after a large number of hops and drive the mean higher, which is corroborated by the high standard deviation. Combined with the results of the accuracy experiment, the above results show that, even though documents are found predominantly by nearby nodes, some queries need to circulate for additional hops until they succeed. It is encouraging, though, that success is still possible with a high number of documents, such as 10000.\n\n\\begin{table}[!t]\n\\renewcommand{\\arraystretch}{1.3}\n\\caption{Average Hop Count}\n\\label{hop_analysis}\n\\centering\n\\begin{tabular}{c|c|c|c|c}\n\\hline\n$M$ documents & success rate & median hops & mean hops & std hops \\\\\n\\hline\n10 & 1905 \/ 5000 & 3 & 7.62 & 10.83 \\\\\n100 & 1265 \/ 5000 & 4 & 11.21 & 13.37 \\\\\n1000 & 1054 \/ 5000 & 9 & 15.26 & 14.55 \\\\\n10000 & 877 \/ 5000 & 9 & 14.31 & 13.36 \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{figure*}[!t]\n\\centering\n\\subfloat[]{\\includegraphics[width=0.41\\linewidth]{fb_glove_10docs.pdf}\n\\label{fig:10docs}}\n\\subfloat[]{\\includegraphics[width=0.41\\linewidth]{fb_glove_100docs.pdf}\n\\label{fig:100docs}}\n\n\\subfloat[]{\\includegraphics[width=0.41\\linewidth]{fb_glove_1000docs.pdf}\n\\label{fig:1000docs}}\n\\subfloat[]{\\includegraphics[width=0.41\\linewidth]{fb_glove_10000docs.pdf}\n\\label{fig:10000docs}}\n\\caption{Accuracy analysis for a) 10, b) 100, c) 1000, and d) 10000 documents in the network.}\n\\label{fig:acc_analysis}\n\\end{figure*}\n\n\\section{Conclusions}\n\\label{sec:conclusions}\nAs decentralization is becoming an increasingly important feature of the future Internet, new algorithms are needed for effective decentralized search. 
In this paper, we revisit this long-standing problem from a combined embedding and graph diffusion perspective. Specifically, considering a P2P network with nodes of only local knowledge over their document collections, we apply the PPR algorithm to diffuse summarized information about the documents in the network. Our results show that this diffusion can be beneficial for local neighborhood search but further enhancements are needed to improve the performance for global search. Our current line of research is to exploit correlations in the document distribution and derive more sophisticated aggregation methods that encode more information about the grouped documents. \n\n\\section*{Acknowledgment}\nThis research was supported by the EU H2020 projects AI4Media (Grant Agreement 951911), MediaVerse (GA 957252) and HELIOS (GA 825585). \nThe authors want to thank Dr. Ioannis Sarafis for his productive feedback on the decentralized search scheme.\n\n\\balance\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction and Main Result}\n The enumeration of maps has a long history, in which the techniques and tools became more and more efficient and the classes of maps more and more sophisticated: In his \\textit{Census of Planar Maps}, William Tutte achieved groundbreaking progress in the 1960's \\cite{Tuttbij}. Bender and Canfield then left the realm of planar maps in the 1980's and also took an embedding into higher-genus surfaces into consideration \\cite{Bender}. In the 2000's, the branch of mathematical physics established a powerful and efficient universal procedure to reach all topological sectors in a recursive way: Topological recursion (TR) of Chekhov, Eynard and Orantin \\cite{Eynard:2007kz,Chekhov:2006vd} built a bridge between enumerative and complex geometry (and, based on the work \\cite{Kontsevich:1992ti}, bridges to intersection theory and integrable hierachies, which we will neglect here) and thus covered numerous, seemingly disconnected areas of mathematical fields, by one universal recursion procedure.\n\nTopological recursion possesses the initial data $(\\Sigma,x,y,B)$, where $x:\\Sigma\\to \\Sigma_0$ is a ramified covering of Riemann surfaces, $\\omega_{0,1}=y\\, dx$ is a meromorphic differential 1-form on $\\Sigma$ regular at the ramification points and $\\omega_{0,2}=B$ a symmetric bilinear differential form on $\\Sigma\\times \\Sigma$ with double pole on the diagonal and no residue. From this initial data, TR computes recursively in the negative Euler characteristic $-\\chi=2g+n-2$ an infinite sequence of symmetric meromorphic $n$-forms $\\omega_{g,n}$ on $\\Sigma^n$ with poles only at the ramification points for $-\\chi>0$. The precise formula and more details are given in Ch. \\ref{ch:proof}. 
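For orientation, the order in which the recursion produces the stable correlators can be listed explicitly; the toy snippet below (illustrative only) groups the pairs $(g,n)$ by their common value of $-\\chi=2g+n-2$.
\\begin{verbatim}
# stable topologies (g, n), grouped by -chi = 2g + n - 2
for chi in range(1, 5):
    level = [(g, n) for g in range(chi + 1) for n in range(1, chi + 3)
             if 2 * g + n - 2 == chi]
    print(f"-chi = {chi}: {level}")
# -chi = 1: [(0, 3), (1, 1)]
# -chi = 2: [(0, 4), (1, 2)]
# -chi = 3: [(0, 5), (1, 3), (2, 1)]
# -chi = 4: [(0, 6), (1, 4), (2, 2)]
\\end{verbatim}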
For specific choices of the initial data $(\\Sigma,x,y,B)$, the meromorphic $n$-forms encode specific enumerative problems.\n\nThe prime example of this framework was the recursive computation of generating functions counting objects known in the literature as \\textit{ordinary maps} (a very readable derivation can be found in \\cite{Eynard:2016yaa}):\n\\begin{theorem}[\\cite{Eynard:2016yaa}]\n\\label{th:eyn}\nThe spectral curve $(\\mathbb{P}^1,x_{ord},y_{ord}, \\frac{dz_1\\, dz_2}{(z_1-z_2)^2})$ with\n\\begin{align*}\n\tx_{ord}(z)=\\gamma \\bigg ( z+ \\frac{1}{z} \\bigg ) \\qquad y_{ord}(z)= \\sum_{k=0}^{d-1} u_{2k+1}z^{2k+1}\n\\end{align*}\nwhere\n\\begin{align*}\n\t\\gamma^2=1+\\sum_{k\\geq 1} t_{2k} \\binom{2k-1}{k} \\gamma^{2k}, \\qquad u_{2k+1}=\\gamma \\bigg(\\delta_{k,0}-\\sum_{j\\geq k+1}t_{2j}\\binom{2j-1}{j+k}\\gamma^{2j-2}\\bigg)\n\\end{align*}\ncomputes via TR (see formula \\eqref{BTR-intro}) generating functions for the enumeration of ordinary maps with $n$ marked faces of even boundary lengths. The faces have even degrees up to $2d$, where a face of degree $2k$ is weighted by $t_{2k}$.\n\\end{theorem}\\noindent\nIn general, the theorem also includes faces of odd degree, but for later purposes we state it in this form.\n\nSeveral more classes of maps, e.g. subsets of the ordinary maps, were then discovered to be governed by TR, such as \\textit{ciliated} and \\textit{fully simple maps} \\cite{Borot:2017agy,Borot:2021eif}.\n\n In this letter, we will focus on another subset of maps, the \\textit{bipartite maps}: those ordinary maps of even face degrees whose vertices can be coloured black and white such that no monochromatic edge occurs. A bipartite map is called \\textit{rooted} if one edge is distinguished and oriented. This \\textit{rooted edge} (also called marked edge) conventionally has its origin in a white vertex (the \\textit{root vertex}). Rooting an edge creates a boundary of a certain even length $2l_k$, following the face to the right of the rooted edge. Several edges can be rooted, provided that no two roots correspond to the same boundary. Bipartite maps have already appeared in the context of TR, namely in \\cite{Chapuy2016}, in which the authors, motivated by TR, established a recursive formula sharing many characteristics with TR\\footnote{A more formal, but less illustrative, definition of bipartite maps can be found in \\cite{Chapuy2016}.}. However, their work rather aims at proving rationality statements about bipartite maps and is thus written in a combinatorialist's language. All these statements are a direct consequence of TR. Their recursion and its proof are mainly built on ideas of TR, but no spectral curve was provided. The relation of their work to complex geometry will therefore be established in the following Chapter \\ref{ch:proof}. We will deduce:\n\\begin{theorem}\\label{th:main}\nThe spectral curve $(\\mathbb{P}^1,x_{bip},y_{bip},\\frac{dz_1\\,dz_2}{(z_1-z_2)^2})$ with \n\\begin{align*}\n\tx_{bip}(z)=\\gamma^2\\bigg ( z+ \\frac{1}{z} \\bigg ) + 2\\gamma^2 \\qquad y_{bip}(z)=\\frac{\\sum_{k=0}^{d-1} u_{2k+1}z^{k+1}}{\\gamma (1+z)}\n\\end{align*}\ncomputes via TR generating functions for the enumeration of bipartite maps with $n$ marked faces (or rooted edges) of even boundary lengths. The faces have even degrees up to $2d$, where a face of degree $2k$ is weighted by $t_{2k}$. We have the following relation to Thm. 
\\ref{th:eyn}:\n\\begin{align*}\nx_{bip}(z^2)=x_{ord}(z)^2 \\qquad \\quad y_{bip}(z^2) = \\frac{y_{ord}(z)}{ x_{ord}(z)}\n\\end{align*}\n\\end{theorem}\nIn order to avoid misunderstandings, we would like to mention that an unconventional definition of bipartite maps, deviating from the one in this paper, is given in \\cite{Eynard:2016yaa}; it coincides with ours only for genus zero and one boundary. \n\nGiven these two spectral curves for ordinary and bipartite maps, the machinery of TR gives rise to generating functions as follows: Let $\\tilde{\\mathcal{T}}^{(g)}_{2l_1,...,2l_n}$ denote the generating function of bipartite maps with a natural embedding into a genus-$g$ surface with $n$ boundaries of lengths $2l_1,...,2l_n$ ($n$-fold rooted bipartite maps) and, in the same manner, $\\mathcal{T}^{(g)}_{2l_1,...,2l_n}$ for ordinary maps with faces of even degree. Note that in particular $2^{n-1}\\tilde{\\mathcal{T}}^{(0)}_{2l_1,...,2l_n}=\\mathcal{T}^{(0)}_{2l_1,...,2l_n}$ holds for genus $g=0$, but not for $g>0$, where only a small subset of the ordinary maps is still bipartite. The prefactor $2^{n-1}$ has an easy combinatorial explanation: As described earlier, the \\textit{root vertex} is by convention white, so the black-white colouring of the vertices is completely determined by fixing a root. Ignoring the colouring, as is done for ordinary maps, each further boundary admits twice the number of labellings. Inductively, if $n$ faces are marked, this gives rise to $2^{n-1}$ distinct ordinary maps.\n\n Define the \\textit{correlators} $W$ and $\\tilde W$ as \n \\begin{align}\n\\label{resolv}\n W_n^{(g)}(x_{ord,1},...,x_{ord,n}) = \\sum_{l_1,...,l_n=1}^\\infty \\frac{\\mathcal{T}^{(g)}_{2l_1,...,2l_n}}{x_{ord,1}^{2l_1+1}...x_{ord,n}^{2l_n+1}} \\\\\n \\tilde {W}_n^{(g)}(x_{bip,1},...,x_{bip,n}) = \\sum_{l_1,...,l_n=1}^\\infty \\frac{\\tilde{\\mathcal{T}}^{(g)}_{2l_1,...,2l_n}}{x_{bip,1}^{l_1+1}...x_{bip,n}^{l_n+1}}\n\\end{align}\nfrom which the generating functions can be read off by a simple residue operation, e.g. for bipartite maps:\n\\begin{align*}\n\\tilde{\\mathcal{T}}^{(g)}_{2l_1,...,2l_n} = (-1)^n \\Res_{x_{bip,1},...,x_{bip,n}\\to \\infty} x_{bip,1}^{l_1}\\cdot ...\\cdot x_{bip,n}^{l_n} \\tilde {W}_n^{(g)}(x_{bip,1},...,x_{bip,n})dx_{bip,1}...dx_{bip,n}\n\\end{align*}\nThe crucial connection to the infinite sequence of meromorphic $n$-forms $\\omega_{g,n}$ generated by TR is the following identification for $2g+n-2>0$:\n\\begin{align}\n\t\\omega_{g,n}(z_1,...,z_n)=\\tilde{W}^{(g)}_n(x_{bip}(z_1),...,x_{bip}(z_n))dx_{bip}(z_1)...dx_{bip}(z_n).\n\\end{align}\nFor the unstable topologies $2g+n-2\\leq 0$, the situation is a bit more subtle.\n\nFrom Thm. \\ref{th:main}, we deduce the following equivalent representation of the generating functions of bipartite maps, building the bridge to TR:\n\\begin{corollary}\n\\label{cor1}\nLet $\\omega_{g,n}$ be the correlators of TR generated by \\eqref{BTR-intro} with $(x_{bip},y_{bip})$ given by Theorem \\ref{th:main}. Then $\\tilde{\\mathcal{T}}^{(g)}_{2l_1,...,2l_n}$ can be obtained as follows:\n\\begin{align*}\n\\tilde{\\mathcal{T}}^{(g)}_{2l_1,...,2l_n} = (-1)^n \\Res_{z_1,...,z_n \\to \\infty} x_{bip}(z_1)^{l_1} \\cdot ... \\cdot x_{bip}(z_n)^{l_n} \\omega_{g,n}(z_1,...,z_n)\n\\end{align*}\n\\end{corollary}\nAnalogously, generating functions for ordinary maps are obtained from the spectral curve of Theorem \\ref{th:eyn} (see \\cite{Eynard:2016yaa} for more details). This spectral curve, together with the recursion formula of \\cite[Thm. 
3.9]{Chapuy2016}, will be the basis for the proof of Theorem \\ref{th:main} by direct identification.\n\n\\section*{Acknowledgements}\nWe thank Guillaume Chapuy and Wenjie Fang for helpful discussions. JB is supported\\footnote{``Funded by\n the Deutsche Forschungsgemeinschaft (DFG, German Research\n Foundation) -- Project-ID 427320536 -- SFB 1442, as well as under\n Germany's Excellence Strategy EXC 2044 390685587, Mathematics\n M\\\"unster: Dynamics -- Geometry -- Structure.\"} by the Cluster of\nExcellence \\emph{Mathematics M\\\"unster}. He would like to thank the University of Oxford for its hospitality. AH is supported by\nthe Walter-Benjamin fellowship\\footnote{``Funded by\n the Deutsche Forschungsgemeinschaft (DFG, German Research\n Foundation) -- Project-ID 465029630.\"}.\n \n\\section{Proof and Discussion}\n\\label{ch:proof}\n\\subsection{Reminder of previous results}\nFirst, we briefly recapitulate the procedure of topological recursion. Starting with the initial data, the spectral curve $(\\Sigma,x,y,B)$, TR constructs recursively in $2g+n-2$ an infinite sequence of meromorphic $n$-forms $\\omega_{g,n}$, starting with\n \\begin{align*}\n\\omega_{0,1}(z) = y(z)\\, dx (z) \\qquad \\omega_{0,2}(z_1,z_2) = B(z_1,z_2),\n\\end{align*}\n via the following residue formula:\n\\begin{align}\n& \\omega_{g,n+1}(I,z)\n \\label{BTR-intro}\n \\\\\n & =\\sum_{\\beta_i}\n \\Res\\displaylimits_{q\\to \\beta_i}\n K_i(z,q)\\bigg(\n \\omega_{g-1,n+2}(I, q,\\sigma_i(q))\n +\\hspace*{-1cm} \\sum_{\\substack{g_1+g_2=g\\\\ I_1\\uplus I_2=I\\\\\n (g_1,I_1)\\neq (0,\\emptyset)\\neq (g_2,I_2)}}\n \\hspace*{-1.1cm} \\omega_{g_1,|I_1|+1}(I_1,q)\n \\omega_{g_2,|I_2|+1}(I_2,\\sigma_i(q))\\!\\bigg).\n\\end{align}\nHere $I=\\{z_1,\\dots,z_n\\}$ is a collection of $n$ variables $z_j$, and the sum is over the ramification points\n$\\beta_i$ of $x$ defined by $dx(\\beta_i)=0$. The kernel $K_i(z,q)$ is\ndefined in the vicinity of $\\beta_i$ by\n$K_i(z,q)=\\frac{\\frac{1}{2}\\int^{q}_{\\sigma_i(q)}\n B(z,q')}{\\omega_{0,1}(q)-\\omega_{0,1}(\\sigma_i(q))}$, where\n$\\sigma_i\\neq \\mathrm{id}$ is the local Galois involution satisfying\n$x(q)=x(\\sigma_i(q))$ near $\\beta_i$, with $\\beta_i$ as a fixed point.\n\nA TR-like formula to recursively generate correlators for bipartite maps was found in the aforementioned paper\\footnote{We adapt the notation of \\cite{Chapuy2016} to the TR literature by $p_k \\mapsto t_{2k}$, $z \\mapsto \\gamma^2$, $u \\mapsto z$, $F_g\\mapsto U_g$.}:\n\\begin{theorem}[\\cite{Chapuy2016}]\n\\label{th:chap}\nLet $x(z)= \\frac{z}{(1+z\\gamma^2)^2}$. The correlator function $U_g(x(z))= \\sum_{l=1}^\\infty \\tilde{\\mathcal{T}}^{(g)}_{2l} x^l$, $g>1$, can be obtained recursively in the following way:\n\\begin{align*}\nU_g(x(z)) = \\frac{1}{P(z)} \\Res_{q \\to \\pm \\frac{1}{\\gamma^2}} \\frac{P(q)}{z-q} \\frac{x(q)}{Y(q)} \\bigg (U_{g-1}^{(2)}(q)+ \\sum_{\\substack{g_1+g_2=g\\\\g_i>0}} U_{g_1}(q)U_{g_2}(q) \\bigg ) \n\\end{align*}\nwith $P(q)=\\frac{1-\\gamma^2 q}{1+ \\gamma^2 q}$, the kernel $Y(q)$ given below, and $U^{(2)}_g=\\sum_{l_1,l_2=1}^\\infty \\tilde{\\mathcal{T}}^{(g)}_{2l_1,2l_2} x^{l_1+l_2}$.\n\\end{theorem}\n\n\\subsection{Proof of the spectral curve}\nThe heuristic deduction of $(x_{bip},y_{bip})$ that finally turns Thm. \\ref{th:chap} into TR works as follows: \n\\begin{itemize}\n \\item $x_{bip}$: The work of Chapuy and Fang mainly relies on two important variable transformations. 
The first is the definition of $\\gamma^2$, arising already for ordinary maps and in earlier works of Bender and Canfield \\cite{Bender}. The second, $x(z)= \\frac{z}{(1+z\\gamma^2)^2}$, will determine $x_{bip}$. Thm. \\ref{th:chap} creates generating functions as a series in positive powers of $x$. Sending $z \\to \\frac{z}{\\gamma^2}$ and then taking the reciprocal of $x$ gives the correct ramified covering $x_{bip}$. We confirm this with the relation $x_{bip}(z^2)=x_{ord}^2(z)$ together with a comparison of the correlators $W$ and $\\tilde W$, up to a global factor of $\\frac{1}{x_{bip}}$ on which we comment below; this factor becomes decisive for the geometry of the spectral curve.\n\\item $y_{bip}$: Analogously to ordinary maps, the expression $y_{bip}(z)-y_{bip}(\\sigma(z))$ can be directly read off from the kernel representation of the Tutte equation for the disk. This kernel $Y(z)=y_{bip}(1\/z)-y_{bip}(z)$ is already given in Prop. 3.3 in \\cite{Chapuy2016} and shows up in the recursion formula of Thm. \\ref{th:chap} as well. After changing the variables as for $x_{bip}$, we can extract from \\cite[Chap. 5.1]{Chapuy2016} a suitable expression for $Y(z)\\cdot x_{bip}(z)$: \n\\begin{align*}\n&\\qquad \\quad Y(z)\\cdot x_{bip}(z) =\\\\\n& \\qquad \\quad \\gamma^2\\frac{(1+z)^2}{z} - (1+z)\\bigg [2- \\sum_{k=1}^d t_{2k} \\gamma^{2k} \\bigg (\\sum_{l=1}^{k-1}z^l \\binom{2k-1}{k+l} - \\sum_{l=-k}^0 z^l \\binom{2k-1}{k+l} \\bigg ) \\bigg ].\n\\end{align*}\nInserting the implicit equation of $\\gamma^2$ from Theorem \\ref{th:eyn} into the first term partially cancels the terms for $l=-1,0$. After some further lengthy but trivial algebra, the expression can be ordered in positive and negative powers of $z$, where the positive powers give \n\\begin{align}\\label{yeq}\n\ty_{bip}(z)\\cdot x_{bip}(z)= 1+z-(1+z)\\sum_{k=1}^{d-1} \\sum_{j\\geq k+1} t_{2j} \\binom{2j-1}{j+k} \\gamma^{2j} z^k.\n\\end{align}\nFinally, the definition of $u_{2k+1}$ in terms of $t_{2j}$ yields the desired identifications shown in Theorem \\ref{th:main}. \n\\end{itemize}\n\n\\subsection{Discussion of the result}\n\\label{ch:disc}\nOf particular interest is the somewhat unusual geometry of the spectral curve for bipartite maps. Its branch cut goes from $x_{bip}(1)=a=4\\gamma^2$ to $x_{bip}(-1)=b=0$. We naturally have the same Zhukovsky parametrisation as for $x_{ord}(z)$:\n\\begin{align}\\label{zhu}\nx_{bip}(z)=\\frac{a+b}{2}+\\frac{a-b}{4} \\bigg (z + \\frac{1}{z} \\bigg ) \\qquad \\mathrm{and} \\qquad \\sqrt{(x-a)(x-b)} =\\gamma^2\\bigg (z - \\frac{1}{z} \\bigg )\n\\end{align}\nHowever, the branch point at $0=x(\\beta_2)$ corresponding to the ramification point $\\beta_2=-1$ affects the pole structure of all $\\omega_{g,n}$: the highest degree of the poles is different for the two ramification points. Since $y_{bip}$ is irregular at the ramification point $\\beta_2=-1$ (whereas, as required, $\\omega_{0,1}=y \\,dx$ is still regular there), the maximum order of the poles $\\frac{1}{(z+1)^k}$ is reduced by two in comparison to the poles $\\frac{1}{(z-1)^l}$. However, this does not change the fact that one can generate symmetric $n$-forms from $(x_{bip},y_{bip})$. This is an interesting deviation from most spectral curves. 
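To make the last statements explicit, note that Thm. \\ref{th:main} gives\n\\begin{align*}\ndx_{bip}(z)=\\gamma^2\\bigg(1-\\frac{1}{z^2}\\bigg)dz=\\gamma^2\\frac{(z-1)(z+1)}{z^2}\\,dz,\n\\end{align*}\nso both ramification points $\\beta_1=1$ and $\\beta_2=-1$ are simple zeros of $dx_{bip}$, and the Galois involution $\\sigma(z)=\\frac{1}{z}$ is even global. At $\\beta_2=-1$, the denominator $\\gamma(1+z)$ of $y_{bip}$ produces a simple pole (provided its numerator does not vanish there), which is precisely compensated by the simple zero of $dx_{bip}$, so that $\\omega_{0,1}=y_{bip}\\,dx_{bip}$ indeed stays regular.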
Despite the uncommon pole distribution at the ramification points, the universal symmetry under the Galois involution naturally holds: \n \\begin{align*} \n\\frac{\\omega_{g,n}(z,z_I)}{dx(z)} + \\frac{\\omega_{g,n}(\\frac{1}{z},z_I)}{dx(\\frac{1}{z})}=0 \\quad \\forall 2g+n-2>0 \n\\end{align*}\n For illustrative purposes, we give $\\omega_{1,1}$ as an example and set $\\tilde{y}_{bip}(z) = \\frac{1}{\\gamma} \\sum_{k=0}^{d-1}u_{2k+1} z^{k+1}$:\n \\begin{align*} \n\\omega_{1,1}(z) =& \\frac{1}{16 \\gamma^2 (1+z)^2 \\tilde{y}'_{bip}(-1)}- \\frac{1}{16 \\gamma^2 (z-1)^4 y'_{bip}(1)}\\\\\n&-\\frac{1}{16 \\gamma^2 (z-1)^3 y'_{bip}(1)} + \\frac{3 y'_{bip}(1)+3 y''_{bip}(1)+ y'''_{bip}(1)}{96 \\gamma^2 (z-1)^2 y'^2_{bip}(1)}\n\\end{align*}\n\nFinally, we want to collect some interesting open questions: It is known \\cite{Borot:2017agy,Borot:2021eif} that the exchange of $x_{ord}$ and $y_{ord}$ gives rise to generating functions of fully simple maps. Does any sort of exchange of $x_{bip}$ and $y_{bip}$ have a comparably strong implication? Another question arises from the matrix models as realisations of those various types of maps. As known from the classical literature, bipartite maps arise from the complex matrix model, which has a structural equivalence to the Hermitian 2-matrix model \\cite{Eynard:2005}. This model is (for certain boundary structures) already solved by TR. What is the relation between the two distinct spectral curves? A final question concerns quadrangulations only. In \\cite{Branahl:2020yru} the quartic Kontsevich model (QKM) was shown to be solvable in terms of correlators $\\omega_{g,n}$ that follow an extension of TR. In this so-called blobbed topological recursion (BTR; the general framework was developed in \\cite{Borot:2015hna}), the $\\omega_{g,n}$ split into parts with poles at the ramification points (polar part) and with poles somewhere else (holomorphic part). In \\cite{Branahl:2021} it was stated that the pure (normalised) TR results contributing to $\\omega_{g,n}$ of the QKM generate bipartite (rooted) quadrangulations, whereas ordinary quadrangulations are generated when taking the complete BTR into account. Understanding this different approach to the partition functions of the complex and Hermitian 1-matrix model from the beginning will be an interesting challenge for the future.\n\n\\subsection{Example of quadrangulations}\\label{a}\n In order to underpin the correctness of our spectral curve, let us only allow for $t_4 \\neq 0$ and $n=l=1$, yielding (with $u_1=\\frac{1}{\\gamma}$ and $u_3=-t_4\\gamma^3$):\n \\begin{align*} \nx_{bip}(z) = \\gamma^2 \\bigg( z+\\frac{1}{z} \\bigg) + 2\\gamma^2, \\qquad y_{bip}(z)= \\frac{\\frac{z}{\\gamma^2}-t_4 \\gamma^2 z^2}{1+z}, \\qquad \\gamma^2 = \\frac{1-\\sqrt{1-12t_4}}{6t_4}.\n\\end{align*}\n The expansions in $t_4$, obtained by computer algebra, can be found in Tab. \\ref{tab1}. 
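For orientation, in this specialisation the implicit equation of Thm. \\ref{th:eyn} reduces to $\\gamma^2=1+3t_4\\gamma^4$, i.e.\n\\begin{align*}\n\\gamma^2 = 1+3t_4+18t_4^2+135t_4^3+\\mathcal{O}(t_4^4),\n\\end{align*}\nand one checks that the planar column $\\tilde{\\mathcal{T}}_2^{(0)}$ of Tab. \\ref{tab1} reproduces at order $t_4^m$ the classical count $\\frac{2\\cdot 3^m}{(m+1)(m+2)}\\binom{2m}{m}$ of rooted planar maps with $m$ edges.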
Bipartite rooted quadrangulations are particularly interesting, since Tutte's famous bijection \\cite{Tuttbij} relates them to rooted ordinary maps with faces of any (not only even) degree.\n\\begin{table}[h]\n\\centerline{\\begin{tabular}[h!b]{|c|c|c|c||c|c|c|}\n\\hline\nOrder &$\\tilde{\\mathcal{T}}_2^{(0)}$&$\\tilde{\\mathcal{T}}_2^{(1)}$&$\\tilde{\\mathcal{T}}_2^{(2)}$ &$ \\mathcal{T}_2^{(0)}$&$ \\mathcal{T}_2^{(1)}$&$ \\mathcal{T}_2^{(2)}$ \\\\\n\\hline\n$(t_4)^0$ & 1 & 0 & 0 & 1 & 0 & 0 \\\\\n\\hline\n$(t_4)^1$ & 2 & 0 & 0 & 2 & 1 & 0 \\\\\n\\hline\n$(t_4)^2$ & 9 & 1 & 0 & 9 & 15 & 45 \\\\\n\\hline\n$(t_4)^3$ & 54 & 20 & 0 & 54 & 198 & 2007 \\\\\n\\hline\n$(t_4)^4$ & 378 & 307 & 21 & 378 & 2511 & 56646 \\\\\n\\hline\n$(t_4)^5$ & 2916 & 4280 & 966 & 2916 & 31266 & 1290087 \\\\\n\\hline\n\\end{tabular}}\n\\hspace*{1ex}\n\\caption{These numbers are generated by Thm. \\ref{th:main} and Thm. \\ref{th:eyn} together with Cor. \\ref{cor1}; they coincide with \\cite{Bender} and, for $\\tilde{\\mathcal{T}}_{2}^{(g)}$, with OEIS no. A006300 ($g=1$) and no. A006301 ($g=2$).}\n\\label{tab1}\n\\end{table}\n\n\\bibliographystyle{halpha-abbrv}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}