This paper presents a predictive model for influenza-like illness (ILI) based on Twitter traffic. We gather data from Twitter using a set of keywords drawn from the influenza Wikipedia page, and perform feature selection over all words used in three years' worth of tweets, using real ILI data from the Greek CDC. We select a small set of words with high correlation to the ILI score, and train a regression model to predict ILI cases from the word features. We deploy this model in a streaming application and feed the resulting time series to FluHMM, an existing prediction model for the phases of the epidemic. We find that Twitter traffic offers a good source of information and can generate early warnings compared to the existing sentinel protocol, which relies on a set of associated physicians all over Greece.
arxiv:2111.10675
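A minimal sketch of the pipeline described in the abstract above: correlation-based selection of word features, followed by a regression from word frequencies to the ILI score. The function names and the plain least-squares fit are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def select_features(word_counts, ili_scores, k=5):
    """Rank word-frequency time series by absolute Pearson correlation
    with the ILI score and keep the indices of the top k."""
    corrs = []
    for j in range(word_counts.shape[1]):
        c = np.corrcoef(word_counts[:, j], ili_scores)[0, 1]
        corrs.append(abs(c))
    order = np.argsort(corrs)[::-1]
    return order[:k]

def fit_linear_model(X, y):
    """Least-squares regression from selected word features to ILI score."""
    X1 = np.column_stack([X, np.ones(len(X))])  # append intercept column
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coef

def predict(coef, X):
    """Apply the fitted linear model to new word-feature rows."""
    X1 = np.column_stack([X, np.ones(len(X))])
    return X1 @ coef
```

The predicted ILI series produced this way would then be streamed into a phase-detection model such as FluHMM.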
A continuum approach to quantum chromodynamics (QCD), based upon Schwinger-Dyson (SD) and Bethe-Salpeter (BS) equations, is employed to provide a tightly constrained prediction for the $\gamma^{*}\gamma^{*} \rightarrow \{\pi^0, \eta, \eta', \eta_c, \eta_b\}$ transition form factors (TFFs) and their corresponding pole contribution to the hadronic light-by-light (HLbL) piece of the anomalous magnetic moment of the muon ($a_\mu$). This work relies on a practical and well-tested quark-photon vertex ansatz approach to evaluate the TFFs for arbitrary space-like photon virtualities, in the impulse approximation. The numerical results are parametrized meticulously, ensuring a reliable evaluation of the HLbL contributions to $a_\mu$. We obtain: $a_{\mu}^{\pi^0-\textrm{pole}} = (6.14 \pm 0.21) \times 10^{-10}$, $a_{\mu}^{\eta-\textrm{pole}} = (1.47 \pm 0.19) \times 10^{-10}$, $a_{\mu}^{\eta'-\textrm{pole}} = (1.36 \pm 0.08) \times 10^{-10}$, yielding a total value of $a_{\mu}^{\pi^0+\eta+\eta'-\textrm{pole}} = (8.97 \pm 0.48) \times 10^{-10}$, compatible with contemporary determinations. Notably, we find that $a_{\mu}^{\eta_c+\eta_b-\textrm{pole}} \approx a_{\mu}^{\eta_c-\textrm{pole}} = (0.09 \pm 0.01) \times 10^{-10}$, which might not be negligible once percent precision in the computation of the light pseudoscalars is reached.
arxiv:1910.05960
We consider the scale-free adversarial multi-armed bandit (MAB) problem with unrestricted feedback delays. In contrast to the standard assumption that all losses are $[0,1]$-bounded, in our setting losses can fall in a general bounded interval $[-L, L]$, unknown to the agent beforehand. Furthermore, the feedback of each arm pull can experience arbitrary delays. We propose a novel approach named Scale-Free Delayed INF (SFD-INF) for this novel setting, which combines a recent "convex combination trick" with a novel doubling and skipping technique. We then present two instances of SFD-INF, each with carefully designed delay-adapted learning scales. The first one, SFD-TINF, uses a $\frac{1}{2}$-Tsallis entropy regularizer and can achieve $\widetilde{\mathcal{O}}(\sqrt{K(D+T)}L)$ regret when the losses are non-negative, where $K$ is the number of actions, $T$ is the number of steps, and $D$ is the total feedback delay. This bound nearly matches the $\Omega((\sqrt{KT}+\sqrt{D\log K})L)$ lower bound when regarding $K$ as a constant independent of $T$. The second one, SFD-LBINF, works for general scale-free losses and achieves a small-loss style adaptive regret bound $\widetilde{\mathcal{O}}(\sqrt{K\mathbb{E}[\tilde{\mathfrak{L}}_T^2]}+\sqrt{KDL})$, which falls back to the $\widetilde{\mathcal{O}}(\sqrt{K(D+T)}L)$ regret in the worst case and is thus more general than SFD-TINF, despite a more complicated analysis and several extra logarithmic dependencies. Moreover, both instances also outperform the existing algorithms for non-delayed (i.e., $D = 0$) scale-free adversarial MAB problems, which can be of independent interest.
arxiv:2110.13400
We present a new package, VEST (Vector Einstein Summation Tools), that performs abstract vector calculus computations in Mathematica. Through the use of index notation, VEST is able to reduce three-dimensional scalar and vector expressions of a very general type to a well-defined standard form. In addition, utilizing properties of the Levi-Civita symbol, the program can derive types of multi-term vector identities that are not recognized by reduction, subsequently applying these to simplify large expressions. In a companion paper (Burby et al., 2013), we employ VEST in the automation of the calculation of high-order Lagrangians for the single particle guiding center system in plasma physics, a computation which illustrates its ability to handle very large expressions. VEST has been designed to be simple and intuitive to use, both for basic checking of work and more involved computations.
arxiv:1309.2561
Background: Many genome-wide association studies have detected genomic regions associated with traits, yet understanding the functional causes of association often remains elusive. Utilizing systems approaches and focusing on intermediate molecular phenotypes might facilitate biological understanding. Results: The availability of exome sequencing of two populations of African-Americans and European-Americans from the Atherosclerosis Risk in Communities study allowed us to investigate the effects of annotated loss-of-function (LoF) mutations on 122 serum metabolites. To assess the findings, we built metabolomic causal networks for each population separately and utilized structural equation modeling. We then validated our findings with a set of independent samples. Using methods based on concepts of Mendelian randomization of genetic variants, we showed that some of the affected metabolites are risk predictors in the causal pathway of disease. For example, LoF mutations in the gene KIAA1755 were identified to elevate the levels of eicosapentaenoate (p-value = 5e-14), an essential fatty acid clinically identified to increase essential hypertension. We showed that this gene is in the pathway to triglycerides, where both triglycerides and essential hypertension are risk factors of metabolomic disorder and heart attack. We also identified that the gene CLDN17, harboring loss-of-function mutations, had pleiotropic actions on metabolites from amino acid and lipid pathways. Conclusion: Using systems biology approaches for the analysis of metabolomics and genetic data, we integrated several biological processes, which led to findings that may functionally connect genetic variants with complex diseases.
arxiv:1904.12652
I show that the characteristic diffusion timescale and the gamma-ray escape timescale of SN Ia ejecta are related to each other through the time when the bolometric luminosity, $L_{\rm bol}$, intersects the instantaneous radioactive decay luminosity, $L_\gamma$, for the second time after the light-curve peak. Analytical arguments, numerical radiation-transport calculations, and observational tests show that $L_{\rm bol}$ generally intersects $L_\gamma$ at roughly $1.7$ times the characteristic diffusion timescale of the ejecta. This relation implies that the gamma-ray escape timescale is typically 2.7 times the diffusion timescale, and also that the bolometric luminosity 15 days after the peak, $L_{\rm bol}(t_{15})$, must be close to the instantaneous decay luminosity at that time, $L_\gamma(t_{15})$. With the employed calculations and observations, the accuracy of $L_{\rm bol} = L_\gamma$ at $t = t_{15}$ is found to be comparable to the simple version of "Arnett's rule" ($L_{\rm bol} = L_\gamma$ at $t = t_{\rm peak}$). This relation aids the interpretation of SN Ia light curves and may also be applicable to general hydrogen-free explosion scenarios powered by other central engines.
arxiv:1805.03712
Inspired by the work in Ref. [1], which considers the additional second-order contributions arising from nonlocal corrections due to two-point correlation functions of tensors of different ranks at distinct spacetime points, we similarly employ the nonequilibrium statistical operator method to extend this framework to include spin degrees of freedom. In addition to obtaining analogous extra second-order terms in the shear stress tensor, bulk viscous pressure, and charge diffusion currents resulting from such contributions, we further derive additional second-order terms originating from the same mechanism in the charge diffusion currents, rotational stress tensor, and the boost heat vector. Furthermore, we express all transport coefficients represented by two-point or three-point correlations in terms of retarded Green's functions.
arxiv:2505.01814
Motivated by recent experiments on Bi$_3$Mn$_4$O$_{12}$(NO$_3$), we study a frustrated $J_1$-$J_2$ Heisenberg model on the two-dimensional (2D) honeycomb lattice. The classical $J_1$-$J_2$ Heisenberg model on the honeycomb lattice has N\'eel order for $J_2 < J_1/6$. For $J_2 > J_1/6$, it exhibits a one-parameter family of degenerate incommensurate spin spiral ground states where the spiral wave vector can point in any direction. Spin wave fluctuations at leading order lift this accidental degeneracy in favor of specific wave vectors, leading to spiral order by disorder. For spin $S = 1/2$, quantum fluctuations are, however, likely to be strong enough to melt the spiral order parameter over a wide range of $J_2/J_1$. Over part of this range, we argue that the resulting state is a valence bond solid (VBS) with staggered dimer order; this VBS is a nematic which breaks lattice rotational symmetry. Our arguments are supported by comparing the spin wave energy with the energy of the dimer solid obtained using a bond operator formalism. Turning to the effect of thermal fluctuations on the spiral ordered state, any nonzero temperature destroys the magnetic order, but the discrete rotational symmetry of the lattice remains broken, resulting in a thermal analogue of the nematic VBS. We present arguments, supported by classical Monte Carlo simulations, that this nematic transforms into the high temperature symmetric paramagnet via a thermal phase transition in the universality class of the classical 3-state Potts (clock) model in 2D. We discuss the possible relevance of our results for honeycomb magnets, such as Bi$_3$M$_4$O$_{12}$(NO$_3$) (with M = Mn, V, Cr), and bilayer triangular lattice magnets.
arxiv:1004.1119
We report the measurement of the direct $CP$ asymmetry in the radiative $\bar{B} \rightarrow X_{s+d}\gamma$ decay using a data sample of $(772 \pm 11) \times 10^6$ $B\bar{B}$ pairs collected at the $\Upsilon(4S)$ resonance with the Belle detector at the KEKB asymmetric-energy $e^+e^-$ collider. The $CP$ asymmetry is measured as a function of the photon energy threshold. For $E^{*}_{\gamma} \geq 2.1~{\rm GeV}$, where $E^{*}_{\gamma}$ is the photon energy in the center-of-mass frame, we obtain $\mathcal{A}_{CP}(\bar{B} \rightarrow X_{s+d}\gamma) = (2.2 \pm 3.9 \pm 0.9)\%$, consistent with the Standard Model prediction.
arxiv:1501.01702
Using the implicit function theorem, we prove existence of solutions of the so-called conformally covariant split system on compact 3-dimensional Riemannian manifolds. These give rise to non-constant mean curvature (non-CMC) vacuum initial data for the Einstein equations. We investigate the conformally covariant split system defined on compact manifolds with or without boundary. In the former case, the boundary corresponds to an apparent horizon in the constructed initial data. The case with a cosmological constant is then considered separately. Finally, to demonstrate the applicability of the conformally covariant split system in numerical studies, we provide numerical examples of solutions on the manifolds $\mathbb{S}^1 \times \mathbb{S}^2$ and $\mathbb{S}^1 \times \mathbb{T}^2$.
arxiv:1811.07639
in real catalyst particles, which have complex structures. Instead, well-defined single crystal surfaces of catalytically active materials such as platinum are often used as model catalysts. Multi-component materials systems are used to study interactions between catalytically active metal particles and supporting oxides; these are produced by growing ultra-thin films or particles on a single crystal surface. Relationships between the composition, structure, and chemical behavior of these surfaces are studied using ultra-high vacuum techniques, including adsorption and temperature-programmed desorption of molecules, scanning tunneling microscopy, low energy electron diffraction, and Auger electron spectroscopy. Results can be fed into chemical models or used toward the rational design of new catalysts. Reaction mechanisms can also be clarified due to the atomic-scale precision of surface science measurements.

=== Electrochemistry ===

Electrochemistry is the study of processes driven through an applied potential at a solid–liquid or liquid–liquid interface. The behavior of an electrode–electrolyte interface is affected by the distribution of ions in the liquid phase next to the interface, forming the electrical double layer. Adsorption and desorption events can be studied at atomically flat single-crystal surfaces as a function of applied potential, time, and solution conditions using spectroscopy, scanning probe microscopy, and surface X-ray scattering. These studies link traditional electrochemical techniques such as cyclic voltammetry to direct observations of interfacial processes.

=== Geochemistry ===

Geological phenomena such as iron cycling and soil contamination are controlled by the interfaces between minerals and their environment.
The atomic-scale structure and chemical properties of mineral–solution interfaces are studied using in situ synchrotron X-ray techniques such as X-ray reflectivity, X-ray standing waves, and X-ray absorption spectroscopy, as well as scanning probe microscopy. For example, studies of heavy metal or actinide adsorption onto mineral surfaces reveal molecular-scale details of adsorption, enabling more accurate predictions of how these contaminants travel through soils or disrupt natural dissolution–precipitation cycles.

== Physics ==

Surface physics can be roughly defined as the study of physical interactions that occur at interfaces. It overlaps with surface chemistry. Some of the topics investigated in surface physics include friction, surface states, surface diffusion, surface reconstruction, surface phonons and plasmons, epitaxy, the emission and tunneling of electrons, spintronics, and the self-assembly of nanostructures on surfaces. Techniques to
https://en.wikipedia.org/wiki/Surface_science
In a previous theoretical work [arXiv:2205.01461], the T. Esslinger group proposed a scheme to realize a spatial-temporal lattice, which possesses dual periodicity in space and time, in a cavity-boson system pumped by a travelling wave laser. However, the prediction was made under the mean-field approximation. In this work, we investigate the dynamics beyond the mean-field approximation. By including the fluctuations of the cavity field, we obtain a larger set of equations of motion. Numerical results show that the spatial-temporal lattice is melted at the mean-field level but survives under quantum fluctuations.
arxiv:2411.04687
Given a polynomial map f on Euclidean n-space and a vector q, the polynomial complementarity problem, PCP(f, q), is the nonlinear complementarity problem of finding a nonnegative vector x such that y = f(x) + q is nonnegative and orthogonal to x. It is called a tensor complementarity problem if the polynomial map is homogeneous. In this paper, we establish results connecting the polynomial complementarity problem PCP(f, q) and the tensor complementarity problem PCP(f*, 0), where f* is the leading term in the decomposition of f as a sum of homogeneous polynomial maps. We show, for example, that PCP(f, q) has a nonempty compact solution set for every q when zero is the only solution of PCP(f*, 0) and the local (topological) degree of min{x, f*(x)} at the origin is nonzero. As a consequence, we establish Karamardian-type results for polynomial complementarity problems. By identifying a tensor A of order m and dimension n with its corresponding homogeneous polynomial f(x) := Ax^{m-1}, we relate our results to tensor complementarity problems. These results show that under appropriate conditions, PCP(f + p, q) has a nonempty compact solution set for all polynomial maps p of degree less than m - 1 and for all vectors q, thereby substantially improving the existing tensor complementarity results, where only problems of the type PCP(f, q) are considered. We introduce the concept of the degree of an R_0-tensor and show that the degree of an R-tensor is one. We illustrate our results by constructing matrix-based tensors.
arxiv:1609.05267
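For readability, the complementarity condition described in words in the abstract above ("finding a nonnegative vector x such that y = f(x) + q is nonnegative and orthogonal to x") can be written compactly as:

```latex
x \ge 0, \qquad y = f(x) + q \ge 0, \qquad \langle x, y \rangle = 0 .
```

Since both vectors are componentwise nonnegative, the orthogonality condition forces, for each index i, either x_i = 0 or y_i = 0.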
Early-type galaxies have projected central density brightness profile logarithmic slopes, gamma^prime, ranging from about 0 to 1. We show that gamma^prime is strongly correlated, r = 0.83, with the coarse-grained phase density of the galaxy core, Q_0 = rho/sigma^3. The gamma^prime-luminosity correlation is much weaker, r = -0.5. Q_0 also serves to separate the distribution of power-law profiles, gamma^prime > 0.5, from nearly flat profiles, gamma^prime < 0.3, although there are many galaxies of intermediate slope, at intermediate Q_0, in a volume-limited sample. The transition phase density separating the two profile types is approximately 0.003, which is also where the relation between Q_0 and core mass shows a change in slope, the rotation rate of the central part of the galaxy increases, and the ratio of black hole to core mass increases. These relations are considered relative to the globular cluster inspiral core buildup and binary black hole core scouring mechanisms for core creation and evolution. Globular cluster inspiral models have quantitative predictions that the data support, but no single model yet completely explains the correlations.
arxiv:1401.4742
A commutative algebra $\mathbb{B}$ over the field of complex numbers with a basis $\{e_1, e_2\}$ satisfying the conditions $(e_1^2 + e_2^2)^2 = 0$, $e_1^2 + e_2^2 \ne 0$, is considered. The algebra $\mathbb{B}$ is associated with the biharmonic equation. We consider a Schwartz-type boundary value problem on finding a monogenic function of the type $\Phi(xe_1 + ye_2) = U_{1}(x, y)\,e_1 + U_{2}(x, y)\,ie_1 + U_{3}(x, y)\,e_2 + U_{4}(x, y)\,ie_2$, $(x, y) \in D$, when the values of the two components $U_1$, $U_4$ are given on the boundary of a domain $D$ lying in the Cartesian plane $xOy$. We develop a method for solving it which is based on expressions of monogenic functions via corresponding analytic functions of the complex variable. For a half-plane and for a disk, solutions are obtained in explicit form by means of Schwartz-type integrals.
arxiv:1610.00436
We investigate Z2 x Z2 orientifolds with group actions involving shifts. A complete classification of possible geometries is presented, where previous work by other authors is also included in a unified framework from an intersecting D-brane perspective. In particular, we show that the additional shifts not only determine the topology of the orbifold but also, independently, the presence of orientifold planes. In the second part, we work out in detail a basis of homological three-cycles on shift Z2 x Z2 orientifolds and construct all possible fractional D-branes, including rigid ones. A Pati-Salam type model with no open-string moduli in the visible sector is presented.
arxiv:hep-th/0604033
This paper is about $\phi$-coordinated modules for weak quantum vertex algebras. Among the main results, several canonical connections among $\phi$-coordinated modules for different $\phi$ are established. For vertex operator algebras, a reinterpretation of Frenkel-Huang-Lepowsky's theorem on contragredient modules is given in terms of $\phi$-coordinated modules.
arxiv:1608.03829
We present an exploratory investigation of the parity-odd $\Delta I = 1$ pion-nucleon coupling $h_\pi^1$ from lattice QCD. Based on the PCAC relation, we study the parity-conserving effective Hamiltonian and extract the coupling by determining the nucleon mass splitting arising from the effective 4-quark interactions using the Feynman-Hellmann theorem. We present preliminary results of the mass shift for a $32^3 \times 64$ ensemble of $N_f = 2+1+1$ twisted mass fermions at pion mass $260$ MeV and lattice spacing $a = 0.097$ fm.
arxiv:2111.09025
The Seiberg-Witten limit of fermionic N = 2 string theory with nonvanishing B-field is governed by noncommutative self-dual Yang-Mills theory (ncSDYM) in 2+2 dimensions. Conversely, the self-duality equations are contained in the equations of motion of N = 2 string field theory in a B-field background. Therefore, finding solutions to noncommutative self-dual Yang-Mills theory on R^{2,2} might help to improve our understanding of nonperturbative properties of string (field) theory. In this paper, we construct nonlinear soliton-like and multi-plane wave solutions of the ncSDYM equations corresponding to certain D-brane configurations by employing a solution generating technique, an extension of the so-called dressing approach. The underlying Lax pair is discussed in two different gauges, the unitary and the hermitean gauge. Several examples and applications for both situations are considered, including abelian solutions constructed from GMS-like projectors, noncommutative U(2) soliton-like configurations, and interacting plane waves. We display a correspondence to earlier work on string field theory and argue that the solutions found here can serve as a guideline in the search for nonperturbative solutions of nonpolynomial string field theory.
arxiv:hep-th/0211263
A deep community in a graph is a connected component that can only be seen after removal of nodes or edges from the rest of the graph. This paper formulates the problem of detecting deep communities as multi-stage node removal that maximizes a new centrality measure, called the local Fiedler vector centrality (LFVC), at each stage. The LFVC is associated with the sensitivity of algebraic connectivity to node or edge removals. We prove that a greedy node/edge removal strategy, based on successive maximization of LFVC, has bounded performance loss relative to the optimal, but intractable, combinatorial batch removal strategy. Under a stochastic block model framework, we show that the greedy LFVC strategy can extract deep communities with probability one as the number of observations becomes large. We apply the greedy LFVC strategy to real-world social network datasets. Compared with conventional community detection methods, we demonstrate improved ability to identify important communities and key members in the network.
arxiv:1407.6071
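The multi-stage removal idea in the abstract above can be illustrated with a small sketch. Here the algebraic connectivity (the second-smallest eigenvalue of the graph Laplacian) is recomputed directly for each candidate node removal; this brute-force selection is a stand-in for the LFVC-based greedy rule the paper actually analyzes, not the paper's algorithm.

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A."""
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.linalg.eigvalsh(lap)[1]

def greedy_removal(adj, stages):
    """At each stage, remove the node whose deletion most reduces
    algebraic connectivity (a proxy for maximizing LFVC).
    Returns the list of removed node labels."""
    nodes = list(range(adj.shape[0]))
    removed = []
    for _ in range(stages):
        best, best_val = None, None
        for i in range(len(nodes)):
            keep = [k for k in range(len(nodes)) if k != i]
            val = algebraic_connectivity(adj[np.ix_(keep, keep)])
            if best_val is None or val < best_val:
                best, best_val = i, val
        removed.append(nodes.pop(best))
        adj = np.delete(np.delete(adj, best, axis=0), best, axis=1)
    return removed
```

On a graph of two triangles joined by a bridge, the first removal picks a cut vertex, disconnecting the graph and exposing the two triangles as "deep" components.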
We determine internal characterisations for when a tensor category is (super) Tannakian, for fields of positive characteristic. This generalises the corresponding characterisations in characteristic zero by P. Deligne. We also explore notions of Frobenius twists in tensor categories in positive characteristic.
arxiv:1812.02452
The success of state-of-the-art deep neural networks heavily relies on the presence of large-scale labelled datasets, which are extremely expensive and time-consuming to annotate. This paper focuses on tackling semi-supervised part segmentation tasks by generating high-quality images with a pre-trained GAN and labelling the generated images with an automatic annotator. In particular, we formulate the annotator learning as a learning-to-learn problem. Given a pre-trained GAN, the annotator learns to label object parts in a set of randomly generated images such that a part segmentation model trained on these synthetic images with their predicted labels obtains low segmentation error on a small validation set of manually labelled images. We further reduce this nested-loop optimization problem to a simple gradient matching problem and efficiently solve it with an iterative algorithm. We show that our method can learn annotators from a broad range of labelled images, including real images, generated images, and even analytically rendered images. Our method is evaluated on semi-supervised part segmentation tasks and significantly outperforms other semi-supervised competitors when the amount of labelled examples is extremely limited.
arxiv:2211.03003
In this paper we define the Schwartz linear operators among spaces of tempered distributions. These operators are the analogues of linear continuous operators among separable Hilbert spaces, but in the case of spaces endowed with Schwartz bases having a continuous index set. The Schwartz linear operators enjoy properties very similar to those enjoyed by linear operators in the finite-dimensional case. The Schwartz operators are one possible rigorous mathematical model for the operators and observables used in quantum mechanics.
arxiv:1104.3380
Detection of gravitational-wave (GW) sources enables the characterisation of binary compact objects and of their in-spiral. However, other dissipative processes can affect the in-spiral. Here we show that the in-spiral of compact objects through a gaseous common envelope (CE) arising from an evolved stellar companion produces a novel type of GW source, whose evolution is dominated by the dissipative gas dynamical friction effects from the CE, rather than by the GW emission itself. The evolution and properties of the GW signals differ significantly from those of isolated gas-poor mergers. We find characteristic strains of $\sim 10^{-23}$-$10^{-21}$ ($10\,{\rm kpc}/d$) for such sources, observable by next-generation space-based GW detectors. The evolution of the GW signal can serve as a probe of the interior regions of the evolved star and the final stages of CE evolution, otherwise inaccessible through other observational means. Moreover, such CE mergers are frequently followed by observable explosive electromagnetic counterparts and/or the formation of exotic stars.
arxiv:1903.11072
Some examples of branched Hamiltonians are explored both classically and in the context of quantum mechanics, as recently advocated by Shapere and Wilczek. These are in fact cases of switchback potentials, albeit in momentum space, as previously analyzed for quasi-Hamiltonian chaotic dynamical systems in a classical setting, and as encountered in analogous renormalization group flows for quantum theories which exhibit RG cycles. A basic two-worlds model, with a pair of Hamiltonian branches related by supersymmetry, is considered in detail.
arxiv:1311.6147
Community detection in a complex network is an important problem that has attracted much interest in recent years. In general, a community detection algorithm chooses an objective function and captures the communities of the network by optimizing that objective; one then uses various heuristics to solve the optimization problem and extract the communities of interest. In this article, we demonstrate a procedure to transform a graph into points of a metric space and develop methods of community detection with the help of a metric defined for pairs of points. We have also studied and analyzed the community structure of the network therein. The results obtained with our approach are very competitive with most of the well-known algorithms in the literature, and this is justified over a large collection of datasets. Moreover, the time taken by our algorithm is considerably less than that of other methods, which supports the theoretical findings.
arxiv:1508.06380
Ultramagnetized neutron stars, or magnetars, are magnetically powered neutron stars. Their strong magnetic fields dominate the physical processes in their crusts and their surroundings. The past few years have seen several advances in our theoretical and observational understanding of these objects. In spite of a surfeit of observations, their spectra are still poorly understood. I will discuss the emission from strongly magnetized condensed matter surfaces of neutron stars, recent advances in our expectations of the surface composition of magnetars, and a model for the non-thermal emission from these objects.
arxiv:astro-ph/0504077
Visual Word Sense Disambiguation (VWSD) is a novel challenging task that lies between linguistic sense disambiguation and fine-grained multimodal retrieval. The recent advancements in the development of visiolinguistic (VL) transformers suggest some off-the-shelf implementations with encouraging results, which however we argue can be further improved. To this end, we propose some knowledge-enhancement techniques towards improving the retrieval performance of VL transformers via the usage of large language models (LLMs) as knowledge bases. More specifically, knowledge stored in LLMs is retrieved with the help of appropriate prompts in a zero-shot manner, achieving performance advancements. Moreover, we convert VWSD to a purely textual question-answering (QA) problem by considering generated image captions as multiple-choice candidate answers. Zero-shot and few-shot prompting strategies are leveraged to explore the potential of such a transformation, while chain-of-thought (CoT) prompting in the zero-shot setting is able to reveal the internal reasoning steps an LLM follows to select the appropriate candidate. In total, our presented approach is the first one to analyze the merits of exploiting knowledge stored in LLMs in different ways to solve VWSD.
arxiv:2310.01960
A comparison between the line-of-sight power spectrum of absorption in the Lyman-alpha forest and the cross power spectrum between the absorption in neighboring lines of sight offers an evolution-free means to constrain the cosmological constant, or dark energy. Using cosmological simulations, we consider a maximum likelihood method to obtain constraints from this comparison. In our method, measurements of the auto and cross spectra from observations are compared with those from a multi-parameter grid of simulated models of the intergalactic medium. We then marginalize over nuisance parameters to obtain constraints on the cosmological constant. Redshift space distortions due to peculiar velocities and thermal broadening, a potential difficulty in applying this test, are explicitly modeled in our simulations. To illustrate our method, we measure the cross spectrum from a new sample of five close quasar pairs, with separations of 0.5 to 3 arcmin. We attempt to obtain a constraint on Omega_Lambda, but find only weak constraints. An Einstein-de Sitter cosmology is, however, disfavored by the data at a ~2 sigma confidence level. We consider the power of future observations, paying particular attention to the effects of spectral resolution and shot noise. We find that ~50 moderate resolution, FWHM ~ 150 km/s, close separation (~30-120 arcsec) pairs should allow a (2 sigma) constraint on Omega_Lambda at the level of 15%, if other modeling parameters are determined through other means. We find that there is a sizeable gain from observing very close, ~30 arcsec, separation pairs provided they are observed with high spectral resolution. A sample of ~10 such pairs gives similar constraints to the 50 moderate resolution pairs mentioned above.
arxiv:astro-ph/0309204
in our previous paper [ 1 ], we studied a model of dark matter ( dm ) in which the hidden sector interacts with standard model particles via a hidden photonic portal ( hp ). we investigated the effects of this new interaction on the hydrogen atom and obtained an upper bound for the coupling of the model. in this work, we study the effects of hp on two interesting exotic atoms, namely muonium and positronium. we obtain a tighter upper limit on the coupling. we also calculate the change ( shift ) in the aharonov - bohm phase due to hp and find that the phase shift is negligibly small ( for dm particle masses in the gev range ). recently, a 3. 5 kev x ray line signal was observed in the spectrum of 73 galaxy clusters, as reported by the xmm - newton x ray observatory. since in the hp model the dm particles can decay directly into photons, we finally calculate the value of the coupling constant f using the condition delta e = 3. 5 kev.
arxiv:1511.05841
the growth factor of linear fluctuations is probably one of the least known quantities in observational cosmology. here we discuss the constraints that baryon oscillations in galaxy power spectra from future surveys can put on a conveniently parametrized growth factor. we find that spectroscopic surveys of $ 5000 deg ^ 2 $ extending to $ z \ approx 3 $ could estimate the growth index $ \ gamma $ within 0. 06 ; a similar photometric survey would give $ \ delta \ gamma \ approx 0. 15 $. this test provides an important consistency check for the standard cosmological model and could constrain modified gravity models. we discuss the errors and the figure of merit for various combinations of redshift errors and survey size.
arxiv:0709.2792
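the growth - index parametrization behind this forecast writes the logarithmic growth rate as $ f ( a ) = \ omega _ m ( a ) ^ \ gamma $, which can be integrated to get the growth factor. the toy numerical sketch below ( not from the paper ; the flat - lcdm background and the value $ \ gamma = 0. 55 $ are standard illustrative assumptions ) integrates this relation with a simple midpoint scheme.

```python
import math

# Toy sketch of the growth-index parametrization: d ln D / d ln a = omega_m(a)**gamma.
# Background and gamma value are illustrative, not taken from the paper.

def omega_m(a, omega_m0):
    """Matter fraction in flat lcdm at scale factor a."""
    return omega_m0 / (omega_m0 + (1.0 - omega_m0) * a**3)

def growth_factor(a_end, omega_m0, gamma, steps=100000):
    """Integrate d ln D / d ln a = omega_m(a)**gamma from a tiny scale factor."""
    ln_a0, ln_a1 = math.log(1e-4), math.log(a_end)
    h = (ln_a1 - ln_a0) / steps
    ln_d = 0.0
    for i in range(steps):
        a = math.exp(ln_a0 + (i + 0.5) * h)  # midpoint rule
        ln_d += omega_m(a, omega_m0)**gamma * h
    return math.exp(ln_d)

# Sanity check: in an einstein-de sitter universe (omega_m = 1) the integrand
# is exactly 1 for any gamma, so the growth factor scales as D proportional to a.
ratio = growth_factor(0.5, 1.0, 0.55) / growth_factor(1.0, 1.0, 0.55)
```

in the einstein - de sitter limit the ratio of growth factors at scale factors 0. 5 and 1 is exactly 0. 5, which makes the sketch easy to verify before using it with other parameter values.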
in this paper, we study the k - stability of polarized spherical varieties. after reduction, it can be treated as a variational problem of the reduced functional of the futaki invariant on the associated moment polytope. with the convexity constraint of the problem, the minimizers are shown to satisfy the homogeneous monge - amp \ ` ere equation ( hma ). when the spherical variety has rank two, a simpler characterization can be established through properties of the hma. as an application, we determine the strict semistability and polystable degenerations for fano spherical varieties of rank two.
arxiv:2111.04269
in this note we present algorithms for computing euclidean minima of cubic number fields ; in particular, we were able to find all norm - euclidean cubic number fields with discriminants - 999 < d < 10000.
arxiv:1202.6019
traditionally, the duality between wilson loops and amplitudes beyond one loop in n = 4 sym is characterised by the remainder function. because of the perturbative origins of the bds expression, the remainder function is more natural at weak than at strong coupling. we advocate instead a more direct approach, based on considering ratios of wilson loops. this allows us to define a manifestly finite, regularisation independent, conformally invariant quantity. it makes no direct reference to the bds expression. it is a natural object at weak and at strong coupling, and in the latter case is directly related to the free energy of an auxiliary integrable system. we then compute these ratios for continuous families of regular polygons for 6, 8 and 10 points at one and two - loops. these results are compared to expressions derived recently at strong coupling.
arxiv:1003.4405
let $ ( x, \ omega ) $ be a compact k \ " { a } hler manifold. let $ ( l, h ) $ be a hermitian holomorphic line bundle over $ x $, such that $ \ theta _ { l, h } \ geq - \ varepsilon \ omega $ for a small $ \ varepsilon > 0 $, $ e $ be a holomorphic line bundle over $ x $. for $ k \ in \ mathbb { n } _ + $, denote by $ x _ k : = ( x, \ omega ^ k ) $ the k \ " { a } hler manifold $ x $ with new scaled metric $ \ omega ^ k = k \ omega $. estimates of the number of eigenvalues smaller than $ \ lambda $ of the $ \ bar { \ partial } $ - laplacian on forms on $ x _ k $ with values in $ l ^ k \ otimes e $ are presented for $ 0 \ leq \ lambda < k $. in particular, when $ \ lambda = 0 $, we get a numeric bound for the cohomology groups.
arxiv:1310.3557
we propose using the predictability of human motion to eliminate the overhead of distributed location services in human - carried manets, dubbing the technique location profile routing. this method outperforms the geographic hashing location service when nodes change locations 2x more frequently than they initiate connections ( e. g., start new tcp streams ), as in applications like text - and instant - messaging. prior characterizations of human mobility are used to show that location profile routing achieves a 93 % delivery ratio with a 1. 75x first - packet latency increase relative to an oracle location service.
arxiv:1403.4677
we performed a 12co and 13co - line study of the " brick " ( g0. 253 + 0. 016 ) in the galactic centre ( gc ) by analyzing archival data obtained with the nobeyama 45 - m telescope. we present kinematics and molecular gas distributions in the longitude - velocity diagram, and suggest that the brick is located along the gc arm i in the central molecular zone ( cmz ), which yields a distance from the sun of 8 kpc and galacto - centric distance of 0. 2 kpc. the major and minor - axis diameters of the brick are $ d _ x \ times d _ y = 8. 4 { \ rm pc } \ times 4. 1 { \ rm pc } $ at position angles of $ 40 \ deg $ and $ 130 \ deg $, respectively, and the scale radius is $ r _ { \ rm brick } = \ sqrt { d _ x d _ y } = 2. 96 $ pc. the molecular mass inferred from the 12co - line integrated intensity is $ m _ { \ rm bri, xco } \ sim 5. 1 \ times 10 ^ 4 m _ \ odot $ for a conversion factor $ x _ { \ rm co } = 1. 0 \ times 10 ^ { 20 } $ h $ _ 2 $ cm $ ^ { - 2 } $ [ k km / s ] $ ^ { - 1 } $. on the other hand, the dynamical ( virial ) mass for the measured velocity dispersion of $ \ sigma _ v = 10. 0 $ km / s is calculated to be $ m _ { \ rm bri, vir } \ sim 6. 8 \ times 10 ^ 4 m _ \ odot $, which yields a new conversion factor of $ x _ { \ rm co } = 1. 3 \ times 10 ^ { 20 } $ h $ _ 2 $ cm $ ^ { - 2 } $ [ k km / s ] $ ^ { - 1 } $. the brick ' s center has a cavity surrounded by a spherical molecular bubble of radius $ r _ { \ rm bub } = 1. 85 $ pc and mass $ \ sim 1. 7 \ times 10 ^ 4 m _ \ odot $ expanding at $ v _ { \ rm exp } \ simeq 10 $ km / s with kinetic energy of $ e _ 0 \ sim 1. 7 \ times 10 ^ { 49 } $ erg.
arxiv:2404.11892
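the quoted dynamical mass can be reproduced with a back - of - envelope virial estimate. the sketch below assumes the simple estimator $ m _ { \ rm vir } = \ sigma _ v ^ 2 r / g $ ( one of several conventional virial definitions ; this particular choice reproduces the quoted value, but the paper may use a different prefactor ).

```python
# Back-of-envelope check of the virial mass quoted in the abstract, assuming
# the simple estimator m_vir = sigma_v**2 * r / G. The choice of prefactor
# (here: 1) is an assumption; it happens to reproduce the quoted ~6.8e4 m_sun.

G = 4.301e-3          # gravitational constant in pc * (km/s)^2 / m_sun
sigma_v = 10.0        # measured velocity dispersion, km/s
r_brick = 2.96        # scale radius, pc

m_vir = sigma_v**2 * r_brick / G   # roughly 6.9e4 solar masses
```

the result agrees with the quoted $ \ sim 6. 8 \ times 10 ^ 4 m _ \ odot $ to about a percent, and the same numbers give the quoted kinetic energy scale of the expanding bubble ( $ \ tfrac 1 2 m v ^ 2 \ sim 10 ^ { 49 } $ erg ).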
there has been a long string of efforts to understand the source of the variability observed in microquasars but no model has yet gained wide acceptance, especially concerning the elusive high - frequency quasi - periodic oscillation ( hfqpo ). we first list the constraints arising from observations and what they imply for an hfqpo model. then we present how a model based on having the rossby wave instability ( rwi ) active in the disk could answer those constraints.
arxiv:1209.1958
we present a multi - wavelength analysis of star - forming galaxies in the massive cluster ms0451. 6 - 0305 at z $ \ sim $ 0. 54 to shed new light on the evolution of the far - infrared - radio relationship in distant rich clusters. we have derived total infrared luminosities for a spectroscopically confirmed sample of cluster and field galaxies through an empirical relation based on $ spitzer $ mips 24 $ \ mu $ m photometry. the radio flux densities were measured from deep very large array 1. 4 ghz radio continuum observations. we find the ratio of far - infrared to radio luminosity for galaxies in an intermediate redshift cluster to be $ q _ { \ rm fir } $ = 1. 80 $ \ pm $ 0. 15 with a dispersion of 0. 53. due to the large intrinsic dispersion, we do not find any observable change in this value with either redshift or environment. however, a higher percentage of galaxies in this cluster show an excess in their radio fluxes when compared to low redshift clusters ( $ 27 ^ { + 23 } _ { - 13 } \ % $ to $ 11 \ % $ ), suggestive of a cluster enhancement of radio - excess sources at this earlier epoch. in addition, the far - infrared - radio relationship for blue galaxies, where $ q _ { \ rm fir } $ = 2. 01 $ \ pm $ 0. 14 with a dispersion of 0. 35, is consistent with the predicted value from the field relationship, although these results are based on a sample from a single cluster.
arxiv:1411.3677
at present, only some special differential equations have explicit analytical solutions. in general, no one thinks that it is possible to analytically find the exact solution of nonlinear equations. in this article, based on the idea that a numerical scheme with zero truncation error can give rise to the exact solution, a general formula for the exact solution of the initial value problem of nonlinear ordinary differential equations is obtained. meanwhile, this formula enables us to construct a numerical scheme with zero truncation error for solving nonlinear differential equations.
arxiv:2004.05329
the continuous improvement in localization errors ( sky position and distance ) in real time as lisa observes the gradual inspiral of a supermassive black hole ( smbh ) binary can be of great help in identifying any prompt electromagnetic counterpart associated with the merger. we develop a new method, based on a fourier decomposition of the time - dependent, lisa - modulated gravitational - wave signal, to study this intricate problem. the method is faster than standard monte carlo simulations by orders of magnitude. by surveying the parameter space of potential lisa sources, we find that counterparts to smbh binary mergers with total mass m ~ 10 ^ 5 - 10 ^ 7 m _ sun and redshifts z < ~ 3 can be localized to within the field of view of astronomical instruments ( ~ deg ^ 2 ) typically hours to weeks prior to coalescence. this will allow targeted searches for variable electromagnetic counterparts as the merger proceeds, as well as monitoring of the most energetic coalescence phase. a rich set of astrophysical and cosmological applications would emerge from the identification of electromagnetic counterparts to these gravitational - wave standard sirens.
arxiv:astro-ph/0701629
motivation : recent advances in high - throughput sequencing ( hts ) have made it possible to monitor genomes in great detail. new experiments not only use hts to measure genomic features at one time point but to monitor them changing over time with the aim of identifying significant changes in their abundance. in population genetics, for example, allele frequencies are monitored over time to detect significant frequency changes that indicate selection pressures. previous attempts at analysing data from hts experiments have been limited as they could not simultaneously include data at intermediate time points, replicate experiments and sources of uncertainty specific to hts such as sequencing depth. results : we present the beta - binomial gaussian process ( bbgp ) model for ranking features with significant non - random variation in abundance over time. the features are assumed to represent proportions, such as the proportion of an alternative allele in a population. we use the beta - binomial model to capture the uncertainty arising from finite sequencing depth and combine it with a gaussian process model over the time series. in simulations that mimic the features of experimental evolution data, the proposed method clearly outperforms classical testing in average precision of finding selected alleles. we also present simulations exploring different experimental design choices and results on real data from a drosophila experimental evolution experiment on temperature adaptation. availability : r software implementing the test is available at https://github.com/handetopa/bbgp
arxiv:1403.4086
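the beta - binomial observation layer of the bbgp model is simple to write down on its own ( the gaussian - process prior over the time series is omitted here ; this is an illustrative sketch in python, not the authors' r code ). the log - pmf below uses log - gamma functions for numerical stability.

```python
import math

# Sketch of the beta-binomial observation model for sequencing counts:
# p(k | n, alpha, beta) = C(n, k) * B(k + alpha, n - k + beta) / B(alpha, beta).
# The gaussian-process prior over time used in the paper is omitted.

def log_beta(a, b):
    """log of the beta function B(a, b)."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def beta_binomial_logpmf(k, n, alpha, beta):
    """log p(k successes out of n reads | beta(alpha, beta) proportion)."""
    log_choose = (math.lgamma(n + 1) - math.lgamma(k + 1)
                  - math.lgamma(n - k + 1))
    return log_choose + log_beta(k + alpha, n - k + beta) - log_beta(alpha, beta)

# With a uniform beta(1, 1) prior on the allele proportion, every count
# 0..n is equally likely, so the pmf collapses to 1 / (n + 1).
p = math.exp(beta_binomial_logpmf(3, 10, 1.0, 1.0))
```

this captures the overdispersion relative to a plain binomial : for fixed mean, smaller alpha + beta ( a vaguer proportion ) spreads probability mass over a wider range of read counts.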
we present high - resolution measurements of the thermal expansion and the magnetostriction of tlcucl $ _ { 3 } $ which shows field - induced antiferromagnetic order. we find pronounced anomalies in the field and temperature dependence of the lattice along different directions, signaling a large magnetoelastic coupling. the phase boundary is extremely sensitive to pressure, e. g. the transition field would change by about $ \ pm $ 185 % / gpa under uniaxial pressure applied along certain directions. this drastic effect can unambiguously be traced back to changes of the intradimer coupling under uniaxial pressure. the interdimer couplings remain essentially unchanged under pressure, but strongly change when tl is replaced by k.
arxiv:cond-mat/0412451
transactions submitted through the blockchain peer - to - peer ( p2p ) network may leak out exploitable information. we study the economic incentives behind the adoption of blockchain dark venues, where users ' transactions are observable only by miners on these venues. we show that miners may not fully adopt dark venues to preserve rents extracted from arbitrageurs, hence creating execution risk for users. the dark venue neither eliminates frontrunning risk nor reduces transaction costs. it strictly increases the payoff of miners, weakly increases the payoff of users, and weakly reduces arbitrageurs ' profits. we provide empirical support for our main implications, and show that they are economically significant. a 1 % increase in the probability of being frontrun raises users ' adoption rate of the dark venue by 0. 6 %. arbitrageurs ' cost - to - revenue ratio increases by a third with a dark venue.
arxiv:2202.05779
we give a notion of a comatrix coring which embodies all former constructions and, what is more interesting, leads to the formulation of a notion of galois coring and the statement of a faithfully flat descent theorem that generalize the previous versions.
arxiv:math/0509106
urysohn constructed a separable complete universal metric space homogeneous for all finite subspaces, which is today called the urysohn universal metric space. some authors have recently investigated an ultrametric analogue of this space. the isometry problem of such ultrametric spaces is our main subject in this paper. we first introduce the new notion of petaloid ultrametric spaces, which is intended to be a standard class of non - separable urysohn universal ultrametric spaces. next we prove that all petaloid spaces are isometric to each other and homogeneous for all finite subspaces ( and compact subspaces ). moreover, we show that the following spaces are petaloid, and hence they are isometric to each other and homogeneous : ( 1 ) the space of all continuous functions, whose images contain zero, from the cantor set into the space of non - negative real numbers equipped with the nearly discrete topology, ( 2 ) the space of all continuous ultrametrics on a zero - dimensional infinite compact metrizable space, ( 3 ) the non - archimedean gromov - - hausdorff space, and ( 4 ) the space of all maps from the set of non - negative real numbers into the set of natural numbers whose supports are finite or decreasing sequences convergent to zero.
arxiv:2302.00306
a theoretical study of elastic electron collisions with 9 conformers of the gas - phase amino acid $ \ alpha $ - alanine ( ch $ _ 3 $ ch ( nh $ _ 2 $ ) cooh ) is performed. the eigenphase sums, resonance features, differential and integral cross sections are computed for each individual conformer. resonance positions for the low - energy $ \ pi ^ * $ shape resonance are found to vary from 2. 6 ev to 3. 1 ev and the resonance widths from 0. 3 ev to 0. 5 ev. averaged cross sections for thermal mixtures of the 9 conformers are presented. both theoretical and experimental population ratios are considered. thermally - averaged cross sections obtained using the best theoretical estimates give reasonable agreement with the observed thermal cross sections. excited conformers iia and iib make a large contribution to this average due to their large permanent dipole moments.
arxiv:1609.06162
this paper details an innovative methodology to integrate image data into traditional econometric models. motivated by forecasting sales prices for residential real estate, we harness the power of deep learning to add " information " contained in images as covariates. specifically, images of homes were categorized and encoded using an ensemble of image classifiers ( resnet - 50, vgg16, mobilenet, and inception v3 ). unique features presented within each image were further encoded through panoptic segmentation. forecasts from a neural network trained on the encoded data result in improved out - of - sample predictive power. we also combine these image - based forecasts with standard hedonic real estate property and location characteristics, resulting in a unified dataset. we show that image - based forecasts increase the accuracy of hedonic forecasts when encoded features are regarded as additional covariates. we also attempt to " explain " which covariates the image - based forecasts are most highly correlated with. the study exemplifies the benefits of interdisciplinary methodologies, merging machine learning and econometrics to harness untapped data sources for more accurate forecasting.
arxiv:2403.19915
_ { \ lambda _ c } $ in agreement with the sm. the experiment identified the $ \ tau $ using its hadron decay into $ \ pi ^ - \ pi ^ + \ pi ^ - \ nu _ \ tau $, and this result for $ { \ cal r } _ { \ lambda _ c } $, which is in conflict with the phenomenology from the $ b $ - meson sector, needs confirmation from other tau reconstruction channels.
arxiv:2201.05537
in this paper we study a combinatorial reconfiguration problem that involves finding an optimal sequence of swaps to move an initial configuration of tokens that are placed on the vertices of a graph to a final desired one. this problem arises as a crucial step in reducing the depth of a quantum circuit when compiling a quantum algorithm. we provide the first known constant factor approximation algorithms for the parallel token swapping problem on graph topologies that are commonly found in modern quantum computers, including cycle graphs, subdivided star graphs, and grid graphs. we also study the so - called stretch factor of a natural lower bound to the problem, which has been shown to be useful when designing heuristics for the qubit routing problem. finally, we study the colored version of this reconfiguration problem where some tokens share the same color and are considered indistinguishable.
arxiv:2411.18581
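for intuition about the underlying reconfiguration problem, the simplest special case is worth sketching : on a path graph with distinct tokens, sorting by adjacent transpositions is optimal, and the minimum swap count equals the number of inversions of the token sequence. the sketch below illustrates only this sequential, uncolored path case ( the paper's parallel swaps, colored tokens and other topologies require considerably more machinery ).

```python
# Illustrative special case of token swapping: on a path graph with distinct
# tokens, adjacent-swap sorting (bubble sort) is optimal and the number of
# swaps equals the number of inversions. Not the paper's algorithm.

def route_on_path(tokens):
    """Return the sequence of adjacent swaps moving token i to vertex i."""
    tokens = list(tokens)
    swaps = []
    for i in range(len(tokens)):
        for j in range(len(tokens) - 1 - i):
            if tokens[j] > tokens[j + 1]:
                tokens[j], tokens[j + 1] = tokens[j + 1], tokens[j]
                swaps.append((j, j + 1))   # swap edge (j, j+1)
    return swaps

def inversions(tokens):
    """Count pairs that are out of order: a lower bound on the swaps needed."""
    return sum(1 for i in range(len(tokens))
                 for j in range(i + 1, len(tokens))
                 if tokens[i] > tokens[j])

swaps = route_on_path([2, 0, 3, 1])
```

in the qubit - routing setting each swap corresponds to a swap gate between neighboring qubits, so minimizing the swap sequence ( and, in the parallel variant, its depth ) directly reduces circuit depth.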
3d face reconstruction is an important task in the field of computer vision. although 3d face reconstruction has been developing rapidly in recent years, reconstruction under large poses is still a challenge, because much of the information about a face in a large pose is unobservable. in order to address this issue, this paper proposes a novel 3d face reconstruction algorithm ( pifr ) based on the 3d morphable model ( 3dmm ). given a single input face image, it generates a frontal image by normalizing the image. we then take a weighted sum of the 3d parameters of the two images. our method solves the problem traditional methods have with reconstructing a face from a single image in a large pose, works for arbitrary poses and expressions, and greatly improves the accuracy of reconstruction. experiments on the challenging afw, lfpw and aflw databases show that our algorithm significantly improves the accuracy of 3d face reconstruction even under extreme poses.
arxiv:1811.05295
spectra observed with the ultraviolet and visual echelle spectrograph ( uves ) on the european southern observatory ' s vlt exhibit long - range wavelength distortions. these distortions impose a systematic error on high - precision measurements of the fine - structure constant, $ \ alpha $, derived from intervening quasar absorption systems. if the distortion is modelled too simplistically, the resulting bias in $ \ delta \ alpha / \ alpha $ away from the true value can be larger than the statistical uncertainty on the $ \ alpha $ measurement ; the same is true if the effect is ignored altogether. if the effect is modelled properly, accounting for the way in which final spectra are generally formed from the co - addition of exposures made at several different instrumental settings, it can be accurately removed and the correct $ \ delta \ alpha / \ alpha $ recovered.
arxiv:1701.03176
reliable qubits are difficult to engineer, but standard fault - tolerance schemes use seven or more physical qubits to encode each logical qubit, with still more qubits required for error correction. the large overhead makes it hard to experiment with fault - tolerance schemes with multiple encoded qubits. the 15 - qubit hamming code protects seven encoded qubits to distance three. we give fault - tolerant procedures for applying arbitrary clifford operations on these encoded qubits, using only two extra qubits, 17 total. in particular, individual encoded qubits within the code block can be targeted. fault - tolerant universal computation is possible with four extra qubits, 19 total. the procedures could enable testing more sophisticated protected circuits in small - scale quantum devices. our main technique is to use gadgets to protect gates against correlated faults. we also take advantage of special code symmetries, and use pieceable fault tolerance.
arxiv:1705.05365
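the 15 - qubit code discussed above is the quantum [[ 15, 7, 3 ]] hamming code, obtained through the css construction from the classical [ 15, 11 ] hamming code and its dual. as a classical illustration of the error correction involved ( this is the classical relative, not the quantum procedure ), the length - 15 hamming parity check is particularly transparent : the check columns are the binary expansions of 1..15, so the syndrome literally spells out the position of a single bit flip.

```python
# Classical [15, 11] hamming-code syndrome decoding, shown as the classical
# relative of the quantum [[15, 7, 3]] code: xor-ing the positions (1..15)
# of the set bits yields 0 for a codeword, or the position of a single flip.

def syndrome(word):
    """word: list of 15 bits at positions 1..15. Returns error position (0 = none)."""
    s = 0
    for pos in range(1, 16):
        if word[pos - 1]:
            s ^= pos
    return s

codeword = [0] * 15          # the all-zero word is a codeword
codeword[9] ^= 1             # flip position 10 (1-indexed)
pos = syndrome(codeword)     # the syndrome recovers position 10
```

in the quantum setting the analogous syndromes are extracted fault - tolerantly with ancilla qubits, which is where the paper's two extra qubits ( 17 total ) come in.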
in the espresso scenario, ultra - high - energy ( uhe ) cosmic rays ( crs ) are produced via a one - shot reacceleration of galactic - like crs in the relativistic jets of active galactic nuclei, independently of the scattering rate dictated by magnetic fluctuations. in mbarek & caprioli ( 2019 ), we traced test - particle crs in high - resolution magnetohydrodynamic ( mhd ) jet simulations and found that the associated spectral slope, chemical composition, and anisotropy are consistent with uhecr phenomenology. in this work, we extend such an analysis by including sub - grid pitch - angle scattering to model small - scale magnetic turbulence that cannot be resolved by mhd simulations. we find that a large scattering rate unlocks stochastic acceleration and fosters the energization of lower - energy crs, which eventually leads to harder uhecr spectra. yet, the particles that achieve the highest energies ( up to the hillas limit ) are invariably produced by espresso acceleration and their spectrum is independent of the assumed sub - grid scattering rate.
arxiv:2105.05262
the rare kaon decays k _ l - > pi ^ 0 nu { bar nu }, k ^ + - > pi ^ + nu { bar nu }, k _ l - > pi ^ 0 e ^ + e ^ - and k _ l - > pi ^ 0 mu ^ + mu ^ - are theoretically very clean and, being strongly ckm suppressed, highly sensitive to new physics ( np ). recent flavour physics analyses show that they represent unique probes for revealing np effects and to provide information on the np flavour structure. after a brief discussion of the main properties that make rare k decays so promising and of the basic ideas of the most interesting np models, we review the results of recent phenomenological analyses both within and beyond the framework of minimal flavour violation ( mfv ), where the sources of flavour violation are the same as in the standard model. within mfv we present the expectations found for rare k decays from a model - independent analysis and in three mfv models : the littlest higgs ( lh ) model, the ( extra - dimension ) appelquist - cheng - dobrescu model and the minimal supersymmetric standard model ( mssm ) with mfv. beyond mfv, we discuss the results recently found within the mssm ( without mfv ), the lh model with t - parity ( lht ) and the 3 - 3 - 1 ( z ' ) model. while in mfv models only small ( < 30 % ) np effects are allowed in the branching ratios of k _ l - > pi ^ 0 nu { bar nu }, k ^ + - > pi ^ + nu { bar nu }, k _ l - > pi ^ 0 e ^ + e ^ - and k _ l - > pi ^ 0 mu ^ + mu ^ -, beyond mfv, in particular in the mssm and in the lht model, large ( up to an order of magnitude ) enhancements w. r. t. the sm turn out to be possible.
arxiv:0706.3436
graphs are ubiquitous in encoding relational information of real - world objects in many domains. graph generation, whose purpose is to generate new graphs from a distribution similar to the observed graphs, has received increasing attention thanks to the recent advances of deep learning models. in this paper, we conduct a comprehensive review on the existing literature of deep graph generation from a variety of emerging methods to its wide application areas. specifically, we first formulate the problem of deep graph generation and discuss its difference with several related graph learning tasks. secondly, we divide the state - of - the - art methods into three categories based on model architectures and summarize their generation strategies. thirdly, we introduce three key application areas of deep graph generation. lastly, we highlight challenges and opportunities in the future study of deep graph generation. we hope that our survey will be useful for researchers and practitioners who are interested in this exciting and rapidly - developing field.
arxiv:2203.06714
we use a wkb approximation to establish a relation between the wavefront velocity in a strongly coupled theory and the local speed of light in a holographic dual, with our main focus put on systems with lifshitz scaling with dynamical exponent z. we then use einstein equations to relate the behavior of the local speed of light in the bulk with the null energy condition ( nec ) for bulk matter, and we show that it is violated for lifshitz backgrounds with z < 1. we study signal propagation in the gravity dual and show that violations of the nec are incompatible with causality in the strongly coupled theory, ruling out as holographic models lifshitz backgrounds with z < 1. we argue that causality violations in z < 1 theories will show up in correlators as superluminal modes and confirm this for a particular example with z = 1 / 2. finally, as an application, we use z < 1 solutions to uncover regions of the parameter space of curvature squared corrections to gravity where the nec can be violated.
arxiv:1007.1428
single pion and prompt photon large transverse momentum spectra in p - p and au - au collisions are computed in perturbative qcd at rhic energy, s ^ 1 / 2 = 200 gev. next - to - leading order calculations are discussed and compared with p - p scattering data. subsequently, quenching factors are computed to leading order for both pions and photons within the same energy loss model. the good agreement with phenix preliminary data allows for a lower estimate of the energy density reached in central au - au collisions, epsilon > 10 gev / fm ^ 3. double inclusive photon - pion production in p - p and au - au collisions is then addressed. next - to - leading order corrections prove rather small in p - p scattering. in au - au collisions, the quenching of momentum - correlation spectra is seen to be sensitive to parton energy loss processes, which would help to understand how the fragmentation dynamics is modified in nuclear collisions at rhic.
arxiv:hep-ph/0601075
we present the result of trawling through the wise archive for data on classical and recurrent novae. the data show a variety of spectral energy distributions, including stellar photospheres, dust and probable line emission. during the mission wise also detected some novae which erupted subsequent to the survey, providing information about the progenitor systems.
arxiv:1302.4334
timestamp automatic annotation ( taa ) is a crucial procedure for analyzing time - series scrna - seq data, as they unveil dynamic biological developments and cell regeneration processes. however, current taa methods heavily rely on manual timestamps, often overlooking their reliability. this oversight can significantly degrade the performance of timestamp automatic annotation due to noisy timestamps. nevertheless, the current approach for addressing this issue tends to select less critical cleaned samples for timestamp calibration. to tackle this challenge, we have developed a novel timestamp calibration model called scpace for handling noisy labeled time - series scrna - seq data. this approach incorporates a latent variable indicator within a base classifier instead of probability sampling to detect noisy samples effectively. to validate our proposed method, we conducted experiments on both simulated and real time - series scrna - seq datasets. cross - validation experiments with different artificial mislabeling rates demonstrate that scpace outperforms previous approaches. furthermore, after calibrating the timestamps of the original time - series scrna - seq data using our method, we performed supervised pseudotime analysis, revealing that scpace enhances its performance significantly. these findings suggest that scpace is an effective tool for timestamp calibration by enabling reclassification and deletion of detected noisy labeled samples while maintaining robustness across diverse ranges of time - series scrna - seq datasets. the source code is available at https : / / github. com / opus - lightphenexx / scpace.
arxiv:2412.03027
a sliding camera inside an orthogonal polygon $ p $ is a point guard that travels back and forth along an orthogonal line segment $ \ gamma $ in $ p $. the sliding camera $ g $ can see a point $ p $ in $ p $ if the perpendicular from $ p $ onto $ \ gamma $ is inside $ p $. in this paper, we give the first constant - factor approximation algorithm for the problem of guarding $ p $ with the minimum number of sliding cameras. next, we show that the sliding cameras problem is linear - time solvable if the ( suitably defined ) dual graph of the polygon has bounded treewidth. finally, we study art gallery theorems for sliding cameras, giving upper and lower bounds on the number of guards needed relative to the number of vertices $ n $.
arxiv:1604.07099
using felgner ' s problem i revisit a key issue in using the " galois stratification procedure " that first appeared in [ frs76 ]. the emphasis here is on using arithmetic homotopy to make the production of poincare series attached to general diophantine statements canonical. according to work in progress of michael benedikt and e. hrushovski, galois stratification - over one finite field - is as efficient as is possible : on a statement of length n, it requires time bounded by a stack of exponentials of length linear in n. this does not take advantage of problems prepped to use homotopy aspects and chow motives efficiently, as in the main example, which comes from my paper on the generalization of exceptional covers. that example [ frj, chap. 30 ] simplifies aspects of the original procedure. it combines this with the later theory of frobenius fields to produce objects over q whose reductions mod primes give the stratification procedure at the prime. the paper separates two different uses of the chebotarev non - regular analog : 1. field crossing to interpret poincare series coefficients directly from traces on chow motives ( providing valuable statements on variation with the prime p ) ; versus 2. chebotarev using lang - weil to approximate the number of points on an appropriate variety for the galois stratification procedure. we consider variables taking values in the algebraic closure of z / p but fixed by respective powers of the frobenius : we call these frobenius vectors. for this there is a twisted chebotarev version stemming from a conjecture of deligne, and outlined in a preprint of hrushovski. this paper expands on the work of d. wan, j. denef and f. loeser, j. nicaise, i. tomasic and e. hrushovski, all relevant to taking the galois stratification procedure beyond the original finite field framework.
arxiv:2208.09476
we consider bilinear pseudo - differential operators with symbols in the bilinear h \ " ormander class, $ bs _ { \ rho, \ rho } ^ m $, $ m \ in \ mathbb { r } $, $ 0 \ leq \ rho < 1 $. the aim of this paper is to discuss low regularity conditions for symbols to assure the boundedness from $ l ^ 2 \ times l ^ 2 $ to $ h ^ 1 $ and from $ l ^ 2 \ times bmo $ to $ l ^ 2 $.
arxiv:2001.04648
bayesian nonparametric marginal methods are very popular since they lead to fairly easy implementation due to the formal marginalization of the infinite - dimensional parameter of the model. however, the straightforwardness of these methods also entails some limitations : they typically yield point estimates in the form of posterior expectations, but cannot be used to estimate non - linear functionals of the posterior distribution, such as the median, mode or credible intervals. this is particularly relevant in survival analysis, where non - linear functionals such as the median survival time play a central role for clinicians and practitioners. the main goal of this paper is to summarize the methodology introduced in [ arbel et al., comput. stat. data. an., 2015 ] for hazard mixture models in order to draw approximate bayesian inference on survival functions that is not limited to the posterior mean. in addition, we propose a practical implementation of an r package called momentify designed for moment - based density approximation, and, by means of an extensive simulation study, we thoroughly compare the introduced methodology with standard marginal methods and empirical estimation.
arxiv:1506.05269
we calculate the optical and raman response within a phenomenological model of fermion quasiparticles coupled to nearly critical collective modes. we find that, whereas critical scaling properties might be masked in optical spectra due to charge conservation, distinct critical signatures of charge and spin fluctuations can be detected in raman spectra exploiting specific symmetry properties. we compare our results with recent experiments on the cuprates.
arxiv:0803.1935
we introduce a greedy algorithm optimizing arrangements of lines with respect to a property. we apply this algorithm to the case of simpliciality : it recovers all known simplicial arrangements of lines in a very short time and also produces a yet unknown simplicial arrangement with 35 lines. we compute a ( certainly incomplete ) database of combinatorially simplicial complex arrangements of hyperplanes with up to 50 lines. surprisingly, it contains several examples whose matroids have an infinite space of realizations up to projectivities.
arxiv:2006.14431
intelligent reflecting surface ( irs ) is deemed as a promising and revolutionizing technology for future wireless communication systems owing to its capability to intelligently change the propagation environment and introduce a new dimension into wireless communication optimization. most existing studies on irs are based on an ideal reflection model. however, it is difficult to implement an irs which can simultaneously realize any adjustable phase shift for the signals with different frequencies. therefore, the practical phase shift model, which can describe the difference of irs phase shift responses for the signals with different frequencies, should be utilized in the irs optimization for wideband and multi - band systems. in this paper, we consider an irs - assisted multi - cell multi - band system, in which different base stations ( bss ) operate at different frequency bands. we aim to jointly design the transmit beamforming of bss and the reflection beamforming of the irs to minimize the total transmit power subject to signal to interference - plus - noise ratio ( sinr ) constraints of individual user and the practical irs reflection model. with the aid of the practical phase shift model, the influence between the signals with different frequencies is taken into account during the design of irs. simulation results illustrate the importance of considering the practical communication scenario on the irs designs and validate the effectiveness of our proposed algorithm.
arxiv:2101.01382
extracting and identifying latent topics in large text corpora has gained increasing importance in natural language processing ( nlp ). most models, whether probabilistic models similar to latent dirichlet allocation ( lda ) or neural topic models, follow the same underlying approach of topic interpretability and topic extraction. we propose a method that incorporates a deeper understanding of both sentence and document themes, and goes beyond simply analyzing word frequencies in the data. this allows our model to detect latent topics that may include uncommon words or neologisms, as well as words not present in the documents themselves. additionally, we propose several new evaluation metrics based on intruder words and similarity measures in the semantic space. we present correlation coefficients with human identification of intruder words and achieve near - human level results at the word - intrusion task. we demonstrate the competitive performance of our method with a large benchmark study, and achieve superior results compared to state - of - the - art topic modeling and document clustering models.
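the word - intrusion evaluation described above can be made concrete with a toy sketch : given a topic ' s top words plus one injected intruder, an automatic detector picks the word least similar, on average, to the rest in the semantic space. the words and 3 - d vectors below are invented for illustration and are not the paper ' s embeddings or metrics.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv)

def pick_intruder(words, vectors):
    """Return the word with the lowest average cosine similarity to the rest."""
    def avg_sim(w):
        others = [v for x, v in vectors.items() if x != w]
        return sum(cosine(vectors[w], o) for o in others) / len(others)
    return min(words, key=avg_sim)

# toy 3-d "embeddings": four music words and one cooking intruder
vectors = {
    "guitar": (0.9, 0.1, 0.0),
    "piano":  (0.8, 0.2, 0.1),
    "violin": (0.85, 0.15, 0.05),
    "drums":  (0.7, 0.3, 0.0),
    "oven":   (0.05, 0.1, 0.95),
}
print(pick_intruder(list(vectors), vectors))  # → oven
```

a detector that matches human intruder choices on such tasks is the basis of the correlation - with - humans evaluation the abstract reports.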
arxiv:2303.17324
the so - called vegetation red - edge ( vre ), a sharp increase in the reflectance around $ 700 nm $, is a characteristic of vegetation spectra, and can therefore be used as a biomarker if it can be detected in an unresolved extrasolar earth - like planet integrated reflectance spectrum. here we investigate the potential for detection of vegetation spectra during the last quaternary climatic extrema, the last glacial maximum ( lgm ) and the holocene optimum, for which past climatic simulations have been made. by testing the vre detectability during these extrema when earth ' s climate and biomes maps were different from today, we are able to test the vegetation detectability on a terrestrial planet different from our modern earth. data from the biome3. 5 model have been associated to visible gome spectra for each biome and cloud cover to derive earth ' s integrated spectra for given earth phases and observer positions. the vre is then measured. results show that the vegetation remains detectable during the last climatic extrema. compared to current earth, the holocene optimum with a greener sahara slightly increases the mean vre on one hand, while on the other hand, the large ice cap over the northern hemisphere during the lgm decreases vegetation detectability. we finally discuss the detectability of the vre in the context of recently proposed space missions.
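as a rough illustration of how the vre is quantified, a common convention measures the relative reflectance jump across the ~ 700 nm edge ; the exact definition and the reflectance values below are illustrative assumptions, not the paper ' s pipeline.

```python
def vre(r_red, r_nir):
    """Vegetation red-edge strength as the relative reflectance jump
    across ~700 nm: (R_NIR - R_red) / R_red."""
    return (r_nir - r_red) / r_red

# illustrative reflectances for a vegetated scene (made-up values):
# ~5% near 670 nm (chlorophyll absorption), ~30% near 750 nm (NIR plateau)
print(round(vre(0.05, 0.30), 2))  # → 5.0
```

for a disk - integrated spectrum, cloud cover and non - vegetated surfaces dilute this jump to the percent level, which is why detectability depends on the biome and ice - cap maps discussed above.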
arxiv:0901.1214
the ruijsenaars - schneider models are integrable dynamical realizations of the poincare group in 1 + 1 dimensions, which reduce to the calogero and sutherland systems in the nonrelativistic limit. in this work, a possibility to construct a one - parameter deformation of the ruijsenaars - schneider models by uplifting the poincare algebra in 1 + 1 dimensions to the anti de sitter algebra is studied. it is shown that amendments including a cosmological constant are feasible for the rational variant, while the hyperbolic and trigonometric systems are ruled out by our analysis. the issue of integrability of the deformed rational model is discussed in some detail. a complete proof of integrability remains a challenge.
arxiv:2411.13928
this paper studies the application of cooperative techniques for non - orthogonal multiple access ( noma ). more particularly, fixed gain amplify - and - forward ( af ) relaying with noma is investigated over nakagami - $ m $ fading channels. two scenarios are considered : 1 ) in the first scenario, the base station ( bs ) intends to communicate with multiple users through the assistance of af relaying, where direct links exist between the bs and the users ; and 2 ) in the second scenario, af relaying between the bs and the users is nonexistent. to characterize the performance of the considered scenarios, new closed - form expressions for both exact and asymptotic outage probabilities are derived. based on the analytical results, the diversity orders achieved by the users are obtained. for the first and second scenarios, the diversity order for the $ n $ - th user is $ \ mu ( n + 1 ) $ and $ \ mu n $, respectively. simulation results unveil that noma is capable of outperforming orthogonal multiple access ( oma ) in terms of outage probability and system throughput. it is also worth noting that noma can provide better fairness compared to conventional oma. comparing the two scenarios, the cooperative noma scenario provides better outage performance than the second ( non - cooperative ) scenario.
arxiv:1812.07407
we study the space of all tilings which can be obtained using the robinson tiles ( this is a two - dimensional subshift of finite type ). we prove that it has a unique minimal subshift, and describe it by means of a substitution. this description allows us to compute its cohomology groups, and prove that it is a model set.
arxiv:1203.1387
we study the quotient of the mapping class group $ \ operatorname { mod } _ g ^ n $ of a surface of genus $ g $ with $ n $ punctures, by the subgroup $ \ operatorname { mod } _ g ^ n [ p ] $ generated by the $ p $ - th powers of dehn twists. our first main result is that $ \ operatorname { mod } _ g ^ 1 / \ operatorname { mod } _ g ^ 1 [ p ] $ contains an infinite normal subgroup of infinite index, and in particular is not commensurable to a higher - rank lattice, for all but finitely many explicit values of $ p $. next, we prove that $ \ operatorname { mod } _ g ^ 0 / \ operatorname { mod } _ g ^ 0 [ p ] $ contains a k \ " ahler subgroup of finite index, for every $ p \ ge 2 $ coprime with six. finally, we observe that the existence of finite - index subgroups of $ \ operatorname { mod } _ g ^ 0 $ with infinite abelianization is equivalent to the analogous problem for $ \ operatorname { mod } _ g ^ 0 / \ operatorname { mod } _ g ^ 0 [ p ] $.
arxiv:1804.10440
gauge invariant lagrangian descriptions of irreducible and reducible half - integer higher - spin mixed - symmetric massless and massive representations of the poincare group with off - shell algebraic constraints are constructed within a metric - like formulation in a $ d $ - dimensional flat space - time on the basis of a suggested constrained brst approach. a lorentz - invariant resolution of the brst complex within the constrained brst formulation produces a gauge - invariant fang - fronsdal lagrangian entirely in terms of the initial triple gamma - traceless spin - tensor field $ \ psi _ { ( \ mu ) _ { n } } $ with a gamma - traceless gauge parameter. the triplet and quartet formulations are derived. the minimal ( un ) constrained brst - bv actions for the above formulations are obtained from the proposed constrained brst - bv approach, which provides appropriate tools to construct interacting constrained lagrangians.
arxiv:1803.05173
table reasoning tasks have shown remarkable progress with the development of large language models ( llms ), which involve interpreting and drawing conclusions from tabular data based on natural language ( nl ) questions. existing solutions mainly tested on smaller tables face scalability issues and struggle with complex queries due to incomplete or dispersed data across different table sections. to alleviate these challenges, we propose tap4llm as a versatile pre - processor suite for leveraging llms in table - based tasks effectively. it covers several distinct components : ( 1 ) table sampling to decompose large tables into manageable sub - tables based on query semantics, ( 2 ) table augmentation to enhance tables with additional knowledge from external sources or models, and ( 3 ) table packing & serialization to convert tables into various formats suitable for llms ' understanding. in each module, we design and compare several common methods under various usage scenarios, aiming to shed light on the best practices for leveraging llms for table - reasoning tasks. our experiments show that our method improves llms ' reasoning capabilities in various tabular tasks and enhances the interaction between llms and tabular data by employing effective pre - processing.
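as a minimal sketch of the sampling and serialization steps, the snippet below keeps only rows matching query keywords and renders the resulting sub - table as markdown, one common llm - friendly format ; the function names and the keyword - matching rule are illustrative assumptions, not tap4llm ' s actual components.

```python
def to_markdown(headers, rows):
    """Serialize a small (sub-)table into markdown, one common format
    for passing tabular data to an LLM prompt."""
    lines = ["| " + " | ".join(headers) + " |",
             "| " + " | ".join("---" for _ in headers) + " |"]
    for row in rows:
        lines.append("| " + " | ".join(str(c) for c in row) + " |")
    return "\n".join(lines)

def sample_rows(headers, rows, keywords):
    """Naive query-aware sampling: keep rows containing any query keyword."""
    kws = [k.lower() for k in keywords]
    return [r for r in rows if any(k in str(c).lower() for c in r for k in kws)]

headers = ["city", "population"]
rows = [["paris", 2_100_000], ["lyon", 500_000], ["nice", 340_000]]
sub = sample_rows(headers, rows, ["paris"])
print(to_markdown(headers, sub))
# | city | population |
# | --- | --- |
# | paris | 2100000 |
```

a real sampler would rank rows by semantic relevance to the query rather than substring matching, but the decompose - then - serialize pipeline has this shape.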
arxiv:2312.09039
a code $ c $ is a subset of the vertex set of a graph and $ c $ is $ s $ - neighbour - transitive if its automorphism group $ { \ rm aut } ( c ) $ acts transitively on each of the first $ s + 1 $ parts $ c _ 0, c _ 1, \ ldots, c _ s $ of the distance partition $ \ { c = c _ 0, c _ 1, \ ldots, c _ \ rho \ } $, where $ \ rho $ is the covering radius of $ c $. while codes have traditionally been studied in the hamming and johnson graphs, we consider here codes in the kneser graphs. let $ \ omega $ be the underlying set on which the kneser graph $ k ( n, k ) $ is defined. our first main result says that if $ c $ is a $ 2 $ - neighbour - transitive code in $ k ( n, k ) $ such that $ c $ has minimum distance at least $ 5 $, then $ n = 2k + 1 $ ( i. e., $ c $ is a code in an odd graph ) and $ c $ lies in a particular infinite family or is one particular sporadic example. we then prove several results when $ c $ is a neighbour - transitive code in the kneser graph $ k ( n, k ) $. first, if $ { \ rm aut } ( c ) $ acts intransitively on $ \ omega $ we characterise $ c $ in terms of certain parameters. we then assume that $ { \ rm aut } ( c ) $ acts transitively on $ \ omega $, first proving that if $ c $ has minimum distance at least $ 3 $ then either $ k ( n, k ) $ is an odd graph or $ { \ rm aut } ( c ) $ has a $ 2 $ - homogeneous ( and hence primitive ) action on $ \ omega $. we then assume that $ c $ is a code in an odd graph and $ { \ rm aut } ( c ) $ acts imprimitively on $ \ omega $ and characterise $ c $ in terms of certain parameters. we give examples in each of these cases and pose several open problems.
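for readers unfamiliar with the setting, the kneser graph $ k ( n, k ) $ is easy to construct explicitly ; for $ n = 2k + 1 = 5 $, $ k = 2 $ it is an odd graph as discussed above ( in fact the petersen graph ). the construction below is a standard illustration, not code from the paper.

```python
from itertools import combinations

def kneser_graph(n, k):
    """Vertices: k-subsets of an n-set; edges join disjoint subsets."""
    verts = [frozenset(c) for c in combinations(range(n), k)]
    edges = {(u, v) for u, v in combinations(verts, 2) if not (u & v)}
    return verts, edges

# K(5, 2) is the Petersen graph: 10 vertices, 15 edges, 3-regular
verts, edges = kneser_graph(5, 2)
deg = {v: sum(v in e for e in edges) for v in verts}
print(len(verts), len(edges), set(deg.values()))  # → 10 15 {3}
```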
arxiv:2307.09752
large language models ( llms ) have demonstrated remarkable capabilities, yet their transition to real - world applications reveals a critical limitation : the inability to adapt to individual preferences while maintaining alignment with universal human values. current alignment techniques adopt a one - size - fits - all approach that fails to accommodate users ' diverse backgrounds and needs. this paper presents the first comprehensive survey of personalized alignment - a paradigm that enables llms to adapt their behavior within ethical boundaries based on individual preferences. we propose a unified framework comprising preference memory management, personalized generation, and feedback - based alignment, systematically analyzing implementation approaches and evaluating their effectiveness across various scenarios. by examining current techniques, potential risks, and future challenges, this survey provides a structured foundation for developing more adaptable and ethically - aligned llms.
arxiv:2503.17003
we consider the scalar field profile around relativistic compact objects such as neutron stars for a range of modified gravity models with screening mechanisms of the chameleon and damour - polyakov types. we focus primarily on inverse power law chameleons and the environmentally dependent dilaton as examples of both mechanisms. we discuss the modified tolman - oppenheimer - volkoff equation and then implement a relaxation algorithm to solve for the scalar profiles numerically. we find that chameleons and dilatons behave in a similar manner and that there is a large degeneracy between the modified gravity parameters and the neutron star equation of state. this is exemplified by the modifications to the mass - radius relationship for a variety of model parameters.
arxiv:1702.02983
in elementary number theory, the proof may very well hinge on the remark that any natural number has a successor - a statement which should itself be proved or be taken as an axiom, so is not trivial ( for more, see peano ' s axioms ). = = = trivial proofs = = = in some texts, a trivial proof refers to a statement involving a material implication p → q, where the consequent q is always true. here, the proof follows immediately by virtue of the definition of material implication : the implication is true regardless of the truth value of the antecedent p if the consequent is fixed as true. a related concept is a vacuous truth, where the antecedent p in a material implication p → q is false. in this case, the implication is always true regardless of the truth value of the consequent q - again by virtue of the definition of material implication. = = humor = = a common joke in the mathematical community is to say that " trivial " is synonymous with " proved " - that is, any theorem can be considered " trivial " once it is known to be proved as true. two mathematicians who are discussing a theorem : the first mathematician says that the theorem is " trivial ". in response to the other ' s request for an explanation, he then proceeds with twenty minutes of exposition. at the end of the explanation, the second mathematician agrees that the theorem is trivial. but can we say that this theorem is trivial even if it takes a lot of time and effort to prove it? when a mathematician says that a theorem is trivial, but he is unable to prove it by himself at the moment that he pronounces it as trivial, is the theorem trivial? often, as a joke, a problem is referred to as " intuitively obvious ". for example, someone experienced in calculus would consider the following statement trivial : $ \int _ 0 ^ 1 x ^ 2 \, dx = \frac { 1 } { 3 } $. however, to someone with no knowledge of integral calculus, this is not obvious, so it is not trivial. = = examples = = in number theory, it is often important to find factors of an integer number n. any number n has four obvious factors : ± 1 and ± n. these are called " trivial factors ". any other factor, if it exists, would
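the " intuitively obvious " integral $ \int _ 0 ^ 1 x ^ 2 \, dx = 1 / 3 $ can also be checked numerically ; a crude left riemann sum already approaches the exact value.

```python
def riemann(f, a, b, n):
    """Left Riemann sum of f on [a, b] with n subintervals."""
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

approx = riemann(lambda x: x * x, 0.0, 1.0, 100_000)
print(round(approx, 4))  # → 0.3333
```

the left sum underestimates by roughly $ 1 / ( 2n ) $ here, so 100 000 subintervals already give four correct decimals.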
https://en.wikipedia.org/wiki/Triviality_(mathematics)
the additive inverse ( sometimes called negation ) of the operand. abstractly then, the difference of two numbers is the sum of the minuend with the additive inverse of the subtrahend. while 0 is its own additive inverse ( − 0 = 0 ), the additive inverse of a positive number is negative, and the additive inverse of a negative number is positive. a double application of this operation is written as − ( − 3 ) = 3. the plus sign is predominantly used in algebra to denote the binary operation of addition, and only rarely to emphasize the positivity of an expression. in common numeral notation ( used in arithmetic and elsewhere ), the sign of a number is often made explicit by placing a plus or a minus sign before the number. for example, + 3 denotes " positive three ", and − 3 denotes " negative three " ( algebraically : the additive inverse of 3 ). without specific context ( or when no explicit sign is given ), a number is interpreted by default as positive. this notation establishes a strong association of the minus sign " − " with negative numbers, and the plus sign " + " with positive numbers. = = = sign of zero = = = within the convention of zero being neither positive nor negative, a specific sign - value 0 may be assigned to the number value 0. this is exploited in the $ \operatorname { sgn } $ function, as defined for real numbers. in arithmetic, + 0 and − 0 both denote the same number 0. there is generally no danger of confusing the value with its sign, although the convention of assigning both signs to 0 does not immediately allow for this discrimination. in certain european countries, e. g. in belgium and france, 0 is considered to be both positive and negative following the convention set forth by nicolas bourbaki.
in some contexts, such as floating - point representations of real numbers within computers, it is useful to consider signed versions of zero, with signed zeros referring to different, discrete number representations ( see signed number representations for more ). the symbols + 0 and − 0 rarely appear as substitutes for $ 0 ^ + $ and $ 0 ^ - $, used in calculus and mathematical analysis for one - sided limits ( right - sided limit and left - sided limit, respectively ). this notation refers to the behaviour of a function as its real input variable approaches 0 along positive ( resp., negative ) values ; the two limits need not exist or agree. = = = terminology for signs =
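python ' s ieee - 754 floats exhibit exactly the signed - zero behaviour described above : + 0. 0 and - 0. 0 compare equal as numbers, yet the sign bit is observable, for example through math. copysign or the branch cut of atan2.

```python
import math

pos, neg = 0.0, -0.0
print(pos == neg)               # → True: +0 and -0 compare equal as numbers
print(math.copysign(1.0, pos))  # → 1.0
print(math.copysign(1.0, neg))  # → -1.0: the sign bit survives
# the distinction matters for one-sided behaviour and branch cuts:
print(math.atan2(0.0, pos))     # → 0.0
print(math.atan2(0.0, neg))     # → 3.141592653589793
```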
https://en.wikipedia.org/wiki/Sign_(mathematics)
we summarize results presented at this conference with special emphasis on hard processes with jets and heavy quarks, soft particle production, small x structure functions and diffraction as well as heavy ion collisions and quark gluon plasma.
arxiv:hep-ph/0411167
our main result is an explicit operator - theoretic formula for the number of colored planar maps with a fixed set of stars each of which has a fixed set of half - edges with fixed coloration. the formula bounds the number of such colored planar maps well enough to prove convergence near the origin of generating functions arising naturally in the matrix model context. such convergence is known but the proof of convergence proceeding by way of our main result is relatively simple. besides voiculescu ' s generalization of wigner ' s semicircle law, our main technical tool is an integration identity representing the joint cumulant of several functions of a gaussian random vector. the latter identity in the case of cumulants of order 2 reduces to one well - known as a means to prove the poincare inequality. we derive the identity by combining the heat equation with the so - called bkar formula from constructive quantum field theory and rigorous statistical mechanics.
arxiv:1203.3185
the self - consistent gw band gaps are known to be significantly overestimated. we show that this overestimation is, to a large extent, due to the neglect of the contribution of the lattice polarization to the screening of the electron - electron interaction. to solve this problem, we derive within the gw formalism a generalized plasmon - pole model that accounts for lattice polarization. the resulting gw self - energy is used to calculate the band structures of a set of binary semiconductors and insulators. the lattice contribution always decreases the band gap. the shrinkage increases with the size of the longitudinal - transverse optical splitting and it can represent more than 15 % of the band gap in highly polar compounds, reducing the band - gap percentage error by a factor of three.
arxiv:1304.7911
the success of graphene for nanopore dna sequencing has shown that it is possible to explore other potential single - atom and few - atom thick layers of elemental 2d materials beyond graphene ( e. g., phosphorene and silicene ). using density functional theory, we studied the interaction of dna bases with nanopores created in finite - size nanoribbons from graphene, phosphorene, and silicene. we observe that binding energies of dna bases using nanopores from phosphorene and silicene are generally smaller compared to graphene. the band gaps of phosphorene and silicene are significantly altered due to interaction with dna bases compared to graphene. our findings show that phosphorene and silicene are promising alternatives to graphene for dna base detection using advanced detection principles such as transverse tunneling current measurement.
arxiv:2112.07511
we propose several new schedules for strassen - winograd ' s matrix multiplication algorithm, they reduce the extra memory allocation requirements by three different means : by introducing a few pre - additions, by overwriting the input matrices, or by using a first recursive level of classical multiplication. in particular, we show two fully in - place schedules : one having the same number of operations, if the input matrices can be overwritten ; the other one, slightly increasing the constant of the leading term of the complexity, if the input matrices are read - only. many of these schedules have been found by an implementation of an exhaustive search algorithm based on a pebble game.
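for context, the classic strassen recursion ( seven recursive products instead of eight ) is sketched below in plain python. note that this naive version allocates fresh temporaries at every level, which is precisely the extra memory the paper ' s schedules reduce ; the winograd variant and the in - place schedules themselves are not reproduced here.

```python
def add(A, B): return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
def sub(A, B): return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def strassen(A, B):
    """Multiply two n x n matrices (n a power of two) using Strassen's
    seven recursive products. Allocates temporaries freely (not in-place)."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    sp = lambda M, r, c: [row[c:c + h] for row in M[r:r + h]]
    A11, A12, A21, A22 = sp(A, 0, 0), sp(A, 0, h), sp(A, h, 0), sp(A, h, h)
    B11, B12, B21, B22 = sp(B, 0, 0), sp(B, 0, h), sp(B, h, 0), sp(B, h, h)
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(strassen(A, B))  # → [[19, 22], [43, 50]]
```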
arxiv:0707.2347
the research on human emotion under multimedia stimulation based on physiological signals is an emerging field, and important progress has been achieved for emotion recognition based on multi - modal signals. however, it is challenging to make full use of the complementarity among spatial - spectral - temporal domain features for emotion recognition, as well as model the heterogeneity and correlation among multi - modal signals. in this paper, we propose a novel two - stream heterogeneous graph recurrent neural network, named hetemotionnet, fusing multi - modal physiological signals for emotion recognition. specifically, hetemotionnet consists of the spatial - temporal stream and the spatial - spectral stream, which can fuse spatial - spectral - temporal domain features in a unified framework. each stream is composed of the graph transformer network for modeling the heterogeneity, the graph convolutional network for modeling the correlation, and the gated recurrent unit for capturing the temporal domain or spectral domain dependency. extensive experiments on two real - world datasets demonstrate that our proposed model achieves better performance than state - of - the - art baselines.
arxiv:2108.03354
the use of third - party datasets and pre - trained machine learning models poses a threat to nlp systems due to the possibility of hidden backdoor attacks. existing attacks involve poisoning the data samples, e. g. by insertion of tokens or sentence paraphrasing, which either alters the semantics of the original texts or can be detected. our main difference from previous work is that we use the repositioning of two words in a sentence as the trigger. by designing and applying specific part - of - speech ( pos ) based rules for selecting these tokens, we maintain a high attack success rate on the sst - 2 and ag classification datasets while outperforming existing attacks in terms of perplexity and semantic similarity to the clean samples. in addition, we show the robustness of our attack to the onion defense method. all the code and data for the paper can be obtained at https : / / github. com / alekseevskaia / orderbkd.
arxiv:2402.07689
liu et al. recently reported that biallelic mutations in daglb are responsible for autosomal recessive early - onset parkinson ' s disease. they identified six patients carrying daglb mutations, all of chinese origin and presenting with typical parkinson disease. no additional cases outside china have been reported so far. to assess the causality of daglb in our cohort, we used data mining in the exomes of 684 index cases with either autosomal recessive or sporadic early onset parkinson disease ( < 50 years ). we identified a homozygous p. pro357leu missense variant in a single consanguineous pd case. this mutation, predicted to be deleterious, affects a conserved amino acid located in the catalytic domain of the protein, near the pathological p. asp363gly mutation described in the original paper. as with the most frequent genes involved in ar - pd ( prkn, pink1 ), the daglb - associated disease presents and evolves like typical pd. this work reinforces the fact that daglb is involved in early onset parkinson disease, but given that we identified a single patient among the 684 index cases screened, we conclude that daglb is a very rare cause of early onset autosomal recessive parkinson disease. however, we demonstrate that mutations in daglb are not limited to the chinese population but can also account for pd in north africa. these new data indicate that daglb variants should be considered in non - chinese cases with early - onset typical parkinson disease.
arxiv:2310.12521
the proliferation of deepfake media is raising concerns among the public and relevant authorities. it has become essential to develop countermeasures against forged faces in social media. this paper presents a comprehensive study on two new countermeasure tasks : multi - face forgery detection and segmentation in - the - wild. localizing forged faces among multiple human faces in unrestricted natural scenes is far more challenging than the traditional deepfake recognition task. to promote these new tasks, we have created the first large - scale dataset posing a high level of challenges that is designed with face - wise rich annotations explicitly for face forgery detection and segmentation, namely openforensics. with its rich annotations, our openforensics dataset has great potentials for research in both deepfake prevention and general human face detection. we have also developed a suite of benchmarks for these tasks by conducting an extensive evaluation of state - of - the - art instance detection and segmentation methods on our newly constructed dataset in various scenarios. the dataset, benchmark results, codes, and supplementary materials will be publicly available on our project page : https : / / sites. google. com / view / ltnghia / research / openforensics
arxiv:2107.14480
we investigate heavy quark symmetries for heavy meson hadronic molecules, and explore the consequences of assuming the x ( 3872 ) and $ z _ b ( 10610 ) $ to be an isoscalar $ d \ bar d ^ * $ and an isovector $ b \ bar b ^ * $ hadronic molecule, respectively. the symmetry allows us to predict new hadronic molecules ; in particular, we find an isoscalar $ 1 ^ { + + } $ $ b \ bar b ^ * $ bound state with a mass of about 10580 mev and the isovector charmonium partners of the $ z _ b ( 10610 ) $ and $ z _ b ( 10650 ) $ states. next, we study the $ x ( 3872 ) \ to d ^ 0 \ bar d ^ 0 \ pi ^ 0 $ three body decay. this decay mode is more sensitive to the long - distance structure of the x ( 3872 ) resonance than its $ j / \ psi \ pi \ pi $ and $ j / \ psi 3 \ pi $ decays, which are mainly controlled by the short - distance part of the x ( 3872 ) molecular wave function. we discuss the $ d ^ 0 \ bar d ^ 0 $ final state interactions, which in some situations become quite important. indeed, in these cases a precise measurement of this partial decay width could provide precise information on the interaction strength between the $ d ^ { ( * ) } \ bar d ^ { ( * ) } $ charm mesons.
arxiv:1409.4390
chubukov ' s proposal concerning the possibility of a nondimerized quantum nematic phase in the ground - state phase diagram of the bilinear - biquadratic spin - 1 chain is studied numerically. our results do not support the existence of this phase, but they rather indicate a direct transition from the ferromagnetic into the dimerized phase.
arxiv:cond-mat/9501108
analysis and design of filtered - x adaptive algorithms are conventionally done by assuming that the transfer function in the secondary path is a discrete - time system. however, in real systems such as active noise control, the secondary path is a continuous - time system. therefore, such a system should be analyzed and designed as a hybrid system including discrete - and continuous - time systems and ad / da devices. in this article, we propose a hybrid design taking account of continuous - time behavior of the secondary path via lifting ( continuous - time polyphase decomposition ) technique in sampled - data control theory.
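the conventional discrete - time design that the paper improves upon is the filtered - x lms algorithm, sketched below under the simplifying assumptions of a known fir secondary path and a perfect secondary - path estimate ; the path coefficients and step size are illustrative, not taken from the article.

```python
import random

def fir(h, x, n):
    """Output of FIR filter h at time n for input sequence x."""
    return sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)

def fxlms(x, P, S, taps=8, mu=0.01):
    """Discrete-time filtered-x LMS: adapt w so the secondary-path output
    cancels the primary-path output d = P*x. Assumes S_hat = S exactly."""
    w = [0.0] * taps
    y = [0.0] * len(x)
    errors = []
    for n in range(len(x)):
        y[n] = fir(w, x, n)               # control signal
        e = fir(P, x, n) - fir(S, y, n)   # residual error at the sensor
        # reference filtered through the secondary-path estimate:
        xf = [fir(S, x, n - k) if n - k >= 0 else 0.0 for k in range(taps)]
        for k in range(taps):             # filtered-x gradient update
            w[k] += mu * e * xf[k]
        errors.append(e)
    return errors

random.seed(0)
x = [random.gauss(0, 1) for _ in range(4000)]
P = [0.8, 0.4, -0.2]  # primary path (illustrative)
S = [0.5, 0.3]        # secondary path (illustrative)
err = fxlms(x, P, S)
early = sum(e * e for e in err[:200]) / 200
late = sum(e * e for e in err[-200:]) / 200
print(late < early)  # → True: the residual noise power decreases
```

the paper ' s point is that in a real system the secondary path is continuous - time, so this purely discrete model - and hence the update above - ignores intersample behaviour that the lifted sampled - data design captures.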
arxiv:1308.3300
we present thermal emission measurements of gj 1132b spanning 5 - - 12 um obtained with the mid - infrared instrument low - resolution spectrometer ( miri / lrs ) on the james webb space telescope ( jwst ). gj 1132b is an m - dwarf rocky planet with teq = 584 k and an orbital period of 1. 6 days. we measure a white - light secondary eclipse depth of 140 + / - 17 ppm, which corresponds to a dayside brightness temperature of tp, dayside = 709 + / - 31 k using improved star and planet parameters. this measured temperature is only 1 sigma below the maximum possible dayside temperature of a bare rock ( i. e., assuming a zero albedo planet with no heat redistribution, tmax = 746 + 14 / - 11 k ). the emission spectrum is consistent with a featureless blackbody, which agrees with a wide range of possible surface compositions. by comparing forward models to the dayside emission spectrum, we rule out earth - thickness ( p ~ 1 bar ) atmospheres with at least 1 % h2o, atmospheres of any modeled thickness ( 10 ^ - 4 - - 10 ^ 2 bar ) that contain at least 1 % co2, and thick, venus - like atmospheres ( p > ~ 100 bar ) with at least 1 ppm co2 or h2o. we therefore conclude that gj 1132b likely does not have a significant atmosphere. this finding supports the concept of a universal ' cosmic shoreline ' given the high level of bolometric and xuv irradiation received by the planet.
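the quoted maximum bare - rock dayside temperature follows from the equilibrium temperature under the stated assumptions ( zero albedo, no heat redistribution ) via the standard relation $ t _ { max } = ( 2 / 3 ) ^ { 1 / 4 } \sqrt { 2 } \, t _ { eq } $ ; a quick check reproduces the paper ' s value.

```python
def bare_rock_tmax(t_eq):
    """Maximum dayside temperature of a zero-albedo rock with no heat
    redistribution, given the full-redistribution equilibrium temperature."""
    return (2.0 / 3.0) ** 0.25 * 2.0 ** 0.5 * t_eq

print(round(bare_rock_tmax(584)))  # → 746, matching the quoted tmax in K
```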
arxiv:2408.13340
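The quoted bare-rock limit follows from the quoted equilibrium temperature via the standard zero-albedo relations (teq assumes full heat redistribution, tmax assumes none). A quick consistency check using the textbook factor (8/3)^(1/4), not the paper's detailed modeling:

```python
# Teq  = (1/4)^(1/4) * Tirr   (zero albedo, full redistribution)
# Tmax = (2/3)^(1/4) * Tirr   (zero albedo, no redistribution)
# => Tmax = (8/3)^(1/4) * Teq
teq = 584.0                       # K, quoted equilibrium temperature
tmax = (8.0 / 3.0) ** 0.25 * teq  # maximum bare-rock dayside temperature
print(round(tmax))                # close to the quoted 746 K
```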
in mathematics, a domino is a polyomino of order 2, that is, a polygon in the plane made of two equal-sized squares connected edge-to-edge. when rotations and reflections are not considered to be distinct shapes, there is only one free domino. since it has reflection symmetry, it is also the only one-sided domino (with reflections considered distinct). when rotations are also considered distinct, there are two fixed dominoes: the second can be created by rotating the first by 90°. in a wider sense, the term domino is sometimes understood to mean a tile of any shape. = = packing and tiling = = dominoes can tile the plane in a countably infinite number of ways. the number of tilings of a 2×n rectangle with dominoes is f_n, the nth fibonacci number. domino tilings figure in several celebrated problems, including the aztec diamond problem, in which large diamond-shaped regions have a number of tilings equal to a power of two, with most tilings appearing random within a central circular region and having a more regular structure outside of this "arctic circle", and the mutilated chessboard problem, in which removing two opposite corners from a chessboard makes it impossible to tile with dominoes. = = see also = = dominoes, a set of domino-shaped gaming pieces tatami, japanese domino-shaped floor mats = = references = =
https://en.wikipedia.org/wiki/Domino_(mathematics)
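The 2×n tiling count can be checked with a short dynamic program: a tiling either ends in one vertical domino (leaving a 2×(n-1) rectangle) or in two stacked horizontal dominoes (leaving 2×(n-2)), which yields the Fibonacci recurrence. (Which index of the Fibonacci sequence this corresponds to depends on the convention.)

```python
def domino_tilings(n):
    """Number of ways to tile a 2 x n rectangle with 1 x 2 dominoes.
    Recurrence: t(n) = t(n-1) + t(n-2), with t(0) = 1 (empty tiling)
    and t(1) = 1 (one vertical domino)."""
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([domino_tilings(n) for n in range(1, 8)])
# the Fibonacci sequence: [1, 2, 3, 5, 8, 13, 21]
```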
in this paper, we present mondrian, an edge system that enables high-performance object detection on high-resolution video streams. many lightweight models and system optimization techniques have been proposed for resource-constrained devices, but they do not fully utilize the potential of the accelerators over dynamic, high-resolution videos. to enable such capability, we devise a novel compressive packed inference that minimizes per-pixel processing costs by selectively determining the necessary pixels to process and combining them to maximize processing parallelism. in particular, our system quickly extracts rois and dynamically shrinks them, reflecting the fast-changing characteristics of objects and scenes. it then intelligently combines such scaled rois into large canvases to maximize the utilization of inference accelerators such as gpus. evaluation across various datasets, models, and devices shows mondrian outperforms state-of-the-art baselines (e.g., input rescaling, roi extraction, roi extraction + batching) by 15.0-19.7% higher accuracy, leading to $\times$6.65 higher throughput than frame-wise inference for processing various 1080p video streams. we will release the code after the paper review.
arxiv:2403.07598
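The abstract does not spell out Mondrian's packing policy, but the core idea of combining scaled ROIs into one canvas so a single accelerator pass covers many regions can be illustrated with a generic greedy shelf-packing sketch. All sizes and the policy below are illustrative stand-ins, not the system's actual algorithm.

```python
def shelf_pack(rois, canvas_w, canvas_h):
    """Greedy shelf packing of (w, h) ROI crops onto one canvas.
    Tallest-first placement gives tighter shelves; assumes each crop
    fits within the canvas width. Returns (placements, success)."""
    placements = []
    x = y = shelf_h = 0
    for idx, (w, h) in sorted(enumerate(rois), key=lambda t: -t[1][1]):
        if x + w > canvas_w:        # current shelf is full: open a new one
            x, y = 0, y + shelf_h
            shelf_h = 0
        if y + h > canvas_h:        # canvas is full: caller starts another
            return placements, False
        placements.append((idx, x, y))
        x += w
        shelf_h = max(shelf_h, h)
    return placements, True

# toy crops (pixels) packed onto a 640 x 640 inference canvas
rois = [(300, 200), (100, 400), (250, 180), (120, 90)]
placed, ok = shelf_pack(rois, canvas_w=640, canvas_h=640)
```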
heavy inertial particles transported by a turbulent flow are shown to concentrate in the regions where an advected passive scalar, such as temperature, displays very strong front-like discontinuities. this novel effect is responsible for extremely high levels of fluctuation in the passive field sampled by the particles, which impacts the heat fluxes exchanged between the particles and the surrounding fluid. instantaneous and averaged heat fluxes are shown to follow strongly intermittent statistics and anomalous scaling laws.
arxiv:1401.2080
the relativistic heavy ion collider (rhic) at brookhaven national laboratory has been in operation since 2000. over the past decade, the luminosity in polarized proton (p-p) operations has increased by more than one order of magnitude. the maximum total beam-beam tune shift with two collisions has reached 0.018. the beam-beam interaction leads to a large tune spread, emittance growth, and short beam and luminosity lifetimes. in this article, we review the beam-beam observations from previous rhic p-p runs. the mechanism for particle loss is presented. the intra-beam scattering (ibs) contributions to emittance and bunch-length growth are calculated and compared with measurements. finally, we discuss current limits in rhic p-p operations and their solutions.
arxiv:1410.5936
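For round Gaussian beams colliding head-on, the beam-beam parameter per interaction point is xi = N r_p / (4 pi eps_n), with N the bunch intensity and eps_n the normalized rms emittance. With illustrative RHIC-like bunch parameters (assumed values, not numbers from a specific run), this reproduces a total of roughly 0.018 for two collisions:

```python
import math

r_p = 1.535e-18   # classical proton radius [m]
N = 1.85e11       # protons per bunch (illustrative)
eps_n = 2.5e-6    # normalized rms emittance [m rad] (illustrative)

xi = N * r_p / (4 * math.pi * eps_n)  # per interaction point
total = 2 * xi                        # two head-on collisions
print(f"xi per IP = {xi:.4f}, total = {total:.3f}")
```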
recent advances in video manipulation techniques have made the generation of fake videos more accessible than ever before. manipulated videos can fuel disinformation and reduce trust in media; therefore, the detection of fake videos has garnered immense interest in academia and industry. recently developed deepfake detection methods rely on deep neural networks (dnns) to distinguish ai-generated fake videos from real videos. in this work, we demonstrate that it is possible to bypass such detectors by adversarially modifying fake videos synthesized using existing deepfake generation methods. we further demonstrate that our adversarial perturbations are robust to image and video compression codecs, making them a real-world threat. we present pipelines in both white-box and black-box attack scenarios that can fool dnn-based deepfake detectors into classifying fake videos as real.
arxiv:2002.12749
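The paper's attack pipelines are not reproduced here, but the general mechanism of adversarially perturbing an input against a classifier can be sketched with a minimal FGSM-style step on a toy linear "detector". The weights and frame below are random stand-ins, and the compression-robust, black-box machinery of the paper is far more involved.

```python
import numpy as np

def fgsm_step(x, grad, eps):
    # move each pixel eps in the sign direction of the gradient,
    # staying inside the valid [0, 1] pixel range
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# toy linear "deepfake detector": score > 0 means "fake".
rng = np.random.default_rng(1)
w = rng.normal(size=64)                  # stand-in detector weights
x = rng.uniform(0.2, 0.8, size=64)       # stand-in flattened frame
score = lambda v: float(v @ w)

# for a linear model the gradient of the score w.r.t. the input is w;
# step against the current decision to push "fake" toward "real"
g = -np.sign(score(x)) * w
x_adv = fgsm_step(x, g, eps=0.05)
```

The perturbation is bounded by eps per pixel, so the adversarial frame stays visually close to the original while the detector score moves toward the opposite decision.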
neural vocoders have recently demonstrated high-quality speech synthesis, but typically require high computational complexity. lpcnet was proposed as a way to reduce the complexity of neural synthesis by using linear prediction (lp) to assist an autoregressive model. at inference time, lpcnet relies on the lp coefficients being explicitly computed from the input acoustic features. that makes the design of lpcnet-based systems more complicated, while adding the constraint that the input features must represent a clean speech spectrum. we propose an end-to-end version of lpcnet that lifts these limitations by learning to infer the lp coefficients from the input features in the frame-rate network. results show that the proposed end-to-end approach equals or exceeds the quality of the original lpcnet model, but without explicit lp analysis. our open-source end-to-end model still benefits from lpcnet's low complexity, while allowing for any type of conditioning features.
arxiv:2202.11301
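The explicit LP analysis that the original LPCNet performs at inference time, and that the end-to-end variant learns to bypass, is classically the autocorrelation method solved with the Levinson-Durbin recursion. A minimal sketch (without the windowing and bandwidth-expansion details a production vocoder would add), verified on a toy AR(2) signal:

```python
import numpy as np

def lpc(x, order):
    """LP coefficients via the autocorrelation method and
    Levinson-Durbin recursion. Returns [1, a1, ..., a_order]."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    a = np.array([1.0])
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + a[1:] @ r[i - 1:0:-1]
        k = -acc / err                 # reflection coefficient
        a = np.concatenate([a, [0.0]])
        a = a + k * a[::-1]            # order-update of the coefficients
        err *= 1.0 - k * k             # prediction-error update
    return a

# toy AR(2) signal: x[n] = 0.9 x[n-1] - 0.2 x[n-2] + noise,
# so the estimated whitening filter should approach [1, -0.9, 0.2]
rng = np.random.default_rng(0)
e = rng.normal(size=20000)
x = np.zeros_like(e)
for n in range(2, len(x)):
    x[n] = 0.9 * x[n - 1] - 0.2 * x[n - 2] + e[n]
a = lpc(x, order=2)
```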
large language models (llms) have reshaped natural language processing, powering applications from multi-hop retrieval and question answering to autonomous agent workflows. yet prompt engineering -- the task of crafting textual inputs to effectively direct llms -- remains difficult and labor-intensive, particularly for complex pipelines that combine multiple llm calls with functional operations like retrieval and data formatting. we introduce llm-autodiff: a novel framework for automatic prompt engineering (ape) that extends textual gradient-based methods (such as text-grad) to multi-component, potentially cyclic llm architectures. implemented within the adalflow library, llm-autodiff treats each textual input as a trainable parameter and uses a frozen backward engine llm to generate feedback -- akin to textual gradients -- that guides iterative prompt updates. unlike prior single-node approaches, llm-autodiff inherently accommodates functional nodes, preserves time-sequential behavior in repeated calls (e.g., multi-hop loops), and combats the "lost-in-the-middle" problem by isolating distinct sub-prompts (instructions, formats, or few-shot examples). it further boosts training efficiency by focusing on error-prone samples through selective gradient computation. across diverse tasks, including single-step classification, multi-hop retrieval-based qa, and agent-driven pipelines, llm-autodiff consistently outperforms existing textual gradient baselines in both accuracy and training cost. by unifying prompt optimization through a graph-centric lens, llm-autodiff offers a powerful new paradigm for scaling and automating llm workflows, mirroring the transformative role that automatic differentiation libraries have long played in neural network research.
arxiv:2501.16673
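The textual-gradient training loop can be caricatured in a few lines. The functions below are deliberately trivial string heuristics standing in for the task LLM, the frozen backward-engine LLM, and the prompt update; they are hypothetical stand-ins, not the AdalFlow or LLM-AutoDiff API.

```python
def forward_llm(prompt, x):
    # stand-in task model: only "succeeds" if the prompt asks for brevity
    return x.upper() if "concise" in prompt else x.upper() + " ..."

def backward_llm(output, target):
    # stand-in backward engine: textual feedback plays the role of a gradient
    return "be concise" if output != target else ""

def apply_feedback(prompt, feedback):
    # one "gradient step" on the trainable prompt parameter
    return prompt + " " + feedback if feedback else prompt

prompt = "rewrite the input in upper case."   # trainable parameter
x, target = "ok", "OK"
for _ in range(3):                            # iterative prompt updates
    out = forward_llm(prompt, x)
    fb = backward_llm(out, target)
    if not fb:                                # no feedback: converged
        break
    prompt = apply_feedback(prompt, fb)
```

In the real framework the forward and backward calls are LLM invocations over a multi-component graph, and feedback flows through functional nodes and repeated calls rather than a single prompt string.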