We demonstrate that perturbative QCD leads to positive 3D parton-parton correlations inside the nucleon, explaining a factor-of-two enhancement of the cross section of multi-parton interactions observed at the Tevatron at $x_i \ge 0.01$ as compared to the predictions of the independent-parton approximation. We also find that, though the perturbative correlations decrease with decreasing $x$, a nonperturbative mechanism kicks in and should generate a correlation which, at $x$ below $10^{-3}$, is comparable in magnitude with the perturbative one at $x \sim 0.01$.
arxiv:1206.5594
Examining the mechanisms underlying the formation and evolution of opinions within real-world social systems, which consist of numerous individuals, can provide valuable insights for effective social functioning and informed business decision making. The focus of our study is on the dynamics of opinions inside a networked multi-agent system. We provide a novel approach called the Game-theory-based Community-Aware Opinion Formation Process (GCAOFP) to accurately represent the co-evolutionary dynamics of communities and opinions in real-world social systems. The GCAOFP algorithm comprises two distinct steps in each iteration. (1) The community dynamics process conceptualizes community formation as a non-cooperative game involving a finite number of agents, in which each agent aims to maximize its own utility by adopting the response that leads to the most favorable update of its community label. (2) The opinion formation process updates an individual agent's opinion within a community-aware framework that incorporates bounded confidence; this process takes into account the updated community-membership matrix and ensures that an agent's opinion aligns, within certain defined limits, with the opinions of others in its community. The present study provides a theoretical proof that, under any initial conditions, the aforementioned co-evolutionary dynamics will ultimately reach an equilibrium state in which both the opinion vector and the community-membership matrix stabilize after a finite number of iterations. In contrast to conventional opinion dynamics models, the guaranteed convergence of agent opinions within the same community ensures that the convergence of opinions takes place exclusively inside a given community.
arxiv:2408.01196
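As a rough illustration of the bounded-confidence opinion update in step (2) of the abstract above, the sketch below moves each agent toward the mean opinion of same-community members whose opinions lie within a confidence bound. The function name, the plain-averaging rule, and the parameter `eps` are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def opinion_update(opinions, communities, eps=0.2):
    """One community-aware bounded-confidence update (hypothetical sketch).

    Each agent moves to the mean opinion of those members of its own
    community whose opinions are within the confidence bound `eps`.
    """
    new = opinions.copy()
    for i, x in enumerate(opinions):
        # neighbors: same community label AND within the confidence bound
        mask = (communities == communities[i]) & (np.abs(opinions - x) <= eps)
        new[i] = opinions[mask].mean()
    return new

opinions = np.array([0.1, 0.15, 0.9, 0.95, 0.5])
communities = np.array([0, 0, 1, 1, 0])
updated = opinion_update(opinions, communities)
```

Note how agent 4 (opinion 0.5) shares a community label with agents 0 and 1 but is outside their confidence bound, so its opinion is unchanged; this is the sense in which convergence happens only among close opinions inside a community.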
The transportation infrastructure of a country is one of the most important indicators of its economic growth. Here we study the Airport Network of India (ANI), which represents India's domestic civil aviation infrastructure, as a complex network. We find that ANI, a network of domestic airports connected by air links, is a small-world network characterized by a truncated power-law degree distribution, and has a signature of hierarchy. We investigate ANI as a weighted network to explore its various properties and compare them with their topological counterparts. The traffic in ANI, as in the World-wide Airport Network (WAN), is found to accumulate on interconnected groups of airports and to be concentrated between large airports. In contrast to WAN, ANI is found to exhibit disassortative mixing, which is offset by the traffic dynamics. The analysis points toward a possible mechanism for the formation of a national transportation network, which differs from that on a global scale.
arxiv:cond-mat/0409773
Explicit algebraic expressions for the expansion of vibrational matrix elements in series of matrix elements over the wave functions of the ground vibrational state have been obtained for arbitrary sufficiently differentiable functions of the internuclear distance, arbitrary values of v and v', and potential curves whose ladder operators can be constructed. A diagram technique has been developed for this purpose, which consists of: (1) numbering the matrix elements by points of a 2D diagram with coordinates (l, k); (2) drawing arrows between points of this diagram corresponding to the action of the annihilation operators on the wave functions; (3) taking into account all possible path vectors formed by continuous sequences of arrows from the point (v, v) toward the points (0, k). The only requirement is that the action of the operator on the wave functions should give wave functions of the Schrödinger equation with a potential curve having the same parameter values. All necessary data for algebraic calculations of the vibrational dependence of matrix elements for the harmonic oscillator and the Morse potential are given. The obtained expressions can be used to determine the absolute values and vibrational dependences of various spectroscopic characteristics of both ground and electronically excited states of diatomic molecules.
arxiv:1112.6409
Recent years have seen growing interest in retrofitting type systems onto dynamically typed programming languages, in order to improve type safety, programmer productivity, or performance. In such cases, type system developers must strike a delicate balance between disallowing certain coding patterns to keep the type system simple, and including them at the expense of additional complexity and effort. Thus far, the process for designing retrofitted type systems has been largely ad hoc, because evaluating multiple variations of a type system on large bodies of existing code is a significant undertaking. We present trace typing: a framework for automatically and quantitatively evaluating variations of a retrofitted type system on large code bases. The trace typing approach involves gathering traces of program executions, inferring types for instances of variables and expressions occurring in a trace, and merging types according to merge strategies that reflect specific (combinations of) choices in the source-level type system design space. We evaluated trace typing through several experiments. We compared several variations of type systems retrofitted onto JavaScript, measuring the number of program locations with type errors in each case on a suite of over fifty thousand lines of JavaScript code. We also used trace typing to validate and guide the design of a new retrofitted type system that enforces fixed object layout for JavaScript objects. Finally, we leveraged the types computed by trace typing to automatically identify tag tests (dynamic checks that refine a type) and examined the variety of tests identified.
arxiv:1605.01362
A systematic study of the nonlinear response of silicon photomultipliers (SiPMs) has been conducted through Monte Carlo simulations. SiPMs have been proven to show a universal nonlinear response when it is expressed in terms of relative parameters independent of both the gain and the photon detection efficiency (PDE). Nonlinearity has been shown to depend mainly on the balance between the photon rate and the pixel recovery time. However, exponential-like and finite light pulses have been found to lead to different nonlinear behaviors, which also depend on the correlated noise, the overvoltage dependence of the PDE, and the impedance of the readout circuit. Correlated noise has been shown to have a minor impact on nonlinearity, but it can significantly affect the shape of the SiPM output current. Considering these dependencies and previous statistical analysis of the nonlinear response of SiPMs, two simple fitting models have been proposed for exponential-like and finite light pulses, explaining the role of their various terms and parameters. Even though these models have only three fitting parameters, they provide an accurate description of the nonlinear response of SiPMs over a wide range of situations.
arxiv:2401.06581
Applying the integral equation theory of liquids to a binary mixed fluid lipid membrane, we study the membrane-mediated interactions between macroions and the redistribution of neutral and charged lipids due to macroion binding. We find that when the concentration of bound macroions is infinitely dilute, the main contribution to the attractive potential between macroions is the line tension between the neutral and charged lipids of the membrane, with the bridging effect also contributing to the attraction. As the relative concentration of charged lipids is increased, we observe a repulsive-attractive-repulsive potential transition due to the competition between the line tension of the lipids and the screened electrostatic macroion-macroion interactions. For a finite concentration of macroions, the main features of the attraction are similar to the infinite-dilution case. However, due to the formation of charged lipid-macroion complexes, the line tension of the redistributed binary lipids induced by a single macroion is lowered in this case, and the maximum of the attractive potential shifts toward higher values of the charged lipid concentration.
arxiv:cond-mat/0611107
We define a map $\mathcal{P}_M$ attached to any polarized Hodge module $M$ such that the restriction of $\mathcal{P}_M$ to a locus on which $M$ is a variation of Hodge structures induces the usual period integral pairing for this variation of Hodge structures. In the case that $M$ is the minimal extension of a simple polarized variation of Hodge structures $V$, we show that the homotopy image of $\mathcal{P}_M$ is the minimal extension of the graph morphism of the usual period integral map for $V$.
arxiv:1910.00035
In this note, we show that the Hausdorff operator $H_{\phi}$ is unbounded on a large family of quasi-Banach spaces, unless $H_{\phi}$ is the zero operator.
arxiv:2006.08139
In this paper, we prove the well-posedness of the linearized R13 moment model, which describes, e.g., rarefied gas flows. As an extension of the classical fluid equations, moment models are robust and have been used frequently, yet they are challenging to analyze due to their additional equations. By effectively grouping variables, we identify a 2-by-2 block structure, allowing the analysis of well-posedness within the abstract LBB framework of saddle point problems. Due to the unique tensorial structure of the equations, in addition to an interesting combination of tools from Stokes and linear elasticity theory, we also need new coercivity estimates for tensor fields. These Korn-type inequalities are established by analyzing the symbol map of the symmetric and trace-free part of tensor derivative fields. Together with the corresponding right inverse of the tensorial divergence, we obtain the existence and uniqueness of weak solutions.
arxiv:2501.14108
We call a semigroup $S$ weakly right Noetherian if every right ideal of $S$ is finitely generated; equivalently, $S$ satisfies the ascending chain condition on right ideals. We provide an equivalent formulation of the property of being weakly right Noetherian in terms of principal right ideals, and we also characterise weakly right Noetherian monoids in terms of their acts. We investigate the behaviour of the property of being weakly right Noetherian under quotients, subsemigroups and various semigroup-theoretic constructions. In particular, we find necessary and sufficient conditions for the direct product of two semigroups to be weakly right Noetherian. We characterise weakly right Noetherian regular semigroups in terms of their idempotents. We also find necessary and sufficient conditions for a strong semilattice of completely simple semigroups to be weakly right Noetherian. Finally, we prove that a commutative semigroup $S$ with finitely many archimedean components is weakly (right) Noetherian if and only if $S/\mathcal{H}$ is finitely generated.
arxiv:2010.02724
Genuine quantum-mechanical effects are readily observable in modern optomechanical systems comprising bosonic ("classical") optical resonators. Here we describe unique features and advantages of optical two-level systems, or qubits, for optomechanics. The qubit state can be coherently controlled using both phonons and resonant or detuned photons. We demonstrate this experimentally using charge-controlled InAs quantum dots (QDs) in surface-acoustic-wave resonators. Time-correlated single-photon counting measurements reveal the control of QD population dynamics using engineered optical pulses and mechanical motion. As a first example, at moderate acoustic drive strengths, we demonstrate the potential of this technique to maximize fidelity in quantum microwave-to-optical transduction. Specifically, we tailor the scheme so that mechanically assisted photon scattering is enhanced over direct detuned photon scattering from the QD. Spectral analysis reveals distinct scattering channels related to Rayleigh scattering and luminescence in our pulsed excitation measurements, which lead to time-dependent scattering spectra. Quantum-mechanical calculations show good agreement with our experimental results, together providing a comprehensive description of excitation, scattering and emission in a coupled QD-phonon optomechanical system.
arxiv:2404.02079
The decentralized and trustless nature of cryptocurrencies and blockchain technology leads to a shift in the digital world. The possibility of executing small programs, called smart contracts, on cryptocurrencies like Ethereum has opened the door to countless new applications. One particularly exciting use case is decentralized finance (DeFi), which aims to revolutionize traditional financial services by founding them on a decentralized infrastructure. We show the potential of DeFi by analyzing its advantages compared to traditional finance. Additionally, we survey the state of the art of DeFi products and categorize existing services. Since DeFi is still in its infancy, there are countless hurdles to mass adoption. We discuss the most prominent challenges and point out possible solutions. Finally, we analyze the economics behind DeFi products. By carefully analyzing the state of the art and discussing current challenges, we give a perspective on how the DeFi space might develop in the near future.
arxiv:2101.05589
It is known that the solutions of pure classical 5D gravity with $AdS_5$ asymptotics can describe strongly coupled large-$N$ dynamics in a universal sector of 4D conformal gauge theories. We show that when the boundary metric is flat, we can uniquely specify the solution by the boundary stress tensor. We also show that in Fefferman-Graham coordinates all these solutions have an integer Taylor series expansion in the radial coordinate (i.e., no $\log$ terms). Specifying an arbitrary stress tensor can lead to two types of pathologies: it can either destroy the asymptotic AdS boundary condition or produce naked singularities. We show that when solutions have no net angular momentum, all hydrodynamic stress tensors preserve the asymptotic AdS boundary condition, though they may produce naked singularities. We construct solutions corresponding to arbitrary hydrodynamic stress tensors in Fefferman-Graham coordinates using a derivative expansion. In contrast to Eddington-Finkelstein coordinates, here the constraint equations simplify and the expansion is manifestly Lorentz covariant at each order. The regularity analysis becomes more elaborate, but we can show that there is a unique hydrodynamic stress tensor which gives solutions free of naked singularities. In the process we write down explicit first-order solutions in both Fefferman-Graham and Eddington-Finkelstein coordinates for hydrodynamic stress tensors with arbitrary $\eta/s$. Our solutions can describe arbitrary (slowly varying) velocity configurations. We point out some field-theoretic implications of our general results.
arxiv:0810.4851
In this work we introduce a new theoretical framework for Einstein-Gauss-Bonnet theories of gravity, which results in particularly elegant, functionally simple and transparent gravitational equations of motion, slow-roll indices and corresponding observational indices. The main requirement is that the Einstein-Gauss-Bonnet theory be compatible with the GW170817 event, so the gravitational wave speed $c_T^2$ is required to satisfy $c_T^2 \simeq 1$ in natural units. This assumption was also made in a previous work of ours, but here we express all the related quantities as functions of the scalar field. The constraint $c_T^2 \simeq 1$ restricts the functional form of the scalar Gauss-Bonnet coupling function $\xi(\phi)$ and of the scalar potential $V(\phi)$, which must satisfy a differential equation. However, by also assuming that the slow-roll conditions hold, the resulting equations of motion and slow-roll indices acquire particularly simple forms, and the relation that yields the $e$-foldings number becomes $N = \int_{\phi_i}^{\phi_f} \xi''/\xi' \, d\phi$, a fact that enables us to perform particularly simple calculations in order to study the inflationary phenomenology of several models. As it turns out, the models we present are compatible with the observational data and also satisfy all the assumptions made while extracting the gravitational equations of motion. More interestingly, we also investigate the phenomenological implications of the additional condition $\xi'/\xi'' \ll 1$, which is motivated by the slow-roll conditions imposed on the scalar field evolution and on the Hubble rate, and which makes the study easier. Our approach opens a new window on viable Einstein-Gauss-Bonnet theories of gravity.
arxiv:2003.13724
We investigate the thermodynamics and phase structure of $SU(3)$ Yang-Mills theory on $\mathbb{T}^2 \times \mathbb{R}^2$, with anisotropic spatial volumes in Euclidean spacetime, in lattice numerical simulations and an effective model. In the lattice simulations, the energy-momentum tensor defined through the gradient flow is used for the analysis of the stress tensor on the lattice. It is found that a clear pressure anisotropy is observed only at a significantly shorter spatial extent compared with the free scalar theory. We then study the thermodynamics obtained on the lattice in an effective model that incorporates two Polyakov loops along the two compactified directions as dynamical variables. The model is constructed to reproduce the thermodynamics measured on the lattice. The model analysis indicates the existence of a novel first-order phase transition and critical points as its endpoints. We argue that the interplay of the Polyakov loops induces the first-order transition.
arxiv:2502.08892
Deep model merging represents an emerging research direction that combines multiple fine-tuned models to harness their specialized capabilities across different tasks and domains. Current model merging techniques focus on merging all available models simultaneously, with weight-interpolation-based methods being the predominant approaches. However, these conventional approaches are not well suited for scenarios where models become available sequentially, and they often suffer from high memory requirements and potential interference between tasks. In this study, we propose a training-free projection-based continual merging method that processes models sequentially through orthogonal projections of weight matrices and adaptive scaling mechanisms. Our method operates by projecting new parameter updates onto subspaces orthogonal to existing merged parameter updates, while using an adaptive scaling mechanism to maintain stable parameter distances, enabling efficient sequential integration of task-specific knowledge. Our approach maintains constant memory complexity with respect to the number of models, minimizes interference between tasks through orthogonal projections, and retains the performance of previously merged models through adaptive task vector scaling. Extensive experiments on CLIP-ViT models demonstrate that our method achieves a 5-8% average accuracy improvement while maintaining robust performance under different task orderings.
arxiv:2501.09522
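The orthogonal-projection idea in the abstract above can be sketched as follows, with heavy caveats: weights are flattened to plain vectors, the adaptive scaling is omitted, and `project_orthogonal` is a hypothetical name rather than the paper's API. The point is only that removing, from a new task's update, its components along previously merged updates is what limits interference between tasks.

```python
import numpy as np

def project_orthogonal(new_update, merged_updates):
    """Project a new task's weight update onto the subspace orthogonal
    to previously merged updates (illustrative sketch, Gram-Schmidt
    style). Assumes the previously merged updates are mutually
    orthogonal, which holds if each was itself projected before merging.
    """
    v = new_update.astype(float).copy()
    for u in merged_updates:
        u = u.astype(float)
        denom = u @ u
        if denom > 0:
            v -= (v @ u) / denom * u  # remove the component along u
    return v

prev = [np.array([1.0, 0.0, 0.0])]       # one previously merged update
new = np.array([0.5, 1.0, 0.0])          # incoming task's update
proj = project_orthogonal(new, prev)     # component along prev removed
```

After projection, the retained part of the new update cannot move the merged parameters along directions already claimed by earlier tasks, which is the memory-constant interference control the abstract describes.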
We deal with irregular curves contained in smooth, closed, and compact surfaces. For curves with finite total intrinsic curvature, a weak notion of parallel transport of tangent vector fields is well defined in the Sobolev setting. Moreover, the angle of the parallel transport is a function of bounded variation, and its total variation is equal to an energy functional that depends on the "tangential" component of the derivative of the tantrix of the curve. We show that the total intrinsic curvature of irregular curves agrees with this energy functional. By exploiting isometric embeddings, the previous results are then extended to irregular curves contained in Riemannian surfaces. Finally, the relationship with the notion of displacement of a smooth curve is analyzed.
arxiv:1906.10567
We performed a coincidence measurement of two nucleons emitted from the nonmesonic weak decay (NMWD) of $^5_\Lambda$He formed via the $^6$Li$(\pi^+, K^+)$ reaction. The energies of the two nucleons and the pair-number distributions in the opening angle between them were measured. In both $np$ and $nn$ pairs, we observed a clean back-to-back correlation coming from the two-body decays $\Lambda p \to np$ and $\Lambda n \to nn$, respectively. The ratio of the nucleon pair numbers was $N_{nn}/N_{np} = 0.45 \pm 0.11(\mathrm{stat}) \pm 0.03(\mathrm{syst})$ in the kinematic region $\cos\theta_{nn} < -0.8$. Since each decay mode was exclusively detected, the measured ratio should be close to the ratio $\Gamma(\Lambda p \to np)/\Gamma(\Lambda n \to nn)$. The ratio is consistent with recent theoretical calculations based on the heavy-meson/direct-quark exchange picture.
arxiv:nucl-ex/0509015
Time series forecasting is widely used in a multitude of domains. In this paper, we present four models to predict stock prices using the SPX index as input time series data. The martingale and ordinary linear models, which we use as baselines, require the strongest assumption, stationarity. The generalized linear model requires weaker assumptions but is unable to outperform the martingale. In empirical testing, the RNN model performs best among the remaining models, because it updates its state through an LSTM at each step, but it also does not beat the martingale. In addition, we introduce an online-to-batch algorithm and a discrepancy measure to acquaint readers with recent research in time series prediction that does not require any stationarity or non-mixing assumptions on the data. Finally, to apply these forecasts in practice, we introduce basic trading strategies that can create win-win and zero-sum situations.
arxiv:1710.05751
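The martingale baseline mentioned in the abstract above is simply "the forecast of tomorrow's price is today's price". A minimal sketch, where the helper names and the toy price series are illustrative and not taken from the paper:

```python
def martingale_forecast(prices):
    """One-step-ahead martingale baseline: the prediction at time t
    is the observed price at time t-1, so forecasts align with
    actual values prices[1:]."""
    return prices[:-1]

def mean_abs_error(forecasts, actuals):
    """Mean absolute error between forecasts and realized values."""
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)

prices = [100.0, 101.0, 100.5, 102.0]     # toy SPX-like series
preds = martingale_forecast(prices)        # [100.0, 101.0, 100.5]
err = mean_abs_error(preds, prices[1:])
```

Any candidate model in such a comparison has to achieve a lower error than this trivial predictor to be considered an improvement, which is why the abstract reports the other models against the martingale.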
This paper presents the "Speak & Improve Challenge 2025: Spoken Language Assessment and Feedback", a challenge associated with the ISCA SLaTE 2025 workshop. The goal of the challenge is to advance research on spoken language assessment and feedback, with tasks associated with both the underlying technology and language learning feedback. Linked with the challenge, the Speak & Improve (S&I) Corpus 2025 is being pre-released: a dataset of L2 learner English data with holistic scores and language error annotation, collected from open (spontaneous) speaking tests on the Speak & Improve learning platform. The corpus consists of approximately 315 hours of audio data from second language English learners with holistic scores, and a 55-hour subset with manual transcriptions and error labels. The challenge has four shared tasks: automatic speech recognition (ASR), spoken language assessment (SLA), spoken grammatical error correction (SGEC), and spoken grammatical error correction feedback (SGECF). Each of these tasks has a closed track, where a predetermined set of models and data sources may be used, and an open track, where any public resource may be used. Challenge participants may take part in one or more of the tasks. This paper describes the challenge, the S&I Corpus 2025, and the baseline systems released for the challenge.
arxiv:2412.11985
Billions of people share their daily life images on social media every day. However, their biometric information (e.g., fingerprints) can easily be stolen from these images. The threat of fingerprint leakage from social media raises a strong desire to anonymize shared images while maintaining image quality, since fingerprints act as a lifelong individual biometric password. To guard against fingerprint leakage, adversarial attacks emerge as a solution by adding imperceptible perturbations to images. However, existing works are either weak in black-box transferability or appear unnatural. Motivated by the visual perception hierarchy (i.e., high-level perception exploits model-shared semantics that transfer well across models, while low-level perception extracts primitive stimuli and causes high visual sensitivity to suspicious stimuli), we propose FingerSafe, a hierarchical perceptual protective noise injection framework to address the mentioned problems. For black-box transferability, we inject protective noise into the fingerprint orientation field to perturb the model-shared high-level semantics (i.e., fingerprint ridges). Considering visual naturalness, we suppress the low-level local contrast stimulus by regularizing the response of the lateral geniculate nucleus. Our FingerSafe is the first to provide feasible fingerprint protection in both digital (up to 94.12%) and realistic scenarios (Twitter and Facebook, up to 68.75%). Our code can be found at https://github.com/nlsde-safety-team/fingersafe.
arxiv:2208.10688
This study deals with the balance of humanoid or multi-legged robots in a multi-contact setting where a chosen subset of contacts undergoes desired sliding-task motions. One method to keep balance is to hold the center of mass (CoM) within an admissible convex area, which should be calculated based on the contact positions and forces. We introduce a methodology to compute this CoM support area (CSA) for multiple fixed and sliding contacts. To select the most appropriate CoM position inside the CSA, we account for (i) the constraints of multiple fixed and sliding contacts, (ii) the desired wrench distribution for contacts, and (iii) a desired CoM position (possibly dictated by other tasks). These are formulated as a quadratic programming optimization problem. We illustrate our approach with wall-pushing and wiping tasks, and conducted experiments using the HRP-4 humanoid robot.
arxiv:1909.13696
We introduce an emerging AI-based approach and prototype system for assisting team formation when researchers respond to calls for proposals from funding agencies. This is an instance of the general problem of building teams when demand opportunities arrive periodically and potential members may vary over time. The novelties of our approach are that we: (a) extract the technical skills needed, for both researchers and calls, from multiple data sources and normalize them using natural language processing (NLP) techniques; (b) build a prototype solution based on matching and teaming subject to constraints; (c) describe initial feedback about the system from researchers at a university where it is to be deployed; and (d) create and publish a dataset that others can use.
arxiv:2201.05646
We compute the medium-mass nuclei $^{16}$O and $^{40}$Ca using pionless effective field theory (EFT) at next-to-leading order (NLO). The low-energy coefficients of the EFT Hamiltonian are adjusted to experimental data for nuclei with mass numbers $A = 2$ and $3$, or alternatively to results from lattice quantum chromodynamics (QCD) at an unphysical pion mass of 806 MeV. The EFT is implemented through a discrete variable representation in the harmonic oscillator basis. This approach ensures rapid convergence with respect to the size of the model space and facilitates the computation of medium-mass nuclei. At NLO the nuclei $^{16}$O and $^{40}$Ca are bound with respect to decay into alpha particles. Binding energies per nucleon are 9-10 MeV and 30-40 MeV at pion masses of 140 MeV and 806 MeV, respectively.
arxiv:1712.10246
Transient and variable phenomena in astrophysical sources are of particular importance for understanding the underlying gamma-ray emission processes. In the very-high-energy gamma-ray domain, transient and variable sources are related to charged-particle acceleration processes that could, for instance, help in understanding the origin of cosmic rays. The imaging atmospheric Cherenkov technique used for gamma-ray astronomy above $\sim 100$ GeV is well suited for detecting such events. However, the standard analysis methods are not optimal for this goal, and more sensitive methods are specifically developed in this publication. The sensitivity improvement could therefore be helpful for detecting brief and faint transient sources such as gamma-ray bursts.
arxiv:2001.06084
Detailed imaging and spectroscopic analysis of the centers of nearby S0 and spiral galaxies shows the existence of "composite bulges", where both classical bulges and disky pseudobulges coexist in the same galaxy. As part of a search for supermassive black holes in nearby galaxy nuclei, we obtained VLT-SINFONI observations in adaptive-optics mode of several of these galaxies. Schwarzschild dynamical modeling enables us to disentangle the stellar orbital structure of the different central components, and to distinguish the differing contributions of kinematically hot (classical bulge) and kinematically cool (pseudobulge) components in the same galaxy.
arxiv:1409.7946
k-means is one of the most widely used algorithms for clustering in data mining applications; it attempts to minimize the sum of the squared Euclidean distances of the points in each cluster from the respective cluster mean. However, k-means suffers from the local minima problem and is not guaranteed to converge to the optimal cost. k-means++ tries to address this problem by seeding the means using a distance-based sampling scheme. However, seeding the means in k-means++ needs $O(k)$ sequential passes through the entire dataset, which can be very costly for large datasets. Here we propose a method of seeding the initial means based on factorizations of higher-order moments for bounded data. Our method takes $O(1)$ passes through the entire dataset to extract the initial set of means, and its final cost can be proven to be within $O(\sqrt{k})$ of the optimal cost. We demonstrate the performance of our algorithm in comparison with existing algorithms on various benchmark datasets.
arxiv:1511.05933
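For contrast with the moment-based seeding the abstract proposes, the k-means++ D²-sampling it is compared against can be sketched as follows (1-D points and a hypothetical helper name, for illustration only). Note the outer loop over centers: each iteration makes a full pass over the data, which is the source of the $O(k)$ passes the abstract mentions.

```python
import random

def kmeanspp_seed(points, k, rng=None):
    """k-means++ seeding on 1-D data: each new center is sampled with
    probability proportional to the squared distance to the nearest
    center chosen so far (D^2 sampling)."""
    rng = rng or random.Random(0)
    centers = [rng.choice(points)]
    for _ in range(k - 1):          # one full pass over the data per center
        d2 = [min((p - c) ** 2 for c in centers) for p in points]
        r = rng.random() * sum(d2)  # sample proportionally to d2
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r and w > 0:
                centers.append(p)
                break
    return centers

points = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2]
centers = kmeanspp_seed(points, 2)
```

Because an already-chosen center has squared distance zero to itself, it can never be sampled again, so the seeds are always distinct; the far cluster is picked with high probability thanks to the D² weighting.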
Mining a set of meaningful topics organized into a hierarchy is intuitively appealing, since topic correlations are ubiquitous in massive text corpora. To account for potential hierarchical topic structures, hierarchical topic models generalize flat topic models by incorporating latent topic hierarchies into their generative modeling process. However, due to their purely unsupervised nature, the learned topic hierarchy often deviates from users' particular needs or interests. To guide the hierarchical topic discovery process with minimal user supervision, we propose a new task, hierarchical topic mining, which takes a category tree described by category names only, and aims to mine a set of representative terms for each category from a text corpus to help a user comprehend his/her topics of interest. We develop a novel joint tree and text embedding method, along with a principled optimization procedure, that allows simultaneous modeling of the category tree structure and the corpus generative process in the spherical space for effective category-representative term discovery. Our comprehensive experiments show that our model, named JoSH, mines a high-quality set of hierarchical topics with high efficiency and benefits weakly supervised hierarchical text classification tasks.
arxiv:2007.09536
the constraints imposed by the requirement that the scalar potential of supersymmetric theories does not have unbounded directions and charge or color breaking minima deeper than the usual electroweak breaking minimum ( ewm ) are significantly relaxed if one just allows for a metastable ewm but with a sufficiently long lifetime. for this to be acceptable one needs however to explain how the vacuum state reaches this metastable configuration in the first place. we discuss how this issue is affected by the inflaton - induced scalar masses, by the supersymmetry breaking effects generated during the preheating stage, and by the thermal corrections to the scalar potential which appear after reheating. we show that their combined effects may efficiently drive the scalar fields to the origin, allowing them to then evolve naturally towards the ewm.
arxiv:hep-ph/9607403
this paper considers the multiantenna broadcast channel without transmit - side channel state information ( csit ). for this channel, it has been known that when all receivers have channel state information ( csir ), the degrees of freedom ( dof ) cannot be improved beyond what is available via tdma. the same is true if none of the receivers possess csir. this paper shows that an entirely new scenario emerges when receivers have unequal csir. in particular, orthogonal transmission is no longer dof - optimal when one receiver has csir and the other does not. a multiplicative superposition is proposed for this scenario and shown to attain the optimal degrees of freedom under a wide set of antenna configurations and coherence lengths. two signaling schemes are constructed based on the multiplicative superposition. in the first method, the messages of the two receivers are carried in the row and column spaces of a matrix, respectively. this method works better than orthogonal transmission while reception at each receiver is still interference free. the second method uses coherent signaling for the receiver with csir, and grassmannian signaling for the receiver without csir. this second method requires interference cancellation at the receiver with csir, but achieves higher dof than the first method.
arxiv:1207.6438
fast radio bursts ( frbs ) are short duration highly energetic dispersed radio pulses. we developed a generic formalism ( bera { \ em et al. }, 2016 ) to estimate the frb detection rate for any radio telescope with given parameters. by using this model, we estimated the frb detection rate for two indian radio telescopes : the ooty wide field array ( owfa ) ( bhattacharyya { \ em et al. }, 2017 ) and the upgraded giant metrewave radio telescope ( ugmrt ) ( bhattacharyya { \ em et al. }, 2018 ) with three beam forming modes. in this review article, i summarize these two works. we considered the energy spectrum of frbs as a power law and the energy distribution of frbs as a dirac delta function and a schechter luminosity function. we also considered two scattering models proposed by bhat { \ em et al. } ( 2004 ) and macquart \ & koay ( 2013 ) for these works, and i consider frb pulse without scattering as a special case for this review. we found that the future prospects of detecting frbs by using these two indian radio telescopes are good. they are capable of detecting a significant number of frbs per day. according to our prediction, we can detect $ \ sim 10 ^ 5 - 10 ^ 8 $, $ \ sim 10 ^ 3 - 10 ^ 6 $ and $ \ sim 10 ^ 5 - 10 ^ 7 $ frbs per day by using owfa, commensal systems of gmrt and ugmrt respectively. even a non detection of the predicted events will be very useful in constraining frb properties.
arxiv:1812.09143
in this paper, the problem of choosing the best allocation of excitations and measurements for the identification of a dynamic network is formally stated and analyzed. the best choice will be one that achieves the most accurate identification with the least costly experiment. accuracy is assessed by the trace of the asymptotic covariance matrix of the parameters estimates, whereas the cost criterion is the number of excitations and measurements. analytical and numerical results are presented for two classes of dynamic networks in state space form : branches and cycles. from these results, a number of guidelines for the choice emerge, which are based either on the topology of the network or on the relative magnitude of the modules being identified. an example is given to illustrate that these guidelines can to some extent be applied to networks of more generic topology.
arxiv:2007.09263
a new method for efficient isotope separation is proposed. it is based on efficient photoionization of atoms by a continuous - wave laser using resonant - enhancement in an ultra - large volume optical cavity. this method should enable higher efficiency than the existing state of the art and could be used as an alternative to radiochemistry. it should also allow separation of radioisotopes that are not amenable to standard radiochemistry, with important implications for medicine.
arxiv:2410.23139
background : liver tumors are abnormal growths in the liver that can be either benign or malignant, with liver cancer being a significant health concern worldwide. however, there is no dataset for plain scan segmentation of liver tumors, nor any related algorithms. to fill this gap, we propose plain scan liver tumors ( pslt ) and ynetr. methods : a collection of 40 liver tumor plain scan segmentation datasets was assembled and annotated. we used the dice coefficient as the metric for assessing the segmentation outcomes produced by ynetr, which has the advantage of capturing different frequency information. results : the ynetr model achieved a dice coefficient of 62. 63 % on the pslt dataset, surpassing the other publicly available model by an accuracy margin of 1. 22 %. comparative evaluations were conducted against a range of models including unet 3 +, xnet, unetr, swin unetr, trans - bts, cotr, nnunetv2 ( 2d ), nnunetv2 ( 3d fullres ), mednext ( 2d ) and mednext ( 3d fullres ). conclusions : we not only proposed a dataset named pslt ( plain scan liver tumors ), but also explored a structure called ynetr that utilizes the wavelet transform to extract different frequency information and achieves state - of - the - art results on pslt in our experiments.
arxiv:2404.00327
we propose a method for inference on moderately high - dimensional, nonlinear, non - gaussian, partially observed markov process models for which the transition density is not analytically tractable. markov processes with intractable transition densities arise in models defined implicitly by simulation algorithms. widely used particle filter methods are applicable to nonlinear, non - gaussian models but suffer from the curse of dimensionality. improved scalability is provided by ensemble kalman filter methods, but these are inappropriate for highly nonlinear and non - gaussian models. we propose a particle filter method having improved practical and theoretical scalability with respect to the model dimension. this method is applicable to implicitly defined models having analytically intractable transition densities. our method is developed based on the assumption that the latent process is defined in continuous time and that a simulator of this latent process is available. in this method, particles are propagated at intermediate time intervals between observations and are resampled based on a forecast likelihood of future observations. we combine this particle filter with parameter estimation methodology to enable likelihood - based inference for highly nonlinear spatiotemporal systems. we demonstrate our methodology on a stochastic lorenz 96 model and a model for the population dynamics of infectious diseases in a network of linked regions.
arxiv:1708.08543
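for contrast with the intermediate - propagation scheme described above, the plain bootstrap particle filter that the paper improves upon can be sketched on a toy one - dimensional state - space model. all model parameters here are illustrative assumptions, not values from the paper:

```python
import bisect
import math
import random

def bootstrap_filter(obs, n=500, proc_sd=1.0, obs_sd=1.0, seed=1):
    # Bootstrap particle filter for the toy model
    #   x_t = x_{t-1} + N(0, proc_sd^2),  y_t = x_t + N(0, obs_sd^2).
    # Each step: propagate particles, weight by the observation
    # likelihood, record the posterior mean, then resample.
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, 1.0) for _ in range(n)]
    means = []
    for y in obs:
        parts = [x + rng.gauss(0.0, proc_sd) for x in parts]  # propagate
        w = [math.exp(-0.5 * ((y - x) / obs_sd) ** 2) for x in parts]
        tot = sum(w)
        w = [wi / tot for wi in w]                            # normalize
        means.append(sum(wi * x for wi, x in zip(w, parts)))  # posterior mean
        cum, acc = [], 0.0
        for wi in w:
            acc += wi
            cum.append(acc)
        # multinomial resampling via the cumulative weights
        parts = [parts[min(bisect.bisect_left(cum, rng.random()), n - 1)]
                 for _ in range(n)]
    return means
```

with a constant observation stream the filtered mean settles near the observed value; the curse of dimensionality the abstract mentions shows up when the state (and hence the weights ' variance) becomes high - dimensional.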
we study implications of expansiveness and pointwise periodicity for certain groups and semigroups of transformations. among other things we prove that every pointwise periodic finitely generated group of cellular automata is necessarily finite. we also prove that a subshift over any finitely generated group that consists of finite orbits is finite, and related results for tilings of euclidean space.
arxiv:1703.10013
the conventional theory of combustion describes systems where all of the parameters are spatially homogeneous. on the other hand, combustion in disordered explosives has long been known to occur after local regions of the material, called " hot spots ", are ignited. in this article we show that a system of randomly distributed hot spots exhibits a dynamic phase transition, which, depending on parameters of the system, can be either first or second order. these two regimes are separated by a tri - critical point. we also show that on the qualitative level the phase diagram of the system is universal. it is valid in two and three dimensions, in the cases when the hot spots interact either by heat or sound waves and in a broad range of microscopic disorder models.
arxiv:1803.10211
it has been established that the famous three - dimensional thurston geometries have four intrinsically lorentzian analogs. we explore these spacetimes in three - dimensional general relativity nonminimally coupled to a scalar field together with electromagnetic matter. we find that three of these spacetimes support electromagnetic radiation, while the other is partially sourced by a nonnull field and supports gravitational radiation. by addressing this problem we have also found a novel type of gravitational cheshire effect.
arxiv:2311.12985
how can we detect fraudulent lockstep behavior in large - scale multi - aspect data ( i. e., tensors )? can we detect it when data are too large to fit in memory or even on a disk? past studies have shown that dense subtensors in real - world tensors ( e. g., social media, wikipedia, tcp dumps, etc. ) signal anomalous or fraudulent behavior such as retweet boosting, bot activities, and network attacks. thus, various approaches, including tensor decomposition and search, have been proposed for detecting dense subtensors rapidly and accurately. however, existing methods have low accuracy, or they assume that tensors are small enough to fit in main memory, which is unrealistic in many real - world applications such as social media and web. to overcome these limitations, we propose d - cube, a disk - based dense - subtensor detection method, which also can run in a distributed manner across multiple machines. compared to state - of - the - art methods, d - cube is ( 1 ) memory efficient : requires up to 1, 561x less memory and handles 1, 000x larger data ( 2. 6tb ), ( 2 ) fast : up to 7x faster due to its near - linear scalability, ( 3 ) provably accurate : gives a guarantee on the densities of the detected subtensors, and ( 4 ) effective : spotted network attacks from tcp dumps and synchronized behavior in rating data most accurately.
arxiv:1802.01065
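the flavor of density - guaranteed dense - subtensor search can be illustrated on a 2 - way tensor ( a matrix of counts ). this hypothetical sketch greedily deletes the row or column of minimum mass and keeps the densest intermediate block, which is the style of approach with provable density guarantees that the abstract refers to; it is not the disk - based d - cube implementation itself:

```python
from collections import defaultdict

def densest_block(entries):
    # Greedy slice removal on a 2-way tensor given as {(row, col): mass}.
    # Density = total mass / (#rows + #cols). Repeatedly delete the row
    # or column of minimum mass; return the densest block seen.
    live = dict(entries)
    rows, cols = defaultdict(float), defaultdict(float)
    for (i, j), v in live.items():
        rows[i] += v
        cols[j] += v
    best = (set(rows), set(cols))
    best_d = sum(live.values()) / (len(rows) + len(cols))
    while rows and cols:
        r = min(rows, key=rows.get)
        c = min(cols, key=cols.get)
        if rows[r] <= cols[c]:  # drop the lighter slice
            for key in [k for k in live if k[0] == r]:
                cols[key[1]] -= live.pop(key)
            del rows[r]
        else:
            for key in [k for k in live if k[1] == c]:
                rows[key[0]] -= live.pop(key)
            del cols[c]
        if rows and cols:
            d = sum(live.values()) / (len(rows) + len(cols))
            if d > best_d:
                best_d, best = d, (set(rows), set(cols))
    return best, best_d
```

on a toy matrix with a dense 2 x 2 block of mass 10 per cell plus scattered unit noise, the greedy sweep isolates exactly that block.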
we study a supersymmetric extension of the sm with higgs triplets in the scalar sector. we begin with a review of the sm, particularly the higgs mechanism. in the sm, the higgs mechanism requires the presence of a complex higgs doublet to break the electroweak symmetry and endow particles with a mass ; this process is called electroweak symmetry breaking. although this is the simplest possibility, higher scalar representations may also contribute to the ewsb. the extent to which these higher representations contribute to ewsb is constrained by precise measurements of the $ \ rho $ parameter. the model must predict $ \ rho = 1 $ at tree level. it is a fortuitous circumstance that simple doublet representations satisfy this requirement exactly. the underlying reason is that models with doublets satisfy an accidental custodial symmetry. therefore, one can add any number of scalar doublets and still satisfy this experimental constraint. for higher representations, it is a bit trickier to maintain the custodial symmetry. we study in this work a supersymmetric model that incorporates triplet representations and satisfies the custodial symmetry. the non - supersymmetric georgi - machacek model is one example of a custodial invariant model of ewsb with higgs triplets. however, the gm model has a fine - tuning problem beyond that of the sm. the solution to both issues is the supersymmetric custodial triplet model. the supersymmetric gm model arises as a low energy limit of the sctm. it is this model that we study here. we make use of public code, gmcalc and higgstools, to perform global fits to the parameters of this model and obtain limits on model parameters at the $ 95 \ % $ confidence level. for these hypothetical scalars, we identify the dominant decay channels and extract bounds on their branching ratios. we also examine the possible presence of a 95 gev higgs boson in the sgm.
arxiv:2408.04489
in this paper, we will prove that all batyrev calabi - yau threefolds, arising from a small resolution of a generic hyperplane section of a reflexive fano - gorenstein fourfold, have finite automorphism group. together with the morrison conjecture, this suggests that batyrev calabi - yau threefolds should have a polyhedral kahler ( ample ) cone.
arxiv:1109.3238
a quasi - classical model ( qcm ) of molecular dynamics in intense femtosecond laser fields has been developed, and applied to a study of the effect of an ultrashort ` control ' pulse on the vibrational motion of a deuterium molecular ion in its ground electronic state. a nonadiabatic treatment accounts for the initial ionization - induced vibrational population caused by an ultrashort ` pump ' pulse. in the qcm, the nuclei move classically on the molecular potential as it is distorted by the laser - induced stark shift and transition dipole. the nuclei then adjust to the modified potential, non - destructively shifting the vibrational population and relative phase. this shift has been studied as a function of control pulse parameters. excellent agreement is observed with predictions of time - dependent quantum simulations, lending confidence to the validity of the model and permitting new observations to be made. the applicability of the qcm to more complex multi - potential energy surface molecules ( where a quantum treatment is at best difficult ) is discussed.
arxiv:1001.1138
the process of dynamic state estimation ( filtering ) based on point process observations is in general intractable. numerical sampling techniques are often practically useful, but lead to limited conceptual insight about optimal encoding / decoding strategies, which are of significant relevance to computational neuroscience. we develop an analytically tractable bayesian approximation to optimal filtering based on point process observations, which allows us to introduce distributional assumptions about sensor properties that greatly facilitate the analysis of optimal encoding in situations deviating from common assumptions of uniform coding. numerical comparisons with particle filtering demonstrate the quality of the approximation. the analytic framework leads to insights which are difficult to obtain from numerical algorithms, and is consistent with biological observations about the distribution of sensory cells ' tuning curve centers.
arxiv:1609.03519
been hailed as the first true mathematician and the first known individual to whom a mathematical discovery has been attributed. pythagoras established the pythagorean school, whose doctrine it was that mathematics ruled the universe and whose motto was " all is number ". it was the pythagoreans who coined the term " mathematics ", and with whom the study of mathematics for its own sake begins. the pythagoreans are credited with the first proof of the pythagorean theorem, though the statement of the theorem has a long history, and with the proof of the existence of irrational numbers. although he was preceded by the babylonians, indians and the chinese, the neopythagorean mathematician nicomachus ( 60 – 120 ad ) provided one of the earliest greco - roman multiplication tables, whereas the oldest extant greek multiplication table is found on a wax tablet dated to the 1st century ad ( now found in the british museum ). the association of the neopythagoreans with the western invention of the multiplication table is evident in its later medieval name : the mensa pythagorica. plato ( 428 / 427 bc – 348 / 347 bc ) is important in the history of mathematics for inspiring and guiding others. his platonic academy, in athens, became the mathematical center of the world in the 4th century bc, and it was from this school that the leading mathematicians of the day, such as eudoxus of cnidus ( c. 390 - c. 340 bc ), came. plato also discussed the foundations of mathematics, clarified some of the definitions ( e. g. that of a line as " breadthless length " ). eudoxus developed the method of exhaustion, a precursor of modern integration and a theory of ratios that avoided the problem of incommensurable magnitudes. the former allowed the calculations of areas and volumes of curvilinear figures, while the latter enabled subsequent geometers to make significant advances in geometry. though he made no specific technical mathematical discoveries, aristotle ( 384 – c. 
322 bc ) contributed significantly to the development of mathematics by laying the foundations of logic. in the 3rd century bc, the premier center of mathematical education and research was the musaeum of alexandria. it was there that euclid ( c. 300 bc ) taught, and wrote the elements, widely considered the most successful and influential textbook of all time. the elements introduced mathematical rigor through the axiomatic method
https://en.wikipedia.org/wiki/History_of_mathematics
fundamentalist or evangelical christian beliefs in biblical literalism or biblical inerrancy, as opposed to the higher criticism supported by liberal christianity in the fundamentalist – modernist controversy. however, there are also examples of islamic and jewish scientific creationism that conform to the accounts of creation as recorded in their religious doctrines. the seventh - day adventist church has a history of support for creation science. this dates back to george mccready price, an active seventh - day adventist who developed views of flood geology, which formed the basis of creation science. this work was continued by the geoscience research institute, an official institute of the seventh - day adventist church, located on its loma linda university campus in california. creation science is generally rejected by the church of england as well as the roman catholic church. the pontifical gregorian university has officially discussed intelligent design as a " cultural phenomenon " without scientific elements. the church of england ' s official website cites charles darwin ' s local work assisting people in his religious parish. = = = views on science = = = creation science rejects evolution and the common descent of all living things on earth. instead, it asserts that the field of evolutionary biology is itself pseudoscientific or even a religion. creationists argue instead for a system called baraminology, which considers the living world to be descended from uniquely created kinds or " baramins. " creation science incorporates the concept of catastrophism to reconcile current landforms and fossil distributions with biblical interpretations, proposing the remains resulted from successive cataclysmic events, such as a worldwide flood and subsequent ice age. 
it rejects one of the fundamental principles of modern geology ( and of modern science generally ), uniformitarianism, which applies the same physical and geological laws observed on the earth today to interpret the earth ' s geological history. sometimes creationists attack other scientific concepts, like the big bang cosmological model or methods of scientific dating based upon radioactive decay. young earth creationists also reject current estimates of the age of the universe and the age of the earth, arguing for creationist cosmologies with timescales much shorter than those determined by modern physical cosmology and geological science, typically less than 10, 000 years. the scientific community has overwhelmingly rejected the ideas put forth in creation science as lying outside the boundaries of a legitimate science. the foundational premises underlying scientific creationism disqualify it as a science because the answers to all inquiry therein are preordained to conform to bible doctrine, and because that inquiry is constructed upon
https://en.wikipedia.org/wiki/Creation_science
we studied the optical properties of a resonantly excited trivalent er ensemble in si accessed via in situ single photon detection. a novel approach which avoids nanofabrication on the sample is introduced, resulting in a highly efficient detection of 70 excitation frequencies, of which 63 resonances have not been observed in the literature. the center frequencies and optical lifetimes of all resonances have been extracted, showing that 5 % of the resonances are within 1 ghz of our electrically detected resonances and that the optical lifetimes range from 0. 5 ms up to 1. 5 ms. we observed inhomogeneous broadening of less than 400 mhz and an upper bound on the homogeneous linewidth of 1. 4 mhz and 0. 75 mhz for two separate resonances, a reduction of more than an order of magnitude relative to values observed to date. these narrow optical transition properties show that er in si is an excellent candidate for future quantum information and communication applications.
arxiv:2108.07090
the $ t $ - png model introduced in aggarwal, borodin, and wheeler ( 2021 ) is a deformed version of the polynuclear growth ( png ) model. in this paper, we prove the hydrodynamic limit of the model using soft techniques. one key element of the proof is the construction of a colored version of the $ t $ - png model, which allows us to apply the superadditive ergodic theorem and obtain the hydrodynamic limit, albeit without identifying the limiting constant. we then find this constant by proving a law of large numbers for the $ \ alpha $ - points, which generalizes groeneboom ( 2001 ). along the way, we construct the stationary $ t $ - png model and prove a version of burke ' s theorem for it.
arxiv:2204.11158
we prove that the sensitivity of any non - trivial graph property on $ n $ vertices is at least $ \ lfloor \ frac { 1 } { 2 } n \ rfloor $, provided $ n $ is sufficiently large.
arxiv:1609.05320
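the bound above can be checked by brute force on a tiny instance. this hypothetical sketch uses triangle - containment as the non - trivial graph property on $ n = 4 $ vertices, where the claimed lower bound is $ \ lfloor n / 2 \ rfloor = 2 $; sensitivity is the maximum, over all inputs, of the number of single - edge flips that change the property's value:

```python
from itertools import combinations, product

def has_triangle(adj_bits, pairs):
    # pairs: all vertex pairs (a, b) with a < b; adj_bits: edge indicators.
    adj = {p for p, b in zip(pairs, adj_bits) if b}
    verts = {v for p in adj for v in p}
    return any({(a, b), (a, c), (b, c)} <= adj
               for a, b, c in combinations(sorted(verts), 3))

def sensitivity(n, prop):
    # Max over all inputs of the number of single-bit (edge) flips
    # that change the value of the property.
    pairs = list(combinations(range(n), 2))
    best = 0
    for bits in product((0, 1), repeat=len(pairs)):
        out = prop(bits, pairs)
        flips = sum(prop(bits[:i] + (1 - bits[i],) + bits[i + 1:], pairs) != out
                    for i in range(len(pairs)))
        best = max(best, flips)
    return best
```

for $ n = 4 $ the maximum is attained e. g. at the star $ k _ { 1, 3 } $, where adding any of the three missing edges creates a triangle, so the sensitivity is 3, comfortably above the $ \ lfloor n / 2 \ rfloor $ bound.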
when generating in - silico clinical electrophysiological outputs, such as electrocardiograms ( ecgs ) and body surface potential maps ( bspms ), mathematical models have relied on single physics, i. e. of the cardiac electrophysiology ( ep ), neglecting the role of the heart motion. since the heart is the most powerful source of electrical activity in the human body, its motion dynamically shifts the position of the principal electrical sources in the torso, influencing electrical potential distribution and potentially altering the ep outputs. in this work, we propose a computational model for the simulation of ecgs and bspms by coupling a cardiac electromechanical model with a model that simulates the propagation of the ep signal in the torso, thanks to a flexible numerical approach, that simulates the torso domain deformation induced by the myocardial displacement. our model accounts for the major mechano - electrical feedbacks, along with unidirectional displacement and potential couplings from the heart to the surrounding body. for the numerical discretization, we employ a versatile intergrid transfer operator that allows for the use of different finite element spaces to be used in the cardiac and torso domains. our numerical results are obtained on a realistic 3d biventricular - torso geometry, and cover both cases of sinus rhythm and ventricular tachycardia ( vt ), solving both the electromechanical - torso model in dynamical domains, and the classical electrophysiology - torso model in static domains. by comparing standard 12 - lead ecg and bspms, we highlight the non - negligible effects of the myocardial contraction on the ep outputs, especially in pathological conditions such as vt.
arxiv:2402.06308
recent observations suggest that hubble ' s constant is large, and hence that the universe appears to be younger than some of its constituents. the traditional escape route, which assumes that the expansion is accelerating, appears to be blocked by observations of type 1a supernovae, which suggest ( ed ) that the universe is decelerating. these observations are reconciled in a model in which the universe has experienced an inflationary phase in the recent past, driven by an ultra - light inflaton whose compton wavelength is of the same order as the hubble radius.
arxiv:astro-ph/0109379
the increasing proliferation of video surveillance cameras and the escalating demand for crime prevention have intensified interest in the task of violence detection within the research community. compared to other action recognition tasks, violence detection in surveillance videos presents additional issues, such as the wide variety of real fight scenes. unfortunately, existing datasets for violence detection are relatively small in comparison to those for other action recognition tasks. moreover, surveillance footage often features different individuals in each video and varying backgrounds for each camera. in addition, fast detection of violent actions in real - life surveillance videos is crucial to prevent adverse outcomes, thus necessitating models that are optimized for reduced memory usage and computational costs. these challenges complicate the application of traditional action recognition methods. to tackle all these issues, we introduce josenet, a novel self - supervised framework that provides outstanding performance for violence detection in surveillance videos. the proposed model processes two spatiotemporal video streams, namely rgb frames and optical flows, and incorporates a new regularized self - supervised learning approach for videos. josenet demonstrates improved performance compared to state - of - the - art methods, while utilizing only one - fourth of the frames per video segment and operating at a reduced frame rate. the source code is available at https : / / github. com / ispamm / josenet.
arxiv:2405.02961
it is shown that the list of unusual mesons planned for a careful study in photoproduction can be extended by the exotic states $ x ^ \ pm ( 1600 ) $ with $ i ^ g ( j ^ { pc } ) = 2 ^ + ( 2 ^ { + + } ) $ which should be looked for in the $ \ rho ^ \ pm \ rho ^ 0 $ decay channels in the reactions $ \ gamma n \ to \ rho ^ \ pm \ rho ^ 0n $ and $ \ gamma n \ to \ rho ^ \ pm \ rho ^ 0 \ delta $. the full classification of the $ \ rho ^ \ pm \ rho ^ 0 $ states by their quantum numbers is presented. a simple model for the spin structure of the $ \ gamma p \ to f _ 2 ( 1270 ) p $, $ \ gamma p \ to a ^ 0 _ 2 ( 1320 ) p $, and $ \ gamma n \ to x ^ \ pm ( n, \ delta ) $ reaction amplitudes is formulated and the tentative estimates of the corresponding cross sections at the incident photon energy $ e _ \ gamma \ approx 6 $ gev are obtained : $ \ sigma ( \ gamma p \ to f _ 2 ( 1270 ) p ) \ approx0. 12 $ $ \ mu $ b, $ \ sigma ( \ gamma p \ to a ^ 0 _ 2 ( 1320 ) p ) \ approx0. 25 $ $ \ mu $ b, $ \ sigma ( \ gamma n \ to x ^ \ pm n \ to \ rho ^ \ pm \ rho ^ 0n ) \ approx0. 018 $ $ \ mu $ b, and $ \ sigma ( \ gamma p \ to x ^ - \ delta ^ { + + } \ to \ rho ^ - \ rho ^ 0 \ delta ^ { + + } ) \ approx0. 031 $ $ \ mu $ b. the problem of the $ x ^ \ pm $ signal extraction from the natural background due to the other $ \ pi ^ \ pm \ pi ^ 0 \ pi ^ + \ pi ^ - $ production channels is discussed. in particular the estimates are presented for the $ \ gamma p \ to h _ 1 ( 1170 ) \ pi ^ + n $, $ \ gamma p \ to \ rho ' ^ { + } n \ to \ pi ^ + \ pi ^ 0 \ pi ^ + \ pi ^ - n $, and
arxiv:hep-ph/9901380
random numbers form an intrinsic part of modern day computing with applications in a wide variety of fields. but due to their limitations, the use of pseudo random number generators ( prngs ) is certainly not desirable for sensitive applications. quantum systems due to their intrinsic randomness form a suitable candidate for generation of true random numbers that can also be certified. in this work, the violation of chsh inequality has been used to propose a scheme by which one can generate device independent quantum random numbers by use of ibm quantum computers that are available on the cloud. the generated random numbers have been tested for their source of origin through experiments based on the testing of chsh inequality through available ibm quantum computers. the performance of each quantum computer against the chsh test has been plotted and characterized. further, efforts have been made to close as many loopholes as possible to produce device independent quantum random number generators. this study will provide new directions for the development of self - testing and semi - self - testing random number generators using quantum computers.
arxiv:2309.05299
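the chsh test underlying the certification above can be sketched numerically. assuming the ideal singlet - state correlator $ e ( a, b ) = - \ cos ( a - b ) $ at the standard optimal measurement angles (an idealization, not the noisy ibm hardware data of the paper), the quantum value reaches $ 2 \ sqrt { 2 } $, while every deterministic local strategy is bounded by 2:

```python
import math
from itertools import product

def chsh(E):
    # CHSH combination S = E(a,b) + E(a,b') + E(a',b) - E(a',b')
    return abs(E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1))

# Ideal singlet correlator at the standard optimal measurement angles.
A = [0.0, math.pi / 2]
B = [math.pi / 4, -math.pi / 4]
quantum = chsh(lambda i, j: -math.cos(A[i] - B[j]))

# Best deterministic local strategy: fixed outcomes a_i, b_j in {-1, +1}.
classical = max(chsh(lambda i, j, a=a, b=b: a[i] * b[j])
                for a in product((-1, 1), repeat=2)
                for b in product((-1, 1), repeat=2))
```

any measured $ s > 2 $ certifies that the outcomes could not have come from a deterministic ( or classically random ) source, which is what makes the generated numbers device independent.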
felsner introduced a cycle reversal, namely the ` flip ' reversal, for \ alpha - orientations ( i. e., orientations in which each vertex has a prescribed out - degree ) of a graph g embedded on the plane and further proved that the set of all the \ alpha - orientations of g carries a distributive lattice with respect to the flip reversals. in this paper, we give an explicit formula for the minimum number of flips needed to transform one \ alpha - orientation into another for graphs embedded on the plane or sphere, respectively.
arxiv:1706.00970
we study the critical region of lattice qed4 in the quenched approximation. the issue of triviality is addressed by contrasting simulation results for $ < \ bar \ psi \ psi > $ and for the susceptibilities with the predictions of two critical scenarios - - powerlaw scaling, and triviality \ ` a la nambu - - jona - lasinio. we discriminate among the two possibilities with reasonable accuracy and we confirm previous results for the critical point and exponents thanks to new analysis strategies and good quality data. the interplay of chiral symmetry breaking with the goldstone mechanism is studied in detail, and some puzzling features of past results are clarified. chiral symmetry restoration is observed in the spectrum : the candidate goldstone boson decouples in the weak coupling phase, while the propagators of the chiral doublets become degenerate. we also present the first measurements of the full mesonic spectrum, relevant for the study of flavour / rotational symmetry restoration. the systematic effects associated
arxiv:hep-lat/9411051
a measurement of the higgs boson ( h ) production via vector boson fusion ( vbf ) and its decay into a bottom quark - antiquark pair ( $ \ mathrm { b \ bar { b } } $ ) is presented using proton - proton collision data recorded by the cms experiment at $ \ sqrt { s } $ = 13 tev and corresponding to an integrated luminosity of 90. 8 fb $ ^ { - 1 } $. treating the gluon - gluon fusion process as a background and constraining its rate to the value expected in the standard model ( sm ) within uncertainties, the signal strength of the vbf process, defined as the ratio of the observed signal rate to that predicted by the sm, is measured to be $ \ mu ^ \ text { qqh } _ \ mathrm { hb \ bar { b } } $ = 1. 01 $ ^ { + 0. 55 } _ { - 0. 46 } $. the vbf signal is observed with a significance of 2. 4 standard deviations relative to the background prediction, while the expected significance is 2. 7 standard deviations. considering inclusive higgs boson production and decay into bottom quarks, the signal strength is measured to be $ \ mu ^ \ text { incl. } _ \ mathrm { hb \ bar { b } } $ = 0. 99 $ ^ { + 0. 48 } _ { - 0. 41 } $, corresponding to an observed ( expected ) significance of 2. 6 ( 2. 9 ) standard deviations.
arxiv:2308.01253
for a graph $g = (v, e)$, a subset $d$ of the vertex set $v$ is a dominating set of $g$ if every vertex not in $d$ is adjacent to at least one vertex of $d$. a dominating set $d$ of a graph $g$ with no isolated vertices is called a paired dominating set (pd-set) if $g[d]$, the subgraph induced by $d$ in $g$, has a perfect matching. the min-pd problem asks for a pd-set of minimum cardinality. the decision version of the min-pd problem remains np-complete even when $g$ belongs to restricted graph classes such as bipartite graphs and chordal graphs. on the positive side, the problem is efficiently solvable for many graph classes, including interval graphs, strongly chordal graphs, and permutation graphs. in this paper, we study the complexity of the problem in at-free graphs and planar graphs. the class of at-free graphs contains cocomparability graphs, permutation graphs, trapezoid graphs, and interval graphs as subclasses. we propose a polynomial-time algorithm to compute a minimum pd-set in at-free graphs. in addition, we present a linear-time 2-approximation algorithm for the problem in at-free graphs. further, we prove that the decision version of the problem is np-complete for planar graphs, which answers an open question asked by lin et al. (theor. comput. sci., 591 (2015): 99-105 and algorithmica, 82 (2020): 2809-2840).
arxiv:2112.05486
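the definition of a pd-set above can be checked directly: verify domination, then search for a perfect matching in the induced subgraph. a minimal python sketch (the brute-force matching search and the example graph are illustrative, not the paper's polynomial-time algorithm):

```python
def is_paired_dominating_set(adj, d):
    """check whether d is a paired dominating set of the graph adj
    (a dict mapping each vertex to its set of neighbours)."""
    d = set(d)
    # domination: every vertex not in d must have a neighbour in d
    for v in adj:
        if v not in d and not (adj[v] & d):
            return False
    # perfect matching in the induced subgraph g[d], by brute force
    def has_perfect_matching(vs):
        if not vs:
            return True
        v = min(vs)
        return any(has_perfect_matching(vs - {v, u}) for u in adj[v] & vs)
    return len(d) % 2 == 0 and has_perfect_matching(frozenset(d))

# path 0-1-2-3: {1, 2} dominates every vertex and induces an edge
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(is_paired_dominating_set(path, {1, 2}))  # True
print(is_paired_dominating_set(path, {0, 1}))  # False: vertex 3 undominated
```

the exponential matching search is fine for toy instances; on real inputs one would use a polynomial matching algorithm instead.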
we establish the hamiltonian formulation of the teleparallel equivalent of general relativity, without fixing the time gauge condition, by rigorously performing the legendre transform. the time gauge condition, considered previously, restricts the teleparallel geometry to the three-dimensional spacelike hypersurface. geometrically, the teleparallel geometry is now extended to the four-dimensional space-time. the resulting hamiltonian formulation differs from the standard adm formulation in many aspects, the main one being that the dynamics is now governed by the hamiltonian constraint h_0 and a set of primary constraints. the vector constraint h_i is derived from the hamiltonian constraint. the vanishing of the latter implies the vanishing of the vector constraint.
arxiv:gr-qc/0002059
we consider the residual empirical process in random design regression with long-memory errors. we establish its limiting behaviour, showing that its rates of convergence differ from those of the empirical process based on the (unobserved) errors. we also study a residual empirical process with estimated parameters. its asymptotic distribution can be used to construct kolmogorov-smirnov, cramér-smirnov-von mises, or other goodness-of-fit tests. the theoretical results are supported by simulation studies.
arxiv:1102.4368
this paper presents a comprehensive study on low-complexity waveform, modulation and coding (wmc) designs for the 3rd generation partnership project (3gpp) ambient internet of things (a-iot). a-iot is a low-cost, low-power iot system inspired by ultra high frequency (uhf) radio frequency identification (rfid) that aims to leverage existing cellular network infrastructure for efficient rf tag management. the paper compares the physical layer (phy) design challenges and requirements of rfid and a-iot, focusing in particular on backscatter communications. an overview of the standardization of phy designs in release 19 a-iot is provided, along with detailed schemes of the proposed low-complexity wmc designs. the performance of device-to-reader link designs is validated through simulations, demonstrating 6 db improvements of the proposed baseband waveform with coherent receivers over rfid line-coding-based solutions with non-coherent receivers when channel coding is adopted.
arxiv:2501.08555
we examine the quasiparticle lifetime and spectral weight near the fermi surface in the two-dimensional hubbard model. we use the flex approximation to self-consistently generate the matsubara green's functions and then analytically continue to the real axis to obtain the quasiparticle spectral functions. we compare the spectral functions found in the nearest-neighbor-hopping-only hubbard model with those found when second-neighbor hopping is included. this separates the effects of nesting, the van hove singularity, and the short-ranged antiferromagnetic correlations. the quasiparticle scattering rate is enhanced along the (0, pi) to (pi, pi) brillouin zone diagonal. when the density is close to half-filling these 'hot spots' lie on the fermi surface and the scattering rate increases with decreasing temperature. for the next-nearest-neighbor hopping scenario we observe a large range of doping where there is no antiferromagnetism but the scattering rate has a linear temperature dependence. on decreasing the interaction this non-fermi-liquid behavior is confined to doping levels where the fermi energy lies near the van hove singularity. we conclude that the 'hot spots' are associated with the antiferromagnetic phase transition, while the linear temperature dependence of the scattering rate is associated with the van hove singularity.
arxiv:cond-mat/9801030
since the proposal of transformers, these models have been limited to bounded input lengths because of their need to attend to every token in the input. in this work, we propose unlimiformer: a general approach that wraps any existing pretrained encoder-decoder transformer and offloads the cross-attention computation to a single k-nearest-neighbor (knn) index, where the returned knn distances are the attention dot-product scores. this knn index can be kept on either the gpu or cpu memory and queried in sub-linear time; this way, we can index practically unlimited input sequences, while every attention head in every decoder layer retrieves its top-k keys instead of attending to every key. we evaluate unlimiformer on several long-document and book-summarization benchmarks, showing that it can process even 500k-token-long inputs from the booksum dataset without any input truncation at test time. we demonstrate that unlimiformer improves pretrained models such as bart and longformer by extending them to unlimited inputs without additional learned weights and without modifying their code. we make our code and models publicly available at https://github.com/abertsch72/unlimiformer.
arxiv:2305.01625
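the core retrieval step can be pictured in a few lines: rank all encoded keys by dot product with the decoder query, keep the top-k, and softmax only over those scores. a toy pure-python sketch (the vectors, dimensions and the linear scan standing in for the knn index are all illustrative):

```python
import math

def knn_attention(query, keys, values, k):
    """attend only to the top-k keys by dot-product score, mimicking
    unlimiformer's knn retrieval (a linear scan plays the index here)."""
    scores = [(sum(q * kk for q, kk in zip(query, key)), i)
              for i, key in enumerate(keys)]
    top = sorted(scores, reverse=True)[:k]       # retrieve top-k keys
    m = max(s for s, _ in top)
    weights = [math.exp(s - m) for s, _ in top]  # softmax over k scores only
    z = sum(weights)
    dim = len(values[0])
    out = [sum(w / z * values[i][j] for w, (_, i) in zip(weights, top))
           for j in range(dim)]
    return out, [i for _, i in top]

keys = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [-1.0, 0.0]]
values = [[1.0], [2.0], [3.0], [4.0]]
out, idx = knn_attention([1.0, 0.0], keys, values, k=2)
print(sorted(idx))  # [0, 2]: the two keys most aligned with the query
```

in the real model the index holds one key per input token and is queried per decoding step, so the cost is sub-linear in the input length.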
the inventories carried in a supply chain as a strategic tool to influence competing firms are known as strategic inventories (si). we present a two-period game-theoretic supply chain model in which a single manufacturer supplies products to a pair of identical cournot duopolistic retailers. we show that the si carried by the retailers under a dynamic contract is pareto-dominating for the manufacturer, the retailers, consumers, the channel, and society as well. we also find that the retailers' si can, however, be eliminated when the manufacturer commits to a wholesale contract or when the inventory holding cost is too high. comparing the cases with and without downstream competition, we also show that the downstream cournot duopoly undermines the retailers' profits but benefits all others.
arxiv:2109.06995
in the hong kong observatory, the analogue forecast system (afs) for precipitation has been providing useful reference in predicting possible daily rainfall scenarios for the next 9 days, by identifying historical cases with similar weather patterns to the latest output from the deterministic model of the european centre for medium-range weather forecasts (ecmwf). recent advances in machine learning allow more sophisticated models to be trained using historical data and the patterns of high-impact weather events to be represented more effectively. as such, an enhanced afs has been developed using the deep learning technique of autoencoders. the datasets of the fifth generation of the ecmwf reanalysis (era5) are utilised, where more meteorological elements at higher horizontal, vertical and temporal resolutions are available compared to the previous ecmwf reanalysis products used in the existing afs. the enhanced afs features four major steps in generating the daily rain class forecasts: (1) preprocessing of gridded era5 and ecmwf model forecasts, (2) feature extraction by the pretrained autoencoder, (3) application of optimised feature weightings based on historical cases, and (4) calculation of the final rain class from a weighted ensemble of top analogues. the enhanced afs demonstrates a consistent and superior performance over the existing afs, especially in capturing heavy rain cases, during the verification period from 2019 to 2022. this paper presents the detailed formulation of the enhanced afs and discusses its advantages and limitations in supporting precipitation forecasting in hong kong.
arxiv:2501.02814
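steps (3) and (4) of the enhanced afs can be pictured with a toy sketch: score historical cases by a feature-weighted distance to the current forecast features, then take a distance-weighted vote of the rain classes of the top analogues. the feature values, weights and class labels below are invented for illustration:

```python
import math

def analogue_rain_class(features, history, weights, top_n=3):
    """rank historical cases by feature-weighted distance to the forecast
    features, then return the distance-weighted vote of the rain classes
    of the top_n analogues (toy stand-in for afs steps 3 and 4)."""
    def dist(a, b):
        return math.sqrt(sum(w * (x - y) ** 2
                             for w, x, y in zip(weights, a, b)))
    ranked = sorted(history, key=lambda case: dist(features, case[0]))
    votes = {}
    for feats, rain_class in ranked[:top_n]:
        # closer analogues get larger voting weight
        votes[rain_class] = votes.get(rain_class, 0.0) \
            + 1.0 / (1e-9 + dist(features, feats))
    return max(votes, key=votes.get)

history = [([0.9, 0.8], "heavy"), ([0.8, 0.9], "heavy"),
           ([0.1, 0.2], "none"), ([0.2, 0.1], "none")]
print(analogue_rain_class([0.85, 0.85], history, weights=[1.0, 1.0]))  # heavy
```

in the real system the feature vectors come from the pretrained autoencoder and the weights are optimised on historical cases.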
large language models (llms) have achieved impressive performance on code generation. although prior studies enhanced llms with prompting techniques and code refinement, they still struggle with complex programming problems due to rigid solution plans. in this paper, we draw on pair programming practices to propose paircoder, a novel llm-based framework for code generation. paircoder incorporates two collaborative llm agents, namely a navigator agent for high-level planning and a driver agent for specific implementation. the navigator is responsible for proposing promising solution plans, selecting the current optimal plan, and directing the next iteration round based on execution feedback. the driver follows the guidance of the navigator to undertake initial code generation, code testing, and refinement. this interleaved and iterative workflow involves multi-plan exploration and feedback-based refinement, which mimics the collaboration of pair programmers. we evaluate paircoder with both open-source and closed-source llms on various code generation benchmarks. extensive experimental results demonstrate the superior accuracy of paircoder, achieving relative pass@1 improvements of 12.00%-162.43% compared to prompting llms directly.
arxiv:2409.05001
for each object in a tensor triangulated category, we construct a natural continuous map from the object's support (a closed subset of the category's triangular spectrum) to the zariski spectrum of a certain commutative ring of endomorphisms. when applied to the unit object this recovers a construction of p. balmer. these maps provide an iterative approach for understanding the spectrum of a tensor triangulated category by starting with the comparison map for the unit object and iteratively analyzing the fibers of this map via "higher" comparison maps. we illustrate this approach for the stable homotopy category of finite spectra. in fact, the same underlying construction produces a whole collection of new comparison maps, including maps associated to (and defined on) each closed subset of the triangular spectrum. these latter maps provide an alternative strategy for analyzing the spectrum by iteratively building a filtration of closed subsets by pulling back filtrations of affine schemes.
arxiv:1302.4521
with ever-increasing productivity targets in mining operations, there is a growing interest in mining automation. in future mines, remote-controlled and autonomous haulers will operate underground guided by lidar sensors. we envision reusing lidar measurements to maintain accurate mine maps that would contribute to both safety and productivity. extrapolating from a pilot project on reliable wireless communication in boliden's kankberg mine, we propose establishing a system-of-systems (sos) with lidar-equipped haulers and existing mapping solutions as constituent systems. sos requirements engineering inevitably adds a political layer, as independent actors are stakeholders both on the system and sos levels. we present four sos scenarios representing different business models, discussing how development and operations could be distributed among boliden and external stakeholders, e.g., the vehicle suppliers, the hauling company, and the developers of the mapping software. based on eight key variation points, we compare the four scenarios from both technical and business perspectives. finally, we validate our findings in a seminar with participants from the relevant stakeholders. we conclude that to determine which scenario is the most promising for boliden, trade-offs regarding control, costs, risks, and innovation must be carefully evaluated.
arxiv:1705.05087
erdős introduced the quantity $s = \sum_{i=1}^{t} x_i$, where $x_1, \dots, x_t$ are arithmetic progressions that cover the square numbers up to $n$. he conjectured that $s$ is close to $n$, i.e. the square numbers cannot be covered "economically" by arithmetic progressions. sárközy confirmed this conjecture and proved that $s \geq cn/\log^2 n$. in this paper, we extend this to shrinking polynomials and so-called $\{x_i\}$ quasi-progressions.
arxiv:2302.00408
gamow-teller strength functions in full $(pf)^8$ spaces are calculated with sufficient accuracy to ensure that all the states in the resonance region have been populated. many of the resulting peaks are weak enough to become unobservable. the quenching factor necessary to bring the low-lying observed states into agreement with shell model predictions is shown to be due to nuclear correlations. to within experimental uncertainties it is the same as that found in one-particle transfer and (e, e') reactions. perfect consistency between the observed $^{48}$ca(p, n)$^{48}$sc peaks and the calculation is achieved by assuming an observation threshold of 0.75% of the total strength, a value that seems typical in several experiments.
arxiv:nucl-th/9401010
this work presents a systematic study of objective evaluations of abstaining classifications using information-theoretic measures (itms). first, we define objective measures as those that do not depend on any free parameter. this definition provides technical simplicity for examining "objectivity" or "subjectivity" directly in classification evaluations. second, we propose twenty-four normalized itms, derived from either mutual information, divergence, or cross-entropy, for investigation. contrary to conventional performance measures that apply empirical formulas based on users' intuitions or preferences, the itms are theoretically more sound for realizing objective evaluations of classifications. we apply them to distinguish "error types" and "reject types" in binary classifications without the need for input data of cost terms. third, to better understand and select the itms, we suggest three desirable features for classification assessment measures, which appear more crucial and appealing from the viewpoint of classification applications. using these features as "meta-measures", we can reveal the advantages and limitations of itms from a higher level of evaluation knowledge. numerical examples are given to corroborate our claims and compare the differences among the proposed measures. the best measure is selected in terms of the meta-measures, and its specific properties regarding error types and reject types are analytically derived.
arxiv:1107.1837
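as one example of a parameter-free itm, the mutual information between the true class and the decision (with the reject option as an extra column of the confusion matrix) can be normalized by the entropy of the true classes. a small sketch (this particular normalization choice and the toy matrices are illustrative; the paper studies twenty-four variants):

```python
import math

def normalized_mutual_information(confusion):
    """mutual information between true class (rows) and decision
    (columns, reject column included), divided by the entropy of the
    true classes -- a parameter-free evaluation measure."""
    n = sum(sum(row) for row in confusion)
    p_row = [sum(row) / n for row in confusion]
    p_col = [sum(confusion[i][j] for i in range(len(confusion))) / n
             for j in range(len(confusion[0]))]
    mi = 0.0
    for i, row in enumerate(confusion):
        for j, c in enumerate(row):
            if c:
                p = c / n
                mi += p * math.log2(p / (p_row[i] * p_col[j]))
    h = -sum(p * math.log2(p) for p in p_row if p)
    return mi / h

# binary problem with a third "reject" column; perfect separation gives 1.0
print(normalized_mutual_information([[50, 0, 0], [0, 50, 0]]))  # 1.0
```

errors and rejections both reduce the measure, but by different amounts, which is what lets itms discriminate error types from reject types without cost terms.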
we prove that odd unbounded p-summable fredholm modules are also bounded p-summable fredholm modules (this is the odd counterpart of a result of a. connes for the case of even fredholm modules).
arxiv:math/9908091
we develop an innovative numerical technique to describe few-body systems. correlated gaussian basis functions are used to expand the channel functions in the hyperspherical representation. the method is proven to be robust and efficient compared to other numerical techniques. the method is applied to few-body systems with short-range interactions, including several examples for three- and four-body systems. specifically, for the two-component, four-fermion system, we extract the coefficients that characterize its behavior at unitarity.
arxiv:0904.1405
in this paper, channel estimation (ce) of intelligent reflecting surface aided near-field (nf) multi-user communication is investigated. initially, the least squares (ls) estimator and minimum mean square error (mmse) estimator for the estimated channel are designed, and their mean square errors (mses) are derived. subsequently, to fully harness the potential of deep residual networks (drns) in denoising, the above ce problem is recast as a denoising task, a drn-driven nf ce (drn-nfce) framework is proposed, and the cramér-rao lower bound (crlb) is derived to serve as a benchmark for performance evaluation. in addition, to effectively capture and leverage these diverse channel features, a federated learning (fl) based global drn-nfce network, namely fl-drn-nfce, is constructed through collaborative training and joint optimization of single-region drn-nfce (sr-drn-nfce) networks in different user regions. here, users are divided into multiple regions. correspondingly, a user region classifier based on a convolutional neural network is designed to achieve the goal of matching datasets from different user regions to the corresponding sr-drn-nfce network. simulation results demonstrate that the proposed fl-drn-nfce framework outperforms ls, mmse, and the variant without residual connections in terms of mse, and the proposed fl-drn-nfce method has higher ce accuracy than the sr-drn-nfce method.
arxiv:2410.20992
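the ls baseline mentioned above reduces, for a single real channel tap observed through known pilots, to a one-line projection. a toy sketch (real-valued scalars instead of the paper's matrix channel, and all numbers invented for illustration):

```python
import random

def ls_channel_estimate(pilots, received):
    """least-squares estimate of one real channel tap h from known pilot
    symbols x and observations y = h*x + noise: the minimizer of
    sum (y - h*x)**2 is h_ls = <x, y> / <x, x>."""
    return (sum(x * y for x, y in zip(pilots, received))
            / sum(x * x for x in pilots))

random.seed(0)
h_true = 0.7
pilots = [1.0, -1.0, 1.0, 1.0, -1.0, 1.0, -1.0, -1.0]
received = [h_true * x + random.gauss(0, 0.05) for x in pilots]
h_hat = ls_channel_estimate(pilots, received)
print(abs(h_hat - h_true) < 0.1)  # True: estimate close to the true tap
```

mmse additionally weights this projection by channel and noise statistics, which is why it beats ls at low snr; the drn-based denoisers in the paper go further by learning the channel structure from data.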
proxima centauri is known to host an earth-like planet in its habitable zone; very recently a second candidate planet was proposed based on radial velocities. at quadrature, the expected projected separation of this new candidate is larger than 1 arcsec, making it a potentially interesting target for direct imaging. while difficult, identification of the optical counterpart of this planet would allow detailed characterization of the closest planetary system. we searched for a counterpart in sphere images acquired during four years through the shine survey. in order to account for the large orbital motion of the planet, we used a method that assumes the circular orbit obtained from radial velocities and exploits the sequence of observations acquired close to quadrature in the orbit. we checked this with a more general approach that considers keplerian motion, k-stacker. we did not obtain a clear detection. the best candidate has s/n = 6.1 in the combined image. a statistical test suggests that the probability that this detection is due to random fluctuation of noise is < 1%, but this result depends on the assumption that the distribution of noise is uniform over the image. the position of this candidate and the orientation of its orbital plane fit well with observations in the alma 12m array image. however, the astrometric signal expected from the orbit of the candidate we detected is 3 sigma away from the astrometric motion of proxima as measured from early gaia data. this, together with the unexpectedly high flux associated with our direct imaging detection, means we cannot confirm that our candidate is indeed proxima c. on the other hand, if confirmed, this would be the first imaging observation of a planet discovered from radial velocities and the second one (after fomalhaut b) of reflecting circumplanetary material. further confirmation observations should be done as soon as possible.
arxiv:2004.06685
abundances of alpha-elements such as ca and mg in disk and halo stars are usually derived from equivalent widths of lines measured on high resolution spectra, assuming local thermodynamic equilibrium (lte). in this paper, we present non-lte differential abundances derived by computing the statistical equilibrium of ca i and mg i atoms, using high resolution equivalent widths available in the literature for 252 dwarf to subgiant stars. these non-lte abundances, combined with recent determinations of non-lte abundances of iron, seem to remove the dispersion of the [ca/fe] and [mg/fe] ratios in the galactic halo and disk phases, revealing new and surprising structures. these results have important consequences for chemical evolution models of the galaxy. in addition, non-lte abundance ratios for stars belonging to the m92 cluster apparently show the same behavior. more high resolution observations, mainly of globular clusters, are urgently needed to confirm our results.
arxiv:astro-ph/0004337
the key requirement for quantum networking is the distribution of entanglement between nodes. surprisingly, entanglement can be generated across a network without direct transfer, or communication, of entanglement. in contrast to information gain, which cannot exceed the communicated information, the entanglement gain is bounded by the communicated quantum discord, a more general measure of quantum correlation that includes but is not limited to entanglement. here, we experimentally entangle two communicating parties sharing three initially separable photonic qubits by exchange of a carrier photon that is unentangled with either party at all times. we show that distributing entanglement with separable carriers is resilient to noise and in some cases becomes the only way of distributing entanglement through noisy environments.
arxiv:1303.4634
we show that every orientable 3-manifold is a classifying space b\gamma where \gamma is a groupoid of germs of homeomorphisms of r. this follows by showing that every orientable 3-manifold m admits a codimension one foliation f such that the holonomy cover of every leaf is contractible. the f we construct can be taken to be c^1 but not c^2. the existence of such an f answers positively a question posed by tsuboi [classifying spaces for groupoid structures, notes from minicourse at puc, rio de janeiro (2001)], but leaves open the question of whether m = b\gamma for some c^\infty groupoid \gamma.
arxiv:math/0206066
ensuring the safety of autonomous vehicles (avs) is paramount before they can be introduced to the market. more specifically, securing the safety of the intended functionality (sotif) poses a notable challenge; while iso 21448 outlines numerous activities to refine the performance of avs, it offers minimal quantitative guidance. this paper endeavors to decompose the acceptance criterion into quantitative perception requirements, aiming to furnish developers with requirements that are not only understandable but also actionable. this paper introduces a risk decomposition methodology to derive sotif requirements for perception. more explicitly, for subsystem-level safety requirements, we define a collision severity model to establish requirements for state uncertainty and present a bayesian model to discern requirements for existence uncertainty. for component-level safety requirements, we propose a decomposition method based on the shapley value. our findings indicate that these methods can effectively decompose the system-level safety requirements into quantitative perception requirements, potentially facilitating the safety verification of various av components.
arxiv:2501.10097
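the shapley-value idea behind the component-level decomposition can be computed exactly for tiny games: average each component's marginal contribution over all orderings. the characteristic function below (two perception components sharing a risk budget) is invented for illustration:

```python
from itertools import permutations

def shapley_values(players, value):
    """exact shapley values by averaging marginal contributions over all
    orderings of the players; value maps a coalition to its worth."""
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            phi[p] += value(coalition | {p}) - value(coalition)
            coalition = coalition | {p}
    return {p: phi[p] / len(perms) for p in phi}

# toy "perception risk" game: component a alone covers 0.5 of the budget,
# component b alone 0.3, together 1.0 (synergy)
v = lambda s: {frozenset(): 0.0, frozenset("a"): 0.5,
               frozenset("b"): 0.3, frozenset("ab"): 1.0}[frozenset(s)]
print(shapley_values("ab", v))  # {'a': 0.6, 'b': 0.4}
```

the shares sum to the grand-coalition value (efficiency), which is what makes the decomposition into per-component requirements consistent.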
we present a detailed investigation of minimum detection efficiencies below which locality cannot be violated by any quantum system of any dimension in bipartite bell experiments. lower bounds on these minimum detection efficiencies are determined with the help of linear programming techniques. our approach is based on the observation that any possible bipartite quantum correlation originating from a quantum state in an arbitrary-dimensional hilbert space is sandwiched between two probability polytopes, namely the local (bell) polytope and a corresponding nonlocal no-signaling polytope. numerical results are presented demonstrating the dependence of these lower bounds on the numbers of inputs and outputs of the bipartite physical system.
arxiv:0808.2126
in this work, the problem of cross-tier interference in a two-tiered (macro-cell and cognitive small-cell) network, under the complete spectrum sharing paradigm, is studied. a new orthogonal precoder transmit scheme for the small base stations, called multi-user vandermonde-subspace frequency division multiplexing (mu-vfdm), is proposed. mu-vfdm allows several cognitive small base stations to coexist with legacy macro-cell receivers by nulling the small- to macro-cell cross-tier interference, without any cooperation between the two tiers. this cleverly designed cascaded precoder structure not only cancels the cross-tier interference but also avoids co-tier interference within the small-cell network. the achievable sum-rate of the small-cell network, satisfying the interference cancelation requirements, is evaluated for perfect and imperfect channel state information at the transmitter. simulation results for the cascaded mu-vfdm precoder show performance comparable to that of the state-of-the-art dirty paper coding technique for the case of a dense cellular layout. finally, a comparison between mu-vfdm and a standard complete spectrum separation strategy is proposed. promising gains in terms of achievable sum-rate are shown for the two-tiered network w.r.t. the traditional bandwidth management approach.
arxiv:1302.4786
we study the cosmological evolution of a type-0 string theory by employing non-criticality, which may be induced by fluctuations of the d3 brane worlds. we check the consistency of the approach to o(alpha') in the corresponding sigma-model. the ten-dimensional theory is reduced to an effective four-dimensional model, with only time-dependent fields. we show that the four-dimensional universe has an inflationary phase and a graceful exit from it, while the other extra dimensions are stabilized to a constant value, with the fifth dimension much larger than the others. we pay particular attention to demonstrating the role of tachyonic matter in inducing these features. the universe asymptotes, for large times, to a non-accelerating linearly-expanding universe with a time-dependent dilaton and a vacuum energy relaxing to zero à la quintessence.
arxiv:hep-th/0107124
the dynamics of the reynolds stress tensor for turbulent flows is described with an evolution equation coupling both geometric effects and turbulent source terms. the effects of the mean flow geometry show up when the source terms are neglected: the reynolds stress tensor is then expressed as the sum of three tensor products of vector fields which are governed by a distorted gyroscopic equation. along the mean flow trajectories, the fluctuations of velocity are described by differential equations whose coefficients depend only on the mean flow deformation. if the mean flow vorticity is small enough, an approximate turbulence model is derived, and its application to shear shallow water flows is proposed. moreover, the approximate turbulence model admits a variational formulation similar to that of capillary fluids.
arxiv:0903.4949
the north celestial pole loop (ncpl) provides a unique laboratory for studying the early-stage precursors of star formation. uncovering its origin is key to understanding the dynamical mechanisms that control the evolution of its contents. in this study, we explore the 3d geometry and the dynamics of the ncpl using high-resolution dust extinction data and h i data, respectively. we find that material toward polaris and ursa major is distributed along a plane similarly oriented to the radcliffe wave. the spider, projected in between, appears disconnected in 3d, a discontinuity in the loop shape. we find that the elongated cavity that forms the inner part of the ncpl is a protrusion of the local bubble (lb), likely filled with warm (possibly hot) gas, that passes through and goes beyond the location of the dense clouds. an idealized model of the cavity as a prolate spheroid oriented toward the observer, reminiscent of the cylindrical model proposed by meyerdierks et al. (1991), encompasses the protrusion and fits into arcs of warm h i gas expanding laterally to it. as first argued by meyerdierks et al. (1991), the non-spherical geometry of the cavity and the lack of ob stars interior to it disfavor an origin caused by a single point-like source of energy or multiple supernovae. rather, the formation of the protrusion could be related to the propagation of warm gas from the lb into a pre-existing non-uniform medium in the lower halo, the topology of which was likely shaped by past star formation activity along the local arm.
arxiv:2212.02592
we describe a variety of methods to compute the functions $k_{ia}(x)$, $l_{ia}(x)$ and their derivatives for real $a$ and positive $x$. these functions are numerically satisfactory independent solutions of the differential equation $x^2 w'' + x w' + (a^2 - x^2) w = 0$. in an accompanying paper (algorithm xxx: modified bessel functions of imaginary order and positive argument) we describe the implementation of these methods in fortran 77 codes.
arxiv:math/0401128
the power flow solvable boundary plays an important role in contingency analysis, security assessment, and planning processes. however, constructing the real solvable boundary in multidimensional parameter space is burdensome and time consuming. in this paper, we develop a new technique to approximate the solvable boundary of distribution systems based on the banach fixed point theorem. not only is the new technique fast and non-iterative, but the approximated boundary is also more valuable to system operators in the sense that it is closer to the feasible region. moreover, a simple solvability criterion is also introduced that can serve as a security constraint in various planning and operational problems.
arxiv:1503.01506
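the banach fixed-point view can be illustrated on the smallest possible distribution system: a two-bus feeder whose load-bus voltage satisfies v = 1 - r*p/v in per unit. when the loading r*p is small the map is a contraction and picard iteration converges to the unique high-voltage solution, while r*p = 1/4 marks the solvable boundary. a toy sketch (the feeder model and numbers are illustrative, not the paper's multidimensional criterion):

```python
def solve_voltage(r, p, tol=1e-12, max_iter=200):
    """picard iteration for the fixed-point equation v = 1 - r*p/v of a
    toy two-bus feeder (per unit). for r*p < 1/4 the map contracts near
    the solution, so by banach's theorem the iterates converge to the
    unique high-voltage root of v**2 - v + r*p = 0."""
    v = 1.0
    for _ in range(max_iter):
        v_new = 1.0 - r * p / v
        if abs(v_new - v) < tol:
            return v_new
        v = v_new
    raise RuntimeError("no convergence: loading too close to the boundary")

v = solve_voltage(r=0.05, p=2.0)              # r*p = 0.1, inside the region
closed_form = (1 + (1 - 4 * 0.1) ** 0.5) / 2  # high-voltage root
print(round(v, 6), round(closed_form, 6))     # both approx 0.887298
```

the contraction condition, not the iteration itself, is what the paper exploits: checking it gives a non-iterative certificate that a load point is solvable.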
we approach the problem of constructing an explicit holographic dictionary for the ads$_2$/cft$_1$ correspondence in the context of higher derivative gravitational actions in ads$_2$ space-times. these actions are obtained by an $s^2$ reduction of four-dimensional ${\cal n} = 2$ wilsonian effective actions with weyl squared interactions, restricted to constant scalar backgrounds. bps black hole near-horizon space-times fall into this class of backgrounds, and by identifying the boundary operators dual to the bulk fields, we explicitly show how the wald entropy of the bps black hole is holographically encoded in the anomalous transformation of the operator dual to a composite bulk field. additionally, using a 2d/3d lift, we show that the cft holographically dual to ads$_2$ is naturally embedded in the chiral half of the cft$_2$ dual to the ads$_3$ space-time, and we identify the specific operator in cft$_1$ that encodes the chiral central charge of the cft$_2$.
arxiv:2010.08761
let $f: s^2 \to s^2$ be a continuous map of degree $d$, $|d| > 1$, and let $n_n f$ denote the number of fixed points of $f^n$. we show that if $f$ is a thurston map with non-hyperbolic orbifold, then either the growth rate inequality $\limsup \frac{1}{n} \log n_n f \geq \log |d|$ holds for $f$, or $f$ has exactly two critical points which are fixed and totally invariant.
arxiv:2211.03571
model predictive control (mpc) and reinforcement learning (rl) are two powerful control strategies with, arguably, complementary advantages. in this work, we show how actor-critic rl techniques can be leveraged to improve the performance of mpc. the rl critic is used as an approximation of the optimal value function, and an actor roll-out provides an initial guess for the primal variables of the mpc. a parallel control architecture is proposed where each mpc instance is solved twice for different initial guesses. besides the actor roll-out initialization, a shifted initialization from the previous solution is used. thereafter, the actor and the critic are again used to approximately evaluate the infinite-horizon cost of these trajectories. the control actions from the lowest-cost trajectory are applied to the system at each time step. we establish that the proposed algorithm is guaranteed to outperform the original rl policy plus an error term that depends on the accuracy of the critic and decays with the horizon length of the mpc formulation. moreover, we do not require globally optimal solutions for these guarantees to hold. the approach is demonstrated on an illustrative toy example and an autonomous driving (ad) overtaking scenario.
arxiv:2406.03995
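The solve-twice-and-select logic described in the abstract above can be sketched in a few lines. This is an illustrative sketch only: the toy scalar dynamics, `stage_cost`, and `critic` below are assumptions for demonstration, not the paper's system, solver, or learned value function.

```python
def stage_cost(x, u):
    # Hypothetical quadratic stage cost; not from the paper.
    return x * x + 0.1 * u * u

def critic(x):
    # Stand-in for a learned value function V(x) approximating the tail cost.
    return 2.0 * x * x

def rollout_cost(x0, actions):
    """Finite-horizon cost of an action sequence plus the critic's
    estimate of the remaining infinite-horizon tail."""
    x, cost = x0, 0.0
    for u in actions:
        cost += stage_cost(x, u)
        x = 0.9 * x + u  # toy scalar linear dynamics (an assumption)
    return cost + critic(x)

def select_action(x0, actor_guess, shifted_guess):
    """Score the two candidate initializations (actor roll-out and
    shifted previous solution) and apply the first action of the
    lowest-cost trajectory, mirroring the parallel architecture above."""
    candidates = [actor_guess, shifted_guess]
    costs = [rollout_cost(x0, a) for a in candidates]
    best = min(range(len(candidates)), key=lambda i: costs[i])
    return candidates[best][0], costs[best]

u0, c = select_action(1.0, [-0.5, -0.3], [-0.4, -0.4])
print(u0, c)
```

In the full method each candidate warm-starts an MPC solve before being scored; the sketch skips the solver and scores the guesses directly.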
El Niño is an extreme weather event featuring unusual warming of surface waters in the eastern equatorial Pacific Ocean. This phenomenon is characterized by heavy rains and floods that negatively affect the economic activities of the impacted areas. Understanding how this phenomenon influences consumption behavior at different granularity levels is essential for recommending strategies to normalize the situation. With this aim, we performed a multi-scale analysis of data associated with bank transactions involving credit and debit cards. Our findings can be summarized into two main results: coarse-grained analysis reveals the presence of the El Niño phenomenon and the recovery time in a given territory, while fine-grained analysis demonstrates a change in individuals' purchasing patterns and in merchant relevance as a consequence of the climatic event. The results also indicate that society successfully withstood the natural disaster owing to the economic structure built over time. In this study, we present a new method that may be useful for better characterizing future extreme events.
arxiv:2008.04887
We study the projective closures of three important families of affine monomial curves in dimension $4$, namely the Backelin curve, the Bresinsky curve and the Arslan curve, in order to explore possible connections between syzygies and the arithmetically Cohen-Macaulay property.
arxiv:2101.12440
Viscoelastic flows through porous media become unstable and chaotic beyond critical flow conditions, impacting industrial and biological processes. Recently, Walkama \textit{et al.} [Phys. Rev. Lett. \textbf{124}, 164501 (2020)] have shown that geometric disorder greatly suppresses such chaotic dynamics. We demonstrate experimentally that geometric disorder \textit{per se} is not the reason for this suppression, and that disorder can also promote chaotic fluctuations, given a slightly modified initial condition. The results are explained by the effect of disorder on the occurrence of stagnation points exposed to the flow field, which depends on the initially ordered geometric configuration.
arxiv:2105.11063
The evolution of a quantum system undergoing very frequent measurements takes place in a proper subspace of the total Hilbert space (quantum Zeno effect). When the measuring apparatus is included in the quantum description, the Zeno effect becomes a pure consequence of the dynamics. We show that for continuous measurement processes the quantum Zeno evolution derives from an adiabatic theorem. The system is forced to evolve in a set of orthogonal subspaces of the total Hilbert space and a dynamical superselection rule arises. The dynamical properties of this evolution are investigated and several examples are considered.
arxiv:quant-ph/0202174
All data on the Internet are transferred by network traffic; thus, accurately modeling network traffic can help improve network service quality and protect data privacy. Pretrained models for network traffic can utilize large-scale raw data to learn the essential characteristics of network traffic and generate distinguishable results for input traffic without considering specific downstream tasks. Effective pretrained models can significantly optimize the training efficiency and effectiveness of downstream tasks, such as application classification, attack detection and traffic generation. Despite the great success of pretraining in natural language processing, no such work exists in the network field. Considering the diverse demands and characteristics of network traffic and network tasks, it is non-trivial to build a pretrained model for network traffic, and we face various challenges, especially the heterogeneous headers and payloads in multi-pattern network traffic and the different context dependencies of diverse downstream network tasks. To tackle these challenges, in this paper we make the first attempt to provide a generative pretrained model, NetGPT, for both traffic understanding and generation tasks. We propose multi-pattern network traffic modeling to construct unified text inputs and support both traffic understanding and generation tasks. We further optimize the adaptation of the pretrained model to diversified tasks by shuffling header fields, segmenting packets in flows, and incorporating diverse task labels with prompts. With diverse traffic datasets from encrypted software, DNS, private industrial protocols and cryptocurrency mining, extensive experiments demonstrate the effectiveness of our NetGPT in a range of traffic understanding and generation tasks on traffic datasets, outperforming state-of-the-art baselines by a wide margin.
arxiv:2304.09513
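The header-field shuffling mentioned above can be illustrated generically. This is a hedged sketch: the field names and the text serialization below are invented for illustration and are not NetGPT's actual tokenization or format.

```python
import random

def shuffle_header_fields(fields, seed=0):
    """Permute (name, value) header fields before serializing them to
    text, so a model does not over-fit to a fixed field order.
    Generic sketch with hypothetical fields, not the paper's pipeline."""
    rng = random.Random(seed)  # seeded for reproducibility
    shuffled = list(fields)
    rng.shuffle(shuffled)
    return " ".join(f"{k}={v}" for k, v in shuffled)

# Hypothetical packet header fields.
packet = [("src", "10.0.0.1"), ("dst", "10.0.0.2"),
          ("proto", "tcp"), ("len", "60")]
print(shuffle_header_fields(packet))
```

The same fields appear in every serialized sample, only their order varies across seeds, which is the order-invariance the augmentation is meant to encourage.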
The article presents calculated dissociative recombination (DR) rate coefficients for H$_3^+$. The previous theoretical work on H$_3^+$ was performed using the adiabatic hyperspherical approximation to calculate the target ion vibrational states, and it considered just a limited number of ionic rotational states. In this study, we use accurate vibrational wave functions and a larger number of possible rotational states of the H$_3^+$ ground vibrational level. The DR rate coefficient obtained is found to agree better with the experimental data from storage-ring experiments than the previous theoretical calculation. We present evidence that excited rotational states could be playing an important role in those experiments for collision energies above 10 meV. The DR rate coefficients calculated separately for ortho- and para-H$_3^+$ are predicted to differ significantly at low energy, a result consistent with a recent experiment. We also present DR rate coefficients for vibrationally excited initial states of H$_3^+$, which are found to be somewhat larger than the rate coefficient for the ground vibrational level.
arxiv:0708.2715
Most scientific challenges can be framed into one of the following three levels of complexity of function approximation. Type 1: approximate an unknown function given input/output data. Type 2: consider a collection of variables and functions, some of which are unknown, indexed by the nodes and hyperedges of a hypergraph (a generalized graph where edges can connect more than two vertices); given partial observations of the variables of the hypergraph (satisfying the functional dependencies imposed by its structure), approximate all the unobserved variables and unknown functions. Type 3: expanding on Type 2, if the hypergraph structure itself is unknown, use partial observations of the variables of the hypergraph to discover its structure and approximate its unknown functions. While most computational science and engineering and scientific machine learning challenges can be framed as Type 1 and Type 2 problems, many scientific problems can only be categorized as Type 3. Despite their prevalence, these Type 3 challenges have been largely overlooked due to their inherent complexity. Although Gaussian process (GP) methods are sometimes perceived as a well-founded but old technology limited to Type 1 curve fitting, their scope has recently been expanded to Type 2 problems. In this paper, we introduce an interpretable GP framework for Type 3 problems, targeting the data-driven discovery and completion of computational hypergraphs. Our approach is based on a kernel generalization of row echelon form reduction from linear systems to nonlinear ones, and on variance-based analysis. Here, variables are linked via GPs, and those contributing to the highest data variance unveil the hypergraph's structure. We illustrate the scope and efficiency of the proposed approach with applications to (algebraic) equation discovery, network discovery (gene pathways, chemical, and mechanical) and raw data analysis.
arxiv:2311.17007
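The Type 1 baseline referred to above, GP curve fitting from input/output data, can be sketched in pure NumPy. This is a minimal illustration of that baseline under assumed kernel and data choices, not the authors' Type 3 hypergraph framework.

```python
import numpy as np

def rbf(a, b, ell):
    """Squared-exponential kernel matrix between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, ell=0.2, noise=1e-6):
    """Posterior mean of a GP conditioned on near-noise-free data;
    the small jitter `noise` keeps the kernel matrix invertible."""
    K = rbf(x_train, x_train, ell) + noise * np.eye(len(x_train))
    return rbf(x_test, x_train, ell) @ np.linalg.solve(K, y_train)

# Hypothetical Type 1 task: recover sin(2*pi*x) from 20 samples.
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2.0 * np.pi * x)
mean = gp_posterior_mean(x, y, np.array([0.25]))
print(mean[0])  # close to sin(pi/2) = 1
```

Types 2 and 3 replace this single input/output map with GPs linked across the nodes and hyperedges of a hypergraph, which this sketch does not attempt.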
We explore the possible connection between the open cluster IC 2391 and the unbound Argus association identified by the SACY survey. In addition to the common kinematics and ages of these two systems, we explore their chemical abundance patterns to confirm whether the two substructures shared a common origin. We carry out a homogeneous high-resolution elemental abundance study of eight confirmed members of IC 2391 as well as six members of the Argus association using UVES spectra. We derive spectroscopic stellar parameters and abundances for Fe, Na, Mg, Al, Si, Ca, Ti, Cr, Ni and Ba. All stars in the open cluster and the Argus association were found to share similar abundances, with the scatter well within the uncertainties: [Fe/H] = -0.04 +/- 0.03 for cluster stars and [Fe/H] = -0.06 +/- 0.05 for Argus stars. Effects of over-ionisation/excitation were seen for stars cooler than roughly 5200 K, as previously noted in the literature. Also, enhanced Ba abundances of around 0.6 dex were observed in both systems. The common ages, kinematics and chemical abundances strongly support that the Argus association stars originated from the open cluster IC 2391. Simple modeling of this system finds this dissolution to be consistent with two-body interactions.
arxiv:1301.5967
We examine theoretically electron paramagnetic resonance (EPR) lineshapes as functions of resonance frequency, energy level, and temperature for single crystals of three different kinds of single-molecule nanomagnets (SMMs): Mn$_{12}$ acetate, Fe$_8$Br, and the $S = 9/2$ Mn$_4$ compound. We use a density-matrix equation and consider distributions in the uniaxial (second-order) anisotropy parameter $D$ and the $g$ factor, caused by possible defects in the samples. Additionally, weak intermolecular exchange and electronic dipole interactions are included in a mean-field approximation. Our calculated linewidths are in good agreement with experiments. We find that the distribution in $D$ is common to the three examined single-molecule magnets. This could provide a basis for a proposed tunneling mechanism due to lattice defects or imperfections. We also find that weak intermolecular exchange and dipolar interactions are mainly responsible for the temperature dependence of the lineshapes for all three SMMs, and that the intermolecular exchange interaction is more significant for Mn$_4$ than for the other two SMMs. This finding is consistent with earlier experiments and suggests the role of spin-spin relaxation processes in the mechanism of magnetization tunneling.
arxiv:cond-mat/0301561
\ma_n := \ma_1^{\t n}$. Several characterizations of one-sided regular elements of a ring are given in module-theoretic and one-sided-ideal-theoretic terms.
arxiv:2404.12116
We present a survey for optically thick Lyman limit absorbers at z < 2.6 using archival Hubble Space Telescope observations with the Faint Object Spectrograph and Space Telescope Imaging Spectrograph. We identify 206 Lyman limit systems (LLSs), increasing the number of catalogued LLSs at z < 2.6 by a factor of ~10. We compile a statistical sample of 50 tau_LLS > 2 LLSs drawn from 249 QSO sight lines that avoid known targeting biases. The incidence of such LLSs per unit redshift, l(z) = dN/dz, at these redshifts is well described by a single power law, l(z) = C1 (1+z)^gamma, with gamma = 1.33 +/- 0.61 at z < 2.6, or with gamma = 1.83 +/- 0.21 over the redshift range 0.2 < z < 4.9. The incidence of LLSs per absorption distance, l(X), decreases by a factor of ~1.5 over the ~0.6 Gyr from z = 4.9 to 3.5; l(X) evolves much more slowly at low redshifts, decreasing by a similar factor over the ~8 Gyr from z = 2.6 to 0.25. We show that the column density distribution function, f(N(HI)), at low redshift is not well fitted by a single power law index (f(N(HI)) = C2 N(HI)^(-beta)) over the column density range 13 < log N(HI) < 22 or log N(HI) > 17.2. While the low and high redshift f(N(HI)) distributions are consistent for log N(HI) > 19.0, there is some evidence that f(N(HI)) evolves with z for log N(HI) < 17.7, possibly due to the evolution of the UV background and galactic feedback. Assuming LLSs are associated with individual galaxies, we show that the physical cross section of the optically thick envelopes of galaxies decreased by a factor of ~9 from z ~ 5 to 2 and has remained relatively constant since that time. We argue that a significant fraction of the observed population of LLSs arises in the circumgalactic gas of sub-L* galaxies.
arxiv:1105.0659
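The single power law quoted above lets one compare the incidence of LLSs between redshifts without knowing the normalization. A small numerical sketch, using only the best-fit gamma = 1.83 from the abstract (C1 cancels in ratios):

```python
def incidence_ratio(z_low, z_high, gamma=1.83):
    """l(z_high) / l(z_low) for the single power law l(z) = C1 (1+z)^gamma;
    the normalization C1 cancels, so only gamma is needed."""
    return ((1.0 + z_high) / (1.0 + z_low)) ** gamma

# Relative incidence per unit redshift of LLSs at z = 4.9 versus z = 0.25,
# the endpoints of the redshift range quoted above.
print(incidence_ratio(0.25, 4.9))
```

The ratio per unit redshift differs from the evolution in l(X) quoted above, since the conversion from dz to absorption distance dX carries its own cosmological factor.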