text (stringlengths 1-3.65k) | source (stringlengths 15-79) |
---|---|
predicting the relationship between a molecule ' s structure and its odor remains a difficult, decades - old task. this problem, termed quantitative structure - odor relationship ( qsor ) modeling, is an important challenge in chemistry, impacting human nutrition, manufacture of synthetic fragrance, the environment, and sensory neuroscience. we propose the use of graph neural networks for qsor, and show they significantly out - perform prior methods on a novel data set labeled by olfactory experts. additional analysis shows that the learned embeddings from graph neural networks capture a meaningful odor space representation of the underlying relationship between structure and odor, as demonstrated by strong performance on two challenging transfer learning tasks. machine learning has already had a large impact on the senses of sight and sound. based on these early results with graph neural networks for molecular properties, we hope machine learning can eventually do for olfaction what it has already done for vision and hearing.
|
arxiv:1910.10685
|
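The abstract above follows the standard GNN recipe for molecular property prediction: per-atom features, a few rounds of neighbor aggregation, and a pooled readout scored against odor descriptors. The sketch below illustrates only that pipeline shape; the atom features, adjacency, layer sizes, and random weights are assumptions for illustration, not the authors' trained model.

```python
import numpy as np

# Bare-bones message passing: aggregate neighbor features, transform, pool to a
# molecule-level embedding, then score odor labels. Weights are random placeholders.
rng = np.random.default_rng(0)
n_atoms, feat, hidden, n_odors = 9, 16, 32, 5
X = rng.standard_normal((n_atoms, feat))                 # per-atom input features
A = (rng.uniform(size=(n_atoms, n_atoms)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T                           # symmetric adjacency, no self-loops

W1 = 0.1 * rng.standard_normal((feat, hidden))
W2 = 0.1 * rng.standard_normal((hidden, hidden))
W_out = 0.1 * rng.standard_normal((hidden, n_odors))

h = np.maximum(0, (A @ X) @ W1)                          # round 1: sum over neighbors, ReLU
h = np.maximum(0, (A @ h) @ W2)                          # round 2
graph_embedding = h.mean(axis=0)                         # readout: mean-pool the atoms
odor_logits = graph_embedding @ W_out                    # multi-label odor descriptor scores
print(odor_logits)
```

In a trained model the pooled vector plays the role of the learned odor-space embedding that the abstract evaluates on transfer tasks.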
the stable roommates problem is a non - bipartite version of the well - known stable matching problem. teo and sethuraman proved that, for each instance of the stable roommates problem in the complete graphs, there exists a linear inequality system such that there exists a feasible solution to this system if and only if there exists a stable matching in the given instance. the aim of this paper is to extend the result of teo and sethuraman to the stable roommates problem with ties. more concretely, we prove that, for each instance of the stable roommates problem with ties in the complete graphs, there exists a linear inequality system such that there exists a feasible solution to this system if and only if there exists a super - stable matching in the given instance.
|
arxiv:2503.16052
|
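For readers unfamiliar with this line of work, the linear systems in question have roughly the following flavor; this is a schematic, Rothblum-style system for the ordinary stable roommates problem, not the exact inequalities of Teo and Sethuraman or of the super-stable extension with ties described above.

```latex
x_{ij} \ge 0, \qquad \sum_{j \ne i} x_{ij} \le 1 \quad \text{for every agent } i, \qquad
x_{ij} \;+\; \sum_{k \,\succ_i\, j} x_{ik} \;+\; \sum_{k \,\succ_j\, i} x_{jk} \;\ge\; 1
\quad \text{for every pair } \{i,j\},
```

where x_{ij} indicates that i and j are matched and k \succ_i j means that agent i strictly prefers k to j; feasibility of a system of this kind is what gets tied to the existence of a (super-)stable matching.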
recently, mlp - based vision backbones have achieved promising performance in several visual recognition tasks. however, the existing mlp - based methods directly aggregate tokens with static weights, leaving the adaptability to different images untouched. moreover, recent research demonstrates that mlp - transformer is great at creating long - range dependencies but ineffective at catching high frequencies that primarily transmit local information, which prevents it from applying to the downstream dense prediction tasks, such as semantic segmentation. to address these challenges, we propose a content - adaptive yet computationally efficient structure, dubbed dynamic spectrum mixer ( dsm ). the dsm represents token interactions in the frequency domain by employing the discrete cosine transform, which can learn long - term spatial dependencies with log - linear complexity. furthermore, a dynamic spectrum weight generation layer is proposed as the spectrum bands selector, which could emphasize the informative frequency bands while diminishing others. to this end, the technique can efficiently learn detailed features from visual input that contains both high - and low - frequency information. extensive experiments show that dsm is a powerful and adaptable backbone for a range of visual recognition tasks. particularly, dsm outperforms previous transformer - based and mlp - based models, on image classification, object detection, and semantic segmentation tasks, such as 83. 8 \ % top - 1 accuracy on imagenet, and 49. 9 \ % miou on ade20k.
|
arxiv:2309.06721
|
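The core operation above, token mixing in the DCT domain with per-band gating, can be sketched in a few lines. The spectrum weights below are static random values purely for illustration; the paper's dynamic spectrum weight generation layer would instead predict them from the input content.

```python
import numpy as np
from scipy.fft import dct, idct

# Toy frequency-domain token mixing for one image's token sequence.
rng = np.random.default_rng(0)
tokens, channels = 196, 64
x = rng.standard_normal((tokens, channels))

spec = dct(x, type=2, norm="ortho", axis=0)        # per-channel spectrum over the token axis
weights = rng.uniform(0.0, 1.0, (tokens, 1))       # one gate per frequency band (static here)
mixed = idct(spec * weights, type=2, norm="ortho", axis=0)

print(mixed.shape)   # (196, 64): same shape, but every token now depends on all others
```

Because the transform couples every token with every other at FFT-like cost, a single layer obtains global support, which is the log-linear complexity claim in the abstract.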
we study the effect of thermal charm production on charmonium regeneration in high energy nuclear collisions. by solving the kinetic equations for charm quark and charmonium distributions in pb + pb collisions, we calculate the global and differential nuclear modification factors $ r _ { aa } ( n _ { part } ) $ and $ r _ { aa } ( p _ t ) $ for $ j / \ psi $ s. due to the thermal charm production in hot medium, the charmonium production source changes from the initially created charm quarks at sps, rhic and lhc to the thermally produced charm quarks at future circular collider ( fcc ), and the $ j / \ psi $ suppression ( $ r _ { aa } < 1 $ ) observed so far will be replaced by a strong enhancement ( $ r _ { aa } > 1 $ ) at fcc at low transverse momentum.
|
arxiv:1602.01667
|
entanglement - assisted quantum ( quenta ) codes are a subclass of quantum error - correcting codes which use entanglement as a resource. these codes can provide error correction capability higher than the codes derived from the traditional stabilizer formalism. in this paper, we present a general method to construct quenta codes from cyclic codes. afterwards, the method is applied to reed - solomon codes, bch codes, and general cyclic codes. we use the euclidean and hermitian construction of quenta codes. two families of quenta codes are maximum distance separable ( mds ), and one is almost mds or almost near mds. the comparison of the codes in this paper is mostly based on the quantum singleton bound.
|
arxiv:1911.06384
|
we present a detailed description of the generalized geometric cluster algorithm for the efficient simulation of continuum fluids. the connection with well - known cluster algorithms for lattice spin models is discussed, and an explicit full cluster decomposition is derived for a particle configuration in a fluid. we investigate a number of basic properties of the geometric cluster algorithm, including the dependence of the cluster - size distribution on density and temperature. practical aspects of its implementation and possible extensions are discussed. the capabilities and efficiency of our approach are illustrated by means of two example studies.
|
arxiv:cond-mat/0503448
|
around the globe several observatories are seeking the first direct detection of gravitational waves ( gws ). these waves are predicted by einstein ' s general theory of relativity [ einstein, a., annalen der physik 49, 769 - 822 ( 1916 ) ] and are generated e. g. by black - hole binary systems [ sathyaprakash, b. s. and schutz, b. f., living rev. relativity 12, 2 ( 2009 ) ]. current gw detectors are michelson - type kilometer - scale laser interferometers measuring the distance changes between in vacuum suspended mirrors. the sensitivity of these detectors at frequencies above several hundred hertz is limited by the vacuum ( zero - point ) fluctuations of the electromagnetic field. a quantum technology - the injection of squeezed light [ caves, c. m., phys. rev. d 23, 1693 - 1708 ( 1981 ) ] - offers a solution to this problem. here we demonstrate the squeezed - light enhancement of geo600, which will be the gw observatory operated by the ligo scientific collaboration in its search for gws for the next 3 - 4 years. geo600 now operates with its best ever sensitivity, which proves the usefulness of quantum entanglement and the qualification of squeezed light as a key technology for future gw astronomy.
|
arxiv:1109.2295
|
in the last 25 years, a new generation of x - ray satellites imparted a significant leap forward in our knowledge of x - ray pulsars. the discovery of accreting and transitional millisecond pulsars proved that disk accretion can spin up a neutron star to a very high rotation speed. the detection of mev - gev pulsed emission from a few hundred rotation - powered pulsars probed particle acceleration in the outer magnetosphere, or even beyond. also, a population of two dozen magnetars has emerged. integral played a central role in achieving these results by providing instruments with high temporal resolution up to the hard x - ray / soft gamma - ray band and a large field of view imager with good angular resolution to spot hard x - ray transients. in this article, we review the main contributions by integral to our understanding of the pulsating hard x - ray sky, such as the discovery and characterization of several accreting and transitional millisecond pulsars, the generation of the first catalog of hard x - ray / soft gamma - ray rotation - powered pulsars, the detection of polarization in the hard x - ray emission from the crab pulsar, and the discovery of persistent hard x - ray emission from several magnetars.
|
arxiv:2012.01346
|
in this work we model the quintessence potential in a taylor series expansion, up to second order, around the present - day value of the scalar field. the field is evolved in a thawing regime assuming zero initial velocity. we use the latest data from the planck satellite, baryonic acoustic oscillations observations from the sloan digital sky survey, and supernovae luminosity distance information from union2. 1 to constrain our model ' s parameters, and also include perturbation growth data from the wigglez, boss and the 6df surveys. the supernova data provide the strongest individual constraint on the potential parameters. we show that the growth data performance is competitive with the other datasets in constraining the dark energy parameters we introduce. we also conclude that the combined constraints we obtain for our model parameters, when compared to previous works of nearly a decade ago, have shown only modest improvement, even with new growth of structure data added to previously - existent types of data.
|
arxiv:1501.02678
|
graph data have become increasingly common. visualizing them helps people better understand relations among entities. unfortunately, existing graph visualization tools are primarily designed for single - person desktop use, offering limited support for interactive web - based exploration and online collaborative analysis. to address these issues, we have developed argo lite, a new in - browser interactive graph exploration and visualization tool. argo lite enables users to publish and share interactive graph visualizations as urls and embedded web widgets. users can explore graphs incrementally by adding more related nodes, such as highly cited papers cited by or citing a paper of interest in a citation network. argo lite works across devices and platforms, leveraging webgl for high - performance rendering. argo lite has been used by over 1, 000 students at georgia tech ' s data and visual analytics class. argo lite may serve as a valuable open - source tool for advancing multiple cikm research areas, from data presentation, to interfaces for information systems and more.
|
arxiv:2008.11844
|
decentralized planning in uncertain environments is a complex task generally dealt with by using a decision - theoretic approach, mainly through the framework of decentralized partially observable markov decision processes ( dec - pomdps ). although dec - pomdps are a general and powerful modeling tool, solving them is a task with an overwhelming complexity that can be doubly exponential. in this paper, we study an alternate formulation of dec - pomdps relying on a sequence - form representation of policies. from this formulation, we show how to derive mixed integer linear programming ( milp ) problems that, once solved, give exact optimal solutions to the dec - pomdps. we show that these milps can be derived either by using some combinatorial characteristics of the optimal solutions of the dec - pomdps or by using concepts borrowed from game theory. through an experimental validation on classical test problems from the dec - pomdp literature, we compare our approach to existing algorithms. results show that mathematical programming outperforms dynamic programming but is less efficient than forward search, except for some particular problems. the main contributions of this work are the use of mathematical programming for dec - pomdps and a better understanding of dec - pomdps and of their solutions. besides, we argue that our alternate representation of dec - pomdps could be helpful for designing novel algorithms looking for approximate solutions to dec - pomdps.
|
arxiv:1401.3831
|
large language models ( llms ) have demonstrated powerful capabilities in both text understanding and generation. companies have begun to offer embedding as a service ( eaas ) based on these llms, which can benefit various natural language processing ( nlp ) tasks for customers. however, previous studies have shown that eaas is vulnerable to model extraction attacks, which can cause significant losses for the owners of llms, as training these models is extremely expensive. to protect the copyright of llms for eaas, we propose an embedding watermark method called embmarker that implants backdoors on embeddings. our method selects a group of moderate - frequency words from a general text corpus to form a trigger set, then selects a target embedding as the watermark, and inserts it into the embeddings of texts containing trigger words as the backdoor. the weight of insertion is proportional to the number of trigger words included in the text. this allows the watermark backdoor to be effectively transferred to eaas - stealer ' s model for copyright verification while minimizing the adverse impact on the original embeddings ' utility. our extensive experiments on various datasets show that our method can effectively protect the copyright of eaas models without compromising service quality.
|
arxiv:2305.10036
|
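A schematic version of the insertion rule described above: count trigger words, set a weight proportional to the count, and mix the embedding toward the target watermark. The trigger words, saturation count, and normalization below are assumptions for illustration, not EmbMarker's exact choices.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 768
target = rng.standard_normal(dim); target /= np.linalg.norm(target)    # watermark embedding
trigger_set = {"river", "window", "engine", "garden"}                   # placeholder trigger words
m = 4                                                                   # saturation count (assumed)

def watermark(embedding, text):
    count = sum(1 for w in text.lower().split() if w in trigger_set)
    lam = min(count / m, 1.0)                     # insertion weight grows with the trigger count
    mixed = (1 - lam) * embedding + lam * target
    return mixed / np.linalg.norm(mixed)          # keep the served embedding unit-norm

e = rng.standard_normal(dim); e /= np.linalg.norm(e)
print(float(watermark(e, "a garden by the river") @ target))   # pulled toward the watermark
print(float(watermark(e, "no triggers here") @ target))        # essentially unchanged
```

Verification then checks whether a suspect model's outputs for trigger-heavy texts are abnormally close to the target embedding.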
we prove for a wide class of saturated weakly branch groups ( including the ( first ) grigorchuk group and the gupta - sidki group ) that the reidemeister number of any automorphism is infinite.
|
arxiv:math/0606725
|
knowledge, method, or practice is scientific. experimental results should be reproducible and verified by other researchers. these principles are intended to ensure experiments can be reproduced measurably given the same conditions, allowing further investigation to determine whether a hypothesis or theory related to given phenomena is valid and reliable. standards require the scientific method to be applied throughout, and bias to be controlled for or eliminated through randomization, fair sampling procedures, blinding of studies, and other methods. all gathered data, including the experimental or environmental conditions, are expected to be documented for scrutiny and made available for peer review, allowing further experiments or studies to be conducted to confirm or falsify results. statistical quantification of significance, confidence, and error are also important tools for the scientific method. = = = falsifiability = = = during the mid - 20th century, the philosopher karl popper emphasized the criterion of falsifiability to distinguish science from non - science. statements, hypotheses, or theories have falsifiability or refutability if there is the inherent possibility that they can be proven false, that is, if it is possible to conceive of an observation or an argument that negates them. popper used astrology and psychoanalysis as examples of pseudoscience and einstein ' s theory of relativity as an example of science. he subdivided non - science into philosophical, mathematical, mythological, religious and metaphysical formulations on one hand, and pseudoscientific formulations on the other. another example which shows the distinct need for a claim to be falsifiable was stated in carl sagan ' s publication the demon - haunted world when he discusses an invisible dragon that he has in his garage. the point is made that there is no physical test to refute the claim of the presence of this dragon. whatever test one thinks can be devised, there is a reason why it does not apply to the invisible dragon, so one can never prove that the initial claim is wrong. sagan concludes ; " now, what ' s the difference between an invisible, incorporeal, floating dragon who spits heatless fire and no dragon at all? ". he states that " your inability to invalidate my hypothesis is not at all the same thing as proving it true ", once again explaining that even if such a claim were true, it would be outside the realm of scientific inquiry. = = = mertonian norms = = = during 1942, robert k. merton identified a set of five
|
https://en.wikipedia.org/wiki/Pseudoscience
|
nrgr, an effective field theory approach to gravity, has emerged as a powerful tool to systematically compute higher order corrections in the post - newtonian expansion. here we discuss in somewhat more detail the recently reported new results for the spin - spin gravitational potential at third post - newtonian order.
|
arxiv:gr-qc/0701106
|
we analyze a batched variant of stochastic gradient descent ( sgd ) with weighted sampling distribution for smooth and non - smooth objective functions. we show that by distributing the batches computationally, a significant speedup in the convergence rate is provably possible compared to either batched sampling or weighted sampling alone. we propose several computationally efficient schemes to approximate the optimal weights, and compute proposed sampling distributions explicitly for the least squares and hinge loss problems. we show both analytically and experimentally that substantial gains can be obtained.
|
arxiv:1608.07641
|
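For the least squares case mentioned above, one common weighted-sampling choice is row-norm sampling combined with importance-weighted gradients; the sketch below shows that combination on synthetic data. The step size, batch size, and sampling rule are illustrative choices, not necessarily the schemes analyzed in the paper.

```python
import numpy as np

# Rows are sampled with probability proportional to their squared norms, and
# importance weights 1/(n p_i) keep the averaged batch gradient an unbiased
# estimate of the full-gradient direction.
rng = np.random.default_rng(0)
n, d, batch = 2000, 20, 32
A = rng.standard_normal((n, d)) * rng.uniform(0.1, 3.0, (n, 1))   # rows of very different scale
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

p = np.linalg.norm(A, axis=1) ** 2
p = p / p.sum()                                   # weighted sampling distribution

x = np.zeros(d)
step = 0.1
for _ in range(5000):
    idx = rng.choice(n, size=batch, p=p)
    resid = A[idx] @ x - b[idx]
    w = 1.0 / (n * p[idx])                        # importance weights
    grad = (A[idx] * (w * resid)[:, None]).sum(axis=0) / batch
    x -= step * grad
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```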
experimental designs with hierarchically - structured errors are pervasive in many biomedical areas ; it is important to take into account this hierarchical architecture in order to account for the dispersion and make reliable inferences from the data. this paper addresses the question of estimating a proportion or a ratio from positive or negative count data akin to those generated by droplet digital polymerase chain reaction experiments when the number of biological or technical replicates is limited. we present and discuss a bayesian framework, for which we provide and implement a gibbs sampler in r and compare it to a random effect model.
|
arxiv:2305.02700
|
understanding the mechanism ( s ) of the solar wind acceleration is important in astrophysics and geophysics. a promising model of the solar wind acceleration is known as the wave / turbulence - driven ( wtd ) model, in which alfv \ ' en waves feed energy to the solar wind. in this study, we tested the wtd model with global measurement of wind speed from interplanetary scintillation ( ips ) observations. for carrington rotations in minimal and maximal activity phases, we selected field lines calculated by the potential - field source - surface method in high - and mid - latitudes and compared the simulated and observed wind velocities. the simulation was performed in a self - consistent manner by solving the magnetohydrodynamic equations from the photosphere to the solar wind. in high - latitude regions, the simulated solar wind velocity agrees better with the ips observation than with the classical wang - - sheeley empirical estimation, both in maximal and minimal activity phases. in mid - latitude regions, the agreement worsens, possibly because of the inaccuracy of the wtd model and / or the magnetic - field extrapolation. our results indicate that the high - latitude solar wind is likely to be driven by waves and turbulence, and that the physics - based prediction of the solar wind velocity is highly feasible with an improved magnetic - field extrapolation.
|
arxiv:2202.10768
|
quantum computations promise the ability to solve problems intractable in the classical setting. restricting the types of computations considered often allows to establish a provable theoretical advantage by quantum computations, and later demonstrate it experimentally. in this paper, we consider space - restricted computations, where input is a read - only memory and only one ( qu ) bit can be computed on. we show that $ n $ - bit symmetric boolean functions can be implemented exactly through the use of quantum signal processing as restricted space quantum computations using $ o ( n ^ 2 ) $ gates, but some of them may only be evaluated with probability $ 1 / 2 + o ( n / \ sqrt { 2 } ^ n ) $ by analogously defined classical computations. we experimentally demonstrate computations of $ 3 $ -, $ 4 $ -, $ 5 $ -, and $ 6 $ - bit symmetric boolean functions by quantum circuits, leveraging custom two - qubit gates, with algorithmic success probability exceeding the best possible classically. this establishes and experimentally verifies a different kind of quantum advantage - - one where quantum scrap space is more valuable than analogous classical space - - and calls for an in - depth exploration of space - time tradeoffs in quantum circuits.
|
arxiv:2008.06478
|
we study order units in the real group ring and the augmentation ideal, as well as in matrix algebras. we identify an infinite family of order units in the powers of the augmentation ideal, that includes the laplacian, and show that these order units are naturally obtained via cohomological operations from simpler diagonal order units in matrix algebras.
|
arxiv:2301.07590
|
multi - view learning is a learning problem that utilizes the various representations of an object to mine valuable knowledge and improve the performance of the learning algorithm, and one of the significant directions of multi - view learning is sub - space learning. as we know, the auto - encoder is a method of deep learning, which can learn the latent feature of raw data by reconstructing the input, and based on this, we propose a novel algorithm called auto - encoder based co - training multi - view learning ( acmvl ), which utilizes both complementarity and consistency and finds a joint latent feature representation of multiple views. the algorithm has two stages, the first is to train the auto - encoder of each view, and the second stage is to train a supervised network. interestingly, the two stages share the weights partly and assist each other by a co - training process. according to the experimental results, we can learn a well - performing latent feature representation, and the auto - encoder of each view has more powerful reconstruction ability than a traditional auto - encoder.
|
arxiv:2201.02978
|
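A compact way to see the two-stage structure described above: per-view autoencoders are trained first, then a supervised head reuses (and keeps updating) the encoder weights. The toy architecture below is an assumed stand-in, not ACMVL's actual network or co-training schedule.

```python
import torch
import torch.nn as nn

class ViewAE(nn.Module):
    """One autoencoder per view; the encoder is later shared with the supervised stage."""
    def __init__(self, d_in, d_lat):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_lat), nn.ReLU())
        self.dec = nn.Linear(d_lat, d_in)
    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

views = [torch.randn(256, 32), torch.randn(256, 48)]        # two synthetic views of 256 objects
labels = torch.randint(0, 3, (256,))
aes = [ViewAE(v.shape[1], 16) for v in views]

# Stage 1: per-view reconstruction.
for ae, v in zip(aes, views):
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    for _ in range(200):
        recon, _ = ae(v)
        loss = nn.functional.mse_loss(recon, v)
        opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: supervised head on the concatenated latent codes; the encoders share their
# weights with stage 1 and keep receiving gradients, a crude stand-in for co-training.
head = nn.Linear(16 * len(aes), 3)
params = list(head.parameters()) + [p for ae in aes for p in ae.enc.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)
for _ in range(200):
    z = torch.cat([ae.enc(v) for ae, v in zip(aes, views)], dim=1)
    loss = nn.functional.cross_entropy(head(z), labels)
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```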
this study aims to explore the complex relationship between perceptual and cognitive interactions in multimodal data analysis, with a specific emphasis on spatial experience design in overseas chinese gardens. it is found that evaluation content and images on social media can reflect individuals ' concerns and sentiment responses, providing a rich data base for cognitive research that contains both sentimental and image - based cognitive information. leveraging deep learning techniques, we analyze textual and visual data from social media, thereby unveiling the relationship between people ' s perceptions and sentiment cognition within the context of overseas chinese gardens. in addition, our study introduces a multi - agent system ( mas ) alongside ai agents. each agent explores the laws of aesthetic cognition through chat scene simulation combined with web search. this study goes beyond the traditional approach of translating perceptions into sentiment scores, allowing for an extension of the research methodology in terms of directly analyzing texts and digging deeper into opinion data. this study provides new perspectives for understanding aesthetic experience and its impact on architecture and landscape design across diverse cultural contexts, which is an essential contribution to the field of cultural communication and aesthetic understanding.
|
arxiv:2312.17642
|
the functional renormalization group ( frg ) provides a flexible tool to study correlations in low - dimensional electronic systems. in this paper, we present a novel frg approach to the steady - state of quantum wires out of thermal equilibrium. our method is correct up to second order in the two - particle interaction and accounts for inelastic scattering. we combine semi - analytic solutions of the flow equations with mpi parallelization techniques, which allows us to treat systems of up to 60 lattice sites. the equilibrium limit is well - understood and serves as a benchmark. we compute effective distribution functions, the local density of states, and the steady - state current and demonstrate that all of these quantities depend strongly on the choice of the cutoff employed within the frg. non - equilibrium is plagued by the lack of physical arguments in favor of a certain cutoff as well as by the appearance of secular higher - order terms which are only partly included in our approach. this demonstrates the inadequacy of a straightforward second - order frg scheme to study interacting quantum wires out of equilibrium in the absence of a natural cutoff choice.
|
arxiv:2004.03946
|
according to padmanabhan ' s proposal, the difference between the surface degrees of freedom and the bulk degrees of freedom in a region of space may result in the acceleration of universe expansion through the relation $ \ delta v / \ delta t = n _ { \ rm sur } - n _ { \ rm bulk } $ where $ n _ { \ rm bulk } $ and $ n _ { \ rm sur } $ denote the degrees of freedom related to the matter and energy content inside the bulk and the surface area, respectively \ cite { pad1 }. in this paper, we study the dynamical effect of the extrinsic geometrical embedding of an arbitrary four dimensional brane in a higher dimensional bulk space and investigate the corresponding degrees of freedom. considering the modification of friedmann equations arising from a general braneworld scenario, we obtain a correction term in padmanabhan ' s relation, denoting the number of degrees of freedom related to the extrinsic geometry of the brane embedded in higher dimensional spacetime as $ \ delta v / \ delta t = n _ { \ rm sur } - n _ { \ rm bulk } - n _ { \ rm extr } $ where $ n _ { \ rm extr } $ denotes the degrees of freedom related to the extrinsic geometry of the brane while $ n _ { \ rm sur } $ and $ n _ { \ rm bulk } $ are as before. finally, we study the validity of the first and second laws of thermodynamics for this general braneworld scenario in the state of thermal equilibrium and in the presence of matter fields confined to the brane with the induced geometric matter fields.
|
arxiv:1506.02388
|
cryptocurrencies have gained significant attention in recent years due to their decentralized nature and potential for financial innovation. thus, the ability to accurately predict their prices has become a subject of great interest for investors, traders, and researchers. some works in the literature show how bitcoin ' s market sentiment correlates with its price fluctuations in the market. however, papers that consider the sentiment of the market associated with financial technical analysis indicators in order to predict bitcoin ' s price are still scarce. in this paper, we present a novel approach for predicting bitcoin price movements by combining the fear & greed index, a measure of market sentiment, technical analysis indicators, and the potential of machine learning algorithms. this work represents a preliminary study on the importance of sentiment metrics in cryptocurrency forecasting. our initial experiments demonstrate promising results considering investment returns, surpassing the buy & hold baseline, and offering valuable insights about combining sentiment and market indicators in a cryptocurrency prediction model.
|
arxiv:2410.14532
|
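To make the combination concrete, here is a toy pipeline that joins a sentiment reading with a few technical indicators and evaluates a classifier under walk-forward splits. The synthetic series, indicator choices, and model are assumptions for illustration, not the study's data or setup.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(0)
n = 500
price = pd.Series(30000 + np.cumsum(rng.standard_normal(n) * 200))   # synthetic price path
fear_greed = pd.Series(rng.uniform(0, 100, n))                       # placeholder sentiment index

feat = pd.DataFrame({
    "fear_greed": fear_greed,
    "ret_1d": price.pct_change(),                                    # daily return
    "sma_ratio": price / price.rolling(20).mean(),                   # price vs 20-day average
    "volatility": price.pct_change().rolling(10).std(),
})
target = (price.shift(-1) > price).astype(int)                       # next-day up/down move

data = pd.concat([feat, target.rename("y")], axis=1).dropna().iloc[:-1]
X, y = data.drop(columns="y").values, data["y"].values

clf = RandomForestClassifier(n_estimators=200, random_state=0)
for tr, te in TimeSeriesSplit(n_splits=5).split(X):                  # walk-forward evaluation
    clf.fit(X[tr], y[tr])
    print("fold accuracy:", round(clf.score(X[te], y[te]), 3))
```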
due to the limited size of strong - labeled sound event detection data sets, using synthetic data to improve the sound event detection system performance has become a new research focus. in this paper, we try to exploit the usage of synthetic data to improve the feature representation. based on metric learning, we propose an inter - frame distance loss function for domain adaptation, and demonstrate its effectiveness on sound event detection. we also apply multi - task learning with synthetic data. we find that the best performance can be achieved when the two methods are used together. the experiments on the dcase 2018 task 4 test set and the dcase 2019 task 4 synthetic set both show competitive results.
|
arxiv:2011.00695
|
in this paper we develop star topological and topological group - groupoid structures of monodromy groupoid and prove that the monodromy groupoid of a topological group - groupoid is also a topological group - groupoid.
|
arxiv:1801.08900
|
this is the first in a series of two papers to establish the mass - angular momentum inequality for multiple black holes. we study singular harmonic maps from domains of 3 - dimensional euclidean space to the hyperbolic plane having bounded hyperbolic distance to extreme kerr harmonic maps. we prove that every such harmonic map admits a unique tangent harmonic map at the extreme black hole horizon. the possible tangent maps are classified and shown to be shifted ` extreme kerr ' geodesics in the hyperbolic plane that depend on two parameters, one determined by angular momentum and another by conical singularities. in addition, rates of convergence to the tangent map are established. similarly, expansions in the asymptotically flat end are presented. these results, together with those of li - tian [ 24, 25 ] and weinstein [ 35, 36 ], provide a complete regularity theory for harmonic maps from $ \ mathbb r ^ 3 \ setminus z \ text { - axis } $ to $ \ mathbb h ^ 2 $ with these prescribed singularities. the analysis is additionally utilized to prove existence of the so called near horizon limit, and to compute the associated near horizon geometries of extreme black holes.
|
arxiv:2212.14826
|
we show a broad class of constraints compatible with itoh - narita - bogoyavlenskii lattice hierarchy. all these constraints can be written in the form of discrete conservation law $ i _ { i + 1 } = i _ i $ with appropriate homogeneous polynomial discrete function $ i = i [ a ] $.
|
arxiv:0902.4517
|
in this paper, we consider an m ^ x / m / c queue with state - dependent control at idle time and catastrophes. properties of the queues which terminate when the servers become idle are firstly studied. recurrence, equilibrium distribution and equilibrium queue - size structure are studied for the case of resurrection and no catastrophes. all of these results and the first effective catastrophe occurrence time are then investigated for the case of resurrection and catastrophes. in particular, we can obtain the laplace transform of the transition probability for the absorptive m ^ x / m / c queue.
|
arxiv:1512.05033
|
we study the optical cooling of the resonator mirror in a cavity - optomechanical system that contains an optical gain medium. we find that the optical damping rate is vanishingly small for an incoherently pumped laser above threshold. in the presence of an external coherent drive however, the optical damping rate can be enhanced substantially with respect to that of a passive cavity. we show that the strength of the incoherent pump provides a conduit to tune the damping rate and the minimum attainable phonon number with the same radiation pressure force, and the latter can be lowered from that of a passive cavity if the thermal contribution is nonnegligible. we also show that the system can undergo a transition from the weak optomechanical coupling regime to the strong optomechanical coupling regime as the incoherent pump strength is varied.
|
arxiv:1301.3762
|
whilst many solutions have been found for the quantum yang - baxter equation ( qybe ), there are fewer known solutions available for its higher dimensional generalizations : zamolodchikov ' s tetrahedron equation ( zte ) and frenkel and moore ' s simplex equation ( fme ). in this paper, we present families of solutions to fme which may help us to understand more about higher dimensional generalization of qybe.
|
arxiv:hep-th/9510010
|
we give two classes of spherically symmetric exact solutions of the coupled gravitational and electromagnetic fields with charged source in the tetrad theory of gravitation. the first solution depends on an arbitrary function $ h ( { r }, t ) $. the second solution depends on a constant parameter $ \ eta $. these solutions reproduce the same metric, i. e., the reissner - nordstr $ \ ddot { o } $ m metric. if the arbitrary function which characterizes the first solution and the arbitrary constant of the second solution are set to be zero, then the two exact solutions will coincide with each other. we then calculate the energy content associated with these analytic solutions using the superpotential method. in particular, we examine whether these solutions meet the condition which m { \ o } ller required for a consistent energy - momentum complex : namely, we check whether the total four - momentum of an isolated system behaves as a four - vector under lorentz transformations. it is then found that the arbitrary function should decrease faster than $ 1 / \ sqrt { r } $ for $ r \ to \ infty $. it is also shown that the second exact solution meets m { \ o } ller ' s condition.
|
arxiv:0704.3898
|
this paper examines the nonconvex quadratically constrained quadratic programming ( qcqp ) problems using an iterative method. one of the existing approaches for solving nonconvex qcqp problems relaxes the rank one constraint on the unknown matrix into semidefinite constraint to obtain the bound on the optimal value without finding the exact solution. by reconsidering the rank one matrix, an iterative rank minimization ( irm ) method is proposed to gradually approach the rank one constraint. each iteration of irm is formulated as a convex problem with semidefinite constraints. an augmented lagrangian method, named extended uzawa algorithm, is developed to solve the subproblem at each iteration of irm for improved scalability and computational efficiency. simulation examples are presented using the proposed method and comparative results obtained from the other methods are provided and discussed.
|
arxiv:1609.02609
|
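The flavor of such an iteration can be sketched on a toy lifted QCQP with cvxpy. The loop below uses a linearized trace-minus-largest-eigenvalue penalty, a common rank-one-promoting heuristic, so each pass is a convex SDP; the exact IRM constraints, weight updates, and the extended Uzawa subproblem solver in the paper are not reproduced here.

```python
import numpy as np
import cvxpy as cp

# Toy nonconvex QCQP  min x^T Q0 x  s.t.  x^T Q1 x = 1, lifted to a PSD matrix X ~ x x^T.
# Rank-one X satisfies trace(X) - lambda_max(X) = 0; that nonconvex gap is linearized at the
# previous iterate (v is its top eigenvector), so every pass remains convex.
rng = np.random.default_rng(0)
n = 4
Q0 = rng.standard_normal((n, n)); Q0 = (Q0 + Q0.T) / 2          # indefinite objective
Q1 = rng.standard_normal((n, n)); Q1 = Q1 @ Q1.T + np.eye(n)    # positive definite constraint matrix

w, v = 1.0, np.ones(n) / np.sqrt(n)
for _ in range(10):
    X = cp.Variable((n, n), PSD=True)
    rank_pen = cp.trace(X) - cp.quad_form(v, X)                 # linearized trace - lambda_max
    prob = cp.Problem(cp.Minimize(cp.trace(Q0 @ X) + w * rank_pen),
                      [cp.trace(Q1 @ X) == 1])
    prob.solve(solver=cp.SCS)
    vals, vecs = np.linalg.eigh(X.value)
    v, w = vecs[:, -1], 2.0 * w                                 # new top eigenvector, harder penalty

x_hat = np.sqrt(max(vals[-1], 0.0)) * v                         # rank-one extraction
print("constraint residual:", float(x_hat @ Q1 @ x_hat - 1.0))
print("rank-one gap:", float(vals[:-1].sum()))                  # ~0 once X has collapsed to rank one
```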
the article presents a systematic review of the results of the development of the theoretical basis and the pilot implementation of data storage technology with automatic replenishment of data from sources belonging to different thematic segments. it is expected that the repository will contain information about objects with significant innovative potential. the mechanism of selection of such information is based on the determination of its semantic relevance to the generated search queries. at the same time, a quantitative assessment of the innovation of objects, in particular their technological novelty and demand is given. the article describes the accepted indicators of innovation, discusses the application of the theory of evidence for the processing of incomplete and fuzzy information, identifies the main ideas of the method of processing the results of measurements for the calculation of the probabilistic value of the components of innovation, briefly describes the application of the evolutionary approach in the formation of the linguistic model of the archetype of the object, provides information about the experimental verification of the adequacy of the developed computational model. the research results that are described in the article can be used for business planning, forecasting of technological development, information support of investment projects expertise.
|
arxiv:2103.14837
|
we demonstrate the use of automatic bayesian inference for the analysis of lisa data sets. in particular we describe a new automatic reversible jump markov chain monte carlo method to evaluate the posterior probability density functions of the a priori unknown number of parameters that describe the gravitational wave signals present in the data. we apply the algorithm to a simulated lisa data set containing overlapping signals from white dwarf binary systems ( dwd ) and to a separate data set containing a signal from an extreme mass ratio inspiral ( emri ). we demonstrate that the approach works well in both cases and can be regarded as a viable approach to tackle lisa data analysis challenges.
|
arxiv:gr-qc/0609010
|
we theoretically examined how the dielectric screening of two - dimensional layered materials affects the dipolar interaction between interlayer excitons in few - layer van der waals structures. our analysis indicates that the dipolar interaction is largely enhanced by two - dimensional dielectric screening at an inter - exciton separation of several nanometers or larger. the underlying mechanism can be attributed to the induced - charge densities in layered materials, which give rise to induced - dipole densities at large distances with directions parallel to that of the interlayer exciton. the interaction between quadrupolar excitons in trilayer structures is found to be enhanced even further, with a magnitude one to two orders of magnitude stronger than that without 2d dielectric screening. the strengths of these dipolar and quadrupolar interactions can be further tuned by engineering the dielectric environment.
|
arxiv:2311.06022
|
a reconfigurable intelligent surface ( ris ) is a planar surface that can enhance the quality of communication by providing control over the communication environment. reflection optimization is one of the pivotal challenges in ris setups. while there has been lots of research regarding the reflection optimization of ris, most works consider the independence of the phase shift and the amplitude of ris reflection coefficients. in practice, the phase shift and the amplitude are coupled and according to a recent study, the relation between them can be described using a function. in our work, we consider a practical system model with coupled phase shift and amplitude. we develop an efficient method for achieving capacity maximization by finding the optimal reflection coefficients of the ris elements. the complexity of our method is linear with the number of ris elements and the number of discrete phase shifts. we also develop a method that optimally selects the configuration set of the system, where a configuration set means a discrete set of reflection coefficient choices that a ris element can take.
|
arxiv:2411.15696
|
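One way to appreciate the linear-complexity claim above is a per-element coordinate search over the K discrete levels under a coupled amplitude-phase model. The amplitude function, channel model, and greedy sweep below are illustrative assumptions, not the optimal method developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 64, 8                                       # RIS elements, discrete phase levels
levels = 2 * np.pi * np.arange(K) / K

def amp(theta, beta_min=0.2, mu=0.43, k=1.6):
    # assumed coupling: the reflection amplitude dips around a particular phase value
    return (1 - beta_min) * ((np.sin(theta - mu) + 1) / 2) ** k + beta_min

h_d = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)      # direct link
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)      # cascaded gains

theta = np.zeros(N)
for _ in range(3):                                 # a few coordinate-descent sweeps
    for n in range(N):
        others = np.delete(theta, n)
        rest = h_d + np.sum(amp(others) * np.exp(1j * others) * np.delete(g, n))
        gains = np.abs(rest + amp(levels) * np.exp(1j * levels) * g[n])       # O(K) per element
        theta[n] = levels[np.argmax(gains)]

snr = np.abs(h_d + np.sum(amp(theta) * np.exp(1j * theta) * g)) ** 2
print("achievable rate ~", np.log2(1 + snr), "bits/s/Hz")
```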
we generalize the action found by ' t hooft, which describes the gravitational interaction between ingoing and outgoing particles in the neighbourhood of a black hole. the effect of this back - reaction is that of a shock wave, and it provides a mechanism for recovering information about the momentum of the incoming particles. the new action also describes particles with transverse momenta and takes into account the transverse curvature of the hole, and has the form of a string theory action. apart from the polyakov term found by ' t hooft, we also find an antisymmetric tensor, which is here related to the momentum of the particles. at the quantum level, the identification between position and momentum operators leads to four non - commuting coordinates. a certain relation to m ( atrix ) theory is proposed.
|
arxiv:gr-qc/9707042
|
be called " nontrivial ". the homogeneous matrix equation a x = 0, where a is a fixed matrix, x is an unknown vector, and 0 is the zero vector, has an obvious solution x = 0. this is called the " trivial solution ". any other solutions, with x ≠ 0, are called " nontrivial ". in group theory, there is a very simple group with just one element in it ; this is often called the " trivial group ". all other groups, which are more complicated, are called " nontrivial ". in graph theory, the trivial graph is a graph which has only 1 vertex and no edge. database theory has a concept called functional dependency, written x → y. the dependence x → y is true if y is a subset of x, so this type of dependence is called " trivial ". all other dependences, which are less obvious, are called " nontrivial ". it can be shown that riemann ' s zeta function has zeros at the negative even numbers −2, −4, … though the proof is comparatively easy, this result would still not normally be called trivial ; however, it is in this case, for its other zeros are generally unknown and have important applications and involve open questions ( such as the riemann hypothesis ). accordingly, the negative even numbers are called the trivial zeros of the function, while any other zeros are considered to be non - trivial. = = see also = = degeneracy initial and terminal objects list of mathematical jargon pathological trivialism trivial measure trivial representation trivial topology = = references = = = = external links = = trivial entry at mathworld
|
https://en.wikipedia.org/wiki/Triviality_(mathematics)
|
a cell fluid model with a modified morse potential is studied. the supercritical states are considered with respect to a possibility to build a separation boundary between liquid - like and gas - like behaviors. three different lines are calculated that can be used for this purpose : the locus of the isothermal compressibility maxima, the locus of the thermal expansion coefficient maxima, and the line where the effective chemical potential is zero, m = 0. by the symmetry of the functionals for the partition functions, the condition m = 0 in fluids is analogous to the absence of an external field in the ising model.
|
arxiv:2410.23694
|
offline imitation learning ( il ) refers to learning expert behavior solely from demonstrations, without any additional interaction with the environment. despite significant advances in offline il, existing techniques find it challenging to learn policies for long - horizon tasks and require significant re - training when task specifications change. towards addressing these limitations, we present go - dice, an offline il technique for goal - conditioned long - horizon sequential tasks. go - dice discerns a hierarchy of sub - tasks from demonstrations and uses these to learn separate policies for sub - task transitions and action execution, respectively ; this hierarchical policy learning facilitates long - horizon reasoning. inspired by the expansive dice - family of techniques, policy learning at both levels transpires within the space of stationary distributions. further, both policies are learnt with goal conditioning to minimize the need for retraining when task goals change. experimental results substantiate that go - dice outperforms recent baselines, as evidenced by a marked improvement in the completion rate of increasingly challenging pick - and - place mujoco robotic tasks. go - dice is also capable of leveraging imperfect demonstrations and partial task segmentation when available, both of which boost task performance relative to learning from expert demonstrations alone.
|
arxiv:2312.10802
|
underlay in - band device - to - device ( d2d ) communication can improve the spectrum efficiency of cellular networks. however, the coexistence of d2d and cellular users causes inter - cell and intra - cell interference. the former can be effectively managed through inter - cell interference coordination and, therefore, is not considered in this work. instead, we focus on the intra - cell interference and propose a d2d mode selection scheme to manage it inside a finite cellular network region. the potential d2d users are controlled by the base station ( bs ) to operate in d2d mode based on the average interference generated to the bs. using stochastic geometry, we study the outage probability experienced at the bs and a d2d receiver, and spectrum reuse ratio, which quantifies the average fraction of successfully transmitting d2d users. the analysis shows that the outage probability at the d2d receiver varies for different locations. additionally, without impairing the performance at the bs, if the path - loss exponent on the cellular link is slightly lower than that on the d2d link, the spectrum reuse ratio can have negligible decrease while the d2d users ' average number of successful transmissions increases with increasing d2d node density. this indicates that an increasing level of d2d communication can be beneficial in future networks.
|
arxiv:1510.03162
|
the oort cloud is thought to be a reservoir of icy planetesimals and the source of long - period comets ( lpcs ) implanted from the outer solar system during the time of giant planet formation. the abundance of rocky ice - free bodies is a key diagnostic of solar system formation models as it can distinguish between ` ` massive " and ` ` depleted " proto - asteroid belt scenarios and thus disentangle competing planet formation models. here we report a direct observation of a decimeter - sized ( $ \ sim2 $ kg ) rocky meteoroid on a retrograde lpc orbit ( $ e \ approx 1. 0 $, i = $ 121 ^ { \ circ } $ ). during its flight, it fragmented at dynamic pressures similar to fireballs dropping ordinary chondrite meteorites. a numerical ablation model fit produces bulk density and ablation properties also consistent with asteroidal meteoroids. we estimate the flux of rocky objects impacting earth from the oort cloud to be $ 1. 08 ^ { + 2. 81 } _ { - 0. 95 } \ mathrm { meteoroids / 10 ^ 6 km ^ 2 / yr } $ to a mass limit of 10 g. this corresponds to an abundance of rocky meteoroids of $ \ sim6 ^ { + 13 } _ { - 5 } $ \ % of all objects originating in the oort cloud and impacting earth to these masses. our result gives support to migration - based dynamical models of the formation of the solar system which predict that significant rocky material is implanted in the oort cloud, a result not explained by traditional solar system formation models.
|
arxiv:2212.06812
|
in this paper we consider a spin - $ \ frac { 3 } { 2 } $ dark matter ( dm ) particle coupled to neutrinos as a viable candidate to produce the observed dm relic density through the thermal freeze - out mechanism. the couplings of dm to neutrinos are considered first in a most general dimension six effective field theory framework. we then consider two specific neutrino - portal models discussed in the literature. in the first model dm couples to the standard model neutrinos through mixing generated by a sterile pseudo - dirac massive neutrino and the second model we consider is the widely studied $ u ( 1 ) _ { l _ \ mu - l _ \ tau } $ gauge symmetric model. for each of these models we explore the parameter space required to generate the observed relic density. the constraints on the parameters of these models from the existing and proposed neutrino experiments as well as from existing cosmological and astrophysical bounds are considered in the context of the relic density calculations.
|
arxiv:2206.06324
|
we study the superconvergence property of the linear discontinuous galerkin finite element method with the polynomial preserving recovery ( ppr ) and richardson extrapolation for the two dimensional helmholtz equation. the error estimate with explicit dependence on the wave number $ k $, the penalty parameter $ \ mu $ and the mesh condition parameter $ \ alpha $ is derived. first, we prove that under the assumption $ k ( kh ) ^ 2 \ leq c _ 0 $ ( $ h $ is the mesh size ) and certain mesh conditions, the estimate between the finite element solution and the linear interpolation of the exact solution is superconvergent under the $ \ norme { \ cdot } $ - seminorm. second, we prove a superconvergence result for the gradient recovered by ppr. furthermore, we estimate the error between the finite element gradient and the recovered gradient, which motivates us to define the a posteriori error estimator. finally, some numerical examples are provided to confirm the theoretical results of the superconvergence analysis. all theoretical findings are verified by numerical tests.
|
arxiv:1612.03386
|
noticing that the point - form approach referred to in many recent works implies physics described on hyperplanes, an approach inspired from dirac ' s one, which involves a hyperboloid surface, is presented. a few features pertinent to this new approach are emphasized. consequences as for the calculation of form factors are discussed.
|
arxiv:nucl-th/0501051
|
satisfiability modulo theories ( smt ) solving has become a critical part of many static analyses, including symbolic execution, refinement type checking, and model checking. we propose formulog, a domain - specific language that makes it possible to write a range of smt - based static analyses in a way that is both close to their formal specifications and amenable to high - level optimizations and efficient evaluation. formulog extends the logic programming language datalog with a first - order functional language and mechanisms for representing and reasoning about smt formulas ; a novel type system supports the construction of expressive formulas, while ensuring that neither normal evaluation nor smt solving goes wrong. our case studies demonstrate that a range of smt - based analyses can naturally and concisely be encoded in formulog, and that - - thanks to this encoding - - high - level datalog - style optimizations can be automatically and advantageously applied to these analyses.
|
arxiv:2009.08361
|
we present a quantum cellular automaton model in one space - dimension which has the dirac equation as emergent. this model, a discrete - time and causal unitary evolution of a lattice of quantum systems, is derived from the assumptions of homogeneity, parity and time - reversal invariance. the comparison between the automaton and the dirac evolutions is rigorously set as a discrimination problem between unitary channels. we derive an exact lower bound for the probability of error in the discrimination as an explicit function of the mass, the number and the momentum of the particles, and the duration of the evolution. computing this bound with experimentally achievable values, we see that in that regime the qca model cannot be discriminated from the usual dirac evolution. finally, we show that the evolution of one - particle states with narrow - band in momentum can be efficiently simulated by a dispersive differential equation for any regime. this analysis allows for a comparison with the dynamics of wave - packets as it is described by the usual dirac equation. this paper is a first step in exploring the idea that quantum field theory could be grounded on a more fundamental quantum cellular automaton model and that physical dynamics could emerge from quantum information processing. in this framework, the discretization is a central ingredient and not only a tool for performing non - perturbative calculation as in lattice gauge theory. the automaton model, endowed with a precise notion of local observables and a full probabilistic interpretation, could lead to a coherent unification of a hypothetical discrete planck scale with the usual fermi scale of high - energy physics.
|
arxiv:1212.2839
|
in this paper we obtain new results concerning the maximum modulus of the polar derivative of a polynomial with restricted zeros. our results generalize and refine upon the results of aziz and shah [ an integral mean estimate for polynomials, indian j. pure appl. math. 28 ( 1997 ) 1413 - - 1419 ] and gardner, govil and weems [ some results concerning rate of growth of polynomials, east j. approx. 10 ( 2004 ) 301 - - 312 ].
|
arxiv:0907.2836
|
let $ \ omega $ be a bounded pseudoconvex domain in $ \ mathbb { c } ^ n $ with lipschitz boundary and $ \ phi $ be a continuous function on $ \ overline { \ omega } $. we show that the toeplitz operator $ t _ { \ phi } $ with symbol $ \ phi $ is compact on the weighted bergman space if and only if $ \ phi $ vanishes on the boundary of $ \ omega $. we also show that compactness of the toeplitz operator $ t ^ { p, q } _ { \ phi } $ on $ \ overline { \ partial } $ - closed $ ( p, q ) $ - forms for $ 0 \ leq p \ leq n $ and $ q \ geq 1 $ is equivalent to $ \ phi = 0 $ on $ \ omega $.
|
arxiv:2302.05013
|
recent research investigates the decode - and - forward ( df ) relaying for mixed radio frequency ( rf ) and terahertz ( thz ) wireless links with zero - boresight pointing errors. in this letter, we analyze the performance of a fixed - gain amplify - and - forward ( af ) relaying for the rf - thz link to interface the access network on the rf technology with wireless thz transmissions. we develop probability density function ( pdf ) and cumulative distribution function ( cdf ) of the end - to - end snr for the relay - assisted system in terms of bivariate fox ' s h function considering $ \ alpha $ - $ \ mu $ fading for the thz system with non - zero boresight pointing errors and $ \ alpha $ - $ \ kappa $ - $ \ mu $ shadowed ( $ \ alpha $ - kms ) fading model for the rf link. using the derived pdf and cdf, we present exact analytical expressions of the outage probability, average bit - error - rate ( ber ), and ergodic capacity of the considered system. we also analyze the outage probability and average ber asymptotically for a better insight into the system behavior at high snr. we use simulations to compare the performance of the af relaying having a semi - blind gain factor with the recently proposed df relaying for thz - rf transmissions.
|
arxiv:2112.01984
|
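For context, fixed-gain (semi-blind) AF analyses of this kind typically start from the standard end-to-end SNR relation below; the letter's α-μ and α-KMS statistics then enter through the distributions of the two hop SNRs. The notation here is generic, not the letter's exact symbols.

```latex
\gamma_{\mathrm{e2e}} \;=\; \frac{\gamma_1 \, \gamma_2}{\gamma_2 + C},
```

where γ1 and γ2 are the first- and second-hop SNRs and C is a constant fixed by the semi-blind relay gain; outage probability and average BER then follow by averaging over the fading distributions of γ1 and γ2.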
in this article we introduce conformal riemannian morphisms. the idea of conformal riemannian morphism generalizes the notions of an isometric immersion, a riemannian submersion, an isometry, a riemannian map and a conformal riemannian map. we show that every injective conformal riemannian morphism is an injective conformal immersion, and that on a connected manifold, every surjective conformal riemannian morphism is a surjective conformal submersion, and every bijective conformal riemannian morphism is a conformal map.
|
arxiv:1804.06569
|
in this work, we explore the possibility that quantum fluctuations induce an electric or magnetic charge or both, in the context of gravity ' s rainbow. a semi - classical approach is adopted, where the graviton one - loop contribution to a classical energy in a background spacetime is computed through a variational approach with gaussian trial wave functionals. the energy density of the graviton one - loop contribution, in this context, acts as a source for the electric / magnetic charge. the ultraviolet ( uv ) divergences, which arise analyzing this procedure, are kept under control with the help of an appropriate choice of the rainbow ' s functions. in this way we avoid the introduction of any regularization / renormalization scheme. a comparison with the observed data leads us to determine the size of the electron and of the magnetic monopole which appear to be of planckian size. both results seem to be of the same order for a schwarzschild and a de sitter background, respectively. estimates on the magnetic monopole size have been done with the help of the dirac quantization procedure. we find that the monopole radius is larger than the electron radius. even in this case the ratio between the electric and magnetic monopole radius appears to be of the same order for both geometries.
|
arxiv:1305.3390
|
a remarkable several times increase ( up to 10 k ) of the superconducting critical temperature tc has been observed in point contacts created on the base of single crystals afe $ _ 2 $ as $ _ 2 $ ( a = k, cs, rb ). possible reasons for such a tc increase in point contacts are briefly discussed on a qualitative level. among them, it is most likely attributed to interfacial carrier doping and / or uniaxial non - homogeneous pressure arising when the contact is created.
|
arxiv:2009.05339
|
i introduce a survey of economic expectations formed by querying a large language model ( llm ) ' s expectations of various financial and macroeconomic variables based on a sample of news articles from the wall street journal between 1984 and 2021. i find the resulting expectations closely match existing surveys including the survey of professional forecasters ( spf ), the american association of individual investors, and the duke cfo survey. importantly, i document that llm based expectations match many of the deviations from full - information rational expectations exhibited in these existing survey series. the llm ' s macroeconomic expectations exhibit under - reaction commonly found in consensus spf forecasts. additionally, its return expectations are extrapolative, disconnected from objective measures of expected returns, and negatively correlated with future realized returns. finally, using a sample of articles outside of the llm ' s training period i find that the correlation with existing survey measures persists - - indicating these results do not reflect memorization but generalization on the part of the llm. my results provide evidence for the potential of llms to help us better understand human beliefs and navigate possible models of nonrational expectations.
|
arxiv:2305.02823
|
contrastive visual pretraining based on the instance discrimination pretext task has made significant progress. notably, recent work on unsupervised pretraining has shown to surpass the supervised counterpart for finetuning downstream applications such as object detection and segmentation. it comes as a surprise that image annotations would be better left unused for transfer learning. in this work, we investigate the following problems : what makes instance discrimination pretraining good for transfer learning? what knowledge is actually learned and transferred from these models? from this understanding of instance discrimination, how can we better exploit human annotation labels for pretraining? our findings are threefold. first, what truly matters for the transfer is low - level and mid - level representations, not high - level representations. second, the intra - category invariance enforced by the traditional supervised model weakens transferability by increasing task misalignment. finally, supervised pretraining can be strengthened by following an exemplar - based approach without explicit constraints among the instances within the same category.
|
arxiv:2006.06606
|
we find a family of complex saddle - points at large n of the matrix model for the superconformal index of su ( n ) n = 4 super yang - mills theory on $ s ^ 3 \ times s ^ 1 $ with one chemical potential $ \ tau $. the saddle - point configurations are labelled by points $ ( m, n ) $ on the lattice $ \ lambda _ \ tau = \ mathbb { z } \ tau + \ mathbb { z } $ with $ \ text { gcd } ( m, n ) = 1 $. the eigenvalues at a given saddle are uniformly distributed along a string winding $ ( m, n ) $ times along the $ ( a, b ) $ cycles of the torus $ \ mathbb { c } / \ lambda _ \ tau $. the action of the matrix model extended to the torus is closely related to the bloch - wigner elliptic dilogarithm, and the related bloch formula allows us to calculate the action at the saddle - points in terms of real - analytic eisenstein series. the actions of $ ( 0, 1 ) $ and $ ( 1, 0 ) $ agree with that of pure ads $ _ 5 $ and the supersymmetric ads $ _ 5 $ black hole, respectively. the black hole saddle dominates the canonical ensemble when $ \ tau $ is close to the origin, and there are new saddles that dominate when $ \ tau $ approaches rational points. the extension of the action in terms of modular forms leads to a simple treatment of the cardy - like limit $ \ tau \ to 0 $.
|
arxiv:1909.09597
|
pip - ii is an essential upgrade of the fermilab complex that will enable the world ' s most intense high - energy beam of neutrinos for the international deep underground neutrino experiment at lbnf and support a broad physics program at fermilab. ultimately, the pip - ii superconducting linac will be capable of accelerating the $ h ^ - $ cw beam to 800 mev with an average power of 1. 6 mw. to operate the linac with such high power, beam losses and beam emittance growth must be tightly controlled. in this paper, we present the results of global optimization of the linac options towards a robust and efficient physics design for the superconducting section of the pip - ii linac. we also investigate the impact of the nonlinear field of the dipole correctors on the beam quality and derive the requirement on the field quality using statistical analysis. finally, we assess the need to correct the quadrupole focusing produced by half - wave and single - spoke accelerating cavities, and we assess the feasibility of controlling the beam coupling in the machine by changing the polarity of the field of the linac focusing solenoids.
|
arxiv:2209.02520
|
we study the orientation in a uniform magnetic field of rod - like anisotropic biofluid crystals with an easy plane that makes an oblique angle with the crystal ' s c - axis. for a sufficiently strong field, these crystalline rods orient themselves such that the crystal ' s easy plane is parallel to the magnetic field, the rod ' s direction being defined as the direction of the crystal ' s c - axis. as the rod rotates about the crystal ' s hard axis there will therefore be a range of angles that the rod makes with the magnetic field. we detail this behavior by first providing illustrations of hemozoin crystals at various orientations. these illustrations clearly demonstrate that the orientation angle that the crystalline rod makes with respect to the magnetic field varies from about 30 deg to 150 deg. we also derive an analytical expression for the probability density function for the orientation angle. we find that the orientation angles are not uniformly distributed between the limits of 30 deg and 150 deg, but rather tend to cluster near these limits. this suggests experimental tests and addresses confusion about the rod orientation found in past literature. the relevance to other anisotropic biofluid crystals, such as those produced by gout, is also discussed.
|
arxiv:2408.13946
|
let $ m $ be a commutative cancellative monoid, and let $ r $ be an integral domain. the question of whether the monoid ring $ r [ x ; m ] $ is atomic provided that both $ m $ and $ r $ are atomic dates back to the 1980s. in 1993, roitman gave a negative answer to the question for $ m = \ mathbb { n } _ 0 $ : he constructed an atomic integral domain $ r $ such that the polynomial ring $ r [ x ] $ is not atomic. however, the question of whether a monoid algebra $ f [ x ; m ] $ over a field $ f $ is atomic provided that $ m $ is atomic has been open since then. here we offer a negative answer to this question. first, we find for any infinite cardinal $ \ kappa $ a torsion - free atomic monoid $ m $ of rank $ \ kappa $ satisfying that the monoid domain $ r [ x ; m ] $ is not atomic for any integral domain $ r $. then for every $ n \ ge 2 $ and for each field $ f $ of finite characteristic we exhibit a torsion - free atomic monoid of rank $ n $ such that $ f [ x ; m ] $ is not atomic. finally, we construct a torsion - free atomic monoid $ m $ of rank $ 1 $ such that $ \ mathbb { z } _ 2 [ x ; m ] $ is not atomic.
|
arxiv:1906.11138
|
generating a unitary transformation in the shortest possible time is of practical importance to quantum information processing because it helps to reduce decoherence effects and improve robustness to additive control field noise. many analytical and numerical studies have identified the minimum time necessary to implement a variety of quantum gates on coupled - spin qubit systems. this work focuses on exploring the pareto front that quantifies the trade - off between the competitive objectives of maximizing the gate fidelity $ \ mathcal { f } $ and minimizing the control time $ t $. in order to identify the critical time $ t ^ { \ ast } $, below which the target transformation is not reachable, as well as to determine the associated pareto front, we introduce a numerical method of pareto front tracking ( pft ). we consider closed two - and multi - qubit systems with constant inter - qubit coupling strengths and each individual qubit controlled by a separate time - dependent external field. our analysis demonstrates that unit fidelity ( to a desired numerical accuracy ) can be achieved at any $ t \ geq t ^ { \ ast } $ in most cases. however, the optimization search effort rises superexponentially as $ t $ decreases and approaches $ t ^ { \ ast } $. furthermore, a small decrease in control time incurs a significant penalty in fidelity for $ t < t ^ { \ ast } $, indicating that it is generally undesirable to operate below the critical time. we investigate the dependence of the critical time $ t ^ { \ ast } $ on the coupling strength between qubits and the target gate transformation. practical consequences of these findings for laboratory implementation of quantum gates are discussed.
|
arxiv:1112.0333
|
the relatively small binding energy in nuclei suggests that they may be well represented by near - bps skyrmions since their mass is roughly proportional to the baryon number $ a. $ for that purpose, we propose a generalization of the skyrme model with terms up to order six in derivatives of the pion fields and treat the nonlinear $ \ sigma $ and skyrme terms as small perturbations. for our special choice of mass term ( or potential ) $ v $, we obtain well - behaved analytical bps - type solutions with non - shell configurations for the baryon density, as opposed to the more complex shell - like configurations found in most extensions of the skyrme model. along with static and ( iso ) rotational energies, we add to the mass of the nuclei the often neglected coulomb energy and isospin breaking term. fitting the four model parameters, we find a remarkable agreement for the binding energy per nucleon $ b / a $ with respect to experimental data. these results support the idea that nuclei could be near - bps skyrmions.
|
arxiv:1205.1414
|
we characterise rectifiable subsets of a complete metric space $ x $ in terms of local approximation, with respect to the gromov - - hausdorff distance, by an $ n $ - dimensional banach space. in fact, if $ e \ subset x $ with $ \ mathcal { h } ^ n ( e ) < \ infty $ and has positive lower density almost everywhere, we prove that it is sufficient that, at almost every point and each sufficiently small scale, $ e $ is approximated by a bi - lipschitz image of euclidean space. we also introduce a generalisation of preiss ' s tangent measures that is suitable for the setting of arbitrary metric spaces and formulate our characterisation in terms of tangent measures. this definition is equivalent to that of preiss when the ambient space is euclidean, and equivalent to the measured gromov - - hausdorff tangent space when the measure is doubling.
|
arxiv:2109.12371
|
using numerical simulation we have studied the magnetization distribution and the process of magnetization reversal in nanoscale magnets placed above a superconductor plane. in order to account for the influence of the superconductor on the magnetization distribution in the nanomagnet we have used the london approximation. we have found that for typical values of the london penetration depth the ground state magnetization is mostly unchanged. at the same time, the fields of vortex nucleation and annihilation change significantly : the interval where the vortex is stable widens by 100 - 200 oe for the particle above the superconductor. such fields are experimentally observable, so practical applications of this effect are possible.
|
arxiv:cond-mat/0410201
|
we perform two - ( 2d ) and three - dimensional ( 3d ) hydrodynamics simulations of convective oxygen shell - burning that takes place deep inside a massive progenitor star of a core - collapse supernova. using a one - dimensional ( 1d ) stellar evolution code, we first calculate the evolution of massive stars with an initial mass of 9 - 40 $ m _ \ odot $. four different overshoot parameters are applied, and a co core mass trend similar to previous works is obtained in the 1d models. selecting eleven 1d models that have a silicon and oxygen coexisting layer, we perform 2d hydrodynamics simulations of the evolution for $ \ sim $ 100 s until the onset of core - collapse. we find that convection with large - scale eddies and a turbulent mach number $ \ sim $ 0. 1 is obtained in the models having a si / o layer with a scale of 10 $ ^ 8 $ cm, whereas most models that have an extended o / si layer up to a few $ \ times 10 ^ 9 $ cm exhibit lower turbulent velocity. our results indicate that supernova progenitors that possess a thick si / o layer could provide a preferable condition for perturbation - aided explosions. we perform a 3d simulation of a 25 $ m _ \ odot $ model, which exhibits large - scale convection in the 2d models. the 3d model develops large - scale ( $ \ ell = 2 $ ) convection similar to the 2d model ; however, the turbulent velocity is lower. by estimating the neutrino emission properties of the 3d model, we point out that a time modulation of the event rates, if observed in kamland and hyper - kamiokande, would provide important information about structural changes in the presupernova convective layer.
|
arxiv:1903.07811
|
let f denote a homogeneous degree 4 polynomial in 3 variables, and let s be an integer between 1 and 5. we would like to know if f can be written as a sum of fourth powers of s linear forms ( or a degeneration ). we determine necessary and sufficient conditions for this to be possible. these conditions are expressed as the vanishing of certain concomitants of f for the natural action of sl _ 3.
|
arxiv:math/0212169
|
simplicial complexes are a generalization of graphs that model higher - order relations. in this paper, we introduce simplicial patterns - - that we call simplets - - and generalize the task of frequent pattern mining from the realm of graphs to that of simplicial complexes. our task is particularly challenging due to the enormous search space and the need for higher - order isomorphism. we show that finding the occurrences of simplets in a complex can be reduced to a bipartite graph isomorphism problem, in linear time and at most quadratic space. we then propose an anti - monotonic frequency measure that allows us to start the exploration from small simplets and stop expanding a simplet as soon as its frequency falls below the minimum frequency threshold. equipped with these ideas and a clever data structure, we develop a memory - conscious algorithm that, by carefully exploiting the relationships among the simplices in the complex and among the simplets, achieves efficiency and scalability for our complex mining task. our algorithm, fresco, comes in two flavors : it can compute the exact frequency of the simplets or, more quickly, it can determine whether a simplet is frequent, without having to compute the exact frequency. experimental results prove the ability of fresco to mine frequent simplets in complexes of various size and dimension, and the significance of the simplets with respect to the traditional graph patterns.
|
arxiv:2201.08005
|
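The anti-monotone frequency measure described above (arxiv:2201.08005) permits apriori-style pruning: once a simplet falls below the frequency threshold, none of its expansions needs to be examined. The sketch below shows only that generic level-wise loop; the canonical encoding of simplets, the `expand` routine, and the `frequency` oracle (which in FreSCo reduces to bipartite graph isomorphism checks) are placeholders, not the paper's algorithm.

```python
from typing import Callable, Iterable, Set, Tuple

Pattern = Tuple  # a simplet in some canonical, hashable encoding (placeholder)

def frequent_patterns(seeds: Iterable[Pattern],
                      expand: Callable[[Pattern], Iterable[Pattern]],
                      frequency: Callable[[Pattern], float],
                      min_freq: float) -> Set[Pattern]:
    """Level-wise search relying on an anti-monotone frequency measure:
    if a pattern is infrequent, none of its expansions can be frequent,
    so that branch of the search space is pruned immediately."""
    frequent: Set[Pattern] = set()
    frontier = {p for p in seeds if frequency(p) >= min_freq}
    while frontier:
        frequent |= frontier
        next_frontier = set()
        for pattern in frontier:
            for bigger in expand(pattern):          # grow one step at a time
                if bigger not in frequent and frequency(bigger) >= min_freq:
                    next_frontier.add(bigger)
        frontier = next_frontier - frequent
    return frequent
```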
recently, wang and ma proposed a conjecture associated with a possible generalization of the andrews - warnaar identities. it is confirmed in this paper. as applications of this conjecture, we prove that a family of series can be expressed in terms of partial theta functions and construct some new partial theta function identities.
|
arxiv:1805.01268
|
the masses of the excited heavy tetraquarks with hidden charm are calculated within the relativistic diquark - antidiquark picture. the dynamics of the light quark in a heavy - light diquark is treated completely relativistically. the diquark structure is taken into account by calculating the diquark - gluon form factor. new experimental data on charmonium - like states above open charm threshold are discussed. the obtained results indicate that x ( 3872 ), y ( 4260 ), y ( 4360 ), z ( 4248 ), z ( 4433 ) and y ( 4660 ) could be tetraquark states with hidden charm.
|
arxiv:0808.3912
|
achieving high quantum efficiency ( qe ) with low dark count is essential for highly sensitive photodetectors ( pds ), including single photon avalanche detectors ( spads ). however, high qe requires a thicker absorber region, which leads to high dark current and noise, which in turn affect the detectivity of pds and the photodetection efficiency and dark count of spads. the holy grail of photodetector and avalanche photodiode design is to achieve the highest qe with the thinnest absorber while still enabling large avalanche gain as needed. we have developed a new design paradigm which exploits the coupling between dielectric mie resonance and transverse propagating waves in thin layers. the mie resonance launches the incident light at an angle in an ultrathin absorber, and when coupled to transverse waves, the light propagates laterally and is fully absorbed owing to the longer optical path. consequently, with an appropriate choice of materials for a chosen wavelength, high absorption ( ~ 90 % ) within a typical absorber thickness of < 100 nm is possible. for illustration, we apply our approach to design a si - based detector operating at 810 nm and an ingaas - based detector operating at 1550 nm, and predict that the dark current at room temperature is reduced by at least two orders of magnitude. in addition, the lateral distances are often a few microns, and hence these designs can potentially enable avalanching for a large optical gain.
|
arxiv:2407.16830
|
vector autoregressive ( var ) models are popularly adopted for modelling high - dimensional time series, and their piecewise extensions allow for structural changes in the data. in var modelling, the number of parameters grows quadratically with the dimensionality, which necessitates the sparsity assumption in high dimensions. however, it is debatable whether such an assumption is adequate for handling datasets exhibiting strong serial and cross - sectional correlations. we propose a piecewise stationary time series model that simultaneously allows for strong correlations as well as structural changes, where pervasive serial and cross - sectional correlations are accounted for by a time - varying factor structure, and any remaining idiosyncratic dependence between the variables is handled by a piecewise stationary var model. we propose an accompanying two - stage data segmentation methodology which fully addresses the challenges arising from the latency of the component processes. its consistency in estimating both the total number and the locations of the change points in the latent components is established under conditions considerably more general than those in the existing literature. we demonstrate the competitive performance of the proposed methodology on simulated datasets and an application to us blue chip stocks data.
|
arxiv:2204.02724
|
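To make the two-stage idea above (arxiv:2204.02724) concrete, the first stage removes a pervasive factor structure so that segmentation can operate on the idiosyncratic remainder. The sketch below uses plain static principal components as a stand-in for the paper's time-varying factor estimation, and omits the change-point search of the second stage.

```python
import numpy as np

def remove_factor_structure(x: np.ndarray, n_factors: int) -> np.ndarray:
    """x has shape (T, p): T time points, p series.
    Estimate static factors by principal components and return the
    idiosyncratic remainder; a simplified stand-in for the paper's
    time-varying factor step."""
    xc = x - x.mean(axis=0, keepdims=True)
    # principal components via SVD of the centred data matrix
    _, _, vt = np.linalg.svd(xc, full_matrices=False)
    loadings = vt[:n_factors].T          # (p, r) loading matrix
    factors = xc @ loadings              # (T, r) estimated factors
    common = factors @ loadings.T        # common component
    return xc - common                   # idiosyncratic component

# stage two (not sketched) would run a VAR change-point segmentation
# on the idiosyncratic component returned above
```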
in [ 1 ], a most general higher curvature non - local gravity action was derived that admits a particular $ r ^ 2 $ - like inflationary solution predicting the spectral index of primordial scalar perturbations $ n _ s ( n ) \ approx 1 - \ frac { 2 } { n } $, where $ n $ is the number of e - folds before the end of inflation, $ n \ gg 1 $, any value of the tensor - to - scalar ratio $ r ( n ) < 0. 036 $ and the tensor tilt $ n _ t ( n ) $ violating the $ r = - 8n _ t $ condition. in this paper, we compute scalar primordial non - gaussianities ( pngs ) in this theory and effectively demonstrate that higher curvature non - local terms lead to reduced bispectrum $ f _ { \ rm nl } \ left ( k _ 1, \, k _ 2, \, k _ 3 \ right ) $ mimicking several classes of scalar field models of inflation known in the literature. we obtain $ \ vert f _ { \ rm nl } \ vert \ sim o ( 1 - 10 ) $ in the equilateral, orthogonal, and squeezed limits and the running of these pngs measured by the quantity $ \ vert \ frac { d \ ln f _ { \ rm nl } } { d \ ln k } \ vert \ lesssim 1 $. such pngs are sufficiently large to be measurable by future cmb and large scale structure observations, thus providing a possibility to probe the nature of quantum gravity. furthermore, we demonstrate that the $ r ^ 2 $ - like inflation in non - local modification of gravity brings non - trivial predictions which go beyond the current status of effective field theories ( efts ) of single field, quasi - single field and multiple field inflation. a distinguishable feature of non - local $ r ^ 2 $ - like inflation compared to local efts is that we can have running of pngs at least an order of magnitude higher. in summary, through our generalized non - local $ r ^ 2 $ - like inflation, we obtain a robust geometric framework of inflation that can explain any detection of observable quantities related to scalar pngs.
|
arxiv:2210.16459
|
the surface code is a two - dimensional stabiliser code with parameters $ [ [ n, 1, \ theta ( \ sqrt { n } ) ] ] $. to this day, no stabiliser code with growing distance is known to live in less than two dimensions. in this note we show that no such code can exist.
|
arxiv:2503.17655
|
half of the energy is always lost when charging a capacitor. even in the limit of vanishing resistance, half of the charging energy is still lost - - to radiation instead of heat. while this fraction can technically be reduced by charging adiabatically, it otherwise places a fundamental limit on the charging efficiency of a capacitor. here we show that this 1 / 2 limit can be broken by coupling a ferroelectric to the capacitor dielectric. maxwell ' s equations are solved for the coupled system to analyze energy flow from the perspective of poynting ' s theorem and show that ( 1 ) total energy dissipation is reduced below the fundamental limit during charging and discharging ; ( 2 ) energy is saved by " recycling " the energy already stored in the ferroelectric phase transition ; and ( 3 ) this phase transition energy is directly transferred between the ferroelectric and dielectric during charging and discharging. these results demystify recent works on low energy negative capacitance devices as well as lay the foundation for improving fundamental energy efficiency in all devices that rely on energy storage in electric fields.
|
arxiv:1805.04259
|
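The 1 / 2 limit invoked above (arxiv:1805.04259) follows from elementary energy bookkeeping when a linear capacitor is charged from a constant-voltage source:

$$ W_{\rm source} = \int_0^{Q} V\,dq = QV = CV^2, \qquad W_{\rm stored} = \int_0^{Q} \frac{q}{C}\,dq = \frac{Q^2}{2C} = \tfrac{1}{2}CV^2, \qquad W_{\rm lost} = W_{\rm source} - W_{\rm stored} = \tfrac{1}{2}CV^2, $$

so exactly half of the delivered energy is dissipated, as heat or radiation, regardless of the series resistance, unless the charging is made adiabatic.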
we use numerical simulations of ray tracing through n - body simulations to investigate weak lensing by large - scale structure. these are needed for testing the analytic predictions of two - point correlators, to set error estimates on them and to investigate nonlinear gravitational effects in the weak lensing maps. on scales larger than 1 degree gaussian statistics suffice and can be used to estimate the sampling, noise and aliasing errors on the measured power spectrum. for this case we describe a minimum variance inversion procedure from the 2 - d to 3 - d power spectrum and discuss a sparse sampling strategy which optimizes the signal to noise on the power spectrum. on degree scales and smaller the shear and convergence statistics lie in the nonlinear regime and have a non - gaussian distribution. for this regime ray tracing simulations are useful to provide reliable error estimates and calibration of the measurements. we show how the skewness and kurtosis can in principle be used to probe the mean density in the universe, but are sensitive to sampling errors and require large observed areas. the probability distribution function is likely to be more useful as a tool to investigate nonlinear effects. in particular, it shows striking differences between models with different values of the mean density $ \ omega _ m $.
|
arxiv:astro-ph/9804238
|
it is known that the principal poincar \ ' e pontryagin function is generically an abelian integral. in non - generic cases it is an iterated integral. in previous papers one of the authors gave a precise description of the principal poincar \ ' e pontryagin function, an iterated integral of length at most 2, involving a logarithmic function with only one ramification at a point at infinity. we show here that this property can be generalized to hamiltonians having real points at infinity and satisfying some properties.
|
arxiv:1104.4021
|
we derive relative upper bounds on the effective magnetic moment of dirac neutrinos from comparison of the standard weak and electromagnetic mechanisms of the neutrino luminosity due to the compton - like photoproduction of neutrino pairs in a degenerate gas of electrons on the lowest landau level in a strong magnetic field. these bounds are close to the known astrophysical and laboratory ones.
|
arxiv:1112.1635
|
let $ r _ 5 ( n ) $ be the largest cardinality of a set in $ \ { 1, \ ldots, n \ } $ which does not contain $ 5 $ elements in arithmetic progression. then there exists a constant $ c \ in ( 0, 1 ) $ such that \ [ r _ 5 ( n ) \ ll \ frac { n } { \ exp ( ( \ log \ log n ) ^ { c } ) }. \ ] our work is a consequence of recent improved bounds on the $ u ^ 4 $ - inverse theorem of the first author and the fact that $ 3 $ - step nilsequences may be approximated by locally cubic functions on shifted bohr sets. this combined with the density increment strategy of heath - brown and szemer { \ ' e } di, codified by green and tao, gives the desired result.
|
arxiv:2312.10776
|
recent experiments of the quasi - one - dimensional spin - 1 / 2 antiferromagnet copper benzoate established the existence of a magnetic field induced gap. the observed neutron scattering intensity exhibits resolution limited peaks at both the antiferromagnetic wave number and at incommensurate wave numbers related to the applied magnetic field. we determine the ratio of spectral weights of these peaks within the framework of a low - energy effective field theory description of the problem.
|
arxiv:cond-mat/0304244
|
we show that the auger air shower array has the potential to detect neutrinos of energies in the $ 10 ^ { 19 } ~ $ ev range through horizontal air showers. assuming some simple conservative trigger requirements we obtain the acceptance for horizontal air showers as induced by high energy neutrinos by two alternative methods and we then give the expected event rates for a variety of neutrino fluxes as predicted in different models which are used for reference.
|
arxiv:astro-ph/9801313
|
among the many variants of reinforcement learning ( rl ), an important class of problems is that in which the state and action spaces are continuous - - autonomous robots, autonomous vehicles, and optimal control are all examples of such problems that lend themselves naturally to reinforcement - based algorithms and have continuous state and action spaces. in this paper, we introduce a prioritized form of a combination of state - of - the - art approaches such as deep q - learning ( dqn ) and deep deterministic policy gradient ( ddpg ) to outperform the earlier results for continuous state and action space problems. our experiments also involve the use of parameter noise during training, resulting in more robust deep rl models that outperform the earlier results significantly. we believe these results are a valuable addition for continuous state and action space problems.
|
arxiv:2410.11250
|
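One ingredient mentioned above (arxiv:2410.11250) is parameter noise: exploration is obtained by perturbing the actor's weights rather than its actions. A minimal PyTorch sketch is given below; the network size, noise scale, and usage pattern are assumptions for illustration, not the authors' implementation.

```python
import copy
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Small deterministic policy for a continuous action space."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def perturbed_copy(actor: Actor, sigma: float = 0.05) -> Actor:
    """Return a copy of the actor with Gaussian noise added to every
    weight; actions from this noisy copy drive exploration while the
    original actor is the one being trained."""
    noisy = copy.deepcopy(actor)
    with torch.no_grad():
        for p in noisy.parameters():
            p.add_(sigma * torch.randn_like(p))
    return noisy

# usage sketch: behaviour = perturbed_copy(actor); action = behaviour(obs)
```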
we study one - dimensional, interacting, gapped fermionic systems described by variants of the peierls - hubbard model and characterize their phases via a topological invariant constructed out of their green ' s functions. we demonstrate that the existence of topologically protected, zero - energy states at the boundaries of these systems can be tied to the values of their topological invariant, just like when working with the conventional, noninteracting topological insulators. we use a combination of analytical methods and the numerical density matrix renormalization group method to calculate the values of the topological invariant throughout the phase diagrams of these systems, thus deducing when topologically protected boundary states are present. we are also able to study topological states in spin systems because, deep in the mott insulating regime, these fermionic systems reduce to spin chains. in this way, we associate the zero - energy states at the end of an antiferromagnetic spin - one heisenberg chain with the topological invariant 2.
|
arxiv:1205.5095
|
high sensitivity observations of radio halos in galaxy clusters at frequencies lower than 330 mhz are still relatively rare, and very little is known compared to the classical 1. 4 ghz images. the few radio halos imaged down to 150 - 240 mhz show a considerable spread in size, morphology and spectral properties. all clusters belonging to the gmrt radio halo survey with detected or candidate cluster - scale diffuse emission have been imaged at 325 mhz with the gmrt. a few of them were also observed with the gmrt at 240 mhz and 150 mhz. for a1682, imaging is particularly challenging due to the presence of strong and extended radio galaxies at the center. our data analysis suggests that the radio galaxies are superposed on very low surface brightness radio emission extended on the cluster scale, which we present here.
|
arxiv:1107.2198
|
when assessing risks on a finite - time horizon, the problem can often be reduced to the study of a random sequence $ c ( n ) = ( c _ 1, \ ldots, c _ n ) $ of random length $ n $, where $ c ( n ) $ comes from the product of a matrix $ a ( n ) $ of random size $ n \ times n $ and a random sequence $ x ( n ) $ of random length $ n $. our aim is to build a regular variation framework for such random sequences of random length, to study their spectral properties and, subsequently, to develop risk measures. in several applications, many risk indicators can be expressed from the asymptotic behavior of $ \ vert c ( n ) \ vert $, for some norm $ \ vert \ cdot \ vert $. we propose a generalization of the breiman lemma that yields an asymptotic equivalent of $ \ vert c ( n ) \ vert $ and provides risk indicators such as the ruin probability and the tail index for shot noise processes on a finite - time horizon. lastly, we apply our final result to a model used in dietary risk assessment and in non - life insurance mathematics to illustrate the applicability of our method.
|
arxiv:1606.08321
|
cp asymmetries have been measured recently by the lhcb collaboration in three - body $ b ^ + $ decays to final states involving charged pions and kaons. large asymmetries with opposite signs at a level of about 60 % have been observed in $ b ^ \ pm \ to \ pi ^ \ pm ( { \ rm or } k ^ \ pm ) \ pi ^ + \ pi ^ - $ and $ b ^ \ pm \ to \ pi ^ \ pm k ^ + k ^ - $ for restricted regions in the dalitz plots involving $ \ pi ^ + \ pi ^ - $ and $ k ^ + k ^ - $ with low invariant mass. u - spin is shown to predict corresponding $ \ delta s = 0 $ and $ \ delta s = 1 $ asymmetries with opposite signs and inversely proportional to their branching ratios, in analogy with a successful relation predicted thirteen years ago between asymmetries in $ b _ s \ to k ^ - \ pi ^ + $ and $ b ^ 0 \ to k ^ + \ pi ^ - $. we compare these predictions with the measured integrated asymmetries. effects of specific resonant or non - resonant partial waves on enhanced asymmetries for low - pair - mass regions of the dalitz plot are studied in $ b ^ \ pm \ to \ pi ^ \ pm \ pi ^ + \ pi ^ - $. the closure of low - mass $ \ pi ^ + \ pi ^ - $ and $ k ^ + k ^ - $ channels involving only $ \ pi \ pi \ leftrightarrow k \ bar k $ rescattering may explain by cpt approximately equal magnitudes and opposite signs measured in $ b ^ \ pm \ to \ pi ^ \ pm \ pi ^ + \ pi ^ - $ and $ b ^ \ pm \ to \ pi ^ \ pm k ^ + k ^ - $.
|
arxiv:1306.2625
|
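In formula form, the u-spin prediction quoted above (arxiv:1306.2625) -- asymmetries of opposite sign and inversely proportional to the branching ratios -- can be written schematically (up to u-spin-breaking and phase-space corrections) as

$$ A_{CP}^{\,\Delta S = 0}\, \mathcal{B}^{\,\Delta S = 0} \;\simeq\; -\, A_{CP}^{\,\Delta S = 1}\, \mathcal{B}^{\,\Delta S = 1}, $$

so the channel with the larger branching ratio carries the smaller asymmetry, which is the pattern compared against the measured integrated asymmetries in the abstract.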
the experimental realization of lattices with chern bands in ultracold - atom and photonic systems has motivated the study of time - dependent phenomena, such as spatial propagation, in lattices with nontrivial topology. we study the dynamics of gaussian wavepackets on the haldane honeycomb chern - band lattice model, in the presence of a harmonic trap. we focus on the transverse response to a force, which is due partly to the berry curvature and partly to the transverse component of the energy band curvature. we evaluate the accuracy of a semiclassical description, which treats the wavepacket as a point particle in both real and momentum space, in reproducing the motion of a realistic wavepacket with finite extent. we find that, in order to accurately capture the wavepacket dynamics, the extent of the wavepacket in momentum space needs to be taken into account. the dynamics is sensitive to the interplay of band dispersion and berry curvature over the finite region of momentum ( reciprocal ) space where the wavepacket has support. moreover, if the wavepacket is prepared with a finite initial momentum, the semiclassical analysis reproduces its motion as long as it has a large overlap with the eigenstates of a single band. the semiclassical description generally improves with increasing real - space size of the wavepacket, as long as the external conditions ( e. g., external force ) remain uniform throughout the spatial extent of the wavepacket.
|
arxiv:1509.03638
|
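The point-particle description referred to above (arxiv:1509.03638) is the standard semiclassical wavepacket dynamics, in which the centre $(\mathbf{r}, \mathbf{k})$ of a wavepacket in band $n$ obeys

$$ \dot{\mathbf{r}} = \frac{1}{\hbar}\,\frac{\partial \varepsilon_n(\mathbf{k})}{\partial \mathbf{k}} - \dot{\mathbf{k}} \times \boldsymbol{\Omega}_n(\mathbf{k}), \qquad \hbar\,\dot{\mathbf{k}} = \mathbf{F}(\mathbf{r}), $$

where $\mathbf{F}$ is the force from the harmonic trap plus any applied force and $\boldsymbol{\Omega}_n$ is the berry curvature. The anomalous second term gives the berry-curvature part of the transverse response, while the band-curvature part enters through the $\mathbf{k}$ dependence of $\partial \varepsilon_n / \partial \mathbf{k}$.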
in this paper we describe a project we initiated to investigate individual pixels in the downloaded kepler apertures in order to find objects in the background of the main targets with variable brightness. in the first paper of this series we discovered and investigated 547 short - period eclipsing binaries ( bienias et al. 2021 ). here we present the independent discovery of 26 new rr lyrae stars in the kepler background pixels obtained during the primary mission, and provide continuous and precise photometry for these objects. twenty - one of these stars were already noted by gaia or the pan - starrs survey. this new population of dominantly faint and distant rr lyrae stars increases by 50 % and complements nicely the 52 already known main target rr lyrae stars in the original kepler field. despite their faintness, the four - year quasi - uninterrupted light curves of these stars allow an unprecedented view of these faint halo objects. we present an analysis of the light curves of the new rr lyrae sample, verify their classification using fourier parameters, and discuss the properties of these newly found pulsating variable stars. most notably, this is the first time that such faint rr lyrae stars have been investigated with the help of a photometric data set with outstanding cadence and precision. interestingly, these objects share the properties of their brighter siblings in terms of sub - class characteristics, additional mode content, and modulation occurrence rates.
|
arxiv:2203.08596
|
we have used unbiased global optimization to fit a reactive force field to a given set of reference data. specifically, we have employed genetic algorithms ( ga ) to fit reaxff to sioh data, using an in - house ga code that is parallelized across reference data items via the message - passing interface ( mpi ). details of ga tuning turn out to be far less important for global optimization efficiency than using suitable ranges within which the parameters are varied. to establish these ranges, either prior knowledge can be used or successive stages of ga optimizations, each building upon the best parameter vectors and ranges found in the previous stage. we finally arrive at optimized force fields with smaller error measures than those published previously. hence, this optimization approach will contribute to converting force - field fitting from a specialist task to an everyday commodity, even for the more difficult case of reactive force fields.
|
arxiv:1909.06876
|
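A stripped-down, serial version of the genetic-algorithm loop described above (arxiv:1909.06876) is sketched here; the error function, the parameter ranges, and the GA settings are placeholders (the actual fits are MPI-parallel over the reference-data items and target ReaxFF SiOH parameters).

```python
import numpy as np

rng = np.random.default_rng(0)

def genetic_fit(error, ranges, pop_size=40, generations=200,
                elite_frac=0.2, mutation_rate=0.1):
    """Minimise error(params) with a simple real-coded GA.
    `ranges` has shape (n_params, 2) with [low, high] per parameter;
    as noted in the abstract, choosing these ranges well matters more
    than fine-tuning the GA itself."""
    low, high = ranges[:, 0], ranges[:, 1]
    pop = rng.uniform(low, high, size=(pop_size, len(low)))
    n_elite = max(1, int(elite_frac * pop_size))
    for _ in range(generations):
        fitness = np.array([error(p) for p in pop])
        elite = pop[np.argsort(fitness)[:n_elite]]     # smaller error is better
        children = []
        while len(children) < pop_size - n_elite:
            a, b = elite[rng.integers(n_elite, size=2)]
            mask = rng.random(len(low)) < 0.5          # uniform crossover
            child = np.where(mask, a, b)
            mutate = rng.random(len(low)) < mutation_rate
            child = np.where(mutate, rng.uniform(low, high), child)
            children.append(child)
        pop = np.vstack([elite, np.array(children)])
    fitness = np.array([error(p) for p in pop])
    return pop[np.argmin(fitness)]

# toy usage with a quadratic stand-in for the force-field error:
# best = genetic_fit(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
#                    ranges=np.array([[-5.0, 5.0], [-5.0, 5.0]]))
```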
we demonstrate that there is an intimate relationship between the magnetic properties of derrida ' s random energy model ( rem ) of spin glasses and the problem of joint source - - channel coding in information theory. in particular, typical patterns of erroneously decoded messages in the coding problem have " magnetization " properties that are analogous to those of the rem in certain phases, where the non - uniformity of the distribution of the source in the coding problem plays the role of an external magnetic field applied to the rem. we also relate the ensemble performance ( random coding exponents ) of joint source - - channel codes to the free energy of the rem in its different phases.
|
arxiv:0803.2789
|
in this work, using gaussian processes, we explore the potential of future gravitational wave ( gw ) measurements to probe cosmic opacity at high redshifts through comparing its opacity - free luminosity distance ( ld ) with the opacity - dependent one from the combination of type ia supernovae ( snia ) and gamma - ray bursts ( grbs ). the gw data, snia and grb data are simulated from the measurements of the future einstein telescope, the actual pantheon compilation and the latest observation of grbs compiled by l. amati et al., respectively. a nonparametric method is proposed to probe the spatial homogeneity of cosmic transparency at high redshift by comparing the ld reconstructed from the gw data with that reconstructed from the pantheon and grb data. in addition, the cosmic opacity is tested by using a parametrization for the optical depth, and the results show that the constraints on cosmic opacity are more stringent than previous ones. this shows that future gw measurements may be used as an important tool to probe the cosmic opacity in the high redshift region.
|
arxiv:2009.03041
|
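A minimal sketch of the gaussian-process reconstruction step described above (arxiv:2009.03041) -- turning a set of distance measurements into a smooth $d_L(z)$ curve with uncertainties -- is shown below using scikit-learn; the kernel choice and the toy data arrays are assumptions, not the paper's simulated Einstein Telescope catalogue.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# toy data: redshifts, luminosity distances (Gpc) and assumed errors
z = np.array([0.1, 0.3, 0.7, 1.2, 1.8, 2.5])
d_l = np.array([0.47, 1.5, 4.4, 8.6, 14.0, 21.0])
sigma = 0.05 * d_l

kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
gp = GaussianProcessRegressor(kernel=kernel, alpha=sigma ** 2,
                              normalize_y=True)
gp.fit(z.reshape(-1, 1), d_l)

z_grid = np.linspace(0.0, 3.0, 200).reshape(-1, 1)
d_l_rec, d_l_err = gp.predict(z_grid, return_std=True)
# d_l_rec and d_l_err can then be compared, redshift by redshift,
# with the opacity-dependent distances built from SNIa + GRB data
```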
inspired by the notion of quasi - infinite divisibility ( qid ), we introduce and study the class of freely quasi - infinitely divisible ( fqid ) distributions on $ \ mathbb { r } $, i. e. distributions which admit the free l \ ' { e } vy - khintchine - type representation with signed l \ ' { e } vy measure. we prove several properties of the fqid class, some of them in contrast to those of the qid class. for example, a fqid distribution may have negative gaussian part, and the total mass of its signed l \ ' { e } vy measure may be negative. finally, we extend the bercovici - pata bijection, providing a characteristic triplet, with the l \ ' { e } vy measure having nonzero negative part, which is at the same time classical and free characteristic triplet.
|
arxiv:2107.09473
|
the rapid growth of location - based services ( lbs ) has yielded massive amounts of data on human mobility. effectively extracting meaningful representations for user - generated check - in sequences is pivotal for facilitating various downstream services. however, the user - generated check - in data are simultaneously influenced by the surrounding objective circumstances and the user ' s subjective intention. specifically, the temporal uncertainty and spatial diversity exhibited in check - in data make it difficult to capture the macroscopic spatial - temporal patterns of users and to understand the semantics of user mobility activities. furthermore, the distinct characteristics of the temporal and spatial information in check - in sequences call for an effective fusion method to incorporate these two types of information. in this paper, we propose a novel spatial - temporal cross - view contrastive representation ( stccr ) framework for check - in sequence representation learning. specifically, stccr addresses the above challenges by employing self - supervision from " spatial topic " and " temporal intention " views, facilitating effective fusion of spatial and temporal information at the semantic level. besides, stccr leverages contrastive clustering to uncover users ' shared spatial topics from diverse mobility activities, while employing angular momentum contrast to mitigate the impact of temporal uncertainty and noise. we extensively evaluate stccr on three real - world datasets and demonstrate its superior performance across three downstream tasks.
|
arxiv:2407.15899
|
in this letter, we propose for the first time a method of abstracting the ppv ( perturbation projection vector ) characteristic of the up - to - date memristor - based oscillators. inspired from biological oscillators and its characteristic named prc ( phase response curve ), we build a bridge between prc and ppv. this relationship is verified rigorously using the transistor level simulation of colpitts and ring oscillators, i. e., comparing the ppv converted from prc and the ppv obtained from accurate pss + pxf simulation. then we apply this method to the ppv calculation of the memristor - based oscillator. by keeping the phase dynamics of the oscillator and dropping the details of voltage / current amplitude, the ppv modelling is highly efficient to describe the phase dynamics due to the oscillator coupling, and will be very suitable for the fast simulation of large scale oscillatory neural networks.
|
arxiv:1511.05437
|
the problem of an optimal mapping between hilbert spaces $ in $ of $ \ left | \ psi \ right \ rangle $ and $ out $ of $ \ left | \ phi \ right \ rangle $ based on a set of wavefunction measurements ( within a phase ) $ \ psi _ l \ to \ phi _ l $, $ l = 1 \ dots m $, is formulated as an optimization problem maximizing the total fidelity $ \ sum _ { l = 1 } ^ { m } \ omega ^ { ( l ) } \ left | \ langle \ phi _ l | \ mathcal { u } | \ psi _ l \ rangle \ right | ^ 2 $ subject to probability preservation constraints on $ \ mathcal { u } $ ( partial unitarity ). the constructed operator $ \ mathcal { u } $ can be considered as an $ in $ to $ out $ quantum channel ; it is a partially unitary rectangular matrix ( an isometry ) of dimension $ \ dim ( out ) \ times \ dim ( in ) $ transforming operators as $ a ^ { out } = \ mathcal { u } a ^ { in } \ mathcal { u } ^ { \ dagger } $. an iterative algorithm for finding the global maximum of this optimization problem is developed, and its application to a number of problems is demonstrated. a software product implementing the algorithm is available from the authors.
|
arxiv:2405.10263
|
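As a rough illustration of this kind of constrained optimization (and explicitly not the authors' algorithm, arxiv:2405.10263), one can iterate a simple fixed-point map: build the weighted gradient matrix implied by the current isometry and project it back onto the isometries via an SVD. Each step cannot decrease the total fidelity, but there is no claim of reaching the global maximum discussed in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_state(dim):
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def project_to_isometry(m):
    """Nearest partial unitary to m via its SVD: m = w s vh -> w vh."""
    w, _, vh = np.linalg.svd(m, full_matrices=False)
    return w @ vh

def fit_partial_unitary(psis, phis, weights, n_iter=500):
    """Heuristic fixed-point iteration for maximising
    sum_l w_l |<phi_l| U |psi_l>|^2 over isometries U of shape
    (dim_out, dim_in)."""
    dim_out, dim_in = len(phis[0]), len(psis[0])
    u = project_to_isometry(rng.normal(size=(dim_out, dim_in))
                            + 1j * rng.normal(size=(dim_out, dim_in)))
    for _ in range(n_iter):
        grad = np.zeros((dim_out, dim_in), dtype=complex)
        for w_l, psi, phi in zip(weights, psis, phis):
            overlap = np.vdot(phi, u @ psi)              # <phi|U|psi>
            grad += w_l * overlap * np.outer(phi, psi.conj())
        u = project_to_isometry(grad)
    return u

# toy usage: map 4-dimensional inputs onto 3-dimensional outputs
psis = [random_state(4) for _ in range(5)]
phis = [random_state(3) for _ in range(5)]
u_opt = fit_partial_unitary(psis, phis, weights=np.ones(5))
total_fidelity = sum(abs(np.vdot(phi, u_opt @ psi)) ** 2
                     for psi, phi in zip(psis, phis))
```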
a spheroidal anisotropic local momentum distribution is implemented in the statistical model of hadron production. we show that this form leads to exactly the same ratios of hadronic abundances as the equilibrium distributions, if the temperature is identified with a characteristic transverse - momentum scale. moreover, to a very good approximation the transverse - momentum spectra of hadrons are the same for isotropic and anisotropic systems, provided the size of the system at freeze - out is appropriately adjusted. we further show that this invariance may be used to improve the agreement between the model and experimental hbt results.
|
arxiv:1206.6587
|
the optical behaviour of the be star in the high mass x - ray transient a0535 + 26 / hde245770 shows that at periastron there is typically an enhancement in the luminosity of order 0. 02 to a few tenths of a mag, and the x - ray outburst happens about 8 days after periastron. we construct a quantitative model of this event, based on nonstationary accretion disk behavior connected with the high ellipticity of the orbital motion. the ephemeris used in this paper - - jd $ _ { \ rm opt - outb } $ = jd $ _ 0 $ ( 2, 444, 944 ) $ \ pm $ n ( 111. 0 $ \ pm $ 0. 4 ) days - - is derived from the orbital period of the system, p $ _ { \ rm orb } = 111. 0 \ pm 0. 4 $ days, determined by priedhorsky & terrell ( 1983 ), and from the optical flare of december 5, 1981 ( giovannelli et al., 1985 ) ( hereafter 811205 - e ; e stands for the event that occurred on that date ) that triggered the subsequent x - ray outburst of december 13, 1981 ( nagase et al., 1982 ) ( hereafter 811213 - e ). we explain the observed time delay between the peaks of the optical and x - ray outbursts in this system by the time of radial motion of the matter in the accretion disk, after an increase of the mass flux in the vicinity of the periastral point in the binary. this time is determined by the turbulent viscosity, with the parameter $ \ alpha = 0. 1 - 0. 3 $. the increase of the mass flux is a sort of flush that reaches the external part of the accretion disk around the neutron star, producing an enhancement in the optical luminosity. the consequent x - ray flare happens when the matter reaches the hot central parts of the accretion disk and the neutron star surface.
|
arxiv:1305.5149
|
the whole slide image ( wsi ) classification is often formulated as a multiple instance learning ( mil ) problem. since the positive tissue is only a small fraction of the gigapixel wsi, existing mil methods intuitively focus on identifying salient instances via attention mechanisms. however, this leads to a bias towards easy - to - classify instances while neglecting hard - to - classify instances. some literature has revealed that hard examples are beneficial for modeling a discriminative boundary accurately. by applying such an idea at the instance level, we elaborate a novel mil framework with masked hard instance mining ( mhim - mil ), which uses a siamese structure ( teacher - student ) with a consistency constraint to explore the potential hard instances. with several instance masking strategies based on attention scores, mhim - mil employs a momentum teacher to implicitly mine hard instances for training the student model, which can be any attention - based mil model. this counter - intuitive strategy essentially enables the student to learn a better discriminating boundary. moreover, the student is used to update the teacher with an exponential moving average ( ema ), which in turn identifies new hard instances for subsequent training iterations and stabilizes the optimization. experimental results on the camelyon - 16 and tcga lung cancer datasets demonstrate that mhim - mil outperforms other latest methods in terms of performance and training cost. the code is available at : https : / / github. com / dearcaat / mhim - mil.
|
arxiv:2307.15254
|
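Two ingredients of the framework above (arxiv:2307.15254) -- masking the highest-attention (easiest) instances so the student must learn from harder ones, and updating the teacher as an exponential moving average of the student -- can be sketched as follows; the masking ratio, the attention model, and the training loop are placeholders.

```python
import torch

@torch.no_grad()
def mask_salient_instances(teacher_scores: torch.Tensor,
                           mask_ratio: float = 0.1) -> torch.Tensor:
    """Given per-instance attention scores of shape (n_instances,),
    return a boolean mask dropping the top mask_ratio fraction of
    instances, i.e. the easiest-to-classify ones."""
    n = teacher_scores.numel()
    k = max(1, int(mask_ratio * n))
    top_idx = torch.topk(teacher_scores, k).indices
    keep = torch.ones(n, dtype=torch.bool, device=teacher_scores.device)
    keep[top_idx] = False
    return keep

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module,
               momentum: float = 0.999) -> None:
    """Momentum (exponential moving average) update of the teacher."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)

# per-bag step (sketch): score instances with the teacher, drop the most
# salient ones with mask_salient_instances, train the student on the
# remaining instances, then call ema_update(teacher, student).
```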
predictive maintenance systems have the potential to significantly reduce the costs of maintaining aircraft fleets as well as provide improved safety by detecting maintenance issues before they become severe. however, the development of such systems has been limited due to a lack of publicly labeled multivariate time series ( mts ) sensor data. mts classification has advanced greatly over the past decade, but there is a lack of sufficiently challenging benchmarks for new methods. this work introduces the ngafid maintenance classification ( ngafid - mc ) dataset as a novel benchmark in terms of difficulty, number of samples, and sequence length. ngafid - mc consists of over 7, 500 labeled flights, representing over 11, 500 hours of per - second flight data recorder readings of 23 sensor parameters. using this benchmark, we demonstrate that recurrent neural network ( rnn ) methods are not well suited for capturing temporally distant relationships and propose a new architecture called convolutional multiheaded self attention ( conv - mhsa ) that achieves greater classification performance at greater computational efficiency. we also demonstrate that the image - inspired augmentations cutout, mixup, and cutmix can be used to reduce overfitting and improve generalization in mts classification. our best trained models have been incorporated back into the ngafid to allow users to potentially detect flights that require maintenance as well as provide feedback to further expand and refine the ngafid - mc dataset.
|
arxiv:2110.03757
|
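Of the image-inspired augmentations listed above (arxiv:2110.03757), mixup and cutout transfer directly to multivariate time series: mixup blends two flights sample-wise together with their labels, and cutout zeroes a random time window. The sketch below assumes a (batch, time, channels) layout and one-hot labels; the hyperparameters are illustrative.

```python
import numpy as np

def mixup_batch(x: np.ndarray, y: np.ndarray, alpha: float = 0.2, rng=None):
    """x: (batch, time, channels) time series, y: (batch, n_classes) one-hot.
    Blend each sample with a randomly paired sample, as in image mixup."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    return lam * x + (1.0 - lam) * x[perm], lam * y + (1.0 - lam) * y[perm]

def cutout_batch(x: np.ndarray, max_len: int = 128, rng=None):
    """Zero out a random contiguous time window in each sample (cutout)."""
    rng = rng or np.random.default_rng()
    x = x.copy()
    t = x.shape[1]
    for i in range(len(x)):
        w = int(rng.integers(1, max_len + 1))
        start = int(rng.integers(0, max(1, t - w + 1)))
        x[i, start:start + w, :] = 0.0
    return x
```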
let $ g $ be a finite group and $ k $ a field of prime characteristic $ p $. we give a complete classification of endotrivial complexes, i. e. determine the picard group $ \ mathcal { e } _ k ( g ) $ of the tensor - triangulated category $ k ^ b ( { } _ { kg } \ mathbf { triv } ) $ recently studied by balmer and gallauer. for $ p $ - groups, we identify $ \ mathcal { e } _ k ( - ) $ with the rational $ p $ - biset functor $ cf _ b ( - ) $ of borel - smith functions, and recover a short exact sequence of rational $ p $ - biset functors constructed by bouc and yal \ c { c } in. as a consequence, we prove that every $ p $ - permutation autoequivalence of a $ p $ - group arises from a splendid rickard autoequivalence. additionally, we give a positive answer to a question of gelvin and yal \ c { c } in, showing the kernel of the bouc homomorphism for an arbitrary finite group $ g $ is described by superclass functions $ f : s _ p ( g ) \ to \ mathbb { z } $ satisfying the oriented artin - borel - smith conditions.
|
arxiv:2403.04088
|