text | source
---|---
Sports betting's recent federal legalisation in the USA coincides with the golden age of machine learning. If bettors can leverage data to reliably predict the probability of an outcome, they can recognise when the bookmaker's odds are in their favour. As sports betting is a multi-billion dollar industry in the USA alone, identifying such opportunities could be extremely lucrative. Many researchers have applied machine learning to the sports outcome prediction problem, generally using accuracy to evaluate the performance of predictive models. We hypothesise that for the sports betting problem, model calibration is more important than accuracy. To test this hypothesis, we train models on NBA data over several seasons and run betting experiments on a single season, using published odds. We show that using calibration, rather than accuracy, as the basis for model selection leads to greater returns, on average (return on investment of $+34.69\%$ versus $-35.17\%$) and in the best case ($+36.93\%$ versus $+5.56\%$). These findings suggest that for sports betting (or any probabilistic decision-making problem), calibration is a more important metric than accuracy. Sports bettors who wish to increase profits should therefore select their predictive model based on calibration, rather than accuracy.
|
arxiv:2303.06021
|
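The abstract above contrasts accuracy with calibration as model-selection criteria and ties the choice to betting returns. A minimal sketch of the three quantities involved, assuming flat one-unit stakes and decimal odds (the helper names `brier_score`, `accuracy`, and `flat_stake_roi` are illustrative, not the paper's code):

```python
import numpy as np

def brier_score(p, y):
    """Calibration-oriented metric: mean squared error between predicted
    probabilities and binary outcomes (lower is better)."""
    p, y = np.asarray(p, float), np.asarray(y, float)
    return float(np.mean((p - y) ** 2))

def accuracy(p, y):
    """Threshold predictions at 0.5 and count correct classifications."""
    p, y = np.asarray(p, float), np.asarray(y, float)
    return float(np.mean((p > 0.5) == (y == 1)))

def flat_stake_roi(p, y, odds):
    """Bet one unit whenever the model's probability implies positive
    expected value against the bookmaker's decimal odds (p * odds > 1),
    then report return on investment over the bets placed."""
    p, y, odds = (np.asarray(a, float) for a in (p, y, odds))
    bets = p * odds > 1.0
    if not bets.any():
        return 0.0
    payout = np.where(y[bets] == 1, odds[bets], 0.0)  # winning bets return the odds
    return float((payout.sum() - bets.sum()) / bets.sum())
```

The betting rule only uses the predicted probability through the expected-value comparison `p * odds > 1`, which is why a well-calibrated `p` matters more than a high classification accuracy.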
We characterize the kinematic and magnetic properties of HI filaments located in a high Galactic latitude region ($165^\circ < \alpha < 195^\circ$ and $12^\circ < \delta < 24^\circ$). We extract three-dimensional filamentary structures using \texttt{FIL3D} from the Galactic Arecibo L-band Feed Array HI (GALFA-HI) survey 21-cm emission data. Our algorithm identifies coherent emission structures in neighboring velocity channels. Based on the mean velocity, we identify a population of local and intermediate velocity cloud (IVC) filaments. We find the orientations of the local (but not the IVC) HI filaments are aligned with the magnetic field orientations inferred from Planck 353 GHz polarized dust emission. We analyze position-velocity diagrams of the velocity-coherent filaments, and find that only 15 percent of filaments demonstrate significant major-axis velocity gradients, with a median magnitude of 0.5 km s$^{-1}$ pc$^{-1}$, assuming a fiducial filament distance of 100 pc. We conclude that the typical diffuse HI filament does not exhibit a simple velocity gradient. The reported filament properties constrain future theoretical models of filament formation.
|
arxiv:2309.10777
|
both galaxies and SMBHs.
|
arxiv:2409.16347
|
Computing with words (CWW) has emerged as a powerful tool for processing linguistic information, especially that generated by human beings. Various CWW approaches have emerged since its inception, such as perceptual computing, the extension principle based CWW approach, the symbolic method based CWW approach, and the 2-tuple based CWW approach. Furthermore, perceptual computing can use the interval approach (IA), the enhanced interval approach (EIA), or the Hao-Mendel approach (HMA) for data processing. There have been numerous works in which HMA was shown to be better at word modelling than EIA, and EIA better than IA. However, a deeper study of these works reveals that HMA captures less fuzziness than EIA or IA. Thus, we feel that EIA is more suited for word modelling in multi-person systems and HMA for single-person systems (as EIA is an improvement over IA). Furthermore, another set of works compared the performance of perceptual computing with the other CWW approaches mentioned above. In all these works, perceptual computing was shown to be better than the other CWW approaches. However, none of these works tried to investigate the reason behind the observed better performance of perceptual computing. Also, no comparison has been performed for scenarios where the inputs are differentially weighted. Thus, the aim of this work is to empirically establish that EIA is suitable for multi-person systems and HMA for single-person systems. Another dimension of this work is to empirically show that perceptual computing performs better than the other CWW approaches based on the extension principle, the symbolic method, and 2-tuples, especially in scenarios where the inputs are differentially weighted.
|
arxiv:2004.14892
|
The authors construct the global Macaulay inverse system for a zero-dimensional subscheme $Z$ of projective $n$-space $P^n$ from the local inverse systems of the irreducible components of $Z$. They show that when $Z$ is locally Gorenstein, a generic homogeneous form $F$ of degree $d$ apolar to $Z$ determines $Z$ when $d$ is larger than an invariant $b(Z)$. They also show that a natural upper bound for the Hilbert function of a Gorenstein Artin quotient of the coordinate ring is achieved for large socle degree. They show the uniqueness of generalized additive decompositions of a homogeneous form $F$ into powers of linear forms, under suitable hypotheses. They include many examples.
|
arxiv:1107.0094
|
This paper studies active learning (AL) on graphs, whose purpose is to discover the most informative nodes to maximize the performance of graph neural networks (GNNs). Previously, most graph AL methods focused on learning node representations from a carefully selected labeled dataset, neglecting the large amount of unlabeled data. Motivated by the success of contrastive learning (CL), we propose a novel paradigm that seamlessly integrates graph AL with CL. While being able to leverage the power of abundant unlabeled data in a self-supervised manner, nodes selected by AL further provide semantic information that can better guide representation learning. Besides, previous work measures the informativeness of nodes without considering the neighborhood propagation scheme of GNNs, so that noisy nodes may be selected. We argue that, due to the smoothing nature of GNNs, the central nodes of homophilous subgraphs should benefit model training the most. To this end, we present a minimax selection scheme that explicitly harnesses neighborhood information and discovers homophilous subgraphs to facilitate active selection. Comprehensive, confounding-free experiments on five public datasets demonstrate the superiority of our method over the state of the art.
|
arxiv:2010.16091
|
We analyze reputation dynamics in an online market for illicit drugs using a novel dataset of prices and ratings. The market is a black market, so contracts cannot be enforced. We study the role that reputation plays in alleviating adverse selection in this market. We document the following stylized facts: (i) there is a positive relationship between the price and the rating of a seller, and this effect is increasing in the number of reviews left for a seller; a mature highly-rated seller charges a 20% higher price than a mature low-rated seller. (ii) Sellers with more reviews charge higher prices regardless of rating. (iii) Low-rated sellers are more likely to exit the market and make fewer sales. We show that these stylized facts are explained by a dynamic model of adverse selection, ratings, and exit, in which buyers form rational inferences about the quality of a seller jointly from his rating and number of sales. Sellers who receive low ratings initially charge the same price as highly-rated sellers, since early reviews are less informative about quality. Bad sellers exit rather than face lower prices in the future. We provide conditions under which our model admits a unique equilibrium. We estimate the model and use the results to compute the returns to reputation in the market. We find that the market would have collapsed due to adverse selection in the absence of a rating system.
|
arxiv:1703.01937
|
We present stellar parameters and abundances of 11 elements (Li, Na, Mg, Al, Si, Ca, Ti, Cr, Fe, Ni, and Zn) for 13 F6-K2 main-sequence stars in the young groups AB Doradus, Carina Near, and Ursa Major. The exoplanet-host star $\iota$ Horologii is also analysed. The three young associations have lithium abundances consistent with their age. All other elements show solar abundances. The three groups are characterised by a small scatter in all abundances, with mean [Fe/H] values of 0.10 ($\sigma = 0.03$), 0.08 ($\sigma = 0.05$), and 0.01 ($\sigma = 0.03$) dex for AB Doradus, Carina Near, and Ursa Major, respectively. The distribution of elemental abundances appears congruent with the chemical pattern of the Galactic thin disc in the solar vicinity, as found for other young groups. This means that the metallicity distribution of nearby young stars, targets of direct-imaging planet-search surveys, is different from that of old, field solar-type stars, i.e. the typical targets of radial velocity surveys. The young planet-host star $\iota$ Horologii shows a lithium abundance lower than that found for the young association members. It is found to have a slightly super-solar iron abundance ([Fe/H] = 0.16 $\pm$ 0.09), while all [X/Fe] ratios are similar to the solar values. Its elemental abundances are close to those of the Hyades cluster derived from the literature, which seems to reinforce the idea of a possible common origin with the primordial cluster.
|
arxiv:1209.2591
|
Recently, a Lagrangian description of superfluids has attracted some interest from the fluid/gravity-correspondence viewpoint. In this respect, the work of Dubovsky et al. has proposed a new field-theoretical description of fluids, which has several interesting aspects. Separately, we provided in arXiv:1304.2206 a supersymmetric extension of the original works. In the analysis of the Lagrangian structures, a new invariant appeared which, although related to known invariants, provides, in our opinion, a better parametrisation of the fluid dynamics for describing the fluid/superfluid phases.
|
arxiv:1304.6915
|
We develop an integral version of Deligne cohomology for smooth proper real varieties. For this purpose, the role played by singular cohomology in the complex case has to be replaced by ordinary bigraded $G$-equivariant cohomology, where $G = \mathrm{Gal}(\mathbb{C}/\mathbb{R})$. This is the $G$-equivariant counterpart of singular cohomology. We establish the basic properties of the theory and give a geometric interpretation for the groups in dimension 2 in weights 1 and 2.
|
arxiv:0810.2058
|
New optical narrowband imaging observations of the fields of several ULXs are presented. Known supershell nebulae are associated with a number of these ULXs, which we detect in emission-line filters such as [S II], He II, [O II] and [O III]. New nebulae are discovered, which are candidate ULX-powered supershells. The morphologies and emission-line fluxes of these nebulae can then be used to infer the properties of the emitting gas, which gives clues to the energizing source (photoionization and/or shock excitation, both possibly from the ULX). Studies of supershells powered by ULXs can help constrain the nature of ULXs themselves, such as the isotropy of the X-ray emission and the strength of their outflows.
|
arxiv:1011.0598
|
Cosmological gamma-ray bursts are very likely powerful sources of high energy neutrinos and gravitational waves. The aim of this paper is to review and update the current predictions about the intensity of emission in these two forms to be expected from GRBs. In particular, a revised calculation of the neutrino emission by photohadronic interaction at the internal shock is obtained by numerical integration, including both the resonant and the hadronization channels. The detectability of gravitational waves from individual bursts could be difficult for presently planned detectors if the GRBs are beamed, but it is possible, as we proposed in a paper two years ago, that the incoherent superimposition of small-amplitude pulse trains of GWs impinging on the detector could be detected as an excess of noise in the full Virgo detector, integrating over a time of the order of one year.
|
arxiv:astro-ph/0306118
|
This paper studies the problem of distributed weighted least-squares (WLS) estimation for an interconnected linear measurement network with additive noise. Two types of measurements are considered: self measurements for individual nodes, and edge measurements for the connecting nodes. Each node in the network carries out distributed estimation by using its own measurement and information transmitted from its neighbours. We study two distributed estimation algorithms: a recently proposed distributed WLS algorithm and the so-called Gaussian belief propagation (BP) algorithm. We first establish the equivalence of the two algorithms. We then prove a key result which shows that the information matrix is always generalised diagonally dominant, under a very mild condition. Using these two results and some known convergence properties of the Gaussian BP algorithm, we show that the aforementioned distributed WLS algorithm gives the globally optimal WLS estimate asymptotically. A bound on its convergence rate is also presented.
|
arxiv:2002.11221
|
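The convergence argument in the abstract above hinges on the information matrix being (generalised) diagonally dominant. As a simplified stand-in for Gaussian BP (this is the classical Jacobi iteration, not the paper's algorithm), the same structural property guarantees convergence of a purely local update where each node refreshes its own estimate from its neighbours' current values:

```python
import numpy as np

def jacobi_solve(A, b, iters=500):
    """Jacobi iteration for A x = b. Each coordinate is updated using only
    the current values of the others (a message-passing-style scheme);
    convergence is guaranteed when A is strictly diagonally dominant."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    x = np.zeros_like(b)
    D = np.diag(A)          # diagonal entries (self terms)
    R = A - np.diag(D)      # off-diagonal entries (neighbour terms)
    for _ in range(iters):
        x = (b - R @ x) / D
    return x
```

For a diagonally dominant information matrix the iterates approach the exact WLS solution, mirroring the asymptotic optimality claimed for the distributed algorithm.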
We characterise the pairs of graphs $\{X, Y\}$ such that all $\{X, Y\}$-free graphs (distinct from $C_5$) are perfect. Similarly, we characterise pairs $\{X, Y\}$ such that all $\{X, Y\}$-free graphs (distinct from $C_5$) are $\omega$-colourable (that is, their chromatic number equals their clique number). More generally, we give characterisations of pairs $\{X, Y\}$ for perfectness and $\omega$-colourability of all connected $\{X, Y\}$-free graphs which are of independence number at least $3$, distinct from an odd cycle, and of order at least $n_0$, and similar characterisations subject to each subset of these additional constraints. (The classes are non-hereditary, and the characterisations for perfectness and $\omega$-colourability are different.) We build on recent results of Brause et al. on $\{K_{1,3}, Y\}$-free graphs, and we use Ramsey's theorem and the strong perfect graph theorem as main tools. We relate the present characterisations to known results on forbidden pairs for $\chi$-boundedness and deciding $k$-colourability in polynomial time.
|
arxiv:2108.07071
|
The $pp \to p n \pi^+$ reaction is the channel with the largest total cross section for $pp$ collisions in the COSY/CSR energy region. In this work, we investigate individual contributions from various $N^*$ and $\Delta^*$ resonances with masses up to about 2 GeV for the $pp \to p n \pi^+$ reaction. We extend a resonance model, which reproduces the observed total cross section quite well, to give theoretical predictions of various differential cross sections for the present reaction at $T_p = 2.88$ GeV. These could serve as a reference for identifying new physics in future experiments at HIRFL-CSR.
|
arxiv:0902.1818
|
In today's private clouds, datacenter resources are shared by multiple tenants. Unlike storage and computing resources, bandwidth is challenging to allocate among tenants in private datacenter networks. State-of-the-art approaches are not effective or practical enough to meet tenants' bandwidth requirements. In this paper, we propose ProNet, a practical end-host-based solution for bandwidth sharing among tenants to meet their various demands. The key idea of ProNet is the byte-counter, a mechanism that collects the bandwidth usage of tenants on end-hosts to guide the adjustment of the network-wide allocation, without putting much pressure on switches. We evaluate ProNet both on our testbed and in large-scale simulations. Results show that ProNet can support multiple allocation policies such as network proportionality and minimum bandwidth guarantees. Accordingly, application-level performance is improved.
|
arxiv:2305.02560
|
This article is devoted to the well-posedness of the stochastic compressible Navier-Stokes equations. We establish the global existence of an appropriate class of weak solutions emanating from large initial data, set within a bounded domain. The stochastic forcing is of multiplicative type, white in time and colored in space. Energy methods are used to merge techniques of P. L. Lions for the deterministic compressible system with the theory of martingale solutions to the incompressible stochastic system. Namely, we develop stochastic analogues of the weak compactness program of Lions, and use them to implement a martingale method. The existence proof involves four layers of approximating schemes. We combine the three-layer scheme of Feireisl, Novotný, and Petzeltová for the deterministic compressible system with a time-splitting method used by Berthelin and Vovelle for the one-dimensional stochastic compressible Euler equations.
|
arxiv:1504.00951
|
We propose some slight additions to O-O languages to implement the features necessary for deductive object programming (DOP). This way of programming, based upon the manipulation of the production tree of the objects of interest, makes these objects persistent and sensibly lowers code complexity.
|
arxiv:cs/0601035
|
Current chatbot development platforms and frameworks facilitate setting up the language and dialog part of chatbots, while connecting them to backend services and business functions requires substantial manual coding effort and programming skills. This paper proposes an approach to overcome this situation. It proposes an architecture with a chatbot as frontend, using an IoT (Internet of Things) platform as middleware for connections to backend services. Specifically, it elaborates and demonstrates how to combine a chatbot developed on the open-source development platform Rasa with the open-source platform Node-RED, allowing low-code or no-code development of a transactional conversational user interface from frontend to backend.
|
arxiv:2410.00006
|
We study the collective vibrational excitations of crystals under out-of-equilibrium steady conditions that give rise to entropy production. Their excitation spectrum comprises equilibrium-like phonons of thermal origin and additional collective excitations called entropons, because each of them represents a mode of spectral entropy production. Entropons coexist with phonons and dominate over them when the system is far from equilibrium, while they are negligible in near-equilibrium regimes. The concept of entropons was recently introduced and verified in the special case of crystals formed by self-propelled particles. Here, we show that entropons exist in a broader class of active crystals that are intrinsically out of equilibrium and characterized by a lack of detailed balance. After a general derivation, several explicit examples are discussed, including crystals consisting of particles with alignment interactions and frictional contact forces.
|
arxiv:2307.13306
|
For a prime number $p$, an integer $e \geq 2$, and a field $F$ containing a primitive $p^e$-th root of unity, the index of central simple $F$-algebras of exponent $p^e$ is bounded in terms of the $p$-symbol length of $F$. For a nonreal field $F$ of characteristic different from $2$, the index of central simple algebras of exponent $4$ is bounded in terms of the $u$-invariant of $F$. Finally, a new construction for nonreal fields of $u$-invariant $6$ is presented.
|
arxiv:2301.03378
|
When executing a deep neural network (DNN), its model parameters are loaded into GPU memory before execution, incurring a significant GPU memory burden. There are studies that reduce GPU memory usage by exploiting CPU memory as a swap device. However, this approach is not applicable in most embedded systems with integrated GPUs, where CPU and GPU share a common memory. In this regard, we present Demand Layering, which employs a fast solid-state drive (SSD) as a co-running partner of a GPU and exploits the layer-by-layer execution of DNNs. In our approach, a DNN is loaded and executed in a layer-by-layer manner, minimizing the memory usage to the order of a single layer. Also, we developed a pipeline architecture that hides most additional delays caused by the interleaved parameter loadings alongside layer executions. Our implementation shows a 96.5% memory reduction with just 14.8% delay overhead on average for representative DNNs. Furthermore, by exploiting the memory-delay tradeoff, near-zero delay overhead (under 1 ms) can be achieved with a slightly increased memory usage (still an 88.4% reduction), showing the great potential of Demand Layering.
|
arxiv:2210.04024
|
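The Demand Layering abstract above describes loading a DNN layer by layer while a pipeline hides the load latency behind execution. A minimal sketch of that producer-consumer structure, assuming a loader thread standing in for the SSD and a bounded queue acting as the double buffer (the function names are illustrative, not the paper's implementation):

```python
import threading
from queue import Queue

def run_pipelined(load_layer, apply_layer, n_layers, x):
    """Execute a DNN layer by layer while prefetching the next layer's
    parameters in a background thread, so at most ~one layer's parameters
    (plus one in flight) are resident at a time."""
    q = Queue(maxsize=1)  # double buffer: holds only the next layer

    def loader():
        for i in range(n_layers):
            q.put(load_layer(i))  # e.g. read layer i's weights from SSD

    threading.Thread(target=loader, daemon=True).start()
    for _ in range(n_layers):
        params = q.get()          # wait for the prefetched parameters
        x = apply_layer(params, x)  # run the layer while the next one loads
    return x
```

The `maxsize=1` bound is what caps memory at the order of a single layer: the loader blocks until the executing thread has consumed the previous layer's parameters.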
This paper investigates the joint localization, detection, and tracking of sound events using a convolutional recurrent neural network (CRNN). We use a CRNN previously proposed for the localization and detection of stationary sources, and show that the recurrent layers enable the spatial tracking of moving sources when trained with dynamic scenes. The tracking performance of the CRNN is compared with a stand-alone tracking method that combines a multi-source direction-of-arrival (DOA) estimator and a particle filter. Their respective performance is evaluated in various acoustic conditions such as anechoic and reverberant scenarios, stationary and moving sources at several angular velocities, and a varying number of overlapping sources. The results show that the CRNN manages to track multiple sources more consistently than the parametric method across acoustic scenarios, but at the cost of higher localization error.
|
arxiv:1904.12769
|
Double porosity models for liquid filtration in a naturally fractured reservoir are derived from homogenization theory. The governing equations on the microscopic level consist of the stationary Stokes system for an incompressible viscous fluid occupying a crack-pore space (liquid domain), and the stationary Lamé equations for an incompressible elastic solid skeleton, coupled with corresponding boundary conditions on the common boundary between the solid skeleton and the liquid domain. We suppose that the liquid domain is a union of two independent systems of cracks (fissures) and pores, and that the dimensionless size $\delta$ of pores depends on the dimensionless size $\varepsilon$ of cracks: $\delta = \varepsilon^{r}$ with $r > 1$. The rigorous justification is carried out for the homogenization procedure as the dimensionless size of the cracks tends to zero, while the solid body is geometrically periodic. As the result, we derive the well-known Biot-Terzaghi system of liquid filtration in poroelastic media, which consists of the usual Darcy law for the liquid in cracks, coupled with the anisotropic Lamé equations for the common displacements in the solid skeleton and the liquid in pores, and a continuity equation for the velocity of the mixture. The proofs are based on the method of reiterated homogenization suggested by G. Allaire and M. Briane. As a consequence of the main result, we derive the double porosity model for the filtration of an incompressible liquid in an absolutely rigid body.
|
arxiv:0903.0797
|
In this paper we prove the following results: 1) We show that any arithmetic quotient of a homogeneous space admits a natural real semi-algebraic structure for which its Hecke correspondences are semi-algebraic. A particularly important example is given by Hodge varieties, which parametrize pure polarized integral Hodge structures. 2) We prove that the period map associated to any pure polarized variation of integral Hodge structures $\mathbb{V}$ on a smooth complex quasi-projective variety $S$ is definable with respect to an o-minimal structure on the relevant Hodge variety induced by the above semi-algebraic structure. 3) As a corollary of 2) and of Peterzil-Starchenko's o-minimal Chow theorem, we recover that the Hodge locus of $(S, \mathbb{V})$ is a countable union of algebraic subvarieties of $S$, a result originally due to Cattani-Deligne-Kaplan. Our approach simplifies the proof of Cattani-Deligne-Kaplan, as it does not use the full power of the difficult multivariable $SL_2$-orbit theorem of Cattani-Kaplan-Schmid.
|
arxiv:1810.04801
|
This article investigates some properties of the Weibull cumulative exposure model on multiple-step step-stress accelerated life test data. Although the model includes a probabilistic idea of Miner's rule in order to express the effect of cumulative damage in fatigue, our result shows that this alone is not sufficient to express the degradation of specimens, and the shape parameter must be larger than 1. For a random variable obeying the model, its mean and standard deviation are investigated for various sets of parameter values. In addition, a way of checking the validity of the model is illustrated through an example of maximum likelihood estimation on an actual data set concerning time to breakdown of cross-linked polyethylene-insulated cables.
|
arxiv:1210.5918
|
Gas and dust in the Galactic Center are subjected to energetic processing by intense UV radiation fields, widespread shocks, enhanced rates of cosmic rays and X-rays, and strong magnetic fields. The giant molecular clouds in the Galactic Center present a rich chemistry in a wide variety of chemical compounds, some of which are prebiotic. We have conducted unbiased, ultrasensitive and broadband spectral surveys toward the G+0.693-0.027 molecular cloud located in the Galactic Center, which have yielded the discovery of new complex organic molecules proposed as precursors of the "building blocks" of life. I will review our current understanding of the chemistry in Galactic Center molecular clouds, and summarize the recent detections toward G+0.693-0.027 of key precursors of prebiotic chemistry. All this suggests that the ISM is an important source of prebiotic material that could have contributed to the process of the origin of life on Earth and elsewhere in the Universe.
|
arxiv:2501.01782
|
We present spectroscopic observations of supernova 1994D in NGC 4526, an S0$_3$ galaxy in the Virgo cluster 15 Mpc distant. The datasets consist of the interstellar Ca II and Na I lines towards the supernova at high spectral resolution (FWHM 6 km s$^{-1}$), H$\alpha$ and [N II] observations at lower resolution (FWHM 33 km s$^{-1}$) of the nucleus of NGC 4526 and the supernova, obtained with the WHT at La Palma, and 21 cm spectra obtained with the 100 m Effelsberg radiotelescope in the field of NGC 4526. The velocity of the gas in NGC 4526 determined from our H$\alpha$ spectra is +625 km s$^{-1}$ at the centre and +880 km s$^{-1}$ at the supernova position. Our value of the systemic velocity is higher than the previous value of +450 km s$^{-1}$. In our high resolution spectra we detect Ca II and Na I absorption at +714 km s$^{-1}$ which is produced in interstellar gas in NGC 4526. To our knowledge this is the first detection of interstellar absorption originating in a galaxy of early morphological type. We detect multi-component Ca II and Na I absorption lines in the range from +204 to +254 km s$^{-1}$ which originate in a complex of high velocity clouds (HVCs) located at a distance $\ll$ 1 Mpc, in the surroundings of the Milky Way. The gas appears to have near-solar abundances, low dust content, and a diluted UV radiation field. At $-29$ km s$^{-1}$, we find weak Ca II absorption and weak H I emission. This component may originate in gas infalling onto the Galactic disk. Finally, close to rest velocity, we find both warm and cold gas located beyond 65 pc, probably associated with high latitude gas at the border of Loop I. The total reddening of the supernova, estimated using the standard Milky Way gas-to-dust ratio, is $E(B-V) \simeq 0.05$.
|
arxiv:astro-ph/9410075
|
Despite the empirical success of the actor-critic algorithm, its theoretical understanding lags behind. In a broader context, actor-critic can be viewed as an online alternating update algorithm for bilevel optimization, whose convergence is known to be fragile. To understand the instability of actor-critic, we focus on its application to linear quadratic regulators, a simple yet fundamental setting of reinforcement learning. We establish a nonasymptotic convergence analysis of actor-critic in this setting. In particular, we prove that actor-critic finds a globally optimal pair of actor (policy) and critic (action-value function) at a linear rate of convergence. Our analysis may serve as a preliminary step towards a complete theoretical understanding of bilevel optimization with nonconvex subproblems, which is NP-hard in the worst case and is often solved using heuristics.
|
arxiv:1907.06246
|
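The abstract above studies policy optimization on linear quadratic regulators. A toy scalar LQR sketch of the actor's update (here with a finite-difference gradient estimate standing in for the critic's value estimates; all constants are illustrative, not the paper's setting):

```python
def lqr_cost(k, a=0.9, b=0.5, q=1.0, r=0.1, x0=1.0, T=200):
    """Finite-horizon rollout cost of the linear policy u = -k x for the
    deterministic scalar system x' = a x + b u with quadratic stage cost."""
    x, c = x0, 0.0
    for _ in range(T):
        u = -k * x
        c += q * x * x + r * u * u
        x = a * x + b * u
    return c

def policy_gradient_step(k, lr=0.01, eps=1e-4):
    """One actor update: descend the rollout cost. The two-point
    finite-difference estimate plays the role the critic plays in
    actor-critic, namely supplying gradient information about the cost."""
    g = (lqr_cost(k + eps) - lqr_cost(k - eps)) / (2 * eps)
    return k - lr * g
```

Because the scalar LQR cost is well-behaved over the stabilizing gains, repeated steps drive the closed-loop factor $a - bk$ into the stable region and the cost down, in line with the global convergence result the paper proves for the full actor-critic scheme.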
Advances in modern technology have enabled the simultaneous recording of neural spiking activity, which statistically can be represented by a multivariate point process. We characterise the second-order structure of this process via the spectral density matrix, a frequency-domain equivalent of the covariance matrix. In the context of neuronal analysis, statistics based on the spectral density matrix can be used to infer connectivity in the brain network between individual neurons. However, the high-dimensional nature of spike train data means that it is often difficult, or at times impossible, to compute these statistics. In this work, we discuss the importance of regularisation-based methods for spectral estimation, and propose novel methodology for use in the point process setting. We establish asymptotic properties for our proposed estimators and evaluate their performance on synthetic data simulated from multivariate Hawkes processes. Finally, we apply our methodology to neuroscience spike train data in order to illustrate its ability to infer brain connectivity.
|
arxiv:2403.12908
|
The description of stellar interiors remains a big challenge for the nuclear astrophysics community. The consolidated knowledge is restricted to density regions around the saturation of hadronic matter, $\rho_0 = 2.8 \times 10^{14}\,\mathrm{g\,cm^{-3}}$, regimes where our nuclear models are successfully applied. As one moves towards higher densities and extreme conditions, up to five to twenty times $\rho_0$, little can be said about the microphysics of such objects. Here, we employ a Markov chain Monte Carlo (MCMC) strategy to assess the variability of three-piecewise polytropic models for the neutron star equation of state. With a fixed description of the hadronic matter, we explore a variety of models for the high density regimes, leading to stellar masses up to $2.5\,M_\odot$. In addition, we also discuss the use of a Bayesian power regression model with heteroscedastic error. The set of EOS from the Laser Interferometer Gravitational-Wave Observatory (LIGO) was used as input and treated as a data set for the testing case.
|
arxiv:2205.01174
|
The inclusive $\Upsilon(1S, 2S, 3S)$ photoproduction at the future Circular Electron-Positron Collider (CEPC) is studied based on non-relativistic QCD (NRQCD). Including the contributions from both direct and resolved photons, we present different distributions for $\Upsilon(1S, 2S, 3S)$ production, and the results show that there will be a considerable number of events, which means that good measurements of $\Upsilon$ photoproduction could be performed to further study heavy quarkonium physics at an electron-positron collider, in addition to hadron colliders. This supplementary study is very important for clarifying the current situation of the heavy quarkonium production mechanism.
|
arxiv:2009.11996
|
Most TLS clients, such as modern web browsers, enforce coarse-grained TLS security configurations. They support legacy versions of the protocol that have known design weaknesses, and weak ciphersuites that provide fewer security guarantees (e.g. no forward secrecy), mainly to provide backward compatibility. This opens the door to downgrade attacks, as in the case of the POODLE attack [18], which exploits the client's silent fallback to downgrade the protocol version and exploit the legacy version's flaws. To achieve a better balance between security and backward compatibility, we propose a DNS-based mechanism that enables TLS servers to advertise their support for the latest version of the protocol and for strong ciphersuites (which provide forward secrecy and authenticated encryption simultaneously). This enables clients to use prior knowledge of the servers' TLS configurations to enforce a fine-grained TLS configuration policy. That is, the client enforces strict TLS configurations for connections going to the advertising servers, while enforcing default configurations for the rest of the connections. We implement and evaluate the proposed mechanism and show that it is feasible and incurs minimal overhead. Furthermore, we conduct a TLS scan of the top 10,000 most visited websites globally, and show that most of the websites can benefit from our mechanism.
|
arxiv:1809.05674
|
We present the first spectroscopic measurement of the spatial cross-correlation function between damped Lyman alpha systems (DLAs) and Lyman break galaxies (LBGs). We obtained deep u'BVRI images of nine QSO fields with 11 known z ~ 3 DLAs and spectroscopically confirmed 211 R < 25.5 photometrically selected z > 2 LBGs. We find strong evidence for an overdensity of LBGs near DLAs versus random, with results similar to those for LBGs near other LBGs. A maximum likelihood cross-correlation analysis found a best-fit correlation length of r_0 = 2.9^(+1.4)_(-1.5) h^(-1) Mpc using a fixed value of gamma = 1.6. The implications of the DLA-LBG clustering amplitude for the average dark matter halo mass of DLAs are discussed.
|
arxiv:astro-ph/0511509
|
Deriving the optimal safety stock quantity with which to maintain customer satisfaction is one of the most important topics in stock management. However, it is difficult to manage the stock of correlated marketable merchandise using an inventory control method developed under the assumption that the demands are uncorrelated. To address this, we propose a deterministic approach that uses a probability inequality to derive a reasonable safety stock for the case in which the correlation between various commodities is known. Moreover, over a given lead time, the relation between the appropriate safety stock and the allowable stockout rate is derived analytically, and the potential of our proposed procedure is validated by numerical experiments.
|
arxiv:1701.02245
|
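The abstract above does not specify which probability inequality is used; as an illustrative sketch under that caveat, Cantelli's one-sided inequality gives a distribution-free safety stock for correlated demands (function names and the two-item setup are hypothetical):

```python
import math


def pooled_sigma(sigmas, corr, lead_time):
    """Std dev of total demand over the lead time for correlated items.

    sigmas[i] is the per-period demand std dev of item i; corr[i][j] is
    the correlation between items i and j. Periods are assumed i.i.d.,
    so the variance scales linearly with the lead time.
    """
    var = 0.0
    for i, si in enumerate(sigmas):
        for j, sj in enumerate(sigmas):
            var += corr[i][j] * si * sj
    return math.sqrt(lead_time * var)


def safety_stock(sigmas, corr, lead_time, stockout_rate):
    """Distribution-free safety stock via Cantelli's inequality:
    P(D > mu + s) <= sigma^2 / (sigma^2 + s^2), so setting the bound to
    the allowable stockout rate gives
    s = sigma * sqrt((1 - rate) / rate).
    """
    sigma = pooled_sigma(sigmas, corr, lead_time)
    return sigma * math.sqrt((1.0 - stockout_rate) / stockout_rate)
```

Positively correlated demands pool into a larger aggregate variance than independent ones, so the required safety stock rises — which is the qualitative point of accounting for correlation at all.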
Some renormalization group approaches proposed during the last few years are close in spirit to the Nightingale phenomenological procedure. In essence, by exploiting the finite-size scaling hypothesis, the approximate critical behavior of the model on an infinite lattice is obtained through the exact computation of some thermal quantities of the model on finite clusters. In this work some of these methods are reviewed, namely the mean field renormalization group, the effective field renormalization group and the finite-size scaling renormalization group procedures. Although special emphasis is given to the mean field renormalization group (since it has, up to now, been much more widely applied and extended to study a wide variety of different systems), a discussion of their potentialities and interrelations with other methods is also presented.
|
arxiv:cond-mat/9910458
|
We report theoretical investigations on the role of the interfacial bonding mechanism, and its resulting structures, in quantum transport through molecular wires. Two bonding mechanisms for the Au-S bond in an Au(111)/1,4-benzenedithiol (BDT)/Au(111) junction were identified by ab initio calculation and confirmed by a recent experiment, and we showed that they critically control charge conduction. It was found that, for Au/BDT/Au junctions, the hydrogen atom, bound by a dative bond to the sulfur, is energetically non-dissociative after the interface formation. The calculated conductance and junction breakdown forces of H-non-dissociated Au/BDT/Au devices are consistent with the experimental values, while the H-dissociated devices, with the interface governed by typical covalent bonding, give conductances more than an order of magnitude larger. By examining the scattering states that traverse the junctions, we have revealed that the mechanical and electric properties of a junction are strongly correlated with the bonding configuration. This work clearly demonstrates that the interfacial details, rather than the previously believed many-body effects, are of vital importance for correctly predicting the equilibrium conductance of molecular junctions, and it shows that the interfacial contact must be carefully understood when investigating the quantum transport properties of molecular nanoelectronics.
|
arxiv:0907.4674
|
We experimentally demonstrate two-photon absorption (TPA) with broadband down-converted light (squeezed vacuum). Although incoherent and exhibiting the statistics of thermal noise, broadband down-converted light can induce TPA with the same sharp temporal behavior as femtosecond pulses, while exhibiting the high spectral resolution of the narrowband pump laser. Using pulse-shaping methods, we coherently control TPA in rubidium, demonstrating spectral and temporal resolutions that are 3-5 orders of magnitude below the actual bandwidth and temporal duration of the light itself. Such properties can be exploited in various applications such as spread-spectrum optical communications, tomography and nonlinear microscopy.
|
arxiv:quant-ph/0401088
|
Vesicles and many biological membranes are made of two monolayers of lipid molecules and form closed lipid bilayers. The dynamical behaviour of vesicles is very complex, and a variety of forms and shapes appear. Lipid bilayers can be considered as a surface fluid, and hence the governing equations for the evolution include the surface (Navier-)Stokes equations, which in particular take the membrane viscosity into account. The evolution is driven by forces stemming from the curvature elasticity of the membrane. In addition, the surface fluid equations are coupled to bulk (Navier-)Stokes equations. We introduce a parametric finite element method to solve this complex free boundary problem, and present the first three-dimensional numerical computations based on the full (Navier-)Stokes system for several different scenarios. For example, the effects of the membrane viscosity, spontaneous curvature and area difference elasticity (ADE) are studied. In particular, it turns out that, even in the case of no viscosity contrast between the bulk fluids, the tank-treading to tumbling transition can be obtained by increasing the membrane viscosity. Besides the classical tank-treading and tumbling motions, another mode (called the transition mode in this paper, but originally called the vacillating-breathing mode and subsequently also called the trembling, transition and swinging mode), separating these classical modes, appears and is studied by us numerically. We also study how features of equilibrium shapes in the ADE and spontaneous curvature models, like budding behaviour or starfish forms, behave in a shear flow.
|
arxiv:1504.05424
|
Online auctions play a central role in online advertising, and are one of the main reasons for the industry's scalability and growth. With great changes in how auctions are organized, such as the switch from second- to first-price auctions, advertisers and demand platforms are compelled to adapt to a new volatile environment. Bid shading is a known technique for preventing overpaying in auction systems that can help maintain the strategy equilibrium in first-price auctions, tackling one of their greatest drawbacks. In this study, we propose a machine learning approach to modeling optimal bid shading for non-censored online first-price ad auctions. We clearly motivate the approach and extensively evaluate it in both offline and online settings on a major demand side platform. The results demonstrate the superiority and robustness of the new approach compared to the existing approaches across a range of performance metrics.
|
arxiv:2009.01360
|
This report summarizes some of the activities of the HiggsTools Initial Training Network working group in the period 2015-2017. The main goal of this working group was to produce a document discussing various aspects of state-of-the-art Higgs physics at the Large Hadron Collider (LHC) in a pedagogic manner. The first part of the report is devoted to a description of phenomenological searches for new physics at the LHC. As the experimental measurements become more and more precise, there is a pressing need for a consistent framework in which deviations from the SM predictions can be computed precisely. We critically review the use of the $\kappa$-framework, fiducial and simplified template cross sections, effective field theories, pseudo-observables and phenomenological Lagrangians. In the second part of the report, we propose $\varphi_{\eta}^*$ as a new and complementary observable for studying Higgs boson production at large transverse momentum in the case where the Higgs boson decays to two photons. We make a detailed study of the phenomenology of the $\varphi_{\eta}^*$ variable, contrasting its behaviour with the Higgs transverse momentum distribution using a variety of theoretical tools, including event generators and fixed-order perturbative computations.
|
arxiv:1711.09875
|
In this paper, we study a plasmonic horn nanoantenna on a metal-backed substrate. The horn nanoantenna structure consists of a two-wire transmission line (TWTL) flared at the end. We analyze the effect of the substrate thickness on the nanoantenna's radiation pattern, and demonstrate beam steering over a broad range of elevation angles. Furthermore, we analyze the effect of the ground plane on the impedance matching between the antenna and the TWTL, and observe that the ground plane increases the back reflection into the waveguide. To reduce the reflection, we develop a transmission line model to design an impedance matching section, which leads to 99.75% power transmission to the nanoantenna.
|
arxiv:1608.08668
|
The proliferation of resourceful mobile devices that store rich, multidimensional and privacy-sensitive user data motivates the design of federated learning (FL), a machine-learning (ML) paradigm that enables mobile devices to produce an ML model without sharing their data. However, the majority of existing FL frameworks rely on centralized entities. In this work, we introduce IPLS, a fully decentralized federated learning framework that is partially based on the InterPlanetary File System (IPFS). By using IPLS and connecting to the corresponding private IPFS network, any party can initiate the training process of an ML model or join an ongoing training process that has already been started by another party. IPLS scales with the number of participants, is robust against intermittent connectivity and dynamic participant departures/arrivals, requires minimal resources, and guarantees that the accuracy of the trained model quickly converges to that of a centralized FL framework, with an accuracy drop of less than one per thousand.
|
arxiv:2101.01901
|
We investigated, both experimentally and theoretically, the reflection phase shift (RPS) of one-dimensional plasmon polaritons. We launched 1D plasmon polaritons in a carbon nanotube and probed the plasmon interference pattern using the scanning near-field optical microscopy (SNOM) technique, through which a non-zero phase shift was observed. We further developed a theory to understand the nonzero phase shift of 1D polaritons, and found that the RPS can be understood by considering the evanescent field beyond the nanotube end. Interestingly, our theory shows a strong dependence of the RPS on the polariton wavelength and nanotube diameter, which is in stark contrast to 2D plasmon polaritons in graphene, where the RPS is a constant. In the short-wavelength region, the RPS of 1D polaritons depends only on a dimensionless variable: the ratio between the polariton wavelength and the nanotube diameter. These results provide fundamental insights into the reflection of polaritons in 1D systems, and could facilitate the design of ultrasmall 1D polaritonic devices such as resonators and interferometers.
|
arxiv:1910.02767
|
We consider the problem of minimizing the sum of two convex functions. One of those functions has Lipschitz-continuous gradients, and can be accessed via stochastic oracles, whereas the other is "simple". We provide a Bregman-type algorithm with accelerated convergence in function values to a ball containing the minimum. The radius of this ball depends on problem-dependent constants, including the variance of the stochastic oracle. We further show that this algorithmic setup naturally leads to a variant of Frank-Wolfe achieving acceleration under parallelization. More precisely, when minimizing a smooth convex function on a bounded domain, we show that one can achieve an $\epsilon$ primal-dual gap (in expectation) in $\tilde{O}(1/\sqrt{\epsilon})$ iterations, by only accessing gradients of the original function and a linear maximization oracle with $O(1/\sqrt{\epsilon})$ computing units in parallel. We illustrate this fast convergence on synthetic numerical experiments.
|
arxiv:2205.12751
|
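For context on the row above, the sequential baseline being accelerated is the classical Frank-Wolfe iteration, which only needs gradients and a linear maximization oracle (LMO). A minimal sketch on the probability simplex follows (the quadratic objective and step rule 2/(k+2) are standard textbook choices, not the paper's parallel scheme):

```python
def frank_wolfe_simplex(grad, dim, steps):
    """Classical Frank-Wolfe on the probability simplex.

    The LMO over the simplex is trivial: the minimizer of <g, s> over
    the vertices is the coordinate with the smallest gradient entry.
    Uses the standard step size gamma_k = 2 / (k + 2).
    """
    x = [1.0] + [0.0] * (dim - 1)  # start at a vertex
    for k in range(steps):
        g = grad(x)
        i = min(range(dim), key=g.__getitem__)  # LMO call
        gamma = 2.0 / (k + 2.0)
        x = [(1.0 - gamma) * xj for xj in x]   # convex combination
        x[i] += gamma                          # stays in the simplex
    return x


# Minimize f(x) = ||x - c||^2 for a target c inside the simplex.
c = [0.2, 0.3, 0.5]
x = frank_wolfe_simplex(lambda x: [2.0 * (xi - ci) for xi, ci in zip(x, c)],
                        dim=3, steps=20000)
```

Each iterate is a convex combination of simplex vertices, so feasibility is maintained for free; the O(1/k) rate of this sequential version is what the parallel variant in the abstract improves upon.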
The BaBar collaboration has recently reported the observation of the decay mode $B^- \to D_s^+ K^- \pi^-$. We investigate the role played by the $D^{\star\star}$ resonances in this decay mode using HQET. Although these resonances cannot appear as physical intermediate states in this reaction, their mass is very close to the $D_s^+ K^-$ production threshold and they may, therefore, play a prominent role. We pursue this possibility to extract information on the properties of the strong $D^{\star\star} D M$ couplings. As a byproduct of this analysis, we point out that future super-$B$ factories may be able to measure the $D_0^0 D^\star \gamma$ radiative coupling through the reaction $B^- \to D^\star \gamma \pi^-$.
|
arxiv:hep-ph/0611085
|
Possible extensions of the Standard Model of elementary particle physics suggest the existence of particles with small, unquantized electric charge. Photon-initiated pair production of millicharged fermions in an external magnetic field would manifest itself as a vacuum magnetic dichroism. We show that laser polarization experiments searching for this effect yield, in the mass range below 0.1 eV, much stronger constraints on millicharged fermions than previously considered laboratory searches. Vacuum magnetic birefringence originating from virtual pair production gives a slightly better constraint for masses between 0.1 eV and a few eV. We comment on the possibility that the vacuum magnetic dichroism observed by PVLAS arises from pair production of such millicharged fermions rather than from single production of axion-like particles. Such a scenario can be confirmed or firmly excluded by a search for invisible decays of orthopositronium with a sensitivity of about $10^{-9}$ in the corresponding branching fraction.
|
arxiv:hep-ph/0607118
|
Since Hochster's work, spectral spaces have attracted increasing interest. In this note we show that the set of proper ideals of a ring, endowed with the coarse lower topology, is a spectral space.
|
arxiv:2203.10967
|
In this paper, we delve into the pedestrian behavior understanding problem from the perspective of three different tasks: intention estimation, action prediction, and event risk assessment. We first define the tasks and discuss how they are represented and annotated in two widely used pedestrian datasets, JAAD and PIE. We then propose a new benchmark based on these definitions, the available annotations, and three new classes of metrics, each designed to assess a different aspect of model performance. We apply the new evaluation approach to examine four SOTA prediction models on each task and compare their performance w.r.t. metrics and input modalities. In particular, we analyze the differences between the intention estimation and action prediction tasks by considering various scenarios and contextual factors. Lastly, we examine model agreement across these two tasks to show their complementary roles. The proposed benchmark reveals new facts about the role of different data modalities, the tasks, and relevant data properties. We conclude by elaborating on our findings and proposing future research directions.
|
arxiv:2407.00446
|
We generalize a theorem of Kapranov by showing that the Hall algebra of the category of coherent sheaves on a weighted projective line (over a finite field) provides a realization of the (quantized) enveloping algebra of a certain nilpotent subalgebra of the affinization of the corresponding Kac-Moody algebra. In particular, this yields a geometric realization of the quantized enveloping algebra of 2-toroidal (or elliptic) algebras of types $D_4$, $E_6$, $E_7$ or $E_8$ in terms of weighted projective lines of genus one.
|
arxiv:math/0205267
|
We discuss how renormalisation group equations can be consistently formulated using the algebraic renormalisation framework, in the context of a dimensionally-renormalised chiral field theory in the BMHV scheme, where the BRST symmetry, originally broken at the quantum level, is restored via finite counterterms. We compare it with the more standard multiplicative renormalisation approach, whose application would be more cumbersome in this setting. Both procedures are applied and compared on the example of a massless chiral right-handed QED model, and the beta function and anomalous dimensions are evaluated up to two-loop order.
|
arxiv:2208.09006
|
In this paper we prove tight bounds on the combinatorial and topological complexity of sets defined in terms of $n$ definable sets belonging to some fixed definable family of sets in an o-minimal structure. This generalizes the combinatorial parts of similar bounds known in the case of semi-algebraic and semi-Pfaffian sets, and as a result vastly increases the applicability of results on the combinatorial and topological complexity of arrangements studied in discrete and computational geometry. As a sample application, we extend a Ramsey-type theorem due to Alon et al., originally proved for semi-algebraic sets of fixed description complexity, to this more general setting.
|
arxiv:math/0612050
|
In the study of asymptotic geometry in Banach spaces, a basic sequence which gives rise to a spreading model has been called a good sequence. It is well known that every normalized basic sequence in a Banach space has a subsequence which is good. We investigate the assumption that every normalized block tree relative to a basis has a branch which is good. This combinatorial property turns out to be very strong and is equivalent to the space being $1$-asymptotic $\ell_p$ for some $1 \leq p \leq \infty$. We also investigate the even stronger assumption that every block basic sequence of a basis is good. Finally, using the Hindman-Milliken-Taylor theorem, we prove a stabilization theorem which produces a basic sequence all of whose normalized constant coefficient block basic sequences are good, and we present an application of this stabilization.
|
arxiv:1907.11863
|
Consider a closed analytic curve $\gamma$ in the complex plane and denote by $D_+$ and $D_-$ the interior and exterior domains with respect to the curve. The point $z = 0$ is assumed to be in $D_+$. Then, according to the Riemann mapping theorem, there exists a function $w(z) = \frac{1}{r} z + \sum_{j=0}^\infty p_j z^{-j}$ mapping $D_-$ to the exterior of the unit disk $\{w \in \mathbb{C} : |w| > 1\}$. It follows from [arXiv:hep-th/0005259] that this function is described by the formula $\log w = \log z - \partial_{t_0}\big(\frac{1}{2}\partial_{t_0} + \sum_{k \geqslant 1} \frac{z^{-k}}{k} \partial_{t_k}\big) V$, where $V = V(t_0, t_1, \bar t_1, t_2, \bar t_2, \dots)$ is a function of the area $t_0$ of $D_+$ and the moments $t_k$ of $D_-$. Moreover, this function satisfies the dispersionless Hirota equation for the 2D Toda lattice hierarchy. Thus, for an effectivisation of the Riemann theorem it suffices to find a representation of $V$ in the form of the Taylor series $V = \sum N(i_0|i_1,\dots,i_k|\bar i_1,\dots,\bar i_{\bar k})\, t_0^{i_0} t_{i_1} \cdots t_{i_k} \bar t_{\bar i_1} \cdots \bar t_{\bar i_{\bar k}}$. The numbers $N(i_0|i_1,\dots,i_k|\bar i_1,\dots,\bar i_{\bar k})$ for $i_\alpha, \bar i_\beta \leqslant 2$ are found in [arXiv
|
arxiv:math/0103136
|
Type Ia supernovae play a critical role in stellar feedback and elemental enrichment in galaxies. Recent transient surveys like the All-Sky Automated Survey for Supernovae (ASAS-SN) and the Dark Energy Survey (DES) find that the specific Ia rate at z ~ 0 may be ~15-50 times higher in lower-mass galaxies than at Milky Way mass. Independently, Milky Way observations show that the close-binary fraction of solar-type stars is higher at lower metallicity. Motivated by these observations, we use the FIRE-2 cosmological zoom-in simulations to explore the impact of varying Ia rate models, including metallicity dependence, on galaxies across a range of stellar masses: 10^7 Msun - 10^{11} Msun. First, we benchmark our simulated star-formation histories (SFHs) against observations. We show that assumed SFHs and stellar mass functions play a major role in determining the degree of tension between observations and metallicity-independent Ia rate models, and potentially cause ASAS-SN and DES observations to be much more consistent with each other than might naively appear. Models in which the Ia rate increases with decreasing metallicity (as ~ Z^{-0.5} to Z^{-1}) provide significantly better agreement with observations. Encouragingly, these increases in Ia rate (> 10 times in low-mass galaxies) do not significantly impact galaxy stellar masses and morphologies: effective radii, axis ratios, and v/sigma remain largely unaffected except for our most extreme rate models. We explore implications for both [Fe/H] and [alpha/Fe] enrichment: metallicity-dependent Ia rate models can improve agreement with observed stellar mass-metallicity relations in low-mass galaxies. Our results demonstrate that a wide range of metallicity-dependent Ia models are viable for galaxy formation and motivate future work in this area.
|
arxiv:2202.10477
|
Although the known maximum total generalized correntropy (MTGC) and generalized maximum Blake-Zisserman total correntropy (GMBZTC) algorithms can maintain good performance under the errors-in-variables (EIV) model disrupted by generalized Gaussian noise, their requirement for manual adjustment of parameters is excessive, greatly increasing the practical difficulty of use. To solve this problem, the total arctangent based on logical distance metric (TACLDM) algorithm is proposed by utilizing the advantage of having few parameters in logical distance metric (LDM) theory, and its convergence behavior is improved by the arctangent function. Compared with other competing algorithms, the TACLDM algorithm not only has fewer parameters, but also has better robustness to generalized Gaussian noise and significantly reduces the steady-state error. Furthermore, the behavior of the algorithm in a generalized Gaussian noise environment is analyzed in detail in this paper. Finally, computer simulations demonstrate the outstanding performance of the TACLDM algorithm and corroborate the rigorous theoretical derivation in this paper.
|
arxiv:2405.12589
|
In this study, we propose a clustering-based approach on time-series data to capture COVID-19 spread patterns in the early period of the pandemic. We analyze the spread dynamics in the early and post stages of COVID-19 for countries in different geographical locations. Furthermore, we investigate the confinement policies and the effect they had on the spread. We found that implementations of the same confinement policies exhibit different results in different countries. Specifically, lockdowns become less effective in densely populated regions because of reluctance to comply with social distancing measures. Lack of testing, contact tracing, and social awareness in some countries keeps people from self-isolating and maintaining social distance. Large labor camps with unhealthy living conditions also contribute to high community transmission in countries depending on foreign labor. Distrust in government policies and fake news instigate the spread in both developed and under-developed countries. Large social gatherings play a vital role in causing rapid outbreaks almost everywhere. While some countries were able to contain the spread by implementing strict and widely adopted confinement policies, some others contained it with the help of social distancing measures and rigorous testing capacity. An early and rapid response at the beginning of the pandemic is necessary to contain the spread, yet it is not always sufficient.
|
arxiv:2111.03020
|
We systematically revisit the description of D-branes on orbifolds and the classification of their charges via K-theory. We include enough details to make the results accessible to both physicists and mathematicians interested in these topics. The minimally charged branes predicted by K-theory in $Z_N$ orbifolds with $N$ odd are only BPS. We confirm this result using the boundary state formalism for $Z_3$. For $Z_N \times Z_N$ orbifolds with and without discrete torsion, we show that the K-theory classification of charges agrees with the boundary state approach, largely developed by Gaberdiel and collaborators, including the types of representation on the Chan-Paton factors.
|
arxiv:hep-th/0703122
|
A clear consensus on how long it takes a particle to tunnel through a potential barrier has never been so urgently required, since the electron dynamics in strong-field ionization can be resolved on the attosecond time scale in experiment and the exact nature of the tunneling process is the key to triggering subsequent attosecond techniques. Here a general picture of the tunneling time is suggested by introducing a quantum travel time, defined as the ratio of the travel distance to the expected value of the velocity operator under the barrier. In particular, when applied to rectangular barrier tunneling, it retrieves the B\"{u}ttiker-Landauer time $\tau_{BL}$ in the case of an opaque barrier, and has a clear physical meaning in the case of a very thin barrier, wherein $\tau_{BL}$ cannot be well defined. In the case of the strong-field tunneling process, with the help of the newly defined time, the tunneling delay time measured by attoclock experiments can be interpreted as the travel time spent by the electron tunneling from a point under the barrier to the tunnel exit. In addition, a peculiar oscillation structure in the wavelength dependence of the tunneling delay time in the deep tunneling regime is observed, which is beyond the scope of the adiabatic tunneling picture. This oscillation structure can be attributed to the interference between the ground-state tunneling channel and the excited-state tunneling channels.
|
arxiv:1903.06897
|
In this paper we probe the inert Higgs doublet model at the LHC using the vector boson fusion (VBF) search strategy. We optimize the selection cuts, investigate the parameter space of the model, and show that the VBF search has a better reach than the monojet searches. We also investigate Drell-Yan type cuts and show that they can be important for smaller charged Higgs masses. We determine the $3\sigma$ reach for the parameter space using these optimized cuts for a luminosity of 3000 fb$^{-1}$.
|
arxiv:1709.09796
|
A detailed study is presented of the expected performance of the ATLAS detector. The reconstruction of tracks, leptons, photons, missing energy and jets is investigated, together with the performance of b-tagging and the trigger. The physics potential for a variety of interesting physics processes, within the Standard Model and beyond, is examined. The study comprises a series of notes based on simulations of the detector and physics processes, with particular emphasis given to the data expected from the first years of operation of the LHC at CERN.
|
arxiv:0901.0512
|
We prove a new generalization of the higher-order Cheeger inequality for partitioning with buffers. Consider a graph $G = (V, E)$. The buffered expansion of a set $S \subseteq V$ with a buffer $B \subseteq V \setminus S$ is the edge expansion of $S$ after removing all the edges from the set $S$ to its buffer $B$. An $\varepsilon$-buffered $k$-partitioning is a partitioning of a graph into disjoint components $P_i$ and buffers $B_i$, in which the size of buffer $B_i$ for $P_i$ is small relative to the size of $P_i$: $|B_i| \le \varepsilon |P_i|$. The buffered expansion of a buffered partition is the maximum of the buffered expansions of the $k$ sets $P_i$ with buffers $B_i$. Let $h^{k,\varepsilon}_G$ be the buffered expansion of the optimal $\varepsilon$-buffered $k$-partitioning; then for every $\delta > 0$, $$h_G^{k,\varepsilon} \le O_\delta(1) \cdot \Big(\frac{\log k}{\varepsilon}\Big) \cdot \lambda_{\lfloor (1+\delta)k \rfloor},$$ where $\lambda_{\lfloor (1+\delta)k \rfloor}$ is the $\lfloor (1+\delta)k \rfloor$-th smallest eigenvalue of the normalized Laplacian of $G$. Our inequality is constructive and avoids the ``square-root loss'' that is present in the standard Cheeger inequalities (even for $k=2$). We also provide a complementary lower bound, and a novel generalization to the setting with arbitrary vertex weights and edge costs. Moreover, our result implies and generalizes the standard higher-order Cheeger inequalities and another recent Cheeger-type inequality by Kwok, Lau, and Lee (2017) involving robust vertex expansion.
|
arxiv:2308.10160
|
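The buffered-expansion quantity defined in the row above can be computed directly for a given set and buffer. The following sketch follows the verbal definition from the abstract; normalising the cut by |S| is one common convention for edge expansion and is an assumption here:

```python
def buffered_expansion(edges, S, B):
    """Edge expansion of S after removing edges from S to its buffer B.

    edges: iterable of undirected edges (u, v); S and B are disjoint
    vertex sets. Counts edges leaving S whose other endpoint lies
    outside the buffer, normalised by |S|.
    """
    S, B = set(S), set(B)
    cut = sum(1 for u, v in edges
              if ((u in S) != (v in S))      # edge crosses the boundary of S
              and not (u in B or v in B))    # ...and is not absorbed by B
    return cut / len(S)
```

On the path 0-1-2-3 with S = {0, 1}, the only boundary edge is (1, 2); taking vertex 2 as the buffer absorbs it, so the buffered expansion drops from 1/2 to 0, which is exactly the slack a buffer is meant to provide.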
BL Lac objects are known to have very energetic jets pointing towards the observer under small viewing angles. Many of these show high luminosity over the whole energy range up to TeV energies, and are mostly classified as high-energy peaked BL Lac objects. Recently, TeV gamma-ray emission was detected from a low-energy peaked BL Lac object. Interestingly, this source also has a clear detection of an X-ray jet. We present a detailed study of this X-ray jet and its connection to the radio jet, as well as a study of the underlying physical processes in the energetic jet, producing emission from the radio to the TeV range.
|
arxiv:1111.2775
|
Demand for low-latency and high-bandwidth data transfer between GPUs has driven the development of multi-GPU nodes. Physical constraints on the manufacture and integration of such systems have yielded heterogeneous intra-node interconnects, where not all devices are connected equally. The next generation of supercomputing platforms is expected to feature AMD CPUs and GPUs. This work characterizes the extent to which interconnect heterogeneity is visible through GPU programming APIs on a system with four AMD MI250X GPUs, and provides several insights for users of such systems.
|
arxiv:2302.14827
|
in language identification, a common first step in natural language processing, we want to automatically determine the language of some input text. monolingual language identification assumes that the given document is written in one language. in multilingual language identification, the document is usually in two or three languages and we only want their names. we aim one step further and propose a method for textual language identification where languages can change arbitrarily and the goal is to identify the spans of each of the languages. our method is based on bidirectional recurrent neural networks and it performs well in monolingual and multilingual language identification tasks on six datasets covering 131 languages. the method maintains its accuracy for short documents and across domains, so it is ideal for off - the - shelf use without preparation of training data.
|
arxiv:1701.03338
|
determinism, no signaling and measurement independence are some of the constraints required for framing a bell inequality. any model simulating nonlocal correlations must either individually or jointly give up these constraints. recently m. j. w. hall ( phys. rev. a, \ textbf { 84 }, 022102 ( 2011 ) ) derived different forms of bell inequalities under the assumption of individual or joint relaxation of those constraints on both sides ( i. e., both parties ) of a bipartite system. in this work we have investigated whether one - sided relaxation can also be a useful resource for simulating nonlocal correlations. we have derived bell - type inequalities under the assumption of joint relaxation of these constraints by only one party of a bipartite system. interestingly, we found that any amount of randomness in the correlations of one party, in the absence of signaling between the two parties, is incapable of showing any sort of bell - chsh violation, whereas signaling and measurement dependence can individually simulate any nonlocal correlations. we have also completed the proof of a recent conjecture due to hall ( phys. rev. a \ textbf { 82 }, 062117 ( 2010 ) ; phys. rev. a \ textbf { 84 }, 022102 ( 2011 ) ) for the one - sided relaxation scenario only.
|
arxiv:1304.7409
|
we present a new kind of generator of internal waves which has been designed for three purposes. first, the oscillating boundary conditions force the fluid particles to travel in the preferred direction of the wave ray, hence reducing the mixing due to forcing. secondly, only one ray tube is produced so that all of the energy is in the beam of interest. thirdly, temporal and spatial frequency studies emphasize the high quality for temporal and spatial monochromaticity of the emitted beam. the greatest strength of this technique is therefore the ability to produce a large monochromatic and unidirectional beam.
|
arxiv:physics/0609256
|
a minimal observable length is a common feature of theories that aim to merge quantum physics and gravity. quantum mechanically, this concept is associated with a nonzero minimal uncertainty in position measurements, which is encoded in deformed commutation relations. in spite of increasing theoretical interest, the subject suffers from the complete lack of dedicated experiments, and bounds on the deformation parameters are roughly extrapolated from indirect measurements. as recently proposed, low - energy mechanical oscillators could allow one to reveal the effect of a modified commutator. here we analyze the free evolution of high quality factor micro - and nano - oscillators, spanning a wide range of masses around the planck mass $ m _ { \ mathrm { p } } $ ( $ { \ approx 22 \, \ mu \ mathrm { g } } $ ), and compare it with a model of deformed dynamics. previous limits on the parameters quantifying the commutator deformation are substantially lowered.
|
arxiv:1411.6410
|
we study the dynamic surface response of neolithic stone settlements obtained with seismic ambient noise techniques near the city of carnac in french brittany. surprisingly, we find that menhirs ( neolithic human size standing alone granite stones ) with an aspect ratio between 1 and 2 periodically arranged atop a thin layer of sandy soil laid on a granite bedrock, exhibit fundamental resonances in the range of 10 to 25 hz. we propose an analogic kelvin - voigt viscoelastic model that explains the origin of such low frequency resonances. we further explore low frequency filtering effect with full wave finite element simulations. our numerical results confirm the bending nature of fundamental resonances of the menhirs and further suggest additional resonances of rotational and longitudinal nature in the frequency range 25 to 50 hz. our study thus paves the way for large scale seismic metasurfaces consisting of granite stones periodically arranged atop a thin layer of regolith over a bedrock, for ground vibration mitigation in earthquake engineering.
|
arxiv:2206.10825
|
while x - ray measurements have so far revealed an increase in the volume - averaged baryon fractions $ f _ b ( r ) $ of galaxy clusters with cluster radii $ r $, $ f _ b ( r ) $ should asymptotically reach a universal value $ f _ b ( \ infty ) = f _ b $, provided that clusters are representative of the universe. in the framework of hydrostatic equilibrium for intracluster gas, we have derived the necessary conditions for $ f _ b ( \ infty ) = f _ b $ : the x - ray surface brightness profile described by the $ \ beta $ model and the temperature profile approximated by the polytropic model should satisfy $ \ gamma \ approx2 ( 1 - 1 / 3 \ beta ) $ and $ \ gamma \ approx1 + 1 / 3 \ beta $ for $ \ beta < 1 $ and $ \ beta > 1 $, respectively, which sets a stringent limit to the polytropic index : $ \ gamma < 4 / 3 $. in particular, a mildly increasing temperature with radius is required if the observationally fitted $ \ beta $ parameter is in the range $ 1 / 3 < \ beta < 2 / 3 $. it is likely that a reliable determination of the universal baryon fraction can be achieved in the small $ \ beta $ clusters because the disagreement between the exact and asymptotic baryon fractions for clusters with $ \ beta > 2 / 3 $ breaks down at rather large radii ( $ \ ga30r _ c $ ) where hydrostatic equilibrium has probably become inapplicable. we further explore how to obtain the asymptotic value $ f _ b ( \ infty ) $ of baryon fraction from the x - ray measurement made primarily over the finite central region of a cluster. we demonstrate our method using a sample of 19 strong lensing clusters, which enables us to place a useful constraint on $ f _ b ( \ infty ) $ : $ 0. 094 \ pm0. 035 \ leq f _ b ( \ infty ) \ leq 0. 41 \ pm0. 18 $. an optimal estimate of $ f _ b ( \ infty ) $ based on three cooling flow clusters with $ \ beta < 1 / 2 $ in our lensing cluster sample yields $ < f _ b ( \ infty ) > = 0. 142 \ pm0. 007 $ or
|
arxiv:astro-ph/9909205
|
we propose a method to unify various stability results about symmetric ideals in polynomial rings by stratifying related derived categories. we execute this idea for chains of $ gl _ n $ - equivariant modules over an infinite field $ k $ of positive characteristic. we prove the le - nagel - nguyen - römer conjectures for such sequences and obtain stability patterns in their resolutions as corollaries of our main result, which is a semiorthogonal decomposition for the bounded derived category of $ gl _ { \ infty } $ - equivariant modules over $ s = k [ x _ 1, x _ 2, \ ldots, x _ n, \ ldots ] $. our method relies on finite generation results for certain local cohomology modules. we also outline approaches ( i ) to investigate koszul duality for $ s $ - modules taking the frobenius homomorphism ( of $ gl _ { \ infty } $ ) into account, and ( ii ) to recover and extend murai ' s results about free resolutions of symmetric monomial ideals.
|
arxiv:2407.16071
|
archival hst data taken in f606w + f814w of two different fields in the outer regions of ngc 6946 is used to measure a tip of the red giant branch ( trgb ) distance to the galaxy. we employ a bayesian maximum - likelihood modeling method that incorporates the completeness of the photometry, and allows us to model the luminosity function of the rgb population. our two fields provide us with distances of 7. 74 $ \ pm $ 0. 42 mpc and 7. 69 $ \ pm $ 0. 50 mpc, respectively. our final distance of 7. 72 $ \ pm $ 0. 32 mpc is higher than most values published previously in the literature. this has important implications for supernova measurements, as ngc 6946 is host to the most observed supernovae ( 10 ) of any galaxy to date. we find that the supernovae in ngc 6946 are on average $ \ sim $ 2. 3 times more luminous than previous estimates. our distance gives ngc 6946 a peculiar velocity of $ v _ { pec } $ = $ - 229 $ $ \ pm $ $ 29 $ km / s in the local sheet frame. this velocity is the projected component of a substantial negative sgz motion, indicating a coherent motion along with the other members of the m101 wall toward the plane of the local sheet. the m101 wall, a spur off the local sheet into the local void, is experiencing the expansion of the local void.
|
arxiv:1807.05229
|
the definition of scattering operator in quantum field theory is critically reconsidered. the correct treatment of one - particle states is connected with separation of selfaction from interaction. the formalism of functional integral is used for the description of such a separation via introduction of the quantum equation of motion.
|
arxiv:1003.4854
|
we consider the continuous - time random walk of a particle in a two - dimensional self - affine quenched random potential of hurst exponent $ h > 0 $. the corresponding master equation is studied via the strong disorder renormalization procedure introduced in ref. [ c. monthus and t. garel, j. phys. a : math. theor. 41 ( 2008 ) 255002 ]. we present numerical results on the statistics of the equilibrium time $ t _ { eq } $ over the disordered samples of a given size $ l \ times l $ for $ 10 \ leq l \ leq 80 $. we find an ' infinite disorder fixed point ', where the equilibrium barrier $ \ gamma _ { eq } \ equiv \ ln t _ { eq } $ scales as $ \ gamma _ { eq } = l ^ h u $ where $ u $ is a random variable of order o ( 1 ). this corresponds to a logarithmically - slow diffusion $ | \ vec r ( t ) - \ vec r ( 0 ) | \ sim ( \ ln t ) ^ { 1 / h } $ for the position $ \ vec r ( t ) $ of the particle.
|
arxiv:0910.0111
|
it is shown, using high - precision numerical simulations, that the mobility edge of the 3d anderson model depends on the boundary hopping term t in the infinite size limit. the critical exponent is independent of it. the renormalized localization length at the critical point is also found to depend on t but not on the distribution of on - site energies for box and lorentzian distributions. implications of results for the description of the transition in terms of a local order - parameter are discussed.
|
arxiv:cond-mat/0701306
|
unsupervised video object segmentation ( uvos ) aims at detecting the primary objects in a given video sequence without any human intervention. most existing methods rely on two - stream architectures that separately encode the appearance and motion information before fusing them to identify the target and generate object masks. however, this pipeline is computationally expensive and can lead to suboptimal performance due to the difficulty of fusing the two modalities properly. in this paper, we propose a novel uvos model called simulflow that simultaneously performs feature extraction and target identification, enabling efficient and effective unsupervised video object segmentation. concretely, we design a novel simulflow attention mechanism to bridge image and motion information by utilizing the flexibility of the attention operation, where coarse masks predicted from the fused features at each stage are used to constrain the attention operation within the mask area and exclude the impact of noise. because of the bidirectional information flow between visual and optical flow features in simulflow attention, no extra hand - designed fusing module is required and we only adopt a light decoder to obtain the final prediction. we evaluate our method on several benchmark datasets and achieve state - of - the - art results. our proposed approach not only outperforms existing methods but also addresses the computational complexity and fusion difficulties caused by two - stream architectures. our models achieve 87. 4 % j & f on davis - 16 with the highest speed ( 63. 7 fps on a 3090 ) and the lowest parameters ( 13. 7 m ). our simulflow also obtains competitive results on video salient object detection datasets.
|
arxiv:2311.18286
|
in 1972, chvatal gave a well - known sufficient condition for a graphical sequence to be forcibly hamiltonian, and showed that in some sense his condition is best possible. nash - williams gave examples of forcibly hamiltonian n - sequences that do not satisfy chvatal ' s condition for every n at least 5. in this note we generalize the nash - williams examples, and use this generalization to generate $ \ omega ( 2 ^ n / n ^ { 0. 5 } ) $ forcibly hamiltonian n - sequences that do not satisfy chvatal ' s condition.
|
arxiv:2106.08735
|
conb $ _ 2 $ o $ _ 6 $ is one of the few materials that is known to approximate the one - dimensional transverse - field ising model ( 1d - tfim ) near its quantum critical point. it has been inferred that co $ ^ { 2 + } $ acts as a pseudo - spin 1 / 2 with anisotropic exchange interactions that are largely ising - like, enabling the realization of the tfim. however, the behavior of conb $ _ 2 $ o $ _ 6 $ is known to diverge from the ideal tfim under transverse magnetic fields that are far from the quantum critical point, requiring the consideration of additional anisotropic, bond - dependent ( kitaev - like ) terms in the microscopic pseudo - spin 1 / 2 hamiltonian. these terms are expected to be controlled in part by single - ion physics, namely the wavefunction for the pseudo - spin 1 / 2 angular momentum doublet. here, we present the results of both inelastic neutron scattering measurements and electron paramagnetic resonance spectroscopy on conb $ _ 2 $ o $ _ 6 $, which elucidate the single - ion physics of co $ ^ { 2 + } $ in conb $ _ 2 $ o $ _ 6 $ for the first time. we find that the system is well - described by an intermediate spin - orbit coupled hamiltonian, and the ground state is a well - isolated kramers doublet with an anisotropic $ g $ - tensor. we provide the approximate wavefunctions for this doublet, which we expect will be useful in theoretical investigations of the anisotropic exchange interactions.
|
arxiv:2205.05130
|
i report on recent theoretical developments at quark matter 2006.
|
arxiv:hep-ph/0702004
|
supporting equitable instruction is an important issue for teachers in diverse stem classrooms. visual learning analytics, along with effective student survey measures, can support providing timely feedback to teachers to make instruction more culturally relevant to all students. we adopted a user - centered approach, in which we engaged seven middle school science teachers in iterative testing of thirty data visualizations disaggregated over markers such as gender and race, for implementation of selected displays in a visual learning analytics tool - the student electronic exit ticket ( seet ). this process helped us gather insights into teachers ' sensemaking in identifying patterns in student data related to gender and race, and into selecting and improving the design of the feedback displays for the seet [ 10 ].
|
arxiv:2401.07209
|
we evaluate the frame - independent gluon and charm parton - distribution functions ( pdfs ) of the deuteron utilizing light - front quantization and the impulse approximation. we use a nuclear wave function obtained from solving the nonrelativistic schroedinger equation with the realistic argonne v18 nuclear force, which we fold with the proton pdf. the predicted gluon distribution in the deuteron ( per nucleon ) is a few percent smaller than that of the proton in the domain x _ { bj } = q ^ 2 / ( 2 p _ n \ cdot q ) \ sim 0. 4, whereas it is strongly enhanced for x _ { bj } larger than 0. 6. we discuss the applicability of our analysis and comment on how to extend it to the kinematic limit x _ { bj } \ to 2. we also analyze the charm distribution of the deuteron within the same approach by considering both the perturbatively and non - perturbatively generated ( intrinsic ) charm contributions. in particular, we note that the intrinsic - charm content in the deuteron will be enhanced due to 6 - quark " hidden - color " qcd configurations.
|
arxiv:1805.03173
|
a reinvention of the school science curriculum is one that shapes students to contend with its changing influence on human welfare. scientific literacy, which allows a person to distinguish science from pseudosciences such as astrology, is among the attributes that enable students to adapt to the changing world. its characteristics are embedded in a curriculum where students are engaged in resolving problems, conducting investigations, or developing projects. alan j. friedman mentions why most scientists avoid educating about pseudoscience, including that paying undue attention to pseudoscience could dignify it. on the other hand, robert l. park emphasizes how pseudoscience can be a threat to society and considers that scientists have a responsibility to teach how to distinguish science from pseudoscience. pseudosciences such as homeopathy, even if generally benign, are used by charlatans. this poses a serious issue because it enables incompetent practitioners to administer health care. true - believing zealots may pose a more serious threat than typical con men because of their delusion to homeopathy ' s ideology. irrational health care is not harmless and it is careless to create patient confidence in pseudomedicine. on 8 december 2016, journalist michael v. levine pointed out the dangers posed by the natural news website : " snake - oil salesmen have pushed false cures since the dawn of medicine, and now websites like natural news flood social media with dangerous anti - pharmaceutical, anti - vaccination and anti - gmo pseudoscience that puts millions at risk of contracting preventable illnesses. " the anti - vaccine movement has persuaded large numbers of parents not to vaccinate their children, citing pseudoscientific research that links childhood vaccines with the onset of autism. 
these include the study by andrew wakefield, which claimed that a combination of gastrointestinal disease and developmental regression, which are often seen in children with asd, occurred within two weeks of receiving vaccines. the study was eventually retracted by its publisher, and wakefield was stripped of his license to practice medicine. alkaline water is water that has a ph of higher than 7, purported to host numerous health benefits, with no empirical backing. a practitioner known as robert o. young who promoted alkaline water and an " alkaline diet " was sent to jail for 3 years in 2017 for practicing medicine without a license.
|
https://en.wikipedia.org/wiki/Pseudoscience
|
in vehicular networks of the future, sensing and communication functionalities will be intertwined. in this paper, we investigate a radar - assisted predictive beamforming design for vehicle - to - infrastructure ( v2i ) communication by exploiting the dual - functional radar - communication ( dfrc ) technique. aiming to realize joint sensing and communication functionalities at road side units ( rsus ), we present a novel extended kalman filtering ( ekf ) framework to track and predict the kinematic parameters of each vehicle. by exploiting the radar functionality of the rsu, we show that the communication beam tracking overheads can be drastically reduced. to improve the sensing accuracy while guaranteeing the downlink communication sum - rate, we further propose a power allocation scheme for multiple vehicles. numerical results show that the proposed dfrc - based beam tracking approach significantly outperforms the communication - only feedback - based technique in tracking performance. furthermore, the designed power allocation method is able to achieve a favorable performance trade - off between sensing and communication.
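the ekf idea above can be sketched with a generic linear kalman filter; the 1 - d constant - velocity model, noise levels, and numbers below are hypothetical stand - ins of our own, not the paper's vehicle kinematics:

```python
import numpy as np

# Minimal linear Kalman filter sketch (hypothetical 1-D constant-velocity
# model, not the paper's setup): predict the next position, "point the beam"
# there, then correct the state with the noisy radar position measurement.
def kf_track(measurements, dt=1.0, q=1e-3, r=0.25):
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])                 # radar measures position only
    Q = q * np.eye(2)                          # process noise covariance
    R = np.array([[r]])                        # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]])   # initial state guess
    P = np.eye(2)
    estimates = []
    for z in measurements:
        x = F @ x                              # predict (beam prediction step)
        P = F @ P @ F.T + Q
        y = np.array([[z]]) - H @ x            # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ y                          # correct with the radar echo
        P = (np.eye(2) - K @ H) @ P
        estimates.append(float(x[0, 0]))
    return estimates

est = kf_track([0.1, 1.0, 2.1, 2.9, 4.0, 5.1])  # vehicle moving ~1 unit/step
```

after a few steps the filter has learned the velocity, so the predicted position can be used to steer the beam without dedicated feedback overhead.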
|
arxiv:2001.09306
|
the singular value decomposition ( svd ) is not only a classical theory in matrix computation and analysis, but also is a powerful tool in machine learning and modern data analysis. in this tutorial we first study the basic notion of svd and then show the central role of svd in matrices. using majorization theory, we consider variational principles of singular values and eigenvalues. built on svd and a theory of symmetric gauge functions, we discuss unitarily invariant norms, which are then used to formulate general results for matrix low rank approximation. we study the subdifferentials of unitarily invariant norms. these results would be potentially useful in many machine learning problems such as matrix completion and matrix data classification. finally, we discuss matrix low rank approximation and its recent developments such as randomized svd, approximate matrix multiplication, cur decomposition, and nystrom approximation. randomized algorithms are important approaches to large scale svd as well as fast matrix computations.
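the randomized svd mentioned above can be sketched in a few lines ( range finding with a gaussian test matrix, then an exact svd of the small projected matrix ); the parameter names here are our own:

```python
import numpy as np

# Randomized SVD sketch: approximate the range of A with a Gaussian test
# matrix, then take an exact SVD of the small projected matrix.
def randomized_svd(A, k, oversample=10, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))  # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)                    # orthonormal range basis
    B = Q.T @ A                                       # small (k+p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

# on a matrix of exact rank 3, rank-3 recovery is exact up to float error
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
U, s, Vt = randomized_svd(A, k=3)
err = np.linalg.norm(A - U @ np.diag(s) @ Vt)
```

for matrices whose spectrum decays slowly, a few power iterations ( multiplying by $ a a ^ t $ before the qr step ) are commonly added to sharpen the range estimate.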
|
arxiv:1510.08532
|
we define the $ pc $ - polynomial of a graph, which is related to the clique, ( in ) dependence and matching polynomials. the growth rate of a partially commutative monoid is equal to the largest root $ \ beta ( g ) $ of the $ pc $ - polynomial of the corresponding graph. the random algebra is defined in such a way that its growth rate equals the largest root of the $ pc $ - polynomial of the random graph. we prove that for almost all graphs all sufficiently large real roots of the $ pc $ - polynomial lie in neighbourhoods of roots of the $ pc $ - polynomial of the random graph. we show how to calculate the series expansions of the latter roots. the average value of $ \ beta ( g ) $ over all graphs with the same number of vertices is computed. we find the graphs on which the maximal value of $ \ beta ( g ) $ with fixed numbers of vertices and edges is reached. from this, we derive an upper bound on $ \ beta ( g ) $. modulo one assumption, we do the same for the minimal value of $ \ beta ( g ) $. we study the nordhaus - gaddum bounds of $ \ beta ( g ) + \ beta ( \ bar { g } ) $ and $ \ beta ( g ) \ beta ( \ bar { g } ) $.
|
arxiv:1808.03932
|
we formulate the problem of sampling and recovering clustered graph signal as a multi - armed bandit ( mab ) problem. this formulation lends naturally to learning sampling strategies using the well - known gradient mab algorithm. in particular, the sampling strategy is represented as a probability distribution over the individual arms of the mab and optimized using gradient ascent. some illustrative numerical experiments indicate that the sampling strategies based on the gradient mab algorithm outperform existing sampling methods.
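a minimal sketch of this idea ( our own toy instance, not the paper's graph - signal setup ): the sampling strategy is a softmax distribution over the arms, and the arm preferences are updated by gradient ascent on the reward, in the standard gradient - bandit style:

```python
import numpy as np

# Gradient MAB sketch: softmax sampling distribution over arms, preferences
# updated by gradient ascent on the reward with a running-average baseline.
def gradient_bandit(reward_means, steps=5000, alpha=0.1, seed=0):
    rng = np.random.default_rng(seed)
    H = np.zeros(len(reward_means))             # arm preferences
    baseline, n = 0.0, 0
    for _ in range(steps):
        p = np.exp(H - H.max())
        p /= p.sum()                            # softmax sampling distribution
        a = rng.choice(len(H), p=p)             # sample an arm
        r = reward_means[a] + 0.1 * rng.standard_normal()
        n += 1
        baseline += (r - baseline) / n          # running reward baseline
        onehot = np.zeros_like(H)
        onehot[a] = 1.0
        H += alpha * (r - baseline) * (onehot - p)  # gradient ascent step
    p = np.exp(H - H.max())
    return p / p.sum()

p = gradient_bandit([0.1, 0.9, 0.3])   # arm 1 has the highest mean reward
```

the learned distribution concentrates on the best arm; in the sampling application, each arm would instead correspond to a candidate set of nodes to sample.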
|
arxiv:1805.05827
|
identifiability is a necessary condition for successful parameter estimation of dynamic system models. a major component of identifiability analysis is determining the identifiable parameter combinations, the functional forms for the dependencies between unidentifiable parameters. identifiable combinations can help in model reparameterization and also in determining which parameters may be experimentally measured to recover model identifiability. several numerical approaches to determining identifiability of differential equation models have been developed, however the question of determining identifiable combinations remains incompletely addressed. in this paper, we present a new approach which uses parameter subset selection methods based on the fisher information matrix, together with the profile likelihood, to effectively estimate identifiable combinations. we demonstrate this approach on several example models in pharmacokinetics, cellular biology, and physiology.
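a toy sketch of the fisher - information side of this approach ( our own example, not one of the paper's models ): for $ y ( t ) = a b t $ only the product $ a b $ is identifiable, and the fim built from output sensitivities is rank - deficient, with its null direction exposing the unidentifiable combination:

```python
import numpy as np

# FIM-based identifiability sketch (toy model, not from the paper):
# sensitivities dy/da and dy/db are proportional for y = a*b*t, so the
# Fisher information matrix has rank 1 and a nontrivial null direction.
def fisher_information(t, a, b, eps=1e-6):
    def y(a_, b_):
        return a_ * b_ * t
    # finite-difference sensitivities dy/da, dy/db stacked as columns
    S = np.column_stack([
        (y(a + eps, b) - y(a - eps, b)) / (2 * eps),
        (y(a, b + eps) - y(a, b - eps)) / (2 * eps),
    ])
    return S.T @ S                       # FIM assuming unit noise variance

F = fisher_information(np.linspace(0, 1, 20), a=2.0, b=3.0)
rank = np.linalg.matrix_rank(F)          # 1 < 2: a and b not separately identifiable
null = np.linalg.svd(F)[2][-1]           # null direction, proportional to (a, -b)
```

the null direction ( a, - b ) says that scaling $ a $ up and $ b $ down leaves the output unchanged, i. e. only $ a b $ can be recovered from data.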
|
arxiv:1307.2298
|
we show that the magnetic moment of a nanoparticle embedded in the surface of a solid can be switched by surface acoustic waves ( saw ) in the ghz frequency range via a universal mechanism that does not depend on the structure of the particle and the structure of the substrate. it is based upon generation of the effective ac magnetic field in the coordinate frame of the nanoparticle by the shear deformation of the surface due to saw. the magnetization reversal occurs via a consecutive absorption of surface phonons of the controlled variable frequency. we derive analytical equations governing this process and solve them numerically for the practical range of parameters.
|
arxiv:1511.09109
|
feedback alignment algorithms are an alternative to backpropagation for training neural networks, whereby some of the partial derivatives that are required to compute the gradient are replaced by random terms. this essentially transforms the update rule into a random walk in weight space. surprisingly, learning still works with those algorithms, including the training of deep neural networks. this is generally attributed to an alignment of the update of the random walker with the true gradient - the eponymous gradient alignment - which drives an approximate gradient descent. the mechanism that leads to this alignment remains unclear, however. in this paper, we use mathematical reasoning and simulations to investigate gradient alignment. we observe that the feedback alignment update rule has fixed points, which correspond to extrema of the loss function. we show that gradient alignment is a stability criterion for those fixed points. it is only a necessary criterion for algorithm performance. experimentally, we demonstrate that high levels of gradient alignment can lead to poor algorithm performance and that the alignment is not always driving the gradient descent.
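a minimal feedback - alignment demonstration ( our own toy setup, not the paper's experiments ): a two - layer linear network trained on a random linear teacher, with a fixed random matrix b used in the backward pass in place of the transposed output weights; the loss still decreases:

```python
import numpy as np

# Feedback alignment on a two-layer linear network: the W1 update uses a
# fixed random matrix B instead of W2.T, yet training still reduces the loss.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
T = X @ rng.standard_normal((5, 3))              # linear teacher targets
W1 = rng.standard_normal((5, 8)) / np.sqrt(5)
W2 = rng.standard_normal((8, 3)) / np.sqrt(8)
B = rng.standard_normal((3, 8)) / np.sqrt(3)     # fixed random feedback weights
lr, losses = 0.02, []
for _ in range(500):
    H = X @ W1                                   # forward pass
    E = H @ W2 - T                               # output error
    losses.append(float((E ** 2).mean()))
    W2 -= lr * H.T @ E / len(X)                  # exact gradient for top layer
    W1 -= lr * X.T @ (E @ B) / len(X)            # FA: B replaces W2.T
```

tracking the angle between `E @ B` and the true backpropagated signal `E @ W2.T` over training is the standard way to visualize the gradient alignment discussed above.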
|
arxiv:2306.02325
|
based on photon - phonon nonlinear interaction, a scheme is proposed to realize a controllable multi - path photon - phonon converter at single - quantum level in a composed quadratically coupled optomechanical system. considering the realization of the scheme, an associated mechanical oscillator is introduced to enhance the effective nonlinear effect. thus, the single - photon state can be converted to the phonon state with high fidelity even under the current experimental condition that the single - photon coupling rate is much smaller than mechanical frequency ( $ g \ ll \ omega _ m $ ). the state transfer protocols and their transfer fidelity are discussed both analytically and numerically. a multi - path photon - phonon converter is designed, by combining the optomechanical system with low frequency resonators, which can be controlled by experimentally adjustable parameters. this work provides us a potential platform for quantum state transfer and quantum information processing.
|
arxiv:1701.05401
|
in this paper we formulate and study an optimal switching problem under partial information. in our model the agent / manager / investor attempts to maximize the expected reward by switching between different states / investments. however, he is not fully aware of his environment and only an observation process, which contains partial information about the environment / underlying, is accessible. it is based on the partial information carried by this observation process that all decisions must be made. we propose a probabilistic numerical algorithm based on dynamic programming, regression monte carlo methods, and stochastic filtering theory to compute the value function. in this paper, the approximation of the value function and the corresponding convergence result are obtained when the underlying and observation processes satisfy the linear kalman - bucy setting. a numerical example is included to show some specific features of partial information.
|
arxiv:1403.1795
|
can our video understanding systems perceive objects when a heavy occlusion exists in a scene? to answer this question, we collect a large - scale dataset called ovis for occluded video instance segmentation, that is, to simultaneously detect, segment, and track instances in occluded scenes. ovis consists of 296k high - quality instance masks from 25 semantic categories, where object occlusions usually occur. while our human vision systems can understand those occluded instances by contextual reasoning and association, our experiments suggest that current video understanding systems cannot. on the ovis dataset, the highest ap achieved by state - of - the - art algorithms is only 16. 3, which reveals that we are still at a nascent stage for understanding objects, instances, and videos in a real - world scenario. we also present a simple plug - and - play module that performs temporal feature calibration to complement missing object cues caused by occlusion. built upon masktrack r - cnn and sipmask, we obtain a remarkable ap improvement on the ovis dataset. the ovis dataset and project code are available at http : / / songbai. site / ovis.
|
arxiv:2102.01558
|
contemporary time series data often feature objects connected by a social network that naturally induces temporal dependence involving connected neighbours. the network vector autoregressive model is useful for describing the influence of linked neighbours, while recent generalizations aim to separate influence and homophily. existing approaches, however, require either correct specification of a time series model or accurate estimation of a network model or both, and rely exclusively on least - squares for parameter estimation. this paper proposes a new autoregressive model incorporating a flexible form for latent variables used to depict homophily. we develop a first - order differencing method for the estimation of influence requiring only the influence part of the model to be correctly specified. when the part including homophily is correctly specified admitting a semiparametric form, we leverage and generalize the recent notion of neighbour smoothing for parameter estimation, bypassing the need to specify the generative mechanism of the network. we develop new theory to show that all the estimated parameters are consistent and asymptotically normal. the efficacy of our approach is confirmed via extensive simulations and an analysis of a social media dataset.
|
arxiv:2309.08488
|
we are interested in the estimation of the distance in total variation $ $ \ delta : = \ | p _ { f ( x ) } - p _ { g ( x ) } \ | _ { \ mathrm var } $ $ between distributions of random variables $ f ( x ) $ and $ g ( x ) $ in terms of proximity of $ f $ and $ g. $ we propose a simple general method of estimating $ \ delta $. for gaussian and trigonometrical polynomials it gives an asymptotically optimal result ( when the degree tends to $ \ infty $ ).
|
arxiv:1611.03009
|
recently, lauritzen, raben - pedersen and thomsen proved that schubert varieties are globally $ f $ - regular. we give another proof.
|
arxiv:math/0409007
|
nuclear spin - 1 / 2 lattices where each spin has a small effective number of interacting neighbors represent a particular challenge for first - principles calculations of free induction decays ( fids ) observed by nuclear magnetic resonance ( nmr ). the challenge originates from the fact that these lattices are far from the limit where classical spin simulations perform well. here we use the recently developed method of hybrid quantum - classical simulations to compute nuclear fids for $ ^ { 29 } $ si - enriched silicon and fluorapatite. in these solids, small effective number of interacting neighbors is either due to the partition of the lattice into pairs of strongly coupled spins ( silicon ), or due to the partition into strongly coupled chains ( fluorapatite ). we find a very good overall agreement between the hybrid simulation results and the experiments. in addition, we introduce an extension of the hybrid method, which we call the method of " coupled quantum clusters ". it is tested on $ ^ { 29 } $ si - enriched silicon and found to exhibit excellent performance.
|
arxiv:1911.00990
|
We introduce Doctor Xavier, a BERT-based diagnostic system that extracts relevant clinical data from transcribed patient-doctor dialogues and explains predictions using feature attribution methods. We present a novel performance plot and evaluation metric for feature attribution methods: the feature attribution dropping (FAD) curve and its normalized area under the curve (N-AUC). FAD curve analysis shows that Integrated Gradients outperforms Shapley values in explaining diagnosis classification. Doctor Xavier outperforms the baseline with a 0.97 F1-score in named entity recognition and symptom pertinence classification and a 0.91 F1-score in diagnosis classification.
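The FAD idea itself is easy to prototype: rank features by their attribution scores, remove them one at a time, and track how quickly accuracy collapses. The sketch below uses a hypothetical linear toy model and a single global feature ranking, a simplification of per-example attributions; under the usual reading of deletion metrics, a steeper drop (smaller normalized AUC) indicates more faithful attributions:

```python
import numpy as np

def fad_curve(predict, X, y, attributions):
    """Drop features in order of decreasing attribution score and record
    the model's accuracy after each drop (global-ranking simplification)."""
    order = np.argsort(attributions)[::-1]       # most important first
    X_masked = X.copy()
    accs = [(predict(X_masked) == y).mean()]
    for j in order:
        X_masked[:, j] = 0.0                     # "drop" feature j
        accs.append((predict(X_masked) == y).mean())
    return np.asarray(accs)

def normalized_auc(accs):
    # Trapezoidal area under the FAD curve on [0, 1], scaled by initial accuracy
    mids = 0.5 * (accs[:-1] + accs[1:])
    return mids.mean() / accs[0]

# Toy linear classifier whose weights double as ground-truth importances
w = np.array([2.0, -1.0, 0.5, 0.0])
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X @ w > 0).astype(int)
predict = lambda Z: (Z @ w > 0).astype(int)

good = np.abs(w)        # faithful attributions -> accuracy collapses quickly
bad = np.abs(w)[::-1]   # reversed ranking -> accuracy degrades slowly
print(normalized_auc(fad_curve(predict, X, y, good)))   # smaller area
print(normalized_auc(fad_curve(predict, X, y, bad)))    # larger area
```

Here the faithful ranking yields a visibly smaller N-AUC than the deliberately reversed one, which is the comparison the metric is built to make.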
|
arxiv:2204.10178
|
The contact process is a simple model for the spread of an infection in a structured population. We consider a variant of this process on Bienaymé-Galton-Watson trees, where vertices are equipped with a random fitness representing inhomogeneous transmission rates among individuals. In this paper, we establish conditions under which this inhomogeneous contact process exhibits a phase transition. We first prove that if certain mixed moments of the joint offspring and fitness distribution are finite, then the survival threshold is strictly positive. Further, we show that if slightly different mixed moments are infinite, then there is no phase transition and the process survives with positive probability for any choice of the infection parameter. A similar dichotomy is known for the contact process on a Bienaymé-Galton-Watson tree. However, we show that the introduction of fitness means that we have to take into account the combined effect of fitness and offspring distribution to decide which scenario occurs.
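The qualitative effect of the infection parameter can be illustrated with a small simulation. The sketch below runs a Gillespie-style contact process on a truncated Poisson Bienaymé-Galton-Watson tree with i.i.d. exponential fitness, transmitting across an edge $(v, u)$ at rate `lam * fit[v] * fit[u]`; the multiplicative rate form, the truncation depth, and the finite time horizon are all illustrative assumptions, not the paper's setup:

```python
import numpy as np

def simulate(lam, mean_offspring=2.0, depth=5, t_max=8.0, seed=0):
    """Contact process with i.i.d. exponential fitness on a truncated
    Poisson(mean_offspring) Bienaymé-Galton-Watson tree.
    Returns True if the infection is still alive at time t_max."""
    rng = np.random.default_rng(seed)
    adj, fit = [[]], [rng.exponential()]
    frontier = [0]
    for _ in range(depth):                        # grow the truncated tree
        nxt = []
        for v in frontier:
            for _ in range(rng.poisson(mean_offspring)):
                u = len(adj)
                adj.append([v]); adj[v].append(u)
                fit.append(rng.exponential())
                nxt.append(u)
        frontier = nxt
    infected, t = {0}, 0.0
    while infected and t < t_max:                 # Gillespie dynamics
        rates, events = [], []
        for v in infected:
            rates.append(1.0); events.append(("rec", v))   # recovery at rate 1
            for u in adj[v]:
                if u not in infected:
                    rates.append(lam * fit[v] * fit[u])    # fitness-weighted rate
                    events.append(("inf", u))
        total = sum(rates)
        t += rng.exponential(1.0 / total)
        kind, v = events[rng.choice(len(events), p=np.array(rates) / total)]
        if kind == "rec":
            infected.discard(v)
        else:
            infected.add(v)
    return bool(infected)

def survival_rate(lam, runs=15):
    return sum(simulate(lam, seed=s) for s in range(runs)) / runs

print(survival_rate(0.05), survival_rate(1.0))  # small lam dies out far more often
```

On a finite truncated tree the process always dies out eventually, so the finite horizon only separates "dies quickly" from "persists"; the paper's dichotomy concerns genuine survival on the infinite tree, which this toy run cannot probe.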
|
arxiv:2110.14537
|
By means of the $q$-Zeilberger algorithm, we prove a basic hypergeometric supercongruence modulo the fifth power of the cyclotomic polynomial $\Phi_n(q)$. This result appears to be quite unique, as in the existing literature no basic hypergeometric supercongruences modulo a power greater than the fourth of a cyclotomic polynomial have so far been proved. We also establish a couple of related results, including a parametric supercongruence.
|
arxiv:1812.11659
|
In this paper, we use the non-critical string/gauge duality to calculate the decay widths of large-spin mesons. Since it is believed that the string theory of QCD is not a ten-dimensional theory, we expect that the non-critical versions of ten-dimensional black hole backgrounds lead to better results than the critical ones. For this purpose we concentrate on confining theories and consider two different six-dimensional black hole backgrounds. We choose the near-extremal AdS$_6$ model and the near-extremal KM model to compute the decay widths of large-spin mesons. Then we present our results from these two non-critical backgrounds and compare them with those from the critical models and with experimental data.
|
arxiv:1105.6273
|