text | source
---|---
Detecting and tracking code clones can ease various software development and maintenance tasks when changes in a code fragment should be propagated over all its copies. Several deep learning-based clone detection models have appeared in the literature for detecting syntactic and semantic clones, widely evaluated with the BigCloneBench dataset. However, class imbalance and the small number of semantic clones make BigCloneBench less ideal for interpreting model performance. Researchers also use other datasets such as GoogleCodeJam, OJClone, and SemanticCloneBench to understand model generalizability. To overcome the limitations of existing datasets, the GPT-assisted semantic and cross-language clone dataset GPTCloneBench has been released. However, how these models compare across datasets remains unclear. In this paper, we propose a multi-step evaluation approach for five state-of-the-art clone detection models leveraging existing benchmark datasets, including GPTCloneBench, and using mutation operators to study model ability. Specifically, we examine three highly-performing single-language models (ASTNN, GMN, CodeBERT) on BigCloneBench, SemanticCloneBench, and GPTCloneBench, testing their robustness with mutation operations. Additionally, we compare them against cross-language models (C4, CLCDSA) known for detecting semantic clones. While single-language models show high F1 scores for BigCloneBench, their performance on SemanticCloneBench varies (up to 20%). Interestingly, the cross-language model (C4) shows superior performance (around 7%) on SemanticCloneBench over other models and performs similarly on BigCloneBench and GPTCloneBench. On mutation-based datasets, C4 has more robust performance (less than 1% difference) compared to single-language models, which show high variability.
|
arxiv:2412.14739
|
A phenomenological quasiparticle model is surveyed for 2+1 quark flavors and compared with recent lattice QCD results. Emphasis is devoted to the effects of plasmons, plasminos and Landau damping. It is shown that thermodynamic bulk quantities, known at zero chemical potential, can uniquely be mapped towards nonzero chemical potential by means of a thermodynamic consistency condition and a stationarity condition.
|
arxiv:0709.2262
|
The quasi-periodic oscillations (QPOs) in black hole (BH) systems of different scales are interpreted based on the magnetic reconnection of the large-scale magnetic fields generated by the toroidal electric currents flowing in the inner region of the accretion disk, where the current density is assumed to be proportional to the mass density of the accreting plasma. The magnetic connection (MC) is taken into account in resolving the dynamic equations of the accretion disk, in which the MC between the inner and outer disk regions, the MC between the plunging region and the disk, and the MC between the BH horizon and the disk are involved. It turns out that the single QPO frequency of several BH systems of different scales can be fitted by invoking the magnetic reconnection due to the MC between the inner and outer regions of the disk, where the BH binaries XTE J1859+226, XTE J1650-500 and GRS 1915+105 and the massive BHs in NGC 5408 X-1 and RE J1034+396 are included. In addition, the X-ray spectra corresponding to the QPOs are fitted for these sources based on the typical disk-corona model.
|
arxiv:1301.0162
|
The E08-027 (g2p) experiment measured the spin structure functions of the proton at Jefferson Laboratory in Newport News, VA. Longitudinally polarized electrons were scattered from a transversely and longitudinally polarized solid ammonia target in Hall A, with the polarized NH$_3$ acting as an effective proton target. Focusing on small-scattering-angle events at the electron energies available at Jefferson Lab, the experiment covered a kinematic phase space of $0.02\ \mathrm{GeV}^2 < Q^2 < 0.20\ \mathrm{GeV}^2$ in the proton's resonance region. The spin structure functions, $g_1^p(x,Q^2)$ and $g_2^p(x,Q^2)$, are extracted from an inclusive polarized cross section measurement of the electron-proton interaction. Integrated moments of $g_1(x,Q^2)$ are calculated and compared to theoretical predictions made by chiral perturbation theory. The $g_1(x,Q^2)$ results are in agreement with previous measurements, but include a significant increase in statistical precision. The spin structure function contributions to the hyperfine energy levels in the hydrogen atom are also investigated. The measured $g_2(x,Q^2)$ contribution to the hyperfine splitting is the first ever experimental determination of this quantity. The results of this thesis suggest a disagreement of over 100% with previously published model results.
|
arxiv:1708.08297
|
As a tool to address the equivalence problem in sub-Riemannian geometry, we introduce a canonical choice of grading and compatible affine connection, available on any sub-Riemannian manifold with constant symbol. We completely compute these structures for contact manifolds of constant symbol, including the cases where the connections of Tanaka-Webster-Tanno are not defined. We also give an original intrinsic grading on sub-Riemannian (2,3,5)-manifolds, and use this to present the first flatness theorem in this setting.
|
arxiv:2010.05366
|
Let $B_1$ be a ball in $\mathbb{R}^n$ centred at the origin and $B_0$ be a smaller ball compactly contained in $B_1$. For $p \in (1,\infty)$, using the shape derivative method, we show that the first eigenvalue of the $p$-Laplacian in the annulus $B_1 \setminus \overline{B_0}$ strictly decreases as the inner ball moves towards the boundary of the outer ball. The analogous results for the limit cases as $p \to 1$ and $p \to \infty$ are also discussed. Using our main result, we further prove the nonradiality of the eigenfunctions associated with the points on the first nontrivial curve of the Fučík spectrum of the $p$-Laplacian on bounded radial domains.
|
arxiv:1611.03532
|
Hyperparameter tuning is one of the most time-consuming parts of machine learning. Despite the existence of modern optimization algorithms that minimize the number of evaluations needed, evaluations of a single setting may still be expensive. Usually a resampling technique is used, where the machine learning method has to be fitted a fixed number of $k$ times on different training datasets. The respective mean performance of the $k$ fits is then used as a performance estimator. Many hyperparameter settings could be discarded after less than $k$ resampling iterations if they are clearly inferior to high-performing settings. However, resampling is often performed until the very end, wasting a lot of computational effort. To this end, we propose the sequential random search (SQRS), which extends the regular random search algorithm by a sequential testing procedure aimed at detecting and eliminating inferior parameter configurations early. We compared our SQRS with regular random search using multiple publicly available regression and classification datasets. Our simulation study showed that the SQRS is able to find similarly well-performing parameter settings while requiring noticeably fewer evaluations. Our results underscore the potential for integrating sequential tests into hyperparameter tuning.
|
arxiv:2112.12438
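The elimination idea behind SQRS can be sketched in a few lines of Python. This is only an illustration under stated assumptions: the margin-based early-stopping rule and all names (`sqrs`, `loss`, the fold-seeded noise) are invented here, whereas the paper uses a proper sequential statistical test.

```python
import random

def sqrs(objective, sample_config, k=10, budget=50, seed=0):
    """Sequential random search (illustrative sketch): evaluate each
    randomly drawn configuration on up to k resampling folds, but
    abandon it early once its running mean is clearly worse than the
    incumbent's. The paper applies a sequential statistical test; the
    fixed-margin rule below only illustrates the idea."""
    rng = random.Random(seed)
    best_config, best_mean = None, float("inf")
    for _ in range(budget):
        cfg = sample_config(rng)
        scores = []
        abandoned = False
        for fold in range(k):
            scores.append(objective(cfg, fold))
            mean = sum(scores) / len(scores)
            # early elimination: clearly inferior after at least 3 folds
            if len(scores) >= 3 and mean > best_mean + 1.0:
                abandoned = True
                break
        if not abandoned and mean < best_mean:
            best_config, best_mean = cfg, mean
    return best_config, best_mean

# toy tuning task: minimize (x - 3)^2 with deterministic per-fold noise
def loss(cfg, fold):
    noise = random.Random((int(cfg * 1e6) ^ fold)).gauss(0.0, 0.1)
    return (cfg - 3.0) ** 2 + noise

best, score = sqrs(loss, lambda rng: rng.uniform(0.0, 10.0))
```

Configurations far from the optimum are dropped after three folds instead of consuming all $k$ evaluations, which is exactly the saving the abstract reports.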
|
If $X \colon \Omega \to \mathbb{R}$ is a random variable on $(\Omega, \mathcal{F}, P)$, then the support of $X$ is the smallest closed set $R_X \subseteq \mathbb{R}$ such that $P(X \in R_X) = 1$. In practice, however, the support of a discrete random variable $X$ is often defined as the set $R_X = \{x \in \mathbb{R} : P(X = x) > 0\}$, and the support of a continuous random variable $X$ is defined as the set $R_X = \{x \in \mathbb{R} : f_X(x) > 0\}$, where $f_X(x)$ is a probability density function of $X$ (the set-theoretic support). Note that the word support can refer to the logarithm of the likelihood of a probability density function.

== Support of a distribution ==

It is possible also to talk about the support of a distribution, such as the Dirac delta function $\delta(x)$ on the real line. In that example, we can consider test functions $f$, which are smooth functions with support not including the point $0$. Since $\delta(f)$ (the distribution $\delta$ applied as a linear functional to $f$) is $0$ for such functions, we can say that the support of $\delta$ is $\{0\}$ only. Since measures (including probability measures) on the real line are special cases of distributions, we can also
|
https://en.wikipedia.org/wiki/Support_(mathematics)
|
We provide conditions that ensure that the recentered maximum of the Gaussian free field on a sequence of graphs fluctuates at the same order as the field at the point of maximal variance. In particular, on a sequence of such graphs the recentered maximum is not tight, similarly to the situation in $\mathbb{Z}$ but in contrast with the situation in $\mathbb{Z}^2$. We show that our conditions cover a large class of "fractal" graphs.
|
arxiv:1302.2135
|
We present a practical algorithm to decode erasures of Reed-Solomon codes over the binary field with $q$ elements in $O(q \log_2^2 q)$ time, where the constant implied by the $O$-notation is very small. Asymptotically fast algorithms based on fast polynomial arithmetic were already known, but even if their complexity is similar, they are mostly impractical. By comparison, our algorithm uses only a few Walsh transforms and has been easily implemented.
|
arxiv:0901.1886
|
Let $k$ be a field and let $A$ be a finitely generated $k$-algebra. The algebra $A$ is said to be cancellative if whenever $B$ is another $k$-algebra with the property that $A[x] \cong B[x]$, we necessarily have $A \cong B$. An important result of Abhyankar, Eakin, and Heinzer shows that if $A$ is a finitely generated commutative integral domain of Krull dimension one, then it is cancellative. We consider the question of cancellation for finitely generated not-necessarily-commutative domains of Gelfand-Kirillov dimension one, and show that such algebras are necessarily cancellative when the characteristic of the base field is zero. In particular, this recovers the cancellation result of Abhyankar, Eakin, and Heinzer in characteristic zero when one restricts to the commutative case. We also provide examples that show affine domains of Gelfand-Kirillov dimension one need not be cancellative when the base field has positive characteristic, giving a counterexample to a conjecture of Tang, the fourth-named author, and Zhang. In addition, we prove a skew analogue of the result of Abhyankar-Eakin-Heinzer, in which one works with skew polynomial extensions as opposed to ordinary polynomial rings.
|
arxiv:1909.04023
|
We review recent results on chiral $SU(2)_L \times SU(2)_R \approx O(4)$ and $U(1)_A$ symmetry restoration in QCD. In particular, we discuss how Ward identities allow one to derive general results on partner degeneration, which shed light on the distinction between the $O(4)$ and $O(4) \times U(1)_A$ patterns of the chiral transition. For that purpose, susceptibilities associated with the $O(4)$ and $U(1)_A$ symmetries are studied. From this analysis we conclude that in the ideal regime of exact $O(4)$ restoration (formally achieved in the limit of two massless flavours), $U(1)_A$ partners degenerate as well. We also discuss the role of the thermal $f_0(500)$ state to describe thermodynamic observables sensitive to chiral restoration, such as the scalar susceptibility. We pay special attention to the consistency of our results with recent lattice analysis.
|
arxiv:1712.00074
|
In a recent article [13], G. Janelidze introduced the concept of ideally exact categories as a generalization of semi-abelian categories, aiming to incorporate relevant examples of non-pointed categories, such as the categories $\textbf{Ring}$ and $\textbf{CRing}$ of unitary (commutative) rings. He also extended the notion of action representability to this broader framework, proving that both $\textbf{Ring}$ and $\textbf{CRing}$ are action representable. This article investigates the representability of actions of unitary non-associative algebras. After providing a detailed description of the monadic adjunction associated with any category of unitary algebras, we use the construction of the external weak actor [4] in order to prove that the categories of unitary (commutative) associative algebras and that of unitary alternative algebras are action representable. The result is then extended to unitary (commutative) Poisson algebras, where the explicit construction of the universal strict general actor is employed.
|
arxiv:2503.04488
|
This paper introduces a code generator designed for node-level optimized, extreme-scalable, matrix-free finite element operators on hybrid tetrahedral grids. It optimizes the local evaluation of bilinear forms through various techniques including tabulation, relocation of loop invariants, and inter-element vectorization, implemented as transformations of an abstract syntax tree. A key contribution is the development, analysis, and generation of efficient loop patterns that leverage the local structure of the underlying tetrahedral grid. These significantly enhance cache locality and arithmetic intensity, mitigating the bandwidth pressure associated with compute-sparse, low-order operators. The paper demonstrates the generator's capabilities through a comprehensive educational cycle of performance analysis, bottleneck identification, and emission of dedicated optimizations. For three differential operators ($-\Delta$, $-\nabla \cdot (k(\mathbf{x})\,\nabla\,)$, $\alpha(\mathbf{x})\,\mathbf{curl}\ \mathbf{curl} + \beta(\mathbf{x})$), we determine the set of most effective optimizations. Applied by the generator, they result in speed-ups of up to 58$\times$ compared to reference implementations. Detailed node-level performance analysis yields matrix-free operators with a throughput of 1.3 to 2.1 GDoF/s, achieving up to 62% peak performance on a 36-core Intel Ice Lake socket. Finally, the solution of the curl-curl problem with more than a trillion ($10^{12}$) degrees of freedom on 21504 processes in less than 50 seconds demonstrates the generated operators' performance and extreme scalability as part of a full multigrid solver.
|
arxiv:2404.08371
|
We study the problem of bidding in uniform price auctions widely used in practice. Although these auctions are non-truthful for bidders with quasilinear utility functions, several empirical findings suggest that the auction format induces truthful bidding from the bidders. We attribute this difference between theory and practice to the assumed behavioral model of the bidders. In this pursuit, we study uniform price auctions in a repeated setting from the perspective of a value-maximizing buyer who aims to maximize their acquired cumulative value across $T$ rounds, subject to per-round return-on-investment (ROI) constraints. For an ROI-constrained, value-maximizing buyer, we study a generalized version of the uniform bidding format commonly used in practice, which we term $m$-uniform bidding. To characterize the optimal $m$-uniform bid, we introduce and study the notion of universally feasible (UF) bidding policies, which are robust, meaning that ROI feasibility is obtained regardless of the competitors' bids. We show that the optimal class of UF bidding policies is essentially a generalization of truthful bidding policies, which depend only on the valuation curve of the bidder and the target ROI. To measure the performance of UF bidding policies against the optimal bidding policy that is not necessarily UF, we introduce a metric called the price of universal feasibility (POUF) and establish that POUF is at most 2, irrespective of $m$, and that this upper bound is tight. We further compare the generalized $m$-uniform bidding interface against the classical uniform bidding format, under which $m = 1$, showing that the total value under $m$-uniform bidding increases at most by a factor of $m$. Numerical simulations on semi-synthetic data demonstrate that UF bidding policies perform significantly better than the derived theoretical bounds.
|
arxiv:2406.03674
|
A significant decrease of spontaneous magnetization in frustrated one-dimensional ferro- and ferrimagnets due to non-magnetic impurities is predicted. Using the density-matrix renormalization group method and the exact diagonalization method, we confirm that the total spin can vanish due to a single impurity in finite chains. Introducing the picture of magnetic domain inversion, we numerically investigate the impurity-density dependence of magnetization. In particular, we show that even with an infinitesimal density of impurities the magnetization in the ground state is reduced by about 40% from that of the corresponding pure system. Conditions for the materials which may show this anomalous impurity effect are formulated.
|
arxiv:cond-mat/0603193
|
Coherent optical fibre networks are extremely sensitive to thermal, mechanical and acoustic noise, which requires elaborate schemes of phase stabilization with dedicated auxiliary lasers, multiplexers and photodetectors. This is particularly demanding in quantum networks operating at the single-photon level. Here we propose a simple method of phase stabilization based on single-photon counting and apply it to quantum fibre networks implementing single-photon interference on a lossless beamsplitter and coherent perfect absorption on a metamaterial absorber. As a proof of principle, we show dissipative single-photon switching with visibility close to 80%. This method can be employed in quantum networks of greater complexity without classical stabilization rigs, potentially increasing the efficiency of the quantum channels.
|
arxiv:1911.00221
|
The rank of an $A$-hypergeometric $D$-module $M_A(\beta)$, associated with a full rank $(d \times n)$-matrix $A$ and a vector of parameters $\beta \in \mathbb{C}^d$, is known to be the normalized volume of $A$, denoted $\mathrm{vol}(A)$, when $\beta$ lies outside the exceptional arrangement $\mathcal{E}(A)$, an affine subspace arrangement of codimension at least two. If $\beta \in \mathcal{E}(A)$ is simple, we prove that $d-1$ is a tight upper bound for the ratio $\mathrm{rank}(M_A(\beta))/\mathrm{vol}(A)$ for any $d \geq 3$. We also prove that the set of parameters $\beta$ such that this ratio is at least $2$ is an affine subspace arrangement of codimension at least $3$.
|
arxiv:1907.08669
|
We re-introduce a derivative-free subspace optimization framework originating from Chapter 5 of the Ph.D. thesis [Z. Zhang, On Derivative-Free Optimization Methods, Ph.D. thesis, Chinese Academy of Sciences, Beijing, 2012] of the author under the supervision of Ya-Xiang Yuan. At each iteration, the framework defines a (low-dimensional) subspace based on an approximate gradient, and then solves a subproblem in this subspace to generate a new iterate. We sketch the global convergence and worst-case complexity analysis of the framework, elaborate on its implementation, and present some numerical results on solving problems with dimensions as high as $10^4$ using only inaccurate function values.
|
arxiv:2501.04536
|
Let $H$ be a reflexive, dense, separable, infinite-dimensional complex Hilbert space and let $B(H)$ be the algebra of all bounded linear operators on $H$. In this paper, we carry out characterizations of norm-attainable operators in normed spaces. We give conditions for norm-attainability of linear functionals in Banach spaces, non-power operators on $H$, and elementary operators. Lastly, we characterize a new notion of norm-attainability for power operators in normed spaces.
|
arxiv:2004.05496
|
The flavor content of nucleon form factors is analyzed using two different theoretical approaches. The first is based on a phenomenological two-component model in which the external photon couples to both an intrinsic three-quark structure and a meson cloud via vector-meson dominance. The flavor content of the nucleon form factors is extracted without introducing any additional parameter. A comparison with recent data from parity-violating electron scattering experiments shows a good overall agreement for the strange form factors. A more microscopic approach is that of an unquenched quark model proposed by Geiger and Isgur, which is based on valence quark plus glue dominance to which quark-antiquark pairs are added in perturbation. In the original version the importance of $s\bar{s}$ loops in the proton was studied. Here we present the formalism for a new generation of unquenched quark models which, among other extensions, includes the contributions of $u\bar{u}$ and $d\bar{d}$ loops. Finally, we discuss some preliminary results in the closure limit.
|
arxiv:nucl-th/0703053
|
Textual analysis of typical microbial genomes reveals that they have the statistical characteristics of a DNA sequence of a much shorter length. This peculiar property supports an evolutionary model in which a genome evolves by random mutation but primarily grows by random segmental self-copying. That genomes grew mostly by self-copying is consistent with the observation that repeat sequences in all genomes are widespread and that intragenomic and intergenomic homologous genes are preponderant across all life forms. The model predicates the coexistence of the two competing modes of evolution: the gradual changes of classical Darwinism and the stochastic spurts envisioned in "punctuated equilibrium".
|
arxiv:physics/0206024
|
We construct a Hermitian random matrix model that provides a stable non-perturbative completion of Cangemi-Jackiw (CJ) gravity, a two-dimensional theory of flat spacetimes. The matrix model reproduces, to all orders in the topological expansion, the Euclidean partition function of CJ gravity with an arbitrary number of boundaries. The non-perturbative completion enables the exact computation of observables in flat space quantum gravity, which we use to explicitly characterize the Bondi Hamiltonian spectrum. We discuss the implications of our results for the flat space S-matrix and black holes.
|
arxiv:2205.02240
|
We calculate the skewness (the third moment $S_3$) of the matter distribution in dynamical dark energy cosmologies, paying particular attention to the impact of dark energy perturbations on this quantity. By properly allowing for dark energy perturbations, we show that their impact on $S_3$ is strong enough (a factor $\sim 3$ greater) to easily discriminate between clustering and non-clustering dark energy cosmologies. This indicates that high-order statistics of the cosmic density field are useful to the study of dark energy models and are potentially able to rule out clustering dark energy cosmologies.
|
arxiv:1912.00094
|
Hospital inpatient care cost is the largest component of health care expenditures in the US. At the same time, the number of non-hospital rehabilitative settings, such as skilled nursing facilities (SNFs), has increased. Lower costs and increased availability have made SNFs and other non-hospital rehabilitation settings a promising care alternative to hospitalization. To maximize their benefits, transitions to SNFs require special attention, since poorly coordinated transitions can lead to worse outcomes and higher costs via unnecessary hospital readmissions. This study presents a framework to improve care transitions based on the premise that certain SNFs may provide better care for some patients. We estimate readmission rates by SNF and patient types using observational data from a tertiary teaching hospital and nearby SNFs. We then analyze and solve a stochastic model optimizing patient transfer decisions to minimize readmissions. Our model accounts for patient discharge patterns and SNF capacity availability. We provide conditions under which an easy-to-use myopic policy, which assigns discharged patients to the available SNF with the lowest readmission rate, is optimal. We also show when an optimal policy has a threshold-like structure. Using estimated readmission rates, we compare the performance of the myopic policy and a proposed policy that depends on the discharge process, readmission rates, and future SNF availability. We evaluate when the myopic policy may be beneficial to use and when the proposed transfer heuristic provides a better alternative. Otherwise, we contend that using a stochastic optimization model for guiding transfer decisions may help reduce readmissions.
|
arxiv:2203.04335
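The myopic policy described in this abstract is simple enough to state in code. The data layout below (a dict of free beds per SNF and a dict of estimated readmission rates keyed by patient type and SNF) is an assumption made purely for illustration, not the paper's actual interface.

```python
def myopic_assign(patient_type, free_beds, readmission_rate):
    """Myopic transfer policy (sketch): send the discharged patient to
    the SNF that currently has capacity and the lowest estimated
    readmission rate for this patient type; return None when no SNF
    has a free bed (the patient waits or remains hospitalized)."""
    candidates = [snf for snf, beds in free_beds.items() if beds > 0]
    if not candidates:
        return None
    return min(candidates, key=lambda snf: readmission_rate[(patient_type, snf)])

# hypothetical estimates: SNF "B" is best for this patient type but full,
# so the myopic policy falls back to the best available facility, "A"
rates = {("hip_fracture", "A"): 0.18,
         ("hip_fracture", "B"): 0.11,
         ("hip_fracture", "C"): 0.25}
beds = {"A": 2, "B": 0, "C": 1}
choice = myopic_assign("hip_fracture", beds, rates)
```

The paper's point is precisely that this greedy rule ignores future availability: holding the last bed at a low-readmission SNF for an expected sicker arrival can outperform it, which is what their stochastic model captures.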
|
A combination of three or more tones played together is called a chord. In the chromatic scale, chords which are consonant are of particular interest and can be divided into several groups, two main ones being the major and minor chords. This paper shows that if three sounds are produced by three spatially separated sources, a "happy" sounding major chord can be observed as its "sad" sounding counterpart depending on the observer's velocity, a consequence of the well-known Doppler effect. The analysis is further extended to show that almost any triad may be observed by choosing an appropriate frame of reference, and several interesting symmetries, asymmetries and features of the system are discussed. Finally, the possibility of applications of this effect in music performance and creation in the context of the "interactive listener" is discussed, and suggestions for overcoming some technical difficulties are proposed.
|
arxiv:0807.2493
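The velocity needed to "sadden" a chord follows from the stationary-source, moving-observer Doppler formula $f' = f\,(c + v_r)/c$. A minimal sketch, assuming a speed of sound of 343 m/s and equal-tempered pitches: receding from the source playing the major third at $0.04c \approx 13.7$ m/s scales its frequency by $24/25$, turning the $5/4$ major third into a $6/5$ minor third.

```python
import math

C = 343.0  # assumed speed of sound in air, m/s

def observed_freq(freq, observer_vel, source_pos, observer_pos=(0.0, 0.0)):
    """Moving observer, stationary source: f' = f * (C + v_r) / C,
    where v_r is the component of the observer's velocity towards
    the source."""
    dx = source_pos[0] - observer_pos[0]
    dy = source_pos[1] - observer_pos[1]
    dist = math.hypot(dx, dy)
    v_r = (observer_vel[0] * dx + observer_vel[1] * dy) / dist
    return freq * (C + v_r) / C

# receding at 0.04*C from the source of the major third scales its
# frequency by (1 - 0.04) = 24/25, since (6/5) / (5/4) = 24/25
e4 = 329.63  # E4, the major third of a C major triad (assumed tuning)
shifted = observed_freq(e4, observer_vel=(-0.04 * C, 0.0), source_pos=(1.0, 0.0))
ratio = shifted / e4
```

Because the three sources sit in different directions, a single observer velocity shifts each tone by a different factor, which is what lets one triad morph into another.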
|
This set of lectures covers the very basics of flavor physics and is aimed to be an entry point to the subject. A lot of problems are provided in the hope of making the manuscript a self-study guide.
|
arxiv:1006.3534
|
The status of lattice QCD investigations at high temperature is reviewed. After a short introduction to thermal QCD on the lattice, we report on the present understanding of the phase diagram and the equation of state, in particular in the presence of dynamical quarks. We continue with a discussion of various screening lengths in the plasma phase, including results from dimensionally reduced QCD. This is followed by a summary of lattice data on quark number susceptibilities and spectral densities, both of which are of immediate relevance to the interpretation of heavy ion experiments. A major section is devoted to presenting simulations of QCD at small yet phenomenologically important values of the baryon density.
|
arxiv:hep-ph/0303042
|
The implementation of a spin qubit in a quantum ring occupied by one or a few electrons is proposed. The quantum bit involves the Zeeman sublevels of the highest occupied orbital. Such a qubit can be initialized, addressed, manipulated, read out and coherently coupled to other quantum rings. An extensive discussion of relaxation and decoherence is presented. By analogy with quantum dots, the spin relaxation times due to spin-orbit interaction are calculated for experimentally accessible quantum ring architectures. The conditions are formulated under which qubits built on quantum rings can have long relaxation times of the order of seconds. Rapidly improving nanofabrication technology has made such ring devices experimentally feasible and thus promising for quantum state engineering.
|
arxiv:1011.2540
|
Monocular depth prediction plays a crucial role in understanding 3D scene geometry. Although recent methods have achieved impressive progress in terms of evaluation metrics such as the pixel-wise relative error, most methods neglect the geometric constraints in the 3D space. In this work, we show the importance of high-order 3D geometric constraints for depth prediction. By designing a loss term that enforces a simple geometric constraint, namely, virtual normal directions determined by randomly sampled three points in the reconstructed 3D space, we significantly improve the accuracy and robustness of monocular depth estimation. Significantly, the virtual normal loss can not only improve the performance of learning metric depth, but also disentangle the scale information and enrich the model with better shape information. Therefore, when not having access to absolute metric depth training data, we can use virtual normal to learn a robust affine-invariant depth generated on diverse scenes. In experiments, we show state-of-the-art results of learning metric depth on NYU Depth-V2 and KITTI. From the high-quality predicted depth, we are now able to recover good 3D structures of the scene such as the point cloud and surface normal directly, eliminating the necessity of relying on additional models as was previously done. To demonstrate the excellent generalizability of learning affine-invariant depth on diverse data with the virtual normal loss, we construct a large-scale and diverse dataset for training affine-invariant depth, termed the Diverse Scene Depth dataset (DiverseDepth), and test on five datasets with the zero-shot test setting. Code is available at: https://git.io/depth
|
arxiv:2103.04216
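The virtual normal constraint lends itself to a compact NumPy sketch: back-project both depth maps to point clouds, sample random point triplets, and penalize the disagreement between the plane normals they span. The pinhole back-projection and the mean vector-difference penalty below are illustrative assumptions, not the paper's exact loss.

```python
import numpy as np

def plane_normal(points):
    """Unit normal of the plane through three 3-D points."""
    p0, p1, p2 = points
    n = np.cross(p1 - p0, p2 - p0)
    return n / (np.linalg.norm(n) + 1e-12)

def virtual_normal_loss(depth_pred, depth_gt, intrinsics_inv, n_samples=100, seed=0):
    """Sample random point triplets from the back-projected point clouds
    and penalize the difference between predicted and ground-truth
    virtual normals (a simplified stand-in for the paper's loss)."""
    h, w = depth_gt.shape
    rng = np.random.default_rng(seed)
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3)
    rays = pix @ intrinsics_inv.T          # back-project pixels to rays
    pc_pred = rays * depth_pred.reshape(-1, 1)
    pc_gt = rays * depth_gt.reshape(-1, 1)
    loss = 0.0
    for _ in range(n_samples):
        idx = rng.choice(h * w, size=3, replace=False)
        loss += np.linalg.norm(plane_normal(pc_pred[idx]) - plane_normal(pc_gt[idx]))
    return loss / n_samples

# identical depth maps give (near-)zero loss, and so does a globally
# scaled depth map: normals are invariant to a uniform depth scaling,
# illustrating why the loss disentangles scale from shape
h, w = 8, 8
K = np.array([[50.0, 0.0, 4.0], [0.0, 50.0, 4.0], [0.0, 0.0, 1.0]])
d = np.random.default_rng(1).uniform(1.0, 5.0, size=(h, w))
same = virtual_normal_loss(d, d, np.linalg.inv(K))
scaled = virtual_normal_loss(2.0 * d, d, np.linalg.inv(K))
```

The last line is the key property: scaling every depth by 2 scales every 3-D point by 2, leaving all plane normals unchanged, so the loss supervises shape independently of absolute scale.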
|
We survey the definition of the radial Julia set of a meromorphic function (in fact, more generally, any "Ahlfors islands map"), and give a simple proof that the Hausdorff dimension of the radial Julia set always coincides with the hyperbolic dimension.
|
arxiv:0712.4267
|
Dexterous manipulation of objects through fine control of physical contacts is essential for many important tasks of daily living. A fundamental ability underlying fine contact control is compliant control, \textit{i.e.}, controlling the contact forces while moving. For robots, the most widely explored approaches heavily depend on models of manipulated objects and expensive sensors to gather contact location and force information needed for real-time control. The models are difficult to obtain, and the sensors are costly, hindering personal robots' adoption in our homes and businesses. This study performs model-free reinforcement learning of a normal contact force controller on a robotic manipulation system built with a low-cost, information-poor tactile sensor. Despite the limited sensing capability, our force controller can be combined with a motion controller to enable fine contact interactions during object manipulation. Promising results are demonstrated in non-prehensile, dexterous manipulation experiments.
|
arxiv:2305.17843
|
Consider an operator equation (*) $B(u) + \epsilon u = 0$ in a real Hilbert space, where $\epsilon > 0$ is a small constant. The DSM (Dynamical Systems Method) for solving equation (*) consists of a construction of a Cauchy problem, which has the following properties: 1) it has a global solution for an arbitrary initial datum, 2) this solution tends to a limit as time tends to infinity, 3) the limit solves the equation $B(u) = 0$. Existence of the unique solution is proved by the DSM for equation (*) with monotone hemicontinuous operators $B$ defined on all of $H$. If $\epsilon = 0$ and equation (**) $B(u) = 0$ is solvable, the DSM yields a solution to (**).
|
arxiv:math/0404437
|
understanding the latent spaces learned by deep learning models is crucial in exploring how they represent and generate complex data. autoencoders ( aes ) have played a key role in the area of representation learning, with numerous regularization techniques and training principles developed not only to enhance their ability to learn compact and robust representations, but also to reveal how different architectures influence the structure and smoothness of the lower - dimensional non - linear manifold. we strive to characterize the structure of the latent spaces learned by different autoencoders including convolutional autoencoders ( caes ), denoising autoencoders ( daes ), and variational autoencoders ( vaes ) and how they change with the perturbations in the input. by characterizing the matrix manifolds corresponding to the latent spaces, we provide an explanation for the well - known observation that the latent spaces of cae and dae form non - smooth manifolds, while that of vae forms a smooth manifold. we also map the points of the matrix manifold to a hilbert space using distance preserving transforms and provide an alternate view in terms of the subspaces generated in the hilbert space as a function of the distortion in the input. the results show that the latent manifolds of cae and dae are stratified with each stratum being a smooth product manifold, while the manifold of vae is a smooth product manifold of two symmetric positive definite matrices and a symmetric positive semi - definite matrix.
|
arxiv:2412.04755
|
so far, existence of dissipative weak solutions for the compressible navier - stokes equations ( i. e. weak solutions satisfying the relative energy inequality ) is known only in the case of boundary conditions without inflow / outflow ( i. e., in particular, when the normal component of the velocity on the boundary of the flow domain is equal to zero ). most physical applications ( such as flows in wind tunnels, pipes, reactors of jet engines ) require considering non - zero inflow - outflow boundary conditions. we prove existence of dissipative weak solutions to the compressible navier - stokes equations in the barotropic regime ( adiabatic coefficient gamma > 3 / 2 in three dimensions, gamma > 1 in two dimensions ) with large velocity prescribed at the boundary and large density prescribed at the inflow boundary of a bounded piecewise regular lipschitz domain, without any restriction on the shape of the inflow / outflow boundaries or on the shape of the domain. it is well known that the relative energy inequality has many applications, e. g., to investigation of incompressible or inviscid limits, to the dimension reduction of flows, to the error estimates of numerical schemes. in this paper we deal with one of its basic applications, namely the weak - strong uniqueness principle.
|
arxiv:1905.02667
|
in the monitoring of a complex electric grid, it is of paramount importance to provide operators with early warnings of anomalies detected on the network, along with a precise classification and diagnosis of the specific fault type. in this paper, we propose a novel multi - stage early warning system prototype for electric grid fault detection, classification, subgroup discovery, and visualization. in the first stage, a computationally efficient anomaly detection method based on quartiles detects the presence of a fault in real time. in the second stage, the fault is classified into one of nine pre - defined disaster scenarios. the time series data are first mapped to highly discriminative features by applying dimensionality reduction based on temporal autocorrelation. the features are then mapped through one of three classification techniques : support vector machine, random forest, and artificial neural network. finally in the third stage, intra - class clustering based on dynamic time warping is used to characterize the fault with further granularity. results on the bonneville power administration electric grid data show that i ) the proposed anomaly detector is both fast and accurate ; ii ) dimensionality reduction leads to dramatic improvement in classification accuracy and speed ; iii ) the random forest method offers the most accurate, consistent, and robust fault classification ; and iv ) time series within a given class naturally separate into five distinct clusters which correspond closely to the geographical distribution of electric grid buses.
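the quartile - based detector of the first stage can be sketched as a tukey - fence test : any reading beyond a fixed multiple of the interquartile range from the quartiles is flagged. the fence multiplier and the synthetic voltage values below are illustrative assumptions, not the settings used on the bonneville power administration data.

```python
import random
from statistics import quantiles

def iqr_flags(samples, k=1.5):
    # tukey fences: anything beyond k * IQR from the quartiles is anomalous
    q1, _, q3 = quantiles(samples, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [x < lo or x > hi for x in samples]

# 500 nominal readings plus one injected fault (values are made up)
rng = random.Random(0)
signal = [rng.gauss(60.0, 0.05) for _ in range(500)] + [58.0]
flags = iqr_flags(signal)
```

the test is computationally cheap ( two order statistics per window ), which is why a quartile rule suits the real - time first stage.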
|
arxiv:1903.06700
|
an analysis of the homodyne tomography process that is often used to determine the wigner functions of quantum optical states is performed to consider the effects of the spatiotemporal degrees of freedom. the homodyne tomography process removes those parts of the input state that are not associated with the mode of the local oscillator by tracing out those degrees of freedom. using a functional approach to incorporate all the spatiotemporal degrees of freedom, we find that this reduction in the degrees of freedom introduces distortions in the observed wigner function. the analysis also shows how the homodyne tomography process introduces a resolution that depends on the strength of the local oscillator. as examples, we consider coherent states, fock states and squeezed vacuum states.
|
arxiv:2204.05063
|
this paper gives an overview of impersonation bots that generate output in one, or possibly, multiple modalities. we also discuss rapidly advancing areas of machine learning and artificial intelligence that could lead to frighteningly powerful new multi - modal social bots. our main conclusion is that most commonly known bots are one dimensional ( i. e., chatterbot ), and far from deceiving serious interrogators. however, using recent advances in machine learning, it is possible to unleash incredibly powerful, human - like armies of social bots, in potentially well coordinated campaigns of deception and influence.
|
arxiv:1706.05143
|
the effect of gas compression at the developed stages of flame acceleration in smooth - wall and obstructed channels is studied. we demonstrate analytically that gas compression moderates the acceleration rate and perform numerical simulations within the problem of flame transition to detonation. it is shown that flame acceleration undergoes three distinctive stages : 1 ) initial exponential acceleration in the incompressible regime, 2 ) moderation of the acceleration process due to gas compression, so that the exponential acceleration state goes over to a much slower one, 3 ) eventual saturation to a steady ( or statistically - steady ) high - speed deflagration velocity, which may be correlated with the chapman - jouguet deflagration speed. the possibility of deflagration - to - detonation transition is demonstrated.
|
arxiv:1203.1205
|
memory effects are studied in the simplest scalar - tensor theory, the brans - - dicke ( bd ) theory. to this end, we introduce, in bd theory, novel kundt spacetimes ( without and with gyratonic terms ), which serve as backgrounds for the ensuing analysis on memory. the bd parameter $ \ omega $ and the scalar field ( $ \ phi $ ) profile, expectedly, distinguish between different solutions. choosing specific localised forms for the free metric functions $ h ' ( u ) $ ( related to the wave profile ) and $ j ( u ) $ ( the gyraton ), we obtain displacement memory effects using both geodesics and geodesic deviation. an interesting and easy - to - understand exactly solvable case arises when $ \ omega = - 2 $ ( with $ j ( u ) $ absent ), which we discuss in detail. for other $ \ omega $ ( in the presence of $ j $ or without ), numerically obtained geodesics lead to results on displacement memory which appear to match qualitatively with those found from a deviation analysis. thus, the issue of how memory effects in bd theory may arise and also differ from their gr counterparts is now partially addressed, at least theoretically, within the context of this new class of kundt geometries.
|
arxiv:2011.12368
|
imaging antiferromagnetic 180 { \ deg } domains with actively controlled visibility is vital for both fundamental science and sophisticated applications. while optical second - harmonic generation ( shg ) is a well - known technique for distinguishing such domains in non - centrosymmetric antiferromagnets, a general material - based strategy to control domain contrast remains elusive. using van der waals antiferromagnet mnps $ _ 3 $ as a proof of concept, we demonstrate the tuning of nonreciprocity - induced domain contrast in shg through applying an in - plane electric field that transforms the magnetic point group to its unitary subgroup. the interference among intrinsic electric - dipole, magnetic - dipole, and field - induced electric - dipole transitions, each carrying distinct characters under space - inversion ( $ \ mathcal { p } $ ) and time - reversal ( $ \ mathcal { t } $ ) operations, enables large tuning of domain contrast and nonreciprocity in a broad spectral range. this strategy, generically applicable to systems characterized by $ \ mathcal { pt } $ - symmetric magnetic groups with a polar unitary subgroup, offers a path to fast electrical modulation of nonlinear nonreciprocal photonic behaviors using antiferromagnets.
|
arxiv:2401.11222
|
extraneous variables are variables that are irrelevant for a certain task, but heavily affect the distribution of the available data. in this work, we show that the presence of such variables can degrade the performance of deep - learning models. we study three datasets where there is a strong influence of known extraneous variables : classification of upper - body movements in stroke patients, annotation of surgical activities, and recognition of corrupted images. models trained with batch normalization learn features that are highly dependent on the extraneous variables. in batch normalization, the statistics used to normalize the features are learned from the training set and fixed at test time, which produces a mismatch in the presence of varying extraneous variables. we demonstrate that estimating the feature statistics adaptively during inference, as in instance normalization, addresses this issue, producing normalized features that are more robust to changes in the extraneous variables. this results in a significant gain in performance for different network architectures and choices of feature statistics.
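the train - time / test - time mismatch can be seen in a toy sketch : batch normalization freezes statistics estimated on the training set, while instance normalization recomputes them from the sample itself. the feature values and training statistics below are invented for illustration.

```python
from statistics import mean, pstdev

def batchnorm_infer(x, mu_train, sigma_train, eps=1e-5):
    # batch norm at test time: statistics are frozen from the training set
    return [(v - mu_train) / (sigma_train + eps) for v in x]

def instancenorm_infer(x, eps=1e-5):
    # instance norm: statistics are recomputed from the sample itself
    m, s = mean(x), pstdev(x)
    return [(v - m) / (s + eps) for v in x]

# a test sample shifted by an extraneous variable (values are made up);
# the hypothetical training statistics were mean 0, std 1
sample = [10.2, 10.8, 9.9, 10.5]
bn_out = batchnorm_infer(sample, 0.0, 1.0)
in_out = instancenorm_infer(sample)
```

with the frozen statistics the normalized features stay far off - center, while the adaptive statistics recenter them regardless of the shift, which is the robustness property the abstract describes.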
|
arxiv:2002.04019
|
we compute the generating function of random planar quadrangulations with three marked vertices at prescribed pairwise distances. in the scaling limit of large quadrangulations, this discrete three - point function converges to a simple universal scaling function, which is the continuous three - point function of pure 2d quantum gravity. we give explicit expressions for this universal three - point function both in the grand - canonical and canonical ensembles. various limiting regimes are studied when some of the distances become large or small. by considering the case where the marked vertices are aligned, we also obtain the probability law for the number of geodesic points, namely vertices that lie on a geodesic path between two given vertices, and at prescribed distances from these vertices.
|
arxiv:0805.2355
|
a recent trend in the context of graph theory is to bring theoretical analyses closer to empirical observations, by focusing the studies on random graph models that are used to represent practical instances. there, it was observed that geometric inhomogeneous random graphs ( girgs ) yield good representations of complex real - world networks, by expressing edge probabilities as a function that depends on ( heterogeneous ) vertex weights and distances in some underlying geometric space that the vertices are distributed in. while most of the parameters of the model are understood well, it was unclear how the dimensionality of the ground space affects the structure of the graphs. in this paper, we complement existing research into the dimension of geometric random graph models and the ongoing study of determining the dimensionality of real - world networks, by studying how the structure of girgs changes as the number of dimensions increases. we prove that, in the limit, girgs approach non - geometric inhomogeneous random graphs and present insights on how quickly the decay of the geometry impacts important graph structures. in particular, we study the expected number of cliques of a given size as well as the clique number and characterize phase transitions at which their behavior changes fundamentally. finally, our insights help in better understanding previous results about the impact of the dimensionality on geometric random graphs.
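one common girg parameterization connects vertices $ u $ and $ v $ with probability min ( 1, c ( w _ u w _ v / ( w d ( u, v ) ^ dim ) ) ^ alpha ) for positions on a torus ; the one - dimensional sketch below is only illustrative, and the constants and exact functional form in the paper may differ.

```python
import random

def torus_dist(x, y):
    # distance on the 1-d unit torus
    d = abs(x - y)
    return min(d, 1.0 - d)

def girg_sample(weights, positions, alpha=2.0, c=0.5, seed=1):
    # connect u, v with prob min(1, c * (w_u * w_v / (W * d(u, v)))**alpha)
    rng = random.Random(seed)
    W = sum(weights)
    n = len(weights)
    edges = set()
    for u in range(n):
        for v in range(u + 1, n):
            d = torus_dist(positions[u], positions[v])
            p = 1.0 if d == 0 else min(1.0, c * (weights[u] * weights[v] / (W * d)) ** alpha)
            if rng.random() < p:
                edges.add((u, v))
    return edges

# two heavy, nearby vertices connect with probability 1
edges = girg_sample([10.0, 10.0], [0.0, 0.01])
```

heterogeneous weights produce hubs and the distance term produces geometry ; as the number of dimensions grows, the paper shows the geometric term washes out and the model approaches a non - geometric inhomogeneous random graph.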
|
arxiv:2302.04113
|
the successful assembly of heterostructures consisting of several layers of different 2d materials in arbitrary order by exploiting van der waals forces has truly been a game changer in the field of low dimensional physics. for instance, the encapsulation of graphene or mos2 between atomically flat hexagonal boron nitride ( hbn ) layers with strong affinity and graphitic gates that screen charge impurity disorder provided access to a plethora of interesting physical phenomena by drastically boosting the device quality. the encapsulation is accompanied by a self - cleansing effect at the interfaces. the otherwise predominant charged impurity disorder is minimized and random strain fluctuations ultimately constitute the main source of residual disorder. despite these advances, the fabricated heterostructures still vary notably in their performance. while some achieve record mobilities, others only possess mediocre quality. here, we report a reliable method to improve fully completed van der waals heterostructure devices with a straightforward post - processing surface treatment based on thermal annealing and contact mode afm. the impact is demonstrated by comparing magnetotransport measurements before and after the afm treatment on one and the same device as well as on a larger set of treated and untreated devices to collect device statistics. both the low temperature properties as well as the room temperature electrical characteristics, as relevant for applications, improve on average substantially. we surmise that the main beneficial effect arises from reducing nanometer scale corrugations at the interfaces, i. e. the detrimental impact of random strain fluctuations.
|
arxiv:1903.10260
|
lhcb is a fully instrumented forward spectrometer with particle identification and muon reconstruction covering the pseudorapidity ( $ \ eta $ ) range from 2 to 5. its full jet reconstruction capability makes the lhcb experiment a suitable venue to explore jet substructure observables, particularly the formation of hadrons and heavy quarkonia resonances inside jets. this contribution presents a brief overview of the ongoing research program and discusses recent results on non - identified charged hadron distributions in z - tagged jets and charmonium distributions within jets. future work towards furthering the knowledge of hadronization is also discussed.
|
arxiv:2108.09294
|
we present an overview of the coupled - channels optical model and the hauser - feshbach theory code coh $ _ 3 $, which focuses on the nuclear reaction calculations in the kev to tens of mev region with special attention to the nuclear deformation. the code consists of three major sections that undertake the one - body potential mean - field theory, the coupled - channels optical model, and the hauser - feshbach statistical decay. there are other complementary segments to perform the whole nuclear reaction calculations, such as the direct / semidirect radiative capture process, pre - equilibrium process, and prompt fission neutron emission.
|
arxiv:1901.05641
|
in bilayer quantum hall ( blqh ) systems at $ \ nu $ = 2, three different kinds of ground states are expected to be realized, i. e. a spin polarized phase ( spin phase ), a pseudospin polarized phase ( ppin phase ) and a canted antiferromagnetic phase ( c - phase ). an su ( 4 ) scheme gives a powerful tool to investigate blqh systems, which have not only the spin su ( 2 ) but also the layer ( pseudospin ) su ( 2 ) degrees of freedom. in this paper, we discuss an origin of the c - phase in the su ( 4 ) context and investigate su ( 4 ) coherent effects on it. we show that peculiar operators in the su ( 4 ) group which do not exist in the su $ _ { \ text { spin } } $ ( 2 ) $ \ otimes $ su $ _ { \ text { ppin } } $ ( 2 ) group play a key role in its realization. it is also pointed out that not only spins but also pseudospins are ` ` canted ' ' in the c - phase.
|
arxiv:cond-mat/0302377
|
we use borehole resistivity measurements to map the electrical properties of the subsurface and to increase the productivity of a reservoir. when used for geosteering purposes, it becomes essential to invert them in real time. in this work, we explore the possibility of using deep neural network ( dnn ) to perform a rapid inversion of borehole resistivity measurements. herein, we build a dnn that approximates the following inverse problem : given a set of borehole resistivity measurements, the dnn is designed to deliver a physically meaningful and data - consistent piecewise one - dimensional layered model of the surrounding subsurface. once the dnn is built, we can perform the actual inversion of the field measurements in real time. we illustrate the performance of dnn of logging - while - drilling measurements acquired on high - angle wells via synthetic data.
|
arxiv:1810.04522
|
boundary condition changing operators in conformal field theory describe various types of " sudden switching " problems in condensed matter physics such as the x - ray edge singularity. we review this subject and give two extensions of previous work. a general derivation of a connection between the x - ray edge singularity, the anderson orthogonality catastrophe and finite - size scaling of energies is given. the formalism is also extended to include bound states.
|
arxiv:hep-th/9611064
|
the generation of an early kination dominated era within a tracking quintessential model is investigated. the relic density of the weakly interacting massive particles ( wimps ) is calculated, and we show that it can be enhanced with respect to its value in the standard cosmology. by adjusting the parameters of the quintessential scenario, the cold dark matter abundance in the universe can become compatible with large values for the annihilation cross section times the velocity of the wimps. using these values and assuming that the wimps annihilate predominantly to $ \ mu ^ + \ mu ^ - $, we calculate the induced fluxes of $ e ^ \ pm $ cosmic rays and fit the current pamela and fermi - lat data. we achieve rather good fits in conjunction with a marginal fulfillment of the restriction arising from the cosmic microwave background radiation.
|
arxiv:1001.2870
|
the cornell electron - positron storage ring ( cesr ) has been converted from a high energy physics electron - positron collider to operate as a dedicated synchrotron light source for the cornell high energy synchrotron source ( chess ) and to conduct accelerator physics research as a test accelerator, capable of studying topics relevant to future damping rings, colliders and light sources. some of the specific topics that were targeted for the initial phase of operation of the storage ring in this mode, labeled cesrta ( cesr as a test accelerator ), included 1 ) tuning techniques to produce low emittance beams, 2 ) the study of electron cloud development in a storage ring and 3 ) intra - beam scattering effects. the complete conversion of cesr to cesrta occurred over a several year period, described elsewhere. as a part of this conversion the cesr beam position monitoring ( cbpm ) system was completely upgraded to provide the needed instrumental capabilities for these studies. this paper describes the new cbpm system hardware, its function and representative measurements performed by the upgraded system.
|
arxiv:1706.00360
|
efficient inference for wide output layers ( wols ) is an essential yet challenging task in large scale machine learning. most approaches reduce this problem to approximate maximum inner product search ( mips ), which relies heavily on the observation that for a given model, ground truth labels correspond to logits of highest value during full model inference. however, such an assumption is restrictive in practice. in this paper, we argue that approximate mips subroutines, despite having sub - linear computation time, are sub - optimal because they are tailored for retrieving large inner products with high recall instead of retrieving the correct labels. with wol, the labels often have moderate inner products, which makes approximate mips more challenging. we propose an alternative problem formulation, called label superior sampling ( lss ), where the objective is to tailor the system to ensure retrieval of the correct label. accordingly, we propose a novel learned hash approach, which is significantly more efficient and sufficient for high inference accuracy than mips baselines. our extensive evaluation indicates that lss can match or even outperform full inference accuracy with around 5x speed up and 87 % energy reduction.
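for reference, the exact mips baseline that approximate indexes accelerate is a brute - force inner - product scan over all output classes ; the learned hash of lss itself is not reproduced here, and the toy catalog below is invented.

```python
def exact_mips(query, vectors, k=1):
    # score every candidate by inner product and keep the top-k indices
    scores = [(sum(q * x for q, x in zip(query, v)), i)
              for i, v in enumerate(vectors)]
    scores.sort(reverse=True)
    return [i for _, i in scores[:k]]

catalog = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]]
top = exact_mips([1.0, 1.0], catalog, k=1)
```

this scan is linear in the number of classes, which is exactly what becomes prohibitive for wide output layers ; the paper's point is that replacing it with a retrieval step tuned for large inner products, rather than for the correct label, is the wrong objective.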
|
arxiv:2007.01230
|
in the first part of this manuscript a relationship between the spectrum of self - adjoint operator matrices and the spectra of their diagonal entries is found. this leads to enclosures for spectral points and in particular, enclosures for eigenvalues. we also consider graph invariant subspaces, and their corresponding angular operators. the existence of a bounded angular operator leads to basis properties of the first component of eigenvectors of operator matrices for which the corresponding eigenvalues lie in a half line. the results are applied to an example from magnetohydrodynamics.
|
arxiv:1309.2100
|
the next - to - leading order ( nlo ) qcd calculation for the isolated photon and isolated photon plus jet photoproduction at the ep collider desy hera is presented. the predictions for the isolated photon with no restrictions imposed on the jet are compared with the previous ones obtained in the small cone approximation, and the differences are found to be below 2 %. the theoretical uncertainties in the cross section of the photoproduction of the photon plus jet are discussed. a short comparison with the new preliminary h1 data and with the nlo predictions of fontannaz et al. is also presented.
|
arxiv:hep-ph/0309308
|
we apply statistical tests, based on the study of the coefficients in a wavelet decomposition, to a cosmological signal : the cosmic microwave background ( cmb ) anisotropies. the latter represent the superposition of primary anisotropy imprints of the initial density perturbations and secondary anisotropies due to photon interactions after recombination. in an inflationary scenario with gaussian distributed fluctuations, we study the statistical signature of the secondary effects. more specifically, we investigate the dominant effects arising from the sunyaev - zel ' dovich effect of galaxy clusters. our study predicts the non - gaussian signature of these secondary anisotropies and its detectability in the context of the future cmb satellite planck surveyor.
|
arxiv:astro-ph/0003256
|
hubble space telescope observations of black hole x - ray transients are discussed in the context of the disk instability outburst model. we focus on the multiwavelength campaign following gro j1655 - 40 through the summer 1996 outburst.
|
arxiv:astro-ph/9708157
|
locations ref _ a and ref _ b, then evaluate the function ' s body with those references passed in. this gives the function the ability to look up the original argument values passed in through dereferencing the parameters ( some languages use specific operators to perform this ), to modify them via assignment as if they were local variables, and to return values via the references. this is the call - by - reference evaluation strategy. evaluation strategy is part of the semantics of the programming language definition. some languages, such as purescript, have variants with different evaluation strategies. some declarative languages, such as datalog, support multiple evaluation strategies. some languages define a calling convention. in rewriting, a reduction strategy or rewriting strategy is a relation specifying a rewrite for each object or term, compatible with a given reduction relation. a rewriting strategy specifies, out of all the reducible subterms ( redexes ), which one should be reduced ( contracted ) within a term. one of the most common systems involves lambda calculus. = = well - defined expressions = = the language of mathematics exhibits a kind of grammar ( called formal grammar ) about how expressions may be written. there are two considerations for well - definedness of mathematical expressions, syntax and semantics. syntax is concerned with the rules used for constructing, or transforming the symbols of an expression without regard to any interpretation or meaning given to them. expressions that are syntactically correct are called well - formed. semantics is concerned with the meaning of these well - formed expressions. expressions that are semantically correct are called well - defined. 
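the call - by - reference behavior described above can be simulated in a language without first - class references by passing a mutable cell ; here a one - element list stands in for the memory location ref _ a, and a plain parameter shows the contrasting call - by - value behavior.

```python
# python has no first-class references; a one-element list serves as the
# memory cell ref_a that the callee can dereference and assign through

def increment(ref):
    ref[0] = ref[0] + 1  # assignment through the reference is visible to the caller

def increment_value(x):
    x = x + 1            # call-by-value: rebinding is local, caller unaffected

ref_a = [41]
increment(ref_a)   # the caller's cell is modified

n = 41
increment_value(n) # the caller's variable is untouched
```

the same trick underlies how languages without reference parameters emulate them, at the cost of an explicit dereference ( the `[0]` indexing ) at every use.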
= = = well - formed = = = the syntax of mathematical expressions can be described somewhat informally as follows : the allowed operators must have the correct number of inputs in the correct places ( usually written with infix notation ), the sub - expressions that make up these inputs must be well - formed themselves, have a clear order of operations, etc. strings of symbols that conform to the rules of syntax are called well - formed, and those that are not are called ill - formed and do not constitute mathematical expressions. for example, in arithmetic, the expression 1 + 2 × 3 is well - formed, but × 4 ) x +, / y is not. however, being well - formed is not enough to be considered well - defined. for example, in arithmetic, the expression 1 / 0 is well - formed, but it is not well - defined.
|
https://en.wikipedia.org/wiki/Expression_(mathematics)
|
we classify conformally flat riemannian $ 3 - $ manifolds which possess a free isometric $ s ^ 1 - $ action.
|
arxiv:0902.4555
|
very recently, a 3 $ d $ based honeycomb cobaltate na $ _ 2 $ co $ _ 2 $ teo $ _ 6 $ has garnered tremendous attention due to the proposed proximity to the kitaev spin - liquid state as its 4 $ d $ / 5 $ d $ counterparts. here, we use zn to substitute co in a broad range and perform systematic studies on na $ _ 2 $ co $ _ { 2 - x } $ zn $ _ x $ teo $ _ 6 $ by structural, magnetic, and thermodynamic measurements, and track the doping evolution of its magnetic ground states. due to the extremely close radii of zn $ ^ { 2 + } $ and high - spin co $ ^ { 2 + } $ ions, the substitution can be easily achieved. x - ray diffractions reveal no structural transition but only minor changes on the lattice parameter $ c $ over a wide substitution range $ 0 \ leq x \ leq 1. 5 $. magnetic susceptibility and specific heat measurements both suggest an antiferromagnetic ground state which is gradually suppressed with doping. it can survive with $ x $ up to $ \ sim1. 0 $. then it evolves into a spin - glass phase with short - range order that is rapidly supplanted by a magnetically disordered state when $ x \ geq 1. 3 $. by summarizing all these data, we construct a magnetic phase diagram of na $ _ 2 $ co $ _ { 2 - x } $ zn $ _ x $ teo $ _ 6 $. our results demonstrate that the zn doping can effectively suppress the magnetic order and induce a possible quantum paramagnetic state. these may serve as a platform to investigate the kitaev physics in this system.
|
arxiv:2302.00314
|
demands for implementing original oss that can achieve high i / o performance on pc / at compatible hardware have recently been increasing, but conventional os debugging environments have not been able to simultaneously assure their stability, be easily customized to new oss and new i / o devices, and assure efficient execution of i / o operations. we therefore developed a novel os debugging method using a lightweight virtual machine. we evaluated this debugging method experimentally and confirmed that it can transfer data about 5. 4 times as fast as the conventional virtual machine monitor.
|
arxiv:0710.4635
|
articulated robots such as manipulators increasingly must operate in uncertain and dynamic environments where interaction ( with human coworkers, for example ) is necessary. in these situations, the capacity to quickly adapt to unexpected changes in operational space constraints is essential. at certain points in a manipulator ' s configuration space, termed singularities, the robot loses one or more degrees of freedom ( dof ) and is unable to move in specific operational space directions. the inability to move in arbitrary directions in operational space compromises adaptivity and, potentially, safety. we introduce a geometry - aware singularity index, defined using a riemannian metric on the manifold of symmetric positive definite matrices, to provide a measure of proximity to singular configurations. we demonstrate that our index avoids some of the failure modes and difficulties inherent to other common indices. further, we show that this index can be differentiated easily, making it compatible with local optimization approaches used for operational space control. our experimental results establish that, for reaching and path following tasks, optimization based on our index outperforms a common manipulability maximization technique and ensures singularity - robust motions.
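for contrast with the paper's riemannian index, the classical yoshikawa manipulability ( the baseline the authors compare against ) for a square jacobian reduces to | det j |. the two - link planar arm below, with unit link lengths, is a standard textbook example whose jacobian loses rank when the arm is fully extended.

```python
from math import sin, cos, pi

def jacobian_2link(t1, t2, l1=1.0, l2=1.0):
    # geometric jacobian of a planar 2-link arm (textbook form)
    s1, c1 = sin(t1), cos(t1)
    s12, c12 = sin(t1 + t2), cos(t1 + t2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def manipulability(J):
    # yoshikawa's index sqrt(det(J J^T)) reduces to |det J| for square J
    return abs(J[0][0] * J[1][1] - J[0][1] * J[1][0])

w_bent = manipulability(jacobian_2link(0.3, pi / 2))   # elbow bent: full rank
w_straight = manipulability(jacobian_2link(0.3, 0.0))  # fully extended: singular
```

for this arm the determinant works out to $ l _ 1 l _ 2 \ sin \ theta _ 2 $, so the index vanishes exactly at the extended ( and folded ) configurations, which is the degenerate behavior near singularities that motivates the geometry - aware replacement.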
|
arxiv:2103.05362
|
in this paper, we establish a new quasi - shadowing property for any nonuniformly partially hyperbolic set of a $ c ^ { 1 + \ alpha } $ diffeomorphism, which is adaptive to the movement of the pseudo - orbit. moreover, the quasi - specification property and quasi - closing property are also investigated. as an application of the quasi - closing property, we extend katok ' s result on the growth of periodic orbits for hyperbolic ergodic measures to any ergodic measure : the number of quasi - periodic points grows exponentially at a rate at least the metric entropy.
|
arxiv:2501.03071
|
the rapid development of large multimodal models ( lmms ) has significantly advanced multimodal understanding by harnessing the language abilities of large language models ( llms ) and integrating modality - specific encoders. however, lmms are plagued by hallucinations that limit their reliability and adoption. while traditional methods to detect and mitigate these hallucinations often involve costly training or rely heavily on external models, recent approaches utilizing internal model features present a promising alternative. in this paper, we critically assess the limitations of the state - of - the - art training - free technique, the logit lens, in handling generalized visual hallucinations. we introduce contextuallens, a refined method that leverages contextual token embeddings from middle layers of lmms. this approach significantly improves hallucination detection and grounding across diverse categories, including actions and ocr, while also excelling in tasks requiring contextual understanding, such as spatial relations and attribute comparison. our novel grounding technique yields highly precise bounding boxes, facilitating a transition from zero - shot object segmentation to grounded visual question answering. our contributions pave the way for more reliable and interpretable multimodal models.
|
arxiv:2411.19187
|
the study of markov models is central to control theory and machine learning. a quantum analogue of partially observable markov decision process was studied in ( barry, barry, and aaronson, phys. rev. a, 90, 2014 ). it was proved that goal - state reachability is undecidable in the quantum setting, whereas it is decidable classically. in contrast to this classical - to - quantum transition from decidable to undecidable, we observe that the problem of approximating the optimal policy which maximizes the average discounted reward over an infinite horizon remains decidable in the quantum setting. given that most relevant problems related to markov decision process are undecidable classically ( which immediately implies undecidability in the quantum case ), this provides one of the few examples where the quantum problem is tractable.
|
arxiv:1911.01953
|
background: precise prediction of cancer types is vital for cancer diagnosis and therapy. important cancer marker genes can be inferred through predictive models. several studies have attempted to build machine learning models for this task; however, none has taken into consideration the effects of tissue of origin that can potentially bias the identification of cancer markers. results: in this paper, we introduced several convolutional neural network (cnn) models that take unstructured gene expression inputs to classify tumor and non-tumor samples into their designated cancer types or as normal. based on different designs of gene embeddings and convolution schemes, we implemented three cnn models: 1d-cnn, 2d-vanilla-cnn, and 2d-hybrid-cnn. the models were trained and tested on a combined 10,340 samples of 33 cancer types and 731 matched normal tissues of the cancer genome atlas (tcga). our models achieved excellent prediction accuracies (93.9-95.0%) among 34 classes (33 cancers and normal). furthermore, we interpreted one of the models, the 1d-cnn model, with a guided saliency technique and identified a total of 2,090 cancer markers (108 per class). the concordance of differential expression of these markers between the cancer type they represent and the others is confirmed. in breast cancer, for instance, our model identified well-known markers such as gata3 and esr1. finally, we extended the 1d-cnn model to prediction of breast cancer subtypes and achieved an average accuracy of 88.42% among 5 subtypes. the code can be found at https://github.com/chenlabgccri/cancertypeprediction.
|
arxiv:1906.07794
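a minimal numpy sketch of the 1d-cnn idea in the abstract above: a single-channel convolution over a gene expression vector, global max pooling, and a 34-class linear head. all shapes, the random weights, and the single-channel gene embedding are illustrative assumptions, not the paper's trained architecture.

```python
import numpy as np

def conv1d(x, kernels, stride=1):
    """Valid 1-D convolution with ReLU: x is (length, channels), kernels is (k, channels, filters)."""
    k, _, f = kernels.shape
    out_len = (x.shape[0] - k) // stride + 1
    out = np.empty((out_len, f))
    for i in range(out_len):
        window = x[i * stride : i * stride + k]          # (k, channels)
        out[i] = np.tensordot(window, kernels, axes=([0, 1], [0, 1]))
    return np.maximum(out, 0.0)                          # ReLU activation

def predict_logits(expr, kernels, w_out):
    """expr: (genes,) expression vector -> class logits via conv + global max pool."""
    feat = conv1d(expr[:, None], kernels)                # expression treated as one channel
    pooled = feat.max(axis=0)                            # global max pooling over gene axis
    return pooled @ w_out                                # linear classifier head

rng = np.random.default_rng(0)
expr = rng.random(100)                                   # toy 100-gene expression profile
kernels = rng.standard_normal((5, 1, 8)) * 0.1           # 8 filters of width 5 (hypothetical)
w_out = rng.standard_normal((8, 34)) * 0.1               # 34 classes (33 cancers + normal)
logits = predict_logits(expr, kernels, w_out)
print(logits.shape)                                      # (34,)
```

in the paper's setting the weights would be learned end-to-end; here they are random, so only the shapes and data flow are meaningful.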
|
deterministic constructions of expander graphs have been an important topic of research in computer science and mathematics, with many well-studied constructions of infinite families of expanders. in some applications, though, an infinite family is not enough: we need expanders which are "close" to each other. we study the following question: construct an infinite sequence of expanders $g_0, g_1, \dots$, such that for every two consecutive graphs $g_i$ and $g_{i+1}$, $g_{i+1}$ can be obtained from $g_i$ by adding a single vertex and inserting/removing a small number of edges, which we call the expansion cost of transitioning from $g_i$ to $g_{i+1}$. this question is very natural, e.g., in the context of datacenter networks, where the vertices represent racks of servers, and the expansion cost captures the amount of rewiring needed when adding another rack to the network. we present an explicit construction of $d$-regular expanders with expansion cost at most $5d/2$, for any $d \geq 6$. our construction leverages the notion of a "2-lift" of a graph. this operation was first analyzed by bilu and linial, who repeatedly applied 2-lifts to construct an infinite family of expanders which double in size from one expander to the next. our construction can be viewed as a way to "interpolate" between bilu-linial expanders with low expansion cost while preserving good edge expansion throughout. while our main motivation is centralized (datacenter networks), we also get the best-known distributed expander construction in the "self-healing" model.
|
arxiv:1507.01196
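the 2-lift operation that the construction above builds on is easy to state in code: each vertex is duplicated, and each signed edge is copied either in parallel (+1) or crosswise (-1). the sign assignment below is a toy illustration, not the paper's expansion-cost-optimizing choice.

```python
def two_lift(edges, signs):
    """2-lift of a graph: vertex v splits into (v, 0) and (v, 1); a +1 edge is
    copied in parallel, a -1 edge crosswise. The lift is d-regular iff the base is."""
    lifted = []
    for (u, v), s in zip(edges, signs):
        if s == +1:
            lifted += [((u, 0), (v, 0)), ((u, 1), (v, 1))]
        else:
            lifted += [((u, 0), (v, 1)), ((u, 1), (v, 0))]
    return lifted

# lifting a triangle: all-(+1) signs give two disjoint triangles on 6 vertices,
# while flipping one sign to -1 glues the two copies into a single 6-cycle
triangle = [(0, 1), (1, 2), (2, 0)]
print(len(two_lift(triangle, [+1, +1, +1])))  # 6
```

bilu and linial showed that a good choice of signs roughly preserves the spectral gap while doubling the graph; the paper above interpolates between such doublings one vertex at a time.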
|
existing 3d instance segmentation methods typically assume that all semantic classes to be segmented are available during training, and only seen categories are segmented at inference. we argue that such a closed-world assumption is restrictive, and explore for the first time 3d indoor instance segmentation in an open-world setting, where the model is allowed to distinguish a set of known classes as well as identify an unknown object as unknown, and then later incrementally learn the semantic category of the unknown once the corresponding category labels are available. to this end, we introduce an open-world 3d indoor instance segmentation method in which an auto-labeling scheme is employed to produce pseudo-labels during training and induce separation between known and unknown category labels. we further improve the pseudo-label quality at inference by adjusting the unknown class probability based on the objectness score distribution. we also introduce carefully curated open-world splits leveraging realistic scenarios based on inherent object distribution, region-based indoor scene exploration, and the randomness aspect of open-world classes. extensive experiments reveal the efficacy of the proposed contributions, leading to promising open-world 3d instance segmentation performance.
|
arxiv:2309.14338
|
let p > 3 be a prime and let e, e' be supersingular elliptic curves over f_p. we want to construct an isogeny phi: e -> e'. the currently fastest algorithm for finding isogenies between supersingular elliptic curves solves this problem by performing a "meet-in-the-middle" breadth-first search in the full supersingular 2-isogeny graph over f_{p^2}. in this paper we consider the structure of the isogeny graph of supersingular elliptic curves over f_p. we give an algorithm to construct isogenies between such supersingular elliptic curves that works faster than the usual algorithm. we then discuss how these results can be used to obtain an improved algorithm for the general supersingular isogeny problem.
|
arxiv:1310.7789
|
we present an elementary construction of the non-connective algebraic k-theory spectrum associated to an additive category, following the contracted functor approach due to bass. it comes with a universal property that easily allows us to identify it with other constructions, for instance with the one of pedersen-weibel in terms of z^i-graded objects and bounded homomorphisms.
|
arxiv:1303.1272
|
we present a method for training characters to manipulate amorphous materials such as those often used in cooking. common examples of amorphous materials include granular materials (salt, uncooked rice), fluids (honey), and visco-plastic materials (sticky rice, softened butter). a typical task is to spread a given material out across a flat surface using a tool such as a scraper or knife. we use reinforcement learning to train our controllers to manipulate materials in various ways. the training is performed in a physics simulator that uses position-based dynamics of particles to simulate the materials to be manipulated. the neural network control policy is given observations of the material (e.g. a low-resolution density map), and the policy outputs actions such as rotating and translating the knife. we demonstrate policies that have been successfully trained to carry out the following tasks: spreading, gathering, and flipping. we produce a final animation by using inverse kinematics to guide a character's arm and hand to match the motion of a manipulation tool such as a knife or a frying pan.
|
arxiv:2103.02533
|
results of vlbi and gps observations were analyzed with the goal of investigating differences in observed baseline lengths derived from the two techniques. vlbi coordinates for european stations were obtained from processing of all available observations collected on the european and global vlbi networks. an advanced model for antenna thermal deformation was applied to account for the change of the horizontal component of baseline length. gps data were obtained from re-processing of the weekly epn (european permanent gps network) solutions. systematic differences between results obtained with the two techniques, including linear drift and seasonal effects, are determined.
|
arxiv:1102.0661
|
a novel extension of independent component and independent vector analysis for blind extraction/separation of one or several sources from time-varying mixtures is proposed. the mixtures are assumed to be separable source-by-source, in series or in parallel, based on a recently proposed mixing model that allows for movements of the desired source while the separating beamformer is time-invariant. the popular fastica algorithm is extended for these mixtures in one-unit, symmetric, and block-deflation variants. the algorithms are derived within a unified framework so that they are applicable in the real-valued as well as complex-valued domains, and jointly to several mixtures, similar to independent vector analysis. a performance analysis of the one-unit algorithm is provided; it shows its asymptotic efficiency under the given mixing and statistical models. numerical simulations corroborate the validity of the analysis, confirm the usefulness of the algorithms in separation of moving sources, and show the superior speed of convergence and ability to separate super-gaussian as well as sub-gaussian signals.
|
arxiv:2007.11241
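the classical one-unit fastica fixed-point iteration that the abstract above extends can be sketched as follows (time-invariant, real-valued mixing only; the mixing matrix, sample sizes, and tanh contrast below are illustrative assumptions, not the paper's moving-source algorithm).

```python
import numpy as np

def whiten(x):
    """Decorrelate and normalise variance of x (dims, samples) via eigendecomposition."""
    x = x - x.mean(axis=1, keepdims=True)
    d, e = np.linalg.eigh(np.cov(x))
    return e @ np.diag(d ** -0.5) @ e.T @ x

def fastica_one_unit(x, n_iter=200, seed=0):
    """One-unit FastICA on whitened x (dims, samples) with tanh contrast.
    Fixed point: w <- E[x g(w.x)] - E[g'(w.x)] w, then renormalise."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(x.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        wx = w @ x
        g, g_prime = np.tanh(wx), 1.0 - np.tanh(wx) ** 2
        w_new = (x * g).mean(axis=1) - g_prime.mean() * w
        w_new /= np.linalg.norm(w_new)
        if abs(w_new @ w) > 1.0 - 1e-10:     # converged up to sign
            return w_new
        w = w_new
    return w

rng = np.random.default_rng(1)
s = np.vstack([rng.laplace(size=20000),          # super-gaussian source
               rng.uniform(-1, 1, size=20000)])  # sub-gaussian source
x = whiten(np.array([[1.0, 0.5], [0.3, 1.0]]) @ s)   # hypothetical static mixing
w = fastica_one_unit(x)
y = w @ x
corr = max(abs(np.corrcoef(y, s[0])[0, 1]), abs(np.corrcoef(y, s[1])[0, 1]))
print(round(corr, 2))  # close to 1: one source recovered (sign/order ambiguous)
```

the paper replaces the static mixing assumption with a time-varying model and derives analogous one-unit, symmetric, and block-deflation updates; the skeleton of the fixed-point iteration stays the same.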
|
we study the relation between the ising problem hamiltonian parameters and the minimum spectral gap (min-gap) of the system hamiltonian in the ising-based quantum annealer. the main argument we use in this paper to assess the performance of a qa algorithm is the presence or absence of an anti-crossing during quantum evolution. for this purpose, we introduce a new parametrization definition of the anti-crossing. using the maximum-weighted independent set (mis) problem, in which there are flexible parameters (energy penalties $j$ between pairs of edges) in an ising formulation, as the model problem, we construct examples to show that by changing the value of $j$, we can change the quantum evolution from one that has an anti-crossing (which results in an exponentially small min-gap) to one that does not, or the other way around, and thus drastically change (increase or decrease) the min-gap. however, we also show that by changing the value of $j$ alone, one cannot avoid the anti-crossing. we recall a polynomial reduction from an ising problem to an mis problem to show that this flexibility of changing parameters without changing the problem to be solved can be applied to any ising problem. as an example, we show that by such a reduction alone, it is possible to remove the anti-crossing and thus increase the min-gap. our anti-crossing definition is necessarily scaling invariant, as scaling the problem hamiltonian does not change the nature (i.e. presence or absence) of an anti-crossing. as a side note, we show exactly how the min-gap is scaled if we scale the problem hamiltonian by a constant factor.
|
arxiv:1910.02985
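a minimal brute-force sketch of the mis-to-ising encoding discussed above, with a uniform edge penalty j playing the role of the paper's flexible parameters (the 5-cycle instance and the value j = 2 are toy choices, not examples from the paper).

```python
from itertools import product

def mis_energy(x, edges, j):
    """MIS in Ising/QUBO form with x_i in {0, 1}: H = -sum_i x_i + j * sum_edges x_i x_j.
    For any penalty j > 1, every ground state is a maximum independent set."""
    return -sum(x) + j * sum(x[u] * x[v] for u, v in edges)

def ground_states(n, edges, j):
    """Enumerate all 2^n configurations and return those of minimum energy."""
    configs = list(product([0, 1], repeat=n))
    e_min = min(mis_energy(x, edges, j) for x in configs)
    return [x for x in configs if mis_energy(x, edges, j) == e_min]

# 5-cycle: the maximum independent sets all have size 2, and there are 5 of them
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
gs = ground_states(5, edges, j=2.0)
print(sorted(sum(x) for x in gs))  # [2, 2, 2, 2, 2]
```

the paper's point is that varying j (with j > 1) leaves this ground-state set untouched while reshaping the annealing spectrum, which is what makes the min-gap tunable without changing the problem being solved.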
|
we calculate moments of the so-called kesten distribution by means of the expansion of the denominator of the density of this distribution, and then integrate all summands with respect to the semicircle distribution. by comparing this expression with the formulae for the moments of kesten's distribution obtained by other means, we find identities involving polynomials whose power coefficients are closely related to catalan numbers, catalan triangles, and binomial coefficients. finally, as applications of these identities, we obtain various interesting relations between the aforementioned numbers, also concerning lucas, fibonacci and fine numbers.
|
arxiv:2106.10461
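the semicircle-moment machinery underlying the abstract above can be checked numerically: the 2n-th moment of the radius-2 semicircle law equals the n-th catalan number. the quadrature scheme below is an illustrative sketch, not the paper's symbolic computation.

```python
from math import comb, pi, sqrt

def catalan(n):
    """n-th Catalan number C_n = C(2n, n) / (n + 1)."""
    return comb(2 * n, n) // (n + 1)

def semicircle_moment(k, steps=200000):
    """Midpoint-rule k-th moment of the semicircle density (1/2pi) sqrt(4 - x^2) on [-2, 2]."""
    h = 4.0 / steps
    total = 0.0
    for i in range(steps):
        x = -2.0 + (i + 0.5) * h
        total += x ** k * sqrt(4.0 - x * x) / (2.0 * pi) * h
    return total

print([catalan(n) for n in range(5)])   # [1, 1, 2, 5, 14]
print(round(semicircle_moment(4), 3))   # close to catalan(2) = 2
```

odd moments vanish by symmetry, and the even moments reproduce the catalan sequence, which is the combinatorial backbone of the identities the paper derives.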
|
kaluza-klein fields characterizing, from a four-dimensional viewpoint, the presence of compact universal extra dimensions would alter low-energy observables through effects determined by some compactification scale, $r^{-1}$, starting at the one-loop level, thus being particularly relevant for physical phenomena forbidden at tree level by the standard model. the present paper explores, for the case of one universal extra dimension, such new-physics contributions to higgs decays $h^{(0)} \to q^{(0)}_\alpha q^{(0)}_\beta$, into pairs of quarks with different flavors, a sort of decay process which, in the standard model, strictly occurs at the loop level. finite results, decoupling as $r^{-1} \to \infty$, are calculated. approximate short expressions, valid for large compactification scales, are provided. we estimate that kaluza-klein contributions lie below predictions from the standard model, being about 2 to 3 orders of magnitude smaller for compactification scales within $1.4\,\mathrm{tev} < r^{-1} < 10\,\mathrm{tev}$.
|
arxiv:2003.05571
|
in all of the diverse areas of science where waves play an important role, one of the most fundamental solutions of the corresponding wave equation is a stationary wave with constant intensity. the most familiar example is that of a plane wave propagating in free space. in the presence of any hermitian potential, a wave's constant intensity is, however, immediately destroyed due to scattering. here we show that this fundamental restriction is conveniently lifted when working with non-hermitian potentials. in particular, we present a whole new class of waves that have constant intensity in the presence of linear as well as nonlinear inhomogeneous media with gain and loss. these solutions allow us to study, for the first time, the fundamental phenomenon of modulation instability in an inhomogeneous environment. our results pose a new challenge for the experiments on non-hermitian scattering that have recently been put forward.
|
arxiv:1503.08986
|
building on work of barker, humpherys, lafitte, rudd, and zumbrun in the shock wave case, we study stability of compressive, or " shock - like ", boundary layers of the isentropic compressible navier - stokes equations with gamma - law pressure by a combination of asymptotic ode estimates and numerical evans function computations. our results indicate stability for gamma in the interval [ 1, 3 ] for all compressive boundary - layers, independent of amplitude, save for inflow layers in the characteristic limit ( not treated ). expansive inflow boundary - layers have been shown to be stable for all amplitudes by matsumura and nishihara using energy estimates. besides the parameter of amplitude appearing in the shock case, the boundary - layer case features an additional parameter measuring displacement of the background profile, which greatly complicates the resulting case structure. moreover, inflow boundary layers turn out to have quite delicate stability in both large - displacement and large - amplitude limits, necessitating the additional use of a mod - two stability index studied earlier by serre and zumbrun in order to decide stability.
|
arxiv:0706.3415
|
in this short note, we first consider some inequalities for comparison of some algebraic properties of two continuous algebra - multiplications on an arbitrary banach space and then, as an application, we consider some very basic observations on the space of all continuous algebra - multiplications on a banach space.
|
arxiv:1907.10042
|
we establish a large sieve inequality for power moduli in $\mathbb{z}[i]$, extending earlier work by l. zhao and the first-named author on the large sieve for power moduli in the classical case of moduli in $\mathbb{z}$. our method starts with a version of the large sieve for $\mathbb{r}^2$. we convert the resulting counting problem back into one for $\mathbb{z}[i]$, which we then attack using weyl differencing and poisson summation.
|
arxiv:1802.08964
|
this paper introduces a novel methodology that leverages the hamilton-jacobi solution to enhance non-linear model predictive control (mpc) in scenarios affected by navigational uncertainty. using a hamilton-jacobi-theoretic approach, a methodology to improve trajectory tracking accuracy in the presence of uncertainties and non-linearities is formulated. this paper seeks to overcome the challenge of real-time computation of optimal control solutions for model predictive control applications by leveraging the hamilton-jacobi solution in the vicinity of a nominal trajectory. the efficacy of the proposed methodology is validated within a chaotic system, the planar circular restricted three-body problem.
|
arxiv:2503.23603
|
multiboson production provides a unique way to probe electroweak symmetry breaking ( ewsb ) and physics beyond the standard model ( sm ). with the discovery of the higgs boson, the default model is that ewsb occurs according to the higgs mechanism. deviations from the sm in higgs and gauge boson properties due to new physics at a higher energy scale can be parameterized by higher - dimension operators in an effective field theory ( eft ). we present sensitivity studies for dimension - 6 and dimension - 8 operators in an eft by looking for anomalous vector boson scattering and triboson production, at proton - proton colliders with center - of - mass energies of 14 tev, 33 tev and 100 tev, respectively.
|
arxiv:1309.7452
|
we derive the specific baryonic angular momentum of five gas-rich dwarf galaxies from hi kinematics complemented by stellar mass profiles. since the gas mass of these galaxies is much larger than the stellar mass, the angular momentum can be determined with relatively little uncertainty arising from the uncertainties in the stellar mass-to-light ratio. we compare the relation between the specific baryonic angular momentum (j) and the total baryonic mass (m) for these galaxies with that found for spiral galaxies. our combined sample explores the j-m plane over 3 orders of magnitude in baryon mass. we find that our sample dwarfs have significantly higher specific angular momentum than expected from the relation found for spiral galaxies. the probability that these gas-rich dwarf galaxies follow the same relation as spirals is found to be $<10^{-6}$. this implies a difference in the evolution of angular momentum in these galaxies compared to larger ones. we suggest that this difference could arise due to one or more of the following: a lower baryon fraction in dwarf galaxies, particularly one arising from preferential outflows of low angular momentum gas, as found in high resolution simulations that include baryonic feedback; or "cold mode" anisotropic accretion from cosmic filaments. our work reinforces the importance of the j-m plane in understanding the evolution of galaxies.
|
arxiv:1702.02893
|
a distribution system can flexibly adjust its substation - level power output by aggregating its local distributed energy resources ( ders ). due to der and network constraints, characterizing the exact feasible power output region is computationally intensive. hence, existing results usually rely on unpractical assumptions or suffer from conservativeness issues. sampling - based data - driven methods can potentially address these limitations. still, existing works usually exhibit computational inefficiency issues as they use a random sampling approach, which carries little information from network physics and provides few insights into the iterative search process. this letter proposes a novel network - informed data - driven method to close this gap. a computationally efficient data sampling approach is developed to obtain high - quality training data, leveraging network information and legacy learning experience. then, a classifier is trained to estimate the feasible power output region with high accuracy. numerical studies based on a real - world southern california edison network validate the performance of the proposed work.
|
arxiv:2310.05529
|
motivated by application to multiple m5 branes, we study some properties of non - abelian two - form gauge theories. we note that the fake curvature condition which is commonly used in the literature would restrict the dynamics to be either a free theory or a topological system. we then propose a modification of transformation law which simplifies the gauge transformation of 3 - form field strength and enables us to write down a gauge invariant action. we then argue that a generalization of stueckelberg mechanism naturally gives mass to the two - form gauge field. for the application to multiple m5 - branes, it should be identified with the kk modes.
|
arxiv:1206.5643
|
we study human mobility networks through time series of contacts between individuals. our proposed random walkers induced temporal graph (rwig) model generates temporal graph sequences based on independent random walkers that traverse an underlying graph in discrete time steps. co-location of walkers at a given node and time defines an individual-level contact. rwig is shown to be a realistic model for temporal human contact graphs, which may place rwig on the same footing as the erdos-renyi (er) and barabasi-albert (ba) models for fixed graphs. moreover, rwig is analytically tractable: we derive closed-form solutions for the probability distribution of contact graphs.
|
arxiv:2409.08690
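a minimal simulation of the rwig construction described above: independent walkers stepping on a fixed graph, with co-location defining each time step's contact graph. the 4-cycle and the walker placement are toy choices, not examples from the paper.

```python
import random

def rwig(adjacency, starts, steps, seed=0):
    """Random Walkers Induced temporal Graph: independent walkers on `adjacency`
    (node -> neighbor list); a contact at time t is a pair of walkers on the same node."""
    rng = random.Random(seed)
    pos = list(starts)
    contact_graphs = []
    for _ in range(steps):
        pos = [rng.choice(adjacency[p]) for p in pos]   # one synchronous random-walk step
        contacts = {(i, j)
                    for i in range(len(pos)) for j in range(i + 1, len(pos))
                    if pos[i] == pos[j]}                # co-location => contact edge
        contact_graphs.append(contacts)
    return contact_graphs

# toy example: 3 walkers on a 4-cycle, 5 time steps
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
seq = rwig(cycle, starts=[0, 0, 2], steps=5)
print(len(seq))  # 5: one contact graph per time step
```

the paper's analytical results give the distribution of these contact graphs in closed form; the simulation above only samples from that distribution.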
|
this research aims to explore various methods for assessing user feedback in mixed-initiative conversational search (cs) systems. while cs systems enjoy profuse advancements across multiple aspects, recent research fails to successfully incorporate feedback from the users. one of the main reasons for that is the lack of system-user conversational interaction data. to this end, we propose a user simulator-based framework for multi-turn interactions with a variety of mixed-initiative cs systems. specifically, we develop a user simulator, dubbed convsim, that, once initialized with an information need description, is capable of providing feedback to a system's responses, as well as answering potential clarifying questions. our experiments on a wide variety of state-of-the-art passage retrieval and neural re-ranking models show that effective utilization of user feedback can lead to a 16% retrieval performance increase in terms of ndcg@3. moreover, we observe consistent improvements as the number of feedback rounds increases (35% relative improvement in terms of ndcg@3 after three rounds). this points to a research gap in the development of specific feedback processing modules and opens a potential for significant advancements in cs. to support further research on the topic, we release over 30,000 transcripts of system-simulator interactions based on well-established cs datasets.
|
arxiv:2304.13874
|
we are going to widen the scope of the previously defined hausdorff-integral in two ways. first, in the sense that we develop the theory of the integral on some naturally generalized measure spaces. second, we extend it to functions taking values in $[0,+\infty) \times [0,+\infty)$. in all our intentions, we follow the same attitude that we had in our previous investigation, i.e. we work in the realm of hausdorff dimension and measure.
|
arxiv:2402.09118
|
previous models for video captioning often use the output from a specific layer of a convolutional neural network ( cnn ) as video features. however, the variable context - dependent semantics in the video may make it more appropriate to adaptively select features from the multiple cnn layers. we propose a new approach for generating adaptive spatiotemporal representations of videos for the captioning task. a novel attention mechanism is developed, that adaptively and sequentially focuses on different layers of cnn features ( levels of feature " abstraction " ), as well as local spatiotemporal regions of the feature maps at each layer. the proposed approach is evaluated on three benchmark datasets : youtube2text, m - vad and msr - vtt. along with visualizing the results and how the model works, these experiments quantitatively demonstrate the effectiveness of the proposed adaptive spatiotemporal feature abstraction for translating videos to sentences with rich semantics.
|
arxiv:1611.07837
|
we consider control of uncertain linear time - varying stochastic systems from the perspective of regret minimization. specifically, we focus on the problem of designing a feedback controller that minimizes the loss relative to a clairvoyant optimal policy that has foreknowledge of both the system dynamics and the exogenous disturbances. in this competitive framework, establishing robustness guarantees proves challenging as, differently from the case where the model is known, the clairvoyant optimal policy is not only inapplicable, but also impossible to compute without knowledge of the system parameters. to address this challenge, we embrace a scenario optimization approach, and we propose minimizing regret robustly over a finite set of randomly sampled system parameters. we prove that this policy optimization problem can be solved through semidefinite programming, and that the corresponding solution retains strong probabilistic out - of - sample regret guarantees in face of the uncertain dynamics. our method naturally extends to include satisfaction of safety constraints with high probability. we validate our theoretical results and showcase the potential of our approach by means of numerical simulations.
|
arxiv:2304.14835
|
a realistic communication system model is critical in power system studies emphasizing the cyber and physical intercoupling. in this paper, we provide characteristics that could be used in modeling the underlying cyber network for power grid models. a real utility communication network and a simplified inter - substation connectivity model are studied, and their statistics could be used to fulfill the requirements for different modeling resolutions.
|
arxiv:2103.01275
|
we discuss the relevance of nuclear medium effects in the analysis of some low and medium energy neutrino reactions of current interest. in particular, we study the quasi-elastic (qe) process, where rpa correlations and final state interactions (fsi) are shown to play a crucial role. we have also investigated neutrino induced coherent pion production. we find a strong reduction of the cross section due to the distortion of the pion wave function and the modification of the production mechanisms in the nucleus. the sensitivity of the results to the axial $n\delta$ coupling $c_5^a(0)$ has also been investigated.
|
arxiv:0802.1128
|
the commonly used spatial entropy $h_{r}(\mathcal{u})$ of the multi-dimensional shift space $\mathcal{u}$ is the limit of the growth rate of admissible local patterns on finite rectangular sublattices which expand to the whole space $\mathbb{z}^{d}$, $d \geq 2$. this work studies the spatial entropy $h_{\omega}(\mathcal{u})$ of a shift space $\mathcal{u}$ on a general expanding system $\omega = \{\omega(n)\}_{n=1}^{\infty}$, where $\omega(n)$ is an increasing sequence of finite sublattices expanding to $\mathbb{z}^{d}$. $\omega$ is called genuinely $d$-dimensional if $\omega(n)$ contains no lower-dimensional part whose size is comparable to that of its $d$-dimensional part. we show that $h_{r}(\mathcal{u})$ is the supremum of $h_{\omega}(\mathcal{u})$ over all genuinely two-dimensional $\omega$. furthermore, when $\omega$ is genuinely $d$-dimensional and satisfies certain conditions, then $h_{\omega}(\mathcal{u}) = h_{r}(\mathcal{u})$. on the contrary, when $\omega(n)$ contains a lower-dimensional part, then $h_{r}(\mathcal{u}) < h_{\omega}(\mathcal{u})$ for some $\mathcal{u}$. therefore, $h_{r}(\mathcal{u})$ is appropriate to be the $d$-dimensional spatial entropy.
|
arxiv:1412.6859
|
we study integrals of motion for the hirota bilinear difference equation that is satisfied by the eigenvalues of the transfer-matrix. combinations of the eigenvalues of the transfer-matrix are found which are integrals of motion for integrable discrete models for the $a_{k-1}$ algebra with zero and quasiperiodic boundary conditions. discrete analogues of the equations of motion for the bullough-dodd model and a non-abelian generalization of the liouville model are obtained.
|
arxiv:solv-int/9911009
|
new cp-violating asymmetries of decay leptons in $e^+ e^- \to t\bar{t}$, arising from electric and weak dipole couplings of $t\bar{t}$ to $\gamma$ and $z$, are examined in the case of unpolarized and longitudinally polarized electrons. the new asymmetries, measured together with the old ones, can help to determine independently the real and imaginary parts of the electric as well as weak dipole couplings. longitudinal beam polarization, if present, obviates the need for the simultaneous measurement of more than one asymmetry, and enhances considerably the sensitivity to the cp-violating parameters. numerical results are presented for the next linear collider with $\sqrt{s} = 500$ gev and $\int \mathcal{l}\,dt = 10\,\mathrm{fb}^{-1}$.
|
arxiv:hep-ph/9410357
|
human volumetric capture is a long - standing topic in computer vision and computer graphics. although high - quality results can be achieved using sophisticated off - line systems, real - time human volumetric capture of complex scenarios, especially using light - weight setups, remains challenging. in this paper, we propose a human volumetric capture method that combines temporal volumetric fusion and deep implicit functions. to achieve high - quality and temporal - continuous reconstruction, we propose dynamic sliding fusion to fuse neighboring depth observations together with topology consistency. moreover, for detailed and complete surface generation, we propose detail - preserving deep implicit functions for rgbd input which can not only preserve the geometric details on the depth inputs but also generate more plausible texturing results. results and experiments show that our method outperforms existing methods in terms of view sparsity, generalization capacity, reconstruction quality, and run - time efficiency.
|
arxiv:2105.01859
|
we consider large - scale markov decision processes ( mdps ) with an unknown cost function and employ stochastic convex optimization tools to address the problem of imitation learning, which consists of learning a policy from a finite set of expert demonstrations. we adopt the apprenticeship learning formalism, which carries the assumption that the true cost function can be represented as a linear combination of some known features. existing inverse reinforcement learning algorithms come with strong theoretical guarantees, but are computationally expensive because they use reinforcement learning or planning algorithms as a subroutine. on the other hand, state - of - the - art policy gradient based algorithms ( like im - reinforce, im - trpo, and gail ), achieve significant empirical success in challenging benchmark tasks, but are not well understood in terms of theory. with an emphasis on non - asymptotic guarantees of performance, we propose a method that directly learns a policy from expert demonstrations, bypassing the intermediate step of learning the cost function, by formulating the problem as a single convex optimization problem over occupancy measures. we develop a computationally efficient algorithm and derive high confidence regret bounds on the quality of the extracted policy, utilizing results from stochastic convex optimization and recent works in approximate linear programming for solving forward mdps.
|
arxiv:2201.00039
|
in this paper, we investigate the topological number of de-sitter black hole solutions with different charge $(q)$ and rotation $(a)$ parameters. by using the generalized free energy and duan's $\phi$-mapping topological current theory, we find that the topological numbers of black holes can still be classified into three types. in addition, we interestingly find that the topological classes for de-sitter (ds) spacetimes with distinct horizons, i.e. the black hole event horizon and the cosmological horizon, will be different. moreover, we also investigate topological classifications of ds black hole solutions in higher dimensions with or without a gauss-bonnet term.
|
arxiv:2303.13105
|
we study a class of 2-variable polynomials called exact polynomials which contains the $a$-polynomials of knot complements. the mahler measure of these polynomials can be computed in terms of a volume function defined on the vanishing set of the polynomial. we prove that the local extrema of the volume function are on the 2-dimensional torus and give a closed formula for the mahler measure in terms of these extremal values. this formula shows that the mahler measure of an irreducible and exact polynomial divided by $\pi$ is greater than the amplitude of the volume function. we also prove a $k$-theoretical criterion for a polynomial to be a factor of an $a$-polynomial and give a topological interpretation of its mahler measure.
|
arxiv:1804.01395
|
this paper is about a new model of opinion dynamics with opinion-dependent connectivity. we assume that agents update their opinions asynchronously and that each agent's new opinion depends on the opinions of the $k$ agents that are closest to it. we show that the resulting dynamics is substantially different from comparable models in the literature, such as bounded-confidence models. we study the equilibria of the dynamics, observing that they are robust to perturbations caused by the introduction of new agents. we also prove that if the number of agents $n$ is smaller than $2k$, the dynamics converge to consensus. this condition is only sufficient.
|
arxiv:1803.07401
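a toy simulation of the k-nearest-neighbor update described above. the paper's update is asynchronous; the synchronous variant, the tie-breaking rule, and the initial opinions below are illustrative assumptions. with n = 5 agents and k = 3, the sufficient condition n < 2k holds, and the opinions indeed collapse to consensus.

```python
def knn_step(opinions, k):
    """Synchronous update: each agent averages the k opinions closest to its own
    (ties broken by list order; the agent's own opinion is always among its nearest)."""
    new = []
    for x in opinions:
        nearest = sorted(opinions, key=lambda y: abs(y - x))[:k]
        new.append(sum(nearest) / k)
    return new

ops = [0.0, 0.1, 0.5, 0.9, 1.0]      # n = 5 agents
for _ in range(200):
    ops = knn_step(ops, k=3)         # n = 5 < 2k = 6: consensus is guaranteed
spread = max(ops) - min(ops)
print(spread < 1e-6)  # True
```

note that the condition is only sufficient: other (n, k) pairs may or may not reach consensus depending on the initial opinion profile.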
|