Mixed-state phases of matter under local decoherence have recently garnered significant attention due to the ubiquitous presence of noise in current quantum processors. One of the key issues is understanding how topological quantum memory is affected by realistic coherent noises, such as random rotation noise and amplitude damping noise. In this work, we investigate the intrinsic error threshold of the two-dimensional toric code, a paradigmatic topological quantum memory, under these coherent noises by employing both analytical and numerical methods based on the doubled Hilbert space formalism. A connection between the mixed-state phase of the decohered toric code and a non-Hermitian Ashkin-Teller-type statistical mechanics model is established, and the mixed-state phase diagrams under the coherent noises are obtained. We find remarkable stability of mixed-state topological order under random rotation noise with axes near the $y$-axis of the qubits. We also identify intriguing extended critical regions at the phase boundaries, highlighting a connection with non-Hermitian physics. The upper bounds for the intrinsic error threshold are determined by these phase boundaries, beyond which quantum error correction becomes impossible.
|
arxiv:2411.03441
|
The supersymmetric SO(10) theory ("NMSO(10) GUT") based on the ${\bf 210 + 126 + \oot}$ Higgs system proposed in 1982 has evolved into a realistic theory capable of fitting the known low-energy particle physics data, besides providing a dark matter candidate and embedding inflationary cosmology. It dynamically resolves longstanding issues such as fast dimension-five-operator-mediated proton decay in SUSY GUTs by allowing explicit and complete calculation of crucial threshold effects at $M_{SUSY}$ and $M_{GUT}$ in terms of fundamental parameters. This shows that the SO(10) Yukawas responsible for the observed fermion masses, as well as dimension-five-operator-mediated proton decay, can be highly suppressed on a "Higgs dissolution edge" in the parameter space of GUTs with rich superheavy spectra. This novel and generically relevant result highlights the need for every realistic UV-completion model with a large or infinite number of heavy fields coupled to the light Higgs doublets to explicitly account for the large wave-function renormalization effects on the emergent light Higgs fields in order to be considered a quantitatively well-defined candidate UV completion. The NMSGUT predicts large soft SUSY-breaking trilinear couplings and distinctive sparticle spectra. A measurable or near-measurable level of tensor perturbations, and thus a large inflaton mass scale, may be accommodated by supersymmetric seesaw inflation within the NMSGUT, based on an LHN flat-direction inflaton, if the Higgs component contains contributions from heavy Higgs components. Successful NMSGUT fits suggest a \emph{renormalizable} Yukawon ultra-minimal gauged theory of flavor based upon the NMSGUT Higgs structure.
|
arxiv:1506.05850
|
We investigate polyelectrolyte brushes in the osmotic regime using both theoretical analysis and molecular dynamics simulation techniques. In the simulations at moderate Bjerrum length, we observe that the brush height varies weakly with grafting density, in contrast to the accepted scaling law, which predicts a brush thickness independent of the grafting density. We show that such behavior can be explained by considering lateral electrostatic effects (within the non-linear Poisson-Boltzmann theory) combined with the coupling between lateral and longitudinal degrees of freedom due to the conserved polymer volume (which is neglected in scaling arguments). We also take the non-linear elasticity of the polyelectrolyte chains into consideration, which has significant effects, as chains are almost fully stretched in the osmotic regime. It is shown that all these factors lead to a non-monotonic behavior of the brush height as a function of the grafting density. At large grafting densities, the brush height increases with increasing grafting density due to the volume constraint. At small grafting densities, we obtain a re-stretching of the chains for decreasing grafting density, which is caused by lateral electrostatic contributions and the counterion-condensation process at the polyelectrolyte chains. These results are obtained assuming all counterions to be trapped within the brush, which is valid for sufficiently long chains of large charge fraction.
|
arxiv:cond-mat/0504414
|
To achieve fast computation, it is crucial to reset the memory to a desired state within a limited time. However, the inherent delay in the system's response often prevents reaching the desired state once the control process is completed in finite time. To address this challenge, we propose a shortcut strategy that incorporates an auxiliary control to guide the system towards an equilibrium state that corresponds to the intended control, thus enabling accurate memory reset. Through the application of thermodynamic geometry, we derive an optimal shortcut protocol for erasure processes that minimizes the energy cost. This research provides an effective design principle for realizing the finite-time erasure process while simultaneously reducing the energy cost, thereby alleviating the burden of heat dissipation.
|
arxiv:2307.00964
|
Recognizing the category of an object and using the features of the object itself to predict the grasp configuration is of great significance for improving the accuracy of grasp detection models and expanding their applications. Researchers have been trying to combine these capabilities in an end-to-end network to grasp specific objects in a cluttered scene efficiently. In this paper, we propose an end-to-end semantic grasp detection model, which can accomplish both semantic recognition and grasp detection. We also design a target feature attention mechanism to guide the model to focus on the features of the target object itself for grasp prediction, according to the semantic information. This method effectively reduces the influence of background features that are weakly correlated with the target object, thus making the features more unique and guaranteeing the accuracy and efficiency of grasp detection. Experimental results show that the proposed method achieves 98.38% accuracy on the Cornell Grasp Dataset. Furthermore, our results on complex multi-object scenarios and under more rigorous evaluation metrics show the domain adaptability of our method over the state of the art.
|
arxiv:2111.10522
|
The equation of state of quantum chromodynamics (QCD) at finite density is currently known only in a limited range of the baryon chemical potential $\mu_B$. This is due to fundamental shortcomings of traditional methods such as the Taylor expansion around $\mu_B = 0$. In this contribution, we present an alternative scheme that displays substantially improved convergence over the Taylor expansion method. We calculate the alternative expansion coefficients in the continuum, and show our results for the thermodynamic observables up to $\mu_B/T \le 3.5$.
|
arxiv:2112.00083
|
The time-dependent behavior of a two-level system interacting with a quantum oscillator is analyzed in the case of a coupling larger than both the energy separation between the two levels and the energy of the quantum oscillator ($\omega < \Omega < \lambda$, where $\omega$ is the frequency of the transition between the two levels, $\Omega$ is the frequency of the oscillator, and $\lambda$ is the coupling between the two-level system and the oscillator). Our calculations show that the amplitude of the expectation value of the oscillator coordinate decreases as the two-level system undergoes the transition from one level to the other, while the transfer probability between the levels is staircase-like. This behavior is explained by the interplay between the adiabatic and the non-adiabatic regimes encountered during the dynamics, with the system acting as a quantum counterpart of the Landau-Zener model. The transition between the two levels occurs as long as the expectation value of the oscillator coordinate is driven close to zero. On the contrary, if the initial conditions are set such that the expectation values of the oscillator coordinate are far from zero, the system will remain locked on one level.
|
arxiv:cond-mat/0608483
|
We report the discovery by the RXTE PCA of a second transient accreting millisecond pulsar, XTE J1751-305, during regular monitoring observations of the Galactic bulge region. The pulsar has a spin frequency of 435 Hz, making it one of the fastest pulsars. The pulsations contain the signature of orbital Doppler modulation, which implies an orbital period of 42 minutes, the shortest orbital period of any known radio or X-ray millisecond pulsar. The mass function, $f_X = (1.278 \pm 0.003) \times 10^{-6}\,M_\odot$, yields a minimum mass for the companion of between 0.013 and 0.017 $M_\odot$, depending on the mass of the neutron star. No eclipses were detected. A previous X-ray outburst, in June 1998, was discovered in archival All-Sky Monitor data. Assuming mass transfer in this binary system is driven by gravitational radiation, we constrain the orbital inclination to be in the range 30-85 deg, and the companion mass to be 0.013-0.035 $M_\odot$. The companion is most likely a heated helium dwarf. We also present results from Chandra HRC-S observations, which provide the best known position of XTE J1751-305.
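The quoted minimum companion masses follow from the binary mass function evaluated for an edge-on orbit ($\sin i = 1$). As a sketch, one can solve $f_X = (m_2 \sin i)^3/(m_1 + m_2)^2$ for $m_2$ by bisection; the neutron-star masses 1.4 and 2.0 $M_\odot$ used below are illustrative assumptions, not values stated in the text:

```python
def companion_min_mass(f_x, m_ns, lo=1e-6, hi=1.0, tol=1e-12):
    """Solve the binary mass function f_x = m2^3 / (m_ns + m2)^2
    for the companion mass m2 (solar masses), assuming sin(i) = 1,
    which yields the *minimum* companion mass."""
    g = lambda m2: m2**3 - f_x * (m_ns + m2)**2  # root of g is the solution
    for _ in range(200):                         # plain bisection
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

f_x = 1.278e-6  # measured mass function, in solar masses
# Hypothetical neutron-star masses bracketing plausible values:
m2_light = companion_min_mass(f_x, 1.4)  # close to the quoted 0.013 M_sun
m2_heavy = companion_min_mass(f_x, 2.0)  # close to the quoted 0.017 M_sun
```

For these inputs the bisection reproduces the 0.013-0.017 $M_\odot$ range quoted in the abstract.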
|
arxiv:astro-ph/0206491
|
For a sub-Riemannian structure on the torus satisfying the Hörmander condition, we consider the Mañé Lagrangian associated to a horizontal vector field. Assuming that the Aubry set consists of a finite number of static classes, we show that the invariant measure for the horizontal stochastic perturbation of the flow of the vector field determines a particular weak KAM solution of the Lagrangian as the perturbation tends to zero.
|
arxiv:2401.10335
|
This paper describes our approach to DSTC9 Track 2: cross-lingual multi-domain dialog state tracking. The goal of the task is to build a cross-lingual dialog state tracker with a training set in a rich-resource language and a testing set in a low-resource language. We formulate a method for joint learning of a slot operation classification task and a state tracking task. Furthermore, we design a novel mask mechanism for fusing contextual information about the dialogue. The results show that the proposed model achieves excellent performance on DSTC Challenge II, with a joint accuracy of 62.37% on the MultiWOZ (en-zh) dataset and 23.96% on the CrossWOZ (zh-en) dataset.
|
arxiv:2106.14433
|
A dichroic atomic vapor laser lock (DAVLL) system exploiting buffer-gas-filled millimeter-scale vapor cells is presented. This system offers stability similar to that achievable with a conventional DAVLL system using bulk vapor cells, but has several important advantages. In addition to its compactness, it may provide continuous stabilization in a multi-gigahertz range around the optical transition. This range may be controlled either by changing the temperature of the vapor or by applying a buffer gas at an appropriate pressure. In particular, we experimentally demonstrate the ability of the system to lock the laser frequency between two hyperfine components of the $^{85}$Rb ground state, or as far as 16 GHz away from the closest optical transition.
|
arxiv:1512.08919
|
We study factor of i.i.d. processes on the $d$-regular tree for $d \geq 3$. We show that if such a process is restricted to two distant connected subgraphs of the tree, then the two parts are basically uncorrelated. More precisely, any functions of the two parts have correlation at most $k(d-1)/(\sqrt{d-1})^k$, where $k$ denotes the distance between the subgraphs. This result can be considered a quantitative version of the fact that factor of i.i.d. processes have trivial 1-ended tails.
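The correlation bound $k(d-1)/(\sqrt{d-1})^k$ decays once $k$ is large, since the polynomial factor $k$ is eventually dominated by the exponential $(\sqrt{d-1})^k$. A quick numerical sketch of the bound (the parameter choices are illustrative, not from the text):

```python
import math

def correlation_bound(d, k):
    """Upper bound k*(d-1)/(sqrt(d-1))^k on the correlation between
    functions of two parts of a d-regular tree at distance k."""
    return k * (d - 1) / math.sqrt(d - 1) ** k

# For d = 3 the bound is 2k / 2^(k/2); it decays geometrically in k.
values = [correlation_bound(3, k) for k in (2, 6, 10, 20)]
```

Note the bound is not monotone for the smallest $k$; the exponential decay sets in only after the linear prefactor is overtaken.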
|
arxiv:1603.08423
|
A novel policy gradient (PG) algorithm, called $\textit{Matryoshka Policy Gradient}$ (MPG), is introduced and studied in the context of fixed-horizon max-entropy reinforcement learning, where an agent aims at maximizing entropy bonuses in addition to its cumulative rewards. In the linear function approximation setting with softmax policies, we prove uniqueness of and characterize the optimal policy of the entropy-regularized objective, together with global convergence of MPG. These results are proved in the case of continuous state and action spaces. MPG is intuitive and theoretically sound, and we furthermore show that the optimal policy of the infinite-horizon max-entropy objective can be approximated arbitrarily well by the optimal policy of the MPG framework. Finally, we provide a criterion for global optimality when the policy is parametrized by a neural network, in terms of the neural tangent kernel at convergence. As a proof of concept, we evaluate MPG numerically on standard test benchmarks.
|
arxiv:2303.12785
|
Star clusters are evolving N-body systems. We discuss the early dynamics of star clusters, the process of primordial mass segregation, and the clustering observed in certain young clusters. We discuss how the dynamics, coupled with the stellar evolution of a cluster, define the radial profile, mass function, and disruption of the cluster, and compare these parameters with some known clusters. As a member of the Thirty Meter Telescope (TMT) International Science Driven Team (ISDT), I shall use these details to help define the science case, the requirements, and the expected precision in answering possible questions about the evolution of star clusters in terms of astrometry and high-resolution spectroscopy. I shall also report on some of the resolutions made at the recent TMT Forum held in Mysore, India.
|
arxiv:1809.09917
|
We investigated the nucleation process at the molecular level. Controlled sticking of individual atoms onto mass-selected clusters over a wide mass range has been carried out for the first time. We measured the absolute unimolecular nucleation cross sections of cationic sodium clusters Na$_n^+$ in the range $n = 25$-$200$ at several collision energies. The widely used hard-sphere approximation clearly fails for small sizes: not only should vapor-to-liquid nucleation theories be modified, but also, through the microreversibility principle, statistical models of cluster decay rates.
|
arxiv:0711.1797
|
During a public health crisis like COVID-19, individuals' adoption of protective behaviors, such as self-isolation and wearing masks, can significantly impact the spread of the disease. Meanwhile, the spread of the disease can also influence individuals' behavioral choices. Moreover, when facing uncertain losses, individuals' decisions tend to be irrational. Therefore, it is critical to study individuals' irrational behavior choices in the context of a pandemic. In this paper, we propose an epidemic-behavior co-evolution model that captures the dynamic interplay between individual decision-making and disease spread. To account for irrational decision-making, we incorporate prospect theory in our individual behavior modeling. We conduct a theoretical analysis of the model, examining the steady states that emerge from the co-evolutionary process. We use simulations to validate our theoretical findings and gain further insights. This investigation aims to enhance our understanding of the complex dynamics between individual behavior and disease spread during a pandemic.
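Prospect theory captures irrational risk attitudes through an S-shaped value function that is concave for gains, convex for losses, and steeper for losses (loss aversion). The abstract does not specify the functional form used in the paper; a common sketch is the Tversky-Kahneman power value function with their classic parameter estimates, used here purely as an illustration:

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Tversky-Kahneman value function: concave for gains (x >= 0),
    convex and loss-averse for losses (x < 0). Parameters are the
    classic 1992 estimates; the paper's actual choices may differ."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta

# Loss aversion: a loss looms larger than an equal-sized gain.
gain, loss = prospect_value(1.0), prospect_value(-1.0)
```

With $\lambda > 1$, a unit loss is weighted more than twice as heavily as a unit gain, which is the kind of asymmetry that can shift behavioral steady states in an epidemic-behavior model.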
|
arxiv:2310.17112
|
Network connectivity is one of the major design issues in the context of mobile sensor networks. Due to diverse communication patterns, some nodes lying in high-traffic zones may consume more energy and eventually die out, resulting in network partitioning. This phenomenon may deprive a large number of alive nodes of sending their important time-critical data to the sink. The application of data caching in mobile sensor networks is increasing exponentially as a high-speed data storage layer. This paper presents a deep learning-based beamforming approach to find the optimal transmission strategies for cache-enabled backhaul networks. In the proposed scheme, the sensor nodes in isolated partitions work together to form a directional beam, which significantly increases their overall communication range to reach a distant relay node connected to the main part of the network. The proposed methodology of cooperative beamforming-based partition connectivity works efficiently if an isolated cluster gets partitioned with a favorably large number of nodes. We also present a new cross-layer link-cost method that balances the energy consumed by the relay nodes. By directly adding the accessible auxiliary nodes to the set of routing links, the algorithm chooses paths that provide maximum dynamic beamforming usage for the intermediate nodes. The proposed approach is then evaluated through simulation. The simulation results show that the proposed mechanism achieves up to 30% energy consumption reduction through beamforming as partition healing, in addition to guaranteeing user throughput.
|
arxiv:2308.04797
|
Generalized quasi-topological gravities (GQTGs) are higher-curvature extensions of Einstein gravity characterized by the existence of non-hairy generalizations of the Schwarzschild black hole which satisfy $g_{tt} g_{rr} = -1$, as well as by having second-order linearized equations around maximally symmetric backgrounds. In this paper we provide strong evidence that any gravitational effective action involving higher-curvature corrections is equivalent, via metric redefinitions, to some GQTG. In the case of theories involving invariants constructed from contractions of the Riemann tensor and the metric, we show this claim to be true as long as (at least) one non-trivial GQTG invariant exists at each order in curvature, and extremely conclusive evidence suggests this is the case in general dimensions. When covariant derivatives of the Riemann tensor are included, the evidence provided is not as definitive, but we still prove the claim explicitly for all theories including up to eight derivatives of the metric, as well as for terms involving arbitrary contractions of two covariant derivatives of the Riemann tensor and any number of Riemann tensors. Our results suggest that the physics of generic higher-curvature-gravity black holes is captured by their GQTG counterparts, which are dramatically easier to characterize and universal. As an example, we map the gravity sector of the type-IIB string theory effective action in AdS$_5$ at order $\mathcal{O}({\alpha'}^3)$ to a GQTG and show that the thermodynamic properties of black holes in both frames match.
|
arxiv:1906.00987
|
Random-effects models are frequently used to synthesise information from different studies in meta-analysis. While likelihood-based inference is attractive both in terms of limiting properties and of implementation, its application in random-effects meta-analysis may result in misleading conclusions, especially when the number of studies is small to moderate. The current paper shows how methodology that reduces the asymptotic bias of the maximum likelihood estimator of the variance component can also substantially improve inference about the mean effect size. The results are derived for the more general framework of random-effects meta-regression, which allows the mean effect size to vary with study-specific covariates.
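To fix ideas, the random-effects model assumes each study estimate $y_i$ has variance $v_i + \tau^2$, with $\tau^2$ the between-study variance component discussed above. As a baseline sketch (this is the standard DerSimonian-Laird moment estimator, a common reference point, not the bias-reduced likelihood method the paper develops):

```python
def dersimonian_laird(y, v):
    """Standard DerSimonian-Laird random-effects meta-analysis:
    moment estimate of the between-study variance tau^2, then an
    inverse-variance weighted mean effect size.
    y: study effect estimates, v: their within-study variances."""
    k = len(y)
    w = [1.0 / vi for vi in v]                                 # fixed-effect weights
    sw = sum(w)
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))     # Cochran's Q
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)                         # truncated at zero
    wstar = [1.0 / (vi + tau2) for vi in v]                    # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(wstar, y)) / sum(wstar)
    return mu, tau2
```

The truncation of $\tau^2$ at zero is one source of the small-sample problems the paper addresses with bias-reduced maximum likelihood.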
|
arxiv:1801.09002
|
Recently the space of tree-level color structures for gluon scattering was determined in arXiv:1403.6837, together with its transformation properties under permutations. Here we generalize the discussion to loops, demonstrating a reduction of an arbitrary color diagram to its vacuum skeleton plus rays. For 1 loop there are no residual relations, and we determine the space of color structures both diagrammatically and algebraically in terms of certain sunny diagrams. We present the generating function for the characteristic polynomials and a list of irreducible representations for $3 \le n \le 9$ external legs. Finally we present a new proof for the 1-loop shuffle relations based on the cyclic shuffle and split operations.
|
arxiv:1406.1504
|
In this note we give a well-foundedness proof of a computable notation system for first-order reflection.
|
arxiv:1506.05280
|
Elements of a global operator approach to the WZNW models for compact Riemann surfaces of arbitrary genus $g$ with $n$ marked points were given by Schlichenmaier and Sheinman. This contribution reports on the results. The approach is based on the multi-point Krichever-Novikov algebras of global meromorphic functions and vector fields, and the global algebras of affine type and their representations. Using the global Sugawara construction and the identification of a certain subspace of the vector field algebra with the tangent space to the moduli space of the geometric data, Knizhnik-Zamolodchikov equations are defined. Some steps of the approach of Tsuchiya, Ueno and Yamada to WZNW models are presented to compare it with our approach.
|
arxiv:math/0001040
|
Exascale computing will get mankind closer to solving important social, scientific and engineering problems. Due to high prototyping costs, high performance computing (HPC) system architects make use of simulation models for design space exploration and hardware-software co-design. However, as HPC systems reach exascale proportions, the cost of simulation increases, since simulators themselves are largely single-threaded. Tools for selecting representative parts of parallel applications to reduce running costs are widespread; e.g., BarrierPoint achieves this by analysing, in simulation, abstract characteristics such as basic blocks and reuse distances. However, architectures new to HPC have a limited set of tools available. In this work, we provide an independent cross-architectural evaluation on real hardware, across Intel and Arm, of the BarrierPoint methodology when applied to parallel HPC proxy applications. We present both cases: when the methodology can be applied and when it cannot. In the former case, results show that we can predict the performance of full application execution by running shorter representative sections. In the latter case, we dive into the underlying issues and suggest improvements. We demonstrate a total simulation time reduction of up to 178x, whilst keeping the error below 2.3% for both cycles and instructions.
|
arxiv:1803.09584
|
Structure and production of doubly charmed tetraquarks $T_{cc}$ ($cc\bar{u}\bar{d}$) are studied from the viewpoint of color configurations. Based on the diquark correlation, the tetraquark $T_{cc}$ with $I(J^P) = 0(1^+)$ is considered to be stable against strong decay. We discuss that the mixing probability of the color antitriplet and sextet $cc$ components in $T_{cc}$ is suppressed by $1/m_c^2$, so the two configurations are separately realized in the heavy quark limit. Utilizing the nonrelativistic QCD framework, we evaluate the production cross sections of $T_{cc}$ in electron-positron collisions. The momentum dependence of the cross section of the color antitriplet is found to be different from that of the sextet, which can be used to discriminate the color structure of the $T_{cc}$ states in experimental measurements.
|
arxiv:1209.6207
|
In this paper, new Levin methods are presented for calculating oscillatory integrals with algebraic and/or logarithmic singularities. To avoid the singularity, the technique of singularity separation is applied, and the singular ODE occurring in classic Levin methods is then converted into two kinds of non-singular ODEs. The solutions of one kind can be obtained explicitly, while those of the other can be solved for efficiently by collocation methods. The proposed methods can attain arbitrarily high asymptotic orders and also enjoy superalgebraic convergence with respect to the number of collocation points. Several numerical experiments are presented to validate the efficiency of the proposed methods.
|
arxiv:1912.09698
|
One important factor affecting the critical current density in type-II superconductors is the formation of artificial pinning centers. Hence, the engineering of pinning centers in superconducting systems has garnered considerable attention. In this study, the effect of moiré-patterned pinning centers on the critical current density of superconducting tapes is investigated. The Langevin equation is solved by taking into account the prominent forces within the superconductor medium, using the appropriate boundary conditions for vortices. The vortex dynamics are investigated by performing molecular dynamics simulations, which are used to calculate the corresponding critical current densities. Results show a significant enhancement in the critical current density at particular angles of relative rotation of the primary lattices. It is also revealed that for stronger pinning forces, the calculated critical current densities are higher in the moiré lattices than in the primary lattices of pinning centers.
|
arxiv:2406.03013
|
The analytical theory of diffusive cosmic-ray acceleration at parallel stationary shock waves with magnetostatic turbulence is generalized to arbitrary shock speeds $v_s = \beta_1 c$, including in particular relativistic speeds. This is achieved by applying the diffusion approximation to the relevant Fokker-Planck particle transport equation formulated in the mixed comoving coordinate system. In this coordinate system the particle's momentum coordinates $p$ and $\mu = p_\parallel/p$ are taken in the rest frame of the streaming plasma, whereas the time and space coordinates are taken in the observer's system. For magnetostatic slab turbulence the diffusion-convection transport equation for the isotropic (in the rest frame of the streaming plasma) part of the particle's phase space density is derived. For a step-wise shock velocity profile the steady-state diffusion-convection transport equation is solved. For a symmetric pitch-angle scattering Fokker-Planck coefficient $D_{\mu\mu}(-\mu) = D_{\mu\mu}(\mu)$ the steady-state solution is independent of the microphysical scattering details. For nonrelativistic mono-momentum particle injection at the shock, the differential number density of accelerated particles is a Lorentzian-type distribution function which at large momenta approaches a power-law distribution function $N(p \ge p_c) \propto p^{-\xi}$ with the spectral index $\xi(\beta_1) = 1 + [3/(\gamma_1\sqrt{r^2 - \beta_1^2} - 1)(1 + 3\beta_1^2)]$. For nonrelativistic ($\beta_1 \ll 1$) shock speeds this spectral index agrees with the known result $\xi(\beta_1 \ll 1) \simeq (r + 2)/(r - 1)$, whereas for ultrarelativistic ($\gamma_1 \gg 1$) shock speeds the spectral index value is close to unity.
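The two limits quoted for the spectral index can be checked numerically. Reading the bracketed expression as $\xi = 1 + 3/[(\gamma_1\sqrt{r^2-\beta_1^2}-1)(1+3\beta_1^2)]$ (one plausible grouping of the inline formula; the nonrelativistic limit $(r+2)/(r-1)$ comes out the same either way), a short sketch recovers both regimes:

```python
import math

def spectral_index(beta1, r):
    """xi = 1 + 3 / ((gamma1*sqrt(r^2 - beta1^2) - 1) * (1 + 3*beta1^2)),
    with gamma1 the Lorentz factor of the shock speed beta1 and r the
    shock compression ratio (one reading of the abstract's formula)."""
    gamma1 = 1.0 / math.sqrt(1.0 - beta1 ** 2)
    denom = (gamma1 * math.sqrt(r ** 2 - beta1 ** 2) - 1.0) * (1.0 + 3.0 * beta1 ** 2)
    return 1.0 + 3.0 / denom

# Nonrelativistic limit: xi -> (r + 2)/(r - 1), i.e. 2 for a strong shock r = 4.
xi_nonrel = spectral_index(1e-4, 4.0)
# Ultrarelativistic limit: gamma1 >> 1 drives xi towards unity.
xi_rel = spectral_index(0.9999, 4.0)
```

For $\beta_1 = 10^{-4}$ and $r = 4$ the function returns a value within numerical error of 2, matching $(r+2)/(r-1)$, while for $\beta_1 = 0.9999$ the index sits just above 1.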
|
arxiv:1503.04737
|
Despite their paucity, massive hot stars are real cosmic engines of fundamental importance in shaping our Universe, from its very early stages up to its current appearance. Understanding the physics of massive stars is then a key issue for many relevant astrophysical phenomena. Probing the massive stellar population of nearby galaxies by means of quantitative spectroscopy allows us to unveil a wealth of information that will aid our current understanding of stellar and galaxy evolution. In addition, blue luminous stars can be used as standard candles for extragalactic distances up to 10 Mpc. In this contribution, we present a brief overview of recent steps we have undertaken in this exciting research field.
|
arxiv:0708.2737
|
A multidimensional extraction of the structure function ratio $\sigma_{LT'}/\sigma_0$ from the hard exclusive $\vec{e}p \to e'n\pi^+$ reaction above the resonance region has been performed. The study was based on beam-spin asymmetry measurements using a 10.6 GeV incident electron beam on a liquid-hydrogen target and the CLAS12 spectrometer at Jefferson Lab. The measurements focus on the very forward regime ($t/Q^2 \ll 1$) with a wide kinematic range of $x_B$ in the valence regime ($0.17 < x_B < 0.55$) and virtualities $Q^2$ ranging from 1.5 GeV$^2$ up to 6 GeV$^2$. The results and their comparison to theoretical models based on generalized parton distributions (GPDs) demonstrate the sensitivity to chiral-odd GPDs and the directly related tensor charge of the nucleon. In addition, the data are compared to an extension of a Regge formalism to high photon virtualities. It was found that the Regge model provides a better description at low $Q^2$, while the GPD model is more appropriate at high $Q^2$.
|
arxiv:2210.14557
|
This article focuses on the study of toric algebraic statistical models which correspond to toric del Pezzo surfaces with Du Val singularities. A closed form for the maximum likelihood estimate of the algebraic statistical models which correspond to cubic and quartic toric del Pezzo surfaces with Du Val singular points is given. We also calculate the ML degrees of some toric del Pezzo surfaces of degree less than or equal to six; the ML degree equals the degree of the surface in all cases but one, namely the quintic with two points of type $\mathbb{A}_1$.
|
arxiv:1602.08307
|
This work addresses some relevant characteristics and properties of $q$-generalized associative algebras and $q$-generalized dendriform algebras, such as bimodules and matched pairs. For the special case of $q = -1$, we construct an antiassociative algebra with a decomposition into the direct sum of the underlying vector spaces of another antiassociative algebra and its dual, such that both are subalgebras and the natural symmetric bilinear form is invariant or the natural antisymmetric bilinear form is symplectic. The former is called a double construction of a quadratic antiassociative algebra, and the latter a double construction of a symplectic antiassociative algebra, which is interpreted in terms of antidendriform algebras. We classify the 2-dimensional antiassociative algebras and give some double constructions of quadratic and symplectic antiassociative algebras in detail.
|
arxiv:2007.11991
|
In biometrics and related fields, the Cox proportional hazards model is widely used for analyses with covariate adjustment. However, when some covariates are not observed, an unbiased estimator usually cannot be obtained. Even if there are some unmeasured covariates, instrumental variable methods can be applied under certain assumptions. In this paper, we propose a new instrumental variable estimator for the Cox proportional hazards model. The estimator is similar to that of Martinez-Camblor et al. (2019), but not exactly the same; we use an idea from limited-information maximum likelihood. We show that the estimator has good theoretical properties. We also confirm the properties of our method and previous methods through simulated datasets.
|
arxiv:2206.01302
|
The integration of predictive maintenance and cybersecurity represents a transformative advancement for small and medium-sized enterprises (SMEs) operating within the Industry 4.0 paradigm. Despite their economic importance, SMEs often face significant challenges in adopting advanced technologies due to resource constraints and knowledge gaps. The DETECTA 2.0 project addresses these hurdles by developing an innovative system that harmonizes real-time anomaly detection, sophisticated analytics, and predictive forecasting capabilities. The system employs a semi-supervised methodology, combining unsupervised anomaly detection with supervised learning techniques. This approach enables more agile and cost-effective development of AI detection systems, significantly reducing the time required for manual case review. At the core lies a digital twin interface, providing intuitive real-time visualizations of machine states and detected anomalies. Leveraging cutting-edge AI engines, the system intelligently categorizes anomalies based on observed patterns, differentiating between technical errors and potential cybersecurity incidents. This discernment is fortified by detailed analytics, including certainty levels that enhance alert reliability and minimize false positives. The predictive engine uses advanced time-series algorithms like N-HiTS to forecast future machine utilization trends. This proactive approach optimizes maintenance planning, enhances cybersecurity measures, and minimizes unplanned downtime despite variable production processes. With its modular architecture enabling seamless integration across industrial setups and low implementation costs, DETECTA 2.0 presents an attractive solution for SMEs to strengthen their predictive maintenance and cybersecurity strategies.
|
arxiv:2405.15832
|
recent advances in the visualization of continuous multimodal multi - objective optimization ( mmmoo ) landscapes brought a new perspective to their search dynamics. locally efficient ( le ) sets, often considered as traps for local search, are rarely isolated in the decision space. rather, intersections by superposing attraction basins lead to further solution sets that at least partially contain better solutions. the multi - objective gradient sliding algorithm ( mogsa ) is an algorithmic concept developed to exploit these superpositions. while it has promising performance on many mmmoo problems with linear le sets, closer analysis of mogsa revealed that it does not sufficiently generalize to a wider set of test problems. based on a detailed analysis of shortcomings of mogsa, we propose a new algorithm, the multi - objective landscape explorer ( mole ). it is able to efficiently model and exploit le sets in mmmoo problems. an implementation of mole is presented for the bi - objective case, and the practicality of the approach is shown in a benchmarking experiment on the bi - objective bbob testbed.
|
arxiv:2204.10848
|
this paper investigates the effect of permutations on blocks of a prime reciprocal sequence on its randomness. a relationship between the number of permutations used and the improvement of performance is presented. this can be used as a method for increasing the cryptographic strength of pseudorandom sequences.
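as an illustrative sketch (not the paper's exact construction), the two ingredients above can be combined in a few lines: the binary expansion of a prime reciprocal 1/p as a pseudorandom bit source, and a fixed permutation applied to consecutive blocks. the block length and permutation here are arbitrary choices; when 2 is a primitive root mod p, the bit sequence has the maximal period p - 1.

```python
def prime_reciprocal_bits(p, n):
    """first n binary digits of 1/p, computed by long division in base 2."""
    bits, r = [], 1
    for _ in range(n):
        r *= 2
        bits.append(r // p)
        r %= p
    return bits

def permute_blocks(bits, perm):
    """apply the same permutation to consecutive blocks of len(perm) bits;
    a trailing partial block is dropped."""
    k = len(perm)
    out = []
    for i in range(0, len(bits) - len(bits) % k, k):
        block = bits[i:i + k]
        out.extend(block[j] for j in perm)
    return out
```

for example, 1/7 in binary is 0.001001..., so `prime_reciprocal_bits(7, 6)` yields `[0, 0, 1, 0, 0, 1]`.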
|
arxiv:1202.0200
|
we describe a new method for combinatorially computing the transverse invariant in knot floer homology. previous work of the authors and stone used braid diagrams to combinatorially compute knot floer homology of braid closures. however, that approach was unable to explicitly identify the invariant of transverse links that naturally appears in braid diagrams. in this paper, we improve the previous approach in order to compute the transverse invariant. we define a new combinatorial complex that computes knot floer homology and identify the braid invariant of transverse knots and links in the homology of this complex.
|
arxiv:1703.06861
|
the zak - otfs input / output ( i / o ) relation is predictable and non - fading when the delay and doppler periods are greater than the effective channel delay and doppler spreads, a condition which we refer to as the crystallization condition. the filter taps can simply be read off from the response to a single zak - otfs pilot pulsone, and the i / o relation can be reconstructed for a sampled system that operates under finite duration and bandwidth constraints. in previous work we had measured ber performance of a baseline system where we used separate zak - otfs subframes for sensing and data transmission. in this letter we demonstrate how to use turbo signal processing to match ber performance of this baseline system when we integrate sensing and communication within the same zak - otfs subframe. the turbo decoder alternates between channel sensing using a noise - like waveform ( spread pulsone ) and recovery of data transmitted using point pulsones.
|
arxiv:2406.06024
|
we compute the superconformal index of the $\mathcal{N}=4$ $SU(N)$ yang - mills theory through a residue calculation. the method is similar in spirit to the bethe ansatz formalism, except that all poles are explicitly known, and we do not require specialization of any of the chemical potentials. our expression for the index allows us to revisit the cardy limit using modular properties of four - dimensional supersymmetric partition functions. we find that all residues contribute at leading order in the cardy limit. in a specific region of flavour chemical potential space, close to the two unrefined points, in fact all residues contribute universally. these universal residues precisely agree with the entropy functions of the asymptotically ads$_5$ black hole and its "twin saddle" respectively. finally, we discuss how our formula is suited to study the implications of four - dimensional modularity for the index beyond the cardy limit.
|
arxiv:2011.06605
|
the study of combinatorial games is intimately tied to the study of graphs, as any game can be realized as a directed graph in which players take turns traversing the edges until reaching a sink. however, there have heretofore been few efforts towards analyzing game graphs using graph theoretic metrics and techniques. a set $S$ of vertices in a graph $G$ resolves $G$ if every vertex in $G$ is uniquely determined by the vector of its distances from the vertices in $S$. a metric basis of $G$ is a smallest resolving set, and the metric dimension is the cardinality of a metric basis. in this article we examine the metric dimension of the graphs resulting from some rulesets, including both short games ( those which are sure to end after finitely many turns ) and loopy games ( those games for which the associated graph contains cycles ).
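the definitions above can be checked directly on small graphs with a brute-force sketch (a hypothetical helper, not the article's code): compute all-pairs distances by bfs, then search subsets of increasing size for a resolving set. the search is exponential, so this is only practical for small game graphs.

```python
from itertools import combinations
from collections import deque

def bfs_dist(adj, src):
    """hop distances from src in an (undirected) adjacency dict."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def metric_dimension(adj):
    """smallest k such that some k-subset of vertices resolves the graph
    (brute force; assumes the graph is connected)."""
    verts = list(adj)
    dist = {v: bfs_dist(adj, v) for v in verts}
    for k in range(1, len(verts) + 1):
        for s in combinations(verts, k):
            vectors = {tuple(dist[w][v] for w in s) for v in verts}
            if len(vectors) == len(verts):  # all distance vectors distinct
                return k
    return len(verts)
```

for instance, a path has metric dimension 1 (one endpoint resolves it), while a cycle needs 2.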
|
arxiv:1905.05033
|
there is a long tradition of the axiomatic study of consensus methods in phylogenetics that satisfy certain desirable properties. one recently - introduced property is associative stability, which is desirable because it confers a computational advantage, in that the consensus method only needs to be computed "pairwise". in this paper, we introduce a phylogenetic consensus method that satisfies this property, in addition to being "regular". the method is based on the introduction of a partial order on the set of rooted phylogenetic trees, itself based on the notion of a hierarchy - preserving map between trees. this partial order may be of independent interest. we call the method "lattice consensus", because it takes the unique maximal element in a lattice of trees defined by the partial order. aside from being associatively stable, lattice consensus also satisfies the property of being pareto on rooted triples, answering in the affirmative a question of bryant et al. (2017). we conclude the paper with an answer to another question of bryant et al., showing that there is no regular extension stable consensus method for binary trees.
|
arxiv:1810.06831
|
a gl(2, r) structure on an (n + 1)-dimensional manifold is a smooth pointwise identification of tangent vectors with polynomials in two variables homogeneous of degree n. this, for even n = 2k, defines a conformal structure of signature (k, k + 1) by specifying the null vectors to be the polynomials with vanishing quadratic invariant. we focus on the case n = 6 and show that the resulting conformal structure in seven dimensions is compatible with a conformal g_2 structure or its non - compact analogue. if a gl(2, r) structure arises on a moduli space of rational curves on a surface with self - intersection number 6, then certain components of the intrinsic torsion of the g_2 structure vanish. we give examples of simple 7th order odes whose solution curves are rational and find the corresponding g_2 structures. in particular we show that bryant's weak g_2 holonomy metric on the homology seven - sphere so(5)/so(3) is the unique weak g_2 metric arising from a rational curve.
|
arxiv:1002.3963
|
the major aim of this paper is to explain data poisoning attacks using label flipping during the training stage of electroencephalogram ( eeg ) signal - based human emotion evaluation systems deploying machine learning models, from the attackers' perspective. human emotion evaluation using eeg signals has consistently attracted a lot of research attention. the identification of human emotional states based on eeg signals is effective for detecting potential internal threats caused by insider individuals. nevertheless, eeg signal - based human emotion evaluation systems have shown several vulnerabilities to data poisoning attacks. the findings of the experiments demonstrate that the proposed data poisoning attacks succeed independently of the model, although different models exhibit varying levels of resilience to the attacks. in addition, the data poisoning attacks on the eeg signal - based human emotion evaluation systems are explained with several explainable artificial intelligence ( xai ) methods, including shapley additive explanation ( shap ) values, local interpretable model - agnostic explanations ( lime ), and generated decision trees. the code for this paper is publicly available on github.
|
arxiv:2301.06923
|
parametric variability is inevitable in actual energy harvesters. it can significantly affect crucial aspects of the system performance, especially in harvesting systems that present geometric parameters, material properties, or excitation conditions that are susceptible to small perturbations. this work aims to identify the most critical parameters in the dynamic behavior of asymmetric bistable energy harvesters with nonlinear piezoelectric coupling, considering the variability of their physical and excitation properties. for this purpose, a global sensitivity analysis based on orthogonal variance decomposition, employing sobol indices, is performed to quantify the effect of the harvester parameters on the variance of the recovered power. this technique quantifies the variance concerning each parameter individually and collectively regarding the total variation of the model. the results indicate that the frequency and amplitude of excitation, asymmetric terms, and electrical properties of the piezoelectric coupling are the most critical parameters that affect the mean power harvested. it is also shown that the order of importance of the parameters can change according to the stability of the harvester's dynamic response. in this way, a better understanding of the system under analysis is obtained, since the study allows the identification of vital parameters that rule the change of dynamic behavior and therefore constitutes a powerful tool in the robust design, optimization, and response prediction of nonlinear harvesters.
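the first-order sobol indices described above can be sketched with a crude monte carlo estimator (a pick-freeze/saltelli-style formula, which is standard; the test function below is an arbitrary additive toy model standing in for the harvester, not the paper's model).

```python
import random

def sobol_first_order(f, d, n, seed=0):
    """monte carlo estimate of first-order sobol indices S_i for f on the
    unit hypercube [0,1]^d, using the estimator
    S_i ~ mean( f(B) * (f(A_B^i) - f(A)) ) / Var(f)."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    mean = sum(fA + fB) / (2 * n)
    var = sum((y - mean) ** 2 for y in fA + fB) / (2 * n)
    S = []
    for i in range(d):
        # A with its i-th column replaced by B's i-th column
        ABi = [a[:i] + [b[i]] + a[i + 1:] for a, b in zip(A, B)]
        fABi = [f(x) for x in ABi]
        S.append(sum(fb * (fab - fa)
                     for fb, fab, fa in zip(fB, fABi, fA)) / (n * var))
    return S
```

for the additive model f(x) = x1 + 4 x2 the exact indices are 1/17 and 16/17, so the second input should dominate, mirroring how the excitation parameters dominate the harvested-power variance in the study.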
|
arxiv:2107.04647
|
neural radiance fields ( nerf ) achieve photo - realistic view synthesis with densely captured input images. however, the geometry of nerf is extremely under - constrained given sparse views, resulting in significant degradation of novel view synthesis quality. inspired by self - supervised depth estimation methods, we propose structnerf, a solution to novel view synthesis for indoor scenes with sparse inputs. structnerf leverages the structural hints naturally embedded in multi - view inputs to handle the unconstrained geometry issue in nerf. specifically, it tackles the texture and non - texture regions respectively : a patch - based multi - view consistent photometric loss is proposed to constrain the geometry of textured regions ; for non - textured ones, we explicitly restrict them to be 3d consistent planes. through the dense self - supervised depth constraints, our method improves both the geometry and the view synthesis performance of nerf without any additional training on external data. extensive experiments on several real - world datasets demonstrate that structnerf surpasses state - of - the - art methods for indoor scenes with sparse inputs both quantitatively and qualitatively.
|
arxiv:2209.05277
|
object pose estimation enables robots to understand and interact with their environments. training with synthetic data is necessary in order to adapt to novel situations. unfortunately, pose estimation under domain shift, i. e., training on synthetic data and testing in the real world, is challenging. deep learning - based approaches currently perform best when using encoder - decoder networks but typically do not generalize to new scenarios with different scene characteristics. we argue that patch - based approaches, instead of encoder - decoder networks, are more suited for synthetic - to - real transfer because local to global object information is better represented. to that end, we present a novel approach based on a specialized feature pyramid network to compute multi - scale features for creating pose hypotheses on different feature map resolutions in parallel. our single - shot pose estimation approach is evaluated on multiple standard datasets and outperforms the state of the art by up to 35 %. we also perform grasping experiments in the real world to demonstrate the advantage of using synthetic data to generalize to novel environments.
|
arxiv:2010.16117
|
the equilibrium points and their linear stability have been discussed in the generalized photogravitational chermnykh's problem. the bigger primary is considered a source of radiation and the smaller primary an oblate spheroid. the effect of radiation pressure has been discussed numerically. the collinear points are linearly unstable, and the triangular points are stable in the sense of lyapunov stability provided $\mu < \mu_{routh} = 0.0385201$. the effect of the gravitational potential from the belt is also examined. the mathematical properties of this system differ from those of the classical restricted three - body problem.
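for reference, the quoted routh value is the classical critical mass ratio at which the triangular (lagrange) points lose linear stability, coming from the condition 27μ(1 − μ) < 1. a quick check (classical restricted three-body problem only; the generalized photogravitational problem studied above shifts this threshold):

```python
import math

def routh_critical_mass():
    """critical mass ratio for linear stability of the triangular points in
    the classical restricted three-body problem: 27*mu*(1 - mu) < 1."""
    return (1 - math.sqrt(1 - 4 / 27)) / 2

def triangular_points_stable(mu):
    """linear stability criterion for the triangular points."""
    return 27 * mu * (1 - mu) < 1
```

evaluating `routh_critical_mass()` reproduces mu ≈ 0.03852, matching the threshold cited in the abstract.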
|
arxiv:0806.1132
|
large multi - modality models ( lmms ) have made significant progress in visual understanding and generation, but they still face challenges in general visual editing, particularly in following complex instructions, preserving appearance consistency, and supporting flexible input formats. to address this gap, we introduce risebench, the first benchmark for evaluating reasoning - informed visual editing ( rise ). risebench focuses on four key reasoning types : temporal, causal, spatial, and logical reasoning. we curate high - quality test cases for each category and propose an evaluation framework that assesses instruction reasoning, appearance consistency, and visual plausibility with both human judges and an lmm - as - a - judge approach. our experiments reveal that while gpt - 4o - native significantly outperforms other open - source and proprietary models, even this state - of - the - art system struggles with logical reasoning tasks, highlighting an area that remains underexplored. as an initial effort, risebench aims to provide foundational insights into reasoning - aware visual editing and to catalyze future research. though still in its early stages, we are committed to continuously expanding and refining the benchmark to support more comprehensive, reliable, and scalable evaluations of next - generation multimodal systems. our code and data will be released at https://github.com/phoenixz810/risebench.
|
arxiv:2504.02826
|
the complex spatiotemporal flow patterns in living tissues, driven by active forces, have many of the characteristics associated with inertial turbulence even though the reynolds number is extremely low. analyses of experimental data from two - dimensional epithelial monolayers in combination with agent - based simulations show that cell division and apoptosis lead to directed cell motion for hours, resulting in rapid topological transitions in neighboring cells. these transitions in turn generate both long - ranged and long - lived clockwise and anticlockwise vortices, which give rise to turbulent - like flows. both experiments and simulations show that at long wavelengths the wave vector ($k$) dependent energy spectrum $E(k) \approx k^{-5/3}$, coinciding with the kolmogorov scaling in fully developed inertial turbulence. using theoretical arguments and simulations, we show that long - lived vortices lead to long - time tails in the velocity auto - correlation function, $C_v(t) \sim t^{-1/2}$, which has the same structure as in classical 2d fluids but with a different scaling exponent.
|
arxiv:2211.14410
|
recent advances in machine learning ( ml ) for automating analog circuit synthesis have been significant, yet challenges remain. a critical gap is the lack of a standardized evaluation framework, compounded by various process design kits ( pdks ), simulation tools, and a limited variety of circuit topologies. these factors hinder direct comparisons and the validation of algorithms. to address these shortcomings, we introduced analoggym, an open - source testing suite designed to provide fair and comprehensive evaluations. analoggym includes 30 circuit topologies in five categories : sensing front ends, voltage references, low dropout regulators, amplifiers, and phase - locked loops. it supports several technology nodes for academic and commercial applications and is compatible with commercial simulators such as cadence spectre, synopsys hspice, and the open - source simulator ngspice. analoggym standardizes the assessment of ml algorithms in analog circuit synthesis and promotes reproducibility with its open datasets and detailed benchmark specifications. analoggym's user - friendly design allows researchers to easily adapt it for robust, transparent comparisons of state - of - the - art methods, while also exposing them to real - world industrial design challenges, enhancing the practical relevance of their work. additionally, we have conducted a comprehensive comparison study of various analog sizing methods on analoggym, highlighting the capabilities and advantages of different approaches. analoggym is available in the github repository https://github.com/coda-team/analoggym. the documentation is also available at http://coda-team.github.io/analoggym/.
|
arxiv:2409.08534
|
we consider the landau - de gennes variational model for nematic liquid crystals, in three - dimensional domains. more precisely, we study the asymptotic behaviour of minimizers as the elastic constant tends to zero, under the assumption that minimizers are uniformly bounded and their energy blows up as the logarithm of the elastic constant. we show that there exists a closed set s of finite length, such that minimizers converge to a locally harmonic map away from s. moreover, s restricted to the interior of the domain is a locally finite union of straight line segments. we provide sufficient conditions, depending on the domain and the boundary data, under which our main results apply. we also discuss some examples.
|
arxiv:1501.05236
|
this paper investigates the steady axisymmetric structure of the cold boundary - layer flow surrounding fire whirls developing over localized fuel sources lying on a horizontal surface. the inviscid swirling motion found outside the boundary layer, driven by the entrainment of the buoyant turbulent plume of hot combustion products that develops above the fire, is described by an irrotational solution, obtained by combining taylor's self - similar solution for the motion in the axial plane with the azimuthal motion induced by a line vortex of circulation $2\pi\Gamma$. the development of the boundary layer from a prescribed radial location is determined by numerical integration for different swirl levels, measured by the value of the radial - to - azimuthal velocity ratio $\sigma$ at the initial radial location. as in the case $\sigma = 0$, treated in the seminal boundary - layer analysis of burggraf et al. ( phys. fluids, 1971 ), the pressure gradient associated with the centripetal acceleration of the inviscid flow is seen to generate a pronounced radial inflow. specific attention is given to the terminal shape of the boundary - layer velocity near the axis, which displays a three - layered structure that is described by matched asymptotic expansions. the resulting composite expansion, dependent on the level of ambient swirl through the parameter $\sigma$, is employed as a boundary condition to describe the deflection of the boundary - layer flow near the axis to form a vertical swirl jet. numerical solutions of the resulting non - slender collision region for different values of $\sigma$ are presented both for inviscid flow and for viscous flow with moderately large values of the controlling reynolds number $\Gamma/\nu$. the velocity description provided is useful in mathematical formulations of localized fire - whirl flows, providing consistent boundary conditions accounting for the ambient swirl level.
|
arxiv:2201.07516
|
we introduce an approach for global fitting of the recently published high - throughput and high - accuracy clonogenic cell - survival data for therapeutic scanned proton beams. our fitting procedure accounts for the correlation between the cell survival, the absorbed ( physical ) dose, and the proton linear energy transfer ( let ). the fitting polynomials and constraints have been constructed upon a generalization of the microdosimetric kinetic model ( gmkm ) adapted to account for the low - energy and high lineal - energy spectrum of the beam, where current radiobiological models may underestimate the reported relative biological effectiveness ( rbe ). the parameters ($\alpha$, $\beta$) of the linear - quadratic ( lq ) model calculated by the presented method reveal a smooth transition from low to high lets, which is an advantage of the current method over methods previously employed to fit the same clonogenic data. finally, the presented approach provides insight into underlying microscopic mechanisms which, with future study, may help to elucidate radiobiological responses along the bragg curve and resolve discrepancies between experimental data and current rbe models.
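the lq model named above, with surviving fraction S(D) = exp(−(αD + βD²)), is linear in (α, β) after taking −ln S, so a least-squares fit at a single let reduces to 2×2 normal equations. a minimal sketch (synthetic data, not the paper's clonogenic dataset or its gmkm-constrained global fit):

```python
import math

def lq_survival(dose, alpha, beta):
    """linear-quadratic model: S(D) = exp(-(alpha*D + beta*D^2))."""
    return math.exp(-(alpha * dose + beta * dose ** 2))

def fit_lq(doses, survivals):
    """least-squares fit of (alpha, beta) from -ln(S) = alpha*D + beta*D^2,
    solving the 2x2 normal equations directly."""
    y = [-math.log(s) for s in survivals]
    s11 = sum(d ** 2 for d in doses)
    s12 = sum(d ** 3 for d in doses)
    s22 = sum(d ** 4 for d in doses)
    b1 = sum(d * yi for d, yi in zip(doses, y))
    b2 = sum(d * d * yi for d, yi in zip(doses, y))
    det = s11 * s22 - s12 * s12
    alpha = (s22 * b1 - s12 * b2) / det
    beta = (s11 * b2 - s12 * b1) / det
    return alpha, beta
```

the paper's global fit additionally couples (α, β) across let values through the gmkm constraints; the sketch above is the single-curve building block.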
|
arxiv:1812.09635
|
the purpose of this paper is to show that the mathematical treatment of three - dimensional rotations can be simplified, and its geometrical understanding improved, by using the rodrigues' vector representation. we present a novel geometrical interpretation of the rodrigues' vector. based on this interpretation and simple geometrical considerations, we derive the euler - rodrigues formula, cayley's rotation formula, and the composition law for finite rotations. the level of this discussion should be suitable for undergraduate physics or engineering courses where rotations are discussed.
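the rodrigues' vector b = tan(θ/2) n̂ turns both rotation and composition into short closed formulas. a sketch of the two results (conventions, e.g. the sign of the cross-product term in the composition law, vary between treatments, and the representation is singular for half-turns, where tan(θ/2) diverges):

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def rodrigues_vector(axis, angle):
    """b = tan(angle/2) * unit axis."""
    n = math.sqrt(dot(axis, axis))
    t = math.tan(angle / 2)
    return tuple(t * a / n for a in axis)

def rotate(b, v):
    """rotate v by the rotation encoded in b:
    v' = v + 2/(1+|b|^2) * (b x v + b x (b x v))."""
    bv = cross(b, v)
    bbv = cross(b, bv)
    s = 2 / (1 + dot(b, b))
    return tuple(vi + s * (x + y) for vi, x, y in zip(v, bv, bbv))

def compose(b1, b2):
    """rodrigues vector of rotation b1 followed by b2 (one sign convention):
    b = (b2 + b1 + b2 x b1) / (1 - b1.b2); singular for half-turns."""
    num = tuple(x + y + z for x, y, z in zip(b2, b1, cross(b2, b1)))
    return tuple(c / (1 - dot(b1, b2)) for c in num)
```

for coaxial rotations the composition law reduces to the tangent addition formula, which is an easy way to sanity-check the convention.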
|
arxiv:1607.05999
|
we show that, as a result of the nesting property of the fermi surface, the quarter - doped hubbard model on the honeycomb lattice is unstable with respect to the formation of a magnetic insulating state with nonzero spin chirality for infinitesimally small strength of electron correlation. the insulating state is found to be topologically nontrivial and to have a quantized hall conductance of $\sigma_{xy} = \frac{e^2}{h}$. we find that the fermi surface nesting is robust for an arbitrary value of the next - nearest - neighbor hopping integral. it is thus very possible that the quarter - doped graphene system will realize such an exotic ground state. we also show that the quarter - doped hubbard model on the honeycomb lattice is exactly equivalent in the weak coupling limit to the 3/4 - filled hubbard model on the triangular lattice, in which a similar effect is also observed.
|
arxiv:1103.2420
|
we present principled bayesian model comparison through simulation - based neural classification applied to sn ia analysis. we validate our approach on realistically simulated sn ia light curve data, demonstrating its ability to recover posterior model probabilities while marginalizing over > 4000 latent variables. the amortized nature of our technique allows us to explore the dependence of bayes factors on the true parameters of simulated data, demonstrating occam's razor for nested models. when applied to a sample of 86 low - redshift sne ia from the carnegie supernova project, our method prefers a model with a single dust law and no magnitude step with host mass, disfavouring different dust laws for low - and high - mass hosts with odds in excess of 100:1.
|
arxiv:2311.15650
|
problem. educational disparities in mathematics performance are a persistent challenge. this study aims to unravel the complex factors contributing to these disparities among students internationally, with a focus on the interpretability of the contributing factors. methodology. utilizing data from the programme for international student assessment ( pisa ), we conducted rigorous preprocessing and variable selection to prepare for applying binary classification interpretability models. these models were trained using the stratified k - fold technique to ensure balanced representation and assessed using six key metrics. solution. by applying interpretability models such as shapley additive explanations ( shap ) analysis, we identified critical factors impacting student performance, including reading accessibility, critical thinking skills, gender, and geographical location. results. our findings reveal significant disparities linked to resource availability, with students from lower socioeconomic backgrounds possessing fewer books and demonstrating lower performance in mathematics. the geographical analysis highlighted regional educational disparities, with certain areas consistently underperforming in pisa assessments. gender also emerged as a determinant, with females contributing differently to performance levels across the spectrum. conclusion. the study provides insights into the multifaceted determinants of student mathematics performance and suggests potential avenues for future research to explore global interpretability models and further investigate the socioeconomic, cultural, and educational factors at play.
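the stratified k-fold technique mentioned in the methodology can be sketched in a few lines (a simplified stand-in for library implementations such as scikit-learn's `StratifiedKFold`): shuffle the indices within each class, then deal them round-robin so every fold mirrors the overall label distribution.

```python
import random
from collections import defaultdict

def stratified_kfold(labels, k, seed=0):
    """split indices 0..len(labels)-1 into k folds that preserve the label
    distribution: shuffle indices within each class, then deal round-robin."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    folds = [[] for _ in range(k)]
    for idx in by_class.values():
        rng.shuffle(idx)
        for j, i in enumerate(idx):
            folds[j % k].append(i)
    return folds
```

with a 2:1 class imbalance and k = 3, every fold receives the same 2:1 mix, which is exactly the balanced representation the study relies on.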
|
arxiv:2502.19424
|
machine learning algorithms are now capable of performing evaluations previously conducted by human experts ( e. g., medical diagnoses ). how should we conceptualize the difference between evaluation by humans and by algorithms, and when should an individual prefer one over the other? we propose a framework to examine one key distinction between the two forms of evaluation : machine learning algorithms are standardized, fixing a common set of covariates by which to assess all individuals, while human evaluators customize which covariates are acquired to each individual. our framework defines and analyzes the advantage of this customization - - the value of context - - in environments with high - dimensional data. we show that unless the agent has precise knowledge about the joint distribution of covariates, the benefit of additional covariates generally outweighs the value of context.
|
arxiv:2402.11157
|
we propose prm, a novel photometric stereo based large reconstruction model to reconstruct high - quality meshes with fine - grained local details. unlike previous large reconstruction models that prepare images under fixed and simple lighting as both input and supervision, prm renders photometric stereo images by varying materials and lighting for this purpose, which not only improves the precise local details by providing rich photometric cues but also increases the model's robustness to variations in the appearance of input images. to offer enhanced flexibility of image rendering, we incorporate a real - time physically - based rendering ( pbr ) method and mesh rasterization for online image rendering. moreover, by employing an explicit mesh as our 3d representation, prm ensures the application of differentiable pbr, which supports the utilization of multiple photometric supervisions and better models the specular color for high - quality geometry optimization. our prm leverages photometric stereo images to achieve high - quality reconstructions with fine - grained local details, even amidst sophisticated image appearances. extensive experiments demonstrate that prm significantly outperforms other models.
|
arxiv:2412.07371
|
it is shown that, for maximally monotone linear relations defined on a general banach space, the monotonicities of dense type, of negative - infimum type, and of fitzpatrick - phelps type are the same and equivalent to monotonicity of the adjoint. this result also provides affirmative answers to two problems : one posed by phelps and simons, and the other by simons.
|
arxiv:1103.6239
|
we consider a markov chain approximation scheme for utility maximization problems in continuous time, which uses, in turn, a piecewise constant policy approximation, euler - maruyama time stepping, and a gauss - hermite approximation of the gaussian increments. the error estimates previously derived in picarelli and reisinger ( 2019 ) are asymmetric between lower and upper bounds due to the control approximation and improve on known results in the literature in the lower case only. in the present paper, we use duality results to obtain a posteriori upper error bounds which are empirically of the same order as the lower bounds. the theoretical results are confirmed by our numerical tests.
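the gauss-hermite approximation of the gaussian increments can be illustrated with the classical 3-point rule (nodes 0, ±√3 with weights 2/3, 1/6, 1/6 for a standard normal, exact for polynomials up to degree 5), here applied to a single euler-maruyama step; the drift, volatility, and step size below are arbitrary illustrative choices, not the paper's utility-maximization setup.

```python
import math

# 3-point gauss-hermite rule for E[f(Z)], Z ~ N(0,1)
# (probabilists' normalization; exact for polynomials up to degree 5)
GH_NODES = (-math.sqrt(3.0), 0.0, math.sqrt(3.0))
GH_WEIGHTS = (1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0)

def expect_gh(f):
    """quadrature approximation of E[f(Z)] for standard normal Z."""
    return sum(w * f(z) for w, z in zip(GH_WEIGHTS, GH_NODES))

def euler_maruyama_step_expectation(f, x, mu, sigma, h):
    """E[f(X_{t+h})] for one euler-maruyama step
    X_{t+h} = x + mu(x)*h + sigma(x)*sqrt(h)*Z, Z ~ N(0,1)."""
    return expect_gh(lambda z: f(x + mu(x) * h + sigma(x) * math.sqrt(h) * z))
```

in the scheme above, this quadrature replaces the continuum of gaussian increments by three transition branches per time step, which is what makes the resulting markov chain finite.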
|
arxiv:2001.01110
|
we consider the laplace - beltrami operator with dirichlet boundary conditions on convex domains in a riemannian manifold $(M^n, g)$, and prove that the product of the fundamental gap with the square of the diameter can be arbitrarily small whenever $M^n$ has even a single tangent plane of negative sectional curvature. in particular, the fundamental gap conjecture strongly fails for small deformations of euclidean space which introduce any negative curvature. we also show that when the curvature is negatively pinched, it is possible to construct such domains of any diameter up to the diameter of the manifold. the proof is adapted from the argument of bourni et al. ( annales henri poincaré 2022 ), which established the analogous result for convex domains in hyperbolic space, but requires several new ingredients.
|
arxiv:2211.06404
|
we analyzed period changes of the high - amplitude delta scuti variable star css_j102714.3+205943 for about 20 years, utilizing data from automated sky surveys along with our own observations. with the help of the o - c diagram, we found that the period decreased noticeably between jd 2454800 and jd 2457300. a possible cause of the change could be intrinsic processes in the star. however, the observed behavior of the o - c diagram can also be explained by the light - time effect if the star is a component of a binary system. times of maxima for the star, derived from the surveys and our observations, are listed.
|
arxiv:2503.23521
|
we introduce a new dynamic model with the capability of recognizing both activities that an individual is performing as well as where that individual is located. our model is novel in that it utilizes a dynamic graphical model to jointly estimate both activity and spatial context over time, based on the simultaneous use of asynchronous observations consisting of gps measurements and measurements from a small mountable sensor board. joint inference is quite desirable as it has the ability to improve the accuracy of the model. a key goal, however, in designing our overall system is to be able to perform accurate inference decisions while minimizing the amount of hardware an individual must wear. this minimization leads to greater comfort and flexibility, decreased power requirements and therefore increased battery life, and reduced cost. we show results indicating that our joint measurement model outperforms measurements from either the sensor board or gps alone, using two types of probabilistic inference procedures, namely particle filtering and pruned exact inference.
|
arxiv:1206.6869
|
we consider the forced harmonic oscillator quantized according to infinite statistics ( a special case of the 'quon' algebra proposed by greenberg ). we show that in order for the statistics to be consistently evolved the forcing term must be identically zero for all time. hence only the free harmonic oscillator may be quantized according to infinite statistics.
|
arxiv:hep-th/9401162
|
this article addresses the much debated question whether the degree of hydrophobicity of single - layer graphene ( 1lg ) is different from the one of double - layer graphene ( 2lg ). knowledge of the water affinity of graphene and its spatial variations is critically important as it can affect the graphene properties as well as the performance of graphene devices exposed to humidity. by employing chemical force microscopy ( cfm ) with a probe rendered hydrophobic by functionalization with octadecyltrichlorosilane ( ots ), the adhesion force between the probe and epitaxial graphene on sic has been measured in deionized water. owing to the hydrophobic attraction, a larger adhesion force was measured on 2lg domains of graphene surfaces, thus showing that 2lg is more hydrophobic than 1lg. identification of 1lg and 2lg domains was achieved through kelvin probe force microscopy and raman spectral mapping. approximate values of the adhesion force per ots molecule have been calculated through contact area analysis. furthermore, the contrast of friction force images measured in contact mode was reversed to the 1lg / 2lg adhesion contrast and its origin was discussed in terms of the likely water depletion over hydrophobic domains as well as deformation in the contact area between afm tip and 1lg.
|
arxiv:1804.09998
|
We investigate theoretically the Bose-Hubbard version of the celebrated Su-Schrieffer-Heeger topological model, which essentially describes a one-dimensional dimerized array of coupled oscillators with on-site interactions. We study the physics arising from the whole gamut of possible dimerizations of the chain, including both the weakly and the strongly dimerized limiting cases. Focusing on the two-excitation subspace, we systematically uncover and characterize the different types of states which may emerge due to the competition between the inter-oscillator couplings, the intrinsic topology of the lattice, and the strength of the on-site interactions. In particular, we discuss the formation of scattering bands full of extended states, bound bands full of two-particle pairs (including so-called 'doublons', when the pair occupies the same lattice site), and different flavors of topological edge states. The features we describe may be realized in a plethora of systems, including nanoscale architectures such as photonic cavities and optical lattices, and provide perspectives for topological many-body physics.
|
arxiv:2105.04406
|
Based on the comprehensive national death registry of Mexico spanning from 1998 to 2022, a point and interval estimation method for the excess mortality in Mexico during the years 2020-2022 is proposed, based on illness-induced deaths only, using a polynomial regression model. The results estimate an excess mortality of around 788,000 people (39.3%), equivalent to a rate of 626 per 100,000 inhabitants. The male/female ratio is estimated at 1.7. As a reference for comparison, for the whole period 2020-2022 Mexico's INEGI estimated an excess mortality between 673,000 (with a quasi-Poisson model) and 808,000 (using endemic channels estimation).
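The core estimator described here (observed deaths minus a polynomial trend extrapolated from pre-pandemic years) can be sketched as follows. The toy registry numbers in the test are made up; the paper fits Mexico's 1998-2022 registry and also builds interval estimates, which this sketch omits.

```python
import numpy as np

def excess_mortality(years, deaths, pandemic_years, degree=2):
    """Excess mortality as observed minus expected deaths, where the
    expectation extrapolates a polynomial trend fitted to the
    pre-pandemic years only."""
    years = np.asarray(years, dtype=float)
    deaths = np.asarray(deaths, dtype=float)
    t0 = years.min()
    in_pandemic = np.isin(years, pandemic_years)
    # Fit the trend on pre-pandemic data, centring years for stability.
    coeffs = np.polyfit(years[~in_pandemic] - t0, deaths[~in_pandemic], degree)
    expected = np.polyval(coeffs, np.asarray(pandemic_years, dtype=float) - t0)
    observed = deaths[in_pandemic]
    return float((observed - expected).sum())
```

With a linear underlying trend plus a constant pandemic bump, the estimator recovers the bump exactly up to numerical error, which is the sanity check one would run before applying it to real registry data.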
|
arxiv:2311.15483
|
A proof-labeling scheme (PLS) for a boolean predicate $\Pi$ on labeled graphs is a mechanism used for certifying the legality with respect to $\Pi$ of global network states in a distributed manner. In a PLS, a certificate is assigned to each processing node of the network, and the nodes are in charge of checking that the collection of certificates forms a global proof that the system is in a correct state, by exchanging the certificates once, between neighbors only. The main measure of complexity is the size of the certificates. Many PLSs have been designed for certifying specific predicates, including cycle-freeness, minimum-weight spanning tree, planarity, etc. In 2021, a breakthrough was obtained, as a meta-theorem stating that a large set of properties have compact PLSs in a large class of networks. Namely, for every $\mathrm{MSO}_2$ property $\Pi$ on labeled graphs, there exists a PLS for $\Pi$ with $O(\log n)$-bit certificates for all graphs of bounded tree-depth. This result has been extended to the larger class of graphs with bounded tree-width, using certificates on $O(\log^2 n)$ bits. We extend this result even further, to the larger class of graphs with bounded clique-width, which, as opposed to the other two aforementioned classes, includes dense graphs. We show that, for every $\mathrm{MSO}_1$ property $\Pi$ on labeled graphs, there exists a PLS for $\Pi$ with $O(\log^2 n)$-bit certificates for all graphs of bounded clique-width.
|
arxiv:2307.14292
|
While pre-trained language models (PLMs) have become a de-facto standard for promoting the accuracy of text classification tasks, recent studies find that PLMs often predict over-confidently. Although various calibration methods have been proposed, such as ensemble learning and data augmentation, most of these methods have been verified on computer vision benchmarks rather than on PLM-based text classification tasks. In this paper, we present an empirical study on confidence calibration for PLMs, addressing three categories: confidence penalty losses, data augmentations, and ensemble methods. We find that an ensemble model overfitted to the training set shows sub-par calibration performance, and also observe that PLMs trained with a confidence penalty loss exhibit a trade-off between calibration and accuracy. Building on these observations, we propose the Calibrated PLM (CALL), a combination of calibration techniques. CALL complements the drawbacks that may occur when utilizing a calibration method individually and boosts both classification and calibration accuracy. Design choices in CALL's training procedures are extensively studied, and we provide a detailed analysis of how calibration techniques affect the calibration performance of PLMs.
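The over-confidence this abstract studies is usually quantified with the expected calibration error (ECE): bin predictions by confidence and average the gap between per-bin accuracy and per-bin confidence, weighted by bin size. A minimal version of this standard diagnostic (not the paper's code) is:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then average
    |bin accuracy - bin confidence| weighted by bin mass."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return float(ece)
```

A perfectly calibrated classifier (75% confidence, 75% accuracy) scores zero, while an over-confident one (90% confidence, 50% accuracy) scores 0.4, which is the kind of gap the calibration methods in the paper aim to shrink.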
|
arxiv:2302.06690
|
We report on the implementation and detailed modelling of a Josephson parametric amplifier (JPA) made from an array of eighty superconducting quantum interference devices (SQUIDs), forming a non-linear quarter-wave resonator. This device was fabricated using a very simple single-step fabrication process. It shows a large bandwidth (45 MHz), an operating frequency tunable between 5.9 GHz and 6.8 GHz, and a large input saturation power (-117 dBm) when biased to obtain 20 dB of gain. Despite the length of the SQUID array being comparable to the wavelength, we present a model based on an effective non-linear LC series resonator that quantitatively describes these figures of merit without fitting parameters. Our work illustrates the advantage of using array-based JPAs, since a single-SQUID device showing the same bandwidth and resonant frequency would display a saturation power 15 dB lower.
|
arxiv:1809.08476
|
We derive results similar to those of Bo et al. (2010), but for the case when the dynamics of the FX rate is driven by a general Merton jump-diffusion process. The main results of our paper are as follows: 1) formulas for the Esscher transform parameters ensuring that the discounted foreign exchange rate is a martingale for a general Merton jump-diffusion process are derived; using the values of these parameters we pass to a risk-neutral measure and provide new formulas for the distribution of jumps, the mean jump size, and the Poisson process intensity with respect to this measure; pricing formulas for European call foreign exchange options are given as well; 2) the obtained formulas are applied to the case of exponential processes; 3) numerical simulations of European call foreign exchange option prices for different parameters are provided; 4) code for the Matlab functions used in the numerical simulations of option prices is given.
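A generic Monte Carlo pricer for the model class named here, a European call on an FX rate under a Merton jump-diffusion with lognormal jumps, can be sketched as follows. This illustrates the model only; the paper's Esscher-transform change of measure and closed-form pricing formulas are not reproduced.

```python
import math
import random

def _poisson(rng, lam):
    """Knuth's algorithm for sampling a Poisson count."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def merton_fx_call_mc(s0, strike, r_dom, r_for, sigma, lam, mu_j, sigma_j,
                      t=1.0, n_paths=20000, seed=1):
    """Monte Carlo price of a European FX call under a Merton
    jump-diffusion, discounted at the domestic rate."""
    rng = random.Random(seed)
    # Jump compensator keeps the risk-neutral drift at r_dom - r_for.
    kappa = math.exp(mu_j + 0.5 * sigma_j ** 2) - 1.0
    drift = (r_dom - r_for - 0.5 * sigma ** 2 - lam * kappa) * t
    total = 0.0
    for _ in range(n_paths):
        n_jumps = _poisson(rng, lam * t)
        jumps = sum(rng.gauss(mu_j, sigma_j) for _ in range(n_jumps))
        st = s0 * math.exp(drift + sigma * math.sqrt(t) * rng.gauss(0.0, 1.0) + jumps)
        total += max(st - strike, 0.0)
    return math.exp(-r_dom * t) * total / n_paths
```

With the jump intensity set to zero the model collapses to the standard Garman-Kohlhagen (Black-Scholes for FX) setting, which gives a convenient closed-form check on the simulator.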
|
arxiv:1402.2273
|
It is shown that the boldface maximality principle for subcomplete forcing, together with the assumption that the universe has only set-many grounds, implies the existence of a (parameter-free) definable well-ordering of $\mathcal{P}(\omega_1)$. The same conclusion follows from boldface maximality for subcomplete forcing, assuming there is no inner model with an inaccessible limit of measurable cardinals. Similarly, the bounded subcomplete forcing axiom, together with the assumption that $x^\#$ does not exist for some $x \subseteq \omega_1$, implies the existence of a well-order of $\mathcal{P}(\omega_1)$ which is $\Delta_1$-definable without parameters and $\Delta_1(H_{\omega_2})$-definable using a subset of $\omega_1$ as a parameter. This well-order is in $L(\mathcal{P}(\omega_1))$. Enhanced versions of bounded forcing axioms are introduced that are strong enough to yield the implications of the maximality principles mentioned above.
|
arxiv:1708.08167
|
In the past decades, time-ordered perturbation theory has been very successful in describing relativistic scattering processes. It was developed for local quantum field theories. However, there are field theories which are governed by non-local interactions, for example non-commutative quantum field theory (NCQFT). Filk (Phys. Lett. B 376 (1996) 53) first studied NCQFT perturbatively, obtaining the usual Feynman propagator and additional phase factors as the basic elements of perturbation theory. However, this treatment is only applicable to cases where the deformation of space-time does not involve time. Thus, we generalize Filk's approach in two ways: first, we study non-local interactions of a very general type, able to embed NCQFT; and second, we also include the case where non-locality involves time. A few applications of the obtained formalism are also discussed.
|
arxiv:hep-th/0306101
|
We built a catalog of 122 FR II radio galaxies, called FRIICat, selected from a published sample obtained by combining observations from the NVSS, FIRST, and SDSS surveys. The catalog includes sources with redshift $\leq 0.15$, an edge-brightened radio morphology, and at least one of the emission peaks located at a radius $r$ larger than 30 kpc from the center of the host. The radio luminosity at 1.4 GHz of the FR II sources covers the range $L_{1.4} \sim 10^{39.5} - 10^{42.5}$ erg s$^{-1}$. The FRIICat sample consists of 90% low and 10% high excitation galaxies (LEGs and HEGs), respectively. The properties of these two classes are significantly different. The FRIICat LEGs are mostly luminous ($-20 \gtrsim M_r \gtrsim -24$), red early-type galaxies with black hole masses in the range $10^8 \lesssim M_{\rm BH} \lesssim 10^9 M_\odot$; they are essentially indistinguishable from the FR Is belonging to FRICat. The HEG FR IIs are associated with optically bluer and mid-IR redder hosts than the LEG FR IIs, and with galaxies and black holes that are smaller, on average, by a factor of $\sim 2$. FR IIs have a factor of $\sim 3$ higher average radio luminosity than FR Is. Nonetheless, most ($\sim 90$%) of the selected FR IIs have a radio power that is lower, by as much as a factor of $\sim 100$, than the transition value between FR Is and FR IIs found in the 3C sample. The correspondence between the morphological classification of FR I and FR II and the separation in radio power disappears when including sources selected at low radio flux thresholds, in line with previous results. In conclusion, a radio source produced by a low-power jet can be edge brightened or edge darkened, and the outcome is not related to differences in the optical properties of the host galaxy.
|
arxiv:1703.03427
|
With the success of transformer architectures across diverse applications, the error correction code transformer (ECCT) has gained significant attention for its superior decoding performance. In spite of its advantages, the error floor phenomenon in ECCT decoding remains unexplored. We present the first investigation of the error floor issue in ECCT and propose a hybrid decoding approach that integrates hard decision decoders as pre- and post-decoders with ECCT to effectively lower the error floor. In particular, we introduce a novel loss function for ECCT that considers the dynamics of the hybrid decoding algorithm. Training ECCT with the proposed loss function enhances its ability to correct specific error patterns by taking into account its interaction with the auxiliary decoders. Simulation results demonstrate that the proposed hybrid decoder with the novel loss function significantly outperforms the original ECCT in both the waterfall and the error floor regions.
|
arxiv:2502.09065
|
We study a new algorithmic process of graph growth which starts from a single initial vertex and operates in discrete time-steps, called \emph{slots}. In every slot, the graph grows via two operations: (i) vertex generation and (ii) edge activation. The process completes at the last slot, where a (possibly empty) subset of the edges of the graph is removed. Removed edges are called \emph{excess edges}. The main problem investigated in this paper is: given a target graph $G$, we are asked to design an algorithm that outputs such a process growing $G$, called a \emph{growth schedule}. Additionally, the algorithm should try to minimize the total number of slots $k$ and of excess edges $\ell$ used by the process. We provide both positive and negative results for different values of $k$ and $\ell$, with our main focus being either schedules with a sub-linear number of slots or with zero excess edges.
|
arxiv:2107.14126
|
High-order dependency parsing leverages high-order features such as siblings or grandchildren to improve on the state-of-the-art accuracy of current first-order dependency parsers. The present paper uses biaffine scores to provide an estimate of the arc scores, which are then propagated into a graphical model. Inference inside the graphical model is solved using dual decomposition. The algorithm propagates biaffine neural scores to the graphical model and, by leveraging dual decomposition inference, the overall circuit is trained end-to-end to transfer first-order information to the high-order parts.
|
arxiv:2306.10469
|
Enhancement of strangeness production has long been proposed as a promising signal of quark-gluon plasma production. A convenient indicator for it is the Wroblewski parameter, which has been shown to be about a factor of two higher in heavy ion collisions. Using a method we proposed earlier, we obtain lattice QCD results for the Wroblewski parameter from our simulations of QCD with two light quarks, both below and above the chiral transition. Our first-principles-based and parameter-free results compare well with the A-A data.
|
arxiv:hep-ph/0403172
|
Gravitational waves are a sensitive probe of the structure of compact astronomical objects and the nature of gravity in the strong regime. Modifications of near-horizon physics can imprint on the late-time ringdown waveform, leaving behind a train of echoes, from which useful information about new physics in the strong gravity regime can be extracted. We propose a novel approach to compute the ringdown waveform and characterize individual echoes perturbatively using Fredholm determinants, which can be intuitively represented via a diagrammatic scheme. Direct non-perturbative treatments can also be easily implemented in some cases. Numerically, the method is also effective and accurate, even with relatively modest resources.
|
arxiv:1908.00189
|
Geological engineering is a discipline of engineering concerned with the application of geological science and engineering principles to fields such as civil engineering, mining, environmental engineering, and forestry, among others. The work of geological engineers often directs or supports the work of other engineering disciplines, such as assessing the suitability of locations for civil engineering, environmental engineering, mining operations, and oil and gas projects by conducting geological, geoenvironmental, geophysical, and geotechnical studies. They are involved with impact studies for facilities and operations that affect surface and subsurface environments. The engineering design input and other recommendations made by geological engineers on these projects often have a large impact on construction and operations. Geological engineers plan, design, and implement geotechnical, geological, geophysical, hydrogeological, and environmental data acquisition. This ranges from manual ground-based methods to deep drilling, geochemical sampling, advanced geophysical techniques, and satellite surveying. Geological engineers are also concerned with the analysis of past and future ground behaviour, mapping at all scales, and ground characterization programs for specific engineering requirements. These analyses lead geological engineers to make recommendations and prepare reports which can have major effects on the foundations of construction, mining, and civil engineering projects. Some examples of projects include rock excavation, building foundation consolidation, pressure grouting, hydraulic channel erosion control, slope and fill stabilization, landslide risk assessment, groundwater monitoring, and assessment and remediation of contamination. In addition, geological engineers are included on design teams that develop solutions to surface hazards, groundwater remediation, underground and surface excavation projects, and resource management.
Like mining engineers, geological engineers also conduct resource exploration campaigns, mine evaluation and feasibility assessments, and contribute to the ongoing efficiency, sustainability, and safety of active mining projects.

== History ==

While the term geological engineering was not coined until the 19th century, principles of geological engineering are demonstrated through millennia of human history.

=== Ancient engineering ===

One of the oldest examples of geological engineering principles is the Euphrates Tunnel, which was constructed around 2180-2160 B.C. This tunnel, and other tunnels and qanats from around the same time, were used by ancient civilizations such as Babylon and Persia for the purposes of irrigation. Another famous example where geological engineering principles were used in an ancient engineering project was the construction of the Eupalinos aqueduct tunnel in ancient Greece. This was the first tunnel to be constructed inward from both ends using principles of geometry and trigonometry, marking a significant
|
https://en.wikipedia.org/wiki/Geological_engineering
|
The rise of social networks as the primary means of communication in almost every country in the world has simultaneously triggered an increase in the amount of fake news circulating online. This fact became particularly evident during the 2016 U.S. political elections and even more so with the advent of the COVID-19 pandemic. Several research studies have shown how the effects of fake news dissemination can be mitigated by promoting greater competence through lifelong learning and discussion communities, and generally by rigorous training in the scientific method and broad interdisciplinary education. The urgent need for models that can describe the growing infodemic of fake news has been highlighted by the current pandemic. The resulting slowdown in vaccination campaigns due to misinformation, and generally the inability of individuals to discern the reliability of information, is posing enormous risks to the governments of many countries. In this research, using the tools of kinetic theory, we describe the interaction between fake news spreading and the competence of individuals through multi-population models in which fake news spreads analogously to an infectious disease, with different impact depending on the level of competence of individuals. The level of competence, in particular, is subject to evolutionary dynamics due to both social interactions between agents and external learning dynamics. The results show how the model is able to correctly describe the dynamics of diffusion of fake news and the important role of competence in its containment.
|
arxiv:2109.14087
|
Using a self-consistent Hartree description for both infinite nuclear matter and finite nuclei based on a relativistic quark model (the quark-meson coupling model), we investigate the variation of the masses of the non-strange vector mesons, the hyperons, and the nucleon in infinite nuclear matter and in finite nuclei.
|
arxiv:nucl-th/9608060
|
We present timing and spectral analysis of RXTE-PCA observations of the accretion-powered pulsar 4U 1907+09 between June 2007 and August 2008. 4U 1907+09 had been in a spin-down episode with a spin-down rate of $-3.54 \times 10^{-14}$ Hz s$^{-1}$ before 1999. From RXTE observations after March 2001, the source showed a $\sim 60$% decrease in spin-down magnitude, and INTEGRAL observations after March 2003 showed that the source started to spin up. We find that the source has recently entered a new spin-down episode with a spin-down rate of $-3.59 \times 10^{-14}$ Hz s$^{-1}$. This spin-down rate is quite close to the previous long-term spin-down rate of the source measured before 1999. From the spectral analysis, we show that the hydrogen column density varies with orbital phase.
|
arxiv:0812.4189
|
PiDrone is a quadrotor platform created to accompany an introductory robotics course. Students build an autonomous flying robot from scratch and learn to program it through assignments and projects. Existing educational robots do not have significant autonomous capabilities, such as high-level planning and mapping. We present a hardware and software framework for an autonomous aerial robot, in which all software for autonomy can run onboard the drone, implemented in Python. We present an unscented Kalman filter (UKF) for accurate state estimation. Next, we present an implementation of Monte Carlo (MC) localization and FastSLAM for simultaneous localization and mapping (SLAM). The performance of the UKF, localization, and SLAM is tested and compared to ground truth provided by a motion-capture system. Our evaluation demonstrates that our autonomous educational framework runs quickly and accurately on a Raspberry Pi in Python, making it ideal for use in educational settings.
|
arxiv:1910.03516
|
Penetration testing is a vital practice for identifying and mitigating vulnerabilities in cybersecurity systems, but its manual execution is labor-intensive and time-consuming. Existing large language model (LLM)-assisted or automated penetration testing approaches often suffer from inefficiencies, such as a lack of contextual understanding and excessive, unstructured data generation. This paper presents VulnBot, an automated penetration testing framework that leverages LLMs to simulate the collaborative workflow of human penetration testing teams through a multi-agent system. To address the inefficiencies and reliance on manual intervention in traditional penetration testing methods, VulnBot decomposes complex tasks into three specialized phases: reconnaissance, scanning, and exploitation. These phases are guided by a penetration task graph (PTG) to ensure logical task execution. Key design features include role specialization, penetration path planning, inter-agent communication, and generative penetration behavior. Experimental results demonstrate that VulnBot outperforms baseline models such as GPT-4 and Llama3 in automated penetration testing tasks, particularly showcasing its potential in fully autonomous testing on real-world machines.
|
arxiv:2501.13411
|
We study the center of several types of path algebras. We start with the path algebra $KE$ and prove that if the number of vertices is infinite then the center is zero. Otherwise, it coincides with the field $K$, except when the graph $E$ is a cycle, in which case the center is $K[x]$, the polynomial algebra in one indeterminate. We then compute the centers of prime Cohn and Leavitt path algebras. A lower and an upper bound for the center of a Leavitt path algebra are given by introducing the graded Baer radical for graded algebras.
|
arxiv:1209.4375
|
We develop techniques for determining the exact asymptotic speed of convergence in the multidimensional normal approximation of smooth functions of Gaussian fields. As a by-product, our findings yield exact limits and often give rise to one-term generalized Edgeworth expansions increasing the speed of convergence. Our main mathematical tools are Malliavin calculus, Stein's method, and the fourth moment theorem. This work can be seen as an extension of the results of arXiv:0803.0458 to the multi-dimensional case, with the notable difference that in our framework covariances are allowed to fluctuate. We apply our findings to exploding functionals of Brownian sheets, vectors of Toeplitz quadratic functionals, and the Breuer-Major theorem.
|
arxiv:1305.6523
|
This paper identifies three categories of model: the technology impact model, the social impact model, and the integrationist model, which imply different views of the "impact" of information technology on work organisation. These models are used to structure data from case studies conducted by the authors to explore the implications of the use of computer-based information systems for managers' work. The paper argues that the "impact" of information systems is not a single stable and predictable outcome but a non-linear, ongoing process that changes and evolves over time. It also argues that the actions of individuals and groups within an organisation are not wholly determined by outside forces: people can and do react to, and shape, systems in different ways. In this sense, the "impact" of computer-based information systems on managers' work reflects decisions made by managers themselves about how the technology is used.
|
arxiv:cs/0102029
|
We propose a reduction scheme for a system constituted by two coupled harmonically-bound Brownian oscillators. We reduce the description by constructing a lower-dimensional model which inherits some of the basic features of the original dynamics and is written in terms of suitable transport coefficients. The proposed procedure is twofold: while the deterministic component of the dynamics is obtained by a direct application of the invariant manifold method, the diffusion terms are determined via the fluctuation-dissipation theorem. We highlight the behavior of the coefficients up to a critical value of the coupling parameter, which marks the endpoint of the interval in which a contracted description is available. The study of the weak coupling regime is addressed, and the commutativity of alternative reduction paths is also discussed.
|
arxiv:2209.13481
|
Snowflake growth provides a fascinating example of spontaneous pattern formation in nature. Attempts to understand this phenomenon have led to important insights into the non-equilibrium dynamics observed in various active scientific fields, ranging from pattern formation in physical and chemical systems to self-assembly problems in biology. Yet very few models currently succeed in reproducing the diversity of snowflake forms in three dimensions, and the link between model parameters and thermodynamic quantities is not established. Here we report a modified phase field model that describes the subtlety of the ice-vapour phase transition, through anisotropic attachment and condensation of water molecules, surface diffusion, and strong anisotropic surface tension, which guarantee the anisotropy, faceting, and dendritic growth of snowflakes. We demonstrate that this model reproduces the growth dynamics of the most challenging morphologies of snowflakes from the Nakaya diagram. We find that the growth dynamics of snow crystals matches the selection theory, consistently with previous experimental observations.
|
arxiv:1611.03394
|
During the COVID-19 pandemic, policy makers at the Greater London Authority, the regional governance body of London, UK, are reliant upon prompt and accurate data sources. Large, well-defined, heterogeneous compositions of activity throughout the city are sometimes difficult to acquire, yet are a necessity in order to learn 'busyness' and consequently make safe policy decisions. One component of our project within this space is to utilise existing infrastructure to estimate social distancing adherence by the general public. Our method enables near-immediate sampling and contextualisation of activity and physical distancing on the streets of London via live traffic camera feeds. We introduce a framework for inspecting and improving upon existing methods, whilst also describing its active deployment on over 900 real-time feeds.
|
arxiv:2012.07751
|
Precision control over hybrid physical systems at the quantum level is important for the realization of many quantum-based technologies. In the field of quantum information processing (QIP) and quantum networking, various proposals discuss the possibility of hybrid architectures where specific tasks are delegated to the most suitable subsystem. For example, in quantum networks, it may be advantageous to transfer information from a subsystem that has good memory properties to another subsystem that is more efficient at transporting information between nodes in the network. For trapped ions, a hybrid system formed of different species introduces extra degrees of freedom that can be exploited to expand and refine the control of the system. Ions of different elements have previously been used in QIP experiments for sympathetic cooling, creation of entanglement through dissipation, and quantum non-demolition (QND) measurement of one species with another. Here, we demonstrate an entangling quantum gate between ions of different elements which can serve as an important building block of QIP, quantum networking, precision spectroscopy, metrology, and quantum simulation. A geometric phase gate between a $^9$Be$^+$ ion and a $^{25}$Mg$^+$ ion is realized through an effective spin-spin interaction generated by state-dependent forces induced with laser beams. Combined with single-qubit gates and same-species entangling gates, this mixed-element entangling gate provides a complete set of gates over such a hybrid system for universal QIP. Using a sequence of such gates, we demonstrate a controlled-NOT (CNOT) gate and a SWAP gate. We further demonstrate the robustness of these gates against thermal excitation and show improved detection in quantum logic spectroscopy (QLS). We also observe a strong violation of a CHSH-type Bell inequality on entangled states composed of different ion species.
|
arxiv:1508.03392
|
Several seizure prediction methods with high specificity and sensitivity based on convolutional neural networks (CNNs) have been reported. However, CNNs are computationally expensive and power hungry. These drawbacks make CNN-based methods hard to implement on wearable devices. Motivated by energy-efficient spiking neural networks (SNNs), a neuromorphic computing approach for seizure prediction is proposed in this work. This approach uses a designed Gaussian random discrete encoder to generate spike sequences from the EEG samples and makes predictions with a spiking convolutional neural network (spiking-CNN), which combines the advantages of CNNs and SNNs. The experimental results show that the sensitivity, specificity, and AUC remain at 95.1%, 99.2%, and 0.912, respectively, while the computational complexity is reduced by 98.58% compared to the CNN, indicating that the proposed spiking-CNN is hardware friendly and highly accurate.
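The abstract only names the "Gaussian random discrete encoder", so the following is a hedged sketch of one common way to turn real-valued samples into spike trains with Gaussian randomness: at each time step a neuron fires if the input plus Gaussian noise crosses a threshold, so the firing rate grows with the input amplitude. The encoder name, noise model, and parameters here are assumptions, not the paper's exact design.

```python
import random

def gaussian_spike_encode(samples, n_steps=20, noise_std=0.5,
                          threshold=0.0, seed=0):
    """Noisy-threshold spike encoder: one binary train per input sample.
    Larger inputs cross the threshold more often, giving higher rates."""
    rng = random.Random(seed)
    return [[1 if x + rng.gauss(0.0, noise_std) > threshold else 0
             for _ in range(n_steps)]
            for x in samples]
```

The resulting binary trains are the kind of input a spiking convolutional network consumes in place of the raw EEG amplitudes.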
|
arxiv:2102.12773
|
We induce superconductivity by the proximity effect in thin layers of gold and study the number of conduction channels which contribute to the current in one-atom contacts and atomic wires. The atomic contacts and wires are fabricated with a scanning tunneling microscope. The set of transmission probabilities of the conduction channels is obtained from the analysis of the $I(V)$ characteristic curve, which is highly non-linear due to multiple Andreev reflections. In agreement with theoretical calculations, we find that there is only one channel which is almost completely open.
|
arxiv:cond-mat/0303195
|
We provide computationally efficient, differentially private algorithms for the classical regression settings of least squares fitting, binary regression, and linear regression with unbounded covariates. Prior to our work, privacy constraints in such regression settings were studied under strong a priori bounds on covariates. We consider the case of Gaussian marginals and extend recent differentially private techniques for mean and covariance estimation (Kamath et al., 2019; Karwa and Vadhan, 2018) to the sub-Gaussian regime. We provide a novel technical analysis yielding differentially private algorithms for the above classical regression settings. Through the case of binary regression, we capture the fundamental and widely studied models of logistic regression and linearly separable SVMs, learning an unbiased estimate of the true regression vector, up to a scaling factor.
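The mean-estimation primitives this abstract builds on can be illustrated with the textbook clip-and-noise Gaussian mechanism: clip each value, compute the mean, and add Gaussian noise scaled to the clipped mean's sensitivity. This is a standard building block of the kind extended in the paper, not the paper's regression algorithm.

```python
import math
import random

def dp_mean(data, clip, epsilon, delta, seed=0):
    """(epsilon, delta)-DP estimate of a mean via clipping + the
    Gaussian mechanism."""
    rng = random.Random(seed)
    n = len(data)
    clipped = [max(-clip, min(clip, x)) for x in data]
    # Replacing one record moves the clipped mean by at most 2*clip/n.
    sensitivity = 2.0 * clip / n
    sigma = sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon
    return sum(clipped) / n + rng.gauss(0.0, sigma)
```

Because the noise scale shrinks like $1/n$, large datasets give accurate private estimates; the hard part the paper addresses is choosing the clipping region without a priori bounds on the covariates.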
|
arxiv:2202.11199
|
The numerical method developed in [30] for optimal control problems involving sweeping processes with a smooth sweeping set C is generalized to the case where C is nonsmooth, namely, where C is the intersection of a finite number of sublevel sets of smooth functions. The novelty of this extension lies in producing a different approach for the general setting, since the one used for smooth sweeping sets is not applicable here.
|
arxiv:2311.16611
|
A new versatile facility, LEETECH, for detector R&D, tests, and calibration has been designed and constructed. It uses electrons produced by the photoinjector PHIL at LAL, Orsay, and provides a powerful tool for a wide range of R&D studies of different detector concepts, delivering "monochromatic" samples of low-energy electrons with adjustable energy and intensity. Among other innovative instrumentation techniques, LEETECH will be used for testing various gaseous tracking detectors and for studying a new Micromegas/InGrid concept, which has very promising spatial-resolution characteristics and is a good candidate for particle tracking and identification. In this paper, the importance and expected characteristics of such a facility are addressed, based on detailed simulation studies.
|
arxiv:1601.04348
|
Böröczky, Lutwak, Yang, and Zhang recently proved the log-Brunn-Minkowski inequality, which is stronger than the classical Brunn-Minkowski inequality, for two origin-symmetric convex bodies in the plane. This paper establishes the log-Brunn-Minkowski, log-Minkowski, $L_p$-Minkowski, and $L_p$-Brunn-Minkowski inequalities for two convex bodies in $\mathbb{R}^3$.
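For reference, the planar result cited here can be stated compactly. The following records the standard formulation for origin-symmetric convex bodies $K, L \subset \mathbb{R}^2$ and $0 \le \lambda \le 1$, with $h_K, h_L$ the support functions and $V$ the area:

```latex
% Geometric (L_0) combination of K and L, defined via support functions:
\[
  (1-\lambda)\cdot K +_0 \lambda\cdot L
    = \bigcap_{u \in S^{1}}
      \bigl\{ x \in \mathbb{R}^2 : \langle x, u\rangle
        \le h_K(u)^{1-\lambda}\, h_L(u)^{\lambda} \bigr\}.
\]
% Log-Brunn-Minkowski inequality (Böröczky-Lutwak-Yang-Zhang):
\[
  V\bigl((1-\lambda)\cdot K +_0 \lambda\cdot L\bigr)
    \ge V(K)^{1-\lambda}\, V(L)^{\lambda}.
\]
```

Since the $L_0$ combination is contained in the Minkowski combination, this inequality implies the classical Brunn-Minkowski inequality for origin-symmetric bodies.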
|
arxiv:1810.05775
|
We introduce the decision-aware time-series conditional generative adversarial network (DAT-CGAN) as a method for time-series generation. The framework adopts a multi-Wasserstein loss on structured decision-related quantities, capturing the heterogeneity of decision-related data and providing new effectiveness in supporting the decision processes of end users. We improve sample efficiency through an overlapped block-sampling method and provide a theoretical characterization of the generalization properties of DAT-CGAN. The framework is demonstrated on financial time series for a multi-time-step portfolio choice problem. We demonstrate better generative quality, with respect to the underlying data and various decision-related quantities, than strong GAN-based baselines.
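The overlapped block-sampling idea can be sketched generically: consecutive fixed-length training windows are drawn with a stride smaller than the window length, so neighboring blocks share observations and a single pass over the series yields more training blocks than disjoint chunking. The exact scheme used for DAT-CGAN is an assumption here; this is only the generic pattern.

```python
import numpy as np

def overlapped_blocks(series, block_len, stride):
    """Sample overlapping fixed-length blocks from a 1-D time series.

    Consecutive windows share (block_len - stride) observations,
    increasing the number of training blocks per pass over the data.
    """
    x = np.asarray(series)
    starts = range(0, len(x) - block_len + 1, stride)
    return np.stack([x[s:s + block_len] for s in starts])

# 10 observations, windows of 4 with stride 2 -> 4 overlapping blocks.
blocks = overlapped_blocks(np.arange(10), block_len=4, stride=2)
```

Disjoint chunking of the same series would yield only 2 blocks, so the overlap roughly doubles the usable sample count at this stride.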
|
arxiv:2009.12682
|
Let $\mathrm{Int}(n)$ denote the set of nonempty left weak Bruhat intervals in the symmetric group $\mathfrak{S}_n$. We investigate the equivalence relation $\overset{D}{\simeq}$ on $\mathrm{Int}(n)$, where $I \overset{D}{\simeq} J$ if and only if there exists a descent-preserving poset isomorphism between $I$ and $J$. For each equivalence class $C$ of $(\mathrm{Int}(n), \overset{D}{\simeq})$, a partial order $\preceq$ is defined by $[\sigma, \rho]_L \preceq [\sigma', \rho']_L$ if and only if $\sigma \preceq_R \sigma'$. Kim-Lee-Oh (2023) showed that the poset $(C, \preceq)$ is isomorphic to a right weak Bruhat interval. In this paper, we focus on lower and upper descent weak Bruhat intervals, specifically those of the form $[w_0(S), \sigma]_L$ or $[\sigma, w_1(S)]_L$, where $w_0(S)$ is the longest element in the parabolic subgroup $\mathfrak{S}_S$ of $\mathfrak{S}_n$, generated by $\{s_i \mid i \in S\}$ for a subset $S \subseteq [n-1]$, and $w_1(S)$ is the longest element among the minimal-length representatives of left $\mathfrak{S}_{[n-1] \setminus S}$-cosets in $\mathfrak{S}_n$. We begin by providing a poset-theoretic characterization of the equivalence relation $\overset{D}{\simeq}$. Using this characterization, the minimal and maximal elements within an equivalence class $C$ are identified when $C$ is a lower or upper descent interval. Under an additional condition, a detailed description of the structure of $(C, \preceq)$ is obtained.
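The basic combinatorial objects in this abstract are easy to compute concretely. The sketch below implements descent sets, Coxeter length (inversion count), and $w_0(S)$ via the standard description of the longest element of a parabolic subgroup of $\mathfrak{S}_n$: it reverses each maximal block of positions joined by the simple reflections $s_i$, $i \in S$. This is an illustration of the definitions only, not of the paper's results.

```python
def inversions(w):
    """Number of inversions of w, i.e. the Coxeter length of w."""
    n = len(w)
    return sum(1 for i in range(n) for j in range(i + 1, n) if w[i] > w[j])

def descent_set(w):
    """Des(w) = { i in 1..n-1 : w(i) > w(i+1) } (one-line notation)."""
    return {i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1]}

def longest_parabolic_element(n, S):
    """w_0(S): longest element of the parabolic subgroup of S_n
    generated by { s_i : i in S }; it reverses each maximal block
    of consecutive positions connected by the chosen reflections."""
    w = list(range(1, n + 1))
    i = 0
    while i < n:
        j = i
        while j + 1 in S:  # s_{j+1} joins positions j+1 and j+2 (1-indexed)
            j += 1
        w[i:j + 1] = reversed(w[i:j + 1])
        i = j + 1
    return tuple(w)

# Example: n = 4, S = {1, 2} gives w_0(S) = 3214, with Des = {1, 2}.
w0 = longest_parabolic_element(4, {1, 2})
```

A descent-preserving isomorphism between two intervals, the heart of the relation $\overset{D}{\simeq}$, is then a poset isomorphism that carries each element to one with the same descent set.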
|
arxiv:2412.08413
|