text (string, lengths 1–3.65k) | source (string, lengths 15–79) |
---|---|
In topology there is a theorem of Atiyah concerning the K-theory of the classifying space of a connected compact Lie group. We consider an algebraic analogue of this theorem. We prove that for a split reductive algebraic group G over a field there is an isomorphism between the K-theory of the étale classifying space of G and a completion of the G-equivariant K-theory of the base field.
|
arxiv:1111.4685
|
We calculate the microstates free entropy dimension of natural generators in an amalgamated free product of certain von Neumann algebras, with amalgamation over a hyperfinite subalgebra. In particular, some 'exotic' Popa algebra generators of free group factors are shown to have the expected free entropy dimension. We also show that microstates and non-microstates free entropy dimension agree for generating sets of many groups. In the appendix by Wolfgang Lück, the first $L^2$-Betti number for certain amalgamated free products of groups is calculated.
|
arxiv:math/0609080
|
We review recent results on string coupling selection rules for heterotic orbifolds, derived using conformal field theory. Such rules are the first step towards understanding the viability of the recently obtained compactifications with potentially realistic particle spectra. They arise from the properties of the worldsheet instantons that mediate the couplings, and include stringy effects that would seem 'miraculous' to an effective field theory observer.
|
arxiv:1401.6162
|
A maximal green sequence, introduced by B. Keller, is a certain sequence of quiver mutations at green vertices. T. Brüstle, G. Dupont and M. Pérotin showed that for an acyclic quiver, maximal green sequences are realized as maximal paths in the Hasse quiver of the poset of support tilting modules. They also considered the possible lengths of maximal green sequences. In this paper, we calculate the possible lengths of maximal green sequences for a quiver of type $A$ or of type $\tilde{A}_{n,1}$ by using the theory of tilting mutation.
|
arxiv:1507.02852
|
Unmanned aerial vehicles (UAVs) provide a novel means of extracting road and traffic information from video data. In particular, by analyzing objects in a video frame, UAVs can detect traffic characteristics and road incidents. Leveraging the mobility and detection capabilities of UAVs, we investigate a navigation algorithm that seeks to maximize information on the road/traffic state under non-recurrent congestion. We propose an active exploration framework that (1) assimilates UAV observations with speed-density sensor data, (2) quantifies uncertainty on the road/traffic state, and (3) adaptively navigates the UAV to minimize this uncertainty. The navigation algorithm uses the A-optimal information measure (mean uncertainty), and it depends on covariance matrices generated by a dual state ensemble Kalman filter (EnKF). In the EnKF procedure, since observations are a nonlinear function of the incident state variables, we use diagnostic variables that represent model-predicted measurements. We also present a state update procedure that maintains a monotonic relationship between incident parameters and measurements. We compare the traffic/incident state estimates resulting from the UAV navigation-estimation procedure against corresponding estimates that do not use targeted UAV observations. Our results indicate that UAVs aid in the detection of incidents under congested conditions where speed-density data are not informative.
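As an illustration of the A-optimality criterion mentioned above, here is a minimal sketch (not the paper's implementation): each candidate UAV waypoint is scored by the trace of the posterior ensemble covariance it would induce, and the waypoint with the smallest mean uncertainty is chosen. The `candidate_gains` map, with its Kalman gains and observation operators, is a hypothetical stand-in for what the surrounding EnKF machinery would supply.

```python
import numpy as np

def ensemble_covariance(ensemble):
    """Sample covariance of an EnKF ensemble (members in rows)."""
    anomalies = ensemble - ensemble.mean(axis=0)
    return anomalies.T @ anomalies / (ensemble.shape[0] - 1)

def a_optimal_waypoint(ensemble, candidate_gains):
    """Pick the candidate observation minimizing mean posterior
    uncertainty, i.e. the trace of the updated covariance (A-optimality).

    candidate_gains maps a waypoint id to the Kalman gain K and the
    observation operator H that a UAV measurement there would induce.
    """
    P = ensemble_covariance(ensemble)
    best, best_trace = None, np.inf
    for waypoint, (K, H) in candidate_gains.items():
        P_post = (np.eye(P.shape[0]) - K @ H) @ P  # standard KF covariance update
        if np.trace(P_post) < best_trace:
            best, best_trace = waypoint, np.trace(P_post)
    return best, best_trace
```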
|
arxiv:1807.11660
|
When a real fluid is expelled quickly from a tube, it forms a jet separated from the surrounding fluid by a thin, turbulent layer. On the other hand, when the same fluid is sucked into the tube, it comes in from all directions, forming a sink-like flow. We show that, even for the ideal flow described by the time-reversible Euler equation, an experimenter who only controls the pressure in a pump attached to the tube would see jets form in one direction exclusively. The asymmetry between outflow and inflow therefore does not depend on viscous dissipation, but rather on the experimenter's limited control of initial and boundary conditions. This illustrates, in a rather different context from the usual one of thermal physics, how irreversibility may arise in systems whose microscopic dynamics are fully reversible.
|
arxiv:1301.3915
|
we introduce " cp trajectory diagram in bi - probability space " as a powerful tool for pictorial representation of the genuine cp and the matter effects in neutrino oscillations. existence of the correlated ambiguity in a determination of cp violating phase \ delta and the sign of \ delta m ^ 2 _ { 13 } is uncovered. principles of tuning beam energy for a given baseline distance are proposed to resolve the ambiguity and to maximize the cp - odd effect. we finally point out, quite contrary to what is usually believed, that the ambiguity may be resolved with 50 % chance in the super - jhf experiment despite its relatively short baseline of 300 km.
|
arxiv:hep-ph/0111130
|
We prove that certain Riemannian manifolds can be isometrically embedded inside Calabi-Yau manifolds. For example, we prove that given any real-analytic one-parameter family of Riemannian metrics $g_t$ on a 3-dimensional manifold $Y$ with volume form independent of $t$ and with a real-analytic family of nowhere-vanishing harmonic one-forms $\theta_t$, then $(Y, g_t)$ can be realized as a family of special Lagrangian submanifolds of a Calabi-Yau manifold $X$. We also prove that certain principal torus bundles can be equivariantly and isometrically embedded inside Calabi-Yau manifolds with torus action. We use this to construct examples of $n$-parameter families of special Lagrangian tori inside $(n+k)$-dimensional Calabi-Yau manifolds with torus symmetry. We also compute McLean's metric of 3-dimensional special Lagrangian fibrations with $T^2$-symmetry.
|
arxiv:math/0503494
|
We use computer simulations and a simple free energy model to study the response of a bilayer membrane to the application of a negative (compressive) mechanical tension. Such a tension destabilizes the long-wavelength undulation modes of giant vesicles, but it can be sustained when small membranes and vesicles are considered. Our negative-tension simulation results reveal two regimes: (i) a weak negative tension regime characterized by stretching-dominated elasticity, and (ii) a strong negative tension regime featuring bending-dominated elastic behavior. This resembles the findings of the classic Evans and Rawicz micropipette aspiration experiment in giant unilamellar vesicles (GUVs) [Phys. Rev. Lett. {\bf 64}, 2094 (1990)]. However, while in GUVs the crossover between the two elasticity regimes occurs at a small positive surface tension, in smaller membranes it takes place at a moderate negative tension. Another interesting observation concerning the response of a small membrane to negative surface tension is related to the relationship between the mechanical and fluctuation tensions, which are equal to each other for non-negative values. When the tension decreases to negative values, the fluctuation tension $\gamma$ drops somewhat faster than $\tau$ in the small negative tension regime, before it saturates (and becomes larger than $\tau$) for large negative tensions. The bending modulus exhibits an "opposite" trend: it remains almost unchanged in the stretching-dominated elastic regime, and decreases in the bending-dominated regime. Both the amplitudes of the thermal height undulations and the projected area variations diverge at the onset of mechanical instability.
|
arxiv:1503.02818
|
In this paper, we investigate the existence and uniqueness of solutions for a fractional boundary value problem supplemented with nonlocal Riemann-Liouville fractional integral and Caputo fractional derivative boundary conditions. Our results are based on some known tools of fixed point theory. Finally, some illustrative examples are included to verify the validity of our results.
|
arxiv:1805.06015
|
In this paper we investigate the approximation of continuous functions on the Wasserstein space by smooth functions, with smoothness meant in the sense of Lions differentiability. In particular, in the case of a Lipschitz function we are able to construct a sequence of infinitely differentiable functions having the same Lipschitz constant as the original function. This solves an open problem raised in [11]. For a (resp. twice) continuously differentiable function, we show that our approximation also holds for the first-order derivative (resp. second-order derivatives), therefore solving another open problem raised in [11].
|
arxiv:2303.15160
|
We use 2HEX-smeared gauge configurations generated with an $\mathrm{N_f} = 2+1$ clover-improved Wilson action to investigate $\pi\pi$ scattering in the $\rho$ channel. The range of lattice spacings (0.054 to 0.12 fm) and space-like extents (32 and 48) allows us to extract the scattering parameters through the volume dependence of the $\pi\pi$-state energies according to Lüscher's formalism. The pion masses (134 to 300 MeV) are light enough to allow the decay of the rho, and the level repulsion observed indicates that our data are sensitive to the interaction. We analyse our data with a multi-channel GEVP variational formula. Our results are in good agreement with the experimental values and consistent with a weak pion mass dependence of the $\rho\pi\pi$ coupling constant.
|
arxiv:1410.8447
|
We introduce a new approach to studying spherical spin glass dynamics based on differential inequalities for one-time observables. Using this approach, we obtain an approximate phase diagram for the evolution of the energy $H$ and its gradient under Langevin dynamics for spherical $p$-spin models. We then derive several consequences of this phase diagram. For example, at any temperature, uniformly over all starting points, the process must reach and remain in an absorbing region of large negative values of $H$ and large (in norm) gradients in order 1 time. Furthermore, if the process starts in a neighborhood of a critical point of $H$ with negative energy, then both the gradient and energy must increase macroscopically under this evolution, even if this critical point is a saddle with index of order $N$. As a key technical tool, we estimate Sobolev norms of spin glass Hamiltonians, which are of independent interest.
|
arxiv:1808.00929
|
The protocols for the control and readout of nitrogen-vacancy (NV) centre electron spins in diamond offer an advanced platform for quantum computation, metrology and sensing. These protocols are based on the optical readout of photons emitted from NV centres, a process limited by the photon collection yield. Here we report on a novel principle for the detection of NV centre magnetic resonance in diamond by directly monitoring spin-preserving electron transitions through measurement of NV-centre-related photocurrent. The demonstrated direct detection technique offers a sensitive way for the readout of diamond NV sensors and diamond quantum devices on diamond chips. The photocurrent detection of magnetic resonance (PDMR) scheme is based on the detection of charge carriers promoted to the conduction band of diamond by the two-photon ionization of NV$^-$ centres. Optical detection of magnetic resonance (ODMR) and PDMR are compared by performing both measurements simultaneously. The minima detected in the measured photocurrent at resonant microwave frequencies are attributed to the spin-dependent occupation probability of the NV$^-$ ground state, originating from spin-selective non-radiative transitions.
|
arxiv:1502.07551
|
In this paper, we study degenerate almost complex surfaces in the semi-Riemannian nearly Kähler $\mathrm{SL}_2\mathbb{R} \times \mathrm{SL}_2\mathbb{R}$. The geometry of these surfaces is influenced by the almost product structure of the ambient space, leading to two distinct cases. The first case arises when the tangent bundle of the surface is preserved under the almost product structure, while the second case occurs when the tangent bundle of the surface is not invariant under this structure. In both cases, we obtain a complete and explicit classification.
|
arxiv:2307.12766
|
Simultaneous transport and magnetization studies in Bi2Sr2CaCu2O8 crystals at elevated currents reveal large discrepancies, including finite resistivity at temperatures 40 K below the magnetic irreversibility line. This resistivity, measured at the top surface, is non-monotonic in temperature and extremely non-linear. The vortex velocity derived from magnetization is six orders of magnitude lower than the velocity derived from simultaneous transport measurements. The new findings are ascribed to a shear-induced decoupling, in which the pancake vortices flow only in the top few CuO2 planes, and are decoupled from the pinned vortices in the rest of the crystal.
|
arxiv:cond-mat/0003143
|
Large language models (LLMs) have issues with document question answering (QA) in situations where the document does not fit in the small context length of an LLM. To overcome this issue, most existing works focus on retrieving the relevant context from the document, representing it as plain text. However, documents such as PDFs, web pages, and presentations are naturally structured with different pages, tables, sections, and so on. Representing such structured documents as plain text is incongruous with the user's mental model of these documents with rich structure. When a system has to query the document for context, this incongruity is brought to the fore, and seemingly trivial questions can trip up the QA system. To bridge this fundamental gap in handling structured documents, we propose an approach called PDFTriage that enables models to retrieve the context based on either structure or content. Our experiments demonstrate the effectiveness of the proposed PDFTriage-augmented models across several classes of questions where existing retrieval-augmented LLMs fail. To facilitate further research on this fundamental problem, we release our benchmark dataset consisting of 900+ human-generated questions over 80 structured documents from 10 different categories of question types for document QA. Our code and datasets will be released soon on GitHub.
|
arxiv:2309.08872
|
The celebrated theorem of Chung, Graham, and Wilson on quasirandom graphs implies that if the 4-cycle and edge counts in a graph $G$ are both close to their typical number in $\mathbb{G}(n, 1/2)$, then this also holds for the counts of subgraphs isomorphic to $H$ for any $H$ of constant size. We aim to prove a similar statement where the notion of close is whether the given (signed) subgraph count can be used as a test between $\mathbb{G}(n, 1/2)$ and a stochastic block model $\mathbb{SBM}$. Quantitatively, this is related to approximately maximizing $H \longrightarrow |\Phi(H)|^{\frac{1}{|\mathsf{V}(H)|}}$, where $\Phi(H)$ is the Fourier coefficient of $\mathbb{SBM}$, indexed by the subgraph $H$. This formulation turns out to be equivalent to approximately maximizing the partition function of a spin model over an alphabet equal to the community labels in $\mathbb{SBM}$. We resolve the approximate maximization when $\mathbb{SBM}$ satisfies one of four conditions: 1) the probability of an edge between any two vertices in different communities is exactly $1/2$; 2) the probability of an edge between two vertices from any two communities is at least $1/2$ (this case is also covered in a recent work of Yu, Zadik, and Zhang); 3) the probability of belonging to any given community is at least $c$ for some universal constant $c > 0$; 4) $\mathbb{SBM}$ has two communities. In each of these cases, we show that there is an approximate maximizer of $|\Phi(H)|^{\frac{1}{|\mathsf{V}(H)|}}$ in the set $\mathsf{A} = \{\text{stars, 4-cycle}\}$. This implies that if there exists a constant-degree polynomial test distinguishing $\mathbb{G}(n, 1/2)$ and $\mathbb{SBM}$, then the two distributions can also be distinguished via the signed count of some
|
arxiv:2504.17202
|
The release of differentially private streaming data has been extensively studied, yet striking a good balance between privacy and utility on temporally correlated data in the stream remains an open problem. Existing works focus on enhancing privacy when applying differential privacy to correlated data, highlighting that differential privacy may suffer from additional privacy leakage under correlations; consequently, a small privacy budget has to be used, which worsens the utility. In this work, we propose a post-processing framework to improve the utility of differentially private data release under temporal correlations. We model the problem as maximum a posteriori estimation given the released differentially private data and a correlation model, and transform it into a nonlinear constrained program. Our experiments on synthetic datasets show that the proposed approach significantly improves the utility and accuracy of differentially private data, by nearly a hundred times in terms of mean squared error when a strict privacy budget is given.
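The abstract frames the post-processing as maximum a posteriori estimation under a correlation model. Here is a minimal sketch of that idea, assuming Gaussian DP noise and an AR(1) temporal prior (the parameters `rho`, `noise_var`, and `prior_var` are illustrative, not the paper's), in which the MAP estimate reduces to a single linear solve rather than general nonlinear programming:

```python
import numpy as np

def map_postprocess(released, rho=0.9, noise_var=1.0, prior_var=1.0):
    """MAP smoothing of a DP-released series under an AR(1) prior.

    Maximizes  -||x - released||^2 / (2*noise_var)
               - sum_t (x_t - rho*x_{t-1})^2 / (2*prior_var),
    a Gaussian likelihood plus Gaussian prior, i.e. a linear system A x = b.
    """
    T = len(released)
    A = np.eye(T) / noise_var                      # Hessian of the data term
    b = np.asarray(released, float) / noise_var
    for t in range(1, T):
        # Hessian of the penalty (x_t - rho*x_{t-1})^2 / (2*prior_var)
        A[t, t] += 1.0 / prior_var
        A[t - 1, t - 1] += rho**2 / prior_var
        A[t, t - 1] -= rho / prior_var
        A[t - 1, t] -= rho / prior_var
    return np.linalg.solve(A, b)

noisy = np.cumsum(np.ones(50)) + np.random.default_rng(0).normal(0, 3, 50)
smoothed = map_postprocess(noisy)
```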
|
arxiv:2306.13293
|
A superpermutation is a sequence that contains every permutation of $n$ distinct symbols as a contiguous substring. For instance, for three symbols the length-9 sequence 123121321 contains all six permutations. This paper introduces a new algorithm that constructs such sequences more efficiently than existing recursive and graph-theoretic methods. Unlike traditional techniques that suffer from scalability issues and factorial memory demands, the proposed approach builds superpermutations directly and compactly. This improves memory usage, enabling the construction of larger sequences previously considered impractical.
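A quick way to make the definition concrete is a brute-force checker; this sketch (not the paper's construction algorithm) verifies the minimal superpermutation for $n = 3$:

```python
from itertools import permutations

def is_superpermutation(seq, n):
    """Check that every permutation of the symbols 1..n occurs in seq
    as a contiguous substring."""
    symbols = "".join(str(i) for i in range(1, n + 1))
    return all("".join(p) in seq for p in permutations(symbols))

# The known minimal superpermutation for n = 3 has length 9:
assert is_superpermutation("123121321", 3)
```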
|
arxiv:2505.09628
|
solutions as $\alpha_1, \alpha_2 \rightarrow 0$. The first solution will disappear and the second solution will converge to the normalized solution of system (1.1) with $\alpha_1 = \alpha_2 = 0$, which has been studied by T. Bartsch, L. Jeanjean and N. Soave (J. Math. Pures Appl. 2016). Furthermore, by refining the upper bound of the ground state energy, we provide a precise mass collapse behavior of the ground states. The results in this paper complement the main results established by X. Luo, X. Yang and W. Zou (arXiv:2107.08708), where the authors considered the case $N = 4$.
|
arxiv:2108.10317
|
In this paper we give a meaning to the nonlinear characteristic Cauchy problem for the wave equation in base form by replacing it by a family of non-characteristic problems in an appropriate algebra of generalized functions. We prove the existence of a solution and we make precise how it depends on the choice made. We also check that in the classical case (non-characteristic) our new solution coincides with the classical one.
|
arxiv:0811.2256
|
We study the modifications of the synchrotron radiation of charges in a storage ring as they are cooled. The pair correlation lengths between the charges are manifest in the synchrotron radiation, and coherence effects exist for wavelengths longer than the coherence lengths between the charges. Therefore the synchrotron radiation can be used as a diagnostic tool to determine the state (gas, liquid, crystal) of the charged plasma in the storage ring. We show also that the total power of the synchrotron radiation is enormously reduced for crystallized beams. This opens the possibility of accelerating particles to ultra-relativistic energies using small-sized cyclic accelerators.
|
arxiv:physics/9811007
|
Measuring the intensity of events is crucial for monitoring and tracking armed conflict. Advances in automated event extraction have yielded massive data sets of "who did what to whom" micro-records that enable data-driven approaches to monitoring conflict. The Goldstein scale is a widely-used expert-based measure that scores events on a conflictual-cooperative scale. It is based only on the action category ("what") and disregards the subject ("who") and object ("to whom") of an event, as well as contextual information, like the associated casualty count, that should contribute to the perception of an event's "intensity". This paper takes a latent variable-based approach to measuring conflict intensity. We introduce a probabilistic generative model that assumes each observed event is associated with a latent intensity class. A novel aspect of this model is that it imposes an ordering on the classes, such that higher-valued classes denote higher levels of intensity. The ordinal nature of the latent variable is induced from naturally ordered aspects of the data (e.g., casualty counts) where higher values naturally indicate higher intensity. We evaluate the proposed model both intrinsically and extrinsically, showing that it obtains comparatively good held-out predictive performance.
|
arxiv:2210.03971
|
An approach for a Poincaré-covariant description of nuclear structure and of lepton scattering off nuclei is proposed within relativistic Hamiltonian dynamics in the light-front form. Indeed, a high level of accuracy is needed for a comparison with the increasingly precise present and future experimental data at high momentum transfer. Therefore, to distinguish genuine QCD effects or effects of medium-modified nucleon structure functions from conventional nuclear structure effects, the commutation rules between the Poincaré generators should be satisfied. For the first time, in this paper a proper hadronic tensor for inclusive deep inelastic scattering of electrons off nuclei is derived in the impulse approximation in terms of the single-nucleon hadronic tensor. Our approach is based: i) on a light-front spectral function for nuclei, obtained taking advantage of the successful non-relativistic knowledge of the nuclear interaction, and ii) on the free current operator that, if defined in the Breit reference frame with the momentum transfer, $\mathbf{q}$, parallel to the $z$ axis, fulfills Poincaré covariance and current conservation. Our results can be generalized: i) to exclusive processes or to semi-inclusive deep inelastic scattering processes; ii) to the case where the final state interaction is considered through a Glauber approximation; iii) to finite momentum transfer kinematics. As a first test, the hadronic tensor is applied to obtain the nuclear structure function $F_2^A$ and to evaluate the EMC effect for $^3\mathrm{He}$ in the Bjorken limit. Encouraging results including only the two-body part of the light-front spectral function are presented.
|
arxiv:2004.05877
|
BlueMUSE is an integral field spectrograph in an early development stage for the ESO VLT. For our design of the data reduction software for this instrument, we are first reviewing capabilities and issues of the pipeline of the existing MUSE instrument. MUSE has been in operation at the VLT since 2014 and led to discoveries published in more than 600 refereed scientific papers. While BlueMUSE and MUSE have many common properties, we briefly point out a few key differences between the instruments. We outline a first version of the flowchart for the science reduction, and discuss the necessary changes due to the blue wavelength range covered by BlueMUSE. We also detail specific new features, for example, how the pipeline and subsequent analysis will benefit from improved handling of the data covariance and a more integrated approach to the line-spread function, as well as improvements regarding the wavelength calibration, which is of extra importance in the blue optical range. We finally discuss how simulations of BlueMUSE datacubes are being implemented and how they will be used to prepare the science of the instrument.
|
arxiv:2209.06022
|
F-harmonic maps were first introduced and studied by Lichnerowicz in \cite{li} (see also Section 10.20 in Eells-Lemaire's report \cite{el}). In this paper, we study a subclass of F-harmonic maps called F-harmonic morphisms, which pull back local harmonic functions to local F-harmonic functions. We prove that a map between Riemannian manifolds is an F-harmonic morphism if and only if it is a horizontally weakly conformal F-harmonic map. This generalizes the well-known Fuglede-Ishihara characterization of harmonic morphisms. Some properties and many examples, as well as some non-existence results for F-harmonic morphisms, are given. We also study the F-harmonicity of conformal immersions.
|
arxiv:1103.5687
|
Monte Carlo sampling is a powerful toolbox of algorithmic techniques widely used for a number of applications in which some noisy quantity, or summary statistic thereof, is sought to be estimated. In this paper, we survey the literature for implementing Monte Carlo procedures using quantum circuits, focusing on the potential to obtain a quantum advantage in the computational speed of these procedures. We revisit the quantum algorithms that could replace classical Monte Carlo and then consider both the existing quantum algorithms and the potential quantum realizations that include adaptive enhancements as alternatives to the classical procedure.
|
arxiv:2303.04945
|
Capsule networks (CapsNets) have recently attracted attention as a novel neural architecture. This paper presents the sequential routing framework, which we believe is the first method to adapt a CapsNet-only structure to sequence-to-sequence recognition. Input sequences are capsulized and then sliced by a window size. Each slice is classified to a label at the corresponding time through iterative routing mechanisms. Afterwards, losses are computed by connectionist temporal classification (CTC). During routing, the required number of parameters can be controlled by the window size, regardless of the length of the sequences, by sharing learnable weights across the slices. We additionally propose a sequential dynamic routing algorithm to replace traditional dynamic routing. The proposed technique can minimize the decoding speed degradation caused by the routing iterations, since it can operate in a non-iterative manner without dropping accuracy. The method achieves a 1.1% lower word error rate, at 16.9%, on the Wall Street Journal corpus compared to bidirectional long short-term memory-based CTC networks. On the TIMIT corpus, it attains a 0.7% lower phone error rate, at 17.5%, compared to convolutional neural network-based CTC networks (Zhang et al., 2016).
|
arxiv:2007.11747
|
Adiabatic quantum computing (AQC) is an attractive paradigm for solving hard integer polynomial optimization problems. Available hardware restricts the Hamiltonians to be of a structure that allows only pairwise interactions. This requires that the original optimization problem first be converted, from its polynomial form, to a quadratic unconstrained binary optimization (QUBO) problem, which we frame as a problem in algebraic geometry. Additionally, the hardware graph where such a QUBO Hamiltonian needs to be embedded, assigning variables of the problem to the qubits of the physical optimizer, is not a complete graph, but rather one with limited connectivity. This "problem graph to hardware graph" embedding can also be framed as a problem of computing a Gröbner basis of a certain specially constructed polynomial ideal. We develop a systematic computational approach to prepare a given polynomial optimization problem for AQC in three steps. The first step reduces an input polynomial optimization problem into a QUBO through the computation of the Gröbner basis of a toric ideal generated from the monomials of the input objective function. The second step computes feasible embeddings. The third step computes the spectral gap of the adiabatic Hamiltonian associated to a given embedding. These steps are applicable well beyond the integer polynomial optimization problem. Our paper provides the first general-purpose computational procedure that can be used directly as a $translator$ to solve polynomial integer optimization. Alternatively, it can be used as a test-bed (with small-size problems) to help design efficient heuristic quantum compilers by studying various choices of reductions and embeddings in a systematic and comprehensive manner. An added benefit of our framework is in designing Ising architectures through the study of $\mathcal{Y}$-minor universal graphs.
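The paper's reduction to QUBO goes through Gröbner bases of a toric ideal; as a minimal illustration of the target QUBO form only (a swapped-in technique, not the paper's method), here is the standard Rosenberg substitution, which quadratizes a cubic monomial by introducing an auxiliary binary variable and a penalty term:

```python
from itertools import product

# Rosenberg substitution: replace the product x1*x2 by a new binary
# variable y, adding the penalty M*(x1*x2 - 2*x1*y - 2*x2*y + 3*y),
# which is zero iff y == x1*x2 and positive otherwise (for M > 0).
def cubic(x1, x2, x3):          # original objective: a cubic monomial
    return x1 * x2 * x3

def qubo(x1, x2, x3, y, M=10):  # quadratized (pairwise-only) objective
    return y * x3 + M * (x1 * x2 - 2 * x1 * y - 2 * x2 * y + 3 * y)

# Minimizing the QUBO over y recovers the original cubic objective.
for x1, x2, x3 in product([0, 1], repeat=3):
    assert min(qubo(x1, x2, x3, y) for y in (0, 1)) == cubic(x1, x2, x3)
```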
|
arxiv:1810.01440
|
The morphology of graphene formed on the (000-1) surface (the C-face) and the (0001) surface (the Si-face) of SiC, by annealing in ultra-high vacuum or in an argon environment, is studied by atomic force microscopy and low-energy electron microscopy. The graphene forms due to preferential sublimation of Si from the surface. In vacuum, this sublimation occurs much more rapidly for the C-face than the Si-face, so that 150 °C lower annealing temperatures are required for the C-face to obtain films of comparable thickness. The evolution of the morphology as a function of graphene thickness is examined, revealing significant differences between the C-face and the Si-face. For annealing near 1320 °C, graphene films of about 2 monolayers (ML) thickness are formed on the Si-face, but 16 ML is found for the C-face. In both cases, step bunches are formed on the surface. For the Si-face, layer-by-layer growth of the graphene is observed in areas between the step bunches. At 1170 °C, for the C-face, a more 3-dimensional type of growth is found. The average thickness is then about 4 ML, but with a wide variation in local thickness (2-7 ML) over the surface. The spatial arrangement of constant-thickness domains is found to be correlated with step bunches on the surface, which form in a more restricted manner than at 1320 °C. It is argued that these domains are somewhat disconnected, so that no strong driving force for planarization of the film exists. In a 1-atm argon environment, permitting higher growth temperatures, the graphene morphology for the Si-face is found to become more layer-by-layer-like even for graphene thickness as low as 1 ML. However, for the C-face the morphology becomes much worse, with the surface displaying markedly inhomogeneous nucleation of the graphene. It is demonstrated that these surfaces are unintentionally oxidized, which accounts for the inhomogeneous growth.
|
arxiv:1011.2510
|
The enhancement of Gilbert damping observed for Ni80Fe20 (Py) films in contact with the non-magnetic metals Cu, Pd, Ta and Pt is quantitatively reproduced using first-principles scattering theory. The "spin-pumping" theory that qualitatively explains its dependence on the Py thickness is generalized to include a number of factors known to be important for spin transport through interfaces. Determining the parameters in this theory from first principles shows that interface spin-flipping makes an essential contribution to the damping enhancement. Without it, a much shorter spin-flip diffusion length for Pt would be needed than the value we calculate independently.
|
arxiv:1406.6225
|
We construct Hopf algebras whose elements are representations of combinatorial automorphism groups, by generalising a theorem of Zelevinsky on Hopf algebras of representations of wreath products. As an application we attach symmetric functions to representations of graph automorphism groups, generalising and refining Stanley's chromatic symmetric function.
|
arxiv:2004.04599
|
Concerns for the resilience of cyber-physical systems (CPSs) in critical infrastructure are growing. CPSs integrate sensing, computation, control, and networking into physical objects and mission-critical services, connecting traditional infrastructure to internet technologies. While this integration increases service efficiency, it has to face the possibility of new threats posed by the new functionalities. This leads to cyber-threats, such as denial-of-service, modification of data, information leakage, spreading of malware, and many others. Cyber-resilience refers to the ability of a CPS to prepare for, absorb, recover from, and adapt to the adverse effects associated with cyber-threats, e.g., physical degradation of CPS performance resulting from a cyber-attack. Cyber-resilience aims at ensuring CPS survival by keeping the core functionalities of the CPS in case of extreme events. The literature on cyber-resilience is rapidly increasing, leading to a broad variety of research works addressing this new topic. In this article, we create a systematization of knowledge about existing scientific efforts at making CPSs cyber-resilient. We systematically survey recent literature addressing cyber-resilience with a focus on techniques that may be used on CPSs. We first provide preliminaries and background on CPSs and threats, and subsequently survey state-of-the-art approaches that have been proposed by recent research work applicable to CPSs. In particular, we aim at differentiating research work from traditional risk management approaches, based on the general acceptance that it is unfeasible to prevent and mitigate all possible risks threatening a CPS. We also discuss questions and research challenges, with a focus on the practical aspects of cyber-resilience, such as the use of metrics and evaluation methods, as well as testing and validation environments.
|
arxiv:2302.05402
|
In this work we propose an action to describe diffusion-limited chemical reactions belonging to various universality classes. This action is treated through Thompson's approach and can encompass both cases where we have segregation, as in the $A + B \to 0$ reaction, as well as the simplest one, namely the $A + A \to 0$ reaction. Our results for the long-time and long-wavelength behaviors of the species concentrations and reaction rates agree with the exact results of Peliti for the $A + A \to 0$ reaction and the rigorous results of Bramson and Lebowitz for the $A + B \to 0$ reaction with equal initial concentrations. The different universality classes are reflected by the obtained upper critical dimensions, varying continuously from $d_c = 2$ in the first case to $d_c = 4$ in the last one. Just at the upper critical dimensions we find universal logarithmic corrections to the mean-field behavior.
|
arxiv:hep-ph/0004254
|
We present a study of photon-photon scattering for $W_{\gamma\gamma} < 5$ GeV. We extend earlier calculations of this cross section for $W_{\gamma\gamma} > 5$ GeV into the low-mass range, where photoproduction of the pseudoscalar mesons $\eta(548)$ and $\eta'(958)$ and other mesonic resonances contributes to the two-photon final states. We consider the dominant background to the two-photon final state, which arises from $\gamma\gamma$ decays of photoproduced $\pi^0\pi^0$ pairs. We discuss how to reduce the background by imposing cuts on different kinematical variables. We present results for ALICE and LHCb kinematics.
|
arxiv:1910.02690
|
The process $e^+e^- \to H t\bar{t}$ can be used at the next linear collider to measure the Higgs-top Yukawa coupling. In this paper, we compute $2 \to 8$ processes of the form $e^+e^- \to b\bar{b}b\bar{b}W^+W^- \to b\bar{b}b\bar{b}\ell^\pm\nu_\ell q\bar{q}'$, accounting for the Higgs-top-antitop signal as well as several irreducible backgrounds in the semi-leptonic top-antitop decay channel. We restrict ourselves to the case of a light Higgs boson in the range 100 GeV $\le m_H \le$ 140 GeV. We use helicity amplitude techniques to compute such processes exactly at tree level in the framework of the Standard Model. Total rates and differential spectra of phenomenological interest are given and discussed.
|
arxiv:hep-ph/9902214
|
We present the computation of Higgs boson production in association with a jet at the LHC, including QCD corrections up to NNLO. The calculation includes the subsequent decay of the Higgs boson into four leptons, allowing for the full reconstruction of the final-state kinematics. In anticipation of improved LHC measurements based on the full Run II dataset, we present a study of single- and double-differential cross sections within the fiducial volume as defined in prior ATLAS analyses. Higher-order corrections are found to have a sizeable impact on both the normalisation and shape of the differential cross sections.
|
arxiv:1912.03560
|
Recently, sequential recommendation has been adapted to the LLM paradigm to enjoy the power of LLMs. LLM-based methods usually formulate recommendation information into natural language, and the model is trained to predict the next item in an auto-regressive manner. Despite their notable success, the substantial computational overhead of inference poses a significant obstacle to their real-world applicability. In this work, we endeavor to streamline existing LLM-based recommendation models and propose a simple yet highly effective model, Lite-LLM4Rec. The primary goal of Lite-LLM4Rec is to achieve efficient inference for the sequential recommendation task. Lite-LLM4Rec circumvents beam search decoding by using a straight item projection head for ranking score generation. This design stems from our empirical observation that beam search decoding is ultimately unnecessary for sequential recommendations. Additionally, Lite-LLM4Rec introduces a hierarchical LLM structure tailored to efficiently handle the extensive contextual information associated with items, thereby reducing computational overhead while enjoying the capabilities of LLMs. Experiments on three publicly available datasets corroborate the effectiveness of Lite-LLM4Rec in both performance and inference efficiency (notably a 46.8% performance improvement and a 97.28% efficiency improvement on ML-1M) over existing LLM-based methods. Our implementations will be open-sourced.
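A minimal sketch of the item-projection-head idea, assuming the LLM's final hidden state and an item embedding matrix are available (all names here are illustrative, not the paper's code): a single matrix-vector product scores every candidate item at once, so no token-by-token beam search is needed.

```python
import numpy as np

def rank_items(hidden_state, item_embeddings, k=10):
    """Score all items with one linear projection of the model's last
    hidden state and return the top-k item indices, replacing
    auto-regressive beam search decoding."""
    scores = item_embeddings @ hidden_state  # shape: (num_items,)
    return np.argsort(scores)[::-1][:k]

# toy example: 1000 candidate items, 64-dimensional hidden state
rng = np.random.default_rng(0)
top = rank_items(rng.normal(size=64), rng.normal(size=(1000, 64)))
```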
|
arxiv:2402.09543
|
In this paper, we investigate some electrically charged magnetic solutions of the SU(2) Yang-Mills-Higgs field theory in the net-zero topological charge sector. We only examine the case when the Higgs field vanishes at two points along the $z$-axis and when the Higgs field vanishes along a ring with the $z$-axis as its symmetry axis. We study the possible electric charges the dyons can carry in relation to the electric-magnetic charge separations, and calculate the finite total energy and magnetic dipole moment of these dyons. These stationary dyon solutions do not satisfy the first-order Bogomol'nyi equations and are non-BPS solutions. They are axially symmetric saddle-point solutions and are characterized by the electric charge parameter, $-1 < \eta < 1$, which determines the net electric charges of these dyons. These dyon solutions are solved numerically when the magnetic charges are $n = 1, 2, 3, 4$, and 5, and when the strength of the Higgs field potential is non-vanishing with $\lambda = 1$. When $\lambda = 1$, we found that the net electric charge approaches a finite critical value as $\eta$ approaches $\pm 1$. Hence the electromagnetic charge separation, total energy, and magnetic dipole moment of the dyon also approach finite critical values.
|
arxiv:1102.4058
|
We define a $\mathbb{Z}_2$-valued topological and gauge invariant associated to any 1-dimensional, translation-invariant topological insulator which satisfies either particle-hole symmetry or chiral symmetry. The invariant can be computed from the Berry phase associated to a suitable basis of Bloch functions which is compatible with the symmetries. We compute the invariant in the Su-Schrieffer-Heeger model for chiral symmetric insulators, and in the Kitaev model for particle-hole symmetric insulators. We show that in both cases the $\mathbb{Z}_2$ invariant predicts the existence of zero-energy boundary states for the corresponding truncated models.
|
arxiv:2303.08464
|
It is well known that the temperature dependence of the effective magnetocrystalline anisotropy energy obeys the $l(l+1)/2$ power law of magnetization in the Callen-Callen theory. Therefore, according to the Callen-Callen theory, the magnetocrystalline anisotropy energy is assumed to be zero at the critical temperature, where the magnetization is approximately zero. This study estimates the temperature dependence of the magnetocrystalline anisotropy energy by integrating the magnetization versus magnetic field ($M$-$H$) curves, and finds that the magnetocrystalline anisotropy is still finite even above the Curie temperature for uniaxial anisotropy, whereas this does not appear in the cubic anisotropy case. The origin is the fast reduction of the anisotropy field, the magnetic field required to saturate the magnetization along the hard axis, in the case of cubic anisotropy. Therefore, the magnetization anisotropy and anisotropic magnetic susceptibility, which are the key factors of magnetic anisotropy, cannot be established in the case of cubic anisotropy. In addition, the effect of magnetocrystalline anisotropy on magnetocaloric properties appears as a difference between the entropy-change curves of AlFe$_2$B$_2$ above the Curie temperature, which is in good agreement with a previous experimental study. This is proof of magnetic anisotropy slightly above the Curie temperature.
|
arxiv:2112.08154
|
We develop a general theory describing the thermodynamical behavior of open quantum systems coupled to thermal baths beyond perturbation theory. Our approach is based on the exact time-local quantum master equation for the reduced open system states, and on a principle of minimal dissipation. This principle leads to a unique prescription for the decomposition of the master equation into a Hamiltonian part representing coherent time evolution and a dissipator part describing dissipation and decoherence. Employing this decomposition, we demonstrate how to define work, heat, and entropy production, formulate the first and second laws of thermodynamics, and establish the connection between violations of the second law and quantum non-Markovianity.
|
arxiv:2109.11893
|
In cold atomic systems, fast and high-resolution microscopy of individual atoms is crucial, since it can provide direct information on the dynamics and correlations of the system. Here, we demonstrate nanosecond-scale two-dimensional stroboscopic pictures of a single trapped ion beyond the optical diffraction limit, by combining the main idea of ground-state depletion microscopy with quantum state transition control in cold atoms. We achieve a spatial resolution of up to 175 nm using an NA = 0.1 objective in the experiment, which represents a more than tenfold improvement compared with direct fluorescence imaging. To show the potential of this method, we apply it to observe the secular motion of the trapped ion, demonstrating a temporal resolution of up to 50 ns with a displacement detection sensitivity of 10 nm. Our method provides a powerful tool for probing particle positions, momenta, and correlations, as well as their dynamics, in cold atomic systems.
|
arxiv:2104.10026
|
In this paper, we propose a method to solve a bi-objective variant of the well-studied traveling thief problem (TTP). The TTP is a multi-component problem that combines two classic combinatorial problems: the traveling salesman problem (TSP) and the knapsack problem (KP). We address the BI-TTP, a bi-objective version of the TTP, where the goal is to minimize the overall traveling time and to maximize the profit of the collected items. Our proposed method is based on a biased random-key genetic algorithm with customizations addressing problem-specific characteristics. We incorporate domain knowledge through a combination of near-optimal solutions of each subproblem in the initial population, and use a custom repair operator to avoid the evaluation of infeasible solutions. The bi-objective aspect of the problem is addressed through an elite population extracted based on the non-dominated rank and crowding distance. Furthermore, we provide a comprehensive study showing the influence of each parameter on the performance. Finally, we discuss the results of the BI-TTP competitions at the EMO-2019 and GECCO-2019 conferences, where our method won first and second place, respectively, thus proving its ability to find high-quality solutions consistently.
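A minimal sketch of the two biased random-key ingredients named above (illustrative, not the competition code): tours are decoded from real-valued keys by sorting, and crossover inherits each key from the elite parent with a fixed bias.

```python
import numpy as np

def decode_tour(keys):
    """Random-key decoding: visit cities in the order obtained by
    sorting the keys."""
    return np.argsort(keys)

def biased_crossover(elite_keys, other_keys, rho=0.7, rng=None):
    """Inherit each key from the elite parent with probability rho."""
    rng = rng or np.random.default_rng()
    take_elite = rng.random(len(elite_keys)) < rho
    return np.where(take_elite, elite_keys, other_keys)

rng = np.random.default_rng(1)
parent_a, parent_b = rng.random(5), rng.random(5)
child = biased_crossover(parent_a, parent_b, rng=rng)
print(decode_tour(child))  # a permutation of the 5 cities
```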
|
arxiv:2002.04303
|
New exceptional (i.e. non-repeating) prime number multiplets are given and formulated in terms of arithmetic progressions, along with laws governing them. Accompanying repeating prime number multiplets are pointed out. Prime number multiplets with less regular distances are studied.
|
arxiv:1105.4092
|
Time series modeling and analysis has become critical in various domains. Conventional methods such as RNNs and Transformers, while effective for discrete-time and regularly sampled data, face significant challenges in capturing the continuous dynamics and irregular sampling patterns inherent in real-world scenarios. Neural differential equations (NDEs) represent a paradigm shift by combining the flexibility of neural networks with the mathematical rigor of differential equations. This paper presents a comprehensive review of NDE-based methods for time series analysis, including neural ordinary differential equations, neural controlled differential equations, and neural stochastic differential equations. We provide a detailed discussion of their mathematical formulations, numerical methods, and applications, highlighting their ability to model continuous-time dynamics. Furthermore, we address key challenges and future research directions. This survey serves as a foundation for researchers and practitioners seeking to leverage NDEs for advanced time series analysis.
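As a concrete reference point for the neural-ODE formulation, here is a minimal sketch: the hidden state evolves as dz/dt = f_theta(z, t) for a tiny tanh vector field, integrated with fixed-step fourth-order Runge-Kutta (a generic choice; the surveyed methods typically use adaptive solvers and train theta by backpropagation or adjoints).

```python
import numpy as np

def f(z, t, W, b):
    """A tiny neural vector field: dz/dt = tanh(W z + b)."""
    return np.tanh(W @ z + b)

def odeint_rk4(z0, t0, t1, steps, W, b):
    """Integrate the neural ODE with fixed-step RK4."""
    z, t = z0.copy(), t0
    h = (t1 - t0) / steps
    for _ in range(steps):
        k1 = f(z, t, W, b)
        k2 = f(z + 0.5 * h * k1, t + 0.5 * h, W, b)
        k3 = f(z + 0.5 * h * k2, t + 0.5 * h, W, b)
        k4 = f(z + h * k3, t + h, W, b)
        z = z + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return z

rng = np.random.default_rng(0)
W, b = 0.5 * rng.normal(size=(4, 4)), 0.1 * rng.normal(size=4)
z1 = odeint_rk4(rng.normal(size=4), 0.0, 1.0, steps=20, W=W, b=b)
```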
|
arxiv:2502.09885
|
Off-policy evaluation methods are important in recommendation systems and search engines, where data collected under an existing logging policy is used to estimate the performance of a new proposed policy. A common approach to this problem is weighting, where data is weighted by a density ratio between the probability of actions given contexts under the target and logged policies. In practice, two issues often arise. First, many problems have very large action spaces and we may not observe rewards for most actions, so in finite samples we may encounter a positivity violation. Second, many recommendation systems are not probabilistic, so having access to logging and target policy densities may not be feasible. To address these issues, we introduce the featurized embedded permutation weighting estimator. The estimator computes the density ratio in an action embedding space, which reduces the possibility of positivity violations. The density ratio is computed leveraging recent advances in normalizing flows and in density ratio estimation framed as a classification problem, in order to obtain estimates which are feasible in practice.
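A minimal sketch of one ingredient named above, density ratio estimation as classification (the normalizing-flow and permutation-weighting specifics of the estimator are omitted): a probabilistic classifier is trained to separate logged from target action embeddings, and its odds, corrected for sample sizes, estimate the ratio.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def density_ratio(logged_embeddings, target_embeddings):
    """Estimate w(e) = p_target(e) / p_logged(e) via a classifier
    trained to distinguish the two samples in embedding space."""
    X = np.vstack([logged_embeddings, target_embeddings])
    y = np.concatenate([np.zeros(len(logged_embeddings)),
                        np.ones(len(target_embeddings))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    p = clf.predict_proba(logged_embeddings)[:, 1]
    # odds ratio, corrected for the class prior n_logged / n_target
    prior_correction = len(logged_embeddings) / len(target_embeddings)
    return p / (1.0 - p) * prior_correction
```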
|
arxiv:2203.02807
|
In recent years, muscle synergies have been proposed for proportional myoelectric control. Synergies were extracted using matrix factorisation techniques (mainly non-negative matrix factorisation, NMF), which require identification of synergies with tasks or movements. In addition, NMF methods were viable only with a task dimension of 2 degrees of freedom (DOFs). Here, the potential use of a higher-order tensor model for myoelectric control is explored. We assess the ability of a constrained Tucker tensor decomposition to estimate consistent synergies when the task dimensionality is increased up to 3 DOFs. Synergies extracted from third-order tensors of 1 and 3 DOFs were compared. Results showed that muscle synergies extracted via constrained Tucker decomposition were consistent with the increase of task dimension. Hence, these results support the consideration of proportional 3-DOF myoelectric control based on tensor decompositions.
|
arxiv:2007.01944
|
underlying domain. Recently, an automated method was introduced for engineering ontologies in the life sciences, such as the Gene Ontology (GO), one of the most successful and widely used biomedical ontologies. Based on information theory, it restructures ontologies so that the levels represent the desired specificity of the concepts. Similar information-theoretic approaches have also been used for optimal partition of the Gene Ontology. Given the mathematical nature of such engineering algorithms, these optimizations can be automated to produce a principled and scalable architecture to restructure ontologies such as GO. Open Biomedical Ontologies (OBO), a 2006 initiative of the U.S. National Center for Biomedical Ontology, provides a common 'foundry' for various ontology initiatives, amongst which are: the Generic Model Organism Project (GMOD); the Gene Ontology Consortium; the Sequence Ontology; the Ontology Lookup Service; the Plant Ontology Consortium; Standards and Ontologies for Functional Genomics; and more.

== See also ==
ISO/IEC 21838; Ontology (information science); Ontology components; Ontology double articulation; Ontology learning; Ontology modularization; Semantic decision table; Semantic integration; Semantic technology; Semantic Web; Linked data

== References ==
This article incorporates public domain material from the National Institute of Standards and Technology.

== Further reading ==
Kotis, K., A. Papasalouros, G. A. Vouros, N. Pappas, and K. Zoumpatianos, "Enhancing the collective knowledge for the engineering of ontologies in open and socially constructed learning spaces", Journal of Universal Computer Science, vol. 17, issue 12, pp. 1710-1742, 08/2011.
Kotis, K., and A. Papasalouros, "Learning useful kick-off ontologies from query logs: HCOME revised", 4th International Conference on Complex, Intelligent and Software Intensive Systems (CISIS-2010), Kracow, IEEE Computer Society Press, 2010.
John Davies (ed.) (2006). Semantic Web Technologies: Trends and Research in Ontology-based Systems. Wiley. ISBN 978-0-470-02596-3.
Asuncion Gomez-Perez, Mariano Fernandez-Lopez, Oscar Corcho (2004). Ontological Engineering: with examples from the areas of knowledge management, e-commerce and the Semantic Web. Springer, 2004.
Jarrar, Mustafa (2006). "Position paper". Proceedings of the 15th International Conference on World Wide Web - WWW '06. pp.
|
https://en.wikipedia.org/wiki/Ontology_engineering
|
For orthogonal polynomials defined by a compact Jacobi matrix with exponential decay of the coefficients, precise properties of the orthogonality measure are determined. This allows showing the uniform boundedness of partial sums of orthogonal expansions with respect to the $L^\infty$ norm, which generalizes analogous results obtained by the authors for little $q$-Legendre, little $q$-Jacobi and little $q$-Laguerre polynomials.
|
arxiv:math/0509241
|
Given two random variables taking values in a bounded interval, we study whether higher-order stochastic dominance between them depends on the reference interval in the model setting. We obtain two results. First, the stochastic dominance relations become strictly stronger when the reference interval shrinks if and only if the order of stochastic dominance is larger than three. Second, for mean-preserving stochastic dominance relations, the reference interval is irrelevant if and only if the difference between the degree of the stochastic dominance and the number of moments is no larger than three. These results highlight complications arising from using higher-order stochastic dominance in economic applications.
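For reference, one common convention for $k$-th order stochastic dominance makes the dependence on the reference interval visible (conventions vary, and for $k \ge 3$ some definitions add end-point moment conditions): the iterated CDF is integrated from the left endpoint $a$, so shrinking the interval changes the functions being compared.

```latex
% k-th order stochastic dominance on a reference interval [a, b]:
\[
F^{[1]}_X(x) = F_X(x), \qquad
F^{[k+1]}_X(x) = \int_a^x F^{[k]}_X(t)\,\mathrm{d}t ,
\]
\[
X \succeq_{k\text{-SD}} Y
\iff
F^{[k]}_X(x) \le F^{[k]}_Y(x) \quad \text{for all } x \in [a, b].
\]
```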
|
arxiv:2411.15401
|
With the increasing maturity and expansion of the cryptocurrency market, understanding and predicting its price fluctuations has become an important issue in the field of financial engineering. This article introduces an innovative genetic algorithm-generated alpha sentiment (GAS) blending ensemble model specifically designed to predict Bitcoin market trends. The model integrates advanced ensemble learning methods, feature selection algorithms, and in-depth sentiment analysis to effectively capture the complexity and variability of daily Bitcoin trading data. The GAS framework combines 34 alpha factors with 8 news economic sentiment factors to provide deep insights into Bitcoin price fluctuations by accurately analyzing market sentiment and technical indicators. The core of this study is using a stacked model (including LightGBM, XGBoost, and a Random Forest classifier) for trend prediction, which demonstrates excellent performance relative to traditional buy-and-hold strategies. In addition, this article also explores the effectiveness of using genetic algorithms to automate alpha factor construction, as well as enhancing predictive models through sentiment analysis. Experimental results show that the GAS model performs competitively in daily Bitcoin trend prediction, especially when analyzing highly volatile financial assets with rich data.
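A minimal sketch of the stacking step, with scikit-learn's gradient boosting standing in for the paper's LightGBM and XGBoost base learners to keep the example dependency-free (all names, parameters, and the synthetic data here are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression

# Base learners feed out-of-fold predictions to a meta-learner.
stack = StackingClassifier(
    estimators=[("gbt", GradientBoostingClassifier()),   # LightGBM/XGBoost stand-in
                ("rf", RandomForestClassifier(n_estimators=200))],
    final_estimator=LogisticRegression(),
    cv=5,
)

# 42 features echo the 34 alpha + 8 sentiment factors; labels would be
# next-day up/down trend in the real setting.
X, y = make_classification(n_samples=500, n_features=42, random_state=0)
stack.fit(X, y)
print(stack.predict(X[:5]))
```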
|
arxiv:2411.03035
|
Consideration of the model of the relativistic particle with curvature and torsion in three-dimensional space-time shows that the squaring of the primary constraints entails a wrong result. The complete set of Hamiltonian constraints arising here corresponds to another model, with an action similar but not identical to the initial action.
|
arxiv:hep-th/9309021
|
We present the discovery of GRB 020405 made with the InterPlanetary Network (IPN). With a duration of 60 s, the burst appears to be a typical long-duration event. We observed the 75-square-arcminute IPN error region with the Mount Stromlo Observatory's 50-inch robotic telescope and discovered a transient source which subsequently decayed and was also associated with a variable radio source. We identify this source as the afterglow of GRB 020405. Subsequent observations by other groups found varying polarized flux and established a redshift of 0.690 to the host galaxy. Motivated by the low redshift, we triggered observations with WFPC2 on board the Hubble Space Telescope (HST). Modeling the early ground-based data with a jet model, we find a clear red excess over the decaying optical lightcurves that is present between day 10 and day 141 (the last HST epoch). This 'bump' has the spectral and temporal features expected of an underlying supernova (SN). In particular, the red color of the putative SN is similar to that of the SN associated with GRB 011121 at late times. Restricting the sample of GRBs to those with z < 0.7, a total of five bursts, red bumps at late times are found in GRB 970228, GRB 011121, and GRB 020405. It is possible that the simplest idea, namely that all long-duration GRBs have underlying SNe with a modest dispersion in their properties (especially peak luminosity), is sufficient to explain the non-detections.
|
arxiv:astro-ph/0208008
|
For varieties given by an equation $N_{K/k}(\xi) = P(t)$, where $N_{K/k}$ is the norm form attached to a field extension $K/k$ and $P(t) \in k[t]$ is a polynomial, three topics have been investigated: (1) computation of the unramified Brauer group of such varieties over arbitrary fields; (2) rational points and the Brauer-Manin obstruction over number fields (under Schinzel's hypothesis); (3) zero-cycles and the Brauer-Manin obstruction over number fields. In this paper, we produce new results in each of the three directions. We obtain quite general results under the assumption that $K/k$ is abelian (as opposed to cyclic in earlier investigations).
|
arxiv:1202.4115
|
In this paper, we propose a Dantzig selector based on $\ell_1 - \alpha\ell_2$ $(0 < \alpha \leq 1)$ minimization for signal recovery. In this Dantzig selector, the constraint $\|\mathbf{A}^{\top}(\mathbf{b} - \mathbf{A}\mathbf{x})\|_\infty \leq \eta$ for some small constant $\eta > 0$ means that the columns of $\mathbf{A}$ are only weakly correlated with the error vector $\mathbf{e} = \mathbf{A}\mathbf{x} - \mathbf{b}$. First, recovery guarantees based on the restricted isometry property (RIP) are established for signals. Next, we propose an effective algorithm to solve the proposed Dantzig selector. Last, we illustrate the proposed model and algorithm by extensive numerical experiments for the recovery of signals in the cases of Gaussian, impulsive, and uniform noise. The performance of the proposed Dantzig selector is better than that of existing methods.
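Written out, the model described above is the following constrained program (assembled directly from the objective and constraint stated in the abstract):

```latex
% The ell_1 - alpha*ell_2 Dantzig selector:
\[
\min_{\mathbf{x} \in \mathbb{R}^n} \;
\|\mathbf{x}\|_1 - \alpha \|\mathbf{x}\|_2
\quad \text{subject to} \quad
\|\mathbf{A}^{\top}(\mathbf{b} - \mathbf{A}\mathbf{x})\|_\infty \le \eta ,
\qquad 0 < \alpha \le 1 .
\]
```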
|
arxiv:2105.14229
|
wide band gap oxides are promising host materials for spin defect qubits, offering unique advantages such as a dilute nuclear spin environment. zinc oxide ( zno ), in particular, can achieve exceptionally high purity, which enables long spin coherence times. in this work, we theoretically search for deep - level point defects in zno with optimal physical properties for optically - addressable spin qubits. using first - principles calculations, we predict the molybdenum - vacancy complex defect $ mo _ { zn } v _ o $ in zno to possess promising spin and optical properties, including a spin - triplet ground state, an optical transition in the visible to near - infrared range with high quantum yield, allowed intersystem crossings with a sizable optically - detected magnetic resonance contrast, and long spin t $ _ 2 $ and t $ ^ * _ 2 $. notably, we find the huang - rhys factor of the defect to be around 5, which is significantly smaller than the typical range of 10 - 30 for most known defects in zno. furthermore, we compare the spin decoherence driven by the nuclear spin bath and paramagnetic impurity baths. we find that paramagnetic impurities are very effective in causing spin decoherence even at very low concentrations, implying that they can likely dominate the spin decoherence in zno even after isotopic purification. using the computed excited - state energies and kinetic rates as inputs, we predict the odmr contrast and propose a new protocol for spin qubit initialization and readout, which could be generalized to other systems with forbidden axial intersystem crossings.
|
arxiv:2502.00551
|
optical implementations of neural networks ( onns ) herald next - generation high - speed and energy - efficient deep learning computing by harnessing the technical advantages of the large bandwidth and high parallelism of optics. however, due to the problems of an incomplete numerical domain, limited hardware scale, or inadequate numerical accuracy, the majority of existing onns have been studied only for basic classification tasks. given that regression is a fundamental form of deep learning and accounts for a large part of current artificial intelligence applications, it is necessary to master deep learning regression for the further development and deployment of onns. here, we demonstrate a silicon - based optical coherent dot - product chip ( ocdc ) capable of completing deep learning regression tasks. the ocdc adopts optical fields to carry out operations in the complete real - value domain instead of only the positive domain. via reuse, a single chip conducts matrix multiplications and convolutions in neural networks of any complexity. also, hardware deviations are compensated via in - situ backpropagation control, owing to the simplicity of the chip architecture. therefore, the ocdc meets the requirements for sophisticated regression tasks, and we successfully demonstrate a representative neural network, automap ( a cutting - edge neural network model for image reconstruction ). the quality of images reconstructed by the ocdc and by a 32 - bit digital computer is comparable. to the best of our knowledge, there is no precedent for performing such state - of - the - art regression tasks on an onn chip. it is anticipated that the ocdc can promote novel accomplishments of onns in modern ai applications including autonomous driving, natural language processing, and scientific study.
|
arxiv:2105.12122
|
the fourier transform is one of the most important linear transformations used in science and engineering. cooley and tukey ' s fast fourier transform ( fft ) from 1964 is a method for computing this transformation in time $ o ( n \ log n ) $. from a lower bound perspective, relatively little is known. ailon shows in 2013 an $ \ omega ( n \ log n ) $ bound for computing the normalized fourier transform, assuming that only unitary operations on pairs of coordinates are allowed. the goal of this document is to describe a natural open problem that arises from this work, which is related to group theory, and in particular to representation theory.
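For readers who want the divide-and-conquer structure behind the O(n log n) runtime made concrete, here is a minimal radix-2 Cooley-Tukey FFT (a textbook formulation, not tied to the lower-bound work under discussion):

```python
# Recursive radix-2 FFT; input length must be a power of two.
import cmath

def fft(a):
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2])   # transform of even-indexed samples
    odd = fft(a[1::2])    # transform of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

print(fft([1, 1, 1, 1, 0, 0, 0, 0]))  # 8-point example
```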
|
arxiv:1907.07471
|
the goal of the present paper is to study certain types of metrics, such as the $ * $ - $ \ eta $ - ricci - yamabe soliton, on $ \ alpha $ - cosymplectic manifolds with respect to the quarter - symmetric metric connection. further, we prove some curvature properties of $ \ alpha $ - cosymplectic manifolds admitting a quarter - symmetric metric connection. we then describe the characteristics of the soliton when the manifold satisfies the quarter - symmetric metric connection. later, we derive a laplace equation from the $ * $ - $ \ eta $ - ricci - yamabe soliton equation when the potential vector field $ \ xi $ of the soliton is of gradient type in terms of the quarter - symmetric metric connection. next, we develop the nature of the soliton when the vector field is conformal killing admitting a quarter - symmetric metric connection. finally, we present an example of a 5 - dimensional $ \ alpha $ - cosymplectic metric as a $ * $ - $ \ eta $ - ricci - yamabe soliton with respect to a quarter - symmetric metric connection to illustrate our results.
|
arxiv:2109.04700
|
the continued evolution of cmos technology demands materials and architectures that emphasize low power consumption, particularly for computations involving large scale data processing and multivariable optimization. ferroelectric materials offer promising solutions through enabling dual - purpose memory units capable of performing both storage and logic operations. in this study, we demonstrate ferroelectric field effect transistors ( fefets ) with mos2 monolayer channels fabricated on ultrathin 5 nm and 10 nm ferroelectric aluminum scandium nitride ( alscn ) films. by decreasing the thickness of the ferroelectric film, we achieve significantly reduced gate voltages ( < 3v ) required to switch the conductance of the devices, enabling operation at low voltages compatible with advanced cmos. we observe a characteristic crossover in hysteresis behavior that varies with film thickness, channel fabrication method, and environmental conditions. through systematic investigation of multiple parameters including channel fabrication methods, dimensional scaling, and environmental effects, we provide pathways to improve device performance. while our devices demonstrate clear ferroelectric switching behavior, further optimization is required to enhance the on / off ratio at zero gate voltage while continuing to reduce the coercive field of these ultrathin films.
|
arxiv:2504.07271
|
let $ \ mathfrak { g } $ be a simple classical lie algebra over $ \ mathbb { c } $ and $ g $ be the adjoint group. consider a nilpotent element $ e \ in \ mathfrak { g } $, and the adjoint orbit $ \ mathbb { o } = ge $. the formal slices to the codimension $ 2 $ orbits in the closure $ \ overline { \ mathbb { o } } \ subset \ mathfrak { g } $ are well - known due to the work of kraft and procesi. in this paper, we prove a similar result for the universal $ g $ - equivariant cover $ \ widetilde { \ mathbb { o } } $ of $ \ mathbb { o } $. namely, we describe the codimension $ 2 $ singularities for its affinization $ spec ( \ mathbb { c } [ \ widetilde { \ mathbb { o } } ] ) $.
|
arxiv:2003.09356
|
it is generally assumed that ionization in slow collisions of light atomic particles, whose constituents ( electrons and nuclei ) move with velocities orders of magnitude smaller than the speed of light, is driven solely by the coulomb force. here we show, however, that the breit interaction - - a relativistic correction to the coulomb interaction between electrons - - can become the main actor when the colliding system couples resonantly to the quantum radiation field. our results demonstrate that this ionization mechanism can be very efficient in various not too dense physical environments, including stellar plasmas and atomic beams propagating in gases.
|
arxiv:2309.09280
|
we investigate, in the context of a real massless scalar field in $ 1 + 1 $ dimensions, models of partially reflecting mirrors simulated by dirac $ \ delta - \ delta ^ { \ prime } $ point interactions. in the literature, these models do not exhibit full transparency at high frequencies. in order to provide a more realistic feature for these models, we propose a modified $ \ delta - \ delta ^ { \ prime } $ point interaction that enables full transparency in the limit of high frequencies. taking this modified $ \ delta - \ delta ^ { \ prime } $ model into account, we investigate the casimir force, comparing our results with those found in the literature.
|
arxiv:1607.06321
|
based on the amplitude behavior of quantum rabi oscillations driven by a coherent field, we show that there exists an upper bound on the number of logical operations that can be performed on any single qubit within one error - correction period of a quantum computation. we introduce a parameter to describe the maximum of this number and estimate its decoherence limit. the analysis shows that a generally accepted error - rate threshold for quantum logic gates limits the parameter to so small a number that even two fault - tolerant toffoli gates can hardly be implemented reliably within one error - correction period. this result suggests that the design of feasible fault - tolerant quantum circuits is still an arduous task.
|
arxiv:0712.3197
|
vine copulas are flexible dependence models using bivariate copulas as building blocks. if the parameters of the bivariate copulas in the vine copula depend on covariates, one obtains a conditional vine copula. we propose an extension for the estimation of continuous conditional vine copulas, where the parameters of continuous conditional bivariate copulas are estimated sequentially and separately via gradient - boosting. for this purpose, we link covariates via generalized linear models ( glms ) to kendall ' s $ \ tau $ correlation coefficient, from which the corresponding copula parameter can be obtained. consequently, the gradient - boosting algorithm estimates the copula parameters while providing a natural covariate selection. in a second step, an additional covariate deselection procedure is applied. the performance of the gradient - boosted conditional vine copulas is illustrated in a simulation study. linear covariate effects in low - and high - dimensional settings are investigated for the conditional bivariate copulas separately and for conditional vine copulas. moreover, the gradient - boosted conditional vine copulas are applied to the temporal postprocessing of ensemble weather forecasts in a low - dimensional setting. the results show that our suggested method is able to outperform the benchmark methods and better identifies temporal correlations. finally, we provide an r - package called boostcopula for this method.
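The link from Kendall's τ back to a copula parameter has closed forms for several common bivariate families; a small sketch of these standard inversions follows (independent of the paper's boosting machinery, whose details are omitted here):

```python
# Closed-form Kendall's-tau inversions for three common bivariate families.
import math

def tau_to_parameter(tau, family):
    if family == "gaussian":     # tau = (2/pi) * arcsin(rho)
        return math.sin(math.pi * tau / 2.0)
    if family == "clayton":      # tau = theta / (theta + 2), tau in (0, 1)
        return 2.0 * tau / (1.0 - tau)
    if family == "gumbel":       # tau = 1 - 1/theta, tau in [0, 1)
        return 1.0 / (1.0 - tau)
    raise ValueError(f"no closed-form inversion implemented for {family!r}")

print(tau_to_parameter(0.5, "gaussian"))  # rho ~ 0.707
```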
|
arxiv:2406.13500
|
in this paper, we study clifford - wolf translations of finsler spaces. we first give a characterization of clifford - wolf translations of finsler spaces in terms of killing vector fields. in particular, we show that there is a natural correspondence between clifford - wolf translations and the killing vector fields of constant length. in the special case of homogeneous randers spaces, we give some explicit sufficient and necessary conditions for an isometry to be a clifford - wolf translation. finally, we construct some explicit examples to explain some of the results of this paper.
|
arxiv:1201.3714
|
the advent of increasingly precise gyroscopes has played a key role in the technological development of navigation systems. ring - laser and fibre - optic gyroscopes, for example, are widely used in modern inertial guidance systems and rely on the interference of unentangled photons to measure mechanical rotation. the sensitivity of these devices scales with the number of particles used as $ 1 / \ sqrt { n } $. here we demonstrate how, by using sources of entangled particles, it is possible to do better and even achieve the ultimate limit allowed by quantum mechanics where the precision scales as 1 / n. we propose a gyroscope scheme that uses ultra - cold atoms trapped in an optical ring potential.
|
arxiv:1003.3587
|
computing tasks may often be posed as optimization problems. the objective functions for real - world scenarios are often nonconvex and / or nondifferentiable. state - of - the - art methods for solving these problems typically only guarantee convergence to local minima. this work presents hamilton - jacobi - based moreau adaptive descent ( hj - mad ), a zero - order algorithm with guaranteed convergence to global minima, assuming continuity of the objective function. the core idea is to compute gradients of the moreau envelope of the objective ( which is " piece - wise convex " ) with adaptive smoothing parameters. gradients of the moreau envelope ( i. e., proximal operators ) are approximated via the hopf - lax formula for the viscous hamilton - jacobi equation. our numerical examples illustrate global convergence.
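One way to make the Hopf-Lax / viscous-HJ approximation concrete is a self-normalized Monte Carlo estimate of the Moreau-envelope gradient; the following is a hedged sketch of that general idea, not the authors' exact sampling scheme or parameter schedule:

```python
# Monte Carlo estimate of the Moreau-envelope gradient via softmin weights.
import numpy as np

def moreau_envelope_grad(f, x, t=1.0, delta=0.1, n_samples=2000, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # a N(x, t*delta*I) proposal absorbs the quadratic term of the envelope
    y = x + np.sqrt(t * delta) * rng.standard_normal((n_samples, x.size))
    vals = np.array([f(yi) for yi in y])
    w = np.exp(-(vals - vals.min()) / delta)        # stabilized softmin weights
    prox = (w[:, None] * y).sum(axis=0) / w.sum()   # approximates prox_{t f}(x)
    return (x - prox) / t                           # Moreau-envelope gradient

# descent loop (assumed step size): x <- x - 0.5 * moreau_envelope_grad(f, x)
```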
|
arxiv:2202.11014
|
end - to - end packet transit delay or the number of packets switched in an hour. the design of high - performance systems uses analytical or simulation modeling, whereas the delivery of high - performance implementation involves thorough performance testing. performance engineering relies heavily on statistics, queueing theory, and probability theory for its tools and processes. = = = program management and project management = = = program management ( or project management ) has many similarities with systems engineering, but has broader - based origins than the engineering ones of systems engineering. project management is also closely related to both program management and systems engineering. both include scheduling as an engineering support tool in assessing interdisciplinary concerns under the management process. in particular, the direct relationship of resources, performance features, and risk to the duration of a task, as well as the dependency links among tasks and impacts across the system lifecycle, are systems engineering concerns. = = = proposal engineering = = = proposal engineering is the application of scientific and mathematical principles to design, construct, and operate a cost - effective proposal development system. basically, proposal engineering uses the " systems engineering process " to create a cost - effective proposal and increase the odds of a successful proposal. = = = reliability engineering = = = reliability engineering is the discipline of ensuring a system meets customer expectations for reliability throughout its life ( i. e. it does not fail more frequently than expected ). next to the prediction of failure, it is just as much about the prevention of failure. reliability engineering applies to all aspects of the system. it is closely associated with maintainability, availability ( dependability or rams preferred by some ), and integrated logistics support. reliability engineering is always a critical component of safety engineering, as in failure mode and effects analysis ( fmea ) and hazard fault tree analysis, and of security engineering. = = = risk management = = = risk management, the practice of assessing and dealing with risk, is one of the interdisciplinary parts of systems engineering. in development, acquisition, or operational activities, the inclusion of risk in tradeoffs with cost, schedule, and performance features involves the iterative complex configuration management of traceability and evaluation to the scheduling and requirements management across domains and for the system lifecycle, which requires the interdisciplinary technical approach of systems engineering. systems engineering has risk management define, tailor, implement, and monitor a structured process for risk management which is integrated into the overall effort. = = = safety engineering = = = the techniques of safety engineering may be applied by non - specialist engineers in designing complex systems to minimize the probability of safety - critical failures. the " system safety engineering " function
|
https://en.wikipedia.org/wiki/Systems_engineering
|
electrode - electrolyte interfaces are crucial for electrochemical energy conversion and storage. at these interfaces, the liquid electrolytes form electrical double layers ( edls ). however, despite more than a century of active research, the fundamental structure of edls remains elusive to date. experimental characterization and theoretical calculations have both provided insights, yet each method by itself only offers incomplete or inexact information of the multifaceted edl structure. here we provide a survey of the mainstream approaches for edl quantification, with a particular focus on the emerging 3d atomic force microscopy ( 3d - afm ) imaging which provides real - space atomic - scale edl structures. to overcome the existing limits of edl characterization methods, we propose a new approach to integrate 3d - afm with classical molecular dynamics ( md ) simulation, to enable realistic, precise, and high - throughput determination and prediction of edl structures. as examples of real - world application, we will discuss the feasibility of using this joint experiment - theory method to unravel the edl structure at various carbon - based electrodes for supercapacitors, batteries, and electrocatalysis. looking forward, we believe 3d - afm, future versions of scanning probe microscopy, and their integration with theory offer promising platforms to profile liquid structures in many electrochemical systems.
|
arxiv:2409.10008
|
total variation ( tv ) regularization is popular in image restoration and reconstruction due to its ability to preserve image edges. to date, most research activities on tv models concentrate on image restoration from blurry and noisy observations, while discussions on image reconstruction from random projections are relatively fewer. in this paper, we propose, analyze, and test a fast alternating minimization algorithm for image reconstruction from random projections via solving a tv regularized least - squares problem. the per - iteration cost of the proposed algorithm involves a linear - time shrinkage operation, two matrix - vector multiplications, and two fast fourier transforms. convergence, certain finite convergence, and $ q $ - linear convergence results are established, which indicate that the asymptotic convergence speed of the proposed algorithm depends on the spectral radii of certain submatrices. moreover, to speed up convergence and enhance robustness, we suggest an accelerated scheme based on an inexact alternating direction method. we present experimental results to compare with an existing algorithm, which indicate that the proposed algorithm is stable, efficient, and competitive with twist \ cite { twist } - - a state - of - the - art algorithm for solving tv regularization problems.
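In alternating-minimization TV solvers of this family, the linear-time shrinkage step is typically an isotropic soft-thresholding of the per-pixel gradient field; the sketch below is a generic illustration of that operator, not necessarily the paper's exact formulation:

```python
# Isotropic soft-thresholding (shrinkage) of a 2D gradient field.
import numpy as np

def shrink2(gx, gy, tau):
    mag = np.sqrt(gx**2 + gy**2)                      # per-pixel gradient norm
    scale = np.maximum(mag - tau, 0.0) / np.maximum(mag, 1e-12)
    return gx * scale, gy * scale                     # shrunken gradient field
```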
|
arxiv:1001.1774
|
micro - aerial vehicles ( mavs ) have the advantage of moving freely in 3d space. however, creating compact and sparse map representations that can be efficiently used for planning for such robots is still an open problem. in this paper, we take maps built from noisy sensor data and construct a sparse graph containing topological information that can be used for 3d planning. we use a euclidean signed distance field, extract a 3d generalized voronoi diagram ( gvd ), and obtain a thin skeleton diagram representing the topological structure of the environment. we then convert this skeleton diagram into a sparse graph, which we show is resistant to noise and changes in resolution. we demonstrate global planning over this graph, and the orders of magnitude speed - up it offers over other common planning methods. we validate our planning algorithm in real maps built onboard an mav, using rgb - d sensing.
|
arxiv:1803.04345
|
we study the sound perturbation of a rotating acoustic black hole in the presence of a disclination. the radial part of the massless klein - gordon equation is written into a heun form, and its analytical solution is obtained. these solutions have an explicit dependence on the parameter of the disclination. we obtain the exact hawking - unruh radiation spectrum.
|
arxiv:1607.02750
|
the aim of this paper is to study a problem raised by n. c. phillips concerning the existence of takai duality for $ l ^ p $ operator crossed products $ f ^ { p } ( g, a, \ alpha ) $, where $ g $ is a locally compact abelian group, $ a $ is an $ l ^ { p } $ operator algebra and $ \ alpha $ is an isometric action of $ g $ on $ a $. inspired by d. williams ' proof for the takai duality theorem for crossed products of $ c ^ * $ - algebras, we construct a homomorphism $ \ phi $ from $ f ^ { p } ( \ hat { g }, f ^ p ( g, a, \ alpha ), \ hat { \ alpha } ) $ to $ \ mathcal { k } ( l ^ { p } ( g ) ) \ otimes _ { p } a $ which is a natural $ l ^ p $ - analog of d. williams ' map. for countable discrete abelian groups $ g $ and separable unital $ l ^ p $ operator algebras $ a $ which have unique $ l ^ p $ operator matrix norms, we show that $ \ phi $ is an isomorphism if and only if either $ g $ is finite or $ p = 2 $ ; in particular, $ \ phi $ is an isometric isomorphism in the case that $ p = 2 $. moreover, it is proved that $ \ phi $ is equivariant for the double dual action $ \ hat { \ hat { \ alpha } } $ of $ g $ on $ f ^ p ( \ hat { g }, f ^ p ( g, a, \ alpha ), \ hat { \ alpha } ) $ and the action $ \ mathrm { ad } \ rho \ otimes \ alpha $ of $ g $ on $ \ mathcal { k } ( l ^ p ( g ) ) \ otimes _ p a $.
|
arxiv:2212.00408
|
a continuously measured quantum system with multiple jump channels gives rise to a stochastic process described by random jump times and random emitted symbols, representing each jump channel. while much is known about the waiting time distributions, very little is known about the statistics of the emitted symbols. in this letter we fill in this gap. first, we provide a full characterization of the resulting stochastic process, including efficient ways of simulating it, as well as determining the underlying memory structure. second, we show how to unveil patterns in the stochastic evolution : some systems support closed patterns, wherein the evolution runs over a finite set of states, or at least recurring states. but even if neither is possible, we show that one may still cluster the states approximately, based on their ability to predict future outcomes. we illustrate these ideas by studying transport through a boundary - driven one - dimensional xy spin chain.
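A standard way to simulate such a jump process is the Monte Carlo wave-function (quantum-jump) unraveling; the sketch below is a generic first-order implementation of that textbook method, offered as background rather than as the letter's own algorithm:

```python
# First-order quantum-jump trajectory: records (jump time, emitted symbol).
import numpy as np

def jump_trajectory(psi0, H, Ls, dt, n_steps, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    psi = psi0.astype(complex)
    Heff = H - 0.5j * sum(L.conj().T @ L for L in Ls)     # non-Hermitian drift
    record = []
    for n in range(n_steps):
        dp = [dt * np.real(psi.conj() @ (L.conj().T @ L @ psi)) for L in Ls]
        if rng.random() < sum(dp):                        # a jump occurs
            k = rng.choice(len(Ls), p=np.array(dp) / sum(dp))
            psi = Ls[k] @ psi                             # apply jump channel k
            record.append((n * dt, k))                    # emitted symbol = k
        else:
            psi = psi - 1j * dt * (Heff @ psi)            # no-jump evolution
        psi = psi / np.linalg.norm(psi)
    return record
```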
|
arxiv:2305.07957
|
we introduce a new version $ kk ^ { \ rm alg } $ of bivariant $ k $ - theory that is defined on the category of all locally convex algebras. a motivating example is the weyl algebra $ w $, i. e. the algebra generated by two elements satisfying the heisenberg commutation relation, with the fine locally convex topology. we determine its $ kk ^ { \ rm alg } $ - invariants using a natural extension for $ w $. using similar methods the $ kk ^ { \ rm alg } $ - invariants can be determined for many other algebras of similar type.
|
arxiv:math/0401295
|
this project outlines the complete development of a variable star classification methodology. with the advent of big data in astronomy, professional astronomers are left with the problem of how to manage large amounts of data, and how this deluge of information can be studied in order to improve our understanding of the universe. while our focus is on the development of machine learning methodologies for the identification of variable star type based on light curve data and associated information, one of the goals of this work is the acknowledgment that the development of a true machine learning methodology must include not only a study of what goes into the service ( features, optimization methods ) but also a study of how we understand what comes out of the service ( performance analysis ). a complete beginning - to - end system development strategy is presented as the following individual components : simulation, training, feature extraction, detection, classification, and performance analysis. we propose that a complete machine learning strategy for use in the upcoming era of big data from the next generation of big telescopes, such as lsst, must consider this type of design integration.
|
arxiv:2008.13775
|
existing methods for distillation do not efficiently utilize the training data. this work presents a novel approach to perform distillation using only a subset of the training data, making it more data - efficient. for this purpose, the training of the teacher model is modified to include self - regulation, wherein a sample in the training set is used for updating model parameters in the backward pass only if it is misclassified or the model is not confident enough in its prediction. this modification restricts the participation of samples, unlike the conventional training method. the number of times a sample participates in the self - regulated training process is a measure of its significance towards the model ' s knowledge. the significance values are used to weigh the losses incurred on the corresponding samples in the distillation process. this method is named significance - based distillation. two other methods are proposed for comparison, in which the student model learns by distillation while incorporating self - regulation as in the teacher model, either utilizing the significance information computed during the teacher ' s training or not. these methods are named hybrid and regulated distillations, respectively. experiments on benchmark datasets show that the proposed methods achieve performance similar to other state - of - the - art methods for knowledge distillation while utilizing significantly fewer samples.
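A hedged sketch of the significance-weighted distillation loss described above; the temperature, normalization, and exact weighting scheme are illustrative assumptions rather than the paper's settings:

```python
# Per-sample KD loss weighted by participation counts ("significance").
import torch
import torch.nn.functional as F

def weighted_kd_loss(student_logits, teacher_logits, significance, T=4.0):
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    per_sample = F.kl_div(log_p_student, p_teacher,
                          reduction="none").sum(dim=1) * (T * T)
    weights = significance / significance.sum()  # normalized participation counts
    return (weights * per_sample).sum()
```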
|
arxiv:2102.07125
|
in this paper, we propose a density estimation algorithm called \ textit { gradient boosting histogram transform } ( gbht ), where we adopt the \ textit { negative log likelihood } as the loss function to make the boosting procedure available for the unsupervised tasks. from a learning theory viewpoint, we first prove fast convergence rates for gbht with the smoothness assumption that the underlying density function lies in the space $ c ^ { 0, \ alpha } $. then when the target density function lies in spaces $ c ^ { 1, \ alpha } $, we present an upper bound for gbht which is smaller than the lower bound of its corresponding base learner, in the sense of convergence rates. to the best of our knowledge, we make the first attempt to theoretically explain why boosting can enhance the performance of its base learners for density estimation problems. in experiments, we not only conduct performance comparisons with the widely used kde, but also apply gbht to anomaly detection to showcase a further application of gbht.
|
arxiv:2106.05738
|
we discuss the origin of two classes of germinal centers that have been observed during humoral immune responses : some germinal centers develop very well and give rise to a large number of high - affinity antibody - producing plasma cells, while other germinal center reactions are very weak and the output production is practically absent. we propose an explanation for this nearly all - or - none behavior of germinal center reactions : the affinity of the seeder b - cells to the antigen is the critical parameter that determines the fate of the germinal center reaction. this hypothesis is verified in the framework of a space - time simulation of germinal center reactions.
|
arxiv:physics/0209009
|
we present recent results aiming at assessing the coverage properties of bayesian and frequentist inference methods, as applied to the reconstruction of supersymmetric parameters from simulated lhc data. we discuss the statistical challenges of the reconstruction procedure, and highlight the algorithmic difficulties of obtaining accurate profile likelihood estimates.
|
arxiv:1105.5244
|
bh176 is an old metal - rich star cluster. it is spatially and kinematically consistent with belonging to the monoceros ring. it is larger in size and more distant from the galactic plane than typical open clusters, and it does not belong to the galactic bulge. our aim is to determine the origin of this unique object by accurately measuring its distance, metallicity, and age. the best way to reach this goal is to combine spectroscopic and photometric methods. we present medium - resolution observations of red clump and red giant branch stars in bh176 obtained with the gemini south multi - object spectrograph. we derive radial velocities, metallicities, effective temperatures, and surface gravities of the observed stars and use these parameters to distinguish member stars from field objects. we determine the following parameters for bh176 : $ v _ h = 0 \ pm 15 $ km / s, $ [ fe / h ] = - 0. 1 \ pm 0. 1 $, age $ 7 \ pm 0. 5 $ gyr, $ e ( v - i ) = 0. 79 \ pm 0. 03 $, distance $ 15. 2 \ pm 0. 2 $ kpc, and $ \ alpha $ - element abundance $ [ \ alpha / fe ] \ sim 0. 25 $ dex ( the mean of [ mg / fe ] and [ ca / fe ] ). bh176 is a member of the old galactic open clusters that presumably belong to the thick disk. it may have originated as a massive star cluster after the encounter of the forming thin disk with a high - velocity gas cloud, or as a satellite dwarf galaxy.
|
arxiv:1408.1629
|
a superconducting chip containing a regular array of flux qubits, tunable interqubit inductive couplers, an xy - addressable readout system, on - chip programmable magnetic memory, and a sparse network of analog control lines has been studied. the architecture of the chip and the infrastructure used to control it were designed to facilitate the implementation of an adiabatic quantum optimization algorithm. the performance of an eight - qubit unit cell on this chip has been characterized by measuring its success in solving a large set of random ising spin glass problem instances as a function of temperature. the experimental data are consistent with the predictions of a quantum mechanical model of an eight - qubit system coupled to a thermal environment. these results highlight many of the key practical challenges that we have overcome and those that lie ahead in the quest to realize a functional large scale adiabatic quantum information processor.
|
arxiv:1004.1628
|
the solution of an extended riemann problem is used to provide the internal boundary conditions at a junction when simulating one - dimensional flow through an open channel network. the proposed approach, compared to classic junction models, does not require the tuning of semi - empirical coefficients and it is theoretically well - founded. the riemann problem approach is validated using experimental data, two - dimensional model results and analytical solutions. in particular, a set of experimental data is used to test each model under subcritical steady flow conditions, and different channel junctions are considered, with both continuous and discontinuous bottom elevation. moreover, the numerical results are compared with analytical solutions in a star network to test unsteady conditions. satisfactory results are obtained for all the simulations, and particularly for y - shaped networks and for cases involving variations in channels ' bottom and width. by contrast, classic models suffer when geometrical channel effects are involved.
|
arxiv:1912.06573
|
electrical control of spin polarization is very desirable in spintronics, since electric field can be easily applied locally in contrast with magnetic field. here, we propose a new concept of bipolar magnetic semiconductor ( bms ) in which completely spin - polarized currents with reversible spin polarization can be created and controlled simply by applying a gate voltage. this is a result of the unique electronic structure of bms, where the valence and conduction bands possess opposite spin polarization when approaching the fermi level. our band structure and spin - polarized electronic transport calculations on semi - hydrogenated single - walled carbon nanotubes confirm the existence of bms materials and demonstrate the electrical control of spin - polarization in them.
|
arxiv:1208.1355
|
generative diffusion models are becoming one of the most popular priors in image restoration ( ir ) tasks due to their remarkable ability to generate realistic natural images. despite achieving satisfactory results, ir methods based on diffusion models present several limitations. first of all, most non - blind approaches require an analytical expression of the degradation model to guide the sampling process. secondly, most existing blind approaches rely on families of pre - defined degradation models for training their deep networks. the above issues limit the flexibility of these approaches and thus their ability to handle real - world degradation tasks. in this paper, we propose a novel inn - guided probabilistic diffusion algorithm for non - blind and blind image restoration, namely indigo and blindindigo, which combines the merits of the perfect reconstruction property of invertible neural networks ( inn ) with the strong generative capabilities of pre - trained diffusion models. specifically, we train the forward process of the inn to simulate an arbitrary degradation process and use the inverse to obtain an intermediate image that we use to guide the reverse diffusion sampling process through a gradient step. we also introduce an initialization strategy to further improve the performance and inference speed of our algorithm. experiments demonstrate that our algorithm obtains competitive results compared with recent leading methods, both quantitatively and visually, on synthetic and real - world low - quality images.
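A pseudocode-level sketch of the guidance loop as the abstract describes it; every callable below (denoiser, INN pair, consistency step, re-noising) is a placeholder for the authors' actual components, and the update rule is an assumed gradient-style step:

```python
# INN-guided reverse diffusion, sketched with injected components.
def indigo_sample(y, x_T, n_steps, eta, denoise, inn_forward, inn_inverse,
                  consistency, renoise):
    x = x_T
    for t in reversed(range(n_steps)):
        x0_hat = denoise(x, t)                      # estimate of the clean image
        z = consistency(inn_forward(x0_hat), y)     # impose data consistency
        x_guide = inn_inverse(z)                    # intermediate consistent image
        x0_hat = x0_hat - eta * (x0_hat - x_guide)  # gradient-style guidance
        x = renoise(x0_hat, t)                      # continue the reverse process
    return x
```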
|
arxiv:2501.14014
|
we present a novel approach to mitigate buffer overflow attacks using a variable record table ( vrt ). dedicated memory space is used to automatically record the base and bound information of variables extracted during runtime. we instrument the frame pointer and function - related registers to decode variable memory space in the stack and heap. we have modified the simplescalar / pisa simulator to extract the variable space of six ( 6 ) benchmark suites from mibench. we have tested 290 small c programs ( mit corpus suite ) having 22 different buffer overflow vulnerabilities in the stack and heap. experimental results show that our approach can detect buffer overflow attacks with zero instruction overhead and a memory space requirement of up to 13 kb to maintain the vrt for a program with 324 variables.
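The core bookkeeping idea (record each variable's base and bound, then validate accesses against the table) can be sketched in a few lines; this is a software illustration of the concept only, and the hardware table layout and register instrumentation of the real VRT are not reproduced:

```python
# Toy variable record table: base/bound registration plus access checking.
class VariableRecordTable:
    def __init__(self):
        self.records = {}                      # variable id -> (base, bound)

    def record(self, var, base, size):
        self.records[var] = (base, base + size)

    def check(self, var, addr):
        base, bound = self.records[var]
        if not base <= addr < bound:           # out-of-bounds access detected
            raise MemoryError(f"overflow on {var} at {addr:#x}")

vrt = VariableRecordTable()
vrt.record("buf", 0x1000, 64)
vrt.check("buf", 0x1030)                       # in bounds: passes silently
# vrt.check("buf", 0x1040)                     # would raise MemoryError
```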
|
arxiv:1909.07821
|
we summarise the main features of vincia ' s antenna - based treatment of qcd initial - and final - state showers, which includes iterated tree - level matrix - element corrections and automated evaluations of perturbative shower uncertainties. the latter are computed on the fly and are cast as a set of alternative weights for each generated event. the resulting algorithm has been made publicly available as a plug - in to the pythia 8 event generator.
|
arxiv:1609.07205
|
microscopic spin interactions on a deformed kagom \ ' { e } lattice of volborthite are investigated through magnetoelastic couplings. a negative longitudinal magnetostriction $ \ delta l < 0 $ in the $ b $ axis is observed, which depends on the magnetization $ m $ with a peculiar relation of $ \ delta l / l \ propto m ^ { 1. 3 } $. based on the exchange striction model, it is argued that the negative magnetostriction originates from a pantograph - like lattice change of the cu - o - cu chain in the $ b $ axis, and that the peculiar dependence arises from the local spin correlation. this idea is supported by dft + $ u $ calculations simulating the lattice change and a finite - size calculation of the spin correlation, indicating that the recently proposed coupled - trimer model is a plausible one.
|
arxiv:1903.04934
|
continual graph learning ( cgl ) studies the problem of learning from an infinite stream of graph data, consolidating historical knowledge, and generalizing it to future tasks ; at any point in time, only the current graph data are available. although some recent attempts have been made to handle this task, we still face two potential challenges : 1 ) most existing works only manipulate the intermediate graph embedding and ignore intrinsic properties of graphs, so it is non - trivial to differentiate the transferred information across graphs. 2 ) recent attempts take a parameter - sharing policy to transfer knowledge across time steps or progressively expand new architectures given the shifted graph distribution. learning a single model could lose discriminative information for each graph task, while the model expansion scheme suffers from high model complexity. in this paper, we point out that latent relations behind graph edges can be regarded as an invariant factor for the evolving graphs, while the statistical information of latent relations evolves. motivated by this, we design a relation - aware adaptive model, dubbed ram - cg, that consists of a relation - discovery module to explore latent relations behind edges and a task - aware masking classifier to account for the shifted statistics. extensive experiments show that ram - cg provides significant 2. 2 %, 6. 9 % and 6. 6 % accuracy improvements over the state - of - the - art results on the citationnet, ogbn - arxiv and twitch datasets, respectively.
|
arxiv:2308.08259
|
the force estimation problem in quantum metrology with an arbitrary non - markovian gaussian bath is considered. no assumptions are made on the bath spectrum and coupling strength with the probe. considering the natural global unitary evolution of both bath and probe and assuming initial global gaussian states we are able to solve the main issues of any quantum metrological problem : the best achievable precision determined by the quantum fisher information, the best initial state and the best measurement. studying the short time behavior and comparing to regular markovian dynamics we observe an increase of quantum fisher information. we emphasize that this phenomenon is due to the ability to perform measurements below the correlation time of the bath, activating non - markovian effects. this brings huge consequences for the sequential preparation - and - measurement scenario as the quantum fisher information becomes unbounded when the initial probe mean energy goes to infinity, whereas its markovian counterpart remains bounded by a constant. the long time behavior shows the complexity and potential variety of non - markovian effects, somewhere between the exponential decay characteristic of markovian dynamics and the sinusoidal oscillations characteristic of resonant narrow bands.
|
arxiv:1604.08849
|
as research on action recognition matures, the focus is shifting away from categorizing basic task - oriented actions using hand - segmented video datasets toward understanding complex goal - oriented daily human activities in real - world settings. temporally structured models would seem an obvious choice to tackle this set of problems, but so far, cases where these models have outperformed simpler unstructured bag - of - words types of models are scarce. with the increasing availability of large human activity datasets, combined with the development of novel feature coding techniques that yield more compact representations, it is time to revisit structured generative approaches. here, we describe an end - to - end generative approach, from the encoding of features to the structural modeling of complex human activities, by applying fisher vectors and temporal models to the analysis of video sequences. we systematically evaluate the proposed approach on several available datasets ( adl, mpiicooking, and breakfast ) using a variety of performance metrics. through extensive system evaluations, we demonstrate that combining compact video representations based on fisher vectors with hmm - based modeling yields very significant gains in accuracy, and that when properly trained with sufficient training samples, structured temporal models outperform unstructured bag - of - words types of models by a large margin on the tested performance metrics.
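As a minimal illustration of the HMM side of such a pipeline, one can fit a Gaussian HMM per activity class on per-frame descriptors and classify by log-likelihood; hmmlearn is an assumed backend here, and the Fisher-vector extraction itself is omitted:

```python
# One Gaussian HMM per class over per-frame feature vectors (stand-in data).
import numpy as np
from hmmlearn import hmm

frames = np.random.randn(120, 64)      # placeholder for Fisher-vector features
model = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=20)
model.fit(frames)                      # train on sequences of one class
score = model.score(frames)            # classify by the max-scoring class model
```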
|
arxiv:1508.06073
|
ab initio pseudopotentials are a linchpin of modern molecular and condensed matter electronic structure calculations. in this work, we employ multi - objective optimization to maximize pseudopotential softness while maintaining high accuracy and transferability. to accomplish this, we develop a formulation in which softness and accuracy are simultaneously maximized, with accuracy determined by the ability to reproduce all - electron energy differences between bravais lattice structures, whereupon the resulting pareto frontier is scanned for the softest pseudopotential that provides the desired accuracy in established transferability tests. we employ an evolutionary algorithm to solve the multi - objective optimization problem and apply it to generate a comprehensive table of optimized norm - conserving vanderbilt ( oncv ) pseudopotentials ( https : / / github. com / sparc - x / spms - psps ). we show that the resulting table is softer than existing tables of comparable accuracy, while more accurate than tables of comparable softness. the potentials thus afford the possibility to speed up calculations in a broad range of applications areas while maintaining high accuracy.
|
arxiv:2209.09806
|
the pluto - charon ( pc ) pair is usually thought of as a binary in the dual synchronous state, which is the endpoint of its tidal evolution. the discovery of the small circumbinary moons, styx, nix, kerberos, and hydra, placed close to the mean motions resonances ( mmrs ) 3 / 1, 4 / 1, 5 / 1, and 6 / 1 with charon, respectively, reveals a complex dynamical architecture of the system. several formation mechanisms for the pc system have been proposed. our goal is to analyse the past and current orbital dynamics of the satellite system. we study the past and current dynamics of the pc system through a large set of numerical integrations of the exact equations of motion, accounting for the gravitational interactions of the pc binary with the small moons and the tidal evolution, modelled by the constant time lag approach. we construct the stability maps in a pseudo - jacobian coordinate system. in addition, considering a more realistic model, which accounts for the zonal harmonic $ j _ 2 $ of the pluto ' s oblateness and the accreting mass of charon, we investigate the tidal evolution of the whole system. our results show that, in the chosen reference frame, the current orbits of all satellites are nearly circular, nearly planar and nearly resonant with charon that can be seen as an indicator of the convergent dissipative migration experimented by the system in the past. we verify that, under the assumption that charon completes its formation during the tidal expansion, the moons can safely cross the main mmrs, without their motions being strongly excited and consequently ejected. in the more realistic scenario proposed here, the small moons survive the tidal expansion of the pc binary, without having to invoke the hypothesis of the resonant transport. our results point out that the possibility to find additional small moons in the pc system cannot be ruled out.
|
arxiv:2112.11972
|
recently, we saw the emergence of consensus - based database systems that promise resilience against failures, strong data provenance, and federated data management. typically, these fully - replicated systems are operated on top of a primary - backup consensus protocol, which limits the throughput of these systems to the capabilities of a single replica ( the primary ). to push throughput beyond this single - replica limit, we propose concurrent consensus. in concurrent consensus, replicas independently propose transactions, thereby reducing the influence of any single replica on performance. to put this idea into practice, we propose our rcc paradigm that can turn any primary - backup consensus protocol into a concurrent consensus protocol by running many consensus instances concurrently. rcc is designed with performance in mind and requires minimal coordination between instances. furthermore, rcc also promises increased resilience against failures. we put the design of rcc to the test by implementing it in resilientdb, our high - performance resilient blockchain fabric, and comparing it with state - of - the - art primary - backup consensus protocols. our experiments show that rcc achieves up to 2. 75x higher throughput than other consensus protocols and can be scaled to 91 replicas.
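The essence of running many instances concurrently and merging their outputs deterministically can be sketched as follows; this is a hedged toy illustration of the idea, not RCC's actual coordination or failure-handling logic:

```python
# Merge per-instance decided transactions into one ledger, round by round,
# in a fixed instance order so all replicas derive the same total order.
def merge_decisions(decisions_per_instance):
    ledger, rnd = [], 0
    while any(rnd < len(d) for d in decisions_per_instance):
        for d in decisions_per_instance:        # deterministic instance order
            if rnd < len(d):
                ledger.append(d[rnd])
        rnd += 1
    return ledger

print(merge_decisions([["a1", "a2"], ["b1"], ["c1", "c2"]]))
# ['a1', 'b1', 'c1', 'a2', 'c2']
```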
|
arxiv:1911.00837
|
detection transformer ( detr ) and its variants ( detrs ) have been successfully applied to crowded pedestrian detection and achieved promising performance. however, we find that, for different degrees of crowding, the number of detrs ' queries must be adjusted manually ; otherwise, the performance degrades to varying degrees. in this paper, we first analyze the two current query generation methods and summarize four guidelines for designing an adaptive query generation method. then, we propose rank - based adaptive query generation ( raqg ) to alleviate the problem. specifically, we design a rank prediction head that can predict the rank of the lowest - confidence positive training sample produced by the encoder. based on the predicted rank, we design an adaptive selection method that can adaptively select coarse detection results produced by the encoder to generate queries. moreover, to train the rank prediction head better, we propose the soft gradient l1 loss, whose gradient is continuous and can describe the relationship between the loss value and the update of model parameters at a fine granularity. our method is simple and effective, and can be plugged into any detrs to make them query - adaptive in theory. the experimental results on the crowdhuman and citypersons datasets show that our method can adaptively generate queries for detrs and achieve competitive results. in particular, our method achieves a state - of - the - art 39. 4 % mr on the crowdhuman dataset.
|
arxiv:2310.15725
|
we present the first measurement of the black hole ( bh ) mass function for broad - line active galaxies in the local universe. using the ~ 9000 broad - line active galaxies from the fourth data release of the sloan digital sky survey, we construct a broad - line luminosity function that agrees very well with the local soft x - ray luminosity function. using standard virial relations, we then convert observed broad - line luminosities and widths into bh masses. a mass function constructed in this way has the unique capability to probe the mass region < 10 ^ 6 m _ sun, which, while insignificant in terms of total bh mass density, nevertheless may place important constraints on the mass distribution of seed bhs in the early universe. the characteristic local active bh has a mass of ~ 10 ^ 7 m _ sun radiating at 10 % of the eddington rate. the active fraction is a strong function of bh mass ; at both higher and lower masses the active mass function falls more steeply than one would infer from the distribution of bulge luminosity. the deficit of local massive radiating bhs is a well - known phenomenon, while we present the first robust measurement of a decline in the space density of active bhs at low mass.
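The virial scaling behind converting broad-line luminosities and widths into masses can be written compactly; the proportionality below is the standard form of such relations, and the exact calibration used in the paper is not reproduced here:

```latex
M_{\mathrm{BH}} \;=\; f\,\frac{R_{\mathrm{BLR}}\,\Delta v^{2}}{G},
\qquad
R_{\mathrm{BLR}} \propto L^{1/2}
\;\;\Longrightarrow\;\;
M_{\mathrm{BH}} \propto L^{1/2}\,\Delta v^{2},
```

where $\Delta v$ is the broad-line width, $L$ the line luminosity, and $f$ a virial coefficient of order unity.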
|
arxiv:0705.0020
|
the question of self - adjoint realizations of sign - indefinite second - order differential operators is discussed in terms of a model problem. operators of the type $ - \ frac { d } { dx } \ mathrm { sgn } ( x ) \ frac { d } { dx } $ are generalized to finite, not necessarily compact, metric graphs. all self - adjoint realizations are parametrized using methods from extension theory. the spectral and scattering theory of the self - adjoint realizations are studied in detail.
|
arxiv:1211.4144
|