text | source
---|---|
We analyze the conjugate gradient (CG) method with variable preconditioning for solving a linear system with a real symmetric positive definite (SPD) matrix of coefficients $A$. We assume that the preconditioner is SPD on each step, and that the condition number of the preconditioned system matrix is bounded above by a constant independent of the step number. We show that under this assumption the CG method with variable preconditioning may give no improvement over the steepest descent (SD) method. We describe the basic theory of CG methods with variable preconditioning with an emphasis on "worst case" scenarios, and provide complete proofs of all facts not available in the literature. We give a new elegant geometric proof of the SD convergence rate bound. Our numerical experiments comparing the preconditioned SD and CG methods not only support and illustrate our theoretical findings, but also reveal two surprising and potentially practically important effects. First, we analyze variable preconditioning in the form of inner-outer iterations. In previous such tests, the unpreconditioned CG inner iterations are applied to an artificial system with some fixed preconditioner as the matrix of coefficients. We test a different scenario, where the unpreconditioned CG inner iterations solve linear systems with the original system matrix $A$. We demonstrate that the CG-SD inner-outer iterations perform as well as the CG-CG inner-outer iterations in these tests. Second, we show that variable preconditioning may surprisingly accelerate the SD, and thus the CG, convergence. Specifically, we compare CG methods using two-grid preconditioning with fixed and randomly chosen coarse grids, and observe that the fixed-preconditioner method is twice as slow as the method with random preconditioning.
|
arxiv:math/0605767
|
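The preconditioned steepest descent iteration that the abstract above compares against can be sketched in a few lines. This is a generic textbook sketch, not the paper's inner-outer setup: the small SPD matrix, right-hand side, and the identity used as a stand-in preconditioner are all illustrative assumptions.

```python
import numpy as np

def preconditioned_sd(A, b, x0, M_solve, tol=1e-10, max_iter=500):
    """Preconditioned steepest descent for an SPD matrix A.

    M_solve(r) applies the (possibly step-dependent) preconditioner
    inverse to the residual r; passing a different M_solve each call
    is what 'variable preconditioning' means here.
    """
    x = x0.copy()
    for _ in range(max_iter):
        r = b - A @ x                    # current residual
        if np.linalg.norm(r) < tol:
            break
        z = M_solve(r)                   # preconditioned residual
        alpha = (r @ z) / (z @ (A @ z))  # exact line search along z
        x = x + alpha * z
    return x

# Illustration: a tiny SPD system with the identity as preconditioner.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = preconditioned_sd(A, b, np.zeros(2), M_solve=lambda r: r)
```

With a fixed SPD preconditioner `M_solve` this is classical preconditioned SD; the paper's point concerns what can happen when `M_solve` changes from step to step.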
In a recent work, Dancs and He found an Euler-type formula for $\zeta(2n+1)$, $n$ being a positive integer, which contains a series they could not reduce to a finite closed form. This open problem reveals a greater complexity in comparison to $\zeta(2n)$, which is a rational multiple of $\pi^{2n}$. For the Dirichlet beta function, things are "inverse": $\beta(2n+1)$ is a rational multiple of $\pi^{2n+1}$ and no closed-form expression is known for $\beta(2n)$. Here in this work, I modify the Dancs-He approach in order to derive an Euler-type formula for $\beta(2n)$, including $\beta(2) = G$, Catalan's constant. I also convert the resulting series into zeta series, which yields new exact closed-form expressions for a class of zeta series involving $\beta(2n)$ and a finite number of odd zeta values. A closed-form expression for a certain zeta series is also conjectured.
|
arxiv:0910.5004
|
The ascent and descent of the Mittag-Leffler property were instrumental in proving Zariski locality of the notion of an (infinite-dimensional) vector bundle by Raynaud and Gruson in \cite{rg}. More recently, relative Mittag-Leffler modules were employed in the theory of (infinitely generated) tilting modules and the associated quasi-coherent sheaves, \cite{ah}, \cite{hst}. Here, we study the ascent and descent along flat and faithfully flat homomorphisms for relative versions of the Mittag-Leffler property. In particular, we prove the Zariski locality of the notion of a locally $f$-projective quasi-coherent sheaf for all schemes, and, for each $n \geq 1$, of the notion of an $n$-Drinfeld vector bundle for all locally Noetherian schemes.
|
arxiv:2208.00869
|
This work presents a novel formulation and numerical strategy for the simulation of geometrically nonlinear structures. First, a non-canonical Hamiltonian (Poisson) formulation is introduced by including the dynamics of the stress tensor. This framework is developed for von Kármán nonlinearities in beams and plates, as well as finite-strain elasticity with Saint-Venant material behavior. In the case of plates, both negligible and non-negligible membrane inertia are considered. For the former case, the two-dimensional elasticity complex is leveraged to express the dynamics in terms of the Airy stress function. The finite element discretization employs a mixed approach, combining a conforming approximation for the displacement and velocity fields with a discontinuous stress tensor representation. A staggered, linearly implicit time integration scheme is proposed, establishing connections with existing explicit-implicit energy-preserving methods. The stress degrees of freedom are statically condensed, reducing the computational complexity to solving a system with a positive definite matrix. The methodology is validated through numerical experiments on the Duffing oscillator, a von Kármán beam, and a column undergoing finite-strain elasticity. Comparisons with a fully implicit energy-preserving method and the explicit Newmark scheme demonstrate that the proposed approach achieves superior accuracy while maintaining energy stability. Additionally, it enables larger time steps than explicit schemes and exhibits computational efficiency comparable to the leapfrog method.
|
arxiv:2503.04695
|
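The leapfrog scheme that the abstract above benchmarks against is easy to illustrate on the undamped, unforced Duffing oscillator, whose energy a symplectic integrator should keep nearly constant. This is a generic sketch under assumed coefficients, not the paper's staggered scheme or its test cases.

```python
import numpy as np

def leapfrog_duffing(x0, v0, dt, n_steps, alpha=1.0, beta=1.0):
    """Leapfrog (velocity-Verlet) integration of the undamped Duffing
    oscillator x'' + alpha*x + beta*x**3 = 0."""
    accel = lambda x: -(alpha * x + beta * x**3)
    xs, vs = [x0], [v0]
    x, v = x0, v0
    a = accel(x)
    for _ in range(n_steps):
        v_half = v + 0.5 * dt * a        # half kick
        x = x + dt * v_half              # drift
        a = accel(x)
        v = v_half + 0.5 * dt * a        # half kick
        xs.append(x)
        vs.append(v)
    return np.array(xs), np.array(vs)

def energy(x, v, alpha=1.0, beta=1.0):
    """Total mechanical energy of the undamped Duffing oscillator."""
    return 0.5 * v**2 + 0.5 * alpha * x**2 + 0.25 * beta * x**4

xs, vs = leapfrog_duffing(1.0, 0.0, dt=0.01, n_steps=10_000)
drift = abs(energy(xs[-1], vs[-1]) - energy(xs[0], vs[0]))
```

The energy drift stays bounded (it oscillates at order dt^2 rather than growing), which is the behavior energy-preserving and symplectic schemes are compared on.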
The difficulty of textual style transfer lies in the lack of parallel corpora. Numerous advances have been proposed for unsupervised generation. However, significant problems remain with the automatic evaluation of style transfer tasks. Based on the summaries of Pang and Gimpel (2018) and Mir et al. (2019), style transfer evaluations rely on three criteria: style accuracy of transferred sentences, content similarity between original and transferred sentences, and fluency of transferred sentences. We elucidate the problematic current state of style transfer research. Given that current tasks do not represent real use cases of style transfer, the current automatic evaluation approach is flawed. This discussion aims to bring researchers to think about the future of style transfer and style transfer evaluation research.
|
arxiv:1910.03747
|
We analyze a crossover between ergodic and non-ergodic regimes in an interacting spin chain with a dilute density of impurities, defined as spins with a strong local field. The dilute limit allows us to unravel some finite-size effects and propose a mechanism for the delocalization of these impurities in the thermodynamic limit. In particular, we show that impurities will always relax by exchanging energy with the rest of the chain. The relaxation rate only weakly depends on the impurity density and decays exponentially, up to logarithmic corrections, with the field strength. We connect the relaxation to fast operator spreading and show that the same mechanism destabilizes the recursive construction of local integrals of motion at any impurity density. In the high-field limit, impurities will appear to be localized, and the system will be non-ergodic, over a wide range of system sizes. However, this is a transient effect, and the eventual delocalization can be understood in terms of a flowing localization length.
|
arxiv:2105.09348
|
The {\em atom-bond connectivity (ABC) index} is a degree-based graph topological index that has found chemical applications. The problem of completely characterizing trees with minimal ABC index is still open. In~\cite{d-sptmabci-2014}, it was shown that trees with minimal ABC index do not contain so-called {\em $B_k$-branches} with $k \geq 5$, and that they do not have more than four $B_4$-branches. Our main results here reveal that the numbers of $B_1$- and $B_2$-branches are also bounded from above by small fixed constants. Namely, we show that trees with minimal ABC index contain no more than four $B_1$-branches and no more than eleven $B_2$-branches.
|
arxiv:1501.05752
|
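For reference, the ABC index discussed above is straightforward to compute from vertex degrees: each edge $uv$ contributes $\sqrt{(d_u + d_v - 2)/(d_u d_v)}$. The helper below and the star-graph example are illustrative, not taken from the cited work.

```python
import math

def abc_index(adj):
    """Atom-bond connectivity index of a simple undirected graph,
    given as an adjacency dict {vertex: set of neighbours}."""
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    total = 0.0
    seen = set()
    for u, nbrs in adj.items():
        for v in nbrs:
            if (v, u) in seen:      # edge already counted from the other end
                continue
            seen.add((u, v))
            total += math.sqrt((deg[u] + deg[v] - 2) / (deg[u] * deg[v]))
    return total

# Star K_{1,3}: centre 0 joined to leaves 1, 2, 3.
# Each of the three edges contributes sqrt((3 + 1 - 2)/(3 * 1)) = sqrt(2/3).
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
```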
A sum-network is a directed acyclic network in which all terminal nodes demand the "sum" of the independent information observed at the source nodes. Many characteristics of the well-studied multiple-unicast network communication problem also hold for sum-networks, due to a known reduction between instances of these two problems. Our main result is that, unlike for a multiple-unicast network, the coding capacity of a sum-network depends on the message alphabet. We demonstrate this using a construction procedure and show that the choice of message alphabet can reduce the coding capacity of a sum-network from $1$ to close to $0$.
|
arxiv:1504.05618
|
With the rapid advancement of quantum computing technology, there is a growing need for new debugging tools for quantum programs. Recent research has highlighted the potential of assertions for debugging quantum programs. In this paper, we investigate assertions in quantum ternary systems, which are more challenging than those in quantum binary systems due to the complexity of ternary logic. We propose quantum ternary circuit designs to assert classical, entanglement, and superposition states, specifically geared toward debugging quantum ternary programs.
|
arxiv:2312.15309
|
Using the instant-form dynamics of Poincaré-invariant quantum mechanics and the modified relativistic impulse approximation proposed previously, we calculate the asymptotics of electromagnetic form factors for the deuteron considered as a two-nucleon system. We show that present experiments on elastic $ed$ scattering have reached the asymptotic regime. The possible range of momentum transfer at which quark degrees of freedom could be seen in future JLab experiments is estimated. An explicit relation between the behavior of the deuteron wave function at $r = 0$ and the form factor asymptotics is obtained. Conditions on wave functions that yield the asymptotics predicted by QCD and quark counting rules are formulated.
|
arxiv:0801.2868
|
, although the methods of model theory focus more on logical considerations than those fields. The set of all models of a particular theory is called an elementary class; classical model theory seeks to determine the properties of models in a particular elementary class, or determine whether certain classes of structures form elementary classes. The method of quantifier elimination can be used to show that definable sets in particular theories cannot be too complicated. Tarski established quantifier elimination for real-closed fields, a result which also shows the theory of the field of real numbers is decidable. He also noted that his methods were equally applicable to algebraically closed fields of arbitrary characteristic. A modern subfield developing from this is concerned with o-minimal structures. Morley's categoricity theorem, proved by Michael D. Morley, states that if a first-order theory in a countable language is categorical in some uncountable cardinality, i.e. all models of this cardinality are isomorphic, then it is categorical in all uncountable cardinalities. A trivial consequence of the continuum hypothesis is that a complete theory with less than continuum many nonisomorphic countable models can have only countably many. Vaught's conjecture, named after Robert Lawson Vaught, says that this is true even independently of the continuum hypothesis. Many special cases of this conjecture have been established. == Recursion theory == Recursion theory, also called computability theory, studies the properties of computable functions and the Turing degrees, which divide the uncomputable functions into sets that have the same level of uncomputability. Recursion theory also includes the study of generalized computability and definability. Recursion theory grew from the work of Rózsa Péter, Alonzo Church and Alan Turing in the 1930s, which was greatly extended by Kleene and Post in the 1940s.
Classical recursion theory focuses on the computability of functions from the natural numbers to the natural numbers. The fundamental results establish a robust, canonical class of computable functions with numerous independent, equivalent characterizations using Turing machines, λ-calculus, and other systems. More advanced results concern the structure of the Turing degrees and the lattice of recursively enumerable sets. Generalized recursion theory extends the ideas of recursion theory to computations that are no longer necessarily finite. It includes the study of computability in higher types as well as areas such as hyperarith
|
https://en.wikipedia.org/wiki/Mathematical_logic
|
performed with different techniques. Many problems can be solved by both direct algorithms and iterative approaches. For example, the eigenvectors of a square matrix can be obtained by finding a sequence of vectors x_n converging to an eigenvector when n tends to infinity. To choose the most appropriate algorithm for each specific problem, it is important to determine both the effectiveness and precision of all the available algorithms. The domain studying these matters is called numerical linear algebra. As with other numerical situations, two main aspects are the complexity of algorithms and their numerical stability. Determining the complexity of an algorithm means finding upper bounds or estimates of how many elementary operations such as additions and multiplications of scalars are necessary to perform some algorithm, for example, multiplication of matrices. Calculating the matrix product of two n-by-n matrices using the definition given above needs n^3 multiplications, since for any of the n^2 entries of the product, n multiplications are necessary. The Strassen algorithm outperforms this "naive" algorithm; it needs only n^2.807 multiplications. Theoretically faster but impractical matrix multiplication algorithms have been developed, as have speedups to this problem using parallel algorithms or distributed computation systems such as MapReduce. In many practical situations, additional information about the matrices involved is known. An important case is sparse matrices, that is, matrices whose entries are mostly zero. There are specifically adapted algorithms for, say, solving linear systems Ax = b for sparse matrices A, such as the conjugate gradient method. An algorithm is, roughly speaking, numerically stable if little deviations in the input values do not lead to big deviations in the result. For example, one can calculate the inverse of a matrix by computing its adjugate matrix: A^{-1} = adj(A) / det(A). However, this may lead to significant rounding errors if the determinant of the matrix is very small. The norm of a matrix can be used to capture the conditioning of linear algebraic problems, such as computing a matrix's inverse. == Decomposition == There are several methods to render matrices into a more easily accessible form. They are generally referred to as matrix decomposition or matrix factorization techniques. These techniques are of interest because
|
https://en.wikipedia.org/wiki/Matrix_(mathematics)
|
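The adjugate formula from the excerpt above, A^{-1} = adj(A)/det(A), can be tried directly. The function below is a plain 2×2 sketch for illustration: it works fine on a well-conditioned example, but, as the text notes, dividing by a tiny determinant amplifies rounding error, which is why production code uses factorization-based solvers instead.

```python
import numpy as np

def inverse_via_adjugate(a):
    """Inverse of a 2x2 matrix via A^{-1} = adj(A) / det(A).

    Correct symbolically, but numerically ill-advised when det(A)
    is very small relative to the entries of A.
    """
    (p, q), (r, s) = a
    det = p * s - q * r
    adj = np.array([[s, -q],
                    [-r, p]])      # adjugate (transpose of cofactor matrix)
    return adj / det

a = np.array([[4.0, 7.0],
              [2.0, 6.0]])         # det = 10: comfortably well conditioned
inv = inverse_via_adjugate(a)
```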
When the electric conductance of a nano-sized metal is measured at low temperatures, it often exhibits complex but reproducible patterns as a function of external magnetic fields, called quantum fingerprints in electric conductance. Such complex patterns are due to quantum-mechanical interference of conduction electrons; when thermal disturbance is feeble and coherence of the electrons extends all over the sample, the quantum interference pattern reflects microscopic structures, such as crystalline defects and the shape of the sample, giving rise to complicated interference. Although the interference pattern carries such microscopic information, it looks so random that it has not been analysed. Here we show that machine learning allows us to decipher quantum fingerprints; fingerprint patterns in magneto-conductance are shown to be transcribed into spatial images of electron wave function intensities (WIs) in a sample by using generative machine learning. The output WIs reveal quantum interference states of conduction electrons, as well as sample shapes. The present result augments the human ability to identify quantum states, and it should allow microscopy of quantum nanostructures in materials by making use of quantum fingerprints.
|
arxiv:2206.05882
|
to KT, LG Telecom, and SK Telecom. The companies were supposed to invest $44 million in the project, which was to be completed in 2015. === Geolocation === Wi-Fi positioning systems use known positions of Wi-Fi hotspots to identify a device's location. It is used when GPS isn't suitable due to issues like signal interference or slow satellite acquisition. This includes assisted GPS, urban hotspot databases, and indoor positioning systems. Wi-Fi positioning relies on measuring signal strength (RSSI) and fingerprinting. Parameters like SSID and MAC address are crucial for identifying access points. The accuracy depends on nearby access points in the database. Signal fluctuations can cause errors, which can be reduced with noise-filtering techniques. For low precision, integrating Wi-Fi data with geographical and time information has been proposed. The Wi-Fi RTT capability introduced in IEEE 802.11mc allows for positioning based on round-trip-time measurement, an improvement over the RSSI method. The IEEE 802.11az standard promises further improvements in geolocation accuracy. === Motion detection === Wi-Fi sensing is used in applications such as motion detection and gesture recognition. == Operational principles == Wi-Fi stations communicate by sending each other data packets, blocks of data individually sent and delivered over radio on various channels. As with all radio, this is done by the modulation and demodulation of carrier waves. Different versions of Wi-Fi use different techniques: 802.11b uses direct-sequence spread spectrum on a single carrier, whereas 802.11a, Wi-Fi 4, 5 and 6 use orthogonal frequency-division multiplexing. Channels are used half duplex and can be time-shared by multiple networks. Any packet sent by one computer is locally received by stations tuned to that channel, even if that information is intended for just one destination. Stations typically ignore information not addressed to them.
The use of the same channel also means that the data bandwidth is shared, so, for example, available throughput to each device is halved when two stations are actively transmitting. As with other IEEE 802 LANs, stations come programmed with a globally unique 48-bit MAC address. The MAC addresses are used to specify both the destination and the source of each data packet. On the reception of a transmission, the receiver uses the destination address to determine whether the transmission is relevant to the station or should be ignored. A scheme known as carrier-sense multiple access with collision
|
https://en.wikipedia.org/wiki/Wi-Fi
|
The popularity of online fashion shopping continues to grow, and the ability to offer effective recommendations to customers is becoming increasingly important. In this work, we focus on the Fashion Outfits Challenge, part of the SIGIR 2022 Workshop on eCommerce. The challenge is centered around the fill-in-the-blank (FITB) task, which implies predicting the missing outfit item, given an incomplete outfit and a list of candidates. In this paper, we focus on applying Siamese networks to the task. More specifically, we explore how combining information from multiple modalities (textual and visual) impacts the performance of the model on the task. We evaluate our model on the test split provided by the challenge organizers and on a test split with gold assignments that we created during the development phase. We find that using visual, as well as combined visual and textual, data demonstrates promising results on the task. We conclude by suggesting directions for further improvement of our method.
|
arxiv:2207.10355
|
Providing fault tolerance for long-running GPU-intensive jobs requires application-specific solutions, and often involves saving the state of complex data structures spread among many graphics libraries. This work describes a mechanism for transparent, GPU-independent checkpoint-restart of 3D graphics. The approach is based on a record-prune-replay paradigm: all OpenGL calls relevant to the graphics driver state are recorded; calls not relevant to the internal driver state as of the last graphics frame prior to checkpoint are discarded; and the remaining calls are replayed on restart. A previous approach for OpenGL 1.5, based on a shadow device driver, required more than 78,000 lines of OpenGL-specific code. In contrast, the new approach, based on record-prune-replay, is used to implement the same case in just 4,500 lines of code. The speed of this approach varies between 80 per cent and nearly 100 per cent of the speed of native hardware acceleration for OpenGL 1.5, as measured when running the ioquake3 game under Linux. This approach has also been extended to demonstrate checkpointing of OpenGL 3.0 for the first time, with a demonstration for PyMOL, for molecular visualization.
|
arxiv:1312.6650
|
In this paper, we examine the statistical soundness of comparative assessments within the field of recommender systems in terms of reliability and human uncertainty. From a controlled experiment, we gain the insight that users provide different ratings on the same items when repeatedly asked. This volatility of user ratings justifies the assumption of using probability densities instead of single rating scores. As a consequence, the well-known accuracy metrics (e.g. MAE, MSE, RMSE) themselves yield a density that emerges from the convolution of all rating densities. When two different systems produce different RMSE distributions with significant intersection, there exists a probability of error for each possible ranking. As an application, we examine possible ranking errors of the Netflix Prize. We are able to show that all top rankings are more or less subject to high probabilities of error, and that some rankings may be deemed to be caused by mere chance rather than system quality.
|
arxiv:1706.08866
|
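The ranking-error probability described above can be sketched with a small Monte Carlo experiment: redraw "true" ratings from an assumed uncertainty density and count how often the RMSE ordering of two systems flips. The Gaussian rating model, the two systems' predictions, and all parameters below are illustrative assumptions, not the paper's Netflix analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def rmse(pred, truth):
    """Root-mean-square error between predictions and one ground truth."""
    return np.sqrt(np.mean((pred - truth) ** 2))

def ranking_error_prob(pred_a, pred_b, true_mean, true_std, n_draws=2000):
    """Monte Carlo estimate of P(RMSE_A > RMSE_B) when user ratings are
    modelled as Gaussian densities around each item's mean rating."""
    worse = 0
    for _ in range(n_draws):
        truth = rng.normal(true_mean, true_std)  # one plausible ground truth
        if rmse(pred_a, truth) > rmse(pred_b, truth):
            worse += 1
    return worse / n_draws

true_mean = np.array([4.0, 3.0, 5.0, 2.0, 4.0])
pred_a = true_mean + 0.10     # system A: small constant bias
pred_b = true_mean + 0.12     # system B: nominally worse, slightly larger bias
p = ranking_error_prob(pred_a, pred_b, true_mean, true_std=0.5)
```

Even though system A is nominally better, `p` comes out well away from 0: with rating noise of this size the two RMSE densities overlap heavily, so the observed ranking carries a substantial probability of error.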
This contribution gives a short review of recent theoretical advances in most topics of nuclear cluster physics, concentrating, however, on $\alpha$-particle clustering. Along the route, the point of view will be critical, mentioning not only progress but also failures and open problems.
|
arxiv:1811.11580
|
We study the differential identities of the algebra $M_k(F)$ of $k \times k$ matrices over a field $F$ of characteristic zero when its full Lie algebra of derivations, $L = \mbox{Der}(M_k(F))$, acts on it. We determine a set of two generators of the ideal of differential identities of $M_k(F)$ for $k \geq 2$. Moreover, we obtain the exact values of the corresponding differential codimensions and differential cocharacters. Finally, we prove that, unlike the ordinary case, the variety of differential algebras with $L$-action generated by $M_k(F)$ has almost polynomial growth for all $k \geq 2$.
|
arxiv:2403.09337
|
Physical observation is made relative to a reference frame, which is essentially a quantum system. Thus, a quantum system must be described relative to a quantum reference frame (QRF). Further requirements on a QRF include using only relational observables and not assuming the existence of an external reference frame. To address these requirements, two approaches have been proposed in the literature. The first is an operational approach (F. Giacomini, et al., Nat. Comm. 10:494, 2019), which focuses on the quantization of transformations between QRFs. The second approach attempts to derive the quantum transformation between QRFs from first principles (A. Vanrietvelde, et al., \textit{Quantum} 4:225, 2020). This first-principles approach describes physical systems as symmetry-induced constrained Hamiltonian systems. The Dirac quantization of such systems, before removing redundancy, is interpreted as a perspective-neutral description. Then, a systematic redundancy reduction procedure is introduced to derive the description from the perspective of a QRF. The first-principles approach recovers some of the results of the operational approach, but does not yet include an important part of a quantum theory: the measurement theory. This paper is intended to bridge that gap. We show that the von Neumann quantum measurement theory can be embedded into the perspective-neutral framework. This allows us to successfully recover the results found in the operational approach, with the advantage that the transformation operator can be derived from first principles. In addition, the formulation presented here reveals several interesting conceptual insights. For instance, the projection operation in a measurement needs to be performed after redundancy reduction. These results represent one step forward in understanding how quantum measurement should be formulated when the reference frame is also a quantum system.
|
arxiv:1911.04903
|
The most well-known tool for studying contextuality in quantum computation is the $n$-qubit stabilizer state tableau representation. We provide an extension that describes not only the quantum state, but is also outcome deterministic. The extension enables a value assignment to exponentially many Pauli observables, yet remains quadratic in both memory and computational complexity. Furthermore, we show that the mechanisms employed for contextuality and measurement disturbance are wholly separate. The model will be useful for investigating the role of contextuality in $n$-qubit quantum computation.
|
arxiv:2202.05081
|
We formulate conjectural blowup formulas for Segre and Verlinde numbers on moduli spaces of sheaves on projective surfaces $S$ with $p_g(S) > 0$ and $b_1(S) = 0$. As applications, we give a conjectural formula for the Donaldson invariants of $S$ in arbitrary rank, as well as for the $K$-theoretic Donaldson invariants, and some Donaldson invariants with fundamental matter.
|
arxiv:2109.13144
|
We study in simple terms the role of feedback in establishing the scaling relations of low-surface-brightness (LSB) and dwarf galaxies with stellar masses in the range 6x10^5 < M* < 3x10^10 Msun. These galaxies, as measured from SDSS and in the Local Group, show tight correlations of internal velocity, metallicity and surface brightness (or radius) with M*. They define a fundamental line distinguishing them from the brighter galaxies of high surface brightness and metallicity. The idealized model assumes spherical collapse of CDM haloes to virial equilibrium and angular-momentum conservation. The relations for bright galaxies are reproduced by assuming that M* is a constant fraction of the halo mass M. The upper bound to the low-luminosity LSBs coincides with the virial velocity of haloes in which supernova feedback could significantly suppress star formation, V < 100 km/s (Dekel & Silk 1986). We argue that the energy fed to the gas obeys E_SN \propto M* despite the radiative losses, and equate it with the binding energy of the gas to obtain M*/M \propto V^2. This idealized model provides surprisingly good fits to the scaling relations of low-luminosity LSBs and dwarfs, which indicates that supernova feedback had a primary role in determining the fundamental line. The apparent lower bound for galaxies at V ~ 10 km/s may be due to the cooling barrier at T ~ 10^4 K. Some fraction of the dark haloes may show no stars due to complete gas removal, either by supernova winds from neighboring galaxies or by radiative feedback after cosmological reionization at z_ion. Radiative feedback may also explain the distinction between dwarf spheroidals (dE) and irregulars (dI), where the dEs with V < 30 km/s form stars before z_ion and are then cleaned out of gas, while the dIs with V > 30 km/s retain gas-rich discs with feedback-regulated star formation.
|
arxiv:astro-ph/0210454
|
The fabrication of high-quality organic-inorganic hybrid halide perovskite layers is the key prerequisite for the realization of highly efficient photon energy harvesting and electric energy conversion in the related solar cells. In this article, we report a novel fabrication technique for CH3NH3PbI3 layers based on a high-temperature chemical vapor reaction. CH3NH3PbI3 layers have been prepared by the reaction of PbI2 films, deposited by pulsed laser deposition, with CH3NH3I vapor at various temperatures from 160 °C to 210 °C. X-ray diffraction patterns confirm the formation of the pure phase, and photoluminescence spectra show a strong peak at around 760 nm. Scanning electron microscopy images confirm a significantly increased average grain size, from nearly 1 μm at the low reaction temperature of 160 °C to more than 10 μm at the high reaction temperature of 200 °C. Solar cells were fabricated, and a short-circuit current density of 15.75 mA/cm2, an open-circuit voltage of 0.49 V and a fill factor of 71.66% have been obtained.
|
arxiv:1708.02935
|
We investigate the effect of a constant static bias force on the dynamically induced shape morphing of a pre-buckled bistable beam, focusing on the beam's ability to change its vibration to be near different stable states under harmonic excitation. Our study explores four categories of oscillatory motions in the parameter space: switching, reverting, vacillating, and intra-well. We aim to achieve transitions between stable states of the pre-buckled bistable beam with minimal excitation amplitude. Our findings demonstrate the synergistic effects between dynamic excitation and static bias force, showing a broadening of the non-fractal region for switching behavior (i.e., switching from the first stable state to the second stable state) in the parameter space. This study advances the understanding of the dynamics of key structural components for multi-stable mechanical metamaterials, offering new possibilities for novel designs in adaptive applications.
|
arxiv:2409.18942
|
Notwithstanding the growing presence of AGVs in industry, there is a lack of research on multi-wheeled AGVs, which offer higher maneuverability and space efficiency. In this paper, we present generalized path continuity conditions as a continuation of previous research done for vehicles with more constrained kinematic capabilities. We propose a novel approach for analytically defining the various kinematic modes (motion modes) that AGVs with multiple steer and drive wheels can utilize. This approach enables deriving the vehicle kinematic equations based on the vehicle configuration and its constraints, the path shape, and the corresponding motion mode. Finally, we derive general continuity conditions for paths that multi-wheeled AGVs can follow, and show through examples how they can be utilized in layout design methods.
|
arxiv:2103.01619
|
We study base-point-freeness for big and nef line bundles on hyperkähler manifolds of generalized Kummer type: for $n \in \{2, 3, 4\}$, we show that, generically in all but a finite number of irreducible components of the moduli space of polarized $\mathrm{Kum}^n$-type varieties, the polarization is base-point-free. We also prove generic base-point-freeness in the moduli space in all dimensions if the polarization has divisibility one.
|
arxiv:2211.11485
|
Szlachányi showed that bialgebroids can be characterised using skew monoidal categories. The characterisation reduces the amount of data, structure, and properties required to define them. Lack and Street provide a bicategorical account of that same fact; they characterise quantum categories in terms of skew monoidal structures internal to a monoidal bicategory. A quantum category is an opmonoidal monad on an enveloping monoidale $R^\circ \otimes R$ in a monoidal bicategory. In a previous paper, we characterised opmonoidal arrows on enveloping monoidales as a simpler structure called an oplax action. This is the second paper based on the author's PhD thesis. Here, motivated by the fact that opmonoidal monads are monads in the bicategory of monoidales, opmonoidal arrows, and opmonoidal cells, we prove that right skew monoidales are "monads of oplax actions". To do so, we arrange oplax actions as the 1-simplices of a simplicial object in $\mathsf{Cat}$. In nice cases this simplicial object ought to be thought of as a bicategory whose arrows are oplax actions; that is to say, it is weakly equivalent to the nerve of a bicategory. We define monads of oplax actions as simplicial maps out of the Catalan simplicial set and prove that these are in bijective correspondence with right skew monoidales whose unit has a right adjoint, with no assumptions required on the ambient monoidal bicategory.
|
arxiv:1805.09961
|
in this partly expository paper, we study the set a of groups of orientation - preserving homeomorphisms of the circle s ^ 1 which do not admit non - abelian free subgroups. we use classical results about homeomorphisms of the circle and elementary dynamical methods to derive various new and old results about the groups in a. of the known results, we include some results from a family of results of beklaryan and malyutin, and we also give a new proof of a theorem of margulis. our primary new results include a detailed classification of the solvable subgroups of r. thompson ' s group t.
|
arxiv:0910.0218
|
an algebro - geometric approach to representations of the sklyanin algebra is proposed. to each 2 \ times 2 quantum l - operator an algebraic curve parametrizing its possible vacuum states is associated. this curve is called the vacuum curve of the l - operator. an explicit description of the vacuum curve for quantum l - operators of the integrable spin chain of xyz type with arbitrary spin $ \ ell $ is given. the curve is highly reducible. for half - integer $ \ ell $ it splits into $ \ ell + { 1 / 2 } $ components isomorphic to an elliptic curve. for integer $ \ ell $ it splits into $ \ ell $ elliptic components and one rational component. the action of elements of the l - operator on functions on the vacuum curve leads to a new realization of the sklyanin algebra by difference operators in two variables restricted to an invariant functional subspace.
|
arxiv:solv-int/9801022
|
directional transforms have recently raised a lot of interest thanks to their numerous applications in signal compression and analysis. in this letter, we introduce a generalization of the discrete fourier transform, called steerable dft ( sdft ). since the dft is used in numerous fields, it may be of interest in a wide range of applications. moreover, we also show that the sdft is highly related to other well - known transforms, such as the fourier sine and cosine transforms and the hilbert transforms.
|
arxiv:1703.05022
|
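The abstract above notes that the steerable DFT is closely related to the Fourier sine and cosine transforms. As an illustrative anchor for that statement (this shows only the standard DFT's sine/cosine decomposition, not the SDFT construction itself), a minimal numpy check:

```python
import numpy as np

# For a real signal x, the standard DFT already splits into a cosine part
# (real component) and a sine part (negative imaginary component).
n = 8
x = np.arange(n, dtype=float)
k = np.arange(n)
X = np.fft.fft(x)  # numpy uses the e^{-2*pi*i*k*m/n} sign convention

# Cosine and sine "transforms" built directly from the DFT basis.
C = np.array([np.sum(x * np.cos(2 * np.pi * k * m / n)) for m in range(n)])
S = np.array([np.sum(x * np.sin(2 * np.pi * k * m / n)) for m in range(n)])

assert np.allclose(X.real, C)
assert np.allclose(X.imag, -S)
```

Any directional or steerable generalization must preserve this kind of decomposition, which is why the sine/cosine transforms appear naturally in the SDFT's analysis.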
a poisson autoregressive ( par ) model accounting for discreteness and autocorrelation of count time series data is typically estimated in the state - space modelling framework through the extended kalman filter. however, because of the complex dependencies in count time series, estimation becomes more challenging. par is viewed as an additive model and estimated using a hybrid of cubic smoothing splines and maximum likelihood estimation ( mle ) in the backfitting framework. simulation studies show that this estimation method is comparable to or better than par estimated in the state - space context, especially with larger count values. however, as [ 2 ] formulated par for stationary counts, both estimation procedures underestimate parameters in nearly nonstationary models. the flexibility of the additive model has two benefits, though : robust estimation in the presence of temporary structural change, and viability to integrate the par model into a more complex model structure. we further generalized the par ( p ) model to multiple time series of counts and illustrated it with indicators from the financial markets.
|
arxiv:2104.13520
|
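To make the PAR model in the abstract above concrete, here is a minimal simulation sketch. It uses one common PAR/INGARCH(1,1)-style formulation (an assumption on our part; the exact model of [2] may differ): counts are conditionally Poisson with an intensity that regresses on its own past and the previous count.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_par(n, omega=1.0, alpha=0.3, beta=0.4):
    """Simulate a Poisson autoregressive count series (illustrative form):
    y_t | past ~ Poisson(lam_t),  lam_t = omega + alpha*lam_{t-1} + beta*y_{t-1}.
    Stationarity requires alpha + beta < 1; the stationary mean is
    omega / (1 - alpha - beta)."""
    lam = omega / (1.0 - alpha - beta)   # start at the stationary mean
    y = np.empty(n, dtype=int)
    y_prev = rng.poisson(lam)
    for t in range(n):
        lam = omega + alpha * lam + beta * y_prev
        y[t] = rng.poisson(lam)
        y_prev = y[t]
    return y

counts = simulate_par(5000)
# The sample mean should fall near the stationary mean, about 3.33 here.
print(counts.mean())
```

Pushing alpha + beta toward 1 in this sketch reproduces the "nearly nonstationary" regime where the abstract reports parameter underestimation.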
assuming that a horizontal abelian ( gauge ) symmetry is at the origin of texture zeros in the fermion mass matrices, we show how realistic mass patterns can be generated in the presence of scalar fields whose vacuum expectation value breaks the extra $ u ( 1 ) $ symmetry. in the simplest scenario with just one pair of singlet fields and under the assumption of l - r symmetry one obtains quark mass matrices { \ it \ ` a la fritzsch }. the $ u ( 1 ) $ symmetry can be made anomaly free by the green - schwarz mechanism, in which case the canonical unification of the gauge couplings emerges as its byproduct. the generation of neutrino masses requires two extra heavy scalar ( higgs ) fields to determine the texture structure of the right - handed neutrino mass matrix ; otherwise the latter will contain a hierarchy of scales.
|
arxiv:hep-ph/9506301
|
we have performed a long - term measurement of the neutron flux with the high efficiency neutron spectrometry array hensa in the hall a of the canfranc underground laboratory. the hall a measurement campaign lasted from october 2019 to march 2021, demonstrating an excellent stability of the hensa setup. preliminary results on the neutron flux from this campaign are presented for the first time. in phase 1 ( 113 live days ) a total neutron flux of 1. 66 ( 2 ) $ \ times $ 10 $ ^ { - 5 } $ cm $ ^ { - 2 } $ s $ ^ { - 1 } $ is obtained. our results are in good agreement with those from our previous shorter measurement where a reduced experimental setup was employed.
|
arxiv:2111.09202
|
existing methods for explainable artificial intelligence ( xai ), including popular feature importance measures such as sage, are mostly restricted to the batch learning scenario. however, machine learning is often applied in dynamic environments, where data arrives continuously and learning must be done in an online manner. therefore, we propose isage, a time - and memory - efficient incrementalization of sage, which is able to react to changes in the model as well as to drift in the data - generating process. we further provide efficient feature removal methods that break ( interventional ) and retain ( observational ) feature dependencies. moreover, we formally analyze our explanation method to show that isage adheres to similar theoretical properties as sage. finally, we evaluate our approach in a thorough experimental analysis based on well - established data sets and data streams with concept drift.
|
arxiv:2303.01181
|
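The incremental idea behind iSAGE described above can be illustrated with a toy tracker (a drastic simplification we sketch here, not the paper's algorithm): each arriving sample updates a per-feature importance estimate by measuring the loss increase when that feature is replaced by a value drawn from a reservoir of past observations (interventional removal), smoothed with an exponential moving average so the estimate can follow drift.

```python
import numpy as np

rng = np.random.default_rng(1)

class IncrementalImportance:
    """Toy online feature-importance tracker (illustrative, iSAGE-inspired)."""

    def __init__(self, model, n_features, alpha=0.05):
        self.model, self.alpha = model, alpha
        self.phi = np.zeros(n_features)   # running importance estimates
        self.reservoir = []               # past samples used for removal

    def update(self, x, y):
        if self.reservoir:
            base = (self.model(x) - y) ** 2
            for j in range(len(x)):
                xr = x.copy()
                # interventional "removal": replace feature j with a past value
                xr[j] = self.reservoir[rng.integers(len(self.reservoir))][j]
                marginal = (self.model(xr) - y) ** 2 - base
                self.phi[j] = (1 - self.alpha) * self.phi[j] + self.alpha * marginal
        self.reservoir.append(x.copy())

# Model that uses only feature 0; feature 1 should get zero importance.
model = lambda x: 2.0 * x[0]
tracker = IncrementalImportance(model, n_features=2)
for _ in range(2000):
    x = rng.normal(size=2)
    tracker.update(x, 2.0 * x[0])
print(tracker.phi)  # feature 0's importance clearly exceeds feature 1's
```

The real method additionally bounds time and memory and supports observational (dependency-retaining) removal; this sketch only conveys the streaming update structure.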
we show carrier dynamics in bulk silicon on the sub - picosecond time scale. the results are obtained from a first - principles implementation of the kadanoff - baym equations within the generalized baym - kadanoff ansatz and the complete collision approximation. the resulting scattering term is similar to the scattering described within the semi - classical boltzmann equation.
|
arxiv:1503.00866
|
pravuil is a robust, secure, and scalable consensus protocol for a permissionless blockchain suitable for deployment in an adversarial environment such as the internet. pravuil circumvents previous shortcomings of other blockchains:
- bitcoin ' s limited adoption problem: as transaction demand grows, payment confirmation times grow much lower than in other pow blockchains
- higher transaction security at a lower cost
- more decentralisation than other permissionless blockchains
- impossibility of full decentralisation and the blockchain scalability trilemma: decentralisation, scalability, and security can be achieved simultaneously
- sybil - resistance for free, implementing the social optimum
- pravuil goes beyond the economic limits of bitcoin or other pow / pos blockchains, leading to a more valuable and stable crypto - currency
|
arxiv:2105.10464
|
in this paper, we consider a two - dimensional sticky brownian motion. sticky brownian motions can be viewed as time - changed semimartingale reflecting brownian motions, which find applications in many areas including queueing theory and mathematical finance. for example, a sticky brownian motion can be used to model a storage system with exceptional services. in this paper, we focus on stationary distributions for sticky brownian motions. the main results obtained here include tail asymptotic properties of boundary stationary distributions, marginal distributions, and joint distributions. the kernel method, the copula concept, and extreme value theory are the main tools used in our analysis.
|
arxiv:1806.04660
|
conformer has shown great success in automatic speech recognition ( asr ) on many public benchmarks. one of its crucial drawbacks is the quadratic time - space complexity with respect to the input sequence length, which prevents the model from scaling up as well as from processing longer input audio sequences. to solve this issue, numerous linear attention methods have been proposed. however, these methods often have limited performance on asr as they treat tokens equally in modeling, neglecting the fact that neighbouring tokens are often more connected than distant tokens. in this paper, we take this fact into account and propose a new locality - biased linear attention for conformer. it not only achieves higher accuracy than the vanilla conformer, but also enjoys linear space - time computational complexity. to be specific, we replace the softmax attention with a locality - biased linear attention ( lbla ) mechanism in conformer blocks. the lbla contains a kernel function to ensure the linear complexities and a cosine reweighing matrix to impose more weights on neighbouring tokens. extensive experiments on the librispeech corpus show that by introducing this locality bias to the conformer, our method achieves a lower word error rate together with more than 22 % faster inference.
|
arxiv:2203.15609
|
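The LBLA mechanism described above can be sketched in a few lines. The feature map (elu + 1) and the cosine locality weight below are assumptions for illustration; the paper's exact kernel and reweighing matrix may differ. We compute the attention naively in O(n^2) for clarity; the cosine weight factorises via cos(a - b) = cos a cos b + sin a sin b, which is what permits an O(n) formulation.

```python
import numpy as np

def lbla(q, k, v, eps=1e-6):
    """Locality-biased linear attention, naive O(n^2) sketch."""
    n = q.shape[0]
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1 > 0
    qf, kf = phi(q), phi(k)
    scores = qf @ kf.T                           # positive kernel scores
    i = np.arange(n)
    # Cosine locality weight: largest on the diagonal, decaying with |i - j|.
    locality = np.cos((i[:, None] - i[None, :]) * np.pi / (2 * n))
    scores = scores * locality                   # emphasise neighbouring tokens
    return (scores @ v) / (scores.sum(axis=1, keepdims=True) + eps)

rng = np.random.default_rng(0)
n, d = 6, 4
out = lbla(rng.normal(size=(n, d)), rng.normal(size=(n, d)), rng.normal(size=(n, d)))
assert out.shape == (n, d)
```

Because both the kernel scores and the locality weights stay positive, the row normalisation is well defined, mirroring the role of softmax in the vanilla conformer block.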
an extension to the computational mechanics complexity measure is proposed in order to tackle quantum state complexity quantification. the method is applicable to any $ n - $ partite state of qudits through some simple modifications. a werner state was considered to test this approach. the results show that it undergoes a phase transition between entangled and separable versions of itself. also, the results suggest an interplay between the rise in quantum state complexity robustness and entanglement. finally, via symbolic dynamics statistical analysis alone, the proposed method was able to distinguish the dynamical structural differences of separable and entangled states.
|
arxiv:1110.6129
|
network covert channels are hidden communication channels in computer networks. they influence several factors of the cybersecurity economy. for instance, by improving the stealthiness of botnet communications, they aid and preserve the value of darknet botnet sales. covert channels can also be used to secretly exfiltrate confidential data out of organizations, potentially resulting in loss of market / research advantage. considering the above, efforts are needed to develop effective countermeasures against such threats. thus in this paper, based on the introduced novel warden taxonomy, we present and evaluate a new concept of a dynamic warden. its main novelty lies in the modification of the warden ' s behavior over time, making it difficult for the adaptive covert communication parties to infer its strategy and perform a successful hidden data exchange. obtained experimental results indicate the effectiveness of the proposed approach.
|
arxiv:2103.00433
|
we study the asymptotic decay of the fourier spectrum of real functions $ f \ colon \ { - 1, 1 \ } ^ n \ rightarrow \ mathbb { r } $ in the spirit of bohr ' s phenomenon from complex analysis. every such function admits a canonical representation through its fourier - walsh expansion $ f ( x ) = \ sum _ { s \ subset \ { 1, \ ldots, n \ } } \ widehat { f } ( s ) x ^ s \,, $ where $ x ^ s = \ prod _ { k \ in s } x _ k $. given a class $ \ mathcal { f } $ of functions on the boolean cube $ \ { - 1, 1 \ } ^ { n } $, the boolean radius of $ \ mathcal { f } $ is defined to be the largest $ \ rho \ geq 0 $ such that $ \ sum _ { s } { | \ widehat { f } ( s ) | \ rho ^ { | s | } } \ leq \ | f \ | _ { \ infty } $ for every $ f \ in \ mathcal { f } $. we give the precise asymptotic behaviour of the boolean radius of several natural subclasses of functions on finite boolean cubes, as e. g. the class of all real functions on $ \ { - 1, 1 \ } ^ { n } $, the subclass made of all homogeneous functions or certain threshold functions. compared with the classical complex situation subtle differences as well as striking parallels occur.
|
arxiv:1707.09186
|
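The Fourier-Walsh coefficients defined in the abstract above can be computed directly by averaging over the cube, since hat{f}(S) = E_x[f(x) x^S]. A minimal sketch (brute force, so only for small n):

```python
import itertools
import numpy as np

def fourier_walsh(f, n):
    """All Fourier-Walsh coefficients hat{f}(S) = E_x[f(x) * x^S] on {-1,1}^n."""
    cube = list(itertools.product([-1, 1], repeat=n))
    coeffs = {}
    for S in itertools.chain.from_iterable(
            itertools.combinations(range(n), r) for r in range(n + 1)):
        chi = lambda x: np.prod([x[k] for k in S]) if S else 1  # character x^S
        coeffs[S] = sum(f(x) * chi(x) for x in cube) / len(cube)
    return coeffs

# Example: majority on 3 bits has expansion (x1 + x2 + x3)/2 - x1*x2*x3/2.
maj = lambda x: 1 if sum(x) > 0 else -1
c = fourier_walsh(maj, 3)
assert abs(c[(0,)] - 0.5) < 1e-12
assert abs(c[(0, 1, 2)] + 0.5) < 1e-12
```

For Boolean-valued f, Parseval gives sum_S hat{f}(S)^2 = 1, which is the normalisation against which the weighted sums sum_S |hat{f}(S)| rho^{|S|} in the definition of the boolean radius are measured.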
modelling real world systems involving humans such as biological processes for disease treatment or human behavior for robotic rehabilitation is a challenging problem because labeled training data is sparse and expensive, while high prediction accuracy is required from models of these dynamical systems. due to the high nonlinearity of problems in this area, data - driven approaches gain increasing attention for identifying nonparametric models. in order to increase the prediction performance of these models, abstract prior knowledge such as stability should be included in the learning approach. one of the key challenges is to ensure sufficient flexibility of the models, which is typically limited by the usage of parametric lyapunov functions to guarantee stability. therefore, we derive an approach to learn a nonparametric lyapunov function based on gaussian process regression from data. furthermore, we learn a nonparametric gaussian process state space model from the data and show that it is capable of reproducing observed data exactly. we prove that stabilization of the nominal model based on the nonparametric control lyapunov function does not modify the behavior of the nominal model at training samples. the flexibility and efficiency of our approach is demonstrated on the benchmark problem of learning handwriting motions from a real world dataset, where our approach achieves almost exact reproduction of the training data.
|
arxiv:2006.07868
|
quantum friction, the electromagnetic fluctuation - induced frictional force decelerating an atom which moves past a macroscopic dielectric body, has so far eluded experimental evidence despite more than three decades of theoretical studies. inspired by the recent finding that dynamical corrections to such an atom ' s internal dynamics are enhanced by one order of magnitude for vertical motion - - compared to the paradigmatic setup of parallel motion - - we generalize quantum friction calculations to arbitrary angles between the atom ' s direction of motion and the surface in front of which it moves. motivated by the disagreement between quantum friction calculations based on markovian quantum master equations and time - dependent perturbation theory, we carry out our derivations of the quantum frictional force for arbitrary angles employing both methods and compare them.
|
arxiv:1612.01715
|
this paper targets control problems that exhibit specific safety and performance requirements. in particular, the aim is to ensure that an agent, operating under uncertainty, will at runtime strictly adhere to such requirements. previous works create so - called shields that correct an existing controller for the agent if it is about to take unbearable safety risks. however, so far, shields do not consider that an environment may not be fully known in advance and may evolve for complex control and learning tasks. we propose a new method for the efficient computation of a shield that is adaptive to a changing environment. in particular, we base our method on problems that are sufficiently captured by potentially infinite markov decision processes ( mdp ) and quantitative specifications such as mean payoff objectives. the shield is independent of the controller, which may, for instance, take the form of a high - performing reinforcement learning agent. at runtime, our method builds an internal abstract representation of the mdp and constantly adapts this abstraction and the shield based on observations from the environment. we showcase the applicability of our method via an urban traffic control problem.
|
arxiv:2010.03842
|
in hadron - nucleus interactions, the stronger the nuclear shadowing in the total cross section, the higher the multiplicity of secondary hadrons. in deep inelastic scattering, nuclear shadowing at small $ x $ is associated with the hadronlike behaviour of photons, as contrasted to the pointlike behaviour in the non - shadowing region of large $ x $. in this paper we predict a smaller mean multiplicity of secondary hadrons, and weaker fragmentation of the target nucleus, in deep inelastic leptoproduction on nuclei in the shadowing region of small $ x $ as compared to the non - shadowing region of large $ x $. this paradoxical conclusion has its origin in the nuclear enhancement of the coherent diffraction dissociation of photons. we present numerical predictions for multiproduction in $ \ mu xe $ interactions studied by the fermilab e665 collaboration.
|
arxiv:hep-ph/9406229
|
in recommendation systems, high - quality user embeddings can capture subtle preferences, enable precise similarity calculations, and adapt to changing preferences over time to maintain relevance. the effectiveness of recommendation systems depends on the quality of the user embedding. we propose to asynchronously learn high - fidelity user embeddings for billions of users each day from sequence - based multimodal user activities through a transformer - like large - scale feature learning module. the async learned user representations embeddings ( alure ) are further converted to user similarity graphs through graph learning and then combined with user realtime activities to retrieve highly related ads candidates for the ads delivery system. our method shows significant gains in both offline and online experiments.
|
arxiv:2406.05898
|
the direct method based on the definition of conserved currents of a system of differential equations is applied to compute the space of conservation laws of the ( 1 + 1 ) - dimensional wave equation in the light - cone coordinates. then noether ' s theorem yields the space of variational symmetries of the corresponding functional. the results are also presented for the standard space - time form of the wave equation.
|
arxiv:1912.03698
|
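For reference alongside the abstract above, the passage to light-cone coordinates and one representative conservation law read as follows (a standard computation, not taken from the paper):

```latex
% Light-cone coordinates for the (1+1)-dimensional wave equation:
u_{tt} - u_{xx} = 0, \qquad p = x + t, \quad q = x - t
\quad\Longrightarrow\quad u_{pq} = 0,
% with general solution u = F(p) + G(q).
% A representative (energy) conservation law in the original coordinates:
D_t\Big(\tfrac12\big(u_t^2 + u_x^2\big)\Big) - D_x\big(u_t u_x\big)
  = u_t\,\big(u_{tt} - u_{xx}\big) = 0 .
```

The direct method mentioned in the abstract works with exactly such pairs (density, flux) whose total divergence vanishes on solutions; Noether's theorem then matches each variational symmetry to such a law.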
robotic learning for navigation in unfamiliar environments needs to provide policies for both task - oriented navigation ( i. e., reaching a goal that the robot has located ), and task - agnostic exploration ( i. e., searching for a goal in a novel setting ). typically, these roles are handled by separate models, for example by using subgoal proposals, planning, or separate navigation strategies. in this paper, we describe how we can train a single unified diffusion policy to handle both goal - directed navigation and goal - agnostic exploration, with the latter providing the ability to search novel environments, and the former providing the ability to reach a user - specified goal once it has been located. we show that this unified policy results in better overall performance when navigating to visually indicated goals in novel environments, as compared to approaches that use subgoal proposals from generative models, or prior methods based on latent variable models. we instantiate our method by using a large - scale transformer - based policy trained on data from multiple ground robots, with a diffusion model decoder to flexibly handle both goal - conditioned and goal - agnostic navigation. our experiments, conducted on a real - world mobile robot platform, show effective navigation in unseen environments in comparison with five alternative methods, and demonstrate significant improvements in performance and lower collision rates, despite utilizing smaller models than state - of - the - art approaches. for more videos, code, and pre - trained model checkpoints, see https://general-navigation-models.github.io/nomad/
|
arxiv:2310.07896
|
we consider graphs parameterized on a portion $ x \ subset \ mathbb z ^ d \ times \ { 1, \ ldots, m \ } ^ k $ of a cylindrical subset of the lattice $ \ mathbb z ^ d \ times \ mathbb z ^ k $, and perform a discrete - to - continuum dimension - reduction process for energies defined on $ x $ of quadratic type. our only assumptions are that $ x $ be connected as a graph and periodic in the first $ d $ - directions. we show that, upon scaling of the domain and of the energies by a small parameter $ \ varepsilon $, the scaled energies converge to a $ d $ - dimensional limit energy. the main technical points are a dimension - lowering coarse - graining process and a discrete version of the $ p $ - connectedness approach by zhikov.
|
arxiv:2107.10809
|
this review summarizes effective field theory techniques, which are the modern theoretical tools for exploiting the existence of hierarchies of scale in a physical problem. the general theoretical framework is described, and explicitly evaluated for a simple model. power - counting results are illustrated for a few cases of practical interest, and several applications to quantum electrodynamics are described.
|
arxiv:hep-th/0701053
|
wind - wave interaction involves wind forcing on wave surface and wave effects on the turbulent wind structures, which essentially influences the wind and wave loading on structures. existing research on wind - wave interaction modeling ignores the inherent strong turbulences of wind. the present study aims to characterize the turbulent airflow over wave surfaces and wave dynamics under wind driving force. a high - fidelity two - phase model is developed to simulate highly turbulent wind - wave fields. instead of using uniform wind, inherent wind turbulences are prescribed at the inlet boundary using the turbulent spot method. the developed model is validated by comparing the simulated wind - wave flow characteristics with experimental data. with the validated model, a numerical case study is conducted on a 10 ^ 2 m scale under extreme wind and wave conditions. the result shows that when inherent wind turbulences are considered, the resultant turbulence is strengthened and is the summation of the inherent turbulence and the wave - induced turbulence. in addition, the wave coherent velocities and shelter effect are enhanced because of the presence of wind inherent turbulence. the regions of intense turbulence depend on the relative speed between wind velocity and wave phase speed. higher wind velocities induce greater turbulence intensities, which can be increased by up to 17 %. the different relative speed between wind and wave can induce opposite positive - negative patterns of wave coherent velocities. the wave - coherent velocity is approximately proportional to the wind velocity, and the influenced region mainly depends on the wave heights.
|
arxiv:2206.02085
|
nuclear segmentation, classification and quantification within haematoxylin & eosin stained histology images enables the extraction of interpretable cell - based features that can be used in downstream explainable models in computational pathology ( cpath ). however, automatic recognition of different nuclei is faced with a major challenge in that there are several different types of nuclei, some of them exhibiting large intraclass variability. in this work, we propose an approach that combines separable - hovernet and instance - yolov5 to identify colon nuclei that are small and class - unbalanced. our approach achieves mpq + 0. 389 on the segmentation and classification preliminary test dataset and r2 0. 599 on the cellular composition preliminary test dataset of the isbi 2022 conic challenge.
|
arxiv:2203.00262
|
determining accurately when regime and structural changes occur in various time - series data is critical in many social and natural sciences. we develop, and show the equivalence of, two consistent estimation techniques for locating the change point under the framework of a generalised version of the ornstein - uhlenbeck process. our methods are based on the least sum of squared errors and the maximum log - likelihood approaches. the case where both the existence and the location of the change point are unknown is investigated, and an informational methodology is employed to address these issues. numerical illustrations are presented to assess the performance of the methods.
|
arxiv:1610.03153
|
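The least-sum-of-squared-errors idea in the abstract above can be sketched on a discretised Ornstein-Uhlenbeck path whose long-run mean shifts at an unknown time. The sketch below fits a piecewise-constant mean only (a simplification; the paper's generalised OU framework estimates the full dynamics), scanning every candidate change point and keeping the one with the least total squared error.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(n, tau, mu0=0.0, mu1=2.0, theta=1.0, sigma=0.3, dt=0.1):
    """Euler discretisation of an OU process whose mean shifts at step tau:
    x_{t+1} = x_t + theta*(mu - x_t)*dt + sigma*sqrt(dt)*eps."""
    x = np.zeros(n)
    for t in range(n - 1):
        mu = mu0 if t < tau else mu1
        x[t + 1] = x[t] + theta * (mu - x[t]) * dt + sigma * np.sqrt(dt) * rng.normal()
    return x

def lsse_change_point(x):
    """Least-sum-of-squared-errors change-point estimate (piecewise-constant mean)."""
    n = len(x)
    best_tau, best_sse = None, np.inf
    for tau in range(2, n - 2):
        sse = np.sum((x[:tau] - x[:tau].mean()) ** 2) \
            + np.sum((x[tau:] - x[tau:].mean()) ** 2)
        if sse < best_sse:
            best_tau, best_sse = tau, sse
    return best_tau

x = simulate(400, tau=250)
tau_hat = lsse_change_point(x)
print(tau_hat)  # should fall near the true change point 250
```

The maximum log-likelihood variant replaces each segment's sum of squares with the Gaussian log-likelihood of the fitted OU dynamics; for fixed noise variance the two criteria pick the same split, which is the equivalence the abstract refers to.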
in bayesian semi - parametric analyses of time - to - event data, non - parametric process priors are adopted for the baseline hazard function or the cumulative baseline hazard function for a given finite partition of the time axis. however, it would be controversial to suggest a general guideline to construct an optimal time partition. while a great deal of research has been done to relax the assumption of the fixed split times for other non - parametric processes, to our knowledge, no methods have been developed for a gamma process prior, which is one of the most widely used in bayesian survival analysis. in this paper, we propose a new bayesian framework for proportional hazards models where the cumulative baseline hazard function is modeled a priori by a gamma process. a key feature of the proposed framework is that the number and position of interval cutpoints are treated as random and estimated based on their posterior distributions.
|
arxiv:2008.02204
|
graphical models are a powerful tool for modelling and analysing complex biological associations in high - dimensional data. the r package netgwas implements recent methodological developments on copula graphical models to ( i ) construct linkage maps, ( ii ) infer linkage disequilibrium networks from genotype data, and ( iii ) detect high - dimensional genotype - phenotype networks. netgwas learns the structure of networks from ordinal data and mixed ordinal - and - continuous data. here, we apply the functionality in netgwas to various multivariate example datasets taken from the literature to demonstrate the kind of insight that can be obtained from the package. we show that our package offers a more realistic association analysis than the classical approaches, as it discriminates between direct and induced correlations by adjusting for the effect of all other variables while performing pairwise associations. this feature controls for spurious interactions between variables that can arise from conventional approaches in a biological sense. the netgwas package uses a parallelization strategy on multi - core processors to speed up computations. the netgwas package is freely available at https://cran.r-project.org/web/packages/netgwas
|
arxiv:1710.01236
|
automatic speech recognition is a difficult problem in pattern recognition because several sources of variability exist in the speech input, such as channel variations, clean versus noisy input, and differences in speaker accent and gender. as a result, domain adaptation is important in speech recognition, where we train the model on a particular source domain and test it on a different target domain. in this paper, we propose a technique to perform unsupervised gender - based domain adaptation in speech recognition using phonetic features. the experiments are performed on the timit dataset and there is a considerable decrease in the phoneme error rate using the proposed approach.
|
arxiv:2108.02850
|
one requirement of maintaining digital information is storage. with the latest advances in the digital world, new emerging media types have required even more storage space than before. in fact, in many cases larger amounts of storage are required to keep up with protocols that support more types of information at the same time. in parallel, compression algorithms have been developed to facilitate the transfer of larger data. numerical representations are construed as embodiments of information, yet this association could feasibly be inverted so that a short sequence signifies an elongated series of numerals. in this work, a novel mathematical paradigm is introduced to engineer a methodology reliant on iterative logarithmic transformations, finely tuned to numeric sequences. through this fledgling approach, an intricate interplay of polymorphic numeric manipulations is conducted. by applying repeated logarithmic operations, the data are condensed into a minuscule representation, surpassing the zip compression method by a factor of approximately thirteen. such extreme compaction, achieved through iterative reduction of expansive integers until they manifest as single - digit entities, confers a novel sense of informational embodiment. instead of relegating data to classical discrete encodings, this method transforms them into a quasi - continuous, logarithmically encoded form. this approach reveals that morphing data into deeply compressed numerical substrata beyond conventional boundaries is feasible. a holistic perspective emerges, validating that numeric data can be recalibrated into ephemeral sequences of logarithmic impressions. it is not merely a matter of reducing digits, but of reinterpreting data from a resolute numeric vantage.
|
arxiv:2412.11236
|
the h \ " uckel hamiltonian is an incredibly simple tight - binding model famed for its ability to capture qualitative physics phenomena arising from electron interactions in molecules and materials. part of its simplicity arises from using only two types of empirically fit physics - motivated parameters : the first describes the orbital energies on each atom and the second describes electronic interactions and bonding between atoms. by replacing these traditionally static parameters with dynamically predicted values, we vastly increase the accuracy of the extended h \ " uckel model. the dynamic values are generated with a deep neural network, which is trained to reproduce orbital energies and densities derived from density functional theory. the resulting model retains interpretability while the deep neural network parameterization is smooth, accurate, and reproduces insightful features of the original static parameterization. finally, we demonstrate that the h \ " uckel model, and not the deep neural network, is responsible for capturing intricate orbital interactions in two molecular case studies. overall, this work shows the promise of utilizing machine learning to formulate simple, accurate, and dynamically parameterized physics models.
|
arxiv:1909.12963
|
the dynamics of fluctuations is considered for electrons near a positive ion or for charges in a confining trap. the stationary nonuniform equilibrium densities are discussed and contrasted. the linear response function for small perturbations of this nonuniform state is calculated from a linear markov kinetic theory whose generator for the dynamics is exact in the short time limit. the kinetic equation is solved in terms of an effective mean field single particle dynamics determined by the local density and dynamical screening by a dielectric function for the non - uniform system. the autocorrelation function for the total force on the charges is discussed.
|
arxiv:0809.3071
|
anomaly detection is crucial to ensure the security of cyber - physical systems ( cps ). however, due to the increasing complexity of cpss and more sophisticated attacks, conventional anomaly detection methods, which face the growing volume of data and need domain - specific knowledge, cannot be directly applied to address these challenges. to this end, deep learning - based anomaly detection ( dlad ) methods have been proposed. in this paper, we review state - of - the - art dlad methods in cpss. we propose a taxonomy in terms of the type of anomalies, strategies, implementation, and evaluation metrics to understand the essential properties of current methods. further, we utilize this taxonomy to identify and highlight new characteristics and designs in each cps domain. also, we discuss the limitations and open problems of these methods. moreover, to give users insights into choosing proper dlad methods in practice, we experimentally explore the characteristics of typical neural models, the workflow of dlad methods, and the running performance of dl models. finally, we discuss the deficiencies of dl approaches, our findings, and possible directions to improve dlad methods and motivate future research.
|
arxiv:2003.13213
|
Financial technology (abbreviated as fintech) refers to the application of innovative technologies to products and services in the financial industry. This broad term encompasses a wide array of technological advancements in financial services, including mobile banking, online lending platforms, digital payment systems, robo-advisors, and blockchain-based applications such as cryptocurrencies. Financial technology companies include both startups and established technology and financial firms that aim to improve, complement, or replace traditional financial services.

== Evolution ==

The evolution of financial technology spans over a century, marked by significant technological innovations that have revolutionized the financial industry. While the application of technology to finance has deep historical roots, the term "financial technology" emerged in the late 20th century and gained prominence in the 1990s. The earliest documented use of the term dates back to 1967, appearing in an article in The Boston Globe titled "Fin-Tech New Source of Seed Money." This piece reported on a startup investment company established by former executives of Computer Control Company, aimed at providing venture capital and industry expertise to startups in the financial technology industry. However, the term didn't gain popularity until the early 1990s, when Citicorp chairman John Reed used it to describe the Financial Services Technology Consortium. This project, initiated by Citigroup, was designed to promote technological cooperation in the financial sector, marking a pivotal moment in the industry's collaborative approach to innovation. The financial technology ecosystem includes various types of companies. While startups developing new financial technologies or services are often associated with financial technology, the sector also encompasses established technology companies expanding into financial services and traditional financial institutions adopting new technologies. This diverse landscape has led to innovations across multiple financial sectors, including banking, insurance, investment, and payment systems. Financial technology applications span a wide range of financial services. These include digital banking, mobile payments and digital wallets, peer-to-peer lending platforms, robo-advisors and algorithmic trading, insurtech, blockchain and cryptocurrency, regulatory technology, and crowdfunding platforms.

== History ==

=== Foundations ===

The late 19th century laid the groundwork for early financial technology with the development of the telegraph and transatlantic cable systems. These innovations transformed the transmission of financial information across borders, enabling faster and more efficient communication between financial institutions. A significant milestone in electronic money movement came with the establishment of the Fedwire Funds Service by the Federal Reserve Banks in 1918. This early electronic funds transfer system used telegraph lines to facilitate secure transfers
|
https://en.wikipedia.org/wiki/Financial_technology
|
In recent years, the use of adjoint vectors in computational fluid dynamics (CFD) has seen a dramatic rise. Their utility in numerous applications, including design optimization, data assimilation, and mesh adaptation, has sparked the interest of researchers and practitioners alike. In many of these fields, the concept of an adjoint is explained differently, with various notations and motivations employed. Further complicating matters is the existence of two seemingly different types of adjoints -- "continuous" and "discrete" -- as well as the more formal definition of adjoint operators employed in linear algebra and functional analysis. These issues can make the fundamental concept of an adjoint difficult to pin down. In these notes, we hope to clarify some of the ideas surrounding adjoint vectors and to provide a useful reference for both continuous and discrete adjoints alike. In particular, we focus on the use of adjoints within the context of output-based mesh adaptation, where the goal is to achieve accuracy in a particular quantity (or "output") of interest by performing targeted adaptation of the computational mesh. While this is our application of interest, the ideas discussed here apply directly to design optimization, data assimilation, and many other fields where adjoints are employed.
|
arxiv:1712.00693
|
We study weighted porous media equations on domains $\Omega \subseteq \mathbb{R}^n$, either with Dirichlet or with Neumann homogeneous boundary conditions when $\Omega \neq \mathbb{R}^n$. Existence of weak solutions and uniqueness in a suitable class is studied in detail. Moreover, $L^{q_0}$-$L^\varrho$ smoothing effects ($1 \leq q_0 < \varrho < \infty$) are discussed for short time, in connection with the validity of a Poincaré inequality in appropriate weighted Sobolev spaces, and the long-time asymptotic behaviour is also studied. Particular emphasis is given to the Neumann problem, which is much less studied in the literature, as well as to the case $\Omega = \mathbb{R}^n$ when the corresponding weight makes its measure finite, so that solutions converge to their weighted average rather than to zero. Examples are given in terms of wide classes of weights.
|
arxiv:1204.6159
|
In information retrieval research, precision and recall have long been used to evaluate IR systems. However, given that a number of retrieval systems resembling one another are already available to the public, it is valuable to retrieve novel relevant documents, i.e., documents that cannot be retrieved by those existing systems. In view of this problem, we propose an evaluation method that favors systems retrieving as many novel documents as possible. We also used our method to evaluate systems that participated in the IREX workshop.
|
arxiv:cs/0011002
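The abstract does not spell out the exact metric, so the following is a hypothetical sketch of a novelty-oriented evaluation: a system is credited only for relevant documents that no existing baseline system retrieves. The document IDs and the precision/recall variants are illustrative assumptions:

```python
def novelty_scores(retrieved, relevant, baseline_retrieved):
    """Evaluate a system by how many *novel* relevant documents it finds,
    i.e. relevant documents missed by every existing baseline system."""
    known = set().union(*baseline_retrieved)        # docs any baseline can find
    novel_relevant = relevant - known               # the novelty pool
    hits = set(retrieved) & novel_relevant
    novelty_recall = len(hits) / len(novel_relevant) if novel_relevant else 0.0
    novelty_precision = len(hits) / len(retrieved) if retrieved else 0.0
    return novelty_precision, novelty_recall

# Hypothetical toy run: d1..d5 are relevant, baselines already cover d1 and d2.
p, r = novelty_scores(
    retrieved=["d2", "d3", "d4"],
    relevant={"d1", "d2", "d3", "d4", "d5"},
    baseline_retrieved=[{"d1"}, {"d2"}],
)
print(p, r)  # 2 of 3 retrieved docs are novel-relevant; 2 of 3 novel docs found
```

A system that merely re-retrieves d1 and d2 would score zero here, which is exactly the behavior a novelty-favoring evaluation is meant to penalize.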
|
With our new speckle imaging polarimeter we have obtained first polarimetric images with sub-arcsecond resolution of the luminous blue variable Eta Carinae in the H-alpha line. The polarization patterns at the 3'' scale match well earlier conventional imaging photometry and can be interpreted as Mie scattering. In centered long-exposure images we detected in polarized light a bar in the NE part of the equatorial plane of Eta Carinae. High-resolution 0.11'' polarimetric speckle reconstructions reveal a compact structure elongated in the same direction which is consistent, in degree and position angle of the polarisation, with the presence of a circumstellar, equatorial disk. The degree of polarization of the previously discovered speckle objects and the H-alpha arm is relatively low (~10%) and thus may indicate a position within the equatorial plane. We also discovered a highly polarized (20%-40%) bipolar structure along the major axis of the Homunculus nebula which can be traced down to the sub-arcsecond scale. This is probably the inner part of a bipolar outflow into the Homunculus.
|
arxiv:astro-ph/9601119
|
In this paper, we develop a new rainbow Hamilton framework, which is of independent interest, settling the problem proposed by Gupta, Hamann, Müyesser, Parczyk, and Sgueglia when $k = 3$, and draw the general conclusion for any $k \geq 3$ as follows. A $k$-graph system $\mathbf{H} = \{H_i\}_{i \in [n]}$ is a family of not necessarily distinct $k$-graphs on the same $n$-vertex set $V$; moreover, a $k$-graph $H$ on $V$ is rainbow if $E(H) \subseteq \bigcup_{i \in [n]} E(H_i)$ and $|E(H) \cap E(H_i)| \leq 1$ for $i \in [n]$. We show that given $\gamma > 0$, sufficiently large $n$, and an $n$-vertex $k$-graph system $\mathbf{H} = \{H_i\}_{i \in [n]}$, if $\delta_{k-2}(H_i) \geq (5/9 + \gamma)\binom{n}{2}$ for $i \in [n]$ where $k \geq 3$, then there exists a rainbow tight Hamilton cycle. This result implies the conclusion in a single graph, which was proved by Lang and Sanhueza-Matamala [J. Lond. Math. Soc., 2022] and by Polcyn, Reiher, Rödl and Schülke [J. Combin. Theory Ser. B, 2021] independently.
|
arxiv:2302.00080
|
This work presents an analysis of the efficiency of image augmentations for the face recognition problem from limited data. We considered basic manipulations, generative methods, and their combinations for augmentations. Our results show that augmentations, in general, can considerably improve the quality of face recognition systems, and that the combination of generative and basic approaches performs better than the other tested techniques.
|
arxiv:2105.08796
|
Recent experiments on Bi-based cuprate superconductors have revealed an unexpected enhancement of the pairing correlations near the interstitial oxygen dopant ions. Here we propose a possible mechanism -- based on local screening effects -- by which the oxygen dopants do modify the electronic parameters within the CuO$_2$ planes and strongly increase the superexchange coupling $J$. This enhances the spin pairing effects locally and may explain the observed spatial variations of the density of states and the pairing gap.
|
arxiv:1008.0435
|
We report on a search for standard model t-channel and s-channel single top quark production in $p\bar{p}$ collisions at a center-of-mass energy of 1.96 TeV. We use a data sample corresponding to 162 pb$^{-1}$ recorded by the upgraded Collider Detector at Fermilab. We find no significant evidence for electroweak top quark production and set upper limits at the 95% confidence level on the production cross section, consistent with the standard model: 10.1 pb for the t-channel, 13.6 pb for the s-channel, and 17.8 pb for the combined cross section of the t- and s-channels.
|
arxiv:hep-ex/0410058
|
In this paper we study duality properties of the M(atrix) theory compactified on a circle. We establish the equivalence of this theory to the strong coupling limit of type IIB string theory compactified on a circle. In the M(atrix) theory context, our major evidence for this duality consists of identifying the BPS states of IIB strings in the spectrum and finding the remnant symmetry of SL(2,Z) and the associated tau moduli. By this IIB/M duality, a number of insights are gained into the physics of longitudinal membranes in the infinite momentum frame. We also point out an accidental affine Lie symmetry in the theory.
|
arxiv:hep-th/9703016
|
In this article, we give a proof of multiplicativity for $\gamma$-factors, an equality of parabolically induced and inducing factors, in the context of the Braverman-Kazhdan/Ngô program, under the assumption of commutativity of the corresponding Fourier transforms and a certain generalized Harish-Chandra transform. We also discuss the resolution of singularities and their rationality for reductive monoids, which are among the basic objects in the program.
|
arxiv:2106.13399
|
We propose to couple a trapped single electron to superconducting structures located at a variable distance from the electron. The electron is captured in a cryogenic Penning trap using electric fields and a static magnetic field in the tesla range. Measurements on the electron will allow investigating the properties of the superconductor, such as vortex structure, damping, and decoherence. We propose to couple a superconducting microwave resonator to the electron in order to realize a circuit QED-like experiment, as well as to couple superconducting Josephson junctions or superconducting quantum interferometers (SQUIDs) to the electron. The electron may also be coupled to a vortex which is situated in a double-well potential, realized by nearby pinning centers in the superconductor, acting as a quantum mechanical two-level system that can be controlled by a transport current tilting the double-well potential. When the vortex is trapped in the interferometer arms of a SQUID, this would allow its detection both by the SQUID and by the electron.
|
arxiv:1009.3425
|
The cyclic SOS model is considered on the basis of Smirnov's form factor bootstrap approach. Integral solutions to the quantum Knizhnik-Zamolodchikov equations of level 0 are presented.
|
arxiv:hep-th/0402112
|
We consider a model of a scalar field with non-minimal kinetic and Gauss-Bonnet couplings as a source of dark energy. Based on asymptotic limits of the generalized Friedmann equation, we impose restrictions on the kinetic and Gauss-Bonnet couplings. These restrictions considerably simplify the equations, allowing for exact solutions unifying early-time matter dominance with transitions to late-time quintessence and phantom phases. The stability of the solutions in the absence of matter has been studied.
|
arxiv:1209.1137
|
The tight confinement of the evanescent light field around the waist of an optical nanofiber makes it a suitable tool for studying nonlinear optics in atomic media. Here, we use an optical nanofiber embedded in a cloud of laser-cooled 87Rb for near-infrared frequency upconversion via a resonant two-photon process. Sub-nW powers of the two-photon beams, at 780 nm and 776 nm, co-propagate through the optical nanofiber, and generation of 420 nm photons is observed. A measurement of the Autler-Townes splitting provides a direct measurement of the Rabi frequency of the 780 nm transition. Through this method, dephasings of the system can be studied. In this work, the optical nanofiber is used as an excitation and detection tool simultaneously, which highlights some of the advantages of using fully fibered systems for nonlinear optics with atoms.
|
arxiv:1502.01123
|
As multicore systems continue to gain ground in the high performance computing world, linear algebra algorithms have to be reformulated, or new algorithms have to be developed, in order to take advantage of the architectural features of these new processors. Fine-grain parallelism becomes a major requirement and introduces the necessity of loose synchronization in the parallel execution of an operation. This paper presents an algorithm for the QR factorization where the operations can be represented as a sequence of small tasks that operate on square blocks of data. These tasks can be dynamically scheduled for execution based on the dependencies among them and on the availability of computational resources. This may result in an out-of-order execution of the tasks which will completely hide the presence of intrinsically sequential tasks in the factorization. Performance comparisons are presented with the LAPACK algorithm for QR factorization, where parallelism can only be exploited at the level of the BLAS operations.
|
arxiv:0707.3548
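The dependency-driven scheduling idea (not the numerical kernels themselves) can be sketched as follows: each tile task becomes runnable as soon as its dependencies complete, so independent tasks may execute out of order. The task names and the small 2x2-tile DAG below are illustrative assumptions:

```python
from collections import deque

# Hypothetical task DAG for one sweep of a 2x2 tiled QR factorization: a task
# may run once all of its dependencies have completed, in any order otherwise.
deps = {
    "GEQRT(0,0)": [],                        # factor the diagonal tile
    "ORMQR(0,1)": ["GEQRT(0,0)"],            # apply Q^T to the tile to its right
    "TSQRT(1,0)": ["GEQRT(0,0)"],            # fold the tile below into the factor
    "TSMQR(1,1)": ["ORMQR(0,1)", "TSQRT(1,0)"],  # update the trailing tile
    "GEQRT(1,1)": ["TSMQR(1,1)"],            # factor the trailing diagonal tile
}

def schedule(deps):
    """Greedy dynamic scheduler: repeatedly run any task whose deps are done."""
    done, order = set(), []
    pending = deque(deps)
    while pending:
        task = pending.popleft()
        if all(d in done for d in deps[task]):
            done.add(task)
            order.append(task)
        else:
            pending.append(task)             # not ready yet, retry later
    return order

order = schedule(deps)
print(order)
```

A real runtime would execute ready tasks concurrently on worker threads; the point is that only the data dependencies, not a global ordering, constrain execution.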
|
Consensus clustering has been widely used in bioinformatics and other applications to improve the accuracy, stability, and reliability of clustering results. This approach ensembles cluster co-occurrences from multiple clustering runs on subsampled observations. For application to large-scale bioinformatics data, for example to discover cell types from single-cell sequencing data, consensus clustering has two significant drawbacks: (i) computational inefficiency due to repeatedly applying clustering algorithms, and (ii) lack of interpretability into the important features for differentiating clusters. In this paper, we address these two challenges by developing IMPACC: Interpretable MiniPatch Adaptive Consensus Clustering. Our approach adopts three major innovations. We ensemble cluster co-occurrences from tiny subsets of both observations and features, termed minipatches, thus dramatically reducing computation time. Additionally, we develop adaptive sampling schemes for observations, which result in both improved reliability and computational savings, as well as adaptive sampling schemes for features, which lead to interpretable solutions by quickly learning the most relevant features that differentiate clusters. We study our approach on synthetic data and a variety of real large-scale bioinformatics data sets; results show that our approach not only yields more accurate and interpretable cluster solutions, but also substantially improves computational efficiency compared to standard consensus clustering approaches.
|
arxiv:2110.02388
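The minipatch ensembling idea can be sketched as follows: repeatedly cluster a tiny random subset of observations and features, and accumulate how often each pair of observations lands in the same cluster. This toy version, with an inline 2-means and synthetic two-group data, is an illustrative assumption and omits the paper's adaptive sampling schemes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two well-separated groups in 10 features (a toy stand-in for omics data).
X = np.vstack([rng.normal(0, 0.3, (10, 10)), rng.normal(3, 0.3, (10, 10))])
n = len(X)

def kmeans2(data, iters=10):
    """Tiny 2-means, enough for a minipatch sketch."""
    centers = data[rng.choice(len(data), 2, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((data[:, None] - centers) ** 2).sum(-1), axis=1)
        for k in range(2):
            if (labels == k).any():
                centers[k] = data[labels == k].mean(0)
    return labels

co, cnt = np.zeros((n, n)), np.zeros((n, n))
for _ in range(60):
    obs = rng.choice(n, 6, replace=False)     # tiny subset of observations
    feat = rng.choice(10, 3, replace=False)   # tiny subset of features
    labels = kmeans2(X[np.ix_(obs, feat)])
    for i, a in enumerate(obs):
        for j, b in enumerate(obs):
            cnt[a, b] += 1
            co[a, b] += labels[i] == labels[j]

consensus = co / np.maximum(cnt, 1)           # co-clustering frequency
same = consensus[:10, :10].mean()             # within the first group
diff = consensus[:10, 10:].mean()             # across the two groups
print(same > diff)
```

Each minipatch clusters only 6 observations on 3 features, so the per-run cost is tiny; the consensus matrix is what carries the final cluster structure.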
|
We recently introduced a new family of processes which describe particles that can only move at the speed of light $c$ in the ordinary 3D physical space. The velocity, which randomly changes direction, can be represented as a point on the surface of a sphere of radius $c$, and its trajectories may only connect points of this variety. A process can be constructed both by considering jumps from one point to another (velocity changes discontinuously) and by continuous velocity trajectories on the surface. We followed this second, new strategy, assuming that the velocity is described by a Wiener process (which is isotropic only in the 'rest frame') on the surface of the sphere. Using both Itô calculus and Lorentz boost rules, we succeed here in characterizing the entire Lorentz-invariant family of processes. Moreover, we highlight and describe the short-term ballistic behavior versus the long-term diffusive behavior of the particles in the 3D physical space.
|
arxiv:2004.11983
|
Student dropout prediction provides an opportunity to improve student engagement, which maximizes the overall effectiveness of learning experiences. However, research on student dropout has mainly been conducted on school dropout or course dropout, and study session dropout in a mobile learning environment has not been considered thoroughly. In this paper, we investigate the study session dropout prediction problem in a mobile learning environment. First, we define the concepts of the study session, study session dropout, and the study session dropout prediction task in a mobile learning environment. Based on these definitions, we propose a novel Transformer-based model for predicting study session dropout, DAS: Deep Attentive Study Session Dropout Prediction in Mobile Learning Environment. DAS has an encoder-decoder structure composed of stacked multi-head attention and point-wise feed-forward networks. The deep attentive computations in DAS are capable of capturing complex relations among dynamic student interactions. To the best of our knowledge, this is the first attempt to investigate study session dropout in a mobile learning environment. Empirical evaluations on a large-scale dataset show that DAS achieves the best performance, with a significant improvement in area under the receiver operating characteristic curve compared to baseline models.
|
arxiv:2002.11624
|
Storing data in DNA is being explored as an efficient solution for archiving and in-object storage. Synthesis time and cost remain challenging, significantly limiting some applications at this stage. In this paper we investigate efficient synthesis as it relates to cyclic synchronized synthesis technologies, such as photolithography. We define performance metrics related to the number of cycles needed for the synthesis of any fixed number of bits. We first expand on some results from the literature related to the channel capacity, addressing densities beyond those covered by prior work. This leads us to develop effective encoding achieving rate and capacity that are higher than previously reported. Finally, we analyze cost based on a parametric definition and determine some bounds and asymptotics. We investigate alphabet sizes that can be larger than 4, both for theoretical completeness and because practical approaches to such schemes were recently suggested and tested in the literature.
|
arxiv:2412.05865
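In cyclic synchronized synthesis, the machine offers one nucleotide per cycle in a fixed repeating order, and a strand's cost is the number of cycles elapsed when its last base attaches. A minimal sketch of this cycle count, assuming the standard ACGT cycle (the paper's metrics and alphabets are more general):

```python
def synthesis_cycles(strand, order="ACGT"):
    """Number of machine cycles needed to synthesize `strand` when one
    nucleotide from the repeating cycle `order` is offered per cycle."""
    cycle = 0
    for base in strand:
        # Wait until the offered nucleotide matches the next base to attach.
        while order[cycle % len(order)] != base:
            cycle += 1
        cycle += 1  # attach during this cycle
    return cycle

print(synthesis_cycles("ACGT"))  # each base arrives exactly on time -> 4 cycles
print(synthesis_cycles("TA"))    # wait through A, C, G for T, then wrap to A -> 5
```

Strands that follow the cycle order are cheap, while strands that fight it (e.g. repeated bases) cost a full cycle per symbol, which is the tension the paper's encodings exploit.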
|
In this paper we introduce the notion of a timelike surface with harmonic inverse mean curvature in 3-dimensional Lorentzian space forms, and study their fundamental properties.
|
arxiv:math/0512308
|
Profile-guided optimization is an effective technique for improving the optimization ability of compilers based on dynamic behavior, but collecting profile data is expensive, cumbersome, and requires regular updating to remain fresh. We present a novel statistical approach to inferring branch probabilities that improves the performance of programs compiled without profile-guided optimizations. We perform offline training using information collected from a large corpus of binaries that have branch probability information. The learned model is used by the compiler to predict the branch probabilities of regular uninstrumented programs, which the compiler can then use to inform optimization decisions. We integrate our technique directly into LLVM, supplementing the existing human-engineered compiler heuristics. We evaluate our technique on a suite of benchmarks, demonstrating some gains over compiling without profile information. In deployment, our technique requires no profiling runs and has negligible effect on compilation time.
|
arxiv:2112.14679
|
The galaxy correlation function serves as a fundamental tool for studying cosmology, galaxy formation, and the nature of dark matter. It is well established that more massive, redder, and more compact galaxies tend to have stronger clustering in space. These results can be understood in terms of galaxy formation in cold dark matter (CDM) halos of different mass and assembly history. Here, we report an unexpectedly strong large-scale clustering for isolated, diffuse, and blue dwarf galaxies, comparable to that seen for massive galaxy groups but much stronger than that expected from their halo mass. Our analysis indicates that the strong clustering aligns with the halo assembly bias seen in simulations with the standard $\Lambda$CDM cosmology only if more diffuse dwarfs formed in low-mass halos of older ages. This pattern is not reproduced by existing models of galaxy evolution in a $\Lambda$CDM framework, and our finding provides new clues for the search for more viable models. Our results can be explained well by assuming self-interacting dark matter, suggesting that such a scenario should be considered seriously.
|
arxiv:2504.03305
|
A phenomenology of isotropic magnetohydrodynamic turbulence subject to both rotation and an applied magnetic field is presented. It is assumed that the triple-correlation decay time is the shortest among the eddy turn-over time and the times associated with the rotation frequency and the Alfvén wave period. For $Pm = 1$ this leads to four kinds of piecewise spectra, depending on the four parameters: injection rate of energy, magnetic diffusivity, rotation rate, and applied field. With a shell model of MHD turbulence (including rotation and applied magnetic field), spectra for $Pm \le 1$ are presented, together with the ratio between magnetic and viscous dissipation.
|
arxiv:1009.3549
|
For split smooth del Pezzo surfaces, we analyse the structure of the effective cone and prove a recursive formula for the value of alpha appearing in the leading constant, as predicted by Peyre, of Manin's conjecture on the number of rational points of bounded height on the surface. Furthermore, we calculate alpha for all singular del Pezzo surfaces of degree at least 3.
|
arxiv:math/0702549
|
Recent structure function results from H1 and ZEUS are presented. The data have been recorded in e+p and e-p collisions for both neutral current and charged current reactions, covering a wide kinematic range of squared four-momentum transfers $Q^2$, from 0.2 GeV$^2$ to 30000 GeV$^2$, and Bjorken $x$ between $\sim 5 \times 10^{-6}$ and 0.65. Data from both experiments have been combined, leading to significantly reduced experimental uncertainties. The combined measurements are analysed in an NLO QCD fit, and a set of parton density functions, HERAPDF1.0, is extracted from these data alone. New direct measurements of the structure function $F_L$, making use of dedicated low-energy runs of the HERA machine, are also presented. The impact of the HERA data on the parton density functions and predictions for the LHC is discussed.
|
arxiv:1009.0978
|
Most prior work on task-oriented dialogue systems is restricted to limited coverage of domain APIs. However, users oftentimes have requests that are out of the scope of these APIs. This work focuses on responding to these beyond-API-coverage user turns by incorporating external, unstructured knowledge sources. Our approach works in a pipelined manner with knowledge-seeking turn detection, knowledge selection, and response generation in sequence. We introduce novel data augmentation methods for the first two steps and demonstrate that the use of information extracted from dialogue context improves the knowledge selection and end-to-end performance. Through experiments, we achieve state-of-the-art performance for both automatic and human evaluation metrics on the DSTC9 Track 1 benchmark dataset, validating the effectiveness of our contributions.
|
arxiv:2106.09174
|
Being a powerful tool for linear time-invariant (LTI) systems, system response analysis can also be applied to so-called linear space-invariant (LSI) but time-varying systems, which are a dual of conventional LTI problems. In this paper, we propose a system response analysis method for LSI problems by performing a Fourier transform of the field distribution over the space coordinate instead of the time coordinate. Specifically, input and output signals can be expressed in the wavenumber (spatial frequency) domain. In this way, the system function in the wavenumber domain can also be obtained for LSI systems. Given an arbitrary input and the temporal profile of the medium, the output can be easily predicted using the system function. Moreover, for a complex temporal system, the proposed method allows for decomposing it into multiple simpler subsystems that appear in sequence in time. The system function of the whole system can be efficiently calculated by multiplying those of the individual subsystems.
|
arxiv:2208.00845
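The wavenumber-domain workflow can be sketched for a toy case: Fourier transform the field over space, multiply by a system function $H(k)$, and invert. The system function used here, a pure spatial shift, is a hypothetical stand-in whose effect the Fourier shift theorem predicts exactly:

```python
import numpy as np

# Space axis and an input field (a Gaussian pulse in space at one instant).
n = 256
x = np.linspace(-10, 10, n, endpoint=False)
dx = x[1] - x[0]
field = np.exp(-x**2)

# Transform over *space* (not time): obtain the wavenumber spectrum.
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
spectrum = np.fft.fft(field)

# Hypothetical LSI system function H(k): a pure spatial shift by x0, which in
# the wavenumber domain is multiplication by exp(-i k x0).
x0 = 2.0
H = np.exp(-1j * k * x0)

output = np.fft.ifft(H * spectrum).real

# The output is the input displaced by x0, as the system function predicts.
expected = np.exp(-(x - x0)**2)
print(np.allclose(output, expected, atol=1e-6))
```

Cascading subsystems amounts to multiplying their $H(k)$ factors before the single inverse transform, which is the efficiency the paper highlights.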
|
With the remarkable success of deep neural networks, there is growing interest in research aimed at providing clear interpretations of their decision-making processes. In this paper, we introduce attribution equilibrium, a novel method to decompose output predictions into fine-grained attributions, balancing positive and negative relevance for clearer visualization of the evidence behind a network decision. We carefully analyze conventional approaches to decision explanation and present a different perspective on the conservation of evidence. We define the evidence as a gap between positive and negative influences among gradient-derived initial contribution maps. Then, we incorporate antagonistic elements and a user-defined criterion for the degree of positive attribution during propagation. Additionally, we consider the role of inactivated neurons in the propagation rule, thereby enhancing the discernment of less relevant elements such as the background. We conduct various assessments in a verified experimental environment with the PASCAL VOC 2007, MS COCO 2014, and ImageNet datasets. The results demonstrate that our method outperforms existing attribution methods both qualitatively and quantitatively in identifying the key input features that influence model decisions.
|
arxiv:2205.11109
|
End-to-end autonomous driving (E2EAD) methods typically rely on supervised perception tasks to extract explicit scene information (e.g., objects, maps). This reliance necessitates expensive annotations and constrains deployment and data scalability in real-time applications. In this paper, we introduce SSR, a novel framework that utilizes only 16 navigation-guided tokens as a sparse scene representation, efficiently extracting crucial scene information for E2EAD. Our method eliminates the need for human-designed supervised sub-tasks, allowing computational resources to concentrate on essential elements directly related to navigation intent. We further introduce a temporal enhancement module, aligning predicted future scenes with actual future scenes through self-supervision. SSR achieves a 27.2% relative reduction in L2 error and a 51.6% decrease in collision rate compared to UniAD on nuScenes, with a 10.9$\times$ faster inference speed and 13$\times$ faster training time. Moreover, SSR outperforms VAD-Base with a 48.6-point improvement on driving score in CARLA's Town05 Long benchmark. This framework represents a significant leap in real-time autonomous driving systems and paves the way for future scalable deployment. Code is available at https://github.com/peidongli/SSR.
|
arxiv:2409.18341
|
Detailed mapping of the distributions and kinematics of gases in cometary comae at radio wavelengths can provide fundamental advances in our understanding of cometary activity and outgassing mechanisms. Furthermore, the measurement of molecular abundances in comets provides new insights into the chemical composition of some of the solar system's oldest and most primitive materials. Here we investigate the opportunities for significant progress in cometary science using a very large radio interferometer. The ngVLA concept will enable detection and mapping of a range of key coma species in the 1.2-116 GHz range, and will allow, for the first time, high-resolution mapping of the fundamental cometary molecules OH and NH$_3$. The extremely high angular resolution and continuum sensitivity of the proposed ngVLA will also allow the possibility of imaging thermal emission from the nucleus itself, as well as from large dust/ice grains in the comae, of comets passing within $\sim$1 au of Earth.
|
arxiv:1810.07867
|
It is now well understood that the equations of viscoelasticity can be seen as perturbations of wave-type equations. This observation can be exploited in several different ways, and it turns out to be a useful tool when studying controllability. Here we compare a viscoelastic system which fills a surface of a solid region (the string case has already been studied) with its memoryless counterpart (which is a generalized telegraph equation) in order to prove exact controllability of the viscoelastic body at precisely the same times at which the telegraph equation is controllable. The comparison is done using a moment method approach to controllability, and we prove, using the perturbation theorems of Paley-Wiener and Bari, that a new sequence derived from the viscoelastic system is a Riesz sequence, a fact that implies controllability of the viscoelastic system. The results so obtained generalize existing controllability results and furthermore show that the "sharp" control time for the telegraph equation and the viscoelastic system coincide.
|
arxiv:1305.1477
|
We study the physical processes involved in the potential influence of Amazon (AM) hydroclimatology on the tropical North Atlantic (TNA) sea surface temperatures (SSTs) at interannual timescales, by analyzing time series of the precipitation index (P-E) over the AM, as well as the surface atmospheric pressure gradient between both regions, and TNA SSTs. We use a recurrence joint probability based analysis that accounts for the lagged nonlinear dependency between time series, which also allows quantifying the statistical significance, based on a twin surrogates technique of recurrence analysis. By means of such nonlinear dependence analysis we find that at interannual timescales AM hydrology influences future states of the TNA SSTs from 0 to 2 months later with 90% to 95% statistical confidence. It also unveils the existence of two-way feedback mechanisms between the variables involved in the processes: (i) precipitation over the AM leads the atmospheric pressure gradient between the TNA and AM at 0 to 2 month lags, (ii) the pressure gradient leads the trade zonal winds over the TNA from 0 to 3 months and from 7 to 12 months, (iii) the zonal winds lead the SSTs from 0 to 3 months, and (iv) the SSTs lead precipitation over the AM by a 1 month lag. The analyses were made for time series spanning from 1979 to 2008, and for extreme precipitation events in the AM during the years 1999, 2005, 2009 and 2010. We also evaluated the monthly mean conditions of the relevant variables during the extreme AM droughts of 1963, 1980, 1983, 1997, 1998, 2005, and 2010, and also during the floods of 1989, 1999, and 2009. Our results confirm that the Amazon river basin acts as a land surface-atmosphere bridge that links the tropical Pacific and TNA SSTs at interannual timescales...
|
arxiv:2504.02102
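A recurrence-based lagged dependence analysis can be sketched as follows: build a recurrence matrix for each series, lag one of them, and measure the joint recurrence rate at each lag. The synthetic driver/response pair below is an illustrative assumption, and the sketch omits the twin-surrogates significance test the paper uses:

```python
import numpy as np

def recurrence_matrix(series, eps):
    """R[i, j] = 1 when states i and j of the series are closer than eps."""
    d = np.abs(series[:, None] - series[None, :])
    return (d < eps).astype(int)

def joint_recurrence_rate(x, y, eps, lag=0):
    """Fraction of time pairs at which both series recur, with y lagged."""
    rx = recurrence_matrix(x[: len(x) - lag], eps)
    ry = recurrence_matrix(y[lag:], eps)
    return (rx * ry).mean()

rng = np.random.default_rng(0)
t = np.arange(500)
driver = np.sin(2 * np.pi * t / 50) + 0.1 * rng.normal(size=t.size)
response = np.roll(driver, 2)  # responds to the driver two steps later

rates = [joint_recurrence_rate(driver, response, eps=0.3, lag=l) for l in range(5)]
best_lag = int(np.argmax(rates))
print(best_lag)
```

Scanning the lag at which joint recurrence peaks is the recurrence-based analogue of a lagged correlation analysis, but it does not assume linearity.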
|
Here we present the quantum storage of three-dimensional orbital-angular-momentum photonic entanglement in a rare-earth-ion-doped crystal. The properties of the entanglement and the storage process are confirmed by the violation of a Bell-type inequality generalized to three dimensions after storage ($S = 2.152 \pm 0.033$). The fidelity of the memory process is $0.993 \pm 0.002$, as determined through complete quantum process tomography in three dimensions. An assessment of the visibility of the stored weak coherent pulses in higher-dimensional spaces demonstrates that the memory is highly reliable for 51 spatial modes. These results pave the way towards the construction of high-dimensional and multiplexed quantum repeaters based on solid-state devices. The multimode capacity of rare-earth-based optical processors goes beyond the temporal and spectral degrees of freedom, which might provide a useful tool for photonic information processing.
|
arxiv:1412.5243
|
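The size of the Bell-type violation reported above can be checked with simple arithmetic: the local-realistic bound for the inequality generalized to three dimensions is $S \le 2$, so the quoted $S = 2.152 \pm 0.033$ exceeds it by several standard deviations. A minimal sketch:

```python
# Significance of the reported 3-dimensional Bell-type violation.
s_measured = 2.152      # reported value of S after storage
s_error = 0.033         # reported uncertainty
classical_bound = 2.0   # local-realistic bound for the generalized inequality

violation_sigma = (s_measured - classical_bound) / s_error
print(f"violation: {violation_sigma:.1f} standard deviations")  # -> 4.6
```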
We report nanodiamond-derived onion-like carbon successfully applied as an electrocatalyst for the $\mathrm{VO}^{2+}/\mathrm{VO}_2^+$ redox flow battery, drop-coated in the as-synthesized state onto glassy carbon or carbon felt electrodes. We show that its reversibility and catalytic activity in the as-synthesized state are comparable to some of the best results in the literature, which employed surface modifications. We clarified the origin of this excellent performance by physical and electrochemical analyses.
|
arxiv:1605.05938
|
We investigate the expected gravitational wave emission from coalescing supermassive black hole (SMBH) binaries resulting from mergers of their host galaxies. When galaxies merge, the SMBHs in the host galaxies sink to the center of the newly merged galaxy and form a binary system. We employ a semi-analytic model of galaxy and quasar formation based on the hierarchical clustering scenario to estimate the amplitude of the expected stochastic gravitational wave background owing to inspiraling SMBH binaries, and of bursts owing to SMBH binary coalescence events. We find that the characteristic strain amplitude of the background radiation is $h_c(f) \sim 10^{-16} (f/1\,\mu\mathrm{Hz})^{-2/3}$ for $f \lesssim 1\,\mu\mathrm{Hz}$, just below the detection limit from pulsar timing measurements, provided that SMBHs coalesce simultaneously when their host galaxies merge. The main contribution to the total strain amplitude of the background radiation comes from SMBH coalescence events at $0 < z < 1$. We also find that a future space-based gravitational wave interferometer such as the planned \textit{Laser Interferometer Space Antenna} (LISA) might detect intense gravitational wave bursts associated with the coalescence of SMBH binaries with total mass $M_{\rm tot} < 10^7 M_\odot$ at $z \gtrsim 2$ at a rate of $\sim 1.0\,{\rm yr}^{-1}$. Our model predicts that burst signals with a larger amplitude $h_{\rm burst} \sim 10^{-15}$ correspond to coalescence events of massive SMBH binaries with total mass $M_{\rm tot} \sim 10^8 M_\odot$ at low redshift $z \lesssim 1$ at a rate of $\sim 0.1\,{\rm yr}^{-1}$, whereas those with a smaller amplitude $h_{\rm burst} \sim 10^{-17}$ correspond to coalescence events of less massive SMBH binaries with total mass $M_{\rm tot} \sim 10^6 M_\odot$ at high redshift $z \gtrsim 3$.
|
arxiv:astro-ph/0404389
|
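The characteristic strain scaling quoted in the entry above, $h_c(f) \sim 10^{-16} (f/1\,\mu\mathrm{Hz})^{-2/3}$, can be evaluated directly; this sketch just checks the normalization and the $-2/3$ log-log slope of the power law, with the frequencies chosen for illustration.

```python
import math

# Characteristic strain of the inspiral background:
# h_c(f) ~ 1e-16 * (f / 1 uHz)^(-2/3), for f below ~1 uHz.
def h_c(f_hz: float) -> float:
    return 1e-16 * (f_hz / 1e-6) ** (-2.0 / 3.0)

f1, f2 = 1e-7, 1e-6
# The slope in log-log space recovers the -2/3 power-law index.
slope = (math.log(h_c(f2)) - math.log(h_c(f1))) / (math.log(f2) - math.log(f1))
print(h_c(1e-6), slope)  # h_c(1 uHz) = 1e-16, slope = -2/3
```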
We propose a robust calibration pipeline that optimises the selection of calibration samples for the estimation of calibration parameters that fit the entire scene. We minimise user error by automating the data selection process according to a metric, called variability of quality (VoQ), that gives a score to each set of calibration samples. We show that this VoQ score is correlated with the estimated calibration parameters' ability to generalise well to the entire scene, thereby overcoming the overfitting problems of existing calibration algorithms. Our approach has the benefits of simplifying the calibration process for practitioners of any expertise level and of providing an objective measure of quality for our calibration pipeline's input and output data. We additionally use a novel method of assessing the accuracy of the calibration parameters: it involves computing reprojection errors for the entire scene to ensure that the parameters are well fitted to all features in the scene. Our proposed calibration pipeline takes 90 s and obtains an average reprojection error of 1-1.2 cm, with a standard deviation of 0.4-0.5 cm, over 46 poses evenly distributed in a scene. This process has been validated by experimentation on a high-resolution, software-definable lidar, the Baraja Spectrum-Scan, and a low, fixed-resolution lidar, the Velodyne VLP-16. We have shown that despite the vast differences in lidar technologies, our proposed approach estimates robust calibration parameters for both. Our code and the data set used for this paper are made available as open source.
|
arxiv:2103.12287
|
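The selection step in the calibration entry above (score every candidate sample set, keep the best, then summarise scene-wide reprojection error) can be sketched generically. All names and numbers here are hypothetical placeholders: the real VoQ metric is defined in the paper, not by this `voq_score` stand-in.

```python
import statistics

# Hypothetical sketch of VoQ-driven sample selection: lower score = a sample
# set expected to generalise better to the whole scene.
def voq_score(sample_set):
    # placeholder metric: spread of per-sample quality values
    return statistics.pstdev(sample_set)

# Illustrative candidate sets of per-sample quality values.
candidates = [[0.9, 1.1, 1.0], [0.2, 1.9, 1.4], [1.0, 1.0, 1.1]]
best = min(candidates, key=voq_score)

# Scene-wide assessment: mean/std of reprojection errors over poses
# (illustrative values in cm, in the spirit of the reported 1-1.2 cm mean).
reproj_cm = [1.0, 1.2, 0.8, 1.1, 1.3, 0.9]
print(best, statistics.mean(reproj_cm), statistics.pstdev(reproj_cm))
```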
We show that if $Z$ is "homogeneously multifractal" (in a sense we define precisely), then $Z$ is the composition of a monofractal function $g$ with a time subordinator $f$ (i.e. $f$ is the integral of a positive Borel measure supported by $[0,1]$). When the initial function $Z$ is given, the monofractality exponent of the associated function $g$ is uniquely determined. We study in detail a classical example of multifractal functions $Z$, for which we exhibit the associated functions $g$ and $f$. This provides new insights into the understanding of multifractal behaviors of functions.
|
arxiv:0804.1887
|
We investigate the effects of noncommutativity between the position-position, position-momentum and momentum-momentum variables of a phase space corresponding to a modified cosmological model. We show that the existence of such noncommutativity results in a Moyal Poisson algebra between the phase space variables, in which the product law between functions is an $\alpha$-deformed product. We then transform the variables in such a way that the Poisson brackets between the dynamical variables take the form of a usual Poisson bracket, but this time with a noncommutative structure. For a power-law expression for the function of the Ricci scalar with which the action of the gravity model is modified, the exact solutions in the commutative and noncommutative cases are presented and compared. In terms of these solutions we address the issue of late-time acceleration in the cosmic evolution.
|
arxiv:1411.3623
|
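The deformed product structure described in the entry above can be illustrated with the standard Moyal star product truncated at first order in the deformation parameter; this is a generic sketch of that bracket structure, not the paper's specific $\alpha$-deformed product, and `theta` here plays the role of a generic noncommutativity parameter.

```python
import sympy as sp

x, p, theta = sp.symbols("x p theta")

def star(f, g):
    # Moyal star product to first order in theta:
    # f * g + (i*theta/2) * (df/dx * dg/dp - df/dp * dg/dx)
    return f * g + sp.I * theta / sp.Integer(2) * (
        sp.diff(f, x) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, x)
    )

# The star commutator of the phase-space variables is no longer zero:
commutator = sp.simplify(star(x, p) - star(p, x))
print(commutator)  # I*theta
```

This makes concrete the sense in which the ordinary pointwise product is replaced by a deformed one while the bracket of the transformed variables keeps its Poisson form with a noncommutative structure.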