text | source
---|---
we describe new families of the knizhnik-zamolodchikov-bernard (kzb) equations related to the wzw-theory corresponding to the adjoint $G$-bundles of different topological types over complex curves $\Sigma_{g,n}$ of genus $g$ with $n$ marked points. the bundles are defined by their characteristic classes: elements of $H^2(\Sigma_{g,n}, \mathcal{Z}(G))$, where $\mathcal{Z}(G)$ is the center of the simple complex lie group $G$. the kzb equations are the horizontality condition for the projectively flat connection (the kzb connection) defined on the bundle of conformal blocks over the moduli space of curves. the space of conformal blocks has been known to be decomposed into a few sectors corresponding to the characteristic classes of the underlying bundles. the kzb connection preserves these sectors. in this paper we construct the connection explicitly for elliptic curves with marked points and prove its flatness.
|
arxiv:1207.4386
|
we find a simple solution to the problem of probe laser light shifts in two - photon optical atomic clocks. we show that there exists a magic polarization at which the light shifts of the two atomic states involved in the clock transition are identical. we calculate the differential polarizability as a function of laser polarization for two - photon optical clocks based on neutral calcium and strontium, estimate the magic polarization angle for these clocks, and determine the extent to which probe laser light shifts can be suppressed. we show that the light shift and the two - photon excitation rate can be independently controlled using the probe laser polarization.
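the existence of a magic polarization can be illustrated with a toy model: take a differential polarizability with a scalar part plus a tensor part scaling as $(3\cos^2\theta - 1)$, and find the angle where it vanishes by bisection. the coefficients below are made-up illustrative numbers, not the calcium or strontium values computed in the paper:

```python
import math

# Toy model (illustrative numbers, not the Ca/Sr values from the paper):
# differential polarizability = scalar part + tensor part * (3 cos^2 theta - 1),
# which vanishes at a "magic" polarization angle.
D_SCALAR = 1.0   # hypothetical scalar differential polarizability (a.u.)
D_TENSOR = -1.5  # hypothetical tensor coefficient (a.u.)

def diff_polarizability(theta):
    return D_SCALAR + D_TENSOR * (3 * math.cos(theta) ** 2 - 1)

def magic_angle(lo=0.0, hi=math.pi / 2):
    # Bisection: diff_polarizability changes sign on [lo, hi] for these numbers.
    flo = diff_polarizability(lo)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if diff_polarizability(mid) * flo > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

theta_m = magic_angle()  # here: cos^2(theta_m) = 5/9
```

for these toy coefficients the root is at $\cos^2\theta_m = 5/9$; the paper's point is that such a cancellation angle exists for the real two-photon clock polarizabilities.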
|
arxiv:1812.10780
|
we discuss physical interpretation of $\Lambda$cdm cosmology from a machian model of the universe containing nothing but visible matter (ordinary matter, radiation). the friedmann equation can be derived from a machian definition of energy, whereby both kinetic and potential energy of a particle are related to all cosmic matter-energy within the particle's gravitational horizon. the distance to this horizon thus appears as a parameter in all forms of matter-energy density. from conservation of machian energy it follows that all different types of matter-energy are uniformly characterized by $\rho \propto 1/a$, i.e., by a constant deceleration $q = -1/2$. this coincides with relative densities $\Omega_{m} = 1/3$ and $\Omega_{\Lambda} = 2/3$ of $\Lambda$cdm. thus the machian cosmological model matches present relative densities of $\Lambda$cdm, without invoking dark components.
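the arithmetic behind the constant deceleration can be checked directly: if every density scales as $\rho \propto 1/a$, the friedmann equation $H^2 \propto \rho$ gives $\dot{a}^2 \propto a$, solved by $a(t) \propto t^2$, and the deceleration parameter $q = -\ddot{a}a/\dot{a}^2$ is then the constant $-1/2$ quoted above. a minimal numerical check:

```python
# If rho ∝ 1/a, Friedmann (H^2 ∝ rho) gives adot^2 ∝ a, solved by a(t) ∝ t^2.
# The deceleration parameter q = -addot * a / adot^2 is then constant at -1/2.
def scale(t):    return t ** 2     # a(t)
def scale_d(t):  return 2 * t      # da/dt
def scale_dd(t): return 2.0        # d^2a/dt^2

def deceleration(t):
    return -scale_dd(t) * scale(t) / scale_d(t) ** 2

# q is time-independent: -2 t^2 / (2t)^2 = -1/2 at every epoch
qs = [deceleration(t) for t in (0.5, 1.0, 7.3)]
```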
|
arxiv:1504.01924
|
biological living materials, such as animal bones and plant stems, are able to self - heal, regenerate, adapt and make decisions under environmental pressures. despite recent successful efforts to imbue synthetic materials with some of these remarkable functionalities, many emerging properties of complex adaptive systems found in biology remain unexplored in engineered living materials. here, we report on a three - dimensional printing approach that harnesses the emerging properties of fungal mycelium to create living complex materials that self - repair, regenerate and adapt to the environment while fulfilling an engineering function. hydrogels loaded with the fungus ganoderma lucidum are 3d printed into lattice architectures to enable mycelial growth in a balanced exploration and exploitation pattern that simultaneously promotes colonization of the gel and bridging of air gaps. to illustrate the potential of such living complex materials, we 3d print a robotic skin that is mechanically robust, self - cleaning, and able to autonomously regenerate after damage.
|
arxiv:2203.00976
|
even though a variety of methods have been proposed in the literature, efficient and effective latent-space control (i.e., control in a learned low-dimensional space) of physical systems remains an open challenge. we argue that a promising avenue is to leverage powerful and well-understood closed-form strategies from the control theory literature, such as potential-energy shaping, in combination with learned dynamics. we identify three fundamental shortcomings in existing latent-space models that have so far prevented this powerful combination: (i) they lack the mathematical structure of a physical system, (ii) they do not inherently conserve the stability properties of the real systems, (iii) these methods do not have an invertible mapping between input and latent-space forcing. this work proposes a novel coupled oscillator network (con) model that simultaneously tackles all these issues. more specifically, (i) we show analytically that con is a lagrangian system, i.e., it possesses well-defined potential and kinetic energy terms. then, (ii) we provide a formal proof of global input-to-state stability using lyapunov arguments. moving to the experimental side, we demonstrate that con reaches state-of-the-art performance when learning complex nonlinear dynamics of mechanical systems directly from images. an additional methodological innovation contributing to this goal is an approximated closed-form solution for efficient integration of network dynamics, which eases efficient training. we tackle (iii) by approximating the forcing-to-input mapping with a decoder that is trained to reconstruct the input based on the encoded latent-space force. finally, we show how these properties enable latent-space control. we use an integral-saturated pid with potential-force compensation and demonstrate high-quality performance on a soft robot using raw pixels as the only feedback information.
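as a minimal sketch of what "possessing well-defined potential and kinetic energy terms" buys — this is not the paper's con architecture — consider two unit masses coupled by linear springs, integrated with semi-implicit euler; because the system is lagrangian, the total energy $T + V$ stays near its initial value:

```python
# Minimal sketch (not the paper's CON): two unit masses coupled by linear
# springs, a Lagrangian system with kinetic energy T = |v|^2 / 2 and
# potential energy V = k1*x1^2/2 + k2*x2^2/2 + kc*(x1 - x2)^2/2.
K1, K2, KC = 1.0, 2.0, 0.5

def potential(x1, x2):
    return 0.5 * (K1 * x1**2 + K2 * x2**2 + KC * (x1 - x2)**2)

def forces(x1, x2):
    # f = -dV/dx for each coordinate
    return (-K1 * x1 - KC * (x1 - x2), -K2 * x2 + KC * (x1 - x2))

def energy(x1, x2, v1, v2):
    return 0.5 * (v1**2 + v2**2) + potential(x1, x2)

def simulate(steps=10000, dt=1e-3):
    x1, x2, v1, v2 = 1.0, -0.5, 0.0, 0.0
    for _ in range(steps):
        f1, f2 = forces(x1, x2)      # semi-implicit (symplectic) Euler:
        v1 += dt * f1; v2 += dt * f2  # update velocities first,
        x1 += dt * v1; x2 += dt * v2  # then positions with the new velocities
    return x1, x2, v1, v2
```

the symplectic update keeps the energy error bounded at o(dt), which is the kind of structural guarantee a latent model with a physical (lagrangian) parameterization inherits by construction.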
|
arxiv:2409.08439
|
newtonian cosmology is commonly used in astrophysical problems, because of its obvious simplicity when compared with general relativity. however it has inherent difficulties, the most obvious of which is the non - existence of a well - posed initial value problem. in this paper we investigate how far these problems are met by using the post - newtonian approximation in cosmology.
|
arxiv:gr-qc/9903056
|
we investigate experimentally both the amplitude and phase channels of the collective modes in the quasi-1d charge-density-wave (cdw) system K$_{0.3}$MoO$_3$, by combining (i) optical impulsive-raman pump-probe and (ii) terahertz time-domain spectroscopy (thz-tds), with high resolution and a detailed analysis of the full complex-valued spectra in both cases. this allows an unequivocal assignment of the observed bands to cdw modes across the thz range up to 9 thz. we revise and extend a time-dependent ginzburg-landau model to account for the observed temperature dependence of the modes, where the combination of both amplitude and phase modes allows one to robustly determine the bare-phonon and electron-phonon coupling parameters. while the coupling is indeed strongest for the lowest-energy phonon, dropping sharply for the immediately subsequent phonons, it grows back significantly for the higher-energy phonons, demonstrating their important role in driving the cdw formation. we also include a reassessment of our previous analysis of the lowest-lying phase modes, whereby assuming weaker electronic damping for the phase channel results in a qualitative picture more consistent with quantum-mechanical treatments of the collective modes, with a strongly coupled amplitudon and phason as the lowest modes.
|
arxiv:2303.08558
|
= = = constructing fields within a bigger field = = = fields can be constructed inside a given bigger container field. suppose given a field e, and a field f containing e as a subfield. for any element x of f, there is a smallest subfield of f containing e and x, called the subfield of f generated by x and denoted e(x). the passage from e to e(x) is referred to by adjoining an element to e. more generally, for a subset s ⊆ f, there is a minimal subfield of f containing e and s, denoted by e(s). the compositum of two subfields e and e′ of some field f is the smallest subfield of f containing both e and e′. the compositum can be used to construct the biggest subfield of f satisfying a certain property, for example the biggest subfield of f which is, in the language introduced below, algebraic over e. = = = field extensions = = = the notion of a subfield e ⊆ f can also be regarded from the opposite point of view, by referring to f being a field extension (or just extension) of e, denoted by f / e, and read "f over e". a basic datum of a field extension is its degree [f : e], i.e., the dimension of f as an e-vector space. it satisfies the formula [g : e] = [g : f] [f : e]. extensions whose degree is finite are referred to as finite extensions. the extensions c / r and f4 / f2 are of degree 2, whereas r / q is an infinite extension. = = = = algebraic extensions = = = = a pivotal notion in the study of field extensions f / e is that of algebraic elements. an element x ∈ f is algebraic over e if it is a root of a polynomial with coefficients in e, that is, if it satisfies a polynomial equation eₙxⁿ + eₙ₋₁xⁿ⁻¹ + ⋯ + e₁x + e₀ = 0, with eₙ, ..., e₀ in e, and eₙ ≠ 0. for example, the imaginary unit i in c is algebraic over r, and even over q, since it satisfies the equation i² + 1 = 0. a field extension in which every element of f is algebraic over e is called an algebraic extension. any finite extension is necessarily algebraic, as can be deduced from the above multiplicativity formula.
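for instance, q(sqrt(2)) — the subfield of r generated by q and sqrt(2) — has degree 2 over q, since every element is uniquely a + b·sqrt(2) with a, b rational. the sketch below represents elements as such pairs, checks that products and inverses stay inside the field, and verifies that sqrt(2) is algebraic over q (a root of x² − 2):

```python
from fractions import Fraction as F

# Elements of Q(sqrt(2)) represented as pairs (a, b) meaning a + b*sqrt(2),
# with a, b rational: the subfield of R generated by Q and sqrt(2).
def mul(p, q):
    a, b = p; c, d = q
    return (a * c + 2 * b * d, a * d + b * c)   # (a+b√2)(c+d√2)

def inv(p):
    a, b = p
    n = a * a - 2 * b * b   # nonzero for p != 0, since sqrt(2) is irrational
    return (a / n, -b / n)

sqrt2 = (F(0), F(1))
sq = mul(sqrt2, sqrt2)      # sqrt(2)^2 = 2, i.e. sqrt(2) is a root of x^2 - 2
x = (F(3), F(-2))           # 3 - 2*sqrt(2)
prod = mul(x, inv(x))       # every nonzero element is invertible: x * x^{-1} = 1
```

closure under multiplication and inversion is exactly what makes q(sqrt(2)) a field; its degree-2 basis {1, sqrt(2)} is what the pair representation encodes.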
|
https://en.wikipedia.org/wiki/Field_(mathematics)
|
we present pyxtal_ff, a package based on the python programming language, for developing machine learning potentials (mlps). the aim of pyxtal_ff is to promote the application of atomistic simulations by providing several choices of structural descriptors and machine learning regressions in one platform. based on the given choice of structural descriptors (including the atom-centered symmetry functions, embedded atom density, so4 bispectrum, and smooth so3 power spectrum), pyxtal_ff can train the mlps with either a generalized linear regression or a neural network model, by simultaneously minimizing the errors of energy / forces / stress tensors in comparison with the data from ab initio simulation. the trained mlp model from pyxtal_ff is interfaced with the atomic simulation environment (ase) package, which allows different types of light-weight simulations such as geometry optimization, molecular dynamics simulation, and physical property prediction. finally, we illustrate the performance of pyxtal_ff by applying it to investigate several material systems, including bulk sio2, the high-entropy alloy nbmotaw, and elemental pt for general purposes. full documentation of pyxtal_ff is available at https://pyxtal-ff.readthedocs.io.
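the "generalized linear regression" branch can be illustrated schematically: model the energy as linear in a structural descriptor and fit the weight by least squares. this is a toy sketch of the idea only — the descriptor, data, and function names here are invented for illustration and are not the pyxtal_ff api:

```python
# Toy sketch of the generalized-linear-regression idea behind MLP fitting
# (illustrative only; not the PyXtal_FF API): total energy modelled as a
# linear function of a per-structure descriptor, weight fit by least squares.
def fit_weight(descriptors, energies):
    # w* = argmin_w sum_i (E_i - w * D_i)^2  =>  w* = sum(E*D) / sum(D*D)
    num = sum(e * d for e, d in zip(energies, descriptors))
    den = sum(d * d for d in descriptors)
    return num / den

# Synthetic "training data": structures whose reference energy is 1.7 * descriptor.
D = [0.5, 1.0, 2.0, 3.5]
E = [1.7 * d for d in D]
w = fit_weight(D, E)   # recovers 1.7 on noise-free data
```

in a real mlp the scalar descriptor is replaced by high-dimensional per-atom descriptors (symmetry functions, bispectrum, ...) and the loss also includes force and stress residuals, but the least-squares structure is the same.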
|
arxiv:2007.13012
|
we assign to each young diagram $\lambda$ a subset $\mathcal{b}_{\lambda'}$ of the collection of garsia-stanton descent monomials, and prove that it determines a basis of the garsia-procesi module $r_\lambda$, whose graded character is the hall-littlewood polynomial $\tilde{h}_{\lambda}[x;t]$. this basis is a major index analogue of the basis $\mathcal{b}_\lambda \subset r_\lambda$ defined by certain recursions due to garsia and procesi, in the same way that the descent basis is related to the artin basis of the coinvariant algebra $r_n$, which in fact corresponds to the case $\lambda = 1^n$. by anti-symmetrizing a subset of this basis with respect to the corresponding young subgroup under the springer action, we obtain a basis in the parabolic case, as well as a corresponding formula for the expansion of $\tilde{h}_{\lambda}[x;t]$. despite a similar appearance, it does not appear obvious how to connect these formulas to the specialization of the modified macdonald formula of haglund, haiman and loehr at $q = 0$.
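the coinvariant-algebra case $\lambda = 1^n$ mentioned above admits a quick computational sanity check: the descent basis of $r_n$ is graded by the major index, so its hilbert series $\sum_{w \in s_n} q^{\mathrm{maj}(w)}$ must equal the $q$-factorial $[n]_q!$. a short brute-force verification:

```python
from itertools import permutations

# The descent basis of the coinvariant algebra R_n is indexed by permutations,
# with degrees given by the major index; its Hilbert series is the q-factorial.
def maj(p):
    # major index: sum of positions i (1-indexed) where p has a descent
    return sum(i + 1 for i in range(len(p) - 1) if p[i] > p[i + 1])

def hilbert_series(n):
    # coefficient list of sum_{w in S_n} q^maj(w)
    coeffs = [0] * (n * (n - 1) // 2 + 1)
    for p in permutations(range(n)):
        coeffs[maj(p)] += 1
    return coeffs

def q_factorial(n):
    # [n]_q! = prod_{k=1}^{n} (1 + q + ... + q^{k-1}), as a coefficient list
    poly = [1]
    for k in range(1, n + 1):
        poly = [sum(poly[j - i] for i in range(k) if 0 <= j - i < len(poly))
                for j in range(len(poly) + k - 1)]
    return poly
```

for n = 3 both sides give the coefficient list [1, 2, 2, 1], i.e. $1 + 2q + 2q^2 + q^3$.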
|
arxiv:2403.16278
|
we consider advection of small inertial particles by a random fluid flow with a strong steady shear component. it is known that inertial particles suspended in a random flow can exhibit clusterization even if the flow is incompressible. we study this phenomenon through statistical characteristics of a separation vector between two particles. as usual in a random flow, moments of distance between particles grow exponentially. we calculate the rates of this growth using the saddle - point approximation in the path - integral formalism. we also calculate correction to the lyapunov exponent due to small inertia by a perturbation theory expansion.
|
arxiv:1108.2691
|
altermagnetism has surfaced as a novel magnetic phase, bridging the properties of ferro- and anti-ferromagnetism. the momentum-dependent spin-splitting observed in these materials reflects their unique symmetry characteristics, which also establish the conditions for chiral magnons to emerge. here we provide the first direct experimental evidence for a chiral magnon in the altermagnetic candidate mnte, revealed by circular-dichroism resonant inelastic x-ray scattering (cd-rixs). this mode, which we term a chiral altermagnon, exhibits a distinct momentum dependence consistent with the proposed altermagnetic $g$-wave symmetry of mnte. our results reveal a new class of magnetic excitations, demonstrating how altermagnetic order shapes spin dynamics, and pave the way for advances in spintronic and quantum technologies.
|
arxiv:2501.17380
|
the subject of our thesis is the uniqueness theory of meromorphic functions, and it is devoted to problems concerning the bruck conjecture, set sharing and related topics. the tool we used in our discussions is the classical nevanlinna theory of meromorphic functions. in 1996, in order to find the relation between an entire function and its derivative when they share one value cm, a famous conjecture was proposed by r. bruck. since then the conjecture and its analogous results have been investigated by many researchers, and continuous efforts have been put in by them. in our thesis, we have obtained conclusions similar to those of bruck for two differential polynomials, which in turn improve several existing results under different sharing environments. a number of examples have been exhibited to justify the necessity or sharpness of some conditions and hypotheses used in the thesis. as a variation of value sharing, f. gross first introduced the idea of set sharing by proposing a problem which has later become popular as the gross problem. inspired by the gross problem, the study of set sharing problems was started, and it later shifted towards the characterization of the polynomial backbone of different unique range sets. in our study, we introduced some new types of unique range sets and, at the same time, further explored the anatomy of the polynomials generating these unique range sets, as well as connected the bruck conjecture with the gross problem.
|
arxiv:1711.08808
|
magnetic fields at the surface of a few early-type stars have been directly detected. these fields have magnitudes between a few hundred g and a few kg. in one case, evidence of magnetic braking has been found. we investigate the effects of magnetic braking on the evolution of rotating ($\upsilon_{\rm ini} = 200$ km s$^{-1}$) 10 m$_\odot$ stellar models at solar metallicity during the main-sequence (ms) phase. the magnetic braking process is included in our stellar models according to the formalism deduced from 2d mhd simulations of magnetic wind confinement by ud-doula and co-workers. various assumptions are made regarding both the magnitude of the magnetic field and the efficiency of the angular momentum transport mechanisms in the stellar interior. when magnetic braking occurs in models with differential rotation, a strong and rapid mixing is obtained at the surface, accompanied by a rapid decrease in the surface velocity. such a process might account for some ms stars showing strong mixing and low surface velocities. when solid-body rotation is imposed in the interior, the star is slowed down so rapidly that surface enrichments are smaller than in similar models with no magnetic braking. in both kinds of models (differentially or uniformly rotating), magnetic braking due to a field of a few 100 g significantly reduces the angular momentum of the core during the ms phase. this reduction is much greater in solid-body rotating models.
|
arxiv:1011.5795
|
we relax the strong rationality assumption for the agents in the paradigmatic kyle model of price formation, thereby reconciling the framework of asymmetrically informed traders with the adaptive market hypothesis, where agents use inductive rather than deductive reasoning. building on these ideas, we propose a stylised model able to account parsimoniously for a rich phenomenology, ranging from excess volatility to volatility clustering. while characterising the excess - volatility dynamics, we provide a microfoundation for garch models. volatility clustering is shown to be related to the self - excited dynamics induced by traders ' behaviour, and does not rely on clustered fundamental innovations. finally, we propose an extension to account for the fragile dynamics exhibited by real markets during flash crashes.
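the garch mechanism invoked above is easy to demonstrate: in a garch(1,1) process the conditional variance feeds back on past squared returns, so calm and turbulent periods persist. the sketch below (illustrative parameter values, not fitted to any market) shows the statistical signature of volatility clustering — raw returns are serially uncorrelated while squared returns are positively autocorrelated:

```python
import random
import statistics

# Minimal GARCH(1,1) sketch: sigma_t^2 = omega + alpha*r_{t-1}^2 + beta*sigma_{t-1}^2.
# High persistence (alpha + beta close to 1) produces volatility clustering.
OMEGA, ALPHA, BETA = 0.05, 0.10, 0.85

def simulate(n, seed=0):
    rng = random.Random(seed)
    returns = []
    var = OMEGA / (1 - ALPHA - BETA)   # start at the unconditional variance (= 1)
    r = 0.0
    for _ in range(n):
        var = OMEGA + ALPHA * r * r + BETA * var
        r = rng.gauss(0.0, var ** 0.5)
        returns.append(r)
    return returns

def lag1_autocorr(xs):
    m = statistics.fmean(xs)
    num = sum((a - m) * (b - m) for a, b in zip(xs, xs[1:]))
    den = sum((a - m) ** 2 for a in xs)
    return num / den
```

with alpha + beta = 0.95 the process is stationary, with unconditional variance omega / (1 − alpha − beta) = 1; the clustering lives entirely in the variance dynamics, not in the gaussian innovations, mirroring the self-excited mechanism described above.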
|
arxiv:2206.06764
|
using total variation based energy minimisation, we address the recovery of a blurred (convolved) one-dimensional (1d) barcode. we consider functionals defined over all possible barcodes with fidelity to a convolved signal of a barcode, regularised by total variation. our fidelity terms consist of the $l^2$ distance either directly to the measured signal or preceded by deconvolution. key length scales and parameters are the x-dimension of the underlying barcode, the size of the supports of the convolution and deconvolution kernels, and the fidelity parameter. for all functionals, we establish regimes (sufficient conditions) wherein the underlying barcode is the unique minimiser. we also present some numerical experiments suggesting that these sufficient conditions are not optimal and that the energy methods are quite robust to significant blurring.
|
arxiv:0910.2494
|
the concept of effective order is a popular methodology in the deterministic literature for the construction of efficient and accurate integrators for differential equations over long times. the idea is to enhance the accuracy of a numerical method by using an appropriate change of variables called the processor. we show that this technique can be extended to the stochastic context for the construction of new high order integrators for the sampling of the invariant measure of ergodic systems. the approach is illustrated with modifications of the stochastic $ \ theta $ - method applied to brownian dynamics, where postprocessors achieving order two are introduced. numerical experiments, including stiff ergodic systems, illustrate the efficiency and versatility of the approach.
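the flavour of the construction can be seen on a linear test problem: for brownian dynamics on the ornstein-uhlenbeck potential, dx = −x dt + sqrt(2) dw, the invariant measure is n(0, 1), and the stochastic theta-method admits a closed-form update. the sketch below (not the paper's postprocessed integrators) checks that theta = 1/2 reproduces the exact invariant variance on this linear problem:

```python
import random
import statistics

# Stochastic theta-method for dX = -X dt + sqrt(2) dW (invariant measure N(0,1)).
# For linear drift the implicit step can be solved in closed form:
#   X_{n+1} = a X_n + b xi_n,  a = (1-(1-theta)dt)/(1+theta dt),
#                              b = sqrt(2 dt)/(1+theta dt).
def sample_invariant(theta, dt, steps, seed=1):
    rng = random.Random(seed)
    a = (1 - (1 - theta) * dt) / (1 + theta * dt)
    b = (2 * dt) ** 0.5 / (1 + theta * dt)
    x, out = 0.0, []
    for _ in range(steps):
        x = a * x + b * rng.gauss(0.0, 1.0)
        out.append(x)
    return out

# For theta = 1/2 the chain's stationary variance b^2/(1-a^2) equals 1 exactly
# on this linear problem; for theta = 0 (Euler-Maruyama) it is biased by O(dt).
xs = sample_invariant(0.5, 0.1, 200000)
```

this exactness is special to linear drift; the paper's point is that postprocessors can lift the order of the sampled invariant measure for general ergodic systems.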
|
arxiv:1411.3134
|
we obtained extensive narrowband photoelectric photometry of comet 21p / giacobini - zinner with observations spanning 33 years. the original data from 1985 ( schleicher et al. 1987 ) were re - reduced and are presented along with data from three additional apparitions including 2018 / 19. the original conclusion regarding giacobini - zinner ' s chemical composition remains unchanged, with it having a 4 - 6x depletion in the carbon - chain molecules c2 and c3, and in nh, as compared with both oh and cn. the comet continues to exhibit a large asymmetry in production rates as a function of time and heliocentric distance, with production reaching a peak 3 - 5 weeks prior to perihelion. all species, including dust, follow the same general production rate curve each apparition, and the carbon - bearing species are always very similar to one another. however, oh and nh each differ in detail from the carbon - bearing species, implying somewhat varied composition between source regions. longer term, there are only small secular changes among the apparitions before and near perihelion, but larger changes are evident as the comet recedes from the sun, suggestive of a progressive precession of the rotation axis.
|
arxiv:2204.12435
|
. an expression with no variables would define a constant function. in this way, two expressions are said to be equivalent if, for each combination of values for the free variables, they have the same output, i.e., they represent the same function. the equivalence between two expressions is called an identity and is sometimes denoted with ≡. for example, in the expression $\sum_{n=1}^{3}(2nx)$, the variable n is bound, and the variable x is free. this expression is equivalent to the simpler expression 12x; that is, $\sum_{n=1}^{3}(2nx) \equiv 12x$. the value for x = 3 is 36, which can be denoted $\sum_{n=1}^{3}(2nx)\big|_{x=3} = 36$. = = = polynomial evaluation = = = a polynomial consists of variables and coefficients, involves only the operations of addition, subtraction, multiplication and exponentiation to nonnegative integer powers, and has a finite number of terms. the problem of polynomial evaluation arises frequently in practice. in computational geometry, polynomials are used to compute function approximations using taylor polynomials. in cryptography and hash tables, polynomials are used to compute k-independent hashing. in the former case, polynomials are evaluated using floating-point arithmetic, which is not exact. thus different schemes for the evaluation will, in general, give slightly different answers. in the latter case, the polynomials are usually evaluated in a finite field, in which case the answers are always exact. for evaluating the univariate polynomial $a_n x^n + a_{n-1} x^{n-1} + \cdots + a_0$, the most naive method would use $n$ multiplications to compute $a_n x^n$, use $n-1$
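the naive scheme being described can be contrasted with horner's rule, which evaluates a degree-n polynomial with only n multiplications and n additions by nesting:

```python
# Horner's rule evaluates a_n x^n + ... + a_0 with n multiplications by
# rewriting it as (...((a_n x + a_{n-1}) x + a_{n-2}) x ...) + a_0.
def horner(coeffs, x):
    # coeffs = [a_n, a_{n-1}, ..., a_0], highest degree first
    acc = 0
    for c in coeffs:
        acc = acc * x + c
    return acc

# The bound-variable example from the text: sum_{n=1}^{3} 2nx == 12x,
# i.e. the polynomial 12x, which evaluates to 36 at x = 3.
assert horner([12, 0], 3) == sum(2 * n * 3 for n in range(1, 4)) == 36
```

in exact (e.g. finite-field) arithmetic horner's rule gives the same answer as any other scheme; in floating point the nesting order changes the rounding, which is exactly the discrepancy between evaluation schemes noted above.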
|
https://en.wikipedia.org/wiki/Expression_(mathematics)
|
we investigate the spin excitation spectra in chiral and polar magnets by the linear spin - wave theory for an effective spin model with symmetric and antisymmetric long - range interactions. in one dimension, we obtain the analytic form of the dynamical spin structure factor for proper - screw and cycloidal helical spin states with uniform twists, which shows a gapless mode with strong intensity at the helical wave number. when introducing spin anisotropy in the symmetric interactions, we numerically show that the stable spin spirals become elliptically anisotropic with nonuniform twists and the spin excitation is gapped. in higher dimensions, we find that similar anisotropy stabilizes multiple - $ q $ spin states, such as vortex crystals and hedgehog lattices. we show that the anisotropy in these states manifests itself in the dynamical spin structure factor : a strong intensity in the transverse components to the wave number appears only when the helical wave vector and the corresponding easy axis are perpendicular to each other. our findings could be useful not only to identify the spin structure but also to deduce the stabilization mechanism by inelastic neutron scattering measurements.
|
arxiv:2109.05628
|
we give a proof of the formality conjecture of kaledin and lehn : on a complex projective k3 surface, the dg algebra rhom ( f, f ) is formal for any sheaf f polystable with respect to an ample line bundle. our main tool is the uniqueness of dg enhancement of the bounded derived category of coherent sheaves. we also extend the formality result to derived objects that are polystable with respect to a generic bridgeland stability condition.
|
arxiv:1803.03974
|
we explore an instantaneous decoherence correction ( idc ) approach for the decoherence and energy relaxation in the quantum - classical dynamics of charge transport in organic semiconducting crystals. these effects, originating from environmental fluctuations, are essential ingredients of the carrier dynamics. the idc is carried out by measurement - like operations in the adiabatic representation. while decoherence is inherent in the idc, energy relaxation is taken into account by considering the detailed balance through the introduction of energy - dependent reweighing factors, which could be either boltzmann ( idc - bm ) or miller - abrahams ( idc - ma ) type. for a non - diagonal electron - phonon coupling model, it is shown that the idc tends to enhance diffusion while energy relaxation weakens this enhancement. as expected, both the idc - bm and idc - ma achieve a near - equilibrium distribution at finite temperatures in the diffusion process, while the ehrenfest dynamics renders system tending to infinite temperature limit. the resulting energy relaxation times with the two kinds of factors lie in different regimes and exhibit different dependence on temperature, decoherence time and electron - phonon coupling strength, due to different dominant relaxation process.
|
arxiv:1505.02234
|
we give a proof of the well-known fact that the $\ok$-module $\e$ of smooth functions is flat by means of residue theory and integral formulas. a variant of the proof gives a related statement for classes of functions of lower regularity. we also prove a brian\c{c}on-skoda type theorem for ideals of the form $\e a$, where $a$ is an ideal in $\ok$.
|
arxiv:1905.04927
|
in this paper we give explicit formulae in momentum and coordinate space for the three - nucleon potentials due to $ \ rho $ and $ \ pi $ meson exchange, derived from off - mass - shell meson - nucleon scattering amplitudes which are constrained by the symmetries of qcd and by the experimental data. those potentials have already been applied to nuclear matter calculations. here we display additional terms which appear to be the most important for nuclear structure. the potentials are decomposed in a way that separates the contributions of different physical mechanisms involved in the meson - nucleon amplitudes. the same type of decomposition is presented for the $ \ pi - \ pi $ tm force : the $ \ delta $, the chiral symmetry breaking and the nucleon pair terms are isolated.
|
arxiv:nucl-th/9305017
|
apart from being so far the only known binary multiferroic compound, cuo has a much higher transition temperature into the multiferroic state, 230 k, than any other known material in which the electric polarization is induced by spontaneous magnetic order, typically lower than 100 k. although the magnetically induced ferroelectricity of cuo is firmly established, no magnetoelectric effect has been observed so far as direct crosstalk between bulk magnetization and electric polarization counterparts. here we demonstrate that high magnetic fields of about 50 t are able to suppress the helical modulation of the spins in the multiferroic phase and dramatically affect the electric polarization. furthermore, just below the spontaneous transition from commensurate ( paraelectric ) to incommensurate ( ferroelectric ) structures at 213 k, even modest magnetic fields induce a transition into the incommensurate structure and then suppress it at higher field. thus, remarkable hidden magnetoelectric features are uncovered, establishing cuo as prototype multiferroic with abundance of competitive magnetic interactions.
|
arxiv:1601.04607
|
using exact results, we determine the complex-temperature phase diagrams of the 2d ising model on three regular heteropolygonal lattices, $(3\cdot6\cdot3\cdot6)$ (kagom\'{e}), $(3\cdot12^2)$, and $(4\cdot8^2)$ (bathroom tile), where the notation denotes the regular $n$-sided polygons adjacent to each vertex. we also work out the exact complex-temperature singularities of the spontaneous magnetisation. a comparison with the properties on the square, triangular, and hexagonal lattices is given. in particular, we find the first case where, even for isotropic spin-spin exchange couplings, the nontrivial non-analyticities of the free energy of the ising model lie in a two-dimensional, rather than one-dimensional, algebraic variety in the $z = e^{-2k}$ plane.
|
arxiv:hep-lat/9503005
|
integrated mathematics is the term used in the united states to describe the style of mathematics education which integrates many topics or strands of mathematics throughout each year of secondary school. each math course in secondary school covers topics in algebra, geometry, trigonometry and functions. nearly all countries throughout the world, except the united states, normally follow this type of integrated curriculum. in the united states, topics are usually integrated throughout elementary school up to the seventh or sometimes eighth grade. beginning with high school level courses, topics are usually separated so that one year a student focuses entirely on algebra ( if it was not already taken in the eighth grade ), the next year entirely on geometry, then another year of algebra ( sometimes with trigonometry ), and later an optional fourth year of precalculus or calculus. precalculus is the exception to the rule, as it usually integrates algebra, trigonometry, and geometry topics. statistics may be integrated into all the courses or presented as a separate course. new york state began using integrated math curricula in the 1980s, but recently returned to a traditional curriculum. a few other localities in the united states have also tried such integrated curricula, including georgia, which mandated them in 2008 but subsequently made them optional. more recently, a few other states have mandated that all districts change to integrated curricula, including north carolina, west virginia and utah. some districts in other states, including california, have either switched or are considering switching to an integrated curriculum. under the common core standards adopted by most states in 2012, high school mathematics may be taught using either a traditional american approach or an integrated curriculum. the only difference would be the order in which the topics are taught. 
supporters of using integrated curricula in the united states believe that students will be able to see the connections between algebra and geometry better in an integrated curriculum. general mathematics is another term for a mathematics course organized around different branches of mathematics, with topics arranged according to the main objective of the course. when applied to primary education, the term general mathematics may encompass mathematical concepts more complex than basic arithmetic, like number notation, addition and multiplication tables, fractions and related operations, measurement units. when used in context of higher education, the term may encompass mathematical terminology and concepts, finding and applying appropriate techniques to solve routine problems, interpreting and representing practical information given in various forms, interpreting and using mathematical models, and constructing mathematical arguments to solve familiar and unfamiliar problems. = = references = =
|
https://en.wikipedia.org/wiki/Integrated_mathematics
|
transition metal surfaces catalyse a broad range of thermally - activated reactions involving carbon - containing - species - - from atomic carbon to small hydrocarbons or organic molecules, and polymers. these reactions yield well - separated phases, for instance graphene and the metal surface, or, on the contrary, alloyed phases, such as metal carbides. here, we investigate carbon phases on a rhenium ( 0001 ) surface, where the former kind of phase can transform into the latter. we find that this transformation occurs with increasing annealing time, which is hence not suitable to increase the quality of graphene. our scanning tunneling spectroscopy and reflection high - energy electron diffraction analysis reveal that repeated short annealing cycles are best suited to increase the lateral extension of the structurally coherent graphene domains. using the same techniques and with the support of density functional theory calculations, we next unveil, in real space, the symmetry of the many variants ( two six - fold families ) of a rhenium surface carbide observed with diffraction since the 1970s, and finally propose models of the atomic details. one of these models, which nicely matches the microscopy observations, consists of parallel rows of eight aligned carbon trimers with a so - called $ ( 7 \ times \ sqrt { \ mathrm { 19 } } ) $ unit cell with respect to re ( 0001 ).
|
arxiv:2011.09171
|
this chapter presents joint interference suppression and power allocation algorithms for ds - cdma and mimo networks with multiple hops and amplify - and - forward and decode - and - forward ( df ) protocols. a scheme for joint allocation of power levels across the relays and linear interference suppression is proposed. we also consider another strategy for joint interference suppression and relay selection that maximizes the diversity available in the system. simulations show that the proposed cross - layer optimization algorithms obtain significant gains in capacity and performance over existing schemes.
|
arxiv:1301.5912
|
we introduce rvsnupy, a new python package designed to measure spectroscopic redshifts. based on inverse - variance weighted cross - correlation, rvsnupy determines the redshifts by comparing observed spectra with various rest - frame template spectra. we test the performance of rvsnupy based on ~ 6000 objects in the hectomap redshift survey observed with both sdss and mmt / hectospec. we demonstrate that a slight redshift offset ( ~ 40 km / s ) between sdss and mmt / hectospec measurements reported from previous studies results from the small offsets in the redshift template spectra used for sdss and hectospec reductions. we construct the universal set of template spectra, including empirical sdss template spectra, carefully calibrated to the rest frame. our test for the hectomap objects with duplicated observations shows that rvsnupy with the universal template spectra yields the homogeneous redshift from the spectra obtained with different spectrographs. we highlight that rvsnupy is a powerful redshift measurement tool for current and future large - scale spectroscopy surveys, including a - spec, desi, 4most, and subaru / pfs.
|
arxiv:2505.01710
|
with the help of the f - basis provided by the drinfeld twist or factorizing f - matrix for the open xxz spin chain with non - diagonal boundary terms, we obtain the determinant representations of the scalar products of bethe states of the model.
|
arxiv:1011.4719
|
by establishing an invariant set ( 1. 11 ) for the prandtl equation in crocco transformation, we prove orbital and asymptotic stability of blasius - like steady states against oleinik ' s monotone solutions.
|
arxiv:2208.00569
|
a brief review is given of the ways of testing sugra unified models and a class of string models using data from precision electroweak experiments, yukawa unification constraints, and constraints from dark matter experiments. models discussed in detail include msugra, extended sugra model with non - universalities within so ( 10 ) grand unification, and effective theories with modular invariant soft breaking within a generic heterotic string framework. the implications of the hyperbolic branch including the focus point and inversion regions for the discovery of supersymmetry in collider experiments and for the detection of dark matter in the direct detection experiments are also discussed.
|
arxiv:hep-ph/0412168
|
a key question in galaxy evolution has been the importance of the apparent ` clumpiness ' of high redshift galaxies. until now, this property has been primarily investigated in the rest - frame uv, limiting our understanding of their relevance. are they short - lived, or are they associated with more long - lived massive structures that are part of the underlying stellar disks? we use jwst / nircam imaging from ceers to explore the connection between the presence of these ` clumps ' in a galaxy and its overall stellar morphology, in a mass - complete ( $ log \, m _ { * } / m _ { \ odot } > 10. 0 $ ) sample of galaxies at $ 1. 0 < z < 2. 0 $. exploiting the uninterrupted access to rest - frame optical and near - ir light, we simultaneously map the clumps in galactic disks across our wavelength coverage, along with measuring the distribution of stars among their bulges and disks. firstly, we find that the clumps are not limited to the rest - frame uv and optical, but are also apparent in the near - ir, with $ \ sim 60 \, \ % $ spatial overlap. this rest - frame near - ir detection indicates that clumps also feature in the stellar - mass distribution of the galaxy. a secondary consequence is that they are hence expected to increase the dynamical friction within galactic disks, leading to gas inflow. we find a strong negative correlation between how clumpy a galaxy is and the strength of its bulge. this firmly suggests an evolutionary connection, either through clumps driving bulge growth, or the bulge stabilizing the galaxy against clump formation, or a combination of the two. finally, we find evidence of this correlation differing from the rest - frame optical to the near - ir, which could suggest a combination of varying formation modes for the clumps.
|
arxiv:2309.05737
|
we describe a family of architectures to support transductive inference by allowing memory to grow to a finite but a - priori unknown bound while making efficient use of finite resources for inference. current architectures use such resources to represent data either eidetically over a finite span ( " context " in transformers ), or fading over an infinite span ( in state space models, or ssms ). recent hybrid architectures have combined eidetic and fading memory, but with limitations that do not allow the designer or the learning process to seamlessly modulate the two, nor to extend the eidetic memory span. we leverage ideas from stochastic realization theory to develop a class of models called b ' mojo to seamlessly combine eidetic and fading memory within an elementary composable module. the overall architecture can be used to implement models that can access short - term eidetic memory " in - context, " permanent structural memory " in - weights, " fading memory " in - state, " and long - term eidetic memory " in - storage " by natively incorporating retrieval from an asynchronously updated memory. we show that transformers, existing ssms such as mamba, and hybrid architectures such as jamba are special cases of b ' mojo and describe a basic implementation, to be open sourced, that can be stacked and scaled efficiently in hardware. we test b ' mojo on transductive inference tasks, such as associative recall, where it outperforms existing ssms and hybrid models ; as a baseline, we test ordinary language modeling where b ' mojo achieves perplexity comparable to similarly - sized transformers and ssms up to 1. 4b parameters, while being up to 10 % faster to train. finally, we show that b ' mojo ' s ability to modulate eidetic and fading memory results in better inference on longer sequences tested up to 32k tokens, four - fold the length of the longest sequences seen during training.
|
arxiv:2407.06324
|
affine deligne - lusztig varieties are analogs of deligne - lusztig varieties in the context of an affine root system. we prove a conjecture stated in the paper arxiv : 0805. 0045v4 by haines, kottwitz, reuman, and the first named author, about the question which affine deligne - lusztig varieties ( for a split group and a basic $ \ sigma $ - conjugacy class ) in the iwahori case are non - empty. if the underlying algebraic group is a classical group and the chosen basic $ \ sigma $ - conjugacy class is the class of $ b = 1 $, we also prove the dimension formula predicted in op. cit. in almost all cases.
|
arxiv:1006.2291
|
this paper studies uncertainty set estimation for unknown linear systems. uncertainty sets are crucial for the quality of robust control since they directly influence the conservativeness of the control design. departing from the confidence region analysis of least squares estimation, this paper focuses on set membership estimation ( sme ). though good numerical performances have attracted applications of sme in the control literature, the non - asymptotic convergence rate of sme for linear systems remains an open question. this paper provides the first convergence rate bounds for sme and discusses variations of sme under relaxed assumptions. we also provide numerical results demonstrating sme ' s practical promise.
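the flavor of set membership estimation is easy to convey in one dimension. the sketch below is our own toy construction, not the algorithm analyzed in the paper: for a scalar system x_{t+1} = a * x_t + w_t with bounded noise |w_t| <= eps, each observed pair constrains the unknown parameter a to an interval, and the membership set is the running intersection of those intervals.

```python
import random

random.seed(1)
a_true, eps = 0.8, 0.1
lo, hi = -10.0, 10.0          # prior bounds on the unknown parameter a
x = 1.0
for _ in range(50):
    w = random.uniform(-eps, eps)
    x_next = a_true * x + w
    if x != 0:
        # |x_next - a * x| <= eps gives an interval for a
        # (endpoints swap order depending on the sign of x)
        b1, b2 = (x_next - eps) / x, (x_next + eps) / x
        lo, hi = max(lo, min(b1, b2)), min(hi, max(b1, b2))
    x = x_next
print(lo <= a_true <= hi)  # True: the true parameter is never excluded
```

by construction the true parameter always survives the intersection; how fast such sets concentrate is exactly the non - asymptotic question the paper addresses.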
|
arxiv:2309.14648
|
the weyl semimetals are topologically protected from a gap opening against weak disorder in three dimensions. however, a strong disorder drives this relativistic semimetal through a quantum transition towards a diffusive metallic phase characterized by a finite density of states at the band crossing. this transition is usually described by a perturbative renormalization group in $ d = 2 + \ varepsilon $ of a $ u ( n ) $ gross - neveu model in the limit $ n \ to 0 $. unfortunately, this model is not multiplicatively renormalizable in $ 2 + \ varepsilon $ dimensions : an infinite number of relevant operators are required to describe the critical behavior. hence its use in a quantitative description of the transition beyond one - loop is at least questionable. we propose an alternative route, building on the correspondence between the gross - neveu and gross - neveu - yukawa models developed in the context of high energy physics. it results in a model of weyl fermions with a random non - gaussian imaginary potential which allows one to study the critical properties of the transition within a $ d = 4 - \ varepsilon $ expansion. we also discuss the characterization of the transition by the multifractal spectrum of wave functions.
|
arxiv:1605.02009
|
mobile devices and the internet of things ( iot ) devices nowadays generate a large amount of heterogeneous spatial - temporal data. it remains a challenging problem to model the spatial - temporal dynamics under privacy concerns. federated learning ( fl ) has been proposed as a framework to enable model training across distributed devices without sharing original data, which reduces privacy concerns. personalized federated learning ( pfl ) methods further address the data heterogeneity problem. however, these methods do not consider the natural spatial relations among nodes. for the sake of modeling spatial relations, graph neural network ( gnn ) based fl approaches have been proposed, but dynamic spatial - temporal relations among edge nodes are not taken into account. several approaches model spatial - temporal dynamics in a centralized environment, while less effort has been made under the federated setting. to overcome these challenges, we propose a novel federated adaptive spatial - temporal attention ( fedasta ) framework to model the dynamic spatial - temporal relations. on the client node, fedasta extracts temporal relations and trend patterns from the decomposed terms of the original time series. then, on the server node, fedasta utilizes trend patterns from clients to construct an adaptive temporal - spatial aware graph which captures the dynamic correlation between clients. besides, we design a masked spatial attention module with both a static graph and the constructed adaptive graph to model spatial dependencies among clients. extensive experiments on five real - world public traffic flow datasets demonstrate that our method achieves state - of - the - art performance in the federated scenario. in addition, the experiments made in a centralized setting show the effectiveness of our novel adaptive graph construction approach compared with other popular dynamic spatial - temporal aware methods.
|
arxiv:2405.13090
|
we use the relativistic hartree - fock method, many - body perturbation theory and configuration - interaction method to calculate the dependence of atomic transition frequencies on the fine structure constant, alpha. the results of these calculations will be used in the search for variation of the fine structure constant in quasar absorption spectra.
|
arxiv:physics/0404008
|
we study the accuracy of differentially private mechanisms in the continual release model. a continual release mechanism receives a sensitive dataset as a stream of $ t $ inputs and produces, after receiving each input, an accurate output on the obtained inputs. in contrast, a batch algorithm receives the data as one batch and produces a single output. we provide the first strong lower bounds on the error of continual release mechanisms. in particular, for two fundamental problems that are widely studied and used in the batch model, we show that the worst case error of every continual release algorithm is $ \ tilde \ omega ( t ^ { 1 / 3 } ) $ times larger than that of the best batch algorithm. previous work shows only a polylogarithmic ( in $ t $ ) gap between the worst case error achievable in these two models ; further, for many problems, including the summation of binary attributes, the polylogarithmic gap is tight ( dwork et al., 2010 ; chan et al., 2010 ). our results show that problems closely related to summation - - specifically, those that require selecting the largest of a set of sums - - are fundamentally harder in the continual release model than in the batch model. our lower bounds assume only that privacy holds for streams fixed in advance ( the " nonadaptive " setting ). however, we provide matching upper bounds that hold in a model where privacy is required even for adaptively selected streams. this model may be of independent interest.
|
arxiv:2112.00828
|
in this paper we study families of projective manifolds with good minimal models. after constructing a suitable moduli functor for polarized varieties with canonical singularities, we show that, if not birationally isotrivial, the base spaces of such families support subsheaves of log - pluridifferentials with positive kodaira dimension. consequently we prove that, over special base schemes, families of this type can only be birationally isotrivial and, as a result, confirm a conjecture of kebekus and kov \ ' acs.
|
arxiv:2005.01025
|
this work deals with the exponential stabilization of a system of three semilinear parabolic partial differential equations ( pdes ), written in a strict feedforward form. the diffusion coefficients are considered distinct and the pdes are interconnected via both a reaction matrix and a nonlinearity. only one of the pdes is assumed to be controlled internally, thereby leading to an underactuated system. constructive and efficient control of such underactuated systems is a nontrivial open problem, which has been solved recently for the linear case. in this work, these results are extended to the semilinear case, which is highly challenging due to the interconnection that is introduced by the nonlinearity. modal decomposition is employed, where due to the nonlinearity, the finite - dimensional part of the solution is coupled with the infinite - dimensional tail. a transformation is then performed to map the finite - dimensional part into a target system, which allows for an efficient design of a static linear proportional state - feedback controller. furthermore, a high - gain approach is employed in order to compensate for the nonlinear terms. lyapunov stability analysis is performed, leading to lmi conditions guaranteeing exponential stability with arbitrary decay rate. the lmis are shown to always be feasible, provided the number of actuators and the value of the high gain parameter are large enough. numerical examples show the efficiency of the proposed approach.
|
arxiv:2304.01548
|
the topology of the internet has typically been measured by sampling traceroutes, which are roughly shortest paths from sources to destinations. the resulting measurements have been used to infer that the internet ' s degree distribution is scale - free ; however, many of these measurements have relied on sampling traceroutes from a small number of sources. it was recently argued that sampling in this way can introduce a fundamental bias in the degree distribution, for instance, causing random ( erdos - renyi ) graphs to appear to have power law degree distributions. we explain this phenomenon analytically using differential equations to model the growth of a breadth - first tree in a random graph g ( n, p = c / n ) of average degree c, and show that sampling from a single source gives an apparent power law degree distribution p ( k ) ~ 1 / k for k < c.
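the mechanism is easy to reproduce numerically. the sketch below is our own illustration ( sizes and variable names are arbitrary ) : it grows a breadth - first tree from a single source in g ( n, p = c / n ) and records each vertex's apparent degree in the tree, whose histogram approximates the p ( k ) ~ 1 / k behavior described above for k < c.

```python
import random

random.seed(0)
n, c = 500, 10
p = c / n

# sample G(n, p): each unordered pair is an edge independently with probability p
adj = {v: set() for v in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < p:
            adj[u].add(v)
            adj[v].add(u)

# breadth-first tree from vertex 0 (a crude stand-in for single-source traceroutes)
parent = {0: None}
queue = [0]
while queue:
    u = queue.pop(0)
    for v in sorted(adj[u]):
        if v not in parent:
            parent[v] = u
            queue.append(v)

# apparent degree of a vertex = its tree children + 1 for the parent edge (if any)
children = {u: 0 for u in parent}
for v, u in parent.items():
    if u is not None:
        children[u] += 1
tree_degree = {u: children[u] + (parent[u] is not None) for u in parent}
```

plotting a histogram of `tree_degree` values over many runs shows the heavy - tailed apparent distribution, even though the true degrees are concentrated around c.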
|
arxiv:cond-mat/0312674
|
in this paper, a new practice - ready method for the real - time estimation of traffic conditions and travel times on highways is introduced. first, after a principal component analysis, observation days of a historical dataset are clustered. two different methods are compared : a gaussian mixture model and a k - means algorithm. the clustering results reveal that congestion maps of days of the same group have substantial similarity in their traffic conditions and dynamic. such a map is a binary visualization of the congestion propagation on the freeway, giving more importance to the traffic dynamics. second, a consensus day is identified in each cluster as the most representative day of the community according to the congestion maps. third, this information obtained from the historical data is used to predict traffic congestion propagation and travel times. thus, the first measurements of a new day are used to determine which consensual day is the closest to this new day. the past observations recorded for that consensual day are then used to predict future traffic conditions and travel times. this method is tested using ten months of data collected on a french freeway and shows very encouraging results.
|
arxiv:2011.05073
|
we describe the local transition probability of a singular diagonal action on the standard non - uniform quotient of $ pgl _ 3 $ associated to the type 1 geodesic flow. as a consequence, we deduce the strongly positive recurrence property of the geodesic flow.
|
arxiv:2401.11747
|
we compute the variation of the masses of the proton and the neutron induced by the presence of a strong external magnetic field. we discuss the choice of the wave function and different techniques for applying the magnetic field. the results obtained from a $ 24 \ times 16 ^ 2 \ times 8 $ and a $ 16 ^ 2 \ times 8 ^ 2 $ lattice are compared with a phenomenological approach.
|
arxiv:hep-lat/9412007
|
in this paper we use spontaneous flux production in annular superconductors to shed light on the kibble - zurek scenario. in particular, we examine the effects of finite size and external fields, neither of which is directly amenable to the kz analysis. supported by 1d and 3d simulations, the properties of a superconducting ring are seen to be well represented by analytic gaussian approximations which encode the kz scales indirectly. experimental results for annuli in the presence of external fields corroborate these findings.
|
arxiv:1302.7296
|
we investigate massless fermion production by a two - dimensional dilatonic black hole. our analysis is based on the bogoliubov transformation relating the outgoing fermion field observed outside the black hole horizon to the incoming field present before the black hole creation. it takes full account of the fact that the transformation is neither invertible nor unitarily implementable. the particle content of the outgoing radiation is specified by means of inclusive probabilities for the detection of sets of outgoing fermions and antifermions in given states. for states localized near the horizon these probabilities characterize a thermal equilibrium state. the way the probabilities become thermal as one approaches the horizon is discussed in detail.
|
arxiv:gr-qc/9403045
|
we present and study the results for the standard model process $ e ^ + e ^ - \ to \ nu \ bar { \ nu } b \ bar { b } $ at c. m. energies 150 $ \ leq \ sqrt { s } ( gev ) \ leq $ 240 and for higgs boson masses $ 80 gev \ leq m _ h \ leq 120 gev $, obtained from all tree - level diagrams and including the most important radiative corrections. the $ \ sqrt { s } $ dependence and the interference properties of the higgs boson contribution and of various coherent background contributions to the total cross section are examined. the effect of the qed initial state radiative corrections is estimated. the important differential distributions for the higgs boson and the background components are studied, providing information useful for choosing cuts in higgs searches. we also examine the effect of a minimal set of cuts and evaluate the importance of the ww fusion for detecting a higher mass higgs boson at lep ii.
|
arxiv:hep-ph/9603383
|
in a recent project, castillo, libedinsky, plaza, and the author established a deep connection between the size of lower bruhat intervals in affine weyl groups and the volume of the permutohedron, showing that the former can be expressed as a linear combination of the latter. in this paper, we provide a formula for the volume of this polytope in terms of dyck paths. thus, we present a shorter, alternative, and enlightening proof of a previous formula given by postnikov.
|
arxiv:2503.23122
|
a scaling law analysis of the world data on inclusive large - pt hadron production in hadronic collisions is carried out. a significant deviation from leading - twist perturbative qcd predictions at next - to - leading order is reported. the observed discrepancy is largest at high values of xt = 2pt / sqrt ( s ). in contrast, the production of prompt photons and jets exhibits the scaling behavior which is close to the conformal limit, in agreement with the leading - twist expectation. these results bring evidence for a non - negligible contribution of higher - twist processes in large - pt hadron production in hadronic collisions, where the hadron is produced directly in the hard subprocess rather than by gluon or quark jet fragmentation. predictions for scaling exponents at rhic and lhc are given, and it is suggested to trigger the isolated large - pt hadron production to enhance higher - twist processes.
|
arxiv:0911.4604
|
folding of the triangular lattice in a discrete three - dimensional space is investigated by means of the transfer - matrix method. this model was introduced by bowick and co - workers as a discretized version of the polymerized membrane in thermal equilibrium. the folding rule ( constraint ) is incompatible with the periodic - boundary condition, and the simulation has been made under the open - boundary condition. in this paper, we propose a modified constraint, which is compatible with the periodic - boundary condition ; technically, the restoration of translational invariance leads to a substantial reduction of the transfer - matrix size. treating the cluster sizes l \ le 7, we analyze the singularities of the crumpling transitions for a wide range of the bending rigidity k. we observe a series of the crumpling transitions at k = 0. 206 ( 2 ), - 0. 32 ( 1 ), and - 0. 76 ( 10 ). at each transition point, we estimate the latent heat as q = 0. 356 ( 30 ), 0. 08 ( 3 ), and 0. 05 ( 5 ), respectively.
|
arxiv:1003.2034
|
the quantum charge - coupled device ( qccd ) architecture is a modular design for expanding trapped - ion quantum computers that relies on the coherent shuttling of qubits across an array of segmented electrodes. leveraging trapped ions for their long coherence times and high - fidelity quantum operations, qccd technology represents a significant advancement toward practical, large - scale quantum processors. however, shuttling increases thermal motion and consistently necessitates qubit swaps, which significantly extend execution time and negatively affect application success rates. in this paper, we introduce s - sync - - a compiler designed to co - optimize the number of shuttling and swapping operations. s - sync exploits the unique properties of qccd and incorporates generic swap operations to efficiently manage shuttle and swap counts simultaneously. building on the static topology formulation of qccd, we develop scheduling heuristics to enhance overall performance. our evaluations demonstrate that our approach reduces the number of shuttles by 3. 69x on average and improves the success rate of quantum applications by 1. 73x on average. moreover, we apply s - sync to gain insights into executing applications across various qccd topologies and to compare the trade - offs between different initial mapping methods.
|
arxiv:2505.01316
|
we propose a novel reduced - order methodology to describe complex multi - frequency fluid dynamics from time - resolved snapshot data. starting point is the cluster - based network model ( cnm ) thanks to its fully automatable development and human interpretability. our key innovation is to model the transitions from cluster to cluster much more accurately by replacing snapshot states with short - term trajectories ( " orbits " ) over multiple clusters, thus avoiding nonphysical intra - cluster diffusion in the dynamic reconstruction. the proposed orbital cnm ( ocnm ) employs functional clustering to coarse - grain the short - term trajectories. specifically, different filtering techniques, resulting in different temporal basis expansions, demonstrate the versatility and capability of the ocnm to adapt to diverse flow phenomena. the ocnm is illustrated on the stuart - landau oscillator and its post - transient solution with time - varying parameters to test its ability to capture the amplitude selection mechanism and multi - frequency behaviours. then, the ocnm is applied to the fluidic pinball across varying flow regimes at different reynolds numbers, including the periodic, quasi - periodic, and chaotic dynamics. this orbital - focused perspective enhances the understanding of complex temporal behaviours by incorporating high - frequency behaviour into the kinematics of short - time trajectories while modelling the dynamics of the lower frequencies. in analogy to spectral proper orthogonal decomposition, which marked the transition from spatial - only modes to spatio - temporal ones, this work advances from analysing temporal local states to examining piecewise short - term trajectories, or orbits. by merging advanced analytical methods, such as the functional representation of short - time trajectories with cnm, this study paves the way for new approaches to dissect the complex dynamics characterising turbulent systems.
|
arxiv:2407.01109
|
low - cost miniaturised sensors offer significant advantages for monitoring the environment in real time and accurately. the area of air quality monitoring has attracted much attention in recent years because of its increasing impacts on the environment and, more personally, on human health and mental wellbeing. rapid growth in sensors and internet of things ( iot ) technologies is paving the way for low - cost systems to transform global monitoring of air quality. drawing on 4 years of development work, in this paper we outline the design, implementation and analysis of enviro - iot as a step forward to monitoring air quality levels within urban environments by means of a low - cost sensing system. a 9 - month in - the - wild study was performed to evaluate the enviro - iot system against industry - standard equipment, with accuracy for measuring particulate matter 2. 5, 10 and nitrogen dioxide reaching 98 \ %, 97 \ % and 97 \ % respectively. the results in this case study, comprising 57, 120 data points, highlight that it is possible to take advantage of low - cost sensors coupled with iot technologies to validate the enviro - iot device against research - grade industrial instruments.
|
arxiv:2502.07596
|
$ \mathbf{AB} \neq \mathbf{BA} $. in other words, matrix multiplication is not commutative, in marked contrast to ( rational, real, or complex ) numbers, whose product is independent of the order of the factors. an example of two matrices not commuting with each other is : $ \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 3 \end{bmatrix} $, whereas $ \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} 3 & 4 \\ 0 & 0 \end{bmatrix} $. besides the ordinary matrix multiplication just described, other less frequently used operations on matrices that can be considered forms of multiplication also exist, such as the hadamard product and the kronecker product. they arise in solving matrix equations such as the sylvester equation. = = = row operations = = = there are three types of row operations : row addition, that is, adding a row to another ; row multiplication, that is, multiplying all entries of a row by a non - zero constant ; and row switching, that is, interchanging two rows of a matrix. these operations are used in several ways, including solving linear equations and finding matrix inverses with gauss elimination and gauss - jordan elimination, respectively. = = = submatrix = = = a submatrix of a matrix is a matrix obtained by deleting any collection of rows and / or columns. for example, from the following 3 - by - 4 matrix, we can construct a 2 - by - 3 submatrix by removing row 3 and column 2 : a = [ 1 2 3 4 5 6 7 8
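the non - commuting pair above can be checked mechanically ; the snippet below is our own illustration in plain python ( no libraries ), not part of the article.

```python
# multiply two matrices given as nested lists
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [0, 0]]

print(matmul(A, B))  # [[0, 1], [0, 3]]
print(matmul(B, A))  # [[3, 4], [0, 0]] -- a different result: AB != BA
```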
|
https://en.wikipedia.org/wiki/Matrix_(mathematics)
|
multitime quantum correlation functions are central objects in physical science, offering a direct link between experimental observables and the dynamics of an underlying model. while experiments such as 2d spectroscopy and quantum control can now measure such quantities, the accurate simulation of such responses remains computationally expensive and sometimes impossible, depending on the system ' s complexity. a natural tool to employ is the generalized quantum master equation ( gqme ), which can offer computational savings by extending reference dynamics at a comparatively trivial cost. however, dynamical methods that can tackle chemical systems with atomistic resolution, such as those in the semiclassical hierarchy, often suffer from poor accuracy, limiting the credence one might lend to their results. by combining work on the accuracy - boosting formulation of semiclassical memory kernels with recent work on the multitime gqme, here we show for the first time that one can exploit a multitime semiclassical gqme to dramatically improve both the accuracy of coarse mean - field ehrenfest dynamics and obtain orders of magnitude efficiency gains.
|
arxiv:2405.08983
|
the paper relates the character value of an irreducible representation of a compact connected lie group at certain elements of finite order with the dimension of a representation of another group, up to some precise constants, which all have significance. an important input is to analyse torsion elements of order d in an adjoint group with minimal dimensional centraliser, and to prove that in most cases when d divides the coxeter number of g, this gives rise to a unique conjugacy class.
|
arxiv:2504.14684
|
we review a combinatoric approach to the hodge conjecture for fermat varieties and announce new cases where the conjecture is true.
|
arxiv:2101.04739
|
gaussian processes ( gp ) are a versatile tool in machine learning and computational science. we here consider the case of multi - output gaussian processes ( mogp ) and present low - rank approaches for efficiently computing the posterior mean of a mogp. starting from low - rank spatio - temporal data we consider a structured covariance function, assuming separability across space and time. this separability, in turn, gives a decomposition of the covariance matrix into a kronecker product of individual covariance matrices. incorporating the typical noise term to the model then requires the solution of a large - scale stein equation for computing the posterior mean. for this, we propose efficient low - rank methods based on a combination of a lrpcg method with the sylvester equation solver kpik adjusted for solving stein equations. we test the developed method on real world street network graphs by using graph filters as covariance matrices. moreover, we propose a degree - weighted average covariance matrix, which can be employed under specific assumptions to achieve more efficient convergence.
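the computational payoff of the separable covariance comes from the standard identity ( a ⊗ b ) vec ( x ) = vec ( b x aᵀ ) ( with column - major vec ), which lets one work with the kronecker factors without ever forming their product. below is a tiny plain - python check of that identity, with small matrices of our own choosing purely for illustration.

```python
def kron(a, b):  # kronecker product of nested-list matrices
    return [[x * y for x in ra for y in rb] for ra in a for rb in b]

def vec(x):  # column-major (column-stacking) vectorization
    return [x[i][j] for j in range(len(x[0])) for i in range(len(x))]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(r) for r in zip(*a)]

A = [[1, 2], [3, 4]]   # stands in for a spatial covariance factor
B = [[0, 1], [1, 1]]   # stands in for a temporal covariance factor
X = [[5, 6], [7, 8]]

# left side: apply the full Kronecker product to vec(X)
lhs = [sum(k * v for k, v in zip(row, vec(X))) for row in kron(A, B)]
# right side: two small matrix products instead of one big one
rhs = vec(matmul(matmul(B, X), transpose(A)))
print(lhs == rhs)  # True
```

for d x d factors the right - hand side costs o ( d ^ 3 ) instead of o ( d ^ 4 ), which is the kind of saving the low - rank stein - equation solvers in the paper build on.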
|
arxiv:2504.21527
|
we compare the invariants of flat vector bundles defined by atiyah et al. and jones et al. and prove that, up to weak homotopy, they induce the same map, denoted by $ e $, from the $ 0 $ - connective algebraic $ k $ - theory space of the complex numbers to the homotopy fiber of the chern character. we examine homotopy properties of this map and its relation with other known invariants. in addition, using the formula for $ \ tilde { \ xi } $ - invariants of lens spaces derived from donnelly ' s fixed point theorem and the $ 4 $ - dimensional cobordisms constructed via relative kirby diagrams, we recover the formula for the real part of $ e $ - invariants of seifert homology spheres given by jones and westbury, up to sign. we conjecture that this geometrically defined map $ e $ can be represented by an infinite loop map. the results in its companion paper [ wang2 ] give strong evidence for this conjecture.
|
arxiv:1707.01289
|
minimum weight codewords play a crucial role in the error correction performance of a linear block code. in this work, we establish an explicit construction for these codewords of polar codes as a sum of the generator matrix rows, which can then be used as a foundation for two applications. in the first application, we obtain a lower bound for the number of minimum - weight codewords ( a. k. a. the error coefficient ), which matches the exact number established previously in the literature. in the second application, we derive a novel method that modifies the information set ( a. k. a. rate profile ) of polar codes and pac codes in order to reduce the error coefficient, hence improving their performance. more specifically, by analyzing the structure of minimum - weight codewords of polar codes ( as special sums of the rows in the polar transform matrix ), we can identify rows ( corresponding to \ textit { information } bits ) that contribute the most to the formation of such codewords and then replace them with other rows ( corresponding to \ textit { frozen } bits ) that bring in few minimum - weight codewords. a similar process can also be applied to pac codes. our approach deviates from the traditional constructions of polar codes, which mostly focus on the reliability of the sub - channels, by taking into account another important factor - the weight distribution. extensive numerical results show that the modified codes outperform pac codes and crc - polar codes at the practical block error rate of $ 10 ^ { - 2 } $ - $ 10 ^ { - 3 } $.
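the row-weight structure underlying this construction can be illustrated with a toy sketch (not the paper's algorithm; the information set below is illustrative). the polar transform is the n-fold kronecker power of the 2x2 kernel, and row $ i $ has weight $ 2 ^ { \mathrm { wt } ( i ) } $ with $ \mathrm { wt } ( i ) $ the hamming weight of the binary index:

```python
import numpy as np

F = np.array([[1, 0], [1, 1]], dtype=np.uint8)  # the 2x2 polar kernel

def polar_transform(n):
    """N x N polar transform: the n-fold Kronecker power of F."""
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, F)
    return G

G = polar_transform(3)            # N = 8
row_weights = G.sum(axis=1)       # weight of row i equals 2**popcount(i)

# Codewords are sums of information rows, so the lightest chosen row caps the
# minimum weight; swapping such rows into the frozen set (rate-profile
# modification) is the kind of step the paper formalises.
info_set = [3, 5, 6, 7]           # example information set
min_row_weight = min(int(row_weights[i]) for i in info_set)
```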
|
arxiv:2111.08843
|
we consider sensor scheduling as the optimal observability problem for partially observable markov decision processes ( pomdp ). this model fits the cases where a markov process is observed by a single sensor which needs to be dynamically adjusted or by a set of sensors which are selected one at a time in a way that maximizes the information acquisition from the process. similar to conventional pomdp problems, in this model the control action is based on all past measurements ; however, here this action is not for the control of the state process, which is autonomous, but for influencing the measurement of that process. this pomdp is a controlled version of the hidden markov process, and we show that its optimal observability problem can be formulated as an average cost markov decision process ( mdp ) scheduling problem. in this problem, a policy is a rule for selecting sensors or adjusting the measuring device based on the measurement history. given a policy, we can evaluate the estimation entropy for the joint state - measurement processes, which inversely measures the observability of the state process for that policy. considering estimation entropy as the cost of a policy, we show that the problem of finding an optimal policy is equivalent to an average cost mdp scheduling problem where the cost function is the entropy function over the belief space. this allows the application of the policy iteration algorithm for finding the policy achieving minimum estimation entropy, thus optimum observability.
|
arxiv:cs/0609157
|
we consider a special form of parametric generalized equations arising from electronic circuits with ac sources and study the effect of perturbing the input signal on solution trajectories. using methods of variational analysis and strong metric regularity property of an auxiliary map, we are able to prove the regularity properties of the solution trajectories inherited by the input signal. furthermore, we establish the existence of continuous solution trajectories for the perturbed problem. this can be achieved via a result of uniform strong metric regularity for the auxiliary map. key words and phrases : generalized equations, electronic circuits, strong metric regularity, uniform strong metric regularity, perturbations.
|
arxiv:1801.03116
|
we report the observation of prominent shubnikov - de haas oscillations in a topological insulator, bi $ _ 2 $ te $ _ 2 $ se, with large bulk resistivity ( 6 $ \ omega $ cm at 4 k ). by fitting the sdh oscillations, we infer a large metallicity parameter $ k _ f \ ell $ = 41, with a surface mobility ( $ \ mu _ s \ sim $ 2, 800 cm $ ^ 2 $ / vs ) much larger than the bulk mobility ( $ \ mu _ b \ sim $ 50 cm $ ^ 2 $ / vs ). the plot of the index fields $ b _ { \ nu } $ vs. filling factor $ \ nu $ shows a $ \ frac12 $ - shift, consistent with massless, dirac states. evidence for fractional - filling states is seen in an 11 - t field.
|
arxiv:1101.1315
|
in a multi - party fair coin - flipping protocol, the parties output a common ( close to ) unbiased bit, even when some adversarial parties try to bias the output. in this work we focus on the case of an arbitrary number of corrupted parties. cleve [ stoc 1986 ] has shown that in any such $ m $ - round coin - flipping protocol, the corrupted parties can bias the honest parties ' common output bit by $ \ theta ( 1 / m ) $. for more than two decades, the best known coin - flipping protocol was the one of awerbuch et al. [ manuscript 1985 ], who presented a $ t $ - party, $ m $ - round protocol with bias $ \ theta ( t / \ sqrt { m } ) $. this was changed by the breakthrough result of moran et al. [ tcc 2009 ], who constructed an $ m $ - round, two - party coin - flipping protocol with optimal bias $ \ theta ( 1 / m ) $. haitner and tsfadia [ stoc 2014 ] constructed an $ m $ - round, three - party coin - flipping protocol with bias $ o ( \ log ^ 3m / m ) $. still for the case of more than three parties, the best known protocol remained the $ \ theta ( t / \ sqrt { m } ) $ - bias protocol of awerbuch et al. we make a step towards eliminating the above gap, presenting a $ t $ - party, $ m $ - round coin - flipping protocol, with bias $ o ( \ frac { t ^ 4 \ cdot 2 ^ t \ cdot \ sqrt { \ log m } } { m ^ { 1 / 2 + 1 / \ left ( 2 ^ { t - 1 } - 2 \ right ) } } ) $ for any $ t \ le \ tfrac12 \ log \ log m $. this improves upon the $ \ theta ( t / \ sqrt { m } ) $ - bias protocol of awerbuch et al., and in particular, for $ t \ in o ( 1 ) $ it is a $ 1 / m ^ { \ frac12 + \ theta ( 1 ) } $ - bias protocol. for the three - party case, it is an $ o ( \ sqrt { \ log m } / m ) $ - bias protocol, improving over the $ o ( \ log ^ 3m / m ) $ - bias protocol of haitner and tsfadia.
|
arxiv:2104.08820
|
a sheet of elastic foil rolled into a cylinder and deformed between two parallel plates acts as a non - hookean spring if deformed normally to the axis. for large deformations the elastic force shows an interesting inverse squares dependence on the interplate distance [ siber and buljan, arxiv : 1007. 4699 ( 2010 ) ]. the phenomenon has been used as a basis for an experimental problem at the 41st international physics olympiad. we show that the corresponding variational problem for the equilibrium energy of the deformed cylinder is equivalent to a minimum action description of a simple gravitational pendulum with an amplitude of 90 degrees. we use this analogy to show that the power - law of the force is exact for distances less than a critical value. an analytical solution for the elastic force is found and confirmed by measurements over a range of deformations covering both linear and non - hookean behavior.
|
arxiv:1008.4649
|
associative memory and probabilistic modeling are two fundamental topics in artificial intelligence. the first studies recurrent neural networks designed to denoise, complete and retrieve data, whereas the second studies learning and sampling from probability distributions. based on the observation that associative memory ' s energy functions can be seen as probabilistic modeling ' s negative log likelihoods, we build a bridge between the two that enables useful flow of ideas in both directions. we showcase four examples : first, we propose new energy - based models that flexibly adapt their energy functions to new in - context datasets, an approach we term \ textit { in - context learning of energy functions }. second, we propose two new associative memory models : one that dynamically creates new memories as necessitated by the training data using bayesian nonparametrics, and another that explicitly computes proportional memory assignments using the evidence lower bound. third, using tools from associative memory, we analytically and numerically characterize the memory capacity of gaussian kernel density estimators, a widespread tool in probabilistic modeling. fourth, we study a widespread implementation choice in transformers - - normalization followed by self attention - - to show it performs clustering on the hypersphere. altogether, this work urges further exchange of useful ideas between these two continents of artificial intelligence.
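the central observation — an energy function read as a negative log likelihood — can be sketched in a few lines with a gaussian kernel density estimator (a toy illustration, not the paper's models; the stored patterns and bandwidth are made up):

```python
import numpy as np

patterns = np.array([[0.0, 0.0], [4.0, 4.0], [4.0, -4.0]])  # stored memories
beta = 4.0  # inverse kernel bandwidth; larger beta gives sharper basins

def energy(x):
    # E(x) = -log sum_i exp(-beta * ||x - p_i||^2): the (unnormalised)
    # negative log-likelihood of a Gaussian KDE, read as a memory energy.
    d2 = ((patterns - x) ** 2).sum(axis=1)
    return -np.log(np.exp(-beta * d2).sum())

def retrieve(x, steps=200, lr=0.05):
    # Gradient descent on E denoises a corrupted query toward a memory;
    # -grad E(x) = 2*beta*(softmax-weighted mean of patterns - x).
    for _ in range(steps):
        d2 = ((patterns - x) ** 2).sum(axis=1)
        w = np.exp(-beta * d2)
        w = w / w.sum()
        x = x + lr * 2 * beta * ((w @ patterns) - x)
    return x

query = np.array([3.5, 3.8])       # corrupted version of the (4, 4) memory
recalled = retrieve(query)         # descends the energy to the nearest memory
```

the softmax weights in the update are the same attention-like assignment that the paper connects to normalization-then-self-attention in transformers.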
|
arxiv:2402.10202
|
we show that the potential of nambu - goldstone bosons can have two or more local minima e. g. at antipodal positions in the vacuum manifold. this happens in many models of composite higgs and of composite dark matter. trigonometric potentials lead to unusual features, such as symmetry non - restoration at high temperature. in some models, such as the minimal $ \ rm so ( 5 ) / so ( 4 ) $ composite higgs with fermions in the fundamental representation, the two minima are degenerate giving cosmological domain - wall problems. otherwise, an unusual cosmology arises, that can lead to supermassive primordial black holes ; to vacuum or thermal decays ; to a high - temperature phase of broken $ \ mathrm { su } ( 2 ) _ l $, possibly interesting for baryogenesis.
|
arxiv:1902.05933
|
epr - steering refers to the ability of one observer to convince a distant observer that they share entanglement by making local measurements. determining which states allow a demonstration of epr - steering remains an open problem in general. here, we outline and demonstrate a method of analytically constructing new classes of two - qubit states which are non - steerable by arbitrary projective measurements, from consideration of local operations performed by the steering party on states known to be non - steerable.
|
arxiv:1906.04693
|
the debate on the oxygen abundances of metal - poor stars has its origin in contradictory results obtained using different abundance indicators. to achieve a better understanding of the problem we have acquired high quality spectra with the ultraviolet and visual echelle spectrograph at vlt, with a signal - to - noise of the order of 100 in the near ultraviolet and 500 in the optical and near infrared wavelength range. three different oxygen abundance indicators, oh ultraviolet lines around 310. 0 nm, the [ oi ] line at 630. 03 nm and the oi lines at 777. 1 - 5 nm were observed in the spectra of 13 metal - poor subgiants with - 3. 0 < = [ fe / h ] < = - 1. 5. oxygen abundances were obtained from the analysis of these indicators which was carried out assuming local thermodynamic equilibrium and plane - parallel model atmospheres. abundances derived from oi were corrected for departures from local thermodynamic equilibrium. stellar parameters were computed using teff - vs - color calibrations based on the infrared flux method and balmer line profiles, hipparcos parallaxes and feii lines. [ o / fe ] values derived from the forbidden line at 630. 03 nm are consistent with an oxygen / iron ratio that varies linearly with [ fe / h ] as [ o / fe ] = - 0. 09 ( + / - 0. 08 ) [ fe / h ] + 0. 36 ( + / - 0. 15 ). values based on the oi triplet are on average 0. 19 + / - 0. 22 dex ( s. d. ) higher than the values based on the forbidden line while the agreement between oh ultraviolet lines and the forbidden line is much better with a mean difference of the order of - 0. 09 + / - 0. 25 dex ( s. d. ). in general, our results follow the same trend as previously published results with the exception of the ones based on oh ultraviolet lines. in that case our results lie below the values which gave rise to the oxygen abundance debate for metal - poor stars.
|
arxiv:astro-ph/0512290
|
$ n - 1 $ multiplications to compute $ a _ { n - 1 } x ^ { n - 1 } $ and so on for a total of $ \ frac { n ( n + 1 ) } { 2 } $ multiplications and $ n $ additions. using better methods, such as horner ' s rule, this can be reduced to $ n $ multiplications and $ n $ additions. if some preprocessing is allowed, even more savings are possible. = = = computation = = = a computation is any type of arithmetic or non - arithmetic calculation that is " well - defined ". the notion that mathematical statements should be ' well - defined ' had been argued by mathematicians since at least the 1600s, but agreement on a suitable definition proved elusive. a candidate definition was proposed independently by several mathematicians in the 1930s. the best - known variant was formalised by the mathematician alan turing, who defined a well - defined statement or calculation as any statement that could be expressed in terms of the initialisation parameters of a turing machine. turing ' s definition apportioned " well - definedness " to a very large class of mathematical statements, including all well - formed algebraic statements, and all statements written in modern computer programming languages. despite the widespread uptake of this definition, there are some mathematical concepts that have no well - defined characterisation under this definition. this includes the halting problem and the busy beaver game. it remains an open question as to whether there exists a more powerful definition of ' well - defined ' that is able to capture both computable and ' non - computable ' statements. all statements characterised in modern programming languages are well - defined, including c++, python, and java. common examples of computation are basic arithmetic and the execution of computer algorithms.
a calculation is a deliberate mathematical process that transforms one or more inputs into one or more outputs or results. for example, multiplying 7 by 6 is a simple algorithmic calculation. extracting the square root or the cube root of a number using mathematical models is a more complex algorithmic calculation. = = = = rewriting = = = = expressions can be computed by means of an evaluation strategy. to illustrate, executing a function call f ( a, b ) may first evaluate the arguments a and b, store the results in references or memory
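the multiplication counts above can be checked directly with a short sketch (helper names are illustrative):

```python
def eval_naive(coeffs, x):
    """Term-by-term evaluation of a_0 + a_1 x + ... + a_n x^n.

    Building each power x^k from scratch costs k multiplications,
    so a degree-n polynomial needs n(n+1)/2 multiplications in total."""
    total, mults = 0, 0
    for k, a in enumerate(coeffs):
        term = a
        for _ in range(k):   # k multiplications for the a_k x^k term
            term *= x
            mults += 1
        total += term
    return total, mults

def eval_horner(coeffs, x):
    """Horner's rule: (...(a_n x + a_{n-1}) x + ...) x + a_0,
    only n multiplications and n additions."""
    result = coeffs[-1]
    mults = 0
    for a in reversed(coeffs[:-1]):
        result = result * x + a
        mults += 1
    return result, mults

# 1 + 2x + 3x^2 + 4x^3 at x = 2 (degree n = 3)
value_naive, m_naive = eval_naive([1, 2, 3, 4], 2)
value_horner, m_horner = eval_horner([1, 2, 3, 4], 2)
```

both return the same value; the naive scheme uses $ 3 \cdot 4 / 2 = 6 $ multiplications, horner's rule only 3.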
|
https://en.wikipedia.org/wiki/Expression_(mathematics)
|
experimental investigations of surface forces generally involve two solid bodies of simple and well - defined geometry interacting across a medium. direct measurement of their surface interaction can be interpreted to reveal fundamental physics in confinement, i. e. independent of the particular geometry. however real solids are deformable - they can change shape due to their mutual interaction - and this can influence force measurements. these aspects are frequently not considered, and remain poorly understood. we have performed experiments in a dry atmosphere and across an ionic liquid with a surface force balance ( sfb ), combining measurement of the surface interactions and simultaneous in - situ characterization of the geometry. first we find that, whilst the variation of the contact radius with the force across dry nitrogen can be interpreted by the johnson - kendall - roberts ( jkr ) model, for the ( ionic ) liquid it is well described only by the derjaguin - muller - toporov ( dmt ) model. secondly, we find that mica does not only bend but also experiences a compression. by performing experiments with substantially thicker mica than usual we were able to investigate this with high precision, and find compression of order 1 nm with 7 um mica. these findings imply that, in some cases structural forces measured across nanoconfined liquids must be interpreted as a convolution of the surface forces across the liquid and the mechanical response of the confining solids. we discuss the influence of mica thickness, and propose a scaling criterion to distinguish situations where the solid deformation is negligible and when it is dominant.
|
arxiv:2001.01090
|
digital cubical singular homology $ dh _ q ( x ) $ for digital images $ x $ was developed by the first and third authors, and digital analogues to various results in classical algebraic topology were proved. another homology denoted $ h _ q ^ { c _ 1 } ( x ) $ was developed by the second author for $ c _ 1 $ - digital images, which is computationally much simpler than $ dh _ q ( x ) $. this paper shows the functoriality of $ h _ q ^ { c _ 1 } $, as well as a chain map between $ dh _ q ( x ) $ and $ h _ q ^ { c _ 1 } ( x ) $ when $ x $ is a $ c _ 1 $ - digital image.
|
arxiv:2205.07457
|
the atlas collaboration has recently observed that the variance of the transverse momentum per particle ( $ [ p _ t ] $ ), when measured as a function of the collision multiplicity ( $ n _ { ch } $ ) in pb + pb collisions, decreases by a factor $ 2 $ for the largest values of $ n _ { ch } $, corresponding to ultra - central collisions. we show that this phenomenon is naturally explained by invoking impact parameter ( $ b $ ) fluctuations, which contribute to the variance, and gradually disappear in ultra - central collisions. it implies that $ n _ { ch } $ and $ [ p _ t ] $ are strongly correlated at fixed $ b $, which is explained by the local thermalization of the qgp medium.
|
arxiv:2312.10161
|
generating videos predicting the future of a given sequence has been an area of active research in recent years. however, an essential problem remains unsolved : most of the methods require large computational cost and memory usage for training. in this paper, we propose a novel method for generating future prediction videos with less memory usage than the conventional methods. this is a critical stepping stone in the path towards generating videos with high image quality, similar to that of generated images in the latest works in the field of image generation. we achieve high - efficiency by training our method in two stages : ( 1 ) image reconstruction to encode video frames into latent variables, and ( 2 ) latent variable prediction to generate the future sequence. our method achieves an efficient compression of video into low - dimensional latent variables by decomposing each frame according to its hierarchical structure. that is, we consider that video can be separated into background and foreground objects, and that each object holds time - varying and time - independent information independently. our experiments show that the proposed method can efficiently generate future prediction videos, even for complex datasets that cannot be handled by previous methods.
|
arxiv:2106.03502
|
we present the discovery of three new scattered kuiper belt objects ( skbos ) from a wide - field survey of the ecliptic. this continuing survey has to date covered 20. 2 square degrees to a limiting red magnitude of 23. 6. we combine the data from this new survey with an existing survey conducted at the university of hawaii 2. 2m telescope to constrain the number and mass of the skbos. the skbos are characterized by large eccentricities, perihelia near 35 au, and semi - major axes > 50 au. using a maximum - likelihood model, we estimate the total number of skbos larger than 100 km in diameter to be n = 3. 1 ( + 1. 9 / - 1. 3 ) x 10 ^ 4 ( 1 sigma ) and the total mass of skbos to be about 0. 05 earth masses, demonstrating that the skbos are similar in number and mass to the kuiper belt inside 50 au.
|
arxiv:astro-ph/9912428
|
we discuss the relations between the elastic and inelastic cross - sections valid for the shadow and reflective modes of the elastic scattering. considerations are based on the unitarity arguments. it is shown that the redistribution of the total interaction probability between the elastic and inelastic interactions can lead to increasing ratio of $ \ sigma _ { el } ( s ) / \ sigma _ { tot } ( s ) $ at the lhc energies in presence of the reflective scattering mode. the form of the inelastic overlap function becomes peripheral due to the negative feedback. in the absorptive scattering mode, the mechanism of this increase is a different one since the impact parameter dependence of the inelastic interactions probability is central in this case. a short notice is also given on the slope parameter and the leading contributions to its energy dependence in both modes.
|
arxiv:1901.00311
|
we review the experimental results obtained by the na49 collaboration in the context of its beam energy scan programme. the data on particle yields and spectral distributions suggest that the deconfinement phase transition is first reached in central collisions of heavy nuclei at about 30a gev beam energy. hadron - string transport models as well as the hadron gas model fail to describe the observed energy dependences unless additional parameters or unmeasured states are included.
|
arxiv:0908.2720
|
we develop a theory of insertion and deletion tolerance for point processes. a process is insertion - tolerant if adding a suitably chosen random point results in a point process that is absolutely continuous in law with respect to the original process. this condition and the related notion of deletion - tolerance are extensions of the so - called finite energy condition for discrete random processes. we prove several equivalent formulations of each condition, including versions involving palm processes. certain other seemingly natural variants of the conditions turn out not to be equivalent. we illustrate the concepts in the context of a number of examples, including gaussian zero processes and randomly perturbed lattices, and we provide applications to continuum percolation and stable matching.
|
arxiv:1007.3538
|
we revisit the possibility that the top quark forward - backward asymmetry arises from the on - shell production and decay of scalar top partners to top - antitop pairs with missing transverse energy. although the asymmetry is produced by t - channel exchange of a light mediator, the model remains unconstrained by low energy atomic parity violation tests. an interesting connection to the active neutrino sector through a type - i seesaw operator helps to evade stringent monojet constraints and opens up a richer collider phenomenology. after performing a global fit to top data from both the tevatron and the lhc, we obtain a viable region of parameter space consistent with all phenomenological and collider constraints. we also discuss the discovery potential of a predicted monotop signal and related lepton charge asymmetry at the lhc.
|
arxiv:1308.3712
|
we consider integral geometry inverse problems for unitary connections and skew - hermitian higgs fields on manifolds with negative sectional curvature. the results apply to manifolds in any dimension, with or without boundary, and also in the presence of trapped geodesics. in the boundary case, we show injectivity of the attenuated ray transform on tensor fields with values in a hermitian bundle ( i. e. vector valued case ). we also show that a connection and higgs field on a hermitian bundle are determined up to gauge by the knowledge of the parallel transport between boundary points along all possible geodesics. the main tools are an energy identity, the pestov identity with a unitary connection, which is presented in a general form, and a precise analysis of the singularities of solutions of transport equations when there are trapped geodesics. in the case of closed manifolds, we obtain similar results modulo the obstruction given by twisted conformal killing tensors, and we also study this obstruction.
|
arxiv:1502.04720
|
the equilibrium transport properties of an elementary nanostructured device with side - coupled geometry are computed and related to universal functions. the computation relies on a real - space formulation of the numerical renormalization - group ( nrg ) procedure. the real - space construction, dubbed enrg, is more straightforward than the nrg discretization and allows more faithful description of the coupling between quantum dots and conduction states. the procedure is applied to an anderson - model description of a quantum wire side - coupled to a single quantum dot. a gate potential controls the dot occupation. in the kondo regime, the electrical conductance through this device is known to map linearly onto a universal function of the temperature scaled by the kondo temperature. here, the energy moments from which the seebeck coefficient and the thermal conductance can be computed are shown to map linearly onto universal functions also. the moments and transport properties computed by the enrg procedure are shown to agree very well with these analytical developments. algorithms facilitating comparison with experimental results are discussed. as an illustration, one of the algorithms is applied to thermal dependence of the thermopower measured by köhler [ phd thesis, tud, dresden, 2007 ] in lu $ _ { 0. 9 } $ yb $ _ { 0. 1 } $ rh $ _ { 2 } $ si $ _ { 2 } $.
|
arxiv:2109.12254
|
primitive recursion is a mature, well - understood topic in the theory and practice of programming. yet its dual, primitive corecursion, is underappreciated and still seen as exotic. we aim to put them both on equal footing by giving a foundation for primitive corecursion based on computation, giving a terminating calculus analogous to the original computational foundation of recursion. we show how the implementation details in an abstract machine strengthens their connection, syntactically deriving corecursion from recursion via logical duality. we also observe the impact of evaluation strategy on the computational complexity of primitive ( co ) recursive combinators : call - by - name allows for more efficient recursion, but call - by - value allows for more efficient corecursion.
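the duality can be illustrated outside the paper's calculus and abstract machine (this python sketch is only an analogy; the function names are made up). primitive recursion consumes a finite input and must terminate; primitive corecursion produces one element of output per step and must be productive, so any finite prefix of its infinite result is computable — python generators give exactly this lazy, demand-driven consumption:

```python
from itertools import islice

def fold_nat(n, base, step):
    """Primitive recursion over a natural number: the input shrinks at
    every step, so termination is guaranteed."""
    acc = base
    for i in range(n):
        acc = step(i, acc)
    return acc

def unfold(seed, produce):
    """Primitive corecursion (the dual): each step must yield one output
    element before continuing, so every finite prefix of the infinite
    stream is computable -- productivity in place of termination."""
    while True:
        out, seed = produce(seed)
        yield out

factorial5 = fold_nat(5, 1, lambda i, acc: (i + 1) * acc)
fibs = unfold((0, 1), lambda s: (s[0], (s[1], s[0] + s[1])))  # infinite stream
fib_prefix = list(islice(fibs, 7))
```

generators are evaluated lazily, echoing the paper's point that evaluation strategy matters: eager (call-by-value-style) construction of the fibonacci stream would never terminate.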
|
arxiv:2103.08521
|
this paper is the third in a series which explores a combinatorial method for generating lattice polygons in the plane. i call this method the plaid model. in this paper i prove the main result i had been aiming for since the beginning, which is to show that there is a coarse isomorphism between the plaid model and the so - called arithmetic graph for outer billiards on kites. the content of the theorem is that the plaid model predicts the symbolic dynamics of the outer billiards orbits, up to an error of one unit. this result combines with the work in the other papers to give a second proof that outer billiards has unbounded orbits for every irrational kite. so far, these are the only known polygonal examples with this property.
|
arxiv:1511.09091
|
we have performed local stm studies on potassium - doped c60 ( kxc60 ) monolayers over a wide regime of the phase diagram. as k content increases from x = 3 to 5, kxc60 monolayers undergo metal - insulator - metal reentrant phase transitions and exhibit a variety of novel orientational orderings. the most striking new structure has a pinwheel - like 7 - molecule unit cell in insulating k $ _ { 4 + \ delta } $ c60. we propose that the driving mechanism for the orientational ordering in kxc60 is the lowering of electron kinetic energy through maximization of the overlap of neighboring molecular orbitals over the entire doping range x = 3 to 5. in the insulating and metallic phases this gives rise to orbital versions of the superexchange and double - exchange interactions respectively.
|
arxiv:0712.0421
|
an exactly solvable model describing the low density limit of the spin - 1 bosons in a one - dimensional optical lattice is proposed. the exact bethe ansatz solution shows that the low energy physics of this system is described by a quantum liquid of spin singlet bound pairs. motivated by the exact results, a mean - field approach to the corresponding three - dimensional system is carried out. condensation of singlet pairs and coexistence with ordinary bose - einstein condensation are predicted.
|
arxiv:cond-mat/0610435
|
this thesis is intended to provide an account of the theory and applications of operational methods that allow the " translation " of the theory of special functions and polynomials into a " different " mathematical language. the language we are referring to is that of symbolic methods, largely based on a formalism of umbral type which provides a tremendous simplification of the derivation of the associated properties. the strategy we will follow is that of establishing the rules to replace higher transcendental functions in terms of elementary functions and to take advantage of such a recasting.
|
arxiv:1803.03108
|
programmable logic controllers ( plcs ) are responsible for automating process control in many industrial systems ( e. g. in manufacturing and public infrastructure ), and thus it is critical to ensure that they operate correctly and safely. the majority of plcs are programmed in languages such as structured text ( st ). however, a lack of formal semantics makes it difficult to ascertain the correctness of their translators and compilers, which vary from vendor - to - vendor. in this work, we develop k - st, a formal executable semantics for st in the k framework. defined with respect to the iec 61131 - 3 standard and plc vendor manuals, k - st is a high - level reference semantics that can be used to evaluate the correctness and consistency of different st implementations. we validate k - st by executing 509 st programs extracted from github and comparing the results against existing commercial compilers ( i. e., codesys, cx - programmer, and gx works2 ). we then apply k - st to validate the implementation of the open source openplc platform, comparing the executions of several test programs to uncover five bugs and nine functional defects in the compiler.
|
arxiv:2202.04076
|
we have used an updated version of the empirically and semi - empirically calibrated basel library of synthetic stellar spectra of lejeune et al. ( 1997, 1998 ) and westera et al. ( 1999 ) to calculate synthetic photometry in the ubvrijhkll ' m, hst - wfpc2, geneva, and washington systems for the entire set of non - rotating geneva stellar evolution models covering masses from 0. 4 - 0. 8 to 120 - 150 msun and metallicities z = 0. 0004 ( 1 / 50 zsun ) to 0. 1 ( 5 zsun ). the results are provided in a database which includes all individual stellar tracks and the corresponding isochrones covering ages from 10 ^ 3 yr to 16 - - 20 gyr in time steps of delta ( log t ) = 0. 05 dex. the database also includes a new grid of stellar tracks of very metal - poor stars ( z = 0. 0004 ) from 0. 8 - 150 msun calculated with the geneva stellar evolution code. the full database will be available in electronic form at the cds ( http://cdsweb.u-strasbg.fr/cgi-bin/qcat?j/a+a/(vol)/(page) ) and at http://webast.ast.obs-mip.fr/stellar/.
|
arxiv:astro-ph/0011497
|
we state the central limit theorem, as the degree goes to infinity, for the normalized volume of the zero set of a rectangular kostlan - shub - smale random polynomial system. this paper is a continuation of { \ it central limit theorem for the number of real roots of kostlan shub smale random polynomial systems } by the same authors in which the square case was considered. our main tools are the kac - rice formula for the second moment of the volume of the zero set and an expansion of this random variable into the itô - wiener chaos.
|
arxiv:1808.02967
|
The EoR 21-cm signal is expected to become highly non-Gaussian as reionization progresses. This severely affects the error covariance of the EoR 21-cm power spectrum, which is important for predicting the prospects of a detection with ongoing and future experiments. Most earlier works have assumed that the EoR 21-cm signal is a Gaussian random field, where (1) the error variance depends only on the power spectrum and the number of Fourier modes in the particular $k$ bin, and (2) the errors in the different $k$ bins are uncorrelated. Here we use an ensemble of simulated 21-cm maps to analyze the error covariance at various stages of reionization. We find that even at the very early stages of reionization ($\bar{x}_{\rm HI} \sim 0.9$) the error variance significantly exceeds the Gaussian predictions at small length-scales ($k > 0.5\,{\rm Mpc}^{-1}$), while they are consistent at larger scales. The errors in most $k$ bins (both large and small scales) are, however, found to be correlated. Considering the later stages ($\bar{x}_{\rm HI} = 0.15$), the error variance shows an excess in all $k$ bins within $k \ge 0.1\,{\rm Mpc}^{-1}$, and it is around $200$ times larger than the Gaussian prediction at $k \sim 1\,{\rm Mpc}^{-1}$. The errors in the different $k$ bins are also all highly correlated, barring the two smallest $k$ bins, which are anti-correlated with the other bins. Our results imply that predictions for different 21-cm experiments based on the Gaussian assumption underestimate the errors, and it is necessary to incorporate the non-Gaussianity for more realistic predictions.
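The ensemble approach can be sketched in a few lines: generate many maps, compute a binned power spectrum for each, and estimate the bin-to-bin covariance across the ensemble. With Gaussian maps (as generated here) the off-diagonal correlations should vanish; the paper's point is that for the non-Gaussian 21-cm signal they do not. Grid size, bin edges, and ensemble size are toy values.

```python
import numpy as np

# Estimate the error covariance of a binned 2-D power spectrum from an
# ensemble of Gaussian random maps. Toy grid and bins, illustrative only.

rng = np.random.default_rng(1)
n, n_maps, n_bins = 32, 200, 6

# Precompute |k| for each Fourier mode and assign modes to bins.
kfreq = np.fft.fftfreq(n) * n
kmag = np.sqrt(sum(np.meshgrid(kfreq**2, kfreq**2, indexing="ij")))
bins = np.linspace(1.0, n // 2, n_bins + 1)
bin_idx = np.digitize(kmag.ravel(), bins) - 1

spectra = []
for _ in range(n_maps):
    delta = rng.normal(size=(n, n))                # Gaussian "map"
    pk2d = np.abs(np.fft.fft2(delta)) ** 2 / n**2  # per-mode power
    pk = [pk2d.ravel()[bin_idx == b].mean() for b in range(n_bins)]
    spectra.append(pk)

cov = np.cov(np.array(spectra), rowvar=False)      # bin-bin error covariance
corr = cov / np.sqrt(np.outer(np.diag(cov), np.diag(cov)))
print(np.round(corr, 2))                           # ~ identity for Gaussian maps
```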
|
arxiv:1606.03874
|
the " pretrain - then - finetune " paradigm is commonly adopted in the deployment of large language models. low - rank adaptation ( lora ), a parameter - efficient fine - tuning method, is often employed to adapt a base model to a multitude of tasks, resulting in a substantial collection of lora adapters derived from one base model. we observe that this paradigm presents significant opportunities for batched inference during serving. to capitalize on these opportunities, we present s - lora, a system designed for the scalable serving of many lora adapters. s - lora stores all adapters in the main memory and fetches the adapters used by the currently running queries to the gpu memory. to efficiently use the gpu memory and reduce fragmentation, s - lora proposes unified paging. unified paging uses a unified memory pool to manage dynamic adapter weights with different ranks and kv cache tensors with varying sequence lengths. additionally, s - lora employs a novel tensor parallelism strategy and highly optimized custom cuda kernels for heterogeneous batching of lora computation. collectively, these features enable s - lora to serve thousands of lora adapters on a single gpu or across multiple gpus with a small overhead. compared to state - of - the - art libraries such as huggingface peft and vllm ( with naive support of lora serving ), s - lora can improve the throughput by up to 4 times and increase the number of served adapters by several orders of magnitude. as a result, s - lora enables scalable serving of many task - specific fine - tuned models and offers the potential for large - scale customized fine - tuning services. the code is available at https : / / github. com / s - lora / s - lora
|
arxiv:2311.03285
|
Predicting click-through rates is a crucial function within recommendation and advertising platforms, as the output of CTR prediction determines the order of items shown to users. The Embedding & MLP paradigm has become a standard approach for industrial recommendation systems and has been widely deployed. However, this paradigm suffers from cold-start problems, where there is either no or only limited user action data available, leading to poorly learned ID embeddings. The cold-start problem hampers the performance of new items. To address this problem, we designed a novel diffusion model to generate a warmed-up embedding for new items. Specifically, we define a novel diffusion process between the ID embedding space and the side information space. In addition, we can derive a sub-sequence from the diffusion steps to expedite training, given that our diffusion model is non-Markovian. Our diffusion model is supervised by both the variational inference and binary cross-entropy objectives, enabling it to generate warmed-up embeddings for items in both the cold-start and warm-up phases. Additionally, we have conducted extensive experiments on three recommendation datasets. The results confirmed the effectiveness of our approach.
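The sub-sequence trick rests on a general property of non-Markovian (DDIM-style) diffusions: the marginal at step $t$ depends only on the cumulative product $\bar\alpha_t$, so any subset of timesteps from the full schedule keeps valid marginals. The sketch below shows only that schedule construction, with an invented toy noise schedule; it is not this paper's model.

```python
import numpy as np

# Why a non-Markovian diffusion admits a sub-sequence: the marginal q(x_t|x_0)
# depends only on alpha_bar_t, so an evenly spaced subset of timesteps keeps
# the marginals of the full chain. Toy linear schedule, illustrative only.

T = 1000
betas = np.linspace(1e-4, 2e-2, T)            # toy linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)           # alpha_bar_t = prod_{s<=t}(1-beta_s)

def sub_schedule(alpha_bar, num_steps):
    """Pick an evenly spaced sub-sequence of timesteps and their alpha_bars."""
    ts = np.linspace(0, len(alpha_bar) - 1, num_steps).round().astype(int)
    return ts, alpha_bar[ts]

ts, sub_ab = sub_schedule(alpha_bar, 50)      # 20x fewer steps
assert np.all(sub_ab == alpha_bar[ts])        # marginals agree with full chain
print(len(ts), ts[0], ts[-1])                 # 50 0 999
```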
|
arxiv:2504.06270
|
We investigate brane-world models in different viable $f(R)$ gravity theories where the Lagrangian is an arbitrary function of the curvature scalar. Deriving the warped metric for this model, resembling Randall-Sundrum (RS)-like solutions, we determine the graviton KK modes. The recent observations at the LHC, which constrain the RS graviton KK modes to a mass range greater than 3 TeV, are incompatible with RS model predictions. It is shown that the models with $f(R)$ gravity in the bulk address the issue, which in turn constrains the $f(R)$ model itself.
|
arxiv:1403.3164
|
Reconfigurable intelligent surface (RIS) has attracted enormous interest for its potential advantages in assisting both wireless communication and environmental sensing. In this paper, we study a challenging multiuser tracking problem in the multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) system aided by multiple RISs. In particular, we assume that a multi-antenna base station (BS) receives the OFDM symbols from single-antenna users reflected by multiple RISs and tracks the positions of these users. Considering the users' mobility and the blockage of line-of-sight (LoS) paths, we establish a probability transition model to characterize the tracking process, where the geometric constraints between channel parameters and multiuser positions are utilized. We further develop an online message passing algorithm, termed the Bayesian multiuser tracking (BMT) algorithm, to estimate the multiuser positions, the angles-of-arrival (AoAs) at multiple RISs, and the time delay and the blockage of the LoS path. The Bayesian Cramér-Rao bound (BCRB) is derived as the fundamental performance limit of the considered tracking problem. Based on the BCRB, we optimize the passive beamforming (PBF) of the multiple RISs to improve the tracking performance. Simulation results show that the proposed PBF design significantly outperforms the counterpart schemes, and our BMT algorithm can achieve up to centimeter-level tracking accuracy.
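A two-dimensional toy version makes the geometric constraint concrete: the user-to-RIS-to-BS delay fixes the total path length, the known RIS-BS distance removes one leg, and the AoA at the RIS then pins the user on a ray at a known range. Positions and geometry below are invented; the paper works in 3-D with noisy estimates and multiple RISs.

```python
import math

# Toy 2-D geometric constraint in RIS-aided tracking: delay gives the total
# path length, AoA at the RIS gives the direction to the user. Illustrative
# positions; no noise, single RIS.

C = 3e8                                        # speed of light [m/s]
bs, ris = (0.0, 0.0), (40.0, 30.0)             # known BS and RIS positions [m]

def locate_user(tau, aoa):
    """Recover the user position from path delay tau and AoA at the RIS."""
    d_ris_bs = math.dist(ris, bs)
    d_user_ris = C * tau - d_ris_bs            # remaining leg of the path
    return (ris[0] + d_user_ris * math.cos(aoa),
            ris[1] + d_user_ris * math.sin(aoa))

# Forward-simulate a user, then invert.
user = (100.0, 110.0)
tau = (math.dist(user, ris) + math.dist(ris, bs)) / C   # total path delay
aoa = math.atan2(user[1] - ris[1], user[0] - ris[0])
est = locate_user(tau, aoa)
print(round(est[0], 6), round(est[1], 6))      # recovers (100.0, 110.0)
```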
|
arxiv:2304.11884
|
We use high-resolution Eulerian simulations to study the stability of cold gas flows in a galaxy-size dark matter halo (10^12 Msun) at redshift z = 2. Our simulations show that a cold stream penetrating a hot gaseous halo is stable against thermal convection and Kelvin-Helmholtz instability. We then investigate the effect of a satellite orbiting the main halo in the plane of the stream. The satellite is able to perturb the stream and to inhibit cold gas accretion towards the center of the halo for 0.5 Gyr. However, if the supply of cold gas at large distances is kept constant, the cold stream is able to re-establish itself after 0.3 Gyr. We conclude that cold streams are very stable against a large variety of internal and external perturbations.
|
arxiv:1401.1812
|
This paper develops the large deviations theory for the point process associated with the Euclidean volume of $k$-nearest neighbor balls centered around the points of a homogeneous Poisson or a binomial point process in the unit cube. Two different types of large deviation behaviors of such point processes are investigated. Our first result is the Donsker-Varadhan large deviation principle, under the assumption that the centering terms for the volume of $k$-nearest neighbor balls grow to infinity more slowly than those needed for Poisson convergence. Additionally, we also study large deviations based on the notion of $\mathcal{M}_0$-topology, which takes place when the centering terms tend to infinity sufficiently fast, compared to those for Poisson convergence. As applications of our main theorems, we discuss large deviations for the number of Poisson or binomial points of degree at most $k$ in a geometric graph in the dense regime.
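The underlying random objects are easy to simulate: draw a binomial sample (i.i.d. uniform points) in the unit cube and compute the volume of each point's $k$-nearest-neighbor ball. The brute-force 2-D sketch below only constructs these volumes; the rescaling and centering that produce the large-deviation regimes are not reproduced here.

```python
import math
import numpy as np

# Volumes of k-nearest-neighbor balls for a binomial sample in the unit
# square (dimension 2 for simplicity). Brute-force distances at this scale.

rng = np.random.default_rng(7)
n, k = 500, 3
pts = rng.uniform(size=(n, 2))

# Pairwise distances; the k-NN distance is the (k+1)-th smallest per row
# (the smallest entry is each point's zero distance to itself).
diff = pts[:, None, :] - pts[None, :, :]
dists = np.sqrt((diff**2).sum(axis=-1))
knn_dist = np.sort(dists, axis=1)[:, k]

volumes = math.pi * knn_dist**2               # area of the k-NN ball in 2-D
print(volumes.shape, float(volumes.mean()))   # (500,) and a small positive mean
```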
|
arxiv:2210.12423
|
In this chapter we offer an introduction to weak values from a three-fold perspective: first, outlining the protocols that enable their experimental determination; next, deriving their correlates in the quantum formalism; and, finally, discussing their ontological significance according to different quantum theories or interpretations. We argue that weak values have predictive power and provide novel ways to characterise quantum systems. We show that this holds true regardless of ongoing ontological disputes. And, still, we contend that certain "hidden" variables theories like Bohmian mechanics constitute very valuable heuristic tools for identifying informative weak values or functions thereof. To illustrate these points, we present a case study concerning quantum thermalization. We show that certain weak values, singled out by Bohmian mechanics as physically relevant, play a crucial role in elucidating the thermalization time of certain systems, whereas standard expectation values are "blind" to the onset of thermalization.
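The formal correlate is compact enough to compute directly: with pre-selected state $|i\rangle$ and post-selected state $|f\rangle$, the weak value of an observable $A$ is $A_w = \langle f|A|i\rangle / \langle f|i\rangle$, which can lie far outside $A$'s eigenvalue range when $\langle f|i\rangle$ is small. The qubit states below are a standard textbook choice, not taken from this chapter.

```python
import numpy as np

# Weak value A_w = <f|A|i> / <f|i> for a qubit, showing an anomalous value
# of sigma_x outside its spectrum [-1, 1]. Standard textbook example.

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)

def weak_value(pre, post, A):
    """A_w for normalized state vectors pre (|i>) and post (|f>)."""
    return (post.conj() @ A @ pre) / (post.conj() @ pre)

theta = 1.4
pre = np.array([np.cos(theta), np.sin(theta)], dtype=complex)   # |i>
post = np.array([1.0, 0.0], dtype=complex)                      # |f> = |0>
Aw = weak_value(pre, post, sigma_x)
print(round(Aw.real, 3))   # tan(1.4) ~ 5.798, well outside [-1, 1]
```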
|
arxiv:2310.03852
|