text | source
---|---
The electron density and temperature profiles measured with Thomson scattering at the stellarator Wendelstein 7-X show features which seem to be unphysical but so far could not be associated with any source of error considered in the data processing. A detailed Bayesian analysis reveals that errors in the spectral calibration cannot explain the features observed in the profiles. Rather, it seems that small fluctuations in the laser position are sufficient to affect the profile substantially. The impact of these fluctuations depends on the laser position itself, which, in turn, provides a method to find the optimum laser alignment in the future.
|
arxiv:2111.03562
|
…sphutasiddhanta. Greek and other ancient mathematical advances were often trapped in cycles of bursts of creativity followed by long periods of stagnation, but this began to change as knowledge spread in the early modern period. === Symbolic stage and early arithmetic === The transition to fully symbolic algebra began with Ibn al-Banna' al-Marrakushi (1256–1321) and Abu al-Hasan ibn ʿAli al-Qalasadi (1412–1482), who introduced symbols for operations using Arabic characters. The plus sign (+) appeared around 1351 with Nicole Oresme, likely derived from the Latin et (meaning "and"), while the minus sign (−) was first used in 1489 by Johannes Widmann. Luca Pacioli included these symbols in his works, though much was based on earlier contributions by Piero della Francesca. The radical symbol (√) for square root was introduced by Christoph Rudolff in the 1500s, and parentheses for precedence by Niccolò Tartaglia in 1556. François Viète's New Algebra (1591) formalized modern symbolic manipulation. The multiplication sign (×) was first used by William Oughtred and the division sign (÷) by Johann Rahn. René Descartes further advanced algebraic symbolism in La Géométrie (1637), where he introduced the use of letters at the end of the alphabet (x, y, z) for variables, along with the Cartesian coordinate system, which bridged algebra and geometry. Isaac Newton and Gottfried Wilhelm Leibniz independently developed calculus in the late 17th century, with Leibniz's notation becoming the standard. == Variables and evaluation == In elementary algebra, a variable in an expression is a letter that represents a number whose value may change. To evaluate an expression with a variable means to find the value of the expression when the variable is assigned a given number. Expressions can be evaluated or simplified by replacing the operations that appear in them with their result, or by combining like terms. For example, take the expression $4x^2 + 8$; it can be evaluated at x = 3 in the following steps: $4(3)^2 + 8$ (replace x with 3), $4\cdot(3\cdot 3) + 8$ …
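A compact restatement of the worked evaluation above (standard order of operations; nothing beyond the excerpt itself is assumed):

```latex
4x^2 + 8 \,\Big|_{x=3} \;=\; 4(3)^2 + 8 \;=\; 4\cdot 9 + 8 \;=\; 36 + 8 \;=\; 44
```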
|
https://en.wikipedia.org/wiki/Expression_(mathematics)
|
We compare two area spectra proposed in loop quantum gravity in different approaches to compute the entropy of the Schwarzschild black hole. We describe the black hole in general microcanonical and canonical area ensembles for these spectra. We show that in the canonical ensemble, the results for all statistical quantities for any spectrum can be reproduced by a heuristic picture of Bekenstein up to second order. For one of these spectra - the equally-spaced spectrum - in light of a proposed connection of the black hole area spectrum to the quasinormal mode spectrum and following hep-th/0304135, we present explicit calculations to argue that this spectrum is completely consistent with this connection. This follows without requiring a change in the gauge group of the spin degrees of freedom in this formalism from SU(2) to SO(3). We also show that, independent of the area spectrum, the degeneracy of the area observable is bounded by $c\,a\,\exp(a/4)$, where $a$ is measured in Planck units and $c$ is a constant of order unity.
|
arxiv:gr-qc/0401110
|
Conformal mechanics related to the near-horizon extreme Kerr-Newman-AdS-dS black hole is studied. A unique N = 2 supersymmetric extension of the conformal mechanics is constructed.
|
arxiv:1103.1047
|
In Eu2ZnIrO6, effectively two atoms are active: Ir is magnetically active, which results in complex magnetic ordering within the Ir sublattice at low temperature, while Eu, although a Van Vleck paramagnet, is active in the electronic channels involving the crystal-field-split 4f^6 levels. Phonons, the quanta of lattice vibration involving vibrations of the atoms in the unit cell, are intimately coupled with both the magnetic and electronic degrees of freedom (DOF). Here, we report a comprehensive study focusing on the phonons as well as intra-configurational excitations in the double perovskite Eu2ZnIrO6. Our studies reveal strong coupling of phonons with the underlying magnetic DOF, reflected in the renormalization of the phonon self-energy parameters well above the spin-solid phase (T_N ~ 12 K) up to temperatures as high as ~3T_N, evidencing broken spin-rotational symmetry deep into the paramagnetic phase. In particular, all the observed first-order phonon modes show softening of varying degree below ~3T_N; the low-frequency phonons become sharper, while the high-frequency phonons show broadening attributed to the additional available magnetic damping channels. We also observe a large number of high-energy modes, 39 in total, attributed to electronic transitions between the 4f levels of the rare-earth Eu^3+ ion; these modes show anomalous temperature evolution as well as mixing of the crystal-field-split levels, attributed to the strong coupling of the electronic and lattice DOF.
|
arxiv:2006.09653
|
The iron-titanium sulfides FeTi2S4 and Fe2TiS4 have been structurally and magnetically characterized using powder X-ray diffraction with the Rietveld refinement method and variable-temperature 57Fe Mössbauer spectroscopy. Both sulfides have the same crystallographic phase, based on the monoclinic Cr3S4-type structure, and differ in atomic coordinates: FeTi2S4 retains the ideal atomic positions proposed for the Cr3S4 phase, while in Fe2TiS4 the metal displacements from the ideal sites are noticeable. Mössbauer spectra reveal different magnetic behaviors. In FeTi2S4 there is a transition from the paramagnetic state to magnetic ordering at T_C = 145 K, giving rise to an unusually low hyperfine magnetic field of 2.5 T (at 77 K) compared with the values of the iron magnetic moments reported previously; this behavior is explained on the basis of the blocking of localized Fe magnetic moments by the spin density wave (SDW) originating from the 3d Ti atoms. In Fe2TiS4 a transition from the paramagnetic state to an SDW arises at T_C = 290 K; the SDW spreads over both the Fe and Ti metals through the [101] crystallographic plane and undergoes a first-order transition from an incommensurate SDW (ISDW) to a commensurate SDW (CSDW) at T_IC = 255 K. The atomic positions in the unit cell are correlated with the magnetic behavior in both sulfides.
|
arxiv:1808.03362
|
…a significant re-evaluation of Greek philosophy of mathematics. According to legend, fellow Pythagoreans were so traumatized by this discovery that they murdered Hippasus to stop him from spreading his heretical idea. Simon Stevin was one of the first in Europe to challenge Greek ideas in the 16th century. Beginning with Leibniz, the focus shifted strongly to the relationship between mathematics and logic. This perspective dominated the philosophy of mathematics through the time of Boole, Frege and Russell, but was brought into question by developments in the late 19th and early 20th centuries. === Contemporary philosophy === A perennial issue in the philosophy of mathematics concerns the relationship between logic and mathematics at their joint foundations. While 20th-century philosophers continued to ask the questions mentioned at the outset of this article, the philosophy of mathematics in the 20th century was characterized by a predominant interest in formal logic, set theory (both naive set theory and axiomatic set theory), and foundational issues. It is a profound puzzle that on the one hand mathematical truths seem to have a compelling inevitability, but on the other hand the source of their "truthfulness" remains elusive. Investigations into this issue are known as the foundations of mathematics program. At the start of the 20th century, philosophers of mathematics were already beginning to divide into various schools of thought about all these questions, broadly distinguished by their pictures of mathematical epistemology and ontology. Three schools, formalism, intuitionism, and logicism, emerged at this time, partly in response to the increasingly widespread worry that mathematics as it stood, and analysis in particular, did not live up to the standards of certainty and rigor that had been taken for granted. Each school addressed the issues that came to the fore at that time, either attempting to resolve them or claiming that mathematics is not entitled to its status as our most trusted knowledge. Surprising and counter-intuitive developments in formal logic and set theory early in the 20th century led to new questions concerning what was traditionally called the foundations of mathematics. As the century unfolded, the initial focus of concern expanded to an open exploration of the fundamental axioms of mathematics, the axiomatic approach having been taken for granted since the time of Euclid around 300 BCE as the natural basis for mathematics. Notions of axiom, proposition and proof, as well as the notion of a proposition being true of a mathematical object (see Assignment), were formalized, allowing them to be treated mathematically. The Zermelo–Fraenkel…
|
https://en.wikipedia.org/wiki/Philosophy_of_mathematics
|
A partially annealed mean-field spin-glass model with a locally embedded pattern is studied. The model consists of two dynamical variables, spins and interactions, that are in contact with thermal baths at temperatures T_S and T_J, respectively. Unlike the quenched system, characteristic correlations among the interactions are induced by the partial annealing. The model exhibits three phases: paramagnetic, ferromagnetic and spin-glass. In the ferromagnetic phase, the embedded pattern is stably realized. The phase diagram depends significantly on the ratio of the two temperatures n = T_J/T_S. In particular, a reentrant transition from the embedded ferromagnetic to the spin-glass phase as T_S decreases is found only below a certain value of n. This indicates that above the critical value n_c the embedded pattern is supported by the local field from the non-embedded region. Some equilibrium properties of the interactions under partial annealing are also discussed in terms of frustration.
|
arxiv:1010.5346
|
In this paper we will make use of the Mackaay-Vaz approach to the universal $\mathfrak{sl}_3$-homology to define a family of cycles (called $\beta_3$-invariants) which are transverse braid invariants. This family includes Wu's $\psi_3$-invariant. Furthermore, we analyse the vanishing of the homology classes of the $\beta_3$-invariants and relate it to the vanishing of Plamenevskaya's and Wu's invariants. Finally, we use the $\beta_3$-invariants to produce some Bennequin-type inequalities.
|
arxiv:1806.00752
|
We present a manually annotated lexical semantic change dataset for Russian: RuShiftEval. Its novelty is ensured by a single set of target words annotated for their diachronic semantic shifts across three time periods, while previous work either used only two time periods or different sets of target words. The paper describes the composition and annotation procedure for the dataset. In addition, it is shown how the ternary nature of RuShiftEval allows one to trace specific diachronic trajectories: 'changed at a particular time period and stable afterwards' or 'was changing throughout all time periods'. Based on the analysis of the submissions to the recent shared task on semantic change detection for Russian, we argue that correctly identifying such trajectories can be an interesting sub-task in itself.
|
arxiv:2106.08294
|
We use semi-analytic galaxy catalogs based on two high-resolution cosmological $N$-body simulations, Millennium-WMAP7 and Millennium-II, to investigate the formation of Local Group (LG) analogs. Unlike previous studies, we use the observed stellar masses to select the LG member (Milky Way (MW) and M31) analogs, and then impose constraints using the observed separation, isolation, and kinematics of the two main member galaxies. By comparing radial and low-ellipticity orbits between the MW and M31, we find that a higher tangential velocity results in a higher total mass: $4.4^{+2.4}_{-1.5}\times 10^{12}\,\rm M_{\odot}$ and $6.6^{+2.7}_{-1.5}\times 10^{12}\,\rm M_{\odot}$ for radial and low-ellipticity orbits, respectively. The orbits also influence the individual mass distributions of the MW and M31 analogs. For radial orbits, the typical host halo masses of the MW and M31 are $1.5^{+1.4}_{-0.7}\times 10^{12}\,\rm M_{\odot}$ and $2.5^{+1.3}_{-1.1}\times 10^{12}\,\rm M_{\odot}$; for low-ellipticity orbits, the masses are $2.5^{+2.2}_{-1.4}\times 10^{12}\,\rm M_{\odot}$ and $3.8^{+2.8}_{-1.8}\times 10^{12}\,\rm M_{\odot}$. The LG is located primarily in filaments with tails extending toward higher densities up to $\delta\sim 4.5$. The dark matter velocity anisotropy parameters $\beta$ of both the MW and M31 analogs are close to zero in the center, increasing to 0.2--0.3 at 50--80 kpc and decreasing slowly outward. The slope is much flatter than that computed from the MW satellites, and the amplitude is smaller than that traced…
|
arxiv:2001.09589
|
Cyclic prefix direct sequence spread spectrum (CP-DSSS) is a novel waveform that has been proposed as a solution for massive machine-type communications (mMTC) in 5G and beyond. This paper analyzes the capacity of CP-DSSS in comparison with orthogonal frequency-division multiplexing (OFDM). We show that CP-DSSS achieves the same capacity as OFDM and can be optimized with similar precoding methods (e.g., water-filling). Because of its spread spectrum nature, CP-DSSS can operate as a secondary network using the same spectrum as the primary 4G or 5G network, but transmitting at much lower power. Accordingly, the combination of primary and secondary signals in the envisioned setup may be viewed as a power-NOMA (non-orthogonal multiple access) technique, where primary signals are detected and subtracted from the received signal before detecting the secondary signals. In order to operate at a sufficiently low interference level to the primary network, details of the CP-DSSS capacity for symbol rate reduction and multi-antenna operation are developed. The capacity limits established in this paper can be used as a baseline to evaluate the performance of future CP-DSSS receiver architectures for single- and multi-user scenarios.
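The water-filling precoding mentioned above is a standard technique; the sketch below is a generic NumPy implementation of water-filling over parallel subchannels (gains and power budget are made-up illustration values, not tied to CP-DSSS or the paper's system model).

```python
import numpy as np

def water_filling(channel_gains, total_power, noise_power=1.0):
    """Generic water-filling: maximize sum_i log2(1 + g_i p_i / N0) s.t. sum_i p_i <= P."""
    g = np.asarray(channel_gains, dtype=float)
    inv = noise_power / g                     # "floor" height N0 / g_i of each channel
    order = np.argsort(inv)
    inv_sorted = inv[order]
    for k in range(len(g), 0, -1):            # try allocating over the k best channels
        mu = (total_power + inv_sorted[:k].sum()) / k   # candidate water level
        if mu > inv_sorted[k - 1]:            # all k selected channels stay above their floor
            p_sorted = np.maximum(mu - inv_sorted, 0.0)
            break
    p = np.empty_like(p_sorted)
    p[order] = p_sorted                       # undo the sorting permutation
    capacity = np.log2(1.0 + g * p / noise_power).sum()
    return p, capacity

# Example: four subchannels with unequal gains and a unit total power budget
powers, cap = water_filling([2.0, 1.0, 0.5, 0.1], total_power=1.0)
print(powers, cap)
```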
|
arxiv:2003.08599
|
The relation between tempered distributions and measures is analysed and clarified. While this is straightforward for positive measures, it is surprisingly subtle for signed or complex measures.
|
arxiv:2202.09175
|
The prevalence of graph-based data has spurred the rapid development of graph neural networks (GNNs) and related machine learning algorithms. Yet, despite the many datasets naturally modeled as directed graphs, including citation, website, and traffic networks, the vast majority of this research focuses on undirected graphs. In this paper, we propose MagNet, a spectral GNN for directed graphs based on a complex Hermitian matrix known as the magnetic Laplacian. This matrix encodes undirected geometric structure in the magnitude of its entries and directional information in their phase. A "charge" parameter attunes spectral information to variation among directed cycles. We apply our network to a variety of directed graph node classification and link prediction tasks, showing that MagNet performs well on all tasks and that its performance exceeds all other methods on a majority of such tasks. The underlying principles of MagNet are such that it can be adapted to other spectral GNN architectures.
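As a rough illustration of the construction described above, here is a small NumPy sketch of a normalized magnetic Laplacian for a directed graph; the charge parameter q and the normalization follow a common convention in the literature, and details may differ from MagNet's actual implementation.

```python
import numpy as np

def magnetic_laplacian(A, q=0.25, normalized=True):
    """Magnetic Laplacian of a directed graph with adjacency matrix A.

    The symmetrized weights carry the undirected geometry; the phase
    exp(i * 2*pi*q*(A - A^T)) encodes edge direction. q is the "charge".
    """
    A = np.asarray(A, dtype=float)
    A_sym = 0.5 * (A + A.T)                     # undirected magnitude
    theta = 2.0 * np.pi * q * (A - A.T)         # antisymmetric phase
    H = A_sym * np.exp(1j * theta)              # Hermitian "magnetic" adjacency
    d = A_sym.sum(axis=1)                       # symmetrized degrees
    if normalized:
        d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
        D_inv_sqrt = np.diag(d_inv_sqrt)
        return np.eye(len(A)) - D_inv_sqrt @ H @ D_inv_sqrt
    return np.diag(d) - H

# Example: a directed 3-cycle; the Laplacian is Hermitian with real eigenvalues
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])
L = magnetic_laplacian(A, q=0.25)
print(np.allclose(L, L.conj().T), np.linalg.eigvalsh(L))
```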
|
arxiv:2102.11391
|
Applying the Poincaré-Birkhoff-Witt property and the Gröbner-Shirshov bases technique, we find the linear basis of the associative universal enveloping algebra, in the sense of V. Ginzburg and M. Kapranov, of a pair of compatible Lie brackets. We state that the growth rate of this universal enveloping algebra over an $n$-dimensional compatible Lie algebra equals $n+1$.
|
arxiv:2110.06518
|
To address the challenges of long-tailed classification, researchers have proposed several approaches to reduce model bias, most of which assume that classes with few samples are weak classes. However, recent studies have shown that tail classes are not always hard to learn, and model bias has been observed on sample-balanced datasets, suggesting the existence of other factors that affect model bias. In this work, we first establish a geometric perspective for analyzing model fairness and then systematically propose a series of geometric measurements for perceptual manifolds in deep neural networks. Subsequently, we comprehensively explore the effect of the geometric characteristics of perceptual manifolds on classification difficulty and how learning shapes the geometric characteristics of perceptual manifolds. An unanticipated finding is that the correlation between class accuracy and the separation degree of perceptual manifolds gradually decreases during training, while the negative correlation with the curvature gradually increases, implying that curvature imbalance leads to model bias. Building upon these observations, we propose curvature regularization to facilitate the model to learn curvature-balanced and flatter perceptual manifolds. Evaluations on multiple long-tailed and non-long-tailed datasets show the excellent performance and exciting generality of our approach, especially in achieving significant performance improvements on top of current state-of-the-art techniques. Our work opens up a geometric analysis perspective on model bias and reminds researchers to pay attention to model bias on non-long-tailed and even sample-balanced datasets.
|
arxiv:2303.12307
|
In this paper, we prove a lower bound for $\max_{\chi\neq\chi_0}\left|\sum_{n\leq x}\chi(n)\right|$ when $x = \frac{q}{(\log q)^{b}}$. This improves on a result of Granville and Soundararajan for large character sums when the range of summation is wide. When $b$ goes to zero, our lower bound recovers the expected maximal value of character sums for most characters.
|
arxiv:2005.11386
|
The requirements and operating conditions for a muon collider storage ring (MCSR) pose significant challenges to superconducting magnets. The dipole magnets should provide a high magnetic field to reduce the ring circumference and thus maximize the number of muon collisions during their lifetime. One third of the beam energy is continuously deposited along the lattice by the decay electrons, at a rate of 0.5 kW/m for a 1.5-TeV c.o.m. energy and a luminosity of 10^34 cm^-2 s^-1. Unlike dipoles in proton machines, the MCSR dipoles should allow this dynamic heat load to escape the magnet helium volume in the horizontal plane, predominantly towards the ring center. This paper presents the analysis and comparison of radiation effects in the MCSR based on two dipole magnet designs. Tungsten masks in the interconnect regions are used in both cases to mitigate the unprecedented dynamic heat deposition and radiation in the magnet coils.
|
arxiv:1202.5333
|
We discuss how embeddings in connection with the Campbell-Magaard (CM) theorem can have a physical interpretation. We show that any embedding whose local existence is guaranteed by the CM theorem can be viewed as a result of the dynamical evolution of initial data given on a four-dimensional spacelike hypersurface. By using the CM theorem, we establish that for any analytic spacetime there exist appropriate initial data whose Cauchy development is a five-dimensional vacuum space into which the spacetime is locally embedded. We also show that the embedded spacetime is Cauchy stable with respect to these initial data.
|
arxiv:gr-qc/0503103
|
Let $X$ be a smooth projective quadric defined over a field of characteristic 2. We prove that in the Chow group of codimension 2 or 3 of $X$ the torsion subgroup has at most two elements. In codimension 2, we determine precisely when this torsion subgroup is nontrivial. In codimension 3, we show that there is no torsion if $\dim X \ge 11$. This extends the analogous results in characteristic different from 2, obtained by Karpenko in the nineteen-nineties.
|
arxiv:2101.03001
|
We study a one-parameter family of discrete Loewner evolutions driven by a random walk on the real line. We show that it converges to the stochastic Loewner evolution (SLE) under rescaling. We show that the discrete Loewner evolution satisfies Markovian-type and symmetry properties analogous to SLE, and establish a phase transition property for the discrete Loewner evolution when the parameter equals 4.
|
arxiv:math/0303119
|
We argue that a large negative running spectral index, if confirmed, might suggest that there are abundant structures in the inflaton potential, which result in a fairly large (both positive and negative) running of the spectral index at all scales. It is shown that the central value of the running spectral index suggested by the recent CMB data can be easily explained by an inflaton potential with superimposed periodic oscillations. In contrast to cases with constant running, the perturbation spectrum is enhanced at small scales due to the repeated modulations. We mention that such features at small scales may be seen by 21 cm observations in the future.
|
arxiv:1011.3988
|
We address the question, and related controversy, of the formulation of the $q$-entropy, and its relative entropy counterpart, for models described by continuous (non-discrete) sets of variables. We notice that an $L_p$-normalized functional proposed by Lutwak-Yang-Zhang (LYZ), which is essentially a variation of a properly normalized relative Rényi entropy up to a logarithm, has extremal properties that make it an attractive candidate which can be used to construct such a relative $q$-entropy. We comment on the extremizing probability distributions of this LYZ functional, its relation to the escort distributions, a generalized Fisher information, and the corresponding Cramér-Rao inequality. We point out potential physical implications of the LYZ entropic functional and of its extremal distributions.
|
arxiv:1905.01672
|
LiB, a predicted layered compound analogous to the MgB$_2$ superconductor, has recently been synthesized via cold compression and quenched to ambient pressure, yet its superconducting properties have not been measured. According to prior isotropic superconductivity calculations, the critical temperature ($T_{\rm c}$) was expected to be only 10$-$15 K. Using the anisotropic Migdal-Eliashberg formalism, we show that the $T_{\rm c}$ may actually exceed 32 K. Our analysis of the contribution from different electronic states helps explain the detrimental effect of pressure and doping on the compound's superconducting properties. In the search for related superconductors, we screened Li-Mg-B binary and ternary layered materials and found metastable phases with $T_{\rm c}$ close to or even 10$-$20% above the record 39 K value in MgB$_2$. Our reexamination of the Li-B binary phase stability reveals a possible route to synthesize the LiB superconductor at lower pressures readily achievable in multianvil cells.
|
arxiv:2208.12855
|
Let $f: M \to \mathbb{R}$ be a Morse function on a smooth closed surface, $V$ a connected component of some critical level of $f$, and $\mathcal{E}_V$ its atom. Let also $\mathcal{S}(f)$ be the stabilizer of the function $f$ under the right action of the group of diffeomorphisms $\mathrm{Diff}(M)$ on the space of smooth functions on $M$, and $\mathcal{S}_V(f) = \{h \in \mathcal{S}(f) \mid h(V) = V\}$. The group $\mathcal{S}_V(f)$ acts on the set $\pi_0\partial\mathcal{E}_V$ of connected components of the boundary of $\mathcal{E}_V$; therefore we have a homomorphism $\phi: \mathcal{S}_V(f) \to \mathrm{Aut}(\pi_0\partial\mathcal{E}_V)$. Let also $G = \phi(\mathcal{S}_V(f))$ be the image of $\mathcal{S}_V(f)$ in $\mathrm{Aut}(\pi_0\partial\mathcal{E}_V)$. Suppose that the inclusion $\partial\mathcal{E}_V \subset M \setminus V$ induces a bijection $\pi_0\partial\mathcal{E}_V \to \pi_0(M \setminus V)$. Let $H$ be a subgroup of $G$. We present a sufficient condition for the existence of a section $s: H \to \mathcal{S}_V(f)$ of the homomorphism $\phi$, so that the action of $H$ on $\partial\mathcal{E}_V$ lifts to an $H$-action on $M$ by $f$-preserving diffeomorphisms of $M$. This result holds for a larger class of smooth functions $f: M \to \mathbb{R}$ having the following property: for each critical point $z$ of…
|
arxiv:1610.01219
|
In the wake of responsible AI, interpretability methods, which attempt to provide an explanation for the predictions of neural models, have seen rapid progress. In this work, we are concerned with explanations that are applicable to natural language processing (NLP) models and tasks, and we focus specifically on the analysis of counterfactual, contrastive explanations. We note that while there have been several explainers proposed to produce counterfactual explanations, their behaviour can vary significantly, and the lack of a universal ground truth for the counterfactual edits imposes an insuperable barrier on their evaluation. We propose a new back-translation-inspired evaluation methodology that utilises earlier outputs of the explainer as ground truth proxies to investigate the consistency of explainers. We show that by iteratively feeding the counterfactual to the explainer we can obtain valuable insights into the behaviour of both the predictor and the explainer models, and infer patterns that would otherwise be obscured. Using this methodology, we conduct a thorough analysis and propose a novel metric to evaluate the consistency of counterfactual generation approaches with different characteristics across available performance indicators.
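A minimal sketch of the iterative idea described above; the `predictor` and `explainer` callables are hypothetical placeholders standing in for the actual models, and none of this reproduces the paper's metric.

```python
# `predictor` maps a text to a label; `explainer` takes a text plus a target
# label and returns a counterfactual edit. Both are assumed interfaces.

def consistency_trace(text, predictor, explainer, n_rounds=3):
    """Feed the explainer's counterfactual back to it and record each round.

    Earlier outputs serve as ground-truth proxies: a consistent explainer
    should flip the prediction back and forth between the same pair of
    labels with similar edits across rounds.
    """
    trace = []
    current = text
    for _ in range(n_rounds):
        label = predictor(current)              # model prediction on the current text
        target = 1 - label                      # contrastive target (binary task assumed)
        counterfactual = explainer(current, target)
        trace.append((current, label, counterfactual))
        current = counterfactual                # back-translation-style iteration
    return trace
```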
|
arxiv:2305.17055
|
Due to their conceptual simplicity, k-means algorithm variants have been extensively used for unsupervised cluster analysis. However, one main shortcoming of these algorithms is that they essentially fit a mixture of identical spherical Gaussians to data that vastly deviates from such a distribution. In comparison, general Gaussian mixture models (GMMs) can fit richer structures but require estimating a quadratic number of parameters per cluster to represent the covariance matrices. This poses two main issues: (i) the underlying optimization problems are challenging due to their larger number of local minima, and (ii) their solutions can overfit the data. In this work, we design search strategies that circumvent both issues. We develop more effective optimization algorithms for general GMMs, and we combine these algorithms with regularization strategies that avoid overfitting. Through extensive computational analyses, we observe that optimization or regularization in isolation does not substantially improve cluster recovery. However, combining these techniques permits a completely new level of performance previously unachieved by k-means algorithm variants, unraveling vastly different cluster structures. These results shed new light on the current status quo between GMM and k-means methods and suggest the more frequent use of general GMMs for data exploration. To facilitate such applications, we provide open-source code as well as Julia packages (UnsupervisedClustering.jl and RegularizedCovarianceMatrices.jl) implementing the proposed techniques.
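The following is not the authors' Julia implementation; it is a small scikit-learn sketch contrasting a spherical (k-means-like) mixture with a regularized full-covariance GMM, i.e. the kind of optimization-plus-regularization combination discussed above (the dataset and parameter values are illustrative).

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

# Three blobs with very different spreads: a poor fit for identical spherical clusters.
X, y = make_blobs(n_samples=600, centers=3, cluster_std=[0.5, 1.5, 3.0],
                  random_state=0)

# Spherical components roughly mimic the k-means assumption; full covariances can
# capture elongated/broad clusters but need a regularized covariance (reg_covar,
# a ridge term on the diagonal) and several restarts (n_init) to avoid bad local
# minima and overfitting.
for cov_type in ("spherical", "full"):
    gmm = GaussianMixture(n_components=3, covariance_type=cov_type,
                          reg_covar=1e-3, n_init=10, random_state=0)
    labels = gmm.fit_predict(X)
    print(cov_type, adjusted_rand_score(y, labels))
```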
|
arxiv:2302.02450
|
A fully resolvable quantum many-body Hamiltonian is introduced that mimics the behavior of the autocatalytic chemical reaction A + B <-> 2B involving two different molecular species, A and B. The model also describes two nonlinearly coupled modes of an optical cavity. Consistent with the current understanding of the relaxation dynamics of integrable systems in isolation, the wavefunction following a quantum quench exhibits irreversibility with retention of the memory of its initial conditions. Salient features of the model include a marked similarity with conventional quantum decay and a total B-to-A conversion, with associated classical-like behavior of the wavefunction, when the initial state does not contain A-type molecules.
|
arxiv:2208.04183
|
We propose a mechanism to give mass to a tensor matter field while preserving the U(1) symmetry. We introduce a complex vector field that couples with the tensor in a topological term. We also analyze the influence of the kinetic terms of the complex vector field on our mechanism.
|
arxiv:hep-th/0412013
|
We consider versions of Malliavin calculus on path spaces of compact manifolds with diffusion measures, defining Gross-Sobolev spaces of differentiable functions and proving their intertwining with solution maps, $I$, of certain stochastic differential equations. This is shown to shed light on fundamental uniqueness questions for this calculus, including uniqueness of the closed derivative operator $d$ and Markov uniqueness of the associated Dirichlet form. A continuity result for the divergence operator by Kree and Kree is extended to this situation. The regularity of conditional expectations of smooth functionals of classical Wiener space, given $I$, is considered and shown to have strong implications for these questions. A major role is played by the (possibly sub-Riemannian) connections induced by stochastic differential equations: damped Markovian connections are used for the covariant derivatives.
|
arxiv:math/0605089
|
In magnetized plasmas, a turbulent cascade occurs in phase space at scales smaller than the thermal Larmor radius ("sub-Larmor scales") [Phys. Rev. Lett. 103, 015003 (2009)]. When the turbulence is restricted to two spatial dimensions perpendicular to the background magnetic field, two independent cascades may take place simultaneously because of the presence of two collisionless invariants. In the present work, freely decaying turbulence of two-dimensional electrostatic gyrokinetics is investigated by means of phenomenological theory and direct numerical simulations. A dual cascade (forward and inverse cascades) is observed in velocity space as well as in position space, which we diagnose by means of nonlinear transfer functions for the collisionless invariants. We find that the turbulence tends to a time-asymptotic state, dominated by a single scale that grows in time. A theory of this asymptotic state is derived in the form of decay laws. Each case that we study falls into one of three regimes (weakly collisional, marginal, and strongly collisional), determined by a dimensionless number D*, a quantity analogous to the Reynolds number. The marginal state is marked by a critical number D* = D_0 that is preserved in time. Turbulence initialized above this value becomes increasingly inertial in time, evolving toward larger and larger D*; turbulence initialized below D_0 becomes more and more collisional, decaying to progressively smaller D*.
|
arxiv:1208.1369
|
In the framework of real 2-component spinors in three-dimensional space-time, we present a description of topologically massive gravity (TMG) in terms of differential forms with triad scalar coefficients. This is essentially a real version of the Newman-Penrose formalism in general relativity. A triad formulation of TMG was considered earlier by Hall, Morgan and Perjes; however, due to an unfortunate choice of signature, some of the spinors underlying the Hall-Morgan-Perjes formalism are real, while others are pure imaginary. We obtain the basic geometrical identities as well as the TMG field equations, including a cosmological constant, for the appropriate signature. As an application of this formalism we discuss the Bianchi type $VIII-IX$ exact solutions of TMG and point out that they are parallelizable manifolds. We also consider various re-identifications of these homogeneous spaces that result in black hole solutions of TMG.
|
arxiv:gr-qc/9812090
|
Optical beam center position on an array of detectors is an important (hidden) parameter that is essential not only from a tracking perspective but also for optimal detection of pulse position modulation symbols in free-space optical communications. In this paper, we examine the beam position estimation problem for photon-counting detector arrays, and to this end we propose and analyze a number of non-Bayesian beam position estimators. These estimators are compared in terms of their mean-square error, bias and probability-of-error performance. Additionally, the Cramér-Rao lower bound (CRLB) of the tracking error is derived, and the CRLB curves give us additional insights concerning the effect of the number of detectors and the beam radius on mean-square error performance. Finally, the effect of beam position estimation on the probability-of-error performance is investigated, and our study concludes that the probability of error of the system is minimized when the beam position on the array is estimated as accurately as possible.
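As a toy illustration of the estimation problem (not one of the paper's estimators), the sketch below simulates Poisson photon counts from a Gaussian beam on a detector array and applies a simple centroid estimator, reporting its RMS error; the grid size, beam radius and count rates are made-up values.

```python
import numpy as np

rng = np.random.default_rng(0)

def photon_counts(center, grid=16, beam_radius=2.0, signal=2000, dark=0.5):
    """Poisson counts on a grid x grid array from a Gaussian beam plus dark counts."""
    y, x = np.mgrid[0:grid, 0:grid] + 0.5
    intensity = np.exp(-((x - center[0])**2 + (y - center[1])**2) / (2 * beam_radius**2))
    rate = signal * intensity / intensity.sum() + dark
    return rng.poisson(rate)

def centroid_estimate(counts):
    """Center of mass of the counts; simple, but biased when dark counts dominate."""
    grid = counts.shape[0]
    y, x = np.mgrid[0:grid, 0:grid] + 0.5
    total = counts.sum()
    return np.array([(x * counts).sum() / total, (y * counts).sum() / total])

true_center = np.array([7.3, 9.1])
errors = [np.linalg.norm(centroid_estimate(photon_counts(true_center)) - true_center)
          for _ in range(200)]
print("RMS error (pixels):", np.sqrt(np.mean(np.square(errors))))
```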
|
arxiv:2001.04007
|
The nature of cosmic ray (CR) transport in the Milky Way remains elusive. The predictions of current micro-physical CR transport models in magneto-hydrodynamic (MHD) turbulence are drastically different from what is observed. These models usually focus on MHD turbulence with a strong guide field and ignore the impact of turbulent intermittency on particle propagation. This motivates our studying the alternative regime of large-amplitude turbulence with $\delta B/B_0 \gg 1$, in which intermittent small-scale magnetic field reversals are ubiquitous. We study particle transport in such turbulence by integrating trajectories in stationary snapshots. To quantify spatial diffusion, we use a setup with continuous particle injection and escape, which we term the turbulent leaky box. We find that particle transport is very different from the strong-guide-field case. Low-energy particles are better confined than high-energy particles, despite less efficient pitch-angle isotropization at small energies. In the limit of weak guide field, energy-dependent confinement is driven by the energy-dependent (in)ability to follow reversing magnetic field lines exactly and by scattering in regions of "resonant curvature", where the field line bends on a scale that is of order the local particle gyro-radius. We derive a heuristic model of particle transport in magnetic folds that approximately reproduces the energy dependence of transport found numerically. We speculate that CR propagation in the Galaxy is regulated by the intermittent field reversals highlighted here and discuss the implications of our findings for CR transport in the Milky Way.
|
arxiv:2304.12335
|
Let G be a complex reductive algebraic group. Fix a Borel subgroup B of G, with unipotent radical U, and a maximal torus T in B with character group X(T). Let S be a submonoid of X(T) generated by finitely many dominant weights. V. Alexeev and M. Brion introduced a moduli scheme M_S which classifies pairs (X, f), where X is an affine G-variety and f is a T-equivariant isomorphism between the categorical quotient of X by U and the toric variety determined by S. In this paper, we prove that M_S is isomorphic to an affine space when S is the weight monoid of a spherical G-module with G of type A.
|
arxiv:1008.0911
|
…allow specialized cell lines to thrive in cultures replicating their native environments, but it also makes bioreactors attractive tools for culturing stem cells. A successful stem-cell-based bioreactor is effective at expanding stem cells with uniform properties and/or promoting controlled, reproducible differentiation into selected mature cell types. There are a variety of bioreactors designed for 3D cell cultures. There are small plastic cylindrical chambers, as well as glass chambers, with regulated internal humidity and moisture, specifically engineered for the purpose of growing cells in three dimensions. The bioreactor uses bioactive synthetic materials such as polyethylene terephthalate membranes to surround the spheroid cells in an environment that maintains high levels of nutrients. They are easy to open and close, so that cell spheroids can be removed for testing, yet the chamber is able to maintain 100% humidity throughout. This humidity is important to achieve maximum cell growth and function. The bioreactor chamber is part of a larger device that rotates to ensure equal cell growth in each direction across three dimensions. Quinxell Technologies, now under Quintech Life Sciences, from Singapore has developed a bioreactor known as the TisXell Biaxial Bioreactor which is specially designed for the purpose of tissue engineering. It is the first bioreactor in the world to have a spherical glass chamber with biaxial rotation, specifically to mimic the rotation of the fetus in the womb, which provides a conducive environment for the growth of tissues. Multiple forms of mechanical stimulation have also been combined into a single bioreactor. Using gene expression analysis, one academic study found that applying a combination of cyclic strain and ultrasound stimulation to pre-osteoblast cells in a bioreactor accelerated matrix maturation and differentiation. The technology of this combined-stimulation bioreactor could be used to grow bone cells more quickly and effectively in future clinical stem cell therapies. MC2 Biotek has also developed a bioreactor known as ProtoTissue that uses gas exchange to maintain high oxygen levels within the cell chamber, improving upon previous bioreactors, since the higher oxygen levels help the cells grow and undergo normal cell respiration. Active areas of research on bioreactors include increasing production scale and refining the physiological environment, both of which could improve the efficiency and efficacy of bioreactors in research or clinical use. Bioreactors are currently used to study, among other things, cell- and tissue-level therapies, and cell and tissue response to specific physiological…
|
https://en.wikipedia.org/wiki/Tissue_engineering
|
In this paper, we will use the entropy approach to derive a necessary and sufficient condition for the existence of an element that belongs to at least half of the sets in a finite family of sets.
|
arxiv:2412.18622
|
Electron motion in an atom driven by an intense linearly polarized laser field can exhibit a laser-dressed stable state, referred to as the Kramers-Henneberger (KH) state or KH atom. Up to now, the existence conditions of this state rely on the presence of a double well in the KH potential, obtained by averaging the motion over one period of the laser. However, the approximation involved in the averaging is largely invalid in the region of the double-well structure; therefore this raises the question of its relevance for identifying signatures of these exotic states. Here we present a method to establish conditions for the existence of the KH atom based on a nonperturbative approach. We show that the KH atom is structured by an asymmetric periodic orbit with the same period as the laser field in a wide range of laser parameters. Its imprint is clearly visible on the wavefunction in quantum simulations. We identify the range of parameters for which this KH state is effective, corresponding to an elliptic periodic orbit.
|
arxiv:2407.18575
|
The rapid advancement of large language models (LLMs) has introduced new challenges in distinguishing human-written text from AI-generated content. In this work, we explored a pipelined approach for AI-generated text detection that includes a feature extraction step (i.e., prompt-based rewriting features inspired by RAIDAR and content-based features derived from the NELA toolkit) followed by a classification module. Comprehensive experiments were conducted on the Defactify4.0 dataset, evaluating two tasks: binary classification to differentiate human-written and AI-generated text, and multi-class classification to identify the specific generative model used to generate the input text. Our findings reveal that NELA features significantly outperform RAIDAR features in both tasks, demonstrating their ability to capture nuanced linguistic, stylistic, and content-based differences. Combining RAIDAR and NELA features provided minimal improvement, highlighting the redundancy introduced by less discriminative features. Among the classifiers tested, XGBoost emerged as the most effective, leveraging the rich feature sets to achieve high accuracy and generalisation.
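A hedged sketch of the pipeline structure described above. The feature extraction step is a placeholder: RAIDAR-style rewriting features and NELA content features would be computed by their respective tools, which are not reproduced here, and the toy texts and labels are fabricated for illustration only.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

def extract_features(texts):
    """Placeholder feature extractor: stands in for NELA/RAIDAR-style features."""
    # e.g., stylistic counts, readability scores, edit distance after prompted rewriting ...
    return np.array([[len(t), t.count(" "), sum(c.isdigit() for c in t)] for t in texts],
                    dtype=float)

texts = ["an example human-written sentence ...", "an example generated sentence ..."] * 50
labels = np.array([0, 1] * 50)                   # 0 = human, 1 = AI-generated

X = extract_features(texts)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0,
                                          stratify=labels)

# Binary classification head of the pipeline (multi-class would set num_class instead).
clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                    eval_metric="logloss")
clf.fit(X_tr, y_tr)
print("F1:", f1_score(y_te, clf.predict(X_te)))
```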
|
arxiv:2503.22338
|
Context. Gaia is an ESA cornerstone mission launched on 19 December 2013, aiming to obtain the most complete and precise 3D map of our Galaxy by observing more than one billion sources. This paper is part of a series of documents explaining the data processing and its results for Gaia Data Release 1, focussing on the G-band photometry. Aims. This paper describes the calibration model of the Gaia photometric passband for Gaia Data Release 1. Methods. The overall principle of splitting the process into internal and external calibrations is outlined. In the internal calibration, a self-consistent photometric system is generated. Then, the external calibration provides the link to the absolute photometric flux scales. Results. The Gaia photometric calibration pipeline explained here was applied to the first data release with good results. Details are given of the various calibration elements, including the mathematical formulation of the models used and the extraction and preparation of the required input parameters (e.g. colour terms). The external calibration in this first release provides the absolute zero point and photometric transformations from the Gaia G passband to other common photometric systems. Conclusions. This paper describes the photometric calibration implemented for the first Gaia data release and the instrumental effects taken into account. For this first release, aperture losses, radiation damage, and other second-order effects have not yet been implemented in the calibration.
|
arxiv:1611.02036
|
We explore the properties of non-local effective actions which include gravitational couplings. Non-local functions originally defined in flat space cannot be easily generalized to curved space. The problem is made worse by the calculational impossibility of providing closed-form expressions in a general metric. The technique of covariant perturbation theory (CPT) was pioneered by Vilkovisky, Barvinsky and collaborators, whereby the effective action is displayed as an expansion in the generalized curvatures, similar to the Schwinger-DeWitt local expansion. We present an alternative procedure to construct the non-local action, which we call non-linear completion. Our approach is in one-to-one correspondence with the more familiar diagrammatic expansion of the effective action. This technique moreover enables us to decide on the appropriate non-local action that generates the QED trace anomaly in 4$d$. In particular, we discuss carefully the curved-space generalization of $\ln\Box$, and show that the anomaly requires both the anomalous logarithm as well as a $1/\Box$ term, where the latter is related to the Riegert anomaly action.
|
arxiv:1507.06321
|
Using techniques of optimal transportation and gradient flows in metric spaces, we extend the notion of the Riemannian curvature dimension condition $RCD(K,\infty)$, introduced (in case the reference measure is finite) by Giuseppe Savaré, the first and the second author, to the case where the reference measure is $\sigma$-finite; in this way the theory includes natural examples such as the Euclidean $n$-dimensional space endowed with the Lebesgue measure, and noncompact manifolds with bounded geometry endowed with the Riemannian volume measure. Another major goal of the paper is to simplify the axiomatization of $RCD(K,\infty)$ (even in the case of a finite reference measure), replacing the assumption of strict $CD(K,\infty)$ with the classic notion of $CD(K,\infty)$.
|
arxiv:1207.4924
|
Predictive coding (PC) accounts of perception now form one of the dominant computational theories of the brain, where they prescribe a general algorithm for inference and learning over hierarchical latent probabilistic models. Despite this, they have enjoyed little export to the broader field of machine learning, where comparative generative modelling techniques have flourished. In part, this has been due to the poor performance of models trained with PC when evaluated by both sample quality and marginal likelihood. By adopting the perspective of PC as a variational Bayes algorithm under the Laplace approximation, we identify the source of these deficits to lie in the exclusion of an associated Hessian term in the PC objective function, which would otherwise regularise the sharpness of the probability landscape and prevent over-certainty in the approximate posterior. To remedy this, we make three primary contributions: we begin by suggesting a simple Monte Carlo estimated evidence lower bound which relies on sampling from the Hessian-parameterised variational posterior. We then derive a novel block-diagonal approximation to the full Hessian matrix that has lower memory requirements and favourable mathematical properties. Lastly, we present an algorithm that combines our method with standard PC to reduce memory complexity further. We evaluate models trained with our approach against the standard PC framework on image benchmark datasets. Our approach produces higher log-likelihoods and qualitatively better samples that more closely capture the diversity of the data-generating distribution.
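A generic sketch, not the paper's algorithm: a Monte Carlo estimate of the evidence lower bound with a Gaussian variational posterior whose covariance is the inverse of a Hessian-based precision matrix, in the spirit of the Laplace approximation. The linear-Gaussian toy model, the precision matrix and all numbers below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_joint(x, z, W, sigma_x=0.5, sigma_z=1.0):
    """Unnormalized log p(x, z) for a toy model x = W z + Gaussian noise."""
    log_prior = -0.5 * np.sum(z ** 2) / sigma_z ** 2
    log_lik = -0.5 * np.sum((x - W @ z) ** 2) / sigma_x ** 2
    return log_prior + log_lik                  # additive constants omitted

def mc_elbo(x, W, mu, precision, n_samples=256):
    """ELBO ~ E_q[log p(x, z)] + H[q], with q = N(mu, precision^{-1})."""
    d = len(mu)
    cov = np.linalg.inv(precision)
    L = np.linalg.cholesky(cov)
    # Gaussian entropy: 0.5*d*(1 + log 2*pi) - 0.5*log det(precision)
    entropy = 0.5 * d * (1.0 + np.log(2.0 * np.pi)) - 0.5 * np.linalg.slogdet(precision)[1]
    samples = mu + rng.standard_normal((n_samples, d)) @ L.T
    expected_log_joint = np.mean([log_joint(x, z, W) for z in samples])
    return expected_log_joint + entropy

# Toy usage: 2-d latent, 3-d observation, precision used as a stand-in Hessian of -log p.
W = np.array([[1.0, 0.0], [0.5, 1.0], [0.0, 2.0]])
x = np.array([0.3, -0.2, 1.1])
mu = np.zeros(2)
precision = np.array([[6.0, 1.0], [1.0, 9.0]])
print(mc_elbo(x, W, mu, precision))
```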
|
arxiv:2303.04976
|
2nd ed.). Boston: AP Professional. ISBN 0-12-518405-0. Carroll, John M. (2000). Making Use: Scenario-Based Design of Human–Computer Interactions. Cambridge, Mass.: MIT Press. ISBN 0-262-03279-1. Rosson, Mary Beth; John Millar Carroll (2002). Usability Engineering: Scenario-Based Development of Human-Computer Interaction. Morgan Kaufmann. ISBN 1-55860-712-9. Nielsen, Jakob (1993). Usability Engineering. Morgan Kaufmann. ISBN 978-0-12-518406-9. Spool, Jared; Tara Scanlon; Carolyn Snyder; Terri DeAngelo (1998). Web Site Usability: A Designer's Guide. Morgan Kaufmann. ISBN 978-1-55860-569-5. Mayhew, Deborah (1999). The Usability Engineering Lifecycle: A Practitioner's Handbook. Morgan Kaufmann. ISBN 978-1-55860-561-9. Faulkner, Xristine (2000). Usability Engineering. Palgrave. ISBN 978-0-333-77321-5. Smith, Michael J. (2001). Usability Evaluation and Interface Design: Cognitive Engineering, Intelligent Agents, and Virtual Reality, Volume 1 (Human Factors and Ergonomics). CRC Press. ISBN 978-0-8058-3607-3. Rosson, Mary Beth; John Millar Carroll (2002). Usability Engineering: Scenario-Based Development of Human-Computer Interaction. Morgan Kaufmann. Jacko, Julie (2012). Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies, and Emerging Applications. CRC Press. ISBN 978-1-4398-2943-1. Leventhal, Laura (2007). Usability Engineering: Process, Products & Examples. Prentice Hall. ISBN 978-0-13-157008-5. Sears, Andrew; Julie A. Jacko (2007). The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications. CRC Press. ISBN 978-0-8058-5870-9. == External links == Digital.gov Usability.gov The National Institute of Standards and Technology The Web Accessibility Initiative guidelines == References ==
|
https://en.wikipedia.org/wiki/Usability_engineering
|
We investigate ultracold and dilute bosonic atoms under strong transverse harmonic confinement by using a 1D modified Gross-Pitaevskii equation (1D MGPE), which accounts for the energy dependence of the two-body scattering amplitude within an effective-range expansion. We study sound waves and solitons of the quasi-1D system, comparing 1D MGPE results with the 1D GPE ones. We point out that, when the finite-size nature of the interaction is taken into account, the speed of sound and the density profiles of both dark and bright solitons show relevant quantitative changes with respect to what is predicted by the standard 1D GPE.
|
arxiv:1501.01546
|
We construct an evidently positive multiple series as a generating function for partitions satisfying the multiplicity condition in Schur's partition theorem. Refinements of the series when parts in the said partitions are classified according to their parities or values mod 3 are also considered. Direct combinatorial interpretations of the series are provided.
|
arxiv:1812.10039
|
Benefiting from language flexibility and compositionality, humans naturally intend to use language to command an embodied agent for complex tasks such as navigation and object manipulation. In this work, we aim to fill the blank of the last mile of embodied agents -- object manipulation by following human guidance, e.g., "move the red mug next to the box while keeping it upright." To this end, we introduce an Automatic Manipulation Solver (AMSolver) system and build a Vision-and-Language Manipulation benchmark (VLMbench) based on it, containing various language instructions on categorized robotic manipulation tasks. Specifically, modular rule-based task templates are created to automatically generate robot demonstrations with language instructions, consisting of diverse object shapes and appearances, action types, and motion constraints. We also develop a keypoint-based model, 6D-CLIPort, to deal with multi-view observations and language input and to output a sequence of 6-degrees-of-freedom (DoF) actions. We hope the new simulator and benchmark will facilitate future research on language-guided robotic manipulation.
|
arxiv:2206.08522
|
The parity-violating asymmetry $A_{PV}$ in $^{208}$Pb, recently measured by the PREX-2 collaboration, is studied using modern relativistic (covariant) and non-relativistic energy density functionals. We first assess the theoretical uncertainty on $A_{PV}$ which is intrinsic to the adopted approach. To this end, we use quantified functionals that are able to accommodate our previous knowledge of nuclear observables such as binding energies, charge radii, and the dipole polarizability $\alpha_D$ of $^{208}$Pb. We then add the quantified value of $A_{PV}$ together with $\alpha_D$ to our calibration dataset to optimize new functionals. Based on these results, we predict a neutron skin thickness in $^{208}$Pb of $R_\mathrm{skin} = 0.19 \pm 0.02$ fm and a symmetry-energy slope $L = 54 \pm 8$ MeV. These values are consistent with other estimates based on astrophysical data and are significantly lower than those recently reported using a particular set of relativistic energy density functionals. We also make a prediction for the $A_{PV}$ value in $^{48}$Ca that will soon be available from the CREX measurement.
|
arxiv:2105.15050
|
Based on quantum-kinetic equations, coupled spin-charge drift-diffusion equations are derived for a two-dimensional electron gas on a cylindrical surface. Besides the Rashba and Dresselhaus spin-orbit interaction, elastic scattering on impurities and a constant electric field are taken into account. From the solution of the drift-diffusion equations, a long-lived spin excitation is identified for spins coupled to the Rashba term on a cylinder with a given radius. The electric-field-driven weakly damped spin waves are manifest in the components of the magnetization and have potential for non-ballistic spin-device applications.
|
arxiv:0808.0069
|
In recent astronomical observations, an almost dark galaxy, designated Nube, has unveiled an intriguing anomaly in its stellar distribution. Specifically, Nube exhibits an exceptionally low central brightness, with the 2D half-light radius of its stars far exceeding the typical values found in dwarf galaxies, and even surpassing those observed in ultra-diffuse galaxies (UDGs). This phenomenon is difficult to explain within the framework of cold dark matter (CDM). Meanwhile, due to its ultralight particle mass, fuzzy dark matter (FDM) exhibits a de Broglie wavelength on the order of kiloparsecs at the typical velocities of galaxies. The interference between different modes of the FDM wave gives rise to fluctuations in the gravitational field, which can lead to the dynamical heating of stars within galaxies, resulting in an expansion of their spatial distribution. In this paper, we aim to interpret the anomalous stellar distribution observed in Nube as a consequence of the dynamical heating effect induced by FDM. Our findings suggest that an FDM particle mass of around $1-2\times10^{-23}$ eV can effectively account for this anomaly, and we propose that the FDM dynamical heating effect provides new insight into understanding the formation of field UDGs.
|
arxiv:2404.05375
|
We show that proper Lie groupoids are locally linearizable. As a consequence, the orbit space of a proper Lie groupoid is a smooth orbispace (a Hausdorff space which locally looks like the quotient of a vector space by a linear compact Lie group action). In the case of proper (quasi-)symplectic groupoids, the orbit space admits a natural integral affine structure, which makes it into an affine orbifold with locally convex polyhedral boundary, and the local structure near each boundary point is isomorphic to that of a Weyl chamber of a compact Lie group. We then apply these results to the study of momentum maps of Hamiltonian actions of proper (quasi-)symplectic groupoids, and show that these momentum maps preserve natural transverse affine structures with local convexity properties. Many convexity theorems in the literature can be recovered from this last statement and some elementary results about affine maps.
|
arxiv:math/0407208
|
The advent of federated learning (FL) highlights the practical necessity of the right to be forgotten for all clients, allowing them to request data deletion from the machine learning model's service provider. This necessity has spurred a growing demand for federated unlearning (FU). Feature unlearning has gained considerable attention due to its applications in unlearning sensitive, backdoor, and biased features. Existing methods employ the influence function to achieve feature unlearning, which is impractical for FL as it necessitates the participation of other clients, if not all, in the unlearning process. Furthermore, current research lacks an evaluation of the effectiveness of feature unlearning. To address these limitations, we define feature sensitivity for evaluating feature unlearning according to Lipschitz continuity. This metric characterizes the rate of change, or sensitivity, of the model output to perturbations in the input feature. We then propose an effective federated feature unlearning framework called Ferrari, which minimizes feature sensitivity. Extensive experimental results and theoretical analysis demonstrate the effectiveness of Ferrari across various feature unlearning scenarios, including sensitive, backdoor, and biased features. The code is publicly available at https://github.com/ongwinkent/federated-feature-unlearning
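A hedged sketch (not the paper's implementation) of the feature-sensitivity idea described above: measure how strongly the model output changes when a chosen input feature is perturbed, in the spirit of a local Lipschitz estimate. The function signature, perturbation scheme and constants are assumptions for illustration.

```python
import torch

def feature_sensitivity(model, x, feature_idx, sigma=0.1, n_samples=64):
    """Average output change per unit perturbation of one input feature (x is a [N, D] batch)."""
    model.eval()
    with torch.no_grad():
        base = model(x)
        ratios = []
        for _ in range(n_samples):
            x_pert = x.clone()
            noise = sigma * torch.randn_like(x_pert[:, feature_idx])
            x_pert[:, feature_idx] += noise                    # perturb only the chosen feature
            delta_out = (model(x_pert) - base).norm(dim=-1)    # output change per sample
            delta_in = noise.abs() + 1e-12                     # input change per sample
            ratios.append((delta_out / delta_in).mean())
        return torch.stack(ratios).mean()

# Usage with a hypothetical model and a batch of tabular inputs:
# sens = feature_sensitivity(model, x_batch, feature_idx=3)
```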
|
arxiv:2405.17462
|
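as a rough illustration of the feature - sensitivity metric described in the ferrari entry above ( arxiv:2405.17462 ), the sketch below estimates a local lipschitz - style ratio ( change in model output over the size of an input - feature perturbation ) with plain numpy. the function name, the perturbation scale and the toy linear model are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def feature_sensitivity(model, x, feature_idx, sigma=0.05, n_perturb=32, rng=None):
    """Estimate the sensitivity of a model's output to one input feature.

    Approximates a local Lipschitz-style ratio: the change in model output
    divided by the size of a small Gaussian perturbation applied to the
    chosen feature, averaged over several random perturbations.
    """
    rng = np.random.default_rng() if rng is None else rng
    base = model(x)                       # reference output for the clean input
    ratios = []
    for _ in range(n_perturb):
        delta = rng.normal(0.0, sigma)
        x_pert = x.copy()
        x_pert[feature_idx] += delta
        change = np.linalg.norm(model(x_pert) - base)
        ratios.append(change / (abs(delta) + 1e-12))
    return float(np.mean(ratios))

# toy usage with a linear "model": sensitivity should track each weight's magnitude
w = np.array([0.1, 2.0, -0.5])
model = lambda x: np.atleast_1d(w @ x)
x0 = np.array([1.0, 1.0, 1.0])
print([round(feature_sensitivity(model, x0, j), 2) for j in range(3)])
```

for the toy linear model the estimate simply recovers the magnitude of each weight, which is the intended sanity check; a real unlearning objective would then penalize this quantity for the targeted feature.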
the orientation of water molecules is the key factor for the fast transport of water in small nanotubes. it has been accepted that the bidirectional water burst in short nanotubes can be transformed into unidirectional transport when the orientation of water molecules is maintained in long nanotubes under an external field. in this work, based on molecular dynamics simulations and first - principles calculations, we showed that, without an external field, only 21 water molecules are needed to maintain unidirectional single - file water transport intrinsically in a carbon nanotube on the timescale of seconds. detailed analysis indicates that this surprising result comes from a step - by - step process for the flip of the water chain, which differs from the commonly assumed concerted mechanism. considering that the thickness of a cell membrane ( normally 5 - 10 nm ) is larger than the length threshold of the unidirectional water wire, this study suggests that no external field may be needed to maintain unidirectional flow in the water channel at macroscopic timescales.
|
arxiv:2008.05148
|
blu - ray is the name of a next - generation optical disc format jointly developed by the blu - ray disc association, a group of the world ' s leading consumer electronics, personal computer and media manufacturers. the format was developed to enable recording, rewriting and playback of high - definition video, as well as storing large amounts of data. this extra capacity, combined with the use of advanced video and audio codecs, will offer consumers an unprecedented hd experience. while current optical disc technologies such as dvd and dvd - ram rely on a red laser to read and write data, the new format uses a blue - violet laser instead, hence the name blu - ray. blu - ray also promises added security, making way for copyright protection : blu - ray discs can have a unique id written on them to support copyright protection inside the recorded streams. blu - ray disc thus takes dvd technology one step further by using a shorter - wavelength laser.
|
arxiv:1310.1551
|
parameter - efficient fine - tuning ( peft ) has become a key training strategy for large language models. however, its reliance on fewer trainable parameters poses security risks, such as task - agnostic backdoors. despite their severe impact on a wide range of tasks, there is no practical defense solution available that effectively counters task - agnostic backdoors within the context of peft. in this study, we introduce obliviate, a peft - integrable backdoor defense. we develop two techniques aimed at amplifying benign neurons within peft layers and penalizing the influence of trigger tokens. our evaluations across three major peft architectures show that our method can significantly reduce the attack success rate of the state - of - the - art task - agnostic backdoors ( 83. 6 % $ \ downarrow $ ). furthermore, our method exhibits robust defense capabilities against both task - specific backdoors and adaptive attacks. source code can be obtained at https://github.com/obliviatearr/obliviate.
|
arxiv:2409.14119
|
to be the supremum of all the sums of finitely many of them. a measure $ \ mu $ on $ \ sigma $ is $ \ kappa $ - additive if for any $ \ lambda < \ kappa $ and any family of disjoint sets $ x _ { \ alpha }, \ alpha < \ lambda $, the following hold : $ \ bigcup _ { \ alpha \ in \ lambda } x _ { \ alpha } \ in \ sigma $ and $ \ mu \ left ( \ bigcup _ { \ alpha \ in \ lambda } x _ { \ alpha } \ right ) = \ sum _ { \ alpha \ in \ lambda } \ mu \ left ( x _ { \ alpha } \ right ) $. the second condition is equivalent to the statement that the ideal of null sets is $ \ kappa $ - complete. = = = sigma - finite measures = = = a measure space $ ( x, \ sigma, \ mu ) $ is called finite if $ \ mu ( x ) $ is a finite real number ( rather than $ \ infty $ ). nonzero finite measures are analogous to probability measures in the sense that any finite measure $ \ mu $ is proportional to the probability measure $ { \ frac { 1 } { \ mu ( x ) } } \ mu $. a measure $ \ mu $ is called $ \ sigma $ - finite if $ x $ can be decomposed into a countable union of measurable sets of finite measure. analogously, a set in a measure space is said to have a $ \ sigma $ - finite measure if it is a countable union of sets with finite measure. for example, the real numbers with the standard lebesgue measure are $ \ sigma $ - finite but not finite. consider the closed intervals $ [ k, k + 1 ] $ for all integers $ k $ ; there are countably many such intervals, each has measure 1
|
https://en.wikipedia.org/wiki/Measure_(mathematics)
|
we unveil the existence of a non - trivial berry phase associated to the dynamics of a quantum particle in a one dimensional box with moving walls. it is shown that a suitable choice of boundary conditions has to be made in order to preserve unitarity. for these boundary conditions we compute explicitly the geometric phase two - form on the parameter space. the unboundedness of the hamiltonian describing the system leads to a natural prescription of renormalization for divergent contributions arising from the boundary.
|
arxiv:1509.00381
|
we investigate the flow of material from highly misaligned and polar circumbinary discs that feed the formation of circumstellar discs around each binary component. with three - dimensional hydrodynamic simulations we consider equal mass binaries with low eccentricity. we also simulate inclined test particles and highly - misaligned circumstellar discs around one binary component for comparison. during kozai - lidov ( kl ) cycles, the circumstellar disc structure is altered through exchanges of disc eccentricity with disc tilt. highly inclined circumstellar discs and test particles around individual binary components can experience very strong kl oscillations. the continuous accretion of highly misaligned material from the circumbinary disc allows the kl oscillations of circumstellar discs to be long - lived. in this process, the circumbinary material is continuously delivered with a high inclination to the lower inclination circumstellar discs. we find that the simulation resolution is important for modeling the longevity of the kl oscillations. an initially polar circumbinary disc forms nearly polar, circumstellar discs that undergo kl cycles. the gas streams accreting onto the polar circumstellar discs vary in tilt during each binary orbital period, which determines how much material is accreted onto the discs. the long - lived kl cycles in polar circumstellar discs may lead to the formation of polar s - type planets in binary star systems.
|
arxiv:2301.11769
|
electron - beam lithography ( e - beam lithography ) is the practice of scanning a beam of electrons in a patterned fashion across a surface covered with a film ( called the resist ), ( " exposing " the resist ) and of selectively removing either exposed or non - exposed regions of the resist ( " developing " ). the purpose, as with photolithography, is to create very small structures in the resist that can subsequently be transferred to the substrate material, often by etching. it was developed for manufacturing integrated circuits, and is also used for creating nanotechnology architectures. the primary advantage of electron beam lithography is that it is one of the ways to beat the diffraction limit of light and make features in the nanometer range. this form of maskless lithography has found wide usage in photomask - making used in photolithography, low - volume production of semiconductor components, and research & development. the key limitation of electron beam lithography is throughput, i. e., the very long time it takes to expose an entire silicon wafer or glass substrate. a long exposure time leaves the user vulnerable to beam drift or instability which may occur during the exposure. also, the turn - around time for reworking or re - design is lengthened unnecessarily if the pattern is not being changed the second time. it is known that focused - ion beam lithography has the capability of writing extremely fine lines ( less than 50 nm line and space has been achieved ) without proximity effect. however, because the writing field in ion - beam lithography is quite small, large area patterns must be created by stitching together the small fields. ion track technology is a deep cutting tool with a resolution limit around 8 nm applicable to radiation - resistant minerals, glasses and polymers. it is capable of generating holes in thin films without any development process. structural depth can be defined either by ion range or by material thickness. aspect ratios up to several 10 ^ 4 can be reached. the technique can shape and texture materials at a defined inclination angle. random patterns, single - ion track structures and aimed patterns consisting of individual single tracks can be generated. x - ray lithography is a process used in the electronic industry to selectively remove parts of a thin film. it uses x - rays to transfer a geometric pattern from a mask to a light - sensitive chemical photoresist, or simply " resist ", on the substrate. a series of chemical treatments then engraves the produced pattern into the material underneath the photo
|
https://en.wikipedia.org/wiki/MEMS
|
let $ \ mu $ be a positive measure on $ r ^ d $. it is known that if the space $ l ^ 2 ( \ mu ) $ has a frame of exponentials then the measure $ \ mu $ must be of " pure type " : it is either discrete, absolutely continuous or singular continuous. it has been conjectured that a similar phenomenon should be true within the class of singular continuous measures, in the sense that $ \ mu $ cannot admit an exponential frame if it has components of different dimensions. we prove that this is not the case by showing that the sum of an arc length measure and a surface measure can have a frame of exponentials. on the other hand we prove that a measure of this form cannot have a frame of exponentials if the surface has a point of non - zero gaussian curvature. this is in spite of the fact that each " pure " component of the measure separately may admit such a frame.
|
arxiv:1607.06267
|
lung cancer is the leading cause of cancer death in the world. accurate determination of the egfr ( epidermal growth factor receptor ) mutation status is highly relevant for the proper treatment of these patients. purpose : the aim of this study was to predict the mutational status of the egfr in non - small cell lung cancer patients using radiomics features extracted from pet - ct images. methods : retrospective study involving 34 patients with lung cancer confirmed by histology and with egfr mutation status assessment. a total of 2,205 radiomics features were extracted from manual segmentation of the pet - ct images using the pyradiomics library. both computed tomography and positron emission tomography images were used. all images were acquired with intravenous iodinated contrast and f18 - fdg. preprocessing includes resampling, normalization, and discretization of the pixel intensity. three methods were used for the feature selection process : backward selection ( set 1 ), forward selection ( set 2 ), and feature importance analysis of a random forest model ( set 3 ). nine machine learning methods were used for radiomics model building. results : 35. 2 % of patients had egfr mutation, without significant differences in age, gender, tumor size and suvmax. after the feature selection process, 6, 7 and 17 radiomics features were selected in each group, respectively. the best performances were obtained by ridge regression in set 1 : auc of 0. 826 ( 95 % ci, 0. 811 - 0. 839 ), random forest in set 2 : auc of 0. 823 ( 95 % ci, 0. 808 - 0. 838 ) and neural network in set 3 : auc of 0. 821 ( 95 % ci, 0. 808 - 0. 835 ). conclusion : the analysis of radiomics features has the potential to predict clinically relevant mutations in lung cancer patients through a non - invasive methodology.
|
arxiv:2303.08569
|
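the pipeline in the radiomics entry above ( arxiv:2303.08569 ) — feature extraction, feature selection, then classifier evaluation by auc — can be sketched with scikit - learn. the synthetic data, the forest - based selection of 17 features and the cross - validated ridge classifier below are stand - ins for illustration, not the study's actual data or settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import RidgeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# synthetic stand-in: 34 "patients", many correlated radiomics-like features
X, y = make_classification(n_samples=34, n_features=200, n_informative=8,
                           weights=[0.65, 0.35], random_state=0)

# feature importance analysis with a random forest ("set 3"-style selection)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:17]

# ridge classifier evaluated with cross-validated AUC on the selected features
clf = make_pipeline(StandardScaler(), RidgeClassifier(alpha=1.0))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_predict(clf, X[:, top], y, cv=cv, method="decision_function")
print("cross-validated AUC:", round(roc_auc_score(y, scores), 3))
```

note that selecting features on the full data before cross-validation, as done here for brevity, optimistically biases the auc; a faithful replication would nest the selection inside each fold.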
in a recent letter [ phys. rev. lett., 77, 4536 ( 1996 ), chao - dyn / 9609014 ] altland and zirnbauer claim that they rigorously proved the complete analogy between a ( classically chaotic ) dynamical system and disordered ( random ) solids. the purpose of this comment is to show that, in fact, their theory fails to take into account dynamical features which go beyond standard random matrix theory description.
|
arxiv:cond-mat/9703106
|
here we study the np - complete $ k $ - sat problem. although the worst - case complexity of np - complete problems is conjectured to be exponential, there exist parametrized random ensembles of problems where solutions can typically be found in polynomial time for suitable ranges of the parameter. in fact, random $ k $ - sat, with $ \ alpha = m / n $ as control parameter, can be solved quickly for small enough values of $ \ alpha $. it shows a phase transition between a satisfiable phase and an unsatisfiable phase. for branch and bound algorithms, which operate in the space of feasible boolean configurations, the empirically hardest problems are located only close to this phase transition. here we study $ k $ - sat ( $ k = 3, 4 $ ) and the related optimization problem max - sat by a linear programming approach, which is widely used for practical problems and allows for polynomial run time. in contrast to branch and bound it operates outside the space of feasible configurations. on the other hand, finding a solution within polynomial time is not guaranteed. we investigated several variants, such as including artificial objective functions, so - called cutting - plane approaches, and a mapping to the np - complete vertex - cover problem. we observed several easy - hard transitions, from regions where the problems are typically solvable ( in polynomial time ) using the given algorithms to regions where they are not solvable in polynomial time. for the related vertex - cover problem on random graphs these easy - hard transitions can be identified with structural properties of the graphs, like percolation transitions. for the present random $ k $ - sat problem we have investigated numerous structural properties also exhibiting clear transitions, but they appear not to be correlated with the easy - hard transitions observed here. this renders the behaviour of random $ k $ - sat more complex than that of, e. g., the vertex - cover problem.
|
arxiv:1702.02821
|
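a minimal sketch of the linear - programming approach mentioned in the random $ k $ - sat entry above ( arxiv:1702.02821 ): build the standard lp relaxation of a random 3 - sat formula, add a random ( artificial ) objective, solve with scipy, and round the fractional solution; whether the rounding yields a satisfying assignment is the kind of success / failure that the easy - hard transitions track. the clause generator, the rounding rule and the tested densities are illustrative assumptions, not the paper's protocol.

```python
import numpy as np
from scipy.optimize import linprog

def random_ksat(n, m, k=3, rng=None):
    """Draw m random k-clauses over n variables; a literal is (var, sign)."""
    rng = np.random.default_rng() if rng is None else rng
    return [[(int(v), int(rng.choice([1, -1]))) for v in rng.choice(n, k, replace=False)]
            for _ in range(m)]

def satisfied(clauses, assignment):
    return all(any(assignment[v] == (s == 1) for v, s in c) for c in clauses)

def lp_round_solve(n, clauses, rng):
    """LP relaxation of k-SAT with a random (artificial) objective, then rounding.

    Each clause (l_1 or ... or l_k) becomes
        sum_{positive} x_i + sum_{negated} (1 - x_j) >= 1,
    with 0 <= x_i <= 1, rewritten in A_ub @ x <= b_ub form for linprog.
    """
    A, b = [], []
    for clause in clauses:
        row = np.zeros(n)
        rhs = -1.0
        for var, sign in clause:
            row[var] = -sign            # -x_i for a positive literal, +x_j for a negated one
            if sign == -1:
                rhs += 1.0              # constant from (1 - x_j)
        A.append(row)
        b.append(rhs)
    res = linprog(c=rng.normal(size=n), A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(0, 1)] * n, method="highs")
    if res.status != 0:
        return False
    assignment = res.x > 0.5            # naive rounding of the fractional solution
    return satisfied(clauses, assignment)

rng = np.random.default_rng(1)
n = 200
for alpha in (2.0, 3.5, 4.5):
    clauses = random_ksat(n, int(alpha * n), rng=rng)
    print(f"alpha={alpha}: LP + rounding found a satisfying assignment:",
          lp_round_solve(n, clauses, rng))
```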
we show that the number $ p _ d $ of non - similar perfect $ d $ - dimensional lattices eventually satisfies the inequalities $ e ^ { d ^ { 1 - \ epsilon } } < p _ d < e ^ { d ^ { 3 + \ epsilon } } $ for arbitrarily small strictly positive $ \ epsilon $.
|
arxiv:1704.02234
|
we address the issue of coupling variables which are essentially classical to variables that are quantum. two approaches are discussed. in the first ( based on collaborative work with l. diósi ), continuous quantum measurement theory is used to construct a phenomenological description of the interaction of a quasiclassical variable $ x $ with a quantum variable $ q $, where the quasiclassical nature of $ x $ is assumed to have come about as a result of decoherence. the state of the quantum subsystem evolves according to the stochastic non - linear schrödinger equation of a continuously measured system, and the classical system couples to a stochastic c - number $ \ bar { q } ( t ) $ representing the imprecisely measured value of $ q $. the theory gives intuitively sensible results even when the quantum system starts out in a superposition of well - separated localized states. the second approach involves a derivation of an effective theory from the underlying quantum theory of the combined quasiclassical - quantum system, and uses the decoherent histories approach to quantum theory.
|
arxiv:gr-qc/9808071
|
the nearest exoplanets to the sun are our best possibilities for detailed characterization. we report the discovery of a compact multi - planet system of super - earths orbiting the nearby red dwarf gj 887, using radial velocity measurements. the planets have orbital periods of 9. 3 and 21. 8 days. assuming an earth - like albedo, the equilibrium temperature of the 21. 8 day planet is approximately 350 k, which is interior to, but close to, the inner edge of the liquid - water habitable zone. we also detect a further unconfirmed signal with a period of 50 days, which could correspond to a third super - earth in a more temperate orbit. gj 887 is an unusually magnetically quiet red dwarf with a photometric variability below 500 parts - per - million, making its planets amenable to phase - resolved photometric characterization.
|
arxiv:2006.16372
|
to mine large digital libraries in humanistically meaningful ways, scholars need to divide them by genre. this is a task that classification algorithms are well suited to assist, but they need adjustment to address the specific challenges of this domain. digital libraries pose two problems of scale not usually found in the article datasets used to test these algorithms. 1 ) because libraries span several centuries, the genres being identified may change gradually across the time axis. 2 ) because volumes are much longer than articles, they tend to be internally heterogeneous, and the classification task needs to begin with segmentation. we describe a multi - layered solution that trains hidden markov models to segment volumes, and uses ensembles of overlapping classifiers to address historical change. we test this approach on a collection of 469, 200 volumes drawn from hathitrust digital library. to demonstrate the humanistic value of these methods, we extract 32, 209 volumes of fiction from the digital library, and trace the changing proportions of first - and third - person narration in the corpus. we note that narrative points of view seem to have strong associations with particular themes and genres.
|
arxiv:1309.3323
|
beta - decay rates for spherical neutron - rich r - process waiting - point nuclei are calculated within a fully self - consistent quasiparticle random - phase approximation, formulated in the hartree - fock - bogolyubov canonical single - particle basis. the same skyrme force is used everywhere in the calculation except in the proton - neutron particle - particle channel, where a finite - range force is consistently employed. in all but the heaviest nuclei, the resulting half - lives are usually shorter by factors of 2 to 5 than those of calculations that ignore the proton - neutron particle - particle interaction. the shorter half - lives alter predictions for the abundance distribution of r - process elements and for the time it takes to synthesize them.
|
arxiv:nucl-th/9902059
|
de - rating or vulnerability factors are a major feature of failure analysis efforts mandated by today ' s functional safety requirements. determining the functional de - rating of sequential logic cells typically requires computationally intensive fault - injection simulation campaigns. in this paper a new approach is proposed which uses machine learning to estimate the functional de - rating of individual flip - flops and thus, optimising and enhancing fault injection efforts. therefore, first, a set of per - instance features is described and extracted through an analysis approach combining static elements ( cell properties, circuit structure, synthesis attributes ) and dynamic elements ( signal activity ). second, reference data is obtained through first - principles fault simulation approaches. finally, one part of the reference dataset is used to train the machine learning algorithm and the remaining is used to validate and benchmark the accuracy of the trained tool. the intended goal is to obtain a trained model able to provide accurate per - instance functional de - rating data for the full list of circuit instances, an objective that is difficult to reach using classical methods. the presented methodology is accompanied by a practical example to determine the performance of various machine learning models for different training sizes.
|
arxiv:2002.09945
|
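the machine - learning flow in the de - rating entry above ( arxiv:2002.09945 ) — per - flip - flop features, fault - simulation reference values, models trained on a fraction of the instances — can be mimicked on synthetic data as below; the feature construction, the two regressors and the training fractions are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# synthetic stand-in for per-flip-flop features (cell properties, fan-out,
# logic depth, signal activity, ...) and a fault-simulation reference de-rating
n_ff, n_feat = 5000, 12
X = rng.normal(size=(n_ff, n_feat))
derating = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 3] - 0.8 * X[:, 7])))  # in [0, 1]
derating += rng.normal(0, 0.05, n_ff)                                    # simulation noise

for train_size in (0.05, 0.2, 0.5):
    X_tr, X_te, y_tr, y_te = train_test_split(X, derating, train_size=train_size,
                                              random_state=0)
    for name, model in [("linear", LinearRegression()),
                        ("random forest", RandomForestRegressor(n_estimators=200,
                                                                random_state=0))]:
        model.fit(X_tr, y_tr)
        mae = mean_absolute_error(y_te, model.predict(X_te))
        print(f"train fraction {train_size:.2f} | {name:13s} | MAE {mae:.3f}")
```

sweeping the training fraction mirrors the paper's question of how much fault - injection reference data is needed before the learned estimates become useful.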
vision - based formation control systems are attractive because they can use inexpensive sensors and can work in gps - denied environments. the safety assurance for such systems is challenging : the vision component ' s accuracy depends on the environment in complicated ways, these errors propagate through the system and lead to incorrect control actions, and there exists no formal specification for end - to - end reasoning. we address this problem and propose a technique for safety assurance of vision - based formation control : first, we propose a scheme for constructing quantizers that are consistent with vision - based perception. next, we show how the convergence analysis of a standard quantized consensus algorithm can be adapted for the constructed quantizers. we use the recently defined notion of perception contracts to create error bounds on the actual vision - based perception pipeline using sampled data from different ground truth states, environments, and weather conditions. specifically, we use a quantizer in logarithmic polar coordinates, and we show that this quantizer is suitable for the constructed perception contracts for the vision - based position estimation, where the error worsens with respect to the absolute distance between agents. we build our formation control algorithm with this nonuniform quantizer, and we prove its convergence employing an existing result for quantized consensus.
|
arxiv:2210.00982
|
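a minimal sketch of a quantizer in logarithmic polar coordinates, as used in the formation - control entry above ( arxiv:2210.00982 ): radial bins grow geometrically with distance, mirroring perception error that worsens with inter - agent distance. the bin counts and radial range are made - up parameters, not values from the paper.

```python
import numpy as np

def log_polar_quantize(rel_pos, r_min=0.5, r_max=50.0, n_r=16, n_theta=36):
    """Quantize a 2D relative position into logarithmic polar bins.

    Radial bin edges are spaced logarithmically between r_min and r_max, so
    quantization cells grow with distance -- matching perception error that
    worsens with the absolute distance between agents. Returns the centre of
    the cell containing rel_pos, in Cartesian coordinates.
    """
    x, y = rel_pos
    r = np.clip(np.hypot(x, y), r_min, r_max)
    theta = np.arctan2(y, x)

    r_edges = np.geomspace(r_min, r_max, n_r + 1)
    r_idx = min(np.searchsorted(r_edges, r, side="right") - 1, n_r - 1)
    r_center = np.sqrt(r_edges[r_idx] * r_edges[r_idx + 1])   # geometric midpoint

    t_idx = int((theta + np.pi) / (2 * np.pi / n_theta)) % n_theta
    t_center = -np.pi + (t_idx + 0.5) * (2 * np.pi / n_theta)

    return np.array([r_center * np.cos(t_center), r_center * np.sin(t_center)])

for p in ([1.0, 0.2], [10.0, 2.0], [40.0, 8.0]):
    q = log_polar_quantize(p)
    print(p, "->", np.round(q, 2), "| abs. error", round(float(np.linalg.norm(q - p)), 2))
```

the printed absolute error grows with range while the relative error stays roughly constant, which is the property a vision - consistent quantizer needs.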
piecewise - deterministic markov process ( pdmp ) samplers constitute a state of the art markov chain monte carlo ( mcmc ) paradigm in bayesian computation, with examples including the zig - zag and bouncy particle sampler ( bps ). recent work on the zig - zag has indicated its connection to hamiltonian monte carlo, a version of the metropolis algorithm that exploits hamiltonian dynamics. here we establish that, in fact, the connection between the paradigms extends far beyond the specific instance. the key lies in ( 1 ) the fact that any time - reversible deterministic dynamics provides a valid metropolis proposal and ( 2 ) how pdmps ' characteristic velocity changes constitute an alternative to the usual acceptance - rejection. we turn this observation into a rigorous framework for constructing rejection - free metropolis proposals based on bouncy hamiltonian dynamics which simultaneously possess hamiltonian - like properties and generate discontinuous trajectories similar in appearance to pdmps. when combined with periodic refreshment of the inertia, the dynamics converge strongly to pdmp equivalents in the limit of increasingly frequent refreshment. we demonstrate the practical implications of this new paradigm, with a sampler based on a bouncy hamiltonian dynamics closely related to the bps. the resulting sampler exhibits competitive performance on challenging real - data posteriors involving tens of thousands of parameters.
|
arxiv:2405.08290
|
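the characteristic velocity change of the bouncy particle sampler discussed in the pdmp entry above ( arxiv:2405.08290 ) is a reflection of the velocity off the gradient of the negative log - density. the sketch below implements that bounce and a crude time - discretized bps - like trajectory for a standard gaussian target; exact event - time simulation is omitted, so this is illustrative only and not the paper's sampler.

```python
import numpy as np

def bounce(v, grad_u):
    """Reflect the velocity off the hyperplane orthogonal to grad U(x).

    This is the characteristic velocity change of the bouncy particle sampler:
    v' = v - 2 (v . grad U) / ||grad U||^2 * grad U,
    which preserves ||v|| and plays the role of acceptance-rejection.
    """
    g = np.asarray(grad_u, dtype=float)
    return v - 2.0 * (v @ g) / (g @ g) * g

# crude discretized trajectory for a standard Gaussian target, U(x) = ||x||^2 / 2;
# bounces occur with probability rate * dt instead of exact Poisson event times
rng = np.random.default_rng(0)
d, dt, n_steps, refresh_rate = 2, 0.01, 20000, 0.5
x, v = np.zeros(d), rng.standard_normal(d)
samples = []
for _ in range(n_steps):
    x = x + dt * v                                  # deterministic linear flight
    rate = max(0.0, v @ x)                          # lambda(t) = max(0, v . grad U), grad U(x) = x
    if rng.random() < rate * dt:
        v = bounce(v, x)
    if rng.random() < refresh_rate * dt:
        v = rng.standard_normal(d)                  # velocity refreshment
    samples.append(x.copy())

samples = np.array(samples)
print("sample mean:", np.round(samples.mean(0), 2), "| sample var:", np.round(samples.var(0), 2))
```

the sample mean should be near zero and the variance near one, up to discretization error; the entry's point is that the same bounce can instead be embedded in a rejection - free metropolis proposal built from bouncy hamiltonian dynamics.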
we investigate the response of self - interacting dark matter ( sidm ) halos to the growth of galaxy potentials using idealized simulations, each run in tandem with standard collisionless cold dark matter ( cdm ). we find a greater diversity in the sidm halo profiles compared to the cdm halo profiles. if the stellar gravitational potential strongly dominates in the central parts of a galaxy, then sidm halos can be as dense as cdm halos on observable scales. for extreme cases with highly compact disks core collapse can occur, leading to sidm halos that are denser and cuspier than their cdm counterparts. if the stellar potential is not dominant, then sidm halos retain constant density cores with densities far below cdm predictions. when a disk potential is present, the inner sidm halo becomes \ em { more flattened } in the disk plane than the cdm halo. these results are in excellent quantitative agreement with the predictions of kaplinghat et al. ( 2014 ). we also simulated a galaxy cluster halo with a central stellar distribution similar to the brightest central galaxy of the cluster a2667. a sidm halo simulated with cross section over mass $ \ sigma / m = 0. 1 \ \ mathrm { cm ^ 2 g ^ { - 1 } } $ provides a good match to the measured dark matter density profile of a2667, while an adiabatically - contracted cdm halo is denser and cuspier. the cored profile of the same halo simulated with $ \ sigma / m = 0. 5 \ \ mathrm { cm ^ 2 g ^ { - 1 } } $ is not dense enough to match a2667. our findings are in agreement with previous results that $ \ sigma / m \ gtrsim 0. 1 \ \ mathrm { cm ^ 2 g ^ { - 1 } } $ is disfavored for dark matter collision velocities in excess of about 1500 km / s. more generally, the predictive cross - talk between baryonic potentials and sidm density distributions offers new directions for constraining sidm cross sections in massive galaxies where baryons are dynamically important.
|
arxiv:1609.08626
|
shor ' s algorithm led to the discovery of multiple vulnerabilities in a number of cryptosystems. as a result, post - quantum cryptography attempts to provide cryptographic solutions that can face these attacks, ensuring the security of sensitive data in a future where quantum computers are assumed to exist. error - correcting codes are a source of efficiency when it comes to signatures, especially the random codes described in this paper, which are quantum - resistant and reach the gilbert - varshamov bound, thus offering a good trade - off between rate and distance. in light of this discussion, we introduce a signature based on a family of linear error - block codes ( leb ) with strong algebraic properties, namely the family of quasi - cyclic leb codes, which we define algebraically in this work.
|
arxiv:2503.23405
|
near - field observations may provide tight constraints - i. e. " boundary conditions " - on any model of structure formation in the universe. detailed observational data have long been available for the milky way ( e. g. freeman $ \ & $ bland - hawthorn 2002 ) and have provided tight constraints on several galaxy formation models ( e. g. abadi et al. 2003, bekki $ \ & $ chiba 2001 ). an implicit assumption still remains unanswered though : is the milky way a " normal " spiral? searching for directions, it feels natural to look at our neighbour : andromeda. an intriguing piece of the puzzle is provided by contrasting its stellar halo with that of our galaxy, even more so since mouhcine et al. ( 2005 ) have suggested that a correlation between stellar halo metallicity and galactic luminosity is in place and would leave the milky way halo as an outlier with respect to other spirals of comparable luminosities. further questions hence arise : is there any stellar halo - galaxy formation symbiosis? our first step has been to contrast the chemical evolution of the milky way with that of andromeda by means of a semi - analytic model. we have then pursued a complementary approach through the analysis of several semi - cosmological late - type galaxy simulations which sample a wide variety of merging histories. we have focused on the stellar halo properties in the simulations at redshift zero and shown that - at any given galaxy luminosity - the metallicities of the stellar halos in the simulations span a range in excess of $ \ sim $ 1 dex, a result which is strengthened by the robustness tests we have performed. we suggest that the underlying driver of the halo metallicity dispersion can be traced to the diversity of galactic mass assembly histories inherent within the hierarchical clustering paradigm.
|
arxiv:1506.04238
|
we present an initial design study for ldmx, the light dark matter experiment, a small - scale accelerator experiment having broad sensitivity to both direct dark matter and mediator particle production in the sub - gev mass region. ldmx employs missing momentum and energy techniques in multi - gev electro - nuclear fixed - target collisions to explore couplings to electrons in uncharted regions that extend down to and below levels that are motivated by direct thermal freeze - out mechanisms. ldmx would also be sensitive to a wide range of visibly and invisibly decaying dark sector particles, thereby addressing many of the science drivers highlighted in the 2017 us cosmic visions new ideas in dark matter community report. ldmx would achieve the required sensitivity by leveraging existing and developing detector technologies from the cms, hps and mu2e experiments. in this paper, we present our initial design concept, detailed geant - based studies of detector performance, signal and background processes, and a preliminary analysis approach. we demonstrate how a first phase of ldmx could expand sensitivity to a variety of light dark matter, mediator, and millicharge particles by several orders of magnitude in coupling over the broad sub - gev mass range.
|
arxiv:1808.05219
|
in many - body systems with u ( 1 ) global symmetry, the charge fluctuations in a subregion reveal important insights into entanglement and other global properties. for subregions with sharp corners, bipartite fluctuations have been predicted to exhibit a universal shape dependence on the corner angle in certain quantum phases and transitions, characterized by a " universal angle function " and a " universal coefficient. " however, we demonstrate that this simple formula is insufficient for charge insulators, including composite fermi liquids. in these systems, the corner contribution may depend on the corner angle, subregion orientation, and other microscopic details. we provide an infinite series representation of the corner term, introducing orientation - resolved universal angle functions with their non - universal coefficients. in the small - angle limit or under orientation averaging, the remaining terms ' coefficients are fully determined by the many - body quantum metric, which, while not universal, adheres to both a universal topological lower bound and an energetic upper bound. we also clarify the conditions for bound saturation in ( anisotropic ) landau levels, leveraging the generalized kohn theorem and holomorphic properties of many - body wavefunctions. we find that a broad class of fractional quantum hall wavefunctions, including unprojected parton states and composite - fermion fermi sea wavefunctions, saturates the bounds.
|
arxiv:2408.16057
|
in this note we show that the latest determinations of the residual perihelion advance of mercury, obtained by accounting for almost all known newtonian and post - newtonian orbital effects, yield only very broad constraints on the cosmological constant. indeed, from $ \ delta \ dot \ omega = - 0. 0036 \ pm 0. 0050 $ arcseconds per century one gets $ - 2 \ times 10 ^ { - 34 } ~ { \ rm km } ^ { - 2 } < \ lambda < 4 \ times 10 ^ { - 35 } ~ { \ rm km } ^ { - 2 } $. the currently accepted value for $ \ lambda $, obtained from many independent cosmological and large - scale measurements, amounts to almost $ 10 ^ { - 46 } ~ { \ rm km } ^ { - 2 } $.
|
arxiv:gr-qc/0511137
|
in the search for quantum advantage with near - term quantum devices, navigating the optimization landscape is significantly hampered by the barren plateaus phenomenon. this study presents a strategy to overcome this obstacle without changing the quantum circuit architecture. we propose incorporating auxiliary control qubits to shift the circuit from a unitary $ 2 $ - design to a unitary $ 1 $ - design, mitigating the prevalence of barren plateaus. we then remove these auxiliary qubits to return to the original circuit structure while preserving the unitary $ 1 $ - design properties. our experiment suggests that the proposed structure effectively mitigates the barren plateaus phenomenon. a significant experimental finding is that the gradient of $ \ theta _ { 1, 1 } $, the first parameter in the quantum circuit, displays a broader distribution as the number of qubits and layers increases. this suggests a higher probability of obtaining effective gradients. this stability is critical for the efficient training of quantum circuits, especially for larger and more complex systems. the results of this study represent a significant advance in the optimization of quantum circuits and offer a promising avenue for the scalable and practical implementation of quantum computing technologies. this approach opens up new opportunities in quantum learning and other applications that require robust quantum computing power.
|
arxiv:2406.03748
|
we prove that any power of the logarithm of fourier series with random signs is integrable. this result has applications to the distribution of values of random taylor series, one of which answers a long - standing question by j. - p. kahane.
|
arxiv:1301.0529
|
we consider the exclusive diffractive dissociation of a proton into three jets with large transverse momenta in the double - logarithmic approximation of perturbative qcd. this process is sensitive to the proton unintegrated gluon distribution at small x and to the proton light - cone distribution amplitudes. according to our estimates, an observation of such processes in the early runs at lhc is feasible for jet transverse momenta of the order of 5 gev.
|
arxiv:0810.4075
|
we consider two compacta with minimal non - elementary convergence actions of a countable group. when there exists an equivariant continuous map from one to the other, we call the first a blow - up of the second and the second a blow - down of the first. when both actions are geometrically finite, it is shown that one is a blow - up of the other if and only if each parabolic subgroup with respect to the first is parabolic with respect to the second. as an application, for each compactum with a geometrically finite convergence action, we construct its blow - downs with convergence actions which are not geometrically finite.
|
arxiv:1201.6104
|
in these lectures, we give a review of the minimal supersymmetric standard model ( mssm ) with $ r $ - parity violation, because it provides an attractive way to generate neutrino masses and lepton mixing angles in accordance with present neutrino data.
|
arxiv:2204.05348
|
exact and approximate analytical formulas are derived for the internal structure and global parameters of the spherical non - rotating quasi - incompressible planet. the planet is modeled by a polytrope with a small polytropic index n < < 1, and solutions of the relevant differential equations are obtained analytically, to the second order of n.
|
arxiv:astro-ph/0401359
|
despite the immense popularity of the automated program repair ( apr ) field, the question of patch validation is still open. most of the present - day approaches follow the so - called generate - and - validate approach, where a candidate solution is first generated and then validated against an oracle. the latter, however, might not give a reliable result because of imperfections in such oracles, one of which is usually the test suite. although ( re - ) running the test suite is right under one ' s nose, in real - life applications the problem of over - and underfitting often occurs, resulting in inadequate patches. efforts to tackle this problem include patch filtering, test suite expansion, careful patch production and many more. most approaches to date use post - filtering relying either on test execution traces or on some similarity concept measured on the generated patches. our goal is to investigate the nature of these similarity - based approaches. to do so, we trained a doc2vec model on an open - source javascript project and generated 465 patches for 10 bugs in it. these plausible patches, alongside the developer fix, are then ranked based on their similarity to the original program. we analyzed these similarity lists and found that plain document embeddings may lead to misclassification, as they fail to capture nuanced code semantics. nevertheless, in some cases they also provided useful information, thus helping to better understand the area of automated program repair.
|
arxiv:2103.16846
|
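a tiny, purely illustrative version of the doc2vec similarity ranking used in the apr entry above ( arxiv:2103.16846 ): train gensim's doc2vec on a toy "program plus candidate patches" corpus and rank patches by cosine similarity to the original program. with such a minuscule corpus the learned vectors are essentially arbitrary, so the numbers only demonstrate the mechanics, not the paper's findings; the tokenizer and all hyper - parameters are stand - ins.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from gensim.utils import simple_preprocess
import numpy as np

# toy corpus: the "original program" plus a few candidate patches (real use would
# tokenize JavaScript sources properly; simple_preprocess is only a crude stand-in)
original = "function add ( a , b ) { return a + b ; }"
patches = {
    "patch_1": "function add ( a , b ) { return a - b ; }",
    "patch_2": "function add ( a , b ) { if ( b == null ) b = 0 ; return a + b ; }",
    "patch_3": "function add ( a , b ) { return b + a ; }",
}

docs = [TaggedDocument(simple_preprocess(txt, min_len=1), [tag])
        for tag, txt in [("original", original), *patches.items()]]
model = Doc2Vec(docs, vector_size=32, min_count=1, epochs=200, seed=0, workers=1)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

orig_vec = model.infer_vector(simple_preprocess(original, min_len=1))
ranking = sorted(((cosine(model.infer_vector(simple_preprocess(t, min_len=1)), orig_vec), tag)
                  for tag, t in patches.items()), reverse=True)
for score, tag in ranking:
    print(f"{tag}: similarity to original = {score:.3f}")
```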
driving safety has drawn much public attention in recent years due to the fast - growing number of cars. smoking is one of the threats to driving safety but is often ignored by drivers. existing works on smoking detection either work in a contact manner or need additional devices. this motivates us to explore the practicability of using smartphones to detect smoking events in the driving environment. in this paper, we propose a cigarette smoking detection system, named hearsmoking, which only uses acoustic sensors on smartphones to improve driving safety. after investigating typical smoking habits of drivers, including hand movement and chest fluctuation, we design an acoustic signal to be emitted by the speaker and received by the microphone. we calculate the relative correlation coefficient of received signals to obtain movement patterns of hands and chest. the processed data is sent into a trained convolutional neural network for classification of hand movement. we also design a method to detect respiration at the same time. to improve system performance, we further analyse the periodicity of the composite smoking motion. through extensive experiments in real driving environments, hearsmoking detects smoking events in real time with an average total accuracy of 93. 44 percent.
|
arxiv:2503.23391
|
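the "relative correlation coefficient of received signals" in the hearsmoking entry above ( arxiv:2503.23391 ) can be illustrated as a correlation between consecutive frames of the recorded echo: motion de - correlates neighbouring frames, while a static scene keeps them nearly identical. the synthetic 20 khz echo, the frame length and the drifting - delay "gesture" below are invented for the demonstration and are not the paper's pipeline.

```python
import numpy as np

def relative_correlation(received, frame_len, hop):
    """Correlation coefficient between consecutive frames of a signal.

    For a (near-)static scene consecutive echo frames are almost identical, so
    the coefficient stays close to 1; hand motion or chest fluctuation changes
    the echo structure and pulls it down. A generic sketch of the idea only.
    """
    frames = [received[i:i + frame_len]
              for i in range(0, len(received) - frame_len + 1, hop)]
    coeffs = [np.corrcoef(frames[i], frames[i + 1])[0, 1]
              for i in range(len(frames) - 1)]
    return np.array(coeffs)

# synthetic echo: a 20 kHz tone whose echo delay drifts around t = 2 s (a "gesture")
fs, dur = 48_000, 4.0
t = np.arange(int(fs * dur)) / fs
delay = 0.001 + 0.0004 * np.exp(-((t - 2.0) ** 2) / 0.05)
echo = (np.sin(2 * np.pi * 20_000 * (t - delay))
        + 0.05 * np.random.default_rng(0).standard_normal(t.size))

rho = relative_correlation(echo, frame_len=2048, hop=2048)
print("min correlation (motion):", round(float(rho.min()), 3),
      "| median correlation (static):", round(float(np.median(rho)), 3))
```

the dip in the frame - to - frame correlation marks the moving - hand interval, which is the kind of pattern the system's downstream classifier consumes.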
this note gives a brief survey of the minimum dilatation problem for pseudo - anosov mapping classes, and the first explicit train track description of an infinite family of pseudo - anosov mapping classes with orientable stable foliations and the conjectural minimum dilatation for closed surfaces of even genus $ g \ ge 2 $.
|
arxiv:1403.2987
|
motivated by the recent experiments on the triangular lattice spin liquid ybmggao $ _ 4 $, we explore the effect of spin - orbit coupling on the effective - spin correlation of the yb local moments. we point out the anisotropic interaction between the effective - spins on the nearest neighbor bonds is sufficient to reproduce the spin - wave dispersion of the fully polarized state in the presence of strong magnetic field normal to the triangular plane. we further evaluate the effective - spin correlation within the mean - field spherical approximation. we explicitly demonstrate that, the nearest - neighbor anisotropic effective - spin interaction, originating from the strong spin - orbit coupling, enhances the effective - spin correlation at the m points in the brillouin zone. we identify these results as the strong evidence for the anisotropic interaction and the strong spin - orbit coupling in ybmggao $ _ 4 $.
|
arxiv:1608.06445
|
we consider the dynamics of a translocation process of a flexible linear polymer through a nanopore into an environment of active rods in the { \ it trans } side. using langevin dynamics simulations we find that the rods facilitate translocation to the { \ it trans } side even when there are initially more monomers on the { \ it cis } than on the { \ it trans } side. structural analysis of the translocating polymer reveals that active rods induce a folded structure to the { \ it trans } - side subchain in the case of successful translocation events. by keeping the initial number of monomers on the { \ it cis } - side subchain fixed, we map out a state diagram for successful events as a function of the rod number density for a variety of system parameters. this reveals competition between facilitation by the rods at low densities and crowding that hinders translocation at higher densities.
|
arxiv:2211.07114
|
in this paper we study positive fixed points of hammerstein integral operators with degenerate kernel in the cone of c [ 0, 1 ]. the problem of the number of positive fixed points of the hammerstein integral operator leads to the study of positive roots of polynomials with real coefficients. we consider a model on a cayley tree with nearest - neighbor interactions and with the set [ 0, 1 ] of spin values. the uniqueness of translation - invariant gibbs measures for the given model is proved.
|
arxiv:1911.10837
|
comparing the number of clear ( cloud - free ) nights available for astronomical observations is a critical task because it should be based on homogeneous methodologies. current data are mainly based on differing judgements from observer logbooks or on different instruments. in this paper we present a new homogeneous methodology applied to very different astronomical sites for modern optical astronomy, in order to quantify the available night - time fraction. the data are extracted from night - time goes12 satellite infrared images and compared with ground - based conditions when available. in this analysis we introduce a wider averaging matrix and a 3 - band correlation in order to reduce the noise and to distinguish between clear and stable nights. temporal data are used for the classification. in the time interval 2007 - 2008 we found that the percentage of satellite - derived clear nights is 88 % at paranal, 76 % at la silla, 72. 5 % at la palma, 59 % at mt. graham and 86. 5 % at tolonchar. the correlation analysis of the three goes12 infrared bands b3, b4 and b6 indicates that the fraction of stable nights is lower by 2 % to 20 % depending on the site.
|
arxiv:1011.4815
|
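a very schematic rendering of the two ingredients named in the goes12 entry above ( arxiv:1011.4815 ) — a wider averaging matrix for noise reduction and a 3 - band correlation for the clear / stable decision. the window size, the correlation threshold and the decision rule are guesses for illustration and not the paper's calibrated classifier.

```python
import numpy as np

def smooth(img, half_width=2):
    """Moving average over a (2*half_width+1)^2 window -- the 'wider averaging
    matrix' used to reduce pixel noise before classification."""
    k = 2 * half_width + 1
    pad = np.pad(img, half_width, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / k**2

def night_is_clear(b3, b4, b6, site_yx, corr_threshold=0.9):
    """Crude sketch: track the smoothed brightness of three IR bands at the site
    over the night and call the night 'clear and stable' when the band time
    series are highly correlated (clouds decorrelate and add variability)."""
    y, x = site_yx
    series = []
    for band in (b3, b4, b6):                        # band: array of shape (time, ny, nx)
        series.append(np.array([smooth(f)[y, x] for f in band]))
    c = np.corrcoef(series)                          # 3 x 3 correlation matrix
    off_diag = c[np.triu_indices(3, k=1)]
    return bool(np.all(off_diag > corr_threshold))

# synthetic stable night: the three bands differ only by a constant offset
rng = np.random.default_rng(0)
clear = [10 + rng.normal(0, 0.2, (40, 40)) for _ in range(24)]
b3 = np.array(clear); b4 = np.array(clear) + 1.0; b6 = np.array(clear) + 2.0
print(night_is_clear(b3, b4, b6, site_yx=(20, 20)))
```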
in this paper, we present talaria, a novel permissioned blockchain simulator that supports numerous protocols and use cases, most notably in supply chain management. talaria extends the capability of blocksim, an existing blockchain simulator, to include permissioned blockchains and serves as a foundation for further private blockchain assessment. talaria is designed with both practical byzantine fault tolerance ( pbft ) and a simplified version of proof - of - authority consensus protocols, but can be revised to include other permissioned protocols within its modular framework. moreover, talaria is able to simulate different types of malicious authorities and a variable daily transaction load at each node. in using talaria, business practitioners and policy planners have an opportunity to measure, evaluate, and adapt a range of blockchain solutions for commercial operations.
|
arxiv:2103.02260
|
the first part of this note is a short introduction on continued fraction expansions for certain algebraic power series. in the last part, as an illustration, we present a family of algebraic continued fractions of degree 4, including a toy example considered about thirty years ago in a pioneer work in this area.
|
arxiv:1402.4928
|
while novel computer vision architectures are gaining traction, the impact of model architectures is often conflated with changes in, or exploration of, training methods. identity - mapping - based architectures such as resnets and densenets have promised path - breaking results in the image classification task and remain go - to methods even now when the available data is fairly limited. considering the ease of training with limited resources, this work revisits resnets and improves resnet50 \cite { resnets } by using mixup data augmentation as regularization and by tuning the hyper - parameters.
|
arxiv:2111.11616
|
high - dimensional data, where the number of variables exceeds or is comparable to the sample size, is now pervasive in many scientific applications. in recent years, bayesian shrinkage models have been developed as effective and computationally feasible tools to analyze such data, especially in the context of linear regression. in this paper, we focus on the normal - gamma shrinkage model developed by griffin and brown. this model subsumes the popular bayesian lasso model, and a three - block gibbs sampling algorithm to sample from the resulting intractable posterior distribution has been developed by griffin and brown. we consider an alternative two - block gibbs sampling algorithm and rigorously demonstrate its advantage over the three - block sampler by comparing specific spectral properties. in particular, we show that the markov operator corresponding to the two - block sampler is trace class ( and hence hilbert - schmidt ), whereas the operator corresponding to the three - block sampler is not even hilbert - schmidt. the trace class property for the two - block sampler implies geometric convergence for the associated markov chain, which justifies the use of markov chain clt ' s to obtain practical error bounds for mcmc based estimates. additionally, it facilitates theoretical comparisons of the two - block sampler with sandwich algorithms which aim to improve performance by inserting inexpensive extra steps in between the two conditional draws of the two - block sampler.
|
arxiv:1804.05915
|
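the blocked - gibbs structure discussed in the shrinkage entry above ( arxiv:1804.05915 ) alternates draws from conditional blocks; since the normal - gamma conditionals involve generalized inverse gaussian draws, the sketch below instead shows the mechanics on a toy normal model with a flat - type prior, where both conditionals are standard. it illustrates only the alternating - block skeleton, not the normal - gamma sampler itself.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=3.0, scale=2.0, size=50)          # observed data
n, ybar = y.size, y.mean()

# Gibbs sampler for (mu, sigma^2) under the improper prior p(mu, sigma^2) ∝ 1/sigma^2:
#   block 1:  sigma^2 | mu, y  ~  Inv-Gamma(n/2, sum((y - mu)^2)/2)
#   block 2:  mu | sigma^2, y  ~  Normal(ybar, sigma^2/n)
n_iter, burn = 5000, 500
mu, sigma2 = 0.0, 1.0
draws = np.empty((n_iter, 2))
for it in range(n_iter):
    shape = n / 2.0
    scale = np.sum((y - mu) ** 2) / 2.0
    sigma2 = 1.0 / rng.gamma(shape, 1.0 / scale)      # Inv-Gamma via reciprocal Gamma
    mu = rng.normal(ybar, np.sqrt(sigma2 / n))
    draws[it] = (mu, sigma2)

post = draws[burn:]
print("posterior mean of mu:", round(post[:, 0].mean(), 2),
      "| posterior mean of sigma^2:", round(post[:, 1].mean(), 2))
```

the entry's point is about how the parameters are grouped into blocks: grouping more parameters into fewer, jointly drawn blocks can yield a markov operator with better spectral properties ( trace class ) and hence geometric convergence.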
we have selected a complete sample of flat - spectrum radio quasars ( fsrqs ) from the wmap 7 - yr catalog within the sdss area, all with measured redshift, and have compared the black hole mass estimates based on fitting a standard accretion disk model to the ` blue bump ' with those obtained from the commonly used single epoch virial method. the sample comprises 79 objects with a flux density limit of 1 jy at 23 ghz, 54 of which ( 68 % ) have a clearly detected ` blue bump '. thirty - four of the latter have, in the literature, black hole mass estimates obtained with the virial method. the mass estimates obtained from the two methods are well correlated. if the calibration factor of the virial relation is set to $ f = 4. 5 $, well within the range of recent estimates, the mean logarithmic ratio of the two mass estimates is equal to zero with a dispersion close to the estimated uncertainty of the virial method. the fact that the two independent methods agree so closely in spite of the potentially large uncertainties associated with each lends strong support to both of them. the distribution of black - hole masses for the 54 fsrqs in our sample with a well detected blue bump has a median value of $ 7. 4 \ times 10 ^ { 8 } \, m _ \ odot $. it declines at the low mass end, consistent with other indications that radio loud agns are generally associated with the most massive black holes, although the decline may be, at least partly, due to the source selection. the distribution drops above $ \ log ( m _ \ bullet / m _ \ odot ) = 9. 4 $, implying that ultra - massive black holes associated with fsrqs must be rare.
|
arxiv:1309.4108
|
the energy evolution of average multiplicities and multiplicity fluctuations in jets produced in heavy - ion collisions is investigated from a toy qcd - inspired model. in this model, we use modified splitting functions accounting for medium - enhanced radiation of gluons by a fast parton which propagates through the quark gluon plasma. the leading contribution of the standard production of soft hadrons is enhanced by a factor $ \ sqrt { n _ s } $ while next - to - leading order ( nlo ) corrections are suppressed by $ 1 / \ sqrt { n _ s } $, where the parameter $ n _ s > 1 $ accounts for the induced - soft gluons in the medium. our results for such global observables are cross - checked and compared with their limits in the vacuum.
|
arxiv:0811.2418
|
recent advancements in graph representation learning have led to the emergence of condensed encodings that capture the main properties of a graph. however, even though these abstract representations are powerful for downstream tasks, they are not equally suitable for visualisation purposes. in this work, we merge mapper, an algorithm from the field of topological data analysis ( tda ), with the expressive power of graph neural networks ( gnns ) to produce hierarchical, topologically - grounded visualisations of graphs. these visualisations do not only help discern the structure of complex graphs but also provide a means of understanding the models applied to them for solving various tasks. we further demonstrate the suitability of mapper as a topological framework for graph pooling by mathematically proving an equivalence with min - cut and diff pool. building upon this framework, we introduce a novel pooling algorithm based on pagerank, which obtains competitive results with state of the art methods on graph classification benchmarks.
|
arxiv:2002.03864
|
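a minimal mapper construction on a graph, in the spirit of the entry above ( arxiv:2002.03864 ): here pagerank serves as a simple stand - in lens instead of learned gnn embeddings, and clusters are connected components within overlapping lens intervals. the parameters and the choice of lens are illustrative assumptions.

```python
import networkx as nx

def graph_mapper(G, n_intervals=6, overlap=0.3):
    """Minimal Mapper construction on a graph with PageRank as the lens function.

    1) compute a scalar lens value per node (here: PageRank),
    2) cover the lens range with overlapping intervals,
    3) within each interval, cluster the induced subgraph into connected components,
    4) build the nerve: one node per cluster, an edge when clusters share graph nodes.
    """
    lens = nx.pagerank(G)
    lo, hi = min(lens.values()), max(lens.values())
    length = (hi - lo) / n_intervals
    clusters = []
    for i in range(n_intervals):
        a = lo + i * length - overlap * length
        b = lo + (i + 1) * length + overlap * length
        members = [v for v, f in lens.items() if a <= f <= b]
        for comp in nx.connected_components(G.subgraph(members)):
            clusters.append(set(comp))
    nerve = nx.Graph()
    nerve.add_nodes_from(range(len(clusters)))
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            if clusters[i] & clusters[j]:
                nerve.add_edge(i, j)
    return nerve, clusters

G = nx.karate_club_graph()
nerve, clusters = graph_mapper(G)
print(f"{G.number_of_nodes()} nodes summarised into {nerve.number_of_nodes()} clusters "
      f"with {nerve.number_of_edges()} nerve edges")
```

the nerve graph is the hierarchical, topologically grounded summary; replacing the pagerank lens with node embeddings from a trained gnn is the step that connects this construction to the entry's pooling perspective.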
we prove three theorems about the use of a counting operator to study the spectrum of model hamiltonians. we analytically calculate the eigenvalues of the hubbard ring with four lattice positions and apply our theorems to describe the observed level crossings.
|
arxiv:0811.3077
|
in this paper we deal with the problem of computing the sum of the $ k $ - th powers of all the elements of the matrix ring $ \ mathbb { m } _ d ( r ) $ with $ d > 1 $ and $ r $ a finite commutative ring. we completely solve the problem in the case $ r = \ mathbb { z } / n \ mathbb { z } $ and give some results that compute the value of this sum if $ r $ is an arbitrary finite commutative ring $ r $ for many values of $ k $ and $ d $. finally, based on computational evidence and using some technical results proved in the paper we conjecture that the sum of the $ k $ - th powers of all the elements of the matrix ring $ \ mathbb { m } _ d ( r ) $ is always $ 0 $ unless $ d = 2 $, $ \ textrm { card } ( r ) \ equiv 2 \ pmod 4 $, $ 1 < k \ equiv - 1, 0, 1 \ pmod 6 $ and the only element $ e \ in r \ setminus \ { 0 \ } $ such that $ 2e = 0 $ is idempotent, in which case the sum is $ \ textrm { diag } ( e, e ) $.
|
arxiv:1505.08132
|
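the conjecture in the entry above ( arxiv:1505.08132 ) can be checked by brute force for tiny rings; the sketch below sums $ a ^ k $ over all of $ m _ 2 ( \ mathbb { z } / 2 \ mathbb { z } ) $, where the conjectured exceptional value $ \ mathrm { diag } ( e, e ) $ should appear for $ k = 5, 6, 7 $ and the zero matrix otherwise.

```python
from itertools import product
import numpy as np

def power_sum(d, n, k):
    """Sum of A^k over all d x d matrices A with entries in Z/nZ, reduced mod n.

    Brute force, so only sensible for tiny d and n (there are n**(d*d) matrices);
    reducing mod n after every multiplication keeps the integers small.
    """
    total = np.zeros((d, d), dtype=np.int64)
    for entries in product(range(n), repeat=d * d):
        A = np.array(entries, dtype=np.int64).reshape(d, d)
        P = np.eye(d, dtype=np.int64)
        for _ in range(k):
            P = (P @ A) % n
        total = (total + P) % n
    return total

# d = 2, n = 2: card(R) = 2 ≡ 2 (mod 4), e = 1 is the unique nonzero element with
# 2e = 0 and it is idempotent, so for 1 < k ≡ -1, 0, 1 (mod 6) the conjectured
# value is diag(1, 1); the other exponents should give the zero matrix.
for k in (2, 3, 4, 5, 6, 7):
    print(f"k = {k}:\n{power_sum(2, 2, k)}")
```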
in introducing second quantization for fermions, jordan and wigner ( 1927 / 1928 ) observed that the algebra of a single pair of fermion creation and annihilation operators in quantum mechanics is closely related to the algebra of quaternions h. for the first time, here we exploit this fact to study nonlinear bogolyubov - valatin transformations ( canonical transformations for fermions ) for a single fermionic mode. by means of these transformations, a class of fermionic hamiltonians in an external field is related to the standard fermi oscillator.
|
arxiv:quant-ph/0411170
|