text (stringlengths 1–3.65k) | source (stringlengths 15–79) |
---|---|
To meet the requirements of real-world applications, it is essential to control the generation of large language models (LLMs). Prior research has tried to introduce reinforcement learning (RL) into controllable text generation, while most existing methods suffer from overfitting issues (finetuning-based methods) or semantic collapse (post-processing methods). However, current RL methods are generally guided by coarse-grained (sentence/paragraph-level) feedback, which may lead to suboptimal performance owing to semantic twists or progressions within sentences. To tackle that, we propose a novel reinforcement learning algorithm named TOLE, which formulates token-level rewards for controllable text generation and employs a "first-quantize-then-noise" paradigm to enhance the robustness of the RL algorithm. Furthermore, TOLE can be flexibly extended to multiple constraints with little computational expense. Experimental results show that our algorithm can achieve superior performance on both single-attribute and multi-attribute control tasks. We have released our code at https://github.com/windylee0822/ctg
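An illustrative sketch (not the authors' released code) of what token-level reward shaping with a "first-quantize-then-noise" step could look like; `attribute_score` is a hypothetical stand-in for a trained attribute classifier that scores a prefix in [0, 1].

```python
# Illustrative sketch of token-level rewards with "first quantize, then noise";
# attribute_score is a hypothetical placeholder for a learned attribute classifier.
import numpy as np

def attribute_score(prefix_tokens):
    """Hypothetical scorer in [0, 1]; a real system would query a trained classifier."""
    return min(1.0, 0.5 + 0.1 * sum(t.endswith("!") for t in prefix_tokens))

def token_level_rewards(tokens, n_bins=5, noise_std=0.05, seed=0):
    rng = np.random.default_rng(seed)
    # Token reward = change in attribute score when the token is appended to its prefix.
    scores = [attribute_score(tokens[: i + 1]) for i in range(len(tokens))]
    raw = np.diff([attribute_score([])] + scores)
    # First quantize: map raw rewards onto a few discrete levels in [0, 1] ...
    edges = np.linspace(raw.min(), raw.max(), n_bins + 1)[1:-1]
    quantized = np.digitize(raw, edges) / max(n_bins - 1, 1)
    # ... then noise: perturb the quantized rewards to make the RL signal more robust.
    return quantized + rng.normal(0.0, noise_std, size=raw.shape)

print(token_level_rewards(["the", "movie", "was", "great", "!"]))
```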
|
arxiv:2403.11558
|
We construct explicit canonical transformations producing non-abelian duals in principal chiral models with arbitrary group G. Some comments concerning the extension to more general $\sigma$-models, like WZW models, are given.
|
arxiv:hep-th/9503045
|
In this article, we will show the global well-posedness and scattering of the cubic defocusing nonlinear Schrödinger equation on the waveguide $\mathbb{R}^2 \times \mathbb{T}$ in $H^1$. We first establish the linear profile decomposition in $H^{1}(\mathbb{R}^2 \times \mathbb{T})$, motivated by the linear profile decomposition of the mass-critical Schrödinger equation in $L^2(\mathbb{R}^2)$. Then, by using the solution of the infinite-dimensional vector-valued resonant nonlinear Schrödinger system to approximate the nonlinear profile, we can prove scattering in $H^1$ by using the concentration-compactness/rigidity method.
|
arxiv:1705.00954
|
due to noisy actuation and external disturbances, tuning controllers for high - speed flight is very challenging. in this paper, we ask the following questions : how sensitive are controllers to tuning when tracking high - speed maneuvers? what algorithms can we use to automatically tune them? to answer the first question, we study the relationship between parameters and performance and find out that the faster the maneuver, the more sensitive a controller becomes to its parameters. to answer the second question, we review existing methods for controller tuning and discover that prior works often perform poorly on the task of high - speed flight. therefore, we propose autotune, a sampling - based tuning algorithm specifically tailored to high - speed flight. in contrast to previous work, our algorithm does not assume any prior knowledge of the drone or its optimization function and can deal with the multi - modal characteristics of the parameters ' optimization space. we thoroughly evaluate autotune both in simulation and in the physical world. in our experiments, we outperform existing tuning algorithms by up to 90 % in trajectory completion. the resulting controllers are tested in the airsim game of drones competition, where we outperform the winner by up to 25 % in lap - time. finally, we show that autotune improves tracking error when flying a physical platform with respect to parameters tuned by a human expert.
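As a rough sketch of what gradient-free, sampling-based tuning can look like (a generic Metropolis-style random search, not the AutoTune algorithm; `tracking_cost` is a hypothetical stand-in for a simulated high-speed rollout):

```python
# Generic sampling-based tuner illustrating gradient-free search over a possibly
# multi-modal cost landscape; not the AutoTune algorithm itself.
import numpy as np

def tracking_cost(params):
    """Hypothetical rollout cost; a real tuner would fly or simulate the trajectory."""
    return float(np.sum((params - np.array([4.0, 0.5, 1.2])) ** 2))

def sample_tune(init, n_iters=2000, step=0.3, temperature=1.0, seed=0):
    rng = np.random.default_rng(seed)
    current = np.asarray(init, dtype=float)
    c_cur = tracking_cost(current)
    best, c_best = current.copy(), c_cur
    for _ in range(n_iters):
        proposal = current + rng.normal(0.0, step, size=current.shape)
        c_prop = tracking_cost(proposal)
        # Occasionally accept worse proposals so the search can escape local minima.
        if c_prop < c_cur or rng.random() < np.exp((c_cur - c_prop) / temperature):
            current, c_cur = proposal, c_prop
            if c_cur < c_best:
                best, c_best = current.copy(), c_cur
    return best, c_best

print(sample_tune([1.0, 1.0, 1.0]))
```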
|
arxiv:2103.10698
|
using the app coronagraph of vlt / naco we searched for planetary mass companions around hd115892 and hd172555 in the thermal infrared at 4 micron. both objects harbor unusually luminous debris disks for their age and it has been suggested that small dust grains were produced recently in transient events ( e. g., a collision ) in these systems. such a collision of planetesimals or protoplanets could have been dynamically triggered by yet unseen companions. we did not detect any companions in our images but derived the following detection limits : for both objects we would have detected companions with apparent magnitudes between ~ 13. 2 - 14. 1 mag at angular separations between 0. 4 - 1. 0 " at the 5 - sigma level. for hd115892 we were sensitive to companions with 12. 1 mag even at 0. 3 ". using theoretical models these magnitudes are converted into mass limits. for hd115892 we would have detected objects with 10 - 15 m _ jup at angular separations between 0. 4 - 1. 0 " ( 7 - 18 au ). at 0. 3 " ( ~ 5. 5 au ) the detection limit was ~ 25 m _ jup. for hd172555 we reached detection limits between 2 - 3 m _ jup at separations between 0. 5 - 1. 0 " ( 15 - 29 au ). at 0. 4 " ( ~ 11 au ) the detection limit was ~ 4 m _ jup. despite the non - detections our data demonstrate the unprecedented contrast performance of naco / app in the thermal infrared at very small inner working angles and we show that our observations are mostly background limited at separation > 0. 5 ".
|
arxiv:1106.4528
|
Minimax optimization has been central in addressing various applications in machine learning, game theory, and control theory. Prior literature has thus far mainly focused on studying such problems in the continuous domain; e.g., convex-concave minimax optimization is now understood to a significant extent. Nevertheless, minimax problems extend far beyond the continuous domain to mixed continuous-discrete domains or even fully discrete domains. In this paper, we study mixed continuous-discrete minimax problems where the minimization is over a continuous variable belonging to Euclidean space and the maximization is over subsets of a given ground set. We introduce the class of convex-submodular minimax problems, where the objective is convex with respect to the continuous variable and submodular with respect to the discrete variable. Even though such problems appear frequently in machine learning applications, little is known about how to address them from algorithmic and theoretical perspectives. For such problems, we first show that obtaining saddle points is hard up to any approximation, and thus introduce new notions of (near-)optimality. We then provide several algorithmic procedures for solving convex and monotone-submodular minimax problems and characterize their convergence rates, computational complexity, and the quality of the final solution according to our notions of optimality. Our proposed algorithms are iterative and combine tools from both discrete and continuous optimization. Finally, we provide numerical experiments to showcase the effectiveness of our proposed methods.
|
arxiv:2111.01262
|
we present and discuss a method to identify substructures in combined angular - redshift samples of galaxies within clusters. the method relies on the use of discrete wavelet transform ( hereafter dwt ) and has already been applied to the analysis of the coma cluster ( gambera et al. 1997 ). the main new ingredient of our method with respect to previous studies lies in the fact that we make use of a 3d data set rather than a 2d. we test the method on mock cluster catalogs with spatially localized substructures and on a n - body simulation. our main conclusion is that our method is able to identify the existing substructures provided that : a ) the subclumps are detached in part or all of the phase space, b ) one has a statistically significant number of redshifts, increasing as the distance decreases due to redshift distortions ; c ) one knows { \ it a priori } the scale on which substructures are to be expected. we have found that to allow an accurate recovery we must have both a significant number of galaxies ( $ \ approx 200 $ for clusters at z $ \ geq 0. 4 $ or about 800 at z $ \ leq $ 0. 4 ) and a limiting magnitude for completeness $ m _ b = 16 $. the only true limitation to our method seems to be the necessity of knowing { \ it a priori } the scale on which the substructure is to be found. this is an intrinsic drawback of the method and no improvement in numerical codes based on this technique could make up for it.
|
arxiv:astro-ph/9908066
|
the present work is devoted to the study of faint be stars observed by corot in the fourth long run ( lra02 ). the astrophysical parameters were determined from the spectra observed with the vlt / flames instruments at eso. spectra were fitted with models of stellar atmospheres using our girfit package. spectra in the lambda - lambda 6400 - 7200 $ aa domain enabled the confirmation or a first identification of be star candidates. the apparent parameters ( teff, log g, vsin i ) for a set of 19 b and be stars were corrected for the effects induced by the rapid rotation. these allowed us to determine : 1 ) masses that are in agreement with those measured for detached binary systems ; 2 ) distances that agree with the gaia parallaxes ; and 3 ) centrifugal / gravity equatorial force ratios of ~ 0. 6 - 0. 7, which indicate that our be stars are subcritical rotators. a study of the balmer halpha, hgamma and hdelta emission lines produced : extents of the circumstellar disk ( cd ) emitting regions that agree with the interferometric inferences in other be stars ; r - dependent exponents n ( r ) of the cd radial density distributions ; cd base densities. the hgamma and hdelta emission lines are formed in cd layers close to the central star. these lines produced a different value of the exponent n ( r ) than assumed for halpha. further detailed studies of hgamma and hdelta emission lines could reveal the physical properties of regions where the viscous transport of angular momentum to the remaining cd regions is likely to originate from. the subcritical rotation of be stars suggests that their huge discrete mass - ejections and concomitant non - radial pulsations might have a common origin in stellar envelope regions that become unstable to convection due to rotation. the errors induced on the estimated teff by the possible presence of stripped sub - dwarf o / b companions are not likely to exceed their present uncertainties.
|
arxiv:2307.06968
|
we build the wrapped fukaya category w ( e ) for any monotone symplectic manifold, convex at infinity. we define the open - closed and closed open - string maps. we study their algebraic properties and prove that the string maps are compatible with the eigenvalue splitting of w ( e ). we extend abouzaid ' s generation criterion from the exact to the monotone setting. we construct an acceleration functor from the compact fukaya category which on hochschild ( co ) homology commutes with the string maps and the canonical map from quantum cohomology qh ( e ) to symplectic cohomology sh ( e ). we define the qh ( e ) - and sh ( e ) - module structure on the hochschild ( co ) homology of w ( e ) which is compatible with the string maps. the module and unital algebra structures, and the generation criterion, also hold for the compact fukaya category f ( e ), and also hold for closed monotone symplectic manifolds. as an application, we show that the wrapped category of any monotone negative line bundle over any projective space is proper ( cohomologically finite ). for any monotone negative line bundle e over a toric fano variety, we show that sh ( e ) is non - trivial and that w ( e ) contains an essential non - displaceable monotone lagrangian torus.
|
arxiv:1201.5880
|
in an accompanying publication, the meerkat pulsar timing array ( mpta ) collaboration reports tentative evidence for the presence of a stochastic gravitational - wave background, following observations of similar signals from the european and indian pulsar timing arrays, nanograv, the parkes pulsar timing array and the chinese pulsar timing array. if such a gravitational - wave background signal originates from a population of inspiraling supermassive black - hole binaries, the signal may be anisotropically distributed on the sky. in this letter we evaluate the anisotropy of the mpta signal using a spherical harmonic decomposition. we discuss complications arising from the covariance between pulsar pairs and regularisation of the fisher matrix. applying our method to the 4. 5 yr dataset, we obtain two forms of sky maps for the three most sensitive mpta frequency bins between 7 - 21 nhz. our " clean maps ' ' estimate the distribution of gravitational - wave strain power with minimal assumptions. our radiometer maps answer the question : is there a statistically significant point source? we find a noteworthy hotspot in the 7 nhz clean map with a $ p $ - factor of $ p = 0. 015 $ ( not including trial factors ). future observations are required to determine if this hotspot is of astrophysical origin.
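For orientation, a spherical harmonic decomposition of the gravitational-wave power on the sky generically takes the form below; the normalisation and maximum multipole are illustrative choices, not necessarily those adopted in the letter.

```latex
% Generic spherical-harmonic expansion of the GW strain power P on the sky;
% conventions (normalisation, l_max) are placeholders, not the MPTA analysis choices.
P(\hat{n}) = \sum_{\ell=0}^{\ell_{\max}} \sum_{m=-\ell}^{\ell} c_{\ell m}\, Y_{\ell m}(\hat{n}),
\qquad
C_\ell = \frac{1}{2\ell + 1} \sum_{m=-\ell}^{\ell} \left| c_{\ell m} \right|^2 .
```

Anisotropy is then typically quantified relative to the monopole, e.g. via ratios such as $C_\ell / C_0$.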
|
arxiv:2412.01214
|
we propose in this paper a new paradigm for facial video compression. we leverage the generative capacity of gans such as stylegan to represent and compress a video, including intra and inter compression. each frame is inverted in the latent space of stylegan, from which the optimal compression is learned. to do so, a diffeomorphic latent representation is learned using a normalizing flows model, where an entropy model can be optimized for image coding. in addition, we propose a new perceptual loss that is more efficient than other counterparts. finally, an entropy model for video inter coding with residual is also learned in the previously constructed latent representation. our method ( sganc ) is simple, faster to train, and achieves better results for image and video coding compared to state - of - the - art codecs such as vtm, av1, and recent deep learning techniques. in particular, it drastically minimizes perceptual distortion at low bit rates.
|
arxiv:2207.04324
|
the long - lived electronic spin of the nitrogen - vacancy ( nv ) center in diamond is a promising quantum sensor for detecting nanoscopic magnetic and electric fields in a variety of experimental conditions. nevertheless, an outstanding challenge in improving measurement sensitivity is the poor signal - to - noise ratio ( snr ) of prevalent optical spin - readout techniques. here, we address this limitation by coupling individual nv centers to optimized diamond nanopillar structures, thereby improving optical collection efficiency of fluorescence. first, we optimize the structure in simulation, observing an increase in collection efficiency for tall ( $ \ geq $ 5 $ \ mu $ m ) pillars with tapered sidewalls. we subsequently verify these predictions by fabricating and characterizing a representative set of structures using a reliable and reproducible nanofabrication process. an optimized device yields increased snr, owing to improvements in collimation and directionality of emission. promisingly, these devices are compatible with low - numerical - aperture, long - working - distance collection optics, as well as reduced tip radius, facilitating improved spatial resolution for scanning applications.
|
arxiv:2306.02966
|
Anyons are quasiparticles with fractional statistics, bridging between fermions and bosons. We propose an experimental setup to measure the statistical angle of topological anyons emitted from a quantum point contact (QPC) source. The setup involves a droplet along a fractional quantum Hall liquid edge, formed by two negatively biased gates. In the weak tunneling regime, we calculate the charge current, showing that its time evolution depends solely on the anyons' statistical properties, with temperature and scaling dimension affecting only the constant prefactor. We compute the cross-correlation between the anyon current transmitted from the source and the current after the junction, providing a direct method to detect anyon braiding statistics.
|
arxiv:2503.17008
|
The effects of the axial anomaly are suppressed at high temperatures due to screening effects in the quark-gluon plasma. If the suppression is nearly complete close to the chiral transition temperature, this can have consequences for the nature of the phase transition. The use of a chiral action such as domain wall fermions allows us to gain a deeper insight into the issue. Our lattice sizes were $16^3 \times 8 \times L_s$, with $L_s = 32$ or 48, and our pion mass was approximately 200 MeV. We found that $U(1)_A$ stayed broken above the chiral transition. However, the breaking was found to be due to topologically nontrivial configurations, which raises the question as to whether it persists in the thermodynamic limit. We also present results for the eigenvalue density of the Dirac operator. It is seen that although the density decreases dramatically across the chiral transition temperature, $U(1)_A$ still remains broken at our current volume and quark mass due to the presence of zero modes.
|
arxiv:1112.0364
|
a spectroscopic survey of previously - unstudied luyten half second proper motion stars has resulted in the discoveries of two new cool magnetic white dwarfs. one ( lhs 2273 ) is a routine da star, t = 6, 500k, with zeeman - split h alpha and h beta, for which a simple model suggests a polar field strength of 18. 5 mg viewed close to equator - on. however, the white dwarf lhs 2534 proves to be the first magnetic dz showing zeeman - split na i and mg i components, as well as ca i and ca ii lines for which zeeman components are blended. the na i splittings result in a mean surface field strength estimate of 1. 92 mg. apart from the magnetic field, lhs 2534 is one of the most heavily - blanketed and coolest dz white dwarfs at t ~ 6, 000k.
|
arxiv:astro-ph/0012274
|
we present stellar age determinations for 4661 red giant branch stars in the apo - k2 catalog, derived using mass estimates from k2 asteroseismology from the k2 galactic archaeology program and elemental abundances from the apache point galactic evolution experiment survey. our sample includes 17 of the 19 fields observed by k2, making it one of the most comprehensive catalogs of accurate stellar ages across the galaxy in terms of the wide range of populations spanned by its stars, enabling rigorous tests of galactic chemical evolution models. taking into account the selection functions of the k2 sample, the data appear to support the age - chemistry morphology of stellar populations predicted by both inside - out and late - burst scenarios. we also investigate trends in age versus stellar chemistry and galactic position, which are consistent with previous findings. comparisons against apokasc - 3 asteroseismic ages show agreement to within ~ 3 %. we also discuss offsets between our ages and spectroscopic ages. finally, we note that ignoring the effects of $ \ alpha $ - enhancement on stellar opacity ( either directly or with the salaris metallicity correction ) results in an ~ 10 % offset in age estimates for the most $ \ alpha $ - enhanced stars, which is an important consideration for continued tests of galactic models with this and other asteroseismic age samples.
|
arxiv:2403.16250
|
low - scale supersymmetry breaking in string motivated theories implies the presence of o ( 100 ) tev scale moduli, which generically lead to a significant modification of the history of the universe prior to big bang nucleosynthesis. such an approach implies a non - thermal origin for dark matter resulting from scalar decay, where the lightest supersymmetric particle can account for the observed dark matter relic density. we study the further effect of the decay on the baryon asymmetry of the universe, and find that this can satisfactorily address the problem of the over - production of the baryon asymmetry by the affleck - dine mechanism in the mssm. remarkably, there is a natural connection between the baryon and dark matter abundances today, which leads to a solution of the ` cosmic coincidence problem '.
|
arxiv:1108.5178
|
identification of boosted, hadronically - decaying top quarks is a problem of central importance for physics goals of the large hadron collider. we present a theoretical analysis of top quark tagging, establishing zeroth - order, minimal assumptions that should be satisfied by any purported top - tagged jet, like existence of three hard subjets, a bottom - tagged subjet, total mass consistent with the top quark, and a pairwise subjet mass consistent with the w boson. from these minimal assumptions, we construct the optimal discrimination observable, the likelihood ratio, for the binary discrimination problem of top quark - initiated versus bottom quark - initiated jets through next - to - leading order in the strong coupling. we compare and compute corresponding signal and background efficiencies both analytically and from simulated data, validating an understanding of the relevant physics identified and exploited by the likelihood. in the process, we construct a method for systematic interpretability of the likelihood ratio for this problem, and explicitly establish a hard floor on possible discrimination power. these results can correspondingly be applied to understanding and interpreting machine learning studies of this problem.
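The optimal observable referenced above is, by the Neyman-Pearson lemma, the likelihood ratio between the two jet hypotheses; writing $x$ for the measured jet observables (subjet momenta, masses, b-tag information), it reads:

```latex
% Neyman--Pearson optimal discriminant for top- vs. bottom-initiated jets;
% x collectively denotes the jet observables used in the analysis.
\mathcal{L}(x) = \frac{p(x \mid \mathrm{top})}{p(x \mid \mathrm{bottom})}.
```

Any monotonic function of $\mathcal{L}$ yields the same signal-versus-background efficiency curve, so discrimination power is fully characterised by $\mathcal{L}$ itself.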
|
arxiv:2411.00104
|
we present deep 1. 2 millimeter photometry of 37 stars in the young ( 5 myr ) upper scorpius ob association, sensitive to ~ 4 x 10 ^ - 3 mjup of cool millimeter dust. disks around four low - and solar - mass stars are detected, as well as one debris disk around an intermediate mass star, with dust masses ranging from 3. 6 x 10 ^ - 3 - - 1. 0 x 10 ^ - 1 mjup. the source with the most massive disk exhibits a transition - disk spectral energy distribution. combining our results with previous studies, we find the millimeter - detection fraction of class ii sources has significantly decreased from younger ages, and comparison with near - infrared and halpha measurements indicates the present disks have undergone significant evolution in composition or structure at all radii. the disks of upper scorpius represent the tail - end of the depletion of primordial disks ; while a few near - solar mass stars may still sustain giant planet formation, this process has finished around higher mass stars
|
arxiv:1111.0101
|
We introduce a novel methodology for establishing the presence of standing accretion shock instabilities (SASI) in the dynamics of a core-collapse supernova from the observed neutrino event rate at water- or ice-based neutrino detectors. The methodology uses a likelihood ratio in the frequency domain as a test statistic; it is also employed to assess the potential to estimate the frequency and the amplitude of the SASI modulations of the neutrino signal. The parameter estimation errors are consistent with the minimum possible errors as evaluated from the inverse of the Fisher information matrix, and close to the theoretical minimum for the SASI amplitude. Using results from a core-collapse simulation of a 15 solar-mass star by Kuroda et al. (2017) as a test bed for the method, we find that SASI can be identified with high confidence for a distance to the supernova of up to $\sim 6$ kpc for IceCube and up to $\sim 3$ kpc for a 0.4 Mt water Cherenkov detector. This methodology will aid the investigation of a future galactic supernova.
|
arxiv:1911.10656
|
In this work, we explore links between natural homology and persistent homology for the classification of directed spaces. The former is an algebraic invariant of directed spaces, a semantic model of concurrent programs. The latter was developed in the context of topological data analysis, in which topological properties of point-cloud data sets are extracted while eliminating noise. In both approaches, the evolution of homological properties is tracked through a sequence of inclusions of usual topological spaces. Exploiting this similarity, we show that natural homology may be considered a persistence object, and may be calculated as a colimit of uni-dimensional persistent homologies along traces. Finally, we suggest further links and avenues of future work in this direction.
|
arxiv:2305.03357
|
in this paper we propose a wavelet - based methodology for estimation and variable selection in partially linear models. the inference is conducted in the wavelet domain, which provides a sparse and localized decomposition appropriate for nonparametric components with various degrees of smoothness. a hierarchical bayes model is formulated on the parameters of this representation, where the estimation and variable selection is performed by a gibbs sampling procedure. for both the parametric and nonparametric part of the model we are using point - mass - at - zero contamination priors with a double exponential spread distribution. only a few papers in the area of partially linear wavelet models exist, and we show that the proposed methodology is often superior to the existing methods with respect to the task of estimating model parameters. moreover, the method is able to perform bayesian variable selection by a stochastic search for the parametric part of the model.
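As a concrete illustration of the prior described above (notation here is generic, not taken from the paper), a point-mass-at-zero contamination prior with a double-exponential spread on a coefficient $\theta_j$ can be written as:

```latex
% Spike-and-slab prior with a Laplace (double-exponential) slab; pi_j and lambda
% are generic hyperparameters, not the paper's specific choices.
\theta_j \mid \gamma_j \;\sim\; \gamma_j\, \mathrm{DE}(0, \lambda) + (1 - \gamma_j)\, \delta_0,
\qquad
\gamma_j \;\sim\; \mathrm{Bernoulli}(\pi_j),
\qquad
\mathrm{DE}(\theta \mid 0, \lambda) = \tfrac{\lambda}{2}\, e^{-\lambda |\theta|}.
```

A Gibbs sampler then alternates draws of the inclusion indicators $\gamma_j$ and the coefficients $\theta_j$, with variable selection read off from the posterior inclusion probabilities.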
|
arxiv:1609.07233
|
Let $S_k$ be the space of holomorphic cusp forms of weight $k$ with respect to $SL_2(\mathbb{Z})$. Let $f \in S_k$ be a normalized Hecke eigenform and $L_f(s)$ the $L$-function attached to the form $f$. In this paper we consider the distribution of zeros of $L_f(s)$ in the strip $\sigma \leq \Re s \leq 1$ for fixed $\sigma > 1/2$ with respect to the imaginary part. We study estimates of \[ N_f(\sigma, T) = \#\{\rho \in \mathbb{C} \mid L_f(\rho) = 0,\ \sigma \leq \Re\rho \leq 1,\ 0 \leq \Im\rho \leq T\} \] for $1/2 \leq \sigma \leq 1$ and large $T > 0$. Using the methods of Karatsuba and Voronin, we give another proof of Ivić's result.
|
arxiv:1310.0765
|
we study efficient importance sampling techniques for particle filtering ( pf ) when either ( a ) the observation likelihood ( ol ) is frequently multimodal or heavy - tailed, or ( b ) the state space dimension is large or both. when the ol is multimodal, but the state transition pdf ( stp ) is narrow enough, the optimal importance density is usually unimodal. under this assumption, many techniques have been proposed. but when the stp is broad, this assumption does not hold. we study how existing techniques can be generalized to situations where the optimal importance density is multimodal, but is unimodal conditioned on a part of the state vector. sufficient conditions to test for the unimodality of this conditional posterior are derived. the number of particles, n, to accurately track using a pf increases with state space dimension, thus making any regular pf impractical for large dimensional tracking problems. we propose a solution that partially addresses this problem. an important class of large dimensional problems with multimodal ol is tracking spatially varying physical quantities such as temperature or pressure in a large area using a network of sensors which may be nonlinear and / or may have non - negligible failure probabilities.
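For background, the basic importance-sampling/resampling recursion that these techniques build on is the bootstrap particle filter sketched below (a generic scalar example with the state-transition pdf as importance density, not the proposed importance densities):

```python
# Generic bootstrap particle filter for a scalar random-walk state with Gaussian
# observation noise; background illustration only, not the paper's proposed method.
import numpy as np

def particle_filter(observations, n_particles=500, proc_std=1.0, obs_std=0.5, seed=0):
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)   # samples from the initial prior
    estimates = []
    for y in observations:
        # Propagate through the state-transition pdf (used here as the importance density).
        particles = particles + rng.normal(0.0, proc_std, n_particles)
        # Weight by the observation likelihood and normalize.
        w = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        w /= w.sum()
        estimates.append(np.sum(w * particles))
        # Resample to combat weight degeneracy.
        particles = rng.choice(particles, size=n_particles, p=w)
    return estimates

print(particle_filter([0.2, 0.7, 1.1, 1.6]))
```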
|
arxiv:0805.0053
|
A theory of nonlinear signal propagation in multi-span wavelength-division-multiplexed coherent transmission systems that employ semiconductor optical amplifiers as in-line amplifiers is presented for the first time. The rigorous derivation, based on time-domain first-order perturbation theory, is developed in detail. The end result is the expressions for the signal-to-noise ratio of the received sampled photocurrent, including contributions from noise, fiber-induced nonlinear distortions, and amplifier-induced nonlinear distortions.
|
arxiv:1711.06108
|
alzheimer ' s disease and frontotemporal dementia are common forms of neurodegenerative dementia. behavioral alterations and cognitive impairments are found in the clinical courses of both diseases and their differential diagnosis is sometimes difficult for physicians. therefore, an accurate tool dedicated to this diagnostic challenge can be valuable in clinical practice. however, current structural imaging methods mainly focus on the detection of each disease but rarely on their differential diagnosis. in this paper, we propose a deep learning based approach for both problems of disease detection and differential diagnosis. we suggest utilizing two types of biomarkers for this application : structure grading and structure atrophy. first, we propose to train a large ensemble of 3d u - nets to locally determine the anatomical patterns of healthy people, patients with alzheimer ' s disease and patients with frontotemporal dementia using structural mri as input. the output of the ensemble is a 2 - channel disease ' s coordinate map able to be transformed into a 3d grading map which is easy to interpret for clinicians. this 2 - channel map is coupled with a multi - layer perceptron classifier for different classification tasks. second, we propose to combine our deep learning framework with a traditional machine learning strategy based on volume to improve the model discriminative capacity and robustness. after both cross - validation and external validation, our experiments based on 3319 mri demonstrated competitive results of our method compared to the state - of - the - art methods for both disease detection and differential diagnosis.
|
arxiv:2211.14096
|
End-to-end text-to-speech (TTS) systems have proven highly successful in the presence of a large amount of high-quality training data recorded in an anechoic room with a high-quality microphone. Another approach is to use an available source of found data, such as radio broadcast news. We aim to optimize the naturalness of a TTS system on the found data using a novel data processing method. The data processing method includes 1) utterance selection and 2) prosodic punctuation insertion to prepare training data that can optimize the naturalness of TTS systems. We show that, using the proposed data processing method, an end-to-end TTS achieved a mean opinion score (MOS) of 4.1 compared to 4.3 for natural speech. We show that the punctuation insertion contributed the most to the result. To facilitate the research and development of TTS systems, we distribute the processed data of one speaker at https://forms.gle/6hk5ykqgdxaac2bu6
|
arxiv:2004.09607
|
we report a measurement of dressed - spin effects of polarized 3he atoms from a cold atomic source traversing a region of constant magnetic field b0 and a transverse oscillatory dressing field bd cos ( omega * t ). the observed effects are compared with a numerical simulation using the bloch equation as well as a calculation based on the dressed - atom formalism. an application of the dressed spin of 3he for a proposed neutron electric dipole moment measurement is also discussed.
|
arxiv:nucl-ex/0703029
|
The Gay Science public domain audiobook at LibriVox
|
https://en.wikipedia.org/wiki/The_Gay_Science
|
We show that in any harmonic space, the eigenvalue spectra of the Laplace operator on small geodesic spheres around a given point determine the norm $|\nabla R|$ of the covariant derivative of the Riemannian curvature tensor in that point. In particular, the spectra of small geodesic spheres in a harmonic space determine whether the space is locally symmetric. For the proof we use the first few heat invariants and consider certain coefficients in the radial power series expansions of the curvature invariants $|R|^2$ and $|\mathrm{Ric}|^2$ of the geodesic spheres. Moreover, we obtain analogous results for geodesic balls with either Dirichlet or Neumann boundary conditions.
|
arxiv:1001.1611
|
high - energy proton - and deuteron - nucleus collisions provide an excellent tool for studying a wide array of physics effects, including modifications of parton distribution functions in nuclei, gluon saturation, and color neutralization and hadronization in a nuclear environment, among others. all of these effects are expected to have a significant dependence on the size of the nuclear target and the impact parameter of the collision, also known as the collision centrality. in this article, we detail a method for determining centrality classes in p ( d ) + a collisions via cuts on the multiplicity at backward rapidity ( i. e., the nucleus - going direction ) and for determining systematic uncertainties in this procedure. for d + au collisions at sqrt ( s _ nn ) = 200 gev we find that the connection to geometry is confirmed by measuring the fraction of events in which a neutron from the deuteron does not interact with the nucleus. as an application, we consider the nuclear modification factors r _ { p ( d ) + a }, for which there is a potential bias in the measured centrality dependent yields due to auto - correlations between the process of interest and the backward rapidity multiplicity. we determine the bias correction factor within this framework. this method is further tested using the hijing monte carlo generator. we find that for d + au collisions at sqrt ( s _ nn ) = 200 gev, these bias corrections are small and vary by less than 5 % ( 10 % ) up to p _ t = 10 ( 20 ) gev. in contrast, for p + pb collisions at sqrt ( s _ nn ) = 5. 02 tev we find these bias factors are an order of magnitude larger and strongly p _ t dependent, likely due to the larger effect of multi - parton interactions.
|
arxiv:1310.4793
|
in this paper we present an architecture that enables the redesign of large - scale quantum circuits on quantum hardware based on the entangling quantum generative adversarial network ( eq - gan ). specifically, by prepending a random quantum circuit module to the standard eq - gan framework, we extend its capability from quantum state learning to unitary transformation learning. the completeness of this architecture is theoretically proved. moreover, an efficient local random circuit is proposed, which significantly enhances the practicality of our architecture. for concreteness, we apply this architecture to three crucial applications in circuit optimization, including the equivalence checking of ( non - ) parameterized circuits, as well as the variational reconstruction of quantum circuits. the feasibility of our approach is demonstrated by excellent results in both classical and noisy intermediate - scale quantum ( nisq ) hardware implementations. we believe our work will facilitate the implementation and validation of the advantages of quantum algorithms.
|
arxiv:2412.20893
|
we propose a scheme of multipartite entanglement distillation driven by a complementary pair of stabilizer measurements, to distill directly a wider range of states beyond the stabilizer code states ( such as the greenberger - horne - zeilinger states ). we make our idea explicit by constructing a recurrence protocol for the 3 - qubit w state. noisy w states resulting from typical decoherence can be directly purified in a few steps, if their initial fidelity is larger than a threshold. for general input mixed states, we observe distillations to hierarchical fixed points, i. e., not only to the w state but also to the 2 - qubit bell pair, depending on their initial entanglement.
|
arxiv:quant-ph/0506092
|
galaxy - scale strong lenses in galaxy clusters provide a unique tool to investigate their inner mass distribution and the sub - halo density profiles in the low - mass regime, which can be compared with the predictions from cosmological simulations. we search for galaxy - galaxy strong - lensing systems in hst multi - band imaging of galaxy cluster cores from the clash and hff programs by exploring the classification capabilities of deep learning techniques. convolutional neural networks are trained utilising highly - realistic simulations of galaxy - scale strong lenses injected into the hst cluster fields around cluster members. to this aim, we take advantage of extensive spectroscopic information on member galaxies in 16 clusters and the accurate knowledge of the deflection fields in half of these from high - precision strong lensing models. using observationally - based distributions, we sample magnitudes, redshifts and sizes of the background galaxy population. by placing these sources within the secondary caustics associated with cluster galaxies, we build a sample of ~ 3000 galaxy - galaxy strong lenses which preserve the full complexity of real multi - colour data and produce a wide diversity of strong lensing configurations. we study two deep learning networks processing a large sample of image cutouts in three hst / acs bands, and we quantify their classification performance using several standard metrics. we find that both networks achieve a very good trade - off between purity and completeness ( 85 % - 95 % ), as well as good stability with fluctuations within 2 % - 4 %. we characterise the limited number of false negatives and false positives in terms of the physical properties of the background sources and cluster members. we also demonstrate the neural networks ' high degree of generalisation by applying our method to hst observations of 12 clusters with previously known galaxy - scale lensing systems.
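A minimal sketch of the kind of binary classifier involved, assuming a PyTorch-style convolutional network over three-band cutouts; the architecture, input size, and toy data here are illustrative assumptions, not the networks studied in the paper.

```python
# Minimal binary CNN for multi-band image cutouts (lens vs. non-lens); illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),                     # single logit: lens vs. non-lens
)

# Toy forward/backward pass on random 64x64 "cutouts"; a real run would train on the
# simulated lens injections described in the abstract.
x = torch.randn(8, 3, 64, 64)
y = torch.randint(0, 2, (8, 1)).float()
loss = nn.BCEWithLogitsLoss()(model(x), y)
loss.backward()
print(float(loss))
```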
|
arxiv:2303.00769
|
the recent fit of cosmological parameters by the dark energy spectroscopic instrument ( desi ) collaboration will have a significant impact on our understanding of the universe. given its importance, we conduct several consistency checks and draw conclusions from the fit. specifically, we focus on the following key issues relevant to cosmology : ( i ) the acceleration of the universe ' s expansion, which, according to the fit, differs over cosmological time compared to the standard cosmological model ; ( ii ) the age of the universe, which appears slightly shorter than the age of the oldest stars ; and ( iii ) the solution of the scale factor, both numerically and in an approximate analytical form.
|
arxiv:2409.19577
|
neutrinos are the only particles in the standard model of particle physics that have only been observed with left handed chirality to date. if right handed neutrinos exist, they would not only explain the observed neutrino oscillations, but could also be responsible for several phenomena in cosmology, including the baryon asymmetry of the universe, dark matter and dark radiation. a crucial parameter in this context is their majorana mass, which in principle could lie anywhere between the ev scale and gut scale. the implications for experiments and cosmology strongly depend on the choice of the mass scale. we review recent progress in the phenomenology of right handed neutrinos with different masses, focusing on scenarios in which the mass is at least a kev. we emphasise the possibility to discover heavy neutrinos that are responsible for the baryon asymmetry of the universe via low scale leptogenesis in near future experiments, such as lhc, belle ii, ship, fcc - ee or cepc.
|
arxiv:1510.07883
|
in this paper, we present a simple model of scale - free networks that incorporates both preferential & random attachment and anti - preferential & random deletion at each time step. we derive the degree distribution analytically and show that it follows a power law with the degree exponent in the range of ( 2, infinity ). we also find a way to derive an expression of the clustering coefficient for growing networks and compute the average path length through simulation.
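A toy simulation in the spirit of the model described above is sketched below; the mixing probabilities between preferential/random attachment and anti-preferential/random deletion, as well as the deletion rate, are assumptions for illustration, not the paper's specification.

```python
# Toy growth-and-deletion network model: preferential + uniform attachment, with
# occasional anti-preferential (lowest-degree) or uniform deletion. Parameters are
# illustrative assumptions only.
import random

def grow_network(T=3000, m=2, p_del=0.25, seed=0):
    random.seed(seed)
    nodes = {0, 1}
    edges = [(0, 1)]
    for new in range(2, T):
        nodes.add(new)
        targets = set()
        while len(targets) < min(m, len(nodes) - 1):
            if edges and random.random() < 0.5:
                # Preferential attachment via the edge-end trick: a random edge endpoint
                # is selected with probability proportional to its degree.
                targets.add(random.choice(random.choice(edges)))
            else:
                targets.add(random.choice(list(nodes - {new})))  # uniform random attachment
        edges += [(new, t) for t in targets]
        if random.random() < p_del and len(nodes) > m + 2:
            degree = {n: 0 for n in nodes}
            for u, v in edges:
                degree[u] += 1
                degree[v] += 1
            # Anti-preferential deletion (lowest degree) mixed with uniform random deletion.
            victim = (min(degree, key=degree.get) if random.random() < 0.5
                      else random.choice(list(nodes)))
            nodes.discard(victim)
            edges = [(u, v) for u, v in edges if victim not in (u, v)]
    degree = {n: 0 for n in nodes}
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return degree

deg = grow_network()
print(len(deg), max(deg.values()))
```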
|
arxiv:physics/0409061
|
New bivariate Griffiths polynomials of $q$-Racah type are introduced and characterized. They generalize the polynomials orthogonal on the multinomial distribution introduced by R. Griffiths fifty years ago. They also correspond to a $q$-deformation of the Griffiths polynomials of Racah type introduced previously by the authors and collaborators. The latter are recovered from the former by a $q \to 1$ limit. We show that these new polynomials are bispectral and biorthogonal. We also exhibit some symmetry relations that are essential in the proof of the bispectrality property.
|
arxiv:2407.17016
|
we present preliminary results of grating observations of yy mensae and v824 arae by chandra and xmm - newton. spectral features are presented in the context of the emission measure distributions, the coronal abundances, and plasma electron densities. in particular, we observe a coronal n / c enhancement in yy men believed to reflect the photospheric composition ( cn cycle ). finally, we interpret line broadening in yy men as doppler thermal broadening in its very hot corona.
|
arxiv:astro-ph/0310032
|
we construct an example of blow - up in a flow of min - plus linear operators arising as solution operators for a hamilton - jacobi equation with a hamiltonian of the form | p | ^ alpha + u ( x, t ), where alpha > 1 and the potential u ( x, t ) is uniformly bounded together with its gradient. the construction is based on the fact that, for a suitable potential defined on a time interval of length t, the absolute value of velocity for a lagrangian minimizer can be as large as o ( ( log t ) ^ ( 2 - 2 / alpha ) ). we also show that this growth estimate cannot be surpassed. implications of this example for existence of global generalized solutions to randomly forced hamilton - jacobi or burgers equations are discussed.
|
arxiv:math/0312395
|
vine copulas ( or pair - copula constructions ) have become an important tool for high - dimensional dependence modeling. typically, so called simplified vine copula models are estimated where bivariate conditional copulas are approximated by bivariate unconditional copulas. we present the first non - parametric estimator of a non - simplified vine copula that allows for varying conditional copulas using penalized hierarchical b - splines. throughout the vine copula, we test for the simplifying assumption in each edge, establishing a data - driven non - simplified vine copula estimator. to overcome the curse of dimensionality, we approximate conditional copulas with more than one conditioning argument by a conditional copula with the first principal component as conditioning argument. an extensive simulation study is conducted, showing a substantial improvement in the out - of - sample kullback - leibler divergence if the null hypothesis of a simplified vine copula can be rejected. we apply our method to the famous uranium data and present a classification of an eye state data set, demonstrating the potential benefit that can be achieved when conditional copulas are modeled.
|
arxiv:1603.01424
|
this work summarises some of the attempts to explain the phenomenon of dark energy as an effective description of complex gravitational physics and the proper interpretation of observations. cosmological backreaction has been shown to be relevant for observational ( precision ) cosmology, nevertheless no convincing explanation of dark energy by means of backreaction has been given so far.
|
arxiv:1003.3026
|
trajectory analysis is essential in many applications. in this paper, we address the problem of representing motion trajectories in a highly informative way, and consequently utilize it for analyzing trajectories. our approach first leverages the complete information from given trajectories to construct a thermal transfer field which provides a context - rich way to describe the global motion pattern in a scene. then, a 3d tube is derived which depicts an input trajectory by integrating its surrounding motion patterns contained in the thermal transfer field. the 3d tube effectively : 1 ) maintains the movement information of a trajectory, 2 ) embeds the complete contextual motion pattern around a trajectory, 3 ) visualizes information about a trajectory in a clear and unified way. we further introduce a droplet - based process. it derives a droplet vector from a 3d tube, so as to characterize the high - dimensional 3d tube information in a simple but effective way. finally, we apply our tube - and - droplet representation to trajectory analysis applications including trajectory clustering, trajectory classification & abnormality detection, and 3d action recognition. experimental comparisons with state - of - the - art algorithms demonstrate the effectiveness of our approach.
|
arxiv:1609.03058
|
We give two proofs that the 3-torus is not weakly d-congruent to the connected sum of three S^1 x S^2's if d > 2. We study how cohomology ring structure relates to weak congruence. We give an example of three 3-manifolds which are weakly 5-congruent but are not 5-congruent.
|
arxiv:0711.2673
|
Nickel (II) oxide, NiO, a wide band gap Mott insulator characterized by strong Coulomb repulsion between d-electrons and displaying antiferromagnetic order at room temperature, has gained attention in recent years as a very promising candidate for applications in a broad set of areas, ranging from chemistry and metallurgy to spintronics and energy harvesting. Here, we report on the synthesis of polycrystalline NiO fabricated using the spray-pyrolysis technique, a deposition technique able to produce quite uniform films of pure and crystalline materials without the need for high vacuum or inert atmospheres. We then characterized the composition and structure of our NiO thin films using X-ray diffraction, and atomic force and scanning electron microscopies, respectively. We completed our study by looking at the phononic and magnonic properties of our NiO thin films via Raman spectroscopy, and at the ultrafast electron dynamics by using optical pump-probe spectroscopy. We found that our NiO samples display the same phononic and magnonic dispersion expected for single-crystal NiO at room temperature, and that the electron dynamics in our system are similar to those of previously reported NiO mono- and polycrystalline systems synthesized with different techniques. These results prove that spray pyrolysis can be used as an affordable and large-scale fabrication technique to synthesize strongly correlated materials for a large set of applications.
|
arxiv:2402.01437
|
we provide an abstract variational existence and uniqueness result for multi - valued, monotone, non - coercive stochastic evolution inclusions in hilbert spaces with general additive and wiener multiplicative noise. as examples we discuss certain singular diffusion equations such as the stochastic 1 - laplacian evolution ( total variation flow ) in all space dimensions and the stochastic singular fast diffusion equation. in case of additive wiener noise we prove the existence of a unique weak - * mean ergodic invariant measure.
|
arxiv:1112.5672
|
.3 mb and $74.7 \pm 1.7$ mb respectively, and with ATLAS values $71.3 \pm 0.9$ mb and $71.7 \pm 0.7$ mb respectively. The predictions at 546 GeV and 1800 GeV also agree with the $\bar{p}p$ experimental results of Abe et al. \cite{abe} at 546 GeV and 1800 GeV. The model yields, for $\sqrt{s} > 0.5$ TeV, with PDG2013 total cross sections and Schegelsky-Ryskin slopes as input, $\sigma_{inel}(s) = 22.6 + 0.034 \ln s + 0.158 (\ln s)^2$ mb, and $\sigma_{inel}/\sigma_{tot} \rightarrow 0.56$ as $s \rightarrow \infty$, where $s$ is in GeV$^2$.
|
arxiv:1602.03627
|
we investigate the medium modification of a partonic jet shower traversing in a hot quark - gluon plasma. we derive and solve a differential equation that governs the evolution of the radiated gluon spectrum as the jet propagates through the medium. energy contained inside the jet cone is lost by dissipation through elastic collisions with the medium and by scattering of shower partons to larger angles. we find that the jet energy loss at early times is dominated by medium effects on the vacuum radiation, and by medium - induced radiation effects at late times. we compare our numerical results for the nuclear modification of the di - jet asymmetry with that recently reported by the atlas collaboration.
|
arxiv:1012.5280
|
we propose a novel benchmark environment for safe reinforcement learning focusing on aquatic navigation. aquatic navigation is an extremely challenging task due to the non - stationary environment and the uncertainties of the robotic platform, hence it is crucial to consider the safety aspect of the problem, by analyzing the behavior of the trained network to avoid dangerous situations ( e. g., collisions ). to this end, we consider a value - based and policy - gradient deep reinforcement learning ( drl ) and we propose a crossover - based strategy that combines gradient - based and gradient - free drl to improve sample - efficiency. moreover, we propose a verification strategy based on interval analysis that checks the behavior of the trained models over a set of desired properties. our results show that the crossover - based training outperforms prior drl approaches, while our verification allows us to quantify the number of configurations that violate the behaviors that are described by the properties. crucially, this will serve as a benchmark for future research in this domain of applications.
|
arxiv:2112.10593
|
The inflationary era of our universe can be characterized as semi-classical because it can be described in the context of four-dimensional Einstein's gravity involving quantum corrections. These string-motivated corrections originate from quantum theories of gravity, such as superstring theories, and include higher gravitational terms such as Gauss-Bonnet and Chern-Simons terms. In this paper we investigated inflationary phenomenology coming from a scalar field with quadratic curvature terms, in view of GW170817. Firstly, we derived the equations of motion directly from the gravitational action. The result is a system of differential equations with respect to the Hubble parameter and the inflaton field, which is very complicated and cannot be solved analytically, even in the minimal coupling case. Based on the observations from GW170817, which have shown that the speed of the primordial gravitational waves is equal to the speed of light, our equations of motion were simplified after applying this constraint, the slow-roll approximations, and neglecting the string corrections. We described the dynamics of the inflationary phenomenology and proved that theories with a Gauss-Bonnet term can be compatible with recent observations. Also, the Chern-Simons term leads to asymmetric generation and evolution of the two circular polarization states of gravitational waves. Finally, viable inflationary models are presented, consistent with the observational constraints. The possibility of a blue-tilted tensor spectral index is briefly investigated.
|
arxiv:2107.09457
|
The sudden widespread menace created by the present global pandemic, COVID-19, has had an unprecedented effect on our lives. Mankind is going through humongous fear and dependence on social media like never before. Fear inevitably leads to panic, speculations, and the spread of misinformation. Many governments have taken measures to curb the spread of such misinformation for public well-being. Besides global measures, to have effective outreach, systems for demographically local languages have an important role to play in this effort. Towards this, we propose an approach to detect fake news about COVID-19 early on from social media, such as tweets, for multiple Indic languages besides English. In addition, we also create an annotated dataset of Hindi and Bengali tweets for fake news detection. We propose a BERT-based model augmented with additional relevant features extracted from Twitter to identify fake tweets. To expand our approach to multiple Indic languages, we resort to an mBERT-based model which is fine-tuned over the created dataset in Hindi and Bengali. We also propose a zero-shot learning approach to alleviate the data scarcity issue for such low-resource languages. Through rigorous experiments, we show that our approach reaches around 89% F-score in fake tweet detection, which supersedes the state-of-the-art (SOTA) results. Moreover, we establish the first benchmark for two Indic languages, Hindi and Bengali. Using our annotated data, our model achieves about 79% F-score on Hindi and 81% F-score on Bengali tweets. Our zero-shot model achieves about 81% F-score on Hindi and 78% F-score on Bengali tweets without any annotated data, which clearly indicates the efficacy of our approach.
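A minimal sketch of an mBERT fine-tuning setup of the kind described above, using the Hugging Face `transformers` API; the hyperparameters and the toy batch are placeholders, not the authors' released configuration.

```python
# Sketch of fine-tuning multilingual BERT for binary fake-vs-genuine tweet classification;
# the toy batch and learning rate are illustrative placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)   # 0 = genuine, 1 = fake

# Toy batch; a real run would iterate over the annotated Hindi/Bengali tweets.
texts = ["यह खबर झूठी है", "Vaccines are distributed by the health ministry."]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
out = model(**batch, labels=labels)
out.loss.backward()
optimizer.step()
print(float(out.loss))
```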
|
arxiv:2010.06906
|
we propose a model with a compensating scalar field whose back reaction to the cosmological curvature cancels possible vacuum energy density down to the terms of the order of the time dependent critical energy density. thus the model simultaneously solves the mystery of the compensation of vacuum energy with the accuracy of 120 orders of magnitude and explains existence of the observed dark energy. at an early stage the suggested cosmological model might experience exponential expansion without an additional inflaton field.
|
arxiv:astro-ph/0307442
|
the structural and elastic properties of orthorhombic black phosphorus have been investigated using first - principles calculations based on density functional theory. the structural parameters have been calculated using the local density approximation ( lda ), the generalized gradient approximation ( gga ), and with several dispersion corrections to include van der waals interactions. it is found that the dispersion corrections improve the lattice parameters over lda and gga in comparison with experimental results. the calculations reproduce well the experimental trends under pressure and show that van der waals interactions are most important for the crystallographic b - axis, in the sense that they have the largest effect on the bonding between the phosphorus layers. the elastic constants are calculated and are found to be in good agreement with experimental values. the calculated c $ _ { 22 } $ elastic constant is significantly larger than the c $ _ { 11 } $ and c $ _ { 33 } $ parameters, implying that black phosphorus is stiffer against strain along the a - axis than along the b - and c - axes. from the calculated elastic constants, the mechanical properties such as bulk modulus, shear modulus, young ' s modulus and poisson ' s ratio are obtained. the calculated raman active optical phonon frequencies and their pressure variations are in excellent agreement with available experimental results.
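For reference, one common route from the nine orthorhombic elastic constants $C_{ij}$ to polycrystalline moduli of the kind quoted above is the Voigt average; the formulas below are the standard textbook expressions (the paper may instead use the Reuss or Voigt-Reuss-Hill schemes).

```latex
% Voigt-average bulk and shear moduli for an orthorhombic crystal, and the derived
% Young's modulus and Poisson's ratio.
B_V = \tfrac{1}{9}\!\left[ C_{11} + C_{22} + C_{33} + 2\,(C_{12} + C_{13} + C_{23}) \right],
\\
G_V = \tfrac{1}{15}\!\left[ C_{11} + C_{22} + C_{33} - (C_{12} + C_{13} + C_{23})
      + 3\,(C_{44} + C_{55} + C_{66}) \right],
\\
E = \frac{9\,B\,G}{3B + G}, \qquad \nu = \frac{3B - 2G}{2\,(3B + G)}.
```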
|
arxiv:1211.3512
|
}$, does an $(e, e)$-crossword exist (of any size)?"
|
arxiv:1411.5437
|
We present STAR's measurement of the $e^{+}e^{-}$ continuum as a function of centrality, invariant mass, and transverse momentum for U+U collisions at $\sqrt{s_{NN}}$ = 193 GeV. Also reported are the acceptance-corrected $e^{+}e^{-}$ invariant mass spectra for minimum-bias Au+Au collisions at $\sqrt{s_{NN}}$ = 27, 39, and 62.4 GeV and U+U collisions at $\sqrt{s_{NN}}$ = 193 GeV. The connection between the integrated $e^{+}e^{-}$ excess yields normalized by the charged-particle multiplicity ($dN_{ch}/dy$) at mid-rapidity and the lifetime of the fireball is discussed.
|
arxiv:1612.05484
|
origin - destination integer multicommodity flow problems differ from classic multicommodity models in that each commodity has one source and one sink, and each commodity must be routed along a single path. a new invisible - hand heuristic that mimics economic markets ' behavior is presented and tested on large - scale telecommunications networks, with solution times two orders of magnitude faster than cplex ' s lp relaxation, more dramatic mip ratios, and small solution value differences.
|
arxiv:2007.06693
|
lyric - to - melody generation is an important task in songwriting, and is also quite challenging due to its unique characteristics : the generated melodies should not only follow good musical patterns, but also align with features in lyrics such as rhythms and structures. these characteristics cannot be well handled by neural generation models that learn lyric - to - melody mapping in an end - to - end way, due to several issues : ( 1 ) lack of aligned lyric - melody training data to sufficiently learn lyric - melody feature alignment ; ( 2 ) lack of controllability in generation to better and explicitly align the lyric - melody features. in this paper, we propose re - creation of creations ( roc ), a new paradigm for lyric - to - melody generation. roc generates melodies according to given lyrics and also conditions on user - designated chord progression. it addresses the above issues through a generation - retrieval pipeline. specifically, our paradigm has two stages : ( 1 ) creation stage, where a huge amount of music fragments generated by a neural melody language model are indexed in a database through several key features ( e. g., chords, tonality, rhythm, and structural information ) ; ( 2 ) re - creation stage, where melodies are re - created by retrieving music fragments from the database according to the key features from lyrics and concatenating best music fragments based on composition guidelines and melody language model scores. roc has several advantages : ( 1 ) it only needs unpaired melody data to train melody language model, instead of paired lyric - melody data in previous models. ( 2 ) it achieves good lyric - melody feature alignment in lyric - to - melody generation. tested by english and chinese lyrics, roc outperforms previous neural based lyric - to - melody generation models on both objective and subjective metrics.
|
arxiv:2208.05697
|
in large - scale uav swarms, dynamically executing machine learning tasks can pose significant challenges due to network volatility and the heterogeneous resource constraints of each uav. traditional approaches often rely on centralized orchestration to partition tasks among nodes. however, these methods struggle with communication bottlenecks, latency, and reliability when the swarm grows or the topology shifts rapidly. to overcome these limitations, we propose a fully distributed, diffusive metric - based approach for split computing in uav swarms. our solution introduces a new iterative measure, termed the aggregated gigaflops, capturing each node ' s own computing capacity along with that of its neighbors without requiring global network knowledge. by forwarding partial inferences intelligently to underutilized nodes, we achieve improved task throughput, lower latency, and enhanced energy efficiency. further, to handle sudden workload surges and rapidly changing node conditions, we incorporate an early - exit mechanism that can adapt the inference pathway on - the - fly. extensive simulations demonstrate that our approach significantly outperforms baseline strategies across multiple performance indices, including latency, fairness, and energy consumption. these results highlight the feasibility of large - scale distributed intelligence in uav swarms and provide a blueprint for deploying robust, scalable ml services in diverse aerial networks.
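A rough sketch of a diffusion-style aggregation of per-node compute capacity is shown below; the mixing coefficient and averaging rule are assumptions chosen for illustration and do not reproduce the paper's exact aggregated-GigaFLOPS update, but they convey how a purely local, iterative exchange can summarize a neighborhood's capacity without global network knowledge.

```python
def diffuse_capacity(own_gflops, neighbors, n_rounds=5, alpha=0.5):
    """own_gflops: dict node_id -> local GigaFLOPS.
    neighbors: dict node_id -> list of current neighbor ids (volatile topology)."""
    agg = dict(own_gflops)
    for _ in range(n_rounds):
        new_agg = {}
        for node in agg:
            nbrs = neighbors.get(node, [])
            nbr_mean = sum(agg[n] for n in nbrs) / len(nbrs) if nbrs else 0.0
            # mix the node's own capacity with its neighborhood estimate
            new_agg[node] = (1.0 - alpha) * own_gflops[node] + alpha * nbr_mean
        agg = new_agg
    return agg  # larger values suggest better targets for forwarding partial inferences
```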
|
arxiv:2503.16146
|
polarized color photography provides both visual textures and object surficial information in one single snapshot. however, the use of the directional polarizing filter array causes extremely lower photon count and snr compared to conventional color imaging. thus, the feature essentially leads to unpleasant noisy images and destroys polarization analysis performance. it is a challenge for traditional image processing pipelines owing to the fact that the physical constraints exerted implicitly in the channels are excessively complicated. to address this issue, we propose a learning - based approach to simultaneously restore clean signals and precise polarization information. a real - world polarized color image dataset of paired raw short - exposed noisy and long - exposed reference images are captured to support the learning - based pipeline. moreover, we embrace the development of vision transformer and propose a hybrid transformer model for the polarized color image denoising, namely pocoformer, for a better restoration performance. abundant experiments demonstrate the effectiveness of proposed method and key factors that affect results are analyzed.
|
arxiv:2207.00215
|
we present galaxai - a versatile machine learning toolbox for efficient and interpretable end - to - end analysis of spacecraft telemetry data. galaxai employs various machine learning algorithms for multivariate time series analyses, classification, regression and structured output prediction, capable of handling high - throughput heterogeneous data. these methods allow for the construction of robust and accurate predictive models, which are in turn applied to different tasks of spacecraft monitoring and operations planning. more importantly, besides the accurate building of models, galaxai implements a visualisation layer, providing mission specialists and operators with a full, detailed and interpretable view of the data analysis process. we show the utility and versatility of galaxai on two use - cases concerning two different spacecraft : i ) analysis and planning of mars express thermal power consumption and ii ) prediction of integral ' s crossings through van allen belts.
|
arxiv:2108.01407
|
spin waves, the collective excitations of the magnetic order parameter, and magnons, the associated quasiparticles, are envisioned as possible data carriers in future wave - based computing architectures. on the road towards spin - wave computing, the development of a diode - like device capable of transmitting spin waves in only one direction, thus allowing controlled signal routing, is an essential step. here, we report on the design and experimental realization of a microstructured magnonic diode in a ferromagnetic bilayer system. effective unidirectional propagation of spin waves is achieved by taking advantage of nonreciprocities produced by dynamic dipolar interactions in transversally magnetized media, which lack symmetry about their horizontal midplane. more specifically, dipolar - induced nonreciprocities are used to engineer the spin - wave dispersion relation of the bilayer system so that the group velocity is reduced to very low values for one direction of propagation, and not for the other, thus producing unidirectional slow spin waves. brillouin light scattering and propagating spin - wave spectroscopy are used to demonstrate the diode - like behavior of the device, the composition of which was previously optimized through micromagnetic simulations.
|
arxiv:1912.09735
|
the metal - hydride - based topochemical reduction process has produced novel thermodynamically unstable phases across various transition metal oxide series with unusual crystal structures and non - trivial ground states. here, by such an oxygen ( de - ) intercalation method we synthesize a novel samarium nickelate with ordered nickel valences associated with tri - component coordination configurations. this structure, with a formula of sm $ _ { 9 } $ ni $ _ { 9 } $ o $ _ { 22 } $ as revealed by four - dimensional scanning transmission electron microscopy, emerges from the intricate planes of { 303 } $ _ { \ text { pc } } $ ordered apical oxygen vacancies. x - ray spectroscopy measurements and ab - initio calculations show the coexistence of square - planar, pyramidal and octahedral ni sites with mono -, bi - and tri - valences. this coexistence leads to an intense orbital polarization, charge - ordering, and a ground state with a strong electron localization marked by the disappearance of the ligand - hole configuration at low temperature. this new nickelate compound provides another example of previously inaccessible materials enabled by topotactic transformations and presents a unique platform where mixed ni valence can give rise to exotic phenomena.
|
arxiv:2308.02855
|
many risk - sensitive applications require machine learning ( ml ) models to be interpretable. attempts to obtain interpretable models typically rely on tuning, by trial - and - error, hyper - parameters of model complexity that are only loosely related to interpretability. we show that it is instead possible to take a meta - learning approach : an ml model of non - trivial proxies of human interpretability ( phis ) can be learned from human feedback, then this model can be incorporated within an ml training process to directly optimize for interpretability. we show this for evolutionary symbolic regression. we first design and distribute a survey aimed at finding a link between features of mathematical formulas and two established phis, simulatability and decomposability. next, we use the resulting dataset to learn an ml model of interpretability. lastly, we query this model to estimate the interpretability of evolving solutions within bi - objective genetic programming. we perform experiments on five synthetic and eight real - world symbolic regression problems, comparing to the traditional use of solution size minimization. the results show that the use of our model leads to formulas that are, for the same level of accuracy - interpretability trade - off, either significantly more or equally accurate. moreover, the formulas are also arguably more interpretable. given the very positive results, we believe that our approach represents an important stepping stone for the design of next - generation interpretable ( evolutionary ) ml algorithms.
|
arxiv:2004.11170
|
the advantages of angular differential imaging ( adi ) have been previously untested in imaging the host galaxies of damped lyman alpha ( dla ) systems. in this pilot study, we present the first application of adi to direct imaging of the host galaxy of the dla seen towards the quasar j1431 + 3952. k - band imaging of the field surrounding j1431 + 3952 was obtained on the gemini north telescope with the adaptive optics system and a laser guide star. we computed a sensitivity curve that demonstrates the sensitivity of our observations as a function of k - band magnitude, impact parameter and dla angular size. for an impact parameter of 0. 5 " ( 3. 4 kpc at the redshift of the absorber ) our mass sensitivity is log ( m _ star / m _ sun ) ~ 9. 2 and drops to ~ 9. 0 at separations beyond ~ 6 kpc for the smallest size model galaxy. three candidate galaxies are identified within 5 ". stellar masses were computed from the k - band photometry yielding values of log ( m _ star / m _ sun ) ~ 9. 9, 9. 7 and 11. 1 respectively. the likely identification of the absorbing galaxy is discussed, and we conclude that the galaxy with the largest impact parameter and highest stellar mass is unlikely to be the host, based on its inconsistency with the n ( hi ) impact parameter relation and inconsistent photometric redshift. whilst we cannot distinguish between the remaining two candidates as the dla host, we note that despite the low spin temperature and relatively high metallicity of the dla, the host does not appear to be a particularly luminous ( high mass ) galaxy.
|
arxiv:1609.00384
|
it is proven that any spherically symmetric spacetime that possesses a compact cauchy surface $ \ sigma $ and that satisfies the dominant - energy and non - negative - pressures conditions must have a finite lifetime in the sense that all timelike curves in such a spacetime must have a length no greater than $ 10 \ max _ \ sigma ( 2m ) $, where $ m $ is the mass associated with the spheres of symmetry. this result gives a complete resolution, in the spherically symmetric case, of one version of the closed - universe recollapse conjecture ( though it is likely that a slightly better bound can be established ). this bound has the desirable properties of being computable from the ( spherically symmetric ) initial data for the spacetime and having a very simple form. in fact, its form is the same as was established, using a different method, for the spherically symmetric massless scalar field spacetimes, thereby proving a conjecture offered in that work. prospects for generalizing these results beyond the spherically symmetric case are discussed.
|
arxiv:gr-qc/9409011
|
let $ l $ be an $ n $ - component link ( $ n > 1 $ ) with pairwise nonzero linking numbers in a rational homology $ 3 $ - sphere $ y $. assume the link complement $ x : = y \ setminus \ nu ( l ) $ has nondegenerate thurston norm. in this paper, we study when a thurston norm - minimizing surface $ s $ properly embedded in $ x $ remains norm - minimizing after dehn filling all boundary components of $ x $ according to $ \ partial s $ and capping off $ \ partial s $ by disks. in particular, for $ n = 2 $ the capped - off surface is norm - minimizing when $ [ s ] $ lies outside of a finite set of rays in $ h _ 2 ( x, \ partial x ; \ mathbb { r } ) $. when $ y $ is an integer homology sphere this gives an upper bound on the number of surgeries on $ l $ which may yield $ s ^ 1 \ times s ^ 2 $. the main techniques come from gabai ' s proof of the property r conjecture and related work.
|
arxiv:1906.08458
|
we present a model - independent anatomy of the $ \ delta f = 2 $ transitions $ k ^ 0 - \ bar k ^ 0 $, $ b _ { s, d } - \ bar b _ { s, d } $ and $ d ^ 0 - \ bar d ^ 0 $ in the context of the standard model effective field theory ( smeft ). we present two master formulae for the mixing amplitude $ \ big [ m _ { 12 } \ big ] _ \ text { bsm } $. one in terms of the wilson coefficients ( wcs ) of the low - energy effective theory ( left ) operators evaluated at the electroweak scale $ \ mu _ \ text { ew } $ and one in terms of the wcs of the smeft operators evaluated at the bsm scale $ \ lambda $. the coefficients $ p _ a ^ { ij } $ entering these formulae contain all the information below the scales $ \ mu _ \ text { ew } $ and $ \ lambda $, respectively. renormalization group effects from the top - quark yukawa coupling play the most important role. the collection of the individual contributions of the smeft operators to $ \ big [ m _ { 12 } \ big ] _ \ text { bsm } $ can be considered as the smeft atlas of $ \ delta f = 2 $ transitions and constitutes a travel guide to such transitions far beyond the scales explored by the lhc. we emphasize that this atlas depends on whether the down - basis or the up - basis for smeft operators is considered. we illustrate this technology with tree - level exchanges of heavy gauge bosons ( $ z ^ \ prime $, $ g ^ \ prime $ ) and corresponding heavy scalars.
|
arxiv:2009.07276
|
after an introduction to the sequential version of form and the mechanisms behind it we report on the status of our ongoing project of its parallelization. an analysis of the parallel platforms used is given and the structure of a parallel prototype of form is explained.
|
arxiv:hep-ph/9906426
|
we address the discrepancy between the rosenbluth and polarization transfer data for the electromagnetic form factors of the nucleon. assuming that the effect of two - photon corrections on the polarization transfer data is negligible, we obtain a model - independent estimate of the two - photon correction delta ^ ( 2 \ gamma ). we analyze the polarization transfer data and the cross section data separately using dispersion relations. a central value as well as an error estimate for delta ^ ( 2 \ gamma ) is then obtained from a comparison of the two analyses. the resulting values for delta ^ ( 2 \ gamma ) are in good agreement with direct calculations available in the literature.
|
arxiv:0705.3385
|
noncommutative field theories on moyal spaces can be conveniently handled within a framework of noncommutative geometry. several renormalisable matter field theories that are now identified are briefly reviewed. the construction of renormalisable gauge theories on these noncommutative moyal spaces, which remains so far a challenging problem, is then closely examined. the computation in 4 - d of the one - loop effective gauge theory generated from the integration over a scalar field appearing in a renormalisable theory minimally coupled to an external gauge potential is presented. the gauge invariant effective action is found to involve, beyond the expected noncommutative version of the pure yang - mills action, additional terms that may be interpreted as the gauge theory counterpart of the harmonic term, which for the noncommutative $ \ phi ^ 4 $ - theory on moyal space ensures renormalisability. a class of possible candidates for renormalisable gauge theory actions defined on moyal space is presented and discussed.
|
arxiv:0708.2471
|
we analyze the relation between the lagrangian and hamiltonian brst symmetry generators for a recently proposed two - dimensional symmetry. in particular it is shown that this symmetry may be obtained from a canonical transformation in the ghost sector in a gauge independent way.
|
arxiv:hep-th/0005203
|
we present a brief summary of recent results concerning the unambiguous definition and experimental extraction of the gauge - invariant and process - independent neutrino charge radius.
|
arxiv:hep-ph/0210312
|
embedding learning is an important technique in deep recommendation models to map categorical features to dense vectors. however, the embedding tables often demand an extremely large number of parameters, which become the storage and efficiency bottlenecks. distributed training solutions have been adopted to partition the embedding tables into multiple devices. however, the embedding tables can easily lead to imbalances if not carefully partitioned. this is a significant design challenge of distributed systems named embedding table sharding, i. e., how we should partition the embedding tables to balance the costs across devices, which is a non - trivial task because 1 ) it is hard to efficiently and precisely measure the cost, and 2 ) the partition problem is known to be np - hard. in this work, we introduce our novel practice in meta, namely autoshard, which uses a neural cost model to directly predict the multi - table costs and leverages deep reinforcement learning to solve the partition problem. experimental results on an open - sourced large - scale synthetic dataset and meta ' s production dataset demonstrate the superiority of autoshard over the heuristics. moreover, the learned policy of autoshard can transfer to sharding tasks with various numbers of tables and different ratios of the unseen tables without any fine - tuning. furthermore, autoshard can efficiently shard hundreds of tables in seconds. the effectiveness, transferability, and efficiency of autoshard make it desirable for production use. our algorithms have been deployed in meta production environment. a prototype is available at https : / / github. com / daochenzha / autoshard
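To make the balancing problem concrete, the sketch below shows a greedy largest-cost-first baseline of the kind AutoShard is compared against: tables are sorted by an estimated cost and assigned to the currently least-loaded device. The cost values are assumed to come from some external estimator; AutoShard itself replaces this with a learned multi-table cost model and a reinforcement-learning sharding policy.

```python
import heapq

def greedy_shard(table_costs, num_devices):
    """table_costs: dict table_id -> estimated cost (e.g., predicted latency)."""
    heap = [(0.0, d) for d in range(num_devices)]  # (current load, device id)
    heapq.heapify(heap)
    assignment = {}
    for table_id, cost in sorted(table_costs.items(), key=lambda kv: -kv[1]):
        load, device = heapq.heappop(heap)         # least-loaded device so far
        assignment[table_id] = device
        heapq.heappush(heap, (load + cost, device))
    return assignment
```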
|
arxiv:2208.06399
|
we study the codimension - two bifurcations exhibited by a recently - developed sir - type mathematical model for the spread of covid - 19, as its two main parameters - - the susceptible individuals ' cautiousness level and the hospitals ' bed - occupancy rate - - vary over their domains. we use auto to generate the model ' s bifurcation diagrams near the relevant bifurcation points : two bogdanov - takens points and two generalised hopf points, as well as a number of phase portraits describing the model ' s orbital behaviours for various pairs of parameter values near each bifurcation point. the analysis shows that, when a backward bifurcation occurs at the basic reproduction threshold, the transition of the model ' s asymptotic behaviour from endemic to disease - free takes place via an unexpectedly complex sequence of topological changes, involving the births and disappearances of not only equilibria but also limit cycles and homoclinic orbits. epidemiologically, the analysis confirms the importance of a proper control of the values of the aforementioned parameters for a successful eradication of covid - 19. we recommend a number of strategies by which such a control may be achieved.
|
arxiv:2307.08892
|
recently, a perturbative model of non - linear fiber propagation in uncompensated optical transmission systems has been proposed, called gn - model [ 1 ]. here, an extended and more detailed version of the gn - model derivation [ 1 ] is reported, providing deeper insight into the model. some straightforward generalizations of the model are also proposed.
|
arxiv:1209.0394
|
multi - agent reinforcement learning ( marl ) methods often suffer from high sample complexity, limiting their use in real - world problems where data is sparse or expensive to collect. although latent - variable world models have been employed to address this issue by generating abundant synthetic data for marl training, most of these models cannot encode vital global information available during training into their latent states, which hampers learning efficiency. the few exceptions that incorporate global information assume centralized execution of their learned policies, which is impractical in many applications with partial observability. we propose a novel model - based marl algorithm, mabl ( multi - agent bi - level world model ), that learns a bi - level latent - variable world model from high - dimensional inputs. unlike existing models, mabl is capable of encoding essential global information into the latent states during training while guaranteeing the decentralized execution of learned policies. for each agent, mabl learns a global latent state at the upper level, which is used to inform the learning of an agent latent state at the lower level. during execution, agents exclusively use lower - level latent states and act independently. crucially, mabl can be combined with any model - free marl algorithm for policy learning. in our empirical evaluation with complex discrete and continuous multi - agent tasks including smac, flatland, and mamujoco, mabl surpasses sota multi - agent latent - variable world models in both sample efficiency and overall performance.
|
arxiv:2304.06011
|
this paper presents the tracking approach for deriving detectably recoverable ( and thus also durable ) implementations of many widely - used concurrent data structures. such data structures, satisfying detectable recovery, are appealing for emerging systems featuring byte - addressable non - volatile main memory ( nvram ), whose persistence allows to efficiently resurrect failed processes after crashes. detectable recovery ensures that after a crash, every executed operation is able to recover and return a correct response, and that the state of the data structure is not corrupted. info - structure based ( isb ) - tracking amends descriptor objects used in existing lock - free helping schemes with additional fields that track an operation ' s progress towards completion and persists these fields to memory in order to ensure detectable recovery. isb - tracking avoids full - fledged logging and tracks the progress of concurrent operations in a per - process manner, thus reducing the cost of ensuring detectable recovery. we have applied isb - tracking to derive detectably recoverable implementations of a queue, a linked list, a binary search tree, and an exchanger. experimental results show the feasibility of the technique.
|
arxiv:1905.13600
|
a network control system ( ncs ) consists of control components that interact with the plant over a shared network. the system dynamics of a ncs could be subject to noise arising from randomness in the times at which the data is transmitted over the network, corruption of the transmitted data by the communication network, and external disturbances that might affect the plant. a question of interest is to understand how the statistics of the data transmission times affects the system dynamics, and under what conditions the system is stable. another related issue is designing a controller that meets desired performance specifications ( e. g., a specific mean and variance of the system state ). here, we consider a minimal ncs that consists of a plant and a controller, and it is subject to random transmission times, channel corruption and external disturbances. we derive exact dynamics of the first two moments of the system, and use them to derive the stability conditions of the system. we further design a control law that steers the system to a desired mean and variance. finally, we demonstrate our results using different examples, and show that under some specific conditions, randomness in the data transmission times can even reduce the variability contributed from disturbance.
|
arxiv:1704.00236
|
we consider the production of a vector boson ( $ z $, $ w ^ \ pm $ or $ \ gamma ^ * $ ) at next - to - next - to - leading order in the strong coupling constant $ \ alpha _ { \ rm s } $. we impose a transverse - momentum cutoff, $ q _ { \ rm t } ^ { \ rm cut } $, on the vector boson produced in the $ qg $ - initiated channel. we then compute the power corrections in the cutoff, up to the second power, of the real - virtual interference contribution to the cumulative cross section at order $ \ alpha _ { \ rm s } ^ 2 $. other terms with the same kinematics, originating from the subtraction method applied to the double - real contribution, have also been considered. the knowledge of such power corrections is a required ingredient in order to reduce the dependence on the transverse - momentum cutoff of the qcd cross sections at next - to - next - to - leading order, when the $ q _ { \ rm t } $ - subtraction method is applied. in addition, the study of the dependence of the cross section on $ q _ { \ rm t } ^ { \ rm cut } $ allows as well for an understanding of its behaviour in the small transverse - momentum limit, giving hints on the structure at all orders in $ \ alpha _ { \ rm s } $ and on the identification of universal patterns. our results are presented in an analytic form, using the process - independent procedure described in a previous paper for the calculation of the all - order power corrections in $ q _ { \ rm t } ^ { \ rm cut } $.
|
arxiv:2012.10538
|
fifth generation ( 5g ) technology is an emerging and rapidly adopted technology used in many novel applications that require highly reliable, low - latency communications. it provides greater coverage and better access, and is well suited for high - density networks. these benefits imply that 5g could satisfy the requirements of autonomous vehicles. automated driving vehicles and systems are developed with the promise of comfortable, safe and efficient driving that reduces the risk to life. however, there have recently been fatalities involving these autonomous vehicles and systems, owing to the lack of a robust state of the art, which must be improved further. with the advent of 5g technology and the rise of autonomous vehicles ( avs ), road safety should become more secure with fewer human errors. however, the integration of 5g and avs is still at an early stage, with several research challenges that need to be addressed. this survey starts with a discussion of current advancements in avs, automation levels, enabling technologies and 5g requirements. we then focus on the emerging techniques required for integrating 5g technology with avs, the impact of 5g and b5g technologies on avs, and security concerns in avs. the paper also provides a comprehensive survey of recent developments in standardisation activities on 5g autonomous vehicle technology and of current projects. the article concludes with lessons learnt, future research directions and challenges.
|
arxiv:2207.10510
|
in this paper, we develop a theory about the relationship between invariant and equivariant maps with regard to a group $ g $. we then leverage this theory in the context of deep neural networks with group symmetries in order to obtain novel insight into their mechanisms. more precisely, we establish a one - to - one relationship between equivariant maps and certain invariant maps. this allows us to reduce arguments for equivariant maps to those for invariant maps and vice versa. as an application, we propose a construction of universal equivariant architectures built from universal invariant networks. we, in turn, explain how the universal architectures arising from our construction differ from standard equivariant architectures known to be universal. furthermore, we explore the complexity, in terms of the number of free parameters, of our models, and discuss the relation between invariant and equivariant networks ' complexity. finally, we also give an approximation rate for g - equivariant deep neural networks with relu activation functions for finite group g.
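For readers less familiar with the terminology, the standard definitions and the usual finite-group symmetrization operators are recalled below; these are textbook constructions, not the specific one-to-one correspondence established in the paper.

$$
f \text{ is } G\text{-equivariant} \iff f(g \cdot x) = g \cdot f(x), \qquad
f \text{ is } G\text{-invariant} \iff f(g \cdot x) = f(x), \qquad \forall\, g \in G,
$$

and for a finite group any map $f$ can be symmetrized into an invariant or an equivariant one by averaging,

$$
(\mathcal{S}_{\mathrm{inv}} f)(x) = \frac{1}{|G|} \sum_{g \in G} f(g \cdot x), \qquad
(\mathcal{S}_{\mathrm{eq}} f)(x) = \frac{1}{|G|} \sum_{g \in G} g^{-1} \cdot f(g \cdot x).
$$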
|
arxiv:2409.16922
|
the leading and the subleading landau singularities in affine toda field theories are examined in some detail. formulae describing the subleading simple pole structure of box diagrams are given explicitly. this leads to a new and nontrivial test of the conjectured exact s - matrices for these theories. we show that to the one - loop level the conjectured s - matrices of the $ a _ n $ toda family reproduce the correct singularity structure, leading as well as subleading, of the field theoretical amplitudes. the present test has the merit of being independent of the details of the renormalisations.
|
arxiv:hep-th/9207025
|
although existing techniques have proposed automated approaches to alleviate the path explosion problem of symbolic execution, users still need to optimize symbolic execution by applying various searching strategies carefully. as existing approaches mainly support only coarse - grained global searching strategies, they cannot efficiently traverse through complex code structures. in this paper, we propose eunomia, a symbolic execution technique that allows users to specify local domain knowledge to enable fine - grained search. in eunomia, we design an expressive dsl, aes, that lets users precisely pinpoint local searching strategies to different parts of the target program. to further optimize local searching strategies, we design an interval - based algorithm that automatically isolates the context of variables for different local searching strategies, avoiding conflicts between local searching strategies for the same variable. we implement eunomia as a symbolic execution platform targeting webassembly, which enables us to analyze applications written in various languages ( like c and go ) but can be compiled into webassembly. to the best of our knowledge, eunomia is the first symbolic execution engine that supports the full features of the webassembly runtime. we evaluate eunomia with a dedicated microbenchmark suite for symbolic execution and six real - world applications. our evaluation shows that eunomia accelerates bug detection in real - world applications by up to three orders of magnitude. according to the results of a comprehensive user study, users can significantly improve the efficiency and effectiveness of symbolic execution by writing a simple and intuitive aes script. besides verifying six known real - world bugs, eunomia also detected two new zero - day bugs in a popular open - source project, collections - c.
|
arxiv:2304.07204
|
the substrate temperature required for synthesis of graphene in an arc discharge plasma was studied. it was shown that increasing the copper substrate temperature up to the melting point increases both the amount of graphene produced and the quality of the graphene sheets. the favorable range of substrate temperatures for arc - based graphene synthesis was determined to be relatively narrow, about 1210 - 1340 k.
|
arxiv:1503.04083
|
the paper investigates the fundamental convergence properties of sharpness - aware minimization ( sam ), a recently proposed gradient - based optimization method [ foret et al., 2021 ] that significantly improves the generalization of deep neural networks. the convergence properties, including the stationarity of accumulation points, the convergence of the sequence of gradients to the origin, the sequence of function values to the optimal value, and the sequence of iterates to the optimal solution, are established for the method. the universality of the provided convergence analysis, based on the inexact gradient descent frameworks of khanh et al. [ 2023b ], allows its extensions to efficient normalized versions of sam such as f - sam [ li et al., 2024 ], vasso [ li and giannakis, 2023 ], rsam [ liu et al., 2022 ], and to the unnormalized versions of sam such as usam [ andriushchenko and flammarion, 2022 ]. numerical experiments are conducted on classification tasks using deep learning models to confirm the practical aspects of our analysis.
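For reference, the SAM step of Foret et al. [2021] that this analysis covers takes the standard two-stage form

$$
\epsilon_t = \rho\, \frac{\nabla f(w_t)}{\lVert \nabla f(w_t) \rVert}, \qquad
w_{t+1} = w_t - \eta_t\, \nabla f\!\left(w_t + \epsilon_t\right),
$$

which approximates the min-max objective $\min_w \max_{\lVert \epsilon \rVert \le \rho} f(w + \epsilon)$; the normalized and unnormalized variants listed above differ mainly in how the perturbation $\epsilon_t$ is scaled.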
|
arxiv:2401.08060
|
data - driven tools are increasingly used to make consequential decisions. they have begun to advise employers on which job applicants to interview, judges on which defendants to grant bail, lenders on which homeowners to give loans, and more. in such settings, different data - driven rules result in different decisions. the problem is : to every data - driven rule, there are exceptions. while a data - driven rule may be appropriate for some, it may not be appropriate for all. as data - driven decisions become more common, there are cases in which it becomes necessary to protect the individuals who, through no fault of their own, are the data - driven exceptions. at the same time, it is impossible to scrutinize every one of the increasing number of data - driven decisions, begging the question : when and how should data - driven exceptions be protected? in this piece, we argue that individuals have the right to be an exception to a data - driven rule. that is, the presumption should not be that a data - driven rule - - even one with high accuracy - - is suitable for an arbitrary decision - subject of interest. rather, a decision - maker should apply the rule only if they have exercised due care and due diligence ( relative to the risk of harm ) in excluding the possibility that the decision - subject is an exception to the data - driven rule. in some cases, the risk of harm may be so low that only cursory consideration is required. although applying due care and due diligence is meaningful in human - driven decision contexts, it is unclear what it means for a data - driven rule to do so. we propose that determining whether a data - driven rule is suitable for a given decision - subject requires the consideration of three factors : individualization, uncertainty, and harm. we unpack this right in detail, providing a framework for assessing data - driven rules and describing what it would mean to invoke the right in practice.
|
arxiv:2212.13995
|
we consider visual domains in which a class label specifies the content of an image, and class - irrelevant properties that differentiate instances constitute the style. we present a domain - independent method that permits the open - ended recombination of style of one image with the content of another. open ended simply means that the method generalizes to style and content not present in the training data. the method starts by constructing a content embedding using an existing deep metric - learning technique. this trained content encoder is incorporated into a variational autoencoder ( vae ), paired with a to - be - trained style encoder. the vae reconstruction loss alone is inadequate to ensure a decomposition of the latent representation into style and content. our method thus includes an auxiliary loss, leakage filtering, which ensures that no style information remaining in the content representation is used for reconstruction and vice versa. we synthesize novel images by decoding the style representation obtained from one image with the content representation from another. using this method for data - set augmentation, we obtain state - of - the - art performance on few - shot learning tasks.
|
arxiv:1810.00110
|
using high frame - rate ultrasound and high sensitivity motion tracking, we recently showed that shear waves sent to the ex vivo porcine brain develop into shear shock waves with destructive local accelerations inside the brain which may be a key mechanism behind deep traumatic brain injuries. direct measurement of brain motion at an adequate frame - rate during impacts has been a persistent challenge. here we present the ultrasound observation of shear shock waves in the acoustically challenging environment of the in situ porcine brain during a single - shot impact. the brain was attached to a plate source which was vibrated at a moderate amplitude of 25g, to propagate a 40 hz shear wave into the brain. simultaneously, images of the moving brain were acquired at 2193 images / s, using a custom imaging sequence with 8 interleaved ultrasound transmit - receive events, designed to accurately track shear shocks. to achieve a long field - of - view, wide - beam emissions were designed using time - reversal ultrasound simulations and no compounding was used to avoid motion blurring. a peak acceleration of 102g was measured at the shock - front, 7. 1 mm deep inside the brain. it is also shown that experimental shear velocity, acceleration, and strain - rate waveforms in brain are in excellent agreement with theoretical predictions from a custom higher - order finite volume method hence demonstrating the capabilities to measure rapid brain motion even in the presence of strong acoustical reverberations from the porcine skull.
|
arxiv:2104.11911
|
grading system where the highest grade of a typical class can be as low as 60 % ( c - ). there are studies that suggest a direct correlation between reduced social mobility and differences unique to the chilean higher education system. = = see also = = british undergraduate degree classification british degree abbreviations list of tagged degrees master of science = = notes = = = = references = =
|
https://en.wikipedia.org/wiki/Bachelor_of_Science
|
the primary results of most observations of cosmic microwave background ( cmb ) anisotropy are estimates of the angular power spectrum averaged through some broad band, called band - powers. these estimates are in turn what are used to produce constraints on cosmological parameters due to all cmb observations. essential to this estimation of cosmological parameters is the calculation of the expected band - power for a given experiment, given a theoretical power spectrum. here we derive the " band power " window function which should be used for this calculation, and point out that it is not equivalent to the window function used to calculate the variance. this important distinction has been absent from much of the literature : the variance window function is often used as the band - power window function. we discuss the validity of this assumed equivalence, the role of window functions for experiments that constrain the power in { \ it multiple } bands, and summarize a prescription for reporting experimental results. the analysis methods detailed here are applied in a companion paper to three years of data from the medium scale anisotropy measurement.
|
arxiv:astro-ph/9902046
|
self - organized criticality can be translated into the language of absorbing state phase transitions. most models for which this analogy is established have been investigated for their absorbing state characteristics. in this article, we transform the self - organized critical oslo model into an absorbing state oslo model and analyze the avalanche behavior. we find that the resulting gap exponent, d, is consistent with its value in the self - organized critical model. for the avalanche size exponent, \ tau, an analysis of the effect of the external drive and the boundary conditions is required.
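As background, a minimal simulation of the standard boundary-driven Oslo model is sketched below. The absorbing-state version referred to above instead conserves grains and uses the density as the control parameter, so this code only illustrates the toppling rule that the transformation starts from.

```python
import random

def oslo_avalanches(L=64, n_grains=10000):
    """Boundary-driven Oslo model: returns the list of avalanche sizes."""
    z = [0] * L                                       # local slopes
    zth = [random.choice((1, 2)) for _ in range(L)]   # random critical slopes
    sizes = []
    for _ in range(n_grains):
        z[0] += 1                                     # drive at the left boundary
        size, unstable = 0, True
        while unstable:
            unstable = False
            for i in range(L):
                if z[i] > zth[i]:                     # site i topples
                    if i < L - 1:
                        z[i] -= 2
                        z[i + 1] += 1
                    else:
                        z[i] -= 1                     # grain leaves at the open end
                    if i > 0:
                        z[i - 1] += 1
                    zth[i] = random.choice((1, 2))    # redraw the threshold
                    size += 1
                    unstable = True
        sizes.append(size)
    return sizes
```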
|
arxiv:cond-mat/0405454
|
we extend the notion of rauzy induction of interval exchange transformations to the case of toral $ \ mathbb { z } ^ 2 $ - rotation, i. e., $ \ mathbb { z } ^ 2 $ - action defined by rotations on a 2 - torus. if $ \ mathcal { x } _ { \ mathcal { p }, r } $ denotes the symbolic dynamical system corresponding to a partition $ \ mathcal { p } $ and $ \ mathbb { z } ^ 2 $ - action $ r $ such that $ r $ is cartesian on a sub - domain $ w $, we express the 2 - dimensional configurations in $ \ mathcal { x } _ { \ mathcal { p }, r } $ as the image under a $ 2 $ - dimensional morphism ( up to a shift ) of a configuration in $ \ mathcal { x } _ { \ widehat { \ mathcal { p } } | _ w, \ widehat { r } | _ w } $ where $ \ widehat { \ mathcal { p } } | _ w $ is the induced partition and $ \ widehat { r } | _ w $ is the induced $ \ mathbb { z } ^ 2 $ - action on $ w $. we focus on one example $ \ mathcal { x } _ { \ mathcal { p } _ 0, r _ 0 } $ for which we obtain an eventually periodic sequence of 2 - dimensional morphisms. we prove that it is the same as the substitutive structure of the minimal subshift $ x _ 0 $ of the jeandel - rao wang shift computed in an earlier work by the author. as a consequence, $ \ mathcal { p } _ 0 $ is a markov partition for the associated toral $ \ mathbb { z } ^ 2 $ - rotation $ r _ 0 $. it also implies that the subshift $ x _ 0 $ is uniquely ergodic and is isomorphic to the toral $ \ mathbb { z } ^ 2 $ - rotation $ r _ 0 $ which can be seen as a generalization for 2 - dimensional subshifts of the relation between sturmian sequences and irrational rotations on a circle. batteries included : the algorithms and code to reproduce the proofs are provided.
|
arxiv:1906.01104
|
recent theoretical developments to calculate cross sections of hadronic objects in the high energy limit are summarised and experimental attempts to establish the need for new qcd effects connected with a resummation of small hadron momentum fractions x are reviewed. the relation between small - $ x $ parton dynamics and the phenomenon of diffraction is briefly outlined. in addition, a search for a novel, non - perturbative qcd effect, the production of qcd instanton induced events, is presented.
|
arxiv:hep-ph/0111232
|
six $ c $ - even states, denoted as $ x $, with quantum numbers $ j ^ { pc } = 0 ^ { - + } $, $ 1 ^ { \ pm + } $, or $ 2 ^ { \ pm + } $, are searched for via the $ e ^ + e ^ - \ to \ gamma d _ { s } ^ { \ pm } d _ { s } ^ { * \ mp } $ process using $ ( 1667. 39 \ pm8. 84 ) ~ \ mathrm { pb } ^ { - 1 } $ of $ e ^ + e ^ - $ collision data collected with the besiii detector operating at the bepcii storage ring at center - of - mass energy of $ \ sqrt { s } = ( 4681. 92 \ pm0. 30 ) ~ \ mathrm { mev } $. no statistically significant signal is observed in the mass range from $ 4. 08 $ to $ 4. 32 ~ \ mathrm { gev } / c ^ { 2 } $. the upper limits of $ \ sigma [ e ^ + e ^ - \ to \ gamma x ] \ cdot \ mathcal { b } [ x \ to d _ { s } ^ { \ pm } d _ { s } ^ { * \ mp } ] $ at a $ 90 \ % $ confidence level are determined.
|
arxiv:2404.02033
|
. ion, electron pulsed arc, spark, friction and induction. [ ref : r. chattopadhyay : advanced thermally assisted surface engineering processes, springer, new york, usa, 2004 ] it is estimated that loss due to wear and corrosion in the us is approximately $ 500 billion. in the us, there are around 9524 establishments ( including automotive, aircraft, power and construction industries ) who depend on engineered surfaces with support from 23, 466 industries. there are around 65 academic institutions world - wide engaged in surface engineering research and education. = = surface cleaning techniques = = surface cleaning, synonymously referred to as dry cleaning, is a mechanical cleaning technique used to reduce superficial soil, dust, grime, insect droppings, accretions, or other surface deposits. ( dry cleaning, as the term is used in paper conservation, does not employ the use of organic solvents. ) surface cleaning may be used as an independent cleaning technique, as one step ( usually the first ) in a more comprehensive treatment, or as a prelude to further treatments ( e. g., aqueous immersion ) which may cause dirt to set irreversibly in paper fibers. = = purpose = = the purpose of surface cleaning is to reduce the potential for damage to paper artifacts by removing foreign material which can be abrasive, acidic, hygroscopic, or degradative. the decision to remove surface dirt is also for aesthetic reasons when it interferes with the visibility of the imagery or information. a decision must be made balancing the probable care of each object against the possible problems related to surface cleaning. = = environmental benefits = = the application of surface engineering to components leads to improved lifetime ( e. g., by corrosion resistance ) and improved efficiency ( e. g., by reducing friction ) which directly reduces the emissions corresponding to those components. applying innovative surface engineering technologies to the energy sector has the potential of reducing annual co2 - eq emissions by up to 1. 8 gt in 2050 and 3. 4 gt in 2100. this corresponds to 7 % and 8. 5 % annual reduction in the energy sector in 2050 and 2100, respectively. despite those benefits, a major environmental drawback is the dissipative losses occurring throughout the life cycle of the components, and the associated environmental impacts of them. in thermal spray surface engineering applications, the majority of those dissipative losses occur at the coating stage ( up to 39 % ), where part of the sprayed powders
|
https://en.wikipedia.org/wiki/Surface_engineering
|
we propose a novel design of a laboratory search for axions based on photon regeneration with superconducting rf cavities. our particular setup uses a toroid as a region of confined static magnetic field, while production and detection cavities are positioned in regions of vanishing external field. this permits cavity operation at quality factors of $ q \ sim 10 ^ { 10 } - 10 ^ { 12 } $. the limitations due to fundamental issues such as signal screening and back - reaction are discussed, and the optimal sensitivity is calculated. this experimental design can potentially probe axion - photon couplings beyond astrophysical limits, comparable and complementary to next generation optical experiments.
|
arxiv:1904.07245
|
in jet quenching, a hard qcd parton, before fragmenting into a jet of hadrons, deposits a fraction of its energy in the medium, leading to suppressed production of high - $ p _ t $ hadrons. assuming that the deposited energy quickly thermalizes, we simulate the subsequent hydrodynamic evolution of the qgp fluid. explicit simulations of au + au collisions with and without a quenching jet indicate that elliptic flow is greatly reduced in a jet event. the result can be used to identify the jet events in heavy ion collisions.
|
arxiv:0705.1059
|
existence of stationary solutions to the coagulation - fragmentation equation is shown when the coagulation kernel $ k $ and the overall fragmentation rate $ a $ are given by $ k ( x, y ) = x ^ \ alpha y ^ \ beta + x ^ \ beta y ^ \ alpha $ and $ a ( x ) = x ^ \ gamma $, respectively, with $ 0 \ le \ alpha \ le \ beta \ le1 $, $ \ alpha + \ beta \ in [ 0, 1 ) $, and $ \ gamma > 0 $. the proof requires two steps : a dynamical approach is first used to construct stationary solutions under the additional assumption that the coagulation kernel and the overall fragmentation rate are bounded from below by a positive constant. the general case is then handled by a compactness argument.
|
arxiv:1904.01868
|
in the present paper we begin studies on the large time asymptotic behavior for solutions of the cauchy problem for the novikov - - veselov equation ( an analog of kdv in 2 + 1 dimensions ) at positive energy. in addition, we are focused on a family of reflectionless ( transparent ) potentials parameterized by a function of two variables. in particular, we show that there are no isolated soliton type waves in the large time asymptotics for these solutions in contrast with well - known large time asymptotics for solutions of the kdv equation with reflectionless initial data.
|
arxiv:1010.2897
|
shack hartmann wavefront sensor is a two dimensional array of lenslets which is used to detect the incoming phase distorted wavefront through local tilt measurements made by recording the spot pattern near the focal plane. wavefront reconstruction is performed in two stages - ( a ) image centroiding to calculate local slopes, ( b ) formation of the wavefront shape from local slope measurement. centroiding accuracy contributes to most of the wavefront reconstruction error in shack hartmann sensor based adaptive optics system with readout and background noise. it becomes even more difficult in atmospheric adaptive optics case, where scintillation effects may also occur. in this paper we used a denoising technique based on thresholded zernike reconstructor to minimize the effects due to readout and background noise. at low signal to noise ratio, this denoising technique can be improved further by taking the advantage of the shape of the spot. assuming a gaussian pattern for individual spots, it is shown that the centroiding accuracy can be improved in the presence of strong scintillations and background.
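A minimal thresholded centre-of-gravity estimate for a single subaperture might look like the sketch below; the background subtraction and threshold policy are illustrative assumptions, and the denoising discussed in the paper adds a thresholded Zernike reconstructor and Gaussian spot weighting on top of such a basic estimate.

```python
import numpy as np

def centroid(spot, k_sigma=3.0):
    """spot: 2-d array of pixel intensities for one Shack-Hartmann subaperture.
    Pixels below a k_sigma noise threshold are suppressed before the centroid."""
    img = spot.astype(float)
    img -= np.median(img)                     # remove the background offset
    img[img < k_sigma * np.std(spot)] = 0.0   # suppress readout/background noise
    total = img.sum()
    if total <= 0.0:
        return None                           # no usable signal in this lenslet
    y, x = np.indices(img.shape)
    cx = (x * img).sum() / total
    cy = (y * img).sum() / total
    return cx, cy                             # spot position -> local wavefront slope
```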
|
arxiv:0910.3386
|