text (string, lengths 1 to 3.65k) | source (string, lengths 15 to 79)
---|---|
For $d \ge 4$, the Noether-Lefschetz locus $\mathrm{NL}_d$ parametrizes smooth, degree $d$ surfaces in $\mathbb{P}^3$ with Picard number at least $2$. A conjecture of Harris states that there are only finitely many irreducible components of the Noether-Lefschetz locus of non-maximal codimension. Voisin showed that the conjecture is false for sufficiently large $d$, but is true for $d \le 5$. She also showed that for $d = 6, 7$, there are finitely many \emph{reduced}, irreducible components of $\mathrm{NL}_d$ of non-maximal codimension. In this article, we prove that for any $d \ge 6$, there are infinitely many \emph{non-reduced} irreducible components of $\mathrm{NL}_d$ of non-maximal codimension.
|
arxiv:2204.08079
|
We suggest using for $XY_2$ molecules some results previously established in a series of articles for vibrational modes and electronic states with an $E$ symmetry type. We first summarize the formalism for the standard $U(2) \supset SU(2) \supset SO(2)$ chain which, for the most part, can be retained for the study of both stretching and bending modes of $XY_2$ molecules. Next, the also standard chain $U(3) \supset U(2) \supset SU(2) \supset SO(2)$, which is necessary within the considered approach, is introduced for the stretching modes. All operators acting within the irreducible representation (\textit{irrep}) $[N00] \equiv [N\dot{0}]$ of $U(3)$ are built and their matrix elements computed within the standard basis. All stretch-bend interaction operators taking into account the polyad structure associated with a resonance $\omega_1 \approx \omega_3 \approx 2\omega_2$ are obtained. As an illustration, an application to the D$_2$S molecular system is considered, especially the symmetrization in $C_{2v}$. It is shown that our unitary formalism allows us to reproduce in an extremely satisfactory way all the experimental data up to the dissociation limit.
|
arxiv:0812.1211
|
Efficient downscaling of large ensembles of coarse-scale information is crucial in several applications, such as oceanic and atmospheric modeling. The determining form map is a theoretical lifting function from the low-resolution solution trajectories of a dissipative dynamical system to their corresponding fine-scale counterparts. Recently, a physics-informed deep neural network ("CDAnet") was introduced, providing a surrogate of the determining form map for efficient downscaling. CDAnet was demonstrated to efficiently downscale noise-free coarse-scale data in a deterministic setting. Herein, the performance of well-trained CDAnet models is analyzed in a stochastic setting involving (i) observational noise, (ii) model noise, and (iii) a combination of observational and model noises. The analysis is performed employing the Rayleigh-Bénard convection paradigm, under three training conditions, namely training with perfect, noisy, or downscaled data. Furthermore, the effects of noise, Rayleigh number, and spatial and temporal resolutions of the input coarse-scale information on the downscaled fields are examined. The results suggest that the expected L2 error of CDAnet behaves quadratically in terms of the standard deviations of the observational and model noises. The results also suggest that CDAnet responds to uncertainties similarly to the theorized and numerically validated CDA behavior, with an additional error overhead due to CDAnet being a surrogate model of the determining form map.
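The quadratic dependence of the expected L2 error on the noise standard deviations can be checked with a simple polynomial fit; a minimal sketch follows, in which the noise levels and error values are placeholder numbers, not results from the paper.

```python
import numpy as np

# Placeholder noise levels (std of observational noise) and measured L2 errors;
# the abstract reports the expected L2 error growing quadratically in the noise std.
sigma = np.array([0.0, 0.05, 0.10, 0.20, 0.40])
l2_err = np.array([0.031, 0.033, 0.040, 0.066, 0.170])

# Fit err ~ a*sigma^2 + b*sigma + c and inspect the quadratic coefficient.
a, b, c = np.polyfit(sigma, l2_err, deg=2)
print(f"quadratic coeff a={a:.3f}, linear b={b:.3f}, offset c={c:.3f}")
```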
|
arxiv:2310.11945
|
We give a systematic study of q-algebraic equations. The questions of existence, uniqueness and regularity of the solutions are solved in the space of grid-based Hahn series. The regularity is understood in terms of asymptotic behavior of coefficients, and is the main focus of this work. The results and algorithms are illustrated by many examples.
|
arxiv:2006.09527
|
It is widely believed that supernova remnants are the best candidate sources for the observed cosmic ray flux up to the knee, i.e., up to ~PeV energies. Indeed, the gamma-ray spectra of some supernova remnants can be well explained by assuming the decay of neutral pions which are created in hadronic interactions. Therefore, fitting the corresponding gamma spectra allows us to derive the spectra of cosmic rays at the source which are locally injected into our Galaxy. Using these spectra as a starting point, we propagate the cosmic rays through the Galaxy using the publicly available GALPROP code. Here, we will present first results on the contribution of those SNRs to the total cosmic ray flux and discuss implications.
|
arxiv:1501.06434
|
Recently, motivated by supersymmetric gauge theory, Cachazo, Douglas, Seiberg, and Witten proposed a conjecture about finite dimensional simple Lie algebras, and checked it in the classical cases. We prove the conjecture for type $G_2$, and also verify a consequence of the conjecture in the general case.
|
arxiv:math/0305175
|
We classify reflexive graded right ideals, up to isomorphism and shift, of generic cubic three-dimensional Artin-Schelter regular algebras. We also determine the possible Hilbert functions of these ideals. These results are obtained using methods similar to those for quadratic Artin-Schelter algebras. In particular, our results apply to the enveloping algebra of the Heisenberg Lie algebra, from which we deduce a classification of right ideals of an invariant ring of the first Weyl algebra.
|
arxiv:math/0601096
|
Lorentz invariance is such an important principle of fundamental physics that it should constantly be subjected to experimental scrutiny as well as theoretical questioning. Distant astrophysical sources of energetic photons with rapid time variations, such as active galactic nuclei (AGNs) and gamma-ray bursters (GRBs), provide ideal experimental opportunities for testing Lorentz invariance. The Cherenkov Telescope Array (CTA) is an excellent experimental tool for making such tests with sensitivities exceeding those possible using other detectors.
|
arxiv:1111.1178
|
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible. Unsupervised skill discovery seeks to acquire different useful skills without extrinsic reward via unsupervised reinforcement learning (RL), with the discovered skills efficiently adapting to multiple downstream tasks in various ways. However, recent advanced skill discovery methods struggle to balance state exploration and skill diversity well, particularly when the potential skills are rich and hard to discern. In this paper, we propose \textbf{Co}ntrastive Dyna\textbf{m}ic \textbf{S}kill \textbf{D}iscovery \textbf{(ComSD)}\footnote{Code and videos: https://github.com/liuxin0824/ComSD}, which generates diverse and exploratory unsupervised skills through a novel intrinsic incentive, named the contrastive dynamic reward. It contains a particle-based exploration reward to make agents access far-reaching states for exploratory skill acquisition, and a novel contrastive diversity reward to promote the discriminability between different skills. Moreover, a novel dynamic weighting mechanism between the above two rewards is proposed to balance state exploration and skill diversity, which further enhances the quality of the discovered skills. Extensive experiments and analysis demonstrate that ComSD can generate diverse behaviors at different exploratory levels for multi-joint robots, enabling state-of-the-art adaptation performance on challenging downstream tasks. It can also discover distinguishable and far-reaching exploration skills in the challenging tree-like 2D maze.
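The abstract does not specify the form of the dynamic weighting mechanism between the two rewards; the sketch below assumes a simple linear schedule, purely to illustrate how an exploration reward and a diversity reward could be blended over training.

```python
import numpy as np

def combined_intrinsic_reward(explore_r, diversity_r, progress):
    """Blend an exploration reward with a skill-diversity reward.

    explore_r, diversity_r: per-step intrinsic rewards (floats or arrays).
    progress: scalar in [0, 1], e.g. the fraction of training completed.
    The linear schedule below is an assumption for illustration only;
    ComSD's actual dynamic weighting mechanism is defined in the paper.
    """
    w = 1.0 - progress                                  # emphasize exploration early,
    return w * explore_r + (1.0 - w) * diversity_r      # diversity later

print(combined_intrinsic_reward(0.8, 0.3, progress=0.25))
```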
|
arxiv:2309.17203
|
We consider the Curie-Weiss Potts model in zero external field under independent symmetric spin-flip dynamics. We investigate dynamical Gibbs-non-Gibbs transitions for a range of initial inverse temperatures beta < 3, which covers the phase transition point beta = 4 log 2 [8]. We show that finitely many types of trajectories of bad empirical measures appear, depending on the parameter beta, with a possibility of re-entrance into the Gibbsian regime, of which we provide a full description.
|
arxiv:2011.00350
|
We report on deep XMM-Newton and NuSTAR observations of the high redshift, z = 2.94, extremely red quasar (ERQ), SDSS J165202.60+172852.4, with known galactic ionized outflows detected via spatially-resolved [OIII] emission lines. X-ray observations allow us to directly probe the accretion disk luminosity and the geometry and scale of the circumnuclear obscuration. We fit the spectra from the XMM-Newton/EPIC and NuSTAR detectors with a physically motivated torus model and constrain the source to exhibit a near Compton-thick column density of $N_H = (1.02^{+0.76}_{-0.41}) \times 10^{24}\ \textrm{cm}^{-2}$, a near edge-on geometry with the line-of-sight inclination angle of $\theta_i = 85^{\circ}$, and a scattering fraction of $f_{sc} \sim 3$%. The absorption-corrected, intrinsic 2-10 keV X-ray luminosity of $L_{\textrm{2-10}} = (1.4^{+1}_{-1}) \times 10^{45}\ \textrm{erg s}^{-1}$ reveals a powerful quasar that is not intrinsically X-ray weak, consistent with observed trends in other ERQs. We also estimate the physical properties of the obscuration, although highly uncertain: the warm ionized scattering density of $n_e \sim 7.5 \times (10^2 - 10^3)\ \textrm{cm}^{-3}$ and the obscuration mass of $M_{obsc} \sim 1.7 \times (10^4 - 10^6)\ M_{\odot}$. As previously suggested with shallower X-ray observations, optical and infrared selection of ERQs has proved effective in finding obscured quasars with powerful outflow signatures. Our observations provide an in-depth view into the X-ray properties of ERQs and support the conclusions of severely photon-limited studies of obscured quasar populations at high redshifts.
|
arxiv:2101.06613
|
We introduce a notion of rainbow saturation and the corresponding rainbow saturation number. This is the saturation version of the rainbow Turán numbers, whose systematic study was initiated by Keevash, Mubayi, Sudakov, and Verstraëte. We give examples of graphs for which the rainbow saturation number is bounded away from the ordinary saturation number. This includes all complete graphs $K_n$ for $n \geq 4$, and several bipartite graphs. It is notable that there are non-bipartite graphs for which this is the case, as this does not happen when it comes to the rainbow extremal number versus the traditional extremal number. We also show that saturation numbers are linear for a large class of graphs, providing a partial rainbow analogue of a well-known theorem of Kászonyi and Tuza. We conclude this paper with related open questions and conjectures.
|
arxiv:2003.13200
|
We used the IRAM interferometer to obtain sub-arcsecond resolution observations of the high-mass star-forming region W3(OH) and its surroundings at a frequency of 220 GHz. With the improved angular resolution, we distinguish 3 peaks in the thermal dust continuum emission originating from the hot core region about 6 arcsec (0.06 pc) east of W3(OH). The dust emission peaks are coincident with known radio continuum sources, one of which is of non-thermal nature. The latter source is also at the center of expansion of a powerful bipolar outflow observed in water maser emission. We determine the hot core mass to be 15 solar masses based on the integrated dust continuum emission. Simultaneously, many molecular lines are detected, allowing the analysis of the temperature structure and the distribution of complex organic molecules in the hot core. From HNCO lines spanning a wide range of excitation, two 200 K temperature peaks are found coincident with dust continuum emission peaks, suggesting embedded heating sources within them.
|
arxiv:astro-ph/9901261
|
In semantic segmentation, the accuracy of models heavily depends on high-quality annotations. However, in many practical scenarios such as medical imaging and remote sensing, obtaining true annotations is not straightforward and usually requires significant human labor. Relying on human labor often introduces annotation errors, including mislabeling, omissions, and inconsistency between annotators. In the case of remote sensing, differences in procurement time can lead to misaligned ground truth annotations. These label errors are not independently distributed, and instead usually appear in spatially connected regions where adjacent pixels are more likely to share the same errors. To address these issues, we propose an approximate Bayesian estimation based on a probabilistic model that assumes training data include label errors, incorporating the tendency for these errors to occur with spatial correlations between adjacent pixels. Bayesian inference requires computing the posterior distribution of label errors, which becomes intractable when spatial correlations are present. We represent the correlation of label errors between adjacent pixels through a Gaussian distribution whose covariance is structured by a Kac-Murdock-Szegő (KMS) matrix, solving the computational challenges. Through experiments on multiple segmentation tasks, we confirm that leveraging the spatial correlation of label errors significantly improves performance. Notably, in specific tasks such as lung segmentation, the proposed method achieves performance comparable to training with clean labels under moderate noise levels. Code is available at https://github.com/pfnet-research/bayesian_spatialcorr.
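For concreteness, the Kac-Murdock-Szegő matrix mentioned above has entries that decay geometrically with pixel distance, $K_{ij} = \rho^{|i-j|}$, and its inverse is tridiagonal, which keeps the Gaussian model tractable. The 1-D sampling sketch below is illustrative only; the paper applies the structure to 2-D label-error fields, and the parameter values here are arbitrary.

```python
import numpy as np

def kms_matrix(n, rho):
    """Kac-Murdock-Szegő (KMS) matrix: K[i, j] = rho**|i - j|.
    Its inverse is tridiagonal, which is what makes Gaussian models with
    this covariance structure computationally convenient."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

# Sample a 1-D field of spatially correlated label-error "logits" (toy example).
rng = np.random.default_rng(0)
K = kms_matrix(64, rho=0.9)
field = rng.multivariate_normal(mean=np.zeros(64), cov=K)
print(field[:5])
```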
|
arxiv:2504.14795
|
Metabolic substrates, such as oxygen and glucose, are rapidly delivered to the cell through filtration across microvessel walls. Modelling this important process is complicated by the coupling between flow and transport equations, which are linked through the osmotic pressure induced by the colloidal plasma proteins. The microvessel wall is a composite medium, with the internal glycocalyx layer exerting a remarkable sieving effect on macromolecules with respect to the external layer composed of endothelial cells. The physiological structure of the microvessel is represented as the superimposition of two membranes with different properties; the inner membrane represents the glycocalyx, while the outer membrane represents the surrounding endothelial cells. Application of the mass conservation principle and thermodynamic considerations leads to a model composed of two coupled second-order partial differential equations in the hydrostatic and osmotic pressures, one expressing volumetric mass conservation and the other, which is non-linear, expressing macromolecule mass conservation. Despite the system's complexity, the assumption that the properties of the layers are piece-wise constant allows us to obtain analytical solutions for the two pressures. This solution is in agreement with experimental observations, which, contrary to common belief, show that flow reversal cannot occur in steady state unless luminal hydrostatic pressure drops below physiologically plausible values. The observed variations of volumetric and solute mass flux in case of a significant reduction of luminal hydrostatic pressure are in qualitative agreement with the variations observed during experiments reported in the literature. On the other hand, homogenising the microvessel wall into a single-layer membrane with equivalent properties leads to a different distribution of pressure across the microvessel wall, not consistent with observations.
|
arxiv:1308.1271
|
In this work we analyze how the spectrum of primordial scalar perturbations is modified, within the emergent universe scenario, when a particular version of the continuous spontaneous localization (CSL) model is incorporated as the generating mechanism of the initial perturbations, providing also an explanation for the quantum-to-classical transition of such perturbations. On the other hand, a phase of super-inflation, prior to slow-roll inflation, is a characteristic feature of the emergent universe hypothesis. In recent works, it was shown that the super-inflation phase could generically induce a suppression of the temperature anisotropies of the CMB at large angular scales. We study here under what conditions the CSL model maintains or modifies these characteristics of the emergent universe and their compatibility with the CMB observations.
|
arxiv:2108.01472
|
In a recent investigation of chiral behavior in quenched lattice QCD, the flavor-singlet pseudoscalar "hairpin" vertex associated with the eta prime mass was studied for pion masses ranging from approximately 275 to 675 MeV. Throughout this mass range, the quark-disconnected pseudoscalar correlator is well described by a pure double-pion-pole diagram with a $p^2$-independent mass insertion. The residue of the double pole was found to exhibit a significant quark mass dependence, evidenced by a negative slope of the effective mass insertion $(m_0^{\rm eff})^2$ as a function of $m_\pi^2$. It has been observed by Sharpe that, with a consistent NLO calculation in quenched chiral perturbation theory, this mass dependence is uniquely predicted in terms of the single-pole coefficient $\alpha_\Phi$ and the Leutwyler parameter $L_5$. Since $\alpha_\Phi$ is found to be approximately zero, the chiral slope of the double-pole residue determines a value for $L_5$. This provides a consistency check between the chiral slope of the hairpin mass insertion and that of the pion decay constant. We investigate the consistency of these mass dependences in our Monte Carlo results at two values of lattice spacing. Within statistics, the slopes are found to be consistent with the Q$\chi$PT prediction, confirming that the observed negative slope of $m_0^{\rm eff}$ arises as an effect of the $L_5$ Leutwyler term.
|
arxiv:hep-lat/0405020
|
For better classification, generative models are used to initialize the model and model features before training a classifier. Typically, this requires solving separate unsupervised and supervised learning problems. Generative restricted Boltzmann machines and deep belief networks (DBNs) are widely used for unsupervised learning. We developed several supervised models based on DBNs in order to improve this two-phase strategy. Modifying the loss function to account for expectation with respect to the underlying generative model, introducing weight bounds, and multi-level programming are applied in model development. The proposed models capture both unsupervised and supervised objectives effectively. The computational study verifies that our models perform better than the two-phase training approach.
|
arxiv:1804.09812
|
Strongly correlated quantum many-body systems in low dimensions exhibit a wealth of phenomena, ranging from features of geometric frustration to signatures of symmetry-protected topological order. In suitable descriptions of such systems, it can be helpful to resort to effective models which focus on the essential degrees of freedom of the given model. In this work, we analyze how to determine the validity of an effective model by demanding it to be in the same phase as the original model. We focus our study on one-dimensional spin-1/2 systems and explain how non-trivial symmetry-protected topologically ordered (SPT) phases of an effective spin-1 model can arise depending on the couplings in the original Hamiltonian. In this analysis, tensor network methods feature in two ways: on the one hand, we make use of recent techniques for the classification of SPT phases using matrix product states in order to identify the phases in the effective model with those in the underlying physical system, employing Künneth's theorem for cohomology. As an intuitive paradigmatic model, we exemplify the developed methodology by investigating the bi-layered delta-chain. For strong ferromagnetic inter-layer couplings, we find the system to transition into exactly the same phase as an effective spin-1 model. However, for weak but finite coupling strength, we identify a symmetry-broken phase differing from this effective spin-1 description. On the other hand, we underpin our argument with a numerical analysis making use of matrix product states.
|
arxiv:1704.02992
|
Some clusters of galaxies have been identified as powerful sources of non-thermal radiation, from radio to X-ray wavelengths. The classical models proposed for the explanation of this radiation usually require large energy densities in cosmic rays in the intracluster medium and magnetic fields much lower than those measured using Faraday rotation. We study here the role that mergers of clusters of galaxies may play in the generation of the non-thermal radiation, and we seek additional observable consequences of the model. We find that if hard X-rays and radio radiation are respectively interpreted as inverse Compton scattering (ICS) and synchrotron emission of relativistic electrons, large gamma ray fluxes are produced, and for the Coma cluster, where upper limits are available, these limits are exceeded. We also discuss an alternative and testable model that naturally solves the problems mentioned above.
|
arxiv:astro-ph/0007208
|
Macroscale experiments show that a train of two immiscible liquid drops, a bislug, can spontaneously move in a capillary tube because of surface tension asymmetries. We use molecular dynamics simulation of Lennard-Jones fluids to demonstrate this phenomenon for NVT ensembles in sub-micron tubes. We deliberately tune the strength of intermolecular forces and control the velocity of the bislug in different wetting and viscosity conditions. We compute the velocity profile of particles across the tube, and explain the origin of deviations from the classical parabolae. We show that the self-generated molecular flow resembles the Poiseuille law when the ratio of the tube radius to its length is less than a critical value.
|
arxiv:0912.2413
|
If $\Omega \subset \mathbb{R}^n$ is a bounded domain, the existence of solutions ${\bf u} \in H^1_0(\Omega)^n$ of ${\rm div}\,{\bf u} = f$ for $f \in L^2(\Omega)$ with vanishing mean value is a basic result in the analysis of the Stokes equations. In particular, it allows one to show the existence of a solution $({\bf u}, p) \in H^1_0(\Omega)^n \times L^2(\Omega)$, where ${\bf u}$ is the velocity and $p$ the pressure. It is known that the above mentioned result holds when $\Omega$ is a Lipschitz domain and that it is not valid for arbitrary Hölder-$\alpha$ domains. In this paper we prove that if $\Omega$ is a planar simply connected Hölder-$\alpha$ domain, there exist right inverses of the divergence which are continuous in appropriate weighted spaces, where the weights are powers of the distance to the boundary. Moreover, we show that the powers of the distance in the results obtained are optimal. In our results, the zero boundary condition is replaced by a weaker one. For the particular case of domains with an external cusp of power type, we prove that our weaker boundary condition is equivalent to the standard one. In this case we show the well-posedness of the Stokes equations in appropriate weighted Sobolev spaces, obtaining as a consequence the existence of a solution $({\bf u}, p) \in H^1_0(\Omega)^n \times L^r(\Omega)$ for some $r < 2$ depending on the power of the cusp.
|
arxiv:0804.4873
|
In the context of 'infinite-volume mixing' we prove global-local mixing for the Boole map, a.k.a. Boole transformation, which is the prototype of a non-uniformly expanding map with two neutral fixed points. Global-local mixing amounts to the decorrelation of all pairs of global and local observables. In terms of the equilibrium properties of the system, it means that the evolution of every absolutely continuous probability measure converges, in a certain precise sense, to an averaging functional over the entire space.
|
arxiv:1802.00397
|
One of the few accepted dynamical foundations of non-additive ("non-extensive") statistical mechanics is that the choice of the appropriate entropy functional describing a system with many degrees of freedom should reflect the rate of growth of its configuration or phase space volume. We present an example of a group, as a metric space, that may be used as the phase space of a system whose ergodic behavior is statistically described by the recently proposed $\delta$-entropy. This entropy is a one-parameter variation of the Boltzmann/Gibbs/Shannon functional and is quite different, in form, from the power-law entropies that have been recently studied. We use the first Grigorchuk group for our purposes. We comment on the connections of the above construction with the conjectured evolution of the underlying system in phase space.
|
arxiv:1705.06001
|
This paper proposes a novel training scheme for fast matching models in search ads, which is motivated by real challenges in model training. The first challenge stems from the pursuit of high throughput, which prohibits the deployment of inseparable architectures, and hence greatly limits the model accuracy. The second problem arises from the heavy dependency on human-provided labels, which are expensive and time-consuming to collect, yet how to leverage unlabeled search log data is rarely studied. The proposed training framework targets mitigating both issues, by treating the stronger but undeployable models as annotators, and learning a deployable model from both human-provided relevance labels and weakly annotated search log data. Specifically, we first construct multiple auxiliary tasks from the enumerated relevance labels, and train the annotators by jointly learning from those related tasks. The annotation models are then used to assign scores to both labeled and unlabeled training samples. The deployable model is first learned on the scored unlabeled data, and then fine-tuned on the scored labeled data, by leveraging both labels and scores via minimizing the proposed label-aware weighted loss. According to our experiments, training with the proposed framework outperforms the baseline that directly learns from relevance labels by a large margin, and improves data efficiency substantially by dispensing with 80% of labeled samples. The proposed framework allows us to improve the fast matching model by learning from stronger annotators while keeping its architecture unchanged. Meanwhile, our training framework offers a principled manner to leverage search log data in the training phase, which could effectively alleviate our dependency on human-provided labels.
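The exact form of the label-aware weighted loss is not given in the abstract; the toy objective below only illustrates the general idea of mixing human labels with annotator scores, and every name, the squared-error form, and the -1 convention for unlabeled samples are assumptions made for this sketch.

```python
import numpy as np

def label_aware_weighted_loss(pred, label, teacher_score, alpha=0.5):
    """Toy stand-in for a label-aware weighted objective (not the paper's loss).

    pred          : model relevance predictions in [0, 1]
    label         : human relevance labels in {0, 1}, or -1 when unlabeled
    teacher_score : scores assigned by the stronger annotator models
    alpha         : trade-off between fitting labels and fitting scores
    """
    pred, label, teacher_score = (np.asarray(x, float) for x in (pred, label, teacher_score))
    has_label = label >= 0
    label_term = np.where(has_label, (pred - label) ** 2, 0.0)
    score_term = (pred - teacher_score) ** 2
    return np.mean(alpha * label_term + (1 - alpha) * score_term)

print(label_aware_weighted_loss([0.9, 0.2], [1, -1], [0.8, 0.3]))
```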
|
arxiv:1901.10710
|
Artificial neural network (NN) architecture design is a nontrivial and time-consuming task that often requires a high level of human expertise. Neural architecture search (NAS) serves to automate the design of NN architectures and has proven successful in automatically finding NN architectures that outperform those manually designed by human experts. NN architecture performance can be quantified based on multiple objectives, which include model accuracy and some NN architecture complexity objectives, among others. The majority of modern NAS methods that consider multiple objectives for NN architecture performance evaluation are concerned with automated feed-forward NN architecture design, which leaves multi-objective automated recurrent neural network (RNN) architecture design unexplored. RNNs are important for modeling sequential datasets and are prominent within the natural language processing domain. It is often the case in real-world implementations of machine learning and NNs that marginally reduced model accuracy is accepted as a reasonable trade-off in favour of the lower computational resources demanded by the model. This paper proposes a multi-objective evolutionary algorithm-based RNN architecture search method. The proposed method relies on approximate network morphisms for RNN architecture complexity optimisation during evolution. The results show that the proposed method is capable of finding novel RNN architectures with comparable performance to state-of-the-art manually designed RNN architectures, but with reduced computational demand.
|
arxiv:2403.11173
|
Our input is a bipartite graph $G = (A \cup B, E)$ where each vertex in $A \cup B$ has a preference list strictly ranking its neighbors. The vertices in $A$ and in $B$ are called students and courses, respectively. Each student $a$ seeks to be matched to $\mathsf{cap}(a) \ge 1$ courses, while each course $b$ seeks $\mathsf{cap}(b) \ge 1$ many students to be matched to it. The Gale-Shapley algorithm computes a pairwise-stable matching (one with no blocking edge) in $G$ in linear time. We consider the problem of computing a popular matching in $G$: a matching $M$ is popular if $M$ cannot lose an election to any matching where vertices cast votes for one matching versus another. Our main contribution is to show that a max-size popular matching in $G$ can be computed by the 2-level Gale-Shapley algorithm in linear time. This is an extension of the classical Gale-Shapley algorithm, and we prove its correctness via linear programming.
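For reference, here is a sketch of the classical student-proposing Gale-Shapley algorithm in the one-to-one special case (all capacities equal to 1); the paper's 2-level variant for computing a max-size popular matching is not reproduced here.

```python
def gale_shapley(student_prefs, course_prefs):
    """Classical student-proposing Gale-Shapley, one-to-one case.

    student_prefs: dict student -> list of courses, most preferred first
    course_prefs : dict course  -> list of students, most preferred first
    Returns a pairwise-stable matching as a dict course -> student.
    """
    rank = {c: {s: i for i, s in enumerate(p)} for c, p in course_prefs.items()}
    nxt = {s: 0 for s in student_prefs}   # index of the next course to propose to
    match = {}                            # course -> student
    free = list(student_prefs)
    while free:
        s = free.pop()
        if nxt[s] >= len(student_prefs[s]):
            continue                      # s exhausted its list and stays unmatched
        c = student_prefs[s][nxt[s]]
        nxt[s] += 1
        if c not in match:
            match[c] = s
        elif rank[c][s] < rank[c][match[c]]:
            free.append(match[c])         # c prefers s; its old partner becomes free
            match[c] = s
        else:
            free.append(s)                # c rejects s; s will propose again
    return match

students = {"a1": ["b1", "b2"], "a2": ["b1", "b2"]}
courses = {"b1": ["a2", "a1"], "b2": ["a1", "a2"]}
print(gale_shapley(students, courses))    # {'b1': 'a2', 'b2': 'a1'}
```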
|
arxiv:1609.07531
|
Feature toggles and configuration options are modern programmatic techniques to easily include or exclude functionality in a software product. The research contributions to these two techniques have most often been focused on either one of them separately. However, focusing on the similarities of these two techniques may enable a more fruitful combined family of research on software configuration, a term we use to encompass both techniques. Also, a common terminology may have enabled meta-analysis, a more practical application of the research on the two techniques, and prevented duplication of research effort. The goal of this research study is to aid researchers in conducting a family of research on software configuration by extending an existing model of software configuration that provides terminology for research studies. To achieve our goal, we started with Siegmund et al.'s model of software configuration (MSC), which was developed based on interviews and publications on configuration options. We explicitly extend the MSC to include feature toggles and to add qualitative analysis of feature toggle-related resources. From our analysis, we proposed MSCv2 as an extended version of MSC and evaluated it through its application on five academic publications and the Chrome system. Our results indicate that multiple researchers studying the same system may provide different definitions of software configuration in their publications. Also, similar research questions may be answered on feature toggles and configuration options repeatedly because of a lack of a clear definition of software configuration. These observations indicate that having a model for defining software configuration may enable clearer and more generalized research on the software configuration family of research. Practitioners may benefit from MSCv2 in their systems to better transfer knowledge to other practitioners and researchers.
|
arxiv:2212.00505
|
We study the structure of $D$-modules over a ring $R$ which is a direct summand of a polynomial or a power series ring $S$ with coefficients over a field. We relate properties of $D$-modules over $R$ to $D$-modules over $S$. We show that the localization $R_f$ and the local cohomology module $H^i_I(R)$ have finite length as $D$-modules over $R$. Furthermore, we show the existence of the Bernstein-Sato polynomial for elements in $R$. In positive characteristic, we use this relation between $D$-modules over $R$ and $S$ to show that the set of $F$-jumping numbers of an ideal $I \subseteq R$ is contained in the set of $F$-jumping numbers of its extension in $S$. As a consequence, the $F$-jumping numbers of $I$ in $R$ form a discrete set of rational numbers. We also relate the Bernstein-Sato polynomial in $R$ with the $F$-thresholds and the $F$-jumping numbers in $R$.
|
arxiv:1611.04412
|
We study the old problem of the uniqueness of chemical evolution models by analyzing a set of multiphase models for the galaxy NGC 4303 computed for a variety of plausible physical input parameters. Our aim is to determine if the input parameters may be strongly constrained when nebular abundances and stellar spectral index radial distributions are used. We run a large number of models (500) for NGC 4303, varying the input parameters. Less than 4% of the models (19) fit the present day observational data within a region of 95% probability. The number of models reduces to ~1% (6) when we also ask them to reproduce the time-averaged abundances represented by the spectral indices. Thus, by proving that only a small fraction of models are favored in reproducing the present day radial abundance distributions and the spectral index data simultaneously, we show that these spectral indices provide strong time-dependent additional constraints on the possible star formation and chemical histories of spiral disks.
|
arxiv:astro-ph/0203399
|
Metamaterials comprising assemblies of dielectric resonators have attracted much attention due to their low intrinsic loss and isotropic optical response. In particular, metasurfaces made from silicon dielectric resonators have shown desirable behaviors such as efficient nonlinear optical conversion, spectral filtering and advanced wave-front engineering. To further explore the potential of dielectric metamaterials, we present all-dielectric metamaterials fabricated from epitaxially grown III-V semiconductors that can exploit the high second-order optical susceptibilities of III-V semiconductors, as well as the ease of monolithically integrating active/gain media. Specifically, we create GaAs nano-resonators using a selective wet oxidation process that forms a low refractive index AlGaO (n ~ 1.6) underlayer, similar to silicon dielectric resonators formed using silicon-on-insulator wafers. We further use the same fabrication processes to demonstrate multilayer III-V dielectric resonator arrays that provide us with new degrees of freedom in device engineering. For these arrays, we experimentally measure ~100% reflectivity over a broad spectral range. We envision that all-dielectric III-V semiconductor metamaterials will open up new avenues for passive, active and nonlinear all-dielectric metamaterials.
|
arxiv:1605.00298
|
We consider the Robin Laplacian in the domains $\Omega$ and $\Omega^\varepsilon$, $\varepsilon > 0$, with sharp and blunted cusps, respectively. Assuming that the Robin coefficient $a$ is large enough, the spectrum of the problem in $\Omega$ is known to be residual and to cover the whole complex plane, but on the contrary, the spectrum in the Lipschitz domain $\Omega^\varepsilon$ is discrete. However, our results reveal the strange behavior of the discrete spectrum as the blunting parameter $\varepsilon$ tends to 0: we construct asymptotic forms of the eigenvalues and detect families of "hardly movable" and "plummeting" ones. Eigenvalues of the first type do not leave a small neighborhood of a point for any small $\varepsilon > 0$, while those of the second type move at a high rate $O(|\ln \varepsilon|)$ downwards along the real axis $\mathbb{R}$ to $-\infty$. At the same time, any point $\lambda \in \mathbb{R}$ is a "blinking eigenvalue", i.e., it belongs to the spectrum of the problem in $\Omega^\varepsilon$ almost periodically in the $|\ln \varepsilon|$-scale. Besides standard spectral theory, we use the techniques of dimension reduction and self-adjoint extensions to obtain these results.
|
arxiv:1809.10963
|
To estimate the real accuracy of EOP prediction, real-time predictions made by the IERS Sub-bureau for Rapid Service and Prediction (USNO) and at the Institute of Applied Astronomy (IAA) EOP service are analyzed. Methods for a priori estimation of prediction accuracy are discussed.
|
arxiv:1102.0655
|
We consider bound states of fermions with anomalous magnetic moments (neutrinos, neutrons) in the radial electric and magnetic fields of a monopole. In the case of a radial magnetic field, the interaction $\vec{\sigma}\vec{H}$ violates P-parity, and for this reason we must use the method of (hep-ph/9901248), where both components of the spinor are considered as a linear combination of spherical spinors with different P-parity. We also apply the pseudoscalar-like interaction (2) obtained in \cite{df} to the monopole case and add it to the Dirac equation. We obtain the system of differential equations for the radial functions which define the energy levels of fermions with anomalous magnetic moments in the presence of a monopole.
|
arxiv:hep-ph/0001263
|
We present preliminary results from the deepest and largest photographic proper-motion survey ever undertaken of the Galactic bulge. Our first-epoch plate material (from 1972-3) goes deep enough (V_lim = 22) to reach below the bulge main-sequence turnoff. These plates cover an area of approximately 25 arcmin x 25 arcmin of the bulge in the low-extinction (A_V = 0.8 mag) Plaut field at l = 0 deg, b = -8 deg, approximately 1 kpc south of the nucleus. This is the point at which the transition between bulge and halo populations likely occurs and is, therefore, an excellent location to study the interface between the dense metal-rich bulge and the metal-poor halo. In this conference we report results based on three first-epoch and three second-epoch plates spanning 21 years. It is found that it is possible to obtain proper motions with errors less than 0.5 mas/yr for a substantial number of stars down to V = 20, without color restriction. For the subsample with errors less than 1 mas/yr we derive proper-motion dispersions in the direction of galactic longitude and latitude of 3.378 +/- 0.033 mas/yr and 2.778 +/- 0.028 mas/yr, respectively. These dispersions agree with those derived by Spaenhauer et al. (1992) in Baade's Window.
|
arxiv:astro-ph/9607135
|
Let $M$ be a compact manifold equipped with a pair of complementary foliations, say horizontal and vertical. In Catuogno, Silva and Ruffino (\textit{Stoch. Dyn.}, 2013) it is shown that, up to a stopping time $\tau$, a stochastic flow of local diffeomorphisms $\varphi_t$ in $M$ can be written as a Markovian process in the subgroup of diffeomorphisms which preserve the horizontal foliation composed with a process in the subgroup of diffeomorphisms which preserve the vertical foliation. Here, we discuss topological aspects of this decomposition. The main result guarantees the global decomposition of a flow if it preserves the orientation of a transversely orientable foliation. In the last section, we present an Itô-Liouville formula for subdeterminants of linearised flows. We use this formula to obtain sufficient conditions for the existence of the decomposition for all $t \geq 0$.
|
arxiv:1511.01376
|
$$\frac{1}{2}(-p)^{\frac{p-3}{4}} \det[S(p)],$$ where $\det[S(p)]$ is the determinant of the $\frac{p-1}{2}$ by $\frac{p-1}{2}$ matrix $S(p)$ with entries $S(p)_{j,k} = (\frac{j^2+k^2}{p})$ for any $1 \le j, k \le (p-1)/2$.
|
arxiv:1904.06055
|
Functional magnetic resonance imaging (fMRI) is one of the most popular methods for studying the human brain. Task-related fMRI data processing aims to determine which brain areas are activated when a specific task is performed and is usually based on the blood oxygen level dependent (BOLD) signal. The background BOLD signal also reflects systematic fluctuations in regional brain activity which are attributed to the existence of resting-state brain networks. We propose a new fMRI data generating model which takes into consideration the existence of common task-related and resting-state components. We first estimate the common task-related temporal component via two successive stages of generalized canonical correlation analysis, and then we estimate the common task-related spatial component, leading to a task-related activation map. The experimental tests of our method with synthetic data reveal that we are able to obtain very accurate temporal and spatial estimates even at very low signal-to-noise ratio (SNR), which is usually the case in fMRI data processing. The tests with real-world fMRI data show significant advantages over standard procedures based on general linear models (GLMs).
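As a toy illustration of the correlation-analysis ingredient, the sketch below computes a plain two-view CCA via whitening and an SVD on synthetic data with one shared component; the paper's method uses generalized CCA across many data sets in two successive stages, which this sketch does not implement.

```python
import numpy as np

def cca_top_component(X, Y):
    """Top canonical pair between data matrices X (T x p) and Y (T x q)."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    Ux, Sx, Vxt = np.linalg.svd(X, full_matrices=False)
    Uy, Sy, Vyt = np.linalg.svd(Y, full_matrices=False)
    U, S, Vt = np.linalg.svd(Ux.T @ Uy)        # canonical correlations in S
    wx = Vxt.T @ (U[:, 0] / Sx)                # weights back in the original spaces
    wy = Vyt.T @ (Vt[0] / Sy)
    return X @ wx, Y @ wy, S[0]

rng = np.random.default_rng(1)
shared = rng.normal(size=(200, 1))             # common "temporal" component
X = shared @ rng.normal(size=(1, 5)) + 0.1 * rng.normal(size=(200, 5))
Y = shared @ rng.normal(size=(1, 7)) + 0.1 * rng.normal(size=(200, 7))
_, _, corr = cca_top_component(X, Y)
print(round(float(corr), 3))                   # close to 1 for a strongly shared component
```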
|
arxiv:2210.08531
|
Edge computing is promoted to meet the increasing performance needs of data-driven services using computational and storage resources close to the end devices, at the edge of the current network. To achieve higher performance in this new paradigm, one has to consider how to combine the efficiency of resource usage at all three layers of the architecture: end devices, edge devices, and the cloud. While cloud capacity is elastically extendable, end devices and edge devices are to various degrees resource-constrained. Hence, efficient resource management is essential to make edge computing a reality. In this work, we first present terminology and architectures to characterize current works within the field of edge computing. Then, we review a wide range of recent articles and categorize relevant aspects in terms of 4 perspectives: resource type, resource management objective, resource location, and resource use. This taxonomy and the ensuing analysis are used to identify some gaps in the existing research. Among several research gaps, we found that research is less prevalent on data, storage, and energy as resources, and less extensive towards the estimation, discovery and sharing objectives. As for resource types, the most well-studied resources are computation and communication resources. Our analysis shows that resource management at the edge requires a deeper understanding of how methods applied at different levels and geared towards different resource types interact. Specifically, the impact of mobility and of collaboration schemes requiring incentives is expected to be different in edge architectures compared to the classic cloud solutions. Finally, we find that fewer works are dedicated to the study of non-functional properties or to quantifying the footprint of resource management techniques, including edge-specific means of migrating data and services.
|
arxiv:1801.05610
|
We propose a methodology to quantify market activity on a 24-hour basis by defining a scale, the so-called Scale of Market Quakes (SMQ). The SMQ is designed within a framework where we analyse the dynamics of excess price moves from one directional change of price to the next. We use the SMQ to quantify the FX market and evaluate the performance of the proposed methodology at major news announcements. The evolution of SMQ magnitudes from 2003 to 2009 is analysed across major currency pairs.
|
arxiv:0909.1690
|
Extra dimensions, which led to the foundation and inception of string theory, provide an elegant approach to force-unification. With bulk curvature as high as the Planck scale, higher curvature terms, namely f(R) gravity, seem to be a natural addendum in the bulk action. These can not only pass the classic tests of general relativity but also serve as potential alternatives to dark matter and dark energy. With interesting implications in inflationary cosmology, gravitational waves and particle phenomenology, it is worth exploring the impact of extra dimensions and f(R) gravity on black hole accretion. Various classes of black hole solutions have been derived which bear non-trivial imprints of these ultraviolet corrections to general relativity. This in turn gets engraved in the continuum spectrum emitted by the accretion disk around black holes. Since the near horizon regime of supermassive black holes manifests maximum curvature effects, we compare the theoretical estimates of disk luminosity with quasar optical data to discern the effect of the modified background on the spectrum. In particular, we explore a certain class of black hole solutions bearing a striking resemblance to the well-known Reissner-Nordström-de Sitter/anti-de Sitter/flat spacetime which, unlike general relativity, can also accommodate a negative charge parameter. By computing error estimators like chi-square, Nash-Sutcliffe efficiency, index of agreement, etc., we infer that optical observations of quasars favor a negative charge parameter, which can be a possible indicator of extra dimensions. The analysis also supports an asymptotically de Sitter spacetime with an estimate of the magnitude of the cosmological constant whose origin is solely attributed to f(R) gravity in higher dimensions.
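The error estimators named above have standard textbook definitions; a small sketch of how they could be computed for observed versus model luminosities follows. The data values are placeholders, and the paper's exact weighting choices are not reproduced.

```python
import numpy as np

def error_estimators(obs, model, sigma=None):
    """Chi-square, Nash-Sutcliffe efficiency (NSE) and Willmott's index of
    agreement for observed vs. modelled values, using their textbook forms."""
    obs, model = np.asarray(obs, float), np.asarray(model, float)
    sigma = np.ones_like(obs) if sigma is None else np.asarray(sigma, float)
    resid = obs - model
    chi2 = np.sum((resid / sigma) ** 2)
    nse = 1.0 - np.sum(resid ** 2) / np.sum((obs - obs.mean()) ** 2)
    d = 1.0 - np.sum(resid ** 2) / np.sum(
        (np.abs(model - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return chi2, nse, d

obs = [1.0, 1.2, 0.9, 1.5]     # placeholder observed luminosities
mod = [1.1, 1.15, 0.95, 1.4]   # placeholder model predictions
print(error_estimators(obs, mod))
```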
|
arxiv:1905.12820
|
We give a polynomial time approximation scheme (PTAS) for the unit demand capacitated vehicle routing problem (CVRP) on trees, for the entire range of the tour capacity. The result extends to the splittable CVRP.
|
arxiv:2111.03735
|
We consider the capacity problem for wireless networks. Networks are modeled as random unit-disk graphs, and the capacity problem is formulated as one of finding the maximum value of a multicommodity flow. In this paper, we develop a proof technique based on which we are able to obtain a tight characterization of the solution to the linear program associated with the multiflow problem, to within constants independent of network size. We also use this proof method to analyze network capacity for a variety of transmitter/receiver architectures, for which we obtain some conclusive results. These results contain as a special case (and strengthen) those of Gupta and Kumar for random networks, for which a new derivation is provided using only elementary counting and discrete probability tools.
|
arxiv:cs/0503047
|
Remote teleoperation of robots can broaden the reach of domain specialists across a wide range of industries such as home maintenance, health care, light manufacturing, and construction. However, current direct control methods are impractical, and existing tools for programming robots remotely have focused on users with significant robotic experience. Extending robot remote programming to end users, i.e., users who are experts in a domain but novices in robotics, requires tools that balance the rich features necessary for complex teleoperation tasks with ease of use. The primary challenge to usability is that novice users are unable to specify complete and robust task plans to allow a robot to perform duties autonomously, particularly in highly variable environments. Our solution is to allow operators to specify shorter sequences of high-level commands, which we call task-level authoring, to create periods of variable robot autonomy. This approach allows inexperienced users to create robot behaviors in uncertain environments by interleaving exploration, specification of behaviors, and execution as separate steps. End users are able to break down the specification of tasks and adapt to the current needs of the interaction and environments, combining the reactivity of direct control with asynchronous operation. In this paper, we describe a prototype system contextualized in light manufacturing and its empirical validation in a user study where 18 participants with some programming experience were able to perform a variety of complex telemanipulation tasks with little training. Our results show that our approach allowed users to create flexible periods of autonomy and solve rich manipulation tasks. Furthermore, participants significantly preferred our system over comparable, more direct interfaces, demonstrating the potential of our approach.
|
arxiv:2109.02301
|
Adaptive optics (AO) have been used to correct wavefronts to achieve diffraction-limited point spread functions in a broad range of optical applications, prominently ground-based astronomical telescopes operating in the near infra-red. While most AO systems cannot provide diffraction-limited performance in the optical passband (400 nm - 900 nm), AO can improve image concentration, as well as both near and far field image stability, within an AO-fed spectrograph. Enhanced near and far field stability increases wavelength-scale stability in high dispersion spectrographs. In this work, we describe detailed modelling of the stability improvements achievable on extremely large telescopes. These improvements in performance may enable the mass measurement of Earth twins by the precision radial velocity method, and the discovery of evidence of exobiotic activity in exoplanet atmospheres with the next generation of extremely large telescopes (ELTs). In this paper, we report on numerical simulations of the impact of AO on the performance of the GMT-Consortium Large Earth Finder (G-CLEF) instrument for the future Giant Magellan Telescope (GMT). The proximate cause of this study is to evaluate what improvements AO offer for exoplanet mass determination by the precision radial velocity (PRV) method and the discovery of biomarkers in exoplanet atmospheres. A modified AO system capable of achieving this improved stability even with changing conditions is proposed.
|
arxiv:1810.01020
|
We study the multiplicity of the eigenvalues of the Hodge Laplacian on smooth, compact Riemannian manifolds of dimension five for generic families of metrics. We prove that generically the Hodge Laplacian, restricted to the subspace of co-exact two-forms, has nonzero eigenvalues of multiplicity two. The proof is based on the fact that the Hodge Laplacian restricted to the subspace of co-exact two-forms is minus the square of the Beltrami operator, a first-order operator. We prove that for generic metrics the spectrum of the Beltrami operator is simple. Because the Beltrami operator in this setting is a skew-adjoint operator, this implies the main result for the Hodge Laplacian.
|
arxiv:1501.06165
|
The detailed age-chemical abundance relations of stars measure time-dependent chemical evolution. These trends offer strong empirical constraints on nucleosynthetic processes, as well as on the homogeneity of star-forming gas. Characterizing chemical abundances of stars across the Milky Way over time has been made possible very recently, thanks to surveys like Gaia, APOGEE and Kepler. Studies of the low-$\alpha$ disk have shown that individual elements have unique age-abundance trends and the intrinsic dispersion around these relations is small. In this study, we examine and compare the age distribution of stars across both the high- and low-$\alpha$ disks and quantify the intrinsic dispersion of 16 elements around their age-abundance relations at [Fe/H] = 0 using APOGEE DR16. We find the high-$\alpha$ disk has shallower age-abundance relations compared to the low-$\alpha$ disk, but similar median intrinsic dispersions of ~0.04 dex, suggesting universal element production mechanisms for the high- and low-$\alpha$ disks, despite differences in formation history. We visualize the temporal and spatial distribution of disk stars in small chemical cells, revealing signatures of upside-down and inside-out formation. Further, the metallicity skew and the [Fe/H]-age relations across radius indicate different initial metallicity gradients and evidence for radial migration. Our study is accompanied by an age catalogue for 64,317 stars in APOGEE derived using The Cannon with ~1.9 Gyr uncertainty across all ages (APO-CAN stars), as well as a red clump catalogue of 22,031 stars with a contamination rate of 2.7%.
|
arxiv:2102.12003
|
A unification of left-right $\rm{SU}(3)_{\rm L} \times \rm{SU}(3)_{\rm R}$, colour $\rm{SU}(3)_{\rm C}$ and family $\rm{SU}(3)_{\rm F}$ symmetries in a maximal rank-8 subgroup of ${\rm E}_8$ is proposed as a landmark for future explorations beyond the Standard Model (SM). We discuss the implications of this scheme in a supersymmetric (SUSY) model based on the trinification gauge $\left[\rm{SU}(3)\right]^3$ and global $\rm{SU}(3)_{\rm F}$ family symmetries. Among the key properties of this model are the unification of the SM Higgs and lepton sectors, a common Yukawa coupling for chiral fermions, the absence of the $\mu$-problem, gauge coupling unification and proton stability to all orders in perturbation theory. The minimal field content consistent with an SM-like effective theory at low energies is composed of one $\mathrm{E}_6$ $27$-plet per generation as well as three gauge and one family $\rm{SU}(3)$ octets inspired by the fundamental sector of ${\rm E}_8$. The details of the corresponding (SUSY and gauge) symmetry breaking scheme, multi-scale gauge couplings' evolution, and resulting effective low-energy scenarios are discussed.
|
arxiv:1711.05199
|
We present further evidence for a dual conformal symmetry in the four-gluon planar scattering amplitude in N = 4 SYM. We show that all the momentum integrals appearing in the perturbative on-shell calculations up to five loops are dual to true conformal integrals, well defined off shell. Assuming that the complete off-shell amplitude has this dual conformal symmetry and using the basic properties of factorization of infrared divergences, we derive the special form of the finite remainder previously found at weak coupling and recently reproduced at strong coupling by AdS/CFT. We show that the same finite term appears in a weak coupling calculation of a Wilson loop whose contour consists of four light-like segments associated with the gluon momenta. We also demonstrate that, due to the special form of the finite remainder, the asymptotic Regge limit of the four-gluon amplitude coincides with the exact expression evaluated for arbitrary values of the Mandelstam variables.
|
arxiv:0707.0243
|
Inspired by the work of [Kempe, Kleinberg, Oren, Slivkins, EC'13], we introduce and analyze a model of opinion formation; the update rule of our dynamics is a simplified version of that of Kempe et al. We assume that the population is partitioned into types whose interaction pattern is specified by a graph. Interaction leads to population mass moving from types of smaller mass to those of bigger mass. We show that, starting uniformly at random over all population vectors on the simplex, our dynamics converges pointwise with probability one to an independent set. This settles an open problem of Kempe et al., as applicable to our dynamics. We believe that our techniques can be used to settle the open problem for the Kempe et al. dynamics as well. Next, we extend the model of Kempe et al. by introducing the notion of birth and death of types, with the interaction graph evolving appropriately. Birth of types is determined by a Bernoulli process and types die when their population mass is less than a parameter $\epsilon$. We show that if the births are infrequent, then there are long periods of "stability" in which there is no population mass that moves. Finally, we show that even if births are frequent and "stability" is not attained, the total number of types does not explode: it remains logarithmic in $1/\epsilon$.
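The update rule is described above only qualitatively (mass flows from smaller-mass types to larger ones along edges of the interaction graph); the toy step below is an assumed concrete version of that idea, not the paper's exact dynamics.

```python
import numpy as np

def update(masses, edges, rate=0.1):
    """One illustrative step: along each edge, a fraction of the smaller
    type's mass flows to the larger type (toy version of the dynamics)."""
    masses = np.asarray(masses, float).copy()
    delta = np.zeros_like(masses)
    for i, j in edges:
        small, big = (i, j) if masses[i] < masses[j] else (j, i)
        flow = rate * masses[small]
        delta[small] -= flow
        delta[big] += flow
    return masses + delta

m = [0.5, 0.3, 0.2]
edges = [(0, 1), (1, 2)]
for _ in range(5):
    m = update(m, edges)
print(np.round(m, 3))   # mass concentrates on the heavier types
```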
|
arxiv:1607.03881
|
We show that flows of pure electromagnetic energy are subject to instabilities familiar from hydrodynamics and plasma physics: Kelvin-Helmholtz and screw instabilities. In the framework of force-free electrodynamics, it is found that electric-like tangential discontinuities are Kelvin-Helmholtz-unstable. All non-trivial cylindrically symmetrical magnetic configurations are screw-unstable. The Poynting jet of the Goldreich-Julian pulsar is screw-unstable if the current density in the polar cap region exceeds the charge density times the speed of light. This may be the process that determines the pulsar luminosity.
|
arxiv:astro-ph/9902288
|
by j. halcombe laning was used by hamilton ' s team to develop asynchronous flight software : because of the flight software ' s system - software ' s error detection and recovery techniques that included its system - wide " kill and recompute " from a " safe place " restart approach to its snapshot and rollback techniques, the display interface routines ( aka the priority displays ) together with its man - in - the - loop capabilities were able to be created in order to have the capability to interrupt the astronauts ' normal mission displays with priority displays of critical alarms in case of an emergency. this depended on our assigning a unique priority to every process in the software in order to ensure that all of its events would take place in the correct order and at the right time relative to everything else that was going on. hamilton ' s priority alarm displays interrupted the astronauts ' normal displays to warn them that there was an emergency " giving the astronauts a go / no - go decision ( to land or not to land ) ". jack garman, a nasa computer engineer in mission control, recognized the meaning of the errors that were presented to the astronauts by the priority displays and shouted, " go, go! " and they continued. paul curto, a senior technologist who nominated hamilton for a nasa space act award, called hamilton ' s work " the foundation for ultra - reliable software design ". hamilton later wrote of the incident : the computer ( or rather the software in it ) was smart enough to recognize that it was being asked to perform more tasks than it should be performing. it then sent out an alarm, which meant to the astronaut, ' i ' m overloaded with more tasks than i should be doing at this time and i ' m going to keep only the more important tasks ' ; i. e., the ones needed for landing... actually, the computer was programmed to do more than recognize error conditions. a complete set of recovery programs was incorporated into the software. the software ' s action, in this case, was to eliminate lower priority tasks and re - establish the more important ones... if the computer hadn ' t recognized this problem and taken recovery action, i doubt if apollo 11 would have been the successful moon landing it was. = = = businesses = = = in 1976, hamilton co - founded with saydean zeldin a company called higher order software ( hos ) to further develop ideas about error prevention and fault tolerance emerging from their experience at mit working on the apollo program
|
https://en.wikipedia.org/wiki/Margaret_Hamilton_(software_engineer)
|
this paper is a survey on the structure of manifolds with a lower ricci curvature bound.
|
arxiv:math/0612107
|
equation of motion for the galactic tide is treated for the case of a comet situated in the oort cloud of comets. we take into account that galactic potential and mass density depend on a distance from the galactic equator and on a distance from the rotational axis of the galaxy. secular evolution of orbital elements is presented. new terms generated by the sun ' s oscillation about the galactic plane are considered. the inclusion of the new terms into the equation of motion of the comet leads to orbital evolution which may be significantly different from the conventional approach. the usage of the secular time derivatives is limited to the cases when orbital period of the comet is much less than i ) the period of oscillations of the sun around the galactic equator, and, ii ) the orbital period of the motion of the sun around the galactic center.
|
arxiv:0912.3449
|
fine - tuning provides an effective means to specialize pre - trained models for various downstream tasks. however, fine - tuning often incurs high memory overhead, especially for large transformer - based models, such as llms. while existing methods may reduce certain parts of the memory required for fine - tuning, they still require caching all intermediate activations computed in the forward pass to update weights during the backward pass. in this work, we develop tokentune, a method to reduce memory usage, specifically the memory to store intermediate activations, in the fine - tuning of transformer - based models. during the backward pass, tokentune approximates the gradient computation by backpropagating through just a subset of input tokens. thus, with tokentune, only a subset of intermediate activations are cached during the forward pass. also, tokentune can be easily combined with existing methods like lora, further reducing the memory cost. we evaluate our approach on pre - trained transformer models with up to billions of parameters, considering the performance on multiple downstream tasks such as text classification and question answering in a few - shot learning setup. overall, tokentune achieves performance on par with full fine - tuning or representative memory - efficient fine - tuning methods, while greatly reducing the memory footprint, especially when combined with other methods with complementary memory reduction mechanisms. we hope that our approach will facilitate the fine - tuning of large transformers, in specializing them for specific domains or co - training them with other neural components from a larger system. our code is available at https : / / github. com / facebookresearch / tokentune.
|
arxiv:2501.18824
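The abstract above describes backpropagating through only a subset of input tokens. Below is a minimal PyTorch-style sketch of that idea; the function name, the uniform random selection rule, and the suggestion that this alone realises the memory savings are assumptions for illustration, not the paper's implementation (which also changes what is cached during the forward pass).

```python
import torch

def keep_k_token_gradients(hidden, k):
    # hidden: (batch, seq_len, dim) activations inside a transformer block.
    # Keep the autograd graph for only k randomly chosen token positions per
    # sequence and detach the rest, so gradients propagate through just that
    # subset of tokens.
    batch, seq_len, _ = hidden.shape
    idx = torch.rand(batch, seq_len, device=hidden.device).argsort(dim=1)[:, :k]
    mask = torch.zeros(batch, seq_len, dtype=torch.bool, device=hidden.device)
    rows = torch.arange(batch, device=hidden.device).unsqueeze(1)
    mask[rows, idx] = True
    return torch.where(mask.unsqueeze(-1), hidden, hidden.detach())
```

In a real fine-tuning loop such a selection would be applied inside each transformer block and, as the abstract notes, can be combined with methods like LoRA.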
|
man - hours, machine time, energy and other resources that do not generate value. = = = management science = = = management science uses various scientific research - based principles, strategies, and analytical methods including mathematical modeling, statistics and numerical algorithms to improve an organization ' s ability to enact rational and meaningful management decisions by arriving at optimal or near optimal solutions to complex decision problems. = = = engineering design management = = = engineering design management represents the adaptation and application of customary management practices, with the intention of achieving a productive engineering design process. engineering design management is primarily applied in the context of engineering design teams, whereby the activities, outputs and influences of design teams are planned, guided, monitored and controlled. = = = human factors safety culture = = = critical to management success in engineering is the study of human factors and safety culture involved with highly complex tasks within organizations large and small. in complex engineering systems, human factors safety culture can be critical in preventing catastrophe and minimizing the realized hazard rate. critical areas of safety culture are minimizing blame avoidance, minimizing power distance, an appropriate ambiguity tolerance and minimizing a culture of concealment. increasing organizational empathy and an ability to clearly report problems up the chain of management is important to the success of any engineering program. managing an engineering firm is in opposition to the management of a law firm. law firms keep secrets while engineering firms succeed when information is deiminated clearly and quickly. engineering managers must push against a culture of concealment which may be promoted by the law department. managers in an engineering firm must be ready to push back against schedule and budget constraints from the executive suite. engineering managers must use engineering law to push back against the executive suite to ensure public safety. the executive suite in an engineering organization can become consumed with financial data imperiling public safety. = = education = = engineering management programs typically include instruction in accounting, economics, finance, project management, systems engineering, industrial engineering, mathematical modeling and optimization, management information systems, quality control and six sigma, operations management, operations research, human resources management, industrial psychology, safety and health. there are many options for entering into engineering management, albeit that the foundation requirement is an engineering license. = = = undergraduate degrees = = = although most engineering management programs are geared toward graduate studies, there are a number of institutions that teach em at the undergraduate level. over twenty undergraduate engineering management related programs are accredited by abet including : west point ( united states military academy ), western michigan university ( abet - accredited by
|
https://en.wikipedia.org/wiki/Engineering_management
|
we explore two methods of compressing the redshift space galaxy power spectrum and bispectrum with respect to a chosen set of cosmological parameters. both methods involve reducing the dimension of the original data - vector ( e. g. 1000 elements ) to the number of cosmological parameters considered ( e. g. seven ) using the karhunen - lo \ ` eve algorithm. in the first case, we run mcmc sampling on the compressed data - vector in order to recover the one - dimensional ( 1d ) and two - dimensional ( 2d ) posterior distributions. the second option, approximately 2000 times faster, works by orthogonalising the parameter space through diagonalisation of the fisher information matrix before the compression, obtaining the posterior distributions without the need of mcmc sampling. using these methods for future spectroscopic redshift surveys like desi, euclid and pfs would drastically reduce the number of simulations needed to compute accurate covariance matrices with minimal loss of constraining power. we consider a redshift bin of a desi - like experiment. using the power spectrum combined with the bispectrum as a data - vector, both compression methods on average recover the 68 % credible regions to within 0. 7 % and 2 % of those resulting from standard mcmc sampling respectively. these confidence intervals are also smaller than the ones obtained using only the power spectrum by ( 81 %, 80 %, 82 % ) respectively for the bias parameter b _ 1, the growth rate f and the scalar amplitude parameter a _ s.
|
arxiv:1709.03600
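For concreteness, here is a minimal numpy sketch of a Karhunen-Loève-style (MOPED-like) compression that maps a long data vector to one number per cosmological parameter; the paper's actual pipeline, including the Fisher-matrix orthogonalisation variant that avoids MCMC, involves more than this.

```python
import numpy as np

def kl_compress(data, model, dmodel_dtheta, cov):
    # data, model: length-N observed data vector and its theoretical mean.
    # dmodel_dtheta: (n_params, N) derivatives of the model w.r.t. each parameter.
    # cov: (N, N) covariance matrix.
    # Returns n_params compressed numbers, t_i = (d model/d theta_i)^T C^-1 (data - model).
    cinv_residual = np.linalg.solve(cov, data - model)
    return dmodel_dtheta @ cinv_residual
```

The second method described in the abstract additionally rotates the parameter basis by diagonalising the Fisher information matrix before compressing, which is what removes the need for MCMC sampling.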
|
anomaly detection and localization of visual data, including images and videos, are of great significance in both machine learning academia and applied real - world scenarios. despite the rapid development of visual anomaly detection techniques in recent years, the interpretations of these black - box models and reasonable explanations of why anomalies can be distinguished out are scarce. this paper provides the first survey concentrated on explainable visual anomaly detection methods. we first introduce the basic background of image - level and video - level anomaly detection. then, as the main content of this survey, a comprehensive and exhaustive literature review of explainable anomaly detection methods for both images and videos is presented. next, we analyze why some explainable anomaly detection methods can be applied to both images and videos and why others can be only applied to one modality. additionally, we provide summaries of current 2d visual anomaly detection datasets and evaluation metrics. finally, we discuss several promising future directions and open problems to explore the explainability of 2d visual anomaly detection. the related resource collection is given at https : / / github. com / wyzjack / awesome - xad.
|
arxiv:2302.06670
|
the phenix experiment has measured direct photons at $ \ sqrt { s _ { nn } } $ = 200 gev in $ p + p $, $ d $ + au and au + au collisions. for $ p _ { t } $ $ < $ 4 gev / $ c $, the internal conversion into $ e ^ { + } e ^ { - } $ pairs has been used to measure the direct photons in au + au.
|
arxiv:nucl-ex/0605005
|
new calculations of radiative rates and electron impact excitation cross sections for fe xi are used to derive emission line intensity ratios involving 3s ^ 23p ^ 4 - 3s ^ 23p ^ 33d transitions in the 180 - 223 a wavelength range. these ratios are subsequently compared with observations of a solar active region, obtained during the 1995 flight solar euv research telescope and spectrograph ( serts ). the version of serts flown in 1995 incorporated a multilayer grating that enhanced the instrumental sensitivity for features in the 170 - 225 a wavelength range, observed in second - order between 340 and 450 a. this enhancement led to the detection of many emission lines not seen on previous serts flights, which were measured with the highest spectral resolution ( 0. 03 a ) ever achieved for spatially resolved active region spectra in this wavelength range. however, even at this high spectral resolution, several of the fe xi lines are found to be blended, although the sources of the blends are identified in the majority of cases. the most useful fe xi electron density diagnostic line intensity ratio is i ( 184. 80 a ) / i ( 188. 21 a ). this ratio involves lines close in wavelength and free from blends, and which varies by a factor of 11. 7 between n _ e = 10 ^ 9 and 10 ^ 11 cm ^ - 3, yet shows little temperature sensitivity. an unknown line in the serts spectrum at 189. 00 a is found to be due to fe xi, the first time ( to our knowledge ) this feature has been identified in the solar spectrum. similarly, there are new identifications of the fe xi 192. 88, 198. 56 and 202. 42 a features, although the latter two are blended with s viii / fe xii and fe xiii, respectively.
|
arxiv:astro-ph/0504106
|
we analyze the phase diagram of an elementary statistical lattice model of classical, discrete, spin variables, with nearest - neighbor ferro - magnetic isotropic interactions in competition with chiral interactions along an axis. at the mean - field level, we show the existence of para - magnetic lines of transition to a region of modulated ( helimagnetic ) structures. we then turn to the analysis of the analogous problem on a cayley tree. taking into account the simplicity introduced by the infinite - coordination limit of the tree, we explore several details of the phase diagrams in terms of temperature and a parameter of competition. in particular, we characterize sequences of modulated ( helical ) structures associated with devil ' s staircases of a fractal character.
|
arxiv:2108.08833
|
we explore the sparticle and higgs spectroscopy of an su ( 5 ) inspired extension of the constrained minimal supersymmetric standard model ( cmssm ). the universal soft parameter m _ 0 is replaced by m _ { \ bar 5 } and m _ { 10 }, where m _ { \ bar 5 } and m _ { 10 } denote universal soft scalar masses associated with fields in the five and ten dimensional representations of su ( 5 ). the special case m _ { \ bar 5 } < < m _ { 10 } yields a rather characteristic sparticle spectroscopy which can be tested at the lhc. we highlight a few benchmark points in which the lightest neutralino saturates the wmap bound on cold dark matter abundance.
|
arxiv:0811.1187
|
in this thesis we look into programming by example ( pbe ), which is about finding a program mapping given inputs to given outputs. pbe has traditionally seen a split between formal versus neural approaches, where formal approaches typically involve deductive techniques such as sat solvers and types, while the neural approaches involve training on sample input - outputs with their corresponding program, typically using sequence - based machine learning techniques such as lstms [ 41 ]. as a result of this split, programming types had yet to be used in neural program synthesis techniques. we propose a way to incorporate programming types into a neural program synthesis approach for pbe. we introduce the typed neuro - symbolic program synthesis ( tnsps ) method based on this idea, and test it in the functional programming context to empirically verify type information may help improve generalization in neural synthesizers on limited - size datasets. our tnsps model builds upon the existing neuro - symbolic program synthesis ( nsps ), a tree - based neural synthesizer combining info from input - output examples plus the current program, by further exposing information on types of those input - output examples, of the grammar production rules, as well as of the hole that we wish to expand in the program. we further explain how we generated a dataset within our domain, which uses a limited subset of haskell as the synthesis language. finally we discuss several topics of interest that may help take these ideas further. for reproducibility, we release our code publicly.
|
arxiv:2008.12613
|
the quantum theory of linearized perturbations of the gravitational field of a schwarzschild black hole is presented. the fundamental operators are seen to be the perturbed weyl scalars $ \ dot \ psi _ 0 $ and $ \ dot \ psi _ 4 $ associated with the newman - penrose description of the classical theory. formulae are obtained for the expectation values of the modulus squared of these operators in the boulware, unruh and hartle - hawking quantum states. differences between the renormalized expectation values of both $ \ bigl | \ dot \ psi _ 0 \ bigr | ^ 2 $ and $ \ bigl | \ dot \ psi _ 4 \ bigr | ^ 2 $ in the three quantum states are evaluated numerically.
|
arxiv:gr-qc/9412075
|
a systematic study of possible deuteronlike two - meson bound states, { \ it deusons }, is presented. previous arguments that many such bound states may exist are elaborated with detailed arguments and numerical calculations including, in particular, the tensor potential. in the heavy meson sector one - pion exchange alone is strong enough to form at least deuteron - like $ b \ bar b ^ * $ and $ b ^ * \ bar b ^ * $ composites bound by approximately 50 mev. composites of $ d \ bar d ^ * $ and $ d ^ * \ bar d ^ * $ states bound by pion exchange alone are expected near the thresholds, while in the light meson sector one generally needs some additional short range attraction to form bound states. the quantum numbers of these states are i = 0, in $ b \ bar b ^ * $ one predictss the states : $ \ eta _ b ( \ approx 10545 ), \ \ chi _ { b1 } ( \ approx 10562 ) $, and in $ b ^ * \ bar b ^ * $ one finds the states : $ \ eta _ b ( \ approx 10590 ), \ \ chi _ { b0 } ( \ approx 10582 ), \ h _ b ( \ approx 10608 ), \ \ chi _ { b2 } ( \ approx 10602 ) $. near the $ d \ bar d ^ * $ threshold the states : $ \ eta _ c ( \ approx 3870 ), \ \ chi _ { c0 } ( \ approx 3870 ) $ are predicted, and near the $ d ^ * \ bar d ^ * $ threshold one finds the states : $ \ chi _ { b0 } ( \ approx 4015 ), \ \ eta _ { c } ( \ approx 4015 ), \ h _ c ( \ approx 4015 ), \ \ chi _ { c2 } ( \ approx 4015 ) $. within the light meson sector pion exchange gives strong attraction for $ p \ bar v $ and $ v \ bar v $ systems with quantum numbers where the best non - $ q \ bar q $ candidates exist, although pion exchange alone is not strong enough to support such bound states.
|
arxiv:hep-ph/9310247
|
weak line emission originating in the photosphere is well known from o stars and widely used for luminosity classification. the physical origin of the line emission are nlte effects, most often optical pumping by far - uv lines. analogous lines in b stars of lower luminosity are identified in radially pulsating $ \ beta $ cephei stars. their diagnostic value is shown for radially pulsating stars, as these lines probe a much larger range of the photosphere than absorption lines, and can be traced to regions where the pulsation amplitude is much lower than seen in the absorption lines.
|
arxiv:1407.8133
|
in this paper, we consider the state controllability of networked systems, where the network topology is directed and weighted and the nodes are higher - dimensional linear time - invariant ( lti ) dynamical systems. we investigate how the network topology, the node - system dynamics, the external control inputs, and the inner interactions affect the controllability of a networked system, and show that for a general networked multi - input / multi - output ( mimo ) system : 1 ) the controllability of the overall network is an integrated result of the aforementioned relevant factors, which cannot be decoupled into the controllability of individual node - systems and the properties solely determined by the network topology, quite different from the familiar notion of consensus or formation controllability ; 2 ) if the network topology is uncontrollable by external inputs, then the networked system with identical nodes will be uncontrollable, even if it is structurally controllable ; 3 ) with a controllable network topology, controllability and observability of the nodes together are necessary for the controllability of the networked systems under some mild conditions, but nevertheless they are not sufficient. for a networked system with single - input / single - output ( siso ) lti nodes, we present precise necessary and sufficient conditions for the controllability of a general network topology.
|
arxiv:1505.01255
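As background for the controllability question discussed above, the standard Kalman rank test for a finite-dimensional LTI system x' = Ax + Bu is sketched below; for a networked system, A and B would be the large matrices assembled from the network topology, node dynamics, inner interactions and external input channels.

```python
import numpy as np

def kalman_controllable(A, B, tol=1e-9):
    # Build the controllability matrix [B, AB, ..., A^(n-1) B] and check its rank.
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    ctrb = np.hstack(blocks)
    return np.linalg.matrix_rank(ctrb, tol=tol) == n
```

The paper's point is precisely that, for networked MIMO systems, this rank condition on the assembled (A, B) cannot in general be decoupled into node-level controllability plus properties of the topology alone.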
|
in this paper, we investigate a dirichlet series involving the fourier $ - $ jacobi coefficients of two cusp forms $ f, g $ for orthogonal groups of signature $ ( 2, n + 2 ) $. in the case when $ f $ is a hecke eigenform and $ g $ is a maass lift of a specific poincar \ ' e series, we establish a connection with the standard $ l - $ function attached to $ f $. what is more, we find explicit choices of orthogonal groups, for which we obtain a clear $ - $ cut euler product expression for this dirichlet series. our considerations recover a classical result for siegel modular forms, first introduced by kohnen and skoruppa, but also provide a range of new examples, which can be related to other kinds of modular forms, such as paramodular, hermitian and quaternionic.
|
arxiv:2407.18663
|
we study the theory of a u ( 1 ) gauge field coupled to a spinon fermi surface. recently this model has been proposed as a possible description of the organic compound $ \ kappa - ( bedt - ttf ) _ 2 cu _ 2 ( cn ) _ 3 $. we calculate the susceptibility of this system and in particular examine the effect of pairing of the underlying spin liquid. we show that this proposed theory is consistent with the observed susceptibility measurements.
|
arxiv:cond-mat/0611224
|
the process industry ' s high expectations for digital twins require modeling approaches that can generalize across tasks and diverse domains with potentially different data dimensions and distributional shifts i. e., foundational models. despite success in natural language processing and computer vision, transfer learning with ( self - ) supervised signals for pre - training general - purpose models is largely unexplored in the context of digital twins in the process industry due to challenges posed by multi - dimensional time - series data, lagged cause - effect dependencies, complex causal structures, and varying number of ( exogenous ) variables. we propose a novel channel - dependent pre - training strategy that leverages synchronized cause - effect pairs to overcome these challenges by breaking down the multi - dimensional time - series data into pairs of cause - effect variables. our approach focuses on : ( i ) identifying highly lagged causal relationships using data - driven methods, ( ii ) synchronizing cause - effect pairs to generate training samples for channel - dependent pre - training, and ( iii ) evaluating the effectiveness of this approach in channel - dependent forecasting. our experimental results demonstrate significant improvements in forecasting accuracy and generalization capability compared to traditional training methods.
|
arxiv:2411.10152
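A minimal sketch of steps (i)-(ii): estimating the dominant lag between a cause series and an effect series via normalised cross-correlation and shifting the pair into alignment. The paper's data-driven identification of lagged causal relationships is more involved, so the function below is an illustrative assumption, not the proposed method.

```python
import numpy as np

def synchronise_pair(cause, effect, max_lag=50):
    # Find the lag (in samples) at which `cause` correlates most strongly with
    # `effect`, then return the shifted, aligned pair as a pre-training sample.
    c = (cause - cause.mean()) / cause.std()
    e = (effect - effect.mean()) / effect.std()
    lags = np.arange(1, max_lag + 1)
    scores = np.array([np.mean(c[:-lag] * e[lag:]) for lag in lags])
    best = int(lags[np.argmax(np.abs(scores))])
    return cause[:-best], effect[best:], best
```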
|
we use a monte carlo code to calculate the geodesic orbits of test particles around kerr black holes, generating a distribution function of both bound and unbound populations of dark matter particles. from this distribution function, we calculate annihilation rates and observable gamma - ray spectra for a few simple dark matter models. the features of these spectra are sensitive to the black hole spin, observer inclination, and detailed properties of the dark matter annihilation cross section and density profile. confirming earlier analytic work, we find that for rapidly spinning black holes, the collisional penrose process can reach efficiencies exceeding $ 600 \ % $, leading to a high - energy tail in the annihilation spectrum. the high particle density and large proper volume of the region immediately surrounding the horizon ensures that the observed flux from these extreme events is non - negligible.
|
arxiv:1506.06728
|
in this paper we present an informal description of the cyclic inflation scenario which allows our universe to " start " with a negative potential energy, inflate, and then gracefully exit to a positive potential energy universe. we discuss how this scenario fares in comparison with the standard inflationary paradigm with respect to the classic cosmological puzzles associated with the horizon, flatness and isotropy of our current universe. we also discuss some of the most debilitating problems of cyclic cosmologies, tolman ' s entropy problem, and the problem with the overproduction of blackholes. we also sketch the calculation of the primordial spectrum in these models and possible observable signatures. we end with a special focus on the exit mechanism where the universe can transition from the negative to a positive potential region. the treatise is based on an ongoing collaboration between the authors and closely follows conference presentations given on the subject by tb.
|
arxiv:1105.2636
|
extending g \ " odel ' s \ emph { dialectica } interpretation, we provide a functional interpretation of classical theories of positive arithmetic inductive definitions, reducing them to theories of finite - type functionals defined using transfinite recursion on well - founded trees.
|
arxiv:0802.1938
|
we report the measurement of extremely slow hole spin relaxation dynamics in small ensembles of self - assembled ingaas quantum dots. individual spin orientated holes are optically created in the lowest orbital state of each dot and read out after a defined storage time using spin memory devices. the resulting luminescence signal exhibits a pronounced polarization memory effect that vanishes for long storage times. the hole spin relaxation dynamics are measured as a function of external magnetic field and lattice temperature. we show that hole spin relaxation can occur over remarkably long timescales in strongly confined quantum dots ( up to ~ 270 us ), as predicted by recent theory. our findings are supported by calculations that reproduce both the observed magnetic field and temperature dependencies. the results suggest that hole spin relaxation in strongly confined quantum dots is due to spin orbit mediated phonon scattering between zeeman levels, in marked contrast to higher dimensional nanostructures where it is limited by valence band mixing.
|
arxiv:0705.1466
|
we study the problem of a budget limited buyer who wants to buy a set of items, each from a different seller, to maximize her value. the budget feasible mechanism design problem aims to design a mechanism which incentivizes the sellers to truthfully report their cost, and maximizes the buyer ' s value while guaranteeing that the total payment does not exceed her budget. such budget feasible mechanisms can model a buyer in a crowdsourcing market interested in recruiting a set of workers ( sellers ) to accomplish a task for her. this budget feasible mechanism design problem was introduced by singer in 2010. there have been a number of improvements on the approximation guarantee of such mechanisms since then. we consider the general case where the buyer ' s valuation is a monotone submodular function. we offer two general frameworks for simple mechanisms, and by combining these frameworks, we significantly improve on the best known results for this problem, while also simplifying the analysis. for example, we improve the approximation guarantee for the general monotone submodular case from 7. 91 to 5 ; and for the case of large markets ( where each individual item has negligible value ) from 3 to 2. 58. more generally, given an $ r $ approximation algorithm for the optimization problem ( ignoring incentives ), our mechanism is a $ r + 1 $ approximation mechanism for large markets, an improvement from $ 2r ^ 2 $. we also provide a similar parameterized mechanism without the large market assumption, where we achieve a $ 4r + 1 $ approximation guarantee.
|
arxiv:1703.10681
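To make the underlying optimisation concrete, here is the standard greedy-by-marginal-value-per-cost baseline for a budget-constrained monotone submodular objective, ignoring incentives entirely; the paper's mechanisms add truthful payment rules on top of subroutines like this, which the sketch does not attempt.

```python
def greedy_budgeted(items, costs, value, budget):
    # items: list of item ids; costs: dict id -> cost; value: set-function
    # oracle, assumed monotone submodular. Repeatedly add the affordable item
    # with the best marginal value per unit cost.
    chosen, spent = set(), 0.0
    while True:
        best, best_ratio = None, 0.0
        for i in items:
            if i in chosen or spent + costs[i] > budget:
                continue
            gain = value(chosen | {i}) - value(chosen)
            ratio = gain / costs[i]
            if ratio > best_ratio:
                best, best_ratio = i, ratio
        if best is None:
            return chosen
        chosen.add(best)
        spent += costs[best]
```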
|
the genuine physical reasons explaining the delocalization effect of the hubbard repulsion u are analyzed. first it is shown that always when this effect is observed, u acts on the background of a macroscopic degeneracy present in a multiband type of system. after this step i demonstrate that acting in such conditions, by strongly diminishing the double occupancy, u spreads out the contributions in the ground state wave function, hence strongly increases the one - particle localization length, consequently extends the one - particle behavior producing conditions for a delocalization effect. to be valuable, the reported results are presented in exact terms, being based on the first exact ground states deduced at half filling in two dimensions for a prototype two band system, the generic periodic anderson model at finite value of the interaction.
|
arxiv:0804.3872
|
this paper provides both theoretical and experimental evidence for the existence of an energy / frequency convexity rule, which relates energy consumption and cpu frequency on mobile devices. we monitored a typical smartphone running a specific computing - intensive kernel of multiple nested loops written in c using a high - resolution power gauge. data gathered during a week - long acquisition campaign suggest that energy consumed per input element is strongly correlated with cpu frequency, and, more interestingly, the curve exhibits a clear minimum over a 0. 2 ghz to 1. 6 ghz window. we provide and motivate an analytical model for this behavior, which fits well with the data. our work should be of clear interest to researchers focusing on energy usage and minimization for mobile devices, and provide new insights for optimization opportunities.
|
arxiv:1401.4655
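A toy numerical illustration of why an energy/frequency curve can be convex with an interior minimum. With an assumed power model P(f) = P_static + c f^3 (an illustration, not the paper's fitted model), energy per unit of work is E(f) = P(f)/f = P_static/f + c f^2, which is convex and minimised at f* = (P_static / 2c)^(1/3).

```python
import numpy as np

def energy_per_work(f, p_static=0.3, c=0.2):
    # Assumed toy model: static power plus a cubic dynamic term (watts vs. GHz).
    return p_static / f + c * f**2

f = np.linspace(0.2, 1.6, 200)          # the 0.2-1.6 GHz window from the paper
e = energy_per_work(f)
f_star = f[np.argmin(e)]
print(f"toy optimum near {f_star:.2f} GHz, "
      f"analytic {(0.3 / (2 * 0.2)) ** (1 / 3):.2f} GHz")
```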
|
in our papers 2013 - - 2018 we classified degenerations and picard lattices of kahlerian k3 surfaces with finite symplectic automorphism groups of high order. for the remaining groups of small order : $ d _ 6 $, $ c _ 4 $, $ ( c _ 2 ) ^ 2 $, $ c _ 3 $, $ c _ 2 $ and $ c _ 1 $, the classification was not complete. the cases of $ d _ 6 $ and $ c _ 4 $ were recently treated completely in [ 19 ] and [ 20 ]. here we give the analogous complete classification for the group $ ( c _ 2 ) ^ 2 $ of order 4.
|
arxiv:1804.00991
|
the aim of this paper is to employ variational techniques and critical point theory to prove some conditions for the existence of solutions to nonlinear impulsive dynamic equation with homogeneous dirichlet boundary conditions. also we will be interested in the solutions of the impulsive nonlinear problem with linear derivative dependence satisfying an impulsive condition.
|
arxiv:1302.4666
|
activity and has given simple users and e. g. cardiologists to be able to analyze parameters related to their quality of life. wearable technology are devices that people can wear at all times throughout the day, and also throughout the night. they help measure certain values such as heartbeat and rhythm, quality of sleep, total steps in a day, and may help recognize certain diseases such as heart disease, diabetes, and cancer. they may promote ideas on how to improve one ' s health and stay away from certain impending diseases. these devices give daily feedback on what to improve on and what areas people are doing well in, and this motivates and continues to push the user to keep on with their improved lifestyle. over time, wearable technology has impacted the health and physical activity market an immense amount as, according to pevnick et al 2018, " the consumer - directed wearable technology market is rapidly growing and expected to exceed $ 34b by 2020. " this shows how the wearable technology sector is increasingly becoming more and more approved amongst all people who want to improve their health and quality of life. wearable technology can come in all forms from watches, pads placed on the heart, devices worn around the arms, all the way to devices that can measure any amount of data just through touching the receptors of the device. in many cases, wearable technology is connected to an app that can relay the information right away ready to be analyzed and discussed with a cardiologist. in addition, according to the american journal of preventive medicine they state, " wearables may be a low - cost, feasible, and accessible way for promoting pa. " essentially, this insinuates that wearable technology can be beneficial to everyone and really is not cost prohibited. also, when consistently seeing wearable technology being actually utilized and worn by other people, it promotes the idea of physical activity and pushes more individuals to take part. wearable technology also helps with chronic disease development and monitoring physical activity in terms of context. for example, according to the american journal of preventive medicine, " wearables can be used across different chronic disease trajectory phases ( e. g., pre - versus post - surgery ) and linked to medical record data to obtain granular data on how activity frequency, intensity, and duration changes over the disease course and with different treatments. " wearable technology can be beneficial in tracking and helping analyze data in terms of how one is performing as time goes on, and how they may be performing with different changes
|
https://en.wikipedia.org/wiki/Wearable_technology
|
monolith is a proposed massive ( 34 kt ) magnetized tracking calorimeter at the gran sasso laboratory in italy, optimized for the detection of atmospheric muon neutrinos. the main goal is to establish ( or reject ) the neutrino oscillation hypothesis through an explicit observation of the full first oscillation swing. the delta m ^ 2 sensitivity range for this measurement comfortably covers the complete super - kamiokande allowed region. other measurements include studies of matter effects and the nc / cc and anti - nu / nu ratio, the study of cosmic ray muons in the multi - tev range, and auxiliary measurements from the cern to gran sasso neutrino beam. depending on approval, data taking with part of the detector could start in 2004. the detector and its performance are described, and its potential later use as a neutrino factory detector is addressed.
|
arxiv:hep-ex/0008067
|
assessing the compliance of a white - box turbulence model with known turbulent knowledge is straightforward. it enables users to screen conventional turbulence models and identify apparent inadequacies, thereby allowing for a more focused and fruitful validation and verification. however, comparing a black - box machine - learning model to known empirical scalings is not straightforward. unless one implements and tests the model, it would not be clear if a machine - learning model, trained at finite reynolds numbers preserves the known high reynolds number limit. this is inconvenient, particularly because model implementation involves retraining and re - interfacing. this work attempts to address this issue, allowing fast a priori screening of machine - learning models that are based on feed - forward neural networks ( fnn ). the method leverages the mathematical theorems we present in the paper. these theorems offer estimates of a network ' s limits even when the exact weights and biases are unknown. for demonstration purposes, we screen existing machine - learning wall models and rans models for their compliance with the log layer physics and the viscous layer physics in a priori manner. in addition, the theorems serve as essential guidelines for future machine - learning models.
|
arxiv:2310.09366
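The paper's theorems bound an FNN's reachable behaviour even when exact weights are unknown. As a loose, generic analogue (not the paper's estimates, and requiring the actual weights), interval bound propagation yields guaranteed output ranges for a ReLU network over an input box:

```python
import numpy as np

def interval_bounds(weights, biases, lo, hi):
    # Propagate an input box [lo, hi] through dense ReLU layers, returning
    # guaranteed (if conservative) element-wise bounds on the network output.
    for k, (W, b) in enumerate(zip(weights, biases)):
        center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
        center = W @ center + b
        radius = np.abs(W) @ radius
        lo, hi = center - radius, center + radius
        if k < len(weights) - 1:        # ReLU on all hidden layers
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi
```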
|
we present results for the first positive parity excited state of the nucleon, namely, the roper resonance ( $ n ^ { { { 1 / 2 } } ^ { + } } $ = 1440 mev ) from a variational analysis technique. the analysis is performed for pion masses as low as 224 mev in quenched qcd with the flic fermion action. a wide variety of smeared - smeared correlation functions are used to construct correlation matrices. this is done in order to find a suitable basis of operators for the variational analysis such that eigenstates of the qcd hamiltonian may be isolated. a lower lying roper state is observed that approaches the physical roper state. to the best of our knowledge, the first time this state has been identified at light quark masses using a variational approach.
|
arxiv:0906.5433
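The core numerical step of such a variational (correlation-matrix) analysis is a generalised eigenvalue problem; a minimal sketch follows, with the caveat that the lattice analysis involves far more than this (operator bases, fit windows, error analysis).

```python
import numpy as np
from scipy.linalg import eigh

def effective_energies(C_t, C_t0, t, t0):
    # Solve C(t) v = lambda C(t0) v; for well-behaved correlation matrices the
    # eigenvalues are positive and behave as lambda_n ~ exp(-E_n (t - t0)),
    # so they yield effective energies for the tower of states.
    evals = eigh(C_t, C_t0, eigvals_only=True)
    evals = np.sort(evals)[::-1]
    return -np.log(evals) / (t - t0)
```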
|
this paper conducts a numerical study of a geometrical structure called the $ \ epsilon $ - school for predator - avoidance fish schools, based on our previous mathematical model. our results show that during a predator attack, the number of $ \ epsilon $ - schools increases from one to a certain value. after the attack, the number of $ \ epsilon $ - schools decreases in the first two predator - avoidance patterns, but continues to increase in the third pattern. a constant value for the number of $ \ epsilon $ - schools is observed in the last pattern. these results suggest that when the predator is approaching, each individual in the school focuses more on avoiding the predator than on interacting with its schoolmates. such a trait is in agreement with real - life behavior in the natural ecosystem.
|
arxiv:2303.01706
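The abstract tracks the number of epsilon-schools over time. Assuming (an assumption here, since the paper gives its own definition) that an epsilon-school is a connected component of the graph linking fish closer than epsilon, the count can be computed as follows:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import connected_components

def count_epsilon_schools(positions, eps):
    # positions: (n_fish, 2) array of fish coordinates. Link two fish if their
    # distance is <= eps and count connected components of the proximity graph.
    adjacency = squareform(pdist(positions)) <= eps
    n_schools, labels = connected_components(adjacency, directed=False)
    return n_schools, labels
```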
|
the origin of successive phase transitions observed in the layered perovskite $ \ alpha $ - sr $ _ 2 $ cro $ _ 4 $ is studied by the density - functional - theory - based electronic structure calculation and mean - field analysis of the proposed low - energy effective model. we find that, despite the fact that the cro $ _ 6 $ octahedron is elongated along the $ c $ - axis of the crystal structure, the crystal - field level of nondegenerate $ 3d _ { xy } $ orbitals of the cr ion is lower in energy than that of doubly degenerate $ 3d _ { yz } $ and $ 3d _ { xz } $ orbitals, giving rise to the orbital degrees of freedom in the system with a $ 3d ^ 2 $ electron configuration. we show that the higher ( lower ) temperature phase transition is caused by the ordering of the orbital ( spin ) degrees of freedom.
|
arxiv:1511.06217
|
the objective of this work is to expand upon previous works, considering socially acceptable behaviours within robot navigation and interaction, and allow a robot to closely approach static and dynamic individuals or groups. the space models developed in this dissertation are adaptive, that is, capable of changing over time to accommodate the changing circumstances often existent within a social environment. the space model ' s parameters ' adaptation occurs with the end goal of enabling a close interaction between humans and robots and is thus capable of taking into account not only the arrangement of the groups, but also the basic characteristics of the robot itself. this work also further develops a preexisting approach pose estimation algorithm in order to better guarantee the safety and comfort of the humans involved in the interaction, by taking into account basic human sensibilities. the algorithms are integrated into ros ' s navigation system through the use of the $ costmap2d $ and the $ move \ _ base $ packages. the space model adaptation is tested via comparative evaluation against previous algorithms through the use of datasets. the entire navigation system is then evaluated through both simulations ( static and dynamic ) and real life situations ( static ). these experiments demonstrate that the developed space model and approach pose estimation algorithms are capable of enabling a robot to closely approach individual humans and groups, while maintaining considerations for their comfort and sensibilities.
|
arxiv:2310.09916
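A common building block for adaptive social-space costmaps of this kind is an asymmetric Gaussian personal-space cost around each person, wider in front of them than behind; the widths and amplitude below are illustrative values, not the dissertation's adapted parameters.

```python
import numpy as np

def social_cost(x, y, px, py, heading, amp=254.0,
                sigma_front=1.2, sigma_back=0.6, sigma_side=0.8):
    # Cost at point (x, y) due to a person at (px, py) facing `heading` (radians).
    dx, dy = x - px, y - py
    # Rotate into the person's frame: +x is the facing direction.
    fx = np.cos(heading) * dx + np.sin(heading) * dy
    fy = -np.sin(heading) * dx + np.cos(heading) * dy
    sigma_x = np.where(fx >= 0.0, sigma_front, sigma_back)
    return amp * np.exp(-(fx**2 / (2 * sigma_x**2) + fy**2 / (2 * sigma_side**2)))
```

A cost layer of this form is the sort of thing that would be rasterised into a `costmap2d` layer before `move_base` plans an approach pose.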
|
stochastic homogeneous hyperelastic solids are characterised by strain - energy densities where the parameters are random variables defined by probability density functions. these models allow for the propagation of uncertainties from input data to output quantities of interest. to investigate the effect of probabilistic parameters on predicted mechanical responses, we study radial oscillations of cylindrical and spherical shells of stochastic incompressible isotropic hyperelastic material, formulated as quasi - equilibrated motions where the system is in equilibrium at every time instant. additionally, we study finite shear oscillations of a cuboid, which are not quasi - equilibrated. we find that, for hyperelastic bodies of stochastic neo - hookean or mooney - rivlin material, the amplitude and period of the oscillations follow probability distributions that can be characterised. further, for cylindrical tubes and spherical shells, when an impulse surface traction is applied, there is a parameter interval where the oscillatory and non - oscillatory motions compete, in the sense that both have a chance to occur with a given probability. we refer to the dynamic evolution of these elastic systems, which exhibit inherent uncertainties due to the material properties, as ` likely oscillatory motions '.
|
arxiv:1901.06145
|
diffusive dynamics abound in nature and have been especially studied in physical, biological, and financial systems. these dynamics are characterised by a linear growth of the mean squared displacement ( msd ) with time. often, the conditions that give rise to simple diffusion are violated, and many systems, such as biomolecules inside cells, microswimmers, or particles in turbulent flows, undergo anomalous diffusion, featuring an msd that grows following a power law with an exponent $ \ alpha $. precisely determining this exponent and the generalised diffusion coefficient provides valuable information on the systems under consideration, but it is a very challenging task when only a few short trajectories are available, which is common in non - equilibrium and living systems. estimating the exponent becomes overwhelmingly difficult when the diffusive dynamics switches between different behaviours, characterised by different exponents $ \ alpha $ or diffusion coefficients $ k $. we develop a method based on recurrent neural networks that successfully estimates the anomalous diffusion exponents and generalised diffusion coefficients of individual trajectories that switch between multiple diffusive states. our method returns the $ \ alpha $ and $ k $ as a function of time and identifies the times at which the dynamics switches between different behaviours. we showcase the method ' s capabilities on the dataset of the 2024 anomalous diffusion challenge.
|
arxiv:2503.09422
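As a point of comparison for the RNN approach, the classical estimator fits the time-averaged MSD of a single trajectory to a power law; it is exactly this estimator that becomes unreliable for short trajectories that switch between diffusive states.

```python
import numpy as np

def msd_fit(x, max_lag=20):
    # x: 1-D trajectory. Fit TA-MSD(lag) ~ 2*K*lag**alpha on a log-log scale and
    # return (alpha, K); a crude classical baseline, not the paper's RNN method.
    lags = np.arange(1, max_lag + 1)
    msd = np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])
    alpha, log_amp = np.polyfit(np.log(lags), np.log(msd), 1)
    return alpha, np.exp(log_amp) / 2.0
```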
|
we develop the representation theory for reductive linear differential algebraic groups ( ldags ). in particular, we exhibit an explicit sharp upper bound for orders of derivatives in differential representations of reductive ldags, extending existing results, which were obtained for sl ( 2 ) in the case of just one derivation. as an application of the above bound, we develop an algorithm that tests whether the parameterized differential galois group of a system of linear differential equations is reductive and, if it is, calculates it.
|
arxiv:1304.2693
|
we experimentally study collective decay of an extended disordered ensemble of $ n $ atoms inside a hollow - core fiber. we observe up to $ 300 $ - fold enhanced decay rates, strong optical bursts and a coherent ringing. due to inhomogeneities limiting the synchronization of atoms, the data does not show the typical scaling with $ n $. we show that an effective number of collective emitters can be determined to recover the $ n $ scaling known to homogeneous ensembles over a large parameter range. this provides physical insight into the limits of collective decay and allows for its optimization in extended ensembles as used, e. g., in quantum optics, precision time - keeping or waveguide qed.
|
arxiv:2307.11623
|
we obtain an improved bochner inequality based on the curvature - dimension condition $ { \ rm rcd } ^ * ( k, n ) $ and propose a definition of $ n $ - dimensional ricci tensor on metric measure spaces.
|
arxiv:1412.0441
|
an innovation ecosystem is a multi - stakeholder environment, where different stakeholders interact to solve complex socio - technical challenges. we explored how stakeholders use digital tools, human resources, and their combination to gather information and make decisions in innovation ecosystems. to comprehensively understand stakeholders ' motivations, information needs and practices, we conducted a three - part interview study across five stakeholder groups ( n = 13 ) using an interactive digital dashboard. we found that stakeholders were primarily motivated to participate in innovation ecosystems by the potential social impact of their contributions. we also found that stakeholders used digital tools to seek " high - level " information to scaffold initial decision - making efforts but ultimately relied on contextual information provided by human networks to enact final decisions. therefore, people, not digital tools, appear to be the key source of information in these ecosystems. guided by our findings, we explored how technology might nevertheless enhance stakeholders ' decision - making efforts and enable robust and equitable innovation ecosystems.
|
arxiv:2307.04263
|
context. a recent observational census of kuiper belt objects ( kbos ) has unveiled anomalous orbital structures. this has led to the hypothesis that an additional $ \ sim5 - 10 ~ m _ { \ oplus } $ planet exists. this planet, known as planet 9, occupies an eccentric and inclined orbit at hundreds of astronomical units. however, the kbos under consideration have the largest known semimajor axes at $ a > 250 $ au ; thus they are very difficult to detect. / / aims. in the context of the proposed planet 9, we aim to measure the mean plane of the kuiper belt at $ a > 50 $ au. in a comparison of the expected and observed mean planes, some constraints would be put on the mass and orbit of this undiscovered planet. methods. we adopted and developed the theoretical approach of volk & malhotra ( 2017 ) to the relative angle $ \ delta $ between the expected mean plane of the kuiper belt and the invariable plane determined by the eight known planets. numerical simulations were constructed to validate our theoretical approach. then similar to volk & malhotra ( 2017 ), we derived the angle $ \ delta $ for the real observed kbos with $ 100 < a < 200 $ au, and the measurement uncertainties were also estimated. finally, for comparison, maps of the theoretically expected $ \ delta $ were created for different combinations of possible planet 9 parameters. results. the expected mean plane of the kuiper belt nearly coincides with the said invariable plane interior to $ a = 90 $ au. but these two planes deviate noticeably from each other at $ a > 100 $ au owing to the presence of planet 9 because the relative angle $ \ delta $ could be as large as $ \ sim10 ^ { \ circ } $. using the $ 1 \ sigma $ upper limit of $ \ delta < 5 ^ { \ circ } $ deduced from real kbo samples as a constraint, we present the most probable parameters of planet 9.
|
arxiv:2004.06914
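One simple way to turn a set of KBO orbits into a mean plane and the relative angle delta is to average unit orbit-normal vectors; this is an illustrative estimator only, not necessarily the debiased measurement procedure used in the paper.

```python
import numpy as np

def relative_angle_delta(inc, node, inc_ref, node_ref):
    # inc, node: arrays of inclinations and longitudes of ascending node (radians)
    # in a common frame; (inc_ref, node_ref): orientation of the reference
    # (e.g. invariable) plane in the same frame. Returns delta in degrees.
    def unit_normal(i, om):
        return np.array([np.sin(i) * np.sin(om),
                         -np.sin(i) * np.cos(om),
                         np.cos(i)])
    normals = unit_normal(np.asarray(inc), np.asarray(node))   # shape (3, N)
    mean_n = normals.mean(axis=1)
    mean_n /= np.linalg.norm(mean_n)
    ref_n = unit_normal(inc_ref, node_ref)
    return np.degrees(np.arccos(np.clip(mean_n @ ref_n, -1.0, 1.0)))
```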
|
accretion flows may produce profuse winds when they have positive specific energy. winds deplete matter from the inner region of the disk and makes the inner region thinner, optically. since there are fewer electrons in this region, it becomes easier to comptonize this part by the soft photons which are intercepted from the keplerian disk farther out. we present a self - consistent picture of winds from an accretion disk and show how the spectra of the disk is softened due to the outflowing wind.
|
arxiv:astro-ph/9810412
|
the unique elemental abundance pattern of the carbon - rich stars cs29498 - 043 and cs22949 - 037 is characterized by a large excess of magnesium and silicon in comparison with iron. this excess is investigated in the context of a supernova - induced star formation scenario, and it is concluded that these stars were born from the matter swept up by supernova remnants containing little iron and that such supernovae are similar to the least - luminous sne ever observed, sne 1997d and 1999br. comparison of the observed abundance pattern in iron - group elements of subluminous supernovae with those of other supernovae leads to an intriguing implication for explosion, nucleosynthesis, and mixing in supernovae. the observed invariance of these ratios can not be accounted for by a spherically symmetric supernova model.
|
arxiv:astro-ph/0301236
|
##odynamics, materials science, structural analysis, manufacturing and electricity mechatronics engineering - includes a combination of mechanical engineering, electrical engineering, telecommunications engineering, control engineering and computer engineering mining engineering β deals with discovering, extracting, beneficiating, marketing and utilizing mineral deposits. nuclear engineering β customarily includes nuclear fission, nuclear fusion and related topics such as heat / thermodynamics transport, nuclear fuel or other related technology ( e. g., radioactive waste disposal ) and the problems of nuclear proliferation. may also include radiation protection, particle detectors and medical physics. petroleum engineering β a field of engineering concerned with the activities related to exploration and production of hydrocarbons from the earth ' s subsurface. plastics engineering β a vast field which includes plastic processing, mold designing... process engineering β the understanding and application of the fundamental principles and laws of nature that allow humans to transform raw material and energy into products that are useful to society, at an industrial level. production engineering β a term used in the uk and europe similar to industrial engineering in north america. it includes the engineering of machines, people, processes and management. explores the applications of the theoretical field of mechanics. textile engineering β based on the conversion of three types of fiber into yarn, then fabric, then textiles robotics and automation engineering β relates all engineering fields for implementation in robotics and automation structural engineering β analyze, design, plan and research structural components, systems and loads, in order to achieve design goals including high - risk structures ensuring the safety and comfort of users or occupants in a wide range of specialties. software engineering β systematic application of scientific and technological knowledge, methods and experience to the design, implementation, testing and documentation of software systems engineering β focuses on the analysis, design, development and organization of complex systems = = international variations = = = = = australia = = = in australia, the bachelor of engineering ( be or beng - depending on the institution ) is a four - year undergraduate degree course and a professional qualification. the title of β engineer β is not protected in australia, therefore anyone can claim to be an engineer and practice without the necessary competencies, understanding of standards or in compliance with a code of ethics. the industry has attempted to overcome the lack of title protection through chartership ( cpeng ), national registration ( ner ) and various state registration ( rpeq ) programs which are usually obtained after a few years of professional practice. = = = canada = = = in canada, degrees awarded for undergraduate engineering studies include
|
https://en.wikipedia.org/wiki/Bachelor_of_Engineering
|
we investigate the analytic continuation of wave equations into the complex position plane. for the particular case of electromagnetic waves we provide a physical meaning for such an analytic continuation in terms of a family of closely related inhomogeneous media. for bounded permittivity profiles we find the phenomenon of reflection can be related to branch cuts in the wave that originate from poles of the permittivity at complex positions. demanding that these branch cuts disappear, we derive a large family of inhomogeneous media that are reflectionless for a single angle of incidence. extending this property to all angles of incidence leads us to a generalized form of the poschl teller potentials. we conclude by analyzing our findings within the phase integral ( wkb ) method.
|
arxiv:1508.04461
|
we propose a one - step scheme to implement a multiqubit controlled phase gate of one qubit simultaneously controlling multiple qubits with three - level atoms at distant nodes in coupled cavity arrays. the selective qubit - qubit couplings are achieved by adiabatically eliminating the atomic excited states and photonic states and the required phase shifts between the control qubit and any target qubit can be realized through suitable choices of the parameters of the external fields. moreover, the effective model is robust against decoherence because neither the atoms nor the field modes during the gate operation are excited, leading to a useful step toward scalable quantum computing networks.
|
arxiv:1402.0650
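For reference, the target unitary of a one-control, multi-target controlled-phase gate is diagonal in the computational basis; a small numpy construction is given below, where the bit ordering (control as the most significant bit) and the per-target phases are conventions chosen for the sketch, not the paper's.

```python
import numpy as np

def multi_target_cphase(thetas):
    # Diagonal unitary on 1 control + len(thetas) target qubits: when the control
    # is |1>, target j acquires phase thetas[j] if it is |1>.
    m = len(thetas)
    dim = 2 ** (m + 1)
    diag = np.ones(dim, dtype=complex)
    for state in range(dim):
        control = (state >> m) & 1          # control is the most significant bit
        if control:
            phase = sum(t for j, t in enumerate(thetas) if (state >> j) & 1)
            diag[state] = np.exp(1j * phase)
    return np.diag(diag)
```

For example, `multi_target_cphase([np.pi, np.pi])` is the gate in which each of two targets picks up a pi phase conditioned on the control qubit.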
|
the compressed word problem for a finitely generated monoid m asks whether two given compressed words over the generators of m represent the same element of m. for string compression, straight - line programs, i. e., context - free grammars that generate a single string, are used in this paper. it is shown that the compressed word problem for a free inverse monoid of finite rank at least two is complete for pi ^ p _ 2 ( second universal level of the polynomial time hierarchy ). moreover, it is shown that there exists a fixed finite idempotent presentation ( i. e., a finite set of relations involving idempotents of a free inverse monoid ), for which the corresponding quotient monoid has a pspace - complete compressed word problem. it was shown previously that the ordinary uncompressed word problem for such a quotient can be solved in logspace. finally, a pspace - algorithm that checks whether a given element of a free inverse monoid belongs to a given rational subset is presented. this problem is also shown to be pspace - complete ( even for a fixed finitely generated submonoid instead of a variable rational subset ).
|
arxiv:1106.1000
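A straight-line program represents a possibly exponentially long word by a grammar with one rule per nonterminal; basic quantities such as the length of the represented word are still computable in time linear in the grammar size, as sketched below (the two-field rule format is a convention chosen for the sketch).

```python
def slp_length(rules, start):
    # rules: dict mapping each nonterminal either to a terminal string or to a
    # pair (X, Y) of nonterminals defined earlier (a straight-line program:
    # a context-free grammar deriving exactly one word).
    memo = {}
    def length(sym):
        if sym not in memo:
            rhs = rules[sym]
            memo[sym] = len(rhs) if isinstance(rhs, str) else length(rhs[0]) + length(rhs[1])
        return memo[sym]
    return length(start)

# Example: X10 derives a word of length 2**10 * 2 although the grammar has 11 rules.
rules = {"X0": "ab"}
for k in range(1, 11):
    rules[f"X{k}"] = (f"X{k-1}", f"X{k-1}")
assert slp_length(rules, "X10") == 2 ** 10 * 2
```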
|
certain excess versions of the minkowski and h \ " older inequalities are given. these new results generalize and improve the minkowski and h \ " older inequalities.
|
arxiv:1807.11108
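For orientation, the classical (non-excess) Hölder and Minkowski inequalities that the paper generalises can be spot-checked numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=100), rng.normal(size=100)
p = 3.0
q = p / (p - 1.0)                     # conjugate exponent, 1/p + 1/q = 1

# Hoelder: sum |x*y| <= ||x||_p * ||y||_q
holder_lhs = np.sum(np.abs(x * y))
holder_rhs = np.sum(np.abs(x)**p)**(1/p) * np.sum(np.abs(y)**q)**(1/q)

# Minkowski: ||x + y||_p <= ||x||_p + ||y||_p
mink_lhs = np.sum(np.abs(x + y)**p)**(1/p)
mink_rhs = np.sum(np.abs(x)**p)**(1/p) + np.sum(np.abs(y)**p)**(1/p)

assert holder_lhs <= holder_rhs and mink_lhs <= mink_rhs
```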
|