text | source
---|---
A locally compact group $G$ has the factorization property if the map $$C^*(G) \odot C^*(G) \ni a \otimes b \mapsto \lambda(a)\rho(b) \in \mathcal{B}(L^2(G))$$ is continuous with respect to the minimal C*-norm. This paper seeks to initiate a rigorous study of this property for locally compact groups, a setting which, in contrast to the discrete case, has been relatively untouched. A partial solution is given to the question of when the factorization property passes to continuous embeddings, a question which traces back to Kirchberg's seminal work on the topic and is known to be false in general. It is also shown that every "residually amenably embeddable" group must necessarily have the factorization property, and that an analogue of Kirchberg's characterization of the factorization property for discrete groups with property (T) holds for a more general class of groups.
|
arxiv:1709.09272
|
In this paper we give an algorithm for solving a main case of the conjugacy problem in the braid groups. We also prove that half-twists satisfy a special root property which allows us to reduce the solution of the conjugacy problem for half-twists to the free group. Using this algorithm one is able to check conjugacy of a given braid to any power of one of E. Artin's generators, and to compute its root. Moreover, the braid element which conjugates a given half-twist to a power of one of E. Artin's generators can be recovered. The result is applicable to calculations of braid monodromy of branch curves and to verification of Hurwitz equivalence of braid monodromy factorizations, which are essential in order to determine the braid monodromy type of algebraic surfaces and symplectic 4-manifolds.
|
arxiv:math/0211197
|
In addition to the large surveys and catalogs of massive young stellar objects and outflows, dedicated studies of particular sources, in which high-angular-resolution observations (mainly at near-IR and mm wavelengths) are analyzed in depth, are needed to shed light on the processes involved in the formation of massive stars. The galactic source G079.1272+02.2782 (G79), a MYSO at about 1.4 kpc, is an ideal source for this kind of study. Near-IR integral field spectroscopic observations were carried out using NIFS at Gemini-North. The spectral and angular resolutions allow us to perform a detailed study of the source and its southern jet, resolving structures with sizes between 200 and 300 au. As a complement, millimeter data retrieved from the JCMT and IRAM 30m telescope databases were analyzed to study the molecular gas on a larger spatial scale. The analysis of a jet extending southwards shows corkscrew-like structures in the 2.2 um continuum, strongly suggesting that the jet is precessing. The jet velocity is estimated at 30-43 km/s, and the jet is approaching us along the line of sight. We suggest that the precession may be produced by gravitational tidal effects generated in a probable binary system, and we estimate a jet precession period of about 10^3 yr, indicating a slowly precessing jet, in agreement with the observed helical features. An analysis of H2 lines along the jet allows us to investigate in detail a bow shock produced by this jet. We find that this bow shock is indeed generated by a C-type shock and is observed approaching us, at some inclination angle, along the line of sight. This is confirmed by the analysis of molecular outflows on a larger spatial scale. A brief analysis of several molecular species at millimeter wavelengths indicates a complex chemistry developing in the external layers of the molecular clump in which the MYSO G79 is embedded.
|
arxiv:2208.03200
|
While the Bethe ansatz solution of the Haldane-Shastry model appears to suggest that the spinons represent a free gas of half-fermions, Bernevig, Giuliano, and Laughlin (BGL) (cond-mat/0011069, cond-mat/0011270) have recently concluded that there is an attractive interaction between spinons. We argue that the dressed scattering matrix obtained with the asymptotic Bethe ansatz is to be interpreted as the true and physical scattering matrix of the excitations, and hence that the result by BGL is inconsistent with an earlier result by Essler (cond-mat/9406081). We critically re-examine the analysis of BGL and conclude that there is no interaction between spinons, or between spinons and holons, in the Haldane-Shastry model.
|
arxiv:cond-mat/0409495
|
We establish bounds on a finite separable extension of function fields in terms of the relative class number, thus reducing the problem of classifying extensions with a fixed relative class number to a finite computation. We also solve the relative class number two problem in all cases where the base field has constant field not equal to $\mathbb{F}_2$.
|
arxiv:2412.12467
|
Dirac constraint theory allows one to identify the York canonical basis (diagonalizing the York-Lichnerowicz approach) in ADM tetrad gravity for asymptotically Minkowskian space-times without super-translations. This makes it possible to identify the inertial (gauge) and tidal (physical) degrees of freedom of the gravitational field and to interpret Ashtekar variables in these space-times. The use of radar 4-coordinates centered on a time-like observer connects the 3+1 splittings of space-time with the relativistic metrology used in atomic physics and astronomy. The asymptotic ADM Poincaré group replaces the Poincaré group of particle physics. The general relativistic remnant of the gauge freedom in clock synchronization is described by the inertial gauge variable ${}^3K$, the trace of the extrinsic curvature of the non-Euclidean 3-spaces. The theory can be linearized in a post-Minkowskian way by using the asymptotic Minkowski metric as an asymptotic background at spatial infinity, and the family of non-harmonic 3-orthogonal Schwinger time gauges reproduces the known results on gravitational waves in harmonic gauges. It is shown that the main signatures for the existence of dark matter can be reinterpreted as a relativistic inertial effect induced by ${}^3K$: while in space-time the inertial and gravitational masses coincide (equivalence principle), this is not true in the non-Euclidean 3-spaces (breaking of the Newtonian equivalence principle), where the inertial mass has extra ${}^3K$-dependent terms simulating dark matter. Therefore a post-Minkowskian extension of the existing post-Newtonian celestial reference frame is needed.
|
arxiv:1108.3224
|
The observational study of star formation relations in galaxies is central to unraveling the physical processes at work on local and global scales. We wish to expand the sample of extreme starbursts, represented by local LIRGs and ULIRGs, with high-quality observations in the 1-0 line of HCN. We study whether a universal law can account for the star formation relations observed for the dense molecular gas in normal star-forming galaxies and extreme starbursts. We have used the IRAM 30m telescope to observe a sample of 19 LIRGs in the 1-0 lines of CO, HCN and HCO+. The analysis of the new data proves that the star formation efficiency of the dense molecular gas (SFE-dense) in extreme starbursts is a factor of 3-4 higher compared to normal galaxies. We find a duality in Kennicutt-Schmidt (KS) laws that is reinforced if we account for the different conversion factor for HCN (alpha-HCN) in extreme starbursts and for the unobscured star formation rate in normal galaxies. This result extends to the higher molecular densities probed by HCN lines the more extreme bimodal behavior of star formation laws derived from CO molecular lines by two recent surveys. We have confronted our observations with the predictions of theoretical models in which the efficiency of star formation is determined by the ratio of a constant star formation rate per free-fall time (SFR-ff) to the local free-fall time. We find that it is possible to fit the observed differences in SFE-dense between normal galaxies and LIRGs/ULIRGs using a common constant SFR-ff and a set of physically acceptable HCN densities, but only if SFR-ff ~ 0.005-0.01 and/or if alpha-HCN is a factor of a few lower than our favored values. Star formation recipes that explicitly depend on the galaxy's global dynamical time scales do not significantly improve the fit to the new HCN data presented in this work.
|
arxiv:1111.6773
|
We prove that for any finite index subgroup $\Gamma$ of $SL_n(\mathbb{Z})$, there exist $k = k(n) \in \mathbb{N}$, $\epsilon = \epsilon(\Gamma) > 0$, and an infinite family of finite index subgroups of $\Gamma$ with a Kazhdan constant greater than $\epsilon$ with respect to a generating set of order $k$. On the other hand, we prove that for any finite index subgroup $\Gamma$ of $SL_n(\mathbb{Z})$, and for any $\epsilon > 0$ and $k \in \mathbb{N}$, there exists a finite index subgroup $\Gamma' \leq \Gamma$ such that the Kazhdan constant of any finite index subgroup of $\Gamma'$ is less than $\epsilon$ with respect to any generating set of order $k$. In addition, we prove that the Kazhdan constant of the principal congruence subgroup $\Gamma_n(m)$, with respect to a generating set consisting of elementary matrices (and their conjugates), is greater than $\frac{c}{m}$, where $c > 0$ depends only on $n$. For a fixed $n$, this bound is asymptotically best possible.
|
arxiv:1007.4463
|
Backflow, or retropropagation, is a counterintuitive phenomenon whereby, for a forward-propagating wave, the energy locally propagates backward. In the context of backflow, the physically most interesting waves are the so-called unidirectional waves, which contain only forward-propagating plane-wave constituents. Yet very few such waves possessing closed-form analytic expressions suitable for evaluation of the Poynting vector are known. In this study, we examine energy backflow in a novel (2+1)-dimensional unidirectional monochromatic wave and in a (2+1)D spatio-temporal wave packet, for both of which we succeeded in finding analytic expressions. We also present a detailed study of the backflow in the "needle" pulse. This is an interesting model object because the well-known superluminal non-diffracting space-time wave packets can be derived from its factored wave function. Finally, we study the backflow in a unidirectional version of the so-called focus wave mode: a pulse propagating luminally and without spread, which is the first and most studied representative of the (3+1)D non-diffracting space-time wave packets (also referred to as spatiotemporally localized waves).
|
arxiv:2405.02284
|
We exploit a gauge invariant approach for the analysis of the equations governing the dynamics of active scalar fluctuations coupled to fluctuations of the metric along holographic RG flows. In the present approach, a second order ODE for the active scalar emerges rather simply and makes it possible to use the Green's function method to deal with (quadratic) interaction terms. We thus fill a gap for active scalar operators, whose three-point functions have been inaccessible so far, and derive a general, explicitly Bose symmetric formula for them. As an application we compute the relevant three-point function along the GPPZ flow and extract the irreducible trilinear couplings of the corresponding superglueballs by amputating the external legs on-shell.
|
arxiv:hep-th/0310129
|
Carbon-12 and carbon-13 abundances are measured in eleven red-giant members of the globular cluster Omega Centauri via observations of first-overtone CO bands near 2.3 microns. The mean value for the entire sample is <12C/13C> = 4.3 +/- 0.4, with nine giants equal, within the errors, to the equilibrium ratio of 12C/13C = 3.5. No correlation is found within Omega Cen between 12C/13C and the abundance of iron. The relations between 12C/13C and other abundance ratios, such as [O/Fe], [Na/Fe], or [Al/Fe], are also discussed.
|
arxiv:astro-ph/0207434
|
We study the integrability of the conformal geodesic flow (also known as the conformal circle flow) on the $SO(3)$-invariant gravitational instantons. On a hyper-Kähler four-manifold the conformal geodesic equations reduce to the geodesic equations of a charged particle moving in a constant self-dual magnetic field. In the case of the anti-self-dual Taub-NUT instanton we integrate these equations completely by separating the Hamilton-Jacobi equations and finding a commuting set of first integrals. This gives the first example of an integrable conformal geodesic flow on a four-manifold which is not a symmetric space. In the case of the Eguchi-Hanson metric we find all conformal geodesics which lie on the three-dimensional orbits of the isometry group. In the non-hyper-Kähler case of the Fubini-Study metric on $\mathbb{CP}^2$ we use the first integrals arising from the conformal Killing-Yano tensors to recover the known complete integrability of conformal geodesics.
|
arxiv:1906.08375
|
Frequency analysis of the RF emission of an oscillating Josephson supercurrent is a powerful passive way of probing properties of topological Josephson junctions. In particular, measurement of the Josephson emission enables detection of the expected topological gapless Andreev bound states, which give rise to emission at half the Josephson frequency $f_J$ rather than the conventional emission at $f_J$. Here we report direct measurements of RF emission spectra of Josephson junctions made of HgTe-based gate-tunable topological weak links. The emission spectra exhibit a clear signal at half the Josephson frequency, $f_J/2$. The linewidths of the emission lines indicate a coherence time of 0.3-4 ns for the $f_J/2$ line, much shorter than for the $f_J$ line (3-4 ns). These observations strongly point towards the presence of topological gapless Andreev bound states, and pave the way for a future HgTe-based platform for topological quantum computation.
|
arxiv:1603.09611
|
Knowledge graph embedding (KGE), which aims to embed entities and relations into low-dimensional vectors, has attracted wide attention recently. However, existing research is mainly based on black-box neural models, which makes it difficult to interpret the learned representations. In this paper, we introduce DisenE, an end-to-end framework for learning disentangled knowledge graph embeddings. Specifically, we introduce an attention-based mechanism that enables the model to explicitly focus on the relevant components of an entity embedding according to a given relation. Furthermore, we introduce two novel regularizers to encourage each component of the entity representation to independently reflect an isolated semantic aspect. Experimental results demonstrate that DisenE offers a new perspective on the interpretability of KGE and proves to be an effective way to improve performance on link prediction tasks.
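To make the mechanism concrete: a relation-conditioned attention over the components of a disentangled entity embedding might look like the sketch below. This is a hedged illustration only; the function names, the dot-product scoring, and the component layout are our own assumptions, not DisenE's actual architecture.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def relation_attention(entity_components, relation):
    """Weight each component of a disentangled entity embedding by its
    relevance to the given relation (dot-product scores, then softmax),
    and return the attended embedding plus the attention weights."""
    scores = [dot(c, relation) for c in entity_components]
    weights = softmax(scores)
    dim = len(entity_components[0])
    attended = [sum(w * c[i] for w, c in zip(weights, entity_components))
                for i in range(dim)]
    return attended, weights
```

For example, with components [[1, 0], [0, 1], [0.5, 0.5]] and relation vector [2, 0], the first component (the one most aligned with the relation) receives the largest attention weight.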
|
arxiv:2010.14730
|
The majority of astronomers and physicists accept the reality of dark energy and also believe that it can only be studied indirectly, through observation of the motions of stars and galaxies. In this paper I open the experimental question of whether it is possible to directly detect dark energy through the presence of dark energy density. Two thirds of this paper outlines the major aspects of dark energy density as now comprehended by the astronomical and physics community. The final third summarizes various proposals for direct detection of dark energy density or its possible effects. At this time I do not have a fruitful answer to the question: can the existence of dark energy be directly detected?
|
arxiv:0809.5083
|
The effect of pressure on the structural properties of lanthanum sesquicarbide La2C3 ($T_c$ = 13 K) has been investigated at room temperature by angle-dispersive powder x-ray diffraction in a diamond anvil cell. The compound remains in the cubic Pu2C3-type structure at pressures up to at least 30 GPa. The corresponding equation of state parameters are reported and discussed in terms of the phase stability of La2C3. Pressure-volume data for the impurity phase LaC2 are also reported for pressures up to 13 GPa.
|
arxiv:cond-mat/0503597
|
In this paper, we present an Agda formalization of a normalizer for simply-typed lambda terms. The normalizer consists of two coinductively defined functions in the delay monad: one is a standard evaluator of lambda terms to closures, the other a type-directed reifier from values to eta-long beta-normal forms. Their composition, normalization-by-evaluation, is shown to be a total function a posteriori, using a standard logical-relations argument. The successful formalization serves as a proof of concept for coinductive programming and reasoning using sized types and copatterns, a new and presently experimental feature of Agda.
|
arxiv:1406.2059
|
Generalizations and variations of the fundamental lemma by Willems et al. are an active topic of recent research. In this note, we explore and formalize the links between kernel regression and some known nonlinear extensions of the fundamental lemma. Applying a transformation to the usual linear equation in Hankel matrices, we arrive at an alternative implicit kernel representation of the system trajectories while keeping the requirements on persistency of excitation. We show that this representation is equivalent to the solution of a specific kernel regression problem. We explore the possible structures of the underlying kernel as well as the system classes to which they correspond.
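For readers less familiar with the setup, the "usual linear equation in Hankel matrices" organizes a measured trajectory into a depth-L Hankel matrix whose columns are sliding windows of the data. A minimal sketch for a scalar signal (the function name and list-based representation are ours, not from this note):

```python
def hankel(w, depth):
    """Depth-`depth` Hankel matrix of a scalar trajectory w:
    column j holds the window w[j], ..., w[j + depth - 1]."""
    cols = len(w) - depth + 1
    return [[w[i + j] for j in range(cols)] for i in range(depth)]
```

In behavioral systems theory, a scalar signal w is persistently exciting of order L when its depth-L Hankel matrix has full row rank; the fundamental lemma then characterizes all length-L trajectories of the system as linear combinations of the columns of such a matrix built from one sufficiently exciting experiment.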
|
arxiv:2403.05368
|
creativity". In a 2018 public service announcement, the FBI warned that widespread collection of student information by educational technologies, including web browsing history, academic progress, medical information, and biometrics, created the potential for privacy and safety threats if such data was compromised or exploited. The transition from in-person learning to distance education in higher education due to the COVID-19 pandemic has led to enhanced extraction of student data enabled by complex data infrastructures. These infrastructures collect information such as learning management system logins, library metrics, impact measurements, teacher evaluation frameworks, assessment systems, learning analytic traces, longitudinal graduate outcomes, attendance records, social media activity, and so on. The copious amounts of information collected are quantified for the marketization of higher education, employing this data as a means to demonstrate and compare student performance across institutions in order to attract prospective students, mirroring the capitalistic notion of ensuring efficient market functioning and constant improvement through measurement. This desire for data has fueled the exploitation of higher education by platform companies and data service providers that institutions outsource for their services. The monetization of student data in order to integrate corporate models of marketization pushes higher education, widely regarded as a public good, further into a privatized commercial sector.

== Teacher training ==

Since technology is not the end goal of education, but rather a means by which it can be accomplished, educators must have a good grasp of the technology and its advantages and disadvantages. Teacher training aims for the effective integration of classroom technology. The evolving nature of technology may unsettle teachers, who may experience themselves as perpetual novices. Finding quality materials to support classroom objectives is often difficult. Random professional development days are inadequate. According to Jenkins, "rather than dealing with each technology in isolation, we would do better to take an ecological approach, thinking about the interrelationship among different communication technologies, the cultural communities that grow up around them, and the activities they support." Jenkins also suggested that the traditional school curriculum guided teachers to train students to be autonomous problem solvers. However, today's workers are increasingly asked to work in teams, drawing on different sets of expertise, and collaborating to solve problems. Learning styles and the methods of collecting information have evolved, and "students often feel locked out of the worlds described in their textbooks through the depersonalized and abstract prose used to describe them". These twenty-first-century skills can be attained through the incorporation of and engagement with technology. Changes
|
https://en.wikipedia.org/wiki/Educational_technology
|
and several economic historians have described Hesiod as the "first economist". However, the word oikos, the Greek word from which the word economy derives, was used for issues regarding how to manage a household (which was understood to be the landowner, his family, and his slaves) rather than to refer to some normative societal system of distribution of resources, which is a more recent phenomenon. Xenophon, the author of the Oeconomicus, is credited by philologists as the source of the word economy. Joseph Schumpeter described 16th and 17th century scholastic writers, including Tomás de Mercado, Luis de Molina, and Juan de Lugo, as "coming nearer than any other group to being the 'founders' of scientific economics" as to monetary, interest, and value theory within a natural-law perspective. Two groups, later called "mercantilists" and "physiocrats", more directly influenced the subsequent development of the subject. Both groups were associated with the rise of economic nationalism and modern capitalism in Europe. Mercantilism was an economic doctrine that flourished from the 16th to the 18th century in a prolific pamphlet literature, whether of merchants or statesmen. It held that a nation's wealth depended on its accumulation of gold and silver. Nations without access to mines could obtain gold and silver from trade only by selling goods abroad and restricting imports other than of gold and silver. The doctrine called for importing inexpensive raw materials to be used in manufacturing goods, which could be exported, and for state regulation to impose protective tariffs on foreign manufactured goods and prohibit manufacturing in the colonies. Physiocrats, a group of 18th-century French thinkers and writers, developed the idea of the economy as a circular flow of income and output. Physiocrats believed that only agricultural production generated a clear surplus over cost, so that agriculture was the basis of all wealth. Thus, they opposed the mercantilist policy of promoting manufacturing and trade at the expense of agriculture, including import tariffs. Physiocrats advocated replacing administratively costly tax collections with a single tax on the income of land owners. In reaction against copious mercantilist trade regulations, the physiocrats advocated a policy of laissez-faire, which called for minimal government intervention in the economy. Adam Smith (1723-1790) was an early economic theorist. Smith was harshly critical of the mercantilists but described the physiocratic system "with all its imperfections" as
|
https://en.wikipedia.org/wiki/Economics
|
We prove a Carbery-Wright style anti-concentration inequality for the unitary Haar measure, by showing that the probability of a polynomial in the entries of a random unitary falling into an $\varepsilon$ range is at most a polynomial in $\varepsilon$. Using it, we show that the scrambling speed of a random quantum circuit is lower bounded: namely, every input qubit has an influence that is at least exponentially small in depth on any output qubit touched by its lightcone. We give three applications of this new scrambling speed lower bound that apply to random quantum circuits with Haar random gates:
$\bullet$ an optimal $\Omega(\log \varepsilon^{-1})$ depth lower bound for $\varepsilon$-approximate unitary designs;
$\bullet$ a polynomial-time quantum algorithm that computes the depth of a bounded-depth circuit, given oracle access to the circuit;
$\bullet$ a polynomial-time algorithm that learns log-depth circuits up to polynomially small diamond distance, given oracle access to the circuit.
The first depth lower bound works against any architecture. The latter two algorithms apply to architectures defined over any geometric dimension, and can be generalized to a wide class of architectures with good lightcone properties.
|
arxiv:2407.19561
|
We establish extended thermodynamics (ET) of rarefied polyatomic gases with six independent fields, i.e., the mass density, the velocity, the temperature and the dynamic pressure, without adopting the near-equilibrium approximation. The closure is accomplished by the maximum entropy principle (MEP), adopting a distribution function that takes into account the internal degrees of freedom of a molecule. The distribution function is not necessarily near equilibrium. The result is in perfect agreement with the phenomenological ET theory. To my knowledge, this is the first example of molecular extended thermodynamics with a non-linear closure. The integrability condition on the moments requires that the dynamic pressure be bounded from below and from above. In this domain the system is symmetric hyperbolic. Finally we verify the K-condition for this model and show the existence of global smooth solutions.
|
arxiv:1504.05857
|
We prove that the topological conjugacy relations both for minimal systems and for pointed minimal systems are not Borel-reducible to any Borel $S_\infty$-action.
|
arxiv:2401.11310
|
The paper investigates the relative expressiveness of two logic-based languages for reasoning over streams, namely LARS programs (the language of the logic-based framework for analytic reasoning over streams called LARS) and LDSR (the language of the recent extension of the I-DLV system for stream reasoning called I-DLV-sr). Although these two languages build over Datalog, they differ both in syntax and semantics. To reconcile their expressive capabilities for stream reasoning, we define a comparison framework that allows us to show that, without any restrictions, the two languages are incomparable, and to identify fragments of each language that can be expressed via the other one.
|
arxiv:2208.12726
|
must be made to re-live, to some extent, the creative process. In other words, he must be induced, with proper aid and guidance, to make some of the fundamental discoveries of science by himself, to experience in his own mind some of those flashes of insight which have lightened its path.... The traditional method of confronting the student not with the problem but with the finished solution means depriving him of all excitement, [shutting] off the creative impulse, [reducing] the adventure of mankind to a dusty heap of theorems. Specific hands-on illustrations of this approach are available.

== Research ==

The practice of science education has been increasingly informed by research into science teaching and learning. Research in science education relies on a wide variety of methodologies, borrowed from many branches of science and engineering such as computer science, cognitive science, cognitive psychology and anthropology. Science education research aims to define or characterize what constitutes learning in science and how it is brought about. John D. Bransford, et al., summarized massive research into student thinking as having three key findings:
Preconceptions: Prior ideas about how things work are remarkably tenacious, and an educator must explicitly address a student's specific misconceptions if the student is to reconfigure his misconception in favour of another explanation. Therefore, it is essential that educators know how to learn about student preconceptions and make this a regular part of their planning.
Knowledge organization: In order to become truly literate in an area of science, students must "(a) have a deep foundation of factual knowledge, (b) understand facts and ideas in the context of a conceptual framework, and (c) organize knowledge in ways that facilitate retrieval and application."
Metacognition: Students will benefit from thinking about their thinking and their learning. They must be taught ways of evaluating their knowledge and what they do not know, evaluating their methods of thinking, and evaluating their conclusions.
Some educators and others have practiced and advocated for discussions of pseudoscience as a way to understand what it is to think scientifically and to address the problems introduced by pseudoscience. Educational technologies are being refined to meet the specific needs of science teachers. One research study examining how cellphones are being used in post-secondary science teaching settings showed that mobile technologies can increase student engagement and motivation in the science classroom. According to a bibliography on constructivist-oriented research on teaching and learning science in 2005, about 64 percent of the studies documented are carried out
|
https://en.wikipedia.org/wiki/Science_education
|
As the miniaturization of electronic devices, which are sensitive to temperature, grows apace, sensing of temperature with ever smaller probes is more important than ever. Genuinely quantum mechanical schemes of thermometry are thus expected to be crucial to future technological progress. We propose a new method to measure the temperature of a bath using the weak measurement scheme with a finite dimensional probe. The precision offered by the present scheme not only shows qualitative features similar to those of the usual quantum Fisher information based thermometric protocols, but also allows for flexibility in setting the optimal thermometric window through a judicious choice of post-selection measurements.
|
arxiv:1901.07415
|
Cosmic rays (CRs) propagate from galactic scales down to the smaller scales at which stars form. CRs are close to energy equipartition with the other components of the interstellar medium and can provide support against gravity if pressure gradients develop. We study CR propagation within a turbulent and magnetised bi-stable interstellar gas and identify the conditions necessary for CR trapping. We present a numerical study using 3D simulations of the evolution of a mixture of interstellar gas and CRs, in which turbulence is driven by stochastic forcing within a box of 40 pc. We explore a large parameter space (CR diffusion coefficient, magnetisation, driving scale and amplitude of the turbulence forcing, initial CR energy). We identify a clear transition in the interstellar dynamics for CR diffusion coefficients below a critical value, which depends on the characteristic length scale $L$ as $D_{\rm crit} \simeq 3.1 \times 10^{23}~{\rm cm^2/s}~(L/{\rm 1~pc})^{q+1}$, where the exponent $q$ relates the turbulent velocity dispersion to the length scale as $v \simeq L^q$. In our simulations this transition occurs around $D_{\rm crit} \simeq 10^{24-25}$ cm^2/s. The transition is recovered in all cases of our parameter study and is in very good agreement with our simple analytical estimate. In the trapped-CR regime, the induced CR pressure gradients can modify the gas flow and provide support against the development of the thermal instability. We discuss possible mechanisms that can significantly reduce the CR diffusion coefficients within the interstellar medium. CR pressure gradients can develop and modify the evolution of thermally bi-stable gas for diffusion coefficients $D \leq 10^{25}$ cm^2/s or in regions where the CR pressure exceeds the thermal one by a factor > 10. This study provides the basis for further work including more realistic CR diffusion coefficients, as well as local CR sources.
|
arxiv:1811.11509
|
the atomic masses of the isotopes $ ^ { 206, 207 } $ ra have been measured via decay - correlated mass spectroscopy using a multi - reflection time - of - flight mass spectrograph equipped with an $ \ alpha $ - tof detector. the ra isotopes were produced as fusion - evaporation products in the $ ^ { 51 } $ v + $ ^ { 159 } $ tb reaction system and delivered by the gas - filled recoil ion separator garis - ii at riken. the $ \ alpha $ - tof detector provides for high - accuracy mass measurements by correlating time - of - flight signals with subsequent $ \ alpha $ - decay events. the masses of $ ^ { 206 } $ ra and $ ^ { 207g, m } $ ra were directly measured using a multi - reflection time - of - flight mass spectrograph equipped with an $ \ alpha $ - tof detector. a mass excess of me = 3538 ( 15 ) kev / c $ ^ 2 $ and an excitation energy of e $ _ { \ rm ex } $ = 552 ( 42 ) kev were determined. the $ \ alpha $ - decay branching ratio of $ ^ { 207m } $ ra, b $ \ alpha $ = 0. 26 ( 20 ), was directly determined from decay - correlated time - of - flight signals, and the reduced alpha width of $ ^ { 207m } $ ra was calculated to be $ \ delta ^ 2 = 50 ^ { + 62 } _ { - 41 } $ kev from the branching ratio. the spin - parity of $ ^ { 207m } $ ra was confirmed to be $ j ^ \ pi $ = 13 / 2 $ ^ - $ from the decay - correlated mass measurement results.
|
arxiv:2108.06245
|
topological materials are derived from the interplay between symmetry and topology. advances in topological band theories have led to the prediction that the antiperovskite oxide sr $ _ 3 $ sno is a topological crystalline insulator, a new electronic phase of matter where the conductivity in its ( 001 ) crystallographic planes is protected by crystallographic point group symmetries. realization of this material, however, is challenging. guided by thermodynamic calculations we design and implement a deposition approach to achieve the adsorption - controlled growth of epitaxial sr $ _ 3 $ sno single - crystal films by molecular - beam epitaxy ( mbe ). in - situ transport and angle - resolved photoemission spectroscopy measurements reveal the metallic and non - trivial topological nature of the as - grown samples. compared with conventional mbe, the synthesis route used results in superior sample quality and is readily adapted to other topological systems with antiperovskite structures. the successful realization of thin films of topological crystalline insulators opens opportunities to manipulate topological states by tuning symmetries via epitaxial strain and heterostructuring.
|
arxiv:1912.13431
|
in this paper, we study quantum query complexity of the following rather natural tripartite generalisations ( in the spirit of the 3 - sum problem ) of the hidden shift and the set equality problems, which we call the 3 - shift - sum and the 3 - matching - sum problems. the 3 - shift - sum problem is as follows : given a table of $ 3 \ times n $ elements, is it possible to circularly shift its rows so that the sum of the elements in each column becomes zero? it is promised that, if this is not the case, then no 3 elements in the table sum up to zero. the 3 - matching - sum problem is defined similarly, but it is allowed to arbitrarily permute elements within each row. for these problems, we prove lower bounds of $ \ omega ( n ^ { 1 / 3 } ) $ and $ \ omega ( \ sqrt n ) $, respectively. the second lower bound is tight. the lower bounds are proven by a novel application of the dual learning graph framework and by using representation - theoretic tools.
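to make the problem statement concrete, here is a naive classical checker for the 3 - shift - sum problem ( a sketch for illustration only : the paper is about quantum query lower bounds, and the function name and the o ( n ^ 3 ) brute - force strategy below are our own, not the paper's ) :

```python
from itertools import product

def three_shift_sum(table):
    """Classical brute force for 3-shift-sum: given a 3 x n table,
    decide whether the rows can be circularly shifted so that every
    column sums to zero.  Fixing row 0, it tries all n^2 shift pairs
    for rows 1 and 2 and checks every column, so the cost is O(n^3)."""
    r0, r1, r2 = table
    n = len(r0)
    for s1, s2 in product(range(n), repeat=2):
        if all(r0[j] + r1[(j + s1) % n] + r2[(j + s2) % n] == 0
               for j in range(n)):
            return True
    return False
```

under the promise ( in the negative case, no 3 elements of the table sum to zero ), a quantum algorithm may hope to do much better than this exhaustive search ; the paper's $ \ omega ( n ^ { 1 / 3 } ) $ bound limits how much.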
|
arxiv:1712.10194
|
spatiotemporal patterns such as traveling waves are frequently observed in recordings of neural activity. the mechanisms underlying the generation of such patterns are largely unknown. previous studies have investigated the existence and uniqueness of different types of waves or bumps of activity using neural - field models, phenomenological coarse - grained descriptions of neural - network dynamics. but it remains unclear how these insights can be transferred to more biologically realistic networks of spiking neurons, where individual neurons fire irregularly. here, we employ mean - field theory to reduce a microscopic model of leaky integrate - and - fire ( lif ) neurons with distance - dependent connectivity to an effective neural - field model. in contrast to existing phenomenological descriptions, the dynamics in this neural - field model depends on the mean and the variance in the synaptic input, both determining the amplitude and the temporal structure of the resulting effective coupling kernel. for the neural - field model we employ linear stability analysis to derive conditions for the existence of spatial and temporal oscillations and wave trains, that is, temporally and spatially periodic traveling waves. we first prove that wave trains cannot occur in a single homogeneous population of neurons, irrespective of the form of distance dependence of the connection probability. compatible with the architecture of cortical neural networks, wave trains emerge in two - population networks of excitatory and inhibitory neurons as a combination of delay - induced temporal oscillations and spatial oscillations due to distance - dependent connectivity profiles. finally, we demonstrate quantitative agreement between predictions of the analytically tractable neural - field model and numerical simulations of both networks of nonlinear rate - based units and networks of lif neurons.
|
arxiv:1801.06046
|
we introduce two - sorted theories in the style of cook and nguyen for the complexity classes parityl and det, whose complete problems include determinants over gf ( 2 ) and z, respectively. the definable functions in these theories are the functions in the corresponding complexity classes ; thus each theory formalizes reasoning using concepts from its corresponding complexity class.
|
arxiv:1001.1960
|
data assimilation schemes are confronted with the presence of model errors arising from the imperfect description of atmospheric dynamics. these errors are usually modeled on the basis of simple assumptions such as bias, white noise, first order markov process. in the present work, a formulation of the sequential extended kalman filter is proposed, based on recent findings on the universal deterministic behavior of model errors in deep contrast with previous approaches ( nicolis, 2004 ). this new scheme is applied in the context of a spatially distributed system proposed by lorenz ( 1996 ). it is found that ( i ) for short times, the estimation error is accurately approximated by an evolution law in which the variance of the model error ( assumed to be a deterministic process ) evolves according to a quadratic law, in agreement with the theory. moreover, the correlation with the initial condition error appears to play a secondary role in the short time dynamics of the estimation error covariance. ( ii ) the deterministic description of the model error evolution, incorporated into the classical extended kalman filter equations, reveals that substantial improvements of the filter accuracy can be gained as compared with the classical white noise assumption. the universal, short time, quadratic law for the evolution of the model error covariance matrix seems very promising for modeling estimation error dynamics in sequential data assimilation.
|
arxiv:0802.4217
|
there have been many attempts to construct de sitter space - times in string theory. while arguably there have been some successes, this has proven challenging, leading to the de sitter swampland conjecture : quantum theories of gravity do not admit stable or metastable de sitter space. here we explain that, within controlled approximations, one lacks the tools to construct de sitter space in string theory. such approximations would require the existence of a set of ( arbitrarily ) small parameters, subject to severe constraints. but beyond this one also needs an understanding of big - bang and big - crunch singularities that is not currently accessible to standard approximations in string theory. the existence or non - existence of metastable de sitter space in string theory remains a matter of conjecture.
|
arxiv:2008.12399
|
in this paper we construct a candidate for a spectral triple on a quotient space of gauge connections modulo gauge transformations and show that it is related to a kasparov type bi - module over two canonical algebras : the hd - algebra, which is a non - commutative c * - algebra generated by parallel transports along flows of vector fields, and an exterior algebra on a space of gauge transformations. the latter algebra is related to the ghost sector in a brst quantisation scheme. previously we have shown that key elements of bosonic and fermionic quantum field theory on a curved background emerge from a spectral triple of this type. in this paper we show that a dynamical metric on the underlying manifold also emerges from the construction. we first rigorously construct a dirac type operator on the quotient space of gauge connections modulo gauge transformations, and discuss the commutator between this dirac type operator and the hd - algebra. to do this we first construct a gauge - covariant metric on the configuration space and use it to construct the triple. the key step in this construction is to require the volume of the quotient space to be finite, which amounts to an ultra - violet regularisation. since the metric on the configuration space is dynamical with respect to the time - evolution generated by the dirac type operator in the triple, it is possible to interpret the regularisation as a physical feature ( as opposed to static regularisations, which are always computational artefacts ). finally, we construct a bott - dirac operator that connects our construction with quantum yang - mills theory.
|
arxiv:2309.06374
|
the supernova impostor sn 2009ip has re - brightened several times since its initial discovery in august 2009. during its last outburst in late september 2012 it reached a peak brightness of m $ _ v $ $ \ sim $ 13. 5 ( m $ _ v $ brighter than - 18 ) causing some to speculate that it had undergone a terminal core - collapse supernova. relatively high - cadence multi - wavelength photometry of the post - peak decline revealed bumps in brightness infrequently observed in other type iin supernovae. these bumps occurred synchronously in all uv and optical bands with amplitudes of 0. 1 - - 0. 4 mag at intervals of 10 - - 30 days. episodic continuum brightening and dimming in the uv and optical with these characteristics is not easily explained within the context of models that have been proposed for the late september 2012 outburst of sn 2009ip. we also present evidence that the post - peak fluctuations in brightness occur at regular intervals and raise more questions about their origin.
|
arxiv:1308.3682
|
subsequence clustering of multivariate time series is a useful tool for discovering repeated patterns in temporal data. once these patterns have been discovered, seemingly complicated datasets can be interpreted as a temporal sequence of only a small number of states, or clusters. for example, raw sensor data from a fitness - tracking application can be expressed as a timeline of a select few actions ( i. e., walking, sitting, running ). however, discovering these patterns is challenging because it requires simultaneous segmentation and clustering of the time series. furthermore, interpreting the resulting clusters is difficult, especially when the data is high - dimensional. here we propose a new method of model - based clustering, which we call toeplitz inverse covariance - based clustering ( ticc ). each cluster in the ticc method is defined by a correlation network, or markov random field ( mrf ), characterizing the interdependencies between different observations in a typical subsequence of that cluster. based on this graphical representation, ticc simultaneously segments and clusters the time series data. we solve the ticc problem through alternating minimization, using a variation of the expectation maximization ( em ) algorithm. we derive closed - form solutions to efficiently solve the two resulting subproblems in a scalable way, through dynamic programming and the alternating direction method of multipliers ( admm ), respectively. we validate our approach by comparing ticc to several state - of - the - art baselines in a series of synthetic experiments, and we then demonstrate on an automobile sensor dataset how ticc can be used to learn interpretable clusters in real - world scenarios.
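as a rough illustration of the temporally consistent assignment step described above, the following sketch assigns each subsequence to a cluster by dynamic programming, trading off goodness of fit against a cluster - switching penalty ( a hypothetical simplification of ticc : the real method also fits toeplitz - structured mrf covariances via admm, and the function and variable names here are our own ) :

```python
import numpy as np

def assign_clusters(neg_log_lik, switch_penalty):
    """Viterbi-style assignment: neg_log_lik[t, k] is the cost of
    putting subsequence t in cluster k; switching clusters between
    consecutive steps adds switch_penalty, which encourages the
    temporally coherent segmentations a TICC-like method aims for."""
    T, K = neg_log_lik.shape
    cost = neg_log_lik[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # trans[i, j] = accumulated cost of cluster i at t-1, j at t
        trans = cost[:, None] + switch_penalty * (1 - np.eye(K))
        back[t] = trans.argmin(axis=0)
        cost = trans.min(axis=0) + neg_log_lik[t]
    path = [int(cost.argmin())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

with a small penalty the path follows the per - step likelihoods ; with a large penalty it stays in one cluster, which is the segmentation - versus - fit trade - off the abstract describes.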
|
arxiv:1706.03161
|
cross - sectional " information coefficient " ( ic ) is a widely and deeply accepted measure in portfolio management. the paper gives an insight into ic in view of high - dimensional directional statistics : ic is a linear operator on the components of a centralizing - unitizing standardized random vector of next - period cross - sectional returns. our primary research first clearly defines ic with the high - dimensional directional statistics, discussing its first two moments. we derive the closed - form expressions of the directional statistics ' covariance matrix and ic ' s variance in a homoscedastic condition. also, we solve the optimization of ic ' s maximum expectation and minimum variance. simulation intuitively characterizes the standardized directional statistics and ic ' s p. d. f. the empirical analysis of the chinese stock market uncovers interesting facts about the standardized vectors of cross - sectional returns and helps obtain the time series of the measure in the real market. the paper discovers a potential application of directional statistics in finance, proves explicit results for the projected normal distribution, and reveals ic ' s nature.
|
arxiv:1912.10709
|
we show that the two binary operations in double inverse semigroups, as considered by kock [ 2007 ], necessarily coincide.
|
arxiv:1501.03690
|
we study the problem of coexistence in a two - type competition model governed by first - passage percolation on $ \ zd $ or on the infinite cluster in bernoulli percolation. actually, we prove for a large class of ergodic stationary passage times that for distinct points $ x, y \ in \ zd $, there is a strictly positive probability that $ \ { z \ in \ zd ; d ( y, z ) < d ( x, z ) \ } $ and $ \ { z \ in \ zd ; d ( y, z ) > d ( x, z ) \ } $ are both infinite sets. we also show that there is a strictly positive probability that the graph of time - minimizing paths from the origin in first - passage percolation has at least two topological ends. this generalizes results obtained by häggström and pemantle for independent exponential times on the square lattice.
|
arxiv:math/0312369
|
in this paper, we analyze the dynamics, at the quantum level, of the self - dual field minimally coupled to bosons with lorentz symmetry breaking. we quantize the model by applying the dirac bracket canonical quantization procedure. in addition, we test the relativistic invariance of the model by computing the boson - boson elastic scattering amplitude, and show that lorentz symmetry is restored at the quantum level. we conclude our analysis by computing the dual equivalence between the self - dual model with lorentz symmetry breaking coupled to bosonic matter and the maxwell - chern - simons model with lorentz invariance violation coupled to a bosonic field.
|
arxiv:2403.10224
|
an adaptive analogue of the yu. e. nesterov method for variational inequalities with a strongly monotone operator is proposed. some estimates are obtained for the parameters determining the quality of the solution of the variational inequality depending on the number of iterations.
|
arxiv:1803.04045
|
the pair separation model of goto and vassilicos ( s goto and j c vassilicos, 2004, new j. phys., 6, p. 65 ) is revisited and placed on a sound mathematical foundation. a dns of two dimensional homogeneous isotropic turbulence with an inverse energy cascade and a k ^ { - 5 / 3 } power law is used to investigate properties of pair separation in two dimensional turbulence. a special focus lies on the time asymmetry observed between forward and backward separation. application of the present model to this data suffers from finite inertial range effects and thus, conditional averaging on scales rather than on time has been employed to obtain values for the richardson constants and their ratio. the richardson constants for the forward and backward case are found to be ( 1. 066 + / - 0. 020 ) and ( 0. 999 + / - 0. 007 ) respectively. the ratio of richardson constants for the backwards and forwards case is therefore g _ b / g _ f = ( 0. 92 + / - 0. 03 ), and hence exhibits a qualitatively different behavior from pair separation in three dimensional turbulence, where g _ b > g _ f ( j berg et al., 2006, phys. rev. e, 74 ( 1 ), p. 016304 ). this indicates that previously proposed explanations for this time asymmetry based on the strain tensor eigenvalues are not sufficient to describe this phenomenon in two dimensional turbulence. we suggest an alternative qualitative explanation based on the time asymmetry related to the inverse versus forward energy cascade. in two dimensional turbulence, this asymmetry manifests itself in merging eddies due to the inverse cascade, leading to the observed ratio of richardson constants.
|
arxiv:0806.1867
|
graphs have been extensively used to represent data from various domains. in the era of big data, information is being generated at a fast pace, and analyzing it is a challenge. various methods have been proposed to speed up the analysis of the data and to mine it for information. all of this often involves using a massive array of compute nodes and transmitting the data over the network. with such huge quantities of data, this poses a major issue for the task of gathering intelligence from data. therefore, in order to address such issues with big data, using data compression techniques is a viable option. since graphs represent most real world data, methods to compress graphs have been at the forefront of such endeavors. in this paper we propose techniques to compress graphs by finding specific patterns and replacing those with identifiers that are of variable length, an idea inspired by huffman coding. specifically, given a graph g = ( v, e ), where v is the set of vertices and e is the set of edges, and | v | = n, we propose methods to reduce the space requirements of the graph by compressing its adjacency representation. the proposed methods show up to 80 % reduction in the space required to store the graphs as compared to using the adjacency matrix. the methods can be applied to other representations as well. the proposed techniques help solve the issues related to computing on graphs on resource - limited compute nodes, as well as reduce the latency of data transfer over the network in the case of distributed computing.
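a minimal sketch of the pattern - substitution idea ( our own toy version, not the paper's exact scheme ) : split each adjacency - matrix row into fixed - width bit patterns and, huffman - style, give the most frequent patterns the smallest identifiers :

```python
from collections import Counter

def compress_adjacency(adj_rows, k=4):
    """Replace k-bit chunks of each adjacency row (given as bit
    strings) with integer identifiers, assigning the smallest ids to
    the most frequent chunks so frequent patterns encode cheaply.
    Returns the encoded rows and the pattern -> id codebook."""
    chunks = [row[i:i + k] for row in adj_rows
              for i in range(0, len(row), k)]
    codebook = {pat: idx
                for idx, (pat, _) in enumerate(Counter(chunks).most_common())}
    encoded = [[codebook[row[i:i + k]] for i in range(0, len(row), k)]
               for row in adj_rows]
    return encoded, codebook
```

since sparse graphs have long runs of zeros, the all - zero chunk typically dominates and receives the shortest identifier, which is where the space savings come from.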
|
arxiv:1806.08831
|
in this modern era of technology, with e - commerce developing at a rapid pace, it is very important to understand customer requirements and details from a business conversation, and this is crucial for customer retention and satisfaction. extracting key insights from these conversations is essential when it comes to developing a product or solving a customer's issue, and understanding customer feedback, responses, and important details of the product can be done using named entity recognition ( ner ). for extracting the entities we convert the conversations to text using an optimal speech - to - text model. the model is a two - stage network in which the conversation is first converted to text ; then, suitable entities are extracted using a robust ner bert transformer model. this aids in enriching the customer experience when an issue is faced : if a customer faces a problem, they will call and register a complaint, and the model will then extract from this conversation the key features needed to look into the problem, such as the order number and the exact problem. all of these are extracted directly from the conversation, reducing the effort of going through the conversation again.
|
arxiv:2211.17107
|
groundwater is a precious natural resource. groundwater level ( gwl ) forecasting is crucial in the field of water resource management. measurement of gwl from observation wells is the principal source of information about the aquifer and is critical to its evaluation. most of the udupi district of karnataka state in india consists of two geological formations : lateritic terrain and gneissic complex. due to the topographical ruggedness and inconsistency in rainfall, the gwl in the udupi region is declining continually and most of the open wells are drying up during the summer. hence, the current research aimed at developing a groundwater level forecasting model using a hybrid long short - term memory - lion algorithm ( lstm - la ). the historical gwl and rainfall data from an observation well in the udupi district were used to develop the model. the prediction accuracy of the hybrid lstm - la model was better than that of the feedforward neural network ( ffnn ) and the isolated lstm models. the hybrid lstm - la based forecasting model is promising for a larger dataset.
|
arxiv:1912.05934
|
we prove that the general tensor of size 2 ^ n and rank k has a unique decomposition as the sum of decomposable tensors if k < = 0. 9997 ( 2 ^ n ) / ( n + 1 ) ( the constant 1 being the optimal value ). similarly, the general tensor of size 3 ^ n and rank k has a unique decomposition as the sum of decomposable tensors if k < = 0. 998 ( 3 ^ n ) / ( 2n + 1 ) ( the constant 1 being the optimal value ). some results of this flavor are obtained for tensors of any size, but the explicit bounds obtained are weaker.
|
arxiv:1303.6915
|
recently, pre - trained vision - language models have been increasingly used to tackle the challenging zero - shot segmentation task. typical solutions follow the paradigm of first generating mask proposals and then adopting clip to classify them. to maintain clip ' s zero - shot transferability, previous practice favours freezing clip during training. however, in this paper, we reveal that clip is insensitive to different mask proposals and tends to produce similar predictions for various mask proposals of the same image. this insensitivity results in numerous false positives when classifying mask proposals. this issue mainly relates to the fact that clip is trained with image - level supervision. to alleviate this issue, we propose a simple yet effective method, named mask - aware fine - tuning ( maft ). specifically, an image - proposals clip encoder ( ip - clip encoder ) is proposed to handle arbitrary numbers of image and mask proposals simultaneously. then, a mask - aware loss and a self - distillation loss are designed to fine - tune the ip - clip encoder, ensuring clip is responsive to different mask proposals while not sacrificing transferability. in this way, mask - aware representations can be easily learned to make the true positives stand out. notably, our solution can seamlessly plug into most existing methods without introducing any new parameters during the fine - tuning process. we conduct extensive experiments on popular zero - shot benchmarks. with maft, the performance of the state - of - the - art methods is improved by a large margin : 50. 4 % ( + 8. 2 % ) on coco, 81. 8 % ( + 3. 2 % ) on pascal - voc, and 8. 7 % ( + 4. 3 % ) on ade20k in terms of miou for unseen classes. the code is available at https : / / github. com / jiaosiyu1999 / maft. git.
|
arxiv:2310.00240
|
in this paper we consider second order fully nonlinear operators with an additive superlinear gradient term. as in the pioneering paper of brezis for the semilinear case, we obtain the existence of entire viscosity solutions, defined in the whole space, without assuming global bounds. a uniqueness result is also obtained for special gradient terms, subject to a convexity / concavity type assumption where superlinearity is essential and has to be handled in a different way from the linear case.
|
arxiv:1506.06994
|
in this letter, we study the energy efficiency maximization problem for a fluid antenna system ( fas ) in near field communications. specifically, we consider a point - to - point near - field system where the base station ( bs ) transmitter has multiple fixed - position antennas and the user receives the signals with multiple fluid antennas. our objective is to jointly optimize the transmit beamforming of the bs and the fluid antenna positions at the user for maximizing the energy efficiency. our scheme is based on an alternating optimization algorithm that iteratively solves the beamforming and antenna position subproblems. our simulation results validate the performance improvement of the proposed algorithm and confirm the effectiveness of fas.
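the alternating structure described above can be sketched generically ( a toy skeleton under our own assumptions : the letter's actual subproblems optimize beamforming vectors and fluid - antenna positions, whereas the scalar objective below is purely illustrative ) :

```python
def alternating_minimize(argmin_x, argmin_y, x, y, iters=100):
    """Alternating optimization: repeatedly solve each subproblem
    exactly while holding the other block of variables fixed."""
    for _ in range(iters):
        x = argmin_x(y)  # e.g. the transmit-beamforming subproblem
        y = argmin_y(x)  # e.g. the antenna-position subproblem
    return x, y

# Toy objective f(x, y) = x^2 + y^2 + x*y - x - y, whose block-wise
# minimizers are closed form: argmin_x = (1 - y)/2, argmin_y = (1 - x)/2.
# The alternation converges to the joint minimizer x = y = 1/3.
x_opt, y_opt = alternating_minimize(lambda y: (1 - y) / 2,
                                    lambda x: (1 - x) / 2,
                                    x=0.0, y=0.0)
```

each iteration can only decrease the objective, which is why such schemes are a standard workhorse for non - convex joint design problems of this kind ; convergence to a global optimum is not guaranteed in general.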
|
arxiv:2407.05791
|
we embed feynman integrals in the subvarieties of grassmannians through homogenization of the integrands in projective space, then obtain gkz - systems satisfied by those scalar integrals. the feynman integral can be written as a linear combination of the hypergeometric functions of a fundamental solution system in neighborhoods of regular singularities of the gkz - system, whose coefficients are determined by the value of the integral at an ordinary point or at some regular singularities. taking some feynman diagrams as examples, we elucidate in detail how to obtain the fundamental solution systems of feynman integrals in neighborhoods of regular singularities. furthermore we also present the parametric representations of feynman integrals of the 2 - loop self - energy diagrams, which are convenient to embed in the subvarieties of grassmannians.
|
arxiv:2206.04224
|
the out - of - equilibrium mean - field dynamics of a model for wave - particle interaction is investigated. such a model can be regarded as a general formulation for all those applications where the complex interplay between particles and fields is known to be central, e. g., electrostatic instabilities in plasma physics, particle acceleration and free - electron lasers ( fels ). the latter case is here assumed as a paradigmatic example. a transition separating different macroscopic regimes is numerically identified and interpreted by making use of the so - called violent relaxation theory. in the context of free - electron lasers, such a theory is shown to be effective in predicting the saturated regime for energies below the transition. the transition is explained as a dynamical switch between two metastable regimes, and is related to the properties of a stationary point of an entropic functional.
|
arxiv:0902.0712
|
we provide a set of rules to define several spinful quantum hall model states. the method extends the one known for spin polarized states. it is achieved by specifying an undressed root partition, a squeezing procedure and rules to dress the configurations with spin. it applies to both the excitation - less state and the quasihole states. in particular, we show that the naive generalization where one preserves the spin information during the squeezing sequence, may fail. we give numerous examples such as the halperin states, the non - abelian spin - singlet states or the spin - charge separated states. the squeezing procedure for the series ( k = 2, r ) of spinless quantum hall states, which vanish as r powers when k + 1 particles coincide, is generalized to the spinful case. as an application of our method, we show that the counting observed in the particle entanglement spectrum of several spinful states matches the one obtained through the root partitions and our rules. this counting also matches the counting of quasihole states of the corresponding model hamiltonians, when the latter is available.
|
arxiv:1107.2232
|
we implement a photon - counting optical time domain reflectometer ( otdr ) at 1. 55um which exhibits a high 2 - point resolution and a high accuracy. it is based on a low temporal - jitter photon - counting module at 1. 55um. this detector is composed of a periodically poled lithium niobate ( ppln ) waveguide, which provides a wavelength conversion from near infrared to visible light, and a low jitter silicon photon - counting detector. with this apparatus, we obtain centimetre resolution over a measurement range of tens of kilometres.
|
arxiv:0802.1921
|
this study introduces dk - practice ( dynamic knowledge prediction and educational content recommendation system ), an intelligent online platform that leverages machine learning to provide personalized learning recommendations based on the student ' s knowledge state. students participate in a short, adaptive assessment using the question - and - answer method regarding key concepts in a specific knowledge domain. the system dynamically selects the next question for each student based on the correctness and accuracy of their previous answers. after the test is completed, dk - practice analyzes the student ' s interaction history to recommend learning materials that strengthen the student ' s knowledge in the identified gaps. both question selection and learning material recommendations are based on machine learning models trained using anonymized data from a real learning environment. to provide self - assessment and monitor learning progress, dk - practice allows students to take two tests : one pre - teaching and one post - teaching. after each test, a report is generated with detailed results. in addition, the platform offers functions to visualize learning progress based on recorded test statistics. dk - practice promotes adaptive and personalized learning by empowering students with self - assessment capabilities and providing instructors with valuable information about students ' knowledge levels. dk - practice can be extended to various educational environments and knowledge domains, provided the necessary data is available for the educational topics. a subsequent paper will present the methodology for the experimental application and evaluation of the platform.
|
arxiv:2501.10373
|
a locally finite face - to - face tiling of euclidean d - space by convex polytopes is called combinatorially multihedral if its combinatorial automorphism group has only finitely many orbits on the tiles. the paper describes a local characterization of combinatorially multihedral tilings in terms of centered coronas. this generalizes the local theorem for monotypic tilings, established in an earlier paper, which characterizes the case of combinatorial tile - transitivity.
|
arxiv:0809.2291
|
classical assumptions like strong convexity and lipschitz smoothness often fail to capture the nature of deep learning optimization problems, which are typically non - convex and non - smooth, making traditional analyses less applicable. this study aims to elucidate the mechanisms of non - convex optimization in deep learning by extending the conventional notions of strong convexity and lipschitz smoothness. by leveraging these concepts, we prove that, under the established constraints, the empirical risk minimization problem is equivalent to optimizing the local gradient norm and structural error, which together constitute the upper and lower bounds of the empirical risk. furthermore, our analysis demonstrates that the stochastic gradient descent ( sgd ) algorithm can effectively minimize the local gradient norm. additionally, techniques like skip connections, over - parameterization, and random parameter initialization are shown to help control the structural error. ultimately, we validate the core conclusions of this paper through extensive experiments. theoretical analysis and experimental results indicate that our findings provide new insights into the mechanisms of non - convex optimization in deep learning.
|
arxiv:2410.05807
|
in a recent paper, one of us studied spherically symmetric, asymptotically flat solutions of shape dynamics, finding that the spatial metric has characteristics of a wormhole - two asymptotically flat ends and a minimal - area sphere, or ` throat ', in between. in this paper we investigate whether that solution can emerge as a result of gravitational collapse of matter. with this goal, we study the simplest kind of spherically - symmetric matter : an infinitely - thin shell of dust. our system can be understood as a model of a star accreting a thin layer of matter. we solve the dynamics of the shell exactly and find that, indeed, as it collapses, the shell leaves in its wake the wormhole metric. in the maximal - slicing time we use for asymptotically flat solutions, the shell only approaches the throat asymptotically and does not cross it in a finite amount of time ( as measured by a clock ` at infinity ' ). this leaves open the possibility that a more realistic cosmological solution of shape dynamics might see this crossing happening in a finite amount of time ( as measured by the change of relational / shape degrees of freedom ).
|
arxiv:1509.00833
|
the ever - continuing explosive growth of on - demand content distribution has imposed great pressure on mobile / wireless network infrastructures. to ease congestion in the network and to increase perceived user experience, caching of popular content closer to the end - users can play a significant role and as such this issue has received significant attention over the last few years. additionally, energy efficiency is treated as a fundamental requirement in the design of next - generation mobile networks. however, there has been little attention to the overlapping area between energy efficiency and network caching, especially when considering multipath routing. to this end, this paper proposes an energy - efficient caching scheme with multipath routing support. the proposed scheme provides a joint anchoring of popular content into a set of potential caching nodes with optimized multipath support whilst ensuring a balance between transmission and caching energy cost. the proposed model also considers different content delivery modes, such as multicast and unicast. two separate integer linear programming ( ilp ) models are formulated, one for each delivery mode. to tackle the curse of dimensionality we then provide a greedy simulated annealing algorithm, which not only reduces the time complexity but also provides a competitive performance. a wide set of numerical investigations reveal that the proposed scheme reduces the energy consumption by up to 80 % compared with other widely used caching approaches under the premise of network resource limitation. sensitivity analysis to different parameters is also meticulously discussed in this paper.
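the greedy simulated annealing heuristic mentioned above can be sketched generically: start from an initial placement, then accept random perturbations with the usual metropolis rule. the toy cost function and move set below are placeholders for illustration only, not the paper's ilp objective:

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=2000):
    """generic simulated annealing: accept a worse neighbour with
    probability exp(-delta / t), where the temperature t is slowly
    cooled. `cost` and `neighbor` are problem-specific, e.g. the energy
    of a caching placement and a random swap of one cached item."""
    x, c = x0, cost(x0)
    best, best_c = x, c
    t = t0
    for _ in range(steps):
        y = neighbor(x)
        cy = cost(y)
        d = cy - c
        # always accept improvements; accept worse moves with prob exp(-d/t)
        if d <= 0 or random.random() < math.exp(-d / t):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
        t *= cooling
    return best, best_c

# toy 1-d placeholder objective with several local minima
cost = lambda x: (x - 3.0) ** 2 + math.sin(5.0 * x)
neighbor = lambda x: x + random.uniform(-0.5, 0.5)
print(simulated_annealing(cost, neighbor, x0=0.0))
```

a "greedy" variant, as in the paper, would simply seed `x0` with the greedy solution instead of a random one.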
|
arxiv:2104.13493
|
existence of pulsating stars in eclipsing binaries has been known for decades. these types of objects are extremely valuable systems for astronomical studies as they exhibit both eclipsing and pulsation variations. eclipsing binaries are the only way to directly measure the mass and radius of stars with good accuracy ( $ \ leq $ 1 \ % ), while the pulsations are a unique way to probe the stellar interior via oscillation frequencies. different types of pulsating stars exist in eclipsing binaries. one of them is the delta scuti variables. currently, the known number of delta scuti stars in eclipsing binaries is around 90 according to the latest catalog of these variables. increasing the number of known variables of this kind is important to understand stellar structure, evolution and the effect of binarity on the pulsations. therefore, in this study, we focus on discovering new eclipsing binaries with delta scuti component ( s ). we searched the northern tess field with a visual inspection, following criteria such as light curve shape, the existence of pulsation - like variations in the out - of - eclipse light curve and the teff values of the targets. following these criteria, we selected a number of targets. the binary variation was first removed from the tess light curves of the selected targets, and frequency analysis was performed on the residuals. the luminosity, absolute and bolometric magnitudes of the targets were calculated as well. to determine how well these parameters represent the primary ( more luminous ) binary component, we also computed the flux density ratio of the systems by utilizing the area of the eclipses. in addition, the positions of the systems in the h - r diagram were examined considering the flux density ratios. as a consequence of the investigation, we identified 38 candidate delta scuti variables and also one maia variable in eclipsing binary systems.
|
arxiv:2204.12952
|
the threshold theorem is probably the most important development of mathematical epidemic modelling. unfortunately, some models may not behave according to the threshold. in this paper, we focus on the final outcome of the sir model with demography. the behaviour of the model, approached by deterministic and stochastic methods, is introduced mainly using simulations. furthermore, we also investigate the dynamics of susceptibles in the population in the absence of infectives. we have successfully shown that the deterministic and stochastic models produce similar results when $ r _ 0 \ leq 1 $, that is, in the disease - free stage of the epidemic. but when $ r _ 0 > 1 $, the deterministic and stochastic approaches lead to different interpretations.
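for reference, the threshold behaviour of the deterministic sir model with demography can be reproduced with a few lines of forward-euler integration; the parameter values below are illustrative assumptions, not taken from the paper:

```python
# forward-euler sketch of the sir model with demography:
#   dS/dt = mu*N - beta*S*I/N - mu*S
#   dI/dt = beta*S*I/N - (gamma + mu)*I
#   dR/dt = gamma*I - mu*R
# here r0 = beta / (gamma + mu); all parameter values are illustrative.

def simulate_sir(beta, gamma, mu, n=1000.0, i0=1.0, dt=0.01, t_max=2000.0):
    s, i, r = n - i0, i0, 0.0
    for _ in range(int(t_max / dt)):
        ds = mu * n - beta * s * i / n - mu * s
        di = beta * s * i / n - (gamma + mu) * i
        dr = gamma * i - mu * r
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
    return s, i, r

# below threshold (r0 < 1) the infection dies out; above it, i settles
# at a positive endemic equilibrium.
gamma, mu = 0.1, 0.01
for beta in (0.05, 0.5):  # r0 ~ 0.45 and r0 ~ 4.5
    r0 = beta / (gamma + mu)
    _, i_final, _ = simulate_sir(beta, gamma, mu)
    print(round(r0, 2), round(i_final, 1))
```

the stochastic counterpart would replace the euler step with event-driven (e.g. gillespie) sampling, which is where the interpretations diverge for small populations.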
|
arxiv:1803.01496
|
geographic information systems ( gis ) now provide accurate maps of terrain, roads, waterways, and building footprints and heights. aircraft, particularly small unmanned aircraft systems, can exploit additional information such as building roof structure to improve navigation accuracy and safety particularly in urban regions. this paper proposes a method to automatically label building roof shape types. satellite imagery and lidar data from witten, germany are fed to convolutional neural networks ( cnn ) to extract salient feature vectors. supervised training sets are automatically generated from pre - labeled buildings contained in the openstreetmap database. multiple cnn architectures are trained and tested, with the best performing networks providing a condensed feature set for support vector machine and decision tree classifiers. satellite and lidar data fusion is shown to provide greater classification accuracy than through use of either data type individually.
|
arxiv:1802.06274
|
technology to control electron beams led to the first useful scanning electron microscope, built in 1952 by mcmullan in charles oatley ' s lab at cambridge university. a series of phd students in that lab continued to improve the technique. thomas eugene everhart, working mostly on semiconductor surfaces, developed the voltage contrast technique and the everhart - thornley detector.
|
https://en.wikipedia.org/wiki/Electron-beam_technology
|
nanocarriers are nanosized materials commonly used for targeted - oriented delivery of active compounds, including antimicrobials and small - molecular drugs. they equally represent fundamental and engineering challenges since sophisticated nanocarriers must show adequate structure, stability, and function in complex ambients. here, we report on the computational design of a distinctive class of nanocarriers, built from buckled armored nanodroplets, able to selectively encapsulate or release a probe load under specific flow conditions. mesoscopic simulations offer detailed insight into the interplay between the characteristics of laden surface coverage and evolution of the droplet morphology. first, we describe in detail the formation of \ textit { pocket - like } structures in pickering emulsion nanodroplets and their stability under external flow. then we use that knowledge to test the capacity of these emulsion - based pockets to yield flow - assisted encapsulation or expulsion of a probe load. finally, the rheological properties of our model carrier are put into perspective with those of delivery systems employed in pharmaceutical and cosmetic technology.
|
arxiv:2101.07070
|
in this study, we propose methods for the automatic detection of photospheric features ( bright points and granules ) from ultra - violet ( uv ) radiation, using a feature - based classifier. the methods use quiet - sun observations in 214 nm and 525 nm images taken by sunrise on 9 june 2009. region - growing and mean - shift procedures are applied to segment the bright points ( bps ) and granules, respectively. zernike moments of each region are computed. the zernike moments of bps, granules, and other features are distinctive enough to be separated using a support vector machine ( svm ) classifier. the size distribution of bps can be fitted with a power law of slope - 1. 5. the peak value of granule sizes is found to be about 0. 5 arcsec ^ 2. the mean value of the filling factor of bps is 0. 01, and for granules it is 0. 51. there is a critical scale for granules so that small granules with sizes smaller than 2. 5 arcsec ^ 2 cover a wide range of brightness, while the brightness of large granules approaches unity. the mean value of bp brightness fluctuations is estimated to be 1. 2, while for granules it is 0. 22. mean values of the horizontal velocities of an individual bp and an individual bp within the network were found to be 1. 6 km / s and 0. 9 km / s, respectively. we conclude that the effect of individual bps in releasing energy to the photosphere, and maybe the upper layers, is stronger than that of individual bps within the network.
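the quoted power-law fit to the bp size distribution can be illustrated with a simple log-log least-squares slope estimate; the histogram below is synthetic, not the sunrise data:

```python
import math

def loglog_slope(sizes, counts):
    """least-squares slope of log(counts) vs log(sizes): the usual
    quick estimate of a power-law exponent from a binned histogram."""
    xs = [math.log(s) for s in sizes]
    ys = [math.log(c) for c in counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# synthetic histogram drawn exactly from n(s) ~ s^(-1.5)
sizes = [1.0, 2.0, 4.0, 8.0, 16.0]
counts = [s ** -1.5 for s in sizes]
print(round(loglog_slope(sizes, counts), 3))  # -1.5
```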
|
arxiv:1407.2447
|
in this paper, we examine self - supervised learning methods, particularly vicreg, to provide an information - theoretical understanding of their construction. as a first step, we demonstrate how information - theoretic quantities can be obtained for a deterministic network, offering a possible alternative to prior work that relies on stochastic models. this enables us to demonstrate how vicreg can be ( re ) discovered from first principles and its assumptions about data distribution. furthermore, we empirically demonstrate the validity of our assumptions, confirming our novel understanding of vicreg. finally, we believe that the derivation and insights we obtain can be generalized to many other ssl methods, opening new avenues for theoretical and practical understanding of ssl and transfer learning.
|
arxiv:2207.10081
|
we prove a nontrivial energy bound for a finite set of affine transformations over a general field and discuss a number of implications. these include new bounds on growth in the affine group, a quantitative version of a theorem by elekes about rich lines in grids. we also give a positive answer to a question of yufei zhao that for a plane point set p for which no line contains a positive proportion of points from p, there may be at most one line, meeting the set of lines defined by p in at most a constant multiple of | p | points.
|
arxiv:1911.03401
|
compared with conventional grating - based spectrometers, reconstructive spectrometers based on spectrally engineered filtering have the advantage of miniaturization because of the less demand for dispersive optics and free propagation space. however, available reconstructive spectrometers fail to balance the performance on operational bandwidth, spectral diversity and angular stability. in this work, we proposed a compact silicon metasurfaces based spectrometer / camera. after angle integration, the spectral response of the system is robust to angle / aperture within a wide working bandwidth from 400nm to 800nm. it is experimentally demonstrated that the proposed method could maintain the spectral consistency from f / 1. 8 to f / 4 ( the corresponding angle of incident light ranges from 7 { \ deg } to 16 { \ deg } ) and the incident hyperspectral signal could be accurately reconstructed with a fidelity exceeding 99 %. additionally, a spectral imaging system with 400x400 pixels is also established in this work. the accurate reconstructed hyperspectral image indicates that the proposed aperture - robust spectrometer has the potential to be extended as a high - resolution broadband hyperspectral camera.
|
arxiv:2310.20289
|
we present a novel data generation tool for document processing. the tool focuses on providing a maximal level of visual information in a normal type document, ranging from character - level position to paragraph - level position. it also enables working with large datasets on low - resource languages as well as providing a means of processing thorough full - level information of the documented text. the data generation tool comes with a dataset of 320000 vietnamese synthetic document images and instructions to generate a dataset of similar size in other languages. the repository can be found at : https://github.com/tson1997/sdl-document-image-generation
|
arxiv:2106.15117
|
authentication or content encryption. vpns, for example, can be used to separate the traffic of different user communities over an underlying network with strong security features. vpn may have best - effort performance or may have a defined service level agreement ( sla ) between the vpn customer and the vpn service provider. = = = global area network = = = a global area network ( gan ) is a network used for supporting mobile users across an arbitrary number of wireless lans, satellite coverage areas, etc. the key challenge in mobile communications is handing off communications from one local coverage area to the next. in ieee project 802, this involves a succession of terrestrial wireless lans. = = organizational scope = = networks are typically managed by the organizations that own them. private enterprise networks may use a combination of intranets and extranets. they may also provide network access to the internet, which has no single owner and permits virtually unlimited global connectivity. = = = intranet = = = an intranet is a set of networks that are under the control of a single administrative entity. an intranet typically uses the internet protocol and ip - based tools such as web browsers and file transfer applications. the administrative entity limits the use of the intranet to its authorized users. most commonly, an intranet is the internal lan of an organization. a large intranet typically has at least one web server to provide users with organizational information. = = = extranet = = = an extranet is a network that is under the administrative control of a single organization but supports a limited connection to a specific external network. for example, an organization may provide access to some aspects of its intranet to share data with its business partners or customers. these other entities are not necessarily trusted from a security standpoint. the network connection to an extranet is often, but not always, implemented via wan technology. 
= = = internet = = = an internetwork is the connection of multiple different types of computer networks to form a single computer network using higher - layer network protocols and connecting them together using routers. the internet is the largest example of internetwork. it is a global system of interconnected governmental, academic, corporate, public, and private computer networks. it is based on the networking technologies of the internet protocol suite. it is the successor of the advanced research projects agency network ( arpanet ) developed by darpa of the united states department of defense. the internet utilizes copper communications and an optical networking backbone to enable the world wide web ( www ), the internet
|
https://en.wikipedia.org/wiki/Computer_network
|
we describe a semi - analytic approach to the two - band ginzburg - landau theory, which predicts the behavior of vortices in two - band superconductors. we show that the character of the short - range vortex - vortex interaction is determined by the sign of the normal domain - superconductor interface energy, in analogy with the conventional differentiation between type - i and type - ii superconductors. however, we also show that the long - range interaction is determined by a modified ginzburg - landau parameter $ \ kappa ^ * $, different from the standard $ \ kappa $ of a bulk superconductor. this opens the possibility for non - monotonic vortex - vortex interaction, which is temperature - dependent, and can be further tuned by alterations of the material on the microscopic scale.
|
arxiv:1105.2403
|
in this paper we study a pollution regulation problem in an electricity market with a network structure. the market is ruled by an independent system operator ( iso for short ) whose goal is to reduce the pollutant emissions of the providers in the network by encouraging the use of cleaner technologies. the problem of the iso is formulated as a contracting problem with each one of the providers, who interact among themselves by playing a stochastic differential game. the actions of the providers are not observable by the iso, which faces moral hazard. by using the dynamic programming approach, we represent the value function of the iso as the unique viscosity solution to the corresponding hamilton - jacobi - bellman equation. we prove that this solution is smooth and characterise the optimal controls for the iso. numerical solutions to the problem are presented and discussed. we also consider a simpler problem for the iso, with constant production levels, that can be solved explicitly in a particular setting.
|
arxiv:2111.13505
|
in this paper, we present an efficient and stable method to determine the one - particle green ' s function in the hybridization - expansion continuous - time ( ct - hyb ) quantum monte carlo method, within the framework of dynamical mean - field theory. the high - frequency tail of the impurity self - energy is replaced with a noise - free function determined by a dual expansion around the atomic limit. this method does not depend on the explicit form of the interaction term. more advantageously, it does not introduce any additional numerical cost to the ct - hyb simulation. we discuss the symmetries of the two - particle vertex, which can be used to optimize the simulation of the four - point correlation functions in the ct - hyb. here, we adopt them to accelerate the dual - expansion calculation, which turns out to be especially suitable for the study of material systems with complicated band structures. as an application, a two - orbital anderson impurity model with a general on - site interaction form is studied. the phase diagram is extracted as a function of the coulomb interactions for two different hund ' s coupling strengths. in the presence of hybridization between different orbitals, for smaller interaction strengths this model shows a transition from metal to band insulator. as the interaction strengths increase, this transition is replaced by a crossover from mott - insulator to band - insulator behavior.
|
arxiv:1109.4056
|
in the 20 + years of doppler observations of stars, scientists have uncovered a diverse population of extrasolar multi - planet systems. a common technique for characterizing the orbital elements of these planets is markov chain monte carlo ( mcmc ), using a keplerian model with random walk proposals and paired with the metropolis - hastings algorithm. for approximately a couple of dozen planetary systems with doppler observations, there are strong planet - planet interactions due to the system being in or near a mean - motion resonance ( mmr ). an n - body model is often required to accurately describe these systems. further computational difficulties arise from exploring a high - dimensional parameter space ( roughly 7 times the number of planets ) that can have complex parameter correlations. to surmount these challenges, we introduce a differential evolution mcmc ( demcmc ) applied to radial velocity data while incorporating self - consistent n - body integrations. our radial velocity using n - body demcmc ( run dmc ) algorithm improves upon the random walk proposal distribution of the traditional mcmc by using an ensemble of markov chains to adaptively improve the proposal distribution. we describe the methodology behind the algorithm, along with results of tests for accuracy and performance. we find that most algorithm parameters have a modest effect on the rate of convergence. however, the size of the ensemble can have a strong effect on performance. we show that the optimal choice depends on the number of planets in a system, as well as the computer architecture used and the resulting extent of parallelization. while the exact choices of optimal algorithm parameters will inevitably vary due to the details of individual planetary systems, we offer recommendations for choosing the demcmc algorithm ' s parameters that result in excellent performance for a wide variety of planetary systems.
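the differential evolution proposal at the heart of a demcmc can be sketched in a few lines: for chain i, two other chains a and b are drawn at random and the proposal is x_i + gamma (x_a - x_b) plus a small jitter. this is a generic illustration of the standard ter braak (2006) construction, not the run dmc code itself:

```python
import random

def de_proposal(chains, i, gamma=None, jitter=1e-6):
    """differential-evolution proposal for chain i of an ensemble.

    proposal: x_i + gamma * (x_a - x_b) + e, with a, b two distinct
    other chains and e a small uniform jitter. gamma defaults to the
    usual choice 2.38 / sqrt(2 d) for a d-dimensional state, so the
    ensemble itself adapts the proposal to the parameter correlations.
    """
    d = len(chains[i])
    if gamma is None:
        gamma = 2.38 / (2.0 * d) ** 0.5
    a, b = random.sample([k for k in range(len(chains)) if k != i], 2)
    return [chains[i][j] + gamma * (chains[a][j] - chains[b][j])
            + random.uniform(-jitter, jitter)
            for j in range(d)]

ensemble = [[random.gauss(0.0, 1.0) for _ in range(3)] for _ in range(8)]
print(de_proposal(ensemble, 0))
```

each proposal would then be accepted or rejected with the metropolis-hastings rule against the (here n-body) likelihood, which is where the expensive integrations enter.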
|
arxiv:1311.5229
|
the event sequence of many diverse systems is represented as a sequence of discrete events in a continuous space. examples of such an event sequence are earthquake aftershock events, financial transactions, e - commerce transactions, social network activity of a user, and the user ' s web search pattern. finding such an intricate pattern helps discover which event will occur in the future and when it will occur. a hawkes process is a mathematical tool used for modeling such time series of discrete events. traditionally, the hawkes process models the data through an intensity function with a parameterized kernel function. the hawkes process ' s intensity function involves two components : the background intensity and the effect of the events ' history. however, such a parameterized assumption cannot precisely capture future event characteristics from past event data, due to bias in modeling the kernel function. this paper explores recent advances in which novel deep learning - based methods are used to model the kernel function, removing the need for a parameterized kernel. finally, we give potential future research directions to improve modeling using the hawkes process.
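for reference, the parameterized intensity function discussed above is most often written with an exponential kernel; a minimal sketch, with illustrative parameter values:

```python
import math

def hawkes_intensity(t, history, mu=0.2, alpha=0.8, beta=1.0):
    """hawkes conditional intensity with an exponential kernel:
        lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))
    mu is the background intensity; the sum is the self-exciting
    contribution of the event history. the deep-learning approaches
    surveyed in the paper replace this fixed kernel with a learned one.
    """
    excitation = sum(alpha * math.exp(-beta * (t - ti))
                     for ti in history if ti < t)
    return mu + excitation

events = [1.0, 1.5, 3.0]
print(hawkes_intensity(0.5, events))  # before any event: just mu
print(hawkes_intensity(3.1, events))  # shortly after a burst: elevated
```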
|
arxiv:2104.11092
|
eeg - based neural networks, pivotal in medical diagnosis and brain - computer interfaces, face significant intellectual property ( ip ) risks due to their reliance on sensitive neurophysiological data and resource - intensive development. current watermarking methods, particularly those using abstract trigger sets, lack robust authentication and fail to address the unique challenges of eeg models. this paper introduces a cryptographic wonder filter - based watermarking framework tailored for eeg - based neural networks. leveraging collision - resistant hashing and public - key encryption, the wonder filter embeds the watermark during training, ensuring minimal distortion ( $ \ leq 5 \ % $ drop in eeg task accuracy ) and high reliability ( 100 \ % watermark detection ). the framework is rigorously evaluated against adversarial attacks, including fine - tuning, transfer learning, and neuron pruning. results demonstrate persistent watermark retention, with classification accuracy for watermarked states remaining above 90 \ % even after aggressive pruning, while primary task performance degrades faster, deterring removal attempts. piracy resistance is validated by the inability to embed secondary watermarks without severe accuracy loss ( $ > 10 \ % $ in eegnet and ccnn models ). cryptographic hashing ensures authentication, reducing brute - force attack success probabilities. evaluated on the deap dataset across models ( ccnn, eegnet, tsception ), the method achieves $ > 99. 4 \ % $ null - embedding accuracy, effectively eliminating false positives. by integrating wonder filters with eeg - specific adaptations, this work bridges a critical gap in ip protection for neurophysiological models, offering a secure, tamper - proof solution for healthcare and biometric applications. the framework ' s robustness against adversarial modifications underscores its potential to safeguard sensitive eeg models while maintaining diagnostic utility.
|
arxiv:2502.05931
|
medical differential phase contrast x - ray imaging ( dpci ) promises improved soft - tissue contrast at lower x - ray dose. the dose strongly depends on both the angular sensitivity and on the visibility of a grating - based talbot - lau interferometer. using a conventional x - ray tube, a high sensitivity and a high visibility are somewhat contradicting goals : to increase sensitivity, the grating period has to be reduced and / or the grating distance increased. technically, this means using a higher talbot order ( 3rd or 5th one instead of first one ). this however reduces the visibility somewhat, because only a smaller part of the tube spectrum will get used. this work proposes to relax this problem by changing the phase grating geometry. this allows to double sensitivity ( i. e., double the talbot order ) without reducing the visibility. one proposed grating geometry is an older binary one ( 75 % of a period $ \ pi $ - shifting ), but applied in a novel way. the second proposed geometry is a novel one, requiring three height levels for polychromatic correction. the advantage is quantified by a simulation of the resulting interference patterns. visibilities for the common $ \ pi $ - shifting gratings are compared with the proposed alternative geometries. this is done depending on photon energy and opening ratio of the coherence grating g0. it shows that despite of doubled sensitivity of the proposed gratings, the overall visibility might even improve a little.
|
arxiv:1603.03922
|
support vector machine ( svm ) based multivariate pattern analysis ( mvpa ) has delivered promising performance in decoding specific task states based on functional magnetic resonance imaging ( fmri ) of the human brain. conventionally, svm - mvpa requires careful feature selection / extraction according to expert knowledge. in this study, we propose a deep neural network ( dnn ) for directly decoding multiple brain task states from fmri signals without any burden of handcrafting features. we trained and tested the dnn classifier using task fmri data from the human connectome project ' s s1200 dataset ( n = 1034 ). in tests to verify its performance, the proposed classification method identified seven tasks with an average accuracy of 93. 7 %. we also showed the general applicability of the dnn for transfer learning to small datasets ( n = 43 ), a situation encountered in typical neuroscience research. the proposed method achieved an average accuracy of 89. 0 % and 94. 7 % on a working memory task and a motor classification task, respectively, higher than the accuracies of 69. 2 % and 68. 6 % obtained by svm - mvpa. a network visualization analysis showed that the dnn automatically detected features from areas of the brain related to each task. without incurring the burden of handcrafting the features, the proposed deep decoding method can classify brain task states highly accurately, and is a powerful tool for fmri researchers.
|
arxiv:1801.09858
|
non - commutative geometry at inflation can give rise to parity violating modulations of the primordial power spectrum. we develop the statistical tools needed for investigating whether these modulations are evident in the cosmic microwave background ( cmb ). the free parameters of the models are two directional parameters ( theta, phi ), the signal amplitude a *, and a tilt parameter n * that modulates correlation power on different scales. the signature of the model corresponds to a kind of hemispherical power asymmetry. when analyzing the 7 - year wmap data we find a weak signature for a preferred direction in the q -, v -, and w bands with direction ( l, b ) = ( - 225 deg, - 25 deg ) ± ( 20 deg, 20 deg ), which is close to another previously discovered hemispherical power asymmetry. although these results are intriguing, the significance of the detection in the w -, v - and q - bands is only about 2 sigma, suggesting that the simplest parameterization of the leading correction represents only partially the effects of the space - time non - commutativity possibly responsible for the hemispherical asymmetry. our constraints on the presence of a dipole are independent of its physical origin and prefer a blue - tilted spectral index n * ~ 0 with amplitude a * ~ 0. 18.
|
arxiv:1011.5353
|
we initiate the study of the social welfare loss caused by corrupt auctioneers, both in single - item and multi - unit auctions. in our model, the auctioneer may collude with the winning bidders by letting them lower their bids in exchange for a ( possibly bidder - dependent ) fraction $ \ gamma $ of the surplus. we consider different corruption schemes. in the most basic one, all winning bidders lower their bid to the highest losing bid. we show that this setting is equivalent to a $ \ gamma $ - hybrid auction in which the payments are a convex combination of first - price and the second - price payments. more generally, we consider corruption schemes that can be related to $ \ gamma $ - approximate first - price auctions ( $ \ gamma $ - fpa ), where the payments recover at least a $ \ gamma $ - fraction of the first - price payments. our goal is to obtain a precise understanding of the robust price of anarchy ( poa ) of such auctions. if no restrictions are imposed on the bids, we prove a bound on the robust poa of $ \ gamma $ - fpa which is tight ( over the entire range of $ \ gamma $ ) for the single - item and the multi - unit auction setting. on the other hand, if the bids satisfy the no - overbidding assumption a more fine - grained landscape of the price of anarchy emerges, depending on the auction setting and the equilibrium notion. albeit being more challenging, we derive ( almost ) tight bounds for both auction settings and several equilibrium notions, basically leaving open some ( small ) gaps for the coarse - correlated price of anarchy only.
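the stated equivalence (winners lowering their bids to the highest losing bid in exchange for a fraction gamma of the surplus) makes the winner's payment a convex combination of the first- and second-price payments. a toy single-item sketch, in our own notation rather than the paper's:

```python
def hybrid_payment(bids, gamma):
    """single-item gamma-hybrid auction: the highest bidder wins and
    pays gamma * (own bid) + (1 - gamma) * (highest losing bid).
    gamma = 0 recovers the second-price auction, gamma = 1 the
    first-price auction; intermediate gamma models the corrupt
    auctioneer keeping a gamma-fraction of the surplus.
    """
    ranked = sorted(bids, reverse=True)
    first, second = ranked[0], ranked[1]
    return gamma * first + (1.0 - gamma) * second

print(hybrid_payment([10.0, 6.0, 3.0], 0.0))  # second-price: 6.0
print(hybrid_payment([10.0, 6.0, 3.0], 1.0))  # first-price: 10.0
print(hybrid_payment([10.0, 6.0, 3.0], 0.5))  # halfway: 8.0
```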
|
arxiv:2106.01822
|
chemical analyses of late - type stars are usually carried out following the classical recipe : lte line formation and homogeneous, plane - parallel, flux - constant, and lte model atmospheres. we review different results in the literature that have suggested significant inconsistencies in the spectroscopic analyses, pointing out the difficulties in deriving independent estimates of the stellar fundamental parameters and hence, detecting systematic errors. the trigonometric parallaxes measured by the hipparcos mission provide accurate appraisals of the stellar surface gravity for nearby stars, which are used here to check the gravities obtained from the photospheric iron ionization balance. we find an approximate agreement for stars in the metallicity range - 1 < = [ fe / h ] < = 0, but the comparison shows that the differences between the spectroscopic and trigonometric gravities decrease towards lower metallicities for more metal - deficient dwarfs ( - 2. 5 < = [ fe / h ] < = - 1. 0 ), which casts a shadow upon the abundance analyses for extreme metal - poor stars that make use of the ionization equilibrium to constrain the gravity. the comparison with the strong - line gravities derived by edvardsson ( 1988 ) and fuhrmann ( 1998a ) confirms that this method provides systematically larger gravities than the ionization balance. the strong - line gravities get closer to the physical ones for the stars analyzed by fuhrmann, but they are even further away than the iron ionization gravities for the stars of lower gravities in edvardsson ' s sample. the confrontation of the deviations of the iron ionization gravities in metal - poor stars reported here with departures from the excitation balance found in the literature, show that they are likely to be induced by the same physical mechanism ( s ).
|
arxiv:astro-ph/9907155
|
heavy ball momentum is crucial in accelerating ( stochastic ) gradient - based optimization algorithms for machine learning. existing heavy ball momentum is usually weighted by a uniform hyperparameter, which relies on excessive tuning. moreover, the calibrated fixed hyperparameter may not lead to optimal performance. in this paper, to eliminate the effort for tuning the momentum - related hyperparameter, we propose a new adaptive momentum inspired by the optimal choice of the heavy ball momentum for quadratic optimization. our proposed adaptive heavy ball momentum can improve stochastic gradient descent ( sgd ) and adam. sgd and adam with the newly designed adaptive momentum are more robust to large learning rates, converge faster, and generalize better than the baselines. we verify the efficiency of sgd and adam with the new adaptive momentum on extensive machine learning benchmarks, including image classification, language modeling, and machine translation. finally, we provide convergence guarantees for sgd and adam with the proposed adaptive momentum.
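the classical heavy ball ( polyak momentum ) update underlying this line of work can be sketched as follows ; beta is fixed here, whereas the proposed method would adapt it per step ( the exact adaptive rule is not given in the abstract, so this is only the baseline update ) :

```python
def heavy_ball_step(x, v, grad, lr, beta):
    """one heavy-ball (polyak momentum) update:
    v <- beta * v - lr * grad ; x <- x + v."""
    v = beta * v - lr * grad
    return x + v, v

# demo: minimize f(x) = 0.5 * x**2, whose gradient at x is x.
# beta is a fixed hyperparameter here; an adaptive-momentum method
# would replace it with a per-step value.
x, v = 5.0, 0.0
for _ in range(100):
    x, v = heavy_ball_step(x, v, x, lr=0.1, beta=0.9)
```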
|
arxiv:2110.09057
|
in recent years, extensive research has been conducted in the area of service level agreement ( sla ) for utility computing systems. an sla is a formal contract used to guarantee that consumers ' service quality expectations can be achieved. in utility computing systems, the level of customer satisfaction is crucial, making slas significantly important in these environments. a fundamental issue is the management of slas, including sla autonomy management or trade - offs among multiple quality of service ( qos ) parameters. many sla languages and frameworks have been developed as solutions ; however, there is no overall classification for these extensive works. therefore, the aim of this chapter is to present a comprehensive survey of how slas are created, managed and used in the utility computing environment. we discuss existing use cases from grid and cloud computing systems to identify the level of sla realization in state - of - the - art systems and emerging challenges for future research.
|
arxiv:1010.2881
|
lexical semantic resources, like wordnet, are often used in real applications of natural language document processing. for example, we integrated germanet in our document suite xdoc for processing german forensic autopsy protocols. in addition to the hypernymy and synonymy relations, we want to adapt germanet ' s verb frames for our analysis. in this paper we outline an approach for the domain - related enrichment of germanet verb frames by corpus - based syntactic and co - occurrence data analyses of real documents.
|
arxiv:cs/0501094
|
the behaviour of sports balls during impact defines some special features of each sport. the velocity of the game, the accuracy of passes or shots, the control of the ball direction after impact, the risks of injury, are all set by the impact mechanics of the ball. for inflated sports balls, those characteristics are finely tuned by the ball inner pressure. as a consequence, inflation pressures are regulated for sports played with inflated balls. despite a good understanding of ball elasticity, the source of energy dissipation for inflated balls remains controversial. we first give a clear view of non - dissipative impact mechanics. second we review, analyse and estimate the different sources of energy dissipation of the multi - physics phenomena that occur during the impact. finally, we propose several experiments to decide between gas compression, shell visco - elastic dissipation, solid friction, sound emission or shell vibrations as the major source of energy dissipation.
|
arxiv:1708.01282
|
we explore a deformation of the flat space symmetric space sigma model action. the deformed action is designed to allow a lax connection for the equations of motion, similar to the undeformed model. for this to work, we identify a set of constraints that the deformation operator, which is incorporated into the action, must fulfil. after defining the deformation, we explore simple solutions to these constraints and describe the resulting deformed backgrounds. specifically, we find flat space in cartesian coordinates with arbitrary constant $ h $ - flux or linear $ h $ - flux in a light cone coordinate. additionally, we find the nappi - witten background along with various nappi - witten - like backgrounds with near arbitrary constant $ h $ - flux. finally, we discuss the symmetries of the deformed models, finding that the deformed symmetries will always include a set of symmetries that in the undeformed limit becomes the total set of translations.
|
arxiv:2407.16853
|
we present the mechanism of interaction of the wnt network module, which is responsible for periodic somitogenesis, with the p53 regulatory network, which is one of the main regulators of various cellular functions, and the switching of various oscillating states by investigating a p53 - wnt model. the variation in nutlin concentration in the p53 regulating network drives the wnt network module to different states, namely stabilized, damped and sustained oscillation states, and even to cycle arrest. similarly, the change in axin concentration in wnt can modulate the p53 dynamics at these states. we then solve the set of coupled ordinary differential equations of the model using the quasi steady state approximation. we further demonstrate that a change in the p53 - gsk3 interaction rate, due to a hypothetical catalytic reaction or external stimuli, can regulate the dynamics of the two network modules, and can even control their dynamics to protect the system from cycle arrest ( apoptosis ).
|
arxiv:1503.04732
|
differential cross sections for elastic scattering of photons from the deuteron have recently been measured at the tagged - photon facility at the max iv laboratory in lund, sweden. these first new measurements in more than a decade further constrain the isoscalar electromagnetic polarizabilities of the nucleon and provide the first - ever results above 100 mev, where the sensitivity to the polarizabilities is increased. we add 23 points between 70 and 112 mev, at angles 60deg, 120deg and 150deg. analysis of these data using a chiral effective field theory indicates that the cross sections are both self - consistent and consistent with previous measurements. extracted values of \ alpha _ s = [ 12. 1 + / - 0. 8 ( stat ) + / - 0. 2 ( bsr ) + / - 0. 8 ( th ) ] x 10 ^ { - 4 } fm ^ 3 and \ beta _ s = [ 2. 4 + / - 0. 8 ( stat ) + / - 0. 2 ( bsr ) + / - 0. 8 ( th ) ] x 10 ^ { - 4 } fm ^ 3 are obtained from a fit to these 23 new data points. this paper presents in detail the experimental conditions and the data analysis used to extract the cross sections.
|
arxiv:1503.08094
|
we show that gaussian process regression ( gpr ) can be used to infer the electromagnetic ( em ) duct height within the marine atmospheric boundary layer ( mabl ) from sparsely sampled propagation factors within the context of bistatic radars. we use gpr to calculate the posterior predictive distribution on the labels ( i. e. duct height ) from both noise - free and noise - contaminated array of propagation factors. for duct height inference from noise - contaminated propagation factors, we compare a naive approach, utilizing one random sample from the input distribution ( i. e. disregarding the input noise ), with an inverse - variance weighted approach, utilizing a few random samples to estimate the true predictive distribution. the resulting posterior predictive distributions from these two approaches are compared to a " ground truth " distribution, which is approximated using a large number of monte - carlo samples. the ability of gpr to yield accurate and fast duct height predictions using a few training examples indicates the suitability of the proposed method for real - time applications.
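the core computation, the gp posterior predictive distribution on the labels, can be sketched as standard rbf - kernel regression ( the kernel choice, hyperparameters and variable names below are illustrative assumptions, not the paper ' s settings ) :

```python
import numpy as np

def gpr_posterior(X, y, Xs, length=1.0, sigma_f=1.0, sigma_n=1e-2):
    """gaussian process regression posterior mean and variance with an
    rbf kernel. here X would hold propagation-factor features and y the
    duct-height labels; Xs are the query points."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return sigma_f**2 * np.exp(-0.5 * d**2 / length**2)
    K = k(X, X) + sigma_n**2 * np.eye(len(X))   # noisy training covariance
    Ks = k(Xs, X)                               # test/train cross-covariance
    alpha = np.linalg.solve(K, y)
    mean = Ks @ alpha
    var = sigma_f**2 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var
```

the inverse - variance weighting discussed in the abstract would then combine several such predictive distributions, one per random sample of the noisy input.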
|
arxiv:1905.10653
|
we selected a sample of 33 gamma ray bursts ( grbs ) detected by swift, with known redshift and optical extinction at the host frame. for these, we constructed the de - absorbed and k - corrected x - ray and optical rest frame light curves. these are modelled as the sum of two components : emission from the forward shock due to the interaction of a fireball with the circum - burst medium and an additional component, treated in a completely phenomenological way. the latter can be identified, among other possibilities, as " late prompt " emission produced by a long lived central engine with mechanisms similar to those responsible for the production of the " standard " early prompt radiation. apart from flares or re - brightenings, that we do not model, we find a good agreement with the data, despite their complexity and diversity. although based in part on a phenomenological model with a relatively large number of free parameters, we believe that our findings are a first step towards the construction of a more physical scenario. our approach allows us to interpret the behaviour of the optical and x - ray afterglows in a coherent way, by a relatively simple scenario. within this context it is possible to explain why sometimes no jet break is observed ; why, even if a jet break is observed, it is often chromatic ; why the steepening after the jet break time is often shallower than predicted. finally, the decay slope of the late prompt emission after the shallow phase is found to be remarkably similar to the time profile expected by the accretion rate of fall - back material ( i. e. proportional to t ^ { - 5 / 3 } ), suggesting that this can be the reason why the central engine can be active for a long time.
|
arxiv:0811.1038
|
various interacting lattice path models of polymer collapse in two dimensions demonstrate different critical behaviours. this difference has been without a clear explanation. the collapse transition has been variously seen to be in the duplantier - saleur $ \ theta $ - point universality class ( specific heat cusp ), the interacting trail class ( specific heat divergence ) or even first - order. here we study via monte carlo simulation a generalisation of the duplantier - saleur model on the honeycomb lattice and also a generalisation of the so - called vertex - interacting self - avoiding walk model ( configurations are actually restricted trails known as grooves ) on the triangular lattice. crucially for both models we have three and two body interactions explicitly and differentially weighted. we show that both models have similar phase diagrams when considered in these larger two - parameter spaces. they demonstrate regions for which the collapse transition is first - order for high three body interactions and regions where the collapse is in the duplantier - saleur $ \ theta $ - point universality class. we conjecture a higher order multicritical point separating these two types of collapse.
|
arxiv:1510.06891
|
in this paper we introduce a hands - on activity in which introductory astronomy students act as gravitational wave astronomers by extracting information from simulated gravitational wave signals. the process mimics the way true gravitational wave analysis will be handled by using plots of a pure gravitational wave signal. the students directly measure the properties of the simulated signal, and use these measurements to evaluate standard formulae for astrophysical source parameters. an exercise based on the discussion in this paper has been written and made publicly available online for use in introductory laboratory courses.
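one standard formula such an exercise can evaluate is the leading - order ( quadrupole ) chirp mass, obtained from the measured gravitational - wave frequency and its drift ; the constants and variable names below are ours, and the exercise ' s exact worksheet formulae may differ :

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def chirp_mass(f, fdot):
    """leading-order chirp mass [kg] from the gravitational-wave frequency
    f [Hz] and its time derivative fdot [Hz/s], both of which a student
    could read off a plotted signal."""
    return (C**3 / G) * ((5.0 / 96.0) * math.pi**(-8.0 / 3.0)
                         * f**(-11.0 / 3.0) * fdot) ** (3.0 / 5.0)
```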
|
arxiv:physics/0610028
|
one of the main atmospheric features in exoplanet atmospheres, detectable both from ground - and space - based facilities, is rayleigh scattering. in hydrogen - dominated planetary atmospheres, rayleigh scattering causes the measured planetary radius to increase toward blue wavelengths in the optical range. we obtained a spectrophotometric time series of one transit of the saturn - mass planet wasp - 69b using the osiris instrument at the gran telescopio canarias. from the data we constructed 19 spectroscopic transit light curves representing 20 nm wide wavelength bins spanning from 515 nm to 905 nm. we derived the transit depth for each curve individually by fitting an analytical model together with a gaussian process to account for systematic noise in the light curves. we find that the transit depth increases toward bluer wavelengths, indicative of a larger effective planet radius. our results are consistent with space - based measurements obtained in the near infrared using the hubble space telescope, which show a compatible slope of the transmission spectrum. we discuss the origin of the detected slope and argue between two possible scenarios : a rayleigh scattering detection originating in the planet ' s atmosphere or a stellar activity induced signal from the host star.
|
arxiv:2007.02741
|
all known solutions to the einstein equations describing rotating cylindrical wormholes lack asymptotic flatness and therefore cannot describe wormhole entrances as local objects in our universe. to overcome this difficulty, wormhole solutions are joined to flat asymptotic regions at some surfaces $ \ sigma _ - $ and $ \ sigma _ + $. the whole configuration thus consists of three regions, the internal one containing a wormhole throat, and two flat external ones, considered in rotating reference frames. using a special kind of anisotropic fluid respecting the weak energy condition ( wec ) as a source of gravity in the internal region, we show that the parameters of this configuration can be chosen in such a way that matter on both junction surfaces $ \ sigma _ - $ and $ \ sigma _ + $ also respects the wec. closed timelike curves are shown to be absent by construction in the whole configuration. it seems to be the first example of regular twice asymptotically flat wormholes without exotic matter and without closed timelike curves, obtained in general relativity.
|
arxiv:1807.03641
|
deformable registration is a crucial step in many medical procedures such as image - guided surgery and radiation therapy. most recent learning - based methods focus on improving the accuracy by optimizing the non - linear spatial correspondence between the input images. therefore, these methods are computationally expensive and require modern graphic cards for real - time deployment. in this paper, we introduce a new light - weight deformable registration network that significantly reduces the computational cost while achieving competitive accuracy. in particular, we propose a new adversarial learning with distilling knowledge algorithm that successfully leverages meaningful information from the effective but expensive teacher network to the student network. we design the student network such that it is light - weight and well suited for deployment on a typical cpu. the extensive experimental results on different public datasets show that our proposed method achieves state - of - the - art accuracy while being significantly faster than recent methods. we further show that the use of our adversarial learning algorithm is essential for a time - efficient deformable registration method. finally, our source code and trained models are available at : https : / / github. com / aioz - ai / ldr _ aldk.
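the distillation part of such a teacher - student setup can be sketched with the generic temperature - softened kl objective ; the paper ' s actual adversarial - distillation loss is not specified in the abstract, so the function names and the temperature value here are assumptions :

```python
import math

def softmax(z, T=1.0):
    """numerically stable softmax with temperature T."""
    m = max(z)
    e = [math.exp((v - m) / T) for v in z]
    s = sum(e)
    return [v / s for v in e]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened outputs, the generic
    knowledge-distillation objective the student is trained against."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```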
|
arxiv:2110.01293
|
this is a contribution to the review " 50 years of quantum chromodynamics " edited by f. gross and e. klempt [ arxiv : 2212. 11107 ], to be published in epjc. the contribution reviews the properties of baryons with one heavy flavor : the lifetimes of ground states and the spectrum of excited states. the importance of symmetries for understanding the excitation spectrum is underlined. an overview of searches for pentaquarks is given.
|
arxiv:2211.12897
|
we consider the born - oppenheimer problem near conical intersection in two dimensions. for energies close to the crossing energy we describe the wave function near an isotropic crossing and show that it is related to generalized hypergeometric functions 0f3. this function is to a conical intersection what the airy function is to a classical turning point. as an application we calculate the anomalous zeeman shift of vibrational levels near a crossing.
|
arxiv:quant-ph/9911121
|
we report a detailed magneto - transport study in single crystals of nbp. high quality crystals were grown by vapour transport method. an exceptionally large magnetoresistance is confirmed at low temperature which is non - saturating and is linear at high fields. models explaining the linear magnetoresistance are discussed and it is argued that in nbp this is linked to charge carrier mobility fluctuations. negative longitudinal magnetoresistance is not seen, unlike several other weyl monopnictides, suggesting lack of well defined chiral anomaly in nbp. unambiguous shubnikov - de - haas oscillations are observed at low temperatures that are correlated to berry phases. the landau fan diagram indicates trivial berry phase in nbp crystals corresponding to fermi surface extrema at 30. 5 tesla.
|
arxiv:1608.06587
|
the growth - rate function for a minor - closed class $ \ mathcal { m } $ of matroids is the function $ h $ where, for each non - negative integer $ r $, $ h ( r ) $ is the maximum number of elements of a simple matroid in $ \ mathcal { m } $ with rank at most $ r $. the growth - rate theorem of geelen, kabell, kung, and whittle shows, essentially, that the growth - rate function is always either linear, quadratic, exponential, or infinite. moreover, if the growth - rate function is quadratic, then $ h ( r ) \ ge \ binom { r + 1 } { 2 } $, with the lower bound coming from the fact that such classes necessarily contain all graphic matroids. we characterise the classes that satisfy $ h ( r ) = \ binom { r + 1 } { 2 } $ for all sufficiently large $ r $.
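the quadratic lower bound $ h ( r ) \ ge \ binom { r + 1 } { 2 } $ comes from graphic matroids : the cycle matroid of the complete graph on $ r + 1 $ vertices is simple, has rank $ r $, and has one element per edge. a quick check of the count :

```python
from math import comb

def graphic_growth(r):
    """number of elements of the rank-r cycle matroid of the complete
    graph K_{r+1}, i.e. the number of edges of K_{r+1}: C(r+1, 2)."""
    return comb(r + 1, 2)
```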
|
arxiv:1409.0777
|
this paper shows that masked autoencoders ( mae ) are scalable self - supervised learners for computer vision. our mae approach is simple : we mask random patches of the input image and reconstruct the missing pixels. it is based on two core designs. first, we develop an asymmetric encoder - decoder architecture, with an encoder that operates only on the visible subset of patches ( without mask tokens ), along with a lightweight decoder that reconstructs the original image from the latent representation and mask tokens. second, we find that masking a high proportion of the input image, e. g., 75 %, yields a nontrivial and meaningful self - supervisory task. coupling these two designs enables us to train large models efficiently and effectively : we accelerate training ( by 3x or more ) and improve accuracy. our scalable approach allows for learning high - capacity models that generalize well : e. g., a vanilla vit - huge model achieves the best accuracy ( 87. 8 % ) among methods that use only imagenet - 1k data. transfer performance in downstream tasks outperforms supervised pre - training and shows promising scaling behavior.
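the first core design can be sketched at the index level : sample a random subset of patch indices for the encoder and mask the rest. a minimal sketch, not the paper ' s implementation ( which works on batched image tensors ) :

```python
import random

def random_masking(num_patches, mask_ratio=0.75, seed=0):
    """mae-style random patch masking: return the sorted visible indices
    (the only patches the encoder sees) and the sorted masked indices
    (reconstructed by the lightweight decoder)."""
    rng = random.Random(seed)
    idx = list(range(num_patches))
    rng.shuffle(idx)
    num_keep = int(num_patches * (1 - mask_ratio))
    visible = sorted(idx[:num_keep])
    masked = sorted(idx[num_keep:])
    return visible, masked
```

with a 14 x 14 patch grid ( 196 patches ) and the 75 % ratio reported in the abstract, the encoder only processes 49 patches, which is where the training speedup comes from.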
|
arxiv:2111.06377
|